% Source: https://arxiv.org/abs/1710.03745

\title{Erd\H{o}s-Hajnal conjecture for graphs with bounded VC-dimension}
\maketitle

\begin{abstract}
The Vapnik-Chervonenkis dimension (in short, VC-dimension) of a graph is defined as the VC-dimension of the set system induced by the neighborhoods of its vertices. We show that every $n$-vertex graph with bounded VC-dimension contains a clique or an independent set of size at least $e^{(\log n)^{1 - o(1)}}$. The dependence on the VC-dimension is hidden in the $o(1)$ term. This improves the general lower bound, $e^{c\sqrt{\log n}}$, due to Erd\H{o}s and Hajnal, which is valid in the class of graphs satisfying any fixed nontrivial hereditary property. Our result is almost optimal and nearly matches the celebrated Erd\H{o}s-Hajnal conjecture, according to which one can always find a clique or an independent set of size at least $e^{\Omega(\log n)}$. Our results partially explain why most geometric intersection graphs arising in discrete and computational geometry have exceptionally favorable Ramsey-type properties. Our main tool is a partitioning result found by Lov\'asz-Szegedy and Alon-Fischer-Newman, which is called the ``ultra-strong regularity lemma'' for graphs with bounded VC-dimension. We extend this lemma to $k$-uniform hypergraphs, and prove that the number of parts in the partition can be taken to be $(1/\varepsilon)^{O(d)}$, improving the original bound of $(1/\varepsilon)^{O(d^2)}$ in the graph setting. We show that this bound is tight up to an absolute constant factor in the exponent. Moreover, we give an $O(n^k)$-time algorithm for finding a partition meeting the requirements. Finally, we establish tight bounds on Ramsey-Tur\'an numbers for graphs with bounded VC-dimension.
\end{abstract}

\section{Introduction}
During the relatively short history of computational geometry, there were many breakthroughs that originated from results in extremal combinatorics \cite{GRT17}. Range searching turned out to be closely related to discrepancy theory \cite{Ch00}, linear programming to McMullen's Upper Bound theorem and to properties of the facial structure of simplicial complexes \cite{St96}, motion planning to the theory of Davenport-Schinzel sequences and to a wide variety of other forbidden configuration results \cite{ShA95}, graph drawing and VLSI design to the crossing lemma, to the Szemer\'edi-Trotter theorem, and to flag algebras \cite{Ta13}. A particularly significant example that found many applications in discrete and computational geometry, was the discovery of Haussler and Welzl \cite{HW87}, according to which many geometrically defined set systems have bounded Vapnik-Chervonenkis dimension. Erd\H os's ``Probabilistic Method'' \cite{AS} or ``Random Sampling'' techniques, as they are often referred to in computational context, had been observed to be ``unreasonably effective'' in discrete geometry and geometric approximation algorithms \cite{H11}. Haussler and Welzl offered an explanation and a tool: set systems of bounded Vapnik-Chervonenkis dimension admit much smaller hitting sets and ``epsilon-nets'' than other set systems with similar parameters.
It was also observed a long time ago that geometrically defined graphs and set systems have unusually strong Ramsey-type properties. According to the quantitative version of Ramsey's theorem, due to Erd\H os and Szekeres \cite{es}, every graph on $n$ vertices contains a clique or an independent set of size at least $\frac{1}{2}\log n$. In \cite{erdos2}, Erd\H os proved that this bound is tight up to a constant factor. However, every intersection graph of $n$ segments in the plane, say, has a much larger clique or an independent set, whose size is at least $n^{\varepsilon}$ for some $\varepsilon>0$ \cite{LMPT}. The proof extends to intersection graphs of many other geometric objects \cite{alon}. Interestingly, most classes of graphs and hypergraphs in which a similar phenomenon has been observed turned out to have (again!) bounded Vapnik-Chervonenkis dimension. (We will discuss this fact in a little more detail at the end of the Introduction.)
The problem can be viewed as a special case of a celebrated conjecture of Erd\H os and Hajnal \cite{hajnal}, which is one of the most challenging open problems in Ramsey theory. Let $P$ be a {\em hereditary} property of finite graphs, that is, if $G$ has property $P$, then so do all of its induced subgraphs. Erd\H os and Hajnal conjectured that for every hereditary property $P$ which is not satisfied by all graphs, there exists a constant $\varepsilon(P)>0$ such that every graph of $n$ vertices with property $P$ has a clique or an independent set of size at least $n^{\varepsilon(P)}$. They proved the weaker lower bound $e^{\varepsilon(P)\sqrt{\log n}}$. According to the discovery of Haussler and Welzl mentioned above, the Vapnik-Chervonenkis dimension of most classes of ``naturally'' defined graphs arising in geometry is bounded from above by a constant $d$. The property that the Vapnik-Chervonenkis dimension of a graph is at most $d$, is hereditary.
The aim of this paper is to investigate whether the observation that the Erd\H os-Hajnal conjecture tends to hold for geometrically defined graphs can be ascribed to the fact that they have bounded VC-dimension. Our first theorem (Theorem 1 below) shows that the answer to this question is likely to be positive. To continue, we need to agree on the basic definitions and terminology.
Let $\mathcal{F}$ be a set system on a ground set $V$. The {\em Vapnik-Chervonenkis dimension} ({\em VC-dimension}, for short) of $\mathcal{F}$ is the {\em largest} integer $d$ for which there exists a $d$-element set $S\subset V$ such that for every subset $B\subset S$, one can find a member $A\in \mathcal{F}$ with $A\cap S=B$. Given a graph $G = (V,E)$, for any vertex $v \in V$, let $N(v)$ denote the neighborhood of $v$ in $G$, that is, the set of vertices in $V$ that are adjacent to $v$. We note that $v$ itself is not in $N(v)$. Then we say that $G$ has \emph{VC-dimension}~$d$ if the set system induced by the neighborhoods in $G$, i.e., $\mathcal{F} = \{N(v) \subset V: v \in V\}$, has VC-dimension $d$. Let us remark that although the edges of $G$ also form a 2-uniform set system $\mathcal{F}' = \{e\in E(G)\}$, the VC-dimension of $G$ defined above is usually different from the VC-dimension of $\mathcal{F}'$.
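For very small graphs, the definition above can be checked directly by exhaustive search. The following sketch (our own illustrative helper, exponential in the number of vertices, and unrelated to the faster algorithms of Kranakis et al.\ discussed below) computes the VC-dimension of the neighborhood set system of a graph.

```python
from itertools import combinations

def graph_vc_dimension(n, edges):
    """Brute-force VC-dimension of the neighborhood set system of a graph.

    Vertices are 0..n-1; `edges` is an iterable of vertex pairs.
    Only feasible for tiny graphs (exponential search over subsets S)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)  # N(v) excludes v itself, as in the text
    neighborhoods = [frozenset(adj[v]) for v in range(n)]
    # Search for the largest shattered set S, from large to small.
    for d in range(n, 0, -1):
        for S in combinations(range(n), d):
            traces = {frozenset(N & set(S)) for N in neighborhoods}
            if len(traces) == 2 ** d:  # every subset of S is a trace
                return d
    return 0
```

For example, the path on four vertices has VC-dimension 1, while a graph whose neighborhoods realize all four subsets of a fixed pair has VC-dimension 2.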
The VC-dimension of a set system is one of the most useful combinatorial parameters that measures its complexity, and, apart from its geometric applications, it has proved to be relevant in many other branches of pure and applied mathematics, such as statistics, logic, learning theory, and real algebraic geometry. The notion was introduced by Vapnik and Chervonenkis~\cite{VC71} in 1971, as a tool in mathematical statistics. Kranakis et al.~\cite{KKRUW} observed that the VC-dimension of a graph can be determined in quasi-polynomial time and, for bounded degree graphs, in quadratic time. Schaefer \cite{Sch99}, addressing a question of Linial, proved that determining the VC-dimension of a set system is $\Sigma_3^p$-complete. For each positive integer $d$, Anthony, Brightwell, and Cooper \cite{ABC95} determined the threshold for the Erd\H{o}s-R\'enyi random graph $G(n,p)$ to have VC-dimension $d$ (see also~\cite{KPW92}). Given a bipartite graph $F$, its \emph{closure} is defined as the set of all graphs that can be obtained from $F$ by adding edges between two vertices in the same part. It is known (see \cite{LS}) that a class of graphs has bounded VC-dimension if and only if none of its members contains any induced subgraph that belongs to the closure of some fixed bipartite graph $F$.
Our first result states that the Erd\H os-Hajnal conjecture ``almost holds'' for graphs of bounded VC-dimension.
\begin{theorem}\label{jacob}
Let $d$ be a fixed positive integer. If $G$ is an $n$-vertex graph with VC-dimension at most $d$, then $G$ contains a clique or independent set of size $e^{(\log n)^{1 - o(1)}}$.
\end{theorem}
\noindent Note that the dependence of the bound on $d$ is hidden in the $o(1)$-notation.
There has been a long history of studying off-diagonal Ramsey numbers, where one is interested in finding the maximum size of an independent set guaranteed in a $K_s$-free graph on $n$ vertices with $s$ fixed. An old result of Ajtai, Koml\'os, and Szemer\'edi \cite{AKS} states that all such graphs contain independent sets of size $cn^{\frac{1}{s-1}}(\log n)^{\frac{s-2}{s-1}}$. In the other direction, Spencer \cite{Sp} used the Lov\'asz Local Lemma to show that there are $K_s$-free graphs on $n$ vertices and with no independent set of size $c'n^{\frac{2}{s+1}}\log n$. This bound was later improved by Bohman and Keevash~\cite{BK} to $c'n^{\frac{2}{s+1}}(\log n)^{1 - \frac{2}{(s + 1)(s - 2)}}$. In Section \ref{lll}, we give a simple proof, extending Spencer's argument, showing that there are $K_s$-free graphs with bounded VC-dimension and with no large independent sets.
\begin{theorem}\label{offdiag}
For fixed $s\geq 3$ and $d \geq 5$ such that $d \geq s+2$, there exists a $K_s$-free graph on $n$ vertices with VC-dimension at most $d$ and no independent set of size $cn^{\frac{2}{s+1}}\log n$, where $c = c(d)$.
\end{theorem}
\noindent For large $s$ ($s > d$), a result of Fox and Sudakov (Theorem 1.9 in \cite{FS}) implies that all $n$-vertex $K_s$-free graphs $G$ with VC-dimension $d$ contain an independent set of size $n^{\frac{1}{c\log s}}$ where $c = c(d)$.
\bigskip
\noindent \textbf{Regularity lemma for hypergraphs with bounded VC-dimension.} First, we generalize the definition of VC-dimension for graphs to {\em hypergraphs}. Given a $k$-uniform hypergraph $H = (V,E)$, for any $(k-1)$-tuple of distinct vertices $v_1,\ldots, v_{k-1} \in V$, let $$N(v_1,\ldots, v_{k-1}) = \{u \in V: \{v_1,\ldots, v_{k-1},u\} \in E(H)\}.$$ Then we say that {\em $H$ has VC-dimension $d$}, if the set system $$\mathcal{F} = \{N(v_1,\ldots, v_{k-1})\subset V: v_1,\ldots, v_{k-1} \in V\}$$ has VC-dimension $d$. Of course, the hyperedges of $H$ form a set system, but the VC-dimension of this set system is usually {\em different} from the VC-dimension of $H$ defined above. The latter one is defined as the VC-dimension of the set system $\mathcal{F}$ induced by the neighborhoods of the vertices of $H$, rather than by the hyperedges.
The \emph{dual} of the set system $(V,\mathcal{F})$ on the ground set $V$ is the set system obtained by interchanging the roles of $V$ and $\mathcal{F}$. That is, it is the set system $(\mathcal{F},\mathcal{F}^{\ast})$, where the ground set is $\mathcal{F}$ and
$$\mathcal{F}^{\ast} = \{\{A\in \mathcal{F}: v\in A\}:v \in V\}.$$
In other words, $\mathcal{F}^{\ast}$ is isomorphic to the set system whose ground set is ${V\choose k-1}$, and each set is a maximal collection of $(k-1)$-tuples $\{S_1,\ldots, S_p\}$ such that for all $i$, $v\cup S_i \in E(H)$ for some fixed $v$. Hence, we have $(\mathcal{F}^{\ast})^{\ast} = \mathcal{F}$, and it is known that if $\mathcal{F}$ has VC-dimension $d$, then $\mathcal{F}^{\ast}$ has VC-dimension at most $2^{d+1}$. We say that $H =(V,E)$ has \emph{dual VC-dimension} $d$ if $\mathcal{F}^{\ast}$ has VC-dimension $d$.
The main tool used to prove Theorem \ref{jacob} is an ultra-strong regularity lemma for graphs with bounded VC-dimension obtained by Lov\'asz and Szegedy \cite{LS} and Alon, Fischer, and Newman \cite{AFN}. Here, we extend the ultra-strong regularity lemma to uniform hypergraphs.
Given $k$ vertex subsets $V_1,\ldots, V_k$ of a $k$-uniform hypergraph $H$, we write $E(V_1,\ldots, V_k)$ to be the set of edges going across $V_1,\ldots, V_k$, that is, the set of edges with exactly one vertex in each $V_i$. The \emph{density} across $V_1,\ldots, V_k$ is defined as $\frac{|E(V_1,\ldots, V_k)|}{|V_1|\cdots |V_k|}$. We say that the $k$-tuple $(V_1,\ldots, V_k)$ is \emph{$\varepsilon$-homogeneous} if the density across it is less than $\varepsilon$ or greater than $1-\varepsilon$. A partition is called {\em equitable} if any two parts differ in size by at most one.
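These definitions are straightforward to evaluate on explicit (small) hypergraphs. Below is a minimal Python sketch, under the assumption that hyperedges are given as a set of frozensets of vertices; the function names are our own.

```python
from itertools import product

def density(edges, parts):
    """Density across parts V_1,...,V_k: the fraction of transversal
    k-tuples (one vertex from each part) that form a hyperedge."""
    total = 1
    for P in parts:
        total *= len(P)
    hits = sum(1 for combo in product(*parts) if frozenset(combo) in edges)
    return hits / total

def is_eps_homogeneous(edges, parts, eps):
    """A k-tuple of parts is eps-homogeneous if its density is
    below eps or above 1 - eps."""
    d = density(edges, parts)
    return d < eps or d > 1 - eps
```

For instance, with parts $\{0,1\}$ and $\{2,3\}$ and exactly two of the four crossing pairs present, the density is $1/2$, so the pair is not $\varepsilon$-homogeneous for any small $\varepsilon$.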
In \cite{LS}, Lov\'asz and Szegedy established an \emph{ultra-strong} regularity lemma for graphs ($k = 2$) with bounded VC-dimension, which states that for any $\varepsilon > 0$, there is a (least) $K = K(\varepsilon)$ such that the vertex set $V$ of a graph with VC-dimension $d$ has a partition into at most $K\leq (1/\varepsilon)^{O(d^2)}$ parts such that all but at most an $\varepsilon$-fraction of the pairs of parts are $\varepsilon$-homogeneous. A better bound was obtained by Alon, Fischer, and Newman \cite{AFN} for bipartite graphs with bounded VC-dimension, who showed that the number of parts in the partition can be taken to be $(d/\varepsilon)^{O(d)}$. Since adjacency in a graph is symmetric, the neighborhood set system of a graph is isomorphic to its dual, so the VC-dimension of a graph $G$ coincides with its dual VC-dimension; we therefore generalize their result to hypergraphs as follows.
\begin{theorem}\label{reg1}
Let $\varepsilon \in (0,1/4)$ and let $H = (V,E)$ be an $n$-vertex $k$-uniform hypergraph with dual VC-dimension $d$. Then $V$ has an equitable partition $V = V_1\cup\cdots\cup V_K$ with $8/\varepsilon \leq K \leq c(1/\varepsilon)^{2d + 1}$ parts such that all but an $\varepsilon$-fraction of the $k$-tuples of parts are $\varepsilon$-homogeneous. Here $c = c(d,k)$ is a constant depending only on $d$ and $k$. Moreover, there is an $O(n^k)$ time algorithm for computing such a partition.
\end{theorem}
Our next result shows that the partition size in the theorem above is tight up to an absolute constant factor in the exponent.
\begin{theorem}\label{tight}
For $d\geq 16$ and $\varepsilon \in (0,1/100)$, there is a graph $G$ with VC-dimension $d$ such that any equitable vertex partition on $G$ with the property that all but an $\varepsilon$-fraction of the pairs of parts are $\varepsilon$-homogeneous, requires at least $(5\varepsilon)^{-d/4}$ parts.
\end{theorem}
\bigskip
\noindent \textbf{Ramsey-Tur\'an numbers.} Let $F$ be a fixed graph. The Ramsey-Tur\'an number $\mathbf{RT}(n,F,o(n))$ is the maximum number of edges an $n$-vertex graph $G$ can have if it contains no copy of $F$ as a subgraph and its independence number is $o(n)$. Ramsey-Tur\'an numbers were introduced by S\'os \cite{sos}, motivated by the classical theorems of Ramsey and Tur\'an and their connections to geometry, analysis, and number theory. One of the earliest results in Ramsey-Tur\'an theory appeared in \cite{ES70}. It states that for $p\geq 2$, we have
$$\mathbf{RT}(n,K_{2p-1},o(n)) = \frac{1}{2}\left(1 - \frac{1}{p-1}\right)n^2 + o(n^2).$$
\noindent For the case when the excluded clique has an even number of vertices, Szemer\'edi \cite{Sz72} applied the graph regularity lemma to show that
$$\mathbf{RT}(n,K_4,o(n)) \leq \frac{1}{8}n^2 + o(n^2),$$
\noindent and several years later, Bollob\'as--Erd\H{o}s \cite{BE76} gave a surprising geometric construction which shows that this bound is tight. For larger cliques, a result of Erd\H os, Hajnal, S\'os, and Szemer\'edi \cite{EHSS} states that
$$\mathbf{RT}(n,K_{2p},o(n)) = \frac{1}{2}\left(1-\frac{3}{3p-2}\right)n^2 + o(n^2)$$
\noindent holds for every $p\ge 2$. For more results in Ramsey-Tur\'an theory, see the survey of Simonovits and S\'os \cite{SS}.
Here we give tight bounds on Ramsey-Tur\'an numbers for graphs with bounded VC-dimension, showing that the densities for $K_{2p}$ and for $K_{2p-1}$ are the same in this setting, and are different from what we have in the classical setting in the even case.
Let $\mathbf{RT}_{d}(n,K_p,o(n))$ be the maximum number of edges that an $n$-vertex $K_p$-free graph of VC-dimension at most $d$ can have if its independence number is $o(n)$.
\begin{theorem}\label{rt}
For fixed integers $d\geq 4$ and $p \geq 3$, we have
$$\mathbf{RT}_{d}(n,K_{2p-1},o(n)) = \mathbf{RT}_{d}(n,K_{2p},o(n)) = \frac{1}{2}\left(1 - \frac{1}{p-1}\right)n^2 + o(n^2).$$
\end{theorem}
\bigskip
\noindent \textbf{Semi-algebraic graphs vs. graphs with bounded VC-dimension.} A \emph{semi-algebraic} graph $G$ is a graph whose vertices are points in $\RR^d$ and whose edges are pairs of points that satisfy a semi-algebraic relation of constant complexity.\footnote{A binary semi-algebraic relation $E$ on a point set $P\subset \RR^d$ is the set of pairs of points $(p,q)$ from $P$ whose $2d$ coordinates satisfy a boolean combination of a fixed number of polynomial inequalities.} In a sequence of recent works \cite{alon,CFPSS,FPS14}, several authors have shown that classical Ramsey and Tur\'an-type results in combinatorics can be significantly improved for semi-algebraic graphs.
It follows from the Milnor-Thom theorem (see \cite{mat}) that semi-algebraic graphs of bounded complexity have bounded VC-dimension. Therefore, all results in this paper on properties of graphs of bounded VC-dimension apply to semi-algebraic graphs of bounded description complexity. However, a graph being semi-algebraic of bounded complexity is a much more restrictive condition than having bounded VC-dimension. In particular, it is known (it follows, e.g., from \cite{ABC95}) that for each $\varepsilon>0$ there is a positive integer $d=d(\varepsilon)$ such that the number of $n$-vertex graphs with VC-dimension $d$ is $2^{\Omega(n^{2-\varepsilon})}$, while the Milnor-Thom theorem can be used to deduce that the number of $n$-vertex semi-algebraic graphs coming from a relation with bounded ``description complexity'' is only $2^{O(n\log n)}$. Furthermore, it is known \cite{alon} that semi-algebraic graphs have the {\em strong Erd\H os-Hajnal property}, that is, there exists a constant $\delta>0$ such that every $n$-vertex semi-algebraic graph of bounded complexity contains a complete or an empty {\em bipartite} graph whose parts are of size at least $\delta n$. This is not true, in general, for graphs with bounded VC-dimension. In particular, the probabilistic construction in Section \ref{lll} shows the following.
\begin{theorem}\label{notstrongeh}
For fixed $d \geq 5$ and for every sufficiently large $n$, there is an $n$-vertex graph $G = (V,E)$ with VC-dimension at most $d$ with the property that there are no two disjoint subsets $A,B\subset V(G)$ such that $|A|,|B| \geq 4n^{4/d}\log n$ and $(A,B)$ is homogeneous, that is, either $A\times B \subset E(G)$ or $(A\times B) \cap E(G) = \emptyset$.
\end{theorem}
It follows from a result of Alon \emph{et al.}~\cite{alon} that a stronger regularity lemma holds for semi-algebraic graphs of bounded description complexity, where all but an $\varepsilon$-fraction of the pairs of parts in the equitable partition are complete or empty, instead of just $\varepsilon$-homogeneous as in the bounded VC-dimension case (see \cite{PS01}). This result was further extended to $k$-uniform hypergraphs by Fox et al.~\cite{FGLNP}, and the authors \cite{FPS14} recently showed that it holds with a polynomial number of parts.
\medskip
\noindent \textbf{Organization.} In the next section, we prove Theorem \ref{reg1}. In Section \ref{graphramsey}, we prove Theorem~\ref{jacob}, which nearly settles the Erd\H os-Hajnal conjecture for graphs with bounded VC-dimension. In Section~\ref{lll}, we prove Theorems \ref{offdiag} and \ref{notstrongeh}. In Section \ref{ramseyturan}, we prove Theorem \ref{rt}. We conclude by discussing a number of other results for graphs and hypergraphs with bounded VC-dimension. For the sake of clarity of presentation, we systematically omit floors and ceilings whenever they are not crucial. All logarithms are natural logarithms.
\section{Regularity partition for hypergraphs with bounded VC-dimension}
In this section, we prove Theorem \ref{reg1}. We start by recalling several classic results on set systems with bounded VC-dimension. Let $\mathcal{F}$ be a set system on a ground set $V$. The \emph{primal shatter function} of $\mathcal{F}$ is defined as
\begin{equation*}
\pi_{\mathcal{F}}(z) = \max\limits_{V'\subset V, |V'| = z}|\{A\cap V':A \in \mathcal{F}\}|.
\end{equation*}
\noindent In other words, $\pi_{\mathcal{F}}(z)$ is the maximum possible number of distinct intersections of the sets of $\mathcal{F}$ with a $z$-element subset of $V$. The \emph{dual shatter function} of $(V,\mathcal{F})$, denoted by $\pi^{\ast}_{\mathcal{F}}$, is the function whose value at $z$ is the maximum number of equivalence classes on $V$ defined by a $z$-element subfamily $\mathcal{Y}\subset \mathcal{F}$, where two points $x,y \in V$ are {\em equivalent} with respect to $\mathcal{Y}$ if $x$ belongs to exactly the same sets of $\mathcal{Y}$ as $y$ does. In other words, the dual shatter function of $\mathcal{F}$ is the primal shatter function of the dual set system $\mathcal{F}^{\ast}$.
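Both shatter functions can be evaluated by brute force on small set systems; the following sketch (our own illustrative code, feasible only for tiny ground sets) mirrors the two definitions directly.

```python
from itertools import combinations

def primal_shatter(F, V, z):
    """pi_F(z): maximum number of distinct traces A cap V' of the sets
    A in F on a z-element subset V' of V."""
    return max(len({frozenset(A & set(Vp)) for A in F})
               for Vp in combinations(V, z))

def dual_shatter(F, V, z):
    """pi*_F(z): maximum number of equivalence classes of V determined by
    a z-element subfamily Y of F (x, y equivalent iff they lie in exactly
    the same sets of Y)."""
    return max(len({tuple(v in F[i] for i in Y) for v in V})
               for Y in combinations(range(len(F)), z))
```

For the system of singletons on a three-element ground set, both functions take the value $3$ at $z = 2$: two points can be separated into at most three trace patterns, and two singletons cut the ground set into three classes.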
The VC-dimension of $\mathcal{F}$ is closely related to its shatter functions. A famous result of Sauer \cite{Sa72}, Shelah \cite{Sh72}, Perles, and Vapnik-Chervonenkis \cite{VC71} states the following.
\begin{lemma}\label{sauer}
If $\mathcal{F}$ is a set system with VC-dimension $d$, then
$$\pi_{\mathcal{F}}(z) \leq \sum_{i=0}^{d}{z\choose i}.$$
\end{lemma}
\noindent On the other hand, suppose that the primal shatter function of $\mathcal{F}$ satisfies $\pi_{\mathcal{F}}(z) \leq cz^d$ for all $z$. Then, if the VC-dimension of $\mathcal{F}$ is $d_0$, we have $2^{d_0} \leq c(d_0)^d$, which implies $d_0 \leq 4d\log (cd)$. It is known that if $\mathcal{F}$ has VC-dimension $d$, then $\mathcal{F}^{\ast}$ has VC-dimension at most $2^{d+1}$.
Given two sets $A_1,A_2 \in \mathcal{F}$, the {\em symmetric difference} of $A_1$ and $A_2$, denoted by $A_1\triangle A_2$, is the set $(A_1\cup A_2)\setminus(A_1\cap A_2)$. We say that the set system $\mathcal{F}$ is $\delta$-\emph{separated} if for any two sets $A_1, A_2 \in \mathcal{F}$ we have $|A_1\triangle A_2| \geq \delta$. The following \emph{packing lemma} was proved by Haussler in \cite{H95}.
\begin{lemma}\label{packing}
Let $\mathcal{F}$ be a set system on a ground set $V$ such that $|V| = n$ and $\pi_{\mathcal{F}}(z) \leq cz^d$ for all $z$. If $\mathcal{F}$ is $\delta$-separated, then $|\mathcal{F}| \leq c_1(n/\delta)^d$ where $c_1 = c_1(c,d)$.
\end{lemma}
\noindent We will use Lemma \ref{packing} and the following lemma to prove Theorem \ref{reg1}.
\medskip
\begin{lemma}\label{pattern}
Let $0 < \varepsilon < 1/2$ and $H = (W_1\cup \cdots \cup W_k,E)$ be a $k$-partite $k$-uniform hypergraph such that $|W_i| = m$ for all $i$. If $(W_1,\ldots, W_k)$ is not $\varepsilon$-homogeneous, then there are at least $\varepsilon(1- \varepsilon) m^{k + 1}$ pairs of $k$-tuples $(e,e')$, where $|e\cap e'| = k-1$, $e \in E(H)$, $e'\not\in E(H)$, and $|e\cap W_i| = |e'\cap W_i| = 1$ for all $i$.
\end{lemma}
\begin{proof}
Let $\varepsilon_j$ be the fraction of pairs of $k$-tuples $(e,e')$, each containing one vertex in each $W_i$, agreeing on all vertices except the one in $W_j$, and such that $e$ is an edge and $e'$ is not an edge. It suffices to show that $\varepsilon_1 + \varepsilon_2 + \cdots + \varepsilon_k\geq \varepsilon(1-\varepsilon)$.
Pick vertices $a_i,b_i \in W_i$ uniformly at random with repetition for $i = 1,2,\ldots, k$. For $0 \leq i \leq k$, let $e_i = \{a_j: j \leq i\} \cup \{b_j : j > i\}$. In particular, $e_k = (a_1,\ldots, a_k)$ and $e_0 = (b_1,\ldots, b_k)$. Then let $X$ be the event that $e_0$ and $e_k$ have different adjacency, that is, $e_0$ is an edge and $e_k$ is not an edge, or $e_0$ is not an edge and $e_k$ is an edge. Then we have
$$Pr[X] \geq 2\varepsilon(1 - \varepsilon),$$
\noindent since $(W_1,\ldots, W_k)$ is not $\varepsilon$-homogeneous. Let $X_i$ be the event that $e_i$ and $e_{i + 1}$ have different adjacency, and let $Y$ be the event that at least one event $X_i$ occurs. Then by the union bound, we have
$$Pr[Y] \leq Pr[X_0] + Pr[X_1] + \cdots + Pr[X_{k-1}] = 2\varepsilon_1 + 2\varepsilon_2 + \cdots + 2\varepsilon _k .$$
On the other hand, if $X$ occurs, then $Y$ occurs. Therefore $2\varepsilon_1 + 2\varepsilon_2 + \cdots + 2\varepsilon_k \geq Pr[Y] \geq Pr[X] \geq 2(1-\varepsilon)\varepsilon$, which completes the proof. \end{proof}
\medskip
\noindent \begin{proof}[Proof of Theorem \ref{reg1}] Let $0 < \varepsilon < 1/4$, $k \geq 2$, and let $H = (V,E)$ be an $n$-vertex $k$-uniform hypergraph with dual VC-dimension $d$. For every vertex $v\in V$, let $N(v)$ denote the set of $(k-1)$-tuples $S \in {V\choose k-1}$ such that $v\cup S \in E(H)$. Let $\mathcal{F}$ be the set system whose ground set is ${V \choose k-1}$, where $A \in \mathcal{F}$ if and only if $A = N(v)$ for some vertex $v \in V$. Hence $\mathcal{F} = \{N(v): v \in V\}$ has VC-dimension $d$. Set $\delta = \frac{\varepsilon^2}{4k^2}{n\choose k-1}$. By examining each vertex and its neighborhood one by one, we greedily construct a maximal set $S\subset V(H)$ such that $\mathcal{F}' = \{N(s): s \in S\}$ is $\delta$-separated. By Lemma \ref{packing}, we have $|S| \leq c_1(4k^2/\varepsilon^2)^d$. Let $S = \{s_1,s_2,\ldots, s_{|S|}\}$.
We define a partition $\mathcal{Q}: V = U_1\cup \cdots \cup U_{|S|}$ of the vertex set such that $v \in U_i$ if $i$ is the smallest index such that $|N(v)\triangle N(s_i)| < \delta$. Such an $i$ always exists, since $S$ is maximal. By the triangle inequality, for $u,v \in U_i$, we have $|N(u)\triangle N(v)| <2\delta$. Set $K = 8k|S|/\varepsilon$. Partition each part $U_i$ into parts of size $|V|/K = n/K$ and possibly one additional part of size less than $n/K$. Collect these additional parts and divide them into parts of size $|V|/K$ to obtain an equitable partition $\mathcal{P}:V=V_1 \cup \cdots \cup V_K$ into $K$ parts. The number of vertices of $V$ belonging to parts $V_i$ that are not fully contained in one part of $\mathcal{Q}$ is at most $|S||V|/K$. Hence, the fraction of (unordered) $k$-tuples $(V_{i_1},\ldots, V_{i_k})$ such that at least one of the parts is not fully contained in some part of $\mathcal{Q}$ is at most $k|S|/K=\varepsilon/8$. Let $X$ denote the set of unordered $k$-tuples of parts $(V_{i_1},\ldots,V_{i_k})$ such that each part is fully contained in a part of $\mathcal{Q}$ (though, in not necessarily the same part) and $(V_{i_1},\ldots,V_{i_k})$ is not $\varepsilon$-homogeneous.
Let $T$ be the set of pairs of $k$-tuples $(e,e')$, such that $|e\cap e'| = k-1$, $e \in E(H)$, $e'\not\in E(H)$, $|e\cap V_{i_j}| = |e'\cap V_{i_j}| = 1$ for $j = 1,2,\ldots, k$, and $(V_{i_1},\ldots, V_{i_k})\in X$. Notice that for $(e,e') \in T$, such that $e\cap V_{i_j} = b$, $e'\cap V_{i_j} = b'$, $b \neq b'$, and $V_{i_j}$ lies completely inside a part in $\mathcal{Q}$, we have $|N(b)\triangle N(b')| \leq 2\delta$. Therefore
$$|T| \leq K\left(\frac{n}{K}\right)^22\delta \leq \frac{\varepsilon^2}{2Kk^2} n^2 {n\choose k-1} .$$
\noindent On the other hand, by Lemma \ref{pattern}, every $k$-tuple of parts $(V_{i_1},\ldots,V_{i_k})$ that is not $\varepsilon$-homogeneous gives rise to at least $\varepsilon(1-\varepsilon)(n/K)^{k + 1}$ pairs $(e,e')$ in $T$. Hence $|T| \geq |X| \varepsilon(1-\varepsilon)(n/K)^{k + 1}$. Since $\varepsilon < 1/4$ and $k \geq 2$, the inequalities above imply that
$$|X| \leq (2\varepsilon/3){K\choose k}.$$
\noindent Thus, the fraction of $k$-tuples of parts in $\mathcal{P}$ that are not $\varepsilon$-homogeneous is at most $\varepsilon/8 + 2\varepsilon/3 < \varepsilon$, and $K \leq c(1/\varepsilon)^{2d + 1}$ where $c = c(k,d)$.
Finally, it remains to show that the partition $\mathcal{P}$ can be computed in $O(n^k)$ time. Given two vertices $s,v \in V$, we have $|N(s) \triangle N(v)| = |N(s)| + |N(v)| - 2|N(s) \cap N(v)|$. Therefore we can determine whether $|N(s) \triangle N(v)| < \delta$ in $O(n^{k-1})$ time. Hence the maximal set $S\subset V$ described above (and therefore the partition $\mathcal{Q}$) can be computed in $O(n^{k})$ time, since $|S| \leq n$. The final equitable partition $\mathcal{P}$ requires an additional $O(n)$ time, which gives a total running time of $O(n^k)$. \end{proof}
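For the graph case ($k = 2$), the greedy construction in the proof can be sketched in a few lines. This is our own illustrative rendering under stated assumptions (adjacency given as a list of neighbor sets, `delta` chosen by the caller), not the authors' implementation; it builds the maximal $\delta$-separated set $S$ and the coarse partition $\mathcal{Q}$, before the final refinement into equitable parts.

```python
def greedy_partition(n, adj, delta):
    """Greedy step from the proof (graph case k = 2): build a maximal set S
    whose neighborhoods are pairwise delta-separated, then assign each vertex
    v to the first s in S with |N(v) symmetric-difference N(s)| < delta."""
    S = []
    for v in range(n):
        # ^ on Python sets is the symmetric difference
        if all(len(adj[v] ^ adj[s]) >= delta for s in S):
            S.append(v)
    parts = {s: [] for s in S}
    for v in range(n):
        for s in S:
            if len(adj[v] ^ adj[s]) < delta:  # exists by maximality of S
                parts[s].append(v)
                break
    return S, parts
```

On the path $0\!-\!1\!-\!2\!-\!3$ with $\delta = 2$, the two endpoints are grouped with their "lookalike" interior vertices, since their neighborhoods differ in fewer than $\delta$ elements.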
We now establish Theorem \ref{tight} which shows that the partition size in Theorem \ref{reg1} is tight up to an absolute constant factor in the exponent.
\medskip
\begin{proof}[Proof of Theorem \ref{tight}] Given two vertex subsets $X,Y$ of a graph $G $, we write $e_G(X,Y)$ for the number of edges between $X$ and $Y$ in $G$, and write $d_G(X,Y)$ for the density of edges between $X$ and $Y$, that is, $d_G(X,Y) = \frac{e_G(X,Y)}{|X||Y|}$. The pair $(X,Y)$ is said to be $(\varepsilon, \delta)$-\emph{regular} if for all $X'\subset X$ and $Y'\subset Y$ with $|X'| \geq \delta |X|$ and $|Y'| \geq \delta |Y|$, we have $|d_G(X,Y) - d_G(X',Y')| \leq \varepsilon$. In the case that $\varepsilon = \delta$, we just say \emph{$\varepsilon$-regular}. We will make use of the following construction due to Conlon and Fox.
\begin{lemma}[\cite{CF12}]\label{nreg}
For $d\geq 16$ and $\varepsilon \in (0,1/100)$, there is a graph $H$ on $n = \lceil (5\varepsilon)^{-d/2}\rceil$ vertices such that for every equitable vertex partition of $H$ with at most $\sqrt{n}$ parts, there are at least an $\varepsilon$-fraction of the pairs of parts which are not $(4/5)$-regular.
\end{lemma}
Let $H = (V,E)$ be the graph obtained from Lemma \ref{nreg} on $n = \lceil (5\varepsilon)^{-d/2}\rceil$ vertices, where $\varepsilon \in (0,1/100)$ and $d \geq 16$, and consider a random subgraph $G\subset H$ by picking each edge in $E$ independently with probability $p = n^{-2/d} = 5\varepsilon$. Then we have the following.
\begin{lemma}\label{probnreg}
In the random subgraph $G$, with probability at least $1 -n^{-2}$, every pair of disjoint subsets $X,Y \subset V$ with $|X| \leq |Y|$ satisfies
\begin{equation}\label{show}
|e_G(X,Y) - p\cdot e_H(X,Y)| < \sqrt{g},\end{equation}
\noindent where $g = 2|X||Y|^2\ln(ne/|Y|)$.
\end{lemma}
\begin{proof}
For fixed sets $X,Y \subset V(G)$, where $|X| = u_1$ and $|Y| = u_2$, let $E_H(X,Y) = \{e_1,\ldots, e_m\}$. We define $S_i = 1$ if edge $e_i$ is picked and $S_i = 0$ otherwise, and set $S = S_1 +\cdots + S_m$. A Chernoff-type estimate (see Theorem A.1.4 in \cite{AS}) implies that for $a > 0$, $Pr[|S-pm| > a] < 2e^{-2a^2/m}.$ Since $m \leq u_1u_2$, the probability that (\ref{show}) does not hold is less than $2e^{-2g/(u_1u_2)}$. By the union bound, the probability that there are disjoint sets $X,Y \subset V(G)$ for which (\ref{show}) does not hold is at most
$$\begin{array}{ccl}
\sum\limits_{u_2 = 1}^n \sum\limits_{u_1 = 1}^{u_2} {n\choose u_2}{n-u_2\choose u_1} 2e^{-2g/(u_1u_2)}& \leq & \sum\limits_{u_2 = 1}^n \sum\limits_{u_1 = 1}^{u_2} \left(\frac{ne}{u_2}\right)^{u_2}\left(\frac{ne}{u_1}\right)^{u_1} 2e^{-2g/(u_1u_2)} \\\\
& \leq & \sum\limits_{u_2 = 1}^n \sum\limits_{u_1 = 1}^{u_2} 2\left(\frac{ne}{u_2}\right)^{-2u_2} \leq n^{-2}.
\end{array}$$
\end{proof}
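The Chernoff-type estimate used above can be checked empirically. The sketch below is illustrative only; `hoeffding_bound` is our own name for the tail bound $2e^{-2a^2/m}$ quoted from \cite{AS}, and the Monte Carlo estimate is a sanity check, not part of the proof.

```python
import math
import random

def hoeffding_bound(m, a):
    """Tail bound Pr[|S - pm| > a] < 2 * exp(-2 a^2 / m), for S a sum of
    m independent 0/1 random variables with mean p."""
    return 2 * math.exp(-2 * a * a / m)

def empirical_tail(m, p, a, trials=5000, seed=0):
    """Monte Carlo estimate of Pr[|S - pm| > a] for S ~ Binomial(m, p)."""
    rng = random.Random(seed)
    exceed = sum(
        1 for _ in range(trials)
        if abs(sum(rng.random() < p for _ in range(m)) - p * m) > a
    )
    return exceed / trials
```

For $m = 100$, $p = 0.3$, the empirical tail stays below the bound for both moderate and large deviations $a$.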
By the analysis in Section \ref{lll}, the probability that $G$ has VC-dimension at least $d+1$ is at most $${n\choose d+1}n^{2^{d+1}}p^{(d+1)2^{d}} \leq n^{d + 1} n^{-2^{d+1}/d} < \frac{1}{10},$$ since $d\geq 16$. Therefore, the union bound implies that there is a subgraph $G \subset H$ such that $G$ has VC-dimension at most $d$, and every pair of disjoint subsets $X,Y \subset V$ with $|X|\leq |Y|$ satisfies
\begin{equation}\label{ineq}
|e_G(X,Y) - p\cdot e_H(X,Y)| < \sqrt{2|X||Y|^2\ln(ne/|Y|)}.
\end{equation}
We will now show that for every equitable vertex partition of $G$ into fewer than $\sqrt{n} = (5\varepsilon)^{-d/4}$ parts, there are at least an $\varepsilon$-fraction of the pairs of parts which are not $\varepsilon$-homogeneous.
Let $\mathcal{P}$ be an equitable partition of $V$ into $t$ parts, where $t < \sqrt{n} = (5\varepsilon)^{-d/4}$. By Lemma \ref{nreg}, there are at least $\varepsilon{t\choose 2}$ pairs of parts in $\mathcal{P}$ which are not $(4/5)$-regular in $H$. Let $(X,Y)$ be such a pair. Then there are subsets $X'\subset X$ and $Y'\subset Y$ such that $|X'| \geq 4|X|/5$, $|Y'|\geq 4|Y|/5$, and
$$|d_H(X,Y) - d_H(X',Y')| \geq 4/5.$$
\noindent Moreover, by (\ref{ineq}), we have
$$|e_G(X,Y) - p\cdot e_H(X,Y)| \leq \sqrt{2}\left(\frac{n}{t}\right)^{3/2}\ln(te) \leq \frac{\sqrt{2}\ln(te)}{n^{1/4}}(n/t)^2.$$
\noindent Since $d \geq 16$ and $\varepsilon \in (0,1/100)$, this implies
$$|e_G(X,Y) - p\cdot e_H(X,Y)| \leq (5\varepsilon)^2\sqrt{2}\ln(te) (n/t)^2 \leq \frac{\varepsilon}{4}(n/t)^2. $$
\noindent Hence $|d_G(X,Y) - p\cdot d_H(X,Y)| \leq \varepsilon/4$. Therefore we have
$$|d_G(X',Y') - d_G(X,Y)| \geq p\cdot |d_H(X',Y') - d_H(X,Y)| - 2\frac{\varepsilon}{4} \geq 4\varepsilon - \frac{\varepsilon}{2} > 3\varepsilon.$$
Finally, it is easy to see that $(X,Y)$ is not $\varepsilon$-homogeneous in $G$. Indeed, if $(X,Y)$ were $\varepsilon$-homogeneous, then either $d_G(X,Y) < \varepsilon$ or $d_G(X,Y) > 1-\varepsilon$. In the former case we have $d_G(X',Y') > 3\varepsilon$, which implies $$e_G(X,Y) \geq e_G(X',Y') > 3\varepsilon \frac{4|X|}{5}\frac{4|Y|}{5} >\varepsilon|X||Y|,$$ a contradiction. In the latter case, we have $d_G(X',Y') < 1-3\varepsilon$, and a similar analysis shows that $e_G(X,Y) < (1 - \varepsilon)|X||Y|$, again a contradiction.
Thus, any equitable vertex partition of $G$ in which all but an $\varepsilon$-fraction of the pairs of parts are $\varepsilon$-homogeneous requires at least $(5\varepsilon)^{-d/4}$ parts. \end{proof}
\section{Proof of Theorem \ref{jacob}}\label{graphramsey}
The family $\mathcal{G}$ of all \emph{complement reducible graphs}, or \emph{cographs}, is defined as follows: the one-vertex graph is in $\mathcal{G}$, and if $G, H \in \mathcal{G}$, then so are their disjoint union and the graph obtained from their disjoint union by adding all edges between $G$ and $H$. Clearly, every induced subgraph of a cograph is a cograph, and it is well known that every cograph on $n$ vertices contains a clique or independent set of size at least $\sqrt{n}$.
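The $\sqrt{n}$ bound follows from the cotree recursion for the clique number $\omega$ and the independence number $\alpha$: a disjoint union adds independence numbers and takes the maximum of clique numbers, while a join does the opposite; hence $\omega \cdot \alpha \geq n$ holds at every node of the cotree, and so $\max(\omega, \alpha) \geq \sqrt{n}$. A minimal sketch of this recursion (the tuple encoding of cotrees is our own illustration, not notation from the paper):

```python
def omega_alpha(t):
    """Return (clique number, independence number) of a cograph given
    by its cotree: a leaf is the string 'v'; an internal node is a
    tuple ('union', left, right) or ('join', left, right)."""
    if t == 'v':
        return (1, 1)
    op, left, right = t
    wl, al = omega_alpha(left)
    wr, ar = omega_alpha(right)
    if op == 'union':
        # disjoint union: a clique stays on one side, independent sets add
        return (max(wl, wr), al + ar)
    # join: cliques add across the sides, an independent set stays on one side
    return (wl + wr, max(al, ar))

def leaves(t):
    """Number of vertices of the cograph."""
    return 1 if t == 'v' else leaves(t[1]) + leaves(t[2])
```

Since $\max(\omega_1,\omega_2)(\alpha_1+\alpha_2) \geq \omega_1\alpha_1 + \omega_2\alpha_2$ (and symmetrically for joins), induction gives $\omega \cdot \alpha \geq n$ at the root.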
Let $f_d(n)$ be the largest integer $f$ such that every graph $G$ with $n$ vertices and VC-dimension at most $d$ has an induced subgraph on $f$ vertices which is a cograph. Cographs are perfect graphs, so Theorem \ref{jacob} is an immediate consequence of the following result.
\begin{theorem}
For any $\delta \in (0,1/2)$ and for every integer $d\geq 1$, there is a $c = c(d,\delta)$ such that $f_d(n) \geq e^{c(\log n)^{1 - \delta}}$ for every $n$.
\end{theorem}
\begin{proof}
For simplicity, let $f(n) = f_d(n)$. The proof is by induction on $n$. The base case $n = 1$ is trivial. For the inductive step, assume that the statement holds for all $n' < n$. Fix $\delta \in (0,1/2)$, and let $G = (V,E)$ be an $n$-vertex graph with VC-dimension at most $d$. The constant $c \in (0,1)$ will be determined later.
Set $\varepsilon = (1/32)e^{-3c(\log n)^{1 - \delta}}$. We apply Theorem \ref{reg1} to obtain an equitable partition $\mathcal{P}:V = V_1\cup \cdots \cup V_K$ into at most $K \leq \varepsilon^{-c_4}$ parts, where $c_4 = O(d)$, such that all but an $\varepsilon$-fraction of the pairs of parts are $\varepsilon$-homogeneous. We call an unordered pair of distinct vertices $(u,v)$ \emph{bad} if at least one of the following holds:
\begin{enumerate}
\item $(u,v)$ lie in the same part, or
\item $u \in V_i$ and $v \in V_j$, $i\neq j$, where $(V_i,V_j)$ is not $\varepsilon$-homogeneous, or
\item $u \in V_i$ and $v \in V_j$, $i\neq j$, $uv \in E(G)$ and $|E(V_i,V_j)| < \varepsilon |V_i||V_j|$, or
\item $u \in V_i$ and $v \in V_j$, $i\neq j$, $uv \not\in E(G)$ and $|E(V_i,V_j)| >(1- \varepsilon)|V_i||V_j|$.
\end{enumerate}
\noindent By Theorem \ref{reg1}, the number of bad pairs of vertices in $G$ is at most
$$K{n/K\choose 2} + \left(\frac{n}{K}\right)^2\varepsilon {K\choose 2}+ \varepsilon \left(\frac{n}{K}\right)^2 (1-\varepsilon){K\choose 2}\leq 2\varepsilon {n\choose 2 }.$$
By Tur\'an's Theorem, there is a subset $R\subset V(G)$ of at least $\frac{1}{4\varepsilon}$ vertices such that $R$ does not contain any bad pairs. This implies that all vertices of $R$ are in distinct parts of $\mathcal{P}$. Furthermore, if $u,v \in R$ are adjacent, then the corresponding parts $V_i,V_j$ satisfy $|E(V_i,V_j)| \geq (1 - \varepsilon) |V_i||V_j|$, and if they are not adjacent, then we have $|E(V_i,V_j)| < \varepsilon |V_i||V_j|$. Since the induced graph $G[R]$ has VC-dimension at most $d$, $G[R]$ contains a cograph $U_0$ of size $t = f(1/(4\varepsilon))$, which, by the induction hypothesis, is at least $e^{c(\log(1/(4\varepsilon)))^{1-\delta}}$. Without loss of generality, we denote the corresponding parts of $U_0$ by $V_1,\ldots, V_t$. Each part contains $n/K$ vertices.
For each vertex $u \in V_1$, let $d_b(u)$ denote the number of bad pairs $uv$ with $v \in V_i$ for some $i \in \{2,\ldots, t\}$. Then there is a subset $V'_1\subset V_1$ of size $\frac{n}{2K}$ such that each vertex $u \in V'_1$ satisfies $d_b(u) < 8t\varepsilon (n/K)$. Indeed, otherwise at least $n/(2K)$ vertices $u \in V_1$ satisfy $d_b(u) \geq 8t\varepsilon (n/K)$, which implies
$$\frac{n}{2K}\frac{8t\varepsilon n}{K} \leq \sum\limits_{ u \in V_1} d_b(u) \leq \varepsilon (t-1)\left(\frac{n}{K}\right)^2,$$
\noindent and hence a contradiction. By the induction hypothesis, we can find a subset $U_1 \subset V'_1$ such that the induced subgraph $G[U_1]$ is a cograph of size $f(n/(2K))$. If the inequality
$$f\left(\frac{n}{2K}\right) 8t\varepsilon \frac{n}{K} > \frac{n}{4tK}$$
\noindent is satisfied, then we have
$$f^3(n) \geq f\left(\frac{n}{2K}\right) t^2 > \frac{1}{32\varepsilon}.$$
\noindent Recalling that $\frac{1}{\varepsilon} = 32e^{3c(\log n)^{1 - \delta}}$, we conclude $f(n) \geq e^{c(\log n)^{1 - \delta}}$ and we are done.
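For the reader's convenience, the rearrangement behind this step can be spelled out as follows (a routine computation in the notation above):

```latex
f\left(\frac{n}{2K}\right) 8t\varepsilon \frac{n}{K} > \frac{n}{4tK}
\;\Longleftrightarrow\;
f\left(\frac{n}{2K}\right) > \frac{1}{32\, t^2 \varepsilon},
```

and since $f$ is non-decreasing with $f(n) \geq t$ and $f(n) \geq f(n/(2K))$, this gives $f^3(n) \geq f(n/(2K))\, t^2 > \frac{1}{32\varepsilon} = e^{3c(\log n)^{1-\delta}}$, whence $f(n) \geq e^{c(\log n)^{1-\delta}}$.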
Therefore, we can assume that
$$f\left(\frac{n}{2K}\right) 8t\varepsilon \frac{n}{K} \leq \frac{n}{4tK}.$$
\noindent Hence, by deleting any vertex $v \in V_2\cup \cdots \cup V_t$ that is in a bad pair with a vertex in $U_1$, we have deleted at most $\frac{n}{4tK}$ vertices in each $V_i$ for $i = 2,\ldots, t$.
We repeat this entire process on the remaining vertices in $V_2,\ldots, V_t$. At step $i$, we will find a subset $U_i\subset V_i$ that induces a cograph of size
$$f\left(\frac{n}{2K} - i\frac{n}{4Kt}\right) \geq f\left(\frac{n}{4K}\right),$$
\noindent and again, if the inequality
$$f\left(\frac{n}{4K}\right) 8t\varepsilon \frac{n}{K} > \frac{n}{4tK}$$
\noindent is satisfied, then we are done by the same argument as above. Therefore we can assume that our cograph $G[U_i]$ has the property that there are at most $n/(4tK)$ bad pairs between $U_i$ and $V_j$ for $j > i$. At the end of this process, we obtain subsets $U_1,\ldots, U_t$ such that the union $U_1\cup \cdots \cup U_t$ induces a cograph of size at least $tf\left(\frac{n}{4K}\right)$. Therefore we have
\begin{equation}\label{series1}
\begin{array}{ccl}
f(n) & \geq & f\left( \frac{1}{4\varepsilon} \right)f\left(\frac{n}{4K}\right)\\\\
& \geq & f\left(e^{3c(\log n)^{1-\delta} } \right) f\left( e^{\log n - c\cdot c_5(\log n)^{1-\delta}} \right)\\\\
& \geq & e^{c\left(3c (\log n)^{1-\delta}\right)^{1 - \delta} }e^{c\left(\log n - c\cdot c_5(\log n)^{1 - \delta}\right)^{1-\delta} },\\
\end{array}
\end{equation}
\noindent where $c_5 = c_5(d)$. Notice we have the following estimate:
\begin{equation}\label{series2}
\begin{array}{ccl}
\left(\log n - c\cdot c_5(\log n)^{1 - \delta}\right)^{1-\delta} & = & (\log n)^{1 - \delta}\left(1 - \frac{c\cdot c_5}{\log^{\delta}n}\right)^{1 - \delta} \\\\
& \geq & (\log n)^{1 - \delta}\left( 1 - \frac{c\cdot c_5}{(\log n)^{\delta}} \right)\\\\
& \geq & (\log n)^{1 - \delta} - c\cdot c_5(\log n)^{1 - 2\delta}. \\\\
\end{array}
\end{equation}
\noindent Plugging (\ref{series2}) into (\ref{series1}) gives
\begin{equation}
\begin{array}{ccl}
f(n) & \geq & e^{c\left(3c(\log n)^{1-\delta} \right)^{1 - \delta} }\cdot e^{c(\log n)^{1 - \delta} - c^2\cdot c_5(\log n)^{1 - 2\delta}}\\\\
& \geq & e^{c(\log n)^{1 - \delta}}\cdot e^{\left(3^{1-\delta}c^2(\log n)^{1 - 2\delta + \delta^2} - c^2c_5(\log n)^{1-2\delta} \right)}.
\end{array}
\end{equation}
\noindent The last inequality follows from the fact that $c < 1$. Let $n_0 = n_0(d,\delta)$ be the minimum integer such that for all $n \geq n_0$ we have
$$ 3^{1-\delta}(\log n)^{1 - 2\delta + \delta^2} - c_5(\log n)^{1-2\delta} \geq0. $$
\noindent We now set $c = c(d,\delta)$ to be sufficiently small such that the statement holds trivially for all $n < n_0$. Hence we have $f(n) \geq e^{c(\log n)^{1 - \delta}}$ for all $n$.\end{proof}
\section{Random constructions}\label{lll}
Here we prove Theorems \ref{offdiag} and \ref{notstrongeh}. The proof of Theorem \ref{offdiag} uses the Lov\'asz Local Lemma \cite{ErLo} in a similar manner as Spencer \cite{Sp} to give a lower bound on Ramsey numbers.
\begin{lemma}[Lov\'asz Local Lemma]\label{local}
Let $\mathcal{A}$ be a finite set of events in a probability space. For $A \in \mathcal{A}$ let $\Gamma(A)$ be a subset of $\mathcal{A}$ such that $A$ is independent of all events in $\mathcal{A} \setminus (\{A\}\cup\Gamma(A))$. If there is a function $x:\mathcal{A} \rightarrow (0,1)$ such that for all $A \in \mathcal{A}$,
$$Pr[A] \leq x(A)\prod\limits_{B \in \Gamma(A)}(1 - x(B)),$$
\noindent then $Pr\left[\bigcap_{A \in \mathcal{A}} \overline{A}\right] \geq \prod\limits_{A \in \mathcal{A}}(1 - x(A)).$ In particular, with positive probability no event in $\mathcal{A}$ holds.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{offdiag}] Let $s \geq 3$ and $d$ be positive integers such that $d> s + 2$. Let $G(n, p)$ denote the random graph on $n$ vertices in which each edge appears with probability $p$ independently of all the other edges, where $p = n^{-2/(s + 1)}$ and $n$ is sufficiently large. For each set $S$ of $s$ vertices, let $A_S$ be the event that $S$ induces a complete graph. For each set $T$ of $t$ vertices, let $B_T$ be the event that $T$ induces an empty graph. Clearly, we have $Pr[A_S] = p^{s\choose 2}$ and $Pr[B_T] = (1 - p)^{t\choose 2}$.
For each set $D$ of $d$ vertices, let $C_D$ be the event that $D$ is shattered. Then
$$\begin{array}{ccl}
Pr[C_D] & \leq & \prod\limits_{W \subset D} Pr[\exists v \in V(G): N(v)\cap D = W]\\\\
& = & \prod\limits_{W \subset D} \left(1 - \left(1 - p^{|W|}(1 - p)^{d - |W|}\right)^n\right) \\\\
& = & \prod\limits_{j = 0}^d \left(1 - \left(1 - p^{j}(1 - p)^{d - j}\right)^n\right)^{d\choose j}\\\\
& \leq & \prod\limits_{j = 1}^d \left(n\cdot p^{j}(1 - p)^{d - j}\right)^{d\choose j}\\\\
& \leq & \prod\limits_{j = 1}^d n^{d\choose j}\cdot p^{j{d\choose j}} \\\\
& \leq & n^{2^d}\cdot p^{d2^{d-1}}. \\\\
\end{array}$$
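The last two estimates use $1 - (1-x)^n \leq nx$ and $(1-p)^{d-j} \leq 1$, together with the standard binomial identities (recorded here for completeness):

```latex
\sum_{j=1}^{d} \binom{d}{j} = 2^d - 1 \leq 2^d,
\qquad
\sum_{j=1}^{d} j\binom{d}{j} = d\, 2^{d-1},
```

the second of which follows from the absorption identity $j\binom{d}{j} = d\binom{d-1}{j-1}$.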
Next we estimate the number of events dependent on each $A_S$, $B_T$ and $C_D$. Let $S\subset V$ such that $|S| = s$. Then the event $A_S$ is dependent on at most ${s\choose 2}{n\choose s-2} \leq s^2n^{s-2}$ events $A_{S'}$, where $|S'| = s$. Likewise, $A_S$ is dependent on at most ${n\choose t}$ events $B_T$ where $|T| = t$. Finally $A_S$ is dependent on at most ${s\choose 2}{n\choose d-2} \leq s^2n^{d-2}$ events $C_D$ where $|D| = d$.
Let $T \subset V$ be a set of vertices such that $|T| = t$. Then the event $B_T$ is dependent on at most ${t\choose 2}{n\choose s-2} \leq t^2n^{s-2}$ events $A_S$ where $|S| = s$. Likewise, $B_T$ is dependent on at most ${n\choose t}$ events $B_{T'}$ where $|T'| = t$. Finally $B_T$ is dependent on at most ${t\choose 2}{n\choose d-2} \leq t^2n^{d-2}$ events $C_D$ where $|D| = d$.
Let $D \subset V$ be a set of vertices such that $|D| = d$. Then the event $C_D$ is dependent on at most ${d\choose 2}{n\choose s-2}\leq d^2n^{s-2}$ events $A_S$ where $|S| = s$. Likewise, $C_D$ is dependent on at most ${n\choose t}$ events $B_T$ where $|T| = t$. Finally $C_D$ is dependent on at most ${d\choose 2}{n\choose d-2} \leq d^2n^{d-2}$ events $C_{D'}$ where $|D'| = d$.
By Lemma \ref{local}, it suffices to find three real numbers $x,y,z \in (0,1)$ such that
\begin{equation}\label{one}p^{s\choose 2} \leq x(1-x)^{s^2n^{s-2}}(1-y)^{n\choose t}(1 - z)^{s^2n^{d-2}},\end{equation}
\begin{equation}\label{two}(1-p)^{t\choose 2} \leq y(1-x)^{t^2n^{s-2}}(1-y)^{n\choose t}(1-z)^{t^2n^{d-2}},\end{equation}
\noindent and
\begin{equation}\label{three}n^{2^d}\cdot p^{d2^{d-1}} \leq z(1-x)^{d^2n^{s-2}}(1-y)^{n\choose t}(1 - z)^{d^2n^{d-2}}.\end{equation}
Recall $p = n^{\frac{-2}{s + 1}}$, $s\geq 3$, and $d > s + 2$. We now set $t = c_1n^{\frac{2}{s + 1}}(\log n)$, $x = c_2 n^{\frac{-2{s\choose 2}}{s + 1}}$, $y = e^{-c_3n^{\frac{2}{s + 1}}(\log n)^2}$, and $z = c_4n^{2^d - \frac{2}{s+1}d2^{d-1}} $, where $c_1,c_2,c_3,c_4$ only depend on $s$ and $d$. By letting $c_1 > 10c_3$ and choosing $c_1,c_2,c_3,c_4$ sufficiently large, an easy (but tedious) calculation shows that (\ref{one}), (\ref{two}), and (\ref{three}) are satisfied when $n$ is sufficiently large. By Lemma \ref{local}, there is an $n$-vertex $K_s$-free graph $G$ with VC-dimension at most $d$ and independence number at most $c_1 n^{\frac{2}{s + 1}}\log n$.\end{proof}
\medskip
\begin{proof}[Proof of Theorem \ref{notstrongeh}] Let $d \geq 5$ and let $n$ be a sufficiently large integer. Consider the random $n$-vertex graph $G = G(n,p)$, where each edge is chosen independently with probability $p = n^{-4/d}$. For $n$ sufficiently large, the union bound and the analysis above imply that the probability that $G$ has VC-dimension at least $d$ is at most $1/3$.
Let $A,B \subset V(G)$ be vertex subsets, each of size $k$. The probability that $(A,B)$ is homogeneous is at most
$$p^{k^2} + (1 - p)^{k^2} \leq n^{-4k^2/d} + e^{-n^{-4/d}k^2}.$$
\noindent The probability that $G$ contains a homogeneous pair $(A,B)$, where $|A|,|B| = k$, is at most
$${n\choose k}{n - k \choose k}\left( n^{-4k^2/d} + e^{-n^{-4/d}k^2}\right) < 1/3,$$
\noindent for $k =4n^{4/d}\log n$ and $n$ sufficiently large. Thus, again by the union bound, there is a graph with VC-dimension less than $d$, with no two disjoint subsets $A,B \subset V(G)$ such that $(A,B)$ is homogeneous and $|A|,|B| = 4n^{4/d}\log n$.\end{proof}
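As a sanity check on the final union-bound estimate (purely illustrative, not part of the argument), one can verify for a concrete large $n$ that $\binom{n}{k}\binom{n-k}{k}\big(n^{-4k^2/d} + e^{-n^{-4/d}k^2}\big)$ is indeed minuscule; here we take $\log$ to be the natural logarithm, bound $\binom{n}{k}\binom{n-k}{k} \leq \binom{n}{k}^2 \leq (ne/k)^{2k}$, and work in log-space to avoid floating-point underflow:

```python
import math

def log_union_bound(n, d):
    """Upper bound, in natural log, on binom(n,k)^2 * (n^{-4k^2/d}
    + exp(-n^{-4/d} k^2)) with k = 4 n^{4/d} log n, computed in
    log-space so the astronomically small terms do not underflow."""
    k = 4 * n ** (4 / d) * math.log(n)
    assert k < n  # the statement is vacuous otherwise
    log_binom = k * math.log(n * math.e / k)   # log binom(n,k) <= k log(ne/k)
    t1 = -(4 * k * k / d) * math.log(n)        # log of n^{-4k^2/d}
    t2 = -(n ** (-4 / d)) * k * k              # log of exp(-n^{-4/d} k^2)
    hi, lo = max(t1, t2), min(t1, t2)
    return 2 * log_binom + hi + math.log1p(math.exp(lo - hi))
```

For instance, with $d = 5$ and $n = 10^{12}$ the returned log is on the order of $-10^{13}$, far below $\log(1/3)$.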
\section{Ramsey-Tur\'an numbers for graphs with bounded VC-dimension}\label{ramseyturan}
In this section we prove Theorem \ref{rt}. First let us recall a classical theorem in graph theory.
\begin{theorem}[Tur\'an]\label{turan3}
Let $G = (V,E)$ be a $K_p$-free graph with $n$ vertices. Then the number of edges in $G$ is at most $\frac{1}{2}\left(1 - \frac{1}{p-1} + o(1)\right)n^2$.
\end{theorem}
Together with a sampling argument of Varnavides \cite{v}, we have the following lemma (see also Lemma 2.1 in \cite{keevash}).
\begin{lemma}\label{supersat}
For $\varepsilon > 0$, every $n$-vertex graph $G = (V,E)$ with $|E|\geq \frac{1}{2}\left(1 - \frac{1}{p-1} + \varepsilon\right)n^2$ has at least $\delta n^p$ copies of $K_p$, where $\delta = \delta(p,\varepsilon)$.
\end{lemma}
In order to establish the upper bound in Theorem \ref{rt}, it suffices to show
$$\mathbf{RT}_d(n,K_{2p},o(n)) \leq \frac{1}{2}\left( 1 - \frac{1}{p-1}\right)n^2 + o(n^2),$$
\noindent since we have $\mathbf{RT}_d(n,K_{2p-1},o(n)) \leq \mathbf{RT}(n,K_{2p-1},o(n))$. The following theorem implies the inequality above.
\begin{theorem}
Let $\varepsilon > 0$ and let $G = (V,E)$ be an $n$-vertex graph with VC-dimension $d$. If $G$ is $K_{2p}$-free and $|E| > \frac{1}{2}\left(1 - \frac{1}{p-1} + \varepsilon\right)n^2$, then $G$ contains an independent set of size $\gamma n$, where $\gamma = \gamma(d,p,\varepsilon)$.
\end{theorem}
\begin{proof}
By Lemma \ref{supersat}, $G$ contains at least $\delta n^p$ copies of $K_p$, where $\delta = \delta(\varepsilon,p)$. We may assume that $\delta$ is sufficiently small; its exact value will be determined later. We apply the regularity lemma (Lemma \ref{reg1}) with approximation parameter $\delta/4$ to obtain a (near) equipartition $\mathcal{P}:V = V_1\cup\cdots\cup V_K$ such that $4/\delta \leq K\leq c\left(4/\delta\right)^{2d +1}$, where $c = c(d)$, and all but a $\frac{\delta}{4}$-fraction of the pairs of parts in $\mathcal{P}$ are $(\delta/4)$-homogeneous.
By deleting all edges inside each part, we have deleted at most
$$K{n/K\choose 2} \leq \frac{n^2}{2K} \leq \frac{n^2}{8}\delta$$
\noindent edges. By deleting all edges between pairs of parts that are not $(\delta/4)$-homogeneous, we have deleted an additional
$$\left(\frac{n}{K}\right)^2\frac{\delta}{4}{K\choose 2} \leq \frac{n^2}{8}\delta$$
\noindent edges. Finally, by deleting all edges between pairs $(V_i,V_j)$ with density less than $\delta/4$, we have deleted at most
$$\frac{\delta}{4}\left(\frac{n}{K}\right)^2{K\choose 2} \leq \frac{n^2}{8}\delta$$
\noindent edges, which implies that we have deleted fewer than $n^2\delta/2$ edges of $G$ in total. The only edges remaining in $G$ are edges between pairs of parts $(V_i,V_j)$ with density greater than $1-\frac{\delta}{4}$. Since each edge lies in at most $n^{p-2}$ copies of $K_p$, we have deleted at most $\delta n^p/2$ copies of $K_p$ in $G$. Therefore there is at least one copy of $K_p$ remaining, which implies that there are $p$ parts $V_{i_1},\ldots,V_{i_p} \in \mathcal{P}$ that pairwise have density at least $1-\frac{\delta}{4}$, with $|V_{i_j}| = n/K$. Set $\delta_1 = \delta/4$.
For fixed $j \in \{2,\ldots, p\}$, notice that there are at least $(1 - 1/(2p))(n/K)$ vertices $v \in V_{i_1}$ such that $|N(v)\cap V_{i_j}| \geq (1- 4\delta_1 p)n/K$. Indeed, otherwise we would have
$$|E(V_{i_1},V_{i_j})| \leq \left(n/K\right)\left(1 - \frac{1}{2p}\right)\left(\frac{n}{K}\right) + \frac{n/K}{2p} \left(1 - 4\delta_1 p\right)\frac{n}{K} = \left(\frac{n}{K}\right)^2 - 2\delta_1 (n/K)^2 .$$
\noindent On the other hand, $|E(V_{i_1},V_{i_j})|\geq \left(1- \delta_1 \right)(n/K)^2$. This implies $2\delta_1 < \delta_1$ which is a contradiction.
Therefore there is a subset $V'_{i_1}\subset V_{i_1}$ with $|V'_{i_1}| \geq |V_{i_1}|/2$ such that each vertex $v \in V'_{i_1}$ satisfies $|N(v)\cap V_{i_j}| \geq (1-4\delta_1 p)|V_{i_j}|$ for all $j = 2,\ldots, p$. If $V'_{i_1}$ is an independent set, then we are done since $|V'_{i_1}| \geq n/(2K)$. Otherwise we have an edge $uv$ in $V'_{i_1}$. For $j = 2,\ldots, p$, the pigeonhole principle implies that $|V_{i_j}\cap N(u)\cap N(v)|\geq \frac{n}{K}(1 - 8\delta_1p)$. We define $V^{(2)}_{i_j}$ to be a set of exactly $\frac{n}{K}(1 - 8\delta_1p)$ elements in $V_{i_j}\cap N(u)\cap N(v)$. Notice that the graph induced on the vertex set $V^{(2)}_{i_2} \cup \cdots \cup V^{(2)}_{i_p}$ is $K_{2p-2}$-free. Moreover, the density between each pair of parts $(V^{(2)}_{i_j},V^{(2)}_{i_{\ell}})$ is at least $(1 - \delta_2)$ where $\delta_2 = \delta_1 + 16\delta_1p$. We repeat this process on the remaining $p-1$ parts $V^{(2)}_{i_2}, \ldots, V^{(2)}_{i_p}$.
After $j$ steps, we have either found an independent set of size at least $$\frac{n}{2K}(1 - 8\delta_1p)(1 - 8\delta_2(p-1)) \cdots (1 - 8\delta_{j-1}(p-j+2)),$$ where $\delta_k$ is defined recursively as $\delta_1 = \delta/4$ and $\delta_k = \delta_{k-1} + 16\delta_{k-1}p$, or we have obtained subsets $V^{(j)}_{i_j} ,\ldots,V^{(j)}_{i_p}$ such that $$|V^{(j)}_{i_{\ell}}| = \frac{n}{K}(1 - 8\delta_1p)(1 - 8\delta_2(p-1)) \cdots (1 - 8\delta_{j-1}(p-j)),$$ for $\ell = j,\ldots, p$, $V^{(j)}_{i_j} \cup \cdots \cup V^{(j)}_{i_p}$ is $K_{2p- 2j}$-free, and the density between each pair of parts $(V^{(j)}_{i_{k}}, V^{(j)}_{i_{\ell}})$ is at least $1 - \delta_j$.
By letting $\delta = \delta(\varepsilon,p)$ be sufficiently small such that $\delta_k < \frac{1}{100p}$ for all $k\leq p$, we obtain an independent set of size $\gamma n$, where $\gamma = \gamma(d,p,\varepsilon)$.
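Unwinding the recursion $\delta_k = (1 + 16p)\,\delta_{k-1}$ with $\delta_1 = \delta/4$ makes the required choice of $\delta$ explicit:

```latex
\delta_k = \frac{\delta}{4} (1 + 16p)^{k-1},
\qquad\text{so}\qquad
\delta < \frac{1}{25\, p\, (1 + 16p)^{p-1}}
\;\Longrightarrow\;
\delta_k < \frac{1}{100p} \ \text{for all } k \leq p.
```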
\end{proof}
The lower bound on $\mathbf{RT}_d(n,K_{2p-1},o(n))$ and $\mathbf{RT}_d(n,K_{2p},o(n))$ in Theorem \ref{rt} follows from a geometric construction of Fox et al. in \cite{FPS15} (see page 15), which is a graph with VC-dimension at most four.
\section{Concluding remarks}
Many interesting results arose in our study of graphs and hypergraphs with bounded VC-dimension. In particular, we strengthen several classical results from extremal hypergraph theory for hypergraphs with bounded VC-dimension. Below, we briefly mention two of them.
\medskip
\noindent \textbf{Hypergraphs with bounded VC-dimension.} Erd\H os, Hajnal, and Rado \cite{EHR} showed that every $3$-uniform hypergraph on $n$ vertices contains a clique or independent set of size $c\log\log n$. A famous open question of Erd\H os asks whether $\log\log n$ is the correct order of magnitude for Ramsey's theorem for 3-uniform hypergraphs. According to the best known constructions, there are 3-uniform hypergraphs on $n$ vertices with no clique or independent set of size $c'\sqrt{\log n}$. For $k\geq 4$, the best known lower and upper bounds on the size of the largest clique or independent set in every $n$-vertex $k$-uniform hypergraph are of the form $c\log^{(k-1)} n$ (the $(k-1)$-times iterated logarithm) and $c' \sqrt{\log^{(k - 2)} n} $, respectively (see \cite{CFS10} for more details). By combining Theorem \ref{jacob} with an argument of Erd\H os and Rado \cite{ER}, one can significantly improve these bounds for hypergraphs of bounded (neighborhood) VC-dimension.
\begin{theorem}\label{hypramsey}
Let $k\geq 3$ and $d \geq 1$. Every $k$-uniform hypergraph on $n$ vertices with VC-dimension $d$ contains a clique or independent set of size $e^{\left(\log^{(k-1)} n\right)^{1 - o(1)}}$.
\end{theorem}
Geometric constructions given by Conlon et al.~\cite{CFPSS} show that Theorem \ref{hypramsey} is tight apart from the $o(1)$ term in the second exponent. That is, for fixed $k\geq 3$, there are $k$-uniform hypergraphs on $n$ vertices with VC-dimension $d = d(k)$ such that the largest clique or independent set is of size $O(\log^{(k-2)} n)$.
\medskip
\noindent \textbf{The Erd\H os-Hajnal conjecture for tournaments.} A \emph{tournament} $T = (V,E)$ on a set $V$ is an orientation of the edges of the complete graph on the vertex set $V$, that is, for $u,v \in V$ we have either $(u,v) \in E$ or $(v,u) \in E$, but not both. A tournament with no directed cycle is called \emph{transitive}. If a tournament has no subtournament isomorphic to $T$, then it is called $T$-free.
An old result due to Entringer, Erd\H os, and Harner \cite{EEH} and Spencer \cite{S74} states that every tournament on $n$ vertices contains a transitive subtournament of size $c\log n$, which is tight apart from the value of the constant factor. Alon, Pach, and Solymosi \cite{APS} showed that the Erd\H os-Hajnal conjecture is equivalent to the following conjecture.
\begin{conjecture}\label{conjeht}
For every tournament $T$, there is a positive $\delta = \delta(T)$ such that every $T$-free tournament on $n$ vertices has a transitive subtournament of size $n^{\delta}$.
\end{conjecture}
In particular, it is known that every $T$-free tournament on $n$ vertices contains a transitive subtournament of size $e^{c\sqrt{\log n}}$, where $c = c(T)$. Here we note that this bound can be improved in the special case that the forbidden tournament $T = (V,E)$ is \emph{2-colorable}, that is, there is a 2-coloring of $V(T)$ such that each color class induces a transitive subtournament.
\begin{theorem}\label{eht}
For fixed integer $k>0$, let $T$ be a 2-colorable tournament on $k$ vertices. Then every $T$-free tournament on $n$ vertices contains a transitive subtournament of size $e^{(\log n)^{1 - o(1)}}$.
\end{theorem}
\noindent The idea of the proof of Theorem \ref{eht} is to use the fact that a tournament $T$ is 2-colorable if and only if the outneighborhood set system of every $T$-free tournament has VC-dimension at most $c(T)$. There is a straightforward analogue of Theorem \ref{reg1} for tournaments whose outneighborhood set system has bounded VC-dimension, and with this analogous tool, the proof of Theorem \ref{eht} is essentially the same as the proof of Theorem \ref{jacob}.
\section{Acknowledgements}
We would like to thank Lisa Sauermann for pointing out a small error in an earlier version of the proof of Lemma \ref{pattern}.
% https://arxiv.org/abs/2108.11536
\title{Factorizations in evaluation monoids of Laurent semirings}

\begin{abstract}
For a positive real number $\alpha$, let $\mathbb{N}_0[\alpha,\alpha^{-1}]$ be the semiring of all real numbers $f(\alpha)$ for $f(x)$ lying in $\mathbb{N}_0[x,x^{-1}]$, which is the semiring of all Laurent polynomials over the set of nonnegative integers $\mathbb{N}_0$. In this paper, we study various factorization properties of the additive structure of $\mathbb{N}_0[\alpha, \alpha^{-1}]$. We characterize when $\mathbb{N}_0[\alpha, \alpha^{-1}]$ is atomic. Then we characterize when $\mathbb{N}_0[\alpha, \alpha^{-1}]$ satisfies the ascending chain condition on principal ideals in terms of certain well-studied factorization properties. Finally, we characterize when $\mathbb{N}_0[\alpha, \alpha^{-1}]$ satisfies the unique factorization property and show that, when this is not the case, $\mathbb{N}_0[\alpha, \alpha^{-1}]$ has infinite elasticity.
\end{abstract}

\section{Introduction}
The purpose of this paper is to understand the (additive) factorization properties of the commutative semirings $\mathbb{N}_0[\alpha, \alpha^{-1}]$ for any $\alpha \in \mathbb{R}_{> 0}$. To be more precise, let $\mathbb{N}_0[x,x^{-1}]$ denote the set of Laurent polynomials with coefficients in the set of nonnegative integers $\mathbb{N}_0$. Since $\mathbb{N}_0[x,x^{-1}]$ is closed under both addition and multiplication, it is a commutative semiring. For each $\alpha \in \mathbb{R}_{> 0}$, we let $M_\alpha$ denote the additive monoid of the semiring $\mathbb{N}_0[\alpha, \alpha^{-1}]$, that is,
\[
M_\alpha = \{f(\alpha) \mid f(x) \in \mathbb{N}_0[x,x^{-1}]\}.
\]
It is a sub-semiring of the commutative semiring $\mathbb{R}_{\ge 0}$.
Let $M$ be a cancellative and commutative (additive) monoid. A non-invertible element of $M$ is called an atom if it is not the sum of two non-invertible elements, and $M$ is atomic if every non-invertible element is a sum of atoms.
It is well-known that every commutative (and cancellative) monoid satisfying the ascending chain condition on principal ideals (ACCP) is atomic (see, for example, \cite[Proposition 1.1]{pC68}). As for integral domains, $M$ is called a unique factorization monoid (UFM) provided that every non-invertible element can be written as a sum of atoms in an essentially unique way (i.e., up to order and associates). Here, we study the properties of being atomic, satisfying the ACCP, and being a UFM for the additive monoids $M_\alpha$ (with $\alpha \in \mathbb{R}_{> 0}$), offering various characterizations for each of such properties in terms of atoms and (additive) factorizations.
\smallskip
Most of the results we establish here are motivated by some of the results in the recent paper~\cite{CG20} by Correa-Morris and Gotti, where the authors investigated the atomic structure of the additive monoids of the evaluation semirings $\mathbb{N}_0[\alpha]$ for $\alpha \in \mathbb{R}_{> 0}$, generalizing some of the results already established by Chapman et al. in~\cite{CGG20} when $\alpha$ is taken in $\mathbb{Q}_{> 0}$. The study of atomicity and factorizations in the setting of commutative semirings has received a great deal of attention in the last few years. For instance, Campanini and Facchini~\cite{CF19} studied the factorization structure of the multiplicative monoid of the semiring $\mathbb{N}_0[x]$. In addition, Baeth et al.~\cite{BCG21} recently studied the atomic structure of both the additive and the multiplicative monoids of subsemirings of $\mathbb{R}_{\ge 0}$. Finally, factorizations in certain subsemirings of $\mathbb{Q}_{\ge 0}$ have also been considered in~\cite{ABP21} by Albizu-Campos et al. and in~\cite{BG20} by Baeth and Gotti.
\smallskip
We begin by introducing the main terminology in Section~\ref{sec:background_1} and outlining the main known results we use later. Then, in Section~\ref{sec:atomicity}, we discuss the atomicity of the monoids $M_\alpha$. We characterize the monoids $M_\alpha$ that are atomic, as well as those that are not, in Theorem \ref{thm:1atomic} and Proposition \ref{prop:non-atomic characterization}, respectively. In contrast with \cite[Proposition~5.13]{CG20}, the monoid $M_\alpha$ is finitely generated only when $\alpha=1.$ In particular, if $\alpha \neq 1$ and the monoid $M_\alpha$ is atomic, then $M_\alpha$ must contain infinitely many atoms; indeed, we show in Theorem~\ref{thm:1atomic} that the atoms of $M_\alpha$ are precisely the integer powers of~$\alpha$.
\smallskip
Let $M$ be an atomic (additive) monoid. A factorization of a non-invertible element $x \in M$ is, up to order and associates, a sequence of finitely many atoms (allowing repetitions) with sum $x$, and the number of atoms in such a sequence (counting repetitions) is called the length of the factorization. A non-invertible element in $M$ may have several distinct factorizations (even infinitely many). For a non-invertible element $x \in M$, we let $\mathsf{Z}(x)$ and $\mathsf{L}(x)$ denote the set of factorizations and the set of factorization lengths of $x$, respectively. Following Anderson et al.~\cite{AAZ90} and Halter-Koch~\cite{fHK92}, we say that the monoid~$M$ is an FFM (resp., a BFM) provided that $\mathsf{Z}(x)$ (resp., $\mathsf{L}(x)$) is finite for all non-invertible $x \in M$. The property of being a BFM was first studied back in 1949 by Neumann \cite{bN66} in connection to the ACCP. Note that every FFM is a BFM. In Section~\ref{sec:ACCP}, we prove that the conditions of satisfying the ACCP, being a BFM, and being an FFM are equivalent for any monoid~$M_\alpha$ (see Theorem \ref{thm:ffm}). In addition, we construct monoids $M_\alpha$ that are FFMs but not UFMs (see Subsection~\ref{sub:ffm not ufm}).
\smallskip
In Section~\ref{sec:factoriality}, we identify the monoids $M_\alpha$ that are UFMs. Following Zaks~\cite{aZ80}, we say that~$M$ is a half-factorial monoid (HFM) if $\mathsf{L}(x)$ is a singleton for every non-invertible $x \in M$. The property of being an HFM was first considered by Carlitz~\cite{lC60} in the context of algebraic number theory to characterize rings of integers with class number two. Following Chapman et al.~\cite{CCGS21}, we say that $M$ is a length-factorial monoid (LFM) if for every $x \in M$, no two distinct factorizations in $\mathsf{Z}(x)$ have the same length. Additionally, in Section~\ref{sec:factoriality}, we prove that the conditions of being a UFM, an HFM, and an LFM are equivalent for any monoid $M_\alpha$.
\smallskip
The classes of monoids satisfying the properties we have just defined are naturally nested, as indicated by the chain of implications in Diagram~\eqref{eq:atomic chain}. In Section~\ref{sec:factoriality}, we produce a diagram (Diagram~\eqref{eq:refined_chain}) specialized to the class of all monoids $M_\alpha$ that refines Diagram~\eqref{eq:atomic chain}.
\begin{equation} \label{eq:atomic chain}
\textbf{UFM} \ \Rightarrow \ [\textbf{FFM, HFM}] \ \Rightarrow \ \textbf{BFM} \ \Rightarrow \ \textbf{ACCP} \ \Rightarrow \ \textbf{atomicity}
\end{equation}
\smallskip
The elasticity of a monoid is an arithmetic statistic that measures how much a monoid deviates from being an HFM. The elasticity was first considered by Steffan~\cite{jlS86} and Valenza~\cite{rV90} back in the eighties to understand how far a Dedekind domain or a ring of integers, respectively, is from being a UFD. Since then the elasticity has become probably the most studied arithmetic invariant measuring the non-uniqueness of factorizations (see~\cite{qZ19} by Zhong, and the references therein). We conclude this paper by showing that $M_\alpha$ has infinite elasticity when it is not an HFM (see Proposition \ref{prop:elasticity}); that is, either $M_\alpha$ is an HFM or it is as far from being an HFM as a monoid can possibly be.
\bigskip
\section{Background}
\smallskip
\subsection{General Notation}
\label{sec:background_1}
\smallskip
We let $\mathbb{P}$, $\mathbb{N}$, and $\mathbb{N}_0$ denote the set of primes, positive integers, and nonnegative integers, respectively. If $X$ is a subset of $\mathbb{R}$ and $r$ is a real number, we let $X_{\ge r}$ denote the set $\{x \in X \mid x \ge r\}$. Similarly, we use the notations $X_{> r}, X_{\le r}$, and $X_{< r}$. For a positive rational $q$, the positive integers $a$ and $b$ with $q = a/b$ and $\gcd(a,b) = 1$ are denoted by $\mathsf{n}(q)$ and $\mathsf{d}(q)$, respectively.
\smallskip
Given a monic polynomial $f(x)\in\mathbb{Q}[x]$, let $\ell$ be the smallest positive integer such that $\ell \cdot f(x) \in \mathbb{Z}[x]$. Then there exist unique $p(x), q(x) \in \mathbb{N}_0[x]$ such that $\ell f(x) = p(x) - q(x)$ and that $p(x)$ and $q(x)$ share no monomials of the same degree (that is, the greatest common divisor of $p(x)$ and $q(x)$ in the free commutative monoid $(\mathbb{N}_0[x],+)$ is $0$). We call the pair $(p(x), q(x))$ the \emph{minimal pair} of $f(x)$. In addition, if $\alpha$ is a real algebraic number, the \emph{minimal pair of $\alpha$} is defined to be the minimal pair of its minimal polynomial over $\mathbb{Q}$.
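For concreteness, the minimal pair can be computed mechanically from the coefficients of $f(x)$: clear denominators with $\ell$, then split the resulting integer coefficients by sign. The following Python sketch does exactly this (the function name \texttt{minimal\_pair} is ours, introduced only for illustration; polynomials are given by their coefficient lists, constant term first).

```python
from fractions import Fraction
from math import lcm

def minimal_pair(coeffs):
    """Given the coefficients (constant term first) of a monic f(x) in Q[x],
    return (p, q): integer coefficient lists with l*f = p - q, where l is the
    least positive integer making l*f integral and p, q share no monomials."""
    coeffs = [Fraction(c) for c in coeffs]
    l = lcm(*(c.denominator for c in coeffs))  # smallest l with l*f in Z[x]
    ints = [int(l * c) for c in coeffs]
    p = [c if c > 0 else 0 for c in ints]      # positive part of l*f
    q = [-c if c < 0 else 0 for c in ints]     # negative part of l*f
    return p, q

# x^2 - 2/3 has l = 3, so its minimal pair is (3x^2, 2):
print(minimal_pair([Fraction(-2, 3), 0, 1]))  # ([0, 0, 3], [2, 0, 0])
```

For the polynomial $x^3 - 2x^2 + 3x - 7$ appearing later in Example~\ref{ex:non-atomic monoid M_alpha}, the same function returns the pair corresponding to $(x^3 + 3x,\, 2x^2 + 7)$.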
\medskip
\subsection{Monoids}
\smallskip
A \emph{monoid} is a cancellative and commutative semigroup with an identity element. Monoids here will be written additively, unless we say otherwise. Let $M$ be a monoid. An element $x \in M$ is called \emph{invertible} (or a \emph{unit}) if there exists $y \in M$ such that $x+y = 0$. We tacitly assume that $M$ (and every monoid we deal with here) is \emph{reduced}; that is, its only invertible element is $0$. We set $M^\bullet = M \setminus \{0\}$. For a subset $S$ of $M$, we let $\langle S \rangle$ denote the submonoid of $M$ generated by $S$, i.e., the intersection of all submonoids of $M$ containing $S$. We say that a monoid is \emph{finitely generated} if it can be generated by a finite set. A nonzero element $a \in M$ is called an \emph{atom} if whenever $a = x+y$ for some $x,y \in M$ either $x = 0$ or $y = 0$. It is customary to let $\mathcal{A}(M)$ denote the set consisting of all atoms of $M$, and we do so. If $\mathcal{A}(M)$ is empty, $M$ is said to be \emph{antimatter}. The monoids of primary interest in this paper are atomic.
\begin{definition}
An (additive) monoid is \emph{atomic} if every nonzero element can be written as a sum of atoms.
\end{definition}
If $I$ is a subset of $M$, then $I$ is called an \emph{ideal} provided that $I + M = I$ (or, equivalently, $I + M \subseteq I$). Every subset of $M$ of the form $x + M$, where $x \in M$, is an ideal and is called a \emph{principal} ideal. The monoid $M$ satisfies the \emph{ascending chain condition on principal ideals} (\emph{ACCP}) if every increasing sequence (under inclusion) of principal ideals of $M$ becomes stationary from one point on. It is well known that every monoid satisfying the ACCP is atomic (see \cite[Proposition~1.1.4]{GH06}). The converse does not hold: for instance, the additive submonoid $\langle (\frac{2}{3})^n \mid n \in \mathbb{N} \rangle$ of $\mathbb{Q}$ is an atomic monoid that does not satisfy the ACCP \cite[Corollary~4.4]{CGG21}.
\smallskip
\subsection{Factorizations} Assume now that $M$ is atomic. Let $\mathsf{Z}(M)$ denote the free (commutative) monoid on the set $\mathcal{A}(M)$. For each $x \in M$, we let $\mathsf{Z}(x)$ denote the set of all formal sums $z := a_1 + \cdots + a_\ell \in \mathsf{Z}(M)$ with $a_1, \dots, a_\ell \in \mathcal{A}(M)$ such that $a_1 + \dots + a_\ell = x$ in $M$. In this case, $\ell$ is called the \emph{length} of $z$ and is denoted by~$|z|$. For each $x \in M$, we set $\mathsf{L}(x) := \{ |z| \mid z \in \mathsf{Z}(x) \}$. The sets $\mathsf{Z}(x)$ and $\mathsf{L}(x)$ play an important role in factorization theory (see~\cite{aG16}). Note that~$M$ is atomic if and only if $\mathsf{Z}(x)$ is nonempty for all $x \in M^\bullet$.
\smallskip
The monoid $M$ is called a \emph{bounded factorization monoid} (BFM) if $\mathsf{L}(x)$ is finite for all $x \in M$. Every BFM satisfies the ACCP \cite[Corollary~1.3.3]{GH06}, but the converse does not hold: $\langle 1/p \mid p \in \mathbb{P} \rangle$ satisfies the ACCP but is not a BFM \cite[Corollary~4.6]{CGG21}. The monoid $M$ is called a \emph{half-factorial monoid} (HFM) if $|\mathsf{L}(x)| = 1$ for all $x \in M^\bullet$. Observe that every HFM is a BFM. The monoid $M$ is called a \emph{finite factorization monoid} (FFM) if $\mathsf{Z}(x)$ is finite for all $x \in M^\bullet$. Every finitely generated monoid is an FFM \cite[Corollary~3.7]{AG21}. Note that every FFM is a BFM; however, $\{0\} \cup \mathbb{Q}_{\ge 1}$ is a BFM that is not an FFM \cite[Example~4.10]{CGG21}. In addition, one can see that $\langle 2,3 \rangle$ is an FFM that is not an HFM. On the other hand, there are HFMs that are not FFMs; this is the case of the additive monoid $\{(0,0)\} \cup (\mathbb{Z} \times \mathbb{N})$ (see \cite[Example~3.9]{AG21}). Finally,~$M$ is called a \emph{unique factorization monoid} (UFM) provided that $|\mathsf{Z}(x)| = 1$ for all $x \in M^\bullet$. Every UFM is, by definition, both an HFM and an FFM. Hence each implication in Diagram~\eqref{eq:atomic chain} holds, and the diagram does not support, in general, any additional implication.
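To make the example $\langle 2,3 \rangle$ concrete, one can enumerate factorizations by brute force. The sketch below (our illustration, not part of the source material) lists all factorizations of an element of the numerical monoid $\langle 2,3 \rangle$ together with their lengths, confirming that this monoid is an FFM but not an HFM.

```python
def factorizations(x, atoms=(2, 3)):
    """All factorizations of x in the numerical monoid <2, 3>, encoded as
    pairs (multiplicity of the atom 2, multiplicity of the atom 3)."""
    a, b = atoms
    return [(i, j) for i in range(x // a + 1) for j in range(x // b + 1)
            if a * i + b * j == x]

# 12 = 2*6 = 2*3 + 3*2 = 3*4: three factorizations with three distinct lengths.
zs = factorizations(12)
lengths = {i + j for (i, j) in zs}
print(zs, lengths)  # [(0, 4), (3, 2), (6, 0)] {4, 5, 6}
```

Every element has finitely many factorizations (an FFM), yet $\mathsf{L}(12) = \{4,5,6\}$ is not a singleton, so $\langle 2,3 \rangle$ is not an HFM.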
\bigskip
\section{Atomicity}
\label{sec:atomicity}
In this section, we study the atomicity of the additive monoids $M_\alpha$, where $M_\alpha =\mathbb{N}_0[\alpha,\alpha^{-1}]$ for $\alpha\in\mathbb{R}_{>0}$. We characterize the monoids $M_\alpha$ that are atomic, and then we give examples of monoids $M_\alpha$ that are atomic but do not satisfy the ACCP. The next theorem, which gives a simple characterization of the monoids $M_\alpha$ that are atomic, also provides an explicit description of the set of atoms of $M_\alpha$. Moreover, it gives a necessary condition for the atomicity of $M_\alpha$ when $\alpha$ is algebraic. For any algebraic number $\alpha$ with minimal polynomial $m(x)\in\mathbb{Q}[x]$, the polynomial $\ell \cdot m(x)$ is a primitive polynomial in $\mathbb{Z}[x]$ for a unique $\ell \in \mathbb{N}$, so $\ell\cdot m(x)=p(x)-q(x)$ for unique $p(x),q(x)\in\mathbb{N}_0[x]$ that do not share monomials of equal degrees. We call $(p(x),q(x))$ the minimal pair of $\alpha$ (see Section \ref{sec:background_1}).
\begin{theorem}\label{thm:1atomic}
For each $\alpha \in \mathbb{R}_{>0},$ the following statements are equivalent.
\begin{enumerate}
\item[(a)] $1 \in \mathcal{A}(M_\alpha)$.
\smallskip
\item[(b)] $\mathcal{A}(M_\alpha) = \{\alpha^n \mid n \in \mathbb{Z}\}$.
\smallskip
\item[(c)] $M_\alpha$ is atomic.
\end{enumerate}
Suppose that $\alpha \in \mathbb{R}_{>0} \setminus \{1\}$ is an algebraic number. If $M_\alpha$ is atomic, then neither of the two components in the minimal pair of $\alpha$ is a monic monomial.
\end{theorem}
\begin{proof}
(a) $\Rightarrow$ (b): First note that every atom of $M_\alpha$ must be of the form $\alpha^n$ for some $n \in \mathbb{Z}$: every element of $M_\alpha^\bullet$ is a sum of integer powers of $\alpha$, and any such sum involving at least two terms is not an atom. Hence $\mathcal{A}(M_\alpha) \subseteq \{\alpha^n \mid n \in \mathbb{Z}\}$. Now suppose that $\mathcal{A}(M_\alpha) \neq \{\alpha^n \mid n \in \mathbb{Z}\}$. Then there exists $n \in \mathbb{Z}$ such that $\alpha^n \not\in \mathcal{A}(M_\alpha)$ and, therefore, there exist a finite set $S \subset \mathbb{Z}$ and coefficients $c_i \in \mathbb{N}$ for each $i \in S$ such that $\alpha^n = \sum_{i \in S} c_i\alpha^i$ and $\sum_{i\in S} c_i \geq 2$. Dividing by $\alpha^n$ gives $1 = \sum_{i\in S} c_i\alpha^{i-n}.$ Thus, $1 \not\in \mathcal{A}(M_\alpha),$ as desired.
\smallskip
(b) $\Rightarrow$ (c): This holds by the definition of $M_\alpha$.
\smallskip
(c) $\Rightarrow$ (a): Suppose $1\not\in \mathcal{A}(M_\alpha).$ Then there exist a finite set $S \subset \mathbb{Z}$ and coefficients $c_i \in \mathbb{N}$ for each $i \in S$ such that $\sum_{i\in S} c_i \geq 2$ and $1 = \sum_{i \in S} c_i\alpha^i.$ For each $k \in \mathbb{Z}$, we can multiply both sides of $1 = \sum_{i \in S} c_i\alpha^i$ by $\alpha^k$ to obtain the equality $\alpha^k = \sum_{i\in S} c_i\alpha^{i+k}.$ Thus, $\alpha^k$ is not an atom for any $k \in \mathbb{Z}$, which implies that $M_\alpha$ has no atoms and, therefore, that it is not atomic.
\smallskip
Assume now that $\alpha$ is a positive algebraic real number such that $\alpha \neq 1$. Let $m(x)$ and $(p(x), q(x))$ be the minimal polynomial and the minimal pair of $\alpha$, respectively. Suppose, by way of contradiction, that either $p(x)$ or $q(x)$ is a monic monomial. Without loss of generality, assume that $q(x) = x^n$ for some $n \in \mathbb{N}_0$. Thus, $p(\alpha) - \alpha^n = p(\alpha) - q(\alpha) = \ell m(\alpha) = 0$ for some $\ell \in \mathbb{N},$ so $p(\alpha) = \alpha^n$. Because $\alpha \neq 1$, we see that $p(x)$ must be the sum of at least two nonzero monomials (not necessarily distinct). Consequently, $\alpha^n \notin \mathcal{A}(M_\alpha)$. Therefore, $M_\alpha$ is not atomic in light of the characterizations established above, which yields the desired contradiction.
\end{proof}
It is worth mentioning that, as a direct consequence of Theorem~\ref{thm:1atomic}, one obtains that every monoid $M_\alpha$ satisfies $|\mathcal{A}(M_\alpha)| \in \{0, \infty\}$ and also that $M_\alpha$ is either atomic or antimatter. In addition, when $\alpha$ is transcendental, $M_\alpha$ is atomic, as the next corollary shows.
\begin{cor}
If $\alpha \in \mathbb{R}_{> 0}$ is transcendental, then $M_\alpha$ is atomic.
\end{cor}
\begin{proof}
Suppose that $1 = \sum_{i \in S} c_i \alpha^i$ for a finite set $S \subseteq \mathbb{Z}$ and coefficients $c_i \in \mathbb{N}_0$ for every $i \in S$. Then $\alpha$ would be a root of the polynomial $f(x) := x^m - \sum_{i \in S} c_i x^{i+m} \in \mathbb{Z}[x]$, where $m = -\min (\{0\} \cup S)$. Since $\alpha$ is transcendental, $f(x)$ is the zero polynomial and, therefore, $S = \{0\}$ and $c_0 = 1$. Hence $1 \in \mathcal{A}(M_\alpha)$, and $M_\alpha$ is atomic by Theorem~\ref{thm:1atomic}.
\end{proof}
It is worth emphasizing that the necessary condition in Theorem~\ref{thm:1atomic} is not sufficient; this is illustrated in the following example.
\begin{example} \label{ex:non-atomic monoid M_alpha}
Consider the monic polynomial $m(x) = x^3 - 2x^2 + 3x - 7$. Because $m(x)$ is a cubic with no integer roots, it follows from Gauss's lemma that $m(x)$ is irreducible in $\mathbb{Q}[x]$. On the other hand, since $m(2) = -1$ and $m(3) = 11$, the polynomial $m(x)$ has a positive root $\alpha$ in the interval $(2,3)$. Consider the monoid $M_\alpha$. As $m(x)(x + 2) = x^4 - x^2 - x - 14$, we see that $\alpha$ is a root of the polynomial $x^4 - x^2 - x - 14$, so $\alpha^4 = \alpha^2 + \alpha + 14$. Hence $\alpha$ is not an atom of $M_\alpha$, and it follows from the characterization in Theorem~\ref{thm:1atomic} that $M_\alpha$ is not atomic. However, neither of the polynomials in the minimal pair $(x^3 + 3x, 2x^2 + 7)$ of $\alpha$ is a monic monomial. Therefore we conclude that the necessary condition in Theorem~\ref{thm:1atomic} is not sufficient.
\end{example}
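The computations in Example~\ref{ex:non-atomic monoid M_alpha} are easy to confirm numerically. The following sketch (ours, for illustration only) locates the root $\alpha \in (2,3)$ by bisection and checks the relation $\alpha^4 = \alpha^2 + \alpha + 14$, which, after dividing through by $\alpha^3$, exhibits $\alpha$ as a sum of at least two nonzero elements of $M_\alpha$.

```python
def bisect_root(f, lo, hi, iters=80):
    """Locate a root of f in [lo, hi] by bisection,
    assuming f(lo) and f(hi) have opposite signs."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

m = lambda x: x**3 - 2*x**2 + 3*x - 7
alpha = bisect_root(m, 2, 3)   # valid since m(2) = -1 < 0 < 11 = m(3)

# The residual of alpha^4 = alpha^2 + alpha + 14 is zero up to rounding,
# so alpha is not an atom of M_alpha.
print(alpha, alpha**4 - (alpha**2 + alpha + 14))
```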
If $\alpha = 1$, then $M_\alpha = \mathbb{N}_0$, which is atomic. On the other hand, if $\alpha \in \mathbb{N}_{\ge 2}$ (or if $\alpha = 1/n$ for some $n \in \mathbb{N}_{\ge 2}$), then $1$ is the sum of $\alpha$ copies of $\alpha^{-1}$ (resp., $\alpha^{-1}$ copies of $\alpha$) and, therefore, $1 \notin \mathcal{A}(M_\alpha)$, so Theorem~\ref{thm:1atomic} ensures that $M_\alpha$ is not atomic. In addition, we have exhibited in Example~\ref{ex:non-atomic monoid M_alpha} a monoid $M_\alpha$ that is not atomic for some $\alpha \in \mathbb{R}_{> 0} \setminus \mathbb{Q}$. We now characterize the monoids $M_\alpha$ that are not atomic.
\begin{prop} \label{prop:non-atomic characterization}
For $\alpha \in \mathbb{R}_{> 0}$ with $\alpha \neq 1$, the following statements are equivalent.
\begin{enumerate}
\item[(a)] $M_\alpha$ is not atomic.
\smallskip
\item[(b)] $(\mathbb{N}_0[\alpha],+)$ is antimatter or finitely generated.
\end{enumerate}
\end{prop}
\begin{proof}
(a) $\Rightarrow$ (b): Suppose that $M_\alpha$ is not atomic. Then $\alpha$ is algebraic as otherwise $M_\alpha$ would be a free commutative monoid, which is atomic. We consider the following two cases.
\smallskip
\textit{Case 1:} $\alpha < 1$. Since $M_\alpha$ is not atomic, $1 \notin \mathcal{A}(M_\alpha)$ by Theorem~\ref{thm:1atomic}, so we can write $1 = \sum_{i=1}^n c_i \alpha^i$ for some $n \in \mathbb{N}$ and $c_1, \dots, c_n \in \mathbb{N}_0$ (here, we use that $\alpha < 1$). Then $1$ is not an atom of the additive monoid $\mathbb{N}_0[\alpha]$, and it follows from \cite[Theorem~4.1]{CG20} that $\mathbb{N}_0[\alpha]$ is antimatter.
\smallskip
\textit{Case 2:} $\alpha > 1$. Since $0$ is not a limit point of $\mathbb{N}_0[\alpha]^\bullet$ (because $\alpha > 1$), it follows from \cite[Proposition~4.5]{fG19} that $\mathbb{N}_0[\alpha]$ is atomic. As in the case already considered, the fact that $M_\alpha$ is not atomic allows us to write $1 = \sum_{i=1}^n c_i \alpha^{-i}$ for some $n \in \mathbb{N}$ and $c_1, \dots, c_n \in \mathbb{N}_0$ (here, we use that $\alpha > 1$). Therefore $\alpha^n = \sum_{i=1}^n c_i \alpha^{n-i}$. As $\mathbb{N}_0[\alpha]$ is an atomic monoid, the inclusion $\mathcal{A}(\mathbb{N}_0[\alpha]) \subseteq \{\alpha^k \mid k \in \{0, \dots, n-1 \} \}$ holds by \cite[Theorem~4.1]{CG20}. Thus, $\mathbb{N}_0[\alpha]$ is finitely generated.
\smallskip
(b) $\Rightarrow$ (a): Note that $\alpha$ is algebraic, for otherwise, $\mathbb{N}_0[\alpha]$ would be a free commutative monoid on a countable basis, which is neither antimatter nor finitely generated. Suppose first that the additive monoid $\mathbb{N}_0[\alpha]$ is antimatter. Since the set $\{\alpha^n \mid n \in \mathbb{N}_0\}$ generates $\mathbb{N}_0[\alpha]$, the equality $1 = \sum_{i=1}^k c_i \alpha^i$ holds for some $k \in \mathbb{N}$ and $c_1, \dots, c_k \in \mathbb{N}_0$. Hence $1 \notin \mathcal{A}(M_\alpha)$, and so $M_\alpha$ is not atomic by Theorem~\ref{thm:1atomic}.
\smallskip
Finally, suppose that the additive monoid $\mathbb{N}_0[\alpha]$ is finitely generated. Then $\mathbb{N}_0[\alpha]$ is atomic by \cite[Proposition~2.7.8]{GH06}, and it follows from \cite[Theorem~4.1]{CG20} that
\[
\mathcal{A}(\mathbb{N}_0[\alpha]) = \{\alpha^k \mid k \in \{ 0, \dots, n-1 \} \}
\]
for some $n \in \mathbb{N}$. Since $\alpha \neq 1$, we see that $n \ge 2$. Then $\alpha^n = \sum_{k=0}^{n-1} c_k \alpha^k$ for some $c_0, \dots, c_{n-1} \in \mathbb{N}_0$, which means that $1 = \sum_{k=0}^{n-1} c_k \alpha^{k-n}$. Hence $1 \notin \mathcal{A}(M_\alpha)$, and it follows from Theorem~\ref{thm:1atomic} that $M_\alpha$ is not atomic.
\end{proof}
We conclude this section with examples of monoids $M_\alpha$ that are atomic but do not satisfy the ACCP.
\begin{example} \label{ex:atomic monoids without the ACCP}
Take $a,b \in \mathbb{N}$ with $\gcd(a,b) = 1$ such that $1 < a < b$, and set $\alpha = a/b$. It follows from \cite[Proposition~3.5]{fG18} that the monoid $M_\alpha$ is atomic. On the other hand, we claim that $M_\alpha$ does not satisfy the ACCP. By \cite[Corollary 4.4]{CGG21}, there is an ascending chain $(x_n + \mathbb{N}_0[\alpha])_{n \in \mathbb{N}}$ of principal ideals of the monoid $(\mathbb{N}_0[\alpha], +)$ that does not stabilize. From the fact that $M_\alpha$ is a reduced monoid having $(\mathbb{N}_0[\alpha],+)$ as a submonoid, we can deduce that the chain of principal ideals $(x_n + M_\alpha)_{n \in \mathbb{N}}$ of $M_\alpha$ cannot stabilize in $M_\alpha,$ showing that $M_\alpha$ does not satisfy the ACCP.
\end{example}
\bigskip
\section{The Ascending Chain Condition on Principal Ideals}
\label{sec:ACCP}
We have just seen in the previous section that satisfying the ACCP is a stronger condition than being atomic when restricted to the class consisting of the monoids $M_\alpha$. In this section, we provide a necessary condition for a monoid $M_\alpha$ to satisfy the ACCP, and then we establish two factorization-theoretic characterizations: satisfying the ACCP is equivalent to both the bounded factorization property and the finite factorization property if one restricts attention to the class consisting of all monoids $M_\alpha$. We conclude this section by constructing monoids $M_\alpha$ satisfying the ACCP that are not UFMs.
\begin{prop} \label{prop:accp}
Let $\alpha \in (0,1)$ be an algebraic number with minimal pair $(p(x),q(x))$. If $M_\alpha$ satisfies the ACCP, then $p(x) - Q(x)q(x) \notin \mathbb{N}_0[x,x^{-1}]$ for any nonzero Laurent polynomial $Q(x) \in \mathbb{N}_0[x,x^{-1}]$.
\end{prop}
\begin{proof}
Suppose, for the sake of contradiction, that there exists a nonzero Laurent polynomial $Q(x) \in \mathbb{N}_0[x,x^{-1}]$ such that $r(x) := p(x) - Q(x)q(x) \in \mathbb{N}_0[x,x^{-1}]$. Now consider the sequences $(a_n)_{n \in \mathbb{N}}$ and $(b_n)_{n \in \mathbb{N}}$ defined by
\[
a_n := Q(\alpha)^n q(\alpha) \quad \text{ and } \quad b_n := Q(\alpha)^n r(\alpha),
\]
respectively, for every $n \in \mathbb{N}$. Observe that the terms of both $(a_n)_{n \in \mathbb{N}}$ and $(b_n)_{n \in \mathbb{N}}$ are nonzero elements in $M_\alpha$. On the other hand,
\[
a_n = Q(\alpha)^n q(\alpha) = Q(\alpha)^n p(\alpha) = Q(\alpha)^{n+1} q(\alpha) + Q(\alpha)^{n} r(\alpha) = a_{n+1} + b_{n}
\]
for every $n \in \mathbb{N}$. Therefore $(a_n + M_\alpha)_{n \in \mathbb{N}}$ is an ascending chain of principal ideals of $M_\alpha$. Since $a_n - a_{n+1} = b_n > 0$ for every $n \in \mathbb{N}$, the chain of ideals $(a_n + M_\alpha)_{n \in \mathbb{N}}$ does not stabilize, contradicting that $M_\alpha$ satisfies the ACCP.
\end{proof}
In Example~\ref{ex:atomic monoids without the ACCP}, we saw that $M_\alpha$ is an atomic monoid that does not satisfy the ACCP for most choices of $\alpha \in \mathbb{Q}_{> 0}$. However, there are also irrational algebraic real numbers $\alpha$ such that $M_\alpha$ is atomic but does not satisfy the ACCP, and we can identify some of them using Proposition~\ref{prop:accp}.
\begin{example} \label{ex:quadratic atomic monoids without the ACCP}
Take $a,b \in \mathbb{N}$ such that $\gcd(a,b) = 1$ and $1 < a < b$. Assume, in addition, that $a$ and $b$ are not perfect squares, and then set $\alpha := \sqrt{a/b}$. Then $\alpha$ is a non-rational algebraic number with minimal polynomial $m(x) := x^2 - a/b$. Suppose, by way of contradiction, that $M_\alpha$ is not atomic. By Theorem~\ref{thm:1atomic}, we can take $c_1, \dots, c_n \in \mathbb{N}_0$ such that $1 = c_1 \alpha + \cdots + c_n \alpha^n$. Since $\alpha$ is a root of the polynomial $f(x) := c_n x^n + \cdots + c_1 x - 1 \in \mathbb{Z}[x]$, there exists a polynomial $g(x) \in \mathbb{Q}[x]$ such that $f(x) = m(x)g(x)$. By Gauss's lemma, there exists $q \in \mathbb{Q}_{> 0}$ such that $m'(x) := qm(x) \in \mathbb{Z}[x]$ and $g'(x) := q^{-1}g(x) \in \mathbb{Z}[x]$. Since $qm(x)$ has integer coefficients, $q \in b\mathbb{N}$. Therefore $a \mid m'(0)$, so $a \mid m'(0) g'(0) = f(0) = -1$, which is a contradiction because $a > 1$. Thus, $M_\alpha$ is atomic. Let us argue now that $M_\alpha$ does not satisfy the ACCP. Since $\alpha$ has minimal pair $(p(x), q(x)) := (b x^2,a)$, for $Q(x) := x^2$ we see that $p(x) - Q(x) q(x) = (b-a)x^2$, which belongs to $\mathbb{N}_0[x,x^{-1}]$. Hence $M_\alpha$ does not satisfy the necessary condition in Proposition~\ref{prop:accp}, and so it does not satisfy the ACCP.
\end{example}
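The ascending chain produced in the proof of Proposition~\ref{prop:accp} can be observed numerically for the instance $a = 2$, $b = 3$ of Example~\ref{ex:quadratic atomic monoids without the ACCP}. The sketch below (our illustration; the variable names follow the proof) checks that $a_n = a_{n+1} + b_n$ with every $b_n > 0$, so the chain $(a_n + M_\alpha)_{n \in \mathbb{N}}$ of principal ideals is strictly ascending and never stabilizes.

```python
from math import sqrt, isclose

a, b = 2, 3                       # gcd(a, b) = 1, 1 < a < b, neither a square
alpha = sqrt(a / b)               # minimal pair (p, q) = (b x^2, a)
p = lambda x: b * x**2
q = lambda x: a
Q = lambda x: x**2                # the witness Laurent polynomial from the example
r = lambda x: p(x) - Q(x) * q(x)  # (b - a) x^2, with coefficients in N_0

# The sequences a_n = Q(alpha)^n q(alpha) and b_n = Q(alpha)^n r(alpha)
# from the proof of the ACCP proposition:
a_n = [Q(alpha)**n * q(alpha) for n in range(8)]
b_n = [Q(alpha)**n * r(alpha) for n in range(8)]
for n in range(7):
    assert isclose(a_n[n], a_n[n + 1] + b_n[n])  # a_n = a_{n+1} + b_n
    assert a_n[n] > a_n[n + 1] > 0               # strictly decreasing, all positive
print(a_n[:4])
```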
\medskip
\subsection{The Bounded and Finite Factorization Properties}
In this subsection we prove that in the context of the monoids $M_{\alpha}$, satisfying the ACCP, being a BFM, and being an FFM are equivalent properties. Recall that a monoid $M$ is a BFM (resp., an FFM) provided that $\mathsf{L}(x)$ (resp., $\mathsf{Z}(x)$) is finite for all $x \in M^\bullet$. We proceed to establish the main result of this section.
\begin{theorem}\label{thm:ffm}
For $\alpha \in \mathbb{R}_{> 0},$ the following statements are equivalent.
\begin{enumerate}
\item[(a)] $M_\alpha$ is an FFM.
\smallskip
\item[(b)] $M_\alpha$ is a BFM.
\smallskip
\item[(c)] $M_\alpha$ satisfies the ACCP.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) $\Rightarrow$ (b): This follows from the definitions of a BFM and an FFM.
\smallskip
(b) $\Rightarrow$ (c): This is a special case of \cite[Corollary 1.3.3]{GH06}.
\smallskip
(c) $\Rightarrow$ (a): Suppose that the monoid $M_\alpha$ satisfies the ACCP. If $\alpha$ is transcendental, then $M_\alpha$ is a free commutative monoid, and thus an FFM. We assume, therefore, that $\alpha$ is algebraic.
\smallskip
Suppose, by way of contradiction, that $M_\alpha$ is not an FFM. Then $\alpha \neq 1$ and, after replacing $\alpha$ by $\alpha^{-1}$ if necessary, we can assume that $\alpha > 1$. Since $M_\alpha$ is not an FFM, we can choose $\beta \in M_\alpha$ such that $|\mathsf{Z}_{M_\alpha}(\beta)| = \infty$. Because $\alpha > 1$, there exists $N \in \mathbb{N}$ such that $\alpha^n \nmid_{M_\alpha} \beta$ for any $n \in \mathbb{Z}$ with $n > N$. As $M_\alpha$ is atomic, $\mathcal{A}(M_\alpha) = \{\alpha^n \mid n \in \mathbb{Z}\}$ by Theorem~\ref{thm:1atomic}. Consequently, there is a bijection $\mathsf{Z}_{M_\alpha}(\beta) \to \mathsf{Z}_{M_\alpha}(\beta/\alpha^N)$ given by multiplication by $\alpha^{-N}$. In addition, $\beta/\alpha^N$ is not divisible by any positive power of $\alpha$ in $M_\alpha$. Then after replacing $\beta$ by $\beta/\alpha^N,$ we can further assume that $\alpha^k \mid_{M_\alpha} \beta$ implies that $k \le 0$. Since $M_{\alpha^{-1}} = M_\alpha$ is atomic, it follows from Proposition~\ref{prop:non-atomic characterization} that the additive monoid $\mathbb{N}_0[\alpha^{-1}]$ is neither antimatter nor finitely generated. Hence, \cite[Theorem~4.1]{CG20} guarantees that $\mathcal{A}(\mathbb{N}_0[\alpha^{-1}]) = \{\alpha^{-k} \mid k \in \mathbb{N}_0\}$. As a result, the fact that $\alpha^k \nmid_{M_\alpha} \beta$ for any $k \in \mathbb{N}$ ensures that $\mathsf{Z}_{M_\alpha}(\beta) = \mathsf{Z}_{\mathbb{N}_0[\alpha^{-1}]}(\beta)$ and, therefore, that $|\mathsf{Z}_{\mathbb{N}_0[\alpha^{-1}]}(\beta)| = \infty$. Thus, $\mathbb{N}_0[\alpha^{-1}]$ is not an FFM. Now it follows from \cite[Theorem~4.11]{CG20} that $\mathbb{N}_0[\alpha^{-1}]$ does not satisfy the ACCP. However, this is a contradiction to the fact that $\mathbb{N}_0[\alpha^{-1}]$ is a submonoid of the reduced monoid $M_\alpha$, which satisfies the ACCP. Hence, $M_\alpha$ is an FFM.
\end{proof}
\medskip
\subsection{A Class of FFMs that are not UFMs}
\label{sub:ffm not ufm}
We have exhibited in Examples~\ref{ex:atomic monoids without the ACCP} and~\ref{ex:quadratic atomic monoids without the ACCP} some atomic monoids $M_\alpha$ that do not satisfy the ACCP. However, the only examples we have so far of monoids $M_\alpha$ satisfying the ACCP (or, equivalently, being FFMs) are the trivial cases, namely, those corresponding to $\alpha = 1$ and $\alpha$ transcendental. Our purpose in this subsection is to produce monoids $M_{\alpha}$ that are FFMs for some algebraic~$\alpha$ different from $1$. This will yield monoids $M_\alpha$ that are FFMs but not UFMs.
\smallskip
To do so, let $\alpha_1, \alpha_2 \in \mathbb{R}$ be distinct roots of an irreducible quadratic polynomial in $\mathbb{Q}[x]$, and set $M := M_{\alpha_1}$ and $K := \mathbb{Q}(\alpha_1)$. Then $K$ is a real quadratic field extension of~$\mathbb{Q}$ that contains the monoid $M$. In addition, let $T \colon \mathbb{Q}(\alpha_1) \to \mathbb{R}^2$ be the $\mathbb{Q}$-linear map induced by the assignments $1 \mapsto (1,1)$ and $\alpha_1 \mapsto (\alpha_1,\alpha_2)$, and set $M' = T(M)$. Let $T_M \colon M \to M'$ be the map obtained by restricting the domain and codomain of~$T$ to $M$ and $M'$, respectively. We use the notation introduced in this paragraph throughout the rest of this section.
\begin{lemma} \label{lem:monoid M'}
The following statements hold.
\begin{enumerate}
\item $T$ is an injective $\mathbb{Q}$-algebra homomorphism.
\smallskip
\item $T_M$ is a monoid isomorphism.
\smallskip
\item $M' = \big\{ \sum_{i \in I} c_i (\alpha_1^i, \alpha_2^i) \mid c_i \in \mathbb{N}_0, I \subseteq \mathbb{Z}, |I| < \infty \big\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Since $T$ is a $\mathbb{Q}$-linear map, the equalities $T(x+y) = T(x)+T(y)$ and $T(qx) = q T(x)$ hold for all $x,y \in \mathbb{Q}(\alpha_1)$ and $q \in \mathbb{Q}$. For each $i \in \{1,2\}$, we let~$\sigma_i$ denote the $\mathbb{Q}$-algebra homomorphism $\mathbb{Q}(\alpha_1) \to \mathbb{R}$ induced by the assignment $\alpha_1 \mapsto \alpha_i$. Then for each $x \in \mathbb{Q}(\alpha_1)$, we can verify that $T(x) = (\sigma_1(x),\sigma_2(x)).$ Therefore, for all $x, y \in \mathbb{Q}(\alpha_1)$,
\[
T(xy) = (\sigma_1(xy), \sigma_2(xy)) = (\sigma_1(x)\sigma_1(y), \sigma_2(x)\sigma_2(y)) = T(x) T(y).
\]
Hence $T$ is a $\mathbb{Q}$-algebra homomorphism. Note that $T(\alpha_1^{-1}) = T(\alpha_1)^{-1} = (\alpha_1^{-1},\alpha_2^{-1})$. Now if $x \in \ker T$, then $(\sigma_1(x), \sigma_2(x)) = (0,0)$, and so the fact that $\sigma_1$ is the inclusion map ensures that $x=0$. Thus, $T$ is an injective $\mathbb{Q}$-algebra homomorphism.
\smallskip
(2) Since $T$ is injective, it is also injective when restricted to $M \subseteq \mathbb{Q}(\alpha_1)$. Moreover, because $M'$ is the image of $M$ under $T$, the map $T_M \colon M \to M'$ is a bijection. In addition, the linearity of $T$ over $\mathbb{Q}$ immediately implies that $T_M$ is a monoid homomorphism, making it a monoid isomorphism from $M$ onto $M'$.
\smallskip
(3) Finally, let $x$ be an arbitrary element in $M$. Then $x = \sum_{i \in I} c_i\alpha_1^i$ for a finite subset $I$ of $\mathbb{Z}$ and coefficients $c_i \in \mathbb{N}_0$. Because $T$ is a $\mathbb{Q}$-algebra homomorphism by part~(1), we see that
\[
T(x) = \sum_{i \in I} c_i T(\alpha_1)^i = \sum_{i \in I} c_i (\alpha_1,\alpha_2)^i= \sum_{i \in I} c_i (\alpha_1^i, \alpha_2^i).
\]
Therefore $M' \subseteq \big\{ \sum_{i \in I} c_i (\alpha_1^i, \alpha_2^i) \mid c_i \in \mathbb{N}_0, I \subseteq \mathbb{Z}, |I| < \infty \big\}$. The reverse inclusion follows immediately as $T$ is a $\mathbb{Q}$-algebra homomorphism and $M$ is a monoid containing $\{\alpha_1^i \mid i \in \mathbb{Z}\}$.
\end{proof}
In order to establish the main result of this section, we need the following two lemmas.
\begin{lemma} \label{lem:decreasing}
Let $M$ be an additive submonoid of $\mathbb{R}^2_{\ge 0}$. If $v, w \in M$ with $v = (v_1, v_2)$ satisfy $v + M \subseteq w + M$, then $w \in [0,v_1] \times [0,v_2]$.
\end{lemma}
\begin{proof}
Since $v + M \subseteq w + M$, we see that $w$ divides $v$ in $M$ and, therefore, we can write $v = w + d$, where $d = (d_1, d_2) \in M \subseteq \mathbb{R}_{\ge0}^2$. Then $w = (v_1 - d_1, v_2 - d_2)$ belongs to $[0, v_1] \times [0, v_2]$.
\end{proof}
For the rest of this section, we further assume that $0 < \alpha_1 < 1 < \alpha_2$. We observe that, in light of part~(3) of Lemma~\ref{lem:monoid M'}, the inclusion $M' \subseteq \{(0,0)\} \cup (\mathbb{R}_{> 0} \times \mathbb{R}_{> 0})$ holds.
\begin{lemma} \label{lem:finite}
If $(v_1,v_2) \in M'$, then the set $M' \cap ([0,v_1] \times [0, v_2])$ is finite.
\end{lemma}
\begin{proof}
Set $v := (v_1,v_2)$ and $S_v := M' \cap ([0,v_1] \times [0, v_2])$. If $v = (0,0)$, then $S_v$ is a singleton and thus finite. Now we assume that $v \neq (0,0)$. Note that since $\alpha_1^{-1} > 1$ and $\alpha_2 > 1,$ the sequences $(\alpha_1^{-n})_{n \in \mathbb{N}_0}$ and $(\alpha_2^n)_{n \in \mathbb{N}_0}$ both increase to infinity and, as a result, the nonempty set
\begin{equation*}
N := \{ n \in \mathbb{Z} \mid \alpha_1^n \leq v_1 \text{ and } \alpha_2^n \leq v_2\}
\end{equation*}
is bounded. Set $m := \max\{|n| \mid n \in N\}$. Take a nonzero $s \in S_v$. Since $T$ is injective, there exists a unique $\alpha \in M$ such that $s = T(\alpha)$. Note that if $\alpha_1^n$ appears with a nonzero coefficient in a representation of $\alpha$, then $(\alpha_1^n, \alpha_2^n)$ is coordinatewise at most $s$, so $n \in N$ and, therefore, $|n| \le m$. Write
\begin{equation} \label{eq:representation of alpha}
\alpha = \sum_{i=0}^m q_i\alpha_1^{-i} + \sum_{i=0}^m p_i \alpha_1^i \in M^\bullet,
\end{equation}
where $q_0, \dots, q_m$ and $p_0, \dots, p_m$ are nonnegative integers. As a result, we see that
\begin{align*}
s &= \sum_{i=0}^{m} q_i T(\alpha_1^{-i}) + \sum_{i=0}^{m} p_i T(\alpha_1^i) \\
&= \sum_{i=0}^{m} q_i (\alpha_1^{-i}, \alpha_2^{-i}) + \sum_{i=0}^{m} p_i (\alpha_1^i,\alpha_2^i) \\
&= \bigg( \sum_{i=0}^{m}( q_i \alpha_1^{-i} + p_i \alpha_1^i), \, \sum_{i=0}^{m} (q_i \alpha_2^{-i} + p_i \alpha_2^i )\bigg).
\end{align*}
Because $\alpha_2 > 1$, after looking at the second coordinate of $s$, we infer that $p_i \le p_i \alpha_2^i \le v_2$ for every $i \in \{0, \dots, m\}$. Hence, there are at most $(\lfloor v_2 \rfloor + 1)^{m+1}$ possible $(m+1)$-tuples $(p_0, p_1, \dots, p_m)$ to choose for the respective coefficients of $\alpha_1^0, \dots, \alpha_1^m$ in a representation of $\alpha$ as in~\eqref{eq:representation of alpha}. Symmetrically, since $\alpha_1^{-1} > 1,$ there are finitely many possible $(m+1)$-tuples $(q_0, q_1, \dots, q_m)$ one can choose to express $\alpha$ as in~\eqref{eq:representation of alpha}. Consequently, the set $T_M^{-1}(S_v)$ is finite, which implies that $S_v$ is also finite.
\end{proof}
We are in a position to prove that $M$ is an FFM.
\begin{theorem} \label{thm:FFM evaluation monoids}
Suppose that $\alpha_1$ and $\alpha_2$ are the roots of an irreducible quadratic polynomial in $\mathbb{Q}[x]$ such that $0 < \alpha_1 < 1< \alpha_2$. Then $M_{\alpha_1}$ is an FFM and, therefore, satisfies the ACCP.
\end{theorem}
\begin{proof}
Define $T \colon \mathbb{Q}(\alpha_1) \to \mathbb{R}^2$ and $M'$ as before. Let $v = (v_1, v_2)$ be a nonzero element in $M'$. It follows from Lemma~\ref{lem:finite} that $S_v := M' \cap ([0, v_1] \times [0, v_2])$ is a finite set. On the other hand, it follows from Lemma~\ref{lem:decreasing} that every divisor of $v$ in $M'$ belongs to~$S_v$. Therefore, $v$ has only finitely many divisors in $M'$ and, as a result, $M'$ is an FFM by virtue of \cite[Theorem 2]{fHK92}. Since $M$ is isomorphic to $M'$ and being an FFM is an algebraic property, we conclude that $M$ is an FFM, whence it satisfies the ACCP.
\end{proof}
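Lemmas~\ref{lem:decreasing} and~\ref{lem:finite} can be visualized numerically. Taking $\alpha_1 = 1 - \frac{\sqrt 2}{2}$ and $\alpha_2 = 1 + \frac{\sqrt 2}{2}$ (the roots of $x^2 - 2x + \frac12$) and $v = T(1 + \alpha_1)$, the brute-force sketch below (our illustration; the search bounds are justified because a single generator with an exponent of absolute value at least $2$, or any negative exponent, already overshoots the box in one coordinate) finds every point of $M'$ in $[0, v_1] \times [0, v_2]$.

```python
from math import sqrt
from itertools import product

a1, a2 = 1 - sqrt(2) / 2, 1 + sqrt(2) / 2   # conjugate roots of x^2 - 2x + 1/2
v1, v2 = 1 + a1, 1 + a2                     # v = T(1 + alpha_1), an element of M'

# Enumerate sums of c_i copies of the generator (a1^i, a2^i) over a range of
# exponents and coefficients wide enough to capture every point of the box.
exps = range(-3, 4)
box = set()
for cs in product(range(6), repeat=len(exps)):
    x = sum(c * a1**i for c, i in zip(cs, exps))
    y = sum(c * a2**i for c, i in zip(cs, exps))
    if x <= v1 + 1e-9 and y <= v2 + 1e-9:   # inside [0, v1] x [0, v2]
        box.add((round(x, 9), round(y, 9)))

# Every divisor of v in M' lies in this finite set (Lemma lem:decreasing).
print(len(box))
```

Only four points of $M'$ land in the box, namely $(0,0)$, $(1,1)$, $(\alpha_1,\alpha_2)$, and $v$ itself, so $v$ has only finitely many divisors in $M'$, exactly as the proof of Theorem~\ref{thm:FFM evaluation monoids} requires.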
There are monoids $M_\alpha$ that are FFMs but not UFMs. The following example illustrates this observation.
\begin{example} \label{ex:FFM that is not a UFM}
Consider the polynomial $p(x) := x^2 - 2x + \frac{1}{2} \in \mathbb{Q}[x]$. Since the roots of $p(x)$ are the irrational numbers $\alpha := 1 - \frac{\sqrt{2}}2$ and $\beta := 1 + \frac{\sqrt{2}}2$, the polynomial $p(x)$ is irreducible in $\mathbb{Q}[x]$. In light of Theorem~\ref{thm:FFM evaluation monoids}, the chain of inequalities $0 < \alpha < 1 < \beta$ guarantees that the additive monoid $M_\alpha$ is an FFM. However, $M_\alpha$ is not a UFM: indeed, since $1,\alpha, \alpha^2 \in \mathcal{A}(M_\alpha)$ by Theorem~\ref{thm:1atomic}, the two sides of the equality $4\alpha = 2 \alpha^2 + 1$ yield distinct factorizations of the same element of $M_\alpha$ (see also Proposition~\ref{prop:HFM/UFM characterization} in the next section).
\end{example}
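As a quick numerical sanity check (a sketch, not part of the argument), the equality $4\alpha = 2\alpha^2 + 1$ underlying the two distinct factorizations can be verified in floating point:

```python
from math import isclose, sqrt

# alpha = 1 - sqrt(2)/2 is a root of x^2 - 2x + 1/2
alpha = 1 - sqrt(2) / 2

lhs = 4 * alpha           # factorization of length 4 (the atom alpha, four times)
rhs = 2 * alpha ** 2 + 1  # factorization of length 3 (alpha^2 twice, 1 once)
assert isclose(lhs, rhs)
print(lhs, rhs)
```

The two factorizations have lengths $4$ and $3$, so $M_\alpha$ is not even an HFM.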
\bigskip
\section{Factoriality and Elasticity}
\label{sec:factoriality}
In this last section, we characterize the monoids $M_\alpha$ that are half-factorial, and we briefly discuss the elasticity of $M_\alpha$. The elasticity is a factorization invariant that measures how far from being half-factorial a given monoid is.
\smallskip
\subsection{Half-Factoriality}
Recall that an atomic monoid $M$ is an HFM if $|\mathsf{L}(x)| = 1$ for every $x \in M$. In the class consisting of evaluation monoids of Laurent semirings, being an HFM and being a UFM are equivalent conditions. We determine such monoids in the following proposition.
\begin{prop} \label{prop:HFM/UFM characterization}
For $\alpha \in \mathbb{R}_{>0}$, the following statements are equivalent.
\begin{enumerate}
\item[(a)] $M_\alpha$ is a UFM.
\smallskip
\item[(b)] $M_\alpha$ is an HFM.
\smallskip
\item[(c)] $\alpha = 1$ or $\alpha$ is transcendental.
\end{enumerate}
\end{prop}
\begin{proof}
(a) $\Rightarrow$ (b): This follows by definition.
\smallskip
(b) $\Rightarrow$ (c): Suppose for the sake of contradiction that $\alpha$ is an algebraic number not equal to~$1$. Let $(p_\alpha(x), q_\alpha(x))$ be the minimal pair for $m_\alpha(x)$ over $\mathbb{Z}$. Because $M_\alpha$ is an HFM, it is an atomic monoid; thus, $\mathcal{A}(M_\alpha) = \{\alpha^n \mid n \in \mathbb{Z}\}$ by Theorem~\ref{thm:1atomic}. Hence, $z_p = p_\alpha(\alpha)$ and $z_q = q_\alpha(\alpha)$ are factorizations for the same element of $M_\alpha$. As $M_\alpha$ is an HFM, $p_\alpha(1) = |z_p| = |z_q| = q_\alpha(1)$, which implies that $1$ is a root of $m_\alpha(x)$. However, this contradicts the irreducibility of $m_\alpha(x)$.
\smallskip
(c) $\Rightarrow$ (a): If $\alpha=1$, then $M_\alpha = \mathbb{N}_0$; hence, it is a UFM. On the other hand, suppose that $\alpha$ is transcendental. Then any equality of the form $1 = \sum_{n \in \mathbb{Z}} c_n \alpha^n$, where all but finitely many $c_n$ are zero, implies that $c_0 = 1$ and $c_n = 0$ for every $n \neq 0$. Therefore $1 \in \mathcal{A}(M_\alpha)$, and it follows from Theorem~\ref{thm:1atomic} that $M_\alpha$ is atomic. Now suppose that $p(\alpha)$ and $q(\alpha)$ are two factorizations of the same element in $M_\alpha$, where $p(x), q(x) \in \mathbb{N}_0[x,x^{-1}]$. Take $k \in \mathbb{N}$ such that $f(x) := x^k(p(x) - q(x)) \in \mathbb{Z}[x]$. Since $f(\alpha) = 0$, the fact that $\alpha$ is transcendental ensures that $f(x) = 0$ and, hence, $p(x) = q(x)$. Thus, the factorizations $p(\alpha)$ and $q(\alpha)$ are identical, concluding that $M_\alpha$ is a UFM.
\end{proof}
We proceed to discuss a dual notion of half-factoriality. A monoid $M$ is called a \emph{length-factorial monoid} (LFM) provided that for all $a \in M$ and $z,z' \in \mathsf{Z}(a)$, the equality $|z| = |z'|$ implies that $z = z'$. Observe that every UFM is an LFM. The notion of length-factoriality was first considered in~\cite{CS11} under the term ``other-half-factoriality,'' and it has been recently investigated in~\cite{CCGS21,GZ21,fG20}. On the other hand, not every LFM is a UFM, as illustrated next.
\smallskip
\begin{example}
Let $q \in \mathbb{Q}_{> 1} \setminus \mathbb{N}$ and consider the additive submonoid $M$ of $\mathbb{Q}_{\ge 0}$ generated by the set $\{1,q\}$. Since $1 = \min M^\bullet$ and $q \notin \mathbb{N}$, we conclude that $\mathcal{A}(M) = \{1,q\}$. In addition, one can check that if $z_1 := m_1 + n_1 q$ and $z_2 := m_2 + n_2q$ are two factorizations of the same element of $M$ having the same lengths, then $m_1 + n_1 = m_2 + n_2$ and, therefore, $(m_1, n_1) = (m_2, n_2)$; that is, $z_1 = z_2$. Thus, $M$ is an LFM. However, $M$ is not a UFM since, for instance, the two sides of the equality $\mathsf{n}(q) \cdot 1= \mathsf{d}(q) \cdot q$ yield distinct factorizations of $\mathsf{n}(q)$ in $M$. Additive submonoids of $\mathbb{Q}_{\ge 0}$ that are LFMs have been determined in \cite[Proposition~2.2]{fG20a}.
\end{example}
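A brute-force check of the LFM property for one hypothetical sample value, $q = 3/2$ (any $q \in \mathbb{Q}_{>1} \setminus \mathbb{N}$ would do), might look as follows; a pair $(m,n)$ encodes the factorization $m \cdot 1 + n \cdot q$:

```python
from fractions import Fraction

q = Fraction(3, 2)  # sample choice of q in Q_{>1} \ N (our choice, for illustration)

# Each factorization m*1 + n*q is encoded by (m, n); its length is m + n.
# LFM property: equal element and equal length force equal factorizations.
seen = {}
for m in range(40):
    for n in range(40):
        key = (m + n * q, m + n)  # (element of M, factorization length)
        assert key not in seen or seen[key] == (m, n)
        seen[key] = (m, n)

# M is not a UFM: n(q) * 1 = d(q) * q gives two factorizations of 3.
assert Fraction(3) == 2 * q
print("LFM check passed; 3 = 3*1 = 2*q")
```

The check works because the element $m + nq$ and the length $m + n$ determine $(m,n)$ by solving a nonsingular linear system (as $q \neq 1$).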
\begin{prop}
For $\alpha \in \mathbb{R}_{>0},$ $M_\alpha$ is an LFM if and only if it is a UFM.
\end{prop}
\begin{proof}
If $\alpha$ is transcendental, then $M_\alpha$ is a UFM; hence, the statement of the proposition follows immediately. Thus, we may assume that $\alpha$ is algebraic. It suffices to argue the direct implication, as the reverse implication follows by definition. To do this, suppose, by way of contradiction, that $M_\alpha$ is not a UFM. Then there exists an element of $M_\alpha$ having two distinct factorizations, namely, $p(\alpha)$ and $q(\alpha)$, where $p(x), q(x) \in \mathbb{N}_0[x,x^{-1}]$. After rearranging the equality $(\alpha-1)p(\alpha) = (\alpha-1)q(\alpha)$, we obtain that $z_1 := \alpha p(\alpha) + q(\alpha)$ and $z_2 := \alpha q(\alpha) + p(\alpha)$ are factorizations of the same element in $M_\alpha$. Observe that $z_1 \neq z_2$ as, otherwise, the Laurent polynomials $p(x)$ and $q(x)$ would satisfy $xp(x) + q(x) = xq(x) + p(x)$, which is not possible because $p(x) \neq q(x)$. However, the fact that $|z_1| = p(1) + q(1) = |z_2|$ indicates that $z_1$ and $z_2$ are distinct factorizations of the same element having the same length, which contradicts the fact that $M_\alpha$ is an LFM and completes the proof.
\end{proof}
Now we can summarize the main results established in this paper in the following diagram of implications, which is a specialization of Diagram~\eqref{eq:atomic chain} to the class consisting of all the evaluation monoids of Laurent semirings. As illustrated in Examples~\ref{ex:atomic monoids without the ACCP} and~\ref{ex:FFM that is not a UFM}, the two (one-way) implications in the diagram are not reversible.
\begin{equation} \label{eq:refined_chain}
[\textbf{UFM} \ \Leftrightarrow \ \textbf{HFM} \ \Leftrightarrow \ \textbf{LFM}] \Rightarrow [\textbf{FFM} \ \Leftrightarrow \ \textbf{BFM} \ \Leftrightarrow \ \textbf{ACCP}] \ \Rightarrow \ \textbf{atomicity}
\end{equation}
\medskip
\subsection{The Elasticity}
We conclude this paper by saying a few words about the elasticity of the monoids $M_\alpha$.
Let $M$ be an atomic monoid. The \emph{elasticity} of a nonzero element $x \in M$, denoted by $\rho(x)$, is defined as
\[
\rho(x) := \frac{\sup \mathsf{L}(x)}{\min \mathsf{L}(x)}.
\]
In addition, we set $\rho(M) := \sup \{\rho(x) \mid x \in M^\bullet\}$ and call it the \emph{elasticity} of $M$. Notice that $\rho(M) \ge 1$. Furthermore, observe that $\rho(M) = 1$ if and only if $M$ is an HFM. As a result, the elasticity provides a measure of how far an atomic monoid is from being half-factorial.
\smallskip
As we proceed to argue, the elasticity of every monoid $M_\alpha$ is either $1$ or infinity.
\begin{prop} \label{prop:elasticity}
If $\alpha \in \mathbb{R}_{> 0}$, then $\rho(M_\alpha) = 1$ if either $\alpha = 1$ or $\alpha$ is transcendental, and $\rho(M_\alpha) = \infty$ otherwise.
\end{prop}
\begin{proof}
If $\alpha=1$ or $\alpha$ is transcendental, it follows from Proposition~\ref{prop:HFM/UFM characterization} that $M_\alpha$ is an HFM and, therefore, $\rho(M_\alpha) = 1$.
\smallskip
Now suppose that $\alpha$ is algebraic and $\alpha \neq 1$. We construct a sequence $(\beta_n)_{n \in \mathbb{N}}$ with terms in $M_\alpha$ such that $\sup \{\rho(\beta_n) \mid n \in \mathbb{N}\} = \infty$. Let $(p(x), q(x))$ be the minimal pair of $\alpha$. Then $z_1 := p(\alpha)$ and $z_2 := q(\alpha)$ are two distinct factorizations of the same element, namely, $\beta_1 \in M_\alpha$. Since $1$ is not a root of the minimal polynomial of $\alpha$, we see that $p(1) \neq q(1)$, so $z_1$ and $z_2$ are factorizations of different lengths. Suppose, without loss of generality, that $|z_1| < |z_2|$. For each $n \in \mathbb{N}$, set $\beta_n = \beta_1^n$, where the power is taken with respect to the multiplication of the semiring $\mathbb{N}_0[\alpha, \alpha^{-1}]$. Then, for every $n \in \mathbb{N}$, both $z_1^n$ and $z_2^n$ (obtained by expanding $p(x)^n$ and $q(x)^n$, respectively) are factorizations of $\beta_n$ in $M_\alpha$, whose lengths are $p(1)^n$ and $q(1)^n$. Therefore
\[
\rho(M_\alpha) \ge \rho(\beta_n) = \frac{\sup \mathsf{L}(\beta_n)}{\min \mathsf{L}(\beta_n)} \ge \frac{q(1)^n}{p(1)^n} = \bigg( \frac{|z_2|}{|z_1|} \bigg)^n
\]
for every $n \in \mathbb{N}$. Since $|z_2|/|z_1| > 1$, it follows that $\rho(M_\alpha) = \infty$, which concludes the proof.
\end{proof}
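For a concrete illustration (a sketch using the algebraic number of Example~\ref{ex:FFM that is not a UFM}, with the pair $p(x) = 2x^2+1$ and $q(x) = 4x$, so that $p(1)=3$ and $q(1)=4$), one can watch the lower bound $(q(1)/p(1))^n$ on $\rho(\beta_n)$ blow up:

```python
from fractions import Fraction
from math import isclose, sqrt

alpha = 1 - sqrt(2) / 2          # root of x^2 - 2x + 1/2
p = lambda x: 2 * x ** 2 + 1     # p(1) = 3
q = lambda x: 4 * x              # q(1) = 4

# p(alpha) and q(alpha) are the same element beta_1 of M_alpha.
assert isclose(p(alpha), q(alpha))

# beta_n = beta_1^n (power in the semiring) admits factorizations of
# lengths p(1)^n = 3^n and q(1)^n = 4^n, so rho(beta_n) >= (4/3)^n.
bounds = [Fraction(4, 3) ** n for n in (1, 5, 10, 20)]
print([float(b) for b in bounds])
```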
\bigskip
\section*{Acknowledgments}
First and foremost, it is my pleasure to thank Dr.~Felix Gotti for suggesting this project and for his mentorship all the way through. I also thank the MIT PRIMES-USA program for their support, without which this paper would not exist.
\bigskip
% arXiv:2108.11536, "Factorizations in evaluation monoids of Laurent semirings" (math.AC).
% arXiv:2108.08299, "Restricted Dyck Paths on Valleys Sequence".

\section{Introduction}
A classic concept, the \emph{Dyck paths}, has been widely studied. Recently, a subfamily of these paths, the non-decreasing Dyck paths, has received a certain
level of interest due to the good behavior of its recursive relations and generating functions. In this paper we continue
the study of a generalization of the non-decreasing Dyck paths. Other generalizations of non-decreasing Dyck paths have been given for Motzkin paths and for \L{}ukasiewicz paths \cite{RigoJoseLuis,RigoJoseLuisRational}.
We now recall, to avoid ambiguities, some important definitions that we need in this paper. A Dyck path is a lattice path in the first quadrant of the $xy$-plane that starts at the origin, ends on the $x$-axis, and consists of (the same number of) North-East steps $U:=(1,1)$ and South-East steps $D:=(1,-1)$. The \emph{semi-length} of a path is the total number of $U$'s that the path has. A \emph{valley} (\emph{peak}) is a subpath of the form $DU$
($UD$) and the \emph{valley vertex} of $DU$ is the lowest point (a local minimum) of $DU$. Following \cite{FloRamVelVilPolyominoes, FloRamVelVil} we define the \emph{valley vertices} of a Dyck path $P$ as the vector
$\nu=(\nu_1, \nu_2, \dots, \nu_k)$ formed by all $y$-coordinates (listed from left to right) of all valley vertices of $P$. For further recent work about different combinatorial aspects of Dyck paths, see for instance \cite{Baril, Baril2, Blecher, RigoJoseLuisSym, Manes, Sergi}.
For a fixed $d \in \mathbb{Z}$,
a Dyck path $P$ is called \emph{restricted $d$-Dyck} or \emph{$d$-Dyck} (for simplicity) if either $P$ has at most one valley or its valley vertex
vector $\nu$ satisfies $\nu_{i+1}-\nu_i\geq d$ for all $1\le i <k$. The set of all $d$-Dyck paths is denoted by ${\mathcal D}_{d}$, the set of all
$d$-Dyck paths of semi-length $n$ is denoted by ${\mathcal D}_{d}(n)$, and the cardinality of ${\mathcal D}_{d}(n)$ is denoted by $r_d(n)$.
The first well-known example of these paths is the set of $0$-Dyck paths; in the literature,
\cite{barcucci,CZ,CZ2,DeutscheProdinger, ElizaldeFlorezRamirez,FlorezJose}, this family is known as non-decreasing Dyck paths. The whole family of Dyck paths can
be seen as the limit of $d$-Dyck paths as $d\to -\infty$. Another example is given in Figure \ref{Example}; we observe that $\nu=(0,1,0,3,4,3,2)$ and that
$\nu_{i+1}-\nu_i\ge -1$, for $i=1, \dots, 6$, so the figure depicts a $(-1)$-Dyck path of length 28 (or semi-length 14).
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.9]{Example1.eps}
\end{center}
\caption{A $(-1)$-Dyck path of length 28.} \label{Example}
\end{figure}
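The definitions above are easy to mechanize. The following sketch (the function names are ours, not the authors') extracts the valley vertex vector of a $UD$-word and tests the $d$-Dyck condition; the sample path $UUDUDDUD$ has $\nu=(1,0)$, so it is a $(-1)$-Dyck path but not a $0$-Dyck path:

```python
def valley_vector(path):
    """y-coordinates of the valley vertices (each D immediately followed by a U)."""
    heights, h = [], 0
    for step in path:
        h += 1 if step == "U" else -1
        heights.append(h)
    return [heights[i] for i in range(len(path) - 1)
            if path[i] == "D" and path[i + 1] == "U"]

def is_d_dyck(path, d):
    """True if consecutive valley heights satisfy v[i+1] - v[i] >= d."""
    v = valley_vector(path)
    return all(v[i + 1] - v[i] >= d for i in range(len(v) - 1))

path = "UUDUDDUD"
print(valley_vector(path))                       # [1, 0]
print(is_d_dyck(path, -1), is_d_dyck(path, 0))   # True False
```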
The recurrence relations and generating functions for $d$-Dyck paths behave differently for $d\ge 0$ than for $d< 0$. For example, the generating
functions for the known statistics of $d$-Dyck paths with $d\ge 0$ are all rational (see \cite{barcucci,CZ,CZ2,ElizaldeFlorezRamirez,FlorezJose,FloRamVelVilPolyominoes, FloRamVelVil}).
However, for the statistics that we analyze in this paper, when $d< 0$, the generating functions are all algebraic (non-rational).
In this paper we give a bivariate generating function to count the number of paths
in ${\mathcal D}_{d}(n)$, for $d\le 0$, with respect to the number of peaks and semi-length. We also give a relationship between the total number of
$d$-Dyck paths and the Catalan numbers. Additionally, we give an explicit symbolic expression for the generating function with respect to the
semi-length. For the particular case $d=-1$ we give a combinatorial expression and a recursive relation for the total number of paths.
We also analyze the asymptotic behavior of the sequence $r_{-1}(n)$. It would be very interesting to gain a better understanding of the behavior of $d$-Dyck paths for $d<-1$.
The \emph{area} of a Dyck path is the sum of the absolute values of the $y$-components of all points in the path. That is, the area of a Dyck path
corresponds to the surface area under the path and above the $x$-axis. For example, the path $P$ in Figure \ref{Example} satisfies ${\texttt{area}}(P)=70$. We use generating functions and recursive relations to analyze the distribution of the area of all paths
in ${\mathcal D}_{-1}(n)$.
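Since every point of a Dyck path has nonnegative $y$-coordinate, the area can be computed by summing the heights reached after each step (a small sketch; the function name is ours):

```python
def area(path):
    """Sum of the y-coordinates of all lattice points of the path,
    i.e., the area between the path and the x-axis."""
    h, total = 0, 0
    for step in path:
        h += 1 if step == "U" else -1
        total += h
    return total

print(area("UD"), area("UUDD"), area("UUDUDDUD"))  # 1 4 8
```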
\section{Number of $d$-Dyck paths and Peaks Statistic}
Given a family of lattice paths, a classic question is how many lattice paths there are of a given length, and a second classic question is
how many peaks such paths have, as a function of their length. These questions have been completely answered, for instance, for Dyck paths
\cite{Deutschenumeration}, $d$-Dyck paths for $d\ge 0$ \cite{barcucci, FloRamVelVil}, and Motzkin paths \cite{sapounakis}, among others.
In this section we give a bivariate generating function to enumerate the peaks and semi-length of the $d$-Dyck paths for $d< 0$.
We now give some notation needed for this paper, including the parameters needed for the generating function in this section. The
\emph{level} of a valley is the $y$-component of its valley vertex. We recall that the set of all $d$-Dyck paths is denoted by ${\mathcal D}_{d}$; the set
of all $d$-Dyck paths of semi-length $n$ is denoted ${\mathcal D}_{d}(n)$, and the cardinality of ${\mathcal D}_{d}(n)$ is denoted by $r_d(n)$. Given a
$d$-Dyck path $P$, we denote the semi-length of $P$ by $\ell(P)$ and denote the number of peaks of $P$ by $\rho(P)$. So, the
bivariate generating function to count the number of paths and peaks of $d$-Dyck paths is defined by
$$L_d(x, y):=\sum_{P\in {\mathcal D}_d}x^{\ell(P)}y^{\rho(P)}.$$
\subsection{Some facts known when $d\geq 0$.}
These results can be found in \cite{FloRamVelVil}.
\begin{itemize}
\item
If $d\geq 0$, then the generating function $L_d(x, y)$ is given by
$$L_d(x, y)=1 + \frac{xy(1-2x+x^2+xy-x^{d+1}y)}{(1-x)(1-2x+x^2-x^{d+1}y)}.$$
\item If $d\geq 1$,
$$r_d(n)=\sum_{k=0}^{\lfloor\frac{n+d-2}{d}\rfloor}\binom{n-(d-1)(k-1)}{2k}.$$
\item If $n> d$, then we have the recursive relation
$$r_d(n)=2r_d(n-1)-r_d(n-2)+r_d(n-d-1),$$
with the initial values $r_d(n)=\binom n2 +1$, for $0\leq n \leq d$.
\item Let $p_d(n,k)$ be the number of $d$-Dyck paths of semi-length $n$, having exactly $k$ peaks.
If $d\geq 0$, then
$$p_d(n,k)=\binom{n+k-d(k-2)-2}{2(k-1)}.$$
For the whole set of Dyck paths, the number $p_{-\infty}(n,k)$, is given by the Narayana numbers $N(n,k)=\frac{1}{n}\binom nk \binom{n}{k-1}$.
\end{itemize}
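These quoted facts can be cross-checked numerically; the sketch below (our code, under the conventions stated above) compares the recursive relation against the closed formula for a few values of $d$:

```python
from math import comb

def r_closed(d, n):
    """Closed formula for r_d(n), valid for d >= 1 and n >= 1."""
    return sum(comb(n - (d - 1) * (k - 1), 2 * k)
               for k in range((n + d - 2) // d + 1))

def r_recursive(d, n_max):
    """r_d(0..n_max) via the recursion, with r_d(n) = C(n,2) + 1 for 0 <= n <= d."""
    r = [comb(n, 2) + 1 for n in range(d + 1)]
    for n in range(d + 1, n_max + 1):
        r.append(2 * r[n - 1] - r[n - 2] + r[n - d - 1])
    return r

for d in (1, 2, 3):
    rec = r_recursive(d, 12)
    assert all(rec[n] == r_closed(d, n) for n in range(1, 13))
print(r_recursive(2, 8))
```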
\subsection{Peaks statistic for $d$ a negative integer} For the remaining part of the paper we consider only the case $d<0$ and use $e$ to
denote $|d|$.
\begin{theorem}\label{mainGF} If $d$ is a negative integer and $e:=|d|$, then
the generating function $L_e(x, y)$ satisfies the functional equation
\begin{align}
L_e(x,y)= xy+xL_e(x,y)+xS_e(x,y)L_e(x,y), \label{eqSys1}
\end{align}
where $S_e(x,y)$ satisfies the algebraic equation
\[
(1-xS_e(x,y))^e(y+(1-y)xS_e(x,y))-S_e(x,y)(1-xS_e(x,y))^{e+1}-\frac{x^{e+2}y}{1-x}S_e(x,y)=0.
\]
\end{theorem}
\begin{proof} We start this proof by introducing some necessary notation. The set ${\mathcal Q}_{d,i} \subseteq {\mathcal D}_{d}$ denotes the family of nonempty paths
whose last valley is at level $i$. We consider the generating function
$$Q_i^{(e)}(x,y):=\sum_{P\in {\mathcal Q}_{d,i}}x^{\ell(P)}y^{\rho(P)}.$$
It is convenient to consider the sum of the $Q^{(e)}_i(x,y)$. We also consider the generating function, with respect to the semi-length and the number of peaks, that counts the $d$-Dyck paths having either no valleys or last valley at level less than $e$. That is,
\begin{align}\label{defSe}
S_e(x,y)=\frac{y}{1-x}+\sum _{j=0}^{e-1}Q^{(e)}_j(x,y).
\end{align}
A path $P$ can be uniquely decomposed as either
$UD, UTD$, or $UQDT$ (by considering the first return decomposition), where $T\in{\mathcal D}_{d}$ and $Q$ is either a path without valleys or is a path in $\cup_{i=0}^{e-1}{\mathcal Q}_{d,i}$ (see Figure \ref{Fig3}, for a graphical representation of this decomposition). Notice that the decomposition $UQDT$ ensures that the condition $\nu_{i+1}-\nu_i\geq d$ holds for all $i\geq 1$.
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.8]{Deco1.eps}
\end{center}
\caption{Decomposition of a $d$-Dyck path.} \label{Fig3}
\end{figure}
From the symbolic method we obtain the functional equation
$$ L_e(x,y)= xy+xL_e(x,y)+xS_e(x,y)L_e(x,y).$$
Now we are going to obtain a system of equations for the generating functions $Q_i^{(e)}(x,y)$. Let $Q$ be a path in the set ${\mathcal Q}_{d,i}$. If $i=0$, then the path $Q$ can be decomposed uniquely as either
$UQ'D\Delta$ or $UQ'DR$, where $\Delta$ is a pyramid, $R$ is a path in ${\mathcal Q}_{d,0}$, and $Q'$ is either a path without valleys or $Q'\in \cup_{i=0}^{e-1}{\mathcal Q}_{d,i}$.
Therefore, we have the functional equation
$$Q_0^{(e)}(x,y)=xS_e(x,y)\frac{xy}{1-x}+xS_e(x,y)Q_0^{(e)}(x,y).$$
For $i>0$, any path $Q$ can be decomposed uniquely in one of the two forms $UR_1D$ or $UQ'DR_2$,
where $R_1\in {\mathcal Q}_{d,i-1}$, $R_2 \in {\mathcal Q}_{d,i}$, and $Q'$ is either a path without valleys or $Q'\in \cup_{j=0}^{e-1}{\mathcal Q}_{d,j}$.
So, we have the functional equation
$$Q_i^{(e)}(x,y)=xQ_{i-1}^{(e)}(x,y)+xS_e(x,y)Q_i^{(e)}(x,y).$$
Summarizing the above discussion, we obtain the system of equations:
\begin{align}
\label{eqSys2}
\begin{cases}
Q_0^{(e)}(x,y)&=xS_e(x,y)\frac{xy}{1-x}+xS_e(x,y)Q_0^{(e)}(x,y)\\
Q_1^{(e)}(x,y) &=xQ_0^{(e)}(x,y)+xS_e(x,y)Q_1^{(e)}(x,y)\\
&\ \vdots\\
Q_i^{(e)}(x,y)&=xQ_{i-1}^{(e)}(x,y)+xS_e(x,y)Q_i^{(e)}(x,y)\\
&\ \vdots \\
Q_{e-1}^{(e)}(x,y)&=xQ_{e-2}^{(e)}(x,y)+xS_e(x,y)Q_{e-1}^{(e)}(x,y).
\end{cases}
\end{align}
Summing the equations in \eqref{eqSys2}, we obtain that
$$\sum_{j=0}^{e-1}Q_j^{(e)}(x,y)=xS_e(x,y)\left(\sum_{j=0}^{e-1}Q_j^{(e)}(x,y)+\frac{xy}{1-x}\right)+x\sum_{j=0}^{e-2}Q_j^{(e)}(x,y).$$
From this and \eqref{defSe} we have
\begin{multline}\label{eqSex}
S_e(x,y)-\frac{y}{1-x}=x\left (S_e(x,y)-\frac{y}{1-x}-Q_{e-1}^{(e)}(x,y)\right)\\+xS_e(x,y)\left(S_e(x,y)-\frac{y}{1-x}\right )+\frac{x^2y}{1-x}S_e(x,y).
\end{multline}
Solving the previous equation for $S_e(x,y)$, we obtain
\begin{align}\label{FunEcS}
S_e(x,y)=\frac{1-x+xy-\sqrt{1-2x+x^2-2xy-2x^2y+x^2y^2+4x^2Q_{e-1}^{(e)}(x,y)}}{2x}.
\end{align}
Notice that all of the $Q_i^{(e)}(x,y)$, with $i\geq 0$, can be expressed as
\begin{align}\label{FunEcQ}
Q_i^{(e)}(x,y)=\frac{x^{i+2}yS_e(x,y)}{(1-x)(1-xS_e(x,y))^{i+1}}.
\end{align}
Substituting \eqref{FunEcQ} into \eqref{eqSex} we obtain the desired functional equation.
\end{proof}
We observe that substituting \eqref{FunEcS} into \eqref{eqSys1}, we have
\begin{eqnarray*}
L_e(x,y)&=&\dfrac{xy}{1-x-xS_e(x,y)}\\
&=&\dfrac{xy}{1-x-\dfrac{1-x+xy-\sqrt{1-2x+x^2-2xy-2x^2y+x^2y^2+4x^2Q_{e-1}^{(e)}(x,y)}}{2}}.
\end{eqnarray*}
From the combinatorial description of $Q^{(e)}_{e-1}(x,y)$, we obtain $Q_{e-1}^{(e)}(x,y)\longrightarrow 0$, as $e\longrightarrow \infty$. Therefore,
$$\lim_{e\to \infty} L_e(x,y)=\lim_{e\to \infty}\frac{xy}{1-x-xS_e(x,y)}=\frac{1-x-xy-\sqrt{1 - 2 x + x^2 - 2 x y - 2 x^2 y + x^2 y^2}}{2x}.$$
This last generating function gives the distribution of the Narayana numbers. This corroborates that the restricted $(-\infty)$-Dyck paths coincide
with the nonempty Dyck paths.
\begin{theorem}
If $1\leq k \leq |d|+3$, then the $k$-th coefficient of the generating function $L_e(x,1)$ coincides with the Catalan number $C_k$.
\end{theorem}
\begin{proof} We first observe that the shortest Dyck path that contains a forbidden sequence of valleys is $P=U^{e+2}DUD^{e+2}UD$, where $e=|d|$ (clearly,
$\ell (P)=e+4$). Therefore, if $d<0$, then $r_d(n)=C_n$ for $n=1, 2, \dots, |d|+3$.
\end{proof}
The first few values for the sequence $r_d(n)$, for $d\in \{-1, -2, -3, -4\}$ are
\begin{align*}
\{r_{-1}(n)\}_{n\geq 1}&=\{\textbf{1}, \, \textbf{2}, \, \textbf{5}, \, \textbf{14}, \, 41, \, 123, \, 375, \, 1157, \, 3603, \dots \}, \\
\{r_{-2}(n)\}_{n\geq 1}&=\{\textbf{1}, \, \textbf{2}, \, \textbf{5}, \, \textbf{14}, \, \textbf{42}, \, 131, \, 419, \, 1365, \, 4511, \dots \}, \\
\{r_{-3}(n)\}_{n\geq 1}&=\{\textbf{1}, \, \textbf{2}, \, \textbf{5}, \, \textbf{14}, \, \textbf{42}, \, \textbf{132}, \, 428, \, 1419, \, 4785, \dots \}, \\
\{r_{-4}(n)\}_{n\geq 1}&=\{\textbf{1}, \, \textbf{2}, \, \textbf{5},\, \textbf{14}, \, \textbf{42},\, \textbf{132}, \, \textbf{429}, \, 1429, \, 4850, \dots \}.
\end{align*}
For example, there are $41$ $(-1)$-Dyck paths out of the $42$ Dyck paths of length $10$. Figure \ref{DyckF} depicts the only Dyck path of length 10 that is not a $(-1)$-Dyck path.
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=1.3]{DyckF.eps}
\end{center}
\caption{The only Dyck path of length 10 that is not a $(-1)$-Dyck path.} \label{DyckF}
\end{figure}
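The initial values of $r_{-1}(n)$, and the uniqueness of the excluded path of length 10, can be confirmed by brute force (a sketch; function names are ours):

```python
def dyck_paths(n):
    """All Dyck paths of semi-length n, as strings over {U, D}."""
    def rec(path, h, ups):
        if len(path) == 2 * n:
            yield path
            return
        if ups < n:
            yield from rec(path + "U", h + 1, ups + 1)
        if h > 0:
            yield from rec(path + "D", h - 1, ups)
    yield from rec("", 0, 0)

def is_minus_one_dyck(path):
    """Consecutive valley heights must never drop by more than 1."""
    heights, h = [], 0
    for s in path:
        h += 1 if s == "U" else -1
        heights.append(h)
    v = [heights[i] for i in range(len(path) - 1)
         if path[i] == "D" and path[i + 1] == "U"]
    return all(v[i + 1] - v[i] >= -1 for i in range(len(v) - 1))

counts = [sum(1 for p in dyck_paths(n) if is_minus_one_dyck(p))
          for n in range(1, 6)]
print(counts)  # [1, 2, 5, 14, 41]

bad = [p for p in dyck_paths(5) if not is_minus_one_dyck(p)]
print(bad)  # ['UUUDUDDDUD']
```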
Recall that $d$ is a negative integer and that $e:=|d|$. Then by Theorem \ref{mainGF}, we have
\begin{align*}
(L_e(x,y)+y)^e&\left(xL^2_e(x,y)+(xy+x-1)L_e(x,y)+xy\right)\\
&-\frac{x}{1-x}((1-x)L_e(x,y)-xy)(L_e(x,y))^{e+1}=0.
\end{align*}
This implies that
\begin{multline*}
\sum_{j=2}^{e+1}x\binom{e}{j-2}y^{e+2-j}(L_e(x,y))^j+\sum_{j=1}^{e+1}(xy+x-1)\binom{e}{j-1}y^{e+1-j}(L_e(x,y))^j\\
-\sum_{j=0}^ex\binom{e}{j}y^{e+1-j}(L_e(x,y))^j+\frac{x^2y}{1-x}(L_e(x,y))^{e+1}=0.
\end{multline*}
Hence, by taking $y=1$, we have
$$L_e(x,1)=Z\left(a_0+\sum_{j=2}^{e+1}a_j(L_e(x,1))^j\right),$$
where $Z=1$, and
\begin{align*}
a_0&=\frac{x}{1-(e+2)x},\\
a_j&=\frac{1}{1-(e+2)x}\left(x\binom{e+2}{j}-\binom{e}{j-1}\right),\quad j=2,3,\ldots,e,\\
a_{e+1}&=\frac{(e+2)x(1-x)-1+x(1+x)}{(1-x)(1-(e+2)x)}.
\end{align*}
Hence, by the Lagrange inversion formula, we expand the generating function $L_e(x,1)$ as a power series in $Z$ to obtain
\begin{align*}
L_e(x,1)&=\sum_{n\geq1}\frac{[Z^{n-1}]}{n}
\sum_{i_0+i_2+i_3+\cdots+i_{e+1}=n}\frac{n!}{i_0!i_2!\cdots i_{e+1}!}a_0^{i_0}Z^{2i_2+\cdots+(e+1)i_{e+1}}\prod_{j=2}^{e+1}a_j^{i_j},
\end{align*}
which leads to the following result.
\begin{theorem}\label{thLfe}
We have
\begin{align*}
L_e(x,1)&=\sum_{n\geq1}\frac{\sum_{2i_2+\cdots+(e+1)i_{e+1}=n-1}\binom{n}{i_2,\ldots,i_{e+1}}x^{n-i_2-\cdots-i_{e+1}}t^{i_{e+1}}\prod_{j=2}^{e}\left(x\binom{e+2}{j}-\binom{e}{j-1}\right)^{i_j}}{n(1-(e+2)x)^n},
\end{align*}
where
\[ \binom{n}{i_2,\ldots,i_{e+1}}=\frac{n!}{i_2!\cdots i_{e+1}!(n-i_2-\cdots-i_{e+1})!} \text{ and } t=\frac{(e+2)x(1-x)-1+x(1+x)}{1-x}.\]
\end{theorem}
For example, Theorem \ref{thLfe} with $e=2$ gives
\[ L_2(x,1)=\sum_{n\geq1}\frac{\sum_{2i_2+3i_3=n-1}\binom{n}{i_2,i_3}x^{n-i_2-i_3} (6x-2)^{i_2}(\frac{-3x^2+5x-1}{1-x})^{i_3}}{n(1-4x)^n}.\]
Thus,
\begin{multline*}L_2(x,1)=\frac{x}{1-4x}+\frac{x^2(6x-2)}{(1-4x)^3}+\frac{x^3t}{(1-4x)^4}+
\frac{2x^3(6x-2)^2}{(1-4x)^5} +\frac{5x^4t(6x-2)}{(1-4x)^6}\\
+\frac{5x^4(6x-2)^3+3x^5t^2}{(1-4x)^7}+\frac{21x^5 (6 x - 2)^2 t}{(1 - 4 x)^8} + \frac{28x^6 (-2 + 6 x)t^2 + 14 x^5 (-2 + 6 x)^4}{(1 - 4 x)^9} +\cdots,
\end{multline*}
where $t=(-3x^2+5x-1)/(1-x)$.
\section{Some results for the case $d=-1$}
In this section we continue, for the particular case $d=-1$, the analysis of the bivariate generating function given in the previous section, and we
provide more detailed results. We denote by ${\mathcal Q}$ the set of all nonempty paths in
${\mathcal D}_{-1}$ having at least one valley, where the last valley is at ground level. We denote by ${\mathcal Q}_n$ the subset of ${\mathcal Q}$ formed by all paths of
semi-length $n$ and denote by $q_n$ the cardinality of ${\mathcal Q}_n$. For simplicity, when $d=-1$ (or $e=1$) we use $L(x,y)$ instead of $L_{1}(x,y)$.
\begin{theorem}\label{teodnegativo}
The bivariate generating function $L(x,y)$ is given by
$$L(x,y)= \frac{(x-1)y\left(1-x(2+y) - \sqrt{(1-x-2xy-2x^2y+x^2y^2-x^3y^2)/(1-x)} \right)}{2(1-2x+x^2-2xy+x^2y)}.$$
\end{theorem}
\begin{proof}
A path $P \in {\mathcal Q}$ can be uniquely decomposed as either
$UD$, $\, UTD$, $\, U\Delta DT,$ or $\, UQDT$, where $\Delta$ is a pyramid, $T \in {\mathcal D}_{-1}$, and $Q\in {\mathcal Q}$. Therefore, we obtain the following functional relation
\begin{equation}\label{teodnegativoLxy}
L(x,y)= xy+ xL(x,y) + x\left(\frac{y}{1-x}\right)L(x,y) + xL(x,y)Q(x,y),
\end{equation}
where $$Q(x,y):=\sum_{Q\in {\mathcal Q}}x^{\ell(Q)}y^{\rho(Q)}.$$
We now obtain an explicit expression for the generating function $Q(x,y)$. A path $Q\in{\mathcal Q}$ can be uniquely decomposed as either
$U\Delta DU\Delta'D$, $\, U\Delta DR$, $\,UR_1 DR_2$, or $\,URDU\Delta D$,
where $\Delta, \Delta'$ are pyramids, and $R, R_1, R_2\in {\mathcal Q}$ (see Figure \ref{Fig4} for a graphical representation of this decomposition).
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.5]{Fig4.eps}
\end{center}
\caption{Decomposition of a $(-1)$-Dyck path in ${\mathcal Q}$.} \label{Fig4}
\end{figure}
Using the symbolic method, we obtain the functional equation
\begin{align*}
Q(x, y)&= x^2\left(\frac{y}{1-x}\right)^2+ x\left(\frac{y}{1-x}\right)Q(x, y) + x(Q(x,y))^2 + x^2\left(\frac{y}{1-x}\right)Q(x,y).
\end{align*}
Solving the equation above for $Q(x,y)$, we find that
\begin{align}\label{gFunQ}
Q(x, y)=\frac{1-x-xy-x^2y -\sqrt{(1-x)(1 - x - 2 x y -2 x^2 y + x^2y^2 - x^3y^2)}}{2(1-x)x}.
\end{align}
Substituting this expression into \eqref{teodnegativoLxy} and solving for $L(x,y)$ gives the desired result.
\end{proof}
Expanding $L(x,y)$ as a power series, we obtain the first few terms:
\begin{multline*}
L(x,y)=x y+x^2 \left(y^2+y\right)+x^3 \left(y^3+3 y^2+y\right)+x^4 \left(y^4+\textbf{6}y^3+6 y^2+y\right)\\
+x^5 \left(y^5+10 y^4+19 y^3+10 y^2+y\right)+x^6\left(y^6+15 y^5+46 y^4+45 y^3+15 y^2+y\right)+\cdots
\end{multline*}
Figure \ref{Eje1} depicts all six paths in ${\mathcal D}_{-1}(4)$ with exactly 3 peaks. Notice that this is the bold coefficient of $x^4y^3$ in the above series.
\begin{figure} [htbp]
\begin{center}
\includegraphics[scale=0.8]{Eje1.eps}
\end{center}
\caption{All six paths in ${\mathcal D}_{-1}(4)$ with exactly 3 peaks.} \label{Eje1}
\end{figure}
The generating function for the $(-1)$-Dyck paths is given by
\begin{equation}\label{LxForD1}
L(x):=L(x,1)=\frac{-1 + 4 x - 3 x^2 + \sqrt{1 - 4 x + 2 x^2 + x^4}}{2 (1 - 4 x + 2 x^2)}.
\end{equation}
Thus,
\begin{equation*}
L(x):=x+2 x^2+5 x^3+14 x^4+41 x^5+123 x^6+375 x^7+1157 x^8+\cdots.
\end{equation*}
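The coefficients above can be reproduced directly from \eqref{LxForD1} with exact rational power-series arithmetic (a self-contained sketch; no computer-algebra system needed):

```python
from fractions import Fraction

N = 9  # track coefficients of x^0, ..., x^8

def series(coeffs):
    """Pad a coefficient list with zeros up to length N, as exact rationals."""
    return [Fraction(c) for c in coeffs] + [Fraction(0)] * (N - len(coeffs))

def sqrt_series(a):
    """Power-series square root of a, assuming a[0] == 1."""
    s = [Fraction(1)]
    for n in range(1, N):
        s.append((a[n] - sum(s[k] * s[n - k] for k in range(1, n))) / 2)
    return s

def inv_series(b):
    """Power-series inverse of b, assuming b[0] == 1."""
    c = [Fraction(1)]
    for n in range(1, N):
        c.append(-sum(b[k] * c[n - k] for k in range(1, n + 1)))
    return c

def mul_series(a, b):
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(N)]

root = sqrt_series(series([1, -4, 2, 0, 1]))              # sqrt(1 - 4x + 2x^2 + x^4)
num = [p + s for p, s in zip(series([-1, 4, -3]), root)]  # -1 + 4x - 3x^2 + sqrt(...)
L = [c / 2 for c in mul_series(num, inv_series(series([1, -4, 2])))]

print([int(c) for c in L[1:]])  # [1, 2, 5, 14, 41, 123, 375, 1157]
```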
For simplicity, in the remaining part of the paper, when there is no ambiguity we use $r(n)$ instead of $r_{-1}(n)$. Our interest here is to give a combinatorial expression for this sequence.
First of all, we give some preliminary results. Let $b(n)$ be the number of $(-1)$-Dyck paths of semi-length $n$ that either have no valleys or the last valley is at ground level. Note that $b(n)-1$ is the $n$-th coefficient of the generating function $Q(x,1)$, see \eqref{gFunQ}, or equivalently
\begin{align*}
\sum_{n\geq 0}b(n)x^n&=Q(x,1) + \frac{1}{1-x}=\frac{1-x^2-\sqrt{1-4x+2x^2+x^4}}{2(1-x)x}\\
&=1+x+2 x^2+4 x^3+9 x^4+22 x^5+57 x^6+154 x^7+429 x^8+\cdots.
\end{align*}
This generating function coincides with the generating function for the number of Dyck paths of semi-length $n$ that avoid the subpath $UUDU$. From Proposition 5 of \cite{Sap} and \cite[p.~10]{barry} we conclude the following proposition.
\begin{proposition}
For all $n\geq 1$ we have
$$b(n)=\sum_{j=0}^{\lfloor\frac{n-1}{2}\rfloor}\frac{(-1)^j}{n-j}\binom{n-j}{j}\binom{2n-3j}{n-j+1}=\sum_{k=0}^{\lfloor\frac{n}{2}\rfloor}\sum_{j=0}^{n-k}\binom{n-k}{j}N(j,k),$$
where $N(n,k)=\frac{1}{n}\binom nk \binom{n}{k-1}$ are the Narayana numbers, with $N(0,0)=1$.
\end{proposition}
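The double-sum expression can be checked against the series coefficients listed above (a sketch; `narayana` implements $N(n,k)$ with the stated convention $N(0,0)=1$):

```python
from math import comb

def narayana(n, k):
    """Narayana numbers N(n, k), with N(0, 0) = 1."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return comb(n, k) * comb(n, k - 1) // n

def b(n):
    return sum(comb(n - k, j) * narayana(j, k)
               for k in range(n // 2 + 1)
               for j in range(n - k + 1))

print([b(n) for n in range(9)])  # [1, 1, 2, 4, 9, 22, 57, 154, 429]
```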
\begin{remark}
\label{rem1}
Let $\mathcal{B}(n)=\mathcal{Q}_n\cup \{U^nD^n\}$ denote the set of $(-1)$-Dyck paths having either no valleys or the last valley is at height zero. Denote by $\mathcal{B}_{j,k}(n)$
the subset of $\mathcal{B}(n)$ of paths that contain exactly $j$ valleys where $k$ of those valleys are $(-1)$-valleys, i.e., there is a valley to the left of it, no valleys in between, and the
heights differ by $-1$. If $P\in \mathcal{B}_{j,k}(n)$, then $\rho (P)=j+1$. Decompose the path as $$P=U^a\Delta _{t_1}D^{r_1}U^{s_1}\Delta _{t_2}\cdots D^{s_j}U^bD^b,$$
where there are $k$ occurrences of $D\Delta _p{\color{red}{D}}U$. So, there are $k$ {\textcolor{red}{red}} down steps indicating a $(-1)$-valley. Notice then that there are $n-k$ down
steps that are not labeled red and they belong to pyramids, there is no restriction on those, so they can be represented as compositions of $n-k$ down steps on $j+1$ parts, one per
peak. They are counted by $\binom{n-k-1}{j+1-1}=\binom{n-k-1}{j}$. This means that, numerically, $N(j,k+1)$ corresponds to the sequence of $(-1)$-Dyck paths of semi-length $j+k+1$
containing $j$ valleys with $k$ of them being $(-1)$-valleys.
\end{remark}
\begin{theorem} The total number of paths in ${\mathcal D}_{-1}(n)$ is given by
$$r(n)=\sum_{\ell=0}^n\sum_{i=0}^{n-\ell-1}\binom{n-\ell-1}{i}q^{(i)}(\ell),$$
where
$$q^{(i)}(n)=\sum_{n_1+n_2+\cdots+n_i=n}b(n_1)b(n_2)\cdots b(n_i).$$
\end{theorem}
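Before turning to the proof, this formula can be verified numerically for small $n$ (a sketch; the values of $b(n)$ are read off the series above):

```python
from math import comb

b = [1, 1, 2, 4, 9, 22, 57, 154, 429]  # b(0), b(1), ...

def q(i, n):
    """q^{(i)}(n): number of i-tuples of paths from B with total semi-length n."""
    if i == 0:
        return 1 if n == 0 else 0
    return sum(b[m] * q(i - 1, n - m) for m in range(n + 1))

def r(n):
    return sum(comb(n - l - 1, i) * q(i, l)
               for l in range(n + 1)
               for i in range(n - l))

print([r(n) for n in range(1, 6)])  # [1, 2, 5, 14, 41]
```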
\begin{proof}
Let $\mathcal{Q}^{(i)}(n)$ denote the set of $i-$tuples $(P_1,\ldots ,P_i)$ of paths $P_j\in \mathcal{B} =\bigcup _{n\geq 0}\mathcal{B}(n)$, using the notation in Remark \ref{rem1} above, such that $\ell \left (P_1\right )+\cdots+\ell \left (P_i\right )=n$. It is clear that $\left |\mathcal{Q}^{(i)}(n)\right |=q^{(i)}(n)$. Notice that by definition of $\mathcal{B}$ we allow the empty path, $\lambda$, to be counted. Recall, also, that the number of compositions of $n-\ell$ on $i+1$ positive integer parts, denoted as $\mathcal{C}_{i+1}(n-\ell)$, is given by the binomial coefficient $\binom{(n-\ell) -1}{(i+1)-1}$.
Consider, then, the function $$\varphi :\bigcup _{i,\ell}\left (\mathcal{Q}^{(i)}(\ell)\times \mathcal{C}_{i+1}(n-\ell)\right )\longrightarrow \mathcal{D}_{-1}(n),$$
defined by $\varphi \left ((P_1,\ldots ,P_i),(C_1,\ldots ,C_{i+1})\right )=P$, where $P$ is the path described as
$P=U^{C_1}MU^{C_2}\ldots$,
where $$M=\begin{cases}
D^{C_1},& \text{ if } P_1=\lambda; \\
P_1,& \text{ if } P_1=\Delta;\\
P_1D,& \text{ otherwise}.
\end{cases}$$
Figure \ref{fig:exBij} shows two examples of how the function $\varphi$ works.
\begin{figure}[H]
\centering
\includegraphics[height=5cm,width=10cm]{ExBig32.eps}
\caption{Function $\varphi$ applied to $(a,b,c),(\Delta , \lambda)$ and to $(a,c),(P_1)$.}
\label{fig:exBij}
\end{figure}
\smallskip
This function is a bijection; the inverse function is obtained by decomposing a path using the following algorithm:
\begin{algorithm}[htbp]
\begin{enumerate}[(1) ]
\item If there are $(-1)$-valleys, go to step (2). If there are no consecutive valleys with difference equal to $-1$, then the path is increasing and can be decomposed by using only pyramids $\Delta$ and $\lambda$ in the following way:
\begin{itemize}
\item If there are no valleys, then the path is $\Delta _a = U^aD^a$ for some $a\geq 1$; return $(a)$.
\item If there is just one valley, the path is $U^aD^bU^cD^d$ with $a\geq b$ and $a+c=b+d$; if $a>b$ return $(a-b,U^bD^b,c)$, and if $a=b$ return $(a,\lambda,c)$.
\item For two consecutive valleys at the same height (not on the ground), place $(a,\Delta ,b,\lambda)$ and start at the second valley. If they are on the ground, return $(a,\lambda ,b,\lambda)$.
\item For two consecutive valleys that are not at the same height, return $(a,\Delta _1,b, \Delta _2)$.
\end{itemize}
\item Find the rightmost $(-1)$-valley, i.e., locate the rightmost occurrence of $DU^kD^{k}{\color{red}{D}}U.$ The right part of this string is increasing: go to step (1). Extract the maximal subword that is a Dyck path and call this path $P_i,$ increase the value of $i$ and go to step (1) with the left part of the path.
\end{enumerate}
\caption{Inverse function $\varphi^{-1}$}
\label{ReverseFunction}
\end{algorithm}
For example, consider the path given in Figure \ref{fig:expro}. First one locates the rightmost $(-1)$-valley (all $(-1)$-valleys are marked with a red circle); the part to its right is increasing, and step (1) encodes it as $(1)$. We take out the path $P_1$ and locate the next $(-1)$-valley; the right part corresponds to $(1,\lambda,1,\lambda,1)$ and the left part of $P_2$ corresponds to $(1,UD,1)$, so the whole path is encoded by $(1,UD,1,P_2,1,\lambda,1,\lambda,1,P_1,1)$.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{ExBij3.eps}
\caption{Example inverse function.}
\label{fig:expro}
\end{figure}
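The double sum of the theorem can be evaluated directly. The sketch below is a numerical check only; it assumes the reading $b(0)=1$ (the empty path) and $b(n)=q_n+1$ for $n\geq 1$ (since $\mathcal{B}(n)=\mathcal{Q}_n\cup \{U^nD^n\}$), computes $q_n$ from the recurrence of Corollary \ref{NumberPathQn}, and compares the result with the initial values $r(1),\dots,r(5)$ listed further below.

```python
from math import comb
from functools import lru_cache

# q_n from Corollary \ref{NumberPathQn}: q_1 = 0, q_2 = 1, q_3 = 3, and for n > 3
# q_n = 2q_{n-1} + q_{n-2} + q_{n-3} + sum_{i=2}^{n-4} q_i (q_{n-i-1} - q_{n-i-2}) + 1
M = 12
q = [0, 0, 1, 3]
for n in range(4, M + 1):
    q.append(2*q[n-1] + q[n-2] + q[n-3]
             + sum(q[i]*(q[n-i-1] - q[n-i-2]) for i in range(2, n-3)) + 1)

def b(n):
    # assumed reading: b(0) = 1 (empty path), b(n) = q_n + 1 since B(n) = Q_n u {U^nD^n}
    return 1 if n == 0 else q[n] + 1

@lru_cache(maxsize=None)
def q_sup(i, n):
    # q^{(i)}(n) = sum over n_1 + ... + n_i = n (parts >= 0) of b(n_1) ... b(n_i)
    if i == 0:
        return 1 if n == 0 else 0
    return sum(b(m) * q_sup(i - 1, n - m) for m in range(n + 1))

def r(n):
    # the double sum of the theorem
    return sum(comb(n - l - 1, i) * q_sup(i, l)
               for l in range(n) for i in range(n - l))

print([r(n) for n in range(1, 6)])  # [1, 2, 5, 14, 41]
```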
Corollary \ref{NumberPathQn} is a direct consequence of the decomposition given in the proof of Theorem \ref{teodnegativo}. The first part follows
from Figure \ref{Fig4}, and the second part uses the first part together with the decomposition $UTD$, $U\Delta DT,$ or $UQDT$ given in the proof
of Theorem \ref{teodnegativo}.
\begin{corollary}\label{NumberPathQn} If $n>1$, then the following hold.
\begin{enumerate}
\item If $q_n=|{\mathcal Q}_n|$, then
$$q_n=2 q_{n-1}+q_{n-2}+q_{n-3}+\sum _{i=2}^{n-4} q_{i} (q_{n-i-1}-q_{n-i-2})+1,$$
for $n>3$, with the initial values $q_1=0$, $q_2=1$, and $q_3=3$.
\item If $r(n)=|{\mathcal D}_{-1}(n)|$, then
$$r(n)=3 r(n-1)-r(n-2)+q_{n-2}+\sum _{i=2}^{n-3} q_{i} (r(n-i-1)-r(n-i-2)), $$
for $n>3$, with the initial values $r(1)=1$, $r(2)=2$, and $r(3)=5$.
\end{enumerate}
\end{corollary}
The generating function of the sequence $r(n)$ is algebraic of degree two, and therefore $r(n)$ satisfies a recurrence relation with polynomial coefficients.
Such a recurrence can be found automatically with Kauers's algorithm \cite{Kau}. In particular, we obtain that $r(n)$ satisfies the recurrence relation:
\begin{multline*}
2nr(n) - 4nr(n+1) + (12 + 5 n)r(n+2) - 4(15 + 4 n)r(n+3) \\
+ 10(9 + 2 n)r(n+4) - 2(21 + 4 n)r(n+5) + (6 + n)r(n+6)=0, \quad n \geq 0,
\end{multline*}
with the initial values $r(0)=0, r(1)=1, r(2)=2, r(3)=5, r(4)=14$, and $r(5)=41$.
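As a cross-check, the sketch below compares this recurrence with the convolution recurrences of Corollary \ref{NumberPathQn}, assuming the polynomial-coefficient recurrence holds from $n=0$ on (the six initial values $r(0),\dots,r(5)$ determine an order-six recurrence only under this assumption):

```python
M = 40

# q_n and r(n) from the convolution recurrences of Corollary \ref{NumberPathQn}
q = [0, 0, 1, 3]
for n in range(4, M + 1):
    q.append(2*q[n-1] + q[n-2] + q[n-3]
             + sum(q[i]*(q[n-i-1] - q[n-i-2]) for i in range(2, n-3)) + 1)
r = [0, 1, 2, 5]
for n in range(4, M + 1):
    r.append(3*r[n-1] - r[n-2] + q[n-2]
             + sum(q[i]*(r[n-i-1] - r[n-i-2]) for i in range(2, n-2)))

# the order-six recurrence with polynomial coefficients, solved for r(n+6)
s = [0, 1, 2, 5, 14, 41]
for n in range(M - 5):
    num = (2*n*s[n] - 4*n*s[n+1] + (12 + 5*n)*s[n+2] - 4*(15 + 4*n)*s[n+3]
           + 10*(9 + 2*n)*s[n+4] - 2*(21 + 4*n)*s[n+5])
    s.append((-num) // (6 + n))

print(r == s)  # True
```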
In Theorem \ref{AsymptoticApproximationr(n)} we give an asymptotic approximation for the sequence $r(n)$. To accomplish this goal we use the singularity
analysis method to find the asymptotics of the coefficients of a generating function (see, for example, \cite{flajolet} for the details).
\begin{theorem} \label{AsymptoticApproximationr(n)}
The number of $(-1)$-Dyck paths of semilength $n$ has the asymptotic approximation
$$r(n)\sim \frac{\rho^{-n}}{\sqrt{n^3\pi}}\cdot \dfrac{\sqrt{\rho(4-4\rho-4\rho^3)}}{4(-1+4\rho-2\rho^2)}.$$
\end{theorem}
\begin{proof}
The dominant singularity $\rho$ of the generating function $L(x)$ is the smallest real positive root of $1 - 4 x + 2 x^2 + x^4$. From a symbolic computation we find that
$$\rho=\frac{1}{3} \left(-1-\frac{4\ 2^{2/3}}{\sqrt[3]{13+3 \sqrt{33}}}+\sqrt[3]{2 \left(13+3
\sqrt{33}\right)}\right)\approx 0.295598.$$
From the expression given in \eqref{LxForD1} for $L(x)$ we have
\[
L(x)=\frac{-1+4x-3x^2}{2 (1 - 4 x + 2 x^2)} + \frac{\sqrt{1 - 4 x + 2 x^2 + x^4}}{2 (1 - 4 x + 2 x^2)}
\sim (x-\rho)^{1/2} \frac{\sqrt{\rho(4-4\rho-4\rho^3)}}{2(1-4\rho+2\rho^2)} \quad \text{ as } x\to \rho.
\]
Therefore,
\begin{equation*}
r(n)\sim \frac{n^{-1/2-1}}{\rho^n(-2\sqrt{\pi})} \frac{\sqrt{\rho(4-4\rho-4\rho^3)}}{2(1-4\rho+2\rho^2)}= \frac{\rho^{-n}}{\sqrt{n^3\pi}}
\frac{\sqrt{\rho(4-4\rho-4\rho^3)}}{4(-1+4\rho-2\rho^2)}. \qedhere
\end{equation*}
\end{proof}
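The dominant singularity can be approximated with a few lines of bisection (a numerical sketch; no external libraries):

```python
def p(x):
    return 1 - 4*x + 2*x**2 + x**4

# bisection on (0, 0.5): p(0) = 1 > 0 and p(0.5) = -0.4375 < 0, and p is
# decreasing on this interval, so the root found is the smallest positive one
lo, hi = 0.0, 0.5
for _ in range(80):
    mid = (lo + hi) / 2
    if p(mid) > 0:
        lo = mid
    else:
        hi = mid
rho = (lo + hi) / 2
print(round(rho, 6))  # 0.295598
```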
\section{The Area of the $(-1)$-Dyck paths}
In this section we use generating functions and recursive relations to analyze the distribution of the area of the paths in the set of restricted
$(-1)$-Dyck paths. We recall that the \emph{area} of a Dyck path is the sum of the absolute values of $y$-components of all points in the path.
We use ${\texttt{area}}(P)$ to denote the area
of a path $P$. From Figure \ref{Example} on Page \pageref{Example}, we can see that ${\texttt{area}}(P)=70$. We use $a(n)$
to denote the total area of all paths in ${\mathcal D}_{-1}(n)$. In Theorem \ref{GFAreaDDyckPath} we give a generating function for the sequence $a(n)$.
We now introduce a bivariate generating function depending on this previous parameter and $\ell(P)$ (the semi-length of $P$). So,
$$A(x, q):=\sum_{P\in {\mathcal D}_{-1}}x^{\ell(P)}q^{{\texttt{area}}(P)}.$$
We now recall some terminology needed for the following theorems. Let ${\mathcal Q} \subset {\mathcal D}_{-1}$ be the set formed by all paths having at least one valley, where
the last valley is at ground level; let ${\mathcal Q}_n\subset {\mathcal Q}$ be the set formed by all paths of semi-length $n$, and let $q_n=|{\mathcal Q}_n|$.
\begin{theorem}\label{GFAreaDDyckPath}
The generating function for the sequence $a(n)$ is given by
\begin{align*}
V(x)&=\sum_{n\geq 0} a(n)x^n=\frac{b(x) - c(x)\sqrt{1-4x+2x^2+x^4}}{(1- x)^2 (1 - 4 x + 2 x^2)^3 (1 - 3 x - x^2 - x^3)},
\end{align*}
where
\begin{align*}
b(x)&=2 x - 23 x^2 + 107 x^3 - 262 x^4 + 359 x^5 - 256 x^6 + 82 x^7 -
5 x^8 - 10 x^9 + 6 x^{10},\\
c(x)&=x - 10 x^2 + 41 x^3 - 89 x^4 + 108 x^5 - 73 x^6 + 18 x^7 + 2 x^8.
\end{align*}
\end{theorem}
\begin{proof}
From the decomposition $UD, \, UTD, \, U\Delta DT,$ or $UQDT$ given in the proof of Theorem \ref{teodnegativo} we obtain the functional equation
\begin{align}\label{eqA1}
A(x,q)=xq+xqA(xq^2,q)+ E(x,q)A(x,q) + xqB(xq^2,q)A(x,q),
\end{align}
where $E(x,q):=\sum_{j\geq 1}x^jq^{j^2}$ and $B(x,q):=\sum_{P\in {\mathcal Q}}x^{\ell(P)}q^{{\texttt{area}}(P)}$.
Note that $E(x,q)$ corresponds to the generating function that counts the total number of non-empty pyramids in the given decomposition.
From the decomposition given in Figure \ref{Fig4}, we obtain the functional equation
\begin{align}\label{eqB1}
B(x,q)=E(x,q)^2+E(x,q)B(x,q)+xqB(q^2x,q)B(x,q) + xqB(q^2x,q)E(x,q).
\end{align}
Let $M(x)$ be the generating function of the total area of the $(-1)$-Dyck paths in ${\mathcal Q}$. From the definition of $A(x,q)$ and $B(x,q)$ we have
$$V(x)=\left.\frac{\partial A(x,q)}{\partial q}\right|_{q=1} \quad \text{and} \quad M(x)=\left.\frac{\partial B(x,q)}{\partial q}\right|_{q=1}.$$
Therefore, differentiating \eqref{eqB1} with respect to $q$ we obtain,
\begin{multline*}
M(x)=\frac{2x^2(1+x)}{(1-x)^4}+\frac{x(x+1)}{(1-x)^3}Q(x) + \frac{x}{1-x}M(x) + xQ(x)^2 + \\
x\left(M(x) + 2x \frac{\partial Q(x)}{\partial x}\right)\left(Q(x) + \frac{x}{1-x}\right)
+ xQ(x)\left(M(x) + \frac{x}{1-x}\right) + xQ(x)\frac{x(x+1)}{(1-x)^3},
\end{multline*}
where $Q(x):=Q(x,1)$ and $Q(x,y)$ is the generating function given in \eqref{gFunQ} on Page \pageref{gFunQ}.
Now, differentiating \eqref{eqA1} with respect to $q$ we obtain,
\begin{multline}\label{eqA2}
V(x)=x + xL(x) + x\left(V(x) + 2x\frac{\partial L(x)}{\partial x}\right) + \frac{x(x+1)}{(1-x)^3}L(x)\\+\frac{x}{1-x}V(x) + xQ(x)L(x) + x \left.\frac{\partial B(xq^2,q)}{\partial q}\right|_{q=1} L(x) + xQ(x)V(x).
\end{multline}
Solving \eqref{eqB1} for $B(x,q)$ and substituting into \eqref{eqA2} and then solving the resulting expression for $V(x)$ we obtain the desired result.
\end{proof}
The first few values of the series of $V(x)$ are
\begin{align*}
V(x)= \sum_{n\geq 1}a(n)x^n
&=x + 6 x^2 + 29 x^3 + 130 x^4 + 547 x^5 + 2198 x^6 + 8551 x^7 +
32508 x^8 + \cdots.
\end{align*}
We recall that for simplicity we use $r(n)$ instead of $r_{-1}(n)$.
\begin{theorem} If $n\ge 1$, then the following hold.
\begin{enumerate}
\item \label{AreaPart1} If $A_n$ is the total area of all paths in ${\mathcal Q}_n$, then
\begin{multline*}A_n=2 A_{n-1}+A_{n-2}+2 A_{n-3}+q_{n}-q_{n-1}+2n q_{n-2} +2(n-5) q_{n-3}+4 n^2-14 n+13+\\
\sum _{i=2}^{n-4} 2(A_{i}+i q_{i} +i(i+1)) (q_{n-i-1}-q_{n-i-2}), \quad n>4,
\end{multline*}
with the initial values $A_1=0$, $A_2=2$, $A_3=13$, and $A_4=58$.
\item \label{AreaPart2} The sequence $a(n)$ satisfies the recursive relation
\begin{multline*}
a(n)=3 a(n-1)-a(n-2)+A_{n-2}+2(n-1) q_{n-2}+2 n r(n-1)+2(3-n) r(n-2)\\
-4 r(n-3)+(n-1)^2+ \sum _{i=3}^{n-2} q_{i-1} (a(n-i)-a(n-i-1))\\
+\sum _{i=3}^{n-2} \left(A_{i-1}+(2 i-1) q_{i-1}+i^2\right) (r(n-i)-r(n-i-1)).
\end{multline*}
\end{enumerate}
\end{theorem}
\begin{proof}
We prove Part \eqref{AreaPart1}, constructing a recursive relation for the total area of ${\mathcal Q}_{n}$. This part of the proof is divided into four cases, and we use $P\setminus T$ to denote the subpath resulting after removing the subpath $T$ from the path $P$.
\textbf{Case 1}. We observe that for a fixed $i \in \{1, 2, \dots, n-1\}$ there is exactly one path in ${\mathcal Q}_{n}$ of the form $X^iY^iX^{n-i}Y^{n-i}$, whose
area is equal to $i^2 +(n-i)^2$. So, the total area of these types of paths is $\sum_{i=1}^{n-1} (i^2+(n-i)^2)=n(n-1) (2 n-1)/3$.
\textbf{Case 2}. In this case, we find the area of all paths of the form $P_i:=X^iY^iQ$. Note that in $P_1$ the first pyramid is of height one and $Q \in {\mathcal Q}_{n-1}$
and in $P_{n-2}$ the first pyramid is of height $n-2$ and $Q \in {\mathcal Q}_{2}$. These give that all first pyramids $(XY)^i$ run for $i \in \{1,2, \dots, n-2\}$ and
${\mathcal Q}_{j}$ runs for $j \in \{2, \dots, n-1\}$.
Now from the definition of $P_i$, we have that for a fixed $i$ there are $q_{n-i}$ paths of form $P_i$ (that is, having a pyramid $(XY)^{i}$ in the beginning of the path).
So, the contribution to the area given by all first pyramids of the form $(XY)^i$, overall paths of the form $P_i$, is equal to $i^2\times q_{n-i}$.
This and the fact that $A_{n-i}$ is the total area of ${\mathcal Q}_{n-i}$ imply that the total area of all paths of the form $P_i$ is given by $i^2 q_{n-i}+A_{n-i}$.
Therefore, the total area of these types of paths is $\sum_{i=1}^{n-2} i^2 q_{n-i}+\sum_{j=2}^{n-1}A_{j}$.
\textbf{Case 3}. In this case we find the area of all paths of the form $H_i:=XQ_{\ell}Y(XY)^{i}$ where $Q_{\ell} \in {\mathcal Q}_{n-i-1}$.
Note that similar to the Case 2, the last pyramids $(XY)^i$ run for $i \in \{1,2, \dots, n-3\}$ and ${\mathcal Q}_{j}$ runs for $j \in \{2, \dots, n-2\}$.
We now observe that for a fixed $i$ there are $q_{n-i-1}$ paths of form $H_i$ (that is, having a pyramid $(XY)^{i}$ in the end of the path).
The contribution to the area given by all last pyramids of the form $(XY)^{i}$, overall paths of the form $H_i$, is equal to $i^2\times q_{n-i-1}$.
We analyze the contribution to the desired area given by $XQ_{\ell}Y= H_{n-i-1}\setminus (XY)^{n-i-1} $ with $Q_{\ell} \in {\mathcal Q}_{i}$. For a fixed $i \in \{2, 3, \dots, n-2\}$ there are $q_{i}$ paths of form $H_{n-i-1}$ having a first subpath of the form $XQ_{\ell}Y$.
Note that $X$ and $Y$ give rise to a trapezoid, where the two parallel sides have lengths $2i$ and $2i+2$, giving rise to an area of $2i+1$.
So, for a fixed $i$, the contribution to the area given by all first subpaths of the form $XQ_{\ell}Y$ is equal to the area of the trapezoids plus the area of all
paths of the form $Q_{\ell}$ (these are on top of the trapezoids). That is, the area of a trapezoid multiplied by the total number of the paths of the
form $Q_{\ell}$, plus the area of all paths of the form $Q_{\ell}$. Thus, the contribution to the area given by first subpaths of the form $XQ_{\ell}Y$
(overall paths of the form $H_i$, for a fixed $i$), is $((2i+1) \times q_{i} + A_i)$.
We conclude that the total area of these types of paths is $$\sum_{i=1}^{n-3} i^2\times q_{n-i-1}+\sum_{i=2}^{n-2} ((2i+1) \times q_{i} + A_i).$$
\textbf{Case 4}. Finally, we find the area of all paths of the form $T_i:=XQ^{\prime}YQ^{\prime\prime}$ where $Q^{\prime} \in {\mathcal Q}_{i}$ and
$Q^{\prime\prime} \in {\mathcal Q}_{n-i-1}$ for $i \in \{2, 3, \dots, n-3\}$. First of all, we analyze the contribution to the desired area given by all paths of the form
$Q^{\prime\prime} \in {\mathcal Q}_{n-i-1}$ (overall paths of the form $T_i$ for a fixed $i$). Since $Q^{\prime} \in {\mathcal Q}_{i}$, we know that for a given path
$Q \in {\mathcal Q}_{n-i-1}$ there are as many paths of the form $XQ^{\prime}YQ$ as paths in ${\mathcal Q}_{i}$. Thus, for a fixed $i \in \{2, 3, \dots, n-3\}$ we find the
area given by all subpaths $T_i \setminus XQ^{\prime}Y$ for every $Q^{\prime} \in {\mathcal Q}_{i}$. Thus, the area of all subpaths of the form
$Q^{\prime\prime} \in {\mathcal Q}_{n-i-1}$ is, clearly, equal to $A_{n-i-1} q_{i}$.
We now analyze the contribution to the desired area given by all subpaths of the form $XQ^{\prime}Y$. That is, the area of all subpaths
$T_i\setminus Q^{\prime\prime}$ (overall paths of the form $T_i$ for a fixed $i$). It is easy to see that for a fixed $i \in \{2, 3, \dots, n-3\}$
there are $q_{n-i-1}$ subpaths of the form $XQ^{\prime}Y$. Note that $X$ and $Y$ give rise to a trapezoid, where the two parallel sides
have lengths $2i$ and $2i+2$, giving rise to an area of $2i+1$. So, the contribution to the area given by the first subpaths of the form $XQ^{\prime}Y$
is equal to the area of the trapezoids plus the area of all paths of the form $Q^{\prime}$ (these are on top of the trapezoids). Thus, the area of a
trapezoid multiplied by the total number of the paths of the form $Q^{\prime}$ plus the area of all paths of the form $Q^{\prime}$ and then all of
these multiplied by the total number of paths of the form $Q^{\prime\prime}$. Thus, the contribution to the area given by the first subpaths of the
form $XQ^{\prime}Y$ (overall paths of the form $T_i$ for a fixed $i$), is $((2i+1) \times q_{i} q_{n-i-1} + A_i q_{n-i-1})$.
We conclude that the total area of these types of paths is $$\sum_{i=2}^{n-3} A_{n-i-1} q_{i}+\sum_{i=2}^{n-3} ((2i+1) \times q_{i} q_{n-i-1}+A_i q_{n-i-1}).$$
Adding the results from Cases 1-4, we obtain that the recursive relation for the area $A_n$ is given by
\begin{multline*}
A_n=\sum _{i=1}^{n-1} \left(i^2+(n-i)^2\right)+\sum _{i=1}^{n-2} i^2 q_{n-i}+\sum _{i=2}^{n-1} A_{i}+\sum _{i=2}^{n-3} (2 i+1) q_{i} q_{n-(i+1)}+\sum _{i=2}^{n-3} A_{i} q_{n-(i+1)}+\\
\sum _{i=2}^{n-3} A_{i} q_{n-(i+1)}+\sum _{i=2}^{n-2} A_{i}+\sum _{i=1}^{n-3} i^2 q_{n-(i+1)}+\sum _{i=2}^{n-2} (2 i+1) q_{i}.
\end{multline*}
Subtracting $A_n$ from $A_{n+1}$ and simplifying we have
\begin{multline*}A_n=2 A_{n-1}+A_{n-2}+2 A_{n-3}+(2 n-5) q_{n-3}+(2 n-4) q_{n-2}+q_{n-1}+4 n^2-14 n+15+\\
\sum _{i=2}^{n-4} (2 A_{i}+(2 i+1) q_{i}) (q_{n-i-1}-q_{n-i-2})+\sum _{i=2}^{n-3} \left(2 i^2-2 i+1\right) (q_{n-i}-q_{n-i-1}).
\end{multline*}
We now rearrange this expression to obtain $q_n$ (see the expression within brackets) given in Corollary \ref{NumberPathQn}
\begin{multline*} A_n=2 A_{n-1}+A_{n-2}+2 A_{n-3}+(2 n-6) q_{n-3}+(2 n-4) q_{n-2}-q_{n-1}+4 n^2-14 n+13+\\
\sum _{i=2}^{n-4} 2( A_{i}+ i q_{i}) (q_{n-i-1}-q_{n-i-2})+\sum _{i=2}^{n-3} 2\left( i^2- i\right) (q_{n-i}-q_{n-i-1}) \\
+[2 q_{n-1}+q_{n-2}+q_{n-3} +\sum _{i=2}^{n-4} q_{i} (q_{n-i-1}-q_{n-i-2})+1].
\end{multline*}
After some simplifications we obtain the desired recursive relation.
Proof of Part \eqref{AreaPart2}. This part is similar to Part \ref{AreaPart1}. However, in this proof we need to use:
${\mathcal D}_{-1}(j)$, $r(i)=|{\mathcal D}_{-1}(i)|$, ${\mathcal Q}_{j}$, $q_{j}=|{\mathcal Q}_{j}|$, and $A_t$.
\textbf{Case 1}. We find the area of all paths of the form $XQY$, where $Q \in {\mathcal D}_{-1}(n-1)$. Note that $X$ and $Y$ give rise to a trapezoid
of area equal to $2n-1$; this area multiplied by $r(n-1)=|{\mathcal D}_{-1}(n-1)|$ gives that the total area of the trapezoids is $(2n-1)r(n-1)$. The total area of all
paths of the form $XQY$ is given by the area of all trapezoids and the area of all paths that are on top of the trapezoids. That is, the area of these
types of paths is $(2n-1)r(n-1)+ a(n-1)$.
\textbf{Case 2}. In this case, we find the area of all paths of the form $K_i:=X^iY^iQ_{\ell}$, where $Q_{\ell}\in {\mathcal D}_{-1}(n-i)$ and $i \in \{1,2, \dots, n-1\}$.
Since $r(n-i)=| {\mathcal D}_{-1}(n-i)|$, we conclude that for a fixed $i$ there are $r(n-i)$ paths of form $K_i$. So, the contribution to the area given by all first
pyramids of the form $(XY)^i$, overall paths of the form $K_i$, is equal to
$i^2\times r(n-i)$. This and the fact that $a(n-i)$ is the total area of ${\mathcal D}_{-1}(n-i)$ imply that the total area of all paths of the form $K_i$ is given by
$i^2\times r(n-i)+a(n-i)$. Therefore, the total area of these types of paths is $\sum_{i=1}^{n-1} \left(i^2 r(n-i)+a(n-i)\right)$.
\textbf{Case 3}. Finally, we find the area of all paths of the form $M_i:=XQ^{\prime}YD$ where $Q^{\prime} \in {\mathcal Q}_{i}$ and
$D \in{\mathcal D}_{-1}(n-i-1)$ for $i \in \{2, 3, \dots, n-2\}$. First of all, we analyze the contribution to the desired area given by all paths
$D \in{\mathcal D}_{-1}(n-i-1)$ (overall paths of the form $M_i$ for a fixed $i$). Since $Q^{\prime}\in{\mathcal Q}_{i}$, we know that for a given path
$D^{\prime} \in{\mathcal D}_{-1}(n-i-1)$ there are as many paths of the form $XQ^{\prime}YD^{\prime}$ as paths in ${\mathcal Q}_{i}$. Thus, for a fixed
$i \in \{2, 3, \dots, n-2\}$ we find the area given by all subpaths $M_i \setminus XQ^{\prime}Y$ for every $Q^{\prime}\in{\mathcal Q}_{i}$. That is, the area of
all subpaths of $M_i$ of the form $D \in {\mathcal D}_{-1}(n-i-1)$ is equal to $a(n-i-1) q_{i}$.
We now analyze the contribution to the desired area given by all subpaths of the form $XQ^{\prime}Y$ for every $Q^{\prime}\in{\mathcal Q}_{i}$ .
That is, the area of all subpaths $M_i\setminus D$ (overall paths of the form $M_i$ for a fixed $i$).
It is easy to see that for a fixed $i \in \{2, 3, \dots, n-2\}$ there are $r(n-i-1)$ subpaths of the form $XQ^{\prime}Y$. Note that $X$ and $Y$ give rise
to a trapezoid, where the two parallel sides have lengths $2i$ and $2i+2$, giving rise to an area of $2i+1$. So, the contribution to the area given by
the first subpaths of the form $XQ^{\prime}Y$ is equal to the area of the trapezoids plus the area of all paths of the form $Q^{\prime}$ (these are on top of
the trapezoids). Thus, the area of a trapezoid multiplied by the total number of the paths of the form $Q^{\prime}$ plus the area of all paths of the form
$Q^{\prime}$ and then all of these multiplied by the total number of paths of the form $D$. Thus, the contribution to the area given by the first subpaths
of the form $XQ^{\prime}Y$ (overall paths of the form $M_i$ for a fixed $i$), is $((2i+1) \times q_{i} r(n-i-1) + A_i r(n-i-1))$.
We conclude that the total area of these types of paths is $$\sum_{i=2}^{n-2} A_{i} r(n-i-1)+\sum_{i=2}^{n-2} (2i+1) \times q_{i} r(n-i-1).$$
Adding the results from Cases 1-3, we obtain that the recursive relation for the area $a(n)$ is given by
\begin{multline*} a(n)=a(n-1)+(2 n-1) r(n-1)+ \sum _{i=1}^{n-1} i^2 r(n-i)+\sum _{i=1}^{n-1} a(n-i) \\
+\sum _{i=2}^{n-2} q_{i} a(n-i-1)+\sum _{i=2}^{n-2} A_{i} r(n-i-1)+\sum _{i=2}^{n-2} (2i+1) q_{i} r(n-i-1).
\end{multline*}
Subtracting $a(n)$ from $a(n+1)$ and simplifying we have
\begin{multline*}
a(n)=3 a(n-1)-a(n-2)+A_{n-2}+2(n-1) q_{n-2}+(2 n-1) r(n-1)+(3-2 n) r(n-2)+(n-1)^2\\
+\sum _{i=3}^{n-2} q_{i-1} (a(n-i)-a(n-i-1))+\sum _{i=3}^{n-2} A_{i-1} (r(n-i)-r(n-i-1))\\
+\sum _{i=3}^{n-2} (2 i-1) q_{i-1} (r(n-i)-r(n-i-1))+\sum _{i=1}^{n-2} i^2 (r(n-i)-r(n-i-1)).
\end{multline*}
After some other simplifications we have that
\begin{multline*}
a(n)=3 a(n-1)-a(n-2)+A_{n-2}+2(n-1) q_{n-2}+2 n r(n-1) \\
+2(3-n) r(n-2)-4 r(n-3)+(n-1)^2+\sum _{i=3}^{n-2} q_{i-1} (a(n-i)-a(n-i-1))\\
+\sum _{i=3}^{n-2} \left(A_{i-1}+(2 i-1) q_{i-1}+i^2\right) (r(n-i)-r(n-i-1)).
\end{multline*}
This completes the proof.
\end{proof}
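Both recursions can be checked against the series of $V(x)$ displayed above; the sketch below uses, as initial data, $A_1=0$, $A_2=2$, $A_3=13$, $A_4=58$ from Part (1), and takes $a(1)=1$, $a(2)=6$, $a(3)=29$ from the series of $V(x)$.

```python
M = 8

# q_n and r(n) from Corollary \ref{NumberPathQn}
q = [0, 0, 1, 3]
for n in range(4, M + 1):
    q.append(2*q[n-1] + q[n-2] + q[n-3]
             + sum(q[i]*(q[n-i-1] - q[n-i-2]) for i in range(2, n-3)) + 1)
r = [0, 1, 2, 5]
for n in range(4, M + 1):
    r.append(3*r[n-1] - r[n-2] + q[n-2]
             + sum(q[i]*(r[n-i-1] - r[n-i-2]) for i in range(2, n-2)))

# Part (1): A_n, the total area over Q_n
A = [0, 0, 2, 13, 58]
for n in range(5, M + 1):
    A.append(2*A[n-1] + A[n-2] + 2*A[n-3] + q[n] - q[n-1] + 2*n*q[n-2]
             + 2*(n-5)*q[n-3] + 4*n*n - 14*n + 13
             + sum(2*(A[i] + i*q[i] + i*(i+1))*(q[n-i-1] - q[n-i-2])
                   for i in range(2, n-3)))

# Part (2): a(n), the total area over D_{-1}(n); a(1), a(2), a(3) read off V(x)
a = [0, 1, 6, 29]
for n in range(4, M + 1):
    a.append(3*a[n-1] - a[n-2] + A[n-2] + 2*(n-1)*q[n-2] + 2*n*r[n-1]
             + 2*(3-n)*r[n-2] - 4*r[n-3] + (n-1)**2
             + sum(q[i-1]*(a[n-i] - a[n-i-1]) for i in range(3, n-1))
             + sum((A[i-1] + (2*i-1)*q[i-1] + i*i)*(r[n-i] - r[n-i-1])
                   for i in range(3, n-1)))

print(a[1:])  # [1, 6, 29, 130, 547, 2198, 8551, 32508]
```

The output reproduces the printed coefficients of $V(x)$.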
Notice that the total area of the Dyck paths (cf. \cite{Woan}) is given by $4^n-\binom{2n+1}{n}$.
\section{Acknowledgments}
The first author was partially supported by The Citadel Foundation.
The third author was partially supported by Universidad Nacional de Colombia.
% arXiv:2108.08299, ``Restricted Dyck Paths on Valleys Sequence'' (math.CO)
% arXiv:1907.01634
\title{On some P-Q mixed modular equations of degree 5}
\begin{abstract}
In his second notebook, Ramanujan recorded a total of 23 $P$--$Q$ modular equations involving the theta-functions $f(-q)$, $\varphi(q)$ and $\psi(q)$. In this paper, modular equations analogous to those recorded by Ramanujan are obtained involving $f(-q)$. As a consequence, values of certain quotients of theta-functions are evaluated.
\end{abstract}
\section{Introduction}
For $|q|<1,$ let $(a;q)_\infty$
denote the infinite product $\displaystyle \prod_{n=0}^\infty(1-aq^{n})$, where $a$, $q$ are complex numbers, and let $f(a,b)$ be the Ramanujan theta-function:
\begin{equation*}
f(a,b):=\sum_{n=-\infty}^{\infty}a^{n(n+1)/2} b^{n(n-1)/2},\,\,\,|ab|<1.
\end{equation*}
The following theta-functions $\varphi$, $\psi$ and $f$ arise as special cases of $f(a,b)$:
\begin{align}
\varphi(q)&:=f(q,q)=\sum_{n=-\infty}^{\infty}q^{n^2},\\
\psi(q)&:=f(q,q^3) =\sum_{n=0}^{\infty}q^{n(n+1)/2},\\
f(-q)&:=f(-q,-q^2)=\sum_{n=-\infty}^{\infty}(-1)^n q^{n(3n-1)/2}.
\end{align}
The ordinary or Gaussian hypergeometric function is defined by
$$_2F_1(a,b;c;z):=\sum_{n=0}^{\infty}\frac{\lb(a\rb)_n \lb(b\rb)_n}{\lb(c\rb)_n n!}z^n,\ \ \ 0\leq|z|<1,$$
where $a$, $b$, $c$ are complex numbers, $c\neq0,-1,-2,\ldots$, and $$(a)_0=1,\ \ (a)_n=a(a+1)\cdots(a+n-1)\ \ \textrm{for any positive integer }n.$$
Let $K(k)$ be the complete elliptic integral of the first kind of modulus $k$. Recall that
\begin{equation}\label{ee11}
K(k):=\int_0^{\frac{\pi}{2}}\frac{d\phi}{\sqrt{1-k^2\sin^2\phi}}
=\frac{\pi}{2}\sum_{n=0}^{\infty}\frac{\left(\frac{1}{2}\right)^2_n}{\left(n!\right)^2}k^{2n}
=\frac{\pi}{2}\varphi^2(q),\,\,\,\,\,(0<k<1),
\end{equation}
and set $K'=K(k')$, where $k'=\sqrt{1-k^2}$ is the so-called complementary modulus of $k$. It is classical to set $q(k)=e^{-\pi K(k')/K(k)}$, so that $q$ is a one-to-one function of $k$, increasing from 0 to 1.
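Relation \eqref{ee11} is easy to verify numerically; the sketch below compares the hypergeometric series with a midpoint-rule evaluation of the integral at the sample value $k=0.6$ (the sample value and step counts are arbitrary choices):

```python
from math import pi, sin, sqrt

def K_series(k, terms=200):
    # (pi/2) * sum_{n>=0} ((1/2)_n / n!)^2 k^(2n), built term by term
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= ((n + 0.5) / (n + 1))**2 * k * k
    return pi / 2 * total

def K_integral(k, steps=200_000):
    # midpoint rule for the complete elliptic integral of the first kind
    h = (pi / 2) / steps
    return h * sum(1.0 / sqrt(1.0 - (k * sin((j + 0.5) * h))**2)
                   for j in range(steps))

print(abs(K_series(0.6) - K_integral(0.6)) < 1e-9)  # True
```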
\noindent In the same manner introduce $L_1=K(\ell_1), {L'_1}=K(\ell'_1) $ and suppose that the following equality
\begin{equation}\label{ee12}
n_1\frac{K'}{K}=\frac{L'_1}{L_1}
\end{equation}
holds for some positive integer $n_1$. Then a modular equation of degree $n_1$ is a relation between the moduli $k$ and $\ell_1$ which is induced by \eqref{ee12}. Following Ramanujan, set $\alpha=k^2$ and $\beta=\ell_1^2$. We say that $\beta$ is of degree $n_1$ over $\alpha$. The multiplier $m$, corresponding to the degree $n_{1}$, is defined by
\begin{equation}\label{a1}
m=\frac{K}{L_1}=\frac{\varphi^2(q)}{\varphi^2(q^{n_1})},
\end{equation}
for $q=e^{-\pi K(k')/K(k)}$.
\noindent Let $K$, $K'$, $L_1$, $L_1'$, $L_2$, $L_2'$, $L_3$ and $L_3'$ denote complete elliptic integrals of the first kind corresponding, in pairs, to the moduli $\sqrt{\alpha}$, $\sqrt{\beta}$, $\sqrt{\gamma}$ and $\sqrt{\delta}$, and their complementary moduli, respectively. Let $n_1$, $n_2$ and $n_3$ be positive integers such that $n_3=n_1n_2$. Suppose that the equalities
\begin{equation}\label{ee14}
n_1\frac{K'}{K}=\frac{L_1'}{L_1},\ \ n_2\frac{K'}{K}=\frac{L_2'}{L_2} \ \ \textrm {and}\ \ n_3\frac{K'}{K}=\frac{L_3'}{L_3},
\end{equation}
hold. Then a ``mixed'' modular equation is a relation between the moduli $\sqrt{\alpha}$, $\sqrt{\beta}$, $\sqrt{\gamma}$ and $\sqrt{\delta}$ that is induced by \eqref{ee14}. We say that $\beta$, $\gamma$ and $\delta$ are of degrees $n_1$, $n_2$ and $n_3$, respectively, over $\alpha$. The multipliers $m=K/L_1$ and $m'=L_2/L_3$ satisfy algebraic relations involving $\alpha$, $\beta$, $\gamma$ and $\delta$.
At scattered places of his second notebook \cite{SR2}, Ramanujan recorded a total of nine $P$--$Q$ ``mixed'' modular relations of degrees 1, 3, 5 and 15. These relations were proved by B. C. Berndt and L. -C. Zhang \cite{BCBLCZ1}, \cite{BCBLCZ2} and the same has been reproduced in the book by Berndt \cite[pp. 214-235]{BCB2}. In \cite{BAM1}, S. Bhargava, C. Adiga and M. S. Mahadeva Naika have established several new $P$--$Q$ ``mixed'' modular relations with four moduli. For more information one can see \cite{MSMCKSBH} and \cite{MSMCKSBH1}. Motivated by all these works, we establish some new modular equations of ``mixed'' degrees and as an application, we establish some new general formulas for the explicit evaluations of a remarkable product of theta function.
In Section \ref{sec2}, we collect some identities which are useful in proofs of our main results. In Section \ref{sec3}, we establish several new modular equations of degree 5. In Section \ref{sec4}, we establish several new $P$--$Q$ ``mixed'' modular equations akin to those recorded by Ramanujan in his notebooks.
Mahadeva Naika, M. C. Maheshkumar and Bairy \cite{MSMMCMKSB}, have defined a new remarkable product of theta-functions $b_{s,t}$:
\begin{equation}\label{bmn}
b_{s,\,\,t}= \frac{ t e^{ \frac{-(t-1)\pi}{4}
\sqrt{\frac{s}{t}}}\psi^2\left(-e^{-\pi\sqrt{st}}\right)
\varphi^2\left(-e^{-2\pi\sqrt{st}}\right)}{\psi^2\left(-e^{-\pi\sqrt{\frac{s}{t}}}\right)
\varphi^2\left(-e^{-2\pi\sqrt{\frac{s}{t}}}\right)},
\end{equation}
where $s$, $t$ are real numbers such that $s>0$ and $t\geq1.$ They have established some new general formulas for the explicit evaluations of $b_{s,t}$ and computed some particular values of $b_{s,t}$. In Section \ref{sec5}, we establish some new modular relations connecting a remarkable product of theta-functions $b_{s,5}$ with $b_{r^2s,5}$ for $r=$ 2, 4 and 6 and explicit values of $b_{s,5}$ are deduced.
\section{Preliminary results}\label{sec2}
In this section, we list some of the relevant identities which are useful in the proofs of our results.
\begin{lemma}\cite[Ch. 17, Entry 12 (i) and (iii), p. 124]{BCB1}
For\, $0<x<1$, let
\begin{eqnarray}
&&f(e^{-y})=\sqrt z2^{-1/6}\{x(1-x)e^y\}^{1/24},\label{b1}\\&&
f(e^{-2y})=\sqrt z2^{-1/3}\{x(1-x)e^y\}^{1/12},\label{b2}
\end{eqnarray}\end{lemma}
\textrm{where} $z:= \ _2F_1(\frac{1}{2},\frac{1}{2};1;x)$ and $\displaystyle y:=\pi\frac{_2F_1(\frac{1}{2},\frac{1}{2};1;1-x)}{_2F_1(\frac{1}{2},\frac{1}{2};1;x)}.$
\begin{lemma} \cite[Ch. 16, Entry 24 (ii) and (iv), p. 39]{BCB1} We have
\begin{eqnarray}
&&f^3(-q)=\varphi^2(-q)\psi(q),\label{b10}\\
&&f^3(-q^2)=\varphi(-q)\psi^2(q).\label{b11}
\end{eqnarray}
\end{lemma}
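Identities \eqref{b10} and \eqref{b11} can be verified on truncated $q$-expansions; the sketch below manipulates coefficient lists modulo $q^{40}$, using only the series definitions of $\varphi$, $\psi$ and $f$ given above:

```python
M = 40  # work modulo q^M

def phi_neg():
    # phi(-q) = 1 + 2 * sum_{n>=1} (-1)^n q^(n^2)
    c = [0] * M
    c[0] = 1
    n = 1
    while n * n < M:
        c[n * n] += 2 * (-1) ** n
        n += 1
    return c

def psi_series():
    # psi(q) = sum_{n>=0} q^(n(n+1)/2)
    c = [0] * M
    n = 0
    while n * (n + 1) // 2 < M:
        c[n * (n + 1) // 2] += 1
        n += 1
    return c

def f_neg():
    # f(-q) = 1 + sum_{n>=1} (-1)^n (q^(n(3n-1)/2) + q^(n(3n+1)/2))
    c = [0] * M
    c[0] = 1
    n = 1
    while n * (3 * n - 1) // 2 < M:
        for e in (n * (3 * n - 1) // 2, n * (3 * n + 1) // 2):
            if e < M:
                c[e] += (-1) ** n
        n += 1
    return c

def mul(a, b):
    # product of two series truncated modulo q^M
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b[:M - i]):
                c[i + j] += ai * bj
    return c

P, S, F = phi_neg(), psi_series(), f_neg()
ok10 = mul(mul(F, F), F) == mul(mul(P, P), S)            # f(-q)^3 = phi(-q)^2 psi(q)
F2 = [F[i // 2] if i % 2 == 0 else 0 for i in range(M)]  # f(-q) -> f(-q^2)
ok11 = mul(mul(F2, F2), F2) == mul(P, mul(S, S))         # f(-q^2)^3 = phi(-q) psi(q)^2
print(ok10, ok11)  # True True
```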
\begin{lemma} \cite[Ch. 19, Entry 13 (xii) and (vii), pp. 281-282]{BCB1}\\
If $\beta$ is of degree 5 over $\alpha$, then
\begin{eqnarray}
&&\lb(\frac{\beta}{\alpha}\rb)^{1/4}+\lb(\frac{1-\beta}{1-\alpha}\rb)^{1/4} -\lb(\frac{\beta\lb(1-\beta\rb)}{\alpha\lb(1-\alpha\rb)}\rb)^{1/4}=m,\label{b4}\\&&
\lb(\frac{\alpha}{\beta}\rb)^{1/4}+\lb(\frac{1-\alpha}{1-\beta}\rb)^{1/4} -\lb(\frac{\alpha\lb(1-\alpha\rb)}{\beta\lb(1-\beta\rb)}\rb)^{1/4}=\frac{5}{m},\label{b5}
\end{eqnarray}\end{lemma}
where $m$ is the multiplier for degree 5.
\begin{lemma}\cite[p. 55]{SR1}
If $X:=\dfrac{f(-q)}{q^{1/6}f(-q^{5})}$ and $Y:=\dfrac{f(-q^2)}{q^{1/3}f(-q^{10})},$
then
\begin{equation}\label{b16}
XY+\frac{5}{XY}=\left(\frac{Y}{X}\right)^3+\left(\frac{X}{Y}\right)^3.
\end{equation}
\end{lemma}
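Recalling Euler's identity $f(-q)=(q;q)_\infty$, equation \eqref{b16} can be spot-checked in floating point; the sketch below evaluates both sides at the arbitrary sample point $q=0.05$, truncating the infinite products at 400 factors:

```python
q0 = 0.05

def f_neg(q, factors=400):
    # Euler: f(-q) = (q; q)_inf = prod_{n>=1} (1 - q^n), truncated
    out = 1.0
    for n in range(1, factors + 1):
        out *= 1.0 - q**n
    return out

X = f_neg(q0) / (q0**(1/6) * f_neg(q0**5))
Y = f_neg(q0**2) / (q0**(1/3) * f_neg(q0**10))

lhs = X*Y + 5/(X*Y)
rhs = (Y/X)**3 + (X/Y)**3
print(abs(lhs - rhs) < 1e-9)  # True
```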
\begin{lemma}\cite[p. 55]{SR1}
If $X:=\dfrac{f(-q)}{q^{1/6}f(-q^{5})}$ and $Y:=\dfrac{f(-q^3)}{q^{1/2}f(-q^{15})},$
then
\begin{equation}\label{b7}
(XY)^3+\lb(\frac{5}{XY}\rb)^3+\lb[\lb(\frac{X}{Y}\rb)^6- \lb(\frac{Y}{X}\rb)^6\rb]+9\lb[\lb(\frac{X}{Y}\rb)^3+\lb(\frac{Y}{X}\rb)^3\rb]=0.
\end{equation}
\end{lemma}
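Equation \eqref{b7} admits the same kind of floating-point spot-check (sample point $q=0.05$, products truncated at 400 factors):

```python
q0 = 0.05

def f_neg(q, factors=400):
    # Euler: f(-q) = (q; q)_inf = prod_{n>=1} (1 - q^n), truncated
    out = 1.0
    for n in range(1, factors + 1):
        out *= 1.0 - q**n
    return out

X = f_neg(q0) / (q0**(1/6) * f_neg(q0**5))
Y = f_neg(q0**3) / (q0**(1/2) * f_neg(q0**15))

lhs = ((X*Y)**3 + (5/(X*Y))**3
       + ((X/Y)**6 - (Y/X)**6)
       + 9*((X/Y)**3 + (Y/X)**3))
print(abs(lhs) < 1e-6)  # True
```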
\begin{lemma}\cite[p. 55]{SR1}
If $X:=\dfrac{f(-q)}{q^{1/6}f(-q^{5})}$ and $Y:=\dfrac{f(-q^4)}{q^{2/3}f(-q^{20})},$
then
\begin{equation}\begin{split}\label{b8}
&(XY)^3+\lb(\frac{5}{XY}\rb)^3=\lb(\frac{X}{Y}\rb)^5+\lb(\frac{Y}{X}\rb)^5 -8\lb\{\lb(\frac{X}{Y}\rb)^3+\lb(\frac{Y}{X}\rb)^3\rb\}\\&+4\lb(\frac{X}{Y}+\frac{Y}{X}\rb) +4\big/\lb(\frac{X}{Y}+\frac{Y}{X}\rb).
\end{split}\end{equation}
\end{lemma}
\begin{lemma}\cite{MSM5}
If $P:=\dfrac{\varphi(-q)}{\varphi(-q^5)}$ and $Q:=\dfrac{\varphi(-q^4)}{\varphi(-q^{20})},$
then
\begin{equation}\begin{split}\label{b12}
&\frac{P^4}{Q^4}+\frac{Q^4}{P^4}+24\lb[\frac{P^2}{Q^2}+\frac{Q^2}{P^2}\rb] +8\lb[P^2Q^2+\frac{5^2}{P^2Q^2}\rb]+3\lb[Q^4+\frac{5^2}{Q^4}\rb]+120\\&= 20\lb[P^2+\frac{5}{P^2}\rb]+32\lb[Q^2+\frac{5}{Q^2}\rb] +\lb[P^2Q^4+\frac{5^4}{P^2Q^4}\rb]+3\lb[\frac{5P^2}{Q^4}+\frac{Q^4}{P^2}\rb].
\end{split}\end{equation}
\end{lemma}
\begin{lemma}\cite[Theorem 5.3]{CA3}
If $U:=\dfrac{\varphi^2(q)}{\varphi^2(q^5)}$ and $V:=\dfrac{\psi^2(-q)}{q\psi^2(-q^5)}$, then
\begin{equation}\label{b3}
U+UV=5+V.
\end{equation}
\end{lemma}
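This relation can also be spot-checked numerically from the series definitions of $\varphi$ and $\psi$ (sample point $q=0.1$):

```python
q0 = 0.1

def phi(q, terms=50):
    # phi(q) = 1 + 2 * sum_{n>=1} q^(n^2)
    return 1.0 + 2.0 * sum(q**(n*n) for n in range(1, terms))

def psi(q, terms=50):
    # psi(q) = sum_{n>=0} q^(n(n+1)/2)
    return sum(q**(n*(n+1)//2) for n in range(terms))

U = phi(q0)**2 / phi(q0**5)**2
V = psi(-q0)**2 / (q0 * psi(-q0**5)**2)

print(abs((U + U*V) - (5 + V)) < 1e-9)  # True
```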
\begin{lemma}\cite[Theorem 2.17]{NDB}
If $U:=\dfrac{\varphi(-q)}{\varphi(-q^5)}$ and $V:=\dfrac{\varphi(-q^2)}{\varphi(-q^{10})}$, then
\begin{equation}\label{b9}
\frac{U^2}{V^2}+\frac{V^2}{U^2}+4=V^2+\frac{5}{V^2}.
\end{equation}
\end{lemma}
\newpage
\section{$P$-$Q$ modular equations of degree 5}\label{sec3}
In this section, we establish some new modular equations of degree 5.
\begin{theorem}
If $M:=\dfrac{\varphi(-q)}{\varphi(-q^5)}$ and $N:=\dfrac{\varphi(-q^6)}{\varphi(-q^{30})}$, then
\begin{equation}\begin{split}\label{b22}
&\lb[\frac{M^8}{N^8}+\frac{N^8}{M^8}\rb]+96\lb[\frac{M^6}{N^6}+\frac{N^6}{M^6}\rb] +1146\lb[\frac{M^4}{N^4}+\frac{N^4}{M^4}\rb]+2868\lb[\frac{M^2}{N^2}+\frac{N^2}{M^2}\rb] \\&+\lb[N^8+\frac{5^4}{N^8}\rb]-16\lb[N^6+\frac{5^3}{N^6}\rb]+188\lb[N^4+\frac{5^2}{N^4}\rb] -1696\lb[N^2+\frac{5}{N^2}\rb]\\&-54\lb[M^6+\frac{5^3}{M^6}\rb]+498\lb[M^4+\frac{5^2}{M^4}\rb] -2106\lb[M^2+\frac{5}{M^2}\rb]-4\lb[\frac{N^8}{M^6}+\frac{5M^6}{N^8}\rb]\\& +6\lb[\frac{N^8}{M^4}+\frac{5^2M^4}{N^8}\rb]-4\lb[\frac{N^8}{M^2}+\frac{5^3M^2}{N^8}\rb] -144\lb[\frac{N^6}{M^4}+\frac{5M^4}{N^6}\rb]+64\lb[\frac{N^6}{M^2}+\frac{5^2M^2}{N^6}\rb] \\&-479\lb[\frac{N^4}{M^2}+\frac{5M^2}{N^4}\rb]-165\lb[\frac{M^6}{N^4}+\frac{5N^4}{M^6}\rb] +124\lb[\frac{M^6}{N^2}+\frac{5^2N^2}{M^6}\rb]-936\lb[\frac{M^4}{N^2}+\frac{5N^2}{M^4}\rb] \\&+10\lb[M^4N^4+\frac{5^4}{M^4N^4}\rb]+516\lb[M^2N^2+\frac{5^2}{M^2N^2}\rb] -39\lb[M^2N^4+\frac{5^3}{M^2N^4}\rb]\\&-120\lb[M^4N^2+\frac{5^3}{M^4N^2}\rb] +12\lb[M^6N^2+\frac{5^4}{M^6N^2}\rb]-\lb[M^6N^4+\frac{5^5}{M^6N^4}\rb]+6748=0.
\end{split}\end{equation}
\end{theorem}
\begin{proof}
Using the equation \eqref{b7} after changing $q$ to $q^2$, we get
\begin{equation}\label{b19}
X^9Y^9+125X^3Y^3+X^{12}-Y^{12}+9X^9Y^3+9Y^9X^3=0,
\end{equation}
where \begin{equation*}
X:=\dfrac{f(-q^2)}{q^{1/3}f(-q^{10})} \,\,\,\textrm{ and }\,\,\, Y:=\dfrac{f(-q^6)}{qf(-q^{30})}.
\end{equation*}
Cubing the equation \eqref{b19} and using the equations \eqref{b10} and \eqref{b11}, we deduce
\begin{equation}\label{b21}
TM^3R^3N^6T_2+125MRN^2T+M^4R^4-N^8T_2^2+9M^3R^3N^2T+9TN^6T_2MR=0,
\end{equation}
where \begin{equation*}
R:=\dfrac{\psi^2(q)}{q\psi^2(q^5)},\ T:=\dfrac{\psi(q^6)}{q^{3}\psi(q^{30})} \,\,\,\textrm{ and }\,\,\, T_2:=\dfrac{\psi^2(q^6)}{q^{6}\psi^2(q^{30})}.
\end{equation*}
Using the equation \eqref{b3} after changing $q$ to $-q$, we have
\begin{equation}\label{b20}
R=\dfrac{M^2-5}{M^2-1}\,\,\, \textrm{and}\,\,\, T_2=\dfrac{N^2-5}{N^2-1}.
\end{equation}
Collecting the terms containing $T$ on one side of the equation \eqref{b21} and using the equation \eqref{b20}, we get
\begin{equation}\label{b24}
A(M,N)B(M,N)=0,
\end{equation}
where
\begin{equation*}\begin{split}
&A(M,N):=\left(625-500M^2-20N^6-500N^2+150M^4+N^8+150N^4\rb.\\&\lb.+M^8-20M^6 +16M^6N^6+400M^2N^2+120M^4N^2+24M^6N^4+24M^4N^6\rb.\\&\lb.-300M^4N^4-16M^2N^6- 16M^6N^2-4M^8N^2+N^8M^8-4N^8M^6\rb.\\&\lb.+6N^8M^4-4N^8M^2-4M^8N^6+120M^2N^4+6M^8N^4\right)
\end{split}\end{equation*} and
\begin{equation*}\begin{split}
&B(M,N):=\left(625M^8+N^{16}-825N^{12}M^2+150M^{12}-500M^{10}+M^{16}\rb.\\&\lb.+12900M^6N^6 -4875M^6N^4-15000M^4N^6+6250M^4N^4+7500M^2N^6\rb.\\&\lb.-2000M^8N^2+1146M^{12}N^4-720M^{12}N^2 -2395M^{10}N^4+1600M^{10}N^2\rb.\\&\lb.+188N^{12}M^8-479N^{12}M^6+1146N^{12}M^4-1696N^{10}M^8 +2868N^{10}M^6\rb.\\&\lb.-4680N^{10}M^4+3100N^{10}M^2+6748N^8M^8-10530N^8M^6+12450N^8M^4 \rb.\\&\lb.-6750N^8M^2+10M^{12}N^{12}-120M^{12}N^{10}-39M^{10}N^{12}+516M^{10}N^{10}\rb.\\&\lb.-2106M^{10}N^8 +2868M^{10}N^6-8480M^8N^6+498M^{12}N^8-936M^{12}N^6\rb.\\&\lb.-165M^{14}N^4+96M^{14}N^2 -M^{14}N^{12}+12M^{14}N^{10}-54M^{14}N^8-20M^{14}\rb.\\&\lb.+124M^{14}N^6+96N^{14}M^2-16N^{14}M^8+64N^{14}M^6 -144N^{14}M^4\rb.\\&\lb.-4N^{16}M^2+N^{16}M^8-4N^{16}M^6+6N^{16}M^4-3125M^2N^4+4700M^8N^4\right).
\end{split}\end{equation*}
Expanding the first and second factors of the equation \eqref{b24} in powers of $q$, one gets, respectively,
$$A(M,N)=\left(256-1536q^8-512q^9+1152q^{10}+3840q^{11}+4736q^{12}+\dotsb\right)$$ and
$$B(M,N)=q^8\left(8448+33792q+33792q^2-54528q^3-194208q^4-268608q^5+\dotsb\right).$$
As $q\rightarrow0$, the factor $B(M,N)$ of the equation \eqref{b24} vanishes, whereas the other factor $A(M,N)$ does not. Hence, we arrive at the equation \eqref{b22} for $q\in(0,1)$. By analytic continuation, the equation \eqref{b22} holds for $|q|<1$.
\end{proof}
\begin{remark}
The modular relation connecting
\begin{equation*}
\dfrac{\psi(q)}{q^{1/2}\psi(q^5)}\,\,\,\textrm{ and }\,\,\,\dfrac{\psi(q^6)}{q^{3}\psi(q^{30})},
\end{equation*}
can be obtained by eliminating $M$ and $N$ from the equation \eqref{b21}.
\end{remark}
\section{$P$--$Q$ ``mixed'' modular equations}\label{sec4}
In this section, we establish several new $P$--$Q$ ``mixed'' modular equations with four moduli. Throughout this section, we set
\begin{equation}\label{c1}\begin{split}
&A:=\frac{f(-q)f(-q^2)}{q^{1/2}f(-q^5)f(-q^{10})},\,\,\, B_{n}:=\frac{f(-q^n)f(-q^{2n})}{q^{n/2}f(-q^{5n})f(-q^{10n})}\\&
\,\,\,\,\,\,\,\,\,\,\,\,\textrm {and} \,\,\,\,\,\, C_{n}:=\frac{q^{{n}/{6}}f(-q^n)f(-q^{10n})}{f(-q^{2n})f(-q^{5n})}.
\end{split}\end{equation}
\begin{theorem}For $|q|<1,$
\begin{eqnarray}
&&\frac{f^2(-q)f^2(-q^2)}{qf^2(-q^5)f^2(-q^{10})} =\frac{U(U-5)}{U-1},\,\,\,U>1,\label{c8}\\
&&\frac{f^2(-q)f^2(-q^2)}{qf^2(-q^5)f^2(-q^{10})}=\frac{V(V-5)}{V-1},\,\,\,V>1,\label{c9}
\end{eqnarray}
where $\displaystyle U:=\frac{\varphi^2(-q)}{\varphi^2(-q^5)}$ $\,\,\textrm{and}\,\,\, \displaystyle V:=\frac{\psi^2(q)}{q\psi^2(q^5)}.$
\end{theorem}
\begin{proof}[Proof of \eqref{c8}]
The equations \eqref{b4} and \eqref{b5} can be rewritten as
\begin{equation}\label{b6}
m\left\{\frac{\alpha(1-\alpha)}{\beta(1-\beta)}\right\}^{1/4}+1=\frac{5}{m} +\left\{\frac{\alpha(1-\alpha)}{\beta(1-\beta)}\right\}^{1/4}.
\end{equation}
Employing the equation \eqref{b1} after changing $q$ to $-q$ and equation \eqref{b2} in the equation \eqref{b6}, we arrive at the equation \eqref{c8}.
\end{proof}
\begin{proof}[Proof of \eqref{c9}]
Using the equation \eqref{b3} in the equation \eqref{c8}, we arrive at the equation \eqref{c9}.
\end{proof}
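The identities \eqref{c8} and \eqref{c9} can likewise be spot-checked numerically, using Euler's product $f(-q)=\prod_{n\geq1}(1-q^n)$ together with the theta series for $\varphi$ and $\psi$. The sketch below (our naming, truncated series) verifies both at $q=0.05$:

```python
from math import prod

def f_minus(q, terms=60):
    # Euler's product: Ramanujan's f(-q) = prod_{n >= 1} (1 - q^n)
    return prod(1 - q**n for n in range(1, terms))

def phi(q, terms=40):
    return 1.0 + 2.0 * sum(q**(n * n) for n in range(1, terms))

def psi(q, terms=40):
    return sum(q**(n * (n + 1) // 2) for n in range(terms))

q = 0.05
lhs = (f_minus(q)**2 * f_minus(q**2)**2) / (q * f_minus(q**5)**2 * f_minus(q**10)**2)

U = phi(-q)**2 / phi(-q**5)**2          # U of (c8)
V = psi(q)**2 / (q * psi(q**5)**2)      # V of (c9)
assert abs(lhs - U * (U - 5) / (U - 1)) < 1e-9 * lhs
assert abs(lhs - V * (V - 5) / (V - 1)) < 1e-9 * lhs
```

Note that \eqref{c9} is algebraically forced by \eqref{c8}: writing $V=(U-5)/(U-1)$, one checks directly that $V(V-5)/(V-1)=U(U-5)/(U-1)$.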
\begin{theorem}For $|q|<1,$
\begin{eqnarray}
&&\frac{qf^6(-q)f^6(-q^{10})}{f^6(-q^2)f^6(-q^{5})} =\frac{U(U-1)}{U-5},\,\,\,\label{c10}\\
&&\frac{qf^6(-q)f^6(-q^{10})}{f^6(-q^2)f^6(-q^{5})} =\frac{(V-5)}{V(V-1)},\,\,\,\label{c11}
\end{eqnarray}
where $\displaystyle U:=\frac{\varphi^2(-q)}{\varphi^2(-q^5)}$ $\,\,\textrm{and}\,\,\, \displaystyle V:=\frac{\psi^2(q)}{q\psi^2(q^5)}.$
\end{theorem}
\begin{proof}
The proofs of the equations \eqref{c10} and \eqref{c11} are similar to those of \eqref{c8} and \eqref{c9}. Hence, we omit the details.
\end{proof}
\begin{theorem}
If $P:=AB_{2}$ and $Q:=\dfrac{A}{B_{2}}$, then
\begin{equation}\label{c2}
\lb(Q^4+\frac{1}{Q^4}\rb)-3\lb(Q^2+\frac{1}{Q^2}\rb)-\lb(P+\frac{5^2}{P}\rb)\lb(Q+\frac{1}{Q}\rb)-12=0.
\end{equation}
\end{theorem}
\begin{proof}
Taking the cube of both sides of the equation \eqref{b16}, we deduce
\begin{equation}\label{c29}\begin{split}
X^3Y^3+\frac{125}{X^3Y^3}+15\left(\frac{X^3}{Y^3}+\frac{Y^3}{X^3}\right) =\frac{X^9}{Y^9}+\frac{Y^9}{X^9},
\end{split}\end{equation}
where $X:=\dfrac{f(-q)}{q^{1/6}f(-q^5)}$ and $Y:=\dfrac{f(-q^2)}{q^{1/3}f(-q^{10})}$.
Using \eqref{b10}, \eqref{b11} and \eqref{c1}, we deduce
\begin{equation}\label{b30}\begin{split}
&A^4V_1^4B_2^4V_2^4+125A^2V_1^2B_2^2V_2^2+12A^4V_1^4B_2^2V_2^2 +12B_2^4V_2^4A^2V_1^2\\&=A^6V_1^6+B_2^6V_2^6,
\end{split}\end{equation}
where $V_1:=\dfrac{\psi(q)}{q^{1/2}\psi(q^5)}$\,\,and\,\, $V_2:=\dfrac{\psi(q^2)}{q\psi(q^{10})}$.\\
Using the equation \eqref{c9} in the equation \eqref{b30}, we deduce
\begin{equation}\begin{split}\label{b31}
&12B_{2}^6A^2vu+A^6B_{2}^6uv+5A^6B_{2}^4uv+5A^4B_{2}^6uv+12A^6B_{2}^2uv+625A^2B_{2}^2v\\&+625A^2B_{2}^2u-18A^8u -18B_{2}^8v+185A^4B_{2}^4v+185A^4B_{2}^4u+300A^4B_{2}^2u\\&+60A^4B_{2}^2uv+300B_{2}^4A^2v+60B_{2}^4A^2vu+A^8B_{2}^6v -250A^6-250B_{2}^6\\&+1350A^4B_{2}^4+2125A^4B_{2}^2+A^6B_{2}^8u+37A^4B_{2}^6v+40A^6B_{2}^4v+8A^6B_{2}^6v\\&+40A^4B_{2}^6u +37A^6B_{2}^4u+5A^4B_{2}^8u+8A^6B_{2}^6u+60A^6B_{2}^2u+60B_{2}^6A^2v\\&+425A^4B_{2}^2v-50A^6u-50B_{2}^6v+12A^2B_{2}^8u +5A^8B_{2}^4v+425A^2B_{2}^4u\\&+12A^8B_{2}^2v+125A^2B_{2}^2uv+64A^6B_{2}^6+96A^6B_{2}^2v+96B_{2}^6A^2u+25A^4B_{2}^4uv \\&-2B_{2}^{14}v-2A^{14}u+2125A^2B_{2}^4+3125A^2B_{2}^2+A^8B_{2}^8+8A^8B_{2}^6+37A^8B_{2}^4\\&+8A^6B_{2}^8+296A^6B_{2}^4 +37A^4B_{2}^8+296A^4B_{2}^6+60A^8B_{2}^2+480A^6B_{2}^2\\&+60A^2B_{2}^8+480A^2B_{2}^6-120B_{2}^8-120A^8-24B_{2}^{14} -24A^{14}-2A^{12}\\&-2B_{2}^{12}=0.
\end{split}\end{equation}
where $u:=\pm\sqrt{A^4+6A^2+25}$ \,\, and \,\, $v:=\pm\sqrt{B_{2}^4+6B_{2}^2+25}$.\\
Eliminating $u$ and $v$ from the equation \eqref{b31} and squaring both sides, we deduce
\begin{equation*}\begin{split}
&\lb(A^8-A^6B_{2}^4-3A^6B_{2}^2-12A^4B_{2}^4-25A^4B_{2}^2-A^4B_{2}^6-3A^2B_{2}^6-25A^2B_{2}^4\rb.\\&\lb.+B_{2}^8\rb) \lb(A^{16}+B_{2}^{16}-187500A^6B_{2}^4-187500A^4B_{2}^6-26900A^6B_{2}^8\rb.\\&\lb.-390625A^4B_{2}^4 -93125A^6B_{2}^6-57500A^4B_{2}^8-15625A^2B_{2}^8-9600B_{2}^{10}A^4\rb.\\&\lb.-5000B_{2}^{10}A^2-875B_{2}^{12}A^2 -7676A^8B_{2}^8-57500A^8B_{2}^4-15625A^8B_{2}^2\rb.\\&\lb.-26900A^8B_{2}^6-1076A^8B_{2}^{10}-92A^8B_{2}^{12} -A^8B_{2}^{14}-35A^{14}B_{2}^4-384A^{12}B_{2}^6\rb.\end{split}\end{equation*}\begin{equation}\begin{split}\label{b33}&\lb.-A^{14}B_{2}^8-A^{12}B_{2}^{12}-52A^{14}B_{2}^2-1024A^{12}B_{2}^4 -875A^{12}B_{2}^2-3987A^{10}B_{2}^6\rb.\\&\lb.-9600A^{10}B_{2}^4-8A^{14}B_{2}^6-5000A^{10}B_{2}^2-92A^{12}B_{2}^8 -1076A^{10}B_{2}^8\rb.\\&\lb.-149A^{10}B_{2}^{10}-12A^{10}B_{2}^{12}-384A^6B_{2}^{12} -3987A^6B_{2}^{10}-8A^6B_{2}^{14}\rb.\\&\lb.-35A^4B_{2}^{14}-12A^{12}B_{2}^{10} -1024B_{2}^{12}A^4-52A^2B_{2}^{14}\rb)=0.
\end{split}\end{equation}
Expanding the first and second factors of \eqref{b33} in powers of $q$, one gets, respectively,
$$-4q^{11}\lb(4+8q-27q^2-68q^3+40q^4+278q^5+62q^6-723q^7+\cdots\rb)$$ and
$$\lb(-2+2q^2-10q^3+30q^4+552q^5-2016q^6+1038q^7+15620q^8+\cdots\rb).$$
As $q$ tends to $0$, the first factor of \eqref{b33} vanishes, whereas the second factor does not. Hence, we arrive at \eqref{c2} for $q\in(0,1)$. By analytic continuation, \eqref{c2} holds for $|q|<1$.
\end{proof}
\begin{theorem}
If $P=AB_{4}$ and $Q=\dfrac{A}{B_{4}}$, then
\begin{equation}\begin{split}\label{b34}
&\lb(Q^8+\frac{1}{Q^8}\rb)-52\lb(Q^6+\frac{1}{Q^6}\rb)-1024\lb(Q^4+\frac{1}{Q^4}\rb) -3987\lb(Q^2+\frac{1}{Q^2}\rb)\\&-\lb(P+\frac{5^2}{P}\rb)\lb[1076\lb(Q+\frac{1}{Q}\rb) +384\lb(Q^3+\frac{1}{Q^3}\rb)+35\lb(Q^5+\frac{1}{Q^5}\rb)\rb]\\&-\lb(P^2+\frac{5^4}{P^2}\rb) \lb[92\lb(Q^2+\frac{1}{Q^2}\rb)+8\lb(Q^4+\frac{1}{Q^4}\rb)+149\rb]-\lb(P^3+\frac{5^6}{P^3}\rb) \\&\times\lb[12\lb(Q+\frac{1}{Q}\rb)+\lb(Q^3+\frac{1}{Q^3}\rb)\rb]-\lb(P^4+\frac{5^8}{P^4}\rb)-7676=0.
\end{split}\end{equation}
\end{theorem}
\begin{proof}
Using the equation \eqref{b12} in the equation \eqref{c8}, we deduce
\begin{equation*}\begin{split}
&625-125u+125v-A^4uB_{4}^6v-5A^4uB_{4}^4v-90A^2uB_{4}^2v-6A^2uB_{4}^6v\\&-38A^2uB_{4}^4v +40A^2u+2A^6u-1215B_{4}^2v-63B_{4}^6v-A^6v-56A^6B_{4}^2\\&-36A^6B_{4}^4-8A^6B_{4}^6 -A^6B_{4}^8+3A^4v-592A^4B_{4}^2-360A^4B_{4}^4-80A^4B_{4}^6\\&-9A^4B_{4}^8-15A^2v -3080A^2B_{4}^2-1732A^2B_{4}^4-376A^2B_{4}^6-39A^2B_{4}^8\\&-25uv-1360uB_{4}^2 -688uB_{4}^4-144uB_{4}^6-13uB_{4}^8-125A^2-6000B_{4}^2\\&+29A^4-3A^6+2A^8-3216B_{4}^4 -688B_{4}^6-63B_{4}^8-9A^4u-13A^6B_{4}^2v\\&-A^6B_{4}^6v-5A^6B_{4}^4v-129A^4B_{4}^2v -9A^4B_{4}^6v-53A^4B_{4}^4v-A^4uv\end{split}\end{equation*}\begin{equation}\begin{split}\label{b36}&-56A^4uB_{4}^2-36A^4uB_{4}^4-8A^4uB_{4}^6-A^4uB_{4}^8 -643A^2B_{4}^2v-39A^2B_{4}^6v\\&-259A^2B_{4}^4v+6A^2uv-424A^2uB_{4}^2-252A^2uB_{4}^4 -56A^2uB_{4}^6-6A^2uB_{4}^8\\&-269uB_{4}^2v-13uB_{4}^6v-105uB_{4}^4v-13A^4uB_{4}^2v -499B_{4}^4v=0.
\end{split}\end{equation}
where $u:=\pm\sqrt{A^4+6A^2+25}$ \,\, and \,\, $v:=\pm\sqrt{B_{4}^4+6B_{4}^2+25}$.\\
Eliminating $u$ and $v$ from the equation \eqref{b36}, we arrive at \eqref{b34}.
\end{proof}
\begin{theorem}
If $P=AB_{6}$ and $Q=\dfrac{A}{B_{6}}$, then
\begin{equation*}\begin{split}
&\mathbb{Q}^{16}-363\mathbb{Q}^{14}-30882\mathbb{Q}^{12}-698682\mathbb{Q}^{10}-6183702\mathbb{Q}^{8}-16140317 \mathbb{Q}^{6}\\&+37225608\mathbb{Q}^{4}+231497788 \mathbb{Q}^{2} +\mathbb{P}\lb\{60133800\mathbb{Q}+21753498\mathbb{Q}^{3}\rb.\\&\lb.-1148442\mathbb{Q}^{5}-2210604\mathbb{Q}^{7}-406488\mathbb{Q}^{9} -26740\mathbb{Q}^{11}-519\mathbb{Q}^{13}\rb\}\\&+\mathbb{P}^{2}\lb\{6287236\mathbb{Q}^{2}+858465\mathbb{Q}^{4} -462222\mathbb{Q}^{6}-150099\mathbb{Q}^{8}-12840\mathbb{Q}^{10} \rb.\\&\lb. -267\mathbb{Q}^{12}+10229305\rb\}+\mathbb{P}^{3} \lb\{1132002\mathbb{Q}+362832\mathbb{Q}^{3}-42462\mathbb{Q}^{5}-37066\mathbb{Q}^{7}\rb.\\&\lb.-4323\mathbb{Q}^{9}-78\mathbb{Q}^{11}\rb\} +\mathbb{P}^{4}\lb\{74418\mathbb{Q}^{2}+4471\mathbb{Q}^{4}-5955\mathbb{Q}^{6}-1026\mathbb{Q}^{8}-12\mathbb{Q}^{10} \rb.\end{split}\end{equation*}\begin{equation}\begin{split}\label{b35}&\lb. +130902\rb\}+\mathbb{P}^{5}\lb\{9171\mathbb{Q}+2028\mathbb{Q}^{3}-588\mathbb{Q}^{5}-171\mathbb{Q}^{7}-\mathbb{Q}^{9}\rb\} +\mathbb{P}^{6}\lb\{300\mathbb{Q}^{2}\rb.\\&\lb.-27\mathbb{Q}^{4}-18\mathbb{Q}^{6}+679\rb\}+\mathbb{P}^{7}\lb\{24\mathbb{Q} -\mathbb{Q}^{5}\rb\}+\mathbb{P}^{8}+36965548 =0,
\end{split}\end{equation}
where $\mathbb{P}^{n}=\lb(P^{n}+\dfrac{5^{2n}}{P^{n}}\rb)$ and $\mathbb{Q}^{n}=\lb(Q^{n}+\dfrac{1}{Q^{n}}\rb)$.
\end{theorem}
\begin{proof}
The proof of the equation \eqref{b35} is similar to the proof of the equation \eqref{b34}; notice that now \eqref{b22} is used in place of \eqref{b12}.
\end{proof}
\begin{theorem}
If $P=C_{1}C_{2}$ and $Q=\dfrac{C_{1}}{C_{2}}$, then
\begin{equation}\label{b28}
\lb(P+\frac{1}{P}\rb)\lb(Q^3+\frac{1}{Q^3}\rb)+2=\lb(P^2+\frac{1}{P^2}\rb).
\end{equation}
\end{theorem}
\begin{proof}
Using the equation \eqref{b9} in the equation \eqref{c10}, we deduce
\begin{equation}\begin{split}\label{b25}
&10v-10u-2vC_{2}^6C_{1}^6-2vuC_{2}^6+6vu+4uC_{1}^6+2vC_{2}^6-2C_{2}^{12}C_{1}^6 -6 \\&+24C_{2}^6C_{1}^6-2uC_{2}^{12}+24uC_{2}^6+6vC_{1}^6-46C_{1}^6-8C_{2}^6+4C_{1}^{12} +2C_{2}^{12}=0,
\end{split}\end{equation}
where $u:=\pm\sqrt{C_{1}^{12}-18C_{1}^6+1}$ \,\, and \,\, $v:=\pm\sqrt{C_{2}^{12}-18C_{2}^6+1}.$\\
Eliminating $u$ and $v$ from the equation \eqref{b25} leads to
\begin{equation}\begin{split}\label{b27}
&\lb(C_{1}^6-C_{2}^6C_{1}^6+C_{2}^6-C_{2}^2C_{1}^2+C_{2}^2C_{1}^8+2C_{2}^4C_{1}^4 +C_{2}^8C_{1}^2\rb)\lb(C_{2}^4C_{1}^{16}\rb.\\&\lb.+C_{2}^8C_{1}^{14}-C_{2}^2C_{1}^{14} +C_{2}^{12}C_{1}^{12}-4C_{2}^6C_{1}^{12}+C_{1}^{12}+4C_{2}^{10}C_{1}^{10} -4C_{2}^4C_{1}^{10}\rb.\\&\lb.+C_{2}^{14}C_{1}^8+C_{2}^8C_{1}^8+C_{2}^2C_{1}^8 -4C_{2}^{12}C_{1}^6+4C_{2}^6C_{1}^6+C_{2}^{16}C_{1}^4-4C_{2}^{10}C_{1}^4 \rb.\\&\lb.+C_{2}^4C_{1}^4-C_{2}^{14}C_{1}^2+C_{2}^8C_{1}^2+C_{2}^{12}\rb)=0.
\end{split}\end{equation}
Expanding the first and second factors of \eqref{b27} in powers of $q$, one gets, respectively,
$$q^{11}\lb(8-32q-8q^2+168q^3-220q^4+196q^5-760q^6+1748q^7+\cdots\rb)$$ and
$$\lb(3-24q+117q^2-456q^3+1356q^4-3192q^5+7242q^6-17304q^7+\cdots\rb).$$
As $q$ tends to $0$, the first factor of \eqref{b27} vanishes, whereas the second factor does not. Hence, we arrive at \eqref{b28} for $q\in(0,1)$. By analytic continuation, \eqref{b28} holds for $|q|<1$.
\end{proof}
\begin{theorem}
If $P=C_{1}C_{4}$ and $Q=\dfrac{C_{1}}{C_{4}}$, then
\begin{equation}\begin{split}\label{b29}
&\lb(P^3+\frac{1}{P^3}\rb)\lb[19\lb(Q+\frac{1}{Q}\rb)+8\lb(Q^3+\frac{1}{Q^3}\rb) +\lb(Q^5+\frac{1}{Q^5}\rb)\rb]\\&+\lb(Q^6+\frac{1}{Q^6}\rb)+13\lb(Q^4+\frac{1}{Q^4}\rb) +52\lb(Q^2+\frac{1}{Q^2}\rb)+82=\lb(P^6+\frac{1}{P^6}\rb).
\end{split}\end{equation}
\end{theorem}
\begin{proof}
The proof of the equation \eqref{b29} is similar to the proof of \eqref{b28}; notice that now \eqref{b12} is used in place of \eqref{b9}.
\end{proof}
\section{Remarkable product of theta-functions}\label{sec5}
In this section, we establish several new modular identities connecting the remarkable product of theta-functions $b_{s,5}$ with $b_{r^2s,5}$ for $r=2$, $4$, and $6$.
\begin{lemma}\cite{MSMMCMKSB}
If $s$ and $t$ are any positive rationals, then
\begin{equation}\label{d1}
b_{2s,t}b_{\frac{2}{s},t}=1.
\end{equation}
\end{lemma}
\begin{lemma}\cite{MSMCKSHM}
$0<b_{s,t}\leq1$ for all $s\geq2$ and every integer $t>1$.
\end{lemma}
\begin{theorem}
If $X=\sqrt{b_{s,5}b_{4s,5}}$ and $Y=\displaystyle\sqrt{\frac{b_{s,5}}{b_{4s,5}}}$, then
\begin{equation}\label{d2}
\lb(Y^4+\frac{1}{Y^4}\rb)-3\lb(Y^2+\frac{1}{Y^2}\rb)-5\lb(X+\frac{1}{X}\rb)\lb(Y+\frac{1}{Y}\rb)-12=0.
\end{equation}
\end{theorem}
\begin{proof}
Using the equation \eqref{bmn} in the equation \eqref{c2} we arrive at the equation \eqref{d2}.
\end{proof}
\begin{corollary}
\begin{equation}\label{d6}
b_{4,5}=\frac{\sqrt{2+2\sqrt{5}-2\sqrt{2+2\sqrt{5}}}}{2},
\end{equation}
\begin{equation}\label{d6a}
b_{1,5}=\frac{\sqrt{2+2\sqrt{5}+2\sqrt{2+2\sqrt{5}}}}{2}.
\end{equation}
\end{corollary}
\begin{proof}
Putting $s=1/2$ in \eqref{d2} and using the fact that $b_{1,5}b_{4,5}=1$, we deduce
\begin{equation}\label{d3}
(h^8-2h^6-2h^4-2h^2+1)(h^2+h+1)(h^2-h+1)=0,
\end{equation}
where $h:=b_{4,5}.$\\
We observe that the first factor of \eqref{d3} vanishes for the specific value $q=e^{-\pi\sqrt{4/5}}$, whereas the other factors do not vanish. Hence, we have
\begin{equation}\label{d4}
t^2-2t-4=0,
\end{equation}
where $t:=h^2+\displaystyle\frac{1}{h^2}.$\\
Solving the equation \eqref{d4} with $t>0$, we deduce
\begin{equation}\label{d5}
h^2+\displaystyle\frac{1}{h^2}=1+\sqrt{5}.
\end{equation}
Solving the equation \eqref{d5} for $h$ with $0<h<1$ and using $b_{1,5}=1/b_{4,5}$, we arrive at \eqref{d6} and \eqref{d6a}.
\end{proof}
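The algebra of this proof is easy to confirm directly: dividing the octic factor of \eqref{d3} by $h^4$ gives exactly $t^2-2t-4$ with $t=h^2+1/h^2$, the positive root is $1+\sqrt{5}$, and the closed forms \eqref{d6}, \eqref{d6a} are the two resulting values of $h$. A short numerical check (plain Python; variable names are ours):

```python
from math import sqrt

t = 1 + sqrt(5)                      # root of t^2 - 2t - 4 = 0, eq. (d5)
assert abs(t * t - 2 * t - 4) < 1e-12

b45 = sqrt(2 + 2 * sqrt(5) - 2 * sqrt(2 + 2 * sqrt(5))) / 2    # (d6)
b15 = sqrt(2 + 2 * sqrt(5) + 2 * sqrt(2 + 2 * sqrt(5))) / 2    # (d6a)

for h in (b45, b15):
    # h satisfies the octic factor of (d3) ...
    assert abs(h**8 - 2 * h**6 - 2 * h**4 - 2 * h**2 + 1) < 1e-9
    # ... equivalently h^2 + 1/h^2 = 1 + sqrt(5), eq. (d5)
    assert abs(h**2 + 1 / h**2 - t) < 1e-12

assert abs(b45 * b15 - 1) < 1e-12    # consistent with b_{1,5} b_{4,5} = 1
assert 0 < b45 < 1 < b15
```

The analogous reductions for the corollaries below ($t^2-8t-24$ and $t^2-28t+61$) can be verified the same way.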
\begin{theorem}
If $X=\sqrt{b_{s,5}b_{16s,5}}$ and $Y=\displaystyle\sqrt{\frac{b_{s,5}}{b_{16s,5}}}$, then
\begin{equation}\begin{split}\label{d7}
&\lb(Y^8+\frac{1}{Y^8}\rb)-52\lb(Y^6+\frac{1}{Y^6}\rb)-1024\lb(Y^4+\frac{1}{Y^4}\rb) -3987\lb(Y^2+\frac{1}{Y^2}\rb)\\&-5\lb(X+\frac{1}{X}\rb)\lb[1076\lb(Y+\frac{1}{Y}\rb) +384\lb(Y^3+\frac{1}{Y^3}\rb)+35\lb(Y^5+\frac{1}{Y^5}\rb)\rb]\\&-5^2\lb(X^2+\frac{1}{X^2}\rb) \lb[92\lb(Y^2+\frac{1}{Y^2}\rb)+8\lb(Y^4+\frac{1}{Y^4}\rb)+149\rb]-5^3\lb(X^3+\frac{1}{X^3}\rb) \\&\times\lb[12\lb(Y+\frac{1}{Y}\rb)+\lb(Y^3+\frac{1}{Y^3}\rb)\rb]-5^4\lb(X^4+\frac{1}{X^4}\rb)-7676=0.
\end{split}\end{equation}
\end{theorem}
\begin{proof}
Using the equation \eqref{bmn} in the equation \eqref{b34} we arrive at the equation \eqref{d7}.
\end{proof}
\begin{corollary}
\begin{equation}\label{d11}
b_{8,5}=\sqrt{(\sqrt{2}-1)(\sqrt{5}-2)},
\end{equation}
\begin{equation}\label{d12}
b_{1/2,5}=\sqrt{(\sqrt{2}+1)(\sqrt{5}+2)}.
\end{equation}
\end{corollary}
\begin{proof}
Putting $s=1/4$ in \eqref{d7} and using the fact that $b_{1/2,5}b_{8,5}=1$, we deduce
\begin{equation}\label{d8}\begin{split}
&\lb(h^8-8h^6-22h^4-8h^2+1\rb)\lb(h^4+3h^2+1\rb)\lb(h^4-h^3+h^2+h+1\rb) \\&\lb(h^4+h^3+h^2-h+1\rb)=0,
\end{split}\end{equation}
where $h:=b_{8,5}.$\\
We observe that the first factor of \eqref{d8} vanishes for the specific value $q=e^{-\pi\sqrt{8/5}}$, whereas the other factors do not vanish. Hence, we have
\begin{equation}\label{d9}
t^2-8t-24=0,
\end{equation}
where $t:=h^2+\displaystyle\frac{1}{h^2}.$\\
Solving the equation \eqref{d9} with $t>0$, we deduce
\begin{equation}\label{d10}
h^2+\displaystyle\frac{1}{h^2}=4+2\sqrt{10}.
\end{equation}
Solving the equation \eqref{d10} for $h$ with $0<h<1$ and using $b_{1/2,5}=1/b_{8,5}$, we arrive at \eqref{d11} and \eqref{d12}.
\end{proof}
\begin{theorem}
If $X=\sqrt{b_{s,5}b_{36s,5}}$ and $Y=\displaystyle\sqrt{\frac{b_{s,5}}{b_{36s,5}}}$, then
\begin{equation}\begin{split}\label{d15}
&\mathbb{Y}^{16}-363\mathbb{Y}^{14}-30882\mathbb{Y}^{12}-698682\mathbb{Y}^{10}-6183702\mathbb{Y}^{8}-16140317 \mathbb{Y}^{6}\\&+37225608\mathbb{Y}^{4}+231497788 \mathbb{Y}^{2} +5\mathbb{X}\lb\{60133800\mathbb{Y}+21753498\mathbb{Y}^{3}\rb.\\&\lb.-1148442\mathbb{Y}^{5}-2210604\mathbb{Y}^{7}-406488\mathbb{Y}^{9} -26740\mathbb{Y}^{11}-519\mathbb{Y}^{13}\rb\}\\&+5^2\mathbb{X}^{2}\lb\{6287236\mathbb{Y}^{2}+858465\mathbb{Y}^{4} -462222\mathbb{Y}^{6}-150099\mathbb{Y}^{8}-12840\mathbb{Y}^{10} \rb.\\&\lb. -267\mathbb{Y}^{12}+10229305\rb\}+5^3\mathbb{X}^{3} \lb\{1132002\mathbb{Y}+362832\mathbb{Y}^{3}-42462\mathbb{Y}^{5}\rb.\\&\lb.-37066\mathbb{Y}^{7}-4323\mathbb{Y}^{9}-78\mathbb{Y}^{11}\rb\} +5^4\mathbb{X}^{4}\lb\{74418\mathbb{Y}^{2}+4471\mathbb{Y}^{4}-5955\mathbb{Y}^{6}\rb.\\&\lb.-1026\mathbb{Y}^{8}-12\mathbb{Y}^{10} +130902\rb\}+5^5\mathbb{X}^{5}\lb\{9171\mathbb{Y}+2028\mathbb{Y}^{3}-588\mathbb{Y}^{5}-171\mathbb{Y}^{7}\rb.\\&\lb.-\mathbb{Y}^{9}\rb\} +5^6\mathbb{X}^{6}\lb\{300\mathbb{Y}^{2}-27\mathbb{Y}^{4}-18\mathbb{Y}^{6}+679\rb\}+5^7\mathbb{X}^{7}\lb\{24\mathbb{Y} -\mathbb{Y}^{5}\rb\}+5^8\mathbb{X}^{8}\\&+36965548 =0,
\end{split}\end{equation}
where $\mathbb{X}^{n}=\lb(X^{n}+\dfrac{1}{X^{n}}\rb)$ and $\mathbb{Y}^{n}=\lb(Y^{n}+\dfrac{1}{Y^{n}}\rb)$.
\end{theorem}
\begin{proof}
Using the equation \eqref{bmn} in the equation \eqref{b35} we arrive at the equation \eqref{d15}.
\end{proof}
\begin{corollary}
\begin{equation}\label{d13}
b_{12,5}=\sqrt{\frac{(2-\sqrt{3})(7-3\sqrt{5})}{2}},
\end{equation}
\begin{equation}\label{d14}
b_{1/3,5}=\sqrt{\frac{(2+\sqrt{3})(7+3\sqrt{5})}{2}}.
\end{equation}
\end{corollary}
\begin{proof}
Putting $s=1/6$ in \eqref{d15} and using the fact that $b_{1/3,5}b_{12,5}=1$, we deduce
\begin{equation}\label{d16}\begin{split}
&\lb(h^8-28 h^6+63 h^4-28 h^2+1\rb)\lb(h^{12}+10 h^{10}+15 h^8+28 h^6+15 h^4+10 h^2+1\rb) \\&\lb(h^8-2 h^7+4 h^6-h^5+7 h^4+h^3+4 h^2+2 h+1\rb)\lb(h^8+2 h^7+4 h^6+h^5+7 h^4\rb.\\&\lb.-h^3+4 h^2-2 h+1\rb)=0,
\end{split}\end{equation}
where $h:=b_{12,5}.$\\
We observe that the first factor of \eqref{d16} vanishes for the specific value $q=e^{-\pi\sqrt{12/5}}$, whereas the other factors do not vanish. Hence, we have
\begin{equation}\label{d17}
t^2-28t+61=0,
\end{equation}
where $t:=h^2+\displaystyle\frac{1}{h^2}.$\\
Solving the equation \eqref{d17} with $t>0$, we deduce
\begin{equation}\label{d18}
h^2+\displaystyle\frac{1}{h^2}=14+3\sqrt{15}.
\end{equation}
Solving the equation \eqref{d18} for $h$ with $0<h<1$ and using $b_{1/3,5}=1/b_{12,5}$, we arrive at \eqref{d13} and \eqref{d14}.
\end{proof}
| {
"timestamp": "2019-08-27T02:22:55",
"yymm": "1907",
"arxiv_id": "1907.01634",
"language": "en",
"url": "https://arxiv.org/abs/1907.01634",
"abstract": "In his second notebook, Ramanujan recorded total of 23 P-Q modular equations involving theta-functions $f(-q)$, $\\varphi(q)$ and $\\psi(q)$. In this paper, modular equations analogous to those recorded by Ramanujan are obtained involving $f(-q)$. As a consequence, values of certain quotients of theta-function are evaluated.",
"subjects": "Number Theory (math.NT)",
"title": "On some P-Q mixed modular equations of degree 5",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787876194205,
"lm_q2_score": 0.7185943925708562,
"lm_q1q2_score": 0.7093811412482117
} |
https://arxiv.org/abs/1406.7229 | Dimension-Free $L^p$-Maximal Inequalities in $\mathbb{Z}_{m+1}^N$ | For $m \geq 2$, let $(\mathbb{Z}_{m+1}^N, |\cdot|)$ denote the group equipped with the so-called $l^0$ metric,\[ |y| = \left| \big( y(1), \dots, y(N) \big) \right| := | \{1 \leq i \leq N : y(i) \neq 0 \} |,\] and define the $L^1$-normalized indicator of the $r$-sphere, \[ \sigma_r := \frac{1}{|\{|x| = r\}|} 1_{\{|x| =r\}}.\] We study the $L^p \to L^p$ mapping properties of the maximal operator \[ M^{N} f (x) := \sup_{r \leq N} | \sigma_r*f| \] acting on functions defined on $\mathbb{Z}_{m+1}^N$.Specifically, we prove that for all $p>1$, there exist absolute constants $C_{m,p}$ so that \[ \| M^{N} f \|_{L^p(\mathbb{Z}_{m+1}^N)} \leq C_{m,p} \| f \|_{L^p(\mathbb{Z}_{m+1}^N)} \] for all $N$. | \section{Introduction}\label{intro}
In $\RR^N$, let
\[ M_{B}^{\RR^N}f(x) := \sup_{r > 0} \frac{c_N}{r^N} \int_{|y| \leq r} |f(x-y)| \ dy,\]
denote the standard Hardy-Littlewood maximal function, where $c_N^{-1}$ is the volume of the $N$-dimensional Euclidean unit ball.
A celebrated result of Stein and Str\"{o}mberg \cite{SS} in Euclidean harmonic analysis concerns the following dimension-independent bounds:
\begin{theorem}[Theorem A of \cite{SS}]
For each $p > 1$ there exists a constant $A_p$ independent of $N$ so that
\[ \left\| M_{B}^{\RR^N}f \right\|_{L^p(\RR^N)} \leq A_p \| f \|_{L^p(\RR^N)}.\]
In particular, while the maximal operators are themselves dimension-dependent, they are all uniformly bounded in $L^p \to L^p$ operator-norm by the same constant, $A_p$.
\end{theorem}
This result was more recently extended by Bourgain \cite{B} to the \emph{cubic} maximal function
\[ M_Q^{\RR^N}f(x) := \sup_{r > 0} \frac{1}{(2r)^N} \int_{y \in Q_r} |f(x-y)| \ dy,\]
where
\[ Q_r := \ee\{ y = (y(1),\dots, y(N)) : |y(i)|\leq r \ \text{ for each } 1 \leq i \leq N \rr\} \]
is the cube of side-length $2r$ centered at the origin.
\begin{theorem}[Theorem of \cite{B}]
For each $p > 1$ there exists a constant $A_p'$ independent of $N$ so that
\[ \left\| M_Q^{\RR^N}f \right\|_{L^p(\RR^N)} \leq A_p' \| f \|_{L^p(\RR^N)}.\]
\end{theorem}
The purpose of this article is to establish comparable dimension independent bounds in a discrete setting.
Specifically, for $m \geq 2$, let $\Z_{m+1}^N$ denote the group equipped with the so-called $l^0$-metric,
\[ |y|:= \ee|\{ 1 \leq i \leq N : y(i) \neq 0 \}\rr|.\]
We also define the ($L^2$-normalized) characters
\[ \chi_S(x):= \frac{1}{(m+1)^{N/2}} \xi_m^{x \cdot S} = \frac{1}{(m+1)^{N/2}} \prod_{i=1}^{N} \xi_m^{S(i) x(i)} \]
where $\xi_m := e^{2\pi i /(m+1)}$ is a primitive $(m+1)$th root of unity.
Define the Fourier transform
\[ \F f(S) = \hat{f}(S) = \sum_{x \in \Z_{m+1}^N} f(x) \chi_S(x),\]
and the $L^1$-normalized indicator function of the $r$-sphere:
\[
\sigma_r := \frac{1}{|\s_r|} 1_{\s_r}.
\]
We adopt the convention that $\sigma_r$ is $0$ and the respective spheres are empty for $r<0$.
Motivated by \cite{B} and \cite{SS}, we will be interested in establishing dimension-independent bounds for the family of maximal functions
\[ Mf(x) = M^N f(x):= \sup_{r \leq N} |\sigma_r*f| \]
acting on functions in $\Z_{m+1}^N$.
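For small $N$ these objects can be enumerated directly. The brute-force sketch below (feasible only for tiny $N$; the names are ours, and we include the radius $r=0$ in the supremum) confirms that the sphere of radius $r$ in $\Z_{m+1}^N$ has $\binom{N}{r}m^r$ points, and that $M^Nf \leq \max|f|$ pointwise, since each $\sigma_r$ is a probability measure:

```python
from itertools import product
from math import comb

m, N = 2, 4                                   # the group Z_3^4
G = list(product(range(m + 1), repeat=N))

def ham(x):
    return sum(c != 0 for c in x)             # |x|, the l^0 (Hamming) weight

spheres = {r: [y for y in G if ham(y) == r] for r in range(N + 1)}
for r in range(N + 1):
    assert len(spheres[r]) == comb(N, r) * m**r

def sphere_avg(f, x, r):
    # (sigma_r * f)(x): average of f over the sphere of radius r about x
    return sum(f[tuple((a - b) % (m + 1) for a, b in zip(x, y))]
               for y in spheres[r]) / len(spheres[r])

f = {x: float(ham(x) == 0) for x in G}        # delta function at the origin
Mf = {x: max(abs(sphere_avg(f, x, r)) for r in range(N + 1)) for x in G}
assert all(Mf[x] <= max(f.values()) for x in G)
assert Mf[(0,) * N] == 1.0                    # the r = 0 average recovers f
```

The trivial bound $\|Mf\|_\infty \leq \|f\|_\infty$ exhibited here is dimension-free at $p=\infty$; the content of the theorem below is that the same holds for every $p>1$.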
We establish the following theorem:
\begin{theorem}\label{MAIN}
For any $p > 1$, and any $m \geq 2$, there exists a constant $C_{p,m}$ so that
\[ \left\| M f \right\|_{L^p(\Z_{m+1}^N)} \leq C_{p,m} \|f\|_{L^p(\Z_{m+1}^N)}.\]
In particular, the above bounds exist independent of the dimension, $N$.
\end{theorem}
A similar problem was studied in the $m=1$ case in \cite{HKS}:
\begin{theorem}[\cite{HKS}, Theorem 1]
There exists a constant $C_2$ so that for all $N$,
\[ \left\| \sup_{r \leq N} | \sigma_r *f| \right\|_{L^2(\Z_2^N)} \leq C_2 \|f\|_{L^2(\Z_2^N)}.\]
\end{theorem}
and later in \cite{BK}:
\begin{theorem}[\cite{BK}, Theorem 2.2]\label{big}
For any $p > 1$, there exist constants $C_p$ so that for all $N$,
\[ \left\| \sup_{r \leq N} | \sigma_r *f| \right\|_{L^p(\Z_2^N)} \leq C_p \| f \|_{L^p(\Z_2^N)}. \]
\end{theorem}
Note that, because spherical maximal functions pointwise dominate ball maximal functions, Theorems \ref{big} and \ref{MAIN} also establish dimension independent bounds on the ball average maximal operators for $\Z_{m+1}^N$ for all $m \geq 1$.
\begin{remark}
Theorems \ref{MAIN} and \ref{big} can be viewed as statements about Cartesian powers of finite cliques. The Hamming metric on $\Z_{m+1}^N$ can be isometrically identified with the graph distance metric on $K_{m+1}^N$, the $N$-fold Cartesian power of the clique on $m+1$ vertices. To see this, one simply labels the vertices of $K_{m+1}^N$ by the elements of $\Z_{m+1}^N$ and computes the distances directly.
As such, our results can be equivalently stated as dimension-independent $L^p$ bounds for Cartesian powers of finite cliques for all $p>1$. Each proof below can be readily rephrased in graph-theoretic language: translation is replaced by composition with a graph automorphism, the Fourier transform is replaced by a change of basis that diagonalizes the adjacency matrix, Fourier coefficients become eigenvalues of the spherical averaging matrices, etc.
\end{remark}
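The identification in the remark is easy to confirm by brute force for a small case; the following sketch computes graph distances in $K_3^3$ by breadth-first search and compares them with the Hamming weight:

```python
from collections import deque
from itertools import product

m, N = 2, 3                                   # K_3^3, i.e. Z_3^3
V = list(product(range(m + 1), repeat=N))

def neighbors(x):
    # Cartesian-product edges change exactly one coordinate; within a clique,
    # any change of that coordinate is an edge.
    for i in range(N):
        for c in range(m + 1):
            if c != x[i]:
                yield x[:i] + (c,) + x[i + 1:]

def bfs_dist(src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in neighbors(u):
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

d = bfs_dist((0,) * N)
for x in V:
    assert d[x] == sum(c != 0 for c in x)     # graph distance = Hamming weight
```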
The arguments of \cite{HKS} and \cite{BK} are applications of Stein's method \cite{S}, used in extending the well-known Hopf-Dunford-Schwartz maximal theorem for semigroups to more ``singular'' maximal averages, and break into four main steps:
\begin{enumerate}
\item By comparison with the noise semigroup from Boolean Analysis \cite[\S 4]{HKS}, \cite[\S 3]{BK}, the ``smoother'' maximal function
\[ \sup_{K } \frac{1}{K+1} \bigg| \sum_{k\leq K} \sigma_k*f \bigg|, \]
is shown to satisfy a dimension-free weak-type $1-1$ inequality:
\[ \left| \left\{ \sup_{K } \frac{1}{K+1} \bigg| \sum_{k\leq K} \sigma_k*f\bigg| > \lambda \right\}\right| \leq \frac{C_1 \|f\|_{L^1(\Z_2^N)}}{\lambda}, \ \lambda \geq 0; \]
\item The ``rougher'' maximal function $\sup_{r \leq N} | \sigma_r *f|$ is compared to the ``smoother'' maximal function in $L^2$ by using Littlewood-Paley theory on the group $\Z_{m+1}^N$. The key tool is an analysis of the (radial) spherical multipliers
\[ \F \sigma_k(S) := \kappa_{k}^N(S)\]
the \emph{Krawtchouk} polynomials, which are introduced and discussed in \cite[\S 2]{HKS};
\item The ``rough'' maximal function, $\sup_{r \leq N} | \sigma_r *f|$, is compared to increasingly ``rougher'' maximal functions in $L^2$. Analysis of the Krawtchouk polynomials is pivotal in these further comparisons;
\item Stein interpolation is used to control $\sup_{r \leq N} | \sigma_r *f|$ in $L^p, \ p > 1$.
\end{enumerate}
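In the binary case $m=1$, the multiplier identity behind step (2) can be checked by brute force: up to normalization, the character sum of the $k$-sphere against a frequency of weight $w$ equals the Krawtchouk polynomial $\sum_j(-1)^j\binom{w}{j}\binom{N-w}{k-j}$. (The paper's transform carries an extra $(m+1)^{-N/2}$ factor; the unnormalized check below suffices to exhibit the Krawtchouk structure.)

```python
from itertools import product
from math import comb

N = 5
for k in range(N + 1):
    sphere = [x for x in product((0, 1), repeat=N) if sum(x) == k]
    for s in product((0, 1), repeat=N):
        # character sum of the k-sphere at frequency s
        char_sum = sum((-1)**sum(a * b for a, b in zip(x, s)) for x in sphere)
        w = sum(s)
        # Krawtchouk value depends only on w = |s|  (math.comb gives 0 if j > w)
        kraw = sum((-1)**j * comb(w, j) * comb(N - w, k - j)
                   for j in range(k + 1))
        assert char_sum == kraw
```

In particular, the Fourier multiplier of $\sigma_k$ is radial: it depends on $S$ only through $|S|$, which is what makes the Littlewood-Paley comparisons in steps (2) and (3) tractable.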
A portion of our approach is based on this methodology. However, there are some obstacles that require different techniques, notably bounding averages over spheres of sufficiently large radius. The symmetry of the $N$-dimensional hypercube about the radius $N/2$ allows bounds for the maximal averaging operator over distant spheres (i.e. spheres of radius greater than $N/2$) to follow immediately from those over local spheres: distant spheres can simply be viewed as local spheres centered at an antipodal point. The loss of this symmetry for $m \geq 2$ requires an additional argument to bound the maximal averaging operator over the analogous distant spheres.
Theorems \ref{MAIN} and \ref{big} together yield a generalization to arbitrary direct sums of finite cyclic groups, which can be viewed as a statement about all finite abelian groups.
Let $n, N_1, \dots , N_n \in \mathbb N$ and let $A_m$ be the group
\[
\Z_{m + 1}^{N_m}
\]
for $1 \leq m \leq n$. Also let $A$ be the direct sum
\[
A := \oplus_{m=1}^n A_m
\]
with the notation
\[
y = \oplus_{m=1}^n \ee(y_m^1, \dots , y_m^{N_m}\rr)
\]
equipped with the modified $l^0$ metric,
\[
|y| := \left|\{(m,j): 1 \leq m \leq n, 1 \leq j \leq N_m, y_m^j \neq 0\}\right|.
\]
Put more simply, viewing $y$ as an $(N_1 + \dots + N_n)$-tuple in the natural way, $|y|$ is the number of nonzero components. Let $\sigma^A_r$ be the $L^1$-normalized indicator function of the sphere of radius $r$, i.e.
\[
\sigma^A_r(x) := \frac{1}{\ee|\{|x| = r\}\rr|} 1_{\{|x| = r\}}
\]
and define the operator
\[
M^A f(x) := \sup_{r > 0} \ee|\sigma^A_r * f(x)\rr|,
\]
the spherical maximal operator.
\begin{theorem}\label{directsumbound}
For any $p > 1$ there exist constants $C_{p,n}$ such that $\|M^A f\|_{L^p(A)} \leq C_{p,n} \|f\|_{L^p(A)}$. In particular, $C_{p,n}$ has no direct dependence on $A$ beyond the parameter $n$.
\end{theorem}
\begin{proof}
Notice that the $L^1$-normalized indicator of a sphere in $A$ is a convex combination of products of $L^1$-normalized indicators of spheres in $A_1, \dots , A_n$. Thus the spherical maximal function on $A$ is pointwise dominated by the product of the spherical maximal functions on $A_1, \dots , A_n$. By Theorems \ref{big} and \ref{MAIN}, for each fixed $m$ the spherical maximal operator on $A_m$ satisfies an $L^p$ inequality with constant depending only on $p$ and $m$. Thus the $L^p$ bound for the product of the spherical maximal functions on $A_1, \dots , A_n$ depends only on $p$ and $n$.
\end{proof}
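The convexity fact used in the proof can be illustrated concretely: in a direct sum, the sphere of radius $r$ is the disjoint union over $r_1+r_2=r$ of products of spheres in the summands, so in particular its size factors accordingly. A brute-force check for the toy example $A=\Z_2^3\oplus\Z_3^2$:

```python
from itertools import product
from math import comb

N1, N2 = 3, 2
A = list(product(product(range(2), repeat=N1), product(range(3), repeat=N2)))

def wt(y):
    return sum(c != 0 for c in y)

for r in range(N1 + N2 + 1):
    size = sum(1 for (y1, y2) in A if wt(y1) + wt(y2) == r)
    # spheres in Z_2^{N1} have C(N1, r1) points; in Z_3^{N2}, C(N2, r2) 2^{r2}
    split = sum(comb(N1, r1) * comb(N2, r - r1) * 2**(r - r1)
                for r1 in range(r + 1))
    assert size == split
```

Normalizing, $\sigma^A_r$ is the convex combination of the product measures $\sigma_{r_1}\otimes\sigma_{r_2}$ with weights $|S_{r_1}||S_{r_2}|/|S^A_r|$, which is exactly the domination used above.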
This result admits a corollary concerning Cayley graphs of finite abelian groups.
\begin{cor}\label{cayleycorollary}
Let $A$ be a finite abelian group whose elements have order at most $d$. Then there exists a generating set $S$, of minimal size up to a factor of $d$, such that the spherical maximal operator on the Cayley graph $\Gamma(A,S)$ satisfies $L^p$ bounds for all $p > 1$ depending only on $p$ and $d$.
\end{cor}
\begin{proof}[Proof of Corollary \ref{cayleycorollary}, Assuming Theorem \ref{directsumbound}]
If $A$ is a finite abelian group that admits a minimal-size generating set with $s$ elements, then by the fundamental theorem of finitely generated abelian groups there exist $m_1 < \dots < m_k < \infty$ and positive integers $\tn_1, \dots, \tn_k$ with $\tn_1 + \dots + \tn_k = s$ such that we can identify $A$ with
\begin{align}\label{directsumisomorphism}
\oplus_{j=1}^k \Z_{m_j + 1}^{\tn_j}.
\end{align}
We examine the generating set $S$ of $s$-tuples that have exactly one nonzero component. Note that, as long as each element of $A$ has order at most $d$, $|S| \leq sd$, so $S$ is a generating set of minimal size up to a factor of $d$. Setting $n := m_k$, we can identify \eqref{directsumisomorphism} with
\[
\oplus_{m=1}^n \Z_{m + 1}^{N_m}
\]
where in general $N_m = 0$ for some values of $m$. Note that the distance metric on the Cayley graph $\Gamma(A,S)$ is precisely the $l^0$ metric of Theorem \ref{directsumbound}. From here the corollary is a direct application of Theorem \ref{directsumbound}.
\end{proof}
\bigskip
The structure of the paper is as follows:
In $\S \ref{noise}$, we introduce our smooth spherical maximal operator for local (i.e. small radius) spheres, and prove that it satisfies a dimension independent weak-type $1-1$ bound;
In $\S \ref{comparison}$, we review Stein's semigroup comparison method, and adapt it to our present context; assuming the technical Proposition \ref{tech}, we prove $L^p$ bounds for the local spherical maximal operator;
In $\S \ref{distant}$, we retool the arguments of $\S \ref{noise}$ and $\S \ref{comparison}$ and use them to bound the distant maximal operator, thereby establishing our main result, Theorem \ref{MAIN}, modulo the proof of Proposition \ref{tech}; and
In $\S \ref{krawtchouk}$, we prove Proposition \ref{tech}.
\subsection{Notation}
Throughout, we denote $c_m:= \frac{m}{m+1}$. When clear from context, we will suppress the superscript ``$N$'' in the definition of our maximal functions. We will also make use of the modified Vinogradov notation. We use $X \lesssim Y$, or $Y \gtrsim X$ to denote the estimate $X \leq CY$ for some constant $C$ which may depend only on $m$ (in general we will suppress dependence on $m$). If we need $C$ to depend on a parameter other than $m$, we shall indicate this by subscripts, for instance $X \lesssim_a Y$ denotes the estimate $X \leq C_a Y$ for some $C_a$ depending on $a$. We use $X \approx Y$ as shorthand for $X \lesssim Y \lesssim X$, and similarly for $X \approx_a Y$.
\section{The Smooth Local Spherical Maximal Operator in $\Z_{m+1}^N$}\label{noise}
As alluded to in the introduction, the lack of symmetry of $\Z_{m+1}^N$ requires separate treatment of local and distant spheres, so we split $M$ into two maximal operators:
\[ Mf \leq M^Lf + M^Df:= \sup_{k \leq c_mN} |\sigma_k*f| + \sup_{k> c_mN} |\sigma_k*f|.\]
In this section, we prove:
\begin{proposition}\label{smoothweaktype}
The smooth local spherical maximal function
\[ M^L_Sf:=\sup_{L \leq c_m N } \bigg|\frac{1}{L+1} \sum_{l \leq L} \sigma_l *f\bigg| \]
is of weak-type $1-1$, with bound independent of $N$,
i.e. there exists some absolute $C_{1,m}$ so that
\[ \|M^L_Sf\|_{L^{1,\infty}(\Z_{m+1}^N)} \leq C_{1,m} \|f\|_{L^1(\Z_{m+1}^N)}.\]
\end{proposition}
Proposition \ref{smoothweaktype} will also be useful in $\S \ref{distant}$ to bound the smooth distant spherical maximal operator given by
\[
M^D_S f := \sup_{D \leq \frac{N}{m+1}} \bigg|\frac{1}{D+1} \sum_{d \leq D} \sigma_{N-d} *f\bigg|.
\]
Following the lead of \cite[\S 4]{HKS}, we bound $M_S^L$ by comparison with an appropriate ``noise'' semigroup, which we now introduce.
\subsection{The noise semigroup in $\Z_{m+1}^N$}
For fixed $0 < p < c_m$ we define a probability measure $\tilde\mu_p$ on $\Z_{m+1}$ given by
\[ \tilde\mu_p\ee(\{w\}\rr) := \begin{cases} 1-p &\mbox{if } w = 0 \\
\frac{p}{m} & \mbox{otherwise}, \end{cases} \]
and for $y \in \Z_{m+1}^N$,
\[ \tilde\mu^N_p\ee(\{y\}\rr) = \left( \frac{p}{m} \right)^{|y|} \left( 1- p \right)^{N-|y|},\]
where, as above,
\[ |y|:= \left|\{ 1 \leq i \leq N : y(i) \neq 0 \}\right|\]
is the $l^0$-metric. We view $\tilde\mu^N_p$ alternatively as a measure and a function depending on context.
Consider the (dimension dependent) convolution operator
\[ \tilde{\N}_p f(x):= f* \tilde\mu_p^N(x) = \int_{\Z_{m+1}^N} f(x+y) \tilde\mu_p^N(y).\]
We denote by $\xi = \xi_m$ a primitive $(m+1)$th root of unity.
\begin{lemma}\label{characternoise}
For each ($L^2$-normalized) character
\[
\chi_S(x) := \frac{1}{(m+1)^{N/2}} \xi^{S \cdot x} = \frac{1}{(m+1)^{N/2}} \prod_{i=1}^N \xi^{S(i) x(i)}
\]
where $S,x \in \mathbb Z_{m+1}^N$ and $S \cdot x = \sum_{i=1}^N S(i) x(i)$, we have
\[ \tilde{\N}_p \chi_S(x) = (1 - p/c_m )^{|S|} \chi_S(x).\]
\end{lemma}
\begin{proof}
First note that
\[
\chi_S(x+y) = \frac{1}{(m+1)^{N/2}} \xi^{(x+y) \cdot S} = \chi_S(x) \xi^{y \cdot S}
\]
Thus
\begin{align}\label{coordinateform}
\tilde{\mathcal N}_p \chi_S(x) = \chi_S(x) \int_{\mathbb Z_{m+1}^N} \xi^{y \cdot S} \tilde\mu_p^N(y) = \chi_S(x) \int_{\mathbb Z_{m+1}^N} \prod_{i=1}^N \xi^{y(i) S(i)} \tilde\mu_p^N(y)
\end{align}
However, $\tilde\mu_p^N$ is a Cartesian product of $N$ copies of $\tilde\mu_p$ so \eqref{coordinateform} can be written
\begin{align}\label{productform}
\tilde{\mathcal N}_p \chi_S(x) = \chi_S(x) \prod_{i=1}^N \int_{\mathbb Z_{m+1}} \xi^{y S(i)} \tilde\mu_p(y)
\end{align}
If $S(i) = 0$, the integral in \eqref{productform} evaluates to $1$ because the integrand is $1$ and $\tilde\mu_p$ is a probability measure.
If $S(i) \neq 0$,
\begin{align*}
\int_{\mathbb Z_{m+1}} \xi^{y S(i)} \tilde\mu_p(y) &= \tilde\mu_p(\{0\}) + \sum_{y=1}^m (\xi^{S(i)})^y \tilde\mu_p(\{y\})\\
&= (1-p) + \frac{p}{m} \sum_{y=1}^m (\xi^{S(i)})^y\\
&= 1 - p - \frac{p}{m}\\
&= 1 - p/c_m,
\end{align*}
where the third equality holds because $\xi^{S(i)}$ is a nontrivial $(m+1)$th root of unity, so $\sum_{y=0}^m (\xi^{S(i)})^y = 0$ and hence $\sum_{y=1}^m (\xi^{S(i)})^y = -1$.
Splitting the factors in \eqref{productform} into those corresponding to $0$ and non-$0$ indices of $S$, we see
\[
\tilde{\mathcal N}_p \chi_S(x) = \chi_S(x) \bigg[\prod_{i: S(i) \neq 0} \left(1 - p/c_m\right)\bigg] = \chi_S(x) (1 - p/c_m)^{|S|}
\]
\end{proof}
Consequently, with $p(t) = c_m(1- e^{-t})$ and
\[ \aligned
\mu^N_t(y) &:= \tilde\mu^N_{p(t)}(y)\\
\N_t &:= \tilde{\N}_{p(t)}
\endaligned \]
(so $\tilde{\N}_p = \N_{-\ln(1- p/c_m)}$), we have
\[ \N_t \chi_S(x)= e^{-t|S|} \chi_S(x),\]
and thus the family of operators $\{\N_t : t > 0\}$ forms a semigroup, and the maximal operator $\N_*$ given by
\[ \N_* f:= \sup_{T} \left|\frac{1}{T} \int_0^T \N_t f \ dt\right|\]
is of weak-type $1-1$, independent of dimension (\cite[Lemma VIII.7.6, pp. 690-691]{DS}).
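As a quick numerical sanity check (illustrative only, not part of the argument), the semigroup law can be verified at the level of the one-coordinate measures: $\tilde\mu_p * \tilde\mu_q = \tilde\mu_r$ with $1 - r/c_m = (1 - p/c_m)(1 - q/c_m)$, which under the reparametrization $p(t) = c_m(1-e^{-t})$ is exactly the identity $\N_t \N_s = \N_{t+s}$. The helper names in the sketch below are our own.

```python
# Verify the one-coordinate semigroup law for the noise measures:
# tilde mu_p * tilde mu_q = tilde mu_r with 1 - r/c_m = (1 - p/c_m)(1 - q/c_m).
def noise(p, m):
    # tilde mu_p on Z_{m+1}: mass 1-p at 0 and p/m at each nonzero element
    return [1 - p] + [p / m] * m

def cyclic_convolve(a, b, m):
    # convolution of two densities on the cyclic group Z_{m+1}
    n = m + 1
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]

m = 3
c_m = m / (m + 1)
for p in (0.1, 0.3, 0.6):
    for q in (0.05, 0.25, 0.5):
        r = c_m * (1 - (1 - p / c_m) * (1 - q / c_m))
        lhs = cyclic_convolve(noise(p, m), noise(q, m), m)
        assert all(abs(x - y) < 1e-12 for x, y in zip(lhs, noise(r, m)))
```

The identity on $\Z_{m+1}^N$ then follows coordinatewise, since $\tilde\mu_p^N$ is a product measure.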
For the sake of comparison with $M^L_S$, it will be convenient to reparametrize the semigroup maximal function in terms of $p$.
\begin{proposition}\label{reparameterization}
The maximal function
\[ \tilde{\N}_{*}f := \sup_{P \leq c_m} \left|\frac{1}{P} \int_0^P \tilde{\N}_p f \ dp\right| \]
is bounded pointwise by $\N_* f$. In particular $\tilde{\N}_{*}$ is of weak-type $1-1$ independent of dimension.
\end{proposition}
\begin{proof}
By direct calculation, one verifies -- analogous to the proof of \cite[Lemma 9]{HKS} -- that the measure
\[
\nu_P := \bigg\{\frac{c_m}{P} T e^{-T} 1_{(0,-\ln[1-P/c_m])} \bigg\} dT \ + \bigg\{\left(\frac{c_m}{P} - 1\right)\left(-\ln \left[1-\frac{P}{c_m}\right]\right)\bigg\} \delta_{-\ln[1-P/c_m]}
\]
has total mass $1$. Moreover, noting that the bracketed expression in \eqref{semigroupreparameterization} below equals $\frac{1}{P}1_{p\leq P}$, further computation reveals that
\begin{align}\label{semigroupreparameterization}
\frac{1}{P} \int_0^P \tilde\mu^N_p \ dp &= \int_0^{c_m} \tilde\mu^N_p \ \left( \frac{1}{c_m - p} \int_{-\ln(1 - p/c_m)}^\infty \frac{1}{T} \ d\nu_P(T) \right) \ dp\\
&= \int_0^\infty \left( \frac{1}{T} \int_0^{c_m(1- e^{-T})} \tilde\mu^N_p \frac{1}{c_m - p} \ dp \right) \ d\nu_P(T)\nonumber\\
&= \int_0^\infty \left( \frac{1}{T} \int_0^T \mu^N_t \ dt \right) \ d\nu_P(T).\nonumber
\end{align}
Because (for fixed $p$ and $t$) the convolution operators $\tilde\N_p$ and $\N_t$ are given by finite sums, they commute with all convergent integrals in $p$ and $t$. This leads directly to a pointwise majorization
\[ \aligned
\left|\frac{1}{P} \int_0^P \tilde{\N}_pf \ dp\right| &= \left|f * \left[\frac{1}{P} \int_0^P \tilde\mu_p^N\right]\right|\\
&= \left|f * \left[\int_0^\infty \left( \frac{1}{T} \int_0^T \mu^N_t \ dt \right) \ d\nu_P(T)\right]\right|\\
&= \left|\int_0^\infty \left( \frac{1}{T} \int_0^T \N_tf \ dt \right) \ d\nu_P(T)\right| \\
&\leq \int_0^\infty \N_*f \ d\nu_P(T) \\
&= \N_*f, \endaligned\]
from which the result follows.
\end{proof}
Finally, we will compare the smooth maximal function with the reparametrized ``semigroup'' maximal function:
\begin{proposition} For any nonnegative function $f$ we have the pointwise inequality
\[ M^L_S f \lesssim \tilde{\N}_{*}f. \]
In particular, $M^L_S$ is of weak-type $1-1$, independent of dimension.
\end{proposition}
\begin{proof}
We may express
\[ \aligned
\tilde\mu_p^N &= \sum_{l=0}^N (p/m)^l (1-p)^{N-l} 1_{\s_l} \\
&= \sum_{l=0}^N \binom{N}{l} (p)^l (1-p)^{N-l} \frac{1}{m^l \binom{N}{l}} 1_{\s_l} \\
&= \sum_{l=0}^N B(N,p,l) \sigma_l \endaligned\]
where $B(N,p,l) := \binom{N}{l} (p)^l (1-p)^{N-l}$.
By Lemma \ref{bigkchoice} below (similar to \cite{HKS}), for each $L \leq N$ we can choose $P(L) \in (0, c_m]$ that satisfies the favorable pointwise comparison
\[
\frac{1}{L+1} \lesssim \frac{1}{P(L)} \int_0^{P(L)} B(N,p,l) \, dp
\]
for each $l \leq L$. Thus
\begin{align}\label{smoothnoisepwise}
\sum_{l \leq L} \frac{1}{L+1} \sigma_l &\lesssim \sum_{l \leq N} \left( \frac{1}{P(L)} \int_0^{P(L)} B(N,p,l) \ dp \right) \sigma_l\nonumber\\
\frac{1}{L+1} \sum_{l \leq L} \sigma_l &\lesssim \frac{1}{P(L)}\int_0^{P(L)} \tilde\mu_p^N \ dp.
\end{align}
Noting that all terms in \eqref{smoothnoisepwise} are nonnegative, we observe that for any nonnegative function $f$, we have the pointwise comparison
\[ \aligned
\frac{1}{L+1} \sum_{l \leq L} \sigma_l * f &\lesssim \left(\frac{1}{P(L)}\int_0^{P(L)} \tilde\mu_p^N \ dp\right) * f\\
&= \left(\frac{1}{P(L)}\int_0^{P(L)} \tilde\N_p f \ dp\right)\\
&\leq \tilde\N_* f
\endaligned \]
where the first equality above is justified as in Proposition \ref{reparameterization}. Taking a supremum over all $L \leq c_mN$ provides the desired pointwise inequality. To prove the weak-type bound, first observe that because $M^L_S$ is a supremum over convolution operators with nonnegative kernels, we immediately have the pointwise inequality $M^L_S |f| \geq M^L_S f$ for an arbitrary function $f$. Thus for any $f$ with $\|f\|_{L^1\left(\Z_{m+1}^N\right)} = 1$,
\begin{align}\label{absnoiseineq}
\|M^L_S f\|_{L^{1,\infty}(\Z_{m+1}^N)} &\leq \big\|M^L_S |f|\big\|_{L^{1,\infty}(\Z_{m+1}^N)}\nonumber\\
&\lesssim \big\|\tilde\N_* |f|\big\|_{L^{1,\infty}(\Z_{m+1}^N)}
\end{align}
Because $f$ and $|f|$ have the same $L^1$ norm (namely $1$), \eqref{absnoiseineq} is bounded by the weak $1-1$ operator norm of $\tilde\N_*$. Taking a supremum over all $L^1$-normalized $f$ then proves that $M^L_S$ inherits the dimension independent weak-type $1-1$ bound from $\tilde\N_*$.
\end{proof}
Applying the Marcinkiewicz interpolation theorem with the trivial $L^\infty$ bound yields the desired $L^p$ bounds.
\begin{cor}\label{smoothlpbounds}
The operator $M^L_S$ satisfies $L^p$ bounds for all $p > 1$ that depend on $p$ and $m$ but are independent of dimension.
\end{cor}
All that remains in the section is to prove the key Lemma \ref{bigkchoice}. Below we present a computational proof based on an application of Stirling's formula. Remark \ref{probabilisticexplanation} conveys a probabilistic intuition for this calculation, made rigorous by Lemmas \ref{distantiid} and \ref{blbound}; to adapt the remark to this situation, one simply replaces $\sigma_N$ with $\sigma_0$.
We put forth both methods for completeness.
\begin{lemma}\label{bigkchoice}
For each $0 \leq l \leq L \leq c_mN$, there exists $P(L) \in (0,c_m]$ such that
\[
\frac{1}{L+1} \lesssim \frac{1}{P(L)} \int_0^{P(L)} B(N,p,l) \, dp,
\]
with implicit constant independent of $N$, $L$, and $l$.
\end{lemma}
\begin{proof}
We choose the value $P(L)$ as follows:
\[
P(L) =
\begin{cases}\frac{1}{N} \text{ if } L=0\\
\frac{L}{N} \text{ if } 1 \leq L \leq c_m N.
\end{cases}
\]
Because $\frac{P(L)}{L+1} \approx \frac{1}{N}$, it suffices to prove
\begin{align}\label{reducedsmoothinequality}
\frac{1}{N} \lesssim \int_0^{P(L)} B(N,p,l) \ dp
\end{align}
independent of $N$ and $l$. Also note that if $l=0$ we have
\[
\int_0^{P(L)} B(N,p,0) \ dp \geq \int_0^{1/N} (1-p)^N \ dp \gtrsim \frac{1}{N}
\]
so we can assume $1 \leq l$ (and recall $l \leq L \leq c_m N$). We estimate the right side of \eqref{reducedsmoothinequality} from below by
\[
\int_{l/N-\sqrt{l}/(2N)}^{l/N} B(N,p,l) \, dp.
\]
From there it will suffice to show that for all $p \in [l/N-\sqrt{l}/(2N),\, l/N]$ the inequality $B(N,p,l) \gtrsim 1/\sqrt l$ holds. To prove this, we first observe that by a direct application of Stirling's formula, $B(N,l/N,l) \gtrsim 1/\sqrt l$. Then we show that $B(N,p,l)$ maintains this bound for all
\[
\frac{l}{N}-\frac{\sqrt l}{2N} \leq p \leq \frac{l}{N}
\]
as follows:
\begin{align*}
\left| \ln \frac{B(N,l/N,l)}{B(N,p,l)} \right| &= \left| \int_p^{l/N} \partial_t \ln B(N,t,l) \ dt \right|\nonumber \\
&= \left| \int_p^{l/N} \frac{l-Nt}{t(1-t)} \ dt \right|\nonumber \\
&\leq \left(\frac{l}{N} - p\right) \left(\max_{t \in [p,l/N]}\frac{1}{t(1-t)}\right) \left(\max_{t \in [p,l/N]}l-Nt\right)\\
&\lesssim \frac{\sqrt l}{N} \frac{N}{l} \sqrt l \\
&= 1
\end{align*}
Exponentiating, it follows that
\[
\frac{B(N,l/N,l)}{B(N,p,l)} \approx 1 \implies B(N,p,l) \gtrsim \frac{1}{\sqrt l}.
\]
\end{proof}
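Both quantitative claims in the proof, the Stirling estimate $B(N,l/N,l) \gtrsim 1/\sqrt{l}$ and its persistence down to $p = l/N - \sqrt{l}/(2N)$, are easy to spot-check numerically. In the illustrative sketch below the thresholds $0.2$ and $0.1$ are ad hoc stand-ins for the implicit constants.

```python
from math import comb, sqrt

def B(N, p, l):
    # the binomial weight B(N, p, l) = C(N, l) p^l (1-p)^(N-l)
    return comb(N, l) * p ** l * (1 - p) ** (N - l)

N = 500
for l in range(1, N // 2 + 1):
    p0 = l / N
    # Stirling: B(N, l/N, l) is comparable to 1/sqrt(l)
    assert sqrt(l) * B(N, p0, l) > 0.2
    # the bound persists at the left endpoint l/N - sqrt(l)/(2N)
    p1 = p0 - sqrt(l) / (2 * N)
    assert sqrt(l) * B(N, p1, l) > 0.1
```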
\section{The Comparisons -- Stein's Method}\label{comparison}
The goal of this section is to prove the local spherical bound and set up much of the machinery for the distant spherical, and thus general spherical, bound.
\begin{theorem}\label{localbound}
The local spherical maximal operator $M^L$ given by
\[
M^L f(x) = \sup_{k \leq c_m N} |\sigma_k * f(x)|
\]
satisfies $L^p$ bounds for all $p > 1$ dependent only on $p$ and $m$.
\end{theorem}
As announced, in this section we adapt the Nevo-Stein \cite{NS} spectral machinery to our present context. We prepare to do so in our first subsection:
\subsection{Krawtchouk Preliminaries}
It is helpful to define the convolution operators:
\[ P^k f(x):= f* \sigma_k(x). \]
Their discrete derivatives
\begin{align}\label{differencetk}
\triangle^0 P^k &:= P^k,\nonumber \\
\triangle^1 P^k &:= P^k - P^{k-1},\nonumber \\
&\ \vdots\nonumber \\
\triangle^t P^k &:= \triangle \ee(\triangle^{t-1} P^k\rr) =
\sum_{j \leq t} (-1)^j \binom{t}{j} P^{k-j}, \\
&\ \vdots\nonumber
\end{align}
and their associated (radial) multipliers
\[ \F \ee(\triangle^t P^k\rr) (|S|)\]
will be central to our study.
First, when $|S|=r$ we have \cite[\S 5.3]{CST}
\[ \F P^k(r) = \sum_{j=\max(0,r+k-N)}^{\min(r,k)} (-1)^j \binom{N}{k}^{-1} \binom{r}{j} \binom{N-r}{k-j} m^{-j} =: \kappa_r^N(k),\]
the $k$th (normalized) Krawtchouk polynomial in $\Z_{m+1}^N$. By expanding the binomial coefficients in the expression above, it is easy to see that $\kappa_r^N(k) = \kappa_k^N(r)$ for all $r,k,N$. We adopt the convention that if any of $r$, $k$, or $N$ is negative, then $\kappa_r^N(k) = 0$.
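As an illustrative cross-check (not needed for the argument), the closed form above, its role as the Fourier multiplier of $P^k$, and the symmetry $\kappa_r^N(k) = \kappa_k^N(r)$ can all be confirmed by brute force in a small group; the helper name \texttt{kappa} below is our own.

```python
import cmath
from itertools import product
from math import comb

def kappa(r, k, N, m):
    # normalized Krawtchouk polynomial kappa_r^N(k) in Z_{m+1}^N
    return sum((-1) ** j * comb(r, j) * comb(N - r, k - j) / (comb(N, k) * m ** j)
               for j in range(max(0, r + k - N), min(r, k) + 1))

m, N = 2, 5
xi = cmath.exp(2j * cmath.pi / (m + 1))
spheres = {k: [x for x in product(range(m + 1), repeat=N)
               if sum(v != 0 for v in x) == k] for k in range(N + 1)}
for r in range(N + 1):
    S = [1] * r + [0] * (N - r)          # a frequency of weight r
    for k in range(N + 1):
        # the Fourier multiplier of P^k at S equals kappa_r^N(k)
        ft = sum(xi ** sum(s * v for s, v in zip(S, x))
                 for x in spheres[k]) / len(spheres[k])
        assert abs(ft - kappa(r, k, N, m)) < 1e-9
        assert abs(kappa(r, k, N, m) - kappa(k, r, N, m)) < 1e-12  # symmetry
```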
The Krawtchouk polynomials have the following useful difference relation:
\begin{lemma}\label{krawtchoukderivatives}
In $\Z_{m+1}^N$, if $r \geq 0$ and $k \geq 1$ are integers, then
\[
(\F \triangle P^k)(r) = \kappa_k^N(r) - \kappa_{k-1}^N(r) = \kappa_r^N(k) - \kappa_r^N(k-1) = \ee(-\frac{1}{c_m}\rr)\frac{\binom{N-1}{r-1}}{\binom{N}{r}} \kappa^{N-1}_{k-1}(r-1).\]
\end{lemma}
\begin{proof}
For the boundary case $r=0$, direct computation shows that $\F P^k(r) = 1$ for all $k$. Thus, $\F \triangle P^k(r) = \F P^k(r) - \F P^{k-1}(r) = 0$. We may now assume $r$ is positive.
Because dimension is not a constant in the lemma, we adopt the notation
\begin{align*}
&\mathbb S_r^N = \ee\{x \in \mathbb Z_{m+1}^N: |x|=r\rr\}\\
&\sigma_r^N = \frac{1}{|\mathbb S_r^N|} 1_{\mathbb S_r^N}
\end{align*}
Letting $y_j^N = (1,\dots,1,0,\dots,0)$ with $j$ `$1$'s and $N-j$ `$0$'s, we exploit the radiality of $\mathcal F \sigma_j^N$ to see
\begin{align}
\kappa_r^N(k) - \kappa_r^N(k-1) &= \ee\langle \sigma_r^N, \xi^{x \cdot y_k^N} - \xi^{x \cdot y_{k-1}^N}\rr\rangle\nonumber\\
&= \frac{1}{|\mathbb S_r^N|} \sum_{x \in \mathbb S_r^N} (\xi^{x_1 + \dots + x_{k-1}})(\xi^{x_k} - 1)\nonumber\\
&= \frac{1}{|\mathbb S_r^N|} \sum_{x \in \mathbb S_{r-1}^{N-1}} (\xi^{x_1 + \dots + x_{k-1}}) \sum_{x_k=1}^m (\xi^{x_k} - 1)\label{expandeddifference}
\end{align}
The last equality follows from the observation that any summand corresponding to an $x \in \mathbb S_r^N$ such that $x_k = 0$ is $0$. As in the proof of Lemma \ref{characternoise} we have
\[
\sum_{x_k=1}^m (\xi^{x_k} - 1) = -(m+1)
\]
Rearranging \eqref{expandeddifference} then yields
\begin{align*}
-(m+1) \frac{|\mathbb S_{r-1}^{N-1}|}{|\mathbb S_r^N|} \ee\langle \frac{1}{|\mathbb S^{N-1}_{r-1}|} 1_{\mathbb S^{N-1}_{r-1}}, \xi^{x_1 + \dots + x_{k-1}}\rr\rangle &= -(m+1) \frac{|\mathbb S_{r-1}^{N-1}|}{|\mathbb S_r^N|} \ee\langle \sigma_{r-1}^{N-1}, \xi^{x \cdot y_{k-1}^{N-1}}\rr\rangle\\
&= -\frac{{{N-1} \choose {r-1}}}{c_m{{N} \choose {r}}} \kappa^{N-1}_{r-1}(k-1)\\
&= -\frac{{{N-1} \choose {r-1}}}{c_m{{N} \choose {r}}} \kappa^{N-1}_{k-1}(r-1)
\end{align*}
\end{proof}
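The difference relation is easy to confirm numerically for small parameters; the following brute-force check is illustrative only (the helper \texttt{kappa} implements the closed form for $\kappa_r^N(k)$).

```python
from math import comb

def kappa(r, k, N, m):
    # normalized Krawtchouk polynomial kappa_r^N(k); zero for out-of-range parameters
    if min(r, k, N) < 0 or r > N or k > N:
        return 0.0
    return sum((-1) ** j * comb(r, j) * comb(N - r, k - j) / (comb(N, k) * m ** j)
               for j in range(max(0, r + k - N), min(r, k) + 1))

m, N = 3, 7
c_m = m / (m + 1)
for r in range(1, N + 1):
    for k in range(1, N + 1):
        lhs = kappa(r, k, N, m) - kappa(r, k - 1, N, m)
        # Lemma: equals -(1/c_m) * C(N-1, r-1)/C(N, r) * kappa_{k-1}^{N-1}(r-1)
        rhs = -(1 / c_m) * comb(N - 1, r - 1) / comb(N, r) * kappa(k - 1, r - 1, N - 1, m)
        assert abs(lhs - rhs) < 1e-9
```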
Applying Lemma \ref{krawtchoukderivatives} $t$ times yields a useful general expression for higher-order differences.
\begin{cor}\label{krawtchoukhigherderivatives}
For any integers $0 \leq t \leq k$ and $0 \leq r$,
\[ (\F \triangle^t P^k)(r)= \ee(-\frac{1}{c_m}\rr)^t \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \kappa^{N-t}_{r-t}(k-t). \]
Notice that, if $r < t$, this means $(\F \triangle^t P^k)(r) = 0$.
\end{cor}
Now we define
\[ \aligned \partial^0 \kappa_r^N(k) &= \kappa_r^N(k), \\
\partial \kappa_r^N(k) &= \partial^1 \kappa_r^N(k) := \kappa_r^N(k) - \kappa_r^N(k-1), \endaligned \]
and $\partial^t \kappa_r^N(k) := \partial ( \partial^{t-1} \kappa_r^N(k))$, provided $t \leq \min\{ r, k\}$. Otherwise we set $\partial^t \kappa_r^N(k) =0$.
Using this notation, for $t \leq k$, we may express
\[ (\F \triangle^t P^k)(r) = \partial^t \kappa_r^N(k) = \ee(-\frac{1}{c_m}\rr)^t \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \kappa^{N-t}_{k-t}(r-t).\]
\begin{remark}
The restriction to $k \geq 1$ in Lemma \ref{krawtchoukderivatives} (and therefore $k \geq t$ in Corollary \ref{krawtchoukhigherderivatives}) is necessary because the identity $(\F \triangle^t P^k)(r) = \partial^t \kappa_r^N(k)$ breaks down for $t > k$.
\end{remark}
The following proposition, whose proof we defer to \S \ref{krawtchouk} below, is the key quantitative ingredient needed to anchor the argument:
\begin{proposition}\label{tech}
There exists a constant $d$ (dependent only on $m$) such that for all $r,k,N$ we have
\[ |\kappa_k^N(r)| \leq e^{-d \frac{rk}{N}}.\]
\end{proposition}
\subsection{A Review of Nevo-Stein}
In this subsection, we shall regard $N$ as fixed, and
(quickly) review the comparison argument of \cite{S} as it relates to our current setting. For a fuller treatment, we refer the reader to \cite{NS}.
In the last subsection, we introduced the convolution operators $\{P^k\}$.
Because they are self-adjoint, positive $L^1$- and $L^\infty$-contractions, we may use the following outline from \cite{S}, \cite{NS}:
With $\lambda = \alpha + i \beta \in \C$, we recall the complex binomial coefficients
\[ A^\lambda_n = \frac{ (\lambda + 1)(\lambda + 2) \dots (\lambda + n)}{n!}, \ A_0^\lambda:= 1, A_{-n}^\lambda:=0.\]
We define the Ces{\`a}ro sums
\[ S_n^\lambda f(x) := \sum_{k\leq n} A_{n-k}^\lambda P^k f(x), \ \lambda \in \C,\]
for $n \leq c_m N$
and remark that for any integer $0 \leq t \leq k$, we have
\[
S_k^{-t-1}f(x) = \Delta^t P^k f(x)
\]
by a simple computation using \eqref{differencetk}. In particular, because we are only working with $S^\lambda_n$ for $n \leq c_m N$, Corollary \ref{krawtchoukhigherderivatives} shows that whenever $t > c_m N$ we have $S_n^{-t-1} f \equiv 0$.
The maximal functions associated to these higher Ces{\`a}ro means are
\[ S_*^\lambda f(x):= \max_{0 \leq n \leq c_m N} \left| \frac{S_n^\lambda f(x)}{(n+1)^{\lambda +1}} \right|. \]
The following lemmas are finitary adaptations of the results in \cite{NS}; we emphasize that the formal nature of the arguments in \cite{NS} allows them to be applied to the Ces{\`a}ro means of any sequence of Markov operators that are $L^1$ and $L^\infty$ contractions.
\medskip
\begin{lemma}[\cite{NS}, Proof of Lemma 4, pp. 144-145]\label{1}
For $\alpha > 0, \beta \in \RR$, there exist positive constants $C_\alpha$ so that
\[ S_*^{\alpha+i\beta} f \leq C_\alpha e^{2 \beta^2} S_*^0|f| \]
holds pointwise.
\end{lemma}
\medskip
\begin{lemma}[\cite{NS}, Proof of Lemma 5, pp. 145-146]\label{2}
For each nonnegative integer $t$ and each real $\beta$, there exist positive constants $C_t$ so that
\[ S^{-t+i \beta}_*f \leq C_t e^{3 \beta^2} \left( S_*^{-t-1}f + S_*^{-t}f + \dots + S_*^{-1} f \right)\]
holds pointwise.
\end{lemma}
\medskip
\begin{lemma}[\cite{NS}, Proof of Lemma 5, p. 147]\label{3}
Define
\[ R_t f(x)^2 := \sum_{0 \leq k \leq c_mN} (k+1)^{2t-1}|S_k^{-t-1}f(x)|^2 \]
for any positive integer $t$. Then there exists a positive constant $c_{-t}$ so that
\[ S_*^{-t} f \leq c_{-t} R_t f + 2S_*^{1-t}f\]
holds pointwise.
\end{lemma}
\begin{proposition}\label{main}
Let
\[R_t f(x)^2 := \sum_{0 \leq k \leq c_m N} (k+1)^{2t-1}|S_k^{-t-1}f(x)|^2.\]
Then there exist constants $C_{t,m}$ independent of $N$ so that
\[ \|R_t f \|_{L^2(\Z_{m+1}^N)} \leq C_{t,m} \| f\|_{L^2(\Z_{m+1}^N)}\]
for all $N$.
\end{proposition}
Before the proof of Proposition \ref{main}, we show that it implies dimension independent $L^p$ bounds on $M^L$.
\begin{proof}[Proof of Theorem \ref{localbound}, Assuming Proposition \ref{main}]\label{localboundproofassumingmain}
First we note that $S_*^0$ is the smooth local spherical maximal operator $M^L_S$ from Proposition \ref{smoothweaktype} while $S_*^{-1}$ is the local spherical maximal operator $M^L$ so our goal is to establish dimension independent $L^p$ bounds on $S_*^{-1}$.
By Corollary \ref{smoothlpbounds}, we know that there exist constants $\{A_{p,m}\}$, $p > 1$, so that for each $N$,
\[ \big\| S_*^0 |f| \big\|_{L^p(\Z_{m+1}^N)} \leq A_{p,m} \|f\|_{L^p(\Z_{m+1}^N)},\]
where the operators $\{S_*^0\}$ are $N$-dependent, but the bounds are not.
By Lemma \ref{1}, for each $\alpha > 0, \beta \in \RR$, we therefore have the bound
\[ \| S_*^{\alpha+i\beta} f \|_{L^p(\Z_{m+1}^N)} \leq C_\alpha e^{2 \beta^2} A_{p,m} \| f \|_{L^p(\Z_{m+1}^N)}\]
independent of $N$.
By Proposition \ref{main}, Lemma \ref{3}, and induction on $t$, we see that there exist constants $\{B_{2,m}^t\}, t \geq 1$ so that for all $N$,
\[ \|S_*^{-t} f\|_{L^2({\Z_{m+1}}^N)} \leq B_{2,m}^t \|f\|_{L^2({\Z_{m+1}}^N)}.\]
By Lemma \ref{2}, this means that there exist constants $\{D_{2,m}^t \}$ so that
\[ \|S_*^{-t+i\beta} f\|_{L^2({\Z_{m+1}}^N)} \leq e^{3\beta^2} D_{2,m}^t \|f\|_{L^2({\Z_{m+1}}^N)} \]
for all $N$.
The theorem then follows by linearizing the $S_*^{-1}$-supremum and applying the Stein interpolation theorem as in the conclusion of the proof of \cite[Theorem 2, pp. 150-151]{NS}.
\end{proof}
It remains only to prove Proposition \ref{main}, which we accomplish in the following subsection.
\subsection{Proof of Proposition \ref{main}}
\begin{proof}
We first bound the initial $t$ summands of $R_t f(x)^2$ and treat the tail afterwards. Each individual operator $S_k^{-t-1}$ is bounded on $L^2(\Z_{m+1}^N)$ with a bound dependent on $k$ and $t$. Thus, letting $c_t := \max_{k < t} \|S_k^{-t-1}\|_2$,
\[\aligned
\sum_{x \in \Z_{m+1}^N} \sum_{k = 0}^{t-1} (k+1)^{2t-1} |S_k^{-t-1}f(x)|^2 &= \sum_{0 \leq k < t} (k+1)^{2t-1} \|S_k^{-t-1}f\|_2^2\\
&\leq c_t^2 \sum_{0 \leq k < t} (k+1)^{2t-1} \|f\|_2^2\\
&\lesssim_t \|f\|_2^2
\endaligned\]
Now we move on to establish the desired bound for the tail, namely
\[
\sum_{x \in \Z_{m+1}^N} \sum_{k=t}^{c_m N} (k+1)^{2t-1} |S_k^{-t-1}f(x)|^2 = \sum_{x \in \Z_{m+1}^N} \sum_{k = t}^{c_m N} (k+1)^{2t-1} |\triangle^{t} P^k f(x)|^2 \lesssim_t \|f\|_2^2.
\]
By Plancherel, it is enough to show that there exists a constant, $C'_{t,m}$, independent of $r$ and $N$, so that for all $r$
\[ \sum_{k=t}^{c_mN} (k+1)^{2t-1} \ee|\F \triangle^{t} P^k\rr|^2(r)
\leq C'_{t,m} \]
or equivalently
\begin{align}\label{krawtchoukdifferencesum}
\sum_{k=t}^{c_mN} (k+1)^{2t-1} \ee|\partial^{t} \kappa_r^N(k)\rr|^2 \leq C'_{t,m}.
\end{align}
If $r<t$, each summand is $0$ so without loss of generality $r \geq t$. Ignoring a finite set of cases for fixed $t$, we can assume that $N > 2t$. Vital to the proof is the difference relation
\[ \partial^{t} \kappa_r^N(k)= \ee(-\frac{1}{c_m}\rr)^t \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \kappa^{N-t}_{r-t}(k-t) \]
from Corollary \ref{krawtchoukhigherderivatives} and the upper bound
\[ |\kappa_{r-t}^{N-t}(k-t)| \leq e^{-d \frac{r-t}{N-t} (k-t)}\]
from Proposition \ref{tech}.
We first handle the boundary case $r=t$, in which
\[ \kappa_{r-t}^{N-t}(k-t) = \kappa_{0}^{N-t}(k-t) = 1.\]
In this instance, we estimate
\[ \aligned
\sum_{k=0}^{c_mN} (k+1)^{2t-1} |\partial^{t} \kappa_r^N(k)|^2 &\leq
\sum_{k=1}^{N} k^{2t-1} \left( \frac{(c_m)^{-t}}{\binom{N}{t}} \right)^2 \\
&\lesssim \left( \frac{(N/c_m)^t}{\binom{N}{t}} \right)^2 \\
&\lesssim_t 1, \endaligned\]
simply bounding $\binom{N}{t}$ from below by $(N-t)^t/t^t \approx_t N^t$ because $N > 2t$. Henceforth, we may assume $r>t$.
Seeking the bound \eqref{krawtchoukdifferencesum}, we estimate
\[ \aligned
\sum_{k=t}^{c_mN} (k+1)^{2t-1} |\partial^{t} \kappa_r^N(k)|^2 &\leq
\sum_{k=t}^{\infty} (k+1)^{2t-1} |\partial^{t} \kappa_r^N(k)|^2 \\
&\lesssim_t \sum_{k=t}^{\infty} (k+1)^{2t-1} \left| \frac{\binom{N-t}{r-t}}{\binom{N}{r}} e^{-d \frac{r-t}{N-t} (k-t)} \right|^2 \\
&= \left( \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \right)^2 \sum_{k=t}^{\infty} (k+1)^{2t-1} e^{-2d \frac{r-t}{N-t} (k-t)} \\
&= \left( \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \right)^2 \sum_{k=0}^{\infty} \big(k+(t+1)\big)^{2t-1} e^{-2d \frac{r-t}{N-t} k}. \endaligned\]
We record the following easy lemma concerning infinite series:
\begin{lemma}\label{infiniteseries}
For any positive integer $n$, there exists a constant $A_n$ such that for all $|s| < 1$,
\[
\sum_{k=0}^{\infty} k^n s^k \leq \frac{A_n}{(1-s)^{n+1}}
\]
\end{lemma}
\begin{proof}
Define the operator
\[ Lf(s):= s \frac{df}{ds}(s),\]
and note that
\[ \sum_{k=0}^{\infty} k^n s^k = L^n \frac{1}{1-s}\]
Induction on $n$ shows that the right side of this equation can be expressed as $\frac{s^n + p_n(s)}{(1-s)^{n+1}},$ where $p_n(s) := \sum_{j <n} a_j^n s^j$ is a polynomial of degree $n-1$.
In particular, for $s < 1$, we may bound
\[ \left|\frac{s^n + p_n(s)}{(1-s)^{n+1}} \right| \leq \frac{A_n}{(1-s)^{n+1}},\]
where we let $A_n:= 1+\sum_{j<n} |a_j^n|$.
\end{proof}
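For small $n$ the rational functions $L^n \frac{1}{1-s}$ produced in the proof are classical, and the lemma can be confirmed numerically; the following check is illustrative only (it uses the explicit value $A_2 = 2$ coming from $p_2(s) = s$).

```python
def power_series_sum(n, s, terms=5000):
    # truncation of sum_{k>=0} k^n s^k; for s <= 0.9 the tail past 5000 terms is negligible
    return sum(k ** n * s ** k for k in range(terms))

for s in (0.1, 0.5, 0.9):
    # L(1/(1-s)) = s/(1-s)^2 and L^2(1/(1-s)) = s(1+s)/(1-s)^3
    assert abs(power_series_sum(1, s) - s / (1 - s) ** 2) < 1e-6
    assert abs(power_series_sum(2, s) - s * (1 + s) / (1 - s) ** 3) < 1e-6
    # the lemma's bound for n = 2 with A_2 = 2
    assert power_series_sum(2, s) <= 2 / (1 - s) ** 3
```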
Now, following the lead (and notation) of \cite[\S 4]{HKS}, we set
\[ \alpha = \alpha(r):= 2d \frac{r-t}{N-t};\]
possibly after reducing $d$, we may assume that $d< \frac{1}{2}$,
so that
\[ |\alpha(r)| \leq 2d \frac{r-t}{N-t} \leq 2d < 1\]
for all $r$.
In the following estimate we use the notation $A_n$ from Lemma \ref{infiniteseries}.
\[ \aligned
\sum_{k=0}^{\infty} \big(k+(t+1)\big)^{2t-1} e^{-2d \frac{r-t}{N-t} k} &=
\sum_{k=0}^{\infty} \big(k+(t+1)\big)^{2t-1} e^{-\alpha k} \\
&= \sum_{k=0}^{\infty} \left( \sum_{n=0}^{2t-1} \binom{2t-1}{n} k^n (t+1)^{2t-1-n} \right) e^{-\alpha k} \\
&\leq \sum_{k=0}^{\infty} \left( \sum_{n=0}^{2t-1} (2t)^{2t} k^n (t+1)^{2t} \right) e^{-\alpha k} \\
&\lesssim_t \sum_{n=0}^{2t-1} \left( \sum_{k=0}^{\infty} k^n e^{-\alpha k} \right) \\
&\lesssim_t \sum_{k=0}^{\infty} k^{2t-1} e^{-\alpha k} \\
&\leq \frac{A_{2t-1}}{(1-e^{-\alpha})^{2t}}\\
&\lesssim_t \alpha^{-2t}, \endaligned \]
where we used the mean value theorem in passing to the last line.
The upshot is that we may bound
\[ \sum_{k=0}^{\infty} (k+(t+1))^{2t-1} e^{-2 d \frac{r-t}{N-t} k} \lesssim_t \left( \frac{N-t}{r-t} \right)^{2t},\]
so that we have
\[
\sum_{k=0}^{c_mN} (k+1)^{2t-1} |\partial^{t} \kappa_r^N(k)|^2 \lesssim_t
\left[\left( \frac{\binom{N-t}{r-t}}{\binom{N}{r}} \right) \left( \frac{N-t}{r-t} \right)^{t} \right]^2.\]
Of course uniformly in $0 \leq j \leq t$, we have the equivalence $N-j \approx_t N$ and $r-j \approx_t r$ (recall $r \geq t+1$) so direct computation shows
\[
\frac{\binom{N-t}{r-t}}{\binom{N}{r}} \approx_t \ee(\frac{r}{N}\rr)^t \text{ and } \ee(\frac{N-t}{r-t}\rr)^t \approx_t \ee(\frac{N}{r}\rr)^t,
\]
thus proving the bound.\end{proof}
\section{Distant Spheres}\label{distant}
The strategy for bounding maximal averages over distant spheres is to bound (up to a constant) the smooth distant spherical maximal operator
\[
M^D_S f = \sup_{D \leq \frac{N}{m+1}} \bigg|\frac{1}{D+1} \sum_{d \leq D} \sigma_{N-d} *f\bigg|
\]
by the maximal operator given by precomposing the smooth local spherical maximal operator by $P^N$, the outermost spherical average. Explicitly, this operator is
\[
\sup_{L \leq c_m N} \ee|\frac{1}{L+1} \sum_{l \leq L} \big(\sigma_l * \sigma_N * f\big)(x)\rr|.
\]
The latter operator inherits the dimension independent $L^p$ bounds on $M^L_S$ from Corollary \ref{smoothlpbounds} simply because $P^N$ is an $L^p$ contraction for all $1 \leq p \leq \infty$. Once $L^p$ bounds are established for $M^D_S$, the arguments from $\S \ref{comparison}$ work similarly to bound $M^D$.
\begin{lemma}\label{distantiid}
For any $k \leq c_m N$,
\[
\sigma_k * \sigma_N = \sum_{d \leq k} b_k(d) \sigma_{N-d}
\]
where $b_k(d)$ is the probability mass of a sum of $k$ i.i.d. copies of a random variable
\[
X :=
\begin{cases}
0 \text{ with probability } \frac{m-1}{m} \\
1 \text{ with probability } \frac{1}{m}.
\end{cases}
\]
\end{lemma}
\begin{proof}
Notice that $\sigma_k * \sigma_N(x)$ is a nonnegative function with integral $1$, supported on $\{x \in \Z_{m+1}^N: |x| \geq N-k\}$. First we show that this function is radial by fixing $x$ such that $N-k \leq |x| \leq N$ and observing that the number of pairs $(y,z) \in \Bbb S_k \times \Bbb S_N$ such that $x = y+z$ depends only on $|x|$.
To see this, note that $z = x - y$, so the pairs are in bijection with the set of $y \in \Bbb S_k$ such that $x - y \in \Bbb S_N$. Every index $i$ with $x_i = 0$ must satisfy $y_i \neq 0$ (else $z_i = 0$), and at such an index any of the $m$ nonzero values of $y_i$ is allowed; this accounts for $N - |x|$ of the $k$ nonzero components of $y$. The remaining $k - (N-|x|)$ nonzero components of $y$ lie at indices with $x_i \neq 0$, where $y_i$ may be any nonzero value other than $x_i$, giving $m-1$ choices each. A counting argument therefore shows that the number of pairs is
\[
\binom{|x|}{k-(N-|x|)}\, m^{N-|x|}\, (m-1)^{k-(N-|x|)},
\]
which depends only on $|x|$, proving radiality. Thus we may write
\[
\sigma_k * \sigma_N(x) = \sum_{d \leq k} b_k(d) \sigma_{N-d}
\]
with $b_k(0) + \dots + b_k(k) = 1$ and $b_k(0), \dots , b_k(k) > 0$.
Another counting argument shows that for any fixed $0 \leq d \leq k$ and $z \in \Bbb S_N$,
\[
\ee|\ee\{y \in \Bbb S_k: |y+z|=N-d\rr\}\rr| = \binom{N}{k} \binom{k}{d} (m-1)^{k-d}.
\]
Therefore
\begin{align}\label{blexpression}
b_k(d) &= \ee\langle \sigma_k * \sigma_N, 1_{\Bbb S_{N-d}}\rr\rangle\nonumber\\
&= \frac{1}{|\Bbb S_k| |\Bbb S_N|} \ee|\ee\{(y,z) \in \Bbb S_k \times \Bbb S_N: |y+z|=N-d\rr\}\rr|\nonumber\\
&= \frac{1}{|\Bbb S_k|} \binom{N}{k} \binom{k}{d} (m-1)^{k-d}\nonumber\\
&= m^{-k} \binom{k}{d} (m-1)^{k-d}.
\end{align}
Finally we define a discrete random variable
\[
X :=
\begin{cases}
0 \text{ with probability } \frac{m-1}{m} \\
1 \text{ with probability } \frac{1}{m}
\end{cases}
\]
and directly compute that \eqref{blexpression} is exactly the probability that $k$ i.i.d. copies of $X$ sum to $d$.
\end{proof}
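Lemma \ref{distantiid} can also be confirmed by brute-force convolution in a small group; the following sketch (illustrative only) checks the identity in $\Z_3^4$ for the range of $k$ covered by the lemma.

```python
from itertools import product
from math import comb

m, N = 2, 4
n = m + 1
elems = list(product(range(n), repeat=N))

def weight(x):
    # l^0 weight: number of nonzero coordinates
    return sum(v != 0 for v in x)

def sigma(r):
    # L^1-normalized indicator of the sphere of radius r
    S = [x for x in elems if weight(x) == r]
    return {x: 1 / len(S) for x in S}

def convolve(a, b):
    out = {}
    for x, ax in a.items():
        for y, by in b.items():
            z = tuple((u + v) % n for u, v in zip(x, y))
            out[z] = out.get(z, 0.0) + ax * by
    return out

for k in range(int(m / (m + 1) * N) + 1):
    conv = convolve(sigma(k), sigma(N))
    for d in range(k + 1):
        # mass on the sphere of radius N-d should be b_k(d) = m^{-k} C(k,d) (m-1)^{k-d},
        # i.e. the Binomial(k, 1/m) probability of the value d
        mass = sum(v for x, v in conv.items() if weight(x) == N - d)
        assert abs(mass - comb(k, d) * (m - 1) ** (k - d) / m ** k) < 1e-9
```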
\begin{remark}\label{probabilisticexplanation}
The intuition for this result is that the convolution of $\sigma_k$ and $\sigma_N$ can be thought of as the following random process:
\begin{enumerate}
\item Pick an element of $\Bbb S_N$ uniformly at random.
\item Pick $k$ components to change uniformly at random (among $k$-subsets of $[N]$).
\item Independently choose one of the remaining $m$ values in $\Bbb Z_{m+1}$ for each of those $k$ components.
\end{enumerate}
The symmetries of the first two steps above easily imply that the probability mass on $\Bbb Z_{m+1}^N$ is radial. Moreover, the length of the output is independent of the first two steps, so it is determined by the outcome of the final step: a $k$-fold i.i.d. process with a $\frac{1}{m}$ probability of decreasing the length by $1$ and a $\frac{m-1}{m}$ probability of preserving the length.
\end{remark}
\begin{lemma}\label{blbound}
Let $0 \leq d \leq N/m$. Then for any $j$ within $\sqrt d$ of $md$, $b_j(d) \gtrsim d^{-1/2}$.
\end{lemma}
\begin{proof}
This lemma can be thought of as a pointwise application of the central limit theorem. Indeed we start by noting that by the (classical) central limit theorem, the expressions
\begin{align}\label{stddeviationsums}
\sum_{i = j/m - 2\sqrt d}^{j/m- \sqrt d} b_j(i) \text{ and } \sum_{i = j/m + \sqrt d}^{j/m + 2 \sqrt d} b_j(i)
\end{align}
converge to positive numbers as $j \to \infty$ (which is equivalent to $d \to \infty$). To see this, note that the probability mass function (in the variable $a$)
\[
b_j\left(a\sqrt j +\frac{j}{m}\right)
\]
converges weakly to a fixed Gaussian. Moreover, $\sqrt j \approx \sqrt d$ so both expressions in \eqref{stddeviationsums} converge to integrals of this Gaussian over fixed intervals. In particular, this implies that there exist
\[
\lambda \in [j/m - 2\sqrt d, j/m - \sqrt d], \rho \in [j/m + \sqrt d, j/m + 2 \sqrt d]
\]
such that $b_j(\lambda), b_j(\rho) \gtrsim d^{-1/2}$.
Recall from Lemma \ref{blexpression} that $b_j(i) = m^{-j} \binom{j}{i} (m-1)^{j-i}$. In the interest of proving a concavity property of $b_j$, we observe that the ratio of successive summands is
\[
R_j(i) := \frac{b_j(i+1)}{b_j(i)} = \frac{j-i}{(m-1)(i+1)}.
\]
Notice that $R_j$ decreases from $i=\lambda$ to $i=\rho$ and that $d \in [\lambda, \rho]$ (this can be computed directly from the definitions of $j$ and $d$). Therefore, if $b_j(d) \leq b_j(\lambda)$, then $R_j(d) \leq 1$. Moreover, if $R_j(d) \leq 1$, then $b_j(d) \geq b_j(\rho)$.
Thus, at least one of the inequalities $b_j(d) \geq b_j(\lambda)$ and $b_j(d) \geq b_j(\rho)$ must hold. Either way, this shows $b_j(d) \gtrsim d^{-1/2}$.
\end{proof}
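A quick numerical illustration of the lemma (ours, not the paper's): using the formula $b_j(i) = m^{-j}\binom{j}{i}(m-1)^{j-i}$ recalled in the proof, the normalized quantity $\sqrt{d}\,b_j(d)$ stays bounded below over the stated range of $j$.

```python
from math import comb, sqrt

def b(m, j, i):
    # b_j(i) = m^{-j} C(j, i) (m-1)^{j-i}, as recalled in the proof above
    return comb(j, i) * (m - 1) ** (j - i) / m ** j

m = 3
worst = min(
    sqrt(d) * b(m, j, d)
    for d in range(1, 60)
    for j in range(max(d, int(m * d - sqrt(d))), int(m * d + sqrt(d)) + 1)
)
# sqrt(d) * b_j(d) stays bounded below by an absolute constant on this range
assert worst > 0.05
```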
\begin{proposition}
For any nonnegative function $f$ we have the pointwise inequality
\[
M^D_S f(x) \lesssim M^L_S \big(\sigma_N * f\big)(x).
\]
In particular, $M^D_S$ is weak-type $(1,1)$ with bounds independent of dimension. Again, by interpolation this implies dimension-independent $L^p$ bounds for all $1<p\leq\infty$.
\end{proposition}
\begin{proof}
Because the operators in question are suprema over positive convolution operators, we seek pointwise bounds on the convolution kernels. Moreover, it suffices to show that, for any $0 \leq L \leq c_m N$,
\begin{align}\label{distantspheretrasfer}
\sum_{d \leq L/m} \sigma_{N-d} \lesssim \sum_{l \leq L} \sigma_l * \sigma_N.
\end{align}
This can be seen simply by dividing the left and right sides by $L/m + 1$ and $L+1$, respectively (as these values are equivalent up to a constant), and taking a supremum over $L$. Applying Lemmas \ref{distantiid} and \ref{blbound}, the right side of \eqref{distantspheretrasfer} can be rewritten and bounded from below:
\begin{align*}
\sum_{l \leq L} \sigma_l * \sigma_N &= \sum_{l=0}^L \sum_{j=0}^l b_l(j) \sigma_{N-j}\\
&= \sum_{j=0}^L \sum_{l=j}^L b_l(j) \sigma_{N-j}\\
&\geq \sum_{d = 0}^{L/m} \sum_{l=dm-\sqrt d}^{dm} b_l(d) \sigma_{N-d}\\
&\gtrsim \sum_{d \leq L/m} d^{-1/2}\, d^{1/2} \sigma_{N-d}\\
&\geq \sum_{d \leq L/m} \sigma_{N-d}.
\end{align*}
\end{proof}
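After interchanging the sums, the coefficient of $\sigma_{N-d}$ on the right of \eqref{distantspheretrasfer} is $\sum_{l=d}^{L} b_l(d)$, which the argument shows is bounded below for $d \leq L/m$. A small numerical check of this (ours, with the binomial weights from Lemma \ref{distantiid}):

```python
from math import comb

def b(m, l, j):
    # binomial weight b_l(j) = m^{-l} C(l, j) (m-1)^{l-j}
    return comb(l, j) * (m - 1) ** (l - j) / m ** l

m, L = 2, 40
# coefficient of sigma_{N-d} in sum_{l <= L} sigma_l * sigma_N
coeff = {d: sum(b(m, l, d) for l in range(d, L + 1)) for d in range(L // m + 1)}
# the coefficients are bounded below by a constant for all d <= L/m
assert min(coeff.values()) > 0.5
```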
Because the interpolation techniques used in \S \ref{comparison} apply to any Ces{\`a}ro means for a sequence of Markov operators that are $L^1$ and $L^\infty$ contractions, much of the argument carries over with the modification that the operator $P^k$ is replaced by
\[
Q^k f(x) := f * \sigma_{N-k}(x),
\]
the operator $S_n^\lambda$ is replaced by
\[
T_n^\lambda f(x) := \sum_{k \leq n} A^\lambda_{n-k} Q^k f(x), \qquad \lambda \in \C,
\]
and the operator $S_*^\lambda$ is replaced by
\[
T_*^\lambda f(x) := \max_{0 \leq n \leq \frac{N}{m+1}} \ee|\frac{T_n^\lambda f(x)}{(n+1)^{\lambda+1}}\rr|,
\]
and the operator $R_t$ is replaced by
\[
\ppt f(x)^2 := \sum_{0 \leq k \leq \frac{N}{m+1}} (k+1)^{2t-1}|T_k^{-t-1} f(x)|^2.
\]
All other definitions from \S \ref{comparison} are translated over analogously (of course the local operators will be replaced by distant operators). Note also that, following the computations of Lemma \ref{krawtchoukderivatives} and Corollary \ref{krawtchoukhigherderivatives},
\begin{align}\label{krawtchoukforwardderivative}
\big|(\F \triangle^t Q^k)(r)\big| =\ee(\frac{1}{c_m}\rr)^t\frac{\binom{N-t}{r-t}}{\binom{N}{r}} |\kappa^{N-t}_{r-t}(N-k)|.
\end{align}
Carrying over the proof of Theorem \ref{localbound} in the natural way, we can establish (modulo an analog to Proposition \ref{main}) the distant spherical bound, from which the main result Theorem \ref{MAIN} follows:
\begin{theorem}\label{distantbound}
The distant spherical maximal operator $M^D$ given by
\[
M^D f(x) = \sup_{k \leq \frac{N}{m+1}} |\sigma_{N-k} * f(x)|
\]
satisfies $L^p$ bounds for all $p > 1$ dependent only on $p$ and $m$.
\end{theorem}
Thus, the only remaining element in this section is the distant sphere square function bound.
\begin{proposition}\label{distantsquarefunction}
With
\[ \aligned
\ppt f(x)^2 &= \sum_{0 \leq k \leq \frac{N}{m+1}} (k+1)^{2t-1}|T_k^{-t-1}f(x)|^2 \\
&= \sum_{0 \leq k \leq \frac{N}{m+1}} (k+1)^{2t-1} |\triangle^{t} Q^k f(x)|^2, \endaligned \]
there exist constants $C_{t,m}$ independent of $N$ so that
\[ \|\ppt f \|_{L^2(\Z_{m+1}^N)} \leq C_{t,m} \| f\|_{L^2(\Z_{m+1}^N)}\]
for all $N$.
\end{proposition}
\begin{proof}
Much of the proof of Proposition \ref{main} carries over. In fact, since all spheres appearing in $\ppt$ have radii on the order of $N$, the bound here is simpler.
For any $r \geq t$, we bound
\begin{align*}
\sum_{k=0}^{\frac{N}{m+1}} (k+1)^{2t-1} |\F \Delta^t Q^k|^2(r) &= \sum_{k=0}^{\frac{N}{m+1}} (k+1)^{2t-1} \ee(\frac{1}{c_m}\rr)^{2t} \ee(\frac{\binom{N-t}{r-t}}{\binom{N}{r}} \ee|\kappa^{N-t}_{r-t}(N-k)\rr|\rr)^2\\
&\lesssim_t \sum_{k=0}^{\frac{N}{m+1}} (k+1)^{2t-1} \frac{\binom{N-t}{r-t}^2}{\binom{N}{r}^2} \exp\ee(-2d\frac{(r-t)(N-k)}{N-t}\rr)\\
&\leq \sum_{k=0}^{\frac{N}{m+1}} (k+1)^{2t-1} \frac{\binom{N-t}{r-t}^2}{\binom{N}{r}^2}e^{-d(r-t)}\\
&\lesssim_t N^{2t} \ee(\frac{r}{N}\rr)^{2t} e^{-dr}
\end{align*}
where we used the fact that $k \leq \frac{N}{m+1}$ to pass to the second-to-last line and the estimates at the end of the proof of Proposition \ref{main} to pass to the last line. Because $e^{-dr} \lesssim_t r^{-2t}$, this proves the desired inequality
\[
\sum_{k=0}^{\frac{N}{m+1}} (k+1)^{2t-1} |\F \Delta^t Q^k|^2(r) \lesssim_t 1.
\]
\end{proof}
\section{Proof of Proposition \ref{tech}}\label{krawtchouk}
First we introduce the notation
\[
a_j =\binom{N}{k}^{-1} \binom{r}{j} \binom{N-r}{k-j} m^{-j}
\]
for the magnitude of the $j$th summand in the full expression for $\kappa^N_k(r)$, which we recall is given by
\begin{align}\label{krawtchoukdefinition}
\kappa^N_k(r) = \sum_{j=\max(0,r+k-N)}^{\min(r,k)} (-1)^j \binom{N}{k}^{-1} \binom{r}{j} \binom{N-r}{k-j} m^{-j}.
\end{align}
We restate the proposition for the reader's convenience:
\begin{prop}[restatement]
There exists a constant $d$ (dependent only on $m$) such that for all $r,k,N$ we have
\[ |\kappa_k^N(r)| \leq e^{-d (rk/N)}.\]
\end{prop}
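Before the proof, the bound can be checked numerically on a small parameter grid (a sanity check of ours, not a proof): computing $\kappa^N_k(r)$ exactly from \eqref{krawtchoukdefinition} in rational arithmetic and measuring the decay exponent $N\log|\kappa^N_k(r)|/(rk)$, which should stay below some $-d < 0$.

```python
from fractions import Fraction
from math import comb, log

def kappa(m, N, k, r):
    """kappa^N_k(r) computed exactly from its defining sum."""
    return sum(
        (-1) ** j * Fraction(comb(r, j) * comb(N - r, k - j), comb(N, k) * m ** j)
        for j in range(max(0, r + k - N), min(r, k) + 1)
    )

m = 2
rates = []
for N in range(2, 13):
    for k in range(1, N + 1):
        for r in range(1, N + 1):
            kap = kappa(m, N, k, r)
            if kap != 0:
                # decay exponent: N log|kappa| / (rk) should be <= -d < 0
                rates.append(N * log(abs(kap)) / (r * k))
assert max(rates) < 0
```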
By the symmetry of the Krawtchouk polynomials in $r$ and $k$, we may assume without loss of generality that $r \leq k$, so the sum \eqref{krawtchoukdefinition} terminates at $r$. The thrust of the proof is to show that the largest summand magnitude in \eqref{krawtchoukdefinition} decays exponentially in $rk/N$, so that Lemma \ref{dominantterm} below will prove the proposition.
For the remainder of the section we define
\[
\ell := \max(0, r+k-N)
\]
to be the lowest index of summation. Also we define $n$ to be the lowest index in the region of summation, i.e. $[\ell, r] \cap \Z$, such that $a_n$ is a maximal summand magnitude. In other words, $n \in \Z$ is minimal subject to the constraints that $\ell \leq n \leq r$ and $a_j \leq a_n$ for all $j \in \Z$ in that range.
\begin{lemma}\label{dominantterm}
Each Krawtchouk polynomial is dominated by its maximal summand magnitude. More concretely, $\left|\kappa^N_k(r)\right| \leq a_n$.
\end{lemma}
\begin{proof}
We begin by noting that the ratio $a_{j+1}/a_j$ is given by
\[
R(j) := \frac{(r-j)(k-j)}{m(j+1)(j+N-r-k+1)}.
\]
We view $R$ as a function on the real interval $(\ell - 1, r]$ rather than restricting it to the integers. Its key properties for this lemma are
\begin{enumerate}[(i)]
\item $R(j) \geq 0$,
\item $R(j)$ is continuously (strictly) decreasing,
\item $R(j)$ approaches $+\infty$ as $j$ approaches $\ell - 1$, and
\item $R(r) = 0$.
\end{enumerate}
Property (i) above follows from the fact that all factors in $R(j)$ are nonnegative. Property (ii) is a result of the factors in the numerator diminishing in magnitude and the factors in the denominator growing. Property (iii) follows from property (i) and the fact that $R$ has a pole at $\ell - 1$ while property (iv) is trivial.
By the intermediate value theorem, properties (ii), (iii), and (iv) imply that there exists some $J \in (\ell-1, r)$ such that $R(J)=1$. Applying property (ii), we see that for all integers $j$ in the region of summation,
\begin{align}\label{krawtchoukmonotonicity}
\begin{split}
&j \leq J \implies R(j) \geq 1 \implies a_{j+1} \geq a_j\\
&j \geq J \implies R(j) \leq 1 \implies a_{j+1} \leq a_j.
\end{split}
\end{align}
In particular, this means that $a_{\lceil J \rceil}$ is a maximal summand magnitude. Note that because $R(j)$ is strictly decreasing, $R(\lceil J \rceil - 1) > 1$, so $a_{\lceil J \rceil} > a_{\lceil J \rceil - 1}$. Thus $\lceil J \rceil$ must be minimal among indices of maximal summand magnitudes, i.e. $n = \lceil J \rceil$.
Finally, we can bound $\left|\kappa^N_k(r)\right|$ by splitting it into two monotonic alternating sums, namely
\[
\kappa^N_k(r) = \left(\sum_{j=\ell}^n (-1)^j a_j\right) + \left(\sum_{j=n+1}^r (-1)^j a_j\right)
\]
where the monotonicity is a direct consequence of \eqref{krawtchoukmonotonicity}. Note that the second sum above may be empty, but we can ignore this by defining $a_{r+1}$ to be $0$.
Because they are monotonic and alternating, the sums are bounded between $0$ and their respective largest magnitude summands, namely $\pm a_n$ and $\mp a_{n+1}$. Because these bounds have opposite signs, we can bound $\left|\kappa^N_k(r)\right|$ by the maximum of their magnitudes, namely $a_n$.
\end{proof}
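Lemma \ref{dominantterm} is easy to confirm in exact rational arithmetic (our check; the helper name is hypothetical): for every $(r,k,N)$ on a small grid, $|\kappa^N_k(r)|$ never exceeds the largest summand magnitude.

```python
from fractions import Fraction
from math import comb

def summands(m, N, k, r):
    """Lowest index ell and the magnitudes a_j for j = ell, ..., min(r, k)."""
    lo, hi = max(0, r + k - N), min(r, k)
    return lo, [Fraction(comb(r, j) * comb(N - r, k - j), comb(N, k) * m ** j)
                for j in range(lo, hi + 1)]

m = 3
for N in range(1, 11):
    for k in range(N + 1):
        for r in range(N + 1):
            lo, a = summands(m, N, k, r)
            kappa = sum((-1) ** (lo + i) * v for i, v in enumerate(a))
            # Lemma (dominantterm): |kappa^N_k(r)| <= a_n, the maximal summand
            assert abs(kappa) <= max(a)
```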
To bound $a_n$ we first bound $n$ from below. This technical lemma is largely comprised of algebraic and calculus manipulations.
\begin{lemma}\label{peaklemma}
If $n > 0$ and $rk \geq 2Nm$ then $n \gtrsim rk/N$.
\end{lemma}
For the sake of clarity, we point out that the hypothesis $rk \geq 2Nm$ proves $n > 0$ a posteriori; however, it is more efficient to handle the $n = 0$ case separately.
\begin{proof}
We recall from Lemma \ref{dominantterm} that the ratio $a_{j+1}/a_j$ is given by
\[
R(j) := \frac{(r-j)(k-j)}{m(j+1)(j+1+N-r-k)}.
\]
To solve the equation $R(j) = 1$, we apply the quadratic formula to the quadratic
\[ \aligned
&\big[m(j+1)(j+1+N-r-k)\big]-\big[(r-j)(k-j)\big]\\
&=(m-1)j^2 + [2m+Nm-(m-1)(r+k)]j + [m+Nm-rm-km-rk].
\endaligned \]
This reveals that $R$ can equal $1$ only at the values
\[
j_{\pm} := C \pm \sqrt{C^2 + A},
\]
where
\begin{align}
A &:= -\frac{4(m-1)(m+Nm-rm-km-rk)}{4(m-1)^2} = \bigg(\frac{rk-Nm}{m-1} + \frac{rm+km-m}{m-1}\bigg)\label{defA}\\
C &:= -\frac{2m+Nm-(m-1)(r+k)}{2(m-1)} = \bigg(\frac{r+k}{2} - \frac{m}{m-1} - \frac{Nm}{2(m-1)}\bigg)\label{defC}.
\end{align}
We will show
\begin{enumerate}[(I)]
\item $A > 0$,
\item $A \gtrsim rk$, and
\item $\sqrt{C^2 + A} - |C| \gtrsim rk/N$.
\end{enumerate}
Item (I) above implies that $j_- < 0 < j_+$. We saw in the proof of Lemma \ref{dominantterm} that there exists $J \in (\ell - 1,r)$ such that $R(J) = 1$ and $n = \lceil J \rceil$. It follows that $J \in \{j_-, j_+\}$ and, by the assumption $n > 0$, that $J>0$. Therefore $J = j_+$.
Item (II) is the key element in the proof of item (III). Item (III) shows that
\[
n \geq J \gtrsim rk/N
\]
simply because, regardless of the sign of $C$,
\[
J = C + \sqrt{ C^2 + A} \geq \sqrt{ C^2 + A} - |C| \gtrsim rk/N.
\]
Therefore all that remains in the lemma is to justify (I), (II), and (III).
\bigskip
\emph{Justification of (I) and (II):}
In light of the fact that $r$ and $k$ are positive integers, simple arithmetic shows that
\[
\frac{rm+km-m}{m-1} > 0
\]
and, because $rk \geq 2Nm$,
\[
\frac{rk-Nm}{m-1} \geq \frac{1}{2(m-1)} rk.
\]
Adding these two inequalities, the last expression in \eqref{defA} shows $A >0$ and $A \gtrsim rk$.
\bigskip
\emph{Justification of (III):}
We split into two cases.
\begin{enumerate}[\text{Case }1:]
\item If $A > 3C^2$, then
\[ \sqrt{C^2 + A} - |C| \geq A^{1/2} - (A/3)^{1/2} \gtrsim A^{1/2}. \]
We know that $A \gtrsim rk$ and $(rk)^{1/2} \leq N$ by item (II) and the bound $r, k \leq N$ respectively. It follows that
\[ A^{1/2} \gtrsim (rk)^{1/2} = \frac{rk}{(rk)^{1/2}} \geq \frac{rk}{N}.\]
\item If $A \leq 3C^2$, then we apply the mean value theorem to observe that
\begin{align*}
\sqrt{C^2+A}-|C| &\geq \ee(\inf_{x \in [C^2, C^2 + A]} \frac{1}{2x^{1/2}}\rr) A\\
&\geq \frac{A}{2(4C^2)^{1/2}}\\
&\gtrsim \frac{rk}{N}.
\end{align*}
The final inequality follows from the bounds $A \gtrsim rk$ and $|C| \lesssim N$. The former is again item (II) and the latter comes from the fact that each term in the last expression of \eqref{defC} is bounded in magnitude by $2$ or $N$.
\end{enumerate}
Thus, regardless of $A$, $\sqrt{C^2 + A} - |C| \gtrsim rk/N$.
\end{proof}
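The conclusion $n \gtrsim rk/N$ can also be observed numerically (our sketch, not part of the paper): compute the minimal maximizing index $n$ directly from the summand magnitudes and compare it with $rk/N$ whenever $rk \geq 2Nm$.

```python
from fractions import Fraction
from math import comb

def argmax_index(m, N, k, r):
    """Minimal index n at which the summand magnitude a_n is maximal."""
    lo, hi = max(0, r + k - N), min(r, k)
    a = [Fraction(comb(r, j) * comb(N - r, k - j), comb(N, k) * m ** j)
         for j in range(lo, hi + 1)]
    return lo + a.index(max(a))  # index() returns the first (minimal) maximizer

m = 2
ratios = [
    argmax_index(m, N, k, r) * N / (r * k)
    for N in range(4, 20)
    for k in range(1, N + 1)
    for r in range(1, k + 1)  # WLOG r <= k, as in the proposition
    if r * k >= 2 * N * m
]
# n / (rk/N) stays bounded below whenever rk >= 2Nm
assert ratios and min(ratios) > 0.05
```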
From here Proposition \ref{tech} is fairly straightforward.
\begin{proof}[Proof of Proposition \ref{tech}]
First we use the combinatorial observation
\[ \aligned
\binom{N}{k} &= \bigg|\bigg\{S \subset [N]: |S| = k\bigg\}\bigg|\\
&\geq \bigg|\bigg\{S \subset [N]: |S| = k, \big|S \cap [r]\big| = j\bigg\}\bigg| = \binom{r}{j} \binom{N-r}{k-j}
\endaligned \]
to justify the inequality
\begin{align}\label{cubecase}
a_j = \binom{N}{k}^{-1} \binom{r}{j} \binom{N-r}{k-j} m^{-j} \leq m^{-j}
\end{align}
for all $j$ in the region of summation.
This bound is useful because in order to prove the proposition, it is sufficient to prove $a_n \leq e^{-d(rk/N)}$ by Lemma \ref{dominantterm}. To this end, we split into three cases.
\begin{enumerate}[\text{Case } 1:]
\item The hypotheses of Lemma \ref{peaklemma} hold, i.e. $n > 0$ and $rk \geq 2mN$. Then Lemma \ref{peaklemma} provides a (small) constant $\epsilon >0$ such that
\[
n \geq \epsilon \frac{rk}{N}.
\]
Letting $d := \epsilon\ln m > 0$, this shows $a_n \leq e^{-d(rk/N)}$ by \eqref{cubecase}.
\item $n>0$ and $rk < 2mN$. Because $m \geq 2$ and $n \geq 1$ by assumption, \eqref{cubecase} provides the inequality $a_n \leq 1/2$. Moreover, the assumption $rk < 2mN$ implies that
\[
e^{-2m} \leq e^{-rk/N}.
\]
Then we simply decrease $d$ to a small enough (positive) number that $1/2 \leq e^{-2md}$, to achieve the desired bound
\[
a_n \leq 1/2 \leq e^{-2md} \leq e^{-d(rk/N)}.
\]
\item $n=0$. We assume $r > 0$ because otherwise the entire proposition is trivial. Also, because
\[
\max(0, r+k-N) \leq n = 0,
\]
we know that $r+k \leq N$ so the factors below are all well defined. Then we bound as follows:
\[ \aligned
a_0 &= \binom{N}{k}^{-1} \binom{N-r}{k}\\
&= \prod_{j=0}^{k-1} \frac{N-r-j}{N-j}\\
&\leq \left(\frac{N-r}{N}\right)^k\\
&= \left[\left(1 - \frac{r}{N} \right)^{\frac{N}{r}}\right]^{rk/N}\\
&\leq e^{-rk/N}
\endaligned \]
Because we are free to assume $d \leq 1$, this completes the proof of Proposition \ref{tech}.
\end{enumerate}\end{proof}
\section{Introduction}
We consider only real Banach spaces. We start by fixing some notation. Given a metric space $M$ and a point $x$ in $M$, we denote by $B(x,r)$ the open ball in $M$ centered at $x$ of radius $r$. Let $X$ be a Banach space. We denote the closed unit ball, the unit sphere, and the dual space of $X$ by $B_X$, $S_X$, and $X^*$, respectively. A \emph{weak$^*$ slice} of $B_{X^*}$ is a set of the form
\[
S(B_{X^*},x, \alpha)\coloneqq \{x^*\in B_{X^*}\colon \langle x, x^*\rangle>1-\alpha\},
\]
where $x\in S_X$ and $\alpha>0$.
Let $M$ be a pointed metric space, that is, a metric space with a fixed point $0$. The space $\Lip_0(M)$ of all Lipschitz functions $f\colon M\to \mathbb{R}$ with $f(0)=0$ is a Banach space with the norm
\[
\|f\|_{\Lip} =\sup\left\{ \frac{|f(x)-f(y)|}{d(x,y)}\colon x,y\in M, x\neq y\right\}.
\]
The space
\[
\mathcal{F}(M)\coloneqq \overline{\spanop}\left\{\delta_m \colon m\in M \right\}\subset \Lip_0(M)^*
\]
is called the Lipschitz-free space over $M$, where $\delta_m\colon \Lip_0(M)\to \mathbb{R}$,
\[
\langle f, \delta_m\rangle=f(m),\qquad m\in M,\; f\in \Lip_0(M).
\]
It can be shown that, under this duality, $\mathcal{F}(M)^*$ is isometrically isomorphic to $\Lip_0(M)$.
Recall that the dual space $X^*$ is said to have the \emph{weak$^*$ strong diameter $2$ property} ($w^*$-SD$2$P) if every finite convex combination of weak$^*$ slices of $B_{X^*}$ has diameter $2$. It is well known that $X^*$ has the $w^*$-SD$2$P iff the norm of $X$ is octahedral (\cite{D}, \cite{G}; for a proof, see, e.g., \cite{HLP}). Therefore, $\Lip_0(M)$ has the $w^*$-SD$2$P iff the norm of $\mathcal{F}(M)$ is octahedral. Moreover, in \cite[Theorem 3.1]{PR}, it was shown that the norm of $\mathcal{F}(M)$ is octahedral iff the metric space $M$ has the following property.
\begin{Def}
A metric space $M$ is said to have the \emph{long trapezoid property} (LTP) if, for every finite subset $N$ of $M$ and $\varepsilon>0$, there exist $u,v\in M$, $u\neq v$, such that, for any $x,y\in N$,
\begin{equation}\label{eq: LTP}
(1-\varepsilon)\bigl(d(x,y)+d(u,v)\bigr)\leq d(x,u)+d(y,v).
\end{equation}
\end{Def}
\noindent
Therefore, the Lipschitz space $\Lip_0(M)$ has the $w^*$-SD$2$P iff $M$ has the LTP. The objective of this paper is to give a similar characterisation to the following property, which was introduced in \cite{ALN} but studied more extensively in \cite{ANP}, \cite{HLLN}, \cite{CCGMR}, and \cite{LR}.
\begin{Def}
A dual Banach space $X^*$ is said to have the \emph{weak$^*$ symmetric strong diameter $2$ property} ($w^*$-SSD$2$P) if, for every finite family $\{S_i\}_{i=1}^n$ of weak$^*$ slices of $B_{X^*}$ and $\varepsilon>0$, there exist $f_i\in S_i$, $i=1,\ldots,n$, and $g\in B_{X^*}$ such that $f_i\pm g\in S_i$ for every $i\in \{1,\ldots,n\}$ and $\|g\|>1-\varepsilon$.
\end{Def}
\noindent
It is known that in general the $w^*$-SSD$2$P is a strictly stronger property than the $w^*$-SD$2$P (see, e.g., \cite{HLLN}). In this paper, we show that the same is true for Lipschitz function spaces, thus giving an answer to \cite[Question 6.3]{HLLN}.
The paper is organised as follows.
In Section \ref{sec: 2}, we give a characterisation of the $w^*$-SSD$2$P for the Lipschitz space $\Lip_0(M)$ in terms of a property of the metric space $M$. More precisely, we prove Theorem \ref{main}, which says that $\Lip_0(M)$ has the $w^*$-SSD$2$P iff $M$ enjoys the following property.
\begin{Def}
We say that $M$ has the \emph{strong long trapezoid property} (SLTP) if, for every finite subset $N$ of $M$ and $\varepsilon>0$, there exist $u,v\in M$, $u\neq v$, such that, for any $x,y \in N$, the inequality \eqref{eq: LTP} holds, and, for any $x,y,z,w\in N$,
\begin{equation}\label{eq: SLTP}
\begin{aligned}
(1-\varepsilon)&\bigl(2d(u,v)+d(x,y)+d(z,w)\bigr)\\
&\qquad\qquad\leq d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{aligned}
\end{equation}
\end{Def}
In Section \ref{sec: 3}, we first apply Theorem \ref{main} to show that, for Lipschitz spaces, the $w^*$-SSD$2$P is a strictly stronger property than the $w^*$-SD$2$P: Example \ref{ex1} provides a metric space which has the LTP but not the SLTP.
A question that arises from the definition of the SLTP is whether the inequality \eqref{eq: SLTP} implies \eqref{eq: LTP}. Example \ref{ex2} shows that this is not the case: it provides a metric space $M$ for which \eqref{eq: SLTP} holds for every finite subset $N$, but which fails the LTP.
We finish the paper by showing that any infinite subset of $\ell_1$, viewed as a metric space, has the SLTP (Example \ref{ell1}).
\section{Main result}\label{sec: 2}
\begin{Teo}\label{main}
Let $M$ be a pointed metric space. The following statements are equivalent:
\begin{enumerate}[\upshape (i)]
\item $\Lip_0(M)$ has the $w^*$-SSD$2$P;
\item $M$ has the SLTP.
\end{enumerate}
\end{Teo}
\begin{proof}
(i)$\Rightarrow$(ii).
Assume that $\Lip_0(M)$ has the $w^*$-SSD$2$P,
and let $N$ be a finite subset of $M$ and $0<\varepsilon<1$.
Choose $\alpha>0$ such that $2\alpha<\varepsilon$ and, for any $x,y\in N$, $x\neq y$,
\[
\alpha\leq\frac{1}{d(x,y)}
\quad\text{and}\quad
2\alpha\leq d(x,y).
\]
For any $x,y\in N$, $x\neq y$,
define a slice $S_{x,y}:=S\left(B_{\Lip_0(M)}, \frac{\delta_x-\delta_y}{d(x,y)}, \alpha^3\right)$.
Since $\Lip_0(M)$ has the $w^*$-SSD$2$P, we can find $f_{x,y}\in S_{x,y}$ and $g\in B_{\Lip_0(M)}$, $\|g\|\geq 1-\alpha$, such that $\|f_{x,y}\pm g\|\leq 1$. For $x,y\in N$, $x=y$, define $f_{x,y}:=0\in\Lip_0(M)$.
For any $x,y\in N$,
\begin{align*}
\langle f_{x,y}, \delta_x-\delta_y \rangle
=f_{x,y}(x)-f_{x,y}(y)
\geq(1-\alpha^3)d(x,y),
\end{align*}
therefore, keeping in mind that $\|f_{x,y}\pm g\|\leq1$,
\[
|\langle g, \delta_x-\delta_y \rangle|=|g(x)-g(y)|\leq \alpha^3 d(x,y)\leq \alpha^2.
\]
Since $\|g\|\geq 1-\alpha$, there exist $u,v\in M$, $u\neq v$, such that
\[
\langle g, \delta_u-\delta_v \rangle
=g(u)-g(v)\geq\left(1-\alpha\right)d(u,v).
\]
Now, for any $x,y\in N$, again using that $\|f_{x,y}\pm g\|\leq1$,
\begin{align*}
|\langle f_{x,y},\delta_u-\delta_v\rangle|=|f_{x,y}(u)-f_{x,y}(v)|\leq \alpha d(u,v).
\end{align*}
Letting $x,y,z,w\in N$ be arbitrary, it remains to verify \eqref{eq: LTP} and \eqref{eq: SLTP}.
Since $\|f_{x,y}\pm g\|\leq 1$, we get
\begin{align*}
(1-\varepsilon)&\bigl(d(u,v)+d(x,y)\bigr)\\
&\leq (1-2\alpha)d(u,v)+(1-2\alpha^3)d(x,y)\\
&\leq \langle g,\delta_u-\delta_v\rangle - \langle f_{x,y},\delta_u-\delta_v\rangle
+\langle f_{x,y}, \delta_x-\delta_y\rangle - \langle g, \delta_x-\delta_y\rangle\\
&=\langle g-f_{x,y},\delta_u-\delta_x\rangle-\langle g-f_{x,y}, \delta_v-\delta_y\rangle\\
&\leq d(x,u)+d(y,v).
\end{align*}
Thus, \eqref{eq: LTP} holds.
If $x=y$ and $z=w$, then \eqref{eq: SLTP} follows from \eqref{eq: LTP} with $y$ replaced by $z$.
If $x\neq y$ or $z\neq w$, then
\[
\alpha\bigl(d(x,y)+d(z,w)\bigr)
\geq 2\alpha^2
\geq|\langle g, \delta_z-\delta_x+\delta_w-\delta_y \rangle|,
\]
and thus, since $\|f_{x,y}\pm g\|\leq 1$,
\begin{align*}
(1-\varepsilon)&\bigl(2d(u,v)+d(x,y)+d(z,w)\bigr)\\
&\leq 2\bigl(g(u)-g(v)\bigr)+(1-\alpha^3-\alpha)\bigl(d(x,y)+d(z,w)\bigr)\\
&\leq 2\langle g, \delta_u-\delta_v \rangle + \langle f_{x,y}, \delta_x-\delta_y \rangle
+\langle f_{z,w}, \delta_z-\delta_w \rangle\\
&\qquad +\langle g, \delta_z-\delta_x+\delta_w-\delta_y \rangle\\
&=\langle g-f_{x,y}, \delta_u-\delta_x \rangle + \langle g+f_{x,y}, \delta_u-\delta_y \rangle\\
&\qquad -\langle g+f_{z,w}, \delta_v-\delta_z \rangle - \langle g-f_{z,w}, \delta_v-\delta_w \rangle\\
&\leq d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{align*}
\medskip
(ii)$\Rightarrow$(i).
Assume that $M$ has the SLTP. Let $n\in\mathbb{N}$, let $S_i:=S(B_{\Lip_0(M)}, \mu_i,\alpha_i)$, $i=1,\dotsc,n$,
be weak$^\ast$ slices of $B_{\Lip_0(M)}$, and let $0<\varepsilon<1$.
It suffices to find $f_i\in S_i$, $i=1,\dotsc,n$, and $g\in B_{\Lip_0(M)}$ with $\|g\|\geq(1-\varepsilon)^2$ such that
$\|f_i\pm g\|\leq1$ for every $i\in\{1,\dotsc,n\}$.
We may assume that, for every $i\in\{1,\dotsc,n\}$, one has $\mu_i=\sum_{j=1}^{n_i}\lambda_{ij}\delta_{x_{ij}}$
for some $n_i\in\mathbb{N}$, $\lambda_{ij}\in\mathbb{R}\setminus\{0\}$, and $x_{ij}\in M$, $j=1,\dotsc,n_i$.
Now $N:=\{0\}\cup\bigcup_{i=1}^{n}\{x_{i1},\dotsc,x_{in_i}\}$ is a finite subset of $M$.
We may also assume that $\varepsilon<\min_{1\leq i\leq n}\alpha_i$.
This enables us, for every $i\in\{1,\dotsc,n\}$, to pick an $h_i\in S_i$ with $\|h_i\|<1-\varepsilon$.
By the SLTP, there exist $u,v\in M$, $u\neq v$, satisfying \eqref{eq: LTP} and \eqref{eq: SLTP} for all $x,y,z,w\in N$.
Setting
\[
r_0:=\frac{1}{2}\min_{x,y\in N}\bigl(d(x,u)+d(y,u)-(1-\varepsilon)d(x,y)\bigr)
\]
and
\[
s_0:=\frac{1}{2}\min_{z,w\in N}\bigl(d(z,v)+d(w,v)-(1-\varepsilon)d(z,w)\bigr),
\]
one has $r_0+s_0\geq(1-\varepsilon)d(u,v)$. Thus, there exist $r,s\geq 0$ with $r \leq r_0$ and $s\leq s_0$ such that
\[
r+s=(1-\varepsilon)^2 d(u,v).
\]
We may assume that $r>0$. Define a function $g\colon M\to\mathbb{R}$ by
\[
g(x):=
\begin{cases}
r-d(x,u)&\quad\text{if $x\in B(u,r)$;}\\
-s+d(x,v)&\quad\text{if $x\in B(v,s)$;}\\
0&\quad\text{otherwise}
\end{cases}
\]
(we use the convention $B(v,s)=\emptyset$ if $s=0$). Observe that $\|g\|\leq 1$
(here we use that, whenever $x\in B(u,r)$ and $y\in B(v,s)$, one has $g(y)\leq 0 \leq g(x)$, and thus $|g(x)-g(y)|=g(x)-g(y)$).
One also has $\|g\|\geq(1-\varepsilon)^2$, because
\[
|g(u)-g(v)|=g(u)-g(v)=r+s=(1-\varepsilon)^2 d(u,v).
\]
Set $L:=N\cup B$ where $B:=B(u,r)\cup B(v,s)$.
We next show that, for every $i\in\{1,\dotsc,n\}$, there is a $c_i\in\mathbb{R}$ such that,
defining a function $f_i\colon L\to\mathbb{R}$ by $f_i|_N= h_i|_N$ and $f_i|_{B}=c_i$ (observe that $B\cap N=\emptyset$), one has
$\|f_i\pm g\|_{\Lip_0(L)}\leq 1$ and $\|f_i\pm |g|\|_{\Lip_0(L)}\leq 1$.
Let $i\in\{1,\dotsc,n\}$. Set
\begin{alignat*}{2}
\widecheck{a}_i&:=\max_{x\in N}\bigl( h_i(x)-d(x,u)\bigr),\quad
&\widehat{a}_i&:=\min_{x\in N}\bigl( h_i(x)+d(x,u)\bigr),\\
\widecheck{b}_i&:=\max_{x\in N}\bigl( h_i(x)-d(x,v)\bigr),\quad
&\widehat{b}_i&:=\min_{x\in N}\bigl( h_i(x)+d(x,v)\bigr).
\end{alignat*}
Whenever $x,y\in N$, since $\| h_i\|<1-\varepsilon$, one has
\[
h_i(x)+d(x,u)-\bigl( h_i(y)-d(y,u)\bigr)\geq d(x,u)+d(y,u)-(1-\varepsilon)d(x,y)\geq 2r,
\]
and, by \eqref{eq: LTP},
\begin{align*}
h_i(x)+d(x,u)-\bigl( h_i(y)-d(y,v)\bigr)
&\geq d(x,u)+d(y,v)-(1-\varepsilon)d(x,y)\\
&\geq (1-\varepsilon)d(u,v)> r+s.
\end{align*}
Thus, $\widehat{a}_i-r\geq \widecheck{a}_i+r$ and $\widehat{a}_i-r> \widecheck{b}_i+s$.
Similarly, one observes that $\widehat{b}_i-s\geq \widecheck{b}_i+s$
and $\widehat{b}_i-s > \widecheck{a}_i+r$.
It follows that there exists a
$c_i\in\bigl[\widecheck{a}_i+r,\widehat{a}_i-r\bigr]\cap\bigl[\widecheck{b}_i+s,\widehat{b}_i-s\bigr]$.
This $c_i$ does the job.
Indeed, let $x\in N$ and $y\in B(u,r)$.
In order to see that
\[
\bigl|f_i(x)\pm g(x)-\bigl(f_i(y)\pm g(y)\bigr)\bigr|
=\Bigl| h_i(x)-\Bigl(c_i\pm \bigl(r-d(y,u)\bigr)\Bigr)\Bigr|
\leq d(x,y),
\]
it suffices to show that
\begin{equation}\label{eq: ...=<C_i-+A=<...}
h_i(x)-d(x,y)\pm d(y,u)
\leq c_i\pm r
\leq h_i(x)+d(x,y)\pm d(y,u).
\end{equation}
These inequalities hold:
\begin{align*}
h_i(x)-d(x,y)-d(y,u)
&\leq h_i(x)-d(x,u)\\
&\leq\widecheck{a}_i\leq c_i-r\leq\widehat{a}_i-2r\\
&\leq h_i(x)+d(x,u)-2d(y,u)\\
&\leq h_i(x)+d(x,y)-d(y,u)
\end{align*}
and
\begin{align*}
h_i(x)-d(x,y)+d(y,u)
&\leq h_i(x)-d(x,u)+2d(y,u)\\
&\leq\widecheck{a}_i+2r\leq c_i+r\leq\widehat{a}_i\\
&\leq h_i(x)+d(x,u)\\
&\leq h_i(x)+d(x,y)+d(y,u).
\end{align*}
The inequalities
\[
\bigl|f_i(x)\pm|g(x)|-\bigl(f_i(y)\pm |g(y)|\bigr)\bigr|
=\Bigl| h_i(x)-\Bigl(c_i\pm \bigl(r-d(y,u)\bigr)\Bigr)\Bigr|
\leq d(x,y)
\]
follow from \eqref{eq: ...=<C_i-+A=<...}.
For every $i\in\{1,\dotsc,n\}$, we extend $f_i$ to the entire space $M$ by setting
\[
f_i(y):=\sup_{x\in L}\bigl(f_i(x)+|g(x)|-d(x,y)\bigr)
\quad\text{for every $y\in M\setminus L$.}
\]
Note that, on $M\setminus L$, the function $f_i$ agrees with a norm preserving extension of $(f_i+|g|)|_L$.
It remains to show that $\|f_i\pm g\|_{\Lip_0(M)}\leq1$.
Indeed, this implies that also $\|f_i\|_{\Lip_0(M)}\leq 1$, and thus $f_i\in S_i$,
because, since $f_i|_N= h_i|_N$, one has $\langle\mu_i, f_i\rangle= \langle \mu_i, h_i\rangle>1-\alpha_i$.
Let $i\in\{1,\dotsc,n\}$.
To see that $\|f_i\pm g\|_{\Lip_0(M)}\leq1$, it suffices to show that,
whenever $x,y\in M$, one has
\begin{equation}\label{eq: -d(x,y)=<f_i(x)+-g(x)-(f_i(y)+-g(y))=<d(x,y)}
-d(x,y)\leq f_i(x)\pm g(x)-\bigl(f_i(y)\pm g(y)\bigr)\leq d(x,y).
\end{equation}
For the cases when $x,y\in L$ or $x,y\in M\setminus L$,
or $x\in N$ (or $y\in N$) and $y\in M\setminus L$ (or $x\in M\setminus L$),
the inequalities \eqref{eq: -d(x,y)=<f_i(x)+-g(x)-(f_i(y)+-g(y))=<d(x,y)}
follow from what has been proven above.
So, in fact, it suffices to consider the case when $x\in B(u,r)\cup B(v,s)$ and $y\in M\setminus L$.
In this case, \eqref{eq: -d(x,y)=<f_i(x)+-g(x)-(f_i(y)+-g(y))=<d(x,y)} means that
\[
-d(x,y)\leq c_i\pm g(x)-\sup_{z\in L}\bigl(f_i(z)+|g(z)|-d(z,y)\bigr)\leq d(x,y).
\]
Thus, it suffices to show that
\begin{enumerate}
\item
there is a $z\in L$ such that
\[
c_i\pm g(x)-d(x,y)+d(z,y)\leq f_i(z)+|g(z)|;
\]
\item
for every $z\in L$,
\[
f_i(z)+|g(z)|\leq c_i\pm g(x)+d(x,y)+d(z,y).
\]
\end{enumerate}
For (1), one may take $z=x$, so it remains to prove (2).
By symmetry, it suffices to consider only the case when $x\in B(u,r)$.
In this case $g(x)=r-d(x,u)\geq0$. Thus, it suffices to prove that, for every $z\in L$,
\begin{equation*}\label{eq: what one must show when x in B(u,A)}
f_i(z)+|g(z)|\leq c_i-r +d(x,u)+d(x,y)+d(z,y).
\end{equation*}
One has to look through the following cases:
\[
(\text{a})\quad z\in B(u,r);\qquad\qquad
(\text{b})\quad z\in B(v,s);\qquad\qquad
(\text{c})\quad z\in N.
\]
(a).
If $z\in B(u,r)$, then $f_i(z)=c_i$ and $|g(z)|=r-d(z,u)$.
Thus, one has to show that
\[
2r\leq d(x,u)+d(z,u)+d(x,y)+d(z,y).
\]
This inequality holds, because, since $y\notin B(u,r)$, one has $d(y,u)\geq r$, and thus
\[
2r\leq d(y,u)+d(y,u)\leq d(x,u)+d(x,y)+d(z,u)+d(z,y).
\]
(b).
If $z\in B(v,s)$, then $f_i(z)=c_i$ and $|g(z)|=s-d(z,v)$.
Thus, one has to show that
\[
r+s\leq d(x,u)+d(z,v)+d(x,y)+d(z,y).
\]
This inequality holds, because, since $y\notin B(u,r)$ and $y\notin B(v,s)$,
one has $d(y,u)\geq r$ and $d(y,v)\geq s$, and thus
\[
r+s\leq d(y,u)+d(y,v)\leq d(x,u)+d(x,y)+d(z,v)+d(z,y).
\]
(c).
If $z\in N$, then
\begin{align*}
f_i(z)+|g(z)|&=f_i(z)= h_i(z)\leq\widecheck{a}_i+d(z,u)\\
&\leq c_i-r+d(x,u)+d(x,y)+d(z,y).
\end{align*}
\end{proof}
\section{Examples}\label{sec: 3}
We now give an example of a metric space $M$ that has the LTP but fails the SLTP. By \cite[Theorem 3.1]{PR} and Theorem \ref{main}, this implies that the corresponding Lipschitz space $\Lip_0(M)$ has the $w^*$-SD$2$P but fails the $w^*$-SSD$2$P.
\begin{Ex}\label{ex1}
Let $M=\{a_1, a_2, b_1, b_2\}\cup \{u_i,v_i\colon i\in \mathbb{N}\}$ be a metric space where the distances between different points are defined as follows: for any $i\in \{1,2\}$, $j,k,l\in \mathbb{N}$, $k\neq l,$
\begin{align*}
d(a_1,a_2)&=d(b_1,b_2)=d(a_i,v_j)=d(b_i,u_j)\\
&=d(u_k,u_l)=d(v_k,v_l)=d(u_k,v_l)=2
\end{align*}
and, for any $i,j\in \{1,2\}$, $k\in \mathbb{N}$,
\begin{align*}
d(a_i,b_j)=d(a_i,u_k)=d(b_i,v_k)=d(u_k,v_k)=1.
\end{align*}
\begin{comment}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[line width=0.5mm] (4,1) -- (2, 6) -- (6, 6) -- (8,1) -- (0,3) -- (2,6) -- (6,6) -- (4,3) -- cycle;
\draw[line width=0.5mm] (4,1) -- (8,1);
\draw[line width=0.5mm] (4, 3) -- (0, 3);
\draw[gray, line width=0.25mm] (4,1) -- (0,3);
\draw[gray, line width=0.25mm] (8,1) -- (4,3);
\draw[gray, line width=0.25mm] (8,1) -- (2, 6);
\draw[gray, line width=0.25mm] (4,1) -- (6, 6);
\draw[gray, dashed, line width=0.25mm] (4,3) -- (2,6);
\draw[gray, dashed, line width=0.25mm] (0,3) -- (6,6);
\draw (0,3) node[left] {$a_1$};
\draw (4,1) node[below left] {$a_2$};
\draw (4,3) node[right] {$b_1$};
\draw (8,1) node[below right] {$b_2$};
\draw (2,6) node[left] {$u_k$};
\draw (6,6) node[right] {$v_k$};
\end{tikzpicture}
\caption{A representation of the metric space $M$ in Example \ref{ex1}. The distance between two points corresponds to the thickness of the lines. Thicker lines -- distance is equal to $1$, thinner lines -- distance is equal to $2$.
The distances between points connected by a straight line segment are $1$, the distances between other different points are $2$.}
\label{fig1}
\end{figure}
\end{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\draw (1,4) node[above left] {$a_1$};
\draw (1,0) node[below left] {$a_2$};
\draw (3,4) node[above right] {$b_1$};
\draw (3,0) node[below right] {$b_2$};
\draw (0,3) node[left] {$u_k$};
\draw (0,1) node[left] {$u_l$};
\draw (4,3) node[right] {$v_k$};
\draw (4,1) node[right] {$v_l$};
\draw (1,4) -- (3,4);
\draw (1,4) -- (3,0);
\draw (1,4) -- (0,3);
\draw (1,4) -- (0,1);
\draw (1,0) -- (3,4);
\draw (1,0) -- (3,0);
\draw (1,0) -- (0,3);
\draw (1,0) -- (0,1);
\draw (3,4) -- (4,3);
\draw (3,4) -- (4,1);
\draw (3,0) -- (4,3);
\draw (3,0) -- (4,1);
\draw (0,3) -- (4,3);
\draw (0,1) -- (4,1);
\end{tikzpicture}
\caption{A representation of the metric space $M$ in Example \ref{ex1}. The distances between points connected by a straight line segment are $1$, the distances between other different points are $2$.}
\label{fig1}
\end{figure}
We first show that $M$ has the LTP. Letting $N$ be a finite subset of $M$ and $i\in \mathbb{N}$ be such that $u_i,v_i\in M\setminus N$, it suffices to show that, for any $x,y\in N$,
\[
d(x,y)+d(u_i,v_i)=d(x,y)+1\leq d(x,u_i)+d(y,v_i).
\]
To this end, letting $x,y\in M\setminus \{u_i,v_i\}$ be such that $d(x,y)=2$, it suffices to show that
\[
d(x,u_i)+d(y,v_i)\geq 3.
\]
For this, notice that if $x\in \{a_1, a_2\}$, then either $y\in \{a_1,a_2\}$ or $y\in \{v_j\colon j\in \mathbb{N}\setminus\{i\} \}$, but in both of these cases $d(y,v_i)=2$ and $d(x,u_i)=1$; if $x\in \{b_1,b_2\}\cup \{u_j,v_j\colon j\in \mathbb{N}\setminus\{i\}\}$, then $d(x,u_i)=2$ and $d(y,v_i)\geq 1$.
It remains to show that $M$ fails the SLTP. Take $N\coloneqq \{a_1,a_2,b_1,b_2\}$. Then, for any $u,v\in M$, $u\neq v$, there exist $x,y,z,w\in N$ such that
\[
2d(u,v)+d(x,y)+d(z,w)\geq d(x,u)+d(y,u)+d(z,v)+d(w,v)+1.
\]
Indeed, set $U\coloneqq \{u_i\colon i\in \mathbb{N}\}$ and $V\coloneqq\{v_i\colon i\in \mathbb{N}\}$, and suppose that $u,v\in M$, $u\neq v$.
If $u,v\in U$ or $u,v\in V$, then, respectively, for $x=z=a_1$, $y=w=a_2$, and for $x=z=b_1$, $y=w=b_2$,
\begin{align*}
2d(u,v)+d(x,y)+d(z,w)&=8>4\\
&=d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{align*}
If $u\in U$ and $v\in V$, or $u\in V$ and $v\in U$, then, respectively, for $x=a_1$, $y=a_2$, $z=b_1$, $w=b_2$, and for $x=b_1$, $y=b_2$, $z=a_1$, $w=a_2$,
\begin{align*}
2d(u,v)+d(x,y)+d(z,w)&\geq 6>4\\
&=d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{align*}
Finally, if $u\in N$ or $v\in N$, then, respectively, for $x=y=u$ and $z,w\in N$ with $d(z,w)=2$ and $d(z,v)=d(w,v)=1$, and for $z=w=v$ and $x,y\in N$ with $d(x,y)=2$ and $d(x,u)=d(y,u)=1$,
\begin{align*}
2d(u,v)+d(x,y)+d(z,w)&\geq 4>2\\
&=d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{align*}
\end{Ex}
The following example shows that the inequality \eqref{eq: SLTP} in the definition of the SLTP does not imply \eqref{eq: LTP}.
\begin{Ex}\label{ex2}
Let $M=\{a,b\}\cup \{u_i,v_i\colon i\in \mathbb{N}\}$ be a metric space where the distances between different points are defined as follows: for any $i, j\in \mathbb{N}$, $i\neq j$,
\begin{align*}
d(a,b)=d(a,v_i)=d(b, u_i)=d(u_i,u_j)=d(v_i,v_j)=d(u_i,v_j)=2
\end{align*}
and, for any $i\in \mathbb{N}$,
\begin{align*}
d(a,u_i)=d(b,v_i)=d(u_i,v_i)=1.
\end{align*}
\begin{comment}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5]
\draw[line width=0.5mm] (0,0) -- (1,2) -- (3,2)--(4,0);
\draw[gray, line width=0.01mm] (0,0) -- (4,0);
\draw[gray, line width=0.01mm] (0,0) -- (3,2);
\draw[gray, line width=0.01mm] (1,2) -- (4,0);
\draw (0,0) node[left] {$a$};
\draw (4,0) node[right] {$b$};
\draw (1,2) node[left] {$u_i$};
\draw (3,2) node[right] {$v_i$};
\end{tikzpicture}
\caption{A representation of the metric space $M$ in Example \ref{ex2}. The distance between two points corresponds to the thickness of the lines. Thicker lines -- distance is equal to $1$, thinner lines -- distance is equal to $2$.}
\label{fig2}
\end{figure}
\end{comment}
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=1]
\draw (0,2) node[left] {$a$};
\draw (4,2) node[right] {$b$};
\draw (1,4) node[above left] {$u_k$};
\draw (1,0) node[below left] {$u_l$};
\draw (3,4) node[above right] {$v_k$};
\draw (3,0) node[below right] {$v_l$};
\draw (0,2) -- (1,4);
\draw (0,2) -- (1,0);
\draw (4,2) -- (3,4);
\draw (4,2) -- (3,0);
\draw (1,4) -- (3,4);
\draw (1,0) -- (3,0);
\end{tikzpicture}
\caption{A representation of the metric space $M$ in Example \ref{ex2}. The distances between points connected by a straight line segment are $1$, the distances between other different points are $2$.}
\label{fig2}
\end{figure}
For any finite subset $N$ of $M$, we can find an $i\in \mathbb{N}$ such that $u_i,v_i\in M\setminus N$. We first show that, for any $x,y,z,w\in N$,
\begin{align*}
d(x,y)+d(z,w)+2d(u_i,v_i)&=d(x,y)+d(z,w)+2\\
&\leq d(x,u_i)+d(y,u_i)+d(z,v_i)+d(w,v_i).
\end{align*}
By symmetry it suffices to show that, for any $x,y\in M\setminus \{u_i, v_i\}$,
\[
d(x,y)+1\leq d(x,u_i)+d(y,u_i).
\]
This inequality holds trivially if $d(x,u_i)+d(y,u_i)\geq 3$. It remains to note that if $d(x,u_i)+d(y,u_i)<3$, then $d(x,u_i)=d(y,u_i)=1$. Thus, $x=y=a$, and the desired inequality trivially holds.
We now show that $M$ does not have the LTP. Take $N\coloneqq \{a,b\}$. Then, for any $u,v\in M$, $u\neq v$, there exist $x,y\in N$ such that
\[
d(x,y)+d(u,v)\geq d(x,u)+d(y,v)+1.
\]
Indeed, set $U\coloneqq \{u_i\colon i\in \mathbb{N}\}$ and $V\coloneqq\{v_i\colon i\in \mathbb{N}\}$, and suppose that $u,v\in M$, $u\neq v$.
If $u,v\in U$ or $u,v\in V$, then, for $x=a$, $y=b$,
\begin{align*}
d(x,y)+d(u,v)= 4\geq 3=d(x,u)+d(y,v).
\end{align*}
If $u\in U$ and $v\in V$, or $u\in V$ and $v\in U$, then, respectively, for $x=a$, $y=b$, and for $x=b$, $y=a$,
\begin{align*}
d(x,y)+d(u,v)\geq 3>2=d(x,u)+d(y,v).
\end{align*}
Finally, if $u\in N$ or $v\in N$, then, respectively, for $x=u$, $y\in N\setminus \{x\}$, and for $y=v$, $x\in N\setminus\{y\}$,
\begin{align*}
d(x,y)+d(u,v)\geq 3>2\geq d(x,u)+d(y,v).
\end{align*}
\end{Ex}
In \cite[Proposition 4.7]{PR} it was shown that every infinite subset $M$ of $\ell_1$, viewed as a metric space, has the LTP. It turns out that every such $M$ in fact has the SLTP.
\begin{Ex}\label{ell1}
Every infinite subset $M$ of $\ell_1$, viewed as a metric space, has the SLTP.
Indeed, from \cite[Theorem 5.6]{HLLN} combined with our Theorem \ref{main} it follows that every unbounded metric space and every metric space $M$ with the property that $\inf\{d(x,y)\colon x,y\in M, x\neq y\}=0$ has the SLTP (this can also, without too much effort, be verified directly). Thus it suffices to consider the case when $M$ is a bounded and uniformly discrete subset of $\ell_1$. In this case there exist $R,r>0$ such that for any $x,y\in M$, $x\neq y$,
\[
r< d(x,y)< R.
\]
Let $N$ be a finite subset of $M$ and let $\varepsilon>0$. Choose $\delta >0$ such that $\varepsilon r\geq 6\delta$. Since $N$ is finite, there exists an $n\in \mathbb{N}$ such that for any $x=(x_i)\in N$
\[
\sum_{i> n} |x_i|\leq \delta.
\]
Since $M$ is infinite and bounded, there exist $u=(u_i),v=(v_i)\in M$, $u\neq v$, such that
\[
\sum_{i\leq n} |u_i-v_i|\leq \delta.
\]
For any $x=(x_i),y=(y_i)\in N$ and $a=(a_i), b=(b_i)\in \{u,v\}$,
\begin{align*}
\sum_i |x_i-y_i|&\leq \sum_{i\leq n} \bigr(|x_i-a_i|+|y_i-b_i|+|a_i-b_i|\bigr)+\sum_{i>n} |x_i-y_i| \\
&\leq \sum_{i\leq n}\bigl(|x_i-a_i|+|y_i-b_i|\bigr)+3\delta
\end{align*}
and
\begin{align*}
\sum_i |u_i-v_i|&\leq \sum_{i> n} |u_i-v_i-x_i+y_i|+\sum_{i> n}|x_i-y_i|+\sum_{i\leq n}|u_i-v_i|\\
&\leq \sum_{i> n}\bigl(|x_i-u_i|+|y_i-v_i|\bigr)+3\delta.
\end{align*}
Therefore, for any $x=(x_i), y=(y_i), z=(z_i), w=(w_i)\in N$
\begin{align*}
(1-\varepsilon)(d(x,y)+d(u,v))&\leq d(x,y)+d(u,v)-6\delta\\
&=\sum_{i} |x_i-y_i|+\sum_i|u_i-v_i|-6\delta\\
&\leq \sum_{i\leq n} \bigr(|x_i-u_i|+|y_i-v_i|\bigr) +3\delta\\
&\qquad +\sum_{i>n}\bigr(|x_i-u_i|+|y_i-v_i|\bigr)+3\delta-6\delta\\
&=\sum_i\bigr(|x_i-u_i|+|y_i-v_i|\bigr)\\
&=d(x,u)+d(y,v)
\end{align*}
and
\begin{align*}
(1-&\varepsilon)\bigl(2d(u,v)+d(x,y)+d(z,w)\bigr)\\
&\leq 2d(u,v)+d(x,y)+d(z,w)-12\delta\\
&=2\sum_i|u_i-v_i|+ \sum_{i} |x_i-y_i|+\sum_i|z_i-w_i|-12\delta\\
&\leq \sum_{i>n}\bigr(|x_i-u_i|+|z_i-v_i|+|y_i-u_i|+|w_i-v_i|\bigr)+6\delta\\
&\qquad+\sum_{i\leq n}\bigr(|x_i-u_i|+|y_i-u_i|+|z_i-v_i|+|w_i-v_i|\bigr)+6\delta-12\delta\\
&=\sum_i\bigl(|x_i-u_i|+|y_i-u_i|+|z_i-v_i|+|w_i-v_i|\bigr)\\
&= d(x,u)+d(y,u)+d(z,v)+d(w,v).
\end{align*}
\end{Ex}
\section*{Acknowledgments}
The paper is a part of a Ph.D. thesis being prepared by the author at the University of Tartu under the supervision of Rainis Haller and Märt Põldvere.
The author is grateful to his supervisors for their valuable help.
This research was partially supported by institutional
research funding IUT20-57 of the Estonian Ministry of Education and
Research.
\addcontentsline{toc}{section}{References}
% arXiv:1908.10348 (math.FA): "Characterisation of the weak-star symmetric strong diameter 2 property in Lipschitz spaces"
% Abstract: We give a characterisation of the weak* symmetric strong diameter 2 property for Lipschitz function spaces in terms of a property of the corresponding metric space. Using this characterisation we show that the weak* symmetric strong diameter 2 property is different from the weak* strong diameter 2 property in Lipschitz spaces, thereby answering a question posed in a recent paper by Haller, Langemets, Lima, and Nadel.
% arXiv:2109.01735: "Enumerating $k$-Naples Parking Functions Through Catalan Objects"
% Abstract: This paper studies a generalization of parking functions named $k$-Naples parking functions, where backward movement is allowed. One consequence of backward movement is that the number of ascending $k$-Naples is not the same as the number of descending $k$-Naples. This paper focuses on generalizing the bijections of ascending parking functions with combinatorial objects enumerated by the Catalan numbers in the setting of both ascending and descending $k$-Naples parking functions. These combinatorial objects include Dyck paths, binary trees, triangulations of polygons, and non-crossing partitions. Using these bijections, we enumerate both ascending and descending $k$-Naples parking functions.
\section{Introduction}\label{sec:Introduction}
Parking functions are special types of integer sequences that were proposed independently by Ronald Pyke \cite{Pyke} as well as by Alan Konheim and Benjamin Weiss \cite{KonheimAndWeiss} in order to study hashing problems in computer science.
If we have a sequence of $n$ integers all belonging to $[n]:=\{1,2,\dots, n\}$, we call it a \textbf{parking preference} of length $n$.
A \textbf{parking function} of length $n$ is a special type of parking preference $(a_1,a_2,\dots,a_n)$, that allows $n$ cars $c_1,c_2,\dots,c_n$ with respective preferences $a_1,a_2,\dots,a_n$ to park in a one-way street with $n$ consecutively ordered parking spots according to the following rules:
\begin{enumerate}
\item $c_1$ parks in its preferred spot;
\item Every new car parks in its preferred spot if it is not occupied, otherwise it parks in the next available spot.
\end{enumerate}
For example, the parking preference $(2,2,1,4)$ is a parking function of length 4, where $c_1$ parks in the second spot, $c_2$ in the third, $c_3$ in the first, and $c_4$ in the fourth.
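The parking rule above is easy to simulate directly. The following minimal sketch (the function name `park` and its return convention are ours, not from the paper) drives each car forward from its preferred spot and reports where everyone ends up:

```python
def park(prefs):
    """Simulate the one-way street: car i drives to spot prefs[i] and,
    if it is occupied, continues forward to the first free spot.
    Returns the spot taken by each car (1-indexed, in car order),
    or None if some car drives past the last spot without parking."""
    n = len(prefs)
    taken = set()
    spots = []
    for p in prefs:
        s = p
        while s <= n and s in taken:
            s += 1  # move forward past occupied spots
        if s > n:
            return None  # prefs is not a parking function
        taken.add(s)
        spots.append(s)
    return spots
```

For the example above, `park([2, 2, 1, 4])` returns `[2, 3, 1, 4]`, matching the parking order described in the text, while a preference such as `(2, 2, 2, 4)` fails because spot $1$ is never used.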
Special subsets of parking functions are the monotonic ones, that correspond to \textit{ascending} (weakly increasing) or \textit{descending} (weakly decreasing) parking preferences.
Both the set of ascending parking functions of length $n$ and the set of descending parking functions of length $n$ are counted by the Catalan numbers.
There are many well-known bijections between either of these subsets of parking functions and a variety of Catalan objects.
Figure \ref{fig:CatalanObjects} illustrates some of these Catalan objects that correspond to the ascending parking function $(1,2,2,4)$, i.e., the ascending rearrangement of the parking function $(2,2,1,4)$, which include Dyck paths, binary trees, triangulations of $n$-gons, and non-crossing partitions of the set $[n]$.
We remark that the fact that the numbers of ascending and descending parking functions agree follows from the fact that if a given parking preference is a parking function, then so are all of its rearrangements.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{scope}[scale=0.5, shift=(-90:2.5)]
\draw[blue, very thick] (0,0) to (8,0);
\draw[very thick] (0,0) to (1,1) to (2,0) to (4,2) to (6,0) to (7,1) to (8,0);
\fill (0,0) circle (4pt);
\fill (1,1) circle (4pt);
\fill (2,0) circle (4pt);
\fill (3,1) circle (4pt);
\fill (4,2) circle (4pt);
\fill (5,1) circle (4pt);
\fill (6,0) circle (4pt);
\fill (7,1) circle (4pt);
\fill (8,0) circle (4pt);
\end{scope}
\begin{scope}[shift=(0:5.5), scale=0.75]
\draw[very thick]
(0,0) to (2,-2)
(1,-1) to (0,-2)
;
\fill (0,0) circle (3pt);
\fill (1,-1) circle (3pt);
\fill (2,-2) circle (3pt);
\fill (0,-2) circle (3pt);
\end{scope}
\begin{scope}[shift=($(-90:3)+(0:2)$),scale=.5]
\draw[very thick](0:2) to (60:2) to (120:2) to (180:2) to (-120:2) to (-60:2) to (0:2);
\draw[line width=1.5pt, red] (60:2) to (120:2);
\draw[very thick](60:1.95) to (180:1.95) to (-60:1.95) to (60:1.95)
;
\end{scope}
\begin{scope}[shift=($(-90:3)+(0:6)$), scale=.5]
\draw[very thick] (1,1) to (-1,-1);
\fill (1,1) circle (4pt) node[above right]{\Large $2$};
\fill (-1,1) circle (4pt) node[above left]{\Large $1$};
\fill (-1,-1) circle(4pt) node[below left]{\Large $3$};
\fill (1,-1) circle (4pt) node[below right]{\Large $4$};
\end{scope}
\end{tikzpicture}
\caption{Catalan objects corresponding to the parking function $(1,2,2,4)$.}
\label{fig:CatalanObjects}
\end{figure}
Many generalizations of parking functions have been proposed throughout the years, including allowing cars of different lengths and starting with some spots already filled; a reader interested in exploring the different directions may find \cite{parkingadventure} a useful resource.
In this paper, we focus our attention
on the generalization known as \textbf{Naples parking functions}.
First proposed in \cite{BaumgardnerHonors}, Naples parking functions differ from parking functions in that a car first drives to its preferred spot; if that spot is already occupied, it checks the spot directly behind, parking there if it is available and moving forward otherwise.
Similarly, a \textbf{$k$-Naples parking function} allows cars to check up to $k$ spots preceding their preferred spot, in decreasing order, before moving forward.
For example, the parking preference $(6,6,6,5,5,2,1)$ is a $2$-Naples parking function as the cars $c_1,\dots, c_7$ park in positions $6,5,4,3,7,2,1$, respectively, with the second car moving back once and the third and fourth cars moving back twice since both their preferred spot and the one directly behind it are taken.
We see this is not a $1$-Naples parking function as car $c_5$ cannot park in that setting.
Also, we remark that we refer to $1$-Naples parking functions simply as Naples parking functions.
Similarly, $0$-Naples parking functions are just the set of parking functions.
However, unless otherwise specified, we adopt the convention that $k\geq 1$ for all $k$-Naples parking functions considered in this paper.
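As a sanity check on these conventions, here is a minimal simulation of the $k$-Naples rule (the name `naples_park` and the conventions are ours); it reproduces the parking order $6,5,4,3,7,2,1$ described above for $(6,6,6,5,5,2,1)$ with $k=2$, and fails with $k=1$ because $c_5$ cannot park:

```python
def naples_park(prefs, k):
    """Simulate the k-Naples rule: a car tries its preferred spot p, then
    p-1, p-2, ..., p-k (never below spot 1) in decreasing order, and only
    then moves forward from p to the first free spot.  Returns the spot
    taken by each car in order, or None if some car cannot park."""
    n = len(prefs)
    taken = set()
    spots = []
    for p in prefs:
        # the spots a car with preference p examines, in order
        order = [p] + list(range(p - 1, max(p - k, 1) - 1, -1)) + list(range(p + 1, n + 1))
        spot = next((s for s in order if s not in taken), None)
        if spot is None:
            return None
        taken.add(spot)
        spots.append(spot)
    return spots
```

With $k=0$ this reduces to the ordinary parking rule, e.g. `naples_park([2, 2, 1, 4], 0)` gives `[2, 3, 1, 4]` as in the introduction.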
We now summarize our main results in this paper:
\begin{itemize}
\item As we commented previously, all rearrangements of a parking function are parking functions. In Section \ref{sec:Rearrangements} we answer the analogous question for $k$-Naples parking functions, by establishing that given a parking preference, all of its rearrangements are $k$-Naples parking functions if and only if its ascending rearrangement is a $k$-Naples parking function. This is the statement of Theorem \ref{thm:Rearrangements}.
\item We then restrict our study to ascending and descending $k$-Naples parking functions and in sections \ref{sec:DyckPaths} and \ref{sec:BinaryTrees}, we present bijections between ascending and descending $k$-Naples parking functions and families of Dyck paths and binary trees, respectively.
\item In Section \ref{sec:Monotonic} we use the bijections found in the previous sections to give formulas to enumerate ascending and descending $k$-Naples parking functions. These results give connections to Fine numbers (\textcolor{blue}{\href{https://oeis.org/A000957}{A000957}}) and convolution of the Catalan numbers with the Fine numbers (\textcolor{blue}{\href{https://oeis.org/A000958}{A000958}}).
\item We also consider bijections between ascending and descending $k$-Naples parking functions and other Catalan objects. This is the content of Section \ref{sec:OtherBijections}.
\end{itemize}
Following these results, we conclude the article in Section \ref{sec:FutureWorks} by detailing some future directions of research on these topics.
\section{Rearrangements of $k$-Naples Parking Functions}\label{sec:Rearrangements}
An interesting question regarding $k$-Naples parking functions is whether all rearrangements of a $k$-Naples parking function remain $k$-Naples parking functions, since this is true for traditional parking functions.
However, this is not the case, as we illustrate in Example \ref{exmp:BadRearrangement}.
The bijective correspondence between ascending and descending parking functions does not hold in the $k$-Naples case for $k>0$. This motivates us to study both ascending and descending $k$-Naples parking functions separately to understand their similarities and differences in the hopes of achieving a better understanding of general $k$-Naples parking functions.
\begin{exmp}\label{exmp:BadRearrangement}
We have $(6,6,6,5,5,2,1)$ as a descending parking preference of length $7$ and we have $(1,2,5,5,6,6,6)$ as its corresponding ascending parking preference. From above, we see $(6,6,6,5,5,2,1)$ is a descending $2$-Naples parking function. We also can see that $(1,2,5,5,6,6,6)$ is an ascending $3$-Naples parking function, but not an ascending $2$-Naples parking function.
\end{exmp}
As a starting point to begin exploring exactly when rearrangements of a $k$-Naples parking function are still $k$-Naples, we define a slightly modified way of writing parking preferences that can capture the position of cars step by step as they park.
Then, we prove a lemma exhibiting small modifications of the parking preference that preserve its status as a parking function.
\begin{defn}
An \textit{$i$-filled} parking preference
of length $n$ is an ordered pair of sequences in $[n]$, of the form $((d_1,\dots,d_i),(a_{i+1},\dots, a_n))$, such that each $d_j$ for $1\leq j\leq i$ represents the spot in which car $c_j$ has already parked, and each $a_j$ for $i<j\leq n$ represents the preference of car $c_j$ that has yet to park.
If all cars can park using the $k$-Naples rules we call this an \textit{$i$-filled} $k$-Naples parking function.
\end{defn}
\begin{lem}\label{lem:t1}
If $P_1=(a_1, a_2,\ldots,a_n)$ is a $k$-Naples parking function with cars parking in spots $(d_1,\ldots, d_n)$ and we consider the $i$-filled parking preference $P_2=((p_1,\ldots,p_i),(a_{i+1},\ldots,a_n))$ with $p_j=d_j$ for all but exactly one $l\leq i$ where $p_l<d_l$, then $P_2$ is an $i$-filled $k$-Naples parking function.
\end{lem}
\begin{proof}
We prove this by induction on $n-i$. Suppose first that $n-i=1$, i.e., $i=n-1$, and that $P_2$ is an $(n-1)$-filled parking preference $((p_1,\dots, p_{n-1}),(a_n))$ so that, according to this preference, cars $c_1,\dots, c_{n-1}$ park in spots $p_1,\dots, p_{n-1}$. By assumption $p_j=d_j$ for all $j\leq n-1$ except for one $l$ with $p_l<d_l$. Since all cars must park in distinct spots, we must have $p_l=d_n<d_l$, so that spot $d_l$ is the only unoccupied spot when $c_n$ goes to park; since $d_n<d_l$, car $c_n$ reaches $d_l$ either among its $k$ backward checks or by moving forward, and parks there. Thus, $P_2$ is an $(n-1)$-filled $k$-Naples parking function.
Now suppose that the statement holds for every $m$-filled parking preference with $i<m<n$, and let $P_2$ be the $i$-filled parking preference $((p_1,\dots, p_i),(a_{i+1},\dots, a_n))$ so that, according to this preference, cars $c_1,\dots, c_{i}$ park in spots $p_1,\dots, p_{i}$. By assumption $p_j=d_j$ for all but exactly one $l\leq i$, where $p_l<d_l$. Thus, $p_l=d_j$ for some $j>i$. Now consider where car $c_{i+1}$ parks according to $P_2$.
If $p_l=d_{i+1}$, then $d_l$ is unoccupied when $c_{i+1}$ tries to park. By assumption $d_{i+1}<d_l$, which forces $c_{i+1}$ to park at spot $d_l$ or earlier. If $c_{i+1}$ parks in spot $d_l$, then when cars $c_{i+2},\dots,c_n$ go to park according to $P_2$ they find spots $d_1,\dots, d_{i+1}$ occupied and park in spots $d_{i+2},\dots, d_n$, respectively. Thus, we may assume that $c_{i+1}$ parks between spots $d_{i+1}$ and $d_l$. Then the collection of spots $X$ occupied by cars $c_1,\dots,c_{i+1}$ differs as a set from $X'=\{d_1,\dots, d_{i+1}\}$ by one element. Say $X\setminus X'=\{d'\}$ and $X'\setminus X=\{d''\}$, so that $d'<d''$. Then we can arrange these spots into an $(i+1)$-filled parking preference satisfying our inductive hypothesis, so that it is an $(i+1)$-filled $k$-Naples parking function. But that means that cars $c_{i+2},\dots, c_n$ are able to park based on how cars $c_1,\dots,c_{i+1}$ have filled the lot according to $P_2$, so that $P_2$ is an $i$-filled $k$-Naples parking function.
If $p_l\neq d_{i+1}$, then it must be the case that $p_l=d_j$ for some $j>i+1$.
Then, when $c_{i+1}$ goes to park, it can either park in $d_l$ or in some earlier spot.
If it parks in $d_l$, we fall into the same situation as above of having an $(i+1)$-filled $k$-Naples parking function.
If it parks in some earlier spot, this spot would also have been available to $c_{i+1}$ when it tried to park according to $P_1$, which is a contradiction.
\end{proof}
Lemma \ref{lem:t1} plays a key role in the following results about rearrangements of $k$-Naples parking functions.
\begin{thm}\label{thm:Rearrangements}
Given a parking preference, all of its rearrangements are $k$-Naples parking functions if and only if its ascending rearrangement is a $k$-Naples parking function.
\end{thm}
\begin{proof}
Note that if all rearrangements of a parking preference are $k$-Naples then this includes the fact that the ascending rearrangement is $k$-Naples.
To prove that all rearrangements of an ascending $k$-Naples parking function are $k$-Naples it suffices to show that if we have a $k$-Naples parking function $P_1=(a_1, a_2,\ldots,a_n)$, with $a_i<a_{i+1}$ for some $i,$ then the preference $P_2=(b_1,b_2,\ldots,b_n)$ where $b_j=a_j$ for every $j\notin\{i,i+1\},$ $b_i=a_{i+1},$ and $b_{i+1}=a_i,$ is also a $k$-Naples parking function.
Let car $c_j$ park in spot $d_j$ in accordance with parking preference $P_1$.
We see that both parking preferences $P_1$ and $P_2$ result in the first $i-1$ cars parking identically, in spots $d_1,\dots,d_{i-1}$.
If $c_i$ in $P_2$ parks in $d_{i+1}$, then $c_i$ in $P_1$ must not pass by $d_{i+1}$ before parking or it would park there.
The two cars take up the same spaces in $P_2$ and the rest of the parking proceeds as in $P_1$.
So, we may assume $c_i$ in $P_2$ does not park in $d_{i+1}$.
But then it must park in a spot that was not open to $c_{i+1}$ in $P_1$, namely $d_i$.
Then when $c_{i+1}$ goes to park in $P_2$, it drives past $d_i$ which is now full.
If $d_{i+1}<d_i$, we see car $c_{i+1}$ in $P_1$ backed up all the way to $d_{i+1}$ and since $b_{i+1}<b_i$, $c_{i+1}$ in $P_2$ backs up to $d_{i+1}$ as well.
Otherwise, we have $d_{i+1}>d_i$ and $c_{i+1}$ in $P_2$ clearly parks at or before $d_{i+1}$.
So, we know that after the $(i+1)$st car in $P_2$ parks, all the spots occupied by the first $i$ cars in $P_1$ are full, and the remaining car is parked either in $d_{i+1}$ or in a spot before $d_{i+1}$ that would be open in $P_1$.
Since this is precisely the situation of Lemma \ref{lem:t1}, we see that $P_2$ is a $k$-Naples parking function, as desired.
\end{proof}
\begin{exmp}
One can check that $(6,6,5,5,3,1)$ is a $2$-Naples parking function, but its rearrangement $(3,5,1,6,6,5)$ is not.
Note that in the ascending rearrangement $(1,3,5,5,6,6)$, no car can park in the second spot, so it is not a $2$-Naples parking function.
However, we know $(1,3,3,5,6,6)$ is an ascending $2$-Naples parking function and an exhaustive search shows that all of the rearrangements are also $2$-Naples parking functions.
\end{exmp}
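The exhaustive search mentioned in the example is small enough to reproduce directly. The sketch below (the function name `is_k_naples` and the encoding are ours) tests every distinct rearrangement of $(1,3,3,5,6,6)$ under the $2$-Naples rule:

```python
from itertools import permutations

def is_k_naples(prefs, k):
    """True iff every car parks under the k-Naples rule: a car tries its
    preferred spot, then the k spots behind it in decreasing order, then
    moves forward to the first free spot."""
    n = len(prefs)
    occupied = set()
    for p in prefs:
        order = [p] + list(range(p - 1, max(p - k, 1) - 1, -1)) + list(range(p + 1, n + 1))
        spot = next((s for s in order if s not in occupied), None)
        if spot is None:
            return False
        occupied.add(spot)
    return True

# Every distinct rearrangement of (1,3,3,5,6,6) is 2-Naples ...
assert all(is_k_naples(list(q), 2) for q in set(permutations((1, 3, 3, 5, 6, 6))))
# ... while (1,3,5,5,6,6) already fails on its own: spot 2 is never filled.
assert not is_k_naples([1, 3, 5, 5, 6, 6], 2)
```

The same check confirms that $(6,6,5,5,3,1)$ is $2$-Naples while its rearrangement $(3,5,1,6,6,5)$ is not, as in the example.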
\begin{rem}
On closer inspection, this proof actually defines a hierarchy governing when rearrangements of a parking preference are $k$-Naples.
If two rearrangements differ by a single switch of adjacent cars, where the car coming later in the first rearrangement has a higher preference than the one coming before it, then it is intrinsically harder for the first rearrangement to be $k$-Naples than for the second.
This is because according to the proof, the first being $k$-Naples implies the second also is, but the converse is not true.
\end{rem}
\section{Dyck Paths}\label{sec:DyckPaths}
One family of Catalan objects in bijection with descending parking functions is the family of Dyck paths.
In \cite{naples}, a generalization of this result is presented, that gives a bijection between descending $k$-Naples parking functions and a generalization of Dyck paths called $k$-Dyck paths.
In related work, Colmenarejo et al.\ \cite{kNPF-AIMUP} counted $k$-Naples parking functions through permutations and defined the $k$-Naples area statistic.
In this section, we explore when a $k$-Dyck path corresponds to an ascending $k$-Naples Parking function, giving a way of finding all $k$-Naples parking functions with the property that each of its rearrangements remains a $k$-Naples parking function.
We then embed $k$-Dyck paths into a subset of Dyck paths to help us find other bijections with ascending and descending $k$-Naples parking functions.
\begin{defn}\label{def:DyckPath}
A \textit{Dyck path} of length $n$ is a lattice path of Up $(1,1)$ and Down $(1,-1)$ steps from $(0,0)$ to $(2n,0)$ that never reaches below the line $y=0$. A \textit{$k$-Dyck path} is a similarly defined path that never reaches below the line $y=-k$ and ends with a Down step.
Any such path can be represented by a sequence of $U$'s and $D$'s corresponding to its steps.
The \textit{length} of a Dyck path or $k$-Dyck path is defined as the number of Up steps it has.
\end{defn}
The following result from \cite{naples} connects $k$-Dyck paths to descending $k$-Naples parking functions.
\begin{prop}[Theorem 1.3, \cite{naples}]\label{prop:DescendingKDyckBijection}
The set of descending $k$-Naples parking functions of length $n$ is in bijective correspondence with the set of $k$-Dyck paths of length $n$.
\end{prop}
\begin{rem}\label{rem:ParkingFunctionToDyck} From \cite{naples} we have the following correspondence between $k$-Dyck paths and increasing parking preferences.
A $k$-Dyck path $P$ of length $n$ uniquely corresponds to the parking preference $\alpha=(a_1,\dots, a_n)$, where $a_i$ is $1$ plus the number of Down steps coming before the $i$th Up step.
Note that $\alpha$ is an ascending parking preference. In \cite{naples} the descending rearrangement of $\alpha$ was shown to be $k$-Naples, and it is straightforward to reverse this process to go from descending $k$-Naples parking functions of length $n$ to $k$-Dyck paths of length $n$.
\end{rem}
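The correspondence in the remark is a single scan over the step sequence. A minimal sketch (the encoding of a path as a string of `U`/`D` and the function name are ours):

```python
def path_to_preference(path):
    """Map a (k-)Dyck path, given as a string of 'U' and 'D' steps, to the
    ascending parking preference (a_1, ..., a_n), where a_i is 1 plus the
    number of Down steps occurring before the i-th Up step."""
    prefs, downs = [], 0
    for step in path:
        if step == 'U':
            prefs.append(downs + 1)
        else:
            downs += 1
    return prefs
```

For instance, reading the $2$-Dyck path of Figure \ref{fig:PPToKDyck} as \texttt{UDDUUDDUDUUD} yields the preference $(1,3,3,5,6,6)$.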
Next, we use this correspondence between ascending parking preferences of length $n$ and $k$-Dyck paths of length $n$ to classify which $k$-Dyck paths correspond to ascending $k$-Naples parking functions.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.75]
\draw[red, thick] (0,0) to (12,0);
\draw[thick] (0,0) to (1,1) to (2,0) to (3,-1) to (4,0) to (5,1) to (6,0) to (7,-1) to (8,0) to (9,-1) to (10,0) to (11,1) to (12,0);
\fill (0,0) circle (2pt)
(1,1) circle (2pt)
(2,0) circle (2pt)
(3,-1) circle (2pt)
(4,0) circle (2pt)
(5,1) circle (2pt)
(6,0) circle (2pt)
(7,-1) circle (2pt)
(8,0) circle (2pt)
(9,-1) circle (2pt)
(10,0) circle (2pt)
(11,1) circle (2pt)
(12,0) circle (2pt)
;
\end{tikzpicture}
\caption{The $k$-Dyck path corresponding to $(1,3,3,5,6,6)$.} \label{fig:PPToKDyck}
\end{figure}
\begin{exmp}
To go from the 2-Dyck path in Figure \ref{fig:PPToKDyck} to an ascending parking preference, we see the first Up step has no previous Down steps making the first preference $1$.
The second and third Up steps, each preceded by two Down steps, then correspond to a preference of $3$.
Continuing in this manner yields the parking preference $(1,3,3,5,6,6)$.
\end{exmp}
\begin{thm}\label{thm:AscendingCharacterization}
A $k$-Dyck path corresponds to an ascending $k$-Naples parking function if and only if every Down step that puts the path below the line $y=0$ crosses back above $y=0$ within $2k$ steps.
\end{thm}
\begin{proof}
Assume for the sake of contradiction that we have a $k$-Dyck path $P$ that corresponds to an ascending $k$-Naples parking function, but that at some point $P$ crosses below the line $y=0$ and does not cross back above this line within $2k$ steps.
Let step $2i+1$ be the first step where the path goes below $y=0$, but does not cross back above $y=0$ within $2k$ steps.
Now, let us look at car $c_{i+j}$ with $1\leq j\leq k+1.$
We see $c_{i+j}$ must have preference larger than $i+j$ since the path is below the horizontal and there are more Down than Up steps during this section.
For the car to move back to spot $i+1$, all spots between $i+1$ and the parking preference, including the preference, must be filled.
We see these are spots $i+2, i+3, \ldots, i+j+1$ of which there are $j$.
However, they could only be filled by cars $c_{i+1}, c_{i+2}, \ldots, c_{i+j-1}$ of which there are $j-1$.
So, one of these spots is open and $c_{i+j}$ cannot fill spot $i+1$. We see that cars $c_{i+k+2}$ and later must have parking preference greater than or equal to $i+k+2$, since the path does not return above the line $y=0$ until at least step $2(i+k)+2$ by assumption. So no car fills spot $i+1$, showing that the path does not correspond to a $k$-Naples parking function.
Next, we show that if a $k$-Dyck path always crosses back above the line $y=0$ within $2k$ steps of it crossing below $y=0$ then it corresponds to an ascending parking function.
We justify this by induction on $k$.
We know this is true for the usual parking functions, i.e. $0$-Naples parking functions, and we assume it is true for all values up to $k-1$.
Suppose we have an ascending parking preference corresponding to a $k$-Dyck path that goes below the line $y=0$ on step $2i+1$.
We may assume that, under this preference, the first $i$ spots are filled by the first $i$ cars.
By hypothesis, the path must go above the horizontal at or before step $2(i+k)+1$.
If the $k$-Dyck path always goes above $y=0$ before step $2(i+k)+1$ after going below $y=0$ on step $2i+1$ then it corresponds to an ascending $(k-1)$-Naples parking function since it cannot touch the line $y=-k$ given that within $2(k-1)$ steps of going under the horizontal it has crossed back above $y=0$.
So, we assume the path goes back above the horizontal for the first time on step $2(i+k)+1$.
Now, we look at cars $c_j$ with $i+1\leq j\leq i+k+1$.
These are all the cars whose preferred parking spot $a_j$ corresponds to an Up step below the horizontal, except for car $c_{i+k+1}$, whose preferred parking spot corresponds to the Up step that brings the path back above the horizontal $y=0$.
We see that their parking preferences $a_j$ have the property $i+1\leq a_j\leq i+k$.
Since each car is able to move back $k$ spots and $j\leq i+k+1$, we see these cars fill spots at or before spot $i+k+1$.
But that implies the $k+1$ cars fill up the $k+1$ spots immediately after what was already filled.
So, the first $i+k+1$ cars fill the first $i+k+1$ spots.
Now, if the path goes below the horizontal for the first time on step $2i'+1$, then the first $i'$ spots are clearly filled.
This shows that a $k$-Dyck path corresponds to an ascending $k$-Naples parking function when each time the path goes below the horizontal, it has had more Up steps than Down steps at least once within $2k$ steps.
\end{proof}
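The parking rule and the lattice-path criterion in the proof can be checked by direct simulation. The following Python sketch is ours, not part of the paper; it assumes the standard $k$-Naples rule (a car whose preferred spot $a_i$ is occupied first backs up through spots $a_i-1,\dots,\max(a_i-k,1)$ and otherwise drives forward to the first empty spot) and the bijection, read from Figure \ref{fig:PPToKDyck}, sending a nondecreasing preference to the lattice path with one Up step per car preferring spot $i$ followed by one Down step, for $i=1,\dots,n$. The function names are our own.

```python
from itertools import combinations_with_replacement

def is_k_naples(prefs, k):
    """Simulate the k-Naples rule: a car whose preferred spot a is occupied
    first backs up through spots a-1, ..., max(a-k, 1), then drives forward
    to the first empty spot; the preference fails if some car cannot park."""
    n = len(prefs)
    occupied = [False] * (n + 1)              # spots are numbered 1..n
    for a in prefs:
        spot = next((b for b in range(a, max(a - k, 1) - 1, -1)
                     if not occupied[b]), None)
        if spot is None:                      # back-up failed: drive forward
            spot = next((f for f in range(a + 1, n + 1)
                         if not occupied[f]), None)
        if spot is None:
            return False                      # this car cannot park
        occupied[spot] = True
    return True

def ascending_path_condition(prefs, k):
    """Path criterion of the theorem for a nondecreasing preference: every
    time the path crosses below y = 0, it must reach a point with more Up
    steps than Down steps within the following 2k steps."""
    n = len(prefs)
    steps = []
    for i in range(1, n + 1):                 # one Up step per car preferring
        steps += [1] * prefs.count(i) + [-1]  # spot i, then one Down step
    heights, h = [], 0
    for s in steps:
        h += s
        heights.append(h)
    for i, h in enumerate(heights):
        crossed = h == -1 and (i == 0 or heights[i - 1] == 0)
        if crossed and not any(x >= 1 for x in heights[i + 1:i + 2 * k + 1]):
            return False
    return True

# The two notions agree on all nondecreasing preferences of length 5.
for k in range(3):
    for prefs in combinations_with_replacement(range(1, 6), 5):
        assert is_k_naples(prefs, k) == ascending_path_condition(prefs, k)
```

On the example discussed below, `is_k_naples((1, 3, 3, 5, 6, 6), 1)` is `False` while the same call with $k=2$ is `True`, matching the theorem.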
\begin{cor}\label{cor:Ascending1-Naples}
Every rearrangement of a parking preference is a $k$-Naples parking function if and only if, whenever its corresponding $k$-Dyck path has a Down step which crosses the line $y=0$, the following $2k$ steps contain a point at which the path has taken, in total, two more Up steps than Down steps.
\end{cor}
\begin{proof}
This follows directly from Theorems \ref{thm:AscendingCharacterization} and \ref{thm:Rearrangements}.
\end{proof}
\begin{exmp}
From Figure \ref{fig:PPToKDyck}, we see $(1,3,3,5,6,6)$ is not a $1$-Naples parking function even though the lattice path is a $1$-Dyck path.
In fact, we can see that at step 7 it crosses below $y=0$, and it then takes four steps for the path to cross back above $y=0$.
Thus, it is a $2$-Naples parking function.
\end{exmp}
\begin{rem}
Note that, in particular, for the 1-Naples case, the path cannot be below the $y=0$ line for more than 3 steps at a time.
This means that we cannot have two consecutive valleys under the $y=0$ line.
This special case is equivalent to Conjecture 5 in \cite{naples}.
\end{rem}
\begin{figure}[h]
\[
\begin{tikzpicture}[scale=.75]
\newcommand{\U}{--++(1,1)}
\newcommand{\D}{--++(1,-1)}
\begin{scope}[shift=($(90:6)+(0:2)$)]
\draw[red, thick] (-2,0) to (16,0);
\draw[thick] (0,0) \U \D \U \D \D\D\U \U\D\U\U\U\D\D;
\fill
(0,0) circle (2pt)
(1,1) circle (2pt)
(2,0) circle (2pt)
(3,1) circle (2pt)
(4,0) circle (2pt)
(5,-1) circle (2pt)
(6,-2) circle (2pt)
(7,-1) circle (2pt)
(8,0) circle (2pt)
(9,-1)circle (2pt)
(10,0)circle (2pt)
(11,1) circle (2pt)
(12,2) circle (2pt)
(13,1) circle (2pt)
(14,0)circle (2pt)
;
\end{scope}
\draw[red, thick] (0,2) to (18,2);
\draw[blue, thick] (0,0) to (18,0);
\begin{scope}[shift=($(90:2)+(0:2)$)]
\draw[thick] (0,0) \U \D \U \D \D\D\U \U\D\U\U\U\D\D;
\fill
(0,0) circle (2pt)
(1,1) circle (2pt)
(2,0) circle (2pt)
(3,1) circle (2pt)
(4,0) circle (2pt)
(5,-1) circle (2pt)
(6,-2) circle (2pt)
(7,-1) circle (2pt)
(8,0) circle (2pt)
(9,-1)circle (2pt)
(10,0)circle (2pt)
(11,1) circle (2pt)
(12,2) circle (2pt)
(13,1) circle (2pt)
(14,0)circle (2pt)
;
\end{scope}
\draw[thick] (0,0) \U\U;
\draw[thick] (16,2) \D\D;
\fill
(0,0) circle (2pt)
(1,1) circle (2pt)
(18,0) circle (2pt)
(17,1) circle (2pt)
;
\begin{scope}[shift=($(90:2)+(0:2)$), thick, loosely dashed]
\draw (0,0)--++(90:4);
\draw (2,0)--++(90:4);
\draw (6,-2)--++(90:4);
\draw (9,-1)--++(90:4);
\draw (14,0)--++(90:4);
\end{scope}
\end{tikzpicture}
\]
\caption{The $k$-Dyck path to Dyck path transformation.}
\label{fig:KDyckToDyck}
\end{figure}
Next, we use Proposition \ref{prop:DescendingKDyckBijection} to view descending $k$-Naples parking functions of length $n$ as $k$-Dyck paths of the same length, and then embed these into usual Dyck paths of length $n+k$.
This allows us to obtain many similar bijections between both ascending and descending $k$-Naples parking functions and other subsets of Catalan objects.
\begin{prop}\label{prop:TranslatingAxis}
Descending $k$-Naples parking functions are in bijective correspondence to Dyck paths of length $n+k$ whose first $k$ steps are Up and last $k+1$ steps are Down.
\end{prop}
\begin{proof}
This is a matter of using the bijection between descending $k$-Naples parking functions and $k$-Dyck paths and then embedding these $k$-Dyck paths into the usual Dyck paths.
Specifically, given a descending $k$-Naples parking function find the corresponding $k$-Dyck path.
Then, shift this path $k$ units right and $k$ units up so that it starts at $(k,k)$ and concatenate this with the lattice path of all Up steps from $(0,0)$ to $(k,k)$ and the lattice path of all Down steps from $(2n+k,k)$ to $(2n+2k,0)$. In terms of Up steps and Down steps, this corresponds to appending $k$ Up steps to the start of the $k$-Dyck path and $k$ Down steps to the end of the $k$-Dyck path.
Notice, that if a lattice path initially takes $k$ consecutive Up steps, then it is at $(k,k)$ after the first $k$ steps, and $y=0$ is $k$ units below this point.
By concatenating this with a $k$-Dyck path that is shifted $k$ units right and $k$ units up to start at $(k,k)$, this new path does not venture below $y=0$.
Note that the $k$-Dyck path has $n$ Up steps and $n$ Down steps, with the last step always a Down step, so after appending the $k$ consecutive Down steps from $(2n+k,k)$ to $(2n+2k,0)$ the concatenated path is a Dyck path of length $n+k$.
Moreover, the reverse of this process, namely removing the first $k$ Up steps and the last $k$ Down steps of a Dyck path whose first $k$ steps are Up and last $k+1$ steps are Down, and then shifting the resulting lattice path left $k$ units and down $k$ units so that it starts at $(0,0)$ and ends at $(2n,0)$, results in a $k$-Dyck path, and hence a descending $k$-Naples parking function.
\end{proof}
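The embedding in the proof is easy to state on words of Up and Down steps. The sketch below is ours, for illustration only; it encodes paths as strings over $\{U, D\}$, prepends $k$ Up steps and appends $k$ Down steps, and checks the minimum height.

```python
def min_height(path):
    """Lowest point reached by a U/D lattice path started at height 0."""
    h, lo = 0, 0
    for s in path:
        h += 1 if s == "U" else -1
        lo = min(lo, h)
    return lo

def embed_k_dyck(path, k):
    """Embed a k-Dyck path into an ordinary Dyck path, as in the proof:
    prepend k Up steps and append k Down steps."""
    return "U" * k + path + "D" * k

def unembed(path, k):
    """Inverse map: strip the first k Up steps and the last k Down steps."""
    return path[k:len(path) - k]
```

For the running example $(6,6,6,5,5,2,1)$, the $2$-Dyck path shown in Figure \ref{fig:KDyckToDyck} is \texttt{UDUDDDUUDUUUDD}; it dips to height $-2$, and embedding with $k=2$ yields a path of length $2(n+k)=18$ that never goes below $y=0$.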
\begin{rem}
An example of this transformation for our running example $(6,6,6,5,5,2,1)$ can be seen in Figure \ref{fig:KDyckToDyck}.
Notice that for a $k$-Dyck path, the corresponding Dyck path represents a descending parking function of length $n+k$ that ends in at least $k$ cars with preference $1$. This leads to the following result.
\end{rem}
\begin{cor}
Descending $k$-Naples parking functions of length $n$ are in bijective correspondence to descending parking functions of length $n+k$ which end with at least $k$ cars with preference $1$.
\end{cor}
\begin{rem}
Similar to the transformation in Proposition \ref{prop:TranslatingAxis}, we see that Dyck paths of length $n+k$ that do not return to the line $y=0$ until the last step are in bijection with Dyck paths of length $n+k-1$, obtained by removing the first Up step and the last Down step.
This motivates the next result.
\end{rem}
\begin{defn}\label{defn:Strictlyk-Naples}
A parking preference is \textit{strictly} $k$-Naples if it is $k$-Naples but not $(k-1)$-Naples.
\end{defn}
\begin{prop}\label{prop:DescendingtoDyck}
The descending $k$-Naples parking functions that are not descending $(k-1)$-Naples parking functions are in bijective correspondence to Dyck paths of length $n+k$ whose first $k$ steps are Up and last $k+1$ steps are Down and which return to the line $y=0$ sometime before the last step.
\end{prop}
\begin{proof}
To see this, use the same translation between a $k$-Dyck path and Dyck path as before.
The Dyck paths that do not return to the horizontal before the last step correspond to the $k$-Dyck paths that do not reach the line $y=-k$.
If the descending path reaches at most the line $y=-k+1$, it corresponds to a descending $(k-1)$-Naples parking function.
This shows that the Dyck paths which do return to $y=0$ before the last step, and thus correspond to $k$-Dyck paths which reach the line $y=-k$, are in bijection with the descending $k$-Naples parking functions that are not descending $(k-1)$-Naples parking functions.
\end{proof}
Finally, we may also use this embedding of $k$-Dyck paths into Dyck paths to see which Dyck paths are in correspondence to ascending $k$-Naples parking functions. The following corollary follows directly from Theorem \ref{thm:AscendingCharacterization}.
\begin{cor}\label{cor:AscendingtoDyck}
Ascending $k$-Naples parking functions are in bijective correspondence to Dyck paths of length $n+k$ whose first $k$ steps are Up, last $k+1$ steps are Down, and in which, before the last $k+1$ steps, whenever a Down step puts the path below the line $y=k$, the following $2k$ steps have a point with two more Up steps than Down steps.
\end{cor}
\section{Binary Trees}\label{sec:BinaryTrees}
In the previous section, we found a bijection between ascending and descending $k$-Naples parking functions of length $n$ and subsets of Dyck paths of length $n+k$.
Since there is already a well-known bijection between Dyck paths and full binary trees (see \cite{Stanley}, for example), we find a bijection between ascending or descending $k$-Naples parking functions and a certain subset of full binary trees.
By trimming the leaves of the full binary tree, we can then obtain a bijection to a subset of binary trees.
We begin this section by recalling the following definitions.
\begin{defn}\label{defn:RootedTrees}
A \textit{tree} is an undirected, connected graph with no cycles.
We say that a tree is \textit{rooted} if there is one vertex distinguished as the \textit{root}.
In a rooted tree an \textit{ancestor} of a vertex $v$ is a vertex that lies on the path from $v$ to the root, and we say that an ancestor of a vertex $v$ is the \textit{parent} of $v$ if it is also adjacent to $v$.
Analogously, a \textit{descendant} of a vertex $v$ is any vertex that has $v$ as its ancestor, and a \textit{child} of a vertex $v$ is any vertex that has $v$ as its parent.
A \textit{leaf} is a vertex with no children.
A \textit{binary} tree is a rooted tree in which every vertex has at most 2 children.
If every vertex has either 0 or 2 children we say that the binary tree is \textit{full}.
\end{defn}
\begin{defn}\label{def:TreesToDyck}
The bijection we use between full binary trees with $2n+1$ nodes and Dyck paths of length $n$ is defined recursively for a given tree $T$ by $B(T) = UB(T_1)DB(T_2)$.
Here $U$ and $D$ represent Up and Down steps respectively, $T_1$ is the subtree of $T$ rooted at the left child of the root of $T$ (consisting of all descendants of this left child), and $T_2$ is the analogous subtree rooted at the right child.
If $T'$ consists of a single vertex, then $B(T')$ is the empty word.
For further reference see \cite{Stanley}.
\end{defn}
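Definition \ref{def:TreesToDyck} translates directly into a short recursion. In the following sketch (our encoding, for illustration) a full binary tree is a nested pair \texttt{(left, right)} and a single leaf is \texttt{None}.

```python
def B(T):
    """B(T) = U B(T1) D B(T2), where T1 and T2 are the left and right
    subtrees; the word of a single leaf is empty."""
    if T is None:                     # a single vertex gives the empty word
        return ""
    left, right = T
    return "U" + B(left) + "D" + B(right)
```

A full binary tree on $2n+1$ vertices produces a word with $n$ Up and $n$ Down steps, as expected.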
\begin{rem}
From Proposition \ref{prop:TranslatingAxis} and Corollary \ref{cor:AscendingtoDyck} we can see that Dyck paths corresponding to ascending or descending $k$-Naples parking functions must start with $k$ Up steps.
Thus, the corresponding full binary trees must have a path from the root through $k$ consecutive left children.
Describing the remaining structure of the trees coming from ascending or descending $k$-Naples parking functions is one of the focuses of this section.
\end{rem}
\begin{rem}\label{rem:fullToTrimmed}
It is well-known that there is a bijection between full binary trees with $2n+1$ vertices and binary trees with $n$ vertices given by removing the leaves and the edges to which they are incident in the full binary tree.
To obtain a full binary tree on $2n+1$ vertices from a binary tree on $n$ vertices we add leaves to each node without two children until each of the original vertices has two children.
Figure \ref{fig:DyckFullBinaryAndBinary} illustrates both of these bijections.
From left to right we have the Dyck path corresponding to the strictly 2-Naples parking function $(6,6,6,5,5,2,1)$, then the full binary tree on $2n+1$ vertices with the leaves highlighted in red, and lastly the binary tree on $n$ vertices obtained by pruning the red leaves.
\end{rem}
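The pruning bijection of Remark \ref{rem:fullToTrimmed} can likewise be sketched. In this illustration (our encoding, not the paper's) a full binary tree is a nested pair whose leaves are the string \texttt{"leaf"}, while a plain binary tree is a nested pair whose absent children are \texttt{None}, with the empty tree being \texttt{None} itself; trimming turns every leaf into an absent child, and growing does the reverse.

```python
def trim(T):
    """Full binary tree -> binary tree: delete the leaves."""
    if T == "leaf":
        return None
    left, right = T
    return (trim(left), trim(right))

def grow(T):
    """Binary tree -> full binary tree: hang a leaf at every absent child."""
    if T is None:
        return "leaf"
    left, right = T
    return (grow(left), grow(right))

def vertices_full(T):
    """Vertex count of a full binary tree."""
    return 1 if T == "leaf" else 1 + vertices_full(T[0]) + vertices_full(T[1])

def vertices(T):
    """Vertex count of a binary tree."""
    return 0 if T is None else 1 + vertices(T[0]) + vertices(T[1])
```

For any binary tree $T$ on $n$ vertices, `grow(T)` has $2n+1$ vertices and `trim(grow(T))` returns $T$, mirroring the bijection of the remark.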
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (408.13,280.16) .. controls (408.13,278.79) and (409.24,277.68) .. (410.61,277.68) .. controls (411.98,277.68) and (413.08,278.79) .. (413.08,280.16) .. controls (413.08,281.53) and (411.98,282.64) .. (410.61,282.64) .. controls (409.24,282.64) and (408.13,281.53) .. (408.13,280.16) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (466.02,319.75) .. controls (466.02,318.38) and (467.13,317.27) .. (468.5,317.27) .. controls (469.87,317.27) and (470.98,318.38) .. (470.98,319.75) .. controls (470.98,321.12) and (469.87,322.23) .. (468.5,322.23) .. controls (467.13,322.23) and (466.02,321.12) .. (466.02,319.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (468.5,319.75) -- (477.9,336.58) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (475.42,339.06) .. controls (475.42,337.69) and (476.53,336.58) .. (477.9,336.58) .. controls (479.27,336.58) and (480.38,337.69) .. (480.38,339.06) .. controls (480.38,340.43) and (479.27,341.54) .. (477.9,341.54) .. controls (476.53,341.54) and (475.42,340.43) .. (475.42,339.06) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (468.5,319.75) -- (459.24,339.88) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (450.92,358.73) .. controls (450.92,357.36) and (449.84,356.25) .. (448.5,356.25) .. controls (447.16,356.25) and (446.08,357.36) .. (446.08,358.73) .. controls (446.08,360.1) and (447.16,361.21) .. (448.5,361.21) .. controls (449.84,361.21) and (450.92,360.1) .. (450.92,358.73) -- cycle ;
\draw (459.24,339.88) -- (448.5,356.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (459.24,339.88) -- (469,358.75) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (466.52,358.75) .. controls (466.52,357.38) and (467.63,356.27) .. (469,356.27) .. controls (470.37,356.27) and (471.48,357.38) .. (471.48,358.75) .. controls (471.48,360.12) and (470.37,361.23) .. (469,361.23) .. controls (467.63,361.23) and (466.52,360.12) .. (466.52,358.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (456.77,339.88) .. controls (456.77,338.51) and (457.87,337.4) .. (459.24,337.4) .. controls (460.61,337.4) and (461.72,338.51) .. (461.72,339.88) .. controls (461.72,341.25) and (460.61,342.36) .. (459.24,342.36) .. controls (457.87,342.36) and (456.77,341.25) .. (456.77,339.88) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (449.5,299.75) -- (468.5,319.75) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (413.52,341.25) .. controls (413.52,339.88) and (414.63,338.77) .. (416,338.77) .. controls (417.37,338.77) and (418.48,339.88) .. (418.48,341.25) .. controls (418.48,342.62) and (417.37,343.73) .. (416,343.73) .. controls (414.63,343.73) and (413.52,342.62) .. (413.52,341.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (407.02,320.75) .. controls (407.02,319.24) and (408.13,318.01) .. (409.5,318.01) .. controls (410.87,318.01) and (411.98,319.24) .. (411.98,320.75) .. controls (411.98,322.26) and (410.87,323.49) .. (409.5,323.49) .. controls (408.13,323.49) and (407.02,322.26) .. (407.02,320.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (400.11,301.16) -- (409.5,320.75) ;
\draw (409.5,320.75) -- (416,338.77) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (399.26,341.14) .. controls (399.26,339.77) and (400.37,338.66) .. (401.73,338.66) .. controls (403.1,338.66) and (404.21,339.77) .. (404.21,341.14) .. controls (404.21,342.51) and (403.1,343.62) .. (401.73,343.62) .. controls (400.37,343.62) and (399.26,342.51) .. (399.26,341.14) -- cycle ;
\draw (409.5,320.75) -- (401.73,338.66) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (449.5,299.75) -- (459.5,279.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (447.02,299.75) .. controls (447.02,298.38) and (448.13,297.27) .. (449.5,297.27) .. controls (450.87,297.27) and (451.98,298.38) .. (451.98,299.75) .. controls (451.98,301.12) and (450.87,302.23) .. (449.5,302.23) .. controls (448.13,302.23) and (447.02,301.12) .. (447.02,299.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (430.08,260.65) -- (410.61,280.16) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (459.5,279.75) -- (480,299.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (410.48,280.45) -- (398.5,299.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (427.61,260.65) .. controls (427.61,259.28) and (428.72,258.17) .. (430.08,258.17) .. controls (431.45,258.17) and (432.56,259.28) .. (432.56,260.65) .. controls (432.56,262.02) and (431.45,263.13) .. (430.08,263.13) .. controls (428.72,263.13) and (427.61,262.02) .. (427.61,260.65) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (396.02,299.75) .. controls (396.02,298.38) and (397.13,297.27) .. (398.5,297.27) .. controls (399.87,297.27) and (400.98,298.38) .. (400.98,299.75) .. controls (400.98,301.12) and (399.87,302.23) .. (398.5,302.23) .. controls (397.13,302.23) and (396.02,301.12) .. (396.02,299.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (449.5,299.75) -- (440,319.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (430.08,260.65) -- (459.5,279.75) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (477.52,299.25) .. controls (477.52,297.88) and (478.63,296.77) .. (480,296.77) .. controls (481.37,296.77) and (482.48,297.88) .. (482.48,299.25) .. controls (482.48,300.62) and (481.37,301.73) .. (480,301.73) .. controls (478.63,301.73) and (477.52,300.62) .. (477.52,299.25) -- cycle ;
\draw (410.61,280.16) -- (420.02,297.31) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (417.5,299.75) .. controls (417.52,298.38) and (418.65,297.29) .. (420.02,297.31) .. controls (421.39,297.34) and (422.48,298.47) .. (422.45,299.84) .. controls (422.43,301.21) and (421.3,302.3) .. (419.93,302.27) .. controls (418.56,302.25) and (417.48,301.12) .. (417.5,299.75) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (437.52,319.25) .. controls (437.52,317.88) and (438.63,316.77) .. (440,316.77) .. controls (441.37,316.77) and (442.48,317.88) .. (442.48,319.25) .. controls (442.48,320.62) and (441.37,321.73) .. (440,321.73) .. controls (438.63,321.73) and (437.52,320.62) .. (437.52,319.25) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ][line width=0.75] (386.6,319.93) .. controls (386.6,318.56) and (387.71,317.45) .. (389.08,317.45) .. controls (390.44,317.45) and (391.55,318.56) .. (391.55,319.93) .. controls (391.55,321.3) and (390.44,322.41) .. (389.08,322.41) .. controls (387.71,322.41) and (386.6,321.3) .. (386.6,319.93) -- cycle ;
\draw (398.5,299.75) -- (389.08,317.45) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (457.02,279.75) .. controls (457.02,278.38) and (458.13,277.27) .. (459.5,277.27) .. controls (460.87,277.27) and (461.98,278.38) .. (461.98,279.75) .. controls (461.98,281.12) and (460.87,282.23) .. (459.5,282.23) .. controls (458.13,282.23) and (457.02,281.12) .. (457.02,279.75) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (439.92,377.23) .. controls (439.92,375.86) and (438.84,374.75) .. (437.5,374.75) .. controls (436.16,374.75) and (435.08,375.86) .. (435.08,377.23) .. controls (435.08,378.6) and (436.16,379.71) .. (437.5,379.71) .. controls (438.84,379.71) and (439.92,378.6) .. (439.92,377.23) -- cycle ;
\draw (448.24,358.38) -- (437.5,374.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (448.24,358.38) -- (458,377.25) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (455.52,377.25) .. controls (455.52,375.88) and (456.63,374.77) .. (458,374.77) .. controls (459.37,374.77) and (460.48,375.88) .. (460.48,377.25) .. controls (460.48,378.62) and (459.37,379.73) .. (458,379.73) .. controls (456.63,379.73) and (455.52,378.62) .. (455.52,377.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (526.13,279.66) .. controls (526.13,278.29) and (527.24,277.18) .. (528.61,277.18) .. controls (529.98,277.18) and (531.08,278.29) .. (531.08,279.66) .. controls (531.08,281.03) and (529.98,282.14) .. (528.61,282.14) .. controls (527.24,282.14) and (526.13,281.03) .. (526.13,279.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (547.52,339.25) .. controls (547.52,337.88) and (548.63,336.77) .. (550,336.77) .. controls (551.37,336.77) and (552.48,337.88) .. (552.48,339.25) .. controls (552.48,340.62) and (551.37,341.73) .. (550,341.73) .. controls (548.63,341.73) and (547.52,340.62) .. (547.52,339.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (566.52,318.75) .. controls (566.52,317.38) and (567.63,316.27) .. (569,316.27) .. controls (570.37,316.27) and (571.48,317.38) .. (571.48,318.75) .. controls (571.48,320.12) and (570.37,321.23) .. (569,321.23) .. controls (567.63,321.23) and (566.52,320.12) .. (566.52,318.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (549.5,299.75) -- (570,279.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (547.02,299.75) .. controls (547.02,298.38) and (548.13,297.27) .. (549.5,297.27) .. controls (550.87,297.27) and (551.98,298.38) .. (551.98,299.75) .. controls (551.98,301.12) and (550.87,302.23) .. (549.5,302.23) .. controls (548.13,302.23) and (547.02,301.12) .. (547.02,299.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (524.92,320.84) .. controls (524.92,319.33) and (526.02,318.1) .. (527.39,318.1) .. controls (528.76,318.1) and (529.87,319.33) .. (529.87,320.84) .. controls (529.87,322.36) and (528.76,323.58) .. (527.39,323.58) .. controls (526.02,323.58) and (524.92,322.36) .. (524.92,320.84) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (548.08,260.15) -- (528.61,279.66) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (510,299.75) -- (527.39,320.84) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (528.61,279.66) -- (510,299.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (545.61,260.15) .. controls (545.61,258.78) and (546.72,257.67) .. (548.08,257.67) .. controls (549.45,257.67) and (550.56,258.78) .. (550.56,260.15) .. controls (550.56,261.52) and (549.45,262.63) .. (548.08,262.63) .. controls (546.72,262.63) and (545.61,261.52) .. (545.61,260.15) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (507.52,299.75) .. controls (507.52,298.38) and (508.63,297.27) .. (510,297.27) .. controls (511.37,297.27) and (512.48,298.38) .. (512.48,299.75) .. controls (512.48,301.12) and (511.37,302.23) .. (510,302.23) .. controls (508.63,302.23) and (507.52,301.12) .. (507.52,299.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (569,318.75) -- (550,339.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (548.08,260.15) -- (570,279.25) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (549.5,299.75) -- (569,318.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (567.52,279.25) .. controls (567.52,277.88) and (568.63,276.77) .. (570,276.77) .. controls (571.37,276.77) and (572.48,277.88) .. (572.48,279.25) .. controls (572.48,280.62) and (571.37,281.73) .. (570,281.73) .. controls (568.63,281.73) and (567.52,280.62) .. (567.52,279.25) -- cycle ;
\draw (550,339.25) -- (530.5,360.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (528.02,360.25) .. controls (528.02,358.74) and (529.13,357.51) .. (530.5,357.51) .. controls (531.87,357.51) and (532.98,358.74) .. (532.98,360.25) .. controls (532.98,361.76) and (531.87,362.99) .. (530.5,362.99) .. controls (529.13,362.99) and (528.02,361.76) .. (528.02,360.25) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (359,307.75) -- (4.5,309) ;
\draw (282.67,268.6) -- (263.02,288.42) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (282.69,270.1) .. controls (281.86,270.12) and (281.18,269.46) .. (281.17,268.63) .. controls (281.15,267.8) and (281.81,267.12) .. (282.64,267.1) .. controls (283.47,267.09) and (284.15,267.75) .. (284.17,268.58) .. controls (284.18,269.4) and (283.52,270.09) .. (282.69,270.1) -- cycle ;
\draw (263.09,288) -- (243.12,308) ;
\draw (243.12,308) -- (223.16,328.5) ;
\draw (223.16,328.5) -- (203.19,308) ;
\draw (203.19,308) -- (163.25,348) ;
\draw (102.84,287.5) -- (163.25,348) ;
\draw (102.84,287.5) -- (82.87,308.5) ;
\draw (82.87,308.5) -- (62.91,288.5) ;
\draw (62.91,288.5) -- (42.94,308.5) ;
\draw [color={rgb, 255:red, 22; green, 1; blue, 251 } ,draw opacity=1 ] (360,348.5) -- (3,348.5) ;
\draw (282.56,268) -- (322.5,308) ;
\draw (42.94,308.5) -- (3,348.5) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (21.47,328.5) .. controls (21.47,327.67) and (22.14,327) .. (22.97,327) .. controls (23.8,327) and (24.47,327.67) .. (24.47,328.5) .. controls (24.47,329.33) and (23.8,330) .. (22.97,330) .. controls (22.14,330) and (21.47,329.33) .. (21.47,328.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (101.34,287.5) .. controls (101.34,286.67) and (102.01,286) .. (102.84,286) .. controls (103.67,286) and (104.34,286.67) .. (104.34,287.5) .. controls (104.34,288.33) and (103.67,289) .. (102.84,289) .. controls (102.01,289) and (101.34,288.33) .. (101.34,287.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (81.37,308.5) .. controls (81.37,307.67) and (82.05,307) .. (82.87,307) .. controls (83.7,307) and (84.37,307.67) .. (84.37,308.5) .. controls (84.37,309.33) and (83.7,310) .. (82.87,310) .. controls (82.05,310) and (81.37,309.33) .. (81.37,308.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (41.44,308.5) .. controls (41.44,307.67) and (42.11,307) .. (42.94,307) .. controls (43.77,307) and (44.44,307.67) .. (44.44,308.5) .. controls (44.44,309.33) and (43.77,310) .. (42.94,310) .. controls (42.11,310) and (41.44,309.33) .. (41.44,308.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (61.41,288.5) .. controls (61.41,287.67) and (62.08,287) .. (62.91,287) .. controls (63.73,287) and (64.41,287.67) .. (64.41,288.5) .. controls (64.41,289.33) and (63.73,290) .. (62.91,290) .. controls (62.08,290) and (61.41,289.33) .. (61.41,288.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (301.03,288) .. controls (301.03,287.17) and (301.7,286.5) .. (302.53,286.5) .. controls (303.36,286.5) and (304.03,287.17) .. (304.03,288) .. controls (304.03,288.83) and (303.36,289.5) .. (302.53,289.5) .. controls (301.7,289.5) and (301.03,288.83) .. (301.03,288) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (261.59,288) .. controls (261.59,287.17) and (262.26,286.5) .. (263.09,286.5) .. controls (263.92,286.5) and (264.59,287.17) .. (264.59,288) .. controls (264.59,288.83) and (263.92,289.5) .. (263.09,289.5) .. controls (262.26,289.5) and (261.59,288.83) .. (261.59,288) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (241.62,308) .. controls (241.62,307.17) and (242.3,306.5) .. (243.12,306.5) .. controls (243.95,306.5) and (244.62,307.17) .. (244.62,308) .. controls (244.62,308.83) and (243.95,309.5) .. (243.12,309.5) .. controls (242.3,309.5) and (241.62,308.83) .. (241.62,308) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (221.66,328.5) .. controls (221.66,327.67) and (222.33,327) .. (223.16,327) .. controls (223.98,327) and (224.66,327.67) .. (224.66,328.5) .. controls (224.66,329.33) and (223.98,330) .. (223.16,330) .. controls (222.33,330) and (221.66,329.33) .. (221.66,328.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (181.72,328) .. controls (181.72,327.17) and (182.39,326.5) .. (183.22,326.5) .. controls (184.05,326.5) and (184.72,327.17) .. (184.72,328) .. controls (184.72,328.83) and (184.05,329.5) .. (183.22,329.5) .. controls (182.39,329.5) and (181.72,328.83) .. (181.72,328) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (201.69,308) .. controls (201.69,307.17) and (202.36,306.5) .. (203.19,306.5) .. controls (204.01,306.5) and (204.69,307.17) .. (204.69,308) .. controls (204.69,308.83) and (204.01,309.5) .. (203.19,309.5) .. controls (202.36,309.5) and (201.69,308.83) .. (201.69,308) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (161.75,348) .. controls (161.75,347.17) and (162.42,346.5) .. (163.25,346.5) .. controls (164.08,346.5) and (164.75,347.17) .. (164.75,348) .. controls (164.75,348.83) and (164.08,349.5) .. (163.25,349.5) .. controls (162.42,349.5) and (161.75,348.83) .. (161.75,348) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (140.97,327) .. controls (140.97,326.17) and (141.64,325.5) .. (142.47,325.5) .. controls (143.3,325.5) and (143.97,326.17) .. (143.97,327) .. controls (143.97,327.83) and (143.3,328.5) .. (142.47,328.5) .. controls (141.64,328.5) and (140.97,327.83) .. (140.97,327) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (121.47,307.5) .. controls (121.47,306.67) and (122.14,306) .. (122.97,306) .. controls (123.8,306) and (124.47,306.67) .. (124.47,307.5) .. controls (124.47,308.33) and (123.8,309) .. (122.97,309) .. controls (122.14,309) and (121.47,308.33) .. (121.47,307.5) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (321,308) .. controls (321,307.17) and (321.67,306.5) .. (322.5,306.5) .. controls (323.33,306.5) and (324,307.17) .. (324,308) .. controls (324,308.83) and (323.33,309.5) .. (322.5,309.5) .. controls (321.67,309.5) and (321,308.83) .. (321,308) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (1.5,348.5) .. controls (1.5,347.67) and (2.17,347) .. (3,347) .. controls (3.83,347) and (4.5,347.67) .. (4.5,348.5) .. controls (4.5,349.33) and (3.83,350) .. (3,350) .. controls (2.17,350) and (1.5,349.33) .. (1.5,348.5) -- cycle ;
\draw (322.5,308) -- (361.5,347) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (340.03,327) .. controls (340.03,326.17) and (340.7,325.5) .. (341.53,325.5) .. controls (342.36,325.5) and (343.03,326.17) .. (343.03,327) .. controls (343.03,327.83) and (342.36,328.5) .. (341.53,328.5) .. controls (340.7,328.5) and (340.03,327.83) .. (340.03,327) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (358.5,347) .. controls (358.5,346.17) and (359.17,345.5) .. (360,345.5) .. controls (360.83,345.5) and (361.5,346.17) .. (361.5,347) .. controls (361.5,347.83) and (360.83,348.5) .. (360,348.5) .. controls (359.17,348.5) and (358.5,347.83) .. (358.5,347) -- cycle ;
\end{tikzpicture}
\caption{Dyck path to full binary tree to binary tree transformations.} \label{fig:DyckFullBinaryAndBinary}
\end{figure}
Before we determine which binary trees with $n+k$ nodes correspond to ascending $k$-Naples parking functions, we need the following definitions.
\begin{figure}[h]
\centering
\begin{subfigure}{.5\textwidth}
\centering
\begin{tikzpicture}[scale=0.6]
\draw[ dashed]
(-.8,.8) to (4,-4)
(-1.8,-.2) to (3,-5)
(-2.8,-1.2) to (2,-6)
;
\draw[very thick]
(0,0) to (-2,-2)
(-1,-1) to (2,-4)
(0,-2) to (-1,-3)
(2,-4) to (1,-5)
;
\fill
(0,0) circle (4pt)
(-1,-1) circle (4pt)
(-2,-2) circle (4pt)
(0,-2) circle (4pt)
(-1,-3) circle (4pt)
(1,-3) circle (4pt)
(2,-4) circle (4pt)
(1,-5)circle (4pt)
;
\node at (-1,1) {$0$};
\node at (-2,0) {$1$};
\node at (-3,-1) {$2$};
\end{tikzpicture}
\caption{Diagonal depth}
\label{fig:diagdep}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\begin{tikzpicture}[scale=1.5]
\draw[very thick]
(0,0) to (-2,-2)
(-1,-1) to (0,-2)
;
\draw[very thick, blue,->-=0.05,->-=0.5]
(0,.5) to[out=-150, in=45] (-1.8,-1) to[out=-135, in=120] (-2.5,-2) to[out=-60, in=180] (-2,-2.5) to[out=0, in=180] (-1,-1.5) to[out=0, in=180] (0,-2.5) to[out=0, in=-60] (0.6, -2) to[out=120, in=-90] (-0.3,-1) to[out=90, in=-60] (0.5,0) to[out=120,in=0] cycle;
\fill[red] (0,.5) circle (2pt) ;
\node[blue] at (-0.25,0) {$1$};
\node[blue] at (-1.25,-1) {$2$};
\node[blue] at (-2.25,-2) {$3$};
\node[blue] at (-1.75,-2) {$4$};
\node[blue] at (-1,-1.25) {$5$};
\node[blue] at (-0.25,-2) {$6$};
\node[blue] at (0.25,-2) {$7$};
\node[blue] at (-0.75,-1) {$8$};
\node[blue] at (0.25,0) {$9$};
\fill
(0,0) circle (2pt)
(-1,-1) circle (2pt)
(-2,-2) circle (2pt)
(0,-2) circle (2pt)
;
\end{tikzpicture}
\caption{Tree traversal}
\label{fig:treetrav}
\end{subfigure}
\caption{Depth and traversal in binary trees.}
\end{figure}
\begin{defn}\label{def:DiagonalDepth}
The \textit{diagonal depth} of a node is the number of left steps in the unique simple path from the root to the node.
\end{defn}
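The diagonal depth admits a short recursive computation: a left step increases it by one, a right step leaves it unchanged. The following is a minimal sketch, assuming a simple (hypothetical) node representation with optional left and right children; the example tree is the four-node tree of Figure~\ref{fig:treetrav}.

```python
# Minimal sketch of Definition (diagonal depth); the Node class is a
# hypothetical representation, not part of the paper's formalism.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def diagonal_depths(node, depth=0, out=None):
    """Map each node to its diagonal depth: +1 per left step, +0 per right step."""
    if out is None:
        out = {}
    out[node] = depth
    if node.left:
        diagonal_depths(node.left, depth + 1, out)
    if node.right:
        diagonal_depths(node.right, depth, out)
    return out

# root -> a (left), a -> b (left), a -> c (right)
b, c = Node(), Node()
a = Node(left=b, right=c)
root = Node(left=a)
depths = diagonal_depths(root)
assert [depths[v] for v in (root, a, b, c)] == [0, 1, 2, 1]
```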
\begin{defn}\label{def:TraversingNodes}
The \textit{traversal} of a rooted binary tree $T$ is the unique counterclockwise path in $T$ starting and ending at the root, traversing each edge exactly twice. A vertex $v$ of $T$ is \textit{traversed} at step $i$ if it is the $i$-th vertex in the traversal.
\end{defn}
\begin{rem}
A tree traversal is illustrated in blue in Figure \ref{fig:treetrav}, with each vertex labeled by the steps at which it is traversed.
We see that the root is always traversed for the first time at step $1$ and its left child is traversed for the first time at step $2$.
Observe that if a leaf is traversed for the first time at step $i$, then it is traversed for the second time at step $i+1$.
In general, each vertex with at most one child is traversed twice, and each vertex with two children is traversed three times.
\end{rem}
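The traversal of Definition~\ref{def:TraversingNodes} can be realized as a recursive Euler tour: record a vertex on first arrival, again between its two subtrees when both exist, and once more after leaving its descendants. A minimal sketch, again assuming a hypothetical node representation; the assertion reproduces the step labels $1,\dots,9$ of Figure~\ref{fig:treetrav}.

```python
# Sketch of the counterclockwise traversal; names are illustrative only.
class Node:
    def __init__(self, name, left=None, right=None):
        self.name, self.left, self.right = name, left, right

def traversal(v, out=None):
    """Record v each time the counterclockwise walk sits at it."""
    if out is None:
        out = []
    out.append(v.name)          # first traversal of v
    if v.left:
        traversal(v.left, out)
    if v.left and v.right:
        out.append(v.name)      # between the two subtrees
    if v.right:
        traversal(v.right, out)
    out.append(v.name)          # after all descendants
    return out

# The four-node tree of the figure: root -> a (left), a -> b (left), a -> c (right).
tree = Node('r', left=Node('a', left=Node('b'), right=Node('c')))
assert traversal(tree) == ['r', 'a', 'b', 'b', 'a', 'c', 'c', 'a', 'r']
```

Note that the leaves `b` and `c` appear twice and the two-child vertex `a` appears three times, as in the remark above.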
To see when a rooted binary tree corresponds to an ascending $k$-Naples parking function of length $n$, we need to understand the height of the associated Dyck path while each node is being traversed.
\begin{lem}\label{lem:TraversingHeights}
Suppose $T'$ is a rooted binary tree on $n$ vertices corresponding to the full rooted binary tree $T$ on $2n+1$ vertices, and let $B(T)$ be the corresponding Dyck path of length $n$.
Let $v$ be a vertex of $T'$. If $v$ is traversed at steps $i<j$ in the traversal of $T'$, then the height of $B(T)$ immediately before step $i$ is the same as the height immediately after step $j$.
\end{lem}
\begin{proof}
Suppose step $i$ of $B(T)$ is an Up step, and let $v$ be the vertex traversed at step $i$ of the traversal of $T'$. By assumption, $v$ is not a leaf in $T$.
Then the Up step at step $i$ of $B(T)$ is directly followed by the path \[B(T_{v_L})DB(T_{v_R}),\] where $v_L$ and $v_R$ are the left and right children of $v$, respectively, and $T_{v_{L}}$ and $T_{v_{R}}$ are the rooted subtrees those children define.
Note that $B(T_{v_L})$ and $B(T_{v_R})$ are both Dyck paths and thus each has the same number of Up and Down steps.
Moreover, the Down step directly after $B(T_{v_L})$ occurs at some step $k$ of the Dyck path and corresponds to the next time that vertex $v$ is traversed.
Consequently, the heights of $B(T)$ before step $i$ and after step $k$ are identical.
Recall that directly following the Down step at step $k$ of $B(T)$ is $B(T_{v_R})$. If $v_R$ is not a leaf, then we have \[B(T_{v_R})=UB(T_{v_{RL}})DB(T_{v_{RR}}),\] where $T_{v_{RL}}$ (resp. $T_{v_{RR}}$) is the rooted subtree of the left (resp. right) child of $v_R$.
By the same reasoning as above, the Down step following $UB(T_{v_{RL}})$ in the above Dyck path occurs at some step $l$ at the same height as step $k$ and corresponds to $v$ being traversed at step $l$.
\end{proof}
\begin{rem}
Unless otherwise specified, for the remainder of this paper we consider the bijective correspondence between rooted binary trees and Dyck paths in which the number of nodes of the tree equals the length of the Dyck path.
\end{rem}
\begin{cor}\label{cor:TraversingHeightsChild}
If a node of a rooted binary tree is traversed for the first time at step $i$ and its right child is traversed for the first time at step $j$, then the height of the corresponding Dyck path before step $i$ is the same as before step $j$.
\end{cor}
\begin{proof}
Observe that if a node traversed for the second time at step $k$ has a right child, then that right child is traversed for the first time at the next step. Combined with Lemma~\ref{lem:TraversingHeights}, this completes the proof.
\end{proof}
We can now use these results to see what a return to the horizontal line $y=0$ in the Dyck path corresponds to in the binary tree.
\begin{cor}\label{cor:ReturntoDiagonal}
The Dyck path corresponding to a rooted binary tree returns to the horizontal at step $i$ if $i$ is the last step or if a direct right descendant of the root is traversed for the first time at step $i+1$.
\end{cor}
\begin{proof}
This holds for the first right descendant of the root: if the root is traversed for the second time at step $i$, then its right child is traversed for the first time at step $i+1$.
The rest follows by induction on the right descendants.
\end{proof}
Using this result, we find that the height of the Dyck path at the corresponding step is determined by the diagonal depth of the node.
\begin{lem}\label{lem:HTraversingHeight}
If a node has diagonal depth of $h$ and is traversed for the first time on step $i$, then the height of the Dyck path before step $i$ is also $h$.
\end{lem}
\begin{proof}
We see that this is true for $h=0$.
Now assume the statement holds for all diagonal depths less than $h$, and suppose the node $v$ has diagonal depth $h$ and is traversed for the first time at step $i$.
If $v$ is the right child of a node, then the height of the path before step $i$ is the same as the height before the step at which the parent of $v$ is first traversed.
This allows us to pass to the most recent ancestor of $v$ that is a left child.
Since we are assuming $h>0$, this ancestor must exist.
Call this ancestor $w$.
By induction, the path is at height $h-1$ before the parent of $w$ is first traversed.
But the next step descends to a left child, so it must be an Up step.
This implies that immediately before $w$ is first traversed, the path is at height $h$.
It follows that the same is true for $v$, proving the statement.
\end{proof}
\begin{rem}
Lemma~\ref{lem:HTraversingHeight} has many consequences.
For one, we know that once the final node of a rooted binary tree is traversed for the first time at step $i$, every remaining step of the corresponding Dyck path is Down.
Notice that step $i$ is an Up step, so after step $i$ the path must be at height at least $k+1$, which requires the path to be at height at least $k$ before step $i$.
This implies that the final node traversed must have diagonal depth $k$.
\end{rem}
Now, we look at the nodes with diagonal depth $k-1$.
When such a node is traversed for a second time, the corresponding path takes a Down step to the line $y=k-1$.
Recall that this is the same as going below the line $y=0$ in a $k$-Dyck path.
Whenever this happens, at some point over the next $2k$ steps there must be $2$ more Up steps than Down steps.
For the tree, this corresponds to requiring that one of the next $2k-1$ nodes traversed, for either the first or second time, have diagonal depth $k$.
This completes the description of the binary trees that correspond to ascending $k$-Naples parking functions.
\begin{rem}
Notice that all observations in this section apply equally to the trees corresponding to descending $k$-Naples parking functions, except that the associated Dyck paths have no restriction on the number of steps spent under the line $y=k$.
\end{rem}
We now give another bijection involving a subset of binary trees, this time from descending strictly $k$-Naples parking functions.
For this, notice that descending strictly $k$-Naples parking functions are in bijection with Dyck paths of length $n+k$ that start with $k$ Up steps, return to the horizontal before they end, and have at least $k+1$ Up steps after the first return to the horizontal.
We already know that Dyck paths that start with $k$ Up steps, end with $k+1$ Down steps, and return to the horizontal are in bijection with descending strictly $k$-Naples parking functions, so reflecting the Dyck path after the first return gives the new result.
This can be seen in Figure \ref{fig:StrictlyKBijections}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (395.93,154.39) -- (175.47,155.18) ;
\draw (324.1,129.82) -- (336.07,142.34) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (324.08,130.76) .. controls (324.59,130.77) and (325,130.36) .. (325.01,129.83) .. controls (325.02,129.31) and (324.62,128.88) .. (324.12,128.87) .. controls (323.61,128.86) and (323.19,129.28) .. (323.19,129.8) .. controls (323.18,130.32) and (323.58,130.75) .. (324.08,130.76) -- cycle ;
\draw (336.03,142.07) -- (348.19,154.71) ;
\draw (348.19,154.71) -- (360.36,167.66) ;
\draw (360.36,167.66) -- (372.52,154.71) ;
\draw (372.52,154.71) -- (396.86,179.99) ;
\draw (238.02,141.76) -- (275.59,179.99) ;
\draw (238.02,141.76) -- (225.61,155.03) ;
\draw (225.61,155.03) -- (213.19,142.39) ;
\draw (213.19,142.39) -- (200.77,155.03) ;
\draw [color={rgb, 255:red, 22; green, 1; blue, 251 } ,draw opacity=1 ] (396.86,179.99) -- (175.93,180.3) ;
\draw (324.16,129.44) -- (299.83,154.71) ;
\draw (200.77,155.03) -- (175.93,180.3) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (187.42,167.66) .. controls (187.42,167.14) and (187.84,166.72) .. (188.35,166.72) .. controls (188.87,166.72) and (189.28,167.14) .. (189.28,167.66) .. controls (189.28,168.19) and (188.87,168.61) .. (188.35,168.61) .. controls (187.84,168.61) and (187.42,168.19) .. (187.42,167.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (237.09,141.76) .. controls (237.09,141.23) and (237.51,140.81) .. (238.02,140.81) .. controls (238.54,140.81) and (238.96,141.23) .. (238.96,141.76) .. controls (238.96,142.28) and (238.54,142.71) .. (238.02,142.71) .. controls (237.51,142.71) and (237.09,142.28) .. (237.09,141.76) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (224.67,155.03) .. controls (224.67,154.5) and (225.09,154.08) .. (225.61,154.08) .. controls (226.12,154.08) and (226.54,154.5) .. (226.54,155.03) .. controls (226.54,155.55) and (226.12,155.97) .. (225.61,155.97) .. controls (225.09,155.97) and (224.67,155.55) .. (224.67,155.03) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (199.84,155.03) .. controls (199.84,154.5) and (200.25,154.08) .. (200.77,154.08) .. controls (201.28,154.08) and (201.7,154.5) .. (201.7,155.03) .. controls (201.7,155.55) and (201.28,155.97) .. (200.77,155.97) .. controls (200.25,155.97) and (199.84,155.55) .. (199.84,155.03) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (212.25,142.39) .. controls (212.25,141.87) and (212.67,141.44) .. (213.19,141.44) .. controls (213.7,141.44) and (214.12,141.87) .. (214.12,142.39) .. controls (214.12,142.91) and (213.7,143.34) .. (213.19,143.34) .. controls (212.67,143.34) and (212.25,142.91) .. (212.25,142.39) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (312.91,142.07) .. controls (312.91,141.55) and (312.5,141.13) .. (312,141.13) .. controls (311.49,141.13) and (311.08,141.55) .. (311.08,142.07) .. controls (311.08,142.6) and (311.49,143.02) .. (312,143.02) .. controls (312.5,143.02) and (312.91,142.6) .. (312.91,142.07) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (336.94,142.07) .. controls (336.94,141.55) and (336.53,141.13) .. (336.03,141.13) .. controls (335.52,141.13) and (335.11,141.55) .. (335.11,142.07) .. controls (335.11,142.6) and (335.52,143.02) .. (336.03,143.02) .. controls (336.53,143.02) and (336.94,142.6) .. (336.94,142.07) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (349.11,154.71) .. controls (349.11,154.19) and (348.7,153.76) .. (348.19,153.76) .. controls (347.69,153.76) and (347.28,154.19) .. (347.28,154.71) .. controls (347.28,155.23) and (347.69,155.66) .. (348.19,155.66) .. controls (348.7,155.66) and (349.11,155.23) .. (349.11,154.71) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (361.27,167.66) .. controls (361.27,167.14) and (360.86,166.72) .. (360.36,166.72) .. controls (359.85,166.72) and (359.44,167.14) .. (359.44,167.66) .. controls (359.44,168.19) and (359.85,168.61) .. (360.36,168.61) .. controls (360.86,168.61) and (361.27,168.19) .. (361.27,167.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (385.6,167.35) .. controls (385.6,166.83) and (385.19,166.4) .. (384.69,166.4) .. controls (384.19,166.4) and (383.78,166.83) .. (383.78,167.35) .. controls (383.78,167.87) and (384.19,168.3) .. (384.69,168.3) .. controls (385.19,168.3) and (385.6,167.87) .. (385.6,167.35) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (373.44,154.71) .. controls (373.44,154.19) and (373.03,153.76) .. (372.52,153.76) .. controls (372.02,153.76) and (371.61,154.19) .. (371.61,154.71) .. controls (371.61,155.23) and (372.02,155.66) .. (372.52,155.66) .. controls (373.03,155.66) and (373.44,155.23) .. (373.44,154.71) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (274.66,179.99) .. controls (274.66,179.46) and (275.07,179.04) .. (275.59,179.04) .. controls (276.1,179.04) and (276.52,179.46) .. (276.52,179.99) .. controls (276.52,180.51) and (276.1,180.93) .. (275.59,180.93) .. controls (275.07,180.93) and (274.66,180.51) .. (274.66,179.99) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (261.73,166.72) .. controls (261.73,166.19) and (262.15,165.77) .. (262.67,165.77) .. controls (263.18,165.77) and (263.6,166.19) .. (263.6,166.72) .. controls (263.6,167.24) and (263.18,167.66) .. (262.67,167.66) .. controls (262.15,167.66) and (261.73,167.24) .. (261.73,166.72) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (249.61,154.39) .. controls (249.61,153.87) and (250.02,153.45) .. (250.54,153.45) .. controls (251.05,153.45) and (251.47,153.87) .. (251.47,154.39) .. controls (251.47,154.92) and (251.05,155.34) .. (250.54,155.34) .. controls (250.02,155.34) and (249.61,154.92) .. (249.61,154.39) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (300.74,154.71) .. controls (300.74,154.19) and (300.34,153.76) .. (299.83,153.76) .. controls (299.33,153.76) and (298.92,154.19) .. (298.92,154.71) .. controls (298.92,155.23) and (299.33,155.66) .. (299.83,155.66) .. controls (300.34,155.66) and (300.74,155.23) .. (300.74,154.71) -- cycle ;
\draw (299.83,154.71) -- (276.07,179.35) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (289.15,166.72) .. controls (289.15,166.19) and (288.74,165.77) .. (288.24,165.77) .. controls (287.73,165.77) and (287.32,166.19) .. (287.32,166.72) .. controls (287.32,167.24) and (287.73,167.66) .. (288.24,167.66) .. controls (288.74,167.66) and (289.15,167.24) .. (289.15,166.72) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (396.86,179.99) .. controls (396.86,179.46) and (396.44,179.04) .. (395.93,179.04) .. controls (395.42,179.04) and (395.01,179.46) .. (395.01,179.99) .. controls (395.01,180.51) and (395.42,180.93) .. (395.93,180.93) .. controls (396.44,180.93) and (396.86,180.51) .. (396.86,179.99) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (175,180.3) .. controls (175,179.78) and (175.42,179.35) .. (175.93,179.35) .. controls (176.45,179.35) and (176.87,179.78) .. (176.87,180.3) .. controls (176.87,180.83) and (176.45,181.25) .. (175.93,181.25) .. controls (175.42,181.25) and (175,180.83) .. (175,180.3) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (397.95,100.68) -- (177.49,101.47) ;
\draw (350.47,75.95) -- (338.25,88.47) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (350.49,76.9) .. controls (349.98,76.9) and (349.55,76.49) .. (349.54,75.96) .. controls (349.53,75.44) and (349.94,75.01) .. (350.46,75) .. controls (350.97,74.99) and (351.4,75.41) .. (351.41,75.93) .. controls (351.42,76.45) and (351.01,76.89) .. (350.49,76.9) -- cycle ;
\draw (338.3,88.21) -- (325.88,100.84) ;
\draw (325.88,100.84) -- (313.47,113.8) ;
\draw (313.47,113.8) -- (301.05,100.84) ;
\draw (301.05,100.84) -- (276.21,126.12) ;
\draw (238.65,87.89) -- (276.21,126.12) ;
\draw (238.65,87.89) -- (226.23,101.16) ;
\draw (226.23,101.16) -- (213.81,88.52) ;
\draw (213.81,88.52) -- (201.39,101.16) ;
\draw [color={rgb, 255:red, 22; green, 1; blue, 251 } ,draw opacity=1 ] (398.57,126.43) -- (176.55,126.43) ;
\draw (350.41,75.57) -- (375.25,100.84) ;
\draw (201.39,101.16) -- (176.55,126.43) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (188.04,113.8) .. controls (188.04,113.27) and (188.46,112.85) .. (188.97,112.85) .. controls (189.49,112.85) and (189.91,113.27) .. (189.91,113.8) .. controls (189.91,114.32) and (189.49,114.74) .. (188.97,114.74) .. controls (188.46,114.74) and (188.04,114.32) .. (188.04,113.8) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (237.71,87.89) .. controls (237.71,87.37) and (238.13,86.94) .. (238.65,86.94) .. controls (239.16,86.94) and (239.58,87.37) .. (239.58,87.89) .. controls (239.58,88.41) and (239.16,88.84) .. (238.65,88.84) .. controls (238.13,88.84) and (237.71,88.41) .. (237.71,87.89) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (225.29,101.16) .. controls (225.29,100.64) and (225.71,100.21) .. (226.23,100.21) .. controls (226.74,100.21) and (227.16,100.64) .. (227.16,101.16) .. controls (227.16,101.68) and (226.74,102.11) .. (226.23,102.11) .. controls (225.71,102.11) and (225.29,101.68) .. (225.29,101.16) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (200.46,101.16) .. controls (200.46,100.64) and (200.88,100.21) .. (201.39,100.21) .. controls (201.91,100.21) and (202.32,100.64) .. (202.32,101.16) .. controls (202.32,101.68) and (201.91,102.11) .. (201.39,102.11) .. controls (200.88,102.11) and (200.46,101.68) .. (200.46,101.16) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (212.88,88.52) .. controls (212.88,88) and (213.29,87.57) .. (213.81,87.57) .. controls (214.32,87.57) and (214.74,88) .. (214.74,88.52) .. controls (214.74,89.04) and (214.32,89.47) .. (213.81,89.47) .. controls (213.29,89.47) and (212.88,89.04) .. (212.88,88.52) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (361.9,88.21) .. controls (361.9,87.68) and (362.31,87.26) .. (362.83,87.26) .. controls (363.34,87.26) and (363.76,87.68) .. (363.76,88.21) .. controls (363.76,88.73) and (363.34,89.15) .. (362.83,89.15) .. controls (362.31,89.15) and (361.9,88.73) .. (361.9,88.21) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (337.37,88.21) .. controls (337.37,87.68) and (337.79,87.26) .. (338.3,87.26) .. controls (338.82,87.26) and (339.24,87.68) .. (339.24,88.21) .. controls (339.24,88.73) and (338.82,89.15) .. (338.3,89.15) .. controls (337.79,89.15) and (337.37,88.73) .. (337.37,88.21) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (324.95,100.84) .. controls (324.95,100.32) and (325.37,99.89) .. (325.88,99.89) .. controls (326.4,99.89) and (326.82,100.32) .. (326.82,100.84) .. controls (326.82,101.37) and (326.4,101.79) .. (325.88,101.79) .. controls (325.37,101.79) and (324.95,101.37) .. (324.95,100.84) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (312.53,113.8) .. controls (312.53,113.27) and (312.95,112.85) .. (313.47,112.85) .. controls (313.98,112.85) and (314.4,113.27) .. (314.4,113.8) .. controls (314.4,114.32) and (313.98,114.74) .. (313.47,114.74) .. controls (312.95,114.74) and (312.53,114.32) .. (312.53,113.8) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (287.7,113.48) .. controls (287.7,112.96) and (288.11,112.53) .. (288.63,112.53) .. controls (289.14,112.53) and (289.56,112.96) .. (289.56,113.48) .. controls (289.56,114) and (289.14,114.43) .. (288.63,114.43) .. controls (288.11,114.43) and (287.7,114) .. (287.7,113.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (300.11,100.84) .. controls (300.11,100.32) and (300.53,99.89) .. (301.05,99.89) .. controls (301.56,99.89) and (301.98,100.32) .. (301.98,100.84) .. controls (301.98,101.37) and (301.56,101.79) .. (301.05,101.79) .. controls (300.53,101.79) and (300.11,101.37) .. (300.11,100.84) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (275.28,126.12) .. controls (275.28,125.59) and (275.7,125.17) .. (276.21,125.17) .. controls (276.73,125.17) and (277.14,125.59) .. (277.14,126.12) .. controls (277.14,126.64) and (276.73,127.07) .. (276.21,127.07) .. controls (275.7,127.07) and (275.28,126.64) .. (275.28,126.12) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (262.36,112.85) .. controls (262.36,112.33) and (262.77,111.9) .. (263.29,111.9) .. controls (263.8,111.9) and (264.22,112.33) .. (264.22,112.85) .. controls (264.22,113.37) and (263.8,113.8) .. (263.29,113.8) .. controls (262.77,113.8) and (262.36,113.37) .. (262.36,112.85) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (250.23,100.53) .. controls (250.23,100) and (250.65,99.58) .. (251.16,99.58) .. controls (251.68,99.58) and (252.09,100) .. (252.09,100.53) .. controls (252.09,101.05) and (251.68,101.47) .. (251.16,101.47) .. controls (250.65,101.47) and (250.23,101.05) .. (250.23,100.53) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (374.31,100.84) .. controls (374.31,100.32) and (374.73,99.89) .. (375.25,99.89) .. controls (375.76,99.89) and (376.18,100.32) .. (376.18,100.84) .. controls (376.18,101.37) and (375.76,101.79) .. (375.25,101.79) .. controls (374.73,101.79) and (374.31,101.37) .. (374.31,100.84) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (175.62,126.43) .. controls (175.62,125.91) and (176.04,125.49) .. (176.55,125.49) .. controls (177.07,125.49) and (177.49,125.91) .. (177.49,126.43) .. controls (177.49,126.96) and (177.07,127.38) .. (176.55,127.38) .. controls (176.04,127.38) and (175.62,126.96) .. (175.62,126.43) -- cycle ;
\draw (375.25,100.84) -- (399.5,125.49) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (386.15,112.85) .. controls (386.15,112.33) and (386.57,111.9) .. (387.08,111.9) .. controls (387.6,111.9) and (388.01,112.33) .. (388.01,112.85) .. controls (388.01,113.37) and (387.6,113.8) .. (387.08,113.8) .. controls (386.57,113.8) and (386.15,113.37) .. (386.15,112.85) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (397.63,125.49) .. controls (397.63,124.96) and (398.05,124.54) .. (398.57,124.54) .. controls (399.08,124.54) and (399.5,124.96) .. (399.5,125.49) .. controls (399.5,126.01) and (399.08,126.43) .. (398.57,126.43) .. controls (398.05,126.43) and (397.63,126.01) .. (397.63,125.49) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (454.63,108.16) .. controls (454.63,106.79) and (455.74,105.68) .. (457.11,105.68) .. controls (458.48,105.68) and (459.58,106.79) .. (459.58,108.16) .. controls (459.58,109.53) and (458.48,110.64) .. (457.11,110.64) .. controls (455.74,110.64) and (454.63,109.53) .. (454.63,108.16) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (494.02,146.25) .. controls (494.02,144.88) and (495.13,143.77) .. (496.5,143.77) .. controls (497.87,143.77) and (498.98,144.88) .. (498.98,146.25) .. controls (498.98,147.62) and (497.87,148.73) .. (496.5,148.73) .. controls (495.13,148.73) and (494.02,147.62) .. (494.02,146.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (473.52,145.75) .. controls (473.52,144.38) and (474.63,143.27) .. (476,143.27) .. controls (477.37,143.27) and (478.48,144.38) .. (478.48,145.75) .. controls (478.48,147.12) and (477.37,148.23) .. (476,148.23) .. controls (474.63,148.23) and (473.52,147.12) .. (473.52,145.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (486.5,126.75) -- (495.98,107.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (484.02,126.75) .. controls (484.02,125.38) and (485.13,124.27) .. (486.5,124.27) .. controls (487.87,124.27) and (488.98,125.38) .. (488.98,126.75) .. controls (488.98,128.12) and (487.87,129.23) .. (486.5,129.23) .. controls (485.13,129.23) and (484.02,128.12) .. (484.02,126.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (476.58,88.65) -- (457.11,108.16) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (474.11,88.65) .. controls (474.11,87.28) and (475.22,86.17) .. (476.58,86.17) .. controls (477.95,86.17) and (479.06,87.28) .. (479.06,88.65) .. controls (479.06,90.02) and (477.95,91.13) .. (476.58,91.13) .. controls (475.22,91.13) and (474.11,90.02) .. (474.11,88.65) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (486.5,126.75) -- (496.5,146.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (476.58,88.65) -- (495.98,107.25) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (476,145.75) -- (486.5,126.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (493.5,107.25) .. controls (493.5,105.88) and (494.61,104.77) .. (495.98,104.77) .. controls (497.34,104.77) and (498.45,105.88) .. (498.45,107.25) .. controls (498.45,108.62) and (497.34,109.73) .. (495.98,109.73) .. controls (494.61,109.73) and (493.5,108.62) .. (493.5,107.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (434.52,126.25) .. controls (434.52,124.88) and (435.63,123.77) .. (437,123.77) .. controls (438.37,123.77) and (439.48,124.88) .. (439.48,126.25) .. controls (439.48,127.62) and (438.37,128.73) .. (437,128.73) .. controls (435.63,128.73) and (434.52,127.62) .. (434.52,126.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (453.42,145.34) .. controls (453.42,143.97) and (454.52,142.86) .. (455.89,142.86) .. controls (457.26,142.86) and (458.37,143.97) .. (458.37,145.34) .. controls (458.37,146.71) and (457.26,147.82) .. (455.89,147.82) .. controls (454.52,147.82) and (453.42,146.71) .. (453.42,145.34) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (437,126.25) -- (455.89,145.34) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (437,126.25) -- (457.11,108.16) ;
\draw (476,145.75) -- (466.5,165.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (464.02,165.25) .. controls (464.02,163.88) and (465.13,162.77) .. (466.5,162.77) .. controls (467.87,162.77) and (468.98,163.88) .. (468.98,165.25) .. controls (468.98,166.62) and (467.87,167.73) .. (466.5,167.73) .. controls (465.13,167.73) and (464.02,166.62) .. (464.02,165.25) -- cycle ;
\end{tikzpicture}
\caption{Bijections for descending strictly $k$-Naples parking functions.}
\label{fig:StrictlyKBijections}
\end{figure}
\begin{prop}\label{prop:SecondBinaryTree}
Descending strictly $k$-Naples parking functions are in bijection with binary trees on $n+k$ nodes satisfying the following properties: the root has at least $k-1$ left children in a row, the root has a right child, and this right child has at least $k$ left children in a row.
\end{prop}
\begin{proof}
To see this, notice that the left subtree of the root remains unchanged.
For the right subtree, the condition is equivalent to having a return to the horizontal; then, using our earlier bijection, we see that the next $k+1$ steps are Up.
There is then no restriction on the ending.
So this is indeed a bijection with the Dyck paths just mentioned, and therefore with descending strictly $k$-Naples parking functions.
An example of such a binary tree can be seen in Figure \ref{fig:StrictlyKBijections}.
\end{proof}
\section{Enumeration of Monotonic $k$-Naples Parking Functions}\label{sec:Monotonic}
In the previous two sections we found bijections between the two types of monotonic $k$-Naples parking functions, ascending and descending, and subsets of both Dyck paths and binary trees.
We now use those bijections to help us count these objects.
In particular, we give a recursive formula for the number of ascending $k$-Naples parking functions and a closed formula for their descending counterparts.
We also give results about the generating functions for the sequences corresponding to these objects.
Throughout this section $C(x)$ is the generating function for the Catalan numbers, and $C_k$ is the $k$th Catalan number.
\begin{defn}\label{defn:Convolution}
Given sequences $(a_n)=a_0,a_1,a_2,\dots$ with generating function $A(x)=a_0+a_1x+a_2x^2+\cdots$ and $(b_n)=b_0,b_1,b_2,\dots$ with generating function $B(x)=b_0+b_1x+b_2x^2+\cdots$, the \textit{convolution} of $(a_n)$ and $(b_n)$ is the sequence $(g_n)$ defined by $g_n=\sum_{i=0}^na_ib_{n-i}$, whose generating function is $G(x)=A(x)B(x)$.
If $B(x)=A(x)$, we write $G(x)=A^2(x)$ and similarly extend to higher exponents.
\end{defn}
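As a concrete instance of Definition~\ref{defn:Convolution}, the classical identity $C_{n+1}=\sum_{i=0}^n C_iC_{n-i}$ says that the convolution of the Catalan sequence with itself is the shifted Catalan sequence. A short numeric sketch (function names are illustrative):

```python
# Convolution of two sequences, g_n = sum_{i=0}^n a_i b_{n-i},
# checked on the Catalan convolution identity C_{n+1} = sum C_i C_{n-i}.
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def convolve(a, b):
    """Term-by-term convolution; its generating function is A(x)B(x)."""
    n = min(len(a), len(b))
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

cats = [catalan(n) for n in range(8)]
assert convolve(cats, cats) == [catalan(n + 1) for n in range(8)]
```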
\begin{thm}\label{thm:RecurrenceAscending}
Let $I_{n,k}$ denote the number of ascending $k$-Naples parking functions of length $n$, and let $U_{n,k}$ denote the number of ascending $k$-Naples parking functions of length $n$ which start with~$1$.
For $n-1\geq k\geq1$ and $n\geq0$, we have
\begin{align}
I_{n,k} &= I_{n,k-1}+C_k\sum^{n-k}_{i=0} I_{i,k-1}\,U_{n-k-i,k}\text{ and}\\
U_{n,k} &= U_{n,k-1}+C_k\sum^{n-k}_{i=0} U_{i,k-1}\,U_{n-k-i,k}. \label{eq:5.2}
\end{align}
\end{thm}
\begin{rem}
Note that $I_{n,0} = C_n$, that $U_{0,k} = 0$ for all $k$, and that $U_{n,0} = C_n$ for $n\geq1$.
Further observe that the $k$-Naples parking functions which start with $1$ correspond to $k$-Dyck paths that start with an Up step.
In Theorem \ref{thm:RecurrenceAscending}, if $n\leq k$, then both summations vanish.
This corresponds to the fact that, for a fixed length $n$, no new $k$-Naples parking functions arise once $k$ is large enough.
\end{rem}
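The recurrences of Theorem~\ref{thm:RecurrenceAscending} are straightforward to implement with memoization, using the base cases from the remark above. The sketch below is illustrative only (function names are hypothetical); the small values $I_{2,1}=3$ and $I_{3,1}=8$ agree with direct enumeration of ascending $1$-Naples parking functions, e.g. $(2,2)$ and $(2,2,2)$ are $1$-Naples but not ordinary parking functions.

```python
# Memoized sketch of the recurrences for I_{n,k} and U_{n,k},
# with base cases I_{n,0} = C_n, U_{0,k} = 0, and U_{n,0} = C_n for n >= 1.
from functools import lru_cache
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

@lru_cache(maxsize=None)
def U(n, k):
    if n == 0:
        return 0
    if k == 0:
        return catalan(n)
    # Empty range when n < k, matching the remark above.
    return U(n, k - 1) + catalan(k) * sum(
        U(i, k - 1) * U(n - k - i, k) for i in range(n - k + 1))

@lru_cache(maxsize=None)
def I(n, k):
    if k == 0:
        return catalan(n)
    return I(n, k - 1) + catalan(k) * sum(
        I(i, k - 1) * U(n - k - i, k) for i in range(n - k + 1))

assert all(I(n, 0) == catalan(n) for n in range(8))  # k = 0 recovers Catalan
assert I(2, 1) == 3 and I(3, 1) == 8                 # hand-enumerated values
assert I(3, 6) == I(3, 2)                            # counts stabilize once k >= n
```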
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (370.56,172.25) -- (130.94,172.75) ;
\draw [color={rgb, 255:red, 139; green, 87; blue, 42 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (370.56,172.25) -- (351.09,152.25) ;
\draw [color={rgb, 255:red, 139; green, 87; blue, 42 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (351.09,152.25) -- (331.12,172.25) ;
\draw [color={rgb, 255:red, 139; green, 87; blue, 42 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (291.19,172.25) -- (311.23,151.75) ;
\draw [color={rgb, 255:red, 139; green, 87; blue, 42 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (311.23,151.75) -- (331.12,172.25) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (291.19,172.25) -- (271.22,192.25) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (210.97,171.75) -- (230.47,191.25) ;
\draw (150.91,153.25) -- (170.87,132.25) ;
\draw (170.87,132.25) -- (190.84,152.25) ;
\draw (150.91,152.75) -- (130.94,172.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (189.34,151.75) .. controls (189.34,150.92) and (190.01,150.25) .. (190.84,150.25) .. controls (191.67,150.25) and (192.34,150.92) .. (192.34,151.75) .. controls (192.34,152.58) and (191.67,153.25) .. (190.84,153.25) .. controls (190.01,153.25) and (189.34,152.58) .. (189.34,151.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (169.37,132.25) .. controls (169.37,131.42) and (170.05,130.75) .. (170.87,130.75) .. controls (171.7,130.75) and (172.37,131.42) .. (172.37,132.25) .. controls (172.37,133.08) and (171.7,133.75) .. (170.87,133.75) .. controls (170.05,133.75) and (169.37,133.08) .. (169.37,132.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (129.44,172.75) .. controls (129.44,171.92) and (130.11,171.25) .. (130.94,171.25) .. controls (131.77,171.25) and (132.44,171.92) .. (132.44,172.75) .. controls (132.44,173.58) and (131.77,174.25) .. (130.94,174.25) .. controls (130.11,174.25) and (129.44,173.58) .. (129.44,172.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (149.41,152.75) .. controls (149.41,151.92) and (150.08,151.25) .. (150.91,151.25) .. controls (151.73,151.25) and (152.41,151.92) .. (152.41,152.75) .. controls (152.41,153.58) and (151.73,154.25) .. (150.91,154.25) .. controls (150.08,154.25) and (149.41,153.58) .. (149.41,152.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (369.06,172.25) .. controls (369.06,171.42) and (369.73,170.75) .. (370.56,170.75) .. controls (371.39,170.75) and (372.06,171.42) .. (372.06,172.25) .. controls (372.06,173.08) and (371.39,173.75) .. (370.56,173.75) .. controls (369.73,173.75) and (369.06,173.08) .. (369.06,172.25) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (349.59,152.25) .. controls (349.59,151.42) and (350.26,150.75) .. (351.09,150.75) .. controls (351.92,150.75) and (352.59,151.42) .. (352.59,152.25) .. controls (352.59,153.08) and (351.92,153.75) .. (351.09,153.75) .. controls (350.26,153.75) and (349.59,153.08) .. (349.59,152.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (329.62,172.25) .. controls (329.62,171.42) and (330.3,170.75) .. (331.12,170.75) .. controls (331.95,170.75) and (332.62,171.42) .. (332.62,172.25) .. controls (332.62,173.08) and (331.95,173.75) .. (331.12,173.75) .. controls (330.3,173.75) and (329.62,173.08) .. (329.62,172.25) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 0; blue, 0 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (309.73,151.75) .. controls (309.73,150.92) and (310.4,150.25) .. (311.23,150.25) .. controls (312.06,150.25) and (312.73,150.92) .. (312.73,151.75) .. controls (312.73,152.58) and (312.06,153.25) .. (311.23,153.25) .. controls (310.4,153.25) and (309.73,152.58) .. (309.73,151.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (289.69,172.25) .. controls (289.69,171.42) and (290.36,170.75) .. (291.19,170.75) .. controls (292.01,170.75) and (292.69,171.42) .. (292.69,172.25) .. controls (292.69,173.08) and (292.01,173.75) .. (291.19,173.75) .. controls (290.36,173.75) and (289.69,173.08) .. (289.69,172.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (209.47,171.75) .. controls (209.47,170.92) and (210.14,170.25) .. (210.97,170.25) .. controls (211.8,170.25) and (212.47,170.92) .. (212.47,171.75) .. controls (212.47,172.58) and (211.8,173.25) .. (210.97,173.25) .. controls (210.14,173.25) and (209.47,172.58) .. (209.47,171.75) -- cycle ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (230.47,191.29) -- (249.87,170.25) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (249.87,170.25) -- (271.22,192.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (248.5,171.75) .. controls (248.5,172.58) and (249.17,173.25) .. (250,173.25) .. controls (250.83,173.25) and (251.5,172.58) .. (251.5,171.75) .. controls (251.5,170.92) and (250.83,170.25) .. (250,170.25) .. controls (249.17,170.25) and (248.5,170.92) .. (248.5,171.75) -- cycle ;
\draw (191.47,152.25) -- (210.97,171.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (269.72,192.25) .. controls (269.72,191.42) and (270.39,190.75) .. (271.22,190.75) .. controls (272.05,190.75) and (272.72,191.42) .. (272.72,192.25) .. controls (272.72,193.08) and (272.05,193.75) .. (271.22,193.75) .. controls (270.39,193.75) and (269.72,193.08) .. (269.72,192.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (228.97,191.29) .. controls (228.97,190.46) and (229.65,189.79) .. (230.47,189.79) .. controls (231.3,189.79) and (231.97,190.46) .. (231.97,191.29) .. controls (231.97,192.12) and (231.3,192.79) .. (230.47,192.79) .. controls (229.65,192.79) and (228.97,192.12) .. (228.97,191.29) -- cycle ;
\draw [color={rgb, 255:red, 0; green, 115; blue, 255 } ,draw opacity=1 ] (210.72,159) -- (211.22,187.5) ;
\draw [color={rgb, 255:red, 0; green, 115; blue, 255 } ,draw opacity=1 ] (291.22,157.88) -- (291.15,186.63) ;
\end{tikzpicture}
\caption{Breakdown for the recurrence for ascending $k$-Naples parking functions.}
\label{fig:AscendingRecurrence}
\end{figure}
\begin{proof}
For $I_{n,k}$, we need to find the new ascending $k$-Naples parking functions which are not represented in $I_{n,k-1}$ and add the two together.
Recall that $I_{n,0} = C_n$ when $n>0$, giving the base for our recurrence.
From Theorem \ref{thm:AscendingCharacterization}, we know that an ascending $k$-Naples parking function of length $n$ that is not a $(k-1)$-Naples parking function must have a corresponding Dyck path that is below the horizontal for exactly $2k$ steps.
Let there be $2i$ steps before the point where the path goes below the horizontal for $2k$ steps.
Exactly $i$ of these are Up steps, since the path must be back at the horizontal at that point.
Also, notice that the step just before this point is a Down step, as otherwise the path would be below the horizontal for at least $2k+2$ steps.
So the number of ways to reach this point is the number of ascending $(k-1)$-Naples parking functions of length $i$, recalling that the last step of the $k$-Dyck paths corresponding to ascending Naples parking functions must be a Down step.
So, we see there are $I_{i,k-1}$ such paths.
This argument corresponds to the first section in Figure \ref{fig:AscendingRecurrence}.
Now, once the path has gone below the horizontal, it must stay there for $2k$ steps.
At this point, it must return to the horizontal.
Flipping this portion across the horizontal gives a regular Dyck path with $2k$ steps, leading to $C_k$ possibilities.
This argument corresponds to the second section in Figure \ref{fig:AscendingRecurrence}.
Now, the path is at the horizontal after $2(i+k)$ steps.
It must then go up so as to not stay below the horizontal for too long.
But we see the final section of length $n-i-k$ starting with an Up step has $U_{n-i-k, k}$ options, which can be seen in the third section of Figure \ref{fig:AscendingRecurrence}.
Summing over all $i$ gives $I_{n,k} = I_{n,k-1}+\sum^{n-k}_{i=0} (I_{i,k-1})(C_k)(U_{n-k-i,k})$.
Notice that the first term in the sum represents the paths to the horizontal, the second term the paths under the horizontal, and the third the ending paths which must start with an Up step.
Also notice the $C_k$ is constant and can be pulled out, giving $I_{n,k-1}+C_k\sum^{n-k}_{i=0} (I_{i,k-1})(U_{n-k-i,k})$.
For $U_{n,k}$, we follow a similar argument.
Again, recall that $U_{0,0} = 0$ and that otherwise $U_{n,0} = C_n$.
This gives the base for our recurrence.
We now find the new paths that are $k$-Naples parking functions and not $(k-1)$-Naples parking functions.
We see that eventually the corresponding $k$-Dyck path must be under the horizontal for $2k$ steps, then must have an Up step to above the horizontal, and finally it finishes the path.
So, similarly as above, this gives $U_{n,k} = U_{n,k-1}+\sum^{n-k}_{i=0} (U_{i,k-1})(C_k)(U_{n-k-i,k})$ which can be rearranged to give
$$U_{n,k-1}+C_k\sum^{n-k}_{i=0} (U_{i,k-1})(U_{n-k-i,k}).$$
Also note that since $U_{0,k} = 0$, the indices can start at $i=1$.
We see these sums only depend on smaller values of $n$ and $k$.
Since we know the values of $I_{n,0}$ and $U_{n,0}$, we may find the value of $U_{n,k}$ independently of $I_{n,k}$ and vice versa.
So this gives a recurrence formula for the ascending $k$-Naples parking functions as well as for the ascending $k$-Naples parking functions which start with $1$.
\end{proof}
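As a sanity check (not part of the proof), the recurrences of Theorem \ref{thm:RecurrenceAscending} can be implemented directly with memoization; the following Python sketch is ours, for illustration only:

```python
from functools import lru_cache
from math import comb

def catalan(n):
    # n-th Catalan number C_n
    return comb(2 * n, n) // (n + 1)

@lru_cache(maxsize=None)
def U(n, k):
    # ascending k-Naples parking functions of length n that start with 1
    if k == 0:
        return 0 if n == 0 else catalan(n)
    # U_{n,k} = U_{n,k-1} + C_k * sum_{i=0}^{n-k} U_{i,k-1} U_{n-k-i,k}
    return U(n, k - 1) + catalan(k) * sum(
        U(i, k - 1) * U(n - k - i, k) for i in range(n - k + 1))

@lru_cache(maxsize=None)
def I(n, k):
    # all ascending k-Naples parking functions of length n
    if k == 0:
        return catalan(n)  # I_{n,0} = C_n
    # I_{n,k} = I_{n,k-1} + C_k * sum_{i=0}^{n-k} I_{i,k-1} U_{n-k-i,k}
    return I(n, k - 1) + catalan(k) * sum(
        I(i, k - 1) * U(n - k - i, k) for i in range(n - k + 1))
```

For $n\leq k$ both sums are empty, so no new parking functions are counted, in line with the remark above.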
Given this recursive formula, we can see a connection between the $1$-Naples paths that start with an Up step and the Fine numbers, an integer sequence closely related to the Catalan numbers.
Many interpretations of the Fine Number sequence can be found in \cite{ShapiroFineNumber}.
\begin{thm}\label{thm:ClosedStartsUp}
For $n\geq 0$, we have $U_{n,1}=F_{n+1},$ where $F_{n+1}$ denotes the $(n+1)$th Fine Number (\textcolor{blue}{\href{https://oeis.org/A000957}{A000957}}).
\end{thm}
\begin{proof}
Let $U_1(x)$ represent the ordinary generating function for $U_{n,1},$ $U_0(x)$ represent the one for $U_{n,0},$ $C(x)$ the one for the Catalan numbers, and $F(x)$ the one for the Fine numbers.
It suffices to prove that $U_1(x)=\frac{F(x)-1}{x}$ (excluding $F_1$ and reindexing).
We know that $U_{n,0}=C_n$ for all $n>0,$ and $U_{0,0}=0=C_0-1;$ so we have $U_0(x)=C(x)-1.$ Now, for any given $n>0$ (and $k=1$), from \eqref{eq:5.2} we have
\begin{align*}
U_{n,1}&=U_{n,0}+C_1\sum_{i=0}^{n-1}U_{i,0}U_{n-i-1,1}\\
x^nU_{n,1}&=x^nU_{n,0}+x^nC_1\sum_{i=0}^{n-1}U_{i,0}U_{n-i-1,1}\\
x^nU_{n,1}&=x^nU_{n,0}+x\sum_{i=0}^{n-1}x^iU_{i,0}x^{n-i-1}U_{n-i-1,1}.\\
\end{align*}
Adding these equations for all $n>0$ we get
\[ \sum_{n=1}^{\infty}x^nU_{n,1}=\sum_{n=1}^{\infty}x^nU_{n,0}+x\sum_{n=1}^{\infty}\sum_{i=0}^{n-1}x^iU_{i,0}x^{n-i-1}U_{n-i-1,1}.\]
Noticing that $\sum_{i=0}^{n-1}x^iU_{i,0}x^{n-i-1}U_{n-i-1,1}$ is the $(n-1)$th term in the convolution of $U_{n,0}$ and $U_{n,1},$ that is, the coefficient of $x^{n-1}$ in $U_1(x)U_0(x)$, we have
\begin{align*}
U_1(x)-U_{0,1}&=U_0(x)-U_{0,0}+xU_1(x)U_0(x)\\
U_1(x)-0&=U_0(x)-0+xU_1(x)U_0(x)\\
U_1(x)&=U_0(x)+xU_1(x)U_0(x)\\
U_1(x)&=C(x)-1+xU_1(x)(C(x)-1)\\
U_1(x)&=\frac{C(x)-1}{1+x-xC(x)}.
\end{align*}
But we know that $F(x)=\frac{1}{1-x^2C^2(x)},$ and that $xC^2(x)-C(x)+1=0,$ so \begin{align*}
U_1(x)&=\frac{C(x)-1}{1+x-xC(x)}\\
&=\frac{x(C(x)-1)}{x(1+x-xC(x))}\\
&=\frac{1-(1+x-xC(x))}{x(1+x-xC(x))}\\
&=\frac{1}{x(1+x-xC(x))}-\frac{1}{x}\\
&=\frac{1}{x(1-x(C(x)-1))}-\frac{1}{x}\\
&=\frac{1}{x(1-x^2C^2(x))}-\frac{1}{x}\\
&=\frac{F(x)}{x}-\frac{1}{x}=\frac{F(x)-1}{x}.\qedhere
\end{align*}
\end{proof}
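The chain of identities in this proof can be double-checked with truncated power series; the following sketch (ours, for illustration) verifies numerically that $(C(x)-1)/(1+x-xC(x))$ and $(F(x)-1)/x$ agree coefficient by coefficient, with $F(x)=1/(1-x^2C^2(x))$:

```python
from math import comb

N = 12  # truncation degree

def mul(a, b):
    # product of two power series, truncated to degree N-1
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def inv(a):
    # multiplicative inverse of a power series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[i] * b[n - i] for i in range(1, n + 1))
    return b

C = [comb(2 * n, n) // (n + 1) for n in range(N)]   # C(x)
x2C2 = [0, 0] + mul(C, C)[:N - 2]                   # x^2 C(x)^2
F = inv([1] + [-t for t in x2C2[1:]])               # F(x) = 1/(1 - x^2 C^2(x))

num = [0] + C[1:]                                   # C(x) - 1
den = [1] + [(1 if n == 1 else 0) - C[n - 1] for n in range(1, N)]  # 1 + x - xC(x)
U1 = mul(num, inv(den))                             # (C(x)-1)/(1+x-xC(x))
```

One checks that `U1[n] == F[n + 1]` for all computed coefficients, matching $U_1(x)=\frac{F(x)-1}{x}$.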
\begin{thm}\label{thm:ClosedAscending}
For $n\geq 0,$ $I_{n,1}=CF_n,$ where $CF$ is the convolution of the Catalan numbers with the Fine numbers (\textcolor{blue}{\href{https://oeis.org/A000958}{A000958}}).
\end{thm}
\begin{proof}
Let $I_1(x)$ be the ordinary generating function for $I_{n,1}$ and $I_0(x)$ be the one for $I_{n,0}$.
Since $I_{0,1}=I_{0,0}$ and $I_0(x)=C(x)$ we get, by a very similar argument as Theorem \ref{thm:ClosedStartsUp},
\begin{align*}
I_1(x)&=I_0(x)+xI_0(x)U_1(x)\\
&=C(x)+xC(x)\frac{F(x)-1}{x}\\
&=C(x)+C(x)F(x)-C(x)=C(x)F(x).
\end{align*}
\end{proof}
\begin{thm}\label{thm:RecurrenceAscendingFunctional}
Let $U_k(x)$ represent the ordinary generating function for $U_{n,k}$ (with $k$ fixed), and define $U_{k-1}(x)$, $I_k(x)$, and $I_{k-1}(x)$ similarly, with $C_k$ being the $k$th Catalan number.
Then
\begin{align}
I_k(x)&=I_{k-1}(x)+C_kx^kI_{k-1}(x)U_k(x)\text{ and}\\
U_k(x)&=U_{k-1}(x)+C_kx^kU_{k-1}(x)U_k(x).
\end{align}
\end{thm}
\begin{proof}
The idea of the proof is identical to that of Theorem \ref{thm:ClosedAscending}, except that for a given $k$ we may only use the recurrence relations for $n\geq k$; otherwise the convolution sum is meaningless.
This means that we never add the coefficients of degree smaller than $k$ for $I_k(x)$ and $I_{k-1}(x)$ in (5.3), and likewise for $U$ in (5.4).
But this is not an issue: for $n<k$ any ascending parking preference is $(k-1)$-Naples (and thus also $k$-Naples), so the coefficients are the same, and adding them to both sides of the equation does not change the result; hence we can use the exact same reasoning as for Theorem \ref{thm:ClosedStartsUp}.
\end{proof}
For $k\geq 2,$ the generating functions become increasingly cumbersome to work with when seeking a closed formula.
We now use the bijective results between Dyck paths and Binary Trees to find a closed formula for the number of descending strictly $k$-Naples parking functions.
First, we give a closed formula for terms of the convolution of Catalan numbers.
\begin{lem}[Lemma 9, \cite{convolution}]\label{ClosedCOnvolutionFormula}
The $m$th term of the $r$th convolution of the Catalan numbers is given by $\frac{r}{2m-r}\binom{2m-r}{m}$.
\end{lem}
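As a concrete reading of this lemma (the indexing used in the next proof), the $m$th term of the $r$th convolution is the coefficient of $x^{m-r}$ in $C(x)^r$; the following sketch of ours checks the formula under that convention:

```python
from math import comb

def catalan(n):
    # n-th Catalan number
    return comb(2 * n, n) // (n + 1)

def conv_power(r, N):
    # coefficients of C(x)^r up to degree N-1
    cat = [catalan(n) for n in range(N)]
    cur = [1] + [0] * (N - 1)
    for _ in range(r):
        nxt = [0] * N
        for i in range(N):
            for j in range(N - i):
                nxt[i + j] += cur[i] * cat[j]
        cur = nxt
    return cur

# m-th term of the r-th convolution = [x^{m-r}] C(x)^r = r/(2m-r) * binom(2m-r, m)
for r in range(1, 5):
    coeffs = conv_power(r, 10)
    for j in range(10):
        m = j + r
        assert coeffs[j] * (2 * m - r) == r * comb(2 * m - r, m)
```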
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (82.16,32.66) .. controls (82.16,31.29) and (83.27,30.18) .. (84.63,30.18) .. controls (86,30.18) and (87.11,31.29) .. (87.11,32.66) .. controls (87.11,34.03) and (86,35.14) .. (84.63,35.14) .. controls (83.27,35.14) and (82.16,34.03) .. (82.16,32.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (115,51.75) -- (125.98,31.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (112.52,51.75) .. controls (112.52,50.38) and (113.63,49.27) .. (115,49.27) .. controls (116.37,49.27) and (117.48,50.38) .. (117.48,51.75) .. controls (117.48,53.12) and (116.37,54.23) .. (115,54.23) .. controls (113.63,54.23) and (112.52,53.12) .. (112.52,51.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (106.58,13.15) -- (84.63,32.66) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (104.11,13.15) .. controls (104.11,11.78) and (105.22,10.67) .. (106.58,10.67) .. controls (107.95,10.67) and (109.06,11.78) .. (109.06,13.15) .. controls (109.06,14.52) and (107.95,15.63) .. (106.58,15.63) .. controls (105.22,15.63) and (104.11,14.52) .. (104.11,13.15) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (106.58,13.15) -- (125.98,31.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (123.5,31.75) .. controls (123.5,30.38) and (124.61,29.27) .. (125.98,29.27) .. controls (127.34,29.27) and (128.45,30.38) .. (128.45,31.75) .. controls (128.45,33.12) and (127.34,34.23) .. (125.98,34.23) .. controls (124.61,34.23) and (123.5,33.12) .. (123.5,31.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (101.52,70.25) .. controls (101.52,68.88) and (102.63,67.77) .. (104,67.77) .. controls (105.37,67.77) and (106.48,68.88) .. (106.48,70.25) .. controls (106.48,71.62) and (105.37,72.73) .. (104,72.73) .. controls (102.63,72.73) and (101.52,71.62) .. (101.52,70.25) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (104,70.25) -- (115,51.75) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (92.98,89.75) -- (97.98,89.75) -- (97.98,94.75) -- (92.98,94.75) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (122.5,70) -- (127.5,70) -- (127.5,75) -- (122.5,75) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (72.63,49.16) -- (77.63,49.16) -- (77.63,54.16) -- (72.63,54.16) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (111,90) -- (116,90) -- (116,95) -- (111,95) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (92,49) -- (97,49) -- (97,54) -- (92,54) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (133,49) -- (138,49) -- (138,54) -- (133,54) -- cycle ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (104,70.25) -- (95.48,88.25) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (115,51.75) -- (125,68.75) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (135.5,46.75) -- (125.98,31.75) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (84.63,32.66) -- (75,48.25) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (84.63,32.66) -- (94.5,48.75) ;
\draw [dash pattern={on 0.84pt off 2.51pt}] (104,70.25) -- (113.5,88.25) ;
\end{tikzpicture}
\caption{Places to put binary trees on the tree corresponding to descending strictly $k$-Naples parking functions.}
\label{fig:PlacingBinaryTrees}
\end{figure}
\begin{thm}\label{thm:NewDescendingNaples}
The number of descending strictly $k$-Naples parking functions of length $n$ is $$\frac{k+1}{n}\binom{2n}{n+k+1}.$$
\end{thm}
\begin{proof}
First, notice that if $n\leq k$, then every parking preference of length $n$ is already a $(k-1)$-Naples parking function, since each car can back up all the way to the first spot if need be; so there are no strictly $k$-Naples parking functions when $n-k-1<0$.
Now, we use the bijection given in Proposition \ref{prop:SecondBinaryTree}.
We know that the root has $k-1$ direct left descendants and a right child, and that right child has at least $k$ direct left descendants.
This uses $1+(k-1)+1+k = 2k+1$ total nodes.
So, there are $n-k-1$ nodes left.
Each of the $k-1$ direct left descendants of the root has one place to attach another (possibly empty) binary tree, and the last of them has two.
The right child of the root also has one place, as does each of its $k$ direct left descendants.
Again, the final such node has an extra one.
This gives $k+(k+2) = 2k+2$ total places at which to attach (possibly empty) binary trees.
Since the number of binary trees with $i$ nodes corresponds to the $i$th Catalan number \cite{Stanley}, we see this gives the number of descending strictly $k$-Naples parking functions as $$\sum_{i_1+i_2+\cdots+i_{2k+2}=n-k-1} C_{i_1}C_{i_2}\cdots C_{i_{2k+2}}.$$
This is the $(2k+2)$nd convolution of the Catalan numbers.
Since the indices sum to $n-k-1$, the number of descending strictly $k$-Naples parking functions of length $n$ is the $(n+k+1)$th term of the $(2k+2)$nd convolution of the Catalan numbers.
In other words, the corresponding generating function is $x^{k+1}C(x)^{2k+2}$, where we recall that $C(x)$ is the generating function for the Catalan numbers.
We see that plugging $m = n+k+1$ and $r = 2k+2$ into the formula from Lemma \ref{ClosedCOnvolutionFormula} gives $\frac{k+1}{n}\binom{2n}{n+k+1}$, as we wanted.
\end{proof}
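The closed formula can be checked against the convolution description directly; this illustrative sketch (ours, not part of the proof) compares $\frac{k+1}{n}\binom{2n}{n+k+1}$ with the coefficient of $x^{n-k-1}$ in $C(x)^{2k+2}$ for small $n$ and $k$:

```python
from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def strict_closed(n, k):
    # closed formula (k+1)/n * binom(2n, n+k+1); the binomial vanishes for n <= k
    return (k + 1) * comb(2 * n, n + k + 1) // n

def strict_conv(n, k):
    # coefficient of x^{n-k-1} in C(x)^{2k+2}, i.e. the number of ordered
    # (2k+2)-tuples of binary trees with n-k-1 nodes in total
    if n - k - 1 < 0:
        return 0
    N = n  # degrees 0..n-1 suffice
    cat = [catalan(i) for i in range(N)]
    cur = [1] + [0] * (N - 1)
    for _ in range(2 * k + 2):
        nxt = [0] * N
        for i in range(N):
            for j in range(N - i):
                nxt[i + j] += cur[i] * cat[j]
        cur = nxt
    return cur[n - k - 1]

assert all(strict_closed(n, k) == strict_conv(n, k)
           for n in range(1, 9) for k in range(4))
```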
\begin{lem}[Lemma 8, \cite{convolution}]
Let $1\leq q\leq p\leq 2q-1$. Then
$$\sum_{i\geq0}C_i\binom{p-1-2i}{q-1-i} = \binom{p}{q} .$$
\end{lem}
\begin{lem}\label{lem:lemma9}
Let $G(x) = \frac{C^2(x)}{1-xC^2(x)}$ be the generating function for $\binom{2n+1}{n}$ (\textcolor{blue}{\href{https://oeis.org/A001700}{A001700}}).
Then $G(x)(C(x)-1)^{k}$ is the generating function for $\binom{2n+1}{n+k+1}.$
\end{lem}
\begin{proof}
For $k=1$, $G(x)(C(x)-1)$ is the generating function for the convolution $\sum_{i=1}^n\binom{2(n-i)+1}{n-i+1}C_i$, where we use the closed formula for $G$, and the lower bound of the sum comes from the fact that we are dealing with the Catalan numbers without the zeroth term.
Then
\begin{align*}
\sum_{i=1}^n\binom{2(n-i)+1}{n-i+1}C_i&= \sum_{i=0}^n\binom{2(n-i)+1}{n-i+1}C_i - \binom{2n+1}{n+1}\\
&= \sum_{i=0}^n\binom{(2n+2)-2i-1}{(n+2)-i-1}C_i - \binom{2n+1}{n+1}.
\end{align*}
Now, by Lemma 8 of \cite{convolution} with $p = 2n+2$ and $q=n+2$, we see that $$\binom{2n+2}{n+2} - \binom{2n+1}{n+1} = \binom{2n+2}{n} - \binom{2n+1}{n} = \binom{2n+1}{n-1} = \binom{2n+1}{n+2},$$ using Pascal's rule and the symmetry of binomial coefficients.
This gives the formula for $k=1$; the case $k=0$ is just the definition of $G$.
Now, assume the statement holds for $k$. This gives the convolution for $$G(x)(C(x)-1)^{k+1} = G(x)(C(x)-1)^{k}(C(x)-1)$$ as $\sum_{i=1}^n\binom{2(n-i)+1}{n-i+k+1}C_i$.
Following the same reasoning as before, we see this is equivalent to $$\sum_{i=0}^n\binom{(2n+2)-2i-1}{(n+k+2)-i-1}C_i - \binom{2n+1}{n+k+1}.$$
Again, by Lemma 8 of \cite{convolution} with $p = 2n+2$ and $q = n+k+2$, this gives $\binom{2n+2}{n+k+2} - \binom{2n+1}{n+k+1}$, which equals $\binom{2n+1}{n+k+2}$, that is, $\binom{2n+1}{n+(k+1)+1}$.
\end{proof}
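Lemma \ref{lem:lemma9} can likewise be verified with truncated power series; this sketch (ours, for illustration) checks that $G(x)(C(x)-1)^k$ generates $\binom{2n+1}{n+k+1}$ for small $k$:

```python
from math import comb

N = 12  # truncation degree

def mul(a, b):
    # product of two power series, truncated to degree N-1
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def inv(a):
    # multiplicative inverse of a power series with constant term 1
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[i] * b[n - i] for i in range(1, n + 1))
    return b

C = [comb(2 * n, n) // (n + 1) for n in range(N)]     # C(x)
xC2 = [0] + mul(C, C)[:N - 1]                         # x C(x)^2
G = mul(mul(C, C), inv([1] + [-t for t in xC2[1:]]))  # C^2(x)/(1 - xC^2(x))

Cm1 = [0] + C[1:]                                     # C(x) - 1
P = G
for k in range(4):
    # G(x)(C(x)-1)^k should generate binom(2n+1, n+k+1)
    assert all(P[n] == comb(2 * n + 1, n + k + 1) for n in range(N))
    P = mul(P, Cm1)
```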
\begin{cor}\label{cor:AllDescendingNaples}
The total number of descending $k$-Naples parking functions of length $n$ is $\binom{2n-1}{n} - \binom{2n-1}{n+k+1}$.
\end{cor}
\begin{proof}
From Theorem \ref{thm:NewDescendingNaples}, we know the generating function for descending strictly $k$-Naples parking functions is $x^{k+1}C^{2k+2}(x) = (xC^2(x))^{k+1}$ where $C(x)$ is the generating function for the Catalan numbers.
Now sum over all values up to $k$.
Let us call the generating function for this $D(x)$; we see that
\begin{align*}
D(x) &= \sum_{i=0}^k(xC^2(x))^{i+1}\\
&= xC^2(x)\sum_{i=0}^k(xC^2(x))^{i}\\
&= xC^2(x)\left(\frac{1-(xC^2(x))^{k+1}}{1-xC^2(x)}\right).
\end{align*}
Now, let $G(x) = \frac{C^2(x)}{1-xC^2(x)}$, which is the generating function for $\binom{2n+1}{n}$. Then we see that $D(x) = xG(x) - x^{k+2}G(x)C^{2k+2}(x)$.
Since we know a closed formula for $xG(x)$, we need to find one for $x^{k+2}G(x)C^{2k+2}(x) = xG(x)(C(x)-1)^{k+1}$.
By Lemma \ref{lem:lemma9}, we have closed formulas for the sequences for which $G(x)$ and $G(x)(C(x)-1)^{k+1}$ are the generating functions.
To find a closed formula for the sequence with generating function $D(x) = xG(x) - xG(x)(C(x)-1)^{k+1}$, we simply combine these and offset by one to account for the extra factor of $x$. This gives $\binom{2n-1}{n} - \binom{2n-1}{n+k+1}$, as desired.
\end{proof}
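Finally, Corollary \ref{cor:AllDescendingNaples} can be cross-checked against Theorem \ref{thm:NewDescendingNaples} by summing the strict counts; the sketch below is ours, for illustration:

```python
from math import comb

def strict_count(n, j):
    # closed formula for descending strictly j-Naples parking functions
    # of length n, from the theorem above
    return (j + 1) * comb(2 * n, n + j + 1) // n

def total_descending(n, k):
    # corollary: all descending k-Naples parking functions of length n
    return comb(2 * n - 1, n) - comb(2 * n - 1, n + k + 1)

# every descending k-Naples parking function is strictly j-Naples
# for exactly one j with 0 <= j <= k
for n in range(1, 10):
    for k in range(5):
        assert total_descending(n, k) == sum(strict_count(n, j) for j in range(k + 1))
```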
\section{Other Bijections}\label{sec:OtherBijections}
Now that we have a bijection between descending strictly $k$-Naples parking functions and a subset of binary trees, we can use it to find other bijections.
Specifically, we demonstrate bijections to subsets of $r$-in-$s$ dissections and of $r$-rooted non-crossing set partitions.
These generalize the bijections between descending parking functions of length $n$ and triangulations of $(n+2)$-gons, and between such parking functions and non-crossing set partitions of $[n]$.
\begin{defn}
An \textit{$r$-in-$s$ dissection} is a dissection of an $s$-gon into an $r$-gon and $s-r$ triangles.
\end{defn}
\begin{defn}
An \textit{$r$-rooted non-crossing set partition} is a non-crossing set partition where one of the parts, the root, has size $r$.
\end{defn}
The next theorems are nice consequences of the bijection defined in Proposition \ref{prop:SecondBinaryTree}.
As mentioned in the proof of Theorem \ref{thm:NewDescendingNaples}, choosing a descending strictly $k$-Naples parking function is equivalent to choosing an ordered sequence of $2k+2$ binary trees.
We know that binary trees with $i$ nodes are in bijection with triangulations of an $(i+2)$-gon and with non-crossing partitions of the set $[i]$ \cite{Stanley}.
\begin{thm}\label{thm:t1}
Descending strictly $k$-Naples parking functions of length $n$ are in bijection with $(2k+2)$-in-$(n+k+1)$ dissections, up to rotation, but with a distinguished edge on the $(2k+2)$-gon.
\end{thm}
\begin{proof}
To go from a dissection to the binary tree, we start at the distinguished edge and consider the outer polygon triangulation attached to it (it could be empty); the tree corresponding to this triangulation is the subtree that is the left descendant of the leftmost node of the original tree.
Going clockwise from there, each successive edge of the $(2k+2)$-gon similarly defines a triangulation that corresponds to the next descendant subtree, going left to right.
To go from the binary tree to the dissection, we identify the $2k+2$ subtrees (some may be empty) and convert them into triangulations as in \cite{Stanley}.
We then combine them, in order, forming the $(2k+2)$-gon out of the edges that correspond to the roots of the subtrees. The edge corresponding to the right child of the tree is then the distinguished edge.
This gives us an $(n+k+1)$-gon, since every polygon has $i+2$ edges, where $i$ is the number of nodes in the subtree, but one of them will be internal in the $(2k+2)$-gon, so in total we have $n-k-1+2k+2=n+k+1$ sides.
We can easily see that none of the dissecting lines exit the polygon, so we can make it convex without creating any issues.
\end{proof}
The process illustrated in the proof of Theorem \ref{thm:t1} is exemplified in Figure \ref{fig:treeanddissection} for the $2$-Naples parking function $(6,6,6,5,5,2,1)$.
\begin{figure}[h]
\centering
\tikzset{every picture/.style={line width=0.75pt}}
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (158.21,83.68) -- (173.31,76.79) -- (173.41,84.05) -- (217.66,85.13) -- (217.56,77.88) -- (232.85,85.5) -- (217.76,92.39) -- (217.66,85.13) -- (173.41,84.05) -- (173.5,91.31) -- cycle ;
\draw (371.61,80.33) -- (361.19,110.75) -- (334.07,129.5) -- (300.61,129.42) -- (273.59,110.54) -- (263.34,80.07) -- (273.76,49.65) -- (300.88,30.9) -- (334.34,30.98) -- (361.36,49.87) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ] (361.36,49.86) -- (371.62,80.33) ;
\draw [color={rgb, 255:red, 65; green, 117; blue, 5 } ,draw opacity=1 ] (361.19,110.75) -- (300.6,129.42) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (273.59,110.54) -- (273.76,49.65) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (334.35,30.98) -- (273.76,49.65) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ] (371.62,80.33) -- (361.19,110.75) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ] (273.59,110.54) -- (300.6,129.42) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][line width=2.25] (273.76,49.65) -- (361.36,49.86) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (79.23,64.96) .. controls (79.23,63.55) and (80.39,62.41) .. (81.82,62.41) .. controls (83.24,62.41) and (84.4,63.55) .. (84.4,64.96) .. controls (84.4,66.36) and (83.24,67.5) .. (81.82,67.5) .. controls (80.39,67.5) and (79.23,66.36) .. (79.23,64.96) -- cycle ;
\draw [fill={rgb, 255:red, 65; green, 117; blue, 5 } ,fill opacity=1 ] (122.84,103.99) .. controls (122.84,102.59) and (124,101.45) .. (125.43,101.45) .. controls (126.86,101.45) and (128.01,102.59) .. (128.01,103.99) .. controls (128.01,105.39) and (126.86,106.53) .. (125.43,106.53) .. controls (124,106.53) and (122.84,105.39) .. (122.84,103.99) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (98.95,103.48) .. controls (98.95,102.07) and (100.11,100.94) .. (101.53,100.94) .. controls (102.96,100.94) and (104.12,102.07) .. (104.12,103.48) .. controls (104.12,104.88) and (102.96,106.02) .. (101.53,106.02) .. controls (100.11,106.02) and (98.95,104.88) .. (98.95,103.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (112.49,84.01) -- (122.38,64.03) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (109.91,84.01) .. controls (109.91,82.6) and (111.06,81.47) .. (112.49,81.47) .. controls (113.92,81.47) and (115.08,82.6) .. (115.08,84.01) .. controls (115.08,85.41) and (113.92,86.55) .. (112.49,86.55) .. controls (111.06,86.55) and (109.91,85.41) .. (109.91,84.01) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (102.14,44.96) -- (81.82,64.96) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (99.56,44.96) .. controls (99.56,43.56) and (100.72,42.42) .. (102.14,42.42) .. controls (103.57,42.42) and (104.73,43.56) .. (104.73,44.96) .. controls (104.73,46.37) and (103.57,47.5) .. (102.14,47.5) .. controls (100.72,47.5) and (99.56,46.37) .. (99.56,44.96) -- cycle ;
\draw [color={rgb, 255:red, 65; green, 117; blue, 5 } ,draw opacity=1 ][fill={rgb, 255:red, 65; green, 117; blue, 5 } ,fill opacity=1 ] (114.99,86.55) -- (125.43,103.99) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (102.14,44.96) -- (122.38,64.03) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (101.53,103.48) -- (112.49,84.01) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (119.8,64.03) .. controls (119.8,62.62) and (120.95,61.48) .. (122.38,61.48) .. controls (123.81,61.48) and (124.96,62.62) .. (124.96,64.03) .. controls (124.96,65.43) and (123.81,66.57) .. (122.38,66.57) .. controls (120.95,66.57) and (119.8,65.43) .. (119.8,64.03) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (58.74,85) .. controls (58.74,83.59) and (59.9,82.45) .. (61.33,82.45) .. controls (62.76,82.45) and (63.91,83.59) .. (63.91,85) .. controls (63.91,86.4) and (62.76,87.54) .. (61.33,87.54) .. controls (59.9,87.54) and (58.74,86.4) .. (58.74,85) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (78.46,104.56) .. controls (78.46,103.16) and (79.62,102.02) .. (81.04,102.02) .. controls (82.47,102.02) and (83.63,103.16) .. (83.63,104.56) .. controls (83.63,105.96) and (82.47,107.1) .. (81.04,107.1) .. controls (79.62,107.1) and (78.46,105.96) .. (78.46,104.56) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (61.33,85) -- (81.04,104.56) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (61.33,85) -- (79.73,66.46) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (99.53,105.52) -- (89.62,122.96) ;
\draw [fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (87.03,122.96) .. controls (87.03,121.55) and (88.19,120.42) .. (89.62,120.42) .. controls (91.04,120.42) and (92.2,121.55) .. (92.2,122.96) .. controls (92.2,124.36) and (91.04,125.5) .. (89.62,125.5) .. controls (88.19,125.5) and (87.03,124.36) .. (87.03,122.96) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (84.4,64.96) -- (96.3,82.04) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (136.5,83.65) -- (122.38,64.03) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 74; blue, 74 } ,fill opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (112.67,122.38) -- (101.53,103.48) ;
\end{tikzpicture}
\caption{Trees and polygon dissections.}
\label{fig:treeanddissection}
\end{figure}
\begin{thm}\label{thm:t2}
Descending strictly $k$-Naples parking functions with $n$ cars are in bijection with $(2k+2)$-rooted non-crossing partitions of $[n+k+1],$ where 1 is in the root.
\end{thm}
\begin{proof}
The proof for this theorem is very similar to that of Theorem \ref{thm:t1}.
The root partitions the remaining $n-k-1$ elements into $2k+2$ subsets, where the elements of each subset form a chain of consecutive elements (otherwise the overall partition would not be non-crossing).
Each of these subsets can now carry a non-crossing partition of its elements that corresponds to a binary tree.
The subset starting right after the element $1$ (which could be empty) corresponds to the leftmost subtree, and going clockwise we obtain the subsets corresponding to the remaining subtrees, from left to right.
To go the other way, we build the root by selecting the element $1$; the next root element comes after as many elements as the first subtree has nodes, plus $1$, and so on.
Each subtree then corresponds to a non-crossing partition of the subset it defines by its number of nodes.
\end{proof}
The process illustrated in the proof of Theorem \ref{thm:t2} is exemplified in Figure \ref{fig:treesandpartitions} for the $2$-Naples parking function $(6,6,6,5,5,2,1)$.
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw (277.24,49.83) .. controls (278.78,60.2) and (301.13,78.69) .. (327.33,79.87) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (300.36,49.83) .. controls (301.13,59.56) and (310.76,64.66) .. (320.91,61.74) ;
\draw (327.33,79.87) .. controls (321.55,84.11) and (314.23,84.75) .. (319.37,98.73) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ][line width=2.25] (275.32,49.83) .. controls (275.32,48.95) and (276.18,48.24) .. (277.24,48.24) .. controls (278.31,48.24) and (279.17,48.95) .. (279.17,49.83) .. controls (279.17,50.71) and (278.31,51.43) .. (277.24,51.43) .. controls (276.18,51.43) and (275.32,50.71) .. (275.32,49.83) -- cycle ;
\draw [fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (256.31,60.89) .. controls (256.31,60.01) and (257.17,59.29) .. (258.23,59.29) .. controls (259.3,59.29) and (260.16,60.01) .. (260.16,60.89) .. controls (260.16,61.77) and (259.3,62.48) .. (258.23,62.48) .. controls (257.17,62.48) and (256.31,61.77) .. (256.31,60.89) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (248.35,81.46) .. controls (248.35,80.58) and (249.21,79.87) .. (250.27,79.87) .. controls (251.34,79.87) and (252.2,80.58) .. (252.2,81.46) .. controls (252.2,82.34) and (251.34,83.05) .. (250.27,83.05) .. controls (249.21,83.05) and (248.35,82.34) .. (248.35,81.46) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (258.36,101.28) .. controls (258.36,100.4) and (259.22,99.69) .. (260.29,99.69) .. controls (261.35,99.69) and (262.22,100.4) .. (262.22,101.28) .. controls (262.22,102.16) and (261.35,102.87) .. (260.29,102.87) .. controls (259.22,102.87) and (258.36,102.16) .. (258.36,101.28) -- cycle ;
\draw [fill={rgb, 255:red, 65; green, 117; blue, 5 } ,fill opacity=1 ] (273.26,109.68) .. controls (273.26,108.8) and (274.12,108.08) .. (275.19,108.08) .. controls (276.25,108.08) and (277.11,108.8) .. (277.11,109.68) .. controls (277.11,110.56) and (276.25,111.27) .. (275.19,111.27) .. controls (274.12,111.27) and (273.26,110.56) .. (273.26,109.68) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (300.49,109.36) .. controls (300.49,108.48) and (301.35,107.76) .. (302.41,107.76) .. controls (303.48,107.76) and (304.34,108.48) .. (304.34,109.36) .. controls (304.34,110.24) and (303.48,110.95) .. (302.41,110.95) .. controls (301.35,110.95) and (300.49,110.24) .. (300.49,109.36) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (317.44,98.73) .. controls (317.44,97.85) and (318.3,97.13) .. (319.37,97.13) .. controls (320.43,97.13) and (321.29,97.85) .. (321.29,98.73) .. controls (321.29,99.61) and (320.43,100.32) .. (319.37,100.32) .. controls (318.3,100.32) and (317.44,99.61) .. (317.44,98.73) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (318.98,61.74) .. controls (318.98,60.86) and (319.84,60.14) .. (320.91,60.14) .. controls (321.97,60.14) and (322.83,60.86) .. (322.83,61.74) .. controls (322.83,62.62) and (321.97,63.33) .. (320.91,63.33) .. controls (319.84,63.33) and (318.98,62.62) .. (318.98,61.74) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (325.4,79.87) .. controls (325.4,78.98) and (326.27,78.27) .. (327.33,78.27) .. controls (328.4,78.27) and (329.26,78.98) .. (329.26,79.87) .. controls (329.26,80.75) and (328.4,81.46) .. (327.33,81.46) .. controls (326.27,81.46) and (325.4,80.75) .. (325.4,79.87) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (298.43,49.83) .. controls (298.43,48.95) and (299.3,48.24) .. (300.36,48.24) .. controls (301.42,48.24) and (302.29,48.95) .. (302.29,49.83) .. controls (302.29,50.71) and (301.42,51.43) .. (300.36,51.43) .. controls (299.3,51.43) and (298.43,50.71) .. (298.43,49.83) -- cycle ;
\draw (302.41,107.76) .. controls (298.82,101.65) and (308.84,93.04) .. (321.29,98.73) ;
\draw (250.27,81.46) .. controls (259.52,87.09) and (262.99,86.66) .. (260.29,99.69) ;
\draw (252.2,81.46) .. controls (265.68,77.1) and (275.7,57.33) .. (277.24,49.83) ;
\draw (260.29,101.28) .. controls (269.92,92.4) and (297.66,96.55) .. (302.41,107.76) ;
\draw (134.21,79.62) -- (149.31,72.73) -- (149.41,79.99) -- (193.66,81.07) -- (193.56,73.81) -- (208.85,81.44) -- (193.76,88.33) -- (193.66,81.07) -- (149.41,79.99) -- (149.5,87.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (57.23,63.46) .. controls (57.23,62.05) and (58.39,60.91) .. (59.82,60.91) .. controls (61.24,60.91) and (62.4,62.05) .. (62.4,63.46) .. controls (62.4,64.86) and (61.24,66) .. (59.82,66) .. controls (58.39,66) and (57.23,64.86) .. (57.23,63.46) -- cycle ;
\draw [fill={rgb, 255:red, 65; green, 117; blue, 5 } ,fill opacity=1 ] (100.84,102.49) .. controls (100.84,101.09) and (102,99.95) .. (103.43,99.95) .. controls (104.86,99.95) and (106.01,101.09) .. (106.01,102.49) .. controls (106.01,103.89) and (104.86,105.03) .. (103.43,105.03) .. controls (102,105.03) and (100.84,103.89) .. (100.84,102.49) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (76.95,101.98) .. controls (76.95,100.57) and (78.11,99.44) .. (79.53,99.44) .. controls (80.96,99.44) and (82.12,100.57) .. (82.12,101.98) .. controls (82.12,103.38) and (80.96,104.52) .. (79.53,104.52) .. controls (78.11,104.52) and (76.95,103.38) .. (76.95,101.98) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (90.49,82.51) -- (100.38,62.53) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (87.91,82.51) .. controls (87.91,81.1) and (89.06,79.97) .. (90.49,79.97) .. controls (91.92,79.97) and (93.08,81.1) .. (93.08,82.51) .. controls (93.08,83.91) and (91.92,85.05) .. (90.49,85.05) .. controls (89.06,85.05) and (87.91,83.91) .. (87.91,82.51) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (80.14,43.46) -- (59.82,63.46) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (77.56,43.46) .. controls (77.56,42.06) and (78.72,40.92) .. (80.14,40.92) .. controls (81.57,40.92) and (82.73,42.06) .. (82.73,43.46) .. controls (82.73,44.87) and (81.57,46) .. (80.14,46) .. controls (78.72,46) and (77.56,44.87) .. (77.56,43.46) -- cycle ;
\draw [color={rgb, 255:red, 65; green, 117; blue, 5 } ,draw opacity=1 ][fill={rgb, 255:red, 65; green, 117; blue, 5 } ,fill opacity=1 ] (92.99,85.05) -- (103.43,102.49) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (80.14,43.46) -- (100.38,62.53) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (79.53,101.98) -- (90.49,82.51) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (97.8,62.53) .. controls (97.8,61.12) and (98.95,59.98) .. (100.38,59.98) .. controls (101.81,59.98) and (102.96,61.12) .. (102.96,62.53) .. controls (102.96,63.93) and (101.81,65.07) .. (100.38,65.07) .. controls (98.95,65.07) and (97.8,63.93) .. (97.8,62.53) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (36.74,83.5) .. controls (36.74,82.09) and (37.9,80.95) .. (39.33,80.95) .. controls (40.76,80.95) and (41.91,82.09) .. (41.91,83.5) .. controls (41.91,84.9) and (40.76,86.04) .. (39.33,86.04) .. controls (37.9,86.04) and (36.74,84.9) .. (36.74,83.5) -- cycle ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (56.46,103.06) .. controls (56.46,101.66) and (57.62,100.52) .. (59.04,100.52) .. controls (60.47,100.52) and (61.63,101.66) .. (61.63,103.06) .. controls (61.63,104.46) and (60.47,105.6) .. (59.04,105.6) .. controls (57.62,105.6) and (56.46,104.46) .. (56.46,103.06) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (39.33,83.5) -- (59.04,103.06) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (39.33,83.5) -- (57.73,64.96) ;
\draw [color={rgb, 255:red, 74; green, 144; blue, 226 } ,draw opacity=1 ] (77.53,104.02) -- (67.62,121.46) ;
\draw [fill={rgb, 255:red, 74; green, 144; blue, 226 } ,fill opacity=1 ] (65.03,121.46) .. controls (65.03,120.05) and (66.19,118.92) .. (67.62,118.92) .. controls (69.04,118.92) and (70.2,120.05) .. (70.2,121.46) .. controls (70.2,122.86) and (69.04,124) .. (67.62,124) .. controls (66.19,124) and (65.03,122.86) .. (65.03,121.46) -- cycle ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (62.4,63.46) -- (74.3,80.54) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (114.5,82.15) -- (100.38,62.53) ;
\draw [color={rgb, 255:red, 74; green, 74; blue, 74 } ,draw opacity=1 ][fill={rgb, 255:red, 74; green, 74; blue, 74 } ,fill opacity=1 ][line width=1.5] [dash pattern={on 1.69pt off 2.76pt}] (90.67,120.88) -- (79.53,101.98) ;
\draw (323,84.84) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize,color={rgb, 255:red, 74; green, 74; blue, 74 } ,opacity=1 ] {$\emptyset $};
\draw (308.5,102.84) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize,color={rgb, 255:red, 74; green, 74; blue, 74 } ,opacity=1 ] {$\emptyset $};
\draw (241.5,86.34) node [anchor=north west][inner sep=0.75pt] [font=\scriptsize,color={rgb, 255:red, 74; green, 74; blue, 74 } ,opacity=1 ] {$\emptyset $};
\draw (265.54,31.89) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {1};
\draw (237.87,47.15) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {10};
\draw (237.19,71.73) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {9};
\draw (246.95,98.52) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {8};
\draw (268.03,112.93) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {7};
\draw (299.81,113) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {6};
\draw (323.5,97.56) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {5};
\draw (330.95,72.8) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {4};
\draw (323.46,46.72) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {3};
\draw (301.29,32.47) node [anchor=north west][inner sep=0.75pt] [font=\small] [align=left] {2};
\end{tikzpicture}
\caption{Trees and non-crossing partitions.}
\label{fig:treesandpartitions}
\end{figure}
With these two final results at hand, we illustrate in Figure \ref{fig:bijectionmap} the bijections we have established between $k$-Naples parking functions and (generalized) Catalan objects.
Unsurprisingly, this figure parallels the bijections for the (traditional) parking function example illustrated in Figure \ref{fig:CatalanObjects}.
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.70pt,y=0.70pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (433.1,99.35) .. controls (433.1,98.44) and (433.85,97.7) .. (434.78,97.7) .. controls (435.7,97.7) and (436.45,98.44) .. (436.45,99.35) .. controls (436.45,100.27) and (435.7,101.01) .. (434.78,101.01) .. controls (433.85,101.01) and (433.1,100.27) .. (433.1,99.35) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (447.58,139.15) .. controls (447.58,138.23) and (448.33,137.49) .. (449.26,137.49) .. controls (450.18,137.49) and (450.93,138.23) .. (450.93,139.15) .. controls (450.93,140.06) and (450.18,140.8) .. (449.26,140.8) .. controls (448.33,140.8) and (447.58,140.06) .. (447.58,139.15) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (460.45,125.46) .. controls (460.45,124.54) and (461.2,123.8) .. (462.12,123.8) .. controls (463.05,123.8) and (463.8,124.54) .. (463.8,125.46) .. controls (463.8,126.37) and (463.05,127.11) .. (462.12,127.11) .. controls (461.2,127.11) and (460.45,126.37) .. (460.45,125.46) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (448.92,112.77) -- (462.8,99.08) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (447.24,112.77) .. controls (447.24,111.85) and (447.99,111.11) .. (448.92,111.11) .. controls (449.85,111.11) and (450.6,111.85) .. (450.6,112.77) .. controls (450.6,113.68) and (449.85,114.43) .. (448.92,114.43) .. controls (447.99,114.43) and (447.24,113.68) .. (447.24,112.77) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (432.27,126.85) .. controls (432.27,125.84) and (433.03,125.02) .. (433.95,125.02) .. controls (434.88,125.02) and (435.63,125.84) .. (435.63,126.85) .. controls (435.63,127.87) and (434.88,128.68) .. (433.95,128.68) .. controls (433.03,128.68) and (432.27,127.87) .. (432.27,126.85) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (447.96,86.32) -- (434.78,99.35) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (422.18,112.77) -- (433.95,126.85) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (434.78,99.35) -- (422.18,112.77) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (446.29,86.32) .. controls (446.29,85.41) and (447.04,84.67) .. (447.96,84.67) .. controls (448.89,84.67) and (449.64,85.41) .. (449.64,86.32) .. controls (449.64,87.24) and (448.89,87.98) .. (447.96,87.98) .. controls (447.04,87.98) and (446.29,87.24) .. (446.29,86.32) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (420.5,112.77) .. controls (420.5,111.85) and (421.25,111.11) .. (422.18,111.11) .. controls (423.1,111.11) and (423.85,111.85) .. (423.85,112.77) .. controls (423.85,113.68) and (423.1,114.43) .. (422.18,114.43) .. controls (421.25,114.43) and (420.5,113.68) .. (420.5,112.77) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (462.12,125.46) -- (449.26,139.15) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (447.96,86.32) -- (462.8,99.08) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (448.92,112.77) -- (462.12,125.46) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (461.12,99.08) .. controls (461.12,98.16) and (461.87,97.42) .. (462.8,97.42) .. controls (463.73,97.42) and (464.48,98.16) .. (464.48,99.08) .. controls (464.48,99.99) and (463.73,100.74) .. (462.8,100.74) .. controls (461.87,100.74) and (461.12,99.99) .. (461.12,99.08) -- cycle ;
\draw (449.26,139.15) -- (436.06,153.17) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (434.38,153.17) .. controls (434.38,152.16) and (435.13,151.34) .. (436.06,151.34) .. controls (436.98,151.34) and (437.73,152.16) .. (437.73,153.17) .. controls (437.73,154.18) and (436.98,155) .. (436.06,155) .. controls (435.13,155) and (434.38,154.18) .. (434.38,153.17) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (446.62,238.38) .. controls (446.62,237.31) and (447.46,236.44) .. (448.49,236.44) .. controls (449.51,236.44) and (450.35,237.31) .. (450.35,238.38) .. controls (450.35,239.45) and (449.51,240.32) .. (448.49,240.32) .. controls (447.46,240.32) and (446.62,239.45) .. (446.62,238.38) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (476.25,268.19) .. controls (476.25,267.12) and (477.09,266.25) .. (478.11,266.25) .. controls (479.14,266.25) and (479.98,267.12) .. (479.98,268.19) .. controls (479.98,269.26) and (479.14,270.13) .. (478.11,270.13) .. controls (477.09,270.13) and (476.25,269.26) .. (476.25,268.19) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (460.83,267.8) .. controls (460.83,266.73) and (461.67,265.86) .. (462.7,265.86) .. controls (463.72,265.86) and (464.56,266.73) .. (464.56,267.8) .. controls (464.56,268.87) and (463.72,269.74) .. (462.7,269.74) .. controls (461.67,269.74) and (460.83,268.87) .. (460.83,267.8) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (470.59,252.93) -- (477.72,237.67) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (468.73,252.93) .. controls (468.73,251.86) and (469.56,250.99) .. (470.59,250.99) .. controls (471.62,250.99) and (472.45,251.86) .. (472.45,252.93) .. controls (472.45,254) and (471.62,254.87) .. (470.59,254.87) .. controls (469.56,254.87) and (468.73,254) .. (468.73,252.93) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (463.14,223.11) -- (448.49,238.38) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (461.27,223.11) .. controls (461.27,222.04) and (462.11,221.17) .. (463.14,221.17) .. controls (464.16,221.17) and (465,222.04) .. (465,223.11) .. controls (465,224.18) and (464.16,225.05) .. (463.14,225.05) .. controls (462.11,225.05) and (461.27,224.18) .. (461.27,223.11) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (470.59,252.93) -- (478.11,268.19) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (463.14,223.11) -- (477.72,237.67) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (462.7,267.8) -- (470.59,252.93) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (475.86,237.67) .. controls (475.86,236.59) and (476.69,235.73) .. (477.72,235.73) .. controls (478.75,235.73) and (479.58,236.59) .. (479.58,237.67) .. controls (479.58,238.74) and (478.75,239.61) .. (477.72,239.61) .. controls (476.69,239.61) and (475.86,238.74) .. (475.86,237.67) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (431.5,252.54) .. controls (431.5,251.46) and (432.33,250.6) .. (433.36,250.6) .. controls (434.39,250.6) and (435.22,251.46) .. (435.22,252.54) .. controls (435.22,253.61) and (434.39,254.48) .. (433.36,254.48) .. controls (432.33,254.48) and (431.5,253.61) .. (431.5,252.54) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (445.71,267.48) .. controls (445.71,266.41) and (446.54,265.54) .. (447.57,265.54) .. controls (448.6,265.54) and (449.43,266.41) .. (449.43,267.48) .. controls (449.43,268.55) and (448.6,269.42) .. (447.57,269.42) .. controls (446.54,269.42) and (445.71,268.55) .. (445.71,267.48) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (433.36,252.54) -- (447.57,267.48) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (433.36,252.54) -- (448.49,238.38) ;
\draw (462.7,267.8) -- (455.55,283.06) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (453.69,283.06) .. controls (453.69,281.99) and (454.52,281.12) .. (455.55,281.12) .. controls (456.58,281.12) and (457.41,281.99) .. (457.41,283.06) .. controls (457.41,284.13) and (456.58,285) .. (455.55,285) .. controls (454.52,285) and (453.69,284.13) .. (453.69,283.06) -- cycle ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ][fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (379.14,182.14) -- (186.13,182.78) ;
\draw (337.58,161.88) -- (326.88,172.14) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (337.59,162.65) .. controls (337.14,162.66) and (336.77,162.32) .. (336.76,161.89) .. controls (336.75,161.46) and (337.11,161.11) .. (337.56,161.1) .. controls (338.02,161.09) and (338.39,161.44) .. (338.4,161.86) .. controls (338.4,162.29) and (338.05,162.65) .. (337.59,162.65) -- cycle ;
\draw (326.92,171.92) -- (316.05,182.27) ;
\draw (316.05,182.27) -- (305.18,192.87) ;
\draw (305.18,192.87) -- (294.31,182.27) ;
\draw (294.31,182.27) -- (272.56,202.97) ;
\draw (239.68,171.66) -- (272.56,202.97) ;
\draw (239.68,171.66) -- (228.8,182.52) ;
\draw (228.8,182.52) -- (217.93,172.18) ;
\draw (217.93,172.18) -- (207.06,182.52) ;
\draw [color={rgb, 255:red, 22; green, 1; blue, 251 } ,draw opacity=1 ] (379.68,203.22) -- (185.32,203.22) ;
\draw (337.52,161.57) -- (359.27,182.27) ;
\draw (207.06,182.52) -- (185.32,203.22) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (195.37,192.87) .. controls (195.37,192.45) and (195.74,192.1) .. (196.19,192.1) .. controls (196.64,192.1) and (197.01,192.45) .. (197.01,192.87) .. controls (197.01,193.3) and (196.64,193.65) .. (196.19,193.65) .. controls (195.74,193.65) and (195.37,193.3) .. (195.37,192.87) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (238.86,171.66) .. controls (238.86,171.23) and (239.23,170.88) .. (239.68,170.88) .. controls (240.13,170.88) and (240.49,171.23) .. (240.49,171.66) .. controls (240.49,172.09) and (240.13,172.43) .. (239.68,172.43) .. controls (239.23,172.43) and (238.86,172.09) .. (238.86,171.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (227.99,182.52) .. controls (227.99,182.1) and (228.35,181.75) .. (228.8,181.75) .. controls (229.26,181.75) and (229.62,182.1) .. (229.62,182.52) .. controls (229.62,182.95) and (229.26,183.3) .. (228.8,183.3) .. controls (228.35,183.3) and (227.99,182.95) .. (227.99,182.52) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (206.24,182.52) .. controls (206.24,182.1) and (206.61,181.75) .. (207.06,181.75) .. controls (207.51,181.75) and (207.88,182.1) .. (207.88,182.52) .. controls (207.88,182.95) and (207.51,183.3) .. (207.06,183.3) .. controls (206.61,183.3) and (206.24,182.95) .. (206.24,182.52) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (217.12,172.18) .. controls (217.12,171.75) and (217.48,171.4) .. (217.93,171.4) .. controls (218.38,171.4) and (218.75,171.75) .. (218.75,172.18) .. controls (218.75,172.6) and (218.38,172.95) .. (217.93,172.95) .. controls (217.48,172.95) and (217.12,172.6) .. (217.12,172.18) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (347.58,171.92) .. controls (347.58,171.49) and (347.94,171.14) .. (348.39,171.14) .. controls (348.85,171.14) and (349.21,171.49) .. (349.21,171.92) .. controls (349.21,172.34) and (348.85,172.69) .. (348.39,172.69) .. controls (347.94,172.69) and (347.58,172.34) .. (347.58,171.92) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (326.11,171.92) .. controls (326.11,171.49) and (326.47,171.14) .. (326.92,171.14) .. controls (327.37,171.14) and (327.74,171.49) .. (327.74,171.92) .. controls (327.74,172.34) and (327.37,172.69) .. (326.92,172.69) .. controls (326.47,172.69) and (326.11,172.34) .. (326.11,171.92) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (315.23,182.27) .. controls (315.23,181.84) and (315.6,181.49) .. (316.05,181.49) .. controls (316.5,181.49) and (316.87,181.84) .. (316.87,182.27) .. controls (316.87,182.69) and (316.5,183.04) .. (316.05,183.04) .. controls (315.6,183.04) and (315.23,182.69) .. (315.23,182.27) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (304.36,192.87) .. controls (304.36,192.45) and (304.73,192.1) .. (305.18,192.1) .. controls (305.63,192.1) and (306,192.45) .. (306,192.87) .. controls (306,193.3) and (305.63,193.65) .. (305.18,193.65) .. controls (304.73,193.65) and (304.36,193.3) .. (304.36,192.87) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (282.62,192.62) .. controls (282.62,192.19) and (282.98,191.84) .. (283.44,191.84) .. controls (283.89,191.84) and (284.25,192.19) .. (284.25,192.62) .. controls (284.25,193.04) and (283.89,193.39) .. (283.44,193.39) .. controls (282.98,193.39) and (282.62,193.04) .. (282.62,192.62) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (293.49,182.27) .. controls (293.49,181.84) and (293.86,181.49) .. (294.31,181.49) .. controls (294.76,181.49) and (295.12,181.84) .. (295.12,182.27) .. controls (295.12,182.69) and (294.76,183.04) .. (294.31,183.04) .. controls (293.86,183.04) and (293.49,182.69) .. (293.49,182.27) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (271.75,202.97) .. controls (271.75,202.54) and (272.11,202.19) .. (272.56,202.19) .. controls (273.01,202.19) and (273.38,202.54) .. (273.38,202.97) .. controls (273.38,203.39) and (273.01,203.74) .. (272.56,203.74) .. controls (272.11,203.74) and (271.75,203.39) .. (271.75,202.97) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (260.43,192.1) .. controls (260.43,191.67) and (260.8,191.32) .. (261.25,191.32) .. controls (261.7,191.32) and (262.07,191.67) .. (262.07,192.1) .. controls (262.07,192.53) and (261.7,192.87) .. (261.25,192.87) .. controls (260.8,192.87) and (260.43,192.53) .. (260.43,192.1) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (249.82,182.01) .. controls (249.82,181.58) and (250.18,181.23) .. (250.63,181.23) .. controls (251.08,181.23) and (251.45,181.58) .. (251.45,182.01) .. controls (251.45,182.44) and (251.08,182.78) .. (250.63,182.78) .. controls (250.18,182.78) and (249.82,182.44) .. (249.82,182.01) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (358.45,182.27) .. controls (358.45,181.84) and (358.82,181.49) .. (359.27,181.49) .. controls (359.72,181.49) and (360.08,181.84) .. (360.08,182.27) .. controls (360.08,182.69) and (359.72,183.04) .. (359.27,183.04) .. controls (358.82,183.04) and (358.45,182.69) .. (358.45,182.27) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (184.5,203.22) .. controls (184.5,202.8) and (184.87,202.45) .. (185.32,202.45) .. controls (185.77,202.45) and (186.13,202.8) .. (186.13,203.22) .. controls (186.13,203.65) and (185.77,204) .. (185.32,204) .. controls (184.87,204) and (184.5,203.65) .. (184.5,203.22) -- cycle ;
\draw (359.27,182.27) -- (380.5,202.45) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (368.81,192.1) .. controls (368.81,191.67) and (369.18,191.32) .. (369.63,191.32) .. controls (370.08,191.32) and (370.44,191.67) .. (370.44,192.1) .. controls (370.44,192.53) and (370.08,192.87) .. (369.63,192.87) .. controls (369.18,192.87) and (368.81,192.53) .. (368.81,192.1) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (378.87,202.45) .. controls (378.87,202.02) and (379.23,201.67) .. (379.68,201.67) .. controls (380.13,201.67) and (380.5,202.02) .. (380.5,202.45) .. controls (380.5,202.88) and (380.13,203.22) .. (379.68,203.22) .. controls (379.23,203.22) and (378.87,202.88) .. (378.87,202.45) -- cycle ;
\draw (381.24,170.89) -- (385.63,157.53) -- (388,160.29) -- (406.25,144.6) -- (403.88,141.84) -- (417.74,139.51) -- (413.35,152.87) -- (410.98,150.11) -- (392.74,165.8) -- (395.1,168.56) -- cycle ;
\draw (390.44,211.53) -- (404.49,211.01) -- (402.73,214.19) -- (423.78,225.86) -- (425.54,222.68) -- (432.54,234.87) -- (418.49,235.39) -- (420.25,232.21) -- (399.21,220.54) -- (397.44,223.72) -- cycle ;
\draw (490.41,229.04) -- (499.97,214.35) -- (501.91,218.72) -- (528.77,206.83) -- (526.83,202.46) -- (544.13,205.26) -- (534.57,219.95) -- (532.64,215.57) -- (505.78,227.47) -- (507.71,231.84) -- cycle ;
\draw (497.25,268.36) -- (512.16,263.48) -- (510.94,267.71) -- (535.87,274.88) -- (537.09,270.65) -- (547.12,282.7) -- (532.22,287.58) -- (533.44,283.35) -- (508.5,276.18) -- (507.29,280.41) -- cycle ;
\draw (135.44,191.46) -- (144.9,185.81) -- (144.93,188.58) -- (163.98,188.36) -- (163.94,185.59) -- (173.53,191.02) -- (164.07,196.67) -- (164.04,193.9) -- (144.99,194.12) -- (145.02,196.89) -- cycle ;
\draw (639,301.61) -- (630.87,326.6) -- (609.7,342.01) -- (583.59,341.94) -- (562.5,326.42) -- (554.5,301.39) -- (562.63,276.4) -- (583.8,261) -- (609.91,261.07) -- (631,276.58) -- cycle ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ] (631,276.58) -- (639.01,301.61) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ] (630.87,326.6) -- (583.58,341.94) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ] (562.5,326.43) -- (562.63,276.4) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (609.92,261.06) -- (562.63,276.4) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ] (639.01,301.61) -- (630.87,326.6) ;
\draw [color={rgb, 255:red, 155; green, 155; blue, 155 } ,draw opacity=1 ] (562.5,326.43) -- (583.58,341.94) ;
\draw [color={rgb, 255:red, 208; green, 2; blue, 27 } ,draw opacity=1 ] (562.63,276.4) -- (631.01,276.57) ;
\draw (588.65,174.2) .. controls (590.06,185.02) and (610.48,204.34) .. (634.42,205.56) ;
\draw (609.78,174.2) .. controls (610.48,184.36) and (619.28,189.69) .. (628.55,186.63) ;
\draw (634.42,205.56) .. controls (629.14,210) and (622.45,210.66) .. (627.15,225.26) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (586.89,174.2) .. controls (586.89,173.28) and (587.68,172.54) .. (588.65,172.54) .. controls (589.62,172.54) and (590.41,173.28) .. (590.41,174.2) .. controls (590.41,175.12) and (589.62,175.87) .. (588.65,175.87) .. controls (587.68,175.87) and (586.89,175.12) .. (586.89,174.2) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (569.52,185.75) .. controls (569.52,184.83) and (570.31,184.08) .. (571.28,184.08) .. controls (572.25,184.08) and (573.04,184.83) .. (573.04,185.75) .. controls (573.04,186.67) and (572.25,187.41) .. (571.28,187.41) .. controls (570.31,187.41) and (569.52,186.67) .. (569.52,185.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (562.24,207.23) .. controls (562.24,206.31) and (563.03,205.56) .. (564,205.56) .. controls (564.98,205.56) and (565.76,206.31) .. (565.76,207.23) .. controls (565.76,208.14) and (564.98,208.89) .. (564,208.89) .. controls (563.03,208.89) and (562.24,208.14) .. (562.24,207.23) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (571.4,227.92) .. controls (571.4,227) and (572.18,226.25) .. (573.16,226.25) .. controls (574.13,226.25) and (574.92,227) .. (574.92,227.92) .. controls (574.92,228.84) and (574.13,229.58) .. (573.16,229.58) .. controls (572.18,229.58) and (571.4,228.84) .. (571.4,227.92) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (585.01,236.69) .. controls (585.01,235.77) and (585.8,235.02) .. (586.77,235.02) .. controls (587.74,235.02) and (588.53,235.77) .. (588.53,236.69) .. controls (588.53,237.61) and (587.74,238.35) .. (586.77,238.35) .. controls (585.8,238.35) and (585.01,237.61) .. (585.01,236.69) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (609.89,236.35) .. controls (609.89,235.43) and (610.68,234.69) .. (611.65,234.69) .. controls (612.63,234.69) and (613.41,235.43) .. (613.41,236.35) .. controls (613.41,237.27) and (612.63,238.02) .. (611.65,238.02) .. controls (610.68,238.02) and (609.89,237.27) .. (609.89,236.35) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (625.39,225.26) .. controls (625.39,224.34) and (626.17,223.59) .. (627.15,223.59) .. controls (628.12,223.59) and (628.91,224.34) .. (628.91,225.26) .. controls (628.91,226.17) and (628.12,226.92) .. (627.15,226.92) .. controls (626.17,226.92) and (625.39,226.17) .. (625.39,225.26) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (626.79,186.63) .. controls (626.79,185.71) and (627.58,184.97) .. (628.55,184.97) .. controls (629.53,184.97) and (630.31,185.71) .. (630.31,186.63) .. controls (630.31,187.55) and (629.53,188.3) .. (628.55,188.3) .. controls (627.58,188.3) and (626.79,187.55) .. (626.79,186.63) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (632.66,205.56) .. controls (632.66,204.64) and (633.45,203.9) .. (634.42,203.9) .. controls (635.4,203.9) and (636.18,204.64) .. (636.18,205.56) .. controls (636.18,206.48) and (635.4,207.23) .. (634.42,207.23) .. controls (633.45,207.23) and (632.66,206.48) .. (632.66,205.56) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (608.02,174.2) .. controls (608.02,173.28) and (608.8,172.54) .. (609.78,172.54) .. controls (610.75,172.54) and (611.54,173.28) .. (611.54,174.2) .. controls (611.54,175.12) and (610.75,175.87) .. (609.78,175.87) .. controls (608.8,175.87) and (608.02,175.12) .. (608.02,174.2) -- cycle ;
\draw (611.65,234.69) .. controls (608.37,228.31) and (617.52,219.32) .. (628.91,225.26) ;
\draw (564,207.23) .. controls (572.45,213.11) and (575.62,212.66) .. (573.16,226.25) ;
\draw (565.76,207.23) .. controls (578.09,202.67) and (589,182.03) .. (590.41,174.2) ;
\draw (573.16,227.92) .. controls (581.96,218.65) and (607.31,222.98) .. (611.65,234.69) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (586.89,175.2) .. controls (586.89,173.73) and (588.15,172.54) .. (589.71,172.54) .. controls (591.26,172.54) and (592.52,173.73) .. (592.52,175.2) .. controls (592.52,176.67) and (591.26,177.87) .. (589.71,177.87) .. controls (588.15,177.87) and (586.89,176.67) .. (586.89,175.2) -- cycle ;
\draw (606.04,157.49) node [anchor=north west][inner sep=0.75pt] [align=left] {2};
\draw (629.04,170.81) node [anchor=north west][inner sep=0.75pt] [align=left] {3};
\draw (637.26,198.55) node [anchor=north west][inner sep=0.75pt] [align=left] {4};
\draw (630.45,224.41) node [anchor=north west][inner sep=0.75pt] [align=left] {5};
\draw (606.98,237.4) node [anchor=north west][inner sep=0.75pt] [align=left] {6};
\draw (580.22,237.84) node [anchor=north west][inner sep=0.75pt] [align=left] {7};
\draw (560.5,225.41) node [anchor=north west][inner sep=0.75pt] [align=left] {8};
\draw (551.58,197.44) node [anchor=north west][inner sep=0.75pt] [align=left] {9};
\draw (553.68,171.25) node [anchor=north west][inner sep=0.75pt] [align=left] {10};
\draw (580.68,157.94) node [anchor=north west][inner sep=0.75pt] [align=left] {1};
\draw (19,182) node [anchor=north west][inner sep=0.75pt] [align=left] {(6,6,6,5,5,2,1)};
\end{tikzpicture}
\caption{A $k$-Naples parking function and its corresponding Catalan objects.}
\label{fig:bijectionmap}
\end{figure}
\section{Future work}\label{sec:FutureWorks}
As we have said, all rearrangements of parking functions are still parking functions.
Using this fact, it is possible to find simple labelling rules on Dyck paths and trees that correspond to every parking function \cite{YanBook}.
One area of future research is to explore a way of describing which rearrangements of descending $k$-Naples parking functions are still $k$-Naples parking functions based on a labelling of the objects with which they are in bijection.
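For concreteness, the parking rule itself can be simulated. The sketch below implements the $k$-Naples rule as we read it (a car whose preferred spot is taken first backs up at most $k$ spots, checking the nearest spot first, and otherwise drives forward to the first empty spot); the function name and conventions are ours, not the paper's.

```python
def is_k_naples(prefs, k):
    """Return True iff prefs (1-indexed spots) is a k-Naples parking function
    under the rule as we read it: car i drives to prefs[i]; if taken, it
    checks spots prefs[i]-1, ..., prefs[i]-k (nearest first) and parks in the
    first empty one; failing that, it drives forward to the first empty spot."""
    n = len(prefs)
    occupied = [False] * (n + 1)          # index 0 unused; spots are 1..n
    for p in prefs:
        spot = None
        if not occupied[p]:
            spot = p
        if spot is None:                  # back up at most k spots
            for b in range(p - 1, max(p - k, 1) - 1, -1):
                if not occupied[b]:
                    spot = b
                    break
        if spot is None:                  # then drive forward
            for f in range(p + 1, n + 1):
                if not occupied[f]:
                    spot = f
                    break
        if spot is None:
            return False                  # this car cannot park
        occupied[spot] = True
    return True
```

Under this rule, for example, $(2,2)$ fails as an ordinary ($k=0$) parking function but parks for $k=1$, and the sequence $(6,6,6,5,5,2,1)$ of Figure~\ref{fig:bijectionmap} parks for $k=2$ but not for $k=1$.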
\begin{figure}[h]
\centering
\begin{tikzpicture}[x=0.75pt,y=0.75pt,yscale=-1,xscale=1]
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (276.13,85.66) .. controls (276.13,84.29) and (277.24,83.18) .. (278.61,83.18) .. controls (279.98,83.18) and (281.08,84.29) .. (281.08,85.66) .. controls (281.08,87.03) and (279.98,88.14) .. (278.61,88.14) .. controls (277.24,88.14) and (276.13,87.03) .. (276.13,85.66) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (318.02,124.75) .. controls (318.02,123.38) and (319.13,122.27) .. (320.5,122.27) .. controls (321.87,122.27) and (322.98,123.38) .. (322.98,124.75) .. controls (322.98,126.12) and (321.87,127.23) .. (320.5,127.23) .. controls (319.13,127.23) and (318.02,126.12) .. (318.02,124.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (337.02,104.25) .. controls (337.02,102.88) and (338.13,101.77) .. (339.5,101.77) .. controls (340.87,101.77) and (341.98,102.88) .. (341.98,104.25) .. controls (341.98,105.62) and (340.87,106.73) .. (339.5,106.73) .. controls (338.13,106.73) and (337.02,105.62) .. (337.02,104.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (317.52,85.25) .. controls (317.52,83.88) and (318.63,82.77) .. (320,82.77) .. controls (321.37,82.77) and (322.48,83.88) .. (322.48,85.25) .. controls (322.48,86.62) and (321.37,87.73) .. (320,87.73) .. controls (318.63,87.73) and (317.52,86.62) .. (317.52,85.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (339.5,104.25) -- (320.5,124.75) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (320,85.25) -- (339.5,104.25) ;
\draw (320.5,124.75) -- (301,145.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (298.52,145.75) .. controls (298.52,144.24) and (299.63,143.01) .. (301,143.01) .. controls (302.37,143.01) and (303.48,144.24) .. (303.48,145.75) .. controls (303.48,147.26) and (302.37,148.49) .. (301,148.49) .. controls (299.63,148.49) and (298.52,147.26) .. (298.52,145.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (274.92,126.84) .. controls (274.92,125.33) and (276.02,124.1) .. (277.39,124.1) .. controls (278.76,124.1) and (279.87,125.33) .. (279.87,126.84) .. controls (279.87,128.36) and (278.76,129.58) .. (277.39,129.58) .. controls (276.02,129.58) and (274.92,128.36) .. (274.92,126.84) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (298.08,66.15) -- (278.61,85.66) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (260,105.75) -- (277.39,126.84) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (278.61,85.66) -- (260,105.75) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (295.61,66.15) .. controls (295.61,64.78) and (296.72,63.67) .. (298.08,63.67) .. controls (299.45,63.67) and (300.56,64.78) .. (300.56,66.15) .. controls (300.56,67.52) and (299.45,68.63) .. (298.08,68.63) .. controls (296.72,68.63) and (295.61,67.52) .. (295.61,66.15) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (257.52,105.75) .. controls (257.52,104.38) and (258.63,103.27) .. (260,103.27) .. controls (261.37,103.27) and (262.48,104.38) .. (262.48,105.75) .. controls (262.48,107.12) and (261.37,108.23) .. (260,108.23) .. controls (258.63,108.23) and (257.52,107.12) .. (257.52,105.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (298.08,66.15) -- (320,85.25) ;
\draw (325.5,75.25) -- (344.13,95.04) ;
\draw [shift={(345.5,96.5)}, rotate = 226.74] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (320.5,124.75) -- (320,85.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (321.02,204.75) .. controls (321.02,203.38) and (322.13,202.27) .. (323.5,202.27) .. controls (324.87,202.27) and (325.98,203.38) .. (325.98,204.75) .. controls (325.98,206.12) and (324.87,207.23) .. (323.5,207.23) .. controls (322.13,207.23) and (321.02,206.12) .. (321.02,204.75) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (340.02,184.25) .. controls (340.02,182.88) and (341.13,181.77) .. (342.5,181.77) .. controls (343.87,181.77) and (344.98,182.88) .. (344.98,184.25) .. controls (344.98,185.62) and (343.87,186.73) .. (342.5,186.73) .. controls (341.13,186.73) and (340.02,185.62) .. (340.02,184.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (320.52,165.25) .. controls (320.52,163.88) and (321.63,162.77) .. (323,162.77) .. controls (324.37,162.77) and (325.48,163.88) .. (325.48,165.25) .. controls (325.48,166.62) and (324.37,167.73) .. (323,167.73) .. controls (321.63,167.73) and (320.52,166.62) .. (320.52,165.25) -- cycle ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (342.5,184.25) -- (323.5,204.75) ;
\draw [fill={rgb, 255:red, 208; green, 2; blue, 27 } ,fill opacity=1 ] (323,165.25) -- (342.5,184.25) ;
\draw [fill={rgb, 255:red, 0; green, 0; blue, 0 } ,fill opacity=1 ] (301.08,146.15) -- (323,165.25) ;
\draw (312,143.25) -- (347.02,175.15) ;
\draw [shift={(348.5,176.5)}, rotate = 222.32999999999998] [color={rgb, 255:red, 0; green, 0; blue, 0 } ][line width=0.75] (10.93,-3.29) .. controls (6.95,-1.4) and (3.31,-0.3) .. (0,0) .. controls (3.31,0.3) and (6.95,1.4) .. (10.93,3.29) ;
\draw [dash pattern={on 4.5pt off 4.5pt}] (323.5,204.75) -- (323,165.25) ;
\draw (265.5,66) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 2}};
\draw (248.5,88.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 3}};
\draw (265.5,120.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 5}};
\draw (305.5,85.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 4}};
\draw (345,99) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 1}};
\draw (323.5,121.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 6}};
\draw (286.5,135.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 10}};
\draw (308,159) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 9}};
\draw (350,174.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 7}};
\draw (327,199.5) node [anchor=north west][inner sep=0.75pt] [align=left] {{\tiny 8}};
\end{tikzpicture}
\caption{Labelling binary trees for $k=1$}
\label{fig:TreeLabbel}
\end{figure}
When $k=1$, we noticed some patterns when labelling trees that correspond to a $1$-Naples parking function as illustrated in Figure \ref{fig:TreeLabbel}.
Note that in these labellings the root is unlabeled, as it does not correspond to a car, and a node must have a lower label than its right child.
Now, for the patterns: if a direct right descendant of the root does not have a left child, then it must have a higher label than its right child.
Once this ends, we look at the final direct right descendant in the section without a left child.
We already know it must have a higher label than its right child, so we now look at the right child that is connected to the original node in Figure \ref{fig:TreeLabbel} by a dotted line.
If this grandchild has a smaller label than the original node, the rearrangement poses no problem here. Otherwise, the process must begin again.
This is a rather convoluted process, and neither rule carries over in a satisfying way once $k>1$.
A similar area of study is to find what labelling conventions correspond to ascending $k$-Naples parking functions using our other bijections.
We have seen the result for both Dyck paths and binary trees, but we have not studied the condition on dissections or rooted non-crossing partitions.
Perhaps on these the condition is relatively nice and could help us understand rearrangements more.
Lastly, there are many objects counted by the Catalan numbers and their convolutions that we have not discussed here.
Finding and understanding more bijections could help us better understand the structure of $k$-Naples parking functions, and some objects may be better suited for describing rearrangements.
One could also look for bijections for ascending strictly $k$-Naples parking functions, ascending or descending parking preferences that are not $k$-Naples parking functions, or descending $k$-Naples parking functions whose ascending rearrangements are not $k$-Naples parking functions.
\section*{Acknowledgements}
Part of this research was performed with support from the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (Grant No.\ DMS-1440415).
PEH was supported through a Karen EDGE Fellowship.
ARVM was partially supported by the National Science Foundation under Awards DGE-1247392, KY-WV LSAMP Bridge to Doctorate HRD-2004710, and DMS-2102921.
% https://arxiv.org/abs/2109.01930
\title{Geometric bijections between spanning subgraphs and orientations of a graph}
\begin{abstract}
Let $G$ be a connected finite graph. Backman, Baker, and Yuen have constructed a family of explicit and easy-to-describe bijections $g_{\sigma,\sigma^*}$ between spanning trees of $G$ and $(\sigma,\sigma^*)$-compatible orientations, where the $(\sigma,\sigma^*)$-compatible orientations are the representatives of equivalence classes of orientations up to cycle-cocycle reversal which are determined by a cycle signature $\sigma$ and a cocycle signature $\sigma^*$. Their proof makes use of zonotopal subdivisions and the bijections $g_{\sigma,\sigma^*}$ are called \emph{geometric bijections}. In this paper, we extend the geometric bijections to subgraph-orientation correspondences. Moreover, we extend the geometric constructions accordingly. Our proofs are purely combinatorial, even for the geometric constructions. We also provide geometric proofs for partial results, which make use of zonotopal tiling, relate to Backman, Baker, and Yuen's method, and motivate our combinatorial constructions. Finally, we explain that the main results hold for \emph{regular matroids}.
\end{abstract}
\section{Introduction}\label{intro}
\subsection{Introduction to the main combinatorial results}
Let $G$ be a connected finite graph and let $E$ be its edge set.
This paper examines correspondences between spanning subgraphs and orientations of $G$. Obviously, the number of spanning subgraphs of $G$ equals the number of orientations of $G$. More interestingly, some types of spanning subgraphs are equinumerous to some types of equivalence classes of orientations, which can be counted by the \emph{Tutte polynomial} $T_G(x,y)$ of $G$; see Table~\ref{table1}. The Tutte polynomial counts the types of spanning subgraphs in the first column by definition. The equivalence classes of orientations in the second column were introduced and enumerated by Gioan \cite{G1}. It is therefore natural to look for bijections between them.
\begin{table}[h!]
\centering
\begin{tabular}{ |m{5.5cm}|m{6.5cm}|m{1.8cm}| }
\hline
types of spanning subgraphs & equivalence classes of orientations & cardinality\\
\hline
spanning subgraphs & orientations & $T_G(2,2)$\\
spanning forests & equivalence classes of orientations up to cycle reversal & $T_G(2,1)$ \\
connected spanning subgraphs & equivalence classes of orientations up to cocycle reversal & $T_G(1,2)$\\
spanning trees & equivalence classes of orientations up to cycle-cocycle reversal & $T_G(1,1)$\\
\hline
\end{tabular}
\caption{Types of spanning subgraphs, the corresponding equivalence classes of orientations, and their common cardinality as an evaluation of the Tutte polynomial.}
\label{table1}
\end{table}
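For the triangle graph the four counts in Table~\ref{table1} can be verified by brute force: the Tutte polynomial of the triangle is $T_G(x,y)=x^2+x+y$, so the table predicts $8$, $7$, $4$, and $3$. The following sketch is a sanity check of this instance, not part of the paper's argument.

```python
from itertools import chain, combinations

def n_components(vertices, edges):
    """Count connected components of (vertices, edges) via union-find."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    return len({find(v) for v in vertices})

V = [0, 1, 2]
E = [(0, 1), (1, 2), (2, 0)]              # the triangle graph
subsets = list(chain.from_iterable(combinations(E, r) for r in range(len(E) + 1)))

spanning  = len(subsets)                                                  # T(2,2) = 8
forests   = sum(n_components(V, S) == len(V) - len(S) for S in subsets)   # acyclic: T(2,1) = 7
connected = sum(n_components(V, S) == 1 for S in subsets)                 # T(1,2) = 4
trees     = sum(len(S) == len(V) - 1 and n_components(V, S) == 1
                for S in subsets)                                         # T(1,1) = 3
```

A subgraph on edge set $S$ is a forest exactly when it has $|V|-|S|$ components, which is the acyclicity test used above.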
In~\cite{BBY}, Backman, Baker, and Yuen construct a family of bijections for the last row of Table~\ref{table1}, called the \emph{geometric bijections}. We denote by $\mathcal{T}(G)$ the set of spanning trees of $G$. We call the equivalence classes of orientations up to cycle-cocycle reversal the \emph{cycle-cocycle reversal (equivalence) classes}, the set of which is denoted by $\mathcal{G}(G)$. Two orientations are in the same class if and only if one can obtain one orientation from the other by reversing some directed cycles and cocycles; see Section~\ref{combinatorial} for details and see Figure~\ref{F-1} for an example.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F-1.eps}
\caption{The triangle graph $G$ and its eight orientations. They are partitioned into the three cycle-cocycle reversal classes. From the top three orientations, one may get the others by reversing certain directed cycles or cocycles, which are indicated beside the line connecting two orientations.}
\label{F-1}
\end{figure}
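The partition of Figure~\ref{F-1} into three classes can be reproduced by brute force. The sketch below encodes an orientation of the triangle on vertices $0,1,2$ as three bits relative to the reference orientation $e_0\colon 0\to 1$, $e_1\colon 1\to 2$, $e_2\colon 2\to 0$ (bit $1$ meaning the edge is reversed); the only cycle is the triangle itself, and every cocycle is a vertex cut. The encoding is ours.

```python
from itertools import product

# Cut at vertex v: (its two incident edge indices, the bit values making
# both edges point out of v).  Reversing a directed cocycle flips both bits.
CUTS = [((0, 2), (0, 1)),    # cut at vertex 0
        ((0, 1), (1, 0)),    # cut at vertex 1
        ((1, 2), (1, 0))]    # cut at vertex 2

def moves(o):
    """Orientations reachable by reversing one directed cycle or cocycle."""
    out = []
    if o[0] == o[1] == o[2]:                       # the triangle is directed
        out.append(tuple(1 - b for b in o))
    for (i, j), (pi, pj) in CUTS:
        if (o[i], o[j]) in {(pi, pj), (1 - pi, 1 - pj)}:   # directed cocycle
            nxt = list(o)
            nxt[i], nxt[j] = 1 - o[i], 1 - o[j]
            out.append(tuple(nxt))
    return out

classes, seen = [], set()
for start in product((0, 1), repeat=3):            # the 8 orientations
    if start in seen:
        continue
    todo, cls = [start], set()
    while todo:                                    # closure under reversals
        o = todo.pop()
        if o not in cls:
            cls.add(o)
            todo.extend(moves(o))
    seen |= cls
    classes.append(cls)
```

This yields three cycle-cocycle reversal classes, of sizes $2$, $3$, and $3$, matching Figure~\ref{F-1}.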
In \cite{BBY}, the authors actually establish a family of bijections $g_{\sigma,\sigma^*}$ between $\mathcal{T}(G)$ and the set of \emph{$(\sigma,\sigma^*)$-compatible orientations} of $G$, where the latter set is a representative set of $\mathcal{G}(G)$ determined by a pair of \emph{acyclic cycle signature} $\sigma$ and \emph{acyclic cocycle signature} $\sigma^*$. Let us mention that their work is motivated by finding bijections between $\mathcal{T}(G)$ and the \emph{Jacobian group} (also known as the \emph{critical group}, \emph{sandpile group}, or \emph{Picard group}) $\text{Jac}(G)$ of $G$. There is a canonical simply transitive group action of $\text{Jac}(G)$ on $\mathcal{G}(G)$ \cite{B}, and hence the geometric bijection induces a bijection between $\mathcal{T}(G)$ and $\text{Jac}(G)$.
In this paper, we extend the geometric bijection to a subgraph-orientation correspondence. To present their bijections and ours, we now give the necessary definitions and properties.
A \emph{cycle signature} $\sigma$ is the choice of a direction for each cycle of the graph $G$. For each cycle $C$, we denote by $\sigma(C)$ the directed cycle we choose for $C$. By abusing notation, we also view $\sigma$ as the set of the directed cycles we choose: $\{\sigma(C):C\text{ is a cycle}\}$. By fixing a reference orientation of $G$, we can identify directed cycles with $\{0,\pm 1\}$-vectors in $\mathbb{R}^E$. The cycle signature $\sigma$ is said to be \emph{acyclic} if whenever $a_C$ are nonnegative reals with $\sum_C a_C\sigma(C)=0$ in $\mathbb{R}^E$ we have $a_C=0$ for all $C$, where the sum is over all cycles of $G$. An orientation is said to be \emph{$\sigma$-compatible} if any directed cycle in the orientation is in $\sigma$. An \emph{acyclic cocycle signature} $\sigma^*$ and a \emph{$\sigma^*$-compatible orientation} are defined similarly but for directed cocycles instead of directed cycles. An orientation is said to be \emph{($\sigma,\sigma^*$)-compatible} if it is both $\sigma$-compatible and $\sigma^*$-compatible.
For any orientation $\overrightarrow{O}$ of $G$, there exists a unique $(\sigma,\sigma^*)$-compatible orientation $\overrightarrow{O^{cp}}$ in the cycle-cocycle reversal class of $\overrightarrow{O}$, and $\overrightarrow{O}$ can be obtained by reversing disjoint directed cycles and cocycles in $\overrightarrow{O^{cp}}$ (see Corollary~\ref{cor0}). Hence the ($\sigma,\sigma^*$)-compatible orientations are representatives for the cycle-cocycle reversal classes.
Figure~\ref{F-2} gives an acyclic cycle signature $\sigma$ and an acyclic cocycle signature $\sigma^*$ of the triangle graph, which will be fixed for all the examples in this paper. In Figure~\ref{F-1}, the top three orientations are ($\sigma,\sigma^*$)-compatible. For more examples of the two signatures, see \cite[Example 1.1.1 and Example 1.1.3]{BBY}.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F-2.eps}
\caption{An acyclic cycle signature $\sigma$ and an acyclic cocycle signature $\sigma^*$ of the triangle graph.}
\label{F-2}
\end{figure}
Let $T\in\mathcal{T}(G)$ and $e\in E$. If $e\notin T$, then we call the unique cycle in $T\bigcup \{e\}$ the \emph{fundamental cycle} of $e$ with respect to $T$, denoted by $C(T,e)$; if $e\in T$, then we call the unique cocycle in $(E\backslash T)\bigcup \{e\}$ the \emph{fundamental cocycle} of $e$ with respect to $T$, denoted by $C^*(T,e)$.
The geometric bijection $g_{\sigma,\sigma^*}$ of Backman, Baker, and Yuen is as follows.
\begin{Prop}\label{tree}(\cite{BBY}, Theorem~1.3.1(1))
Fix acyclic signatures $\sigma$ and $\sigma^*$ of $G$.
Then the map $$g_{\sigma,\sigma^*}:\mathcal{T}(G)\longrightarrow\{(\sigma,\sigma^*)\text{-compatible orientations}\}$$ is a bijection, where $g_{\sigma,\sigma^*}$ sends $T$ to the orientation of $G$ in which we orient each $e\notin T$ according to its orientation in $\sigma(C(T,e))$ and each $e\in T$ according to its orientation in $\sigma^*(C^*(T,e))$.
\end{Prop}
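On the triangle, Proposition~\ref{tree} can be checked exhaustively. The sketch below fixes signatures of our own choosing (not necessarily those of Figure~\ref{F-2}): $\sigma$ orients the unique cycle as $0\to 1\to 2\to 0$, and $\sigma^*$ orients the cuts at vertices $0$ and $1$ outward and the cut at vertex $2$ inward, a choice one can check is acyclic. Orientations are encoded as bits against the reference orientation $e_0\colon 0\to 1$, $e_1\colon 1\to 2$, $e_2\colon 2\to 0$ (bit $1$ = reversed).

```python
SIGMA = (0, 0, 0)                               # sigma: the cycle 0->1->2->0
CUT_EDGES = {0: (0, 2), 1: (0, 1), 2: (1, 2)}   # edges of the cut at vertex v
CUT_SIGNS = {0: (0, 1), 1: (1, 0), 2: (0, 1)}   # sigma*'s bits on those edges

def g(tree):
    """Map a spanning tree (pair of edge indices) to an orientation: the
    non-tree edge follows sigma on its fundamental cycle (the whole
    triangle); each tree edge e follows sigma* on its fundamental cocycle,
    which here is the cut {e, non-tree edge}."""
    (f,) = set(range(3)) - set(tree)            # the unique non-tree edge
    bits = [None, None, None]
    bits[f] = SIGMA[f]
    for e in tree:
        v = next(v for v, es in CUT_EDGES.items() if set(es) == {e, f})
        bits[e] = CUT_SIGNS[v][CUT_EDGES[v].index(e)]
    return tuple(bits)

images = {T: g(T) for T in [(0, 1), (0, 2), (1, 2)]}
```

The three spanning trees map to three distinct orientations, and one can check by hand that each image is $(\sigma,\sigma^*)$-compatible and that the three images represent the three cycle-cocycle reversal classes.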
We extend $g^{-1}_{\sigma,\sigma^*}$ as follows. For an orientation $\overrightarrow{O}$,
recall that there exists a unique ($\sigma,\sigma^*$)-compatible orientation $\overrightarrow{O^{cp}}$ such that $\overrightarrow{O}$ is obtained from $\overrightarrow{O^{cp}}$ by reversing certain disjoint directed cycles $\{\overrightarrow{C_i}\}_{i\in I}$ and cocycles $\{\overrightarrow{C_j^*}\}_{j\in J}$. By applying the geometric bijection to $\overrightarrow{O^{cp}}$, we get a spanning tree $T=g^{-1}_{\sigma,\sigma^*}(\overrightarrow{O^{cp}})$. Then we add the edges in the reversed cycles to $T$ and delete the edges in the reversed cocycles from $T$, and hence we get a spanning subgraph $S=T\bigcup (\biguplus_{i\in I}C_i)\backslash (\biguplus_{j\in J}C_j^*)$. We denote $\overrightarrow{O}\mapsto S$ by $\varphi_{\sigma,\sigma^*}$. Note that if $\overrightarrow{O}$ is ($\sigma,\sigma^*$)-compatible, then no directed cycles or cocycles will be reversed and hence $S=T=g^{-1}_{\sigma,\sigma^*}(\overrightarrow{O})$. Therefore $\varphi_{\sigma,\sigma^*}$ extends $g^{-1}_{\sigma,\sigma^*}$.
Now we are ready to present the main results of this paper.
\begin{Th}\label{th}
Fix acyclic signatures $\sigma$ and $\sigma^*$ of $G$.
(1) The map
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}:\{\text{discrete orientations}\} & \longrightarrow & \{\text{spanning subgraphs}\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\bigcup (\biguplus_{i\in I}C_i)\backslash (\biguplus_{j\in J}C_j^*)
\end{eqnarray*}
is a bijection, where $\overrightarrow{O}$ is an orientation obtained by reversing disjoint directed cycles $\{\overrightarrow{C_i}\}_{i\in I}$ and directed cocycles $\{\overrightarrow{C_j^*}\}_{j\in J}$ in a ($\sigma,\sigma^*$)-compatible orientation $\overrightarrow{O^{cp}}$.
(2) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}: \{\sigma\text{-compatible orientations}\} & \longrightarrow & \{\text{spanning forests}\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\backslash (\biguplus_{j\in J}C_j^*).
\end{eqnarray*}
(3) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}:\{\sigma^*\text{-compatible orientations}\} & \longrightarrow & \{\text{connected spanning subgraphs}\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\bigcup (\biguplus_{i\in I}C_i).
\end{eqnarray*}
\end{Th}
Note that the $\sigma$-compatible (resp. $\sigma^*$-compatible) orientations form representatives for cycle reversal equivalence classes (resp. cocycle reversal equivalence classes) (see Proposition~\ref{sigmacompatible1}). Hence our construction establishes bijections for all the objects in Table~\ref{table1}.
\begin{Ex}\label{ex1}
This example illustrates Theorem~\ref{th}. Let $G$ be the triangle graph and let the two signatures be as in Figure~\ref{F-2}. Figure~\ref{F5} shows the extended geometric bijection $\varphi_{\sigma,\sigma^*}$. It is convenient to draw the orientation $\overrightarrow{O}$ and the subgraph $S$ together provided $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})=S$. The eight orientations are divided into the three cycle-cocycle reversal classes, where the three $(\sigma,\sigma^*)$-compatible orientations correspond to the three spanning trees. One can check that the seven forests correspond to the seven $\sigma$-compatible orientations (see also Figure~\ref{F4}) and the four connected spanning subgraphs correspond to the four $\sigma^*$-compatible orientations (see also Figure~\ref{F7}).
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F5.eps}
\caption{This is the bijection $\varphi_{\sigma,\sigma^*}$ obtained by applying Theorem~\ref{th} to the triangle graph $G$ and the two signatures in Figure~\ref{F-2}. If an orientation is mapped to a subgraph, then we draw them together. An edge is not in the subgraph if and only if it is dashed.}
\label{F5}
\end{figure}
\end{Ex}
Many bijections between spanning subgraphs and orientations with nice specializations are known. We mention two of them. Emeric Gioan and Michel Las Vergnas define the \emph{active bijection} for graphs with a \emph{total order} on $E$ \cite{GM}. Olivier Bernardi defines a bijection for graphs with a \emph{ribbon structure} (also known as \emph{combinatorial embedding}) \cite{Bernardi}. Both works interpret the evaluations of $T_G(x,y)$ for $x=0,1,2$ and $y=0,1,2$. (Our work does not deal with the case $x=0$ or $y=0$.) By a result of Chi Ho Yuen \cite[Theorem 20]{Yuen}, for planar graphs, Bernardi's bijection restricted to spanning trees coincides with the geometric bijection for some pair of signatures; see also \cite[Example 1.1.3]{BBY}. However, our extended geometric bijection is different from Bernardi's in general.
\subsection{Introduction to the geometric ideas behind the main theorem with an example}
Now we use an example to illustrate the geometric interpretation of the bijection $g_{\sigma,\sigma^*}$ in Proposition~\ref{tree}, which is due to \cite{BBY}, and explain the geometric ideas behind the bijection $\varphi_{\sigma,\sigma^*}$ in Theorem~\ref{th}(2), which will be treated rigorously in Section~\ref{geometric}. The full geometric interpretation of Theorem~\ref{th} will be discussed in Section~\ref{tiling}.
Let $G$ be the triangle graph with the reference orientation as shown in Figure~\ref{F1}. We identify the continuous orientations of $G$ with the cube $[0,1]^E$, whose vertices are the (discrete) orientations.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{F1.eps}
\caption{The reference orientation of the triangle graph $G$.}
\label{F1}
\end{figure}
The incidence matrix of $G$ is $\begin{pmatrix}
1 & -1 & 0\\
0 & 1 & -1\\
-1 & 0 & 1
\end{pmatrix}$. We delete the last row to get a full-rank matrix $D=\begin{pmatrix}
1 & -1 & 0\\
0 & 1 & -1
\end{pmatrix}$.
We restrict the linear map $D$ to the continuous orientations $[0,1]^E$. The restricted map is called $\psi$. The map $\psi$ projects the cube $[0,1]^E$ to the \emph{zonotope}\footnote{In general, a zonotope is a \emph{Minkowski sum} of closed line segments and the Minkowski sum is defined by $A+B=\{a+b:a\in A, b\in B\}$, where $A$ and $B$ are two subsets of $\mathbb{R}^n$.} $Z_D=\{\sum_{i=1}^{|E|}c_iv_i:0\leq c_i \leq 1\}$, where $v_i$'s are the columns of $D$; see Figure~\ref{F12}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{F12.eps}
\caption{The map $\psi$ and the zonotope $Z_D$. Here we draw $Z_D$ in the indicated coordinate system so that it looks like its section $\operatorname{im}\mu$; see Section~\ref{geometric} for the definition of $\operatorname{im}\mu$. }
\label{F12}
\end{figure}
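The projection $\psi$ can be tabulated directly on the vertices of the cube. With $D$ as above, the two cyclic orientations $(0,0,0)$ and $(1,1,1)$ collide at the centre of $Z_D$, and, assuming $\sigma$ orients the triangle along the reference cycle, the seven $\sigma$-compatible orientations land on seven distinct lattice points of $Z_D$, as the subdivision picture suggests. A quick check (the encoding is ours):

```python
from itertools import product

D = ((1, -1, 0), (0, 1, -1))          # the full-rank matrix from the text

def psi(x):
    """Apply D to a vertex x of the cube [0,1]^E."""
    return tuple(sum(row[i] * x[i] for i in range(3)) for row in D)

vertices = list(product((0, 1), repeat=3))    # the 8 discrete orientations
# Assuming sigma orients the triangle as the reference cycle (0,0,0),
# every orientation except (1,1,1) is sigma-compatible.
sigma_compatible = [x for x in vertices if x != (1, 1, 1)]
points = {psi(x) for x in sigma_compatible}
```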
Fix an acyclic cycle signature $\sigma$. The zonotope $Z_D$ has a polyhedral subdivision
$Z_D=\bigcup_{T\in\mathcal{T}(G)}Z_\sigma(T)$, where $Z_\sigma(T)$ is the image (under $\psi$) of all the continuous orientations where every edge $e\notin T$ is oriented according to the $\sigma$-oriented directed fundamental cycle $\sigma(C(T,e))$; see Figure~\ref{F2}(a). The vertices in the polyhedral subdivision correspond to the (discrete) $\sigma$-compatible orientations and hence by abusing language we may identify each of the vertices with the corresponding orientation. Every tile $Z_\sigma(T)$ is a parallelogram of dimension $|T|$. Each edge of $Z_\sigma(T)$ connects two $\sigma$-compatible orientations, which differ by one arc. The underlying edge of this arc is in the tree $T$. Moreover, the parallel edges of $Z_\sigma(T)$ correspond to the same edge in $T$ and the $|T|$ sets of parallel edges of $Z_\sigma(T)$ correspond to the $|T|$ edges in $T$.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F2.eps}
\caption{(a): The decomposition $Z_D=\bigcup_{T\in\mathcal{T}(G)}Z_\sigma(T)$. The three spanning trees indicated by solid lines are put in the middle of the corresponding parallelograms. By bi-orienting the edges in the solid lines, one gets the continuous orientations inside the parallelograms;\\
(b): The geometric bijection which sends a spanning tree to the right-bottom vertex of the corresponding parallelogram. }
\label{F2}
\end{figure}
Now we fix a generic vector $w^*$ in the space spanned by $Z_D$ and consider shifting the three parallelograms $Z_\sigma(T)$ along $w^*$. For sufficiently small positive $\epsilon$, the image of $Z_\sigma(T)$ under the shifting map $v\mapsto v+\epsilon w^*$ covers a unique vertex of $Z_\sigma(T)$, which turns out to be the $(\sigma,\sigma^*)$-compatible orientation $g_{\sigma,\sigma^*}(T)$, where $\sigma^*$ is an acyclic cocycle signature determined by $w^*$ and $g_{\sigma,\sigma^*}$ is defined in Proposition~\ref{tree}; see Figure~\ref{F2}(b) for the example and see Figure~\ref{F0}(a) for the general case. So the geometric bijection $g_{\sigma,\sigma^*}$ is the map sending each spanning tree $T$ of $G$ to the $(\sigma,\sigma^*)$-compatible orientation covered by $Z_\sigma(T)+\epsilon w^*$. For proofs of all these statements, see \cite{BBY}.
In order to extend the geometric bijection, we make use of two kinds of tiling related to the zonotope $Z_D$. First we use copies of $Z_D$ to tile its ambient space $\operatorname{im} D$, which is $\mathbb{R}^2$ in this example; see Figure~\ref{F3}(a). It can be proved that two adjacent copies of $Z_D$ differ by a sum of disjoint directed cocycles\footnote{Strictly speaking, they differ by $D\cdot \overrightarrow{C^*}$, where $\overrightarrow{C^*}$ is a sum of disjoint directed cocycles (viewed as $\{0,\pm 1\}$-vectors). }.
The direction $w^*$ induces a decomposition of $\mathbb{R}^2$ into \emph{half-open cells}\footnote{A half-open cell is a Minkowski sum of linearly independent half-open line segments.}, each of which contains a unique lattice point. Note that the three half-open cells in the original copy $Z_D$ correspond to the three spanning trees and the lattice points they contain are $(\sigma,\sigma^*)$-compatible orientations. Now we restrict the tiling of $\mathbb{R}^2$ to $Z_D$ and hence get the second kind of tiling, which is a decomposition of the zonotope into half-open cells; see Figure~\ref{F3}(b). In the tiling, each half-open cell contains a unique lattice point, which corresponds to a $\sigma$-compatible orientation and the vectors that generate the half-open cell correspond to a forest. Hence we get a map from the set of $\sigma$-compatible orientations to the set of forests; see Figure~\ref{F4}. It is easy to see that this map is $\varphi_{\sigma,\sigma^*}$ defined as in Theorem~\ref{th}(2). This geometric approach will be treated rigorously in Section~\ref{geometric}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.9]{F3.eps}
\caption{(a) Copies of $Z_D$ tile $\mathbb{R}^2$, although we only draw four copies. Three copies among them are obtained by subtracting three directed cocycles respectively from $Z_D$. The direction $w^*$ induces a half-open decomposition of $\mathbb{R}^2$ indicated by parallelograms in both solid and dashed lines. \\(b) The tiling of $Z_D$ obtained from restricting the decomposition in (a) to $Z_D$.}
\label{F3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F4.eps}
\caption{The bijection $\varphi_{\sigma,\sigma^*}$ between spanning forests and $\sigma$-compatible orientations derived from Figure~\ref{F3}(b). For conciseness, if a forest is mapped to an orientation, we combine them into one configuration and put it beside the vertex corresponding to the orientation.}
\label{F4}
\end{figure}
\begin{Rem}
In this paper, the two kinds of tiling are not original, but their usage in the context of geometric bijections is original. For the space tiling zonotopes, see \cite[Theorem 1]{DG}; for the decomposition of the zonotope into half-open cells, see \cite[Lemma 2.1]{S}.
\end{Rem}
By making use of the duality between cycles and cocycles, we get the map $\varphi_{\sigma,\sigma^*}$ in Theorem~\ref{th}(3). Inspired by the two maps in Theorem~\ref{th}(2) and (3), we were led to conjecture Theorem~\ref{th}(1), but we do not know a geometric proof of it. Instead, we give a purely combinatorial proof. To make the combinatorial approach self-contained, we also give a combinatorial proof of Proposition~\ref{tree}, which is proved in a geometric way in \cite{BBY}.
\subsection{The structure of the paper}
Now let us outline the structure of the paper.
In Section~\ref{combinatorial}, we introduce preliminaries and give the combinatorial proof of Proposition~\ref{tree} and Theorem~\ref{th}. In Section~\ref{geometric}, we briefly review the geometric proof of Proposition~\ref{tree} in \cite{BBY} and give the geometric proof of Theorem~\ref{th}(2). In Section~\ref{tiling}, we discuss a geometric interpretation of Theorem~\ref{th}, which is a half-open decomposition of the cube $[0,1]^E$. The proof is combinatorial. We also show that the restriction of the decomposition to the zonotope $Z_D$ (via $\psi$) is exactly the one used to derive Theorem~\ref{th}(2) (e.g. Figure~\ref{F3}(b)). Hence we get a combinatorial proof of the geometric construction behind Theorem~\ref{th}(2). Note that the work of \cite{BBY} is done in the setting of \emph{regular matroids}. While this paper is written mainly in the setting of graphs, we explain in Section~\ref{regular} that the main results hold for regular matroids.
\section{Combinatorial Proof of the Main Results}\label{combinatorial}
In this section, we prove Proposition~\ref{tree} and Theorem~\ref{th} by combinatorial methods.
\subsection{Preliminaries}
Let $G$ be a connected finite graph and let $E$ be its edge set. \emph{Loops} and \emph{multiple edges} are allowed.
For each edge $e\in E$, we may assign a direction to it by choosing one of its two endpoints to be the \emph{head} and the other one to be the \emph{tail} and hence get an \emph{arc} (directed from the tail to the head). Note that a loop has two possible directions. An \emph{orientation} of the graph $G$ is an assignment of a direction to each edge, typically denoted by $\overrightarrow{O}$.
A \emph{partial orientation} of the graph $G$ is an assignment of a direction to each edge in a subset of $E$, typically denoted by $\overrightarrow{P}$.
A subset $C$ of $E$ is called a \emph{cycle} if there exist distinct vertices $v_1,v_2, \cdots, v_n$ such that $C=\{\text{edge }v_iv_{i+1}:i=1,2,\cdots,n\}$,
where $v_{n+1}:=v_1$. Note that a cycle may be a loop. If we direct every edge in $C$ from $v_i$ to $v_{i+1}$ or direct every edge in $C$ from $v_{i+1}$ to $v_{i}$, then we get a \emph{directed cycle}, which is typically denoted by $\overrightarrow{C}$. Given a subset $W$ of vertices, the set of edges with one endpoint in $W$ and the other one not in $W$ is called a \emph{cut}. A \emph{cocycle} $C^*$ is a cut which is minimal for inclusion (equivalently it is a cut whose deletion increases the number of connected components by one). If we direct every edge in $C^*$ from $W$ to its complement or direct every edge in the other way, then we get a \emph{directed cocycle}, which is typically denoted by $\overrightarrow{C^*}$.
When an arc $\overrightarrow{e}$, a directed cycle $\overrightarrow{C}$, a directed cocycle $\overrightarrow{C^*}$, or a partial orientation $\overrightarrow{P}$ is specified, the corresponding underlying edge(s) will be denoted by $e$, $C$, $C^*$, or $P$, respectively. Viewing $\overrightarrow{O}$, $\overrightarrow{C}$, $\overrightarrow{C^*}$, and $\overrightarrow{P}$ as sets of arcs, it makes sense to write $\overrightarrow{e}\in\overrightarrow{O}$, etc. When $\overrightarrow{P}$ is a subset of $\overrightarrow{O}$, we say $\overrightarrow{P}$ is \emph{in} the orientation $\overrightarrow{O}$. In particular, $\overrightarrow{P}$ can be a directed cycle or a directed cocycle.
Given a spanning tree $T$ and an arc $\overrightarrow{e}$, we denote by $C(T,\overrightarrow{e})$ the fundamental cycle directed according to $\overrightarrow{e}$ when $e\notin T$, and denote by
$C^*(T,\overrightarrow{e})$ the fundamental cocycle directed according to $\overrightarrow{e}$ when $e\in T$.
It is a classical fact that every arc in an orientation belongs to either a directed cycle or a directed cocycle (but not both). Here we need the following stronger version.
\begin{Lemma}\label{3-painting}
Let $E_c$ and $E_d$ be two disjoint subsets of $E$, $\overrightarrow{P}$ be a partial orientation with support $E\backslash (E_c\cup E_d)$, and $\overrightarrow{e}$ be an arc in $\overrightarrow{P}$. Then there exists a directed cycle $\overrightarrow{C}$ containing $\overrightarrow{e}$ such that $\overrightarrow{C}$ agrees with $\overrightarrow{P}$ on any edge they share and $C\bigcap E_d=\emptyset$, or there exists a directed cocycle $\overrightarrow{C^*}$ containing $\overrightarrow{e}$ such that $\overrightarrow{C^*}$ agrees with $\overrightarrow{P}$ on any edge they share and $C^*\bigcap E_c=\emptyset$.
\end{Lemma}
\begin{proof}
Assume the arc $\overrightarrow{e}$ is directed from vertex $u$ to vertex $w$. Let $W$ be the set of vertices reachable from $w$, where one may use the arcs in $\overrightarrow{P}$ and both directions of the edges in $E_c$. If $u\in W$, then there exists a directed cycle $\overrightarrow{C}$ as desired. Otherwise $\overrightarrow{e}$ belongs to the directed cut $\overrightarrow{C^*_0}$ oriented from the complement of $W$ to $W$. By the definition of $W$, no edge of $E_c$ crosses this cut and every arc of $\overrightarrow{P}$ crossing the cut is directed into $W$. Then the directed cocycle (minimal directed cut) $\overrightarrow{C^*}$ inside $\overrightarrow{C^*_0}$ that contains $\overrightarrow{e}$ is as desired.
\end{proof}
\begin{Rem}
(1) We do not need the fact that the two cases in Lemma~\ref{3-painting} are exclusive. For the proof of the exclusiveness, see either of the two references below.
(2) Lemma~\ref{3-painting} (including the proof) is essentially the same as Proposition 2.5 in \cite{BH}, which is a paper studying \emph{fourientations} of a graph.
(3) Lemma~\ref{3-painting} can be generalized to \emph{regular matroids} or even \emph{oriented matroids}, and it is called the \emph{3-painting axiom}; see \cite[Theorem 3.4.4]{BVSWZ}. However, our proof cannot be generalized accordingly, because it makes use of vertices.
\end{Rem}
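To make the roles of $E_c$ and $E_d$ concrete, the following small worked instance (our own illustration, not taken from the references above) runs Lemma~\ref{3-painting} on a triangle.

```latex
% A minimal worked instance of Lemma \ref{3-painting} on a triangle.
Let $G$ be the triangle on vertices $v_1,v_2,v_3$ with arcs
$\overrightarrow{e_1}=(v_1,v_2)$, $\overrightarrow{e_2}=(v_2,v_3)$,
$\overrightarrow{e_3}=(v_1,v_3)$, and take $\overrightarrow{e}=\overrightarrow{e_1}$.
First let $E_c=E_d=\emptyset$, so that
$\overrightarrow{P}=\{\overrightarrow{e_1},\overrightarrow{e_2},\overrightarrow{e_3}\}$.
The set $W$ of vertices reachable from $v_2$ is $\{v_2,v_3\}$, which misses $u=v_1$,
so the lemma yields the directed cocycle
$\overrightarrow{C^*}=\{\overrightarrow{e_1},\overrightarrow{e_3}\}$, directed from
$\{v_1\}$ to its complement; it agrees with $\overrightarrow{P}$ and trivially
satisfies $C^*\bigcap E_c=\emptyset$.
Now let $E_c=\{e_3\}$ and $E_d=\emptyset$, so that
$\overrightarrow{P}=\{\overrightarrow{e_1},\overrightarrow{e_2}\}$. Since $e_3$ may be
used in both directions, $v_1\in W$, and the lemma yields the directed cycle
$v_1\to v_2\to v_3\to v_1$; it agrees with $\overrightarrow{P}$ on the shared edges
$e_1,e_2$ and satisfies $C\bigcap E_d=\emptyset$.
```

Note how enlarging $E_c$ switches the outcome from the cocycle case to the cycle case: edges of $E_c$ are forbidden in the cocycle but free to appear in the cycle.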
By fixing a reference orientation, we identify the set of orientations of $G$ with the set $\{0,1\}^E$, where the point $(1,1,\cdots,1)$ corresponds to the reference orientation. Let $D$ be the modified incidence matrix of the reference orientation with the last row removed. To be precise, if we denote the vertices of $G$ by $v_1, v_2, \cdots, v_{r+1}$ and the arcs in the reference orientation by $\overrightarrow{e_1}, \overrightarrow{e_2}, \cdots, \overrightarrow{e_n}$, then $D$ is the $r \times n$ matrix $(d_{ij})$ whose entries are
\begin{equation*}
d_{ij}=
\begin{cases}
1, &\text{if vertex $v_i$ is the head of non-loop arc $\overrightarrow{e_j}$;} \\
-1, &\text{if vertex $v_i$ is the tail of non-loop arc $\overrightarrow{e_j}$;}\\
0, & \text{otherwise}.
\end{cases}
\end{equation*}
See Section~\ref{intro} for an example of $D$.
One minor issue here is that when the graph $G$ has only one vertex ($r=0$), the matrix $D$ is not defined. In this (trivial) case, it is easy to check that Theorem~\ref{th} holds. Note that some geometric results do not hold in this case, whether we use the incidence matrix or the matrix $D$. For example, Proposition~\ref{bbymain} does not hold because $Z_\sigma(T)$ is a single point if we use the incidence matrix and is not defined if we use the matrix $D$. From now on, we assume $G$ contains at least two vertices.
\begin{Rem}
The advantage of removing one row is that it makes $D$ of full rank. Hence in Section~\ref{geometric}, the zonotope $Z_D$ spans the whole space rather than a hyperplane. It does not matter which row is removed. Also note that when we use a matrix to represent a regular matroid in Section~\ref{regular}, the matrix is chosen to be of full rank.
\end{Rem}
Now we give some general facts concerning the directed cycles and cocycles in terms of linear algebra. We refer readers to \cite{Biggs} for the details. The vector space $\mathbb{R}^E$ has an orthogonal decomposition $\ker(D)\oplus\operatorname{im}(D^T)$ with respect to the standard inner product $\langle\cdot,\cdot\rangle$. We call $\ker(D)$ the \emph{cycle space} and $\operatorname{im}(D^T)$ the \emph{cocycle space}. Any directed cycle (resp. directed cocycle), written as a $\{0,\pm1\}$-vector according to the reference orientation, is an element of the cycle space (resp. cocycle space). For lattice points, we define the two abelian groups $\ker_\mathbb{Z}(D)=\ker(D)\bigcap \mathbb{Z}^E$ and $\operatorname{im}_\mathbb{Z}(D^T)=\operatorname{im}(D^T)\bigcap \mathbb{Z}^E$. For any spanning tree $T$, the directed fundamental cycles (resp. directed fundamental cocycles) form a basis of $\ker(D)$ (resp. $\operatorname{im}(D^T)$), and an integral basis of $\ker_\mathbb{Z}(D)$ (resp. $\operatorname{im}_\mathbb{Z}(D^T)$).
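As a concrete illustration (our own, with the coordinates ordered $(e_1,e_2,e_3)$), these spaces can be written out explicitly for a triangle.

```latex
% The cycle and cocycle spaces of a triangle, written out explicitly.
Let $G$ be the triangle on vertices $v_1,v_2,v_3$ with reference arcs
$\overrightarrow{e_1}=(v_1,v_2)$, $\overrightarrow{e_2}=(v_2,v_3)$,
$\overrightarrow{e_3}=(v_1,v_3)$. Removing the row of $v_3$ gives
\[
D=\begin{pmatrix} -1 & 0 & -1 \\ 1 & -1 & 0 \end{pmatrix}.
\]
The cycle space $\ker(D)$ is spanned by $(1,1,-1)^T$, which is exactly the directed
cycle $v_1\to v_2\to v_3\to v_1$ written as a vector according to the reference
orientation, and the cocycle space $\operatorname{im}(D^T)$ is spanned by the two rows
of $D$; for instance, the directed cocycle $(1,0,1)^T$ (the cut around $v_1$, directed
out of $v_1$) is the negative of the first row. One checks directly that
$\langle(1,1,-1),(1,0,1)\rangle=0$, as the orthogonal decomposition
$\mathbb{R}^E=\ker(D)\oplus\operatorname{im}(D^T)$ requires.
```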
We recall the \emph{cycle reversal (equivalence) classes}, \emph{cocycle reversal (equivalence) classes}, and \emph{cycle-cocycle reversal (equivalence) classes} of orientations of $G$ introduced in \cite{G1}. If $\overrightarrow{C}$ is a directed cycle in $\overrightarrow{O}$, then a \emph{cycle reversal} replaces each arc in $\overrightarrow{C}$ with the opposite arc, and hence gives a new orientation, which is $\overrightarrow{O}-\overrightarrow{C}$ written as a vector, where $\overrightarrow{O}$ is a \{0,1\}-vector and $\overrightarrow{C}$ is a $\{0,\pm1\}$-vector. The equivalence relation generated by cycle reversals defines the cycle reversal classes of orientations of $G$. Similarly, we define the cocycle reversal classes. The equivalence relation generated by cycle and cocycle reversals defines the cycle-cocycle reversal classes.
When $\overrightarrow{P}$ is \emph{in} the orientation $\overrightarrow{O}$, we denote by $-_{\overrightarrow{P}}\overrightarrow{O}$ or $-_{P}\overrightarrow{O}$ the orientation obtained by reversing $\overrightarrow{P}$ in $\overrightarrow{O}$. For example, if $\overrightarrow{C}$ is a directed cycle in $\overrightarrow{O}$, then $\overrightarrow{O}-\overrightarrow{C}=-_{\overrightarrow{C}}\overrightarrow{O}$.
We need the following lemmas.
\begin{Lemma}\cite[Lemma 6.7]{Z}\label{conformal0}
(1) Let $\overrightarrow{u}\in\mathbb{R}^E$ be a vector in $\ker(D)$. Then $\overrightarrow{u}$ can be written as a sum of directed cycles with positive coefficients $\sum k_i\overrightarrow{C_i}$ where for each edge $e$ of each $C_i$, the sign of $e$ in $\overrightarrow{C_i}$ agrees with the sign of $e$ in $\overrightarrow{u}$.
(2) Let $\overrightarrow{u}\in\mathbb{R}^E$ be a vector in $\operatorname{im}(D^T)$. Then $\overrightarrow{u}$ can be written as a sum of directed cocycles with positive coefficients $\sum k_i\overrightarrow{C_i^*}$ where for each edge $e$ of each $C_i^*$, the sign of $e$ in $\overrightarrow{C_i^*}$ agrees with the sign of $e$ in $\overrightarrow{u}$.
\end{Lemma}
Sometimes the sum of directed cycles and the sum of directed cocycles are also denoted by $\overrightarrow{C}$ and $\overrightarrow{C^*}$, respectively.
\begin{Lemma}\label{conformal}
(1) If $\overrightarrow{C}\in\ker_\mathbb{Z}(D)$ is a $\{0,\pm1\}$-vector, then $\overrightarrow{C}$ is a sum of disjoint directed cycles.
(2) If $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$ is a $\{0,\pm1\}$-vector, then $\overrightarrow{C^*}$ is a sum of disjoint directed cocycles.
\end{Lemma}
\begin{proof}
(1) This is from \cite[Lemma 4.1.1]{BBY}.
(2) The proof of \cite[Lemma 4.1.1]{BBY} is derived from Lemma~\ref{conformal0} (or \cite[Lemma 6.7]{Z}), so the dual argument works.
\end{proof}
The next lemma gives two simple descriptions of two orientations being in the same cycle reversal class.
\begin{Lemma}\label{cyclelemma}
Let $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ be two orientations of $G$. Then the following are equivalent:
(a) $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ are in the same cycle reversal class;
(b) $\overrightarrow{O_1}-\overrightarrow{O_2}\in\ker_\mathbb{Z}(D)$;
(c) $\overrightarrow{O_1}$ can be obtained from $\overrightarrow{O_2}$ by reversing disjoint directed cycles.
\end{Lemma}
\begin{proof}
It is trivial that (a) implies (b) and (c) implies (a). By Lemma~\ref{conformal}, (b) implies (c).
\end{proof}
Similar results hold for cocycles.
Recall the definitions of the acyclic cycle signature $\sigma$, the acyclic cocycle signature $\sigma^*$, the $\sigma$-compatible orientations, the $\sigma^*$-compatible orientations, and the $(\sigma,\sigma^*)$-compatible orientations in Section~\ref{intro}. The next result shows that $\sigma$-compatible orientations are representatives of cycle reversal classes.
\begin{Prop}\cite[Prop. 4.1.4]{BBY}\label{sigmacompatible1}
Let $\sigma$ be an acyclic cycle signature. Then each cycle reversal class contains a unique $\sigma$-compatible orientation.
\end{Prop}
A result similar to Proposition~\ref{sigmacompatible1} holds for cocycles.
\begin{Cor}\label{cor0}
Let $\sigma$ be an acyclic cycle signature, $\sigma^*$ be an acyclic cocycle signature, and $\overrightarrow{O}$ be an orientation.
(1) There exists a unique $\sigma$-compatible orientation $\overrightarrow{O^\sigma}$ in the cycle reversal class of $\overrightarrow{O}$, and $\overrightarrow{O}$ can be obtained by reversing disjoint directed cycles in $\overrightarrow{O^\sigma}$.
(2) There exists a unique $\sigma^*$-compatible orientation $\overrightarrow{O^{\sigma^*}}$ in the cocycle reversal class of $\overrightarrow{O}$, and $\overrightarrow{O}$ can be obtained by reversing disjoint directed cocycles in $\overrightarrow{O^{\sigma^*}}$.
(3) There exists a unique $(\sigma,\sigma^*)$-compatible orientation $\overrightarrow{O^{cp}}$ in the cycle-cocycle reversal class of $\overrightarrow{O}$, and $\overrightarrow{O}$ can be obtained by reversing disjoint directed cycles and cocycles in $\overrightarrow{O^{cp}}$. Moreover, if $\overrightarrow{O}$ is already $\sigma$-compatible, then only directed cocycles are reversed; if $\overrightarrow{O}$ is already $\sigma^*$-compatible, then only directed cycles are reversed.
\end{Cor}
\begin{proof}
By Proposition~\ref{sigmacompatible1} and Lemma~\ref{cyclelemma}, we have (1). (2) is dual to (1). Because every arc in an orientation belongs to a directed cycle or a directed cocycle but not both, (3) holds.
\end{proof}
\subsection{Combinatorial proof of Proposition~\ref{tree}}
We fix a graph $G$, a reference orientation, an acyclic cycle signature $\sigma$, and an acyclic cocycle signature $\sigma^*$.
Let $e$ be an edge. For an orientation $\overrightarrow{O}$, we denote by $\overrightarrow{O}(e)$ the arc $\overrightarrow{e}$ (whose underlying edge is $e$) in $\overrightarrow{O}$. If $e$ is in the support of a directed cycle $\overrightarrow{C}$ (resp. directed cocycle $\overrightarrow{C^*}$), we denote by $\overrightarrow{C}(e)$ (resp. $\overrightarrow{C^*}(e)$) the arc $\overrightarrow{e}$ in the directed cycle $\overrightarrow{C}$ (resp. directed cocycle $\overrightarrow{C^*}$). More generally, by viewing $\overrightarrow{v}\in\mathbb{Z}^E$ as a multiset of arcs, we denote by $\overrightarrow{v}(e)$ the arc $\overrightarrow{e}$ (with multiplicity one) in $\overrightarrow{v}$ if $e$ is in the support of $\overrightarrow{v}$. In this case, we also write $\overrightarrow{e}\in\overrightarrow{v}$.
The following lemma plays an important role in our proofs.
\begin{Lemma}\label{fundamental}
Fix a spanning tree $T$.
(1) For any $\overrightarrow{C}\in\ker_\mathbb{Z}(D)$, $\overrightarrow{C}=\sum_{e\notin T,\overrightarrow{e}\in\overrightarrow{C}}C(T,\overrightarrow{e})$,
(2) and for any $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$, $\overrightarrow{C^*}=\sum_{e\in T,\overrightarrow{e}\in\overrightarrow{C^*}}C^*(T,\overrightarrow{e})$, where $\overrightarrow{e}$ appears in the summations as many times as its multiplicity.
\end{Lemma}
\begin{proof}
(1) Because the directed fundamental cycles form a basis of $\ker(D)$, we can write $\overrightarrow{C}$ as a linear combination of them. Note that each edge $e\notin T$ is in exactly one fundamental cycle, so by comparing the coefficients of $e$ on both sides, we get the desired formula.
(2) The proof is similar.
\end{proof}
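The following small worked instance (our own illustration) may help to see Lemma~\ref{fundamental} in action.

```latex
% Lemma \ref{fundamental} checked on a triangle.
Let $G$ be the triangle on vertices $v_1,v_2,v_3$ with reference arcs
$\overrightarrow{e_1}=(v_1,v_2)$, $\overrightarrow{e_2}=(v_2,v_3)$,
$\overrightarrow{e_3}=(v_1,v_3)$, and fix the spanning tree $T=\{e_1,e_2\}$.
With the coordinates ordered $(e_1,e_2,e_3)$, the directed fundamental cycle of the
non-tree edge $e_3$ is $C(T,\overrightarrow{e_3})=(-1,-1,1)$: traverse $v_1\to v_3$
along $\overrightarrow{e_3}$ and return along $e_2$ and $e_1$ against their reference
directions. Now take $\overrightarrow{C}=(1,1,-1)\in\ker_\mathbb{Z}(D)$, the directed
cycle $v_1\to v_2\to v_3\to v_1$. Its only arc on a non-tree edge is
$-\overrightarrow{e_3}$, and indeed
\[
\overrightarrow{C}=C(T,-\overrightarrow{e_3})=-C(T,\overrightarrow{e_3})=(1,1,-1),
\]
as part (1) predicts. Dually, take
$\overrightarrow{C^*}=(1,0,1)\in\operatorname{im}_\mathbb{Z}(D^T)$, the cut around
$v_1$ directed out of $v_1$. Its only arc on a tree edge is $\overrightarrow{e_1}$,
and $\overrightarrow{C^*}=C^*(T,\overrightarrow{e_1})=(1,0,1)$, as part (2) predicts.
```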
Recall that the map $g_{\sigma,\sigma^*}$ defined in Proposition~\ref{tree} sends a spanning tree $T\in\mathcal{T}(G)$ to the orientation $\overrightarrow{O}$ where $C(T,\overrightarrow{e})\in\sigma$ or $C^*(T,\overrightarrow{e})\in\sigma^*$
for any arc $\overrightarrow{e}\in\overrightarrow{O}$. The next lemma is a key tool in this subsection. It uses the information carried by $g_{\sigma,\sigma^*}(T)$, rather than the two signatures, to give a sufficient condition for directed cycles (resp. directed cocycles) to be $\sigma$-compatible (resp. $\sigma^*$-compatible).
\begin{Lemma}\label{mainlemma}
Assume $g_{\sigma,\sigma^*}(T)=\overrightarrow{O}$, where $T$ is a spanning tree.
(1) Let $\overrightarrow{C}\in\ker_\mathbb{Z}(D)$. If for any arc $\overrightarrow{e}\in\overrightarrow{C}$ such that $e\notin T$, we have $\overrightarrow{e}\in\overrightarrow{O}$, then $\overrightarrow{C}$ is a sum of $\sigma$-compatible directed cycles. In particular, if $\overrightarrow{C}$ is a directed cycle satisfying the condition, then $\overrightarrow{C}$ is $\sigma$-compatible.
(2) Let $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$. If for any arc $\overrightarrow{e}\in\overrightarrow{C^*}$ such that $e\in T$, we have $\overrightarrow{e}\in\overrightarrow{O}$, then $\overrightarrow{C^*}$ is a sum of $\sigma^*$-compatible directed cocycles. In particular, if $\overrightarrow{C^*}$ is a directed cocycle satisfying the condition, then $\overrightarrow{C^*}$ is $\sigma^*$-compatible.
\end{Lemma}
\begin{proof}
(1) By Lemma~\ref{fundamental}, $\overrightarrow{C}=\sum_{e\notin T,\overrightarrow{e}\in\overrightarrow{C}}C(T,\overrightarrow{e})$. By assumption, $\overrightarrow{e}\in\overrightarrow{O}$ and hence the directed cycle $C(T,\overrightarrow{e})$ is $\sigma$-compatible by the definition of $g_{\sigma,\sigma^*}$.
If $\overrightarrow{C}$ is a directed cycle, then $\overrightarrow{C}$ is $\sigma$-compatible because the signature $\sigma$ is acyclic.
(2) The proof is similar.
\end{proof}
We now show that $g_{\sigma,\sigma^*}$ is a map from $\mathcal{T}(G)$ to
$\{(\sigma,\sigma^*)\text{-compatible orientations}\}$.
\begin{Lemma}
For any $T\in\mathcal{T}(G)$, the orientation $g_{\sigma,\sigma^*}(T)$ is $(\sigma,\sigma^*)$-compatible.
\end{Lemma}
\begin{proof}
By definition, we need to show any directed cycle $\overrightarrow{C}$ in $g_{\sigma,\sigma^*}(T)$ is $\sigma$-compatible
and any directed cocycle $\overrightarrow{C^*}$ in $g_{\sigma,\sigma^*}(T)$ is $\sigma^*$-compatible. This is a direct consequence of Lemma~\ref{mainlemma}.
\end{proof}
Next we show that $g_{\sigma,\sigma^*}$ is injective. The following result is actually stronger than injectivity; the extra strength will be useful later.
\begin{Prop}\label{proptree}
Let $\overrightarrow{O_1}=g_{\sigma,\sigma^*}(T_1)$ and $\overrightarrow{O_2}=g_{\sigma,\sigma^*}(T_2)$, where $T_1$ and $T_2$ are two
different spanning trees. Then there exists an edge $e$ such that $e\in T_1\bigtriangleup T_2$ and $\overrightarrow{O_1}(e)\neq \overrightarrow{O_2}(e)$. In particular, $g_{\sigma,\sigma^*}$ is injective.
\end{Prop}
\begin{proof}
Denote $E_c=T_1\bigcap T_2$ and $E_d=E\backslash(T_1\bigcup T_2)$. Assume by contradiction that there is no such edge, which means $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ agree on $E\backslash (E_c\cup E_d)$.
Let $\overrightarrow{P}$ be the (non-empty) partial orientation obtained by restricting the orientation $-_{T_1}\overrightarrow{O_1}$ to $E\backslash (E_c\cup E_d)$. Note that $\overrightarrow{P}$ is also the restriction of $-_{T_1}\overrightarrow{O_2}$ to $E\backslash (E_c\cup E_d)$.
Apply Lemma~\ref{3-painting} to any arc in $\overrightarrow{P}$. Then one of the following two cases occurs.
Case 1: There is a directed cycle $\overrightarrow{C}$ such that it agrees with $\overrightarrow{P}$ on any edge they share and $C\bigcap E_d=\emptyset$.
Let $\overrightarrow{e}$ be an arc of $\overrightarrow{C}$. Then $e$ is in $T_1\backslash T_2$, $T_2\backslash T_1$ or $E_c$. When $e\in T_1\backslash T_2$, which is equivalent to $e\in C\backslash T_2$, $\overrightarrow{e}=-\overrightarrow{O_2}(e)$; when $e\in T_2\backslash T_1$, which is equivalent to $e\in C\backslash T_1$, $\overrightarrow{e}=\overrightarrow{O_1}(e)$.
Applying Lemma~\ref{mainlemma} to $T_1$ and $\overrightarrow{C}$, we get $\overrightarrow{C}$ is $\sigma$-compatible.
Applying Lemma~\ref{mainlemma} to $T_2$ and $-\overrightarrow{C}$, we get $-\overrightarrow{C}$ is $\sigma$-compatible. Hence we get a contradiction.
Case 2: There is a directed cocycle $\overrightarrow{C^*}$ such that it agrees with $\overrightarrow{P}$ on any edge they share and $C^*\bigcap E_c=\emptyset$.
This case is dual to Case 1 and hence ends with a contradiction.
\end{proof}
By Corollary~\ref{cor0}(3), $(\sigma,\sigma^*)$-compatible orientations are representatives of the cycle-cocycle reversal classes. By \cite[Corollary 4.13]{G1}, the number of the cycle-cocycle reversal classes is $T_G(1,1)$ and hence is equal to the number of the spanning trees. Note that the counterpart of this enumerative fact also holds for regular matroids; see \cite[Theorem 3.10]{G2}. So we get the bijectivity.
\begin{Cor}\cite{BBY}
The map $$g_{\sigma,\sigma^*}:\mathcal{T}(G)\longrightarrow\{(\sigma,\sigma^*)\text{-compatible orientations}\}$$ is a bijection.
\end{Cor}
This concludes the combinatorial proof of Proposition~\ref{tree}, which is one of the main results of \cite{BBY}.
\subsection{Combinatorial proof of Theorem~\ref{th}}
We now prove our main result, Theorem~\ref{th}, and a ``local bijectivity'' property.
We extend $g_{\sigma,\sigma^*}^{-1}$ to $$\varphi_{\sigma,\sigma^*}:\{\text{discrete orientations}\}\longrightarrow \{\text{spanning subgraphs}\}.$$
By Corollary~\ref{cor0}, for any orientation $\overrightarrow{O}$, there exists a unique $(\sigma,\sigma^*)$-compatible orientation $\overrightarrow{O^{cp}}$ such that $\overrightarrow{O}=-_{\overrightarrow{P}}\overrightarrow{O^{cp}}$, where $\overrightarrow{P}$ is the disjoint union of certain directed cycles $\{\overrightarrow{C_i}\}_{i\in I}$ and directed cocycles $\{\overrightarrow{C^*_j}\}_{j\in J}$. Define $$\varphi_{\sigma,\sigma^*}(\overrightarrow{O})=g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup(\biguplus_{i\in I}C_i)\backslash(\biguplus_{j\in J}C^*_j).$$
Note that $\varphi_{\sigma,\sigma^*}=g_{\sigma,\sigma^*}^{-1}$ when restricted to the set $\{(\sigma,\sigma^*)\text{-compatible orientations}\}$. The domain and codomain of $\varphi_{\sigma,\sigma^*}$ clearly have the same cardinality, so it is enough to show the injectivity. As in Proposition~\ref{proptree}, we prove a stronger property, which will be related to the half-open decomposition in Section~\ref{tiling}.
\begin{Prop}\label{propsubset}
Let $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ be two different orientations of $G$. Then there exists an edge $e$ such that $\overrightarrow{O_1}(e)\neq\overrightarrow{O_2}(e)$ and $e\in \varphi_{\sigma,\sigma^*}(\overrightarrow{O_1})\bigtriangleup \varphi_{\sigma,\sigma^*}(\overrightarrow{O_2})$. In particular, $\varphi_{\sigma,\sigma^*}$ is injective.
\end{Prop}
\begin{proof}
Denote $E_{\rightrightarrows}=\{e:\overrightarrow{O_1}(e)=\overrightarrow{O_2}(e)\}$ and $E_{\rightleftarrows}=\{e:\overrightarrow{O_1}(e)\neq\overrightarrow{O_2}(e)\}$; the latter set is non-empty.
For $k=1,2$, we denote $-_{\overrightarrow{P_k}}\overrightarrow{O_k}=\overrightarrow{O_k^{cp}}$ and $T_k=\varphi_{\sigma,\sigma^*}(\overrightarrow{O_k^{cp}})$, where $\overrightarrow{O_k^{cp}}$ is a $(\sigma,\sigma^*)$-compatible orientation, $\overrightarrow{P_k}$ is a partial orientation consisting of certain disjoint directed cycles $\{\overrightarrow{C_{k,i}}\}_{i\in I_k}$ and directed cocycles $\{\overrightarrow{C^*_{k,j}}\}_{j\in J_k}$ in $\overrightarrow{O_k}$,
and $T_k$ is a spanning tree.
The proof consists of two parts.
For the first part, we claim if one of the following cases happens, we get the desired edge.
Case (a): There exists an edge $e\in E_{\rightleftarrows}\backslash T_1$ such that $e\in (\biguplus_{i\in I_2} C_{2,i})\backslash(\biguplus_{i\in I_1} C_{1,i})$.
Case (b): There exists an edge $e\in E_{\rightleftarrows}\backslash T_2$ such that $e\in (\biguplus_{i\in I_1} C_{1,i})\backslash(\biguplus_{i\in I_2} C_{2,i})$.
Case (a*): There exists an edge $e\in E_{\rightleftarrows}\bigcap T_1$ such that $e\in (\biguplus_{j\in J_2} C^*_{2,j})\backslash(\biguplus_{j\in J_1} C^*_{1,j})$.
Case (b*): There exists an edge $e\in E_{\rightleftarrows}\bigcap T_2$ such that $e\in (\biguplus_{j\in J_1} C^*_{1,j})\backslash(\biguplus_{j\in J_2} C^*_{2,j})$.
Case (c): $(\biguplus_{i\in I_2}C_{2,i})\bigcap(\biguplus_{j\in J_1}C^*_{1,j})\neq\emptyset$.
Case (c*): $(\biguplus_{i\in I_1}C_{1,i})\bigcap(\biguplus_{j\in J_2}C^*_{2,j})\neq\emptyset$.
If Case (a) holds, then $e\notin\varphi_{\sigma,\sigma^*}(\overrightarrow{O_1})$ and $e\in\varphi_{\sigma,\sigma^*}(\overrightarrow{O_2})$ by the definition of $\varphi_{\sigma,\sigma^*}$. Hence we obtain the desired edge.
By exchanging the subscripts $1$ and $2$ or taking the dual case, we also obtain the desired edge in Case (b), (a*), and (b*).
If Case (c) holds, then $C_{2,i}\bigcap C^*_{1,j}\neq\emptyset$ for some $i,j$. The intersection contains at least two edges and one of the edges, called $e$, is oriented in opposite ways by $\overrightarrow{C_{2,i}}$ and $\overrightarrow{C^*_{1,j}}$. Then $e\in E_{\rightleftarrows}$, $e\in\varphi_{\sigma,\sigma^*}(\overrightarrow{O_2})$, and $e\notin\varphi_{\sigma,\sigma^*}(\overrightarrow{O_1})$.
By exchanging the subscripts $1$ and $2$, we obtain the desired edge in Case (c*).
For the second part of the proof, we consider $\overrightarrow{C}=\sum_{i\in I_2}\overrightarrow{C_{2,i}}-\sum_{i\in I_1}\overrightarrow{C_{1,i}}\in\ker_\mathbb{Z}(D)$ and $\overrightarrow{C^*}=\sum_{j\in J_2}\overrightarrow{C^*_{2,j}}-\sum_{j\in J_1}\overrightarrow{C^*_{1,j}}\in\operatorname{im}_\mathbb{Z}(D^T)$. They are both $\{0,\pm 1, \pm 2\}$-vectors.
Now we claim that if none of the cases in the first part of the proof holds, then $\overrightarrow{C}=\overrightarrow{C^*}=0$.
Assume by contradiction that $\overrightarrow{C}\neq 0$.
We consider applying Lemma~\ref{mainlemma} to $T_1$ and $\overrightarrow{C}$. For any arc $\overrightarrow{e}\in\overrightarrow{C}$ such that $e\notin T_1$, we will show $\overrightarrow{e}\in\overrightarrow{O_1^{cp}}$ and hence $\overrightarrow{C}$ is a sum of $\sigma$-compatible directed cycles.
If the underlying edge $e$ of this arc is in $\biguplus C_{1,i}$, then $\overrightarrow{e}=-(\biguplus \overrightarrow{C_{1,i}})(e)\in\overrightarrow{O_1^{cp}}$.
If $e\notin\biguplus C_{1,i}$, then $e\in\biguplus C_{2,i}$. Moreover, $e$ must be in $E_{\rightrightarrows}$ because otherwise $e\in E_{\rightleftarrows}$ and hence Case (a) holds.
Hence $\overrightarrow{e}=\overrightarrow{O_2}(e)=\overrightarrow{O_1}(e)$. Because Case (c) does not hold, $e\in \biguplus C_{2,i}$ implies $e\notin \biguplus C^*_{1,j}$. By $e\notin \biguplus C^*_{1,j}$ and $e\notin \biguplus C_{1,i}$, we get $\overrightarrow{O_1}(e)=\overrightarrow{O_1^{cp}}(e)$. So $\overrightarrow{e}=\overrightarrow{O_1^{cp}}(e)\in\overrightarrow{O_1^{cp}}$.
Therefore $\overrightarrow{C}$ is a sum of $\sigma$-compatible directed cycles. Similarly, we apply Lemma~\ref{mainlemma} to $T_2$ and $-\overrightarrow{C}=\sum_{i\in I_1}\overrightarrow{C_{1,i}}-\sum_{i\in I_2}\overrightarrow{C_{2,i}}$. Then we get $-\overrightarrow{C}$ is also a sum of $\sigma$-compatible directed cycles, which contradicts that the cycle signature $\sigma$ is acyclic.
Therefore, $\overrightarrow{C}=0$.
By a dual argument, $\overrightarrow{C^*}=0$.
It remains to show the proposition is true when $\overrightarrow{C}=\overrightarrow{C^*}=0$.
In this case, the cycles $C_{1,i}$ and $C_{2,i}$ and the cocycles $C^*_{1,j}$ and $C^*_{2,j}$ are all subsets of $E_{\rightrightarrows}$ and hence $\{e:\overrightarrow{O_1^{cp}}(e)\neq\overrightarrow{O_2^{cp}}(e)\}=E_{\rightleftarrows}$. By Proposition~\ref{proptree}, there exists an edge $e\in E_{\rightleftarrows}$ such that $e\in T_1\bigtriangleup T_2$. Since $e\notin(\biguplus_{i\in I_1}C_{1,i})\bigcup(\biguplus_{j\in J_1}C^*_{1,j})$, $e\in \varphi_{\sigma,\sigma^*}(\overrightarrow{O_1})\bigtriangleup \varphi_{\sigma,\sigma^*}(\overrightarrow{O_2})$ as desired.
\end{proof}
As a consequence, Theorem~\ref{th} holds.
Now we study some specializations of the bijection $\varphi_{\sigma,\sigma^*}$. Theorem~\ref{th}(2) and (3) follow from the next lemma.
\begin{Lemma}\label{l1}
Let $\overrightarrow{O}$ be an orientation and $\overrightarrow{O^{cp}}$ be the $(\sigma,\sigma^*)$-compatible orientation such that one gets $\overrightarrow{O}$ by reversing disjoint directed cycles $\{\overrightarrow{C_i}\}_{i\in I}$ and directed cocycles $\{\overrightarrow{C^*_j}\}_{j\in J}$ in $\overrightarrow{O^{cp}}$. Then
(1) $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ is a forest $\Leftrightarrow$ $I=\emptyset$ $\Leftrightarrow$ $\overrightarrow{O}$ is $\sigma$-compatible;
(2) $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ is a connected spanning subgraph $\Leftrightarrow$ $J=\emptyset$ $\Leftrightarrow$ $\overrightarrow{O}$ is $\sigma^*$-compatible.
\end{Lemma}
\begin{proof}
(1) Recall that $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})=g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup(\biguplus_{i\in I}C_i)\backslash(\biguplus_{j\in J}C^*_j)$, where $g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})$ is a spanning tree.
So $I=\emptyset$ $\Rightarrow$ $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ is a forest. Conversely, if $I\neq\emptyset$, then because $(\biguplus_{i\in I}C_i)\cap(\biguplus_{j\in J}C^*_j)=\emptyset$, each cycle $C_i$ is entirely contained in $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$, which is therefore not a forest.
The other equivalence is due to Corollary~\ref{cor0}(3).
(2) The proof is similar.
\end{proof}
The next proposition shows that $\varphi_{\sigma,\sigma^*}$ is ``locally bijective'', which means that if we fix a partial orientation or part of a subgraph, then the restricted $\varphi_{\sigma,\sigma^*}$ is still bijective.
\begin{Prop}\label{local}
(1) Let $\overrightarrow{O_P}$ be a partial orientation of $G$, where $P$ is the set of underlying edges of $\overrightarrow{O_P}$. Then the map
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*,\overrightarrow{O_P}}:\{\text{orientations of }E\backslash P\} & \longrightarrow & \{\text{subsets of }E\backslash P\} \\
\overrightarrow{O_{E\backslash P}} & \mapsto & \varphi_{\sigma,\sigma^*}(\overrightarrow{O_{E\backslash P}}\cup\overrightarrow{O_P})\backslash P
\end{eqnarray*}
is a bijection. In other words, if one restricts $\varphi_{\sigma,\sigma^*}$ to the orientations containing $\overrightarrow{O_P}$ and ignores the edges in $P$, then one gets a bijection.
(2) Let $E_c$ and $E_d$ be two disjoint subsets of $E$. Then the map
\begin{eqnarray*}
\varphi^{-1}_{\sigma,\sigma^*, E_c, E_d}: \{\text{subsets of }E\backslash (E_c\cup E_d)\} & \longrightarrow & \{\text{orientations of }E\backslash (E_c\cup E_d)\} \\
H & \mapsto & \varphi^{-1}_{\sigma,\sigma^*}(H\cup E_c)\text{ restricted to }E\backslash (E_c\cup E_d)
\end{eqnarray*}
is a bijection. In other words, if one restricts $\varphi^{-1}_{\sigma,\sigma^*}$ to the spanning subgraphs that include $E_c$ and exclude $E_d$, and ignores the edges in $E_c\cup E_d$, then one gets a bijection.
\end{Prop}
\begin{proof}
By Proposition~\ref{propsubset}, these two maps are injective. Clearly for each map the domain and codomain have the same cardinality, so they are bijections.
\end{proof}
\begin{Ex}
Here we illustrate the maps $\varphi_{\sigma,\sigma^*,\overrightarrow{O_P}}$ and $\varphi^{-1}_{\sigma,\sigma^*, E_c, E_d}$ of Proposition~\ref{local}.
We choose the bijection $\varphi_{\sigma,\sigma^*}$ to be the one in Example~\ref{ex1}; see Figure~\ref{F5}. In Figure~\ref{F51}(a), we choose $\overrightarrow{O_P}$ to be one bottom arc directed from left to right. Then there are four orientations in Figure~\ref{F5} extending $\overrightarrow{O_P}$. By ignoring this bottom edge, we get four different subsets and four different orientations of $E\backslash P$. The naturally inherited correspondence between them is $\varphi_{\sigma,\sigma^*,\overrightarrow{O_P}}$.
To fix the restriction on subgraphs, we need to specify which edges must be included and which must be excluded. In Figure~\ref{F51}(b), we include the left edge and exclude the right edge. Then there are two configurations satisfying this condition. By ignoring these two edges in the two configurations, we get the bijection $\varphi^{-1}_{\sigma,\sigma^*, E_c, E_d}$.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F51.eps}
\caption{(a) We restrict the bijection $\varphi_{\sigma,\sigma^*}$ in Figure~\ref{F5} to the orientations extending $\protect\overrightarrow{O_P}$ and get $\varphi_{\sigma,\sigma^*,\protect\overrightarrow{O_P}}$. Technically speaking, the bottom arc in the four configurations of $\varphi_{\sigma,\sigma^*,\protect\overrightarrow{O_P}}$ should be erased. Here we still draw it to emphasize the relation between $\varphi_{\sigma,\sigma^*}$ and $\varphi_{\sigma,\sigma^*,\protect\overrightarrow{O_P}}$.\\
(b) We restrict the bijection $\varphi_{\sigma,\sigma^*}$ in Figure~\ref{F5} to the subgraphs that include the left edge and exclude the right edge and get $\varphi^{-1}_{\sigma,\sigma^*, E_c, E_d}$.
}
\label{F51}
\end{figure}
\end{Ex}
\section{The geometric proof of Theorem~\ref{th}(2)}\label{geometric}
This section aims to give a geometric proof of Theorem~\ref{th}(2), which extends the geometric proof of Proposition~\ref{tree} in \cite{BBY} and is independent of the combinatorial proof in Section~\ref{combinatorial}.
\subsection{An introduction to the geometric proof of Proposition~\ref{tree} in \cite{BBY}}
Fix a graph $G$, a reference orientation, an acyclic cycle signature $\sigma$, and an acyclic cocycle signature $\sigma^*$.
We identify the set of \emph{continuous orientations} of $G$ with the cube $[0,1]^E$. In a continuous orientation, if the $e$-th coordinate is in $(0,1)$, then we say the edge $e$ is \emph{bi-oriented}.
We recall some definitions concerning the continuous orientations in \cite{BBY}.
A \emph{continuous cycle reversal} with respect to a directed cycle $\overrightarrow{C}$ replaces a continuous orientation $\overrightarrow{O}$ with $\overrightarrow{O}-\epsilon\overrightarrow{C}$ for some $\epsilon>0$ provided the new continuous orientation is still in $[0,1]^E$. Here $\overrightarrow{C}$ is not necessarily a directed cycle in $\overrightarrow{O}$. The equivalence relation generated by continuous cycle reversals defines the \emph{continuous cycle reversal (equivalence) classes}.
A continuous orientation $\overrightarrow{O}$ is called \emph{$\sigma$-compatible} if one cannot flow $\overrightarrow{O}$ along any directed cycle in $\sigma$, i.e., $\overrightarrow{O}+\epsilon\sigma(C)\notin [0,1]^E$ for any $\epsilon >0$ and any cycle $C$.
The \emph{continuous cocycle (equivalence) classes} and \emph{$\sigma^*$-compatible continuous orientations} are defined similarly.
The next lemma gives a geometric description of a cycle signature being acyclic.
\begin{Lemma}\cite[Lemma 3.1.1]{BBY}\label{signature}
Let $\sigma$ be a cycle signature of $G$. Then $\sigma$ is acyclic if and only if there exists $w\in\mathbb{R}^E$ such that $\langle w, \sigma(C)\rangle>0$ for each cycle $C$ of $G$.
\end{Lemma}
Recall that $D$ is the modified incidence matrix of the reference orientation with one row removed.
We restrict the linear map $D$ to the continuous orientations $[0,1]^E$. The image is the \emph{zonotope} $Z_D=\{\sum_{i=1}^{|E|}c_iv_i:0\leq c_i \leq 1\}$, where the $v_i$'s are the columns of $D$. We call this map $\psi$, i.e.,
\begin{eqnarray*}
\psi:[0,1]^E & \longrightarrow & Z_D \\
\overrightarrow{O} & \mapsto & D\cdot \overrightarrow{O},
\end{eqnarray*}
where $\overrightarrow{O}$ is viewed as a column vector.
The next proposition gives a simple description of the \emph{fibers} of $\psi$.
\begin{Prop}\cite[Prop. 3.1.4 and Prop. 4.1.3]{BBY}\label{fiber}
(1) The map $\psi$ gives a bijection between continuous cycle reversal classes of continuous orientations of $G$ and points of the zonotope $Z_D$. In other words, two continuous orientations $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ are in the same continuous cycle reversal class if and only if $\overrightarrow{O_1}-\overrightarrow{O_2}\in\ker(D)$.
(2) The map $\psi$ induces a bijection between (discrete) cycle reversal classes of $G$ and lattice points of the zonotope $Z_D$.
\end{Prop}
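Proposition~\ref{fiber}(2) can be verified by brute force on a small example. The following Python sketch is our own illustration (not taken from \cite{BBY}): it builds an incidence-type matrix $D$ for the triangle graph with one vertex row removed (the sign convention is chosen for illustration), encodes each discrete orientation as a $\{0,1\}$-vector relative to the reference orientation, and counts the distinct images under $\psi$. Since $\ker(D)$ is spanned by $(1,1,1)$, exactly the two cyclic orientations collide, leaving $7$ lattice points of $Z_D$, one per cycle reversal class.

```python
from itertools import product

# Triangle graph with reference orientation e1: 0->1, e2: 1->2, e3: 2->0.
# Incidence matrix (head +1, tail -1) with the row of vertex 2 removed.
D = [[-1, 0, 1],   # row of vertex 0
     [ 1, -1, 0]]  # row of vertex 1

def psi(o):
    """Image under D of an orientation written as a 0/1 vector
    (coordinate 1 = edge agrees with the reference orientation)."""
    return tuple(sum(row[i] * o[i] for i in range(3)) for row in D)

orientations = list(product((0, 1), repeat=3))
images = {psi(o) for o in orientations}

# ker(D) is spanned by (1,1,1), so only the two cyclic orientations
# (0,0,0) and (1,1,1) have the same image.
print(len(orientations), len(images))  # 8 orientations, 7 lattice points
```

The count $7$ also equals the number of forests of the triangle (all edge subsets except the full cycle), consistent with the Ehrhart-polynomial count in Subsection~\ref{subsectionduality}.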
The next result is the continuous version of Proposition~\ref{sigmacompatible1}.
\begin{Prop}\cite[Prop. 3.2.1]{BBY}\label{sigmacompatible2}
Let $\sigma$ be an acyclic cycle signature. Then each continuous cycle reversal class contains a unique $\sigma$-compatible continuous orientation.
\end{Prop}
Results similar to Lemma~\ref{signature}, Proposition~\ref{fiber}, and Proposition~\ref{sigmacompatible2} hold for cocycles.
By \cite[Remark 3.2.2]{BBY}, the map
\begin{eqnarray*}
\mu:Z_D & \longrightarrow & [0,1]^E \\
z & \mapsto & \text{the unique }\sigma\text{-compatible continuous orientation in }\psi^{-1}(z)
\end{eqnarray*}
is a continuous \emph{section} to the map $\psi$, i.e., $\mu$ is continuous and $\psi\circ\mu$ is the identity map.
By abusing language, we also call the image of a section a section.
\noindent\textbf{Notation.} We denote $\operatorname{im}\mu$, the set of $\sigma$-compatible continuous orientations, by $S_\sigma$.
By Proposition~\ref{fiber}, Proposition~\ref{sigmacompatible1}, and Proposition~\ref{sigmacompatible2}, we have the following corollary.
\begin{Cor}\label{section}
The map $\psi$ restricted to $S_\sigma$ is a bijection to $Z_D$. Moreover, $\psi$ restricted to the lattice points of $S_\sigma$, i.e., the $\sigma$-compatible orientations, is a bijection to the lattice points of $Z_D$.
\end{Cor}
For each spanning tree $T$, define the face $\text{Face}_\sigma(T)$ of $[0,1]^E$ to be the set of the continuous orientations where each edge $e\notin T$ is oriented according to $\sigma(C(T,e))$. Define the parallelepiped $Z_\sigma(T)=\psi(\text{Face}_\sigma(T))\subseteq Z_D$.
The following result shows a nice structure of $Z_D$.
\begin{Prop}\cite[Prop. 3.4.1]{BBY}\label{polyhedralzonotope}
Fix an acyclic cycle signature $\sigma$ of $G$.
(1) $Z_D=\bigcup_{T\in\mathcal{T}(G)}Z_\sigma(T)$.
(2) For two different spanning trees $T_1$ and $T_2$, $Z_\sigma(T_1)$ and $Z_\sigma(T_2)$ are different and their intersection is either a common face or the empty set.
(3) The vertices of $Z_\sigma(T)$'s correspond via $\mu$ to the $\sigma$-compatible (discrete) orientations (and hence by Corollary~\ref{section}, the vertices of $Z_\sigma(T)$'s are exactly the lattice points of $Z_D$).
\end{Prop}
We also need the counterpart of Proposition~\ref{polyhedralzonotope}(1) for $S_\sigma$.
\begin{Prop}\cite{BBY}\label{verticesofS}
$S_\sigma=\bigcup_{T\in\mathcal{T}(G)}\text{Face}_\sigma(T)$.
\end{Prop}
The fact that $\text{Face}_\sigma(T)\subseteq S_\sigma$ is proved implicitly in \cite[Prop. 3.3.1(2)]{BBY}; see also Lemma~\ref{facelemma}. The other direction $S_\sigma\subseteq\bigcup_{T\in\mathcal{T}(G)}\text{Face}_\sigma(T)$ is proved implicitly in \cite[Prop. 3.4.1]{BBY}.
Now we consider shifting the parallelepiped $Z_\sigma(T)$ in the ambient Euclidean space of $Z_D$ along a generic vector $w^*$; see Figure~\ref{F0}(a). For sufficiently small positive $\epsilon$, the image of $Z_\sigma(T)$ under the shifting map $v\mapsto v+\epsilon w^*$ contains a unique vertex of $Z_\sigma(T)$, which corresponds via $\mu$ to a $\sigma$-compatible orientation. This defines a map sending each spanning tree to a $\sigma$-compatible orientation. Fix an acyclic cocycle signature $\sigma^*$. By \cite[Theorem 3.5.3]{BBY}, there exists\footnote{The paper \cite{BBY} works with the row zonotope instead of the zonotope when explaining the geometric interpretation of $g_{\sigma,\sigma^*}$. By \cite[Lemma 3.5.1]{BBY}, the two zonotopes differ by the linear transformation $D$, so the geometric idea works for both zonotopes. The vector $w^*$ can be chosen to be $D\cdot w'$, where $w'$ is the shifting vector used for the row zonotope in \cite[Lemma 3.5.2]{BBY}. For simplicity, we only claim the existence of such a vector.} a vector $w^*$ (depending on $\sigma^*$) such that this map is exactly the map $g_{\sigma,\sigma^*}$ defined in Proposition~\ref{tree}. By \cite[Theorem 3.5.5]{BBY}, $g_{\sigma,\sigma^*}$ is a bijection onto the set of $(\sigma,\sigma^*)$-compatible orientations, which is exactly the statement of Proposition~\ref{tree}. This bijection is called the \emph{geometric bijection}. We summarize this paragraph with the following proposition.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F0.eps}
\caption{(a): We shift the parallelepiped $Z_\sigma(T)$ along $w^*$, then it will cover the vertex $\psi(g_{\sigma,\sigma^*}(T))$. \\(b): The half-open cell $\text{hoc}_{\sigma,\sigma^*}(T)$ defined in Subsection~\ref{hoc}. }
\label{F0}
\end{figure}
\begin{Prop}\cite{BBY}\label{bbymain}
Fix acyclic signatures $\sigma$ and $\sigma^*$ of $G$. Then there exists a vector $w^*$ such that, for sufficiently small positive $\epsilon$ and any spanning tree $T$, the image of $Z_\sigma(T)$ under the shifting map $v\mapsto v+\epsilon w^*$ contains a unique vertex of $Z_\sigma(T)$ corresponding (via $\mu$) to the $(\sigma,\sigma^*)$-compatible orientation $g_{\sigma,\sigma^*}(T)$ defined in Proposition~\ref{tree}. Moreover, the map $g_{\sigma,\sigma^*}$ is bijective from $\mathcal{T}(G)$ to $\{(\sigma,\sigma^*)\text{-compatible orientations}\}$.
\end{Prop}
See Section~\ref{intro} for an example.
\subsection{An extension of the section $S_\sigma$}
The goal of this subsection is to prove that $$\widetilde{S_\sigma}:=S_\sigma+\operatorname{im}_\mathbb{Z}(D^T)$$ is the image of a section to the linear map $D$ that extends $\mu$. Intuitively, one may picture $\widetilde{S_\sigma}$ as in Figure~\ref{F3}(a), where the copies of $Z_D$ are viewed as copies of $S_\sigma$.
\begin{Rem}
As introduced in Section~\ref{intro}, we want to make use of the fact that copies of the zonotope $Z_D$ tile the ambient space $\operatorname{im} D$. For technical reasons, we make the tiling for the section $\widetilde{S_\sigma}$ and transfer the tiling to $\operatorname{im} D$ via $\psi$. So we do not make use of the result \cite[Theorem 1]{DG} for the space tiling zonotopes directly.
\end{Rem}
We start with a lemma, where intuitively $[0,1]^E+\operatorname{im}_\mathbb{Z}(D^T)$ looks like Figure~\ref{F3}(a) if one imagines the copies of $Z_D$ as copies of $[0,1]^E$.
\begin{Lemma}\label{cubepile}
$\mathbb{R}^E=[0,1]^E+\operatorname{im}_\mathbb{Z}(D^T)+\ker D$.
\end{Lemma}
\begin{proof}
Let $\overrightarrow{O}\in\mathbb{R}^E$. Denote the coordinate of $\overrightarrow{O}$ corresponding to an edge $e$ by $e(\overrightarrow{O})$. Define the distance function $N(\overrightarrow{O})=\max_{e\in E}|e(\overrightarrow{O})-1/2|$. Note that $\overrightarrow{O}\in [0,1]^E$ if and only if $N(\overrightarrow{O})\leq 1/2$.
This function measures the ``distance'' between $\overrightarrow{O}$ and the cube $[0,1]^E$.
We first show that (i) if $\overrightarrow{O}\notin [0,1]^E$, then $N(\overrightarrow{O}+\overrightarrow{v^*}+\overrightarrow{v})<N(\overrightarrow{O})$ for some $\overrightarrow{v^*}\in\operatorname{im}_\mathbb{Z}(D^T)$ and some $\overrightarrow{v}\in\ker D$. Then we show that (ii) when a vector $\overrightarrow{O'}$ varies in
$\overrightarrow{O}+\operatorname{im}_\mathbb{Z}(D^T)+\ker D$, $N(\overrightarrow{O'})$ can attain its minimum value. Hence the lemma holds.
To prove (i), we assume $\overrightarrow{O}\notin [0,1]^E$. Let $P$ be the set of edges $e$ such that $|e(\overrightarrow{O})-1/2|=N(\overrightarrow{O})$. Note that each such edge satisfies $e(\overrightarrow{O})>1$ or $e(\overrightarrow{O})<0$. We construct a partial orientation $\overrightarrow{P}$ as follows. If $e\in P$ satisfies $e(\overrightarrow{O})>1$, then we put the arc $\overrightarrow{e}$ oriented by the reference orientation into $\overrightarrow{P}$; if $e\in P$ satisfies $e(\overrightarrow{O})<0$, then we put the arc $\overrightarrow{e}$ oriented against the reference orientation into $\overrightarrow{P}$. (Recall that the coordinate vector of the reference orientation is $(1,1,\cdots,1)$.)
Now we apply Lemma~\ref{3-painting} to the data $\overrightarrow{P}$, $E_c=E\backslash P$, $E_d=\emptyset$, and any arc $\overrightarrow{e}\in\overrightarrow{P}$. If $\overrightarrow{e}$ belongs to a directed cocycle $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$, then the new vector $\overrightarrow{O'}=\overrightarrow{O}-\overrightarrow{C^*}$ satisfies $N(\overrightarrow{O'})\leq N(\overrightarrow{O})$ and the number of edges $e'$ such that $|e'(\overrightarrow{O'})-1/2|=N(\overrightarrow{O})$ decreases. In the other case, $\overrightarrow{e}$ belongs to a directed cycle $\overrightarrow{C}\in\ker D$, and then for a sufficiently small positive number $\epsilon$, the new vector $\overrightarrow{O'}=\overrightarrow{O}-\epsilon\overrightarrow{C}$ satisfies $N(\overrightarrow{O'})\leq N(\overrightarrow{O})$ and the number of edges $e'$ such that $|e'(\overrightarrow{O'})-1/2|=N(\overrightarrow{O})$ decreases. So we can repeat this process until $N(\overrightarrow{O'})<N(\overrightarrow{O})$, and hence (i) holds.
It remains to prove (ii). Note that the function $N$ is continuous, so we will make use of compactness. Consider the orthogonal decomposition $\ker(D)\oplus\operatorname{im}(D^T)$ of $\mathbb{R}^E$ and its topology. Note that $\operatorname{im}_\mathbb{Z}(D^T)$ is closed in $\operatorname{im}(D^T)$ and $\ker(D)$ is trivially closed in itself, so $\overrightarrow{O}+\operatorname{im}_\mathbb{Z}(D^T)+\ker D$ is closed in $\mathbb{R}^E$. Let $r=N(\overrightarrow{O})$. Then the set
$(\overrightarrow{O}+\operatorname{im}_\mathbb{Z}(D^T)+\ker D)\bigcap \{\overrightarrow{O'}\in\mathbb{R}^E:N(\overrightarrow{O'})\leq r\}$ is compact and hence (ii) holds.
\end{proof}
\begin{Cor}\label{decomposition2}
$\mathbb{R}^E=S_\sigma+\operatorname{im}_\mathbb{Z}(D^T)+\ker D$.
\end{Cor}
\begin{proof}
By Lemma~\ref{cubepile}, any vector in $\mathbb{R}^E$ can be written as $\overrightarrow{O}+\overrightarrow{v^*}+\overrightarrow{v}$, where $\overrightarrow{O}\in [0,1]^E$, $\overrightarrow{v^*}\in\operatorname{im}_\mathbb{Z}(D^T)$, and $\overrightarrow{v}\in\ker D$.
By Proposition~\ref{sigmacompatible2}, $\overrightarrow{O}$ and some $\overrightarrow{O'}\in S_\sigma$ are in the same continuous cycle reversal class. By Proposition~\ref{fiber}(1), $\overrightarrow{O}-\overrightarrow{O'}\in\ker(D)$. So we have the desired sum $\overrightarrow{O'}+\overrightarrow{v^*}+(\overrightarrow{v}+\overrightarrow{O}-\overrightarrow{O'})$.
\end{proof}
\noindent\textbf{Notation.} Let $A$, $B$, and $C$ be three sets of vectors in the same vector space. We write $C=A\dot{+}B$ to mean that any vector in $C$ can be written uniquely as the sum of a vector in $A$ and a vector in $B$.
\begin{Prop}\label{decomposition3}
$\mathbb{R}^E=\widetilde{S_\sigma}\dot{+}\ker D$.
\end{Prop}
\begin{proof}
Let $\overrightarrow{O_1},\overrightarrow{O_2}\in S_\sigma\subseteq [0,1]^E$, $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$, and $\overrightarrow{C}\in\ker D$. By Corollary~\ref{decomposition2}, it suffices to show that $\overrightarrow{O_1}-\overrightarrow{O_2}-\overrightarrow{C^*}=\overrightarrow{C}$ implies $\overrightarrow{C}=0$.
Note that $\ker D\perp \operatorname{im}(D^T)$. So $\langle\overrightarrow{O_1}-\overrightarrow{O_2}-\overrightarrow{C^*}, \overrightarrow{C^*}\rangle=0$ and hence $\langle\overrightarrow{O_1}-\overrightarrow{O_2}, \overrightarrow{C^*}\rangle=\langle\overrightarrow{C^*}, \overrightarrow{C^*}\rangle$. By writing this equality in terms of coordinates, it is not hard to see that $\overrightarrow{O_1}-\overrightarrow{O_2}\in[-1,1]^E$ forces $\overrightarrow{C^*}$ to be a $\{0,\pm 1\}$-vector and forces $\overrightarrow{O_1}-\overrightarrow{O_2}$, restricted to the underlying edges of $\overrightarrow{C^*}$, to agree with $\overrightarrow{C^*}$.
This implies that $\overrightarrow{O_3}:=\overrightarrow{O_1}-\overrightarrow{C^*}$ is the continuous orientation obtained by reversing $\overrightarrow{C^*}$ in $\overrightarrow{O_1}$. Note that by Lemma~\ref{conformal}, $\overrightarrow{C^*}$ is a sum of disjoint directed cocycles. Because $\overrightarrow{O_1}\in S_\sigma$, we have $\overrightarrow{O_3}\in S_\sigma$. By $\overrightarrow{O_3}-\overrightarrow{O_2}=\overrightarrow{C}$ and Proposition~\ref{fiber}(1), $\overrightarrow{O_2}$ and $\overrightarrow{O_3}$ are in the same continuous cycle reversal class. Then by Proposition~\ref{sigmacompatible2}, $\overrightarrow{C}=0$.
\end{proof}
\begin{Cor}\label{tinycor}
$\widetilde{S_\sigma}\bigcap [0,1]^E=S_\sigma$.
\end{Cor}
\begin{proof}
$\widetilde{S_\sigma}\bigcap [0,1]^E\supseteq S_\sigma$ is trivial.
For the other direction, let $\overrightarrow{O}\in \widetilde{S_\sigma}$ be a continuous orientation. By Proposition~\ref{sigmacompatible2} and Proposition~\ref{fiber}(1), $\overrightarrow{O}=\overrightarrow{O}'+\overrightarrow{C}$ for some $\overrightarrow{O}'\in S_\sigma\subseteq\widetilde{S_\sigma}$ and $\overrightarrow{C}\in\ker D$. By Proposition~\ref{decomposition3}, we compare both sides of $\overrightarrow{O}+0=\overrightarrow{O}'+\overrightarrow{C}$ and get $\overrightarrow{O}=\overrightarrow{O}'\in S_\sigma$.
\end{proof}
By Proposition~\ref{decomposition3}, we get a section to the linear map $D$ which sends a point $p$ in $\operatorname{im}(D)$ to the unique point in the intersection of its preimage and $\widetilde{S_\sigma}$. It is easy to see that this section extends $\mu$. It is often more convenient to work with a bijection than with a section, so we now define one. We denote the rank of $D$ by $r$, so that $\operatorname{im}(D)=\mathbb{R}^r\supseteq Z_D$.
\begin{Prop}\label{bigsection}
The map
\begin{eqnarray*}
\widetilde{\psi}:\widetilde{S_\sigma} & \longrightarrow & \mathbb{R}^r \\
\overrightarrow{O} & \mapsto & D\cdot \overrightarrow{O}
\end{eqnarray*}
is a bijection, where $\overrightarrow{O}$ is viewed as a column vector. Moreover, $\widetilde{\psi}$ sends $S_\sigma$ to $Z_D$ and preserves lattice points, i.e., $\overrightarrow{O}\in\mathbb{Z}^E$ if and only if $\widetilde{\psi}(\overrightarrow{O})\in\mathbb{Z}^r$.
\end{Prop}
\begin{proof}
By Proposition~\ref{decomposition3}, $\widetilde{\psi}$ is a bijection. By Corollary~\ref{section}, $\widetilde{\psi}$ sends $S_\sigma$ to $Z_D$. It is trivial that $\overrightarrow{O}\in\mathbb{Z}^E$ implies $\widetilde{\psi}(\overrightarrow{O})\in\mathbb{Z}^r$.
For the other direction, write $\overrightarrow{O}=\overrightarrow{O'}+\overrightarrow{C^*}$,
where $\overrightarrow{O'}\in S_\sigma$ and $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$. Then $\widetilde{\psi}(\overrightarrow{O})=\psi(\overrightarrow{O'})+D\cdot\overrightarrow{C^*}$ and hence $\psi(\overrightarrow{O'})\in\mathbb{Z}^r$. By Corollary~\ref{section}, $\overrightarrow{O'}\in\mathbb{Z}^E$, so $\overrightarrow{O}\in\mathbb{Z}^E$.
\end{proof}
\subsection{Polyhedral subdivision and half-open decomposition of $\widetilde{S_\sigma}$}\label{hoc}
By a \emph{polyhedral subdivision} of a subset $S\subseteq \mathbb{R}^n$ we mean writing $S$ as the union of a collection of $n$-dimensional polytopes, any two of which intersect in a common face (possibly empty). By Proposition~\ref{polyhedralzonotope}, we have a polyhedral subdivision of $Z_D$ (and hence of $S_\sigma$ via $\mu$). Now we generalize it to $\widetilde{S_\sigma}$.
\begin{Prop}\label{subdivision1}
Fix an acyclic cycle signature $\sigma$ of $G$, then $$\widetilde{S_\sigma}=\bigcup_{T\in\mathcal{T}(G),\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T) }(\text{Face}_\sigma(T)+\overrightarrow{C^*}).$$
Moreover, for any two different pairs $(T_1, \overrightarrow{C^*_1})$ and $(T_2, \overrightarrow{C^*_2})$ in $\mathcal{T}(G)\times\operatorname{im}_\mathbb{Z}(D^T)$, $\text{Face}_\sigma(T_1)+\overrightarrow{C^*_1}$ and $\text{Face}_\sigma(T_2)+\overrightarrow{C^*_2}$ are different and their intersection is either a common face or the empty set.
\end{Prop}
\begin{proof}
By Proposition~\ref{verticesofS}, we have $S_\sigma=\bigcup_{T\in\mathcal{T}(G)}\text{Face}_\sigma(T)$. Because $\widetilde{S_\sigma}=S_\sigma+\operatorname{im}_\mathbb{Z}(D^T)$, the equality in the proposition holds.
It remains to show that this is a polyhedral subdivision. Without loss of generality, assume $\overrightarrow{C^*_1}=0$. Let $\overrightarrow{O}\in\text{Face}_\sigma(T_1)\bigcap(\text{Face}_\sigma(T_2)+\overrightarrow{C^*_2})$. Because $\overrightarrow{O},\overrightarrow{O}-\overrightarrow{C^*_2}\in [0,1]^E$, $\overrightarrow{C^*_2}$ is a $\{0,\pm 1\}$-vector, which is a sum of disjoint directed cocycles by Lemma~\ref{conformal}, and the continuous orientation $\overrightarrow{O}$ contains these directed cocycles.
So a continuous orientation $\overrightarrow{O}$ is in $\text{Face}_\sigma(T_1)\bigcap(\text{Face}_\sigma(T_2)+\overrightarrow{C^*_2})$ if and only if (i) $\overrightarrow{O}$ is in $\text{Face}_\sigma(T_1)$, (ii) $\overrightarrow{O}$ contains $\overrightarrow{C^*_2}$, and (iii) after reversing $\overrightarrow{C^*_2}$ in $\overrightarrow{O}$, any edge $e\notin T_2$ is oriented according to $\sigma(C(T_2,e))$. Note that a face of the parallelepiped $\text{Face}_\sigma(T_1)$ is determined by setting some edges in $T_1$ to $0$ or $1$, and conditions (ii) and (iii) impose exactly such constraints. If there is any contradiction among (i), (ii), and (iii), then the intersection is the empty set; otherwise the intersection is a face of $\text{Face}_\sigma(T_1)$, and this face cannot be $\text{Face}_\sigma(T_1)$ itself because any cocycle intersects any spanning tree. Similarly, we can prove that $\text{Face}_\sigma(T_1)\bigcap(\text{Face}_\sigma(T_2)+\overrightarrow{C^*_2})$ is a proper face of $\text{Face}_\sigma(T_2)+\overrightarrow{C^*_2}$.
\end{proof}
\begin{Cor}\label{subdivision2}
Fix an acyclic cycle signature $\sigma$ of $G$, then $$\mathbb{R}^r=\bigcup_{T\in\mathcal{T}(G),\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T) }(Z_\sigma(T)+D\cdot\overrightarrow{C^*}).$$
Moreover, for any two different pairs $(T_1, \overrightarrow{C^*_1})$ and $(T_2, \overrightarrow{C^*_2})$ in $\mathcal{T}(G)\times\operatorname{im}_\mathbb{Z}(D^T)$, $Z_\sigma(T_1)+D\cdot\overrightarrow{C^*_1}$ and $Z_\sigma(T_2)+D\cdot\overrightarrow{C^*_2}$ are different and their intersection is either a common face or the empty set.
\end{Cor}
\begin{proof}
Note that $\psi$ sends the $r$ linearly independent generating edges of the parallelepiped $\text{Face}_\sigma(T)$ to the $r$ linearly independent generating edges of the parallelepiped $Z_\sigma(T)$, so $\psi$ sends a face of $\text{Face}_\sigma(T)$ to a face of $Z_\sigma(T)$. Hence this corollary is a direct consequence of Proposition~\ref{bigsection} and Proposition~\ref{subdivision1}.
\end{proof}
Remark that although Proposition~\ref{subdivision1} and Corollary~\ref{subdivision2} generalize Proposition~\ref{polyhedralzonotope}(2), we do not use the latter in the proofs.
By a \emph{half-open decomposition} of a vector set $S$ we mean writing $S$ as a disjoint union of half-open cells. We now look for a half-open decomposition of $\mathbb{R}^r$ and hence of $\widetilde{S_\sigma}$. Fix $\sigma$ and $\sigma^*$. For each spanning tree $T$, define the half-open cell $\text{hoc}_{\sigma,\sigma^*}(T)$ to be the set of continuous orientations where each edge $e\notin T$ is oriented according to $\sigma(C(T,e))$ and each edge $e\in T$ is either oriented according to $\sigma^*(C^*(T,e))$ or bi-oriented. Recall from Proposition~\ref{bbymain} that there exists $w^*$ such that if we shift the parallelepiped $Z_\sigma(T)$ along $w^*$ a bit, then the unique vertex it covers is $\psi(g_{\sigma,\sigma^*}(T))$. The half-open cell $\psi(\text{hoc}_{\sigma,\sigma^*}(T))$ can be obtained from $Z_\sigma(T)$ by removing all the closed facets not containing $\psi(g_{\sigma,\sigma^*}(T))$; see Figure~\ref{F0}. By basic geometry, we have the following lemma.
\begin{Lemma}\label{hoclemma}
The half-open cell $\psi(\text{hoc}_{\sigma,\sigma^*}(T))$ is the set of points $p\in\mathbb{R}^r$ such that, for sufficiently small positive $\epsilon$, the point $p-\epsilon w^*$ is in the interior of the parallelepiped $Z_\sigma(T)$.
\end{Lemma}
\noindent\textbf{Notation.} We denote the disjoint union of two sets $A$ and $B$ by $A\biguplus B$.
\begin{Prop}\label{hocprop}
Fix $\sigma$ and $\sigma^*$ of $G$. Then
(1) $\mathbb{R}^r=\biguplus_{T\in\mathcal{T}(G),\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)}(\psi(\text{hoc}_{\sigma,\sigma^*}(T))+D\cdot\overrightarrow{C^*})$;
(2) $\widetilde{S_\sigma}=\biguplus_{T\in\mathcal{T}(G),\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T) }(\text{hoc}_{\sigma,\sigma^*}(T)+\overrightarrow{C^*})$.
\end{Prop}
\begin{proof}
(1) Let $w^*$ be as in Proposition~\ref{bbymain} and $p\in\mathbb{R}^r$. By Corollary~\ref{subdivision2}, if we shift $p$ along $-w^*$ by a sufficiently small distance, then it will be in the interior of a unique parallelepiped $Z_\sigma(T)+D\cdot\overrightarrow{C^*}$. By Lemma~\ref{hoclemma}, $p\in\psi(\text{hoc}_{\sigma,\sigma^*}(T))+D\cdot\overrightarrow{C^*}$.
(2) Applying $\widetilde{\psi}^{-1}$ to the first identity, we get the second one.
\end{proof}
\subsection{Half-open decomposition of $S_\sigma$ and the induced map $\varphi_{\sigma,\sigma^*}$}\label{varphi}
In this subsection we will restrict the half-open decomposition of $\widetilde{S_\sigma}$ to $S_\sigma$ and derive the combinatorial definition of the map $\varphi_{\sigma,\sigma^*}$ in Theorem~\ref{th}(2) from it.
We start with some definitions. All the half-open cells we talk about in this subsection are in $\mathbb{R}^E$ and of the form $\prod_{e\in E} I_e$, where each $I_e$ is $\{a_e\}$, $(a_e-1,a_e]$, or $[a_e,a_e+1)$ for some integer $a_e$. We call them \emph{standard half-open cells} in $\mathbb{R}^E$. Any standard half-open cell contains a unique lattice point $(a_e)_{e\in E}$. We call the set $\{e\in E:I_e=(a_e-1,a_e]\text{ or }I_e=[a_e,a_e+1)\}$ the \emph{generating set} of the standard half-open cell.
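The bookkeeping for standard half-open cells is easy to make concrete. In the following Python sketch (our own encoding, purely illustrative), each interval $I_e$ is stored as \texttt{('pt', a)} for $\{a_e\}$, \texttt{('lo', a)} for $(a_e-1,a_e]$, or \texttt{('hi', a)} for $[a_e,a_e+1)$; the helpers recover the unique lattice point and the generating set, and test membership.

```python
# A standard half-open cell is a list of intervals, one per edge:
# ('pt', a) = {a},  ('lo', a) = (a-1, a],  ('hi', a) = [a, a+1).
def lattice_point(cell):
    """The unique lattice point (a_e)_{e in E} of the cell."""
    return tuple(a for _, a in cell)

def generating_set(cell):
    """Edges whose interval is half-open rather than a single point."""
    return {e for e, (kind, _) in enumerate(cell) if kind != 'pt'}

def contains(cell, x):
    """Membership test for a point x in R^E."""
    for (kind, a), t in zip(cell, x):
        if kind == 'pt' and t != a:
            return False
        if kind == 'lo' and not (a - 1 < t <= a):
            return False
        if kind == 'hi' and not (a <= t < a + 1):
            return False
    return True

cell = [('pt', 1), ('lo', 0), ('hi', 0)]  # {1} x (-1, 0] x [0, 1)
print(lattice_point(cell))                # (1, 0, 0)
print(sorted(generating_set(cell)))       # [1, 2]
print(contains(cell, (1, 0, 0)), contains(cell, (1, 0, 1)))  # True False
```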
The following lemma is trivial. In order to state the results of this subsection more cleanly, we switch the sign of $\overrightarrow{C^*}$ (writing $-\overrightarrow{C^*}$ instead of $+\overrightarrow{C^*}$) from here on.
\begin{Lemma}\label{standard}
Fix $\sigma$ and $\sigma^*$ of $G$. Let $T\in\mathcal{T}(G)$ and $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$.
(1) $\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*}$ is a standard half-open cell. Its generating set is $T$ and it contains the unique lattice point $g_{\sigma,\sigma^*}(T)-\overrightarrow{C^*}$, where $g_{\sigma,\sigma^*}$ is the geometric bijection defined in Proposition~\ref{tree}.
(2) If $(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\cap [0,1]^E\neq\emptyset$, then it is a standard half-open cell. Its generating set is a forest $F\subseteq T$ and it contains the unique lattice point $g_{\sigma,\sigma^*}(T)-\overrightarrow{C^*}$.
\end{Lemma}
Now we get a half-open decomposition of $S_\sigma$ as follows.
\begin{Prop}\label{hocprop2}
Fix $\sigma$ and $\sigma^*$ of $G$. Then
$$S_\sigma=\biguplus_{T\in\mathcal{T}(G),\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)}(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\cap [0,1]^E,$$ where each summand is a standard half-open cell if it is non-empty.
\end{Prop}
\begin{proof}
This follows immediately from Proposition~\ref{hocprop}, Corollary~\ref{tinycor}, and Lemma~\ref{standard}.
\end{proof}
Then we define the map $\varphi_{\sigma,\sigma^*}$ from a geometric point of view. Any $\sigma$-compatible (discrete) orientation $\overrightarrow{O}$ is a lattice point of $S_\sigma$, and hence by Proposition~\ref{hocprop2} there exists a unique standard half-open cell $(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\bigcap [0,1]^E$ containing it. Define $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ to be the generating set of this half-open cell, which is a forest $F\subseteq T$.
The next lemma characterizes the non-empty half-open cells in the decomposition of $S_\sigma$.
\begin{Lemma}\label{forestlemma}
Fix $\sigma$ and $\sigma^*$ of $G$. Let $T\in\mathcal{T}(G)$ and $\overrightarrow{C^*}\in\operatorname{im}_\mathbb{Z}(D^T)$. Then the following are equivalent:
(a) $(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\bigcap [0,1]^E\neq\emptyset$.
(b) $g_{\sigma,\sigma^*}(T)-\overrightarrow{C^*}\in [0,1]^E$.
(c) $\overrightarrow{C^*}$ is a sum of disjoint directed cocycles and each directed cocycle is in the orientation $g_{\sigma,\sigma^*}(T)$.
\end{Lemma}
\begin{proof}
Due to Lemma~\ref{standard}(1), (b) implies (a).
Due to Lemma~\ref{standard}(2), (a) implies (b).
It is trivial that (c) implies (b).
To prove that (b) implies (c), we write (b) in terms of coordinates. Then we find that $\overrightarrow{C^*}$ has to be a $\{0,\pm 1\}$-vector and $g_{\sigma,\sigma^*}(T)$ restricted to the underlying edges of $\overrightarrow{C^*}$ is $\overrightarrow{C^*}$. By Lemma~\ref{conformal}, (c) holds.
\end{proof}
\begin{Cor}\label{cor1}
If the standard half-open cell
$H=(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\bigcap [0,1]^E$ in Proposition~\ref{hocprop2} is non-empty, then
(1) $\overrightarrow{C^*}$ is a sum of disjoint directed cocycles $\{\overrightarrow{C^*_j}\}_{j\in J}$, where each $\overrightarrow{C^*_j}$ is in the orientation $g_{\sigma,\sigma^*}(T)$;
(2) the unique lattice point contained in $H$ is
$g_{\sigma,\sigma^*}(T)-\overrightarrow{C^*}$, which is an orientation obtained by reversing the directed cocycle $\overrightarrow{C_j^*}$'s in $g_{\sigma,\sigma^*}(T)$;
(3) and the generating set of $H$ is the forest $T\backslash (\biguplus_{j\in J}C_j^*)$.
\end{Cor}
\begin{proof}
The only part we have not proved is (3).
Write the standard half-open cell $\text{hoc}_{\sigma,\sigma^*}(T)=\prod_{e\in E} I_e$, where each $I_e$ is $\{a_e\}$, $(a_e-1,a_e]$, or $[a_e,a_e+1)$ for some integer $a_e$. The generating set of $\text{hoc}_{\sigma,\sigma^*}(T)$ is $T$, consisting of the edges $e$ such that $I_e$ is a half-open interval. To get $H$, we shift the cell by $-\overrightarrow{C^*}$ and restrict it to $[0,1]^E$. During this process, a half-open interval $I_e$ becomes a single point if and only if $I_e$ is shifted by a nonzero amount (and hence truncated by the restriction to $[0,1]^E$), which means $e\in\biguplus_{j\in J}C_j^*$. These edges are removed from the generating set, so (3) holds.
\end{proof}
Now we give the combinatorial description of $\varphi_{\sigma,\sigma^*}$ to end this subsection.
\begin{Prop}\label{forestgeometry}
Fix $\sigma$ and $\sigma^*$ of $G$. Let
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}:\{\sigma\text{-compatible orientations}\}\longrightarrow \{\text{spanning forests}\}
\end{eqnarray*}
be the map that sends a $\sigma$-compatible orientation $\overrightarrow{O}$ to the generating forest of the unique standard half-open cell in Proposition~\ref{hocprop2} containing $\overrightarrow{O}$. Let $\overrightarrow{O^{cp}}$ be the ($\sigma,\sigma^*$)-compatible orientation in the cocycle reversal class containing $\overrightarrow{O}$, and let $\{\overrightarrow{C_j^*}\}_{j\in J}$ be the disjoint directed cocycles in $\overrightarrow{O^{cp}}$ whose reversal turns $\overrightarrow{O^{cp}}$ into $\overrightarrow{O}$. Then $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ is the forest $g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\backslash (\biguplus_{j\in J}C_j^*)$.
\end{Prop}
\begin{proof}
This is a direct consequence of Corollary~\ref{cor1} and Corollary~\ref{cor0}.
\end{proof}
\subsection{Bijectivity of $\varphi_{\sigma,\sigma^*}$ and the generalized Ehrhart polynomial of a zonotope}\label{subsectionduality}
In this subsection we generalize the \emph{Ehrhart polynomial} of a zonotope to prove the bijectivity of $\varphi_{\sigma,\sigma^*}$ in Proposition~\ref{forestgeometry}.
We first recall the Ehrhart polynomial. For any positive integer $q$ and any convex polytope $\mathcal{P}\subseteq\mathbb{R}^r$ with integer vertices, denote by $E_\mathcal{P}(q)$ the number of lattice points in $q\cdot\mathcal{P}:=\{qp:p\in\mathcal{P}\}$ (including the boundary).
Then $E_\mathcal{P}(q)$ is a polynomial function of $q$, called the \emph{Ehrhart polynomial} of $\mathcal{P}$.
Let $A$ be an $r\times n$ integer matrix. Denote the columns of $A$ by $v_1, v_2, \cdots, v_n$ and the zonotope $\{\sum_{i=1}^{n}c_iv_i:0\leq c_i \leq 1\}$ by $Z_A$. We abbreviate $E_{Z_A}(q)$ to $E_A(q)$.
Now we recall the following basic fact.
\begin{Lemma}\cite[Theorem 2.2]{S}\label{Ehrhart1}
The Ehrhart polynomial of $Z_A$ is
$$E_A(q)=\sum_Xh(X)q^{|X|},$$
where $X$ ranges over all linearly independent subsets of $\{v_1, v_2, \cdots, v_n\}$, and where $h(X)$ denotes the greatest common divisor of all minors of size $|X|$ of the matrix whose columns are the elements of $X$.
\end{Lemma}
From now on we take $A$ to be a \emph{totally unimodular matrix}, meaning that every square submatrix has determinant $0$, $1$, or $-1$. We further assume that $\text{rank}(A)=r$. For example, the matrix $D$ we have been using in this section satisfies this condition. More generally, a \emph{matroid} is \emph{regular} if and only if it can be represented by a totally unimodular matrix (which can always be chosen to satisfy the rank condition). In this case $h(X)=1$ for every linearly independent set $X$, and hence $E_A(q)=\sum_X q^{|X|}$. Note that when $A$ is taken to be $D$, the edges corresponding to $X$ (which can also be viewed as the subscripts of the elements of $X$) form a forest, because a set of columns of $D$ is linearly independent if and only if the corresponding edges contain no cycle.
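For example, take $A=D$ for the triangle $K_3$, so that $r=2$ and $n=3$. The linearly independent column sets correspond to the forests of $K_3$: the empty forest, the three single edges, and the three two-edge forests (the full edge set forms a cycle and is excluded). Hence

```latex
\[
  E_D(q)=1+3q+3q^2 ,
\]
```

which at $q=1$ gives $7$, the number of lattice points of the hexagon $Z_D$.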
Now we generalize the Ehrhart polynomial of $Z_A$. Let $q_1, q_2, \cdots, q_n$ be positive integers. We denote by $E_A(q_1, q_2, \cdots, q_n)$ the number of lattice points in the $(q_1,\cdots,q_n)$-dilate of $Z_A$, i.e.,
the zonotope $\widehat{Z_A}:=\{\sum_{i=1}^{n}c_iq_iv_i:0\leq c_i \leq 1\}$. Then we have the following formula.
\begin{Prop}\label{Ehrhart2}
Let $A$ be a totally unimodular matrix. Then
$$E_A(q_1, q_2, \cdots, q_n)=\sum_F\prod_{e\in F}q_e,$$
where $F$ ranges over all the subsets of $\{1,2,\cdots,n\}$ such that $\{v_e:e\in F\}$ is linearly independent.
\end{Prop}
\begin{proof}
We apply Lemma~\ref{Ehrhart1} to the matrix obtained by multiplying the $i$-th column of $A$ by $q_i$ for $i=1,2,\cdots,n$, and then take $q=1$.
\end{proof}
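As a concrete sanity check of Proposition~\ref{Ehrhart2} (an illustration we add here, not part of the argument), the following Python sketch compares the formula with a brute-force lattice-point count for a small totally unimodular matrix, namely the reduced incidence matrix of the triangle. It relies on the standard fact that, for totally unimodular $D$, every lattice point of the dilated zonotope is an integral combination $\sum_i a_iv_i$ with $0\le a_i\le q_i$.

```python
from fractions import Fraction
from itertools import combinations, product

def indep(vectors):
    """Linear independence of integer vectors, via Gaussian elimination
    over exact rationals."""
    rows = [list(map(Fraction, v)) for v in vectors]
    rank, ncols = 0, (len(rows[0]) if rows else 0)
    for c in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c] != 0:
                f = rows[i][c] / rows[rank][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank == len(rows)

def generalized_ehrhart(D, q):
    """Right-hand side of the proposition: sum over linearly independent
    column sets F of the product of the q_e, e in F."""
    n = len(D[0])
    cols = [[row[j] for row in D] for j in range(n)]
    total = 0
    for k in range(n + 1):
        for F in combinations(range(n), k):
            if indep([cols[j] for j in F]):
                p = 1
                for j in F:
                    p *= q[j]
                total += p
    return total

def zonotope_lattice_points(D, q):
    """Lattice points of the (q_1,...,q_n)-dilated zonotope of D.  For
    totally unimodular D the system {t : D t = p, 0 <= t <= q} is integral,
    so enumerating integral combinations yields exactly the lattice points."""
    n = len(D[0])
    pts = set()
    for a in product(*(range(qj + 1) for qj in q)):
        pts.add(tuple(sum(row[j] * a[j] for j in range(n)) for row in D))
    return pts

# Reduced incidence matrix of the triangle K_3 (last vertex dropped).
D = [[1, 0, 1], [-1, 1, 0]]
assert generalized_ehrhart(D, (1, 1, 1)) == 7          # 1 + 3q + 3q^2 at q = 1
assert len(zonotope_lattice_points(D, (2, 3, 4))) == generalized_ehrhart(D, (2, 3, 4)) == 36
```

The matrix $D$ and the dilation vector here are hypothetical small instances chosen for illustration.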
Now we take $A$ to be the matrix $D$ associated to the graph $G$. We will show that $E_D(q_1, q_2, \cdots, q_n)$ (where $n=|E|$) is equal to the number of lattice points in the $(q_1,\cdots,q_n)$-dilate of $S_\sigma$, i.e., $$\widehat{S_\sigma}:=\{(q_1x_1,\cdots,q_nx_n):(x_1,\cdots,x_n)\in S_{\sigma}\}.$$
This is useful because we have the following formula to count $\#(\widehat{S_\sigma}\cap\mathbb{Z}^E)$ and hence we can compare it with the formula in Proposition~\ref{Ehrhart2}.
\begin{Prop}\label{Ehrhart3}
$$\#(\widehat{S_\sigma}\cap\mathbb{Z}^E)=\sum_F\prod_{e\in F}q_e,$$
where $F$ ranges over all the generating forests of the non-empty standard half-open cells $(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\bigcap [0,1]^E$ in Proposition~\ref{hocprop2}.
\end{Prop}
\begin{proof}
By Proposition~\ref{hocprop2}, $S_\sigma$ is decomposed into certain standard half-open cells. Note that each of them contains a unique lattice point. So after the $(q_1,\cdots,q_n)$-dilation, each dilated standard half-open cell contains $\prod_{e\in F}q_e$ lattice points, where $F$ is the generating forest of the cell.
\end{proof}
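For instance, a standard half-open cell in $[0,1]^2$ with generating forest $F=\{e_1\}$, say $[0,1)\times\{1\}$, behaves under dilation as follows.

```latex
\[
  [0,1)\times\{1\}
  \;\longmapsto\;
  [0,q_1)\times\{q_2\},
  \qquad
  \#\bigl(([0,q_1)\times\{q_2\})\cap\mathbb{Z}^2\bigr)=q_1=\prod_{e\in F}q_e .
\]
```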
Now we go back to prove $\#(\widehat{Z_D}\cap\mathbb{Z}^r)=\#(\widehat{S_\sigma}\cap\mathbb{Z}^E)$. We restrict the linear map $D$ to the ``extended'' continuous orientations $\prod_{e\in E}[0,q_e]$. Note that the image is the zonotope $\widehat{Z_D}$. We call this map $\widehat{\psi}$. Clearly $\widehat{\psi}$ extends $\psi$.
We plan to prove that $\widehat{S_\sigma}$ is a section of $\widehat{\psi}$. To do this, we need to define the \emph{extended $\sigma$-compatible continuous orientations} and prove a result (Lemma~\ref{extendedsigmacompatible}) analogous to Proposition~\ref{sigmacompatible2}. The method is the same as in \cite[Prop.~3.2.1]{BBY}.
Let $\sigma$ be an acyclic cycle signature. An \emph{extended} continuous orientation $\overrightarrow{O}$ is called \emph{$\sigma$-compatible} if one cannot flow $\overrightarrow{O}$ along any directed cycle in $\sigma$, i.e., $\overrightarrow{O}+\epsilon\sigma(C)\notin \prod_{e\in E}[0,q_e]$ for any $\epsilon >0$ and any cycle $C$.
Now we prove that the extended $\sigma$-compatible continuous orientations form a section of $\widehat{\psi}$.
\begin{Lemma}\label{extendedsigmacompatible}
For any extended continuous orientation $\overrightarrow{O}$, there exists a unique extended $\sigma$-compatible continuous orientation $\overrightarrow{O^\sigma}$ such that $\overrightarrow{O^\sigma}-\overrightarrow{O}\in\ker(D)$.
\end{Lemma}
\begin{proof}
By Lemma~\ref{signature}, there exists $w\in\mathbb{R}^E$ such that $\langle w, \sigma(C)\rangle>0$ for each cycle $C$. Consider the function $P(\overrightarrow{O'}):=\langle w, \overrightarrow{O'}\rangle$ defined on the set $\mathcal{D}:=(\overrightarrow{O}+\ker(D))\bigcap\prod_{e\in E}[0,q_e]$. If $\overrightarrow{O'}\in \mathcal{D}$ is not $\sigma$-compatible, then for some $\epsilon >0$ and some cycle $C$, $\overrightarrow{O''}:=\overrightarrow{O'}+\epsilon\sigma(C)\in \mathcal{D}$ and $P(\overrightarrow{O''})>P(\overrightarrow{O'})$. Because the set $\mathcal{D}$ is compact and $P$ is continuous, $P$ attains its maximum at some $\overrightarrow{O^\sigma}\in \mathcal{D}$, and by the above observation $\overrightarrow{O^\sigma}$ is $\sigma$-compatible.
It remains to show the uniqueness. Assume by contradiction that there are two such distinct extended continuous orientations $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$. Then $\overrightarrow{O_1}-\overrightarrow{O_2}\in\ker(D)$, and hence by Lemma~\ref{conformal0}, $\overrightarrow{O_1}-\overrightarrow{O_2}=\sum k_i\overrightarrow{C_i}$, where for each edge $e$ of each cycle $C_i$ the sign of $e$ in $\overrightarrow{C_i}$ agrees with the sign of $e$ in $\overrightarrow{O_1}-\overrightarrow{O_2}$. Therefore $\overrightarrow{O_1}-k_1\overrightarrow{C_1}\in \mathcal{D}$ and $\overrightarrow{O_2}+k_1\overrightarrow{C_1}\in \mathcal{D}$, so $\overrightarrow{O_1}$ can be flowed along $-\overrightarrow{C_1}$ and $\overrightarrow{O_2}$ along $\overrightarrow{C_1}$. Since $\sigma$ contains exactly one of $\overrightarrow{C_1}$ and $-\overrightarrow{C_1}$, one of $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ is not $\sigma$-compatible, a contradiction.
\end{proof}
Next we prove that the set of extended $\sigma$-compatible continuous orientations is exactly $\widehat{S_\sigma}$. The following observation is immediate from the definitions.
\begin{Lemma}\label{aa}
Let $\overrightarrow{O}=(x_1,\cdots,x_n)$ be a continuous orientation and $\overrightarrow{O'}=(q_1x_1,\cdots,q_nx_n)$ be an extended continuous orientation. Then $\overrightarrow{O}$ is a $\sigma$-compatible continuous orientation (in the usual sense) if and only if $\overrightarrow{O'}$ is an extended $\sigma$-compatible continuous orientation.
\end{Lemma}
\begin{Cor}\label{bb}
$\widehat{S_\sigma}$ is the set of extended $\sigma$-compatible continuous orientations.
\end{Cor}
\begin{proof}
Recall that $S_\sigma$ is the set of $\sigma$-compatible continuous orientations, so this is a direct consequence of Lemma~\ref{aa}.
\end{proof}
\begin{Prop}\label{cc}
The map $\widehat{\psi}$ restricted to $\widehat{S_\sigma}$ is a bijection to the zonotope $\widehat{Z_D}$. Moreover, $\overrightarrow{O}\in\widehat{S_\sigma}$ is a lattice point if and only if $\widehat{\psi}(\overrightarrow{O})$ is a lattice point. In particular, $$\#(\widehat{Z_D}\cap\mathbb{Z}^r)=\#(\widehat{S_\sigma}\cap\mathbb{Z}^E).$$
\end{Prop}
\begin{proof}
The first part is a direct consequence of Lemma~\ref{extendedsigmacompatible} and Corollary~\ref{bb}. The second part is essentially because $D$ is totally unimodular. Here we give an alternative proof.
The ``only if'' part is trivial. For the ``if'' part, we consider shifting $\overrightarrow{O}$ into $S_\sigma$. By Proposition~\ref{verticesofS}, $\overrightarrow{O}$ is in the $(q_1,\cdots,q_n)$-dilate of the face $\text{Face}_\sigma(T)$ for some spanning tree $T$. Hence for the edges $e\notin T$, the $e$-th coordinate of $\overrightarrow{O}$ is either $0$ or $q_e$, depending on $\sigma$. So there exists an integer vector $\overrightarrow{v}$ such that the $e$-th coordinate of $\overrightarrow{O}-\overrightarrow{v}$ is either $0$ or $1$ accordingly and the other coordinates are in $[0,1]$, which implies $\overrightarrow{O}-\overrightarrow{v}\in\text{Face}_\sigma(T)$. Because $\widehat{\psi}(\overrightarrow{O}-\overrightarrow{v})$ is a lattice point, by Corollary~\ref{section}, $\overrightarrow{O}-\overrightarrow{v}$ is a lattice point and hence so is $\overrightarrow{O}$.
\end{proof}
Now we are ready to prove the bijectivity of $\varphi_{\sigma,\sigma^*}$.
\begin{Th}
The map $\varphi_{\sigma,\sigma^*}$ in Proposition~\ref{forestgeometry} is bijective.
\end{Th}
\begin{proof}
By Proposition~\ref{Ehrhart2}, Proposition~\ref{Ehrhart3}, and Proposition~\ref{cc},
$$\sum_{F_1}\prod_{e\in F_1}q_e=\sum_{F_2}\prod_{e\in F_2}q_e,$$
where $F_1$ ranges over all the spanning forests of $G$ and $F_2$ ranges over all the generating forests of the non-empty standard half-open cells $(\text{hoc}_{\sigma,\sigma^*}(T)-\overrightarrow{C^*})\bigcap [0,1]^E$ in Proposition~\ref{hocprop2}. Because the two sides are equal for every tuple of positive integers $(q_1,\cdots,q_n)$, they are equal as polynomials in $(q_1,\cdots,q_n)$. Hence these generating forests are exactly the spanning forests of $G$, each occurring once. Therefore the map $\varphi_{\sigma,\sigma^*}$ in Proposition~\ref{forestgeometry} is bijective.
\end{proof}
By replacing the geometric definition of $\varphi_{\sigma,\sigma^*}$ in Proposition~\ref{forestgeometry} with the combinatorial one, we get Theorem~\ref{th}(2).
By a dual argument, we get Theorem~\ref{th}(3). To be precise, for all statements and proofs, we switch cycles and cocycles, $\ker(D)$ and $\operatorname{im}(D^T)$, $\sigma$ and $\sigma^*$, $e\in T$ and $e\notin T$ for a spanning tree $T$, etc.
In particular, for each spanning tree $T$, the face $\text{Face}_{\sigma^*}(T)$ of $[0,1]^E$ is defined to be the set of the continuous orientations where each edge $e\in T$ is oriented according to $\sigma^*(C^*(T,e))$. The tricky part is to replace the matrix $D$ with a suitable matrix $D_*$, thereby obtaining the zonotope $Z_{D_*}:=D_*([0,1]^E)$. The following construction is classical, and we refer the reader to \cite[Section 2]{SW} for the technical details. Without loss of generality, we assume that the first $r$ columns of $D$ form a basis of the column space of $D$. These columns form an invertible matrix $P$, and $P^{-1}D$ is of the form
\[D_0:=\left[ I_r\: L\right].\]
Note that $\ker(D)=\ker(D_0)$, $\operatorname{im}(D^T)=\operatorname{im}(D_0^T)$, and $D_0$ is totally unimodular. (We can use $D_0$ instead of $D$ in our theory and nothing else needs to be changed.) Now we introduce the ``dual'' matrix. Take \[D_*:=\left[ -L^T \: I_{n-r}\right].\]
It is easy to check $\operatorname{im}(D^T_*)=\ker(D_0)$, $\ker(D_*)=\operatorname{im}(D^T_0)$, and $D_*$ is totally unimodular. Then we can finish the dual argument.
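The first identity can be verified directly:

```latex
\[
  D_0 D_*^{T}
  = \begin{bmatrix} I_r & L \end{bmatrix}
    \begin{bmatrix} -L \\ I_{n-r} \end{bmatrix}
  = -L + L = 0,
\]
```

so $\operatorname{im}(D_*^T)\subseteq\ker(D_0)$, and since $\operatorname{rank}(D_*)=n-r=\dim\ker(D_0)$, equality holds; the identity $\ker(D_*)=\operatorname{im}(D_0^T)$ follows by the same dimension count.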
By the dual argument, we obtain a geometric construction dual to the one in Proposition~\ref{hocprop2}: a half-open decomposition of the set $S_{\sigma^*}$ of the $\sigma^*$-compatible continuous orientations. Here we point out that although the maps in Theorem~\ref{th}(2) and (3) agree on the set of $(\sigma,\sigma^*)$-compatible orientations, the two geometric constructions do not, in general, share the half-open cells corresponding to the spanning trees. Indeed, $\dim(\text{Face}_{\sigma}(T))$ is in general not equal to $\dim(\text{Face}_{\sigma^*}(T))$, and hence the two half-open cells $\text{hoc}_{\sigma,\sigma^*}(T)$ are different. We will see in Section~\ref{tiling} (Proposition~\ref{cor5} and Proposition~\ref{dualcor5}) that they are dual to each other. This is one reason why we cannot combine the two geometric constructions in a naive way to derive Theorem~\ref{th}(1).
\section{A geometric interpretation of the main theorem and its combinatorial proof}\label{tiling}
In Section~\ref{geometric}, we have seen that the geometric construction behind Theorem~\ref{th}(2) is the half-open decomposition of the set $S_\sigma$ of the $\sigma$-compatible continuous orientations (Proposition~\ref{forestgeometry}). In this section, we will extend this construction to the cube $[0,1]^E$ (Theorem~\ref{tilingofcube}), which can be viewed as a geometric interpretation of Theorem~\ref{th}(1). Moreover, our proofs are combinatorial, do not rely on the results in Section~\ref{geometric}, and recover the geometric construction in Proposition~\ref{forestgeometry} and its dual construction; see Proposition~\ref{cor5} and Proposition~\ref{dualcor5}.
\subsection{Half-open decompositions of the cube $[0,1]^E$}
We fix a graph $G$ with the edge set $E$ and a reference orientation.
In this subsection, we describe under which condition a map $$\varphi:\{\text{discrete orientations}\}\longrightarrow\{\text{spanning subgraphs}\}$$ induces a half-open decomposition of the cube $[0,1]^E$ in a canonical way, where $[0,1]^E$ is viewed as the set of continuous orientations of $G$.
Recall the definitions of the \emph{standard half-open cells} given in Section~2.5. For any discrete orientation $\overrightarrow{O}$ and any spanning subgraph $S$, we define the standard half-open cell $\text{hoc}(\overrightarrow{O},S)$ in the cube $[0,1]^E$ to be the set of the continuous orientations where each edge $e\notin S$ is oriented according to $\overrightarrow{O}$ and each edge $e\in S$ is oriented according to $\overrightarrow{O}$ or bi-oriented. Note that $\text{hoc}(\overrightarrow{O},S)$ contains a unique lattice point, namely $\overrightarrow{O}$, and its generating set is $S$. Conversely, a standard half-open cell in $[0,1]^E$ satisfying these two properties must be $\text{hoc}(\overrightarrow{O},S)$. Hence we identify the standard half-open cells in $[0,1]^E$ with elements of $\{\text{discrete orientations}\}\times\{\text{spanning subgraphs}\}$. We ask which relations, i.e., subsets of this Cartesian product, give a half-open decomposition of $[0,1]^E$. Clearly each discrete orientation should be covered once and only once, so the relation must be a map $\varphi:\{\text{discrete orientations}\}\longrightarrow \{\text{spanning subgraphs}\}$. The goal is to determine for which maps $\varphi$ the equality $[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))$ holds.
\begin{Ex}
This example illustrates the half-open decomposition of the cube $[0,1]^E$ induced by a map $\varphi:\{\text{discrete orientations}\}\longrightarrow \{\text{spanning subgraphs}\}$; see Figure~\ref{F6}(a). Here we choose $\varphi$ to be the map $\varphi_{\sigma,\sigma^*}$ in Example~\ref{ex1}; see also Figure~\ref{F5}. We still draw the orientation $\overrightarrow{O}$ and the subgraph $S$ together provided $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})=S$. Now we give this configuration a geometric interpretation: $\text{hoc}(\overrightarrow{O},S)$.
The cube is decomposed into the eight half-open cells $\{\text{hoc}(\overrightarrow{O},\varphi_{\sigma, \sigma^*}(\overrightarrow{O})):\overrightarrow{O}\in\{0,1\}^3\}$. The dimension of each half-open cell is equal to the number of edges in the corresponding subgraph. Here we have $\binom{3}{i}$ half-open cells of dimension $i$ for $i=0,1,2,3$. We will see in Theorem~\ref{tilingofcube} that any $\varphi_{\sigma,\sigma^*}$ in Theorem~\ref{th} induces a half-open decomposition.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F6.eps}
\caption{(a) This shows $[0,1]^E=\biguplus_{\protect\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\protect\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\protect\overrightarrow{O}))$, where the bijection $\varphi_{\sigma,\sigma^*}$ is as in Figure~\ref{F5}. The unique 3-dimensional half-open cell is not drawn to keep the figure clean.\\
(b) This is the dual tiling of (a); see the definition of the dual tiling after Corollary~\ref{cordual}. The unique 3-dimensional half-open cell is not drawn.}
\label{F6}
\end{figure}
\end{Ex}
The main result of this subsection is as follows.
\begin{Prop}\label{abcd}
For a map $\varphi:\{\text{discrete orientations}\}\longrightarrow \{\text{spanning subgraphs}\}$, the following are equivalent.
(\ref{abcd}a) $[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))$.
(\ref{abcd}b) $\varphi$ is bijective and for any two distinct discrete orientations $\overrightarrow{O_1}$ and
$\overrightarrow{O_2}$, $\text{hoc}(\overrightarrow{O_1},\varphi(\overrightarrow{O_1}))\cap\text{hoc}(\overrightarrow{O_2},\varphi(\overrightarrow{O_2}))=\emptyset$.
(\ref{abcd}c) For any two distinct discrete orientations $\overrightarrow{O_1}$ and
$\overrightarrow{O_2}$, there exists an edge $e$ such that $\overrightarrow{O_1}(e)\neq\overrightarrow{O_2}(e)$ and $e\in \varphi(\overrightarrow{O_1})\bigtriangleup \varphi(\overrightarrow{O_2})$.
(\ref{abcd}d) $\varphi$ is \emph{locally bijective} (defined below, in analogy with Proposition~\ref{local}(1)).
\end{Prop}
\begin{Def}
A map $\varphi:\{\text{discrete orientations}\}\longrightarrow \{\text{spanning subgraphs}\}$ is called \emph{locally bijective} if for any partial orientation $\overrightarrow{O_P}$ of $G$, where $P$ denotes the underlying edge set of $\overrightarrow{O_P}$, the map
\begin{eqnarray*}
\varphi_{\overrightarrow{O_P}}:\{\text{orientations of }E\backslash P\} & \longrightarrow & \{\text{subsets of }E\backslash P\} \\
\overrightarrow{O_{E\backslash P}} & \mapsto & \varphi(\overrightarrow{O_{E\backslash P}}\cup\overrightarrow{O_P})\backslash P
\end{eqnarray*}
is bijective.
\end{Def}
Remark that we could also add the restriction in Proposition~\ref{local}(2) as a statement (\ref{abcd}e), but it does not help in proving the equivalences among (\ref{abcd}a), (\ref{abcd}b), (\ref{abcd}c), and (\ref{abcd}d).
First we prove (\ref{abcd}a) $\Leftrightarrow$ (\ref{abcd}b) by using the generalized Ehrhart polynomial.
\begin{Lemma}\label{ab}
(\ref{abcd}a) $\Leftrightarrow$ (\ref{abcd}b).
\end{Lemma}
\begin{proof}
Note that (\ref{abcd}a) trivially implies the second part of (\ref{abcd}b), since the union in (\ref{abcd}a) is disjoint. So it suffices to prove that if the second part of (\ref{abcd}b) holds, then $\varphi$ is bijective if and only if (\ref{abcd}a) holds. For any positive integers $q_1, \cdots, q_n$, where $n=|E|$, we consider the $(q_1,\cdots,q_n)$-dilation:
\begin{eqnarray*}
f: \mathbb{R}^E & \longrightarrow & \mathbb{R}^E \\
(x_1, \cdots, x_n) & \mapsto & (q_1x_1, \cdots, q_nx_n).
\end{eqnarray*}
Dilating the cube, we get $$\#( f([0,1]^E)\cap\mathbb{Z}^E)=\prod_{e\in E}(1+q_e).$$
Dilating the half-open cells, we get $$\#( f(\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O})))\cap\mathbb{Z}^E)=\sum_{\overrightarrow{O}\in\{0,1\}^E}\prod_{e\in\varphi(\overrightarrow{O})}q_e.$$
Consider the equation
\begin{equation}\label{ast}
\#( f([0,1]^E)\cap\mathbb{Z}^E)=\#( f(\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O})))\cap\mathbb{Z}^E).
\end{equation}
By the previous two formulas, one gets that Equation~\ref{ast} holds for any $(q_1,\cdots,q_n)$-dilation $f$ $\Leftrightarrow$ $\prod_{e\in E}(1+q_e)=\sum_{\overrightarrow{O}\in\{0,1\}^E}\prod_{e\in\varphi(\overrightarrow{O})}q_e$ as polynomials in $q_e$'s $\Leftrightarrow$ $\{\varphi(\overrightarrow{O}):\overrightarrow{O}\in\{0,1\}^E\}=\{\text{spanning subgraphs}\}$ $\Leftrightarrow$ $\varphi$ is bijective.
It remains to show that (\ref{abcd}a) holds $\Leftrightarrow$ Equation~\ref{ast} holds for any $(q_1,\cdots,q_n)$-dilation $f$. The ``$\Rightarrow$'' direction is trivial. Now we prove the other direction. Clearly $\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))\subseteq [0,1]^E$. Assume by contradiction that some point of the cube is not covered by the half-open cells. Then one can perturb the point so that it becomes a rational point that is still not covered. By a suitable choice of $f$, this rational point becomes integral after the dilation, which contradicts the equality of cardinalities. Therefore (\ref{abcd}a) $\Leftrightarrow$ (\ref{abcd}b).
\end{proof}
\begin{Lemma}\label{acd}
(\ref{abcd}c) $\Rightarrow$ (\ref{abcd}b) $\Rightarrow$ (\ref{abcd}a) $\Rightarrow$ (\ref{abcd}d) $\Rightarrow$ (\ref{abcd}c).
\end{Lemma}
\begin{proof}
It is trivial that (\ref{abcd}c) implies the injectivity of $\varphi$ and hence the bijectivity. It is straightforward to check that (\ref{abcd}c) implies the second part of (\ref{abcd}b). So (\ref{abcd}c) $\Rightarrow$ (\ref{abcd}b).
(\ref{abcd}b) $\Rightarrow$ (\ref{abcd}a) is proved in Lemma~\ref{ab}.
For (\ref{abcd}a) $\Rightarrow$ (\ref{abcd}d), we consider the restriction of the tiling. Observe that for a partial orientation $\overrightarrow{O_P}$, the set of the continuous orientations extending $\overrightarrow{O_P}$ is a closed face $F$ of $[0,1]^E$. We restrict the decomposition $[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))$ to the closed face $F$. Then we get $$F=[0,1]^{E\backslash P}\times\overrightarrow{O_P}|_P =\biguplus_{\overrightarrow{O}\in\{0,1\}^E \text{ extends } \overrightarrow{O_P}}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))\cap F.$$
It is straightforward to check that for any $\overrightarrow{O}\in\{0,1\}^E$ that extends $\overrightarrow{O_P}$, we have
$$\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))\cap F=\text{hoc}(\overrightarrow{O}|_{E\backslash P},\varphi(\overrightarrow{O})\backslash P)\times\overrightarrow{O_P}|_P=\text{hoc}(\overrightarrow{O}|_{E\backslash P},\varphi_{\overrightarrow{O_P}}(\overrightarrow{O}|_{E\backslash P}))\times\overrightarrow{O_P}|_P,$$
where the latter two ``hoc''s are taken with respect to the cube $[0,1]^{E\backslash P}$. Hence
$$[0,1]^{E\backslash P}=\biguplus_{\overrightarrow{O_{E\backslash P}}\in\{0,1\}^{E\backslash P}}\text{hoc}(\overrightarrow{O_{E\backslash P}},\varphi_{\overrightarrow{O_P}}(\overrightarrow{O_{E\backslash P}})).$$
By applying Lemma~\ref{ab} to $\varphi_{\overrightarrow{O_P}}$, we get $\varphi_{\overrightarrow{O_P}}$ is bijective and hence (\ref{abcd}d) holds.
It remains to show (\ref{abcd}d) $\Rightarrow$ (\ref{abcd}c). Let $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ be two distinct discrete orientations. Denote $E_{\rightrightarrows}=\{e:\overrightarrow{O_1}(e)=\overrightarrow{O_2}(e)\}$ and $E_{\rightleftarrows}=\{e:\overrightarrow{O_1}(e)\neq\overrightarrow{O_2}(e)\}$ ($\neq\emptyset$), and let $\overrightarrow{O_{E_{\rightrightarrows}}}$ be the common restriction of $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ to $E_{\rightrightarrows}$. By (\ref{abcd}d),
$\varphi_{\overrightarrow{O_{E_{\rightrightarrows}}}}$ is bijective. Hence $\varphi_{\overrightarrow{O_{E_{\rightrightarrows}}}}(\overrightarrow{O_1}|_{E_{\rightleftarrows}})\neq \varphi_{\overrightarrow{O_{E_{\rightrightarrows}}}}(\overrightarrow{O_2}|_{E_{\rightleftarrows}})$. By the definition of $\varphi_{\overrightarrow{O_{E_{\rightrightarrows}}}}$, this means $\varphi(\overrightarrow{O_1})\backslash E_{\rightrightarrows}\neq\varphi(\overrightarrow{O_2})\backslash E_{\rightrightarrows}$. So there exists an edge $e\in E_{\rightleftarrows}$ such that $e\in \varphi(\overrightarrow{O_1})\bigtriangleup \varphi(\overrightarrow{O_2})$. So (\ref{abcd}c) holds.
\end{proof}
By Lemma~\ref{ab} and Lemma~\ref{acd}, Proposition~\ref{abcd} holds.
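The equivalence of (\ref{abcd}a) and (\ref{abcd}c) can also be checked mechanically on a tiny instance; since the graph structure plays no role (see the remark at the end of this subsection), we take a bare two-element edge set. The following Python sketch is an added illustration, not part of the proof. It uses the observation that membership in a standard half-open cell depends only on whether each coordinate equals $0$, lies strictly between $0$ and $1$, or equals $1$, so checking the finite grid $\{0,\tfrac12,1\}^E$ decides condition (\ref{abcd}a).

```python
from itertools import product

E = (0, 1)  # a two-element edge set; no graph structure is needed
ORIENTATIONS = list(product((0, 1), repeat=len(E)))
SUBSETS = [frozenset(s) for s in (set(), {0}, {1}, {0, 1})]

def in_hoc(x, O, S):
    """Membership of a point x in hoc(O, S): coordinates outside S are fixed
    to O_e; coordinates in S range over [0,1) if O_e = 0 and (0,1] if O_e = 1."""
    for e in E:
        if e not in S:
            if x[e] != O[e]:
                return False
        elif O[e] == 0:
            if not (0 <= x[e] < 1):
                return False
        else:
            if not (0 < x[e] <= 1):
                return False
    return True

def tiles(phi):
    """Condition (a): the cells hoc(O, phi(O)) cover [0,1]^E exactly once.
    Membership depends only on each coordinate's class (0, interior, or 1),
    so the grid {0, 1/2, 1}^E decides the condition."""
    grid = product((0.0, 0.5, 1.0), repeat=len(E))
    return all(sum(in_hoc(x, O, phi[O]) for O in ORIENTATIONS) == 1 for x in grid)

def cond_c(phi):
    """Condition (c): for all O1 != O2 there is an edge e with O1(e) != O2(e)
    and e in the symmetric difference phi(O1) triangle phi(O2)."""
    return all(any(O1[e] != O2[e] and (e in phi[O1]) != (e in phi[O2]) for e in E)
               for O1 in ORIENTATIONS for O2 in ORIENTATIONS if O1 != O2)

# Enumerate all 4^4 maps phi and compare the two conditions.
maps_a, maps_c = [], []
for values in product(SUBSETS, repeat=len(ORIENTATIONS)):
    phi = dict(zip(ORIENTATIONS, values))
    if tiles(phi):
        maps_a.append(phi)
    if cond_c(phi):
        maps_c.append(phi)
assert maps_a == maps_c and len(maps_a) > 0
```

Consistently with (\ref{abcd}b), each map found this way takes the four orientations to four distinct subsets.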
When we define $\text{hoc}(\overrightarrow{O},S)$, it is somewhat arbitrary that we bi-orient the edges in $S$ rather than the edges in $E\backslash S$. Indeed, we could also define it the other way, or equivalently, replace $\varphi$ with the map $\varphi^*$ defined by $\overrightarrow{O}\mapsto E\backslash\varphi(\overrightarrow{O})$.
\begin{Cor}\label{cordual}
If (\ref{abcd}a) holds for $\varphi$, then (\ref{abcd}a) also holds for $\varphi^*$, where the map $\varphi^*$ is defined by $\overrightarrow{O}\mapsto E\backslash\varphi(\overrightarrow{O})$.
\end{Cor}
\begin{proof}
By Proposition~\ref{abcd}, (\ref{abcd}a) $\Leftrightarrow$ (\ref{abcd}c). Clearly (\ref{abcd}c) holds for $\varphi$ if and only if (\ref{abcd}c) holds for $\varphi^*$, so (\ref{abcd}a) holds for $\varphi^*$.
\end{proof}
We call the tiling $[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi^*(\overrightarrow{O}))$ the \emph{dual tiling} of $[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi(\overrightarrow{O}))$. Clearly $(\varphi^*)^*=\varphi$. See Figure~\ref{F6}(b) for an example of the dual tiling.
\begin{Rem}
Although we have developed the theory in this subsection for a graph $G$, the structure of $G$ plays no role in either the results or the proofs; all we need is a finite set $E$, whose elements are viewed as edges.
\end{Rem}
\subsection{A geometric interpretation of Theorem~\ref{th}}
By Proposition~\ref{propsubset}, $\varphi_{\sigma,\sigma^*}$ satisfies (\ref{abcd}c). By the equivalence of (\ref{abcd}a) and (\ref{abcd}c) in Proposition~\ref{abcd}, we have the following result.
\begin{Th}\label{tilingofcube}
$[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\overrightarrow{O}))$.
\end{Th}
Recall that if we choose the map $\varphi_{\sigma,\sigma^*}$ in Example~\ref{ex1}, then we get the decomposition in Figure~\ref{F6}(a).
The tiling of the cube $[0,1]^E$ in Theorem~\ref{tilingofcube} can be viewed as a geometric construction behind the bijection $\varphi_{\sigma,\sigma^*}$ in Theorem~\ref{th}(1) for the following two reasons. First, the bijection $\varphi_{\sigma,\sigma^*}$ induces the tiling, and conversely, from the tiling we may recover the map $\varphi_{\sigma,\sigma^*}$ by sending an orientation $\overrightarrow{O}$ to the generating set of the unique standard half-open cell containing $\overrightarrow{O}$. Second, the tiling extends the tiling of $S_\sigma$ in Proposition~\ref{hocprop2}, which is viewed as the geometric construction behind the bijection in Theorem~\ref{th}(2). Moreover, the dual tiling extends the tiling of $S_{\sigma^*}$ behind Theorem~\ref{th}(3). We now prove the second claim.
We consider the restriction of the tiling in Theorem~\ref{tilingofcube} to $S_\sigma$, the set of $\sigma$-compatible continuous orientations.
We will prove the following decomposition of $S_\sigma$ and its dual version, Proposition~\ref{dualcor5}.
\begin{Prop}\label{cor5}
$S_\sigma=\biguplus_{\overrightarrow{O}\in\{0,1\}^E\text{ is }\sigma\text{-compatible}}\text{hoc}(\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\overrightarrow{O}))$.
\end{Prop}
\begin{Rem}\label{rmk}
Comparing Proposition~\ref{forestgeometry} and Proposition~\ref{cor5}, we have the same map $\varphi_{\sigma,\sigma^*}$. Moreover, the half-open cells in Proposition~\ref{hocprop2} and Proposition~\ref{cor5} induce the map $\varphi_{\sigma,\sigma^*}$ in the same way, which sends the unique lattice point contained in a half-open cell to the generating set. So the half-open decomposition in Proposition~\ref{hocprop2} is the same as the one in Proposition~\ref{cor5}. Hence the decomposition in Theorem~\ref{tilingofcube} extends the one in Proposition~\ref{hocprop2}. For example, the tiling in Figure~\ref{F6} extends the tiling in Figure~\ref{F3}(b). By Proposition~\ref{forestgeometry}, we already know Proposition~\ref{cor5} holds. However, the proof we will give in this section is combinatorial and independent of the proofs in Section~\ref{geometric}, which makes our combinatorial approach self-contained.
\end{Rem}
To prove Proposition~\ref{cor5}, we need the following lemma, which is part of Proposition~\ref{verticesofS} and is proved implicitly in \cite{BBY}. For the reader's convenience, we give a proof here. Recall that $\text{Face}_\sigma(T)$ is a closed face of $[0,1]^E$ consisting of the continuous orientations where each edge $e\notin T$ is oriented according to $\sigma(C(T,e))$.
\begin{Lemma}\label{facelemma}
$\text{Face}_\sigma(T)\subseteq S_\sigma$.
\end{Lemma}
\begin{proof}
For any continuous orientation $\overrightarrow{O}\in\text{Face}_\sigma(T)$, we need to show it is $\sigma$-compatible, i.e., for any $\epsilon >0$ and any cycle $C$, $\overrightarrow{O}+\epsilon\sigma(C)\notin [0,1]^E$. By Lemma~\ref{fundamental}, $\sigma(C)=\sum_{e\notin T,\overrightarrow{e}\in\sigma (C)}C(T,\overrightarrow{e})$. Because $\sigma(C)$ is $\sigma$-compatible, at least one of the directed cycles in the sum is $\sigma$-compatible, say $C(T,\overrightarrow{e_0})$. Note that $\overrightarrow{e_0}$ is an arc in $\sigma(C)$ and by the definition of $\text{Face}_\sigma(T)$, $\overrightarrow{e_0}$ is an arc in $\overrightarrow{O}$. So $\overrightarrow{O}+\epsilon\sigma(C)\notin [0,1]^E$.
\end{proof}
We also need the following notation and lemma. Let $\overrightarrow{O}$ be a continuous orientation, and let $\overrightarrow{P}$ be a partial orientation consisting of some of the discretely oriented edges of $\overrightarrow{O}$. We then say that $\overrightarrow{O}$ \emph{contains} $\overrightarrow{P}$, and we denote by $_{\overrightarrow{P}}\overrightarrow{O}$ or $_{P}\overrightarrow{O}$ the continuous orientation obtained by reversing the arcs of $\overrightarrow{P}$ in $\overrightarrow{O}$.
\begin{Lemma}\label{lemmacc}
Let $\overrightarrow{O}$ be a $\sigma$-compatible continuous orientation. If $\overrightarrow{O}$ contains a directed cocycle $\overrightarrow{C^*}$, then $_{C^*}\overrightarrow{O}$ is also $\sigma$-compatible.
\end{Lemma}
\begin{proof}
Assume by contradiction that there exists some $\epsilon>0$ and cycle $C$ such that $_{C^*}\overrightarrow{O}+\epsilon\sigma(C)\in [0,1]^E$.
Note that $C\cap C^*$ must be empty. Indeed, if $C\cap C^*\neq\emptyset$, then because $\langle\sigma(C),\overrightarrow{C^*}\rangle=0$, one arc of $\sigma(C)$ would be equal to an arc of $\overrightarrow{C^*}$ and some other arc of $\sigma(C)$ would be opposite to an arc of $\overrightarrow{C^*}$. Hence $_{C^*}\overrightarrow{O}+\epsilon\sigma(C)\notin [0,1]^E$, which leads to a contradiction.
Because $C\cap C^*=\emptyset$ and $_{C^*}\overrightarrow{O}+\epsilon\sigma(C)\in [0,1]^E$, we also have $\overrightarrow{O}+\epsilon\sigma(C)\in [0,1]^E$, which contradicts the $\sigma$-compatibility of $\overrightarrow{O}$.
\end{proof}
Now we show which half-open cells in Theorem~\ref{tilingofcube} consist of $\sigma$-compatible continuous orientations.
\begin{Lemma}\label{lemmadd}
Let $\overrightarrow{O}$ be a discrete orientation. Then a continuous orientation in $\text{hoc}(\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\overrightarrow{O}))$ is $\sigma$-compatible if and only if $\overrightarrow{O}$ is $\sigma$-compatible.
\end{Lemma}
\begin{proof}
Adopt the notations in Lemma~\ref{l1}.
If $\overrightarrow{O}$ is not $\sigma$-compatible, then by Lemma~\ref{l1}, $I\neq\emptyset$, which means $\overrightarrow{O}$ contains a directed cycle $-\overrightarrow{C_i}\notin\sigma$ for some $i\in I$. Note that $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$ contains the cycle $C_i$. Hence for any continuous orientation $\overrightarrow{O'}\in\text{hoc}(\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\overrightarrow{O}))$, there exists a sufficiently small $\epsilon>0$ such that $\overrightarrow{O'}+\epsilon\overrightarrow{C_i}\in [0,1]^E$. Therefore $\overrightarrow{O'}$ is not $\sigma$-compatible.
If $\overrightarrow{O}$ is $\sigma$-compatible, then by Lemma~\ref{l1}, $I=\emptyset$. Let $T$ be the spanning tree $g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})$. Note that $\overrightarrow{O}=_{\biguplus_{j\in J}C^*_j}\overrightarrow{O^{cp}}$ and $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})=T\backslash\biguplus_{j\in J}C^*_j$. So any continuous orientation $\overrightarrow{O'}\in\text{hoc}(\overrightarrow{O},\varphi_{\sigma,\sigma^*}(\overrightarrow{O}))$ contains the directed cocycles $-\overrightarrow{C^*_j}$, and $_{\biguplus_{j\in J}C^*_j}\overrightarrow{O'}\in\text{hoc}(\overrightarrow{O^{cp}},T\backslash\biguplus_{j\in J}C^*_j)\subseteq\text{hoc}(\overrightarrow{O^{cp}},T)\subseteq\text{Face}_\sigma(T)$. By Lemma~\ref{facelemma}, $_{\biguplus_{j\in J}C^*_j}\overrightarrow{O'}$ is $\sigma$-compatible. By Lemma~\ref{lemmacc}, $\overrightarrow{O'}$ is also $\sigma$-compatible.
\end{proof}
By Lemma~\ref{lemmadd} and Theorem~\ref{tilingofcube}, we get Proposition~\ref{cor5}.
Now we consider the dual theory. We only list the results because the proof is similar.
Due to Corollary~\ref{cordual}, we have the following dual decomposition.
\begin{Prop}
$[0,1]^E=\biguplus_{\overrightarrow{O}\in\{0,1\}^E}\text{hoc}(\overrightarrow{O},\varphi^*_{\sigma,\sigma^*}(\overrightarrow{O}))$.
\end{Prop}
Let $S_{\sigma^*}$ be the set of $\sigma^*$-compatible continuous orientations. The face $\text{Face}_{\sigma^*}(T)$ of $[0,1]^E$ is defined to be the set of continuous orientations in which each edge $e\in T$ is oriented according to $\sigma^*(C^*(T,e))$. Then we have the following dual lemmas.
\begin{Lemma}
(1) $\text{Face}_{\sigma^*}(T)\subseteq S_{\sigma^*}$.
(2) Let $\overrightarrow{O}$ be a $\sigma^*$-compatible continuous orientation. If $\overrightarrow{O}$ contains a directed cycle $\overrightarrow{C}$, then $_{C}\overrightarrow{O}$ is also $\sigma^*$-compatible.
(3) Let $\overrightarrow{O}$ be a discrete orientation. Then a continuous orientation in $\text{hoc}(\overrightarrow{O},\varphi^*_{\sigma,\sigma^*}(\overrightarrow{O}))$ is $\sigma^*$-compatible if and only if $\overrightarrow{O}$ is $\sigma^*$-compatible.
\end{Lemma}
Finally, we get the decomposition of $S_{\sigma^*}$.
\begin{Prop}\label{dualcor5}
$S_{\sigma^*}=\biguplus_{\overrightarrow{O}\in\{0,1\}^E\text{ is }\sigma^*\text{-compatible}}\text{hoc}(\overrightarrow{O},\varphi^*_{\sigma,\sigma^*}(\overrightarrow{O}))$.
\end{Prop}
\begin{Ex}\label{ex}
We still consider the bijection $\varphi_{\sigma,\sigma^*}$ in Example~\ref{ex1}. Note that for a spanning tree $T$, the closed face $\text{Face}_{\sigma^*}(T)$ and the half-open cell $\text{hoc}(g_{\sigma, \sigma^*}(T),E\backslash T)(=\text{hoc}(\overrightarrow{O^{cp}},\varphi^*_{\sigma,\sigma^*}(\overrightarrow{O^{cp}})))$, where $\overrightarrow{O^{cp}}=g_{\sigma, \sigma^*}(T)$, are of dimension $1$. By restricting the tiling in Figure~\ref{F6}(b) to $S_{\sigma^*}$, we get Figure~\ref{F7}, which consists of three half-open line segments and one point.
\end{Ex}
\begin{figure}[h]
\centering
\includegraphics[scale=1]{F7.eps}
\caption{This shows the half-open decomposition of $S_{\sigma^*}$ in Example~\ref{ex}, which induces the bijection $\varphi_{\sigma,\sigma^*}$ from $\sigma$-compatible orientations to connected spanning subgraphs.}
\label{F7}
\end{figure}
\section{Generalization to Regular Matroids}\label{regular}
In this section, we first introduce the definition of regular matroids and related objects; see also \cite{BBY}. Then we explain why the results on graphs in Section~\ref{geometric} and Section~\ref{combinatorial} can be generalized to regular matroids. Finally, we explain the obstacles to generalizing the results to \emph{realizable matroids} over $\mathbb{R}$.
We assume that the reader is familiar with the basic theory of matroids; a standard reference is \cite{O}. Recall that a matrix is called \emph{totally unimodular} if every square submatrix has determinant $0$, $1$, or $-1$. A \emph{matroid} is called \emph{regular} if it can be represented by a totally unimodular matrix over $\mathbb{R}$. Let $A$ be an $r\times n$ matrix representing a regular matroid $M$ with the ground set $E$. Without loss of generality, we may assume $\text{rank}(A)=r$.
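Total unimodularity is straightforward to test by brute force on small matrices. The following sketch (Python; the directed-triangle incidence matrix is our illustrative choice, not an object from this paper) enumerates every square submatrix and checks its determinant:

```python
from itertools import combinations

def det(M):
    """Integer determinant by cofactor expansion along the first row (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(len(M)))

def is_totally_unimodular(A):
    """Every square submatrix has determinant 0, 1, or -1."""
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[r][c] for c in cols] for r in rows]) not in (-1, 0, 1):
                    return False
    return True

# Vertex-edge incidence matrix of a directed triangle: a graphic, hence regular, matroid.
D = [[1, 0, 1],
     [-1, 1, 0],
     [0, -1, -1]]
print(is_totally_unimodular(D))                   # True
print(is_totally_unimodular([[1, 1], [-1, 1]]))   # False: a 2x2 minor has determinant 2
```

The incidence matrix of any directed graph is totally unimodular, which is why graphic matroids are regular; the second matrix fails already at the full $2\times 2$ determinant.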
Let $C$ be a circuit. By definition, $C$ is the support of a support-minimal nonzero element in $\ker(A)$. By \cite[Lemma 6]{SW}\footnote{In \cite{SW}, Lemma~6 and Lemma~7 are proved with respect to $\ker(A)\cap\mathbb{Z}^n$ instead of $\ker(A)$. However, the proofs work for $\ker(A)$.}, all the elements in $\ker(A)$ with support $C$, together with the zero vector, form a one-dimensional subspace of $\ker(A)$. By \cite[Lemma 7]{SW}, the generator of this subspace can be chosen to be a $\{0, \pm 1\}$-vector. Clearly there are exactly two such generators, denoted by $\pm\overrightarrow{C}$. So for any circuit $C$, there are exactly two $\{0, \pm 1\}$-vectors in $\ker(A)$ with support $C$. We call them \emph{signed circuits} of $M$. Note that if we take $A$ to be the matrix $D$ associated to a graph $G$ (defined in Section~\ref{combinatorial}), then these signed circuits are exactly the directed cycles.
By \cite[Lemma 10]{SW}, the notion of signed circuit is intrinsic to $M$, independent of the choice of $A$, and hence we are safe to work with one matrix $A$. To be precise, let $A'$ be another $r\times n$ totally unimodular matrix representing $M$. Without loss of generality, we may assume the $i$-th columns of $A$ and $A'$ correspond to the same element in $E$ for $i=1, \cdots, n$. Then we have $A'=FAP$, where $F$ is an integer matrix whose determinant is $\pm 1$ and $P$ is a diagonal matrix whose entries on the main diagonal are $\pm 1$; see \cite[Section 2.2]{SW}. Hence for any vector $\overrightarrow{C}$, $P\cdot\overrightarrow{C}\in\ker(A)$ if and only if $\overrightarrow{C}\in\ker(A')$. So the signed circuits with respect to $A$ and the ones with respect to $A'$ differ merely by a reorientation $P$ of $M$. As in the case of graphs, when we choose the matrix $A$, it contains the information of a reference orientation for $M$. The choice of the reference orientation does not affect our results essentially. We remark that these signed circuits make $M$ an oriented matroid; see \cite[Chapter 1.2]{BVSWZ}. Moreover, the oriented matroid structure on a regular matroid is unique up to reorientation; see \cite[Corollary 7.9.4]{BVSWZ}.
Similarly, for any cocircuit $C^*$, there are exactly two $\{0, \pm 1\}$-vectors in $\operatorname{im}(A)$ with support $C^*$. We call them \emph{signed cocircuits} of $M$. All the arguments above for circuits also work for cocircuits due to the duality;
see \cite[Section 2]{SW} or Subsection~\ref{subsectionduality}.
One can generalize the notions of fundamental cycles and cocycles, acyclic cycle and cocycle signatures, cycle reversals, and cocycle reversals in a straightforward way from graphs to regular matroids, replacing ``cycle'' with ``circuit'' and ``cocycle'' with ``cocircuit'' in the names. For details, see \cite{BBY}.
Similarly to the case of graphs, the vector space $\mathbb{R}^E$ has an orthogonal decomposition $\ker(A)\oplus\operatorname{im}(A^T)$ with respect to the standard inner product $\langle\cdot,\cdot\rangle$. Hence the signed circuits and the signed cocircuits are orthogonal to each other. For any basis $B$, it is easy to check that the signed fundamental circuits (resp. signed fundamental cocircuits) form a basis of $\ker(A)$ (resp. $\operatorname{im}(A^T)$), and an integral basis of $\ker_\mathbb{Z}(A)=\ker(A)\bigcap \mathbb{Z}^E$ (resp. $\operatorname{im}_\mathbb{Z}(A^T)=\operatorname{im}(A^T)\bigcap \mathbb{Z}^E$).
Now we claim that we may generalize all the results in the previous sections to regular matroids. Indeed, the results that we cite from other references are proved in the setting of regular matroids in these references, most of which are in \cite{BBY}. For the proofs in this paper, one can simply replace ``cycle'' with ``circuit'', ``cocycle'' with ``cocircuit'', ``spanning tree'' with ``basis'', etc.
In particular, Proposition~\ref{tree} and Theorem~\ref{th} hold for regular matroids.
\begin{Prop}(\cite[Theorem 1.3.1(1)]{BBY})
Fix acyclic signatures $\sigma$ and $\sigma^*$ of a regular matroid $M$.
Then the map $$g_{\sigma,\sigma^*}:\{\text{bases of } M\}\longrightarrow\{(\sigma,\sigma^*)\text{-compatible orientations}\}$$ is a bijection, where $g_{\sigma,\sigma^*}$ sends $B$ to the orientation of $M$ in which we orient each $e\notin B$ according to its orientation in $\sigma(C(B,e))$ and each $e\in B$ according to its orientation in $\sigma^*(C^*(B,e))$.
\end{Prop}
\begin{Th}
Fix acyclic signatures $\sigma$ and $\sigma^*$ of a regular matroid $M$ with ground set $E$.
(1) The map
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}:\{\text{discrete orientations}\} & \longrightarrow & \{\text{subsets of } E\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\bigcup (\biguplus_{i\in I}C_i)\backslash (\biguplus_{j\in J}C_j^*)
\end{eqnarray*}
is a bijection, where $\overrightarrow{O}$ is an orientation obtained by reversing disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and signed cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$ in a ($\sigma,\sigma^*$)-compatible orientation $\overrightarrow{O^{cp}}$.
(2) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}: \{\sigma\text{-compatible orientations}\} & \longrightarrow & \{\text{independent sets}\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\backslash (\biguplus_{j\in J}C_j^*).
\end{eqnarray*}
(3) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{eqnarray*}
\varphi_{\sigma,\sigma^*}:\{\sigma^*\text{-compatible orientations}\} & \longrightarrow & \{\text{spanning sets}\} \\
\overrightarrow{O} & \mapsto & g_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\bigcup (\biguplus_{i\in I}C_i).
\end{eqnarray*}
\end{Th}
Note that Proposition~\ref{tree} is proven for \emph{realizable matroids} over $\mathbb{R}$ in \cite{BBY} and, more generally, for \emph{oriented matroids} in \cite{BSY}. It is natural to ask whether Theorem~\ref{th} also holds for these types of matroids. The answer is negative (unless some notions are modified significantly). A direct reason is that one circuit-cocircuit reversal class of a realizable matroid may contain more than one $(\sigma,\sigma^*)$-compatible orientation. To be precise, we define the circuit reversals and the cocircuit reversals for realizable matroids by analogy with the terminology for graphs; see also \cite{G2} for the definitions. We assume the $(\sigma,\sigma^*)$-compatible orientations for realizable matroids are defined so that they are in bijection with the bases; see \cite{BSY} for such a definition. We consider the uniform oriented matroid $U_{2,4}$, which is realizable. It is easy to check that $U_{2,4}$ has only two circuit-cocircuit reversal classes while it has six bases (cf. \cite[Prop. 3.2]{G2}). Hence for an orientation $\overrightarrow{O}$, there may be more than one $(\sigma,\sigma^*)$-compatible orientation $\overrightarrow{O^{cp}}$ in the same circuit-cocircuit reversal class, which makes the formulation of Theorem~\ref{th} problematic.
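The basis count for $U_{2,4}$ is immediate to verify from a concrete realization. In the sketch below (Python), the $2\times 4$ matrix is our illustrative choice of a realization in which every pair of columns is independent; note that it is not totally unimodular, consistent with the fact that $U_{2,4}$ is not regular:

```python
from itertools import combinations

# One realization of U_{2,4}: four columns of a 2 x 4 real matrix, pairwise independent.
cols = [(1, 0), (0, 1), (1, 1), (1, 2)]

def det2(u, v):
    """2x2 determinant of two column vectors."""
    return u[0] * v[1] - u[1] * v[0]

# A 2-subset of columns is a basis exactly when its determinant is nonzero.
bases = [p for p in combinations(range(4), 2) if det2(cols[p[0]], cols[p[1]]) != 0]
print(len(bases))   # 6: every 2-subset of columns is a basis
```

With six bases but only two circuit-cocircuit reversal classes, the classes cannot each contain a unique compatible orientation, which is the counting obstruction described above.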
We would like to share another way to see the obstacles to the generalization.
From a geometric point of view, the zonotope generated by a set $Z$ of vectors tiles the space if and only if the matroid represented by $Z$ is regular (see \cite[Theorem 1]{DG}). Hence our geometric construction as in Figure~\ref{F3} cannot work beyond regular matroids.
\section*{Acknowledgement}
Many thanks to Olivier Bernardi for orienting the author towards the study of this project and countless helpful discussions, as well as detailed advice on the writing of this paper. As a graduate student, the author wants to thank the Department of Mathematics at Brandeis University for admitting the author to the Ph.D. program and creating a flexible study environment. The author also wants to thank Chi Ho Yuen for helpful discussions.
| {
"timestamp": "2021-10-18T02:06:20",
"yymm": "2109",
"arxiv_id": "2109.01930",
"language": "en",
"url": "https://arxiv.org/abs/2109.01930",
"abstract": "Let $G$ be a connected finite graph. Backman, Baker, and Yuen have constructed a family of explicit and easy-to-describe bijections $g_{\\sigma,\\sigma^*}$ between spanning trees of $G$ and $(\\sigma,\\sigma^*)$-compatible orientations, where the $(\\sigma,\\sigma^*)$-compatible orientations are the representatives of equivalence classes of orientations up to cycle-cocycle reversal which are determined by a cycle signature $\\sigma$ and a cocycle signature $\\sigma^*$. Their proof makes use of zonotopal subdivisions and the bijections $g_{\\sigma,\\sigma^*}$ are called \\emph{geometric bijections}. In this paper, we extend the geometric bijections to subgraph-orientation correspondences. Moreover, we extend the geometric constructions accordingly. Our proofs are purely combinatorial, even for the geometric constructions. We also provide geometric proofs for partial results, which make use of zonotopal tiling, relate to Backman, Baker, and Yuen's method, and motivate our combinatorial constructions. Finally, we explain that the main results hold for \\emph{regular matroids}.",
"subjects": "Combinatorics (math.CO)",
"title": "Geometric bijections between spanning subgraphs and orientations of a graph"
} |
https://arxiv.org/abs/1506.02797 | Abelian Powers and Repetitions in Sturmian Words | Richomme, Saari and Zamboni (J. Lond. Math. Soc. 83: 79-95, 2011) proved that at every position of a Sturmian word starts an abelian power of exponent $k$ for every $k > 0$. We improve on this result by studying the maximum exponents of abelian powers and abelian repetitions (an abelian repetition is an analogue of a fractional power) in Sturmian words. We give a formula for computing the maximum exponent of an abelian power of abelian period $m$ starting at a given position in any Sturmian word of rotation angle $\alpha$. As an analogue of the critical exponent, we introduce the abelian critical exponent $A(s_\alpha)$ of a Sturmian word $s_\alpha$ of angle $\alpha$ as the quantity $A(s_\alpha) = \limsup k_{m}/m = \limsup k'_{m}/m$, where $k_{m}$ (resp. $k'_{m}$) denotes the maximum exponent of an abelian power (resp.~of an abelian repetition) of abelian period $m$ (the superior limits coincide for Sturmian words). We show that $A(s_\alpha)$ equals the Lagrange constant of the number $\alpha$. This yields a formula for computing $A(s_\alpha)$ in terms of the partial quotients of the continued fraction expansion of $\alpha$. Using this formula, we prove that $A(s_\alpha) \geq \sqrt{5}$ and that the equality holds for the Fibonacci word. We further prove that $A(s_\alpha)$ is finite if and only if $\alpha$ has bounded partial quotients, that is, if and only if $s_{\alpha}$ is $\beta$-power-free for some real number $\beta$. Concerning the infinite Fibonacci word, we prove that: i) The longest prefix that is an abelian repetition of period $F_j$, $j>1$, has length $F_j( F_{j+1}+F_{j-1} +1)-2$ if $j$ is even or $F_j( F_{j+1}+F_{j-1} )-2$ if $j$ is odd, where $F_{j}$ is the $j$th Fibonacci number; ii) The minimum abelian period of any factor is a Fibonacci number. Further, we derive a formula for the minimum abelian periods of the finite Fibonacci words. | \section{Introduction}
Sturmian words are infinite words having exactly $n+1$ distinct factors of each length $n\geq 0$. By the celebrated theorem of Morse and Hedlund \cite{MoHe38}, they are the aperiodic binary words with minimal factor complexity. Every Sturmian word is characterized by an irrational number $\alpha$ and a real number $\rho$ called the \emph{angle} and the \emph{initial point} respectively. The Sturmian word $s_{\alpha,\rho}$ is defined by rotating the point $\rho$ by the angle $\alpha$ in the torus $I=\mathbb{R}/\mathbb{Z}=[0,1)$ and by writing a letter $\sa{b}$ when the point falls in the interval $[0,1-\alpha)$ and a letter $\sa{a}$ when the point falls in the complement. The Fibonacci word $f=\sa{abaababaabaababaabab}\cdots$ is a well-known Sturmian word obtained by taking both the angle and the initial point equal to $\phi-1$, where $\phi=(1+\sqrt{5})/2$ is the Golden Ratio. The Fibonacci word $f$ is also the limit of the sequence of finite Fibonacci words $f_{n}$, defined by $f_{0}=\sa{b}$, $f_{1}=\sa{a}$ and $f_{j}=f_{j-1}f_{j-2}$ for every $j>1$, that are the natural counterpart of the Fibonacci numbers in the setting of words.
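The two descriptions of the Fibonacci word above (the recurrence $f_j=f_{j-1}f_{j-2}$ and the coding of the rotation of angle and initial point $\phi-1$) are easy to compare experimentally. A minimal sketch (Python; function names are ours) builds the word both ways and checks that the outputs agree with the prefix quoted above:

```python
import math

def fib_word(n_letters):
    """Finite Fibonacci words: f_0 = 'b', f_1 = 'a', f_j = f_{j-1} f_{j-2}."""
    prev, cur = "b", "a"
    while len(cur) < n_letters:
        prev, cur = cur, cur + prev
    return cur[:n_letters]

def sturmian_word(alpha, rho, n_letters):
    """Code the rotation by alpha started at rho: letter 'b' on [0, 1-alpha), 'a' on [1-alpha, 1)."""
    word = []
    for n in range(n_letters):
        x = (rho + n * alpha) % 1.0
        word.append("a" if x >= 1.0 - alpha else "b")
    return "".join(word)

phi = (1 + math.sqrt(5)) / 2
print(fib_word(20))                          # abaababaabaababaabab
print(sturmian_word(phi - 1, phi - 1, 20))   # the same word
```

For prefixes of this length the rotated points stay far from the interval endpoint $1-\alpha$, so the naive floating-point modular arithmetic is reliable here.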
Sturmian words have several equivalent definitions and a lot of combinatorial properties that make them well-studied objects in discrete mathematics and theoretical computer science.
In fact, there exists a huge bibliography on Sturmian words (see for instance the survey papers
\cite{BerstelRecent07,Berstel-Reutenauersurvey}, \cite[Chap. 2]{lothaire-book:2002},
\cite[Chap. 6]{Pitheasfogg} and the references therein).
There are mainly two approaches to Sturmian words: one is purely combinatorial, while the other uses techniques from elementary number theory to derive correspondences between the finer arithmetic properties of the irrational $\alpha$ and the factors of the Sturmian words of angle $\alpha$. In the language of computer science, such correspondences are called \emph{semantics}. In this paper, we aim to build upon such an approach by showing new semantics that allow us to give new and tight results on the abelian combinatorics of Sturmian words. Indeed, this approach extends to the abelian setting the well-known fruitful semantics that in the last decades have allowed researchers to derive deep and important results on the combinatorics of infinite words from the theory of codings of rotations and continued fractions of irrationals. Interestingly, these semantics also allowed researchers to shed new light on consolidated theories by exploiting the opposite direction. A remarkable example of this is represented by the work of B. Adamczewski and Y. Bugeaud \cite{AB1,AB2}.
Concerning the maximum exponent of repetitions in Sturmian words, there exists a vast bibliography (see for example \cite{Damanik200223,BeHoZa06,Vandeth2000283,Justin2001363,Krieger200770,Be99,CaDel00} and the references therein), which stems from the seminal work on the Fibonacci word presented in \cite{MignosiPirillo}. Indeed, the study of repetitions in words is a classical subject both from the combinatorial and the algorithmic point of view. Repetitions are strictly related to the notion of periodicity. Recall that a word $w$ of length $|w|$ has a \emph{period} $p>0$ if $w_i=w_{i+p}$ for every $1\leq i \leq |w|-p$, where $w_i$ is the letter in the position $i$ of $w$. The \emph{exponent} of $w$ is the ratio $|w|/\pi_w$ between its length $|w|$ and its \emph{minimum period $\pi_w$}. When studying the degree of repetitiveness of a word, we are often interested in the factors whose exponent is at least $2$, called \emph{repetitions}. Repetitions whose exponent is an integer are called \emph{integer powers} since a word $w$ with integer exponent $k\geq 2$ can be written as $w=u^{k}$, i.e., $w$ is the concatenation of $k$ copies of a non-empty word $u$ of length $\pi_w$. If instead $k$ is not an integer, then the word $w$ is a \emph{fractional power}. In this case we can write $w=u^{\lfloor k \rfloor}u'$, where $u'$ is the prefix of $u$ of length $\pi_w(k-\lfloor k \rfloor)$. For example, the word $w=aabaaba$ is a $7/3$-power since it has minimum period $3$ and length $7$. A good reference on periodicity is \cite[Chap.~8]{lothaire-book:2002}.
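These notions are mechanical to check. A minimal sketch (Python; names ours) computes the minimum period and the exponent, confirming that $aabaaba$ is a $7/3$-power:

```python
from fractions import Fraction

def min_period(w):
    """Smallest p > 0 with w[i] == w[i+p] for all valid i (p = |w| always qualifies)."""
    for p in range(1, len(w) + 1):
        if all(w[i] == w[i + p] for i in range(len(w) - p)):
            return p

def exponent(w):
    """|w| / (minimum period of w), as an exact rational."""
    return Fraction(len(w), min_period(w))

print(min_period("aabaaba"), exponent("aabaaba"))  # 3 7/3
```

Using `Fraction` keeps fractional exponents exact rather than approximating them in floating point.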
A measure of repetitiveness of an infinite word is given by the supremum of the exponents of its factors, called the \emph{critical exponent} of the word. If this supremum $\beta$ is finite, then the word is said to be $\beta^+$-power-free (or simply $\beta$-power-free if $\beta$ is irrational, so there are no factors with exponent $\beta$). For example, the critical exponent of the Fibonacci word $f$ is $2+\phi$ \cite{MignosiPirillo}, so $f$ is $(2+\phi)$-power-free. In general, a Sturmian word $s_{\alpha,\rho}$ is $\beta$-power-free for some $\beta$ if and only if the continued fraction expansion of $\alpha$ has bounded partial quotients \cite{Mi89}. The critical exponent of $s_{\alpha,\rho}$ can be explicitly determined by a formula involving these partial quotients \cite{CaDel00,Justin2001363,Damanik200223,Pelto}.
Recently, the extension of these notions to the so-called abelian setting has received a lot of interest.
Abelian properties of words have been studied since
the very beginning of formal language theory and combinatorics on words. The notion of the Parikh vector of a word (see later for its definition) has become standard and is often used without an explicit reference to the original 1966 paper by Parikh \cite{Parikh:1966:CL:321356.321364}. Abelian powers were first considered in 1961 by Erd\H{o}s \cite{Erdos1961221} as a natural generalization of usual integer powers.
Research concerning abelian
properties of words and languages developed afterwards in
different directions.
For instance, there is an increasing interest in
abelian properties of words linked to periodicity; see, for example,
\cite{AKP2012,CI2006,CRSZ2010,DR2012,PZ13,Richomme201179,SS2011}.
Recall that the Parikh vector $\mathcal{P}_{w}$ of a finite word $w$ enumerates the total number of each letter of the alphabet in $w$. Therefore, two words have the same Parikh vector if and only if one can be obtained from the other by permuting letters.
An \emph{abelian decomposition} of a word $w$ is a factorization $w=u_0u_1 \cdots u_{j-1}u_j$, where $j\geq 2$, the words $u_1$, $\ldots$, $u_{j-1}$ have the same Parikh vector $\mathcal{P}$ and the Parikh vectors of the words $u_0$ and $u_j$ are contained in $\mathcal{P}$ (that is, they are component-wise less than or equal to $\mathcal{P}$ but not equal to $\mathcal{P}$). The sum $m$ of the components of the Parikh vector $\mathcal{P}$ (that is, the length of $u_1$) is called an \emph{abelian period} of $w$ (cf.~\cite{CI2006}).
The words $u_0$ and $u_j$, the first and the last factor of the decomposition, are respectively called the \emph{head} and the \emph{tail} of the abelian decomposition. Notice that different abelian decompositions can give the same abelian period. For example, the word $w=abab$ has an abelian period $2$ with $u_0=\epsilon$ (the empty word), $u_1=u_2=ab$ and $u_3=\epsilon$ or with $u_0=a$, $u_1=ba$ and $u_2=b$.
The \emph{abelian exponent} of $w$ is the ratio $|w|/\mu_w$ between its length $|w|$ and its minimum abelian period $\mu_w$. We say that a word $w$ is an \emph{abelian repetition} of period $m$ and exponent $k$ if it has an abelian decomposition $w=u_0u_1 \cdots u_{j-1}u_j$ with $j\geq 3$ (so that $u_1$ and $u_2$ exist and are nonempty) such that $|u_1| = m$ and $|w|/m = k$. If we are uninterested in the period and exponent, then we simply call $w$ an abelian repetition.
An \emph{abelian power} (also known as a \emph{weak repetition}~\cite{Cummings_weakrepetitions}) is a word $w$ that has an abelian decomposition with empty head and empty tail. Let $m$ be the abelian period of $w$ corresponding to the decomposition. Then we say that the word $w$ is an abelian power of period $m$ and exponent $|w|/m$. If the exponent of $w$ equals $1$, then we say that $w$ is a \emph{degenerated} abelian power of period $m$.
\subsection{Our results}
The main contribution of this paper is the description of a framework that allows us to translate arithmetic properties of rotations of an irrational number in the torus $I=\mathbb{R}/\mathbb{Z}=[0,1)$ to properties of abelian powers and repetitions in Sturmian words.
In \cite{Mi89} (and in a very preliminary form in \cite{KnuthAMM}), a bijection (that we call the \emph{Sturmian bijection}) between factors of Sturmian words and subintervals of the torus $I$ is described.
In the last three decades, the Sturmian bijection has allowed researchers to shed light on the combinatorics of the factors of Sturmian words especially from the point of view of repetitions. Most of the results on maximal repetitions in Sturmian words stem in fact from the Sturmian bijection.
We show in this paper that the Sturmian bijection preserves abelian properties of
factors. In particular, using the Sturmian bijection, we prove that a Sturmian word of rotation angle $\alpha$ contains an abelian power of period $m$ and exponent $k\geq 2$ if and only if $\|m\alpha\|< \frac{1}{k}$, where $\|x\|$ is the distance between $x$ and the nearest integer. As a consequence, the maximum exponent of an abelian power of period $m$ in a Sturmian word of rotation angle $\alpha$ is $\left \lfloor 1/ \|m\alpha\| \right \rfloor$. Furthermore, we prove that the Sturmian word $s_{\alpha,\rho}$ of angle $\alpha$ and initial point $\rho$ contains an abelian power of period $m$ and exponent $k\geq 2$ starting at position $n$ if and only if $\{\rho+n\alpha\}<1-k\{m\alpha\}$ or $\{\rho+n\alpha\}>k\{-m\alpha\}$, save for a few exceptional positions related to the points $\{\{-rm\alpha\}\colon r \geq 0\}$. We recover the result of \cite{Richomme201179} that abelian powers of arbitrarily large exponent occur at every position of a Sturmian word.
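The formula $\lfloor 1/\|m\alpha\|\rfloor$ is easy to check numerically on the Fibonacci word ($\alpha=\rho=\phi-1$). The sketch below (Python; brute-force search, names ours) compares the largest abelian-power exponent of period $m$ found in a long prefix against the formula:

```python
import math
from collections import Counter

def fib_word(n):
    prev, cur = "b", "a"
    while len(cur) < n:
        prev, cur = cur, cur + prev
    return cur[:n]

def parikh(w):
    c = Counter(w)
    return (c["a"], c["b"])

def max_abelian_power_exponent(w, m):
    """Largest k with k consecutive blocks of length m, all with the same Parikh vector."""
    best = 0
    for i in range(len(w) - m + 1):
        P, k = parikh(w[i:i + m]), 1
        while i + (k + 1) * m <= len(w) and parikh(w[i + k * m:i + (k + 1) * m]) == P:
            k += 1
        best = max(best, k)
    return best

alpha = (1 + math.sqrt(5)) / 2 - 1
w = fib_word(3000)
for m in range(1, 6):
    dist = min(m * alpha % 1.0, 1.0 - (m * alpha % 1.0))   # ||m * alpha||
    assert max_abelian_power_exponent(w, m) == math.floor(1 / dist)
print("floor(1/||m*alpha||) matches the brute-force maximum for m = 1, ..., 5")
```

Equality on a finite prefix is not automatic from the theorem, but the set of starting positions described by the condition above has positive measure, so a prefix of a few thousand letters suffices for these small values of $m$.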
A Sturmian word always contains abelian powers of arbitrarily large exponent, so there is no direct analogue of the critical exponent in the abelian setting. Instead we define the \emph{abelian critical exponent} of a Sturmian word $s_\alpha$ of angle $\alpha$ as the quantity $A(s_\alpha) = \limsup k_{m}/m=\limsup k'_{m}/m$, where $k_m$ (resp.~$k'_m$) denotes the maximum exponent of an abelian power (resp.~of an abelian repetition) of abelian period $m$ in $s_\alpha$ (in fact, the two superior limits coincide). We show that $A(s_\alpha)$ equals the \emph{Lagrange constant} of the irrational $\alpha$, a well-known constant in number theory (see Section \ref{sec:approx} for its definition). Via this connection, we determine $A(s_\alpha)$ in terms of the partial quotients of the continued fraction expansion of $\alpha$. This allows us to prove that $A(s_\alpha) \geq \sqrt{5}$ for every Sturmian word $s_\alpha$ and that the equality holds for the Fibonacci word. We further prove that $A(s_\alpha)$ is finite if and only if the continued fraction expansion of $\alpha$ has bounded partial quotients, that is, if and only if $s_{\alpha}$ is $\beta$-power-free for some real number $\beta$.
We finally focus on the particular case of the Fibonacci word $f=s_{\phi-1,\phi-1}$. We prove that in the Fibonacci word the maximum exponent of an abelian power of period $F_j$---the $j$th Fibonacci number---equals $\lfloor \phi F_j + F_{j-1} \rfloor$ and that for every $F_j$, $j>1$, the longest prefix of the Fibonacci word that is an abelian repetition of period $F_j$ has length $F_j( F_{j+1}+F_{j-1} +1)-2$ if $j$ is even and $F_j( F_{j+1}+F_{j-1} )-2$ if $j$ is odd.
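The prefix lengths in the last claim can be confirmed by brute force for small $j$ (here $F_2=2$, $F_3=3$, $F_4=5$, giving expected lengths $8$, $19$ and $58$). The sketch below (Python; names ours) searches all prefixes of a sufficiently long Fibonacci prefix:

```python
from collections import Counter

def fib_word(n):
    prev, cur = "b", "a"
    while len(cur) < n:
        prev, cur = cur, cur + prev
    return cur[:n]

def parikh(w):
    c = Counter(w)
    return (c["a"], c["b"])

def is_abelian_repetition(w, m):
    """Abelian decomposition u0 u1 ... u_{j-1} u_j with |u1| = m and at least two full blocks."""
    for h in range(m):                   # head length (its Parikh vector is strictly contained)
        k, t = divmod(len(w) - h, m)     # k full blocks, tail of length t < m
        if k < 2:
            continue
        blocks = [parikh(w[h + i * m:h + (i + 1) * m]) for i in range(k)]
        P = blocks[0]
        if any(b != P for b in blocks):
            continue
        head, tail = parikh(w[:h]), parikh(w[h + k * m:])
        if all(head[i] <= P[i] and tail[i] <= P[i] for i in range(2)):
            return True
    return False

f = fib_word(130)
F = [1, 1, 2, 3, 5, 8]                   # F_0, ..., F_5
for j in (2, 3, 4):
    expected = F[j] * (F[j + 1] + F[j - 1] + (1 if j % 2 == 0 else 0)) - 2
    longest = max(L for L in range(1, len(f) + 1) if is_abelian_repetition(f[:L], F[j]))
    assert longest == expected
print("longest abelian-repetition prefixes of periods 2, 3, 5 are 8, 19, 58")
```

The search range of $130$ letters comfortably exceeds the largest expected value ($58$), so the brute-force maximum witnesses the claimed lengths.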
We then prove that the minimum abelian period of any factor of the Fibonacci word is a Fibonacci number; this is an analogue of a result of Currie and Saari \cite{CuSa09} concerning ordinary periods.
These results allow us to give an exact formula for the minimum abelian periods of the finite Fibonacci words. More precisely, we prove that for every $j\geq 3$ the Fibonacci word $f_j$, of length $F_{j}$, has minimum abelian period $F_{\lfloor{j/2}\rfloor}$ if $j \equiv 0, 1, 2 \pmod{4}$ and $F_{1+\lfloor{j/2}\rfloor}$ if $j \equiv 3 \pmod{4}$.
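Under the same brute-force definitions, this closed formula can be sanity-checked for small $j$ (Python sketch; names ours):

```python
from collections import Counter

def parikh(w):
    c = Counter(w)
    return (c["a"], c["b"])

def is_abelian_period(w, m):
    """Abelian decomposition u0 u1 ... u_{j-1} u_j with |u1| = m (at least one full block)."""
    for h in range(m):
        k, t = divmod(len(w) - h, m)
        if k < 1:
            continue
        blocks = [parikh(w[h + i * m:h + (i + 1) * m]) for i in range(k)]
        P = blocks[0]
        if any(b != P for b in blocks):
            continue
        head, tail = parikh(w[:h]), parikh(w[h + k * m:])
        if all(head[i] <= P[i] and tail[i] <= P[i] for i in range(2)):
            return True
    return False

def min_abelian_period(w):
    return next(m for m in range(1, len(w) + 1) if is_abelian_period(w, m))

fib = ["b", "a"]                         # f_0, f_1, then f_j = f_{j-1} f_{j-2}
while len(fib) <= 10:
    fib.append(fib[-1] + fib[-2])
F = [len(x) for x in fib]                # Fibonacci numbers F_j = |f_j|
for j in range(3, 11):
    expected = F[1 + j // 2] if j % 4 == 3 else F[j // 2]
    assert min_abelian_period(fib[j]) == expected
print("minimum abelian periods of f_3, ..., f_10 match the formula")
```

The search always terminates because $m=|w|$ is trivially an abelian period (one block, empty head and tail).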
\medskip
The paper is organized as follows. In Section~\ref{sec:preliminaries} we give the basic definitions and fix the notation. In Section~\ref{sec:Sturmian} we recall needed results on Sturmian words and present connections with the Parikh vectors of their factors. In Section~\ref{sec:ab_pow_and_ab_repet} we give the main results about abelian powers in Sturmian words, while in Section~\ref{sec:approx} we use standard techniques from elementary number theory to study abelian repetitions in Sturmian words and the abelian critical exponent of Sturmian words. In the final Section~\ref{sec:abrepFibo} we deal with the abelian repetitions in the Fibonacci word and abelian periods of its factors.
\section{Preliminaries}\label{sec:preliminaries}
Let $\Sigma=\{a_{1},a_{2},\ldots ,a_{\sigma}\}$ be an ordered alphabet of cardinality $\sigma$, and let $\Sigma^*$ be the set of finite words over $\Sigma$. We let $|w|$ denote the length of the word $w$. The empty word has length $0$ and is denoted by $\epsilon$. We let $w_i$ denote the $i$th letter of $w$ and $w_{i\ldotp\ldotp j}$ with $1 \leq i \leq j \leq |w|$ the factor $w_i w_{i+1} \cdots w_j$ of $w$. We say that the factor $w_{i\ldotp\ldotp j}$ occurs at position $i$ in $w$.
We let $|w|_a$ denote the number of occurrences of the letter $a\in\Sigma$ in the word $w$.
An integer $p>0$ is an (ordinary) period of a word $w$ if $w_i=w_{i+p}$ for all $i$ such that $1\leq i \leq |w|-p$.
The \emph{Parikh vector} of a word $w$, denoted by $\mathcal{P}_{w}$, counts the occurrences of each letter of $\Sigma$ in $w$, i.e., $\mathcal{P}_{w}=(|w|_{a_{1}},\ldots,|w|_{a_{\sigma}})$. Given the Parikh vector $\mathcal{P}_{w}$ of a word $w$, $\mathcal{P}_{w} [i]$ denotes its $i$th component
and $|\mathcal{P}_{w}|$ its norm (the sum of its components). Thus, for a word $w$ and $i$ such that $1\leq i\leq\sigma$, we have $\mathcal{P}_{w} [i]=|w|_{a_i}$ and $|\mathcal{P}_{w}|=\sum_{i=1}^{\sigma}\mathcal{P}_{w}[i]=|w|$. Given two Parikh vectors $\mathcal{P}$ and $\mathcal{Q}$, we say that $\mathcal{P}$ is \emph{contained} in $\mathcal{Q}$ if $|\mathcal{P}| < |\mathcal{Q}|$ and $\mathcal{P}[i]\leq \mathcal{Q}[i]$ for every $i$ such that $1\leq i\leq \sigma$. If the Parikh vector $\mathcal{P}$ is contained in $\mathcal{Q}$, then we simply write $\mathcal{P} \subset \mathcal{Q}$.
Recall from the introduction that an \emph{abelian decomposition} of a word $w$ is a factorization $w=u_0u_1 \cdots u_{j-1}u_j$ where $j \geq 2$, the words $u_1, u_2, \ldots, u_{j-1}$ have the same Parikh vector $\mathcal{P}$ and the Parikh vectors of $u_0$ (the \emph{head}) and $u_j$ (the \emph{tail}) are contained in $\mathcal{P}$. The norm of $\mathcal{P}$ is an \emph{abelian period} of $w$.
The \emph{abelian exponent} of $w$ is the ratio $|w|/\mu_w$ between its length $|w|$ and its minimum abelian period $\mu_w$.
We say that a word $w$ is an \emph{abelian repetition} if it has an abelian decomposition $w=u_0u_1 \cdots u_{j-1}u_j$ with $j\geq 3$.
An \emph{abelian power} is a word for which there exists an abelian decomposition with an empty head and an empty tail. An abelian power $w$ is \emph{degenerated} if $j = 2$ in its abelian decomposition $w = u_0u_1 \cdots u_{j-1}u_j$ with an empty head and an empty tail, that is, if and only if the norm of the Parikh vector of $u_1$ equals $|w|$.
\begin{example}
The word $w=abaababa$ is an abelian repetition of minimum period $2$ and abelian exponent $4$ since $w=a\cdot ba\cdot ab \cdot ab \cdot a$. Notice that $w$ is also an abelian repetition of period $3$ and exponent $8/3$ since $w=\varepsilon \cdot aba\cdot aba \cdot ba$.
If a word $w$ is an abelian power of maximum exponent $k$, then $k$ is not necessarily the abelian exponent of $w$. For
instance, if $w = (baaba)^2$, then $w$ is an abelian power of period $5$ and exponent $2$, but its abelian exponent is
$10/3$ as $w = b \cdot aab \cdot aba \cdot aba \cdot \epsilon$.
\end{example}
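The decompositions in the example are easy to verify programmatically. A minimal sketch (Python; names ours) implements Parikh vectors and abelian periods exactly as defined above, and recovers the stated periods:

```python
from collections import Counter

def parikh(w):
    """Parikh vector over the ordered alphabet (a, b)."""
    c = Counter(w)
    return (c["a"], c["b"])

def is_abelian_period(w, m):
    """Does w admit an abelian decomposition u0 u1 ... u_{j-1} u_j with |u1| = m?"""
    for h in range(m):                   # |u0| < m since its Parikh vector is strictly contained
        k, t = divmod(len(w) - h, m)     # k full middle blocks, tail of length t < m
        if k < 1:
            continue
        blocks = [parikh(w[h + i * m:h + (i + 1) * m]) for i in range(k)]
        P = blocks[0]
        if any(b != P for b in blocks):
            continue
        head, tail = parikh(w[:h]), parikh(w[h + k * m:])
        if all(head[i] <= P[i] and tail[i] <= P[i] for i in range(2)):
            return True
    return False

def min_abelian_period(w):
    return next(m for m in range(1, len(w) + 1) if is_abelian_period(w, m))

print(min_abelian_period("abaababa"))                    # 2, via a . ba . ab . ab . a
w = "baaba" * 2
print(is_abelian_period(w, 5), min_abelian_period(w))    # True 3
```

Since head and tail are shorter than $m$, the strict containment of their Parikh vectors in $\mathcal{P}$ holds automatically once the component-wise inequalities do.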
The following lemma, which is a natural extension of the properties of ordinary periods to the abelian setting, is a straightforward consequence of the definition of the abelian period.
\begin{lemma}\label{lem:ap}
Let $v$ be a factor of a word $w$. Then $\mu_w \geq \mu_v$. On the other hand, if $w$ has an abelian period $m$ such
that $m\leq |v|$, then $m$ is also an abelian period of $v$.
\end{lemma}
\section{Sturmian Words}\label{sec:Sturmian}
From now on, we fix the alphabet $\Sigma=\{\sa{a,b}\}$ and the torus $I=\mathbb{R}/\mathbb{Z}=[0,1)$. Recall that given a real number $\alpha$, $\lfloor \alpha \rfloor$ is the largest integer smaller than or equal to $\alpha$, $\lceil \alpha \rceil$ is the smallest integer greater than or equal to $\alpha$ and $\{\alpha\}=\alpha-\lfloor \alpha \rfloor$ is the fractional part of $\alpha$. Notice that $\{-\alpha\}= 1-\{\alpha\}$. We let $\|\alpha\|$ denote the distance between $\alpha$ and the nearest integer, i.e., $\|\alpha\|=\min(\{\alpha\},\{-\alpha\})$. Observe that $\|\alpha\|=\|-\alpha\|$.
Most of the content present in this section is based on the results from \cite{Mi89} (see also \cite[Chap.~2]{lothaire-book:2002} and \cite[Chap.~6]{Pitheasfogg}).
Let us recall the definition of Sturmian words as codings of a rotation.
Let $\alpha\in I$ be irrational and $\rho\in I$.
The Sturmian word $\underline{s}_{\alpha,\rho}$ (resp.~$\overline{s}_{\alpha,\rho}$) of \emph{angle} $\alpha$ and \emph{initial point} $\rho$ is the infinite word $a_{0}a_{1}a_{2}\cdots$ defined by
$$a_{n} =
\left\{
\begin{array}{ll}
\sa{b} & \mbox{if } \{ \rho + n\alpha \}\in I_{\sa{b}},\\
\sa{a} & \mbox{if } \{ \rho + n\alpha \}\in I_{\sa{a}},
\end{array}
\right.$$
where $I_{\sa{b}}=[0,1-\alpha)$ and $I_{\sa{a}}=[1-\alpha,1)$ (resp.~$I_{\sa{b}}=(0,1-\alpha]$ and $I_{\sa{a}}=(1-\alpha,1]$).
In other words, take the unit circle and consider a point initially in position $\rho$. Then rotate this point on the circle (clockwise) by the angles $\alpha$, $2\alpha$, $3\alpha$, etc., and write consecutively the letters associated with the intervals the rotated points fall into. The infinite sequence of letters obtained is the Sturmian word $\underline{s}_{\alpha,\rho}$ or $\overline{s}_{\alpha,\rho}$, depending on the choice of the intervals $I_{\sa{b}}$ and $I_{\sa{a}}$. See \figurename~\ref{Fig:gab1} for an illustration. Notice that the words $\underline{s}_{\alpha,\rho}$ and $\overline{s}_{\alpha,\rho}$ differ by at most two letters: a single occurrence of $\sa{ba}$ changes into $\sa{ab}$ or vice versa; notice also that the words $\sa{ba}$ and $\sa{ab}$ are abelian equivalent.
Mostly the choice of the intervals $I_{\sa{b}}$ and $I_{\sa{a}}$ is irrelevant. We thus adopt the convention that
$s_{\alpha,\rho}$ stands for either of the Sturmian words $\underline{s}_{\alpha,\rho}$ or
$\overline{s}_{\alpha,\rho}$. When we want to emphasize which choice of intervals is used to obtain the word
$s_{\alpha,\rho}$, we write $s_{\alpha,\rho} = \underline{s}_{\alpha,\rho}$ or
$s_{\alpha,\rho} = \overline{s}_{\alpha,\rho}$. Alternatively, the choice for a fixed Sturmian word is made explicit
by stating whether $0 \in I_{\sa{b}}$ or $0 \notin I_{\sa{b}}$. We later focus on specific subintervals of the torus, and
the choice of the intervals $I_{\sa{b}}$ and $I_{\sa{a}}$ affects the endpoints of these subintervals. We let
$I(\alpha,\beta)$, $\alpha, \beta \in I$, $\alpha < \beta$, stand for the subinterval $[\alpha,\beta)$ if
$0 \in I_{\sa{b}}$ and for $(\alpha,\beta]$ if $0 \notin I_{\sa{b}}$.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.3]{gab1}
\caption{The rotation of the initial point $\rho = \phi-1 \approx 0.618$ by the angle $\alpha = \phi-1$ generating the Fibonacci word $f=s_{\phi-1,\phi-1}=\sa{abaababaabaabab}\cdots$. \label{Fig:gab1}}
\end{figure}
For example, let $\phi=(1+\sqrt 5 )/2\approx 1.618$ be the Golden Ratio, and consider the Sturmian word $f=s_{\phi-1,\phi-1}$, called the \emph{Fibonacci word}. Since the first (approximate) values of the sequence $(\{ (\phi-1) + n(\phi-1) \})_{n\geq 0}$ are $0.618$, $0.236$, $0.854$, $0.472$, $0.090$, $0.708$, $0.326$, $0.944$, $0.562$, $0.180$, $0.798$, $0.416$, $0.034$, and since $1-\alpha=\{-\alpha\}=2-\phi \approx 0.382$, we have $$f=\sa{abaababaabaababaababaabaababaabaab}\cdots.$$ The choice of the intervals $I_{\sa{b}}$ and $I_{\sa{a}}$ is irrelevant here as none of the numbers in the sequence $(\{ (\phi-1) + n(\phi-1) \})_{n\geq 0}$ equals $\{-\alpha\}$ or $0$.
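The coding of the rotation is easy to experiment with. The following Python sketch (our own illustration, not part of the original development; the function name \texttt{sturmian} and the use of floating-point arithmetic, which is adequate for short prefixes, are our choices) reproduces the prefix of the Fibonacci word computed above:

```python
from math import sqrt

PHI = (1 + sqrt(5)) / 2          # Golden Ratio
ALPHA = PHI - 1                  # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b,
    i.e. letter 'b' when {rho + k*alpha} lies in [0, 1-alpha)."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

# The Fibonacci word f = s_{phi-1, phi-1}
print(sturmian(ALPHA, ALPHA, 15))   # abaababaabaabab
```

For irrational angles such as $\phi-1$, the rotated points never hit the interval endpoints, so the floating-point comparison is safe for prefixes of moderate length.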
A Sturmian word for which $\rho=\alpha$, like the Fibonacci word, is called \emph{characteristic}. Notice that $\underline{s}_{\alpha,0}=\sa{b}s_{\alpha,\alpha}$ and $\overline{s}_{\alpha,0}=\sa{a}s_{\alpha,\alpha}$ for every $\alpha$.
An equivalent view is to fix the point and rotate the intervals backwards. The interval $I_{\sa{b}}=I_{\sa{b}}^{0}$ is rotated at each step, so that after $i$ rotations it is transformed into the interval $I_{\sa{b}}^{-i}=I(\{-i\alpha\},\{-(i+1)\alpha\})$, while $I_{\sa{a}}^{-i}=I\setminus I_{\sa{b}}^{-i}$. See \figurename~\ref{Fig:gab2} for an illustration.
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{gab2}
\caption{The interval $I_{\sa{b}}=I_{\sa{b}}^{0}=I(0,1-\alpha)$ is rotated at each step, defining the intervals $I_{\sa{b}}^{-i}=I(\{-i\alpha\},\{-(i+1)\alpha\})$ (light gray). For every $i$, the complement of the interval $I_{\sa{b}}^{-i}$ is the interval $I_{\sa{a}}^{-i}$ (dark gray). In the figure, $\alpha=\phi-1\approx 0.618$. The Fibonacci word $f$ can be obtained by looking at the horizontal line of height $\rho=\alpha$. The factor of length $15$ starting at position $9$ of the Fibonacci word $f$, that is $\sa{baababaababaaba}$, can be obtained by looking at the horizontal line of height $\{\rho+9\alpha\}$ (Proposition \ref{prima}).\label{Fig:gab2}}
\end{figure}
This representation is convenient since one can read within it not only a Sturmian word but also any of its factors. More precisely, for every positive integer $m$, the factor of length $m$ of $s_{\alpha,\rho}$ starting at position $n$ is determined only by the value of $\{\rho+n\alpha\}$, as shown in the following proposition.
\begin{proposition}\label{prima}
Let $s_{\alpha,\rho}=a_{0}a_{1}a_{2}\cdots$ be a Sturmian word. Then, for every $n$ and $i$, we have:
$$a_{n+i} = \left\{ \begin{array}{lllll}
\sa{b} & \mbox{if $\{\rho + n\alpha\}\in I_{\sa{b}}^{-i}$;}\\
\sa{a} & \mbox{if $\{\rho + n\alpha\}\in I_{\sa{a}}^{-i}$.}
\end{array} \right.$$
\end{proposition}
For example, suppose we want to know the factor of length 15 starting at position 9 in the Fibonacci word $f$. We have $\{\rho+9\alpha\}=\{\phi-1+9(\phi-1)\}\approx 0.180$. The first terms of the sequence $(\{-i\alpha\})_{i\geq 0}$ are, approximately, $0$, $0.382$, $0.764$, $0.146$, $0.528$, $0.910$, $0.292$, $0.674$, $0.056$, $0.438$, $0.820$, $0.202$, $0.584$, $0.966$, $0.348$, $0.729$. So we get $a_{9}a_{10}\cdots a_{23}=\sa{baababaababaaba}$ (see \figurename~\ref{Fig:gab2}).
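This computation can be checked mechanically: by Proposition \ref{prima}, the factor of length $15$ at position $9$ coincides with the length-$15$ prefix of the Sturmian word whose initial point is $\{\rho+9\alpha\}$. A minimal Python sketch (our own illustration; the coding-of-rotation function and its name are our choices):

```python
from math import sqrt

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

f = sturmian(ALPHA, ALPHA, 24)
factor = f[9:24]                     # factor of length 15 at position 9
phase = (ALPHA + 9 * ALPHA) % 1.0    # {rho + 9*alpha}, approx 0.180
# Same factor, read as a prefix of the word with initial point {rho+9*alpha}
assert factor == sturmian(ALPHA, phase, 15) == 'baababaababaaba'
```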
A remarkable consequence of Proposition \ref{prima} is the following: Given a Sturmian word $s_{\alpha,\rho}$ and a positive integer $m$, the $m+1$ different factors of $s_{\alpha,\rho}$ of length $m$ are completely determined by the intervals $I_{\sa{b}}^{0}, I_{\sa{b}}^{-1},\ldots, I_{\sa{b}}^{-(m-1)}$, that is, only by the points $\{-i\alpha\}$ for $0\leq i< m$. In particular, they do not depend on the initial point $\rho$, so the set of factors of $s_{\alpha,\rho}$ is the same as the set of factors of $s_{\alpha,\rho'}$ for any $\rho$ and $\rho'$.
Hence, from now on, we let $s_{\alpha}$ denote any Sturmian word of angle $\alpha$.
If we arrange the $m+2$ points $0,1,\{-\alpha\},\{-2\alpha\},\ldots,\{-m \alpha\}$ in increasing order, we determine a partition of $I$ into $m+1$ half-open subintervals $L_0(m),L_{1}(m),\ldots,L_{m}(m)$. Each of these subintervals is in bijection with a factor of length $m$ of any Sturmian word of angle $\alpha$ (see \figurename~\ref{Fig:gab3}).
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.6]{gab3}
\caption{
The points $0$, $1$, $\{-\alpha\}$, $\{-2\alpha\}$, $\{-3\alpha\}$, $\{-4\alpha\}$, $\{-5\alpha\}$ and $\{-6\alpha\}$
arranged in increasing order. This gives the intervals $L_{0}(6)\approx I(0,0.146)$, $L_{1}(6)\approx I(0.146,0.292)$, $L_{2}(6)\approx I(0.292,0.382)$, $L_{3}(6)\approx I(0.382,0.528)$, $L_{4}(6)\approx I(0.528,0.764)$, $L_{5}(6)\approx I(0.764,0.910)$ and $L_{6}(6)\approx I(0.910,1)$. These intervals are associated respectively with the factors
$\sa{babaab}$, $\sa{baabab}$, $\sa{baabaa}$, $\sa{ababaa}$, $\sa{abaaba}$, $\sa{aababa}$ and $\sa{aabaab}$ of length $6$ of the Fibonacci word. \label{Fig:gab3}}
\end{figure}
Moreover, the factors associated with these intervals are lexicographically ordered (decreasingly), as stated in the following proposition.
\begin{proposition}\label{Pro:otherbranch}
Let $m\geq 1$ and $0\le j,k\le m$. Then the factor
associated with the interval $L_j(m)$ is lexicographically greater than the factor
associated with the interval $L_k(m)$ if and only if $j<k$.
\end{proposition}
\begin{proof}
We prove the statement by induction on $m$. The case $m=1$ is true by the definition of the two subintervals $L_{0}(1)=I_{\sa{b}}$ and $L_{1}(1)=I_{\sa{a}}$.
Suppose now that the statement holds for some $m$ such that $m\ge 1$ and let us show that it holds for $m+1$.
The sequence of subintervals $L_{i}(m+1)$ is obtained from the sequence $L_{i}(m)$ by subdividing into two intervals the interval $L_{t}(m)$ containing the point $\{-(m+1)\alpha\}$.
Therefore, each of the $m+1$ factors of length $m$ is extended to the right by one letter in a unique way, except for the factor $t$ associated with the interval $L_{t}(m)$, which is extended both to $t\sa{b}$, associated with the interval $L_{t}(m+1)$, and to $t\sa{a}$, associated with the interval $L_{t+1}(m+1)$. Thus, for these two factors the statement holds.
For the factors of length $m+1$ different from $t\sa{b}$ and $t\sa{a}$ we can apply the induction hypothesis.
\end{proof}
The result of Proposition \ref{Pro:otherbranch} is of independent interest and is related to some recent research on Sturmian words and the lexicographic order (see \cite{Bucci201225,Glen200845,JeZa04,Perrin2012265}).
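Proposition \ref{Pro:otherbranch} can also be checked numerically: if we pick, for each factor of length $m$, the phase $\{\rho+n\alpha\}$ of one of its occurrences, then sorting the factors by phase must list them in lexicographically decreasing order. A Python sketch (our own illustration; the prefix length $200$ is simply a safe bound guaranteeing that all $m+1$ factors occur):

```python
from math import sqrt

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

f = sturmian(ALPHA, ALPHA, 200)
m = 6
# One representative phase {rho + n*alpha} per factor of length m; all
# occurrences of a factor have phases in the same interval L_j(m).
phase = {}
for n in range(len(f) - m + 1):
    phase.setdefault(f[n:n + m], (ALPHA + n * ALPHA) % 1.0)

by_phase = sorted(phase, key=phase.get)          # interval order L_0, L_1, ...
assert len(by_phase) == m + 1                    # m+1 factors of length m
assert by_phase == sorted(phase, reverse=True)   # lexicographically decreasing
```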
We now present properties of the bijection between the factors of length $m$ and the intervals $L_{i}(m)$ that allow us to use standard Number Theory techniques to deal with abelian repetitions in Sturmian words and, in particular, in the Fibonacci word.
Recall that a factor of length $m$ of a Sturmian word $s_{\alpha}$ has a Parikh vector equal either to $(\lfloor m\alpha \rfloor , m-\lfloor m\alpha \rfloor )$ (in which case it is called \emph{light}) or to $(\lceil m\alpha \rceil , m-\lceil m\alpha \rceil) $ (in which case it is called \emph{heavy}).
The following proposition relates the intervals $L_{i}(m)$ to the Parikh vectors of the associated factors; this result appears as a part of \cite[Theorem~19]{Rigo13}.
\begin{proposition}\label{pro:main}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$ and $m$ be a positive integer.
Let $t_{i}$ be the factor of length $m$ associated with the interval $L_{i}(m)$. Then $t_{i}$ is
heavy if $L_{i}(m)\subset I(\{-m\alpha\},1)$, while it is light if $L_{i}(m)\subset I(0,\{-m\alpha\})$.
Moreover, if $\{-m\alpha\}\geq \{-\alpha\}$, then all heavy factors start and end with $\sa{a}$, while if $\{-m\alpha\} \leq \{-\alpha\}$, then all light factors start and end with $\sa{b}$.
\end{proposition}
\begin{proof}
We prove the statement by induction on $m$. The case $m=1$ is true by definition. Suppose the
statement true for $m$, and let us prove it for $m+1$. We have two cases:
\begin{enumerate}
\item $\{-(m+1)\alpha\} < \{-m\alpha\}$;
\item $\{-(m+1)\alpha\} > \{-m\alpha\}$.
\end{enumerate}
Case 1. By induction, the factors of length $m$ corresponding to intervals above $\{-m\alpha\}$ are heavy and the others are light. Hence, the factors of length $m+1$ corresponding to intervals above $\{-(m+1)\alpha\}$ are either heavy factors of length $m$ extended with $\sa{b}$ or light factors of length $m$ extended with $\sa{a}$, while the factors of length $m+1$ corresponding to intervals below $\{-(m+1)\alpha\}$ are light factors of length $m$ extended with $\sa{b}$. Therefore, the former are the heavy factors of length $m+1$, while the latter are the light ones.
Case 2. In this case, the factors of length $m+1$ corresponding to intervals above $\{-(m+1)\alpha\}$ are heavy factors of length $m$ extended with $\sa{a}$, so they must be heavy factors of length $m+1$. The factors of length $m+1$ corresponding to intervals below $\{-(m+1)\alpha\}$ have another Parikh vector, so they are the light ones.
The second part of the statement follows directly from the very definition of Sturmian words as rotation words.
\end{proof}
\begin{example}
Let $\alpha=\phi-1\approx 0.618$ and $m=6$. We have $6\alpha\approx 3.708$, so $\{-6\alpha\}\approx 0.292$. It is evident from \figurename~\ref{Fig:gab3} that the factors of length $6$
corresponding to intervals above (resp.~below) $\{-6\alpha\}\approx 0.292$ all have Parikh vector $(4,2)$ (resp.~$(3,3)$). That is, the intervals $L_{0}$ and $L_{1}$ are associated with the light factors (\sa{babaab}, \sa{baabab}), while the intervals from $L_{2}$ to $L_{6}$ are associated with the heavy factors (\sa{baabaa}, \sa{ababaa}, \sa{abaaba}, \sa{aababa}, \sa{aabaab}). Notice that every light factor starts and ends with $\sa{b}$ since $\{-6\alpha\}<\{-\alpha\}$.
\end{example}
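The light/heavy classification of the example can be reproduced mechanically; a Python sketch (our own illustration, with a straightforward coding-of-rotation function):

```python
from math import sqrt, floor, ceil

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

def parikh(w):
    """Parikh vector (number of a's, number of b's)."""
    return (w.count('a'), w.count('b'))

f = sturmian(ALPHA, ALPHA, 200)
m = 6
factors = {f[n:n + m] for n in range(len(f) - m + 1)}
light = {w for w in factors if parikh(w) == (floor(m * ALPHA), m - floor(m * ALPHA))}
heavy = {w for w in factors if parikh(w) == (ceil(m * ALPHA), m - ceil(m * ALPHA))}

assert light == {'babaab', 'baabab'}
assert len(heavy) == 5
# Since {-6*alpha} < {-alpha}, every light factor starts and ends with b
assert all(w[0] == w[-1] == 'b' for w in light)
```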
From Propositions \ref{prima} and \ref{pro:main}, we derive the following.
\begin{corollary}\label{cor:hl}
Let $s_{\alpha,\rho}$ be a Sturmian word. Then for every positive integer $m$ and every $n\geq 0$, the factor of length $m$ occurring in $s_{\alpha,\rho}$ at position $n$ is heavy if $\{\rho+n\alpha\}>\{-m\alpha\}$, while it is light if $\{\rho+n\alpha\}<\{-m\alpha\}$.
\end{corollary}
\section{Abelian Powers in Sturmian Words}\label{sec:ab_pow_and_ab_repet}
The results of the previous section can be used to give tight bounds on the lengths of abelian powers in Sturmian words.
The next results extend to the abelian setting analogous results obtained for ordinary powers in \cite{Mi89}.
First, observe that an abelian power in a Sturmian word is a concatenation of factors of the same length having equal Parikh vectors, that is, these factors are all heavy or all light.
The next proposition follows directly from Propositions \ref{prima} and \ref{pro:main}.
\begin{proposition}\label{pro:int}
Let $s_{\alpha,\rho}=a_{0}a_{1}a_{2}\cdots$ be a Sturmian word of angle $\alpha$. Then the factor $a_{n}\cdots a_{n+m-1}\cdots a_{n+km-1}$ is an abelian power of period $m$ and exponent
$k\geq 2$ starting at position $n$ if and only if the $k$ points $\{\rho+(n+im)\alpha\}$, $0\leq i \leq k-1$, are all either in the interval $I(0,\{-m\alpha\})$ or in the interval $I(\{-m\alpha\},1)$.
\end{proposition}
In fact, we can state something more precise.
\begin{lemma}\label{lem:ordered}
The $k$ points $\{\rho+(n+im)\alpha\}$, $0\leq i \leq k-1$, of Proposition \ref{pro:int} are naturally ordered. That is to say:
\begin{itemize}
\item if $\{m\alpha\} < 1/2$, then they are all in the subinterval
$I(0, \{-m\alpha\})$ and $$\{\rho+n\alpha\}<\{\rho+(n+m)\alpha\}< \cdots < \{\rho+(n+(k-1)m)\alpha\};$$
\item if $\{m\alpha\}> 1/2$, then they are all in the interval $I(\{-m\alpha\},1)$
and $$\{\rho+n\alpha\} > \{\rho+(n+m)\alpha\}> \cdots > \{\rho+(n+(k-1)m)\alpha\}.$$
\end{itemize}
\end{lemma}
\begin{proof}
We prove only the first part; the second part is similar. Recall that $k \geq 2$. If
$\{\rho+n\alpha\} \in I(\{-m\alpha\},1)$, then $\{\rho+(n+m)\alpha\} \in I(0,\{m\alpha\}) \subseteq I(0,\{-m\alpha\})$ because
$\{m\alpha\} < 1/2$, so the points cannot all lie in $I(\{-m\alpha\},1)$. Thus, by Proposition \ref{pro:int} we conclude that $\{\rho+n\alpha\} \in I(0,\{-m\alpha\})$.
Therefore, by assumption the points
$\{\rho+(n+im)\alpha\}$, $0\leq i \leq k-1$, are in $I(0,\{-m\alpha\})$. Let then $i < k-1$. Since
$\{\rho+(n+im)\alpha\} < \{-m\alpha\}$, it follows that $\{\rho+(n+im)\alpha\} + \{m\alpha\} < 1$, so
$\{\rho+(n+(i+1)m)\alpha\} = \{\rho+(n+im)\alpha\} + \{m\alpha\} > \{\rho+(n+im)\alpha\}$. The conclusion follows.
\end{proof}
\begin{example}
Let $f=a_{0}a_{1}a_{2}\cdots$ be the Fibonacci word. The factor
\[
a_{9}a_{10}\cdots a_{23}=\sa{baababaababaaba}
\]
is an abelian power of period $m=5$ and exponent $k=3$ (it is also an ordinary power, but this is irrelevant here). The sequence $\{\rho+9\alpha\}$, $\{\rho+14\alpha\}$, $\{\rho+19\alpha\}$, i.e., the sequence $\{10(\phi-1)\}$, $\{15(\phi-1)\}$, $\{20(\phi-1)\}$, is (approximately) equal to the sequence $0.180$, $0.271$, $0.361$. This sequence is increasing---agreeing with Lemma \ref{lem:ordered}, since $\{m\alpha\}=\{5(\phi-1)\}\approx 0.090<0.5$---and is contained in the interval $I(0,\{-m\alpha\})\approx I(0,0.910)$.
\end{example}
\begin{remark}
Notice that all the Sturmian words with the same rotation angle $\alpha$ have the same abelian powers and the same abelian repetitions---of course starting at different positions, depending on the value of the initial point $\rho$.
\end{remark}
In the following theorem we characterize the positions of occurrence of the abelian powers having given period and exponent.
For most positions the case \emph{(i)} of the next theorem applies, but due to the choice involved in coding the points $0$
and $1-\alpha$, the special points $\{\{-rm\alpha\}\colon r \geq 0\}$ require specific attention.
\begin{theorem}\label{the:maingab}
Let $s_{\alpha,\rho} = a_0 a_1 a_2 \cdots$ be a Sturmian word of angle $\alpha$. Consider the factor
$w = a_n \cdots a_{n+m-1} \cdots a_{n+km-1}$ starting at position $n$ of $s_{\alpha,\rho}$.
\begin{enumerate}[(i)]
\item If $\{\rho + n\alpha\} \notin \{\{-rm\alpha\}\colon r \geq 0\}$, then the factor $w$ is an abelian power of
period $m$ and exponent $k \geq 2$ if and only if $\{\rho+n\alpha\} < 1-k\{m\alpha\}$ (if
$\{m\alpha\} < 1/2$) or $\{\rho+n\alpha\} > k\{-m\alpha\}$ (if $\{m\alpha\} > 1/2$).
\item If $\{\rho + n\alpha\} = 0$, then the factor $w$ is an abelian power of period $m$ and exponent $k \geq 2$
if and only if $0 \in I_{\sa{b}}$ and $k\{m\alpha\} < 1$ (if $\{m\alpha\} < 1/2$) or $0 \notin I_{\sa{b}}$
and $k\{-m\alpha\} < 1$ (if $\{m\alpha\} > 1/2$).
\item If $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$, then the factor $w$ is an abelian power of period
$m$ and exponent $k$ such that $2 \leq k < r$ if and only if $\{\rho+n\alpha\} < 1-k\{m\alpha\}$ (if
$\{m\alpha\} < 1/2$) or $\{\rho+n\alpha\} > k\{-m\alpha\}$ (if $\{m\alpha\} > 1/2$).
\item If $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$, then the factor $w$ is an abelian power of period
$m$ and exponent $k$ such that $k \geq r \geq 2$ if and only if $0 \notin I_{\sa{b}}$ and
$\{\rho+n\alpha\} < 1-k\{m\alpha\}$ (if $\{m\alpha\} < 1/2$) or $0 \in I_{\sa{b}}$ and
$\{\rho+n\alpha\} > k\{-m\alpha\}$ (if $\{m\alpha\} > 1/2$).
\end{enumerate}
\end{theorem}
\begin{proof}
\emph{(i)} Suppose that $\{\rho + n\alpha\} \notin \{\{-rm\alpha\}\colon r \geq 0\}$ and $\{m\alpha\} < 1/2$ (the
case $\{m\alpha\} > 1/2$ is analogous). Suppose that the factor $w$ is an abelian power of period $m$ and exponent
$k \geq 2$. Since $k \geq 2$, all of the points $\{\rho + (n + im)\alpha\}$, $0 \leq i \leq k - 1$, are by Lemma
\ref{lem:ordered} naturally ordered in the interval $I(0,\{-m\alpha\})$. Moreover, these points are all interior
points of the interval $I(0,\{-m\alpha\})$, so the coding is unambiguous. The distance between any two consecutive
such points is $\{m\alpha\}$. Therefore, $\{\rho + n\alpha\} + (k-1)\{m\alpha\}$ must be smaller than the length of
the interval $I(0,\{-m\alpha\})$, which is equal to $\{-m\alpha\} = 1 - \{m\alpha\}$. From this we derive that
$\{\rho+n\alpha\} < 1 - k\{m\alpha\}$.
Conversely, if $\{\rho+n\alpha\} < 1 - k\{m\alpha\}$ for $k \geq 2$, then surely the points
$\{\rho + (n + im)\alpha\}$, $0 \leq i \leq k - 1$, are all interior points of the interval $I(0,\{-m\alpha\})$, so
$w$ is indeed an abelian power of period $m$ and exponent $k$ by Proposition \ref{pro:int}.
\emph{(ii)} Assume that $\{\rho + n\alpha\} = 0$ and $\{m\alpha\} < 1/2$ (the case $\{m\alpha\} > 1/2$ is analogous).
Suppose that the factor $w$ is an abelian power of period $m$ and exponent $k \geq 2$. Like above in the case
\emph{(i)}, all of the points $\{\rho + (n + im)\alpha\}$, $0 \leq i \leq k - 1$, are naturally ordered in
the interval $I(0,\{-m\alpha\})$. Therefore, $0 \in I_{\sa{b}}$. Proceeding as above, we see that
$(k-1)\{m\alpha\} < \{-m\alpha\}$, that is, $k\{m\alpha\} < 1$. The converse is easily seen to hold.
\emph{(iii)} This case reduces directly to the case \emph{(i)}, since none of the points $\{\rho + (n+im)\alpha\}$,
$0 \leq i \leq k - 1$, equals either of the two problematic points $0$ and $1-\alpha$, whose codings depend on the
choice of the intervals $I_{\sa{a}}$ and $I_{\sa{b}}$.
\emph{(iv)} Assume that $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$. Suppose moreover that
$\{m\alpha\} < 1/2$; the case $\{m\alpha\} > 1/2$ is analogous. Assume first that the factor $w$ is an abelian power
of period $m$ and exponent $k$ such that $k \geq r \geq 2$. Again, the points $\{\rho + (n + im)\alpha\}$,
$0 \leq i \leq k - 1$, are naturally ordered in the interval $I(0,\{-m\alpha\})$. Thus,
$\{\rho+(n+(r-1)m)\alpha\} = \{-m\alpha\} \in I(0,\{-m\alpha\})$, that is to say, $0 \notin I_{\sa{b}}$. Proceeding
exactly as in the case \emph{(i)}, we see that $\{\rho+n\alpha\} < 1-k\{m\alpha\}$.
Conversely, if $0 \notin I_{\sa{b}}$ and $\{\rho+n\alpha\} < 1-k\{m\alpha\}$ with $k \geq r \geq 2$, then again
$w$ is an abelian power of period $m$ and exponent $k$ by Proposition \ref{pro:int}.
\end{proof}
\begin{example}
An abelian power of period $2$ and exponent $4$ occurs in the Fibonacci word at every position $n$ such that $\{(n+1)(\phi-1)\}<1-4\{2(\phi-1)\}\approx 0.056$. The first such $n$ are $12$, $33$, $46$, $67$ and $88$.
An abelian power of period $3$ and exponent $6$ occurs in the Fibonacci word at every position $n$ such that $\{(n+1)(\phi-1)\}>6\{-3(\phi-1)\}\approx 0.875$. The first such $n$ are $7$, $15$, $20$, $28$, $41$, $49$, $54$, $62$ and $70$.
\end{example}
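The position criterion of Theorem \ref{the:maingab}\emph{(i)} is easy to test against a brute-force check of Parikh vectors. A Python sketch (our own illustration; note that for the Fibonacci word $\rho=\alpha$, so $\{\rho+n\alpha\}=\{(n+1)\alpha\}$, and the exceptional points of cases \emph{(ii)}--\emph{(iv)} never occur):

```python
from math import sqrt

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

def parikh(w):
    return (w.count('a'), w.count('b'))

def is_abelian_power(w, m, k):
    """True if w = u_1 ... u_k with |u_i| = m and equal Parikh vectors."""
    blocks = [w[i * m:(i + 1) * m] for i in range(k)]
    return all(parikh(b) == parikh(blocks[0]) for b in blocks)

f = sturmian(ALPHA, ALPHA, 120)
m, k = 2, 4
brute = [n for n in range(100) if is_abelian_power(f[n:n + m * k], m, k)]
# Theorem (i), case {m*alpha} < 1/2: power at n iff {(n+1)*alpha} < 1 - k*{m*alpha}
crit = [n for n in range(100)
        if ((n + 1) * ALPHA) % 1.0 < 1 - k * ((m * ALPHA) % 1.0)]
assert brute == crit
assert brute[:5] == [12, 33, 46, 67, 88]
```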
Theorem \ref{the:maingab} allows us to effortlessly characterize the maximum exponent of an abelian power of period $m$.
\begin{theorem}\label{the:main1}
Let $s_\alpha$ be a Sturmian word of angle $\alpha$ and $m$ be a positive integer. Then $s_\alpha$ contains an
abelian power of period $m$ and exponent $k \geq 2$ if and only if $\|m\alpha\| < \frac{1}{k}$. In particular,
the maximum exponent $k_m$ of an abelian power of period $m$ in $s_\alpha$ is the largest integer $k$ such that
$\|m\alpha\|< \frac{1}{k}$, i.e.,
\begin{equation*}
k_{m}=\left \lfloor \frac{1}{ \|m\alpha\| } \right \rfloor.
\end{equation*}
\end{theorem}
\begin{proof}
Keeping the period $m$ fixed, it is evident from Theorem \ref{the:maingab} that in order to maximize the
exponent, we can consider the prefixes of the Sturmian words $\underline{s}_{\alpha,0}$ and
$\overline{s}_{\alpha,0}$. If $\{m\alpha\} < 1/2$, then the word
$\underline{s}_{\alpha,0} = \sa{b}s_{\alpha,\alpha}$ has an abelian power of period $m$ and maximum exponent
$\left\lfloor 1/\|m\alpha\| \right\rfloor$ as a prefix, while if $\{m\alpha\} > 1/2$, then the word
$\overline{s}_{\alpha,0} = \sa{a}s_{\alpha,\alpha}$ starts with an abelian power of period $m$ and maximum exponent
$\left\lfloor 1/\|m\alpha\| \right\rfloor$.
\end{proof}
\begin{example}
In Table \ref{tab:fici2} we give the first values of the sequence $k_{m}$ for the Fibonacci word $f$. We have $k_{2}=4$, since $\{2(\phi-1)\}\approx 0.236$, so the largest $k$ such that $\{2(\phi-1)\}< 1/k$ is $4$. Indeed, $\sa{babaabab}$ is an abelian power of period $2$ and exponent $4$, and the reader can verify that no factor of $f$ of length $10$ is an abelian power of period $2$.
For $m=3$, since $\{-3(\phi-1)\}\approx 0.146$, the largest $k$ such that $\{-3(\phi-1)\}< 1/k$ is $6$. Indeed, $\sa{aabaababaabaababaa}$ is an abelian power of period $3$ and exponent $6$, and the reader can verify that no factor of $f$ of length $21$ is an abelian power of period $3$.
\end{example}
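Theorem \ref{the:main1} makes the sequence $k_m$ of Table \ref{tab:fici2} directly computable. A Python sketch (our own illustration) reproducing the first values for $\alpha=\phi-1$:

```python
from math import sqrt, floor

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def dist_to_nearest_int(x):
    """||x||: distance from x to the nearest integer."""
    frac = x % 1.0
    return min(frac, 1.0 - frac)

# k_m = floor(1 / ||m*alpha||)
k = [floor(1 / dist_to_nearest_int(m * ALPHA)) for m in range(1, 22)]
assert k == [2, 4, 6, 2, 11, 3, 3, 17, 2, 5, 4, 2, 29, 2, 3, 8, 2, 8, 3, 2, 46]
```

The large values at $m=1,2,3,5,8,13,21$ correspond to the Fibonacci numbers, for which $\|m\alpha\|$ is exceptionally small.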
\begin{table}
\centering
\begin{small}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{3.1mm}}c}}
$m$\hspace{2mm} & \textbf{1} & \textbf{2} & \textbf{3} & 4 & \textbf{5} & 6 & 7 & \textbf{8} & 9 & 10 & 11 & 12 & \textbf{13} & 14 & 15 & 16 & 17 & 18 & 19 & 20 & \textbf{21}
\\
\hline \\
$k_{m}$\hspace{2mm} & \textbf{2} & \textbf{4} & \textbf{6} & 2 & \textbf{11} & 3 & 3 & \textbf{17} & 2 & 5 & 4 & 2 & \textbf{29} & 2 & 3 & 8 & 2 & 8 & 3 & 2 & \textbf{46}
\\
\hline \rule[0pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:fici2} The first values of the maximum exponent $k_{m}$ of an abelian power of period $m$ in the Fibonacci word $f$. The values corresponding to the Fibonacci numbers are in bold.}
\end{small}
\end{table}
Next we consider the maximum exponent of an abelian power of given period and location. Again, save for the
exceptional points $\{\{-rm\alpha\}\colon r \geq 0\}$, the first formula of the next corollary suffices.
\begin{corollary}\label{cor:gab2}
Let $s_{\alpha,\rho}=a_0 a_1 a_2 \cdots$ be a Sturmian word of angle $\alpha$,
\begin{equation*}
A = \left\lfloor \frac{\{-\rho-n\alpha\}}{\{m\alpha\}} \right\rfloor \ \text{ and } \
B = \left\lfloor \frac{\{\rho+n\alpha\}}{\{-m\alpha\}} \right\rfloor.
\end{equation*}
Consider the maximum exponent $k_{m,n}$ of a (possibly degenerate) abelian
power of period $m$ starting at position $n$ in $s_{\alpha,\rho}$.
\begin{enumerate}[(i)]
\item If $\{\rho + n\alpha\} \notin \{\{-rm\alpha\}\colon r \geq 0\}$, then
\begin{equation*}
k_{m,n} = \max\left( A,B \right).
\end{equation*}
\item If $\{\rho + n\alpha\} = 0$, then
\begin{equation*}
k_{m,n} = \begin{cases}
\left\lfloor 1/\{m\alpha\} \right\rfloor, &\text{ if $0 \in I_{\sa{b}}$,} \\
\left\lfloor 1/\{-m\alpha\} \right\rfloor, &\text{ if $0 \notin I_{\sa{b}}$.}
\end{cases}
\end{equation*}
\item If $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$ and $r > \max(A,B)$, then
\begin{equation*}
k_{m,n} = \max\left( A,B \right).
\end{equation*}
\item If $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$ and $r \leq \max(A,B)$, then
\begin{equation*}
k_{m,n} = \max\left( A - \gamma, B + \gamma - 1 \right),
\end{equation*}
where
\begin{equation*}
\gamma = \begin{cases}
1, &\text{ if $0 \in I_{\sa{b}}$,} \\
0, &\text{ if $0 \notin I_{\sa{b}}$.}
\end{cases}
\end{equation*}
\end{enumerate}
\end{corollary}
\begin{proof}
The formulas follow directly from Theorem \ref{the:maingab}. Observe that if $\{\rho + n\alpha\} \neq 0$ and
$\{\rho + n\alpha\} \neq \{-m\alpha\}$, then $A \geq 1$ if and only if $B = 0$. We show here how the case \emph{(iv)}
is handled.
Suppose that $\{\rho + n\alpha\} = \{-rm\alpha\}$ for some $r > 0$ and $r \leq \max(A,B)$. Assume that
$\{m\alpha\} < 1/2$. By Theorem \ref{the:maingab} \emph{(iv)} there is an abelian power of period $m$ and maximum
exponent $A \geq 2$ starting at position $n$ of $s_{\alpha,\rho}$ provided that $0 \notin I_{\sa{b}}$. If
$0 \in I_{\sa{b}}$, then there is an abelian power of period $m$ and maximum exponent $A-1$ starting at
position $n$ because the change of coding affects the Parikh vector of the factor of length $m$ starting at position
$n+(A-1)m$. Therefore, $k_{m,n} = A - \gamma$ if $A > 1$. Notice that in this case $B + 1 - \gamma \leq 1$. If $A = 1$,
then $r = 1$ by assumption, so $B = 1$. Since $A = 1$, the Parikh vectors of the factors of length $m$ starting at
positions $n$ and $n+m$ are different when $0 \notin I_{\sa{b}}$. Since $\{\rho+(n+m)\alpha\} = 0$, the Parikh vectors
of the factors of length $m$ starting at positions $n$ and $n+m$ are also different when $0 \in I_{\sa{b}}$. Therefore,
$k_{m,n} = 1 = \max\left( A - \gamma, B + \gamma - 1 \right)$. The case $\{m\alpha\} > 1/2$ is similar.
\end{proof}
Hence, given a Sturmian word $s_{\alpha,\rho}$ we can compute the maximum length of an abelian power of any period $m$ starting at any position $n$ in $s_{\alpha,\rho}$. This length is precisely $m\cdot k_{m,n}$.
Corollary \ref{cor:gab2} implies the following result of Richomme, Saari and Zamboni \cite{Richomme201179}.
\begin{proposition}
Let $s_\alpha$ be a Sturmian word of angle $\alpha$. For all $n \geq 0$ and $k \geq 1$ there is an abelian $k$-power
starting at position $n$ of $s_\alpha$.
\end{proposition}
\begin{proof}
By the well-known Kronecker Approximation Theorem (see, for instance, \cite[Chap.~XXIII]{Hardy_and_Wright}), the sequence $(\{m\alpha\})_{m \geq 0}$ is dense in $I$, so the quantity $\|m\alpha\|$ can be made arbitrarily small. The claim
then follows from Corollary \ref{cor:gab2}.
\end{proof}
\begin{example}
For the Fibonacci word, the first values of the sequences $k_{3,n}$ and $k_{10,n}$ are given in Table \ref{tab:kmn}. For $n=0$ we have $k_{3,0}=4$, so the longest abelian power of period $3$ starting at position $0$ has exponent $4$; for $n=1$ we have $k_{3,1}=1$, so there are no proper abelian powers of period $3$ starting at position $1$; for $n=2$ we have $k_{3,2}=5$, so the longest abelian power of period $3$ starting at position $2$ has exponent $5$; etc.
\end{example}
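The sequence $k_{3,n}$ of Table \ref{tab:kmn} can be recomputed either by brute force or via the formula of Corollary \ref{cor:gab2}\emph{(i)} (for the Fibonacci word, only case \emph{(i)} occurs). A Python sketch (our own illustration):

```python
from math import sqrt, floor

ALPHA = (1 + sqrt(5)) / 2 - 1    # angle of the Fibonacci word

def sturmian(alpha, rho, n):
    """First n letters of s_{alpha,rho}, with the convention 0 in I_b."""
    return ''.join('b' if (rho + k * alpha) % 1.0 < 1 - alpha else 'a'
                   for k in range(n))

def parikh(w):
    return (w.count('a'), w.count('b'))

def max_abelian_exponent(f, m, n):
    """Largest k such that f[n:n+k*m] is a (possibly degenerate) abelian power."""
    p0 = parikh(f[n:n + m])
    k = 1
    while n + (k + 1) * m <= len(f) and parikh(f[n + k * m:n + (k + 1) * m]) == p0:
        k += 1
    return k

f = sturmian(ALPHA, ALPHA, 400)
m = 3
brute = [max_abelian_exponent(f, m, n) for n in range(21)]
# Corollary (i): k_{m,n} = max(A, B), with rho = alpha for the Fibonacci word
formula = [max(floor((-(n + 1) * ALPHA) % 1.0 / ((m * ALPHA) % 1.0)),
               floor(((n + 1) * ALPHA) % 1.0 / ((-m * ALPHA) % 1.0)))
           for n in range(21)]
assert brute == formula
assert brute == [4, 1, 5, 3, 1, 4, 2, 6, 3, 1, 5, 2, 1, 4, 1, 6, 3, 1, 5, 2, 6]
```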
\begin{table}
\centering
\begin{small}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{3.1mm}}c}}
$n$\hspace{2mm} &0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20
\\
\hline \\
$k_{3,n}$\hspace{2mm} &4& 1& 5& 3& 1& 4& 2& 6& 3& 1& 5& 2& 1& 4& 1& 6& 3& 1& 5& 2& 6
\\
\hline \\
$k_{10,n}$\hspace{2mm} &2& 4& 1& 2& 5& 1& 3& 1& 2& 4& 1& 3& 5& 1& 4& 1& 2& 4& 1& 3& 1
\\
\hline \rule[0pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:kmn} The first values of the maximum exponent $k_{3,n}$ of a (possibly degenerate) abelian power of period $3$ starting at position $n$ and of the maximum exponent $k_{10,n}$ of a (possibly degenerate) abelian power of period $10$ starting at position $n$ in the Fibonacci word $f=s_{\phi-1,\phi-1}$.}
\end{small}
\end{table}
We now introduce a new notion, the \emph{guaranteed exponent with anticipation $i$}, which will be useful when we study the Fibonacci word in the final section. Recall from the previous section the definition of the ordered $i+1$ subintervals, $L_0(i),L_{1}(i),\ldots,L_{i}(i)$, which form a partition of the torus $I$ and are in one-to-one correspondence with the factors of length $i$.
\begin{definition}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$. For all $m>0$ and $i$ such that $0\leq i\leq m$, we define the guaranteed exponent with anticipation $i$, denoted by $k_m^{(i)}$, as the largest $k$ such that for every $n\geq 0$ there exists $j$ with $0\leq j \leq i$ such that a (possibly degenerate) abelian power of period $m$ and exponent $k$ occurs in $s_{\alpha}$ starting at position $n-j$.
\end{definition}
In other words, $k_m^{(i)}$ is the largest value that is guaranteed to appear in every interval of $i+1$ consecutive positions in the sequence $k_{m,n}$.
\begin{theorem}\label{theor:kmi}
For every $m>0$ and $i$ such that $0\leq i\leq m$, we have that
\begin{equation}\label{eq:maxk2}
k_m^{(i)}=\max \left( 1,\left \lfloor \frac{1-l_{i}}{\|m\alpha\|}\right \rfloor \right),
\end{equation}
where $l_{i}=\max_{0\leq k \leq i} |L_{k}(i)| $ is the maximum size of an interval in the Sturmian bijection with the factors of length $i$.
\end{theorem}
\begin{proof}
Let $m > 0$ be fixed, and let us first consider the case $i = 0$. Since $l_0 = 1$, we need to prove that
$k_m^{(0)} = 1$. Equivalently, in a Sturmian word $s_{\alpha,\rho}$ there always exists a position $n$
such that no proper abelian power of period $m$ starts at this position. This is clear: if $\{m\alpha\} < 1/2$, then by
Proposition \ref{pro:int} and Lemma \ref{lem:ordered} we need to find a point $\{\rho+n\alpha\}$ such that
$\{\rho+n\alpha\} > \{-m\alpha\}$, while if $\{m\alpha\} > 1/2$, we need to have $\{\rho+n\alpha\} < \{-m\alpha\}$. By
the Kronecker Approximation Theorem such a point can always be found.
Let us now consider the general case $i>0$.
We know from Proposition \ref{pro:int} that the factor $a_{n}\cdots a_{n+m-1}\cdots a_{n+km-1}$ of length $km$ of $s_{\alpha,\rho}$ is an abelian power of period $m$ and exponent $k$ starting at position $n$ if and only if the $k$ points $\{\rho+(n+tm)\alpha\}$, $0\leq t \leq k-1$, are either all in the interval $I_1=I(0,\{-m\alpha\})$ or all in the interval $I_2=I(\{-m\alpha\},1)$.
Let us suppose $\{m\alpha\}<1/2$; the case $\{m\alpha\}>1/2$ is analogous. The longest abelian power of period $m$ starting at position $n$ is a factor that depends on the point $\{\rho+n\alpha\}$. Since we want the largest abelian power of period $m$ with anticipation $i$, we have to consider all the points $\{\rho+(n-j)\alpha\}$ with $0\leq j \leq i$. Let $k$ be such that $\{\rho+n\alpha\}\in L_k(i)$.
Since the size of the interval $L_k(i)$ is at most $l_i$, by the definition of the intervals we can say that there exists a $j$, with $0\leq j \leq i$, such that $\{\rho+(n-j)\alpha\}<l_i$, but by the Kronecker Approximation Theorem, this point can be arbitrarily close to $l_i$.
Hence, the largest integer $k$ such that for any $n\geq 0$ there exists a $j\leq i$ such that $\{\rho+(n-j+tm)\alpha\} \leq \{-m\alpha\}$ for every $0\leq t\leq k-1$, is either $1$ (in the case when no proper abelian power is guaranteed to start in any of $i+1$ consecutive positions) or the largest $k$ such that $(k-1)\|m\alpha\| \leq |I_1|-l_i=1-\|m\alpha\|-l_i$, i.e.,
\begin{equation*}
k=\left \lfloor \frac{1-l_{i}}{\|m\alpha\|}\right \rfloor.
\end{equation*}
Consequently, Proposition \ref{pro:int} implies that in this case
\begin{equation*}
k_m^{(i)} = \max \left( 1, \left \lfloor \frac{1-l_{i}}{\|m\alpha\|}\right \rfloor \right).
\end{equation*}
The case $\{m\alpha\} > 1/2$ is handled analogously. We can find $j$, with $0 \leq j \leq i$, such that
$\{\rho+(n-j)\alpha\} > 1-l_i$. Again such a point can be arbitrarily close to the point $1-l_i$. Thus, we need to find
the largest integer $k$ such that $1-l_i-(k-1)\|m\alpha\| \geq \|m\alpha\|$. The conclusion follows.
\end{proof}
\begin{example}
Take the Fibonacci word $f=s_{\phi-1,\phi-1}$, $m=10$ and $i=6$. We have $\|m\alpha\|\approx 0.180$ and $1-l_6\approx 0.764$, so from (\ref{eq:maxk2}) we get $k_m^{(i)}=4$. Indeed, using Corollary \ref{cor:gab2} we can compute the first values of the sequence $k_{10,n}$ (see Table \ref{tab:kmn}) and check that---at least in the first positions---there is a value $4$ in any interval of $i+1=7$ consecutive positions, but there are intervals of size $6$ in which no value $5$ is present, so that the exponent guaranteed for any $n$, with anticipation $6$, is equal to $4$.
The values of $k_{10}^{(i)}$ relative to $\alpha=\phi-1$ are reported in Table \ref{tab:kmi}.
\end{example}
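Both the formula above and the values in the example can be checked empirically. The following sketch (our own illustration, not part of the original development; the function names are ours) generates a prefix of the Fibonacci word, computes the sequence $k_{10,n}$ directly from the definition of abelian powers, and recovers the guaranteed exponents $k_{10}^{(i)}$ by a sliding-window minimum.

```python
def fib_word(n):
    # prefix of length n of the Fibonacci word (fixed point of 0 -> 01, 1 -> 0)
    w = [0]
    while len(w) < n:
        w = [c for a in w for c in ((0, 1) if a == 0 else (0,))]
    return w[:n]

def abelian_exponent(w, n, m):
    # largest k such that the k blocks of length m starting at n are abelian equivalent
    base = sum(w[n:n + m])
    k = 1
    while n + (k + 1) * m <= len(w) and sum(w[n + k * m:n + (k + 1) * m]) == base:
        k += 1
    return k

w = fib_word(50000)
m = 10
# k_{m,n}: maximal exponent of an abelian power of period m starting at position n
kn = [abelian_exponent(w, n, m) for n in range(10000)]
# k_m^{(i)}: largest exponent guaranteed to start within any i+1 consecutive positions
guaranteed = [min(max(kn[n - i:n + 1]) for n in range(i, len(kn)))
              for i in range(10)]
print(guaranteed)
```

On this sample the list reproduces the row of the table below, and the maximum of $k_{10,n}$ equals $k_{10}=5$, the overall maximal exponent of period $10$.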
\begin{table}
\centering
\begin{small}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{3.1mm}}c}}
$i$\hspace{2mm} &0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9
\\
\hline \\
$k_{10}^{(i)}$\hspace{2mm} &1& 2& 3& 3& 4& 4& 4& 4& 4& 4
\\
\hline \rule[0pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:kmi} The values of the guaranteed exponent $k_{10}^{(i)}$ for $0\leq i\leq 9$ in the Fibonacci word $f=s_{\phi-1,\phi-1}$.}
\end{small}
\end{table}
\section{Approximating Irrationals by Rationals and Abelian Repetitions}\label{sec:approx}
The results of the previous sections allow us to deal with abelian powers and abelian repetitions in a Sturmian word $s_{\alpha}$ by using classical results on the rational approximations of the irrational $\alpha$.
Indeed, for any rational approximation $n/m$ of $\alpha$ such that $|n/m-\alpha|<1/(km)$, we have $|n-m\alpha|<1/k$, so $\|m\alpha\|<1/k$. Hence, by Theorem \ref{the:main1}, the Sturmian word $s_{\alpha}$ of angle $\alpha$ contains an abelian power of period $m$ and exponent $k$. Using this observation, we can translate classical results on the rational approximations of $\alpha$ into analogous properties of the abelian powers occurring in $s_{\alpha}$.
In what follows, we recall some classical results from elementary number theory. For any notation not explicitly defined in this section we refer the reader to \cite{Hardy_and_Wright}.
Recall that every irrational number $\alpha$ can be uniquely written as a (simple) continued fraction as follows:
\begin{equation}\label{cf}
\alpha = a_0 + \dfrac{1}{a_1 + \dfrac{1}{a_2 + \ldots}}
\end{equation}
where $a_{0}=\lfloor \alpha \rfloor$, and the infinite sequence $(a_{i})_{i\geq 0}$ is called the sequence of partial quotients of $\alpha$. The continued fraction expansion of $\alpha$ is usually denoted by its sequence of partial quotients as follows: $\alpha=[a_{0};a_{1},a_{2},\ldots ]$, and each of its finite truncations $[a_{0};a_{1},a_{2},\ldots,a_{k}]$ is a rational number $n_{k}/m_{k}$ called the $k$th convergent to $\alpha$. We say that an irrational $\alpha=[a_{0};a_{1},a_{2},\ldots ]$ has bounded partial quotients if and only if the sequence $(a_{i})_{i\geq 0}$ is bounded.
Since the Golden Ratio $\phi$ is defined by the equation $\phi=1+1/\phi$, we have from Equation \eqref{cf} that $\phi=[1;\overline{1}]$ (we indicate a repeating period with a bar over the period) and therefore $\phi-1=[0;\overline{1}]$. The sequence $F_0=1, F_1=1, F_{j+1}=F_j+F_{j-1}$ for $j\geq 1$ is the well-known sequence of Fibonacci numbers. The sequences of fractions $\left(F_{j+1}/F_j\right)_{j\geq 0}$ and $0,\left(F_{j}/F_{j+1}\right)_{j\geq 0}$ are the sequences of convergents to $\phi$ and $\phi-1$, respectively.
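The convergents and their denominators are easy to compute with the standard recurrence $n_k = a_k n_{k-1} + n_{k-2}$, $m_k = a_k m_{k-1} + m_{k-2}$. The following sketch (our own illustration) confirms that the denominators of the convergents to $\phi-1$ are exactly the Fibonacci numbers.

```python
from fractions import Fraction

def convergents(cf):
    # convergents n_k/m_k of [a_0; a_1, a_2, ...] via the standard recurrence
    out = []
    n_prev, m_prev = 1, 0     # n_{-1}, m_{-1}
    n_cur, m_cur = cf[0], 1   # n_0, m_0
    out.append(Fraction(n_cur, m_cur))
    for a in cf[1:]:
        n_prev, n_cur = n_cur, a * n_cur + n_prev
        m_prev, m_cur = m_cur, a * m_cur + m_prev
        out.append(Fraction(n_cur, m_cur))
    return out

# phi - 1 = [0; 1, 1, 1, ...]
cs = convergents([0] + [1] * 10)
print([f"{c.numerator}/{c.denominator}" for c in cs])
```

The printed fractions are $0/1, 1/1, 1/2, 2/3, 3/5, 5/8, \ldots$, i.e., the ratios $F_j/F_{j+1}$ announced above.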
The following classical result (see for example \cite[Theorem 6]{Lang}) states that the best rational approximations of an irrational $\alpha$ are given by its convergents.
\begin{theorem}\label{theor:lang}
For every irrational $\alpha$, if $n_{i}/m_{i}$ and $n_{i+1}/m_{i+1}$ are consecutive convergents to $\alpha$, then $m_{i+1}$ is the smallest integer $m>m_{i}$ such that $\|m\alpha\|<\|m_{i}\alpha\|$.
\end{theorem}
From Theorem \ref{theor:lang} we directly have the following corollary.
\begin{corollary}\label{cor:cor}
Suppose that $m_i>1$ and $m_{i+1}$ are consecutive denominators of convergents to $\alpha$. Then $\|m_i\alpha\|=\min_{1\leq m< m_{i+1}}\|m\alpha\|$.
\end{corollary}
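Corollary \ref{cor:cor} is also easy to check numerically; the sketch below (ours, with $\alpha=\phi-1$ and a few consecutive Fibonacci denominators) verifies that $\|m_i\alpha\|$ is minimal among $\|m\alpha\|$ for $1\leq m<m_{i+1}$.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def dist(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

alpha = phi - 1
checks = []
for q, q_next in [(2, 3), (3, 5), (5, 8), (8, 13), (13, 21)]:
    # ||q alpha|| should be the minimum of ||m alpha|| over 1 <= m < q_next
    checks.append(dist(q * alpha) == min(dist(m * alpha) for m in range(1, q_next)))
print(checks)
```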
Corollary \ref{cor:cor} has several consequences on the structure of abelian powers and abelian repetitions in Sturmian words.
The first one is given in the following proposition.
\begin{proposition}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$. For every integer $m>1$, let $k_{m}$ (resp. $k'_{m}$) be the maximum exponent of an abelian power (resp.~abelian repetition) of period $m$ in $s_{\alpha}$. Then the subsequence $(k_{m_{i}})_{i\geq 0}$ (resp.~$(k'_{m_{i}})_{i\geq 0}$), where the numbers $m_{i}$ are the denominators of the convergents to $\alpha$, is strictly increasing.
\end{proposition}
\begin{proof}
The proposition is an immediate consequence of Theorem \ref{the:main1} and Corollary \ref{cor:cor}.
\end{proof}
\begin{example}
In Table \ref{tab:fici3}, we give the first values of the sequence $\|m(\phi-1)\|$. Notice that the local minima correspond to the Fibonacci numbers (in bold), which are the denominators of the convergents to $\phi-1$.
From Corollary \ref{cor:cor} and Theorem \ref{the:main1}, we have that the local maxima of the sequence $k_{m}$ occur precisely at the values of $m$ that are Fibonacci numbers (see Table \ref{tab:fici2}).
\end{example}
From Proposition \ref{pro:main} and Corollary \ref{cor:cor} we deduce the following.
\begin{proposition}\label{prop:unique}
Let $m$ be a denominator of a convergent to $\alpha$. If $\{-m\alpha\}\geq \{-\alpha\}$ (resp.~if $\{-m\alpha\}< \{-\alpha\}$), then there is only one heavy (resp.~light) factor of length $m$, which starts and ends with the letter $\sa{a}$ (resp.~with the letter $\sa{b}$).
\end{proposition}
The previous proposition allows us to state that the abelian repetitions of period $m$, when $m$ is a denominator of a convergent to $\alpha$, have maximum head length and maximum tail length. More precisely, we have the following.
\begin{proposition}\label{prop:mec}
Let $s_{\alpha} = a_0 a_1 a_2 \cdots$ be a Sturmian word of angle $\alpha$ and $m_{i}$ be a denominator of a
convergent to $\alpha$. Let $w$ be an abelian power of period $m_{i}$ in $s_{\alpha}$ starting at position
$n \geq m_{i} - 1$. Then this occurrence of $w$ can be extended to an abelian repetition of period $m_{i}$ with
maximum head and tail length $m_{i}-1$.
\end{proposition}
\begin{proof}
Let $w=a_{n}\cdots a_{n+m_{i}k-1}$ be an abelian power of period $m_{i}$ and exponent $k$ in $s_{\alpha}$. Suppose first that $w$ has maximum exponent $k_{m_{i}}$. We claim that the Parikh vectors of $a_{n-m_{i}+1}\cdots a_{n-1}$ and $a_{n+m_{i}k}\cdots a_{n+m_{i}k+m_{i}-2}$, both of length $m_{i}-1$, are contained in the Parikh vector $\mathcal{P}$ of $a_{n}\cdots a_{n+m_{i}-1}$. This implies that the abelian repetition $a_{n-m_{i}+1}\cdots a_{n+m_{i}k+m_{i}-2}$ of period $m_{i}$ has maximum head length and maximum tail length, and the statement follows.
In order to prove the claim, let $\mathcal{P}'$ be the Parikh vector of the factors of length $m_{i}$ of $s_{\alpha}$ that do not have Parikh vector $\mathcal{P}$. Suppose that $\mathcal{P}$ is the Parikh vector of the heavy (resp.~light) factors and that $\mathcal{P}'$ is the Parikh vector of the light (resp.~heavy) factors. By the maximality of $k$, the Parikh vector of the two factors of length $m_{i}$, $a_{n-m_{i}}\cdots a_{n-1}$ and $a_{n+m_{i}k_{m_{i}}}\cdots a_{n+m_{i}k_{m_{i}}+m_{i}-1}$, respectively preceding and following the occurrence of $w$ in $s_{\alpha}$, must be $\mathcal{P}'$ (we can extend $s_{\alpha}$ to the left by one letter if needed).
By Proposition \ref{prop:unique}, there is a unique light (resp.~heavy) factor having Parikh vector $\mathcal{P}'$.
Moreover, this unique factor starts and ends with $\sa{b}$ (resp.~$\sa{a}$). Therefore, $a_{n-m_{i}}=a_{n+m_{i}k_{m_{i}}+m_{i}-1}$ is equal to $\sa{b}$ (resp.~to $\sa{a}$), and the claim is proved.
Finally, if the exponent of $w$ is not maximum, then the Parikh vector of the factor of length $m_{i}$ preceding (resp.~following) $w$ in $s_{\alpha}$ is either $\mathcal{P}'$, if $w$ is a prefix (resp.~a suffix) of an abelian power of maximum exponent (and in this case the previous reasoning applies) or $\mathcal{P}$. In both cases $w$ can be extended to an abelian repetition with maximum head length and maximum tail length.
\end{proof}
\begin{corollary}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$. Let $m_{i}$ be a denominator of a convergent to $\alpha$. Then $k'_{m_{i}}=k_{m_{i}}+2-2/m_{i}$.
\end{corollary}
\begin{table}
\centering
\begin{scriptsize}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{1.4mm}}l}}
$m$\hspace{1mm} & {\bf 1} & {\bf 2} & {\bf 3} & 4 & {\bf 5} & 6 & 7 & {\bf 8} & 9 & 10 & 11 & 12 & {\bf 13} & 14 & 15 & 16 & 17 & 18
\\
\hline \\
$\|m(\phi-1)\|$\hspace{1mm} & {\bf 0.38} & {\bf 0.24} & {\bf 0.15} & 0.47 & {\bf 0.09} & 0.29 & 0.33 & {\bf 0.06} & 0.44 & 0.18 & 0.20 & 0.42 & {\bf 0.03} & 0.35 & 0.27 & 0.11 & 0.49 & 0.13
\\
\hline \rule[0pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:fici3} The first values of the sequence $\|m(\phi-1)\|$. The values corresponding to the Fibonacci numbers are in bold.}
\end{scriptsize}
\end{table}
In the context of ordinary powers, it is interesting to study the largest power occurring in a word. However, in the
abelian setting it does not make sense to study the analogous quantity since any Sturmian word contains abelian powers
of arbitrarily large exponent. Instead, we propose the following notion of abelian critical exponent, which measures
the maximum ratio between the exponent and the period of an abelian repetition.
\begin{definition}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$. For every integer $m>1$, let $k_{m}$ (resp. $k'_{m}$) be the
maximum exponent of an abelian power (resp.~abelian repetition) of period $m$ in $s_{\alpha}$. The \emph{abelian
critical exponent of $s_{\alpha}$} is defined as
\begin{equation}\label{eq:1}
A(s_\alpha) = \limsup_{m \to \infty} \frac{k_{m}}{m} = \limsup_{m \to \infty} \frac{k'_{m}}{m}.
\end{equation}
Notice that the two limits superior above indeed coincide for any Sturmian word $s_{\alpha}$, since
by definition $k_{m}\leq k'_{m}<k_{m}+2$ for every $m \geq 1$.
\end{definition}
Before studying abelian critical exponents further, we explore their connection to a number-theoretical concept known as
the Lagrange spectrum.
\begin{definition}
Let $\alpha$ be a real number. The \emph{Lagrange constant of $\alpha$} is defined as
\begin{equation*}
\lambda(\alpha) = \limsup_{m\to\infty} (m\|m\alpha\|)^{-1}.
\end{equation*}
\end{definition}
Let us briefly motivate the definition of the Lagrange constants. The famous theorem of Hurwitz states that for every irrational $\alpha$ there exist infinitely many rational numbers $n/m$ such that $$\left|\alpha - \frac{n}{m}\right| < \frac{1}{\sqrt{5}m^2}$$ and, moreover, the constant $\sqrt{5}$ is best possible. Indeed, if $\alpha=\phi-1$, then for every $A>\sqrt{5}$ the inequality
$$\left|\frac{n}{m}-\alpha\right|< \frac{1}{Am^2}$$
has only a finite number of solutions $n/m$.
For a general irrational $\alpha$, the infimum of the real numbers $\lambda$ such that for every $A>\lambda$ the inequality
$\left|n/m-\alpha\right|< 1/(Am^2)$
has only a finite number of solutions $n/m$, is indeed the Lagrange constant $\lambda(\alpha)$ of $\alpha$. The set of all finite Lagrange
constants of irrationals is called the \emph{Lagrange spectrum} $L$. The Lagrange spectrum has been
extensively studied, see for instance \cite{Cusick_and_Flahive}. Yet its structure is still not completely understood. Markov \cite{Ma} proved that
$L \cap (-\infty, 3) = \{k_1 = \sqrt{5} < k_2 = \sqrt{8} < k_3 = \sqrt{221}/5 < \ldots\}$
where $k_n$ is a sequence of quadratic irrational numbers converging to $3$ (so the beginning of $L$ is discrete). Then Hall \cite{Hall} proved that $L$ contains a whole half line, and Freiman \cite{Freiman} determined the biggest half line that is contained
in $L$, which is $[c, +\infty)$, with
$$c=\frac{2221564096+283748\sqrt{462}}{491993569}=4.5278295661\ldots $$
Using the terminology of Lagrange constants, we have the following direct consequence of Theorem \ref{the:main1}.
\begin{theorem}
Let $s_\alpha$ be a Sturmian word of angle $\alpha$. Then $A(s_\alpha) = \lambda(\alpha)$. In other words, the
abelian critical exponent of a Sturmian word is the Lagrange constant of its angle.
\end{theorem}
Let us next derive a (known) formula for the Lagrange constant of an irrational number. For this we need to recall some
elementary results on continued fractions. For full details see \cite[Chap.~X]{Hardy_and_Wright}.
Let $\alpha$ be a fixed irrational with continued fraction expansion $[a_0; a_1, a_2, \ldots]$. Let us set
$\alpha_i = [a_i; a_{i+1}, a_{i+2}, \ldots].$ Since $\alpha = [a_0; a_1, a_2, \ldots, a_i, \alpha_{i+1}]$, we have that
$$\alpha = \frac{\alpha_{i+1}n_i + n_{i-1}}{\alpha_{i+1}m_i + m_{i-1}}.$$ Therefore, by applying the identity
$m_i n_{i-1} - n_i m_{i-1} = (-1)^i$, we obtain that
\begin{equation}\label{eq:difference}
\alpha - \frac{n_i}{m_i} = \frac{(-1)^i}{m_i(\alpha_{i+1}m_i + m_{i-1})},
\end{equation}
so $$(m_i\|m_i\alpha\|)^{-1} = \alpha_{i+1} + \frac{m_{i-1}}{m_i}.$$ By induction it is easy to prove the
well-known fact that $m_{i-1}/m_i = [0;a_i,a_{i-1},\ldots,a_1]$. Consequently,
$$(m_i\|m_i\alpha\|)^{-1} = [a_{i+1};a_{i+2},\ldots]+[0;a_i,a_{i-1},\ldots,a_1].$$ Let $m$ be an integer
such that $m_i < m < m_{i+1}$ for some $i \geq 0$. By Theorem \ref{theor:lang}, we have
$\|m_i\alpha\| < \|m\alpha\|$, so $m_i\|m_i\alpha\| < m\|m\alpha\|$. Thus,
\begin{equation}\label{eq:lagrange_formula}
\lambda(\alpha) = \limsup_{i\to\infty} (m_i\|m_i\alpha\|)^{-1} = \limsup_{i\to\infty} \left([a_{i+1};a_{i+2},\ldots]+[0;a_i,a_{i-1},\ldots,a_1]\right).
\end{equation}
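As a sanity check on Equation \eqref{eq:lagrange_formula}, both sides can be compared numerically. The sketch below (ours) does so for $\alpha=\sqrt{2}=[1;\overline{2}]$, for which $\lambda(\sqrt{2})=\sqrt{8}$.

```python
from math import sqrt

def cf_value(cf):
    # value of the finite continued fraction [a_0; a_1, ..., a_k]
    x = float(cf[-1])
    for a in reversed(cf[:-1]):
        x = a + 1 / x
    return x

def dist(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

alpha = sqrt(2)
a = [1] + [2] * 40        # partial quotients of sqrt(2)
m = [1, a[1]]             # denominators m_0 = 1, m_1 = a_1
for i in range(2, 20):
    m.append(a[i] * m[-1] + m[-2])

i = 6
lhs = 1 / (m[i] * dist(m[i] * alpha))
rhs = cf_value(a[i + 1:i + 30]) + cf_value([0] + a[i:0:-1])
print(lhs, rhs, sqrt(8))
```

Both sides agree to high precision, and both are already close to $\sqrt{8}$ at $i=6$.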
\begin{definition}
Let $\alpha$ and $\beta$ be two real numbers having continued fraction expansions $[a_0; a_1, a_2, \ldots]$ and
$[b_0; b_1, b_2, \ldots]$, respectively. If there exist integers $N$ and $M$ such that $a_{N+i} = b_{M+i}$ for all
$i \geq 0$, then we say that $\alpha$ and $\beta$ are \emph{equivalent}. In other words, two numbers are equivalent
if their continued fraction expansions ultimately coincide.
\end{definition}
By Equation \eqref{eq:lagrange_formula} it is immediate that equivalent numbers have the same Lagrange constant. Equation
\eqref{eq:lagrange_formula} directly implies the following important proposition.
\begin{proposition}\label{prp:act_formula}
Let $s_\alpha$ be a Sturmian word of angle $\alpha$. Then
\begin{equation*}
A(s_\alpha) = \limsup_{i\to\infty} \left([a_{i+1};a_{i+2},\ldots]+[0;a_i,a_{i-1},\ldots,a_1]\right).
\end{equation*}
\end{proposition}
We have thus obtained a formula for the abelian critical exponent of a Sturmian word in terms of the partial quotients
of its angle. A formula for the usual critical exponent of Sturmian words can also be expressed in terms of partial
quotients; see \cite{CaDel00,Justin2001363,Damanik200223,Pelto}.
Proposition \ref{prp:act_formula} enables us to study the abelian critical exponent of Sturmian words. The first
application is the following result. Recall that an infinite word is $\beta$-power-free if it does not contain
repetitions of exponent $\beta$ or larger.
\begin{theorem}\label{theor:act_finite}
Let $s_\alpha$ be a Sturmian word of angle $\alpha$. The following are equivalent:
\begin{enumerate}[(i)]
\item $A(s_\alpha)$ is finite,
\item $s_\alpha$ is $\beta$-power-free for some $\beta \geq 2$,
\item $\alpha$ has bounded partial quotients.
\end{enumerate}
\end{theorem}
\begin{proof}
It is evident from Proposition \ref{prp:act_formula} that $A(s_\alpha)$ is finite if and only if $\alpha$ has
bounded partial quotients. By a well-known result, $s_\alpha$ is $\beta$-power-free for some $\beta \geq 2$ if and
only if $\alpha$ has bounded partial quotients; see \cite{Mi89}.
\end{proof}
\begin{theorem}\label{theor:sqrt5}
For every Sturmian word $s_{\alpha}$ of angle $\alpha$, we have $A(s_\alpha) \geq \sqrt{5}$. Moreover,
$A(s_\alpha) = \sqrt{5}$ if and only if $\alpha$ is equivalent to $\phi-1$. In particular, the abelian critical
exponent of the Fibonacci word is $\sqrt{5}$.
\end{theorem}
\begin{proof}
It is clear from Proposition \ref{prp:act_formula} that $A(s_\alpha)$ is as small as possible when $\alpha$ is
equivalent to $\phi = [1;\overline{1}]$. It is straightforward to compute that $\lambda(\phi) = \sqrt{5}$, so
$A(s_\alpha) \geq A(s_{\phi-1,\phi-1}) = \sqrt{5}$ for all Sturmian words $s_\alpha$.
What is left to prove is that if $A(s_\alpha) = \sqrt{5}$, then $\alpha$ is equivalent to $\phi-1$. Suppose
that $\alpha = [0;a_1,a_2,\ldots]$ and $A(s_\alpha) = \sqrt{5}$. If $a_i \geq 3$ for infinitely many $i$, then
clearly $A(s_\alpha) \geq 3 > \sqrt{5}$. Thus, there exists $M > 0$ such that $a_i < 3$ for all $i \geq M$. We are
left with two cases: either $a_i = 1$ for only finitely many $i$, or the sequence $(a_i)$ takes both values $1$ and $2$
infinitely often. (In the remaining case $a_i = 2$ for only finitely many $i$, so $(a_i)$ is eventually constantly $1$, $\alpha$ is equivalent to $\phi-1$, and we are done.)
Suppose first that $a_i = 1$ for finitely many $i$. It follows that $(a_i)$ eventually takes only the value $2$, so
$\alpha$ is equivalent to $\sqrt{2} = [1;\overline{2}]$. Therefore, $\lambda(\alpha) = \lambda(\sqrt{2})$. It is a
routine computation to show that $\lambda(\sqrt{2}) = \sqrt{8}$, so $A(s_\alpha) = \sqrt{8} > \sqrt{5}$; a
contradiction.
Assume finally that the sequence $(a_i)$ takes values $1$ and $2$ infinitely often. It follows that the sequence
$(a_i)$ contains either of the patterns $2,1,1$ or $2,1,2$ infinitely often. Since an odd convergent of a number
$\beta$ is always strictly less than $\beta$, it follows that $$[2;1,1,a_k,a_{k+1},\ldots] > [2;1,1] = \frac52$$ and
$$[2;1,2,a_k,a_{k+1},\ldots] > [2;1,2] = \frac83.$$ Thus, $A(s_\alpha) \geq 5/2 > \sqrt{5}$, which is impossible.
\end{proof}
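Numerically, the convergence $\bigl(F_j\|F_j(\phi-1)\|\bigr)^{-1}\to\sqrt{5}$ underlying the theorem is easy to observe; the following sketch (ours) approximates $\lambda(\phi-1)$ along the Fibonacci denominators.

```python
from math import sqrt

def dist(x):
    # ||x||: distance from x to the nearest integer
    return abs(x - round(x))

phi = (1 + sqrt(5)) / 2
F = [1, 1]
while len(F) < 21:
    F.append(F[-1] + F[-2])

# (F_j * ||F_j (phi-1)||)^{-1} = phi + F_{j-1}/F_j -> sqrt(5)
approx = [1 / (F[j] * dist(F[j] * (phi - 1))) for j in range(2, 20)]
print(approx[-1])
```

The last terms agree with $\sqrt{5}\approx 2.2360680$ to many decimal places, as predicted by the identity $(m_i\|m_i\alpha\|)^{-1}=\alpha_{i+1}+m_{i-1}/m_i$.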
In general, two numbers having the same Lagrange constant need not be equivalent. Any two numbers having unbounded partial
quotients have Lagrange constant $\infty$, but obviously such numbers are not necessarily equivalent.
As a consequence of Theorem \ref{theor:sqrt5} we have the following.
\begin{corollary}\label{cor:fil_new}
Let $s_{\alpha}$ be a Sturmian word of angle $\alpha$. For every $\delta>0$ there exists an increasing sequence of integers $(m_i)_{i\geq 0}$ such that for every $i$ there is in $s_\alpha$ an abelian power of period $m_i$ and length greater than $(\sqrt{5}-\delta)m_i^2$ (i.e., with exponent greater than $(\sqrt{5}-\delta)m_i$).
\end{corollary}
Notice that the result of the previous corollary about abelian powers is in sharp contrast with the analogous situation
for ordinary repetitions. Indeed, it is known (see \cite{MignosiPirillo}) that there exist Sturmian words that are
$\beta$-power-free for any real number $\beta \geq 2+\phi$, so, with respect to both the length and the exponent, the
abelian setting differs from the ordinary one by an order of magnitude.
\begin{remark}
By Proposition \ref{prp:act_formula} it is in principle possible to explicitly compute the abelian critical exponent
for a given Sturmian word. This is especially true if the angle $\alpha$ is a quadratic irrational, as then the
continued fraction expansion of $\alpha$ is ultimately periodic
\cite[Chap.~X, Theorem~176, Theorem~177]{Hardy_and_Wright}. Observe that this implies that the partial quotients of
the expansion are bounded, so by Theorem \ref{theor:act_finite} the abelian critical exponent is finite in this case.
Notice that if a Sturmian word is a fixed point of a substitution, then its angle is a quadratic irrational called a
Sturm number \cite{yasutomi}.
Suppose, for example, that $\alpha = [0;\overline{2,1}]$. It is routine to compute that
$[0;\overline{2,1}] = (-1+\sqrt{3})/2$ and $[0;\overline{1,2}] = -1+\sqrt{3}$. Combining the fact that odd
convergents of $[0;\overline{1,2}]$ approximate $[0;\overline{1,2}]$ from below with the fact that
$[2;\overline{1,2}] > [1;\overline{2,1}] + 1$, it follows from Proposition \ref{prp:act_formula} that
$A(s_\alpha) = [2;\overline{1,2}] + [0;\overline{1,2}] = 2\sqrt{3} \approx 3.46$ for a Sturmian word $s_\alpha$ of
angle $\alpha$.
\end{remark}
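The value $2\sqrt{3}$ in the remark can be confirmed numerically from Proposition \ref{prp:act_formula}; in this sketch (ours) the limit superior is approximated by evaluating finite truncations of the two continued fractions.

```python
from math import sqrt

def cf_value(cf):
    # value of the finite continued fraction [a_0; a_1, ..., a_k]
    x = float(cf[-1])
    for a in reversed(cf[:-1]):
        x = a + 1 / x
    return x

# alpha = [0; 2,1,2,1,...]
a = [0] + [2, 1] * 40
terms = [cf_value(a[i + 1:i + 40]) + cf_value([0] + a[i:0:-1])
         for i in range(15, 40)]
estimate = max(terms)
print(estimate)
```

The maximum is attained at the positions $i$ with $a_{i+1}=2$, where the two truncations approach $[2;\overline{1,2}]$ and $[0;\overline{1,2}]$, and the estimate matches $2\sqrt{3}\approx 3.4641$.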
\section{Abelian Repetitions in the Fibonacci Word}\label{sec:abrepFibo}
In this section we apply our results to the Fibonacci word and study in detail its abelian powers and
repetitions. We begin with the following simple observation.
\begin{proposition}\label{pro:max}
The maximum exponent of an abelian power of period $F_{j}$, $j>0$, in the Fibonacci word is equal to
$\lfloor \phi F_j+F_{j-1} \rfloor$.
\end{proposition}
\begin{proof}
The sequence of denominators of the convergents to $\phi-1$ coincides with the sequence of Fibonacci numbers.
Therefore, it is an immediate consequence of Equation \eqref{eq:difference} that
$$\|F_{j}(\phi-1)\|= \frac{1}{\phi F_{j}+F_{j-1}}.$$ The claim follows now from Theorem \ref{the:main1}.
\end{proof}
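Proposition \ref{pro:max} can be checked directly on a long prefix of the Fibonacci word. The following sketch (ours) searches for the maximal abelian exponent of periods $F_4=5$, $F_5=8$, $F_6=13$ and compares it with $\lfloor \phi F_j+F_{j-1}\rfloor$.

```python
from math import floor, sqrt

def fib_word(n):
    # prefix of length n of the Fibonacci word (fixed point of 0 -> 01, 1 -> 0)
    w = [0]
    while len(w) < n:
        w = [c for a in w for c in ((0, 1) if a == 0 else (0,))]
    return w[:n]

w = fib_word(60000)
pre = [0]
for c in w:
    pre.append(pre[-1] + c)
ones = lambda i, j: pre[j] - pre[i]   # number of 1s in w[i:j]

phi = (1 + sqrt(5)) / 2
results = {}
for m, f_prev in [(5, 3), (8, 5), (13, 8)]:
    best = 0
    for n in range(40000):
        p = ones(n, n + m)
        k = 1
        while n + (k + 1) * m <= len(w) and ones(n + k * m, n + (k + 1) * m) == p:
            k += 1
        best = max(best, k)
    results[m] = (best, floor(phi * m + f_prev))
print(results)
```

For each period the empirically found maximum agrees with the closed formula ($11$, $17$ and $29$, respectively).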
Next we turn our attention to the prefixes of the Fibonacci word which are abelian repetitions. Consider such a prefix
of abelian period $m$. The head of the abelian decomposition of this prefix has length at most $m-1$. Therefore, in
order to find the longest abelian repetition of period $m$ that is a prefix, we have to check, for every abelian power
starting at a position $i$ with $0 \leq i \leq m-1$, the maximum length of a compatible head.
\begin{proposition}\label{pro:maximal_exponent}
Let $j>1$. In the Fibonacci word $f$, the longest abelian power of period $F_j$ starting at a position $i< F_j$
has an occurrence starting at position $F_j-1$ and has exponent $$\lfloor \phi F_j+F_{j-1} \rfloor -1=
\begin{cases}
F_{j+1}+F_{j-1} -1 & \mbox{ if $j$ is even;}\\
F_{j+1}+F_{j-1} -2 & \mbox{ if $j$ is odd.}
\end{cases}
$$
\end{proposition}
\begin{proof}
For simplicity we consider the word $s_{\phi-1,0}$ and show that in $s_{\phi-1,0}$ the longest abelian power having period $F_j$ starting at a position $i$ such that $i\leq F_j$ has an occurrence starting at position $F_j$ and has the claimed exponent. By Theorem \ref{the:maingab}, an abelian power with period $F_{j}$ starting at position $F_{j}$ in $s_{\phi-1,0}$ has exponent $k$ if and only if $$\{F_{j}(\phi-1)\}<1-k\{F_{j}(\phi-1)\} \ \mbox{ or } \ \{F_{j}(\phi-1)\}>k\{-F_{j}(\phi-1)\}.$$
Suppose that the first case holds (the other case is analogous); then $j$ is even. We hence have
\begin{equation}\label{eq:uno}
\{F_{j}(\phi-1)\}<\frac{1}{k+1}.
\end{equation}
As in the proof of Proposition \ref{pro:max}, we derive from Equation \eqref{eq:difference} that
$$\{F_{j}(\phi-1)\}=\frac{1}{\phi F_{j}+F_{j-1}}.$$
Therefore, the largest integer $k$ for which \eqref{eq:uno} holds is $\lfloor \phi F_j+F_{j-1} \rfloor -1$.
Then, the abelian power of period $F_{j}$ starting at position $F_{j}-1$ in the Fibonacci word has exponent $\lfloor \phi F_j+F_{j-1} \rfloor -1$. By Corollary \ref{cor:cor} any abelian power starting at a position $i$ such that $i<F_{j}-1$ has a smaller exponent, so the proof is complete once we derive the formula in the statement. By Equation \eqref{eq:difference}
\begin{equation*}
\phi F_j - F_{j+1} = \frac{(-1)^j}{\phi F_j + F_{j-1}},
\end{equation*}
so we obtain that
\begin{equation*}
\lfloor \phi F_j \rfloor = F_{j+1} +
\begin{cases}
0 & \text{ if $j$ is even;}\\
-1 & \text{ if $j$ is odd.}
\end{cases}
\end{equation*}
This gives the formula of the statement.
\end{proof}
The following theorem provides a formula for computing the length of the longest abelian repetition occurring as a prefix in the Fibonacci word.
\begin{theorem}\label{pro:longest_prefix}
Let $j>1$. The longest prefix of the Fibonacci word that is an abelian repetition of period $F_j$ has length
$$\textit{lp}(F_{j}) =
\begin{cases}
F_j( F_{j+1}+F_{j-1} +1)-2 & \mbox{ if $j$ is even;}\\
F_j( F_{j+1}+F_{j-1} )-2 & \mbox{ if $j$ is odd.}
\end{cases}
$$
\end{theorem}
\begin{proof}
Let $w$ be the abelian power in $f$ of period $F_j$ having maximum exponent described in Proposition
\ref{pro:maximal_exponent}. By Proposition \ref{prop:mec} this occurrence of $w$ can be extended to an abelian
repetition with maximum head and tail length. The claim thus follows from the formula of Proposition
\ref{pro:maximal_exponent}.
\end{proof}
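Theorem \ref{pro:longest_prefix} (and the values in Table \ref{tab:fici}) can be verified computationally; the sketch below (ours) finds, by brute force over head lengths, the longest prefix of the Fibonacci word that is an abelian repetition of period $m$.

```python
def fib_word(n):
    # prefix of length n of the Fibonacci word (fixed point of 0 -> 01, 1 -> 0)
    w = [0]
    while len(w) < n:
        w = [c for a in w for c in ((0, 1) if a == 0 else (0,))]
    return w[:n]

def longest_prefix_abelian_rep(w, m):
    # longest prefix of w of the form u_0 u_1 ... u_k u_{k+1}, where all middle blocks
    # u_1, ..., u_k have length m and share a Parikh vector P, and the head u_0 and
    # tail u_{k+1} (both of length < m) have Parikh vectors contained in P
    pre = [0]
    for c in w:
        pre.append(pre[-1] + c)
    ones = lambda i, j: pre[j] - pre[i]
    contained = lambda i, j, p: ones(i, j) <= p and (j - i - ones(i, j)) <= m - p
    best = 0
    for h in range(m):                      # head length
        p = ones(h, h + m)                  # Parikh vector of the first full block
        if not contained(0, h, p):
            continue
        k = 1
        while h + (k + 1) * m <= len(w) and ones(h + k * m, h + (k + 1) * m) == p:
            k += 1
        end = h + k * m
        t = 0                               # extend with the longest compatible tail
        while t + 1 < m and end + t + 1 <= len(w) and contained(end, end + t + 1, p):
            t += 1
        best = max(best, h + k * m + t)
    return best

w = fib_word(5000)
lp = {m: longest_prefix_abelian_rep(w, m) for m in (2, 3, 5, 8, 13)}
print(lp)
```

The computed lengths reproduce the values $8, 19, 58, 142, 388$ given by the formula.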
\begin{proposition}\label{corr:last}
Let $j>1$ and $k_{j}$ be the maximum exponent of a prefix of the Fibonacci word that is an abelian repetition of period $F_{j}$. Then $$\lim_{j\to \infty}\frac{k_{j}}{F_{j}}=\sqrt{5}.$$
\end{proposition}
\begin{proof}
By Theorem \ref{theor:sqrt5}, the limit superior of the ratios $k'_{m}/m$ taken over all periods $m$ (a larger set of
repetitions than the prefixes considered here) equals $\sqrt{5}$. Therefore, the limit in question is at most $\sqrt{5}$. The equality follows from the fact that the
sequence $(F_{j+1}+F_{j-1})/F_j$ converges to $\sqrt{5}$. Indeed, since the sequence $F_{j+1}/F_j$ converges to $\phi$
and the sequence $F_{j-1}/F_j$ converges to $\phi-1$, the sequence $$\frac{F_{j+1}+F_{j-1}}{F_j}$$ converges to
$\phi + \phi-1= \sqrt{5}.$
\end{proof}
\begin{figure}[ht!]
\centering
\fbox{
\includegraphics[scale=0.8]{Fig_prefixlenghts}
}\caption{Longest abelian repetition of period $m$ that is a prefix of the Fibonacci word for $m=2,3,5$.
$(a)$ For $m=2$, the longest abelian repetition has length $8=1+3m+1$.
$(b)$ For $m=3$, the longest abelian repetition has length $19=2+5m+2$.
$(c)$ For $m=5$, the longest abelian repetition has length $58=4+10m+4$. }
\label{Fig:prefixlenghts}
\end{figure}
In \figurename~\ref{Fig:prefixlenghts} we give a graphical representation of the longest prefix of the Fibonacci word that is an abelian repetition of period $m$ for $m=2,3,5$. In Table \ref{tab:fici} we give the length $\textit{lp}(F_{j})$ of the longest prefix of the Fibonacci word that is an abelian repetition of period $F_{j}$, for $j=2,\ldots, 11$, computed using the formula of Theorem \ref{pro:longest_prefix}. We also give the distance between $\sqrt{5}$ and the ratio $k_{j}/F_{j}$, where $k_{j}=\textit{lp}(F_{j})/F_{j}$ is the maximum exponent of a prefix of the Fibonacci word having abelian period $F_{j}$ (cf.~Proposition~\ref{corr:last}).
\begin{table}
\centering
\begin{small}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{3.1mm}}l}}
$j$\hspace{2mm} & 2& 3& 4& 5& 6& 7& 8& 9& 10& 11
\\
\hline \\
$F_{j}$\hspace{2mm} & 2& 3& 5& 8& 13& 21& 34& 55& 89& 144
\\
\hline \\
$\textit{lp}(F_{j})$ \hspace{2mm} & 8 & 19 & 58 & 142 & 388 & 985 & 2616\hspace{1ex} & 6763 & 17798 & 46366
\\
\hline\\
$|\sqrt{5}-k_{j}/F_{j}|\times 10^2$ \hspace{2mm} & $23.6 $ & $12.5 $ & $8.393 $ & $1.732 $ & $5.98 $ & $0.25 $ & $2.69 $ & $0.037 $ & $1.087$ & $0.005 $
\\
\hline \rule[0pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:fici}The length of the longest prefix ($\textit{lp}(F_{j})$) of the Fibonacci word having abelian period $F_{j}$ for $j=2,\ldots, 11$. The table also reports rounded distances (multiplied by $10^2$) between $\sqrt{5}$ and the ratio between the exponent $\textit{lp}(F_{j})/F_{j}$ of the longest prefix of the Fibonacci word having abelian period $F_{j}$ and the number $F_{j}$ (see Proposition~\ref{corr:last}).}
\end{small}
\end{table}
Next we extend a classical result on the periods of the factors of the Fibonacci word to the abelian setting. Currie
and Saari proved the following \cite{CuSa09}.
\begin{proposition}
The minimum period of any factor of the Fibonacci infinite word is a Fibonacci number.
\end{proposition}
We prove an analogous result for abelian periods: the minimum abelian period of every factor of the Fibonacci word is
a Fibonacci number (Theorem \ref{the:abperFib}). For the proof we need two lemmas.
\begin{lemma}\label{lem:ratio}
For all $j > 2$ we have
\begin{equation*}
\frac{\|F_{j-1}(\phi-1)\|}{\|F_j(\phi-1)\|} = \phi \ \text{ and } \ \frac{\|F_{j-2}(\phi-1)\|}{\|F_j(\phi-1)\|} = 1 + \phi.
\end{equation*}
\end{lemma}
\begin{proof}
It is straightforward to verify that $\|F_1(\phi-1)\|/\|F_2(\phi-1)\| = \phi$. By applying induction and Equation
\eqref{eq:difference} we obtain that
\begin{align*}
\frac{\|F_{j-1}(\phi-1)\|}{\|F_j(\phi-1)\|} &= \frac{\phi F_j + F_{j-1}}{\phi F_{j-1} + F_{j-2}}
= \frac{\phi F_{j-1} + \phi F_{j-2} + F_{j-2} + F_{j-3}}{\phi F_{j-1} + F_{j-2}} \\
&= 1 + \frac{\|F_{j-1}(\phi-1)\|}{\|F_{j-2}(\phi-1)\|} = 1 + \frac{1}{\phi} = \phi.
\end{align*}
Similarly,
\begin{equation*}
\frac{\|F_{j-2}(\phi-1)\|}{\|F_j(\phi-1)\|} = 1 + \frac{\|F_{j-2}(\phi-1)\|}{\|F_{j-1}(\phi-1)\|} = 1 + \phi.
\end{equation*}
\end{proof}
For the next lemma we need the following corollary of the famous Three Distance Theorem; see, e.g., \cite{AlBe}.
\begin{proposition}\label{prp:tdt}
Let $\alpha = [0;a_1,a_2,\ldots]$ be an irrational number and $(n_k/m_k)$ be its sequence of convergents. If $k > 1$, then the
lengths of the $m_k$ subintervals $L_0(m_k-1)$, $L_1(m_k-1)$, $\ldots$, $L_{m_k-1}(m_k-1)$ of the torus take two
values: $\|m_{k-1}\alpha\|$ or $\|((a_k - 1)m_{k-1} + m_{k-2})\alpha\|$.
\end{proposition}
\begin{lemma}\label{lem:kmi_fibonacci}
For the Fibonacci word we have $k_{F_j}^{(F_j - 1)} = F_{j+1} + F_{j-1} - 3$ for all $j > 1$.
\end{lemma}
\begin{proof}
By Proposition \ref{prp:tdt} and Corollary \ref{cor:cor} the maximum size of the intervals of factors of length
$F_j - 1$ is $\|F_{j-2}(\phi-1)\|$. Therefore, Theorem \ref{theor:kmi}, Lemma \ref{lem:ratio} and Equation
\eqref{eq:difference} imply that
\begin{equation*}
k_{F_j}^{(F_j-1)} = \max\left( 1, \left\lfloor \phi F_j + F_{j-1} - 1 - \phi \right\rfloor \right).
\end{equation*}
Further, Equation \eqref{eq:difference} implies that
\begin{equation*}
\phi F_j = F_{j+1} + \frac{(-1)^j}{\phi F_j + F_{j-1}}.
\end{equation*}
Since $1/(\phi F_j + F_{j-1})$ is at most $2-\phi$, it follows that
\begin{equation*}
F_{j+1} - 2 \leq \phi F_j - \phi \leq F_{j+1} + 2 - 2\phi.
\end{equation*}
Since $3 < 2\phi < 4$, we have that $\left\lfloor \phi F_j - \phi \right\rfloor = F_{j+1} - 2$. The claim follows.
\end{proof}
\begin{theorem}\label{the:abperFib}
The minimum abelian period of any factor of the Fibonacci word is a Fibonacci number.
\end{theorem}
\begin{proof}
Let $w$ be a factor of the Fibonacci infinite word $f$, and suppose that $w$ has an abelian period $m>0$. We will show
that $w$ also has abelian period $F_{n}$, where $F_{n}$ is the largest Fibonacci number such that $F_n \leq m$. If $m = F_n$,
then there is nothing to prove, so we can suppose that $F_n < m < F_{n+1}$. In particular, we have that $n \geq 3$. We
will show that given a suitable occurrence of $w$ in $f$ there is an earlier occurrence of an abelian repetition $w'$
of period $F_n$ such that $w$ is a factor of $w'$. The conclusion then follows from Lemma \ref{lem:ap}.
Suppose that $w$ occurs in $f$ at position $i$. By Theorem \ref{theor:kmi} there is an abelian power of period $F_n$ of
length $F_n k_{F_n}^{(F_n - 1)}$ starting at position $i + j$ for some $j$ such that $0 \leq j \leq F_n - 1$. By
Proposition \ref{prop:mec} this abelian power can be extended to an abelian repetition with maximum head and tail
length $F_n - 1$, so we only need to ensure that this abelian repetition is long enough to have $w$ as a factor. Since
$w$ has length at most $m(k_m + 2) - 2$, we thus need to establish that
\begin{equation*}
m(k_m + 2) - 2 \leq F_n(k_{F_n}^{(F_n - 1)} + 1) - 1.
\end{equation*}
By Lemma \ref{lem:kmi_fibonacci} this inequality holds if and only if the inequality
\begin{equation}\label{eq:equiv1}
m(k_{m}+2) \leq F_n(F_{n+1} + F_{n-1} - 2) + 1
\end{equation}
is satisfied. The rest of the proof consists of showing that \eqref{eq:equiv1} holds.
First we derive the following upper bound on $m(k_m + 2)$:
\begin{equation}\label{eq:tech1}
m(k_{m}+2) < F_{n+1}(F_{n-1}+F_{n-3}+2).
\end{equation}
Let us first show that $\|m(\phi-1)\|>\|F_{n-2}(\phi-1)\|$. By Theorem \ref{theor:lang} we have
$$\|F_{n-2}(\phi-1)\|=\min_{i< F_{n-1}}\|i(\phi-1)\|.$$ Equation \eqref{eq:difference} implies that either
$\{F_n(\phi-1)\}<1/2$ and $\{F_{n+1}(\phi-1)\}>1/2$ or $\{F_n(\phi-1)\}>1/2$ and $\{F_{n+1}(\phi-1)\}<1/2$. Suppose
first that $\{m(\phi-1)\}<1/2$. If $\{F_n(\phi-1)\}<1/2$, then we have
\begin{eqnarray*}
\|m(\phi-1)\|&=&\|m(\phi-1)\|-\|F_{n}(\phi-1)\|+\|F_{n}(\phi-1)\| \\
&=& \|(m - F_{n})(\phi-1)\|+\|F_{n}(\phi-1)\| \\
&=& \|(F_{n}-m)(\phi-1)\|+\|F_{n}(\phi-1)\| \\
&\geq& \|F_{n-2}(\phi-1)\|+\|F_{n}(\phi-1)\| \\
&>& \|F_{n-2}(\phi-1)\|.
\end{eqnarray*}
If instead $\{F_n(\phi-1)\}>1/2$, then we can apply the same manipulation with $F_{n+1}$ in place of $F_n$. Indeed,
since the difference of $F_{n+1}$ and $F_n$ is $F_{n-1}$ and since $F_n<m<F_{n+1}$, we have that $F_{n+1} -m <F_{n-1}$,
so we can still apply Theorem \ref{theor:lang} to derive that $\|(F_{n+1}-m)(\phi-1)\| \geq \|F_{n-2}(\phi-1)\|$. The
case $\{m(\phi-1)\}>1/2$ is symmetric. Thus, we have shown that $\|m(\phi-1)\|>\|F_{n-2}(\phi-1)\|$.
Therefore, by Equation \eqref{eq:difference} we have $k_m < \phi F_{n-2} + F_{n-3}$. Again, by applying Equation
\eqref{eq:difference} as in the proof of Lemma \ref{lem:kmi_fibonacci}, we obtain that
$k_m \leq F_{n-1} + F_{n-3}$. As $m < F_{n+1}$, inequality \eqref{eq:tech1} follows.
By the inequality \eqref{eq:tech1}, in order to establish the inequality \eqref{eq:equiv1}, it is sufficient to
show that
\begin{equation*}
F_{n+1}(F_{n-1}+F_{n-3}+2) \leq F_n(F_{n+1} + F_{n-1} - 2).
\end{equation*}
This inequality, however, is easily seen to be true whenever $F_{n-1}+F_{n-3}+2 \leq F_n$, that is, when $n \geq 6$.
By a direct computation it can be seen that the above inequality holds also for $n = 5$. Suppose then that $n = 4$. We
proved above that $k_m \leq F_{n-1} + F_{n-3} = 4$. Plugging the estimates $k_m \leq 4$ and $m \leq 7$ into
\eqref{eq:equiv1} shows that the conclusion holds also in this case. Suppose finally that $n = 3$, that is, $m = 4$.
Now $k_m = 2$, and a direct substitution to the inequality \eqref{eq:equiv1} shows that the conclusion holds. This ends
the proof.
\end{proof}
\begin{corollary}\label{cor:prefix}
The minimum abelian period of any prefix of the Fibonacci word is a Fibonacci number.
\end{corollary}
\begin{remark}
Theorem \ref{the:abperFib} does not generalize to hold for every Sturmian word. Consider for instance Sturmian words
of angle $\alpha = [0;\overline{2,1}] = (\sqrt{3}-1)/2$. It can be verified that the factor
$$\sa{aabab} \cdot \sa{aabaabaababaabaabaababaabaabaa} \cdot \sa{babaa}$$ starting at position $35$ of
$s_{\alpha,\alpha}$ is an abelian repetition of minimum period $6$ with maximum head and tail length. However, the
number $6$ is not a denominator of a convergent of $\alpha$ since the sequence of convergents starts
$0,1/2,1/3,3/8,\ldots$
\end{remark}
Recall that the (finite) Fibonacci words are defined by $f_{0}=\sa{b}$, $f_{1}=\sa{a}$ and $f_{j+1}=f_{j}f_{j-1}$ for every $j \geq 1$. Hence, for every $j\geq 0$, we have $|f_{j}|=F_{j}$.
From Corollary \ref{cor:prefix}, we know that every finite Fibonacci word has an abelian period that is a Fibonacci number. The following theorem, stated without proof in \cite{FiLaLeLeMiPG13}, provides an explicit formula for the minimum abelian period of the finite Fibonacci words.
\begin{theorem}
For $j \geq 3$, the minimum abelian period of the word $f_j$ is the $n$th Fibonacci number $F_n$, where
$$n =
\begin{cases}
\lfloor{j/2}\rfloor & \mbox{ if $j \equiv 0, 1, 2 \pmod{4}$;}\\
\lfloor{j/2}\rfloor +1& \mbox{ if $j \equiv 3 \pmod{4}$.}
\end{cases}
$$
\end{theorem}
\begin{proof}
By Corollary \ref{cor:prefix} and Theorem \ref{pro:longest_prefix}, it is sufficient to find the smallest integer
$n$ such that $\textit{lp}(F_{n})$ is greater than or equal to $F_{j}$. In other words, we need to find the smallest integer
$n$ such that
\begin{equation*}
F_n \left(F_{n+1} + F_{n-1} + \gamma \right) - 2 \geq F_j
\end{equation*}
where $\gamma$ equals $1$ if $n$ is even and $0$ if $n$ is odd.
We need the following well-known formula:
\begin{equation}\label{eq:fibo_2j}
F_j(F_{j+1} + F_{j-1}) = F_{2j+1}.
\end{equation}
This identity follows easily from the matrix identity
\begin{equation*}
\begin{pmatrix}
1 &1 \\
1 &0
\end{pmatrix}^j
= \begin{pmatrix}
F_j & F_{j-1} \\
F_{j-1} & F_{j-2}
\end{pmatrix}
\end{equation*}
and the fact that $A^n A^m = A^{n+m}$ for a matrix $A$.
It is now straightforward to verify the claim using Equation \eqref{eq:fibo_2j}. We will prove the claim in the case
that $j \equiv 2 \pmod{4}$; the other cases are similar. Choose $n = \lfloor j/2 \rfloor$. Now $n$ is odd and $2n+1 = j+1$, so
by Equation \eqref{eq:fibo_2j} we need to verify that $F_{j+1} - 2 \geq F_j$, which is clearly true. Choose then
$n = \lfloor j/2 \rfloor - 1$. Then $n$ is even and $2n + 1 = j - 1$, so $F_{2n+1} + F_n - 2 \geq F_j$ if and only if
$F_{\lfloor j/2 \rfloor - 1} - 2 \geq F_{j-2}$. This latter inequality, however, cannot hold as $F_{4k} \geq F_{2k}$
for all $k \geq 0$. This shows that the value $\lfloor j/2 \rfloor$ is minimal in this case.
\end{proof}
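The theorem can be cross-checked by brute force for small $j$. The Python sketch below is ours, not the paper's: it generates the finite Fibonacci words, tests abelian periods via the head/inner-blocks/tail decomposition used above (head and tail Parikh vectors contained in the common Parikh vector of the inner blocks), and compares the minimum abelian period of $f_3,\ldots,f_{12}$ against the formula:

```python
from collections import Counter

def fib(n):
    # Fibonacci numbers with the paper's indexing F_0 = F_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_word(j):
    # f_0 = 'b', f_1 = 'a', f_{j+1} = f_j f_{j-1}
    words = ['b', 'a']
    while len(words) <= j:
        words.append(words[-1] + words[-2])
    return words[j]

def is_abelian_period(w, m):
    # m is an abelian period of w if w = u_0 u_1 ... u_{k-1} u_k, where the
    # inner blocks u_1, ..., u_{k-1} have length m and equal Parikh vectors,
    # and the Parikh vectors of the head u_0 (|u_0| < m) and the tail u_k
    # (|u_k| < m) are contained in that common vector
    if m >= len(w):
        return True
    for head in range(m):
        k = (len(w) - head) // m
        if k == 0:
            continue
        blocks = [w[head + i * m: head + (i + 1) * m] for i in range(k)]
        parikh = Counter(blocks[0])
        if any(Counter(b) != parikh for b in blocks[1:]):
            continue
        rest = Counter(w[:head]) | Counter(w[head + k * m:])  # max of head/tail counts
        if all(rest[x] <= parikh[x] for x in rest):
            return True
    return False

def min_abelian_period(w):
    return next(m for m in range(1, len(w) + 1) if is_abelian_period(w, m))

# the minimum abelian period of f_j is F_n with n = floor(j/2),
# plus one when j = 3 (mod 4)
for j in range(3, 13):
    n = j // 2 + (1 if j % 4 == 3 else 0)
    assert min_abelian_period(fib_word(j)) == fib(n)
```

For instance, the decomposition $f_4 = \sa{a}\cdot\sa{ba}\cdot\sa{ab}$ of the example below is found with head length $1$ and $m=2$.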
\begin{example}
The minimum abelian period of the word $f_{4}=\sa{abaab}$ is $2=F_{2}=F_{\lfloor{4/2}\rfloor}$, since $f_{4}=\sa{a}\cdot \sa{ba} \cdot \sa{ab}$; the minimum abelian period of $f_{5}=\sa{a}\cdot\sa{ba}\cdot\sa{ab}\cdot\sa{ab}\cdot \sa{a}$ is $2=F_{2}=F_{\lfloor{5/2}\rfloor}$; the minimum abelian period of $f_{6}=\sa{aba}\cdot\sa{aba}\cdot\sa{baa}\cdot\sa{baa}\cdot\sa{b}$ is $3=F_{3}=F_{\lfloor{6/2}\rfloor}$; the minimum abelian period of $f_{7}=\sa{abaab}\cdot\sa{abaab}\cdot\sa{aabab}\cdot\sa{aabab}\cdot\sa{a}$ is $5=F_{4}=F_{1+\lfloor{7/2}\rfloor}$.
In Table~\ref{tab:val} we report the minimum abelian periods of the first Fibonacci words.
\end{example}
\begin{table}
\centering
\begin{small}
\begin{raggedright}
\begin{tabular}{c *{30}{@{\hspace{2.1mm}}l}}
$j$\hspace{2mm} & 3\hspace{1ex} & 4\hspace{1ex} & 5\hspace{1ex} & 6\hspace{1ex} & 7\hspace{1ex} & 8\hspace{1ex} & 9\hspace{1ex} & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\
\hline \rule[-6pt]{0pt}{22pt}
abelian period~of $f_{j}$\hspace{2mm} & $F_{2}$ & $F_{2}$ & $F_{2}$ & $F_{3}$ & $F_{4}$ & $F_{4}$ & $F_{4}$ & $F_{5}$ & $F_{6}$ & $F_{6}$ & $F_{6}$ & $F_{7}$ & $F_{8}$ & $F_{8}$\\
\hline \rule[-2pt]{0pt}{12pt}
\end{tabular}
\end{raggedright}\caption{\label{tab:val}The minimum abelian periods of the Fibonacci words $f_{j}$ for $j=3,\ldots, 16$.}
\end{small}
\end{table}
\subsection*{Acknowledgements}
This work has been partially supported by Italian MIUR Project PRIN~2010LYA9RH, ``Automi e Linguaggi Formali: Aspetti Matematici e Applicativi''.
% https://arxiv.org/abs/1502.06635
\title{Small random instances of the stable roommates problem}
\begin{abstract}
Let $p_n$ denote the probability that a random instance of the stable roommates problem of size $n$ admits a solution. We derive an explicit formula for $p_n$ and compute exact values of $p_n$ for $n\leq 12$.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Matching under preferences is a topic of great practical importance,
deep mathematical structure, and elegant algorithmics
\cite{manlove:book,gusfield:irving:book}. A paradigmatic example is
the stable roommates problem \cite{gale:shapley:62}. Consider an even
number $n$ of participants. Each of the participants ranks all the
others in strict order of preference. A matching is a set of $n/2$
disjoint pairs of participants. A matching is stable if there is no
pair of unmatched participants who both prefer each other to their
partner in the matching. Such a pair is said to block the matching.
The stable roommates problem is to find a stable matching. The name
originates from the problem of assigning students to the double bedrooms
of a dormitory. Another application is the formation of cockpit crews
from a pool of pilots.
An instance of the stable roommates problem is defined by a preference
table, in which each participant ranks all other $n-1$ participants,
most preferred first. For technical reasons we will assume that each
participant puts himself at the very end of his preference list. Here
are two examples for $n=4$:
\begin{equation}
\label{eq:examples}
\text{(A)}\quad
\begin{array} {rrrrr}
1: & 4 & \mathbf{2} & 3 & 1\\
2: & 3 & 4 & \mathbf{1} & 2\\
3: & 1 & \mathbf{4} & 2 & 3\\
4: & \mathbf{3} & 2 & 1 & 4
\end{array}
\qquad\qquad
\text{(B)}\quad
\begin{array} {rrrrr}
1: & 3 & 2 & 4 & 1\\
2: & 1 & 3 & 4 & 2\\
3: & 2 & 1 & 4 & 3\\
4: & 1 & 2 & 3 & 4
\end{array}
\end{equation}
In (A), the marked matching $(1,2)(3,4)$ is stable. In (B), there is
no stable matching: whoever is matched with $4$ can always form a
blocking pair with someone else.
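Both claims are easy to verify mechanically. The sketch below (variable and helper names are ours) encodes the two preference tables and tests each of the three perfect matchings of $\{1,2,3,4\}$ for blocking pairs:

```python
# preference tables (A) and (B): each participant ranks the others,
# most preferred first (the participant himself is omitted here)
prefs_A = {1: [4, 2, 3], 2: [3, 4, 1], 3: [1, 4, 2], 4: [3, 2, 1]}
prefs_B = {1: [3, 2, 4], 2: [1, 3, 4], 3: [2, 1, 4], 4: [1, 2, 3]}

def all_matchings(people):
    # all perfect matchings of an even-sized list of participants
    if not people:
        yield []
        return
    first, rest = people[0], people[1:]
    for k, other in enumerate(rest):
        for m in all_matchings(rest[:k] + rest[k + 1:]):
            yield [(first, other)] + m

def stable_matchings(prefs):
    rank = {i: {j: r for r, j in enumerate(p)} for i, p in prefs.items()}
    result = []
    for matching in all_matchings(sorted(prefs)):
        partner = {}
        for i, j in matching:
            partner[i], partner[j] = j, i
        # (i, j) blocks if both prefer each other to their current partners
        if not any(rank[i][j] < rank[i][partner[i]] and
                   rank[j][i] < rank[j][partner[j]]
                   for i in prefs for j in prefs
                   if i < j and partner[i] != j):
            result.append(matching)
    return result

# (A) has the stable matching (1,2)(3,4); (B) has none
assert [(1, 2), (3, 4)] in stable_matchings(prefs_A)
assert stable_matchings(prefs_B) == []
```

Running the check on (A) in fact finds a second stable matching, $(1,3)(2,4)$, which matches the discussion of stable permutations below.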
Example (B) illustrates the fact that not all instances of the
stable roommates problem have a solution. Let $p_n$ denote the
probability that a random instance, chosen uniformly from all possible
instances of size $n$, admits a solution. Our examples show that $0
< p_4 <1$. The exact value is $p_4=26/27$. It has been computed by
Pittel \cite{pittel:93a} more than 20 years ago. No other values of
$p_n$ are known exactly. Numerical simulations
\cite{mertens:roommates} suggest that $p_n$ is a monotonically
decreasing function of $n$ that asymptotically decays like $n^{-1/4}$.
In this paper we derive an explicit formula for $p_n$ that we use
to compute exact values of $p_n$ for $n\leq 12$. And we
discuss a generalization of this approach for odd
values of $n$.
\section{Stable Permutations}
\label{sec:stable-permutations}
A matching of size $n$ can be interpreted as a permutation $\pi$ of
$\{1,\ldots,n\}$ that is completely composed of 2-cycles. An obvious
generalization is to allow arbitrary permutations $\pi$, but for that
one needs to extend the definition of stability. A permutation $\pi$
is called stable if it satisfies the following two conditions:
\begin{subequations}
\begin{equation}
\label{eq:stable-partition-1}
\mbox{$\forall i: i$ does not prefer $\pi(i)$ to $\pi^{-1}(i)$}
\end{equation}
\begin{equation}
\label{eq:stable-partition-2}
\mbox{$i$ prefers $j$ to $\pi(i) \Rightarrow j $ prefers $\pi(j)$ to $i$}
\end{equation}
\end{subequations}
This definition includes permutations with fixed points. This is the
reason why we've added each participant to the very end of his own
preference list. But note that \eqref{eq:stable-partition-2} implies
that a stable permutation has at most one fixed point.
For permutations composed of $2$-cycles (matchings) condition
\eqref{eq:stable-partition-1} is trivially satisfied and condition
\eqref{eq:stable-partition-2} reduces to the usual ``no blocking
pairs'' condition. Condition \eqref{eq:stable-partition-1} forces
each cycle of length $\geq 3$ to have a monotonic rank ordering: every
member $i$ prefers his predecessor $\pi^{-1}(i)$ to his successor
$\pi(i)$, and condition \eqref{eq:stable-partition-2} prevents any
member of the cycle from leaving the cycle.
The significance of stable permutations for the stable roommates
problem arises from the following facts, proven by Tan \cite{tan:91}:
\begin{enumerate}
\item Each instance of the stable roommates problem admits at least
one stable permutation.
\item \label{fact2} If $\pi$ is a stable permutation for a roommates
instance that contains a cycle $C=(v_1,v_2,\ldots,v_{2m})$ of even
length, we can get two different stable permutations by replacing $C$ by the $2$-cycles
$(v_1,v_2),\ldots,(v_{2m-1},v_{2m})$ or by
$(v_2,v_3),\ldots,(v_{2m},v_{1})$.
\item If $C$ is an odd-length cycle in \emph{one} stable permutation
for a given roommates instance, then $C$ is a cycle in \emph{all}
stable permutations for that instance.
\end{enumerate}
These facts establish the \emph{cycle type} of stable permutations as
a certificate for the existence of a stable matching: An instance of
the stable roommates problem is solvable if and only if the instance
admits a stable permutation with no odd cycles.
Consider again the two examples from the previous section. One can
easily check that the permutation $(1,2,3)(4)$ is a stable permutation
for $(B)$. Since it contains the odd cycle $(1,2,3)$, (B) admits no
stable matching. The permutation $(1,3,4,2)$ is stable for
(A). According to fact \ref{fact2}, its $4$-cycle can be replaced by
$(1,3)\,(4,2)$ or by $(3,4)\,(1,2)$, which are in fact both stable
matchings.
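Conditions \eqref{eq:stable-partition-1} and \eqref{eq:stable-partition-2} translate directly into code. A minimal sketch (helper names ours; condition (2) is checked in its contrapositive ``no blocking pair'' form, so that equal alternatives, which can only arise at a fixed point, are treated as non-blocking):

```python
# preference table (B), with each participant put at the end of his own list
prefs = {1: [3, 2, 4, 1], 2: [1, 3, 4, 2], 3: [2, 1, 4, 3], 4: [1, 2, 3, 4]}
rank = {i: {j: r for r, j in enumerate(p)} for i, p in prefs.items()}

def is_stable_permutation(pi):
    inv = {v: k for k, v in pi.items()}
    # condition (1): no i strictly prefers its successor pi(i) to its
    # predecessor pi^{-1}(i)
    if any(rank[i][pi[i]] < rank[i][inv[i]] for i in pi):
        return False
    # condition (2), contrapositive: no pair (i, j) such that i strictly
    # prefers j to pi(i) AND j strictly prefers i to pi(j)
    return not any(rank[i][j] < rank[i][pi[i]] and rank[j][i] < rank[j][pi[j]]
                   for i in pi for j in pi if i != j)

# (1,2,3)(4) is stable for (B) ...
assert is_stable_permutation({1: 2, 2: 3, 3: 1, 4: 4})
# ... while none of the three matchings (permutations of 2-cycles) is
assert not any(is_stable_permutation(m) for m in
               [{1: 2, 2: 1, 3: 4, 4: 3},
                {1: 3, 3: 1, 2: 4, 4: 2},
                {1: 4, 4: 1, 2: 3, 3: 2}])
```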
\section{A Formula for $\mathbf{p_n}$}
\label{sec:theory}
The facts proven by Tan allow us to derive an explicit formula for the
probability $p_n$. The underlying ideas have already been
discussed more or less in \cite{pittel:93a}, but the formulas
\eqref{eq:Pn-general} and \eqref{eq:complement-Pn-general} haven't
been published before. We start with an integral representation for
$P(\pi)$, the probability that a permutation $\pi$ is stable.
\begin{proposition} \label{the:PnPi}
Let $\pi$ be a permutation of $\{1,\ldots,n\}$ and let $F_\pi =
\{i:i=\pi(i)\}$ denote the fixed points and $M_\pi = \{i:
\pi(i)=\pi^{-1}(i)\neq i\}$ the elements in $2$-cycles of $\pi$. The
probability that $\pi$ is a stable permutation for a random instance
of the stable roommates problem is given by
\begin{equation}
\label{eq:PnPi}
P(\pi) = \int_0^1 \mathrm{d}^nx \, \prod_{(i, j> i)\not\in D_\pi} (1-x_jx_i)
\prod_{i\not\in M_\pi \cup F_\pi} x_i \prod_{i\in F_\pi}\delta(x_i-1)\,,
\end{equation}
where integration is over the $n$-dimensional unit cube and
\begin{equation}
\label{eq:def-Dpi}
D_\pi = \{(i,j) : i\neq j\,,\,\,i = \pi(j) \vee j = \pi(i) \}
\end{equation}
is the set of pairs of elements that are cyclic neighbors in $\pi$.
\end{proposition}
\begin{proof}
A random instance of the stable roommates problem can be generated as
follows: Introduce an $n\times (n-1)$ array of independent random variables
$X_{ij}$ $(1 \leq i\neq j\leq n)$, each uniformly distributed in
$[0,1]$. Each agent $i$ ranks the agents $j\neq i$ on his preference
list in increasing order of the variables $X_{ij}$. Obviously, such an
ordering is uniform for every $i$, and the orderings by different
members are independent. The fact that each agent is at the very end
of his own preference list is taken into account by adding variables
$X_{ii}=1$ to the set of random variables.
Let $P(\pi|x,y)$ denote the conditional probability that the
permutation $\pi$ is stable given $X_{i\pi(i)}=x_i$ and
$X_{i\pi^{-1}(i)}=y_i$, and let $F_\pi = \{i:i=\pi(i)\}$ and
$M_\pi = \{i: \pi(i)=\pi^{-1}(i)\neq i\}$ denote the fixed
points and the elements in $2$-cycles of $\pi$. Then \eqref{eq:stable-partition-1}
tells us
\begin{equation}
\label{eq:integrand-1}
P(\pi|x,y) \propto \prod_{i\not\in M_\pi\cup F_\pi}
\Theta(x_i-y_i)\,\prod_{i \in M_\pi\cup
F_\pi}\delta(x_i-y_i)
\end{equation}
where $\Theta$ is the step function
\begin{displaymath}
\Theta(z) = \begin{cases}
1 & z \geq 0 \\
0 & z < 0
\end{cases}
\end{displaymath}
and $\delta(z)$ is the Dirac delta function.
The second condition \eqref{eq:stable-partition-2} is violated if $X_{ij} < x_i$ and
$X_{ji} < x_j$ for some $(i,j)\not\in D_\pi$. This happens with
probability $x_ix_j$, hence
\begin{equation}
\label{eq:integrand-2}
P(\pi|x,y) \propto \prod_{(i, j>i)\not\in D_\pi} (1-x_jx_i)\,,
\end{equation}
which does not depend on $y$.
Integrating \eqref{eq:integrand-1} over $y_i$ gives a factor
$x_i$ if $i$ is an element of a cycle of length three or more, and a factor
$1$ otherwise. Adding the product $\prod_{i\in F_\pi} \delta(x_i-1)$
to ensure the constraints $X_{ii}=1$ finally allows us to integrate
over the $x_i$'s to obtain \eqref{eq:PnPi}.
\end{proof}
Note that \eqref{eq:PnPi} differs slightly from the integral
representation in \cite{pittel:93a}: Our integral is valid for any
permutation $\pi$. If $\pi$ contains more
than one fixed point, the integrand vanishes, since the
$\delta$-functions force at least one of the factors in the product
$\prod (1-x_ix_j)$ to be zero; hence $P(\pi)=0$, as it should.
Obviously $P(\pi)$ depends on $\pi$ only through the cycle type of
$\pi$. Let $a_k$ denote the number of cycles of length $k$ in $\pi$.
We use the notation $\mathbf{a}=[1^{a_1}, 2^{a_2},\ldots]$ to
denote the cycle type, including only those terms with
$a_k>0$. For $n=4$, the only
non-zero integrals are
\begin{subequations}
\label{eq:p4-integrals}
\begin{equation}
\label{eq:Pa-4-02}
P([2^2]) = \int_0^1 \mathrm{d}^4x \left( 1-x_{{1}}x_{{3}} \right) \left( 1-x_{{1}}x_{{4}} \right)
\left( 1-x_{{2}}x_{{3}} \right) \left( 1-x_{{2}}x_{{4}} \right) = \frac{233}{648}
\end{equation}
\begin{equation}
\label{eq:Pa-4-0001}
P([4^1]) = \int_0^1 \mathrm{d}^4x \left( 1-x_{{1}}x_{{3}} \right) \left( 1-x_{{2}}x_{{4}} \right) x_{{1}}x_{{2}}x_{{3}}x_{{4}} = \frac{25}{1296}
\end{equation}
\begin{equation}
\label{eq:Pa-4-101}
P([1^1\,3^1]) = \int_0^1 \mathrm{d}^3x\,(1-x_1)(1-x_2)(1-x_3)x_1x_2x_3 = \frac{1}{216}\,.
\end{equation}
\end{subequations}
Note that in the last integral, we have already done the trivial integration over $\delta(x_4-1)$.
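Since the integrands are polynomials with rational coefficients, the integrals \eqref{eq:p4-integrals} can be reproduced exactly by expanding and integrating term by term, using $\int_0^1 x^b\,\mathrm{d}x = 1/(b+1)$ in each variable. A sketch in Python (all helper names are ours):

```python
from fractions import Fraction
from collections import defaultdict

def poly_mul(p, q):
    # multiply polynomials stored as {exponent tuple: rational coefficient}
    r = defaultdict(Fraction)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def integrate_cube(p):
    # integrate over the unit cube: the monomial x^b contributes prod_i 1/(b_i + 1)
    total = Fraction(0)
    for e, c in p.items():
        for b in e:
            c /= b + 1
        total += c
    return total

def poly_product(factors, n):
    p = {(0,) * n: Fraction(1)}
    for f in factors:
        p = poly_mul(p, f)
    return p

def one_minus_xx(i, j, n):
    # the factor (1 - x_i x_j), 1-based indices
    e = [0] * n
    e[i - 1] += 1
    e[j - 1] += 1
    return {(0,) * n: Fraction(1), tuple(e): Fraction(-1)}

def x_factor(i, n):
    # the factor x_i
    e = [0] * n
    e[i - 1] = 1
    return {tuple(e): Fraction(1)}

def one_minus_x(i, n):
    # the factor (1 - x_i)
    e = [0] * n
    e[i - 1] = 1
    return {(0,) * n: Fraction(1), tuple(e): Fraction(-1)}

# P([2^2]): matching (1,2)(3,4), non-neighbor pairs (1,3),(1,4),(2,3),(2,4)
P22 = integrate_cube(poly_product(
    [one_minus_xx(i, j, 4) for i, j in [(1, 3), (1, 4), (2, 3), (2, 4)]], 4))

# P([4^1]): 4-cycle (1,2,3,4), non-neighbor pairs (1,3),(2,4), all x_i present
P41 = integrate_cube(poly_product(
    [one_minus_xx(1, 3, 4), one_minus_xx(2, 4, 4)]
    + [x_factor(i, 4) for i in (1, 2, 3, 4)], 4))

# P([1^1 3^1]) after integrating out delta(x_4 - 1): (1-x1)(1-x2)(1-x3) x1 x2 x3
P113 = integrate_cube(poly_product(
    [one_minus_x(i, 3) for i in (1, 2, 3)]
    + [x_factor(i, 3) for i in (1, 2, 3)], 3))

assert P22 == Fraction(233, 648)
assert P41 == Fraction(25, 1296)
assert P113 == Fraction(1, 216)
assert 3 * P22 - 6 * P41 == Fraction(26, 27)  # p_4
assert 8 * P113 == Fraction(1, 27)            # 1 - p_4
```

The last two assertions anticipate the combinations \eqref{eq:P4} derived in Section 4; exact rational arithmetic keeps the results free of rounding error.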
\begin{proposition}
\label{the:Pn-general}
Let $p_n$ ($n$ even) be the probability that a random instance of the stable roommates
problem has a solution. Then
\begin{equation}
\label{eq:Pn-general}
p_n = \sum_{\mathbf{a}\in\mathcal{E}_n} (-1)^{e(\mathbf{a})} c(\mathbf{a}) P(\mathbf{a})\,,
\end{equation}
where $\mathcal{E}_n$ is the set of all cycle types of size $n$ with
even cycles only. The exponent $e(\mathbf{a})$ is the number of
even cycles of length $\geq 4$ in $\mathbf{a}$,
$e(\mathbf{a})=\sum_{k=4,6,\ldots} a_k$. The factor
$c(\mathbf{a})$ is the number of permutations with cycle type
$\mathbf{a}$,
\begin{equation}
\label{eq:num-cycles}
c(\mathbf{a}) = \frac{n!}{\prod_k a_k!\, k^{a_k}}\,.
\end{equation}
\end{proposition}
\begin{proof}
A matching of size $n$ has cycle structure $\mathbf{a}=[2^{n/2}]$,
and there are $(n-1)!!$ matchings of size $n$. Boole's inequality
(aka union bound) then tells us that
\begin{equation}
\label{eq:boole-1}
p_n \leq (n-1)!!\, P([2^{n/2}])\,,
\end{equation}
where equality holds if and only if the events that different matchings
are stable are mutually exclusive. This is not true in our case. Fact \ref{fact2}
from above tells us that stable matchings may come in pairs. Every stable
permutation that consists of exactly one even cycle of length $z\geq 4$ and
$(n-z)/2$ cycles of size $2$ corresponds to two stable matchings. These pairs
have been counted twice in the sum in \eqref{eq:boole-1}.
The number of permutations of cycle type $[2^{(n-z)/2}\,z^1]$ is
$n!\,\big((n-z)!!\,z\big)^{-1}$ and we get
\begin{equation}
\label{eq:boole-2}
p_n \geq (n-1)!!\, P([2^{n/2}]) - \sum_{z=4,6,\ldots}^n \frac{n!}{(n-z)!!\,z} P([2^{(n-z)/2}\,z^1])\,.
\end{equation}
The $\geq$ is again a consequence of Boole's inequality. Equality in
\eqref{eq:boole-2} would only hold if the corresponding events were
mutually exclusive, but we know from fact
\ref{fact2} that stable pairs again may come in pairs: we have a quartet of
stable permutations for each permutation that is composed of precisely
two cycles of length $\geq 4$ and $2$-cycles. Again we can express
the corrections by integrals $P(\mathbf{a})$ and a combinatorial prefactor. Iterating this reasoning
(which is of course the well known inclusion-exclusion principle) yields
\eqref{eq:Pn-general}.
The formula \eqref{eq:num-cycles} for the number of permutations
of a given cycle type is well known. Yet we will give a short proof
for completeness. Write down the cycle structure
in terms of $a_k$ pairs of parentheses enclosing $k$ dots, like
\begin{equation}
\label{eq:cycle-structure}
(\cdot \cdot \cdot) (\cdot \cdot \cdot) (\cdot \cdot) (\cdot)
\end{equation}
for $n=9$ and $\mathbf{a}=[1^1,\,2^1,\,3^2]$. Now imagine that the $n$ dots
are replaced left to right with a permutation of
$\{1,\ldots,n\}$. Then the parentheses induce the desired cycle
structure on this permutation. There are $n!$ permutations, but some
of them result in the same ``cycled'' permutations. First, a cycle of
length $k$ can have $k$ different leftmost values in $(\cdots)$, which
gives a factor $k^{a_k}$ of overcounting. And pairs of parentheses
that hold the same number of dots can be arranged in any order, which
gives a factor $a_k!$ of overcounting. This yields \eqref{eq:num-cycles}.
\end{proof}
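Formula \eqref{eq:num-cycles} is easy to confirm by enumerating all permutations of a small set and tallying cycle types; the sketch below (helper names ours) does so for $n=6$:

```python
from itertools import permutations
from math import factorial
from collections import Counter

def cycle_type(perm):
    # perm[i] is the image of i; returns the sorted tuple of cycle lengths
    seen, lengths = set(), []
    for start in range(len(perm)):
        if start in seen:
            continue
        length, i = 0, start
        while i not in seen:
            seen.add(i)
            i = perm[i]
            length += 1
        lengths.append(length)
    return tuple(sorted(lengths))

def c_of(a, n):
    # Eq. (eq:num-cycles): n! / prod_k (a_k! * k^{a_k});
    # a is given as a tuple of cycle lengths
    result = factorial(n)
    for k, ak in Counter(a).items():
        result //= factorial(ak) * k ** ak
    return result

n = 6
tally = Counter(cycle_type(p) for p in permutations(range(n)))
assert sum(tally.values()) == factorial(n)
for a, observed in tally.items():
    assert observed == c_of(a, n)
```

For $n=4$ the formula gives $c([2^2])=3$, $c([4^1])=6$ and $c([1^1\,3^1])=8$, the prefactors appearing in \eqref{eq:P4} below.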
\begin{corollary}
\label{the:complement-Pn-general}
Let $\mathcal{O}_n$ denote the set of all cycle types of size $n$ that contain
at most one fixed point and at least one odd cycle. Then
\begin{equation}
\label{eq:complement-Pn-general}
1-p_n = \sum_{\mathbf{a}\in\mathcal{O}_n} (-1)^{e(\mathbf{a})} c(\mathbf{a}) P(\mathbf{a})\,,
\end{equation}
\end{corollary}
\begin{proof}
Since $P(\mathbf{a})=0$ if $\mathbf{a}$ has more than one fixed
point, we can extend the sum to run over all cycle types with at
least one odd cycle. Then the right hand side of
\eqref{eq:complement-Pn-general} is the probability that a random
instance of the stable roommates problem has a stable permutation
with at least one odd cycle. But this equals the probability that a
random instance of the stable roommates problem has no solution.
\end{proof}
\section{Evaluation of $p_n$}
We already know the values of the integrals $P(\mathbf{a})$ for $n=4$,
see \eqref{eq:p4-integrals}.
When we insert these values into \eqref{eq:Pn-general} or \eqref{eq:complement-Pn-general} we get
\begin{equation}
\label{eq:P4}
\begin{aligned}
p_4 &= 3\, P([2^2]) - 6\, P([4^1]) = {\frac {26}{27}} =
0.962962\ldots\,\\
1-p_4 &= 8\, P([1^1,3^1]) = \frac{1}{27}\,,
\end{aligned}
\end{equation}
the value computed by Pittel in 1993 \cite{pittel:93a}. It seems
straightforward to compute $p_n$ for larger values of $n$, since all
we need to do is to evaluate and sum the corresponding integrals $P(\mathbf{a})$.
This is not easy, however. Pittel
wrote ``For $n = 6$, the computations by hand become
considerably lengthier and we gave up after a couple of half-hearted attempts.''
The computations become ``lengthier'' for two reasons: the number of
integrals in \eqref{eq:Pn-general} and
\eqref{eq:complement-Pn-general} increase with $n$, and the evaluation
of each individual integral gets harder. Let us first look at the
number of integrals:
\begin{lemma}
Let $p(n)$ denote the number of unordered partitions of $n$, and let
$n$ be even. Then
\begin{subequations}
\begin{align}
|\mathcal{E}_n| &= p\left(\frac{n}{2}\right) \label{eq:En}\\
|\mathcal{O}_n| &= p(n)-p(n-2)-p\left(\frac{n}{2}\right)\label{eq:On}\,.
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
From $\sum_k k a_k=n$, or from glancing at
\eqref{eq:cycle-structure}, it is obvious that there is a one-to-one
correspondence between the set of cycle types of size $n$ and the
set of integer partitions of $n$.
Every cycle type $\mathbf{a}\in\mathcal{E}_n$ corresponds to a
partition of $n$ into even numbers and vice versa. Every partition
of $n$ into even numbers corresponds to a unique partition of $n/2$
and vice versa---simply divide or multiply all parts of the
partition by two. This proves \eqref{eq:En}.
The number of all cycle types is $p(n)$, and the number of all cycle
types that contain at least two fixed points is $p(n-2)$. Hence the
number of cycle types that contain at most one fixed point is
$p(n)-p(n-2)$. For $|\mathcal{O}_n|$ we also need to subtract the number
of cycle types with even cycles only, which is $p(n/2)$. This proves \eqref{eq:On}.
\end{proof}
There is no closed formula for the partition numbers $p(n)$, but they
are known for all $n \leq 10\,000$ \cite{oeisA000041}. And
we need $p(n)$ only for small values of $n$ to get
\begin{center}
\begin{tabular}{crrrrrrrr}
$n$ & 4 & 6 & 8 & 10 & 12 & 14 & 16 & 18 \\[1ex]
$|\mathcal{E}_n|$ & 2 & 3 & 5 & 7 & 11 & 15 & 22 & 30 \\
$|\mathcal{O}_n|$ & 1 & 3 & 6 & 13 & 24 & 43 & 74 & 124
\end{tabular}
\end{center}
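The table entries can be reproduced from the partition numbers; a minimal sketch in Python (the dynamic program and names are ours, not from the paper):

```python
def partitions(n):
    # p(k) for k = 0..n via a simple dynamic program over allowed part sizes
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for k in range(part, n + 1):
            p[k] += p[k - part]
    return p

p = partitions(18)

# |E_n| = p(n/2) and |O_n| = p(n) - p(n-2) - p(n/2), cf. the table above
expected_E = {4: 2, 6: 3, 8: 5, 10: 7, 12: 11, 14: 15, 16: 22, 18: 30}
expected_O = {4: 1, 6: 3, 8: 6, 10: 13, 12: 24, 14: 43, 16: 74, 18: 124}
for n in range(4, 19, 2):
    assert p[n // 2] == expected_E[n]
    assert p[n] - p[n - 2] - p[n // 2] == expected_O[n]
```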
In this regime of $n$, the number of integrals is no problem. So let
us turn our attention to the individual integrals.
When we expand the product in \eqref{eq:PnPi}, we get a sum of
easy-to-integrate terms of the form $x_1^{b_1}\cdots x_n^{b_n}$,
but there are too many terms to be integrated by hand.
\begin{lemma}
A full expansion of the integrand in \eqref{eq:PnPi} yields
$2^{f(\mathbf{a})}$ terms, where
\begin{equation}
\label{eq:factors}
f(\mathbf{a}) = \frac{1}{2} n (n-3) + a_1 + a_2\,.
\end{equation}
\end{lemma}
\begin{proof}
If we expand the integrand, each factor in the product
\begin{equation}
\label{eq:culprit}
\prod_{\substack{i<j \\ (i,j)\not\in D_\pi}}^n (1-x_i x_j)
\end{equation}
doubles the number of terms. Hence we need to show that
\eqref{eq:factors} is the number of factors in this product.
Think of the $n$ variables $x_i$ as the vertices of a graph $G$.
Each factor $(1-x_ix_j)$ in \eqref{eq:culprit} corresponds to an edge
of $G$. Without the constraint $(i,j)\not\in D_\pi$, $G$ is the
complete graph with $\frac{1}{2}n(n-1)$ edges. Each cycle of length
$k\geq 3$ in $\mathbf{a}$ corresponds to a cycle in $G$ with $k$ edges
that are removed from the complete graph. Each cycle of length $2$
corresponds to an edge that is also removed. This gets us
\begin{displaymath}
f(\mathbf{a}) = \frac{1}{2}n(n-1)-\sum_{k\geq 3} k a_k - a_2 =
\frac{1}{2}n(n-1)- \left(\sum_{k} k a_k - 2a_2 -a_1\right) -a_2
\end{displaymath}
and \eqref{eq:factors} follows from $\sum_k k a_k = n$.
\end{proof}
The maximum number of terms arises for pure matchings, i.e., for
$a_2=n/2$ and $a_1=0$. It reads $2^4$, $2^{12}$, $2^{24}$, $2^{40}$
and $2^{60}$ for $n=4,6,8,10$ and $12$. Hence it is no surprise that
Pittel gave up on the integrals for $n=6$. The integration is better
left to a computer.
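For pure matchings, \eqref{eq:factors} simplifies to $f([2^{n/2}]) = \frac{1}{2}n(n-3)+\frac{n}{2} = \frac{1}{2}n(n-2)$, which reproduces the exponents above. A short cross-check of \eqref{eq:factors} against a direct count of the factors (helper names ours):

```python
from math import comb

def f_formula(a):
    # Eq. (eq:factors): f(a) = n(n-3)/2 + a_1 + a_2;
    # a maps cycle length k to its multiplicity a_k
    n = sum(k * ak for k, ak in a.items())
    return n * (n - 3) // 2 + a.get(1, 0) + a.get(2, 0)

def f_direct(a):
    # direct count of the factors (1 - x_i x_j): start from the C(n,2) pairs
    # of K_n, remove the k edges of every cycle of length k >= 3 and one edge
    # per 2-cycle
    n = sum(k * ak for k, ak in a.items())
    removed = sum(k * ak for k, ak in a.items() if k >= 3) + a.get(2, 0)
    return comb(n, 2) - removed

for a in ({2: 2}, {4: 1}, {1: 1, 3: 1}, {2: 3}, {1: 1, 2: 1, 3: 1}, {3: 2}, {2: 6}):
    assert f_formula(a) == f_direct(a)

# pure matchings a = [2^{n/2}]: f = n(n-2)/2, i.e. the exponents 4, 12, 24, 40, 60
assert [f_formula({2: n // 2}) for n in (4, 6, 8, 10, 12)] == [4, 12, 24, 40, 60]
```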
We used the computer-algebra system Mathematica \cite{mathematica:10}
for the exact evaluation of the integrals
$P(\mathbf{a})$. Figure~\ref{fig:integration} shows the Mathematica
code that sets up the integrand and performs the integration. The full
Mathematica code is available online \cite{mertens:page:matchings}.
\begin{figure}
\centering\small
\begin{verbatim}
Integrand[a_] := Module[(* computes integrand corresponding to cycle pattern a *)
{n,i,j,l,result,cycle},
If[a[[1]]>1,result=0, (* more than one fixed point *)
n=Sum[k*a[[k]],{k,1,Length[a]}];
If[a[[1]]>0, (* take care of fixed point *)
n=n-1;result=Product[(1-x[i]),{i,1,n}],
result = 1 (* no fixed point *)
];
result=result*Product[(1-x[i]*x[j]),{i,1,n-1},{j,i+1,n}];
(* remove 2-cycles from product *)
result=result/Product[(1-x[2*i-1]*x[2*i]),{i,1,a[[2]]}];
(* cycles larger than 2 *)
result=result*Product[x[i],{i,2*a[[2]]+1,n}];
For[cycle=3,cycle<=Length[a],cycle++,
l=Sum[i*a[[i]],{i,2,cycle-1}]+1;
For[i=l,i<=l+cycle*(a[[cycle]]-1),i+=cycle,
For[j=0,j<cycle,j++,result=result/(1-x[i+j]*x[i+Mod[(j+1),cycle]])]
]
]
];
result
];
P[a_] := Module[
{y,n,k},
n=Sum[k*a[[k]],{k,1,Length[a]}];
If[a[[1]]>0,n=n-1];
y = Integrand[a];
For[k=n,k>=1,k--,y=Integrate[y,{x[k],0,1}]];
y
];
\end{verbatim}
\caption{Mathematica code to compute the integrals $P(\mathbf{a})$
\eqref{eq:PnPi}. The procedure \texttt{Integrand[a]} returns the
integrand as a function of variables
\texttt{x[1]},\ldots,\texttt{x[n]} (or \texttt{x[n-1]} if the
cycle type \texttt{a} contains a fixed point), the procedure
\texttt{P[a]} evaluates the integral by exactly integrating
variable by variable.}
\label{fig:integration}
\end{figure}
Using our Mathematica code, we computed the values of $p_n$ for $n\leq 12$ both from
\eqref{eq:Pn-general} and (as a crosscheck) from \eqref{eq:complement-Pn-general}.
The results are
\begin{subequations}
\label{eq:Pvalues}
\begin{align}
\label{eq:P6}
p_6 &= {\frac {181431847}{194400000}} = 0.93329139403292181070\ldots\\[1ex]
\label{eq:P8}
p_8 &= {\frac {809419574956627}{889426440000000}} = 0.91004667564933981499\ldots\\[1ex]
\label{eq:P10}
p_{10} &= \frac{25365465754520943457921774207}{28460490127321448448000000000}
= 0.89125189485653484085\ldots\\[1ex]
\label{eq:P12}
p_{12} &=
\frac{13544124829485098788469430650439043569062157071}{15469783933925839494793980316271247360000000000}
\\
&= 0.87552126696367780620\ldots\,.\nonumber
\end{align}
\end{subequations}
The values of the corresponding individual integrals are listed in
Tables~\ref{tab:n10} and \ref{tab:n12}.
\begin{table}
\centering
\begin{tabular}{rrrrrr}
& \multicolumn{1}{c}{$p_4$} & \multicolumn{1}{c}{$p_6$} & \multicolumn{1}{c}{$p_8$} & \multicolumn{1}{c}{$p_{10}$} & \multicolumn{1}{c}{$p_{12}$} \\[1ex]
\eqref{eq:Pn-general}: & 0.20 sec. & 19.8 sec. & 5 min. & 20 min. & 15.5
days \\[1ex]
\eqref{eq:complement-Pn-general}: & 0.02 sec. & 3.5 sec. & 6 min. & 25
min. & 13.9 days
\end{tabular}
\caption{\label{tab:times}Times to compute $p_n$ according to
\eqref{eq:Pn-general} or \eqref{eq:complement-Pn-general}.}
\end{table}
We ran our Mathematica code on a computer equipped with 2
Intel\textsuperscript{\textregistered}\
Xeon\textsuperscript{\textregistered}\ CPUs E5-1620 with 3.60 GHz
clock rate and 32 GByte of memory. The total computation times are
shown in Table~\ref{tab:times}. Table~\ref{tab:n12} also shows the
times to compute the individual integrals for $n=12$. Some of these
integrals (marked with a $\star$) could not be computed by the simple
iterative scheme in Figure~\ref{fig:integration} because Mathematica
ran out of memory. In these cases we expanded the integrand as a
polynomial in the variable $x_n$ (or $x_{n-1}$ if there is a fixed
point) and applied iterative integration to each coefficient of this
polynomial. This reduces the memory consumption, but it slows down the
computation. With a larger memory (like 64 GByte instead of 32 GByte),
this could have been avoided and $p_{12}$ could
have been computed somewhat faster.
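We cannot reproduce the Mathematica scheme of Figure~\ref{fig:integration} here, but its core idea, integrating out one variable at a time with exact rational arithmetic, can be sketched in Python for the toy case of a monomial integrand over the order simplex $0\leq x_1\leq \cdots \leq x_n\leq 1$ (an illustrative sketch of ours, not the code used for the paper):

```python
from fractions import Fraction

def simplex_monomial_integral(exponents):
    """Iterated integral of prod_i x_i^(a_i) over 0 <= x_1 <= ... <= x_n <= 1.

    Integrating x_1 from 0 up to x_2, then x_2 up to x_3, and so on: with a
    monomial integrand the intermediate result is again a single monomial,
    so only its exponent and rational coefficient need to be tracked."""
    exponent = 0
    value = Fraction(1)
    for a in exponents:
        exponent += a            # multiply in the factor x^a of this variable
        value /= exponent + 1    # integrate x^exponent from 0 to the next variable
        exponent += 1
    return value

print(simplex_monomial_integral((0, 0, 0)))  # 1/6, the volume of the order simplex in 3 variables
```

The actual integrands $P(\mathbf{a})$ involve sums with exponentially many terms rather than a single monomial, which is precisely what exhausts the memory for $n=12$.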
\begin{table}
\centering
\begin{tabular}{c@{\hskip 0mm}c@{\hskip 8mm}c@{\hskip 0mm}c}
$\mathbf{a}$ & $P(\mathbf{a})$ & $\mathbf{a}$ & $P(\mathbf{a})$ \\[1ex]
$[2^3]$ & $\frac{448035973}{5832000000}$ & $[1^1,2^1,3^1]$ & $\frac{38077}{86400000}$ \\[1ex]
$[2^1,4^1]$ & $\frac{307841}{144000000}$ & $[1^1,5^1]$ & $\frac{26257}{777600000}$\\[1ex]
$[6^1]$ & $\frac{2591729}{11664000000}$ & $[3^2]$ & $\frac{1742111}{7776000000}$\\[3ex]
$[2^4]$ &$\frac {1245959394495647}{107585022182400000}$ & $[1^1,7^1]$ & $\frac {49958102093}{384232222080000000}$ \\[1ex]
$[2^2,4^1]$ & $\frac {5211637894488503}{26896255545600000000}$ & $[1^1,2^2,3^1]$ & $\frac {441974732789}{12807740736000000}$\\[1ex]
$[2^1,6^1]$ & $\frac {914248620325799}{53792511091200000000}$ & $[1^1,3^1,4^1]$ & $\frac {1249592153}{9605805552000000}$ \\[1ex]
$[4^2]$ & $\frac {1493807915753}{1195389135360000000}$ & $[1^1,2^1,5^1]$ & $\frac {58105985423}{25615481472000000}$ \\[1ex]
$[8^1]$ & $\frac {622186155317}{498078806400000000}$ & $[2^1,3^2]$ & $\frac {76670733315619}{4482709257600000000}$ \\[1ex]
& & $[3^1,5^1]$ & $\frac {58105985423}{25615481472000000}$ \\[3ex]
$[2^5]$ &
$\frac{433857166916418660757431885203}{322741958043825225400320000000000}$
& $[1^1,2^3,3^1]$ & $\frac{1882697003227025150390719}{819662115666857715302400000000}$\\[1ex]
$[2^3,4^1]$ &
$\frac{4794693488032751578104859937}{322741958043825225400320000000000}$
& $[1^1,3^3]$ & $\frac{158398327239405983477}{512288822291786072064000000000}$ \\[1ex]
$[2^2,6^1]$ &
$\frac{726158117631681830112186713}{645483916087650450800640000000000}$
& $[1^1,2^1,3^1,4^1]$ & $\frac{2765878679393466620633}{409831057833428857651200000000}$ \\[1ex]
$[2^1,4^2]$ &
$\frac{94089601969271248978571831}{1290967832175300901601280000000000}$
& $[1^1,2^2,5^1]$ & $\frac{4336602947669955694769}{32786484626674308612096000000}$ \\[1ex]
$[2^1,8^1]$ &
$\frac{18812621042800384360939621}{258193566435060180320256000000000}$
& $[1^1,4^1,5^1]$ & $\frac{126601947989502609349}{409831057833428857651200000000}$ \\[1ex]
$[4^1,6^1]$ &
$\frac{10678226865621944175135083}{2581935664350601803202560000000000}$
& $[1^1,3^1,6^1]$ & $\frac{633196266619396193087}{2049155289167144288256000000000}$ \\[1ex]
$[10^1]$ &
$\frac{42708804188035567140443357}{10327742657402407212810240000000000}$
& $[1^1,2^1,7^1]$ &
$\frac{789921304062168675601}{117094587952408245043200000000}$
\\[1ex]
& & $[1^1,9^1]$ &
$\frac{1265995491264426770353}{4098310578334288576512000000000}$\\[1ex]
& & $[2^2,3^2]$ &
$\frac{2918990176269285877130918549}{2581935664350601803202560000000000}$\\[1ex]
& & $[2^1,3^1,5^1]$ &
$\frac{18845369089082632479619357}{258193566435060180320256000000000}$\\[1ex]
& & $[3^2,4^1]$ &
$\frac{21410287713579117222366871}{5163871328701203606405120000000000}$
\\[1ex]
& & $[3^1,7^1]$ &
$\frac{610894828667022260751797}{147539180820034388754432000000000}$
\\[1ex]
& & $[5^2]$ & $\frac{8541874436295301342281403}{2065548531480481442562048000000000}$
\end{tabular}
\caption{\label{tab:n10}Probabilities $P(\mathbf{a})$ for $n=6,8,10$
(top to bottom). Cycle
types with (right) and without (left) odd cycles.}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccr}
$\mathbf{a}$ & $P(\mathbf{a})$ & Time [sec.] \\[1ex]
$[2^6]$ & $\frac{325899908494883644126440199857602193757211429627}{2572934463890545624774134806202233860915200000000000}$ & 265\,018\\[1ex]
$[2^4,4^1]$ & $\frac{1209115974791734652605681563324122140963407221}{1225206887566926487987683241048682790912000000000000}$ & 265\,091\makebox[0pt]{\:\:$^\star$}\\[1ex]
$[2^2,4^2]$ & $\frac{16288072152327610053000950409164225186650151}{4288224106484242707956891343670389768192000000000000}$ & 205\,089\makebox[0pt]{\:\:$^\star$}\\[1ex]
$[4^3]$ & $\frac{231703173390597300042186053017177445722753}{25729344638905456247741348062022338609152000000000000}$ & 206\,115\makebox[0pt]{\:\:$^\star$}\\[1ex]
$[2^3,6^1]$ & $\frac{3378294177941932509053172872298924486923764591}{51458689277810912495482696124044677218304000000000000}$ & 49\,493\\[1ex]
$[2^1,4^1,6^1]$ & $\frac{417880592074077264470531487240272595070729}{2144112053242121353978445671835194884096000000000000}$ & 49\,220\\[1ex]
$[6^2]$ & $\frac{1853330912748299530044034784734880537462353}{205834757111243649981930784496178708873216000000000000}$ & 49\,293\\[1ex]
$[2^2,8^1]$ & $\frac{508893194633666952579907861671385135829521}{134007003327632584623652854489699680256000000000000}$ & 47\,303\\[1ex]
$[4^1,8^1]$ & $\frac{4412925241742005785167715449536219676971}{490082755026770595195073296419473116364800000000000}$ & 47\,520\\[1ex]
$[2^1,10^1]$ & $\frac{53484730261191253361608747405394814514357}{274446342814991533309241045994904945164288000000000}$ & 53\,836\\[1ex]
$[12^1]$ &
$\frac{14826641669894164340076941832557808383893}{1646678056889949199855446275969429670985728000000000}$
& 53\,299 \\[3ex]
$[1^1,2^4,3^1]$ &
$\frac{122503966894472107602242737308438169403}{920820472257490446118689304539955200000000000}$
& 92\,605\\[1ex]
$[1^1,2^1,3^3]$ &
$\frac{1222543880169122622560877738575059}{93543667022983156431104945223106560000000000}$
& 42\,996 \\[1ex]
$[1^1,2^2,3^1,4^1]$ &
$\frac{193959334006722457965074605079586629657}{618791357357033579791759212650849894400000000000}$
& 7\,038\\[1ex]
$[1^1,3^1,4^2]$ &
$\frac{2855025200767172513735081732106863}{5729549605157718331405177894915276800000000000}$
& 8\,354\\[1ex]
$[1^1,2^3,5^1]$ &
$\frac{8437895290055710585350317910566101247317}{1237582714714067159583518425301699788800000000000}$
& 10\,784\\[1ex]
$[1^1,3^2,5^1]$ &
$\frac{176230039945164631684723423501203937}{353595061346876331309576692943342796800000000000}$ & 10\,498\\[1ex]
$[1^1,2^1,4^1,5^1]$ &
$\frac{1795748861495201189078302039999290739}{137509190523785239953724269477966643200000000000}$ & 9\,092\\[1ex]
$[1^1,2^1,3^1,6^1]$ &
$\frac{2694291584800385097759305707384840499}{206263785785677859930586404216949964800000000000}$ & 8\,314\\[1ex]
$[1^1,5^1,6^1]$ &
$\frac{2174740904996317920876887889334183}{4365371127739213966784897443744972800000000000}$ & 8\,376\\[1ex]
$[1^1,2^2,7^1]$ &
$\frac{1695852720842076492466118915028628181}{5412169306329739764942500402194022400000000000}$ & 8\,059\\[1ex]
$[1^1,4^1,7^1]$ &
$\frac{88077935438211707375963113857429259}{176797530673438165654788346471671398400000000000}$ & 8\,082\\[1ex]
$[1^1,3^1,8^1]$ &
$\frac{19270992061340957619880212582236803}{38674459834814598736984950790678118400000000000}$ & 8\,098\\[1ex]
$[1^1,2^1,9^1]$ &
$\frac{1197147888812853403164542654655246797}{91672793682523493302482846318644428800000000000}$ & 8\,096\\[1ex]
$[1^1,11^1]$ &
$\frac{3699232196202777873480674898554033731}{7425496288284402957501110551810198732800000000000}$ & 8\,001\\[1ex]
$[2^3,3^2]$ & $\frac{161499154693883709213457621140881273357670713}{2450413775133852975975366482097365581824000000000000}$ & 209\,450\\[1ex]
$[2^2,3^1,5^1]$ &
$\frac{4348643545825433788892694617856351275828603}{1143526428395798055455171024978770604851200000000000}$ & 56\,207\\[1ex]
$[2^1,3^2,4^1]$ & $\frac{357327697339220191285941924523831564051}{1829642285433276888728273639966032967761920000000}$ & 212\,763\makebox[0pt]{\:\:$^\star$}\\[1ex]
$[3^4]$ & $\frac{9836824308843120655187019024812769030613}{1089072788948379100433496214265495814144000000000000}$ & 216\,063\\[1ex]
$[2^1,3^1,7^1]$ &
$\frac{91054335946285045516721350625722458874631}{466745480977876757328641234685212491776000000000000}$ & 50\,230\\[1ex]
$[3^2,6^1]$ &
$\frac{618748213165565813756023362735829302961763}{68611585703747883327310261498726236291072000000000000}$ & 49\,614\\[1ex]
$[2^1,5^2]$ &
$\frac{26742627021755978677974945561880197844453}{137223171407495766654620522997452472582144000000000}$ & 59\,920\\[1ex]
$[3^1,4^1,5^1]$ &
$\frac{61829720130534204153121031185033846815523}{6861158570374788332731026149872623629107200000000000}$ & 59\,587\\[1ex]
$[3^1,9^1]$ &
$\frac{20608745217756332777239346633128265563123}{2287052856791596110910342049957541209702400000000000}$ & 53\,642\\[1ex]
$[5^1,7^1]$ &
$\frac{17650822382212529975478949518938933533441}{1960331020107082380780293185677892465459200000000000}$
& 53\,495
\end{tabular}
\caption{\label{tab:n12}Probabilities $P(\mathbf{a})$ for $n=12$
and the times to compute them. Times marked with
$\star$ refer to a slower, more memory efficient integration procedure (see text).}
\end{table}
\section{Odd values of $n$}
For odd values of $n$ there are no stable matchings, of
course, since an odd number of participants cannot be perfectly matched. But there are still stable permutations: Tan's results listed
in Section~\ref{sec:stable-permutations} as well as
Proposition~\ref{the:PnPi} also hold for odd values of $n$.
This allows us to generalize the stable roommates problem to odd
values of $n$. The most obvious generalization is to accept one
fixed point, i.e., to reject one participant from the dormitory (or
put him into a single bedroom), and to ask for a stable matching of
the remaining $n-1$ participants. Let $p_n$ (for $n$ odd) denote the
probability that a random instance admits such a solution. Following
the same reasoning as in Proposition~\ref{the:Pn-general} and
Corollary~\ref{the:complement-Pn-general}, we get
\begin{align}
\label{eq:Pn-odd-general}
p_n &= \sum_{\mathbf{a}\in\mathcal{E}^1_n} (-1)^{e(\mathbf{a})}
c(\mathbf{a}) P(\mathbf{a})\,, \\
\label{eq:complement-Pn-odd-general}
1-p_n &= \sum_{\mathbf{a}\in\mathcal{O}^3_n} (-1)^{e(\mathbf{a})} c(\mathbf{a}) P(\mathbf{a})\,,
\end{align}
where $\mathcal{E}^1_n$ is the set of all cycle types of size $n$
consisting of one fixed point and even cycles and $\mathcal{O}^3_n$
is the set of all cycle types of size $n$ that contain at least one
cycle of odd length $\geq 3$. Table~\ref{tab:n-odd} lists the values of the corresponding integrals
$P(\mathbf{a})$ for odd $n\leq 11$. The resulting values of $p_n$ are
\begin{subequations}
\label{eq:Pvalues-odd}
\begin{align}
p_3 &= \frac{3}{4} = 0.75\\[1ex]
\label{eq:P5}
p_5 &= \frac{4075}{6912} = 0.5895543981481481\ldots\\[1ex]
\label{eq:P7}
p_7 &= \frac{246462083}{518400000} = 0.4754284008487654\ldots\\[1ex]
\label{eq:P9}
p_{9} &= \frac{11365049284140796201}{29144725585920000000} = 0.38995218021992023\ldots\\[1ex]
\label{eq:P11}
p_{11} &= \frac{176967745750762518431538515329}{546441410444571810201600000000} = 0.3238549318705289\ldots
\end{align}
\end{subequations}
It may seem counterintuitive that $p_{2k-1} < p_{2k}$, but note that the
enforced fixed point for an odd number of participants represents
someone who is happy to be matched with anybody else.
This high destabilizing potential is a consequence of the rule that every participant
has to put himself at the very end of his preference list.
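As a consistency check on the tables, the cycle types entering the sums can be enumerated directly: the types without odd cycles correspond to partitions of $n$ into even parts, and the types in $\mathcal{E}^1_n$ to a fixed point plus a partition of $n-1$ into even parts. A short Python sketch (ours) reproduces the row counts of the corresponding table blocks:

```python
def even_partitions(n, max_part=None):
    """All partitions of n into even parts, as weakly decreasing lists."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [[]]
    result = []
    for part in range(min(n, max_part), 1, -1):
        if part % 2 == 0:
            for rest in even_partitions(n - part, part):
                result.append([part] + rest)
    return result

# Cycle types without odd cycles for n = 12 (first block of Table n=12) ...
print(len(even_partitions(12)))   # 11
# ... and one fixed point plus even cycles for n = 11 (last block of the odd-n table).
print(len(even_partitions(10)))   # 7
```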
\section{Conclusions and Outlook}
We have seen that $p_n$, the probability that a random instance of the
stable roommates problem of size $n$ admits a solution, can be expressed as a sum over
cycle types of permutations of size $n$. Each term in the sum is an
integral whose integrand contains an exponentially growing number of terms, which restricts the
exact evaluation of $p_n$ to $n\leq 12$. In spite of this limitation, the
method is far more efficient than exhaustive enumeration of the
$[(n-1)!]^{n-1}$ different instances of size $n$. For
$n=12$, this number is $4.1\times 10^{83}$, or $4100$ times the number
of atoms in the visible universe (which is usually estimated as
$10^{80}$).
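The instance count quoted above is easily verified (a quick Python sanity check of ours):

```python
import math

# Number of instances of size n, [(n-1)!]^(n-1), for n = 12.
count = math.factorial(11) ** 11
print(f"{count:.2e}")   # about 4.1e83
```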
Our results for $n\leq 12$ don't shed new light on the ultimate
behavior of $p_n$ as $n$ becomes large, but they suggest that exact
evaluation of $p_n$ for any larger values of $n$ is likely to be
infeasible without some unexpected new approach.
The approach outlined in this paper can easily be modified to work for
the stable matching problem on general graphs, where each participant
corresponds to a vertex of a graph $G$ and ranks only those
participants adjacent to him in $G$. If $G$ is the complete graph, we
recover the stable roommates problem. In the case of bipartite graphs
$G$ (known as stable marriage problem) we have $p_n=1$. For
non-bipartite graphs, $p_n$ seems to be a monotonically decreasing
function of $n$ that may or may not approach a non-zero value,
depending on the number of short cycles in $G$
\cite{mertens:matchings}.
\begin{table}
\centering
\begin{tabular}{c@{\hskip 0mm}c@{\hskip 8mm}c@{\hskip 0mm}c}
$\mathbf{a}$ & $P(\mathbf{a})$ & $\mathbf{a}$ & $P(\mathbf{a})$
\\[1ex]
$[1^1,2^1]$ & $\frac{1}{4}$ &
$[3^1]$ & $\frac{1}{8}$ \\[3ex]
$[1^1,2^2]$ & $\frac{833}{20736}$ & $[2^1,3^1]$ &
$\frac{491}{27648}$ \\[1ex]
$[1^1,4^1]$ & $\frac{1}{2304}$ & $[5^1]$ & $\frac{191}{82944}$
\\[3ex]
$[1^1,2^3]$ & $\frac{110831617}{23328000000}$ & $[2^2,3^1]$
& $\frac{5103637}{2592000000}$ \\[1ex]
$[1^1,2^1,4^1]$ & $\frac{797731}{23328000000}$ & $[2^1,5^1]$ &
$\frac{1945639}{9331200000}$
\\[1ex]
$[1^1,6^1]$ & $\frac{6541}{2916000000}$ & $[3^1,4^1]$ & $\frac{336349}{18662400000}$\\[1ex]
$[1^1,3^2]$ & $\frac{2183}{972000000}$& $[7^1]$ & $\frac{558779}{31104000000}$\\[3ex]
$[1^1,2^4]$ & $\frac{12242957448855683129}{27541765678694400000000}$ &$[2^3,3^1]$ & $\frac{39406434169244998649}{220334125429555200000000}$
\\[1ex]
$[1^1,2^2,4^1]$ & $\frac{2998148628185909}{1311512651366400000000}$ & $[2^2,5^1]$ & $\frac{234360972607515209}{14688941695303680000000}$\\[1ex]
$[1^1,4^2]$ & $\frac{184134811312313}{27541765678694400000000}$ & $[2^1,3^1,4^1]$ & $\frac{3502136387768779}{2937788339060736000000}$\\[1ex]
$[1^1,2^1,6^1]$ & $\frac{3617070987119831}{27541765678694400000000}$ & $[3^3]$ & $\frac{374799675933251}{4896313898434560000000}$\\[1ex]
$[1^1,8^1]$ & $\frac{26303739761759}{3934537954099200000000}$ &$[2^1,7^1]$ & $\frac{12476274579169301}{10492101210931200000000}$
\\[1ex]
$[1^1,2^1,3^2]$ & $\frac{1206877128048157}{9180588559564800000000}$ & $[3^1,6^1]$ & $\frac{16811008475015879}{220334125429555200000000}$\\[1ex]
$[1^1,3^1,5^1]$ & $\frac{2455964944171}{367223542382592000000}$ & $[4^1,5^1]$ & $\frac{671436255551711}{8813365017182208000000}$\\[1ex]
& & $[9^1]$ & $\frac{1864835590786319}{24481569492172800000000}$\\[3ex]
$[1^1,2^5]$&$\frac{88853486478784120344992170351}{2581935664350601803202560000000000}$&$[2^4,3^1]$&$\frac{710107424563570828306588840739}{51638713287012036064051200000000000}$\\[1ex]
$[1^1,2^3,4^1]$&$\frac{4572509990406797552502341}{34425808858008024042700800000000}$&$[2^3,5^1]$&$\frac{109197089334060411167876570117}{103277426574024072128102400000000000}$\\[1ex]
$[1^1,2^1,4^2]$&$\frac{268054718171660435931803}{860645221450200601067520000000000}$&$[2^2,3^1,4^1]$& $\frac{14352373021321999225705658471}{206554853148048144256204800000000000}$\\[1ex]
$[1^1,2^2,6^1]$&$\frac{1168831786137020235667067}{172129044290040120213504000000000}$&$[2^1,3^3]$& $\frac{183002200715285406357445301}{45901078477344032056934400000000000}$\\[1ex]
$[1^1,4^1,6^1]$&$\frac{743715115011041403407}{57376348096680040071168000000000}$&$[2^2,7^1]$& $\frac{204627732127480795488157591}{2950783616400687775088640000000000}$\\[1ex]
$[1^1,2^1,8^1]$&$\frac{100516822545753167453891}{322741958043825225400320000000000}$&$[2^1,3^1,6^1]$& $\frac{547557978971950371021494551}{137703235432032096170803200000000000}$\\[1ex]
$[1^1,10^1]$&$\frac{66933419040890225282203}{5163871328701203606405120000000000}$&$[2^1,4^1,5^1]$& $\frac{2461017693717356460362113427}{619664559444144432768614400000000000}$\\[1ex]
$[1^1,2^2,3^2]$ & $\frac{4386643900008678909343237}{645483916087650450800640000000000}$&$[3^2,5^1]$& $\frac{41866526759821300816071211}{206554853148048144256204800000000000}$\\[1ex]
$[1^1,3^2,4^1]$&$\frac{531499853646597948383}{40983105783342885765120000000000}$&$[3^1,4^2]$& $\frac{373490614662460067378083}{1844239760250429859430400000000000}$\\[1ex]
$[1^1,2^1,3^1,5^1]$&$\frac{804392338445761377188767}{2581935664350601803202560000000000}$&$[2^1,9^1]$& $\frac{182277891278802756936253723}{45901078477344032056934400000000000}$\\[1ex]
$[1^1,5^2]$&$\frac{16733378688533122315949}{1290967832175300901601280000000000}$&$[3^1,8^1]$& $\frac{62737571161936687651226813}{309832279722072216384307200000000000}$\\[1ex]
$[1^1,3^1,7^1]$&$\frac{19128779897378689455131}{1475391808200343887544320000000000}$&$[4^1,7^1]$& $\frac{17908596195396917111551979}{88523508492020633252659200000000000}$\\[1ex]
$[11^1]$& $\frac{62675660640300931114214381}{309832279722072216384307200000000000}$&$[5^1,6^1]$& $\frac{6963996535691809265274221}{34425808858008024042700800000000000}$
\end{tabular}
\caption{\label{tab:n-odd}Probabilities $P(\mathbf{a})$ for $n=3,5,7,9,11$
(top to bottom).}
\end{table}
\bibliographystyle{unsrt}
% https://arxiv.org/abs/1210.3772
\title{An Extension Theorem for Real K\"ahler Submanifolds in Codimension Four}
\begin{abstract}
In this article, we prove a K\"ahler extension theorem for real K\"ahler submanifolds of codimension $4$ and rank at least $5$. Our main theorem states that such a manifold is a holomorphic hypersurface in another real K\"ahler submanifold of codimension $2$. This generalizes a result of Dajczer and Gromoll from 1997, which states that any real K\"ahler submanifold of codimension $3$ and rank at least $4$ admits a K\"ahler extension.
\end{abstract}
\section{Introduction}
\vspace{0.4cm}
Submanifold theory, and especially the study of Riemannian submanifolds in Euclidean spaces, has been a classic subarea of differential geometry. The Nash embedding theorem \cite{Nash} guarantees that any complete Riemannian manifold can be isometrically embedded into a Euclidean space. There have been many important developments in submanifold theory. At the risk of omitting many, we will just mention two recent examples. One is the work of Hongwei Xu and his collaborators (see \cite{Xu}, \cite{XG} and \cite{XZ}) generalizing the famous Differentiable Sphere Theorem of Brendle and Schoen (\cite{BS1}, \cite{BS2}) to the submanifold case, thus obtaining the optimal pinching constant. The other is the very recent work of F. Marques and A. Neves \cite{MN}, solving the long-standing Willmore conjecture.
\vspace{0.2cm}
However, in the special case when the submanifold happens to be K\"ahler, research has been relatively sparse and sporadic, and the state of knowledge is still rather primitive in our opinion. We will call a K\"ahler manifold isometrically embedded in a real Euclidean space a {\em real K\"ahler Euclidean submanifold,} or {\em real K\"ahler submanifold} for short. That is, we have an isometric embedding $f: M^n \rightarrow {\mathbb R}^{2n+p}$ from a K\"ahler manifold $M^n$ of complex dimension $n$ into a real Euclidean space.
\vspace{0.2cm}
Ideally, since $M^n$ is equipped with a complex structure, one would like the embedding $f$ to be both isometric and holomorphic. However, the thesis of Calabi \cite{Calabi} in the 1950s showed that very few K\"ahler metrics can be isometrically and holomorphically embedded into a complex Euclidean space or another complex space form. He actually characterized all such metrics precisely. So to study generic K\"ahler manifolds in the extrinsic setting, one has to abandon the holomorphicity assumption on the embedding and only assume it to be isometric.
\vspace{0.2cm}
For a real K\"ahler submanifold $f: M^n \rightarrow {\mathbb R}^{2n+p}$, the K\"ahlerness of $M^n$ imposes strong restrictions and makes $f$ very sensitive to the codimension. For instance, when $p=1$, namely, when $M^n$ is a hypersurface, the result of Florit and the second named author in \cite{FZ-hypersurface} states that, when $M^n$ is also assumed to be complete, $f$ must be the product of $g$ with the identity map of ${\mathbb C}^{n-1}$, where $g:\Sigma \rightarrow {\mathbb R}^3$ is the isometric embedding of a complete surface, which is always K\"ahler. In other words, surfaces in ${\mathbb R}^3$ are essentially the only real K\"ahler submanifolds in codimension one. In contrast, there are all kinds of real hypersurfaces in Euclidean spaces.
\vspace{0.2cm}
In codimension two, the situation is also well-studied and fully understood. In the minimal case, it was analyzed in details by Dajczer and Gromoll (see \cite{DG2}, \cite{DR} and the references therein), and in the non-minimal case, it was classified by Florit and the second named author \cite{FZ-codim2}. In codimension three, the work of Dajczer and Gromoll \cite{DG97} showed that, unless the submanifold $M^n$ is a holomorphic hypersurface of a real K\"ahler submanifold of codimension $1$, its rank has to be less than or equal to $3$, the codimension of $M^n$.
\vspace{0.2cm}
Recall that the {\em rank} of a real K\"ahler submanifold $f: M^n \rightarrow {\mathbb R}^{2n+p}$ at $x\in M$ is defined to be $n-\nu_0$, with $\nu_0$ the complex dimension of $\Delta_0=\Delta \cap J\Delta$, which is the $J$-invariant part of the kernel $\Delta $ of the second fundamental form of $f$. Of course these spaces may not have constant dimensions on $M$. But if we let $U$ be the open subset where $\Delta_0$ takes its minimum (thus constant) dimension, then the rank $r$ will be constant on $U$. Outside the closure of $U$, $M$ will be a real K\"ahler submanifold of smaller rank. In general, by restricting to an open dense subset $U'$ of $M$, we can always assume that in each connected component $U$ of $U'$, $\Delta$ and $\Delta_0$ take constant dimensions and form distributions. Note that the leaves of $\Delta $ ($\Delta_0$) are totally geodesic (complex) submanifolds of $M^n$. They are actually open subsets of (parallel translates of) linear subspaces in the ambient Euclidean space. We might need to further reduce $U'$ later, but the conclusions we draw will always be valid in each connected component of an open dense subset of $M$.
\vspace{0.2cm}
The main purpose of this paper is to show that the result of Dajczer and Gromoll in \cite{DG97} can be extended to the codimension $4$ case. To be precise, we will prove the following
\vspace{0.4cm}
\noindent {\bf Main Theorem.} {\em Let $f: M^n \rightarrow {\mathbb R}^{2n+4}$ be a real K\"ahler submanifold with rank $r>4$ everywhere. Then there exists an open dense subset $U'\subset M$ such that for each connected component $U$ of $U'$, the restriction $f|_U$ has a K\"ahler extension, namely, there exists a real K\"ahler submanifold $h: Q^{n+1} \rightarrow {\mathbb R}^{2n+4}$ of codimension $2$, and a holomorphic embedding $\sigma : U \rightarrow Q^{n+1}$, such that $f|_U=h \circ \sigma $. Furthermore, when $f$ is minimal, one can choose $h$ to be minimal as well.}
\vspace{0.4cm}
Note that if $h$ is minimal, $f$ has to be minimal. In general, the extension $h$ might not be unique. But as we shall see from the proof, there is always a `canonical' extension, unless $f$ itself is a holomorphic isometric embedding into ${\mathbb C}^{n+2}$.
\vspace{0.2cm}
This result can be regarded as an extension of a phenomenon discovered by Dajczer \cite{D} and Dajczer-Gromoll \cite{DG97}, in codimension two and three, respectively. In \cite{D}, Dajczer proved that, for any codimension two real K\"ahler submanifold, if its rank is greater than $2$, then in any connected component $U$ of an open dense subset of $M$, the restriction $f|_U$ is a holomorphic embedding into ${\mathbb R}^{2n+2}\cong {\mathbb C}^{n+1}$. This is an important discovery. In codimension three, Dajczer and Gromoll proved in 1997 (see \cite{DG97}) that if a real K\"ahler submanifold of codimension three has rank greater than $3$, then there exists an open dense subset $U'\subseteq M$ such that in each connected component $U$ of $U'$, $f|_U$ has a K\"ahler extension into a real K\"ahler submanifold $Q^{n+1}$ of codimension one.
\vspace{0.2cm}
Note that for the results in \cite{D} and \cite{DG97}, assumptions were made on the {\em relative nullity} $\nu$, namely, the (real) dimension of the kernel $\Delta $ of the second fundamental form $\alpha_f$. Since $\Delta_0\subseteq \Delta$, we have $2\nu_0\leq \nu$ hence $\nu \geq 2n-2r$, with $r$ the rank. In \cite{D}, the assumption was $\nu <2n-4$, which implies $r>2$. In \cite{DG97}, the assumption was $\nu <2n-6$, which implies $r>3$. Even though their assumptions were slightly stronger, it is easy to see that their arguments can be extended to the cases when assumptions are made on the ranks.
\vspace{0.2cm}
We suspect that a similar phenomenon persists in higher codimensions as well, namely, that the rank $r$ should be controlled by the codimension $p$ in a certain way, unless the manifold is a complex submanifold of another real K\"ahler submanifold of smaller codimension. We will explore the higher codimensional cases elsewhere; here we will just state a conjecture which says that, for $p\leq 11$, the words ``controlled by" in the above sentence should mean that the rank is no greater than the codimension, namely, $r\leq p$. In other words,
\vspace{0.4cm}
\noindent {\bf Conjecture.} {\em Let $f: M^n \rightarrow {\mathbb R}^{2n+p}$ be a real K\"ahler submanifold with rank $r>p$ everywhere. If $p\leq 11$, then there exists an open dense subset $U'\subset M$ such that for each connected component $U$ of $U'$, the restriction $f|_U$ has a K\"ahler extension, namely, there exists a real K\"ahler submanifold $h: Q^{n+s} \rightarrow {\mathbb R}^{2n+p}$ of codimension $p-2s<p$, and a holomorphic embedding $\sigma : U \rightarrow Q^{n+s}$, such that $f|_U=h \circ \sigma $. }
\vspace{0.4cm}
Note that the main theorem, together with results of \cite{D} and \cite{DG97}, confirms the conjecture for $p\leq 4$. (When $p=1$, one always has $r\leq 1$).
\vspace{0.4cm}
\noindent {\bf Acknowledgement:} We would like to take this opportunity to thank a few people who helped us in our study. First, we are very grateful to Marcos Dajczer for his inspiring papers on the subject of real K\"ahler submanifolds, which opened the way to the investigation of this under-explored territory in submanifold theory. The second named author would like to thank his former collaborators Luis Florit and Wing San Hui. The present work is a continuation of these earlier joint works. Finally, we would also like to thank CMS of Zhejiang University which provides an ideal research environment for mathematicians, and in particular to Hongwei Xu for his warm hospitality and numerous stimulating conversations.
\vspace{0.4cm}
\vspace{0.4cm}
\section{Preliminaries}
\vspace{0.4cm}
In this section, we shall collect some known results from the literature that will be needed in the proof of our theorem. We will also fix some notation and terminology to be used later.
\vspace{0.2cm}
In this paper, unless specified otherwise, we will always assume that $M$ is a real K\"ahler submanifold of complex dimension $n$ and codimension $p$, with $f$ the isometric embedding from $M$ into ${\mathbb R}^{2n+p}$. At any $x\in M$, let $\Delta $ be the kernel of the second fundamental form $\alpha_f$ of $f$, and $ \Delta_0 =\Delta \cap J\Delta $ the $J$-invariant part of $\Delta$. The rank $r$ is defined to be $n-\nu_0$, where $2\nu_0$ is the real dimension of $\Delta_0$. We always have $\nu \geq 2n-2r$, where $\nu =\mbox{dim}(\Delta )$ is the relative nullity.
\vspace{0.2cm}
The results in this paper are local in nature, and we will from time to time pass from $M$ to an open dense subset of it, to make various subspaces of the tangent or normal bundle take constant dimensions and form subbundles.
\vspace{0.2cm}
For $x\in M$, we will denote by $T\cong {\mathbb R}^{2n}$ the real tangent space $T_xM$, by $N=T_xM^{\perp }\cong {\mathbb R}^p$ the normal space, and by $V\cong {\mathbb C}^n$ the space of all type $(1,0)$ complex tangent vectors at $x$, namely, $V\oplus \overline{V} \cong T\otimes_{\mathbb R}{\mathbb C}$. Extending the second fundamental form $\alpha_f: T\times T \rightarrow N$ linearly over ${\mathbb C}$, we denote its $(1,1)$ and $(2,0)$ components by $H$ and $S$, respectively:
$H: V\otimes \overline{V} \rightarrow N_{\mathbb C}$, and $S: V\otimes V \rightarrow N_{\mathbb C}$,
where $N_{\mathbb C}=N\otimes_{\mathbb R}{\mathbb C}$.
\vspace{0.2cm}
As observed in \cite{FHZ}, the K\"ahlerness of $M$ implies that the Hermitian bilinear form $H$ and the symmetric bilinear form $S$ satisfy the following symmetry conditions:
\begin{eqnarray}
\langle H_{X\overline{Y}}, H_{Z\overline{W}}\rangle & = & \langle H_{Z\overline{Y}}, H_{X\overline{W}}\rangle \\
\langle H_{X\overline{Y}}, S_{ZW}\rangle & = & \langle H_{Z\overline{Y}}, S_{XW}\rangle \\
\langle S_{XY}, S_{ZW}\rangle & = & \langle S_{ZY}, S_{XW}\rangle
\end{eqnarray}
for any $X,Y,Z,W \in V$.
\vspace{0.2cm}
We notice that $H$ and $S$ together carry all the information of $\alpha_f$. Also, by $(2.1)$, we get
$$ \Big| \sum_{i=1}^n H_{i\overline{i}} \Big|^2 = \sum_{i,j=1}^n |H_{i\overline{j}}|^2 $$
for any unitary frame $\{ e_1, \ldots , e_n\}$ of $V$. Here we write $H_{i\overline{j}}$ for $H_{e_i\overline{e_j}}$. So $H\equiv 0$ if and only if the trace of $H$, which is (a multiple of) the mean curvature of $f$, vanishes; in other words, $f$ is minimal if and only if $H=0$.
\vspace{0.2cm}
Note that for $\Delta = \mbox{ker}(\alpha_f)$, its $J$-invariant part $\Delta_0=\Delta \cap J\Delta$ corresponds to a complex subspace $D\subseteq V$ with complex dimension $\nu_0$, and $D$ is exactly the intersection of the kernels of $H$ and $S$. Let $V'$ be the orthogonal complement of $D$ in $V$. We have $V=D\oplus V'$ and $V'\cong {\mathbb C}^r$, where $r=n-\nu_0$ is the rank of $M^n$. $D$ (or $\Delta$) is contained in the kernel of the curvature tensor of $M$, and the leaves of the foliation $D$ are totally geodesic, flat complex submanifolds in $M$. They are actually open subset of ${\mathbb C}^{n-r}$, embedded linearly (i.e., as parallel translation of linear subspace) in ${\mathbb R}^{2n+p}$. So in a way, the rank $r$ of $M$ is like the essential (complex) dimension of $M$, even though in general $M$ might not be isometric to the product space (i.e., the leaves of $D$ might not be parallel to each other).
\vspace{0.2cm}
For any $\eta \in N$, the shape operator $A_{\eta}$ is defined by $\langle A_{\eta }u,v\rangle =\langle \alpha_f(u,v), \eta \rangle$ for any $u, v\in T$. It is self-adjoint. For convenience, we will also denote by $A^{\eta }$ the {\em shape form}, which is defined by $A^{\eta}_{uv} = \langle A_{\eta }(u), v\rangle = \langle \alpha_f(u,v), \eta \rangle $. It is the component of the second fundamental form in the $\eta$-direction.
\vspace{0.2cm}
Let $\{ e_1, \ldots , e_n\}$ be a basis of $V$. For each $1\leq i\leq n$, write
$$ e_i=\frac{1}{\sqrt{2}} (\varepsilon _i -\sqrt{-1} \varepsilon_{n+i}).$$
Then under the basis $\{ \varepsilon_1, \ldots , \varepsilon_{2n}\}$ of $T$, $A^{\eta}$ will take the form
\begin{eqnarray}
A^{\eta} & = & \left( \begin{array}{ll} \mbox{Re}(H^{\eta })+\mbox{Re}(S^{\eta }) & \mbox{Im}(H^{\eta })- \mbox{Im}(S^{\eta }) \\ - \mbox{Im}(H^{\eta })-\mbox{Im}(S^{\eta }) & \mbox{Re}(H^{\eta })-\mbox{Re}(S^{\eta })\end{array} \right)
\end{eqnarray}
where $H^{\eta }=\langle H_{i\overline{j}}, \eta \rangle$ and $S^{\eta }=\langle S_{ij}, \eta \rangle$. Note that under any tangent frame $\{\varepsilon_1, \ldots , \varepsilon_{2n}\}$, the shape operator $A_{\eta }$ and the shape form $A^{\eta }$ are related by
$$ A_{\eta} (\varepsilon_i) = \sum_{j=1}^{2n} (A^{\eta }g^{-1})_{ij}\varepsilon_j =\sum_{j,k=1}^{2n} A^{\eta }_{ik}g^{kj}\varepsilon_j
$$
where $A^{\eta}_{ij}=A^{\eta}_{\varepsilon_i\varepsilon_j}$, $g_{ij}=\langle \varepsilon_i, \varepsilon_j\rangle$, and $(g^{ij})$ is the inverse matrix of $(g_{ij})$.
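To see where the block form $(2.4)$ comes from, one can check a typical entry; this is a routine computation which we include for convenience. Since $\varepsilon_i = \frac{1}{\sqrt{2}}(e_i + \overline{e_i})$, the ${\mathbb C}$-bilinearity of the extended $\alpha_f$ gives
$$ A^{\eta}_{\varepsilon_i \varepsilon_j} = \frac{1}{2}\, \langle S_{ij} + H_{i\overline{j}} + H_{j\overline{i}} + \overline{S_{ij}},\ \eta \rangle = \mbox{Re}(H^{\eta}_{i\overline{j}}) + \mbox{Re}(S^{\eta}_{ij}), $$
which is the upper-left block of $(2.4)$; the remaining blocks follow in the same way, using $\varepsilon_{n+i}=\frac{\sqrt{-1}}{\sqrt{2}}(e_i - \overline{e_i})$.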
\vspace{0.4cm}
Next let us recall the Codazzi equation:
\begin{eqnarray}
\nabla_u(A_{\xi }v)-\nabla_v(A_{\xi }u)-A_{ \nabla^{\perp }_u\xi }v+ A_{ \nabla^{\perp }_v\xi }u -A_{\xi }[u,v]=0
\end{eqnarray}
for any vector fields $u$, $v$ on $M$ and normal section $\xi$. For any type $(1,0)$ tangent vector $X$ and any (possibly complexified) normal vector $\xi$, let us denote by
\begin{eqnarray}
A_{\xi }X=H_{\xi }X + S_{\xi } X
\end{eqnarray}
the decomposition of $A_{\xi }X $ into its $(1,0)$ part and $(0,1)$ part. This gives us operators $H_{\xi }$ and $S_{\xi }$, which are determined by
$$ H_{\xi }X=\sum_{i=1}^n H^{\xi }_{X\overline{i}} e_i, \ \ \ \ S_{\xi }X=\sum_{i=1}^n S^{\xi }_{Xi} \overline{e_i} $$
under any unitary frame $\{ e_1, \ldots , e_n\}$ of $V$. Note that $H_{\xi }(V)\subseteq V$ and $H_{\xi }(\overline{V})\subseteq \overline{V}$, while $S_{\xi }(V)\subseteq \overline{V}$ and $S_{\xi }(\overline{V})\subseteq V$. Extending the Codazzi equation linearly to all complexified tangent vectors and taking the $(1,0)$ and $(0,1)$ parts in (2.5), we get
\begin{eqnarray}
\nabla_X(H_{\xi }Y)-\nabla_Y(H_{\xi }X)-H_{ \nabla^{\perp }_X\xi }Y + H_{ \nabla^{\perp }_Y\xi }X -H_{\xi }[X,Y] &= &0 \ \ \ \ \\
\nabla_{X}(S_{\xi }Y)-\nabla_{Y}(S_{\xi }X)-S_{ \nabla^{\perp }_X\xi }Y+ S_{ \nabla^{\perp }_Y\xi }X -S_{\xi }[X,Y] &=&0
\end{eqnarray}
and
$$
\nabla_{\overline{Y}}(S_{\xi }X)-S_{\nabla^{\perp }_{\overline{Y}}\xi }X-S_{\xi }(\nabla_{\overline{Y}}X)=
\nabla_{X} (H_{\xi }{\overline{Y}})-H_{\nabla^{\perp }_X\xi }\overline{Y}-H_{\xi }(\nabla_X\overline{Y})
$$
for any type $(1,0)$ vector fields $X$, $Y$ on $M$ and any normal field $\xi$. In particular, in the minimal case, namely, when $H=0$, we have
\begin{eqnarray}
S_{\nabla^{\perp }_{\overline{Y}}\xi }X = \nabla_{\overline{Y}}(S_{\xi }X)-S_{\xi }(\nabla_{\overline{Y}}X), \ \ \ \ \ \mbox{when} \ H=0
\end{eqnarray}
for any $\xi$ in $N$ and any $X$, $Y$ in $V$.
\vspace{0.4cm}
\vspace{0.4cm}
\section{The Algebraic Lemma}
\vspace{0.4cm}
In this paper, we shall be primarily interested in the case when $p=4$ and $r>4$, although some of the arguments work in more general cases as well. Our first objective is to show that at a generic point $x$ in $M^n$, the second fundamental form takes a rather special form. First, let us introduce the following
\vspace{0.4cm}
\noindent {\bf Definition.} {\em Let $V\cong {\mathbb C}^n$ and $N\cong {\mathbb R}^p$ be equipped with inner products, and let $H$ and $S$ be, respectively, a Hermitian and a symmetric bilinear map from $V$ into $N_{\mathbb C}=N\otimes {\mathbb C}$ satisfying the symmetry conditions (2.1)-(2.3). Let $E$ be a subspace of $N$. An {\bf almost complex structure} $J$ on $E$ is an isometry from $E$ onto itself such that $J^2=-I$, and for any $\eta \in E$, $H^{\eta}=0$ and $S^{J\eta }=-\sqrt{-1}S^{\eta }$ hold. }
\vspace{0.4cm}
Here we wrote $H^{\eta} =\langle H, \eta\rangle$ and $S^{\eta} =\langle S, \eta\rangle$. Note that $E$ is necessarily even dimensional, and the condition on $J$ is equivalent to $A_{J\eta }= JA_{\eta}$ for any $\eta \in E$, where $A_{\eta}$ is the shape operator, related to the shape form $A^{\eta}$ by the metric on $T\cong V$, which in turn is related to $H^{\eta}$ and $S^{\eta }$ by (2.4).
\vspace{0.2cm}
We will assume that the dimension $p$ of $N$ is minimal, namely, for any $\eta \neq 0$ in $N$, either $H^{\eta}$ or $S^{\eta }$ is not zero. This is equivalent to $A_{\eta }\neq 0$ for any $\eta \neq 0$ in $N$. Note that under this assumption, the almost complex structure on any subspace $E$ of $N$, if it exists, must be unique. To see this, suppose $J$ and $J'$ are both almost complex structures on $E\subseteq N$. Then for any $\eta \in E$, we have $H^{\eta }=0$ and $S^{J\eta }=-\sqrt{-1}S^{\eta} = S^{J'\eta }$, so $S^{J\eta -J'\eta }=0$. So if $J\neq J'$, then by (2.4) there would be $\eta \neq 0$ in $E$ such that $A_{\eta }=0$, contradicting our assumption that $p$ is minimal.
\vspace{0.2cm}
As a consequence of this uniqueness, we know that if $E_1$, $E_2$ are both subspaces of $N$ admitting almost complex structures, then both $E_1\cap E_2$ and $E_1+E_2$ also admit almost complex structures. So there is always a (unique) maximal subspace $E$ in $N$, possibly trivial, that is equipped with an almost complex structure. We will call this subspace $E$ the {\em complex part of} $N$.
\vspace{0.2cm}
Let $E'$ be the orthogonal complement of the complex part $E$ in $N$, and write $S'=\langle S, E'\rangle $. Then by the definition of the almost complex structure, we know $S'$ again satisfies (2.3). Also, if $S^{\eta }$ has rank at most $1$, then in $\{\eta\}^{\perp}$, $S$ also satisfies (2.3). Our main goal in this section is to prove the following
\vspace{0.4cm}
\noindent {\bf Algebraic Lemma.} {\em Let $V\cong {\mathbb C}^r$, $N\cong {\mathbb R}^4$ be equipped with inner products, and let $H$ and $S$ be, respectively, a Hermitian and a symmetric bilinear form from $V$ into $N_{\mathbb C}$ satisfying the symmetry conditions (2.1)-(2.3). We assume that $\mbox{ker}(H)\cap \mbox{ker}(S)=0$ and $r>4$. Then $N$ has non-trivial complex part. That is, either $N$ itself or a $2$-dimensional subspace $E$ in it admits an almost complex structure. Furthermore, in the latter case we have
$$\mbox{dim}(\mbox{ker}(H) \cap \mbox{ker}(S'))\geq r-2,$$
where $S'=\langle S, E'\rangle $ and $E'$ is the orthogonal complement of $E$ in $N$. }
\vspace{0.4cm}
\noindent {\em Proof:} Since $H$ is Hermitian, its image space is of the form $N'_{\mathbb C}=N'\otimes {\mathbb C}$ for some real linear subspace $N'\subseteq N$. Let $N=N'\oplus N''$ be the orthogonal decomposition and write $H=(H',H'')$ and $S=(S',S'')$ under this decomposition. We have $H''=0$ by definition. Denote by $p'$ and $q=4-p'$ the dimensions of $N'$ and $N''$, respectively.
\vspace{0.2cm}
Let $V_0$ be the kernel of $H$, and $V=V_0\oplus V_1$ the orthogonal decomposition. Write $r_i=\mbox{dim}_{\mathbb C}V_i$ for $i=0,1$. Note that for any $X\in V_0$, $H_{X\overline{\ast }} =0$, so by (2.2), we know that $\langle S_{XY},H_{\ast \overline{\ast }}\rangle =0$ thus $S'_{XY}=0$, for any $Y\in V$. Hence $V_0\subseteq \mbox{ker}(S')$.
\vspace{0.2cm}
From the discussion in \cite{FHZ}, we know that $r_1\leq p'$, and the equality case would imply that $H'$ and $S'$ can be simultaneously diagonalized. In particular, $p'=4$ cannot happen, since $r\geq 5$. Similarly, $p'=3$ cannot happen either, because in this case the rank of $S'$ is at most $r_1\leq 3$. The fact that $r\geq 5$, together with the symmetry condition (2.3), would then force $S''$, and thus $S$, to have a kernel vector within $V_0$, contradicting the fact that $\mbox{ker}(H)\cap \mbox{ker}(S)=0$ in $V$. So we have $p'\leq 2$.
\vspace{0.2cm}
If $p'=2$, then $r_1$ is necessarily $2$, and we are in the diagonal situation. That is, we will have orthonormal basis $\{\xi_1, \xi_2\}$ of $N'$ and basis $\{ e_1, e_2\}$ of $V_1$ such that $V_0=\mbox{ker}(H)\cap \mbox{ker}(S')$, and along $V_1$, the matrices $H^1$, $H^2$, $S^1$, and $S^2$ are respectively
\begin{eqnarray*}
\left( \begin{array}{ll} 1 & 0 \\ 0 & 0 \end{array} \right) ; \ \ \left( \begin{array}{ll} 0 & 0 \\ 0 & 1 \end{array} \right) ;\ \ \left( \begin{array}{ll} \ast & 0 \\ 0 & 0 \end{array} \right) ; \ \ \left( \begin{array}{ll} 0 & 0 \\ 0 & \ast \end{array} \right)
\end{eqnarray*}
Notice that both $S^1$ and $S^2$ have rank $\leq 1$, so the symmetric bilinear form $S''$ from $V$ into $N''\cong {\mathbb R}^2$ satisfies (2.3) as well. Its kernel cannot intersect $V_0$ nontrivially, so its rank is at least $3$. By Lemma 1 below, we know that $N''$ admits an almost complex structure.
\vspace{0.2cm}
If $p'=1$, then $r_1=1$ necessarily, so $V_1$ is one-dimensional and both $H'$ and $S'$ are zero in the codimension one subspace $V_0$ of $V$. Since $S'$ is a matrix of rank $\leq 1$, the remaining part $S''$ will satisfy (2.3) and its rank is at least $4$. So by Lemma 1 below, $N''$ contains a $2$-dimensional subspace $E$ which admits an almost complex structure. Let $0\neq \eta \in N''$ be perpendicular to $E$. Then $S^{\eta }$ again satisfies (2.3), so its rank is at most $1$. Putting $\eta $ together with $N'$ to form the space $E'$, we know that the common kernel of $H$ and $S$ on $E'$ has dimension at least $r-2$.
\vspace{0.2cm}
Finally, when $p'=0$, we are left with $S$ from $V$ into $N={\mathbb R}^4$ satisfying (2.3) and with rank at least $5$. So by Lemma 1 below, we know that either $N$ itself admits an almost complex structure, or it contains a $2$-dimensional subspace $E$ which does. Let $E'=E^{\perp }$ in $N$. Since $S'=\langle S,E'\rangle$ also satisfies (2.3), if it does not admit an almost complex structure, then by Lemma 1 it must have rank less than or equal to $2$, namely, $\mbox{dim}(\mbox{ker}(S'))\geq r-2$. This completes the proof of the Algebraic Lemma. \qed
\vspace{0.4cm}
\noindent {\bf Lemma 1.} {\em Let $V\cong {\mathbb C}^r$ and $N\cong {\mathbb R}^p$ be equipped with inner products, write $N_{\mathbb C}=N\otimes {\mathbb C}$. Let $S: V\times V \rightarrow N_{\mathbb C}$ be a symmetric bilinear map, satisfying (2.3) and with $\mbox{ker}(S)=0$. If $p\leq 4$ and $r>p$, then there exists $X,Y\in V$ such that $S_{XY}\neq 0$ and $\langle S_{XY}, S_{ZW}\rangle =0$ for any $Z,W\in V$. In other words, $N$ always has nontrivial complex part. }
\vspace{0.4cm}
\noindent {\em Proof:} The $p=2$ case is due to Dajczer \cite{D}, and the $p=3$ case is due to Dajczer and Gromoll \cite{DG97}, though their notations are quite different from ours. We will just prove the $p=4$ case here, since the same argument works for the $p=2$ and $p=3$ cases as well. Without loss of generality, we may assume that $r=5$ (when $r>5$, we can simply apply the result to any $5$-dimensional subspace of $V$).
\vspace{0.2cm}
For $X\in V$, consider the linear map $\phi_X:V\rightarrow N_{\mathbb C}$ sending $Y$ to $S_{XY}$. Denote by $K_X$ the kernel of $\phi_X$, and $k_X$ its complex dimension. Since $V\cong {\mathbb C}^5$, $N_{\mathbb C} \cong {\mathbb C}^4$, and $\mbox{ker}(S)=0$, we have $1\leq k_X\leq 4$.
\vspace{0.2cm}
Let $k$ be the minimum of $k_X$ over all $X\in V$, and denote by $V_0$ the open dense subset of $V$ consisting of all $X$ with $k_X=k$. We will also write $m=5-k$; it is the dimension of the image of $\phi_X$ and is also between $1$ and $4$. Notice that the set $\Sigma =\{ X\in V\mid S_{XX}=0\}$ is the intersection of four quadratic hypersurfaces in $V$, so $V_0'=V_0\setminus \Sigma $ is still open dense in $V$.
\vspace{0.2cm}
Fix any $X\in V_0'$. Let $\{ e_1, \ldots , e_5\}$ be a basis of $V$ such that $e_1=X$ and $\{ e_{m+1}, \ldots , e_5\}$ forms a basis of $K_X$. Again we will write $S_{ij}$ for $S_{e_ie_j}$. Then $\{ S_{11}, \ldots , S_{1m}\}$ forms a basis of the image space $P=\phi_X(V)$. We will denote by $Q$ the subspace of $N_{\mathbb C}$ spanned by $S_{i\alpha }$ for all $1\leq i\leq 5$ and all $m< \alpha \leq 5$; that is, $Q=S(K_X\times V)$. Since $S_{1\alpha }=0$, the symmetry condition (2.3) implies that $\langle P, Q\rangle =0$.
\vspace{0.2cm}
We claim that $Q\subseteq P$. Assume otherwise. Then there will be some $m<\alpha \leq 5$ and some $1\leq i\leq 5$ such that $S_{i\alpha }$ is not contained in $P$. Consider the vector $Y=e_1+\lambda e_i$ for a sufficiently small $\lambda$. Then $S_{Y\alpha }=\lambda S_{i\alpha }$, and we have
\begin{eqnarray*}
S_{Y1} \wedge \cdots \wedge S_{Ym} \wedge S_{Y\alpha }= \lambda (S_{11}\wedge \cdots \wedge S_{1m}\wedge S_{i\alpha } +O(\lambda))
\end{eqnarray*}
whose leading term is not zero. So for a sufficiently small value of $\lambda$, the image of $\phi_Y$ has dimension bigger than $m$, a contradiction. This proves that $Q\subseteq P$. Note that $Q\neq 0$ since $\mbox{ker}(S)=0$.
\vspace{0.2cm}
When $m=1$, $Q=P$, so $0\neq S_{11}\in P=Q$ satisfies $\langle S_{11}, S_{ij}\rangle =0$ for any $i$, $j$. If $m=2$, then since we can take $e_2\in V_0'$ also, both $K_1$ and $K_2$ (where we write $K_i$ for $K_{e_i}$) are of codimension $2$, so there will be $0\neq Z\in K_1\cap K_2$. Take $W$ such that $S_{ZW}\neq 0$; then $S_{ZW}\in Q$, and $\langle S_{ZW}, S_{22}\rangle =0$, hence $\langle S_{ZW}, S_{ij}\rangle =0$ for any $i$, $j$. On the other hand, since $\langle P,Q\rangle =0$, $P$ is contained in the orthogonal complement of $\overline{Q}$ in $N_{\mathbb C}$, so $m\leq 3$. From now on, we will assume that $m=3$.
\vspace{0.2cm}
Note that if there are $\alpha , \beta \in \{ 4,5\}$ such that $S_{\alpha \beta }\neq 0$, then since $\langle Q, Q\rangle =0$, by (2.3), we would have
$$ \langle S_{\alpha \beta }, S_{ij}\rangle = \langle S_{\alpha i }, S_{\beta j}\rangle =0 $$
for any $i,j\leq 3$. So $S_{\alpha \beta }$ would complete the proof of the lemma. In other words, if for some $X\in V_0'$ we have $S(K_X\times K_X)\neq 0$, then any non-zero element $S_{ZW}$ in this subspace would satisfy $\langle S_{ZW}, S_{ij}\rangle =0$ for all $i$, $j$. So we may further assume that $S(K_X\times K_X)= 0$ for all $X\in V_0'$. We claim that this is not possible at all, thus completing the proof of the lemma.
\vspace{0.2cm}
Since $V_0'$ is open dense in $V$, we may assume that $e_2$, $e_3$ are in $V_0'$ also. Consider their kernels $K_2$ and $K_3$. If they are both equal to $K_1$, then $e_4$ will be in the kernel of $S$, a contradiction. So we must have one of them, say $K_2$, not equal to $K_1$. Since $Q$ has dimension $1$, $S_{24}$ and $S_{25}$ are proportional to each other. Replacing $\{ e_4, e_5\}$ by another basis of $K_1$ if necessary, we may assume that $S_{24}=0$. On the other hand, since $K_2\neq K_1$, we may replace $e_3$ by another vector in $K_2$, so that $K_2=\mbox{span}\{ e_3, e_4\}$. Since $e_2\in V_0'$, we know that $S(K_2\times K_2)=0$ (unless the lemma holds). However, this means $S_{34}=S_{44}=0$. But we already have $S_{14}=S_{54}=0$ since $e_4\in K_1$, hence $e_4\in \mbox{ker}(S)$, a contradiction once again. This finishes the proof of the lemma. \qed
\vspace{0.4cm}
\vspace{0.4cm}
\section{The Extension Theorems}
\vspace{0.4cm}
Now let us consider a real K\"ahler submanifold $f: M^n \rightarrow {\mathbb R}^{2n+4}$ of codimension $4$. Reducing $M$ to a connected component $U$ of an open dense subset $U'$ of $M$ if necessary, we may assume that both $\Delta $ and $\Delta_0$ have constant dimensions and are distributions. We will also assume that at any $x\in M$, the shape operator $A_{\xi }$ is nonzero for any $\xi \neq 0$. Note that the vanishing of some shape operator everywhere would mean that the codimension can be reduced. By the algebraic lemma proved in the previous section, we know that either the entire normal bundle $N$ or a rank two subbundle $E\subseteq N$ admits an almost complex structure.
\vspace{0.2cm}
We will call an almost complex structure $J$ on $E$ {\em admissible} if
\begin{eqnarray}
J (\nabla^{\perp }_v\xi)^E = (\nabla^{\perp }_vJ\xi)^E
\end{eqnarray}
holds for any $\xi \in E$ and any vector field $v$ in $M$. Here $(W)^E $ stands for the $E$ component of $W$.
\vspace{0.2cm}
Notice that in the case when $E$ has rank $2$, any almost complex structure $J$ on $E$ is automatically admissible: let $\{\xi_1,\xi_2\}$ be a local orthonormal frame of $E$ with $\xi_2=J\xi_1$. Equation (4.1) reduces to
$$ J( \langle \nabla^{\perp }\xi_1, \xi_2\rangle \xi_2 )= \langle \nabla^{\perp }\xi_2 , \xi_1\rangle \xi_1 ,$$
or equivalently
$$ \langle \nabla^{\perp }\xi_1, \xi_2\rangle = - \langle \nabla^{\perp }\xi_2 , \xi_1\rangle ,$$
which always holds, since differentiating $\langle \xi_1, \xi_2\rangle =0$ gives exactly this identity.
\vspace{0.2cm}
In the case when $N$ itself admits an admissible almost complex structure $J$, our goal is to show that $M^n$ is actually a holomorphic submanifold in ${\mathbb C}^{n+2}$. We have the following:
\vspace{0.4cm}
\noindent {\bf Theorem 1. } {\em Let $f: M^n \rightarrow {\mathbb R}^{2n+4}$ be a real K\"ahler submanifold whose normal bundle admits an admissible almost complex structure. Then there exists an isometric identification $\sigma : {\mathbb R}^{2n+4}\cong {\mathbb C}^{n+2}$ such that $\sigma \circ f$ is a holomorphic isometric embedding.}
\vspace{0.4cm}
We will prove this theorem at the end of this section.
\vspace{0.2cm}
In the case of a rank two subbundle $E$ of $N$ admitting an almost complex structure, we would like to show that $M^n$ is a complex submanifold of a complex manifold $Q^{n+1}$, where $Q^{n+1}$ is itself a codimension two real K\"ahler submanifold of ${\mathbb R}^{2n+4}$ containing $M$. We will call such a $Q^{n+1}$ a {\em K\"ahler extension} of $M^n$. To prove this extension theorem, we will need more information about the behavior of the second fundamental form beyond the existence of the almost complex structure on $E$. It turns out that what is needed here is the following data:
\vspace{0.4cm}
\noindent {\bf Definition.} {\em A {\em developable ruling in $E\oplus T$} is a rank two subbundle $L$ of $E\oplus T$, such that $L+T=E\oplus T$ and $\langle \widetilde{\nabla }L, E'\rangle =0$ along $M$. Here $T$ is the tangent bundle of $M$, $E'$ is the orthogonal complement of $E$ in the normal bundle $N$, and $\widetilde{\nabla }$ is the covariant differentiation of the ambient Euclidean metric.}
\vspace{0.4cm}
Note that the subbundle $L$ is necessarily transversal to $T$, but in general not contained in $N$. We will prove the following extension theorem:
\vspace{0.4cm}
\noindent {\bf Theorem 2. } {\em Let $f: M^n \rightarrow {\mathbb R}^{2n+4}$ be a real K\"ahler submanifold. Suppose there is a rank two subbundle $E$ of the normal bundle $N$, an almost complex structure $J$ on $E$, and a developable ruling $L$ in $E\oplus T$. Then there exists a real K\"ahler submanifold $h: Q^{n+1}\rightarrow {\mathbb R}^{2n+4}$ and a holomorphic embedding $\sigma : M^n \rightarrow Q^{n+1}$ such that $f=h\circ \sigma $. }
\vspace{0.4cm}
\noindent {\em Proof:} Let $z=(z_1, \ldots , z_n)$ be a local holomorphic coordinate in $M$ and $\{ \xi_1, \ldots , \xi_4\}$ be an orthonormal frame of $N$, such that $\{ \xi_1,\xi_2\}$ spans $E'$ and $\{ \xi_3,\xi_4\}$ spans $E$. Write $P=E\oplus T$. Since $L+T=P$, there will be a local frame of $L$ given by
$$ \eta_1=\xi_3-v_1, \ \ \ \ \eta_2=\xi_4-v_2 $$
where $v_1$ and $v_2$ are real vector fields of $M$. Since $\langle \widetilde{\nabla }L, E'\rangle =0$, we know that
\begin{eqnarray}
\widetilde{\nabla }_v\eta_i \ \in \ P = L+T
\end{eqnarray}
for $i=1$, $2$ and for any vector field $v$ in $M$.
\vspace{0.2cm}
Let $B\subseteq {\mathbb C}$ be a sufficiently small disc and $t=t_1+\sqrt{-1}t_2$ be the coordinate. Define a $(2n+2)$-dimensional submanifold $h: Q\rightarrow {\mathbb R}^{2n+4}$ by
$$ h(z,t)=f(z)+t_1\eta_1(z)+t_2\eta_2(z) .$$
Since $L$ is transversal to $T$, for sufficiently small values of $|t|$ the map $h$ is an embedding, and $Q$ is ruled along the directions of $L$. By $(4.2)$, the bundle $E'$, which is the normal bundle of $Q$, is constant along each leaf of $L$, thus $Q$ is a developable submanifold (meaning that its tangent space is constant along each ruling). Along the submanifold $M$ of $Q$, the restriction of the tangent bundle $TQ|_M$ is simply $P=L+T$. Since $P=E\oplus T$, and we have almost complex structures $J$ on both $T$ and $E$, we can take their direct sum to get an almost complex structure on $P$. Taking parallel translation along the leaves of $L$, we get an almost complex structure on $TQ$. We will denote this almost complex structure on $TQ$ again by $J$.
\vspace{0.2cm}
To show that $Q$ is a K\"ahler manifold under the restriction of the Euclidean metric, it suffices to show that $\widehat{\nabla}J=0$ on $Q$, where $\widehat{\nabla}$ is the connection on $Q$, namely, the $Q$-component of $\widetilde{\nabla}$. That is, we just need to show that
\begin{eqnarray} \widehat{\nabla}_Z(JW) & = & J(\widehat{\nabla}_ZW)
\end{eqnarray}
holds for any two vector fields $Z$ and $W$ in $Q$. Since $TQ$ is the parallel translation in ${\mathbb R}^{2n+4}$ of $TQ|_M=P$ along the leaves of $L$, and $J$ is also defined by parallel translation along leaves of $L$, we just need to verify the above condition at points in $M$ and with $Z$ tangent to $M$. If $W$ is also tangent to $M$, then the above equation holds in the tangential component of $M$, since $M$ is K\"ahler. For the normal components, since we are only concerned with directions within $Q$, we just need to verify the identity in the $\xi_3$ and $\xi_4$ directions, namely:
\begin{eqnarray*} \langle \widehat{\nabla}_Z( J W) ,\xi_i \rangle & = & \langle J(\widehat{\nabla}_Z W) , \xi_i \rangle
\end{eqnarray*}
for $i=3$ and $4$, where $Z$ and $W$ are vector fields in $M$. This is equivalent to
\begin{eqnarray} J A_{\xi_i} & = & A_{J\xi_i}
\end{eqnarray}
for $i=3$ and $4$.
Since $H^{\xi_3}=H^{\xi_4}=0$ and $S^{\xi_3}=\sqrt{-1}S^{\xi_4}$, by (2.4) we get
$$ JA_{\xi_3}=\left( \begin{array}{cc} 0&-1\\1&0 \end{array} \right) \left( \begin{array}{cc} R_3&-I_3\\-I_3&-R_3 \end{array} \right) =\left( \begin{array}{cc} I_3&R_3\\R_3&-I_3 \end{array} \right) = A_{\xi_4}.$$
Here we wrote $S^{\xi_3}=R_3+\sqrt{-1}I_3$ and $S^{\xi_4}=R_4+\sqrt{-1}I_4$, so $R_3=-I_4$ and $I_3=R_4$. Recall that we have defined $J$ on $E$ by $J\xi_3=\xi_4$ and $J\xi_4=-\xi_3$. So (4.4) holds.
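The identity $JA_{\xi_3}=A_{\xi_4}$ is a purely linear-algebraic consequence of the block formula (2.4) together with the relation $S^{\xi_4}=-\sqrt{-1}S^{\xi_3}$, and can be checked numerically. In the sketch below, $S^{\xi_3}$ is a randomly generated (hypothetical) complex symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical symmetric S^{xi_3}; the compatibility S^{xi_3} = i S^{xi_4}
# forces R3 = -I4 and I3 = R4 for the real and imaginary parts.
W = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S3 = (W + W.T) / 2
S4 = -1j * S3                       # S^{xi_4} = -i S^{xi_3}

def shape_form(S):
    """Block formula (2.4) in the real frame, in the case H^eta = 0."""
    return np.block([[S.real, -S.imag], [-S.imag, -S.real]])

A3, A4 = shape_form(S3), shape_form(S4)

# The complex structure J acts blockwise as ((0,-1),(1,0)) on the real frame.
J = np.block([[np.zeros((n, n)), -np.eye(n)], [np.eye(n), np.zeros((n, n))]])

assert np.allclose(J @ A3, A4)      # the identity J A_{xi_3} = A_{xi_4} of (4.4)
```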
\vspace{0.2cm}
Now we are left with the case where $Z$ is a tangent vector field of $M$ and $W$ is a section of $E$, since $P=E\oplus T$. By the linearity of $J$ and the Leibniz formula, we just need to check this for $W=\xi_3$ and $W=\xi_4$. Namely,
\begin{eqnarray}
\widehat{\nabla}_Z(\xi_4)&=& J(\widehat{\nabla}_Z\xi_3)
\end{eqnarray}
for any tangent vector field $Z$ in $M$. First let us compare the tangential components on both sides. It reduces once again to (4.4). For the normal components in (4.5), notice that $\widehat{\nabla}$ is just the $TQ$ component of $\widetilde{\nabla}$, so we have
\begin{eqnarray*}
(\widehat{\nabla}_Z\xi_3)^{\perp }& = & \langle \widehat{\nabla}_Z\xi_3, \xi_4\rangle \xi_4 \ = \ \langle \nabla^{\perp }_Z\xi_3, \xi_4\rangle \xi_4 \ = \ -\langle \xi_3, \nabla^{\perp }_Z\xi_4\rangle \xi_4 \\
(\widehat{\nabla}_Z\xi_4)^{\perp }& = &\langle \widehat{\nabla}_Z\xi_4, \xi_3\rangle \xi_3 \ =\ \langle \nabla^{\perp }_Z\xi_4, \xi_3\rangle \xi_3
\end{eqnarray*}
So $(J\widehat{\nabla}_Z\xi_3)^{\perp} = J((\widehat{\nabla}_Z\xi_3)^{\perp}) =(\widehat{\nabla}_Z\xi_4)^{\perp }$. This proves the K\"ahlerness of the codimension $2$ submanifold $Q$ in the Euclidean space. The holomorphicity of $M$ in $Q$ is obvious, since we defined our $J$ on $Q$ in such a way that its restriction on $M$ comes from the complex structure. This completes the proof of Theorem 2. \qed
\vspace{0.4cm}
For the K\"ahler extension $h$ obtained in Theorem 2, clearly, if $h$ is minimal, then $f$ is necessarily minimal. Conversely, when $f$ is minimal, we would like to know when will $h$ be minimal. We have the following
\vspace{0.4cm}
\noindent {\bf Theorem 3.} {\em Let $f$, $(E,J)$ and $L$ be as in Theorem 2, and let $h$ be the K\"ahler extension of $f$ obtained from $L$. Suppose $f$ is minimal; then $h$ is minimal if and only if $(v_2-Jv_1)\in \mbox{ker}(A_{\xi_1})\cap \mbox{ker}(A_{\xi_2})$. Here $\{ \xi_1 , \ldots , \xi_4\}$ is an orthonormal frame of $N$, with $\{ \xi_3, \xi_4\}$ a frame of $E$, $\xi_4=J\xi_3$, and $v_1, v_2\in T$ are determined (uniquely) by the condition that $\{ \xi_3-v_1, \xi_4-v_2\}$ spans $L$. }
\vspace{0.4cm}
\noindent {\em Proof:} Note that $\xi_1$ and $\xi_2$ span the normal bundle of $Q$ in ${\mathbb R}^{2n+4}$, and $h$ is minimal if and only if its $H$ vanishes, or equivalently, $J\hat{A}_{\xi_{\alpha }}=\hat{A}_{\xi_{\alpha }}J$ for $\alpha =1 $ and $2$, where $J$ is the almost complex structure of $Q$ and $\hat{A}$ is the shape operator of $Q$. That is, for $1\leq \alpha \leq 2$ and any vector fields $Z$, $W$ on $Q$,
\begin{eqnarray*}
\langle J\hat{A}_{\xi_{\alpha }}Z, W\rangle = \langle \hat{A}_{\xi_{\alpha }}J Z, W\rangle ,
\end{eqnarray*}
or equivalently,
\begin{eqnarray}
- \langle \widetilde{\nabla}_Z J W , \xi_{\alpha } \rangle = \langle \widetilde{\nabla}_{JZ} W , \xi_{\alpha } \rangle .
\end{eqnarray}
By the construction of $h$, $TQ$ is the parallel translate of $TQ|_M$ along the leaves of $L$, and $J$ and both $\xi_{\alpha }$ are parallel along each leaf of $L$, so we just need to check (4.6) at points in $M$, and for $Z$ a vector field in $M$.
\vspace{0.2cm}
Since $TQ|_M=E\oplus T$, we just need to verify (4.6) for $W$ either a vector field in $M$ or a section of $E$. In the former case, (4.6) is just the minimality of $f$. When $W$ is a section of $E$, (4.6) becomes
\begin{eqnarray}
\langle J W , \widetilde{\nabla}_Z\xi_{\alpha } \rangle = - \langle W , \widetilde{\nabla}_{JZ}\xi_{\alpha } \rangle
\end{eqnarray}
for each $\alpha =1,2$. Clearly, we just need to verify (4.7) for $W=\xi_3$.
\vspace{0.2cm}
Now suppose that $\xi_3-v_1$ and $\xi_4-v_2$ span $L$, and $\xi_4=J\xi_3$. Note that since $L$ is transversal to $T$, the map $\pi|_L: L\rightarrow E$ is bijective. Here $\pi$ is the projection map from $E\oplus T$ onto $E$. So $v_1$, $v_2$ are uniquely determined by the choice of $\{ \xi_3,\xi_4\}$. By the definition of developable ruling, we know that $\langle \widetilde{\nabla}\xi_{\alpha }, L\rangle =0$, so
\begin{eqnarray*}
\langle \xi_4 , \widetilde{\nabla}_Z\xi_{\alpha } \rangle &=& \langle v_2 , \widetilde{\nabla}_Z\xi_{\alpha } \rangle \ = \ \langle A_{\xi_{\alpha }}(v_2), Z\rangle , \ \ \mbox{and} \\
\langle \xi_3 , \widetilde{\nabla}_{JZ}\xi_{\alpha } \rangle &= & \langle v_1 , \widetilde{\nabla}_{JZ}\xi_{\alpha } \rangle \ = \ \langle A_{\xi_{\alpha }} (v_1), JZ\rangle \ = \ \langle A_{\xi_{\alpha }} (J v_1), Z\rangle
\end{eqnarray*}
Note that in the last equality we used the minimality of $M$, namely, we always have $JA=-AJ$. Plugging these two equalities into (4.7) for $W=\xi_3$, we get
$$ \langle A_{\xi_{\alpha }}(v_2-Jv_1) , Z\rangle =0 $$
for any vector field $Z$ in $M$, that is
\begin{eqnarray}
A_{\xi_{\alpha }}(v_2-J v_1)=0, \ \ \ \ \alpha = 1, 2.
\end{eqnarray}
So when $f$ is minimal, $h$ will be minimal if and only if $v_2-Jv_1$ belongs to $\mbox{ker}(A_{\xi_1})\cap \mbox{ker}(A_{\xi_2})$, which is the real subspace of $T$ corresponding to $\mbox{ker}(S')$ in $V$. Here $S'=(S^1,S^2)$. This completes the proof of Theorem 3. \qed
\vspace{0.4cm}
\noindent {\em Remark:} \ Let us denote by $\pi : E\oplus T \rightarrow E$ the projection map, and by $\tau : E \rightarrow L$ the inverse of the restriction map $\pi|_L: L\rightarrow E$. Then the condition stated in Theorem 3 can be rephrased as
$$ \tau (J\eta)-J\tau (\eta) \ \in \ \mbox{ker}(A_{\xi_1})\cap \mbox{ker}(A_{\xi_2}) $$
for any $\eta$ in $E$. Here $\{ \xi_1, \xi_2\}$ is a basis of $E'$, the orthogonal complement of $E$ in $N$.
\vspace{0.2cm}
Now let us prove Theorem 1 stated at the beginning of this section.
\vspace{0.4cm}
\noindent {\em Proof of Theorem 1:} Note that in this case, the ambient Euclidean space is automatically a developable submanifold (of itself) over $M$, with the fibers of the normal bundle $N$ as the leaves of the ruling. Define an almost complex structure $J$ on $T\oplus N$ by taking the direct sum of the almost complex structure of $M$ with the given one on $N$; using parallel translation along the leaves of $N$ to push it to a small tubular neighborhood $\Omega $ of $M$, we get an almost complex structure $J$ on the open subset $\Omega $ of ${\mathbb R}^{2n+4}$. $J$ is clearly an isometry. One can see that $\widetilde{\nabla }J=0$ just as in the proof of Theorem 2, with the help of $(4.1)$. So this $J$ comes from an isometric identification ${\mathbb R}^{2n+4}\cong {\mathbb C}^{n+2}$, and $M$ becomes a complex submanifold with complex codimension $2$. This completes the proof of Theorem 1. \qed
\vspace{0.4cm}
\vspace{0.4cm}
\section{The Proof of the Main Theorem}
\vspace{0.4cm}
In this section, we will prove the main theorem. For $x\in M$, let us denote by $N_0(x)$ the subspace of $N_x$ consisting of all $\eta$ with $A_{\eta }=0$. Note that the presence of normal directions in which the shape operator vanishes would mean that the codimension can be reduced (see \cite{Spivak}, Prop. 24). In the interior part $U_0$ of the set where $N_0\neq 0$, there will be an open dense subset of $U_0$ such that, within each connected component of it, $M$ is a real K\"ahler submanifold of smaller codimension. Since the main theorem is known in codimension three or less, in the following we will assume that
\vspace{0.4cm}
{\em $N_0=0$ everywhere in $M$. That is, $A_{\eta }\neq 0$ for any $\eta \neq 0$.}
\vspace{0.4cm}
First let us consider the non-minimal case; in other words, we restrict ourselves to the open subset of $M$ in which $H\neq 0$, if that set is non-empty. Since $r\geq 5$, we know that the image of $H$ is either $1$- or $2$-dimensional. In the open subset $U_2$ where $H$ has a $2$-dimensional image space $E'$, there are exactly two directions, perpendicular to each other, in which $H$ has rank $1$. Let $\xi_1$ and $\xi_2$ be the unit vectors in those two directions; they are unique up to signs and interchange. In this case, as a consequence of (2.2), $S^{\xi_1}$ and $S^{\xi_2}$ can be diagonalized accordingly.
\vspace{0.2cm}
In the open subset $M\setminus \overline{U_2}$, the image of $H$ is $1$-dimensional, and we will let $\xi_1$ be the unit vector in this direction (unique up to a sign).
\vspace{0.2cm}
In both cases, by the discussion of the algebraic lemma and formula (2.4), we know that locally there will be an orthonormal frame $\{ \xi_1, \ldots , \xi_4\}$ such that $A_{\xi_1}$ and $A_{\xi_2}$ are both of rank $2$ or less, and $A_{\xi_4}=JA_{\xi_3}$ has rank at least $6$. Furthermore, $E'=\mbox{span}\{ \xi_1, \xi_2\}$, as the set of all normal directions in which the shape operator has rank $4$ or less, is uniquely determined. Also, if we restrict ourselves to a connected component $U$ of an open dense subset of $M$, we may assume that in $U$ the orthonormal frame $\{ \xi_1, \xi_2\}$ of $E'$ is also uniquely determined, up to interchange and signs.
\vspace{0.2cm}
By letting $J\xi_3=\xi_4$ and $J\xi_4=-\xi_3$, we get an almost complex structure on $E$, the orthogonal complement of $E'$ in $N$. So to prove the main theorem, it suffices by Theorem 2 to find a developable ruling $L$ in $E\oplus T$. This will follow from the Codazzi equation (2.5) and a clever argument discovered by Dajczer and Gromoll in \cite{DG97}.
\vspace{0.2cm}
Consider $\eta=\xi_1$ or $\xi_2$. $A_{\eta}$ has rank $q\leq 2$. Denote by $\Delta_{\eta }$ the kernel of $A_{\eta}$ in $T$, and by $\Delta_{\eta}^{\perp}$ its orthogonal complement in $T$; $\Delta_{\eta}^{\perp}$ is also the image space of $A_{\eta}$. First we claim the following:
\vspace{0.4cm}
\noindent {\bf Claim:} {\em For either $\eta=\xi_1$ or $\eta=\xi_2$, the $E$-component of $ \nabla^{\perp }_v\eta$, denoted by $( \nabla^{\perp }_v\eta )^E$, is always $0$ for all $v\in \Delta_{\eta }$. That is, for any $v\in \Delta_{\eta }$, we have}
\begin{eqnarray}
\langle \nabla^{\perp }_v\eta , \xi_3\rangle = \langle \nabla^{\perp }_v\eta , \xi_4\rangle =0 .
\end{eqnarray}
\vspace{0.2cm}
To prove the claim, assume the contrary. Without loss of generality, we may assume that $\eta=\xi_1$ and there is a $v\in \Delta_{\eta }$ such that $\xi=( \nabla^{\perp }_v\eta)^E \neq 0$. By (2.5), since $A_{\eta }v=0$, we have
\begin{eqnarray}
A_{ \nabla^{\perp }_v\eta }u= A_{ \nabla^{\perp }_u\eta }v + \nabla_v(A_{\eta }u)+ A_{\eta }[u,v]
\end{eqnarray}
for any $u\in T$. Let $T_{\eta}=\{ u\in T\mid (\nabla^{\perp}_u\eta )^E=0\}$. Since $E$ is $2$-dimensional, the codimension of $T_{\eta}$ in $T$ is at most $2$.
\vspace{0.2cm}
Let $\{ e_1, \ldots , e_n\} $ be a frame of $V$ such that $\{ e_3, \ldots , e_n\}$ is a unitary frame of $V_0=\mbox{ker}(H) \cap \mbox{ker}(S')$ and is perpendicular to $\{ e_1, e_2\}$. We will also assume that $\{ e_{r+1}, \ldots , e_n\}$ is a unitary frame of the subspace $D\subseteq V$ corresponding to $\Delta_0$. So $\{ e_1, \ldots , e_r\}$ is a frame of $D^{\perp }$ corresponding to $\Delta_0^{\perp}\cong {\mathbb R}^{2r}$.
\vspace{0.2cm}
Let $W\subseteq T$ be the subspace corresponding to $V_0$ under the identification $V\cong T$. Note that $W\subseteq \Delta_{\xi_1}\cap \Delta_{\xi_2}$. Now consider the space $W'=W\cap \Delta_0^{\perp }$. Its real dimension is $2r-4\geq 6$ since $r\geq 5$, so the space $W''=W'\cap T_{\eta }$ is at least $4$ dimensional, as $T_{\eta }$ has codimension at most $2$ in $T$.
\vspace{0.2cm}
By (5.2), we know that for any $u\in W''$, $A_{\xi}u$ is contained in the space
$$ \Delta_{\eta}^{\perp } + \mbox{span} \{ A_{\xi_2}v\} ,$$
which has dimension at most $3$. So there is some $0\neq u_0\in W''$ such that $A_{\xi}u_0=0$. We have $A_{\xi_1}u_0=A_{\xi_2}u_0=0$ since $u_0\in W$. On the other hand, since $\xi\neq 0$, $\{ \xi , J\xi \}$ spans $E$, so by the fact that $A_{J\xi}=JA_{\xi}$, we get $A_{\eta '}u_0=0$ for any normal direction $\eta '$. This means that $\alpha_f(u_0, w)=0$ for any $w\in T$.
\vspace{0.2cm}
If we write $u_0=X+\overline{X}$ for (a unique) $X\in V$, then for any $Y\in V$, we have
$$ \alpha_f(u_0,Y) = S_{YX} + H_{Y\overline{X}} =0, \ \ \ \ \ \forall \ Y\in V.$$
Since $X\in W\subseteq \mbox{ker}(H)$, we get $S_{YX}=0$ for any $Y$, thus $X\in \mbox{ker}(S)$ as well. This forces $X=0$, since we assumed that $u_0\in \Delta_0^{\perp }$. Thus $u_0=0$, a contradiction, and the proof of the claim is complete.
\vspace{0.4cm}
From the discussion in the algebraic lemma, we know that there is a local frame $\{ e_1, \ldots , e_n\}$ of $V$, such that $\{ e_3, \ldots , e_n\}$ is a unitary frame of $V_0$ and is perpendicular to $\{ e_1, e_2\}$, and under this frame it holds that
\begin{eqnarray*}
H^{\xi_1}&=& \mbox{diag}(1,0,0, \ldots , 0)\\
S^{\xi_1}&=& \mbox{diag}(a,0,0, \ldots , 0)\\
H^{\xi_2}&=& \mbox{diag}(0,\delta ,0, \ldots , 0)\\
S^{\xi_2}&=& \mbox{diag}(0,b,0, \ldots , 0)
\end{eqnarray*}
where $\delta =0$ or $1$, and $a$, $b$ are nonnegative. Write $e_i=\varepsilon_{2i-1}-\sqrt{-1}\varepsilon_{2i}$ for $1\leq i\leq n$, then under the real tangent frame $\{ \varepsilon_1, \ldots , \varepsilon_{2n}\}$, the first two shape forms are given by
\begin{eqnarray*}
A^{\xi_1}&=& \mbox{diag}(1\!+\!a,1\!-\!a,\ \ 0,\ \ \ 0; \ \ 0, \ldots , 0)\\
A^{\xi_2}&=& \mbox{diag}(\ \ 0,\ \ \ \ 0,\ \ \delta \!+\!b, \delta \!-\!b; 0, \ldots , 0)
\end{eqnarray*}
Our goal is to show that there exist vector fields $v_1$ and $v_2$ on $M$ such that $L=\mbox{span}\{\xi_3-v_1, \xi_4-v_2\}$ satisfies $\langle \widetilde{\nabla }E', L\rangle =0$. That is, for any $i,j=1,2$,
$$ \langle \xi_{2+i} -v_i,\widetilde{\nabla }\xi_j\rangle =0 $$
or equivalently
\begin{eqnarray}
\langle \xi_{2+i} ,\nabla^{\perp }_u\xi_1 \rangle & = & \langle v_i ,A_{\xi_1}u \rangle \\
\langle \xi_{2+i} ,\nabla^{\perp }_u\xi_2 \rangle & = & \langle v_i ,A_{\xi_2}u \rangle
\end{eqnarray}
for each $i=1,2$ and any $u$ in $T$.
\vspace{0.2cm}
By the claim above, both sides of (5.3) are zero if $u$ is in the kernel space of $A_{\xi_1}$, which is spanned by $\varepsilon_3$ through $\varepsilon_{2n}$, and also $\varepsilon_2$ if $a=1$. So (5.3) only needs to hold for all $u\in \Delta_{\xi_1}^{\perp } = \mbox{Im}(A_{\xi_1 })$.
\vspace{0.2cm}
Similarly, both sides of (5.4) vanish if $u$ is in the kernel of $A_{\xi_2}$, which is spanned by $\varepsilon_1$, $\varepsilon_2$, and $\varepsilon_5$ through $\varepsilon_{2n}$, and also $\varepsilon_4$ if $\delta =b$. So (5.4) only needs to hold for all $u\in \Delta_{\xi_2}^{\perp } = \mbox{Im}(A_{\xi_2 })$.
\vspace{0.2cm}
Since $\Delta_{\xi_1} + \Delta_{\xi_2}=T$, we must have $\Delta_{\xi_1}^{\perp} \cap \Delta_{\xi_2}^{\perp}=0$. So we have direct sum decomposition
$$ T=(\Delta_{\xi_1} \cap \Delta_{\xi_2})\oplus \Delta_{\xi_1}^{\perp} \oplus \Delta_{\xi_2}^{\perp},$$
and $v_1$, $v_2$ can be uniquely determined in $\Delta_{\xi_1}^{\perp} \oplus \Delta_{\xi_2}^{\perp}$ by (5.3) and (5.4). But adding any element of $\Delta_{\xi_1} \cap \Delta_{\xi_2}$ onto $v_1$ or $v_2$ would not affect (5.3) or (5.4). This establishes the existence of a developable ruling $L$ for $E$, and the proof of the main theorem is complete in the non-minimal case.
\vspace{0.4cm}
Next let us consider the minimal case, namely $H=0$ everywhere. By our previous discussion on the algebraic lemma, we know that either there exists a $2$-dimensional subspace $E'$ of $N$ in which the kernel of $S'$ has codimension at most $2$, and the orthogonal complement $E$ admits an almost complex structure $J$; or the entire normal bundle $N$ admits an almost complex structure $J$. In both cases, the almost complex structure is unique since no shape operator is allowed to vanish. We claim that $J$ is always admissible. This is automatic on any rank $2$ bundle, while in the case of $J$ on the rank four bundle $N$, we claim the following admissibility result:
\vspace{0.4cm}
\noindent {\bf Theorem 4.} {\em Let $f:M^n \rightarrow {\mathbb R}^{2n+4}$ be a real K\"ahler submanifold such that there is an almost complex structure $J$ on $N$. Assume that no shape operator vanishes and that the rank satisfies $r\geq 2$ everywhere. Then $J$ is admissible; namely, for any tangent vector $v$ and any normal field $\xi$, it holds that}
\begin{eqnarray}
\nabla^{\perp }_vJ\xi =J\nabla^{\perp }_v\xi
\end{eqnarray}
\vspace{0.2cm}
Let us continue with our proof of the main theorem first, assuming that Theorem 4 is already established. In the case when $N$ itself is equipped with an almost complex structure $J$, Theorem 4 says that $J$ is admissible. So by Theorem 1 in the previous section, we know that there is an isometric identification ${\mathbb R}^{2n+4} \cong {\mathbb C}^{n+2}$ under which $f$ becomes a holomorphic map. That is, $f: M^n\rightarrow {\mathbb C}^{n+2}$ is a holomorphic isometric embedding. Note that in this case, any local piece of holomorphic hypersurface $Q^{n+1}$ containing (a piece of) $M^n$ would be a K\"ahler extension of $M$. So the conclusion of the main theorem holds in this case.
\vspace{0.4cm}
\noindent {\em Proof of Theorem 4:} Let us choose a local orthonormal frame $\{ \xi_1, \ldots , \xi_4\}$ for the normal bundle $N$, so that $\xi_3=J\xi_1$ and $\xi_4=J\xi_2$. For any $1\leq \alpha , \beta \leq 4$, let us denote by $\phi_{\alpha \beta}$ the real $1$-form on $M$ given by $\langle \nabla^{\perp }\xi_{\alpha } , \xi_{\beta }\rangle $. Write the $4\times 4$ real, skew-symmetric matrix $\phi = (\phi_{\alpha \beta })$ in $2\times 2$ blocks:
$$ \phi = \left( \begin{array}{cc} \phi^1 & \phi^2 \\ - ^t\!\phi^2 & \phi^3 \end{array} \right) $$
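For concreteness, this equivalence can be checked blockwise. In the frame $\{ \xi_1, \ldots, \xi_4\}$ with $\xi_3=J\xi_1$ and $\xi_4=J\xi_2$, writing $J\xi_{\alpha }=\sum_{\gamma }J_{\alpha \gamma }\xi_{\gamma }$ gives $J$ the block matrix $\left( \begin{array}{cc} 0 & I_2 \\ -I_2 & 0 \end{array} \right)$, and since $\nabla^{\perp }\xi_{\alpha }=\sum_{\beta }\phi_{\alpha \beta }\xi_{\beta }$, the admissibility condition (5.5) applied to the frame fields reads $J\phi =\phi J$. Computing both products blockwise,
$$ J\phi = \left( \begin{array}{cc} - ^t\!\phi^2 & \phi^3 \\ -\phi^1 & -\phi^2 \end{array} \right), \qquad \phi J = \left( \begin{array}{cc} -\phi^2 & \phi^1 \\ -\phi^3 & - ^t\!\phi^2 \end{array} \right) ,$$
so the two agree exactly when $\phi^1=\phi^3$ and $^t\!\phi^2=\phi^2$.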
It is easy to see that (5.5) is equivalent to $\phi^1=\phi^3$ and $\ ^t\phi^2=\phi^2$. Write
$$ (\phi^1-\phi^3) + \sqrt{-1}\ (\ ^t\!\phi^2-\phi^2) = \left( \begin{array}{cc} 0 & 1 \\ - 1 & 0 \end{array} \right) \lambda ,$$
then it suffices to show that $\lambda =0$. Let $\{ e_1, \ldots , e_n\}$ be a unitary frame of $V$, and let $\{ \varphi_1, \ldots , \varphi_n\}$ be its dual coframe of $(1,0)$-forms on $M$. Write $\langle \widetilde{\nabla } e_i, \xi_{\alpha }\rangle = \psi_i^{\alpha }$, then since $H=0$, each
$$ \psi_i^{\alpha } = \sum_{j=1}^n S^{\alpha }_{ij}\varphi_j $$
is a $(1,0)$-form. Denote by $\psi^{\alpha }$ for the column vector $^t\!(\psi^{\alpha }_1, \ldots , \psi_n^{\alpha })$, and write
$$ \psi = (\psi ' ; \ \psi '') = (\psi^1, \psi^2; \ \psi^3,\psi^4).$$
By our choice of the normal frame, we have $\psi '' = -\sqrt{-1} \psi '$, therefore
\begin{eqnarray}
\psi = (\psi ', -\sqrt{-1}\psi ').
\end{eqnarray}
The connection matrix of $\widetilde{\nabla }$ under the frame $\{ e, \overline{e}, \xi \}$ is
$$ \tilde{\theta } = \left( \begin{array}{ccc} \theta & 0 & \psi \\ 0 & \overline{\theta } & \overline{\psi } \\ - ^t\!\overline{\psi } & - ^t\!\psi & \phi \end{array} \right) .$$
Applying (5.6) to the Codazzi equation $d\psi = \theta \psi + \psi \phi $, we get two equations. Multiplying the second equation by $\sqrt{-1}$, and take its difference with the first equation, we get
$$ \psi ' \left( \begin{array}{cc} 0 & 1 \\ - 1 & 0 \end{array} \right) \lambda =0,$$
or equivalently, $\psi^1\wedge \lambda = \psi^2\wedge \lambda =0$. We claim that this forces $\lambda =0$, thus proving Theorem 4. Write $\lambda = \sum_{k} (a_k\varphi_k + b_k \overline{\varphi_k})$. The above equation on $\lambda $ means that for each $i$ and each $\alpha $,
$$ \sum_{j,k=1}^n S^{\alpha }_{ij} a_k \varphi_j \wedge \varphi_k + \sum_{j,k=1}^n S^{\alpha }_{ij}b_k \varphi_j \wedge \overline{\varphi_k} \ =\ 0.$$
The second part implies that
$ S_{ij}^{\alpha } b_k =0 $ for any $i,j,k$, thus $b_k=0$ for all $k$. The first part implies that
$S^{\alpha }_{ij}a_k=S^{\alpha }_{ik}a_j$ for any $\alpha $ and any $i,j,k$. Since $M$ has rank $r\geq 2$, there is some combination $S=\sum t_{\alpha }S^{\alpha }$ such that $S$ is a complex symmetric matrix of rank at least $2$. Take a unitary matrix $P$ such that $^t\!P^{\!-\!1}SP^{\!-\!1}=D=\mbox{diag}(d_1, \ldots , d_n)$ is diagonal, with $d_1d_2\neq 0$. Then we have $S=\ ^t\!PDP$, and $S_{ij}a_k=S_{ik}a_j$ for any $i,j,k$ becomes
$$ d_lP_{lj}a_k= d_l P_{lk}a_j$$
for any $l,j,k$. Taking $l=1$ and $2$, we see that if the $a_k$ are not all zero, then the first two rows of $P$ are proportional, a contradiction. So we must have $a_k=0$ for all $k$. This completes the proof of Theorem 4. \qed
\vspace{0.4cm}
Now we are left with the situation when there exists an orthogonal decomposition $N=E'\oplus E$ such that $E$ is equipped with an almost complex structure $J$, and the kernel of $S'$ has codimension at most $2$. Here $S'$ is the $E'$-component of $S$. Write $V_0=\mbox{ker}(S')$ and denote by $k$ its codimension; $k$ is either $1$ or $2$. Let $\{ \xi_1, \ldots , \xi_4\}$ be a local orthonormal frame of $N$ such that $\{ \xi_1, \xi_2\}$ is a frame of $E'$. We have $H=0$ and $S^{\xi_3} = \sqrt{-1} S^{\xi_4}$.
\vspace{0.2cm}
By our previous discussion, we may exclude the possibility that $E'$ is also equipped with an almost complex structure. In other words, we may assume that
\begin{eqnarray}
S^{\xi_1} \neq \pm \sqrt{-1} S^{\xi_2}.
\end{eqnarray}
Also, the symmetry condition (2.3) holds for $S'$ as well. Our goal is to establish the existence of a developable ruling $L$ for $E$.
\vspace{0.2cm}
We will consider the case $k=2$ first. Let $\{ e_1, \ldots , e_n\}$ be a unitary frame of $V$, such that $\{ e_3, \ldots , e_n\}$ is a frame of $V_0=\mbox{ker}(S')$. As in the proof of Theorem 4, we will write $$\psi_i^{\alpha }=\langle \widetilde{\nabla}e_i, \xi_{\alpha }\rangle, \ \ \ \phi_{\alpha \beta } = \langle \nabla^{\perp } \xi_{\alpha }, \xi_{\beta }\rangle $$
and denote by $\theta$ the connection matrix of $M$ under $e$. We also let $\{ \varphi_1, \ldots , \varphi_n\}$ be the coframe of $(1,0)$-forms dual to $e$.
\vspace{0.2cm}
Note that since $\psi_i^{\alpha }=\sum_{j=1}^n S^{\alpha }_{ij}\varphi_j$, we have
$ \psi^3=\sqrt{-1}\psi^4$, where $\psi^{\alpha }$ stands for the $\alpha$-th column of $\psi$. Also,
$ \psi^1_i=\psi^2_i=0$ for each $i\geq 3$.
\vspace{0.2cm}
By the Codazzi equation $d\psi = \theta \psi + \psi \phi$, we get
\begin{eqnarray*}
d\psi^3 &=&\theta \psi^3 + \psi^1\phi_{13}+\psi^2\phi_{23}+\psi^4\phi_{43}\\
d\psi^4 &=&\theta \psi^4 + \psi^1\phi_{14}+\psi^2\phi_{24}+\psi^3\phi_{34}
\end{eqnarray*}
Multiplying the second line by $-\sqrt{-1}$ and adding the result to the first line, we get from $ \psi^3=\sqrt{-1}\psi^4$ that
\begin{eqnarray} 0 \ = \ \psi^1(\phi_{13}-\sqrt{-1}\phi_{14})+\psi^2(\phi_{23}-\sqrt{-1}\phi_{24})
\end{eqnarray}
We will write $\sigma_1=\phi_{13}-\sqrt{-1}\phi_{14}$ and $\sigma_2=\phi_{23}-\sqrt{-1}\phi_{24}$. Write
\begin{eqnarray*}
\psi^1_1 &=& a\varphi_1 + b\varphi_2, \ \ \ \ \ \psi^1_2 \ = \ b\varphi_1 + c\varphi_2 \\
\psi^2_1& =& a'\varphi_1 + b'\varphi_2, \ \ \ \ \psi^2_2 \ = \ b'\varphi_1 + c'\varphi_2
\end{eqnarray*}
Since $S'$ also satisfies the symmetry condition (2.3), we have
\begin{eqnarray}
ac-b^2+a'c'-b'^2=0.
\end{eqnarray}
We first claim that both $\sigma_1$ and $\sigma_2$ must be linear combinations of $\varphi_1$ and $\varphi_2$. Assume otherwise; then by (5.8), we must have $\psi^1_1\wedge \psi^2_1=0$ and $\psi^1_2\wedge \psi^2_2=0$. So $(a,b)$ is proportional to $(a',b')$ and $(b,c)$ is proportional to $(b',c')$. The proportionality constants are also equal, so we have $S^1=\lambda S^2$ for some constant $\lambda$. Because $S'$ satisfies (2.3) and we assumed that $k=2$ here, we have $\lambda^2=-1$. So $S^1=\pm \sqrt{-1}S^2$, a contradiction to (5.7). So the claim must hold, and we can write
$$ \sigma_1 = \alpha \varphi_1 + \beta \varphi_2, \ \ \ \ \sigma_2 = \alpha ' \varphi_1 + \beta '\varphi_2.$$
The first two rows of (5.8) become
\begin{eqnarray}
a\beta -b\alpha + a'\beta ' - b' \alpha ' & = & 0 \\
b\beta - c\alpha + b'\beta ' - c' \alpha ' & = & 0
\end{eqnarray}
We claim that there exists $w_1$ and $w_2$ such that
\begin{eqnarray}
(\alpha , \beta ) & = & \ w_1(a,b)+ \ w_2(b,c) \\
(\alpha ', \beta ') & = & w_1(a',b')+ w_2(b',c')
\end{eqnarray}
hold simultaneously. First assume that $ac-b^2\neq 0$. Letting $w_1$, $w_2$ be uniquely determined by (5.12), we have
\begin{eqnarray}
a\beta -b\alpha = w_2(ac-b^2), \ \ \ b\beta - c\alpha =w_1(b^2-ac).
\end{eqnarray}
If we write
$$ \delta_1=\alpha ' - (w_1 a'+w_2b'), \ \ \ \delta_2=\beta ' -(w_1b'+w_2c'),
$$
then we have
\begin{eqnarray*}
a'\beta ' - b' \alpha ' &=& w_2(a'c'-b'^2) +(a'\delta_2-b'\delta_1) \\
b'\beta ' -c'\alpha '&=&w_1(b'^2-a'c')+(b'\delta_2-c'\delta_1)
\end{eqnarray*}
Adding these to (5.14) and using (5.9)-(5.11), we arrive at
$$ \left( \begin{array}{cc} a' & b' \\ b' & c' \end{array} \right) \left[\begin{array}{c} \delta_2 \\ \!-\!\delta_1 \end{array} \right] = 0
$$
Since $a'c'-b'^2=-(ac-b^2)\neq 0$, we get $\delta_1=\delta_2=0$, so (5.12) and (5.13) hold.
\vspace{0.2cm}
If $ac-b^2=0$, then $a'c'-b'^2=0$ by (5.9). We claim that in this case $(a,b)$ cannot be proportional to $(a',b')$. Assume otherwise, say, $(a,b)=\lambda (a',b')$. Since $S^1$ and $S^2$ have zero determinants, we have $(b,c)=\lambda (b',c')$ as well. So $S^1=\lambda S^2$, a contradiction to $k=2$, so the claim holds. Note that the claim means $\psi^1_1\wedge \psi^2_1\neq 0$. If we write $\psi^1_2=\lambda_1 \psi^1_1$ and $\psi^2_2 =\lambda_2 \psi^2_1$, then since $b=\lambda_1a$ and $b'=\lambda_2a'$, we know that $\lambda_1\neq \lambda_2$ by the above claim.
\vspace{0.2cm}
By (5.8), we have $\psi^1_1\sigma_1+\psi^2_1\sigma_2=0$ and $\lambda_1\psi^1_1\sigma_1+\lambda_2\psi^2_1\sigma_2=0$. Since $\psi^1_1\wedge \psi^2_1\neq 0$, the first equation implies that
$$ \sigma_1 = x\psi^1_1 + y\psi^2_1, \ \ \ \sigma_2 = y\psi^1_1 + z\psi^2_1 $$
for some scalar-valued functions $x$, $y$, and $z$. Plugging them into the second equation, we get $y(\lambda_1-\lambda_2)=0$, thus $y=0$. Taking $w_2=(x-z)/(\lambda_2-\lambda_1)$ and $w_1=x-\lambda_1w_2$, we have $x=w_1+\lambda_1w_2$ and $z=w_1+\lambda_2w_2$, therefore
$$ \sigma_1=w_1\psi^1_1+w_2\psi^1_2, \ \ \ \sigma_2=w_1\psi^2_1+w_2\psi^2_2 $$
hold simultaneously. That is, (5.12) and (5.13) hold in this case as well.
\vspace{0.2cm}
Note that we have proved that, when $k=2$ and when $E'$ is not equipped with an almost complex structure, there are scalar valued functions $w_1$ and $w_2$ such that $w=w_1e_1+w_2e_2$ satisfies $\sigma_1=\psi^1_w$ and $\sigma_2=\psi^2_w$, namely, for $\alpha =1$ and $2$, it holds that
$$ \langle \nabla^{\perp }\xi_{\alpha } , \ \xi_3\!-\! \sqrt{-1} \xi_4 \rangle = \langle \widetilde{\nabla }w, \xi_{\alpha }\rangle .$$
If we write $w=-v_1+\sqrt{-1}v_2$, then the above just means that $\langle \widetilde{\nabla }E',L\rangle =0$ for the rank two subbundle $L$ in $T\oplus E$ spanned by $\{ \xi_3\!-\!v_1, \xi_4\!-\!v_2\}$. In other words, $L$ is a developable ruling of $E$. Thus by Theorem 2 we get a K\"ahler extension $h$ for $f$. Note that since $w$ is a type $(1,0)$ vector, we have $v_2=Jv_1$ in this case. So $h$ is minimal by Theorem 3.
\vspace{0.4cm}
Finally, let us consider the $k=1$ case, namely when $V_0=\mbox{ker}(S')$ has codimension one. Let $e=\{ e_1, \ldots , e_n\}$ be a unitary frame of $V$ so that $\{ e_2, \ldots , e_n\}$ is a frame of $V_0$. Let $\varphi$ be the dual coframe of $e$, and define $\psi$, $\phi$ as before. Then $\psi^3=\sqrt{-1}\psi^4$, and $\psi^1_i=\psi^2_i=0$ for all $i\geq 2$. Let us write $\psi^1_1=a\varphi_1$, $\psi^2_1=\lambda a\varphi_1$. Then $a\neq 0$, and $\lambda \neq \pm \sqrt{-1}$ since we have excluded the case where $S'$ admits an almost complex structure. By the Codazzi equation for $\psi^3$ and $\psi^4$, we again get
$$ \psi^1(\phi_{13}-\sqrt{-1}\phi_{14})+\psi^2(\phi_{23}-\sqrt{-1}\phi_{24})= \psi^1\sigma_1+\psi^2\sigma_2=0.$$
That is,
\begin{eqnarray}
\varphi_1(\sigma_1+\lambda \sigma_2)=0.
\end{eqnarray}
On the other hand, since $\psi^4=-\sqrt{-1}\psi^3$, the Codazzi equation for $\psi^1$ and $\psi^2$ give
\begin{eqnarray*}
d\psi^1&=&\theta \psi^1 -\psi^2\phi_{12} - \psi^3\sigma_1 \\
d\psi^2&=&\theta \psi^2 +\psi^1\phi_{12} - \psi^3\sigma_2
\end{eqnarray*}
Now if we use the fact that $\psi^2=\lambda \psi^1$, we get $d\psi^2=d\lambda \wedge \psi^1+\lambda d\psi^1$, so the above two equations yield
$$ d\lambda \wedge \psi^1 = (1+\lambda^2)\psi^1\phi_{12} +\psi^3(\lambda \sigma_1-\sigma_2) $$
Looking at the $i$-th row of this equation, for any $i\geq 2$, we get
$$ \psi^3_i(\lambda \sigma_1-\sigma_2)=0, \ \ \ \forall \ 2\leq i\leq n.$$
If $\lambda \sigma_1-\sigma_2\neq 0$, then $\psi^3_i$ for all $2\leq i\leq n$ are multiples of $\lambda \sigma_1-\sigma_2$, which implies that the lower right $(n-1)\times (n-1)$ corner of $S^{\xi_3}$ will have rank at most $1$. This together with the fact that $S^{\xi_4}=-\sqrt{-1}S^{\xi_3}$ shows that $(S^{\xi_3}, S^{\xi_4})$, hence $S$, must have non-trivial kernel in $V_0$, since the dimension of $V_0$ is bigger than $2$. This contradicts the assumption that the rank of $M$ is at least $5$. So we must have
\begin{eqnarray}
\lambda \sigma_1-\sigma_2=0
\end{eqnarray}
Plugging this into (5.15) and using the fact that $1+\lambda^2\neq 0$, we get $\varphi_1\sigma_1=0$, thus
$$ \sigma_1=w\psi^1_1, \ \ \ \sigma_2=\lambda \sigma_1 = w\psi^2_1 $$
for some $w$. Write $we_1=-v_1+\sqrt{-1}v_2$ for $v_1$ and $v_2$ real, we get
$$ \langle \widetilde{\nabla }E', \xi_3\!-\!v_1\rangle = \langle \widetilde{\nabla }E', \xi_4\!-\!v_2\rangle =0$$
That is, $L=\mbox{span}\{ \xi_3\!-\!v_1, \xi_4\!-\!v_2\}$ gives a developable ruling for $E$. Note that just like in the $k=2$ case, here we also have $v_2=Jv_1$, so $h$ is minimal by Theorem 3. This finishes the proof of the $k=1$ case, and the proof of the main theorem is now complete.
\vspace{0.4cm}
Finally, let us remark that, in both the minimal and non-minimal cases, the K\"ahler extension is not necessarily unique, at least by the way we defined it, since one can add any vector fields in $\mbox{ker}(A_{E'})$ onto $v_1$, $v_2$, thus getting different developable rulings $L$. However, except in the case when $M^n$ is a complex submanifold of complex codimension $2$ in ${\mathbb C}^{n+2}$, there is always a `canonical' way to choose the developable ruling $L$, namely to take $L$ in such a way that $v_1$ and $v_2$ belong to
the orthogonal complement of $\mbox{ker}(A_{E'})$. This uniqueness of canonical extensions might become important in the discussion of the global situations, namely, when $M$ is assumed to be complete.
\vspace{0.4cm}
\vspace{0.4cm}
\title{The Total Acquisition Number of Random Geometric Graphs}
% arXiv: https://arxiv.org/abs/1611.07111
\begin{abstract}
Let $G$ be a graph in which each vertex initially has weight 1. In each step, the weight from a vertex $u$ to a neighbouring vertex $v$ can be moved, provided that the weight on $v$ is at least as large as the weight on $u$. The total acquisition number of $G$, denoted by $a_t(G)$, is the minimum cardinality of the set of vertices with positive weight at the end of the process. In this paper, we investigate random geometric graphs $G(n,r)$ with $n$ vertices distributed u.a.r.\ in $[0,\sqrt{n}]^2$ and two vertices being adjacent if and only if their distance is at most $r$. We show that asymptotically almost surely $a_t(G(n,r)) = \Theta( n / (r \lg r)^2)$ for the whole range of $r=r_n \ge 1$ such that $r \lg r \le \sqrt{n}$. By monotonicity, asymptotically almost surely $a_t(G(n,r)) = \Theta(n)$ if $r < 1$, and $a_t(G(n,r)) = \Theta(1)$ if $r \lg r > \sqrt{n}$.
\end{abstract}
\section{Introduction}
Gossiping and broadcasting are two well studied problems involving information dissemination in a group of individuals connected by a communication network~\cite{HHL}. In the gossip problem, each member has a unique piece of information which she would like to pass to everyone else. In the broadcast problem, there is a single piece of information (starting at one member) which must be passed to every other member of the network. These problems have received attention from mathematicians as well as computer scientists due to their applications in distributed computing~\cite{BGRV}. Gossip and broadcast are respectively known as ``all-to-all'' and ``one-to-all'' communication problems. In this paper, we consider the problem of acquisition, which is a type of ``all-to-one'' problem. Suppose each vertex of a graph begins with a weight of 1 (this can be thought of as the piece of information starting at that vertex). A \textbf{total acquisition move} is a transfer of all the weight from a vertex $u$ onto a neighbouring vertex $v$, provided that immediately prior to the move, the weight on $v$ is at least the weight on $u$. Suppose a number of total acquisition moves are made until no such moves remain. Such a maximal sequence of moves is referred to as an \textbf{acquisition protocol}, and the set of vertices which retain positive weight after an acquisition protocol is called a \textbf{residual set}. Note that any residual set is necessarily an independent set. Given a graph $G$, we are interested in the minimum possible size of a residual set and refer to this number as the \textbf{total acquisition number of $G$}, denoted $a_t(G)$. The restriction to total acquisition moves can be motivated by the so-called ``smaller to larger'' rule in disjoint set data structures. For example, in the UNION-FIND data structure with linked lists, when taking a union, the smaller list should always be appended to the longer list.
This heuristic improves the amortized performance over sequences of union operations. \vskip 0.1 in
\noindent \textbf{Example:} The weight of a vertex can at most double at every total acquisition move, and so a vertex with degree $d$ can carry at most weight $2^d$. (We will later use this fact in Observation~\ref{obs:min_degree}.) An acquisition protocol for a cycle $C_{4k}$ (for some $k \in {\mathbb N}$) that leaves a residual set consisting of every fourth vertex is optimal; see Figure~\ref{fig:path}. Therefore, $a_t(C_{4k})=k.$
\begin{figure}[ht!]\centering
\begin{tikzpicture}
\draw (-0.75,0) -- (7.75,0);
\foreach \x in {0, 1, 2, 3, 4, 5, 6, 7}{
\filldraw[white] (\x,0) circle (6pt);
\draw (\x,0) circle (6pt);
\draw (\x,0) node {1};
}
\end{tikzpicture}\vskip 0.25 in
\begin{tikzpicture}
\draw (-0.75,0) -- (7.75,0);
\foreach \x in {0, 1, 2, 3, 4, 5, 6, 7}{
\filldraw[white] (\x,0) circle (6pt);
\draw (\x,0) circle (6pt);}
\foreach \x in {1, 2, 5, 6}{
\draw (\x,0) node {2};
}
\foreach \x in {0, 3, 4, 7}{
\draw (\x,0) node {0};
}
\draw (0.5,-0.3) node {$\rightarrow$};
\draw (4.5,-0.3) node {$\rightarrow$};
\draw (2.5,-0.3) node {$\leftarrow$};
\draw (6.5,-0.3) node {$\leftarrow$};
\end{tikzpicture}
\vskip 0.15 in
\begin{tikzpicture}
\draw (-0.75,0) -- (7.75,0);
\foreach \x in {0, 1, 2, 3, 4, 5, 6, 7}{
\filldraw[white] (\x,0) circle (6pt);
\draw (\x,0) circle (6pt);}
\foreach \x in {2,6}{
\draw (\x,0) node {4};
}
\foreach \x in {0, 1, 3, 4, 5, 7}{
\draw (\x,0) node {0};
}
\draw (1.5,-0.3) node {$\rightarrow$};
\draw (5.5,-0.3) node {$\rightarrow$};
\end{tikzpicture}
\caption{The total acquisition moves for a fragment of a cycle $C_{4k}$ that leave a residual set of size $a_t(C_{4k})=k$.} \label{fig:path}
\end{figure}
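The protocol of Figure~\ref{fig:path} is easy to simulate. The following sketch (ours, purely illustrative; the function name is not from the literature) applies those moves to $C_{4k}$ and confirms that exactly $k$ vertices retain positive weight:

```python
def acquisition_residual_cycle(k):
    """Run the Figure 1 protocol on the cycle C_{4k}.

    In each block of four consecutive vertices, the two outer vertices
    move their weight inward (legal 1-onto-1 moves), and then one inner
    vertex moves its weight 2 onto the other (a legal 2-onto-2 move),
    leaving weight 4 on every fourth vertex.
    """
    n = 4 * k
    weight = [1] * n

    def move(u, v):
        # total acquisition move: legal only if w(v) >= w(u) > 0
        assert weight[v] >= weight[u] > 0
        weight[v] += weight[u]
        weight[u] = 0

    for j in range(k):
        a, b, c, d = 4 * j, 4 * j + 1, 4 * j + 2, 4 * j + 3
        move(a, b)  # 1 onto 1
        move(d, c)  # 1 onto 1
        move(b, c)  # 2 onto 2
    return [v for v in range(n) if weight[v] > 0]

print(len(acquisition_residual_cycle(6)))  # 6, i.e. a_t(C_24) <= 6
```

Each move is between neighbouring vertices on the cycle, so the simulation witnesses $a_t(C_{4k}) \le k$; the matching lower bound is the optimality claim discussed above.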
The parameter $a_t(G)$ was introduced by Lampert and Slater~\cite{LS} and subsequently studied in~\cite{SW, LPWWW}. In~\cite{LS}, it was shown that $a_t(G)\le \floor{\frac{n+1}{3}}$ for any connected graph $G$ on $n$ vertices and that this bound is tight. Slater and Wang~\cite{SW}, via a reduction to the three-dimensional matching problem, showed that it is NP-complete to determine whether $a_t(G)=1$ for general graphs $G$. In LeSaulnier {\em et al.}~\cite{LPWWW}, various upper bounds on the acquisition number of trees were shown in terms of the diameter and the number of vertices, $n$. They also showed that $a_t(G) \le 32\log n\log\log n$ (here $\log n$ denotes the natural logarithm but throughout the paper we mostly use $\lg n$, the binary logarithm) for all graphs with diameter 2 and conjectured that the true bound is constant. For work on game variations of the parameter and variations where acquisition moves need not transfer the full weight of vertex, see~\cite{Wen, PWW, SW2}.
\medskip
Randomness often plays a part in the study of information dissemination problems, usually in the form of a random network or a randomized protocol, see {\em e.g.}~\cite{gos, FM, G95}. The total acquisition number of the \textbf{Erd\H{o}s-R\'{e}nyi-Gilbert random graph} $G(n,p)$ was recently studied in~\cite{BBDP}, where potential edges among $n$ vertices are added independently with probability $p$. In particular, LeSaulnier {\em et al.}~\cite{LPWWW} asked for the minimum value of $p=p_n$ such that $a_t(G(n,p)) = 1$ asymptotically almost surely (see below for a formal definition). In~\cite{BBDP} it was proved that $p = \frac{\lg n}{n} \approx 1.4427 \ \frac{\log n}{n}$ is a sharp threshold for this property. Moreover, it was also proved that almost all trees $T$ satisfy $a_t(T) = \Theta(n)$, confirming a conjecture of West. Another way randomness can come into the picture is when initial weights are generated at random. This direction, in particular the case where vertex weights are initially assigned according to independent Poisson distributions of intensity $1$, was recently considered in~\cite{GKKPZ}.
\medskip
In this note we consider the \textbf{random geometric graph} ${\mathcal{G}}(\mathcal{X}_n,r_n)$, where (i) $\mathcal{X}_n$ is a set of $n$ points located independently uniformly at random in $[0,\sqrt{n}]^2$, (ii) $(r_n)_{n \ge 1}$ is a sequence of positive real numbers, and (iii) for $\mathcal{X} \subseteq \mathbb{R}^2$ and $r > 0$, the graph ${\mathcal{G}}(\mathcal{X}, r)$ is defined to have vertex set $\mathcal{X}$, with two vertices connected by an edge if and only if their spatial locations are at Euclidean distance at most $r$ from each other. As typical in random graph theory, we shall consider only asymptotic properties of ${\mathcal{G}}(\mathcal{X}_n,r_n)$ as $n\rightarrow \infty$. We will therefore write $r=r_n$, identifying vertices with their spatial locations and defining ${\mathcal{G}}(n,r)$ as the graph with vertex set $[n]=\{1,2,\dots, n\}$ corresponding to $n$ locations chosen independently uniformly at random in $[0,\sqrt{n}]^2$, with a pair of vertices forming an edge whenever they are within Euclidean distance $r$. For more details see, for example, the monograph~\cite{pen}.
Finally, we say that an event in a probability space holds \textbf{asymptotically almost surely} (\textbf{a.a.s.}), if its probability tends to one as $n$ goes to infinity.
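For experimentation, the model is straightforward to sample. The sketch below (ours; not part of the paper, and using a brute-force $O(n^2)$ pairwise check) draws $n$ points uniformly in $[0,\sqrt{n}]^2$ and builds the adjacency structure:

```python
import math
import random

def sample_rgg(n, r, seed=0):
    """Sample G(n, r): n points uniform in [0, sqrt(n)]^2,
    with an edge between any two points at Euclidean distance <= r."""
    rng = random.Random(seed)
    side = math.sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].add(j)
                adj[j].add(i)
    return pts, adj

pts, adj = sample_rgg(500, 2.0, seed=1)
avg_deg = sum(len(s) for s in adj) / len(adj)
# points have density 1, so a vertex away from the boundary
# has expected degree ~ pi * r^2; boundary effects lower the average
```

Since the points have density $1$, a vertex far from the boundary has expected degree $\pi r^2$, which is a convenient sanity check on the sampler.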
\medskip
We are going to show the following result.
\begin{theorem}\label{thm:main}
Let $r=r_n$ be any positive real number. Then, a.a.s.\ $a_t \left( {\mathcal{G}}(n,r) \right) = \Theta( f_n )$, where
$$
f_n =
\begin{cases}
n & \text{ if } r < 1, \\
\frac {n}{(r \lg r)^{2}} & \text{ if } r \ge 1 \text { and } r \lg r \le \sqrt{n},\\
1 & \text{ if } r \lg r > \sqrt{n}.
\end{cases}
$$
\end{theorem}
\section{Lower Bound}
Let us start with the following simple but useful observation.
\begin{observation}\label{obs:min_degree}
Let $G=(V,E)$ be a graph. If $v \in V$ is to acquire weight $w$ (at any time during the process of moving weights around), then $\deg(v)$, the degree of $v$, is at least $\lg w$. Moreover, all vertices that contributed to the weight of $w$ (at this point of the process) are at graph distance at most $\lg w$ from $v$.
\end{observation}
\begin{proof}
Note that during each total acquisition move, when weight is shifted onto $v$ from some neighbouring vertex, the weight of $v$ can at most double. Thus, $v$ can only ever acquire $1 + 2 + \ldots + 2^{\deg(v)-1}$, in addition to the $1$ it starts with, and so $v$ can acquire at most weight $2^{\deg(v)}$. To see the second part, suppose that some vertex $u_0$ moved the initial weight of $1$ it started with to $v$ through the path $(u_0, u_1, \ldots, u_{k-1}, u_k=v)$. It is easy to see that after $u_{i-1}$ transfers its weight onto $u_i$, $u_i$ has weight at least $2^i$. So if $u_0$ contributed to the weight of $w$, $u_0$ must be at graph distance at most $\lg w$ from $v$. The proof of the observation is finished.
\end{proof}
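The bound $2^{\deg(v)}$ is attained: joining a new root to the roots of smaller extremal trees (a binomial-tree construction; our illustration, not claimed in the paper) makes every move an equal-weight move, so the root's weight doubles each time:

```python
def root_weight(d):
    """Weight acquirable at a vertex of degree d in the extremal tree.

    The root of the tree B_d is joined to the roots of copies of
    B_0, ..., B_{d-1}; absorbing them in increasing order means each
    incoming weight equals the root's current weight, so every total
    acquisition move is legal and doubles the root's weight.
    """
    w = 1
    for i in range(d):
        incoming = root_weight(i)  # weight gathered at the i-th child
        assert incoming == w       # equal weights: the move is legal
        w += incoming
    return w

print([root_weight(d) for d in range(5)])  # [1, 2, 4, 8, 16]
```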
We will also use the following consequence of Chernoff's bound (see, for example,~\cite{JLR} and~\cite{AS}).
\begin{theorem}[\textbf{Chernoff's Bound}]
\par \noindent
\begin{itemize}
\item [(i)] If $X$ is a Binomial random variable with expectation $\mu$, and $0<\delta<1$, then $$\Pr[X < (1-\delta)\mu] \le \exp \left( -\frac{\delta^2 \mu}{2} \right),$$ and if $\delta > 0$,
\[\Pr\sqbs{X > (1+\delta)\mu} \le \exp\of{-\frac{\delta^2 \mu}{2+\delta}}.\]
\item [(ii)] If $X$ is a Poisson random variable with expectation $\mu$, and $0 < \varepsilon < 1$, then
$$
\Pr\sqbs{X < (1-\varepsilon)\mu} \le \exp \left( -\frac{\varepsilon^2 \mu}{2} \right),
$$
and if $\varepsilon > 0$,
$$
\Pr\sqbs{X > (1+\varepsilon)\mu} \le \left(\frac{e^{\varepsilon}}{(1+\varepsilon)^{1+\varepsilon}}\right)^{\mu}.
$$
\end{itemize}
In particular, for $X$ being a Poisson or a Binomial random variable with expectation $\mu$ and for $0 < \varepsilon < 1$, we have
$$
\Pr\sqbs{|X-\mu| > \varepsilon\mu} \le 2\exp \left( -\frac{\varepsilon^2 \mu}{3} \right).
$$
\end{theorem}
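These inequalities are easy to sanity-check numerically against the exact binomial tail. The snippet below (ours) compares $\Pr[X > (1+\delta)\mu]$ for $X\sim\mathrm{Bin}(1000, 0.1)$ with the stated bound $\exp(-\delta^2\mu/(2+\delta))$:

```python
import math

def binom_tail(n, p, t):
    """Exact P[X >= t] for X ~ Bin(n, p)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t, n + 1))

n, p, delta = 1000, 0.1, 0.5
mu = n * p                                          # 100
exact = binom_tail(n, p, int((1 + delta) * mu) + 1) # P[X > 150] = P[X >= 151]
bound = math.exp(-delta**2 * mu / (2 + delta))      # exp(-10)
print(exact <= bound)  # True: the Chernoff bound dominates the exact tail
```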
Now we are ready to prove the lower bound. First we concentrate on dense graphs for which, in fact, we show a stronger result that no vertex can acquire large weight a.a.s.
\begin{theorem}\label{thm:lower1}
Suppose that $r = r_n \ge c \sqrt{\lg n} / \lg \lg n$ for some sufficiently large $c \in {\mathbb R}$, and consider any acquisition protocol on ${\mathcal{G}}(n,r)$. Then, a.a.s.\ each vertex in the residual set acquires $O( (r \lg r)^2 )$ weight. As a result, a.a.s.\
$$
a_t \left( {\mathcal{G}}(n,r) \right) = \Omega \left( \frac {n}{(r \lg r)^{2}} \right).
$$
\end{theorem}
\begin{proof}
Let $\ell = 2 \lg r + 2 \lg \lg r + \lg (8 \pi)$. For a contradiction, suppose that at some point of the process some vertex $v$ acquires weight $w \ge 2^\ell = 8 \pi (r \lg r)^2$. Since a single total acquisition move, which transfers all the weight from some neighbour of $v$ onto $v$, increases the weight on $v$ by a factor of at most $2$, we may assume that $w < 2^{\ell+1}$. It follows from Observation~\ref{obs:min_degree} that all vertices contributing to the weight of $w$ are at graph distance at most $\ell+1$ from $v$ (and so at Euclidean distance at most $(\ell+1)r$). The desired contradiction will be obtained if no vertex has at least $2^\ell$ vertices (including the vertex itself) at Euclidean distance at most $(\ell+1)r$.
The remaining part is a simple consequence of Chernoff's bound and the union bound over all vertices. For a given vertex $v$, the number of vertices at Euclidean distance at most $(\ell+1)r$ is a random variable $Y$ that is stochastically bounded from above by the random variable $X \sim \textrm{Bin}(n-1, \pi (\ell+1)^2 r^2 / n)$ with $\mathbb E [X] \sim \pi \ell^2 r^2 \sim 4 \pi (r \lg r)^2$. (Note that $Y=X$ if $v$ is at distance at least $(\ell+1)r$ from the boundary; otherwise, $Y \le X$.) It follows from Chernoff's bound that
\begin{eqnarray*}
\mathbb{P} (Y \ge 2^\ell) &\le& \mathbb{P} \Big(X \ge (2+o(1)) \mathbb E[X] \Big) \le \exp \Big( -(1/3+o(1)) \mathbb E[X] \Big) \\
&\le& \exp \Big( -(4 \pi /3+o(1)) (r \lg r)^2 \Big) \le \exp \Big( -(\pi c^2 /3+o(1)) \lg n \Big) \\
&=& o(1/n),
\end{eqnarray*}
provided that $c$ is large enough. The conclusion follows from the union bound over all $n$ vertices of ${\mathcal{G}}(n,r)$.
\end{proof}
In order to simplify the proof of the theorem for sparser graphs we will make use of a technique known as de-Poissonization, which has many applications in geometric probability (see~\cite{pen} for a detailed account of the subject). Here we only sketch it.
Consider the following related model of a random geometric graph. Let the vertex set $V'$ be obtained as a homogeneous Poisson point process of intensity $1$ in $[0,\sqrt{n}]^2$. In other words, $V'$ consists of $N$ points in the square $[0,\sqrt{n}]^2$ chosen independently and uniformly at random, where $N$ is a Poisson random variable of mean $n$. Exactly as in the model ${\mathcal{G}}(n,r)$, again identifying vertices with their spatial locations, we connect $u$ and $v$ in $V'$ by an edge if the Euclidean distance between them is at most $r$. We denote this new model by ${\mathcal{P}}(n,r)$.
The main advantage of defining $V'$ as a Poisson point process lies in the following two properties: the number of vertices of $V'$ that lie in any region $A\subseteq [0,\sqrt{n}]^2$ of area $a$ has a Poisson distribution with mean $a$, and the numbers of vertices of $V'$ in disjoint regions of $[0,\sqrt{n}]^2$ are independent. Moreover, by conditioning ${\mathcal{P}}(n,r)$ on the event $N=n$, we recover the original distribution of ${\mathcal{G}}(n,r)$. Therefore, since $\Pr(N=n)=\Theta(1/\sqrt n)$, any event holding in ${\mathcal{P}}(n,r)$ with probability at least $1-o(f_n)$ must hold in ${\mathcal{G}}(n,r)$ with probability at least $1-o(f_n \sqrt n)$.
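To make the model concrete, here is a minimal sampling sketch (our own, with hypothetical helper names): the number of points is Poisson with mean $n$, locations are uniform in $[0,\sqrt{n}]^2$, and points at distance at most $r$ are joined by an edge.

```python
import math
import random

def sample_P(n, r, seed=0):
    """Sample the Poissonized random geometric graph P(n, r).

    N ~ Poisson(n) points are placed uniformly at random in the square
    [0, sqrt(n)]^2, and two points are adjacent whenever their Euclidean
    distance is at most r.
    """
    rng = random.Random(seed)
    # N ~ Poisson(n): count rate-1 exponential interarrivals landing in [0, n).
    N, t = 0, rng.expovariate(1.0)
    while t < n:
        N += 1
        t += rng.expovariate(1.0)
    side = math.sqrt(n)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(N)]
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)
             if math.dist(pts[i], pts[j]) <= r]
    return pts, edges

# Conditioning on N = n would recover G(n, r) exactly.
pts, edges = sample_P(100, 2.0, seed=1)
print(len(pts), len(edges))
```

Note that the brute-force $O(N^2)$ edge scan is only for illustration; the disjoint-region independence exploited in the proofs corresponds to the fact that the point counts in disjoint cells of the square are independent Poisson variables.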
\medskip
Now, let us come back to our problem. For sparser graphs we cannot guarantee that a.a.s.\ no vertex acquires large weight, but a lower bound of the same order still holds.
\begin{theorem}\label{thm:lower2}
Suppose that $r = r_n \ge c$ for some sufficiently large $c \in {\mathbb R}$. Then, a.a.s.\
$$
a_t \left( {\mathcal{G}}(n,r) \right) = \Omega \left( \frac {n}{(r \lg r)^{2}} \right).
$$
\end{theorem}
\begin{proof}
Since Theorem~\ref{thm:lower1} applies to dense graphs, we may assume here that $r = O(\sqrt{\lg n} / \lg \lg n)$ (in particular, $r \lg r = o(\sqrt{n})$). Tessellate $[0,\sqrt{n}]^2$ into $\lfloor \sqrt{n} / (20 r \lg r) \rfloor^2$ squares, each one of side length $(20+o(1)) r \lg r$. Consider the unit circle centered at the center of each square and call it the \emph{center circle}. We say that a given square is \emph{dangerous} if the corresponding center circle contains at least one vertex and the total number of vertices contained in the square is less than $1200 (r \lg r)^2$.
Consider any acquisition protocol. First, let us show that at least one vertex from each dangerous square must belong to the residual set. Let $u_0$ be a vertex inside the corresponding center circle. For a contradiction, suppose that the square has no vertex in the residual set. In particular, this means that $u_0$ moved its initial weight of 1 onto some vertex outside the square along some path $(u_0, u_1, \ldots, u_k)$. Note that the Euclidean distance from $u_0$ to the border of the square (and hence also to $u_k$) is at least $(20+o(1)) r \lg r / 2 - 1 \ge 9 r \lg r$, provided that $c$ is large enough, and so $k \ge 9 \lg r$.
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=0.5]
\filldraw[blue!10] (1.5,0) circle (60pt);
\foreach \x in {(0,0), (1.5,0), (6,0)}{
\filldraw \x circle (3pt);}
\draw[dashed] (0,0) -- (6,0);
\draw[dashed] (-4,-4) rectangle (4,4);
\draw (0,0.35) node {$u_0$};
\draw (1.5,0.35) node {$u_\ell$};
\draw (6,0.35) node {$u_k$};
\end{tikzpicture}
\caption{Residual sets contain at least one vertex from each dangerous square.} \label{fig:dangerous_square}
\end{figure}
Consider the vertex $u_\ell$ on this path, where $\ell = \lfloor 4 \lg r \rfloor \ge 3 \lg r$, provided $c$ is large enough; see Figure~\ref{fig:dangerous_square}. Right after $u_{\ell-1}$ transferred all the weight onto $u_\ell$, $u_\ell$ had weight at least $2^{\ell} \ge r^3 > 1200 (r \lg r)^2$, provided $c$ is large enough. As argued in the proof of the previous theorem, at some point of the process $u_\ell$ must have acquired weight $w$ satisfying $2^{\ell} \le w < 2^{\ell+1}$. Observation~\ref{obs:min_degree} implies that all vertices contributing to the weight $w$ are at Euclidean distance at most $(\ell+1) r$ from $u_\ell$ and so inside the square (as always, provided $c$ is large enough). However, dangerous squares contain fewer than $1200(r\lg r)^2$ vertices, and so we get a contradiction. The desired claim holds.
Showing that a.a.s.\ a positive fraction of the squares is dangerous is straightforward. In ${\mathcal{P}}(n,r)$, the probability that the center circle contains no vertex is $\exp(-\pi) \le 1/3$. On the other hand, the number of vertices falling into the square is a Poisson random variable $X$ with expectation $\mu \sim 400 (r \lg r)^2$. By Chernoff's bound applied with $\eps = e-1$,
$$
\mathbb{P} (X \ge e \mu) \le \left(\frac{e^{e-1}}{(1+(e-1))^e}\right)^{\mu}= \exp(-\mu).
$$
Hence, we get
$$
\mathbb{P} (X \ge 1200 (r \lg r)^2 ) \le \mathbb{P} (X \ge e\mu ) \le \exp(- \mu) \le 1/3,
$$
provided $c$ is large enough. It follows that the expected number of dangerous squares is at least $(1/3) (1/400+o(1)) n / (r \lg r)^2 \gg \lg n \to \infty$. By Chernoff's bound, with probability at least $1-o(n^{-1/2})$, the number of dangerous squares in ${\mathcal{P}}(n,r)$ is at least $(1/2500) n / (r \lg r)^2$. By the de-Poissonization argument mentioned before this proof, the number of dangerous squares in ${\mathcal{G}}(n,r)$ is a.a.s.\ also at least $(1/2500) n / (r \lg r)^2$, and the proof of the theorem is finished.
\end{proof}
The only range of $r=r_n$ not covered by the two theorems is when $r < c$ for $c$ as in Theorem~\ref{thm:lower2}. However, in such a situation a.a.s.\ there are $\Omega(n)$ isolated vertices, which clearly remain in the residual set. Moreover, if $r$ is such that $r \lg r > \sqrt{n}$, then the trivial lower bound $\Omega(1)$ applies. The lower bound in the main theorem holds for the whole range of $r$.
\section{Upper Bound}
As in the previous section, let us start with a simple, deterministic observation that turns out to be useful in showing an upper bound. Before we state it, let us define a family of rooted trees as follows. Let $\hat{T}_0$ be a rooted tree consisting of a single vertex $v$ (the root of $\hat{T}_0$). For $i \in {\mathbb N}$, we define $\hat{T}_i$ recursively: the root $v$ of $\hat{T}_i$ has $i$ children that are roots of trees $\hat{T}_0, \hat{T}_1, \ldots, \hat{T}_{i-1}$; see Figure~\ref{fig:Tree_T}.
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=0.75]
\foreach \x in {(0,0), (-2,-1), (-1,-1), (0,-1), (-2,-2), (-1,-2), (0,-2), (0,-3)}{
\filldraw \x circle (2pt);
}
\foreach \x in {(-2,-1), (-1,-1), (0,-1),(1.1,-0.6)}{
\draw (0,0) -- \x;
}
\draw (0.75,-1) node {$\dots$};
\draw (-1,-1) -- (-2,-2);
\draw (-1,-2) -- (0,-1) -- (0,-3);
\draw (1.8,-1) node {$\hat{T}_{i-1}$};
\end{tikzpicture}\hskip 1 in
\begin{tikzpicture}[scale=0.75]
\foreach \x in {(0,0)}{
\filldraw \x circle (2pt);
}
\draw (-2,-1.1) node {$\hat{T}_0$};
\draw (-1,-1.1) node {$\hat{T}_1$};
\draw (0,-1.1) node {$\hat{T}_2$};
\foreach \x in {(-1.8,-0.8), (-0.8,-0.8), (0,-0.75),(1.1,-0.6)}{
\draw (0,0) -- \x;
}
\draw (0.77,-1.2) node {$\dots$};
\draw (1.8,-1.1) node {$\hat{T}_{i-1}$};
\draw[white] (0,-3) node {.};
\end{tikzpicture}
\caption{The tree $\hat{T}_i$.} \label{fig:Tree_T}
\end{figure}
Clearly, $\hat{T}_i$ has $2^i$ vertices and depth $i$. Moreover, it is straightforward to see that vertices of $\hat{T}_i$ can move their initial weight of 1 to the root $v$ (in particular, $a_t(\hat{T}_i)=1$): indeed, this clearly holds for $i=0$, so suppose that it holds inductively up to $i-1$. Then, since all children of the root of $\hat{T}_i$ can send all their accumulated weight to the root of $\hat{T}_i$ (starting from the smallest subtree), this also holds for $i$. This, in particular, shows that Observation~\ref{obs:min_degree} is tight.
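The recursion above is easy to verify mechanically. The following sketch (our own, with hypothetical names) builds $\hat{T}_i$ as nested lists and simulates the protocol, checking that each transfer is legal, i.e.\ that the sender's weight never exceeds the receiver's:

```python
def build_T(i):
    """Build T_hat_i as nested lists: the root of T_hat_i has children
    that are the roots of T_hat_0, T_hat_1, ..., T_hat_{i-1}."""
    return [build_T(j) for j in range(i)]

def size(t):
    """Number of vertices of a nested-list tree."""
    return 1 + sum(size(child) for child in t)

def acquire(t):
    """Simulate the acquisition protocol: every vertex starts with weight 1
    and each subtree pushes its accumulated weight onto its parent, smallest
    subtree first. Returns the final weight at the root."""
    w = 1
    for child in t:  # children in order T_hat_0, ..., T_hat_{i-1}
        cw = acquire(child)
        assert cw <= w  # a total acquisition move requires sender <= receiver
        w += cw
    return w

# T_hat_i has 2^i vertices, all of which reach the root: a_t(T_hat_i) = 1.
for i in range(10):
    assert size(build_T(i)) == 2 ** i
    assert acquire(build_T(i)) == 2 ** i
```

The inner assertion is exactly the induction in the text: when the root has already absorbed $\hat{T}_0, \ldots, \hat{T}_{j-1}$ it holds weight $2^j$, matching the $2^j$ arriving from the root of $\hat{T}_j$, so every move is legal with equality.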
As shown in the previous section, the main bottleneck that prevents us from moving a large weight to some vertex in ${\mathcal{G}}(n,r)$ is that there are simply not enough vertices in the Euclidean neighborhood of a vertex. If we want to match the lower bound, then rooted trees induced by the acquisition protocol must be as deep as possible in order to access vertices that are in a Euclidean sense far away from the corresponding roots. It turns out that trees $\hat{T}_i$ from the family we just introduced are efficient from that perspective. However, we cannot guarantee that the vertex set of ${\mathcal{G}}(n,r)$ can be partitioned in such a way that each set has some tree from the family as a spanning subgraph. Fortunately, it is easy to ``trim'' $\hat{T}_i$ to get a tree on $n < 2^i$ vertices that can shift all of its initial weight to the root.
\begin{observation}\label{obs:trees}
For any $d \in {\mathbb N} \cup \{0\}$ and $n \le 2^d$, $\hat{T}_d$ contains a rooted sub-tree $T$ on $n$ vertices such that $a_t(T)=1$. Moreover, the number of vertices at distance $\ell$ ($0 \le \ell \le d$) from the root of $T$ is at most $\binom{d}{\ell}$.
\end{observation}
\begin{proof}
In order to obtain the desired tree $T$ on $n$ vertices, we trim $\hat{T}_d$ by cutting some of its branches (from largest to smallest, level by level). We may assume that $n \ge 2$; otherwise, the statement trivially holds.
Since we will be trimming the tree recursively, let us concentrate on $v$, the root of $\hat{T}_d$, and $d$ branches attached to it. Our goal is to get a tree rooted at $v$ that has $n \ge 2$ vertices. Let $k_0$ be the largest integer $k$ such that
$$
1 + \Big( 1 + 2 + 4 + \ldots + 2^k \Big) = 2^{k+1} \le n;
$$
that is, $k_0 = \lfloor \lg n \rfloor - 1$ (note that $k_0 \ge 0$ as $n \ge 2$ and that $k_0 \le d-1$ as $n \le 2^d$). We leave the branches inducing the trees $\hat{T}_0, \hat{T}_1, \ldots, \hat{T}_{k_0}$ untouched. We trim the branches inducing the trees $\hat{T}_{k_0+2}, \hat{T}_{k_0+3}, \ldots, \hat{T}_{d}$ completely (note that possibly $k_0 = d-1$ in which case we trim nothing). Finally, we would like to carefully trim the branch inducing the tree $\hat{T}_{k_0+1}$ so that the number of vertices it contains is precisely $n - 2^{k_0+1}$. If $n - 2^{k_0+1}$ is equal to 0 or 1, then we trim the whole branch or leave just the root of this branch, respectively. Otherwise, we recursively trim the branch as above. It is straightforward to see that all vertices of $T$ can move their initial weight of 1 to the root of $T$ which, in particular, implies that $a_t(T)=1$, thus proving the first part.
In order to show the second part, it is enough to prove the desired property for $\hat{T}_d$ (since $T$ is a sub-tree of $\hat{T}_d$). We prove it by (strong) induction on $d$; clearly, the statement holds for $d=0$ and $\ell=0$. Let $d_0 \in {\mathbb N}$ and suppose inductively that the property holds for all $d$ such that $0 \le d \le d_0-1$. The claim clearly holds for $\ell=0$. We count the vertices at distance $\ell$ (for any $1 \le \ell \le d_0$) from the root $v$ by considering, for each child of $v$, the vertices at distance $\ell-1$ from that child. By the recursive construction of $\hat{T}_{d_0}$ we get that the number of vertices at distance $\ell$ from $v$ is $\sum_{k=\ell-1}^{d_0-1} \binom{k}{\ell-1} = \binom{d_0}{\ell}$ (this equality is well-known and can be easily proven by induction). The proof of the observation is finished.
\end{proof}
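For completeness, the identity used in the last step of the proof telescopes via Pascal's rule $\binom{k}{\ell-1} = \binom{k+1}{\ell} - \binom{k}{\ell}$:

```latex
\sum_{k=\ell-1}^{d_0-1} \binom{k}{\ell-1}
  = \sum_{k=\ell-1}^{d_0-1} \left[ \binom{k+1}{\ell} - \binom{k}{\ell} \right]
  = \binom{d_0}{\ell} - \binom{\ell-1}{\ell}
  = \binom{d_0}{\ell},
```

since $\binom{\ell-1}{\ell} = 0$.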
\bigskip
Before we are ready to state the next result, we need to introduce a few definitions. Let $c, \eps \in (0,1)$ be any constants, arbitrarily small. Suppose that we are given a function $r = r_n$ such that $r \lg r \le \sqrt{n}$ and $r \ge C$ for some large constant $C=C(c,\eps)$ that will be determined soon. Let $k = \lceil \sqrt{n}/(c r \lg r) \rceil$ and tessellate $[0,\sqrt{n}]^2$ into $k^2$ \textbf{large squares}, each one of side length $x r \lg r$, where $x = \sqrt{n}/(k r \lg r)$. Clearly, $c/2 \le x \le c$ (the lower bound follows as $c r \lg r \le \sqrt{n}$) and $x \sim c$, provided $r \lg r = o(\sqrt{n})$. Now, let $\ell = 20 \lceil x r \lg r / (20 c r) \rceil = 20 \lceil x \lg r/(20 c) \rceil$ and tessellate each large square into $\ell^2$ \textbf{small squares}, each one of side length $yr$, where $y =x \lg r/\ell$; see Figure~\ref{fig:tessellate}. Clearly, $c/2 \le y \le c$ (the lower bound follows assuming that $C$ is large enough, which we may) and $y \sim c$, provided $r = r_n \to \infty$ as $n \to \infty$.
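The parameter choices can be checked numerically; the sketch below (our own helper, not from the paper) computes $k$, $x$, $\ell$, and $y$ and is consistent with the claimed bounds $c/2 \le x, y \le c$ once $r$ is large enough:

```python
import math

def tessellation_params(n, r, c):
    """Tessellation parameters from the text: k^2 large squares of side
    x*r*lg(r), each cut into l^2 small squares of side y*r (lg = log_2)."""
    lg = math.log2
    k = math.ceil(math.sqrt(n) / (c * r * lg(r)))
    x = math.sqrt(n) / (k * r * lg(r))
    l = 20 * math.ceil(x * lg(r) / (20 * c))
    y = x * lg(r) / l
    return k, x, l, y

# Example with r large enough that c/2 <= x, y <= c holds.
r, c = 2.0 ** 45, 0.5          # lg r = 45
n = (3.7 * r * 45) ** 2        # chosen so that r * lg r <= sqrt(n)
k, x, l, y = tessellation_params(n, r, c)
assert c / 2 <= x <= c and c / 2 <= y <= c and l % 20 == 0
```

The divisibility of $\ell$ by 20 is built into its definition and is used later when the triangle is cut into $\ell/20$ strips.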
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=0.75]
\draw[thick] (1.5,1.5) rectangle (7.5,7.5);
\foreach \x in {1.5,3,4.5,6}{
\foreach \y in {0.375,0.75,1.125}{
\draw[dashed] (1.2,\x+\y) -- (7.8,\x+\y);
\draw[dashed] (\x+\y,1.2) -- (\x+\y,7.6);
}}
\foreach \x in {3,4.5,6}{
\draw (1.2,\x) -- (7.8,\x);
\draw (\x,1.2) -- (\x,7.8);}
\draw (2.25,8.1) node {\large{$\frac{\sqrt{n}}{k}$}};
\draw (0.8,7.3) node {\large{$\frac{\sqrt{n}}{k\ell}$}};
\end{tikzpicture}
\caption{We tessellate $[0,\sqrt{n}]^2$ into $k^2$ large squares, and each large square is tessellated into $\ell^2$ small squares; $k=\lceil \sqrt{n}/(cr\lg r)\rceil$ and $\ell=20\lceil x\lg r/(20c)\rceil$.} \label{fig:tessellate}
\end{figure}
We say that a small square is \textbf{good} if the number of vertices it contains is between $(1-\eps) (yr)^2$ and $(1+\eps) (yr)^2$; otherwise, it is \textbf{bad}. Moreover, we say that a large square is \textbf{good} if all small squares it contains are good and the following properties hold (otherwise, it is \textbf{bad}):
\begin{itemize}
\item [(a)] no vertex lies on the border of the large square nor on its two diagonals,
\item [(b)] no two vertices lie on any line parallel to a side of the large square,
\item [(c)] no two vertices lie on any line passing through the center of the large square.
\end{itemize}
Now we are ready to state the following crucial observation.
\begin{theorem}\label{thm:distr_good_squares}
For any pair $c, \eps \in (0,1)$ of constants, there exists a constant $C=C(c,\eps)$ such that the following two properties hold a.a.s.\ for ${\mathcal{G}}(n,r)$.
\begin{itemize}
\item [(i)] All large squares are good, provided that $r \ge C \sqrt{\log n}$.
\item [(ii)] The number of large squares that are bad is at most $n/(r^2 \lg^5 r)$, provided that $r \ge C$.
\end{itemize}
\end{theorem}
\begin{proof}
Properties (a)-(c) on the distribution of the vertices hold with probability 1 for all large squares. Hence, we need to concentrate on showing that small squares are good.
For part (i), consider any small square in ${\mathcal{G}}(n,r)$. The number of vertices in such a square is a binomial random variable $X \sim \textrm{Bin}(n,(yr)^2/n)$ with $\mathbb E[X] = (yr)^2$. It follows immediately from Chernoff's bound that the probability that the square is bad can be estimated as follows:
$$
\mathbb{P} \left( |X - (yr)^2| \ge \eps (yr)^2 \right) \le 2 \exp \left( - \frac {\eps^2 (yr)^2}{3} \right) \le 2/n^2 = o(1/n),
$$
provided that $C$ is large enough. Hence, since there are in total $O(n)$ small squares, the expected number of bad small squares is $o(1)$, and the conclusion follows from the first moment method.
For part (ii), consider any small square in ${\mathcal{P}}(n,r)$. As before, let $X \sim \textrm{Po}( (yr)^2 )$ be the random variable counting the number of vertices in the square. By Chernoff's bound, for the probability of the square being bad we have
$$
\mathbb{P} \left( |X - (yr)^2| \ge \eps (yr)^2 \right) \le 2 \exp \left( - \frac {\eps^2 (yr)^2}{3} \right) \le 2 \exp \left( - \frac { (\eps c r)^2}{12} \right).
$$
By a union bound, a given large square is bad with probability at most
$$
2 \ell^2 \exp \left( - \frac { (\eps c r)^2}{12} \right) \le 2 \left( 2 \lg r \right)^2 \exp \left( - \frac { (\eps c r)^2}{12} \right) \le \frac {1}{\lg^4 r};
$$
both inequalities hold provided $C$ is large enough. (Note that $\ell \le 20 \lceil \lg r/20 \rceil \le 2 \lg r$, provided $r \ge C$.)
Now, the number of large squares that are bad can be stochastically bounded from above by the random variable $Y \sim \textrm{Bin}(k^2, 1/\lg^4 r)$. By part (i), we may assume that, say, $r = O(\log n)$ and so, in particular, $r \lg r = o(\sqrt{n})$. Note that
$$
\mathbb E[Y] = \frac {k^2}{\lg^4 r} \sim \frac {n}{(c r \lg r)^2 \lg^4 r} \le \frac {n}{3 r^2 \lg^5 r},
$$
provided $C$ is large enough. On the other hand, note that, say, $\mathbb E[Y] = \Omega( n / \log^3 n )$. Hence, it follows immediately from Chernoff's bound that
$$
\mathbb{P} \left( Y \ge \frac {n}{r^2 \lg^5 r} \right) \le \mathbb{P} \left( Y \ge 2 \mathbb E[Y] \right) = \exp \left( - \Omega( n / \log^3 n ) \right) = o(1/\sqrt{n}).
$$
By the de-Poissonization argument explained above, the desired property holds for ${\mathcal{G}}(n,r)$ and the proof is finished.
\end{proof}
The next deterministic result shows that there exists an acquisition protocol that pushes weights from all vertices of each large good square into a single vertex.
\begin{theorem}\label{thm:good_squares}
Fix $c = 1/10000$, $\eps = 1/100$, $n \in {\mathbb N}$, and radius $r=r_n \ge C$ for some large enough constant $C \in {\mathbb R}$. Consider any distribution of vertices that makes a given large square $\mathcal{S}$ good (with respect to $c$, $\eps$, $r$, and $n$). Finally, let $G$ be any geometric graph induced by vertices from $\mathcal{S}$ with radius $r$. Then, $a_t(G) = 1$.
\end{theorem}
Before we prove this theorem, let us state the following corollary that follows immediately from Theorems~\ref{thm:good_squares} and~\ref{thm:distr_good_squares}.
\begin{corollary}\label{cor:upper_bound}
Suppose that $r = r_n$ is such that $r \lg r \le \sqrt{n}$ and $r \ge C$ for some sufficiently large $C \in {\mathbb R}$. Then, a.a.s.\
$$
a_t \left( {\mathcal{G}}(n,r) \right) = O \left( \frac {n}{(r \lg r)^{2}} \right).
$$
\end{corollary}
\begin{proof}
Let $c, \eps$ be fixed as in Theorem~\ref{thm:good_squares} and let $C = C(c, \eps)$ be the constant implied by Theorem~\ref{thm:distr_good_squares}. If $r \ge C \sqrt{\log n}$, then Theorem~\ref{thm:distr_good_squares}(i) implies that a.a.s.\ all large squares are good and so by Theorem~\ref{thm:good_squares} a.a.s.\
$$
a_t({\mathcal{G}}(n,r)) \le k^2 = O \left( \frac {n}{(r \lg r)^2} \right).
$$
On the other hand, if $r \ge C$, then Theorem~\ref{thm:distr_good_squares}(ii) implies that a.a.s.\ at most $n/(r^2 \lg^5 r)$ large squares are bad. Clearly, each large bad square can be tessellated into $O(\lg^2 r)$ squares of side length $r / \sqrt{2}$, and so the graph $G$ induced by vertices of any large bad square satisfies $a_t(G) = O(\lg^2 r)$. This time we get that a.a.s.
$$
a_t({\mathcal{G}}(n,r)) \le k^2 + \frac {n}{r^2 \lg^5 r} \cdot O(\lg^2 r) = O \left( \frac {n}{(r \lg r)^2} \right),
$$
and the proof of the corollary is finished.
\end{proof}
The only ranges of $r=r_n$ not covered by Corollary~\ref{cor:upper_bound} are when $r < C$ for $C$ as in the corollary or when $r \lg r > \sqrt{n}$. For the first case there is nothing to prove as the bound $O(n)$ trivially holds. The latter case follows immediately by monotonicity of $a_t(G)$.
Hence, it remains to prove Theorem~\ref{thm:good_squares}.
\begin{proof}[Proof of Theorem~\ref{thm:good_squares}]
Split $\mathcal{S}$ into four triangles using the two diagonals of $\mathcal{S}$. (Note that by property (a) of the distribution of the vertices, no vertex lies on the border of any triangle.) By symmetry, we may concentrate on the bottom triangle: the base of the triangle has length $\ell (yr)$ and the height is $\ell (yr)/2$. Since $\ell$ is divisible by $2$, the center of the large square is the corner of four small squares. Clearly, the number of small squares that are completely inside the triangle is $\ell^2 / 4 - \ell / 2$ (the total area of the triangle is $\ell^2/4$, and there are $\ell$ small squares only partially contained in this area, contributing a total area of $\ell/2$); on the other hand, $\ell^2 / 4 + \ell / 2$ of them cover the triangle. Hence, since all small squares are good, the number of vertices $z$ that lie in the triangle is at most
$$
\left( \frac {\ell^2}{4} + \frac{\ell}{2} \right) (1+\eps) (yr)^2 = \left( 1 + \frac{2}{\ell} \right) (1+\eps) \frac {(xr\lg r)^2}{4} \le (1+2\eps) \frac {(xr\lg r)^2}{4} =: z^+,
$$
provided that $C$ is large enough. Similarly, we get that $z \ge z^- := (1-2\eps) (xr\lg r)^2 / 4$.
Let $d$ be the smallest integer such that $2^d \ge z$. Since $z^- \le z \le z^+$, it follows that $d = \lg z + O(1) = 2 \lg r + 2 \lg \lg r + O(1)$. Observation~\ref{obs:trees} implies that there exists a rooted sub-tree $T$ of $\hat{T}_d$ on $z$ vertices with $a_t(T)=1$. Our goal is to show that $T$ can be embedded on the set of vertices that belong to the triangle with the root being the vertex closest (in Euclidean distance) to the apex of the triangle. If this can be done, then one can merge all the accumulated weights from the four triangles partitioning $\mathcal{S}$ into one of them and finish the proof: indeed, as the Euclidean distance from the closest vertex to the apex of the triangle is at most $\sqrt{5} y r \le \sqrt{5} c r \le r / 2$, the four roots induce a clique; see Figure~\ref{fig:root_of_5}.
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=1.2]
\filldraw (0,2) circle (2pt);
\filldraw (-1,0) circle (2pt);
\draw (-1,0) -- (0,2);
\draw (-0.37,0.5) node {$\sqrt{5}yr$};
\draw (-2,0) -- (0,2) -- (2,0);
\draw[dashed] (-1,0) rectangle (1,2);
\draw[dashed] (-1,1) -- (1,1);
\draw[dashed] (0,0) -- (0,2);
\end{tikzpicture}
\hskip 0.35 in
\begin{tikzpicture}[scale=1]
\draw[thick] (0,2.5) -- (0,0) -- (5,0) -- (5,2.5);
\foreach \x in {0.25,0.5,0.75,1,1.25,1.5,1.75,2,2.25,2.5,2.75,3,3.25,3.5,3.75,4,4.25,4.5,4.75}{\draw[dashed, black!20] (\x,0) -- (\x,2.5);}
\foreach \x in {0.25,0.5,0.75,1,1.25,1.5,1.75,2,2.25,2.5}{\draw[dashed, black!20] (0,\x) -- (5,\x);}
\draw[thick] (0,0) -- (2.5,2.5) -- (5,0);
\foreach \x in {(2.5,2.2)}{\filldraw \x circle (2pt);}
\foreach \x in {(1.8,1.3),(2.3,1.5),(3.3,1.5)}{\draw (2.5,2.2) -- \x;}
\foreach \x in {(1.8,1.3)}{\draw \x node {$T_0$};}
\foreach \x in {(2.3,1.5)}{\draw \x node {$T_1$};}
\foreach \x in {(2.8,1.55)}{\draw[thick] \x node {$\dots$};}
\draw (3.3,1.3) node {$T_{i-1}$};
\end{tikzpicture}
\caption{On the left: there is a vertex in the triangle at distance at most $\sqrt{5}yr$ from the apex. On the right: in each triangle, we attempt to embed a tree that includes all vertices in the triangle. The four roots induce a clique, and so if such trees can be embedded, all weights in the square can be pushed onto a single vertex.}
\label{fig:root_of_5}
\end{figure}
We divide the triangle into $\ell / 20$ strips by introducing \textbf{auxiliary lines} $A_i$ ($i \in \{0, 1, \ldots, \ell/20\}$; recall that $\ell$ is divisible by 20), all of them parallel to the base of the triangle. $A_0$ is the line that passes through the apex of the triangle, $A_1$ is at distance $10yr$ from $A_0$, and so on; $A_{\ell/20}$ coincides with the base of the triangle.
Note that there are exactly 10 strips of small squares between any two consecutive auxiliary lines $A_{j-1}$ and $A_j$. Any two points $a_1, a_2$ on the base of the triangle and a line $L$ parallel to the base induce an \textbf{auxiliary region}: a trapezoid with vertices $a_1$ and $a_2$, together with the intersection of $L$ with the line through the apex of the triangle and $a_1$, and the intersection of $L$ with the line through the apex and $a_2$. In particular, the triangle itself is a (degenerate) auxiliary region, induced by the two corners of the base of the triangle and $A_0$.
We will now give a recursive algorithm for embedding the tree $T$ on all $z$ vertices of the triangle. As already mentioned, we pick the vertex closest in Euclidean distance to the apex of the triangle and assign it to the root of $T$. Let $L_0$ be any line parallel to the base separating the vertex assigned to the root from other vertices that are not yet assigned to any vertex of $T$. (Note that by our assumption on the distribution of the vertices, no two vertices lie on any line parallel to the base.) This is the typical situation that we will have to deal with, recursively. Suppose thus that we are given a line $L_{i-1}$ parallel to the base such that vertices above $L_{i-1}$ are already assigned to vertices in $T$, and vertices below $L_{i-1}$ that belong to the auxiliary region $\mathcal{Q}$ we currently deal with are not yet assigned to vertices in $T$. We will always keep the property that $\mathcal{Q}$ contains exactly the number of vertices we need to assign to some part of the tree $T$; these vertices induce a family of rooted trees in $T$, with roots that are at graph distance $i$ from the root of $T$. Denote by $Q_i$ and $R_i$ the number of vertices that belong to $\mathcal{Q}$ and, respectively, to the part of $\mathcal{Q}$ above $A_i$; see Figure~\ref{fig:5}.
\begin{figure}[ht!]\centering
\hskip 0.5 in
\begin{tikzpicture}
\filldraw[blue!15] (3,0.75) -- (4.12,0.75) -- (3.85,1.25)--(3,1.25);
\draw[blue] (3,3) -- (3,0);
\draw[blue] (3,3) -- (4.5,0);
\draw[thick] (0,0) -- (3,3) -- (6,0) -- (0,0);
\foreach \x in {0,1,2,3,4}{\draw (6.3,3-\x*0.75) node {$A_\x$};\draw[blue, dashed] (0,3-\x*0.75) -- (6,3-\x*0.75);}
\draw (3,-0.3) node {$a_1$};
\draw (4.5,-0.3) node {$a_2$};
\draw[blue] (2.7,1.25) -- (4,1.25);
\draw (4.25,1.2) node {$L_2$};
\end{tikzpicture}
\caption{The number of vertices in the shaded region is $R_3$, the number of vertices in the trapezoid determined by $L_2$, the base of the triangle, and the two blue sides of the triangle associated with $\mathcal{Q}$ is $Q_3$.\label{fig:5}}
\end{figure}
Let $a_1$ and $a_2$ be the two corners of $\mathcal{Q}$ that belong to the base of the triangle. Let $b_1$ and $b_2$ be the intersection points of $A_i$ with the line going through the apex and $a_1$, and with the line going through the apex and $a_2$, respectively; see Figure~\ref{fig:6}. If the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$, then we split $\mathcal{Q}$ into two auxiliary regions (the first one induced by $b_1$ and some $b$ on $A_i$, the other one induced by $b$ and $b_2$; in both situations the auxiliary line $L_{i-1}$ is used), where $b$ is chosen in such a way that $Q_i$ vertices are partitioned into two families of rooted trees in $T$ as evenly as possible. Observe that it is possible to split $\mathcal{Q}$ in such a way that both auxiliary sub-regions contain at least $Q_i/4$ vertices; indeed, one can order the family of rooted trees according to their sizes and then notice that adding one rooted tree to one of the auxiliary sub-regions obtained after splitting can increase the total number of vertices there by a multiplicative factor of at most 2. (Note that by property (c) of the distribution of the vertices, we can perform a split so that no vertex belongs to the border of any resulting auxiliary region.) We stop the algorithm prematurely if the Euclidean distance between $b_1$ and $b$ (or between $b$ and $b_2$) is less than $r/20$ or more than $r/3$ (\textbf{Error 1} is reported). If everything goes well, we deal with each auxiliary region recursively (we update $Q_i$ and $R_i$, and all lines defining the auxiliary region).
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=1.5]
\draw (2,0) -- (5,0);
\draw[blue,dashed] (2,0.75) -- (5,0.75);
\draw (3,-0.2) node {$a_1$};
\draw (4.5,-0.2) node {$a_2$};
\draw[blue] (2.7,1.25) -- (4,1.25);
\draw (4.25,1.3) node {$L_2$};
\draw (5.25,0.75) node {$A_3$};
\draw[blue] (3,0) -- (3,2);
\draw[blue] (4.5,0) -- (3.5,2);
\filldraw (3,0.75) circle (1.5pt);
\filldraw (4.15,0.75) circle (1.5pt);
\draw (2.8,0.6) node {$b_1$};
\draw (4.45,0.6) node {$b_2$};
\filldraw (3.65,0.75) circle (1.5pt);
\draw (3.55, 0.55) node {$b$};
\draw (3.85,0) -- (3.5,1.25);
\end{tikzpicture}
\caption{If the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$, we split the region into two regions.} \label{fig:6}
\end{figure}
Now, we want to assign all roots from the family of rooted trees (recall that they are at level $i$ of $T$) to vertices of $\mathcal{Q}$ above $A_i$. If there are more than $R_i$ vertices on level $i$ in $T$, then stop the algorithm prematurely (\textbf{Error 2} is reported). In fact, we typically only need to embed a small portion of the vertices of level $i$, but we nevertheless stop prematurely if $R_i$ is smaller than the total number of vertices at level $i$ in the tree. Otherwise, we first assign all roots of the family of rooted trees we deal with. Then, we order the trees rooted at them according to their sizes (in non-decreasing order), and keep adding whole rooted trees, as long as the total number of vertices added is at most $R_i$ (see Figure~\ref{fig:tree}). By the same argument as before, we are guaranteed that at least $R_i/2$ vertices are assigned to the corresponding vertices of $T$. Clearly, if $i=\ell/20$, we are able to fit all rooted trees, and so all $R_i$ (which is equal to $Q_i$ in this case) vertices are dealt with. Otherwise, that is, as long as $i < \ell/20$, we introduce any line $L_i$, parallel to the base, that separates vertices of $\mathcal{Q}$ that are assigned (that are above the line) from those that are still not assigned to any vertex in $T$ (below the line). (As usual, by property (b) of the distribution of the vertices, we can choose $L_i$ so that no vertex lies on $L_i$.) We continue recursively with the new auxiliary region below $L_i$ and the new family of rooted trees consisting of all the branches that are not assigned to any vertices; see Figure~\ref{fig:tree}. Note that the line $L_i$ depends on $\mathcal{Q}$, and different auxiliary regions corresponding to embedding vertices of $T$ of the same level might have a different line $L_i$. We will show below that these lines will all be close to $A_i$.
\begin{figure}[ht!]\centering
\begin{tikzpicture}[scale=3]
\foreach \x in {(5.25,5.35), (5.4,5.5), (5.5,5.33), (5.8,5.15), (4.8,5.55), (4.65,5.4), (4.8,5.35), (4.7,5.2), (5.35,5.4), (5.3,5.2), (5.4,5.15), (5.6,5.1)}{
\filldraw[black!40] \x circle (1pt);
}
\foreach \x in {(5.2,5.6), (5.35,5.65), (4.8,5.55)}{
\filldraw \x circle (1pt);}
\draw[dashed,blue] (4.5,5.8) -- (5.5,5.8);
\draw (5.7,5.8) node {$L_0$};
\draw (5,5.87) -- (5.2,5.6);
\draw (5,5.87) -- (4.8,5.55);
\draw[black!50] (4.65,5.4) -- (4.8,5.55);
\draw[black!50] (4.8,5.35) -- (4.8,5.55);
\draw[black!50] (4.8,5.35) -- (4.7,5.2);
\draw (5,5.87) -- (5.35,5.65);
\draw[black!50] (5.8,5.15) -- (5.35,5.65);
\draw[black!50] (5.2,5.6) -- (5.25,5.35);
\draw[black!50] (5.2,5.6) -- (5.4,5.5);
\draw[black!50] (5.3,5.2) -- (5.35,5.4);
\draw[black!50] (5.4,5.15) -- (5.35,5.4);
\draw[black!50] (5.4,5.15) -- (5.6,5.1);
\draw[black!50] (5.5,5.33) -- (5.4,5.5);
\draw[black!50] (5.35,5.4) -- (5.2,5.6);
\draw (6.3,4.8) -- (5,6) -- (3.7,4.8);
\draw[dashed,blue] (3.7,5) -- (6.2,5);
\draw (6.3,5) node {$A_1$};
\draw[dashed] (5,5) -- (5,4.7);
\filldraw (5,5.87) circle (1pt);
\foreach \x in {(5.25,5.1), (4.8,5.15), (5.05,5.35), (4.3,5.15)}{
\filldraw \x circle (1pt);
\draw (5,5.87) -- \x;}
\foreach \x in {(5.1,4.8),(5.17,4.8),(5.4,4.8)}{\filldraw \x circle (0.75pt); \draw \x -- (5.25,5.1);\draw (5.3,4.8) node {$\dots$};}
\foreach \x in {(4.65,4.85),(4.72,4.85),(4.95,4.85)}{\filldraw \x circle (0.75pt); \draw \x -- (4.8,5.15);\draw (4.85,4.85) node {$\dots$};}
\foreach \x in {(4.15,4.85),(4.22,4.85),(4.45,4.85)}{\filldraw \x circle (0.75pt); \draw \x -- (4.3,5.15);\draw (4.35,4.85) node {$\dots$};}
\foreach \x in {4.45,4.95,4.22,4.15,4.65,4.72}{\foreach \y in {4.85}{\draw[dashed,thick] (\x,\y) -- (\x-0.075,\y-0.15) -- (\x+0.075,\y-0.15) -- (\x,\y);}}
\foreach \x in {5.4,5.1,5.17}{\foreach \y in {4.8}{\draw[dashed,thick] (\x,\y) -- (\x-0.075,\y-0.15) -- (\x+0.075,\y-0.15) -- (\x,\y);}}
\end{tikzpicture}
\caption{Vertices from layer 1 in the tree are assigned to vertices in $\mathcal{Q}$. We assign the rest of the vertices in $\mathcal{Q}$ by embedding entire branches of the tree, as long as the number of vertices assigned is at most $R_i$ (in grey). The remaining branches become roots for the next iteration.}\label{fig:tree}
\end{figure}
Finally, if at some point of this process two vertices in $\mathcal{Q}$ are assigned to two adjacent vertices in $T$ that are at Euclidean distance more than $r$, then we clearly have to stop the algorithm prematurely (\textbf{Error 3} is reported).
It remains to argue that we never stop the algorithm prematurely as this implies that $T$ is embedded on the vertices inside the triangle. Let us deal first with Error~2, then with Error~3, leaving Error~1 for the end.
\medskip
\textbf{Error 2}---level $i$ in $T$ contains more vertices than are available in $\mathcal{Q}$ above $A_i$ (that is, more than $R_i$): First, let us observe that for $i \in \{1, 2, \ldots, 50\}$, the auxiliary line $A_i$ intersects the triangle so that the Euclidean distance between the two points where $A_i$ meets the sides of the triangle under consideration is $(20yr) i \le (20 c r) i = r i /500 \le r/3$. Hence, splitting of auxiliary regions cannot happen during the first 50 rounds. On the other hand, for $i \in \{1, 2, \ldots, 50\}$ we have $(20yr) i \ge (10 c r) i = r i /1000$. Let us then concentrate on any $i \in \{51, 52, \ldots, \ell/20\}$. We show, inductively, that when dealing with line $A_i$, the two corresponding points $b_1$ and $b_2$ are at distance at least $r/20$. The claim is true for $A_{50}$ as argued above. Suppose then that the claim holds for $A_{i-1}$ for some $i \in \{51, 52, \ldots, \ell/20\}$. If $\mathcal{Q}$ is split into two auxiliary regions, then the claim holds for $A_i$ unless \textbf{Error~1} is reported. On the other hand, if no splitting is performed, then the Euclidean distance between the two corresponding points can only increase, and so the claim clearly holds for $A_i$. This implies, in particular, that $\mathcal{Q}$ contains at least one small square, and thus $R_i \ge (1-\eps)(yr)^2 \ge (1-\eps)(cr/2)^2 \ge r^2 10^{-9}$. On the other hand, since
$$
i \le \frac {\ell}{20} = \frac {(x r \lg r)/(yr)}{20} \le \frac {c \lg r}{10 c} = \frac {\lg r}{10},
$$
we get from Observation~\ref{obs:trees} that the number of vertices on level $i$ in $T$ is at most
$$
\binom{d}{i} \le \binom{2 \lg r + 2 \lg \lg r + O(1)}{\lg r / 10 } \le \binom{ 3 \lg r }{ \lg r / 10 } \le (30e)^{\lg r / 10} \le 2^{7 \lg r / 10} < r^2 10^{-9},
$$
provided that $C$ is large enough. Hence, this error never occurs.
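The chain of inequalities above can be checked numerically for a concrete radius (an illustrative sanity check only, using $r = 2^{40}$; the constants are those appearing in the proof):

```python
from math import comb, e

# Sanity check of: binom(3 lg r, lg r/10) <= (30e)^{lg r/10}
#                  <= 2^{7 lg r/10} < r^2 * 10^{-9},  for r = 2^40.
lg_r = 40
k = lg_r // 10                                  # the level i is at most lg r / 10
assert comb(3 * lg_r, k) <= (30 * e) ** k       # binom(n,k) <= (en/k)^k with n = 3 lg r
assert (30 * e) ** k <= 2 ** (7 * lg_r / 10)    # since lg(30e) < 7
assert 2 ** (7 * lg_r / 10) < (2 ** lg_r) ** 2 * 1e-9
```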
\medskip
\textbf{Error 3}---two vertices assigned to adjacent vertices in $T$ are at distance more than $r$: It follows from the definition of $L_i$ that for any $i \in \{ 1, 2, \ldots, \ell/20\}$, $L_i$ lies above the auxiliary line $A_i$ ($L_0$ is exceptional and lies slightly below $A_0$). We are going to argue that $L_i$ is relatively close to $A_i$.
\smallskip
\noindent \emph{Claim: For any $i \in \{ 4, 5, \ldots, \ell/20\}$, $L_i$ lies below the auxiliary line $A_{i-4}$.}
\smallskip
We will be done once the claim is proved, as it implies that we never connect by an edge two vertices that are at Euclidean distance more than $50 (yr) + r/3 \le 50 c r + r/3 < r$. Indeed, vertices that need to be connected by an edge must lie in the part of $\mathcal{Q}$ between $L_{i-1}$ and $A_i$. The Euclidean distance between $L_{i-1}$ and $A_i$ is at most $50(yr)$ and the length of the intersection of $A_i$ and $\mathcal{Q}$ is at most $r/3$; see Figure~\ref{fig:connecting_Vertical}.
\smallskip
\noindent \emph{Proof of the Claim:} For a contradiction, suppose that there exists $i$ such that $L_i$ lies above $A_{i-4}$ and consider the smallest such $i$. Hence, $L_{i-1}$ lies below $A_{i-5}$. Let $\mathcal{Q}_1$ be the part of $\mathcal{Q}$ that lies between $L_{i-1}$ and $L_i$, and recall that $\mathcal{Q}_1$ contains at least $R_i/2$ vertices. Similarly, let $\mathcal{Q}_2$ be the part between $L_{i-1}$ and $A_i$ and recall that $\mathcal{Q}_2$ contains precisely $R_i$ vertices.
\begin{figure}[ht!]\centering
\begin{tikzpicture}
\filldraw[purple!20] (0,0) -- (3,0) -- (3,4.8*0.75) -- (0,4.8*0.75) -- (0,0);
\foreach \x in {0.1,0.6,1.1,1.6,2.1,2.6}{\filldraw[red!50, dashed] (0+\x,4.3*0.75) -- (0.25+\x,4.3*0.75) -- (0.25+\x,4.8*0.75) -- (0+\x,4.8*0.75);}
\foreach \i in {1,2,3,4, 5}{
\draw[dashed] (0,0.75*\i) -- (3,0.75*\i);
\draw (3.3,0.75*\i) node {$A_{i-\i}$};
}
\draw[dashed] (0,0) -- (3,0);
\draw (3.3,0) node {$A_{i}$};
\draw[thick] (-0.5,0.75*4.3) -- (3.5,0.75*4.3);
\draw (4.1,0.75*4.3) node {$L_{i}$};
\draw[thick] (-0.5,0.75*4.8) -- (3.5,0.75*4.8);
\draw (4.1,0.75*4.7) node {$L_{i-1}$};
\end{tikzpicture}
\caption{All vertices that need to be connected by an edge must be in the part of $\mathcal{Q}$ between $L_{i-1}$ and $A_i$.} \label{fig:connecting_Vertical}
\end{figure}
The fact that the area of $\mathcal{Q}_1$ is at most one fifth of the area of $\mathcal{Q}_2$, while $\mathcal{Q}_1$ contains at least half of the vertices, will lead us to the desired contradiction. Recall that the length of the intersection of $A_{i-4}$ with the triangle is $s \ge r/20-80(yr) \ge r/20-r/125 \ge r/25$. Hence, the number of small squares covering $\mathcal{Q}_1$ is at most $10(u+2)$, where $u=\lceil s/(yr) \rceil \ge 400$. The number of vertices in $\mathcal{Q}_1$ is then at most $10(u+2)(1+\eps)$, and so $R_i \le 20(u+2)(1+\eps)$. On the other hand, the number of small squares that are completely contained in $\mathcal{Q}_2 \setminus \mathcal{Q}_1$ is at least $40(u-2)$, and so $R_i$ is at least $40(u-2)(1-\eps)$. The contradiction follows, since $20(u+2)(1+\eps)<40(u-2)(1-\eps)$ for any $u \ge 400$.
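The final inequality can be checked numerically (an illustration only; we assume here just that the constant $\eps$ fixed earlier in the paper satisfies $\eps \le 0.1$):

```python
# The contradiction rests on 20(u+2)(1+eps) < 40(u-2)(1-eps) for all u >= 400,
# where eps is the small constant from the paper (assumed <= 0.1 here).
eps = 0.1
assert all(20 * (u + 2) * (1 + eps) < 40 * (u - 2) * (1 - eps)
           for u in range(400, 100000))
```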
\medskip
\textbf{Error 1}---the Euclidean distance between $b_1$ and $b$ (or $b$ and $b_2$) is either less than $r/20$ or more than $r/3$: Suppose that we partition $\mathcal{Q}$ containing $Q_i$ vertices into $\mathcal{Q}_1$ and $\mathcal{Q}_2$, where $\mathcal{Q}_1$ is the part of $\mathcal{Q}$ induced by $b_1$ and $b$, all the way down to $A_{\ell/20}$. Recall that $\mathcal{Q}_1$ contains at least $Q_i/4$ vertices, and that the Euclidean distance between $b_1$ and $b_2$ is more than $r/3$ (since we performed splitting).
\begin{figure}[ht!]\centering
\begin{tikzpicture}
\draw[thick] (0,0) -- (5,0);
\draw[thick] (0,0) -- (2.5,5);
\draw[thick] (5,5) -- (5,0);
\draw[thick] (2,0) -- (3.5,5);
\draw[thick, blue] (1,3.1) -- (5,3.1);
\draw[blue] (0.6,3.2) node {$L_{i-1}$};
\filldraw (1.55,3.1) circle (2pt);
\draw (1.25,3.35) node {$d_1$};
\draw (5.3,3.35) node {$d_2$};
\draw (2.85,3.35) node {$d$};
\filldraw (2.9,3.1) circle (2pt);
\filldraw (5,3.1) circle (2pt);
\filldraw (5,0) circle (2pt);
\filldraw (0,0) circle (2pt);
\filldraw (2,0) circle (2pt);
\draw (1.8,2) node {$\mathcal{Q}_1$};
\draw (-0.3,0.3) node {$b_1$};
\draw (5.3,0.3) node {$b_2$};
\draw (2.3,0.3) node {$b$};
\draw[black!60] (5,2.4) -- (1.2,2.4);
\draw (0.75,2.4) node {$A_{i-1}$};
\draw [brace] (1.7,3.25) -- (4.9,3.25);
\draw (3.35,3.5) node {$s$};
\draw[white] (0,-1.1) node {.};
\end{tikzpicture}\hskip 0.5 in
\begin{tikzpicture}
\filldraw[red!30] (1.5,3) rectangle (5,3.25);
\filldraw[red!30] (5,0) rectangle (4.75,3.25);
\foreach \x/\y in {0/0,0/0.25,0.25/0.5,0.25/0.75,0.5/1,0.5/1.25,0.75/1.5,0.75/1.75,1/2,1/2.25,1.25/2.5,1.25/2.75}{\filldraw[red!30] (0+\x,0+\y) rectangle (0.25+\x,0.25+\y);}
\draw[thick] (0,0) -- (5,0);
\foreach \x in {0.25,0.5,0.75,1,1.25,1.5,1.75,2,2.25,2.5,2.75,3,3.25,3.5,3.75,4,4.25,4.5,4.75}{\draw[dashed, black!20] (\x,0) -- (\x,5);}
\foreach \x in {0.25,0.5,0.75,1,1.25,1.5,1.75,2,2.25,2.5}{\draw[dashed, black!20] (0,\x) -- (5,\x);
\draw[dashed, black!20] (0,\x+2.5) -- (5,\x+2.5);}
\draw[thick] (0,0) -- (2.5,5);
\draw[thick] (5,5) -- (5,0);
\draw[thick] (2,0) -- (3.5,5);
\draw[thick, blue] (0,3.1) -- (5,3.1);
\draw[blue] (-0.4,3.2) node {$L_{i-1}$};
\draw [bracem] (0,-0.1) -- (1.9,-0.1);
\draw [bracem] (0,-0.6) -- (4.9,-0.6);
\draw (0.9,-0.4) node {$<r/3$};
\draw (2.5,-1) node {$>r/20$};
\draw[black!60] (5,2.4) -- (1,2.4);
\draw (-0.3,2.4) node {$A_{i-1}$};
\end{tikzpicture}
\caption{On the left: definitions of points and regions used in \textbf{Error~1}. On the right: illustration of the squares in case \textbf{Error 1} occurs because the Euclidean distance between $b_1$ and $b$ is less than $r/20$.} \label{FigureError1}
\end{figure}
Suppose that \textbf{Error 1} occurs because the Euclidean distance between $b_1$ and $b$ is less than $r/20$. Exactly the same argument can be applied to the case when the Euclidean distance between $b$ and $b_2$ is less than $r/20$. Let $d_1$, $d$, and $d_2$ be the three points of intersection of the line $L_{i-1}$ with the lines through the apex of the triangle and $b_1$, $b$, and $b_2$, respectively; see Figure~\ref{FigureError1}. Note that the Euclidean distance between $d_1$ and $d$ is less than $r/20$ and, since by the claim $L_{i-1}$ lies below $A_{i-5}$, the Euclidean distance between $d_1$ and $d_2$, denoted by $s$, satisfies $s > r/3 - 100(yr) \ge 3 r /10$. As the corresponding triangles are similar, the length of the intersection of each horizontal line between $L_{i-1}$ and $A_{\ell/20}$ inside $\mathcal{Q}_1$ is at most a factor of $(r/20)/(r/3) = 3/20$ of the total length of the intersection of the line with the triangle. Hence, the area of $\mathcal{Q}_1$ is at most a $3/20$ fraction of the area of $\mathcal{Q}$; the latter area will be denoted by $A$.
Arguing as in the previous error, the area of small squares completely contained in $\mathcal{Q}$ is at least $A \cdot \frac {u-2}{u} \cdot \frac {10}{11} \ge 0.9 A$ ($u = \lceil s/(yr) \rceil \ge \lceil (3r/10)/(yr) \rceil \ge 3000$). Indeed, since $L_{i-1}$ might cross small squares, the first row of small squares that intersects $\mathcal{Q}$ might be completely lost, giving an additional factor of $10/11$ (note that there are at least $10$ complete rows between $A_{i-1}$ and $A_i$). It follows that $Q_i \ge 0.9 A (1-\eps) > 0.89 A$.
On the other hand, the area of small squares having non-empty intersection with $\mathcal{Q}_1$ is at most $A \cdot \frac{3}{20} \cdot \frac {u'+3}{u'} \cdot \frac {11}{10} < 0.17 A$ ($u'=\lceil (3s/20)/(yr) \rceil \ge 450$). Hence, the total number of vertices in $\mathcal{Q}_1$ is at most $0.17 A (1+\eps) < 0.18 A$. This time we get $Q_i \le 4 \cdot 0.18 A = 0.72 A$, and the desired contradiction occurs.
Finally, let us note that \textbf{Error 1} cannot occur because the Euclidean distance between $b_1$ and $b$ is larger than $r/3$ (provided that the distances between $b_1$ and $b$ as well as between $b$ and $b_2$ are at least $r/20$). Since we consider the smallest $i$ for which such an error occurred, the length of the intersection of $A_i$ with $\mathcal{Q}$ is at most $r/3+20(yr)$ and so the Euclidean distance between $b_1$ and $b$ is at most $r/3+20(yr)-r/20 < r/3$. The same argument shows that the Euclidean distance between $b$ and $b_2$ cannot be larger than $r/3$.
\end{proof}
\section{Concluding remarks}
The proof of the lower bound can be easily generalized to show that for any fixed dimension $d$ and sufficiently large radius $r$, $a_t(G)=\Omega(n/(r \lg r)^d)$. For $d=1$, it is also easy to get the matching upper bound $a_t(G)=O(n/(r \lg r))$. It is natural to conjecture that for $d \ge 3$ the proof of the upper bound can also be adapted to show $a_t(G)=O(n/(r \lg r)^d)$, but in order not to make the paper too technical, we opted not to pursue this approach further.
% Sum-of-Max Partition under a Knapsack Constraint (https://arxiv.org/abs/2207.00768)
\section{Introduction}
Sequence and tree partition problems have been studied extensively since the 1970s,
due to their importance in parallel processing \cite{seq-2,app-parallel-1,app-parallel-2},
task scheduling \cite{app-task-scheduling-1,app-task-scheduling-2},
sequential data analysis \cite{app-data-analyis-1,app-data-analyis-2,app-data-analyis-3},
network routing and telecommunication \cite{seq-1,app-routing-1,app-routing-2,app-routing-3}.
In this paper, we study the following variant of the partition problem:
\begin{description}
\item[Sequence partition] Given a sequence of $n$ items $1,\ldots,n$, where item $i$ is associated with a weight $w_i$ and a parameter $s_i$
(which can be interpreted as the significance, or safety level, or distance from origin, or CPU delaying time, or length of object,
of item $i$, depending on the different applications of the problem),
partition the sequence into several consecutive subsequences,
so that the total weight of each subsequence is no more than a given threshold $w_0$ (this will be referred to as the Knapsack constraint),
and the objective is the sum of the largest $s_i$ in each subsequence, which should be minimized.
Throughout, we assume that $w_1,\ldots,w_n,s_1,\ldots,s_n$ are nonnegative.
\item[Tree partition] Given a tree of $n$ nodes $1,\ldots,n$, where node $i$ is associated with a weight $w_i$ and a parameter $s_i$,
partition the tree into several connected components,
so that the total weight of each component is no more than $w_0$
and the sum of the largest $s_i$ in each component is minimized.
\end{description}
Denote $w(j+1,i)=\sum_{j<k\leq i} w_k$ and $s(j+1,i)=\max_{j<k\leq i} s_k$.
The sequence partition problem can be solved in $O(n^2)$ time by a straightforward dynamic program
of the following formulation: $F[i] = \min\{F[j]+s(j+1,i) \mid j<i,w(j+1,i)\leq w_0\}~(1\leq i\leq n)$.
Those $j$ appearing in the formula for $F[i]$ are called the options of $i$, and $F[j]+s(j+1,i)$ is referred to as the value of $j$.
Organizing all these values in a min-heap, the running time can be improved to $O(n \log n)$.
Our main contribution is an even more satisfactory $O(n)$ time algorithm.
To obtain the mentioned $O(n)$ time algorithm, we abandon the min-heap and use a more clever data structure
for organizing the candidate values.
We first show that computing $F[i]$ reduces to finding the best s-maximal option,
where an option $j$ is \emph{s-maximal} if $s_j>s(j+1,i)$.
Interestingly, the s-maximal options fall into two categories:
As $i$ grows, some of these options go out of service due to the Knapsack constraint, and
we call them \emph{patient options} -- they clearly admit the first-in-first-out (FIFO) property,
whereas the other options go out of service due to the s-maximal condition,
and we call them \emph{impatient options} -- they admit exactly the opposite, first-in-last-out (FILO), property.
We then use a monotonic queue \cite{book-IntroAlg} for organizing the values of patient options
and a monotonic stack \cite{book-IntroAlg} for organizing the values of impatient options.
As a result, we find the best patient and impatient options, and thus the overall best option, in amortized $O(1)$ time,
thus obtaining the linear time algorithm.
The difficulty lies in analyzing the options and throwing them into the correct container -- the queue or the stack.
Nontrivial mechanisms are applied for handling this; see section~\ref{sect:chain}.
Note that in a final simplified version of our algorithm,
we further replace the monotonic queue and stack by a deque; see the discussion in subsection~\ref{subsect:alg-final-on}.
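To illustrate the data structures involved, here is the textbook monotonic-queue pattern applied to the classical sliding-window minimum problem (a generic sketch, not the algorithm of this paper):

```python
from collections import deque

def sliding_window_min(a, k):
    """Monotonic-queue example: minimum of every window of length k.
    The deque stores indices whose values are strictly increasing; the
    front is always the minimum of the current window, so each query is
    answered in O(1) amortized time."""
    q, out = deque(), []
    for i, x in enumerate(a):
        while q and a[q[-1]] >= x:   # pop dominated elements from the tail
            q.pop()
        q.append(i)
        if q[0] <= i - k:            # pop the expired index from the head
            q.popleft()
        if i >= k - 1:
            out.append(a[q[0]])
    return out
```

Each index enters and leaves the deque at most once, which is the same amortization argument used for the queue and stack of options below.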
Although our algorithm is inevitably more difficult to analyze than its heap-based alternative,
it is still quite easy to implement.
In fact, our \emph{C/C++} implementation (given in the appendix) contains only 30 lines, considerably shorter than that of the alternative.
The alternative algorithm is implemented as well for a comparison of real-world performance.
Experimental results show that our algorithm is stable and much faster as $n$ grows large; see Section~\ref{sect:experiment}.
\smallskip Our second result says that the decision version of our tree partition problem (see Problem~\ref{problem:tree-partition} in section~\ref{sect:tree}) is NP-complete.
For proving it,
we first show that a variant of the Knapsack problem (see Problem~\ref{problem:knapsack-2} in section~\ref{sect:tree}) is NP-complete,
and then prove that this Knapsack problem reduces to the tree partition problem.
In addition, we consider a special case of the tree partition problem where all the weights are unit.
We show that this problem admits an $O(w_0 n^2)$ time solution (note that $w_0=O(n)$),
which is based on a standard dynamic programming.
\subsection{Motivations \& Applications}
Our partition problems are not only of theoretical value (because they have clean definitions),
but also of practical value, as they can be applied in real life.
\medskip In physical distribution, $n$ cargos with weights $w_1,\ldots,w_n$ in a center
need to be loaded into vehicles and then be delivered to different destinations along a route,
having distances $s_1,\ldots,s_n$ away from the center.
Consecutive cargos whose total weight does not exceed a constraint $w_0$ can be loaded into the same vehicle.
A good partition of cargos is required for saving the total transportation fee.
Sometimes, cargos have the same destination but have different significance / fragile levels $s_1,\ldots,s_n$ and
each vehicle buys an insurance according to the highest level of cargos it contains.
A good partition saves the total insurance fee.
In a more realistic situation,
there are $k$ types of vehicles, each with a different weight limit and rate of oil consumption,
and we are allowed to select a vehicle for each batch of cargos.
We can model this by an extended partition problem and solve it in $O(kn)$ time (using the ideas for case $k=1$); see subsection~\ref{subsect:extension}.
\medskip Similar applications may be found in telecommunication / network routing,
where we may want to send $n$ messages on time using the satellite or cable.
The total length of the messages in each block is limited, which corresponds to the Knapsack constraint.
Moreover, the higher safety level a message has, the more expensive communication channel we must use for sending it.
Each block chooses a channel according to the highest safety level of the messages it contains,
and we want to partition the messages into blocks so that the total expense is minimized.
\medskip The partition problem finds applications in parallel computing and job scheduling.
We may also interpret $s_1,\ldots,s_n$ as processing times of jobs.
Each job requires some resources, and the total amount of resources a batch of jobs can use is limited.
\subsection{Related work}
Sequence partition problems have been studied extensively in the literature.
Olstad and Manne \cite{seq-1} presented an $O(k(n-k))$ time algorithm for
finding a partition of a given sequence of length $n$ into $k$ pieces $\gamma_1,\ldots,\gamma_k$ so that $\max_i f(\gamma_i)$ is minimized, where $f$ is any prescribed, nonnegative, and monotone function.
P{\i}nar and Aykanat \cite{seq-2} designed an $O(k\log n+n)$ time algorithm for a special case of this problem
where $f(\gamma_i)$ is defined as the sum of the weights of elements in $\gamma_i$.
As a comparison, the problem studied in \cite{seq-2} aims to minimize the Max-of-Sum, whereas our problem aims to minimize the Sum-of-Max.
Zobel and Dart \cite{seq-3} gave an $O(n)$ time algorithm for the following variant:
Given a threshold value $L$, find a partition into $k$ pieces $\gamma_1,\ldots,\gamma_k$ so that the total weight of each piece $\gamma_i$ is at least $L$ and $\sum_i (\text{the weight of $\gamma_i$}-L)^2$ is minimized.
Tree partition is more complicated than sequence partition,
and it has drawn more attention over the last four decades, especially in theoretical computer science.
Given a threshold $w_0$ and a tree whose nodes have assigned weights,
Kundu and Misra \cite{tree-1} showed a linear time algorithm for
finding a partition of the tree into $k$ components (by deleting $k-1$ edges),
so that each component has a total weight no more than $w_0$, meanwhile $k$ is minimized.
Note that this problem is a special case of our tree partition problem (where $s_i$'s are set to be 1).
Parley et al. \cite{tree-2} considered partitioning a tree into the minimal number of components
so that the diameter of each component is no more than a threshold $D_0$.
Becker and Schach \cite{tree-3} gave an $O(Hn)$ time tree partition algorithm
minimizing the number of components so that
the weight of each component is no more than a threshold $w_0$ and
the height of each component is no more than another threshold $H$.
Ito et al. \cite{tree-4} partitioned a tree in $O(n^5)$ time
into the minimum (or maximum, respectively) number of components with weights in a given range.
Pioneers in this area have also studied the tree partition problems
in which the number of components $k$ is fixed
and an objective function defined by the components is to be optimized.
For example, maximize the minimum weight of the components \cite{tree-k-1},
or minimize the maximum weight of components \cite{tree-k-2}.
Surprisingly, both problems can be solved in linear time by parametric search; see Frederickson \cite{tree-k-3,tree-k-4}.
Yet the linear time algorithm is extremely complicated.
Agasi et al. \cite{tree-k-5} showed that a variant of the min-max problem is NP-hard.
\newcommand{\mathsf{cost}}{\mathsf{cost}}
\newcommand{\mathsf{weight}}{\mathsf{weight}}
\newcommand{\mathsf{next}}{\mathsf{next}}
\section{A linear time algorithm for the partition problem}\label{sect:chain}
The partition problem can be solved by dynamic programming as shown below.
Let $F[i]$ be the optimal value of the following optimization problem:
\begin{quote}
Partition $[1,i]$ into several intervals $I_1,\ldots,I_j$ such that
their total cost $\sum_{k=1}^{j} \mathsf{cost}(I_k)$ is minimized,
subject to the constraint that
the weight $\mathsf{weight}(I_k)$ of each interval $I_k~(1 \leq k\leq j)$ is less than or equal to $w_0$.
\end{quote}
Throughout, $\mathsf{cost}(I_k)=\max_{v \in I_k} s_v$ and $\mathsf{weight}(I_k)=\sum_{v\in I_k} w_v$,
and they are abbreviated as $S_k$ and $W_k$, respectively, in the following.
Moreover, denote $W_{a,b}=\sum_{v:a\leq v \leq b} w_v$ and
$S_{a,b}=\max_v\{s_v | a\leq v\leq b\}$ for convenience.
The following transfer equation is obvious.
\begin{equation}\label{eq:1}
F[i]=\min_{j:0\leq j<i} \{ F[j]+ S_{j+1,i} \mid W_{j+1,i}\leq w_0\}.
\end{equation}
Clearly, the partition problem reduces to computing $F[1],\ldots,F[n]$.
Using formula \eqref{eq:1}, we can compute $F[1],\ldots,F[n]$ in $O(n^2)$ time: for each $F[i]$, it takes $O(n)$ time
to search the options of $i$ and select the best.
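For concreteness, the quadratic-time dynamic program reads as follows (a Python sketch with identifiers of our own; the paper's implementation language is C/C++):

```python
def partition_cost(w, s, w0):
    """Direct O(n^2) evaluation of the transfer equation
    F[i] = min over feasible options j of F[j] + S_{j+1,i}."""
    n = len(w)
    F = [0] + [float('inf')] * n          # F[0] = 0: the empty prefix
    for i in range(1, n + 1):
        weight, smax = 0, 0
        for j in range(i - 1, -1, -1):    # options j = i-1, i-2, ...
            weight += w[j]                # W_{j+1,i} (input is 0-indexed)
            if weight > w0:               # Knapsack constraint violated
                break
            smax = max(smax, s[j])        # S_{j+1,i}
            F[i] = min(F[i], F[j] + smax)
    return F
```

For instance, with $w=s=(2,1,3,1)$ and $w_0=3$, the optimum partition is $\{1,2\},\{3\},\{4\}$ with cost $2+3+1=6$.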
\subsection{An $O(n\log n)$ time algorithm using heap}\label{subsect:alg-heap-nlogn}
To speed up the na\"{\i}ve quadratic time algorithm above,
we have to search the best option of each $i$ more efficiently.
This subsection shows that we can find the best option in $O(\log n)$ time by utilizing the data structure heap.
Denote $O_i=\{j \mid 0\leq j <i, W_{j+1,i} \leq w_0\}$ for each $i~(1\leq i \leq n)$.
Call each element $j$ in $O_i$ an \emph{option} of $i$.
An option $j$ is called an \emph{s-maximal option} of $i$ if $j>0$ and $s_j> S_{j+1,i}$.
Denote by $O_i^s$ the set of s-maximal options of $i$.
Denote $o_i=\min{O_i}$ and note that $O_i=[o_i, i-1]$.
\begin{lemma} \label{lem:1}
Set $O_i^s \cup \{o_i\}$ contains an optimal option of $F[i]$. As a corollary:
\begin{equation}\label{eq:4}
F[i]=\min_j \left\{F[j]+S_{j+1,i} \mid j \in O_i^s \cup \{o_i\} \right\}.
\end{equation}
\end{lemma}
\begin{proof}
Assume $j > o_i$ and $j$ is not s-maximal. As $j$ is not s-maximal, $s_j \leq S_{j+1,i}$, therefore (a) $S_{j,i} = S_{j+1,i}$.
Moreover, we have (b) $F[j-1] \leq F[j]$. The proof of this inequality is as follows.
Let $\Pi$ be an optimal partition of $1\ldots j$.
Let $\Pi'$ be the same as $\Pi$ except for deleting $j$ (from the last interval).
Clearly, the cost of $\Pi'$ is at most the cost of $\Pi$ and the latter equals $F[j]$.
Moreover, the cost of the best partition of $1\ldots j-1$ is no more than that of $\Pi'$.
Together, $F[j-1] \leq F[j]$.
Combining (a) and (b), $F[j-1] + S_{j,i} =F[j-1] + S_{j+1,i} \leq F[j] +S_{j+1,i}$, which means option $j-1$ (note that $j-1 \geq o_i$, so it is indeed an option) is no worse than $j$
in computing $F[i]$.
Applying this argument repeatedly, it follows that there is a best option of $F[i]$ that is s-maximal or equal to $o_i$.
\end{proof}
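Lemma~\ref{lem:1} can also be checked empirically by running the quadratic dynamic program twice, once over all options and once restricted to $O_i^s \cup \{o_i\}$ (a Python sketch; identifiers are ours):

```python
import random

def dp(w, s, w0, restrict):
    """Prefix DP; with restrict=True only the s-maximal options and o_i are used."""
    n = len(w)
    F = [0] + [float('inf')] * n
    for i in range(1, n + 1):
        opts, weight = [], 0
        for j in range(i - 1, -1, -1):    # collect the feasible options of i
            weight += w[j]
            if weight > w0:
                break
            opts.append(j)
        if not opts:
            continue                      # item i alone violates the constraint
        o = min(opts)                     # o_i
        for j in opts:
            smax = max(s[j:i])            # S_{j+1,i} (input is 0-indexed)
            s_maximal = j > 0 and s[j - 1] > smax
            if restrict and not (s_maximal or j == o):
                continue
            F[i] = min(F[i], F[j] + smax)
    return F

# Random cross-check: restricting the options never changes the optimum.
random.seed(0)
for _ in range(300):
    n = random.randint(1, 12)
    w = [random.randint(1, 4) for _ in range(n)]
    s = [random.randint(1, 9) for _ in range(n)]
    w0 = random.randint(4, 10)
    assert dp(w, s, w0, False) == dp(w, s, w0, True)
```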
The subscript $i$ of $o_i$ is omitted when it is clear from the context.
\medskip Write $O_i^s = \{j_1,\cdots,j_t\}$, where $j_1 < \cdots < j_t$. According to the definition of s-maximality, $s_{j_1} > \cdots > s_{j_t}>s_i$.
We use a deque $J$ to store $O_i^s$ during the computation of $F[1],\ldots,F[n]$.
When we are about to compute $F[i]$, the deque $J$ shall be updated as follows:
\begin{enumerate}
\item $i-1$ joins $J$ (to the tail).
\item Several options $j$ at the tail of $J$ are popped out, since they do not satisfy the ``s-maximal constraint'' $s_j>s_i$.
\item Several options $j$ at the head of $J$ are popped out, since they do not satisfy the ``weight constraint'' $W_{j+1,i} \leq w_0$.
\end{enumerate}
Clearly, each $j~(1\leq j\leq n)$ will be pushed in and popped out from $J$ at most once,
so the total time for maintaining $J$ in the algorithm is $O(n)$.
Below we show how to compute $F[1],\ldots,F[n]$ using $J$ (i.e., $O^s_i$) and the equation \eqref{eq:4}.
\begin{definition}\label{def:next(j)}
For any s-maximal option $j$,
let $\mathsf{next}(j)$ be the first s-maximal option on the right side of s-maximal option $j$;
and define $\mathsf{next}(j) = i$ if $j$ is the rightmost s-maximal option.
Note that $\mathsf{next}(j)$ may change as $i$ increases.
By this definition, $S_{j+1,i}=s_{\mathsf{next}(j)}$. For convenience, denote $$\mathsf{cost}[j] = F[j] +s_{\mathsf{next}(j)}.$$
Furthermore, let $j^* = \mathop{\arg\min}_{j \in J}\ \{\mathsf{cost}[j]\}$. To be precise, if $J=\varnothing$, define $j^*=-1$.
Let $u = \mathop{\arg\max}_{o < j \leq i}\ s_j$ (if not unique, let $u$ be the largest one of them).
\end{definition}
It is obvious that $u=\left\{
\begin{array}{ll}
\mathsf{next}(o), & o \in O_i^s; \\
\min{( O_i^s \cup \{i\})}, & o \notin O_i^s.
\end{array}
\right.$
(by the monotonicity of $J$).
\smallskip Equipped with these notations, equation \eqref{eq:4} can be simplified as follows:
\begin{equation} \label{eq:5}
F[i]=
\begin{cases}
\min (F[o]+s_u, \mathsf{cost}[j^*]) &j^*\neq -1\\
F[o]+s_u & j^*=-1
\end{cases}
\end{equation}
\begin{proof}
When $j^*\neq -1$, set $J$ is not empty, and we have
\begin{equation}
\begin{aligned}
F[i]&=\min\left(F[o] + S_{o+1,i}, \min_{j\in J}\{F[j]+S_{j+1,i}\}\right)~\quad \text{(according to \eqref{eq:4})}\\
&=\min \left(F[o]+s_u, \min_{j\in J}\{F[j]+s_{\mathsf{next}(j)}\}\right)\\
&=\min \left(F[o]+ s_u, \min_{j\in J} \mathsf{cost}[j] \right)=\min (F[o]+ s_u, \mathsf{cost}[j^*])
\end{aligned}
\end{equation}
When $j^* = -1$, set $J=O^s_i=\varnothing$ and $F[i]=F[o] + S_{o+1, i} = F[o] + s_u$.
\end{proof}
We can compute $F[1],\ldots,F[n]$ in $O(n\log n)$ time based on formula \eqref{eq:5}.
Notice that $o_i$ can be computed in $O(1)$ amortized time, and so can $u$, which can be read off easily from $J$.
The challenge only lies in computing $j^*$ and $\mathsf{cost}[j^*]$.
For computing $j^*$ and $\mathsf{cost}[j^*]$ efficiently,
we organize $\{(\mathsf{cost}[j], j)\mid j \in J\}$ into a min-heap. Then, $j^*$ can be found in $O(1)$ time.
Note that $\mathsf{cost}[j]$ changes only when $\mathsf{next}(j)$ changes.
Moreover, at most one value of $\mathsf{next}$ changes when $i$ increases by 1.
Hence, $\{(\mathsf{cost}[j], j) \mid j \in J\}$ changes at most $O(n)$ times during the process of the algorithm.
Each change takes $O(\log n)$ time to process in the heap.
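One possible implementation of this subsection is sketched below (Python for brevity, with identifiers of our own; the paper's code is in C/C++). Stale heap entries are pruned lazily: an entry is discarded when its option has left $J$ or its recorded $\mathsf{next}(j)$ no longer matches the current one.

```python
import heapq
from collections import deque
from math import inf

def partition_cost_nlogn(w, s, w0):
    """Sketch of the heap-based O(n log n) algorithm: J is the deque of
    s-maximal options, nxt stores next(j), and a lazy min-heap holds
    (cost[j], j, next(j)) triples."""
    n = len(w)
    sv = lambda j: s[j - 1]            # s_j in the paper's 1-indexed notation
    F = [0] + [inf] * n
    pre = [0]
    for x in w:
        pre.append(pre[-1] + x)        # prefix weights: W_{a,b} = pre[b] - pre[a-1]
    J, in_J, nxt, heap, o = deque(), set(), {}, [], 0
    for i in range(1, n + 1):
        if i - 1 >= 1:                 # option i-1 joins J at the tail
            if J:                      # its predecessor's next(j) changes
                nxt[J[-1]] = i - 1
                heapq.heappush(heap, (F[J[-1]] + sv(i - 1), J[-1], i - 1))
            J.append(i - 1)
            in_J.add(i - 1)
        while J and sv(J[-1]) <= sv(i):    # s-maximal constraint s_j > s_i
            in_J.discard(J.pop())
        if J:                          # renew the last option: next(j) = i
            nxt[J[-1]] = i
            heapq.heappush(heap, (F[J[-1]] + sv(i), J[-1], i))
        while pre[i] - pre[o] > w0:    # advance o_i (weight constraint)
            o += 1
        while J and J[0] < o:          # drop head options violating the constraint
            in_J.discard(J.popleft())
        if o >= i:                     # item i alone exceeds w0: prefix infeasible
            continue
        if J and J[0] == o:
            u = nxt[o]                 # u = next(o) when o is s-maximal
        else:
            u = J[0] if J else i       # otherwise the head of J, or i itself
        best = F[o] + sv(u)
        while heap and (heap[0][1] not in in_J or nxt.get(heap[0][1]) != heap[0][2]):
            heapq.heappop(heap)        # lazy deletion of stale entries
        if heap:
            best = min(best, heap[0][0])
        F[i] = best
    return F
```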
\subsection{An $O(n)$ time algorithm using a novel grouping technique}
This section shows a novel \emph{grouping technique} that computes $j^*$ in $O(1)$ time.
For describing it, a concept called ``renew'' needs to be introduced.
\begin{definition} \label{def:renew}
We say an s-maximal option $j$ is \emph{renewed} when $\mathsf{next}(j)$ changes. After being renewed, $j$ is regarded as a new option, different from the previous one -- the same $j$ with a different $\mathsf{next}(j)$ will be treated differently.
\end{definition}
With this concept, the way for an option $j$ to exit $J$ falls into three classes:
\begin{itemize}
\item[\textcircled{1}] (as $i$ increases) $j$ pops out from the head of the deque, since the constraint $W_{j+1, i} \leq w_0$ is no longer satisfied.
\item[\textcircled{2}] (as $i$ increases) $j$ pops out from the tail of the deque, since the constraint $s_j>s_i$ is no longer satisfied.
\item[\textcircled{3}] (as $i$ increases) $j$ is renewed; the old $j$ pops out and a new $j$ is added to $J$.
\end{itemize}
\textbf{Note.} 1. Assume that the weight constraint $W_{j+1,i} \leq w_0$ is checked before the s-maximal constraint $s_j > s_i$. That is, if an option satisfies neither of these constraints, we regard it as popping out in way \textcircled{1}.
2. In each iteration, after some options pop out in way \textcircled{2},
the last option $j$ in $J$ (if $J\neq \varnothing$) will be renewed.
\medskip We divide the options into two groups: the patient ones and impatient ones.
\begin{definition} \label{def:patien and impatient}
An option that exits $J$ by \textcircled{1} is called a \emph{patient option}.
An option that exits $J$ by \textcircled{2} or \textcircled{3} is called an \emph{impatient option}.
For completeness, an option that remains in $J$ until the end of the algorithm is also called a \emph{patient option}.
\end{definition}
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{1.pdf}\\
\caption{Patient and impatient options. Suppose $i=6$. The s-maximal options are 2, 4, 5.
Option 2 would exit $J$ in way \textcircled{1} when $i=7$. So, it is patient.
Options 4 and 5 are impatient. When $i=7$, option 5 would exit $J$ in way \textcircled{2} and option 4 in way
\textcircled{3}. Note in particular that after option 4 is renewed, it becomes patient.}\label{fig:1}
\end{figure}
See Figure~\ref{fig:1} for an illustration of patient and impatient options.
As the illustration shows, an option $j$ may belong to different groups before and after being renewed,
such as $j=4$ in the example.
This is why the options before and after a renew must be distinguished, so that each option belongs to exactly one group.
\medskip Denote the set of patient options by $J^{(p)}$ and the set of impatient options by $J^{(ip)}$.
Obviously, $J = J^{(p)} \cup J^{(ip)}$. The idea of our algorithm is briefly as follows:
First, find the best option in $J^{(p)}$ and the best option in $J^{(ip)}$.
Then, choose the better of the two to be $j^*$.
Two subproblems remain:
1. How do we determine which group a newly added or renewed option belongs to?
2. How do we efficiently obtain the optimal option in $J^{(p)}$ and in $J^{(ip)}$?
Towards a linear time algorithm, both must be resolved in (amortized) constant time.
\subsubsection{Determine whether an option is patient or impatient}
We associate each option $j~(1 \leq j \leq n)$ with a counter, denoted by $counter[j]$,
which stores the number of times that $j$ will exit in way \textcircled{2} or \textcircled{3} in the future.
An option $j$ in $J$ is patient if and only if $counter[j]=0$.
In the following, we present a preprocessing algorithm (see Algorithm \ref{algo:2}) that computes the initial counters.
In the main process, when an option is about to be renewed, we decrease its counter by 1;
if $counter[j]$ drops to 0 at that point, option $j$ turns from impatient to patient.
\begin{algorithm}[h]
\LinesNumbered
\SetAlgoNoLine
\begin{small}
$o \leftarrow 0$\;
\For{$i = 1$ \textbf{to} $n$}{
\lWhile{ $W[o+1,i] > w_0 $}{
$o++$\;
}
\lWhile{$J~\&\&~W(J.head+1, i) > w_0$}{
$J$.deleteHead()\;
}
\lWhile{$J~\&\&~s[J.tail] \leq s[i] $}{
\{ $counter[J.tail]++$; ~~$J$.deleteTail();\} \\
}
\lIf{\text{$J$}}{
$counter[J.tail]++$ \;
}
$J$.insertTail($i$)\;
\lIf{$J.head = o$}{
$u[i] \leftarrow J.second$;
}
\lElse{
$u[i] \leftarrow J.head$\;
}
}
\end{small}
\caption{preprocess}
\label{algo:2}
\end{algorithm}
The preprocessing algorithm simulates the change of $J$ in advance.
Lines 4--5 in Algorithm \ref{algo:2}: Handle the options that exit $J$ by way \textcircled{1} and way \textcircled{2}. When an option exits in way \textcircled{2}, its counter is increased by 1.
Line 6 in Algorithm \ref{algo:2}: $J$.tail is renewed, thus $counter[J.tail]$++.
Line 8 in Algorithm \ref{algo:2}: Compute the value of $u$ for option $i$. Recall variable $u$ in Definition~\ref{def:next(j)} and \eqref{eq:5}. (Note: $u$ cannot easily be computed in the main process, since, as we will see, the main process no longer maintains $J$.)
\smallskip Algorithm~\ref{algo:2} runs in $O(n)$ time, since each option is inserted into and deleted from $J$ at most once. The detailed analysis is straightforward and omitted.
\subsubsection{Compute the optimal option in $J^{(p)}$ and $J^{(ip)}$}
The following (trivial) observations are crucial to our algorithm.
\begin{enumerate}
\item When an option exits $J^{(p)}$, it must be the smallest one in $J^{(p)}$.
In other words, the options in $J^{(p)}$ (i.e., patient options) are \textbf{first-in-first-out (FIFO)}.
\item When an option exits $J^{(ip)}$, it must be the largest one in $J^{(ip)}$.
In other words, the options in $J^{(ip)}$ (i.e., impatient options) are \textbf{first-in-last-out (FILO)}.
\end{enumerate}
Indeed, the options in $J$ are carefully partitioned into two groups (patient / impatient) so that the options in each group are either FIFO or FILO. Thanks to this, the best option in each group can be found efficiently, as shown below.
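To see why the FIFO structure matters, recall the classic monotonic-queue argument: when elements leave in insertion order, an entry that is both older and not cheaper than a later entry can never be the minimum, so discarding such dominated entries keeps the minimum at the head. A generic sliding-window-minimum sketch in Python (illustrative only, not the paper's exact data structure):

```python
from collections import deque

def window_minima(costs, width):
    """Minimum of each length-`width` window, O(n) total:
    the deque stores indices whose costs are strictly increasing,
    so the head is always the current window's minimum."""
    dq, out = deque(), []
    for i, c in enumerate(costs):
        while dq and costs[dq[-1]] >= c:   # dominated: older and not cheaper
            dq.pop()
        dq.append(i)
        if dq[0] <= i - width:             # head left the window (FIFO exit)
            dq.popleft()
        if i >= width - 1:
            out.append(costs[dq[0]])
    return out
```

Each index enters and leaves the deque at most once, which is exactly the amortized-$O(1)$ argument used for the patient group.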
\smallskip We use a deque and a stack to store $J^{(p)},J^{(ip)}$, respectively.
The maintenance of $J^{(p)}$ and $J^{(ip)}$ is similar to that of $J$, and is summarized in the following.
\begin{enumerate}
\item Before computing $F[i]$, if $s_{i-1}>s_i$, the s-maximal option $i-1$ needs to be added into $J^{(p)}$ or $J^{(ip)}$,
depending on whether $counter[i-1]=0$ or not.
\item Some options at the head of deque $J^{(p)}$ are popped out, since they no longer satisfy the constraint ``$W_{j+1,i} \leq w_0$'', and some options at the top of stack $J^{(ip)}$ are popped out, since they no longer satisfy the constraint ``$s_j>s_i$''.
\item If $J^{(ip)}\neq \varnothing$ after step 2, the counter of $j=J^{(ip)}.top$ is decreased by 1, meanwhile $\mathsf{next}(J^{(ip)}.top)$ becomes $i$. If $counter[j]$ drops to $0$, option $j$ becomes patient, and
we transfer $j$ to $J^{(p)}$ from $J^{(ip)}$ accordingly.
\end{enumerate}
\textbf{Note 1.} An option in $J^{(p)}$ can leave only due to the weight constraint $W_{j+1,i} \leq w_0$,
so it is unnecessary to check whether the tail of $J^{(p)}$ satisfies $s_j > s_i$.
Likewise, it is unnecessary to check the weight constraints of options in $J^{(ip)}$.
\textbf{Note 2.} When an option $j$ is transferred from $J^{(ip)}$ to $J^{(p)}$, it can be added to the tail of deque $J^{(p)}$ in $O(1)$ time:
at this moment $j$ has just been renewed, which means it is the largest option in $J$, so it can be appended directly to the tail of $J^{(p)}$.
Throughout, the options in $J^{(p)}$ and $J^{(ip)}$ are in ascending order from head to tail, or bottom to top.
Each option joins and leaves each of $J^{(p)}$ and $J^{(ip)}$ at most once.
Therefore the maintenance of $J^{(p)},J^{(ip)}$ takes $O(1)$ amortized time.
\medskip Next, we show how to quickly compute the optimal options in $J^{(p)}$ and $J^{(ip)}$,
respectively. To this end, we use a monotonic queue and a monotonic stack.
First, we define the concept called \emph{dead}.
\begin{definition} \label{def:dead}
Consider any option $j\in J^{(p)}$ ($j\in J^{(ip)}$, respectively). If there is another option $j'$ in $J^{(p)}$ ($J^{(ip)}$, respectively) with $\mathsf{cost}[j'] \leq \mathsf{cost}[j]$ such that $j'$ stays in $J^{(p)}$ ($J^{(ip)}$, respectively) at least as long as $j$ does, then $j$ is regarded as \emph{dead}.
(Note: In this definition, the renewed option is still regarded as a different option.)
\end{definition}
\begin{lemma} \label{lem:2}~
(1) Suppose $j,j'\in J^{(p)}$. If $j<j'$ and $\mathsf{cost}[j'] \leq \mathsf{cost}[j]$, option $j$ is dead;
(2) Suppose $j,j'\in J^{(ip)}$. If $j'<j$ and $\mathsf{cost}[j'] \leq \mathsf{cost}[j]$, option $j$ is dead.
\end{lemma}
\begin{proof}
First, we prove (1). Because $j<j'$, option $j$ is closer to the head of the deque than $j'$,
which means $j'$ leaves $J^{(p)}$ no earlier than $j$. By Definition~\ref{def:dead}, $j$ is dead.
Next, we prove (2). Because $j'<j$, option $j$ is closer to the top of the stack than $j'$,
which means $j'$ leaves $J^{(ip)}$ no earlier than $j$. By Definition~\ref{def:dead}, $j$ is dead.
\end{proof}
To compute the optimal option of $J^{(p)}$ or $J^{(ip)}$, we only need to focus on the options that are not dead.
The dead ones are certainly not optimal by definition.
(To be rigorous, there is always an optimal option that is not dead.)
Denote by $K^{(p)} =(p_1, \cdots, p_a)$ all the patient options that are not dead.
Denote by $K^{(ip)} =(q_1, \cdots, q_b)$ all the impatient options that are not dead.
Assume that $p_1 < \cdots <p_a$ and $q_1 < \cdots <q_b$.
As a corollary of Lemma \ref{lem:2}, $\mathsf{cost}[p_1] < \cdots < \mathsf{cost}[p_a]$, whereas $\mathsf{cost}[q_1] > \cdots > \mathsf{cost}[q_b]$.
Therefore, the optimal option in $J^{(p)}$ is $p_1$ and the optimal option in $J^{(ip)}$ is $q_b$.
It remains to explain how to maintain $K^{(p)}$ and $K^{(ip)}$ in $O(1)$ amortized time.
Because $K^{(p)}$ is a monotonic subsequence of $J^{(p)}$ and $K^{(ip)}$ is a monotonic subsequence of $J^{(ip)}$,
the maintenance of $K^{(p)}$ and $K^{(ip)}$ resembles that of $J^{(p)}$ and $J^{(ip)}$. Details are summarized below.
(Note: the cost of option $j$ is always stored in $\mathsf{cost}[j]$).
\begin{enumerate}
\item After adding an option to the tail of $K^{(p)}$, if $\mathsf{cost}[p_a] \leq \mathsf{cost}[p_{a-1}]$, then $p_{a-1}$ is dead, and hence it is removed from deque $K^{(p)}$. Repeat this until $\mathsf{cost}[p_a] > \mathsf{cost}[p_{a-1}]$. Zero or more options may be deleted from $K^{(p)}$ in this way.
\item After adding an option to the top of $K^{(ip)}$, if $\mathsf{cost}[q_b] \geq \mathsf{cost}[ q_{b-1}]$, then $q_b $ is dead, and it would be popped out of the stack directly. Otherwise, we have $\mathsf{cost}[q_1] > \ldots > \mathsf{cost}[q_b]$, and $q_b$ remains in the stack.
\item When we want to delete some options from $K^{(p)}$ or $K^{(ip)}$ (due to the weight or s-maximal condition),
no additional operation is required except the deletion itself.
\end{enumerate}
\subsection{Combine $K^{(p)}$ and $K^{(ip)}$ to simplify the above $O(n)$ algorithm}\label{subsect:alg-final-on}
The $O(n)$ time algorithm in the last subsection uses two data structures, $K^{(p)}$ and $K^{(ip)}$: a monotonic queue and a monotonic stack.
This subsection simplifies the algorithm by combining the two into a single deque.
First, we state a relationship between patient and impatient options.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{2.pdf}\\
\caption{The distribution of patient and impatient options.}\label{fig:2}
\end{figure}
\begin{lemma} \label{lem:3}
In $J=O_i^s$, every patient option is smaller than every impatient option.
\end{lemma}
\begin{proof}
Take any impatient option $j$. Since $j$ is impatient, it will leave $J$ by way \textcircled{2} or way \textcircled{3}, which means that $j$ is at the tail of $J$ when it is removed. Hence the options to the right of $j$ must leave $J$ at its tail as well
(they cannot leave at the head of $J$, since $j$ stands in front of them).
Therefore, the options to the right of $j$ are all impatient, which implies the lemma. See Figure~\ref{fig:2}.
\end{proof}
Recall that $K^{(p)}$ and $K^{(ip)}$ consist of options that are not dead and $K^{(p)}\subseteq J^{(p)}$ and $K^{(ip)}\subseteq J^{(ip)}$.
As a corollary of Lemma \ref{lem:3}, $K^{(p)}$ are to the left of $K^{(ip)}$.
\begin{figure}[h]
\centering
\includegraphics[width=.7\textwidth]{3.pdf}\\
\caption{The cost distribution of options in $K^{(p)}$ and $K^{(ip)}$.}\label{fig:3}
\end{figure}
Our final algorithm replaces $K^{(p)}$ and $K^{(ip)}$ by a single deque $K$, whose left part (head) is $K^{(p)}$ ($counter=0$) and whose right part (tail) is $K^{(ip)}$ ($counter>0$).
The costs of the options in the head part (i.e., $K^{(p)}$) are monotonically increasing,
and the costs of the options in the tail part (i.e., $K^{(ip)}$) are monotonically decreasing, as shown in Figure~\ref{fig:3}.
In particular, the optimal option in $K$ is at the head or tail of $K$.
\medskip The maintenance of $K$ is similar to the maintenance of $K^{(p)}$ and $K^{(ip)}$ separately. Algorithm~\ref{algo:3} demonstrates the process for maintaining $K$ and computing $F[1],\ldots,F[n]$.
Recall the preprocessing algorithm in Algorithm~\ref{algo:2}.\smallskip
\begin{algorithm}[H]
\LinesNumbered
\SetAlgoNoLine
\begin{small}
$o \leftarrow 0$\;
\For{$i = 1$ \textbf{to} $n$}{
\lWhile{$K~\&\&~W(K.head+1, i) > w_0$}{
$K$.deleteHead()\;
}
\lWhile{$K~\&\&~s[K.tail] \leq s[i] $}{
$K$.deleteTail()\;
}
\If{$K~\&\&~\mathsf{cost}[K.tail]\leq F[K.tail]+s[i]$}{
$\mathsf{cost}[K.tail] \leftarrow F[K.tail] + s[i]$; \quad $counter[K.tail]--$\;
}
\lIf{$|K|>1~\&\&~counter[K.tail_2] > 0~\&\&~ \mathsf{cost}[K.tail_2] \leq \mathsf{cost}[K.tail]$}{
$K$.deleteTail(); \quad // ``$K.tail_2$'' refers to the second last option in $K$.\\
}
\lWhile{$|K|>1~\&\&~counter[K.tail]=0~\&\&~\mathsf{cost}[K.tail_2] \geq \mathsf{cost}[K.tail]$}{
$K$.deleteTail2(); \quad // ``deleteTail2'' = delete the second last option in $K$.\\
}
\lWhile{$W[o+1, i] > w_0$}{
$o++$\;
}
$F[i] \leftarrow F[o] + s[u[i]]$\;
\lIf{$K$}{
$F[i] \leftarrow \min \{ \mathsf{cost}[K.head], \mathsf{cost}[K.tail], F[i]\}$\;
}
K.insertTail($i$); \quad $\mathsf{cost}[i] \leftarrow -1$\;
}
\end{small}
\caption{compute {$F[i]$}}
\label{algo:3}
\end{algorithm}
\smallskip Line~3 in Algorithm \ref{algo:3}: Some options at the head of $K$ exit by way \textcircled{1}.
Line~4 in Algorithm \ref{algo:3}: Some options at the tail of $K$ exit by way \textcircled{2}.
\smallskip Lines~5-7 in Algorithm \ref{algo:3}: After Line~4, the largest s-maximal option $J.tail$
must be renewed, as $\mathsf{next}(J.tail)$ becomes $i$.
Be aware, however, that $J.tail$ could be dead, in which case nothing needs to be done.
Observe that $J.tail$ is not dead if and only if $J.tail=K.tail$.
Moreover, $J.tail=K.tail$ holds if and only if $\mathsf{cost}[K.tail] \leq F[K.tail]+s[i]$.
When the latter condition holds (as checked by Line~5), we renew $K.tail$ at Line~6.
(This avoids computing $J.tail$ and comparing it with $K.tail$.)
\smallskip Lines~8-9 in Algorithm \ref{algo:3}: Remove the dead options.
Because a new option (including a renewed one) can join $K$ only at its tail,
we can detect dead options by comparing $K.tail_2$ with $K.tail$, as follows.
If $counter[K.tail_2] > 0$, the last two options of $K$ belong to $K^{(ip)}$.
In this case, if $\mathsf{cost}[K.tail] \geq \mathsf{cost}[K.tail_2]$, $K.tail$ is dead and thus deleted.
When $counter[K.tail] = 0$, the last two options in $K$ belong to $K^{(p)}$.
We then check if $\mathsf{cost}[K.tail_2] \geq \mathsf{cost}[K.tail]$. If so, $K.tail_2$ is dead and thus deleted.
Repeat this as long as $\mathsf{cost}[K.tail_2] \geq \mathsf{cost}[K.tail]$.
\subsubsection{An example with some comments}
Figure~\ref{fig:4} shows an example where $n=8$.
\begin{figure}[h]
\centering
\includegraphics[width=.75\textwidth]{4.pdf}\\
\caption{Illustration of an example.}\label{fig:4}
\end{figure}
We simulate the entire computation process for this example;
the state of the deque $K$ at each iteration of $i$ is shown in Table~\ref{table:1}.
\begin{table}[h!]
\centering
\begin{scriptsize}
\begin{tabular}{cccccc}
\hline
\multicolumn{1}{|c|}{$\boldsymbol{i}$} & \multicolumn{2}{c|}{$K$} & \multicolumn{1}{c|}{$\boldsymbol{o_{i}}$} & \multicolumn{1}{c|}{$\boldsymbol{F[i]}$} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{NULL } & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{1}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\multirow{-3}{*}{0}} & \multicolumn{1}{c|}{\multirow{-3}{*}{12}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{172}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{22} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{2}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0} & \multicolumn{1}{c|}{\multirow{-3}{*}{0}} & \multicolumn{1}{c|}{\multirow{-3}{*}{12}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{173}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{21} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{3}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0} & \multicolumn{1}{c|}{\multirow{-3}{*}{1}} & \multicolumn{1}{c|}{\multirow{-3}{*}{21}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{173}}\qquad \hspace{2pt}\footnotesize{\ding{174}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{21\qquad\quad 28} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{4}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0\qquad\quad ~~0} & \multicolumn{1}{c|}{\multirow{-3}{*}{1}} & \multicolumn{1}{c|}{\multirow{-3}{*}{21}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{173}}\qquad\hspace{3pt}\footnotesize{\ding{175}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{21\qquad\quad 26} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{5}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0\qquad\quad ~~0} & \multicolumn{1}{c|}{\multirow{-3}{*}{1}} & \multicolumn{1}{c|}{\multirow{-3}{*}{21}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{173}}\qquad\hspace{3pt}\footnotesize{\ding{175}}\qquad\hspace{3pt}\footnotesize{\ding{176}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{21\qquad\quad 26\qquad\quad 24} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{6}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0\qquad\quad ~~0\qquad\quad ~1} & \multicolumn{1}{c|}{\multirow{-3}{*}{2}} & \multicolumn{1}{c|}{\multirow{-3}{*}{21}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\,\footnotesize{\ding{173}}\qquad\hspace{3pt}\footnotesize{\ding{175}}\qquad\hspace{3pt}\footnotesize{\ding{176}}
\qquad\hspace{3pt}\footnotesize{\ding{177}}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{21\qquad\quad 26\qquad\quad 24\qquad\quad 23} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{7}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{~0\qquad\quad ~~0\qquad\quad ~1\qquad\quad ~1} & \multicolumn{1}{c|}{\multirow{-3}{*}{2}} & \multicolumn{1}{c|}{\multirow{-3}{*}{21}} \\ \hline
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{NULL} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{\textit{cost}} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{} \\ \cline{2-3}
\multicolumn{1}{|c|}{\multirow{-3}{*}{8}} & \multicolumn{1}{c|}{\textit{counter}} & \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\multirow{-3}{*}{5}} & \multicolumn{1}{c|}{\multirow{-3}{*}{30}} \\ \hline
\end{tabular}
\end{scriptsize}
\caption{Simulation of the entire process of the example shown in Figure~\ref{fig:4}.} \label{table:1}
\setlength\tabcolsep{15pt}
\end{table}
\begin{remark}
The reader may wonder whether the costs of the options in $K$ are monotonic (increasing or decreasing).
If this were true, our algorithm could be simplified.
However, Table~\ref{table:1} shows that the answer is negative.
When $i=7$, there are two options in each of $K^{(p)}$ and $K^{(ip)}$, so the costs in $K$ are not monotonic.
\end{remark}
\subsection{Extension}\label{subsect:extension}
In this subsection, we discuss an extension that not only partitions the sequence but also assigns each part to one of two (or more) agents.
\begin{problem}\label{problem:extended}
Given are two threshold values $W_A,W_B$ and two coefficients $c_A,c_B$.
There are $n$ jobs $1,\ldots,n$ to process (in order), where job $i$ is associated with $(w_i,s_i)$. All parameters are nonnegative.
A group of consecutive jobs $i,\ldots,j$ can be processed in a batch as follows:
\begin{enumerate}
\item[(a)] If $w_i+\ldots+w_j\leq W_A$, jobs $i,\ldots,j$ can be processed in a batch by an A-type agent,
and the cost is $c_A \cdot \max\{s_i,\ldots,s_j\}$.
\item[(b)] If $w_i+\ldots+w_j\leq W_B$, jobs $i,\ldots,j$ can be processed in a batch by a B-type agent,
and the cost is $c_B \cdot \max\{s_i,\ldots,s_j\}$.
\end{enumerate}
Find a partition and choose an agent for each part that minimizes the total cost.
\end{problem}
Compared to the original problem, we now have two choices for each part.
\smallskip Fortunately, the technique shown in the previous subsections generalizes to the extended problem.
Let $F[i]$ be the same as before. We have
\begin{equation}\label{eq:extended-dp}
F[i]=\min \left\{
\begin{array}{l}
F^A[i]:=\min_{j:0\leq j<i} \{ F[j]+ c_A \cdot S_{j+1,i} \mid W_{j+1,i}\leq W_A\}. \\
F^B[i]:=\min_{j:0\leq j<i} \{ F[j]+ c_B \cdot S_{j+1,i} \mid W_{j+1,i}\leq W_B\}.
\end{array}
\right.
\end{equation}
Denote $O^A_i=\{j \mid 0\leq j <i, W_{j+1,i}\leq W_A\}$ and $o^A_i=\min{O^A_i}$.
Call each element $j$ in $O^A_i$ an \emph{A-option} of $i$.
An A-option $j$ is called an \emph{s-maximal A-option} of $i$ if $j>0$ and $s_j> S_{j+1,i}$.
Denote by $O_i^{A,s}$ the set of s-maximal A-options of $i$.
The following lemma is similar to Lemma~\ref{lem:1}; proof omitted.
\begin{lemma} \label{lem:1-extended}
The set $O^{A,s}_i \cup \{o^A_i\}$ contains an optimal option for $F^A[i]$. As a corollary:
\begin{equation}\label{eq:4-extended}
F^A[i]=\min_j \left\{F[j]+c_A \cdot S_{j+1,i} \mid j \in O_i^{A,s} \cup \{o^A_i\} \right\}.
\end{equation}
\end{lemma}
The difficulty lies in computing the right-hand side of \eqref{eq:4-extended}.
We can maintain $J^A=O_i^{A,s}$ and find the best $j\in J^A$ in $O(\log n)$ time using a min-heap.
Alternatively, we can partition $J^A$ into patient and impatient options as we did for $J$, and
find the optimal option in each group in $O(1)$ time using a monotonic queue / stack.
Therefore, we can compute $F^A[i]$, and likewise $F^B[i]$, in $O(1)$ amortized time. As a corollary,
\begin{theorem}
Problem~\ref{problem:extended} can be solved in $O(n)$ time.
\end{theorem}
\begin{remark}
Indeed, if there are $k$ kinds of agents (for example, $k=2$ in Problem~\ref{problem:extended}),
we can solve the (extended) partition problem in $O(nk)$ time.
\end{remark}
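As with the basic problem, recurrence \eqref{eq:extended-dp} can be transcribed directly into an $O(kn^2)$ reference implementation, useful for validating the faster algorithm on small inputs. A Python sketch (agents given as a list of (threshold, coefficient) pairs; naming is ours):

```python
def partition_cost_agents(w, s, agents):
    """O(k n^2) reference DP for the extended problem:
    `agents` is a list of (threshold, coefficient) pairs,
    e.g. [(W_A, c_A), (W_B, c_B)]."""
    n = len(w)
    INF = float("inf")
    F = [0.0] + [INF] * n
    for i in range(1, n + 1):
        weight, smax = 0.0, 0.0
        for j in range(i - 1, -1, -1):      # candidate batch: jobs j+1..i
            weight += w[j]
            smax = max(smax, s[j])
            for W, c in agents:             # try each feasible agent type
                if weight <= W:
                    F[i] = min(F[i], F[j] + c * smax)
            if weight > max(W for W, _ in agents):
                break                       # no agent can take a longer batch
    return F[n]
```

With a single agent $(w_0, 1)$ this reduces to the basic Sum-of-Max partition problem; if some job's weight exceeds every threshold, the sketch returns infinity.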
\section{Tree partition}\label{sect:tree}
In this section, we move on to the tree partition problem defined as follows.
\begin{problem}\label{problem:tree-partition}
Given two reals $w_0, b$ and a tree, where each vertex $v$ is associated with two real parameters $w_v$ and $s_v$,
determine whether the vertices of the tree can be partitioned into connected components $\{T_k\}$ such that
\begin{equation}\label{eq:tree-partition}
\sum_{v \in T_k} w_v \leq w_0~(\forall k) \quad \text{and} \quad \sum_{k} \max(s_v\mid v \in T_k)\leq b
\end{equation}
\end{problem}
\newcommand{\mathcal{NP}}{\mathcal{NP}}
\newcommand{\mathcal{NPC}}{\mathcal{NPC}}
Our first result about this problem is a hardness result:
\begin{theorem}\label{thm:npc}
Problem \ref{problem:tree-partition} belongs to $\mathcal{NPC}$, i.e., it is NP-complete.
\end{theorem}
\subsection{A proof of the hardness result}
\begin{problem}\label{problem:knap-sack}
Given a sequence of real numbers $(w_1,\cdots,w_n,s_1,\cdots,s_n,w_0,s_0)$, where $w_i\geq 0~(1\leq i\leq n)$, determine whether there exists a set $A \subseteq [1,n]$ such that
\begin{equation}\label{eq:13}
\sum_{i \in A} w_i \leq w_0 \quad \text{and} \quad \sum_{i \in A} s_i \geq s_0
\end{equation}
\end{problem}
\begin{problem}\label{problem:knapsack-2}
Given a sequence of real numbers $(w_1,\cdots,w_n,s_1,\cdots,s_n,w_0,s_0)$, where $w_i\geq 0~(1\leq i\leq n)$, determine whether there exists a nonempty set $A \subseteq [1,n]$ such that
\begin{equation}\label{eq:14}
\sum_{i \in A} w_i \leq w_0 \quad \text{and} \quad \sum_{i \in A} s_i - \max_{i \in A} s_i \geq s_0
\end{equation}
\end{problem}
\begin{lemma}\label{lemma:npc}
Problem \ref{problem:knapsack-2} belongs to $\mathcal{NPC}$.
\end{lemma}
\begin{proof}
We will prove that Problem~\ref{problem:knap-sack} reduces to Problem~\ref{problem:knapsack-2}.
Since Problem~\ref{problem:knap-sack} $\in \mathcal{NPC}$ (which is well-known \cite{book-IntroAlg}), this implies that Problem~\ref{problem:knapsack-2} $\in \mathcal{NPC}$.
Assume $I = (w_1,\cdots,w_n,s_1,\cdots,s_n,w_0,s_0)$ is an instance of problem~\ref{problem:knap-sack}.
Let $I'= (w_1,\cdots,w_n,w_{n+1}=0,s_1,\cdots,s_n,s_{n+1}=\max\{s_1,\cdots,s_n\},w_0,s_0)$,
which is an instance of problem \ref{problem:knapsack-2}.
Denote by $\mathbf{L},\mathbf{L}'$ the set of yes instances of problem~\ref{problem:knap-sack}, \ref{problem:knapsack-2} respectively.
It remains to prove that $I \in \mathbf{L} \Leftrightarrow I' \in \mathbf{L}'$.
Assume that $I \in \mathbf{L}$. This means that there exists $A \subseteq [1,n]$ such that \eqref{eq:13} holds.
It is easy to see that $A \cup \{n + 1\}$ satisfies \eqref{eq:14}, therefore $I' \in \mathbf{L}'$.
Assume that $I' \in \mathbf{L}'$. This means that there exists $A \subseteq [1,n+1]$ such that \eqref{eq:14} holds.
Without loss of generality, assume $n+1\in A$; otherwise $A \cup \{n+1\}$ still satisfies \eqref{eq:14}.
It is easy to see that $A\setminus\{n+1\}$ satisfies \eqref{eq:13}, therefore $I \in \mathbf{L}$.
\end{proof}
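On small instances, the reduction can be verified exhaustively: enumerate all subsets for both problems and check that $I\in\mathbf{L}$ if and only if $I'\in\mathbf{L}'$. A brute-force Python sketch (function names and instance encoding are ours; 0-indexed items):

```python
from itertools import combinations

def yes_knapsack(w, s, w0, s0):
    """Problem (knap-sack): is there A with sum w <= w0 and sum s >= s0?"""
    n = len(w)
    for r in range(n + 1):
        for A in combinations(range(n), r):
            if sum(w[i] for i in A) <= w0 and sum(s[i] for i in A) >= s0:
                return True
    return False

def yes_knapsack2(w, s, w0, s0):
    """Problem (knapsack-2): is there a nonempty A with sum w <= w0
    and sum s - max s >= s0?"""
    n = len(w)
    for r in range(1, n + 1):
        for A in combinations(range(n), r):
            ws = sum(w[i] for i in A)
            ss = sum(s[i] for i in A) - max(s[i] for i in A)
            if ws <= w0 and ss >= s0:
                return True
    return False

def reduce_instance(w, s, w0, s0):
    """I -> I': append a zero-weight item with s_{n+1} = max s."""
    return w + [0], s + [max(s)], w0, s0
```

Running both deciders on an instance and its reduction should always agree, matching the equivalence proved above.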
With the above lemma, we can now prove Theorem~\ref{thm:npc}.
\begin{proof}[Proof of Theorem~\ref{thm:npc}]
We will show that Problem~\ref{problem:knapsack-2} reduces to Problem \ref{problem:tree-partition}.
Since Problem~\ref{problem:knapsack-2} $\in \mathcal{NPC}$ (see Lemma~\ref{lemma:npc}), this implies that Problem~\ref{problem:tree-partition} $\in \mathcal{NPC}$.
Consider an instance of problem~\ref{problem:knapsack-2}, $I = (w_1,\cdots,w_n,s_1,\cdots,s_n,w_0,s_0)$.
Without loss of generality, we assume that each $w_i$ is at most $w_0$.
Otherwise, we can simply remove $(w_i,s_i)$ from the instance and the answer does not change.
Let $b = \sum_{i=1}^{n} s_i - s_0$. Then, formula \eqref{eq:14} can be rewritten as follows.
\begin{equation}\label{eq:15}
\sum_{i \in A} w_i \leq w_0 \quad \text{and} \quad \sum_{i \in A} s_i - \max_{i \in A} s_i \geq \sum_{i=1}^{n} s_i - b
\end{equation}
Equivalently,
\begin{equation}\label{eq:16}
\sum_{i \in A} w_i \leq w_0 \quad \text{and} \quad \max_{i \in A} s_i + \sum_{i \notin A} s_i \leq b.
\end{equation}
Now, we construct an instance $I'$ of problem~\ref{problem:tree-partition} from $I$.
First, build a tree with vertices $1,\ldots,n$ and $n+1$, where $1,\ldots,n$ are all connected to $n+1$.
The $i$-th~$(1\leq i\leq n)$ node is associated with $w_i$ and $s_i$.
Moreover, set $w_{n+1}=s_{n+1}=0$.
Note that a partition of this tree corresponds to a subset $A$ of $[1,n]$: $A$ contains the labels of the vertices in the same connected component as $n+1$.
Moreover, the cost of the partition, $\sum_{k} \max(s_i \mid i\in T_k)$, equals $\max_{i \in A} s_i + \sum_{i \notin A} s_i$.
Therefore, subset $A$ satisfies formula \eqref{eq:16}
if and only if the corresponding partition satisfies formula \eqref{eq:tree-partition}.
It follows that $I$ is a yes instance of problem \ref{problem:knapsack-2} if and only if
$I'$ is a yes instance to problem~\ref{problem:tree-partition}.
Hence the reduction works.
\end{proof}
\subsection{A dynamic programming approach for the case of unit weight}
This subsection considers the tree partition problem under the restriction that
all the nodes have unit weight; henceforth assume $w_v = 1$ for every vertex $v$.
Denote the given tree by $T$, and denote by $T_v$ the subtree rooted at vertex $v$.
\begin{definition}
See Figure~\ref{fig:11}. In any partition of $T_v$, the component containing $v$ is called the \emph{growing component},
and the other components are called \emph{grown components}. (Within this subsection, a \emph{component} is short for a connected component of $T_v$.) The \emph{grown part} refers to the set of all grown components.
\end{definition}
For a vertex $v$ and integers $j~(1 \leq j \leq w_0)$ and $k~(1 \leq k \leq n)$,
let $f[v][j][k]$ be the minimum cost of the grown part,
among all the partitions of $T_v$ whose growing component
has exactly $j$ nodes and contains no node $v'$ with $s_{v'}>s_k$. Formally,
\begin{equation}\label{eq:9}
f[v][j][k] = \min_{\begin{subarray}{c}
\Pi: ~\text{partition of }T_v\text{ with $j$ nodes}\\
\text{in the growing component, and}\\
\text{$s_{v'}\leq s_k$ for each such node $v'$.}\end{subarray}} \text{the cost of the grown part of }\Pi.
\end{equation}
To be clear, the cost of the grown part is the total cost of the grown components.
Moreover, we define $f[v][j][k]= \infty$ in case there is no such partition.
\smallskip Let $F[v]$ be the cost of the optimal partition of $T_v$. Clearly,
\begin{equation}\label{eq:10}
F[v] = \min_{j,k}\{ f[v][j][k] + s_k\}
\end{equation}
\begin{figure}[h]
\setcaptionwidth{6cm}
\begin{minipage}[b]{.55\textwidth}
\centering
\includegraphics[width=\textwidth]{11.pdf}\\
\caption{Illustration of growing and grown components.}\label{fig:11}
\end{minipage}
\begin{minipage}[b]{.45\textwidth}
\centering
\includegraphics[width=.9\textwidth]{12}\\
\caption{Illustration of the computation of $g[a][j][k]$.}\label{fig:12}
\end{minipage}
\end{figure}
We address the computation of $f[v][j][k]$ in the following.
Fix $v$. Assume $v$ has $d$ children $c_1,\ldots, c_d$ (left to right).
Denote by $T_v^a~(0 \leq a\leq d)$ the tree obtained by deleting $c_{a+1}, \cdots, c_d$ and all their descendants from $T_v$.
Let $g[a][j][k]$ be the minimum cost of the grown part,
among all partitions of $T_v^a$ whose growing component has $j$ vertices and has no $v'$ with $s_{v'}>s_k$.
To be clear, $g[a][j][k]=\infty$ if no such partition exists. Note that $T_v=T_v^d$ and
\begin{equation} \label{eq:11}
f[v][j][k]=g[d][j][k].
\end{equation}
It remains to compute $g[a][j][k]~~(0\leq a\leq d, 1\leq j\leq w_0, 1\leq k \leq n)$.
Assume that $s_v\leq s_k$ and $a>0$; otherwise $g[a][j][k]$ is trivial to obtain.
Note that $d>0$ (as $a>0$), so $v$ is not a leaf. We have
\begin{eqnarray}\label{eq:12}
g[a][j][k]&=&\min_{1 \leq j'\leq j} \{g[a-1][j'][k] + \Delta_{j'} \},\\
\text{where }\Delta_{j'}&=&\left\{
\begin{array}{ll}
f[c_a][j-j'][k], & j'<j; \\
F[c_a], & j'=j.
\end{array}
\right.
\end{eqnarray}
See Figure~\ref{fig:12} for an illustration of \eqref{eq:12}. We omit the easy proof of \eqref{eq:12}.
\paragraph{Running time analysis}
Let $d_v$ be the number of children of $v$.
It takes $O(j)$ time to compute $g[a][j][k]$ based on \eqref{eq:12},
so computing all the $g$'s takes $O(\sum_v d_v w_0^2 n)=O(w_0^2 n^2)$ time.
It is easy to compute $f$ using \eqref{eq:11} and $F$ using \eqref{eq:10},
within the $O(w_0^2 n^2)$ time bound.
So, the total time is $O(w_0^2 n^2)$. (Be aware that $w_0\leq n$.)
\begin{theorem}
When all the nodes have a unit weight, the tree partition problem can be solved in $O(w_0^2n^2)$ time by dynamic programming.
\end{theorem}
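Although the general problem is NP-complete, small instances can be decided by brute force: cutting a subset of the $n-1$ edges yields exactly the partitions into connected components, so $2^{n-1}$ candidates suffice. A Python sketch, useful for validating the dynamic program above on tiny trees (naming is ours; vertices are $0,\ldots,n-1$):

```python
from itertools import product

def tree_partition_feasible(n, edges, w, s, w0, b):
    """Brute-force decider for the tree partition problem: try every
    subset of edges to keep; kept edges induce connected components.
    Feasible iff some choice has each component's weight <= w0 and the
    sum of per-component max s at most b.  Exponential -- tiny trees only."""
    for keep in product([True, False], repeat=len(edges)):
        parent = list(range(n))          # union-find over kept edges
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (u, v), k in zip(edges, keep):
            if k:
                parent[find(u)] = find(v)
        comps = {}
        for v in range(n):
            comps.setdefault(find(v), []).append(v)
        if all(sum(w[v] for v in c) <= w0 for c in comps.values()) and \
           sum(max(s[v] for v in c) for c in comps.values()) <= b:
            return True
    return False
```

For instance, on a star with center $0$ ($s_0=4$) and leaves with $s=5,1,2$, unit weights and $w_0=2$, the cheapest partition groups the center with the $s=5$ leaf for total cost $8$.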
\section{Summary}
We proposed a linear-time algorithm for the Sum-of-Max sequence partition problem under a knapsack constraint,
which arises in cargo delivery, telecommunication, and parallel computation.
The algorithm applies a novel dynamic programming speed-up technique
that partitions the candidate options into groups in which the options behave in a FIFO or FILO manner,
so that the best option in each group can be selected easily using monotonic queues and stacks.
Two ideas are crucial for efficiently assigning the options to the correct groups:
first, the concept of renewal, which distinguishes options in different states;
second, a counter for each option that records how many times it will be renewed in the future.
For completeness, we also studied the corresponding tree partition problem, which turns out to be NP-complete.
In the future, it is worth exploring further applications of the speed-up technique
of dividing candidate options into (FIFO or FILO) groups.
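For comparison with the linear-time result summarized above, the following is a minimal sketch (ours, for illustration only) of the straightforward quadratic dynamic program mentioned in the abstract, where `dp[i]` is the optimal cost of partitioning the length-$i$ prefix.

```python
def sum_of_max_partition(s, w, w0):
    """O(n^2) DP: partition the sequence into consecutive blocks, each of
    total weight at most w0, minimizing the sum of each block's largest s."""
    n = len(s)
    INF = float("inf")
    dp = [0] + [INF] * n
    for i in range(1, n + 1):
        block_w = 0
        block_max = -INF
        # Grow the last block leftwards; j is its starting index.
        for j in range(i - 1, -1, -1):
            block_w += w[j]
            if block_w > w0:
                break  # the knapsack constraint fails for all smaller j too
            block_max = max(block_max, s[j])
            dp[i] = min(dp[i], dp[j] + block_max)
    return dp[n]
```

For $s=(3,1,2)$ with unit weights and $w_0=2$, the optimum is $5$, attained e.g. by the blocks $(3,1)$ and $(2)$.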
\bibliographystyle{elsarticle-num}
| {
"timestamp": "2022-07-05T02:07:32",
"yymm": "2207",
"arxiv_id": "2207.00768",
"language": "en",
"url": "https://arxiv.org/abs/2207.00768",
"abstract": "Sequence partition problems arise in many fields, such as sequential data analysis, information transmission, and parallel computing. In this paper, we study the following partition problem variant: given a sequence of $n$ items $1,\\ldots,n$, where each item $i$ is associated with weight $w_i$ and another parameter $s_i$, partition the sequence into several consecutive subsequences, so that the total weight of each subsequence is no more than a threshold $w_0$, and the sum of the largest $s_i$ in each subsequence is minimized.This problem admits a straightforward solution based on dynamic programming, which costs $O(n^2)$ time and can be improved to $O(n\\log n)$ time easily. Our contribution is an $O(n)$ time algorithm, which is nontrivial yet easy to implement. We also study the corresponding tree partition problem. We prove that the problem on the tree is NP-complete and we present an $O(w_0 n^2)$ time ($O(w_0^2n^2)$ time, respectively) algorithm for the unit weight (integer weight, respectively) case.",
"subjects": "Data Structures and Algorithms (cs.DS); Optimization and Control (math.OC)",
"title": "Sum-of-Max Partition under a Knapsack Constraint"
} |
https://arxiv.org/abs/1103.4346 | Quantized algebras of functions on homogeneous spaces with Poisson stabilizers | Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0<q<1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0<q\le1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra \C[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K). | \section*{Introduction}
Following the foundational works of Woronowicz \cite{W} and Soibelman and Vaksman \cite{SV1}, the algebras of functions on $q$-deformations of compact groups and their homogeneous spaces were extensively studied in the 90s. Later the interest moved more towards the noncommutative geometry of these quantum spaces, see for example \cite{CS}, \cite{DAD}, \cite{NT2} and references therein, leaving the basic algebraic results scattered in the literature and proved mostly in particular cases, with varying degrees of generality, often limited to $SU(2)$ or $SU(N)$ and some of their homogeneous spaces, and more rarely covering classical compact simple groups and the corresponding full flag manifolds. The goal of this paper is to establish the main properties of quantized algebras of functions in full generality, for arbitrary Poisson homogeneous spaces of compact semisimple Lie groups such that the stabilizer of one point is a Poisson-Lie subgroup. (Full generality is of course a relative term here, as such spaces form only a relatively small class within the class of all Poisson homogeneous spaces \cite{Ka}.) As often happens, working in the general setting streamlines arguments and renders proofs more transparent, since there are fewer opportunities to rely on particular generators and relations. Most results are achieved by first considering $SU(2)$ and then using an inductive argument on the length of an element of the Weyl group. As such, the proofs owe much to the fundamental work of Soibelman~\cite{So}. We will now describe the contents of the paper.
In Section~\ref{s1} we briefly recall how a standard Poisson structure on a compact simply connected semisimple Lie group $G$ is defined and what the symplectic leaves are for this structure~\cite{LW,So}. Here we also classify all closed Poisson-Lie subgroups of $G$, a result which is probably known to experts on Poisson geometry.
In Section~\ref{s2} we fix a closed Poisson-Lie subgroup $K$ of $G$ and define the C$^*$-algebra $C(G_q/K_q)$ of functions on the $q$-deformation of $G/K$. The irreducible representations of $C(G_q)$ were classified by Soibelman~\cite{So}. Using his results Dijkhuizen and Stokman~\cite{DS}, following an earlier work of Podkolzin and Vainerman~\cite{PV} on quantum Stiefel manifolds, classified the irreducible representations of quantized algebras of functions on flag manifolds. From this we easily obtain a classification of the irreducible representations of $C(G_q/K_q)$, showing in particular that the equivalence classes of the irreducible representations are in a one-to-one correspondence with the symplectic leaves of $G/K$.
The structure of irreducible representations is refined in Section~\ref{s3}, where we obtain a composition series for $C(G_q/K_q)$. Such a composition series appeared already in work of Soibelman and Vaksman~\cite{SV2} on quantum odd-dimensional spheres. Similar results were then obtained in a number of particular cases~\cite{KL,Sh}, most recently for quantum Stiefel manifolds~\cite{CS}. The main part of the proof can be thought of as an analogue of the fact that the product of symplectic leaves of dimensions~$n$ and~$m$ in~$G$ decomposes into leaves of dimensions $\le n+m$.
Further refinement is obtained in Section~\ref{s4}, where we describe the Jacobson topology on the spectrum of $C(G_q/K_q)$. For $C(G_q)$, when the spectrum is identified with $W\times T$, where $W$ is the Weyl group and $T$ is the maximal torus in $G$, it was observed already by Soibelman~\cite{So} that the closure of $\{w\}\times T$ coincides with the set $\{\sigma\mid \sigma\le w\}\times T$, where $W$ is given the Bruhat order. It follows that the closure of a point $(w,t)\in W\times T$ is a union of sets $\{\sigma\}\times tT_{\sigma,w}$ with $\sigma\le w$ and $T_{\sigma,w}\subset T$. In Section~\ref{s4} we give a combinatorial description of the sets $T_{\sigma,w}$. The corresponding result for $q=1$ is that the closure of the symplectic leaf $\Sigma_w$ associated with $w\in W$ is the union of the sets $\Sigma_\sigma T_{\sigma,w}$ with $\sigma\le w$. This refines the well-known description of the cellular decomposition of $G/T$~\cite{St}.
In the formal deformation setting the algebra ${\mathbb C}[G_q]$, $q=e^{-h}$, of regular functions on $G_q$ is a deformation quantization of the Poisson algebra ${\mathbb C}[G]$. An accepted analytic analogue of deformation quantization is Rieffel's notion of strict deformation quantization~\cite{Ri}. In Section~\ref{s5} we show that the family of C$^*$-algebras $C(G_q/K_q)$ has a canonical continuous field structure, and then that ${\mathbb C}[G_q/K_q]$ define (non-canonically) a strict deformation quantization of ${\mathbb C}[G/K]$. This was proved for $G=SU(2)$ in~\cite{Sh0} and~\cite{Bau} and for $G=SU(N)$ in~\cite{Na3} (although it is not clear from the argument in~\cite{Na3} which Poisson structure on $SU(N)$ is quantized). The main observation which allows us to reduce the proof to the case $G=SU(2)$, and which we already essentially made in~\cite{NT6}, is that it is possible to canonically define a $C[a,b]$-algebra $\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[a,b]})$ playing the role of ${\mathbb C}[G_q]$ when $q$ is considered not as a fixed number, but as the identity function on $[a,b]$. Furthermore, these algebras have the expected functorial properties: given an embedding $K\hookrightarrow G$ we get a homomorphism $\Gamma_{alg}(({\mathbb C}[G_q])_{q})\to \Gamma_{alg}(({\mathbb C}[K_q])_{q})$.
A composition series similar to the one obtained in Section~\ref{s3} was used already by Soibelman and Vaksman~\cite{SV2} to compute the K-theory of the odd-dimensional quantum spheres. Such series were later used for K-theoretic computations in~\cite{Sh} and~\cite{CS}. The most powerful result of this sort was obtained by Nagy~\cite{Na}, who showed that the C$^*$-algebra $C(SU_q(N))$ is KK-equivalent to $C(SU(N))$, and remarked that similar arguments work for all classical simple compact groups. In Section~\ref{s6} we extend this result by showing that $C(G_q/K_q)$ is canonically KK-equivalent to $C(G/K)$.
\bigskip
\section{Poisson-Lie subgroups} \label{s1}
Let $G$ be a simply connected semisimple compact Lie group, $\g$ its
complexified Lie algebra. The universal enveloping algebra $U\g$ is a Hopf $*$-algebra with involution such that the real Lie algebra of~$G$ consists of skew-adjoint elements. Fix a nondegenerate symmetric $\operatorname{ad}$-invariant form on $\g$ such that its restriction to the real Lie algebra of $G$ is negative definite. Let $\h\subset\g$ be the Cartan subalgebra defined by a maximal torus $T$ in $G$. For every root $\alpha\in\Delta$ put $d_\alpha=(\alpha,\alpha)/2$. Let $H_\alpha\in\h$ be the element corresponding to the coroot $\alpha^\vee=2\alpha/(\alpha,\alpha)$ under the identification $\h\cong\h^*$. Under the same identification let $h_\beta\in\h$ be the element corresponding to $\beta\in\h^*$, so $h_\alpha=d_\alpha H_\alpha$ for $\alpha\in\Delta$. Fix a system $\Pi=\{\alpha_1,\dots,\alpha_r\}$ of simple roots. For every positive root $\alpha\in\Delta_+$ choose $E_\alpha\in\g_\alpha$ such that $(E_\alpha,E_\alpha^*)=d_\alpha^{-1}$, and put $F_\alpha=E_\alpha^*\in \g_{-\alpha}$, so that $[E_\alpha,F_\alpha]=H_\alpha$. We write $E_i,F_i,H_i,h_i$ for $E_{\alpha_i},F_{\alpha_i}, H_{\alpha_i}, h_{\alpha_i}$, respectively. Denote by $\omega_1,\dots,\omega_r$ the fundamental weights, so $\omega_i(H_j)=\delta_{ij}$.
The standard Poisson structure on $G$ is defined by the classical $r$-matrix
$$
r=i\sum_{\alpha\in\Delta_+}d_\alpha(F_\alpha\otimes E_\alpha-E_\alpha\otimes F_\alpha),
$$
meaning that if we consider the Hopf $*$-algebra ${\mathbb C}[G]$ of regular functions on $G$ as a subspace of~$(U\g)^*$, the Poisson bracket on $G$ is given by
\begin{equation} \label{epoissonbracket}
\{f_1,f_2\}=(f_1\otimes f_2)([\Dhat(\cdot),r]) \ \ \hbox{for}\ \ f_1,f_2\in{\mathbb C}[G],
\end{equation}
where $\Dhat$ is the comultiplication on $U\g$.
Soibelman~\cite{So} gave the following description of the symplectic leaves of $G$. For every simple root~$\alpha$ consider the corresponding embedding $\gamma_\alpha\colon SU(2)\to G$. It is a Poisson map when~$SU(2)$ is equipped with the Poisson structure defined by the classical $r$-matrix $id_\alpha(F\otimes E-E\otimes F)$. Consider the symplectic leaf
$$
\Sigma_0=\left\{\begin{pmatrix}\bar z & (1-|z|^2)^{1/2}\\ -(1-|z|^2)^{1/2} & z\end{pmatrix}: |z|<1\right\}\subset SU(2).
$$
Let $W$ be the Weyl group of $G$. Denote by $s_\gamma\in W$ the reflection defined by $\gamma\in\Delta$. We write $s_i$ for~$s_{\alpha_i}$. For every $w\in W$ choose a reduced expression $w=s_{i_1}\dots s_{i_n}$ and consider the map
\begin{equation} \label{esymplleaf}
\gamma_w\colon\Sigma_0^{n}\to G, \ \ (g_1,\dots,g_n)\mapsto \gamma_{i_1}(g_1)\dots \gamma_{i_n}(g_n),
\end{equation}
where $\gamma_i=\gamma_{\alpha_i}$; for $w=e$ the image of $\gamma_e$ consists solely of the identity element in $G$. It is a symplectomorphism of~$\Sigma_0^{n}$ onto a symplectic leaf $\Sigma_w$ of $G$. The leaf $\Sigma_w$ does not depend on the reduced expression for~$w$, although the map $\gamma_w$ depends on it. The decomposition of $G$ into its symplectic leaves is given by $G=\sqcup_{w\in W,t\in T}\Sigma_wt$.
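In the simplest case $G=SU(2)$ (an illustration we add for concreteness), the Weyl group is $W=\{e,s\}$ and the decomposition above reads

```latex
SU(2) \;=\; \bigsqcup_{t\in T}\bigl(\{t\}\sqcup \Sigma_0\,t\bigr),
```

so the symplectic leaves are the points of the maximal torus $T$ (the zero-dimensional leaves $\Sigma_e t$) and the two-dimensional leaves $\Sigma_s t=\Sigma_0 t$.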
We next define a class of subgroups of $G$. Let $S$ be a subset of $\Pi$. Denote by $\tilde K^S$ the closed connected subgroup of $G$ such that its complexified Lie algebra $\tilde\g_S$ is generated by the elements $E_i$ and $F_i$ with $\alpha_i\in S$, so
$$
\tilde\g_S=\operatorname{span}\{H_i\mid \alpha_i\in S\}\oplus\bigoplus_{\alpha\in\Delta_S}\g_\alpha,
$$
where $\Delta_S$ is the set of roots that lie in the group generated by $\alpha_i\in S$. Denote by $P(S^c)$ the subgroup of the weight lattice $P$ generated by the fundamental weights $\omega_i$ with $\alpha_i\in S^c=\Pi\setminus S$. Let $L$ be a subgroup of $P(S^c)$. Identifying $P$ with the dual group of the maximal torus $T$, denote by $T_L$ the annihilator of $L$ in $T$. Since $T$ normalizes $\tilde K^S$, the group $K^{S,L}$ generated by $\tilde K^S$ and $T_L$ is a closed subgroup of $G$, and its complexified Lie algebra is
$$
\g_{S,L}=\h_L\oplus\bigoplus_{\alpha\in\Delta_S}\g_\alpha,
$$
where $\h_L\subset\h$ is the annihilator of $L\subset\h^*$. Note that if $L=P(S^c)$ then $K^{S,L}$ is the group $\tilde K^S$. If $L=0$, we write $K^S$ for $K^{S,L}$. Then $K^S=G\cap P_S$, where $P_S\subset G_{\mathbb C}$ is the parabolic subgroup corresponding to $S$, and $\tilde K^S$ is the semisimple part of $K^S$.
\begin{proposition} \label{ppoissonsub}
For any subset $S\subset\Pi$ and any subgroup $L\subset P(S^c)$ we have:
\enu{i} $K^{S,L}$ is a Poisson-Lie subgroup of $G$, and any closed Poisson-Lie subgroup of $G$ is of this form for uniquely defined $S$ and $L$;
\enu{ii} $K^{S,L}\cap T=T_L$ and $(K^{S,L})^\circ\cap T=T_L^\circ$;
\enu{iii} $K^{S,L}$ is connected if and only if $P(S^c)/L$ is torsion-free.
\end{proposition}
\bp (i) By \cite[Proposition 2.1]{Stok} a closed connected Lie subgroup $K$ of $G$ is a Poisson-Lie subgroup if and only if its complexified Lie algebra $\mathfrak k_{\mathbb C}$ lies between $\tilde \g_S$ and $\g_S$ for some $S\subset \Pi$, so it has the form
\begin{equation} \label{epoissonsub}
\mathfrak k_{\mathbb C}=\mathfrak a\oplus\bigoplus_{\alpha\in\Delta_S}\g_\alpha,
\end{equation}
where $\mathfrak a$ is the complexified Lie algebra of $K\cap T$. It follows that for any $S\subset\Pi$ and $L\subset P(S^c)$, $(K^{S,L})^\circ$ is a Poisson-Lie subgroup. Furthermore, by construction $K^{S,L}$ is a finite disjoint union of sets of the form $(K^{S,L})^\circ t$ with $t\in T_L$. Since the translations by the elements of $T$ are Poisson maps, every such set $(K^{S,L})^\circ t$ is a Poisson submanifold of $G$, hence $K^{S,L}$ is a Poisson-Lie subgroup.
Conversely, assume $K$ is a closed Poisson-Lie subgroup of $G$. Assume first that $K$ is connected. Then its complexified Lie algebra has form \eqref{epoissonsub}. Denote by $L$ the annihilator of $K\cap T$ in $\hat T=P$. Then $K\cap T=T_L$ and $\mathfrak a=\h_L$, and since $H_i$ lies in this Lie algebra for $\alpha_i\in S$, we have $L\subset P(S^c)$. Therefore $K=(K^{S,L})^\circ$. Observe next that the group $T_L=K\cap T$ is connected, since it is abelian and contains a maximal torus of $K$. Hence the group $K^{S,L}=\tilde K^S T_L$ is connected, and thus $K=K^{S,L}$.
Consider now a not necessarily connected closed Poisson-Lie subgroup $K$. Then $K^\circ=K^{S,\Gamma}$ for some $S\subset\Pi$ and $\Gamma\subset P(S^c)$. Let $g\in K$. Consider a symplectic leaf $\Sigma$ of $G$ passing through $g$. By assumption the whole leaf $\Sigma$, and hence its closure, lies in $K$. From the description of symplectic leaves given above it is clear that $\bar\Sigma\cap T\ne\emptyset$. Since $\bar\Sigma$ is connected, it follows that there exists $t\in K\cap T$ such that $gt^{-1}\in K^\circ$. Therefore $K$ is generated by $K^\circ=K^{S,\Gamma}$ and $K\cap T$. Let $L\subset \Gamma$ be such that $K\cap T=T_L$. Then we conclude that $K=K^{S,L}$.
That $S$ is uniquely defined by $K$ is clear. That $L$ is also uniquely defined will follow from (ii).
\smallskip
(ii) As we already observed, the group $(K^{S,L})^\circ\cap T$ is connected. Since the Lie algebras of $K^{S,L}\cap T$ and $T_L$ clearly coincide, we conclude that $(K^{S,L})^\circ\cap T=T_L^\circ$. Since $K^{S,L}=(K^{S,L})^\circ T_L$, it follows that $K^{S,L}\cap T=T_L$.
\smallskip
(iii) As $K^{S,L}=(K^{S,L})^\circ T_L$ and $(K^{S,L})^\circ\cap T=T_L^\circ$, we have $K^{S,L}/(K^{S,L})^\circ=T_L/T_L^\circ$. In particular, $K^{S,L}$ is connected if and only if $T_L$ is connected, that is, $\hat T_L\cong P/L$ is torsion-free, or equivalently, $P(S^c)/L$ is torsion-free.
\ep
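To illustrate the proposition in the rank-one case (our example, with the standard identifications $T\cong U(1)$ and $P=\Z\omega$ for the fundamental weight $\omega$): for $G=SU(2)$ the possibilities are $S=\{\alpha\}$, giving $K=SU(2)$ itself, and $S=\emptyset$ with $L\subset P(S^c)=\Z\omega$, giving

```latex
S=\emptyset,\ L=0:\quad K^{\emptyset,0}=T;\qquad
S=\emptyset,\ L=m\Z\omega\ (m\ge1):\quad
K^{\emptyset,L}=T_L=\{t\in T\mid \omega(t)^m=1\}\cong\Z/m\Z.
```

In the last case $P(S^c)/L\cong\Z/m\Z$ has torsion for $m\ge2$, and indeed $T_L$ is then disconnected, in accordance with (iii). The quotient corresponding to $K=T$ is the $2$-sphere $SU(2)/T$, whose quantization is the standard Podle\'s sphere.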
To describe the symplectic leaves of the Poisson manifold $G/K^{S,L}$, consider the subgroup $W_S$ of~$W$ generated by the simple reflections $s_\alpha$ with $\alpha\in S$. Let $W^S\subset W$ be the set of elements $w$ such that $w(\alpha)>0$ for all $\alpha\in S$. Then (see e.g.~page~140 in \cite{St}) every element $w\in W$ decomposes uniquely as $w=w'w''$ with $w'\in W^S$ and $w''\in W_S$, and we have $\ell(w)=\ell(w')+\ell(w'')$; recall also that the length of an element in~$W_S$ is the same as in $W$.
\begin{proposition}
Let $\pi\colon G\to G/K^{S,L}$ be the quotient map. Then, for every $w\in W^S$ and $t\in T$, the map~$\pi$ defines a symplectomorphism of $\Sigma_w t$ onto a symplectic leaf of $G/K^{S,L}$. The decomposition of~$G/K^{S,L}$ into its symplectic leaves is given by $\sqcup_{w\in W^S,t\in T/T_L}\pi(\Sigma_wt)$.
\end{proposition}
\bp This is just a slight extension of results of Lu and Weinstein~\cite{LW} and Soibelman~\cite{So}, see also~\cite{DS}. We have the decompositions $G=\sqcup_{w\in W,t\in T}\Sigma_wt$ and $K^{S,L}=\sqcup_{w\in W_S,t\in T_L}\Sigma_wt$. Using that $T\Sigma_w=\Sigma_wT$ for any $w\in W$, and that the multiplication map $\Sigma_{w'}\times\Sigma_{w''}\to\Sigma_{w'w''}$ is a bijection for $w'\in W^S$ and $w''\in W_S$ (since $\ell(w'w'')=\ell(w')+\ell(w'')$), we conclude that $\pi$ is injective on every leaf $\Sigma_w t$ with $w\in W^S$ and arbitrary $t\in T$, and $G/K^{S,L}=\sqcup_{w\in W^S,t\in T/T_L}\pi(\Sigma_wt)$. Since by \cite[Theorem~4.6]{LW} the symplectic leaves of $G$ and $G/K^{S,L}$ are orbits of the right dressing action of the Poisson-Lie dual of $G$, the sets $\pi(\Sigma_wt)$ are symplectic leaves of $G/K^{S,L}$ for all $w\in W$ and $t\in T$.
\ep
\bigskip
\section{Irreducible representations of quantized function algebras} \label{s2}
Fix $q\in(0,1]$. If $q=1$ we put $U_1\g=U\g$. For $q\ne1$ the quantized universal
enveloping algebra~$U_q\g$ is generated by elements $E_i$, $F_i$, $K_i$,
$K_i^{-1}$, $1\le i\le r$, satisfying the relations
$$
K_iK_i^{-1}=K_i^{-1}K_i=1,\ \ K_iK_j=K_jK_i,\ \
K_iE_jK_i^{-1}=q_i^{a_{ij}}E_j,\ \
K_iF_jK_i^{-1}=q_i^{-a_{ij}}F_j,
$$
$$
E_iF_j-F_jE_i=\delta_{ij}\frac{K_i-K_i^{-1}}{q_i-q_i^{-1}},
$$
$$
\sum^{1-a_{ij}}_{k=0}(-1)^k\begin{bmatrix}1-a_{ij}\\
k\end{bmatrix}_{q_i} E^k_iE_jE^{1-a_{ij}-k}_i=0,\ \
\sum^{1-a_{ij}}_{k=0}(-1)^k\begin{bmatrix}1-a_{ij}\\
k\end{bmatrix}_{q_i} F^k_iF_jF^{1-a_{ij}-k}_i=0,
$$
where $\displaystyle\begin{bmatrix}m\\
k\end{bmatrix}_{q_i}=\frac{[m]_{q_i}!}{[k]_{q_i}![m-k]_{q_i}!}$,
$[m]_{q_i}!=[m]_{q_i}[m-1]_{q_i}\dots [1]_{q_i}$,
$\displaystyle[n]_{q_i}=\frac{q_i^n-q_i^{-n}}{q_i-q_i^{-1}}$,
$q_i=q^{d_i}$ and $d_i=d_{\alpha_i}$. This is a Hopf $*$-algebra with coproduct $\Dhat_q$ and
counit $\hat\eps_q$ defined by
$$
\Dhat_q(K_i)=K_i\otimes K_i,\ \
\Dhat_q(E_i)=E_i\otimes1+ K_i\otimes E_i,\ \
\Dhat_q(F_i)=F_i\otimes K_i^{-1}+1\otimes F_i,
$$
$$
\hat\eps_q(E_i)=\hat\eps_q(F_i)=0,\ \ \hat\eps_q(K_i)=1,
$$
and with involution given by $K_i^*=K_i$, $E_i^*=F_iK_i$, $F_i^*=K_i^{-1}E_i$.
If $V$ is a finite dimensional $U_q\g$-module and $\lambda\in
P$ is an integral weight, denote by $V(\lambda)$ the
space of vectors $v\in V$ of weight $\lambda$, so that
$K_iv=q^{(\lambda,\alpha_i)}v=q_i^{(\lambda,\alpha_i^\vee)}v$ for all $i$. Recall that $V$ is called admissible if
$V=\oplus_{\lambda\in P}V(\lambda)$. We denote by $\CC_q(\g)$ the tensor category of finite dimensional admissible $U_q\g$-modules.
Denote by ${\mathbb C}[G_q]\subset (U_q\g)^*$ the Hopf $*$-algebra of matrix coefficients of finite dimensional admissible $U_q\g$-modules, and let $C(G_q)$ be its C$^*$-enveloping algebra.
Consider also the endomorphism ring $\U(G_q)$ of the forgetful functor $\CC_q(\g)\to\Vect$.
In other words, if for every $\lambda\in P_+$ we fix
an irreducible $*$-representation of $U_q\g$ on a Hilbert space $V_\lambda$ with highest weight
$\lambda$, then $\U(G_q)$ can be identified with $\prod_{\lambda\in P_+}B(V_\lambda)$. Yet another way to think of~$\U(G_q)$ is as the algebra of closed densely defined operators affiliated with the von Neumann algebra~$W^*(G_q)$ of $G_q$. The maximal torus $T\subset G$ can be considered as a subset of group-like elements of $W^*(G_q)\subset \U(G_q)$: if $X\in\mathfrak t$ then for any admissible $U_q\g$-module $V$ and $\lambda\in P$ the element $\exp( X)\in T$ acts on $V(\lambda)$ as multiplication by $e^{\lambda(X)}$. Under this embedding $T\hookrightarrow\U(G_q)$, we have $K_j^{it}=\exp(it(\log q) h_j)\in T$ for $j=1,\dots,r$ and $t\in{\mathbb R}$.
From now on fix a subset $S\subset\Pi$ and a subgroup $L\subset P(S^c)$. Let $\U(K^{S,L}_q)$ be the $\sigma(\U(G_q),{\mathbb C}[G_q])$-closed subalgebra of $\U(G_q)$ generated by $T_L$ and $E_i$, $F_i$ with $\alpha_i\in S$. In other words, an element $\omega\in\U(G_q)$ belongs to $\U(K^{S,L}_q)$ if and only if for every finite dimensional admissible $U_q\g$-module $V$ the operator of the action by $\omega$ on $V$ lies in the algebra generated by $T_L$ and $E_i$, $F_i$ with $\alpha_i\in S$. Denote by ${\mathbb C}[K^{S,L}_q]\subset \U(K^{S,L}_q)^*$ the Hopf $*$-algebra that is the image of ${\mathbb C}[G_q]$ under the restriction map $\U(G_q)^*\to\U(K^{S,L}_q)^*$, and let $C(K^{S,L}_q)$ be its C$^*$-enveloping algebra. By construction we have an epimorphism $\pi\colon{\mathbb C}[G_q]\to{\mathbb C}[K^{S,L}_q]$ of Hopf $*$-algebras. Put
$$
{\mathbb C}[G_q/K^{S,L}_q]=\{a\in {\mathbb C}[G_q]\mid (\iota\otimes\pi)\Delta_q(a)=a\otimes 1\},
$$
where $\Delta_q$ is the comultiplication on ${\mathbb C}[G_q]$. Equivalently, $a\in {\mathbb C}[G_q/K^{S,L}_q]$ if and only if $(\iota\otimes \omega)\Delta_q(a)=\hat\eps_q(\omega)a$ for all $\omega\in\U(K^{S,L}_q)$. Denote by $C(G_q/K^{S,L}_q)$ the norm-closure of ${\mathbb C}[G_q/K^{S,L}_q]$ in $C(G_q)$.
For $\lambda\in P_+$ and $\xi,\zeta\in V_\lambda$ denote by $C^\lambda_{\zeta,\xi}\in {\mathbb C}[G_q]$ the matrix coefficient $(\cdot \,\xi,\zeta)$. Then ${\mathbb C}[G_q/K^{S,L}_q]$ is the linear span of elements $C^\lambda_{\zeta,\xi}$ such that $\lambda\in P_+$, $\zeta\in V_\lambda$ and $\xi\in V_\lambda$ is fixed by $K^{S,L}_q$, that is, $\omega\xi=\hat\eps_q(\omega)\xi$ for all $\omega\in\U(K^{S,L}_q)$.
\begin{remark}
The above description of ${\mathbb C}[G_q/K^{S,L}_q]$ as the linear span of certain elements $C^\lambda_{\zeta,\xi}$ implies that there exists a C$^*$-enveloping algebra of ${\mathbb C}[G_q/K^{S,L}_q]$, since if $\{ e_i\}_i$ is an orthonormal basis in $V_\lambda$ then $\sum_i(C^\lambda_{e_i,\xi})^*C^\lambda_{e_i,\xi}=\|\xi\|^21$, so that the norm of $C^\lambda_{\zeta,\xi}\in{\mathbb C}[G_q/K^{S,L}_q]$ in every representation of ${\mathbb C}[G_q/K^{S,L}_q]$ by bounded operators is not bigger than $\|\zeta\|\,\|\xi\|$. This C$^*$-algebra coincides with~$C(G_q/K^{S,L}_q)$. This was proved by Stokman~\cite{Stok} in the case $L=0$ using results of~\cite{DS} on representations of a certain subalgebra of ${\mathbb C}[G_q/K^{S,L}_q]$. An alternative way to see this is to use coamenability of $G_q$~\cite{Ba1}, see also Appendix~A in~\cite{NT5}, together with the following well-known fact: if a coamenable compact quantum group $G$ acts ergodically on a unital C$^*$-algebra $A$ (that is, we have a coaction $\alpha\colon A\to C(G)\otimes A$ such that $A^G=\{a\in A\mid\alpha(a)=1\otimes a\}=\C1$), $\A\subset A$ is the $*$-subalgebra spanned by the spectral subspaces of the action, and there exists an enveloping C$^*$-algebra $\tilde A$ for~$\A$, then~$\tilde A=A$. Indeed, let $\tilde\alpha\colon \tilde A\to C(G)\otimes \tilde A$ be the action of $G$ extending that on~$\A$ and let $\pi\colon\tilde A\to A$ be the quotient map. Consider the conditional expectation $\tilde E\colon \tilde A\to \tilde A^G=\C1$ defined by $\tilde E(a)=(h\otimes\iota)\tilde\alpha(a)$, where $h$ is the Haar state on $C(G)$, and a similar conditional expectation $E\colon A\to A^G=\C1$. As $h$ is faithful by coamenability of $G$, these conditional expectations are faithful. Since $\pi\tilde E=E\pi$, it follows that the kernel of $\pi$ is trivial.
\end{remark}
Our goal is to describe the irreducible representations of the C$^*$-algebra $C(G_q/K^{S,L}_q)$ for $q\in(0,1)$. For this recall the classification of irreducible representations of $C(G_q)$ obtained by Soibelman~\cite{So}.
Consider first the case $G=SU(2)$. We assume that the invariant symmetric form on $\sltwo({\mathbb C})$ is the standard one, so $(\alpha,\alpha)=2$ for the unique simple root $\alpha$. Consider the fundamental representation
$$
E\mapsto\begin{pmatrix}0 & q^{1/2}\\ 0 & 0\end{pmatrix}, \ \
F\mapsto\begin{pmatrix}0 & 0\\ q^{-1/2} & 0\end{pmatrix}, \ \
K\mapsto\begin{pmatrix}q & 0\\ 0 & q^{-1}\end{pmatrix}
$$
of $U_q\sltwo$. Then the corresponding corepresentation of ${\mathbb C}[SU_q(2)]$ has the form
$$
\begin{pmatrix}\alpha & -q\gamma^*\\ \gamma & \alpha^*\end{pmatrix},
$$
and the elements $\alpha,\gamma\in{\mathbb C}[SU_q(2)]$ satisfy the relations
$$
\alpha^*\alpha+\gamma^*\gamma=1,\ \, \alpha\alpha^*+q^2\gamma^*\gamma=1, \ \ \gamma^*\gamma=\gamma\gamma^*,
\ \ \alpha\gamma=q\gamma\alpha, \ \ \alpha\gamma^*=q\gamma^*\alpha.
$$
We will write $\alpha_q,\gamma_q$ when we want to emphasize that these are elements of ${\mathbb C}[SU_q(2)]$ for a particular~$q$.
Define a representation $\rho_q$ of $C(SU_q(2))$ on $\ell^2(\Z_+)$ by
\begin{equation} \label{erepSU2}
\rho_q(\alpha)e_n=\sqrt{1-q^{2n}}\,e_{n-1}, \ \ \rho_q(\gamma)e_n=-q^ne_n, \ \ n\ge0.
\end{equation}
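As a direct check (ours) that \eqref{erepSU2} is compatible with the relations above, compute on the basis vectors:

```latex
\rho_q(\alpha)^*e_n=\sqrt{1-q^{2n+2}}\,e_{n+1},\qquad
\rho_q(\alpha)^*\rho_q(\alpha)e_n=(1-q^{2n})e_n,\qquad
\rho_q(\gamma)^*\rho_q(\gamma)e_n=q^{2n}e_n,
```

so $\rho_q(\alpha^*\alpha+\gamma^*\gamma)=1$; similarly $\rho_q(\alpha)\rho_q(\alpha)^*e_n=(1-q^{2n+2})e_n$ combines with $q^2\rho_q(\gamma)^*\rho_q(\gamma)e_n=q^{2n+2}e_n$ to give $\rho_q(\alpha\alpha^*+q^2\gamma^*\gamma)=1$.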
Return to the general case. For every $1\le i\le r$ consider the homomorphism $\sigma_i\colon C(G_q)\to C(SU_{q_i}(2))$ which is dual to the embedding $U_{q_i}\sltwo\hookrightarrow U_q\g$ corresponding to the simple root $\alpha_i$. Then $\pi_i=\rho_{q_i}\sigma_i$ is a representation of $C(G_q)$ on $\ell^2(\Z_+)$. Now for every element $w\in W$ fix a reduced decomposition $w=s_{i_1}\dots s_{i_n}$ and put
$$
\pi_w=\pi_{i_1}\otimes\dots\otimes\pi_{i_n},
$$
so $\pi_w(a)=(\pi_{i_1}\otimes\dots\otimes\pi_{i_n})\Delta_q^{(n-1)}(a)$. Then $\pi_w$ is a representation of $C(G_q)$ on $\ell^2(\Z_+)^{\otimes\ell(w)}=\ell^2(\Z_+^{\ell(w)})$. Up to equivalence it does not depend on the choice of the reduced expression for $w$. In addition we have one-dimensional representations $\pi_t$ of $C(G_q)$ defined by the points of the maximal torus $T\subset\U(G_q)={\mathbb C}[G_q]^*$. In other words, $\pi_t(C^\lambda_{\zeta,\xi})=(t\xi,\zeta)$. Then the result of Soibelman says that the representations $\pi_{w,t}=\pi_w\otimes\pi_t$ are irreducible, mutually inequivalent, and exhaust all irreducible representations of $C(G_q)$ up to equivalence. Note that
\begin{equation}\label{et}
\pi_{w,t}(C^\lambda_{\zeta,\xi})=\pi_{w}(C^\lambda_{\zeta,t\xi}).
\end{equation}
The following result is a minor generalization of \cite[Theorem~5.9]{DS}.
\begin{theorem} \label{treps}
Assume $q\in(0,1)$. Then for every $w\in W^S$ and $t\in T$ the restriction of the representation $\pi_{w,t}$ of $C(G_q)$ to $C(G_q/K^{S,L}_q)$ is irreducible. Such representations exhaust all irreducible representations of $C(G_q/K^{S,L}_q)$ up to equivalence.
For $w,w'\in W^S$ and $t,t'\in T$ the restrictions of~$\pi_{w,t}$ and~$\pi_{w',t'}$ to $C(G_q/K^{S,L}_q)$ are equivalent if and only if $w=w'$ and $t't^{-1}\in T_L$, and in this case they are actually equal.
\end{theorem}
To prove the theorem we will need further properties of the representations $\pi_w$. Let $\lambda\in P_+$.
Fix a highest weight unit vector $\xi_\lambda\in V_\lambda$. For every $w\in W$ choose a unit vector $\eta\in V_\lambda$ of weight $w\lambda$. Since the weight spaces $V_\lambda(\lambda)$ and $V_\lambda(w\lambda)$ are one-dimensional, the element $C^\lambda_{\eta,\xi_\lambda}$ does not depend on the choice of $\xi_\lambda$ and $\eta$ up to a factor of modulus one. To simplify the notation we will thus write~$C^\lambda_{w\lambda,\lambda}$ for~$C^\lambda_{\eta,\xi_\lambda}$.
\begin{lemma}[\cite{So}] \label{lsoib}
Let $w\in W$ and $\lambda\in P_+$. Then
\enu{i} $\pi_w(C^\lambda_{w\lambda,\lambda})$ is a compact contractive diagonalizable operator with zero kernel, and the vector $e_0^{\otimes\ell(w)}\in\ell^2(\Z_+)^{\otimes\ell(w)}$ is its only (up to a scalar factor) eigenvector with eigenvalue of modulus $1$;
\enu{ii} if $\zeta\in V_\lambda$ is orthogonal to $(U_q\bb) V_\lambda(w\lambda)$, where $U_q\bb\subset U_q\g$ is the subalgebra generated by $K_i$, $K_i^{-1}$ and $E_i$, $1\le i\le r$, then $\pi_w(C^\lambda_{\zeta,\xi_\lambda})=0$.
\end{lemma}
\bp Part (i) is a consequence of the proof of \cite[Proposition~6.1.5]{KS}, see also identity (6.2.4) there (although notice that a factor of modulus one depending on the choice of orthonormal bases is missing there).
Part (ii) is \cite[Theorem~6.2.1]{KS}, since by that theorem $\pi_w$ corresponds to the Schubert cell $X_w$ in the terminology of~\cite{KS}, which in particular means that it has the property in the statement of the lemma.
\ep
\bp[Proof of Theorem~\ref{treps}]
Write $K^S_q$ for $K^{S,0}_q$. By \cite[Theorem~5.9]{DS} the restrictions of the representations~$\pi_{w,t}$ to $C(G_q/K^S_q)\subset C(G_q/K_q^{S,L})$ are irreducible for $w\in W^S$. Hence the restrictions of $\pi_{w,t}$ to~$C(G_q/K_q^{S,L})$ are irreducible as well. To see that we obtain all irreducible representations this way, note that any irreducible representation of $C(G_q/K_q^{S,L})$ extends to an irreducible representation of~$C(G_q)$ on a larger space. Therefore we have to find decompositions of $\pi_{w,t}$ into irreducible representations of $C(G_q/K_q^{S,L})$ for arbitrary $w\in W$ and $t\in T$. Write $w=w'w''$ with $w'\in W^S$ and $w''\in W_S$. We may assume that $\pi_w=\pi_{w'}\otimes\pi_{w''}$. Then by the proof of~\cite[Proposition~5.7]{DS} we have
\begin{equation} \label{eDS}
\pi_w(a)=\pi_{w'}(a)\otimes 1^{\otimes\ell(w'')}\ \ \hbox{for all}\ \ a\in C(G_q/K_q^{S,L})
\end{equation}
(the key point here is that for the homomorphism $\sigma_i\colon C(G_q)\to C(SU_{q_i}(2))$ we have $\sigma_i(a)=\eps_q(a)1$ for all $a\in C(G_q/K_q^{S,L})$ and $\alpha_i\in S$, where $\eps_q$ is the counit on $C(G_q)$). Using \eqref{et} it follows that if $\zeta\in V_\lambda$ and $\xi\in V_\lambda$ is fixed by $K^{S,L}_q$ then
$$
\pi_{w,t}(C^\lambda_{\zeta,\xi})=\pi_{w}(C^\lambda_{\zeta,t\xi})=\pi_{w'}(C^\lambda_{\zeta,t\xi})\otimes 1^{\otimes\ell(w'')}
=\pi_{w',t}(C^\lambda_{\zeta,\xi})\otimes 1^{\otimes\ell(w'')}.
$$
We therefore see that the representations $\pi_{w',t}$ with $w'\in W^S$ and $t\in T$ exhaust all irreducible representations of $C(G_q/K_q^{S,L})$.
Consider now two representations $\pi_{w,t}$ and $\pi_{w',t'}$ with $w,w'\in W^S$ and $t,t'\in T$. By~\cite[Theorem~5.9]{DS} if $w\ne w'$ then already the restrictions of these representations to $C(G_q/K^S_q)\subset C(G_q/K_q^{S,L})$ are inequivalent. Therefore assume $w=w'$. Since $\pi_{w,t}$ and $\pi_{w,t'}$ coincide on $C(G_q/K^S_q)$ and are irreducible as representations of $C(G_q/K^S_q)$, they can be equivalent as representations of $C(G_q/K_q^{S,L})$ only if they coincide on $C(G_q/K_q^{S,L})$. If $t't^{-1}\in T_L$, this is indeed the case, since for $\zeta\in V_\lambda$ and $\xi\in V_\lambda$ fixed by $K^{S,L}_q$ we have
$$
\pi_{w,t}(C^\lambda_{\zeta,\xi})=\pi_{w}(C^\lambda_{\zeta,t\xi})=\pi_{w}(C^\lambda_{\zeta,t'\xi})=
\pi_{w,t'}(C^\lambda_{\zeta,\xi}).
$$
Assume now that $t't^{-1}\notin T_L$. Then there exists $\nu\in L\subset P=\hat T$ such that $\nu(t't^{-1})\ne1$. Choose weights $\lambda,\mu\in P_+(S^c)=P_+\cap P(S^c)$ such that $\lambda-\mu=\nu$. We have $E_i\xi_\lambda=F_i\xi_\lambda=0$ for $\alpha_i\in S$, so that $(\iota\otimes \omega)\Delta_q(C^\lambda_{w\lambda,\lambda})=\hat\eps_q(\omega)C^\lambda_{w\lambda,\lambda}$ for $\omega$ lying in the algebra generated by $E_i$ and $F_i$ with $\alpha_i\in S$. We also have $(\iota\otimes \tau)\Delta_q(C^\lambda_{w\lambda,\lambda})=\lambda(\tau)C^\lambda_{w\lambda,\lambda}$ for $\tau\in T$. It follows that $(C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda}\in{\mathbb C}[G_q/K^{S,L}_q]$. Using \eqref{et} we get
$$
\pi_{w,t}((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda})
=\overline{\mu(t)}\lambda(t)\pi_{w}((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda})
=\nu(t)\pi_w((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda}),
$$
and similarly $\pi_{w,t'}((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda})
=\nu(t')\pi_w((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda})$. Since $\pi_w((C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda})\ne0$ by Lemma~\ref{lsoib}(i), we see that $\pi_{w,t}\ne\pi_{w,t'}$ on $C(G_q/K^{S,L}_q)$.
\ep
\begin{corollary}
The $*$-algebra ${\mathbb C}[G_q/K^{S,L}_q]$ is spanned by the elements of the form $(C^\mu_{\zeta,\xi_\mu})^*C^\lambda_{\eta,\xi_\lambda}$, where $\mu,\lambda\in P_+(S^c)$ are such that $\lambda-\mu\in L$, $\zeta\in V_\mu$ and $\eta\in V_\lambda$.
\end{corollary}
\bp This is similar to \cite[Theorem~2.5]{Stok}. That the linear span $\A$ in the formulation forms a $*$-algebra, is proved exactly as~\cite[Lemma~4.3]{DS}. That $\A\subset {\mathbb C}[G_q/K^{S,L}_q]$, is checked in the same way as that $(C^\mu_{w\mu,\mu})^*C^\lambda_{w\lambda,\lambda}\in{\mathbb C}[G_q/K^{S,L}_q]$ in the proof of the above theorem. Since $\A$ is invariant with respect to the left coaction $\Delta_q\colon{\mathbb C}[G_q]\to C(G_q)\otimes {\mathbb C}[G_q]$, we have $\bar\A\cap{\mathbb C}[G_q]=\A$. On the other hand, by the proof of the above theorem and by~\cite[Lemma~5.8]{DS} the algebra $\A$ has the property that the restrictions of irreducible representations of $C(G_q/K^{S,L}_q)$ to $\A$ are irreducible, and inequivalent representations restrict to inequivalent representations. By the Stone-Weierstrass theorem for type~I C$^*$-algebras it follows that $\bar\A=C(G_q/K^{S,L}_q)$. Hence $\A={\mathbb C}[G_q/K^{S,L}_q]$.
\ep
\bigskip
\section{Composition series} \label{s3}
In this section we will use the classification of irreducible representations of $C(G_q/K^{S,L}_q)$ to construct a composition series for $C(G_q/K^{S,L}_q)$. Since in the subsequent sections it will be important to have such a series for all $q$ including $q=1$, it is convenient to look at the irreducible representations in a slightly different way.
For $q\in[0,1)$ denote by $C(\bar{\mathbb D}_q)$ the universal unital C$^*$-algebra with one generator $Z_q$ such that
$$
1-Z_q^*Z_q=q^2(1-Z_qZ_q^*).
$$
Since the least upper bounds of the spectra of $aa^*$ and $a^*a$ coincide, it is easy to see that in every nonzero representation of the above relation the norm of $Z_q$ is equal to $1$, so $C(\bar{\mathbb D}_q)$ is well-defined. It follows then, see e.g. Section V in \cite{KL}, that $C(\bar{\mathbb D}_q)$ is isomorphic to the Toeplitz algebra $\TT\subset B(\ell^2(\Z_+))$ via an isomorphism which maps $Z_q$ into the operator
$$
e_n\mapsto \sqrt{1-q^{2(n+1)}}\, e_{n+1}, \ \ n\ge0.
$$
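Note that the defining relation of $C(\bar{\mathbb D}_q)$ is indeed satisfied by this weighted shift: since $Z_q^*Z_qe_n=(1-q^{2(n+1)})e_n$ and $Z_qZ_q^*e_n=(1-q^{2n})e_n$ for all $n\ge0$, both sides of the relation act on $e_n$ as multiplication by the same number,
$$
(1-Z_q^*Z_q)e_n=q^{2(n+1)}e_n=q^2(1-Z_qZ_q^*)e_n.
$$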
The inverse homomorphism maps the right shift operator $S\in\TT$ into $Z_q(Z_q^*Z_q)^{-1/2}$. Under this isomorphism the representation $\rho_q\colon C(SU_q(2))\to B(\ell^2(\Z_+))$ defined by~\eqref{erepSU2} becomes the $*$-homomorphism $C(SU_q(2))\to C(\bar{\mathbb D}_q)$ given by
\begin{equation} \label{erho}
\rho_q(\alpha_q)=Z_q^*, \ \ \rho_q(\gamma_q)=-(1-Z_qZ_q^*)^{1/2}.
\end{equation}
In this form $\rho_q$ makes sense for $q=1$. Namely, consider the C$^*$-algebra $C(\bar{\mathbb D}_1)=C(\bar{\mathbb D})$ of continuous functions on the closed unit disk, and denote by $Z_1$ its standard generator, $Z_1(z)=z$. Then $\rho_1\colon C(SU(2))\to C(\bar{\mathbb D}_1)$ defined by the above formula is the homomorphism given by restricting functions on $SU(2)$ to the closure of the symplectic leaf
$$
\Sigma_0=\left\{\begin{pmatrix}\bar z & (1-|z|^2)^{1/2}\\ -(1-|z|^2)^{1/2} & z\end{pmatrix}: |z|<1\right\}\cong{\mathbb D}.
$$
The representation $\pi_{w,t}$ defined by $w=s_{i_1}\dots s_{i_n}$ now becomes a $*$-homomorphism
$$
\pi_{w,t}\colon C(G_q)\to C(\bar{\mathbb D}_{q_{i_1}})\otimes\dots\otimes
C(\bar{\mathbb D}_{q_{i_n}}).
$$
For $q=1$ this is exactly the homomorphism $\gamma^*_w$, where $\gamma_w\colon{\mathbb D}^n\cong\Sigma_0^n\to G$ is defined by~\eqref{esymplleaf} and extended by continuity to $\bar{\mathbb D}^n$.
For every $q\in[0,1]$ we have a $*$-homomorphism $C(\bar{\mathbb D}_q)\to C(\T)$ mapping $Z_q$ into the standard generator of $C(\T)$. Denote by $C_0({\mathbb D}_q)$ its kernel. For $q=1$ this is the usual algebra of continuous functions on $\bar{\mathbb D}$ vanishing on the boundary. For $q\in[0,1)$ this is the ideal of compact operators in~$\TT=C(\bar{\mathbb D}_q)$.
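For $q\in[0,1)$ this is simply the Toeplitz extension written in terms of the generator~$Z_q$: under the isomorphism $C(\bar{\mathbb D}_q)\cong\TT$ the homomorphism $C(\bar{\mathbb D}_q)\to C(\T)$ becomes the symbol map, and we get the short exact sequence
$$
0\to C_0({\mathbb D}_q)\cong K(\ell^2(\Z_+))\to C(\bar{\mathbb D}_q)\to C(\T)\to0.
$$
For example, $1-Z_qZ_q^*$ acts on $e_n$ as multiplication by $q^{2n}$, so it is a compact operator, in agreement with the fact that its image in $C(\T)$ is zero.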
We can now formulate the main result of this section.
\begin{theorem} \label{tcompseries}
Assume $q\in(0,1]$. Let $w_0$ be the longest element in the Weyl group. Write $w_0=w_0'w_0''$ with $w'_0\in W^S$ and $w_0''\in W_S$, and put $m_0=\ell(w_0')$. For every $0\le m\le m_0$ denote by $J_m$ the ideal in $C(G_q/K^{S,L}_q)$ consisting of elements $a$ such that $\pi_{w,t}(a)=0$ for all $t\in T$ and $w\in W^S$ with $\ell(w)=m$. Then
$$
0=J_{m_0}\subset J_{m_0-1}\subset\dots\subset J_0\subset J_{-1}=C(G_q/K^{S,L}_q),
$$
and for every $0\le m\le m_0$ we have
$$
J_{m-1}/J_m\cong\bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L;C_0({\mathbb D}_{q_{i_1(w)}})\otimes\dots\otimes
C_0({\mathbb D}_{q_{i_m(w)}})), \ \ a\mapsto (a_w)_w,
$$
where $a_w(t)=\pi_{w,t}(a)$ and $w=s_{i_1(w)}\dots s_{i_m(w)}$ is the fixed reduced decomposition of $w$ used to define~$\pi_w$.
\end{theorem}
Note that there is no ambiguity in the definition of $a_w$, since by Theorem~\ref{treps} we have $\pi_{w,t}(a)=\pi_{w,t'}(a)$ if $t't^{-1}\in T_L$.
We need some preparation to prove this theorem. Recall some properties of the Bruhat order, see e.g.~\cite{BGG}.
The Bruhat order on $W$ is defined by declaring that $w\le w'$ iff $w$ can be obtained from $w'$ by dropping letters in some (equivalently, any) reduced word for~$w'$. Moreover, in that case the letters can be dropped in such a way that we get a reduced word for~$w$.
Consider the coset space $X_S=W/W_S$. For $x\in X_S$ define
$$
\ell_S(x)=\min\{\ell(w)\mid x=wW_S\}.
$$
As we know, every coset $x\in X_S$ has a unique representative $w_x\in W^S$, and $w_x$ is the smallest element in $x$ in the Bruhat order; in particular, $\ell_S(x)=\ell(w_x)$. Define an order on $X_S$ by declaring $x\le y$ iff $w_x\le w_y$, and call it again the Bruhat order.
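For example, let $G=SU(3)$, with simple roots $\alpha_1,\alpha_2$, and $S=\{\alpha_2\}$. Then $W_S=\{e,s_2\}$ and $W^S=\{e,s_1,s_2s_1\}$, so $X_S$ consists of the three cosets $eW_S$, $s_1W_S$ and $w_0W_S=\{s_1s_2s_1,\,s_2s_1\}$; for the last coset $x$ we have $w_x=s_2s_1$ and $\ell_S(x)=2$. Since $e\le s_1\le s_2s_1$ (drop $s_2$, then $s_1$, in the reduced word $s_2s_1$), the Bruhat order on $X_S$ is in this case a chain.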
\begin{lemma} \label{lBruhat}
We have:
\enu{i} the factor-map $W\to X_S$ is order-preserving;
\enu{ii} for any $x\in X_S$ and $\alpha\in\Pi$, either $s_\alpha x=x$ or $w_{s_\alpha x}=s_\alpha w_x$.
\end{lemma}
\bp To prove (i), take $u,v\in W$ such that $u\le v$, and put $x=uW_S$, $y=vW_S$. Write $v=w_yw$ with $w\in W_S$. Then
$w_x\le u\le v=w_yw$. Take a reduced word for $w_yw$ which is the concatenation of a reduced word for $w_y$ and a reduced word for $w$ in letters $s_\alpha$ with $\alpha\in S$. Then a reduced word for $w_x$ can be obtained by dropping some letters in this reduced word for $w_yw$. As $w_x$ is the shortest element in $x$, to get $w_x$ we have to drop all letters in the reduced word for $w$ and some letters in the reduced word for $w_y$, so $w_x\le w_y$.
To prove (ii), recall that by a well-known property of the Bruhat order on $W$ we have either $\ell(s_\alpha w_x)=\ell(w_x)-1$ and $s_\alpha w_x\le w_x$, or $\ell(s_\alpha w_x)=\ell(w_x)+1$ and $s_\alpha w_x\ge w_x$, depending on whether $w_x^{-1}(\alpha)<0$ or $w_x^{-1}(\alpha)>0$. In the first case $s_\alpha w_x$ is the shortest element in $s_\alpha x$, as $\ell(s_\alpha w_x)=\ell_S(x)-1$ and we obviously always have $|\ell_S(s_\alpha x)-\ell_S(x)|\le1$ by definition of~$\ell_S$. Hence $w_{s_\alpha x}=s_\alpha w_x$. In the second case we have $s_\alpha x\ge x$ by~(i) and hence
$\ell_S(s_\alpha x)\ge\ell_S(x)$. Therefore either $\ell_S(s_\alpha x)=\ell_S(x)+1$, in which case $s_\alpha w_x$ is the shortest element in $s_\alpha x$ and hence $w_{s_\alpha x}=s_\alpha w_x$, or $\ell_S(s_\alpha x)=\ell_S(x)$, in which case $s_\alpha x=x$ as $s_\alpha x\ge x$.
\ep
Note that part (i) implies in particular that if $w_0$ is the longest element in $W$ then $x_0=w_0W_S$ is the largest, hence the longest, element in $X_S$. Part (ii) implies that if $x\in X_S$ and $w=s_{i_1}\dots s_{i_n}\in x$ is written in reduced form, then a reduced word for $w_x$ can be obtained from $s_{i_1}\dots s_{i_n}$ by dropping all letters $s_{i_j}$ such that $s_{i_n}\dots s_{i_{j+1}}s_{i_j}s_{i_{j+1}}\dots s_{i_n}\in W_S$.
\smallskip
Denote by $P_{++}(S^c)\subset P_+(S^c)$ the subset consisting of weights $\lambda$ such that $\lambda(H_\alpha)>0$ for every $\alpha\in S^c$. The following result is well-known in the case $S=\emptyset$, see \cite[Theorem 2.9]{BGG}.
\begin{proposition} \label{pBGG}
Let $\lambda\in P_{++}(S^c)$ and $x,y\in X_S$. Then $x\le y$ if and only if $V_\lambda(w_x\lambda)\subset (U_q\bb)V_\lambda(w_y\lambda)$.
\end{proposition}
\bp By virtue of Lemma \ref{lBruhat} the proof is essentially identical to that of \cite[Theorem 2.9]{BGG} for the case $S=\emptyset$, and proceeds along the following lines. Define a partial order on $X_S$ by declaring $x\preceq y$ iff $V_\lambda(w_x\lambda)\subset (U_q\bb)V_\lambda(w_y\lambda)$. Note that it is indeed a partial order, since the stabilizer of $\lambda$ in~$W$ is exactly $W_S$ by Chevalley's lemma, see~\cite[Proposition~2.72]{Knapp}. It is checked that this order has the properties
\begin{itemize}
\item[(i)] if $\ell_S(s_\alpha x)=\ell_S(x)+1$ for some $x\in X_S$ and $\alpha\in\Pi$ then $x\preceq s_\alpha x$;
\item[(ii)] if $x\preceq y$ and $\alpha\in\Pi$ then either $s_\alpha x\preceq y$ or $s_\alpha x\preceq s_\alpha y$.
\end{itemize}
It is proved then that the Bruhat order is the unique order on $X_S$ satisfying these properties.
\ep
As usual define an action of the Weyl group on $T$ by requiring $\lambda(w(t))=(w^{-1}\lambda)(t)$ for $\lambda\in P=\hat T$. For $z\in\T$ define an automorphism $\theta_z$ of $C(\bar{\mathbb D}_q)$ by $\theta_z(Z_q)=\bar z Z_q$.
\begin{lemma} \label{ltconj}
For every simple root $\alpha\in\Pi$ and $t\in T$ we have $\pi_t\otimes\pi_{s_\alpha}=\theta_{\alpha(t)}\pi_{s_\alpha}\otimes\pi_{s_\alpha(t)}$ as homomorphisms $C(G_q)\to C(\bar{\mathbb D}_{q^{d_\alpha}})$. In particular, for every $w\in W$ and $t\in T$ the kernels of $\pi_t\otimes\pi_w$ and $\pi_w\otimes\pi_{w^{-1}(t)}$ coincide.
\end{lemma}
\bp Consider first the case $G=SU(2)$. In this case the claim is that $\pi_t\otimes\rho_q=\theta_{t^2}\rho_q\otimes\pi_{t^{-1}}$ for $t\in T\cong\T$. This is immediate by definition~\eqref{erho} of $\rho_q$, as
$$
(\pi_t\otimes\iota)\Delta_q(\alpha)=t\alpha,\ \ (\pi_t\otimes\iota)\Delta_q(\gamma)=t^{-1}\gamma, \ \
(\iota\otimes\pi_{t^{-1}})\Delta_q(\alpha)=t^{-1}\alpha,\ \
(\iota\otimes\pi_{t^{-1}})\Delta_q(\gamma)=t^{-1}\gamma.
$$
Consider now the general case. Note that similarly to~\eqref{et} we have $(\pi_t\otimes\pi_w)(C^\lambda_{\zeta,\xi})=\pi_w(C^\lambda_{t^{-1}\zeta,\xi})$. Let $t
=\exp({2\pi i h})\in T$, $h\in i\mathfrak t\subset\h$. Write $h=c H_\alpha+h_2$ with $c\in{\mathbb R}$ and $h_2\in\ker\alpha$, and put $t_1=\exp({2\pi i c H_\alpha})$ and $t_2=\exp({2\pi i h_2})$. Then $t=t_1t_2$, $s_\alpha(t)=t_1^{-1}t_2$ and $\alpha(t)=\alpha(t_1)=e^{4\pi i c}$. The homomorphisms $\pi_{t_1}$ and $\pi_{s_\alpha}$ factor through $C(SU_{q^{d_\alpha}}(2))$, hence
$$
\pi_{t_1}\otimes\pi_{s_\alpha}=\theta_{\alpha(t_1)}\pi_{s_\alpha}\otimes\pi_{t_1^{-1}}=\theta_{\alpha(t)}\pi_{s_\alpha}\otimes\pi_{t_1^{-1}}.
$$
Observe next that since $t_2$ commutes with $E_\alpha, F_\alpha\in U_q\g$, the restrictions of the matrix coefficients~$C^\lambda_{\zeta, t_2\xi}$ and~$C^\lambda_{t_2^{-1}\zeta, \xi}$ to the algebra generated by $E_\alpha$, $F_\alpha$ and $K_\alpha$ coincide. In other words, $C^\lambda_{\zeta, t_2\xi}$ and~$C^\lambda_{t_2^{-1}\zeta, \xi}$ have the same images in $C(SU_{q^{d_\alpha}}(2))$. We then have
\begin{align*}
(\pi_t\otimes\pi_{s_\alpha})(C^\lambda_{\zeta,\xi})&=(\pi_{t_1}\otimes\pi_{s_\alpha})(C^\lambda_{t_2^{-1}\zeta,\xi})
=(\pi_{t_1}\otimes\pi_{s_\alpha})(C^\lambda_{\zeta,t_2\xi})=(\theta_{\alpha(t)}\pi_{s_\alpha}\otimes\pi_{t_1^{-1}})(C^\lambda_{\zeta,t_2\xi})\\
&=(\theta_{\alpha(t)}\pi_{s_\alpha}\otimes\pi_{t_1^{-1}t_2})(C^\lambda_{\zeta,\xi})
=(\theta_{\alpha(t)}\pi_{s_\alpha}\otimes\pi_{s_\alpha(t)})(C^\lambda_{\zeta,\xi}).
\end{align*}
If $w=s_{i_1}\dots s_{i_n}$ then by induction we get
$$
\pi_t\otimes\pi_w=(\theta_{z_1}\otimes\dots\otimes\theta_{z_n})\pi_w\otimes\pi_{w^{-1}(t)},
$$
where $z_k=(s_{i_1}\dots s_{i_{k-1}}\alpha_{i_k})(t)$. This gives the last statement in the formulation of the lemma.
\ep
For $q=1$ the above lemma implies that $t\Sigma_w=\Sigma_w w^{-1}(t)$. This slightly weaker statement can be deduced without any computations from the fact that every symplectic leaf intersects the normalizer of $T$ at a unique point. For $q<1$ the lemma implies that the representations $\pi_t\otimes\pi_w$ and $\pi_w\otimes\pi_{w^{-1}(t)}$ of $C(G_q)$ on $\ell^2(\Z_+)^{\otimes\ell(w)}$ are equivalent. This can also be easily proved by comparing the highest weights~\cite{So} of these representations.
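For example, for a word of length two the induction in the proof of Lemma~\ref{ltconj} reads
$$
\pi_t\otimes\pi_{s_{i_1}}\otimes\pi_{s_{i_2}}
=\theta_{\alpha_{i_1}(t)}\pi_{s_{i_1}}\otimes\pi_{s_{i_1}(t)}\otimes\pi_{s_{i_2}}
=(\theta_{\alpha_{i_1}(t)}\otimes\theta_{(s_{i_1}\alpha_{i_2})(t)})(\pi_{s_{i_1}}\otimes\pi_{s_{i_2}})\otimes\pi_{(s_{i_2}s_{i_1})(t)},
$$
where in the second step we used that $\alpha_{i_2}(s_{i_1}(t))=(s_{i_1}\alpha_{i_2})(t)$ by the definition of the action of $W$ on $T$. This gives $z_1=\alpha_{i_1}(t)$ and $z_2=(s_{i_1}\alpha_{i_2})(t)$, and $(s_{i_2}s_{i_1})(t)=w^{-1}(t)$ for $w=s_{i_1}s_{i_2}$.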
\smallskip
For $z\in\T$ denote by $\chi_z$ the character of $C(\bar{\mathbb D}_q)$ defined by $\chi_z(Z_q)=z$. Assume $\alpha\in\Pi$ and $c\in{\mathbb R}$. Put $t=\exp({2\pi i c H_\alpha})\in T$ and $z=e^{-2\pi i c}$. Then
\begin{equation} \label{echaracter}
\pi_t=\chi_z\pi_{s_\alpha}\ \ \hbox{on}\ \ C(G_q).
\end{equation}
Indeed, this is enough to check for $G=SU(2)$, and then this is immediate, since $\pi_t(\alpha_q)=\bar z$ and~$\pi_t(\gamma_q)=0$.
\begin{lemma} \label{lproductcells}
For every $1\le m\le m_0$ the ideal $J_m\subset C(G_q/K^{S,L}_q)$ is contained in the kernel of $\pi_{s_{i_1}}\otimes\dots\otimes\pi_{s_{i_n}}\otimes\pi_t$ for any $t\in T$ and any indices $i_1,\dots,i_n$ with $n\le m$.
\end{lemma}
\bp We will prove this in several steps. First note that $J_{m+1}\subset J_m$ for all $0\le m<m_0$. Indeed, let $w\in W^S$, $\ell(w)=m$, and $t\in T$. There exists $\alpha\in\Pi$ such that $s_\alpha w\in W^S$ and $\ell(s_\alpha w)=m+1$. (This follows e.g.~from Lemma~\ref{lBruhat}(ii). Indeed, let $w_0$ be the longest element in~$W$. Write $w_0=s_{i_n}\dots s_{i_1}w$ with $n=\ell(w_0)-\ell(w)$. Then we can take $\alpha=\alpha_{i_k}$, where $k$ is the smallest number such that $w^{-1}s_{i_k}w\notin W_S$.) By \eqref{echaracter} we have $\pi_{w,t}=(\chi_1\otimes\iota)(\pi_{s_\alpha}\otimes\pi_{w,t})$. Therefore if $a\in J_{m+1}\subset\ker\pi_{s_\alpha w,t}=\ker(\pi_{s_\alpha}\otimes\pi_{w,t})$ then $a\in\ker \pi_{w,t}$. Thus $J_{m+1}\subset J_m$.
It follows that if $a\in J_m$ for some $1\le m\le m_0$ then $\pi_{w,t}(a)=0$ for any $w\in W$ with $\ell(w)\le m$ and any $t\in T$. Indeed, write $w=w'w''$ with $w'\in W^S$ and $w''\in W_S$. Then, as already used in the proof of Theorem~\ref{treps}, we can assume that $\pi_w=\pi_{w'}\otimes\pi_{w''}$, so that $\pi_{w,t}(a)=\pi_{w',t}(a)\otimes 1^{\otimes\ell(w'')}$ by~\eqref{eDS}. Since $n=\ell(w')\le m$ and $J_m\subset J_n$, we have $\pi_{w',t}(a)=0$, hence also $\pi_{w,t}(a)=0$.
The next thing to observe is that if $a\in J_{m+1}$ for some $0\le m< m_0$ then $a\in\ker(\pi_{s_\alpha}\otimes\pi_{w,t})$
for any $t\in T$, $\alpha\in\Pi$ and $w\in W$ with $\ell(w)\le m$. If $\ell(s_\alpha w)=\ell(w)+1$, this is clearly true. Therefore assume that $\ell(s_\alpha w)=\ell(w)-1$. Put $w'=s_\alpha w$. Then $w=s_\alpha w'$ and we may assume that $\pi_w=\pi_{s_\alpha}\otimes\pi_{w'}$. Since $a\in J_{m+1}\subset J_m$, we have $a\in\ker(\pi_{s_\alpha}\otimes\pi_{w',\tau})$ for any $\tau\in T$.
Denote by $T_\alpha\subset T$ the set of elements $u$ of the form $u=\exp({2\pi i c H_\alpha})$, $c\in {\mathbb R}$.
By Lemma~\ref{ltconj} it follows that for any $u\in T_\alpha$ the element $a$ belongs to
$$
\ker(\pi_{s_\alpha,u}\otimes\pi_{w',t})=\ker(\pi_{s_\alpha}\otimes\pi_{w',w'^{-1}(u)t}).
$$
In other words, if $\varphi$ is a bounded linear functional on $C(G_q/K^{S,L}_q)$ of the form $\psi\pi_{w',t}$ then
$$
(\iota\otimes\varphi)\Delta_q(a)\in\ker \pi_{s_\alpha,u}\ \ \hbox{for any}\ \ u\in T_\alpha.
$$
Since the intersection of the kernels of the homomorphisms $\rho_q\otimes\pi_u\colon C(SU_q(2))\to C(\bar{\mathbb D}_q)$, $u\in \T$, is zero, the intersection of the kernels of the homomorphisms $\pi_{s_\alpha,u}\colon C(G_q)\to C(\bar{\mathbb D}_{q^{d_\alpha}})$, $u\in T_\alpha$, is exactly the kernel of the homomorphism $C(G_q)\to C(SU_{q^{d_\alpha}}(2))$. Therefore $(\iota\otimes\varphi)\Delta_q(a)$ is in the kernel of the latter homomorphism. Since $\pi_{s_\alpha}\otimes\pi_{s_\alpha}$ factors through the homomorphism $C(G_q)\to C(SU_{q^{d_\alpha}}(2))$, we conclude that
$$
(\iota\otimes\varphi)\Delta_q(a)\in\ker (\pi_{s_\alpha}\otimes\pi_{s_\alpha}).
$$
Since this is true for any $\varphi$ of the form $\psi\pi_{w',t}$, it follows that $a\in\ker(\pi_{s_\alpha}\otimes\pi_{s_\alpha}\otimes\pi_{w',t})$. As $\pi_{s_\alpha}\otimes\pi_{w',t}=\pi_{w,t}$, this proves the claim.
We now turn to the proof of the statement in the formulation. The proof is by induction on~$m$. For $m=1$ the result is already proved in the second paragraph. So assume the result is true for all numbers not bigger than $m<m_0$. Since $J_{m+1}\subset J_n$ for $n\le m$, it suffices to show that the kernel of $\pi_{s_{i_1}}\otimes\dots\otimes\pi_{s_{i_{m+1}}}\otimes\pi_t$ contains $J_{m+1}$ for any $i_1,\dots,i_{m+1}$ and $t\in T$. Let $a\in J_{m+1}$. Then by the previous paragraph $a\in\ker(\pi_{s_{i_1}}\otimes\pi_{w,t})$ for all $t\in T$ and all $w\in W$ with $\ell(w)\le m$. Hence, for any bounded linear functional $\varphi$ on $C(G_q)$ of the form $\psi\pi_{s_{i_1}}$ we have $(\varphi\otimes\iota)\Delta_q(a)\in\ker\pi_{w,t}$. It follows that $(\varphi\otimes\iota)\Delta_q(a)\in J_m$. Hence, by the inductive assumption, $(\varphi\otimes\iota)\Delta_q(a)\in\ker(\pi_{s_{i_2}}\otimes\dots\otimes\pi_{s_{i_{m+1}}}\otimes\pi_t)$. Since this is true for any $\varphi$ of the form $\psi\pi_{s_{i_1}}$, we conclude that $a\in \ker(\pi_{s_{i_1}}\otimes\dots\otimes\pi_{s_{i_{m+1}}}\otimes\pi_t)$.
\ep
\bp[Proof of Theorem~\ref{tcompseries}] That $J_m\subset J_{m-1}$, follows from Lemma~\ref{lproductcells} (and was explicitly established in its proof). In particular, $J_{m_0}$ is contained in the kernel of every irreducible representation of~$C(G_q/K^{S,L}_q)$, hence $J_{m_0}=0$.
Let $1\le m\le m_0$. Consider the homomorphism
$$
\Theta_m\colon C(G_q/K^{S,L}_q)\to \bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L;C(\bar{\mathbb D}_{q_{i_1(w)}})\otimes\dots\otimes
C(\bar{\mathbb D}_{q_{i_m(w)}}))
$$
defined by $\Theta_m(a)_w(t)=\pi_{w,t}(a)$. The kernel of $\Theta_m$ is by definition the ideal $J_m$. Let $a\in J_{m-1}$, $t\in T$ and $w\in W^S$, $\ell(w)=m$. Let $w=s_{i_1}\dots s_{i_m}$ be the reduced expression used to define $\pi_w$. Let $1\le k\le m$ and $z=e^{-2\pi i c}\in\T$. Put $u=\exp({2\pi i cH_{i_k}})\in T$. Then applying $\chi_z\colon C(\bar {\mathbb D}_{q_{i_k}})\to{\mathbb C}$ to the~$k$th factor of the image of $\pi_{w,t}$, by \eqref{echaracter} and Lemma~\ref{ltconj} we get
\begin{align*}
\ker((\iota\otimes\dots\otimes\chi_z\otimes\dots\otimes\iota)\pi_{w,t})
&=\ker(\pi_{s_{i_1}}\otimes\dots\otimes \pi_u\otimes\dots\otimes\pi_{s_{i_m}})\\
&=\ker(\pi_{s_{i_1}}\otimes\dots\otimes\pi_{s_{i_{k-1}}}\otimes\pi_{s_{i_{k+1}}}\otimes\dots\otimes\pi_{s_{i_m}}\otimes \pi_{u'}),
\end{align*}
where $u'=(s_{i_m}\dots s_{i_{k+1}})(u)$. Since $a\in J_{m-1}$, by Lemma~\ref{lproductcells} we thus see that $a$ is contained in the kernel of $(\iota\otimes\dots\otimes\chi_z\otimes\dots\otimes\iota)\pi_{w,t}$. Since this is true for all $z\in \T$, it follows that $\pi_{w,t}(a)$ is contained in the kernel of $\iota\otimes\dots\otimes\beta\otimes\dots\otimes\iota$, where $\beta\colon C(\bar{\mathbb D}_{q_{i_k}})\to C(\T)$ is the homomorphism that maps $Z_{q_{i_k}}$ to the standard generator of $C(\T)$. The kernel of $\beta$ is by definition the ideal $C_0({\mathbb D}_{q_{i_k}})$. Therefore
$$
\pi_{w,t}(a)\in C(\bar{\mathbb D}_{q_{i_1}})\otimes\dots\otimes C_0({\mathbb D}_{q_{i_k}})\otimes\dots
\otimes C(\bar{\mathbb D}_{q_{i_m}}).
$$
Since this is true for every $k$, we conclude that $\pi_{w,t}(a)\in C_0({\mathbb D}_{q_{i_1}})\otimes\dots\otimes C_0({\mathbb D}_{q_{i_m}})$. Thus the image of $J_{m-1}$ under $\Theta_m$ is contained in
$$
\bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L;C_0({\mathbb D}_{q_{i_1(w)}})\otimes\dots\otimes
C_0({\mathbb D}_{q_{i_m(w)}})).
$$
To see that this algebra is the whole image, we will consider separately the cases $q=1$ and $q<1$.
Assume $q=1$. In this case $J_{m-1}$ is the ideal of continuous functions on $G/K^{S,L}$ that vanish on the symplectic leaves of dimension $2m-2$. By the Stone-Weierstrass theorem it is then enough to show that for any two distinct points on the union of the leaves of dimension $2m$, there is a continuous function which vanishes on all leaves of dimension $2m-2$ and takes different nonzero values at these points. For this it suffices to know that the union of the leaves of dimension $\le 2m-2$ is a closed subset of $G/K^{S,L}$. This, in turn, is enough to check for $G/K^S$, which is the quotient of $G/K^{S,L}$ by an action of the compact group $T/T_L$. The result is well-known for $S=\emptyset$, see e.g.~Theorem~23 on p.~127 in~\cite{St}. Since the union of the symplectic leaves of $G/K^S$ of dimension $\le 2m-2$ is the image of the union of the symplectic leaves of $G/T$ of dimension $\le 2m-2$, we conclude that this set is closed for any $S$.
Turning to the case $q<1$, first we prove that $\pi_{w,t}(J_{m-1})\ne0$ for any $t\in T$ and $w\in W^S$ with $\ell(w)=m$. For this take any $\lambda\in P_{++}(S^c)$. Since $w$ cannot be smaller than any $v\in W^S$ with $\ell(v)=m-1$, by Proposition~\ref{pBGG} we see that $V_\lambda(w\lambda)$ is orthogonal to $(U_q\bb)V_\lambda(v\lambda)$, hence $\pi_{v,\tau}(C^\lambda_{w\lambda,\lambda})=\lambda(\tau)\pi_{v}(C^\lambda_{w\lambda,\lambda})=0$ for any $\tau\in T$ by Lemma~\ref{lsoib}(ii). Therefore $(C^\lambda_{w\lambda,\lambda})^*C^\lambda_{w\lambda,\lambda}\in J_{m-1}$. By Lemma~\ref{lsoib}(i) we also have $\pi_{w,t}((C^\lambda_{w\lambda,\lambda})^*C^\lambda_{w\lambda,\lambda})\ne0$.
Since $J_{m-1}$ is an ideal, it follows that the representations $\pi_{w,t}$ of $J_{m-1}$, with $w\in W^S$, $\ell(w)=m$ and $t\in T/T_L$, are irreducible and mutually inequivalent. In other words, the subalgebra $\Theta_m(J_{m-1})$ of the algebra
$$
\bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L;C_0({\mathbb D}_{q_{i_1(w)}})\otimes\dots\otimes
C_0({\mathbb D}_{q_{i_m(w)}}))=\bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L;K(\ell^2(\Z_+^m)))
$$
has the property that its projections to different fibers give mutually inequivalent irreducible representations of~$\Theta_m(J_{m-1})$ on~$\ell^2(\Z_+^m)$. By the Stone-Weierstrass theorem for type I C$^*$-algebras we conclude that $\Theta_m(J_{m-1})$ coincides with the whole algebra.
\ep
\bigskip
\section{Topology on the spectrum} \label{s4}
In this section we will describe the Jacobson, or hull-kernel, topology on the spectrum of the type~I C$^*$-algebra $C(G_q/K^{S,L}_q)$ for $q\in(0,1)$. By Theorem~\ref{treps}, as a set the spectrum can be identified with $W^S\times T/T_L$.
To formulate the result it is convenient to use the description of the Bruhat order given in~\cite{BGG}. For $\sigma,w\in W$ and $\gamma\in\Delta_+$ we write $\sigma\xrightarrow{\gamma}w$ if $w=\sigma s_{\gamma}$ and $\ell(w)=\ell(\sigma)+1$ (note that in the notation of~\cite{BGG} this corresponds to $\sigma^{-1}\xrightarrow{\gamma}w^{-1}$). Then, for any $\sigma,w\in W$, we have $\sigma\le w$ if and only if there exist $\sigma_1,\dots,\sigma_k\in W$ and $\gamma_1,\dots,\gamma_k\in\Delta_+$ such that $\sigma\xrightarrow{\gamma_1}\sigma_1\xrightarrow{\gamma_2}\dots\xrightarrow{\gamma_k}\sigma_k=w$.
Assume $\sigma\le w$. For every path $\sigma\xrightarrow{\gamma_1}\sigma_1\xrightarrow{\gamma_2}\dots\xrightarrow{\gamma_k}\sigma_k=w$ consider the closed connected subgroup of~$T$ consisting of the elements of the form $\exp({i h_\beta})$ with $\beta\in\operatorname{span}_{\mathbb R}\{\gamma_1,\dots,\gamma_k\}$. Denote by~$T_{\sigma,w}$ the union of such groups for all possible paths. The closed sets~$T_{\sigma,w}$ clearly have the following multiplicative property, which will play an important role: if $\sigma\le v\le w$ then
\begin{equation}\label{eTmult}
T_{\sigma,v} T_{v,w}:=\{\tau t\mid \tau\in T_{\sigma,v},\ t\in T_{v,w}\}\subset T_{\sigma,w}.
\end{equation}
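For example, for $G=SU(3)$ and $w=s_1s_2$ there are exactly two paths from $\sigma=e$ to $w$, namely
$$
e\xrightarrow{\alpha_1}s_1\xrightarrow{\alpha_2}s_1s_2\ \ \hbox{and}\ \ e\xrightarrow{\alpha_2}s_2\xrightarrow{\alpha_1+\alpha_2}s_1s_2,
$$
and for both of them the roots involved span the same real vector space as $\alpha_1,\alpha_2$, so that $T_{e,s_1s_2}=T$. On the other hand, the only path from $s_1$ to $s_1s_2$ is $s_1\xrightarrow{\alpha_2}s_1s_2$, so $T_{s_1,s_1s_2}=\{\exp(ich_{\alpha_2})\mid c\in{\mathbb R}\}$.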
Recall that we denote by $\pi\colon G\to G/K^{S,L}$ the quotient map.
\begin{theorem} \label{ttopology}
Let $w\in W^S$ and $\Omega\subset T$. Then
\enu{i} the closure of the union of the symplectic leaves $\pi(\Sigma_w\tau)\subset G/K^{S,L}$, $\tau\in\Omega$, is the union of the leaves $\pi(\Sigma_\sigma t)$ such that $\sigma\in W^S$, $\sigma\le w$ and $t\in\bar\Omega T_{\sigma,w}T_L$;
\enu{ii} if $q\in(0,1)$, for any $\sigma\in W^S$ and $t\in T$, the kernel of the representation $\pi_{\sigma,t}$ contains the intersection of the kernels of the representations $\pi_{w,\tau}$ of $C(G_q/K^{S,L}_q)$, $\tau\in\Omega$, if and only if $\sigma\le w$ and $t\in\bar\Omega T_{\sigma,w}T_L$.
\end{theorem}
Therefore if for $q\in(0,1)$ we identify the spectrum of $C(G_q/K_q^{S,L})$ with the quotient of $G/K^{S,L}$ by the partition defined by its symplectic leaves (or in other words, with the quotient of $G/K^{S,L}$ by the right dressing action), then the Jacobson topology on the spectrum is exactly the quotient topology.
\smallskip
The proof is based on the following refinement of Lemma~\ref{lproductcells}. Recall that in the proof of that lemma we denoted by $T_\alpha\subset T$ the subgroup consisting of elements of the form $\exp({2\pi i c H_\alpha})$, $c\in {\mathbb R}$. We write $T_i$ for $T_{\alpha_i}$.
\begin{lemma} \label{lproductcells2}
Let $1\le i_1,\dots,i_n\le r$ and $t\in T$. Assume $a\in C(G_q)$ is such that
$\pi_{w,\tau}(a)=0$ for all $1\le k\le n$, $w=s_{i_{j_1}}\dots s_{i_{j_k}}$ with $1\le j_1<\dots <j_k\le n$ and $\tau\in T$ such that $\tau t^{-1}$ lies in the group generated by $(s_{i_{j_k}}s_{i_{j_{k-1}}}\dots s_{i_{j_l}})(T_{i_m})$ with $1\le l\le k$ and $j_{l-1}<m<j_l$ (we let $j_0=0$).
Then $a\in\ker(\pi_{s_{i_1}}\otimes\dots\otimes \pi_{s_{i_n}}\otimes\pi_t)$.
\end{lemma}
\bp The proof is by induction on $n$. For $n=1$ the statement is tautological. So assume $n>1$ and that the result is true for all numbers $<n$. By induction, exactly as in the proof of Lemma~\ref{lproductcells}, it suffices to show that
$(\pi_{s_{i_1}}\otimes\pi_{w,\tau})(a)=0$ for all $1\le k\le n-1$, $w=s_{i_{j_1}}\dots s_{i_{j_k}}$ with $2\le j_1<\dots <j_k\le n$ and $\tau\in T$ such that $\tau t^{-1}$ lies in the group generated by $(s_{i_{j_k}}s_{i_{j_{k-1}}}\dots s_{i_{j_l}})(T_{i_m})$ with $1\le l\le k$ and $j_{l-1}<m<j_l$ (with $j_0=1$).
To see that this is true, fix $k$, $w=s_{i_{j_1}}\dots s_{i_{j_k}}$ and $\tau$. If $\ell(s_{i_1} w)=\ell(w)+1$ then we may assume that $\pi_{s_{i_1}}\otimes\pi_w=\pi_{s_{i_1}w}$, and then the claim is part of the assumption. If $\ell(s_{i_1} w)=\ell(w)-1$, the claim is proved by the same argument as in the third paragraph of the proof of Lemma~\ref{lproductcells}, using that $\pi_{w,\tau w^{-1}(u)}(a)=0$ for any $u\in T_{i_1}$, which is true by assumption.
\ep
Our goal is to relate the groups in the above lemma to the sets $T_{\sigma,w}$.
\begin{lemma} \label{lpaths}
Assume $\sigma,w\in W$ and $\alpha\in\Pi$ are such that $\sigma<s_\alpha\sigma$ and $w<s_\alpha w$. Then for any path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_k}w$ there exists a path $s_\alpha\sigma\xrightarrow{\gamma_1'}\dots\xrightarrow{\gamma_k'}s_\alpha w$ such that the group generated by $\gamma_1',\dots,\gamma_k'$ coincides with the group generated by $\gamma_1,\dots,\gamma_k$.
\end{lemma}
\bp The proof is by induction on $k=\ell(w)-\ell(\sigma)$. Put $v=\sigma s_{\gamma_1}$. Consider the two possible cases.
Assume first that $v<s_\alpha v$. By the inductive assumption there exists a path $s_\alpha v\xrightarrow{\gamma_2'}\dots\xrightarrow{\gamma_k'}s_\alpha w$ such that the group generated by $\gamma_2',\dots,\gamma_k'$ coincides with the group generated by $\gamma_2,\dots,\gamma_k$. Then $s_\alpha\sigma\xrightarrow{\gamma_1}s_\alpha v\xrightarrow{\gamma_2'}\dots\xrightarrow{\gamma_k'}s_\alpha w$ is the required path.
Assume now that $s_\alpha v<v$. Then $v=s_\alpha \sigma$. In particular, $\gamma_1=\sigma^{-1}(\alpha)$. Consider the path $s_\alpha\sigma\xrightarrow{\gamma_2}\dots\xrightarrow{\gamma_k}w \xrightarrow{w^{-1}(\alpha)} s_\alpha w$. Since $w=s_\alpha\sigma s_{\gamma_2}\dots s_{\gamma_k}$, we have $w^{-1}(\alpha)=-(s_{\gamma_k}\dots s_{\gamma_2}\sigma^{-1})(\alpha)$. Therefore the groups generated by $\gamma_1=\sigma^{-1}(\alpha),\gamma_2,\dots,\gamma_k$ and by $\gamma_2,\dots,\gamma_k, w^{-1}(\alpha)$ coincide.
\ep
Note that the proof actually shows that as the sequence $\gamma_1',\dots,\gamma_k'$ we can take $$\gamma_1,\dots,\gamma_{i-1},\gamma_{i+1},\dots,\gamma_k,w^{-1}(\alpha),$$ where $i$ is the first number such that $s_\alpha\sigma s_{\gamma_1}\dots s_{\gamma_{i-1}}=\sigma s_{\gamma_1}\dots s_{\gamma_i}$ (if there is no such number then the sequence is $\gamma_1,\dots,\gamma_k$).
\begin{lemma} \label{lTsets}
Let $w=s_{i_1}\dots s_{i_n}$ be written in reduced form, and consider $\sigma\le w$.
\enu{i} Assume $\sigma =s_{i_{j_1}}\dots s_{i_{j_k}}$ for some $0\le k\le n$ and $1\le j_1<\dots< j_k\le n$. Let $\Gamma\subset P$ be the group generated by $(s_{i_{j_k}}s_{i_{j_{k-1}}}\dots s_{i_{j_l}})(\alpha_{i_m})$ with $1\le l\le k+1$ and $j_{l-1}<m<j_l$ (we let $j_0=0$ and $j_{k+1}=n+1$). Then $\Gamma$ coincides with the group generated by the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\notin\{j_1,\dots,j_k\}$.
\enu{ii} Under the assumptions of {\rm (i)}, there exists a path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_p}w$ such that $\Gamma$ is contained in the group generated by $\gamma_1,\dots,\gamma_p$, and these two groups coincide if the expression $\sigma=s_{i_{j_1}}\dots s_{i_{j_k}}$ is reduced.
\enu{iii} For any path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k}}w$ there exist $1\le j_1<\dots< j_k\le n$ such that $\sigma =s_{i_{j_1}}\dots s_{i_{j_k}}$ and the group $\Gamma$ defined as in {\rm (i)} coincides with the group generated by $\gamma_1,\dots,\gamma_{n-k}$.
\end{lemma}
\bp (i) The proof is by induction on $n$. For $n=1$ the statement is tautological. Assume $n>1$. If $j_k=n$ then the result is immediate by the inductive assumption. Assume $j_k<n$. Let $w'=s_{i_1}\dots s_{i_{n-1}}$ and $\Gamma'$ be the group defined similarly to $\Gamma$ by the elements $\sigma\le w'$. Then $\Gamma$ is generated by $\Gamma'$ and~$\alpha_{i_n}$, hence by $s_{i_n}(\Gamma')$ and $\alpha_{i_n}$. Since by the inductive assumption $\Gamma'$ is generated by the elements $(s_{i_{n-1}}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\le n-1$ and $m\notin\{j_1,\dots,j_k\}$, we get the result.
\smallskip
(ii) The proof is again by induction on $n$. For $n=1$ the statement is trivial. So assume $n>1$. We may assume that the word $s_{i_{j_1}}\dots s_{i_{j_k}}$ is reduced. Indeed, if it is not, then a reduced expression for~$\sigma$ can be obtained by dropping some letters in the word $s_{i_{j_1}}\dots s_{i_{j_k}}$, see Lemma~21(c) in the Appendix to~\cite{St}. By the description of $\Gamma$ given in (i) this can only increase the group~$\Gamma$. Consider two cases.
Assume $k\ge1$ and $j_1=1$. Put $\sigma'=s_{i_{j_2}}\dots s_{i_{j_k}}$ and $w'=s_{i_2}\dots s_{i_n}$. By the inductive assumption there exists a path $\sigma'\xrightarrow{\gamma_1'}\dots\xrightarrow{\gamma_{n-k}'}w'$ such that $\Gamma$ coincides with the group generated by $\gamma_1',\dots,\gamma_{n-k}'$. By Lemma~\ref{lpaths} we can then find a path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k}}w$ such that the group generated by $\gamma_1,\dots,\gamma_{n-k}$ coincides with the group generated by $\gamma_1',\dots,\gamma_{n-k}'$.
Assume now that either $k=0$ or $j_1>1$. Let $v=s_{i_2}\dots s_{i_n}$. Then by (i) and the inductive assumption there exists a path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k-1}}v$ such that $\Gamma$ coincides with the group generated by $\gamma_1,\dots,\gamma_{n-k-1}$
and $v^{-1}(\alpha_{i_1})$. Therefore we can take $\gamma_{n-k}=v^{-1}(\alpha_{i_1})$, so that we get a path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k-1}}v\xrightarrow{\gamma_{n-k}}w$.
\smallskip
(iii) By \cite[Proposition~2.8(c)]{BGG}, given a path $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k}}w$ there exist uniquely defined numbers $p_1,\dots,p_{n-k}$ such that $\sigma\gamma_1\dots\gamma_{n-k-l}$ is obtained from $s_{i_1}\dots s_{i_n}$ by dropping the letters $s_{i_{p_1}},\dots,s_{i_{p_l}}$. Let $\{j_1<\dots <j_k\}$ be the complement of $\{p_1,\dots,p_{n-k}\}$ in $\{1,\dots,n\}$. It remains to show that the group~$\Gamma$ is generated by $\gamma_1,\dots,\gamma_{n-k}$.

Once again the proof is by induction on~$n$. Put $p=p_1$. Consider the element $w'=\sigma\gamma_1\dots\gamma_{n-k-1}=s_{i_1}\dots\hat s_{i_{p}}\dots s_{i_n}$. Let $\Gamma'$ be the group defined similarly to $\Gamma$ by the elements $\sigma\le w'$. By the inductive assumption it is generated by $\gamma_1,\dots,\gamma_{n-k-1}$. We also have $\gamma_{n-k}=(s_{i_n}\dots s_{i_{p+1}})(\alpha_{i_p})$. By part (i) the group $\Gamma'$ is generated by the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\in\{p_2,\dots,p_{n-k}\}$ and $m>p$ and the elements $(s_{i_n}\dots\hat s_{i_p}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\in\{p_2,\dots,p_{n-k}\}$ and $m<p$. Since $\gamma_{n-k}=(s_{i_n}\dots s_{i_{p+1}})(\alpha_{i_p})$, for $m<p$ we have
$$
s_{i_n}\dots\hat s_{i_p}\dots s_{i_{m+1}}=s_{\gamma_{n-k}}s_{i_n}\dots s_{i_{m+1}}.
$$
Therefore the group generated by $\gamma_{n-k}$ and $\Gamma'$ coincides with the group generated by the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\in\{p,p_2,\dots,p_{n-k}\}$, which is exactly the group $\Gamma$.
\ep
The previous lemma shows that the collection $X_{\sigma,w}$ of groups generated by $\gamma_1,\dots,\gamma_{n-k}$, where $n=\ell(w)$ and $k=\ell(\sigma)$, for all possible paths $\sigma\xrightarrow{\gamma_1}\dots\xrightarrow{\gamma_{n-k}}w$, can be described as follows. Fix a reduced decomposition $w=s_{i_1}\dots s_{i_n}$. For every sequence $1\le j_1<\dots< j_k\le n$ such that $\sigma=s_{i_{j_1}}\dots s_{i_{j_k}}$ consider the group generated by the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$ such that $m\notin\{j_1,\dots,j_k\}$. Then $X_{\sigma,w}$ consists of such groups for all possible $j_1,\dots, j_k$.
In the particular case $\sigma=e$ this implies that $X_{e,w}$ consists of just one group, and for any reduced decomposition $w=s_{i_1}\dots s_{i_n}$ this group is generated by the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$, $1\le m\le n$ (or equivalently, by the elements $\alpha_{i_1},\dots,\alpha_{i_n}$). That the latter group is independent of the reduced decomposition is well-known. In fact, the set of elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$, $1\le m\le n$, is exactly $\Delta_+\cap w^{-1}\Delta_-$, see e.g.~Corollary 2 to Proposition VI.6.17 in~\cite{Bour}. Therefore the set $T_{e,w}$ is the group consisting of the elements $\exp(ih_\beta)$ with $\beta\in\operatorname{span}_{\mathbb R}(\Delta_+\cap w^{-1}\Delta_-)$. It would be interesting to have a geometric description of $X_{\sigma,w}$ and $T_{\sigma,w}$ for all $\sigma\le w$.
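\smallskip

As a concrete illustration of the last description (in type $A_2$, where $\Delta_+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$), take the reduced decomposition $w=s_1s_2$ and $\sigma=e$. The recipe above produces the generators
$$
(s_{i_2})(\alpha_{i_1})=s_2(\alpha_1)=\alpha_1+\alpha_2\ \ \hbox{and}\ \ \alpha_{i_2}=\alpha_2,
$$
and indeed $\Delta_+\cap w^{-1}\Delta_-=\{\alpha_2,\,\alpha_1+\alpha_2\}$: we have $w\alpha_2=-(\alpha_1+\alpha_2)$ and $w(\alpha_1+\alpha_2)=-\alpha_1$, while $w\alpha_1=\alpha_2$ remains positive.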
\smallskip
The following lemma refines the main part of the proof of Theorem~\ref{tcompseries}.
\begin{lemma}
Let $t\in T$ and let $w=s_{i_1}\dots s_{i_n}\in W^S$ be written in the reduced form used to define~$\pi_w$. Assume $a\in C(G_q/K^{S,L}_q)$ is such that $\pi_{\sigma,\tau}(a)=0$ for all $\sigma\in W^S$ such that $\sigma<w$ and all $\tau\in tT_{\sigma,w}$. Then
$$
\pi_{w,t}(a)\in C_0({\mathbb D}_{q_{i_1}})\otimes\dots\otimes
C_0({\mathbb D}_{q_{i_{n}}}).
$$
\end{lemma}
\bp As in the proof of Theorem~\ref{tcompseries} it suffices to check that for any $z\in\T$ and $1\le m\le n$, applying $\chi_z$ to the $m$th factor of
$\pi_{w,t}(a)\in C(\bar{\mathbb D}_{q_{i_1}})\otimes\dots\otimes
C(\bar{\mathbb D}_{q_{i_{n}}})$ we get zero. Assume $z=e^{-2\pi ic}$ and put $u=\exp({2\pi i cH_{i_m}})$. Then exactly as in the proof of Theorem~\ref{tcompseries} we have to check that
$$
a\in\ker(\pi_{s_{i_1}}\otimes\dots\otimes\pi_{s_{i_{m-1}}}\otimes\pi_{s_{i_{m+1}}}\otimes\dots\otimes\pi_{s_{i_n}}\otimes \pi_{tu'}),
$$
where $u'=(s_{i_n}\dots s_{i_{m+1}})(u)$. For this, by Lemmas~\ref{lproductcells2} and~\ref{lTsets}, it suffices to check that $\pi_{\sigma,\tau}(a)=0$ for every $\sigma=s_{i_{j_1}}\dots s_{i_{j_k}}$ such that $m\notin\{j_1,\dots,j_k\}$ and all $\tau\in tT_{\sigma,w}$. If $\sigma\in W^S$, this is true by assumption. Otherwise write $\sigma=\sigma'\sigma''$ with $\sigma'\in W^S$ and $\sigma''\in W_S$. Then we may assume that $\pi_\sigma=\pi_{\sigma'}\otimes\pi_{\sigma''}$ and then by~\eqref{eDS} we have $\pi_{\sigma,\tau}(a)=\pi_{\sigma',\tau}(a)\otimes 1^{\otimes\ell(\sigma'')}$. Since $\sigma'\le\sigma<w$, we have $T_{\sigma,w}\subset T_{\sigma',w}$, and hence $\pi_{\sigma',\tau}(a)=0$ by assumption. Therefore we still get $\pi_{\sigma,\tau}(a)=0$.
\ep
\bp[Proof of Theorem~\ref{ttopology}] The main part of the argument works for all $q\in(0,1]$. Namely, we will show that the kernel of $\pi_{\sigma,t}$ contains the intersection of the kernels of the representations $\pi_{w,\tau}$ of~$C(G_q/K^{S,L}_q)$, $\tau\in\Omega$, if and only if $\sigma\le w$ and $t\in\bar\Omega T_{\sigma,w}T_L$.
\smallskip
Assume $\sigma\nleq w$ and $t\in T$. For $q=1$ it is known that the set $\cup_{v\le w}\Sigma_{v}T$ is closed in $G$, see again \cite[Theorem~23]{St} or \cite[Theorem~2.11]{BGG}, hence $\cup_{W^S\ni v\le w}\pi(\Sigma_{v}T)$ is closed in $G/K^{S,L}$ and does not intersect $\pi(\Sigma_\sigma t)$. For $q<1$, using Proposition~\ref{pBGG} and Lemma~\ref{lsoib}, for any $\lambda\in P_{++}(S^c)$ we get $\pi_{\sigma,t}((C^\lambda_{\sigma\lambda,\lambda})^*C^\lambda_{\sigma\lambda,\lambda})\ne0$ and $\pi_{w,\tau}((C^\lambda_{\sigma\lambda,\lambda})^*C^\lambda_{\sigma\lambda,\lambda})=0$ for all $\tau\in T$.
\smallskip
Assume $\sigma\le w$. Let $w=s_{i_1}\dots s_{i_n}$ be the reduced expression used to define $\pi_w$. For every $\tau\in T_{\sigma,w}$, by Lemma~\ref{lTsets} we can find $j_1<\dots <j_k$ such that $\sigma=s_{i_{j_1}}\dots s_{i_{j_k}}$ is reduced and $\tau$ has the form $\exp({i h_\beta})$ with $\beta$ lying in the real span of the elements $(s_{i_n}\dots s_{i_{m+1}})(\alpha_{i_m})$, $m\notin\{j_1,\dots,j_k\}$. Using Lemma~\ref{ltconj} we conclude that by applying the characters $\chi_{z_m}$ to the $m$th factor of $\pi_{w,\tau'}$ for appropriate numbers $z_m\in\T$ we can factor $\pi_{\sigma,\tau\tau'}$ through $\pi_{w,\tau'}$ for any $\tau'\in T$. In other words, if $t\in \Omega T_{\sigma,w}T_L$ then the kernel of $\pi_{\sigma,t}$ contains the kernel of $\pi_{w,\tau}$ for some $\tau\in\Omega$. In particular, the intersection of the kernels of $\pi_{w,\tau}$, $\tau\in\Omega$, is contained in the kernel of $\pi_{\sigma,t}$ for any $t\in\Omega T_{\sigma,w}T_L$. Since clearly the map $T\ni t\mapsto\pi_{\sigma,t}(a)$ is continuous for every $a\in C(G_q/K^{S,L}_q)$, the same is true for $t\in\bar\Omega T_{\sigma,w}T_L$.
\smallskip
Finally, assume that $\sigma\le w$, but $t\notin\bar\Omega T_{\sigma,w}T_L$. Let $m=\ell(\sigma)$ and $n=\ell(w)$. By Theorem~\ref{tcompseries} we can find $a_m\in J_{m-1}$ such that $\pi_{\sigma,t}(a_m)\ne0$, $\pi_{\sigma,\tau}(a_m)=0$ for all $\tau \in\bar\Omega T_{\sigma,w}T_L$, and $\pi_{\sigma',\tau}(a_m)=0$ for all $\tau\in T$ and $\sigma'\in W^S$ such that $\sigma'\ne\sigma$, $\ell(\sigma')=m$. If $n=m$ this already shows that the intersection of the kernels of $\pi_{w,\tau}$, $\tau\in\Omega$, is not contained in the kernel of $\pi_{\sigma,t}$. So assume $n>m$. We will construct by induction elements $a_k\in C(G_q/K^{S,L}_q)$, $m+1\le k\le n$, such that $a_k-a_{k-1}\in J_{k-1}$ for $m+1\le k\le n$, and such that, for every $m\le k\le n$, we have $\pi_{v,\tau}(a_k)=0$ for any $v\in W^S$ with $v\le w$ and $\ell(v)\le k$, and any $\tau \in\bar\Omega T_{v,w}T_L$. Note that $a_m$ satisfies the last requirement for $k=m$, since $a_m\in J_{m-1}$ and hence $\pi_{v,\tau}(a_m)=0$ if $\ell(v)\le m-1$ for any $\tau\in T$.
Assume $a_k$ is constructed. Let $v\in W^S$, $v\le w$, $\ell(v)=k+1$. By construction we have $\pi_{\sigma',\tau}(a_k)=0$ for any $\sigma'\in W^S$ such that $\sigma'<v$ and any $\tau\in \bar\Omega T_{\sigma',w}T_L$. Since $T_{\sigma',v}T_{v,w}\subset T_{\sigma',w}$ by~\eqref{eTmult}, by Lemma~\ref{lproductcells2} it follows that
$$
\pi_{v,\tau}(a_k)\in C_0({\mathbb D}_{q_{i_1(v)}})\otimes\dots\otimes
C_0({\mathbb D}_{q_{i_{k+1}(v)}})
$$
for any $\tau\in \bar\Omega T_{v,w}T_L$, where $v=s_{i_1(v)}\dots s_{i_{k+1}(v)}$ is the reduced decomposition used to define $\pi_{v}$. By Theorem~\ref{tcompseries} we can find $b\in J_k$ such that $\pi_{v,\tau}(a_k)=\pi_{v,\tau}(b)$ for all $v\in W^S$, $v\le w$, $\ell(v)=k+1$, and all $\tau\in \bar\Omega T_{v,w}T_L$. We then take $a_{k+1}=a_k-b$.
By construction we have $\pi_{w,\tau}(a_n)=0$ for all $\tau\in \bar\Omega T_L$. As $a_n-a_m\in J_m$, we also have $\pi_{\sigma,t}(a_n)=\pi_{\sigma,t}(a_m)\ne0$.
\smallskip
This finishes the proof of (ii). For $q=1$ what we have proved means that a leaf $\pi(\Sigma_\sigma t)$ is contained in the closure of the union of the leaves $\pi(\Sigma_w\tau)$, $\tau\in\Omega$, if and only if $\sigma\le w$ and $t\in\bar\Omega T_{\sigma,w}T_L$. To establish (i) it remains to note that since the symplectic leaves are orbits of the right dressing action, the closure of the union of the leaves $\pi(\Sigma_w\tau)$, $\tau\in\Omega$, consists of entire leaves, so if a leaf $\pi(\Sigma_\sigma t)$ is not contained in this closure, it does not intersect it.
\ep
\bigskip
\section{Strict deformation quantization} \label{s5}
In this section we will consider the family of C$^*$-algebras $C(G_q/K^{S,L}_q)$. To distinguish elements of different algebras we will use upper and lower indices $q$. Indices corresponding to $q=1$ will often be omitted.
In \cite{NT6} we showed that the family $(C(G_q))_q$ has a canonical structure of a continuous field of C$^*$-algebras. It is defined as follows. For every $q\in(0,1]$ choose a $*$-isomorphism $\varphi^q\colon\U(G_q)\to\U(G)$ extending the canonical identifications of the centers. In other words, for every $\lambda\in P_+$ choose a $*$-isomorphism $\varphi^q_\lambda\colon B(V^q_\lambda)\to B(V_\lambda)$. Then upon identifying $\U(G_q)$ with $\prod_{\lambda\in P_+}B(V^q_\lambda)$ the isomorphism $\varphi^q$ is given by $(\varphi^q_\lambda)_\lambda$. The family of isomorphisms $\{\varphi^q\}_q$ is called continuous if the maps $q\mapsto \varphi^q(X^q)\in\U(G)={\mathbb C}[G]^*$ are $\sigma(\U(G),{\mathbb C}[G])$-continuous for $X^q=E_i^q,F_i^q,h_i$; in other words, for every $\lambda\in P_+$ and $\xi\in V_\lambda$, the maps $q\mapsto \varphi^q(X^q)\xi\in V_\lambda$ are continuous. By \cite[Lemma~1.1]{NT6} there always exists a continuous family of $*$-isomorphisms such that $\varphi^1=\iota$. Fix such a family and consider the dual maps $\hat\varphi^q\colon{\mathbb C}[G]\to{\mathbb C}[G_q]$. They are coalgebra isomorphisms. Then by \cite[Proposition~1.2]{NT6} the family $(C(G_q))_q$ has a unique structure of a continuous field of C$^*$-algebras such that for every $a\in{\mathbb C}[G]$ the section $q\mapsto\hat\varphi^q(a)\in C(G_q)$ is continuous, and this structure does not depend on the choice of a continuous family of isomorphisms. The proof is based on two results, which we will now recall as they both play an important role in what follows.
The first one, exploited in one way or another in all cases where a continuous field has been constructed~\cite{Bau}, \cite{Bl}, \cite{Na}, \cite{Sh0}, is that in view of the classification of irreducible representations the key step is to prove continuity for quantum disks. We include a sketch of a by now standard proof for the reader's convenience.
\begin{lemma} \label{ldiskcont}
The family of C$^*$-algebras $C(\bar{\mathbb D}_q)$, $q\in[0,1]$, has a unique structure of a continuous field of C$^*$-algebras such that $q\mapsto Z_q$ is a continuous section. The continuous field $(C(\bar{\mathbb D}_q))_{q\in[0,1)}$ is isomorphic to the constant field with fiber $\TT$.
\end{lemma}
\bp Consider the universal unital C$^*$-algebra $A$ generated by two elements $Z$ and $Q$ such that
$$
1-Z^*Z=Q^2(1-ZZ^*),\ \ QZ=ZQ, \ \ \|Z\|\le1, \ \ 0\le Q\le1.
$$
For $q\in[0,1]$ let $I_q\subset A$ be the ideal generated by $Q-q1$. Put $A_q=A/I_q$ and denote by $\pi_q$ the quotient map $A\to A_q$. Since $Q$ is in the center of $A$, the function $q\mapsto\|\pi_q(a)\|$ is automatically upper semicontinuous for every $a\in A$. It is also clear that $A_q\cong C(\bar{\mathbb D}_q)$. Therefore to prove the lemma we just have to check that the functions $q\mapsto\|\pi_q(a)\|$ are lower semicontinuous. For this define states $\psi_q$ on $A_q$ as follows. The state $\psi_1$ on $A_1\cong C(\bar{\mathbb D})$ is given by the normalized Lebesgue measure on the unit disk. For $q<1$ the state $\psi_q$ on $A_q\cong\TT$ is defined by
$$
\psi_q(a)=(1-q^2)\sum^\infty_{n=0}q^{2n}(ae_n,e_n)\ \ \hbox{for}\ \ a\in\TT.
$$
It is not difficult to check that the family $(\psi_q)_q$ is continuous in the sense that the map $q\mapsto\psi_q(\pi_q(a))$ is continuous for every $a\in A$. Since the states are faithful, the corresponding GNS-representations~$\pi_{\psi_q}$ are faithful as well, which implies that the functions $q\mapsto\|\pi_q(a)\|=\|\pi_{\psi_q}(\pi_q(a))\|$ are lower semicontinuous.
The last statement in the formulation is immediate from the explicit isomorphism $C(\bar{\mathbb D}_q)\cong\TT$.
\ep
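The explicit isomorphism and the states $\psi_q$ used in the proof above can be made concrete in one standard realization (which we use only for illustration): for $q\in[0,1)$, let $Z_q$ act on $\ell^2({\mathbb Z}_+)$ as the weighted shift $Z_qe_n=(1-q^{2(n+1)})^{1/2}e_{n+1}$. Then
$$
(1-Z_q^*Z_q)e_n=q^{2(n+1)}e_n=q^2(1-Z_qZ_q^*)e_n,
$$
so the defining relation of $A$ holds with $Q=q1$, and $C^*(Z_q)\cong\TT$ since $Z_q^*Z_q-Z_qZ_q^*$ is compact. In this realization the continuity of $(\psi_q)_q$ can be checked on polynomials in $Z_q,Z_q^*$ by direct summation; for instance,
$$
\psi_q(Z_q^*Z_q)=(1-q^2)\sum^\infty_{n=0}q^{2n}(1-q^{2(n+1)})=\frac{1}{1+q^2}\ \to\ \frac12=\frac1\pi\int_{\bar{\mathbb D}}|z|^2\,dA(z)=\psi_1(Z_1^*Z_1)\ \ \hbox{as}\ \ q\to1.
$$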
The second result is that under the homomorphism $C(G_q)\to C(SU_{q^{d_\alpha}}(2))$ corresponding to the simple root $\alpha$ the image of $\hat\varphi^q(a)$ is a polynomial in the standard generators of ${\mathbb C}[SU_{q^{d_\alpha}}(2)]$ with coefficients that are continuous in $q$. This is a consequence of the following lemma, which we formulate in a more general setting needed later. The group $\tilde K^S$ is simply connected, semisimple (if nontrivial) and compact, and its set of dominant integral weights can be identified with $P_+(S)$. So for the same reasons as for $G$ we have a continuous family of $*$-isomorphisms $\U(\tilde K^S_q)\to\U(\tilde K^S)$ extending the identification of the centers of these algebras with the algebra of functions on $P_+(S)$. Slightly more generally, as $T_L=T_{P(S^c)}\times (P(S)+L)^\perp$, from Proposition~\ref{ppoissonsub}(ii) we get $K^{S,L}=\tilde K^S\times(P(S)+L)^\perp$, and therefore the irreducible representations of $K^{S,L}$ are classified by $P_+(S)\times P/(P(S)+L)=P_+(S)\times P(S^c)/L$. It follows then that the irreducible corepresentations of $C(K^{S,L}_q)$ are classified by a subset of $P_+(S)\times P(S^c)/L$, and the compact quantum group $K^{S,L}_q$ is a quotient of $\tilde K^S_q\times(P(S)+L)^\perp$. Therefore there exist injective $*$-homomorphisms $\psi^q\colon\U(K^{S,L}_q)\to \U(K^{S,L})$ extending the embeddings of the equivalence classes of irreducible corepresentations of $C(K^{S,L}_q)$ into $P_+(S)\times P(S^c)/L$. Then we say that a family $\{\psi^q\}_q$ of such homomorphisms is continuous if the maps $q\mapsto \psi^q(X^q)\in\U(K^{S,L})$ are $\sigma(\U(K^{S,L}),{\mathbb C}[K^{S,L}])$-continuous for $X^q=E_i^q,F_i^q$ with $\alpha_i\in S$, for $X^q=h_\beta$ with $\beta\in L^\perp$, and for $X^q=t\in T_L$.
\begin{lemma}
We have $K^{S,L}_q=\tilde K^S_q\times (P(S)+L)^\perp$ for all $q\in(0,1)$, hence there exists a continuous family $\{\psi^q\colon\U(K^{S,L}_q)\to\U(K^{S,L})\}_{q\in(0,1]}$ of $*$-isomorphisms with $\psi^1=\iota$. For any such family~$\{\psi^q\}_q$, there exists a continuous family $\{\varphi^q\colon\U(G_q)\to\U(G)\}_{q\in(0,1]}$ of $*$-isomorphisms with $\varphi^1=\iota$ and such that $\varphi^q=\psi^q$ on $\U(K^{S,L}_q)$.
\end{lemma}
This is established in the course of the proof of \cite[Proposition~1.2]{NT6} in the particular case when $K^{S,L}=SU(2)_\alpha$ is the subgroup corresponding to a simple root $\alpha$, so $S=\{\alpha\}$ and $L=P(S^c)$. The general case is proved in the same way.
\begin{proposition}
The family of C$^*$-algebras $(C(G_q/K^{S,L}_q))_{q\in(0,1]}$ is a continuous subfield of the continuous field $(C(G_q))_{q\in(0,1]}$.
\end{proposition}
\bp We have to check that the family $(C(G_q/K^{S,L}_q))_{q\in(0,1]}$ has enough continuous sections. It suffices to show that for every $q_0\in(0,1]$ and $a\in{\mathbb C}[G_{q_0}/K^{S,L}_{q_0}]$ there exists a continuous section $q\mapsto a(q)$ of the field $(C(G_q))_{q\in(0,1]}$ such that $a(q)\in C(G_q/K^{S,L}_q)$ for all $q$ and $a(q_0)=a$. Let $\{\varphi^q\colon\U(G_q)\to\U(G)\}_{q\in(0,1]}$ be a continuous family of $*$-isomorphisms with $\varphi^1=\iota$ such that $\varphi^q(\U(K^{S,L}_q))=\U(K^{S,L})$. Then, since ${\mathbb C}[G_q/K^{S,L}_q]$ consists of the elements $b\in{\mathbb C}[G_q]$ such that $(\omega\otimes\iota)\Delta_q(b)=\hat\eps_q(\omega)b$ for all $\omega\in\U(K^{S,L}_q)$, $\hat\varphi^q$ is a coalgebra isomorphism and $\hat\eps_q=\hat\eps\varphi^q$, we have $\hat\varphi^q({\mathbb C}[G/K^{S,L}])={\mathbb C}[G_q/K^{S,L}_q]$. Since the section $q\mapsto\hat\varphi^q(b)$ is continuous for any $b\in{\mathbb C}[G/K^{S,L}]$ by definition of the continuous field structure on $(C(G_q))_{q\in(0,1]}$, we get the result.
\ep
For $0<a<b\le1$ denote by $\Gamma((C(G_q/K^{S,L}_q))_{q\in[a,b]})$ the C$^*$-algebra of continuous sections of the field $(C(G_q/K^{S,L}_q))_{q\in[a,b]}$. Let $\{\varphi^q\colon\U(G_q)\to\U(G)\}_{q\in(0,1]}$ be a continuous family of $*$-isomorphisms. Denote by $\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[a,b]})$ the space of sections of the form $q\mapsto\sum^{n}_{i=1}f_i(q)\hat\varphi^q(a_i)$, where $n\in{\mathbb N}$, $a_i\in{\mathbb C}[G]$ and $f_i\in C[a,b]$. By \cite[Remark~1.3]{NT6} the space $\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[a,b]})$ does not depend on the choice of $\varphi^q$ and forms a dense $*$-subalgebra of $\Gamma((C(G_q))_{q\in[a,b]})$. Put
$$
\Gamma_{alg}(({\mathbb C}[G_q/K^{S,L}_q])_{q\in[a,b]})=\Gamma((C(G_q/K^{S,L}_q))_{q\in[a,b]})\cap\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[a,b]}).
$$
Then $\Gamma_{alg}(({\mathbb C}[G_q/K^{S,L}_q])_{q\in[a,b]})$ is a dense involutive $C[a,b]$-subalgebra of $\Gamma((C(G_q/K^{S,L}_q))_{q\in[a,b]})$.
\smallskip
Recall that ${\mathbb C}[G/K^{S,L}]$ is a Poisson algebra with Poisson bracket defined by \eqref{epoissonbracket}.
\begin{theorem} \label{tquantization}
Assume $\eps\in(0,1)$. Then for any $a,b\in \Gamma_{alg}(({\mathbb C}[G_q/K^{S,L}_q])_{q\in[\eps,1]})$ we have
$$
\lim_{h\downarrow0}\frac{[a(e^{-h}),b(e^{-h})]}{ih}=\{a(1),b(1)\}.
$$
\end{theorem}
Here by the limit we mean that for some (equivalently, for any) $c\in \Gamma((C(G_q/K^{S,L}_q))_{q\in[\eps,1]})$ such that $\{a(1),b(1)\}=c(1)$ we have
$$
\lim_{h\downarrow0}\|[a(e^{-h}),b(e^{-h})]/{ih}-c(e^{-h})\|=0.
$$
Another way of formulating this theorem is to say that the section $c$ defined by $c(1)=\{a(1),b(1)\}$ and $c(e^{-h})=[a(e^{-h}),b(e^{-h})]/{ih}$ for $h>0$ is continuous.
\bp[Proof of Theorem~\ref{tquantization}] We may assume that $K^{S,L}$ is trivial. Let $\{\varphi^q\colon\U(G_q)\to\U(G)\}_{q\in(0,1]}$ be a continuous family of $*$-isomorphisms with $\varphi^1=\iota$. By definition of $\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[\eps,1]})$ and by linearity we may assume that $a(q)=\hat\varphi^q(a')$ and $b(q)=\hat\varphi^q(b')$ for some $a',b'\in{\mathbb C}[G]$. Let $w_0=s_{i_1}\dots s_{i_n}$ be the longest element in the Weyl group written in reduced form. Consider the homomorphism
$$
\Theta^q\colon C(G_q)\to C(SU_{q^{d_{i_1}}}(2))\otimes\dots\otimes C(SU_{q^{d_{i_n}}}(2)), \ \ \Theta^q(x)=(\sigma^q_{i_1}\otimes\dots\otimes \sigma^q_{i_n})\Delta_q^{(n-1)}(x),
$$
where $\sigma^q_i\colon C(G_q)\to C(SU_{q_i}(2))$ is the $*$-homomorphism which is dual to the embedding $U_{q_i}\sltwo\hookrightarrow U_q\g$ corresponding to the simple root $\alpha_i$. Since $\hat\varphi^q$ are coalgebra maps, we then have
\medskip
$\displaystyle\Theta^q([a(q),b(q)])=\Theta^q([\hat\varphi^q(a'),\hat\varphi^q(b')])$
$$
=\sum_{k=0}^{n-1}\sigma^q_{i_1}(\hat\varphi^q(b'_{(0)})\hat\varphi^q(a'_{(0)}))\otimes\dots\otimes \sigma^q_{i_{k+1}}([\hat\varphi^q(a'_{(k)}),\hat\varphi^q(b'_{(k)})])\otimes\dots\otimes \sigma^q_{i_n}(\hat\varphi^q(a'_{(n-1)})\hat\varphi^q(b'_{(n-1)})),
$$
where we use Sweedler's sumless notation for the coproduct $\Delta$ on ${\mathbb C}[G]$.
Since $\Delta^{(n-1)}\colon{\mathbb C}[G]\to{\mathbb C}[G]^{\otimes n}$ is a Poisson map with respect to the product Poisson structure on~${\mathbb C}[G]^{\otimes n}$, we also have
$$
\Theta^q\hat\varphi^q(\{a',b'\})
=\sum_{k=0}^{n-1}\sigma^q_{i_1}\hat\varphi^q(a'_{(0)}b'_{(0)})\otimes\dots\otimes \sigma^q_{i_{k+1}}\hat\varphi^q(\{a'_{(k)},b'_{(k)}\})\otimes\dots\otimes \sigma^q_{i_n}\hat\varphi^q(a'_{(n-1)}b'_{(n-1)}).
$$
By the classification of the irreducible representations of $C(G_q)$ we know that the homomorphism~$\Theta^q$ is an isometry. We thus see that it suffices to show that for any $a,b,c\in\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[\eps,1]})$ with $\{a(1),b(1)\}=c(1)$ and any $1\le j\le r$ we have
$$
\lim_{h\downarrow0}\left\|\sigma_j^{e^{-h}}\left([a(e^{-h}),b(e^{-h})]/{ih}-c(e^{-h})\right)\right\|=0.
$$
Since the family of homomorphisms $(\sigma_j^q)_q$ maps $\Gamma_{alg}(({\mathbb C}[G_q])_{q\in[\eps,1]})$ into $\Gamma_{alg}(({\mathbb C}[SU_{q^{d_j}}(2)])_{q\in[\eps,1]})$ (see the proof of \cite[Proposition 1.2]{NT6}), and $\sigma^1_j\colon {\mathbb C}[G]\to {\mathbb C}[SU(2)]$ is a homomorphism of Poisson algebras when $SU(2)$ is given the Poisson structure defined by the classical $r$-matrix $i d_j(F\otimes E-E\otimes F)$, it is therefore enough to consider $G=SU(2)$ with the standard normalization of the invariant form on $\sltwo({\mathbb C})$ and the classical $r$-matrix $i(F\otimes E-E\otimes F)$.
The space $\Gamma_{alg}(({\mathbb C}[SU_{q}(2)])_{q\in[\eps,1]})$ is generated as an involutive $C[\eps,1]$-algebra by the sections $q\mapsto\alpha_q$ and $q\mapsto\gamma_q$ (see again the proof of \cite[Proposition 1.2]{NT6}). It follows that it suffices to consider the following four pairs of $(a(q),b(q))$: $(\alpha_q,\alpha^*_q)$, $(\alpha_q,\gamma_q)$, $(\alpha_q,\gamma^*_q)$ and $(\gamma_q,\gamma^*_q)$. By \eqref{epoissonbracket} the Poisson bracket of $a',b'\in{\mathbb C}[G]$ is given by
$$
\{a',b'\}=(a'_{(1)}\otimes b'_{(1)})(r)a'_{(0)}b'_{(0)}-(a'_{(0)}\otimes b'_{(0)})(r)a'_{(1)}b'_{(1)},
$$
from which we compute
$$
\{\alpha_1,\alpha_1^*\}=-2i\gamma_1\gamma_1^*,\ \
\{\alpha_1,\gamma_1\}=i\alpha_1\gamma_1,\ \
\{\alpha_1,\gamma_1^*\}=i\alpha_1\gamma_1^*,\ \
\{\gamma_1,\gamma_1^*\}=0.
$$
By the relations in ${\mathbb C}[SU_q(2)]$ this gives the result: for instance, $$\frac{[\alpha_{e^{-h}},\alpha_{e^{-h}}^*]}{ih}=\frac{1-e^{-2h}}{ih}\gamma^*_{e^{-h}}\gamma_{e^{-h}}\to -2i\gamma_1^*\gamma_1\ \ \hbox{as}\ \ h\to0.$$
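The remaining pairs are handled in exactly the same way; for instance, in the convention where $\alpha_q\gamma_q=q\gamma_q\alpha_q$ in ${\mathbb C}[SU_q(2)]$ (consistent with the commutation relation used above),
$$
\frac{[\alpha_{e^{-h}},\gamma_{e^{-h}}]}{ih}=\frac{e^{-h}-1}{ih}\,\gamma_{e^{-h}}\alpha_{e^{-h}}\to i\alpha_1\gamma_1=\{\alpha_1,\gamma_1\}\ \ \hbox{as}\ \ h\to0,
$$
while $[\gamma_q,\gamma_q^*]=0$ for all $q$, in agreement with $\{\gamma_1,\gamma_1^*\}=0$.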
\ep
Recall \cite{Ri}, \cite{La} that a strict quantization of a commutative Poisson $*$-algebra $\A$ is a continuous field $(A_h)_{h\in[0,\delta]}$ of C$^*$-algebras together with a linear map $\QQ=(\QQ_h)_h\colon \A\to\Gamma((A_h)_{h\in[0,\delta]})$ such that~$\A$ is a dense $*$-subalgebra of $A_0$, $\QQ_0=\iota$, $\QQ_h(\A)$ is a dense subspace of $A_h$ for every $h$, and
$$
\lim_{h\downarrow0}\left\|{[\QQ_h(a),\QQ_h(b)]}/{ih}-\QQ_h(\{a,b\})\right\|=0\ \ \hbox{for all}\ \ a,b\in\A.
$$
The pair $((A_h)_h,(\QQ_h)_h)$ is called a strict deformation quantization of $\A$ if in addition every map $\QQ_h$ is injective and its image is a $*$-subalgebra of $A_h$.
The structure which emerges from Theorem~\ref{tquantization} is only slightly different: we have a continuous field $(A_h)_{h\in[0,\delta]}$ of C$^*$-algebras and a dense involutive $C[0,\delta]$-subalgebra $\QQ$ of $\Gamma((A_h)_{h\in[0,\delta]})$ such that~$\A$ is a dense $*$-subalgebra of $A_0$, the image of $\QQ$ in $A_0$ coincides with $\A$ and
$$
\lim_{h\downarrow0}\frac{[a(h),b(h)]}{ih}=\{a(0),b(0)\}\ \ \hbox{for all}\ \ a,b\in\QQ.
$$
The advantage of this formulation is that in our examples this structure is completely canonical. If, however, one insists on the standard formulation of deformation quantization, the required maps $\QQ_h$ exist in abundance, but it is impossible to make a canonical choice.
\begin{corollary}
Let $\{\varphi^q\colon\U(G_q)\to\U(G)\}_{q\in(0,1]}$ be a continuous family of $*$-isomorphisms with $\varphi^1=\iota$ such that $\varphi^q(\U(K^{S,L}_q))=\U(K^{S,L})$. Then the pair $((A_h)_{h\in[0,\delta]},(\QQ_h)_{h\in[0,\delta]})$, where $A_h=C(G_{e^{-h}}/K^{S,L}_{e^{-h}})$ and $\QQ_h$ is the restriction of $\hat\varphi^{e^{-h}}$ to ${\mathbb C}[G/K^{S,L}]$, defines a strict deformation quantization of the Poisson algebra ${\mathbb C}[G/K^{S,L}]$ for any $\delta>0$.
\end{corollary}
\begin{remark}
Sometimes one also requires the maps $\QQ_h$ to be $*$-preserving. The maps in the above corollary do not satisfy this property, but it is easy to modify them to get maps that are $*$-preserving. To do this, for every $\lambda\in P_+$ and $h\ge0$ consider the subspace $\tilde\A_h^\lambda\subset {\mathbb C}[G_{e^{-h}}]$ spanned by the matrix coefficients of the irreducible representations with highest weights $\lambda$ and $-w_0\lambda$. Put $\A_h^\lambda= \tilde \A_h^\lambda\cap {\mathbb C}[G_{e^{-h}}/K^{S,L}_{e^{-h}}]$. Then $\A_h^\lambda$ is a finite dimensional selfadjoint subspace of $A_h$ and $\QQ_h$ maps $\A_0^\lambda$ onto $\A_h^\lambda$. Then $R^\lambda_h=\QQ^{-1}_h((\A^\lambda_h)_{sa})\subset \A_0^\lambda$ is a continuous family of real forms of the space $\A^\lambda_0$. Hence there exists a continuous family of linear isomorphisms $T^\lambda_h\colon \A_0^\lambda\to \A^\lambda_0$ such that $T^\lambda_h(R^\lambda_0)=R^\lambda_h$ and $T^\lambda_0=\iota$. The space ${\mathbb C}[G/K^{S,L}]$ is the direct sum of the spaces $\A_0^\lambda$ over a set of representatives $\lambda$ of the quotient space of $P_+$ by the action of the involution $-w_0$. Fixing such a direct sum decomposition define $T_h\colon {\mathbb C}[G/K^{S,L}]\to{\mathbb C}[G/K^{S,L}]$ using the operators $T_h^\lambda$. Then the maps $\QQ_hT_h\colon{\mathbb C}[G/K^{S,L}]\to {\mathbb C}[G_{e^{-h}}/K^{S,L}_{e^{-h}}]$ are $*$-preserving linear isomorphisms defining a strict deformation quantization of ${\mathbb C}[G/K^{S,L}]$.
\end{remark}
\bigskip
\section{K-theory} \label{s6}
In this section we will show that the C$^*$-algebras $C(G_q/K^{S,L}_q)$ are KK-equivalent to $C(G/K^{S,L})$. In fact we will prove the following more precise and stronger result, which is important for applications~\cite{NT6}.
\begin{theorem} \label{tNagy}
For any $0<a<b\le1$ and $q_0\in[a,b]$ the evaluation map $$\Gamma((C(G_q/K^{S,L}_q))_{q\in[a,b]})\to C(G_{q_0}/K^{S,L}_{q_0})$$ is a KK-equivalence.
\end{theorem}
That $C(SU_q(N))$ is KK-equivalent to $C(SU(N))$ was proved by Nagy~\cite{Na} using the composition series obtained by Sheu~\cite{Sh}. In view of Theorem~\ref{tcompseries} the general case of $C(G_q/K^{S,L}_q)$ is virtually the same, but since it might seem that Nagy's proof depends in an essential way on the extension of E-theory developed in~\cite{Na2}, we will give a complete argument within just the standard KK-theoretic framework.
\smallskip
We will repeatedly use the following basic properties of KK-equivalence.
\begin{lemma} Let $0\to J\to A\xrightarrow{\pi} A/J\to0$ be a semisplit short exact sequence of separable C$^*$-algebras. Then the following conditions are equivalent:
\enu{i} the homomorphism $\pi\colon A\to A/J$ is a KK-equivalence;
\enu{ii} the map $\pi_*\colon KK(D,A)\to KK(D,A/J)$ is an isomorphism for every separable C$^*$-algebra $D$;
\enu{iii} the map $\pi^*\colon KK(A/J,D)\to KK(A,D)$ is an isomorphism for every separable C$^*$-algebra $D$;
\enu{iv} the C$^*$-algebra $J$ is KK-contractible, that is, $KK(J,J)=0$.
\end{lemma}
\bp Equivalence of (ii), (iii) and (iv) follows from the two $6$-term exact sequences in KK-theory associated with $0\to J\to A\xrightarrow{\pi} A/J\to0$. That (i) implies (ii) is immediate. Finally, that (ii) implies~(i) follows from the general observation that if $f\colon X\to Y$ is a morphism in some category $\CC$ such that for every object $Z$ the map $\operatorname{Mor}(Z,X)\to\operatorname{Mor}(Z,Y)$, $g\mapsto f\circ g$, is a bijection, then $f$ is an isomorphism.
\ep
Note that all the algebras appearing in this section will be of type I, hence nuclear, so all the short exact sequences will automatically be semisplit.
\smallskip
To prove the theorem we will first establish the analogous result for quantum disks.
\begin{lemma} \label{ldisk}
For any $0\le a<b\le1$ and $q_0\in[a,b]$ the evaluation maps $\Gamma((C_0({\mathbb D}_q))_{q\in [a,b]})\to C_0({\mathbb D}_{q_0})$ and $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\to C(\bar{\mathbb D}_{q_0})$ are KK-equivalences.
\end{lemma}
\bp Any of the two $6$-term exact sequences in KK-theory applied to the exact rows of the commutative diagram
$$
\xymatrix{
0\ar[r] & \Gamma((C_0({\mathbb D}_q))_{q\in [a,b]})\ar[d]\ar[r] & \Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\ar[r]\ar[d] & C[a,b]\otimes C(\T)\ar[r]\ar[d] & 0\\
0\ar[r] & C_0({\mathbb D}_{q_0}) \ar[r] & C(\bar{\mathbb D}_{q_0})\ar[r] & C(\T)\ar[r] & 0
}
$$
implies that it suffices to show that $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\to C(\bar{\mathbb D}_{q_0})$ is a KK-equivalence. Observe also that for $q_0\in(a,b)$ the kernel of $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\to C(\bar{\mathbb D}_{q_0})$ is the direct sum of the kernels of $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,q_0]})\to C(\bar{\mathbb D}_{q_0})$ and $\Gamma((C(\bar{\mathbb D}_q))_{q\in [q_0,b]})\to C(\bar{\mathbb D}_{q_0})$. So to prove that the kernels of the evaluation maps are KK-contractible it suffices to consider the evaluations at the end points.
Since the field $(C(\bar{\mathbb D}_q))_{q\in[0,1)}$ is constant with fiber $\TT$, the kernel of $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\to C(\bar{\mathbb D}_b)$ is isomorphic to $C_0[a,b)\otimes\TT$, hence it is contractible. Similarly, if $b<1$ then the kernel of $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,b]})\to C(\bar{\mathbb D}_{a})$ is contractible.
It remains to prove that $ev_a\colon \Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]})\to C(\bar{\mathbb D}_{a})$ is a KK-equivalence. Since we already know that $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]})\to C(\bar{\mathbb D}_{1})=C(\bar{\mathbb D})$ is a KK-equivalence, the C$^*$-algebra $\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]})$ is KK-equivalent to~${\mathbb C}$ and the group $K_0(\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]}))$ is generated by $[1]$. But it is also well-known that the C$^*$-algebra $C(\bar{\mathbb D}_a)\cong\TT$ is KK-equivalent to ${\mathbb C}$ and its $K_0$-group is generated by~$[1]$. Therefore we just have to check that the KK-class of $ev_a$ is a generator of
$$
KK(\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]}),C(\bar{\mathbb D}_a))\cong KK({\mathbb C},{\mathbb C})\cong\Z.
$$
Since ${ev_a}_*\colon K_0(\Gamma((C(\bar{\mathbb D}_q))_{q\in [a,1]}))\to K_0(C(\bar{\mathbb D}_a))$ is an isomorphism, this is clearly the case.
\ep
We remark that the last part of the above proof can be slightly shortened by using the Universal Coefficient Theorem. Similarly the next lemma can be quickly deduced from Lemma~\ref{ldisk} using Kasparov's ${\mathcal R}$KK-groups. Since both proofs are quite short anyway, we prefer to keep things as elementary as possible.
\begin{lemma} \label{lpolydisk}
Assume $p_1,\dots,p_n>0$ and $0\le a<b\le1$.
Then for any $q_0\in[a,b]$ the evaluation map
$$
\Gamma((C_0({\mathbb D}_{q^{p_1}})\otimes\dots\otimes C_0({\mathbb D}_{q^{p_n}}))_{q\in[a,b]})\to C_0({\mathbb D}_{q^{p_1}_0})\otimes\dots\otimes C_0({\mathbb D}_{q^{p_n}_0})
$$
is a KK-equivalence.
\end{lemma}
Here the family $(C_0({\mathbb D}_{q^{p_1}})\otimes\dots\otimes C_0({\mathbb D}_{q^{p_n}}))_{q\in[a,b]}$ is of course given the unique continuous field structure such that the tensor product of continuous sections is a continuous section. That such a structure exists, can be checked by the same argument as in the proof of Lemma~\ref{ldiskcont}, but this is also a consequence of a general result of Kirchberg and Wassermann \cite[Theorem~4.6]{KW} saying that if $(A_q)_q$ and $(B_q)_q$ are continuous fields of C$^*$-algebras and $\Gamma((A_q)_q)$ is exact then $(A_q\otimes B_q)_q$ is a continuous field.
\bp[Proof of Lemma~\ref{lpolydisk}] To simplify the notation assume $p_1=\dots=p_n=1$.
The proof of the lemma is by induction on $n$. Furthermore, it is convenient to simultaneously prove the same result for the continuous fields of the C$^*$-algebras
$$
A_{m,n}^q=\underbrace{C_0({\mathbb D}_q)\otimes\dots\otimes C_0({\mathbb D}_q)}_m\otimes \underbrace{C(\bar{\mathbb D}_q)\otimes\dots\otimes C(\bar{\mathbb D}_q)}_{n-m}
$$
for $m=0,\dots,n$. For $n=1$ the result is proved in Lemma~\ref{ldisk}. Assume $n>1$. We will prove that the evaluation map $\Gamma((A^q_{m,n})_{q\in[a,b]})\to A^{q_0}_{m,n}$ is a KK-equivalence by induction on $m$. For $m=0$ the proof is literally the same as that of Lemma~\ref{ldisk}, with $\TT$ replaced by $\TT^{\otimes n}$. For $m\ge1$ applying $\otimes A^q_{m-1,n-1}$ to the exact sequence $0\to C_0({\mathbb D}_q)\to C(\bar{\mathbb D}_q)\to C(\T)\to 0$ we get an exact sequence
\begin{equation} \label{eshortdisks}
0\to A^q_{m,n}\to A^q_{m-1,n}\to A^q_{m-1,n-1}\otimes C(\T)\to 0.
\end{equation}
Since the evaluation map $\Gamma((A^q_{m-1,n-1})_{q\in[a,b]})\to A^{q_0}_{m-1,n-1}$ is a KK-equivalence by the inductive assumption on $n$, the map
$$
\Gamma((A^q_{m-1,n-1}\otimes C(\T))_{q\in[a,b]})=\Gamma((A^q_{m-1,n-1})_{q\in[a,b]})\otimes C(\T)\to A^{q_0}_{m-1,n-1}\otimes C(\T)
$$
is a KK-equivalence as well. Since $\Gamma((A^q_{m-1,n})_{q\in[a,b]})\to A^{q_0}_{m-1,n}$ is a KK-equivalence by the inductive assumption on $m$, applying one of the $6$-term exact sequences in KK-theory to \eqref{eshortdisks} we conclude that $\Gamma((A^q_{m,n})_{q\in[a,b]})\to A^{q_0}_{m,n}$ is also a KK-equivalence.
\ep
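For the reader's convenience we recall the form of the $6$-term exact sequence used in the preceding argument (and again below). For a semisplit extension $0\to I\to A\to B\to 0$ of separable C$^*$-algebras and any separable C$^*$-algebra $D$, applying $KK(D,-)$ gives the exact hexagon
$$
\xymatrix{
KK(D,I)\ar[r] & KK(D,A)\ar[r] & KK(D,B)\ar[d]\\
KK_1(D,B)\ar[u] & KK_1(D,A)\ar[l] & KK_1(D,I)\ar[l]
}
$$
In the proof above this is applied with \eqref{eshortdisks} in place of $0\to I\to A\to B\to 0$, together with naturality with respect to the evaluation maps.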
\bp[Proof of Theorem~\ref{tNagy}]
Consider the ideals $J_m^q\subset C(G_q/K^{S,L}_q)$ defined in Theorem~\ref{tcompseries}. Since they are the fiber-wise kernels of morphisms of continuous fields of C$^*$-algebras, they form continuous subfields of C$^*$-algebras of $(C(G_q/K^{S,L}_q))_q$, see e.g.~\cite[Proposition~2.6(ii)]{Na}. Furthermore, we have short exact sequences $0\to J^q_{m}\to J^q_{m-1}\to A^q_m\to0$ and corresponding short exact sequences of the C$^*$-algebras of continuous sections, where
$$
A_m^q=\bigoplus_{w\in W^S: \ell(w)=m}C(T/T_L)\otimes C_0({\mathbb D}_{q^{d_{i_1(w)}}})\otimes\dots\otimes
C_0({\mathbb D}_{q^{d_{i_m(w)}}}).
$$
By Lemma~\ref{lpolydisk} the evaluation maps $\Gamma((A^q_m)_q)\to A_m^{q_0}$ are KK-equivalences. As $J^q_{m_0}=0$, using the $6$-term exact sequences in KK-theory we prove that $\Gamma((J^q_m)_q)\to J_m^{q_0}$ are KK-equivalences for all~$m$ by downward induction from $m=m_0$ to $m=-1$. The case $m=-1$ is the statement of the theorem.
\ep
Since the continuous field structure on $(C(G_q/K^{S,L}_q))_q$ does not depend on any choices, we therefore know that the K-groups of $C(G_q/K^{S,L}_q)$ are canonically isomorphic to those of $C(G/K^{S,L})$, but this gives no information about explicit generators of these groups. Some information can however be extracted. Let $e$ be a projection in a matrix algebra over $C(G/K^{S,L})$. Assume we can find a continuous field of projections $e(q)$ in matrix algebras over $C(G_q/K^{S,L}_q)$ such that $e(1)=e$. Then the class of $e(q)$ in $K_0(C(G_q/K^{S,L}_q))$ is exactly the class corresponding to $[e]$ under the KK-equivalence between $C(G_q/K^{S,L}_q)$ and $C(G/K^{S,L})$.
As a simple example consider the Podle\'s sphere $S^2_q=SU_q(2)/\T$. It is well-known, and follows immediately from Theorem~\ref{tcompseries}, that the homomorphism $\rho_q$ defines an isomorphism of $C(S^2_q)$ onto the unitization $C_0({\mathbb D}_q)^\sim\subset C(\bar{\mathbb D}_q)$ of $C_0({\mathbb D}_q)$. So for $q\in(0,1)$ the C$^*$-algebra $C(S^2_q)$ is isomorphic to the algebra of compact operators on $\ell^2(\Z_+)$ with unit adjoined. From this point of view the most natural generators of $K_0(C(S^2_q))\cong\Z^2$ are $[1]$ and the class of the rank-one projection onto ${\mathbb C} e_0$. The latter projection has no meaning for $q=1$. On the other hand, $K_0(S^2)$ is generated by $[1]$ and the class of the Bott element. Under the identification $S^2=SU(2)/\T$ this class can be represented by the projection $\begin{pmatrix}\gamma^*_1\gamma_1 & -\alpha_1\gamma_1^*\\ -\gamma_1\alpha_1^* & \alpha_1^*\alpha_1\end{pmatrix}$. This projection belongs to the continuous family of projections $e(q)=\begin{pmatrix}q^2\gamma^*_q\gamma_q & -\alpha_q\gamma_q^*\\ -\gamma_q\alpha_q^* & \alpha_q^*\alpha_q\end{pmatrix}$, see~\cite{HM}. Therefore $K_0(C(S^2_q))$ is generated by $[1]$ and $[e(q)]$.
As another example, from the classical result of Hodgkin~\cite{Ho} we conclude that the fundamental corepresentations of $C(G_q)$ define independent generators of $K_1(C(G_q))$ (but not all of them if the rank of $G$ is at least $3$).
It would be interesting to develop a general technique for lifting K-theory classes for $G/K^{S,L}$ to $\Gamma((C(G_q/K^{S,L}_q))_q)$.
\bigskip
| {
"timestamp": "2011-04-19T02:04:10",
"yymm": "1103",
"arxiv_id": "1103.4346",
"language": "en",
"url": "https://arxiv.org/abs/1103.4346",
"abstract": "Let G be a simply connected semisimple compact Lie group with standard Poisson structure, K a closed Poisson-Lie subgroup, 0<q<1. We study a quantization C(G_q/K_q) of the algebra of continuous functions on G/K. Using results of Soibelman and Dijkhuizen-Stokman we classify the irreducible representations of C(G_q/K_q) and obtain a composition series for C(G_q/K_q). We describe closures of the symplectic leaves of G/K refining the well-known description in the case of flag manifolds in terms of the Bruhat order. We then show that the same rules describe the topology on the spectrum of C(G_q/K_q). Next we show that the family of C*-algebras C(G_q/K_q), 0<q\\le1, has a canonical structure of a continuous field of C*-algebras and provides a strict deformation quantization of the Poisson algebra \\C[G/K]. Finally, extending a result of Nagy, we show that C(G_q/K_q) is canonically KK-equivalent to C(G/K).",
"subjects": "Operator Algebras (math.OA); Quantum Algebra (math.QA)",
"title": "Quantized algebras of functions on homogeneous spaces with Poisson stabilizers"
} |
https://arxiv.org/abs/1910.06201 | Graphs in which the Maxine heuristic produces a maximum independent set | The residue of a graph is the number of zeros left after iteratively applying the Havel-Hakimi algorithm to its degree sequence. Favaron, Mahéo, and Saclé showed that the residue is a lower bound on the independence number. The Maxine heuristic reduces a graph to an independent set of size $M$. It has been shown that given a graph $G$, $M$ is bounded between the independence number and the residue of a graph for any application of the Maxine heuristic. We improve upon a forbidden subgraph classification of graphs such that $M$ is equal to the independence number given by Barrus and Molnar in 2015. | \section{Introduction} \label{sec: intro}
We will be considering simple graphs. We will let $N(v)$ denote the neighborhood of a vertex $v$ in a graph, and write $u\sim v$ to mean that $u$ and $v$ are adjacent. For a graph $G$ and a subset of vertices $U$ of $G$, let $G[U]$ be the subgraph induced on $U$. For a set of graphs $\mathcal{S}$, a graph $G$ is said to be $\mathcal{S}$-free if no graph in $\mathcal{S}$ appears as an induced subgraph of $G$.
Given a degree sequence $d=(d_1,d_2,\ldots, d_n)$, an iterative step in the Havel-Hakimi algorithm, developed independently by Havel \cite{Havel} and Hakimi \cite{Hakimi}, reduces $d$ to $d^1=(d_2-1,d_3-1,\ldots,d_{d_1+1}-1,d_{d_1+2},\ldots, d_n)$. After reordering the entries to be non-increasing, the algorithm iterates until no positive entries remain. The algorithm arose to determine when a degree sequence is graphic: a list of integers $d$ is graphic if and only if the Havel-Hakimi algorithm terminates in a list of zeros. The number of these zeros is called the residue of the degree sequence, and the residue of a graph $G$, denoted $R(G)$, is the residue of the degree sequence of $G$. The residue is of interest because of its connection to the independence number of a graph, $\alpha(G)$. In 1988, the conjecture-making computer program Graffiti \cite{Fajtlowicz} proposed the following theorem.
\begin{thm}
\cite{FavaronMaheoSacle} For every graph $G$, $R(G)\leq \alpha(G)$.
\end{thm}
This result was proven by Favaron et al. in 1991 and improved upon by Griggs and Kleitman \cite{GriggsKleitman}, Triesch \cite{Triesch}, and Jelen \cite{Jelen} in the 1990's. Determining the independence number is NP-hard, but since it takes only $O(E)$ steps to determine the residue, where $E$ is the number of edges in the graph, it is of interest to know how well $R(G)$ approximates $\alpha(G)$ and when the bound is realized.
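As a concrete illustration (our own, not from the paper), the Havel-Hakimi reduction and the residue can be computed as follows; this naive sketch re-sorts at each step rather than achieving the $O(E)$ bound mentioned above:

```python
def residue(degree_sequence):
    """Residue: the number of zeros left after the Havel-Hakimi
    algorithm terminates. Raises ValueError if the input sequence
    is not graphic."""
    d = sorted(degree_sequence, reverse=True)
    while d and d[0] > 0:
        k = d.pop(0)          # remove the largest entry d_1 = k ...
        if k > len(d):
            raise ValueError("sequence is not graphic")
        for i in range(k):    # ... and subtract 1 from the next k entries
            d[i] -= 1
            if d[i] < 0:
                raise ValueError("sequence is not graphic")
        d.sort(reverse=True)  # reorder to be non-increasing
    return len(d)
```

For example, the degree sequence of the $5$-cycle, $(2,2,2,2,2)$, has residue $2$, which equals $\alpha(C_5)$.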
To further illustrate the relationship between the residue and the independence number, we can consider the Maxine heuristic, which is the process of iteratively deleting vertices of maximum degree until an independent set of vertices is realized \cite{GriggsKleitman}. We will call $M$ the size of the independent set achieved by the Maxine heuristic and note that this is clearly a lower bound on the independence number. Note that the heuristic depends on our choice of deleted vertices and $M$ can vary accordingly. It was shown by Griggs and Kleitman \cite{GriggsKleitman} that
\begin{thm}(\cite{GriggsKleitman}) If $M$ is the size of the independent set produced by any application of the Maxine heuristic for a graph $G$, then $R(G)\leq M\leq \alpha(G)$.
\end{thm}
Thus if $R(G)=\alpha(G)$ for some $G$, then every application of the Maxine heuristic must achieve a maximum independent set.
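The heuristic itself is a short greedy loop. A minimal sketch (our own illustration; graphs are given as a dict mapping each vertex to its set of neighbours, and ties among maximum-degree vertices are broken arbitrarily, reflecting the fact that $M$ depends on the choice of deleted vertices):

```python
def maxine(adj):
    """Maxine heuristic: repeatedly delete a vertex of maximum degree
    until the remaining vertices form an independent set; return it."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}  # working copy
    while any(g[v] for v in g):                    # some edge remains
        v = max(g, key=lambda u: len(g[u]))        # a max-degree vertex
        for u in g[v]:
            g[u].discard(v)
        del g[v]
    return set(g)
```

On $C_5$ every run returns an independent set of size $2=\alpha(C_5)$, consistent with $R(G)\leq M\leq\alpha(G)$.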
A vertex in a graph is said to have the Havel-Hakimi property if it is of maximum degree and its neighborhood consists of vertices with the highest degrees among the other vertices, i.e., the deletion of that vertex corresponds to reducing the degree sequence by one step of the Havel-Hakimi algorithm. Not every graph has a vertex with this property, but every degree sequence has a realization with such a vertex \cite{ChartrandLesniakZhang}. If at each step of the Maxine heuristic a vertex with the Havel-Hakimi property is deleted, then $R(G)=M$. To find when $M=\alpha(G)$ we will consider graphs satisfying certain conditions.
A vertex $v$ in a graph $G$ is said to have maximum degree-independence conditions (or MDI conditions) if it has maximum degree and is a part of every maximum independent set. We will also say that a graph $G$ has maximum degree-independence conditions (or MDI conditions) if there exists a vertex $v\in V(G)$ that has MDI conditions.
In 2016, Barrus and Molnar found that if a vertex $v$ in $G$ has MDI conditions, then $G$ must contain an induced copy of $C_4$ (the cycle on 4 vertices) containing $v$ or an induced copy of $P_5$ (the path on 5 vertices) with $v$ as the center vertex \cite{BarrusMolnar}. From this it can be quickly shown that
\begin{thm}\label{BMthm} (\cite{BarrusMolnar})
The Maxine heuristic always produces a maximum independent set when applied to a $\{C_4,P_5\}$-free graph.
\end{thm}
\section{Results} \label{sec: results}
We will work to strengthen Theorem \ref{BMthm} by examining the case where the vertex $v$ with MDI conditions is in an induced copy of $C_4$, since $C_4$ does not have MDI conditions itself. Since we will only strengthen the condition on $C_4$, we will assume that all graphs considered have no induced subgraph isomorphic to $P_5$ in which the center vertex has MDI conditions. We will call a graph $P_5^*$-free when referring to this condition requiring that the center vertex have MDI conditions, as we will not restrict the existence of an induced $P_5$ in general. We will refer to the two requirements in the MDI conditions separately as the maximum degree condition and the independence condition. To start, we will prove a few lemmas to reduce the search for the induced subgraphs needed to strengthen the $C_4$ condition.
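For small graphs the MDI conditions can be checked by brute force. The following sketch (our own, exponential-time, for illustration only) confirms, for instance, that $C_4$ does not have MDI conditions while $P_5$ does:

```python
from itertools import combinations

def has_mdi(adj):
    """True iff some maximum-degree vertex lies in every maximum
    independent set of the graph (brute force over vertex subsets)."""
    verts = list(adj)
    max_sets = []
    # search subsets by decreasing size; first hits are the maximum ones
    for r in range(len(verts), 0, -1):
        max_sets = [set(s) for s in combinations(verts, r)
                    if all(b not in adj[a] for a, b in combinations(s, 2))]
        if max_sets:
            break
    dmax = max(len(adj[v]) for v in verts)
    return any(len(adj[v]) == dmax and all(v in s for s in max_sets)
               for v in verts)
```

For $C_4$ the two maximum independent sets are the diagonal pairs, so no vertex lies in both; for $P_5$ the unique maximum independent set contains the maximum-degree center vertex.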
\begin{lem}\label{UniqueIndLemma}
If $v\in V(G)$ has MDI conditions and is a part of more than one maximum independent set, then there is an induced subgraph of $G$ in which $v$ also has MDI conditions and there is only one maximum independent set.
\end{lem}
\begin{proof}
Let $v$ belong to maximum independent sets $I_1,I_2,\ldots,I_n$. Then we can consider the subgraph obtained by deleting $\bigcup_{i=2}^{n}(I_i\setminus I_1)$. The maximum degree condition is not violated, since none of the deleted vertices were adjacent to $v$, and there is exactly one maximum independent set in the resulting subgraph.
\end{proof}
\vspace{1 cm}
Because of Lemma \ref{UniqueIndLemma}, we will now only consider a graph $G$ with one maximum independent set $I$ including a vertex $v$ such that $v$ has MDI conditions.
\begin{lem}\label{OnlyN(v)} Let $x$ be a vertex such that $x\notin N(v)\cup I$, where $I$ is the lone maximum independent set. Then $v$ still has MDI conditions in $G\setminus\{x\}$.
\end{lem}
\begin{proof}
Deleting $x$ does not change the degree of $v$ and thus the maximum degree condition is unaffected. Furthermore, since $x$ is not in $I$, the independent set is unaffected as well. Thus $v$ still has MDI conditions in $G\setminus\{x\}$.
\end{proof}
\vspace{1 cm}
If $\alpha(G)=1$, $G$ must be a clique, and if there is only one maximum independent set, then $G$ must be an isolated vertex. Furthermore, if $\alpha(G)=2$ with maximum independent set $\{u,v\}$, then $N(v)=N(u)$ must form a clique and thus every element of $N(v)$ must have strictly larger degree than both $u$ and $v$. Since we require an element of the maximum independent set to have maximum degree, $N(v)$ must be empty and $G$ must be the graph of two isolated vertices. Hence, if $G$ has MDI conditions and $\alpha(G)\leq 2$, then every application of the Maxine heuristic vacuously produces a set of size $\alpha$.
Thus we will now assume that the size of $I$ is 3, with $I=\{u,v,w\}$, where $v$ is the vertex with MDI conditions, and set $I'=I\setminus\{v\}$. Note that if $x\in N(v)$, where $v$ has MDI conditions, and $x$ is not adjacent to any other element of the maximum independent set $I$, then we have another maximum independent set $(I\setminus\{v\})\cup\{x\}$. From Lemma \ref{OnlyN(v)} we can then delete $x$ and retain the conditions on $G$. Thus we only need to consider $N(v)\cup I$. We will then partition $N(v)$ into $Q_u$ and $Q_w$, the vertices in $N(v)$ whose only neighbor in $I'$ is $u$ or $w$ respectively. We will set $Q=Q_u\cup Q_w$. Let $N$ be the set of vertices in $N(v)$ that are adjacent to both $u$ and $w$. Since the independence number of $G$ must be 3 and $I$ is the unique independent set of size 3, the sets $Q_u$ and $Q_w$ must have independence number at most 1; hence $Q_u$ and $Q_w$ are cliques, since otherwise there would exist another independent set of size 3. Similarly, $N$ must have independence number at most 2. Then, since $G$ must be $P_5^*$-free, $Q$ must form a clique: every vertex in $Q_u$ must dominate $Q_w$ and vice versa, as otherwise there exist non-adjacent $q_u\in Q_u$ and $q_w\in Q_w$, and then $\{u, q_u, v, q_w, w\}$ induces $P_5$ with $v$ as the center vertex.
\begin{thm}\label{Ind=3} Let $G$ have MDI conditions with $\alpha=3$. Then $G$ has at least one of the following induced subgraphs where $Q'$ is a subset of $Q$ and $N'$ a subset of $N$:
\begin{enumerate}
\item $|Q'|=0$, $G[N']\cong\overline{C_n}$.
\item $|Q'|=1$, $G[N'\cup Q']\cong\overline{C_n}$.
\item $|Q'|=2$, $G[N'\cup Q']\cong\overline{P_n}$ where the elements of $Q'$ are the endpoints of $P_n$ in the complement.
\end{enumerate}
\end{thm}
\begin{proof}
We will first consider the case where $|Q|=0$. First note that if $|N|=0$, then $N(v)$ is empty and $G$ is only the independent set and the result follows immediately. Thus we will assume that $N$ is non-empty. We have that every vertex in $N$ has two non-neighbors in $N$ as $Q$ is empty and every vertex in $N$ is also adjacent to $u$, $v$, and $w$, otherwise $v$ would not have maximum degree as $N(v)=N\cup Q$. We can then arrange the non-neighbors into one or more disjoint cycle complements. Consider a smallest cycle complement, and label its vertices $x_0,\ldots, x_{m-1}$ where $x_i$ is non-adjacent to both $x_{i+1}$ and $x_{i-1}$ modulo $m$. If there exists an $x_i$ that does not dominate the rest of the cycle complement, then we have a smaller cycle complement which is a contradiction. Thus we have that $x_i$ dominates the rest of the cycle complement for every $i$ and thus we have $G[N']\cong \overline{C_m}$ where $N'$ is the vertex set of the cycle complement.
We will next consider the case where $|Q|=1$. We will call $q$ the lone vertex in $Q$. If $|N|=0$, then $q$ has larger degree than $v$, which is a contradiction, so we will assume that $N$ is non-empty. Note that every vertex in $N$ has to have at least 2 non-neighbors in $N\cup Q$, otherwise $v$ is not of maximum degree, as every vertex in $N$ is also adjacent to $u$, $v$, and $w$. If $q$ dominates $N$ then $\deg(q)>\deg(v)$, which is a contradiction. Thus there exists a non-neighbor of $q$ in $N$; call it $x_0$, and call the other guaranteed non-neighbor of $x_0$, $x_1$. Similarly, $x_1$ is guaranteed to have another non-neighbor in $N\cup Q$, as $x_1\in N$ and must have at least two non-neighbors in $N\cup Q$. If this other non-neighbor is $q$ then we have that $\{q,x_0,x_1\}$ induce $\overline{C_3}$ and we are done. Thus we will assume that the other non-neighbor is in $N$; call it $x_2$. Inductively this creates a sequence of non-neighbors in $N$, $\{x_i\}$, as each $x_i$ must be adjacent to $q$, otherwise we are done, as $\overline{C_{i+2}}$ is induced on $\{q,x_0,x_1,\ldots,x_i\}$. Furthermore each $x_i$ must be adjacent to every vertex in $\{x_0,\ldots,x_{i-2}\}$, otherwise we have an induced copy of $\overline{C_n}$ in $N$ for some $n$. Since we have a finite graph, this sequence must terminate at $x_m$ for some $m$, and thus we have that $x_m$ must be non-adjacent to either $q$ or some vertex in $\{x_0,\ldots,x_{m-2}\}$, giving the result.
Finally we will show the result if $|Q|\geq2$. We will proceed by induction on the size of $Q$.
We will now consider the base case where $|Q|=2$, calling the 2 vertices $q_1,q_2$. Note that $q_1,q_2$ are adjacent as $Q$ forms a clique. Similar to the case $|Q|=1$, if $N$ is empty then $q_1$ has strictly larger degree than $v$ which is a contradiction. Thus we will assume that $N$ is non-empty. Each of the vertices has at least one non-neighbor in $N$; if they have the same non-neighbor then those three vertices induce the desired $\overline{P_3}$ and we are done. Thus we will assume that they have different non-neighbors, call them $x_1$ and $x_2$ respectively. If $x_1\nsim x_2$, then the four vertices induce the desired $\overline{P_4}$ and we are done, so assume that $x_1\sim x_2$. Each of these vertices has another non-neighbor in $N$; if they share a non-neighbor then the five vertices induce the desired $\overline{P_5}$, so assume that $x_1$ and $x_2$ have different non-neighbors call them $x_3$ and $x_4$ respectively. Inductively, we have that the pair of vertices $x_{2i},x_{2i+1}$ are the new non-neighbors of $x_{2i-2}$ and $x_{2i-1}$. Note that $x_{2i}$ and $x_{2i+1}$ must be adjacent to $Q$ otherwise there is an induced complement of a cycle and we are done. Furthermore $x_{2i}$ must be adjacent to each $x_j$ for $j$ even and $x_{2i+1}$ must be adjacent to each $x_j$ for $j$ odd, otherwise we have an induced complement of a cycle in $N$. Then $x_{2i}$ must be adjacent to each $x_j$ for $j$ odd, and $x_{2i+1}$ must be adjacent to each $x_j$ for $j$ even, otherwise we have the desired induced complement of a path. We thus have that both $x_{2i}$ and $x_{2i+1}$ must have another non-neighbor in $N$. Since we have a finite graph, this process must terminate, yielding the result.
We will now show that if $|Q|>2$, $G$ has one of the desired induced subgraphs above. We will proceed by induction on $|Q|$, noting that the base case of $|Q|=2$ is done above. Assume the result is true for $|Q|<k$ and consider the case with $|Q|=k$. We will label the vertices of $Q$ as $\{q_1,q_2,\ldots, q_k\}$. Each of these has a non-neighbor in $N$; call it $x_i$ for each $q_i$. Note that these are distinct, as otherwise we have an induced copy of $\overline{P_3}$ with 2 elements of $Q$ as endpoints in the complement. Furthermore $q_i\sim x_j$ for all $i\neq j$, as otherwise we have an induced $\overline{P_4}$. Then there exists another non-neighbor of $x_1$ in $N$, call it $y_1$. We have that
\begin{itemize}
\item $y_1\sim q_1$, otherwise $\{q_1,x_1,y_1\}$ induce $\overline{C_3}$.
\item $y_1\sim q_j$ for all $j>1$ otherwise $\{q_1,x_1,y_1,q_j\}$ induce $\overline{P_4}$.
\item $y_1\sim x_j$ for all $j>1$, otherwise $\{q_1,x_1,y_1,x_j,q_j\}$ induce $\overline{P_5}$.
\end{itemize}
We then have that $y_1$ must have another non-neighbor in $N$, call it $y_2$. Inductively let $y_k$ be the other non-neighbor of $y_{k-1}$ where each $y_i$ for $1\leq i<k$ dominates all preceding vertices except $y_{i-1}$. Then we have that
\begin{itemize}
\item $y_k\sim q_1$, otherwise $\{q_1,x_1,y_1,\ldots,y_{k}\}$ induce $\overline{C_{k+2}}$.
\item $y_k\sim q_j$ for all $j>1$, otherwise $\{q_1,x_1,y_1,\ldots,y_{k},q_j\}$ induce $\overline{P_{k+3}}$.
\item $y_k\sim x_1$, otherwise $\{x_1,y_1,\ldots,y_{k}\}$ induce $\overline{C_{k+1}}$.
\item $y_k\sim x_j$ for all $j>1$, otherwise $\{q_1,x_1,y_1,\ldots,y_{k},x_j,q_j\}$ induce $\overline{P_{k+4}}$
\item $y_k\sim y_i$ for all $i<k$ otherwise inductively there is an induced complement of a cycle.
\end{itemize}
Thus $y_k$ has another non-neighbor in $N$. Since our graph is finite, this process must terminate and the result holds.
\end{proof}
We will now extend the result to a graph with independence number greater than 3.
\begin{thm}\label{Ind=k} Let $G$ have MDI conditions with $\alpha=k$ for some $k>3$. Then the conclusion of Theorem \ref{Ind=3} holds as well.
\end{thm}
\begin{proof}
We will assume the contrary, that there exists such a graph without the desired induced subgraphs and derive a contradiction.
From Lemma \ref{UniqueIndLemma} we have that $G$ has one maximum independent set with $v$ a vertex with MDI conditions. Call $I$ the lone maximum independent set and set $I'=I\setminus\{v\}$. Furthermore, for a set $A\subseteq N(v)$ and elements $i,j$ of $I'$, we will say that $A$ induces a subgraph on $G_{ij}$ to mean the subgraph $G[\{v\}\cup A\cup\{i,j\}]$. Then call $Q_{i}\subseteq N(v)$ the set of vertices that are adjacent to exactly $i$ members of $I'$. Then $\{Q_i\}_{i=1}^{k-1}$ partitions $N(v)$, using Lemma \ref{OnlyN(v)}. Also note that in order for $G$ to have MDI conditions, every vertex in $Q_i$ must have $i$ non-neighbors in $N(v)$, as otherwise $v$ would not have maximum degree. We will first show that $Q_{k-1}$ must be empty.
Let $q\in Q_{k-1}$. We will show that $q$ must have at most one non-neighbor in $N(v)\setminus Q_{k-1}$. Suppose that $q$ has two such non-neighbors; $x$ and $y$.
First suppose $x\sim y$. If $x$ and $y$ have distinct neighbors in $I'$, call them $u$ and $w$ respectively, then $\{q,x,y\}$ induce $\overline{P_3}$ in $G_{u,w}$. Otherwise, without loss of generality, $(N(x)\cap I')\subseteq (N(y)\cap I')$, and we must have that $N(x)\cap I'$ is non-empty, so it contains an element $u$, and $(N(y)\cap I')^c$ is non-empty as $y\notin Q_{k-1}$, and thus $w\in (N(y)\cap I')^c $. We then have that, again, $\{q,x,y\}$ induce $\overline{P_3}$ in $G_{u,w}$.
Then suppose that $x\nsim y$. We must have that $x$ and $y$ do not have any distinct neighbors in $I'$, say $a$ and $b$, as otherwise $\{x,v,y,a,b\}$ would induce $P_5$. Then $x$ and $y$ share a neighbor in $I'$, call it $u$ and note that both $x,y$ cannot belong to $Q_1$, as $Q_1$ forms a clique. Thus, without loss of generality, we can say that $y$ has another neighbor, $w$, in $I'$, and thus $\{q,x,y\}$ induce $\overline{C_3}$ in $G_{u,w}$.
We thus have that, for each $q\in Q_{k-1}$, $q$ must have at most one non-neighbor in $N(v)\setminus Q_{k-1}$, and thus must have at least 2 non-neighbors in $Q_{k-1}$. As in the proof of the $\alpha=3$ case, we can arrange a smallest cycle complement of non-neighbors and thus we have an induced $\overline{C_n}$ in $G_{u,w}$ where $u,w$ are any two members of $I'$. This is a contradiction, and thus $Q_{k-1}$ must be empty.
We will then proceed by induction to show that $Q_i$ is empty for $3\leq i\leq k-1$. We will assume that $Q_{i}$ is empty for all $i>\ell$, and we will show that $Q_{\ell}$ is empty as well.
Let $q\in Q_{\ell}$, and assume that $q$ has two non-neighbors in $N(v)\setminus Q_{\ell}$; call them $x$ and $y$. If any pair of $\{x,y,q\}$ have distinct neighbors in $I'$, then we have an induced $P_5$, as seen above. Thus, without loss of generality, we must have $(N(x)\cap I')\subseteq (N(y)\cap I')$, and since $q$ has the most neighbors in $I'$, $(N(y)\cap I')\subseteq (N(q)\cap I')$. Then we argue, in the same way as in the base case of $Q_{k-1}$, that $q$ can have at most one non-neighbor in $N(v)\setminus Q_{\ell}$. Thus $q$ has at least $\ell-1$ non-neighbors in $Q_{\ell}$, as $Q_i$ is empty for all $i>\ell$, and as above this means that we have an induced $\overline{C_n}$, a contradiction. Thus $Q_{\ell}$ must be empty. Hence by induction $Q_i$ is empty for all $i>2$.
Note that $N(v)$ must be non-empty, as we cannot have an edgeless graph, and $Q_2$ cannot be empty, since $Q_1$ forms a clique and each element of $Q_1$ must have at least one non-neighbor in $N(v)$. Then let $q\in Q_2$. If $q$ has two non-neighbors in $Q_1$, adjacent to $u$ and $w$ respectively in $I'$, then $q$ must also be adjacent to $u,w$, otherwise we have an induced $P_5$. Thus the three vertices induce $\overline{P_3}$ in $G_{u,w}$. Then assume that $q$ has exactly one non-neighbor in $Q_1$, call it $x$, and a non-neighbor in $Q_2$, call it $y$. We must have that $q,y$ share the same neighbors in $I'$, otherwise we have an induced $P_5$, and thus the neighbor of $x$ in $I'$ is shared by both $q,y$. If $x\nsim y$, then $\{x,y,q\}$ induce $\overline{C_3}$. We will thus assume that $x\sim y$.
Then if all $q\in Q_2$ have 2 non-neighbors in $Q_2$, we must have an induced copy of $\overline{C_n}$ in $Q_2$. Suppose then that there are 2 vertices in $Q_2$, $q,q'$ that have a non-neighbor in $Q_1$, and choose these vertices such that the distance between them in $Q_2^c$ is as small as possible. Note that there must exist a chain of vertices in $Q_2$ such that $q\nsim q_1\nsim q_2\nsim\cdots\nsim q'$, such that $q_i$ does not have a non-neighbor in $Q_1$. Furthermore, $q,q',q_i$ must share the same neighbors in $I'$, otherwise we have an induced copy of $P_5$. If $q,q'$ have the same non-neighbor in $Q_1$, call it $x$, then $\{x,q,q',q_1,\ldots\}$ induce $\overline{C_n}$. If $q,q'$ have different non-neighbors, $x$ and $x'$ in $Q_1$, then $\{x,x',q,q',q_1,\ldots\}$ induce $\overline{P_n}$. This is a contradiction, and thus for every graph $G$ with conditions and $\alpha=k>3$, we have the result.
\end{proof}
For ease, we will call $\mathcal{F}$ the family of induced subgraphs in Theorem \ref{Ind=3}. We wanted to improve the $C_4$ condition introduced by Barrus and Molnar, as $C_4$ itself does not have MDI conditions. By construction, each graph in $\mathcal{F}$, like $P_5$, does have MDI conditions. We then have the immediate corollary,
\begin{cor}
The Maxine heuristic always produces a maximum independent set when applied to a $(\mathcal{F}\cup\{P_5\})$-free graph.
\end{cor}
\section{Open Questions} \label{sec: open questions}
Barrus and Molnar used their results to show that if a graph is $\{P_5, 4\text{-pan}, K_{2,3}, K^+_{2,3}, \text{kite}, 2P_3, P_3+K_3, \text{stool}, \text{co-domino}\}$-free, then $R(G)=\alpha(G)$ \cite{BarrusMolnar}. It can be expected that this class of graphs can be expanded with the strengthened conditions shown in this paper. We pose the following open questions and problems:
\begin{itemize}
\item Can we fully classify the graphs in which the Maxine heuristic produces a maximum independent set?
\item What other conditions, other than forbidding MDI conditions, can be considered to guarantee that the Maxine heuristic produces a maximum independent set?
\item Can we fully classify the graphs in which the Maxine heuristic produces an independent set of the same size as the residue? Note that graphs with the Havel-Hakimi property introduced in \cite{BarrusMolnar} are a subset of these graphs.
\item Can we fully classify the graphs in which the residue equals the independence number?
\end{itemize}
| {
"timestamp": "2019-10-15T02:29:01",
"yymm": "1910",
"arxiv_id": "1910.06201",
"language": "en",
"url": "https://arxiv.org/abs/1910.06201",
"abstract": "The residue of a graph is the number of zeros left after iteratively applying the Havel-Hakimi algorithm to its degree sequence. Favaron, Mahéo, and Saclé showed that the residue is a lower bound on the independence number. The Maxine heuristic reduces a graph to an independent set of size $M$. It has been shown that given a graph $G$, $M$ is bounded between the independence number and the residue of a graph for any application of the Maxine heuristic. We improve upon a forbidden subgraph classification of graphs such that $M$ is equal to the independence number given by Barrus and Molnar in 2015.",
"subjects": "Combinatorics (math.CO)",
"title": "Graphs in which the Maxine heuristic produces a maximum independent set"
} |
https://arxiv.org/abs/0812.1064 | Graph Minors and Minimum Degree | Let $\mathcal{D}_k$ be the class of graphs for which every minor has minimum degree at most $k$.Then $\mathcal{D}_k$ is closed under taking minors.By the Robertson-Seymour graph minor theorem, $\mathcal{D}_k$ is characterised by a finite family of minor-minimal forbidden graphs, which we denote by $\widehat{\mathcal{D}}_k$.This paper discusses $\widehat{\mathcal{D}}_k$ and related topics. We obtain four main results:We prove that every $(k+1)$-regular graph with less than ${4/3}(k+2)$ vertices is in $\widehat{\mathcal{D}}_k$, and this bound is best possible.We characterise the graphs in $\widehat{\mathcal{D}}_{k+1}$ that can be obtained from a graph in $\widehat{\mathcal{D}}_k$ by adding one new vertex.For $k\leq 3$ every graph in $\widehat{\mathcal{D}}_k$ is $(k+1)$-connected, but for large $k$, we exhibit graphs in $\widehat{\mathcal{D}}_k$ with connectivity 1. In fact, we construct graphs in $\mathcal{D}_k$ with arbitrary block structure.We characterise the complete multipartite graphs in $\widehat{\mathcal{D}}_k$, and prove analogous characterisations with minimum degree replaced by connectivity, treewidth, or pathwidth. | \section{Introduction}
The theory of graph minors developed by \citet{RS-GraphMinors} is one of the most important in graph theory, influencing many branches of mathematics. Let \ensuremath{\mathcal{X}}\ be a minor-closed class of graphs\footnote{All graphs considered in this paper are undirected, simple, and finite.
To \emph{contract} an edge $vw$ in a graph $G$ means to delete $vw$, identify $v$ and $w$, and replace any parallel edges by a single edge. The contracted graph is denoted by $G/vw$. If $S\subseteq E(G)$ then $G/S$ is the graph obtained from $G$ by contracting each edge in $S$ (while edges in $S$ remain in $G$). The graph $G/S$ is called a \emph{contraction minor} of $G$.
A graph $H$ is a \emph{minor} of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of $G$ by contracting edges. That is, $H$ can be obtained from $G$ by a sequence of edge contractions, edge deletions, or vertex deletions. For each vertex $v$ of $H$, the set of vertices of $G$ that are contracted into $v$ is called a \emph{branch set} of $H$. A class \ensuremath{\mathcal{X}}\ of graphs is \emph{minor-closed} if every minor of every graph in \ensuremath{\mathcal{X}}\ is also in \ensuremath{\mathcal{X}}, and some graph is not in \ensuremath{\mathcal{X}}.
The \emph{join} of graphs $G$ and $H$, denoted by $G*H$, is the graph obtained by adding all possible edges between disjoint copies of $G$ and $H$. Let $\overline{G}$ denote the complement of a graph $G$.}. A graph $G$ is a \emph{minimal forbidden minor} of \ensuremath{\mathcal{X}}\ if $G$ is not in \ensuremath{\mathcal{X}}\ but every proper minor of $G$ is in \ensuremath{\mathcal{X}}. Let \ensuremath{\widehat{\mathcal{X}}}\ be the set of minimal forbidden minors of \ensuremath{\mathcal{X}}. By the graph minor theorem of \citet{RS-GraphMinors}, \ensuremath{\widehat{\mathcal{X}}}\ is a finite set. For various minor-closed classes the list of minimal forbidden minors is known. Most famously, if \ensuremath{\mathcal{P}}\ is the class of planar graphs, then the Kuratowski-Wagner theorem states that $\ensuremath{\widehat{\mathcal{P}}}=\{K_5,K_{3,3}\}$.
However, in general, determining the minimal forbidden minors for a particular minor-closed class is a challenging problem.
Let $\delta(G)$ be the minimum degree of a graph $G$. Let $\D{k}$ be the class of graphs $G$ such that every minor of $G$ has minimum degree at most $k$. Then \D{k} is minor-closed. Let \DD{k} be the set of minimal forbidden minors of \D{k}. By the graph minor theorem, \DD{k} is finite for each $k$. The structure of graphs in \DD{k} is the focus of this paper. For small values of $k$, it is known that $\DD{0}=\{K_2\}$ and $\DD{1}=\{K_3\}$ and $\DD{2}=\{K_4\}$ and $\DD{3}=\{K_5,K_{2,2,2}\}$; see \secref{Basics}. Determining \DD{4} is an open problem.
The majority of this paper studies the case of general $k$ rather than focusing on small values. Our first main result shows that, in some sense, there are many graphs in \DD{k}. In particular, every sufficiently small $(k+1)$-regular graph is in \DD{k}. This result is proved in \secref{SmallRegularGraphs}.
\begin{theorem}
\thmlabel{SmallRegular}
Every $(k+1)$-regular graph with less than $\frac{4}{3}(k+2)$ vertices is in \DD{k}. Moreover, for all $k\equiv1\pmod{3}$ there is a $(k+1)$-regular graph on $\frac{4}{3}(k+2)$ vertices that is not in \DD{k}.
\end{theorem}
Our second main result characterises the graphs in \DD{k+1} that can be obtained from a graph in \DD{k} by adding one new vertex.
\begin{theorem}
\thmlabel{AddVertex}
Let $S$ be a set of vertices in a graph $G\in\D{k}$. Let $G'$ be the graph obtained from $G$ by adding one new vertex adjacent to every vertex in $S$. Then $G'\in\DD{k+1}$ if and only if $S$ is the set of vertices of degree $k+1$ in $G$.
\end{theorem}
\thmref{AddVertex} is proved in \secref{Construction} along with various corollaries of
\twothmref{SmallRegular}{AddVertex}.
It is natural to expect that graphs in \DD{k} are, in some sense, highly connected. For example, for $k\leq3$ all the graphs in \DD{k} are $(k+1)$-connected. However, this is not true in general. In \secref{Basics} we exhibit a graph in \DD{4} with connectivity $1$. In fact, our third main result, proved in \secref{BlockStructure}, constructs graphs in \DD{k} ($k\geq9$) with arbitrary block structure.
\begin{theorem}
Let $T$ be the block decomposition tree of some graph.
Then for some $k$, $T$ is the block decomposition tree of some graph in \DD{k}.
\end{theorem}
A complete characterisation of graphs in \DD{k} is probably hopeless. So it is reasonable to restrict our attention to particular subsets of \DD{k}. A graph is \emph{complete $c$-partite} if the vertices can be $c$-coloured so that two vertices are adjacent if and only if they have distinct colours. Let $K_{n_1,n_2,\dots,n_c}$ be the complete $c$-partite graph with $n_i$ vertices in the $i$-th colour class. Since every graph in \DD{k} for $k\leq 3$ is complete multipartite, it is natural to consider the complete multipartite graphs in \DD{k}. Our fourth main result characterises the complete multipartite graphs in \DD{k}.
\begin{theorem}
\thmlabel{CompleteMultipartite}
For all $k\geq1$, a complete multipartite graph $G$ is in \DD{k} if and only if for some $b\geq a\geq1$ and $p\geq2$, $$G=K_{a,\underbrace{b,\dots,b}_p}\enspace,$$
such that $k+1=a+(p-1)b$ and if $p=2$ then $a=b$.
\end{theorem}
\thmref{CompleteMultipartite} is proved in \secref{CompleteMultipartite}.
Moreover, we prove that the same characterisation holds for the minimal forbidden complete multipartite minors for the class of graphs for which every minor has connectivity at most $k$. \thmref{CMG-Treewidth} gives analogous results for graphs of treewidth at most $k$ and for graphs of pathwidth at most $k$.
\section{Basics and Small Values of $k$}\seclabel{Basics}
This section gives some basic results about \DD{k} and reviews what is known about \DD{k} for small values of $k$. We have the following characterisation of graphs in \DD{k}.
\begin{lemma}
\lemlabel{BasicDegree}
$G\in\DD{k}$ if and only if
\begin{enumerate}
\item[{\rm (D1)}] $\delta(G)=k+1$,
\item[{\rm (D2)}] every proper contraction minor of $G$ has minimum degree at most $k$,
\item[{\rm (D3)}] $G$ is connected, and
\item[{\rm (D4)}] no two vertices both with degree at least $k+2$ are adjacent in $G$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(\Longrightarrow)$ Suppose that $G\in\DD{k}$. That is, $\delta(G)\geq k+1$ and every minor of $G$ has minimum degree at most $k$. In particular, every
contraction minor of $G$ has minimum degree at most $k$, thus proving (D2).
If $G$ is not connected then each component of $G$ is a proper minor with minimum degree $k+1$. This contradiction proves (D3). If adjacent vertices $v$ and $w$ both have degree at least $k+2$, then $G-vw$ is a proper minor of $G$ with minimum degree at least $k+1$. This contradiction proves (D4). In particular, some vertex has degree $k+1$. Thus $\delta(G)=k+1$ and (D1) holds.
$(\Longleftarrow)$ Suppose that conditions (D1)--(D4) hold. Suppose on the contrary that some proper minor of $G$ has minimum degree at least $k+1$. Let $H$ be such a minor with the maximum number of edges. Since $G$ is connected, $H$ can be obtained by edge contractions and edge deletions only. (Deleting a non-isolated vertex $v$ can be simulated by contracting one and deleting the other edges incident to $v$.)\ Condition (D4) implies that every edge has an endpoint with degree $k+1$, implying that every proper subgraph of $G$ has minimum degree at most $k$. Hence at least one edge of $G$ was contracted in the construction of $H$. Since $H$ was chosen with the maximum number of edges, no edges were deleted in the construction of $H$. That is, $H$ is a contraction minor. Condition (D2) implies that $H$ has minimum degree at most $k$. This contradiction proves that every proper minor of $G$ has minimum degree at most $k$.
Thus condition (D1) implies that $G\in\DD{k}$.
\end{proof}
Observe that \lemref{BasicDegree} immediately implies that for all $k\geq0$,
\begin{equation}
\eqnlabel{Complete}
K_{k+2}\in\DD{k}\enspace.
\end{equation}
Now consider small values of $k$. Observe that \D{0} is the class of edgeless graphs, and $\DD{0}=\{K_2\}$. Similarly \D{1} is the class of forests, and $\DD{1}=\{K_3\}$. Graphs in \D{2} are often called \emph{series-parallel}. \DD{2} and \DD{3} are easily determined; see \figref{DD23}.
\Figure{DD23}{\includegraphics{k}}{Graphs in $\DD{2}$ and $\DD{3}$.}
\begin{proposition}
$\DD{2}=\{K_4\}$.
\end{proposition}
\begin{proof}
By \eqnref{Complete}, $K_4\in\DD{2}$. Consider $G\in\DD{2}$. By \lemref{BasicDegree}, $G$ has minimum degree $3$. \citet{Dirac52} proved that every graph with minimum degree at least $3$ contains a $K_4$-minor; also see \citep{Tutte-NAW61, Hadwiger43, Zeidl58, Woodall-JGT92}. Thus $G$ contains a $K_4$-minor. If $G\not\cong K_4$, then the $K_4$-minor in $G$ is proper, so $G$ has a proper minor with minimum degree $3$, implying $G\not\in\DD{2}$ by \lemref{BasicDegree}. Hence $G\cong K_4$.
\end{proof}
\begin{proposition}
\proplabel{DDthree}
$\DD{3}=\{K_5,K_{2,2,2}\}$.
\end{proposition}
\begin{proof}
By \eqnref{Complete}, $K_5\in\DD{3}$. Since $K_{2,2,2}$ is planar, every proper minor of $K_{2,2,2}$ is a planar graph on at most five vertices, which by Euler's Formula, has a vertex of degree at most $3$. Thus $K_{2,2,2}\in\DD{3}$.
Consider $G\in\DD{3}$. By \lemref{BasicDegree}, $G$ has minimum degree $4$.
In \appref{DegreeFour} we prove that every graph with minimum degree at least $4$ contains a $4$-connected minor\footnote{This result was attributed by \citet{Maharry-JGT99} to \citet{HJ-MA63}. While the authors acknowledge their less than perfect understanding of German, Halin and Jung actually proved that every $4$-connected graph contains $K_5$ or $K_{2,2,2}$ as a minor. This is confirmed by Tutte's review of the Halin and Jung paper in MathSciNet.}. \citet{HJ-MA63} proved that every $4$-connected graph contains $K_5$ or $K_{2,2,2}$ as a minor. Thus $G$ contains $K_5$ or $K_{2,2,2}$ as a minor. Suppose on the contrary that $G$ is isomorphic to neither $K_5$ nor $K_{2,2,2}$. Then $G$ contains $K_5$ or $K_{2,2,2}$ as a proper minor. Thus $G$ contains a proper minor with minimum degree 4, implying $G\not\in\DD{3}$ by \lemref{BasicDegree}. Hence $G$ is isomorphic to $K_5$ or $K_{2,2,2}$.
\end{proof}
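Since every proper minor of a graph with no isolated vertices can be reached through contraction minors (as in the proof of \lemref{BasicDegree}), membership in \DD{k} can be tested mechanically for small graphs. The following Python sketch is our own brute-force illustration (the paper itself notes that its computer verification code is available on request); the helper names \texttt{in\_DD}, \texttt{quotient}, etc.\ are ours. It confirms conditions (D1)--(D4) of \lemref{BasicDegree} for $K_4$ with $k=2$, and for $K_5$ and $K_{2,2,2}$ with $k=3$.

```python
from itertools import combinations

def quotient(vertices, edges, contracted):
    """Contraction minor G/S: identify the endpoints of every edge in S."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in contracted:
        parent[find(u)] = find(v)
    new_vertices = {find(v) for v in vertices}
    new_edges = {frozenset((find(u), find(v)))
                 for u, v in edges if find(u) != find(v)}
    return new_vertices, new_edges

def min_degree(vertices, edges):
    deg = {v: 0 for v in vertices}
    for e in edges:
        for v in e:
            deg[v] += 1
    return min(deg.values())

def connected(vertices, edges):
    vertices, seen = set(vertices), set()
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    stack = [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return seen == vertices

def in_DD(vertices, edges, k):
    """Test membership in DD_k via conditions (D1)-(D4) of the lemma."""
    if min_degree(vertices, edges) != k + 1:          # (D1)
        return False
    if not connected(vertices, edges):                # (D3)
        return False
    deg = {v: 0 for v in vertices}
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    if any(deg[u] >= k + 2 and deg[v] >= k + 2 for u, v in edges):  # (D4)
        return False
    for r in range(1, len(edges) + 1):                # (D2): every proper
        for S in combinations(edges, r):              # contraction minor
            if min_degree(*quotient(vertices, edges, S)) > k:
                return False
    return True

def complete(n):
    return set(range(n)), [tuple(e) for e in combinations(range(n), 2)]

def complete_multipartite(sizes):
    parts, start = [], 0
    for s in sizes:
        parts.append(set(range(start, start + s))); start += s
    V = set().union(*parts)
    E = [(a, b) for A, B in combinations(parts, 2) for a in A for b in B]
    return V, E

# consistent with the two propositions above
assert in_DD(*complete(4), 2)
assert in_DD(*complete_multipartite([2, 2, 2]), 3)
```

Enumerating all $2^{|E(G)|}$ edge subsets is exponential, so this check is only feasible for very small graphs such as those in \figref{DD23}.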
Determining \DD{4} is an open problem. But we do know nine graphs in \DD{4}, as illustrated in \figref{MinDeg5Graphs}. One of these graphs is the \emph{icosahedron} $I$, which is the unique $5$-regular planar triangulation (on twelve vertices). \citet{Mader68} proved that every planar graph with minimum degree $5$ contains $I$ as a minor. More generally, \citet{Mader68} proved that every graph with minimum degree at least $5$ contains a minor in $\{K_6,I,C_5*\overline{K_3},K_{2,2,2,1}-e\}$, where $e$ is an edge incident to the degree-$6$ vertex in $K_{2,2,2,1}$. However, since $K_{2,2,2,1}-e$ has a degree-$4$ vertex, it is not in \DD{4}. \citet{Fijavz-PhD} proved that every graph on at most $9$ vertices with minimum degree at least $5$ contracts to $K_6$, $K_{2,2,2,1}$ or $C_5*\overline{K_3}$. The graphs $G_1$ and $G_2$ are discussed further in \secref{GeneralSetting}. The graphs $D_1$ and $D_3$ are due to \citet{Fijavz-PhD}, while $D_2$ is due to \citet{Mader68}. Note that $D_1$, $D_2$ and $D_3$ are not $5$-connected. In fact, $D_3$ has a cut-vertex. It is an example of a more general construction given in \secref{BlockStructure}. In the language used there, $D_3$ is obtained from two copies of the single-horned graph $G_{5,4}$ by identifying the two horns.
\Figure{MinDeg5Graphs}{\includegraphics{MinDeg5Graphs}}{The known graphs in \DD{4}; vertices with degree more than $5$ are highlighted.}
\begin{proposition}
$\{K_6,I,C_5*\overline{K_3},K_{1,2,2,2},G_1,G_2,D_1,D_2,D_3\}\subseteq\DD{4}$.
\end{proposition}
\begin{proof}
This result was verified by computer.
(The code is available from the authors upon request.)\
We now give manual proofs for some of these graphs.
$K_6\in\DD{4}$ by \eqnref{Complete}.
$I$ is not in \D{4} since it is $5$-regular. Every proper minor of $I$ is planar with at most eleven vertices. By Euler's Formula, every such graph has minimum degree at most $4$, and is thus in \D{4}. Hence $I\in\DD{4}$.
We now prove that $C_5*\overline{K_3}\in\DD{4}$. Since $C_5*\overline{K_3}$ is connected and $5$-regular, conditions (D1), (D3) and (D4) in \lemref{BasicDegree} hold.
Suppose that $C_5*\overline{K_3}$ contains a proper contraction minor $H$ with $\delta(H)\geq 5$. Thus $|V(H)|\geq6$, and $H$ was obtained by at most two edge contractions.
Since every edge of $C_5*\overline{K_3}$ is in a triangle with a degree-5 vertex, $H$ was obtained by exactly two edge contractions.
Since each edge in the $C_5$ part of $C_5*\overline{K_3}$ is in three triangles, no edge in the $C_5$ part was contracted.
Thus one contracted edge was $vw$ where $v\in C_5$ and $w\in\overline{K_3}$. Observe that $vw$ is in two triangles $vwx$ and $vwy$, where $x$ and $y$ are the neighbours of $v$ in $C_5$. Since both $x$ and $y$ have degree $4$ in $G/vw$, some edge incident to $x$ and some edge incident to $y$ is contracted in $H$.
This is impossible since $x$ and $y$ are not adjacent, and only one contraction besides $vw$ is allowed. This contradiction proves that every proper contraction minor of $G$ has minimum degree at most $4$. Thus condition (D2) holds for $C_5*\overline{K_3}$, and $C_5*\overline{K_3}\in\DD{4}$.
That $K_{1,2,2,2}$ is in \DD{4} follows from \thmref{CMG-Degree} with $a=1$ and $b=2$ and $p=3$.
We now prove that $D_3\in\DD{4}$. Observe that conditions (D1), (D3) and (D4) in \lemref{BasicDegree} hold for $D_3$. Suppose that $D_3$ contains a proper contraction minor $H$ with $\delta(H)\geq 5$. Thus $H=D_3/S$ for some $S\subseteq E(D_3)$ such that $|V(H)|=13-|S|$. Let $v$ be the cut-vertex in $D_3$. Let $G_1$ and $G_2$ be the subgraphs of $D_3$ such that $D_3=G_1\cup G_2$ where $V(G_1)\cap V(G_2)=\{v\}$. Let $S_i:=S\cap E(G_i)$. We have $|S_i|\leq|V(G_i)|-1=6$. Every edge of $D_3$ is in a triangle with a vertex distinct from $v$. Thus, if $|S_i|=1$ then some vertex in $H$ has degree at most $4$, which is a contradiction. If $2\leq |S_i|\leq 5$ then $G_i/S_i$ has at least two and at most five vertices, and every vertex in $G_i/S_i$ (except possibly $v$) has degree at most $4$, which is a contradiction. Thus $|S_i|\in\{0,6\}$. Now $|S_1|+|S_2|=|S|\leq 7$, as otherwise $H$ has at most five vertices. Thus $|S_1|=0$ and $|S_2|=6$ without loss of generality. Hence $H\cong G_1$, in which $v$ has degree $4$, which is a contradiction. Thus condition (D2) holds for $D_3$. Hence $D_3\in\DD{4}$.
\end{proof}
\section{A General Setting}\seclabel{GeneralSetting}
The following general approach for studying minor-closed classes was introduced by \citet{Fijavz-PhD}. A \emph{graph parameter} is a function $f$ that assigns a non-negative integer $f(G)$ to every graph $G$, such that for every integer $k$ there is some graph $G$ for which $f(G)\geq k$. Examples of graph parameters include minimum degree $\delta$, maximum degree $\Delta$, (vertex-)connectivity $\kappa$, edge-connectivity $\lambda$, chromatic number $\chi$, clique number $\omega$, independence number $\alpha$, treewidth $\textsf{tw}$, and pathwidth $\textsf{pw}$; see \citep{Diestel00} for definitions.
For a graph parameter $f$ and a graph $G$, let $\down{f}(G)$ be the maximum of $f(H)$ taken over all minors $H$ of $G$. Then \down{f} also is a graph parameter\footnote{Let $\up{f}(G)$ be the minimum of $f(H)$ where $G$ is a minor of $H$. Then the class of graphs $G$ with $\up{f}(G)\leq k$ is minor-closed, and we can ask the same questions for $\up{f}$ as for $\down{f}$. For example, the minor crossing number \citep{BFM-SJDM06, BCSV-ENDM, BFW} fits into this framework.}. For example, $\down{\omega}(G)$ is the order of the largest clique minor in $G$, often called the \emph{Hadwiger number} of $G$.
Let $$\ensuremath{\mathcal{X}}_{f,k}:=\{G:\down{f}(G)\leq k\}\enspace.$$
That is, $\ensuremath{\mathcal{X}}_{f,k}$ is the class of graphs $G$ such that $f(H)\leq k$ for every minor $H$ of $G$. Then $\ensuremath{\mathcal{X}}_{f,k}$ is minor-closed, and the set $\ensuremath{\widehat{\mathcal{X}}}_{f,k}$ of minimal forbidden minors is finite.
We have the following characterisation of graphs in $\ensuremath{\widehat{\mathcal{X}}}_{f,k}$, analogous to \lemref{BasicDegree}.
\begin{lemma}
\lemlabel{Basic}
$G\in\ensuremath{\widehat{\mathcal{X}}}_{f,k}$ if and only if $f(G)\geq k+1$ and every proper minor $H$ of $G$ has $f(H)\leq k$.
\end{lemma}
\begin{proof}
By definition, $G\in\ensuremath{\widehat{\mathcal{X}}}_{f,k}$ if and only if $G\not\in\ensuremath{\mathcal{X}}_{f,k}$ but every proper minor of $G$ is in $\ensuremath{\mathcal{X}}_{f,k}$. That is, there exists a minor $H$ of $G$ with $f(H)\geq k+1$, but every proper minor $H$ of $G$ has $f(H)\leq k$. Thus the only minor $H$ of $G$ with $f(H)\geq k+1$ is $G$ itself.
\end{proof}
\begin{lemma}
\lemlabel{alphabeta}
Let $\alpha$ and $\beta$ be graph parameters such that $\alpha(G)\leq\beta(G)$ for every graph $G$. Then $\ensuremath{\mathcal{X}}_{\beta,k}\subseteq\ensuremath{\mathcal{X}}_{\alpha,k}$ and $\{G:G\in\ensuremath{\widehat{\mathcal{X}}}_{\beta,k},\alpha(G)\geq k+1\}\subseteq\ensuremath{\widehat{\mathcal{X}}}_{\alpha,k}$.
\end{lemma}
\begin{proof}
For the first claim, let $G$ be a graph in $\ensuremath{\mathcal{X}}_{\beta,k}$. Then $\beta(H)\leq k$ for every minor $H$ of $G$. By assumption, $\alpha(H)\leq\beta(H)\leq k$. Hence $G\in\ensuremath{\mathcal{X}}_{\alpha,k}$, implying $\ensuremath{\mathcal{X}}_{\beta,k}\subseteq\ensuremath{\mathcal{X}}_{\alpha,k}$.
For the second claim, suppose that $G\in\ensuremath{\widehat{\mathcal{X}}}_{\beta,k}$ and $\alpha(G)\geq k+1$. By \lemref{Basic} applied to $\beta$, $\beta(G)\geq k+1$ and every proper minor $H$ of $G$ has $\beta(H)\leq k$. By assumption, $\alpha(H)\leq\beta(H)\leq k$. Since $\alpha(G)\geq k+1$, \lemref{Basic} applied to $\alpha$ implies that $G\in \ensuremath{\widehat{\mathcal{X}}}_{\alpha,k}$.
\end{proof}
Recall that $\delta$ and $\kappa$ are the graph parameters minimum degree and connectivity. Observe that $\D{k}=\ensuremath{\mathcal{X}}_{\delta,k}$. Let $$\C{k}:=\ensuremath{\mathcal{X}}_{\kappa,k}$$ be the class of graphs for which every minor has connectivity at most $k$. For $k\leq 3$, we have $\C{k}=\D{k}$ and $\CC{k}=\DD{k}$. That is, $\CC{1}=\{K_3\}$, $\CC{2}=\{K_4\}$, and $\CC{3}=\{K_5,K_{2,2,2}\}$. Determining \CC{4} is an open problem; \citet{Fijavz-PhD} conjectured that $\CC{4}=\{K_6,I,C_5*\overline{K_3},K_{1,2,2,2},G_1,G_2\}$.
\citet{Dirac-PLMS63} proved that every $5$-connected planar graph contains the icosahedron as a minor (which, as mentioned earlier, was generalised by \citet{Mader68} for planar graphs of minimum degree $5$). Thus the icosahedron is the only planar graph in \CC{4}. \citet{Fijavz-5} determined the projective-planar graphs in \CC{4} to be $\{K_6, I, G_1,G_2\}$. \citet{Fijavz-JCTB} determined the toroidal graphs in \CC{5} to be $\{K_7, K_{2,2,2,2}, K_{3,3,3}, K_9-C_9\}$. See \citep{Fijavz-EuJC04, FM-Comb03} for related results. Also relevant is the large body of literature on contractibility; see the surveys \citep{Kriesell-GC02, Mader-DM05}.
Let $$\T{k}:=\{G:\tw{G}\leq k\}\quad\text{ and }\W{k}:=\{G:\pw{G}\leq k\}$$ respectively be the classes of graphs with treewidth and pathwidth at most $k$. Since treewidth and pathwidth are minor-closed, $\T{k}=\ensuremath{\mathcal{X}}_{\textsf{tw},k}$ and $\W{k}=\ensuremath{\mathcal{X}}_{\textsf{pw},k}$. We have $$\kappa(G)\leq\delta(G)\leq\tw{G}\leq\pw{G}$$ for every graph $G$; see \citep{Bodlaender-TCS98,Diestel00}. Thus \lemref{alphabeta} implies that
\begin{equation*}
\W{k}\subseteq\T{k}\subseteq\D{k}\subseteq\C{k},
\end{equation*}
and
\begin{align}
\eqnlabel{DDinCC}&\{G:G\in\DD{k},\kappa(G)\geq k+1\}\subseteq\CC{k}\\
\eqnlabel{TTinDD}&\{G:G\in\TT{k},\delta(G)\geq k+1\}\subseteq\DD{k}\\
\eqnlabel{WWinTT}&\{G:G\in\WW{k},\tw{G}\geq k+1\}\subseteq\TT{k}.
\end{align}
Thus the $(k+1)$-connected graphs that we show are in \DD{k} are also in \CC{k}. In particular, \thmref{SmallRegular} implies:
\begin{theorem}
Every $(k+1)$-connected $(k+1)$-regular graph with less than $\frac{4}{3}(k+2)$ vertices is in \CC{k}.
\end{theorem}
The relationship between \CC{k} and \DD{k} is an interesting open problem.
\begin{open}
Is $\CC{k}\subseteq\DD{k}$ for all $k$? Is $\CC{k}=\{G:G\in\DD{k},\kappa(G)=k+1\}$ for all $k$?
\end{open}
Note that $\DD{4}\neq \CC{4}$ since there are graphs in \DD{4} with connectivity $1$; see \secref{BlockStructure}.
\section{General Values of $k$}
Let $G$ be a graph. A vertex of $G$ is \emph{low-degree} if its degree equals the minimum degree of $G$. A vertex of $G$ is \emph{high-degree} if its degree is greater than the minimum degree of $G$. Recall that every graph in \DD{k} has minimum degree $k+1$. Thus a vertex of degree $k+1$ in a graph in \DD{k} is low-degree; every other vertex is high-degree. \lemref{BasicDegree} implies that for every graph $G\in\DD{k}$, the high-degree vertices in $G$ form an independent set.
\begin{proposition}
\proplabel{ManyLows}
Every graph $G\in\DD{k}$ has at least $k+2$ low-degree vertices (of degree $k+1$).
\end{proposition}
\begin{proof}
Suppose on the contrary that $G$ has at most $k+1$ low-degree vertices. By \lemref{BasicDegree}, each high-degree vertex is only adjacent to low-degree vertices. Since a high-degree vertex has degree at least $k+2$, there are no high-degree vertices. Thus $G$ has at most $k+1$ vertices. Thus $G$ has maximum degree at most $k$, which is a contradiction.
\end{proof}
For a set $S$ of vertices in a graph $G$, a \emph{common neighbour} of $S$ is a vertex in $V(G)-S$ that is adjacent to at least two vertices in $S$. A \emph{common neighbour} of an edge $vw$ is a common neighbour of $\{v,w\}$. Common neighbours are important because of the following observation.
\begin{observation}
\obslabel{DegreeChange}
Let $vw$ be an edge of a graph $G$ with $p$ common neighbours. Let $H$ be the graph obtained from $G$ by contracting $vw$ into a new vertex $x$. Then
$$\deg_H(x)=\deg_G(v)+\deg_G(w)-p-2.$$
For every common neighbour $y$ of $vw$,
$$\deg_H(y)=\deg_G(y)-1.$$
For every other vertex $z$ of $H$,
$$\deg_H(z)=\deg_G(z).$$
\end{observation}
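The observation is easy to confirm on a concrete graph. The following Python sketch is our own illustration (not from the paper): it contracts one edge of the octahedron $K_{2,2,2}$ and checks all three degree formulas.

```python
def contract_edge(adj, v, w):
    """Contract edge vw of a simple graph, given as a dict of
    adjacency sets; the merged vertex is labelled (v, w)."""
    assert w in adj[v]
    x = (v, w)
    new = {u: (nbrs - {v, w}) for u, nbrs in adj.items() if u not in (v, w)}
    new[x] = (adj[v] | adj[w]) - {v, w}
    for u in new[x]:        # reattach former neighbours of v or w to x
        new[u].add(x)
    return new

# the octahedron K_{2,2,2} with colour classes {0,1}, {2,3}, {4,5}
parts = [{0, 1}, {2, 3}, {4, 5}]
adj = {v: {u for P in parts if v not in P for u in P} for v in range(6)}

v, w = 0, 2
common = adj[v] & adj[w]                  # p = 2 common neighbours: {4, 5}
H = contract_edge(adj, v, w)
deg_G = {u: len(adj[u]) for u in adj}
deg_H = {u: len(H[u]) for u in H}

assert deg_H[(v, w)] == deg_G[v] + deg_G[w] - len(common) - 2
assert all(deg_H[y] == deg_G[y] - 1 for y in common)
assert all(deg_H[z] == deg_G[z] for z in H
           if z != (v, w) and z not in common)
```

Here $\deg_G(v)=\deg_G(w)=4$ and $p=2$, so the merged vertex has degree $4+4-2-2=4$, while the two common neighbours drop from degree $4$ to $3$.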
\begin{proposition}
\proplabel{CommonNeighbour}
For every graph $G\in\DD{k}$, every edge $vw$ of $G$ has a low-degree common neighbour.
\end{proposition}
\begin{proof}
If $k=1$ then $G=K_3$ and the result is trivial. Now assume that $k\geq 2$.
Suppose on the contrary that for some edge $vw$ of $G$, every common neighbour of $vw$ (if any) is high-degree. By \lemref{BasicDegree}, at least one of $v$ and $w$ is low-degree (with degree $k+1$). Thus $v$ and $w$ have at most $k$ common neighbours. Let $u_1,\dots,u_p$ be the common neighbours of $v$ and $w$, where $0\leq p\leq k$.
Let $H$ be the graph obtained from $G$ by contracting $vw$ into a new vertex $x$. The degree of each vertex of $G$ is unchanged in $H$, except for $v$, $w$, and each $u_i$. Since $\deg_G(u_i)\geq k+2$, we have $\deg_H(u_i)\geq k+1$. By \obsref{DegreeChange},
\begin{equation*}
\deg_H(x)=\deg_G(v)+\deg_G(w)-p-2
\geq2(k+1)-p-2
=2k-p\enspace.
\end{equation*}
Thus if $p\leq k-1$ then $\deg_H(x)\geq k+1$ and $H$ is a proper minor of $G$ with minimum degree at least $k+1$, implying $G\not\in\DD{k}$.
Otherwise $p=k$, implying both $v$ and $w$ are low-degree vertices whose only neighbours are each other and the high-degree vertices $u_1,\dots,u_k$. Let $J$ be the graph obtained from $G$ by contracting $v,w,u_1$ into a new vertex $y$. Since each neighbour of $v$ is high-degree and each neighbour of $w$ is high-degree, if a vertex (other than $v,w,u_1$) is adjacent to at least two of $v,w,u_1$ then it is high-degree. Since no two high-degree vertices are adjacent, the only vertices (other than $v,w,u_1$) that are adjacent to at least two of $v,w,u_1$ are $u_2,\dots,u_k$. Thus every vertex in $J$ (possibly except $y$) has degree at least $k+1$. Now $u_1$ has at least $k$ neighbours in $G$ outside of $\{v,w,u_2,\dots,u_k\}$. Thus $\deg_J(y)\geq k+(k-1)\geq k+1$, and $J$ is a proper minor of $G$ with minimum degree at least $k+1$, implying $G\not\in\DD{k}$.
\end{proof}
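For the two graphs in \DD{3} the proposition is easy to confirm directly. The sketch below is our own check (not from the paper): it verifies that every edge of $K_5$ and of $K_{2,2,2}$ has a common neighbour of degree $k+1=4$.

```python
def has_low_common_neighbours(adj, k):
    """Does every edge have a common neighbour of degree exactly k+1?"""
    deg = {v: len(nb) for v, nb in adj.items()}
    return all(any(deg[x] == k + 1 for x in adj[u] & adj[v])
               for u in adj for v in adj[u])

# K_5: 4-regular, every edge has three common neighbours
k5 = {v: set(range(5)) - {v} for v in range(5)}
# K_{2,2,2}: 4-regular, every edge has exactly two common neighbours
parts = [{0, 1}, {2, 3}, {4, 5}]
oct_ = {v: {u for P in parts if v not in P for u in P} for v in range(6)}

assert has_low_common_neighbours(k5, 3)
assert has_low_common_neighbours(oct_, 3)
```

Both graphs are $(k+1)$-regular, so every vertex is low-degree and any common neighbour suffices; the check is only interesting for graphs in \DD{k} with high-degree vertices, such as those in \figref{MinDeg5Graphs}.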
The next result says that for graphs in \DD{k}, every sufficiently sparse connected induced subgraph has a common neighbour.
\begin{proposition}
For every graph $G\in\DD{k}$, for every connected induced subgraph $H$ of $G$ with $n$ vertices and $m\leq\ensuremath{\protect\tfrac{1}{2}}(k+1)(n-1)$ edges, there exists a vertex $x$ in $G-H$ adjacent to at least $\deg_G(x)-k+1\geq2$ vertices in $H$.
\end{proposition}
\begin{proof}
Suppose that for some connected induced subgraph $H$ with $n$ vertices and $m\leq\ensuremath{\protect\tfrac{1}{2}}(k+1)(n-1)$ edges, every vertex $x$ in $G-H$ is adjacent to at most $\deg_G(x)-k$ vertices in $H$. Let $G'$ be the graph obtained from $G$ by contracting $H$ into a single vertex $v$. The degree of every vertex $x$ in $G-H$ is at least $\deg_G(x)-(\deg_G(x)-k)+1=k+1$ in $G'$. Since $G$ has minimum degree $k+1$, we have
$$\deg_{G'}(v)
=\bracket{\sum_{w\in V(H)}\!\!\!\deg_G(w)}-2m
\geq n(k+1)-2m
\geq k+1.$$
Thus $G'$ is a proper minor of $G$ with minimum degree at least $k+1$. Hence $G\not\in\DD{k}$. This contradiction proves the result.
\end{proof}
\begin{corollary}
For every graph $G\in\DD{k}$, for every clique $C$ of $G$ with at most $k+1$ vertices, there exists a vertex in $V(G)-C$ adjacent to at least two vertices of $C$.
\end{corollary}
\section{Small Regular Graphs are in \DD{k}}
\seclabel{SmallRegularGraphs}
In this section we show that every $(k+1)$-regular graph with sufficiently few vertices is in \DD{k}. Moreover, the bound on the number of vertices is tight.
\begin{lemma}
\lemlabel{ManyTrianglesGraphs}
Let $G$ be a connected $(k+1)$-regular graph on $n$ vertices.
If every edge of $G$ is in at least $2n-2k-5$ triangles, then $G\in\DD{k}$.
\end{lemma}
\begin{proof}
By assumption, conditions (D1), (D3) and (D4) of \lemref{BasicDegree} are satisfied by $G$. Suppose on the contrary that $H$ is a contraction minor of $G$ with minimum degree at least $k+1$. Let $S$ be the set of vertices of $G$ that are incident to an edge contracted in the construction of $H$. Let $vw$ be one such edge. We have $|S|\leq2(n-|V(H)|)\leq2n-2k-4$. By assumption, there is a set $T$ of vertices of $G$ that are adjacent to both $v$ and $w$, and $|T|\geq 2n-2k-5\geq|S|-1$. Thus there is at least one vertex $x\in T-(S-\{v,w\})$, which is a vertex of $H$. Since $x$ is adjacent to both endpoints of the contracted edge $vw$, $\deg_H(x)\leq k$. This contradiction proves condition (D2) for $G$. \lemref{BasicDegree} implies that $G\in\DD{k}$.
\end{proof}
\begin{lemma}
\lemlabel{ManyTriangles}
For every $(k+1)$-regular graph $G$ on $n$ vertices, every edge $vw$ of $G$ is in at least $2k+2-n$ triangles.
\end{lemma}
\begin{proof}
Say $vw$ is in $t$ triangles. Thus $v$ and $w$ have $t$ common neighbours. Thus $v$ has $k-t$ neighbours not adjacent to $w$, and $w$ has $k-t$ neighbours not adjacent to $v$. Thus $n\geq 2+t+2(k-t)=2k+2-t$, implying $t\geq2k+2-n$.
\end{proof}
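Both triangle bounds are easy to tabulate on a concrete example. The sketch below is our own check (not from the paper): for the $4$-regular octahedron $K_{2,2,2}$, with $k=3$ and $n=6$, every edge lies in exactly $2k+2-n=2$ triangles, which meets the hypothesis $2n-2k-5=1$ of \lemref{ManyTrianglesGraphs}, consistent with $K_{2,2,2}\in\DD{3}$.

```python
def triangles_per_edge(adj):
    """Map each edge {u, v} to the number of triangles containing it,
    i.e. the number of common neighbours of u and v."""
    return {frozenset((u, v)): len(adj[u] & adj[v])
            for u in adj for v in adj[u] if u < v}

# K_{2,2,2}: (k+1)-regular with k = 3 on n = 6 vertices
parts = [{0, 1}, {2, 3}, {4, 5}]
adj = {v: {u for P in parts if v not in P for u in P} for v in range(6)}
k, n = 3, 6

counts = triangles_per_edge(adj)
assert len(counts) == 12                                  # 12 edges
assert all(t == 2 * k + 2 - n for t in counts.values())   # exactly 2 each
assert min(counts.values()) >= 2 * n - 2 * k - 5          # hypothesis holds
```

Note that the bound of \lemref{ManyTriangles} is vacuous once $n>2k+2$; it only bites in the dense regime $n<\frac{4}{3}(k+2)$ used in \thmref{RegularGraphs}.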
\begin{theorem}
\thmlabel{RegularGraphs}
Every $(k+1)$-regular graph $G$ with $n<\frac{4}{3}(k+2)$ vertices is in \DD{k}.
\end{theorem}
\begin{proof}
Every disconnected $(k+1)$-regular graph has at least $2k+4$ vertices. Since $n<2k+4$ we can assume that $G$ is connected. By \lemref{ManyTriangles}, every edge of $G$ is in at least $2k+2-n$ triangles. Now $2k+2-n\geq 2n-2k-5$ since $n\leq\frac{1}{3}(4k+7)$. Thus every edge of $G$ is in at least $2n-2k-5$ triangles. By \lemref{ManyTrianglesGraphs}, $G\in\DD{k}$.
\end{proof}
\thmref{RegularGraphs} is best possible in the following sense.
\begin{proposition}
\proplabel{TightRegularGraphs}
For all $k\equiv1\pmod{3}$ there is a $(k+1)$-regular graph $G$ on $n=\frac{4}{3}(k+2)$ vertices that is not in \DD{k}.
\end{proposition}
\begin{proof}
Let $p:=\frac{1}{3}(k+2)$. Then $p\in\ensuremath{\mathbb{Z}}$. Let $G$ be the graph whose complement $\overline{G}$ is the disjoint union of $K_{p,p}$ and $K_{p,p}$. Then $G$ has $4p=n$ vertices, and every vertex has degree $n-1-p=k+1$. Observe that $G$ contains a matching $M$ of $p$ edges (between the two $K_{p,p}$ subgraphs in $\overline{G}$), such that every vertex is adjacent to at least one endpoint of every edge in $M$. Contracting each edge in $M$ we obtain a $K_{3p}$-minor in $G$, which has minimum degree $k+1$. Thus $G\not\in\DD{k}$.
\end{proof}
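The construction in this proof can be carried out explicitly. The sketch below is our own code (not from the paper): it builds the case $p=2$, i.e.\ $k=4$ and $n=8$, checks that $G$ is $5$-regular, and contracts the matching $M$ to obtain $K_6$, a proper minor of minimum degree $k+1$.

```python
from itertools import combinations

p = 2
k, n = 3 * p - 2, 4 * p      # k = 4 (so k = 1 mod 3), n = (4/3)(k+2) = 8
A, B = set(range(p)), set(range(p, 2 * p))
C, D = set(range(2 * p, 3 * p)), set(range(3 * p, 4 * p))
V = A | B | C | D

# complement of G: K_{p,p} on (A,B) disjoint union K_{p,p} on (C,D)
comp = {frozenset((a, b)) for a in A for b in B} | \
       {frozenset((c, d)) for c in C for d in D}
E = {frozenset(e) for e in combinations(V, 2)} - comp

deg = {v: sum(1 for e in E if v in e) for v in V}
assert all(d == k + 1 for d in deg.values())       # G is (k+1)-regular

# contract the matching M = { {i, 2p+i} : i in A }, joining A to C
rep = {v: v - 2 * p if v in C else v for v in V}
Vq = {rep[v] for v in V}
Eq = {frozenset((rep[u], rep[v])) for e in E
      for u, v in [tuple(e)] if rep[u] != rep[v]}

# the quotient is the complete graph K_{3p}, of minimum degree k+1,
# so G has a proper minor of minimum degree k+1 and is not in DD_k
assert len(Vq) == 3 * p
assert len(Eq) == 3 * p * (3 * p - 1) // 2
```

The same script works for any $p\geq1$, i.e.\ for every $k\equiv1\pmod{3}$.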
\thmref{RegularGraphs} can be rewritten in terms of complements.
\begin{corollary}
If $G$ is an $r$-regular graph on $n\geq4r+1$ vertices, then $\overline{G}\in\DD{n-r-2}$.
\end{corollary}
\section{A Construction}
\seclabel{Construction}
We now describe how a graph in \DD{k+1} can be constructed from a graph in \DD{k}. Let $G^+$ be the graph obtained from a graph $G$ by adding one new vertex that is adjacent to each vertex of minimum degree in $G$. If $G\in\DD{k}$ then the vertices of minimum degree are the low-degree vertices.
\begin{lemma}
\lemlabel{Construction}
If $G\in\DD{k}$ then $G^+\in\DD{k+1}$.
\end{lemma}
\begin{proof}
Let $v$ be the vertex of $G^+-G$. Every low-degree vertex in $G$ has degree $k+1$, and thus has degree $k+2$ in $G^+$. Every high-degree vertex in $G$ has degree at least $k+2$, which is unchanged in $G^+$. By \propref{ManyLows}, $G$ has at least $k+2$ low-degree vertices. Thus $v$ has degree at least $k+2$ in $G^+$. Thus $G^+$ has minimum degree $k+2$. Suppose on the contrary that $G\in\DD{k}$ but $G^+\not\in\DD{k+1}$. Thus there is a proper minor $H$ of $G^+$ with minimum degree at least $k+2$. If $v$ is not in a branch set of $H$, then $H$ is a minor of $G$, implying $H$ has minimum degree at most $k+1$, which is a contradiction. Now assume that $v$ is in some branch set $B$ of $H$. (Think of $B$ simultaneously as a vertex of $H$ and as a set of vertices of $G^+$.)\ Now $H-B$ is a minor of $G$. If $H-B$ is $G$, then $B=\{v\}$ and $H$ is not a proper minor of $G^+$. Thus $H-B$ is a proper minor of $G$. Since $G\in\DD{k}$, $H-B$ has a vertex $X$ of degree at most $k$. Thus $X$ has degree at most $k+1$ in $H$, which is a contradiction.
\end{proof}
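The operation $G\mapsto G^+$ is straightforward to implement. The sketch below is our own illustration (the function name \texttt{g\_plus} is ours): it checks two instances consistent with the results above, namely $K_4^+\cong K_5$ (so $K_4\in\DD{2}$ yields $K_5\in\DD{3}$) and $K_{2,2,2}^+\cong K_{1,2,2,2}$, the graph shown in \secref{Basics} to lie in \DD{4}.

```python
def g_plus(adj):
    """G^+: add one new vertex adjacent to every minimum-degree vertex.
    Integer vertex labels assumed; the new vertex is max(adj)+1."""
    dmin = min(len(nb) for nb in adj.values())
    low = {v for v, nb in adj.items() if len(nb) == dmin}
    x = max(adj) + 1
    out = {v: set(nb) for v, nb in adj.items()}
    out[x] = set(low)
    for v in low:
        out[v].add(x)
    return out

# K_4 is regular, so every vertex is low-degree and K_4^+ = K_5
k4 = {v: set(range(4)) - {v} for v in range(4)}
k5 = g_plus(k4)
assert len(k5) == 5 and all(len(k5[v]) == 4 for v in k5)

# K_{2,2,2} is regular, so its ^+ is the join K_{1,2,2,2}
parts = [{0, 1}, {2, 3}, {4, 5}]
oct_ = {v: {u for P in parts if v not in P for u in P} for v in range(6)}
j = g_plus(oct_)
assert sorted(len(j[v]) for v in j) == [5, 5, 5, 5, 5, 5, 6]
```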
We also have a converse result.
\begin{lemma}
\lemlabel{AddVertex}
Let $S$ be a set of vertices in a graph $G\in\D{k}$. Let $G'$ be the graph obtained from $G$ by adding one new vertex $v$ adjacent to every vertex in $S$. If $G'\in\DD{k+1}$ then $S$ is the set of low-degree vertices in $G$.
\end{lemma}
\begin{proof}
Suppose that $G'\in\DD{k+1}$. If some low-degree vertex $x$ of $G$ is not in $S$, then $\deg_{G'}(x)=k+1$ and $G'\not\in\DD{k+1}$. Now assume that every low-degree vertex of $G$ is in $S$. Suppose on the contrary that some high-degree vertex $y$ of $G$ is in $S$.
Thus $\deg_G(y)\geq k+2$, implying $\deg_{G'}(y)\geq k+3$. By \propref{ManyLows} there are at least $k+2$ low-degree vertices of $G$, all of which are adjacent to $v$ in $G'$. Thus $\deg_{G'}(v)\geq k+3$. Hence $v$ and $y$ are adjacent vertices of degree at least $k+3$ in $G'$. Therefore $G'\not\in\DD{k+1}$ by \lemref{BasicDegree}. This contradiction proves that no high-degree vertex of $G$ is in $S$. Therefore $S$ is the set of low-degree vertices.
\end{proof}
Observe that \twolemref{Construction}{AddVertex} together prove \thmref{AddVertex}. \lemref{Construction} generalises as follows. For a non-negative integer $p$, let $G^{+p}$ be the graph obtained from a graph $G$ by adding $p$ independent vertices, each adjacent to every vertex in $G$.
\begin{lemma}
\lemlabel{GeneralConstruction}
Let $G$ be a $(k+1)$-regular $n$-vertex graph in \DD{k}.
Then $G^{+p}\in\DD{k+p}$ whenever $0\leq p\leq n-k-1$.
\end{lemma}
\begin{proof}
Fix $i$ with $0\leq i\leq p$. Every vertex of $G$ has degree $k+1+i$ in $G^{+i}$, and every vertex of $G^{+i}-G$ has degree $n$ in $G^{+i}$.
Thus, if $n>k+1+i$ then the vertices of minimum degree in $G^{+i}$ are exactly the vertices of $G$, and hence $G^{+i}=(G^{+(i-1)})^+$ whenever $1\leq i\leq n-k-1$.
By induction on $i$, applying \lemref{Construction} at each step, we conclude that $G^{+i}\in\DD{k+i}$ for each such $i$; in particular, $G^{+p}\in\DD{k+p}$.
\end{proof}
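The degree bookkeeping in this proof is easy to check mechanically. The following sketch (plain Python; the $8$-cycle and the parameters $k=1$, $p=2$ are illustrative choices, not taken from the text) builds $G^{+p}$ from a $(k+1)$-regular graph and verifies the degrees used above.

```python
def add_universal_vertices(adj, p):
    """Form G^{+p}: add p pairwise non-adjacent new vertices, each
    adjacent to every vertex of the original graph."""
    n = len(adj)
    new_adj = {v: set(nbrs) for v, nbrs in adj.items()}
    for i in range(p):
        x = n + i
        new_adj[x] = set(range(n))        # joined to all original vertices
        for v in range(n):
            new_adj[v].add(x)
    return new_adj

# G = C_8, the 8-cycle, is 2-regular, so k = 1 and n = 8.
k, n, p = 1, 8, 2                          # note p <= n - k - 1 = 6
G = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
Gp = add_universal_vertices(G, p)

# Every vertex of G has degree k+1+p in G^{+p}; new vertices have degree n.
assert all(len(Gp[v]) == k + 1 + p for v in range(n))
assert all(len(Gp[x]) == n for x in range(n, n + p))
# Since n > k+1+p, the minimum-degree vertices are exactly those of G.
assert {v for v in Gp if len(Gp[v]) == min(map(len, Gp.values()))} == set(range(n))
```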
\thmref{RegularGraphs} and \lemref{GeneralConstruction} imply:
\begin{corollary}
\corlabel{RegularGraphs}
Let $G$ be a $(k+1)$-regular graph with $n<\frac{4}{3}(k+2)$ vertices.
Then $G^{+p}\in\DD{k+p}$ whenever $0\leq p\leq n-k-1$.
\end{corollary}
\corref{RegularGraphs} implies:
\begin{lemma}
\lemlabel{Vida}
Let $L(G)$ denote the set of minimum degree vertices in a graph $G$.
Let $p:=|V(G)-L(G)|$. Suppose that \begin{itemize}
\item the minimum degree of $G$ is $k+1$, and
\item $|L(G)|<\frac{4}{3}(k+2-p)$, and
\item $V(G)-L(G)$ is an independent set of $G$, and
\item every vertex in $V(G)-L(G)$ dominates $L(G)$.
\end{itemize}
Then $G\in\DD{k}$.
\end{lemma}
\begin{proof}
Let $X$ be the subgraph of $G$ induced by the vertices of degree $k+1$.
Thus $X$ is $(r+1)$-regular, where $r=k-p$.
Say $X$ has $n$ vertices. By assumption,
$n<\frac{4}{3}(k+2-p)=\frac{4}{3}(r+2)$.
The high-degree vertices of $G$ have degree $n$, and the low-degree vertices of $G$ have degree $r+1+p$. Thus $n>r+1+p$. That is, $p<n-r-1$.
Thus, by \corref{RegularGraphs}, we have $G=X^{+p}\in\DD{r+p}=\DD{k}$.
\end{proof}
\section{Block Structure}
\seclabel{BlockStructure}
In this section we show that graphs in \DD{k} can have an arbitrary block decomposition tree\footnote{Let $G$ be a connected graph. Let $B$ denote the set of blocks of $G$ (that is, cut-edges and maximal $2$-connected subgraphs). Let $C$ denote the set of cut-vertices of $G$. The \emph{block decomposition tree of $G$} is the tree $T$ where $V(T)=B \cup C$, and $bc \in E(T)$ whenever the block $b$ contains $c$. A \emph{block decomposition tree} is a tree that is isomorphic to a block decomposition tree of some graph. The \emph{bipartition} of a tree $T$ is the partition of $V(T)$ obtained from a proper $2$-colouring of $T$. Since every cut-vertex is contained in at least two blocks, every leaf of a block decomposition tree $T$ belongs to the same bipartition class of $T$. Conversely, if a tree $T$ admits a bipartition of its vertices such that all leaves lie in the same bipartition class, then $T$ is a block decomposition tree.}. \twothmref{main}{main2} are the main results.
Note that every graph in \DD{k} has no cut-edge (except $K_2$), since a cut-edge can be contracted without decreasing the minimum degree.
A \emph{low-high tree} is a tree $T$ that admits a bipartition $V(T)=\ensuremath{V_{\ell}} \cup \ensuremath{V_{h}}$, such that every vertex in \ensuremath{V_{\ell}}\ has degree at most $2$, and every vertex in \ensuremath{V_{h}}\ has degree at least $2$. Vertices in $\ensuremath{V_{\ell}}$ are called \emph{low}, and vertices in $\ensuremath{V_{h}}$ are called \emph{high}. Since every leaf in a low-high tree is low, every low-high tree is a block decomposition tree.
In the following discussion, let $T$ be a low-high tree.
Let $L$ be the set of leaves in $T$.
Let $r$ be an arbitrary high vertex of $T$, called the \emph{root}.
For each edge $vw \in E(T)$, let $\mathop{{\rm dist}}(r,vw):=\min\{\mathop{{\rm dist}}(r,v),\mathop{{\rm dist}}(r,w)\}$.
Let $B$ be the set of edges of $T$ at even distance from $r$.
Call these edges \emph{blue}.
Similarly let $R:=E(T)-B$ be the set of \emph{red} edges in $T$.
Since $r$ is high and each leaf is low, each leaf is at odd distance from $r$. Thus each edge incident with a leaf is blue.
\begin{lemma}
\lemlabel{p1}
The number of blue edges $|B|$ and the number of red edges $|R|$ do not depend on the choice of $r$.
\end{lemma}
\begin{proof}
It is enough to show that $|B|$ and $|R|$ do not change if we choose an alternative root $r'$ at distance $2$ from $r$. Let $R'$ and $B'$ be the sets of red and blue edges with respect to $r'$. Let $x$ be the common neighbour of $r$ and $r'$. Thus $rx \in B- B'$ and $r'x \in B' - B$. Apart from these edges, $B$ and $B'$ do not differ. Hence $|B|=|B'|$, and also $|R|=|R'|$.
\end{proof}
Define
\begin{equation*}
d:= 4|L|+2|R|\enspace.
\end{equation*}
Since $T$ has at least two leaves, $d\geq8$.
By \lemref{p1}, $d$ does not depend on the choice of $r$.
For each edge $e=vw$ of $T$ such that $\mathop{{\rm dist}}(r,v)=\mathop{{\rm dist}}(r,w)-1$, let $T_e$ be the maximal rooted subtree of $T$ that contains the edge $vw$ but no neighbour of $v$ other than $w$.
Define the function $\varphi: E(T) \rightarrow \ensuremath{\mathbb{N}}$ as follows.
For each blue edge $e$ in $T$, define
\begin{equation}
\eqnlabel{F1F2}
\varphi(e):=4|L\cap V(T_e)|+2|R\cap E(T_e)|\enspace.
\end{equation}
Now consider a red edge $vw$ in $T$ with $\mathop{{\rm dist}}(r,v)=\mathop{{\rm dist}}(r,w)-1$. Thus $\mathop{{\rm dist}}(r,v)$ is odd, $v$ is low, and $\deg(v)=2$. Let $uv$ be the blue edge incident to $v$. Define
\begin{equation}
\eqnlabel{F3}
\varphi(vw):= d+2-\varphi(uv)
\enspace.
\end{equation}
\Figure{Tree}{\includegraphics{Tree}}{An example of the edge labelling $\varphi$ with $|R|=6$ and $|B|=14$ and $|L|=8$ and $d=2\cdot 6+4\cdot 8=44$. Red edges are drawn thick.}
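The labelling $\varphi$ can also be computed mechanically from its two defining equations. The following sketch (plain Python; the path $P_5$ with low class $\{1,3,5\}$ and high class $\{2,4\}$ is our own illustrative low-high tree, not the one in the figure) checks that the $\varphi$ values sum to $d+2$ around each low vertex of degree $2$ and to $d$ around each high vertex.

```python
from collections import deque

def phi_labelling(edges, root):
    """Compute d and the labelling phi of a low-high tree, rooted at a
    high vertex, following the definitions in the text."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    dist = {root: 0}
    queue = deque([root])
    while queue:                            # BFS distances from the root
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    parent_child = {e: e if dist[e[0]] < dist[e[1]] else (e[1], e[0])
                    for e in edges}
    red = {e for e in edges if dist[parent_child[e][0]] % 2 == 1}
    leaves = {v for v in adj if len(adj[v]) == 1}
    d = 4 * len(leaves) + 2 * len(red)

    def below(e):                           # vertices of T_e below the parent
        verts, stack = set(), [parent_child[e][1]]
        while stack:
            u = stack.pop()
            verts.add(u)
            stack.extend(w for w in adj[u] if dist[w] == dist[u] + 1)
        return verts

    phi = {}
    for e in edges:
        if e not in red:                    # blue edges: first equation
            verts = below(e)
            phi[e] = (4 * len(verts & leaves)
                      + 2 * sum(1 for f in red if parent_child[f][1] in verts))
    for e in red:                           # red edges: second equation
        v, w = parent_child[e]              # v is low with degree 2
        (u,) = adj[v] - {w}
        blue = next(f for f in edges if set(f) == {u, v})
        phi[e] = d + 2 - phi[blue]
    return phi, d

# P_5 with vertices 1..5; leaves 1 and 5; root at the high vertex 2.
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]
phi, d = phi_labelling(edges, root=2)
assert d == 10 and phi == {(1, 2): 4, (2, 3): 6, (3, 4): 6, (4, 5): 4}
assert phi[(2, 3)] + phi[(3, 4)] == d + 2   # sum at the low vertex 3
assert phi[(1, 2)] + phi[(2, 3)] == d       # sum at the high root 2
assert phi[(3, 4)] + phi[(4, 5)] == d       # sum at the high vertex 4
```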
The next lemma immediately follows from \eqnref{F3}.
\begin{lemma}
\lemlabel{p3}
If $v$ is a low vertex of degree $2$ and $v$ is incident with
edges $e$ and $f$, then $\varphi(e)+\varphi(f)=d+2$.
\end{lemma}
The sum of $\varphi$ values around a high vertex is also constant.
\begin{lemma}
\lemlabel{p4}
Let $v$ be a high vertex and let $E_v$ be the set of edges incident with $v$. Then $\sum_{e \in E_v} \varphi(e)=d$.
\end{lemma}
\begin{proof}
First suppose that $v=r$. Then
\begin{align*}
\sum_{rx\in E_r}\varphi(rx)
\;=\;\sum_{rx\in E_r}\big(4|L\cap V(T_{rx})|+2|R\cap E(T_{rx})|\big)
\;=\;4|L\cap V(T)|+2|R\cap E(T)|
\;=\;d\enspace.
\end{align*}
Now assume that $v\neq r$. Since $v$ is high, $\mathop{{\rm dist}}(v,r)$ is even, and $v$ is incident to one red edge $uv$ (where $u$ is the neighbour of $v$ closer to $r$ than $v$). Thus $u$ is low, and $\deg(u)=2$. Let $t$ be the other neighbour of $u$.
Let $e_1,\dots,e_k$ be the blue edges incident to $v$.
Then
\begin{align*}
\sum_{e \in E_v} \varphi(e)
\;=\;&\varphi(uv)+\sum_{i=1}^k\varphi(e_i)\\
\;=\;&d+2-\varphi(tu)+\sum_{i=1}^k\varphi(e_i)\\
\;=\;&d+2-4|L\cap V(T_{tu})|-2|R\cap E(T_{tu})|
+\sum_{i=1}^k\big(4|L\cap V(T_{e_i})|+2|R\cap E(T_{e_i})|\big)\enspace.
\end{align*}
Observe that $L\cap V(T_{tu})=\bigcup_i\big(L\cap V(T_{e_i})\big)$
and $R\cap E(T_{tu})-\bigcup_i\big(R\cap E(T_{e_i})\big)=\{uv\}$.
Thus
$$\sum_{e \in E_v} \varphi(e)\;=\;d+2-2=d\enspace.$$
\end{proof}
Observe that, in principle, the definition of $\varphi$ depends on the choice of $r$. However, this is not the case.
\begin{lemma}
\lemlabel{p5}
Let $r$ and $r'$ be high vertices of $T$, and let $\varphi$ and $\varphi'$ be the functions defined above using $r$ and $r'$ as roots, respectively. Then $\varphi=\varphi'$.
\end{lemma}
\begin{proof}
Since $T$ is connected, it is enough to show that $\varphi=\varphi'$ whenever $\mathop{{\rm dist}}(r,r')=2$.
Let $x$ be the common neighbour of $r$ and $r'$.
Let $B'$ be the set of blue edges with respect to $r'$.
Now $B$ and $B'$ (as well as $R$ and $R'$) differ only in $rx$ and $r'x$.
Since \eqnref{F1F2} only considers $\varphi$ and $\varphi'$ values of blue edges away from the root, $\varphi(e)=\varphi'(e)$ for each $e \in B\cap B'$.
Since each edge incident with $r$ or $r'$ apart from $rx$ and $r'x$ is in $B \cap B'$, and since $d$ is invariant, \eqnref{F3} shows that $\varphi$ and $\varphi'$ match on every edge in $R \cap R'$. Finally, \lemref{p4} implies that $\varphi$ and $\varphi'$ also match on the two remaining edges $rx$ and $r'x$.
\end{proof}
\begin{lemma}
\lemlabel{p6}
$\varphi(e) \ge 4$ for every edge $e \in E(T)$.
\end{lemma}
\begin{proof}
While the colour of an edge $e$ may depend on the choice of $r$, \lemref{p5} says that $\varphi(e)$ does not depend on the choice of $r$. Every edge can be made blue for an appropriate choice of $r$, and $\varphi(e)\geq4$ for every blue edge $e$ by \eqnref{F1F2}.
\end{proof}
And now for something completely different.
Let $e=u_1u_2$ and $f=u_3u_4$ be two independent edges in the complete graph
$K_{d+1}$, where $d \ge 4$. The \emph{single-horned graph} $G_{d,4}$ is obtained from $K_{d+1}$ by adding a new vertex $x$, connecting $x$ to $u_1,u_2,u_3,$ and $u_4$ and
removing edges $e$ and $f$.
Observe that $\deg(x)=4$. Call $x$ the \emph{horn} of $G_{d,4}$. Call
the remaining vertices the \emph{original vertices} of $G_{d,4}$, which
all have degree $d$.
Let $a,b \ge 4$ be even integers such that $d=a+b-2$.
Choose matchings $M_a$ and $M_b$ with $\frac{a}{2}$ and $\frac{b}{2}$ edges, respectively, that together cover all the vertices of $K_{d+1}$.
Hence $M_a$ and $M_b$ share exactly one vertex.
Take two new vertices $x_a$ and $x_b$, and join $x_a$ to every vertex covered by $M_a$ and $x_b$ to every vertex covered by $M_b$. Next delete the edges of $M_a$ and $M_b$.
The resulting graph is called the \emph{double-horned graph} $G_{d,a,b}$. As above, $x_a$ and $x_b$ are called the \emph{horns} of $G_{d,a,b}$, and the remaining vertices, all of degree $d$, are the \emph{original vertices}.
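A minimal sketch of the two constructions (plain Python; the vertex numbering and the concrete matchings are our own illustrative choices) confirms the stated degrees.

```python
def single_horned(d):
    """G_{d,4}: remove two independent edges of K_{d+1} and add a horn x
    joined to their four endpoints (d >= 4)."""
    V = range(d + 1)
    adj = {v: set(V) - {v} for v in V}
    x = d + 1
    for u, v in [(0, 1), (2, 3)]:          # the removed edges e and f
        adj[u].discard(v)
        adj[v].discard(u)
    adj[x] = {0, 1, 2, 3}
    for v in adj[x]:
        adj[v].add(x)
    return adj, x

def double_horned(a, b):
    """G_{d,a,b} with d = a+b-2 and a, b >= 4 even: horns x_a, x_b replace
    matchings M_a (covering 0..a-1) and M_b (covering a-1..d)."""
    d = a + b - 2
    V = range(d + 1)
    adj = {v: set(V) - {v} for v in V}
    M_a = [(i, i + 1) for i in range(0, a, 2)]
    M_b = [(i, i + 1) for i in range(a - 1, d, 2)]   # shares vertex a-1 with M_a
    x_a, x_b = d + 1, d + 2
    for x, M in [(x_a, M_a), (x_b, M_b)]:
        adj[x] = {v for e in M for v in e}
        for u, v in M:
            adj[u].discard(v)
            adj[v].discard(u)
        for v in adj[x]:
            adj[v].add(x)
    return adj, x_a, x_b

adj, x = single_horned(9)
assert len(adj[x]) == 4                              # the horn has degree 4
assert all(len(adj[v]) == 9 for v in range(10))      # originals have degree d

adj, x_a, x_b = double_horned(4, 6)                  # d = 8
assert len(adj[x_a]) == 4 and len(adj[x_b]) == 6     # horns have degrees a, b
assert all(len(adj[v]) == 8 for v in range(9))       # originals have degree d
```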
Let $e=uv$ be an edge in a single- or double-horned graph $G$.
If $u$ or $v$ is a horn in $G$, then the vertex of $G/e$ obtained by contracting $e$ is a \emph{horn} in $G/e$; otherwise it is \emph{original}. Inductively, this defines horns and original
vertices for every contraction minor of a horned graph.
\begin{lemma}
\lemlabel{contr}
Let $G'$ be a proper contraction minor of a horned graph $G_{d,4}$ or $G_{d,a,b}$. If $G'$ contains an original vertex, then some original vertex of $G'$ has degree less than $d$.
\end{lemma}
\begin{proof}
We shall leave the proof for $G_{d,4}$ to the keen reader.
Let $G$ be the double-horned graph $G_{d,a,b}$, and let $F \subseteq E(G)$ be such
that $G/F =G'$. If $|F| \ge 3$, then $G'$ has at most $d$ vertices,
and \emph{all} its vertices have degree less than $d$. Now assume that $|F| \le 2$.
Let $e=uv$ be an edge connecting a pair of original vertices.
There are $a+b-1\geq 7$ original vertices in $G$, and at least
three original vertices are adjacent to both $u$ and $v$.
Thus $G/e$ has at least three original vertices of degree less than $d$, and these cannot all be eliminated by a single additional contraction.
Hence every edge in $F$ is incident with a horn. Let $e\in F$ and let $x$ be a horn incident with $e$. At least two neighbours of $x$ (which are original vertices) have degree less than $d$ in $G/e$; yet, by the above argument, the edge between them cannot be contracted.
\end{proof}
We are now ready to state the first theorem of this section.
\begin{theorem}
\thmlabel{main}
For every low-high tree $T$, there is an integer $d$ and a graph $G$ such that:
\begin{enumerate}
\item[{\rm (G1)}] $G$ is $d$-regular,
\item[{\rm (G2)}] $T$ is the block decomposition tree of $G$,
\item[{\rm (G3)}] $8\leq d \le 4 |E(T)|$, and
\item[{\rm (G4)}] $G \in \DD{d+1}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Adopt the above notation. Let $d:= 4|L| + 2|R|$. By construction, $d\geq8$.
Note that $d \le 4 |E(T)|$ with equality only if $T$ is a star.
For every leaf $u$ of $T$, let $G_u$ be a copy of the single-horned graph $G_{d,4}$.
For every non-leaf low vertex $v$ of $T$ incident with edges $e$ and $f$,
let $G_v$ be a copy of the double-horned graph $G_{d,a,b}$, where $a:=\varphi(e)$ and $b:=\varphi(f)$. Note that $a,b\geq4$ by \lemref{p6}.
Observe that the horns of the graphs above naturally correspond to the edges of $T$, and the degree of each horn equals the $\varphi$ value of the corresponding edge.
As illustrated in \figref{Example}, identifying horns wherever the edges in $T$ have a common (high) end-vertex gives rise to a $d$-regular graph $G$ (by \lemref{p4}). Hence $G$ satisfies (G1), (G2) and (G3).
\Figure{Example}{\includegraphics[width=\textwidth]{Example}}{The graph $G$ produced from the given low-high tree with $d=4\cdot 4+2\cdot2=20$. Shaded regions represent cliques minus the dashed matchings.}
Since $G$ is connected and $d$-regular, \lemref{BasicDegree} implies that to establish (G4) it suffices to show that every proper contraction minor of $G$ has a vertex of degree less than $d$. Suppose on the contrary that there is a proper contraction minor $G'=G/E'$ of $G$ with $\delta(G') \ge d$. Take such a $G'$ with the minimum number of vertices. Thus $G'$ has no cut-edges, since contracting a cut-edge does not decrease $\delta$ (since $G'\not\cong K_2$).
Let $H$ be an arbitrary block of $G$ and consider $H/E'$.
Suppose that $H/E'$ is not contracted to a single vertex.
Now $H/E'\not\cong K_2$ (as this would either be a
nonexistent cut-edge in $G'$ or would imply that $G'$ has a vertex of degree 1
which is also absurd).
But if $H/E'$ has at least three vertices and $H/E'$ is a proper minor
of $H$, then by \lemref{contr}, $H/E'$ has an original vertex of degree less than $d$.
Hence $H/E'$ is either a single vertex or $H$ is left intact by the contraction.
So we may assume that $G'$ is obtained by shrinking several blocks of
$G$ to single vertices. We may assume that $G'$ is obtained by first
contracting $k_i \ge 0$ inner blocks of $G$, and later contracting
$k_e \ge 0$ end-blocks of $G$, where $k_i+k_e \ge 1$.
Let $G^*$ be the graph obtained after contracting the inner blocks.
Now $k_i > 0$, as otherwise $G'$ is isomorphic to a proper subgraph of $G$, which has a vertex of degree less than $d$ since $G$ is $d$-regular.
By shrinking $k_i$ inner blocks we have reduced the number of
cut-vertices by $k_i$, and also reduced the sum of their
degrees by $k_i(d+2)$; see~\twolemref{p3}{p4}. Hence $G^*$ has at least one
\emph{cut-vertex} $v$ of degree less than $d$, and since $G' \ne G^*$, at least one
contraction of an end-block follows.
Finally, contracting an end-block cannot increase $\deg(v)$.
This contradiction completes the proof of (G4).
\end{proof}
We now prove that minor-minimal minimum-degree graphs can have arbitrary block structure.
\begin{theorem}
\thmlabel{main2}
For every block decomposition tree $T$,
there is an integer $d$ and a graph $G$ such that
\begin{enumerate}
\item[{\rm (H1)}] $T$ is the block decomposition tree of $G$,
\item[{\rm (H2)}] $\delta(G) \le 8 |E(T)|$, and
\item[{\rm (H3)}] $G \in \DD{d+1}$ where $d=\delta(G)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $V_c \cup V_b$ be the bipartition of $V(T)$, such that every leaf of $T$ is in $V_b$. Let $H_b$ denote the set of vertices in $V_b$ with degree at least $3$ in $T$.
Thus $T$ is low-high if and only if $H_b=\emptyset$.
By \thmref{main} we may assume that $T$ is not low-high, and $H_b\neq\emptyset$. Choose an arbitrary vertex $x \in H_b$.
Let $T'$ be the tree obtained from $T$ by subdividing each edge that is incident with a vertex in $H_b$ once. Each such subdivision vertex and each vertex in $V_b-H_b$ has degree at most $2$ in $T'$. Each vertex in $V_c \cup H_b$ has degree at least $2$ in $T'$. Thus $T'$ is low-high. In particular, $x$ is a high vertex of $T'$.
Now $|E(T')| < 2 |E(T)|$ since at least one edge of $T$ is incident with a leaf and did not get subdivided in the construction of $T'$. By \thmref{main} there exists an integer $d' \le 4 |E(T')| < 8 |E(T)|$ and a $d'$-regular graph $G' \in \DD{d'+1}$ such that $T'$ is the block decomposition tree of $G'$. In order to keep the arguments below as simple as possible, assume that $G'$ \emph{is} the graph obtained by the construction in the proof of \thmref{main}. Observe that every block of $G'$ contains at least $12$ vertices, since $T$ has at least one vertex in $H_b$. Note that the cut-vertices of $G'$ come in two flavours: ones that correspond to vertices of $V_c$, and ones that correspond to vertices of $H_b$. Similarly, every non-cut-vertex of $G'$ corresponds to a vertex of $V_b- H_b$.
Now define a partition of $V(G')$ into bags $\{B_y : y \in V_b\}$ labelled by vertices $V_b$, satisfying the following conditions:
\begin{enumerate}
\item[(C1)] for every $y \in H_b$ the bag $B_y$ contains the cut-vertex $c$ that corresponds to $y$ as well as the interior vertices of every block that contains $c$,
\item[(C2)] for every $y \in V_b- H_b$ the bag $B_y$ contains every interior vertex of the block that corresponds to $y$.
\end{enumerate}
We have so far partitioned every vertex of $G'$ that is not a cut-vertex corresponding to a vertex in $V_c$.
\begin{enumerate}
\item[(C3)] if $c$ is a cut-vertex corresponding to a vertex of $V_c$, then let $c_x$ be its neighbour on some shortest $c$--$x$ path in $G'$, and put $c$ in the bag that already contains $c_x$.
\end{enumerate}
Observe that every block of $G'$ contains $d'+1$ interior vertices, hence
every bag $B_y$ contains at least $d'+1$ vertices.
Finally we obtain $G$ from $G'$ by adding for each bag $B_y$ of $G'$ a new vertex $\tilde{y}$ which is made adjacent to every vertex of its bag
$B_y$. Now $G'$ is a subgraph of $G$, every $v \in V(G')$ has degree exactly $d'+1$ in $G$, and every new vertex has degree at least $d'+1$. Call this process \emph{bag extension} and let $d:=d'+1$.
Now $G$ contains two types of blocks: \emph{small blocks} that contain
interior vertices of exactly one block of $G'$, and \emph{big blocks} that contain interior vertices of several blocks of $G'$. Observe that
every big block $B$ contains a separating set of size two consisting of its new vertex and a vertex from $H_b$.
Let $B'$ (respectively, $B$) be an end-block of $G'$ (respectively, $G$), and let $c$ be a cut-vertex that separates $B'$ ($B$) from the rest of $G'$ ($G$).
By the construction of $G'$ there are exactly four edges incident with
$c$ whose other end-vertex is in $B'$ ($B$).
Let $e$ be an arbitrary edge of $G$ that is not one of the four edges incident to some cut-vertex of an end-block. Assume that $e$ belongs to block $B$ of $G$. Then there are at least six vertices of degree $d$ in $G$ that are all adjacent to both end-vertices of $e$. This implies that $G/e$ contains at least six vertices of degree less than $d$, and no contraction of an additional two edges of $B$ can eliminate all the vertices of degree less than $d$.
Observe that an end-block of $G$ contains exactly $d+2$ vertices, and every other small block contains exactly $d+3$ vertices. Every big block, on the other hand, contains a separating pair consisting of its new vertex and a cut-vertex of $G'$ corresponding to a vertex in $H_b$.
It remains to prove that $G \in \DD{d+1}$. Since every edge has an end-vertex of degree $d$, no edge-deleted subgraph of $G$ has minimum degree at least $d$.
Hence we only have to consider contraction minors of $G$.
Let $F \subseteq E(G)$ be a nonempty edge set and let $G^*=G/F$.
We may split $F =F' \cup F^*$ so that $F' \subseteq E(G')$.
A block $B$ of $G$ may either get contracted to a single vertex, get partially contracted, or survive the contraction of $F$ without changes.
First assume that $B/F$ gets partially contracted. If $B$ is an end-block, then $B/F$ has exactly $d+1$ vertices, obtained by contracting a single edge. This is not possible, as a vertex of degree less than $d$ would be created.
If $B$ is any other small block, then contracting any edge of $B$ leaves at least six vertices of degree less than $d$ in $B$. Since $B$ has $d+3$ vertices initially, an additional two contractions decrease the vertex count below $d+1$, which is absurd.
Let $B$ be a big block that gets partially contracted.
If contraction identifies the new vertex $n$ of $B$ and
a cut-vertex $c$ of $G'$ corresponding to a vertex in $H_b$ then
$B/{nc}$ contains at least six vertices of degree less than $d$ in \emph{every}
block $B'$ of $G'$ that is a subgraph of $B$.
Since $B'$ contains $d+2$ vertices, $B'/F$ is trivial for every
$B' \subseteq B$, which is nonsense.
Otherwise assume that $B' \subseteq B$ is a block of $G'$ that gets partially contracted. The $d+1$ interior vertices of $B'$ are separated from the rest of $G$ by three vertices. This implies that at most three edges are contracted in order to contract $B'$ partially. Yet a single contraction produces six vertices of degree less than $d$ in $B'$, so that an additional two contractions do not suffice.
Hence no block of $G$ gets partially contracted in $G/F$. Now $G/F$ may be obtained from $G'/F$ by bag extension, where $G'/F$ is a contraction of $G'$ that contracts each block of $G'$ to a single vertex or leaves it intact.
In this case, $G'/F$ contains a vertex of degree less than $d'$, and
bag extension can only increase its degree by one. This completes the proof of \thmref{main2}.
\end{proof}
\begin{open}
By the Robertson-Seymour graph minor theorem, every graph in \DD{k} has at most $f(k)$ vertices, for some function $f$. It would be interesting to obtain a simple proof of this result, and to obtain good bounds on $f$.\\
By \thmref{main} with $T=K_{1,s}$, there is a graph $G\in\DD{4s+1}$ with $1+s(4s+1)$ vertices. Does every graph in \DD{k} have $O(k^2)$ vertices? \\
By \thmref{main} with $T=P_{2s+1}$, there is a graph $G\in\DD{2s+7}$ with diameter $2s$. Does every graph in \DD{k} have diameter $O(k)$?
\end{open}
\section{Complete Multipartite Graphs}
\seclabel{CompleteMultipartite}
This section characterises the complete multipartite graphs in \DD{k}, in \CC{k}, in \TT{k}, and in \WW{k}. See \citep{Lucena-DAM07, Chleb-DAM02, Rama-SJDM97} for other results on treewidth obstructions. We first prove three lemmas about complete multipartite graphs. The first says that complete multipartite graphs are highly connected.
\begin{lemma}
\lemlabel{CMGconnected}
Every complete multipartite graph $G$ with minimum degree $k$ is $k$-connected. Moreover, if $vw$ is an edge of $G$ such that both $v$ and $w$ have degree at least $k+1$, then $G-vw$ is $k$-connected.
\end{lemma}
\begin{proof}
Let $x$ and $y$ be distinct vertices in $G$. It suffices to prove that there is a set of $k$ internally disjoint paths between $x$ and $y$ that avoid $vw$. Let $R$ be the set of vertices coloured differently from both $x$ and $y$.
First suppose that $x$ and $y$ have the same colour. Then $\deg(x)=\deg(y)\geq k$, and $P:=\{xry:r\in R\}$ is a set of $\deg(x)$ internally disjoint paths between $x$ and $y$. If $vw$ is in some path in $P$, then without loss of generality $v=x$, implying $\deg(x)\geq k+1$, and at least $k$ paths in $P$ avoid $vw$.
Now assume that $x$ and $y$ are coloured differently. Let $S:=\{x_1,x_2,\dots,x_p\}$ be the colour class that contains $x$, where $x=x_p$. Let $T:=\{y_1,y_2,\dots,y_q\}$ be the colour class that contains $y$, where $y=y_q$. Without loss of generality, $n-p=\deg(x)\leq\deg(y)=n-q$, implying $q\leq p$. Thus $$P:=\{xy\}\cup\{xry:r\in R\}\cup\{xy_ix_iy:i\in[q-1]\}$$ is a set of $\deg(x)$ internally disjoint paths between $x$ and $y$. If $\deg(x)\geq k+1$ then at least $k$ paths in $P$ avoid $vw$. Now assume that $vw$ is in some path in $P$, but $\deg(x)=k$. Since each vertex $x_i$ has the same degree as $x$, and $v$ and $w$ both have degree at least $k+1$, the only possibility is that $v=y$ and $w=r$ for some $r\in R$ (or symmetrically $w=y$ and $v=r$). Thus $\deg(x)<\deg(y)$ and $q<p$. Replace the path $xry$ in $P$ by the path $xrx_{p-1}y$, which is internally disjoint from the other paths in $P$.
\end{proof}
\begin{lemma}
\lemlabel{CMGequal}
Let $G$ be a complete multipartite graph on $n$ vertices. Then
$$\kappa(G)=\delta(G)=\tw{G}=\pw{G}=n-\alpha(G).$$
\end{lemma}
\begin{proof}
The degree of a vertex $v$ equals $n$ minus the size of the colour class that contains $v$. Since every independent set is contained within a colour class, the size of the largest colour class equals $\alpha(G)$. Thus $\delta(G)=n-\alpha(G)$. We have $\kappa(G)\leq\delta(G)\leq\tw{G}\leq\pw{G}$ for every graph $G$; see \citep{Bodlaender-TCS98,Diestel00}. By \lemref{CMGconnected}, $\kappa(G)\geq\delta(G)$. Thus it suffices to prove that $\delta(G)\geq\pw{G}$ for every complete multipartite graph $G$. Let $S=\{v_1,\dots,v_{\alpha(G)}\}$ be a largest colour class in $G$. Let $X:=V(G)-S$. Observe that $(X\cup\{v_1\},X\cup\{v_2\},\dots,X\cup\{v_{\alpha(G)}\})$
is a path decomposition of $G$ with width $|X|=n-\alpha(G)=\delta(G)$. Thus $\pw{G}\leq\delta(G)$.
\end{proof}
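The two facts used in this proof, $\delta(G)=n-\alpha(G)$ and the explicit path decomposition, can be checked directly. A small sketch (plain Python; $K_{2,3,4}$ is an illustrative choice):

```python
parts = [2, 3, 4]                        # G = K_{2,3,4}; vertices (class, index)
V = [(i, j) for i, p in enumerate(parts) for j in range(p)]
adj = {v: {u for u in V if u[0] != v[0]} for v in V}
n, alpha = sum(parts), max(parts)

# delta(G) = n - alpha(G): a vertex misses only its own colour class.
assert min(len(adj[v]) for v in V) == n - alpha

# The path decomposition (X + {v_1}, ..., X + {v_alpha}) from the proof,
# with S a largest colour class and X = V(G) - S, has width n - alpha.
big = parts.index(alpha)
S = [v for v in V if v[0] == big]
X = {v for v in V if v[0] != big}
bags = [X | {v} for v in S]
assert all(len(b) == n - alpha + 1 for b in bags)    # width |X| = n - alpha
# Every edge of G is contained in some bag.
assert all(any(u in b and v in b for b in bags) for u in V for v in adj[u])
```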
\begin{lemma}
\lemlabel{CMG}
If $H$ is a minor of a complete multipartite graph $G$, then $H$ can be obtained from $G$ by a sequence of edge contractions, vertex deletions, and edge deletions, such that each operation does not increase the minimum degree, connectivity, treewidth, or pathwidth.
\end{lemma}
\begin{proof}
Every minor of a graph can be obtained by a sequence of edge contractions and vertex deletions, followed by a sequence of edge deletions. Contracting an edge or deleting a vertex in a complete multipartite graph produces another complete multipartite graph. Edge deletions do not increase the minimum degree, connectivity, treewidth, or pathwidth. Thus by \lemref{CMGequal}, it suffices to prove that edge contractions and vertex deletions in complete multipartite graphs do not increase the minimum degree.
Say $G=K_{a_1,\dots,a_p}$ has $n$ vertices. Then $G$ has minimum degree $n-\max_ia_i$. Let $G'$ be the graph obtained from $G$ by contracting an edge. Then $G'$ is a complete multipartite graph $K_{1,a'_1,\dots,a'_p}$ with $n-1$ vertices, where $a_i-1\leq a'_i\leq a_i$. Thus $$\delta(G')=n-1-\max_ia'_i\leq n-1-\max_i(a_i-1)=n-\max_ia_i=\delta(G)\enspace.$$ Now let $G'$ be the graph obtained from $G$ by deleting a vertex. Then $G'$ is a complete multipartite graph $K_{a'_1,\dots,a'_p}$ with $n-1$ vertices, where $a_i-1\leq a'_i\leq a_i$. By the same argument as before, $\delta(G')\leq \delta(G)$.
\end{proof}
We now state and prove our first characterisation.
\begin{theorem}
\thmlabel{CMG-Degree}
For all $k\geq1$, the following are equivalent for a complete multipartite graph $G$:\\
\hspace*{5mm}\textup{(a)} $G\in\CC{k}$\\
\hspace*{5mm}\textup{(b)} $G\in\DD{k}$\\
\hspace*{5mm}\textup{(c)} for some $b\geq a\geq1$ and $p\geq2$ such that $k+1=a+(p-1)b$, $$G=K_{a,\underbrace{b,\dots,b}_p}\enspace,$$
\hspace*{10mm}and if $p=2$ then $a=b$.
\end{theorem}
\begin{proof}
(b) $\Longrightarrow$ (a): Say $G\in\DD{k}$.
By \lemref{BasicDegree}, $\delta(G)=k+1$.
By \lemref{CMGequal}, $\kappa(G)=k+1$.
By \eqnref{DDinCC}, $G\in\CC{k}$.
\medskip (a) $\Longrightarrow$ (c):
Consider a complete multipartite graph $G\in\CC{k}$. Thus $\kappa(G)\geq k+1$ by \lemref{Basic}. If $\kappa(G)\geq k+2$ then $\kappa(G-e)\geq k+1$ for any edge $e$ of $G$, implying $G\not\in\CC{k}$ by \lemref{Basic}. Now assume that $\kappa(G)=k+1$. Thus $\delta(G)=k+1$ by \lemref{CMGequal}.
Suppose on the contrary that adjacent vertices $v$ and $w$ in $G$ both have degree at least $k+2$. By \lemref{CMGconnected}, $G-vw$ is $k$-connected, implying $G\not\in\CC{k}$. This contradiction proves that no two high-degree vertices in $G$ are adjacent. If two vertices in a complete multipartite graph have distinct degrees, then they are adjacent. Thus the high-degree vertices in $G$ have the same degree, and the vertices of $G$ have at most two distinct degrees. Since the degree of each vertex $v$ equals $|V(G)|$ minus the number of vertices in the colour class that contains $v$, the colour classes of $G$ have at most two distinct sizes. Hence for some $b\geq a\geq 1$ and $p,q\geq1$, $$G=K_{\underbrace{a,\dots,a}_q,\underbrace{b,\dots,b}_p}.$$
Hence $\kappa(G)=aq+b(p-1)=k+1$. If $a=b$ then, taking $q=1$, we are done. Now assume that $a<b$. Thus $q=1$ as otherwise two high-degree vertices are adjacent. Thus
$$G=K_{a,\underbrace{b,\dots,b}_p}\enspace.$$
Suppose on the contrary that $p=1$. Then $G=K_{a,b}$ and $\kappa(G)=a$. Contracting one edge in $G$ gives $K_{1,a-1,b-1}$, which by \lemref{CMGequal} also has connectivity $a$, implying $G\not\in\CC{k}$. This contradiction proves that $p\geq2$.
Now suppose that $p=2$. Then $G=K_{a,b,b}$ and $\kappa(G)=a+b$. Contracting one edge gives $K_{1,a,b-1,b-1}$, which by \lemref{CMGequal} also has connectivity $a+b$ (since $a<b$), implying $G\not\in\CC{k}$. This contradiction proves that if $p=2$ then $a=b$.
\medskip (c) $\Longrightarrow$ (b) Let $$G=K_{a,\underbrace{b,\dots,b}_p}\enspace,$$
for some $b\geq a\geq1$ and $p\geq2$, such that $k+1=a+(p-1)b$ and if $p=2$ then $a=b$. Thus $G$ has minimum degree $k+1$ by \lemref{CMGequal}. Suppose on the contrary that $G\not\in\DD{k}$. By \lemref{Basic}, $G$ has a proper minor $H$ with $\delta(H)\geq k+1$. By \lemref{CMG}, every minor of $G$ in the sequence from $G$ to $H$ has minimum degree at most $k+1$. Thus we can assume that $H$ was obtained from $G$ by a single edge contraction, a vertex deletion, or an edge deletion. In each case we prove that $\delta(H)\leq k$, which is the desired contradiction.
First suppose that $H$ is obtained from $G$ by an edge contraction. Then
$$\text{(i) }H=K_{1,a-1,b-1,\underbrace{b,\dots,b}_{p-1}}
\;\;\text{ or }\;\;
\text{(ii) }H=K_{1,a,b-1,b-1,\underbrace{b,\dots,b}_{p-2}}\enspace.$$
In case (i), $\delta(H)=1+(a-1)+(b-1)+(p-2)b=k$.
In case (ii) with $p\geq3$, $\delta(H)=1+a+2(b-1)+(p-3)b=k$.
Now consider case (ii) with $p=2$. By assumption, $a=b$. Thus $H=K_{1,a,a-1,a-1}$ has minimum degree $1+2(a-1)=k$.
Now suppose that $H$ is obtained from $G$ by a vertex deletion. Then
$$\text{(i) }H=K_{a-1,\underbrace{b,\dots,b}_{p}}
\;\;\;\text{ or }\;\;\;
\text{(ii) }H=K_{a,b-1,\underbrace{b,\dots,b}_{p-1}}\enspace.$$
In case (i), $\delta(H)=(a-1)+(p-1)b=k$.
In case (ii), $\delta(H)=a+(b-1)+(p-2)b=k$ (since $p\geq2$).
In $G$, every edge is incident to a vertex of degree $k+1$. Thus, if $H$ is obtained from $G$ by an edge deletion, then $\delta(H)\leq k$.
\end{proof}
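The degree computations in the case analysis above all reduce to the identity $\delta(K_{a_1,\dots,a_p})=\sum_ia_i-\max_ia_i$. The following sketch (plain Python; the parameters $a=3$, $b=5$, $p=4$ are an illustrative choice) replays each case.

```python
def delta(parts):
    """Minimum degree of the complete multipartite graph with these
    colour-class sizes: n minus the largest class."""
    return sum(parts) - max(parts)

a, b, p = 3, 5, 4                  # G = K_{a,b,...,b} with p classes of size b
k = a + (p - 1) * b - 1
assert delta([a] + [b] * p) == k + 1

# Edge contraction, case (i): K_{1,a-1,b-1,b,...,b}.
assert delta([1, a - 1, b - 1] + [b] * (p - 1)) == k
# Edge contraction, case (ii) with p >= 3: K_{1,a,b-1,b-1,b,...,b}.
assert delta([1, a, b - 1, b - 1] + [b] * (p - 2)) == k
# Vertex deletion, cases (i) and (ii).
assert delta([a - 1] + [b] * p) == k
assert delta([a, b - 1] + [b] * (p - 1)) == k
```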
The remainder of this section is devoted to characterising the complete multipartite graphs in \TT{k} and in \WW{k}. We start with a lemma about independent sets in complete multipartite graphs.
\begin{lemma}
\lemlabel{CMG-IndependentSets}
For every edge $vw$ in a complete multipartite graph $G$, every independent set in $G-vw$ is either $\{v,w\}$ or is also independent in $G$. Thus if $\alpha(G)\geq2$ (that is, $G$ is not a complete graph) then $\alpha(G-vw)=\alpha(G)$.
\end{lemma}
\begin{proof}
Let $G':=G-vw$. Let $I$ be an independent set in $G'$ that is not independent in $G$. Thus both $v$ and $w$ are in $I$. Let $S$ be the colour class containing $v$. Every vertex not in $S\cup\{w\}$ is adjacent to $v$ in $G'$. Thus $I\subseteq S\cup\{w\}$. Every vertex in $S-\{v\}$ is adjacent to $w$ in $G'$. Thus $I=\{v,w\}$. Hence every independent set in $G'$ is either $\{v,w\}$ or is also independent in $G$. Thus $\alpha(G')=\alpha(G)$ whenever $\alpha(G)\geq2$.
\end{proof}
To prove lower bounds on treewidth we use the following idea. Let $G$ be a graph. Two subgraphs $X$ and $Y$ of $G$ \emph{touch} if $X\cap Y\neq\emptyset$ or there is an edge of $G$ between $X$ and $Y$. A \emph{bramble} in $G$ is a set of pairwise touching connected subgraphs. The subgraphs are called \emph{bramble elements}. A set $S$ of vertices in $G$ is a \emph{hitting set} of a bramble \ensuremath{\mathcal{B}}\ if $S$ intersects every element of \ensuremath{\mathcal{B}}. The \emph{order} of \ensuremath{\mathcal{B}}\ is the minimum size of a hitting set. The following `Treewidth Duality Theorem' shows the intimate relationship between treewidth and brambles.
\begin{theorem}[\citet{SeymourThomas-JCTB93}]
\thmlabel{TreewidthBramble}
A graph $G$ has treewidth at least $k$ if and only if $G$ contains a bramble of order at least $k+1$.
\end{theorem}
For example, say $G$ is a complete multipartite graph on $n$ vertices. Let $S$ be a set of vertices in $G$, one from each colour class; that is, $S$ is a maximum clique in $G$. Then it is easily seen that $\ensuremath{\mathcal{B}}:=E(G)\cup S$ is a bramble of order $n-\alpha(G)+1$, and thus $\tw{G}\geq n-\alpha(G)$ by \thmref{TreewidthBramble} (confirming \lemref{CMGequal}). The next two lemmas give circumstances when an edge can be deleted from a complete multipartite graph without decreasing the treewidth.
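The bramble $\ensuremath{\mathcal{B}}=E(G)\cup S$ and its order can be verified by brute force on a small example. The following sketch (plain Python; $K_{2,2,3}$ is an illustrative choice) checks that the elements pairwise touch and that the minimum hitting set has size $n-\alpha(G)+1$.

```python
from itertools import combinations

parts = [2, 2, 3]                         # G = K_{2,2,3}; vertices (class, index)
V = [(i, j) for i, p in enumerate(parts) for j in range(p)]
adj = {v: {u for u in V if u[0] != v[0]} for v in V}
n, alpha = sum(parts), max(parts)

# Bramble elements: every edge of G, plus a singleton for each vertex of a
# maximum clique S (one vertex from each colour class).
S = [(i, 0) for i in range(len(parts))]
elements = ([{u, v} for u in V for v in adj[u] if u < v]
            + [{s} for s in S])

def touch(A, B):                          # intersect, or joined by an edge
    return bool(A & B) or any(u in adj[v] for u in A for v in B)
assert all(touch(A, B) for A in elements for B in elements)

# Brute-force the minimum hitting set: the order is n - alpha + 1.
order = next(r for r in range(1, n + 1)
             if any(all(set(H) & X for X in elements)
                    for H in combinations(V, r)))
assert order == n - alpha + 1             # here 7 - 3 + 1 = 5
```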
\begin{lemma}
\lemlabel{CMG-DeleteEdge}
Let $G$ be a complete multipartite graph with $\alpha(G)\geq3$, such that at least two colour classes contain at least two vertices. Let $vw$ be an edge, where both $v$ and $w$ are in colour classes that contain at least two vertices. Then $\tw{G-vw}=\tw{G}$.
\end{lemma}
\begin{proof}
Say $G$ has $n$ vertices. Let $G':=G-vw$. By \twolemref{CMGequal}{CMG-IndependentSets}, $\tw{G}=n-\alpha(G)=n-\alpha(G')$. Clearly $\tw{G'}\leq\tw{G}$. Thus it suffices to prove that $\tw{G'}\geq n-\alpha(G')$.
Since $v$ and $w$ are in colour classes that contain at least two vertices, there is a set $S$ of vertices, such that neither $v$ nor $w$ is in $S$, and each colour class has exactly one vertex in $S$. Thus $S$ is a maximum clique in $G$ and in $G'$. Let $\ensuremath{\mathcal{B}}:=E(G')\cup S$.
We now prove that \ensuremath{\mathcal{B}}\ is a bramble in $G'$. Each element of \ensuremath{\mathcal{B}}\ induces a connected subgraph in $G'$. Every pair of vertices in $S$ are adjacent. Say $x\in S$ and $pq\in E(G')$. Since $p$ and $q$ have distinct colours, $x$ is coloured differently from $p$ or $q$, and thus $x$ is adjacent to $p$ or $q$ (since $x\ne v$ and $x\ne w$). Hence $x$ touches $pq$. Say $pq\in E(G')$ and $rs\in E(G')$. If $\{p,q\}\cap\{r,s\}\neq\emptyset$ then $pq$ and $rs$ touch. So assume that $p,q,r,s$ are distinct. Thus there are at least two edges in $G$ between $\{p,q\}$ and $\{r,s\}$, one of which is not $vw$.
Hence $pq$ touches $rs$. Therefore \ensuremath{\mathcal{B}}\ is a bramble in $G'$.
Let $H$ be a minimum hitting set of \ensuremath{\mathcal{B}}. If $|H|\geq n-\alpha(G')+1$, then \ensuremath{\mathcal{B}}\ has order at least $n-\alpha(G')+1$, implying $\tw{G'}\geq n-\alpha(G')$ by \thmref{TreewidthBramble}, and we are done. Now assume that $|H|\leq n-\alpha(G')$.
Since every edge of $G'$ is in \ensuremath{\mathcal{B}}, $H$ is a vertex cover of $G'$, and $V(G')-H$ is an independent set of $G'$. Thus $n-|H|\leq\alpha(G')$. Hence $|H|=n-\alpha(G')$, and
$V(G')-H$ is a maximum independent set of $G'$. By \lemref{CMG-IndependentSets}, every independent set of $G'$ is $\{v,w\}$ or is an independent set of $G$.
Since $\alpha(G')\geq3$, $\{v,w\}$ is not a maximum independent set. Hence
$V(G)-H$ is a maximum independent set of $G$. That is, $V(G)-H$ is a colour class in $G$, which implies that $H$ does not contain some vertex in $S$, and $H$ is not a hitting set of \ensuremath{\mathcal{B}}. This is the desired contradiction.
\end{proof}
\begin{lemma}
\lemlabel{CMG-DeleteSpecialEdge}
Let $G$ be a complete multipartite graph with $\alpha(G)\geq2$, and at least one singleton colour class. Let $vw$ be an edge, where $v$ is in a singleton colour class, and $w$ is in a colour class that contains at least two vertices. Then $\tw{G-vw}=\tw{G}$.
\end{lemma}
\begin{proof}
Say $G$ has $n$ vertices. Let $G':=G-vw$. By \twolemref{CMGequal}{CMG-IndependentSets}, $\tw{G}=n-\alpha(G)=n-\alpha(G')$. Clearly $\tw{G'}\leq\tw{G}$. Thus it suffices to prove that $\tw{G'}\geq n-\alpha(G')$.
By assumption, there is a set $S$ of vertices, such that $w\not\in S$, and every colour class has exactly one vertex in $S$. Thus $v\in S$. Note that $S$ is a maximum clique in $G$ and in $G'$. Let $\ensuremath{\mathcal{B}}:=E(G')\cup S$.
We now prove that \ensuremath{\mathcal{B}}\ is a bramble in $G'$. Each element of \ensuremath{\mathcal{B}}\ induces a connected subgraph in $G'$. Every pair of vertices in $S$ are adjacent. Consider $v\in S$ and $pq\in E(G')$. Since $v$ is in a singleton colour class, $v$ is adjacent to both $p$ and $q$ in $G$, and thus $v$ is adjacent to $p$ or $q$ in $G'$. Hence $v$ touches $pq$. Now consider $x\in S-\{v\}$ and $pq\in E(G')$. Since $p$ and $q$ have distinct colours, $x$ is coloured differently from $p$ or $q$, and thus $x$ is adjacent to $p$ or $q$ (since $x\ne v$ and $x\ne w$). Hence $x$ touches $pq$. Finally consider two edges $pq\in E(G')$ and $rs\in E(G')$. If $\{p,q\}\cap\{r,s\}\neq\emptyset$ then $pq$ and $rs$ touch. So assume that $p,q,r,s$ are distinct. Thus there are at least two edges in $G$ between $\{p,q\}$ and $\{r,s\}$, one of which is not $vw$. Hence $pq$ touches $rs$. Therefore \ensuremath{\mathcal{B}}\ is a bramble in $G'$.
Let $H$ be a minimum hitting set of \ensuremath{\mathcal{B}}. If $|H|\geq n-\alpha(G')+1$, then \ensuremath{\mathcal{B}}\ has order at least $n-\alpha(G')+1$, implying $\tw{G'}\geq n-\alpha(G')$ by \thmref{TreewidthBramble}, and we are done. Now assume that $|H|\leq n-\alpha(G')$.
Since every edge of $G'$ is in \ensuremath{\mathcal{B}}, $H$ is a vertex cover of $G'$, and $V(G')-H$ is an independent set of $G'$. Thus $n-|H|\leq\alpha(G')$. Hence $|H|=n-\alpha(G')$, and $V(G')-H$ is a maximum independent set of $G'$. By \lemref{CMG-IndependentSets}, every independent set of $G'$ is $\{v,w\}$ or is an independent set of $G$. If $V(G')-H=\{v,w\}$ then $H$ does not contain $v$, and $H$ is not a hitting set of \ensuremath{\mathcal{B}}, which is a contradiction. Otherwise, $V(G)-H$ is a maximum independent set of $G$. That is, $V(G)-H$ is a colour class in $G$, which implies that $H$ does not contain some vertex in $S$, and $H$ is not a hitting set of \ensuremath{\mathcal{B}}. This is the desired contradiction.
\end{proof}
\begin{theorem}
\thmlabel{CMG-Treewidth}
For all $k\geq1$, the following are equivalent for a complete multipartite graph $G$:\\
\hspace*{5mm}\textup{(a)} $G\in\TT{k}$\\
\hspace*{5mm}\textup{(b)} $G\in\WW{k}$\\
\hspace*{5mm}\textup{(c)} $G=K_{k+2}$, or $k\geq 3$ is odd and $\displaystyle G=K_{\underbrace{2,\dots,2}_{(k+3)/2}}$.
\end{theorem}
\begin{proof}
(b) $\Longrightarrow$ (a): Say $G\in\WW{k}$.
By \lemref{Basic}, $\pw{G}=k+1$.
By \lemref{CMGequal}, $\tw{G}=k+1$.
By \eqnref{WWinTT}, $G\in\TT{k}$.
\medskip (a) $\Longrightarrow$ (c): Say $G\in\TT{k}$.
By \lemref{Basic}, $\tw{G}\geq k+1$.
If $\tw{G}\geq k+2$ then $\tw{G-v}\geq k+1$ for any vertex $v$ of $G$, implying $G\not\in\TT{k}$ by \lemref{Basic}. Now assume that $\tw{G}=k+1$.
Thus $\delta(G)=k+1$ by \lemref{CMGequal}, and
$G\in\DD{k}$ by \eqnref{TTinDD}. By \thmref{CMG-Degree}, $$G=K_{a,\underbrace{b,\dots,b}_p}\enspace,$$
for some $b\geq a\geq1$ and $p\geq2$,
such that $k+1=a+(p-1)b$ and if $p=2$ then $a=b$.
\textbf{\boldmath Case. $b=1$:} Then $a=1$ and $G=K_{k+2}$, and we are done.
\textbf{\boldmath Case. $b=2$:} Then $k+3=a+2p$.
If $a=1$, then by \lemref{CMG-DeleteSpecialEdge}, $\tw{G-e}=\tw{G}$ for some edge $e$ of $G$, implying $G\not\in\TT{k}$ by \lemref{Basic}.
Otherwise $a=2$. Thus $k=2p-1$ is odd, and $k\geq3$ since $p\geq2$.
Hence
$$\displaystyle G=K_{\underbrace{2,\dots,2}_{(k+3)/2}}\enspace.$$
\textbf{\boldmath Case. $b\geq3$:} Then $\alpha(G)\geq3$. Since $p\geq2$, there are at least two colour classes that contain at least two vertices, and by \lemref{CMG-DeleteEdge}, $\tw{G-e}=\tw{G}$ for some edge $e$ of $G$, implying $G\not\in\TT{k}$ by \lemref{Basic}.
\medskip (c) $\Longrightarrow$ (b): If $G=K_{k+2}$ then $G\in\WW{k}$ by \lemref{Basic}. Now assume that $k\geq 3$ is odd and
$$\displaystyle G=K_{\underbrace{2,\dots,2}_{(k+3)/2}}\enspace.$$
Thus $\pw{G}=k+1$ by \lemref{CMGequal}. Suppose on the contrary that $G\not\in\WW{k}$. By \lemref{Basic}, $G$ has a proper minor $H$ with $\pw{H}\geq k+1$. By \lemref{CMG}, every minor of $G$ in the sequence from $G$ to $H$ has pathwidth at most $k+1$. Thus we can assume that $H$ was obtained from $G$ by a single edge contraction, a vertex deletion, or an edge deletion. Since an edge contraction or a vertex deletion produces another complete multipartite graph, and the minimum degree of a complete multipartite graph equals its pathwidth (\lemref{CMGequal}), the same proof used in \thmref{CMG-Degree} shows that $\pw{H}\leq k$. Now assume that $H=G-vw$ for some edge $vw$ of $G$. Let $x$ be the other vertex in the colour class that contains $v$. Let $y$ be the other vertex in the colour class that contains $w$. Let $S:=V(G)-\{v,w,x,y\}$. Then $(S\cup\{v,y\},S\cup\{x,y\},S\cup\{x,w\})$ is a path decomposition of $H$ with width $k$, which is the desired contradiction.
\end{proof}
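For the smallest case $k=3$, i.e.\ $G=K_{2,2,2}$, the path decomposition used at the end of the proof can be checked mechanically; the script below (our own illustration) verifies edge coverage, contiguity of each vertex's bags, and width $k$.

```python
from itertools import combinations

# K_{2,2,2} minus the edge vw, with k = 3
verts = [(i, j) for i in range(3) for j in range(2)]
v, x = (0, 0), (0, 1)          # colour class containing v
w, y = (1, 0), (1, 1)          # colour class containing w
edges = {frozenset((p, q)) for p, q in combinations(verts, 2)
         if p[0] != q[0]} - {frozenset((v, w))}
S = set(verts) - {v, w, x, y}
bags = [S | {v, y}, S | {x, y}, S | {x, w}]

# every edge of H = G - vw lies in some bag
assert all(any(e <= bag for bag in bags) for e in edges)
# every vertex occupies a contiguous run of bags
for u in verts:
    idx = [i for i, bag in enumerate(bags) if u in bag]
    assert idx == list(range(idx[0], idx[-1] + 1))
# width = max bag size - 1 = k = 3
assert max(len(b) for b in bags) - 1 == 3
```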
\begin{open}
Complete multipartite graphs have diameter $2$. Are there generalisations of \twothmref{CMG-Degree}{CMG-Treewidth} for all diameter-$2$ graphs in \DD{k} or in \TT{k}?
\end{open}
\section*{Acknowledgements}
Thanks to Vida Dujmovi\'c who first proved \lemref{Vida}.
% arXiv:0812.1064, ``Graph Minors and Minimum Degree'' (Combinatorics, math.CO).
% https://arxiv.org/abs/2006.11815
\title{Quantum trees which maximize higher eigenvalues are unbalanced}
\begin{abstract}
The isoperimetric problem of maximizing all eigenvalues of the Laplacian on a metric tree graph within the class of trees of a given average edge length is studied. It turns out that, up to rescaling, the unique maximizer of the $k$-th positive eigenvalue is the star graph with three edges of lengths $2 k - 1$, $1$ and $1$. This complements the previously known result that the first nonzero eigenvalue is maximized by all equilateral star graphs and indicates that optimizers of isoperimetric problems for higher eigenvalues may be less balanced in their shape -- an observation which is known from numerical results on the optimization of higher eigenvalues of Laplacians on Euclidean domains.
\end{abstract}
\section{Introduction}
Within spectral geometry, isoperimetric problems for eigenvalues have a long history that reaches back at least as far as to Lord Rayleigh's famous book {\it The Theory of Sound} \cite[§210]{Rayleigh}. This class of problems deals with finding a shape which maximizes or minimizes (functions of) eigenvalues of the Laplacian or other differential operators under a constraint on a geometric quantity such as the volume, perimeter or diameter of the underlying space. To review just one well-known example, consider the Laplacian with Neumann boundary conditions on a bounded domain $\Omega \subset \R^2$ of area $|\Omega|$ and its eigenvalues $0 = \mu_1 (\Omega) < \mu_2 (\Omega) \leq \mu_3 (\Omega) \leq \dots$. The unique domain $\Omega$ which maximizes the first positive eigenvalue $\mu_2 (\Omega)$ under the constraint $|\Omega| = 1$ is the disc with area one \cite{S54}, while the maximizer of $\mu_3 (\Omega)$ with $|\Omega| = 1$ is the union of two disjoint discs of area $1/2$ each \cite{GNP09}; cf.\ also \cite{BH19}. For higher eigenvalues it is conjectured that the domains maximizing $\mu_4 (\Omega), \mu_5 (\Omega), \dots$ are of less simple shape, cf.\ the numerical observations and pictures in \cite{AF12}. For instance, numerics indicates that the maximizer for $\mu_5 (\Omega)$ is the disjoint union of a ball and a larger, non-convex domain with certain symmetries. For a broad overview on shape optimization problems for eigenvalues of Euclidean domains we refer the reader to~\cite{H06}.
In the present paper we deal with the Laplacian $- \Delta_\Gamma$ on a metric graph $\Gamma$ with standard (continuity--Kirchhoff) vertex conditions, the natural counterpart for metric graphs of the Neumann Laplacian on a domain. We refer to Section~2 for its precise definition. We denote by
\begin{align*}
0 = \mu_1 (\Gamma) < \mu_2 (\Gamma) \leq \mu_3 (\Gamma) \leq \dots
\end{align*}
the eigenvalues of $- \Delta_\Gamma$ in nondecreasing order, counted according to their multiplicities. Isoperimetric problems for this operator have come into focus in recent years, where most results deal with $\mu_2 (\Gamma)$, the so-called spectral gap, and its optimizers within the class of graphs with fixed ``volume'' (i.e.\ total length), diameter, or average edge length. Amongst other results it is known by now that $\mu_2 (\Gamma)$ is minimized among all graphs of given total length $L$ by the interval (the graph with two vertices and one edge of length $L$ connecting the two) \cite{N87}, see also \cite{KN14}. If we restrict ourselves to the class of doubly connected graphs, the minimizers were identified to be so-called necklace graphs \cite{BL17}. As simple examples such as equilateral star graphs show, a maximizer of $\mu_2 (\Gamma)$ among graphs of fixed length cannot exist. Instead it turned out that a suitable parameter is the {\em average edge length}
\begin{align*}
\cA := \frac{L}{E},
\end{align*}
where $E$ is the number of edges of $\Gamma$ and, again, $L$ is the total length. It was shown in \cite{KKMM16} that the only maximizing graphs of $\mu_2 (\Gamma)$ for fixed $\cA$ are equilateral flower graphs and equilateral pumpkins; see \cite[Section 3]{KKMM16} for their definitions and further details. If one restricts the considered class of graphs to trees, i.e.\ graphs without cycles, then the unique maximizers of $\mu_2 (\Gamma)$ are all equilateral star graphs \cite{R17}. For further related results we refer the reader to \cite{BL17,BKKM17,BKKM19,EJ12,KKT16,K20,KN19,KR20,MP20,P20,RS20}.
While the minimizing result extends to higher eigenvalues, where $\mu_{k + 1} (\Gamma)$ for fixed total length is minimized by the equilateral star with $k + 1$ edges, see \cite{F05}, to the best of our knowledge no results are available yet about which graphs maximize $\mu_{k + 1} (\Gamma)$ for $k \geq 2$ when $\cA$ is fixed. It is the aim of this paper to characterize these maximizers within the class of tree graphs---a class of graphs which seems to share particularly many spectral properties with Euclidean domains. It turns out that the maximizers display a certain lack of balance. More precisely, they are non-equilateral, and their edge lengths become increasingly unbalanced as $k$ grows. Our main result is the following theorem.
\begin{theorem}\label{thm:intro}
Let $k \geq 2$. Among all finite, connected metric trees with $E \geq 3$ edges and fixed average edge length $\cA$, $\mu_{k + 1} (\Gamma)$ is maximal if and only if $\Gamma$ is a star graph with 3 edges of lengths
\begin{align*}
\frac{2 k - 1}{2 k + 1} L, \quad \frac{1}{2 k + 1} L, \quad \frac{1}{2 k + 1} L,
\end{align*}
where $L$ denotes the total length of $\Gamma$.
\end{theorem}
The maximizers of the first few eigenvalues are displayed in Figure \ref{fig:intro}. Compared with the known results on $\mu_2 (\Gamma)$ within the class of metric trees with given average length $\cA$, where any equilateral star is a maximizer, it is remarkable that for higher eigenvalues only 3-stars with the specified lengths do the job.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
\draw[fill] (0,0) circle(0.05);
\draw[fill] (3,0) circle(0.05);
\draw[fill] (3.87,-0.5) circle(0.05);
\draw[fill] (3.87,0.5) circle(0.05);
\draw (0,0)--(3,0);
\draw (3,0)--(3.87,-0.5);
\draw (3,0)--(3.87,0.5);
\node[] at (1.5,-0.4) {$\Gamma_3$};
\begin{scope}[shift={(5.5,0)}]
\draw[fill] (0,0) circle(0.05);
\draw[fill] (5,0) circle(0.05);
\draw[fill] (5.87,-0.5) circle(0.05);
\draw[fill] (5.87,0.5) circle(0.05);
\draw (0,0)--(5,0);
\draw (5,0)--(5.87,-0.5);
\draw (5,0)--(5.87,0.5);
\node[] at (2.5,-0.4) {$\Gamma_4$};
\end{scope}
\begin{scope}[shift={(2,-1.5)}]
\draw[fill] (0,0) circle(0.05);
\draw[fill] (7,0) circle(0.05);
\draw[fill] (7.87,-0.5) circle(0.05);
\draw[fill] (7.87,0.5) circle(0.05);
\draw (0,0)--(7,0);
\draw (7,0)--(7.87,-0.5);
\draw (7,0)--(7.87,0.5);
\node[] at (3.5,-0.4) {$\Gamma_5$};
\end{scope}
\end{tikzpicture}
\caption{The maximizing trees $\Gamma_j$ of $\mu_j (\Gamma)$ for fixed $\cA$, $j = 3, 4, 5$.}
\label{fig:intro}
\end{figure}
Our proof of Theorem \ref{thm:intro}, based on domain monotonicity properties of metric graph eigenvalues, actually yields an explicit, sharp upper bound for the quantity $\mu_{k + 1} (\Gamma) \cA^2$ depending only on $k$,
\begin{align*}
\mu_{k + 1} (\Gamma) \cA^2 \leq \frac{(2 k + 1)^2 \pi^2}{36},
\end{align*}
see Theorem \ref{thm:trees}. This bound may, however, also be obtained from the spectral estimates in \cite{BKKM17}, see Remark \ref{rem:BKKM} below for a more detailed discussion. Thus the present paper confirms the sharpness of the mentioned estimate in \cite{BKKM17} for trees.
\section{Metric graphs, the (standard) Laplacian and its eigenvalues}
A metric graph $\Gamma$ is a discrete graph on a vertex set $\cV$ with edge set $\cE$ that is equipped, additionally, with a length function $L : \cE \to (0, \infty)$. By parametrizing each edge $e$ along the interval $[0, L (e)]$ we may identify $e$ with that interval, and this parametrization induces a natural metric on $\Gamma$. We will always assume that $\Gamma$ is a finite graph, i.e.\ $V := V (\Gamma) := |\cV|$ and $E := E (\Gamma) := |\cE|$ are finite, and that $\Gamma$ is connected. We write $L := L (\Gamma) := \sum_{e \in \cE} L (e)$ for the total length of $\Gamma$. By the finiteness assumption and since we do not allow edges of infinite length, the metric space $\Gamma$ is always compact. We also assume that $\Gamma$ does not contain any loops (i.e.\ edges both of whose endpoints correspond to the same vertex). Actually, we will mostly deal with the case that $\Gamma$ is a tree, i.e.\ a graph without cycles, anyway.
By a complex-valued function $f$ on a metric graph $\Gamma$ we mean a collection of functions $f_e : (0, L (e)) \to \C$, $e \in \cE$. In line with this, $f$ belongs to $L^2 (\Gamma)$ if and only if $f_e \in L^2 (0, L (e))$ holds for all $e \in \cE$. Moreover, the Sobolev spaces
\begin{align*}
\widetilde H^k (\Gamma) := \left\{ f \in L^2 (\Gamma) : f_e \in H^k (0, L (e)), e \in \cE \right\}
\end{align*}
for $k = 1, 2, \dots$ and
\begin{align*}
H^1 (\Gamma) := \left\{ f \in \widetilde H^1 (\Gamma) : f~\text{is continuous at each vertex} \right\}
\end{align*}
are natural spaces for the treatment of differential operators on metric graphs; in the latter definition, continuity at $v$ means that on all edges incident to the vertex~$v$, $f$ has the same boundary value (or trace) at the endpoint corresponding to $v$. These spaces have the usual properties; for instance they are compactly embedded into~$L^2 (\Gamma)$.
The present text focuses on the Laplacian (i.e.\ the second derivative operator on each edge) subject to standard (also called continuity-Kirchhoff) matching conditions at all vertices. For this, at any vertex $v$, for $f \in \widetilde H^2 (\Gamma)$ we define
\begin{align*}
\partial_\nu f (v) := \sum \partial f_e (v),
\end{align*}
where the sum is taken over all edges incident to $v$ and $\partial f_e (v)$ denotes the derivative of $f_e$ at the endpoint of $e$ corresponding to $v$, taken in the direction towards $v$.
\begin{definition}
On any finite, connected metric graph $\Gamma$ the operator $- \Delta_\Gamma$ in $L^2 (\Gamma)$ defined as
\begin{align*}
(- \Delta_\Gamma f)_e & = - f_e'', \quad e \in \cE, \\
\dom (- \Delta_\Gamma) & = \left\{ f \in \widetilde H^2 (\Gamma) \cap H^1 (\Gamma) : \partial_\nu f (v) = 0~\text{for all}~v \in \cV \right\},
\end{align*}
is called {\em standard Laplacian} or just {\em Laplacian} on $\Gamma$.
\end{definition}
Note that at the ``loose ends'' (i.e.\ the vertices of degree one) the condition $\partial_\nu f (v) = 0$ simply is a Neumann boundary condition. It is well known that $- \Delta_\Gamma$ is a self-adjoint, non-negative operator. Its lowest eigenvalue is 0 with multiplicity one, with the corresponding eigenfunctions being constant. When ordering the eigenvalues non-decreasingly and counting them with their respective multiplicities (which may be larger than one) we have a sequence
\begin{align*}
0 = \mu_1 (\Gamma) < \mu_2 (\Gamma) \leq \mu_3 (\Gamma) \leq \dots,
\end{align*}
and we just write $\mu_j$ instead of $\mu_j (\Gamma)$ if the context is clear. In full analogy to the Neumann Laplacian on an interval or a Euclidean domain, the eigenvalues of $-\Delta_\Gamma$ enjoy the variational characterization
\begin{align}\label{eq:minMax}
\mu_k (\Gamma) = \min_{\substack{F \subset H^1 (\Gamma) \\ \dim F = k}} \max_{\substack{f \in F \\ f \neq 0}} \frac{\int_\Gamma |f'|^2 \dd x}{\int_\Gamma |f|^2 \dd x}, \quad k = 1, 2, \dots.
\end{align}
It is worth mentioning that vertices of degree two do not matter for any of our considerations, since they may always be added by splitting an edge $e$ into two edges $e', e''$ each of which is incident to the same (new) vertex of degree two and which satisfy $L (e') + L (e'') = L (e)$. This procedure changes neither the domain of $- \Delta_\Gamma$ nor its action, nor, in particular, its eigenvalues.
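For concreteness, the eigenvalues of a 3-star (the graphs that will appear as maximizers later) can be described explicitly. The following derivation is our own summary and is not stated in this form in the text.

```latex
% Sketch (our computation): eigenvalues of the standard Laplacian on a 3-star
% with edge lengths \ell_1, \ell_2, \ell_3. Parametrize each edge from its
% leaf; the Neumann condition there forces f_i(x) = A_i \cos(r x), \mu = r^2.
% Continuity and the Kirchhoff condition at the centre read
\begin{align*}
A_1 \cos(r \ell_1) = A_2 \cos(r \ell_2) = A_3 \cos(r \ell_3), \qquad
\sum_{i=1}^{3} A_i \sin(r \ell_i) = 0,
\end{align*}
% where the second identity comes from f_i'(\ell_i) = -A_i r \sin(r \ell_i)
% and division by -r. Eliminating the A_i, the nonzero eigenvalues are the
% solutions r^2 > 0 of the secular equation
\begin{align*}
\sum_{i=1}^{3} \sin(r \ell_i) \prod_{j \neq i} \cos(r \ell_j) = 0.
\end{align*}
```

In particular, eigenfunctions vanishing at the centre arise exactly when at least two of the factors $\cos(r \ell_i)$ vanish simultaneously; this is the family that occurs for the maximizers below.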
It has turned out in recent years that eigenvalue inequalities for quantum graphs may often be proven elegantly by using so-called {\em surgery principles}, i.e.\ by employing the (often but not always) monotone behavior of eigenvalues with respect to surgery operations performed on the metric graph such as adding or removing edges, joining vertices or transplanting subgraphs; see, e.g.,~\cite{BKKM17,BKKM19,KKMM16,KMN13,R17,RS20}. For the proof of the main result of the present note we only need the following surgery principle. It has long been well known that removing ``pendant'' edges from a graph has a non-decreasing effect on all eigenvalues; see, e.g.,~\cite[Theorem~2]{KMN13} or~\cite[Proposition~3.1]{R17}. However, for us the following necessary condition for equality will be crucial; therefore we provide a short proof.
\begin{proposition}\label{prop}
Let $\Gamma$ be a finite, connected metric graph and let $\Gamma'$ be the graph obtained from $\Gamma$ by removing a pendant edge $e_0$, i.e.\ an edge with a vertex of degree one as one of its endpoints. Then
\begin{align*}
\mu_{k + 1} (\Gamma) \leq \mu_{k + 1} (\Gamma')
\end{align*}
holds for $k = 1, 2, \dots$. If $\mu_{k + 1} (\Gamma) = \mu_{k + 1} (\Gamma')$ then there exists an eigenfunction of $- \Delta_{\Gamma'}$ corresponding to the eigenvalue $\mu_{k + 1} (\Gamma')$ which is zero at the vertex $v_0$ of $\Gamma'$ to which $e_0$ is incident in $\Gamma$.
\end{proposition}
\begin{proof}
We interpret $\Gamma'$ as a subset of $\Gamma$. Let $k \in \N$, $\mu := \mu_{k + 1} (\Gamma')$, and let $f_1, \dots, f_{k + 1}$ be pairwise orthogonal eigenfunctions of $- \Delta_{\Gamma'}$ corresponding to the eigenvalues $\mu_1 (\Gamma'), \dots, \mu_{k + 1} (\Gamma')$. Moreover, let $F$ denote the linear span of these functions. An easy integration by parts, taking into account the continuity-Kirchhoff vertex conditions, shows that their derivatives $f_1', \dots, f_{k + 1}'$ are then pairwise orthogonal as well; note that the latter depend on the chosen direction of parametrization of each edge. Extending each function $f \in F$ by the constant value $f (v_0)$ on $e_0$, so that the extension $\widetilde f$ is continuous on $\Gamma$, we obtain a $(k + 1)$-dimensional subspace $\widetilde F$ of $H^1 (\Gamma)$. If $f \in F \setminus \{0\}$ is orthogonal to $\ker (- \Delta_{\Gamma'} - \mu)$ then
\begin{align*}
\frac{\int_\Gamma |\widetilde f'|^2 \dd x}{\int_\Gamma |\widetilde f|^2 \dd x} \leq \frac{\int_{\Gamma'} |f'|^2 \dd x}{\int_{\Gamma'} |f|^2 \dd x} < \mu.
\end{align*}
On the other hand, if every nontrivial $f \in \ker (-\Delta_{\Gamma'} - \mu)$ is nonzero at $v_0$ then for all such $f$
\begin{align*}
\frac{\int_\Gamma |\widetilde f'|^2 \dd x}{\int_\Gamma |\widetilde f|^2 \dd x} = \frac{\int_{\Gamma'} |f'|^2 \dd x}{\int_{\Gamma'} |f|^2 \dd x + |f (v_0)|^2 L (e_0)} < \frac{\int_{\Gamma'} |f'|^2 \dd x}{\int_{\Gamma'} |f|^2 \dd x} = \mu.
\end{align*}
Hence, in this case, by the min-max principle \eqref{eq:minMax},
\begin{align*}
\mu_{k + 1} (\Gamma) \leq \max_{\substack{\widetilde f \in \widetilde F \\ \widetilde f \neq 0}} \frac{\int_\Gamma |\widetilde f'|^2 \dd x}{\int_\Gamma |\widetilde f|^2 \dd x} < \mu = \mu_{k + 1} (\Gamma'),
\end{align*}
which proves the proposition.
\end{proof}
We would like to emphasize that the necessary condition for equality given in the previous proposition is not sufficient. In fact, if one adds a sufficiently long (compared to the total length of $\Gamma$) pendant edge to a given metric graph $\Gamma$ then the $k$-th positive eigenvalue will always decrease strictly.
\section{An isoperimetric inequality for higher eigenvalues of the Laplacian}
In this section we state and prove the main result of this article. In fact, the following theorem yields, in particular, the statement of Theorem \ref{thm:intro} in the introduction. Recall that
\begin{align*}
\cA = \cA (\Gamma) = \frac{L (\Gamma)}{E (\Gamma)}
\end{align*}
denotes the average edge length of $\Gamma$ and that we are assuming throughout that our trees do not contain vertices of degree two; in particular, $\Gamma$ is not a path graph.
\begin{theorem}\label{thm:trees}
Let $\Gamma$ be a finite, connected tree with $E \geq 3$ edges. Then
\begin{align}\label{eq:bound}
\mu_{k + 1} \cA^2 \leq \frac{(2 k + 1)^2 \pi^2}{36}
\end{align}
holds for all $k = 1, 2, \dots$. The bound \eqref{eq:bound} is sharp; more precisely, the following assertions hold.
\begin{enumerate}
\item For $k = 1$, equality holds if and only if $\Gamma$ is any equilateral star graph.
\item For each $k \geq 2$, equality holds if and only if $\Gamma$ is a 3-star with edge lengths $\frac{2 k - 1}{2 k + 1} L, \frac{1}{2 k + 1} L$, and $\frac{1}{2 k + 1} L$.
\end{enumerate}
\end{theorem}
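Before turning to the proof, the equality case in assertion (2) can be verified directly: on the candidate maximizer, the value $(2k+1)^2\pi^2/4$ (for total length $L = 1$) is an eigenvalue with an explicit eigenfunction vanishing at the central vertex. The script below is our own check; that this eigenvalue is indeed $\mu_{k+1}$, and that no other tree does better, is the content of the proof.

```python
import math

def verify_maximizer(k, tol=1e-9):
    """Check that r^2 = (2k+1)^2 pi^2 / 4 is an eigenvalue on the 3-star with
    edge lengths (2k-1)/(2k+1), 1/(2k+1), 1/(2k+1) and total length L = 1."""
    lengths = [(2 * k - 1) / (2 * k + 1), 1 / (2 * k + 1), 1 / (2 * k + 1)]
    r = (2 * k + 1) * math.pi / 2
    # candidate eigenfunction: f_i(x) = A_i cos(r x), x measured from the leaf,
    # so the Neumann condition f_i'(0) = 0 holds automatically on every edge
    s = [math.sin(r * l) for l in lengths]
    A = [1.0, -s[0] / 2, -s[0] / 2]        # chosen to cancel the Kirchhoff sum
    # continuity at the centre: all three edge values vanish there
    assert all(abs(a * math.cos(r * l)) < tol for a, l in zip(A, lengths))
    # Kirchhoff condition: derivatives toward the centre sum to zero
    assert abs(sum(a * si for a, si in zip(A, s))) < tol
    # the bound mu * A^2 = (2k+1)^2 pi^2 / 36 is attained (A = 1/3 here)
    assert abs(r ** 2 * (1 / 3) ** 2 - (2 * k + 1) ** 2 * math.pi ** 2 / 36) < tol

for k in range(2, 8):
    verify_maximizer(k)
```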
\begin{remark}\label{rem:BKKM}
We emphasize once more that the bound \eqref{eq:bound} itself can also be derived from \cite[Theorem 4.9]{BKKM17}, which, for the case of trees and standard vertex conditions, reads
\begin{align}\label{eq:sharp}
\mu_{k + 1} \leq \left( k - 1 + \frac{|\partial \Gamma|}{2} \right)^2 \frac{\pi^2}{L^2},
\end{align}
where $|\partial \Gamma|$ denotes the number of vertices of degree one. In particular, for each 3-star this estimate coincides with the one in Theorem \ref{thm:trees} and, thus, we show that the estimate \eqref{eq:sharp} is sharp for trees. Sharpness of its counterpart for graphs with cycles was earlier established in \cite{KS18}. However, our main interest here is in the class of optimizers of \eqref{eq:bound}, and the following proof shows the bound \eqref{eq:bound} and characterizes all maximizing trees at the same time.
\end{remark}
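The coincidence of the two bounds on a 3-star mentioned in the remark is a one-line computation; the script below (our own check) compares them numerically. A 3-star has $|\partial\Gamma| = 3$ and $E = 3$, so $\cA = L/3$.

```python
import math

L = 1.0
for k in range(1, 25):
    # bound (eq:sharp) with |dGamma| = 3:  (k - 1 + 3/2)^2 pi^2 / L^2
    sharp = (k - 1 + 3 / 2) ** 2 * math.pi ** 2 / L ** 2
    # bound (eq:bound) rewritten as a bound on mu_{k+1}, with A = L/3
    ours = (2 * k + 1) ** 2 * math.pi ** 2 / 36 / (L / 3) ** 2
    assert abs(sharp - ours) < 1e-9 * sharp
```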
\begin{remark}
The bound \eqref{eq:bound} holds also if we admit vertices of degree two, but no optimizing tree can have such vertices. In fact, removing a vertex of degree two (by joining the two incident edges into one edge) does not change the eigenvalues of $- \Delta_\Gamma$, but it strictly increases the average edge length $\cA$. Due to this fact, the assumption $E \geq 3$ in the theorem is also not very restrictive; the only trees which are excluded by this are intervals. However, for the eigenvalues of the Laplacian with standard (Neumann) vertex conditions on an interval we have, by explicit calculation, $\mu_{k + 1} \cA^2 = k^2 \pi^2$, which, by the above theorem, is strictly larger than the value of $\mu_{k + 1} \cA^2$ on any nontrivial metric tree.
\end{remark}
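The comparison with the interval made in the remark amounts to the elementary inequality $36 k^2 > (2k+1)^2$ for $k \geq 1$; a quick check (ours, purely illustrative):

```python
import math

for k in range(1, 100):
    interval = k ** 2 * math.pi ** 2                        # mu_{k+1} * A^2 on an interval
    tree_bound = (2 * k + 1) ** 2 * math.pi ** 2 / 36       # bound (eq:bound) for trees
    assert interval > tree_bound
```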
\begin{proof}[Proof of Theorem \ref{thm:trees}]
For $k = 1$ both the estimate and the characterization of maximizers can be found in \cite[Theorem 3.2]{R17}. In the following we show the theorem for $k \geq 2$ in seven steps.
{\bf Step 1:} the estimate \eqref{eq:bound} is true if $\Gamma$ is a 3-star and $k = 2$. That is, on any 3-star $\Gamma$ we have
\begin{align}\label{eq:bound33}
\mu_3 (\Gamma) \leq \frac{25}{4} \frac{\pi^2}{L (\Gamma)^2}.
\end{align}
To prove this, assume that the edges $e_1, e_2, e_3$ of $\Gamma$ are ordered such that $L (e_1) \geq L (e_2) \geq L (e_3)$. Denote by $\cS$ the equilateral star graph obtained from $\Gamma$ by shortening $e_1$ and $e_2$ to length $L (e_3)$. Then by Proposition \ref{prop},
\begin{align*}
\mu_3 (\Gamma) \leq \mu_3 (\cS) = \frac{9 \pi^2}{4 L (\cS)^2}.
\end{align*}
If we set $\alpha := L (\cS) / L (\Gamma) = 3 L (e_3) / L (\Gamma) \leq 1$, it follows
\begin{align}\label{eq:alpha1}
\mu_3 (\Gamma) \leq \frac{9 \pi^2}{4 \alpha^2 L (\Gamma)^2}.
\end{align}
On the other hand, if $\Pi$ denotes the path graph formed by $e_1$ and $e_2$ then again
\begin{align}\label{eq:alpha2}
\mu_3 (\Gamma) \leq \mu_3 (\Pi) = \frac{4 \pi^2}{(L (e_1) + L (e_2))^2} = \frac{4 \pi^2}{(1 - \frac{\alpha}{3})^2 L (\Gamma)^2}.
\end{align}
Now, if $0 < \alpha \leq \frac{3}{5}$ then \eqref{eq:alpha2} yields the bound \eqref{eq:bound33}. On the other hand, the same bound follows from \eqref{eq:alpha1} if $\frac{3}{5} \leq \alpha \leq 1$.
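The interplay of the two estimates in Step 1 can be visualized numerically: the bound \eqref{eq:alpha1} is decreasing in $\alpha$ and \eqref{eq:alpha2} is increasing, so their pointwise minimum peaks exactly at the crossing point $\alpha = 3/5$, where both equal $25\pi^2/4$. The grid search below is our own illustration.

```python
import math

# bounds on mu_3 * L^2 from Step 1, as functions of alpha = 3 L(e_3) / L
b1 = lambda a: 9 * math.pi ** 2 / (4 * a ** 2)        # (eq:alpha1), decreasing
b2 = lambda a: 4 * math.pi ** 2 / (1 - a / 3) ** 2    # (eq:alpha2), increasing

grid = [i / 10 ** 5 for i in range(1, 10 ** 5 + 1)]   # alpha in (0, 1]
best, arg = max((min(b1(a), b2(a)), a) for a in grid)

assert abs(arg - 3 / 5) < 1e-3                        # crossing at alpha = 3/5
assert abs(best - 25 * math.pi ** 2 / 4) < 1e-2       # worst-case value 25 pi^2 / 4
assert abs(b1(3 / 5) - b2(3 / 5)) < 1e-6              # the two bounds agree there
```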
{\bf Step 2:} among all 3-stars, equality in \eqref{eq:bound33} implies that $\Gamma$ has edge lengths $\frac{3}{5} L, \frac{1}{5} L$, and $\frac{1}{5} L$. Indeed, Step 1 of this proof shows that if~$\Gamma$ is a maximizer then $\alpha = 3/5$, i.e., the shortest edge $e_3$ satisfies $L (e_3) = L/ 5$, and at the same time, all inequality signs in the above estimates are equalities. But equality in \eqref{eq:alpha2} implies, by Proposition \ref{prop}, that $e_3$ is attached to the path graph $\Pi$ at a zero of the eigenfunction of $- \Delta_\Pi$ corresponding to $\mu_3 (\Pi)$. Since these zeroes appear at the two symmetric points with distance $L (\Pi) / 4$ to the boundary of $\Pi$, it follows that $L (e_1) = 3 (L (e_1) + L (e_2))/4$. Together with $L (e_3) = L / 5$ this yields that each maximizer $\Gamma$ has the claimed edge lengths. We will see in Step 7 below that 3-stars with the specified edge lengths indeed satisfy the desired equality.
{\bf Step 3:} if $\Gamma$ is any 3-star then the estimate \eqref{eq:bound} holds for all $k$. That is, on any 3-star $\Gamma$ we have
\begin{align}\label{eq:bound3k}
\mu_{k + 1} (\Gamma) \leq \frac{(2 k + 1)^2}{4} \frac{\pi^2}{L (\Gamma)^2}
\end{align}
for $k = 2, 3, \dots$. We show \eqref{eq:bound3k} by induction over $k$. For $k = 2$ it was already established in Step 1. Suppose that \eqref{eq:bound3k} holds for some fixed $k$ and each 3-star. Let $\Gamma$ be a 3-star with its edges $e_1, e_2, e_3$ ordered nonincreasingly according to their lengths. Our aim is to show that
\begin{align}\label{eq:inductionBound}
\mu_{k + 2} (\Gamma) \leq \frac{(2 k + 3)^2}{4} \frac{\pi^2}{L (\Gamma)^2}.
\end{align}
First of all, since $k + 2 \geq 4 = E + 1$, a comparison with the direct sum of the decoupled Neumann Laplacians on the separate edges of $\Gamma$ yields
\begin{align*}
\mu_{k + 2} (\Gamma) \geq \frac{\pi^2}{L (e_1)^2}.
\end{align*}
Hence $r := \sqrt{\mu_{k + 2} (\Gamma)}$ satisfies $L (e_1) \geq \pi / r$. Therefore we may consider the graph $\Gamma'$ obtained from $\Gamma$ by removing a piece of length $\pi / r$ from the ``loose end'' of the edge $e_1$.
If we denote by $\psi_{k + 2}$ an eigenfunction of $- \Delta_\Gamma$ corresponding to $r^2$ then on $e_1$, parametrized from its loose end, $\psi_{k + 2}$ is a multiple of $\cos (r x)$, whose derivative vanishes at distance $\pi / r$ from the loose end; hence its restriction to $\Gamma'$ will be an eigenfunction of $- \Delta_{\Gamma'}$. In particular, $r^2$ is an eigenvalue on $\Gamma'$ with the same multiplicity as on $\Gamma$,
\begin{align*}
m := \dim \ker \big(- \Delta_{\Gamma'} - r^2 \big) = \dim \ker \big(- \Delta_\Gamma - r^2 \big).
\end{align*}
Our next aim is to show that
\begin{align}\label{eq:greatDeal}
r^2 = \mu_j (\Gamma') \quad \text{for some}~j \leq k + 1.
\end{align}
Assume the contrary, i.e., $\mu_{k + 1} (\Gamma') < r^2$. If we denote by $I$ the interval of length $\pi/r$ then the disconnected graph consisting of $\Gamma'$ and $I$ as its two connected components has at least $k + 1 + m + 2 = k + m + 3$ eigenvalues in $[0, r^2]$, where we have used that $r^2$ is the second Neumann eigenvalue of $I$. On the other hand, the Laplacian on the disconnected graph is a rank-one perturbation of $- \Delta_\Gamma$; since the latter operator has at most $k + 1 + m$ eigenvalues in $[0, r^2]$, the perturbed operator has at most $k + m + 2$ eigenvalues there, a contradiction. We have proved~\eqref{eq:greatDeal}.\footnote{A slightly more intuitive argument to prove \eqref{eq:greatDeal} goes as follows: generically, the eigenvalue $\mu_{k + 2}$ is simple and its corresponding eigenfunction is a nonzero multiple of $\cos (r x)$ on $e_1$ (assuming $e_1$ is parametrized towards the star vertex) and has exactly $k + 1$ zeroes in $\Gamma$. Cutting away a piece of length $\pi / r$ then leads to an eigenfunction on $\Gamma'$ with exactly $k$ zeroes and, hence, it has to correspond to $\mu_{k + 1} (\Gamma')$. However, the latter argument is less suitable for identifying the maximizers in the next step, since the eigenfunctions of the latter do not satisfy the generic property.}
We may now distinguish two cases. If $L (e_1) = \pi / r$ then $\Gamma'$ is a path graph and
\begin{align*}
\mu_{k + 2} (\Gamma) = r^2 \leq \mu_{k + 1} (\Gamma') = \frac{k^2 \pi^2}{L (\Gamma')^2} = \frac{k^2 \pi^2}{(L (\Gamma) - \pi/r)^2} < \frac{(2 k + 1)^2}{4} \frac{\pi^2}{(L (\Gamma) - \pi/r)^2}.
\end{align*}
Otherwise, $\Gamma'$ is still a 3-star and, by the induction hypothesis,
\begin{align}\label{eq:bigBoss}
\mu_{k + 2} (\Gamma) = r^2 \leq \mu_{k + 1} (\Gamma') \leq \frac{(2 k + 1)^2}{4} \frac{\pi^2}{(L (\Gamma) - \pi/r)^2}
\end{align}
as well. Employing this we obtain
\begin{align*}
r L (\Gamma) - \pi = r (L (\Gamma) - \pi/r) \leq \frac{(2 k + 1) \pi}{2}
\end{align*}
and thus
\begin{align*}
r \leq \frac{(2 k + 3) \pi}{2 L (\Gamma)},
\end{align*}
which is \eqref{eq:inductionBound}.
{\bf Step 4:} among all 3-stars, equality in \eqref{eq:bound3k} implies that the edges have lengths as stated in the theorem. We show this by induction over $k$ again. The case $k = 2$ was treated in Step 2. Suppose that $k \geq 2$ is fixed and that equality holds in \eqref{eq:bound3k} only for the above-stated choice of edge lengths. Assume further that $\Gamma$ is a 3-star for which equality holds in~\eqref{eq:inductionBound}. Then in the reasoning of Step~3 we are in the case that $L (e_1) > \pi / r$ and we must have equality in~\eqref{eq:bigBoss}. But this implies equality in~\eqref{eq:bound3k} with $\Gamma$ replaced by $\Gamma'$, the 3-star obtained from $\Gamma$ by removing a piece of length $\pi/r$ from the loose end of $e_1$. In other words, the 3-star $\Gamma'$ maximizes $\mu_{k + 1} \cA^2$, and from the induction assumption we obtain that $\Gamma'$ has edge lengths $L' (e_1) = \frac{2 k - 1}{2 k + 1} L (\Gamma'), L' (e_2) = L' (e_3) = \frac{1}{2 k + 1} L (\Gamma')$. By construction, the edge lengths of $\Gamma$ are then given by
\begin{align*}
L (e_1) & = L' (e_1) + \pi/r = \frac{2 k - 1}{2 k + 1} \left( L (\Gamma) - \frac{2 L (\Gamma)}{2 k + 3} \right) + \frac{2 L (\Gamma)}{2 k + 3} = \frac{2 k + 1}{2 k + 3} L (\Gamma)
\end{align*}
and, for $j = 2, 3$,
\begin{align*}
L (e_j) = L' (e_j) = \frac{1}{2 k + 1} \left( L (\Gamma) - \frac{2 L (\Gamma)}{2 k + 3} \right) = \frac{1}{2 k + 3} L (\Gamma).
\end{align*}
Hence among 3-stars any maximizers need to have the lengths specified in the theorem.
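As a sanity check, the lengths just computed do sum to the total length:
\begin{align*}
L (e_1) + L (e_2) + L (e_3) = \frac{(2 k + 1) + 1 + 1}{2 k + 3} \, L (\Gamma) = L (\Gamma).
\end{align*}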
{\bf Step 5:} the estimate \eqref{eq:bound} holds on arbitrary trees. For this let now $\Gamma$ be an arbitrary finite, connected tree with $E \geq 3$. Let $e_1, e_2, e_3$ be three edges such that $L (e_1) \geq L (e_2) \geq L (e_3) \geq L (e)$ for all $e \in \cE$, $e \neq e_1, e_2, e_3$. Within $\Gamma$ choose any maximal path $\Pi_1$ which contains $e_1$ and $e_2$ and connects two vertices of degree one. Let $\hat e_3$ be such that $\hat e_3$ is not contained in $\Pi_1$ but has maximal length in $\Gamma \setminus \Pi_1$, i.e., $L (\hat e_3) \geq L (e)$ for all edges $e$ not belonging to $\Pi_1$; if $e_3$ is not part of $\Pi_1$ then we choose $\hat e_3 = e_3$. Furthermore, let $\Pi_2$ denote any path which contains $\hat e_3$ and connects a vertex of degree one with a vertex on $\Pi_1$ without having any joint edge with $\Pi_1$. Then $\cS := \Pi_1 \cup \Pi_2$ is a connected subgraph of $\Gamma$ and it may, after cutting off all further edges of $\Gamma$ and then removing all vertices of degree two, be viewed as a $3$-star. Moreover, by construction the longest edges $e_1, e_2, e_3$ of $\Gamma$ are contained in $\cS$ and hence
\begin{align}\label{eq:avLengths}
\cA (\cS) = \frac{L (\cS)}{3} \geq \frac{L (e_1) + L (e_2) + L (e_3)}{3} \geq \frac{L (\Gamma)}{E (\Gamma)}.
\end{align}
Consequently, by the result of Step 3 and Proposition \ref{prop},
\begin{align}\label{eq:fetzt}
\mu_{k + 1} (\Gamma) \leq \mu_{k + 1} (\cS) \leq \frac{(2 k + 1)^2}{36} \frac{E (\cS)^2 \pi^2}{L (\cS)^2} \leq \frac{(2 k + 1)^2}{36} \frac{E (\Gamma)^2 \pi^2}{L (\Gamma)^2},
\end{align}
which proves \eqref{eq:bound}.
{\bf Step 6:} equality in \eqref{eq:bound} implies that $\Gamma$ is a 3-star and has the edge lengths specified in the theorem. To this end, let us assume that $\Gamma$ is a tree for which equality holds in \eqref{eq:bound} for some $k \geq 2$. It suffices to show that $\Gamma$ is a 3-star; after that the lengths property follows from Step 4. First of all, from the equality in \eqref{eq:bound} we get, in particular, equalities everywhere in \eqref{eq:avLengths} and \eqref{eq:fetzt}. The first (in)equality in~\eqref{eq:avLengths} then implies $L (e_1) + L (e_2) + L (e_3) = L (\cS)$, so that $\cS$ consists only of $e_1, e_2$ and $e_3$; by the construction of $\cS$ this also yields that $e_1, e_2$ and $e_3$ all are incident to vertices of degree one in $\Gamma$. Hence $\cS$ is the 3-star with $e_1, e_2, e_3$ as its edges, and from the equalities in \eqref{eq:fetzt} we get, furthermore, that $\cS$ is a maximizer itself and, by Step 4, has to have edge lengths
\begin{align}\label{eq:yeah}
L (e_1) = \frac{2 k - 1}{2 k + 1} L (\cS), \quad L (e_2) = L (e_3) = \frac{1}{2 k + 1} L (\cS).
\end{align}
It remains to show that $\Gamma = \cS$. In fact, any edge $e$ different from $e_1, e_2, e_3$ necessarily would have to satisfy $L (e) \leq \frac{1}{2 k + 1} L (\cS) < \frac{L (\cS)}{3} = \frac{L (\cS)}{E (\cS)}$, so adding any such edge strictly decreases the average edge length; this would yield $L (\Gamma)/E (\Gamma) < L (\cS) / E (\cS)$, in contradiction to the equality in \eqref{eq:avLengths}. Therefore $\Gamma = \cS$ and \eqref{eq:yeah} is the desired statement on the edge lengths.
{\bf Step 7:} each 3-star with edge lengths $\frac{2 k - 1}{2 k + 1} L$, $\frac{1}{2 k + 1} L$, $\frac{1}{2 k + 1} L$ satisfies
\begin{align}\label{eq:dasGrosseGanze}
\mu_{k + 1} = \mu_{k + 2} = \frac{(2 k + 1)^2 \pi^2}{4 L^2}
\end{align}
and, thus, yields equality in \eqref{eq:bound}.
Firstly, note that the expression in \eqref{eq:dasGrosseGanze} is indeed an eigenvalue of multiplicity two since the function $\cos ( (2 k + 1) \pi x /(2 L))$ along the path consisting of the long and one of the short edges and complemented by zero on the other short edge is an eigenfunction and we may interchange the roles of the two short (and equally long) edges.
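In more detail, parametrize the path formed by $e_1$ and $e_2$ by $x \in [0, \frac{2 k}{2 k + 1} L]$, starting from the loose end of $e_1$, so that the star vertex sits at $x = \frac{2 k - 1}{2 k + 1} L$. The function $\psi (x) = \cos \big( \frac{(2 k + 1) \pi x}{2 L} \big)$ then satisfies
\begin{align*}
\psi' (0) = 0, \qquad \psi' \Big( \tfrac{2 k}{2 k + 1} L \Big) = - \tfrac{(2 k + 1) \pi}{2 L} \sin (k \pi) = 0, \qquad \psi \Big( \tfrac{2 k - 1}{2 k + 1} L \Big) = \cos \Big( \tfrac{(2 k - 1) \pi}{2} \Big) = 0,
\end{align*}
i.e., Neumann conditions at both loose ends and the value zero at the star vertex. Extending by zero on the remaining short edge therefore gives a continuous function, and the Kirchhoff condition at the star vertex holds since $\psi'$ is continuous along the path and the zero edge contributes no derivative.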
Secondly, by the estimate \eqref{eq:bound} proven in Step 1 and 3 we have
\begin{align}\label{eq:erfolg}
\mu_k \leq \frac{(2 k - 1)^2 \pi^2}{4 L^2} < \frac{(2 k + 1)^2 \pi^2}{4 L^2},
\end{align}
and, on the other hand, the disconnected graph $\cD$ consisting of a path formed by $e_1$ and $e_2$ as one connected component and the separated edge $e_3$ as its other connected component satisfies, by an easy computation,
\begin{align*}
\mu_{k + 2} (\cD) = \frac{(2 k + 1)^2 \pi^2}{4 L^2} < \frac{(k + 1)^2 (2 k + 1)^2 \pi^2}{4 k^2 L^2} = \mu_{k + 3} (\cD) \leq \mu_{k + 3} (\Gamma),
\end{align*}
the latter inequality being valid as $- \Delta_\cD$ is a rank-one perturbation of $- \Delta_\Gamma$. Together with \eqref{eq:erfolg} this leaves \eqref{eq:dasGrosseGanze} as the only possibility and completes the proof.
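For the reader's convenience, the easy computation for $\cD$ can be spelled out. The path formed by $e_1$ and $e_2$ has length $\frac{2 k}{2 k + 1} L$ and Neumann spectrum $\big\{ \frac{n^2 (2 k + 1)^2 \pi^2}{4 k^2 L^2} : n = 0, 1, 2, \dots \big\}$, while the separate edge $e_3$ has length $\frac{L}{2 k + 1}$ and Neumann spectrum $\big\{ \frac{m^2 (2 k + 1)^2 \pi^2}{L^2} : m = 0, 1, 2, \dots \big\}$. Ordering the union nondecreasingly, the first $k + 2$ eigenvalues are $0, 0$ and those with $n = 1, \dots, k$, whence
\begin{align*}
\mu_{k + 2} (\cD) = \frac{k^2 (2 k + 1)^2 \pi^2}{4 k^2 L^2} = \frac{(2 k + 1)^2 \pi^2}{4 L^2} \quad \text{and} \quad \mu_{k + 3} (\cD) = \frac{(k + 1)^2 (2 k + 1)^2 \pi^2}{4 k^2 L^2},
\end{align*}
the latter since $\frac{(k + 1)^2}{4 k^2} < 1$ for $k \geq 2$, so the value with $n = k + 1$ still precedes the first positive eigenvalue of $e_3$.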
\end{proof}
\begin{remark}
In Step 7 of the proof of the previous theorem we have seen more than the theorem claims. Indeed, for the 3-stars which maximize $\mu_{k + 1}$ we always have $\mu_{k + 1} = \mu_{k + 2}$. This is in line with other results on isoperimetric inequalities, where for the optimally shaped domains and graphs the eigenvalues in question are often multiple.
\end{remark}
\begin{ack}
The author is grateful to the Swedish Research Council (VR) for supporting this research financially through grant no.\ 2018-04560.
\end{ack}
| {
"timestamp": "2020-09-03T02:02:40",
"yymm": "2006",
"arxiv_id": "2006.11815",
"language": "en",
"url": "https://arxiv.org/abs/2006.11815",
"abstract": "The isoperimetric problem of maximizing all eigenvalues of the Laplacian on a metric tree graph within the class of trees of a given average edge length is studied. It turns out that, up to rescaling, the unique maximizer of the $k$-th positive eigenvalue is the star graph with three edges of lengths $2 k - 1$, $1$ and $1$. This complements the previously known result that the first nonzero eigenvalue is maximized by all equilateral star graphs and indicates that optimizers of isoperimetric problems for higher eigenvalues may be less balanced in their shape -- an observation which is known from numerical results on the optimization of higher eigenvalues of Laplacians on Euclidean domains.",
"subjects": "Spectral Theory (math.SP); Mathematical Physics (math-ph); Analysis of PDEs (math.AP)",
"title": "Quantum trees which maximize higher eigenvalues are unbalanced"
} |
https://arxiv.org/abs/1211.0765 | Holomorphic flexibility properties of the space of cubic rational maps | For each natural number d, the space R_d of rational maps of degree d on the Riemann sphere has the structure of a complex manifold. The topology of these manifolds has been extensively studied. The recent development of Oka theory raises some new and interesting questions about their complex structure. We apply geometric invariant theory to the cases of degree 2 and 3, studying a double action of the Möbius group on R_d. The action on R_2 is transitive, implying that R_2 is an Oka manifold. The action on R_3 has C as a categorical quotient; we give an explicit formula for the quotient map and describe its structure in some detail. We also show that R_3 enjoys the holomorphic flexibility properties of strong dominability and C-connectedness. | \section{Introduction and statement of results} \label{section:intro}
The space of rational maps on the Riemann sphere
can be given the structure of a complex manifold.
The topology of this manifold
(the compact-open topology)
has been studied extensively,
beginning with the work of Segal~\cite{Segal-1979}.
In this paper we study rational maps from a geometric point of view,
motivated by the recent development of Oka theory.
In particular, we are interested in the
holomorphic flexibility properties of
dominability and ${\mathbb C}$-connectedness (defined below),
which can be viewed as opposite to Kobayashi hyperbolicity.
We write $R_d$ for the set of rational maps
of degree~$d$.
Each such map can be written as a quotient of two relatively prime polynomials
whose maximum degree is $d$.
The space ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$
of all rational maps is the union of $R_d$ for $d=0,1,2,\ldots$;
the $R_d$ are exactly the connected components of this space.
Section~\ref{section:context} describes
the complex structure on $R_d$
and gives a brief overview of the relevant concepts from Oka theory.
\medskip
\textbf{Basic question:}
\textit{Is ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$ an Oka manifold?}
\medskip
The question can be approached one degree at a time:
is each component $R_d$ an Oka manifold?
We apply geometric invariant theory,
using the results of Snow~\cite{Snow-1982}.
In particular, the Möbius group ${\PSL_2(\complex)}$
acts on $R_d$ in two ways, by precomposition and postcomposition
(see for example Ono and Yamaguchi~\cite{Ono-Yamaguchi-2003}).
We combine these two actions into a two-sided action
of ${\PSL_2(\complex)}\times{\PSL_2(\complex)}$, described in Section~\ref{section:actions}.
For low degree,
we have $R_0={\mathbb P}^1$ and $R_1={\PSL_2(\complex)}$,
both of which are known examples of Oka manifolds.
For $d=2$, the two-sided group action is transitive:
\begin{maintheorem}[{\cite[Proposition~2.1]{Guest-et-al-1995}}] \label{thm:r2oka}
The space $R_2$ of rational maps of degree~2
is a complex homogeneous manifold.
\end{maintheorem}
At the end of Section~\ref{section:actions} we give
an alternative proof of this result, as an introduction
to the methods used later in this paper.
Complex homogeneous manifolds are always Oka
(see for example \cite[Proposition~5.5.1]{Forstneric-2011}),
so we have the following consequence:
\begin{maincorollary} \label{cor:r2oka}
$R_2$ is an Oka manifold.
\end{maincorollary}
For $d\geq3$ it is presently unknown whether $R_d$ is an Oka manifold.
The ideal situation would be to express $R_d$ as a holomorphic
fibre bundle whose base and fibre are Oka.
This would be sufficient to show that $R_d$ is Oka
(see for example \cite[Theorem~5.5.4]{Forstneric-2011}).
The quotient map of a group action would be a natural candidate for
such a bundle.
The nearest we can get at present
is to exhibit a group action whose \textit{categorical}
(rather than geometric) quotient is Oka,
and to prove the following two weaker properties for $R_3$.
\begin{maindefinition} \label{def:dominable}
Let $X$ be a complex manifold, $p\in X$
and $\phi\colon{\mathbb C}^n\to X$ a holomorphic map with $\phi(0)=p$.
(The number $n$ is not necessarily equal to the dimension of $X$.)
We say that $\phi$ \emph{dominates $X$ at $p$}
if $d\phi_0$ is surjective.
If such a $\phi$ exists, then $X$ is
\emph{dominable at $p$},
and $\phi$ is a \emph{dominating map}.
If $X$ is dominable at every $p\in X$,
then $X$ is \emph{strongly dominable}.
\end{maindefinition}
\begin{maindefinition} \label{def:connected}
A manifold $X$ is \emph{strongly ${\mathbb C}$-connected}
if every pair of points can be joined by an entire curve;
that is, for every pair of points of $X$
there is a holomorphic map ${\mathbb C}\to X$
whose image contains both points.
\end{maindefinition}
\begin{mainremark}
Every Oka manifold is strongly ${\mathbb C}$-connected:
this follows from the basic Oka property
described in \cite[page~16]{Forstneric-Larusson-2011}.
The definition of ``${\mathbb C}$-connected'' is not standardised:
Gromov in \cite[3.4(B)]{Gromov-1989} uses the term
to refer to strong ${\mathbb C}$-connectedness as described here,
while other authors use it to refer to the weaker property
that every pair of points can be joined by
a finite chain of entire curves.
\end{mainremark}
The three main results of this paper
are Theorems~\ref{thm:r3quotient} through \ref{thm:r3connected} below.
\begin{maintheorem} \label{thm:r3quotient}
The categorical quotient for the action of
${\PSL_2(\complex)}\times{\PSL_2(\complex)}$ on $R_3$ by pre- and postcomposition
is ${\mathbb C}$.
\end{maintheorem}
In fact we can give an explicit formula for the quotient map,
as well as a detailed description of the orbits of the group action.
\begin{proof}[Overview of proof]
The quotient map $\pi\colon R_3\to{\mathbb C}$
is constructed in Section~\ref{section:quotient}.
Given $f\in R_3$, the value of $\pi(f)$
is expressed as a rational function of
cross-ratios of critical points and critical values
of $f$.
Section~\ref{subs:s2} introduces the role of symmetric polynomials
in describing this function;
Section~\ref{subs:standard} describes a standard form
for elements of $R_3$ as an aid to computation,
and Section~\ref{subs:cross_ratio} gives an explicit description of $\pi$
on an open subset of $R_3$.
The extension of $\pi$ to the rest of $R_3$ is given in Section~\ref{subs:quotient}.
To show that $\pi$ is the desired quotient map,
we need to describe the orbits of the group action
and determine which orbits are closed.
This is the content of Section~\ref{subs:orbits}.
Then Lemma~\ref{lemma:same_pi} tells us that $\pi$ distinguishes the closed orbits,
Lemma~\ref{lemma:image_of_pi} and Section~\ref{subs:quotient}
tell us that the image of $\pi$
is all of ${\mathbb C}$, and
Corollary~\ref{cor:pi_holo} tells us that $\pi$ is holomorphic.
Remark~\ref{remark:r3quotient} explains why these properties
imply that $\pi$ is the quotient map.
\end{proof}
We can say a little more about the group action:
Section~\ref{subs:stabilisers}
describes the stabilisers of the orbits.
Also, it is interesting to notice
that all the constructions of Section~\ref{section:quotient}
can be carried out within the algebraic category,
whereas the proof of the next two theorems
involves exponential maps.
\begin{maintheorem} \label{thm:r3dominable}
The space $R_3$ of rational maps of degree~3
is strongly dominable.
\end{maintheorem}
\begin{proof}[Overview of proof]
Proposition~\ref{prop:composition}
describes a method of constructing dominating maps,
and the rest of Section~\ref{subs:composition}
gives explicit maps from ${\mathbb C}^8$ to $R_3$.
The building blocks of this construction
are a map $\eta_0$ from a subset of ${\mathbb C}^2$ to $R_3$
that is transverse to the orbits of the group action,
a dominating map from ${\mathbb C}^6$ to the group,
and an embedding of $R_3$ into ${\mathbb P}^7$.
We also need the fact that the domain of $\eta_0$
is dominable: this is Proposition~\ref{prop:chi_surjective}.
Section~\ref{subs:surjective} shows that the image of
$\eta_0$ intersects every orbit of the group action.
This fact allows us to use translates of $\eta_0$
to obtain dominating maps at each point of $R_3$.
Section~\ref{subs:transverse}
contains the proof that $\eta_0$ is transverse to the orbits.
\end{proof}
\begin{maintheorem} \label{thm:r3connected}
The space $R_3$
is strongly ${\mathbb C}$-connected.
\end{maintheorem}
\begin{proof}
The dominating maps of the previous theorem
are in fact surjective: this follows from Proposition~\ref{prop:all_orbits}.
Thus there exists a surjective map ${\mathbb C}^8\to R_3$.
Given $f$ and $g$ in $R_3$,
we can choose any preimages of $f$ and $g$,
join the preimages by an entire curve in ${\mathbb C}^8$,
and compose the entire curve with the surjective map
to obtain an entire curve in $R_3$ joining $f$ and $g$.
\end{proof}
I thank Finnur Lárusson for many helpful discussions
during the preparation of this paper.
\section{Context: rational maps and the parametric Oka~property} \label{section:context}
In Section~\ref{subs:douady} we describe the
complex structure for the spaces $R_d$ of rational maps of degree~$d$.
It is most convenient to use the coefficients of rational maps
as coordinates.
In fact the complex structure thus obtained has an important
universal property,
telling us that it is the right complex structure for our purposes.
In Section~\ref{subs:oka}
we introduce some relevant concepts from Oka theory.
These serve as motivation for the questions addressed in this paper;
however, the definition of an Oka manifold is not directly used.
Consequently, only a brief sketch is given here.
The interested reader can refer to
the survey paper \cite{Forstneric-Larusson-2011}
of Forstnerič and Lárusson for definitions and examples,
or to the book \cite{Forstneric-2011} of Forstnerič
for a more detailed exposition.
\subsection{The space of rational maps} \label{subs:douady}
For each $d=0,1,2,\ldots$,
we can embed $R_d$ (as a set) into $\projspace{2d+1}$
by sending a rational function
$$\frac{a_dz^d+a_{d-1}z^{d-1}+\cdots+a_0}{b_dz^d+b_{d-1}z^{d-1}+\cdots+b_0}$$
to the point with homogeneous coordinates
$$(a_d \colon\! a_{d-1} \colon\! \cdots \colon\! a_0 \colon\! b_d
\colon\! b_{d-1} \colon\! \cdots \colon\! b_0).$$
We introduce a complex structure on $R_d$ as the pullback of the
complex structure on $\projspace{2d+1}$.
The image of $R_d$ under this embedding is an open subset of $\projspace{2d+1}$.
Specifically, the condition for a rational function $p/q$ to belong to $R_d$,
where $p$ and $q$ are polynomials of maximum degree~$d$,
is that $p$ and $q$ should have no common factors.
This is equivalent to the non-vanishing of the resultant of $p$ and $q$,
and so the image of $R_d$ is the complement of the
resultant locus in $\projspace{2d+1}$.
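For instance, for $d=1$ the embedding sends a Möbius transformation $(\alpha z+\beta)/(\gamma z+\delta)$ to the point $(\alpha \colon\! \beta \colon\! \gamma \colon\! \delta)\in\projspace{3}$, and the resultant of the two linear polynomials is
$$\operatorname{Res}(\alpha z+\beta,\,\gamma z+\delta)=\alpha\delta-\beta\gamma,$$
so the image of $R_1$ is the complement of the smooth quadric $\{\alpha\delta-\beta\gamma=0\}$ in $\projspace{3}$.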
The topology induced on $R_d$ by this complex structure
coincides with the compact-open topology
on ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)=\bigcup\limits_{d=0}^\infty R_d$.
In this topology, each $R_d$ is connected,
and the map ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)\to{\mathbb Z}$
sending a rational function to its degree is continuous.
Therefore the connected components of ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$
are precisely the $R_d$.
\begin{proposition}
With the complex structure described above,
the space ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$
is an internal hom-object in the category
of reduced complex spaces and holomorphic maps.
\end{proposition}
\begin{proof}
We wish to show that ${\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$
is a representing object for the functor
${\mathcal O}(- \times {\mathbb P}^1, {\mathbb P}^1)$.
Given a reduced complex space $T$
and a holomorphic map $\phi \colon T\times{\mathbb P}^1\to{\mathbb P}^1$,
define $\tilde\phi \colon T\to{\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$
by $\tilde\phi(t)(x)=\phi(t,x)$.
For each $t$,
the map $\tilde\phi(t)$
is a member of some $R_d$;
by continuity of the degree map,
the degree~$d$ must be constant on each connected component of $T$.
Thus we can use the embedding $R_d\to\projspace{2d+1}$
to write $\tilde\phi$ in local coordinates.
From this, it is easy to see that $\tilde\phi$ is holomorphic.
Let $\eta_T \colon {\mathcal O}(T\times{\mathbb P}^1,{\mathbb P}^1)\to{\mathcal O}(T,{\mathcal O}({\mathbb P}^1,{\mathbb P}^1))$
be the map sending $\phi$ to $\tilde\phi$.
Then $\eta$ is a natural transformation
from the functor
${\mathcal O}(- \times {\mathbb P}^1, {\mathbb P}^1)$ to the functor ${\mathcal O}(-,{\mathcal O}({\mathbb P}^1,{\mathbb P}^1))$.
If $\psi \colon T\to{\mathcal O}({\mathbb P}^1,{\mathbb P}^1)$ is holomorphic,
then $\psi=\tilde\phi$ where $\phi(t,x)=\psi(t)(x)$,
and it is immediate that $\phi$ is holomorphic.
Therefore $\eta_T$ is a bijection,
and $\eta$ is a natural isomorphism.
\end{proof}
This result is implicit in the work
of Kaup~\cite{WKaup-1969}, but is not stated explicitly there.
See also Douady~\cite[Section~10.2]{Douady-1966}
regarding universal properties of mapping spaces.
\subsection{A brief outline of Oka theory} \label{subs:oka}
Informally speaking,
Oka manifolds can be viewed as the opposite
of Kobayashi hyperbolic manifolds.
Hyperbolicity is a type of holomorphic rigidity property;
conversely, Oka manifolds enjoy a variety of holomorphic flexibility properties.
The most concrete expression of flexibility for a manifold $X$
is that there should be ``many'' holomorphic maps ${\mathbb C}\to X$.
This is formalised in Gromov's notion of ellipticity,
introduced in~\cite{Gromov-1989}.
Dominability and ${\mathbb C}$-connectedness (Definitions~\ref{def:dominable}
and~\ref{def:connected}) express weaker versions of the same idea.
There is a chain of implications
$$\text{elliptic}\Rightarrow\text{Oka}\Rightarrow\text{strongly dominable
and strongly }{\mathbb C}\text{-connected};$$
at present, it is unknown whether the reverse implications hold
in general.
(Campana and Winkelmann~\cite[Example~8.3]{Campana-Winkelmann-2012}
give an example of a manifold that is ${\mathbb C}$-connected but not Oka.
However, there are no known examples of manifolds
that are strongly dominable but not Oka.)
Oka manifolds also enjoy a number of homotopy properties.
The simplest is the so-called basic Oka property (BOP):
every continuous map from a Stein manifold to an Oka manifold
is homotopic to a holomorphic map.
Oka manifolds satisfy a stronger version of the BOP
with added approximation and interpolation conditions
(see \cite[page~16]{Forstneric-Larusson-2011} for details).
Forstnerič and Lárusson have identified a number of equivalent
properties which characterise Oka manifolds.
The following is of particular interest
in the context of mapping spaces.
\begin{definition}[Parametric Oka property (POP), simple version]
A manifold $X$ satisfies the \emph{parametric Oka property}
if for every Stein manifold $S$ and every compact subset
$P\subset{\mathbb R}^m$,
every continuous map $f \colon P\times S\to X$
is homotopic to a map
$f_1 \colon P\times S\to X$
such that $f_1(\cdot,x) \colon S\to X$
is holomorphic for every $x\in P$.
\end{definition}
In other words, a family of continuous maps
can be deformed to a family of holomorphic maps
with continuous dependence on the parameter.
This is apparently a stronger condition than the BOP.
However, it turns out that the POP with approximation and interpolation
is equivalent to the BOP with approximation and interpolation;
either condition can be taken as the definition of an Oka manifold
(see \cite[Section~1]{Forstneric-2009}).
It is natural to ask whether continuous dependence on a parameter
can be replaced by holomorphic dependence.
To put it another way, if $P$ is a compact complex manifold,
then is the mapping space ${\mathcal O}(P,X)$
an Oka manifold or similar?
(The results of \cite{Douady-1966}
guarantee that if $P$ is compact,
then ${\mathcal O}(P,X)$ carries a universal complex structure.)
In this paper we begin with the simplest interesting case,
that of $P=X={\mathbb P}^1$.
\section{Group actions and the degree 2 case} \label{section:actions}
The action of the Möbius group ${\PSL_2(\complex)}$
on the Riemann sphere ${\mathbb P}^1$ is sharply 3-transitive:
given any two triples of distinct points of ${\mathbb P}^1$,
there is a unique Möbius transformation taking the first triple to the second.
Because of this transitivity,
the Möbius group is a valuable tool
for simplifying the study of $R_d$ when $d$ is small.
Specifically, rational maps can be composed with Möbius transformations,
giving rise to a number of interesting group actions on $R_d$.
Since ${\PSL_2(\complex)}$ is reductive
(being the complexification of the compact subgroup $\PSU_2$),
we can study its actions using
geometric invariant theory as
described in~\cite{Snow-1982}
(which gives analytic analogues of the
results of~\cite{Luna-1973}).
First there are the pre- and postcomposition actions.
Precomposition is the action $R_d\times{\PSL_2(\complex)}\to R_d$
defined by
$f\cdot g=f\circ g$ for $f\in R_d$ and $g\in{\PSL_2(\complex)}$.
Postcomposition is the action ${\PSL_2(\complex)}\times R_d\to R_d$
defined by $g\cdot f=g\circ f$.
An interesting asymmetry appears here.
It is easy to verify that postcomposition is a free action.
Therefore there is a well defined geometric quotient:
the set of orbits has the structure of a complex manifold,
and the quotient map is a fibre bundle~\cite[Corollary~5.5]{Snow-1982}.
This quotient space is a useful tool
in studying the topology of $R_d$:
see for example \cite{Havlicek-1995} and \cite{Ono-Yamaguchi-2003}.
On the other hand,
the precomposition action is not free.
(For example, consider $f(x)=x^d$ and let $g$ be
multiplication by a $d$th root of unity.)
This group action has received much less attention in the literature.
The two group actions can be combined to give the conjugation action:
$f\cdot g=g^{-1}\circ f\circ g$.
This action is of interest in the study of holomorphic dynamics:
see for example \cite[Section~3]{Milnor-1993}.
In particular, the quotient space of $R_2$ under this action
is ${\mathbb C}^2$
(see also \cite[Section~5]{Silverman-1998}).
For $d>2$, the quotient of $R_d$
is a rational variety~\cite[Section~4]{Levy-2011}.
However, the behaviour of holomorphic flexibility properties
under birational maps is not well understood,
so it is not obvious how to apply this group action
to our present investigations.
The actions can also be considered jointly via the two-sided action
of $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$ on $R_d$:
if $g=(g_1,g_2)\in G$ and $f\in R_d$,
then define $f^g$ by
$$f^g=g_1^{-1}\circ f\circ g_2.$$
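For $g=(g_1,g_2)$ and $h=(h_1,h_2)$ in $G$ we have
$$f^{gh}=(g_1h_1)^{-1}\circ f\circ(g_2h_2)=h_1^{-1}\circ\big(g_1^{-1}\circ f\circ g_2\big)\circ h_2=(f^g)^h,$$
so this is indeed a right action of $G$ on $R_d$.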
The two-sided action, like the precomposition action,
is not free.
Furthermore, for $d\geq3$
there exist non-closed orbits (see Section~\ref{subs:orbits} below),
so the orbit space is not Hausdorff;
the geometric quotient does not exist as a manifold.
However, Snow's main theorem
tells us that the categorical quotient for this action
exists as a reduced complex space.
Since $R_3$ is $7$-dimensional and $G$ is $6$-dimensional,
we expect to find a $1$-dimensional quotient space.
Study of this quotient reveals a great deal about the structure of $R_3$.
This will be explored further
in Section~\ref{section:quotient} below.
For the case $d=2$, the action is transitive.
This can be proved by elementary means,
as in \cite{Guest-et-al-1995}.
An alternative and more intuitive proof
can be obtained by considering rational maps in terms of critical values.
A rational map of degree $d$ can be viewed as a branched
$d$-sheeted covering map ${\mathbb P}^1\to{\mathbb P}^1$.
By the Riemann--Hurwitz formula, there are $2d-2$ critical values
when counted with multiplicity.
Each critical value has multiplicity at most $d-1$,
so there must be at least two distinct critical values.
The Riemann existence theorem
(see for example~\cite[page~49]{Donaldson-2011})
plays a key role in understanding the orbits.
We need only the following special case.
\begin{maintheorem}[Riemann existence theorem, special case]\label{thm:RET}
Let $\Delta$ be a finite subset of ${\mathbb P}^1$,
and $\phi \colon \pi_1({\mathbb P}^1\setminus\Delta)\to\Sym{d}$
a group homomorphism
(where $\Sym{d}$ denotes the symmetric group on $d$ symbols).
Suppose the image of $\phi$ is transitive.
Then there exists a compact connected Riemann surface $X$
and a $d$-fold branched holomorphic covering map
$f \colon X\to{\mathbb P}^1$
with critical values $\Delta$ and monodromy given by $\phi$.
If $f_1 \colon X_1\to{\mathbb P}^1$ and $f_2 \colon X_2\to{\mathbb P}^1$ are two such coverings,
then there exists a biholomorphic map $g \colon X_1\to X_2$
such that $f_2\circ g=f_1$.
\end{maintheorem}
The genus of the surface $X$ is given by the Riemann--Hurwitz formula;
the multiplicities of the critical values
can be calculated from the cycle structure of the permutations
as described in Chapter~1 of \cite{Lando-Zwonkin-2004}.
If the multiplicities sum to $2d-2$,
then the resulting covering map is a rational function ${\mathbb P}^1\to{\mathbb P}^1$,
and the orbit of this rational function under the precomposition
action described above
is uniquely determined by the critical values and monodromy.
In the case of $d=2$, there are exactly two distinct critical values.
For a two-sheeted covering,
there is only one possible monodromy permutation.
Triple transitivity of the group implies
that pairs of critical values are all equivalent under the postcomposition action.
It follows that the two-sided action on $R_2$
has only one orbit.
This proves Theorem~\ref{thm:r2oka}.
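Concretely, the argument shows that the single orbit is that of $q(z)=z^2$: given $f\in R_2$ with (necessarily distinct) critical values $v_1, v_2$, choose $h\in{\PSL_2(\complex)}$ with $h(v_1)=0$ and $h(v_2)=\infty$. Then $h\circ f$ and $q$ are both $2$-sheeted branched coverings of ${\mathbb P}^1$ with critical values $\{0,\infty\}$ and the unique transposition monodromy, so the uniqueness part of Theorem~\ref{thm:RET} yields $g\in{\PSL_2(\complex)}$ with
$$h\circ f\circ g = q, \qquad\text{i.e.,}\qquad f^{(h^{-1},\,g)}=q.$$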
\section{Degree 3: the categorical quotient} \label{section:quotient}
In this section we study the two-sided group action
of $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$ on $R_3$
from the point of view of geometric invariant theory.
The group is reductive,
and $R_3$, being the complement of a hypersurface in ${\mathbb P}^7$,
is Stein.
Therefore the main theorem of Snow's paper \cite{Snow-1982}
tells us that the categorical quotient exists
and is a reduced Stein space.
We will show that the quotient is in fact ${\mathbb C}$,
proving Theorem~\ref{thm:r3quotient} above.
The quotient map is explicitly described
in Section~\ref{subs:cross_ratio},
and the proof is completed in Section~\ref{subs:quotient}.
A rational function of degree~3 has four critical points
and four critical values, counted with multiplicity.
The multiplicity of each critical value is either one or two.
Therefore there are only three possible cases
(all of which occur):
\begin{itemize}
\item[-] four distinct simple critical values;
\item[-] one double and two simple critical values;
\item[-] two double critical values.
\end{itemize}
The set of rational functions with four distinct critical values
is an open subset of $R_3$,
and will be referred to as the \emph{open stratum},
denoted $R_3^O$.
The open stratum is the set of $f\in R_3$
such that the zeros of $f'$ are distinct;
in other words, in local coordinates
it is the complement of the zero locus
of the discriminant of the numerator of $f'$.
Therefore it is an open subset of $R_3$,
in both the compact-open topology
and the Zariski topology.
The complement of $R_3^O$ will be called the
\emph{null fibre},
for reasons that will become clear later.
Most of this section will be concerned with understanding the
orbits in the open stratum.
For each of the other two cases there is a single orbit;
this is proved in Section~\ref{subs:orbits} below.
\begin{mainremark} \label{remark:dimensions}
Since $R_3$ is $7$-dimensional and $G$ is $6$-dimensional,
it is reasonable to expect that the generic orbit will have codimension~1.
In fact, if $g=(g_1,g_2)\in G$ fixes $f\in R_3$,
then $g_1$ must permute the critical values of $f$,
and $g_2$ permutes the critical points.
Since an element of the Möbius group is uniquely determined
by the image of three points,
it follows that if $f$ has at least three critical values
(and therefore at least three critical points),
then the stabiliser of $f$ is finite,
so the orbit is $6$-dimensional.
In the case where $f$ has only two critical values,
the stabiliser can contain a one-parameter subgroup.
For example, $f=x^3$
is fixed by the group element $(a^3x,ax)$
for all $a\in{\mathbb C}^*={\mathbb C}\setminus\{0\}$.
Hence the orbit of such $f$ is at most $5$-dimensional.
\end{mainremark}
More details about the stabilisers are given
in Section~\ref{subs:stabilisers}.
\begin{mainremark} \label{remark:null_codim}
The null fibre, being locally
the zero locus of a discriminant polynomial,
is a proper analytic subvariety of~$R_3$.
Since it contains at least one $6$-dimensional orbit
(the non-closed orbit of Proposition~\ref{prop:orbits}),
it has codimension~1.
\end{mainremark}
\subsection{Cross-ratio and symmetrised cross-ratio} \label{subs:s2}
Given four distinct points (a \emph{quartet}) $z_1,z_2,z_3,z_4\in{\mathbb C}$,
their cross-ratio is the number
$$(z_1,z_2;z_3,z_4)=\frac{(z_1-z_4)(z_2-z_3)}{(z_1-z_3)(z_2-z_4)}.$$
The definition is extended to quartets in ${\mathbb P}^1$
by adopting the convention that $\infty/\infty=1$.
For example,
$(0,\infty;1,\lambda)=(-\lambda\cdot\infty)/(-1\cdot\infty)=\lambda$.
For quartets of distinct points, the cross-ratio can take on any value except
0, 1 or~$\infty$.
The Möbius group ${\PSL_2(\complex)}$ preserves cross-ratio.
Therefore we aim to use the cross-ratio to construct invariant functions
for the action of $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$ on $R_3$.
In particular, we are interested in
the cross-ratios of the critical points
and of the critical values of an element of the open stratum.
There is a technical issue that needs to be addressed:
there is no canonical way of ordering the four critical points or values.
Therefore the ``cross-ratio of the critical points'' is not well defined.
We will address this by symmetrising the cross-ratio.
(This is analogous to the relationship between
the elliptic modular function $\lambda$
and Klein's $j$-invariant
given by $j(\tau)=256(1-\lambda+\lambda^2)^3/(\lambda^2(1-\lambda)^2)$;
the $j$-invariant is a symmetrised version of the modular function.)
Generically, the 24 possible orders of four points
give rise to six cross-ratios.
If one ratio is $\lambda$, then the six ratios are
\begin{equation}\label{eq:sixXratios}
\lambda,\frac{1}{\lambda},1-\lambda,\frac{1}{1-\lambda},
\frac{\lambda}{\lambda-1},\frac{\lambda-1}{\lambda}.
\end{equation}
For most values of $\lambda$ these six numbers are distinct.
If $\lambda$ is one of $-1$, $\tfrac{1}{2}$ or~$2$,
then the six ratios take only three distinct values,
namely $-1$, $\tfrac{1}{2}$ and~$2$.
If $\lambda$ is a primitive sixth root of unity,
i.e.\ $\lambda=e^{\pm\pi i/3}$,
then there are only two distinct cross-ratios,
namely $\lambda$ and $\bar\lambda$.
We will write
$\sigma_1,\ldots,\sigma_6$
for the elementary symmetric functions of six variables.
Thus $\sigma_1(x_1,\ldots,x_6)=x_1+\cdots+x_6$,
$\sigma_2(x_1,\ldots,x_6)=x_1x_2+\cdots+x_5x_6$ (fifteen terms),
and so on up to
$\sigma_6(x_1,\ldots,x_6)=x_1x_2x_3x_4x_5x_6$.
For $k=1,\ldots,6$, define functions
$s_k \colon {\mathbb C}\setminus\{0,1\}\to{\mathbb C}$
by
$$s_k(\lambda)=\sigma_k\left(\lambda,\frac{1}{\lambda},1-\lambda,\frac{1}{1-\lambda},
\frac{\lambda}{\lambda-1},\frac{\lambda-1}{\lambda}\right).$$
It follows that for a quartet $(z_1,z_2,z_3,z_4)$ of distinct points of ${\mathbb P}^1$,
the quantity $s_k((z_1,z_2;z_3,z_4))$
depends only on the set $\{z_1,z_2,z_3,z_4\}$.
Thus we can regard these quantities
as cross-ratios of an unordered set.
Routine calculations (easily verified using a computer algebra system:
see Appendix~\ref{appendix:sage})
show that $s_k$ is constant when $k=1$, $5$ or $6$:
we have $s_1=s_5=3$ and $s_6=1$.
Also $s_4=s_2$ and $s_3=2s_2-5$.
We will only use $s_2$ in the sequel.
It is also worth noting that the function $s_2+3/4$ factorises nicely.
Thus we define $s:{\mathbb C}\setminus\{0,1\}\to{\mathbb C}$ by
\begin{equation} \label{eq:symmXratio}
s(\lambda)=s_2(\lambda)+\frac{3}{4}
=-\frac{(\lambda+1)^2(2\lambda-1)^2(\lambda-2)^2}{4\lambda^2(\lambda-1)^2},
\end{equation}
and we say that the
\emph{symmetrised cross-ratio}
of a set of four distinct points $\{z_1,z_2,z_3,z_4\}$
of ${\mathbb P}^1$ is the number
$s((z_1,z_2;z_3,z_4))$.
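These identities are easy to check by machine. The following sketch uses \texttt{sympy} as a stand-in for the Sage code of Appendix~\ref{appendix:sage} (the choice of system is ours, not the text's); it verifies that $s_1=s_5=3$ and $s_6=1$, the relations $s_4=s_2$ and $s_3=2s_2-5$, and the factorisation \eqref{eq:symmXratio}:

```python
import sympy as sp
from itertools import combinations

lam = sp.symbols('lam')
# the six cross-ratios of a quartet, as in (eq:sixXratios)
ratios = [lam, 1/lam, 1 - lam, 1/(1 - lam), lam/(lam - 1), (lam - 1)/lam]

def s_k(k):
    # k-th elementary symmetric function of the six cross-ratios
    return sp.cancel(sum(sp.prod(c) for c in combinations(ratios, k)))

assert s_k(1) == 3 and s_k(5) == 3 and s_k(6) == 1
assert sp.cancel(s_k(4) - s_k(2)) == 0
assert sp.cancel(s_k(3) - (2*s_k(2) - 5)) == 0

# the factorisation (eq:symmXratio) of s = s_2 + 3/4
target = -((lam + 1)**2*(2*lam - 1)**2*(lam - 2)**2) / (4*lam**2*(lam - 1)**2)
assert sp.cancel(s_k(2) + sp.Rational(3, 4) - target) == 0
```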
\begin{proposition} \label{prop:symmXratio}
The symmetrised cross-ratio
is a complete invariant
for the action of the Möbius group
on unordered sets of four distinct points of ${\mathbb P}^1$.
\end{proposition}
\begin{proof}
The (usual) cross-ratio is a complete invariant
for the action on ordered sets of four points.
Changing the order of the points
transforms the cross-ratio as described above.
Therefore we simply need to show that if
$s(\mu)=s(\lambda)$,
then $\mu$ is one of the six quantities listed above
at \eqref{eq:sixXratios}.
This follows from the following identity,
easily verified by mechanical calculation:
$$
\mu^2(\mu-1)^2(s_2(\lambda)-s_2(\mu))
= (\mu-\lambda)(\mu-\frac{1}{\lambda})\cdots(\mu-\frac{\lambda-1}{\lambda}).
\qedhere
$$
\end{proof}
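The identity at the heart of this proof can likewise be confirmed by computer algebra. A minimal \texttt{sympy} sketch (again our choice of system, standing in for Appendix~\ref{appendix:sage}):

```python
import sympy as sp
from itertools import combinations

lam, mu = sp.symbols('lam mu')

def ratios(x):
    # the six cross-ratios of (eq:sixXratios), as functions of x
    return [x, 1/x, 1 - x, 1/(1 - x), x/(x - 1), (x - 1)/x]

def s2(x):
    # second elementary symmetric function of the six cross-ratios
    return sum(p*q for p, q in combinations(ratios(x), 2))

# the identity used in the proof of Proposition prop:symmXratio
lhs = mu**2*(mu - 1)**2*(s2(lam) - s2(mu))
rhs = sp.prod([mu - r for r in ratios(lam)])
assert sp.cancel(lhs - rhs) == 0
```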
Using this symmetrised cross-ratio,
we define a map $R_3^O\to{\mathbb C}$, also called $s$, as follows.
For $f\in R_3^O$ with critical points $z_1,z_2,z_3,z_4$,
\begin{equation} \label{eq:s}
s(f)= s((z_1,z_2;z_3,z_4)).
\end{equation}
The dependence of the critical points on $f$ is continuous
by Hurwitz's theorem,
and so $s$ is continuous.
\subsection{Closed and non-closed orbits} \label{subs:orbits}
We can think of the categorical quotient
as parametrising the closed orbits of the group action.
Therefore we need to determine which orbits are closed.
First we will deal with the null fibre,
i.e.\ the set of functions in $R_3$
whose critical values are not distinct.
There are two cases.
Suppose $f$ has exactly three critical values:
one double and two simple.
By the transitivity of the postcomposition action,
we see that $f$ lies in the same orbit as a function $g$
with $\infty$ as a double critical value, i.e.\ a polynomial.
We can also assume that 0 is a simple critical value;
and by transitivity of the precomposition action
we can require $g(0)=0$ and $g(\infty)=\infty$.
Such $g$ must be of the form $ax^3+bx^2$
for some $a,b\in{\mathbb C}^*$.
Conversely, every such function has exactly three critical values:
$\infty$ is a double critical value of every cubic polynomial,
and the finite critical values are the two roots
$0$ and $-2b/3a$ of $g'$.
If $g(x)=ax^3+bx^2$, then
$$\frac{a^2}{b^3}g\left(\frac{b}{a}x\right)
=\frac{a^2}{b^3}\left(a\frac{b^3}{a^3}x^3+b\frac{b^2}{a^2}x^2\right)
=x^3+x^2,$$
so $g$, and therefore $f$, is in the same orbit as $x^3+x^2$.
Thus the set of functions with exactly three critical values is a single orbit.
It is easy to see that this orbit is not closed:
the sequence $(x^3+\tfrac{1}{n}x^2)_{n=1}^{\infty}$
converges to $x^3$, which is outside the orbit
because it has only two critical values.
This fact has a geometrical interpretation:
travelling along the sequence, the two simple critical values
of $(x^3+\tfrac{1}{n}x^2)$ get closer and eventually coalesce.
The second case is that of functions with two double critical values.
This is the smallest possible number of critical values
for an element of $R_3$.
Bearing in mind the above geometrical interpretation,
it is immediate that having two critical values is a closed condition:
it is not possible for the critical values to coalesce within $R_3$.
Thus the set of such functions is closed.
Similar arguments to those presented above
show that all such functions lie in the same orbit as $x^3$.
To summarise:
the null fibre consists of exactly two orbits, namely
a non-closed orbit consisting of the functions with exactly three critical values,
and a closed orbit consisting of the functions with exactly two critical values.
Recall (Remark~\ref{remark:dimensions})
that orbits of functions with at least three critical values
are $6$-dimensional, and that any other orbits are of strictly smaller dimension.
It follows that the closed orbit in the null fibre
is the unique orbit in $R_3$ of minimal dimension.
We will see in Theorem~\ref{thm:stabilisers}
that this orbit is in fact $5$-dimensional.
Now we turn our attention to the open stratum $R_3^O$,
i.e.\ the set of functions in $R_3$
with four distinct critical values.
Here we use the map $s \colon R_3^O\to{\mathbb C}$ defined in the previous section.
Since the group action preserves cross-ratio,
each fibre of $s$ is a union of orbits.
Suppose the orbit $f^G$ of $f\in R_3^O$ is not closed.
Since $s$ is continuous,
the closure of $f^G$ is contained in $s^{-1}(s(f))$.
Proposition~2.3 of \cite{Snow-1982}
tells us that the closure of $f^G$
contains an orbit of strictly smaller dimension,
and therefore $s^{-1}(s(f))$ contains orbits of at least two different dimensions.
But since all orbits in the open stratum are $6$-dimensional,
this is impossible.
The following proposition collects together the results obtained so far.
\begin{proposition}\label{prop:orbits}
The orbits for the action of $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$ on $R_3$ are of three types.
\begin{itemize}
\item[-] The points with two distinct critical values form a single orbit,
which is closed and $5$-dimensional.
\item[-]The points with three distinct critical values form a single orbit,
which is non-closed and $6$-dimensional.
The closure of this orbit is its union with the $5$-dimensional orbit.
\item[-]The orbit of a function with four distinct critical values
is closed and $6$-dimensional.
\end{itemize}
\end{proposition}
\subsection{Standard form for cubic rational functions} \label{subs:standard}
The following will be useful as an aid to calculation.
\begin{definition} \label{def:standard_form}
Let $a\in{\mathbb C}$. Then $f_a$ will denote the rational function
$$f_a(x)=\frac{x^2(x+a)}{(2a+3)x-(a+2)}.$$
\end{definition}
\begin{remark} \label{remark:fa}
The function $f_a$ fixes the points
$0$, $1$ and~$\infty$.
If $a$ is $-1$ or $-2$,
then $f_a$ equals $x^2$ or $x(2-x)$ respectively.
Otherwise $f_a\in R_3$,
and $0$, $1$ and $\infty$ are critical points and critical values of~$f_a$.
\end{remark}
\begin{lemma} \label{lemma:standard_form}
Suppose $f\in R_3^O$ has critical points
$0$, $1$, $\infty$ and~$\mu$
and critical values
$0$, $1$, $\infty$ and~$\lambda$,
with $f$ sending $0$, $1$, $\infty$ and~$\mu$
to $0$, $1$, $\infty$ and~$\lambda$ respectively.
Then there exists a unique $a\in{\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$ such that
$f=f_a$.
The values of $a$, $\mu$ and~$\lambda$
are related by the equations
\begin{equation}\mu=-\frac{a(a+2)}{2a+3},\quad
\lambda=\frac{\mu^3}{(a+2)^2},\quad
a=\frac{\mu^3+3\mu\lambda-4\lambda}{2\lambda(1-\mu)}.
\label{eq:a_from_lambda}
\end{equation}
Conversely, $f_a\in R_3^O$
for all $a\in{\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$.
\end{lemma}
\begin{remark}
Given an arbitrary element of $R_3^O$,
we can pre- and postcompose with Möbius transformations
to send three of the critical points and values
to $\{0,1,\infty\}$.
Thus every orbit in $R_3^O$ contains at least one $f_a$.
\end{remark}
\begin{proof}[Proof of lemma]
For 0 to be a double zero of $f$ but not a triple zero,
the numerator of $f$ must take the form $cx^2(x+a)$
for some $a,c\in{\mathbb C}^*$.
Since $f(1)\neq0$, we have the condition $a\neq -1$.
For $\infty$ to map to $\infty$ with multiplicity exactly~2,
the denominator must be a linear polynomial,
say $x+b$ for some $b\in{\mathbb C}^*\setminus\{a\}$.
(We can take the leading coefficient to be 1 because
we have the coefficient $c$ in the numerator.
We require $b\notin\{0,a\}$ in order for $f$ to have degree~3.)
Thus $f$ is of the form
$$f(x)=\frac{cx^2(x+a)}{x+b}.$$
The condition $f(1)=1$ gives
$$\frac{c(1+a)}{1+b}=1,$$
so $b=c(a+1)-1$, and therefore
$$f(x)=\frac{cx^2(x+a)}{x+(a+1)c-1}.$$
The finite critical points are exactly the zeros of $f'$.
These zeros must be $0$, $1$ and~$\mu$.
By the quotient rule,
the numerator of $f'$ is
\begin{equation} \label{eq:cvalue}
cx(3x+2a)(x+(a+1)c-1)-cx^2(x+a).
\end{equation}
Evaluating this at $x=1$ gives
$$c^2(3+2a)(a+1)-c(1+a)=(a+1)c((2a+3)c-1),$$
which vanishes when $a=-1$ or $c=0$
or $c=1/(2a+3)$.
The first two cases are impossible.
Substituting the third value of $c$ into the above expression for $f$,
and dividing the numerator and denominator by $c$, gives
$$f(x)=\frac{x^2(x+a)}{(2a+3)x+a+1-(2a+3)}=\frac{x^2(x+a)}{(2a+3)x-(a+2)},$$
as required.
Now we wish to calculate the values of $\mu$ and $\lambda$.
We have already ensured that 0, 1 and $\infty$ are critical points of $f$;
the fourth critical point is $\mu$.
Substituting the value of $c$ into \eqref{eq:cvalue}
and dividing by the common factor
$cx$ gives
\begin{align*}
&(3x+2a)(x+(a+1)c-1)-x(x+a) \\
&=(3x+2a)(x+\tfrac{a+1}{2a+3}-1)-x(x+a) \\
&= \tfrac{1}{2a+3}((3x+2a)((2a+3)x+a+1-(2a+3))-(2a+3)x(x+a)) \\
&= \tfrac{1}{2a+3}((4a+6)x^2+(2a^2-6)x-2a(a+2)) \\
&= \tfrac{2}{2a+3}((2a+3)x^2+(a^2-3)x-a(a+2)) \\
&= \tfrac{2}{2a+3}(x-1)((2a+3)x+a(a+2)).
\end{align*}
This vanishes at $x=1$ (which we already know to be a critical point)
and at $x=\mu=-a(a+2)/(2a+3)$.
Then $\lambda$ is given by $f(\mu)$; multiplying numerator and denominator
by $(2a+3)^3$ we obtain
\begin{align*}
f(\mu)
&=\frac{a^2(a+2)^2(-a(a+2)+a(2a+3))}{-a(a+2)(2a+3)^3-(a+2)(2a+3)^3} \\
&=\frac{a^2(a+2)^2(a^2+a)}{-(2a+3)^3(a(a+2)+(a+2))} \\
&=\frac{-a^3(a+2)^2(a+1)}{(2a+3)^3(a+1)(a+2)} \\
&=\frac{-a^3(a+2)}{(2a+3)^3} \\
&=\frac{\mu^3}{(a+2)^2},
\end{align*}
as required.
Conversely, $a$ can be calculated from $\mu$ and $\lambda$ as follows.
The first equation of \eqref{eq:a_from_lambda}
can be rearranged to give
\begin{align}
(2a+3)\mu &= -a(a+2), \notag \\
\shortintertext{and so}
a^2+2(\mu+1)a+3\mu &= 0. \label {eq:a_mu_quadratic} \\
\shortintertext{The second equation of \eqref{eq:a_from_lambda} gives}
a^2+4a+4 &= \mu^3/\lambda. \label{eq:a_mu_lambda}
\shortintertext{Subtracting \eqref{eq:a_mu_quadratic}
from \eqref{eq:a_mu_lambda}:}
2(1-\mu)a+4-3\mu &= \mu^3/\lambda, \notag
\end{align}
which gives the required expression for $a$.
Next, we need to identify the ``forbidden'' values of $a$.
These come from the constraints $\mu,\lambda\notin\{0,1,\infty\}$.
We have $\mu=0$ exactly when $a=0$ or $a=-2$,
and $\mu=\infty$ when $a=-3/2$.
The equation $\mu=1$ gives $a(a+2)+2a+3=0$,
which factorises as $a^2+4a+3=(a+1)(a+3)=0$,
eliminating the values $a=-1,-3$.
Looking at $\lambda=0$ and $\lambda=\infty$ gives nothing new.
The equation $\lambda=1$ gives
\begin{align*}
0 &=\lambda-1 \\
&=\mu^3-(a+2)^2 \qquad\text{(if $a\neq-2$)}\\
&=(2a+3)^3(\mu^3-(a+2)^2) \qquad\text{(if $a\neq-3/2$)}\\
&=-a^3(a+2)^3-(a+2)^2(2a+3)^3 \\
&=-(a+2)^2(a^3(a+2)+(2a+3)^3) \\
&=-(a+2)^2(a^4+10a^3+36a^2+54a+27) \\
&=-(a+2)^2(a+1)(a^3+9a^2+27a+27)\\
&=-(a+1)(a+2)^2(a+3)^3,
\end{align*}
so again no new forbidden values are obtained.
Finally, if $a$ is not one of the forbidden values,
then it is clear that we can form the function
$f_a$ of Definition~\ref{def:standard_form},
that it is an element of $R_3$,
and that the corresponding values of $\mu$ and $\lambda$
are not in $\{0,1,\infty\}$, so that $f_a\in R_3^O$.
Thus every value of $a$ in ${\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$ can be realised.
\end{proof}
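The relations of the lemma can also be double-checked mechanically. The following \texttt{sympy} sketch (our own verification aid, in the spirit of Appendix~\ref{appendix:sage}) confirms that $0$, $1$ and $\mu$ are zeros of the numerator of $f_a'$ and that $f_a(\mu)=\lambda$:

```python
import sympy as sp

a, x = sp.symbols('a x')
# the standard form f_a of Definition def:standard_form
f = x**2*(x + a) / ((2*a + 3)*x - (a + 2))
mu = -a*(a + 2)/(2*a + 3)
lam = mu**3/(a + 2)**2

# finite critical points of f_a are the zeros of the numerator of f'
num = sp.numer(sp.together(sp.diff(f, x)))
for pt in (0, 1, mu):
    assert sp.cancel(num.subs(x, pt)) == 0

# the fourth critical value is f_a(mu) = lam
assert sp.cancel(f.subs(x, mu) - lam) == 0
```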
\begin{remark} \label{remark:special_value}
Given $\mu$, equation~\eqref{eq:a_mu_quadratic} in general
gives two possible values of $a$,
and therefore two possible values of $\lambda$.
The discriminant of \eqref{eq:a_mu_quadratic} is
$$\Delta=(2(\mu+1))^2-12\mu=4(\mu^2-\mu+1).$$
Therefore there is a unique value of $a$ exactly when
$\mu=e^{\pm\pi i/3}$;
as mentioned in Section~\ref{subs:s2},
these are the cross-ratio values for which different orderings of the critical points
give only two distinct cross-ratios
rather than the usual six.
\end{remark}
\subsection{Cross-ratio and invariant functions} \label{subs:cross_ratio}
Our goal is to find a complete set of invariants
for the action of $G$.
The function $s$ of \eqref{eq:s}
is invariant,
but we will see in Example~\ref{ex:a_values}
that $s$ is not sufficient to distinguish the closed orbits.
In this section we define a new function $\pi$,
described in \eqref{eq:pi} below,
using the results of the previous section.
We will see that this function is in fact the categorical quotient map.
The definition parallels that of $s$:
the quantity $a$ of Lemma~\ref{lemma:standard_form}
plays the role of the cross-ratio,
Lemma~\ref{lemma:a_transform} plays the role of \eqref{eq:sixXratios},
and the elementary symmetric function $\sigma_2$ is again used.
Let $f\in R_3^O$.
Choose an ordering $\sigma$ of the critical values,
and let $\lambda$ be the cross-ratio
of the critical values in that order.
Each critical value has two preimages,
one of which is a critical point,
so there is an induced ordering of the critical points.
Let $\mu$ be the cross-ratio of the critical points in this order.
\begin{definition} \label{def:signature}
The \emph{signature} of $f$
with respect to $\sigma$
is the pair $(\mu,\lambda)$.
\end{definition}
\begin{lemma} \label{lemma:signature}
Two elements of $R_3^O$ are in the same orbit
if and only if there exist orderings for which they have the same signature.
\end{lemma}
\begin{proof}
Let $f\in R_3^O$ and choose an ordering $\sigma$ of its critical values.
If we precompose $f$ with a Möbius transformation,
then the critical points move but the critical values are unchanged.
Similarly, postcomposition will move the critical values
but leave the critical points unchanged.
Given $\sigma$, there is a unique Möbius transformation $\alpha_1$ moving the first
three critical points to 0, $\infty$ and~1 in order.
Since cross-ratio is preserved, the fourth critical point will be moved to $\mu$.
Similarly, there is a unique Möbius transformation $\alpha_2$
moving the critical values to 0, $\infty$, 1 and~$\lambda$ in order.
Write $f^{(\sigma)}$ for the function $\alpha_2\circ f \circ \alpha_1^{-1}$.
Note that $f^{(\sigma)}$ has critical points 0, 1, $\infty$ and~$\mu$,
critical values 0, 1, $\infty$ and~$\lambda$,
and fixes the points 0, 1 and~$\infty$.
It follows that $f^{(\sigma)}$
is in fact the function $f_a$ of
Definition~\ref{def:standard_form},
for the value of $a$ given by \eqref{eq:a_from_lambda}.
Hence $f^{(\sigma)}$ is uniquely determined
by the signature.
If $f$ and $g$ have the same signature with respect to
orderings $\sigma$, $\rho$,
then $f^{(\sigma)}=g^{(\rho)}$,
and hence $f$ and~$g$ are in the same orbit.
Conversely, suppose $f$ and~$g$ are in the same orbit,
and choose an ordering $\sigma$ for $f$.
Then there exist Möbius transformations $\beta_1$ and $\beta_2$
such that $\beta_2\circ g\circ\beta_1^{-1}=f^{(\sigma)}$.
Taking the critical values of $g$ in the ordering $\rho$ given by
$\beta_2^{-1}(0)$, $\beta_2^{-1}(\infty)$, $\beta_2^{-1}(1)$, $\beta_2^{-1}(\lambda)$,
we see that $g^{(\rho)}=f^{(\sigma)}$.
\end{proof}
We would like to use the quantity $a$ of Lemma~\ref{lemma:standard_form}
to parametrise the orbits.
However, an orbit can contain more than one $f_a$.
The situation is analogous to that of a quartet of points
having more than one cross-ratio,
and we resolve it in the same way, by symmetrising
with respect to the set of values that can occur.
\begin{lemma} \label{lemma:a_transform}
Let $a,b\in{\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$.
Then $f_a$ and $f_b$ are in the same orbit
if and only if $b$ is in the set
$$
\left\{
a,\>-\frac{2a+3}{a+2},\> -(a+3),\> -\frac{a}{a+1},\>
-\frac{2a+3}{a+1},\> -\frac{a+3}{a+2}
\right\}.$$
\end{lemma}
\begin{proof}
By Remark~\ref{remark:fa},
$f_a$ has critical points $0$, $1$, $\infty$ and $\mu$,
and critical values $0$, $1$, $\infty$ and $\lambda$,
for some $\mu,\lambda\in{\mathbb P}^1$.
Therefore $f_a$ has signature $(\mu,\lambda)$.
Since $a\not\in\{0,-1,-3/2,-2,-3\}$,
Lemma~\ref{lemma:standard_form} implies that
$f_a\in R_3^O$, so
$\mu,\lambda\in{\mathbb C}\setminus\{0,1\}$,
and
$$a=\frac{\mu^3+3\mu\lambda-4\lambda}{2\lambda(1-\mu)}.$$
Similarly, let $\mu'$ and $\lambda'$ be the fourth critical point
and critical value respectively of $f_b$, so that
$f_b$ has signature $(\mu',\lambda')$
and
$$b=\frac{\mu'^3+3\mu'\lambda'-4\lambda'}{2\lambda'(1-\mu')}.$$
If the critical points and critical values of $f_a$
are taken in a different order,
then the cross-ratios change as described in \eqref{eq:sixXratios}.
Thus the signatures of $f_a$ with respect to the
various orderings are
$$
\left(\mu,\lambda\right),
\left(\tfrac{1}{\mu}, \tfrac{1}{\lambda}\right),
\left(1-\mu, 1-\lambda\right),
\left(\tfrac{1}{1-\mu}, \tfrac{1}{1-\lambda}\right),
\left(\tfrac{\mu}{\mu-1}, \tfrac{\lambda}{\lambda-1}\right),
\left(\tfrac{\mu-1}{\mu}, \tfrac{\lambda-1}{\lambda}\right).
$$
It follows from Lemma~\ref{lemma:signature}
that $f_a$ and $f_b$ are in the same orbit
if and only if $(\mu',\lambda')$
equals one of the six pairs listed above.
We will show that $(\mu',\lambda')=(1/\mu, 1/\lambda)$
if and only if $b=-(2a+3)/(a+2)$.
The other cases are handled similarly.
First suppose that $\mu'=1/\mu$ and $\lambda'=1/\lambda$.
Using \eqref{eq:a_from_lambda},
we have
\begin{align*}
b &= \frac{\mu^{-3}+3\mu^{-1}\lambda^{-1}-4\lambda^{-1}}{2\lambda^{-1}(1-\mu^{-1})} \\
&= \frac{\mu^{-3}\lambda+3\mu^{-1}-4}{2(1-\mu^{-1})} \\
&=\frac{(a+2)^{-2}-3(2a+3)a^{-1}(a+2)^{-1}-4}
{2(1+(2a+3)a^{-1}(a+1)^{-1})} \\
&=\frac{a-3(2a+3)(a+2)-4a(a+2)^2}
{2(a(a+2)^2+(2a+3)(a+2))} \\
&=\frac{a-3(2a^2+7a+6)-4a(a^2+4a+4)}
{2(a+2)(a^2+2a+2a+3)} \\
&=\frac{-4a^3-22a^2-36a-18}
{2(a+2)(a^2+4a+3)} \\
&=-\frac{2a^3+11a^2+18a+9}
{(a+2)(a+1)(a+3)} \\
&=-\frac{(a+1)(2a+3)(a+3)}
{(a+1)(a+2)(a+3)} \\
&=-\frac{2a+3}
{a+2}.
\intertext{Conversely, if $b=-(2a+3)/(a+2)$, then}
\mu'&=-\frac{b(b+2)}{2b+3} \\
&=\frac{2a+3}{a+2}\cdot\frac{1}{a+2}\cdot\frac{-(a+2)}{a} \\
&=-\frac{2a+3}{a(a+2)} \\
&=1/\mu,
\end{align*}
and similarly $\lambda'=1/\lambda$.
\end{proof}
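The case treated explicitly above can be confirmed by machine. A short \texttt{sympy} sketch (our choice of system), assuming only the formulae of \eqref{eq:a_from_lambda}:

```python
import sympy as sp

a = sp.symbols('a')
mu = -a*(a + 2)/(2*a + 3)       # fourth critical point of f_a
lam = mu**3/(a + 2)**2          # fourth critical value of f_a
b = -(2*a + 3)/(a + 2)          # the candidate value from the lemma
mu_b = mu.subs(a, b)            # fourth critical point of f_b
lam_b = lam.subs(a, b)          # fourth critical value of f_b

# b = -(2a+3)/(a+2) gives the signature (1/mu, 1/lambda)
assert sp.cancel(mu_b - 1/mu) == 0
assert sp.cancel(lam_b - 1/lam) == 0
```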
\begin{example} \label{ex:a_values}
Recall that for most choices of $\mu$
there are two values of $a$ (Remark~\ref{remark:special_value}).
The two corresponding $f_a$
may or may not belong to the same orbit.
For example, if we take $\mu=2$,
then we find that $a=-3\pm\sqrt3$.
But if $a=-3+\sqrt3$, then
$-a/(a+1)=-3-\sqrt3$,
and so it follows from the lemma that there is only one
orbit corresponding to $\mu=2$.
On the other hand, $\mu=5$ yields
$a=-6\pm\sqrt{21}$.
By the lemma, these two values of $a$ correspond to distinct orbits.
This justifies our earlier claim that
the function $s$ of Section~\ref{subs:s2}
is not sufficient to distinguish the closed orbits.
\end{example}
By analogy with the symmetrised cross-ratio
of \eqref{eq:symmXratio},
we define a function
$\pi \colon {\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}\to{\mathbb C}$
by
\begin{align}
\pi(a) &= \sigma_2( a, -\tfrac{2a+3}{a+2}, -(a+3), -\tfrac{a}{a+1},
-\tfrac{2a+3}{a+1}, -\tfrac{a+3}{a+2}) -\tfrac{117}{4} \notag \\
&= -\frac{a^2(2a+3)^2(a+3)^2}{4(a+1)^2(a+2)^2}. \label{eq:pi}
\end{align}
The term $-\tfrac{117}{4}$ is again chosen to enable a nice factorisation,
and also ensures that $\pi(a)\to 0$ as $a\to 0$.
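Both the simplification in \eqref{eq:pi} and the invariance of $\pi$ under the six substitutions of Lemma~\ref{lemma:a_transform} can be checked mechanically; a \texttt{sympy} sketch (our stand-in for the Sage code of Appendix~\ref{appendix:sage}), writing the simplified form as $-a^2(2a+3)^2(a+3)^2/(4(a+1)^2(a+2)^2)$:

```python
import sympy as sp
from itertools import combinations

a = sp.symbols('a')
# the six values of Lemma lemma:a_transform
orbit = [a, -(2*a + 3)/(a + 2), -(a + 3), -a/(a + 1),
         -(2*a + 3)/(a + 1), -(a + 3)/(a + 2)]

sigma2 = sum(x*y for x, y in combinations(orbit, 2))
pi_a = sp.cancel(sigma2 - sp.Rational(117, 4))
closed = -a**2*(2*a + 3)**2*(a + 3)**2 / (4*(a + 1)**2*(a + 2)**2)
assert sp.cancel(pi_a - closed) == 0

# pi takes the same value at every point of the orbit of a
for b in orbit[1:]:
    assert sp.cancel(closed.subs(a, b) - closed) == 0
```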
Define a map from $R_3^O$ to ${\mathbb C}$,
also called $\pi$, by
\begin{equation} \label{eq:pi_on_R3}
\pi(f)=\pi(a)\text{ where } f \text{ is in the same orbit as } f_a.
\end{equation}
It follows from Lemma~\ref{lemma:a_transform}
and the use of the symmetric polynomial $\sigma_2$ in
\eqref{eq:pi}
that $\pi(f)$ is well defined.
\begin{lemma} \label{lemma:pi_holo}
The map $\pi$ is holomorphic on the open stratum.
\end{lemma}
\begin{proof}
Given $f\in R_3^O$
let $\lambda$ be the cross-ratio of the critical values of $f$
taken in some order,
and let $\mu$ be the cross-ratio of the critical points in the
corresponding order.
Then the value of $a$ is given by \eqref{eq:a_from_lambda}.
It follows that $\pi(f)$ is a holomorphic function
of $\lambda$ and~$\mu$.
A straightforward application of the argument principle
shows that, locally,
as $f$ varies holomorphically then so does each critical point,
and therefore so does each critical value.
Hence the dependence of $\mu$ and $\lambda$ on $f$ is holomorphic,
and so $\pi$ is holomorphic.
\end{proof}
\begin{remark}
The dependence of $\pi(f)$ on $\mu$ and $\lambda$
can be described explicitly:
substituting the formulae of \eqref{eq:a_from_lambda}
into \eqref{eq:pi} yields
$$
\pi(\mu,\lambda)=\frac
{p(\mu,\lambda)}
{4 \lambda^2 (\lambda - 1)^2 \mu^4 (\mu - 1)^4}
$$
where
\begin{align*}
p(\mu,\lambda) =
& - \lambda^{2} \mu^{12} + 6 \lambda^{2} \mu^{11} + \lambda \mu^{12} + 160
\lambda^{4} \mu^{8} + 54 \lambda^{3} \mu^{9} - 45 \lambda^{2} \mu^{10}\\
& - 4 \lambda \mu^{11} - 640 \lambda^{4} \mu^{7} - 563 \lambda^{3} \mu^{8}
+ 89 \lambda^{2} \mu^{9} + 34 \lambda \mu^{10} - \mu^{11}\\
& - 44 \lambda^{5} \mu^{5} +
1044 \lambda^{4} \mu^{6} + 1676 \lambda^{3} \mu^{7} + 173 \lambda^{2} \mu^{8} -
98 \lambda \mu^{9} + \mu^{10}\\
& + 110 \lambda^{5} \mu^{4} - 782 \lambda^{4} \mu^{5}
- 2340 \lambda^{3} \mu^{6} - 782 \lambda^{2} \mu^{7} + 110 \lambda \mu^{8}\\
& + \lambda^{6} \mu^{2} - 98 \lambda^{5} \mu^{3} + 173 \lambda^{4}
\mu^{4} + 1676 \lambda^{3} \mu^{5} + 1044 \lambda^{2} \mu^{6} - 44 \lambda \mu^{7}\\
& - \lambda^{6} \mu + 34 \lambda^{5} \mu^{2} + 89 \lambda^{4} \mu^{3} -
563 \lambda^{3} \mu^{4} - 640 \lambda^{2} \mu^{5} - 4 \lambda^{5} \mu\\
& - 45
\lambda^{4} \mu^{2} + 54 \lambda^{3} \mu^{3} + 160 \lambda^{2} \mu^{4} +
\lambda^{5} + 6 \lambda^{4} \mu - \lambda^{4}.
\end{align*}
\end{remark}
\begin{lemma} \label{lemma:same_pi}
Two elements $f_1$ and $f_2$ of $R_3^O$ are in the same orbit
if and only if $\pi(f_1)=\pi(f_2)$.
\end{lemma}
\begin{proof}
The forward implication is immediate from the definition.
Suppose $f_1$ is in the same orbit as $f_{a_1}$,
and $f_2$ in the same orbit as $f_{a_2}$.
Then a mechanical calculation gives
\begin{align*}
& \pi(f_2)-\pi(f_1) = \\
& \tfrac{
(a_1-a_2)(a_1+a_2+3)(a_1a_2+a_1+a_2)(a_1a_2+a_1+2a_2+3)
(a_1a_2+2a_1+a_2+3)(a_1a_2+2a_1+2a_2+3)
}
{(a_1+1)^2(a_1+2)^2(a_2+1)^2(a_2+2)^2}.
\end{align*}
If the right hand side vanishes, then one of the factors of the numerator must vanish.
Each factor corresponds to one of the expressions
of Lemma~\ref{lemma:a_transform}.
For example,
$a_2=-(2a_1+3)/(a_1+1)$ is equivalent to
$a_1a_2+2a_1+a_2+3=0$.
Hence if $\pi(f_1)=\pi(f_2)$, then
Lemma~\ref{lemma:a_transform} implies that
$f_{a_1}$ and $f_{a_2}$ are in the same orbit,
and so $f_1$ and $f_2$ are in the same orbit.
\end{proof}
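The mechanical calculation above can be reproduced with \texttt{sympy} (our choice of system). In the sketch below $\pi$ is defined directly via $\sigma_2$, so the check is independent of the simplified form \eqref{eq:pi}:

```python
import sympy as sp
from itertools import combinations

a, a1, a2 = sp.symbols('a a1 a2')
orbit = [a, -(2*a + 3)/(a + 2), -(a + 3), -a/(a + 1),
         -(2*a + 3)/(a + 1), -(a + 3)/(a + 2)]
# pi defined via sigma_2 as in (eq:pi)
pi_a = sum(x*y for x, y in combinations(orbit, 2)) - sp.Rational(117, 4)

# the claimed factorisation of pi(f2) - pi(f1)
factors = ((a1 - a2)*(a1 + a2 + 3)*(a1*a2 + a1 + a2)
           *(a1*a2 + a1 + 2*a2 + 3)*(a1*a2 + 2*a1 + a2 + 3)
           *(a1*a2 + 2*a1 + 2*a2 + 3))
claimed = factors / ((a1 + 1)**2*(a1 + 2)**2*(a2 + 1)**2*(a2 + 2)**2)

diff = pi_a.subs(a, a2) - pi_a.subs(a, a1) - claimed
assert sp.cancel(diff) == 0
```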
\begin{lemma} \label{lemma:image_of_pi}
The image of the open stratum under $\pi$
is ${\mathbb C}^*$.
\end{lemma}
\begin{proof}
We need to determine the values of $c\in{\mathbb C}$ for which the equation
\begin{equation*}
-\frac{a^2(2a+3)^2(a+3)^2}{4(a+1)^2(a+2)^2} = c
\end{equation*}
has a solution $a\in{\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$.
We can multiply through to obtain
\begin{equation}
p(a,c)=a^2(2a+3)^2(a+3)^2 +4c(a+1)^2(a+2)^2 = 0.\label{eq:image_of_pi}
\end{equation}
For fixed $c$, this is a nonconstant polynomial in $a$,
and so always has a root in ${\mathbb C}$.
For which $c$ does there exist a solution in ${\mathbb C}\setminus\{0,-1,-3/2,-2,-3\}$?
If $a=-1$ or $-2$, then the second term of $p$ vanishes:
the value of $p$ is independent of $c$,
and is nonzero.
If $a=0$ or $-3/2$ or $-3$, then the first term of $p$ vanishes:
hence we find $c=0$.
Thus for nonzero $c$, the solutions $a$ of \eqref{eq:image_of_pi}
never lie in the set $\{0,-1,-3/2,-2,-3\}$,
and so $\pi$ maps $R_3^O$ onto~${\mathbb C}^*$.
\end{proof}
\subsection{The quotient map} \label{subs:quotient}
In the previous section we defined a holomorphic map
$\pi\colon R_3^O\to{\mathbb C}^*$.
We extend $\pi$ to all of $R_3$
by defining $\pi(f)=0$ whenever $f$ is in the null fibre.
\begin{lemma} \label{lemma:pi_conts}
The map $\pi \colon R_3\to{\mathbb C}$ is continuous.
\end{lemma}
\begin{proof}
We only need to prove continuity at the null fibre.
That is, we want to show that $\pi(f)\to0$
as $f$ approaches the null fibre.
First we give an intuitive picture of the situation;
a precise calculation follows.
We start by examining the limiting cases for the formulae
given in Lemma~\ref{lemma:standard_form}
as the parameter $a$ approaches a ``forbidden'' value or $\infty$.
Recall that the restrictions on $a$ arise
from requiring the critical points and critical values to be distinct.
As $a$ tends towards a forbidden value,
$\mu$, $\lambda$ and $f_a$ behave as in the following table.
\bigskip
\begin{tabular}{c|c|c|l}
$a$ & $\mu$ & $\lambda$ & $f$ tends to \\
\hline
0 & 0 & 0 & $x^3/(3x-2)$ \\
$-2$ & 0 & 0 & $x(2-x)$ \\
\hline
$-3$ & 1 & 1 & $x^2(x-3)/(1-3x)$ \\
$-1$ & 1 & 1 & $x^2$ \\
\hline
$-3/2$ & $\infty$ & $\infty$ & $x^2(3-2x)$ \\
$\infty$ & $\infty$ & $\infty$ & $x^2/(2x-1)$ \\
\end{tabular}
\bigskip
As $a$ approaches $-1$, $-2$ or $\infty$,
we see that the degree of $f_a$ drops,
so $f_a$ ``falls out of $R_3$'' as the critical points coalesce.
However, for the other cases,
$f_a$ approaches an element of the null fibre.
The idea of the proof, informally, is
that as we approach the null fibre in $R_3$,
the value of $a$ must approach $0$, $-3/2$ or $-3$.
These are exactly the values for which \eqref{eq:pi}
takes on the value~0.
Let us make this more precise.
Suppose $(f_n)_{n=1}^{\infty}$ is a sequence in $R_3^O$
tending to an element of the null fibre.
For each $n$, choose an ordering $\sigma_n$ of the critical points of $f_n$,
and Möbius transformations taking the critical points in order
to $\{0,1,\infty,\mu_n\}$
and the corresponding critical values to $\{0,1,\infty,\lambda_n\}$.
Furthermore, choose $\sigma_n$ so that $\mu_n$ is at least as close
to $0$ (in the spherical metric, say)
as it is to $1$ or $\infty$.
With respect to this choice of $\sigma_n$, we must have $\mu_n\to0$
as $n\to\infty$.
Using \eqref{eq:a_from_lambda},
if we let
$$ a_n = \frac{\mu_n^3+3\mu_n\lambda_n-4\lambda_n}{2\lambda_n(1-\mu_n)}, $$
then we have
$$\pi(f_n)= \frac{a_n^2(2a_n+3)^2(a_n+3)^2}{4(a_n+1)^2(a_n+2)^2} =O(a_n^2).$$
Also
$$\mu_n=-\frac{a_n(a_n+2)}{2a_n+3}=O(a_n).$$
So as $n\to\infty$ and $\mu_n\to0$
we have $a_n\to0$ and therefore $\pi(f_n)\to0$.
\end{proof}
\begin{corollary} \label{cor:pi_holo}
The map $\pi \colon R_3\to{\mathbb C}$ is holomorphic.
\end{corollary}
\begin{proof}
We already know that
$\pi$ is holomorphic outside the null fibre (Lemma~\ref{lemma:pi_holo}).
Recall (Remark~\ref{remark:null_codim})
that the null fibre has codimension~1.
Since $\pi$ is continuous at the null fibre,
it follows from Riemann's removable singularity theorem
that $\pi$ is holomorphic everywhere.
\end{proof}
\begin{remark} \label{remark:r3quotient}
Lemma~\ref{lemma:same_pi}
tells us that $\pi$ is constant on the orbits
and distinguishes the closed orbits,
and Lemma~\ref{lemma:image_of_pi} tells us that $\pi$ is surjective.
Informally, this means that
$\pi$ does the best possible job of distinguishing
the orbits---it is not possible for a holomorphic function
(or even a continuous function) to distinguish
the two orbits of the null fibre, since one is inside the closure
of the other---and so $\pi$ by itself forms a complete set of invariant functions
for the group action.
Therefore $\pi$ is the categorical quotient map,
proving Theorem~\ref{thm:r3quotient}.
This argument can be expressed more rigorously.
We know that the categorical quotient map $\pi' \colon R_3\to Y$
exists~\cite[page~70]{Snow-1982}.
Furthermore, $Y$ is a reduced complex space
and inherits from $R_3$
the properties of being connected, irreducible and normal~\cite[page~84]{Snow-1982}.
By the universal property of the quotient~\cite[Lemma~3.1]{Snow-1982},
there exists a unique holomorphic map $\alpha \colon Y\to{\mathbb C}$
such that $\pi=\alpha\circ\pi'$.
From the above mentioned properties of $\pi$,
it follows that $\alpha$ is a bijection.
The difficulty is that $Y$ may have singularities,
so we cannot assume immediately that $\alpha^{-1}$ is holomorphic.
In our case, however,
$Y$ is irreducible and reduced, and therefore pure-dimensional.
Since there exists a holomorphic bijection $Y\to{\mathbb C}$, the dimension must be~$1$.
But a $1$-dimensional normal space is necessarily smooth.
It follows that $\alpha$ is a biholomorphism,
and so $\pi \colon R_3\to{\mathbb C}$ is also a categorical quotient map.
This completes the proof of Theorem~\ref{thm:r3quotient}.
\end{remark}
\subsection{Further structure: stabilisers and the exceptional orbit} \label{subs:stabilisers}
From the point of view of Oka theory,
we would like to know whether $\pi$
is an Oka map, in the sense defined in \cite[Definition~6.3]{Forstneric-Larusson-2011}.
A necessary condition is that $\pi$ should be a topological fibration.
In fact it is not.
This can be seen by studying the stabilisers of elements of $R_3$
and applying the results of \cite[Section~2.3]{Rainer-2009}.
Recall that if the cross-ratio of four points in some order
is $e^{\pm\pi i/3}$,
then the six cross-ratios listed in \eqref{eq:sixXratios}
take on only the two distinct values $e^{\pm\pi i/3}$.
\begin{definition}\label{def:exceptional_orbit}
The \emph{exceptional orbit}
is the set of elements of the open stratum of $R_3$
whose critical points, taken in some order,
have a cross-ratio of $e^{\pm\pi i/3}$.
\end{definition}
\begin{remark} \label{remark:exceptional_orbit}
The exceptional orbit is in fact a single orbit of the group action.
Straightforward calculations show that this orbit is
$\pi^{-1}(-27/4)$:
we can use \eqref{eq:a_mu_quadratic}
to find $a=-e^{\pm\pi i/3}-1$,
and for both choices of sign, \eqref{eq:pi} gives
the value $-27/4$.
Note also that $e^{\pm\pi i/3}$
are the special cross-ratio values referred to in Remark~\ref{remark:special_value}.
\end{remark}
\begin{theorem}\label{thm:stabilisers}
The stabilisers of elements of $R_3$ are of four types:
\begin{enumerate}[itemindent=-.1cm]
\item An element of the closed orbit in the null fibre has stabiliser of dimension~1.
Hence this orbit is $5$-dimensional.
The stabiliser is the nontrivial semidirect product of ${\mathbb C}^*$
and ${\mathbb Z}_2$.
\item An element of the non-closed orbit has finite stabiliser of size~2.
\item An element of the exceptional orbit has finite stabiliser
of size~12, isomorphic to the alternating group on four symbols.
\item An element of the open stratum outside the exceptional orbit
has finite stabiliser of size~4, isomorphic to the Klein 4-group.
Furthermore, all such stabilisers are conjugate.
\end{enumerate}
\end{theorem}
\begin{remark}
By \cite[Corollary 5.5]{Snow-1982},
the restriction of the quotient map
to the open stratum minus the exceptional orbit
is a holomorphic fibre bundle.
The fibres are homogeneous spaces for $G$,
therefore Oka manifolds,
and so $\pi$ restricted to this domain is an Oka map.
This is the maximal open subset of $R_3$ over which
$\pi$ is a fibration.
\end{remark}
\begin{proof}[Proof of theorem]
Let $f\in R_3$ and $g=(\alpha,\beta)\in G$
such that $f^g=f$.
Then $\alpha^{-1}\circ f\circ\beta=f$.
In particular, $f$ and $\alpha^{-1}\circ f\circ\beta$
have the same critical points
and the same critical values.
It follows that $\alpha$ permutes the critical values of $f$
and $\beta$ permutes the critical points of $f$.
Furthermore, both permutations must preserve multiplicities.
\textit{Case (1):} We can choose $f=x^3$ as a representative of the
$5$-dimensional orbit.
The critical points are $0$ and $\infty$, each with multiplicity~2,
and the critical values are the same.
The only Möbius transformations
fixing the set $\{0,\infty\}$
are $x\mapsto cx$ and $x\mapsto c/x$ with $c\in{\mathbb C}^*$.
Choosing $\alpha$ and $\beta$ to be of this form,
and adding the restriction that $\alpha^{-1}\circ f\circ\beta=f$,
we obtain an explicit realisation of the stabiliser as
$$\{(c^3x,cx):c\in{\mathbb C}^*\}\cup \{(c^3/x,c/x):c\in{\mathbb C}^*\}.$$
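As a check, for $(\alpha,\beta)=(c^3x,cx)$ from the first family we have
$$(\alpha^{-1}\circ f\circ\beta)(x)=\frac{(cx)^3}{c^3}=x^3=f(x),$$
and the second family is verified in the same way.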
\textit{Case (2):} Similarly, take $f=x^3+x^2$
as a representative of the non-closed orbit.
This has a double critical value of $\infty$ with preimage $\infty$,
and finite critical points and values $0\mapsto0$ and $-2/3\mapsto 4/27$.
(The finite critical points are simply the zeros of the derivative
$3x^2+2x$.)
Suppose $\alpha^{-1}\circ f \circ\beta=f$.
Then $\alpha$ and $\beta$ must both fix $\infty$.
This means that they are both of the form $x\mapsto ax+b$
for some $a\in{\mathbb C}^*$ and $b\in{\mathbb C}$.
Also, $\alpha$ must either fix or interchange
the points $0$ and $4/27$.
Thus $\alpha$ is either the identity or the map $x\mapsto 4/27-x$.
Similarly, $\beta$ is either the identity or $x\mapsto -2/3-x$.
If we set $\alpha(x)=4/27-x$ and $\beta(x)=-2/3-x$,
noting that $\alpha^{-1}=\alpha$,
then we can calculate:
\begin{align*}
\alpha\circ f &=4/27-f\neq f, \\
f \circ\beta &= (-2/3-x)^2(-2/3-x+1) \\
&= (4/9+4x/3+x^2)(1/3-x) \\
&= 4/27-x^2-x^3 \neq f,\\
\alpha\circ f\circ\beta &=4/27-(4/27-x^2-x^3) =f.
\end{align*}
and so the stabiliser is $\{(1,1),(\alpha,\beta)\}$
which has size~2 as stated above.
\textit{Cases (3) and (4), descriptions of the stabilisers:}
For orbits of $R_3^O$,
we can choose a representative $f$
as described in Lemma~\ref{lemma:standard_form},
with distinct critical points
$0$, $1$, $\infty$ and~$\mu$
and distinct critical values
$0$, $1$, $\infty$ and~$\lambda$.
Step 1: If $\alpha^{-1}\circ f\circ \beta=f$,
then $\alpha$ must permute the four critical points of $f$,
and $\beta$ must permute the four critical values;
furthermore, $\beta$ must induce the same permutation as $\alpha$.
Since a Möbius transformation is determined by the images of three points,
this greatly restricts the possibilities for $(\alpha,\beta)$.
To be specific, we can choose an ordering $(z_1,z_2,z_3,z_4)$
of $(0,1,\infty,t)$ (where $t$ can stand for $\lambda$ or $\mu$),
find the unique Möbius transformation $g$ sending $(0,1,\infty)$
to $(z_1,z_2,z_3)$, and check whether $g(t)=z_4$.
The 24 possibilities are listed in Appendix~\ref{appendix:table}.
For generic values of $t$,
there are only four permutations,
given by the rows of the table
with ``any'' in the fourth column.
For each permutation
we can calculate $\alpha^{-1}\circ f\circ \beta=f$ explicitly
and verify that $(\alpha,\beta)$ does indeed stabilise $f$.
The nontrivial permutations are all pairs of transpositions,
giving the Klein 4-group.
This proves case~(4).
We obtain additional permutations only when $\lambda$
and $\mu$ are both special cross-ratio values,
i.e.\ one of $-1$, $\tfrac{1}{2}$, 2 or~$e^{\pm\pi i/3}$.
Step 2: If either $\mu$ or $\lambda$ is not one of the above special values,
then the only possible elements of the stabiliser are
those identified in Step~1 above.
So we need to check whether $\mu$ and $\lambda$ can be simultaneously special.
This is straightforward:
for each value of $\mu$ we solve \eqref{eq:a_mu_quadratic} above
to find the corresponding values of $a$, and then calculate $\lambda$.
The result is that if $\mu$ is one of $-1$, $\tfrac{1}{2}$ or $2$,
then $\lambda$ is real and irrational, therefore not special,
but if $\mu$ is $e^{\pm\pi i/3}$, then $\lambda=\bar\mu$ is special.
In this case there are an additional eight candidate elements in the stabiliser.
The calculations for the special values of $\lambda$ and $\mu$
are summarised in the following table.
\bigskip
{\tiny
\begin{tabular}{c|c|c|c}
$\mu$ & equation & $a$ & $\lambda=\mu^3/(a+2)^2$ \\
\hline
$-1$ & $a^2-3=0 $& $\pm\sqrt 3$ & real and irrational \\
\hline
$1/2$ & $a^2+3a+3/2 =0$ & $(-3\pm\sqrt 3)/2$ & real and irrational \\
\hline
$2$ & $a^2+6a+6=0$ & $-3\pm\sqrt 3$ & real and irrational \\
\hline
$e^{\pi i/3}$ & $a^2+2(e^{\pi i/3}+1)a+3e^{\pi i/3}=0$ & $-e^{\pi i/3}-1$ & $1-e^{\pi i/3}=e^{-\pi i/3}$ \\
\hline
$e^{-\pi i/3}$ & $a^2+2(e^{-\pi i/3}+1)a+3e^{-\pi i/3}=0$ & $-e^{-\pi i/3}-1$ & $1-e^{-\pi i/3}=e^{\pi i/3}$
\end{tabular}
}
\bigskip
Step 3: For each of the candidate elements identified above,
calculate $\alpha^{-1}\circ f \circ \beta$ and verify that it equals $f$.
(In fact, knowing that the stabiliser is a group,
we only need to verify this for one element outside the generic stabiliser.
It is easiest to work with the permutation $(0 1 \infty)$,
for which $\alpha(x)=\beta(x)=1/(1-x)$ and $\alpha^{-1}(x)=(x-1)/x$.)
Hence the stabiliser of the exceptional orbit
has size~$12$.
From the table in Appendix~\ref{appendix:table}
we see that all elements of this stabiliser
induce even permutations on the set
$\{0,1,\infty,e^{\pi i/3}\}$,
and so the stabiliser is
isomorphic to the alternating group on four symbols.
\textit{Case (4), conjugacy of stabilisers:}
Every finite subgroup of the Möbius group
is conjugate to a subgroup of the group
$\PSU_2({\mathbb C})$, which can be viewed as the group of rigid motions
of the Riemann sphere with respect to the usual embedding into ${\mathbb R}^3$.
See for example \cite{Lyndon-Ullman-1967} or \cite[Section~2.13]{Jones-Singerman-1987}.
Two finite subgroups
of the Möbius group are conjugate if and only if they are isomorphic as abstract groups
(\cite[remarks after Corollary~2.13.7]{Jones-Singerman-1987}).
However, a slightly stronger result is needed for our purposes.
For a Klein 4-subgroup of $\PSU_2({\mathbb C})$,
viewed as a group of rigid motions of the sphere,
each non-identity element is a rotation by an angle of $\pi$ about some axis.
It is clear that two rotations commute if and only if their axes are orthogonal.
Thus Klein 4-groups correspond to sets of three mutually orthogonal axes.
For any two such sets of axes,
there is an orientation-preserving rigid motion of the sphere taking one to the other.
This gives a group element conjugating one Klein 4-group to the other.
Therefore any two such subgroups are conjugate.
We can go a little further. For a set of three mutually orthogonal axes,
and for any permutation of those axes,
there exists a rotation realising that permutation.
Conjugating by this rotation will yield an automorphism of the corresponding
Klein 4-group which permutes the non-identity elements in the same way.
Hence we can conclude that
given Klein 4-subgroups $\{1,\alpha_1,\alpha_2,\alpha_3\}$ and $\{1,\beta_1,\beta_2,\beta_3\}$
of the Möbius group,
there exists a group element $g$ with $g\alpha_jg^{-1}=\beta_j$ for $j=1,2,3$.
This is the stronger result referred to above.
Now we apply this to stabilisers in
${\PSL_2(\complex)}\times{\PSL_2(\complex)}$.
If $f$ is in the open stratum but not in the exceptional orbit,
then the stabiliser is of the form
$\{(1,1),(\alpha_1,\beta_1),(\alpha_2,\beta_2),(\alpha_3,\beta_3)\}$,
where each $\alpha_j$ permutes the critical values of $f$
via a pair of disjoint transpositions,
and each $\beta_j$ carries out the same permutation on the corresponding
critical points.
Given another stabiliser of the form
$\{(1,1),(\alpha'_1,\beta'_1),(\alpha'_2,\beta'_2),(\alpha'_3,\beta'_3)\}$,
we seek $(g,h)$ such that
$g$ conjugates $\{1,\alpha_1,\alpha_2,\alpha_3\}$
to $\{1,\alpha'_1,\alpha'_2,\alpha'_3\}$ in some order, and
$h$ conjugates $\{1,\beta_1,\beta_2,\beta_3\}$
to $\{1,\beta'_1,\beta'_2,\beta'_3\}$ in the same order.
The fact that any two isomorphic finite subgroups of ${\PSL_2(\complex)}$ are conjugate
tells us that a suitable $g$ exists.
Then the existence of $h$ is guaranteed by the stronger result
that we can conjugate the elements of one Klein 4-subgroup to another
in any desired order.
\end{proof}
\section{Degree 3: dominability and ${\mathbb C}$-connectedness} \label{section:dominating}
\subsection{Composition of dominability} \label{subs:composition}
When exploring the Oka property or related flexibility properties of a manifold $X$,
a logical first step is to investigate holomorphic maps ${\mathbb C}^n\to X$,
and in particular to look for dominating maps (Definition~\ref{def:dominable}).
In the case of $R_d$, the principal difficulty in constructing
explicit dominating maps is cancellation.
The easiest way to write down a map ${\mathbb C}^n\to R_d$
is in the form $p(t)/q(t)$ where $p$ and $q$
are families of polynomials parametrised by $t\in{\mathbb C}^n$.
However, it is necessary to ensure that $p$ and $q$
do not have common factors as the parameter $t$ varies.
We can achieve this by
embedding $R_d$ in a larger space and then
applying the following result.
\begin{proposition}[Composition of dominability] \label{prop:composition}
Let $X$ be an open subset of a complex manifold $Z$,
and $p\in X$.
Suppose $\phi \colon {\mathbb C}^n\to Z$ dominates $Z$ at $p$.
If $\phi^{-1}(X)$ is dominable at 0, then $X$ is dominable at $p$.
\end{proposition}
\begin{proof}
If $\psi \colon {\mathbb C}^m\to \phi^{-1}(X)$ dominates $\phi^{-1}(X)$ at 0,
then $\phi\circ\psi$ dominates $X$ at $p$.
\end{proof}
We construct a map ${\mathbb C}^8\to R_3$ as follows.
We can view a point of ${\mathbb P}^7$ as a formal rational function:
$$(a_0 \colon\! \cdots \colon\! a_7) \longleftrightarrow
\frac{a_0x^3+a_1x^2+a_2x+a_3}{a_4x^3+a_5x^2+a_6x+a_7}.$$
We also have the group $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$
acting on $R_3$ by pre- and post-composition.
This extends to an action of $G$ on ${\mathbb P}^7$:
we can compose a formal rational function with a Möbius transformation
to get a well-defined result.
Recall that we can embed $R_3$ into ${\mathbb P}^7$
by using the coefficients of a rational function
as the homogeneous coordinates of a point of ${\mathbb P}^7$.
This embedding is $G$-equivariant;
in the following discussion we will identify $R_3$ with its image in ${\mathbb P}^7$.
Now choose $f\in R_3$
and suppose we have a map $\eta \colon {\mathbb C}^2\to{\mathbb P}^7$
sending 0 to $f$.
Let $\exp \colon {\mathbb C}^6\to G$ be the exponential map.
This map dominates $G$ at the identity;
for this particular group, it is also surjective~\cite[page~47]{Gorbatsevich-et-al-1997}.
Define $\phi \colon {\mathbb C}^8\to{\mathbb P}^7$ by
\begin{equation}
\phi(s,t)=\eta(s)^{\exp(t)},\qquad s\in{\mathbb C}^2,\> t\in{\mathbb C}^6.
\label{eq:phi}
\end{equation}
Our strategy is to choose $\eta$ so that $\phi$ dominates ${\mathbb P}^7$ at $f$,
and find some $\psi \colon {\mathbb C}^m\to\phi^{-1}(R_3)$ which is dominating at 0.
The proposition then tells us that $\phi\circ\psi$ dominates $R_3$ at $f$.
This proves Theorem~\ref{thm:r3dominable}.
Furthermore, $\psi$ can be chosen so that
$\phi\circ\psi$ is surjective (Corollary~\ref{cor:comp_surjective}).
Since ${\mathbb C}^8$ is ${\mathbb C}$-connected (Definition~\ref{def:connected}),
it follows that $R_3$ is ${\mathbb C}$-connected,
proving Theorem~\ref{thm:r3connected}.
To construct a suitable $\eta$,
first we define $\eta_0 \colon {\mathbb C}^2\to{\mathbb P}^7$ by
\begin{equation}
\eta_0(a,b)=\frac{x^3-ax}{-bx^2+1}
=(1 \colon\! 0 \colon\!\! -a \colon\! 0 \colon\!
0 \colon\!\! -b \colon\! 0 \colon\! 1). \label{eq:eta}
\end{equation}
We will see in Proposition~\ref{prop:all_orbits}
that the image of $\eta_0$ intersects every orbit
of $G$ on $R_3$.
Thus given $f\in R_3$ there exist
$a_0,b_0\in{\mathbb C}$ and $g\in G$
such that $g$ takes $\eta_0(a_0,b_0)$ to $f$.
Define $\eta$ by
\begin{equation*}
\eta(a,b)=\eta_0(a+a_0,b+b_0)^g.
\end{equation*}
\begin{remark}
The form of $\eta$ is not uniquely determined by the choice of $f$.
For the purpose of proving strong dominability,
this does not matter:
all we need is that given $f$
there exists at least one suitable $\eta$.
If we could in fact find a canonical $\eta$ for each $f$,
in such a way that the map $f\mapsto\eta$ were holomorphic,
then we could join the resulting dominating maps
to make a spray (\cite[Definition~5.1]{Forstneric-Larusson-2011}).
This would imply that $R_3$ is Oka.
\end{remark}
The numerator of $\eta_0$ has roots $0$ and $\pm\sqrt{a}$,
and the denominator has roots $\pm1/\sqrt{b}$.
Therefore $\eta_0(a,b)$ fails to be in $R_3$ exactly when $ab=1$.
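To spell this out: a common root $x_0$ of numerator and denominator would satisfy $x_0^2=a$ and $bx_0^2=1$, forcing $ab=1$; conversely, if $ab=1$ then any $x_0$ with $x_0^2=a$ is a common root, so the degree of $\eta_0(a,b)$ drops below~3.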
Similarly, given $a_0$, $b_0$ and $g$,
the set of $(a,b)$ such that $\eta(a,b)\not\in R_3$
is a translate of the curve $\{(a,b):ab=1\}$.
Therefore
$$\phi^{-1}(R_3)\cong({\mathbb C}^2\setminus\{(a,b):ab=1\})\times{\mathbb C}^6.$$
Now ${\mathbb C}^2\setminus\{ab=1\}$ is Oka:
this is a consequence of \cite[Proposition~4.10]{Hanysz-2012},
or see Appendix~\ref{appendix:conic} for an elementary proof.
In particular, ${\mathbb C}^2\setminus\{ab=1\}$,
and hence $\phi^{-1}(R_3)$,
is dominable at~$0$.
We will show in Section~\ref{subs:transverse}
that $\phi$ dominates ${\mathbb P}^7$ at $f$.
Therefore $\phi\circ\psi$ dominates $R_3$ at~$f$.
\subsection{Proof of surjectivity} \label{subs:surjective}
The goal of this section
is to show that
the map $\psi \colon {\mathbb C}^8\to\phi^{-1}(R_3)$
can be chosen so that $\phi\circ\psi$ is surjective.
First we prove that the image of the map $\eta_0$
defined by \eqref{eq:eta}
intersects every orbit,
and therefore $\phi$ is surjective.
Then we will describe the choice of $\psi$.
For the first part, we exploit the fact
that the critical values of $\eta_0(a,b)$
have a certain kind of symmetry.
\begin{definition} \label{def:balanced}
Let $z_1,\ldots,z_4\in{\mathbb C}$.
We say that $(z_1,\ldots,z_4)$ is \emph{balanced}
if $z_1+z_2=z_3+z_4=0$.
\end{definition}
\begin{lemma} \label{lemma:balanced}
Let $z_1,\ldots,z_4$ be distinct points of ${\mathbb C}$.
There exists a Möbius transformation $\alpha$
such that $(\alpha(z_1),\ldots,\alpha(z_4))$
is balanced.
\end{lemma}
\begin{proof}
By transitivity of the Möbius group,
we can assume that $z_3=1$ and $z_4=-1$.
We will find a Möbius transformation
$\alpha$ fixing $1$ and $-1$,
and such that $\alpha(z_1)+\alpha(z_2)=0$.
Suppose
$$\alpha(x)=\frac{ax+b}{cx+d}.$$
Then $\alpha(1)=1$ tells us that $a+b=c+d$,
and $\alpha(-1)=-1$ implies $a-b=c-d$.
Hence $a=d$ and $b=c$, so $\alpha$ is of the form
$$\alpha(x)=\frac{ax+b}{bx+a}$$
for some $a,b\in{\mathbb C}$.
For $\alpha$ to be invertible
we also need $a\neq\pm b$.
We wish to find $a$ and $b$ such that
$$\frac{az_1+b}{bz_1+a}+\frac{az_2+b}{bz_2+a}=0.$$
This gives
$$(z_1+z_2)a^2+2(1+z_1z_2)ab+(z_1+z_2)b^2=0,$$
or, setting $A=a/b$ or $A=b/a$
(by symmetry, both are possible),
$$(z_1+z_2)A^2+2(1+z_1z_2)A+(z_1+z_2)=0.$$
This always has a solution for $A$.
The condition $a\neq\pm b$ means that we require $A\neq\pm 1$.
But if $A=1$, then the left hand side of the equation is
$$2(z_1+z_2+1+z_1z_2)=2(z_1+1)(z_2+1),$$
which is always nonzero when $z_1,z_2,\pm1$ are distinct.
Similarly, $A=-1$ will also give a nonzero left hand side.
Hence it is always possible to find $a$ and $b$
satisfying the required conditions.
\end{proof}
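To illustrate the proof, take $(z_1,z_2,z_3,z_4)=(3,\tfrac13,1,-1)$. The quadratic becomes $\tfrac{10}{3}A^2+4A+\tfrac{10}{3}=0$, that is, $5A^2+6A+5=0$, with roots $A=(-3\pm4i)/5\neq\pm1$. With $\alpha(x)=(Ax+1)/(x+A)$ we find
$$\alpha(3)+\alpha\bigl(\tfrac13\bigr)
=\frac{3A+1}{A+3}+\frac{A+3}{3A+1}
=\frac{2(5A^2+6A+5)}{(A+3)(3A+1)}=0,$$
so the transformed quadruple is balanced.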
The proof of the following elementary result
is left as an exercise for the reader.
\begin{lemma} \label{lemma:odd}
Let $U\subset{\mathbb C}$ be a connected open set containing 0
and such that $-x\in U$ for every $x\in U$.
Let $f \colon U\to{\mathbb C}$ be a holomorphic function
such that $f'$ is even and $f(0)=0$.
Then $f$ is odd.
\end{lemma}
\begin{lemma} \label{lemma:odd_over_even}
Let $f=p/q\in R_d$
where $p$ and $q$ are polynomials with no common factors.
Suppose $f$ is an odd function.
Then either $p$ is odd and $q$ is even,
or $p$ is even and $q$ is odd.
\end{lemma}
\begin{proof}
Since $p$ and $q$ have no common factors,
the zeros of $p$ are precisely the zeros of $f$ in ${\mathbb C}$.
Since $f$ is odd, the zeros are distributed
symmetrically about $0$,
and so $p$ is either odd or even
(depending on whether $f(0)=0$ or $\infty$).
Similarly, $q$ is either even or odd.
\end{proof}
\begin{corollary} \label{cor:odd_over_even}
If $f\in R_3$ is odd and $f(0)=0$,
then $f$ can be written in the form
$$f(x)=\frac{Ax^3+Bx}{Cx^2+1}$$
for some $A,B,C\in{\mathbb C}$.
\end{corollary}
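To deduce this from Lemma~\ref{lemma:odd_over_even}: since $f(0)=0$, the numerator $p$ is odd and the denominator $q$ is even, say $p(x)=Ax^3+Bx$ and $q(x)=Cx^2+D$. Here $D=q(0)\neq0$, since otherwise $x$ would divide both $p$ and $q$, and dividing both polynomials by $D$ gives the stated form.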
\begin{proposition} \label{prop:all_orbits}
The image of the map $\eta_0$
of \eqref{eq:eta}
intersects every orbit in $R_3$.
\end{proposition}
\begin{proof}
First, note that $\eta_0(0,0)=x^3$ is in the small orbit
and $\eta_0(1,0)=x^3-x$ is in the non-closed orbit.
Therefore we only need to consider orbits outside the null fibre.
Given $f\in R_3$ outside the null fibre,
Lemma~\ref{lemma:balanced} ensures that there is a Möbius transformation $\alpha$
such that the critical points of $f\circ\alpha$ are balanced.
(In fact $\alpha$ is the inverse of a transformation
taking the critical points to a balanced quadruple.
If one of the critical points of $f$ is $\infty$,
then before applying the lemma we precompose $f$
with a suitable transformation so that the resulting critical points
are all finite.)
Now the finite critical points of $f$ are the zeros of $f'$, that is,
the roots of the numerator of $f'$.
For $f=p/q$ we have $f'=(p'q-pq')/q^2$.
After balancing the critical points,
the numerator of $(f\circ\alpha)'$ will be of the form
$C(x^2-A^2)(x^2-B^2)$ for some $A,B,C\in{\mathbb C}^*$.
The denominator is a perfect square.
Therefore $(f\circ\alpha)'$ is an even function.
Let $\beta$ be a Möbius transformation taking $f(\alpha(0))$ to 0,
so that $\beta\circ f\circ\alpha$ has the same critical points as $f\circ\alpha$.
Let
$U={\mathbb C}\setminus\{x\in{\mathbb C}:
x\text{ or }{-x}\text{ is a pole of }\beta\circ f\circ\alpha\}.$
Then $\beta\circ f\circ\alpha|_U$ satisfies the conditions
of Lemma~\ref{lemma:odd}, and is therefore an odd function.
By continuity, it follows that any poles of $\beta\circ f\circ\alpha$
must be symmetrically distributed about 0,
and $\beta\circ f\circ\alpha$ is odd.
By Corollary~\ref{cor:odd_over_even} we have
$$(\beta\circ f\circ\alpha)(x)=\frac{Ax^3+Bx}{Cx^2+1}$$
for some $A,B,C\in{\mathbb C}$.
Also, $f\in R_3$ implies $A\neq0$.
Therefore we can postcompose with the Möbius transformation
$x\mapsto x/A$
to see that $f$ is in the same orbit as $\eta_0(-B/A,-C)$.
\end{proof}
\begin{corollary} \label{cor:surjective}
The image of the map $\phi$ of \eqref{eq:phi} contains $R_3$.
\end{corollary}
Now we describe the map $\psi \colon {\mathbb C}^8\to\phi^{-1}(R_3)$.
Since the exponential map $\exp \colon {\mathbb C}^6\to G$ is surjective,
we simply need to find a surjective holomorphic map
$\chi \colon {\mathbb C}^2\to{\mathbb C}^2\setminus\{ab=1\}$
which is dominating at~$0$,
and then we can take $\psi=\chi\times\exp$.
Following Buzzard and Lu~\cite[page~645]{Buzzard-Lu-2000},
define a map $\omega \colon {\mathbb C}^2\to{\mathbb C}$ by
\begin{align*}
\omega(x,y)&=\left\{
\begin{array}{cl}
\dfrac{e^{xy}-1}{x} & \textup{if }x\neq0 \label{eq:BLmap} \\
y & \textup{if }x=0
\end{array}
\right. \\
&= y+\frac{xy^2}{2}+\frac{x^2y^3}{3!}+\cdots.
\end{align*}
From the first form of the definition,
we can see that for fixed $x$,
the image of $y\mapsto\omega(x,y)$ is ${\mathbb C}\setminus\{-1/x\}$.
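Indeed, for fixed $x\neq0$ the equation $\omega(x,y)=w$ is equivalent to $e^{xy}=1+xw$, which has a solution $y$ precisely when $1+xw\neq0$, that is, when $w\neq-1/x$.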
From the series expression we can see that $\omega$ is holomorphic,
and we can calculate derivatives
\begin{align*}
\frac{\partial\omega}{\partial x}\bigg\vert_{x=0} & = y^2/2, \\
\frac{\partial\omega}{\partial y} &= e^{xy}.
\end{align*}
\begin{proposition} \label{prop:chi_surjective}
The map $\chi \colon {\mathbb C}^2\to{\mathbb C}^2\setminus\{ab=1\}$ defined by
$$\chi(x,y)=\left(x,\frac{1-e^{xy}}{x}\right)=(x,-\omega(x,y))$$
is surjective and dominating at~$0$.
\end{proposition}
\begin{proof}
Recall that for fixed $a$, the image of
$y\mapsto\omega(a,y)$ is ${\mathbb C}\setminus\{-1/a\}$.
Thus if $ab\neq1$, then $-b\neq-1/a$,
so there exists $y$ such that $\omega(a,y)= -b$,
and then $\chi(a,y)=(a,b)$.
Hence $\chi$ is surjective.
To prove dominability, we need to verify that the
vectors $\partial\chi/\partial x$ and $\partial\chi/\partial y$
evaluated at $(x,y)=(0,0)$ span ${\mathbb C}^2$.
But we have
$\partial\chi/\partial x\vert_{x=0}= (1,y^2/2)$
and $\partial\chi/\partial y=(0,e^{xy})$.
Thus the derivatives evaluated at $(0,0)$
are $(1,0)$ and $(0,1)$.
\end{proof}
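For a concrete instance of surjectivity: to reach $(1,-2)\in{\mathbb C}^2\setminus\{ab=1\}$ we need $1-e^{y}=-2$, so $y=\log3$ and $\chi(1,\log3)=(1,-2)$.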
\begin{corollary} \label{cor:comp_surjective}
With $\chi$ as above, $\psi=\chi\times\exp$
and $\phi$ as defined in \eqref{eq:phi},
the composition $\phi\circ\psi$ is surjective.
\end{corollary}
\subsection{Proof of transversality} \label{subs:transverse}
We wish to show that the map $\phi$
of \eqref{eq:phi} is dominating.
Recall that $\phi \colon {\mathbb C}^8\to{\mathbb P}^7$ is built
from maps $\eta \colon {\mathbb C}^2\to{\mathbb P}^7$
and $\exp \colon {\mathbb C}^6\to G$.
For convenience, we repeat the definitions here.
\begin{align*}
\eta_0(a,b)&=\frac{x^3-ax}{-bx^2+1}
=(1 \colon\! 0 \colon\!\! -a \colon\! 0 \colon\! 0 \colon\!\! -b \colon\! 0 \colon\! 1), \\
\eta(a,b)&=\eta_0(a+a_0,b+b_0)^g\text{ for some constants }
a_0,b_0\in{\mathbb C}\text{ and }g\in G, \\
\phi(s,t)&=\eta(s)^{\exp(t)},\qquad s\in{\mathbb C}^2,\> t\in{\mathbb C}^6.
\end{align*}
Since $\exp$ is dominating at the identity,
it follows that the image of $d\phi_0$ contains the tangent space
to the fibre of $\eta(0,0)$.
So to prove that $\phi$ is dominating,
it is sufficient to show that the image of $d\eta$ is transverse to the
tangent space of the fibre.
More precisely, we will show that the images of $d\phi_0$ and $d\eta_{(0,0)}$
together span the tangent space $T{\mathbb P}^7_f$,
where $f=\phi(0,0)$.
\begin{proposition} \label{prop:transverse}
Let $a,b\in{\mathbb C}$, $ab\neq1$, and $f=\eta_0(a,b)$.
Suppose $f\in R_3$.
Then the tangent space $T{\mathbb P}^7_f$
is spanned by the tangent space to the fibre of $f$
together with the image of $(d\eta_0)_{(a,b)}$.
\end{proposition}
\begin{proof}
For convenience, we will work in affine coordinates:
rational functions of the form
$$\frac{a_0x^3+a_1x^2+a_2x+a_3}{b_0x^3+b_1x^2+b_2x+1}$$
will be written as
$(a_0,a_1,a_2,a_3;b_0,b_1,b_2)$.
In this notation, we have
$$\eta_0(a,b)=(1,0,-a,0;0,-b,0).$$
We can identify $T{\mathbb P}^7_f$ with ${\mathbb C}^7$,
and the image of $(d\eta_0)_{(a,b)}$ is the subspace
\begin{equation}
\{(0,0,u,0;0,v,0):u,v\in{\mathbb C}\}. \label{eq:transverse}
\end{equation}
The main part of the proof consists in finding a set of vectors
spanning the tangent space to the fibre.
Such a set can be realised as derivatives of
infinitesimal generators of the group.
The Lie algebra $\Liesl_2({\mathbb C})$ is spanned by the three matrices
$$
A=\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \qquad
B=\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \qquad
C=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}.
$$
The corresponding one-parameter subgroups of ${\PSL_2(\complex)}$
are given by their exponentials:
$$
e^{At}=\begin{pmatrix} e^t & 0 \\ 0 & e^{-t} \end{pmatrix} \qquad
e^{Bt}=\begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} \qquad
e^{Ct}=\begin{pmatrix} 1 & 0 \\ t & 1 \end{pmatrix}
$$
or, interpreted as Möbius transformations:
$$e^{At} \colon x\mapsto e^{2t}x \qquad
e^{Bt} \colon x\mapsto x+t \qquad
e^{Ct} \colon x\mapsto \frac{x}{tx+1}.
$$
As infinitesimal generators
of the group $G={\PSL_2(\complex)}\times{\PSL_2(\complex)}$,
we can use ordered pairs
$(g^{-1},\text{id})$
and $(\text{id},g)$,
where $g$ is one of $e^{At}$, $e^{Bt}$ or $e^{Ct}$.
Thus we simply need to calculate the six vectors
$$\frac{d}{dt}(\eta_0(a,b)\circ e^{At})|_{t=0}, \qquad
\frac{d}{dt}(e^{At}\circ\eta_0(a,b))|_{t=0},$$
and the corresponding vectors for $e^{Bt}$ and $e^{Ct}$.
The first of those six vectors is computed as follows:
$$\eta_0(a,b)\circ e^{At} \colon x\mapsto
\frac{e^{6t}x^3-ae^{2t}x}{-be^{4t}x^2+1}
=(e^{6t},0,-ae^{2t},0;0,-be^{4t},0),
$$
and so the derivative with respect to $t$,
evaluated at $t=0$, is
$$(6,0,-2a,0;0,-4b,0).$$
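As a second sample computation, postcomposition with $e^{Bt}$ (arising from the pair $(e^{-Bt},\text{id})$) gives
$$e^{Bt}\circ\eta_0(a,b) \colon x\mapsto
\frac{x^3-ax}{-bx^2+1}+t
=\frac{x^3-ax+t(-bx^2+1)}{-bx^2+1}
=(1,-bt,-a,t;0,-b,0),$$
with derivative $(0,-b,0,1;0,0,0)$ at $t=0$; this is the fifth row of the matrix below.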
The remaining cases are handled similarly.
The end result of the calculation
is that the tangent space to the fibre
is spanned by the rows of the following matrix:
$$\begin{pmatrix}
6 & 0 & -2a & 0 & 0 & -4b & 0 \\
0 & 3 & 0 & -a & 0 & 0 & -2b \\
0 & -2a & 0 & 0 & -b & 0 & 3 \\
2 & 0 & -2a & 0 & 0 & 0 & 0 \\
0 & -b & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & -a
\end{pmatrix}
$$
and it is straightforward to verify that
the rows together with the vectors of \eqref{eq:transverse}
span ${\mathbb C}^7$.
\end{proof}
Since the above calculation does not depend on the choice of $a$ and $b$,
we have a dominating map for every $f\in R_3$,
proving Theorem~\ref{thm:r3dominable}.
| {
"timestamp": "2012-11-13T02:05:15",
"yymm": "1211",
"arxiv_id": "1211.0765",
"language": "en",
"url": "https://arxiv.org/abs/1211.0765",
"abstract": "For each natural number d, the space R_d of rational maps of degree d on the Riemann sphere has the structure of a complex manifold. The topology of these manifolds has been extensively studied. The recent development of Oka theory raises some new and interesting questions about their complex structure. We apply geometric invariant theory to the cases of degree 2 and 3, studying a double action of the Möbius group on R_d. The action on R_2 is transitive, implying that R_2 is an Oka manifold. The action on R_3 has C as a categorical quotient; we give an explicit formula for the quotient map and describe its structure in some detail. We also show that R_3 enjoys the holomorphic flexibility properties of strong dominability and C-connectedness.",
"subjects": "Complex Variables (math.CV)",
"title": "Holomorphic flexibility properties of the space of cubic rational maps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787849789998,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7093811334015664
} |
https://arxiv.org/abs/2009.10458 | Lower bounds for multicolor Ramsey numbers | We give an exponential improvement to the lower bound on diagonal Ramsey numbers for any fixed number of colors greater than two. | \section{Introduction}
The Ramsey number $r(t; \ell)$ is the smallest natural number $n$ such that every $\ell$-coloring of the edges of the complete graph $K_n$ contains a monochromatic $K_t$. For $\ell = 2$, the problem of determining $r(t) := r(t;2)$ is arguably one of the most famous in combinatorics. The bounds
\[\sqrt{2}^t < r(t) < 4^t\]
have been known since the 1940s, but, despite considerable interest, only lower-order improvements \cite{Con09, Sah20, Spe75} have been made to either bound. In particular, the lower bound $r(t) > (1 + o(1))\frac{t}{\sqrt{2} e} \sqrt{2}^t$, proved by Erd\H{o}s~\cite{Erd47} as one of the earliest applications of the probabilistic method, has only been improved~\cite{Spe75} by a factor of $2$ in the intervening 70 years.
If we ignore lower-order terms, the best known upper bound for $\ell \geq 3$ is $r(t; \ell) < \ell^{\ell t}$, proved through a simple modification of the Erd\H{o}s--Szekeres neighborhood-chasing argument~\cite{ESz35} that yields $r(t) < 4^t$. For $\ell = 3$, the best lower bound, $r(t; 3) > \sqrt{3}^{t}$, again comes from the probabilistic method. For higher $\ell$, the best lower bounds come from the simple observation of Lefmann~\cite{L87} that
\[r(t; \ell_1 + \ell_2) - 1 \geq (r(t; \ell_1) - 1)(r(t;\ell_2) - 1).\]
To see this, we blow up an $\ell_1$-coloring of $K_{r(t;\ell_1) - 1}$ with no monochromatic $K_t$ so that each vertex set has order $r(t; \ell_2) - 1$ and then color each of these copies of $K_{r(t;\ell_2) - 1}$ separately with the remaining $\ell_2$ colors so that there is again no monochromatic $K_t$.
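Lefmann's product bound is constructive, and the blow-up can be verified directly on the smallest nontrivial instance. The sketch below (an illustration we add here, not part of the original argument) blows up the triangle-free $2$-coloring of $K_5$ given by the pentagon and its complement, using groups of size $r(3;1)-1=2$ and a third color inside each group, and checks that the resulting $3$-coloring of $K_{10}$ has no monochromatic $K_3$, witnessing $r(3;3)-1 \geq (r(3;2)-1)(r(3;1)-1) = 10$.

```python
from itertools import combinations

# 2-coloring of K_5 with no monochromatic K_3: the 5-cycle gets color 0,
# its complement (the pentagram) gets color 1.
def pentagon_color(a, b):
    return 0 if (b - a) % 5 in (1, 4) else 1

# Blow up each vertex of K_5 into a group of r(3;1) - 1 = 2 vertices and
# color the edges inside each group with a fresh third color.
n = 10
group = lambda v: v // 2

def color(a, b):
    if group(a) == group(b):
        return 2  # third color inside a blown-up group
    return pentagon_color(group(a), group(b))

# Check: no monochromatic triangle in this 3-coloring of K_10.
mono = any(color(a, b) == color(b, c) == color(a, c)
           for a, b, c in combinations(range(n), 3))
print(mono)  # False
```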
By using the bounds $r(t; 2) - 1 \geq 2^{t/2}$ and $r(t;3) - 1 \geq 3^{t/2}$, we can repeatedly apply this observation to conclude that
\[r(t; 3k) > 3^{kt/2}, \qquad r(t; 3k+1) > 2^t 3^{(k-1)t/2}, \qquad r(t; 3k +2) > 2^{t/2} 3^{kt/2}.\]
Our main result is an exponential improvement to all these lower bounds for three or more colors.
Our principal contribution is the following theorem, proved via a construction which is partly deterministic and partly random. The deterministic part shares some characteristics with a construction of Alon and Krivelevich~\cite{AK97}, in that we consider a graph whose vertices are vectors over a finite field where adjacency is determined by the value of their scalar product, while randomness comes in through both random coloring and random sampling.
\begin{theorem}
\label{technical}
For any prime $q$, $r(t; q+1) > 2^{t/2} q^{3t/8 + o(t)}$.
\end{theorem}
In particular, the cases $q=2$ and $q= 3$ yield exponential improvements over the previous bounds for $r(t; 3)$ and $r(t;4)$, both of which came from the probabilistic method (in fact, Lefmann's observation gives an additional polynomial factor in the four-color case, but this is of lower order than the exponential improvements that are our concern).
\begin{corollary}
$r(t;3) > 2^{7t/8 + o(t)}$ and $r(t;4) > 2^{t/2} 3^{3t/8 + o(t)}$.
\end{corollary}
For the sake of comparison, we note that the improvement for three colors is from $1.732^t$ to $1.834^t$, while, for four colors, it is from $2^t$ to $2.135^t$. Improvements for all $\ell \geq 5$ now follow from repeated applications of Lefmann's observation, yielding
\[r(t; 3k) > 2^{7kt/8 + o(t)}, \qquad r(t; 3k+1) > 2^{7(k-1)t/8 + t/2} 3^{3t/8 + o(t)}, \qquad r(t; 3k +2) > 2^{7kt/8 + t/2 + o(t)},\]
where we used, for instance,
\[r(t; 3k+1) - 1 \geq (r(t; 3(k-1)) - 1)( r(t;4) - 1) \geq (r(t; 3) - 1)^{k-1}( r(t;4) - 1).\]
\section{Proof of Theorem \ref{technical}}
Let $q$ be a prime. Suppose $t \neq 0 \bmod{q}$ and let $V\subseteq \mathbb{F}_q^t$ be the set consisting of all vectors $v\in \mathbb{F}_q^t$ for which $\sum_{i=1}^t v^2_i=0 \bmod{q}$, noting that $q^{t-2} \leq |V|\leq q^t$. Here the lower bound follows from observing that we may pick $v_1, \dots, v_{t-2}$ arbitrarily and, since every element in $\mathbb{F}_q$ can be written as the sum of two squares, there must then exist at least one choice of $v_{t-1}$ and $v_t$ such that $v_{t-1}^2 + v_t^2 = - \sum_{i=1}^{t-2} v_i^2$.
We will first color all the pairs $\binom{V}{2}$ and then define a coloring of $E(K_n)$ by restricting our attention to a random sample of $n$ vertices in $V$. Formally:
\paragraph{Coloring all pairs in $\binom{V}{2}$.}
For every pair $uv\in \binom{V}{2}$, we define its color $\chi(uv)$ according to the following rules:
\begin{itemize}
\item If $u\cdot v=i \bmod{q}$ and $i\neq 0$, then set $\chi(uv)=i$.
\item Otherwise, choose $\chi(uv)\in \{q,q+1\}$ uniformly at random, independently of all other pairs.
\end{itemize}
\paragraph{Mapping $[n]$ into $V$.}
Take a random injective map $f:[n]\rightarrow V$ and define the color of every edge $ij$ as $\chi(f(i)f(j))$.
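For concreteness, the set $V$ and the coloring $\chi$ can be generated by brute force for small parameters; the choice $q=3$, $t=4$ below is purely illustrative (and satisfies $t \neq 0 \bmod q$). The sketch also checks the stated bounds $q^{t-2} \leq |V| \leq q^t$ directly.

```python
import random
from itertools import product, combinations

q, t = 3, 4  # small illustrative parameters with t != 0 mod q

# V: vectors in F_q^t whose sum of squared coordinates vanishes mod q.
V = [v for v in product(range(q), repeat=t)
     if sum(x * x for x in v) % q == 0]
assert q ** (t - 2) <= len(V) <= q ** t

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

# Color every pair: deterministic color i when u.v = i != 0,
# a uniformly random choice from {q, q+1} when u.v = 0.
rng = random.Random(0)
chi = {}
for u, v in combinations(V, 2):
    i = dot(u, v)
    chi[frozenset((u, v))] = i if i != 0 else rng.choice((q, q + 1))

print(len(V), len(chi))  # 33 528
```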
\vspace{4mm}
Our goal is to upper bound the orders of the cliques in each color class.
\paragraph{Colors $1\leq i\leq q-1$.} There are no $i$-monochromatic cliques of order larger than $t$ for any $1\leq i \leq q-1$. Indeed, suppose that $v_1,\ldots,v_s$ form an $i$-monochromatic clique. We will try to show that they are linearly independent and, therefore, that there are at most $t$ of them.
To this end, suppose that
$$u:=\sum_{j=1}^s \alpha_j v_j=\bar{0}$$
and we wish to show that $\alpha_j=0\bmod{q}$ for all $j$.
Observe that since $v_j\cdot v_j=0\bmod{q}$ for all $j$ (our ground set $V$ consists only of such vectors) and $v_k\cdot v_{j}=i\bmod{q}$ for each $k\neq j$, by considering all the products $u\cdot v_j$, we obtain that the vector $\bar{\alpha}=(\alpha_1,\ldots,\alpha_s)$ is a solution to
$$M\bar{\alpha}=\bar{0}$$
with $M=iJ-iI$, where $J$ is the $s \times s$ all $1$ matrix and $I$ is the $s \times s$ identity matrix. In particular, we obtain that the eigenvalues of $M$ (over $\mathbb{Z}$) are $is-i$ with multiplicity $1$ and $-i$ with multiplicity $s-1$. Therefore, if $s \neq 1 \bmod{q}$, the matrix is also non-singular over $\mathbb{Z}_q$, implying that $\bar{\alpha}=0$, as required. On the other hand, if $s = 1 \bmod{q}$, we can apply the same argument with $v_1, \dots, v_{s-1}$ to conclude that $s-1 \leq t$. But, we cannot have $s - 1 = t$, since this would imply that $t = 0 \bmod{q}$, contradicting our assumption. Therefore, we may also conclude that $s \leq t$ in this case.
\paragraph{Colors $q$ and $q+1$.} We call a subset $X\subseteq V$ a \emph{potential clique} if $|X|=t$ and $u\cdot v=0\bmod{q}$ for all $u,v\in X$. Given a potential clique $X$, we let $M_X$ be the $t \times t$ matrix whose rows consist of all the vectors in $X$. Observe that $M_X\cdot M_X^T=0$, where we use the fact that each vector is self-orthogonal. First we wish to count the number of potential cliques and later we will calculate the expected number of cliques that survive after we color randomly and restrict to a random subset of order $n$.
Suppose that $X$ is a potential clique and let $r:=\textrm{rank}(X)$ be the rank of the vectors in this clique, noting that $r \leq t/2$, since the dimension of any isotropic subspace of $\mathbb{F}_q^t$ is at most $t/2$. By assuming that the first $r$ elements are linearly independent, the number of ways to build a potential clique $X$ of rank $r$ is upper bounded by
\[
\left(\prod_{i=0}^{r-1}q^{t-i}\right) \cdot q^{(t-r)r} =q^{tr - \binom{r}{2} + tr - r^2}
=q^{2tr - \frac{3r^2}{2} +\frac{r}{2}}.\]
Indeed, suppose that we have already chosen the vectors $v_1,\ldots,v_s \in X$ for some $s<r$. Then, letting $M_s$ be the $s\times t$ matrix with the $v_i$ as its rows, we need to choose $v_{s+1}$ such that $M_s\cdot v_{s+1}=\bar{0}$. Since the rank of $M_s$ is assumed to be $s$, there are exactly $q^{t-s}$ choices for $v_{s+1}$ in $\mathbb{F}_q^t$ and, therefore, at most that many choices for $v_{s+1}\in V$. If, instead, $s\geq r$, then we need to choose a vector $v_{s+1}\in \textrm{span}\{v_1,\ldots,v_r\}$ and there are at most $q^r$ such choices in $V$.
Now observe that the function $2tr - \frac{3r^2}{2} +\frac{r}{2}$ appearing in the exponent of the expression above is increasing up to $r = \frac{2t}{3} + \frac{1}{6}$, so the maximum occurs at $t/2$. Therefore, by plugging this into our estimate and summing over all possible ranks, we see that the number $N_t$ of potential cliques in $V$ is upper bounded by $q^{\frac{5 t^2}{8} + o(t^2)}$.
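As a sanity check on the counting argument, one can enumerate all potential cliques by brute force for tiny parameters and compare against the rank-based estimate summed over $r \leq t/2$. The choice $q=2$, $t=4$ below is illustrative (there $V$ is simply the set of even-weight vectors of $\mathbb{F}_2^4$); the estimate is of course very loose at this scale.

```python
from itertools import product, combinations

q, t = 2, 4
# V: self-orthogonal vectors of F_2^4, i.e. the even-weight vectors.
V = [v for v in product(range(q), repeat=t)
     if sum(x * x for x in v) % q == 0]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v)) % q

# Brute-force count of potential cliques: t-subsets of V that are
# pairwise orthogonal mod q.
N = sum(1 for X in combinations(V, t)
        if all(dot(u, v) == 0 for u, v in combinations(X, 2)))

# Rank-based estimate: sum over ranks r <= t/2 of q^(2tr - 3r^2/2 + r/2).
# Note 3r^2 - r = r(3r - 1) is always even, so the exponent is an integer.
bound = sum(q ** (2 * t * r - (3 * r * r - r) // 2)
            for r in range(1, t // 2 + 1))
print(N, N <= bound)  # 3 True
```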
The probability that a potential clique becomes monochromatic after the random coloring is $2^{1 - \binom{t}{2}}$. Suppose now that $p$ is such that $p|V|=2n$ and observe that $p= n q^{-t +O(1)}$. If we choose a random subset of $V$ by picking each $v \in V$ independently with probability $p$, the expected number of monochromatic potential cliques in this subset is, for $n = 2^{t/2} q^{3t/8 + o(t)}$,
$$p^t 2^{1 - \binom{t}{2}} N_t \leq q^{- t^2 + o(t^2)} n^t 2^{-\frac{t^2}{2} + o(t^2)} q^{\frac{5t^2}{8} + o(t^2)}
= \left(2^{-\frac{t}{2}} q^{-\frac{3t}{8} + o(t)} n\right)^t <1/2.$$
Since our random subset will also contain more than $n$ elements with probability at least $1/2$, there exists a choice of coloring and a choice of subset of order $n$ such that there is no monochromatic potential clique in this subset. This completes the proof.
\vspace{3mm}
\noindent
{\bf Remark.}
Our method also gives a construction which matches Erd\H{o}s' bound $r(t) > \sqrt{2}^t$ up to lower-order terms. To see this, we set $V = \mathbb{F}_2^{2t}$ and color edges red or blue depending on whether $u \cdot v = 0$ or $1 \bmod{2}$. If we then sample $2^{t/2 + o(t)}$ vertices of $V$ at random, we can show that w.h.p.~the resulting set does not contain a monochromatic clique of order $t$. We believed this to be new, but, after the first version of this article was made public, we learned that such a construction was already discovered by Pudl\'ak, R\"odl and Savick\'y~\cite{PRS88} in 1988.
It was also pointed out to us by Jacob Fox that one can achieve the same end by starting with any pseudorandom graph on $n$ vertices for which the count of cliques and independent sets of order $2c \log_2 n$ is approximately the same as in $G(n, 1/2)$ and sampling $n^c$ vertices. This can be applied, for instance, with the Paley graph.
\vspace{3mm}
\noindent
{\bf Acknowledgements.} We are extremely grateful to Vishesh Jain and Wojciech Samotij for reading an early draft of this paper and offering several suggestions which improved the presentation. We also owe a debt to Noga Alon and Anurag Bishnoi, both of whom pointed out the constraint on the dimension of isotropic subspaces, thereby improving the bound in our original posting.
| {
"timestamp": "2020-11-30T02:02:44",
"yymm": "2009",
"arxiv_id": "2009.10458",
"language": "en",
"url": "https://arxiv.org/abs/2009.10458",
"abstract": "We give an exponential improvement to the lower bound on diagonal Ramsey numbers for any fixed number of colors greater than two.",
"subjects": "Combinatorics (math.CO)",
"title": "Lower bounds for multicolor Ramsey numbers",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787846017969,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7093811331305104
} |
https://arxiv.org/abs/2212.12492 | An ODE characterisation of multi-marginal optimal transport with pairwise cost functions | The purpose of this paper is to introduce a new numerical method to solve multi-marginal optimal transport problems with pairwise interaction costs. The complexity of multi-marginal optimal transport generally scales exponentially in the number of marginals $m$. We introduce a one parameter family of cost functions that interpolates between the original and a special cost function for which the problem's complexity scales linearly in $m$. We then show that the solution to the original problem can be recovered by solving an ordinary differential equation in the parameter $\epsilon$, whose initial condition corresponds to the solution for the special cost function mentioned above; we then present some simulations, using both explicit Euler and explicit higher order Runge-Kutta schemes to compute solutions to the ODE, and, as a result, the multi-marginal optimal transport problem. | \section{Introduction}
The theory of optimal transport plays an important role in many applications (see \cite{Villani-OptimalTransport-09,Villani-TOT2003,santambook,peyre2017computational}). Its generalization to the multi-marginal case consists in minimizing the functional
\[\gamma\mapsto\int_{X^1 \times ...\times X^m}c(x^1,\cdots,x^m)\dd\gamma \]
among all probability measures $\gamma\in\mathcal P(X^1\times\cdots\times X^m)$ having the fixed measures $\mu^i\in\mathcal P(X^i)$, $i=1,\cdots,m$, as marginals, for a given cost function $c(x^1,\ldots,x^m)$.
This problem has been at the center of growing interest in recent years since it arises naturally in many different areas of applications, including Economics \cite{carlier2010matching}, Financial Mathematics \cite{beiglbock2013model,dolinsky2014martingale,dolinsky2014robust, Ennajietal22}, Statistics \cite{bigot2018characterization,carlier2016vector}, Image Processing \cite{rabin2011wasserstein}, Tomography \cite{abraham2017tomographic}, Machine Learning \cite{Hassleretal21, Trillosetal22}, Fluid Dynamics \cite{brenier1989least} and Quantum Physics and Chemistry, in the framework of Density Functional Theory \cite{buttazzo2012optimal,cotar2013density}.
The structure of solutions to the multi-marginal optimal transport problem is a notoriously delicate issue, and is still not well understood, despite substantial efforts on the part of many researchers \cite{gangbo1998optimal, Carlier03, CarlierNazaret08, Heinich05, Pass11,Pass12, KimPass14, KimPass15, ColomboDePascaleDiMarino15, ColomboStra16, PassVargas21, MoameniPass17, PassVargas2022, PassVargas21-2}; see also the surveys \cite{PassSurvey} and \cite{DiMarinoGerolinNenna15}. In many of the aforementioned applications, it is therefore pertinent to develop efficient numerical algorithms to compute solutions. This, however, represents a significant challenge, since the problem amounts to a linear (or convex, in a popular regularized variant discussed below),
yet high dimensional optimization problem: the complexity scales exponentially in the number $m$ of marginals. For instance, a crude discretization of each of $5$ marginals using $100$ Dirac masses (and notice that in many applications the number of marginals can be dramatically larger, e.g.\ in quantum mechanics, where $m$ is the number of electrons in a molecule) would mean that the coupling $\gamma$ between the $5$ marginals is supported on $100^5 = 10^{10}$ Dirac masses, rendering the problem practically intractable.
There have been recently some attempts to tackle this problem by using different approaches: entropic regularization \cite{benamou2017generalized,thesislulu,benamou2016numerical}, relaxation via moment constraints approximation \cite{alfonsi2021approximation,alfonsi2022constrained}, genetic column generation algorithm exploiting the existence of a sparse solution in the discrete case \cite{friesecke2022genetic,friesecke2022gencol}, Wasserstein penalisation of the marginal constraints \cite{merigot2016minimal} and semidefinite relaxation \cite{khoo2019convex,khoo2020semidefinite}.\\
In many cases of interest, the cost function $c(x^1,\ldots,x^m) =\sum_{i<j}w(x^i,x^j)$ is given by a sum of two-marginal cost functions; when $w(x^i,x^j) =|x^i-x^j|^2$, for instance, the multi-marginal problem is equivalent to the well known Wasserstein barycenter problem \cite{Carlier_wasserstein_barycenter}, while the Coulomb cost $w(x^i,x^j) =\frac{1}{|x^i-x^j|}$ plays a central role in the quantum chemistry applications pioneered in \cite{cotar2013density} and \cite{buttazzo2012optimal}. Here, for such pairwise interaction costs, our aim is to develop a continuation method which, by introducing a suitable one parameter family of cost functions, establishes a link between the original multi-marginal problem and a simpler one whose complexity scales linearly in the number of marginals. For discrete marginals, we show that, after the addition of an entropic regularization term, the solution of the original multi-marginal problem can be recovered by solving an ordinary differential equation (ODE) whose initial condition is the solution to the simpler problem.
This method is actually inspired by the one introduced in \cite{CarlierGalichonSantambrogio10} to compute the Monge solution of the two marginal problem, starting from the Knothe-Rosenblatt rearrangement; note, however, that since we apply this strategy to a regularized problem, our resulting ODE enjoys better regularity than the one in \cite{CarlierGalichonSantambrogio10}, which, in turn, makes it amenable to higher order numerical schemes (see the description of numerical results in Section \ref{algorithm} below).
The above mentioned differential equation will be derived by differentiating the optimality conditions of the dual problem; in particular by penalizing the constraints with the soft-max function we will obtain a well defined ODE for which existence and uniqueness of a solution can be established.
When developing the ODE approach in Section \ref{sect: ode approach} below, we restrict our attention to the case when the marginals $\mu^i$ are all identical. This has the significant advantage of reducing the Kantorovich dual problem to a maximization over a single potential function, while also capturing important applications arising in density functional theory. Though we do not pursue this direction here, our approach, with minor modifications, will also work with distinct marginals. In this case, if each measure is discretized using $N$ points, one would need to solve $(m-1)N$ coupled, real-valued ODEs (rather than the $N$ coupled ODEs dealt with here in Section \ref{sect: ode approach}), reflecting the $m-1$ independent Kantorovich potentials needed to fully capture the solution.
The remainder of this manuscript is organized as follows. In Section 2, we recall some basic facts about multi-marginal optimal transport, as well as its entropic regularization and the duals of both problems, and prove that for a particular, simple cost function, the solutions to the regularized problem and its dual can be computed by solving $m-1$ two marginal problems. This solution will serve as the initial condition for an ODE, which is introduced, and proven to be well-posed, in Section 3. In Section 4, algorithms, based on this ODE, to compute the solution to the multi-marginal optimal transport problem are described and some resulting numerical simulations are presented.
\section{Multi-marginal optimal transport and entropic regularization}
Given $m$ probability measures $\mu^i$ on bounded domains $X^i \subseteq \mathbb{R}^n$ for $i=1,2...,m$ and a lower semi-continuous cost function $c: X^1 \times X^2 \times ...\times X^m \rightarrow \mathbb{R} \cup \{+\infty\}$, the multi-marginal optimal transport problem consists in solving the following optimization problem
\begin{equation}
\label{pb:mmot}
\inf_{\gamma\in\Pi(\mu^1,\cdots,\mu^m)} \int_{X^1 \times X^2...\times X^m} c(x^1,...,x^m)\dd\gamma
\end{equation}
where $\Pi(\mu^1,\cdots,\mu^m)$ denotes the set of probability measures on $X^1 \times X^2 \times ...\times X^m$ whose marginals are the $\mu^i$. One can easily show by means of the direct method of the calculus of variations that this problem admits at least one solution, which will be referred to as the \textit{optimal transport plan}.
It is well known that under some mild assumptions the above problem is dual to the following
\begin{equation}\label{eqn: multi-marginal dual}
\sup\left\{ \sum_{i=1}^m\int_{X^i}\phi^i(x^i)\dd\mu^i \;|\;\phi^i \in L^1(\mu^i),\text{ }\sum_{i=1}^m\phi^i(x^i) \leq c(x^1,...,x^m)\right \}.
\end{equation}
We will also consider a common variant of \eqref{pb:mmot}, known as \textit{entropic optimal transport} which consists in adding an entropy regularization term. For a parameter $\eta >0$, this is to minimize
\begin{equation}
\label{pb:mmot-reg}
\inf_{\gamma\in\Pi(\mu^1,\cdots,\mu^m)}\left\{ \int_{X^1 \times X^2...\times X^m} c(x^1,....,x^m)\dd\gamma + \eta H_{\otimes_{i=1}^m\mu^i}(\gamma)\right \}
\end{equation}
where the relative entropy $H_{\otimes_{i=1}^m\mu^i}(\gamma)$ with respect to product measure $\otimes_{i=1}^m\mu^i$ is defined by
$$
H_{\otimes_{i=1}^m\mu^i}(\gamma) = \int_{X^1 \times...\times X^m} \frac{\dd\gamma}{\dd(\otimes_{i=1}^m\mu^i)}\log(\frac{\dd\gamma}{\dd(\otimes_{i=1}^m\mu^i)}) \dd(\otimes_{i=1}^m\mu^i),
$$
if $\gamma$ is absolutely continuous with respect to the product measure $\otimes_{i=1}^m\mu^i$ and $+\infty$ otherwise. The regularized transport is, in turn, dual to the following unconstrained optimization problem
\begin{equation}\label{eqn: multi-marginal reg dual}
\begin{split}
\sup &\sum_{i=1}^m\int_{X^i}\phi^i(x^i)\dd\mu^i\\
&-\eta \int_{X^1 \times...\times X^m} e^{\frac{\sum_{i=1}^m\phi^i(x^i) -c(x^1,...,x^m)}{\eta} }\dd(\otimes_{i=1}^m\mu^i)(x^1,\cdots,x^m).
\end{split}
\end{equation}
The regularized problem \eqref{pb:mmot-reg} and its dual \eqref{eqn: multi-marginal reg dual} arise frequently in computational work. We note in particular that \eqref{pb:mmot-reg} is the minimization of a strictly convex functional and therefore admits a unique solution. It is well known that as $\eta \rightarrow 0$, solutions of \eqref{pb:mmot-reg} and \eqref{eqn: multi-marginal reg dual} converge to solutions of \eqref{pb:mmot} and \eqref{eqn: multi-marginal dual}, respectively.
When each $X^i$ is a finite set (a case of particular interest in this paper), we obtain discrete versions of \eqref{pb:mmot} and its dual, which amount to linear programs, taking the forms
\begin{equation}
\label{eq:discretePrimal}
\inf\left\{\sum_{\bar x\in \times_{i=1}^m X^{i}}c(\bar x)\gamma_{\bar x}\;|\;\gamma\in\Pi(\mu^1,\cdots,\mu^m) \right\},
\end{equation}
where $ \bar x=(x^1,\cdots,x^m) \in X^1 \times ....\times X^m$ and
\begin{equation}
\label{pb:discreteDual}
\sup\{ \sum_{i=1}^m\sum_{x\in X^i}\phi^{i}_x\mu^i_x\;|\;(\phi^1,\cdots,\phi^m)\in\mathcal T\}
\end{equation}
where, if we identify functions $\phi^i:X^i \rightarrow \mathbb{R}$ with points in $\mathbb{R}^{|X^i|}$, \[\mathcal T:=\{ \phi^i\in\mathbb{R}^{|X^i|}\;\forall i=1,\cdots,m,\sum_{i=1}^m\phi^i_{x^i}\leq c(x^1,\cdots, x^m),\; \forall (x^1,\cdots, x^m)\in \times_{i=1}^mX^i \}.\]
Notice that, if each $|X^i|=N$, in the case of the primal problem we have to deal with $N^m$ unknowns and $mN$ constraints, whereas in the dual problem there are $mN$ unknowns and $N^m$ constraints. In both cases we have to deal with the so called ``curse of dimensionality,'' namely the complexity of the problem increases exponentially with the number of marginals.\\
In this discrete setting, the entropy regularized problem \eqref{pb:mmot-reg} and its dual \eqref{eqn: multi-marginal reg dual} then become finite dimensional convex optimization problems:
\begin{equation}
\label{pb:primalEntropic}
\inf\left\{\sum_{\bar x\in \times_{i=1}^m X^{i}}c(\bar x)\gamma_{\bar x}+\eta [H(\gamma)-H(\otimes^m\mu^i)]\;|\;\gamma\in\Pi(\mu^1,\cdots,\mu^m) \right\}
\end{equation}
where $\eta>0$ and $H$ is the entropy with respect to uniform measure on the finite set $\times_{i=1}^m X^{i}$ (we suppress the subscript on $H$ indicating the reference measure in the finite case, as we will only deal with entropy relative to the uniform measure), $H(\gamma)=\sum_{\bar x\in \times_{i=1}^m X^{i}}h(\gamma_{\bar x})$, with
\[
h(t)=\begin{cases}
&t(\log(t)-1),\;t>0\\
&0,\;t=0\\
&+\infty,\; t<0,
\end{cases}
\]
and
\begin{equation}
\label{pb:dualEntropic}
\sup\left \{ \sum_{i=1}^m\sum_{x\in X^i}\phi^{i}_x\mu^i_x-\eta\sum_{\bar x\in \times_{i=1}^m X^{i}}\exp\Bigg(\frac {\sum_i\phi^i_{x^i}-c(\bar x)}{\eta}\Bigg)(\otimes^m\mu^i)_{\bar x} \right\}.
\end{equation}
We note in particular that \eqref{pb:dualEntropic} is an \emph{unconstrained} finite dimensional concave maximization problem. Solutions may be computed using a multi-marginal version of the Sinkhorn algorithm \cite{benamouetalentropic,CuturiSinkhorn,peyre2017computational,Galichon-Entropic}, and one can then recover the optimal $\gamma$ in \eqref{pb:primalEntropic} from the solutions $\phi^1,...,\phi^m$ to \eqref{pb:dualEntropic} via the well known formula:
$$
\gamma_{\bar x}=\exp{\bigg(\frac{\sum_{i=1}^m\phi^i_{x^i} -c(\bar x)}{\eta}\bigg)}\mu^1_{x^1}\mu^2_{x^2}...\mu^m_{x^m}
$$
where $\bar x=(x^1,....,x^m)$.
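A minimal sketch of the multi-marginal Sinkhorn iteration for $m=3$ discrete marginals is given below, under illustrative choices we fix ourselves (equal uniform marginals, quadratic pairwise cost, $\eta = 0.5$). Each potential update enforces the corresponding marginal constraint exactly, and the final plan is recovered through the displayed formula.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, eta = 6, 3, 0.5
x = np.sort(rng.random(N))                    # common support points
mu = [np.full(N, 1.0 / N) for _ in range(m)]  # three uniform marginals

w = (x[:, None] - x[None, :]) ** 2            # pairwise cost w(x,y) = |x-y|^2
# cost tensor c(x1,x2,x3) = w(x1,x2) + w(x1,x3) + w(x2,x3)
c = w[:, :, None] + w[:, None, :] + w[None, :, :]

def plan(phi):
    """gamma = exp((phi1 + phi2 + phi3 - c)/eta) * mu1 x mu2 x mu3."""
    K = np.exp((phi[0][:, None, None] + phi[1][None, :, None]
                + phi[2][None, None, :] - c) / eta)
    return K * mu[0][:, None, None] * mu[1][None, :, None] * mu[2][None, None, :]

# Multi-marginal Sinkhorn: cyclically update each potential so that the
# corresponding marginal constraint of the plan holds exactly.
phi = [np.zeros(N) for _ in range(m)]
for _ in range(200):
    for i in range(m):
        axes = tuple(j for j in range(m) if j != i)
        marg = plan(phi).sum(axis=axes)
        phi[i] += eta * (np.log(mu[i]) - np.log(marg))

gamma = plan(phi)
for i in range(m):
    axes = tuple(j for j in range(m) if j != i)
    assert np.allclose(gamma.sum(axis=axes), mu[i], atol=1e-6)
```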
\subsection{Pairwise costs}
We are especially interested in this paper in cost functions $c(x^1,\ldots,x^m)$ involving pairwise interactions, that is,
\[
c(x^1,\ldots,x^m) =\sum_{1\le i< j\le m} w(x^i,x^j).
\]
Such costs are ubiquitous in applications: for example, for systems of interacting classical particles in \cite{cotar2013density, buttazzo2012optimal}, $c$ is a pairwise cost with $w(x,y)=\dfrac{1}{|x-y|}$, known as the Coulomb cost.\\
Let us now consider
costs $c_\epsilon$ of the form
\begin{equation}\label{eqn: epsilon cost}
c_\epsilon(x^1,\cdots,x^m):=\epsilon\sum_{i=2}^m\sum_{j=i+1}^mw(x^i,x^j) +\sum_{i=2}^mw(x^1,x^i).
\end{equation}
It is clear that when $\epsilon=1$ we retrieve a pair-wise cost as defined above whereas in the limit $\epsilon\to 0$ we obtain a cost involving only the interactions between $x^1$ and the other $x^i$ individually.
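The interpolation \eqref{eqn: epsilon cost} is straightforward to realize on discrete cost tensors. The sketch below (for $m=3$, with an illustrative regularized Coulomb-type $w$, offset by $0.1$ to avoid division by zero) checks that $\epsilon = 1$ recovers the full pairwise cost and that the $\epsilon = 0$ star cost is symmetric in $x^2, x^3$.

```python
import numpy as np

N = 4
x = np.linspace(0.0, 1.0, N)
# Illustrative regularized Coulomb-like two-body cost (offset avoids 1/0).
w = 1.0 / (np.abs(x[:, None] - x[None, :]) + 0.1)

def c_eps(eps):
    # eps * (interaction among x^2,...,x^m) + (interactions of x^1 with the rest)
    star = w[:, :, None] + w[:, None, :]  # w(x1,x2) + w(x1,x3)
    rest = w[None, :, :]                  # w(x2,x3)
    return eps * rest + star

full = w[:, :, None] + w[:, None, :] + w[None, :, :]  # original pairwise cost
assert np.allclose(c_eps(1.0), full)                  # eps = 1 recovers c
# eps = 0: star cost, symmetric under swapping x^2 and x^3
assert np.allclose(c_eps(0.0), c_eps(0.0).transpose(0, 2, 1))
```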
Later on, we will develop an ordinary differential equation that governs the evolution with $\epsilon$ of the solutions to the regularized dual problem \eqref{pb:dualEntropic}; the results below assert that the initial condition for that equation (that is, the solutions when $\epsilon =0$) can be recovered by solving each of the individual two marginal problems between $\mu^1$ and $\mu^i$.
In what follows, we will assume that each marginal $\mu^i$ is absolutely continuous with respect to a fixed base measure $\nu^i$ with density given by $\frac{\dd\mu^i}{\dd\nu^i}$.
\begin{proposition}Assume that each marginal $\mu^i$ is absolutely continuous with respect to a fixed base measure $\nu^i$ with density given by $\frac{\dd\mu^i}{\dd\nu^i}(x^i)$: $\dd\mu^i(x^i) = \frac{\dd\mu^i}{\dd\nu^i}(x^i)\dd\nu^i(x^i)$.
Consider the regularized problem \eqref{pb:mmot-reg} with limiting pairwise cost; that is, set $\epsilon =0$ in \eqref{eqn: epsilon cost} to obtain:
\begin{equation}\label{eqn: limiting regularized problem}
\min_{\gamma \in \Pi(\mu^1,\mu^2,...,\mu^m)}\int \sum_{i=2}^mw(x^1,x^i)\dd\gamma +\eta H_{ \otimes_{i=1}^m \mu^i}(\gamma).
\end{equation}
Let $\frac{\dd\bar \pi^i}{\dd(\nu^1 \otimes \nu^i)}$ be the density with respect to product measure $\nu^1(x^1) \otimes \nu^i(x^i)$ of the minimizer $\bar \pi^i=\frac{\dd\bar \pi^i}{\dd(\nu^1 \otimes \nu^i)}\nu^1\otimes\nu^i$ in the regularized two marginal problem:
$$
\min_{\pi^i \in \Pi(\mu^1,\mu^i) } \int w(x^1,x^i)\dd\pi^i(x^1,x^i) +\eta H_{\mu^1 \otimes \mu^i}(\pi^i).
$$
Then the density $\frac{\dd\bar\gamma}{\dd(\otimes_{i=1}^m\nu^i)}$ of the optimal $\bar\gamma =\frac{\dd\bar\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(\otimes_{i=1}^m\nu^i)$ in \eqref{eqn: limiting regularized problem} is given by
$$
\frac{\dd\bar\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(x^1,...,x^m)=\frac{\frac{\dd\bar\pi^2}{\dd(\nu^1 \otimes \nu^2)}(x^1,x^2)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}\frac{\frac{\dd\bar\pi^3}{\dd(\nu^1 \otimes \nu^3)}(x^1,x^3)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}...\frac{\frac{\dd\bar\pi^m}{\dd(\nu^1 \otimes \nu^m)}(x^1,x^m)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}\frac{\dd\mu^1}{\dd\nu^1}(x^1).
$$
\end{proposition}
\begin{proof}
Choose any $\gamma =\frac{\dd\gamma}{\dd(\otimes_{i=1}^m \nu^i)}(\otimes_{i=1}^m \nu^i)\in \Pi(\mu^1,\mu^2,...,\mu^m)$ which is absolutely continuous with respect to $\otimes_{i=1}^m \nu^i$ and let $\pi^i(x^1,x^i) =\Big((x^1,..,x^m)\mapsto (x^1,x^i)\Big)_\#\gamma \in \Pi(\mu^1,\mu^i)$ be its twofold marginals. Then
\begin{eqnarray}
H_{ \otimes_{i=1}^m \mu^i}(\gamma)&=& \int_{X^1\times ...\times X^m} [\log(\frac{\dd\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(x^1,...,x^m)) -\sum_{i=1}^m \log(\frac{\dd\mu^i}{\dd\nu^i}(x^i))]\dd\gamma(x^1,...,x^m)\nonumber\\
&=&\int_{X^1\times ...\times X^m} [\log(\frac{\dd\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(x^1,...,x^m))] \dd\gamma(x^1,...,x^m) -\sum_{i=1}^m H_{\nu^i}(\mu^i),\label{eqn: relative entropy decomposition}
\end{eqnarray}
where each $H_{\nu^i}(\mu^i) =\int_{X^i}\log(\frac{\dd\mu^i}{\dd\nu^i}(x^i))\dd\mu^i(x^i)$ is constant throughout $\Pi(\mu^1,...,\mu^m)$.
Now disintegrating $\gamma =\gamma_{x^1}(x^2,...,x^m)\otimes \mu^1(x^1)$ with respect to its first marginal $\mu^1$, we note that $\gamma_{x^1}(x^2,...,x^m) = \frac{\dd\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(x^1,x^2,...,x^m)\frac{1}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}(\otimes_{i=2}^m\nu^i) $ and so
\begin{equation}
\label{eqn: conditional entropy}
\begin{split}
&\int_{X^1\times ...\times X^m} \log\bigg(\frac{\dd\gamma}{\dd(\otimes_{i=1}^m\nu^i)}\bigg) \dd\gamma \\
=&\int_{X^1\times ...\times X^m} \log\bigg(\frac{\dd\gamma_{x^1}}{\dd(\otimes_{i=2}^m\nu^i)}\bigg) \dd\gamma +H_{\nu^1}(\mu^1)\\
=&\int_{X^1}\int_{X^2\times ...\times X^m} \log\bigg(\frac{\dd\gamma_{x^1}}{\dd(\otimes_{i=2}^m\nu^i)}\bigg) \dd\gamma_{x^1}\dd\mu^1 +H_{\nu^1}(\mu^1)\\
=&\int_{X^1}H_{\otimes_{i=2}^m \nu_{i}}(\gamma_{x^1})\dd\mu^1 +H_{\nu^1}(\mu^1)
\end{split}
\end{equation}
where $H_{\otimes_{i=2}^m \nu_{i}}(\gamma_{x^1})$ is the entropy of $\gamma_{x^1}$ with respect to $\otimes_{i=2}^m \nu_{i}$ and $H_{\nu^1}(\mu^1)$ is the entropy of $\mu^1$ with respect to $\nu^1$. Now note that if we disintegrate each $\pi^i =\pi^i_{x^1}(x^i)\otimes\mu^1(x^1)$ with respect to $\mu^1$, then for each fixed $x^1$, the conditional probability $\pi^i_{x^1}$ is the $i$th marginal of $\gamma_{x^1}$ and so
\begin{eqnarray}
\int_{X^1}H_{\otimes_{i=2}^m \nu_{i}}(\gamma_{x^1})\dd\mu^1(x^1) &\geq& \int_{X^1} \sum_{i=2}^m H_{\nu^i}(\pi^i_{x^1}) \dd\mu^1(x^1)\nonumber\\
&=&\sum_{i=2}^m\int_{X^1} H_{\nu^i}(\pi^i_{x^1}) \dd\mu^1(x^1)\nonumber\\
&=&\sum_{i=2}^m [H_{\nu^1 \otimes \nu^i}(\pi^i) -H_{\nu^1}(\mu^1)]\nonumber\\
&=&\sum_{i=2}^m [H_{\mu^1 \otimes \mu^i}(\pi^i) +H_{\nu^i}(\mu^i)], \label{eqn: conditional entorpy inequality}
\end{eqnarray}
where the equality $H_{\nu^1 \otimes \nu^i}(\pi^i)=\int_{X^1} H_{\nu^i}(\pi^i_{x^1}) \dd\mu^1(x^1) +H_{\nu^1}(\mu^1)$ in the second to last line follows very similarly to the derivation of \eqref{eqn: conditional entropy} above and the equality $H_{\nu^1 \otimes \nu^i}(\pi^i) -H_{\nu^1}(\mu^1)=H_{\mu^1 \otimes \mu^i}(\pi^i) +H_{\nu^i}(\mu^i)$ in the last line follows very similarly to
the derivation of \eqref{eqn: relative entropy decomposition}.
Therefore, combining \eqref{eqn: relative entropy decomposition}, \eqref{eqn: conditional entropy} and \eqref{eqn: conditional entorpy inequality}, we get
\begin{eqnarray*}
& \int_{X^1 \times ...\times X^m} \sum_{i=2}^mw(x^1,x^i)\dd\gamma +\eta H_{\otimes_{i=1}^m \mu^{i}}(\gamma)
= \sum_{i=2}^m\int_{X^1 \times X^i} w(x^1,x^i)\dd\pi^i +\eta H_{\otimes_{i=1}^m \mu^{i}}(\gamma) \\
&\geq \sum_{i=2}^m\int_{X^1 \times X^i} w(x^1,x^i)\dd\pi^i+\eta\Big(\sum_{i=2}^m [H_{\mu^1 \otimes \mu^i}(\pi^i) +H_{\nu^i}(\mu^i)]+H_{\nu^1}(\mu^1) -\sum_{i=1}^mH_{\nu^i}(\mu^i)\Big)\\
&= \sum_{i=2}^m\int_{X^1 \times X^i} w(x^1,x^i)\dd\pi^i+\eta\sum_{i=2}^m H_{\mu^1 \otimes \mu^i}(\pi^i)\\
&\geq \sum_{i=2}^m\int_{X^1 \times X^i} w(x^1,x^i)\dd\bar \pi^i+\eta\sum_{i=2}^m H_{\mu^1 \otimes \mu^i}(\bar \pi^i)
\end{eqnarray*}
by optimality of the $\bar \pi^i$.
We have equality in the last line if and only if $\pi^i =\bar \pi^i$ for each $i$, and equality in the line above if and only if, for $\mu^1$ almost every $x^1$, the conditional measure $\gamma_{x^1}$ couples the $\pi^i_{x^1}$ independently; this yields the desired result.
\end{proof}
Note in particular that this result allows us to recover the solution to problem \eqref{pb:primalEntropic} with cost \eqref{eqn: epsilon cost}, when $\epsilon =0$, by solving $m-1$ individual regularized two marginal optimal transport problems. In the following section, we will develop a dynamical approach to solve the dual problem \eqref{pb:dualEntropic} to \eqref{pb:primalEntropic} for cost \eqref{eqn: epsilon cost} with $\epsilon >0$. Our initial condition will be the dual potentials when $\epsilon =0$, which we can obtain from the corresponding two marginal dual potentials, as the following corollary confirms.
\begin{corollary}\label{cor: epsilon =0 dual potentials}
Assume each $\mu^i$ is absolutely continuous with respect to a given reference measure $\nu^i$. For each $i=2,...,m$, let $\psi^i(x^1),\ \phi^i(x^i)$ solve the regularized two marginal dual problem \eqref{pb:dualEntropic} between marginals $\mu^1$ and $\mu^i$ with cost function $w(x^1,x^i)$. Then $\phi^1(x^1), \phi^2(x^2),...,\phi^m(x^m)$, with $\phi^1(x^1) =\sum_{i=2}^m\psi^i(x^1)$, solve the regularized dual \eqref{pb:dualEntropic} with marginals $\mu^1,\mu^2,...,\mu^m$ and cost $c(x^1,...,x^m)=\sum_{i=2}^mw(x^1,x^i)$.
\end{corollary}
\begin{proof}
We have that for each $i$, the optimizer $\bar \pi^i$ in the regularized two marginal primal problem satisfies
$$
\frac{\dd\bar\pi^i}{\dd(\nu^1 \otimes \nu^i)}(x^1,x^i) = e^{\frac{\phi^i(x^i) +\psi^i(x^1) -w(x^1,x^i)}{\eta}}\frac{\dd \mu^1}{\dd\nu^1}\frac{\dd \mu^i}{\dd\nu^i}.
$$
By the preceding proposition, the optimizer in the regularized multi-marginal problem \eqref{eqn: limiting regularized problem} satisfies
\begin{eqnarray*}
\frac{\dd\bar\gamma}{\dd(\otimes_{i=1}^m\nu^i)}(x^1,x^2,\ldots,x^m)&=&\frac{\frac{\dd\bar\pi^2}{\dd(\nu^1 \otimes \nu^2)}(x^1,x^2)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}\frac{\frac{\dd\bar\pi^3}{\dd(\nu^1 \otimes \nu^3)}(x^1,x^3)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}\cdots\frac{\frac{\dd\bar\pi^m}{\dd(\nu^1 \otimes \nu^m)}(x^1,x^m)}{\frac{\dd\mu^1}{\dd\nu^1}(x^1)}\frac{\dd\mu^1}{\dd\nu^1}(x^1)\\
&=& e^{\frac{\sum_{i=2}^m[\phi^i(x^i) +\psi^i(x^1) -w(x^1,x^i)]}{\eta}}\frac{\dd \mu^2}{\dd\nu^2}\frac{\dd \mu^3}{\dd\nu^3}\cdots\frac{\dd \mu^m}{\dd\nu^m}\frac{\dd \mu^1}{\dd\nu^1}\\
&=&e^{\frac{\sum_{i=1}^m\phi^i(x^i) -\sum_{i=2}^mw(x^1,x^i)}{\eta}}\frac{\dd \mu^1}{\dd\nu^1}\frac{\dd \mu^2}{\dd\nu^2}\frac{\dd \mu^3}{\dd\nu^3}\cdots\frac{\dd \mu^m}{\dd\nu^m}.
\end{eqnarray*}
This is exactly the first order condition identifying the regularized potentials for the multi-marginal regularized problem with cost $\sum_{i=2}^mw(x^1,x^i)$.
\end{proof}
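In discrete form, the gluing used in the proof is simply a product of conditional kernels: the multi-marginal plan couples $x^2,\ldots,x^m$ independently given $x^1$. A minimal sketch for $m=3$ (names and toy data are ours):

```python
import numpy as np

def glue_three(pi2, pi3, mu1):
    """Discrete gluing for m = 3:
    gamma[x1, x2, x3] = pi2[x1, x2] * pi3[x1, x3] / mu1[x1],
    i.e. x2 and x3 are coupled independently conditionally on x1."""
    return pi2[:, :, None] * pi3[:, None, :] / mu1[:, None, None]
```

One checks directly that the glued plan has pairwise marginals $\pi^2$, $\pi^3$ and first marginal $\mu^1$, provided both two marginal plans share the first marginal $\mu^1$.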
\section{An ODE characterisation of discrete multi-marginal optimal transport}\label{sect: ode approach}
We now turn our attention to developing an ODE for the Kantorovich potentials after discretizing the marginals. Working with the regularized discrete problem \eqref{pb:primalEntropic} and its dual \eqref{pb:dualEntropic} with pairwise cost \eqref{eqn: epsilon cost}, we make the following standing assumptions throughout this section:
\begin{enumerate}
\item (Equal marginals) All the marginals are equal: $\mu^i=\rho =\sum_{x \in X}\rho_x\delta_x$, where $X$ is a finite subset of $\mathbb{R}^d$;
\item (Symmetric cost) The two body cost $w$ is symmetric: $w(x,y) =w(y,x)$;
\item (Finite cost) The two body cost function $w:X \times X \rightarrow \mathbb{R}$ is everywhere real-valued.
\end{enumerate}
A motivating example of a pairwise, symmetric two body cost arises in Density Functional Theory, where the cost is given by $w(x,y)=\dfrac{1}{|x-y|}$; in problems with this cost, the marginals are typically also identical.
The Coulomb cost does not satisfy the finiteness hypothesis, but one can instead consider the truncated cost $w(x,y)=\min\bigg(\dfrac{1}{|x-y|},C\bigg)$; it is known that the solution stays away from the diagonal, so for sufficiently large $C$ the solution with the truncated cost coincides with the solution for the original Coulomb cost (see, for instance, \cite{buttazzo2018continuity}).
\begin{remark}One could dispose of the equal marginal and symmetric cost assumptions. Analogues of the results proved below would still hold; one could characterize the solution to the regularized dual problem \eqref{pb:dualEntropic} by an ODE, and prove that this ODE is well-posed. As we will see below, however, solving the problem numerically becomes more feasible under the hypotheses above, as the solution can be characterized by a single Kantorovich potential, so that the resulting ODE is an equation on $\mathbb{R}^N$, where $N$ is the number of points in the support of the marginal. With unequal marginals and a non-symmetric cost, one would require $m-1$ independent Kantorovich potentials to fully characterize the solution; if each marginal is supported on $N$ points, this would lead to an $(m-1)N$-dimensional system of ODEs.
\end{remark}
\subsection{Formulation of the ODE problem}
Notice now that although the cost \eqref{eqn: epsilon cost} at $\epsilon=1$ is symmetric in the variables $x^1,x^2,...,x^m$, the one at $\epsilon<1$ is not. It is, however, symmetric in the variables $x^2,...,x^m$; this means that the optimal $\phi^i$ in \eqref{pb:dualEntropic} satisfy $\phi^i=\phi^j=\phi$ for $i,j \geq 2$ and so, setting $\phi^1 =\psi,$ we can rewrite \eqref{pb:dualEntropic} as
\begin{equation}
\label{pb:discreteDualRe}
\inf_{\phi,\psi: X \rightarrow \mathbb{R}}\left\{\Phi(\phi,\psi,\epsilon)\right\},
\end{equation}
where
\[ \Phi(\phi,\psi,\epsilon):=-(m-1)\sum_{x\in X}\phi_x\rho_x-\sum_{x\in X}\psi_x\rho_x+\eta\sum_{\bar x\in X^{m}}e^{\Bigg(\frac {\sum_{i=2}^m\phi_{x^i}+\psi_{x^1}-c_\epsilon(\bar x)}{\eta}\Bigg)}(\otimes^m\rho)_{\bar x}.\]
\begin{remark}[Notation]
Recall that we use the notation $\bar x$ to represent a point in a product space, such as $\bar x =(x^1,\cdots,x^{m})\in X^m$, as above, or, as will often be the case in what follows, $\bar x =(x^1,\cdots,x^{m-1}) \in X^{m-1}$. We introduce the following notation to represent corresponding products of the densities:
\[ \tilde \rho_{\bar x}=(\otimes^{m-1}\rho)_{\bar x}=\otimes_{i=1}^{m-1}\rho_{x^i} \]
\end{remark}
Since the functional $\Phi(\phi,\psi,\epsilon)$ is convex on the set $\{\phi,\psi: X \rightarrow \mathbb{R}\} \approx \mathbb{R}^{2|X|}$, as the sum of a linear and an exponential function, optimal solutions $(\phi^*,\psi^*)$ can be characterized by the first order optimality conditions $\nabla_\phi\Phi=\nabla_\psi\Phi=0$, or (component-wise):
\[\phi^*_z=-\eta\log\Bigg(\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m-1}\phi^*_{x^i}+\psi^*_{x^1}-c_\epsilon(\bar x,z)}{\eta}\Bigg)\tilde \rho_{\bar x}\Bigg) \]
and
\begin{equation}\label{eqn: psi from phi}
\psi^*_z=-\eta\log\Bigg(\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi^*_{x^i}-c_\epsilon(z,\bar x)}{\eta}\Bigg)\tilde \rho_{\bar x}\Bigg).
\end{equation}
In particular, note that \eqref{eqn: psi from phi} allows us to express the optimal $\psi^*$ in \eqref{pb:discreteDualRe} in terms of the optimal $\phi^*$, after which \eqref{pb:discreteDualRe} reduces to the following optimization problem
\begin{equation}
\label{pb:discreteDualRered}
\inf_{\phi: X \rightarrow \mathbb{R}}\left\{\tilde\Phi(\phi,\epsilon)\right\},
\end{equation}
where
\[\tilde\Phi(\phi,\epsilon):=-(m-1)\sum_{x\in X}\phi_x\rho_x+\eta\sum_z\log\Bigg(\sum_{\bar x\in X^{m-1}}e^{\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}-c_\epsilon(z,\bar x)}{\eta}\Bigg)}\tilde \rho_{\bar x}\Bigg)\rho_z . \]
\begin{remark}[LogSumExp and convexity]
The function \[\phi\mapsto\log\Bigg(\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}-c_\epsilon(z,\bar x)}{\eta}\Bigg)\tilde \rho_{\bar x}\Bigg):=LSE_{c_\epsilon}(\phi)_z\]
is also known as the Log-Sum-Exp (LSE) function. Using H\"older's inequality, one can easily show that the Log-Sum-Exp function is convex.
\end{remark}
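In practice, every evaluation of $\eta\log\big(\sum\exp(\cdot/\eta)\,\rho\big)$ appearing above should be carried out in a numerically stable way, shifting by the maximum before exponentiating; this matters precisely in the small-$\eta$ regime discussed later. A sketch (ours, not from the text):

```python
import numpy as np

def eta_log_sum_exp(s, weights, eta):
    """Stable evaluation of eta * log( sum_j weights_j * exp(s_j / eta) ).

    Shifting by max(s) keeps every exponential in (0, 1], avoiding
    overflow for small eta."""
    s = np.asarray(s, dtype=float)
    m = s.max()
    return m + eta * np.log(np.sum(weights * np.exp((s - m) / eta)))
```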
It is well known that the solution to \eqref{pb:dualEntropic} is unique up to the addition of constants $\phi^i \mapsto \phi^i+C^i$ with $\sum_{i=1}^mC^i=0$; thus, solutions to \eqref{pb:discreteDualRered} are unique up to the addition of a single constant, $\phi \mapsto \phi+C$.
We therefore impose the normalization
\begin{equation}\label{eqn: normalization}
\phi_{x_0}=0
\end{equation}
for all $\epsilon\in[0,1]$ and a fixed $x_0\in X$.
The problem \eqref{pb:discreteDualRered}, restricted to $\phi$'s satisfying \eqref{eqn: normalization} then has a unique solution; the function $\tilde\Phi(\cdot,\epsilon)$ is strictly convex when restricted to this set, and the solution $\phi^*=\phi(\epsilon)$
can be characterized by the optimality condition $\nabla_\phi\tilde\Phi(\phi^*,\epsilon)=0$, where each component of the gradient is given by
\begin{equation}
\label{eq:grad}
\dfrac{\partial}{\partial\phi_z}\tilde\Phi=-(m-1)\rho_z+(m-1)e^{\phi_z/\eta}\rho_z\sum_y\sum_{\bar x\in X^{m-2}}e^{\Bigg(\frac{\sum_{i=3}^{m}\phi_{x^i}-c_\epsilon(y,z,\bar x)}{\eta}\Bigg)}(\otimes^{m-2}\rho)_{\bar x}\bar \rho_y,
\end{equation}
where
\[\bar \rho_y=\dfrac{\rho_y}{\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}-c_\epsilon(y,\bar x)}{\eta}\Bigg)\tilde\rho_{\bar x}}.\]
Our numerical method consists then in solving an ODE for the evolution of $\phi(\epsilon)$ obtained by differentiating
\begin{equation}
\label{eqn: first order optimality}
\nabla_{\phi}\tilde\Phi(\phi(\epsilon),\epsilon)=0
\end{equation}
with respect to $\epsilon$:
\begin{equation}
\label{eq:ODE}
\frac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi(\epsilon),\epsilon)+D^2_{\phi,\phi}\tilde\Phi(\phi(\epsilon),\epsilon)\frac{\dd\phi}{\dd\epsilon}(\epsilon)=0.
\end{equation}
If the pure second derivatives with respect to $\phi$ as well as the mixed second derivatives with respect to $\phi$ and $\epsilon$ exist and are Lipschitz, and the Hessian with respect to $\phi$ is invertible, we will obtain a characterization of $\phi$ as the solution to the following well-posed Cauchy problem:
\begin{equation}
\label{eq:PdCprelim}
\begin{cases}
&\dfrac{\dd\phi}{\dd\epsilon}(\epsilon)=-[D^2_{\phi,\phi}\tilde\Phi(\phi(\epsilon),\epsilon)]^{-1}\dfrac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi(\epsilon),\epsilon),\\
&\phi(0)=\phi_w,
\end{cases}
\end{equation}
where, by Corollary \ref{cor: epsilon =0 dual potentials}, the initial value $\phi(0)$ of $\phi$ when $\epsilon =0$ coincides with $\phi_w$, the optimal potential for the two marginal optimal transport problem with cost $w$.
The next section is devoted to proving these properties.
\subsection{Well posedness of the ODE}
We refer the reader to appendix \ref{app:seconderv} for the computation of the second pure and mixed derivatives with respect to $\phi$ and the second mixed derivative with respect to $\phi$ and $\epsilon$.
In order to prove invertibility of $D^2_{\phi,\phi}\tilde \Phi$ and well posedness of the ODE we need some lemmas giving uniform bounds on the potential $\phi$ and the eigenvalues of $D^2_{\phi,\phi}\tilde \Phi$. We highlight that the following arguments are similar to (and largely inspired by) those in \cite{carlier2021linear}; the main difference is that we rewrite the dual problem using the Log-Sum-Exp function.
\begin{lemma}
\label{lemma:bounds}
Let $c_\epsilon$ satisfy the boundedness assumption $\norm{c_\epsilon}_\infty\leq M$ for all $\epsilon\in[0,1]$.\footnote{Note that the boundedness $\norm{c_\epsilon}_\infty\leq M$ for some $M>0$ follows immediately from our finite cost assumption on the finite set $X^m$.} Then the minimizer $\phi(\epsilon)$ of \eqref{pb:discreteDualRered} subject to the normalization constraint \eqref{eqn: normalization} satisfies
\[ \norm{\phi(\epsilon)}_\infty\leq 4M. \]
\end{lemma}
\begin{proof}
By the first order optimality condition $\nabla_\phi\tilde\Phi =0$ for \eqref{pb:discreteDualRered} we deduce that each component of $\phi(\epsilon)$ is given by
\[ \phi_z=-\eta\log\Bigg(\sum_y\sum_{\bar x\in X^{m-2}}\exp\Bigg(\frac{\sum_{i=3}^{m}\phi_{x^i}-c_\epsilon(y,z,\bar x)}{\eta}\Bigg)(\otimes^{m-2}\rho)_{\bar x}\bar \rho_y\Bigg). \]
It is easy to see that $\bar \rho_y$ can be bounded as follows:
\[ \dfrac{e^{-M/\eta}\rho_y}{\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}}{\eta}\Bigg)\tilde\rho_{\bar x}}\leq \bar \rho_y\leq \dfrac{e^{M/\eta}\rho_y}{\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}}{\eta}\Bigg)\tilde\rho_{\bar x}}. \]
Since we have imposed the normalization $\phi_{x_0}=0$ we get
\[
\begin{split}
\phi_z=\phi_z-\phi_{x_0}&\leq-\eta\log\Bigg(e^{-2M/\eta}\frac{\sum_{\bar x\in X^{m-2}}\exp\Bigg(\frac{\sum_{i=3}^{m}\phi_{x^i}}{\eta}\Bigg)(\otimes^{m-2}\rho)_{\bar x}}{\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}}{\eta}\Bigg)\tilde\rho_{\bar x}} \Bigg)\\
&+\eta\log\Bigg(e^{2M/\eta}\frac{\sum_{\bar x\in X^{m-2}}\exp\Bigg(\frac{\sum_{i=3}^{m}\phi_{x^i}}{\eta}\Bigg)(\otimes^{m-2}\rho)_{\bar x}}{\sum_{\bar x\in X^{m-1}}\exp\Bigg(\frac{\sum_{i=2}^{m}\phi_{x^i}}{\eta}\Bigg)\tilde\rho_{\bar x}} \Bigg),\\
&\leq 4M,
\end{split}\]
and the desired result immediately follows.
\end{proof}
Having established the above bounds, we aim to prove the well posedness of the Cauchy problem \eqref{eq:PdCprelim} on the set
\begin{equation}\label{eqn:def of U}
U:=\{\phi: X \rightarrow \mathbb{R}\;|\;\phi_{x_0}=0,\; \norm{\phi}_\infty\leq 4M\}.
\end{equation}
\begin{lemma}\label{lem: lipschitz second derivatives}
$D^2_{\phi,\phi}\tilde\Phi(\phi,\epsilon)$ and $\frac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi,\epsilon)$ are Lipschitz with respect to $\phi$ on $U$.
\end{lemma}
\begin{proof}
This immediately follows from the fact that the second pure and mixed derivatives computed in Appendix \ref{app:seconderv} are easily seen to be $C^1$, and their derivatives are all bounded on $U$.
\end{proof}
In order to prove the invertibility of $D^2_{\phi,\phi}\tilde\Phi$ we need the following lemma assuring the strong convexity of the Log-Sum-Exp function on the set $U$.
\begin{lemma}
\label{lem:strong_convexity}
Let $\Psi: \tilde U_C \rightarrow \mathbb{R}$ be defined on
$$
\tilde U_C=\{\theta:X^{m-1} \rightarrow \mathbb{R} \;|\; \theta_{\bar x_0} =0,\;\norm{\theta}_{\infty}<C\}.
$$
where $\bar x_0=(x_0,...,x_0) \in X^{m-1}$, by $\Psi(\theta)=\sum_{y\in X}\log\Big(\sum_{\bar x\in X^{m-1}}e^{\theta_{\bar x}-c_\epsilon(y,\bar x)}\tilde \rho_{\bar x} \Big)\rho_y$.
Then $\Psi$ is $\beta$-strongly convex for some $\beta >0$.
\end{lemma}
\begin{proof}
It is enough to show strong convexity on this set of the function \[f_y:\theta\in \tilde U_C\mapsto\log\Big(\sum_{\bar x\in X^{m-1}}e^{\theta_{\bar x}-c_\epsilon(y,\bar x)}\tilde\rho_{\bar x} \Big) =\log\Big(e^{-c_\epsilon(y, \bar x_0)}\tilde \rho_{\bar x_0}+\sum_{\bar x\in X^{m-1}\setminus \{\bar x_0\}}e^{\theta_{\bar x}-c_\epsilon(y,\bar x)}\tilde\rho_{\bar x} \Big)\] for a fixed $y$.
Enumerating the set $ X^{m-1}\setminus \{\bar x_0\}$ of independent variables as $\bar x_j$ for $j \in \{1,\ldots,K\}$ with $K= |X|^{m-1}-1$, and denoting $z^j =e^{\theta_{\bar x_j}-c_\epsilon(y,\bar x_j)}\tilde\rho_{\bar x_j}$, the Hessian of $f_y$ is
$$
\frac{1}{\Big(e^{-c_\epsilon(y, \bar x_0)}\tilde \rho_{\bar x_0}+\sum_{j}z^j\Big)^2}\Big(-z\otimes z +\diag(z)(\sum_{j}z^j +e^{-c_\epsilon(y, \bar x_0)}\tilde \rho_{\bar x_0})\Big)
$$
The term $-z\otimes z$, combined with $\diag(z)\sum_{j}z^j$, constitutes a positive semi-definite matrix (see \cite{BoydVandenberghe04}, p.~74), while the remaining term $e^{-c_\epsilon(y, \bar x_0)}\tilde \rho_{\bar x_0}\diag(z)$ is positive definite, with lower bound
\[\beta =\frac{e^{-C-2M}\tilde \rho_{\bar x_0}\min_{\bar x} \tilde \rho_{\bar x}}{(e^{C+M}\tilde \rho_{\bar x_0} +\sum_{\bar x\in X^{m-1}\setminus \{\bar x_0\}}e^{C+M}\tilde\rho_{\bar x})^2}=e^{-4M-3C}\tilde \rho_{\bar x_0}\min_{\bar x} \tilde \rho_{\bar x}.\]
It follows that $f_y$, and therefore $\Psi$, is $\beta$-strongly convex on $\tilde U_C$.
\end{proof}
\begin{lemma}\label{lem: strong convexity}
Let $\Lambda: U \rightarrow \tilde U_{4(m-1)M}$ be the linear mapping defined by $\Lambda(\phi)_{\bar x}=\phi_{x^1}+\cdots+\phi_{x^{m-1}}$ for all $\bar x\in X^{m-1}$. Then $\tilde\Psi(\phi):=\Psi(\Lambda(\phi))$ is $\alpha$-strongly convex on $U$.
\end{lemma}
\begin{proof}
By the linearity of $\Lambda$, one gets that, for $\phi\in U$, $D^2\tilde\Psi(\phi)(v,v)=D^2\Psi(\Lambda(\phi))(\Lambda(v),\Lambda(v))$ for all $v\in U$. Thus,
\[ D^2\tilde\Psi(\phi)(v,v)=D^2\Psi(\Lambda(\phi))(\Lambda(v),\Lambda(v))\geq \beta \norm{\Lambda(v)}^2. \]
Since $\norm{\Lambda(v)}^2\geq \sum_{x\in X}\norm{(m-1)v_x}^2$ we finally get
\[ D^2\tilde\Psi(\phi)(v,v)\geq \alpha \norm{v}^2,\]
with $\alpha=\beta (m-1)^2>0$, proving the $\alpha$-strong convexity of $\tilde\Psi$.
\end{proof}
\begin{remark}
The $\alpha$ obtained in the lemma above is not optimal; indeed, we would have obtained a better lower bound on the eigenvalues of $D^2_{\phi,\phi}\tilde\Psi$ by computing the smallest eigenvalue of $\Lambda^*\Lambda$. Moreover, in the previous lemmas we took, for simplicity, $\eta=1$; otherwise the parameter $\alpha$ would take the form $\alpha=e^{-(4M+3C)/\eta}(m-1)^2$. Notice that $\alpha$ approaches $0$ as $\eta\to 0$, meaning that the condition number of the Hessian of $\tilde\Phi$ explodes; this will produce numerical instabilities.
\end{remark}
It easily follows from the previous lemma that $D^2_{\phi,\phi}\tilde\Phi =D^2_{\phi,\phi}\tilde\Psi$ is invertible on the set $U$; we can then state the following result on the well posedness of \eqref{eq:PdCprelim}.
\begin{theorem}
Let $\phi(\epsilon)$ be the solution to \eqref{pb:discreteDualRered} for all $\epsilon\in[0,1]$. Then $\epsilon\mapsto\phi(\epsilon)$ is $\mathcal C^1$ and is the unique solution to the Cauchy problem
\begin{equation}
\label{eq:PdC}
\begin{cases}
&\dfrac{\dd\phi}{\dd\epsilon}(\epsilon)=-[D^2_{\phi,\phi}\tilde\Phi(\phi(\epsilon),\epsilon)]^{-1}\dfrac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi(\epsilon),\epsilon),\\
&\phi(0)=\phi_w,
\end{cases}
\end{equation}
where $\phi_w$ is the optimal solution to \eqref{pb:dualEntropic} with cost $w$ and two marginals equal to $\rho$.
\end{theorem}
\begin{proof}
As $\phi(\epsilon)$ minimizes $\tilde \Phi(\cdot,\epsilon )$ for each fixed $\epsilon$, we clearly have \eqref{eqn: first order optimality}. Since $\tilde \Phi$ is clearly twice differentiable with respect to $\phi$ and $\epsilon$ and $D^2_{\phi\phi}\tilde \Phi$ is invertible by Lemma \ref{lem: strong convexity}, the Implicit Function Theorem then implies that $\epsilon \mapsto \phi(\epsilon)$ is $C^1$ and satisfies \eqref{eq:ODE}, or equivalently, \eqref{eq:PdC}.
Since $D^2_{\phi,\phi}\tilde \Phi$ and $\frac{\partial }{\partial \epsilon}\nabla _{\phi}\tilde \Phi$ are Lipschitz continuous with respect to $\phi$ on $U$ by Lemma \ref{lem: lipschitz second derivatives} and clearly continuous with respect to $\epsilon$, and since $D^2_{\phi,\phi}\tilde \Phi$ is uniformly positive definite by Lemma \ref{lem: strong convexity}, we have that
$$
(\phi, \epsilon) \mapsto -[D^2_{\phi,\phi}\tilde\Phi(\phi(\epsilon),\epsilon)]^{-1}\dfrac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi(\epsilon),\epsilon)
$$
is Lipschitz with respect to $\phi$ and continuous with respect to $\epsilon$ on $U$. Since by Lemma \ref{lemma:bounds} $\phi(\epsilon) \in U$ for all $\epsilon,$ the Cauchy-Lipschitz Theorem then implies uniqueness of the solution to \eqref{eq:PdC} on $U \times [0,1]$, as desired.
\end{proof}
\section{Algorithm and simulation}\label{algorithm}
In this section we present some numerical simulations obtained by discretizing the above ODE.
The algorithm consists in discretizing \eqref{eq:PdC} by an explicit Euler scheme (one could also use a higher order method for ODEs). Let $h$ be the step size and set $\phi^{(0)}=\phi_w$, the solution of a two marginal problem with cost $w$; then $\phi^{(k)}$ is defined inductively as follows.
\begin{enumerate}
\item Let $\phi^{(k)}$ be the solution at step $k$; then compute
\[D^{(k)}:=D^{2}_{\phi,\phi}\tilde\Phi(\phi^{(k)},kh),\quad b^{(k)}:=-\dfrac{\partial}{\partial \epsilon}\nabla_{\phi}\tilde\Phi(\phi^{(k)},kh). \]
\item Solve the linear system $D^{(k)}z=b^{(k)}$. We denote by $z^\star$ the solution.
\item Update the potential by setting
\[ \phi^{(k+1)} = \phi^{(k)}+hz^\star .\]
\end{enumerate}
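For concreteness, the loop above can be written generically as follows. The toy Hessian and mixed-derivative callables used in the test stand in for the problem-specific $D^2_{\phi,\phi}\tilde\Phi$ and $\partial_\epsilon\nabla_{\phi}\tilde\Phi$, so this is a sketch of the scheme's structure only, not of the actual functional:

```python
import numpy as np

def euler_continuation(phi0, hess, d_eps_grad, n_steps):
    """Explicit Euler scheme for the continuation ODE
    d(phi)/d(eps) = -H(phi, eps)^{-1} g(phi, eps) from eps = 0 to eps = 1,
    where H plays the role of the Hessian in phi and g that of the mixed
    derivative of the gradient in eps."""
    h = 1.0 / n_steps
    phi = np.array(phi0, dtype=float)
    for k in range(n_steps):
        eps = k * h
        # step 2: solve the linear system D^(k) z = b^(k)
        z = np.linalg.solve(hess(phi, eps), -d_eps_grad(phi, eps))
        # step 3: update the potential
        phi = phi + h * z
    return phi
```

On a toy quadratic family $\nabla\Phi(\phi,\epsilon)=A\phi-b(\epsilon)$, the scheme recovers $\phi(1)=A^{-1}b(1)$ up to $O(h)$, in line with the error estimate for the explicit Euler scheme.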
Notice that by the regularity we have proved above, we can conclude that the Euler scheme converges linearly. Moreover, the uniform error between the discretized solution obtained via the scheme and the solution to the ODE is $O(h)$.
In Figure \ref{fig:convergence} (left plot) we show the convergence order for the Euler scheme described above. The error is computed with respect to the solution to \eqref{pb:discreteDualRered} obtained via a gradient descent method with backtracking; the regularity of the objective function and the boundedness of the Hessian guarantee the convergence of that method. For these simulations we have taken $m=3$, the uniform measure on $[0,1]$ uniformly discretized with $100$ gridpoints, and the pairwise interaction $w(x,y)=-\log(0.1+|x-y|)$. Moreover, since the right-hand side in \eqref{eq:PdC} is regular, one can also apply a higher order scheme to solve the Cauchy problem. In Figure \ref{fig:convergence} (right plot) we compare the convergence of the Euler method and a Runge-Kutta method of order 3; notice that with $100$ time steps the RK method converges to a solution with an error of order $10^{-5}$, and by estimating the slopes of the two lines we obtain 3 and 1.16 for RK and Euler, respectively (as expected).
\begin{figure}[h!]
\TabTwo{
\includegraphics[width=.5\linewidth]{figures/convergence_error_euler.png}&
\includegraphics[width=.5\linewidth]{figures/convergence_error_RK.png}\\
}
\caption{Convergence order for the ODE approach. Left: Linear convergence for an explicit Euler scheme. Right: comparison between explicit Euler (blue line) and an explicit Runge-Kutta (red line) of order 3}
\label{fig:convergence}
\end{figure}
In Figure \ref{fig:fig1} we show the numerical result obtained with $\eta=0.006$, $h=1/1000$, $m=3$, the uniform measure on $[0,1]$ uniformly discretized with $100$ gridpoints, and the pairwise interaction $w(x,y)=-\log(0.1+|x-y|)$. Notice that since we have developed our continuation method via the entropic regularization of optimal transport, we can easily reconstruct the optimal plan at each step $k$ by using the potential $\phi^{(k)}$. In Figure \ref{fig:comp} we compare the potential computed by the ODE with the one obtained by solving the regularized multi-marginal problem via Sinkhorn (at the same $\eta$). Since in this case the optimal solution to the unregularized dual problem (as well as the primal) can be explicitly computed, we compare the relative error, that is $\frac{\|\phi-\phi_{exact}\|_{\infty}}{\|\phi_{exact}\|_{\infty}}$, between the ODE approach (here we used an 8th order Runge-Kutta method) and Sinkhorn, with respect to the number of iterations. By looking at Table \ref{tab1}, it is clear that both methods achieve the same relative error, but the number of iterations needed to reach it is smaller for the ODE approach.
\begin{table}
\begin{tabular}{ |c|c|c| }
\hline
& ODE approach & Sinkhorn \\
\hline
relative error & 0.0097108 & 0.0097085 \\
iterations & 100 & 820 \\
\hline
\end{tabular}
\caption{Comparison between the ODE approach and Sinkhorn}
\label{tab1}
\end{table}
In Figures \ref{fig:comp}-\ref{fig:fig3} we have kept the same data as before, but we have chosen the negative harmonic cost, that is $w(x,y)=-|x-y|^2$. We highlight that the solution at $\epsilon=0$ is $-Id$, so that the final coupling is supported, as expected, on the hyperplane $x+y+z=1.5$.
\begin{figure}[h!]
\TabTwo{
\includegraphics[width=.55\linewidth]{figures/pot_paper1comparison}&
\includegraphics[width=.55\linewidth]{figures/pot_paper2comparison}\\
}
\caption{Optimal potential computed via Sinkhorn (red line). Potential computed via the ODE (black dot-dashed line). Left: Log cost. Right: Negative Harmonic cost}
\label{fig:comp}
\end{figure}
\begin{figure}[h!]
\TabThree{
\includegraphics[width=.35\linewidth]{figures/plan_paper10}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper10} &\includegraphics[width=.35\linewidth]{figures/pot_paper10}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper1250}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper1250} &\includegraphics[width=.35\linewidth]{figures/pot_paper1250}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper1500}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper1500} &\includegraphics[width=.35\linewidth]{figures/pot_paper1500}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper1750}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper1750} &\includegraphics[width=.35\linewidth]{figures/pot_paper1750}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper11000}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper11000} &\includegraphics[width=.35\linewidth]{figures/pot_paper11000}
}
\caption{(Log cost) Left: support of the coupling $\gamma^{\epsilon}_{1,2}$. Center: surface of the coupling $\gamma^{\epsilon}_{1,2}$. Right: potential $\phi(\epsilon)$. For $\epsilon=0,0.25,0.5,0.75,1$}
\label{fig:fig1}
\end{figure}
\begin{figure}[h!]
\TabThree{
\includegraphics[width=.35\linewidth]{figures/plan_paper20}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper20} &\includegraphics[width=.35\linewidth]{figures/pot_paper20}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper2250}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper2250} &\includegraphics[width=.35\linewidth]{figures/pot_paper2250}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper2500}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper2500} &\includegraphics[width=.35\linewidth]{figures/pot_paper2500}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper2750}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper2750} &\includegraphics[width=.35\linewidth]{figures/pot_paper2750}\\
\includegraphics[width=.35\linewidth]{figures/plan_paper21000}&\includegraphics[width=.35\linewidth]{figures/plasurf_paper21000} &\includegraphics[width=.35\linewidth]{figures/pot_paper21000}
}
\caption{(Negative Harmonic cost) Left: support of the coupling $\gamma^{\epsilon}_{1,2}$. Center: surface of the coupling $\gamma^{\epsilon}_{1,2}$. Right: potential $\phi(\epsilon)$. For $\epsilon=0,0.25,0.5,0.75,1$}
\label{fig:fig3}
\end{figure}
\bigskip
\noindent{\textbf{Acknowledgement.}} B.P. is pleased to acknowledge the support of Natural Sciences and Engineering Research Council of Canada Discovery Grant number 04658-2018. He is also grateful for the kind hospitality at the Institut de Mathématiques d'Orsay, Université Paris-Saclay during his stay in November of 2019 as a missionaire scientifique invité, when this work was partially completed. L.N. was supported by a public grant as part of the Investissement d'avenir project, reference ANR-11-LABX-0056-LMH, LabEx LMH, and by the CNRS PEPS JCJC (2022). The authors would like to thank Guillaume Carlier for all the fruitful discussions which led to Lemma \ref{lem:strong_convexity}. A preliminary version of the results in this paper appeared in the arXiv preprint version of our paper \cite{NennaPass22}, posted on January 3, 2022. Prior to that paper being accepted for publication in \textit{Applied Mathematics and Optimization}, these results were removed; they have since been refined and expanded to form the present manuscript.
| {
"timestamp": "2022-12-26T02:13:55",
"yymm": "2212",
"arxiv_id": "2212.12492",
"language": "en",
"url": "https://arxiv.org/abs/2212.12492",
"abstract": "The purpose of this paper is to introduce a new numerical method to solve multi-marginal optimal transport problems with pairwise interaction costs. The complexity of multi-marginal optimal transport generally scales exponentially in the number of marginals $m$. We introduce a one parameter family of cost functions that interpolates between the original and a special cost function for which the problem's complexity scales linearly in $m$. We then show that the solution to the original problem can be recovered by solving an ordinary differential equation in the parameter $\\epsilon$, whose initial condition corresponds to the solution for the special cost function mentioned above; we then present some simulations, using both explicit Euler and explicit higher order Runge-Kutta schemes to compute solutions to the ODE, and, as a result, the multi-marginal optimal transport problem.",
"subjects": "Optimization and Control (math.OC); Numerical Analysis (math.NA)",
"title": "An ODE characterisation of multi-marginal optimal transport with pairwise cost functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9871787842245938,
"lm_q2_score": 0.718594386544335,
"lm_q1q2_score": 0.7093811328594545
} |
https://arxiv.org/abs/2008.13200 | Arithmetic of Some Sequences Via $2$-determinants | We extend our investigation of $2$-determinants, which we defined in a previous paper. For a linear homogeneous recurrence of the second order, we consider relations between different sequences satisfying the same linear homogeneous recurrence of the second order. After we prove a generalized identity of d'Ocagne, we derive, from a single identity, a number of classical identities (and their generalizations) such as d'Ocagne's, Cassini's, Catalan's and Vajda's. Along the way, the corresponding combinatorial interpretations in terms of restricted words over a finite alphabet are stated for the sequences we investigate. |
\section{Introduction and preliminaries}
We continue our investigation of $2$-determinants,
defined in \cite{ja1}. For a linear homogeneous recurrence of the second order, we consider
relations between different sequences satisfying the same recurrence. Linear homogeneous recurrences of the second order are much studied objects and there is an overwhelming amount of literature containing various formulas involving sequences defined recurrently (as an introduction to the topic, we recommend \cite{Va}, \cite{JO}, \cite{BQ}, and \cite{Du}).
Our method produces a number of identities involving Fibonacci numbers and polynomials, bisection of Fibonacci numbers, positive integers, Pell numbers, Jacobsthal numbers, Mersenne numbers, and Chebyshev polynomials of the second kind.
For these and some other objects, we derive variations of well-known identities, such as d'Ocagne's, Cassini's, Vajda's, and Catalan's identities.
Let $x$ and $y$ be integer-valued variables.
We consider the following recurrence of the second order:
\begin{equation}\label{rr}a_{n+1}(x,y)=x\cdot a_{n}(x,y)+y\cdot a_{n-1}(x,y),\,\,\, n>0,
\end{equation}
where \begin{equation}\label{rr2} a_{0}(x,y)=0,\,\, a_1(x,y)=1. \end{equation}
We investigate mutual connections between three sequences $(a),(b),$ and $(c)$ which satisfy
(\ref{rr}), and where $(a)$ also satisfies (\ref{rr2}). Our main result is the following general identity.
\begin{theorem}[A generalized identity of d'Ocagne] Let $(a)$ be a sequence satisfying Equation~(\ref{rr}) and Equation~(\ref{rr2}), and let
$(b)=(b_0,b_1,\ldots)$ and $(c)=(c_0,c_1,\ldots)$ both satisfy Equation~(\ref{rr}). Then
\begin{equation}\label{ocintr}
\begin{vmatrix}b_k&b_{k+m}\\c_k&c_{k+m}
\end{vmatrix}=(-y)^{k}\cdot a_{m}\cdot\begin{vmatrix}b_{0}&b_{1}\\c_{0}&c_{1}
\end{vmatrix}.
\end{equation}
\end{theorem}
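The identity is easy to sanity-check numerically: fix $x$ and $y$, iterate the recurrence from arbitrary initial pairs, and compare both sides. The short script below (our illustration, not part of the proof) does exactly this:

```python
def extend(b0, b1, x, y, length):
    """Iterate b_{n+1} = x*b_n + y*b_{n-1} from the initial pair (b0, b1)."""
    seq = [b0, b1]
    while len(seq) < length:
        seq.append(x * seq[-1] + y * seq[-2])
    return seq

def docagne_sides(a, b, c, k, m, y):
    """Return (LHS, RHS) of the generalized identity of d'Ocagne."""
    lhs = b[k] * c[k + m] - b[k + m] * c[k]
    rhs = (-y) ** k * a[m] * (b[0] * c[1] - b[1] * c[0])
    return lhs, rhs
```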
Once we establish the identity (\ref{ocintr}), from this single identity we will derive a number of identities involving some classical sequences of numbers. We note that this formula is, in a sense, asymmetric, since factors $(-y)^k$
and $a_m$ appear only on the right-hand side of the equation. Moreover, on
the right-hand side only two initial terms of sequences $(b)$ and $(c)$ appear.
We also note that the left-hand side of (\ref{ocintr}) contains two arbitrary parameters $k$
and $m$. Furthermore, in Theorem~\ref{fpt}, a formula containing four
different parameters will be proved. Extra parameters allow us to derive a number of identities
concerning classical sequences as special cases of this particular identity.
We see that the fundamental role in our investigation is played by sequences that satisfy (\ref{rr}) and (\ref{rr2}). In this section we closely investigate such sequences. In particular, we give explicit
formulas for them and combinatorial interpretations of such sequences in
terms of restricted words over a finite alphabet.
In the cases under consideration, $x$ and $y$ will always have fixed values, so that
we can write $a_n$ instead of $a_n(x,y)$, omitting $x$ and $y$ to simplify notation.
The following result is proved in Proposition~11, Proposition~12, and Proposition~16 in \cite{ja2}.
\begin{proposition} Let $(a)$ be a sequence satisfying (\ref{rr}) and (\ref{rr2}).
\begin{enumerate}
\item The following explicit formula holds
\begin{equation}\label{pp1}
a_n=\sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}{n-1-k\choose k}\cdot x^{n-2k-1}\cdot
y^k.
\end{equation}
\item
If $x > 0$ and $y > 0$, then $a_{n+1}$ equals the number
of words of length $n$ over the alphabet
$\{0,1,\ldots,x-1,x,\ldots,x+y-1\}$ in which letters $0,1,\ldots,x-1$
avoid runs of odd lengths.
\item
If $x>0$, $y<0$, and $-y< x$, then $a_n$ is the number of words of length $n-1$
over
$\{0,1,\ldots, x-1\}$
with no subword of the form $0i$, where $i\in \{1,2,\ldots, -y\}$.
\end{enumerate}
\end{proposition}
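The explicit formula (\ref{pp1}) is easy to test against the recurrence. The following script is an illustrative check (ours, not part of the paper), assuming that (\ref{rr}) is $a_{n+1}=x\,a_n+y\,a_{n-1}$ with initial values $a_0=0$, $a_1=1$, as used throughout the text:

```python
from math import comb

def a_seq(x, y, n_max):
    """Sequence a_n(x, y) with a_0 = 0, a_1 = 1 and a_{n+1} = x*a_n + y*a_{n-1}."""
    a = [0, 1]
    for _ in range(n_max - 1):
        a.append(x * a[-1] + y * a[-2])
    return a

def a_explicit(x, y, n):
    """Right-hand side of the explicit formula (pp1)."""
    return sum(comb(n - 1 - k, k) * x ** (n - 2 * k - 1) * y ** k
               for k in range((n - 1) // 2 + 1))

# parameter pairs used in the examples below (Fibonacci, Pell, Jacobsthal, ...)
for x, y in [(1, 1), (2, 1), (1, 2), (2, -1), (3, -1), (3, -2), (4, -3)]:
    a = a_seq(x, y, 20)
    assert all(a[n] == a_explicit(x, y, n) for n in range(1, 21))
```

All the parameter pairs appearing in the next section are covered by the loop.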
Some well-known integer sequences are given by a linear homogeneous recurrence of the second order; for instance, the Fibonacci numbers, Fibonacci polynomials, Jacobsthal numbers, and Pell numbers. These are
obtained when $x>0,\, y>0$. The first example concerning the case $x>0,\, y<0$ shows that the positive
integers also satisfy (\ref{rr}). The Chebyshev
polynomials of the second kind, the bisection of the Fibonacci numbers, and the Mersenne numbers
also belong to this class.
In the next example, Equation~(\ref{pp1}) is used to give an explicit expression for each of the classical sequences involved. These formulas are well known, and most of them have been described extensively in the literature (\cite{Va}, \cite{Ko}, \cite{BQ}, \cite{qF}, \cite{qQ}, \cite{Ho}, \cite{Ho2}). Here, ten examples are listed in order to emphasize that all of them share a common origin in Equation~(\ref{pp1}). Along the way, the corresponding combinatorial interpretations are stated for each of the sequences.
We start with the most important two: Fibonacci polynomials and Chebyshev polynomials of the second kind. Some of the examples that follow are just particular cases of these two.
\begin{example}\label{primjer}
\begin{enumerate}
\item If $x>0$ and $y=1$, then $a_{n}=F_{n}(x), \, n\geq 1$,
where $F_{n}(x)$ is the $n$th Fibonacci polynomial. Also, Equation~(\ref{pp1}) gives the
explicit expression for $F_{n}(x)$ (see \cite{qF}):
\begin{displaymath}
F_{n}(x)=\sum_{k=0}^{\lfloor\frac {n-1}{2}\rfloor}{n-k-1\choose k}x^{n-2k-1}.
\end{displaymath}
In terms of restricted words, if $x>0$ is an integer, then $F_{n+1}(x)$ equals the number of words
of length $n$ over $\{0,1,\ldots,x\}$ in which $0$ avoids runs of odd lengths.
\item
If $x=2z$ and $y=-1$, then $a_n=U_{n}(z)$, where $U_n(z)$ is the Chebyshev polynomial
of the second kind. From Equation~(\ref{pp1}), we get the following well-known formula (see \cite{qQ}):
\begin{displaymath}
U_{n}(z)=\sum_{k=0}^{\lfloor \frac {n-1}{2}\rfloor}(-1)^k\cdot{n-k-1\choose
k}(2z)^{n-2k-1}.
\end{displaymath}
In terms of restricted words, if $z>0$ is an integer, then
$U_{n+1}(z)$ equals the number of words of length $n$ over the alphabet
$\{0,1,\ldots,2z-1\}$ avoiding the subword $01$.
\item A particular case of (1) is $x=1$ and $y=1$. In this case we have $a_{n}=F_{n}$.
Also, Equation~(\ref{pp1}) is the standard expression for the Fibonacci numbers in terms of
the binomial coefficients (see Identity (54) of \cite{Va}):
\begin{displaymath}
F_{n}=\sum_{k=0}^{\lfloor\frac {n-1}{2}\rfloor}{n-k-1\choose k}.
\end{displaymath}
Combinatorially, the Fibonacci number $F_{n+1}$ equals the number of binary words of
length $n$ avoiding a run of zeros of odd length.
\item For $x=2$ and $y=1$, we have $a_{n}=P_{n}$, where $P_n, \, n\geq 0$, is the $n$th Pell
number. From
Equation~(\ref{pp1}), we get (see \cite{Ho})
\begin{displaymath}
P_{n}=\sum_{k=0}^{\lfloor\frac {n-1}{2}\rfloor}2^{n-2k-1}\cdot {n-k-1\choose k}.
\end{displaymath}
Also, $P_{n+1}$ equals the number of ternary words of length $n$ in which $0$
avoids runs of odd lengths. The Pell numbers are sometimes called ``silver Fibonacci numbers''.
\item For $x=1$ and $y=2$, we have $a_{n}=J_{n}$, where $J_n,\, ( n=0,1,2,\ldots)$, are the
Jacobsthal numbers.
From Equation~(\ref{pp1}), we obtain (see \cite{Ho2})
\begin{displaymath}
J_{n}=\sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}2^k{n-k-1\choose k}.
\end{displaymath}
Also, the number $J_{n+1}$ equals the number of ternary words of length $n$ in which
$0$ and $1$ avoid runs of odd lengths.
\item If $x=2$ and $y=2$, then
$a_{n+1}$ is the number of ways to tile a board of length $n$ using tiles of two colors of length 1 and 2. Also,
\begin{displaymath}
a_{n+1}=\sum_{k=0}^{\lfloor\frac {n}{2}\rfloor}2^{n-k}\cdot {n-k\choose
k},\,\, n>0.
\end{displaymath}
\item If $x=2$ and $y=-1$, then $a_0=0,\, a_1=1,$ and $a_{n+1}=2a_n-a_{n-1}$,
so that $a_n=n$ for all $n\geq 0$. Thus, we have
\begin{displaymath}
n=\sum_{k=0}^{\lfloor\frac {n-1}{2}\rfloor}(-1)^k\cdot 2^{n-2k-1}{n-k-1\choose
k},\,\, n>0.
\end{displaymath}
This formula for $n$ may seem rather complex, but its combinatorial meaning is very
simple.
Namely, $n$ equals the number of binary words of length $n-1$ avoiding
$01$, which is obvious.
\item If $x=3$ and $y=-1$, then $a_{n}$ is the bisection of the Fibonacci numbers, that is, $a_{n}=F_{2n}$. From Equation~(\ref{pp1}), we obtain
\begin{displaymath}
F_{2n}=\sum_{k=0}^{\lfloor\frac {n-1}{2}\rfloor}(-1)^k\cdot 3^{n-2k-1}\cdot
{n-k-1\choose
k}.
\end{displaymath}
Also, $F_{2n}$ equals the number of ternary words of length $n-1$ avoiding $01$.
\item If $x=3$ and $y=-2$, then $a_n=2^n-1$. These numbers are usually called
Mersenne numbers.
We have
\begin{displaymath}
2^{n}-1=\sum_{k=0}^{\lfloor\frac{ n-1}{2}\rfloor}(-2)^k\cdot 3^{n-2k-1}\cdot
{n-k-1\choose
k}.
\end{displaymath}
Also, $2^{n+1}-1$ equals the number of ternary words of length $n$ avoiding $01$ and $02$.
\item If $x=4$ and $y=-3$, then $a_n=\frac{3^n-1}{2}$.
Next we have
\begin{displaymath}
\frac{3^n-1}{2}=\sum_{k=0}^{\lfloor\frac{ n-1}{2}\rfloor}(-3)^k\cdot 4^{n-2k-1}\cdot
{n-k-1\choose
k}.
\end{displaymath}
Also, $\frac{3^{n+1}-1}{2}$ equals the number of
quaternary words of length $n$ avoiding $01, 02$, and $03$.
\end{enumerate}
\end{example}
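The combinatorial interpretations listed above can be verified by brute force for small word lengths. A minimal illustrative check (the function names are ours, not the paper's); it counts words directly and compares with the recurrence:

```python
from itertools import product

def a_seq(x, y, n_max):
    a = [0, 1]
    for _ in range(n_max - 1):
        a.append(x * a[-1] + y * a[-2])
    return a

def odd_run_avoiders(alphabet, special, n):
    """Count words of length n over `alphabet` in which every maximal run
    of a letter from `special` has even length."""
    count = 0
    for w in product(alphabet, repeat=n):
        ok, i = True, 0
        while i < len(w):
            j = i
            while j < len(w) and w[j] == w[i]:
                j += 1
            if w[i] in special and (j - i) % 2 == 1:
                ok = False
                break
            i = j
        count += ok
    return count

def pattern_avoiders(alphabet, n, banned):
    """Count words of length n with no factor (pair) listed in `banned`."""
    return sum(all(w[t:t + 2] not in banned for t in range(n - 1))
               for w in product(alphabet, repeat=n))

# Pell: P_{n+1} ternary words where one letter avoids runs of odd lengths.
pell = a_seq(2, 1, 10)
assert all(odd_run_avoiders(range(3), {0}, n) == pell[n + 1] for n in range(1, 8))

# Mersenne: 2^{n+1}-1 ternary words avoiding the factors 01 and 02.
assert all(pattern_avoiders(range(3), n, {(0, 1), (0, 2)}) == 2 ** (n + 1) - 1
           for n in range(1, 8))
```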
\section{Identities}
Unless stated otherwise, throughout this section, we assume that $(a)$ is a sequence satisfying Equation~(\ref{rr}) and Equation~(\ref{rr2}), and that
$(b)=(b_0,b_1,\ldots)$ and $(c)=(c_0,c_1,\ldots)$ both satisfy Equation~(\ref{rr}). Note that we do not have any assumptions about the initial conditions for the sequences $(b)$ and $(c)$.
In our previous paper \cite{ja1}, we defined the notion of an $n$-determinant, and used it to derive numerous identities related to some sequences given by linear homogeneous recurrences of the second order. Here, we use the results obtained via $2$-determinants to derive a number of identities related to some classical sequences. For the definition of an $n$-determinant and related results, we refer the reader to \cite{ja1}. We remark here that the matrix methods are widely used when it comes to proving some of the classical identities. For example, see \cite{JO} for a demonstration of how powerful these methods can be in simplifying proofs of some of the identities.
The starting point of our investigation is the following general theorem that relates sequences satisfying the same linear homogeneous recurrence of the second order.
\begin{theorem}[\cite{ja2}, Proposition 8] Let $(u)=(u_0,u_1,\ldots)$ and $(v)=(v_0,v_1,\ldots)$ be any two sequences, and let $(b)=(b_0,b_1,\ldots)$ and $(c)=(c_0,c_1,\ldots)$ be two sequences both satisfying the same recurrence:
\begin{align*}b_{n+1}&=u_{n}b_{n}+v_{n-1}\cdot b_{n-1},\,\,\, n>0,\\
c_{n+1}&= u_{n}c_{n}+v_{n-1}\cdot c_{n-1},\,\,\, n>0.
\end{align*}
Then, for $k\leq n+2$, we have
\begin{equation}\label{prop8}
\begin{vmatrix}b_k&b_{n+2}\\c_k&c_{n+2}
\end{vmatrix}=(-1)^{k}\cdot v_1\cdot v_2\cdots v_k\cdot a_{n-k+2}\cdot\begin{vmatrix}b_{0}&b_{1}\\c_{0}&c_{1}
\end{vmatrix},
\end{equation}
where $a_0=0$, $a_1=1$, $a_2=u_{k+1}$, $a_i=v_{k+i-2}a_{i-2}+u_{k+i-1}a_{i-1}$, $i>2$.
\end{theorem}
The basic result that we use to investigate sequences of numbers is the following direct corollary of the previous theorem.
\begin{theorem}[A generalized identity of d'Ocagne] Let $(a)$ be a sequence satisfying Equation~(\ref{rr}) and Equation~(\ref{rr2}), and let
$(b)=(b_0,b_1,\ldots)$ and $(c)=(c_0,c_1,\ldots)$ both satisfy Equation~(\ref{rr}). Then
\begin{equation}\label{oc}
\begin{vmatrix}b_k&b_{k+m}\\c_k&c_{k+m}
\end{vmatrix}=(-y)^{k}\cdot a_{m}\cdot\begin{vmatrix}b_{0}&b_{1}\\c_{0}&c_{1}
\end{vmatrix}.
\end{equation}
\end{theorem}
\begin{proof} The statement follows from the previous theorem by setting, for all $n$, $u_n=x$, $v_n=y$, and $n+2=k+m$.
\end{proof}
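Identity (\ref{oc}) can be sanity-checked numerically for randomly chosen coefficients and initial values; a small illustrative script (ours, not part of the paper):

```python
import random

def extend(x, y, b0, b1, n_max):
    """Extend a sequence with b_{n+1} = x*b_n + y*b_{n-1} up to index n_max."""
    b = [b0, b1]
    for _ in range(n_max - 1):
        b.append(x * b[-1] + y * b[-2])
    return b

rng = random.Random(0)
for _ in range(200):
    x, y = rng.randint(-3, 3), rng.randint(-3, 3)
    a = extend(x, y, 0, 1, 30)                     # (a): a_0 = 0, a_1 = 1
    b = extend(x, y, rng.randint(-5, 5), rng.randint(-5, 5), 30)
    c = extend(x, y, rng.randint(-5, 5), rng.randint(-5, 5), 30)
    k, m = rng.randint(0, 10), rng.randint(0, 10)
    lhs = b[k] * c[k + m] - b[k + m] * c[k]        # the 2x2 determinant in (oc)
    assert lhs == (-y) ** k * a[m] * (b[0] * c[1] - b[1] * c[0])
```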
The importance of Equation~(\ref{oc}) lies in the fact that there is an extra term $a_m$ on the right-hand side of the formula. This gives us a lot of freedom in choosing concrete values of the sequence $a_m$, yielding many identities as special cases of Equation~(\ref{oc}). We will derive some of these identities now. Note that when it comes to the indices that appear on the left-hand side of Equation~(\ref{oc}), they are not as general as indices appearing in some well-known identities, such as Identity $(70)$ of \cite{JO}, but the extra term $a_m$ on the right-hand side makes up for the lack of full generality of the indices. Moreover, there are four different parameters appearing as indices in the identity given in Theorem~\ref{fpt}. Extra parameters allow us to derive a number of identities concerning classical sequences as special cases of this particular identity.
In Section 4 of \cite{ja1}, we stated a number of d'Ocagne's identities for Fibonacci
numbers and polynomials, Lucas and Chebyshev polynomials.
We illustrate Equation~(\ref{oc}) by
deriving identities for some sequences described in the preceding section.
To clarify the name of the identity given by Equation~(\ref{oc}), we prove that
d'Ocagne's identity for Fibonacci
numbers is a particular case of this identity.
\begin{corollary}[d'Ocagne's identity] The following formula holds
\begin{displaymath}
\begin{vmatrix}F_{k+1}&F_{k+m+1}\\F_{k}&F_{k+m}\end{vmatrix}=(-1)^{k}\cdot F_{m}.
\end{displaymath}
\end{corollary}
\begin{proof}
When $x=y=1$, the recurrence (\ref{rr}) is exactly
the recurrence for the Fibonacci numbers. Hence, $a_{m}=F_{m}$, $m\geq 0$.
For sequences $(b)$ and $(c)$, we again choose the Fibonacci numbers with the
initial
conditions such that the
determinant on the right-hand side of Equation~(\ref{oc}) is equal to $1$. For
instance, we
can choose $c_0=0,c_1=1$, and $b_0=1,b_1=1$, that is,
$b_k=F_{k+1}$ and $c_k=F_{k}$. We thus obtain
d'Ocagne's identity.
\end{proof}
We note one more consequence of Equation~(\ref{oc}). Namely, if we know sequences $(b)$ and $(c)$ from the previous theorem, we can determine the sequence $(a)$.
\begin{corollary}If $(a)$, $(b)$, and $(c)$ are sequences satisfying (\ref{oc}), then the
members of the sequence $(a)$ are rational functions of numbers
$b_i,c_i,(i=0,1,\ldots)$, with denominator equal to $(-y)^k\cdot(b_0c_1-b_1c_0)$.
\end{corollary}
\begin{remark}
Besides d'Ocagne's identity, three of the most important identities
are: Cassini's, Catalan's, and Vajda's. All these identities can be derived from the generalized identity of d'Ocagne (\ref{oc}).
What we show here is that these important identities hold for each sequence satisfying (\ref{rr}) and (\ref{rr2}), i.e.\ they are, in a sense, consequences of the homogeneous linear recurrence of the second order, not of the particular coefficients.
\end{remark}
For $m=1$, we have $a_1=1$, so that from Equation~(\ref{oc}) we get the following identity (see Identity (70) of \cite{JO}).
\begin{proposition}[A Cassini-like identity] The following formula holds
\begin{displaymath}
\begin{vmatrix}b_k&b_{k+1}\\c_k&c_{k+1}\end{vmatrix}
=(-y)^{k}\cdot\begin{vmatrix}b_0&b_{1}\\c_0&c_{1}\end{vmatrix}.
\end{displaymath}
\end{proposition}
It is easy to see that for $x=y=1,b_0=1,b_1=1$, $c_0=0,$ and $c_1=1$,
we obtain the standard Cassini's identity for Fibonacci numbers.
We now consider the Lucas numbers $L_0=2,L_1=1,L_2=3,\ldots$. We know that these numbers satisfy the same recurrence as the Fibonacci numbers do.
We thus take $x=y=1$ and obtain the following identity (see Identity $(16b)$ of \cite{Va}).
\begin{identity}[A Cassini-like identity for Fibonacci and Lucas numbers]
\begin{displaymath}
L_k\cdot F_{k+1}-L_{k+1}\cdot F_k=2\cdot (-1)^k.
\end{displaymath}
\end{identity}
In Example \ref{primjer}, we have seen that the Jacobsthal numbers $J_n,(n=0,1,2,\ldots)$, are obtained from Equation~(\ref{pp1}) for $x=1,y=2$. If we take $b_0=J_1,b_1=J_2,c_0=J_0,c_1=J_1$, we obtain
the following identity (see Identity $(2.5)$ of \cite{Ho2}).
\begin{identity}[A Cassini-like identity for Jacobsthal numbers]
\begin{displaymath}
{J }^2_{k+1}-J_k\cdot J_{k+2}=(-2)^{k}.\end{displaymath}
\end{identity}
Since Chebyshev polynomials $T_n(z)$ of the first kind satisfy the same recurrence as $U_n(x)$ do, by taking
\begin{displaymath}
T_0(z)=1,\ T_1(z)=z,\ x=2z,\ y=-1,
\end{displaymath}
we obtain the following identity.
\begin{identity}[A Cassini-like identity for Chebyshev polynomials of the first kind]
\begin{displaymath}
T^2_{k+1}(z)-T_k(z)\cdot T_{k+2}(z)
=1-z^2.\end{displaymath}
\end{identity}
Next, we assume that $k\geq p$. If we replace $k$ by $k-p$ in Equation~(\ref{oc}), then we get
\begin{displaymath}
\begin{vmatrix}b_{k-p}&b_{k+m-p}\\c_{k-p}&c_{k+m-p}\end{vmatrix}=(-y)^{k-p}\cdot
a_{m}\cdot
\begin{vmatrix}b_0&b_{1}\\c_0&c_{1}\end{vmatrix}.
\end{displaymath}
If we apply Equation~(\ref{oc}) to the right-hand side of the previous equality, we obtain the following universal
property. This is a well known index reduction formula (cf.\ Identity $(70)$ of \cite{JO}).
\begin{proposition}[Index reduction formula] If $k\geq p$, then
\begin{displaymath}
\begin{vmatrix}b_k&b_{k+m}\\c_k&c_{k+m}\end{vmatrix}=(-y)^{p}\cdot
\begin{vmatrix}b_{k-p}&b_{k-p+m}\\c_{k-p}&c_{k-p+m}\end{vmatrix}.
\end{displaymath}
\end{proposition}
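The index reduction formula admits the same kind of numerical sanity check; an illustrative script (ours) with arbitrarily chosen initial values:

```python
def extend(x, y, b0, b1, n_max):
    b = [b0, b1]
    for _ in range(n_max - 1):
        b.append(x * b[-1] + y * b[-2])
    return b

def det2(b, c, i, j):
    """The determinant b_i*c_j - b_j*c_i."""
    return b[i] * c[j] - b[j] * c[i]

x, y = 1, 2                       # Jacobsthal-type coefficients, as an example
b = extend(x, y, 3, -1, 30)       # arbitrary initial values
c = extend(x, y, 2, 5, 30)
for k in range(1, 8):
    for p in range(0, k + 1):
        for m in range(0, 8):
            assert det2(b, c, k, k + m) == (-y) ** p * det2(b, c, k - p, k - p + m)
```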
In particular, if $k=p$, then by using the index reduction formula, we can write d'Ocagne's identity in the form:
\begin{displaymath}
\begin{vmatrix}b_k&b_{k+m}\\c_k&c_{k+m}
\end{vmatrix}=(-y)^{k}\cdot \begin{vmatrix}b_{0}&b_{m}\\c_{0}&c_{m}\end{vmatrix}.
\end{displaymath}
By comparing the last equality with Equation~(\ref{oc}), we get the following identity.
\begin{proposition}[A reduced identity of d'Ocagne]
\begin{displaymath}
a_m\cdot\begin{vmatrix} b_{0}&b_{1}\\c_{0}&c_{1}\end{vmatrix}=
\begin{vmatrix} b_{0}&b_{m}\\c_{0}&c_{m}\end{vmatrix}.
\end{displaymath}
\end{proposition}
We illustrate this formula with two identities. The first identity concerns Fibonacci numbers.
\begin{identity} For arbitrary non-negative integers $m,\, p$, and $q$, where $m>p$, the following holds
\begin{displaymath}
F_m\cdot\begin{vmatrix} F_p&F_{p+1}\\F_{q}&F_{q+1}\end{vmatrix}=
\begin{vmatrix} F_{p}&F_{m+p}\\F_{q}&F_{m+q}\end{vmatrix}.
\end{displaymath}
\end{identity}
The next identity concerns Chebyshev polynomials $U_n(x)$ of the second kind.
\begin{identity} For arbitrary non-negative integers $m,\, p$, and $q$, where $m>p$, the following holds
\begin{displaymath}
U_m(x)\cdot\begin{vmatrix} U_p(x)&U_{p+1}(x)\\U_{q}(x)&U_{q+1}(x)\end{vmatrix}=
\begin{vmatrix} U_{p}(x)&U_{m+p}(x)\\U_{q}(x)&U_{m+q}(x)\end{vmatrix}.
\end{displaymath}
\end{identity}
In the next result, we introduce two more parameters in the formula
(\ref{oc}). In this theorem we take $(c)=(a)$.
\begin{theorem}[Four parameter theorem]\label{fpt} For $m\geq k,p\geq q$, the
following formula holds
\begin{displaymath}
\begin{vmatrix}b_{k+p}&b_{m+p}\\a_{k+q}&a_{m+q}
\end{vmatrix}
=(-y)^{k+q}\cdot a_{m-k}\cdot b_{p-q}.
\end{displaymath}
\end{theorem}
\begin{proof}
We only need to use the index reduction formula twice.
We have
\begin{displaymath}
\begin{vmatrix}b_{k+p}&b_{m+p}\\a_{k+q}&a_{m+q}\end{vmatrix}=(-y)^k\cdot
\begin{vmatrix}b_{p}&b_{m+p-k}\\a_{q}&a_{m+q-k}
\end{vmatrix}.
\end{displaymath}
Applying the index reduction formula on the right-hand side of the last equation, we
obtain
\begin{displaymath}
\begin{vmatrix}b_{k+p}&b_{m+p}\\a_{k+q}&a_{m+q}\end{vmatrix}=(-y)^{k+q}\cdot
\begin{vmatrix}b_{p-q}&b_{m+p-k-q}\\a_{0}&a_{m-k}
\end{vmatrix},
\end{displaymath}
and since $a_0=0$, the assertion follows.
\end{proof}
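Theorem~\ref{fpt} can likewise be checked numerically over random parameter choices with $m\geq k$ and $p\geq q$; an illustrative script (the helper name is ours):

```python
import random

def extend(x, y, b0, b1, n_max):
    b = [b0, b1]
    for _ in range(n_max - 1):
        b.append(x * b[-1] + y * b[-2])
    return b

rng = random.Random(1)
for _ in range(200):
    x, y = rng.randint(-3, 3), rng.randint(-3, 3)
    a = extend(x, y, 0, 1, 40)
    b = extend(x, y, rng.randint(-5, 5), rng.randint(-5, 5), 40)
    k, q = rng.randint(0, 5), rng.randint(0, 5)
    m, p = k + rng.randint(0, 5), q + rng.randint(0, 5)   # m >= k, p >= q
    lhs = b[k + p] * a[m + q] - b[m + p] * a[k + q]
    assert lhs == (-y) ** (k + q) * a[m - k] * b[p - q]
```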
By replacing $m$ by $m+k$, $p$ by $p+q$, and finally $k+q$ by $k$, we obtain the following corollary of the previous theorem.
\begin{identity}[{A Vajda-like identity}]
For sequences $(a)$ and $(b)$ we have
\begin{equation}\label{tri}
\begin{vmatrix}b_{k+p}&b_{k+m+p}\\a_k&a_{k+m}
\end{vmatrix}=(-y)^{k}\cdot a_{m}\cdot b_p.
\end{equation}
\end{identity}
It is clear that, if $x=1,y=1$ and $a_i=F_i,\, b_i=F_i\, (i=0,1,\ldots)$, we obtain the well known Vajda's
identity for Fibonacci numbers.
Also, it is clear that, if $x=1,y=1$ and $a_i=F_i,\, b_i=L_i\, (i=0,1,\ldots)$,
where $L_i$ is the $i$th Lucas number, we obtain the following identity (see Identity (19b) in \cite{Va}).
\begin{identity}[A Vajda-like identity for Fibonacci and Lucas numbers]
\begin{displaymath}
\begin{vmatrix}L_{k+p}&L_{k+m+p}\\F_k&F_{k+m}
\end{vmatrix}=(-1)^{k}\cdot F_{m}\cdot L_p.
\end{displaymath}
\end{identity}
The following three identities are special cases of a Vajda-like identity for non-Fibonacci numbers.
\begin{identity}[A Vajda-like identity for Mersenne numbers] If $a_i=2^{i}-1$
$(i\geq 0)$, then
\begin{displaymath}
\begin{vmatrix}2^{k+p}-1&2^{k+m+p}-1\\2^k-1&2^{k+m}-1
\end{vmatrix}=2^k\cdot(2^m-1)\cdot(2^p-1).
\end{displaymath}
\end{identity}
\begin{identity}[A Vajda-like identity for positive integers] If $a_i=i$, $i\geq 0,$ then
\begin{displaymath}
\begin{vmatrix}k+p&k+m+p\\k&k+m
\end{vmatrix}=m\cdot p.
\end{displaymath}
\end{identity}
If we take $(a)=(b), m=p=r,k=n-r$ in (\ref{tri}), we obtain the following identity.
\begin{identity}[A Catalan-like identity]
\begin{displaymath}
\begin{vmatrix}a_{n}&a_{n+r}\\a_{n-r}&a_{n}
\end{vmatrix}=(-y)^{n-r}\cdot a_{r}^2.
\end{displaymath}
\end{identity}
It is clear that this identity generalizes the standard Catalan's identity for
Fibonacci numbers, which is obtained for $(a)=(F)$ and $y=1$.
We illustrate this case with several identities. The first identity is for Jacobsthal numbers (cf.\ \cite{Ho2}).
\begin{identity}[A Catalan-like identity for Jacobsthal numbers] If $a_n=J_{n}$, then
\begin{displaymath}
J_{n}^2-J_{n-i}\cdot J_{n+i}=(-2)^{n-i}\cdot J_{i}^2.
\end{displaymath}
\end{identity}
\begin{identity}[A Catalan-like identity for Pell numbers] If $a_n=P_{n}$, then
\begin{displaymath}
P_{n}^2-P_{n-i}\cdot P_{n+i}=(-1)^{n-i}\cdot P_{i}^2.
\end{displaymath}
\end{identity} See \cite{Ho} for this and some other identities involving Pell numbers.
\begin{identity}[A Catalan-like identity for $U_n(z)$] Assume that $a_n=U_{n}(z)$.
Then, since here $y=-1$ and thus $(-y)^{n-i}=1$,
\begin{displaymath}
U_{n}^2(z)-U_{n-i}(z)\cdot U_{n+i}(z)=U_{i}^2(z).
\end{displaymath}
\end{identity}
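These Catalan-like specializations are easy to confirm by direct computation. The following illustrative script (ours, not part of the paper) checks $a_n^2-a_{n-i}\,a_{n+i}=(-y)^{n-i}\,a_i^2$ for the Jacobsthal, Pell, and $x=2z$, $y=-1$ cases; note that for $y=-1$ the factor $(-y)^{n-i}$ is identically $1$:

```python
def extend(x, y, n_max):
    a = [0, 1]
    for _ in range(n_max - 1):
        a.append(x * a[-1] + y * a[-2])
    return a

def catalan_check(x, y, n_max=15):
    a = extend(x, y, n_max)
    for n in range(1, 6):
        for i in range(0, n + 1):
            assert a[n] ** 2 - a[n - i] * a[n + i] == (-y) ** (n - i) * a[i] ** 2

catalan_check(1, 2)    # Jacobsthal: sign factor (-2)^{n-i}
catalan_check(2, 1)    # Pell: sign factor (-1)^{n-i}
for z in range(1, 5):  # x = 2z, y = -1: the sign factor equals 1
    catalan_check(2 * z, -1)
```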
See \cite{Ud} for the direct proof of the previous identity.
\vspace{2mm}
Finally, we illustrate the identity from Theorem~\ref{fpt} by two well known identities for Fibonacci numbers. For the first identity, we set $(a)=(b)=(F),\, p=m,\, q=k$ and $y=1$, then we have the following result (\cite{Ru}, page 77).
\begin{identity}
\begin{displaymath}F_{k+m}^2-F_{m-k}^2=F_{2k}\cdot F_{2m}.
\end{displaymath}
\end{identity}
Also, if $p=m+1,q=k+1$, and $y=1$, then we have the identity from \cite{Sh}, page 63.
\begin{identity}
\begin{displaymath}F_{k+m+1}^2+F_{m-k}^2=F_{2k+1}\cdot F_{2m+1}.
\end{displaymath}
\end{identity}
% End of arXiv:2008.13200, ``Arithmetic of Some Sequences Via $2$-determinants'' (math.CO).
% arXiv:1204.2180
\title{A regularity lemma and twins in words}
\begin{abstract}
For a word $S$, let $f(S)$ be the largest integer $m$ such that there are two disjoint identical (scattered) subwords of length $m$. Let $f(n, \Sigma) = \min \{f(S): S \text{ is of length } n \text{ over alphabet } \Sigma \}$. Here, it is shown that
\[2f(n, \{0,1\}) = n-o(n)\]
using the regularity lemma for words. That is, any binary word of length $n$ can be split into two identical subwords (referred to as twins) and, perhaps, a remaining subword of length $o(n)$. A similar result is proven for $k$ identical subwords of a word over an alphabet with at most $k$ letters.
\end{abstract}
\section{Introduction}
Let $S=s_1 \ldots s_n $ be a word of length $n$, i.e., a sequence $s_1, s_2, \ldots, s_n$.
A (scattered) {\it subword} of
$S$ is a word $S'= s_{i_1} s_{i_2} \ldots s_{i_s}$, where
$i_1<i_2<\cdots < i_s$. This notion was largely investigated in
combinatorics on words and formal languages theory with special
attention given to counting subword occurrences, different
complexity questions, the problem of reconstructing a word from
its subwords (see, e.g., \cite{ds2003, MS2004, salo2003}). For a
word $S$, let $f(S)$ be the largest integer $m$ such that there
are two disjoint identical subwords of $S$, each of length $m$.
We call such subwords {\it twins}. For example,
if $S=s_1 s_2
s_3 s_4 s_5 s_6 = 0 0 1 0 1 1$, then $S_1=s_1 s_5$ and $S_2=s_4
s_6$ are two identical subwords equal to $0 1$. The question we
are concerned with is "How large could the twins be
in any word over a given alphabet?" One of the classical problems
related to this question is the problem of finding longest
subsequence common to two given sequences, see for example
\cite{CLRS01, H, Xia07}. Indeed, if we split a given word $S$
into two subwords with the same number of elements and find a
longest common subword of these two parts, it corresponds to a pair of
disjoint identical subwords of $S$. Optimizing over all
such partitions gives the largest twins.
Denoting by $\Sigma^{n}$ the set of words of length $n$ over the alphabet $\Sigma$, let
\[f(n, \Sigma) = \min \{f(S): S \in \Sigma^n \}.\]
Observe first that $f(n, \{0,1\})\geq\lfloor(1/3)n\rfloor$.
Indeed, consider any $S\in \Sigma^n$ and
split it into consecutive triples. Each triple has either two zeros or two ones, so we can build a subword $S_1$
by choosing a repeated element from each triple, and similarly
build a subword $S_2$ by choosing the second repeated element from
each triple. For example, if $S = 001~101~111~010$ then there are
twins $S_1, S_2$, each equal to $0~1~1~0$: $S = {\bf
0}{\color{red} 0}1~{\bf 1} 0{\color{red} 1}~{\bf 1}{\color{red}
1}1~{\bf 0}1{\color{red} 0}$, here one word is marked bold, and
the other marked red.
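The triple-splitting argument can be sketched in a few lines of code; a minimal illustration (ours, not from the paper) that returns the common word of length $\lfloor n/3\rfloor$ built from the repeated letter of each triple:

```python
def twins_by_triples(s):
    """Greedy twins in a binary word: split s into consecutive triples; each
    triple repeats some letter, and taking the two occurrences of that letter
    from every triple yields two disjoint identical subwords, each of length
    floor(len(s)/3)."""
    t = []
    for i in range(0, len(s) - len(s) % 3, 3):
        block = s[i:i + 3]
        t.append('0' if block.count('0') >= 2 else '1')
    return ''.join(t)

# the example from the text: S = 001 101 111 010 gives twins equal to 0110
assert twins_by_triples('001101111010') == '0110'
```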
In fact, we can find much larger identical subwords in any binary word.
Our main result is
\begin{theorem}\label{thm:twins}
There exists an absolute constant $C$ such that
\[
\left(1-C\left(\frac{\log n}{\log\log n}\right)^{-1/4}\right)n \leq 2f(n, \{0,1\})\leq n - \log n.
\]
\end{theorem}
In the proof we shall employ a classical density increment
argument successfully applied in combinatorics and number theory,
see e.g.\ the survey of Koml\'os and Simonovits~\cite{KoSi96} and some important applications~\cite{Gow01} and~\cite{Szemeredi78}. We first show that
we can partition any word $S$ into consecutive factors that look as
if they were random in a certain weak sense (we call them
$\varepsilon$-regular). These $\varepsilon$-regular words can be partitioned (with the exception of an $\varepsilon$ proportion of the letters) into two
identical subwords. Concatenating these subwords over all
$\varepsilon$-regular factors, we eventually obtain identical subwords of
roughly half the length of $S$.
We generalize the notion of two identical subwords in words to a notion of
$k$ identical subwords. For a given word $S$, let $f(S, k)$ be the largest
$m$ so that $S$ contains $k$ pairwise
disjoint identical subwords of length $m$ each. Finally,
let
$$f(n, k, \Sigma) = \min \{f(S, k): ~ S\in \Sigma^n \}.$$
\begin{theorem}\label{thm:k- tuplets-smallQ}
For any integer $k\geq 2$, and alphabet $\Sigma$, $|\Sigma|\leq
k$,
\[
\left(1-C|\Sigma|\left(\frac{\log n}{\log\log n}\right)^{-1/4}\right)n \leq kf(n, k, \Sigma).
\]
\end{theorem}
In case when $k$ is smaller than the size of the alphabet, we have the following bounds.
\begin{theorem}\label{thm:k- tuplets-largeQ}
For any integer $k\geq 2$, and alphabet $\Sigma$, $|\Sigma|> k$,
\[
\left(\frac{k}{|\Sigma|}-C|\Sigma|\left(\frac{\log n}{\log\log n}\right)^{-1/4}\right)n \leq kf(n, k,
\Sigma)\leq n - \max\{\alpha n,\log n\},
\]
where $\alpha\in[0,1/k]$ is the solution of the equation $|\Sigma|^{-(k-1)\alpha} \alpha ^{-k \alpha }(1-k\alpha )^{k\alpha-1 }= 1$,
whenever such a solution exists, and $\alpha=0$ otherwise.
\end{theorem}
We shall sometimes refer to two disjoint identical subwords as {\it twins},
three disjoint identical subwords as {\it triplets}, $k$ disjoint identical subwords as $k$-{\it tuplets}.
We shall prove the regularity lemma for binary words in Section
\ref{regularity} and will prove the Theorem~\ref{thm:twins} in
Section \ref{Proofs}. We shall prove Theorems
\ref{thm:k- tuplets-smallQ}, \ref{thm:k- tuplets-largeQ} in
Section~\ref{Proofs1}. We shall ignore any
divisibility issues as these will not affect our arguments.
\section{Definitions and Regularity Lemma for Words}\label{regularity}
First, we shall introduce some notation (for more details, see for instance \cite{KK, Lo}).
An \emph{alphabet} $\Sigma$ is a finite non-empty set of symbols called \emph{letters}.
For a (scattered) {\it subword} $S'= s_{i_1} s_{i_2}
\ldots s_{i_s}$, of a word $S$, we call the set
$\{i_1, i_2, \ldots, i_s\}$ a {\it support} of $S'$ in $S$, and
write ${\rm supp}(S')$, so the length of $S'$, $|S'|= |{\rm supp}(S')|$.
Denoting $I= \{i_1, \ldots, i_s\}$, we write $S' = S[I]$.
A {\it factor} of $S$ is a subword with consecutive elements of
$S$, i.e., $s_i s_{i+1} \ldots s_{i+m}$, for some $1\leq i \leq n$
and $0\leq m \leq n-i$, we denote it $S[i, i+m]$. If $S$ is a word
over alphabet $\Sigma$ and $q\in \Sigma$, we denote $|S|_q$ the
number of elements of $S$ equal to $q$. The {\it density} $d_q(S)
$ is defined to be $|S|_q/|S|$.
For two subwords $S'$ and $S''$ of $S$, we say that $S'$ is
contained in $S''$ if ${\rm supp}(S')\subseteq {\rm supp}(S'')$,
we also denote by $S'\cap S''$ a subword of $S$, $S[{\rm
supp}(S')\cap {\rm supp}(S'')]$.
If $S= s_1 \ldots s_n$ and
$S[1,i] = A$, $S[i+1, n]=B$, then we write $S=AB$ and call $S$ a
concatenation of $A$ and $B$.
\begin{definition}[$\varepsilon$-regular word]
Call a word $S$ of length $n$ over an alphabet $\Sigma$
$\varepsilon$-regular if for every $i$, $\varepsilon n+1 \le i\le n-2\varepsilon n+1$ and
every $q\in \Sigma$ it holds that
\begin{equation}\label{eq:irregular}
|d_q(S)-d_q(S[i,i+\varepsilon n-1])|<\varepsilon.
\end{equation}
\end{definition}
Notice that in the case $|\Sigma|=|\{0,1\}|=2$, $d_0(S)=1-d_1(S)$ and thus
$ |d_0(S)-d_0(S[i,i+\varepsilon n-1])|<\varepsilon \Longleftrightarrow |d_1(S)-d_1(S[i,i+\varepsilon n-1])|<\varepsilon.$
When $\Sigma=\{0,1\}$, we shall denote $d(S) = d_1(S)$.
The notion of $\varepsilon$-regular words resembles the notion of a pseudorandom (quasirandom) word; see~\cite{ChungGrahamZ92}.
However, these two notions are quite different. A word that consists of alternating
$0$s and $1$s is $\varepsilon$-regular but not pseudorandom. Also, unlike in the case
of stronger notions of pseudorandomness, one can check in linear time whether a word is $\varepsilon$-regular,
cf.~\cite{ADLRY94} in the graph case.
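The linear-time check mentioned above can be realized with a sliding window that maintains the letter counts of the current window in $O(1)$ per shift; an illustrative implementation (the helper name is ours):

```python
def is_eps_regular(s, eps, alphabet):
    """Check the definition of an eps-regular word: every window of length
    floor(eps*n) with 1-based start i, eps*n+1 <= i <= n-2*eps*n+1, must have
    letter densities within eps of the global ones."""
    n = len(s)
    w = int(eps * n)
    if w == 0:
        return True
    dens = {q: s.count(q) / n for q in alphabet}
    lo = w            # 0-based start of the first window
    hi = n - 2 * w    # 0-based start of the last window
    counts = {q: 0 for q in alphabet}
    for ch in s[lo:lo + w]:
        counts[ch] += 1
    for start in range(lo, hi + 1):
        if any(abs(dens[q] - counts[q] / w) >= eps for q in alphabet):
            return False
        if start < hi:                       # shift the window by one letter
            counts[s[start]] -= 1
            counts[s[start + w]] += 1
    return True
```

For instance, the alternating word $0101\ldots$ passes the check for moderate $\varepsilon$, while $0^{n/2}1^{n/2}$ fails it.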
\begin{definition}
We call $\mathcal{S}:=(S_1$, \ldots, $S_t)$ a partition of $S$ if $S= S_1S_2\ldots S_t$, ($S$ is concatenation of consecutive $S_i$s).
A partition $\mathcal{S}$ is an $\varepsilon$-regular partition of a word $S\in\Sigma^n$ if
\[
\sum_{\substack{i\in[t]\\ S_i \text{ is not } \varepsilon-\text{regular}}} |S_i|\le \varepsilon n,
\]
\noindent i.e., the total length of $\varepsilon$-irregular subwords is
at most $\varepsilon n$.
\end{definition}
The decomposition lemma we are going to show states the following:
\begin{theorem}[Regularity Lemma for Words]\label{lem:rl_seqs}
For every $\varepsilon>0$ and $t_0$ there is an $n_0$ and $T_0$ such that any word $S\in \Sigma^n$, for $n\geq n_0$
admits an $\varepsilon$-regular partition of $S$ into $S_1$, \ldots,
$S_t$ with $t_0\le t\le T_0$. In fact, $T_0\le t_03^{1/{\varepsilon^4}}$
and $n_0 = t_0\varepsilon^{-\varepsilon^{-4}}$.
\end{theorem}
To prove the regularity lemma, we introduce the notion of an index and a refinement and prove a few basic facts.
\begin{definition}[Index of a partition]
Let $\mathcal{S}:=(S_1$, \ldots, $S_t)$ be a partition of $S\in \Sigma^n$
into consecutive factors.
We define
\[
\mathrm{ind}(\mathcal{S})= \sum_{q\in \Sigma } \sum_{i\in[t]} d_q(S_i)^2\tfrac{|S_i|}{n}.
\]
Further, for convenience we set $\mathrm{ind}_q(\mathcal{S})=\sum_{i\in[t]} d_q(S_i)^2\tfrac{|S_i|}{n}$.
\end{definition}
Observe that $\mathrm{ind}(\mathcal{S})$ is bounded by $1$ from above.
\begin{definition}[Refinement of $\mathcal{S}$]
Let $\mathcal{S}=(S_1,\ldots,S_t)$ and
\[
\mathcal{S}'=(S'_{1,1}, S'_{1,2}, \ldots, S'_{1, s_1}, \quad S'_{2,1}, S'_{2,2}, \ldots, S'_{2, s_2}, \quad \ldots, \quad S'_{t,1}, S'_{t,2}, \ldots, S'_{t, s_t})
\]
be partitions of $S\in \Sigma^n$.
We say that $\mathcal{S}'$ refines $\mathcal{S}$ and write $\mathcal{S}'\preccurlyeq\mathcal{S}$,
if for every $i=1, \ldots, t$, $S_i = S'_{i,1} S'_{i,2} \cdots
S'_{i, s_i}$.
\end{definition}
\begin{lemma}\label{fact:indexineq}
Let $\mathcal{S}$ and $\mathcal{S}'$ be partitions of $S\in \Sigma^n$.
If $\mathcal{S}'\preccurlyeq\mathcal{S}$, then
\[
\mathrm{ind}(\mathcal{S}')\ge\mathrm{ind}(\mathcal{S}).
\]
\end{lemma}
\begin{proof}
Let $\mathcal{S}=(S_1,\ldots,S_t)$ and
\[
\mathcal{S}'=(S'_{1,1}, S'_{1,2}, \ldots, S'_{1, s_1}, \quad S'_{2,1}, S'_{2,2}, \ldots, S'_{2, s_2}, \quad \ldots, \quad S'_{t,1}, S'_{t,2}, \ldots, S'_{t, s_t}).
\]
We proceed for each $q\in \Sigma$ as follows:
\begin{align*}
\mathrm{ind}_q(\mathcal{S}')& = \sum_{S'\in\mathcal{S}'} d_q(S')^2\frac{|S'|}{n}\\
&= \sum_{i=1}^t \sum_{j=1}^{s_i} d_q(S'_{i,j})^2\frac{|S'_{i,j}|}{n}\\
&= \sum_{i=1}^{t} \frac{|S_i|}{n} \sum_{j=1}^{s_i} d_q(S'_{i,j})^2\frac{|S'_{i,j}|}{|S_i|} \\
& \overset{\text{Jensen's inequality}}{\ge} \sum_{i=1}^t\frac{|S_i|}{n} \left(\sum_ {j=1}^{s_i} d_q(S'_{i,j})\frac{|S'_{i,j}|}{|S_i|}\right)^2 \\
& = \sum_{i=1}^t\frac{|S_i|}{n} \left(\sum_ {j=1}^{s_i} \frac{|S'_{i,j}|_q}{|S'_{i,j}|} \frac{|S'_{i,j}|}{|S_i|}\right)^2\\
&=\sum_{i=1}^t\frac{|S_i|}{n} d_q(S_i)^2 \\
&=\mathrm{ind}_q(\mathcal{S}).
\end{align*}
Now, building the sum over all $q\in \Sigma$ yields:
\[
\mathrm{ind}(\mathcal{S}')\ge \mathrm{ind}(\mathcal{S}).
\]
\end{proof}
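Lemma~\ref{fact:indexineq} can be illustrated numerically: refining a partition never decreases the index. A small check with exact rational arithmetic (helper names are ours):

```python
from fractions import Fraction
import random

def densities(word, alphabet):
    return {q: Fraction(word.count(q), len(word)) for q in alphabet}

def index(parts, alphabet):
    """ind(S) = sum_q sum_i d_q(S_i)^2 * |S_i|/n for a partition into factors."""
    n = sum(len(p) for p in parts)
    return sum(densities(p, alphabet)[q] ** 2 * Fraction(len(p), n)
               for p in parts for q in alphabet)

rng = random.Random(1)
for _ in range(50):
    s = ''.join(rng.choice('012') for _ in range(60))
    coarse = [s[:20], s[20:40], s[40:]]                            # 3 factors
    fine = [s[:10], s[10:20], s[20:30], s[30:40], s[40:50], s[50:]]  # refined
    assert index(fine, '012') >= index(coarse, '012')
```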
The next lemma shows that if a word $S$ is not
$\varepsilon$-regular, then there is a refinement of $(S)$ whose index exceeds the index of $(S)$ by at least $\varepsilon^3$.
\begin{lemma}\label{lem:increment}
Let $S\in \Sigma^m$ be an $\varepsilon$-irregular word. Then there is a partition $(A,B, C)$ of $S$ such that $|A|, |B|, |C| \geq \varepsilon m$ and
\begin{equation}\label{eq:increment}
\mathrm{ind}((A,B,C))\ge\mathrm{ind}((S))+\varepsilon^3=\left(\sum_{q\in \Sigma} d_q(S)^2\right)+\varepsilon^3.
\end{equation}
\end{lemma}
\begin{proof}
Since $S$ is not $\varepsilon$-regular, there exists an element $q\in \Sigma$ and an $i$ with $\varepsilon m+1\le i\le m-2\varepsilon m+1$ such that
$|d-d(S[i,i+\varepsilon m-1])|\ge \varepsilon$, where $d:=d_q(S)$ and
$d(T):=d_q(T)$ for any factor $T$ of $S$. Assume w.l.o.g.\ that
$d-d(S[i,i+\varepsilon m-1])\ge \varepsilon$ and set $\gamma:=d-d(S[i,i+\varepsilon
m-1])$, $A:=S[1,i-1]$, $B:=S[i,i+\varepsilon m-1]$ and $C:=S[i+\varepsilon
m,m]$, $a:=|A|$, $b:=|B|=\varepsilon m$ and $c:=|C|$.
Observe further that
\[
|S|_q=d(A)a+d(B)b+d(C)c=dm, \quad d((A,C))=\tfrac{dm-(d-\gamma)b}{a+c}, \quad d(B)=d-\gamma.
\]
Since $a+c = m-b$ and $\mathrm{ind}_q((A,B,C))=\mathrm{ind}_q((A,C,B))$,
\begin{align*}
\mathrm{ind}_q((A,B,C))&{\ge} d((A,C))^2\frac{a+c}{m}+d(B)^2\frac{b}{m} \\
&= \left(\frac{dm-(d-\gamma)b}{a+c}\right)^2\frac{a+c}{m}+(d-\gamma)^2\frac{b}{m} \\
& = \frac{(dm-(d-\gamma)b)^2}{(m-b)m}+(d-\gamma)^2\frac{b}{m} \\
&= \frac{1}{(m-b)m} \left[ d^2 (m^2 -mb) + \gamma^2(mb)\right] \\
&= d^2 + \frac{\gamma^2 b}{m-b}
\geq d^2 + \frac{\varepsilon^3 m}{(1-\varepsilon)m}
\geq d^2 + \varepsilon^3.
\end{align*}
The case when $d-d(S[i,i+\varepsilon m-1])\le -\varepsilon$ works out similarly. Indeed, set $\gamma:=d-d(S[i,i+\varepsilon m-1])$
as before and notice that $|\gamma|\ge\varepsilon$ and that all the computations above are exactly the same (only $\gamma^2$ enters).
So, $\mathrm{ind}_q ((A,B,C)) \geq d_q(S)^2 + \varepsilon^3$.
For all other $q' \in \Sigma$, Lemma~\ref{fact:indexineq} gives that $\mathrm{ind}_{q'} ((A,B,C)) \geq \mathrm{ind}_{q'}((S)) = d_{q'}(S)^2$.
Thus $$\mathrm{ind}((A,B,C))= \mathrm{ind}_q((A,B,C)) +\sum_{q'\in \Sigma -\{q\} } \mathrm{ind}_{q'} ((A,B,C)) \geq \sum_{q'\in \Sigma} d_{q'}(S)^2 + \varepsilon^3.$$
\end{proof}
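The chain of equalities in the proof rests on the algebraic identity $\frac{(dm-(d-\gamma)b)^2}{(m-b)m}+(d-\gamma)^2\frac{b}{m}=d^2+\frac{\gamma^2 b}{m-b}$, which can be checked exactly (the sample values below are arbitrary, and include a negative $\gamma$ to illustrate the second case):

```python
from fractions import Fraction as F

def lhs(d, g, m, b):
    """(dm - (d-g)b)^2 / ((m-b)m)  +  (d-g)^2 * b/m"""
    return (d * m - (d - g) * b) ** 2 / (F(m - b) * m) + (d - g) ** 2 * F(b, m)

def rhs(d, g, m, b):
    """d^2 + g^2 * b/(m-b)"""
    return d ** 2 + g ** 2 * F(b, m - b)

# the identity holds for positive and negative gamma alike (only gamma^2 enters)
for d, g, m, b in [(F(1, 2), F(1, 5), 100, 10), (F(3, 7), F(-1, 4), 60, 12)]:
    assert lhs(d, g, m, b) == rhs(d, g, m, b)
```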
Finally we are in position to finish the argument.
\begin{proof}[Proof of the Regularity Lemma for Words]
Take $\varepsilon>0$ and $t_0$ as given. We will give a bound on $n_0$ later.
Suppose that we have a word $S\in \Sigma^n$. Split it into $t_0$
consecutive factors $S_1$, \ldots, $S_{t_0}$ of the same length
$\tfrac{n}{t_0}$. If $\mathcal{S}:=(S_1,\ldots ,S_{t_0})$
is not an
$\varepsilon$-regular partition, then let $I\subseteq [t_0]$ be the set
of all indices such that, for every $i\in I$, $S_i$ is not
$\varepsilon$-regular (thus, $\sum_{i\in I}|S_i|\ge \varepsilon n$). Then, by
Lemma~\ref{lem:increment}, we can refine each $S_i$, $i\in I$,
into factors $A_i$, $B_i$ and $C_i$ such that
$\mathrm{ind}((A_i,B_i,C_i))\ge \sum_{q\in \Sigma}d_q(S_i)^2+\varepsilon^3$ (in
the case that~\eqref{eq:irregular} is violated for several $q\in
\Sigma $, choose an arbitrary such $q$). We perform such
refinement for each $S_i$, $i\in I$, obtaining a partition
$\mathcal{S}'\preccurlyeq\mathcal{S}$, noticing that
\begin{align*}
\mathrm{ind}(\mathcal{S}')&=
\sum_{q\in \Sigma}\sum_{j\in[t_0]\setminus I} d_q(S_j)^2\frac{|S_j|}{n}+
\\ &\quad\quad\quad\quad\quad
\sum_{q\in \Sigma }\sum_{i\in I} \left(d_q(A_i)^2\frac{|A_i|}{n}+d_q(B_i)^2\frac{|B_i|}{n}+d_q(C_i)^2\frac{|C_i|}{n}\right) \\
&= \sum_{q\in \Sigma}\sum_{j\in[t_0]\setminus I} d_q(S_j)^2\frac{|S_j|}{n}+ \sum_{i\in I} \mathrm{ind}((A_i,B_i,C_i))\frac{|S_i|}{n} \\
&\overset{\eqref{eq:increment}}{\ge} \sum_{q\in \Sigma }\sum_{j\in[t_0]\setminus I} d_q(S_j)^2\frac{|S_j|}{n}+\sum_{i\in I}\left(\mathrm{ind}((S_i))+\varepsilon^3\right)\frac{|S_i|}{n} \\
&=\mathrm{ind}(\mathcal{S})+\varepsilon^3\frac{\sum_{i\in I}|S_i|}{n} \\
&\ge \mathrm{ind}(\mathcal{S})+\varepsilon^4.
\end{align*}
Thus, $\mathcal{S}'$ refines $\mathcal{S}$ and has higher index. If $\mathcal{S}'$ is not an $\varepsilon$-regular partition of $S$, then we can repeat the procedure above by refining $\mathcal{S}'$ etc.
Recall that the index of any partition $\mathcal{S}$ is bounded from above by $1$.
Since each refinement step increases the index by at least $\varepsilon^4$, we can perform at most $\varepsilon^{-4}$ steps, and therefore we eventually find an $\varepsilon$-regular partition of $S$. Moreover, by Lemma~\ref{lem:increment} each new factor retains at least an $\varepsilon$-fraction of the length of the factor it was obtained from, so all factors stay non-trivial provided $n$ is large enough.
Notice that such a partition consists of at most $3^{1/\varepsilon^4}t_0$ words, since at each iteration each of the words is partitioned into at most $3$ new ones.
Therefore, $T_0\le 3^{1/\varepsilon^4}t_0$ and each factor in the partition has length at least $t_0^{-1}\varepsilon^{1/\varepsilon^4}n$.
\end{proof}
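The refinement procedure above can be sketched algorithmically. The following simplified variant splits one violating window at a time instead of refining all irregular factors per round (so it carries no bound on the number of parts), and it assumes the window definition of $\varepsilon$-regularity used in Lemma~\ref{lem:increment}; for binary words it suffices to test the density of $1$s, since the density of $0$s deviates by the same amount.

```python
def letter_density(w, q="1"):
    """d_q(w) = |w|_q / |w|."""
    return w.count(q) / len(w)

def irregular_window(w, eps):
    """Return the 0-based start of a window of length eps*|w| whose density of
    1s deviates from d(w) by at least eps, or None if w counts as eps-regular."""
    m = len(w)
    b = int(eps * m)                   # window length eps*m (rounded down)
    if b == 0:
        return None                    # too short to test: treat as regular
    dw = letter_density(w)
    for j in range(b, m - 2 * b + 1):  # 1-indexed i in [eps*m + 1, m - 2*eps*m + 1]
        if abs(dw - letter_density(w[j:j + b])) >= eps:
            return j
    return None

def regularize(w, eps):
    """Split w recursively until every factor is eps-regular (simplified
    variant of the proof's refinement; no index bookkeeping)."""
    j = irregular_window(w, eps)
    if j is None:
        return [w]
    b = int(eps * len(w))
    A, B, C = w[:j], w[j:j + b], w[j + b:]
    return regularize(A, eps) + regularize(B, eps) + regularize(C, eps)

parts = regularize("1" * 50 + "0" * 50, 0.1)
assert "".join(parts) == "1" * 50 + "0" * 50
assert all(irregular_window(p, 0.1) is None for p in parts)
```

On this input the recursion peels off the long run of $1$s in a few rounds and leaves a final factor that is already regular.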
\section{Proof of Theorem~\texorpdfstring{\ref{thm:twins}}{twins}.}\label{Proofs}
Before we prove our main theorem about binary words, we show a useful claim about twins in $\varepsilon$-regular words.
\begin{claim}\label{claim:psrandom}
If $S$ is an $\varepsilon$-regular binary word, then $2f(S)\geq |S| - 5\varepsilon|S|$.
\end{claim}
\begin{proof}
Let $|S|=m$. We partition $S$ into $t= 1/\varepsilon$ consecutive factors
$S_{1}$,\ldots, $S_{1/\varepsilon}$, each of length $\varepsilon m$. Since $S$
is $\varepsilon$-regular, $|d(S_{i})-d(S)|<\varepsilon$, for every
$i\in\{2,\ldots, 1/\varepsilon-1\}$. Thus each $S_i$ has at least $(d(S)-
\varepsilon)\varepsilon m$ occurrences of $1$s and at least $(1-d(S)-\varepsilon)\varepsilon m$ occurrences of $0$s.
Let $S_i(1)$ be a subword of $S_i$ consisting of exactly $(d(S)-
\varepsilon)\varepsilon m$ letters $1$ and $S_i(0)$ be a subword of $S_i$ consisting
of exactly $(1-d(S)-\varepsilon)\varepsilon m$ letters $0$. Consider the following two
disjoint subwords of $S$:
$A=S_2(1)S_3(0) S_4(1) \cdots S_{t-2}(1) $ and $B=S_3(1) S_4(0) S_5(1) \cdots S_{t-2}(0)S_{t-1}(1) $.
When $t$ is odd, $A$ and $B$ are constructed similarly.
We see that $A$ and $B$ together have at least $m - 2\varepsilon^2 m ( 1/\varepsilon -3) -3 \varepsilon m$ elements,
where $2\varepsilon^2 m ( 1/\varepsilon -3)$ is an upper bound on the number of $0$s and $1$s which we had to ``throw away''
to obtain \emph{exactly} $(d(S)- \varepsilon)\varepsilon m$ letters $1$ and $(1-d(S)-\varepsilon)\varepsilon m$ letters $0$ in each $S_i$,
$2\varepsilon m$ is the number of elements in $S_1$ and $S_t$, and $\varepsilon m$ is the upper bound on $|S_2(0)|+|S_{t-1}(0)|$, the two blocks left unused.
Thus, $2f(S) \geq m - 5\varepsilon m $. This concludes the proof of the claim.
\end{proof}
Notice that we could slightly improve on the $5\varepsilon m$ term above by finding, in the manner already described, twins of length $\varepsilon m/3$ inside each of $S_1$ and $S_t$,
but this would not give a substantial improvement.
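The block construction in the proof of Claim~\ref{claim:psrandom} can be made concrete. The sketch below assumes, for simplicity, that $t=1/\varepsilon$ is an even integer and that $\varepsilon|S|$ is an integer (the odd-$t$ case is analogous); the periodic word used in the usage example is just a convenient $\varepsilon$-regular input.

```python
def build_twins(S, eps):
    """Construct twins A, B (as words) from an eps-regular binary word S,
    following the block construction in the proof of the claim."""
    m = len(S)
    t = round(1 / eps)
    L = m // t                        # each factor S_1, ..., S_t has length eps*m
    dS = S.count("1") / m
    n1 = round((dS - eps) * L)        # 1s taken from each middle factor
    n0 = round((1 - dS - eps) * L)    # 0s taken from each middle factor
    # regularity guarantees the middle factors contain enough of each letter
    for i in range(1, t - 1):         # 0-based indices of S_2, ..., S_{t-1}
        factor = S[i * L:(i + 1) * L]
        assert factor.count("1") >= n1 and factor.count("0") >= n0
    # A takes 1s from even factors and 0s from odd ones (factors 2..t-2,
    # 1-indexed); B does the opposite on factors 3..t-1.  Within each factor
    # the two use different letters, so A and B are disjoint, and both spell
    # the word 1^{n1} 0^{n0} 1^{n1} 0^{n0} ...
    A = "".join("1" * n1 if i % 2 == 0 else "0" * n0 for i in range(2, t - 1))
    B = "".join("1" * n1 if i % 2 == 1 else "0" * n0 for i in range(3, t))
    return A, B

A, B = build_twins("01" * 50, 0.1)
assert A == B                          # identical disjoint subwords, i.e. twins
assert 2 * len(A) >= 100 - 5 * 0.1 * 100
```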
\begin{proof}[Proof of Theorem~\ref{thm:twins}]
Let $n$ be at least $n_0$, which is as asserted by the Regularity
Lemma for words
for given $\varepsilon>0$ and $t_0:=\lceil\tfrac{1}{\varepsilon}\rceil$.
Furthermore, let $S$ be a binary word of length $n$.
Theorem~\ref{lem:rl_seqs} then provides an $\varepsilon$-regular partition of
$S$ into $S_1$, \ldots, $S_t$ with $1/\varepsilon \le t\le T_0$. We apply
Claim~\ref{claim:psrandom} to every $\varepsilon$-regular factor $S_i$.
Furthermore, since $S_i$s appear consecutively in
$S$, we can put the twins from each of $S_i$s together obtaining
twins for the whole word $S$. This way we see:
\[
2f(S)\ge \sum_{\substack{i\in[t]\\ S_i \text{ is }\varepsilon-\text{regular}}} (|S_i|-5\varepsilon |S_i|)
\ge n-5\varepsilon n-\varepsilon n=n-6\varepsilon n,
\]
where $\varepsilon n$ bounds the total length of the factors that are not $\varepsilon$-regular.
Choosing $\varepsilon=C\left(\frac{\log\log n}{\log n}\right)^{1/4}$ for an appropriate constant $C$, we have
$n \geq \varepsilon^{-\varepsilon^{-4}}$, so Theorem~\ref{lem:rl_seqs} indeed applies, and
$$2f(n,\{0,1\})\ge (1-6\varepsilon)n=\left(1-6C\left(\frac{\log\log n}{\log n}\right)^{1/4}\right)n.$$
Next we shall prove the upper bound on $f(n,\{0,1\})$ by constructing a binary word $S$ such that $2f(S) \leq |S| - \log |S|$.
Let $S= S_k S_{k-1} \cdots S_0$, where $|S_i|= 3^i$, $S_i$ consists only of $1$s for even $i$, and
only of $0$s for odd $i$. That is, $S$ is built of alternating blocks of $1$s and $0$s whose lengths
decrease exponentially. Let $A$ and $B$ be twins in $S$. \\
\noindent
Assume first that $A$ and $B$ have the same number of elements in $S_k$.
Since $S_k$ has an odd number of elements, and $A$, $B$ restricted to $S'=S_{k-1}S_{k-2} \cdots S_0$ are twins,
by induction we have $|A|+|B| \leq (|S_k|-1) + (|S'| - \log (|S'|)) = |S| - 1 -\log(|S'|) \leq |S| - \log |S|$,
where the last inequality holds since $|S_k| = 3^k$ and $|S| = (3^{k+1}-1)/2$. \\
\noindent
Now assume, w.l.o.g.\, that $A$ has more elements in $S_k$ than $B$ does.
Then $B$ has no element in $S_{k-1}$: any $0$ that $B$ picked in $S_{k-1}$ would have to match a $1$ that $A$ picked in $S_k$.
We have that $| A \cap S_{k-1} | \geq |S_{k-1}|/2$,
otherwise $|A|+|B| \leq |S| - |S_{k-1}|/2 \leq |S| - \log |S|$.
So, $s=|A\cap S_{k-1}| \geq |S_{k-1}|/2 \geq 3^{k-1}/2$, and
$s$ elements of $B$ must be in $S_{k-3}\cup S_{k-5} \cdots$.
But $|S_{k-3}|+ |S_{k-5}| + \cdots \leq 3^{k-2}/2$, a contradiction proving Theorem \ref{thm:twins}.
\end{proof}
\begin{remark}
One can find twins, each of length $n/2-o(n)$, as described above by an
algorithm with $O(\varepsilon^{-4}|\Sigma|n)$ steps.
\end{remark}
\section{\texorpdfstring{$k$}{k}-tuplets over an alphabet of at most \texorpdfstring{$k$}{k} letters}\label{Proofs1}
\begin{proof}[Proof of Theorem \ref{thm:k- tuplets-smallQ}]
As before, we concentrate first on $\varepsilon$-regular words. Let $S$
be an $\varepsilon$-regular word of length $m$ over alphabet
$\Sigma=\{0,\ldots,{\l}-1\}$ and recall the assumption $\l\le k$. We
partition $S$ into $t= 1/\varepsilon$ consecutive factors $S_{1}$,\ldots,
$S_{1/\varepsilon}$, each of length $\varepsilon m$. Since $S$ is
$\varepsilon$-regular, $|d_q(S_{i})-d_q(S)|<\varepsilon$, for every
$i\in\{2,\ldots, 1/\varepsilon-1\}$, and every $q\in \Sigma $. Thus $S_i$ has
at least $(d_q(S)- \varepsilon)\varepsilon m$ letters $q$, for each $q\in \Sigma$.
We construct $k$-tuplets $A_1$, \ldots, $A_k$ as
follows. Each $A_j$ consists of consecutive blocks, with the
first block consisting of $(d_{0}(S)- \varepsilon)\varepsilon m$ letters $0$,
followed by a block of $(d_{1}(S)- \varepsilon)\varepsilon m$ letters $1$, \ldots,
followed by a block of $(d_{{\l}-1}(S)- \varepsilon)\varepsilon m$ letters ${\l}-1$, followed
by a block of $(d_{0}(S)- \varepsilon)\varepsilon m$ letters $0$, and so on.
Since $k\ge |\Sigma|$,
we will use all but at most
$\tfrac{1}{\varepsilon} \varepsilon^2m |\Sigma|+(2|\Sigma|)\varepsilon m = 3|\Sigma| \varepsilon m$ elements,
where the first
summand accounts for the number of elements that we did not use
when choosing exactly $(d_{q}(S)- \varepsilon)\varepsilon m$ elements $q$
from each $S_i$ and each $q\in \Sigma$, and the second summand accounts for the number of elements
in $S_1$, \ldots, $S_{\l}$
and in $S_{1/\varepsilon-\l+1}$, \ldots, $S_{1/\varepsilon}$.
Below are the examples in the special cases when $|\Sigma|=\l=k$ and when $|\Sigma|=2$ and $k=4$.
\noindent
{\it Example 1.}
\begin{align*}
A_1&=S_2(0) S_3(1) S_4(2) \cdots S_{\ell+1}(\ell-1) S_{\ell+2}(0) S_{\ell+3}(1) \cdots S_{2\ell+1} (\ell-1) \cdots, \\
A_2&= \,\,\, S_3(0) S_4(1) S_5(2) \cdots S_{\ell+2}(\ell-1) S_{\ell+3}(0) S_{\ell+4}(1) \cdots S_{2\ell+2} (\ell-1) \cdots, \\
\vdots\\
A_i&= \quad S_{i+1}(0) S_{i+2}(1) S_{i+3}(2) \cdots S_{i+\ell}(\ell-1) S_{i+\ell+1}(0) S_{i+\ell+2}(1) \cdots S_{i+2\ell} (\ell-1) \cdots\\
\vdots\\
A_{k}&= \hskip 2.4 cm S_{\ell+1}(0) S_{\ell+2}(1) S_{\ell+3}(2) \cdots S_{2\ell}(\ell-1) S_{2\ell+1}(0) \cdots S_{3\ell} (\ell-1) \cdots
\end{align*}
\noindent
{\it Example 2.}
\begin{align*}
A_1&=S_2(0) S_3(1) \hskip 1.6cm S_6(0) S_7(1) \cdots \\
A_2&= \hskip 0.8 cm S_3(0) S_4(1) \hskip 1.6cm S_7(0) S_8(1)\cdots \\
A_3&= \hskip 1.6 cm S_{4}(0) S_{5}(1) \hskip 1.6cm S_8(0) S_9(1) \cdots \\
A_4&= \hskip 2.4 cm S_{5}(0) S_{6}(1) \hskip 1.6cm S_9(0) S_{10}(1) \cdots
\end{align*}
Here $S_i(j)$ is the block of $(d_j(S)-\varepsilon)\varepsilon m$ letters $j$ taken from $S_i$. So, in general, the total number of elements in $A_1$,\ldots, $A_k$ is at least
$m-3|\Sigma|\varepsilon m.$
Thus, $k f(S)\ge m-3 |\Sigma|\varepsilon m$.
To provide the lower bound on $f(n,k, \Sigma )$ we proceed as in the
proof of Theorem \ref{thm:twins} by first finding a regular partition of a given
word and then applying the above construction to regular factors with an appropriate choice of $\varepsilon$.
\end{proof}
\section{Large alphabets and small \texorpdfstring{$k$}{k}-tuplets}
\begin{proof}[Proof of Theorem~\ref{thm:k- tuplets-largeQ}]
The proof of the lower bound proceeds by considering the scattered subword $W$ of the given word consisting of all occurrences of the $k$ most frequent letters.
Clearly, $|W|\ge \tfrac{k}{|\Sigma|}n$, which together with Theorem~\ref{thm:k- tuplets-smallQ} yields the lower bound.
The upper bound is either immediate from Theorem~\ref{thm:twins} or follows from computing the expected number of $k$-tuplets of length $m$ each
in a random word of length $n$ over an alphabet $\Sigma$ of size $\l$. If the expectation is less than $1$, this
means that there is a word $S$ with $f(S,k)<m$. Indeed, there are
\[
\frac{1}{k!} \prod_{i=0}^{k-1} \binom{n-im}{m}
\]
distinct sets of $k$ disjoint subwords each of length $m$ in a word of length $n$.
The probability that such a set corresponds to a $k$-tuplet, when each letter is chosen with probability $1/\l$ independently,
is $\l^{(1-k)m}$.
Thus, the expected number of $k$-tuplets is at most
\begin{multline*}
\l^{(1-k)m}\prod_{i=0}^{k-1} \binom{n-im}{m}=\l^{-(k-1)m} \frac{n!}{(m!)^k(n-km)!}\le \l^{-(k-1)m}\frac{n^n}{m^{km}(n-km)^{n-km}},
\end{multline*}
that is, for $m=\alpha n$, is at most
\[
\l^{-(k-1)\alpha n}\frac{n^n}{(\alpha n)^{k\alpha n }(n-k\alpha n)^{n-k\alpha n}}=\left(\l^{-(k-1)\alpha} \alpha ^{-k\alpha }(1-k\alpha )^{k\alpha-1 }\right)^n.
\]
Thus, if $\l^{-(k-1)\alpha} \alpha ^{-k\alpha } (1-k\alpha )^{k\alpha-1 }$ is less than $1$, then there is a word $S$ of length $n$ with $f(S,k)\le \alpha n$.
In particular, for $k=2$ and $\l=5$ one can compute that this expression is already less than $1$ for $\alpha=0.49$.
\end{proof}
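The final computation can be reproduced directly. The following sketch evaluates the base of the exponential above; the particular values of $\alpha$ tried below are illustrative checks, not part of the proof.

```python
def rate(alpha, k, l):
    """Base of the exponential: l^{-(k-1)a} * a^{-k a} * (1 - k a)^{k a - 1};
    the expected number of k-tuplets of length alpha*n is at most rate**n."""
    return (l ** (-(k - 1) * alpha) * alpha ** (-k * alpha)
            * (1 - k * alpha) ** (k * alpha - 1))

# k = 2 (twins) over a 5-letter alphabet: at alpha = 0.49 the rate is
# already below 1, so some word S satisfies f(S, 2) <= 0.49 n ...
print(rate(0.49, 2, 5))  # ~0.989 < 1
# ... while at alpha = 0.45 the rate exceeds 1, so this first-moment
# argument gives nothing there
print(rate(0.45, 2, 5))  # ~1.25 > 1
```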
\section{Concluding Remarks}
\subsection{Small values of \texorpdfstring{$f(n,k,\Sigma)$}{f(n,k,Sigma)}}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$\Sigma$\textbackslash $n$ &$6$&$7$&$8$&$9$&$10$&$11$&$12$&$13$&$14$&$15$&$16$&$17$ \\\hline
$\{0,1\}$ &$2$&$2$&$2$&$3$&$3$ &$ 4$&$ 4$&$ 5$&$ 5$&$ 5$& $6$ & $6$ \\\hline
$\{0,1,2\}$ & $1$ &$1$ &$2$ &$2$ &$2$ &$3$ &$ 3$&$ 3$&$ 4$&$ 4$&$ 4$&$ 4$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|c| }
\hline
$\Sigma$\textbackslash $n$ & $18$&$19$&$20$&$21$&$22$&$23$&$24$\\\hline
$\{0,1\}$ &$7$ &$7$ &$ 8$& & & & \\\hline
$\{0,1,2\}$ &$\le 5$ &$\le6$&$\le6$&$\le 7$&$\le 7$&$\le 8$&$\le8$\\
\hline
\end{tabular}
\caption{Values of $f(n,2,2)$ and $f(n,2,3)$ for small $n$.}
\end{table}
We will slightly abuse notation and denote by $f(n,k,\l)$ the value of $f(n,k,\Sigma)$ with $|\Sigma|=\l$.
In the introductory section it was observed that $f(3,2,2)=1$, immediately yielding the weak lower bound $f(n,2,2)\ge\lfloor n/3\rfloor$.
In general, it clearly holds that
\[
f(n,k,\l)\ge \left\lfloor\tfrac{n}{m}\right\rfloor f(m,k,\l).
\]
For example, in Theorem~\ref{thm:k- tuplets-largeQ} we determined a lower bound on $f(n,2,3)$ of $\tfrac{1}{3}n-o(n)$.
We do not know whether it is tight nor, more frustratingly, whether one can achieve it without the $o(n)$ term by finding a (reasonably small) number $t$
such that $f(t,2,3)\ge \frac{t}{3}$. Finding such a $t$ would immediately give another proof of $f(n,2,3)\ge\tfrac{1}{3}n-t$.
However, the smallest possible value of such a $t$ is $21$, which already presents a computationally challenging task. In the tables above we summarize estimates on the values of $f(n,k,\l)$,
which were determined with the help of a computer. Thus, the first ``open'' case which might improve the
lower bound on $f(n,2,3)$ is $f(22,2,3)$.
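The smallest values in the first table can be reproduced by exhaustive search. The brute-force sketch below is exponential in $n$ (it tries every assignment of positions to the two subwords), so it is only feasible for very small $n$; it confirms the entries $f(6,2,2)=f(7,2,2)=2$.

```python
from itertools import product

def max_twins(S):
    """Largest m such that S contains two disjoint identical scattered
    subwords of length m (assign each position to A, B, or neither)."""
    best = 0
    for assign in product((0, 1, 2), repeat=len(S)):
        A = [c for c, a in zip(S, assign) if a == 1]
        B = [c for c, a in zip(S, assign) if a == 2]
        if A == B:
            best = max(best, len(A))
    return best

def f(n):
    """f(n, 2, {0,1}): minimum of max_twins over all binary words of length n."""
    return min(max_twins(format(x, "0{}b".format(n))) for x in range(2 ** n))

assert f(6) == 2 and f(7) == 2  # matches the first table
```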
\subsection{Improving the \texorpdfstring{$O\left(|\Sigma|\left(\frac{\log\log n}{\log n}\right)^{1/4}\right)n$}{o(n)} term}
We further remark that a more careful analysis (given below) of the increment argument in the proof of Theorem~\ref{lem:rl_seqs} leads to the bound $T_0\le t_0 3^{(-2\log \varepsilon)/\varepsilon ^3}$, which in turn improves the
bounds in Theorems~\ref{thm:twins} and~\ref{thm:k- tuplets-smallQ} to
\[
\left(1-C|\Sigma|\left(\frac{(\log\log n)^2}{\log n}\right)^{1/3}\right)n \leq kf(n, k, \Sigma).
\]
Recall that in the proof of Theorem~\ref{lem:rl_seqs} we set up an index and refined the corresponding partition,
increasing the index by at least $\varepsilon^4$ each time. Let us reconsider the $j$th refinement step, at which
the partition $\mathcal{S}=(S_1,\ldots,S_{t_0})$ is to be refined. Further recall that $I$ consists of the indices $i$ such that
$S_i$ is not $\varepsilon$-regular.
Let $\alpha_j$ be such that
\begin{equation}\label{eq:alpha}
\sum_{i\in I} |S_i|=\alpha_j n.
\end{equation}
In the original proof we iterate as long as $\alpha_j\ge \varepsilon$ holds, and in performing an iteration step we
merely use the fact that $\alpha_j\ge \varepsilon$, which leads to an increase of the index by at least $\varepsilon^4$ during that step.
Recall that $\mathrm{ind}(\mathcal{S})$ was defined as follows
\[
\mathrm{ind}(\mathcal{S})= \sum_{q\in \Sigma }\sum_{j\in[|\mathcal{S}|]} d_q(S_j)^2\frac{|S_j|}{n},
\]
and for each further refinement $\mathcal{S}'\preccurlyeq\mathcal{S}$ it holds:
\begin{equation}\label{eq:newbound_ind}
\mathrm{ind}(\mathcal{S})\le \mathrm{ind}(\mathcal{S}')=\frac{(1-\alpha_j)n}{n}\mathrm{ind}(\mathcal{S}_1)+\frac{\alpha_j n}{n}\mathrm{ind}(\mathcal{S}_2) \le \sum_{q\in \Sigma }\sum_{j\in[|\mathcal{S}|]\setminus I} d_q(S_j)^2\frac{|S_j|}{n}+\alpha_j,
\end{equation}
where $\mathcal{S}_1$ consists of the $\varepsilon$-regular words from $\mathcal{S}$ (these words are not partitioned/refined any further)
and $\mathcal{S}_2$ consists of the words from $\mathcal{S}$ that are not $\varepsilon$-regular (their lengths sum up to $\alpha_j n$).
Let $\l$ be the total number of iteration steps until we arrive at an $\varepsilon$-regular partition. Let
$\alpha_1$, \ldots, $\alpha_{\l}$ be the numbers, where $\alpha_j n$ is the sum over the lengths of not $\varepsilon$-regular
words in the partition at step $j$, $j\in[\l]$ (cf.~\eqref{eq:alpha}).
By the discussion above
\[
1\ge \alpha_1\ge \alpha_2\ge \ldots\ge \alpha_{\l}\ge \varepsilon.
\]
Next, we partition $(\varepsilon,1]$ into $\log_2 \tfrac{1}{\varepsilon}$ consecutive intervals $(y_{i+1},y_i]$ where $y_1=1$ and
$y_{i+1}=y_i/2$. We claim that
each interval $(y_{i+1},y_i]$ contains at most $\frac{2}{\varepsilon^3}$ $\alpha_j$s. Indeed, the increase of the index during step $j$ where $\alpha_j\in(y_{i+1},y_i]$ is at least
\[
\alpha_j \varepsilon^3>y_{i+1}\varepsilon^3.
\]
Further, let $j'$ be the smallest index such that $\alpha_{j'}\le y_i$ and $j''$ be the largest
index such that $\alpha_{j''}>y_{i+1}$. Let $\mathrm{ind}_j$ be the index before the $j$th refinement step.
Then by~\eqref{eq:newbound_ind} the following holds for $j'+1\le j\le j''$:
\[
\mathrm{ind}_{j'+1}\le \mathrm{ind}_{j}\le \mathrm{ind}_{j''}\le \mathrm{ind}_{j'+1}+y_i.
\]
This implies that the number of $\alpha_j$s in the interval $(y_{i+1},y_i]$ cannot be bigger than
\[
\frac{y_i}{y_{i+1}\varepsilon^3}=\frac{2}{\varepsilon^3}.
\]
Thus, we obtain the following upper bound on $\l$
\[
\l\le \frac{2\log_2 \tfrac{1}{\varepsilon}}{\varepsilon^3},
\]
which leads to $T_0\le t_0 3^{(-2\log \varepsilon)/\varepsilon ^3}$, $n_0=t_0\varepsilon^{-(2\log 1/\varepsilon)/\varepsilon^3}$ and thus we can regularize with
$\varepsilon=\left(\frac{(\log\log n)^2}{\log n}\right)^{1/3}$.
\section*{Acknowledgements} The authors would like to thank Sergey Avgustinovich for fruitful discussions.
% Source: arXiv:1204.2180 (https://arxiv.org/abs/1204.2180), 2012-04-11.
% Title: A regularity lemma and twins in words.
% Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM); Quantitative Methods (q-bio.QM).
% Source: arXiv:1810.07462 (https://arxiv.org/abs/1810.07462).
% Title: Halfway to Rota's basis conjecture.
\begin{abstract}
In 1989, Rota made the following conjecture. Given $n$ bases $B_{1},\dots,B_{n}$ in an $n$-dimensional vector space $V$, one can always find $n$ disjoint bases of $V$, each containing exactly one element from each $B_{i}$ (we call such bases transversal bases). Rota's basis conjecture remains wide open despite its apparent simplicity and the efforts of many researchers (for example, the conjecture was recently the subject of the collaborative ``Polymath'' project). In this paper we prove that one can always find $\left(1/2-o\left(1\right)\right)n$ disjoint transversal bases, improving on the previous best bound of $\Omega\left(n/\log n\right)$. Our results also apply to the more general setting of matroids.
\end{abstract}
\section{Introduction}
Given bases $B_{1},\dots,B_{n}$ in an $n$-dimensional vector space
$V$, a \emph{transversal basis }is a basis of $V$ containing a single
distinguished vector from each of $B_{1},\dots,B_{n}$. Two transversal
bases are said to be \emph{disjoint} if their distinguished vectors
from $B_{i}$ are distinct, for each $i$ (here ``distinguished'' means that two copies of the same vector appearing in two $B_i$s are considered distinct). In 1989, Rota conjectured
(see \cite[Conjecture~4]{HR94}) that for any vector space $V$ over
a characteristic-zero field, and any choice of $B_{1},\dots,B_{n}$,
one can always find $n$ pairwise disjoint transversal bases.
Despite the apparent simplicity of this conjecture, it remains wide
open, and has surprising connections to apparently unrelated subjects.
Specifically, it was discovered by Huang and Rota \cite{HR94} that
there are implications between Rota's basis conjecture, the Alon--Tarsi
conjecture \cite{AT92} concerning enumeration of even and odd Latin
squares, and a certain conjecture concerning the supersymmetric bracket
algebra.
Rota also observed that an analogous conjecture could be made in the
much more general setting of \emph{matroids}, which are objects that
abstract the combinatorial properties of linear independence in vector
spaces. Specifically, a finite matroid $M=\left(E,\mathcal{I}\right)$
consists of a finite ground set $E$ (whose elements may be thought
of as vectors in a vector space), and a collection $\mathcal{I}$
of subsets of $E$, called independent sets. The defining properties
of a matroid are that:
\begin{itemize}
\item{the empty set is independent (that is, $\emptyset\in\mathcal{I}$);}
\item{subsets of independent sets are independent (that is, if $A'\subseteq A\subseteq E$
and $A\in\mathcal{I}$, then $A'\in\mathcal{I}$);}
\item{if $A$ and $B$ are independent sets, and $\left|A\right|>\left|B\right|$,
then an independent set can be constructed by adding an element of
$A$ to $B$ (that is, there is $a\in A\backslash B$ such that $B\cup\left\{ a\right\} \in\mathcal{I}$).
This final property is called the \emph{augmentation property}.}
\end{itemize}
Observe that any finite set of elements in a vector space (over any
field) naturally gives rise to a matroid, though not all matroids
arise this way. A \emph{basis }in a matroid $M$ is a maximal independent
set. By the augmentation property, all bases have the same size, and
this common size is called the \emph{rank }of $M$. The definition
of a transversal basis generalises in the obvious way to matroids,
and the natural matroid generalisation of Rota's basis conjecture
is that for any rank-$n$ matroid and any bases $B_{1},\dots,B_{n}$,
there are $n$ disjoint transversal bases.
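For a concrete instance of these notions, the sketch below works over the vector-space matroid $\mathbb{F}_2^3$ with the trivial choice $B_1=B_2=B_3$ equal to the standard basis, where a cyclic, Latin-square-style pattern yields $n$ disjoint transversal bases. This only illustrates the definitions; it is not an argument from the paper.

```python
def rank(vectors):
    """Rank over GF(2); vectors are ints used as bit masks, reduced by
    Gaussian elimination keyed on the highest set bit."""
    pivots = {}
    r = 0
    for v in vectors:
        while v:
            h = v.bit_length() - 1
            if h in pivots:
                v ^= pivots[h]
            else:
                pivots[h] = v
                r += 1
                break
    return r

n = 3
e = [0b001, 0b010, 0b100]            # standard basis of F_2^3
B = [e[:] for _ in range(n)]          # B_1 = B_2 = B_3 = {e1, e2, e3}
# Latin-square pattern: transversal j picks e[(i + j) % n] from B_i
transversals = [[B[i][(i + j) % n] for i in range(n)] for j in range(n)]
for T in transversals:
    assert rank(T) == n               # each transversal is a basis
for i in range(n):
    # disjoint: the transversals pick n distinct vectors from each B_i
    assert len({T[i] for T in transversals}) == n
```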
Although Rota's basis conjecture remains open, various special cases
have been proved. Several of these have come from the connection between
Rota's basis conjecture and the Alon--Tarsi conjecture,
which has since been simplified by Onn \cite{Onn97}. Specifically,
due to work by Drisko \cite{Dri97} and Glynn \cite{Gly10} on the
Alon--Tarsi conjecture, Rota's original conjecture for vector
spaces over a characteristic-zero field is now known to be true whenever
the dimension $n$ is of the form $p\pm1$, for $p$ a prime. Wild
\cite{Wil94} proved Rota's basis conjecture for so-called ``strongly
base-orderable'' matroids, and used this to prove the conjecture
for certain classes of matroids arising from graphs. Geelen and Humphries
proved the conjecture for ``paving'' matroids \cite{GH06}, and
Cheung \cite{Che12} computationally proved that the conjecture holds
for matroids of rank at most 4.
Various authors have also proposed variations and weakenings of Rota's
basis conjecture. For example, Aharoni and Berger \cite{AB06} showed
that in any matroid one can cover the set of all the elements in $B_{1},\dots,B_{n}$
by at most $2n$ ``partial'' transversals, and Bollen and Draisma
\cite{BD15} considered an ``online'' version of Rota's basis conjecture,
where the bases $B_{i}$ are revealed one-by-one. In 2017, Rota's
basis conjecture received renewed interest when it was chosen as the
twelfth ``Polymath'' project, in which amateur and professional
mathematicians from around the world collaborated on the problem.
Some of the fruits of the project were a small improvement to Aharoni
and Berger's theorem, and improved understanding of the online version
of Rota's basis conjecture \cite{Pol17}. See \cite{polyproposal}
for Timothy Chow's proposal of the project, see \cite{poly1,poly2,poly3}
for blog posts where much of the discussion took place, and see \cite{polywiki}
for the Polymath wiki summarising most of what is known about Rota's
basis conjecture.
One particularly natural direction to attack Rota's problem is to
try to find lower bounds on the number of disjoint transversal bases. Rota's basis conjecture
asks for $n$ disjoint transversal bases, but it is not completely
obvious that even two disjoint transversal bases must exist! Wild
\cite{Wil94} proved some lower bounds for certain matroids arising
from graphs, but the first nontrivial bound for general matroids was
by Geelen and Webb \cite{GW07}, who used a generalisation of Hall's
theorem due to Rado \cite{Rad42} to prove that there must be $\Omega\left(\sqrt{n}\right)$
disjoint transversal bases. Recently, this was improved by Dong and
Geelen \cite{DG18}, who used a beautiful probabilistic argument to
prove the existence of $\Omega\left(n/\log n\right)$ disjoint transversal
bases. In this paper we improve this substantially and obtain the
first linear bound.
\begin{thm}
\label{thm:new}For any $\varepsilon>0$, the following holds for
sufficiently large $n$. Given bases $B_{1},\dots,B_{n}$ of a rank-$n$
matroid, there are at least $\left(1/2-\varepsilon\right)n$ disjoint
transversal bases.
\end{thm}
Of course, since matroids generalise vector spaces, this also implies
the same result for bases in an $n$-dimensional vector space. We
also remark that for the weaker fact that there exist $\Omega\left(n\right)$
disjoint transversal bases, our methods give a simpler proof;
see \cref{rem:linear}.
In contrast to the previous work by Dong, Geelen and Webb, our approach
is to show how to build a collection of transversal bases in an iterative
fashion (reminiscent of augmenting path arguments in matching problems).
It is tempting to imagine a future path to Rota's basis conjecture
(at least in the case of vector spaces) using such an approach: by
improving on our arguments, perhaps introducing some randomness, it
might be possible to iteratively build a collection of $\left(1-o\left(1\right)\right)n$
transversal bases, and then it might be possible to use some sort
of ``template'' or ``absorber'' structure to finish the job. This
was precisely the approach taken in Keevash's celebrated proof of
the existence of designs \cite{Kee14}. Actually, it has been observed
by participants of the Polymath project (see \cite{poly1}) that Rota's
basis conjecture and the existence of designs conjecture both seem
to fall into a common category of problems which are not quite ``structured''
enough for purely algebraic methods, but too structured for probabilistic
methods.
\vspace{0.30cm}
\noindent{\bf Notation.} We will frequently want to denote the result of adding and removing single elements from a set. For a set $S$ and some $x\notin S$, $y\in S$, we write $S+x$ to mean $S\cup \{x\}$, and we write $S-y$ to mean $S\setminus \{y\}$.
\section{Finding many disjoint transversal bases}
In this section we prove \cref{thm:new}. It is convenient to think
of $B_{1},\dots,B_{n}$ as ``colour classes''.
\begin{defn}
Let $U=\left\{ \left(x,c\right):x\in B_{c},1\le c\le n\right\} $ be the set of
all coloured elements that appear in one of $B_{1},\dots,B_{n}$.
For $S\subseteq U$, let $\pi\left(S\right)=\left\{ x:\left(x,c\right)\in S\;\text{for some }c\right\} $
be its set of matroid elements. We say that a subset of elements of $U$
is a \emph{rainbow independent set} (RIS for short) if all its matroid
elements are distinct and form an independent set, and all their colours
are distinct.
\end{defn}
Note that an RIS with size $n$ corresponds to a transversal basis.
We remark that RISs are sometimes also known as \emph{partial transversals}.
Note that two transversal bases are disjoint if and only if their
corresponding RISs are disjoint as subsets of $U$.
Let $f=\left(1-\varepsilon\right)n/2$. The basic idea is to start
with a collection of $f$ empty RISs (which are trivially disjoint),
and iteratively enlarge the RISs in this collection, maintaining disjointness,
until we have many disjoint transversal bases.
Let $\mathcal{S}$ be a collection of $f$ disjoint RISs. We define the \emph{volume
}$\sum_{S\in\mathcal{S}}\left|S\right|$ of $\mathcal{S}$ to be the total number of
elements in the RISs in $\mathcal{S}$. We will show how to modify $\mathcal{S}$ to
increase its volume. We let $F=\bigcup_{S \in \mathcal{S}} S$ be the set of all currently used elements. One should think of $F$ as being the set of all elements which we cannot add to any $S\in\mathcal{S}$
without violating the disjointness of RISs in $\mathcal{S}$.
We stress that in the following two subsections we fix a collection $\mathcal{S}$ and define $F$ as above. All our definitions and claims are with respect to these $F$ and $\mathcal{S}$. We will show that under certain conditions the size of $\mathcal{S}$ can be increased, at which point one needs to restart the argument from the beginning with a new $\mathcal{S}$ (and a new $F$). This is made precise in \Cref{subsec:increasing}.
\begin{rem*}
We remark that it is actually possible to reduce to the case where each $B_c$ is disjoint, by making duplicate copies of all elements that appear in multiple $B_c$. So, instead of working with the universe $U$ of element/colour pairs, one can alternatively think of $U$ as being a collection of $n^2$ different matroid elements (each of which has a colour associated with it).
\end{rem*}
\subsection{\label{subsec:simple-swaps}Simple swaps}
Our objective is to increase the volume of $\mathcal{S}$. If an RIS $S\in\mathcal{S}$
is missing a colour $c$ and there is $x\in B_{c}$ independent to
the elements of $S$, such that $\left(x,c\right) \notin F$, then we can add $\left(x,c\right)$ to $S$
to create a larger RIS, increasing the volume of $\mathcal{S}$. We will want
much more freedom than this: we also want to consider those elements
that can be added to $S$ after making a small change to $S$. This
motivates the following definition.
\begin{defn}
\label{def:addable}Consider an RIS $S$ and a colour $b$ that does
not appear in $S$. Say an element $\left(x,c\right)\in U$ (possibly $(x,c)\in F$) is $\left(S,b\right)$-\emph{addable
}if either
\begin{itemize}
\item $S+\left(x,c\right)$ is an RIS; or
\item There is $\left(x',c\right)\in S$ and $\left(y,b\right)\notin F$
such that $S-\left(x',c\right)+\left(y,b\right)+\left(x,c\right)$
is an RIS.
\end{itemize}
In the second case we say that $y$ is a \emph{witness} for the $\left(S,b\right)$-addability of $\left(x,c\right)$. For $(x',c) \in S$ and $(y,b) \notin F$ such that $S-\left(x',c\right)+\left(y,b\right)$ is an RIS, we say that this RIS
is the result of applying a \emph{simple swap} to $S$.
\end{defn}
If for some RIS $S\in\mathcal{S}$ missing a colour $b$ there is an $\left(S,b\right)$-addable
element $\left(x,c\right)\notin F$,
then we can increase the volume of $\mathcal{S}$ by adding $\left(x,c\right)$
to $S$, possibly after applying a simple swap to $S$. Note that we do not require $S\in \mathcal{S}$ for the definition of $(S,b)$-addability, though in practice we will only ever consider $S$ that are either in $\mathcal{S}$ or slight modifications of RISs in $\mathcal{S}$.
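As a toy illustration of \cref{def:addable} (not used later, and with the standing conventions on $n$ relaxed), consider the rank-$2$ matroid of vectors in $\mathbb{R}^{2}$ with colours $a,b$, where $B_{a}=\left\{ \left(1,0\right),\left(2,0\right)\right\}$ and $B_{b}=\left\{ \left(0,1\right)\right\}$. Take $\mathcal{S}=\left\{ S\right\}$ with $S=\left\{ \left(\left(1,0\right),a\right)\right\}$, so $F=S$, and note that $S$ is missing the colour $b$. The element $\left(\left(2,0\right),a\right)$ cannot be added to $S$ directly, since the colour $a$ already appears in $S$ (and $\left(2,0\right)$ is parallel to $\left(1,0\right)$). Nevertheless it is $\left(S,b\right)$-addable with witness $\left(0,1\right)$: taking $\left(x',c\right)=\left(\left(1,0\right),a\right)$ and $\left(y,b\right)=\left(\left(0,1\right),b\right)\notin F$, the set
\[
S-\left(\left(1,0\right),a\right)+\left(\left(0,1\right),b\right)+\left(\left(2,0\right),a\right)=\left\{ \left(\left(0,1\right),b\right),\left(\left(2,0\right),a\right)\right\}
\]
is an RIS, as its colours are distinct and $\left(0,1\right),\left(2,0\right)$ are linearly independent.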
Our next objective is to show that for any $S$ missing a colour $b$,
either there is an $\left(S,b\right)$-addable element that is not
in $F$ (which would allow us to increase the volume of $\mathcal{S}$, as
above), or else there are \emph{many }$\left(S,b\right)$-addable
elements (which must therefore be in $F$). Although this will not
allow us to immediately increase the volume of $\mathcal{S}$, it will allow
us to transfer an element to $S$ from some other $S'\in\mathcal{S}$, and
this freedom to perform local modifications will be very useful.
Towards this end, we study which elements of $S$ can be used in a
simple swap.
\begin{defn}
Consider an RIS $S$ and consider a colour $b$ that does not appear
on $S$. We say that a colour $c$ appearing on $S$ is \emph{$\left(S,b\right)$-swappable}
if there is a simple swap yielding an RIS $S+\left(y,b\right)-\left(x',c\right)$,
with $\left(y,b\right)\notin F$ and $\left(x',c\right)\in S$. (For $S+\left(y,b\right)-\left(x',c\right)$ to be an RIS, we just need $\pi(S)+y-x'$ to be an independent set in our matroid.) We say that $y$ is a witness for
the $\left(S,b\right)$-swappability of $c$.
\end{defn}
(Basically, a colour is $\left(S,b\right)$-swappable if
we can replace it with a $b$-coloured element which is not in
$F$.) For a colour $c$, we denote by $F_{c}=\left\{ x\in B_{c}:\left(x,c\right)\in F\right\}$ the set of matroid elements which appear in $\mathcal{S}$ with colour $c$.
\begin{claim}
\label{claim:many-good}For a nonempty RIS $S$ and a colour $b$ not
appearing in $S$, either there is an $\left(S,b\right)$-addable element $(y,b)\notin F$ or there are at least $n-\left|F_{b}\right|$ colours
which are $\left(S,b\right)$-swappable.
\end{claim}
\begin{proof}
For the purpose of contradiction, suppose that there is no $\left(S,b\right)$-addable element $(y,b)\notin F$, and that there are fewer than $n-\left|F_{b}\right|$ colours which are $\left(S,b\right)$-swappable. Let $S'\subseteq S$ be the set of all elements of
$S$ which have an $\left(S,b\right)$-swappable colour, so $\left|S'\right|<n-\left|F_{b}\right|$.
Moreover, $\left|S'\right|<\left|S\right|$: otherwise $S'=S$, so $|S|<n-\left|F_{b}\right|$, and by the augmentation property there would be $y \in B_b \setminus F_b$ such that $S+(y,b)$ is an RIS (meaning that $(y,b)\notin F$ would be $\left(S,b\right)$-addable), contradicting our assumption. Repeating this argument with $S'$ in place of $S$, there is $y\in B_{b}\setminus F_{b}$
such that $S'+\left(y,b\right)$ is an RIS. By repeatedly using the augmentation
property, we can add $\left|S-S'\right|-1$ elements of $S-S'$ to
$S'+\left(y,b\right)$. This gives an RIS of size $|S|$ of the form $S+\left(y,b\right)-\left(x',c\right)$
for some $\left(x',c\right)\in S-S'$. But this means $c$ is $\left(S,b\right)$-swappable, so $(x',c) \in S'$ by the definition of $S'$. This is a contradiction.
\end{proof}
Now we show that all elements of an $\left(S,b\right)$-swappable
colour which are independent of $\pi\left(S\right)$ are $\left(S,b\right)$-addable,
unless there is an \emph{$\left(S,b\right)$}-addable element not
in $F$. (Recall that $\pi\left(S\right)$ is the set of matroid elements
in $S$, without colour data.)
\begin{claim}
\label{claim:add-if-good}Consider an RIS $S$ with no element of
a colour $b$ and consider a colour
$c$ that is $\left(S,b\right)$-swappable with witness $y$. Either
$S+\left(y,b\right)$ is an RIS (thus, $\left(y,b\right)\notin F$
is $\left(S,b\right)$-addable), or otherwise for any $x\in B_{c}$
independent of $\pi\left(S\right)$, $\left(x,c\right)$ is $\left(S,b\right)$-addable with witness $(y,b)$.
\end{claim}
\begin{proof}
Let $\left(x',c\right)$ be the element with colour $c$ in $S$.
Consider some $x\in B_{c}$ independent of $\pi\left(S\right)$. Let $I=\pi\left(S\right)+x$ and $J=\pi\left(S\right)+y-x'$. By the augmentation property, there is an element of $I\backslash J$ that is independent of $J$; this element is either $x'$ or $x$. In the former case $S+\left(y,b\right)$
is an RIS. In the latter case, $S+\left(y,b\right)-\left(x',c\right)+\left(x,c\right)$ is
an RIS, showing that $\left(x,c\right)$ is $\left(S,b\right)$-addable.
\end{proof}
The following claim gives a good illustration of how to use the ideas developed in this section to find many addable elements. It will be very useful later on.
\begin{claim}\label{claim:1-addability}
Let $S\in\mathcal{S}$ and let $b$ be a colour which does not appear in $S$. Then either we can increase the volume of $\mathcal{S}$ or there are at least $(n-|S|)\left(n-f\right)$ elements that are $\left(S,b\right)$-addable.
\end{claim}
\begin{proof}
If there is an element $\left(y,b\right)\notin F$ which is $\left(S,b\right)$-addable,
then we can directly add this element to $S$ (making a simple swap if necessary), increasing the volume
of $\mathcal{S}$. Otherwise, observe that $\left|F_{b}\right|\le\left|\mathcal{S}\right|=f$,
so by \cref{claim:many-good} there are at least $n-f$ colours that
are $\left(S,b\right)$-swappable. For
each such colour $c$, by the augmentation property, there are at least $n-|S|$ elements $x\in B_{c}$
independent of all the elements of $S$, each of which is $\left(S,b\right)$-addable
by \cref{claim:add-if-good}. That is to say, there are at least $(n-|S|)\left(n-f\right)$
elements which are $\left(S,b\right)$-addable, as claimed.
\end{proof}
In our proof of \cref{thm:new} we also make use of the following lemma. In the course of our arguments, when we need to find many addable elements with a given colour, it will allow us to ensure that these elements are actually distinct.
\begin{lem}
\label{lem:matching}Let $S$ be an RIS. Then for each $B_{b}$, we
can find an injection $\phi_{b}:S\to B_{b}$ such that for all
$\left(x,c\right)\in S$, $\phi_{b}\left(\left(x,c\right)\right)$
is independent of $\pi\left(S-\left(x,c\right)\right)$.
\end{lem}
\begin{proof}
Consider the bipartite graph $G$ where the first part consists of the elements of $S$ and
the second part consists of the elements of $B_{b}$, with an edge
between $\left(x,c\right)\in S$ and $y\in B_{b}$ if $y$ is independent
of $\pi\left(S-\left(x,c\right)\right)$. We use Hall's theorem
to show that there is a matching in this bipartite graph covering $S$. Indeed,
consider some $W\subseteq S$. By the augmentation property, there
are at least $\left|W\right|$ elements $y\in B_{b}$ such that $\pi\left(S-W\right)+y$
is an independent set, and again using the augmentation property,
each of these can be extended to an independent set of the form $\pi\left(S\right)+y-x$
for some $\left(x,c\right)\in W$. That is to say, $W$ has at least
$\left|W\right|$ neighbours in $G$.
\end{proof}
We thank the anonymous referees for pointing out that \cref{lem:matching} also follows from a result due to Brualdi \cite{brualdi69}.
\subsection{Cascading swaps}
Informally speaking, for any $S_{0}\in\mathcal{S}$ which is not a transversal
basis, we have shown that either we can directly augment $S_{0}$,
or there are many elements $\left(x_{1},c_{1}\right)\in U$ with which
we can augment $S_{0}$ after performing a simple swap. It is possible
that each such $\left(x_{1},c_{1}\right)$ already appears in some
other $S_{1}\in\mathcal{S}$, but if this occurs we need not give up: we can
transfer $\left(x_{1},c_{1}\right)$ from $S_{1}$ to $S_{0}$ and
then continue to look for elements $\left(x_{2},c_{2}\right)\in U$
with which we can augment $S_{1}-\left(x_{1},c_{1}\right)$ (again,
possibly with a swap). We can iterate this idea, looking for sequences
\[
S_{1},\dots,S_{\ell}\in\mathcal{S},\quad\left(x_{1},c_{1}\right)\in S_{1},\,\left(x_{2},c_{2}\right)\in S_{2},\dots,\left(x_{\ell},c_{\ell}\right)\in S_{\ell},\,\left(x_{\ell+1},c_{\ell+1}\right)\notin\bigcup_{S\in\mathcal{S}}S
\]
such that, after a sequence of simple swaps, each $\left(x_{i},c_{i}\right)$
is transferred from $S_{i}$ to $S_{i-1}$, and then $\left(x_{\ell+1},c_{\ell+1}\right)$
can be added to $S_{\ell}$. (We also need to ensure that the simple swaps we perform preserve disjointness of RISs in $\mathcal{S}$.) This transformation has the net effect
of adding an element to $S_{0}$ and keeping the size of all other
$S\in\mathcal{S}$ constant, thus increasing the volume of $\mathcal{S}$.
Crucially, because of the freedom afforded by simple swaps, each time
we expand our search to consider longer cascades, our number of options
for $\left(x_{\ell+1},c_{\ell+1}\right)$ increases. For sufficiently
large $\ell$, the number of options will be so great that there must
be suitable $\left(x_{\ell+1},c_{\ell+1}\right)$ not appearing in
any RIS in $\mathcal{S}$. In order to keep this analysis tractable, we will
only consider transformations that cascade along a single sequence
of RISs $S_0,\dots,S_{\ell}$; we will iteratively construct this
sequence of RISs in such a way that there are many possibilities $\left(x_{i},c_{i}\right)\in S_{i}$
relative to the number of possibilities $\left(x_{i-1},c_{i-1}\right)\in S_{i-1}$
in the previous step. The next definition makes precise the cascades
that we consider.
\begin{defn}\label{Defn_Cascade}
Consider a sequence of distinct RISs $S_{0},\dots,S_{\ell-1}\in\mathcal{S}$.
Say an element $\left(x_{\ell},c_{\ell}\right)\notin S_{0},\dots,S_{\ell-1}$
is \emph{cascade-addable with respect to} $S_{0},\dots,S_{\ell-1}$
if there is a colour $c_{0}$ and sequences
\[
\left(x_{1},c_{1}\right),\dots,\left(x_{\ell-1},c_{\ell-1}\right)\in U,\qquad y_{0}\in B_{c_{0}},\dots,y_{\ell-1}\in B_{c_{\ell-1}},
\]
such that the following hold.
\begin{itemize}
\item For each $1\le i\le\ell-1$, we have $\left(x_{i},c_{i}\right)\in S_{i}$;
\item $c_{0}$ does not appear in $S_{0}$, and $\left(x_{1},c_{1}\right)$
is $\left(S_{0},c_{0}\right)$-addable with
witness $y_{0}$;
\item for each $0\le i\le\ell-1$, $\left(x_{i+1},c_{i+1}\right)$ is $\left(S_{i}-\left(x_{i},c_{i}\right),c_{i}\right)$-addable
with witness $y_{i}$;
\item the colours $c_0,\dots,c_\ell$ are distinct.
\end{itemize}
We call $c_0, c_1, \dots, c_{\ell-1}$ \emph{a sequence of colours freeing $(x_{\ell}, c_{\ell})$}.
We write $Q\left(S_{0},\dots,S_{\ell-1}\right)$ for the set of all
elements outside $S_{0},\dots,S_{\ell-1}$ which are cascade-addable
with respect to $S_{0},\dots,S_{\ell-1}$.
\end{defn}
We remark that if $\ell=1$ then most of the conditions in the above definition become vacuous, and an element being cascade-addable with respect to $S_0$ is equivalent to it being $(S_0, c_0)$-addable with a witness, for some colour $c_0$ that does not appear in $S_0$ (and differs from the element's own colour).
Observe
that if an element $\left(x_{\ell},c_{\ell}\right)$ is cascade-addable
then we can transfer it into $S_{\ell-1}$, as the final step in a
cascading sequence of simple swaps and transfers. The following claim makes this precise.
\begin{claim}\label{Claim_Perform_Cascade}
Suppose that $\left(x_{\ell},c_{\ell}\right)$
is {cascade-addable }with respect to $S_{0},\dots,S_{\ell-1}$ and $c_0, c_1, \dots, c_{\ell-1}$ is a sequence of colours freeing $(x_{\ell}, c_{\ell})$.
Then there are $S'_0,\dots,S'_{\ell-1} \subseteq S_0\cup \dots \cup S_{\ell-1}\cup B_{c_0}\cup \dots \cup B_{c_{\ell-1}}$ such that replacing $S_0, \dots, S_{\ell-1}$ with $S'_0, \dots, S'_{\ell-1}$ in $\mathcal S$ results in a family $\mathcal S'$ of disjoint RISs of the same total volume as $\mathcal S$, in such a way that $S'_{\ell-1}+\left(x_{\ell},c_{\ell}\right)$ is an RIS.
\end{claim}
\begin{proof}
Let $\left(x_{1},c_{1}\right),\dots,\left(x_{\ell-1},c_{\ell-1}\right)\in U, y_{0}\in B_{c_{0}},\dots,y_{\ell-1}\in B_{c_{\ell-1}}$ be as in the definition of cascade-addability. For each $i=0, \dots, \ell-1$, let $(x'_i, c_{i+1})$ be the colour $c_{i+1}$ element of $S_i$ (which exists, because, from cascade-addability, $\left(x_{i+1},c_{i+1}\right)$ is $\left(S_{i}-\left(x_{i},c_{i}\right),c_{i}\right)$-addable
\emph{with a witness}). For each $i=1, \dots, \ell-2$, let $S_i'= S_i-(x_i,c_i)- (x'_i, c_{i+1})+ (y_i,c_i) + (x_{i+1}, c_{i+1})$. Let $S_0'=S_0- (x'_0, c_{1}) +(y_0,c_0) + (x_{1}, c_{1})$ and $S_{\ell-1}'= S_{\ell-1}-(x_{\ell-1},c_{\ell-1})- (x'_{\ell-1}, c_{\ell})+ (y_{\ell-1},c_{\ell-1})$.
Let $\mathcal S'$ be the family formed by replacing $S_0, \dots, S_{\ell-1}$ with $S'_0, \dots, S'_{\ell-1}$ in $\mathcal S$. It is easy to check that $\mathcal S'$ has the same total volume as $\mathcal S$, so it remains to check that it is a family of disjoint RISs.
For $i=1, \dots, \ell-2$, $S_i'$ is an RIS because it comes from $S_i-(x_i,c_i)$ by making the change in the definition of $(x_{i+1}, c_{i+1})$ being $(S_i-(x_i,c_i), c_i)$-addable with witness $y_i$ (and addability always produces an RIS by definition).
Similarly $S_{\ell-1}'+\left(x_{\ell},c_{\ell}\right)$ is an RIS.
To see that $S_0'$ is an RIS we use that $(x_1, c_1)$ is $(S_0,c_0)$-addable with witness $y_0$, and that $c_0$ does not appear in $S_0$, both of which come from the definition of cascade-addability.
It remains to show that the RISs $S_0', \dots, S_{\ell-1}'$ are disjoint from each other and the other RISs in $\mathcal S$. The elements $(y_i,c_i)$ occur in only one RIS $S_i'$ because they come from outside $F$ (since they are addability witnesses), and because their colours $c_0, \dots, c_{\ell-1}$ are distinct (from the definition of cascade-addability). The elements $(x_i, c_i)$ occur in only one RIS because they get removed from $S_{i}$ and added to $S_{i-1}$.
\end{proof}
The following claim lets us build longer cascades.
\begin{claim}\label{Claim_Concatenate_cascade}
Suppose that $\left(x_{\ell},c_{\ell}\right)\in S_{\ell}$
is {cascade-addable }with respect to $S_{0},\dots,S_{\ell-1}$ and $c_0, c_1, \dots, c_{\ell-1}$ is a sequence of colours freeing $(x_{\ell}, c_{\ell})$. If $(x,c)$ is $(S_{\ell}-(x_{\ell}, c_{\ell}), c_{\ell})$-addable with a witness then either $(x,c)\in S_{0}\cup \dots\cup S_{\ell}\cup B_{c_0}\cup \dots\cup B_{c_{\ell}}$ or $(x,c)$ is {cascade-addable} with respect to $S_{0},\dots,S_{\ell}$.
\end{claim}
\begin{proof}
Suppose that $(x,c)\not \in S_{0}\cup \dots\cup S_{\ell}\cup B_{c_0}\cup \dots\cup B_{c_{\ell}}$.
For the definition of $(x,c)$ being {cascade-addable}, all the conditions not involving $(x,c)$ and $(x_{\ell},c_{\ell})$ hold as a consequence of $\left(x_{\ell},c_{\ell}\right)\in S_{\ell}$
being {cascade-addable }with respect to $S_{0},\dots,S_{\ell-1}$.
It remains to check the conditions that $(x,c)\not\in S_{0}\cup\dots\cup S_{\ell}$ and that
the colours $c_0,\dots,c_\ell,c$ are pairwise distinct, both of which hold as a consequence of our assumption $(x,c)\not \in S_{0}\cup \dots\cup S_{\ell}\cup B_{c_0}\cup \dots\cup B_{c_{\ell}}$.
\end{proof}
In the next claim, we essentially show that given $S_{0},\dots,S_{\ell-1}$,
it is possible to choose $S_{\ell}$ in such a way that the number
of cascade-addable elements increases.
\begin{claim}
\label{claim:cascade-increase}Consider a sequence of distinct RISs
$S_{0},\dots,S_{\ell-1}\in\mathcal{S}$ with $1 \le \ell<f=\left|\mathcal{S}\right|$.
Then either we can modify $\mathcal{S}$ to increase its volume, or we can
choose $S_{\ell}\ne S_{0},\dots,S_{\ell-1}$ from $\mathcal{S}$ such that
\begin{equation}
\left|Q\left(S_{0},\dots,S_{\ell}\right)\right|\ge\frac{\left|Q\left(S_{0},\dots,S_{\ell-1}\right)\right|}{f-\ell}\cdot\left(n-f-\ell\right)-(\ell+1) n.\label{eq:recurrence}
\end{equation}
\end{claim}
\begin{proof}
If $Q\left(S_{0},\dots,S_{\ell-1}\right)$ contains an element $(x,c)$ not
in any $S\in\mathcal{S}$, then we can increase the volume of $\mathcal{S}$ with a
cascading sequence of simple swaps and transfers: apply \cref{Claim_Perform_Cascade} with $\left(x_{\ell},c_{\ell}\right)=(x,c)$, and note that since $(x,c)$ appears in no RIS of $\mathcal{S}$ we have $(x,c)\not\in F$, so we can add $(x,c)$ to $S'_{\ell-1}$ in that claim to get a larger family of RISs.
Otherwise, all the elements of $Q\left(S_{0},\dots,S_{\ell-1}\right)$ belong to some RIS $S\in \mathcal{S}\setminus\left\{ S_{0},\dots, S_{\ell-1}\right\}$ (since $Q\left(S_{0},\dots,S_{\ell-1}\right)$ is defined not to contain any elements from $S_{0},\dots,S_{\ell-1}$).
Choose $S_{\ell}\in\mathcal{S}\setminus\left\{ S_{0},\dots, S_{\ell-1}\right\}$ containing maximally many elements of $Q\left(S_{0},\dots,S_{\ell-1}\right)$. Since the $f-\ell$ RISs $S\in \mathcal{S}\setminus\left\{ S_{0},\dots, S_{\ell-1}\right\}$ collectively contain all elements of $Q\left(S_{0},\dots,S_{\ell-1}\right)$, our chosen RIS $S_\ell$ must contain a proportion of at least $1/(f-\ell)$ of the elements of $Q\left(S_{0},\dots,S_{\ell-1}\right)$. In other words, if we let $Q=S_{\ell}\cap Q\left(S_{0},\dots,S_{\ell-1}\right)$, we have
\begin{equation}\label{eq:QBound}
\left|Q\right|\ge\frac{\left|Q\left(S_{0},\dots,S_{\ell-1}\right)\right|}{f-\ell}.
\end{equation}
Apply \cref{lem:matching} to $S_{\ell}$ to obtain an injection $\phi_{b}$, for every colour $b$.
Fix some $(x_{\ell},c_{\ell})\in Q$ and a sequence of colours $c_0, \dots, c_{\ell-1}$ freeing $(x_{\ell},c_{\ell})$. We prove a sequence of claims about how many elements are swappable/addable with respect to $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$, assuming we cannot increase the volume of $\mathcal{S}$.
\begin{claim*}
{There are at least $n-f$ colours
which are $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-swappable.}
\end{claim*}
\begin{proof}
By \cref{claim:many-good}, either there is an $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-addable element $(y,c_\ell)\not\in F$, or there are at least $n-\left|F_{c_{\ell}}\right|\ge n-f$ colours
which are $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-swappable. In the former case, we can increase the volume of $\mathcal{S}$, by a cascading sequence of swaps and transfers (first consider $\mathcal S'$ from \cref{Claim_Perform_Cascade}, then move $(x_\ell,c_\ell)$ from $S_{\ell}$ to $S_{\ell-1}'$, then add $(y,c_\ell)$ to $S_\ell-(x_\ell,c_\ell)$).
\end{proof}
\begin{claim*}
{There are at least $n-f$ colours $c$ for which $\left(\phi_{c}\left(\left(x_{\ell},c_{\ell}\right)\right),c\right)$
is $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-addable.}
\end{claim*}
\begin{proof}
Let $c$ be a colour which is $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-swappable with witness $y$, as in the previous claim. If $y$ is independent
to $\pi\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right)\right)$, we
can increase the volume of $\mathcal{S}$ by adding it to $S_{\ell}$ after
a cascading sequence of swaps and transfers (first consider $\mathcal S'$ from \cref{Claim_Perform_Cascade}, then move $(x_\ell,c_\ell)$ from $S_{\ell}$ to $S_{\ell-1}'$, then add $(y,c_\ell)$ to $S_\ell-(x_\ell,c_\ell)$).
Otherwise, by \cref{claim:add-if-good} applied with $b=c_{\ell}$ and $S=S_{\ell}-\left(x_{\ell},c_{\ell}\right)$, the element
$\left(\phi_{c}\left(\left(x_{\ell},c_{\ell}\right)\right),c\right)$
is $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-addable. Here we are using that $\phi_{c}\left(\left(x_{\ell},c_{\ell}\right)\right)$
is independent of $\pi\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right)\right)$ (which comes from the definition of $\phi_{c}$ in \cref{lem:matching}).
\end{proof}
\begin{claim*}
{There are at least $n-f-\ell$ colours $c\not\in\{c_0, \dots, c_{\ell-1}\}$ for which $\left(\phi_{c}\left(\left(x_{\ell},c_{\ell}\right)\right),c\right)$
is $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-addable.}
\end{claim*}
\begin{proof}
This follows from the previous claim, together with the fact that the only requirement on $c$, besides addability, is that it differs from the $\ell$ colours in $\{c_0, \dots, c_{\ell-1}\}$.
\end{proof}
We now prove the following:
\begin{equation}\label{eq:QBound2}
|Q\left(S_{0},\dots,S_{\ell}\right)|\geq \left|Q\right|\left(n-\ell-f\right)-(\ell+1) n.
\end{equation}
From the last claim, applied to each of the $|Q|$ elements $(x_{\ell},c_{\ell})\in Q$, we obtain $\left|Q\right|\left(n-\ell-f\right)$ elements of the form $\left(\phi_{c}\left(\left(x_{\ell},c_{\ell}\right)\right),c\right)$ which are all $\left(S_{\ell}-\left(x_{\ell},c_{\ell}\right),c_{\ell}\right)$-addable, with $c$ avoiding the chosen sequence of colours freeing $(x_{\ell}, c_{\ell})$. Notice that these elements are all distinct: for a fixed colour $c$ this holds because $\phi_c$ is an injection, and elements with different colours $c$ are automatically distinct.
By \cref{Claim_Concatenate_cascade}, each of these is cascade-addable with respect to $S_{0},\dots,S_{\ell}$, unless it appears in one of $S_{0},\dots,S_{\ell}$.
The total number of elements in $S_{0},\dots,S_{\ell}$ is at most $(\ell+1) n$, so we have found $\left|Q\right|\left(n-\ell-f\right)-(\ell+1) n$ cascade-addable elements with respect to $S_{0},\dots,S_{\ell}$, as required by \cref{eq:QBound2}.
The claim immediately follows by combining \cref{eq:QBound} and \cref{eq:QBound2}.
\end{proof}
Now, we want to iteratively apply \cref{claim:cascade-increase} starting
from some $S_{0}\in\mathcal{S}$, to obtain a sequence $S_{0},S_{1},\dots,S_{h}\in\mathcal{S}$.
There are two ways this process can stop: either we find a way to
increase the volume of $\mathcal{S}$, in which case we are done, or else we
run out of RISs in $\mathcal{S}$ (that is, $h=f-1$). We want to show that
this latter possibility cannot occur by deducing from \cref{eq:recurrence}
that the $\left|Q\left(S_{0},\dots,S_{\ell}\right)\right|$ increase
in size at an exponential rate: after logarithmically many steps there
will be so many cascade-addable elements that they cannot all be contained
in the RISs in $\mathcal{S}$, and it must be possible to increase the volume
of $\mathcal{S}$.
A slight snag with this plan is that \cref{eq:recurrence} only yields
an exponentially growing recurrence if the ``initial term'' is rather
large. To be precise, let $C$ (depending on $\varepsilon$) be sufficiently
large such that
\begin{equation}
C\left(1+\varepsilon/2\right)^{\ell-1}\frac{1}{1-\varepsilon}-\ell-1\ge C\left(1+\varepsilon/2\right)^{\ell}\label{eq:C}
\end{equation}
for all $\ell\ge1$.
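For concreteness, here is one way to verify that such a $C$ exists (the specific constants below are not needed elsewhere). Moving the term $C\left(1+\varepsilon/2\right)^{\ell}$ to the left-hand side of \cref{eq:C} and factoring out $C\left(1+\varepsilon/2\right)^{\ell-1}$, and then using that $1/\left(1-\varepsilon\right)\ge1+\varepsilon$ for $0<\varepsilon<1$, we see that \cref{eq:C} is implied by
\[
C\left(1+\varepsilon/2\right)^{\ell-1}\cdot\frac{\varepsilon}{2}\ge\ell+1.
\]
Since $\left(1+\varepsilon/2\right)^{\ell-1}$ grows exponentially in $\ell$ while $\ell+1$ grows only linearly, it therefore suffices to take
\[
C\ge\max_{\ell\ge1}\frac{2\left(\ell+1\right)}{\varepsilon\left(1+\varepsilon/2\right)^{\ell-1}},
\]
and this maximum is finite.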
\begin{claim}
\label{claim:recurrence-estimate}For $S_{0},\dots,S_{h}$ as above,
suppose that $\left|Q\left(S_{0}\right)\right|\ge Cn$ or $\left|Q\left(S_{0},S_{1}\right)\right|\ge Cn$.
Then, for $0<\ell\le\min\left\{ h,\varepsilon n/2\right\} $, we have
\[
\left|Q\left(S_{0},\dots,S_{\ell}\right)\right|\ge C\left(1+\varepsilon/2\right)^{\ell-1}n.
\]
\end{claim}
\begin{proof}
We first establish a technical inequality. Recall that $f=(1-\varepsilon)n/2,$ so
\begin{equation}
\frac{n-f-\ell}{f-\ell} \ge \frac{n-(1-\varepsilon)n/2-n\varepsilon/2}{(1-\varepsilon)n/2}= \frac{1}{1-\varepsilon}.\label{eq:fneps}
\end{equation}
Now, let $Q_{\ell}=Q\left(S_{0},\dots,S_{\ell}\right)$. We proceed by
induction. First observe that if $|Q_0| \ge Cn$ then \cref{eq:recurrence}, \cref{eq:fneps} and \cref{eq:C} for $\ell=1$ imply $|Q_1|\ge Cn(n-f-1)/(f-1)-2n \ge(C/(1-\varepsilon)-2)n\ge Cn$, giving us the base case. If $\left|Q_{\ell}\right|\ge C\left(1+\varepsilon/2\right)^{\ell-1}n$,
then once again using \cref{eq:recurrence}, \cref{eq:fneps} and \cref{eq:C}, we obtain
\begin{align*}
\left|Q_{\ell+1}\right| & \ge\frac{C\left(1+\varepsilon/2\right)^{\ell-1}n}{f-\ell}\cdot\left(n-f-\ell\right)-(\ell+1) n\\
& =\left(C\left(1+\varepsilon/2\right)^{\ell-1}\frac{\left(n-f-\ell\right)}{f-\ell}-\ell-1\right)n\\
& \ge \left(C\left(1+\varepsilon/2\right)^{\ell-1}\frac{1}{1-\varepsilon}-\ell-1\right)n\\
& \ge C\left(1+\varepsilon/2\right)^{\ell}n.\tag*{\qedhere}
\end{align*}
\end{proof}
If we could choose $S_{0},S_{1}$ such that $\left|Q\left(S_{0}\right)\right|\ge Cn$
or $\left|Q\left(S_{0},S_{1}\right)\right|\ge Cn$, then \cref{claim:recurrence-estimate}
would imply that during the construction of $S_1,\dots,S_h$ we never run out of RISs in $\mathcal{S}$ (that is, $h<f-1$). Indeed, otherwise $Q(S_0,\dots,S_{\varepsilon n/2})$ would have size exponential in $n,$ which is impossible. Therefore, the process must stop at some point when we find a way to increase the volume of $\mathcal{S}.$ Provided we can again find suitable $S_0,S_1$ we can then repeat the arguments in this section, further increasing the volume of $\mathcal{S}$. After repeating these arguments enough times we will have obtained
$f=\left(1-\varepsilon\right)n/2\ge\left(1/2-\varepsilon\right)n$
disjoint transversal bases, completing the proof of \cref{thm:new}.
There may not exist suitable $S_{0},S_{1}\in\mathcal{S}$, but in the next
section we will show that if at least $\varepsilon n/2$ of the RISs
in $\mathcal{S}$ are not transversal bases, then it is possible to modify $\mathcal{S}$
without changing its volume, in such a way that suitable $S_{0},S_{1}$
exist.
\begin{rem}
\label{rem:linear}With the results we have proved so far, we can already find linearly many disjoint transversal bases. Indeed, if $S_{0}$ is not a transversal basis (missing
a colour $b$, say), and the volume of $\mathcal{S}$ cannot be increased by
adding an element to $S_{0}$ (possibly after a simple swap), then
\cref{claim:1-addability} implies that there are at
least $n-f$ elements which are $\left(S_{0},b\right)$-addable,
meaning that $\left|Q\left(S_{0}\right)\right|\ge n-f$.
Take for example $\varepsilon=4/5$, meaning that $f\le n/10$ and $\left|Q\left(S_{0}\right)\right|\ge9n/10$. We can check
that \cref{eq:C} holds for all $\ell\ge1$ if $C=9/10$. That is to
say, as long as we have not yet completed $\mathcal{S}$ to a collection of
disjoint transversal bases, we can keep increasing its volume without
the considerations in the next section. This proves already
that it is possible to find linearly many disjoint transversal bases.
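Spelling out this verification of \cref{eq:C}: with $\varepsilon=4/5$ and $C=9/10$, inequality \cref{eq:C} reads
\[
\frac{9}{10}\left(\frac{7}{5}\right)^{\ell-1}\cdot5-\ell-1\ge\frac{9}{10}\left(\frac{7}{5}\right)^{\ell},
\]
which rearranges to $\frac{81}{25}\left(\frac{7}{5}\right)^{\ell-1}\ge\ell+1$. This holds at $\ell=1$ (where it reads $81/25\ge2$), and each increment of $\ell$ multiplies the left-hand side by $7/5$ (an increase of at least $\frac{2}{5}\cdot\frac{81}{25}>1$) while increasing the right-hand side by exactly $1$.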
\end{rem}
\begin{rem}
It is not hard to add a term $(n-|S_{\ell}|)(n-f)$ to the right hand side of the inequality given by \cref{claim:cascade-increase} by considering also cascades along the sequence $S_0, \ldots, S_{\ell-1}$ of length strictly less than $\ell$. However, since this increase is only significant when $|S_{\ell}|$ is not close to $n$, which may never be the case, we omit it from our argument for the sake of readability.
\end{rem}
\subsection{Increasing the number of initial addable elements}\label{subsec:increasing}
Consider a collection $\mathcal{S}$ of $f=\left(1-\varepsilon\right)n/2$
disjoint RISs, at least $\varepsilon n/2$ of which are not transversal
bases. Recall the choice of $C$ from the previous section, and let $D=2C+4$, so that
$D\left(n-f-1\right)-2n\ge Cn$ for large $n$. We prove the following (for large $n$).
\begin{claim}
\label{claim:many-missing}We can modify $\mathcal{S}$ in such a way that at least one of the following holds.
\begin{enumerate}
\item [(a)]The volume of $\mathcal{S}$ increases;
\item [(b)]the volume of $\mathcal{S}$ does not change, and there is now $S_{0}\in\mathcal{S}$
missing at least $D$ colours;
\item [(c)]the volume of $\mathcal{S}$ does not change, and there are now distinct
$S_{0},S_{1}\in\mathcal{S}$ such that $S_{1}$ contains at least $D$ elements
that are $\left(S_{0},b\right)$-addable, for some colour $b$.
\end{enumerate}
\end{claim}
This suffices for our proof of \cref{thm:new}; indeed, if $S_{0}$ is missing at least $D$ colours, then by \cref{claim:1-addability}, either we can increase the volume of $\mathcal{S}$
or there are at least $D\left(n-f\right)\ge Cn$ elements which are
$\left(S_{0},b\right)$-addable for every $b$ not appearing in $S_0$, meaning that $\left|Q\left(S_{0}\right)\right|\ge Cn$.
If $S_{1}$ contains at least $D$ elements that are $\left(S_{0},b\right)$-addable,
then in the proof of \cref{claim:cascade-increase} with $\ell=1$ we have $|Q|\ge D$ so either we can increase the volume of $\mathcal{S}$ or $\left|Q\left(S_{0},S_{1}\right)\right|\ge D\left(n-f-1\right)-2n\ge Cn$ (recall \cref{eq:QBound2}).
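For completeness, both numerical facts used above follow from the choice $D=2C+4$. Since $f=\left(1-\varepsilon\right)n/2$, we have $n-f=\left(1+\varepsilon\right)n/2\ge n/2$, so
\[
D\left(n-f\right)\ge\left(2C+4\right)\cdot\frac{n}{2}=\left(C+2\right)n\ge Cn,
\]
and
\[
D\left(n-f-1\right)-2n\ge\left(2C+4\right)\left(\frac{n}{2}+\frac{\varepsilon n}{2}-1\right)-2n=Cn+\left(C+2\right)\varepsilon n-\left(2C+4\right)\ge Cn
\]
whenever $n\ge2/\varepsilon$.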
Before proceeding to the proof of \cref{claim:many-missing},
we first observe that using \cref{claim:many-good} we can modify $\mathcal{S}$
to ensure that every $S\in\mathcal{S}$ that is not a transversal basis can
be assigned a distinct missing colour $b\left(S\right)$. To see this,
we iteratively apply the following lemma to $\mathcal{S}$.
\begin{lem}
\label{lem:different-missing}
Consider $f\le n/2$ and let $\mathcal{S}=\left\{ S_{1},\dots,S_{f}\right\} $
be a collection of disjoint RISs. We can either increase the volume of $\mathcal{S}$, or modify $\mathcal{S}$ in such a way that the size of each $S_i$ remains the same and there is a choice of distinct colours $b_1,\dots,b_f$ for which any $S_i$ that is not a transversal basis has no element of colour $b_i$.
\end{lem}
\begin{proof}
Suppose that for some $i$ we have found distinct colours $b_1,\dots,b_{i-1}$
such that, for each $j<i$ for which $S_{j}$ is not a transversal basis, no element
of $S_{j}$ has colour $b_{j}$. If $S_i$ is a transversal basis we choose an arbitrary unused colour as $b_i$. Otherwise there is a colour, say $c$, not appearing in $S_{i}$. Then by \cref{claim:many-good} either we can increase the volume of $\mathcal{S}$ or there are at least $n-f\ge n/2$ colours which are $(S_i,c)$-swappable. At least one of these colours does not appear in $\left\{ b_1,\dots,b_{i-1}\right\}$, since $i-1<f\le n/2$. Let $b$ be such a colour and set $b_i=b$. By performing a simple swap, we transform $S_i$ into a new RIS, still disjoint from all other $S_j \in \mathcal{S}$ and missing the colour $b$.
\end{proof}
Now we prove \cref{claim:many-missing}.
\begin{proof}[Proof of \cref{claim:many-missing}]
Recall that we are assuming there are at least $\varepsilon n/2$
RISs in $\mathcal{S}$ that are not transversal bases. Let $E$ be the largest
integer such that there are at least $M_{E}=\left(\varepsilon/\left(4D^{2}\right)\right)^{E}n$
RISs in $\mathcal{S}$ missing at least $E$ colours. We may assume $1\le E<D$.
By \cref{lem:different-missing} we may assume that each $S\in\mathcal{S}$ which is not a transversal basis
has a distinct missing colour $b\left(S\right)$. We describe a procedure
that modifies $\mathcal{S}$ to increase $E$.
We create an auxiliary digraph $G$ on the vertex set $\mathcal{S}$ as follows.
For every $S_{0}\in\mathcal{S}$ missing at
least $E$ colours, put an arc to $S_{0}$ from every $S_{1}\in\mathcal{S}$
such that $S_{1}$ contains at least $E+1$ elements that are $\left(S_{0},b\left(S_{0}\right)\right)$-addable.
Say an \emph{$\left(E+1\right)$-out-star} in a digraph is a set of
$E+1$ arcs directed away from a single vertex. Our goal is to prove that there are at least $M_{E+1}$ vertex-disjoint $\left(E+1\right)$-out-stars. To see why this suffices, consider an $\left(E+1\right)$-out-star (with centre $S_{1}$, say). We show how to transfer $E+1$ elements from $S_1$ to its out-neighbours, the end result of which is that $S_1$ is then missing $E+1$ colours. We will then be able to repeat this process for each of our out-stars.
For each of the $E+1$ out-neighbours $S_{0}$ of $S_{1}$ there are
at least $E+1$ elements of $S_1$ which are $\left(S_{0},b\left(S_{0}\right)\right)$-addable. Therefore, for each such $S_0$ we can make a specific choice of such an $\left(S_{0},b\left(S_{0}\right)\right)$-addable element, in such a way that each of these $E+1$ choices are \emph{distinct}. For each $S_0$ we can then transfer the chosen element from $S_1$ to $S_0$, possibly with a simple
swap. These simple swaps will not create any conflicts, because
any addability witness for any element transferred into $S_{0}$ has the colour
$b\left(S_{0}\right)$, which is unique to that $S_{0}$ (by the property from \cref{lem:different-missing}). After this
operation, $S_1$ is missing
at least $E+1$ colours.
It will be a relatively straightforward matter to find our desired out-stars by studying the digraph $G$. First we show that $G$ must have many edges.
\begin{claim*}
In the above auxiliary digraph, we may assume that every $S_{0}\in\mathcal{S}$ missing at least
$E$ colours has in-degree at least $\varepsilon n/D$.
\end{claim*}
\begin{proof}
By \cref{claim:1-addability} we can
assume that there are at least $E\left(n-f\right)$ elements which are $\left(S_{0},b\left(S_{0}\right)\right)$-addable. All these elements appear in various $S\in\mathcal{S}$ (otherwise we can increase the volume of $\mathcal{S}$).
Let $N^-(S_0)$ be the set of all $S_1$ such that there is an arc from $S_1$ to $S_0$ in $G$ (so $|N^-(S_0)|$ is the indegree of $S_0$). By definition, every $S\notin N^-(S_0)$ has at most $E$ elements which are $\left(S_{0},b\left(S_{0}\right)\right)$-addable. Moreover, observe that every $S\in \mathcal{S}$ has fewer than $D$ elements that are $\left(S_{0},b(S_0)\right)$-addable, or else (c) trivially occurs. It follows that
\begin{align*}
D|N^-(S_0)|+E(f-|N^-(S_0)|)&\ge E(n-f),
\end{align*}
so
\begin{align*}
|N^-(S_0)| & \ge\frac{E\left(\left(n-f\right)-f\right)}{D-E}\ge\frac{\varepsilon n}{D},
\end{align*}
as desired.
\end{proof}
We have proved that $G$ has at least $M_E \varepsilon n/D$ arcs. Now we finish the proof by showing how to find our desired out-stars.
\begin{claim*}
$G$ has at least $M_{E+1}$ vertex-disjoint $\left(E+1\right)$-out-stars.
\end{claim*}
\begin{proof}
We can find these out-stars in a greedy fashion. Suppose that we have already found $t$ vertex-disjoint $\left(E+1\right)$-out-stars, for some $t< M_{E+1}$. We show that there must be an additional $\left(E+1\right)$-out-star disjoint to these. Let $G'$ be obtained from $G$ by deleting all vertices in the out-stars we have found so far. Each of these out-stars has $E+2$ vertices, so the number of arcs in $G'$ is at least
\begin{align*}
M_E \frac{\varepsilon n}D-t(E+2)\cdot2f&>M_E \frac{\varepsilon n}D-M_{E+1}(E+2)\cdot 2f \\
& = M_E \frac{\varepsilon n}D-\frac{M_{E}\varepsilon}{2D^2}\cdot(E+2)f\\
&\ge M_E \varepsilon\left(\frac n D-\frac f{D}\right)\\
& \ge M_E \varepsilon \cdot \frac f{D}\ge (E+1)f,
\end{align*}
where the last inequality holds for sufficiently large $n$, using the fact that $M_E$ is linear in $n$. This means that $G'$ (having at most $f$ vertices) has a vertex with outdegree at least $E+1$, which means $G'$ contains an $\left(E+1\right)$-out-star disjoint to the out-stars we have found so far.
\end{proof}
\end{proof}
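For concreteness, the greedy extraction of vertex-disjoint out-stars in the last claim can be sketched in code. This is our own illustrative sketch, not part of the argument above; the function name and the adjacency-dictionary representation of the digraph are ours.

```python
def find_out_stars(out_nbrs, E):
    """Greedily collect vertex-disjoint (E+1)-out-stars in a digraph.

    out_nbrs: dict mapping each vertex to the set of its out-neighbours.
    Returns a list of (centre, leaves) pairs whose vertex sets are pairwise disjoint.
    """
    used = set()
    stars = []
    for v, nbrs in out_nbrs.items():
        if v in used:
            continue
        # Only arcs to vertices not already used by an earlier star are available.
        avail = [u for u in nbrs if u not in used and u != v]
        if len(avail) >= E + 1:
            leaves = avail[:E + 1]
            stars.append((v, leaves))
            used.add(v)
            used.update(leaves)
    return stars
```

As in the proof, whenever the remaining digraph still has a vertex of out-degree at least $E+1$ among unused vertices, the greedy step succeeds, which is exactly what the arc count guarantees.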
\section{Concluding remarks}
In this paper we proved that, given bases $B_{1},\dots,B_{n}$
in a matroid, we can find $\left(1/2-o\left(1\right)\right)n$ disjoint
transversal bases. Although our methods do not extend past $n/2$,
we do not think that there is a fundamental obstacle preventing related
methods from going further. Indeed, by tracking the possible cascades
of swaps more carefully, it might be possible to find $\left(1-o\left(1\right)\right)n$
disjoint transversal bases, or at least to find $\left(1-o\left(1\right)\right)n$
disjoint partial transversals each of size $\left(1-o\left(1\right)\right)n$.
Although we cannot completely rule out the possibility that a full
proof of Rota's basis conjecture could be obtained in this way, we
imagine that more ingredients will be required. We are hopeful that
ideas used to prove existence of designs (see \cite{Kee14,GKLO16})
could be relevant, at least in the case of vector spaces.
Also, we remark that Rota's basis conjecture is reminiscent of some
other problems concerning rainbow
structures in graphs (actually, for a graphic matroid, Rota's basis
conjecture can be interpreted as a conjecture about rainbow spanning
forests in edge-coloured multigraphs). The closest one to Rota's basis
conjecture seems to be the Brualdi--Hollingsworth conjecture
\cite{BH96}, which posits that for every proper $(n-1)$-edge-colouring of the complete graph $K_n$ (with $n$ even), the edges can be decomposed into $n/2$ rainbow spanning trees. This conjecture has
recently seen some exciting progress (see for example \cite{Hor18,PS18,BLM18,MPS18}).
We wonder if some of the ideas developed for the study of rainbow
structures could be profitably applied to Rota's basis conjecture.
We also mention the following strengthening of Rota's basis conjecture due to Kahn (see \cite{HR94}). This is simultaneously a strengthening
of the Dinitz conjecture \cite{Dinitz} on list-colouring of $K_{n,n}$,
solved by Galvin \cite{Gal95}.
\begin{conjecture}
\label{conj:kahn}Given a rank-$n$ matroid and bases $B_{i,j}$ for
each $1\le i,j\le n$, there exist representatives $b_{i,j}\in B_{i,j}$
such that each of the sets $\{b_{1,j},\dots,b_{n,j}\}$ and
$\{b_{i,1},\dots,b_{i,n}\}$ is a basis.
\end{conjecture}
The methods developed in this paper are also suitable for studying
\cref{conj:kahn}. In particular, the argument used to prove \cref{thm:new} can readily be modified to show the following natural partial result towards Kahn's conjecture.
\begin{thm}\label{thm:kahn}
For any $\varepsilon>0$ the following holds for sufficiently large $n$. Given a rank-$n$ matroid and bases $B_{i,j}$ for
each $1\le i\le f=(1-\varepsilon)n/2$ and $1\le j \le n$, there exist representatives $b_{i,j}\in B_{i,j}$ and $L\subseteq \{1,\dots,f\}$ with $|L| \ge (1/2-\varepsilon)n$,
such that each $\{b_{i,j}:i\in L\}$ is independent, and such that
$\{b_{i,1},\dots,b_{i,n}\}$ is a basis for every $i \in L$.
\end{thm}
Note that if we are in the setting of \cref{conj:kahn} where bases are given for all $1 \le i,j\le n$ then the above theorem allows us to choose roughly which rows we would like to find our bases in.
Note also that if, for each fixed $j$, the bases $B_{1,j},\dots,B_{n,j}$ are all equal, then Kahn's conjecture reduces to Rota's basis conjecture. This observation also shows that \cref{thm:kahn} implies \cref{thm:new}.
It is not hard to adapt the proof of \cref{thm:new} to prove \cref{thm:kahn}. However, since it would require repeating most of the argument, we omit the details here. For interested readers we present the details in a companion note, which we will not publish but will make available on the arXiv \cite{arxiv:kahn}.
\medskip
\textbf{Acknowledgements.} We are extremely grateful to the anonymous referees for their careful reading of the paper and many useful suggestions.
| {
"timestamp": "2020-04-06T02:07:43",
"yymm": "1810",
"arxiv_id": "1810.07462",
"language": "en",
"url": "https://arxiv.org/abs/1810.07462",
"abstract": "In 1989, Rota made the following conjecture. Given $n$ bases $B_{1},\\dots,B_{n}$ in an $n$-dimensional vector space $V$, one can always find $n$ disjoint bases of $V$, each containing exactly one element from each $B_{i}$ (we call such bases transversal bases). Rota's basis conjecture remains wide open despite its apparent simplicity and the efforts of many researchers (for example, the conjecture was recently the subject of the collaborative \"Polymath\" project). In this paper we prove that one can always find $\\left(1/2-o\\left(1\\right)\\right)n$ disjoint transversal bases, improving on the previous best bound of $\\Omega\\left(n/\\log n\\right)$. Our results also apply to the more general setting of matroids.",
"subjects": "Combinatorics (math.CO)",
"title": "Halfway to Rota's basis conjecture"
} |
https://arxiv.org/abs/1609.06083 | A classification of anisotropic Besov spaces | We study (homogeneous and inhomogeneous) anisotropic Besov spaces associated to expansive dilation matrices $A \in {\rm GL}(d,\mathbb{R})$, with the goal of clarifying when two such matrices induce the same scale of Besov spaces. For this purpose, we first establish that anisotropic Besov spaces have an alternative description as decomposition spaces. This result allows to relate properties of function spaces to combinatorial properties of the underlying coverings. This principle is applied to the question of classifying dilation matrices. It turns out the scales of homogeneous and inhomogeneous Besov spaces differ in the way they depend on the dilation matrix: Two matrices $A,B$ that induce the same scale of homogeneous Besov spaces also induce the same scale of inhomogeneous spaces, but the converse of this statement is generally false. Furthermore, the question whether $A,B$ induce the same scale of homogeneous spaces is closely related to the question whether they induce the same scale of Hardy spaces; the latter question had been previously studied by Bownik. We give a complete characterization of the different types of equivalence in terms of the Jordan normal forms of $A,B$. | \section{Introduction}\label{introduction}
Let $A \in \mathbb{R}^{d \times d}$ denote a matrix whose eigenvalues all have modulus $>1$. Matrices of this kind, often with additional properties (such as integer entries), were the basis of the study of discrete wavelet systems (frames or bases) obtained from dilations by powers of $A$ and suitable translations. Here, the initial choice was to take $A = 2 \cdot I_d$, but it was soon recognized that many more matrices $A$ could be employed to construct multiresolution analyses, wavelet frames and bases, see e.g. \cite{GroMa,Str,LaWa,BaMe,BoSp}.
Anisotropic Besov spaces associated to diagonal, anisotropic scaling matrices have been studied since the work of Besov, Il$'$in and Nikol$'$ski{\u\i} \cite{Besov_et_al}, see for example Schmeisser and Triebel \cite{Schmeisser_Triebel}, Triebel \cite{Triebel_FSI,Triebel_FSII,Triebel_04}, Dintelmann \cite{Dintelmann_95}, Farkas \cite{Farkas}, Hochmuth \cite{Hochmuth}, Garrig\'os, Hochmuth and Tabacco \cite{Garrigos} and Kyriazis \cite{kyr}. This class of function spaces was further extended by Bownik in 2005 \cite{Bow05}, to allow arbitrary expansive dilation matrices. Bownik showed that many of the well-known results concerning the relationship between wavelet bases and (isotropic) Besov spaces carry over to the anisotropic setting. Further work in this direction can be found in \cite{BaBe1,BaBe2,LiBaBoYaYu,CaMoRo}.
Much of the existing literature is concerned with alternative descriptions of the spaces, say in terms of moduli of continuity, atomic decompositions, etc. A recurring theme connected to these questions is a certain robustness of the various descriptions. The existing results often provide an understanding which variations of the known criteria result in the same spaces, for instance by prescribing necessary and/or sufficient conditions on the atoms that allow to characterize the function spaces. This paper is intended as a further contribution to this discussion, by studying the question which matrices induce the same scale of Besov spaces. For the related case of Hardy spaces, this question had already been studied by Bownik in \cite{Bow03}. As will be seen below, it can be shown that two expansive matrices define the same scale of homogeneous anisotropic Besov spaces if and only if they induce the same scale of anisotropic Hardy spaces. Hence the results of the cited paper are directly relevant to our
paper. Note however
that one important result, namely \cite[Theorem (10.3)]{Bow03}, which provides a characterization of this property in terms of generalized eigenspaces, is incorrect; a counterexample can be found in Remark \ref{rem:counter_bownik} below. Hence our results provide a complement and partial correction to \cite{Bow03}.
\subsection{Overview of the paper}
In terms of technique, our paper relies mostly on Bownik's papers \cite{Bow03,Bow05}, as well as on the recent work by Voigtlaender on decomposition spaces \cite{VoDiss,Vo_Embed1}.
It is structured as follows: Sections \ref{sect:ani_besov} to \ref{sect:exp_matr} are mostly introductory. We review the basic definitions pertaining to anisotropic Besov spaces and expansive matrices. In particular, we introduce the notion of homogeneous quasi-norms associated to an expansive matrix, and recall their basic properties. We also introduce decomposition spaces, which will be an important tool in the subsequent arguments. Decomposition spaces are a flexible construction of function spaces due to Feichtinger and Gr\"obner \cite{DecompositionSpaces1}, which are based on certain coverings of the frequency space (or a subset thereof). The main advantage of decomposition spaces is that they translate the problem of comparing of decomposition spaces associated to different coverings to the task of comparing the coverings themselves, via Theorem \ref{thm:rigidity} and Lemma \ref{lem:suf_dc_equal}.
Section \ref{sect:ani_dec} then contains the first important new result, the alternative characterization of anisotropic Besov spaces as (Fourier side) decomposition spaces. The result as such is not surprising, and has already been obtained for several special cases: For the isotropic setting, it was proved in \cite{DecompositionSpaces1} for the inhomogeneous case, and proved also for the homogeneous case in \cite{VoDiss}. For anisotropic inhomogeneous spaces with diagonal dilation matrix, it was observed in \cite{BorupNielsenDecomposition}. The theorem for the general case seems to be new, however. As a first consequence of this observation, we show a rigidity result for Besov spaces, see Theorem \ref{thm:rigidity_besov}. For the coverings used in the decomposition space description of anisotropic Besov spaces induced by an expansive matrix $A$, one can employ annuli with respect to an $A^T$-homogeneous quasi-norm $\rho_{A^T}$; here $A^T$ denotes the transpose of $A$. We use this observation to translate
the question whether two matrices $A$ and $B$ yield the same anisotropic Besov spaces to a question about the
relationship between the associated quasi-norms $\rho_{A^T}$ and $\rho_{B^T}$.
For the homogeneous case, the induced spaces are equal if and only if $\rho_{A^T}$ and $\rho_{B^T}$ are equivalent (in the usual sense);
see Lemma \ref{lem:char_equiv_matr}. For the inhomogeneous case, equality of the induced Besov spaces holds if and only if the quasi-norms are {\em coarsely equivalent}, see Lemma \ref{lem:char_equiv_matr_ih}; see Definition \ref{defn:coarse} for the notion of coarse equivalence. As a corollary we get that $A$ and $B$ induce the same inhomogeneous spaces if they induce the same homogeneous ones. Furthermore, the results from \cite{Bow03} yield that the homogeneous case is equivalent to the question whether $A$ and $B$ induce the same anisotropic Hardy spaces, by Remark \ref{rem:rel_Hardy_eq}.
Thus the discussion whether two expansive matrices $A$ and $B$ induce the same scales of anisotropic Besov spaces is reduced to that of (coarse) equivalence of the associated homogeneous quasi-norms. Section \ref{sect:char_equiv} is devoted to completely clarifying this question. We show that for each expansive matrix $A$ there exists an expansive matrix $A'$ with only positive eigenvalues and $|{\rm det}(A')|=2$ such that the quasi-norms induced by the two matrices are equivalent. We call $A'$ the {\bf expansive normal form} of $A$. The significance of this notion is provided by Theorem \ref{thm:class_equiv_matr}, stating that any two matrices in expansive normal form induce equivalent quasi-norms if and only if they are equal. Hence there is a natural one-to-one correspondence between the scales of anisotropic, homogeneous Besov spaces and matrices in expansive normal form. Moreover,
Part (b) of Theorem \ref{thm:class_equiv_matr} provides an easily checked characterization of coarse equivalence. From these criteria one easily infers that there exist pairs $A,B$ of expansive matrices that induce the same scale of inhomogeneous spaces, but different scales of homogeneous spaces.
We close the paper with the description of an algorithm to decide whether two expansive matrices $A$ and $B$ induce the same (homogeneous and/or inhomogeneous) Besov spaces.
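To make the classification concrete, here is an illustrative sketch for the special case of diagonalizable $2\times 2$ matrices. It rests on our reading of the normal form described above (positive eigenvalues, determinant $2$), namely the assumption that for a diagonalizable matrix the expansive normal form is the diagonal matrix of the eigenvalue moduli raised to the power $c = \log 2/\log|{\rm det}(A)|$; this is a hypothetical simplification, not a transcription of the algorithm from Section \ref{sect:char_equiv}, and all function names are ours.

```python
import cmath
import math

def eigen_moduli_2x2(a, b, c, d):
    # Eigenvalue moduli of [[a, b], [c, d]] via the characteristic polynomial.
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return sorted(abs((tr + s * disc) / 2) for s in (1, -1))

def normal_form_moduli(a, b, c, d):
    """Moduli of the (assumed) expansive normal form: |lambda_i|^c, c = log 2 / log|det A|.

    The resulting diagonal matrix has positive entries and determinant 2,
    since the product of the moduli is |det A|.
    """
    m1, m2 = eigen_moduli_2x2(a, b, c, d)
    cexp = math.log(2) / math.log(m1 * m2)
    return (m1 ** cexp, m2 ** cexp)

def same_homogeneous_scale(A, B, tol=1e-12):
    """Heuristic comparison: equal normal forms <=> same scale of homogeneous spaces."""
    return all(abs(x - y) < tol
               for x, y in zip(normal_form_moduli(*A), normal_form_moduli(*B)))
```

For instance, under this assumption the scalar dilations $2 I_2$ and $4 I_2$ share the normal form $\sqrt{2}\, I_2$, while ${\rm diag}(2,4)$ does not.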
\section{Anisotropic Besov spaces}
\label{sect:ani_besov}
Our exposition regarding anisotropic Besov spaces follows \cite{Bow05}. Let us start with some preliminaries and basic notions. We will use the following normalization of the Fourier transform $\mathcal{F} : {\rm L}^1(\mathbb{R}^d) \to C_0(\mathbb{R}^d)$: For all $f \in {\rm L}^1(\mathbb{R}^d)$,
\[
\mathcal{F}(f)(\xi) = \widehat{f}(\xi) = \int_{\mathbb{R}^d} f(x) e^{- 2 \pi i \langle \xi, x \rangle} dx~.
\]
$\mathcal{S}(\mathbb{R}^d)$ denotes the space of Schwartz functions, $\mathcal{S}'(\mathbb{R}^d)$ its dual, the space of tempered distributions. As is well-known, the Fourier transform extends canonically to $\mathcal{S}'(\mathbb{R}^d)$. We let $\mathcal{P}$ denote the space of polynomials on $\mathbb{R}^d$, which can be viewed as a subspace of $\mathcal{S}(\mathbb{R}^d)$. For these definitions and basic properties of the Fourier transform, we refer to \cite{Ru_FA}.
Given an open subset $\mathcal{O} \subset \mathbb{R}^d$, we let $\mathcal{D}(\mathcal{O}) = C_c^\infty(\mathcal{O})$, the space of smooth, compactly supported functions on $\mathcal{O}$, endowed with the usual topology \cite{Ru_FA}. We let $\mathcal{D}'(\mathcal{O})$ denote its dual space.
We use ${\rm supp}(f) = \overline{f^{-1}(\mathbb{C} \setminus \{ 0 \})}$ for the support of a function $f$. Given a Borel subset $C \subset \mathbb{R}^d$, $\lambda(C)$ denotes its Lebesgue measure. The cardinality of a set $X$ is denoted by $|X|$.
Given a vector $x = (x_1,\ldots,x_d)^T \in \mathbb{R}^d$, we denote by $|x| = \left( \sum_{i=1}^d |x_i|^2 \right)^{1/2}$ its Euclidean length. Given a matrix $A \in \mathbb{R}^{d \times d}$, we let $\| A \| = \sup_{|x| = 1} |A x|$.
The definition of anisotropic Besov spaces is based on the notion of expansive matrices.
\begin{definition}
\label{defn:expansive}
A matrix $A \in {\rm GL}(d,\mathbb{R})$ is called {\bf expansive}, if all its (possibly complex) eigenvalues $\lambda$ fulfill $|\lambda|>1$.
\end{definition}
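Expansivity is easy to test numerically. The following stand-alone sketch (our own illustration, not part of the paper) checks the eigenvalue condition for a $2\times 2$ matrix via the roots of its characteristic polynomial.

```python
import cmath

def eigenvalues_2x2(a, b, c, d):
    # Roots of lambda^2 - tr*lambda + det for the matrix [[a, b], [c, d]].
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def is_expansive_2x2(a, b, c, d):
    # A matrix is expansive iff all (possibly complex) eigenvalues have modulus > 1.
    return all(abs(lam) > 1 for lam in eigenvalues_2x2(a, b, c, d))
```

For example, $2 I_2$ and the rotation-dilation $\begin{pmatrix} 0 & -2 \\ 2 & 0 \end{pmatrix}$ are expansive, while the shear $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ (eigenvalue $1$) is not.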
\begin{definition}
Let $A \in {\rm GL}(d,\mathbb{R})$ be an expansive matrix. $\psi \in \mathcal{S}(\mathbb{R}^d)$ is called an {\bf $A$-wavelet} if it fulfills
\begin{eqnarray}
\label{eqn:def_wv1} & & {\rm supp}(\widehat{\psi}) \subset [-1,1]^d \setminus \{ 0 \}~, \\
\label{eqn:def_wv2} & & \forall \xi \in \mathbb{R}^d \setminus \{0 \}~:~\sum_{j \in \mathbb{Z}} \left| \widehat{\psi}((A^T)^j \xi) \right|>0~.
\end{eqnarray}
Given a wavelet $\psi$, we define $\psi_j(x) = |{\rm det}(A)|^j \psi(A^jx)$, for $j \in \mathbb{Z}$.
Given a wavelet $\psi$, a function $\psi_0 \in \mathcal{S}(\mathbb{R}^d)$ is called a {\bf low-pass complement to $\psi$} if $\widehat{\psi_0}$ is compactly supported, with
\begin{equation}
\forall \xi \in \mathbb{R}^d ~:~ |\widehat{\psi}_0(\xi)| + \sum_{j \in \mathbb{N}_0} |\widehat{\psi}((A^T)^j\xi)| > 0~.
\end{equation}
The inhomogeneous wavelet system $(\psi_j^i)_{j \in \mathbb{N}_0}$ is defined by $\psi_j^i = \psi_j$, for $j \ge 1$, and $\psi_0^i = \psi_0$.
\end{definition}
\begin{remark} \label{rem:wavelets}
\begin{enumerate}
\item[(a)] It is not hard to see that every expansive matrix $A$ allows the existence of $A$-wavelets.
\item[(b)] The fact that the Fourier transform $\widehat{\psi}$ of an $A$-wavelet has compact support away from zero actually implies
${\rm supp}(\widehat{\psi}) \subset [-1,1]^d \setminus \epsilon [-1,1]^d$ for some $\epsilon>0$. For the expansive matrix $A$, this implies that, for all $\xi \in \mathbb{R}^d$,
\[
|\{ j \in \mathbb{Z} : \widehat{\psi} ((A^T)^j \xi) \not= 0 \}| \le M ~,
\] with $M$ independent of $\xi$. As a consequence of this observation, the denominator on the right-hand side of
\[
\widehat{\eta}(\xi) = \frac{|\widehat{\psi}(\xi)|^2}{ \sum_{j \in \mathbb{Z}} \left| \widehat{\psi}((A^T)^j \xi) \right|^2}
\] is locally a finite sum, hence a smooth function, vanishing nowhere by Assumption (\ref{eqn:def_wv2}).
This establishes that $\widehat{\eta}$ is a well-defined $C_c^\infty$-function, and $\eta \in \mathcal{S}(\mathbb{R}^d)$ is an $A$-wavelet satisfying the additional condition
\begin{equation}
\label{eqn:def_wv2_strong} \forall \xi \in \mathbb{R}^d \setminus \{0 \}~:~\sum_{j \in \mathbb{Z}} \widehat{\eta}((A^T)^j \xi) = 1~.
\end{equation}
Similarly, one can construct $A$-wavelets $\tilde{\eta}$ satisfying the condition
\begin{equation}
\label{eqn:def_wv2_strong2} \forall \xi \in \mathbb{R}^d \setminus \{0 \}~:~\sum_{j \in \mathbb{Z}} \left| \widehat{\tilde{\eta}}((A^T)^j \xi) \right|^2= 1~.
\end{equation}
Hence we may replace assumption (\ref{eqn:def_wv2}) by (\ref{eqn:def_wv2_strong}) or by (\ref{eqn:def_wv2_strong2}), if it proves convenient.
\item[(c)] Similarly, one can find Schwartz functions $\psi_0$ and $\psi$ such that
\begin{equation}
\label{eqn:def_wv2_strong_ih} \forall \xi \in \mathbb{R}^d~:~\widehat{\psi}_0(\xi) + \sum_{j \in \mathbb{N}} \widehat{\psi}((A^T)^j \xi) = 1~,
\end{equation}
or
\begin{equation}
\label{eqn:def_wv2_strong2_ih} \forall \xi \in \mathbb{R}^d~:~|\widehat{\psi_0}(\xi)|^2 + \sum_{j \in \mathbb{N}} |\widehat{\psi}((A^T)^j \xi)|^2 = 1~,
\end{equation}
holds.
\item[(d)] The polynomials $p \in \mathcal{P} \subset \mathcal{S}'(\mathbb{R}^d)$ are characterized by the fact that ${\rm supp}(\widehat{p}) \subset \{ 0 \}$. Hence the convolution theorem yields for any $A$-wavelet $\psi$ that $(p \ast \psi_j)^\wedge = \widehat{p} \cdot \widehat{\psi_j} = 0$, and thus $p \ast \psi_j = 0$, for all $j \in \mathbb{Z}$.
\end{enumerate}
\end{remark}
\begin{definition}
Let $A$ denote an expansive matrix, $\alpha \in \mathbb{R}$, and $0 < q \le \infty$. The sequence space $\ell^q_{v_{\alpha,A}}(\mathbb{Z})$ is the space of all sequences $(c_j)_{j \in \mathbb{Z}}$ with the property that $(|{\rm det}(A)|^{\alpha j} c_j)_{j \in \mathbb{Z}} \in \ell^q$, endowed with the obvious (quasi-)norm. The space $\ell^q_{v_{\alpha,A}}(\mathbb{N}_0)$ is defined analogously. Since the precise meaning can usually be inferred from the context, we will typically write $\ell^q_{v_{\alpha,A}}$ for either of the two spaces.
\end{definition}
\begin{definition}
\label{defn:an_bes}
Let $\alpha \in \mathbb{R}$, $0 < p,q \le \infty$. Let $A$ be an expansive matrix, and $\psi$ an $A$-wavelet, with low-pass complement $\psi_0$.
\begin{enumerate}
\item[(a)]
We define the {\bf anisotropic homogeneous Besov (quasi-) norm} by letting, for
given $f \in \mathcal{S}'(\mathbb{R}^d)$,
\begin{equation} \label{eqn:def_bnorm}
\| f \|_{\dot{B}_{p,q}^\alpha(A)} = \left\| \left( \left\| f \ast \psi_j\right\|_{{\rm L}^p} \right)_{j \in \mathbb{Z}} \right\|_{\ell^q_{v_{\alpha,A}}}
\end{equation}
We let $\dot{B}_{p,q}^\alpha(A)$ denote the space of all tempered distributions $f$ with $\| f \|_{\dot{B}_{p,q}^\alpha(A)} < \infty$. We identify elements of $\dot{B}_{p,q}^\alpha(A)$ that only differ by a polynomial.
\item[(b)] The {\bf anisotropic inhomogeneous Besov (quasi-) norm} is defined for
given $f \in \mathcal{S}'(\mathbb{R}^d)$ by
\begin{equation} \label{eqn:def_bnorm_ih}
\| f \|_{{B}_{p,q}^\alpha(A)} = \left\| \left( \left\| f \ast \psi_j^i\right\|_{{\rm L}^p} \right)_{j \in \mathbb{N}_0} \right\|_{\ell^q_{v_{\alpha,A}}}
\end{equation}
We let $B_{p,q}^\alpha(A)$ denote the space of all tempered distributions $f$ with $\| f \|_{B_{p,q}^\alpha(A)} < \infty$.
\end{enumerate}
\end{definition}
\begin{remark} \label{rem:def_besov}
A few words regarding well-definedness of the (quasi-)norm and the associated Besov space are in order. First of all, note that for any tempered distribution $f$, the convolution product $f \ast \psi_j$ is a smooth function of polynomial growth, that may or may not be $p$-integrable. By convention, the right-hand side of (\ref{eqn:def_bnorm}) is infinite whenever one of the convolution products $f \ast \psi_j$ is not $p$-integrable. In case the sequence $\left( \left\| f \ast \psi_j\right\|_{{\rm L}^p} \right)_{j \in \mathbb{Z}}$ consists only of finite real numbers, its $\ell^q_{v_{\alpha,A}}$-norm is declared infinite whenever the
sequence is {\em not} in $\ell^q_{v_{\alpha,A}}$.
Note that strictly speaking, $\dot{B}_{p,q}^\alpha(A) \subset \mathcal{S}'(\mathbb{R}^d)/\mathcal{P}$.
The well-definedness of the norm on the quotient space follows from the fact that $f \ast \psi_j = (f + p) \ast \psi_j$ for all $p \in \mathcal{P}$, by Remark \ref{rem:wavelets}(d).
With these conventions, the anisotropic Besov spaces and their norm are well-defined, although possibly dependent on the choice of wavelet. The independence of the norm (up to equivalence) of the choice of wavelet, and thus of the space, is shown in \cite[Corollary 3.7]{Bow05}. Furthermore, we mention the following properties of anisotropic Besov spaces: They are normed spaces for $1 \le p,q \le \infty$, and quasi-normed otherwise. Furthermore, all spaces are {\em complete}, by \cite[Proposition 3.3]{Bow05}.
\end{remark}
\begin{remark} \label{rem:l2_besov}
Using a wavelet $\psi$ fulfilling the strong admissibility condition (\ref{eqn:def_wv2_strong2}), one computes for $f \in \dot{B}_{2,2}^0(A)$
\begin{eqnarray*}
\| f \|_{\dot{B}_{2,2}^0}^2 & = & \sum_{j \in \mathbb{Z}} \| f \ast \psi_j \|_2^2 \\
& = & \sum_{j \in \mathbb{Z}} \int_{\mathbb{R}^d} |\widehat{f}(\xi)|^2 |\widehat{\psi}((A^T)^{-j} \xi)|^2 d\xi \\
& = & \int_{\mathbb{R}^d} |\widehat{f}(\xi)|^2 \underbrace{\sum_{j \in \mathbb{Z}} |\widehat{\psi}((A^T)^j \xi)|^2}_{\equiv 1} d\xi \\
& = & \| f \|_2^2~.
\end{eqnarray*}
Here the second equality used the Plancherel theorem together with the identity $\widehat{\psi_j}(\xi) = \widehat{\psi}((A^T)^{-j}\xi)$, and the third equality used condition (\ref{eqn:def_wv2_strong2}) after reindexing. A similar argument shows that $B_{2,2}^0(A) = {\rm L}^2(\mathbb{R}^d)$.
\end{remark}
The chief aim of this paper is to understand the dependence of the scale of anisotropic Besov spaces on the underlying matrix. For this reason, we define an equivalence relation:
\begin{definition}
Let $A$ and $B$ denote expansive matrices. We write $A \sim_{\dot{B}} B$ whenever $\dot{B}_{p,q}^\alpha (A) = \dot{B}_{p,q}^\alpha (B)$ holds for all $0 < p,q \le \infty, \alpha \in \mathbb{R}$.
The relation $A \sim_B B$ is defined analogously.
\end{definition}
\section{Decomposition spaces}
\label{sect:dec_spaces}
Decomposition spaces were introduced by Feichtinger and Gr\"obner \cite{DecompositionSpaces1,DecompositionSpaces2}, initially with the aim of constructing intermediate spaces between (isotropic) Besov spaces and modulation spaces. The decomposition space formalism is an extremely flexible tool for the description of elements of a large variety of function spaces in terms of their Fourier localization, including $\alpha$-modulation spaces, Besov spaces, curvelet smoothness spaces, wavelet coorbit spaces over general dilation groups, etc.; see \cite{DecompositionSpaces1,DecompositionSpaces2,BorupNielsenDecomposition,BorupNielsenAlphaModulationSpaces,FuVo,VoDiss}. In order to treat homogeneous Besov spaces along with the inhomogeneous ones, the initial definition had to be somewhat modified, to allow for decompositions that do not cover the full frequency space \cite{FuVo}.
As will become clear below, our paper further contributes to this unifying view onto function spaces through the lens of decomposition space theory. Let us now start by recounting the definition of Fourier-side decomposition spaces, in the form defined in \cite{VoDiss}.
These spaces depend on certain coverings of a suitably chosen set $\mathcal{O} \subset \mathbb{R}^d$ of frequencies, and partitions of unity subordinate to these. For the purposes of this paper, it is sufficient to treat a particularly amenable class of coverings, described in the next definition.
\begin{definition}
Let $\mathcal{O} \subset \mathbb{R}^d$ be open, and let $\mathcal{Q} = (Q_i)_{i \in I}$ denote a family of subsets $Q_i \subset \mathcal{O}$ with compact closure in $\mathcal{O}$.
\begin{enumerate}
\item[(a)] We call $\mathcal{Q}$ an {\bf admissible covering} of $\mathcal{O}$, if it fulfills the following conditions:
\begin{enumerate}
\item[(i)] {\bf Covering property:} $\mathcal{O} = \bigcup_{i \in I} Q_i$
\item[(ii)] {\bf Admissibility:} $\sup_{i \in I} \sup_{j \in I, Q_i \cap Q_j \not= \emptyset} \frac{\lambda(Q_i)}{\lambda(Q_j)} <\infty$.
\end{enumerate}
\item[(b)] $\mathcal{Q}$ is called an {\bf almost structured admissible covering} if it is an admissible covering, and there exists a family $(Q_i')_{i \in I}$ of open bounded sets as well as $T_i \in {\rm GL}(d,\mathbb{R})$ and $b_i \in \mathbb{R}^d$ fulfilling the following conditions:
\begin{enumerate}
\item[(i)] For all $i \in I$: $\overline{T_i Q_i' + b_i} \subset Q_i$.
\item[(ii)] The quantity $\sup_{i,j: Q_i \cap Q_j \not= \emptyset} \| T_i^{-1} T_j \|$ is finite.
\item[(iii)] The set $\{ Q_i': i \in I \}$ is finite.
\item[(iv)] The family $(T_i Q_i' + b_i)_{ i \in I}$ is an admissible covering.
\end{enumerate}
The tuple $((T_i)_{i \in I}, (b_i)_{i \in I}, (Q_i')_{i \in I})$ is called a {\bf standardization} of $\mathcal{Q}$.
\end{enumerate}
\end{definition}
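For orientation, the prototypical example is the dyadic covering of $\mathbb{R} \setminus \{0\}$ by the annuli $Q_j = (-2^{j+1}, -2^{j-1}) \cup (2^{j-1}, 2^{j+1})$, $j \in \mathbb{Z}$. The following sketch (our own illustration) verifies the admissibility condition (a)(ii) over a finite index range, using exact rational arithmetic.

```python
from fractions import Fraction

def measure(j):
    # lambda(Q_j) = 2 * (2^{j+1} - 2^{j-1}) = 3 * 2^j
    return 2 * (Fraction(2) ** (j + 1) - Fraction(2) ** (j - 1))

def intersects(i, j):
    # The positive parts (2^{i-1}, 2^{i+1}) and (2^{j-1}, 2^{j+1}) overlap iff |i - j| <= 1.
    return abs(i - j) <= 1

# Supremum of measure ratios over intersecting pairs, here over a finite window.
max_ratio = max(measure(i) / measure(j)
                for i in range(-10, 11) for j in range(-10, 11) if intersects(i, j))
```

Since only neighbouring annuli intersect and their measures differ by a factor of at most $2$, the admissibility supremum equals $2$, uniformly in the index range.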
The subtleties connected to the various notions of coverings are the price one has to pay for the generality of the decomposition space approach.
The definition of associated function spaces uses a particular class of partitions of unity subordinate to an admissible covering.
\begin{definition}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ denote an almost structured admissible covering with standardization $((T_i)_{i \in I}, (b_i)_{i \in I}, (Q_i')_{i \in I})$, and let $0 < p \le \infty$. We call a family $(\varphi_i)_{i \in I}$ of functions an {\bf ${\rm L}^p$-BAPU} with respect to $\mathcal{Q}$ if it has the following properties:
\begin{enumerate}
\item[(i)] For all $i \in I$~:~$\varphi_i \in C_c^\infty( \mathcal{O} )$.
\item[(ii)] For all $i \in I$~:~$\varphi_i \equiv 0$ on $\mathbb{R}^d \setminus Q_i$.
\item[(iii)] $\sum_{i \in I} \varphi_i \equiv 1$ on $\mathcal{O}$.
\item[(iv)] $\sup_{i \in I} |{\rm det}(T_i)|^{\frac{1}{t}-1} \| \mathcal{F}^{-1} \varphi_i \|_{{\rm L}^p} <\infty$.
Here $t = \min(p,1)$.
\end{enumerate}
\end{definition}
The word BAPU in the definition is an acronym for {\em bounded admissible partition of unity}.
The following lemma ensures that BAPUs exist for almost structured admissible coverings \cite[Theorem 2.8]{VoDiss}.
\begin{lemma}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ denote an almost structured admissible covering. Then there exists a family $(\varphi_i)_{i \in I}$ that is an ${\rm L}^p$-BAPU, for every $0 < p \le \infty$.
\end{lemma}
\begin{definition}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ denote an admissible covering, and let $v: I \to \mathbb{R}^+$ denote a weight. The weight is called {\bf $\mathcal{Q}$-moderate} if
\[
\sup_{i,j \in I : Q_i \cap Q_j \not= \emptyset} \frac{v(i)}{v(j)} < \infty~.
\]
Given $0 < q \le \infty$, we define $\ell^q_v(I) = \{ c = (c_i)_{i \in I} \in \mathbb{C}^I~:~ (c_i v(i))_{i \in I} \in \ell^q(I) \}$, endowed with the obvious (quasi-)norm.
\end{definition}
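As a small illustration (ours, not from the paper), the weighted sequence (quasi-)norm of $\ell^q_v$ over a finite index set can be computed directly:

```python
import math

def weighted_lq_norm(c, v, q):
    """(Quasi-)norm of ell^q_v: the ell^q norm of (c_i * v(i))_i, with sup for q = inf.

    c: finite list of coefficients; v: list of (positive) weight values v(i).
    """
    weighted = [abs(ci) * vi for ci, vi in zip(c, v)]
    if math.isinf(q):
        return max(weighted, default=0.0)
    return sum(w ** q for w in weighted) ** (1.0 / q)
```

For $q < 1$ this is only a quasi-norm, since the triangle inequality holds merely up to a constant.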
We can now define the class of decomposition spaces that we are interested in:
\begin{definition}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ denote an almost structured admissible covering, let $v$ denote a $\mathcal{Q}$-moderate weight on $I$, and $0 < p,q \le \infty$. Let $(\varphi_i)_{i \in I}$ denote an ${\rm L}^p$-BAPU associated to $\mathcal{Q}$. Given $u \in \mathcal{D}'(\mathcal{O})$, we define its {\bf decomposition space (quasi-)norm} as
\begin{equation}
\label{eqn:def_dsnorm} \left\| u \right\|_{\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_v)} = \left\| \left( \| \mathcal{F}^{-1} (\varphi_i \cdot u ) \|_{{\rm L}^p} \right)_{ i \in I} \right\|_{\ell^q_v}~.
\end{equation}
We denote by $\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_v)$ the space of all $u \in \mathcal{D}'(\mathcal{O})$ for which this (quasi-)norm is finite.
\end{definition}
\begin{remark}
\begin{enumerate}
\item[(a)]
We use the same conventions regarding finiteness of the (quasi-) norm as for the Besov space setting, see Remark \ref{rem:def_besov}. For well-definedness of the inverse Fourier transform $\mathcal{F}^{-1} (\varphi_i \cdot u ) $ observe that the pointwise product $\varphi_i \cdot u$ is a distribution on $\mathbb{R}^d$ with compact support, hence is a {\em tempered} distribution, whose inverse Fourier transform is a smooth function. Hence the meaning of the ${\rm L}^p$-norm of the inverse Fourier transform is clear, if one allows $\infty$ as a possible value.
Just as for Besov spaces, the (quasi-)norms of decomposition spaces are independent of the choice of BAPU (up to equivalence), and the spaces are complete with respect to their (quasi-)norms. Also, an application of the Plancherel theorem, similar to that in the Besov space case, easily establishes that $\mathcal{D}(\mathcal{Q},{\rm L}^2,\ell^2) = {\rm L}^2(\mathbb{R}^d)$, whenever $\mathbb{R}^d \setminus \bigcup_{i \in I} Q_i$ has measure zero.
\item[(b)] It is common in the decomposition space literature to use the space $\mathcal{S}'(\mathbb{R}^d)$ as a reservoir space for $\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell_v^q)$, rather than the larger space $\mathcal{D}'(\mathcal{O})$. This may result in {\em incomplete} decomposition spaces, see \cite[Remark after Definition 21]{FuVo}, and it is the main reason why our definition relies on $\mathcal{D}'(\mathcal{O})$. This distinction will become relevant in the proof of Theorem \ref{thm:besov_as_decsp} below.
\end{enumerate}
\end{remark}
The definition of decomposition spaces puts the frequency covering at the center of attention: Important properties of decomposition spaces should be related to properties of the underlying covering, and a major task of decomposition space theory is to make this relation explicit and transparent. The thesis \cite{VoDiss} demonstrates that this programme can be carried out for {\em embedding theorems} describing inclusion relations between decomposition spaces, see also \cite{Vo_Embed1}. In the following, we will be concerned with a much more restrictive question, namely that of {\em equality} of decomposition spaces: We would like to understand when different coverings induce the same decomposition spaces. For this purpose, we must be able to compare different admissible coverings. The pertinent notions for such a comparison are contained in the following definitions.
\begin{definition}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ and $\mathcal{P} = (P_j)_{j \in J}$ denote admissible coverings of the open set $\mathcal{O}$.
\begin{enumerate}
\item[(a)] Given $i \in I$ and $n \in \mathbb{N}_0$, we inductively define index sets $i^{n*} \subset I$ via
\[
i^{0*} = \{ i \}~,~i^{(n+1)*} = \{ j \in I~:~\exists k \in i^{n*} \mbox{ with } Q_k \cap Q_j \not= \emptyset \}~.
\]
\item[(b)] We define
\[
Q_{i}^{n*} = \bigcup_{j \in i^{n*}} Q_j~.
\]
\item[(c)] We call $\mathcal{Q}$ {\bf almost subordinate to } $\mathcal{P}$ if there exists $k \in \mathbb{N}_0$ such that for every $i \in I$ there exists a $j_i \in J$ with
$Q_i \subset P_{j_i}^{k*}$.
\item[(d)] $\mathcal{Q}$ and $\mathcal{P}$ are called {\bf equivalent} if $\mathcal{Q}$ is almost subordinate to $\mathcal{P}$ and $\mathcal{P}$ is almost subordinate to $\mathcal{Q}$.
\end{enumerate}
\end{definition}
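The sets $i^{n*}$ are simply the $n$-step neighborhoods of $i$ in the intersection graph of the covering. A minimal Python sketch (ours, with a toy covering of overlapping intervals; the function name is hypothetical) computes them by iterating the defining recursion:

```python
def neighborhoods(cover, i, n):
    """Compute i^{n*}: indices reachable in <= n steps in the
    intersection graph of the covering (sets given as closed intervals)."""
    def intersects(a, b):
        return a[0] <= b[1] and b[0] <= a[1]
    current = {i}  # i^{0*} = {i}
    for _ in range(n):
        # i^{(n+1)*} = { j : exists k in i^{n*} with Q_k intersecting Q_j }
        current = {j for j in range(len(cover))
                   if any(intersects(cover[j], cover[k]) for k in current)}
    return current

# Toy covering of [0, 6.5] by overlapping intervals (k, k + 1.5)
cover = [(k, k + 1.5) for k in range(5)]
print(sorted(neighborhoods(cover, 2, 0)))  # [2]
print(sorted(neighborhoods(cover, 2, 1)))  # [1, 2, 3]
print(sorted(neighborhoods(cover, 2, 2)))  # [0, 1, 2, 3, 4]
```

Note that $i^{n*}$ always contains $i^{(n-1)*}$, since each $Q_k$ intersects itself.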
\begin{definition}
Let $\mathcal{Q}=(Q_i)_{i \in I}$ and $\mathcal{P}= (P_j)_{j \in J}$ denote admissible coverings of the open sets $\mathcal{O}$ and $\mathcal{O}'$, respectively.
\begin{enumerate}
\item[(a)] Given $i \in I$, we define $J_i \subset J$ as
\[
J_i = \{ j \in J: P_j \cap Q_i \not= \emptyset \}~,
\]
and similarly, for $j \in J$,
\[
I_j = \{ i \in I: P_j \cap Q_i \not= \emptyset \}~.
\]
\item[(b)] $\mathcal{Q}$ and $\mathcal{P}$ are called {\bf weakly equivalent} if
\[
\sup_{i \in I} | J_i | + \sup_{j \in J} |I_j| < \infty
\]
\end{enumerate}
\end{definition}
It is useful to also have a notion of equivalence for weights over different coverings.
\begin{definition} \label{defn:weight_equiv}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ and $\mathcal{P} = (P_j)_{j \in J}$ denote admissible coverings, $v_1$ a $\mathcal{Q}$-moderate weight on $I$, and $v_2$ a $\mathcal{P}$-moderate weight on $J$. We define
\[
v_1 \asymp v_2 :\Leftrightarrow \sup_{i \in I, j \in J~:Q_i \cap P_j \not= \emptyset} \frac{v_1(i)}{v_2(j)} + \frac{v_2(j)}{v_1(i)} < \infty
\]
\end{definition}
We now cite two results relating (weak) equivalence of coverings to equality of decomposition spaces. The first one can be understood as a rigidity theorem,
see \cite[Theorem 1.10]{Vo_Embed1}.
\begin{theorem} \label{thm:rigidity}
Let $\mathcal{Q}=(Q_i)_{i \in I}$ and $\mathcal{P}= (P_j)_{j \in J}$ denote almost structured admissible coverings of the same open set $\mathcal{O}$,
$v_1$ a $\mathcal{Q}$-moderate weight on $I$, and $v_2$ a $\mathcal{P}$-moderate weight on $J$. Assume that there exist $(p_1,q_1,p_2,q_2) \in (0,\infty]^4 \setminus \{ (2,2,2,2) \}$ with
\[
\mathcal{D}(\mathcal{Q},{\rm L}^{p_1}, \ell^{q_1}_{v_1}) = \mathcal{D}(\mathcal{P},{\rm L}^{p_2}, \ell^{q_2}_{v_2})~.
\]
Then $(p_1,q_1) = (p_2,q_2)$, $v_1 \asymp v_2$, and $\mathcal{P}$ and $\mathcal{Q}$ are weakly equivalent.
\end{theorem}
The converse requires somewhat different conditions. The following generalizes results from \cite{DecompositionSpaces1,BorupNielsenDecomposition}.
\begin{lemma} \label{lem:suf_dc_equal}
Let $\mathcal{Q}=(Q_i)_{i \in I}$ and $\mathcal{P}= (P_j)_{j \in J}$ denote weakly equivalent almost structured admissible coverings of the same open set $\mathcal{O}$, with standardizations $((T_i)_{i \in I}, (b_i)_{i \in I}, (Q_i')_{i \in I})$ and $((S_j)_{j \in J}, (c_j)_{j \in J},(P_j')_{j \in J})$, respectively.
Let $v_1$ denote a $\mathcal{Q}$-moderate weight on $I$, and $v_2$ a $\mathcal{P}$-moderate weight on $J$, with $v_1\asymp v_2$.
Assume finally that $\mathcal{Q}$ is almost subordinate to $\mathcal{P}$, and that there exists a $C>0$ such that
\[ \forall (i,j) \in I \times J: \left(Q_i \cap P_j \not= \emptyset \Rightarrow \| T_i^{-1} S_j \| + \| S_j^{-1} T_i \| \le C \right) \]
Then
\[
\mathcal{D}(\mathcal{Q},{\rm L}^{p}, \ell^{q}_{v_1}) = \mathcal{D}(\mathcal{P},{\rm L}^{p}, \ell^{q}_{v_2})
\] holds for all $0 < p,q \le \infty$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 6.10]{Vo_Embed1}. Note that this result also requires an upper bound on $ |{\rm det}(S_j^{-1} T_i)|$, for all $i,j$ with $Q_i \cap P_j \not= \emptyset$, which in our setting follows from the bound on the norms.
\end{proof}
\section{Expansive Matrices and Homogeneous Quasi-Norms}
\label{sect:exp_matr}
In this section we collect the pertinent properties of expansive matrices. For more background on the following definitions and results, we refer to \cite{Bow03}.
Throughout this section, let $A$ denote an expansive matrix. We first note a number of observations regarding norms of (powers of) expansive matrices:
\begin{lemma} \label{lem:exp_norm}
Let $\lambda_1,\ldots,\lambda_d \in \mathbb{C}$ denote the eigenvalues of $A$, counted with their algebraic multiplicities, and numbered to ensure
\[
1 < |\lambda_1| \le |\lambda_2| \le \ldots \le |\lambda_d| ~.
\] Pick $1 < \lambda_-< |\lambda_1|$ and $\lambda_+ > |\lambda_d|$. Then there exists a constant $c>0$ such that, for all $j \ge 0$ and $x \in \mathbb{R}^d$:
\begin{eqnarray*}
\frac{1}{c} \lambda_-^j |x| & \le & |A^j x| \le c \lambda_+^j |x| \\
\frac{1}{c} \lambda_+^{-j} |x| & \le & |A^{-j} x| \le c \lambda_-^{-j} |x| \\
\end{eqnarray*}
As a consequence, we have
\[
\frac{1}{c} \lambda_-^j \le \| A^j \| \le c \lambda_+^j ~,~ \frac{1}{c} \lambda_+^{-j} \le \|A^{-j} \| \le c \lambda_-^{-j} ~.
\]
\end{lemma}
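These bounds are easy to probe numerically. The following self-contained Python sketch (our illustration, not part of the theory) uses the non-diagonalizable expansive matrix $A = \left(\begin{smallmatrix} 2 & 1 \\ 0 & 2 \end{smallmatrix}\right)$, whose eigenvalues both equal $2$, and checks that $\|A^j\|/\lambda_+^j$ stays bounded above and $\|A^j\|/\lambda_-^j$ stays bounded below for $\lambda_- = 1.9$ and $\lambda_+ = 2.1$:

```python
import math

def matmul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def opnorm(A):
    # spectral norm of a 2x2 matrix, via the largest eigenvalue of A^T A
    a = A[0][0]**2 + A[1][0]**2
    b = A[0][0]*A[0][1] + A[1][0]*A[1][1]
    d = A[0][1]**2 + A[1][1]**2
    lam_max = ((a + d) + math.sqrt((a - d)**2 + 4*b*b)) / 2
    return math.sqrt(lam_max)

A = [[2.0, 1.0], [0.0, 2.0]]      # expansive Jordan block, eigenvalues 2, 2
lam_minus, lam_plus = 1.9, 2.1    # 1 < lam_- < |lambda_1|, lam_+ > |lambda_d|
upper, lower = [], []
P = [[1.0, 0.0], [0.0, 1.0]]
for j in range(1, 60):
    P = matmul(P, A)              # P = A^j
    upper.append(opnorm(P) / lam_plus**j)   # bounded above, per the lemma
    lower.append(opnorm(P) / lam_minus**j)  # bounded below, per the lemma
print(max(upper) < 10.0, min(lower) >= 1.0)  # True True
```

The polynomial factor coming from the Jordan block explains why strict inequalities $\lambda_- < |\lambda_1|$ and $\lambda_+ > |\lambda_d|$ are needed.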
The following norm estimate will be useful; see \cite[Lemma 10.1]{Bow03}.
\begin{lemma} \label{lem:norm_est}
Let $c_1,c_2 >0$. Then there is $c_3 = c_3(c_1,c_2)>0$ such that for all matrices $A$ with $\| A \| \le c_1$ and $|{\rm det}(A)| \ge c_2$, the estimate $\| A^{-1} \| \le c_3$ holds.
\end{lemma}
\begin{proof}
By Cramer's rule, the entries of $A^{-1}$ are polynomials in $|{\rm det}(A)|^{-1}$ and the entries of $A$, and our assumptions provide upper bounds for both.
\end{proof}
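The quantitative content of this argument is easily illustrated for $2\times 2$ matrices, where Cramer's rule gives $A^{-1} = {\rm adj}(A)/\det(A)$, the entries of ${\rm adj}(A)$ are (up to sign) entries of $A$, and hence $\| A^{-1} \| \le \| A^{-1} \|_{\rm Frob} \le 2 \max_{i,j} |a_{ij}| / |\det(A)|$. The following Python sketch (our illustration) verifies this explicit $c_3$ on random samples satisfying the hypotheses:

```python
import math
import random

def inv_norm_and_bound(A):
    """Frobenius norm of A^{-1} for a 2x2 matrix, together with the
    Cramer-rule bound 2 * max|a_ij| / |det A| (an upper bound for it)."""
    a, b = A[0]
    c, d = A[1]
    det = a*d - b*c
    inv = [[d/det, -b/det], [-c/det, a/det]]
    frob = math.sqrt(sum(x*x for row in inv for x in row))
    return frob, 2*max(abs(x) for row in A for x in row)/abs(det)

random.seed(0)
ok = True
for _ in range(1000):
    A = [[random.uniform(-3, 3) for _ in range(2)] for _ in range(2)]
    if abs(A[0][0]*A[1][1] - A[0][1]*A[1][0]) >= 0.5:  # c_2 = 0.5, c_1 = 3
        frob, bound = inv_norm_and_bound(A)
        ok = ok and frob <= bound
print(ok)  # True: the bound depends only on c_1 and c_2, as the lemma asserts
```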
\begin{definition}
An $A$-homogeneous quasi-norm is a Borel map $\rho_A : \mathbb{R}^d \to \mathbb{R}_0^+$ satisfying the following conditions:
\begin{enumerate}
\item[(i)] $\rho_A(x) = 0$ if and only if $x = 0$.
\item[(ii)] {\bf $A$-homogeneity}: $\rho_A(Ax) = |{\rm det}(A)| \rho_A(x)$.
\item[(iii)] {\bf Triangle inequality}: There exists a constant $C>0$ such that, for all $x,y \in \mathbb{R}^d$,
\[
\rho_A(x+y) \le C(\rho_A(x)+\rho_A(y))~.
\]
\end{enumerate}
\end{definition}
The next lemma shows that any two quasi-norms that are homogeneous with respect to the same expansive matrix $A$ are equivalent. In the following, we use the term {\bf ellipsoid} for images $CB_1(0)$ of the unit ball (with respect to the Euclidean norm) under some invertible matrix $C$.
\begin{lemma}
\begin{enumerate}
\item[(i)] Any two $A$-homogeneous quasi-norms on $\mathbb{R}^d$ are equivalent, i.e., given two such mappings $\rho_1,\rho_2$, there exists a constant $C \ge 1$ such that, for all $x \in \mathbb{R}^d$:
\[
\frac{1}{C} \rho_1(x) \le \rho_2(x) \le C \rho_1(x)~.
\]
\item[(ii)] There exist an ellipsoid $\Delta_A$ with $\lambda(\Delta_A) = 1$ and a number $r>1$ such that
\[
\Delta_A \subset r \Delta_A \subset A \Delta_A~.
\] Setting
\[
\rho_A(x) = |{\rm det}(A)|^j
\] for $x \in A^{j+1} \Delta_A \setminus A^j \Delta_A$, and $\rho_A(0) = 0$, then defines an $A$-homogeneous quasi-norm.
\end{enumerate}
\end{lemma}
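For a concrete instance of part (ii), take $A = 2 I_2$ and let $\Delta_A$ be the disc of area $1$ (radius $1/\sqrt{\pi}$); then $\Delta_A \subset \frac{3}{2}\Delta_A \subset A\Delta_A$, and the construction yields $\rho_A(x) = 4^j$ for $2^j/\sqrt{\pi} \le |x| < 2^{j+1}/\sqrt{\pi}$. The following Python sketch (our illustration) implements this step function and checks the $A$-homogeneity $\rho_A(Ax) = |{\rm det}(A)|\,\rho_A(x)$ on sample points:

```python
import math

R0 = 1 / math.sqrt(math.pi)   # radius of the disc Delta_A with area 1

def rho(x):
    """Step quasi-norm for A = 2*I on R^2 and Delta_A = disc of area 1:
    rho(x) = |det A|^j = 4^j for x in A^{j+1} Delta_A \\ A^j Delta_A."""
    if x == (0.0, 0.0):
        return 0.0
    r = math.hypot(*x)
    j = math.floor(math.log2(r / R0))   # unique j with 2^j R0 <= r < 2^{j+1} R0
    return 4.0**j

# A-homogeneity: rho(A x) = |det A| * rho(x), here |det A| = 4
pts = [(0.3, 0.7), (-1.2, 2.5), (10.0, -0.01)]
print(all(math.isclose(rho((2*x, 2*y)), 4*rho((x, y))) for x, y in pts))  # True
```

Homogeneity holds exactly here because doubling $|x|$ shifts $\log_2(|x|/R_0)$ by precisely one, so the floor increases by one.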
As will be seen below, equivalence of induced quasi-norms is closely related to the equivalence relation $\sim_{\dot{B}}$. By contrast, the equivalence relation induced by the {\em inhomogeneous} spaces will be seen to depend on a slightly less restrictive type of equivalence:
\begin{definition} \label{defn:coarse}
Let $\rho_A$ and $\rho_B$ denote two quasi-norms on $\mathbb{R}^d$. $\rho_A$ and $\rho_B$ are called {\bf coarsely equivalent} if there exist constants $c \ge 1$ and $R \ge 0$ such that
\[
\frac{1}{c} \rho_A - R \le \rho_B \le c \rho_A + R~.
\]
We call two expansive matrices $A$ and $B$ (coarsely) equivalent if and only if the induced quasi-norms are (coarsely) equivalent.
\end{definition}
\begin{remark} \label{rem:coarse_equiv}
\begin{enumerate}
\item[(a)] The notion of coarse equivalence originates from geometric group theory, and also plays a role in operator theory and global analysis; see \cite{Roe} for an introduction. Clearly, equivalent quasi-norms are also coarsely equivalent. The converse will be seen to be false in general.
\item[(b)] $\rho_A$ and $\rho_B$ are coarsely equivalent if and only if there exist $R>0$ and $c \ge 1$ such that for all $x \in \mathbb{R}^d$ with $|x| \ge R$, the inequalities
\[
\frac{1}{c} \rho_A(x) \le \rho_B(x) \le c \rho_A(x)
\] hold.
I.e., coarse equivalence can be understood as equivalence of the quasi-norms ``at infinity''.
The elementary proof of the equivalence uses that $\rho_A$ and $\rho_B$ are bounded on compact sets.
\end{enumerate}
\end{remark}
We will be interested in understanding when two different matrices are (coarsely) equivalent. As a first step in this direction, we want to translate (coarse) equivalence of $A$ and $B$ to conditions involving certain products of powers of the two matrices. We first introduce a quantity that will frequently appear in the following criteria.
\begin{definition} \label{defn:eps}
Let $A$ and $B$ be two expansive matrices. We let
\[
\epsilon(A,B) = \frac{\ln (|{\rm det}(A)|)}{\ln (|{\rm det}(B)|)}~.
\]
\end{definition}
Every real-valued matrix $A$ can be understood as inducing a linear map on $\mathbb{C}^d$, and in the following, eigenvalues and corresponding eigenvectors of such a matrix $A$ will be understood as possibly complex-valued.
See \cite[Lemma (10.2)]{Bow03} for a proof of the following statement.
\begin{lemma} \label{lem:norm_equiv}
Let $A$ and $B$ be two expansive matrices. Then $A$ and $B$ are equivalent if and only if
$\sup_{k \in \mathbb{Z}} \left\| A^{-k} B^{\lfloor \epsilon k \rfloor} \right\| < \infty$ holds,
with $\epsilon = \epsilon(A,B)$.
\end{lemma}
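The criterion is straightforward to test numerically for diagonal expansive matrices, where the operator norm of ${\rm diag}(d_1,\ldots,d_n)$ is $\max_i |d_i|$. The following Python sketch (our illustration; the matrix pairs are chosen ad hoc) evaluates $\sup_{|k| \le k_{\max}} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \|$ for one equivalent and one inequivalent pair: for $A = {\rm diag}(2,4)$, $B = {\rm diag}(4,16)$ one has $\epsilon = \tfrac{1}{2}$ and the products stay bounded, while for $A = {\rm diag}(2,4)$, $B = {\rm diag}(4,2)$ one has $\epsilon = 1$ and $A^{-k}B^{k} = {\rm diag}(2^{k}, 2^{-k})$ is unbounded.

```python
import math

def sup_norm_ratio(dA, dB, kmax=200):
    """For diagonal expansive A, B (given by their diagonals): approximate
    sup over |k| <= kmax of ||A^{-k} B^{floor(eps*k)}||, with
    eps = ln|det A| / ln|det B|."""
    detA, detB = math.prod(dA), math.prod(dB)
    eps = math.log(abs(detA)) / math.log(abs(detB))
    sup = 0.0
    for k in range(-kmax, kmax + 1):
        m = math.floor(eps * k)
        # operator norm of a diagonal matrix = max |entry|
        sup = max(sup, max(abs(a**(-k) * b**m) for a, b in zip(dA, dB)))
    return sup

print(sup_norm_ratio([2.0, 4.0], [4.0, 16.0]) < 10)   # True: equivalent pair
print(sup_norm_ratio([2.0, 4.0], [4.0, 2.0]) > 1e6)   # True: not equivalent
```

Of course, a bounded supremum over a finite range of $k$ is only evidence, not a proof, of equivalence; for these diagonal examples the behavior for all $k$ can be read off in closed form.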
\begin{remark} \label{rem:transpose}
As a consequence of the characterization, we note that $A^T$ and $B^T$ are equivalent if and only if $A$ and $B$ are.
The first statement is equivalent to
\begin{eqnarray*}
\infty & > & \sup_{k \in \mathbb{Z}} \left\| (A^T) ^{-k} (B^{T})^{\lfloor \epsilon k \rfloor} \right\|
\\ & = & \sup_{k \in \mathbb{Z}} \left\| \left( B^{\lfloor \epsilon k \rfloor} A^{-k} \right)^T \right\| \\
& = & \sup_{k \in \mathbb{Z}} \left\| B^{\lfloor \epsilon k \rfloor} A^{-k} \right\|
\end{eqnarray*}
whereas the criterion of Lemma \ref{lem:norm_equiv}, applied to $A$ and $B$ in reverse order, yields
\[
\sup_{\ell \in \mathbb{Z}} \left\| B^{-\ell} A^{\lfloor \ell/\epsilon \rfloor} \right\| < \infty\, ,
\] and it is not hard to see that these two conditions are equivalent.
We note that the analogous statement for coarse equivalence is wrong, see the remark following Theorem \ref{thm:class_equiv_matr}.
\end{remark}
We next give a version of Lemma \ref{lem:norm_equiv} for coarse equivalence, which will be central to the following. This is why its proof, which is a straightforward adaptation of the proof of \cite[Lemma (10.2)]{Bow03}, is included.
\begin{lemma} \label{lem:char_cequiv_matr1} Let $A$ and $B$ be two expansive matrices.
Then $A$ and $B$ are coarsely equivalent if and only if
\begin{equation} \label{eqn:cequiv_norm}
\sup_{k \in \mathbb{N}} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \| < \infty~,
\end{equation} with $\epsilon = \epsilon(A,B)$.
\end{lemma}
\begin{proof}
Throughout the proof, let $\epsilon = \epsilon(A,B)$. First assume that (\ref{eqn:cequiv_norm}) holds. We note that it is equivalent to
\begin{equation} \label{eqn:cequiv_norm_v}
\sup_{k \ge k_0} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \| < \infty
\end{equation}
for any fixed $k_0 \in \mathbb{Z}$. Furthermore, one has for all $k \in \mathbb{Z}$ that
\begin{equation} \label{eqn:dets_equiv}
|\det(B)|^{-1} \le |\det(A^{-k} B^{\lfloor \epsilon k \rfloor})| \le 1~.
\end{equation} Hence, Lemma \ref{lem:norm_est} and (\ref{eqn:cequiv_norm_v}) also imply
\[
\sup_{k \ge k_0} \| B^{-\lfloor \epsilon k \rfloor} A^k \| < \infty
\]
Hence there exist constants $0 < C \le D < \infty$, depending only on $k_0 \in \mathbb{Z}$ such that
\[
\forall k \ge k_0 \,, \, \forall z \in \mathbb{R}^d \setminus \{ 0 \} ~: ~ C \le \frac{|A^{-k} z|}{|B^{-\lfloor \epsilon k\rfloor} z|} \le D ~.
\]
Fix $r>0$ such that for all $x \in \mathbb{R}^{d} \setminus \{ 0 \}$ there exists $k \in \mathbb{Z}$ such that
\[
1 \le |A^{-k} x | \le r~,~ 1 \le |B^{-k} x | \le r~,
\] e.g., $r = \max(\| A \|,\| B \|)$. Fix $x \in \mathbb{R}^d$ with $|x| \ge 1$, and let $k \in \mathbb{Z}$ with $1 \le |A^{-k} x| \le r$.
By Lemma \ref{lem:exp_norm}, we have for a suitable number $\lambda_+>0$ depending on $A$
\[
r \ge |A^{-k} x| \ge |x| \| A^{k}\|^{-1} \ge \lambda_+^{-k} /c
\] hence $k \ge k_0$, with $k_0$ only depending on $A$ and $r$.
Define
\begin{eqnarray*}
c_A = \inf \{ \rho_A(z) : 1 \le |z| \le r \}, & & d_A = \sup \{ \rho_A(z) : 1 \le |z| \le r \}, \\
c_B = \inf \{ \rho_B(z) : 1/D \le |z| \le r/C \}, & & d_B = \sup \{ \rho_B(z) : 1/D \le |z| \le r/C \}~.
\end{eqnarray*}
Now the choice of $k$ and homogeneity of $\rho_A$ yields
\begin{equation} \label{eqn:ineq_rhoA}
c_A |{\rm det}(A)|^k \le \rho_A(x) \le d_A |{\rm det}(A)|^k ~.
\end{equation} Furthermore, the choice of $k$ yields via (\ref{eqn:cequiv_norm_v}) that
\begin{equation}\label{eqn:ineq_rhoB}
1/D \le |B^{-\lfloor \epsilon k \rfloor} x| \le r/C~,
\end{equation}
whence we get
\begin{eqnarray*}
\rho_B(x) & = & |{\rm det}(B)|^{\lfloor \epsilon k \rfloor} \rho_B(B^{-\lfloor \epsilon k \rfloor} x) \le |{\rm det}(B)|^{\lfloor \epsilon k \rfloor} d_B \\
& \le & |{\rm det}(A)|^k |{\rm det}(B)| d_B \le \rho_A(x) |\det(B)| d_B/c_A~.
\end{eqnarray*}
By a similar argument,
\[
\rho_B(x) \ge \rho_A(x) c_B/d_A~.
\]
To summarize, we have found constants $C_1,C_2>0$ such that, for all $x$ with $|x|\ge 1$, $C_1 \rho_A(x) \le \rho_B(x) \le C_2 \rho_A(x)$. Hence the two quasi-norms are coarsely equivalent, by Remark \ref{rem:coarse_equiv}(b).
For the converse statement, assume that $\rho_A$ and $\rho_B$ are coarsely equivalent. By Remark \ref{rem:coarse_equiv}(b), there are $R, C > 0$ such that for all $x \in \mathbb{R}^d$ with $|x|\ge R$, the inequalities
$\frac{1}{C} \rho_B(x) \le \rho_A(x) \le C \rho_B(x)$ hold.
By Lemma \ref{lem:exp_norm}, there exists $\ell_0 \in \mathbb{N}$ such that $|B^{\ell}x| \ge |x|$ holds for all $\ell \ge \ell_0$ and $x \in \mathbb{R}^d$. Hence for all $k \ge k_0 := \lceil \ell_0/\epsilon \rceil +1$ and all
$x \in \mathbb{R}^d$ with $|x| = R$, it follows that $|B^{\lfloor \epsilon k \rfloor} x| \ge |x| \ge R$, hence the coarse equivalence assumption gives rise to
\begin{eqnarray*}
\rho_A (A^{-k} B^{\lfloor \epsilon k \rfloor}x) & = & |\det(A)|^{-k} \rho_A(B^{\lfloor \epsilon k \rfloor} x) \le C |\det(A)|^{-k} \rho_B(B^{\lfloor \epsilon k \rfloor} x) \\
& = & C |\det(A)|^{-k} |\det(B)|^{\lfloor \epsilon k \rfloor} \rho_B(x) \le \underbrace{C |\det(B)| \sup \{ \rho_B(z) : |z| = R \}}_{=K}~.
\end{eqnarray*}
Thus we have that
\[
\{ A^{-k} B^{\lfloor \epsilon k \rfloor} x : k \ge k_0, |x| = R \} \subset \{ x \in \mathbb{R}^d : \rho_A(x) \le K \} ~,
\] and the right-hand side is bounded, hence contained in a ball of radius $R_0$ with respect to the euclidean norm. But this implies
\[
\sup_{k \ge k_0} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \| \le R_0~,
\] and the converse is shown.
\end{proof}
The following remark notes some elementary consequences of Lemmas \ref{lem:norm_equiv} and \ref{lem:char_cequiv_matr1}.
\begin{remark} \label{rem:equiv_JNF}
\begin{enumerate}
\item[(a)] Assume that the matrices $A,B$ have the same block diagonal structure
\[ A = \left( \begin{array}{cccc} A_1 & & & \\ & A_2 & & \\ & & \ddots & \\ & & & A_k \end{array} \right) \,\, , \,\, B = \left( \begin{array}{cccc} B_1 & & & \\ & B_2 & & \\ & & \ddots & \\ & & & B_k \end{array} \right)
\] with the additional property that $\epsilon(A_i,B_i) = \epsilon(A,B)$, for $i=1,\ldots,k$. Then a straightforward application of the criteria for (coarse) equivalence yields that $A$ and $B$ are (coarsely) equivalent if and only if $A_i$ and $B_i$ are, for all $i=1,\ldots,k$.
\item[(b)] If $A,B$ and $A',B'$ are related by $A = CA'C^{-1}$ and $B=CB'C^{-1}$, then $A$ and $B$ are (coarsely) equivalent if and only if $A'$ and $B'$ are.
\end{enumerate}
\end{remark}
\section{Anisotropic Besov spaces viewed as decomposition spaces}
\label{sect:ani_dec}
We will now establish the connection between anisotropic Besov spaces and decomposition spaces. First we need to introduce a class of coverings.
\begin{definition}
Let $A$ denote an expansive matrix. Let $C \subset \mathbb{R}^d$ be open, such that $\overline{C}$ is a compact subset of $\mathbb{R}^d \setminus \{ 0 \}$, and define, for $j \in \mathbb{Z}$,
\[
Q_j = A^j \overline{C}~.
\]
If $\bigcup_{j \in \mathbb{Z}} Q_j = \mathbb{R}^d \setminus \{ 0 \}$, then
$\mathcal{Q} = (Q_j)_{j \in \mathbb{Z}}$ is called a {\bf homogeneous covering induced by $A$}. An {\bf inhomogeneous covering induced by $A$} is given by the family $\mathcal{Q}_A^i = (Q_j^i)_{j \in \mathbb{N}_0}$, where $Q_j^i = Q_j = A^j \overline{C}$ for $j \ge 1$, and $Q_0^i = \overline{C_0}$, for a relatively compact open set $C_0$ with the property that
\[
\bigcup_{j \in \mathbb{N}_0} Q_j^i = \mathbb{R}^d~.
\]
\end{definition}
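As a concrete sanity check (our illustration), take $A = 2I_2$ and $C = \{ x : 1/2 < |x| < 2 \}$. Then membership $x \in Q_j = A^j \overline{C}$ amounts to $2^{j-1} \le |x| \le 2^{j+1}$, so every $x \neq 0$ lies in at least two and at most three of the sets $Q_j$; this exhibits both the covering property and an instance of admissibility. A Python sketch:

```python
import math

def covering_indices(x, jrange=range(-60, 61)):
    """Indices j with x in Q_j = A^j closure(C), for A = 2*I on R^2 and
    C = {1/2 < |x| < 2}: membership means 2**(j-1) <= |x| <= 2**(j+1)."""
    r = math.hypot(*x)
    return [j for j in jrange if 2.0**(j - 1) <= r <= 2.0**(j + 1)]

pts = [(1e-9, 0.0), (0.3, 0.4), (7.0, 24.0), (1e9, 1e9)]
hits = [covering_indices(p) for p in pts]
# every nonzero point is covered, and by a bounded number of Q_j
print(all(1 <= len(h) <= 3 for h in hits))  # True
```

The uniform bound on the number of overlapping $Q_j$ is exactly what the admissibility argument in the next lemma establishes in general.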
\begin{lemma} \label{lem:ind_cv_adm}
Let $A$ denote an expansive matrix, $\mathcal{Q} = (A^j Q_0)_{j \in \mathbb{Z}}$ a covering induced by $A$, and $\mathcal{Q}^i$ an associated inhomogeneous covering. Then $\mathcal{Q}, \mathcal{Q}^i$ are almost structured admissible coverings.
\end{lemma}
\begin{proof}
First let us consider the homogeneous case.
It is clear that the homogeneous covering is almost structured, with standardization provided by $T_j = A^j$, $Q_j' = Q_0$ and $b_j = 0$, for $j \in \mathbb{Z}$. Hence the only missing property is admissibility. By assumption, $Q_0 \subset \mathbb{R}^d$ has compact closure in $\mathbb{R}^d \setminus \{ 0 \}$.
Define, for $R>1$, the {\bf annulus}
\[
C_R = \{ x \in \mathbb{R}^d : R^{-1} < |x| < R \}~.
\] Compactness of $\overline{Q_0} \subset \mathbb{R}^d \setminus \{ 0 \}$ implies $Q_0 \subset C_R$ for $R$ sufficiently large. Now Lemma \ref{lem:exp_norm} implies the existence of $j_0 \in \mathbb{N}$ such that
\[ Q_j \cap Q_0 \subset A^j C_R \cap C_R = \emptyset \]
for all $j$ with $|j|>j_0$. It follows that $Q_j \cap Q_i = \emptyset$ whenever $|j-i|>j_0$, and that entails admissibility of the covering.
In the inhomogeneous case, we use the same standardization as in the homogeneous case for $j \ge 1$, together with $Q_0' = \overline{C_0}$, $T_0 = I_d$ and $b_0 = 0$, and see that the covering is almost structured. The remainder of the argument follows as before.
\end{proof}
\begin{lemma}
Let $\mathcal{Q}$ and $\mathcal{P}$ denote two coverings induced by the same matrix $A$, either both homogeneous or both inhomogeneous. Then $\mathcal{Q}$ and $\mathcal{P}$ are equivalent.
\end{lemma}
\begin{proof}
We first treat the homogeneous case. By assumption, $Q_j = A^j Q_0$ and $P_i = A^i P_0$. Since $\mathbb{R}^d \setminus \{ 0 \}$ is connected, it follows that
\[
\bigcup_{k \in \mathbb{N}} P_0^{k*} = \mathbb{R}^d \setminus \{ 0 \}~,
\] and since the interiors of the $P_0^{k*}$ are open and cover $\mathbb{R}^d \setminus \{ 0\}$, and $Q_0$ is relatively compact in $\mathbb{R}^d \setminus \{ 0 \}$, it follows that
\[
Q_0 \subset P_0^{k*}
\] for some $k \in \mathbb{N}$. But then, by construction of the induced coverings,
\[
Q_j = A^j Q_0 \subset A^j P_0^{k*} = P_j^{k*}~,
\] for all $j \in \mathbb{Z}$, and thus $\mathcal{Q}$ is almost subordinate to $\mathcal{P}$. Now symmetry yields equivalence of the coverings.
Now $Q_j \cap P_i \not= \emptyset$ is equivalent to
\[
A^{j-i} Q_0 \cap P_0 \not= \emptyset~.
\] Using $Q_0,P_0 \subset C_R$ for $R$ sufficiently large, followed by the argument from the proof of Lemma \ref{lem:ind_cv_adm},
yields that $|j-i|<j_0$, and thus
\[
\left\| A^{-i} A^{j} \right\| \le C~,
\] for a constant $C$ independent of $i,j$.
The statement concerning the inhomogeneous case follows from these observations, and from the fact that every compact subset $K \subset \mathbb{R}^d$ is contained in
$(Q_0^i)^{k*}$, for sufficiently large $k$.
\end{proof}
\begin{lemma}
For any two coverings $\mathcal{Q} = (A^j Q_0)_{j \in \mathbb{Z}}$ and $\mathcal{P}= (A^i P_0)_{i \in \mathbb{Z}}$ induced by the same matrix $A$, one has
\[ \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v}) = \mathcal{D}(\mathcal{P},{\rm L}^p,\ell^q_{v}) ~.\]
The same statement holds for inhomogeneous coverings.
\end{lemma}
\begin{proof}
The previous lemma shows that $\mathcal{Q}$ and $\mathcal{P}$ are equivalent. Hence in order to apply Lemma \ref{lem:suf_dc_equal},
it remains to show that $\|A^{j-i}\| < C$, for all pairs $i,j$ with $Q_j \cap P_i \not= \emptyset$. But the proof of the previous lemma shows that $Q_j \cap P_i \not= \emptyset$ entails $|j-i|< j_0$, with a fixed $j_0$, hence the norm estimate holds as well.
\end{proof}
We next want to identify (homogeneous and inhomogeneous) anisotropic Besov spaces as special cases of decomposition spaces. In the context of this paper, the chief purpose of this result is to make Theorem \ref{thm:rigidity} and Lemma \ref{lem:suf_dc_equal} available for the discussion of anisotropic Besov spaces. It is however of some independent interest, since it allows one to include the anisotropic Besov spaces in a unified view of a large range of decomposition spaces (e.g., $\alpha$-modulation spaces, curvelet smoothness spaces, wavelet coorbit spaces, etc.), see \cite[Chapter 6]{VoDiss}. For the case of inhomogeneous Besov spaces associated to diagonal matrices, the following result is contained in \cite{BorupNielsenDecomposition}; for isotropic Besov spaces, it can be found in \cite[Lemma 6.2.2]{VoDiss}. Our proof is an adaptation of the proof for the latter to the anisotropic setting.
In order to motivate the following proof, we rewrite the Besov space norm of a tempered distribution $f$ as a decomposition space norm of its Fourier transform:
\begin{eqnarray*}
\| f \|_{\dot{B}_{p,q}^\alpha} & = & \left\| \left( \| f \ast \psi_j \|_{p} \right)_{j \in \mathbb{Z}} \right\|_{\ell^q_{v_{\alpha,A}}}
\\ & = & \left\| \left( \left\| \mathcal{F}^{-1} \left( \widehat{f} \cdot \phi_j \right) \right\|_{p} \right)_{j \in \mathbb{Z}} \right\|_{\ell^q_{v_{\alpha,A}}}
\end{eqnarray*} where we use $\phi_j = \widehat{\psi}_j$. Provided this family is a BAPU of a suitable covering, the right-hand side becomes a decomposition space norm, which suggests that the Fourier transform induces an isomorphism between the two spaces. There is however one subtlety to consider: Note that the ``reservoir'' of candidates for elements in the decomposition spaces $\mathcal{D}(\mathcal{P},{\rm L}^p,\ell^q_{v_{\alpha,A}})$ consists of {\em distributions} on the open set $\mathcal{O} = \mathbb{R}^d \setminus \{ 0 \}$, whereas $\widehat{f}$ is {\em tempered}. Hence the remaining part of the proof consists mostly in showing that every element of the decomposition space is in fact (the restriction of) a tempered distribution.
A first step in this direction is the following theorem, see \cite[Theorem 8.2]{Vo_Embed1}; observe also the remark following that Theorem.
\begin{theorem} \label{thm:embed_dc_tempered}
Let $\mathcal{Q} = (Q_i)_{i \in I}$ denote an almost structured admissible covering of $\mathcal{O} \subset \mathbb{R}^d$ with standardization $((T_i)_{i \in I}, (b_i)_{i \in I}, (Q_i')_{i \in I})$ and BAPU $(\varphi_i)_{i \in I}$.
Let $v$ denote a $\mathcal{Q}$-moderate weight on $I$. For each $N \in \mathbb{N}_0$,
define
\[
w^{(N)} = (w_i^{(N)})_{i \in I}~, w_i^{(N)} = |{\rm det}(T_i)|^{1/p} \max \left\{ 1, \| T_i^{-1} \|^{d+1} \right\} \left[ \inf_{\xi \in Q_i} (1+|\xi|) \right]^{-N}~.
\]
Let $I_0 \subset I$ and
assume that, for some $N \in \mathbb{N}$ one has $w^{(N)} / v \in \ell^1(I_0)$. Then the map
\begin{eqnarray*}
\Phi & : & \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_v) \to \mathcal{S}'(\mathbb{R}^d)~ \\
\Phi(f) & : & \mathcal{S}(\mathbb{R}^d) \ni g \mapsto \sum_{i \in I_0} \langle \varphi_i \cdot f, g \rangle
\end{eqnarray*}
is well-defined.
\end{theorem}
\begin{theorem} \label{thm:besov_as_decsp}
Let $A$ denote an expansive matrix, and let $\mathcal{Q}_A$ denote a homogeneous covering induced by $A^T$. For $\alpha \in \mathbb{R}$, define the weight
\[
v_{\alpha,A} : \mathbb{Z} \to \mathbb{R}^+~,v_{\alpha,A}(j) = |{\rm det}(A)|^{j \alpha}~.
\]
Denote by $\rho$ the restriction map from $\mathcal{S}'(\mathbb{R}^d)$ to $\mathcal{D}'(\mathcal{O})$, where $\mathcal{O} = \mathbb{R}^d \setminus \{ 0 \}$ in the homogeneous and $\mathcal{O} = \mathbb{R}^d$ in the inhomogeneous case. Then $\rho \circ \mathcal{F}$
is a topological isomorphism
\[
\rho \circ \mathcal{F} : \dot{B}_{p,q}^\alpha(A) \to \mathcal{D}(\mathcal{Q}_A,{\rm L}^p,\ell^q_{v_{\alpha,A}})~.
\]
Similarly, if $\mathcal{Q}_A^i$ denote an inhomogeneous covering induced by $A^T$, then
\[
\rho \circ \mathcal{F} : {B}_{p,q}^\alpha(A) \to \mathcal{D}(\mathcal{Q}_A^i,{\rm L}^p,\ell^q_{v_{\alpha,A}})~.
\]
is a topological isomorphism, as well. Here $v_{\alpha,A}$ denotes the restriction of the weight for the homogeneous setting to $\mathbb{N}_0$.
\end{theorem}
\begin{proof}
We first consider the inhomogeneous case.
Let $\psi_0,\psi$ denote a pair of functions fulfilling (\ref{eqn:def_wv2_strong_ih}), and define $\varphi_j = \widehat{\psi}_j$ for $j \ge 0$.
Then $(\varphi_j)_{j \in \mathbb{N}_0}$ is a BAPU relative to the admissible covering $Q_j = \varphi_j^{-1}(\mathbb{C} \setminus \{ 0 \})$. In fact, the BAPU is $p$-admissible, since
\[
|{\rm det}(T_j)|^{\frac{1}{t}-1} \| \mathcal{F}^{-1} \varphi_j \|_{{\rm L}^p} = \| \psi \|_{{\rm L}^p}
\] holds for all $j >0$ and all $p \in (0,1]$, with $t=\min(p,1)$. Using this family to compute the decomposition space norm, we find that
\begin{eqnarray*}
\| f \|_{{B}_{p,q}^\alpha (A)} & = & \left\| \left( \| f \ast \psi_j \|_{{\rm L}^p} \right)_{j \in \mathbb{N}_0} \right\|_{\ell^q_{v_{\alpha,A}}} \\
& = & \left\| \left( \| \mathcal{F}^{-1}(\widehat{f} \cdot \varphi_j) \|_{{\rm L}^p} \right)_{j \in \mathbb{N}_0} \right\|_{\ell^q_{v_{\alpha,A}}} \\
& = & \| \widehat{f } \|_{\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})}~.
\end{eqnarray*}
Hence the Fourier transform, mapping $f$ to (the restriction to $C_c^\infty(\mathbb{R}^d)$ of) its Fourier transform, is an isometric embedding of ${B}_{p,q}^\alpha(A)$ into $\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})$, and it remains to show that it is onto.
To this end, we consider the auxiliary map
\[
\Phi : \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}}) \to \mathcal{S}'(\mathbb{R}^d)
\] defined by
\[
(\Phi(f)) (g) = \sum_{j \ge 0} f(\varphi_j \cdot g)~.
\]
In order to apply Theorem \ref{thm:embed_dc_tempered}, we need an estimate for
\[
w^{(N)}(j) = |\det(T_j)|^{1/p} \max \{ 1, \| T_j^{-1} \|^{d+1}\} \left[ \inf_{\xi \in Q_j} (1+|\xi|) \right]^{-N}
\] where $T_j = (A^T)^j$, for $j>0$. Picking $\epsilon >0$ with $\mathbf{B}_\epsilon(0) \cap Q_1 = \emptyset$, we get by Lemma \ref{lem:exp_norm} that
\[
\inf_{\xi \in Q_j} |\xi| \ge C \lambda_-^j~,
\] whenever $\lambda_-$ is strictly between $1$ and the smallest eigenvalue modulus of $A$. In addition, $\sup_{j \ge 0} \| T_j ^{-1}\|$ is bounded, hence we obtain
\[
\frac{w^{(N)}(j)}{v_{\alpha,A}(j)} \le C' |\det(A)|^{j/p-\alpha j} \lambda_-^{-jN}~,
\] in particular, $\frac{w^{(N)}}{v_{\alpha,A}} \in \ell^1(\mathbb{N}_0)$ as soon as $|\det(A)|^{1/p-\alpha} < \lambda_-^N$. Hence $\Phi$ is well-defined.
Now, given any $f \in \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}}) $ and every $g \in \mathcal{D}(\mathbb{R}^d)$, we have that $g = \sum_{j=0}^M \varphi_j \cdot g$ for $M$ sufficiently large, and thus
\[
f(g) = \sum_{j=0}^M f(\varphi_j \cdot g) = (\Phi(f))(g)~.
\] Thus $\rho \circ \Phi$ is the identity operator on $\mathcal{D}(\mathcal{Q}, {\rm L}^p,\ell^q_{v_{\alpha,A}})$.
Now for any $f \in \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}}) $ we can define $u = \mathcal{F}^{-1}(\Phi(f))$, and obtain that $\rho(\widehat{u}) = f$.
We now turn to the homogeneous case. We choose a wavelet $\psi$ fulfilling the condition (\ref{eqn:def_wv2_strong}), and define $\varphi_j = \widehat{\psi}_j$. Just as above, $(\varphi_j)_{j \in \mathbb{Z}}$ is a $p$-admissible BAPU relative to the admissible covering $Q_j = \varphi_j^{-1}(\mathbb{C} \setminus \{ 0 \})$, and as before
\begin{eqnarray*}
\| f \|_{\dot{B}_{p,q}^\alpha (A)} & = & \left\| \left( \| f \ast \psi_j \|_{{\rm L}^p} \right)_{j \in \mathbb{Z}} \right\|_{\ell^q_{v_{\alpha,A}}} \\
& = & \left\| \left( \| \mathcal{F}^{-1}(\widehat{f} \cdot \varphi_j) \|_{{\rm L}^p} \right)_{j \in \mathbb{Z}} \right\|_{\ell^q_{v_{\alpha,A}}} \\
& = & \| \widehat{f } \|_{\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})}~.
\end{eqnarray*}
Again, it remains to see that $\rho \circ \mathcal{F}$ is onto. This time, we need auxiliary mappings
\[
\Phi_{1,2} : \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}}) \to \mathcal{S}'(\mathbb{R}^d)~,
\]
with
\[
\Phi_1 (f)(g) = \sum_{j \ge 0} f(\varphi_j \cdot g)~,
\]
and $\Phi_2$ defined later.
Just as in the inhomogeneous case it follows that $\Phi_1(f) \in \mathcal{S}'(\mathbb{R}^d)$ holds for all $f \in \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})$.
The small frequencies require more work.
Given any Schwartz function $g$ and $N \in \mathbb{N}$, let $P_N g$ denote the Taylor polynomial of $g$ around zero,
\[
P_N(g)(\xi) = \sum_{|\alpha|<N} \frac{\partial^\alpha g(0)}{\alpha !} \xi^\alpha~.
\]
Note that here we used the notation $|\alpha| = \sum_{i=1}^d \alpha_i$ for the length of a multi-index $\alpha$, somewhat in conflict with our notation for the Euclidean length. However, no serious confusion can arise from this in the following.
We then define, for any $f \in \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})$ and $g \in \mathcal{S}(\mathbb{R}^d)$,
\begin{equation} \label{eqn:Phi2}
\Phi_2(f)(g) = \sum_{j < 0} f(\varphi_j \cdot (g - P_N g))~.
\end{equation}
Our aim is to show that, for $N$ sufficiently large, the right-hand side converges and yields a tempered distribution.
As a first step towards convergence of the right-hand side, we write
\[
\varphi_j^* = \sum_{i: Q_i \cap Q_j \not= \emptyset} \varphi_i~,
\]
which implies $\varphi_j \cdot \varphi_j^* = \varphi_j$, and thus
\begin{eqnarray*}
\left| f (\varphi_j (g-P_N g))\right| & = & \left| (\varphi_j \cdot f) (\varphi_j^* \cdot (g-P_N g)) \right| \\
& = & \left| \mathcal{F}^{-1}( \varphi_j \cdot f) (\mathcal{F}(\varphi_j^* \cdot (g-P_N g)))\right| ~.
\end{eqnarray*}
Here we employed the definition of the Fourier transform of tempered distributions by duality. Note now that by assumption, the tempered distribution $\mathcal{F}^{-1}( \varphi_j \cdot f)$ is an ${\rm L}^p$-function, whereas $\mathcal{F}(\varphi_j^* \cdot (g-P_N g))$ is a Schwartz function. Hence we can continue estimating
\begin{eqnarray} \nonumber
\left| f (\varphi_j (g-P_N g))\right| & \le & \left\| \mathcal{F}^{-1}( \varphi_j \cdot f) \right\|_{\infty} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 \\
\label{eqn:partial_holder} & \le & C |{\rm det}(A)|^{j/p} \left\| \mathcal{F}^{-1}( \varphi_j \cdot f) \right\|_{p} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 ~,
\end{eqnarray}
with the last estimate due to \cite[Lemma 5.3]{Vo_Embed1}, furnishing a constant $C$ that is independent of $j$ and $f$.
We now sum the terms from (\ref{eqn:partial_holder}) and get
\begin{eqnarray}
\nonumber
\lefteqn{\sum_{j < 0} \left| f (\varphi_j (g-P_N g))\right| } \\ \nonumber & \le & C \sum_{j<0} |{\rm det}(A)|^{j/p} \left\| \mathcal{F}^{-1}( \varphi_j \cdot f) \right\|_{p} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 \\
& \le & C \sum_{j<0} \left( v_{\alpha,A}(j) \left\| \mathcal{F}^{-1}( \varphi_j \cdot f) \right\|_{p} \right) \left( |{\rm det}(A)|^{j/p} v_{\alpha,A}(j)^{-1} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 \right) \nonumber \\ & \le &
C \| f \|_{\mathcal{D}(\mathcal{Q},{\rm L}^p, \ell^q_{v_{\alpha,A}})} \left\| \left( |{\rm det}(A)|^{j/p} v_{\alpha,A}(j)^{-1} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 \right)_{j < 0} \right\|_{\ell^{\infty}} \label{eqn:zwischen}~.
\end{eqnarray}
This puts the norms $\left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1$ in the focus of our attention. Here the usual estimates relating decay of the Fourier transform to norms of the derivatives provide
\[
\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g)) \|_1 \le M \sum_{|\alpha| \le d+1} \left\| \partial^\alpha \left[ \varphi_j^* \cdot (g-P_N g) \right] \right\|_1~,
\] see e.g.\ \cite[Lemma 3.5]{Fu_coorbit}.
Using the Leibniz formula for derivatives of products yields
\begin{equation} \label{eqn:leibniz}
\partial^\alpha \left[ \varphi_j^* \cdot (g-P_N g) \right](\xi) = \sum_{\beta + \gamma = \alpha} \left( \begin{array}{c} \alpha \\ \beta \end{array} \right) \partial^\beta (\varphi_j^*) (\xi)
\partial^\gamma (g-P_N(g))(\xi) ~.
\end{equation}
By construction of the $\varphi_j$, we have $\varphi_j^* (\xi) = \varphi_0^*((A^T)^j \xi)$, for all $j < 0$. Choose $R>0$ such that ${\rm supp}(\varphi_0^*)$ is contained in the ball of radius $R$; then Lemma \ref{lem:exp_norm} furnishes a constant $C_{\varphi,1}>0$ with
\begin{equation} \label{eqn:supp_phi_est}
\forall \xi \in {\rm supp}(\varphi_j^*) ~:~|\xi| \le C_{\varphi,1} \lambda_-^{j}~,
\end{equation} where $\lambda_->1$ is a lower bound for the moduli of the eigenvalues of $A$.
Furthermore, the chain rule and the fact that $\varphi_j^*$ is a dilation of $\varphi_0^*$ allow us to estimate
\begin{equation} \label{eqn:part_phi}
\left| \partial^\alpha \varphi_j^*(\xi) \right| \le C_{\varphi,2} (1+\|h \|_\infty)^{|\alpha|}~.
\end{equation}
Given a multi-index $\gamma$ of order $\le d$, all partial derivatives of $\partial^\gamma (g-P_N g)$ of order less than $N-d$ vanish at zero. Hence Taylor's formula allows us to estimate
\begin{equation} \label{eqn:taylor}
\left| \partial^\gamma (g-P_N g) (\xi) \right| \le C_d |\xi|^{N-d} \underbrace{\sum_{|\alpha| \le N} \| \partial^\alpha g \|_{\infty}}_{C_{N,g}}~,
\end{equation}
for all $\xi \in \mathbb{R}^d$.
We can now employ the collected estimates to show unconditional convergence of the right-hand side of (\ref{eqn:Phi2}). Combining (\ref{eqn:part_phi}) with (\ref{eqn:taylor}) gives
\begin{equation}
\left| \partial^\beta (\varphi_j^*) (\xi) \partial^\gamma (g-P_N(g))(\xi) \right| \le C_d C_{\varphi,2} (1+\|h \|_\infty)^{|\alpha|} C_{N,g} |\xi|^{N-d}~,
\end{equation}
and on the support of this pointwise product, we can employ (\ref{eqn:supp_phi_est}) to finally get
\begin{equation}
\label{eqn:final_1}
\left| \partial^\beta (\varphi_j^*) (\xi) \partial^\gamma (g-P_N(g))(\xi) \right| \le C_d C_{\varphi,1} C_{\varphi,2} C_{N,g} \lambda_-^{j(N-d)} ~.
\end{equation}
As a further consequence of (\ref{eqn:supp_phi_est}), all $\varphi_j^*$ with $j< 0$ are supported in the ball of radius $C_{\varphi,1}$; hence integrating (\ref{eqn:final_1}) yields
\begin{equation}
\label{eqn:final_2}
\left\| \partial^\beta (\varphi_j^*) \, \partial^\gamma (g-P_N(g)) \right\|_1 \le C' C_{\varphi,1}^{d+1} C_{\varphi,2} C_{N,g} \lambda_-^{j(N-d)}~.
\end{equation}
Hence the triangle inequality applied to (\ref{eqn:leibniz}) yields that
\begin{equation} \label{eqn:final_3}
\left\| \partial^\alpha \left[ \varphi_j^* \cdot (g-P_N g) \right] \right\|_1 \le C'' C_{N,g} \lambda_-^{j(N-d)}~,
\end{equation}
with the constant $C''$ aggregating the constants $C_d,C_{\varphi,1},C_{\varphi,2}$, and the coefficients entering in the sum (\ref{eqn:leibniz}); observe that the latter are independent of $j$ and $N$.
This yields, for all $j < 0$,
\begin{eqnarray*}
|{\rm det}(A)|^{j/p} v_{\alpha,A}(j)^{-1} \left\| \mathcal{F}(\varphi_j^* \cdot (g-P_N g))\right\|_1 & \le & C' C_{N,g} |{\rm det}(A)|^{j(1/p-\alpha)} \lambda_-^{j(N-d)}
\end{eqnarray*}
and this expression can be uniformly bounded in $j$ as soon as $\lambda_-^{N-d} > |{\rm det}(A)|^{\alpha-1/p}$.
But then we get by (\ref{eqn:zwischen}) that
\[
\sum_{j < 0} \left| f (\varphi_j (g-P_N g))\right| \le C''' C_{N,g}~,
\] which yields, firstly, that the sum defining $\Phi_2(f)(g)$ converges unconditionally for all Schwartz functions $g$, and secondly, that the linear map $g \mapsto \Phi_2(f)(g)$ is indeed a tempered distribution.
Hence $\Phi(f) = \Phi_1(f) + \Phi_2(f)$ defines a tempered distribution, and the fact that $P_N g = 0$ for all $g \in C_c^\infty(\mathbb{R}^d \setminus \{ 0 \})$ yields that
$\Phi(f)(g) = f(g)$ for all $f \in \mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}})$ and all $g \in C_c^\infty(\mathbb{R}^d \setminus \{ 0 \})$. Now the same argument as in the inhomogeneous case allows us to conclude from this that $\rho \circ \mathcal{F}$ is onto.
\end{proof}
Thus the theory of decomposition spaces becomes available for the study of anisotropic Besov spaces, which puts the role of the induced coverings into focus.
The following lemmata transfer the comparison of induced coverings to the comparison of associated homogeneous quasi-norms. We begin with a characterization of weak equivalence for induced coverings.
\begin{lemma} \label{lem:char_we1}
Let $A$ and $B$ be two expansive matrices.
\begin{enumerate}
\item[(a)] The homogeneous coverings induced by $A$ and $B$ are weakly equivalent if and only if, for all $R>0$, one has
\begin{eqnarray} \label{eqn:ub1}
\sup_{i \in \mathbb{Z}} & & \left| \left\{ j \in \mathbb{Z} : \| A^{-j} B^i \| \ge R^{-1} \mbox{ and } \| B^{-i} A^j \| \ge R^{-1} \right\} \right| < \infty~,\\
\label{eqn:ub2} \sup_{j \in \mathbb{Z}} & & \left| \left\{ i \in \mathbb{Z} : \| A^{-j} B^i \| \ge R^{-1} \mbox{ and } \| B^{-i} A^j \| \ge R^{-1} \right\} \right| < \infty~,
\end{eqnarray}
\item[(b)] The inhomogeneous coverings induced by $A$ and $B$ are weakly equivalent if and only if, for all $R>0$, one has
\begin{eqnarray} \label{eqn:iub1}
\sup_{i \in \mathbb{N}_0} & & \left| \left\{ j \in \mathbb{N}_0 : \| A^{-j} B^i \| \ge R^{-1} \mbox{ and } \| B^{-i} A^j \| \ge R^{-1} \right\} \right| < \infty~,\\
\label{eqn:iub2} \sup_{j \in \mathbb{N}_0} & & \left| \left\{ i \in \mathbb{N}_0 : \| A^{-j} B^i \| \ge R^{-1} \mbox{ and } \| B^{-i} A^j \| \ge R^{-1} \right\} \right| < \infty~,
\end{eqnarray}
\end{enumerate}
\end{lemma}
\begin{proof}
For the proof of $(a)$, first assume that (\ref{eqn:ub1}) and (\ref{eqn:ub2}) hold for all $R>0$.
Fix $S>0$ large enough, so that $\bigcup_{j \in \mathbb{Z}} A^j C_S = \bigcup_{i \in \mathbb{Z}} B^i C_S = \mathbb{R}^d \setminus \{ 0 \}$. It is then sufficient to show that the coverings
$(A^j C_S)_{j \in \mathbb{Z}}$ and $(B^i C_S)_{i \in \mathbb{Z}}$ are weakly equivalent. Let $K$ denote a finite upper bound for the suprema in (\ref{eqn:ub1}) and (\ref{eqn:ub2}), for $R=S^2$. Given $i,j \in \mathbb{Z}$, one then has that
$i \in I_j$ if and only if $A^j C_S \cap B^i C_S \not= \emptyset$, or equivalently, if and only if $B^{-i} A^j C_S \cap C_S \not= \emptyset$. This implies the existence of $x \in C_S$ such that $B^{-i} A^j x \in C_S$.
Hence
\[
\| A^{-j} B^i \| \ge \frac{ | x | }{| B^{-i} A^j x |} \ge \frac{1}{S^2}~.
\] By the symmetric argument, also $\| B^{-i} A^j \| \ge 1/S^2$. But the choice of $K$ then yields that for any given $j \in \mathbb{Z}$, there are at most $K$ indices $i$ satisfying both inequalities, and thus we get
\[
\sup_{j \in \mathbb{Z}} |I_j| \le K~.
\] By symmetry, we obtain the second inequality, hence the coverings are weakly equivalent.
Now assume that $(A^j C_R)_{j \in \mathbb{Z}}$ and $(B^i C_R)_{i \in \mathbb{Z}}$ are weakly equivalent. Fix $j \in \mathbb{Z}$, and assume that $\| B^{-i} A^j \| \ge R^{-1}$ as well as $\| A^{-j} B^i \| \ge R^{-1}$ hold for some $i \in \mathbb{Z}$. We aim to show that $i \in I_j$; then the upper bound on $|I_j|$ provided by the assumption of weak equivalence yields (\ref{eqn:ub2}).
The inequality $\| A^{-j} B^i \| \ge R^{-1}$ yields $z_1 \in \mathbb{R}^d$ with $|z_1| = 1$ and $|A^{-j} B^i z_1| \ge R^{-1}$. Now if $|A^{-j} B^i z_1| < R$, then we have $A^{-j} B^i z_1 \in A^{-j} B^i C_R \cap C_R$, and thus $i \in I_j$.
Hence assume that $|A^{-j} B^i z_1| \ge R$. We use the inequality $\| B^{-i} A^j \| \ge R^{-1}$ to conclude the existence of $y \in \mathbb{R}^d$ with $|y| =1$ and $|B^{-i} A^j y| \ge R^{-1}$. If $|B^{-i} A^j y| < R$, we find $B^{-i} A^j y \in C_R \cap B^{-i} A^j C_R$, and thus $i \in I_j$.
Hence the remaining case is $|y| = |z_1| = 1$ with $|B^{-i} A^j y|\ge R$ and $|A^{-j} B^i z_1|\ge R$. Define $z_2 = \frac{1}{|B^{-i} A^j y|}B^{-i} A^j y$, hence $|z_2| = 1$. Pick a continuous curve $\varphi :[0,1] \to \{ x \in \mathbb{R}^d : |x| = 1 \}$ with $\varphi(0) = z_1$, $\varphi(1) = z_2$; then the function
\[
\widetilde{\varphi}: [0,1] \to \mathbb{R}^+~,t \mapsto |A^{-j} B^i \varphi(t)|
\] is continuous, with
\[
\widetilde{\varphi}(0) = |A^{-j} B^i z_1| \ge R~,\qquad \widetilde{\varphi}(1) = \frac{|A^{-j} B^i B^{-i} A^j y|}{|B^{-i} A^j y|} = \frac{1}{|B^{-i} A^j y|} \le R^{-1}~.
\] Hence the intermediate value theorem yields $t$ with $\widetilde{\varphi}(t) = 1$. It follows that $\varphi(t) \in C_R$ as well as $ A^{-j} B^i \varphi(t) \in C_R$, and thus
$A^j C_R \cap B^i C_R \not= \emptyset$. Hence this case also leads to $i \in I_j$.
Hence the uniform upper bound for all $|I_j|$ yields (\ref{eqn:ub2}). In the same way, we obtain (\ref{eqn:ub1}) from an upper bound on all $|J_i|,~i \in \mathbb{Z}$.
The inhomogeneous case (b) follows entirely analogously.
\end{proof}
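To make the counting condition of Lemma \ref{lem:char_we1} concrete, here is a small numerical check of ours (not from the text) in the scalar case $d=1$, $A = (a)$, $B = (b)$ with $a,b>1$: the two conditions $\|A^{-j}B^i\| \ge R^{-1}$ and $\|B^{-i}A^j\| \ge R^{-1}$ together reduce to $|i \ln b - j \ln a| \le \ln R$, which for each $j$ admits a bounded number of solutions $i$, uniformly in $j$.

```python
import math

# 1-d sanity check: for 1x1 matrices A = (a), B = (b), the operator norms
# are ||A^{-j} B^i|| = b^i / a^j and ||B^{-i} A^j|| = a^j / b^i, so the two
# conditions together read R^{-1} <= b^i / a^j <= R.
a, b, R = 2.0, 3.0, 100.0

def count_indices(j):
    """Number of i with both norm conditions satisfied, for fixed j."""
    return sum(1 for i in range(-100, 101)
               if b**i / a**j >= 1/R and a**j / b**i >= 1/R)

counts = [count_indices(j) for j in range(-50, 51)]
# At most floor(2 ln R / ln b) + 1 integers fit in the admissible interval.
bound = math.floor(2 * math.log(R) / math.log(b)) + 1
assert max(counts) <= bound
```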
\begin{lemma} \label{lem:char_equiv_matr}
Let $A$ and $B$ be two expansive matrices, and $\mathcal{Q}$ and $\mathcal{P}$ coverings induced by $A$ and $B$, respectively. Let $\rho_{A}$ denote an $A$-homogeneous quasi-norm and $\rho_{B}$ a $B$-homogeneous quasi-norm. Then the following are equivalent:
\begin{enumerate}
\item[(a)] The homogeneous coverings $\mathcal{Q}$ and $\mathcal{P}$ are weakly equivalent.
\item[(b)] The homogeneous coverings $\mathcal{Q}$ and $\mathcal{P}$ are equivalent.
\item[(c)] The quasi-norms $\rho_{A}$ and $\rho_{B}$ are equivalent.
\end{enumerate}
\end{lemma}
\begin{proof}
For the proof of $(a) \Rightarrow (c)$, we show the contraposition. Hence assume that the quasi-norms $\rho_{A}$ and $\rho_B$ are not equivalent. Then the equivalence (a) $\Leftrightarrow$ (c) of Lemma \ref{lem:norm_equiv} yields a sequence $(k_n)_{n \in \mathbb{N}}$ in $\mathbb{Z}$ such that
\[
\left\| A^{k_n} B^{-\lfloor \epsilon k_n \rfloor} \right\| \to \infty
\]
as $n \to \infty$.
By the choice of $\epsilon$, we have
\begin{eqnarray*}
|{\rm det}(B^{\lfloor \epsilon k \rfloor} A^{-k})| & = & |{\rm det}(A)|^{-k} |{\rm det}(B)|^{\epsilon k} |{\rm det}(B)|^{-\epsilon k + \lfloor \epsilon k \rfloor} \\
& = & |{\rm det}(B)|^{-\epsilon k + \lfloor \epsilon k \rfloor} \ge |{\rm det}(B)|^{-1} ~,
\end{eqnarray*} for all $k \in \mathbb{Z}$.
Hence Lemma \ref{lem:norm_est} implies that
\[
\left\| B^{\lfloor \epsilon k_n \rfloor} A^{-k_n}\right\| \to \infty~
\] as well.
Now fix $R>1$, $m \in \mathbb{N}$, and pick $k_n$ with
\[
\left\| A^{k_n} B^{-\lfloor \epsilon k_n \rfloor} \right\| \ge R^{-1} \max(\|B \|,\|B^{-1} \|)^m~
\]
as well as
\[
\left\| B^{\lfloor \epsilon k_n \rfloor} A^{-k_n}\right\| \ge R^{-1} \max(\|B \|,\|B^{-1} \|)^m~.
\]
Using the norm estimate $\| S T \| \le \| S \| \| T \|$ for arbitrary matrices $S,T$ then gives for $i=0,\ldots, m$ that
\[
\left\| A^{k_n} B^{i-\lfloor \epsilon k_n \rfloor} \right\| \ge \| A^{k_n} B^{-\lfloor \epsilon k_n \rfloor} \| \, \| B^{-1} \|^{-i} \ge R^{-1}
\]
as well as
\[
\left\| B^{\lfloor \epsilon k_n \rfloor-i} A^{-k_n}\right\| \ge \| B^{\lfloor \epsilon k_n \rfloor} A^{-k_n} \| \, \| B \|^{-i} \ge R^{-1}~.
\]
But this means that condition (\ref{eqn:ub2}) is violated, and thus the induced coverings are not weakly equivalent.
Now assume that $\rho_A$ and $\rho_B$ are equivalent. Let $\mathcal{Q}$ denote a covering induced by $A$.
Define
\[
\mathbf{B}^A_r(0) = \{ x \in \mathbb{R}^d~:~ \rho_A(x) < r \}~,
\] the ball with respect to $\rho_A$ with center $0$ and radius $r$.
The fact that $Q_0$ has compact closure in $\mathbb{R}^d \setminus \{ 0 \}$ then yields an $R>1$ with $Q_0 \subset \mathbf{B}^A_R(0) \setminus \mathbf{B}^A_{R^{-1}}(0)$. By construction of the covering on the one hand, and $A$-homogeneity of $\rho_A$ on the other, this entails
\[
Q_j = A^j Q_0 \subset \mathbf{B}^A_{R |{\rm det}(A)|^{j}}(0) \setminus \mathbf{B}^A_{R^{-1} |{\rm det}(A)|^{j}}(0)~.
\]
By analogous reasoning, we get (possibly after increasing $R$) that also
\[
P_i \subset \mathbf{B}^B_{R |{\rm det}(B)|^{i}}(0) \setminus \mathbf{B}^B_{R^{-1} |{\rm det}(B)|^{i}}(0)~,
\] with $\mathbf{B}^B_R(0)$ denoting balls with respect to $\rho_B$.
By assumption, there exists $c\ge 1$ such that
\[
\frac{1}{c} \rho_A(x) \le \rho_B(x) \le c \rho_A(x)~.
\]
Now let $i, j \in \mathbb{Z}$ with
\begin{eqnarray*}
\emptyset & \not= & Q_j \cap P_i \\
& \subset & \left( \mathbf{B}^A_{R |{\rm det}(A)|^{j}}(0) \setminus \mathbf{B}^A_{R^{-1} |{\rm det}(A)|^{j}}(0)\right) \cap
\left( \mathbf{B}^B_{R |{\rm det}(B)|^{i}}(0) \setminus \mathbf{B}^B_{R^{-1} |{\rm det}(B)|^{i}}(0) \right)~.
\end{eqnarray*}
For any $x$ contained in this intersection, one obtains in particular that
\[
R^{-1} |{\rm det}(A)|^j \le \rho_A(x) \le c \rho_B(x) \le c R |{\rm det}(B)|^i ~,
\]
which leads to
\[
\frac{|{\rm det}(A)|^j}{|{\rm det}(B)|^i} \le c R^2~.
\] But analogous reasoning also yields
\[
\frac{|{\rm det}(B)|^i}{|{\rm det}(A)|^j} \le c R^2~.
\]
Using $\epsilon = \frac{\ln(|{\rm det}(A)|)}{\ln (|{\rm det}(B)|)}$ as in Lemma \ref{lem:norm_equiv}, the two inequalities yield
\[ |j \epsilon - i| \le \frac{\ln(c R^2)}{\ln(|\det(B)|)} ~.\]
Thus, with $i_0 = \lfloor j \epsilon \rfloor$ and $K = \lceil \frac{\ln(c R^2)}{\ln(|{\rm det}(B)|)} \rceil +1$, we get
\begin{eqnarray*}
Q_j & = & A^j Q_0 \subset \bigcup_{\ell=i_0-K}^{i_0+K} \mathbf{B}^B_{R|{\rm det}(B)|^\ell}(0)\setminus \mathbf{B}^B_{R^{-1}|{\rm det}(B)|^\ell}(0) \\
& = & B^{i_0} \left( \bigcup_{\ell = -K}^K \mathbf{B}^B_{R|{\rm det}(B)|^\ell}(0)\setminus \mathbf{B}^B_{R^{-1}|{\rm det}(B)|^\ell}(0) \right)~.
\end{eqnarray*}
Since
\[
\overline{ \left( \bigcup_{\ell = -K}^K \mathbf{B}^B_{R|{\rm det}(B)|^\ell}(0)\setminus \mathbf{B}^B_{R^{-1}|{\rm det}(B)|^\ell}(0) \right)} \subset \mathbb{R}^d \setminus \{ 0 \}
\] is compact, there exists $k \in \mathbb{N}$ such that
\[
\left( \bigcup_{\ell = -K}^K \mathbf{B}^B_{R|{\rm det}(B)|^\ell}(0)\setminus \mathbf{B}^B_{R^{-1}|{\rm det}(B)|^\ell}(0) \right) \subset P_0^{k*}
\] and therefore
\[
Q_j \subset B^{i_0} P_0^{k*} = P_{i_0}^{k*}~.
\]
Exchanging the roles of $Q_j$ and $P_i$ yields a converse inclusion relation, and we have shown
equivalence of the induced coverings.
Finally, $(b) \Rightarrow (a)$ is trivial.
\end{proof}
The following is an analogue for the inhomogeneous setting. The proof is a straightforward adaptation of the previous one, and therefore omitted.
\begin{lemma} \label{lem:char_equiv_matr_ih}
Let $A$ and $B$ be two expansive matrices, and $\mathcal{Q}$ and $\mathcal{P}$ inhomogeneous coverings induced by $A$ and $B$, respectively. Let $\rho_{A}$ denote an $A$-homogeneous quasi-norm and $\rho_{B}$ a $B$-homogeneous quasi-norm. Then the following are equivalent:
\begin{enumerate}
\item[(a)] The inhomogeneous coverings $\mathcal{Q}$ and $\mathcal{P}$ are weakly equivalent.
\item[(b)] The inhomogeneous coverings $\mathcal{Q}$ and $\mathcal{P}$ are equivalent.
\item[(c)] The quasi-norms $\rho_{A}$ and $\rho_{B}$ are coarsely equivalent.
\end{enumerate}
\end{lemma}
We can now transfer these findings to the level of Besov spaces. First a rigidity theorem: two matrices are Besov-equivalent if and only if the associated scales of Besov spaces coincide {\em in one nontrivial instance}:
\begin{theorem} \label{thm:rigidity_besov}
Let $A,B$ denote expansive matrices.
\begin{enumerate}
\item[(a)] $A \sim_{\dot{B}} B$ holds if and only if there exists a tuple $(p,q) \not= (2,2)$ and $\alpha \in \mathbb{R}$ such that
\[
\dot{B}_{p,q}^{\alpha}(A) = \dot{B}_{p,q}^{\alpha}(B)~.
\]
\item[(b)] $A \sim_{{B}} B$ holds if and only if there exists a tuple $(p,q) \not= (2,2)$ and $\alpha \in \mathbb{R}$ such that
\[
{B}_{p,q}^{\alpha}(A) = {B}_{p,q}^{\alpha}(B)~.
\]
\end{enumerate}
\end{theorem}
\begin{proof}
In both cases, we need to show the ``if'' part of the statement.
Assuming $\dot{B}_{p,q}^{\alpha}(A) = \dot{B}_{p,q}^{\alpha}(B)$ for one tuple $(p,q) \not= (2,2)$, we have that the homogeneous coverings $\mathcal{Q} = (Q_j)_{j \in \mathbb{Z}}$ and $\mathcal{P} = (P_i)_{i \in \mathbb{Z}}$ induced by $A^T$ and $B^T$, respectively, must be weakly equivalent. Now Lemma \ref{lem:char_equiv_matr} yields that the coverings are equivalent, and also, that the induced quasi-norms are equivalent. This also implies $v_{\alpha,A} \asymp v_{\alpha,B}$. Finally, the equivalence of the homogeneous quasi-norms entails that $Q_j \cap P_i \not= \emptyset$ implies $i \in i_0 + \{ -k,\ldots,k \}$, where $i_0 = \lfloor j \epsilon \rfloor$, and $k$ is independent of $j$; see the proof of Lemma \ref{lem:char_equiv_matr}. But this yields a uniform upper bound on $\| A^{-j} B^i \|$, via Lemma \ref{lem:norm_equiv}(c). Now Lemma \ref{lem:suf_dc_equal} becomes applicable and shows that
\[
\mathcal{D}(\mathcal{Q},{\rm L}^p,\ell^q_{v_{\alpha,A}}) = \mathcal{D}(\mathcal{P},{\rm L}^p,\ell^q_{v_{\alpha,B}})~.
\] Finally, Theorem \ref{thm:besov_as_decsp} translates this statement to the level of Besov spaces.
The proof of (b) is entirely analogous, noting that coarse equivalence of the norms is enough to guarantee that $v_{\alpha,A} \asymp v_{\alpha,B}$ holds on $\mathbb{N}_0$.
\end{proof}
Finally, we record a handy characterization of Besov equivalence:
\begin{corollary} \label{cor:char_equiv}
Let $A,B$ denote expansive matrices.
\begin{enumerate}
\item[(a)] $A \sim_{\dot{B}} B$ if and only if the $A^T$- and $B^T$-homogeneous quasi-norms are equivalent.
\item[(b)] $A \sim_B B$ if and only if the $A^T$- and $B^T$-homogeneous quasi-norms are coarsely equivalent.
\end{enumerate}
In particular, $A \sim_{\dot{B}}B$ implies $A \sim_B B$.
\end{corollary}
\begin{remark} \label{rem:rel_Hardy_eq}
Our result provides an interesting contrast to the results of Bownik \cite{Bow03}, who studied the analogous question for anisotropic Hardy spaces. Defining $A \sim_H B$ by the requirement that the anisotropic Hardy spaces induced by $A$ and $B$ coincide, he obtained that $A \sim_H B$ if and only if $\rho_A$ and $\rho_B$ are equivalent \cite[Theorem 10.5]{Bow03}. Hence, in view of Remark \ref{rem:transpose}, we find that $A \sim_H B$ if and only if $A \sim_{\dot{B}} B$.
\end{remark}
\section{Characterizing (coarse) equivalence of matrices}
\label{sect:char_equiv}
It remains to derive explicit and checkable criteria for the (coarse) equivalence of matrices.
We first derive necessary criteria in terms of generalized eigenspaces, based on an approach developed in \cite{Bow05}. This requires the following class of auxiliary subspaces.
\begin{definition}
Given a matrix $A \in \mathbb{C}^{d \times d}$, we define for $r >0$ and $m \in \mathbb{N}_0$
\[
E(A,r,m) = {\rm span} \left( \bigcup_{|\lambda| = r} {\rm Ker}(A-\lambda I_d)^m \cup \bigcup_{|\lambda|<r} {\rm Ker}(A-\lambda I_d)^d \right)~.
\]
\end{definition}
The significance of these auxiliary spaces becomes apparent by the following lemma, which characterizes them by the asymptotic behaviour of $|A^k z|$, as $k \to \infty$. For a proof, see \cite[Lemma (10.4)]{Bow03}:
\begin{lemma} \label{lem:norm_growth}
Let $A \in \mathbb{C}^{d \times d}$. For any $z \in \mathbb{C}^d \setminus \{ 0 \}$, $r>0$ and $m \in \mathbb{N}_0$, the condition
\[
z \in E(A,r,m+1) \setminus E(A,r,m)
\] is equivalent to the existence of a constant $c>0$ and $k_0 \in \mathbb{N}$ such that
\[
\forall k \ge k_0~:~ \frac{1}{c} k^m r^k \le |A^k z| \le c k^m r^k~.
\]
\end{lemma}
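A numerical sanity check of ours for Lemma \ref{lem:norm_growth} (not from the text), using the $2 \times 2$ Jordan block with eigenvalue $r = 2$ and $z = e_2 \in E(A,2,2) \setminus E(A,2,1)$, so that $|A^k z|$ should grow like $k^m r^k$ with $m = 1$:

```python
import math

# A = [[2,1],[0,2]] gives A^k = [[2^k, k 2^(k-1)], [0, 2^k]], so for
# z = (0,1) we have A^k z = (k 2^(k-1), 2^k), i.e. growth like k * 2^k.
def mat_mul(M, N):
    return [[sum(M[i][l] * N[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

A = [[2.0, 1.0], [0.0, 2.0]]
z = [0.0, 1.0]
r, m = 2.0, 1

Ak = [[1.0, 0.0], [0.0, 1.0]]
ratios = []
for k in range(1, 40):
    Ak = mat_mul(Ak, A)
    norm = math.hypot(*mat_vec(Ak, z))
    ratios.append(norm / (k**m * r**k))

# The ratio |A^k z| / (k^m r^k) stays between positive constants.
assert 0.4 < min(ratios) and max(ratios) < 2.0
```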
We can now give necessary criteria in terms of generalized eigenspaces.
\begin{lemma} \label{lem:char_eigenspaces}
Let $A,B$ denote expansive matrices.
\begin{enumerate}
\item[(a)] If $A$ and $B$ are coarsely equivalent, then for all $r>0$ and all $m \in \mathbb{N}$:
\[
E(A^{-1},r^\epsilon,m) = E(B^{-1},r,m)~.
\]
\item[(b)] If $A$ and $B$ are equivalent, then for all $r>1$ and all $m \in \mathbb{N}$:
\[
{\rm span}\left( \bigcup_{|\lambda| = r^\epsilon} {\rm Ker}(A-\lambda I_d)^m \right) = {\rm span}\left( \bigcup_{|\lambda| = r} {\rm Ker}(B-\lambda I_d)^m \right) ~.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
For the proof of (a) assume that $A$ and $B$ are coarsely equivalent. By Lemma \ref{lem:char_cequiv_matr1}, this is equivalent to
\[
\sup_{k \in \mathbb{N}} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \| < \infty ~.
\]
Note that Lemma \ref{lem:norm_est} then also provides
\[
\sup_{k \in \mathbb{N}} \| B^{-\lfloor \epsilon k \rfloor} A^k \| < \infty ~,
\]
and the two estimates provide constants $0 < c_1 \le c_2 < \infty$ such that, for all $z \in \mathbb{C}^d \setminus \{ 0 \}$, and all $k \in \mathbb{N}$,
\begin{equation} \label{eqn:norm_equiv_aux}
c_1 \le \frac{|A^{-k} z|}{|B^{-\lfloor \epsilon k \rfloor} z|} \le c_2~.
\end{equation}
Now assume that $z \in E(A^{-1},r^\epsilon,m+1) \setminus E(A^{-1},r^\epsilon,m)$. Then Lemma \ref{lem:norm_growth} yields the existence of $c>0$ and $k_0 \in \mathbb{N}$ such that
\[
\forall k \ge k_0~:~ \frac{1}{c} k^m r^{\epsilon k} \le |A^{-k} z| \le c k^m r^{\epsilon k}~.
\]
This entails, via (\ref{eqn:norm_equiv_aux}), that
\[
\frac{1}{c c_2} k^m r^{\epsilon k} \le |B^{-\lfloor \epsilon k \rfloor} z| \le \frac{c}{c_1} k^m r^{\epsilon k}~,
\] and thus, since
\[
r \le r^{\epsilon k - \lfloor \epsilon k \rfloor} \le 1~,
\]
we obtain, with a new constant $c'>0$, and for all $\ell \ge \lceil k_0 \epsilon \rceil$, that
\begin{equation} \label{eqn:almost_asymp}
\frac{1}{c'} \ell^m r^{\ell} \le |B^{-\ell} z| \le c' \ell^m r^\ell~,
\end{equation}
as long as $\ell \in M = \{ \lfloor \epsilon k \rfloor : k \ge k_0 \}$. Now let $\ell \ge \lfloor \epsilon k_0 \rfloor$ be arbitrary. Then there exists $\ell_1 \le \ell$ and $j \in \{ 0,\ldots, \lceil 1/\epsilon \rceil \}$ such that $\ell_1 \in M$ and $\ell = \ell_1 + j$. Assuming in addition that $\ell \ge \ell_0 = \max(2 \lceil 1/\epsilon \rceil, \lfloor \epsilon k_0 \rfloor)$, we obtain the estimates
\begin{eqnarray*}
\frac{1}{c'} \ell^m r^\ell & = & \frac{1}{c'} (\ell_1+j)^m r^{\ell_1+j} \\
& \le & \frac{1}{c'} 2^m r^j \ell_1^m r^{\ell_1} \le 2^m |B^{-\ell_1} z| \\
& \le & 2^m \max_{0 \le j \le \lceil 1/\epsilon \rceil} \| B^j \|~ |B^{-\ell} z|~,
\end{eqnarray*} where we also used that since $A$ is expansive, the assumption that $z \in E(A^{-1},r^\epsilon,m+1) \setminus \{ 0 \}$ forces $r < 1$.
By a similar calculation, we obtain
\[
c' \ell^m r^\ell \ge r^{\lceil 1/\epsilon \rceil} \min_{0 \le j \le \lceil 1/\epsilon \rceil} \| B^{-j} \|^{-1} ~ |B^{-\ell} z|~,
\] and thus we have shown that (\ref{eqn:almost_asymp}), with different constant $c''$ instead of $c'$, holds for all $\ell \ge \ell_0$. But then Lemma \ref{lem:norm_growth} entails $z \in E(B^{-1},r,m+1) \setminus E(B^{-1},r,m)$.
The same argument with $A,B$ interchanged yields $E(B^{-1},r,m+1) \setminus E(B^{-1},r,m) \subset E(A^{-1},r^\epsilon,m+1) \setminus E(A^{-1},r^\epsilon,m)$, and thus equality of the two difference sets. Now induction, first over $m$, then over the eigenvalues of $A^{-1}$ in increasing order, yields
$E(A^{-1},r^\epsilon,m) = E(B^{-1},r,m)$.
For the proof of (b) we observe that the above argument can also be applied to $k <0$, and then it yields
$E(A,r^{\epsilon},m) = E(B,r,m)$ as well, for all $r,m$. Now, finally, the observation that
\[
E(B^{-1},r^{-1},m) \cap E(B,r,m) = {\rm span}\left( \bigcup_{|\lambda| = r} {\rm Ker}(B-\lambda I_d)^m \right)\,\,
\] and the analogous fact about $A$ yields the conclusion of (b).
\end{proof}
\begin{remark} \label{rem:counter_bownik}
Part (b) of the previous lemma was noted by Bownik, see \cite[Theorem (10.3)]{Bow03}, and our proof is essentially an adaptation of the proof given there. The cited theorem also states the converse, i.e., that the necessary condition in (b) is sufficient as well. This is {\em false}, as can be seen with the help of the pair $A,B$ of matrices given by
\[
A = \left( \begin{array}{cc} 2 & 2 \\ 0 & 2 \end{array} \right) \,\, ,\,\, B = \left( \begin{array}{cc} 2 & 4 \\ 0 & 2 \end{array} \right)\,.
\] These matrices fulfill $\epsilon(A,B)=1$ and the necessary condition of part (b). However, one readily computes for all $k \in \mathbb{N}$ that
\[
A^{-k} B^k = \left( \begin{array}{cc} 1 & k \\ 0 & 1 \end{array} \right) \,\, .
\] Hence the matrices are not even coarsely equivalent.
\end{remark}
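The computation in the remark is easy to confirm numerically; the following check (ours, not from the text) verifies $A^{-k}B^k = \left(\begin{smallmatrix} 1 & k \\ 0 & 1\end{smallmatrix}\right)$, whose norm is unbounded in $k$:

```python
# A = [[2,2],[0,2]] and B = [[2,4],[0,2]] satisfy the eigenspace condition,
# yet A^{-k} B^k = [[1, k], [0, 1]] grows without bound.
def mat_mul(M, N):
    return [[sum(M[i][l] * N[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def mat_pow(M, k):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(k):
        R = mat_mul(R, M)
    return R

A_inv = [[0.5, -0.5], [0.0, 0.5]]   # inverse of [[2,2],[0,2]]
B = [[2.0, 4.0], [0.0, 2.0]]

for k in range(1, 20):
    P = mat_mul(mat_pow(A_inv, k), mat_pow(B, k))
    # Entrywise: P equals [[1, k], [0, 1]] up to floating-point error.
    assert abs(P[0][0] - 1) < 1e-9 and abs(P[0][1] - k) < 1e-6
    assert abs(P[1][0]) < 1e-9 and abs(P[1][1] - 1) < 1e-9
```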
We next want to reduce the general discussion to a subclass of expansive matrices, those having only positive eigenvalues and a fixed determinant. This requires a few further auxiliary notions:
\begin{definition}
The exponential map $\mathbb{R}^{d \times d} \to {\rm GL}(d,\mathbb{R})$ is defined by
\[
\exp(X) = \sum_{k=0}^\infty \frac{X^k}{k !}\,\,.
\]
\end{definition}
It is not hard to see that the exponential map is well-defined on $\mathbb{R} ^{d \times d}$ and satisfies $\exp(X+Y) = \exp(X) \exp(Y)$, whenever $X$ and $Y$ commute \cite[Proposition 3.2.1]{HiNe}. Furthermore, for any fixed matrix $X$ the map $t \mapsto \exp(tX)$ is a continuous homomorphism $\mathbb{R} \to {\rm GL}(d,\mathbb{R})$ \cite[Theorem 3.2.6]{HiNe}, and its image is a so-called one-parameter subgroup. The next lemma states that two expansive matrices contained in the same one-parameter subgroup have equivalent quasi-norms.
\begin{lemma}
Let $A$, $B$ be expansive matrices, and assume that $A = \exp(tX)$ and $B = \exp(sX)$, for some matrix $X$ and $s,t>0$. Then $A$ and $B$ are equivalent.
\end{lemma}
\begin{proof}
The well-known formula ${\rm det}(\exp(X)) = \exp({\rm tr}(X))$, with ${\rm tr}(X)$ denoting the trace of the matrix $X$, allows us to compute
\[
\epsilon(A,B) = \frac{\ln (|{\rm det}(A)|)}{\ln (|{\rm det}(B)|)} = \frac{t}{s}\,\,,
\] and since $t \mapsto \exp(tX)$ is a homomorphism, we get
\[
A^{-k} B^{\lfloor \epsilon k \rfloor} = \exp\left( (-kt + \lfloor \frac{t}{s} k \rfloor s) X \right) = \exp(r_k X)\,
\] with $-s \le r_k \le 0$. It follows that
\[
\sup_{k \in \mathbb{Z}} \| A^{-k} B^{\lfloor \epsilon k \rfloor} \| \le \sup_{-s \le r \le 0} \| \exp(rX) \| < \infty\,\,
\] hence $A$ and $B$ are equivalent.
\end{proof}
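A small numerical check of ours for the lemma (not from the text), with $X = {\rm diag}(\ln 2, \ln 3)$, $A = \exp(tX)$, $B = \exp(sX)$: since $A^{-k}B^{\lfloor \epsilon k\rfloor} = \exp(r_k X)$ with $r_k \in (-s,0]$, the norms stay uniformly bounded.

```python
import math

# X = diag(ln 2, ln 3), A = exp(t X), B = exp(s X) with t = 1, s = 0.7.
# Then eps(A,B) = t/s, and A^{-k} B^{floor(eps k)} is diagonal with entries
# lam^(-k t + floor(eps k) s), lam in {2, 3}; the exponent lies in (-s, 0].
t, s = 1.0, 0.7
eps = t / s
lams = [2.0, 3.0]  # eigenvalues of exp(X)

norms = []
for k in range(-50, 51):
    j = math.floor(eps * k)
    # Operator norm of a diagonal matrix = largest |entry|.
    norms.append(max(lam**(-k * t + j * s) for lam in lams))

assert max(norms) <= 1.0 + 1e-9               # exponent r_k <= 0
assert min(norms) >= min(lams)**(-s) - 1e-9   # exponent r_k > -s
```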
A further step towards simplification is the observation that every expansive matrix is equivalent to a matrix having only positive eigenvalues.
\begin{lemma} \label{lem:ex_pos_spec}
Let $A$ denote an expansive matrix. Then there exists a matrix $B$ which is equivalent to $A$, has only positive eigenvalues, and fulfills $|\det(A)| = \det(B)$.
\end{lemma}
\begin{proof}
First assume that $A$ has only one eigenvalue. If that eigenvalue is negative, then $B=-A$ is as desired. Now assume that the eigenvalue $\lambda$ is non-real, with $|\lambda|>1$. Given any complex number $z$, define the two-by-two matrix
\[
M_z = \left( \begin{array}{cc} {\rm Re}(z) & {\rm Im}(z) \\ - {\rm Im}(z) & {\rm Re}(z) \end{array} \right) \,\,.
\]
There exists a $C \in {\rm GL}(d,\mathbb{R})$ bringing $A$ into real Jordan normal form, i.e. such that
\[
CAC^{-1} = \left( \begin{array}{ccccccc} M_\lambda & M_{z_1} & & & & \ldots & \\
& M_\lambda & M_{z_2} & & & & \\
& & \ddots & \ddots & & & \\
& & & \ddots & \ddots & & \\
& & & & \ddots & \ddots & \\
& & & & & M_\lambda & M_{z_{d/2-1}} \\
& & & & & & M_\lambda \end{array} \right) \,\,,
\] with $z_1, z_2, \ldots \in \{ 0 ,1 \} \subset \mathbb{C}$.
Write $\lambda = r w$, with $r>0$ and $|w| = 1$. Then we can factor $CAC^{-1}$ as
\[
CAC^{-1} = \left( \begin{array}{ccccccc} M_w & & & & & & \\
& M_w & & & & & \\
& & \ddots & & & & \\
& & & \ddots & & & \\
& & & & \ddots & & \\
& & & & & M_w & \\
& & & & & & M_w \end{array} \right)
\left( \begin{array}{ccccccc} M_r & M_{\overline{w} z_1} & & & & & \\
& M_r & M_{\overline{w} z_2} & & & & \\
& & \ddots & \ddots & & & \\
& & & \ddots & \ddots & & \\
& & & & \ddots & \ddots & \\
& & & & & M_r & M_{\overline{w} z_{d/2-1}} \\
& & & & & & M_r \end{array} \right) \,\,
\] and the two factors commute. Call the factors on the right-hand side $D_1, D_2$, and let
$B = C^{-1} D_2 C$; then $B$ has $r = |\lambda|$ as its only eigenvalue, and in particular $|{\rm det}(A)| = {\rm det}(B)$ holds. Furthermore, the fact that $D_1$ and $D_2$ commute allows us to compute, for all $k \in \mathbb{Z}$,
\[
\| A^{-k} B^k \| = \| C^{-1} D_1^{-k} C\| \le \| C \| \, \| C^{-1}\|\,\,
\] since $D_1$ is an orthogonal matrix. Hence Lemma \ref{lem:norm_equiv} yields the desired statement.
In the general case, we decompose $A$ into real Jordan blocks $A_1,\ldots,A_k$, and apply the above procedure to each of them. We then obtain a matrix $B$ with the desired properties via Remark \ref{rem:equiv_JNF}.
\end{proof}
One of the advantages of matrices with positive eigenvalues is that one always finds a one-parameter subgroup of ${\rm GL}(d,\mathbb{R})$ going through them.
\begin{lemma} \label{lem:ex_one_par}
Let $A$ denote an expansive matrix with positive eigenvalues. Then there exists a matrix $X$ with
$A = \exp(X)$. In particular, for any $c>1$ there exists an expansive matrix $B$ that is equivalent to $A$, only has positive eigenvalues, and fulfills ${\rm det}(B)=c$.
\end{lemma}
\begin{proof}
Since the exponential of a block diagonal matrix is again block diagonal, and since $\exp(CXC^{-1}) = C \exp(X) C^{-1}$, we may assume w.l.o.g. that $A$ is a single Jordan matrix, i.e.,
\[
A = \lambda I_d + T
\] where $T$ is a strictly upper triangular matrix. Instead of this additive decomposition, we may also write $A$ as the product
\[
A = (\lambda I_d) \cdot C
\] where $C = I_d + \lambda^{-1} T$ is a unipotent matrix. In particular, $C-I_d$ is {\em nilpotent}, which means that $(C-I_d)^d = 0$.
Letting $T(d,\mathbb{R})$ denote the set of all unipotent matrices and $\mathfrak{t}(d,\mathbb{R})$ the space of all nilpotent matrices, \cite[Theorem 3.3.3]{HiNe} states that
$\exp: \mathfrak{t}(d,\mathbb{R}) \to T(d,\mathbb{R})$ is bijective. Hence there exists $Y \in \mathfrak{t}(d,\mathbb{R})$ with $\exp(Y) = C$.
Now the fact that $Y$ and $\ln (\lambda) I_d$ commute allows us to conclude that
\[
\exp(\ln (\lambda) \cdot I_d + Y) = \exp(\ln (\lambda) I_d) \exp(Y) = A\,,
\] hence $X = \ln(\lambda) \cdot I_d + Y$ is as desired.
Returning to the general case of multiple Jordan blocks, if $A = \exp(X)$, then the eigenvalues of $\exp(tX)$ are just $\lambda^t$, with $\lambda$ an eigenvalue of $A$. In particular, $\exp(tX)$ is expansive, for all $t>0$. Furthermore, $\det(A) = \exp(tr(X))>1$. Hence $tr(X)>0$, and for $t = \frac{\ln(c)}{tr(X)}$ we get $\det(\exp(tX)) = c$. Thus $B = \exp(tX)$ is as desired.
\end{proof}
We can now show the main result of this section.
\begin{theorem} \label{thm:class_equiv_matr}
Let $A,B$ denote expansive matrices having only positive eigenvalues, and fulfilling ${\rm det}(A) = {\rm det}(B)$.
\begin{enumerate}
\item[(a)] $A$ and $B$ are equivalent if and only if $A=B$.
\item[(b)] Let $\lambda_1 > \lambda_2 > \ldots > \lambda_k$ denote the distinct eigenvalues of $A$, and assume that $A$ has the form
\[
A = \left( \begin{array}{cccc} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_k \end{array} \right)\,\,,
\] such that,
\[
\forall 1 \le i \le k~:~(J_i-\lambda_i I_{d_i})^d = 0~.
\]
Then $A$ and $B$ are coarsely equivalent if and only if
\[
B = \left( \begin{array}{cccc} J_1 & \ast & \ast & \ast \\ & J_2 & \ast & \ast \\ & & \ddots & \ast \\ & & & J_k \end{array} \right)\,\,,
\] i.e., $B$ has the same blocks on the diagonal, and arbitrary entries above these blocks.
\end{enumerate}
\end{theorem}
\begin{proof}
We first consider the case where $A$ and $B$ have only one eigenvalue $\lambda>0$, and $A$ and $B$ are coarsely equivalent. We want to show that $A=B$.
By assumption on the spectra, we can write
\[
B = \lambda (I_d + N_B)
\] with $N_B^d = 0$. It follows that, for all $k$,
\begin{eqnarray*}
B^k & = & \lambda^k \sum_{\ell=0}^k \left( \begin{array}{c} k \\ \ell \end{array} \right) N_B^\ell \\
& = & \lambda^k \sum_{\ell=0}^d \left( \begin{array}{c} k \\ \ell \end{array} \right) N_B^\ell \\
& = & \lambda^k P_k
\end{eqnarray*}
where $P_k$ is a matrix whose entries depend polynomially on $k$.
Similarly, we have
\[
A^{-1} = \lambda^{-1} (I_d + N_A)
\] with $N_A$ nilpotent, and thus
\[
A^{-k} = \lambda^{-k} Q_k\,\,,
\] where the entries of $Q_k$ are polynomials in $k$.
By the assumption that $A$ and $B$ are coarsely equivalent, we have
\[
\infty > \sup_{k \in \mathbb{N}} \| A^{-k} B^k \| = \sup_{k \in \mathbb{N}} \| Q_k P_k \| \,\,.
\] The entries of $Q_k P_k$ are polynomials in $k$, and bounded. It follows that the map $k \mapsto Q_k P_k$ is {\em constant}. In particular, $A^{-2} B^2 = A^{-1} B$, which implies $A=B$.
For part (a), assume that $A$ and $B$ are equivalent. W.l.o.g. $A$ is in Jordan normal form,
\[ A = \left( \begin{array}{cccc} J_1 & & & \\ & J_2 & & \\ & & \ddots & \\ & & & J_k \end{array} \right) \,\,
\]
The $i$th block of $A$ corresponds to the generalized eigenspace ${\rm Ker}(A-\lambda_i I_d)^d$
associated to the $i$th eigenvalue $\lambda_i$, and since the spectra of $A,B$ are both positive, we have
by Lemma \ref{lem:char_eigenspaces}
\[
{\rm Ker}(A-\lambda_i I_d)^d = {\rm Ker}(B-\lambda_i I_d)^d
\]
But this means that $B$ has the same block diagonal structure,
\[
B = \left( \begin{array}{cccc} B_1 & & & \\ & B_2 & & \\ & & \ddots & \\ & & & B_k \end{array} \right)\,\,
\] and $(B_i - \lambda_i I_{d_i})^d = 0$. In particular, $\epsilon(J_i,B_i) = 1 = \epsilon(A,B)$, for all $i=1,\ldots,k$. Now Remark \ref{rem:equiv_JNF} yields that all pairs $J_i,B_i$ must be equivalent, and the single eigenvalue case we considered first then entails $A=B$.
For part (b), we proceed by induction on the number $k$ of distinct eigenvalues, noting that the case $k=1$ has already been taken care of at the beginning of this proof. Hence it remains to prove the induction step, and we assume $k \ge 2$.
Define
\[
E_1 = {\rm Ker}(A-\lambda_1 I_d)^d \,\,,
\] the generalized eigenspace associated to the eigenvalue $\lambda_1$. For any invertible matrix $C \in \mathbb{R}^{d \times d}$ and any $\lambda \neq 0$, one has
\[
{\rm Ker}(C^{-1}-\lambda^{-1} I_d)^d = {\rm Ker}(C-\lambda I_d)^d
\]
and hence we obtain from the assumption $\lambda_1^{-1} < \lambda_2^{-1} < \ldots$ that
\[
E_1 = E(A^{-1}, \lambda_1^{-1},d)\,\,.
\] Now assume that $A$ and $B$ are coarsely equivalent. By choice of $\lambda_1$, and using that the spectra of both $A$ and $B$ are real, we obtain from Lemma \ref{lem:char_eigenspaces}(a) that
\[
E_1 = E(B^{-1},\lambda_1^{-1},d) = {\rm Ker}(B-\lambda_1 I_d)^d\,\,.
\]
Thus we have derived that
\begin{equation} \label{eqn:block_AB}
A = \left( \begin{array}{cc} A_1 & 0 \\ 0 & A_2 \end{array} \right)\,\, , \,\, B = \left( \begin{array}{cc} B_1 & C \\ 0 & B_2 \end{array} \right)
\end{equation} where $A_1=J_1$, $A_2$ is the matrix containing the remaining blocks of $A$, $B_1$ is a matrix with the single eigenvalue $\lambda_1$, and $B_2, C$ are not further specified at this point.
Furthermore, we have $\det(A_1) = \det(B_1)$, and thus also $\det(A_2) = \det(A)/\det(A_1) = \det(B)/\det(B_1) = \det(B_2)$.
Now, by induction over $k \in \mathbb{N}$, we find that
\begin{equation} \label{eqn:powers_B}
B^k = \left( \begin{array}{cc} B_1^k & C_k \\ 0 & B_2^k \end{array} \right) \,\,, C_k = \sum_{\ell=0}^{k-1} B_1^{\ell} C B_2^{k-1-\ell}\,\, ,
\end{equation}
which leads to
\begin{equation}
A^{-k} B^k = \left( \begin{array}{cc} A_1^{-k} B_1^k & A_1^{-k} C_k \\ 0 & A_2^{-k} B_2^k \end{array} \right) \,.
\end{equation}
The assumption that $A$ and $B$ are coarsely equivalent implies via Lemma \ref{lem:char_cequiv_matr1} that this sequence of matrices is norm-bounded. This in turn implies that $A_i,B_i$ are coarsely equivalent, in view of the observation that $\epsilon(A_i,B_i) = 1$, for $i=1,2$. But then the case considered in the beginning of this proof implies $A_1=B_1$. Furthermore, the induction hypothesis becomes available for $B_2$, implying that the blocks on the diagonal of $B_2$ must coincide with those of $A_2$.
This proves the ``only-if'' part of (b).
For the converse direction, we assume that $A$ and $B$ have the structure assumed in part (b) of the theorem, and write $A,B$ as in (\ref{eqn:block_AB}). Note that here we assume $A_1 = B_1$, so we get that
\begin{equation}
A^{-k} B^k = \left( \begin{array}{cc} I_{d_1} & A_1^{-k} C_k \\ 0 & A_2^{-k} B_2^k \end{array} \right) \,,
\end{equation}
and the induction hypothesis yields that the family $(A_2^{-k} B_2^k)_{k \in \mathbb{N}}$ is norm-bounded.
Pick $\lambda_+$ strictly between $\lambda_1$ and $\lambda_2$, and let $r = \lambda_+/ \lambda_1 < 1$. We can then employ Lemma \ref{lem:exp_norm} and obtain the estimate
\begin{eqnarray*}
\| A_1^{-k} C_k \| & = & \left\| \sum_{\ell=0}^{k-1} A_1^{\ell-k} C B_2^{k-\ell-1} \right\| \\
& \le & \sum_{\ell=0}^{k-1} \| A_1^{\ell-k+1} \| \| A_1^{-1} C \| \| B_2^{k-\ell-1} \| \\
& \le & C \sum_{\ell=0}^{k-1} \lambda_1^{\ell-k+1} \lambda_+^{k-\ell-1} \| A_1^{-1} C \| \\
& = & C' \sum_{\ell=0}^{k-1} r^\ell \le C' \sum_{\ell=0}^\infty r^\ell < \infty\,\,.
\end{eqnarray*}
Now Lemma \ref{lem:char_cequiv_matr1} yields that $A$ and $B$ are coarsely equivalent.
\end{proof}
\begin{remark}
With the criteria from the theorem, applied to $A^T,B^T$, we can now easily see that the matrices
\[
A = \left( \begin{array}{cc} 3 & 0 \\ 0 & 2 \end{array} \right) ~,~ B = \left( \begin{array}{cc} 3 & 0 \\ 1 & 2 \end{array} \right)
\] fulfill $A \sim_B B$ and $A \not\sim_{\dot{B}} B$, showing that the converse of the final statement of Corollary \ref{cor:char_equiv} is false. Furthermore, we have $A^T \not\sim_{B} B^T$ even though $A \sim_B B$, in contrast to the case of homogeneous Besov spaces, see Remark \ref{rem:transpose}.
Note however that if $A$ and $B$ are both diagonal with positive entries, then $A \sim_{\dot{B}} B$ is equivalent to $A \sim_B B$, and both statements are equivalent to $A = B^\epsilon$, for some positive number $\epsilon$.
\end{remark}
\begin{remark}
We call a matrix $A'$ {\bf in expansive normal form} if it is expansive, with positive eigenvalues and ${\rm det}(A') =2$. Then Lemmas \ref{lem:ex_pos_spec} and \ref{lem:ex_one_par} show that for each expansive matrix $A$ there exists a matrix $A'$ that is in expansive normal form and equivalent to $A$. Furthermore, Theorem \ref{thm:class_equiv_matr} ensures that $A'$ is {\bf unique}, a fact which justifies calling $A'$ {\bf the expansive normal form of $A$}.
These observations entail that two expansive matrices $A$ and $B$ are equivalent if and only if their expansive normal forms coincide. One can break down the computation of normal forms as follows:
\begin{itemize}
\item Compute $A_1$ such that $|\det(A)| = \det(A_1)$, $A_1$ has only positive eigenvalues, and $A_1$ is equivalent to $A$; see Lemma \ref{lem:ex_pos_spec}.
\item Compute $X = \log(A_1)$; see proof of Lemma \ref{lem:ex_one_par}.
\item Compute $A' = \exp(t X)$, for $t = \frac{\ln(2)}{\ln (|{\rm det}(A)|)}$.
\end{itemize}
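For a diagonal matrix, these steps can be carried out by hand. The following Python sketch (an illustration with an arbitrarily chosen matrix, not part of the theory) runs the procedure for $A = {\rm diag}(4,2)$, which already has positive eigenvalues, so only the logarithm and rescaling steps remain:

```python
import math

# A = diag(4, 2): expansive, positive eigenvalues, det(A) = 8.
eigs = [4.0, 2.0]
det_A = eigs[0] * eigs[1]

# Step 2: X = log(A) is diag(log 4, log 2) for a diagonal matrix.
X = [math.log(e) for e in eigs]

# Step 3: A' = exp(t X) with t = ln(2)/ln(|det A|), so det(A') = 2.
t = math.log(2) / math.log(det_A)
normal_form = [math.exp(t * x) for x in X]   # diag(4^t, 2^t)

det_normal = normal_form[0] * normal_form[1]
print(normal_form, det_normal)  # determinant 2, eigenvalues still > 1
```

Here $t=\ln(2)/\ln(8)=1/3$, so the normal form is ${\rm diag}(4^{1/3},2^{1/3})$, with determinant exactly $2$.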
Note that in principle the different steps can be carried out using finitely many operations, if taking exponentials and logarithms of scalars is admissible, and the eigenvalues of $A$ are known:
Determining the real Jordan normal form of $A$ amounts to computing a matrix $C$ such that $A=CJC^{-1}$, with $J$ a matrix in real Jordan normal form. Given the eigenvalues of $A$, the matrix $C$ is found by solving systems of linear equations. Then $A_1 = C M J C^{-1}$, with an easily computable (block) diagonal matrix $M$ ensuring that the product $M J$ has only positive eigenvalues (see proof of Lemma \ref{lem:ex_pos_spec}). $\log(A_1)$ is then obtained as $C\log(M J)C^{-1}$, which can again be carried out for each diagonal block separately, and quite efficiently: By \cite[Theorem 3.3.3]{HiNe}, the inverse of $\exp$ on the group of unipotent matrices is computed by the logarithm power series
\[
\log(I_d + Y) = \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n} Y^n~,
\] which breaks off after finitely many terms since $Y$ is nilpotent. Hence $X$ is computable from $A_1$ in finitely many steps.
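Since both the logarithm and the exponential series terminate on nilpotent arguments, this step is exact. A small Python sketch (illustrative only, using plain nested-list matrices and an ad hoc $3\times 3$ unipotent example) computes the logarithm via the truncated series and checks that exponentiating recovers the input:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(A, B, s=1.0):
    n = len(A)
    return [[A[i][j] + s * B[i][j] for j in range(n)] for i in range(n)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

# Unipotent C = I + Y with Y strictly upper triangular (so Y^3 = 0 here).
C = [[1.0, 1.0, 2.0],
     [0.0, 1.0, 3.0],
     [0.0, 0.0, 1.0]]
n = len(C)
Y = madd(C, eye(n), s=-1.0)           # Y = C - I, nilpotent

# log(I + Y) = Y - Y^2/2 + Y^3/3 - ...  (terminates: Y^n = 0)
X = [[0.0] * n for _ in range(n)]
P = eye(n)
for k in range(1, n):
    P = matmul(P, Y)                  # P = Y^k
    X = madd(X, P, s=(-1.0) ** (k + 1) / k)

# exp(X) = I + X + X^2/2! + ...  (terminates as well, X is nilpotent)
E = eye(n)
P, fact = eye(n), 1.0
for k in range(1, n):
    P = matmul(P, X)
    fact *= k
    E = madd(E, P, s=1.0 / fact)

print(E)  # recovers C up to rounding
```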
Finally, $A' = C \exp(t \log(M J)) C^{-1}$, which amounts to exponentiating each diagonal block. The latter can be done efficiently by exponentiating the eigenvalues and nilpotent parts of each block separately and then taking the products. Since the exponential series of a nilpotent matrix again breaks off after finitely many terms, this step also only requires finitely many operations.
Thus, whether $A$ and $B$ are equivalent can be decided in finitely many steps, and the same is true for coarse equivalence. Note that the matrix $\exp(t \log(M J))$ arising in the computation of $A'$ has the block structure required in part (b) of Theorem \ref{thm:class_equiv_matr}. Hence it remains to compute $B'$, and to check whether $C^{-1} B' C - \exp(t \log(M J))$ vanishes on and below the block diagonal of $\exp(t \log(M J))$, where $C$ is the matrix effecting the real Jordan normal form of $A$.
Finally, recall that for deciding whether the Besov spaces associated to $A$ and $B$, respectively, coincide, one needs to apply the above procedure to $A^T,B^T$, and that this distinction only matters for the inhomogeneous spaces.
\end{remark}
| {
"timestamp": "2016-09-21T02:04:06",
"yymm": "1609",
"arxiv_id": "1609.06083",
"language": "en",
"url": "https://arxiv.org/abs/1609.06083",
"abstract": "We study (homogeneous and inhomogeneous) anisotropic Besov spaces associated to expansive dilation matrices $A \\in {\\rm GL}(d,\\mathbb{R})$, with the goal of clarifying when two such matrices induce the same scale of Besov spaces. For this purpose, we first establish that anisotropic Besov spaces have an alternative description as decomposition spaces. This result allows to relate properties of function spaces to combinatorial properties of the underlying coverings. This principle is applied to the question of classifying dilation matrices. It turns out the scales of homogeneous and inhomogeneous Besov spaces differ in the way they depend on the dilation matrix: Two matrices $A,B$ that induce the same scale of homogeneous Besov spaces also induce the same scale of inhomogeneous spaces, but the converse of this statement is generally false. Furthermore, the question whether $A,B$ induce the same scale of homogeneous spaces is closely related to the question whether they induce the same scale of Hardy spaces; the latter question had been previously studied by Bownik. We give a complete characterization of the different types of equivalence in terms of the Jordan normal forms of $A,B$.",
"subjects": "Functional Analysis (math.FA)",
"title": "A classification of anisotropic Besov spaces"
} |
https://arxiv.org/abs/1011.3506 | Dynamics of a rational multi-parameter second order difference equation with cubic numerator and quadratic monomial denominator | The asymptotic behavior (such as convergence to an equilibrium, convergence to a 2-cycle, and divergence to infinity) of solutions of the following multi-parameter, rational, second order difference equation x_{n+1} =(ax_{n}^3+ bx_{n}^2x_{n-1}+cx_{n}x_{n-1}^2+dx_{n-1}^3)/x_{n}^2, x_{-1},x_{0}\in R, is studied in this paper. | \section{Introduction}
Most of the work on rational difference equations treats the case where both the numerator and the denominator are linear polynomials. For second order rational difference equations with linear numerator and denominator we refer the reader to the monograph of Kulenovic and Ladas (\cite{KL}). In 2008, Sedaghat et al.\ (\cite{DKMOS}) extended the existing results about second order rational difference equations to second order rational difference equations with quadratic numerator and linear denominator.
In this paper we extend the existing results to the following difference equation
\begin{equation}\label{formula2}
x_{n+1}=\frac{ax_{n}^3+bx_{n}^2x_{n-1}+cx_{n}x_{n-1}^2+dx_{n-1}^3}{x_{n}^2},
\end{equation}
which is a second order rational difference equation with cubic numerator and quadratic monomial denominator. The parameters $a,b,d$ are positive, while the parameter $c$ and the initial conditions $x_{-1},x_{0}$ may take some negative values.
In \cite{SH} we investigated the dynamics of the following difference equation
\begin{equation}\label{formula1}
x_{n+1}=\frac{ax_{n}^3+bx_{n}^2+cx_{n}+d}{x_{n}^3},
\end{equation}
where it was shown that in most cases every positive solution of Eq.(\ref{formula1}) converges either to an equilibrium or to a 2-cycle.
In this part we study the asymptotic behavior of solutions of Eq.(\ref{formula2}), including convergence to an equilibrium, convergence to a 2-cycle, and divergence. Our analysis of the dynamics of Eq.(\ref{formula2}) is essentially based on the dynamics of Eq.(\ref{formula1}). The concepts of equilibrium point, 2-cycle, stability, and asymptotic stability were defined in the first part and will not be repeated here. Moreover, throughout the present paper we refer to some of the results of the first part.
Divide both sides of Eq.(\ref{formula2}) by $x_{n}$ to obtain
$$\frac{x_{n+1}}{x_{n}}=a+b\left(\frac{x_{n-1}}{x_{n}}\right)+c\left(\frac{x_{n-1}}{x_{n}}\right)^{2}+d\left(\frac{x_{n-1}}{x_{n}}\right)^{3},$$
In the preceding equation substitute
$$t_{n}=\frac{x_{n}}{x_{n-1}},$$
to obtain
$$t_{n+1}=\frac{at_{n}^{3}+bt_{n}^{2}+ct_{n}+d}{t_{n}^{3}},$$
which simply is the first order Eq.(\ref{formula1}) (as in the first part, we will frequently use the function $\phi (t)=(at^3+bt^2+ct+d)/t^3$, which defines the right hand side of Eq.(\ref{formula1})). In fact, the solutions of Eq.(\ref{formula1}) are the successive ratios of the solutions of Eq.(\ref{formula2}), so we call $\{t_{n}\}$ the sequence of ratios. This is a special semiconjugate factorization of Eq.(\ref{formula2}), which is called semiconjugacy by ratios. For more about semiconjugacy and semiconjugacy by ratios see \cite{Sedaghat2} and \cite{SHS}, respectively. We analyze the dynamics of Eq.(\ref{formula2}) using the dynamics of Eq.(\ref{formula1}), which was studied in the first part.
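The reduction to the sequence of ratios is easy to check numerically. The following Python sketch (illustrative; the parameter and initial values are chosen arbitrarily, subject only to positivity of $a,b,d$ and of the initial conditions) iterates Eq.(\ref{formula2}) and verifies that the ratios $t_{n}=x_{n}/x_{n-1}$ satisfy the first order recursion $t_{n+1}=\phi(t_{n})$:

```python
import math

a, b, c, d = 1.0, 0.5, 0.2, 0.1       # sample parameters (a, b, d > 0)

def phi(t):
    # right-hand side of Eq.(1), defined for t != 0
    return (a * t**3 + b * t**2 + c * t + d) / t**3

# iterate Eq.(2): x_{n+1} = (a x^3 + b x^2 y + c x y^2 + d y^3) / x^2
y, x = 1.0, 2.0                       # x_{-1}, x_0 > 0
ratios = [x / y]
for _ in range(10):
    x, y = (a * x**3 + b * x**2 * y + c * x * y**2 + d * y**3) / x**2, x
    ratios.append(x / y)

# each new ratio equals phi applied to the previous one
for t_prev, t_next in zip(ratios, ratios[1:]):
    assert math.isclose(t_next, phi(t_prev), rel_tol=1e-9)
print(ratios[-1])
```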
We now discuss the initial conditions of Eq.(\ref{formula2}). Since in the first part we studied the dynamics of positive solutions of Eq.(\ref{formula1}), the initial conditions of Eq.(\ref{formula2}) should be chosen in such a way that the sequence of ratios becomes positive eventually. If $x_{-1}$ and $x_{0}$ are both positive or both negative, then the sequence of ratios is positive from the first step. On the other hand, if one of them is positive and the other negative, then the ratio $x_{n}/x_{n-1}$ may never become positive, or the iteration process may even stop. For example, if $\phi $ has a negative equilibrium which is attractive, then it attracts some ratios in a neighborhood of itself, and such a ratio remains negative forever. Also, if at any step the ratio equals zero, then the iteration process stops. We should avoid such cases. Although determining all such cases is not possible in general, we are able to identify some of them; we mention one here. Consider the function $\phi $ on the interval $(-\infty ,0)$. Assume that $c_{-}<c\leq -c^{*}=\sqrt{3bd}$ or, $c>-c^*$ and $\phi (x_{m})>0$ (note that $x_{m}<0$ when $c>-c^*$). Then there exists a unique number $r<0$ such that $\phi (r)=0$. Suppose that $r<r'<0$ is the unique number such that $\phi (r')=r$. Then it is evident that any ratio in $(-\infty ,r)\cup (r',0)$ will become positive after at most three steps.
Note that if $\{x_{n}\}$ is a solution of Eq.(\ref{formula2}), then so is $\{-x_{n}\}$. Also, the first and third quadrants of $\Bbb{R}^{2}$, namely $(0,\infty )^{2}$ and $(-\infty ,0)^{2}$, are invariant under Eq.(\ref{formula2}). By the discussion in the previous paragraph, the ratio $x_{n}/x_{n-1}$ should be positive eventually. Then there are two possibilities: either $x_{n}>0$ for all $n\geq n_{0}$, or $x_{n}<0$ for all $n\geq n_{0}$, for some $n_{0}\in \Bbb{N}$. If the second case occurs, then the change of variable $y_{n}=-x_{n}$ (or considering $\{-x_{n}\}$ as the solution) reduces Eq.(\ref{formula2}) to the first case. Therefore, without loss of generality, we assume from now on that both initial conditions are positive.
In the first part we discussed in great detail the convergence of solutions of Eq.(\ref{formula1}) (that is, of the sequence of ratios) to an equilibrium and to a 2-cycle. We now study the dynamics of solutions of Eq.(\ref{formula2}) in both of these cases.
\section{Asymptotic stability when the sequence of ratios converges to an equilibrium}
\begin{Theorem}\label{Theorem8} Assume that the sequence
$\{x_{n}\}_{n=-1}^{\infty }$ is a positive solution for
Eq.(\ref{formula2}). Assume also that $\overline{t}$ is an
equilibrium of Eq.(\ref{formula1}) such that the sequence of ratios
$\{x_{n}/x_{n-1}\}_{n=0}^{\infty }$ converges to it.
\begin{description}
\item[\it{(a)}] If $\overline{t}>1$ then $\{x_{n}\}$ diverges
to $\infty $.
\item[\it{(b)}] If $\overline{t}<1$ then $\{x_{n}\}$ converges to
zero.
\item[\it{(c)}] Assume that $\overline{t}=1$ (or equivalently $a+b+c+d=1$). Let $\mathcal{S}=\{\phi ^{-n}(1)\}_{n=0}^{\infty
}$. If $x_{0}/x_{-1}\in \mathcal{S}$ then $\{x_{n}\}$ is
convergent to an equilibrium. Otherwise, $|b+2c+3d|\leq 1$ and also
\begin{description}
\item[\it{($c_{1}$)}] If $|b+2c+3d|<1$ then $\{x_{n}\}$ converges to an equilibrium. Moreover, if $0<b+2c+3d<1$ then one of the subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ will eventually be increasing and the other decreasing, while $\{x_{n}\}$ itself will eventually be increasing or decreasing if $-1<b+2c+3d\leq 0$.
\item[\it{($c_{2}$)}] If $b+2c+3d=-1$ then $\{x_{n}\}$ will be increasing or decreasing eventually. In the latter case $\{x_{n}\}$ obviously converges to an equilibrium. In the former case, if $c>-3d$ then $\{x_{n}\}$ diverges to $\infty $.
\item[\it{($c_{3}$)}] If $b+2c+3d=1$ then both of subsequences of even and odd
terms will be increasing eventually. In particular, if $c>\frac{-2d}{a+d}-b$ then
$\{x_{n}\}$ diverges to $\infty $.
\end{description}
\end{description}
\end{Theorem}
Proof. (a) Since $\lim _{n\rightarrow \infty }x_{n}/x_{n-1}=\overline{t}>1$, there exist $L>1$ and $N\in \Bbb{N}$ such that $x_{n}/x_{n-1}>L$ for all $n>N$. Hence $x_{n}>L^{n-N}x_{N}$ for $n>N$, which shows that $x_{n}\rightarrow \infty $ as $n\rightarrow \infty $. The proof of (b) is similar and will be omitted.
(c) The equality $\overline{t}=1$ is simply equivalent to the
equality $a+b+c+d=1$. If $x_{0}/x_{-1}\in\mathcal{S}$ then there
exists $N\in \Bbb{N}$ such that $x_{n+1}/x_{n}=1$ for all $n>N$.
Thus $x_{n}$ remains constant for all $n> N$. Hence, $x_{n}$
converges to an equilibrium.
Next, assume that $x_{0}/x_{-1}\not \in \mathcal{S}$. Then, since $\{x_{n}/x_{n-1}\}$ converges to the equilibrium $\overline{t}=1$, we have $|\phi '(1)|\leq 1$ or equivalently
\begin{equation}\label{f11}
|b+2c+3d|\leq 1.
\end{equation}
On the other hand, since $a+b+c+d=1$ then we obtain by some
computations that
\begin{equation}\label{f12}
x_{n+1}-x_{n}=r_{n}(x_{n}-x_{n-1}), \ \ \ r_{n}=-
\left(b+c+d+\frac{c+d}{t_{n}}+\frac{d}{t_{n}^{2}}\right),
\end{equation}
notice that $r_{n}\rightarrow -(b+2c+3d)=\phi '(1)$ as $n\rightarrow\infty $, since $t_{n}$ converges to 1. Therefore, by (\ref{f11}) there are three cases to consider, as follows:\\
Case \ I; $|b+2c+3d|<1$: Thus there exist $0<L<1$ and $N\in
\Bbb{N}$ such that $|r_{n}|<L$ for all $n> N$. So (\ref{f12})
implies for $n> N$ that
\begin{equation}\label{f13}
|x_{n+1}-x_{n}|<L|x_{n}-x_{n-1}|,
\end{equation}
thus we have (by induction) for $n\geq N$ that
$$|x_{n+1}-x_{n}|<L^{n-N}|x_{N+1}-x_{N}|,$$
Therefore
\begin{equation}\label{f14} \lim _{n\rightarrow \infty
}x_{n+1}-x_{n}=0.
\end{equation}
On the other hand we obtain from (\ref{f13}) for $n>N$ that
\begin{eqnarray*}
|x_{n+1}-x_{N}| & \leq & |x_{n+1}-x_{n}|+|x_{n}-x_{n-1}|+\ldots
+|x_{N+1}-x_{N}|\\
& < & \left(\sum _{i=0}^{n-N}L^{i}\right)|x_{N+1}-x_{N}|< \left(\sum _{i=0}^{\infty
}L^{i}\right)|x_{N+1}-x_{N}|\\
& = & \frac{|x_{N+1}-x_{N}|}{1-L},
\end{eqnarray*}
therefore, $\{x_{n}\}$ is bounded. This fact together with
(\ref{f14}) imply that $\{x_{n}\}$ is convergent.
On the other hand, (\ref{f12}) implies that $t_{n}(t_{n+1}-1)=r_{n}(t_{n}-1)$. Therefore
\begin{eqnarray*}
t_{n+1}t_{n}-1 &=& t_{n+1}t_{n}-t_{n}+t_{n}-1 \\
&=& t_{n}(t_{n+1}-1)+(t_{n}-1)\\
&=&(t_{n}-1)(r_{n}+1),
\end{eqnarray*}
or equivalently
\begin{equation}\label{fff13}
\frac{x_{n+1}}{x_{n-1}}-1=(t_{n}-1)(r_{n}+1),
\end{equation}
Note that since $|r_{n}|\rightarrow |b+2c+3d|<1$ as $n\rightarrow
\infty $ there exists $n_{0}\in \Bbb{N}$ such that $r_{n}+1>0$ for
all $n>n_{0}$. Also, we know that when $\phi '(1)=-(b+2c+3d)\in
(-1,0)$, $t_{n}$ oscillates alternately around $1$ while when
$-(b+2c+3d)\in [0,1)$, $t_{n}$ remains on one side of $1$ forever.
These facts together with (\ref{fff13}) complete the proof of
$(c_{1})$.
Case \ II; $b+2c+3d=-1$: In this case $\phi '(1)=\phi (1)=1$
(recall that this case occurs when $c=c_{m}$ and $1$ is the greater
equilibrium of $\phi$ or, $c=c_{M}$ and $1$ is the lower equilibrium
of $\phi $. This case also may occur when $x_{m}<1<x_{M}$)
. Therefore, it is evident that there exists an $n_{0}\in \Bbb{N}$ such that either $x_{n}/x_{n-1}<1$ or $x_{n}/x_{n-1}>1$ for all $n>n_{0}$. If the former case occurs, then $\{x_{n}\}$ is decreasing for all $n>n_{0}$ and therefore converges to an equilibrium. On the other hand, if the latter case occurs, then $\{x_{n}\}$ is increasing for $n>n_{0}$. Define the following function
\begin{equation*}
r(t)=-\left( b+c+d+\frac{c+d}{t}+\frac{d}{t^2} \right), \ \ \ t>0,
\end{equation*}
note that $r(1)=1, \ r'(1)=c+3d$. Thus, if $c>-3d$ then $r'(1)>0$.
As a result, there exists $\epsilon >0$ such that $r(t)>1$ for all
$t\in (1,1+\epsilon )$. Therefore, since $r_{n}=r(t_{n})$,
$t_{n}=x_{n}/x_{n-1}>1$ for all $n>n_{0}$, and $t_{n}\rightarrow 1$
as $n\rightarrow \infty $ we conclude that there exists
$n_{1}>n_{0}$ such that $r_{n}>1$ for all $n>n_{1}$. Thus by
(\ref{f12}) the sequence of differences $\{x_{n+1}-x_{n}\}$ is
increasing for $n>n_{1}$.
This fact together with the fact that $\{x_{n}\}$ is increasing
eventually imply that $\{x_{n}\}$ diverges to $\infty $.
Case III; $b+2c+3d=1$: In this case $\phi '(1)=-\phi
(1)=-1$ (recall that this case may occur when $c\geq c^{*}$ or,
$c<c^{*}$ and $1<x_{m}$. Also note that by Lemma 2(c) in \cite{SH}
this case never occurs when $x_{M}<1$). Therefore, it's evident that
there exists $n_{0}\in \Bbb{N}$ such that the sequence of ratios
oscillate alternately around $1$ for all $n>n_{0}$. Some
computations show that
\begin{equation}\label{f15}
x_{n+1}-x_{n-1}=\rho _{n}(x_{n}-x_{n-1}), \ \ \ \rho
_{n}=c+2d-\frac{c+d}{t_{n}}-\frac{d}{t_{n}^{2}},
\end{equation}
since $\rho _{n}=(t_{n}-1)[(c+2d)t_{n}+d]/t_{n}^2$, $c+2d=a>0$, and
$t_{n}\neq 1$ for all $n\geq 0$ then
\begin{equation}\label{ff16}
\rho _{n}(t_{n}-1)>0,
\end{equation}
where the equality $c+2d=a$ follows by subtracting the equalities $a+b+c+d=1$ and $b+2c+3d=1$. Now consider the consecutive ratios
$x_{2n}/x_{2n-1}$ and $x_{2n+1}/x_{2n}$ for $n>n_{0}$. Since these
ratios oscillate around $1$ alternately then one of them is greater
than $1$ and the other one is less than $1$. Without loss of
generality assume that $x_{2n+1}/x_{2n}<1<x_{2n}/x_{2n-1}$. Thus by
(\ref{ff16}) $\rho _{2n+1}<0<\rho _{2n}$. Therefore, (\ref{f15})
implies that $x_{2n+1}>x_{2n-1}$ and $x_{2n+2}>x_{2n}$, i.e., both
of subsequences of even and odd terms are increasing eventually.
Next, define the function
$$R(t)=r(t)r(\phi (t)), \ \ \ t>0,$$
where $r$ is defined in the previous case. Some algebra shows that
$$R(1)=1,\ \ \ R'(1)=0,\ \ \ R''(1)=2(b+c)(a+d)+4d,$$
Therefore, if $R''(1)>0$, i.e., $c>-2d/(a+d)-b$ then $1$ is a local
minimum point for $R$. As a result, there exists $\epsilon >0$ such
that $R(t)>1$ for all $t\in (1-\epsilon ,1+\epsilon ),t\neq 1$.
Therefore, since $r_{n+1}r_{n}=R(t_{n})$ and $t_{n}\rightarrow 1$ as
$n\rightarrow \infty $ then there exists $n_{1}>n_{0}$ such that
$r_{n+1}r_{n}>1$ for all $n>n_{1}$. Thus, we obtain from (\ref{f12})
that for $n>n_{1}$
$|x_{n+1}-x_{n}|=r_{n}r_{n-1}|x_{n-1}-x_{n-2}|>|x_{n-1}-x_{n-2}|$, or equivalently $|d_{n+1}|>|d_{n-1}|$, where $d_{n}=x_{n}-x_{n-1}$ denotes the sequence of differences. Therefore, both of the sequences $\{|d_{2n+1}|\}$ and $\{|d_{2n}|\}$ are eventually increasing. Hence each of them either converges to a positive number or diverges to $\infty $.
Finally, we claim that both subsequences of even and odd terms (and therefore $\{x_{n}\}$) diverge to $\infty $. Suppose, for the sake of contradiction, that at least one of them is convergent. If one of them is convergent and the other divergent, we immediately obtain a contradiction, since the ratio $x_{n}/x_{n-1}$ converges to $1$. On the other hand, if both of them are convergent, then for the same reason they must converge to the same number, so $\{x_{n}\}$ is convergent. Thus $|d_{n+1}|=|x_{n+1}-x_{n}|\rightarrow 0$ as $n\rightarrow \infty $, which is a contradiction. Therefore, $\{x_{n}\}$ diverges to $\infty $. The proof is complete.
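As a numerical sanity check of the identity (\ref{f12}) used throughout the proof above, the following Python sketch (with arbitrarily chosen parameters satisfying $a+b+c+d=1$; not part of the argument) verifies $x_{n+1}-x_{n}=r_{n}(x_{n}-x_{n-1})$ along an orbit:

```python
import math

# arbitrary sample parameters with a + b + c + d = 1 (a, b, d > 0)
a, b, c, d = 0.5, 0.2, 0.2, 0.1

y, x = 1.0, 3.0                       # x_{-1}, x_0 > 0
for _ in range(8):
    x_new = (a * x**3 + b * x**2 * y + c * x * y**2 + d * y**3) / x**2
    t = x / y                          # t_n = x_n / x_{n-1}
    r = -(b + c + d + (c + d) / t + d / t**2)
    # identity (f12): x_{n+1} - x_n = r_n (x_n - x_{n-1})
    assert math.isclose(x_new - x, r * (x - y), rel_tol=1e-9, abs_tol=1e-12)
    y, x = x, x_new
print(x)
```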
\section{Asymptotic stability when the sequence of ratios converges to a 2-cycle}
\begin{Lemma}\label{Lemma5}\begin{description}
\item[\it{(a)}] Eq.(\ref{formula1}) has a unique 2-cycle $(p,q)$ with
$pq=1$ if and only if
\begin{equation}\label{f16}
\frac{a-c}{d}=\frac{d-b+1}{a}>2.
\end{equation}
\item[\it{(b)}] Assume that $(p,q)$ is an attractive 2-cycle of
Eq.(\ref{formula1}) with $pq=1,p<1<q$. Then
$$q+p\phi '(p)<0<p+q\phi '(q).$$
\item[\it{(c)}] Assume that (\ref{f16}) holds. Then Eq.(\ref{formula2}) has infinitely many 2-cycles. More precisely, the family of 2-cycles of Eq.(\ref{formula2}) is the following set:
$$\mathcal{A}=\{(p',q')\ |\ p'/q'=p\ \text{ or }\ p'/q'=q\}.$$
\end{description}
\end{Lemma}
Proof. (a) Assume that Eq.(\ref{formula1}) has a 2-cycle $(p,q)$ with $pq=1$. So $q^2=aq^3+bq^2+cq+d$ and $p^2=ap^3+bp^2+cp+d$. Multiply the first equation by $p$ and the second equation by $q$, subtract the resulting equations, and divide by $q-p$ to obtain
$$p+q=\frac{d-b+1}{a};$$
in a similar fashion, multiply those two equations by $p^2$ and $q^2$, respectively, and subtract, to obtain
$$p+q=\frac{a-c}{d}.$$
Therefore $(a-c)/d=(d-b+1)/a$. Since $pq=1$ and $p+q=(a-c)/d$ then
both $p$ and $q$ satisfy the following quadratic polynomial
\begin{equation}\label{f17}
X^2-\frac{a-c}{d}X+1=0,
\end{equation}
Therefore such a 2-cycle is unique. On the other hand, Eq.(\ref{f17}) must have positive discriminant, so $(a-c)/d>2$, and therefore (\ref{f16}) holds.
Next, suppose that (\ref{f16}) holds. Then it is easy to verify that the polynomial $G$ in Lemma 1 in \cite{SH} is divisible by the quadratic in Eq.(\ref{f17}). As a result, Eq.(\ref{formula1}) has a 2-cycle $(p,q)$ with $pq=1$.
(b) First, we show that the quantities $q+p\phi '(p)$ and $p+q\phi '(q)$ have opposite signs. Since $(p,q)$ is an attractive 2-cycle of Eq.(\ref{formula1}), we have
\begin{equation}\label{f18}
\phi '(p)\phi '(q)=(\phi ^2)'(p)\leq 1.
\end{equation}
On the other hand, since $pq=1$ then (a) implies that (\ref{f16})
holds. This fact together with
the fact that $p+q=(a-c)/d$ imply that
\begin{eqnarray}\label{f19}
\nonumber p^2\phi '(p)+q^2\phi '(q) & =
&-\left(\frac{bq^2+2cq+3d}{q^2}+\frac{bp^2+2cp+3d}{p^2}\right)
\\
\nonumber & = & -\left(2b+2c(p+q)+3d((p+q)^2-2)\right) \\
\nonumber & = & -\left(2b+2c\left(\frac{a-c}{d}\right)+3d\left[\left(\frac{a-c}{d}\right)^2-2\right]\right) \\
\nonumber & = & -\left(2b+2a\frac{a-c}{d}+\frac{(a-c)^2}{d}-6d\right)\\
\nonumber & = & -\left(2b+2(d-b+1)+\frac{(a-c)^2}{d}-6d\right)\\
\nonumber & = & -\left(2+\frac{(a-c)^2-4d^2}{d}\right)\\
&<&-2.
\end{eqnarray}
Thus (\ref{f18}), (\ref{f19}), and the equality $pq=1$ yield
$$(q+p\phi '(p))(p+q\phi '(q))=1+\phi '(p)\phi '(q)+p^2\phi '(p)+q^2\phi '(q)<2-2=0,$$
therefore, the quantities $q+p\phi '(p)$ and $p+q\phi '(q)$ have opposite signs. On the other hand, the equality $pq=1$ together
with (\ref{f17}) imply that
\begin{equation}\label{f20}
q+p\phi '(p)=q(1-b-2cq-3dq^2)=q(1-b+3d+(c-3a)q),
\end{equation}
similarly
\begin{equation}\label{f21}
p+q\phi '(q)=p(1-b+3d+(c-3a)p).
\end{equation}
If $p+q\phi '(q)<0<q+p\phi '(p)$ then (\ref{f20}) and (\ref{f21})
imply that
$$0<(1-b+3d+(c-3a)q)-(1-b+3d+(c-3a)p)=(c-3a)(q-p),$$
So since $p<q$ we obtain that $c>3a$ which simply contradicts
(\ref{f16}). Hence, $q+p\phi '(p)<0<p+q\phi '(q)$.
(c) The proof of (c) is clear and will be omitted.
The proof is complete.\\
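Part (a) makes it easy to construct concrete 2-cycles. The following Python sketch (parameters chosen ad hoc so that $(a-c)/d=(d-b+1)/a=3>2$, with $c$ negative as the introduction permits; an illustration, not part of the proof) solves the quadratic in Eq.(\ref{f17}) and checks that $\phi$ swaps its two roots:

```python
import math

a, b, c, d = 0.3, 0.6, -1.2, 0.5      # then (a-c)/d = (d-b+1)/a = 3 > 2

def phi(t):
    return (a * t**3 + b * t**2 + c * t + d) / t**3

S = (a - c) / d                        # p + q; also equals (d - b + 1)/a
assert math.isclose(S, (d - b + 1) / a)

# p, q are the roots of X^2 - S X + 1 = 0, so pq = 1
disc = math.sqrt(S**2 - 4)
p, q = (S - disc) / 2, (S + disc) / 2

assert math.isclose(p * q, 1.0)
assert math.isclose(phi(p), q)         # (p, q) is a 2-cycle of Eq.(1)
assert math.isclose(phi(q), p)
print(p, q)
```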
The following theorem, whose proof uses some of the ideas in the proof of Theorem \ref{Theorem8}, discusses the dynamics of solutions of Eq.(\ref{formula2}) when the sequence of ratios converges to a 2-cycle.
\begin{Theorem}\label{Theorem9}Assume that the sequence
$\{x_{n}\}_{n=-1}^{\infty }$ is a positive solution for
Eq.(\ref{formula2}) and $(p,q)$ is a 2-cycle of Eq.(\ref{formula1})
such that the sequence of ratios $\{x_{n}/x_{n-1}\}_{n=0}^{\infty }$
converges to it.
\begin{description}
\item[\it{(a)}] If $pq>1$ then $\{x_{n}\}$ diverges to $\infty $.
\item[\it{(b)}] If $pq<1$ then $\{x_{n}\}$ converges to zero.
\item[\it{(c)}] Assume that $pq=1$ (or equivalently (\ref{f16}) holds).
Let $\mathcal{S}=\{\phi ^{-n}(p),\phi ^{-n}(q)\}_{n=0}^{\infty
}$. If $x_{0}/x_{-1}\in \mathcal{S}$ then $\{x_{n}\}$ converges
to a 2-cycle. Otherwise, we have $|\phi '(p)\phi '(q)|\leq 1$
and we consider three cases as follow
\begin{description}
\item[$(c_{1})$] $|\phi '(p)\phi '(q)|<1$; In this case $\{x_{n}\}$ converges to a 2-cycle. Moreover, if $-1<\phi '(p)\phi '(q)<0$ then the subsequences $\{x_{4n}\}$ and $\{x_{4n+3}\}$ will eventually be increasing and the other two decreasing, or vice versa, while both of the subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ will eventually be increasing or decreasing if $0\leq \phi '(p)\phi '(q)< 1$.
\item[$(c_{2})$] $\phi '(p)\phi '(q)=1$; In this case both of the subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ will be increasing or decreasing eventually. In the latter case $\{x_{n}\}$ converges to a 2-cycle. In the former case $\{x_{n}\}$ diverges to $\infty $ if
$$p+q\phi '(q)+(\phi '(q))^2\phi ''(p)/2+\phi '(p)\phi ''(q)/2>0$$
\item[$(c_{3})$] $\phi '(p)\phi '(q)=-1$; Let $l=-(p^2+(\phi '(q))^2q^2)+(q+p\phi '(p))\phi ''(q)/2+(p+q\phi '(q))(\phi '(q))^2\phi ''(p)/2$. If $l<0$ then all of the subsequences $\{x_{4n}\},
\{x_{4n+1}\},\{x_{4n+2}\},$ and $\{x_{4n+3}\}$ are
decreasing eventually. In this case $\{x_{n}\}$ converges to a 2-cycle.
If $l>0$ then all of subsequences $\{x_{4n}\},
\{x_{4n+1}\},\{x_{4n+2}\},$ and $\{x_{4n+3}\}$ are
increasing eventually. In this case $\{x_{n}\}$ diverges to $\infty $
if
$$-2s''(q)-2(s'(q))^2-s'(q)(\phi ^2)''(q)>0$$
where
$$s(t)=\frac{t\phi (t)\gamma (\phi (t))\theta (t)[\phi ^2(t)\theta (\phi ^2(t)+p]}{t\theta
(t)+p},$$\\
$$\gamma
(t)=-\left(\frac{b}{pt}+\frac{c(t+p)}{p^2t^2}+\frac{d(t^2+pt+p^2)}{p^3t^3}\right),$$\\
$$ \theta (t)=-\left(\frac{b}
{qt}+\frac{c(t+q)}{q^2t^2}+\frac{d(t^2+qt+q^2)}{q^3t^3}\right).$$
\end{description}
\end{description}
\end{Theorem}
Proof. Throughout the proof we assume, without loss of generality,
that
\begin{equation}\label{f22}
t_{2n}\rightarrow p,\ \ \ t_{2n+1}\rightarrow q,\ \ \ \text{as} \
n\rightarrow \infty ,
\end{equation}
therefore
$$\frac{x_{n+2}}{x_{n}}=t_{n+2}t_{n+1}\rightarrow pq \ \ \ \text{as} \ n\rightarrow \infty .$$
Thus if $pq>1$ then there exist $N\in \Bbb{N}$ and $L>1$ such that
$x_{n+2}/x_{n}>L$ for all $n>N$. This proves (a). In a
similar fashion (b) is proved. Now we proceed to (c). Since $pq=1$,
we assume, without loss of generality, that $p<1<q$ hereafter.
If $x_{0}/x_{-1}\in \mathcal{S}$ then there exists an integer $N$
such that $x_{N+1}/x_{N}=p$ or $x_{N+1}/x_{N}=q$. Thus
$x_{n+1}/x_{n}=p$ and $x_{n+2}/x_{n+1}=q$ for all $n\geq N$ or vice
versa. Therefore for $n\geq N$
$$\frac{x_{n+2}}{x_{n}}=pq=1,$$
which means that $\{x_{n}\}$ converges to a 2-cycle. Now assume that
$x_{0}/x_{-1}\not \in \mathcal{S}$. Then since the 2-cycle $(p,q)$
attracts the sequence of ratios $\{x_{n}/x_{n-1}\}$ we have $|\phi
'(p)\phi '(q)|\leq 1$.
$(c_{1})$ Define $D_{n}=x_{n}-x_{n-2}$. Then, using (\ref{f22})
and l'H\^{o}pital's rule, one can write
\begin{eqnarray*}
\lim _{n\rightarrow \infty }\left|\frac{D_{2n+2}}{D_{2n}}\right| &=& \lim _{n\rightarrow \infty }\left|\frac{t_{2n+2}t_{2n+1}t_{2n}t_{2n-1}-t_{2n}t_{2n-1}}{t_{2n}t_{2n-1}-1}\right| \\
&=&\lim _{t\rightarrow q} \left|\frac{t\phi (t)\phi ^2(t)\phi ^3(t)-t\phi (t)}{t\phi
(t)-1}\right|\\
& = & |\phi '(p)\phi '(q)|\\
& < & 1.
\end{eqnarray*}
In a similar fashion
$$\lim _{n\rightarrow \infty }\left|\frac{D_{2n+1}}{D_{2n-1}}\right|=|\phi '(p)\phi '(q)|<1.$$
Consequently, there exist $n_{0}\in \Bbb{N}$ and $0<L<1$ such that
for $n>n_{0}$
$$|D_{2n+2}|<L|D_{2n}|,\ \ \ |D_{2n+1}|<L|D_{2n-1}|.$$
Therefore, by an analysis entirely similar to that applied in
Theorem \ref{Theorem8}(c), it can be shown that both
subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ are convergent and
hence $\{x_{n}\}$ converges to a 2-cycle.
Some calculations show that
\begin{equation}\label{f23}
t_{n+1}-q=\gamma _{n}(t_{n}-p), \ \ \ \gamma
_{n}=-\left(\frac{b}{pt_{n}}+\frac{c(t_{n}+p)}{p^2t_{n}^2}+\frac{d(t_{n}^2+pt_{n}+p^2)}{p^3t_{n}^3}\right),
\end{equation}
and
\begin{equation}\label{f24}
t_{n+1}-p=\theta _{n}(t_{n}-q), \ \ \ \theta
_{n}=-\left(\frac{b}{qt_{n}}+\frac{c(t_{n}+q)}{q^2t_{n}^2}+\frac{d(t_{n}^2+qt_{n}+q^2)}{q^3t_{n}^3}\right).
\end{equation}
By (\ref{f22}) we obtain
\begin{equation}\label{f25}
\gamma _{2n} \rightarrow \phi '(p), \ \ \ \theta _{2n+1}\rightarrow
\phi '(q),\ \ \ \text{as} \ n\rightarrow \infty .
\end{equation}
Now, suppose that $-1<\phi '(p)\phi '(q)< 0$. By Lemma
\ref{Lemma5}(b), $\phi '(p)<-q/p<0$, so $\phi '(q)>0$. Therefore $\phi $ is decreasing in a neighborhood of $p$ and increasing in a neighborhood of $q$. This fact, together with (\ref{f22}), implies that there exists $n_{0}\in \Bbb{N}$ such that for
$n\geq n_{0}$
\begin{description}
\item[\it{(i)}] either $t_{4n}<p,t_{4n+1}>q,t_{4n+2}>p,t_{4n+3}<q$ or,
\item[\it{(ii)}] $t_{4n}>p,t_{4n+1}<q,t_{4n+2}<p,t_{4n+3}>q.$
\end{description}
On the other hand, (\ref{f23}) and (\ref{f24}) imply that
\begin{eqnarray*}
t_{n+4}t_{n+3}t_{n+2}t_{n+1}-1 &=& t_{n+4}t_{n+3}t_{n+2}t_{n+1}\mp p^2t_{n+3}t_{n+1}-p^2q^2 \\
&=& t_{n+3}t_{n+1}(t_{n+4}t_{n+2}-p^2)+p^2(t_{n+3}t_{n+1}-q^2) \\
&=& t_{n+3}t_{n+1}(t_{n+4}t_{n+2}\mp pt_{n+2}-p^2)+p^2(t_{n+3}t_{n+1}\mp qt_{n+1}-q^2) \\
&=& t_{n+3}t_{n+1}[t_{n+2}(t_{n+4}-p)+p(t_{n+2}-p)]+ \\
& & p^2[t_{n+1}(t_{n+3}-q)+q(t_{n+1}-q)]\\
&=& t_{n+3}t_{n+1}(t_{n+2}\theta _{n+3}\gamma _{n+2}+p)(t_{n+2}-p)+ \\
& & p^2(t_{n+1}\gamma _{n+2}\theta _{n+1}+q)(t_{n+1}-q)\\
&=& [t_{n+3}t_{n+1}\theta _{n+1}(t_{n+2}\theta _{n+3}\gamma _{n+2}+p)+p^2(t_{n+1}\gamma _{n+2}\theta _{n+1}+q)]\times \\
& & (t_{n+1}-q),
\end{eqnarray*}
Therefore
\begin{equation}\label{f27}
\frac{x_{n+4}}{x_{n}}-1=\lambda _{n}(t_{n+1}-q), \ \ \ \lambda
_{n}=t_{n+3}t_{n+1}\theta _{n+1}(t_{n+2}\theta _{n+3}\gamma
_{n+2}+p)+p^2(t_{n+1}\gamma _{n+2}\theta
_{n+1}+q).
\end{equation}
In a similar fashion one can write
\begin{equation}\label{f28}
\frac{x_{n+4}}{x_{n}}-1=\xi _{n}(t_{n+1}-p), \ \ \ \xi
_{n}=t_{n+3}t_{n+1}\gamma _{n+1}(t_{n+2}\gamma _{n+3}\theta
_{n+2}+q)+q^2(t_{n+1}\theta _{n+2}\gamma_{n+1}+p).
\end{equation}
Notice that (\ref{f22}) and (\ref{f25}) imply that
\begin{equation}\label{f29}
\lambda _{2n}\rightarrow (\phi '(p)\phi '(q)+1)(p+q\phi '(q)), \ \ \
\xi _{2n+1}\rightarrow (\phi '(p)\phi '(q)+1)(q+p\phi '(p)), \ \ \
\text{as}\ n\rightarrow \infty .
\end{equation}
Consequently, by the fact that $\phi '(p)\phi '(q)>-1$, Lemma
\ref{Lemma5}(b), (\ref{f27}), (\ref{f28}), and (\ref{f29}), the
subsequences $\{x_{4n}\}$ and $\{x_{4n+3}\}$ are eventually increasing
while the other two are eventually decreasing if $(i)$
holds. Otherwise, the subsequences $\{x_{4n}\}$ and $\{x_{4n+3}\}$
are eventually decreasing while the other two are eventually
increasing.
Next, assume that $0\leq \phi '(p)\phi '(q)<1$. By Lemma
\ref{Lemma5}(b), $\phi '(p)<-q/p<0$. Thus $\phi '(q)\leq 0$. If $\phi
'(q)<0$ then $\phi $ is decreasing in a neighborhood of $p$ and in a
neighborhood of $q$. As a result, by (\ref{f22}) we
conclude that there exists $n_{0}\in \Bbb{N}$ such that
for $n\geq n_{0}$
\begin{description}
\item[(i)] either $t_{2n}>p,t_{2n+1}<q$ or,
\item[(ii)] $t_{2n}<p,t_{2n+1}>q$.
\end{description}
If, on the other hand, $\phi '(q)=0$, then $q=x_{m}$ or $q=x_{M}$.
It is easy to show that case $(i)$ occurs if $q=x_{m}$,
while case $(ii)$ occurs if $q=x_{M}$. By an analysis
similar to that applied to the expression $x_{n+4}/x_{n}-1$ we
obtain
\begin{equation}\label{f31}
\frac{x_{n+2}}{x_{n}}-1=\lambda '_{n}(t_{n+1}-q)=\xi
'_{n}(t_{n+1}-p), \ \ \ \lambda '_{n}=t_{n+1}\theta _{n+1}+p, \ \ \
\xi '_{n}=t_{n+1}\gamma _{n+1}+q,
\end{equation}
with
\begin{equation}\label{f32}
\lambda '_{2n}\rightarrow p+q\phi '(q),\ \ \ \xi '_{2n+1}\rightarrow
q+p\phi '(p), \ \ \ \text{as} \ n\rightarrow \infty .
\end{equation}
Therefore, Lemma \ref{Lemma5}(b), (\ref{f31}), and (\ref{f32}) imply
that both subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ are
eventually decreasing if $(i)$ holds, and eventually increasing if
$(ii)$ holds.
$(c_{2})$ By an analysis entirely similar to that applied to
the case $0\leq \phi '(p)\phi '(q)<1$ in $(c_{1})$, one can prove
that both subsequences $\{x_{2n}\}$ and $\{x_{2n+1}\}$ are
eventually increasing or eventually decreasing. If the latter case occurs (note
that this happens when case $(i)$ in $(c_{1})$ occurs,
i.e., $t_{2n}>p,t_{2n+1}<q$ for $n>n_{0}$) then $\{x_{n}\}$
clearly converges to a 2-cycle.
Now assume that the former case occurs (note that in this case
$t_{2n}<p,t_{2n+1}>q$ for $n>n_{0}$). Then
\begin{eqnarray*}
\frac{D_{n+2}}{D_{n}} &=& \frac{x_{n+2}-x_{n}}{x_{n}-x_{n-2}}
\\
&=& \frac{t_{n}t_{n-1}(t_{n+2}t_{n+1}-1)}{t_{n}t_{n-1}-1}.
\end{eqnarray*}
Therefore, (\ref{f31}), (\ref{f23}), (\ref{f24}), and some algebra
imply that
\begin{equation}\label{f33}
D_{n+2}=s_{n}D_{n}, \ \ \ s_{n}=\frac{t_{n}t_{n-1}\gamma _{n}\theta
_{n-1}\lambda '_{n}}{\lambda '_{n-2}}.
\end{equation}
Some computations show that
\begin{equation}\label{f34}
\gamma (p)=\phi '(p), \ \ \ \theta (q)=\phi '(q), \ \ \ \gamma
'(p)=\frac{\phi ''(p)}{2}, \ \ \ \theta '(q)=\frac{\phi ''(q)}{2}.
\end{equation}
Notice that $s_{n}=s(t_{n-1})$ and, by (\ref{f34}), $s(q)=1$. Also,
using (\ref{f34}) and some algebra, we obtain
$$s'(q)=p+q\phi '(q)+(\phi '(q))^2\frac{\phi ''(p)}{2}+\phi '(p)\frac{\phi ''(q)}{2}>0.$$
As a result there exists $\epsilon >0$ such that $s(t)>1$ for all
$t\in (q,q+\epsilon )$. Therefore, since $s_{2n}=s(t_{2n-1})$,
$t_{2n-1}>q$ for all $n>n_{0}$, and $t_{2n-1}\rightarrow q$ as
$n\rightarrow \infty $ then there exists $n_{1}>n_{0}$ such that
$s_{2n}>1$ for all $n>n_{1}$. Thus by (\ref{f33}) the sequence
$\{D_{2n}\}$ is increasing eventually.
Consequently, since $\{x_{2n}\}$ is increasing eventually then
$\{x_{2n}\}$ should be divergent to $\infty $ and hence by
(\ref{f22}) $\{x_{2n+1}\}$ should be divergent to $\infty$, too.
This means that $\{x_{n}\}$ is divergent to $\infty $.
$(c_{3})$ Note that, as in the case $-1<\phi '(p)\phi
'(q)<0$ in $(c_{1})$, there exists $n_{0}\in \Bbb{N}$ such that
either $t_{4n}<p,t_{4n+1}>q,t_{4n+2}>p,t_{4n+3}<q$ or
$t_{4n}>p,t_{4n+1}<q,t_{4n+2}<p,t_{4n+3}>q$ for $n>n_{0}$. Consider
the quantities $\lambda _{n}$ and $\xi _{n}$ in $(c_{1})$ and define
the following functions for $t>0$:
\begin{eqnarray*}
\lambda (t)=t\phi ^2(t)\theta (t)[\phi (t)\theta (\phi ^2(t))\gamma (\phi (t))+p]+p^2[t\gamma (\phi (t))\theta (t)+q],\\
\xi (t)=t\phi ^2(t)\gamma (t)[\phi (t)\gamma (\phi ^2(t))\theta (\phi (t))+p]+p^2[t\theta (\phi (t))\gamma
(t)+q].
\end{eqnarray*}
Notice that $\lambda _{n}=\lambda (t_{n+1}), \xi _{n}=\xi
(t_{n+1})$, and by (\ref{f34}) $\lambda (q)=\xi (p)=0$. Also, by
(\ref{f34}) and some algebra, we have
$$(\phi '(q))^2\xi '(p)=\lambda '(q)=l.$$
Therefore the quantities $\xi '(p)$ and $\lambda '(q)$ have the
same sign. Now assume that both of them are negative, i.e., $l<0$.
Then there are neighborhoods of $p$ and $q$ on which $\xi $ and
$\lambda $, respectively, are decreasing. Assume that
$t_{4n}<p,t_{4n+1}>q,t_{4n+2}>p,t_{4n+3}<q$ for $n>n_{0}$. Thus
since $\lambda _{4n}=\lambda (t_{4n+1}),\lambda _{4n+2}=\lambda
(t_{4n+3}), \xi _{4n+1}=\xi (t_{4n+2})$, and $\xi _{4n+3}=\xi
(t_{4n+4})$ then by (\ref{f22}) we obtain that there exists
$n_{1}>n_{0}$ such that for $n>n_{1}$
$$\lambda _{4n}<0<\lambda _{4n+2}, \ \ \ \xi _{4n+1}<0<\xi _{4n+3}.$$
Consequently, by (\ref{f27}) and (\ref{f28}), we conclude that all of the
subsequences $\{x_{4n}\}$, $\{x_{4n+1}\}$,
$\{x_{4n+2}\}$, and $\{x_{4n+3}\}$ are eventually decreasing (a similar result
obtains if $t_{4n}>p,t_{4n+1}<q,t_{4n+2}<p,t_{4n+3}>q$ for $n>n_{0}$). As a result, all four subsequences
are convergent, and since $x_{n+2}/x_{n}\rightarrow 1$ as $n\rightarrow \infty $, the subsequences $\{x_{4n}\}$ and $\{x_{4n+2}\}$ of even terms must converge to the same limit. The same holds for the subsequences $\{x_{4n+1}\}$ and $\{x_{4n+3}\}$ of odd terms. Hence, $\{x_{n}\}$ converges to a 2-cycle.
Next, suppose that $l>0$. Then similar arguments show that all of the
subsequences $\{x_{4n}\}$, $\{x_{4n+1}\}$, $\{x_{4n+2}\}$, and $\{x_{4n+3}\}$ are eventually increasing. Define
the function $$S(t)=s(t)s(\phi ^2(t)), \ \ \ t>0.$$ Using the fact
that $s(q)=\phi '(p)\phi '(q)=-1$ and some algebra, we obtain
$$S(q)=1, \ \ \ S'(q)=0,\ \ \ S''(q)=-2s''(q)-2(s'(q))^2-s'(q)(\phi ^2)''(q)>0.$$
Thus $q$ is a local minimum point for $S$. So there exists $\epsilon
>0$ such that $S(t)>1$ for $t\in (q-\epsilon ,q+\epsilon ),t\neq q$.
Therefore, since $s_{4n+2}s_{4n}=s(t_{4n+1})s(t_{4n-1})=S(t_{4n-1})$
and $t_{4n-1}\rightarrow q$ as $n\rightarrow \infty $ then there
exists $n_{0}\in \Bbb{N}$ such that $s_{4n+2}s_{4n}>1$ for all
$n>n_{0}$. As a result (\ref{f33}) implies that
$|D_{4n+4}|=s_{4n+2}s_{4n}|D_{4n}|>|D_{4n}|$, i.e., the sequence
$\{|D_{4n}|\}$ is eventually increasing. Thus it either converges
to a positive number or diverges to $\infty $.
We claim that both subsequences of even terms, i.e., $\{x_{4n}\}$
and $\{x_{4n+2}\}$, are divergent to $\infty $ (and therefore by
(\ref{f22}) the other two subsequences are divergent, too; hence
$\{x_{n}\}$ diverges to $\infty $). Otherwise, at least one of them
would be convergent, and therefore, since $x_{4n+2}/x_{4n}\rightarrow
1$ as $n\rightarrow\infty $, both of them would be
convergent. As a result $D_{4n}\rightarrow 0$ as $n\rightarrow
\infty $, which is a contradiction. The proof is complete.
\begin{Remark}\label{Remark5} In Theorem \ref{Theorem8} and Theorem
\ref{Theorem9} the dynamical behavior of solutions of
Eq.(\ref{formula2}) was studied in the cases where the sequence of ratios
converges to an equilibrium or to a 2-cycle, respectively. By Theorem 4
and Theorem 5 in \cite{SH} we know that one of these two cases necessarily
occurs when $c\geq c^*$ or, $c<c^*,x_{M} \leq \overline{t}$ or,
$c<c^*,x_{m}\leq \overline{t}\leq x_{M}$. But if $c<c^*$ and
$\overline{t}<x_{m}$ the sequence of ratios may fail to
converge to an equilibrium or a 2-cycle. In this case, according to
Theorem $6(a)$ in \cite{SH}, the interval $I=[\phi (x_{m}),\phi
^2(x_{m})]$ is invariant under hypothesis \emph{(H)}, or the ratios
eventually end up in $I$ if $c\leq c_{1}^*$. Therefore, if
$x_{0}/x_{-1}\in I$ and \emph{(H)} holds, or $x_{0}/x_{-1}\not \in
I$ but $c\leq c_{1}^*$, then $\{x_{n}\}$ diverges to $\infty $ when
$\phi (x_{m})\geq 1$, while $\{x_{n}\}$ converges to zero when $\phi
^2(x_{m})\leq 1$.
\end{Remark}
\section{Some examples}
\begin{Example}\label{Ex1} Consider the first example in Remark 4 in
\cite{SH}. Note that $c>c_{-}\approx -4.1305$, where $c_{-}$ is the
unique negative root of the cubic polynomial $Q$ in Theorem 1 in
\cite{SH}. So by Theorem $1(b)$ in \cite{SH} nonpositive iterations
of Eq.(\ref{formula1}) do not occur. In this example
Eq.(\ref{formula1}) has two equilibria and no 2-cycle. Some
computations show that
$$x_{m}\approx 0.7133,\ \overline{t}_{1}\approx 0.7845,\ \overline{t}_{2}=1,\ \delta \approx0.5833,$$
where $\delta $ has been defined in Theorem $7(a)$ in \cite{SH}.
Notice that here $a+b+c+d=1$ and hence one of the two equilibria is $1$.
By Theorem $7(a_{1})$ in \cite{SH}, if $t_{0}\in (\delta ,\overline{t}_{2})$ then
$\{t_{n}\}$ converges to $\overline{t}_{1}$; otherwise it converges
to $\overline{t}_{2}$. Moreover, if $t_{0}\not\in (\delta
,\overline{t}_{2})$ then $\{t_{n}\}$ converges to $\overline{t}_{2}$ from the
right.
Therefore, since $\overline{t}_{1}<1$, $a+b+c+d=1$, $b+2c+3d=-1$,
and $c>-3d$, Theorem \ref{Theorem8}($c_{2}$) and Theorem
\ref{Theorem8}(b) imply that
\begin{description}
\item[\it{(i)}] If $x_{0}/x_{-1}\in (\delta ,\overline{t}_{2})$ then
$\{x_{n}\}$ converges to zero.
\item[\it{(ii)}] If $x_{0}/x_{-1}\not\in (\delta
,\overline{t}_{2})$ then $\{x_{n}\}$ diverges to $\infty
$.
\end{description}
\end{Example}
\begin{Example}\label{Ex2} In Eq.(\ref{formula2}) set
$a=0.2,b=1.7,c=-2,d=1.1$. Then $c>c_{-}\approx -2.8540$ and therefore,
as in the previous example, nonpositive
iterations of Eq.(\ref{formula1}) do not occur. In this example
Eq.(\ref{formula1}) has a unique equilibrium $\overline{t}=1$
(notice that $a+b+c+d=1$) and two 2-cycles $(p_{1},q_{1})\approx
(0.2262,63.6517)$ and $(p_{2},q_{2})\approx (0.5110,4.1111)$. Since
$c>c^*=-\sqrt{3bd}\approx -2.3685$, by Theorem $4(c)$ in
\cite{SH} $\{t_{n}\}$ converges to $1$ if $t_{0}\in (p_{2},q_{2})$
and converges to the 2-cycle $(p_{1},q_{1})$ if $t_{0}\in
(0,p_{2})\cup (q_{2},\infty )$.
Therefore, since $p_{1}q_{1}>1$, $a+b+c+d=b+2c+3d=1$, and
$c>\frac{-2d}{a+d}-b$, Theorem \ref{Theorem8}($c_{3}$), Theorem
\ref{Theorem9}(a), and Theorem \ref{Theorem9}(c) imply that
\begin{description}
\item[\it{(i)}] If $x_{0}/x_{-1}\in (0,\infty )\setminus
\{p_{1},p_{2},\overline{t},q_{2},q_{1}\}$ then $\{x_{n}\}$
diverges to $\infty $.
\item[\it{(ii)}] If $x_{0}/x_{-1}=\overline{t}$ then
$\{x_{n}\}$ converges to an equilibrium.
\item[\it{(iii)}] If $x_{0}/x_{-1}\in
\{p_{1},p_{2},q_{2},q_{1}\}$ then $\{x_{n}\}$ converges to a
2-cycle.
\end{description}
\end{Example}
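These conclusions can be spot-checked numerically. The sketch below is illustrative only: it assumes, consistent with the abstract, that Eq.(\ref{formula2}) is $x_{n+1}=(ax_{n}^{3}+bx_{n}^{2}x_{n-1}+cx_{n}x_{n-1}^{2}+dx_{n-1}^{3})/x_{n}^{2}$, so that the ratios $t_{n}=x_{n}/x_{n-1}$ satisfy Eq.(\ref{formula1}), i.e., $t_{n+1}=\phi(t_{n})$ with $\phi(t)=a+b/t+c/t^{2}+d/t^{3}$. It verifies the equilibrium and the quoted 2-cycle $(p_{1},q_{1})$ to the stated precision.

```python
# Spot-check of Example 2 (illustrative; assumes the ratio map
# phi(t) = a + b/t + c/t^2 + d/t^3 induced by Eq.(formula2)).
a, b, c, d = 0.2, 1.7, -2.0, 1.1

def phi(t):
    return a + b / t + c / t**2 + d / t**3

# Unique equilibrium: phi(1) = a + b + c + d = 1.
assert abs(phi(1.0) - 1.0) < 1e-12

# (p1, q1) ~ (0.2262, 63.6517) should be a 2-cycle of phi.
p1, q1 = 0.2262, 63.6517
assert abs(phi(q1) - p1) < 1e-3
assert abs(phi(p1) - q1) < 0.1      # looser: p1 is quoted to only 4 decimals

# p1 * q1 > 1 is the hypothesis of Theorem 9(a) (divergence to infinity).
assert p1 * q1 > 1
```

Since $p_{1}q_{1}>1$, any solution whose ratio sequence is attracted to this 2-cycle grows without bound, in agreement with item \emph{(i)} above.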
\begin{Example}\label{Ex3} In Eq.(\ref{formula2}) set
$a=0.1,b=1.79,c=-2,d=1$. Thus again $c>c_{-}\approx-2.7295$, which,
as in the previous examples, guarantees that all iterations
of Eq.(\ref{formula1}) remain positive forever. In this example
Eq.(\ref{formula1}) has a unique equilibrium $\overline{t}\approx
0.9423$ and three 2-cycles $(p_{1},q_{1})\approx (0.1024,759.2585)$,
$(p_{2},q_{2})\approx (0.6021,2.1370)$ and $(p_{3},q_{3})\approx
(0.7298,1.3702)$. Since $c>c^*\approx -5.37$, by Theorem $4(d)$
in \cite{SH} $\{t_{n}\}$ converges to the 2-cycle $(p_{1},q_{1})$ if
$t_{0}\in (0,p_{2})\cup (q_{2},\infty )$ and converges to the
2-cycle $(p_{3},q_{3})$ if $t_{0}\in (p_{2},q_{2})\setminus
\{\overline{t}\}$. Notice that $p_{3}q_{3}=1$; this is expected,
since (\ref{f16}) holds in this example.
Consequently, since $p_{3}q_{3}=1$, $0<\phi' (p_{3})\phi
'(q_{3})<1$, and $p_{1}q_{1}>1$, by Theorem \ref{Theorem8},
Theorem \ref{Theorem9}(a), and Theorem \ref{Theorem9}$(c_{1})$ we
conclude that
\begin{description}
\item[\it{(i)}] If $x_{0}/x_{-1}\in (0,p_{2})\cup (q_{2},\infty
)$ then $\{x_{n}\}$ diverges to $\infty $.
\item[\it{(ii)}] If $x_{0}/x_{-1}\in [p_{2},q_{2}]\setminus \{\overline{t}\}$ then
$\{x_{n}\}$ converges to a 2-cycle.
\item[\it{(iii)}] If $x_{0}/x_{-1}=\overline{t}$ then
$\{x_{n}\}$ simply converges to an equilibrium.
\end{description}
\end{Example}
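The borderline case $p_{3}q_{3}=1$ of this example can be checked in the same hedged way, again assuming the ratio map $\phi(t)=a+b/t+c/t^{2}+d/t^{3}$ inferred from the abstract; the last assertion confirms numerically that $0<\phi'(p_{3})\phi'(q_{3})<1$, the hypothesis of Theorem \ref{Theorem9}$(c_{1})$.

```python
# Spot-check of Example 3 (illustrative; same assumed ratio map as before).
a, b, c, d = 0.1, 1.79, -2.0, 1.0

def phi(t):
    return a + b / t + c / t**2 + d / t**3

def dphi(t):                        # phi'(t)
    return -b / t**2 - 2 * c / t**3 - 3 * d / t**4

tbar = 0.9423
p3, q3 = 0.7298, 1.3702

assert abs(phi(tbar) - tbar) < 1e-3     # equilibrium, to the quoted precision
assert abs(phi(p3) - q3) < 1e-3         # (p3, q3) is a 2-cycle of phi
assert abs(phi(q3) - p3) < 1e-3
assert abs(p3 * q3 - 1) < 1e-3          # the borderline case p*q = 1
assert 0 < dphi(p3) * dphi(q3) < 1      # hypothesis of Theorem 9(c1)
```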
| {
"timestamp": "2010-11-17T02:02:01",
"yymm": "1011",
"arxiv_id": "1011.3506",
"language": "en",
"url": "https://arxiv.org/abs/1011.3506",
"abstract": "The asymptotic behavior (such as convergence to an equilibrium, convergence to a 2-cycle, and divergence to infinity) of solutions of the following multi-parameter, rational, second order difference equation x_{n+1} =(ax_{n}^3+ bx_{n}^2x_{n-1}+cx_{n}x_{n-1}^2+dx_{n-1}^3)/x_{n}^2, x_{-1},x_{0}\\in R, is studied in this paper.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Dynamics of a rational multi-parameter second order difference equation with cubic numerator and quadratic monomial denominator"
} |
https://arxiv.org/abs/1703.00827 | Sandpiles on the square lattice | We give a non-trivial upper bound for the critical density when stabilizing i.i.d. distributed sandpiles on the lattice $\mathbb{Z}^2$. We also determine the asymptotic spectral gap, asymptotic mixing time and prove a cutoff phenomenon for the recurrent state abelian sandpile model on the torus $\left( \mathbb{Z} / m\mathbb{Z} \right)^2$. The techniques use analysis of the space of functions on $\mathbb{Z}^2$ which are harmonic modulo 1. In the course of our arguments, we characterize the harmonic modulo 1 functions in $\ell^p(\mathbb{Z}^2)$ as linear combinations of certain discrete derivatives of Green's functions, extending a result of Schmidt and Verbitskiy. | \section{Introduction}
\subsection{Stabilization of i.i.d.~sandpiles}
A \emph{sandpile} on the integer lattice $\mathbb{Z}^2$ is a function $\sigma : \mathbb{Z}^2 \to \mathbb{Z}_{\geq 0}$, where $\sigma(x)$ represents the number of grains of sand at the site $x$. The sandpile $\sigma$ is \emph{stable} if each $\sigma(x) \leq 3$. If some $\sigma(x) \geq 4$, then we may \emph{topple} the sandpile at $x$ by passing one grain of sand from $x$ to each of its four nearest neighbors. We say that $\sigma$ \emph{stabilizes} if it is possible to reach a stable configuration from $\sigma$ by toppling each vertex finitely many times. If the heights $(\sigma(x))_{x \in \mathbb{Z}^2}$ are i.i.d.~random variables, we refer to $\sigma$ as an i.i.d.~sandpile.
Meester and Quant \cite{MQ05} asked which i.i.d.~sandpiles stabilize almost surely. It was proved by Fey and Redig \cite{FR05} that such a sandpile $\sigma$ must satisfy $\mathbf{E}[\sigma(x)] \leq 3$. This condition is not sufficient for stabilization: for every $p > 0$, the i.i.d.~sandpile where each $\sigma(x) = 2$ with probability $1-p$ and $\sigma(x) = 4$ with probability $p$ almost surely fails to stabilize \cite{FLP10}.
Thus, for each $2 < \rho \leq 3$, there are some i.i.d.~sandpiles with $\mathbf{E}[\sigma(x)] = \rho$ that do stabilize almost surely (e.g.~when each $\sigma(x) \in \{0,1,2,3\}$) and others that fail to stabilize. This behavior contrasts with the closely related \emph{divisible sandpile model}, in which stabilization of a nonconstant i.i.d.~initial condition $\sigma$ is determined entirely by the value of $\mathbf{E}[\sigma(x)]$ \cite{LMPU16}.
Our first main theorem shows that an i.i.d.~sandpile with $\mathbf{E}[\sigma(x)]$ slightly less than $3$ cannot stabilize almost surely unless $\sigma(x) \leq 3$ with high probability.
\begin{theorem} \label{quantitative_theorem}
There are constants $c,d > 0$ such that any i.i.d.~sandpile $\sigma$ on $\mathbb{Z}^2$ that stabilizes almost surely satisfies
\begin{equation}
\mathbf{E}[\sigma(x)] \leq 3 - \min\left( c, d \mathbf{E}[|X-X'|^{2/3}] \right)
\end{equation}
where $X,X'$ are independent and distributed as $\sigma(x)$.
\end{theorem}
If $3 - \mathbf{E}[\sigma(x)]$ is small, then the inequality $\mathbf{Prob}(X \neq X') \leq \mathbf{E}[|X - X'|^{2/3}]$ (valid because $|X - X'| \geq 1$ whenever the integer-valued heights $X, X'$ differ) implies that the law of $\sigma(x)$ is concentrated at a single value, which must be at most $3$. Some extra work would be required to extract explicit values for the constants $c$ and $d$ from our proof of Theorem \ref{quantitative_theorem}; see the discussion following Lemma \ref{xi_tail}.
Theorem \ref{quantitative_theorem} answers a question posed by Fey, Meester, and Redig \cite{FMR09} by demonstrating that an i.i.d.~Poisson sandpile with mean sufficiently close to $3$ almost surely does not stabilize.
An interesting question that remains open is whether there exists $\epsilon>0$ such that the only i.i.d.~stabilizing sandpiles with $\mathbf{E}[\sigma(x)]>3-\epsilon$ are those which are already stable.
\subsection{Cutoff for sandpiles on the torus}
We also consider sandpile dynamics on the discrete torus
$\mathbb{T}_m = \left(\mathbb{Z}/m\mathbb{Z}\right)^2$, given as follows. The
point $(0,0)$ is designated \emph{sink} and is special. Each non-sink point on the
torus has a sand allocation \begin{equation}\sigma: \mathbb{T}_m
\setminus
\{(0,0)\} \to \mathbb{Z}_{\geq 0}.\end{equation} As on the integer lattice, if at
some time a non-sink vertex has allocation at
least 4 it may topple, passing one grain of sand to each of its neighbors; if a
grain of sand falls on the sink it is lost from the model. Those states $\mathscr{S}_m$
for which $\sigma \leq 3$ are stable. We consider the discrete time dynamics, where a single step consists of
dropping a grain of sand on a uniformly randomly chosen vertex and then
performing all legal topplings until the model reaches a stable state. The \emph{abelian property} \cite{D90} ensures that this stable state does not depend on the order in which the topplings were performed.
Those stable states $\mathscr{R}_m$ which may be reached from the maximal state $\sigma \equiv 3$ are
recurrent, whereas all other states are
transient. Started from any stable state, the sandpile model forms a Markov
chain with transition kernel $P_m$, which converges to the uniform measure
$\mathbb{U}_{\mathscr{R}_m}$ on recurrent states.
\begin{theorem}\label{mixing_time_theorem}
Let $m \geq 2$. There is a constant $c_0 = 0.348661174(3)$ and $t^{\operatorname{mix}}_m=
c_0
m^2 \log m$ such that the following
holds. For each fixed $\epsilon>0$,
\begin{align}
&\lim_{m \to \infty} \min_{\sigma \in
\mathscr{S}_m}\left\|P_m^{\lceil(1-\epsilon)t^{\operatorname{mix}}_m\rceil}\delta_{\sigma}-
\mathbb{U}_{\mathscr{R}_m} \right\|_{\operatorname{TV}(\mathscr{S}_m)} = 1,
\\ \notag & \lim_{m \to \infty} \max_{\sigma \in \mathscr{S}_m}
\left\|P_m^{\lfloor(1+\epsilon)t_m^{\operatorname{mix}}\rfloor}\delta_{\sigma}- \mathbb{U}_{\mathscr{R}_m}
\right\|_{\operatorname{TV}(\mathscr{S}_m)} = 0.
\end{align}
\end{theorem}
Informally, the convergence to uniformity of the sandpile model on the torus
has total variation mixing time asymptotic to $c_0 m^2 \log m$ and the transition to uniformity
satisfies a cutoff phenomenon. Implicit in the statement of
Theorem \ref{mixing_time_theorem} is that, with high probability, the time to
reach a recurrent state started from a general state in the model is less than
the mixing time. In Section \ref{sandpile_section} we give an easy proof
using a coupon collector-type argument that this hitting time is almost surely $O(m^2
\sqrt{\log m})$. Also, the asymptotic mixing time of order $m^2 \log
m$ is at a later point than is sampled in some statistical physics studies
regarding sandpiles, see \cite{SMKW15}.
We also determine asymptotically the absolute spectral gap of the torus sandpile Markov
chain.
\begin{theorem}\label{spectral_gap_theorem}
Let $m \geq 1$. There is a constant
\[
\gamma = 2.868114013(4)
\]
such that the absolute spectral gap of the sandpile Markov chain restricted to its recurrent states satisfies
\begin{equation}
\mathrm{gap}_m = \frac{\gamma + o(1)}{m^2} \qquad \text{as $m \to \infty$.}
\end{equation}
\end{theorem}
The constants in the preceding theorems are reciprocals: $c_0 \gamma = 1$. An explicit formula for $\gamma$ in terms of the Green's function on $\mathbb{Z}^2$ is given in Appendix \ref{spectral_gap_appendix}.
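As a quick sanity check (ours), the two quoted constants are indeed reciprocal to the displayed precision:

```python
# c0 (mixing-time constant) and gamma (spectral-gap constant) as quoted above;
# the digits in parentheses in the theorems indicate uncertainty in the last digit.
c0, gamma = 0.348661174, 2.868114013
assert abs(c0 * gamma - 1) < 1e-7     # c0 = 1/gamma
```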
\subsection{Functions harmonic modulo 1}
\label{Functions_harmonic_modulo_1}
Functions which are \emph{harmonic modulo $1$} play a central role in the proofs of Theorems \ref{quantitative_theorem}--\ref{spectral_gap_theorem}. For $X = \mathbb{Z}^2$ or $X = \mathbb{T}_m$, we say that $f: X \to \mathbb{C}$ is harmonic modulo 1 if
\begin{equation}
\label{Laplacian}
(\Delta f)(i,j) := 4f(i,j) - f(i-1,j) - f(i+1,j) - f(i,j-1) - f(i,j+1)
\end{equation}
is in $\mathbb{Z}$ for all $(i,j) \in X$. The operator $\Delta$ is the \emph{graph Laplacian} on $X$.
Schmidt and Verbitskiy \cite{SV09} characterized the set of all functions in $\ell^1(\mathbb{Z}^2)$ that are harmonic modulo 1. Their result can be stated using discrete derivatives of the \emph{Green's function} on $\mathbb{Z}^2$. Let
\begin{equation}
\label{nu}
\nu := \frac{1}{4} \left(\delta_{(-1,0)} + \delta_{(1,0)} + \delta_{(0,-1)} + \delta_{(0,1)} \right)
\end{equation}
be the measure that drives simple random walk on $\mathbb{Z}^2$, and let $\nu^{*n}$ be its $n$-th convolution power, so that $\nu^{*n}(x)$ is the probability that a random walker started from the origin is at site $x$ after $n$ steps. The Green's function is defined by
\begin{equation}
\label{G_Z2_first}
G_{\mathbb{Z}^2}(x) := \frac{1}{4} \sum_{n=0}^\infty \left[ \nu^{*n}(x) - \nu^{*n}(0,0) \right].
\end{equation}
Evidently, $G_{\mathbb{Z}^2}(0,0) = 0$. For nonzero $x = (x_1,x_2)$ with $\|x\|_2 = \sqrt{x_1^2 + x_2^2}$, it is known classically that $G_{\mathbb{Z}^2}(x) = -\frac{1}{2\pi} \log \|x\|_2 + O(1)$. As shown in \cite{FU96}, this is the start of an asymptotic expansion, whose first few terms we quote in Theorem \ref{greens_function_asymptotic}.
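The definition \eqref{G_Z2_first} can be evaluated numerically by truncating the series. The sketch below (illustrative only) does this and checks $G_{\mathbb{Z}^2}(0,0)=0$ together with the value $G_{\mathbb{Z}^2}(1,0)=-1/4$; the latter is not stated in the text, but is the classical fact that the potential kernel of simple random walk satisfies $a(1,0)=1$, so treat that assertion as an external cross-check.

```python
import numpy as np

# Truncated-series evaluation of G_{Z^2}(x) = (1/4) * sum_n [nu^{*n}(x) - nu^{*n}(0,0)],
# computed by evolving the simple-random-walk distribution for N steps on a grid
# large enough that no probability mass wraps around.  N controls truncation error.
N = 200
size = 2 * N + 3
mid = size // 2
p = np.zeros((size, size))
p[mid, mid] = 1.0                     # nu^{*0} = delta_{(0,0)}

G = np.zeros_like(p)
for n in range(N + 1):
    G += p - p[mid, mid]              # accumulate nu^{*n}(x) - nu^{*n}(0,0)
    p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                + np.roll(p, 1, 1) + np.roll(p, -1, 1))
G *= 0.25

assert G[mid, mid] == 0.0             # G(0,0) = 0 by definition
# Cross-check (classical, not from the text): G(1,0) = -1/4 exactly.
assert abs(G[mid + 1, mid] + 0.25) < 1e-2
```

Truncating at an even $N$ balances the parity oscillation between $\nu^{*n}(1,0)$ (supported on odd $n$) and $\nu^{*n}(0,0)$ (supported on even $n$), so the partial sums settle well within the tolerance used here.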
It can easily be shown that $\Delta G_{\mathbb{Z}^2}(x) = \mathbf{e}_{(0,0)}(x) := \mathbf{1}\{x = (0,0)\}$, so $G_{\mathbb{Z}^2}$ is harmonic modulo 1. By taking discrete derivatives, we can find harmonic modulo 1 functions that decay to zero with $\|x\|_2$. The discrete derivatives $D_1 f$, $D_2 f$ of any $f: \mathbb{Z}^2 \to \mathbb{C}$ are defined as
\begin{equation}
\label{D_12}
D_1 f(i,j) := f(i+1,j) - f(i,j), \quad D_2 f(i,j) := f(i,j+1) - f(i,j).
\end{equation}
If $f$ is harmonic modulo 1, then so is any finite linear combination with integer coefficients of translates of $f$, including $D_1 f$ and $D_2 f$.
From the asymptotic expansion, it follows that the $k$-th derivatives of $G_{\mathbb{Z}^2}$ decay like the inverse $k$-th power of the radius. That is, if $a+b = k$, then $D_1^a D_2^b G_{\mathbb{Z}^2}(x) = O\left( \|x\|_2^{-k} \right)$. When $k \geq 3$, this implies that $D_1^a D_2^b G_{\mathbb{Z}^2} \in \ell^1(\mathbb{Z}^2)$. Thus, the third derivatives of $G_{\mathbb{Z}^2}$, and all finite integer linear combinations of their translates, are harmonic modulo 1 functions in $\ell^1(\mathbb{Z}^2)$. (Note that the fourth and higher derivatives are linear combinations of translates of the third derivatives.)
For $1 \leq p < \infty$, let ${\mathscr{H}}^p(\mathbb{Z}^2)$ be the set of all functions in $\ell^p(\mathbb{Z}^2)$ that are harmonic modulo 1. Also, let $\llangle f_1,\ldots,f_n \rrangle$ denote the set of all finite integer linear combinations of translates of the functions $f_1,\ldots,f_n$ on the domain $\mathbb{Z}^2$, so that for example $D_1^a D_2^b f \in \llangle f \rrangle$ for any $a,b \geq 0$.
\begin{theorem}
\label{H_p_theorem}
The sets ${\mathscr{H}}^p(\mathbb{Z}^2)$, for $1 \leq p < \infty$, admit the following characterization:
\begin{align}
\label{H_p_characterization}
{\mathscr{H}}^1(\mathbb{Z}^2) &= \llangle D_1^3 G_{\mathbb{Z}^2}, D_1^2 D_2 G_{\mathbb{Z}^2}, D_1 D_2^2 G_{\mathbb{Z}^2}, D_2^3 G_{\mathbb{Z}^2}, \mathbf{e}_{(0,0)} \rrangle \\
\notag {\mathscr{H}}^p(\mathbb{Z}^2) &= \llangle D_1^2 G_{\mathbb{Z}^2}, D_1 D_2 G_{\mathbb{Z}^2}, D_2^2 G_{\mathbb{Z}^2} \rrangle, \qquad 1 < p \leq 2 \\
\notag {\mathscr{H}}^p(\mathbb{Z}^2) &= \llangle D_1 G_{\mathbb{Z}^2}, D_2 G_{\mathbb{Z}^2} \rrangle, \hspace{6.52em} 2 < p < \infty.
\end{align}
\end{theorem}
The first equality in \eqref{H_p_characterization}, which is the most delicate part to prove, is essentially a restatement of Theorem 2.4 in \cite{SV09}. We provide a unified proof of all three parts of Theorem \ref{H_p_theorem} in Section \ref{classification_section}.
Since the function $\mathbf{e}_{(0,0)}$ is itself in ${\mathscr{H}}^p(\mathbb{Z}^2)$ for all $p$, it is implicit in the theorem statement that $\mathbf{e}_{(0,0)}$ is a linear combination of translates of second derivatives of $G_{\mathbb{Z}^2}$. This is true because $\Delta G_{\mathbb{Z}^2} = \mathbf{e}_{(0,0)}$, and the Laplacian $\Delta$ is a second-order discrete differential operator.
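The claim that $\Delta$ is a second-order discrete differential operator can be made concrete: directly from \eqref{Laplacian} and \eqref{D_12} one checks the identity $\Delta f(i,j) = -\left( D_1^2 f(i-1,j) + D_2^2 f(i,j-1) \right)$, exhibiting $\Delta$ as an integer linear combination of translates of second derivatives. The following sketch (ours, on a small torus so that shifts are exact) verifies this identity numerically.

```python
import numpy as np

# Verify Delta f = -( (D1^2 f)(i-1, j) + (D2^2 f)(i, j-1) ) on a random
# periodic array; this is why e_{(0,0)} = Delta G lies in the span of
# translates of second derivatives of G.
rng = np.random.default_rng(1)
f = rng.standard_normal((8, 8))

def shift(g, di, dj):
    """Return the translate (i, j) -> g(i + di, j + dj) on the torus."""
    return np.roll(np.roll(g, -di, axis=0), -dj, axis=1)

def D1(g):                             # D1 g(i,j) = g(i+1,j) - g(i,j)
    return shift(g, 1, 0) - g

def D2(g):                             # D2 g(i,j) = g(i,j+1) - g(i,j)
    return shift(g, 0, 1) - g

laplacian = (4 * f - shift(f, 1, 0) - shift(f, -1, 0)
             - shift(f, 0, 1) - shift(f, 0, -1))
combo = -(shift(D1(D1(f)), -1, 0) + shift(D2(D2(f)), 0, -1))
assert np.allclose(laplacian, combo)
```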
\subsection{Discussion of method}
\label{Discussion of method}
This section outlines the methods used to prove Theorems \ref{quantitative_theorem}--\ref{spectral_gap_theorem}.
Theorem \ref{quantitative_theorem} says that if $\sigma$ is an i.i.d.~sandpile on $\mathbb{Z}^2$ that stabilizes almost surely, then $3 - \mathbf{E}[\sigma(x)]$ is bounded below by a quantity that measures the typical difference between the heights at two locations $\sigma(x), \sigma(x')$. To prove the theorem, let $u(x)$ be the `odometer' function that counts the number of times a vertex $x$ topples in passing from $\sigma$ to its stabilization $\sigma^\infty$, so that $\sigma^\infty = \sigma - \Delta u$.
In Section \ref{stability_section} we observe that the modulo 1 harmonic functions are dual to toppling in the following sense: If $\xi \in \ell^1(\mathbb{Z}^2)$ is harmonic modulo 1, then
\begin{equation}
\label{pairing-eqn}
\langle \sigma, \xi \rangle \equiv \langle \sigma^\infty, \xi \rangle \mod{1}, \qquad a.s.
\end{equation}
where $\langle f,g \rangle = \sum_{x \in \mathbb{Z}^2} \overline{f(x)} g(x)$ is the usual pairing. This provides a collection of invariants which obstruct stabilization in the sandpile model.
To prove Theorem \ref{quantitative_theorem}, we consider the characteristic functions
\begin{equation}
\chi(\sigma; \xi) = \mathbf{E}\left[e^{-2\pi i \langle \sigma, \xi \rangle}\right], \qquad \chi(\sigma^\infty; \xi) = \mathbf{E}\left[e^{-2\pi i \langle \sigma^\infty, \xi \rangle}\right]
\end{equation}
which, by \eqref{pairing-eqn}, are equal. If $\mathbf{E}[\sigma(x)] = \mathbf{E}[\sigma^\infty(x)]$ is close to $3$ (which is the maximum possible value), then $\sigma^\infty(x)$ must equal $3$ for most $x \in \mathbb{Z}^2$. Choosing $\xi$ so that $\sum_{x \in \mathbb{Z}^2} \xi(x) = 0$, $\chi(\sigma^\infty; \xi)$ must be near $\chi(3; \xi) = 1$. On the other hand, since the starting values $\sigma(x)$ are i.i.d.,
\begin{equation}
\chi(\sigma; \xi) = \prod_{x \in \mathbb{Z}^2} \mathbf{E}\left[e^{-2\pi i \sigma(x) \xi(x)} \right].
\end{equation}
The modulus of each term $\mathbf{E}\left[e^{-2\pi i \sigma(x) \xi(x)} \right]$ decreases as the possible values of $\sigma(x)$ get more spread-out. In this way, the lower bound on $|\chi(\sigma; \xi)| = |\chi(\sigma^\infty; \xi)|$ translates into an upper bound on the amount that the starting values $\sigma(x)$ can vary.
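The effect of spreading out the law of $\sigma(x)$ can be illustrated in isolation with a minimal Python sketch (not part of the argument): a single factor is modeled by a variable taking value $k$ with probability $p_k$, with a fixed real $t$ standing in for $\xi(x)$; both the probability vectors and $t$ below are illustrative choices. A point mass gives modulus $1$, and more spread-out laws give smaller moduli.

```python
import cmath
import math

def char_modulus(probs, t):
    """|E[e(-sigma t)]| for a variable sigma taking value k with
    probability probs[k], where e(u) = exp(2 pi i u)."""
    return abs(sum(p * cmath.exp(-2j * math.pi * k * t)
                   for k, p in enumerate(probs)))

t = 0.25
# Point mass: the modulus is exactly 1.
assert abs(char_modulus([0, 0, 0, 1.0], t) - 1.0) < 1e-12
# A two-valued law already gives a strictly smaller modulus.
assert char_modulus([0.7, 0.3], t) < 1.0
# The uniform law on {0,1,2,3} is more spread out, so the modulus is smaller still.
assert char_modulus([0.25, 0.25, 0.25, 0.25], t) < char_modulus([0.7, 0.3], t)
```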
We now turn to Theorem \ref{spectral_gap_theorem}. The set $\mathscr{R}_m$ of recurrent sandpiles on the torus has a natural abelian group structure. This identifies $\mathscr{R}_m$ with the \emph{sandpile group} $\mathscr{G}_m$, which is formally defined in Section \ref{sandpile_section}. The sandpile Markov chain restricted to its recurrent states is a random walk on $\mathscr{G}_m$, meaning that its eigenvectors are given by the dual group $\hat{\mathscr{G}}_m$. We can express $\hat{\mathscr{G}}_m$ as the additive group of functions $\xi : \mathbb{T}_m \to \mathbb{R} / \mathbb{Z}$ such that $\xi(0,0) = 0$ and $\Delta \xi \equiv 0$ in $\mathbb{R} / \mathbb{Z}$. (The operation of $\xi$ on sandpiles is $\sigma \mapsto \sum_{x \in \mathbb{T}_m \setminus \{(0,0)\}} \xi(x)\sigma(x)$.) In this way, an element $\xi \in \hat{\mathscr{G}}_m$ is naturally associated with the set of harmonic modulo 1 functions $\xi' : \mathbb{T}_m \to \mathbb{R}$ that reduce mod $\mathbb{Z}$ to $\xi$.
The eigenvalue of the Markov chain associated to $\xi$ is the Fourier coefficient of the measure $\mu$ driving the random walk at frequency $\xi$:
\begin{equation}
\hat{\mu}(\xi) = \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} e^{2\pi i \xi(x)}.
\end{equation}
The mixing time is controlled by the frequencies for which $|\hat{\mu}(\xi)|$ is close to $1$.
Given a frequency $\xi$, let $\xi' : \mathbb{T}_m \to \mathbb{R}$ be one of its harmonic modulo $1$ representatives. The integer-valued function $v = \Delta \xi'$ will be referred to as a `prevector' of $\xi$. To recover $\xi'$ from $v$ up to an additive constant, we convolve $v$ with the Green's function $G_{\mathbb{T}_m}$ on the torus, which is defined by
\begin{equation}
\label{G_T_m_first}
G_{\mathbb{T}_m}(x) := \frac{1}{4}\sum_{n=0}^\infty \left(\nu^{*n}(x) - \frac{1}{m^2}\right)
\end{equation}
and is the unique mean-zero function (i.e.~$\sum_{x \in \mathbb{T}_m} G_{\mathbb{T}_m}(x) = 0$) satisfying
\begin{equation}
\Delta G_{\mathbb{T}_m}(x) = \mathbf{e}_{(0,0)}(x) - \frac{1}{m^2}.
\end{equation}
It follows that $(G_{\mathbb{T}_m} * v)(x) = \xi'(x) - c$, where $c = \frac{1}{m^2} \sum_{y \in \mathbb{T}_m} \xi'(y)$.
Although we will not use this characterization, $G_{\mathbb{T}_m}$ can be considered as a mean-zero version of the Green's function for the simple random walk on $\mathbb{T}_m$ started from the origin and killed at a uniformly random point. To be precise, given $y \in \mathbb{T}_m$, let $\tau_y$ be the first time $t \geq 0$ that a simple random walker started from the origin reaches $y$, and define $g_y(x)$ to be the expected number of times $0 \leq t < \tau_y$ that the walker visits site $x$. If $g(x) = \frac{1}{m^2} \sum_{y \in \mathbb{T}_m} g_y(x)$, then $G_{\mathbb{T}_m}(x) = \frac{1}{4}\left[ g(x) - \frac{1}{m^2}\sum_{x' \in \mathbb{T}_m} g(x') \right]$.
In Section \ref{Representations_for_frequencies}, we specify for each frequency $\xi \in \hat{\mathscr{G}}_m$ a particular choice of $\xi'$ such that the `distinguished prevector' $v = \Delta \xi'$ satisfies
\begin{equation}
\label{prevector_heuristic}
1 - |\hat{\mu}(\xi)| \asymp \frac{\|G_{\mathbb{T}_m} * v \|_{L^2(\mathbb{T}_m)}^2}{m^2}.
\end{equation}
Each prevector $v$ has mean zero because $v$ is in the image of $\Delta$. To find the absolute spectral gap of the Markov chain, which is the minimum of $1 - |\hat{\mu}(\xi)|$ over nonzero frequencies $\xi$, we ask which mean-zero integer-valued vectors $v$ make $\|G_{\mathbb{T}_m} * v \|_{L^2(\mathbb{T}_m)}^2$ as small as possible.
It is profitable to think of $G_{\mathbb{T}_m} * v$ as a linear combination of translates of discrete derivatives of $G_{\mathbb{T}_m}$. For example, if $v(a,b) = -1$, $v(a-1,b) = 1$, and $v(i,j) = 0$ at all other $(i,j) \in \mathbb{T}_m$, then
\begin{equation}
(G_{\mathbb{T}_m} * v)(x_1,x_2) = G_{\mathbb{T}_m}(x_1 + 1-a, x_2 - b) - G_{\mathbb{T}_m}(x_1 - a, x_2 - b)
\end{equation}
which is the translation by $(a,b)$ of $D_1 G_{\mathbb{T}_m}$.
The Laplacian operator $\Delta$ acts locally. Its inverse, convolution with $G_{\mathbb{T}_m}$, is non-local but satisfies an approximate locality in that the discrete derivatives of $G_{\mathbb{T}_m}$, like those of $G_{\mathbb{Z}^2}$, decay to zero. Using these decay estimates, we show in Section \ref{Determination_of_gap} that $\|G_{\mathbb{T}_m} * v \|_{L^2(\mathbb{T}_m)}^2$ is minimized when $G_{\mathbb{T}_m} * v$ is an integer linear combination of the second derivatives $D_1^2 G_{\mathbb{T}_m}$, $D_1 D_2 G_{\mathbb{T}_m}$, $D_2^2 G_{\mathbb{T}_m}$ and their translates. These lead to gaps of order $1/m^2$ in \eqref{prevector_heuristic}.
It follows from \eqref{prevector_heuristic}, an upper bound on $\|\Delta\|_{L^2 \to L^2}$, and the inequality
\begin{equation}
\|v\|_{L^2(\mathbb{T}_m)}^2 = \|\Delta(G_{\mathbb{T}_m} * v)\|_{L^2(\mathbb{T}_m)}^2 \leq \|\Delta\|_{L^2 \to L^2}^2 \|G_{\mathbb{T}_m} * v\|_{L^2(\mathbb{T}_m)}^2
\end{equation}
that if the $L^2$ norm of the prevector $v$ is too high, then $v$ cannot generate the spectral gap. Proposition \ref{gap_achievers} shows that if the support of $v$ is too spread-out over $\mathbb{T}_m$, then by the approximate locality of convolution with $G_{\mathbb{T}_m}$, $v$ can be separated into widely spaced clusters whose contributions to $1 - |\hat{\mu}(\xi)|$ are nearly additive. Just keeping one of the clusters and zeroing out the rest of $v$ would produce a smaller gap. By this argument, the only prevectors with any chance of generating the spectral gap have bounded norm and bounded support, so the computation of the gap is reduced to a finite check.
To fill in the details of the proof, we require precise asymptotics for derivatives of $G_{\mathbb{T}_m}$. We obtain these using a local limit theorem, which is proved in Appendix \ref{local_limit_theorem_appendix}. We also relate $G_{\mathbb{T}_m}$ as $m \to \infty$ to $G_{\mathbb{Z}^2}$, which translates the finite check for the spectral gap into a minimization problem involving functions in $\ell^2(\mathbb{Z}^2)$ that are harmonic modulo $1$. The resulting search was performed using convex programming in the SciPy scientific computing package \cite{JOP01}, and is described in Appendix \ref{spectral_gap_appendix}. We find that for sufficiently large $m$, the gap is achieved for prevectors of the form $v(a,b) = v(a-1,b-1) = 1$, $v(a-1,b) = v(a,b-1) = -1$, $v(i,j) = 0$ elsewhere, which correspond to translates of $D_1 D_2 G_{\mathbb{T}_m}$.
For Theorem \ref{mixing_time_theorem}, we prove cutoff in both total variation and $L^2$ at time $\gamma^{-1} m^2 \log m$. The necessary ingredients are a total variation lower bound and an $L^2$ upper bound on mixing time.
First, we use the coupon-collector argument mentioned earlier to reduce to the case where the starting state $\sigma$ is recurrent. Next we observe that, by translation invariance, there are $m^2$ different prevectors $v$ whose corresponding frequencies $\xi$ achieve $1 - |\hat{\mu}(\xi)| = \mathrm{gap}_m$. The $L^2$ distance from the uniform distribution on $\mathscr{R}_m$ of the chain started from $\sigma$ after $N$ steps satisfies
\begin{equation}
\left\|P_m^N \delta_{\sigma} - \mathbb{U}_{\mathscr{R}_m} \right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})}^2 = \sum_{\xi \in \hat{\mathscr{G}}_m \setminus \{0\}}\left|\hat{\mu}(\xi) \right|^{2N} \geq m^2 (1 - \mathrm{gap}_m)^{2N}.
\end{equation}
Thus the chain cannot mix in $L^2$ before time
\begin{equation}
N = \frac{1}{\mathrm{gap}_m} \log m = \frac{1}{\gamma} m^2 \log m + o(m^2 \log m).
\end{equation}
We strengthen this to a lower bound on total variation mixing time by a second moment method due originally to Diaconis \cite{DS87,D88} that builds a distinguishing statistic out of the top eigenvectors of the chain. See Lemma \ref{lower_bound_lemma}. To apply this lemma, we require an upper bound on $|\hat{\mu}(\xi_1 - \xi_2)|$ when the frequencies $\xi_1,\xi_2$ (both of which achieve the spectral gap) come from prevectors $v_1,v_2$ whose supports are separated. Since the contributions of $v_1$ and $v_2$ are nearly additive, we have $1 - |\hat{\mu}(\xi_1 - \xi_2)| \approx 1 - 2 \cdot \mathrm{gap}_m$, which enables the argument to go through.
For the upper bound on the $L^2$ mixing time, we show that
\begin{equation}
\label{L2_dist}
\sum_{\xi \in \hat{\mathscr{G}}_m \setminus \{0\}}\left|\hat{\mu}(\xi) \right|^{2N}
\end{equation}
tends to zero as $m \to \infty$ when $N = (1+\epsilon)\gamma^{-1} m^2 \log m$. Our argument uses an agglomeration scheme in which we partition the support of each prevector $v$ into widely spaced clusters. Lemma \ref{savings_lemma}, the main step in the proof, shows that each small cluster contributes additively to the gap $1 - |\hat{\mu}(\xi)|$. The earlier additivity results in Section \ref{Determination_of_gap}, most notably Proposition \ref{gap_achievers}, hold only for prevectors with bounded $L^1$ norm, so the extension to the general case requires new arguments. We use techniques from the theory of exponential sums, including van der Corput's inequality. As a consequence of the clustering scheme, we can control the number of distinct frequencies $\xi$ whose gap $1 - |\hat{\mu}(\xi)|$ might be small, giving the desired bound on \eqref{L2_dist}.
The cutoff argument may be considered an extension of the classical analysis
of mixing on the hypercube \cite{DGM90}, and exploits the fact that the lattice
which is quotiented to give the sandpile group is approximately cubic. See
\cite{H15} for analysis of some random walks on the cycle where the $L^1$ and
$L^2$ cutoff times differ by a constant.
\subsection{Historical review} Sandpile dynamics on the square lattice were introduced by Bak, Tang, and Wiesenfeld \cite{BTW87,BTW88} as a model of self-organized criticality. Dhar \cite{D90} considered the case of an arbitrary finite underlying graph, proving many fundamental results. Subsequently, Dhar et al.~\cite{DRSV95} used harmonic modulo 1 functions (there called `toppling invariants') to analyze the algebraic structure of the sandpile group for rectangular subsets of $\mathbb{Z}^2$.
Sandpiles are examples of \emph{abelian networks}, which are systems
of communicating automata satisfying a local commutativity condition
\cite{D99,BL16}. By a theorem of Cairns \cite{C15}, an abelian network on $\mathbb{Z}^2$ can emulate a Turing
machine, as can a sandpile on $\mathbb{Z}^3$. In particular, for a periodic
configuration of sand on $\mathbb{Z}^3$ plus a finite number of additional
sand grains, the question of stabilization is algorithmically
undecidable! It is not known whether the same question is
undecidable on $\mathbb{Z}^2$. A related open problem, highlighted in
\cite{LMPU16}, is the following: ``Given a probability distribution
$\mu$ on $\mathbb{Z}$ (say, supported on $\{0,1,2,3,4\}$ with rational
probabilities), is it algorithmically decidable whether the i.i.d.~abelian sandpile on $\mathbb{Z}^2$ with marginal $\mu$ stabilizes almost
surely?'' Theorem \ref{quantitative_theorem} and its method of proof can be viewed as a slight
advance on this problem.
The question of stabilization of i.i.d.~sandpiles was posed by Meester and Quant \cite{MQ05} and by Fey and Redig \cite{FR05}. A fundamental result is the \emph{conservation of density} proved by Fey, Meester, and Redig \cite{FMR09}, which in particular implies the earlier result of \cite{FR05}: An i.i.d.~stabilizing sandpile $\sigma$ on $\mathbb{Z}^2$ must satisfy $\mathbf{E}[\sigma(x)] \leq 3$.
To get strictly below $3$ in the upper bound of Theorem \ref{quantitative_theorem}, we use harmonic modulo 1 functions to construct additional conserved quantities; see Lemma \ref{pairing-invariant}.
Theorems \ref{mixing_time_theorem} and \ref{spectral_gap_theorem} are concerned with the sandpile Markov chain on the discrete torus $\mathbb{T}_m$, whose stationary distribution
is uniform on the (finite) set of recurrent states. These finite Markov chains are related to sandpiles on the infinite grid $\mathbb{Z}^2$ by theorems of \cite{AJ04, JR08}. Athreya and J\'{a}rai \cite{AJ04} proved that the restriction of a uniform recurrent sandpile on the $d$-dimensional cube $[-m,m]^d \cap \mathbb{Z}^d$ to any fixed finite subset of $\mathbb{Z}^d$ converges in law as $m \to \infty$. Hence there is a limiting measure $\mu$ on recurrent sandpiles on $\mathbb{Z}^d$. By equality of the free and wired uniform spanning forests, replacing the cube with the $d$-dimensional discrete torus results in the same limit $\mu$. J\'{a}rai and Redig \cite{JR08} proved that in dimensions $d \geq 3$, a $\mu$-distributed sandpile plus one additional chip stabilizes almost surely. They used this fact to construct an ergodic Markov process on recurrent sandpiles on $\mathbb{Z}^d$ having $\mu$ as its stationary distribution. In dimension $2$, it is not known whether a $\mu$-distributed sandpile plus one additional chip stabilizes almost surely. (Possibly Lemma \ref{pairing-invariant} could help resolve this question.)
Some further studies of sandpile dynamics on $\mathbb{Z}^d$ are \cite{MRS04,JRS15,BHJ16}.
The mixing of the sandpile Markov chain on finite graphs arises in relating sandpiles with different boundary conditions: the dependence of observables such as the `density' (average amount of sand per vertex) on the boundary conditions is a symptom of slow mixing. In particular, the extra log factor in the mixing time $t_m^{\operatorname{mix}}$ of Theorem \ref{mixing_time_theorem} could be viewed as the cause for the failure of the `density conjecture' \cite{FLW10,L15}.
The proof of cutoff in Theorem \ref{mixing_time_theorem} estimates a
significant piece of the spectrum of the transition kernel of the sandpile walk on the torus. See \cite{CE02,BS13} for further applications of spectral techniques
related to sandpiles.
The eigenvectors and eigenvalues of the sandpile Markov chain on an arbitrary finite graph were characterized in \cite{JLP15} using `multiplicative harmonic functions' (these are complex exponentials of the harmonic modulo 1 functions, as explained in Section \ref{Random_walk_on_the_sandpile_group}).
In \cite{JLP15} it was shown that the sandpile Markov chain on any connected graph with $n$ vertices mixes in $O(n^3 \log n)$ steps, and that cutoff for the complete graph (both in total variation and in $L^2$) occurs at time $\frac{1}{4\pi^2} n^3 \log n$.
Regarding the discrete torus $\mathbb{T}_m$, it was proved in \cite{JLP15} that the sandpile
chain on any graph with $m^2$ vertices and maximum degree $4$ has spectral gap
at least $1/(2m^2)$, and mixes in at most $\frac{5}{2} m^2 \log m$ steps.
Theorems \ref{mixing_time_theorem} and \ref{spectral_gap_theorem} improve these results by obtaining asymptotics for the mixing time and spectral gap, and by demonstrating cutoff. We expect that our techniques can also prove cutoff for the sandpile chain on the finite box $[-m,m]^2 \cap \mathbb{Z}^2$, with boundary vertices identified as the sink, at a constant multiple of $m^2 \log m$ steps.
Schmidt and Verbitskiy \cite{SV09} characterize the set ${\mathscr{H}}^1(\mathbb{Z}^2)$ of harmonic modulo 1 functions in $\ell^1(\mathbb{Z}^2)$ in terms of third derivatives of the Green's function $G_{\mathbb{Z}^2}$.
Our Theorem \ref{H_p_theorem} provides a similar characterization of the sets ${\mathscr{H}}^p(\mathbb{Z}^2)$, for $1 \leq p < \infty$.
\subsection*{Organization}
Section \ref{background_section} fixes notation and provides background on discrete derivatives and Fourier transforms, the graph Laplacian and Green's function on $\mathbb{Z}^2$ and $\mathbb{T}_m$, and results from the theory of exponential sums. Section \ref{classification_section} proves Theorem \ref{H_p_theorem}, while Section \ref{stability_section} proves Theorem \ref{quantitative_theorem}. Section \ref{sandpile_section} defines the sandpile group $\mathscr{G}_m$ and its dual $\hat{\mathscr{G}}_m$, describes the eigenvalues and eigenvectors of the sandpile Markov chain using $\hat{\mathscr{G}}_m$, and shows that we may assume the starting state is recurrent when proving Theorem \ref{mixing_time_theorem}. Section \ref{spectral_gap_section} proves Theorem \ref{spectral_gap_theorem} and provides the main technical estimates needed for Theorem \ref{mixing_time_theorem}. Finally, Section \ref{proof_mixing_theorem_section} proves Theorem \ref{mixing_time_theorem}.
Appendix \ref{local_limit_theorem_appendix} proves a local limit theorem for repeated convolutions of the simple random walk measure on $\mathbb{Z}^2$ that is used to obtain asymptotics for derivatives of the discrete Green's function on $\mathbb{T}_m$. Appendix \ref{spectral_gap_appendix} uses convex programming to find an exact formula for the leading constant $\gamma$ in the spectral gap and mixing time of the sandpile chain.
\section{Function spaces and conventions}
\label{background_section}
The additive character on $\mathbb{R}/\mathbb{Z}$ is $e(x) = e^{2\pi i x}$. Its real part
is denoted $c(x) = \cos 2\pi x$ and imaginary part $s(x) = \sin 2\pi x$.
For real $x$, $\|x\|_{\mathbb{R}/\mathbb{Z}}$ denotes the distance of $x$ to the nearest
integer.
We use the notations $A \ll B$ and $A = O(B)$ to mean that there is a constant
$0<C<\infty$ such that $|A| < CB$, and $A \asymp B$ to mean $A \ll B \ll A$. A subscript such as $A \ll_R B$, $A = O_R(B)$ means that the constant $C$ depends on $R$. The notation $A = o(B)$ means that $A/B$ tends to zero.
Given a measurable space $(\mathscr{X}, \mathscr{B})$, the \emph{total variation distance} between two probability measures $\mu$ and $\nu$ on $(\mathscr{X}, \mathscr{B})$ is
\begin{equation}
\left\|\mu - \nu\right\|_{\operatorname{TV}} = \sup_{A \in \mathscr{B}}|\mu(A) - \nu(A)|.
\end{equation}
If $\mu$ is absolutely continuous with respect to $\nu$, the total variation distance may be expressed as
\begin{equation}
\|\mu - \nu\|_{\operatorname{TV}} = \frac{1}{2} \int_{\mathscr{X}}\left|\frac{d\mu}{d\nu}-1 \right|d\nu.
\end{equation}
In this case an $L^2(d\nu)$ distance may be defined by
\begin{equation}
\left\|\mu - \nu\right\|_{L^2(d\nu)}^2 = \int_{\mathscr{X}}\left(\frac{d\mu}{d\nu} - 1 \right)^2 d\nu,
\end{equation}
and Cauchy-Schwarz gives $\|\mu - \nu\|_{\operatorname{TV}} \leq \frac{1}{2}\|\mu - \nu\|_{L^2(d\nu)}$.
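For measures on a finite set both distances are weighted sums, so the Cauchy-Schwarz bound can be checked directly. The following Python sketch (the weight vectors are illustrative choices, with $\mu$ absolutely continuous with respect to $\nu$) is only a sanity check of the inequality:

```python
import math

# Two probability measures on a 3-point space, given by their weights.
mu = [0.5, 0.3, 0.2]
nu = [0.4, 0.4, 0.2]

# Total variation: (1/2) * integral of |d(mu)/d(nu) - 1| d(nu), which on a
# finite space is half the L^1 distance of the weight vectors.
tv = 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

# L^2(d nu) distance: square root of the integral of (d(mu)/d(nu) - 1)^2 d(nu).
l2 = math.sqrt(sum((a / b - 1.0) ** 2 * b for a, b in zip(mu, nu)))

# Cauchy-Schwarz: ||mu - nu||_TV <= (1/2) ||mu - nu||_{L^2(d nu)}.
assert tv <= 0.5 * l2 + 1e-12
```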
Consider $\mathbb{Z}^2$ and the discrete torus $\mathbb{T}_m$ to be metric spaces with the graph distance given by the $\ell^1$ norm on $\mathbb{Z}^2$ and the quotient distance, for $x, y \in \mathbb{T}_m$,
\begin{equation}
\|x-y\|_1 = \min\{ \|x'-y'\|_1 : x',y' \in \mathbb{Z}^2, [x'] = x, [y'] = y \}
\end{equation}
where $[x'],[y']$ are the images of $x',y'$ under the quotient map $\mathbb{Z}^2 \to \mathbb{T}_m$. The ball of radius $R > 0$ around a point $x$ is
\begin{equation}
B_R(x) = \left\{y: \|y-x\|_1 \leq R \right\}.
\end{equation}
We also use $\|x\|_2$ to denote the $\ell^2$ norm of $x = (x_1,x_2) \in \mathbb{Z}^2$. The argument of $x$, denoted $\arg(x)$, is the angle $0 \leq \theta < 2\pi$ such that $(x_1,x_2) = (\|x\|_2 \cos \theta, \|x\|_2 \sin \theta)$.
Denote the usual function spaces
\begin{equation}
\ell^p\left(\mathbb{Z}^2\right) = \left\{f : \mathbb{Z}^2 \to \mathbb{C}, \|f\|_p^p = \sum_{x \in \mathbb{Z}^2} |f(x)|^p < \infty\right\}, \quad 1 \leq p < \infty
\end{equation}
and
\begin{equation}
L^p\left(\mathbb{T}_m\right) = \left\{f: \mathbb{T}_m \to \mathbb{C}, \|f\|_p^p = \sum_{x \in \mathbb{T}_m} |f(x)|^p \right\}, \quad 1 \leq p < \infty.
\end{equation}
The latter functions may be considered as functions on $\mathbb{Z}^2$ which are $m\mathbb{Z}^2$-periodic. Let $\ell^\infty(\mathbb{Z}^2)$ and $L^\infty(\mathbb{T}_m)$ be the spaces of bounded functions on $\mathbb{Z}^2$ and $\mathbb{T}_m$, with $\|f\|_{\ell^\infty(\mathbb{Z}^2)} = \sup_{x \in \mathbb{Z}^2} |f(x)|$ and $\|f\|_{L^\infty(\mathbb{T}_m)} = \max_{x \in \mathbb{T}_m} |f(x)|$.
On the torus, the subspace of mean zero
functions is indicated by
\begin{equation}L_0^2(\mathbb{T}_m) = \left\{f \in L^2(\mathbb{T}_m):
\sum_{x \in
\mathbb{T}_m} f(x) = 0\right\}.\end{equation}
The notation $\mathbb{Z}^{\mathbb{T}_m}_0$ is used for the integer-valued functions in $L^2_0(\mathbb{T}_m)$.
On either $\mathbb{Z}^2$ or the torus, the standard basis vectors are written
\begin{equation}
\mathbf{e}_{(i,j)}(k, \ell) = \mathbf{1}\{i=k\} \mathbf{1}\{j = \ell\}.
\end{equation}
For functions other than the standard basis vectors, the notation $f_x = f(x)$ is used interchangeably.
For $X = \mathbb{Z}^2$ or $X = \mathbb{T}_m$ the support of a function on $X$ is \begin{equation}\operatorname{supp} f=\{x \in
X: f(x)
\neq 0\}.\end{equation}
Given $(i,j) \in X$, the translation operator $T_{(i,j)}$ acts on
functions by
\begin{equation}
T_{(i,j)}f(k, \ell) = f(k-i, \ell-j).
\end{equation}
The convolution of functions $f \in \ell^1(\mathbb{Z}^2)$, $g \in \ell^\infty\left(\mathbb{Z}^2\right)$ or $f, g \in L^2(\mathbb{T}_m)$ is given by
\begin{equation}
(f*g)(i,j) = \sum_{(k, \ell)\in X} f(i-k, j-\ell)g(k,\ell)
\end{equation}
where again $X$ represents $\mathbb{Z}^2$ or $\mathbb{T}_m$.
The averaging operator with respect to the uniform probability measure on
$\mathbb{T}_m$ is indicated by \begin{equation}\mathbf{E}_{x \in \mathbb{T}_m}[f] = \frac{1}{m^2}\sum_{x
\in \mathbb{T}_m} f(x).\end{equation}
Given $x, y \in \mathbb{R}/\mathbb{Z}$ and $f \in \ell^1(\mathbb{Z}^2)$, the Fourier transform of $f$ is
\begin{equation}
\hat{f}(x,y) = \sum_{(i,j) \in \mathbb{Z}^2} f(i,j) e(-(ix + jy)).
\end{equation}
Given $x, y \in \mathbb{Z}/m\mathbb{Z}$ and $f \in L^2(\mathbb{T}_m)$ the Fourier transform of $f$ is
\begin{equation}
\hat{f}(x,y) = \sum_{(i,j) \in \mathbb{T}_m} f(i,j) e\left(- \frac{ix + jy}{m} \right).
\end{equation}
The Fourier transform has the familiar property of carrying convolution to pointwise multiplication. For $f \in \ell^2(\mathbb{Z}^2)$, Parseval's identity is
\begin{equation}
\|f\|_2^2 = \int_{(\mathbb{R}/\mathbb{Z})^2} \left|\hat{f}(x,y)\right|^2 dx dy.
\end{equation}
For $f \in L^2(\mathbb{T}_m)$ the corresponding identity is
\begin{equation}
\|f\|_2^2 = \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} \left| \hat{f}(x)\right|^2.
\end{equation}
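The torus identity can be checked numerically by implementing the transform convention above directly. The following Python sketch (small $m$, random real-valued $f$) is a sanity check, not part of the development:

```python
import cmath
import math
import random

def fourier_Tm(f, m):
    """Fourier transform on T_m per the convention above:
    fhat(x, y) = sum_{(i,j)} f(i,j) e(-(i x + j y)/m), where e(t) = exp(2 pi i t)."""
    return [[sum(f[i][j] * cmath.exp(-2j * math.pi * (i * x + j * y) / m)
                 for i in range(m) for j in range(m))
             for y in range(m)] for x in range(m)]

m = 5
random.seed(0)
f = [[random.gauss(0, 1) for _ in range(m)] for _ in range(m)]
fhat = fourier_Tm(f, m)
lhs = sum(f[i][j] ** 2 for i in range(m) for j in range(m))            # ||f||_2^2
rhs = sum(abs(fhat[x][y]) ** 2 for x in range(m) for y in range(m)) / m ** 2
assert abs(lhs - rhs) < 1e-9   # Parseval on T_m
```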
For a function $f$ on $\mathbb{Z}^2$ or $\mathbb{T}_m$, the discrete derivatives $D_1 f(i,j)$, $D_2 f(i,j)$ are defined by \eqref{D_12}. Discrete differentiation is expressed as a convolution operator by
introducing
\begin{align}
\delta_1(i,j) &= \left\{\begin{array}{lll}-1 && (i,j) = (0,0)\\ 1 && (i,j)
=(-1,0)\\ 0 && \text{otherwise,}\end{array}\right.\\ \notag
\delta_2(i,j) &= \left\{\begin{array}{lll}-1 && (i,j) = (0,0)\\ 1 && (i,j)
=(0,-1)\\ 0 && \text{otherwise.}\end{array}\right.
\end{align}
For integers $a,b \geq 0$, one has
\begin{equation}
D_1^a D_2^b f = \delta_1^{*a}* \delta_2^{*b}* f.
\end{equation}
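As a sanity check of the convolution identity (the definition \eqref{D_12} is not reproduced here, so the sketch assumes the forward-difference convention $D_1 f(i,j) = f(i+1,j) - f(i,j)$ encoded by $\delta_1$, and similarly for $D_2$), the following Python sketch compares iterated differences with iterated convolutions on a small torus:

```python
def conv(f, g, m):
    """Convolution on T_m: (f*g)(i,j) = sum_{(k,l)} f(i-k, j-l) g(k,l)."""
    return [[sum(f[(i - k) % m][(j - l) % m] * g[k][l]
                 for k in range(m) for l in range(m))
             for j in range(m)] for i in range(m)]

def D1(f, m):
    """Forward difference in the first coordinate (assumed convention for D_1)."""
    return [[f[(i + 1) % m][j] - f[i][j] for j in range(m)] for i in range(m)]

def D2(f, m):
    """Forward difference in the second coordinate (assumed convention for D_2)."""
    return [[f[i][(j + 1) % m] - f[i][j] for j in range(m)] for i in range(m)]

m = 4
delta1 = [[0] * m for _ in range(m)]
delta1[0][0], delta1[-1 % m][0] = -1, 1   # delta_1(0,0) = -1, delta_1(-1,0) = 1
delta2 = [[0] * m for _ in range(m)]
delta2[0][0], delta2[0][-1 % m] = -1, 1   # delta_2(0,0) = -1, delta_2(0,-1) = 1

f = [[(7 * i + 3 * j) % 5 for j in range(m)] for i in range(m)]
# D_1^2 D_2 f = delta_1^{*2} * delta_2 * f:
lhs = D1(D1(D2(f, m), m), m)
rhs = conv(delta1, conv(delta1, conv(delta2, f, m), m), m)
assert lhs == rhs
```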
For $X = \mathbb{Z}^2$ or $\mathbb{T}_m$ and functions $f_1,\ldots,f_n$ on $X$, recall that
\begin{equation}
\llangle f_1,\ldots,f_n \rrangle = \operatorname{span}_\mathbb{Z}\{ T_x f_1, \ldots, T_x f_n: x \in X\},
\end{equation}
where $\operatorname{span}_\mathbb{Z}$ refers to the finite integer span. It is convenient to introduce classes of integer-valued functions:
\begin{align}
\label{C_0123}
C^0(X) &= \llangle \mathbf{e}_{(0,0)} \rrangle = \{f: X \to \mathbb{Z}, \|f\|_1<\infty\},\\
\notag C^1(X) &= \llangle \delta_1, \delta_2 \rrangle,\\
\notag C^2(X) &= \llangle \delta_1^{*2}, \delta_1 * \delta_2, \delta_2^{*2} \rrangle,\\
\notag C^3(X) &= \llangle \delta_1^{*3}, \delta_1^{*2} * \delta_2, \delta_1 * \delta_2^{*2}, \delta_2^{*3} \rrangle.
\end{align}
One has the equivalent characterizations
\begin{align}
C^1(X) &= \left\{ f \in C^0(X) : \sum_{x \in X} f(x) = 0 \right\}, \\
C^2(X) &= \left\{ f \in C^0(X) : \sum_{x \in X} f(x) = 0, \, \sum_{x \in X} f(x) x = 0 \right\} \label{C2}
\end{align}
and, for each $1 \leq k \leq 3$,
\begin{equation}
\label{C2_alt}
C^k(X) = \{\delta_1 * f + \delta_2 * g : f,g \in C^{k-1}(X)\}.
\end{equation}
Note the special cases $C^0(\mathbb{T}_m) = \mathbb{Z}^{\mathbb{T}_m}$ and $C^1(\mathbb{T}_m) = \mathbb{Z}_0^{\mathbb{T}_m}$.
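The moment conditions in \eqref{C2} can be verified directly on the generators of $C^2$. The following Python sketch convolves the $\delta_i$ as finitely supported functions on $\mathbb{Z}^2$ (represented as dicts from points to values) and checks that each generator has vanishing sum and vanishing first moment:

```python
from collections import defaultdict

def conv_Z2(f, g):
    """Convolution of finitely supported integer functions on Z^2."""
    h = defaultdict(int)
    for (a, b), u in f.items():
        for (c, d), v in g.items():
            h[(a + c, b + d)] += u * v
    return dict(h)

delta1 = {(0, 0): -1, (-1, 0): 1}
delta2 = {(0, 0): -1, (0, -1): 1}

# Generators of C^2(Z^2): delta_1^{*2}, delta_1 * delta_2, delta_2^{*2}.
for gen in (conv_Z2(delta1, delta1),
            conv_Z2(delta1, delta2),
            conv_Z2(delta2, delta2)):
    assert sum(gen.values()) == 0                              # zero sum
    assert sum(v * x for (x, y), v in gen.items()) == 0        # zero first moment (x)
    assert sum(v * y for (x, y), v in gen.items()) == 0        # zero first moment (y)
```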
\subsection{The graph Laplacian and Green's function}
The graph Laplacian $\Delta$ on either $\mathbb{Z}^2$ or $\mathbb{T}_m$ is the second-order discrete differential operator defined by \eqref{Laplacian}. On $\mathbb{Z}^2$ its Fourier transform is given by
\begin{equation}
\label{Delta-Fourier}
\widehat{(\Delta f)}(x,y) = (4 - 2[c(x) + c(y)])\hat{f}(x,y), \quad x,y \in \mathbb{R} / \mathbb{Z},
\end{equation}
and on $\mathbb{T}_m$ the Fourier transform is
\begin{equation}
\widehat{(\Delta f)}(x,y) = (4 - 2[c(x/m) + c(y/m)])\hat{f}(x,y), \quad x,y \in \mathbb{Z} / m\mathbb{Z}.
\end{equation}
\begin{lemma}
The graph Laplacians satisfy the operator bound
\begin{align}
\left\|\Delta\right\|_{\ell^2(\mathbb{Z}^2) \to \ell^2(\mathbb{Z}^2)}, \left\|\Delta\right\|_{L^2(\mathbb{T}_m) \to L^2(\mathbb{T}_m)}&\leq 8, \\\notag
\left\|\Delta\right\|_{\ell^{\infty}(\mathbb{Z}^2) \to \ell^{\infty}(\mathbb{Z}^2)}, \left\|\Delta\right\|_{L^{\infty}(\mathbb{T}_m) \to L^{\infty}(\mathbb{T}_m)}&\leq 8.
\end{align}
\end{lemma}
\begin{proof}
The $\ell^\infty$ and $L^\infty$ estimates are immediate. For $f \in \ell^2(\mathbb{Z}^2)$, by Parseval
\begin{equation}
\|\Delta f\|_2^2 = \int_{(\mathbb{R}/\mathbb{Z})^2} (4 - 2(c(x) + c(y)))^2 \left|\hat{f}(x,y)\right|^2 dxdy \leq 64 \|f\|_2^2.
\end{equation}
The bound on $L^2(\mathbb{T}_m)$ is similar.
\end{proof}
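The $L^2$ bound can also be observed numerically. The following Python sketch assumes the convention that $\Delta f(i,j)$ equals $4f(i,j)$ minus the sum over the four neighbors, consistent with the Fourier multiplier \eqref{Delta-Fourier}, and applies $\Delta$ to random functions on a small torus:

```python
import math
import random

def laplacian(f, m):
    """Delta f(i,j) = 4 f(i,j) minus the sum over the four torus neighbors."""
    return [[4 * f[i][j] - f[(i - 1) % m][j] - f[(i + 1) % m][j]
             - f[i][(j - 1) % m] - f[i][(j + 1) % m]
             for j in range(m)] for i in range(m)]

def l2_norm(f):
    return math.sqrt(sum(v * v for row in f for v in row))

random.seed(1)
m = 9
for _ in range(20):
    f = [[random.gauss(0, 1) for _ in range(m)] for _ in range(m)]
    # ||Delta f||_2 <= 8 ||f||_2
    assert l2_norm(laplacian(f, m)) <= 8 * l2_norm(f) + 1e-9
```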
On either $X = \mathbb{Z}^2$ or $X = \mathbb{T}_m$, let $\nu$ be the probability measure given by \eqref{nu}, which drives simple random walk on $X$. The Green's function $G$ is a distribution on $C^1(X)$ given by
\begin{equation}
G*f = \frac{1}{4}\sum_{n=0}^\infty (\nu^{*n} * f), \qquad f \in C^1(X).
\end{equation}
Since $\Delta f = 4\left(\delta_{(0,0)} - \nu\right) * f$, the formal computation
\begin{equation}
\Delta^{-1} = \frac{1}{4} \left( \delta_{(0,0)} - \nu \right)^{-1} = \frac{1}{4} \sum_{n=0}^\infty \nu^{*n} = G
\end{equation}
indicates that $G$ is in some sense the inverse of $\Delta$. Precise versions of this statement are given below.
On $\mathbb{Z}^2$, $G$ may be realized as the function \eqref{G_Z2_first}:
\begin{equation}
\label{G_Z2}
G_{\mathbb{Z}^2}(x) = \frac{1}{4}\sum_{n=0}^\infty \left[ \nu^{*n}(x)-\nu^{*n}(0,0) \right].
\end{equation}
This is a classical object of probability theory. We quote the asymptotics from \cite{FU96}.
\begin{theorem}[\cite{FU96}, Remark 2]\label{greens_function_asymptotic}
Let $x = (x_1,x_2) \in \mathbb{Z}^2$. There are constants $a,b>0$ such that
\begin{equation} \label{G_expansion}
G_{\mathbb{Z}^2}(x) = \left\{\begin{array}{lll}0 && x = (0,0)\\ -\frac{\log \|x\|_2}{2\pi} - a - b \frac{\frac{8x_1^2 x_2^2}{\|x\|_2^4}- 1}{\|x\|_2^2} + O(\|x\|_2^{-4})&& x \neq (0,0). \end{array}\right.
\end{equation}
\end{theorem}
It follows from \eqref{G_Z2} that
\begin{equation}
\Delta G_{\mathbb{Z}^2}(x) = \sum_{n=0}^\infty \left[ \nu^{*n}(x) - \nu^{*(n+1)}(x) \right] = \mathbf{e}_{(0,0)}(x),
\end{equation}
so $\Delta(G_{\mathbb{Z}^2} * f) = f$ for all $f \in C^0(\mathbb{Z}^2)$. The Fourier transform of $G_{\mathbb{Z}^2}$ is
\begin{equation}
\label{G_Z2-Fourier}
\hat{G}_{\mathbb{Z}^2}(x,y) = \frac{1}{4- 2\left(c(x) +c(y) \right)}. \end{equation}
When combined with \eqref{Delta-Fourier}, this shows that $G_{\mathbb{Z}^2} * \Delta f = f$ whenever $f \in \ell^2(\mathbb{Z}^2)$.
On $\mathbb{T}_m$ a realization of $G$ as a function is obtained by \eqref{G_T_m_first}:
\begin{equation}
\label{G_T_m}
G_{\mathbb{T}_m}(x) = \frac{1}{4}\sum_{n=0}^\infty \left(\nu^{*n}(x) - \frac{1}{m^2}\right).
\end{equation}
This converges absolutely, as is most easily checked by passing to frequency space, where the zeroth Fourier coefficient vanishes, and the remaining Fourier coefficients are convergent geometric series. Summing \eqref{G_T_m} over all $x \in \mathbb{T}_m$ shows that $G_{\mathbb{T}_m}$ has mean zero. As well, it follows from \eqref{G_T_m} that
\begin{equation}
\Delta G_{\mathbb{T}_m}(x) = \sum_{n=0}^\infty \left[ \nu^{*n}(x) - \nu^{*(n+1)}(x) \right] = \mathbf{e}_{(0,0)}(x) - \frac{1}{m^2}.
\end{equation}
Therefore, $\Delta (G_{\mathbb{T}_m} * f) = f - \mathbf{E}_{x \in \mathbb{T}_m}[f]$ for any $f \in L^2(\mathbb{T}_m)$. In particular, if $f \in L^2_0(\mathbb{T}_m)$ then $\Delta (G_{\mathbb{T}_m} * f) = f$.
It is also true that $G_{\mathbb{T}_m} * \Delta f = f - \mathbf{E}_{x \in \mathbb{T}_m}[f]$ for all $f \in L^2(\mathbb{T}_m)$. To prove this, observe that since $\Delta f \in L_0^2(\mathbb{T}_m)$, $\Delta( G_{\mathbb{T}_m} * \Delta f ) = \Delta f$. Only the constant functions are in the kernel of $\Delta$, so $G_{\mathbb{T}_m} * \Delta f = f - c$ for some constant $c$. Since $G_{\mathbb{T}_m} * \Delta f$ has mean zero, $c = \mathbf{E}_{x \in \mathbb{T}_m}[f]$.
Both operators, $\Delta$ and convolution with $G_{\mathbb{T}_m}$, have image $L_0^2(\mathbb{T}_m)$. The two observations $\Delta (G_{\mathbb{T}_m} * f) = f - \mathbf{E}_{x \in \mathbb{T}_m}[f]$ and $G_{\mathbb{T}_m} * \Delta f = f - \mathbf{E}_{x \in \mathbb{T}_m}[f]$ imply that the composition in either order of the two operators results in orthogonal projection onto $L_0^2(\mathbb{T}_m)$. Restricted to $L_0^2(\mathbb{T}_m)$, the two operators are inverses. On $L^2(\mathbb{T}_m)$, $G_{\mathbb{T}_m}$ is the \emph{Moore-Penrose pseudoinverse} of $\Delta$.
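As a numerical sanity check (not needed for the proofs), $G_{\mathbb{T}_m}$ can be constructed for small $m$ by sampling the multiplier in \eqref{G_Z2-Fourier} at the nonzero torus frequencies, after which the mean-zero property and $\Delta G_{\mathbb{T}_m} = \mathbf{e}_{(0,0)} - \frac{1}{m^2}$ can be verified directly:

```python
import cmath
import math

def green_torus(m):
    """Mean-zero Green's function G_{T_m}: inverse Fourier transform of the
    multiplier 1/(4 - 2(cos(2 pi x/m) + cos(2 pi y/m))) over the nonzero
    frequencies, with the zero frequency set to 0 (enforcing mean zero)."""
    G = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            s = 0j
            for x in range(m):
                for y in range(m):
                    if x == 0 and y == 0:
                        continue
                    denom = 4.0 - 2.0 * (math.cos(2 * math.pi * x / m)
                                         + math.cos(2 * math.pi * y / m))
                    s += cmath.exp(2j * math.pi * (i * x + j * y) / m) / denom
            G[i][j] = (s / m ** 2).real
    return G

def laplacian(f, m):
    """Delta f(i,j) = 4 f(i,j) minus the sum over the four torus neighbors."""
    return [[4 * f[i][j] - f[(i - 1) % m][j] - f[(i + 1) % m][j]
             - f[i][(j - 1) % m] - f[i][(j + 1) % m]
             for j in range(m)] for i in range(m)]

m = 6
G = green_torus(m)
assert abs(sum(map(sum, G))) < 1e-9              # G_{T_m} has mean zero
L = laplacian(G, m)
for i in range(m):
    for j in range(m):
        target = (1.0 if (i, j) == (0, 0) else 0.0) - 1.0 / m ** 2
        assert abs(L[i][j] - target) < 1e-9      # Delta G = e_{(0,0)} - 1/m^2
```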
We will require the following statements regarding discrete derivatives of $G_{\mathbb{T}_m}$. Recall that the notation $A \ll_{a,b} B$ means that there is a constant $0 < C < \infty$ depending on $a,b$ such that $|A| \leq CB$.
\begin{lemma}\label{greens_function_estimate}
For $a, b \in \mathbb{Z}_{\geq 0}$ with $a + b \geq 1$, and for $|i|, |j| \leq \frac{m}{2}$,
\begin{equation}
D_1^a D_2^b G_{\mathbb{T}_m}(i,j) \ll_{a,b} \frac{1}{1+\left(i^2 + j^2 \right)^{\frac{a
+ b}{2}}}.
\end{equation}
\end{lemma}
In the case $a+b= 1$ the following asymptotic evaluation holds.
\begin{lemma}\label{green_function_differentiated_asymptotic}
Let $m \geq 2$ and $0 \leq i, j \leq \frac{m}{2}$. Set $R = \sqrt{i^2+j^2}$.
There is a constant $c > 0$ such that, as $m \to \infty$, for $0<R <
\frac{m^{\frac{1}{2}}}{(\log m)^{\frac{1}{4}}}$,
\begin{align}
D_1 G_{\mathbb{T}_m}(i,j) &= - \frac{ci}{i^2 + j^2} + O\left(\frac{1}{i^2 +
j^2}\right),\\ \notag
D_2 G_{\mathbb{T}_m}(i,j) &= - \frac{cj}{i^2 + j^2} + O\left(\frac{1}{i^2 +
j^2}\right).
\end{align}
\end{lemma}
The proofs of Lemmas \ref{greens_function_estimate} and
\ref{green_function_differentiated_asymptotic} are given in Appendix
\ref{local_limit_theorem_appendix}.
\begin{lemma}\label{z_2_approx_lemma}
If $a + b \geq 2$ then $D_1^a D_2^b G_{\mathbb{Z}^2}$ is in $\ell^2(\mathbb{Z}^2)$ and for
each fixed $i, j$, $D_1^a D_2^b G_{\mathbb{T}_m}(i,j) \to D_1^a D_2^b G_{\mathbb{Z}^2}(i,j)$ as $m
\to \infty$.
\end{lemma}
\begin{proof}
The Fourier transform of $D_1^a D_2^b G_{\mathbb{Z}^2}$ is given by, for $x,y \in \mathbb{R}/\mathbb{Z}$, not both 0,
\begin{equation}
\widehat{D_1^a D_2^b G_{\mathbb{Z}^2}}(x,y) = \frac{ \left(e(x)-1
\right)^a\left(e(y)-1 \right)^b}{4- 2(c(x) + c(y))} .
\end{equation}
This function is bounded on $(\mathbb{R}/\mathbb{Z})^2$, which proves the first claim by
Parseval.
The Fourier transform of $D_1^a D_2^b G_{\mathbb{T}_m}$ at frequency $(x,y) \in (\mathbb{Z}/m\mathbb{Z})^2$ is given by
$
\widehat{D_1^a D_2^b G_{\mathbb{Z}^2}}\left(\frac{x}{m}, \frac{y}{m}\right).
$
Taking the group inverse Fourier transform,
\begin{equation}
D_1^a D_2^b G_{\mathbb{T}_m}(i,j) = \frac{1}{m^2} \sum_{x, y \in \mathbb{Z}/m\mathbb{Z}} \widehat{D_1^a
D_2^b G_{\mathbb{Z}^2}}\left(\frac{x}{m}, \frac{y}{m}\right) e\left(\frac{ix +jy}{m}
\right).
\end{equation}
Treating this as a Riemann sum and letting $m \to \infty$ obtains the limit
\begin{equation}
D_1^a D_2^b G_{\mathbb{Z}^2}(i,j) = \int_{(\mathbb{R}/\mathbb{Z})^2}\widehat{D_1^a D_2^b
G_{\mathbb{Z}^2}}\left(x, y\right) e\left(ix+jy \right)dxdy. \qedhere
\end{equation}
\end{proof}
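The Fourier representation used in this proof lends itself to a direct numerical check. The following Python sketch (illustrative only, and not part of the argument; it assumes the mean-zero normalization $\hat{G}_{\mathbb{T}_m}(0,0) = 0$) computes $G_{\mathbb{T}_m}$ by an inverse FFT, verifies $\Delta G_{\mathbb{T}_m} = \mathbf{e}_{(0,0)} - m^{-2}$ for this normalization, and observes the convergence of $D_1 D_2 G_{\mathbb{T}_m}(1,1)$ as $m$ grows:

```python
import numpy as np

def green_torus(m):
    """Mean-zero Green's function on the torus via its Fourier representation:
    hat G(x,y) = 1/(4 - 2cos(2 pi x/m) - 2cos(2 pi y/m)), zero mode set to 0."""
    cx = 2 * np.cos(2 * np.pi * np.arange(m) / m)
    denom = 4 - cx[:, None] - cx[None, :]
    denom[0, 0] = 1.0          # placeholder; the zero mode is removed below
    Ghat = 1.0 / denom
    Ghat[0, 0] = 0.0
    # numpy's ifft2 is exactly (1/m^2) sum_{x,y} Ghat[x,y] e((ix + jy)/m)
    return np.real(np.fft.ifft2(Ghat))

def D(f, axis):
    """Discrete derivative D_i f(x) = f(x + e_i) - f(x) on the torus."""
    return np.roll(f, -1, axis=axis) - f

def lap(f):
    """Graph Laplacian (4 - adjacency), matching the symbol 4 - 2(c(x) + c(y))."""
    return 4 * f - sum(np.roll(f, s, axis=a) for s in (1, -1) for a in (0, 1))

G64, G128 = green_torus(64), green_torus(128)

# Delta G = delta_0 - 1/m^2 for the mean-zero normalization.
target = -np.full((64, 64), 1.0 / 64**2)
target[0, 0] += 1.0
assert np.max(np.abs(lap(G64) - target)) < 1e-9

# Convergence of D_1 D_2 G_{T_m}(1,1) as m grows, as in the lemma above.
v64, v128 = D(D(G64, 0), 1)[1, 1], D(D(G128, 0), 1)[1, 1]
assert abs(v64 - v128) < 1e-2
```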
\subsection{Exponential sums}
This section collects the two results from the classical theory of exponential sums that are needed for the proof of Lemma \ref{savings_lemma}, which is the key ingredient in the upper bound of Theorem \ref{mixing_time_theorem}. For further references, see \cite{T86,IK04,M94}.
The first result is van der Corput's inequality. We will only need the case $H = 1$. See \cite{S04} for motivation and a proof of this statement.
\begin{theorem}[van der Corput's Lemma] Let $H$ be a positive integer. Then
for
any complex numbers $y_1, y_2, ..., y_N$,
\begin{equation}
\left|\sum_{n=1}^N y_n\right|^2 \leq \frac{N+H}{H+1}\sum_{n=1}^N |y_n|^2 +
\frac{2(N+H)}{H+1} \sum_{h=1}^H \left(1 -
\frac{h}{H+1}\right)\left|\sum_{n=1}^{N-h}y_{n+h}\overline{y_n}\right|.
\end{equation}
\end{theorem}
The second result treats summation of a linear phase function and is
fundamental.
\begin{lemma}
\label{geometric}
Let $\alpha \in \mathbb{R}\setminus \mathbb{Z}$ and let $N \geq 1$. Then
\begin{equation}
\left| \sum_{j = 1}^N e(\alpha j) \right| \ll \min\left(N, \|\alpha\|_{\mathbb{R}/\mathbb{Z}}^{-1}\right).
\end{equation}
\end{lemma}
\begin{proof}
Sum the geometric series.
\end{proof}
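A quick numerical check of the lemma (illustrative only; it uses the explicit constant $\frac12$ coming from $\sin \pi t \geq 2t$ on $[0,\frac12]$, which is one admissible value of the implied constant):

```python
import cmath
import math
import random

random.seed(1)

def e(t):
    """e(t) = exp(2 pi i t), as in the text."""
    return cmath.exp(2j * math.pi * t)

for _ in range(200):
    alpha = random.uniform(0.01, 0.99)
    N = random.randint(1, 500)
    s = abs(sum(e(alpha * j) for j in range(1, N + 1)))
    dist = min(alpha % 1, 1 - alpha % 1)     # ||alpha||_{R/Z}
    # Summing the geometric series gives |sum| = |sin(pi N alpha)/sin(pi alpha)|,
    # and sin(pi t) >= 2t on [0, 1/2], so |sum| <= min(N, 1/(2 ||alpha||)).
    assert s <= min(N, 1 / (2 * dist)) + 1e-8
```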
\section{Classification of functions harmonic modulo 1}
\label{classification_section}
This section proves Theorem \ref{H_p_theorem}. Let $1 \leq p < \infty$, and recall that ${\mathscr{H}}^p(\mathbb{Z}^2)$ is the set of all functions in $\ell^p(\mathbb{Z}^2)$ that are harmonic modulo 1. If $f \in {\mathscr{H}}^p(\mathbb{Z}^2)$, then as $\|x\|_2 \to \infty$, $f(x) \to 0$ and therefore also $\Delta f(x) \to 0$. Since $\Delta f$ is integer-valued, it must be identically zero outside a ball of finite radius. Thus,
\begin{equation}
\label{Hp}
{\mathscr{H}}^p\left(\mathbb{Z}^2\right) = \left\{f \in \ell^p\left(\mathbb{Z}^2\right): \Delta f \in C^0(\mathbb{Z}^2) \right\}, \qquad 1 \leq p < \infty,
\end{equation}
where $C^0(\mathbb{Z}^2)$ is the space of integer-valued functions on $\mathbb{Z}^2$ with finite support, as in \eqref{C_0123}.
From Theorem \ref{greens_function_asymptotic}, we can derive the following formulas.
\begin{lemma}
\label{greens_function_derivs}
For nonzero $x = (x_1,x_2) \in \mathbb{Z}^2$, we have:
\begin{align}
\label{greens_deriv_formulas}
D_1 G_{\mathbb{Z}^2}(x_1,x_2) &= \frac{1}{2\pi} \cdot \frac{-x_1}{x_1^2 + x_2^2} + O\left( \|x\|_2^{-2} \right) \\
\notag D_2 G_{\mathbb{Z}^2}(x_1,x_2) &= \frac{1}{2\pi} \cdot \frac{-x_2}{x_1^2 + x_2^2} + O\left( \|x\|_2^{-2} \right) \\
\notag D_1^2 G_{\mathbb{Z}^2}(x_1,x_2) &= \frac{1}{2\pi} \cdot \frac{x_1^2 - x_2^2}{(x_1^2 + x_2^2)^2} + O\left( \|x\|_2^{-3} \right) \\
\notag D_1 D_2 G_{\mathbb{Z}^2}(x_1,x_2) &= \frac{1}{2\pi} \cdot \frac{2x_1 x_2}{(x_1^2 + x_2^2)^2} + O\left( \|x\|_2^{-3} \right) \\
\notag D_2^2 G_{\mathbb{Z}^2}(x_1,x_2) &= \frac{1}{2\pi} \cdot \frac{x_2^2 - x_1^2}{(x_1^2 + x_2^2)^2} + O\left( \|x\|_2^{-3} \right) \\
\notag D_1^a D_2^b G_{\mathbb{Z}^2}(x_1,x_2) &= O\left( \|x\|_2^{-3} \right), \qquad a+b = 3.
\end{align}
\end{lemma}
\begin{proof}
For $(x_1,x_2) \notin \{(0,0),(-1,0)\}$, Theorem \ref{greens_function_asymptotic} gives
\begin{multline}
D_1 G_{\mathbb{Z}^2}(x_1,x_2) \\
\begin{aligned}
&= -\frac{1}{4\pi} \log\left( 1 + \frac{2x_1 + 1}{x_1^2 + x_2^2} \right) + b \left[ \frac{1}{(x_1+1)^2 + x_2^2} - \frac{1}{x_1^2 + x_2^2} \right] \\
&\quad - 8b \left[ \frac{(x_1+1)^2 x_2^2}{[(x_1+1)^2 + x_2^2]^3} - \frac{x_1^2 x_2^2}{(x_1^2 + x_2^2)^3} \right] + O\left( \|x\|_2^{-4} \right).
\end{aligned}
\end{multline}
Expand the log term into a Taylor series. The quantities in brackets are $O\left( \|x\|_2^{-3} \right)$, as follows from using a common denominator. Thus
\begin{equation}
D_1 G_{\mathbb{Z}^2}(x_1,x_2) = -\frac{1}{2\pi} \cdot \frac{x_1}{x_1^2 + x_2^2} + \frac{1}{4\pi} \cdot \frac{x_1^2 - x_2^2}{(x_1^2 + x_2^2)^2} + O\left( \|x\|_2^{-3} \right).
\end{equation}
This and the analogous statement for $D_2 G_{\mathbb{Z}^2}$ prove the first two formulas in \eqref{greens_deriv_formulas}. The remainder of the lemma is proved similarly by taking further discrete derivatives; we omit the details.
\end{proof}
\begin{proof}[Proof of Theorem \ref{H_p_theorem}]
Using the terminology introduced in Section \ref{background_section}, the desired statements are:
\begin{align}
\label{H_p_2}
{\mathscr{H}}^1(\mathbb{Z}^2) &= \{G_{\mathbb{Z}^2} * v : v \in C^3(\mathbb{Z}^2)\} + C^0(\mathbb{Z}^2) \\
\notag {\mathscr{H}}^p(\mathbb{Z}^2) &= \{G_{\mathbb{Z}^2} * v : v \in C^2(\mathbb{Z}^2)\}, \quad 1 < p \leq 2 \\
\notag {\mathscr{H}}^p(\mathbb{Z}^2) &= \{G_{\mathbb{Z}^2} * v : v \in C^1(\mathbb{Z}^2)\}, \quad 2 < p < \infty.
\end{align}
If $v \in C^k(\mathbb{Z}^2)$ for $1 \leq k \leq 3$, then $G_{\mathbb{Z}^2} * v$ is a finite integer linear combination of translates of $k$-th derivatives of $G_{\mathbb{Z}^2}$. It follows from Lemma \ref{greens_function_derivs} that $(G_{\mathbb{Z}^2} * v)(x) = O\left( \|x\|_2^{-k} \right)$, so $G_{\mathbb{Z}^2} * v \in \ell^p(\mathbb{Z}^2)$ as long as $p > 2/k$. Since $\Delta (G_{\mathbb{Z}^2} * v) = v$ is $\mathbb{Z}$-valued, we conclude that $G_{\mathbb{Z}^2} * v \in {\mathscr{H}}^p(\mathbb{Z}^2)$. Along with the observation that $C^0(\mathbb{Z}^2) \subset {\mathscr{H}}^1(\mathbb{Z}^2)$, this proves that for each line of \eqref{H_p_2}, the set on the left side contains the set on the right side.
We prove the forward inclusions in \eqref{H_p_2} in reverse order, from the third line to the first line. Let $f \in {\mathscr{H}}^p(\mathbb{Z}^2)$ for some $1 \leq p < \infty$, and let $v = \Delta f$. By \eqref{Hp}, $v \in C^0(\mathbb{Z}^2)$, so there is $R > 0$ such that the support of $v$ is contained in the $\ell^1$-ball of radius $R$ about the origin. In
\begin{equation}
(G_{\mathbb{Z}^2} * v)(x) = \sum_{y \in \mathbb{Z}^2} G_{\mathbb{Z}^2}(x-y) v(y) = \sum_{\|y\|_1 \leq R} G_{\mathbb{Z}^2}(x-y) v(y),
\end{equation}
write $G_{\mathbb{Z}^2}(x) - G_{\mathbb{Z}^2}(x-y)$ as a sum of at most $R$ first derivatives of $G_{\mathbb{Z}^2}$, and use Lemma \ref{greens_function_derivs} to see that $G_{\mathbb{Z}^2}(x) - G_{\mathbb{Z}^2}(x-y) = O_R\left( \|x\|_2^{-1} \right)$. Thus, setting $B = \|v\|_1$ and $a = \sum_{y \in \mathbb{Z}^2} v(y)$,
\begin{equation}
(G_{\mathbb{Z}^2} * v)(x) = a G_{\mathbb{Z}^2}(x) + O_{B,R}\left( \|x\|_2^{-1} \right).
\end{equation}
Set $h(x) = (G_{\mathbb{Z}^2} * v)(x) - f(x)$, so that $\Delta h \equiv 0$. If $a \neq 0$, then as $\|x\|_2 \to \infty$, we have $(G_{\mathbb{Z}^2} * v)(x) \to -\mathrm{sgn}(a) \cdot \infty$ while $f(x) \to 0$, meaning that $h(x) \to -\mathrm{sgn}(a) \cdot \infty$. This violates the maximum principle, so $a = 0$ and $v \in C^1(\mathbb{Z}^2)$. We now have $h(x) \to 0$ as $\|x\|_2 \to \infty$, so again by the maximum principle, $h \equiv 0$ and $f = G_{\mathbb{Z}^2} * v$. This proves the forward inclusion in the third line of \eqref{H_p_2}.
Suppose that $p \leq 2$. Since $v \in C^1(\mathbb{Z}^2)$, we can write $v = \delta_1 * v_1 + \delta_2 * v_2$ for some $v_1,v_2 \in C^0(\mathbb{Z}^2)$. Then
\begin{align}
(G_{\mathbb{Z}^2} * v)(x) &= (D_1 G_{\mathbb{Z}^2} * v_1)(x) + (D_2 G_{\mathbb{Z}^2} * v_2)(x) \\
\notag &= \sum_{\|y\|_1 \leq R+1} D_1 G_{\mathbb{Z}^2}(x-y) v_1(y) + D_2 G_{\mathbb{Z}^2}(x-y) v_2(y) \\
\notag &= b_1 D_1 G_{\mathbb{Z}^2}(x) + b_2 D_2 G_{\mathbb{Z}^2}(x) + O_{B,R}\left( \|x\|_2^{-2} \right),
\end{align}
where each $b_i = \sum_{y \in \mathbb{Z}^2} v_i(y)$. In the last equality we wrote $D_i G_{\mathbb{Z}^2}(x) - D_i G_{\mathbb{Z}^2}(x-y)$ as a sum of $O(R)$ second derivatives of $G_{\mathbb{Z}^2}$ and used the bound from Lemma \ref{greens_function_derivs}. Again using Lemma \ref{greens_function_derivs}, we obtain for nonzero $x = (x_1,x_2)$ that
\begin{equation}
(G_{\mathbb{Z}^2} * v)(x) = \frac{1}{2\pi} \cdot \frac{-b_1 x_1 - b_2 x_2}{x_1^2 + x_2^2} + O_{B,R}\left( \|x\|_2^{-2} \right).
\end{equation}
Suppose $b_1$ and $b_2$ are not both zero. Then, there are $0 \leq \theta_1 < \theta_2 < 2\pi$ such that $(G_{\mathbb{Z}^2} * v)(x) \asymp \|x\|_2^{-1}$ for all $x \neq (0,0)$ with $\theta_1 \leq \arg(x) \leq \theta_2$. This contradicts the assumption that $f = G_{\mathbb{Z}^2} * v \in \ell^2(\mathbb{Z}^2)$. We conclude that $b_1 = b_2 = 0$, so $v_1,v_2 \in C^1(\mathbb{Z}^2)$ and therefore $v \in C^2(\mathbb{Z}^2)$ by \eqref{C2_alt}.
Finally, suppose that $p = 1$. Since $v \in C^2(\mathbb{Z}^2)$, we can write $v = \delta_1^{*2} * w_1 + (\delta_1 * \delta_2) * w_2 + \delta_2^{*2} * w_3$ for some $w_1,w_2,w_3 \in C^0(\mathbb{Z}^2)$. Set $c_i = \sum_{y \in \mathbb{Z}^2} w_i(y)$. By the same reasoning as in the previous case,
\begin{align}
&\quad\, (G_{\mathbb{Z}^2} * v)(x) \\
\notag &= c_1 D_1^2 G_{\mathbb{Z}^2}(x) + c_2 D_1 D_2 G_{\mathbb{Z}^2}(x) + c_3 D_2^2 G_{\mathbb{Z}^2}(x) + O_{B,R}\left( \|x\|_2^{-3} \right) \\
\notag &= \frac{1}{2\pi} \cdot \frac{(c_1 - c_3)(x_1^2 - x_2^2) +2c_2 x_1 x_2}{(x_1^2 + x_2^2)^2} + O_{B,R}\left( \|x\|_2^{-3} \right)
\end{align}
for all nonzero $x = (x_1,x_2)$. This implies that $c_1 = c_3$ and $c_2 = 0$; if not, the first term would have asymptotic order $\|x\|_2^{-2}$ for $\arg(x)$ in some range $[\theta_1,\theta_2]$, contradicting that $f \in \ell^1(\mathbb{Z}^2)$.
Set $c = c_1 = c_3$, and let $v' = \Delta \mathbf{e}_{(0,0)} = -\delta_1^{*2} * \mathbf{e}_{(1,0)} - \delta_2^{*2} * \mathbf{e}_{(0,1)}$. Then
\begin{equation}
v + cv' = \delta_1^{*2} * (w_1 - c\,\mathbf{e}_{(1,0)}) + (\delta_1 * \delta_2) * w_2 + \delta_2^{*2} * (w_3 - c\,\mathbf{e}_{(0,1)}).
\end{equation}
Since all three of $w_1 - c\,\mathbf{e}_{(1,0)}$, $w_2$, and $w_3 - c\,\mathbf{e}_{(0,1)}$ are in $C^1(\mathbb{Z}^2)$, we have $v + cv' \in C^3(\mathbb{Z}^2)$. As well, $G_{\mathbb{Z}^2} * cv' = c\,\mathbf{e}_{(0,0)} \in C^0(\mathbb{Z}^2)$. Hence
\begin{equation}
f = G_{\mathbb{Z}^2} * (v + cv' - cv') \in \{G_{\mathbb{Z}^2} * w : w \in C^3(\mathbb{Z}^2)\} + C^0(\mathbb{Z}^2),
\end{equation}
which completes the proof.
\end{proof}
\section{Stabilization on $\mathbb{Z}^2$}\label{stability_section}
Consider a sandpile $\sigma: \mathbb{Z}^2 \to \mathbb{Z}_{\geq 0}$. The \emph{parallel toppling} procedure attempts to stabilize $\sigma$ by defining a sequence of sandpiles $\sigma = \sigma^0, \sigma^1, \sigma^2, \ldots$ where $\sigma^{n+1}$ is obtained from $\sigma^n$ by simultaneously toppling all vertices $x$ with $\sigma^n(x) \geq 4$. Formally, set $v^n(x) = \mathbf{1}\{\sigma^n(x) \geq 4\}$ and define $\sigma^{n+1} = \sigma^n - \Delta(v^n)$. Define the sequence of \emph{odometer functions} $u^1,u^2,\ldots$ by $u^n = v^0 + v^1 + \cdots + v^{n-1}$, so that $u^n(x)$ is the number of times vertex $x$ has toppled in the first $n$ parallel rounds. In particular, $\|u^n\|_{\ell^\infty} \leq n$ and $\sigma^n = \sigma - \Delta(u^n)$. It is shown in \cite{FMR09} that $\sigma$ stabilizes if and only if $u^n \uparrow u^\infty$ for some $u^\infty: \mathbb{Z}^2 \to \mathbb{Z}_{\geq 0}$, in which case the stabilization is given by $\sigma^\infty = \sigma - \Delta u^\infty$.
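The parallel toppling procedure can be sketched in a few lines of Python. The finite window with absorbing boundary below is an assumption of the illustration, standing in for $\mathbb{Z}^2$ (chips toppled off the window are lost); the code checks the odometer identity $\sigma^n = \sigma - \Delta(u^n)$ at every round:

```python
import numpy as np

rng = np.random.default_rng(0)

def nbr_sum(f):
    """Sum of f over the four lattice neighbours, with zero (absorbing) boundary."""
    g = np.pad(f, 1)
    return g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]

def lap(f):
    """Dirichlet Laplacian Delta f = 4f - (sum of neighbours) on the window."""
    return 4 * f - nbr_sum(f)

n = 20
sigma0 = rng.integers(0, 6, size=(n, n))      # initial sandpile, heights 0..5
sigma, u = sigma0.copy(), np.zeros((n, n), dtype=int)

# Parallel toppling: v^n = 1{sigma^n >= 4}, sigma^{n+1} = sigma^n - Delta(v^n).
while (sigma >= 4).any():
    v = (sigma >= 4).astype(int)
    u += v
    sigma = sigma - lap(v)
    # The odometer identity sigma^n = sigma - Delta(u^n) holds at every round.
    assert np.array_equal(sigma, sigma0 - lap(u))

assert (sigma < 4).all() and (sigma >= 0).all()   # stabilized
```

On a finite window the procedure always terminates, since mass is lost at the boundary; on $\mathbb{Z}^2$ itself termination is exactly the stabilization question studied in this section.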
Our proof uses the following `conservation of density' result of \cite{FMR09}.
\begin{lemma}[\cite{FMR09}, Lemma 2.10]
\label{stabilization_prop}
Let $(\sigma_x)_{x \in \mathbb{Z}^2}$ be i.i.d.~and stabilize almost surely, with
stabilization $(\sigma^\infty_x)_{x \in \mathbb{Z}^2}$. Then $\mathbf{E}[\sigma_0] = \mathbf{E}[\sigma^\infty_0]$.
\end{lemma}
In particular, if the i.i.d.~sandpile $\sigma$ stabilizes almost surely, then $\mathbf{E}[\sigma_0] \leq 3$.
We now show that if $\xi \in {\mathscr{H}}^1(\mathbb{Z}^2)$, the pairing $\langle \sigma, \xi \rangle = \sum_{x \in \mathbb{Z}^2} \sigma(x) \xi(x)$ remains invariant modulo 1 when the sandpile $\sigma$ is stabilized.
\begin{lemma}
\label{pairing-invariant}
Let $(\sigma_x)_{x \in \mathbb{Z}^2}$ be an i.i.d.~sandpile which stabilizes almost surely, and let $\xi \in {\mathscr{H}}^1(\mathbb{Z}^2)$. Then
\begin{equation} \langle \sigma, \xi\rangle \equiv \langle \sigma^\infty, \xi\rangle \mod{1}, \qquad a.s.
\end{equation}
\end{lemma}
\begin{proof}
Lemma \ref{stabilization_prop} implies that $\mathbf{E}[\sigma_0] < \infty$.
Since $\xi \in \ell^1(\mathbb{Z}^2)$,
\begin{equation}
\mathbf{E}[ \langle \sigma, |\xi| \rangle ] = \sum_{x \in \mathbb{Z}^2} |\xi_x| \mathbf{E}[\sigma_x] = \|\xi\|_1 \mathbf{E}[\sigma_0] < \infty
\end{equation}
and so $\langle \sigma, \xi\rangle$ converges absolutely almost surely.
Write $\sigma^n = \sigma - \Delta u^n$ and use self-adjointness of $\Delta$ to obtain
\begin{equation}
\label{finite-invariant}
\langle \sigma^n, \xi\rangle = \langle \sigma - \Delta u^n, \xi\rangle = \langle \sigma, \xi \rangle - \langle u^n, \Delta \xi \rangle.
\end{equation}
Since $u^n$ is integer-valued, increasing, and convergent almost surely, while $\Delta\xi$ is integer-valued and has finite support, the sequence $\langle u^n, \Delta\xi\rangle$ converges a.s.~to $\langle u^\infty, \Delta\xi \rangle \in \mathbb{Z}$.
Note that the parallel toppling property implies that, for $n \geq 0$, \begin{equation}\sigma^{n+1}(x) \leq \max\left(\sigma^{n}(x), 7\right).\end{equation}
Thus, whenever $\langle \sigma, |\xi| \rangle$ is finite and $\sigma$ stabilizes to $\sigma^\infty$,
\begin{equation}
\lim_{n \to \infty} \langle \sigma^n, \xi \rangle = \lim_{n \to \infty} \sum_{x \in \mathbb{Z}^2} \sigma^n(x) \xi_x = \sum_{x \in \mathbb{Z}^2} \lim_{n \to \infty} \sigma^n(x) \xi_x = \langle \sigma^\infty, \xi \rangle
\end{equation}
where the second equality is justified by dominated convergence:
\begin{equation}
|\sigma^n(x) \xi_x| \leq \max(\sigma(x), 7) |\xi_x|, \qquad \sum_{x \in \mathbb{Z}^2} \max(\sigma(x), 7) |\xi_x| < \infty.
\end{equation}
Sending $n \to \infty$ in \eqref{finite-invariant} completes the proof.
\end{proof}
For definiteness, our argument uses the particular function
\begin{equation}
\xi = G_{\mathbb{Z}^2}* \delta_1^{*3} = D_1^3 G_{\mathbb{Z}^2},
\end{equation}
which is in ${\mathscr{H}}^1(\mathbb{Z}^2)$ by Lemma \ref{greens_function_derivs}. The next lemma estimates the tail of $\|\xi\|_2^2$.
\begin{lemma}\label{xi_tail}
Let $R \geq 1$ be a parameter. As $R \to \infty$,
\begin{equation} \label{four_thirds}
\sum_{x\in \mathbb{Z}^2\,:\, 0 < |\xi_x| < \frac{1}{2R}} |\xi_x|^2 \gg R^{-\frac{4}{3}}.
\end{equation}
\end{lemma}
\begin{proof}
Arguing as in Lemma \ref{greens_function_derivs}, we see that there are $0 \leq \theta_1 < \theta_2 < 2\pi$ such that, for nonzero $x \in \mathbb{Z}^2$ satisfying $\theta_1 \leq \arg(x) \leq \theta_2$,
\begin{equation}
|\xi_x| \asymp \|x\|_2^{-3}.
\end{equation}
Thus
\begin{equation}
\sum_{x \in \mathbb{Z}^2\,:\, 0 < |\xi_x| < \frac{1}{2R}} |\xi_x|^2 \gg \int_{R^{\frac{1}{3}}}^\infty \frac{dr}{r^5} \gg R^{-\frac{4}{3}}. \qedhere
\end{equation}
\end{proof}
As the proof below makes clear, an explicit constant in the lower bound \eqref{four_thirds} would lead to explicit values of $c,d$ in the statement of Theorem \ref{quantitative_theorem}. To obtain a fully quantitative version of Lemma \ref{xi_tail}, it would be enough to bound the error in \eqref{G_expansion} by finding an explicit $C > 0$ such that
\begin{equation} \label{G_error}
\left| G_{\mathbb{Z}^2}(x) + \frac{\log \|x\|_2}{2\pi} + a + b \frac{\frac{8x_1^2 x_2^2}{\|x\|_2^4}- 1}{\|x\|_2^2} \right| \leq C\|x\|_2^{-4}
\end{equation}
for all $(0,0) \neq x \in \mathbb{Z}^2$. A result in this direction \cite[Section 4]{KS04} is that
\begin{equation}
\left| G_{\mathbb{Z}^2}(x) + \frac{\log \|x\|_2}{2\pi} \right| \leq 0.01721 \|x\|_2^{-2}.
\end{equation}
(Indeed, the constant $0.01721$ is optimal and an exact formula for it is given.) It is likely that extending the techniques developed in \cite{KS04} would lead to a bound of the form \eqref{G_error}, and thence to an explicit numerical bound in Theorem \ref{quantitative_theorem}.
\begin{proof}[Proof of Theorem \ref{quantitative_theorem}]
Consider the characteristic functions
\begin{equation}
\chi(\sigma;\xi) = \mathbf{E}\left[e^{-2\pi i \langle \sigma, \xi\rangle}\right], \qquad \chi(\sigma^\infty;\xi) = \mathbf{E}\left[e^{-2\pi i \langle\sigma^\infty, \xi\rangle}\right].
\end{equation}
Since $\langle \sigma, \xi\rangle \equiv \langle \sigma^\infty, \xi\rangle \bmod 1$ a.s., $\chi(\sigma;\xi) = \chi(\sigma^\infty; \xi)$.
Let $\mathbf{E}[\sigma_0] =\mathbf{E}[\sigma_0^\infty]= 3-\epsilon$. Using $|1- e^{2\pi i t}| \leq 2 \pi|t|$ and $\sum_{x \in \mathbb{Z}^2}\xi_x = 0$,
\begin{align}\label{sigma_infty_upper_bound}
|1 - \chi(\sigma^\infty; \xi)| &= \left| \mathbf{E}\left[1 - e^{-2\pi i \langle
\sigma^\infty - 3, \xi\rangle }\right] \right| \\
&\notag\leq \mathbf{E}[2\pi |\left\langle \sigma^{\infty}-3, \xi \right\rangle |] \\
&\notag\leq 2\pi \|\xi\|_1 \epsilon.
\end{align}
Thus, $|\chi(\sigma^\infty; \xi)| \geq 1 - 2\pi \|\xi\|_1 \epsilon$.
Meanwhile, since $(\sigma_x)_{x \in \mathbb{Z}^2}$ is i.i.d.,
\begin{equation}
\chi(\sigma; \xi) = \prod_{x \in \mathbb{Z}^2} \mathbf{E}\left[e^{-2\pi i \xi_x \sigma_0} \right].
\end{equation}
Use the inequality $-\log t \geq \frac{1-t^2}{2}$, valid for $0 < t \leq 1$, to obtain
\begin{equation}
-\log \left|\chi(\sigma; \xi) \right| \geq \frac{1}{2} \sum_{x \in \mathbb{Z}^2} \left(1 -\left|\mathbf{E}\left[e^{-2\pi i \xi_x \sigma_0} \right] \right|^2 \right).
\end{equation}
Let $X,X'$ be independent and distributed as $\sigma_0$. One has
\begin{equation}
\left|\mathbf{E}\left[e^{-2\pi i \xi_x \sigma_0} \right] \right|^2 = \mathbf{E}\left[ e^{-2\pi i \xi_x X} \right] \mathbf{E}\left[ e^{2\pi i \xi_x X'} \right] = \mathbf{E}\left[ e^{-2\pi i \xi_x (X-X')} \right].
\end{equation}
This quantity is equal to its real part $\mathbf{E}[c(\xi_x (X-X'))]$. (Recall $c(t) = \cos 2\pi t$.) Therefore, using $1- c(t) \geq 8 t^2$ for $|t| \leq \frac{1}{2}$,
\begin{align}
-\log \left|\chi(\sigma; \xi) \right| &\geq \frac{1}{2} \sum_{x \in \mathbb{Z}^2} \Big(1 - \mathbf{E}[c(\xi_x (X-X'))] \Big)\\
&\notag\geq 4 \mathbf{E}\left[ \sum_{0 < |\xi_x (X-X')| < \frac{1}{2}} \xi_x^2 (X-X')^2 \right] \\
&\notag = 4 \sum_{k=1}^\infty \mathbf{E}\left[ \mathbf{1}\{|X-X'| = k\} \sum_{0 < |\xi_x| < \frac{1}{2k}} \xi_x^2 k^2 \right].
\end{align}
Lemma \ref{xi_tail} now implies that
\begin{equation}
-\log \left|\chi(\sigma; \xi) \right| \gg \sum_{k=1}^\infty \mathbf{E}\left[ \mathbf{1}\{|X-X'| = k\} k^{2/3} \right] = \mathbf{E}[|X-X'|^{2/3}],
\end{equation}
and therefore
\begin{equation}
1 - \left|\chi(\sigma; \xi)\right| \gg \min\left(1, \mathbf{E}[|X-X'|^{2/3}] \right).
\end{equation}
The result follows on combining this with (\ref{sigma_infty_upper_bound}).
\end{proof}
\section{The sandpile group}\label{sandpile_section}
Recall the designations $\mathscr{R}_m \subset \mathscr{S}_m$ for the recurrent and stable states, respectively, of the sandpile model on $\mathbb{T}_m$ with sink at $(0,0)$. Any sandpile $\sigma: \mathbb{T}_m \setminus \{(0,0)\} \to \mathbb{Z}_{\geq 0}$ can be stabilized by repeatedly performing legal topplings until the resulting configuration is stable. By the abelian property \cite{D90}, the final state does not depend on the order in which the topplings are performed, and is called the \emph{stabilization} of $\sigma$.
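The abelian property can be checked directly on a small torus: topple in two different orders and compare the stabilizations. The following sketch (illustrative only) does so with a sink at $(0,0)$, which absorbs any chips sent to it:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
sink = (0, 0)

def stabilize(sigma, order):
    """Perform legal topplings, sweeping the sites in the given cyclic order,
    until the configuration is stable; chips sent to the sink are absorbed."""
    sigma = sigma.copy()
    sites = [p for p in order if p != sink]
    done = False
    while not done:
        done = True
        for (i, j) in sites:
            while sigma[i, j] >= 4:
                done = False
                sigma[i, j] -= 4
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    p = ((i + di) % m, (j + dj) % m)
                    if p != sink:
                        sigma[p] += 1
    return sigma

sigma = rng.integers(0, 10, size=(m, m))
sigma[sink] = 0
all_sites = [(i, j) for i in range(m) for j in range(m)]
perm = all_sites.copy()
rng.shuffle(perm)

s1 = stabilize(sigma, all_sites)
s2 = stabilize(sigma, perm)
assert np.array_equal(s1, s2)      # same stabilization, regardless of order
assert (s1 < 4).all()
```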
If we view functions on $\mathbb{T}_m$ as $m^2 \times 1$ column vectors, then the Laplacian operator $\Delta$ on $\mathbb{T}_m$ can be considered as an $m^2 \times m^2$ matrix, so that for example $\Delta \mathbb{Z}^{\mathbb{T}_m}$ is the integer span of the columns of $\Delta$. The null space of $\Delta$ is one-dimensional, and is spanned by the all-ones vector.
The \emph{reduced Laplacian} $\Delta'$ is obtained by omitting the row and column
corresponding to the sink $(0,0)$, and is invertible.
The recurrent states $\mathscr{R}_m$ of the sandpile model are naturally identified with the abelian group
\begin{equation}
\label{G_m}
\mathscr{G}_m := \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}}/\Delta' \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}},
\end{equation}
which is the \emph{sandpile group} of $\mathbb{T}_m$. Indeed, each equivalence class
\begin{equation}
\sigma + \Delta' \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}} \subset \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}}, \quad \sigma \in \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}},
\end{equation}
contains exactly one recurrent sandpile \cite{HLMPPW08}. Addition in $\mathscr{G}_m$ corresponds via this bijection to the operation on $\mathscr{R}_m$ of pointwise addition followed by stabilization.
The \emph{sandpile Markov chain} has state space $\mathscr{S}_m$ and transition operator $P_m$. To take a single step from a sandpile $\sigma$, choose a site $x \in \mathbb{T}_m$ uniformly at random. If $x \neq (0,0)$, replace $\sigma$ with the stabilization of $\sigma + \mathbf{e}_x$; if $x = (0,0)$, remain at $\sigma$. The recurrent states of the chain are precisely $\mathscr{R}_m$, and the chain restricted to $\mathscr{R}_m$ is a random walk on the group $\mathscr{G}_m$. See \cite{JLP15}, which develops this construction in the setting of an arbitrary underlying graph, for further background.
Using \eqref{G_m}, the matrix-tree theorem implies that $\mathscr{G}_m$ is in bijection with the spanning trees of $\mathbb{T}_m$. It is shown in \cite{JLP15} that $|\mathscr{G}_m| = \exp\left(\left(\frac{4\beta(2)}{\pi}+o(1)\right)m^2 \right)$ where $\beta(2)$ is the Catalan constant,
\begin{equation}
\frac{4 \beta(2)}{\pi} = 1.1662\ldots.
\end{equation}
Thus the recurrent states make up an exponentially small fraction of the $4^{m^2-1}$ stable states.
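Both descriptions of $|\mathscr{G}_m|$ can be checked numerically: the eigenvalue product supplied by the matrix-tree theorem against a direct determinant of $\Delta'$ for small $m$, and the exponential growth rate against $4\beta(2)/\pi$. A Python sketch (illustrative only; the tolerance $0.05$ for $m = 32$ is a generous allowance for the finite-size correction):

```python
import numpy as np

def log_group_order(m):
    """log |G_m| via eigenvalues: by the matrix-tree theorem, |G_m| = det(Delta')
    = (1/m^2) * prod over nonzero frequencies of 4 - 2cos(2pi j/m) - 2cos(2pi k/m)."""
    c = 2 * np.cos(2 * np.pi * np.arange(m) / m)
    lam = (4 - c[:, None] - c[None, :]).ravel()[1:]   # drop zero eigenvalue at (0,0)
    return np.sum(np.log(lam)) - np.log(m ** 2)

# Cross-check against a direct determinant of the reduced Laplacian for m = 4.
m = 4
sites = [(i, j) for i in range(m) for j in range(m)]
idx = {p: k for k, p in enumerate(sites)}
L = np.zeros((m * m, m * m))
for (i, j), k in idx.items():
    L[k, k] = 4
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        L[k, idx[((i + di) % m, (j + dj) % m)]] -= 1
Lred = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
assert abs(np.linalg.slogdet(Lred)[1] - log_group_order(4)) < 1e-8

# Growth rate: log |G_m| / m^2 approaches 4 * beta(2) / pi = 1.1662...
catalan = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(10 ** 5))
target = 4 * catalan / np.pi
assert abs(log_group_order(32) / 32 ** 2 - target) < 0.05
```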
The following proposition bounds the hitting time started from a deterministic stable state to reach a recurrent state.
\begin{proposition}\label{recurrent_state_hitting_time_proposition}
There is a constant $C > 0$ such that, as $m \to \infty$, for any stable state
$\sigma \in \mathscr{S}_m$, if $n > C m^2 \sqrt{\log m }$ then
\[
\mathbf{Prob}\left(P_m^n \delta_{\sigma} \in \mathscr{R}_m \right) = 1 - o(1).
\]
\end{proposition}
\begin{rem}
Starting from $\sigma = 0$, at least order $m^2$ steps are necessary to reach a recurrent state, since only one chip is added at a time. We do not claim that the extra factor of $\sqrt{\log m}$ above is optimal. Because we will show that the mixing time of the sandpile chain has order $m^2 \log m$, the bound in Proposition \ref{recurrent_state_hitting_time_proposition} is sufficient for understanding the mixing behavior.
\end{rem}
\begin{proof}
We make two initial observations. First, any state satisfying $\sigma \geq 3$
everywhere can be toppled to a stable recurrent state, because such a state can
evidently be reached from a recurrent state. Second, by performing a sequence of
topplings, a pile of $h$ chips at a single vertex can be toppled to produce a disc
of radius $\gg \sqrt{h}$ with height at least 3. This follows as a simple
consequence of the analysis in \cite{PS13}, which studies the limiting shape of
the configuration obtained by repeatedly toppling a pile at a single vertex.
Let $A$ be an integer, $A \ll \sqrt{\log m}$, and drop $n \sim \operatorname{Poisson}(Am^2)$ grains of sand on the torus, while performing no
topplings. Note that this is the same as independently dropping $\operatorname{Poisson}(A)$ grains of sand on each vertex. Also, $n < 2Am^2$ with probability $1 - o(1)$.
The probability that a non-sink vertex $x$ has height at most $a$ is
\begin{equation}\label{single_point}
\mathbf{Prob}(h_x \leq a) = e^{-A}\sum_{j=0}^a \frac{A^j}{j!}.
\end{equation}
For $ a < \frac{A}{2}$ we obtain
\begin{align*}
\mathbf{Prob}(h_x \leq a) \asymp \frac{A^a}{a!}\exp\left(-A \right).
\end{align*}
If $x_1, x_2, \ldots, x_s$ denote the points of a disc of area $s \gg a$, then, by
independence,
\begin{align}\label{hole_probability}
\mathbf{Prob}\left(\bigwedge_{i=1}^s (h_{x_i} \leq a) \right) \leq \exp\left(-sA + sa
\log \frac{A}{a} + s(a + O(1)) \right).
\end{align}
Choose $s, a \asymp \sqrt{\log m}$ such that a point of height $a$ in a disc
of area $s$ topples to cover the disc. Then choose $A$ to be a sufficiently large
constant times $\sqrt{\log m}$, so that the probability bounded in
\eqref{hole_probability} is $o\left(1/m^2 \right)$. It follows that,
with probability $1-o(1)$, every disc on the torus at distance $\gg \sqrt{\log m}$
from the sink contains a point of height greater than $a$. The
sites closer to the sink have height $\geq 3$ with probability $1 - o(1)$, by
\eqref{single_point} and a union bound. Hence, with probability $1-o(1)$, the
resulting configuration can be toppled to have height at least $3$ everywhere,
and therefore reaches a recurrent state by the first observation.
\end{proof}
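The two-sided bound used after \eqref{single_point} amounts to the observation that for $a < \frac{A}{2}$ the terms of the truncated Poisson sum decay geometrically with ratio at most $\frac12$, so the sum lies within a factor $2$ of its leading term. A quick numerical check (illustrative only):

```python
from math import exp, factorial

# For a < A/2 the truncated Poisson sum sum_{j <= a} A^j e^{-A} / j! is within
# a factor 2 of its leading term A^a e^{-A} / a!, since the consecutive term
# ratio is at most a/A <= 1/2.
for A, a in ((20, 8), (30, 5), (50, 24)):
    prob = exp(-A) * sum(A ** j / factorial(j) for j in range(a + 1))
    lead = exp(-A) * A ** a / factorial(a)
    ratio = prob / lead
    assert 1 <= ratio <= 2
```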
The following proposition reduces the statements in Theorem \ref{mixing_time_theorem} to estimates started from the fixed recurrent state $\sigma \equiv 3$.
\begin{proposition}
\label{reduction_proposition}
For each constant $C > 0$, for $t = C m^2 \log m$, as $m \to \infty$,
\begin{equation}
\sup_{\sigma_0 \in \mathscr{S}_m}\Big|\left\|P_m^{t}\delta_{\sigma_0} -\mathbb{U}_{\mathscr{R}_m}\right\|_{\operatorname{TV}} - \left\|P_m^{t}\delta_{\sigma \equiv 3}- \mathbb{U}_{\mathscr{R}_m} \right\|_{\operatorname{TV}} \Big| = o(1).
\end{equation}
\end{proposition}
\begin{proof}
Given $\sigma_0 \in \mathscr{S}_m$, let $\sigma_1 \in \mathscr{R}_m$ be the unique recurrent state in the equivalence class $\sigma_0 + \Delta' \mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}}$. By Proposition \ref{recurrent_state_hitting_time_proposition}, $P_m^t\delta_{\sigma_0}(\mathscr{R}_m) = 1 - o(1)$, and thus $\left\|P_m^t \delta_{\sigma_0} - P_m^t \delta_{\sigma_1}\right\|_{\operatorname{TV}} = o(1)$. Since the chain restricted to $\mathscr{R}_m$ is transitive, it follows from the triangle inequality that
\begin{align}
&\Big|\left\|P_m^{t}\delta_{\sigma_0} -\mathbb{U}_{\mathscr{R}_m}\right\|_{\operatorname{TV}} - \left\|P_m^{t}\delta_{\sigma \equiv 3}- \mathbb{U}_{\mathscr{R}_m} \right\|_{\operatorname{TV}} \Big| \\
\notag &\quad = \Big|\left\|P_m^{t}\delta_{\sigma_0} -\mathbb{U}_{\mathscr{R}_m}\right\|_{\operatorname{TV}} - \left\|P_m^{t}\delta_{\sigma_1}- \mathbb{U}_{\mathscr{R}_m} \right\|_{\operatorname{TV}} \Big| \\
\notag &\quad \leq \left\|P_m^t \delta_{\sigma_0} - P_m^t \delta_{\sigma_1}\right\|_{\operatorname{TV}} = o(1). \qedhere
\end{align}
\end{proof}
\subsection{Random walk on the sandpile group}
\label{Random_walk_on_the_sandpile_group}
Going forward we assume that the sandpile Markov chain is started from the deterministic recurrent state $\sigma \equiv 3$, so that the dynamics reduces to a random walk on the abelian group $\mathscr{G}_m$. In general, for any random walk on a finite abelian group $\mathscr{G}$ driven by the measure $\mu$, the eigenfunctions of the transition kernel are given by the dual group $\hat{\mathscr{G}}$, which is the additive group of characters, i.e.\ homomorphisms $\xi : \mathscr{G} \to \mathbb{R} / \mathbb{Z}$. If $\xi \cdot g$ denotes the image of $g \in \mathscr{G}$ under $\xi \in \hat{\mathscr{G}}$, then the eigenfunction corresponding to $\xi$ is $f_\xi(g) = e(\xi \cdot g)$. The corresponding eigenvalue is the Fourier coefficient of $\mu$ at frequency $\xi$, namely $\hat{\mu}(\xi) = \sum_{g \in \mathscr{G}} \mu(g) e(\xi \cdot g)$.
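This diagonalization is easy to verify on a small example; the sketch below (illustrative only, using the cyclic group $\mathbb{Z}/n\mathbb{Z}$ rather than $\mathscr{G}_m$) checks that each character is an eigenfunction of the transition kernel with eigenvalue $\hat{\mu}(\xi)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
mu = rng.random(n)
mu /= mu.sum()                          # a driving measure on the cyclic group Z/n

# Transition matrix of the random walk g -> g + X with X ~ mu.
P = np.array([[mu[(h - g) % n] for h in range(n)] for g in range(n)])

for xi in range(n):
    f = np.exp(2j * np.pi * xi * np.arange(n) / n)   # character f_xi(g) = e(xi g / n)
    muhat = mu @ f                                    # Fourier coefficient at xi
    assert np.allclose(P @ f, muhat * f)              # f_xi is an eigenfunction
```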
The sandpile chain on $\mathscr{G}_m$ is driven by the measure
\begin{equation}
\mu := \frac{1}{m^2}\left(\delta_0 + \sum_{x \in \mathbb{T}_m \setminus \{(0,0)\}}\delta_{\mathbf{e}_x}
\right)
\end{equation}
where, technically, $\mathbf{e}_x$ refers to the equivalence class $\mathbf{e}_x + \Delta' \mathbb{Z}^{\mathbb{T}_m \setminus \{(0,0)\}} \in \mathscr{G}_m$, and $0 \in \mathscr{G}_m$ is the identity. The dual group of $\mathscr{G}_m$ is
\begin{equation}
\label{hat_G_m}
\hat{\mathscr{G}}_m =
(\Delta')^{-1}\mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}}/\mathbb{Z}^{\mathbb{T}_m\setminus\{(0,0)\}}.
\end{equation}
This can be seen by dualizing \eqref{G_m}; a bare-hands proof is given in Section 3 of \cite{JLP15}. To define the meaning of $\xi \cdot g$ in this setting, we can view each frequency $\xi \in \hat{\mathscr{G}}_m$ as a function from $\mathbb{T}_m \setminus \{(0,0)\}$ to $\mathbb{R} / \mathbb{Z}$, and each group element $g \in \mathscr{G}_m$ as an equivalence class $\sigma + \Delta' \mathbb{Z}^{\mathbb{T}_m \setminus \{(0,0)\}}$, where $\sigma \in \mathbb{Z}^{\mathbb{T}_m \setminus \{(0,0)\}}$. Then, $\xi \cdot g = \sum_{x \in \mathbb{T}_m \setminus \{(0,0)\}} \xi_x \sigma_x \in \mathbb{R} / \mathbb{Z}$, whose value does not depend on the choice of the representative $\sigma$ in the equivalence class. The eigenvalue corresponding to $\xi$ is
\begin{equation}\label{def_fourier_coefficient}
\hat{\mu}(\xi) = \frac{1}{m^2}\left(1 + \sum_{x \in \mathbb{T}_m \setminus \{(0,0)\}}e(\xi_x) \right).
\end{equation}
Given $\xi: \mathbb{T}_m \setminus \{(0,0)\} \to \mathbb{R} / \mathbb{Z}$, which may or may not be in $\hat{\mathscr{G}}_m$, set $v = \Delta' \xi$ (which is also $\mathbb{R} / \mathbb{Z}$-valued). Extend $\xi$ to the domain $\mathbb{T}_m$ by setting $\xi(0,0) = 0$. Then $\Delta \xi(x) = v(x)$ for all $x \in \mathbb{T}_m \setminus \{(0,0)\}$, and since the columns of $\Delta$ all sum to zero, $\Delta \xi(0,0) = -\sum_{x \neq (0,0)} v(x)$. From \eqref{hat_G_m}, $\xi \in \hat{\mathscr{G}}_m$ if and only if $v \equiv 0$, which holds if and only if $\Delta \xi \equiv 0$. This justifies the description of $\hat{\mathscr{G}}_m$ in Section \ref{Discussion of method} as the additive group of functions $\xi : \mathbb{T}_m \to \mathbb{R} / \mathbb{Z}$ such that $\xi(0,0) = 0$ and $\Delta \xi \equiv 0$ in $\mathbb{R} / \mathbb{Z}$. From this point forward, when we refer to a frequency $\xi \in \hat{\mathscr{G}}_m$, we mean a function that meets these conditions.
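The description of $\hat{\mathscr{G}}_m$ just given can be tested numerically: starting from an integer vector $w$, the function $\xi = (\Delta')^{-1} w \bmod 1$, extended by $\xi(0,0) = 0$, should satisfy $\Delta \xi \equiv 0$ in $\mathbb{R}/\mathbb{Z}$ at every site, the sink included. A Python sketch (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
sites = [(i, j) for i in range(m) for j in range(m)]
idx = {p: k for k, p in enumerate(sites)}

# Full torus Laplacian (4 - adjacency) as an m^2 x m^2 matrix; site (0,0) first.
L = np.zeros((m * m, m * m))
for (i, j), k in idx.items():
    L[k, k] = 4
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        L[k, idx[((i + di) % m, (j + dj) % m)]] -= 1

# Reduced Laplacian: delete the row and column of the sink (0,0).
Lred = np.delete(np.delete(L, 0, axis=0), 0, axis=1)

# A frequency xi in hat(G)_m: xi = (Delta')^{-1} w mod 1 for an integer vector w,
# extended by xi(0,0) = 0.
w = rng.integers(-3, 4, size=m * m - 1)
xi = np.zeros(m * m)
xi[1:] = np.linalg.solve(Lred, w) % 1.0

# Then Delta xi is integer-valued at every site of the torus, sink included.
assert np.allclose(L @ xi - np.round(L @ xi), 0, atol=1e-8)
```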
In \cite{JLP15}, $\hat{\mathscr{G}}_m$ was identified with the group of `multiplicative harmonic functions,' which in the present setting are the maps from $\mathbb{T}_m$ to $\mathbb{C}^*$ given by $x \mapsto e(\xi_x)$.
Abusing notation slightly, define for any $\mathbb{R}$-valued or $\mathbb{R} / \mathbb{Z}$-valued function $\xi$ on $\mathbb{T}_m$,
\begin{equation}\hat{\mu}(\xi) := \mathbf{E}_{x \in \mathbb{T}_m}\left[e(\xi_x)\right].\end{equation}
When in fact $\xi \in \hat{\mathscr{G}}_m$, this definition agrees with \eqref{def_fourier_coefficient}.
\subsection{Representations for frequencies}
\label{Representations_for_frequencies}
We use a concrete description of
the frequencies in terms of the Green's function, which
induces an approximate partial ordering on the frequencies.
To describe this, given $\xi \in \hat{\mathscr{G}}_m$ recall that a `prevector' for $\xi$ is any integer-valued vector $\Delta \xi'$, where $\xi' : \mathbb{T}_m \to \mathbb{R}$ reduces mod $\mathbb{Z}$ to $\xi$. We choose a particular representative $\xi' : \mathbb{T}_m \to (-1,1)$ by letting
\begin{equation}
C(\xi) = \frac{1}{2\pi} \arg \left( \hat{\mu}(\xi) \right) \in
\textstyle{\left[ -\frac{1}{2}, \frac{1}{2} \right)}
\end{equation}
and choosing each $\xi_x' \in
\left(C(\xi) - \frac{1}{2}, C(\xi) + \frac{1}{2}\right]$. The `distinguished prevector' of $\xi$ is then given by
\begin{equation}
v = v(\xi) := \Delta \xi'.
\end{equation}
Note that $v : \mathbb{T}_m \to \mathbb{Z}$ has mean zero and satisfies $\|v\|_{\ell^\infty} \leq 3$.
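The construction of the distinguished prevector, and the bounds just stated, can be checked numerically. The sketch below (illustrative only; the representative is taken in the half-open interval $[C - \frac12, C + \frac12)$, which changes nothing for generic $\xi$) verifies that $v(\xi)$ is integer-valued, has mean zero, and satisfies $\|v\|_{\ell^\infty} \leq 3$:

```python
import numpy as np

rng = np.random.default_rng(7)
m = 6
sites = [(i, j) for i in range(m) for j in range(m)]
idx = {p: k for k, p in enumerate(sites)}

# Full torus Laplacian (4 - adjacency), with site (0,0) first, and its reduction.
L = np.zeros((m * m, m * m))
for (i, j), k in idx.items():
    L[k, k] = 4
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        L[k, idx[((i + di) % m, (j + dj) % m)]] -= 1
Lred = np.delete(np.delete(L, 0, axis=0), 0, axis=1)

# A random frequency xi in hat(G)_m, with xi(0,0) = 0.
w = rng.integers(-3, 4, size=m * m - 1)
xi = np.zeros(m * m)
xi[1:] = np.linalg.solve(Lred, w) % 1.0

# C(xi) = (1/2pi) arg(mu^(xi)), and the representative xi' with values
# near the interval (C - 1/2, C + 1/2].
C = np.angle(np.mean(np.exp(2j * np.pi * xi))) / (2 * np.pi)
xi_prime = C - 0.5 + ((xi - C + 0.5) % 1.0)

# The distinguished prevector v = Delta xi'.
v = L @ xi_prime
assert np.allclose(v, np.round(v), atol=1e-8)   # integer-valued
v = np.round(v).astype(int)
assert v.sum() == 0                             # mean zero
assert np.max(np.abs(v)) <= 3                   # sup-norm at most 3
```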
\begin{lemma}\label{l_2_fourier_lemma}
For every $\xi \in \hat{\mathscr{G}}_m$, the distinguished prevector of $\xi$ satisfies
\begin{equation}
1 - \left|\hat{\mu}(\xi)\right| \gg \frac{\|v(\xi)\|_2^2}{m^2} \geq
\frac{\|v(\xi)\|_1}{m^2}.
\end{equation}
\end{lemma}
\begin{proof}
Choose $\xi'$ as above, and define $\xi^* : \mathbb{T}_m \to \left(-\frac{1}{2}, \frac{1}{2}\right]$ by
\begin{equation}
\xi^*_x = \xi'_x - C(\xi),
\end{equation}
so that $\Delta \xi^* =
\Delta \xi' = v$ and
\begin{equation}
0 \leq |\hat{\mu}(\xi)| = \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} e\left( \xi^*_x
\right) = \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} c\left( \xi^*_x \right).
\end{equation}
Using the bound $1 - c(t) \gg t^2$, which holds uniformly for $|t| \leq \frac{1}{2}$, yields
\begin{equation}
\label{xi^*_bound}
1 - |\hat{\mu}(\xi)| \gg \frac{\|\xi^*\|_2^2}{m^2}.
\end{equation}
Since $\Delta$ is bounded from $L^2(\mathbb{T}_m) \to
L^2(\mathbb{T}_m)$,
\begin{equation}
\frac{\|v\|_2^2}{m^2} = \frac{\|\Delta \xi^*\|_2^2}{m^2} \ll
\frac{\|\xi^*\|_2^2}{m^2} \ll 1 - |\hat{\mu}(\xi)|
\end{equation}
as desired. Finally, $\|v\|_2^2 \geq \|v\|_1$ since $v$ is integer-valued.
\end{proof}
To go in the reverse direction, for any $v \in \mathbb{Z}_0^{\mathbb{T}_m}$ define $\overline{\xi} = G_{\mathbb{T}_m} * v$, so that $\Delta \overline{\xi} = v$. Let $\xi''_x = \overline{\xi}_x - \overline{\xi}_{(0,0)}$, and set $\xi = \xi(v)$ to be the reduction mod $\mathbb{Z}$ of $\xi''$. Since $\xi''_{(0,0)} = 0$ and $\Delta \xi'' = v$, which is $\mathbb{Z}$-valued, it follows that $\xi \in \hat{\mathscr{G}}_m$.
If $\xi_0 \in \hat{\mathscr{G}}_m$ and $v = \Delta \xi'$ is any prevector of $\xi_0$, then $\xi(v) = \xi_0$; this is because $\Delta(\xi' - \xi'') \equiv 0$, so $\xi' - \xi'' \equiv c$ for some $c \in \mathbb{R}$, and in fact $c = \xi'_{(0,0)} - \xi''_{(0,0)} \in \mathbb{Z}$. Also, if $v_0 \in \mathbb{Z}_0^{\mathbb{T}_m}$ and $v$ is any prevector of $\xi(v_0)$, then $v_0 - v \in \Delta \mathbb{Z}^{\mathbb{T}_m}$.
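The two constructions above (prevector $\to$ frequency via $G_{\mathbb{T}_m}$, and frequency $\to$ distinguished prevector via $C(\xi)$) can be checked numerically. The following sketch is ours: it assumes the 5-point graph Laplacian $\Delta f(x) = 4f(x) - \sum_{y \sim x} f(y)$, computes $G_{\mathbb{T}_m} * v$ by FFT, and uses illustrative function names.

```python
import numpy as np

def green_torus(v):
    """Mean-zero solution xi of Delta(xi) = v on T_m (i.e. G_{T_m} * v),
    via the Fourier symbol 4 - 2cos(2 pi k1/m) - 2cos(2 pi k2/m).
    Assumes v has mean zero."""
    m = v.shape[0]
    c = np.cos(2 * np.pi * np.arange(m) / m)
    lam = 4 - 2 * c[:, None] - 2 * c[None, :]
    lam[0, 0] = 1.0                    # placeholder; the zero mode is set below
    xihat = np.fft.fft2(v) / lam
    xihat[0, 0] = 0.0                  # mean-zero normalization
    return np.real(np.fft.ifft2(xihat))

def laplacian(f):
    """5-point graph Laplacian on the torus."""
    return 4 * f - sum(np.roll(f, s, a) for s in (1, -1) for a in (0, 1))

def frequency(v0):
    """xi(v0): reduce G_{T_m} * v0, shifted to vanish at (0,0), mod Z."""
    xb = green_torus(v0)
    return (xb - xb[0, 0]) % 1.0

def distinguished_prevector(xi):
    """v(xi) = Delta(xi') for the lift xi' into (C(xi) - 1/2, C(xi) + 1/2]."""
    mu = np.mean(np.exp(2j * np.pi * xi))
    C = np.angle(mu) / (2 * np.pi)     # C(xi) lies in [-1/2, 1/2)
    xip = xi - np.ceil(xi - C - 0.5)   # lift each value into (C - 1/2, C + 1/2]
    return laplacian(xip)

m = 16
v0 = np.zeros((m, m))
v0[2, 3], v0[7, 9] = 1, -1             # a mean-zero integer vector
xi = frequency(v0)
v_raw = distinguished_prevector(xi)
v = np.round(v_raw)
```

On this input the recovered distinguished prevector is integer-valued, mean zero, bounded by $3$ in absolute value, and reproduces the same frequency, matching the discussion above.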
\begin{lemma}
\label{xi_bar}
Given $\xi \in \hat{\mathscr{G}}_m$, let $v$ be any prevector of $\xi$ and let $\overline{\xi} = G_{\mathbb{T}_m} * v$. Then $|\hat{\mu}(\xi)| = |\hat{\mu}(\overline{\xi})|$. If $v$ is the distinguished prevector of $\xi$, then in addition
\begin{equation}
\label{asymp_gap}
1 - |\hat{\mu}(\xi)| \asymp \frac{\|\overline{\xi}\|_2^2}{m^2}.
\end{equation}
\end{lemma}
Equation \eqref{asymp_gap} is equivalent to Theorem 3.8 in \cite{JLP15}, and the argument below is the same as the proof given there.
\begin{proof}
Let $v = \Delta \xi'$, where $\xi': \mathbb{T}_m \to \mathbb{R}$ reduces mod $\mathbb{Z}$ to $\xi$. Then $\overline{\xi} = G_{\mathbb{T}_m} * \Delta \xi' = \xi' - c$ where $c = \mathbf{E}_{x \in \mathbb{T}_m}[\xi']$, so
\begin{equation}
\hat{\mu}(\overline{\xi}) = \mathbf{E}_{x \in \mathbb{T}_m}[e(\xi'_x - c)] = e(-c) \hat{\mu}(\xi') = e(-c) \hat{\mu}(\xi)
\end{equation}
and therefore $|\hat{\mu}(\overline{\xi})| = |\hat{\mu}(\xi)|$.
To prove the upper bound in \eqref{asymp_gap},
\begin{align}
1 - |\hat{\mu}(\xi)| &= 1 - |\hat{\mu}(\overline{\xi})| \leq 1 - \operatorname{Re} \hat{\mu}(\overline{\xi}) = \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} \left[ 1 - c(\overline{\xi}_x) \right] \\
\notag &\ll \frac{1}{m^2} \sum_{x \in \mathbb{T}_m} |\overline{\xi}_x|^2 = \frac{\|\overline{\xi}\|_2^2}{m^2}.
\end{align}
For the lower bound, define $\xi^*$ as in the proof of Lemma \ref{l_2_fourier_lemma} and observe that $\overline{\xi} = G_{\mathbb{T}_m} * \Delta \xi^* = \xi^* - \mathbf{E}_{x \in \mathbb{T}_m}[\xi^*]$ is the orthogonal projection of $\xi^*$ onto $L_0^2(\mathbb{T}_m)$. Thus $\|\overline{\xi}\|_2^2 \leq \|\xi^*\|_2^2$, and the result follows from \eqref{xi^*_bound}.
\end{proof}
\section{Spectral estimates}\label{spectral_gap_section}
This section reduces the determination of the spectral gap to a finite check, and provides additive savings estimates for separated spectral components. Lemma \ref{l_2_fourier_lemma} implies that each nonzero frequency $\xi \in \hat{\mathscr{G}}_m$ satisfies $1 - |\hat{\mu}(\xi)| \gg 1/m^2$, and if $1 - |\hat{\mu}(\xi)| \leq c/m^2$, then the $L^1$ norm of the distinguished prevector $v(\xi)$ must be bounded by a constant depending only on $c$. Section \ref{Determination_of_gap} develops tools to deal with prevectors that have bounded $L^1$ norm, providing control over those frequencies that achieve the spectral gap or approach it to within a constant factor. This proves Theorem \ref{spectral_gap_theorem} and does most of the work for the lower bound in Theorem \ref{mixing_time_theorem}.
Section \ref{small_phase_section} extends the analysis to prevectors whose $L^1$ norm increases with $m$, but which are sparse enough that their supports can be partitioned into widely separated clusters. This provides the main ingredient for the upper bound in Theorem \ref{mixing_time_theorem}. As we will show in Section \ref{proof_mixing_theorem_section}, if $\xi$ is a frequency for which $v(\xi)$ is not sparse, then the lower bound on $1 - |\hat{\mu}(\xi)|$ from Lemma \ref{l_2_fourier_lemma} shows that the contribution of $\xi$ is negligible when computing the mixing time.
To fix ideas, given $\xi \in \hat{\mathscr{G}}_m$ recall that $\hat{\mu}(\xi) = \mathbf{E}_{x \in \mathbb{T}_m}\left[e(\xi_x) \right]$. For any subset $S \subset \mathbb{T}_m$, it is evident that
\begin{equation}
\left| \sum_{x \in S} e(\xi_x) \right| \leq |S|.
\end{equation}
The `savings from $S$' for the frequency $\xi$, denoted by $\operatorname{sav}(\xi;S)$, is the amount by which the left side falls short of this upper bound:
\begin{equation}
\label{savings_def}
\operatorname{sav}(\xi; S) := |S| - \left|\sum_{x \in S}e(\xi_x)\right|.
\end{equation}
By the triangle inequality, if $S_1,S_2 \subset \mathbb{T}_m$ are disjoint then
\begin{equation}
\operatorname{sav}(\xi; S_1) + \operatorname{sav}(\xi; S_2) \leq \operatorname{sav}(\xi; S_1 \cup S_2).
\end{equation}
The `total savings' for $\xi$ is defined by
\begin{equation}
\label{savings_def_2}
\operatorname{sav}(\xi) := \operatorname{sav}(\xi; \mathbb{T}_m) = m^2 - \left|\sum_{x \in \mathbb{T}_m} e(\xi_x) \right|
\end{equation}
and satisfies
\begin{equation}
1 - \left|\hat{\mu}(\xi)\right| = \frac{\operatorname{sav}(\xi)}{m^2}.
\end{equation}
The notion of savings is well-suited for proving lower bounds on the gap $1 - |\hat{\mu}(\xi)|$. Specifically, if $S_1,\ldots,S_k$ are disjoint subsets of $\mathbb{T}_m$ then
\begin{equation}
1 - |\hat{\mu}(\xi)| \geq \frac{1}{m^2} \sum_{i=1}^k \operatorname{sav}(\xi; S_i).
\end{equation}
The spectral gap of the sandpile Markov chain is
\begin{equation}
\mathrm{gap}_m = \min_{0 \neq \xi \in \hat{\mathscr{G}}_m} \frac{\operatorname{sav}(\xi)}{m^2}.
\end{equation}
Observe that if $v$ is the distinguished prevector of $\xi \in \hat{\mathscr{G}}_m$, then Lemma \ref{l_2_fourier_lemma} gives $\operatorname{sav}(\xi)\gg \|v\|_1$. Also, given a set $S \subset \mathbb{T}_m$ and a function $w$ on $\mathbb{T}_m$, write $w|_S$ for the function which is equal to $w$ on $S$ and 0 on $S^c$.
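The savings bookkeeping above is elementary to implement. The following numpy sketch (function names ours) computes $\operatorname{sav}(\xi; S)$ and illustrates the superadditivity over disjoint sets together with the identity $1 - |\hat{\mu}(\xi)| = \operatorname{sav}(\xi)/m^2$ on arbitrary data.

```python
import numpy as np

def sav(xi, S=None):
    """sav(xi; S) = |S| - |sum_{x in S} e(xi_x)| for a boolean mask S on T_m;
    S=None gives the total savings sav(xi)."""
    e = np.exp(2j * np.pi * xi)
    if S is None:
        return e.size - abs(e.sum())
    return S.sum() - abs(e[S].sum())

rng = np.random.default_rng(0)
m = 32
xi = rng.random((m, m))            # an arbitrary R/Z-valued function on T_m
S1 = np.zeros((m, m), dtype=bool)
S1[:m // 2] = True
S2 = ~S1                           # disjoint complement of S1
mu = np.mean(np.exp(2j * np.pi * xi))
```

By the triangle inequality, $\operatorname{sav}(\xi; S_1) + \operatorname{sav}(\xi; S_2) \leq \operatorname{sav}(\xi)$ holds exactly (up to floating-point error) for any disjoint $S_1, S_2$.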
\subsection{Determination of spectral gap up to finite check}
\label{Determination_of_gap}
Given constants $B,R > 0$, define the finite set
\begin{equation}
{\mathscr{C}}(B,R) := \{v \in C^2(B_R(0)) : \|v\|_1 \leq B \}.
\end{equation}
Here $B_R(0)$ is the $\ell^1$ ball of radius $R$ about $0$ in $\mathbb{Z}^2$, and $C^2(\cdot)$ is given by \eqref{C2}. Since $B_R(0)$ embeds into $\mathbb{T}_m$ for each $m > 2R$, we can view each $v \in {\mathscr{C}}(B,R)$ as an element either of $C^2(\mathbb{Z}^2)$ or of $C^2(\mathbb{T}_m)$ by setting $v \equiv 0$ outside $B_R(0)$.
For any $\mathbb{R}$- or $\mathbb{R} / \mathbb{Z}$-valued function $\xi$ on $\mathbb{Z}^2$, define the functional
\begin{equation}
\label{f_define}
f(\xi) := \sum_{x \in \mathbb{Z}^2} (1 - c(\xi_x)).
\end{equation}
We will see that this is the appropriate analogue to savings for functions on $\mathbb{Z}^2$. If $v \in C^2(\mathbb{Z}^2)$, then $f(G_{\mathbb{Z}^2} * v) < \infty$ by the bound $1 - c(t) \ll t^2$ combined with Lemma \ref{z_2_approx_lemma} or Lemma \ref{greens_function_derivs}. For such $v$, $f(G_{\mathbb{Z}^2} * v) = 0$ if and only if $G_{\mathbb{Z}^2} * v$ is $\mathbb{Z}$-valued. Since $G_{\mathbb{Z}^2} * v \in \ell^2(\mathbb{Z}^2)$, if it is $\mathbb{Z}$-valued then it must be finitely supported, and in addition we have $\Delta (G_{\mathbb{Z}^2} * v) = v$. Thus, $f(G_{\mathbb{Z}^2} * v) = 0$ precisely for those $v$ in the subset
\begin{equation}
{\mathcal{I}} := \{\Delta w : w \in C^0(\mathbb{Z}^2)\} \subset C^2(\mathbb{Z}^2).
\end{equation}
If $v,v' \in C^2(\mathbb{Z}^2)$ and $v - v' \in {\mathcal{I}}$, then $f(G_{\mathbb{Z}^2} * v) = f(G_{\mathbb{Z}^2} * v')$.
Set
\begin{equation}
\label{def_gamma}
\gamma := \inf\left\{ f(G_{\mathbb{Z}^2} * v) : v \in C^2(\mathbb{Z}^2) \,\setminus\, {\mathcal{I}} \right\}.
\end{equation}
The following are the main results of this section. Together with the computation in Appendix \ref{spectral_gap_appendix}, they lead to a quick proof of Theorem \ref{spectral_gap_theorem}.
\begin{proposition}
\label{gap_achievers}
We have $\gamma > 0$, and there exist constants $B_0,R_0 > 0$ such that:
\begin{enumerate}[label=\arabic*.]
\item For sufficiently large $m$, any $\xi \in \hat{\mathscr{G}}_m$ that achieves the spectral gap, $\operatorname{sav}(\xi) = m^2 \mathrm{gap}_m$, has a prevector $v$ which is a translate of some $v' \in {\mathscr{C}}(B_0,R_0) \subset C^2(\mathbb{T}_m)$.
\item For any $v \in C^2(\mathbb{Z}^2)$ satisfying $f(G_{\mathbb{Z}^2} * v) < \frac{3}{2}\gamma$, there exists $v' \in {\mathscr{C}}(B_0,R_0) \subset C^2(\mathbb{Z}^2)$ such that a translate of $v'$ differs from $v$ by an element of ${\mathcal{I}}$. In particular, $f(G_{\mathbb{Z}^2} * v) = f(G_{\mathbb{Z}^2} * v')$.
\end{enumerate}
\end{proposition}
\begin{proposition}
\label{T_m_to_Z^2}
Fix $B,R_1 > 0$. For any $v \in {\mathscr{C}}(B,R_1)$ and $m > 2R_1$, let $\xi^{(m)} = \xi^{(m)}(v)$ be the frequency in $\hat{\mathscr{G}}_m$ corresponding to $v$, namely
\begin{equation}
\xi^{(m)}_x = (G_{\mathbb{T}_m} * v)(x) - (G_{\mathbb{T}_m} * v)(0,0) \quad \textnormal{(reduced mod $\mathbb{Z}$),}
\end{equation}
and let $\xi = \xi(v) = G_{\mathbb{Z}^2} * v$. Then
\begin{equation}
\operatorname{sav}(\xi^{(m)}) \to f(\xi) \quad \text{as } m \to \infty.
\end{equation}
\end{proposition}
Part 2 of Proposition \ref{gap_achievers} implies that
\begin{equation}
\label{gamma_2}
\gamma = \min\left\{ f(G_{\mathbb{Z}^2} * v) : v \in {\mathscr{C}}(B_0,R_0) \,\setminus\, {\mathcal{I}} \right\},
\end{equation}
which reduces the computation of $\gamma$ to a finite check. In Appendix \ref{spectral_gap_appendix} we verify that $\gamma$ is attained at $\xi = G_{\mathbb{Z}^2} * \delta_1 *\delta_2$, with numerical value
\begin{equation}
\label{gamma_value}
\gamma = 2.868114013(4).
\end{equation}
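The finite minimization itself is carried out in the appendix, but the value \eqref{gamma_value} can be approximated directly. The sketch below is ours; it assumes $\delta_i = \delta_{(0,0)} - \delta_{e_i}$ and the 5-point Laplacian normalization, and it uses the mean-zero torus solution as a proxy for $G_{\mathbb{Z}^2} * v$, in the spirit of Proposition \ref{T_m_to_Z^2}.

```python
import numpy as np

def f_torus(v):
    """Torus proxy for f(G_{Z^2} * v): sum over T_m of 1 - cos(2 pi xi_x),
    where xi = G_{T_m} * v is the mean-zero FFT solution of Delta(xi) = v."""
    m = v.shape[0]
    c = np.cos(2 * np.pi * np.arange(m) / m)
    lam = 4 - 2 * c[:, None] - 2 * c[None, :]
    lam[0, 0] = 1.0
    xihat = np.fft.fft2(v) / lam
    xihat[0, 0] = 0.0
    xi = np.real(np.fft.ifft2(xihat))
    return float(np.sum(1 - np.cos(2 * np.pi * xi)))

m = 256
v = np.zeros((m, m))
v[0, 0], v[1, 0], v[0, 1], v[1, 1] = 1, -1, -1, 1   # delta_1 * delta_2
val = f_torus(v)   # expected to approach gamma = 2.8681... as m grows,
                   # under the normalization assumptions stated above
```

Since the frequency $G_{\mathbb{Z}^2} * \delta_1 * \delta_2$ decays quadratically, the sum converges and the periodization error vanishes as $m \to \infty$.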
\begin{proof}[Proof of Theorem \ref{spectral_gap_theorem}]
In this proof we use the notation $\xi(v) = G_{\mathbb{Z}^2} * v$. Take the constants $B_0,R_0$ from Proposition \ref{gap_achievers} and find $\gamma' > \gamma$ such that if $v \in {\mathscr{C}}(B_0,R_0)$ and $f(\xi(v)) > \gamma$, then $f(\xi(v)) \geq \gamma'$. Applying Proposition \ref{T_m_to_Z^2} with $B = B_0$ and $R_1 = R_0$, choose $m$ large enough that
\begin{equation}
g(m) := \sup_{v \in {\mathscr{C}}(B_0,R_0)} |\operatorname{sav}(\xi^{(m)}(v)) - f(\xi(v))| < \frac{\gamma' - \gamma}{2}.
\end{equation}
Let $v_0 \in {\mathscr{C}}(B_0,R_0)$ satisfy $f(\xi(v_0)) = \gamma$, and let $\xi^{(m)}_0 = \xi^{(m)}(v_0) \in \hat{\mathscr{G}}_m$. Then
\begin{equation}
\operatorname{sav}(\xi^{(m)}_0) < \frac{\gamma + \gamma'}{2}.
\end{equation}
Now suppose that $\xi^{(m)} \in \hat{\mathscr{G}}_m$ achieves the spectral gap. By translating, we may assume that $\xi^{(m)}$ has a prevector $v \in {\mathscr{C}}(B_0,R_0)$. We claim that $f(\xi(v)) = \gamma$: if not, then $f(\xi(v)) \geq \gamma'$ and
\begin{equation}
\operatorname{sav}(\xi^{(m)}) > \frac{\gamma + \gamma'}{2} > \operatorname{sav}(\xi^{(m)}_0),
\end{equation}
a contradiction. Thus $f(\xi(v)) = \gamma$, and
\begin{equation}
|m^2 \mathrm{gap}_m - \gamma| = |\operatorname{sav}(\xi^{(m)}) - f(\xi(v))| \leq g(m),
\end{equation}
with $g(m) \to 0$ as $m \to \infty$. Along with the formula \eqref{gamma_value} for $\gamma$, which is proved in Appendix \ref{spectral_gap_appendix}, this concludes the proof.
\end{proof}
In the course of proving Propositions \ref{gap_achievers} and \ref{T_m_to_Z^2}, we establish two lemmas, Lemmas \ref{C_0_1_lemma} and \ref{C_2_lemma}, concerning savings in a neighborhood of the support of a prevector $v \in \mathbb{Z}_0^{\mathbb{T}_m}$ of bounded $L^1$ norm. Note that if $\xi \in \hat{\mathscr{G}}_m$ is the frequency corresponding to $v$ and $\overline{\xi} = G_{\mathbb{T}_m} * v$, then for any $S \subset \mathbb{T}_m$, the proof of Lemma \ref{xi_bar} implies that
\begin{equation}
\left| \sum_{x \in S} e(\xi_x) \right| = \left| \sum_{x \in S} e(\overline{\xi}_x) \right|,
\end{equation}
so all savings computations can be done using $\overline{\xi}$. Indeed, if we extend the definitions \eqref{savings_def}, \eqref{savings_def_2} from elements of $\hat{\mathscr{G}}_m$ to all $\mathbb{R}$- or $\mathbb{R}/\mathbb{Z}$-valued functions on $\mathbb{T}_m$, then $\operatorname{sav}(\xi;S) = \operatorname{sav}(\overline{\xi};S)$ and $\operatorname{sav}(\xi) = \operatorname{sav}(\overline{\xi})$.
\begin{lemma}\label{C_0_1_lemma}
For all $A,B,R_1 > 0$ there exists an $R_2(A, B, R_1)>2R_1$ such that if $m$ is sufficiently large, then for any $x \in \mathbb{T}_m$ and any $v \in \mathbb{Z}^{\mathbb{T}_m}$ satisfying the following conditions:
\begin{enumerate}
\item $\|v\|_1 \leq B$
\item $v|_{B_{R_1}(x)} \not \in C^2(\mathbb{T}_m)$
\item $d\left(x, \operatorname{supp} v|_{B_{R_1}(x)^c} \right)> 2 R_2$
\end{enumerate}
we have
\begin{equation}
\operatorname{sav}\left( G_{\mathbb{T}_m} * v; B_{R_2}(x) \right) \geq A.
\end{equation}
Thus, if $v$ has mean zero, then the corresponding frequency $\xi \in \hat{\mathscr{G}}_m$ satisfies $\operatorname{sav}\left(\xi; B_{R_2}(x)\right) \geq A$.
\end{lemma}
\begin{proof}
Given $v \in \mathbb{Z}^{\mathbb{T}_m}$, decompose $\overline{\xi} = G_{\mathbb{T}_m} * v$ into an internal and external component, $\overline{\xi} = \xi^i + \xi^e$, setting
\begin{equation}
\xi^i := G_{\mathbb{T}_m} * v|_{B_{R_1}(x)}, \qquad \xi^e := G_{\mathbb{T}_m}*v|_{B_{R_1}(x)^c}.
\end{equation}
Treat $R_2$ as a parameter growing to infinity, and let $R$ be a second parameter growing with $R_2$ such that $\frac{R_2}{R^3} \to \infty$ as $R_2 \to \infty$. In practice these parameters are chosen large but fixed, so that the estimates are uniform over all sufficiently large $m$. Since $|D_1 G_{\mathbb{T}_m}(y)|$ and $|D_2 G_{\mathbb{T}_m}(y)|$ have size $\ll 1/\|y\|_1$ as $\|y\|_1 \to\infty$, we have $\xi^e_{x +y} = \xi^e_x + O(\frac{BR}{R_2})$ for all $\|y\|_1 \leq R$. Hence, by Taylor expansion,
\begin{equation}\label{external_eliminated}
\left|\sum_{\|y\|_1 \leq R}e(\overline{\xi}_{x+y})\right| = O\left(\frac{BR^3}{R_2} \right)+ \left|\sum_{\|y\|_1 \leq R} e(\xi^i_{x+y})\right|.
\end{equation}
Since the error tends to 0 as $R_2 \to \infty$, it suffices to prove that
\begin{equation}
\#\left\{y:\|y\|_1 \leq R \right\} - \left|\sum_{\|y\|_1 \leq R}e\left(\xi_{x+y}^i\right)\right| \to \infty \qquad \text{as } R \to \infty.
\end{equation}
First suppose that $v|_{B_{R_1}(x)} \not \in C^1\left(\mathbb{T}_m\right)$. For all $y = (y_1,y_2) \in \mathbb{T}_m$ with $|y_1|,|y_2| \leq m/2$,
\begin{equation}
\xi_{x+y}^i = \sum_{\|z\|_1 \leq R_1} G_{\mathbb{T}_m}(y-z) v(x+z).
\end{equation}
Let $r = \sqrt{y_1^2 + y_2^2}$. Using the bound on the first derivatives of $G_{\mathbb{T}_m}$ from Lemma \ref{greens_function_estimate} to approximate $G_{\mathbb{T}_m}(y - z)$ by $G_{\mathbb{T}_m}(y)$ yields
\begin{equation}
\xi_{x+y}^i = a G_{\mathbb{T}_m}(y) + O_{B,R_1}\left( r^{-1} \right),
\end{equation}
where $a = \sum_{\|z\|_1 \leq R_1} v(x+z) \neq 0$. The asymptotic for the first derivative of the Green's function in Lemma \ref{green_function_differentiated_asymptotic} now implies that $|\xi_{x + (j,0)}^i - \xi_x^i| \to \infty$ as $j \to \infty$, while $|\xi_{x + (j+1,0)}^i - \xi_{x + (j,0)}^i| \to 0$, so that $\{\xi^i_{x + (j,0)}\}_{j=0}^\infty$ is dense in $\mathbb{R}/\mathbb{Z}$, and hence
\begin{equation}
R - \left|\sum_{j=1}^R e(\xi_{x + (j,0)}^i - \xi_x^i)\right| \to \infty \qquad \text{as } R \to \infty,
\end{equation}
which suffices for the claim.
Now suppose that $v|_{B_{R_1}(x)} \in C^1\left(\mathbb{T}_m\right) \setminus C^2\left(\mathbb{T}_m\right)$, so that it can be written as $\delta_1 * w_1 + \delta_2 * w_2$ where $w_1,w_2$ are $\mathbb{Z}$-valued, supported on $B_{R_1+1}(x)$, and not both in $C^1(\mathbb{T}_m)$ by \eqref{C2_alt}. For all $y = (y_1,y_2)$,
\begin{equation}
\label{xi_convolution}
\xi_{x+y}^i = \sum_{\|z\|_1 \leq R_1 + 1} D_1 G_{\mathbb{T}_m}(y-z) w_1(x+z) + D_2 G_{\mathbb{T}_m}(y-z) w_2(x+z).
\end{equation}
Use the bound on the second derivatives of $G_{\mathbb{T}_m}$ from Lemma \ref{greens_function_estimate} to approximate $D_k G_{\mathbb{T}_m}(y-z)$ by $D_k G_{\mathbb{T}_m}(y)$ for $k = 1,2$. The result is
\begin{equation}
\label{xi_convolution_2}
\xi_{x+y}^i = a D_1 G_{\mathbb{T}_m}(y) + b D_2 G_{\mathbb{T}_m}(y) + O_{B,R_1}\left( r^{-2} \right)
\end{equation}
for constants $a,b = O_{B,R_1}(1)$, not both zero. Lemma \ref{green_function_differentiated_asymptotic} now shows that for $1 \leq r < m^{1/2}/(\log m)^{1/4}$,
\begin{equation}
\xi_{x+y}^i = \frac{-c_0 (ay_1 + by_2)}{y_1^2 + y_2^2} + O_{B,R_1}\left( r^{-2} \right),
\end{equation}
where $c_0 > 0$ is a fixed constant. Thus $|\xi_{x+y}^i| \ll 1/r$, and there are $0 \leq \theta_1 < \theta_2 < 2\pi$ such that if $\theta_1 \leq \arg(y) \leq \theta_2$, then $|\xi_{x+y}^i| \asymp 1/r$.
It follows that
\begin{gather}
\label{real_xi_i}
\sum_{\|y\|_1 \leq R} (1 - c(\xi_{x+y}^i)) \asymp \log R, \\
\label{imag_xi_i}
\left|\sum_{\|y\|_1 \leq R}s(\xi_{x+y}^i)\right| \leq \sum_{\|y\|_1 \leq R} |s(\xi_{x+y}^i)| \ll R.
\end{gather}
To combine \eqref{real_xi_i} and \eqref{imag_xi_i} we use that for all $a,b \in \mathbb{R}$ with $a > 0$,
\begin{equation}
\label{real_part_approx}
\sqrt{a^2+b^2} - \sqrt{a^2} = \int_{a^2}^{a^2+b^2} \frac{dt}{2\sqrt{t}} \leq \frac{b^2}{2a}.
\end{equation}
Letting $a$ and $b$ be the real and imaginary parts of $\sum_{\|y\|_1 \leq R} e(\xi_{x+y}^i)$, we conclude that
\begin{equation}
\#\left\{y:\|y\|_1 \leq R \right\} - \left|\sum_{\|y\|_1 \leq R}e\left(\xi_{x+y}^i\right)\right| \asymp \log R,
\end{equation}
as required.
\end{proof}
\begin{proof}[Proof of Proposition \ref{T_m_to_Z^2}]
Given $v \in {\mathscr{C}}(B,R_1)$, set $\xi^* = G_{\mathbb{T}_m} * v$; we suppress the dependence on $m$ for notational convenience. It will suffice to show that $\operatorname{sav}(\xi^*) \to f(\xi)$ as $m \to \infty$.
Write $v$ as a sum of $O_{B, R_1}(1)$ translates of $\pm \delta_1^{*2}$, $\pm \delta_1 * \delta_2$, $\pm \delta_2^{*2}$. Since the second derivatives of the Green's function decay like the inverse square of the radius, an argument parallel to the one given in equations \eqref{xi_convolution}-\eqref{xi_convolution_2} shows that $|\xi^*_y| = O_{B,R_1}(1/r^2)$, where $r = \sqrt{y_1^2 + y_2^2}$ and $|y_1|,|y_2| \leq m/2$. For all $R_1 < R < m/2$, Taylor expansion yields
\begin{align}
\label{Taylor_estimates}
\sum_{\|y\|_1 > R} (1 - c(\xi^*_y)) &= O_{B,R_1}\left(R^{-2}\right), \\
\notag \sum_{y \in \mathbb{T}_m} (1 - c(\xi^*_y)) &= O_{B,R_1}(1), \\
\notag \left| \sum_{y \in \mathbb{T}_m} s(\xi^*_y) \right| &= O_{B,R_1}(1).
\end{align}
In the last estimate, we use that $\xi^*$ is mean zero over $\mathbb{T}_m$ so that the contribution of the linear term in the Taylor expansion of $s(\xi^*_y)$ vanishes. Therefore, using \eqref{real_part_approx} in the first equality,
\begin{align}
\label{savings_real}
\operatorname{sav}(\xi^*) &= O_{B,R_1}\left( m^{-2} \right) + \sum_{y \in \mathbb{T}_m} (1 - c(\xi^*_y)) \\
\notag &= O_{B,R_1}\left( R^{-2} \right) + \sum_{\|y\|_1 \leq R} (1 - c(\xi^*_y)).
\end{align}
Sending $m \to \infty$ for fixed $R$, Lemma \ref{z_2_approx_lemma} shows that each $\xi^*_y \to \xi_y$. Thus
\begin{equation}
\lim_{m \to \infty} \operatorname{sav}(\xi^*) = O_{B,R_1}\left( R^{-2} \right) + \sum_{\|y\|_1 \leq R} (1 - c(\xi_y))
\end{equation}
for each $R > R_1$. Sending $R \to \infty$ completes the proof.
\end{proof}
\begin{lemma}\label{C_2_lemma}
For all $B, R_1 > 0$ and $\alpha < 1$, there exists $R_2(\alpha, B, R_1)>2R_1$ such that if $m$ is sufficiently large, then for any $x \in \mathbb{T}_m$ and any $v \in \mathbb{Z}^{\mathbb{T}_m}$ satisfying the following conditions:
\begin{enumerate}
\item $\|v\|_1 \leq B$
\item $v|_{B_{R_1}(x)} \in C^2(\mathbb{T}_m)$
\item $d\left(x, \operatorname{supp} v|_{B_{R_1}(x)^c} \right)> 2 R_2$
\end{enumerate}
we have
\begin{equation}
\label{savings_alpha}
\operatorname{sav}\left(G_{\mathbb{T}_m} * v; B_{R_2}(x)\right) \geq \alpha \operatorname{sav}(\xi^*), \qquad \text{where } \xi^*=G_{\mathbb{T}_m} * v|_{B_{R_1}(x)}.
\end{equation}
Thus, if $v$ has mean zero, then the corresponding frequency $\xi \in \hat{\mathscr{G}}_m$ satisfies $\operatorname{sav}\left(\xi; B_{R_2}(x)\right) \geq \alpha \operatorname{sav}(\xi^*)$.
\end{lemma}
\begin{proof}
First we show that there is $\delta = \delta(B,R_1) > 0$ such that for sufficiently large $m$, either $\operatorname{sav}(\xi^*) = 0$ or $\operatorname{sav}(\xi^*) \geq \delta$. Translating $v$ by $-x$ shows that $\operatorname{sav}(\xi^*) = \operatorname{sav}(G_{\mathbb{T}_m} * v')$ for some $v' \in {\mathscr{C}}(B,R_1)$. Let
\begin{equation}
\gamma' = \min\left\{ f(G_{\mathbb{Z}^2} * v') : v' \in {\mathscr{C}}(B,R_1) \,\setminus\, {\mathcal{I}} \right\},
\end{equation}
so $\gamma' > 0$. By Proposition \ref{T_m_to_Z^2}, if $m$ is large enough then
\begin{equation}
|\operatorname{sav}(G_{\mathbb{T}_m} * v') - f(G_{\mathbb{Z}^2} * v')| < \gamma' / 2 \quad \text{for all } v' \in {\mathscr{C}}(B,R_1).
\end{equation}
Thus, if $v' \in {\mathscr{C}}(B,R_1) \,\setminus\, {\mathcal{I}}$, then $\operatorname{sav}(\xi^*) > \gamma'/2$.
If $v' \in {\mathscr{C}}(B,R_1) \cap {\mathcal{I}}$, we will show that $\operatorname{sav}(\xi^*) = 0$, allowing us to take $\delta = \gamma'/2$. Write $v' = \Delta w$ where $w \in C^0(\mathbb{Z}^2)$. Observe that $\operatorname{supp}(w)$ is a finite set, and any $(i,j) \in \mathbb{Z}^2 \,\setminus\, \operatorname{supp}(w)$ that is adjacent to exactly one point in $\operatorname{supp}(w)$ must have $(\Delta w)(i,j) \neq 0$. Since $\operatorname{supp}(\Delta w) \subset B_{R_1}(0)$, it follows that $\operatorname{supp}(w) \subset B_{R_1-1}(0)$. Hence we can consider $v'$ and $w$ as $\mathbb{Z}$-valued functions on $\mathbb{T}_m$ for $m > 2R_1$, and the equation $v' = \Delta w$ still holds in this context. Therefore, $G_{\mathbb{T}_m} * v' = w - c$ where $c$ is the mean value of $w$ on $\mathbb{T}_m$, and $\operatorname{sav}(G_{\mathbb{T}_m} * v') = 0$.
With $\delta$ in hand, we turn to the proof of \eqref{savings_alpha}. Set $\epsilon = \epsilon(\alpha,B,R_1) = (1-\alpha)\delta > 0$. We will show that if $m$ is sufficiently large,
\begin{equation}
\operatorname{sav}(G_{\mathbb{T}_m} * v; B_{R_2}(x)) > \operatorname{sav}(\xi^*) - \epsilon.
\end{equation}
This implies \eqref{savings_alpha}, because if $\operatorname{sav}(\xi^*) = 0$ then \eqref{savings_alpha} is trivial, while if $\operatorname{sav}(\xi^*) \geq \delta$ then $\operatorname{sav}(\xi^*) - \epsilon \geq \alpha \operatorname{sav}(\xi^*)$. By arguing as in Lemma \ref{C_0_1_lemma} up to equation \eqref{external_eliminated}, it suffices to prove that if $R$ is fixed but sufficiently large then
\begin{equation}
\label{xi_star_desired}
\operatorname{sav}(\xi^*; B_R(x)) > \operatorname{sav}(\xi^*)-\epsilon/2
\end{equation}
for all $m$ sufficiently large. Writing $v|_{B_{R_1}(x)}$ as a sum of $O_{B, R_1}(1)$ translates of $\pm \delta_1^{*2}$, $\pm \delta_1 * \delta_2$, $\pm \delta_2^{*2}$, it follows as in the proof of Proposition \ref{T_m_to_Z^2} that for $y = (y_1,y_2) \in \mathbb{T}_m$ with $|y_1|,|y_2| \leq m/2$ and $r = \sqrt{y_1^2 + y_2^2}$, $|\xi_{x+y}^*| = O_{B,R_1}(1/r^2)$. Taylor expansion gives
\begin{align}
\label{Taylor_2}
\sum_{\|y\|_1 \leq R} (1 - c(\xi^*_{x+y})) &= O_{B,R_1}(1), \\
\notag \sum_{\|y\|_1 > R} (1 - c(\xi^*_{x+y})) &= O_{B,R_1}\left( R^{-2} \right), \\
\notag \left| \sum_{\|y\|_1 \leq R} s(\xi^*_{x+y}) \right| \leq \sum_{\|y\|_1 \leq R} \left| s(\xi^*_{x+y}) \right| &= O_{B,R_1}(\log R).
\end{align}
Thus,
\begin{align}
\label{xi_star_approx}
\operatorname{sav}(\xi^*) &= m^2 - \left| \sum_{z \in \mathbb{T}_m} e(\xi_z^*) \right| \leq \sum_{z \in \mathbb{T}_m} (1 - c(\xi_z^*)) \\
\notag &= O_{B,R_1}\left( R^{-2} \right) + \sum_{\|y\|_1 \leq R} (1 - c(\xi_{x+y}^*)) \\
\notag &= O_{B,R_1}\left( \frac{\log^2 R}{R^2} \right) + \# B_R(x) - \left| \sum_{\|y\|_1 \leq R} e(\xi_{x+y}^*) \right|,
\end{align}
using \eqref{real_part_approx} in the last equality. Since this is $\operatorname{sav}(\xi^*; B_R(x))$ plus a quantity tending to zero with $R$, \eqref{xi_star_desired} is verified.
\end{proof}
\begin{proof}[Proof of Proposition \ref{gap_achievers}]
First we find a constant $B_0$ such that:
\begin{enumerate}[label=(\Roman*)]
\item For sufficiently large $m$, if $\xi^{(m)} \in \hat{\mathscr{G}}_m$ achieves the spectral gap, then its distinguished prevector $v^{(m)} = v(\xi^{(m)})$ must satisfy $\|v^{(m)}\|_1 \leq B_0$. \label{I}
\item If $v \in C^2(\mathbb{Z}^2)$ satisfies $f(G_{\mathbb{Z}^2} * v) \leq \frac{3}{2} \gamma + 1$, then $v$ differs by an element of ${\mathcal{I}}$ from some $\tilde{v} \in C^2(\mathbb{Z}^2)$ with $\|\tilde{v}\|_1 \leq B_0$. \label{II}
\end{enumerate}
To this end, fix any $v' \in C^2(\mathbb{Z}^2) \,\setminus\, {\mathcal{I}}$ and let $\gamma' = f(G_{\mathbb{Z}^2} * v') \geq \gamma$. Choose $B',R'$ large enough that $v' \in {\mathscr{C}}(B',R')$, and let $m > 2R'$ so that ${\mathscr{C}}(B',R')$ embeds into $C^2(\mathbb{T}_m)$. Applying Proposition \ref{T_m_to_Z^2} shows that if $\xi'^{(m)}$ is the frequency in $\hat{\mathscr{G}}_m$ corresponding to $v'$, then
\begin{equation}
\operatorname{sav}(\xi'^{(m)}) \to f(G_{\mathbb{Z}^2} * v') \quad \text{as } m \to \infty
\end{equation}
and therefore $\operatorname{sav}(\xi'^{(m)}) < \gamma' + 1$ for sufficiently large $m$.
Suppose that $\xi^{(m)} \in \hat{\mathscr{G}}_m$ achieves the spectral gap, and let $v^{(m)}$ be the distinguished prevector of $\xi^{(m)}$. By Lemma \ref{l_2_fourier_lemma},
\begin{equation}
\|v^{(m)}\|_1 \ll \operatorname{sav}(\xi^{(m)}) \leq \operatorname{sav}(\xi'^{(m)}) < \gamma' + 1,
\end{equation}
that is, $\|v^{(m)}\|_1$ is bounded by a universal constant. This verifies \ref{I}.
By choosing $\gamma'$ arbitrarily close to $\gamma$, the same argument shows that for any $\epsilon > 0$, if $m$ is sufficiently large then any $\xi^{(m)} \in \hat{\mathscr{G}}_m$ achieving the spectral gap must satisfy $\operatorname{sav}(\xi^{(m)}) < \gamma + \epsilon$. This will be used later.
For \ref{II}, given $v \in C^2(\mathbb{Z}^2)$, let $\xi = G_{\mathbb{Z}^2} * v$. Lemma \ref{z_2_approx_lemma} or Lemma \ref{greens_function_derivs} shows that $\xi \in \ell^2(\mathbb{Z}^2)$, so there are only finitely many $x \in \mathbb{Z}^2$ such that $|\xi_x| \geq \frac{1}{2}$. Reduce $\xi$ to $\tilde{\xi}: \mathbb{Z}^2 \to \left[ -\frac{1}{2}, \frac{1}{2} \right)$ by subtracting $w \in C^0(\mathbb{Z}^2)$, and let $\tilde{v} = \Delta \tilde{\xi} = v - \Delta w$, which differs from $v$ by $\Delta w \in {\mathcal{I}}$ and is therefore in $C^2(\mathbb{Z}^2)$. Because $\tilde{v}$ is integer-valued and $\Delta$ is bounded from $\ell^2(\mathbb{Z}^2) \to \ell^2(\mathbb{Z}^2)$,
\begin{equation}
\label{v_tilde}
\|\tilde{v}\|_1 \leq \|\tilde{v}\|_2^2 = \|\Delta \tilde{\xi}\|_2^2 \ll \|\tilde{\xi}\|_2^2 = \sum_{x \in \mathbb{Z}^2} |\tilde{\xi}_x|^2 \ll \sum_{x \in \mathbb{Z}^2} (1 - c(\tilde{\xi}_x)),
\end{equation}
where the last inequality uses $1 - c(t) \gg t^2$ for $|t| \leq \frac{1}{2}$. The right side is $f(\tilde{\xi}) = f(G_{\mathbb{Z}^2} * v)$, so an upper bound on $f(G_{\mathbb{Z}^2} * v)$ translates to an upper bound on $\|\tilde{v}\|_1$, confirming \ref{II}.
Fix $B_0$ to satisfy \ref{I} and \ref{II}. For any $v \in \mathbb{Z}^{\mathbb{T}_m}$ with $\|v\|_1 \leq B_0$, we perform a clustering on $\operatorname{supp}(v)$, as follows.
\begin{enumerate}
\item Initially, every point of $\operatorname{supp} v$ is uncovered; initialize the list $\mathscr{X}$ of ball centers to be empty.
\item Iterate until $\operatorname{supp} v$ is covered:
\begin{enumerate}
\item Choose $x \in \operatorname{supp} v$ which is uncovered and append $x$ to $\mathscr{X}$.
\item Beginning from an initial guess $R_1(x) = 1$:
\begin{enumerate}
\item If $v|_{B_{R_1(x)}(x)} \not \in C^2(\mathbb{T}_m)$, then choose $R_2(x)$ according to Lemma \ref{C_0_1_lemma} with $A = \frac{3}{2}\gamma + 1$, $B = B_0$, and $R_1=R_1(x)$. If $v|_{B_{R_1(x)}(x)} \in C^2(\mathbb{T}_m)$, then choose $R_2(x)$ according to Lemma \ref{C_2_lemma} with $\alpha = \frac{7}{8}$, $B = B_0$, and $R_1=R_1(x)$.
\item If the condition of those lemmas holds,
\begin{equation*}
d\left(x, \operatorname{supp} v|_{B_{R_1(x)}(x)^c} \right)> 2 R_2(x),
\end{equation*}
then declare those $y \in \operatorname{supp} (v) \cap B_{R_1(x)}(x)$ covered and continue to (c). Otherwise, replace $R_1(x):= 2R_2(x)$ and repeat step (i).
\end{enumerate}
\item If all of $\operatorname{supp} v$ is covered, finish. If not, return to (a).
\end{enumerate}
\item Since $\|v\|_1 \leq B_0$, the process stops after a number of steps bounded in terms of $B_0$ alone.
\end{enumerate}
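The clustering procedure above can be expressed compactly in code. The sketch below is ours: the rule \texttt{grow} is a hypothetical placeholder for the non-constructive choice of $R_2(x)$ from Lemmas \ref{C_0_1_lemma} and \ref{C_2_lemma}, distances are $\ell^1$ distances on the torus, and the pruning step yields the sublist $\mathscr{X}'$ discussed below.

```python
def l1_torus_dist(x, y, m):
    """l^1 distance between points of the discrete torus T_m."""
    return sum(min(abs(a - b) % m, m - abs(a - b) % m) for a, b in zip(x, y))

def cluster(support, m, grow=lambda r1: 4 * r1):
    """Clustering loop from the text.  `grow` is a hypothetical stand-in for
    the choice of R_2 given R_1; returns (center, R1, R2) triples in order
    of selection."""
    support = [tuple(p) for p in support]
    covered, balls = set(), []
    for x in support:
        if x in covered:
            continue
        R1 = 1
        while True:
            R2 = grow(R1)
            outside = [y for y in support if l1_torus_dist(x, y, m) > R1]
            if all(l1_torus_dist(x, y, m) > 2 * R2 for y in outside):
                covered |= {y for y in support if l1_torus_dist(x, y, m) <= R1}
                balls.append((x, R1, R2))
                break
            R1 = 2 * R2               # enlarge and retry, as in step (ii)
    return balls

def prune(balls, m):
    """Discard any center contained in a later ball (yielding X')."""
    return [(x, R1, R2) for i, (x, R1, R2) in enumerate(balls)
            if not any(l1_torus_dist(x, xp, m) <= R1p
                       for (xp, R1p, _) in balls[i + 1:])]
```

Termination is automatic: once $R_1$ exceeds the torus diameter, the set of support points outside $B_{R_1}(x)$ is empty and the distance condition holds vacuously.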
At the end of this process,
\begin{gather}
\operatorname{supp} v \subset \bigcup_{x \in \mathscr{X}} B_{R_1(x)}(x), \\
\label{distance_condition} d\left(x, \operatorname{supp} v|_{B_{R_1(x)}(x)^c} \right)> 2 R_2(x) \quad \text{for each } x \in \mathscr{X}.
\end{gather}
We claim that a subset $\mathscr{X}' \subset \mathscr{X}$ can be chosen such that
\begin{equation}\label{disjointness_condition}
\operatorname{supp} v \subset \bigsqcup_{x \in \mathscr{X}'} B_{R_1(x)}(x).
\end{equation}
Note a particular consequence of \eqref{distance_condition} and \eqref{disjointness_condition} is that the balls \begin{equation}\{B_{R_2(x)}(x)\}_{x \in \mathscr{X}'}\end{equation} are pairwise disjoint.
To verify \eqref{disjointness_condition}, let $x$ and $x'$ be centers of balls of the process with $x$ appearing prior to $x'$ in the list. We will show that either $B_{R_1(x)}(x)$ and $B_{R_1(x')}(x')$ are disjoint, or $B_{R_1(x)}(x) \subset B_{R_1(x')}(x')$. First, since $x' \notin B_{R_1(x)}(x)$, we have $d(x,x') > 2R_2(x) \geq 2R_1(x)$. Suppose $y$ is in the intersection of $B_{R_1(x)}(x)$ and $B_{R_1(x')}(x')$, so that
\begin{equation}
\label{d_x_x'}
d(x,x') \leq d(x,y) + d(y,x') \leq R_1(x) + R_1(x').
\end{equation}
Combining this with the lower bound on $d(x,x')$ gives $R_1(x) < R_1(x')$. Therefore, \eqref{d_x_x'} implies that $d(x,x') < 2R_1(x') \leq 2R_2(x')$; since $x \in \operatorname{supp} v$, the distance condition \eqref{distance_condition} applied to $x'$ then forces $x \in B_{R_1(x')}(x')$, that is, $d(x,x') \leq R_1(x')$. For any $z \in B_{R_1(x)}(x)$,
\begin{equation}
d(z,x') \leq d(z,x) + d(x,x') \leq R_1(x) + R_1(x') < 2R_2(x')
\end{equation}
so that, in fact, $z \in B_{R_1(x')}(x')$ and $ B_{R_1(x)}(x) \subset B_{R_1(x')}(x')$. Hence, starting from $\mathscr{X}$ we obtain the desired list $\mathscr{X}'$ by discarding any $x$ which satisfies $x \in B_{R_1(x')}(x')$ for some $x'$ later in the list.
Let $\overline{R}_1(v) = \max\{R_1(x) : x \in \mathscr{X}'\}$. From the description of the clustering algorithm, there is an upper bound on $\overline{R}_1(v)$, uniform in $m$, depending only on $B_0$:
\begin{equation}
\label{R_1_bar}
\overline{R}_1(v) \leq R_0 = R_0(B_0).
\end{equation}
Fix
\begin{equation}
\label{gamma_0}
\gamma_0 = \min\left\{ f(G_{\mathbb{Z}^2} * v) : v \in {\mathscr{C}}(B_0,R_0) \,\setminus\, {\mathcal{I}} \right\}.
\end{equation}
We will show that $\gamma = \gamma_0$, but a priori we only know that $\gamma_0 \geq \gamma$ and $\gamma_0 > 0$. Proposition \ref{T_m_to_Z^2} implies that if $m$ is large enough,
\begin{equation}
\label{savings_8}
|\operatorname{sav}(G_{\mathbb{T}_m} * v) - f(G_{\mathbb{Z}^2} * v)| < \gamma_0/8 \quad \text{for all } v \in {\mathscr{C}}(B_0,R_0).
\end{equation}
We can now prove Part 1 of Proposition \ref{gap_achievers}. Let $\xi \in \hat{\mathscr{G}}_m$ achieve the spectral gap, and take $m$ large enough that $\operatorname{sav}(\xi) < \min(\frac{3}{2}\gamma + 1, \frac{3}{2}\gamma_0)$, noting that the upper bound is strictly greater than $\gamma$. Let $v \in \mathbb{Z}_0^{\mathbb{T}_m}$ be the distinguished prevector of $\xi$, with $\|v\|_1 \leq B_0$, and run the clustering algorithm on $v$. Set
\begin{equation}
\mathscr{X}'' = \left\{ x \in \mathscr{X}' : \left. v \right|_{B_{R_1(x)}(x)} \notin {\mathcal{I}} \right\}
\end{equation}
and define $v'$ to equal $v$ on each $B_{R_1(x)}(x)$ for $x \in \mathscr{X}''$, while $v' \equiv 0$ elsewhere. To get from $v$ to $v'$, we subtracted finitely many elements of ${\mathcal{I}}$. It follows that $v' = v - \Delta w$ for some $w \in \mathbb{Z}^{\mathbb{T}_m}$; the proof of Lemma \ref{C_2_lemma} explains why this holds on $\mathbb{T}_m$ as well as on $\mathbb{Z}^2$. We conclude that $v'$ is also a prevector of $\xi$.
Given $x \in \mathscr{X}''$, for notational convenience set $u^{(x)} = \left. v \right|_{B_{R_1(x)}(x)}$. If some $u^{(x)} \notin C^2(\mathbb{T}_m)$, then by construction, $\operatorname{sav}(\xi) \geq \frac{3}{2}\gamma + 1$, a contradiction. Therefore each $u^{(x)} \in C^2(\mathbb{T}_m)$. Since $R_1(x) \leq R_0$ and $u^{(x)} \notin {\mathcal{I}}$, we have $f(G_{\mathbb{Z}^2} * u^{(x)}) \geq \gamma_0$. It follows from \eqref{savings_8} that $\operatorname{sav}(G_{\mathbb{T}_m} * u^{(x)}) > \frac{7}{8} \gamma_0$. Then, by step (i) of the clustering,
\begin{equation}
\operatorname{sav}(\xi; B_{R_2(x)}(x)) > \left( \frac{7}{8} \right)^2 \gamma_0 > \frac{3}{4} \gamma_0.
\end{equation}
If $|\mathscr{X}''| \geq 2$, then $\operatorname{sav}(\xi) > \frac{3}{2} \gamma_0$, another contradiction. We conclude that $|\mathscr{X}''| = 1$, and moreover, for the unique $x \in \mathscr{X}''$, $v' = u^{(x)} \in C^2(\mathbb{T}_m)$. Translating $v'$ by $-x$ yields an element of ${\mathscr{C}}(B_0,R_0)$. This proves Part 1 of Proposition \ref{gap_achievers}.
Part 2 is proved along similar lines. Let $v \in C^2(\mathbb{Z}^2)$ satisfy
\begin{equation}
\label{gamma_bound} \textstyle
f(G_{\mathbb{Z}^2} * v) < \min(\frac{3}{2} \gamma + 1, \frac{3}{2} \gamma_0).
\end{equation}
By property \ref{II}, we may assume that $\|v\|_1 \leq B_0$ (by subtracting an element of ${\mathcal{I}}$ if necessary). Since $\operatorname{supp} v$ is finite, it embeds into $\mathbb{T}_m$ for large enough $m$, and then $v$ can be seen as an element of $C^2(\mathbb{T}_m)$. Let $\xi^{(m)} \in \hat{\mathscr{G}}_m$ be the frequency corresponding to $v$, so that $\operatorname{sav}(\xi^{(m)}) \to f(G_{\mathbb{Z}^2} * v)$ as $m \to \infty$, by Proposition \ref{T_m_to_Z^2}.
Run the clustering algorithm on $v$, noting that, independently of the value of $m$, the algorithm follows exactly the same steps and produces identical clusters. Define $\mathscr{X}''$, $v'$, and the notation $u^{(x)}$ as in the proof of Part 1. If $u^{(x)} \notin C^2(\mathbb{T}_m)$ for some $x \in \mathscr{X}''$, then $\operatorname{sav}(\xi^{(m)}) \geq \frac{3}{2} \gamma + 1$. Since the property ``$u^{(x)} \notin C^2(\mathbb{T}_m)$'' is independent of $m$, this is a uniform lower bound on all $\operatorname{sav}(\xi^{(m)})$, and so $f(G_{\mathbb{Z}^2} * v) \geq \frac{3}{2} \gamma + 1$. Likewise, if each $u^{(x)} \in C^2(\mathbb{T}_m)$ but $|\mathscr{X}''| \geq 2$, then $\operatorname{sav}(\xi^{(m)}) > \frac{3}{2} \gamma_0$ for all $m$, and so $f(G_{\mathbb{Z}^2} * v) \geq \frac{3}{2} \gamma_0$. Both possibilities contradict \eqref{gamma_bound}.
We conclude that any $v \in C^2(\mathbb{Z}^2)$ satisfying \eqref{gamma_bound} differs by an element of ${\mathcal{I}}$ from some $v' \in C^2(\mathbb{Z}^2)$ that has a translate in ${\mathscr{C}}(B_0,R_0)$. It follows that $\gamma$ is equal to the right side of \eqref{gamma_0}, that is, $\gamma = \gamma_0$. In particular, $\gamma > 0$. Finally, the right side of \eqref{gamma_bound} simplifies to $\frac{3}{2} \gamma$, proving Part 2 of Proposition \ref{gap_achievers}.
\end{proof}
The following lemma provides the additive savings needed to prove the lower bound in Theorem \ref{mixing_time_theorem}.
\begin{lemma}\label{C_2_sum_lemma}
Let $k \geq 1$ be fixed, and let $v_1, \ldots, v_k \in C^2(\mathbb{T}_m)$ be bounded functions of bounded support
which are
$R$-separated, in the sense that their supports have pairwise $\ell^1$ distance at least $R$. Set $v = \sum_{i=1}^k v_i$. Then as $R \to \infty$,
\begin{equation}
1 - \left|\hat{\mu}(\xi(v))\right| = O\left(\frac{\log(1+R)}{R^2m^2} \right) +
\sum_{i=1}^k \Big( 1 - \left|\hat{\mu}(\xi(v_i)) \right| \Big).
\end{equation}
The implicit constant depends upon $k$ and the bounds for the functions and their supports.
\end{lemma}
\begin{proof}
Set $\overline{\xi} = G_{\mathbb{T}_m} * v$ and $\overline{\xi}_i = G_{\mathbb{T}_m} * v_i$, so that $|\hat{\mu}(\xi(v))| = |\hat{\mu}(\overline{\xi})|$ and $|\hat{\mu}(\xi(v_i))| = |\hat{\mu}(\overline{\xi}_i)|$. Fix a point $x_i$ in the support of each $v_i$, so that the balls $B_{R'}(x_i)$ are disjoint where $R' = \lfloor (R-1)/2 \rfloor$. As in the proof of Proposition \ref{T_m_to_Z^2}, if $y = (y_1,y_2)$ with $|y_1|,|y_2| \leq m/2$ and $r = \sqrt{y_1^2 + y_2^2}$,
\begin{equation}
\label{inverse_square}
|\overline{\xi}_i(x_i + y)| = O(1/r^2).
\end{equation}
We obtain the analogue to \eqref{savings_real},
\begin{equation}
\label{real_reduction}
1 - |\hat{\mu}(\overline{\xi}_i)| = O\left( \frac{1}{R^2 m^2} \right) + \frac{1}{m^2} \sum_{\|y\|_1 \leq R'} \Big(1 - c\left(\overline{\xi}_i(x_i + y)\right)\Big).
\end{equation}
If $\|y\|_1 \leq R'$, then $\overline{\xi}(x_i + y) = \overline{\xi}_i(x_i + y) + O(R^{-2})$, so that
\begin{equation}
\label{cosine_approx}
c\left(\overline{\xi}(x_i + y)\right) = c\left(\overline{\xi}_i(x_i + y)\right) + O\left(\frac{\left|s(\overline{\xi}_i(x_i + y))\right|}{R^2} \right) +
O\left( \frac{1}{R^4}\right).
\end{equation}
As in \eqref{Taylor_2},
\begin{equation}
\label{sine_bound}
\sum_{\|y\|_1 \leq R'} \left|s\left(\overline{\xi}_i(x_i + y)\right)\right| = O(\log(1 + R)).
\end{equation}
Combining \eqref{real_reduction}, \eqref{cosine_approx}, and \eqref{sine_bound} yields
\begin{equation}
\label{xi_i_approx}
1 - |\hat{\mu}(\overline{\xi}_i)| = O\left( \frac{\log(1+R)}{R^2 m^2} \right) + \frac{1}{m^2} \sum_{\|y\|_1 \leq R'} \Big(1 - c\left(\overline{\xi}(x_i + y)\right)\Big).
\end{equation}
Take the sum of \eqref{xi_i_approx} over $i = 1,2,\ldots,k$. For $z \notin \bigcup_{i=1}^k B_{R'}(x_i)$, let $r_i$ be the $\ell^2$ distance from $z$ to $x_i$, so that $|\overline{\xi}(z)| = O(1/r_1^2 + \cdots + 1/r_k^2)$. Use the inequality (an instance of Cauchy-Schwarz)
\begin{equation}
\left( \frac{1}{r_1^2} + \cdots + \frac{1}{r_k^2} \right)^2 \leq k\left( \frac{1}{r_1^4} + \cdots + \frac{1}{r_k^4} \right)
\end{equation}
to conclude that
\begin{equation}
\sum_{z \,\notin\, \bigcup_{i=1}^k B_{R'}(x_i)} \Big(1 - c\left(\overline{\xi}(z)\right)\Big) = O\left( \frac{1}{R^2} \right).
\end{equation}
In combination with \eqref{xi_i_approx}, this yields
\begin{equation}
\sum_{i=1}^k \Big( 1 - \left|\hat{\mu}(\overline{\xi}_i) \right| \Big) = O\left( \frac{\log(1+R)}{R^2 m^2} \right) + \frac{1}{m^2} \sum_{z \in \mathbb{T}_m} \Big(1 - c\left(\overline{\xi}(z)\right)\Big),
\end{equation}
or equivalently,
\begin{equation}
\label{xi_real}
1 - \operatorname{Re}\left( \hat{\mu}(\overline{\xi}) \right) = O\left( \frac{\log(1+R)}{R^2 m^2} \right) + \sum_{i=1}^k \Big( 1 - \left|\hat{\mu}(\overline{\xi}_i) \right| \Big).
\end{equation}
We will finish the proof by applying \eqref{real_part_approx}. Proposition \ref{T_m_to_Z^2} implies that each $1 - \left| \hat{\mu}(\overline{\xi}_i) \right| = O(1/m^2)$, so $\operatorname{Re}\left( \hat{\mu}(\overline{\xi}) \right) \gg 1$. Meanwhile,
\begin{equation}
\operatorname{Im} \left( \hat{\mu}(\overline{\xi}) \right) = \frac{1}{m^2} \sum_{z \in \mathbb{T}_m} s\left(\overline{\xi}(z)\right), \qquad s(t) = 2\pi t + O\left( |t|^3 \right).
\end{equation}
In the Taylor expansion, the linear term vanishes since $\overline{\xi}$ has mean zero on $\mathbb{T}_m$. Also,
\begin{equation}
\left| \overline{\xi}(z) \right|^3 \leq \left( \sum_{i=1}^k \left| \overline{\xi}_i(z) \right| \right)^3 \leq k^2 \sum_{i=1}^k \left| \overline{\xi}_i(z) \right|^3,
\end{equation}
and $\sum_{z \in \mathbb{T}_m} \left| \overline{\xi}_i(z) \right|^3 = O(1)$ by \eqref{inverse_square}. Hence $\operatorname{Im} \left( \hat{\mu}(\overline{\xi}) \right) = O(1/m^2)$, and we obtain the desired result from \eqref{xi_real} using \eqref{real_part_approx}.
\end{proof}
\subsection{Estimation of moderate size phases}
\label{small_phase_section}
In this section we give estimates for the savings of frequencies $\xi$ whose distinguished prevectors $v = v(\xi)$ have $\|v\|_1$ growing with $m$. In particular, we prove an approximate additive savings estimate for separated parts of $v$, which is what is needed to prove the upper bound of Theorem \ref{mixing_time_theorem}.
Let $R > 1$ be a large fixed parameter. Given any $v \in \mathbb{Z}^{\mathbb{T}_m}$, for each $x
\in \operatorname{supp} v$ let \begin{equation}\operatorname{nbd}(x) := B_R(x) = \{y \in \mathbb{T}_m: \left\|y-x\right\|_1 \leq R\}.\end{equation} Perform a simple
agglomeration scheme, in which any two points $x, y \in \operatorname{supp} v$ whose
neighborhoods overlap are joined in a common $R$-cluster. In other words, $x$ and $y$ belong to a common cluster if and only if there is a sequence of points $\{z_i\}_{i=0}^n \subset \operatorname{supp} v$ such that $x = z_0$, $y = z_n$ and, for $0 \leq i < n$, $\|z_i - z_{i+1}\|_1 \leq 2R$. Write ${\mathscr{C}}$ for the
collection of clusters formed in this way. Given $C \in {\mathscr{C}}$, write
\begin{equation}
\operatorname{nbd}(C) := \bigcup_{x \in C} \operatorname{nbd}(x)
\end{equation}
for the neighborhood of $C$, so that $\operatorname{supp} v \subset \bigsqcup_{C \in {\mathscr{C}}} \operatorname{nbd}(C)$.
Let ${\mathscr{P}} \subset {\mathscr{C}}$ be the collection of all clusters $C$ such that $\left. v \right|_C = \Delta w$ for some $w \in \mathbb{Z}^{\mathbb{T}_m}$, and let $S$ be the union of all clusters $C \in {\mathscr{C}} \,\setminus\, {\mathscr{P}}$. The `$R$-reduction' of $v$ is defined to be $\tilde{v} = \left. v \right|_S$, which differs from $v$ by a sum of terms of the form $\Delta w$, and whose $L^1$ and $L^\infty$ norms are bounded by $\|v\|_1, \|v\|_{L^\infty}$ respectively. We say that $v$ is `$R$-reduced' if $\tilde{v} = v$. For any frequency $\xi \in \hat{\mathscr{G}}_m$, the `$R$-reduced prevector' of $\xi$ is the $R$-reduction of the distinguished prevector $v(\xi)$, which is indeed a prevector of $\xi$.
The following is the main result of this section. It is similar to Lemmas \ref{C_0_1_lemma} and \ref{C_2_lemma}, but does not require the prevector $v$ to have bounded $L^1$ norm.
\begin{lemma}\label{savings_lemma}
Let $B \geq 1$ be a fixed parameter. There is a function $\eta(B, R)$ tending to 0 as $R \to \infty$ such that for all $m$ sufficiently large, if $v \in \mathbb{Z}^{\mathbb{T}_m}$ satisfies the following conditions:
\begin{enumerate}
\item $v$ is $R$-reduced
\item $\|v\|_{L^\infty} \leq 3$
\item $v$ has an $R$-cluster $C$ for which $\big\| \left. v \right|_C \big\|_1 \leq B$
\end{enumerate}
then
\begin{equation}
\label{eta_bound}
\operatorname{sav}(G_{\mathbb{T}_m} * v; \operatorname{nbd}(C)) \geq m^2 \mathrm{gap}_m - \eta(B, R).
\end{equation}
Thus, if $v$ has mean zero, then the corresponding frequency $\xi \in \hat{\mathscr{G}}_m$ satisfies $\operatorname{sav}(\xi; \operatorname{nbd}(C)) \geq m^2 \mathrm{gap}_m - \eta(B, R)$.
The sufficiently large value of $m$ above which \eqref{eta_bound} holds is allowed to depend on both $B$ and $R$.
\end{lemma}
The upper bound on $\|v\|_{L^\infty}$ could be replaced by any fixed constant; we chose $3$ because the distinguished prevector of every $\xi \in \hat{\mathscr{G}}_m$ satisfies $\|v(\xi)\|_{L^\infty} \leq 3$, so the $R$-reduced prevector has the same bound.
\begin{proof}
Suppose that $v \in \mathbb{Z}^{\mathbb{T}_m}$ satisfies the conditions of the lemma. We decompose the phase function $\overline{\xi} = G_{\mathbb{T}_m} * v$ into an internal and external component, $\overline{\xi} = \xi^i + \xi^e$, where
\begin{equation}
\xi^i := G_{\mathbb{T}_m} * \left. v \right|_C, \qquad \xi^e := G_{\mathbb{T}_m} * \left. v \right|_{C^c}.
\end{equation}
Our first observation is that the third derivatives of $\xi^e$ are uniformly bounded over all $x \in \operatorname{nbd}(C)$:
\begin{equation}
\label{3rd_deriv_bound}
\left|D_1^a D_2^b \xi_x^{e}\right| \ll \frac{1}{R}, \quad \text{for $x \in \operatorname{nbd}(C)$ and $a,b \geq 0$, $a+b = 3$}.
\end{equation}
To see this, note that if $x \in \operatorname{nbd}(C)$, then every $y \in \operatorname{supp}(v) \,\setminus\, C$ satisfies $\|x-y\|_1 > R$. Therefore,
\begin{align}
\left| D_1^a D_2^b \xi_x^e \right| &= \left| \sum_{y \in C^c} v(y) D_1^a D_2^b G_{\mathbb{T}_m}(x-y) \right| \\
\notag &\leq \|v\|_{L^\infty} \sum_{y \in B_R(x)^c} |D_1^a D_2^b G_{\mathbb{T}_m}(x-y)|.
\end{align}
The bound \eqref{3rd_deriv_bound} then follows from the asymptotic of Lemma \ref{greens_function_estimate}.
The rest of the proof is divided into three cases. Heuristically, $\xi^e$ could be roughly constant over $\operatorname{nbd}(C)$, vary linearly over $\operatorname{nbd}(C)$, or vary quadratically over $\operatorname{nbd}(C)$. The bound on the third derivatives of $\xi^e$ ensures that these are the only possibilities. If $\xi^e$ is roughly constant, then we can prove \eqref{eta_bound} using the arguments developed in Section \ref{Determination_of_gap}.
If $\xi^e$ varies linearly over $\operatorname{nbd}(C)$, then we can find a region of $\operatorname{nbd}(C)$ far enough away from $C$ that the internal phase is nearly constant, so $\overline{\xi} = \xi^i + \xi^e$ varies linearly. We then cite the geometric series bound of Lemma \ref{geometric} to show that for any $A > 0$, if $R$ is large enough then $\operatorname{sav}(\overline{\xi}; \operatorname{nbd}(C)) \geq A$. This is much stronger than the desired bound \eqref{eta_bound}: as long as $A > \gamma = \lim_{m \to \infty} m^2 \mathrm{gap}_m$, we do not even need to subtract $\eta(B,R)$.
Finally, if $\xi^e$ varies quadratically over $\operatorname{nbd}(C)$, we use van der Corput's inequality to reduce to the linear case.
The proof will use three auxiliary parameters $R_1,R_2,R_3$ which tend to infinity with $R$ and satisfy $R_1 < R_2 < R_3 < R$. We require that
\begin{equation}
\label{R_123}
R_1 \to \infty, \quad \frac{R_2}{R_1^4} \to \infty, \quad \frac{R_3}{R_1 R_2^2} \to \infty, \quad \frac{R}{R_1^2 R_3^2} \gg 1, \quad \text{as } R \to \infty.
\end{equation}
These properties are all satisfied if, for example, $R_1 = R^{1/26}$, $R_2 = R^{5/26}$, $R_3 = R^{12/26}$.
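Indeed, with these exponents,
\begin{equation}
\frac{R_2}{R_1^4} = R^{\frac{1}{26}} \to \infty, \qquad \frac{R_3}{R_1 R_2^2} = R^{\frac{12 - 1 - 10}{26}} = R^{\frac{1}{26}} \to \infty, \qquad \frac{R}{R_1^2 R_3^2} = R^{\frac{26 - 2 - 24}{26}} = 1,
\end{equation}
so the first two ratios tend to infinity with $R$ and the last is bounded below, verifying each condition in \eqref{R_123}.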
For the first case, suppose that for all $x \in C$ and $\|y - x\|_1 \leq R_1$, we have $\left\|\xi_y^e - \xi_x^e\right\|_{\mathbb{R}/\mathbb{Z}} < 1/R_1^3$. Perform the clustering algorithm from the proof of Proposition \ref{gap_achievers} on $\left. v \right|_C$, using the same parameters (e.g.~$\alpha = 7/8$). This partitions $C$ into sub-clusters indexed by a set $\mathscr{X}'$: for each $x \in \mathscr{X}'$ there are radii $2\tilde{R}_1(x) < \tilde{R}_2(x)$ such that
\begin{equation}
C \subset \bigsqcup_{x \in \mathscr{X}'} B_{\tilde{R}_1(x)}(x),
\end{equation}
while the balls $\{B_{\tilde{R}_2(x)}(x)\}_{x \in \mathscr{X}'}$ are disjoint, and each sub-cluster meets the conditions of either Lemma \ref{C_0_1_lemma} or Lemma \ref{C_2_lemma}, as appropriate. As in \eqref{R_1_bar}, the radii $\tilde{R}_2(x)$ are uniformly bounded by some $R_0$ depending only on $B$. By taking $R$ large enough with respect to $B$, we may assume that $R_0$ is arbitrarily small relative to $R_1$.
Let $x \in \mathscr{X}'$ and $R' \leq R_1$. We use the assumption that $\left\|\xi_y^e - \xi_x^e\right\|_{\mathbb{R}/\mathbb{Z}} < 1/R_1^3$ for all $y \in B_{R_1}(x)$ to compute, by Taylor expansion (after factoring out $e(\xi_x^e)$, each of the $O(R_1^2)$ summands carries a phase error of size $O(R_1^{-3})$),
\begin{equation}
\left| \sum_{y \in B_{R'}(x)} e(\xi^i_y + \xi^e_y) \right| = \left| \sum_{y \in B_{R'}(x)} e(\xi^i_y) \right| + O\left( \frac{1}{R_1} \right).
\end{equation}
In other words, for all $R' \leq R_1$,
\begin{equation}
\label{xi_i_reduction}
\operatorname{sav}(\overline{\xi}; B_{R'}(x)) = \operatorname{sav}(\xi^i; B_{R'}(x)) + O\left( R_1^{-1} \right).
\end{equation}
Define $\mathscr{X}'' \subset \mathscr{X}'$ as in the proof of Proposition \ref{gap_achievers}. Since $v$ is $R$-reduced, $\mathscr{X}''$ is nonempty. For $x \in \mathscr{X}''$, let $u^{(x)}$ be the restriction of $v$ to $B_{\tilde{R}_1(x)}(x)$. By step (i) of the clustering,
\begin{equation}
\label{xi_i_cases}
\operatorname{sav}(\xi^i; B_{\tilde{R}_2(x)}(x)) \geq \begin{cases} \frac{3}{2} \gamma + 1, & u^{(x)} \notin C^2(\mathbb{T}_m), \\
\frac{7}{8} \operatorname{sav}(G_{\mathbb{T}_m} * u^{(x)}), & u^{(x)} \in C^2(\mathbb{T}_m). \end{cases}
\end{equation}
Note that $\frac{3}{2} \gamma + 1 > m^2 \mathrm{gap}_m$ for large enough $m$, and $\operatorname{sav}(G_{\mathbb{T}_m} * u^{(x)}) \geq m^2 \mathrm{gap}_m$ by definition of $\mathscr{X}''$. Thus, the combination of \eqref{xi_i_reduction} with \eqref{xi_i_cases} verifies the desired bound \eqref{eta_bound} except when $|\mathscr{X}''| = 1$ and $u^{(x)} \in C^2(\mathbb{T}_m)$. In that remaining situation, we observe from \eqref{xi_star_approx} that
\begin{align}
\operatorname{sav}(\xi^i; B_{R_1}(x)) &= \operatorname{sav}(G_{\mathbb{T}_m} * u^{(x)}; B_{R_1}(x)) \\
\notag &= \operatorname{sav}(G_{\mathbb{T}_m} * u^{(x)}) - O_B\left( \frac{\log^2 R_1}{R_1^2} \right) \\
\notag &\geq m^2 \mathrm{gap}_m - O_B\left( \frac{\log^2 R_1}{R_1^2} \right),
\end{align}
which along with \eqref{xi_i_reduction} completes the proof.
In the second and third cases, we assume that there exist $x \in C$ and $y \in B_{R_1}(x)$ such that $d := \|\xi_y^e - \xi_x^e\|_{\mathbb{R}/\mathbb{Z}} \geq 1/R_1^3$. Set $w = y - x$, so $\|w\|_1 \leq R_1$. For the second case, suppose that for all integers $1 \leq n \leq \frac{R_2}{\|w\|_1}$,
\begin{equation}\label{linear_constraint}
\left\|\xi_{x + nw}^e -
\xi_x^e - n\left(\xi_y^e
- \xi_x^e\right)\right\|_{\mathbb{R}/\mathbb{Z}} < \frac{1}{R_1}.
\end{equation}
Effectively, the external phase varies linearly along the discrete line $\{x + nw : n \in \mathbb{Z},\, 0 \leq n \leq \frac{R_2}{\|w\|_1} \}$.
We now find a segment along the line that is far away from $C$. Set
\begin{equation}
\ell = \left\lfloor \frac{R_2}{3^B \|w\|_1} \right\rfloor,
\end{equation}
and consider the $\ell^1$-balls of radius $3^k \ell \|w\|_1$ centered at $x + 2 \cdot 3^k \ell w$, for $0 \leq k \leq B-1$. The interiors of these balls are disjoint, and $x \in C$ is not in any of the interiors. By the pigeonhole principle, the interior of at least one ball contains no elements of $C$. Choose $k$ corresponding to one such ball, and set $U = 2 \cdot 3^k \ell$, so that $x + Uw$ is the center.
Set $V = \lfloor \sqrt{\ell d^{-1}} \rfloor$. By \eqref{R_123}, $\frac{\ell}{V} \geq \sqrt{\ell d} \to \infty$ with $R$, and certainly $V \to \infty$ with $R$. Any point $z$ along a shortest path from $x + Uw$ to $x + nw$, with $U < n \leq U+V$, lies at $\ell^1$ distance $\gg U \|w\|_1$ from $C$. Since the first derivatives of $G_{\mathbb{T}_m}$ decay like the inverse of the radius, it follows that $\xi^i_{x + nw} - \xi^i_{x + Uw} = O\left( \frac{V}{U} \right) = o_R(1)$.
Consider the exponential sum
\begin{align}
&\quad\, \sum_{n = U}^{U+V} e\left(\overline{\xi}_{x + nw}\right) = \sum_{n = U}^{U+V} e\left(\xi_{x + nw}^e + \xi_{x + nw}^i\right) \\
&= \notag \sum_{n = U}^{U+V} e\left(\xi_{x}^e +
\xi_{x + Uw}^i + n\left(\xi_y^e - \xi_x^e \right) +
O\left(\frac{1}{R_1} \right) + O\left(\frac{V}{U}
\right)\right).
\end{align}
Taylor expanding the error in the exponential, then
summing the geometric series, we obtain
\begin{equation}
\sum_{n = U}^{U+V} e\left(\overline{\xi}_{x + nw}\right) =
O\left(\frac{V}{R_1} + \frac{V^2}{U} + \frac{1}{d} \right) = o_R(V).
\end{equation}
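The final bound uses the standard geometric series estimate $\left|\sum_{n=U}^{U+V} e(n\theta)\right| \leq \frac{1}{2\|\theta\|_{\mathbb{R}/\mathbb{Z}}}$, applied here with $\|\theta\|_{\mathbb{R}/\mathbb{Z}} = d$. Note that
\begin{equation}
\frac{1}{dV} \asymp \frac{1}{\sqrt{\ell d}} \to 0, \qquad \frac{V}{R_1} = o(V), \qquad \frac{V^2}{U} \leq \frac{V^2}{2\ell} = o(V),
\end{equation}
so each of the three error terms is indeed $o_R(V)$.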
Hence this segment of the line provides savings of $\gg V$ for $\overline{\xi}$. That is, $\operatorname{sav}(\overline{\xi}; \operatorname{nbd}(C))$ is bounded below by a constant that may be made arbitrarily large by taking $R$ large enough.
For the third case, suppose that \eqref{linear_constraint} fails for some $n = n_1 \leq \frac{R_2}{\|w\|_1}$. Set $W = \lfloor \frac{R_3}{\|w\|_1} \rfloor$, and apply van der Corput's inequality with $H=1$ to estimate
\begin{equation}
\label{van_der_Corput}
\left| \sum_{n = 1}^W e\left(\overline{\xi}_{x + nw}
\right)\right|^2 \leq \frac{W(W+1)}{2} + \frac{W+1}{2} \left| \sum_{n = 1}^{W-1} e\left(\overline{\xi}_{x + (n+1)w} - \overline{\xi}_{x + nw} \right)\right|.
\end{equation}
Set $z_n = \xi_{x + (n+1)w}^e - \xi_{x + nw}^e$. By the definition of $n_1$,
\begin{equation}
\frac{1}{R_1} \leq \left\| \sum_{i=0}^{n_1-1} z_i - n_1 z_0 \right\|_{\mathbb{R}/\mathbb{Z}} = \left\| \sum_{i=0}^{n_1-2} (n_1-1-i)(z_{i+1} - z_i) \right\|_{\mathbb{R}/\mathbb{Z}}
\end{equation}
and therefore, since the integer weights $n_1-1-i$ sum to $n_1(n_1-1)/2 \leq n_1^2/2$, there is $n_0 \leq n_1-2$ for which
\begin{equation}
\delta := \|z_{n_0+1} - z_{n_0}\|_{\mathbb{R}/\mathbb{Z}} \geq \frac{2}{R_1 n_1^2} \geq \frac{2 \|w\|_1^2}{R_1 R_2^2}.
\end{equation}
For all $1 \leq n,p \leq W$, the quantity $z_n - z_p -(n-p)(z_{n_0+1} - z_{n_0})$ is a sum of $O\left( W^2 \|w\|_1^3 \right)$ terms of the form $D_1^a D_2^b \xi_{x'}^e$ where $a+b = 3$ and $x' \in \operatorname{nbd}(C)$. (Each $z_i$ is a sum of $O\left( \|w\|_1 \right)$ first derivatives of $\xi^e$, so $z_{i+1} - z_i$ is a sum of $O\left( \|w\|_1^2 \right)$ second derivatives of $\xi^e$, and then $(z_{i+1} - z_i) - (z_{n_0+1} - z_{n_0})$ is a sum of $O\left( \|w\|_1^2 \cdot W \|w\|_1 \right)$ third derivatives of $\xi^e$. Finally, sum over all $i$ between $p$ and $n$.) Thus, \eqref{3rd_deriv_bound} gives
\begin{equation}
\left\|z_n - z_p -(n-p)(z_{n_0+1} - z_{n_0})\right\|_{\mathbb{R}/\mathbb{Z}} = O\left( \frac{W^2 \|w\|_1^3}{R} \right).
\end{equation}
By the definition of $W$ and \eqref{R_123}, this quantity is $O\left( 1/R_1 \right)$.
We now repeat the argument of the previous case, using
\begin{equation}
\ell' = \left\lfloor \frac{R_3}{3^B \|w\|_1} \right\rfloor
\end{equation}
to define $U' = 2 \cdot 3^k \ell'$ for an appropriately chosen $0 \leq k \leq B-1$, and $V' = \lfloor \sqrt{\ell' \delta^{-1}} \rfloor$. By \eqref{R_123}, $\frac{\ell'}{V'} \geq \sqrt{\ell' \delta} \to \infty$ with $R$, and $\delta^{-1} = o_R(V')$. Arguing as before, we obtain
\begin{equation}
\sum_{n = U'}^{U'+V'} e\left(\overline{\xi}_{x + (n+1)w} - \overline{\xi}_{x + nw}\right) = O\left( \frac{V'}{R_1} + \frac{(V')^2}{U'} + \frac{1}{\delta} \right) = o_R(V').
\end{equation}
Thus, we have saved an arbitrarily large constant in the sum
\begin{equation}
\sum_{n = 1}^{W-1} e\left(\overline{\xi}_{x + (n+1)w} - \overline{\xi}_{x + nw} \right),
\end{equation}
and hence also in $\sum_{n=1}^W e\left(\overline{\xi}_{x + nw}
\right)$, by \eqref{van_der_Corput}.
\end{proof}
\section{Proof of Theorem \ref{mixing_time_theorem}}
\label{proof_mixing_theorem_section}
In the process of proving Theorem \ref{mixing_time_theorem}, we also prove the following mixing result in $L^2$.
\begin{theorem}\label{L_2_mixing_theorem}
Let $m \geq 2$, let $c_0 = \gamma^{-1}$ be the constant of Theorem \ref{mixing_time_theorem}, and as there, set $t_m^{\operatorname{mix}} = c_0 m^2 \log m$. For each fixed $\epsilon > 0$,
\begin{align}
\lim_{m \to \infty} \min_{\sigma \in \mathscr{R}_m}\left\|P_m^{\lceil(1-\epsilon)t_m^{\operatorname{mix}}\rceil} \delta_\sigma - \mathbb{U}_{\mathscr{R}_m}\right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})} &= \infty\\
\notag \lim_{m \to \infty} \max_{\sigma \in \mathscr{R}_m}\left\|P_m^{\lfloor(1+\epsilon)t_m^{\operatorname{mix}}\rfloor} \delta_\sigma - \mathbb{U}_{\mathscr{R}_m}\right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})} &= 0.
\end{align}
\end{theorem}
Note that, since we restrict to recurrent states, Parseval gives the following characterization of the $L^2(d\mathbb{U}_{\mathscr{R}_m})$ norm,
\begin{equation}
\left\|P_m^N \delta_{\sigma} - \mathbb{U}_{\mathscr{R}_m} \right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})}^2 = \sum_{\xi \in \hat{\mathscr{G}}_m \setminus \{0\}}\left|\hat{\mu}(\xi) \right|^{2N}.
\end{equation}
\subsection{Proof of the lower bound}
Our proof of the lower bound in Theorem \ref{mixing_time_theorem} uses the
following second moment lemma, a variant of the method used by Diaconis and
Shahshahani \cite{DS87} to show cutoff in the Bernoulli--Laplace diffusion model
(see also \cite{D88}).
Given any probability measure $\mu$ on a finite abelian group $\mathscr{G}$, recall from Section \ref{Random_walk_on_the_sandpile_group} the definitions of the dual group $\hat{\mathscr{G}}$ and the Fourier coefficients $\hat{\mu}(\xi)$, for $\xi \in \hat{\mathscr{G}}$.
\begin{lemma}\label{lower_bound_lemma}
Let $\mathscr{G}$ be a finite abelian group, let $\mu$ be a probability measure on $\mathscr{G}$ and
let $N \geq 1$. Let $\mathscr{X} \subset \hat{\mathscr{G}} \,\setminus\, \{0\}$. Suppose that the following inequalities
hold for some parameters $0 < \epsilon_1, \epsilon_2 < 1$,
\begin{align}
\sum_{\xi \in \mathscr{X}}\left|\hat{\mu}(\xi)\right|^N &\geq
\frac{|\mathscr{X}|^{\frac{1}{2}}}{\epsilon_1}\\ \notag
\sum_{\xi_1, \xi_2 \in \mathscr{X}}\left|\hat{\mu}(\xi_1-\xi_2)\right|^N & \leq (1
+\epsilon_2^2)\left(\sum_{\xi \in \mathscr{X}}\left|\hat{\mu}(\xi)\right|^N \right)^2.
\end{align}
Then
\begin{equation}
\left\|\mu^{*N}- \mathbb{U}_{\mathscr{G}}\right\|_{\operatorname{TV}(\mathscr{G})} \geq 1 - 4\epsilon_1^2 - 4
\epsilon_2^2.
\end{equation}
\end{lemma}
\begin{proof}
Define, for $\xi \in \mathscr{X}$, $w_\xi =
\left(\frac{\overline{\hat{\mu}(\xi)}}{\left|\hat{\mu}(\xi)\right|}\right)^N$,
and $f \in
L^2(\mathscr{G})$ by
\begin{equation}
f(x) = \sum_{\xi\in \mathscr{X}}w_\xi e(\xi\cdot x).
\end{equation}
Then
\begin{align}
\mathbf{E}_{\mathbb{U}}[f] = 0, \qquad &\mathbf{E}_{\mathbb{U}}\left[\left|f\right|^2\right] = |\mathscr{X}|,\\
\notag
\mathbf{E}_{\mu^{*N}}[f] = \sum_{\xi \in \mathscr{X}}\left|\hat{\mu}(\xi)\right|^N, \qquad
&\mathbf{E}_{\mu^{*N}}\left[|f|^2 \right] = \sum_{\xi_1, \xi_2 \in \mathscr{X}}
w_{\xi_1}\overline{w_{\xi_2}}\hat{\mu}(\xi_1 - \xi_2)^N.
\end{align}
Define $A = \left\{g \in \mathscr{G}: |f(g)| > \frac{1}{2}\mathbf{E}_{\mu^{*N}}[f]\right\}$. By Chebyshev's inequality under $\mathbb{U}$, using $\mathbf{E}_{\mathbb{U}}\left[|f|^2\right] = |\mathscr{X}|$ together with the first hypothesis, $\mathbb{U}(A) \leq 4\epsilon_1^2$; by Chebyshev's inequality under $\mu^{*N}$, using the second hypothesis, $\mu^{*N}(A) \geq 1 - 4\epsilon_2^2$. The claim follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mixing_time_theorem}, lower bound]
In light of Proposition \ref{gap_achievers}, choose $v \in {\mathscr{C}}(B_0,R_0)$ such that the frequency $\xi = \xi(v) \in \hat{\mathscr{G}}_m$ achieves the spectral gap. Choose a large fixed constant $R > R_0$ and let
$\{v_i\}_{i=1}^{M}$ be a
collection of $R$-separated translates of $v$, with $M \asymp \frac{m^2}{R^2}$. The corresponding frequencies $\xi_i = \xi(v_i)$ all satisfy $|\hat{\mu}(\xi_i)| = |\hat{\mu}(\xi)| = 1 - \mathrm{gap}_m$.
Given $c > 0$, set $N = \left\lfloor(\log m -c) \mathrm{gap}_m^{-1} \right\rfloor$ and apply Lemma \ref{lower_bound_lemma} with the set of frequencies $\mathscr{X} = \{\xi_i\}_{i=1}^M$. Calculate
\begin{equation}
\label{hat_mu}
\left|\hat{\mu}(\xi_i)\right|^N = \frac{e^c}{m} \left[ 1 + O\left(\frac{\log
m}{m^2} \right) \right].
\end{equation}
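To verify \eqref{hat_mu}, take logarithms: since $N = \lfloor (\log m - c)\,\mathrm{gap}_m^{-1} \rfloor$ and $\mathrm{gap}_m \asymp m^{-2}$,
\begin{equation}
N \log\left|\hat{\mu}(\xi_i)\right| = N \log(1 - \mathrm{gap}_m) = -N\left( \mathrm{gap}_m + O(\mathrm{gap}_m^2) \right) = -(\log m - c) + O\left( \frac{\log m}{m^2} \right),
\end{equation}
and exponentiating gives \eqref{hat_mu}.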
If $c$ is sufficiently large, then the first condition of Lemma
\ref{lower_bound_lemma} is satisfied with $\epsilon_1 = O\left(R e^{-c}
\right).$
Write $d(v_i,v_j)$ for the $\ell^1$ distance between the supports of $v_i$ and $v_j$. If $d(v_i,v_j) \geq \rho$, then by Lemma \ref{C_2_sum_lemma},
\begin{equation}
1- \left|\hat{\mu}(\xi_i -\xi_j)\right| =2(1- |\hat{\mu}(\xi)|) +
O\left(\frac{\log(1+\rho)}{\rho^2 m^2}\right)
\end{equation}
and therefore we can compute
\begin{equation}
\label{hat_mu_n}
\left|\hat{\mu}(\xi_i -\xi_j)\right|^N = e^{2c} m^{-2 + O(\log(1+\rho) / \rho^2)}.
\end{equation}
Choose $R$ large enough that when $\rho = R$ in \eqref{hat_mu_n}, the power of $m$ is less than $-1$. Then, by separating the cases $i=j$ and $i \neq j$,
\begin{equation}
\sum_{\substack{1 \leq i,j \leq M \\ d(v_i,v_j) < \log m}} \left|\hat{\mu}(\xi_i -\xi_j)\right|^N = O\left(\frac{m^2}{R^2} \right) + e^{2c} O\left( \frac{m}{R^4} \right),
\end{equation}
since the number of pairs $(i,j)$ in the sum is $O(m^2 \log^2(m) / R^4)$. In addition, using $\rho = \log m$ in \eqref{hat_mu_n} and plugging in \eqref{hat_mu},
\begin{align}
\sum_{\substack{1 \leq i,j \leq M \\ d(v_i,v_j) \geq \log m}} \left|\hat{\mu}(\xi_i -\xi_j)\right|^N &= \sum_{\substack{1 \leq i,j \leq M \\ d(v_i,v_j) \geq \log m}} e^{2c} m^{-2} \left( 1 + O\left( \frac{\log \log m}{\log m} \right) \right) \\
\notag &\leq M^2 |\hat{\mu}(\xi)|^{2N} \left( 1 + O\left( \frac{\log \log m}{\log m} \right) \right).
\end{align}
Therefore, since $M^2 |\hat{\mu}(\xi)|^{2N} \asymp e^{2c} m^2 / R^4$,
\begin{equation}
\sum_{1 \leq i,j \leq M} \left|\hat{\mu}(\xi_i -\xi_j)\right|^N \leq M^2 |\hat{\mu}(\xi)|^{2N} \left( 1 + O\left( \frac{\log \log m}{\log m} + \frac{R^2}{e^{2c}} + \frac{1}{m} \right) \right)
\end{equation}
and the second condition of Lemma \ref{lower_bound_lemma} is met with $\epsilon_2 = O\left( Re^{-c} \right)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{L_2_mixing_theorem}, lower bound]
By Cauchy-Schwarz, the condition $ \sum_{\xi \in \mathscr{X}}\left|\hat{\mu}(\xi)\right|^N \geq
\frac{|\mathscr{X}|^{\frac{1}{2}}}{\epsilon_1}$ implies
\begin{equation}
\sum_{\xi \in \mathscr{X}}\left|\hat{\mu}(\xi)\right|^{2N} \geq \frac{1}{\epsilon_1^2}.
\end{equation}
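Indeed, Cauchy-Schwarz gives
\begin{equation}
\left( \sum_{\xi \in \mathscr{X}} \left|\hat{\mu}(\xi)\right|^N \right)^2 \leq |\mathscr{X}| \sum_{\xi \in \mathscr{X}} \left|\hat{\mu}(\xi)\right|^{2N},
\end{equation}
and dividing both sides by $|\mathscr{X}|$ yields the displayed bound.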
By the proof of the lower bound above, since $\epsilon$ is fixed, $\epsilon_1$ may be taken arbitrarily small, which proves the $L^2$ lower bound.
\end{proof}
\subsection{Proof of the upper bound}
\label{Proof_of_the_upper_bound}
Recall that Proposition \ref{reduction_proposition} reduces Theorem \ref{mixing_time_theorem} to the case where the starting state is recurrent. We prove the upper bound of Theorem \ref{L_2_mixing_theorem}, which implies the upper bound of Theorem \ref{mixing_time_theorem} by Cauchy-Schwarz.
We consider mixing at step
\begin{equation}
N = \lfloor (1+\epsilon)\mathrm{gap}_m^{-1}\log m \rfloor \asymp m^2 \log m.
\end{equation}
Let $R = R(\epsilon)$ be a parameter, fixed independently of $m$, to be determined at the end of the argument. Given a frequency $\xi \in \hat{\mathscr{G}}_m$, let $v \in \mathbb{Z}_0^{\mathbb{T}_m}$ be its $R$-reduced prevector, and perform the clustering algorithm of Section \ref{small_phase_section} on $v$ with parameter $R$.
Let $\mathscr{N}(V,K)$ denote the number of $R$-reduced prevectors $v$ with $L^1$ mass $V$ distributed among $K$ $R$-clusters.
\begin{lemma}
The following upper bound holds:
\begin{equation}
\mathscr{N}(V,K) \leq \exp\left(K \log(m^2)
+ O(V \log R) \right).
\end{equation}
\end{lemma}
\begin{proof}
We provide a recipe to generate all possible prevectors by adding mass one point at a time. Let $\Gamma$ be a lattice path from $(0,0)$ to $(V,V)$, that moves either upward or rightward at each step, and that never passes above the main diagonal. Assume that $\Gamma$ has exactly $K-1$ intersection points with the main diagonal strictly between $(0,0)$ and $(V,V)$. Let the rightward edges go from $(i, k(i))$ to $(i+1, k(i))$, for $0 \leq i \leq V-1$. The sequence $\{k(i)\}_{i=0}^{V-1}$ is non-decreasing, with $0 \leq k(i) \leq i$, and there are $K$ values of $i$ for which $k(i) = i$ (including $i = 0$).
To generate a prevector $v$ using the path $\Gamma$:
\begin{enumerate}
\item Iterate from $i = 0$ to $V-1$:
\begin{enumerate}
\item If $k(i) = i$, start a new cluster by adding one unit of mass to a point $x_i$ that is separated from the set of previously placed points $\{x_j\}_{j < i}$ by a distance greater than $2R$.
\item If $k(i) < i$, add one unit of mass to a point $x_i$ whose distance from the previously placed point $x_{k(i)}$ is at most $2R$. The possibility $x_i = x_{k(i)}$ is allowed.
\end{enumerate}
\item For each $x \in \operatorname{supp}(v) = \bigcup_{i=0}^{V-1} \{x_i\}$, let $w(x) = \#\{i : x_i = x\}$ be the total mass at $x$, and choose $v(x) \in \{-w(x), w(x)\}$.
\end{enumerate}
Every prevector $v$ with $L^1$ mass $V$ in $K$ clusters can be generated by this procedure. For each path $\Gamma$, since step (a) is taken $K$ times, the number of possible prevectors is $O\left( (m^2)^K \cdot (R^2)^V \cdot 2^V \right)$. The number of paths $\Gamma$ is bounded by the $V$-th Catalan number, $\frac{1}{V+1} \binom{2V}{V} \leq 2^{2V}$. Hence
\begin{equation}
\mathscr{N}(V,K) = O\left( m^{2K} R^{2V} 8^V \right),
\end{equation}
which has the desired form.
\end{proof}
\begin{proof}[Proof of Theorem \ref{L_2_mixing_theorem}, upper bound]
In
\begin{equation}
\left\|P_m^N \delta_{\sigma} - \mathbb{U}_{\mathscr{R}_m} \right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})}^2 = \sum_{0 \neq \xi \in
\hat{\mathscr{G}}_m}\left|\hat{\mu}(\xi)\right|^{2N},
\end{equation}
write $\Xi(V,K)$ for the collection of nonzero frequencies $\xi \in \hat{\mathscr{G}}_m$ such that the $R$-reduced prevector of $\xi$ has $L^1$ norm $V$ in $K$ $R$-clusters. Thus
\begin{equation}
\label{full_mu_sum}
\left\|P_m^N \delta_{\sigma} - \mathbb{U}_{\mathscr{R}_m} \right\|_{L^2(d\mathbb{U}_{\mathscr{R}_m})}^2 = \sum_{K \geq 1} \sum_{V \geq K} \sum_{\xi \in \Xi(V,K)}\left|\hat{\mu}(\xi)\right|^{2N}.
\end{equation}
From the definition of $R$-reduction in Section \ref{small_phase_section}, the bound of Lemma \ref{l_2_fourier_lemma} applies also to $R$-reduced prevectors. Thus, there is a universal constant $c > 0$ such that every $\xi \in \Xi(V,K)$ satisfies
\begin{equation}
|\hat{\mu}(\xi)|^{2N} \leq \exp(-cV \log m).
\end{equation}
Let $A>0$ be a fixed integer constant. Then,
\begin{align}
& \quad\, \sum_{K \geq 1} \sum_{V \geq AK} \sum_{\xi \in
\Xi(V,K)}\left|\hat{\mu}(\xi)\right|^{2N} \\
&\notag \leq \sum_{K \geq 1} \sum_{V \geq AK} \mathscr{N}(V,K)
\exp\left(-c V \log m \right)\\
&\notag \leq \sum_{K \geq 1} \sum_{V \geq AK} \exp\Big(K \log(m^2) - V [c\log m - O(\log R)]
\Big).
\end{align}
For sufficiently large $m$, the coefficient of $V$ in the exponent, $c\log m - O(\log R)$, is at least $\frac{c}{2} \log m$. Provided $Ac > 4$, the resulting geometric series in $V$ and in $K$ both converge, and we may sum them:
\begin{align}
& \quad\, \sum_{K \geq 1} \sum_{V \geq AK} \exp\left( K \log(m^2) -V((c/2) \log m) \right) \\
\notag &= \sum_{K \geq 1} \exp\left( 2K \log m - AK(c/2) \log m \right) (1 + o(1)) \\
\notag &= m^{2 - Ac/2} (1 + o(1)),
\end{align}
where the $o(1)$ is as $m \to \infty$. Choose $A$ so that $2 - Ac/2 \leq -1$.
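As a numerical sanity check (ours; the values $c = 1$, $A = 8$ are illustrative choices satisfying $Ac > 4$ and $2 - Ac/2 \leq -1$, not constants from the proof), a truncation of the double sum indeed approaches its leading term $m^{2 - Ac/2}$ as $m$ grows:

```python
import math

def tail_sum(m, c=1.0, A=8, K_max=40, V_tail=2000):
    """Truncation of sum_{K >= 1} sum_{V >= AK} exp(2K log m - V (c/2) log m)."""
    logm = math.log(m)
    total = 0.0
    for K in range(1, K_max + 1):
        for V in range(A * K, A * K + V_tail):
            e = 2 * K * logm - V * (c / 2) * logm
            if e > -700:          # skip terms that underflow to zero
                total += math.exp(e)
    return total

# Ratio of the double sum to its leading term m^{2 - Ac/2} = m^{-2}:
ratios = [tail_sum(m) / m ** (2 - 8 * 1.0 / 2) for m in (100, 1000, 10000)]
assert ratios[0] > ratios[1] > ratios[2] > 1   # decreasing toward 1, i.e. 1 + o(1)
assert ratios[2] < 1.02                        # within 2% already at m = 10^4
```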
To estimate the remaining sum over $K \leq V < AK$, let $\delta = \epsilon/3$ and set $B = A \delta^{-1}$. Choose $R = R(\epsilon)$ according to Lemma \ref{savings_lemma}, so that the savings from each $R$-cluster of size at most $B$ is at least $m^2 \mathrm{gap}_m \left(1 - \epsilon/2 \right)$. If $\xi \in \Xi(V,K)$ with $V < AK$, then its $R$-reduced prevector has at least $(1-\delta)K$ clusters of size at most $B$. Hence
\begin{equation}
1 - |\hat{\mu}(\xi)| \geq (1-\delta)K \cdot \mathrm{gap}_m \left( 1 - \frac{\epsilon}{2} \right) \geq \left( 1 - \frac{5\epsilon}{6} \right) \mathrm{gap}_m K,
\end{equation}
and therefore
\begin{align}
|\hat{\mu}(\xi)|^{2N} &\leq \exp\left( \left[ -2(1+\epsilon)\left(1-\frac{5\epsilon}{6}\right) \log m + O\left(m^{-2}\right) \right] K \right) \\
\notag &\leq \exp(-(2 + \beta)(\log m) K)
\end{align}
for some constant $\beta = \beta(\epsilon) > 0$, as long as $\epsilon$ is sufficiently small.
We compute, for sufficiently large $m$,
\begin{align}
&\quad\, \sum_{K \geq 1} \sum_{K \leq V < AK} \sum_{\xi \in \Xi(V,K)} |\hat{\mu}(\xi)|^{2N} \\
\notag &\leq \sum_{K \geq 1} \sum_{K \leq V < AK} \mathscr{N}(V,K) \exp(-(2 + \beta)(\log m) K) \\
\notag &\leq \sum_{K \geq 1} \sum_{K \leq V < AK} \exp(-\beta(\log m) K + O(V \log R)) \\
\notag &\leq \sum_{K \geq 1} \exp\Big( \left[ -\beta \log m + O(A \log R) \right] K \Big) \\
\notag &\leq \sum_{K \geq 1} \exp(-(\beta/2) (\log m) K) = O\left( m^{-\beta/2} \right).
\end{align}
Thus the entire sum \eqref{full_mu_sum} tends to zero like a small negative power of $m$, completing the proof.
\end{proof}
| {
    "arxiv_id": "1703.00827",
    "url": "https://arxiv.org/abs/1703.00827",
    "title": "Sandpiles on the square lattice",
    "subjects": "Probability (math.PR)",
    "abstract": "We give a non-trivial upper bound for the critical density when stabilizing i.i.d. distributed sandpiles on the lattice $\\mathbb{Z}^2$. We also determine the asymptotic spectral gap, asymptotic mixing time and prove a cutoff phenomenon for the recurrent state abelian sandpile model on the torus $\\left( \\mathbb{Z} / m\\mathbb{Z} \\right)^2$. The techniques use analysis of the space of functions on $\\mathbb{Z}^2$ which are harmonic modulo 1. In the course of our arguments, we characterize the harmonic modulo 1 functions in $\\ell^p(\\mathbb{Z}^2)$ as linear combinations of certain discrete derivatives of Green's functions, extending a result of Schmidt and Verbitskiy."
} |
https://arxiv.org/abs/0801.2987 | The minimum rank problem over finite fields | The structure of all graphs having minimum rank at most k over a finite field with q elements is characterized for any possible k and q. A strong connection between this characterization and polarities of projective geometries is explained. Using this connection, a few results in the minimum rank problem are derived by applying some known results from projective geometry. |
\section{Introduction}
Given a field $F$ and a simple undirected graph $G$ on $n$ vertices (i.e., an
undirected graph without loops or multiple edges), let
$S(F,G)$ be the set of symmetric $n\times n$ matrices $A$ with entries
in $F$ satisfying $a_{ij} \neq 0$, $i \neq j$, if and only if $ij$ is
an edge in $G$. There is no restriction on the diagonal entries of
the matrices in $S(F,G)$. Let
\begin{equation*}
\mr(F,G)=\min\{\rank A \,\mid\, A \in S(F,G)\}.
\end{equation*}
Let $\ensuremath{\mathcal{G}}_k(F)=\{G \,\mid\, \mr(F,G)\leq k\}$, the set of simple graphs with
minimum rank at most $k$.
The problem of finding $\mr(F,G)$ and describing $\ensuremath{\mathcal{G}}_k(F)$ has
recently attracted considerable attention, particularly for the case
in which $F = \ensuremath{\mathbb R}$ (see
\cite{nylen-minrank,cdv-eig-mult,johnson-duarte-trees, hsieh-minrank,
johnson-saiago-maxmult, chen, vdh-nullity, BFH1-minrankpath,
barrett-vdHL-minrank2-infinite, hall, arav,
bento-duarte-tridiag-matrices, bfh-mult-pathcover-tree, bfh-cdv,
barrett-vdHL-minrank2-finite, ding-kotlov-minrank-finite,
barioli-fallat-joins}). The minimum rank problem over $\ensuremath{\mathbb R}$ is a
sub-problem of a much more general problem, the inverse eigenvalue
problem for symmetric matrices: given a family of real numbers, find
every symmetric matrix that has the family as its eigenvalues. More
particularly, the minimum rank problem is a sub-problem of the inverse
eigenvalue problem for graphs, which fixes a zero/nonzero pattern for
the symmetric matrices considered in the inverse eigenvalue problem.
The minimum rank problem can also be thought of in this way: given a
fixed pattern of off-diagonal zeros, what is the smallest rank that a
symmetric matrix having that pattern can achieve?
Up to the addition of isolated vertices, it is easy to see that
$\ensuremath{\mathcal{G}}_1(F)=\{K_n\,\mid\, n\in\ensuremath{\mathbb{N}}\}$ for any field $F$. In
\cite{barrett-vdHL-minrank2-infinite} and
\cite{barrett-vdHL-minrank2-finite}, $\ensuremath{\mathcal{G}}_2(F)$ was characterized for
any field $F$ both in terms of forbidden subgraphs and in terms of the
structure of the graph complements. The forbidden subgraph characterizations in
these papers used ten or fewer graphs for each field.
Restricting our focus to finite fields, let $\ensuremath{\mathbb{F}}_q$ denote the finite
field with $q$ elements. Ding and Kotlov
\cite{ding-kotlov-minrank-finite} independently used structures
similar to those introduced in this paper to obtain an upper bound for
the sizes of minimal forbidden subgraphs characterizing $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$
for any $k$ and any $q$. This result implies that there are a finite
number of forbidden subgraphs characterizing $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$. In
\cite{barrett-grout-loewy-mrF2R3}, the bound of Ding and Kotlov was
improved greatly for $\ensuremath{\mathcal{G}}_3(\ensuremath{\mathbb{F}}_2)$ and this set was characterized by
62 forbidden subgraphs. This result and further computations confirm
our intuition that the forbidden subgraph characterizations of
$\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ quickly become complicated as $k$ increases.
In this paper, we will characterize the structure of graphs in
$\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ for any $k$ and any $q$. The characterization is
simply stated and has a very strong connection to projective geometry
over finite fields. At the end of the paper, we will list a few of
the ramifications of this connection to projective geometry.
We adopt the following notation dealing with fields, vector spaces,
and matrices. Given a field $F$, the group of nonzero elements under
multiplication is denoted $F^\times$ and the vector space of dimension $k$
over $F$ is denoted $F^k$. Given a matrix $M$, the principal
submatrix lying in the rows and columns $x_1,x_2,\ldots,x_m$ is denoted
$M[x_1,x_2,\ldots,x_m]$.
As an example of how one might approach the problem of finding the
minimum rank of a simple graph, we recall from
\cite{barrett-vdHL-minrank2-finite} the fullhouse graph in
Figure~\ref{fig:fullhouse} (there called $(P_3\cup 2K_1)^c$), which is
the only graph on 5 or fewer vertices for which the minimum rank is
field-dependent.
\begin{figure}[h!]
\centering
{\rm \input graphs/fullhouse.tex }
\caption{A labeled fullhouse graph}
\label{fig:fullhouse}
\end{figure}
If $F \neq \ensuremath{\mathbb{F}}_2$, there are elements $a, b \neq 0$ in $F$ such that $a+b
\neq 0$. Then
\begin{align*}
\begin{bmatrix}
a&a&a&0&0\\
a&a+b&a+b&b&b\\
a&a+b&a+b&b&b\\
0&b&b&b&b\\
0&b&b&b&b
\end{bmatrix} \in S(F,\mathrm{fullhouse})
\end{align*}
which shows that $\mr(F,\mathrm{fullhouse})\leq 2$ (and hence $\mr(F,\mathrm{fullhouse})=2$, since the fullhouse graph is not a union of a complete graph with isolated vertices). The case $F=\ensuremath{\mathbb{F}}_2$ gives a different
result. Let $A$ be any matrix in $S(\ensuremath{\mathbb{F}}_2,\mathrm{fullhouse})$. Then for some
$d_1,d_2,\ldots, d_5\in \ensuremath{\mathbb{F}}_2$,
\begin{align*}
A= \begin{bmatrix}
d_1&1&1&0&0\\
1&d_2&1&1&1\\
1&1&d_3&1&1\\
0&1&1&d_4&1\\
0&1&1&1&d_5
\end{bmatrix}
\quad \text{ and } \quad \det (A[\{1, 2,
5\},\{1, 3, 4\}])=\left| \begin{array}{ccc}
d_1&1&0\\1&1&1\\0&1&1\end{array}\right| =1,
\end{align*}
where $A[\{1, 2, 5\},\{1, 3, 4\}]$ is the submatrix of $A$ lying in rows
$\{1,2,5\}$ and columns $\{1,3,4\}$. Therefore $\mr(\ensuremath{\mathbb{F}}_2,\mathrm{fullhouse})\geq 3$.
Setting each $d_i$ to 1 verifies that $\mr(\ensuremath{\mathbb{F}}_2,\mathrm{fullhouse})= 3$.
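The computations above can also be verified mechanically. The following sketch (ours, not from the paper) brute-forces all $2^5$ diagonal choices over $\ensuremath{\mathbb{F}}_2$, and checks the rank-2 witness over $\ensuremath{\mathbb{F}}_3$ obtained by taking $a = b = 1$:

```python
from itertools import product

# Edges of the fullhouse graph, read off the off-diagonal pattern above:
EDGES = {(1, 2), (1, 3), (2, 3), (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)}

def rank_mod_p(M, p):
    """Rank of an integer matrix over F_p (p prime), by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, n = 0, len(M[0])
    for col in range(n):
        piv = next((r for r in range(rank, len(M)) if M[r][col] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], p - 2, p)       # Fermat inverse, valid for prime p
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][col] % p:
                f = M[r][col]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def fullhouse(diag):
    """A matrix in S(F_2, fullhouse) with the given diagonal d_1, ..., d_5."""
    A = [[0] * 5 for _ in range(5)]
    for (i, j) in EDGES:
        A[i - 1][j - 1] = A[j - 1][i - 1] = 1
    for i in range(5):
        A[i][i] = diag[i]
    return A

# Over F_2, every diagonal gives rank at least 3, and rank 3 is attained:
ranks2 = [rank_mod_p(fullhouse(d), 2) for d in product((0, 1), repeat=5)]
assert min(ranks2) == 3

# Over F_3 (a = b = 1, so a + b = 2 != 0), the displayed matrix has rank 2:
M3 = [[1, 1, 1, 0, 0],
      [1, 2, 2, 1, 1],
      [1, 2, 2, 1, 1],
      [0, 1, 1, 1, 1],
      [0, 1, 1, 1, 1]]
assert rank_mod_p(M3, 3) == 2
```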
In spite of this dependence on the field, there are a number of
results about minimum rank that are field independent. For example,
the minimum rank of a tree is field independent (see any of
\cite{bank-thesis}, \cite{sinkovic-thesis}, or
\cite{hogben-minrank-tree}). Many of the forbidden subgraphs
classifying $\ensuremath{\mathcal{G}}_3(\ensuremath{\mathbb{F}}_2)$ that are found in
\cite{barrett-grout-loewy-mrF2R3} are also forbidden subgraphs for
$\ensuremath{\mathcal{G}}_3(F)$ for any field $F$. These results and others demonstrate
that results obtained over finite fields can provide important
insights for other fields.
The presentation of material in this paper is oriented towards a
reader that is familiar with concepts from linear algebra and graph
theory. In the rest of this section, we will review some of our
conventions in terminology from graph theory.
In this paper, graphs are undirected, may have loops, but will not
have multiple edges between vertices. To simplify our drawings, a
vertex with a loop (a \emph{looped vertex}) will be filled (black) and
a vertex without a loop (a \emph{nonlooped vertex}) will be empty
(white). A \emph{simple graph} is a graph without loops. Let $G$ be
a graph with some loops and $\hat G$ be the simple version of $G$ obtained
by deleting all loops. We say that a matrix in $S(F,\hat G)$
\emph{corresponds} to the simple graph $\hat G$. A matrix $A\in S(F,\hat G)$
\emph{corresponds} to $G$ if $a_{ii}$ is nonzero exactly when the
vertex $i$ has a loop in $G$. Note that if a matrix corresponds to a
looped graph, then it also corresponds to the simple version of the
graph.
We recall some notation from graph theory.
\begin{definition}
Given two graphs $G$ and $H$ with disjoint vertex sets $V(G)$ and
$V(H)$ and edge sets $E(G)$ and $E(H)$, the \emph{union} of $G$ and
$H$, denoted $G \cup H$, has vertices $V(G)\cup V(H)$ and edges $E(G)\cup
E(H)$. The \emph{join} of $G$ and $H$, denoted $G\lor H$, has vertices
$V(G)\cup V(H)$ and edges $E(G)\cup E(H)\cup\{uv \,\mid\, u\in V(G),\, v\in V(H)\}$.
The complement of the graph $G$, denoted $G^c$, has vertices $V(G)$
and edges $\{uv \,\mid\, u,v\in V(G),\, uv\not\in E(G)\}$. Note that a vertex
is looped in $G$ if and only if it is nonlooped in $G^c$.
\end{definition}
\begin{definition}
The simple complete graph on $n$ vertices will be denoted by $K_n$
and has vertices $\{1,2,\ldots,n\}$ and edges $\{xy\,\mid\, x,y\in V(K_n), x\neq y\}$.
The simple complete multipartite graph $K_{s_1,s_2,\ldots,s_m}$ is
defined as $K_{s_1}^c\lor K_{s_2}^c\lor\cdots \lor K_{s_m}^c$.
\end{definition}
\begin{definition}
Two vertices in a graph are \emph{adjacent} if an edge connects them. A
{\it clique} in a graph is a set of pairwise adjacent vertices. An
{\it independent set} in a graph is a set of pairwise nonadjacent
vertices.
\end{definition}
The next definition extends a standard definition introduced in
\cite{blowup-lemma} and is used in random graph theory in connection
with the regularity lemma.
\begin{definition}
A \emph{blowup} of a graph $G$ with vertices $\{v_1,v_2,\ldots,v_n\}$ is a
new simple graph $H$ constructed by replacing each nonlooped vertex
$v_i$ in $G$ with a (possibly empty) independent set $V_i$, each
looped vertex $v_i$ with a (possibly empty) clique $V_i$, and each
edge $v_iv_j$ in $G$ ($i\neq j$) with the edges $\{xy \,\mid\, x\in V_i, y\in
V_j\}$ in $H$.
\end{definition}
\begin{example}\label{ex:blowup}
Let $G$ be the graph labeled in Figure~\ref{fig:example-blowup}\subref{fig:example-blowup-orig}.
\begin{figure}[h!]
\centering
\subfloat[$G$]{\input{graphs/example1twinreduction.tex}\label{fig:example-blowup-orig}}
\qquad\qquad
\subfloat[$H$, a blowup of~$G$]{\input{graphs/example1twinexpansion.tex}\label{fig:example-blowup-blowup}}
\caption{Graphs in Example~\ref{ex:blowup}}
\label{fig:example-blowup}
\end{figure}
Let $|V_1|=3$, $|V_2|=1$, $|V_3|=2$, and $|V_4|=0$. Then we obtain
the simple blowup graph $H$ in
Figure~\ref{fig:example-blowup}\subref{fig:example-blowup-blowup}. It
is useful to see how matrices corresponding to a graph and a blowup of
the graph are related. Over $\ensuremath{\mathbb{F}}_3$, let
\begin{align*}
M=\left[ \begin{array}{c|c|c|c}
0&2&0&0\\ \hline
2&1&1&0\\ \hline
0&1&1&1\\ \hline
0&0&1&1 \end{array}\right]
\quad \text{ and } \quad
N=\left[ \begin{array}{ccc|c|cc}
0&0&0&1&0&0\\ 0&0&0&2&0&0\\ 0&0&0&1&0&0\\\hline
1&2&1&0&1&1\\\hline
0&0&0&1&0&1\\ 0&0&0&1&1&2 \end{array}\right].
\end{align*}
Then $M$ is an example of a matrix corresponding to $G$ and $N$ is an
example of a matrix corresponding to $H$. Note that, for example, the
entry $m_{11}$ was replaced with a $3\times 3$ zero block in $N$, the entry
$m_{12}$ was replaced with a $3\times 1$ nonzero block in $N$, the entries
in the last row and column of $M$ were replaced with empty blocks
(i.e., erased), and the diagonal entries of $N$ were changed to
whatever was desired. These substitutions of block matrices
correspond to the vertex substitutions used to construct $H$.
\end{example}
We will introduce our method by presenting a proof of a special case
of a characterization theorem from \cite{barrett-vdHL-minrank2-finite}
which characterizes $\ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{F}}_2)$. We will then generalize this
proof into a characterization of all simple graphs in $\ensuremath{\mathcal{G}}_{k}(\ensuremath{\mathbb{F}}_q)$
for any $k$ and $q$. After giving examples for some specific $k$ and
$q$, we will describe the strong connection to projective geometry and
list some consequences of this connection.
\section{A new approach to a recent result}
We will introduce our method by giving a proof of a special case of
Theorems~5 and 6 of \cite{barrett-vdHL-minrank2-finite}.
\begin{theorem}[{\cite{barrett-vdHL-minrank2-finite}}]
\label{thm:f2r2-orig}
Let $G$ be a simple graph on $n$ vertices. Then $\mr(\ensuremath{\mathbb{F}}_2,G)\leq2$ if
and only if the simple version of $G^c$ is either of the form
\begin{equation*}
(K_{s_1}\cup K_{p_1,q_1})\lor K_r
\end{equation*}
for some appropriate nonnegative integers $s_1$, $p_1$, $q_1$, and
$r$, or of the form
\begin{equation*}
(K_{s_1}\cup K_{s_2}\cup K_{s_3})\lor K_r
\end{equation*}
for some appropriate nonnegative integers $s_1$, $s_2$, $s_3$, and $r$.
\end{theorem}
We first rephrase Theorem~\ref{thm:f2r2-orig} using blowup graph terminology.
\begin{theorem}[{\cite{barrett-vdHL-minrank2-finite}}]
\label{thm:f2r2}
Let $G$ be a simple graph on $n$ vertices. Then
\mbox{$\mr(\ensuremath{\mathbb{F}}_2,G)\leq2$} (i.e., $G\in \ensuremath{\mathcal{G}}_2(\ensuremath{\mathbb{F}}_2)$) if and only if
$G$ is a blowup of either of the graphs in Figure~\ref{fig:thm-f2r2}.
\end{theorem}
\begin{figure}[h!]
\centering
\subfloat[]{\input{graphs/P3UK1.tex}\label{fig:thm-f2r2-a}}
\qquad\qquad\qquad
\subfloat[]{\input{graphs/K3UK1.tex}\label{fig:thm-f2r2-b}}
\caption{Graphs in Theorem~\ref{thm:f2r2}}
\label{fig:thm-f2r2}
\end{figure}
In the proof of this result, we will need the following lemma and
corollary, which hold in any field. We will then give a proof of Theorem~\ref{thm:f2r2}.
\begin{lemma}[{\cite[Theorem~8.9.1]{gr-graph-theory}}]
\label{lem:gr-rank-decomp}
Let $A$ be an $n\times n$ symmetric matrix of rank~$k$. Then there
is an invertible principal $k\times k$ submatrix $B$ of $A$ and a
$k\times n$ matrix $U$ such that
\begin{align*}
A=U^t B U.
\end{align*}
\end{lemma}
\begin{corollary}
\label{cor:rank-decomp}
Let $A$ be an $n\times n$ symmetric matrix. Then $\rank A \leq k$ if
and only if there is some invertible $k\times k$ matrix $B$ and
$k\times n$ matrix $U$ such that $A=U^tBU$.
\end{corollary}
\begin{proof}
Let $A$ have rank $r\leq k$. Then by
Lemma~\ref{lem:gr-rank-decomp}, there is an invertible $r\times r$
matrix $B_1$ and an $r\times n$ matrix $U_1$ such that
$A=U_1^tB_1U_1$. Let $B_2=\begin{bmatrix} B_1&O\\O&I_{k-r}\end{bmatrix}$ and
$U_2=\begin{bmatrix} U_1\\O \end{bmatrix}$ (where $O$ represents a zero matrix of the
appropriate size). Then $A=U_2^tB_2U_2$.
The reverse implication follows from the rank inequality
$\rank(U^tBU)\leq \rank B$.
\end{proof}
Recall that two square matrices $A$ and $B$ are congruent if there
exists some invertible matrix $C$ such that $A=C^tBC$. It is
straightforward to show that congruence is an equivalence relation.
Let $\mathcal{B}$ consist of one representative from each congruence
equivalence class of invertible symmetric $k\times k$ matrices. By
Corollary~\ref{cor:rank-decomp}, if $A$ is a symmetric $n\times n$ matrix
with $\rank A\leq k$, then $A\in \{U^tBU \,\mid\, B\in\mathcal{B},\, U \text{ a $k\times
n$ matrix}\}$.
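This reduction can be checked exhaustively in a small case. The sketch below (ours, not part of the development) verifies for $k = 2$, $q = 2$, $n = 4$ that the symmetric matrices of rank at most $2$ are exactly the products $U^tBU$ with $B$ one of the two congruence representatives computed in the proof that follows ($I_2$ and the matrix with zero diagonal):

```python
from itertools import product

def rank2(rows):
    """Rank over F_2 of a matrix given as a list of 0/1 rows."""
    rows = [list(r) for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def mat_mul(X, Y):
    return tuple(tuple(sum(X[i][t] * Y[t][j] for t in range(len(Y))) % 2
                       for j in range(len(Y[0]))) for i in range(len(X)))

def transpose(X):
    return tuple(zip(*X))

n = 4
B_reps = (((1, 0), (0, 1)), ((0, 1), (1, 0)))   # I_2 and the zero-diagonal rep

# All products U^t B U, over every 2 x n matrix U and both representatives:
products = set()
for cols in product(product((0, 1), repeat=2), repeat=n):
    U = transpose(cols)                  # 2 x n matrix with the chosen columns
    for B in B_reps:
        products.add(mat_mul(transpose(U), mat_mul(B, U)))

# All symmetric n x n matrices over F_2 with rank at most 2:
low_rank = set()
for bits in product((0, 1), repeat=n * (n + 1) // 2):
    A = [[0] * n for _ in range(n)]
    it = iter(bits)
    for i in range(n):
        for j in range(i, n):
            A[i][j] = A[j][i] = next(it)
    if rank2(A) <= 2:
        low_rank.add(tuple(tuple(r) for r in A))

assert products == low_rank
```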
We now proceed with the proof of Theorem~\ref{thm:f2r2}.
\begin{proof}[Proof of Theorem~\ref{thm:f2r2}]
First, we compute a suitable $\mathcal{B}$, a set of representatives
from the congruence classes of invertible symmetric $2\times 2$ matrices
over $\ensuremath{\mathbb{F}}_2$. If an invertible symmetric $2\times 2$ matrix $B$ over
$\ensuremath{\mathbb{F}}_2$ has a nonzero diagonal entry, then $B=\begin{bmatrix} 1&1\\1&0
\end{bmatrix}$, $B=\begin{bmatrix} 0&1\\1&1\end{bmatrix}$, or $B=I_2$. In any of these three
cases, $B^tBB=I_2$, so $B$ is congruent to the identity matrix
$I_2$. If an invertible symmetric $2\times 2$ matrix $B$ over $\ensuremath{\mathbb{F}}_2$
has all zeros on the diagonal, then the off-diagonal entries must be
nonzero, so $B=\begin{bmatrix} 0&1\\1&0\end{bmatrix}$. In this case,
\begin{align*}
\begin{bmatrix} a&c\\b&d\end{bmatrix} \begin{bmatrix} 0&1\\1&0\end{bmatrix} \begin{bmatrix} a&b\\c&d\end{bmatrix} = \begin{bmatrix} ac+ac&ad+bc\\ad+bc&bd+bd\end{bmatrix}=
\begin{bmatrix} 0&ad+bc\\ad+bc&0\end{bmatrix},
\end{align*}
so any matrix congruent to $B$ will have a zero diagonal.
Therefore, a suitable $\mathcal{B}$ is
\begin{align*}
\mathcal{B}=\left\{I_2,\begin{bmatrix} 0&1\\1&0\end{bmatrix}\right\}.
\end{align*}
Because $U$ is a matrix with entries in $\ensuremath{\mathbb{F}}_2$, the columns of $U$
are members of the finite set
\begin{align*}
\left\{ \begin{bmatrix} 1\\0 \end{bmatrix}, \begin{bmatrix} 0\\1 \end{bmatrix}, \begin{bmatrix} 1\\1 \end{bmatrix},
\begin{bmatrix} 0\\0 \end{bmatrix} \right\}.
\end{align*}
Let $A$ be a symmetric $n\times n$ matrix.  For any $n \times n$
permutation matrix $P$, the graphs of $A$ and $P^tAP$ are isomorphic.
Therefore we may assume that identical columns of $U$ are contiguous
and write $U = \begin{bmatrix} E_1 & E_2 & J & O \end{bmatrix}$, where $E_1$ is a $2 \times
p$ matrix with each column equal to $\begin{bmatrix} 1\\0\end{bmatrix}$, $E_2$ is a $2
\times q$ matrix with each column equal to $\begin{bmatrix} 0\\1\end{bmatrix}$, $J$ is a
$2 \times r$ matrix with each entry equal to $1$, and $O$ is a $2
\times t$ zero matrix. Then either
$$A=\begin{bmatrix}
E_1^t\\
E_2^t\\
J^t\\
O^t\end{bmatrix} \begin{bmatrix}
E_1 & E_2 & J & O \end{bmatrix} = \begin{bmatrix}
J_p & O & J_{p,r} & O\\
O & J_q & J_{q,r} & O\\
J_{r,p} & J_{r,q} & O_r & O\\
O & O & O & O_t\end{bmatrix}$$
or else $$A=\begin{bmatrix}
E_1^t\\
E_2^t\\
J^t\\
O^t\end{bmatrix} \begin{bmatrix}
0 & 1\\
1 & 0\end{bmatrix} \begin{bmatrix}
E_1 & E_2 & J & O \end{bmatrix} = \begin{bmatrix}
O_p & J_{p,q} & J_{p,r} & O\\
J_{q,p} & O_q & J_{q,r} & O\\
J_{r,p} & J_{r,q} & O_r & O\\
O & O & O & O_t\end{bmatrix},$$
where $J$ is an all-ones matrix, $O$ is a zero matrix, and subscripts of $J$ and $O$ denote the dimensions of the matrix.
Any simple graph corresponding to the first matrix is a blowup of the
graph in Figure~\ref{fig:thm-f2r2}\subref{fig:thm-f2r2-a}, while any
simple graph corresponding to the second matrix is a blowup of the
graph in Figure~\ref{fig:thm-f2r2}\subref{fig:thm-f2r2-b}. Thus we
have established Theorem~\ref{thm:f2r2}.
\end{proof}
\begin{observation} \label{obs:simplified-gFk}
Note that every block in the above matrices is either an $O$ matrix or
a $J$ matrix. Consequently, we could have obtained the zero/nonzero
form of the matrices with rank at most 2 by only considering $U = \begin{bmatrix}
1 & 0 & 1 & 0\\
0 & 1 & 1 & 0\end{bmatrix}$ and computing
\begin{align*}
A = U^tU = \begin{bmatrix}
1 & 0 & 1 & 0\\
0 & 1 & 1 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 0 & 0\end{bmatrix}
\end{align*}
and, with $B_2=\begin{bmatrix} 0&1\\1&0\end{bmatrix}$,
\begin{align*}
A = U^tB_2U = \begin{bmatrix}
1 & 0\\
0 & 1\\
1 & 1\\
0 & 0\end{bmatrix} \begin{bmatrix}
0 & 1 & 1 & 0\\
1 & 0 & 1 & 0\end{bmatrix} = \begin{bmatrix}
0 & 1 & 1 & 0\\
1 & 0 & 1 & 0\\
1 & 1 & 0 & 0\\
0 & 0 & 0 & 0\end{bmatrix}.
\end{align*}
The nonzero diagonal entries correspond to loops in our graphs. This
simplified procedure again yields the graphs in
Figure~\ref{fig:thm-f2r2}.
\end{observation}
In the proof of Theorem~\ref{thm:f2r2}, we noted that any $U$ could be
written in a standard form. In Observation~\ref{obs:simplified-gFk},
we saw how the standard form of $U$ could be simplified to take advantage
of the theorem being about blowup graphs. We will now discuss the
reasoning behind these constructions and show that an analogous
standard form of $U$ exists for any finite field and any $k$.
Because we construct the graphs using representatives of
congruence classes, it is important for any simplified $U$ to have the
property that if $B$ and $\hat B$ are congruent, then $U^tBU$ and
$U^t\hat B U$ correspond to isomorphic graphs. The following lemma
shows that if we take a matrix $U$ where the columns consist of all
vectors in $\ensuremath{\mathbb{F}}_q^k$, like in Observation~\ref{obs:simplified-gFk},
and if $B$ and $\hat B$ are congruent, then $U^tBU$ and $U^t\hat BU$
correspond to isomorphic graphs.
\begin{lemma}\label{lem:U-all-vectors}
Let $U$ be the matrix with columns $\{v \,\mid\, v\in \ensuremath{\mathbb{F}}_q^k\}$.
Let $B$ and $C$ be invertible $k\times k$ matrices with $B$ symmetric. Then the graphs corresponding to
$U^tBU$ and $U^t(C^tBC)U$ are isomorphic.
\end{lemma}
\begin{proof}
Since every vector in $\ensuremath{\mathbb{F}}_q^k$ appears as a column of $U$ and the mapping
$x\mapsto Cx$ is one-to-one, $CU$ is just a column permutation of
$U$. This permutation induces a relabeling of the graph $U^tBU$ to
give the graph of $(CU)^tB(CU)=U^t(C^tBC)U$.
\end{proof}
Though this invariance property with respect to congruent matrices does not
hold for an arbitrary $U$, there is another smaller $U$ which does
have the same property. We first need some preliminary material.
Then we will introduce this new $U$ in Lemma~\ref{lem:congruent-isomorphic}.
\begin{definition}
Let $F$ be a field. Two nonzero vectors $v_1,v_2\in F^k$ are
\emph{projectively equivalent} if there exists some nonzero $c\in F$
such that $v_1=cv_2$.
\end{definition}
It is easy to check that projective equivalence is in fact
an equivalence relation on the nonzero vectors in $F^k$.
We pause to note that replacing a column of $U$ with a projectively
equivalent column does not affect the graph corresponding to $U^tBU$.
To see this, let $U=[u_1\; u_2\; \cdots \; u_n]$ and let $i\in\{1,2,\ldots,n\}$.
Let $\hat U$ be the matrix obtained from $U$ by replacing the column
$u_i$ with $cu_i$ for some nonzero $c\in F$. Then the $i,j$ entry of
$\hat U^tB\hat U$, $(cu_i)^tBu_j$ if $i\neq j$ or $(cu_i)^tB(cu_i)$ if
$i=j$, is zero if and only if the $i,j$ entry of $U^tBU$, $u_i^tBu_j$,
is zero. Thus the graphs associated with $U^tBU$ and $\hat U^t B \hat
U$ are equal.
\begin{lemma}\label{lem:projective-permutation}
Let $F$ be any field, let $x\in F^k$, let $\ensuremath{\bar x}$ denote
the projective equivalence class of $x$, and let $P=\cup_{x\in
F^k-\ensuremath{\vec 0}}\{\ensuremath{\bar x}\}$, the set of projective equivalence classes in $F^k$.
Let $C$ be an invertible matrix. Then the map $f\colon P \to P$
defined by $f\colon \ensuremath{\bar x} \mapsto \class{Cx}$ is a bijection.
\end{lemma}
\begin{proof}
The function $f$ is well-defined since if $Cx=y$, then for any
nonzero $c\in F$, $\class{C(cx)}=\class{cCx}=\class{cy}=\ensuremath{\bar y}$.  If $
\class{Cx_1}=\class{Cx_2}$, then for some nonzero $c\in F$,
$cCx_1=Cx_2$, which implies $C(cx_1-x_2)=0$, giving $cx_1=x_2$ since
$C$ is invertible. Therefore $\class{x_1}=\class{x_2}$ and $f$ is
injective. Surjectivity of $f$ also follows from the hypothesis
that $C$ is invertible.
\end{proof}
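A quick computational illustration of Lemma~\ref{lem:projective-permutation} (ours, with an arbitrarily chosen invertible $C$ over $\ensuremath{\mathbb{F}}_3$ and $k=2$): the map $\ensuremath{\bar x} \mapsto \class{Cx}$ is well-defined and permutes the four projective classes.

```python
from itertools import product

p, k = 3, 2
vectors = [v for v in product(range(p), repeat=k) if any(v)]

def canon(v):
    """Canonical representative of the projective class of v: scale so
    that the first nonzero coordinate equals 1."""
    lead = next(x for x in v if x)
    inv = pow(lead, p - 2, p)             # inverse of the leading entry mod p
    return tuple(x * inv % p for x in v)

classes = {canon(v) for v in vectors}
assert len(classes) == (p ** k - 1) // (p - 1)   # four projective points

C = ((1, 2), (1, 1))    # invertible over F_3: det = 1*1 - 2*1 = -1 != 0

def apply(C, v):
    return tuple(sum(C[i][j] * v[j] for j in range(k)) % p for i in range(k))

# Well-defined: the image class does not depend on the representative.
for v in vectors:
    assert canon(apply(C, v)) == canon(apply(C, canon(v)))

# Bijective: the image of the class set is the class set itself.
image = {canon(apply(C, x)) for x in classes}
assert image == classes
```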
\begin{lemma}\label{lem:congruent-isomorphic}
Let $\class{x_1}, \class{x_2},\ldots,\class{x_m}$ be the projective
equivalence classes of $\ensuremath{\mathbb{F}}_q^k-\ensuremath{\vec 0}$, with each $x_i$ as a chosen
representative from its class. Let $U=[x_1\;x_2\;\cdots\; x_m]$, the
matrix with column vectors $x_1,x_2,\ldots,x_m$. Let $B$ and $C$ be
invertible $k\times k$ matrices with $B$ symmetric. Then the graphs
corresponding to $U^tBU$ and $U^t(C^tBC)U$ are isomorphic.
\end{lemma}
\begin{proof}
Let $T=CU$. Denote the $i$th column of $U$ by $u_i$ and the $i$th
column of $T$ by $t_i$. By Lemma~\ref{lem:projective-permutation},
the sequence of projective equivalence classes $\class{t_1},
\class{t_2}, \ldots, \class{t_m}$ is just a permutation of the sequence
$\class{u_1},\class{u_2},\ldots,\class{u_m}$.  Form the matrix $S$ in
which the $i$th column, $s_i$, is $u_j$ if
$\class{t_i}=\class{u_j}$, so that $S$ is a column permutation of
$U$ and $\class{s_i}=\class{t_i}$. Then the graph corresponding to
$U^t(C^tBC)U=(C U)^tB(C U)=T^tBT$ is isomorphic to the graph
corresponding to $S^tBS$ by the reasoning preceding
Lemma~\ref{lem:projective-permutation}, which is in turn just a
relabeling of the graph corresponding to $U^tBU$.
\end{proof}
We now find a standard form for any matrix $U$, as in our proof of
Theorem~\ref{thm:f2r2}. Let $U$ be a $k\times n$ matrix over $\ensuremath{\mathbb{F}}_q$ and
let $B$ be an invertible symmetric $k\times k$ matrix over $\ensuremath{\mathbb{F}}_q$. Let
$\class{x_1}, \class{x_2},\ldots,\class{x_m}$ be the projective equivalence
classes of $\ensuremath{\mathbb{F}}_q^k-\ensuremath{\vec 0}$, with each $x_i$ as a chosen representative
from its class. For each nonzero column $u_i$ of $U$, replace $u_i$
with the chosen representative of $\class{u_i}$. Then permute the
columns of $U$ so that the matrix is of the form $\hat U=[X_1 \; X_2
\; \cdots\; X_m\; O]$, where each $X_i$ is a block matrix of columns equal
to $x_i$ and $O$ is a zero block matrix. Note that some of these
blocks may be empty. Let $G$ be the simple graph corresponding to
$U^tBU$ and let $\hat G$ be the simple graph corresponding to $\hat
U^tB\hat U$. From our results above, $G$ is isomorphic to $\hat G$.
As illustrated in Observation~\ref{obs:simplified-gFk}, we can obtain
the zero/nonzero structure of the block matrix $\hat U^tB\hat U$ by
simply deleting all duplicate columns of $\hat U$. Deleting these
duplicate columns of $\hat U$ leaves a matrix that can be obtained
from $\tilde U=[x_1\;x_2\;\cdots\;x_m\;0]$ by deleting the columns of
$\tilde U$ corresponding to empty blocks of $\hat U$. Let $\tilde G$
be the (looped) graph corresponding to $\tilde U^tB\tilde U$. Then
$\hat G$ is a blowup of $\tilde G$, which implies that $G$ is a blowup
of $\tilde G$.
Furthermore, let $\mathcal{B}$ be a set consisting of one
representative from each congruence class of invertible symmetric $k\times
k$ matrices and let $\hat B$ be the representative that is congruent
to $B$. Then from Lemma~\ref{lem:congruent-isomorphic}, the graphs
corresponding to $\tilde U^t B \tilde U$ and $\tilde U^t\hat B \tilde
U$ are isomorphic.
There is another simplification we can make. Notice that both graphs
displayed in Theorem~\ref{thm:f2r2} have an isolated nonlooped vertex.
This vertex came from the zero column vectors in $U$ and corresponds
to the fact that adding any number of isolated vertices to a graph
does not change its minimum rank. In any theorem like
Theorem~\ref{thm:f2r2}, each graph from which we construct blowups
will always have this isolated nonlooped vertex and so will be of the
form $G\cup K_1$. Note that in constructing such a graph $G$, it is
enough to assume that $\tilde U$ in the above paragraphs does not have
a zero column vector.
\begin{definition}\label{def:g_k}
Let $\class{x_1}, \class{x_2},\ldots,\class{x_m}$ be the projective
equivalence classes of $\ensuremath{\mathbb{F}}_q^k-\ensuremath{\vec 0}$, with each $x_i$ as a chosen
representative from its class. Let $\mathcal{B}$ be a set
consisting of one representative from each congruence class of
invertible symmetric $k\times k$ matrices. Let $U=[x_1\;x_2\;\cdots\; x_m]$,
the matrix with column vectors $x_1,x_2,\ldots,x_m$. We define the set
of graphs $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ as the set of graphs corresponding to the
matrices in $\{U^tBU \,\mid\, B\in \mathcal{B}\}$.
\end{definition}
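As a concrete sketch (ours, not code from the paper), Definition~\ref{def:g_k} can be evaluated directly for $k=2$, $q=2$: the three projective representatives of $\ensuremath{\mathbb{F}}_2^2-\ensuremath{\vec 0}$, paired with the two congruence representatives, reproduce the zero/nonzero patterns of Observation~\ref{obs:simplified-gFk} (nonzero diagonal entries marking loops), i.e., the graphs of Figure~\ref{fig:thm-f2r2} without the isolated vertex.

```python
# Projective representatives of F_2^2 minus zero: each class is a singleton.
reps = [(1, 0), (0, 1), (1, 1)]

def pattern(B):
    """Zero/nonzero pattern of U^t B U over F_2, where the columns of U
    are the projective representatives above."""
    return tuple(tuple(sum(x[i] * B[i][j] * y[j]
                           for i in range(2) for j in range(2)) % 2
                       for y in reps) for x in reps)

I2 = ((1, 0), (0, 1))
S = ((0, 1), (1, 0))

# B = I_2: a path with looped endpoints (first pattern in the Observation).
assert pattern(I2) == ((1, 0, 1), (0, 1, 1), (1, 1, 0))
# B = S: a triangle with no loops (second pattern in the Observation).
assert pattern(S) == ((0, 1, 1), (1, 0, 1), (1, 1, 0))
```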
We now have the following result (recall that $K_1$ has no loop).
\begin{theorem}
A simple graph $G$ is in $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ if and only if $G$ is a
blowup of some graph in $\{H\cup K_1 \,\mid\, H\in\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)\}$.
\end{theorem}
\begin{proof}
Let $G$ be a simple graph in $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$. Let $A\in S(\ensuremath{\mathbb{F}}_q,G)$ be
a matrix with $\rank A\leq k$. Then $A=U^tBU$ for some $k\times n$ matrix
$U$ and some invertible symmetric $k\times k$ matrix $B$. Using the
procedure outlined in the paragraphs following
Lemma~\ref{lem:congruent-isomorphic}, we see that $G$ is a blowup of
a graph $\tilde G$ corresponding to $\tilde U^t B \tilde U$, where
$\tilde U$ and $B$ are defined as in the procedure.
Lemma~\ref{lem:congruent-isomorphic} then shows that $\tilde
G\in\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$.
Conversely, let $G$ be a blowup of some graph in $\{H\cup K_1\,\mid\,
H\in\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)\}$ obtained by replacing each vertex $v_i$ of $H$ with
a set of vertices $V_i$ and $K_1$ with any number of vertices.
Deleting isolated vertices of $G$ does not change the minimum rank
of $G$, so without loss of generality, we will assume that $G$ has
no isolated vertices (which implies that $K_1$ was replaced with an
empty set of vertices). Let $\class{x_1},
\class{x_2},\ldots,\class{x_m}$ be the projective equivalence
classes of $\ensuremath{\mathbb{F}}_q^k-\ensuremath{\vec 0}$, with each $x_i$ as a chosen representative
from its class. Let $\tilde U= [x_1\; x_2\; \cdots \; x_m]$ and let $B$
be an invertible symmetric $k\times k$ matrix such that $\tilde U^t B
\tilde U$ corresponds to the graph $H$. Form the matrix $\hat
U=[X_1\;X_2\;\cdots \; X_m]$ by replacing each column $x_i$ of $\tilde U$
with the block $X_i$, where the columns of $X_i$ consist of $|V_i|$
copies of $x_i$. Then $\hat U^t B \hat U$ corresponds to $G$ and
$\rank \hat U^t B \hat U\leq k$ since $B$ has rank $k$. Thus
$\mr(\ensuremath{\mathbb{F}}_q,G)\leq k$, so $G\in\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$.
\end{proof}
Now we will make this into a more explicit characterization of
$\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ by finding a suitable $\mathcal{B}$ for any $k$ and any
$q$, thus enabling us to explicitly find $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ for any $k$
and any $q$.
\section{Congruence classes of symmetric matrices over finite fields}
\label{sec:congr-class-symm}
Symmetric matrices represent symmetric bilinear forms and play an
important role in projective geometry. Two congruent symmetric
matrices represent the same symmetric bilinear form with respect to
different bases. Because of their fundamental importance, congruence
classes of symmetric matrices over finite fields have been studied and
characterized for a long time in projective geometry. In this
section, we have distilled the pertinent proofs of these
characterizations from \cite{albert-matrices},
\cite{hirschfeld-projective}, and \cite{cohn-algebra} to give a
suitable $\mathcal{B}$ for invertible symmetric $k\times k$ matrices
over $\ensuremath{\mathbb{F}}_q$ for any $k$ and $q$. In the next section, we will
expound more on the connection between the minimum rank problem and
projective geometry.
We need the following elementary lemma.
\begin{lemma}\label{lem:make-diagonal}
If a symmetric matrix $B=\begin{bmatrix} C &D\\D^t&E \end{bmatrix}$, where $C$ is a
square invertible matrix, then $B$ is congruent to $\begin{bmatrix} C&O\\O&E'
\end{bmatrix}$, where $O$ is a zero matrix and $E'$ is a square symmetric
matrix of the same order as $E$.
\end{lemma}
\begin{proof}
Let $R=C^{-1}D$ so that $CR=D$. Then
\begin{align*}
\begin{bmatrix} I&O\\-R^t&I \end{bmatrix} \begin{bmatrix} C&D\\D^t&E\end{bmatrix} \begin{bmatrix} I&-R\\O&I\end{bmatrix}
&= \begin{bmatrix} C&D\\-R^tC+D^t&-R^tD+E\end{bmatrix} \begin{bmatrix} I&-R\\O&I\end{bmatrix} \\
&= \begin{bmatrix} C&-CR+D\\-R^tC+D^t&R^tCR-D^tR-R^tD+E\end{bmatrix}\\
&= \begin{bmatrix} C&O\\O&E-D^tR\end{bmatrix},
\end{align*}
since $-CR+D=O=(-CR+D)^t=-R^tC+D^t$.
\end{proof}
\begin{lemma}\label{lem:canonical-matrix}
Every symmetric matrix over $\ensuremath{\mathbb{F}}_q$ is congruent to a matrix of
the form $\diag(a_1,a_2,\ldots, a_s,b_1H_1,b_2H_2,\ldots, b_tH_t)$, where
$a_i,b_i\in \ensuremath{\mathbb{F}}_q$, $H_i=\begin{bmatrix} 0&1\\1&0\end{bmatrix}$, and $s$ and $t$ are
nonnegative integers.
\end{lemma}
\begin{proof}
If $B$ is the zero matrix, then the result is true.
If $B$ is not the zero matrix, then either the diagonal of $B$ has a
nonzero entry, or the diagonal of $B$ is zero and there is some
$b_{ij}\neq 0$, $i\neq j$, so that $B$
has a principal submatrix of the form $\begin{bmatrix} 0&b_{ij}\\b_{ij} &0
\end{bmatrix}=b_{ij}H$, where $H=\begin{bmatrix} 0&1\\1&0\end{bmatrix}$.
In the first case, by using a suitable permutation, we may assume that
$b_{11}\neq 0$. By Lemma~\ref{lem:make-diagonal}, $B$ is congruent to
$\diag(b_{11},B')$.
In the second case, again by using a suitable permutation, we may assume that
the upper left $2\times 2$ principal submatrix is $b_{ij}H$. By
Lemma~\ref{lem:make-diagonal}, $B$ is congruent to $\diag(b_{ij}H,B')$.
Continuing this process inductively with $B'$ and then applying a
suitable permutation shows that $B$ is congruent to $\diag(a_1,a_2,\ldots,
a_s,b_1H,b_2H,\ldots, b_tH)$.
\end{proof}
We will now treat the even characteristic and odd characteristic cases
separately.
\subsection{Even characteristic}
We first consider the case when $\ensuremath{\mathbb{F}}_q$ has even characteristic.
We begin with a well-known result.
\begin{lemma}
Every element in a finite field of characteristic 2 is a square.
\end{lemma}
\begin{corollary}\label{cor:diag-odd-char}
Every symmetric matrix over $\ensuremath{\mathbb{F}}_q$ is congruent to $\diag(I_s,H_1,H_2,\ldots, H_t)$.
\end{corollary}
\begin{proof}
By Lemma~\ref{lem:canonical-matrix}, a symmetric matrix $A$ is
congruent to a matrix
\begin{align*}
B=\diag(a_1,a_2,\ldots, a_s,b_1H_1,b_2H_2,\ldots, b_tH_t).
\end{align*}
Let
\begin{align*}
C=\diag(\frac{1}{\sqrt{a_1}},\frac 1 {\sqrt{a_2}},\ldots,\frac 1
{\sqrt{a_s}},\frac 1 {\sqrt{b_1}}I_2,\frac 1
{\sqrt{b_2}}I_2,\ldots,\frac 1 {\sqrt{b_t}}I_2).
\end{align*}
Then $C^tBC=\diag(I_s,H_1,H_2,\ldots,H_t)$.
\end{proof}
Let $B$ be a symmetric matrix over $\ensuremath{\mathbb{F}}_q$. Then according to
Corollary~\ref{cor:diag-odd-char}, $B$ is congruent to a matrix
$C=\diag(I_s,H_1,H_2,\ldots,H_t)$, where each
$H_i=[\begin{smallmatrix}0&1\\1&0\end{smallmatrix}]$. Either $s=0$ or $s>0$.
If $s>0$, then $\diag(I_s,H_1,H_2,\ldots,H_t)$, and thus $B$, is congruent
to $I_k$. To see this, let
\begin{align*}
A=\diag(1,H)=\begin{bmatrix} 1&0&0\\0&0&1\\0&1&0 \end{bmatrix} \quad \text{ and } \quad
C=\begin{bmatrix} 1&1&1\\1&0&1\\0&1&1 \end{bmatrix}.
\end{align*}
Then, since $\characteristic \ensuremath{\mathbb{F}}_q=2$,
\begin{align*}
C^t(AC)=\begin{bmatrix} 1&1&0\\1&0&1\\1&1&1 \end{bmatrix} \begin{bmatrix} 1&1&1\\0&1&1\\1&0&1
\end{bmatrix} = I_3.
\end{align*}
Repeating this congruence on each pair $(1,H_i)$ absorbs every block
$H_i$ into the identity block, so $B$ is congruent to $I_k$.
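This identity is quick to confirm numerically; a minimal Python check over $\ensuremath{\mathbb{F}}_2$:

```python
# Verify C^t (A C) = I_3 over F_2 for A = diag(1, H) and C as above.
A = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
C = [[1, 1, 1], [1, 0, 1], [0, 1, 1]]

def matmul2(X, Y):
    """Matrix product with entries reduced mod 2."""
    return [[sum(a * b for a, b in zip(row, col)) % 2 for col in zip(*Y)]
            for row in X]

Ct = [list(row) for row in zip(*C)]
result = matmul2(Ct, matmul2(A, C))  # equals the 3x3 identity
```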
If $s=0$, then $\diag(H_1,H_2,\ldots,H_t)$ and $B$ have even order and $B$
is congruent to $\diag(H_1,\ldots,H_{k/2})$.
The next lemma shows that these two cases are different.
\begin{lemma}
If a symmetric matrix $B$ has a zero diagonal, then every matrix
congruent to $B$ has a zero diagonal.
\end{lemma}
\begin{proof}
Let $B$ be a symmetric matrix having a zero diagonal. If $v$ is the
$k$th column of a matrix $C$, then the $(k,k)$ entry of $C^tBC$ is
$v^tBv$, which is zero because the diagonal of $B$ is zero and $2=0$ in
characteristic 2:
\begin{equation*}
v^tBv=\sum_{i,j}b_{ij}v_iv_j=\sum_{i}b_{ii}v_i^2+\sum_{i<j}2b_{ij}v_iv_j
= \sum_ib_{ii}v_i^2=0.\qedhere
\end{equation*}
\end{proof}
The results in this subsection give us the following lemma.
\begin{lemma}
\label{lem:B-for-even-char}
Let $q$ be even. To determine $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$, we may take $\mathcal{B}$
as follows: if $k$ is odd, then $\mathcal{B}=\{I_k\}$; if $k$ is
even, then $\mathcal{B}=\{I_k,\diag(H_1,H_2,\dots,H_{k/2})\}$, where
$H_i=\begin{bmatrix} 0&1\\1&0 \end{bmatrix}$.
\end{lemma}
\subsection{Odd characteristic}
We now consider the case when $\ensuremath{\mathbb{F}}_q$ has odd characteristic. We first need a
well-known result.
\begin{lemma}
If $\ensuremath{\mathbb{F}}_q$ has odd characteristic and $\nu\in \ensuremath{\mathbb{F}}_q$,
then there exist $c,d\in\ensuremath{\mathbb{F}}_q$ such that $c^2+d^2=\nu$.
\end{lemma}
\begin{proof}
Let $A=\{c^2 \,\mid\, c\in \ensuremath{\mathbb{F}}_q\}$ and $B=\{\nu - d^2 \,\mid\, d\in \ensuremath{\mathbb{F}}_q\}$.
Since the map $\sigma\colon \ensuremath{\mathbb{F}}_q^\times \to \ensuremath{\mathbb{F}}_q^\times$ given by
$\sigma\colon x\mapsto x^2$ has kernel $\{1,-1\}$, there are $(q-1)/2$
squares in $\ensuremath{\mathbb{F}}_q\setminus \{0\}$. Including zero, there are then
$(q+1)/2$ squares in $\ensuremath{\mathbb{F}}_q$. Thus $|A|=|B|=(q+1)/2$, so $A\cap
B\neq \emptyset$, and $c^2=\nu-d^2$ for some $c,d\in\ensuremath{\mathbb{F}}_q$.
\end{proof}
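The proof is nonconstructive, but the statement is easy to check exhaustively for small primes; an illustrative Python sketch (prime $q=p$ only; the helper name is ours):

```python
def two_square_rep(nu, p):
    """Find c, d with c^2 + d^2 = nu over F_p by brute force, or None."""
    for c in range(p):
        for d in range(p):
            if (c * c + d * d) % p == nu:
                return c, d
    return None
```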
Since there are $(q-1)/2$ nonzero squares in $\ensuremath{\mathbb{F}}_q$, given a nonsquare
$\nu\in\ensuremath{\mathbb{F}}_q$, the set $\{\nu b^2 \,\mid\, b\in\ensuremath{\mathbb{F}}_q, b\neq 0\}$ is a set of
$(q-1)/2$ nonsquares in $\ensuremath{\mathbb{F}}_q$. Consequently, every nonsquare is
equal to $\nu b^2$ for some $b\in\ensuremath{\mathbb{F}}_q$.
The matrix $aH$ for any $a\in \ensuremath{\mathbb{F}}_q$ is congruent to a diagonal matrix:
\begin{align*}
\begin{bmatrix} 1&1\\-1&1\end{bmatrix} \begin{bmatrix} 0&a\\a&0\end{bmatrix} \begin{bmatrix} 1&-1\\1&1\end{bmatrix}
= \begin{bmatrix} a&a\\a&-a\end{bmatrix} \begin{bmatrix} 1&-1\\1&1\end{bmatrix} = \begin{bmatrix} 2a&0\\0&-2a\end{bmatrix}.
\end{align*}
This fact combined with Lemma~\ref{lem:canonical-matrix} shows that
every symmetric matrix over $\ensuremath{\mathbb{F}}_q$ is congruent to a diagonal matrix.
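This diagonalization is constructive; the following Python sketch (odd primes $p$ only; the function names are ours) carries it out by simultaneous row and column operations, returning $C$ with $C^tBC$ diagonal.

```python
def congruent_diagonalize(B, p):
    """Return (C, D) with D = C^t B C diagonal over F_p, p an odd prime."""
    n = len(B)
    B = [row[:] for row in B]
    C = [[int(i == j) for j in range(n)] for i in range(n)]

    def add_col(src, dst, f):          # col_dst += f*col_src, row_dst += f*row_src
        for i in range(n):
            B[i][dst] = (B[i][dst] + f * B[i][src]) % p
        for j in range(n):
            B[dst][j] = (B[dst][j] + f * B[src][j]) % p
        for i in range(n):
            C[i][dst] = (C[i][dst] + f * C[i][src]) % p

    def swap(a, b):                    # swap columns a, b and rows a, b
        for i in range(n):
            B[i][a], B[i][b] = B[i][b], B[i][a]
        B[a], B[b] = B[b], B[a]
        for i in range(n):
            C[i][a], C[i][b] = C[i][b], C[i][a]

    for k in range(n):
        if B[k][k] == 0:
            j = next((j for j in range(k + 1, n) if B[k][j] != 0), None)
            if j is None:
                continue               # row/column k is already clear
            if B[j][j] != 0:
                swap(k, j)
            else:
                add_col(j, k, 1)       # makes B[k][k] = 2*B[k][j] != 0
        inv = pow(B[k][k], p - 2, p)   # inverse of the pivot mod p
        for i in range(k + 1, n):
            if B[i][k]:
                add_col(k, i, (-B[i][k] * inv) % p)
    return C, B
```

Creating a pivot from an off-diagonal entry produces $2b_{kj}$, which is nonzero only in odd characteristic, mirroring the computation above.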
\begin{lemma}
Every invertible symmetric $k\times k$ matrix $B$ over $\ensuremath{\mathbb{F}}_q$ is
congruent to either $I_k$ or $\diag(I_{k-1},\nu)$, where $\nu$ is
any nonsquare in $\ensuremath{\mathbb{F}}_q$.
\end{lemma}
\begin{proof}
Let $C$ be an invertible diagonal matrix congruent to $B$, with
$C=N^tBN$, and let $\nu$ be any nonsquare in $\ensuremath{\mathbb{F}}_q$.
Choose a permutation matrix $P$ so that $D=P^tCP=\diag(b_1^2,b_2^2,\ldots,
b_s^2, \nu c_1^2, \nu c_2^2,\ldots, \nu c_t^2)$, where the first $s$
elements of the diagonal of $D$ are squares in $\ensuremath{\mathbb{F}}_q$ and the last
$t$ elements are nonsquares in $\ensuremath{\mathbb{F}}_q$.
Let
$Q=\diag(b_1^{-1},b_2^{-1},\ldots,b_s^{-1},c_1^{-1},c_2^{-1},\ldots,c_t^{-1})$.
Let $E=Q^tDQ=\diag(I_s,\nu I_t)$.
Let $c,d\in \ensuremath{\mathbb{F}}_q$ such that $c^2+d^2=\nu$. Let
\begin{align*}
R=\nu^{-1}\begin{bmatrix}
c&d\\-d&c \end{bmatrix}.
\end{align*}
Since $\det R=\nu^{-2}(c^2+d^2) = \nu^{-1} \neq 0$, $R$ is
invertible. Note that
\begin{align*}
R^t(\nu I_2) R=\nu R^tR=\nu\nu^{-2}(c^2+d^2)I_2=I_2.
\end{align*}
If $t$ is even, let $S=\diag(I_s,R_1,R_2,\ldots,R_{t/2})$, where
$R_i=R$ for each $i$. Then $S^tES=I_k$. If $t$ is odd, let
$S=\diag(I_s,R_1,R_2,\ldots,R_{(t-1)/2},1)$. Then
$S^tES=\diag(I_{k-1},\nu)$.
\end{proof}
The next lemma shows that these two cases are in fact different and
gives a simple criterion for determining which congruence class any
invertible symmetric matrix is in.
\begin{lemma}
If $\det B$ is a square (nonsquare) and $\hat B$ is congruent to
$B$, then $\det \hat B$ is a square (nonsquare).
\end{lemma}
\begin{proof}
Let $\hat B=C^tBC$. Then $\det \hat B=(\det C)^2(\det B)$. Thus
$\det B$ is a square if and only if $\det \hat B$ is a square.
\end{proof}
Since $\det I_k=1$ is a square and $\det (\diag(I_{k-1},\nu))=\nu$ is a
nonsquare, we can determine if a matrix is congruent to $I_k$ or
congruent to $\diag(I_{k-1},\nu)$ by whether the determinant is a square
or not.
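For prime $q=p$ odd, this square test is exactly Euler's criterion; a minimal Python sketch (prime fields only; names ours):

```python
def is_square_mod(a, p):
    """Euler's criterion: nonzero a is a square mod an odd prime p
    iff a^((p-1)/2) = 1 mod p."""
    return pow(a % p, (p - 1) // 2, p) == 1

def congruence_class_of(det_b, p):
    """Canonical form an invertible symmetric matrix with determinant
    det_b is congruent to over F_p (p an odd prime)."""
    return "I_k" if is_square_mod(det_b, p) else "diag(I_{k-1}, nu)"
```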
It appears then that $|\mathcal{B}|=2$. However, we can do better in
one case, since we are only concerned with whether an entry of $U^tBU$
is zero or nonzero, not with its actual value.
\begin{definition}
Let $B$ and $\hat B$ be matrices. If $\hat B=dC^tBC$ for some
invertible matrix $C$ and some nonzero constant $d$, then $B$ and
$\hat B$ are \emph{projectively congruent}.
\end{definition}
Since multiplying by a nonzero constant preserves the zero/nonzero
pattern in a matrix over a field, if $B$ and $\hat B$ are projectively
congruent, then $U^tBU$ and $U^t\hat BU$ give isomorphic graphs.
\begin{lemma}
If $k$ is odd, then an invertible symmetric $k\times k$ matrix is
projectively congruent to $I_k$.
\end{lemma}
\begin{proof}
Let $k=2\ell-1$. Then $\det (\nu\diag(I_{k-1},\nu)) =
\nu^{2\ell-1}\nu=\nu^{2\ell}$ is a square, so by the two preceding
lemmas $\nu\diag(I_{k-1},\nu)$ is congruent to $I_k$. Thus
$\diag(I_{k-1},\nu)$ is projectively congruent to $I_k$.
\end{proof}
The results in this subsection give us the following lemma.
\begin{lemma}
\label{lem:B-for-odd-char}
Let $q$ be odd. To determine $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$, we may take
$\mathcal{B}$ as follows: if $k$ is odd, then $\mathcal{B}=\{I_k\}$;
if $k$ is even, then $\mathcal{B}=\{I_k,\diag(I_{k-1},\nu)\}$, where
$\nu$ is any nonsquare in $\ensuremath{\mathbb{F}}_q$.
\end{lemma}
\subsection{Summary}
Combining Lemmas~\ref{lem:B-for-even-char} and
\ref{lem:B-for-odd-char}, the results of this section can be
summarized as the following theorem.
\begin{theorem}\label{thm:main-theorem}
The set $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ is the set of graphs of the matrices in
$\{U^tBU \,\mid\, B\in\mathcal{B}\}$, where the columns of $U$ are a
maximal set of nonzero vectors in $\ensuremath{\mathbb{F}}_q^k$ such that no vector is
a multiple of another and $\mathcal{B}$ is given by:
\begin{enumerate}
\item \label{case:k-odd}if $k$ is odd, $\mathcal{B}=\{I_k\}$.
\item \label{case:k-even-q-even}if $k$ is even and $\characteristic \ensuremath{\mathbb{F}}_q=2$,
$\mathcal{B}=\{I_k,\diag(H_1,H_2,\dots,H_{k/2})\}$, where
$H_i=\begin{bmatrix} 0&1\\1&0 \end{bmatrix}$.
\item \label{case:k-even-q-odd}if $k$ is even and $\characteristic \ensuremath{\mathbb{F}}_q\neq 2$,
$\mathcal{B}=\{I_k,\diag(I_{k-1},\nu)\}$, where $\nu$ is any
non-square in $\ensuremath{\mathbb{F}}_q$.
\end{enumerate}
\end{theorem}
\subsection{Examples of characterizations}
\label{sec:example-subs-theorems}
As special cases of Theorem~\ref{thm:main-theorem}, we present the
following corollaries, which compute $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ for several
$k$ and $q$. In the corollaries, we label a graph in
$\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ with the pattern $FqRk$, signifying that it appears in
the corollary characterizing $\mr(\ensuremath{\mathbb{F}}_q,G)\leq k$. To compute these graphs, we
used the software program Sage \cite{sage-2.8.15} and the Sage functions
listed in Appendix~\ref{app:gen-subs-graphs}.
In these corollaries, recall that $K_1$ does not have a loop.
\begin{corollary}\label{cor:f2r3}
Let $G$ be any simple graph. Let $F2R3$ be the graph in
Figure~\ref{fig:r3}\subref{fig:f2r3}. Then $\mr(\ensuremath{\mathbb{F}}_2,G) \leq 3$ (i.e., $G\in
\ensuremath{\mathcal{G}}_3(\ensuremath{\mathbb{F}}_2)$) if and only if $G$ is a blowup graph of $F2R3\cup
K_1$.
\end{corollary}
\begin{figure}
\centering
\subfloat[$F2R3$]{\input{graphs/F2R3.tex}\label{fig:f2r3}}
\qquad\qquad\qquad
\subfloat[$F3R3$]{\input{graphs/F3R3.tex}\label{fig:f3r3}}
\caption{Graphs in Corollaries~\ref{cor:f2r3} and \ref{cor:f3r3}}
\label{fig:r3}
\end{figure}
\begin{proof}
As matrices over $\ensuremath{\mathbb{F}}_2$, let
\begin{equation*}
U=\left[\begin{array}{rrrrrrr}
0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0
\end{array}\right]
\quad \text{ and } \quad
B=\left[\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right].
\end{equation*}
Then the graph $F2R3$ corresponds to the matrix
\begin{equation*}
U^tBU=
\left[\begin{array}{rrrrrrr}
1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 1
\end{array}\right].
\end{equation*}
\end{proof}
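Since $B=I_3$ here, $U^tBU$ is simply the Gram matrix of the columns of $U$ over $\ensuremath{\mathbb{F}}_2$, which can be recomputed mechanically; a verification sketch in Python:

```python
# Columns of U are the seven nonzero vectors of F_2^3 in the order above.
U = [[0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 1, 1, 1, 0],
     [1, 1, 1, 1, 0, 0, 0]]

def gram_mod2(U):
    """Entries x_i^t x_j mod 2 for the columns x_i of U."""
    cols = list(zip(*U))
    return [[sum(a * b for a, b in zip(x, y)) % 2 for y in cols]
            for x in cols]

M = gram_mod2(U)  # matches the 7x7 matrix displayed above
```

Each row of the pattern has exactly four nonzero entries, in agreement with Corollary~\ref{cor:regular}.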
\begin{corollary} \label{cor:f3r3} Let $G$ be any simple graph. Let
$F3R3$ be the graph in Figure~\ref{fig:r3}\subref{fig:f3r3}. Then
$\mr(\ensuremath{\mathbb{F}}_3,G) \leq 3$ (i.e., $G\in \ensuremath{\mathcal{G}}_3(\ensuremath{\mathbb{F}}_3)$) if and only if $G$ is
a blowup graph of $F3R3\cup K_1$.
\end{corollary}
\begin{proof}
As matrices over $\ensuremath{\mathbb{F}}_3$, let
\begin{equation*}
U=\left[\begin{array}{rrrrrrrrrrrrr}
0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0
\end{array}\right]
\quad \text{ and } \quad
B=\left[\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right].
\end{equation*}
Then the graph $F3R3$ corresponds to the matrix
\begin{equation*}
U^tBU= \left[\begin{array}{rrrrrrrrrrrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 0 & 1 & 2 & 1 \\
1 & 0 & 2 & 1 & 0 & 2 & 1 & 0 & 2 & 0 & 2 & 1 & 2 \\
1 & 1 & 1 & 2 & 2 & 2 & 0 & 0 & 0 & 1 & 1 & 1 & 0 \\
1 & 2 & 0 & 2 & 0 & 1 & 0 & 1 & 2 & 1 & 2 & 0 & 1 \\
1 & 0 & 2 & 2 & 1 & 0 & 0 & 2 & 1 & 1 & 0 & 2 & 2 \\
1 & 1 & 1 & 0 & 0 & 0 & 2 & 2 & 2 & 2 & 2 & 2 & 0 \\
1 & 2 & 0 & 0 & 1 & 2 & 2 & 0 & 1 & 2 & 0 & 1 & 1 \\
1 & 0 & 2 & 0 & 2 & 1 & 2 & 1 & 0 & 2 & 1 & 0 & 2 \\
0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 1 & 0 \\
0 & 1 & 2 & 1 & 2 & 0 & 2 & 0 & 1 & 1 & 2 & 0 & 1 \\
0 & 2 & 1 & 1 & 0 & 2 & 2 & 1 & 0 & 1 & 0 & 2 & 2 \\
0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2 & 1
\end{array}\right].
\end{equation*}
\end{proof}
The next corollary gives the simplest previously-unknown result for
which $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ contains two graphs.
\begin{figure}[h!]
\centering
\subfloat[$F2R4A$]{\input{graphs/F2R4_1.tex}}
\qquad\qquad\qquad
\subfloat[$F2R4B$]{\input{graphs/F2R4_2.tex}}
\caption{Graphs in Corollary~\ref{cor:f2r4}}
\label{fig:f2r4}
\end{figure}
\begin{corollary} \label{cor:f2r4} Let $G$ be any simple graph. Let
$F2R4A$ and $F2R4B$ be the graphs in Figure~\ref{fig:f2r4}. Then
$\mr(\ensuremath{\mathbb{F}}_2,G) \leq 4$ (i.e., $G\in \ensuremath{\mathcal{G}}_4(\ensuremath{\mathbb{F}}_2)$) if and only if $G$ is
a blowup graph of either $F2R4A\cup K_1$ or $F2R4B\cup K_1$.
\end{corollary}
\begin{proof}
As matrices over $\ensuremath{\mathbb{F}}_2$, let
\begin{equation*}
U= \left[\begin{array}{rrrrrrrrrrrrrrr}
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{array}\right]
\end{equation*}
and let
\begin{equation*}
B_1=I_4\quad \text{ and } \quad B_2=\left[\begin{array}{rrrr}
0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0
\end{array}\right].
\end{equation*}
Then the graph $F2R4A$ corresponds to the matrix
\begin{equation*}
U^tB_1U=
\left[\begin{array}{rrrrrrrrrrrrrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1
\end{array}\right]
\end{equation*}
and the graph $F2R4B$ corresponds to the matrix
\begin{equation*}
U^tB_2U=\left[\begin{array}{rrrrrrrrrrrrrrr}
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\
1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 & 1 & 1 & 0
\end{array}\right].
\end{equation*}
\end{proof}
\section{Connection to projective geometry}
As mentioned previously, the classifications of symmetric matrices in
Section~\ref{sec:congr-class-symm} are standard classification results
in projective geometry. In this section, we first review
appropriate terminology and highlight this connection to projective
geometry. We will define slightly more terminology than is strictly
necessary to help the reader see where these things fit into standard
projective geometry. We then give some examples of how results in
projective geometry can help us understand $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ better. For
further material, a definitive treatise on projective geometry is
contained in the series \cite{hirschfeld-projective} and
\cite{hirschfeld-general-geometries}.
\subsection{Definitions and the connection}
We start with basic definitions from projective geometry.
\begin{definition}
Let $V=\ensuremath{\mathbb{F}}_q^{n+1}$, the vector space of dimension $n+1$ over
$\ensuremath{\mathbb{F}}_q$. For $x,y\in V-\vec 0 $, we define an equivalence
relation by
\begin{align*}
x\sim y \iff x=cy,\quad \text{ where } c\in\ensuremath{\mathbb{F}}_q \text{ and } c\neq 0.
\end{align*}
Denote the equivalence class containing $x\in V-\ensuremath{\vec 0}$ as $\ensuremath{\bar x}=\{cx \,\mid\,
c\in \ensuremath{\mathbb{F}}_q \text{ and } c\neq 0 \}$. Geometrically, we can think of
the class $\ensuremath{\bar x}$ as the set of non-origin points on a line passing
through $x$ and the origin in $V$. These equivalence classes form the
\emph{projective geometry} $PG(n,q)$ of (projective) dimension $n$ and order $q$.
The equivalence classes are called the \emph{points} of $PG(n,q)$.
Each subspace of dimension $m+1$ in $V$ corresponds to a subspace of
(projective) dimension $m$ in $PG(n,q)$. If a projective geometry has
(projective) dimension 2, then it is called a \emph{projective plane}.
\end{definition}
Note that there is a shift by one in dimension between a vector space
$V$ and its subspaces and the projective geometry associated with $V$
and its subspaces. To help the reader, we will use the nonstandard term
\emph{projective dimension} (or ``$\projdim$'') when dealing with the
dimension of a projective geometry.
\begin{definition}
Let $\mathcal{S}$ be the set of subspaces of
$PG(n,q)$. A \emph{correlation} $\sigma: \mathcal{S} \to
\mathcal{S}$ is a bijective map such that for any subspaces
$R,T\in\mathcal{S}$, $R\subseteq T$ implies that $\sigma(T)\subseteq
\sigma(R)$ and $\projdim \sigma(R)=n-1-\projdim R$. A
\emph{polarity} is a correlation $\sigma$ of order 2 (i.e.,
$\sigma^2=1$, the identity map).
\end{definition}
Note that any polarity $\sigma$ maps points in $\mathcal{S}$ to hyperplanes
(subspaces of projective dimension $n-1$ in $\mathcal{S}$) and hyperplanes to
points. Since $\sigma^2=1$, we have $Y=\sigma(\ensuremath{\bar x})$ if and only if $\sigma(Y)=\ensuremath{\bar x}$, so
$\sigma$ induces a bijection between points and hyperplanes. This
bijection leads to the next definition.
\begin{definition}
Let $\sigma$ be a polarity on $PG(n,q)$. Let $\ensuremath{\bar x},\ensuremath{\bar y}$ be points in
$PG(n,q)$. We say that $\sigma(\ensuremath{\bar x})$ is the \emph{polar}
(hyperplane) of $\ensuremath{\bar x}$ and $\ensuremath{\bar x}$ is the \emph{pole} of $\sigma(\ensuremath{\bar x})$.
If $\ensuremath{\bar y}\in\sigma(\ensuremath{\bar x})$, then $\ensuremath{\bar x}\in\sigma(\ensuremath{\bar y})$ and we say that $\ensuremath{\bar x}$
and $\ensuremath{\bar y}$ are \emph{conjugate} points. If $\ensuremath{\bar x}\in\sigma(\ensuremath{\bar x})$, then we
say that $\ensuremath{\bar x}$ is \emph{self-conjugate} or \emph{absolute}.
Similarly, if $S$ is a subspace of $PG(n,q)$, then $S$ is
\emph{absolute} if $\sigma(S)\subseteq S$ or $S\subseteq\sigma(S)$.
A subspace of $PG(n,q)$ consisting of absolute points is called
\emph{isotropic}.
\end{definition}
The next definition gives the connection with symmetric matrices.
\begin{definition}
Let $B$ be an $(n+1)\times (n+1)$ invertible symmetric matrix over
$\ensuremath{\mathbb{F}}_q$. Define $\sigma:\mathcal{S} \to \mathcal{S}$ by $\sigma: R\mapsto R^\perp$,
where the orthogonality relation is defined by the nondegenerate
symmetric bilinear form represented by $B$ (i.e., $R^\perp=\{\ensuremath{\bar y} \,\mid\,
x^tBy=0 \text{ for all } \ensuremath{\bar x}\in R\}$). We call $\sigma$ the polarity
associated with $B$.
\end{definition}
The fact that the $\sigma$ in the previous definition is a polarity is
easy to check.
Let $M_1$ and $M_2$ be invertible symmetric matrices, and let
$\sigma_1$ and $\sigma_2$ be their associated polarities. Two such
polarities are equivalent exactly when the matrices are projectively
congruent, i.e., $\sigma_1$ is equivalent to $\sigma_2$ if
$M_1=dC^tM_2C$ for some nonzero $d$ and invertible matrix $C$. Thus, up
to equivalence, there is a unique polarity associated with each matrix
given in Theorem~\ref{thm:main-theorem}.
We now summarize from \cite[Section 2.1.5]{hirschfeld-projective} the
classification of polarities that are associated with symmetric
matrices. Let $B$ be an invertible symmetric matrix over $\ensuremath{\mathbb{F}}_q$.
Let $\sigma$ be the polarity associated with $B$.
\begin{itemize}
\item If $q$ is odd, then $\sigma$ is called an \emph{ordinary
polarity}.
If $B$ has even order, then the associated polarity is either a
\emph{hyperbolic} polarity or an \emph{elliptic} polarity. The
correspondence between these types of polarities and the matrices in
$\mathcal{B}$ from
Theorem~\ref{thm:main-theorem}(\ref{case:k-even-q-odd}) is slightly
nontrivial and is summarized in
\cite[Corollary~5.19]{hirschfeld-projective}.
If $B$ has odd order, then $\sigma$ is a \emph{parabolic} polarity, which
corresponds to $\mathcal{B}$ in
Theorem~\ref{thm:main-theorem}(\ref{case:k-odd}).
\item If $q$ is even and $b_{ii}=0$ for all $i$, then $\sigma$ is a
\emph{null} polarity (or in alternate terminology, $\sigma$ is a
\emph{symplectic} polarity). Note that this only occurs when $B$
has even order since otherwise $B$ is not invertible. This case
corresponds to the non-identity matrix in the $\mathcal{B}$ in
Theorem~\ref{thm:main-theorem}(\ref{case:k-even-q-even}).
\item If $q$ is even and there is some $b_{ii}\neq 0$, then $\sigma$ is a
\emph{pseudo-polarity}. This case corresponds to the identity matrix
in $\mathcal{B}$ in Theorem~\ref{thm:main-theorem}(\ref{case:k-odd})
or (\ref{case:k-even-q-even}).
\end{itemize}
We pause to note that there are polarities that are not associated
with symmetric matrices. However, since we are only concerned with
symmetric matrices, we will restrict ourselves to this case.
Information about polarities not associated with symmetric matrices
may also be found in \cite{hirschfeld-projective}.
We now examine the connection to graphs by recalling the definition of
a polarity graph.
\begin{definition}
Let $B$ be an invertible symmetric $(n+1)\times (n+1)$ matrix over $\ensuremath{\mathbb{F}}_q$ and
let $\sigma$ be the associated polarity. The \emph{polarity graph} of
$\sigma$ has as its vertices the points of $PG(n,q)$ and as its edges
$\{\ensuremath{\bar x}\ensuremath{\bar y} \,\mid\, x^tBy=0\}$.
\end{definition}
In a polarity graph, $\ensuremath{\bar x}$ is adjacent to $\ensuremath{\bar y}$ exactly when $\ensuremath{\bar x}$ and
$\ensuremath{\bar y}$ are conjugate (i.e., $x$ and $y$ are orthogonal with respect to
$B$). In standard literature, loops are not allowed in polarity
graphs. However, for our purposes, loops convey needed information,
so a vertex $\ensuremath{\bar x}$ in a polarity graph has a loop if and only if $\ensuremath{\bar x}$ is
absolute (i.e., $x^tBx=0$, where $B$ is an invertible symmetric matrix
associated with the polarity).
In Theorem~\ref{thm:main-theorem}, the vertices of a graph in
$\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ represent the points of the projective geometry
$PG(k-1,q)$ and an edge is drawn if the corresponding points are
\emph{not} conjugate (i.e., $x^tBy\neq 0$). Thus, the graphs in
Theorem~\ref{thm:main-theorem} are exactly the complements of polarity
graphs. Recall that when dealing with looped graphs, a vertex is
looped in the complement of a graph if and only if it is nonlooped in
the original graph.
Using this connection, we can restate Theorem~\ref{thm:main-theorem}:
\begin{theorem}\label{thm:main-theorem-proj}
The set $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ is the set of complements of the (looped)
polarity graphs of the polarities on $PG(k-1,q)$ that are associated
with symmetric matrices.
\end{theorem}
\subsection{Consequences of the connection}
With the main theorem stated as in
Theorem~\ref{thm:main-theorem-proj}, we can use a variety of known
results about polarity graphs to derive results about graphs in
$\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$. In this section, we list a few consequences of
Theorem~\ref{thm:main-theorem-proj}.
An elementary result in projective geometry gives us
the size of the graphs in $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$. While this result could
have been realized from the statement in
Theorem~\ref{thm:main-theorem}, it also naturally follows as a
consequence of Theorem~\ref{thm:main-theorem-proj}.
\begin{theorem}
Every graph in $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ has $\frac{q^k-1}{q-1}$ vertices.
\end{theorem}
\begin{proof}
There are $q^k-1$ vectors in $\ensuremath{\mathbb{F}}_q^k-\ensuremath{\vec 0}$. Since there are $q-1$
nonzero constants in $\ensuremath{\mathbb{F}}_q$, there are $q-1$ elements in each
equivalence class in $PG(k-1,q)$, so there are $\frac{q^k-1}{q-1}$
points in $PG(k-1,q)$.
\end{proof}
The following observation follows directly from
Theorem~\ref{thm:main-theorem-proj} and restates the criterion for an
edge in a graph in $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ in several ways.
\begin{observation}\label{obs:edges}
Let $G\in\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ and let $u$ and $v$ be (not necessarily
distinct) vertices in $G$. Let $\sigma$ be the polarity
corresponding to $G$ and let $B$ be an invertible symmetric matrix
corresponding to $\sigma$. Then $uv$ is an edge in $G$ if and only
if:
\begin{enumerate}
\item $u^tBv\neq 0$ (equivalently, $v^tBu\neq 0$), or equivalently,
\item $u$ and $v$ are not conjugate points, or equivalently,
\item $u\not\in \sigma(v)$ (equivalently, $v\not\in\sigma(u)$).
\end{enumerate}
\end{observation}
\begin{corollary}\label{cor:regular}
A graph $G\in\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ is regular of degree $q^{k-1}$ (using the
convention that a loop adds one to the degree of a vertex).
\end{corollary}
\begin{proof}
Let $v$ be a vertex of $G$ and let $\sigma$ be the polarity associated
with $G$. Since the hyperplane $\sigma(v)$ contains
$\frac{q^{k-1}-1}{q-1}$ points, this is the degree of $v$ in the
complement of $G$. Thus the degree of $v$ in $G$ is
\begin{equation*}
\frac{q^{k}-1}{q-1} - \frac{q^{k-1}-1}{q-1}=q^{k-1}. \qedhere
\end{equation*}
\end{proof}
In light of Observation~\ref{obs:edges}, determining the numbers of
looped and nonlooped vertices in $G$ is equivalent to finding the
numbers of absolute points of the polarities of $PG(k-1,q)$.
\begin{theorem}\label{thm:num-nonlooped-qeven}
Let $\ensuremath{\mathbb{F}}_q$ be a finite field of characteristic 2. The graph in
$\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ arising from a pseudo-polarity has
$\frac{q^{k-1}-1}{q-1}$ nonlooped vertices. If $k$ is even, then the
additional graph in $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$, arising from the null polarity, has
all of its vertices nonlooped.
\end{theorem}
\begin{proof}
In a field of characteristic~2, since
\begin{align*}
x^tBx = \sum_{i,j} b_{ij}x_ix_j=\sum_i
b_{ii}x_i^2+\sum_{i<j}b_{ij}(x_ix_j+x_ix_j) = \sum_i
b_{ii}x_i^2=\left(\sum_i \sqrt{b_{ii}}x_i\right)^2,
\end{align*}
a point $\ensuremath{\bar x}$ is absolute if and only if $\sum_i \sqrt{b_{ii}}x_i=0$.
In a pseudo-polarity, the set of absolute points is the
hyperplane $\sum_i\sqrt{b_{ii}}x_i=0$. Since a hyperplane of $PG(k-1,q)$ is a
projective geometry of projective dimension $k-2$, there are
$\frac{q^{k-1}-1}{q-1}$ nonlooped vertices in this graph.
In a null polarity, $b_{ii}=0$ for all $i$. Therefore every vertex
is nonlooped (i.e., there are $\frac{q^k-1}{q-1}$ nonlooped
vertices). A null polarity occurs when $k$ is even.
\end{proof}
For the odd characteristic case, we will directly apply a standard
result in projective geometry about the number of absolute points in
ordinary polarities.
\begin{theorem}[{\cite[Theorem 22.5.1(b)]{hirschfeld-general-geometries}}]
Let $q$ be odd. Then the number of absolute points of a polarity of
$PG(k-1,q)$ is given by:
\begin{align*}
\begin{cases}
\frac{(q^m-1)(q^{m-1}+1)}{q-1} \text{ or }
\frac{(q^m+1)(q^{m-1}-1)}{q-1} & \text{if $k=2m$ is even}\\
\frac{q^{2m}-1}{q-1} & \text{if $k=2m+1$ is odd}
\end{cases}
\end{align*}
\end{theorem}
\begin{corollary}\label{cor:num-nonlooped-qodd}
Let $q$ be odd. If $k=2m$ is even, then the two graphs in
$\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ will have $\frac{(q^m-1)(q^{m-1}+1)}{q-1}$ and $
\frac{(q^m+1)(q^{m-1}-1)}{q-1}$ nonlooped vertices, respectively.
If $k=2m+1$ is odd, then the graph in $\ensuremath{\mathfrak{g}}_k(\ensuremath{\mathbb{F}}_q)$ will have
$\frac{q^{2m}-1}{q-1}$ nonlooped vertices.
\end{corollary}
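The two counts in the even case can be checked by brute force; the sketch below (ours, not from the paper) treats $k=4$, $q=3$. Over $\ensuremath{\mathbb{F}}_3$ the two congruence classes of invertible symmetric matrices are represented by the identity and by $\mathrm{diag}(1,1,1,2)$, with $2$ a nonsquare, and with $m=2$ the theorem predicts the counts $\frac{(q^m-1)(q^{m-1}+1)}{q-1}=16$ and $\frac{(q^m+1)(q^{m-1}-1)}{q-1}=10$.

```python
from itertools import product

def absolute_points(B, q):
    """Count projective points x of PG(k-1,q) with x^t B x = 0."""
    k = len(B)
    count = 0
    for v in product(range(q), repeat=k):
        nz = [c for c in v if c != 0]
        if not (nz and nz[0] == 1):     # one representative per point
            continue
        s = sum(B[i][j] * v[i] * v[j] for i in range(k) for j in range(k)) % q
        if s == 0:
            count += 1
    return count

q = 3
I4 = [[int(i == j) for j in range(4)] for i in range(4)]
D = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 2]]  # 2: nonsquare
counts = sorted(absolute_points(B, q) for B in (I4, D))
print(counts)  # [10, 16], matching the two formulas for k = 2m = 4
```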
We conclude by applying a few standard results for polarities over
$PG(2,q)$ (a projective plane) to give results about $\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$
and the minimum rank problem. We note that the polarity graphs of
$PG(2,q)$ for any $q$ are the Erd\H{o}s-R{\'e}nyi graphs from extremal
graph theory (see \cite{erdos-renyi-hungarian}, \cite{erdos-renyi}, or
\cite{brown-erdos-renyi}). For a survey of interesting properties of
the Erd\H{o}s-R{\'e}nyi graphs and their subgraphs, see
\cite{parsons-projective} or \cite[Chapter 3]{williford-dissert}.
\begin{theorem}\label{thm:white-clique}
If $G\in\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$, then the nonlooped vertices in $G$ form a
clique.
\end{theorem}
\begin{proof}
Suppose that $u$ and $v$ are distinct nonadjacent nonlooped vertices
in $G$. Then $u$ and $v$ are absolute points, so $u\in\sigma(u)$ and
$v\in\sigma(v)$, and nonadjacency gives $u\in\sigma(v)$ and
$v\in\sigma(u)$. Thus the distinct lines $\sigma(u)\neq\sigma(v)$ (as
$\sigma$ is a bijection) both contain $u$ and $v$, contradicting the
fact that any two distinct lines in $PG(2,q)$ meet in a single point.
\end{proof}
If $G\in\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$, the formulas in
Theorem~\ref{thm:num-nonlooped-qeven} and
Corollary~\ref{cor:num-nonlooped-qodd} imply that $G$ has $q+1$
nonlooped vertices. This combined with Corollary~\ref{cor:regular}
and Theorem~\ref{thm:white-clique} gives the following corollary.
\begin{corollary}
If $G\in\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$, then each nonlooped vertex is adjacent to
$q$ nonlooped vertices and $q^2-q$ looped
vertices.
\end{corollary}
Theorem~\ref{thm:white-clique} also gives us the following theorem.
\begin{theorem}\label{thm:multipartite-weak}
Let $G=K_{s_1,s_2,\ldots,s_n}$, a simple complete multipartite graph.
If $q\geq n-1$, then $\mr(\ensuremath{\mathbb{F}}_q,G)\leq 3$.
\end{theorem}
\begin{proof}
Let $G=K_{s_1,s_2,\ldots,s_n}$. Then $G$ is a blowup graph of
$K_n$, where each vertex of $K_n$ is nonlooped. Since the graph in
$\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$ contains a clique of $q+1$ nonlooped vertices, if
$q+1\geq n$, then $G$ is a blowup graph of the graph in
$\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$, and hence $\mr(\ensuremath{\mathbb{F}}_q,G)\leq 3$.
\end{proof}
We can now construct an interesting family of simple graphs.
\begin{theorem}
For every integer $n\geq1$, let $G_n$ be a simple complete multipartite
graph $H_1\lor H_2\lor \cdots \lor H_n$ where each $H_i$ is an independent set with
$s_i>(n-1)^2$ vertices. We then have $\mr(\ensuremath{\mathbb{F}}_q,G_n)\leq3$ if and
only if $q\geq n-1$.
\end{theorem}
\begin{proof}
If $q\geq n-1$, then $\mr(\ensuremath{\mathbb{F}}_q,G_n)\leq3$ by
Theorem~\ref{thm:multipartite-weak}.
Conversely, let $q<n-1$. Let $I$ be the graph in $\ensuremath{\mathfrak{g}}_3(\ensuremath{\mathbb{F}}_q)$ and
let $I_1$ and $I_2$ be the subgraphs of $I$ induced by the looped
and nonlooped vertices of $I$, respectively. Since $I_1$ has $q^2$
vertices, any blowup of $I_1$ containing more than $q^2$ vertices
will contain an edge by the pigeon-hole principle. Since the
vertices in each $H_i$ form an independent set of size
$s_i>(n-1)^2>q^2$, at least one vertex in each $H_i$ must be a
blowup of a vertex in $I_2$. Furthermore, since the vertices of
each $H_i$ have the same neighbors, we can assume without loss of
generality that all of the vertices of each $H_i$ are blowups of
vertices of $I_2$. Thus $G_n$ is a blowup of $I_2$. However, any
blowup of $I_2$ will be of the form $K_{t_1,t_2,\ldots,t_{q+1}}$
since $I$ has $q+1$ nonlooped vertices, but $G_n$ is not of this
form since $q+1<n$.
\end{proof}
\section{Conclusion}
We have succeeded in classifying the structure of graphs in
$\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ for any $k$ and any $q$. We have also shown how this
classification relates to projective geometry. We have applied a few
results of projective geometry to give results in the minimum rank
problem.
We conclude with a short list of open questions and topics for further
investigation. First, there are many results about polarity graphs
that could potentially yield results for the minimum rank problem.
What other facts from projective geometry can be applied to give
results in the minimum rank problem over finite fields?
The structural characterization in this paper gives rise to a
theoretical procedure for determining the minimum rank of any graph
over a finite field. How can this procedure be efficiently
implemented? How can the results of Ding and Kotlov
\cite{ding-kotlov-minrank-finite} be combined with the classification
in this paper to yield results on minimal forbidden subgraphs
describing $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$? The author has implemented such an
algorithm and has some preliminary results on the numbers of forbidden
subgraphs describing $\ensuremath{\mathcal{G}}_k(\ensuremath{\mathbb{F}}_q)$ for different values of $k$ and
$q$.
Finally, there is still ongoing research investigating the structure
of polarity graphs. For example, Jason Williford
\cite{williford-dissert}, Michael Newman, and Chris Godsil
\cite{godsil-2005-} have recently investigated the sizes of
independent sets in polarity graphs. Are there results in the minimum
rank problem that would aid in answering questions about the structure
of polarity graphs?
\section{Acknowledgment}
The author thanks Wayne Barrett and Don March for work in the early
part of this research, including the proof of Theorem~\ref{thm:f2r2}
and early computational experiments, as well as Willem Haemers for
pointing out that the graph $F2R3$ is related to the Fano projective
plane, which led to the investigation of the connection to projective
geometry. Most of this research was done for the author's
Ph.D.~dissertation and the support of Brigham Young University is
gratefully acknowledged.
\bibliographystyle{alpha}
% arXiv:0801.2987 --- The minimum rank problem over finite fields (math.CO)
% Abstract: The structure of all graphs having minimum rank at most k over a
% finite field with q elements is characterized for any possible k and q. A
% strong connection between this characterization and polarities of projective
% geometries is explained. Using this connection, a few results in the minimum
% rank problem are derived by applying some known results from projective
% geometry.
% arXiv:2212.02365 --- Error reduction using machine learning on Ising worm simulation (hep-lat)
% Abstract: We develop a method to improve on the statistical errors for higher
% moments using machine learning techniques. We present here results for the
% dual representation of the Ising model with an external field, derived via
% the high temperature expansion and simulated by the worm algorithm. We
% compare two ways of measuring the same set of observables, without and with
% machine learning: moments of the magnetization and the susceptibility can be
% improved by using the decision tree method to train the correlations between
% the higher moments and the second moment obtained from an integrated 2-point
% function. Those results are compared in small volumes to analytic predictions.
\section{Ising dual representation}
The dual representation of the Ising model is derived by introducing bond variables and integrating out the spin degrees of freedom \cite{Prokof:2001,Wolff2008,Gabriel:2002}:
\begin{align}
Z_{\rm Ising}&=\sum_{\{s\}} e^{-\beta H(s)},\qquad H=-J \sum_{\langle i,j \rangle}s_i s_j+h\sum_i s_i\nonumber\\
&=(2 \cosh(\beta h))^{V} \cosh(\beta J)^{E} \sum_{\stackrel{\{n_b,m_i\}}{\partial \{n_b\} = \{m_i\}}}\tanh(\beta J)^{\sum_b n_b}\tanh(\beta h)^{\sum_i m_i},
\end{align}
where the first line is the standard spin representation of the Ising model and the second line is its dual representation derived by the high temperature character expansion in $\beta=1/T$, and $h$ is the external magnetic field.
The dual variables of that representation are the monomers $m_i\in \{0,1\}$ defined on the lattice sites $i$ and the dimers $n_b \in \{0,1\}$ defined on the bonds $b$, i.e.~nearest-neighbor pairs. Here, $V$ is the volume and $E$ is the number of bonds. Since the spin summation only gives a non-trivial contribution if, after the expansion, the spin at every site is raised to an even power, we obtain the constraint that the dimers form (possibly intersecting) loops that are either closed or are strings connecting two monomers: $\partial \{n_b\} = \{m_i\}$.
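The equality of the two lines of the partition function can be verified by exhaustive enumeration on a tiny lattice. The following sketch (ours; an open-boundary $2\times2$ lattice and arbitrary coupling values) sums both representations directly:

```python
import math
from itertools import product

beta, J, h = 0.7, 1.0, 0.2            # arbitrary couplings for the check
sites = range(4)                      # open-boundary 2x2 lattice
bonds = [(0, 1), (2, 3), (0, 2), (1, 3)]

# Spin representation: Z = sum_s exp(-beta*H), H = -J*sum s_i s_j + h*sum s_i
Z_spin = sum(
    math.exp(beta * J * sum(s[i] * s[j] for i, j in bonds) - beta * h * sum(s))
    for s in product((-1, 1), repeat=4))

# Dual representation: sum over dimers n_b and monomers m_i such that every
# site has an even number of incident lines (the constraint d{n_b} = {m_i}).
tJ, th = math.tanh(beta * J), math.tanh(beta * h)
Z_dual = 0.0
for n in product((0, 1), repeat=len(bonds)):
    for m in product((0, 1), repeat=4):
        deg = [m[i] + sum(nb for nb, e in zip(n, bonds) if i in e)
               for i in sites]
        if all(d % 2 == 0 for d in deg):
            Z_dual += tJ ** sum(n) * th ** sum(m)
Z_dual *= (2 * math.cosh(beta * h)) ** 4 * math.cosh(beta * J) ** len(bonds)

print(math.isclose(Z_spin, Z_dual))  # True
```

The parity constraint also forces the total monomer number to be even, so the signs from expanding $e^{-\beta h s_i}$ cancel and the two sums agree exactly.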
The dual representation is well suited to be simulated via Monte Carlo, in particular using the worm algorithm \cite{Prokof:2001}. Then we can measure the number of monomers $M=\sum_{i} m_i$ and its higher moments on each configuration obtained after a worm update, but also so-called improved estimators, such as the 2-point correlation functions during worm evolution \cite{Wolff2008}.
It turns out that the average worm length is simply the connected susceptibility obtained via the integrated 2-point function
\begin{align}
\langle G_2 \rangle&=\frac{1}{V^2}\sum_{x,y} \langle G(x,y)\rangle =\langle \sigma^2 \rangle
\end{align}
with $G(x,y)=G(x-y)$ due to translation symmetry.
\begin{figure}[b]
\subfigure[The second moment $\expval{\sigma^2}$]{
\includegraphics[width=0.49\textwidth]{s2_DecisionTree4x4}
}
\subfigure[Deviation from the exact solution]{\label{fig:s2_sub}
\includegraphics[width=0.49\textwidth]{s2_4x4_subtract}
}
\caption{\label{fig:s2}
Left: Comparison of $\expval{f_2}$ and $\expval{G_2}$ with analytic solution on a $4 \times 4$ lattice and the external field $h=0.2$. Right: Subtracting the exact solution from $\expval{f_2}$ and $\expval{G_2}$. }
\end{figure}
The magnetization $\langle\sigma\rangle $ and the susceptibility $\chi$ can be written in terms of the total monomer number $M$ as follows,
\begin{align}
\langle \sigma^{n} \rangle &= \frac{1}{(N\beta)^n}\frac{1}{Z} \frac{\partial^n Z}{\partial h^n} = \langle f_n \rangle \\
\langle \sigma \rangle &= \tanh(\beta h) + \frac{\langle M \rangle}{\sinh(\beta h)\cosh(\beta h)} = \langle f_1 \rangle \\
\chi &= \langle \sigma^2 \rangle - \langle \sigma \rangle^2
=\frac{1}{N\cosh^2(\beta h)} -\frac{1}{N}\left(\frac{1}{\sinh^2(\beta h)}+\frac{1}{\cosh^2(\beta h)}\right)\langle M \rangle \nonumber\\
&\qquad \qquad \qquad \quad + \frac{1}{(\sinh(\beta h)\cosh(\beta h))^2} \left(\langle M^2 \rangle - \langle M \rangle^2\right) = \langle f_2 \rangle - \langle f_1 \rangle^2 \,.
\end{align}
Here, we define $f_n$ to distinguish the observables written in terms of $M$ from the same observable written in terms of the improved estimator $G_2$. For example, $f_2 = \sigma^2(M)$.
Note that $f_2$ and $G_2$ are not equal before ensemble averaging, $f_2 \neq G_2$, as they have different distributions.
We compare $\langle f_2 \rangle$ and $\langle G_2 \rangle$ with the exact solution in Fig.~\ref{fig:s2}. The analytic solution is subtracted in Fig.~\ref{fig:s2_sub} to see the statistical errors and the deviations from the analytic result. Since $\langle G_2 \rangle$ as a worm estimator has better statistics and hence smaller error bars compared to $\langle f_2 \rangle$ (see Fig~\ref{fig:s2_err}), also its mean value is closer to the analytic result, see Fig.~\ref{fig:s2_dev}.
This advantage of improved estimators concerning statistical errors is particularly strong at higher temperatures, including the vicinity of a critical point, but it is weaker at low temperatures, where the worm algorithm is less efficient because the average worm length becomes very large.
\begin{figure}[t]
\subfigure[Ratio of Deviations]{\label{fig:s2_dev}
\includegraphics[width=0.49\textwidth]{s2_deviation}
}
\subfigure[Ratio of statistical errors]{\label{fig:s2_err}
\includegraphics[width=0.49\textwidth]{Err_s2_percent}
}
\caption{Left: Ratio of the deviations from analytic solution on a 4x4 lattice for $h=0.2$. Right: Ratio of the statistical errors of $\expval{f_2}$ and $\expval{G_2}$ }
\end{figure}
\section{Machine learning strategy}
\begin{figure}[hb]
\subfigure[Training data]{
\label{fig:corr}
\includegraphics[width=0.49\textwidth]{correlation_s1_T_2.5}
}
\subfigure[Estimation by decision tree regression]{\label{fig:esti}
\includegraphics[width=0.49\textwidth]{DecisionTree_s1_T_2.5}
}
\caption{Left: Correlation between the bootstrap samples of $\expval{f_2}$ and $\expval{f_1}$ at $T=2.5$ and $h=0.2$. Right: blue points are the decision tree regression prediction for $\expval{G_2}$ input. Green point is the analytic solution.}
\end{figure}
\begin{figure}[t]
\subfigure[Histogram of $\expval{f_2}$ and $\expval{G_2}$]{
\label{fig:histo_s2}
\includegraphics[width=0.49\textwidth]{histo_s2_T_2.5}
}
\subfigure[Histogram of $\expval{f_1}$ and $\expval{\tilde{G}_1}$]{
\label{fig:histo_s1}
\includegraphics[width=0.49\textwidth]{histo_s1_T_2.5}
}
\caption{Left: histogram of $\expval{f_2}$ and $\expval{G_2}$ at $T=2.5$ and $h=0.2$. Right: histogram of $\expval{f_1}$ and $\expval{\tilde{G}_1}$ at $T=2.5$ and $h=0.2$.}
\end{figure}
\begin{figure}[b]
\subfigure[Magnetization $\expval{\sigma}$]{
\includegraphics[width=0.49\textwidth]{s1_DecisionTree4x4}
}
\subfigure[Deviations]{
\includegraphics[width=0.49\textwidth]{s1_subtract}
}
\caption{\label{fig:s1} Left: Comparison of $\expval{f_1}$ and $\expval{\tilde{G}_1}$ with analytic solution on $4 \times 4$ lattice with external field $h=0.2$. Right: subtracting analytic solution from $\expval{f_1}$ and $\expval{\tilde{G}_1}$.}
\end{figure}
The important observable to determine the critical temperature is the susceptibility \linebreak $\chi=\expval{\sigma^2} -\expval{\sigma}^2$. The pseudo-critical temperature can be determined from its peak via finite size scaling.
Whereas $\expval{\sigma^2}$ can be determined by $\expval{G_2}$, there is no improved estimator for $\expval{\sigma}$, and it has to be determined by $\expval{f_1}$, which is less accurate.
While $\delta \expval{G_2}$ is small, the dominant error of the
susceptibility comes from $\delta (\expval{f_1}^2)$.
The goal of our machine learning strategy is to reduce the statistical error of the susceptibility by predicting a new observable $\expval{\tilde{G}_1}$, which corresponds to $\expval{\sigma}$, with a reduced error.
To obtain the general mapping between the distributions of the means $\expval{\sigma^2}$ and $\expval{\sigma}$, we consider as training data the correlation of bootstrap samples ($n=1000$) between $\expval{f_2}$ and $\expval{f_1}$. With this, we train the
machine on this correlation on a $4\times 4$ lattice with the external field $h=0.2$, as presented in Fig.~\ref{fig:corr}.
In Fig.~\ref{fig:esti}, we present the machine learning prediction
$\expval{\tilde{G}_1}$.
We obtain this mapping by applying the decision tree regression method from the scikit-learn library~\cite{scikit-learn}.
The decision tree regression method is used to select the data point closest to the input.
The blue points are selected by the decision tree from the input of
$\expval{G_2}$.
The red and blue crosses indicate the statistical error of $\expval{f_2}$,
$\expval{f_1}$ and $\expval{G_2}$, $\expval{\tilde{G}_1}$.
Comparing with the analytic solution, the green star, the machine learning
prediction $\expval{\tilde{G}_1}$ has smaller statistical error and deviation
than $\expval{f_1}$.
The distributions of the bootstrap samples are Gaussian.
In Fig.~\ref{fig:histo_s2}, we show that both $\expval{f_2}$ and
$\expval{G_2}$ follow Gaussian distributions and that $\expval{G_2}$ has
the smaller statistical error.
Here, the purple line is the analytic result.
After applying the decision tree regression, the machine learning
prediction also has a Gaussian distribution, see Fig.~\ref{fig:histo_s1}.
The magnetization $\expval{\tilde{G}_1}$ obtained by machine learning for
temperatures from $T=1.5$ to $T=4.0$ is presented in Fig.~\ref{fig:s1}.
In Fig.~\ref{fig:s1_dev} and Fig.~\ref{fig:s1_err}, we compare the deviations and statistical errors of $\expval{\tilde{G}_1}$ and $\expval{f_1}$. As a result, the machine learning predictions are more accurate and closer to the true result. The statistical errors are reduced by about 40\%.
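The pipeline above can be sketched end to end on toy data. In the sketch below (ours) all numbers are synthetic stand-ins rather than actual worm measurements, and the scikit-learn regressor is replaced by an explicit nearest-neighbour lookup, which is what an unrestricted decision tree trained on a single feature reduces to:

```python
import random, statistics

random.seed(0)

# Synthetic stand-ins for per-sample estimates (not actual worm data):
# f1 and f2 are strongly correlated; g2 has the same mean as f2 but a
# much smaller variance, playing the role of the improved estimator G_2.
N, true_f1, true_f2 = 1000, 0.3, 0.5
f1 = [true_f1 + random.gauss(0, 0.2) for _ in range(N)]
f2 = [true_f2 + 0.8 * (x - true_f1) + random.gauss(0, 0.05) for x in f1]
g2 = [true_f2 + random.gauss(0, 0.05) for _ in range(N)]

def paired_boot_means(xs, ys, n_boot=500):
    """Bootstrap means of xs and ys computed on the same resampled indices."""
    out = []
    for _ in range(n_boot):
        idx = [random.randrange(N) for _ in range(N)]
        out.append((statistics.fmean([xs[i] for i in idx]),
                    statistics.fmean([ys[i] for i in idx])))
    return out

pairs = paired_boot_means(f2, f1)   # training data: correlated bootstrap means
bg2 = [statistics.fmean(random.choices(g2, k=N)) for _ in range(500)]

# An unrestricted regression tree on one feature memorizes the training set,
# i.e. it reduces to a nearest-neighbour lookup in the f2-mean coordinate.
def predict(x):
    return min(pairs, key=lambda p: abs(p[0] - x))[1]

g1_tilde = [predict(x) for x in bg2]        # prediction from accurate input
err_f1 = statistics.stdev(p[1] for p in pairs)
err_ml = statistics.stdev(g1_tilde)
print(err_ml < err_f1)  # True: the prediction has the smaller spread
```

The error reduction comes from two ingredients visible here: the strong $f_1$--$f_2$ correlation of the paired bootstrap means, and the lower-variance input series used at prediction time.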
\begin{figure}[t]
\subfigure[Ratio of deviations]{
\label{fig:s1_dev}
\includegraphics[width=0.49\textwidth]{s1_deviation}
}
\subfigure[Statistical error reduction]{\label{fig:s1_err}
\includegraphics[width=0.49\textwidth]{Err_s1_percent}
}
\caption{Left: Comparison of $\expval{f_1}$ and $\expval{\tilde{G}_1}$ with analytic solution on $4 \times 4$ lattice. Right: Ratio of the statistical errors of $\expval{f_1}$ and $\expval{\tilde{G}_1}$ }
\end{figure}
\begin{figure}[h]
\subfigure[Susceptibility]{
\includegraphics[width=0.49\textwidth]{sus_DecisionTree4x4}
}
\subfigure[Deviations]{
\includegraphics[width=0.49\textwidth]{sus_subtract}
}
\caption{
\label{fig:sus}
Comparison of $\chi=\expval{G_2}-\expval{f_1}^2$ and $\chi=\expval{G_2}-\expval{\tilde{G}_1}^2$ with analytic solution on $4 \times 4$ lattice. }
\end{figure}
\begin{figure}[t]
\subfigure[Ratio of deviations]{
\label{fig:sus_dev}
\includegraphics[width=0.49\textwidth]{sus_deviation}
}
\subfigure[Statistical error reduction]{\label{fig:sus_err}
\includegraphics[width=0.49\textwidth]{Err_sus_percent}
}
\caption{Left: deviation from the analytic solution. Right: Statistical error reduction by machine learning method for susceptibility.}
\end{figure}
The susceptibility can be obtained by subtracting $\expval{f_1}^2$ or
$\expval{\tilde{G}_1}^2$ from $\expval{G_2}$. In Fig.~\ref{fig:sus}, we compare
two ways of calculations.
When the machine learning prediction $\expval{\tilde{G}_1}^2$ is subtracted,
the results are closer to the analytic result.
The reduction of the deviations and statistical errors is presented in
Fig.~\ref{fig:sus_dev} and Fig.~\ref{fig:sus_err}. The statistical errors are
reduced by about 20--70\% depending on the temperature.
\section{Higher moments}
This method is also applicable to higher moments, for instance $\expval{\sigma^3}$ or $\expval{\sigma^4}$.
The effect of the statistical error reduction depends on how strong the correlation is, which can be quantified by the Pearson coefficient.
In Fig.~\ref{fig:Pearson_stat}, we show the dependence of statistical error reduction with respect to Pearson coefficient of the correlation between $\expval{f_2}$ and $\expval{f_n}$, where $n=1,3,4$.
In the case of $\expval{f_1}$, it is clear that the statistical error reduction is more effective at strong correlations. However, in the case of higher moments, $n=3,4$, the statistical errors are smaller but not as much as for the magnetization.
The reason is that the accuracy of the input data affects the error reduction. In Fig.~\ref{fig:s2_err}, the input data is less accurate at lower temperatures. Hence the error reduction is less effective even though the Pearson coefficient is larger than $0.9$. The two data points in Fig.~\ref{fig:Pearson_stat} for $n=3$ with the largest Pearson coefficients correspond to rather low temperatures, $T=1.5, 2.0$. Their Pearson coefficients are close to one, but the error reduction is not large, about 30\%. On the other hand, the data for $n=1$ with the largest Pearson coefficients correspond to high temperatures, which have very accurate input data. Despite the accurate input data at high temperatures, the small error reduction for the other data is due to the weak correlation, as seen in Fig.~\ref{fig:PearsonT} for $n=3,4$ and $m=2$.
The Pearson coefficient for the correlation between $\expval{f_3}$ and $\expval{f_4}$ is large even at high temperatures. Hence, if we have the accurate input $\expval{G_4}$, $\expval{\tilde{G}_3}$ can be evaluated by our machine learning method precisely. $\expval{G_4}$ can be sampled by introducing a second worm:
sampling the Ising model with such a two-worm algorithm~\cite{Brett:2015}, the four-point function $G(x,y,z,w)$ and its integrated expectation value $\langle G_4\rangle$ can be directly measured as an improved estimator. Then the correlation between $\expval{f_3}$ and $\expval{f_4}$ can be learned from the input $\expval{G_4}$, resulting in an error reduction for $\expval{\tilde{G}_3}$.
\begin{figure}[tb]
\subfigure[Pearson Coefficients]{\label{fig:PearsonT}
\includegraphics[width=0.49\textwidth]{Pearson_vs_T}
}
\subfigure[Statistical error reduction]{\label{fig:Pearson_stat}
\includegraphics[width=0.49\textwidth]{Pearson_err}
}
\caption{
Left: Pearson coefficient of the correlation between $f_n$ and $f_m$, where $m=2,4$.
Right: statistical error reduction with respect to the Pearson coefficient.}
\end{figure}
\section{Conclusion}
We developed an error reduction strategy using the decision tree method. We have tested this method for the Ising model in its dual representation and found that the error of the magnetization $\expval{\sigma}$ is reduced by about 40\%. Moreover, for the susceptibility, the mean value of the machine learned prediction is closer to the analytic result.
Applying this method to higher moments is less efficient because of the weak correlation of the worm estimator with the higher moments in terms of monomers. A two-worm algorithm with an external field is required for its improvement and is currently under investigation. We also plan to test this method for larger volumes and higher dimensions and to determine observables for finite size scaling, such as the Binder cumulant, with a statistical error reduced by the decision tree method.
Finally, our aim is to apply this method to strong coupling lattice QCD in the dual representation~\cite{Gagliardi:2017uag}. Here, it may help improve chiral and nuclear observables, to pinpoint the QCD phase diagram in the strong coupling regime.
\acknowledgments
J.K. was supported in part by the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 ``Symmetries and the Emergence of Structure in QCD'' (NSFC Grant No. 12070131001, DFG Project-ID 196253076 -- TRR 110).
W.U. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 ``Strong-interaction matter under extreme conditions'' -- project number 315477589 -- TRR 211.
% arXiv:2107.00364 --- Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks
% Abstract: We analyze the learning dynamics of infinitely wide neural networks
% with a finite sized bottleneck. Unlike the neural tangent kernel limit, a
% bottleneck in an otherwise infinite width network allows data dependent
% feature learning in its bottleneck representation. We empirically show that a
% single bottleneck in infinite networks dramatically accelerates training when
% compared to purely infinite networks, with an improved overall performance.
% We discuss the acceleration phenomena by drawing similarities to infinitely
% wide deep linear models, where the acceleration effect of a bottleneck can be
% understood theoretically.
\section{Introduction}
The study of infinitely wide neural networks is one of the most actively researched topics in deep learning theory \cite{NTK,gp1,gp2,gp3,gp4,gp5,gp6,gp7,yang,yang2,yang3,TP2b,Littwin2020OnRK,Littwin2020OnTO}. Previous work identified distinct training regimes that are determined by the network's hyperparameters. In \cite{Yang2020FeatureLI}, it is shown that by correctly scaling hyperparameters such as the learning rate, weight multipliers and initialization constants, neural network training dynamics under gradient descent exhibit a limiting behaviour as the width of the network tends to infinity. Two distinct training regimes are identified in the limit: 1) the kernel regime, where the network evolves like a linear model during training; in this regime, intermediate activations in infinite layers change by a vanishing amount, stripping the network of its ability to learn features;
2) the feature learning regime, where the intermediate activations change in a nontrivial way during training, resulting in data dependent feature learning. \\
In \cite{Yang2020FeatureLI}, a clear dichotomy exists between these regimes in the infinite width limit, where a network is either in one regime or another, but not in both at the same time. A network undergoing training by gradient descent algorithms can be placed in one regime or the other depending on its parametrization at initialization. The underlying assumption in both cases however is that all hidden layers are infinitely wide.
Some practical architectures incorporate narrow layers by design, or use varying width layers of which some are wide, while others are narrow. For example, bottleneck layers force a network to learn a low dimensional latent representation, and are typically used as part of a generally wide network. Perhaps the most prominent architecture which uses such layers is the autoencoder, where the encoder learns a typically low dimensional representation of its input, while the decoder reconstructs the input from its latent representation.
In these types of models, the discrepancy between wide and narrow layers is a carefully hand designed feature of the architecture. Therefore, the standard infinite width approximation cannot be applied without ``assuming away'' a prominent architectural feature.\\
In this work, we consider a different type of limit, where a bottleneck layer of finite width is inserted in an otherwise infinitely wide network. From a Bayesian perspective, such models have been investigated in \cite{Agrawal2020WideNN,Aitchison2020WhyBI}, however their learning dynamics under gradient descent have not yet been explored to the best of our knowledge. As we show, gradient descent on such models can be understood and simulated exactly in function space rather than parameter space, at a relatively cheap computational cost. This is in contrast to the feature learning limit in \cite{Yang2020FeatureLI}, where it is practically infeasible to simulate exact learning dynamics in the limit without approximations. Empirically, we show that adding a bottleneck layer can significantly boost speed of training convergence. We further speculate on possible reasons for this dramatic boost in convergence speed by drawing equivalence to infinitely wide deep linear models with bottlenecks, where acceleration of the bottlenecks is fully tractable. \\
\section{Neural Networks With Bottlenecks}\label{sec:bottle}
A natural way of thinking about wide neural networks with bottlenecks is through composition of networks. Let $f_n(x;w):\mathbb{R}^{d} \to \mathbb{R}^{d_r}$ denote the (vector) output of a neural network given input vector $x \in \mathbb{R}^d$ and parameters $w$, where $n$ denotes the width of the hidden layers\footnote{We use the term ``width'' loosely here. For MLPs, this means the size of the hidden layers, or the feature dimension in convolutional neural nets.}.
In the present work, we consider the case where the input $x$ is itself given by the output of a neural network $x(\xi) = g_n(\xi;\theta): \mathbb{R}^{d_0} \to \mathbb{R}^d$, given input vector $\xi \in \mathbb{R}^{d_0}$ and parameters $\theta$, with hidden layer width parametrized by $m$. For simplicity, we assume the width of all hidden layers in both $f$ and $g$ is $n$. The output of the composed architecture is then given by $\mathcal{F}_n(\xi) = (f_n \circ g_n)(\xi)$. Note that even if $n$ is large, the output of $g_n(\xi)$ is still $d$ dimensional, hence $\mathcal{F}_n$ can be viewed as a wide network with a bottleneck of dimension $d$. \\
\paragraph{Setup}
\label{par:setup}
To make things concrete, we consider the case where $f_n$ implements an MLP of depth $2$ and width $n$. Due to technical considerations,\footnote{Rigorously generalizing the claims in this paper to deeper MLPs is not straightforward, and involves a considerable technical challenge. This difficulty arises due to the structure of the input-output Jacobian, which cannot be implemented as a Tensor Program \cite{yang,yang2,yang3} in its current form.} we restrict our discussion in the current paper to a 1 hidden layer MLP for $f$, and leave the rigorous analysis of deeper networks to future work. Given an input vector $x \in \mathbb{R}^d$ and parameters $w = \{u,v\}$, the output $f_n(x)$ is given by:
\begin{align}\label{eqn:mlp}
f_n(x) = \frac{v}{\sqrt{n}}\phi(\frac{ux}{\sqrt{d}})
\end{align}
where $u \in \mathbb{R}^{n \times d},v \in \mathbb{R}^{d_r \times n}$ are the weight matrices sampled from a normal distribution, and
$\phi$ is a coordinatewise nonlinearity which we assume, for the sake of analysis, to be twice differentiable with bounded derivatives. We let $g$ implement an arbitrary neural network function with parameter matrices $\theta = \{\theta^l\}_{l=1}^{L}$, with suitable dimensions.
We are interested in theoretically understanding the behaviour of the composition $\mathcal{F}_n = f_n \circ g_n$ during training in the overparametrized scenario where the width $n$ of the hidden layers tends to infinity, while the bottleneck dimension $d$ remains fixed.
Our setup involves training the composition function $\mathcal{F}_n$ on a training dataset $\{\xi_i\}_{i=1}^N$, using a loss function $\mathcal{L}$ implicitly containing the labels. We state our results assuming an infinitesimal learning rate (i.e., gradient flow); however, we expect our results to carry over to SGD. Since $\mathcal{F}_n$ contains a bottleneck of finite size, we will have to reason about the evolution of forward and backward signals as they propagate through a finite-sized layer.
\paragraph{Notations}
We use $x,\tilde{x}$ to denote generic placeholder inputs to $f$, and $g(\xi)$ to denote the specific instantiation of $x$ by the output $g(\xi;\theta)$. We denote the input-output Jacobian by $J(x) = \frac{\partial f(x)}{\partial x} \in \mathbb{R}^{d_r \times d}$, with the shorthand $\tilde{J} = J(\tilde{x})$. We write $\mathcal{F},g,J$ for $\mathcal{F}(\xi), g(\xi), J\big(g(\xi)\big)$ where $\xi$ is some generic input.
To remove clutter, we use subscripts $n,t,i$ to denote the value of a vector/matrix parametrized by $n$ at time $t$ given sample $\xi_i$ (i.e., $g_{n,t,i} = g_{n,t}(\xi_i)$, $g_{n,t} = g_{n,t}(\xi)$, and similarly for $J$). We use superscripts to denote coordinates of a vector/matrix (i.e., $g^\alpha$ is the $\alpha$ coordinate of the vector $g$). The absence of a subscript $n$ implies we have taken $n \to \infty$. Finally, we use $\chi_i$ to denote the loss derivative for sample $i$ (i.e., $\chi_i = \frac{\partial \mathcal{L}}{\partial \mathcal{F}_i} \in \mathbb{R}^{d_r}$).
\subsection{Preliminaries}\label{sec:pre}
As width increases, the pre-activations of initialized neural networks without bottlenecks approach a centered Gaussian process (GP), independent across coordinates but possibly correlated across inputs. At the output level, the following holds at initialization:
\begin{align}\label{eqn:nngp}
\lim_{n \to \infty} g_n(\xi) \overset{d}{=} g(\xi),&~~~ \lim_{n \to \infty} f_n(x) \overset{d}{=} f(x)
\end{align}
where $g(\xi),f(x)$ are GPs with respect to their respective inputs. For $f$ as implemented in \cref{eqn:mlp}, we can write the defining covariance $\Lambda$ of the GP given input samples $x,\tilde{x}$:
\begin{align}
\forall \alpha, \quad\begin{pmatrix}
f^\alpha(x)\\
f^\alpha(\tilde{x})
\end{pmatrix} &\sim \mathcal{N}\big(0, \Lambda(x,\tilde{x})\big)\\
\Lambda(x,\tilde{x}) &= \begin{pmatrix} \Sigma(x,x) & \Sigma(x,\tilde{x})\\ \Sigma(\tilde{x},x) & \Sigma(\tilde{x},\tilde{x}) \end{pmatrix} \in \mathbb{R}^{2 \times 2}
\end{align}
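As a sanity check of this covariance structure, the sketch below (an illustrative simulation, not from the paper) compares the empirical second moment $\mathbb{E}[f^\alpha(x)f^\alpha(\tilde{x})]$ over random draws of $(u,v)$ against a Monte Carlo estimate of $\Sigma(x,\tilde{x}) = \mathbb{E}_u[\phi(u^\top x/\sqrt{d})\,\phi(u^\top\tilde{x}/\sqrt{d})]$; for the one-hidden-layer $f$ of \cref{eqn:mlp} with unit-variance Gaussian weights, the two agree in expectation at any finite width. Again $\tanh$ stands in for $\phi$, and the inputs are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 64, 3
phi = np.tanh
x = np.array([1.0, -0.5, 2.0])
xt = np.array([0.3, 1.0, -1.0])

# Empirical covariance of f^0(x), f^0(xt) over many random networks.
trials = 20000
fx, fxt = np.empty(trials), np.empty(trials)
for t in range(trials):
    u = rng.standard_normal((n, d))
    v = rng.standard_normal(n)
    fx[t] = v @ phi(u @ x / np.sqrt(d)) / np.sqrt(n)
    fxt[t] = v @ phi(u @ xt / np.sqrt(d)) / np.sqrt(n)
emp = np.mean(fx * fxt)

# Monte Carlo estimate of Sigma(x, xt) = E_u[phi(u.x/sqrt(d)) phi(u.xt/sqrt(d))].
w = rng.standard_normal((200000, d))
sigma = np.mean(phi(w @ x / np.sqrt(d)) * phi(w @ xt / np.sqrt(d)))
assert abs(emp - sigma) < 0.1
```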
The empirical NTKs $\mathcal{K}_n, \Theta_n$ of $f_n$ and $g_n$ are defined as $\mathcal{K}_n(x,\tilde{x}) = \frac{\partial f_n(x)}{\partial w}\frac{\partial f_n(\tilde{x})^\top}{\partial w} \in \mathbb{R}^{d_r \times d_r}$ and $\Theta_n(\xi,\tilde{\xi}) = \frac{\partial g_n(\xi)}{\partial \theta}\frac{\partial g_n(\tilde{\xi})^\top}{\partial \theta} \in \mathbb{R}^{d \times d}$,
with corresponding limits $\mathcal{K}(x,\tilde{x})I_{d_r}, \Theta(\xi,\tilde{\xi})I_{d}$, where $I_{d_r},I_d$ are identity matrices of size $d_r,d$, and $\mathcal{K}(x,\tilde{x}),\Theta(\xi,\tilde{\xi}) \in \mathbb{R}$.
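To make the kernel definitions concrete, the sketch below (illustrative inputs and dimensions, $\tanh$ standing in for $\phi$) computes the empirical NTK $\mathcal{K}_n(x,\tilde{x})$ of the one-hidden-layer $f_n$ with the parameter-gradient contractions written out by hand, and checks that for large $n$ it is approximately a scalar multiple of $I_{d_r}$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, d_r = 20000, 3, 2
phi = np.tanh
dphi = lambda z: 1.0 - np.tanh(z) ** 2
u = rng.standard_normal((n, d))
v = rng.standard_normal((d_r, n))
x = np.array([1.0, 0.5, -1.0])
xt = np.array([0.5, 1.0, 0.5])

hx, hxt = u @ x / np.sqrt(d), u @ xt / np.sqrt(d)
# K_n^{ab}(x, xt) = sum over params of (df^a/dp)(x) (df^b/dp)(xt):
#   v-derivatives contribute (phi(hx) . phi(hxt) / n) * I_{d_r};
#   u-derivatives contribute (x.xt/d) * (1/n) sum_j v_{:,j} v_{:,j}^T dphi(hx_j) dphi(hxt_j).
K = (phi(hx) @ phi(hxt) / n) * np.eye(d_r)
K += (x @ xt / d) * (v * dphi(hx) * dphi(hxt)) @ v.T / n

# For large n the off-diagonal entries vanish: K_n -> K(x, xt) I_{d_r}.
assert abs(K[0, 1]) < 0.1 and abs(K[1, 0]) < 0.1
assert K[0, 0] > 0 and abs(K[0, 0] - K[1, 1]) < 0.1
```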
Consistent with intuition, it was rigorously shown in \cite{Agrawal2020WideNN} that, as the width increases, the output of an MLP with bottlenecks converges to a composition of GPs. \\
\paragraph{Training} We consider training the model described in \cref{sec:bottle}. Under gradient flow, the weights evolve according to $\dot{w}_t = -\nabla_{w_t}\mathcal{L}_t,~~~\dot{\theta}_t = -\nabla_{\theta_t}\mathcal{L}_t$. The evolution of the output of the composition function can be described by the following set of ODEs:
\begin{align}\label{eqn:gf}
\dot{\mathbf{\mathcal{F}}}_{n,t}(\xi) &= \frac{\partial \mathcal{F}_{n,t}(\xi)}{\partial w_t}\dot{w}_t + \frac{\partial \mathcal{F}_{n,t}(\xi)}{\partial \theta_t}\dot{\theta}_t
\end{align}
Substituting the empirical kernel definitions:
\begin{align}\label{eqn:random_kernel}
\dot{\mathcal{F}}_{n,t}(\xi) &= -\sum_{i=1}^N\mathcal{K}_{n,t}\big(g_{n,t},g_{n,t,i}\big)\chi_{n,t,i} \\\nonumber
&-\sum_{i=1}^NJ_{n,t} \Theta_{n,t}(\xi,\xi_i)J^{\top}_{n,t,i} \chi_{n,t,i}
\end{align}
where $g_{n,t} = g_{n,t}(\xi), J_{n,t} = J_{n,t}(\xi)$. The above evolution equations can be interpreted as kernel equations with random, evolving kernels that depend on the weights $w,\theta$. In \cite{NTK}, it was shown that for infinite width networks (without bottlenecks), the output at any time $t$ is fully deterministic conditioned on the initial GP output at time $t=0$. In contrast, since both the bottleneck embedding $g$ and the input-output Jacobian $J$ are finite dimensional, we expect \cref{eqn:random_kernel} to remain random even in the limit $n \to \infty$. To get a more complete view of the evolution of the composition function $\mathcal{F}$ during training in the limit, we must reason about the dynamics of the Jacobian term $J$.
\section{Dynamics in Function Space}
A key observation in our analysis is that the input-output Jacobian $J(x)$ converges to a multivariate GP, and evolves as a linear function in the infinite width limit, similarly to the outputs of an infinitely wide network. Here, $J(x)$ has nontrivial correlations across its coordinates, unlike layers in the NTK limit, where the coordinates are independent. Our first result, concerning the initial state of $J$, is the following lemma:
\begin{restatable}[GP behaviour of the Jacobian]{lemma}{GP}\label{lemma:GP}
For $f_n(x)$ and its limit $f(x)$ as described in \cref{eqn:mlp,eqn:nngp}, the following holds at initialization for every pair of fixed inputs $x,\tilde{x}$:
\[
\lim_{n \to \infty}J_n(x) = J(x) \label{eqn:jac}
\]
where $J(x)$ is a multivariate GP with independent rows, and $J(x),f(x)$ are jointly Gaussian
with:
\begin{align}
\begin{pmatrix}\label{JJ:cov}
J^{\alpha,\beta}(x)\\
J^{\alpha,\gamma}(\tilde{x})
\end{pmatrix} &\sim \mathcal{N}\Big(\bm{0}, \begin{pmatrix} \Sigma_{(2)}^{\beta,\beta}(x,x) & \Sigma_{(2)}^{\beta,\gamma}(x,\tilde{x})\\ \Sigma_{(2)}^{\gamma,\beta}(\tilde{x},x) & \Sigma_{(2)}^{\gamma,\gamma}(\tilde{x},\tilde{x}) \end{pmatrix}\Big)\\
\begin{pmatrix}\label{fJ:cov}
f^\alpha(x)\\
J^{\alpha,\beta}(\tilde{x})
\end{pmatrix} &\sim \mathcal{N}\Big(\bm{0}, \begin{pmatrix} \Sigma(x,\tilde{x}) & \Sigma_{(1)}^{\beta}(x,\tilde{x})\\ \Sigma_{(1)}^\beta(x,\tilde{x})^\top & \Sigma_{(2)}^{\beta,\beta}(\tilde{x},\tilde{x}) \end{pmatrix}\Big)
\end{align}
where:
\begin{align}\label{sig1:sig2:def}
\Sigma_{(1)}(x,\tilde{x}) &= \frac{\partial}{\partial b}\Sigma(x,b)\Big|_{b=\tilde{x}} \in \mathbb{R}^{1\times d}\\
\Sigma_{(2)}(x,\tilde{x}) &= \frac{\partial^2}{\partial a\partial b}\Sigma(a,b)\Big|_{a=x,b=\tilde{x}} \in \mathbb{R}^{d \times d}
\end{align}
\end{restatable}
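The cross-covariance in \cref{fJ:cov} can be checked numerically. The sketch below (illustrative, with assumed inputs and $\tanh$ in place of $\phi$) compares the empirical $\mathbb{E}[f^0(x)\,J^{0,0}(x)]$ over random weight draws against a Monte Carlo evaluation of $\Sigma_{(1)}^{0}(x,x)$ from its definition; for the one-hidden-layer $f$ the identity holds in expectation at any finite width.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, trials = 64, 3, 20000
phi = np.tanh
dphi = lambda z: 1.0 - np.tanh(z) ** 2
x = np.array([1.0, -0.5, 0.5])

# Empirical E[f^0(x) J^{0,0}(x)] over random draws of (u, v).
acc = 0.0
for _ in range(trials):
    u = rng.standard_normal((n, d))
    v = rng.standard_normal(n)
    h = u @ x / np.sqrt(d)
    f0 = v @ phi(h) / np.sqrt(n)
    J00 = (v * dphi(h)) @ u[:, 0] / np.sqrt(n * d)
    acc += f0 * J00
emp = acc / trials

# Predicted cross-covariance Sigma_(1)^0(x, x) = E_w[phi(h) dphi(h) w_0] / sqrt(d),
# obtained by differentiating Sigma(x, xt) in the 0th coordinate of xt at xt = x.
w = rng.standard_normal((400000, d))
h = w @ x / np.sqrt(d)
pred = np.mean(phi(h) * dphi(h) * w[:, 0]) / np.sqrt(d)
assert abs(emp - pred) < 0.05
```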
\cref{lemma:GP} already illustrates a novel aspect of infinite width networks: since the input-output Jacobian is frequently used to measure sensitivity to perturbations, it shows that sensitivities and outputs can be jointly modeled as a multivariate GP for sufficiently wide models. \\
In the next theorem, we show that $J(x)$ evolves linearly in wide models, in a similar fashion to the network outputs, and characterize the full training dynamics of $\mathcal{F}$ in the infinite width limit:
\begin{restatable}[Evolution of composed function]{thm}{main}\label{thm:main}
For networks $f,g$ as in \cref{sec:bottle}, the following ODEs describe the dynamics of $\mathcal{F},g,J$ in the limit of $n \to \infty$:
\begin{align}
\dot{g}_t(\xi) &= -\sum_{i=1}^N\Theta(\xi,\xi_i)J_{t,i}^\top\chi_{t,i} \label{eqn:eq1}\\
\dot{J}_t(x) &= - \sum_{i=1}^N\chi_{t,i}\Xi(x,g_{t,i})^\top \label{eqn:eq2}\\
\dot{\mathcal{F}}_t(\xi) &= -\sum_{i=1}^N\Big[\mathcal{K}\big(g_t,g_{t,i}\big)I_{d_r} +
\Theta(\xi,\xi_i)J_tJ_{t,i}^{\top}\Big]\chi_{t,i} \label{eqn:eq3}
\end{align}
where $\Xi(\cdot,\cdot)$ is a deterministic function defined as $\Xi(x,\tilde{x}) = \lim_{n \to \infty} \frac{\partial^\top}{\partial x}\mathcal{K}_n^{\alpha,\alpha}(x,\tilde{x}) \in \mathbb{R}^{d}$.
\end{restatable}
For $\phi = \text{ReLU}$, we give an explicit form for $\Xi$ in \cref{sec:exp}.
The ODEs in \cref{eqn:eq1,eqn:eq2,eqn:eq3} depend on the deterministic, frozen kernels $\mathcal{K},\Theta,\Xi$. The evolution of $\mathcal{F},g,J$ is hence completely deterministic conditioned on the initial states, and can therefore be expressed in function space.
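Since the limiting kernels are deterministic, these ODEs can be integrated numerically. Below is a schematic explicit-Euler sketch for a single training sample ($N=1$), with all kernel evaluations held at fixed placeholder values for simplicity (in general $\mathcal{K}$ is evaluated at the evolving $g_t$); every numeric value here is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def euler_train(g0, J0, F0, Theta, K, Xi, chi_fn, steps=500, dt=1e-2):
    """Schematic explicit-Euler integration of the limiting ODEs, tracking
    (g_t, J_t, F_t) at a single training sample.

    Theta, K: scalar kernel values at the sample pair; Xi: (d,) vector;
    chi_fn maps the current output F to the loss derivative chi in R^{d_r}.
    Kernel evaluations are frozen here, a simplification of the theorem.
    """
    g, J, F = g0.copy(), J0.copy(), F0.copy()
    d_r = F.shape[0]
    for _ in range(steps):
        chi = chi_fn(F)
        g = g - dt * Theta * (J.T @ chi)                 # bottleneck ODE
        J = J - dt * np.outer(chi, Xi)                   # Jacobian ODE
        F = F - dt * (K * np.eye(d_r) + Theta * (J @ J.T)) @ chi  # output ODE
    return g, J, F

# Toy run: square loss pulling the output toward zero, so chi = F.
d, d_r = 3, 2
g, J, F = euler_train(np.ones(d), np.ones((d_r, d)), np.ones(d_r),
                      Theta=1.0, K=1.0, Xi=np.full(d, 0.1),
                      chi_fn=lambda F: F)
assert np.linalg.norm(F) < 0.1  # the output is driven toward the target
```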
\section{Implicit Acceleration by Bottlenecks in Linear Networks}\label{sec:implicit}
To intuitively understand the training aspects of introducing bottlenecks in infinite width networks, we draw inspiration from deep linear networks. Assume $f_n,g_n$ implement deep linear MLPs of depths $L_f,L_g$ respectively and width $n$, and let $w_{\text{eff}} = \frac{1}{\sqrt{n^{L_f-1} d}}w^{L_f}w^{L_f-1}\cdots w^1 \in \mathbb{R}^{d_r \times d},~\theta_\text{eff} = \frac{1}{\sqrt{n^{L_g-1} d_0}}\theta^{L_g}\theta^{L_g-1}\cdots\theta^1 \in \mathbb{R}^{d \times d_0}$. Hence, we have $g(\xi) = \theta_\text{eff} \xi$, $f(g) = w_{\text{eff}} g$ and $\mathcal{F}(\xi) = w_{\text{eff}}\theta_\text{eff} \xi$. For finite networks, recent results have shown that in some cases the stacking of linear layers produces an acceleration effect, and a low rank bias, when optimizing with gradient descent \cite{Arora2018OnTO}. Moreover, the acceleration effect is akin to momentum, and cannot be reproduced by adding a regularizer to the objective. However, the acceleration effect as outlined in \cite{Arora2018OnTO} disappears in the NTK regime. This is because in this regime training speed is determined by the NTK itself, which does not change meaningfully when additional linear layers are stacked. Indeed, for a linear $g$ we have $\Theta(\xi,\tilde{\xi}) = L_g\frac{\xi^\top\tilde{\xi}}{d_0}$. However, by introducing a bottleneck, we regain the lost acceleration effect, as illustrated in the following lemma.
\begin{restatable}{lemma}{linear}\label{lemma:linear}
In the limit of $n \to \infty$, optimizing $\mathcal{F}$ by running gradient flow on the weights $\{w^l\}_{l=1}^{L_f}$ with a learning rate $\epsilon_f$, and $\{\theta^l\}_{l=1}^{L_g}$ with a learning rate $\epsilon_g$, is equivalent to running gradient flow directly on the effective weights $w_{\text{eff}}, \theta_\text{eff}$ with learning rates $L_f\epsilon_f,L_g\epsilon_g$.
\end{restatable}
\cref{lemma:linear} suggests that an infinitely wide deep linear network with a bottleneck is essentially reduced, under gradient descent, to a two layer, finite linear network with weight matrices $w_{\text{eff}}, \theta_\text{eff}$. Therefore, under mild initialization conditions,\footnote{The acceleration effect formally requires a small initialization and learning rates to hold. Empirically, these conditions may sometimes be relaxed.} a bottleneck brings about accelerated learning in linear networks. It is worth noting that additional capacity cannot be attained by stacking additional layers in linear networks: shallow and deep linear networks represent the same function class. Hence differences in training speed can be directly attributed to the trajectories of gradient descent. This does not hold for nonlinear finite networks, where changes in depth or width also affect capacity. However, in the infinite width regime, with or without bottlenecks, capacity is infinite. Hence, we can isolate the effect of bottlenecks on training and acceleration without capacity as a confounder.
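A scalar toy instance of the resulting acceleration (illustrative, not from the paper): by \cref{lemma:linear}, depth manifests as the rescaled learning rates $L_f\epsilon_f, L_g\epsilon_g$ on the effective weights, and gradient descent with the rescaled rates escapes a small initialization far faster than with the unscaled ones.

```python
# Toy model: F(xi) = w_eff * th_eff * xi with scalar weights, one sample
# (xi = 1, target 1), square loss. Per Lemma (linear), depth L_f = L_g = L
# corresponds to multiplying both effective learning rates by L.
def train(lr_w, lr_th, steps=200, target=1.0):
    w, th = 0.1, 0.1  # small initialization
    for _ in range(steps):
        err = w * th - target
        # simultaneous gradient-descent update on the effective weights
        w, th = w - lr_w * err * th, th - lr_th * err * w
    return (w * th - target) ** 2  # final loss

shallow = train(lr_w=0.01, lr_th=0.01)        # L = 1: unscaled rates
deep = train(lr_w=5 * 0.01, lr_th=5 * 0.01)   # L = 5: depth-rescaled rates
assert deep < 1e-6 and shallow > 1e-3  # the "deep" run converges much faster
```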
\section{Experiments}
We provide empirical support for our theoretical contributions in three parts:
\begin{itemize}
\item In part 1, we conduct simulations to numerically verify the theory in \cref{lemma:GP,lemma:linear,thm:main}. We present these results in \cref{verify}.
\item In part 2, we train infinite neural networks with bottlenecks on the MNIST \cite{mnist} and CIFAR-10 \cite{cifar} datasets by simulating SGD on the loss in function space, investigating training acceleration effects and test performance. \Cref{fig:mnist_cifar_train_loss} summarizes the results from these experiments. We observe that the accelerated training predicted by \cref{lemma:linear} for linear models is also visible when we train infinite width nonlinear networks with bottlenecks on the two real world datasets.\footnote{Figure~\ref{fig:cifar10_lr1k_metrics:train_loss_main} shows the loss for the first 15K steps on CIFAR-10, as some training runs did not complete in time due to limited compute resources. Extended results showing loss trajectories for models trained longer are available in figure~\ref{fig:cifar10_lr1k_metrics} in \cref{sec:exp}.} We present additional results and analysis in \cref{sec:exp}.
\item In part 3, we run experiments with finite width networks trained with standard SGD and verify that the acceleration effect holds in this setting. These results are presented in \cref{finiteapprox}.
\end{itemize}
\label{expt:data:mnist}
\begin{figure}[!t]
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{workshop_figures/MNIST/train_loss.pdf}
\caption{MNIST training loss}
\label{fig:mnist_lr5k_metrics:train_loss_main}
\end{subfigure}
\begin{subfigure}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{workshop_figures/CIFAR10/train_loss_first10K.pdf}
\caption{CIFAR-10 training}
\label{fig:cifar10_lr1k_metrics:train_loss_main}
\end{subfigure}
\caption{{Training loss for infinite width models with bottlenecks, shown for bottleneck widths from smallest to largest (inf indicates an infinite width bottleneck). Loss is minimized in function space by simulating SGD with \cref{eqn:eq1,eqn:eq2,eqn:eq3}.}}%
\label{fig:mnist_cifar_train_loss}
\end{figure}
\section{Conclusion}
In this work we investigate the effect of applying a bottleneck in an otherwise infinite width network. We do this by first deriving the ODEs corresponding to optimization of such a model under gradient flow. Our theoretical analysis reveals novel insights regarding the behaviour of input-output Jacobians, both at initialization and during training. Though our results are stated for a single hidden layer network after the bottleneck, we expect them to hold more generally. Empirically, we observe that infinite width networks with bottlenecks train much faster than their fully infinite counterparts, while typically achieving better overall performance. We hope our results pave the way for a new understanding of learning dynamics beyond kernel regimes.
| {
"timestamp": "2021-07-05T02:11:25",
"yymm": "2107",
"arxiv_id": "2107.00364",
"language": "en",
"url": "https://arxiv.org/abs/2107.00364",
"abstract": "We analyze the learning dynamics of infinitely wide neural networks with a finite sized bottle-neck. Unlike the neural tangent kernel limit, a bottleneck in an otherwise infinite width network al-lows data dependent feature learning in its bottle-neck representation. We empirically show that a single bottleneck in infinite networks dramatically accelerates training when compared to purely in-finite networks, with an improved overall performance. We discuss the acceleration phenomena by drawing similarities to infinitely wide deep linear models, where the acceleration effect of a bottleneck can be understood theoretically.",
"subjects": "Machine Learning (cs.LG)",
"title": "Implicit Acceleration and Feature Learning in Infinitely Wide Neural Networks with Bottlenecks",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347875615794,
"lm_q2_score": 0.727975460709318,
"lm_q1q2_score": 0.7093646134063272
} |
https://arxiv.org/abs/2005.09690 | Invariance of the tame fundamental group under base change between algebraically closed fields | We show that the tame étale fundamental group of a connected normal finite type separated scheme remains invariant upon base change between algebraically closed fields of characteristic $p \geq 0$. | \section{Statement of theorem}
For $X$ a scheme, we let $\pi_1(X)$ denote the \'etale fundamental group of $X$,
where we leave the base point implicit,
and we let $\pi_1^{(p)}(X)$ denote the maximal prime to $p$ quotient of $\pi_1(X)$,
again with an implicit choice of base point.
For convenience of notation, we let $\pi_1^{(0)}(X) := \pi_1(X)$.
If $X \ra Y$ and $Z \ra Y$ are morphisms, we denote $X \times_Y Z$ by $X_Z$.
In the case $Z = \spec B$, we also denote $X \times_Y Z$ by $X_B$.
In this note, we give a proof of the following theorem:
\begin{theorem}
\label{theorem:isomorphism-on-algebraically-closed-base-change}
Suppose $k$ is an algebraically closed field of characteristic $p$
and $U$ is a connected normal quasi-projective scheme over $k$.
Let $L$ be any algebraically closed field with a map $k \rightarrow L$, and define $U_L := U \times_{\spec k} \spec L$.
Then, the natural
map $\pi_1^{(p)}(U_L) \ra \pi_1^{(p)}(U)$ is an isomorphism.
In particular, if $p =0$, $\pi_1(U_L) \ra \pi_1(U)$ is an isomorphism.
\end{theorem}
\begin{remark}
\label{remark:}
This result is surely well known to the experts and is not in any way original.
Nevertheless, I was having difficulty finding it in the literature, and so I decided to write it up.
The proof written here is a combination of ideas presented to me by Brian Conrad and Jason Starr.
In particular, Jason Starr has written up a separate proof on mathoverflow at
\cite{MO:etale-fundamental-group-base-change-between-algebraically-closed-fields}.
The proof in this note is merely a re-organization of the ideas presented in that post.
It should also be noted that, at least in characteristic $0$, a proof is given in
\cite[Expos\'e XIII, Proposition 4.6]{noopsortSGA1Grothendieck1971}
taking $Y = \spec L$ in the statement there.
However, that proof relies on resolution of singularities, and so here we give a
proof
not involving such high-powered results.
\end{remark}
\begin{example}
\label{example:failure-of-algebraically-closed-base-change-in-char-p}
The prime to $p$ hypothesis in the characteristic $p > 0$ case is crucial.
If $k \subset L$ are two algebraically closed fields
of characteristic $p > 0$, then for $U$ a normal quasi-projective scheme over $k$ the map $\pi_1(U_L) \ra \pi_1(U)$ is not in general
an isomorphism.
A counterexample is provided in the case $U = \ba^1_k$
by Artin-Schreier covers.
In more detail, if $\pi_1(\ba^1_L) \ra \pi_1(\ba^1_k)$ were an isomorphism,
then the map $H^1(\ba^1_k, \bz/(p)) \ra H^1(\ba^1_L, \bz/(p))$
would also be an isomorphism.
The Artin-Schreier exact sequence identifies this with the
map
$k\left[ x \right]/\left\{ f^p - f : f \in k[x] \right\} \ra L\left[ x \right]/\left\{ f^p - f : f \in L[x] \right\}$,
and this map is not surjective because $a x^{p-1}$ for $a \in L - k$ does not lie in the image.
\end{example}
\subsection{Acknowledgements}
I would like to thank Brian Conrad and Jason Starr.
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1656518.
\section{Proof of theorem}
\subsection{Idea of proof of \autoref{theorem:isomorphism-on-algebraically-closed-base-change}}
\label{subsubsection:idea-of-proof-of-algebraically-closed-base-change}
The proof of \autoref{theorem:isomorphism-on-algebraically-closed-base-change} is fairly technically involved,
but the idea is not too complicated:
The key is to verify injectivity of $\pi_1(U_L) \ra \pi_1(U)$.
As a first step, we reduce from the normal case to the smooth case using that geometrically normal schemes have a dense open smooth subscheme.
Then, we assume our variety $U$ is smooth, and prove the theorem by reducing it to the curve case.
For this reduction, we fiber $U$ over a variety of one lower dimension, in which case we can apply the curve case
to the geometric generic fiber.
It remains to deal with the case that $U$ is a quasi-projective smooth curve, which
is also the most technically involved part.
In this case, we can write $U$ as $\overline U - D$
with $\overline U$ smooth and projective and $D$ a divisor.
To check injectivity, we want to check every finite \'etale cover of $U_L$ is the base change of some finite \'etale cover of $U$.
If $E$ is one such cover, we can use spreading out and specialization to obtain an \'etale cover $U' \ra U$
with the same ramification orders over the generic points of $D$ that $E$ has.
Then, we construct the cover $E'$ which is the normalization of $E$ in $E \times_{U_L} U_L'$,
and verify this is the base change of a cover from $k$.
We do so
by applying the projective version of \autoref{theorem:isomorphism-on-algebraically-closed-base-change},
using that $E'$ and $U_L'$ have projective compactifications $\overline E'$ and $\overline U_L'$
with an \'etale map $\overline E' \ra \overline U_L'$.
We now indicate how we put together the steps described in the above to prove \autoref{theorem:isomorphism-on-algebraically-closed-base-change}.
In \autoref{subsection:surjectivity} (\autoref{lemma:surjectivity}), we prove $\pi_1(U_L) \ra \pi_1(U)$ is surjective.
For injectivity, we first prove the map is injective
in the case $U$ is a smooth connected and quasi-projective curve in \autoref{subsubsection:smooth-case} (\autoref{proposition:curve-case}).
We prove in \autoref{subsection:induction} (\autoref{proposition:smooth-case}) that \autoref{theorem:isomorphism-on-algebraically-closed-base-change}
holds for smooth connected quasi-projective varieties of all dimensions.
Finally, we complete the proof
in the case that $U$ is normal connected and quasi-projective in \autoref{subsubsection:normal-case}.
\subsection{Surjectivity}
\label{subsection:surjectivity}
We first show $\pi_1(U_L) \ra \pi_1(U)$ is surjective.
\begin{lemma}
\label{lemma:surjectivity}
The map $\pi_1(U_L) \ra \pi_1(U)$ is surjective.
\end{lemma}
\begin{proof}
To see this, it suffices to verify
that the pullback of any connected finite \'etale cover over $U$ along $U_L \ra U$ is connected.
Since $L$ and $k$ are both algebraically closed,
the result follows from the fact that connectedness is preserved under base change between algebraically closed
fields.
\end{proof}
\subsection{The case $U$ is a smooth curve}
\label{subsubsection:smooth-case}
We now prove injectivity in the case that $U$ is a smooth connected quasi-projective curve.
For this, it suffices to show that any Galois connected finite \'etale
cover $E$ of $U_L$ is the base change of some Galois connected finite \'etale cover of $U$.
To show this, it suffices
to find a connected finite \'etale cover $E'_k \ra U$ so that $(E'_k)_L \ra U_L$ factors through $E$.
As a first step, we wish to find a cover $U'$ of $U$ with the same ramification orders as $E$ over points
in the projective completion of $U$.
\subsubsection{Notational Setup}\label{subsubsection:notation-E}
Let $k \ra L$ be an inclusion of algebraically closed fields,
let $U$ be a smooth curve over $k$, $\overline U$ its regular projective completion, and $D := \overline U - U$.
Let $E \rightarrow U_L$ be a Galois connected finite \'etale cover.
Let $\overline E$ be the normalization of $\overline U_L$ inside $E$.
\begin{lemma}
\label{lemma:ramification-order-cover}
With notation as in \autoref{subsubsection:notation-E},
there exists a connected finite cover $\overline U' \ra \overline U$, \'etale over $U$,
with the same ramification orders that $\overline E$ has over the corresponding points of
$D_L$.
\end{lemma}
\begin{proof}
The idea is to ``spread out and specialize'' $E$.
To construct $U'$, we can find a finitely generated $k$-subalgebra
$A \subset L$ and a finite \'etale cover $E_A \ra U_A$
so that $(E_{A})_L \simeq E$.
Since $k$ is algebraically closed, for any field $K \supset k$, the irreducible components of
$D_K$ arise uniquely from the irreducible components of $D$ under scalar extension. We freely use this bijection
in what follows.
Let $K(A)$ denote the fraction field of $A$.
Note that the ramification order of $E_{K(A)}$ over each point of $D_{K(A)}$
agrees with that of $E$ over the corresponding point of $D_L$.
Further, we claim that for a general closed point $s$ of $\spec A$,
the degree of ramification of $s \times_{\spec A} E_A$ over a point of $s \times_{\spec A} D_A \simeq D$
agrees with the ramification order of $E_{K(A)}$ over the corresponding generic point of $D_{K(A)}$.
To see why this ramification order is constant over an open set of $\spec A$,
recall that the ramification order can be identified with the degree of the relative sheaf of differentials at that point.
So, for $p \in D$ a point, consider the sheaf of relative differentials associated to $E_A \times_{U_A} p_A \ra p_A$;
under the identification $p_A \simeq \spec A$, this sheaf has some degree, say $n$, over the generic point.
It follows that there is an open subscheme of $\spec A$ on which the sheaf has degree $n$,
and so the morphism has ramification degree $n$ over some open subscheme of $\spec A$.
Since $k$ is an algebraically closed field, every closed point of $\spec A$ has residue field $k$, so we may
choose such a closed point $t \colon \spec k \ra \spec A$
with the same ramification orders over $D$ as $E$ over the corresponding points of $D_L$.
Since the locus on the base $\spec A$ over which the fiber of $E_A$ is connected is constructible, we may also assume
the fiber of $E_A \ra \spec A$ over $t$ is connected.
Then, $U' := E_A \times_{\spec A} \spec k$ is our desired connected finite \'etale cover.
\end{proof}
Summarizing, the situation of \autoref{lemma:ramification-order-cover},
we obtain the commutative diagram
\begin{equation}
\nonumber
\begin{tikzcd}
E \ar{r} \ar{d} & E_A \ar{d} & U'\ar{l}\ar{d} \\
U_L \ar{d} \ar{r} & U_A \ar{d} & U \ar{l}\ar{d} \\
\spec L \ar{r} & \spec A & \spec k \ar[swap]{l}{t}
\end{tikzcd}\end{equation}
where the four squares are fiber products.
\subsubsection{Further notation}
\label{subsubsection:notation-U'}
Let $\overline U'$ denote the normalization of $\overline U$ in $U'$.
By possibly replacing $U'$ with its normalization in a finite field extension, we may assume
$K(U) \ra K(U')$ is even Galois.
Let $\overline E'$ denote the normalization of $\overline E$ in $\overline E \times_{\overline U_L} \overline U_L'$
and let $E' := \overline E' \times_{\overline U_L'} U_L'$,
as in the commutative diagram
\[\begin{tikzcd}[row sep={40,between origins}, column sep={40,between origins}]
& E' \ar{rr}\ar{dd}\ar{dl} & & \overline E' \ar{dd}\ar{dl} \\
E \ar[crossing over]{rr} \ar{dd} & & \overline E \\
& U_L' \ar{rr} \ar{dl} & & \overline U_L' \ar{dl} \\
U_L \ar{rr} && \overline U_L \ar[from=uu,crossing over] & & D_L. \ar{ll}
\end{tikzcd}\]
Observe that the finite map $\overline U' \ra \overline U$ restricts to $U' \ra U$ over $U \subset \overline U$ as $U$ is normal.
Furthermore, by Abhyankar's lemma
\cite[A I.11]{FreitagK:lectures-etale}
(see also \cite[Expose XIII, 5.2]{noopsortSGA1Grothendieck1971})
we obtain that $\overline U'$ is regular, hence smooth (as we are working over an algebraically closed field $k$).
Although the normalization $\overline{E} \ra \overline{U}_L$ of $\overline{U}_L$ in $E \ra U_L$
is not necessarily \'etale, we now show the finite surjection $\overline E' \ra \overline U_L'$ is \'etale.
\begin{lemma}
\label{lemma:projective-cover-is-etale}
Retain notation as in \autoref{subsubsection:notation-E} and \autoref{subsubsection:notation-U'}.
Then, $\overline E' \ra \overline U_L'$ is \'etale.
\end{lemma}
\begin{proof}
From \autoref{lemma:ramification-order-cover}, by construction of $U' \ra U$, for each point
of $D_L$, the ramification degree of
$\overline E \ra \overline U_L$
agrees with
that of
$\overline U'_L \ra \overline U_L$.
We next claim that the resulting map $\overline E' \ra \overline U_L'$
is \'etale over all points of $\overline U_L'$ lying above a point of $D_L$.
Indeed, this is where we crucially use the assumption that $E \ra U_L$ has degree prime to $p$.
Since being \'etale can be checked
in the local ring at each such point, \'etaleness of $\overline E' \ra\overline U_L'$ follows from \autoref{lemma:local-computation} below.
\end{proof}
\begin{lemma}
\label{lemma:local-computation}
Suppose $\spec B \ra \spec A$ and $\spec C \ra \spec A$ are two maps of local Dedekind domains, both ramified to order $n$,
with $n$ prime to the characteristic of $K(A)$.
Let $\spec D \ra \spec A$ be the normalization of $\spec B$ in $\spec B \times_{\spec A} \spec C$.
Then, $\spec D \ra \spec C$ is \'etale.
\end{lemma}
\begin{proof}
First, we reduce to the situation that $A$ is strictly henselian. Indeed, recall that
the strict henselization of $A$ is constructed
as a limit of \'etale covers of $\spec A$.
Because we can check that $\spec D \ra \spec C$ is \'etale after passing to an \'etale cover, and because normalization
commutes with passing to \'etale covers, we can check the result in the case that $A$ is strictly henselian.
Therefore, we now assume $A$ is strictly henselian. Let $\pi$ denote a uniformizer of $A$.
By Abhyankar's lemma
\cite[A I.11]{FreitagK:lectures-etale}, we know that $B \simeq A[\pi^{1/n}]$ and $C \simeq A[\pi^{1/n}]$.
Then, we claim $D = B \otimes_A C$.
To verify this, it suffices to show that $\phi: \spec B \otimes_A C \ra \spec C$ is \'etale, because $C$ is normal and
\'etale schemes over normal schemes are normal by Serre's R1+S2 criterion for normality.
Therefore, the fiber product $\spec B \otimes_A C$ would already be normal, and hence must be the desired normalization.
Further, this would then imply $\spec D \ra \spec C$ is \'etale because $D = B \otimes_A C$ and $\spec B \otimes_A C \ra \spec C$ is
\'etale.
So, to conclude the proof, we only need check $B \otimes_A C \ra C$ is \'etale. Indeed, letting $\zeta_n$ be an $n$th root of unity,
\begin{align*}
B \otimes_A C &\simeq A[\pi^{1/n}] \otimes_A A[\pi^{1/n}] \\
&\simeq A[\pi^{1/n}][x]/(x^n-\pi) \\
&\simeq \prod_{i=1}^n A[\pi^{1/n}][x]/(x-\zeta_n^i \pi^{1/n}) \\
&\simeq \oplus_{i=1}^n A[\pi^{1/n}],
\end{align*}
which is indeed \'etale over $A[\pi^{1/n}] \simeq C$.
\end{proof}
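As a quick numeric sanity check of the splitting $x^n - \pi = \prod_{i=1}^n (x - \zeta_n^i \pi^{1/n})$ underlying the computation above, one can expand the product over $\mathbb{C}$ with a placeholder value for $\pi$ (illustrative only; the argument of course takes place over a strictly henselian $A$ with $n$ invertible):

```python
import numpy as np

n = 5
pi_val = 2.0 + 0.0j                 # placeholder value for the uniformizer
root = pi_val ** (1.0 / n)          # a fixed n-th root pi^{1/n}
zeta = np.exp(2j * np.pi / n)       # primitive n-th root of unity

# Expand prod_{i=1}^{n} (x - zeta^i * pi^{1/n}) into monic coefficients.
coeffs = np.poly([root * zeta ** i for i in range(1, n + 1)])
target = np.zeros(n + 1, dtype=complex)
target[0], target[-1] = 1.0, -pi_val  # coefficients of x^n - pi
assert np.allclose(coeffs, target, atol=1e-9)
```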
We are now prepared to complete the curve case of \autoref{theorem:isomorphism-on-algebraically-closed-base-change}.
\begin{proposition}
\label{proposition:curve-case}
\autoref{theorem:isomorphism-on-algebraically-closed-base-change} holds in the case that $U$ is a smooth curve.
\end{proposition}
\begin{proof}
Retain notation from \autoref{subsubsection:notation-E} and \autoref{subsubsection:notation-U'}.
Since $E' \ra U_L$ is a finite \'etale cover of $U_L$ dominating $E \ra U_L$,
to complete the proof in the case that $U$ is smooth, it suffices to show $E' \ra U_L$ is the base change of some finite \'etale cover $E'_k \ra U$.
We showed in \autoref{lemma:projective-cover-is-etale} that $\overline E' \rightarrow \overline U'_L$ is a finite \'etale cover.
Since $\overline U'$ is projective, by \cite[Tag 0A49]{stacks-project},
we obtain that there is some finite \'etale cover $\overline E'_k \ra \overline U'$ with $\overline E' \simeq \left( \overline E'_k \right)_L$.
Define $E'_k := \overline E'_k \times_{\overline U'} U'$
and note that $(E'_k)_L \simeq E'$.
Since $U' \ra U$ is also finite \'etale, we obtain that
the composition $E'_k \ra U' \ra U$ is finite \'etale.
Therefore,
$(E'_k)_L \simeq E'$ is a finite \'etale cover of $U_L$
which is the base change of a finite \'etale cover of $U$.
We have shown that $(E'_k)_L \ra U_L$ is a finite \'etale cover of $U_L$ factoring through $E$, which
is what we needed to construct.
\end{proof}
\subsection{The case $U$ is smooth}
\label{subsection:induction}
In this section, specifically in \autoref{proposition:smooth-case},
we prove \autoref{theorem:isomorphism-on-algebraically-closed-base-change} in
the case that $U$ is a smooth connected quasi-projective variety
of arbitrary dimension.
To start, we reduce to the case that $U$ is a relative curve over projective space of one dimension lower.
\begin{proposition}
\label{proposition:smooth-morphism-reduction}
Let $U$ be a smooth connected quasi-projective variety of arbitrary dimension $d$.
In order to show \autoref{theorem:isomorphism-on-algebraically-closed-base-change}
holds for $U$, it suffices to show it holds for smooth connected quasi-projective
varieties
$U$ with a dominant generically smooth map $\alpha \colon U \ra \bp^{d-1}_k$.
\end{proposition}
\begin{proof}
Say we have $U \subset \bp^n_k$.
To start, replacing $U$ by its span, we may assume $U$ is nondegenerate.
Choose a
codimension $d$ plane $H \subset \bp^n_k$
such that if $J' \subset \bp^n_k$ is a general codimension $d-1$ plane containing $H$,
we have $J' \cap U$ is smooth. This is possible by Bertini's theorem, see \autoref{lemma:intersection-smooth}
for a detailed proof.
Then, $\pi_1(U - H \cap U) \simeq \pi_1(U)$ because
the fundamental group of a smooth variety is unchanged by removing any set of codimension
at least $2$ (see \autoref{lemma:removing-codimension-2}).
So, replacing $U$ by $U - H \cap U$, we may assume there is a codimension $d$ plane
which $U$ intersects trivially.
We now prove the proposition in the case there is a codimension $d$ plane $H$ intersecting $U$ trivially
and so that for a general plane $J'$ containing $H$, the intersection $J' \cap U$ is smooth.
We want to show there is a dominant map $U \ra \bp^{d-1}_k$
whose generic fiber is smooth.
Geometrically, this map is given by sending a point on $U$ to the
unique codimension $d-1$ plane containing $H$ in which that point lies.
More formally, let $\scv \subset \bp^n_k \times \bp^{d-1}_k$ denote the universal family over $\bp^{d-1}_k$,
the parameter space of codimension $d-1$ planes containing the codimension $d$ plane $H$.
We obtain a map $\scv \ra \bp^n_k$.
This map is birational, and an isomorphism away from $H$.
Because $U$ does not intersect $H$,
we may consider $U \subset \scv$.
The map $\scv \ra \bp^{d-1}_k$ induces a map
$U \ra \bp^{d-1}_k$.
By construction of $H$, the generic fiber of the map is smooth.
\end{proof}
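Concretely, although the proof above does not fix coordinates, the map constructed there is simply a linear projection. Choosing homogeneous coordinates $[x_0 : \cdots : x_n]$ on $\bp^n_k$ so that $H = V(x_0, \ldots, x_{d-1})$, it becomes
\begin{equation}
\nonumber
\pi_H \colon \bp^n_k - H \ra \bp^{d-1}_k, \qquad [x_0 : \cdots : x_n] \mapsto [x_0 : \cdots : x_{d-1}],
\end{equation}
whose fiber over a point of $\bp^{d-1}_k$ is the unique codimension $d-1$ plane containing $H$ determined by that point, with $H$ itself removed. Since $U \cap H = \emptyset$, restricting $\pi_H$ to $U$ recovers the map $U \ra \bp^{d-1}_k$ of the proposition.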
Using the fibration of \autoref{proposition:smooth-morphism-reduction}, we next show that the generic fiber
of Galois connected finite \'etale cover $E_L \ra U_L$
over $L$ is the base change of a Galois connected finite \'etale cover over $K$.
\begin{proposition}
\label{proposition:base-change-generic-fiber}
Assume $U$ is a smooth connected normal $k$-variety with a dominant generically smooth map
$\alpha \colon U \ra \bp^{d-1}_k$ and
suppose $E_L \ra U_L$ is a Galois connected finite \'etale cover.
Let $\eta_k$ denote the generic point of $\bp^{d-1}_k$ and $\eta_L$ denote the
generic point of $\bp^{d-1}_L$.
Our given cover restricts to a Galois connected finite \'etale cover $E_{\eta_L} \ra U_{\eta_L}$
which is the base change of some Galois connected finite \'etale cover $E_{\eta_k} \ra U_{\eta_k}$.
\end{proposition}
\begin{proof}
Let $\overline \eta_k$ and $\overline \eta_L$ denote
geometric generic points corresponding to $\eta_k$ and $\eta_L$,
meaning that $\overline \eta_k$ has residue field which is the algebraic closure of
$\kappa(\eta_k)$ and similarly for $L$.
Since $E_{\overline \eta_L} := E_L \times_{\bp^{d-1}_L} \overline \eta_L$
is smooth and of dimension $1$,
by the curve case of \autoref{theorem:isomorphism-on-algebraically-closed-base-change}, shown in
\autoref{proposition:curve-case},
it arises as the base change of some cover
$E_{\overline \eta_k} \ra U_{\overline \eta_k}$.
That is, $(E_{\overline \eta_k})_{\overline \eta_L} \simeq E_{\overline \eta_L}$.
To conclude the proof, we only need realize $E_{\overline \eta_k} \ra U_{\overline \eta_k}$
as the base change of a map over $\eta_k$.
We can realize $\overline \eta_k \ra \eta_k$ as the composition of a purely inseparable
morphism $\overline \eta_k \ra \eta_k^s$ and a separable morphism $\eta_k^s \ra \eta_k$ by taking
$\eta_k^s := \spec \kappa(\eta_k)^s$. (Here, for a field $K$, $K^s$ denotes its separable closure.)
Since $\overline \eta_k \ra \eta_k^s$ is a universal homeomorphism, the same is true of
$U_{\overline \eta_k} \ra U_{\eta_k^s}$, and so the map induces an isomorphism of \'etale fundamental
groups
$\pi_1(U_{\overline \eta_k}) \ra \pi_1(U_{\eta_k^s})$ \cite[Tag 0BQN]{stacks-project}.
It follows that $E_{\overline \eta_k} \ra U_{\overline \eta_k}$ is the base change of a morphism
$E_{\eta_k^s} \ra U_{\eta_k^s}$ over $\eta_k^s$.
It suffices to verify this is the base change of a map over $\eta_k$, which we will do by showing
$E_{\eta_k^s} \ra U_{\eta_k^s}$ is stable under the
$\pi_1(\eta_k) = \gal(\eta_k^s/\eta_k)$ action.
Indeed, observe that we have the explicit descriptions $\eta_k \simeq \spec k(x_1, \ldots, x_{d-1})$ and $\eta_L \simeq \spec L(x_1, \ldots, x_{d-1})$.
It follows that the two maps of schemes $\eta_k^s \ra \eta_k$ and $\eta_L \ra \eta_k$
correspond to the field extensions
$k(x_1, \ldots, x_{d-1}) \ra k(x_1, \ldots, x_{d-1})^s$ and $k(x_1, \ldots, x_{d-1}) \ra L(x_1, \ldots, x_{d-1})$,
which are linearly disjoint; this is a standard fact (see \autoref{lemma:linearly-disjoint}).
Since $k(x_1, \ldots, x_{d-1})^s$ and $L(x_1, \ldots, x_{d-1})$ are linearly disjoint over $k(x_1, \ldots, x_{d-1})$,
any automorphism of $k(x_1, \ldots, x_{d-1})^s$ over $k(x_1, \ldots, x_{d-1})$ extends to an automorphism of $L(x_1, \ldots, x_{d-1})^s$ over $L(x_1, \ldots, x_{d-1})$.
Since
$E_{\overline \eta_L} \ra U_{\overline \eta_L}$
was invariant under the $\pi_1(\eta_L)$ action,
it follows that $E_{\overline \eta_k} \ra U_{\overline \eta_k}$
is invariant under the $\pi_1(\eta_k)$ action.
Therefore, $E_{\overline \eta_k} \ra U_{\overline \eta_k}$
descends to a map $E_{\eta_k} \ra U_{\eta_k}$ over $\eta_k$, completing the proof.
\end{proof}
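For the reader's convenience, we recall the notion of linear disjointness used in the preceding proof. Two field extensions $E$ and $F$ of a field $K$, contained in a common extension $\Omega$, are \textit{linearly disjoint} over $K$ if every finite collection of elements of $E$ which is linearly independent over $K$ remains linearly independent over $F$; equivalently, the natural multiplication map
\begin{equation}
\nonumber
E \otimes_K F \ra \Omega, \qquad e \otimes f \mapsto ef,
\end{equation}
is injective. It is this property that allows automorphisms over the smaller field to be extended, and hence the descent datum in the proof to be produced.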
We now complete the proof of \autoref{theorem:isomorphism-on-algebraically-closed-base-change} in the case $U$ is smooth.
\begin{proposition}
\label{proposition:smooth-case}
\autoref{theorem:isomorphism-on-algebraically-closed-base-change} holds when $U$ is smooth.
\end{proposition}
\begin{proof}
By \autoref{proposition:smooth-morphism-reduction}, we may assume there is a generically smooth dominant map
$U \ra \bp^{d-1}_k$. With notation as in \autoref{proposition:base-change-generic-fiber},
any Galois connected finite \'etale cover $E_L \ra U_L$ restricts to a cover $E_{\eta_L} \ra U_{\eta_L}$ which is the base change of a Galois
connected finite \'etale cover
$E_{\eta_k} \ra U_{\eta_k}$.
Define $E_k$ to be the normalization of $U$ in the fraction field of $E_{\eta_k}$.
We claim that $(E_k)_L \simeq E_L$ as covers of $U_L$.
This will complete the proof, because upon verifying $(E_k)_L \simeq E_L$,
we will obtain that $E_k \ra U$ must be \'etale, as the base change to $L$ is \'etale.
To see $(E_k)_L \simeq E_L$ as covers of $U_L$, we know $E_L$ is the normalization of $U_L$ in $K(E_L) = K(E_{\eta_L})$.
Further, since $L/k$ has a separating transcendence basis (since $k$ is algebraically closed, hence perfect),
it follows that $(E_k)_L$ is normal and has fraction field $K(E_L)$.
Therefore, $(E_k)_L \simeq E_L$ by the
universal property of normalization.
\end{proof}
\subsection{Proof of injectivity in the general case}
\label{subsubsection:normal-case}
We now complete the proof of the theorem for normal connected quasi-projective schemes,
using that we have proven it for
smooth $U$.
\begin{proof}[Proof of \autoref{theorem:isomorphism-on-algebraically-closed-base-change}]
By \autoref{lemma:surjectivity}, the map $\pi_1(U_L) \ra \pi_1(U)$ is surjective.
To complete the proof, we wish to show it is injective.
To verify the map $\pi_1(U_L) \ra \pi_1(U)$ is injective, we would
like to show that if $E \ra U_L$ is any connected finite \'etale cover,
then $E$ is isomorphic to $(E')_L$ for $E' \ra U$ some connected
finite \'etale cover.
To see this, start with some $E \ra U_L$.
Let $W \subset U$ denote the maximal dense smooth open subscheme of $U$.
Since we have already shown the map $\pi_1(W_L) \ra \pi_1(W)$ is an isomorphism
in \autoref{proposition:smooth-case},
we know that $E \times_{U_L} W_L$ is isomorphic to the base change of some finite \'etale
cover
$E' \ra W$ along $\spec L \ra \spec k$.
Let $\widetilde{E}'$ denote the normalization of $U$ in $E'$.
Since $U$ is normal, $\widetilde{E}' \ra U$ is a finite morphism.
The setup this far is summarized by the commutative diagrams
\begin{equation}
\nonumber
\begin{tikzcd}
E \times_{U_L} W_L \ar {r} \ar {d} & E \ar {d} & E' \ar {r} \ar {d} & \widetilde{E}' \ar {d} \\
W_L \ar {r} & U_L & W \ar {r} & U.
\end{tikzcd}\end{equation}
To complete the proof, we need only show $\widetilde{E}' \ra U$
is \'etale and there is an isomorphism $\widetilde{E}'_L \simeq E$ over $U_L$.
Indeed, since $\widetilde{E}'$ is normal and finite over $U$, the base change
$\widetilde{E}'_L$ is normal and finite over $U_L$ (again using that $L/k$ has a separating
transcendence basis).
It follows that $\widetilde{E}'_L$ is the normalization of
$U_L$ in $E'_L \simeq E \times_{U_L} W_L$.
But, since $E$ is also the normalization of $U_L$ in
$E \times_{U_L} W_L$, we obtain that
$E \simeq \widetilde{E}'_L$.
Since $\widetilde{E}'_L \simeq E \ra U_L$ is \'etale, it follows that
$\widetilde{E}' \ra U$ is also \'etale, completing the proof.
\end{proof}
% arXiv:2005.09690 --- Invariance of the tame fundamental group under base change between algebraically closed fields
% https://arxiv.org/abs/1901.03794 --- Analyzing a Maximum Principle for Finite Horizon State Constrained Problems via Parametric Examples. Part 1: Problems with Unilateral State Constraints
\section{Introduction}
It is well known that optimal control problems with state constraints are models of importance, but one usually faces many difficulties in analyzing them. These models have been considered since the early days of optimal control theory. For instance, the whole Chapter VI of the classical work \cite[pp.~257--316]{Pont_Bolt_Gamk_Mish_1962} is devoted to problems with restricted phase coordinates. There are various forms of the maximum principle for optimal control problems with state constraints; see, e.g., \cite{Hartl_Sethi_Vickson_1995}, where the relations between several forms are shown and a series of illustrative numerical examples has been solved.
To deal with state constraints, one has to use functions of bounded variation, Borel measurable functions, Lebesgue-Stieltjes integral, nonnegative measures on the $\sigma-$algebra of the Borel sets, the Riesz Representation Theorem for the space of continuous functions, and so on.
By using the maximum principle presented in \cite[pp.~233--254]{Ioffe_Tihomirov_1979}, Phu \cite{Phu_1989,Phu_1992} proposed an ingenious method called \textit{the method of region analysis} to solve several classes of optimal control problems with one state and one control variable, which have both state and control constraints. Minimization problems of the Lagrange type were considered by the author and, among other things, it was assumed that the integrand of the objective function is strictly convex with respect to the control variable. To be more precise, the author considered \textit{regular problems}, i.e., optimal control problems whose Pontryagin function is strictly convex with respect to the control variable.
In the present paper, the maximum principle for finite horizon state constrained problems from the book by Vinter \cite[Theorem~9.3.1]{Vinter_2000} is analyzed via parametric examples. The latter has origin in a recent paper by Basco, Cannarsa, and Frankowska \cite[Example~1]{Basco_Cannarsa_Frankowska_2018}, and resembles the optimal growth problem in mathematical economics (see, e.g., \cite[pp.~617--625]{Takayama_1974}). The solution existence of these parametric examples, which are \textit{irregular optimal control problems} in the sense of Phu \cite{Phu_1989,Phu_1992}, is established by invoking Filippov's existence theorem for Mayer problems \cite[Theorem~9.2.i and Section~9.4]{Cesari_1983}. Since the maximum principle is only a necessary condition for local optimal processes, a large amount of additional investigations is needed to obtain a comprehensive synthesis of finitely many processes suspected for being local minimizers. Our analysis not only helps to understand the principle in depth, but also serves as a sample of applying it to meaningful prototypes of economic optimal growth models.
Note that the \textit{maximum principle} for finite horizon state constrained problems in \cite[Chapter~9]{Vinter_2000} covers many known ones for smooth problems and allows us to deal with nonsmooth problems by using the \textit{Mordukhovich normal cone} and the \textit{Mordukhovich subdifferential} \cite{Mordukhovich_2006a,Mordukhovich_2006b,Mordukhovich_2018}, which are also called the limiting normal cone and the limiting subdifferential. This principle is a necessary optimality condition which asserts the existence of a multiplier set $(p, \mu, \nu,\gamma)$ consisting of an \textit{absolutely continuous function} $p$, a \textit{function of bounded variation}~$\mu$, a \textit{Borel measurable function} $\nu$, and a \textit{real number} $\gamma\geq 0$, where $(p,\mu,\gamma)\neq (0,0,0)$, such that the four conditions (i)--(iv) in Theorem~\ref{V_thm9.3.1 necessary condition} below are satisfied. The relationships between these conditions are worth a detailed analysis. We will present such an analysis via three parametric examples of optimal control problems of the Lagrange type, which have five parameters $(\lambda,a,x_0,t_0,T)$, where $\lambda>0$ appears in the description of the objective function, $a>0$ appears in the differential equation, $x_0$ is the initial value, $t_0$ is the initial time, and $T$ is the terminal time. Observe that, in Example~1 of \cite{Basco_Cannarsa_Frankowska_2018}, $T=\infty$ while $x_0$ and $t_0$ are fixed. Problems with unilateral state constraints are studied in Part 1 of the paper. Problems with bilateral state constraints will be addressed in Part 2.
This Part 1 is organized as follows. Section \ref{Background Materials} presents some background materials including the above-mentioned maximum principle and Filippov's existence theorem for Mayer problems. Control problems without state constraints are considered in Section \ref{Example 1}, while control problems with unilateral state constraints are studied in Section \ref{Example 2}. Some concluding remarks are given in Section \ref{Conclusions}.
\section{Background Materials}\label{Background Materials}
In this section, we give some notations, definitions, and results that will be used repeatedly in the sequel.
\subsection{Notations and Definitions }
The symbol ${\rm I\!R}$ (resp., ${\rm I\!N})$ denotes the set of real numbers (resp., the set of positive integers). The norm in the $n$-dimensional Euclidean space ${\rm I\!R}^n$ is denoted by $\|.\|$. For a subset $C\subset{\rm I\!R}^n$, we abbreviate its \textit{convex hull} to $\mbox{\rm co}\, C$. For a set-valued map $F: {\rm I\!R}^n \rightrightarrows {\rm I\!R}^m$, we call the set ${\rm gph}\, F := \{(x,y) \in {\rm I\!R}^n \times {\rm I\!R}^m\,:\, y\in F(x) \}$ the \textit{graph} of $F$.
Let $\Omega \subset {\rm I\!R}^n$ be a closed set and $\bar{v} \in \Omega$. The {\it Fr\'echet normal cone} (also called the {\it prenormal
cone}, or the {\it regular normal cone}) to
$\Omega\subset{\rm I\!R}^n$ at $\bar{v}$ is given by
\begin{eqnarray*}
\widehat
N_{\Omega}(\bar{v})=\left\{v'\in{\rm I\!R}^n\,:\,
\displaystyle\limsup_{v\xrightarrow{\Omega}\bar{v}}\,\displaystyle\frac{\langle
v',v-\bar{v} \rangle}{\|v-\bar{v} \|}\leq 0\right\},\end{eqnarray*} where
$v\xrightarrow{\Omega}\bar{v}$ means $v\to \bar{v}$ with $v\in\Omega$. The {\it Mordukhovich} (or {\it limiting}) {\it normal cone} to $\Omega$ at $\bar{v}$ is defined by
\begin{eqnarray*} N_{\Omega}(\bar{v})
=\big\{v'\in{\rm I\!R}^n\,:\,\exists \mbox{ sequences } v_k\to \bar{v},\ v_k'\rightarrow v' \mbox{ with } v_k'\in \widehat
N_{\Omega}(v_k)\; \mbox{for all}\; k\in {\rm I\!N}\big\}.\end{eqnarray*}
Given an extended real-valued function $\varphi: {\rm I\!R}^n \rightarrow {\rm I\!R}\cup\{-\infty, +\infty\}$, one defines the \textit{epigraph} of $\varphi$ by $\mbox{\rm epi}\, \varphi=\{(x, \mu)\in {\rm I\!R}^n\times {\rm I\!R} \,:\, \mu \geq \varphi (x)\}$. The \textit{Mordukhovich subdifferential} (or \textit{limiting subdifferential}) of $\varphi$ at $\bar x\in {\rm I\!R}^n$ with $|\varphi (\bar x)|< \infty$ is defined by
\begin{equation*}
\partial\varphi(\bar x)=\big\{x^*\in {\rm I\!R}^n \;:\; (x^*, -1)\in N\big((\bar x, \varphi(\bar x)); \mbox{\rm epi}\, \varphi \big)\big\}.
\end{equation*}
If $|\varphi (\bar x)|= \infty$, then one puts $\partial\varphi(\bar x)=\emptyset$.
The reader is referred to \cite[Chapter~1]{Mordukhovich_2006a} and \cite[Chapter~1]{Mordukhovich_2018} for comprehensive treatments of the Fr\'echet normal cone, the limiting normal cone, the limiting subdifferential, and the related calculus rules.
For a given segment $[t_0, T]$ of the real line, we denote the $\sigma$-algebra of its Lebesgue measurable subsets (resp., the $\sigma$-algebra of its Borel measurable subsets) by $\mathcal{L}$ (resp., $\mathcal{B}$). The Sobolev space $W^{1,1}([t_0, T], {\rm I\!R}^n)$ is the linear space of the absolutely continuous functions $x:[t_0, T] \to {\rm I\!R}^n$ endowed with the norm $\|x\|_{W^{1,1}}=\|x(t_0)\|+\displaystyle\int_{t_0}^T \|\dot x(t)\| dt$ (see, e.g., \cite[p.~21]{Kolmogorov_Fomin_1970} for this and another equivalent norm).
As in \cite[p.~321]{Vinter_2000}, we consider the following \textit{finite horizon optimal control problem of the Mayer type}, denoted by $\mathcal M$,
\begin{equation}\label{cost functional_FP}
\mbox{Minimize}\ \; g(x(t_0), x(T)),
\end{equation}
over $x \in W^{1,1}([t_0, T], {\rm I\!R}^n)$ and measurable functions $u:[t_0, T] \to {\rm I\!R}^m$ satisfying
\begin{equation}\label{state control system_FP}
\begin{cases}
\dot x(t)=f(t, x(t), u(t)),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
(x(t_0), x(T))\in C\\
u(t)\in U(t), &\mbox{a.e.\ } t\in [t_0, T]\\
h(t, x(t))\leq 0, & \forall t\in [t_0, T],
\end{cases}
\end{equation}
where $[t_0, T]$ is a given interval, $g: {\rm I\!R}^n\times {\rm I\!R}^n \to {\rm I\!R}$, $f: [t_0, T]\times {\rm I\!R}^n\times {\rm I\!R}^m \to {\rm I\!R}^n$, and $h:[t_0, T]\times {\rm I\!R}^n \to {\rm I\!R}$ are given functions, $C\subset {\rm I\!R}^n\times {\rm I\!R}^n$ is a closed set, and $U: [t_0, T]\rightrightarrows {\rm I\!R}^m$ is a set-valued map.
A measurable function $u:[t_0, T] \to {\rm I\!R}^m$ satisfying $u(t)\in U(t)$ a.e. $t\in [t_0, T]$ is called a \textit{control function}. A \textit{process} $(x, u)$ consists of a control function $u$ and an arc $x \in W^{1,1}([t_0, T]; {\rm I\!R}^n)$ that is a solution to the differential equation in \eqref{state control system_FP}. A \textit{state trajectory} $x$ is the first component of some process $(x, u)$. A process $(x, u)$ is called \textit{feasible} if the state trajectory satisfies the \textit{endpoint constraint} $(x(t_0), x(T))\in C$ and the \textit{state constraint} $h(t, x(t))\leq 0$ for all $t\in [t_0, T]$.
Due to the appearance of the state constraint, the problem $\mathcal M$ in \eqref{cost functional_FP}--\eqref{state control system_FP} is said to be an \textit{optimal control problem with state constraints}. But, if the inequality $h(t, x(t))\leq 0$ is fulfilled for every $(t, x(t))$ with $t \in [t_0, T]$ and $x \in W^{1,1}([t_0, T]; {\rm I\!R}^n)$ (for example, when $h$ is a constant function having a fixed nonpositive value), i.e., the condition $h(t, x(t))\leq 0$ for all $t\in [t_0, T]$ can be removed from~\eqref{state control system_FP}, then one says that $\mathcal M$ is an \textit{optimal control problem without state constraints}.
The \textit{Hamiltonian} $\mathcal H: [t_0, T]\times {\rm I\!R}^n \times {\rm I\!R}^n\times {\rm I\!R}^m \to {\rm I\!R}$ of \eqref{state control system_FP} is defined by
\begin{equation}\label{Hamiltonian}
\mathcal H(t, x, p, u):=p.f(t, x, u)=\displaystyle\sum_{i=1}^np_if_i(t, x, u).
\end{equation}
\begin{Definition}\label{local_minimizer} {\rm
A feasible process $(\bar x, \bar u)$ is called a $W^{1,1}$ \textit{local minimizer} for $\mathcal M$ if there exists $\delta>0$ such that $g(\bar x(t_0), \bar x(T))\leq g(x(t_0), x(T))$ for every feasible process $(x, u)$ satisfying $\|\bar x-x\|_{W^{1,1}} \leq \delta$.}
\end{Definition}
\begin{Definition}\label{global_minimizer} {\rm
A feasible process $(\bar x, \bar u)$ is called a $W^{1,1}$ \textit{global minimizer} for $\mathcal M$ if, for every feasible process $(x, u)$, one has $g(\bar x(t_0), \bar x(T))\leq g(x(t_0), x(T))$.}
\end{Definition}
\begin{Definition}[{See \cite[p.~329]{Vinter_2000}}]\label{partial_hybrid subdiff} {\rm
The \textit{partial hybrid subdifferential} $\partial^>_x h(t, x)$ of $h(t,x)$ w.r.t. $x$ is given by
\begin{align}\label{h-partial subdiff}
\partial^>_x h(t, x):=\mbox{\rm co}\,\big\{\xi \,:\, &\mbox{ there exists }(t_k, x_k)\overset{h}{\rightarrow}(t, x) \mbox{ such that }\nonumber\\
&\ h(t_k, x_k)>0 \mbox{ for all } k \mbox{ and }\nabla_xh(t_k, x_k)\to \xi\big\},
\end{align} where the symbol $(t_k, x_k)\overset{h}{\rightarrow}(t, x)$ means that $(t_k, x_k)\rightarrow (t, x)$ and $h(t_k, x_k)\rightarrow h(t, x)$ as $k\to\infty$.}
\end{Definition}
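To illustrate Definition~\ref{partial_hybrid subdiff}, consider the model constraint function $h(t,x)=x$ with $n=1$, so that $h(t, x(t))\leq 0$ is the unilateral constraint $x(t)\leq 0$; this simple choice is made here only for illustration. Then $\nabla_x h \equiv 1$, and a sequence $(t_k, x_k)\overset{h}{\rightarrow}(t, x)$ with $h(t_k, x_k)>0$ for all $k$ exists if and only if $x\geq 0$. Therefore
\begin{equation*}
\partial^>_x h(t, x)=
\begin{cases}
\{1\}, & \mbox{if } x\geq 0,\\
\emptyset, & \mbox{if } x< 0.
\end{cases}
\end{equation*}
In particular, for a feasible trajectory $\bar x$, condition (i) of Theorem~\ref{V_thm9.3.1 necessary condition} below forces the measure $\mu$ to be concentrated on the set $\{t\in [t_0, T] \,:\, \bar x(t)=0\}$ where the state constraint is active.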
\subsection{A Maximum Principle for State Constrained Problems}
Due to the appearance of the state constraint $h(t, x(t))\leq 0$ in $\mathcal{M}$, one has to introduce a multiplier that is an element in the topological dual $C^*([t_0, T]; {\rm I\!R})$ of the space of continuous functions $C([t_0, T]; {\rm I\!R})$ with the supremum norm. By the Riesz Representation Theorem (see, e.g., \cite[Theorem~6, p.~374]{Kolmogorov_Fomin_1970} and \cite[Theorem~1, pp.~113--115]{Luenberger_1969}), any bounded linear
functional $f$ on $C([t_0, T]; {\rm I\!R})$ can be uniquely represented in the form
\begin{equation*}
f(x) =\int_{[t_0,T]} x(t) dv(t),
\end{equation*}
where $v$ is \textit{a function of bounded variation} on $[t_0, T]$ which vanishes at $t_0$ and is continuous from the right at every point $\tau\in (t_0, T)$, and $\displaystyle\int_{[t_0,T]} x(t) dv(t)$ is the Riemann-Stieltjes integral of $x$ with respect to $v$ (see, e.g., \cite[p.~364]{Kolmogorov_Fomin_1970}).
The set of the elements of $C^*([t_0, T]; {\rm I\!R})$ which are given by nondecreasing functions $v$ is denoted by $C^\oplus(t_0, T)$.
Every $v \in C^*([t_0, T]; {\rm I\!R})$ corresponds to \textit{a finite regular measure}, denoted by $\mu_v$, on the $\sigma$-algebra ${\mathcal B}$ of the Borel subsets of $[t_0, T]$ by the formula
\begin{equation*}
\mu_v(A) :=\int_{[t_0,T]} \chi_A(t) dv(t),
\end{equation*} where $\chi_A(t)=1$ for $t\in A$ and $\chi_A(t)=0$ if $t\notin A$. Due to the correspondence $v\mapsto\mu_v$, we call every element $v\in C^*([t_0, T]; {\rm I\!R})$ a ``measure" and identify $v$ with $\mu_v$. Clearly, the measure corresponding to each $v\in C^\oplus(t_0, T)$ is nonnegative.
The integrals $\displaystyle\int_{[t_0, t)}\nu(s)d\mu(s)$ and $\displaystyle\int_{[t_0, T]}\nu(s)d\mu(s)$ of a Borel measurable function $\nu$ in the next theorem are understood in the sense of the Lebesgue-Stieltjes integration \cite[p.~364]{Kolmogorov_Fomin_1970}.
\begin{theorem}[{See \cite[Theorem~9.3.1]{Vinter_2000}}]\label{V_thm9.3.1 necessary condition}
Let $(\bar x, \bar u)$ be a $W^{1,1}$ local minimizer for $\mathcal M$. Assume that for some $\delta>0$, the following hypotheses are satisfied:
\begin{enumerate}[\rm (H1)]
\item $f(., x, .)$ is $\mathcal{L}\times \mathcal{B}^m$ measurable, for fixed $x$. There exists a Borel measurable function $k(., .):[t_0, T]\times {\rm I\!R}^m \to {\rm I\!R}$ such that $t \mapsto k(t, \bar u(t))$ is integrable and
\begin{equation*}
\| f(t, x, u)-f(t, x', u)\|\leq k(t, u)\|x-x'\|, \quad \forall x, x'\in \bar x(t)+\delta\bar B,\; \forall u \in U(t)
\end{equation*} for almost all $t\in [t_0, T]$;
\item $\mbox{\rm gph}\, U$ is a Borel set in $[t_0,T]\times{\rm I\!R}^m$;
\item $g$ is Lipschitz continuous on the ball $(\bar x(t_0), \bar x(T))+\delta\bar B$;
\item $h$ is upper semicontinuous and there exists $K>0$ such that
\begin{equation*}
\| h(t, x)-h(t, x')\|\leq K\|x-x'\|, \quad \forall x, x'\in \bar x(t)+\delta\bar B,\; \forall t \in [t_0, T].
\end{equation*}
\end{enumerate}
Then there exist $p\in W^{1,1}([t_0, T]; {\rm I\!R}^n)$, $\gamma \geq 0$, $\mu \in C^\oplus(t_0, T)$, and a Borel measurable function $\nu:[t_0, T]\to {\rm I\!R}^n$ such that $(p, \mu, \gamma)\neq (0, 0, 0)$, and for $q(t):=p(t)+\eta(t)$ with $\eta(t):=
\displaystyle\int_{[t_0, t)}\nu(s)d\mu(s)$ if $t\in [t_0, T)$ and $\eta(T):=\displaystyle\int_{[t_0, T]}\nu(s)d\mu(s)$, the following holds true:
\begin{enumerate}[\rm (i)]
\item $\nu(t)\in \partial^>_x h(t, \bar x(t))\ \mu-\mbox{a.e.};$
\item $-\dot p(t)\in \mbox{\rm co}\, \partial_x\mathcal{H}(t, \bar x(t), q(t), \bar u(t))$ a.e.;
\item $(p(t_0), -q(T))\in \gamma \partial g(\bar x(t_0), \bar x(T))+N_C(\bar x(t_0), \bar x(T))$;
\item $\mathcal{H}(t, \bar x(t), q(t), \bar u(t))=\max_{u\in U(t)}\mathcal{H}(t, \bar x(t), q(t), u)$ a.e.
\end{enumerate}
\end{theorem}
Applying Theorem \ref{V_thm9.3.1 necessary condition} to unconstrained optimal control problems, one obtains the following proposition.
\begin{proposition}[{See \cite[Theorem~6.2.1]{Vinter_2000}}]\label{V_thm6.2.1 necessary condition}
Suppose that $\mathcal M$ is an optimal control problem without state constraints. Let $(\bar x, \bar u)$ be a $W^{1,1}$ local minimizer for $\mathcal M$. Assume that for some $\delta>0$, the following hypotheses are satisfied.
\begin{enumerate}[\rm (H1)]
\item For every $x\in{\rm I\!R}^n$, the function $f(., x, .): [t_0, T]\times {\rm I\!R}^m \to {\rm I\!R}^n$ is $\mathcal{L}\times \mathcal{B}^m$ measurable. In addition, there exists a Borel measurable function $k:[t_0, T]\times {\rm I\!R}^m \to {\rm I\!R}$ such that $t \mapsto k(t, \bar u(t))$ is integrable and
\begin{equation*}
\| f(t, x, u)-f(t, x', u)\|\leq k(t, u)\|x-x'\|, \quad \forall x, x'\in \bar x(t)+\delta\bar B, u \in U(t), a.e.;
\end{equation*}
\item $\mbox{\rm gph}\, U$ is an $\mathcal{L}\times \mathcal{B}^m$ measurable set in $[t_0,T]\times{\rm I\!R}^m$;
\item $g$ is locally Lipschitz continuous.
\end{enumerate}
Then there exist $p\in W^{1,1}([t_0, T]; {\rm I\!R}^n)$ and $\gamma \geq 0$ such that $(p, \gamma)\neq (0, 0)$ and the following holds true:
\begin{enumerate}[\rm (i)]
\item $-\dot p(t)\in \mbox{\rm co}\, \partial_x\mathcal{H}(t, \bar x(t), p(t), \bar u(t))$ a.e.;
\item $(p(t_0), -p(T))\in \gamma \partial g(\bar x(t_0), \bar x(T))+N_C(\bar x(t_0), \bar x(T))$;
\item $\mathcal{H}(t, \bar x(t), p(t), \bar u(t))=\max_{u\in U(t)}\mathcal{H}(t, \bar x(t), p(t), u)$.
\end{enumerate}
\end{proposition}
\subsection{Solution Existence in State Constrained Optimal Control}
To recall a solution existence theorem for optimal control problems with state constraints of the Mayer type, we will use the notations and concepts given in \cite[Section~9.2]{Cesari_1983}. Let $A$ be a subset of ${\rm I\!R}\times {\rm I\!R}^n$ and $U: A\rightrightarrows {\rm I\!R}^m$ be a set-valued map defined on $A$. Let $$M:=\{(t, x, u)\in {\rm I\!R}\times{\rm I\!R}^n\times {\rm I\!R}^m \;:\; (t, x)\in A,\ u \in U(t, x)\},$$ and $f=(f_1, f_2, \dots, f_n): M \to {\rm I\!R}^n$ be a single-valued map defined on $M$. Let $B$ be a given subset of ${\rm I\!R}\times {\rm I\!R}^n\times{\rm I\!R}\times {\rm I\!R}^n$ and $g: B \to {\rm I\!R}$ be a real function defined on $B$. Consider the optimal control problem of the Mayer type
\begin{equation}\label{cost functional_SET}
\mbox{Minimize}\ \; g(t_0, x(t_0), T, x(T))
\end{equation} over $x \in W^{1,1}([t_0, T]; {\rm I\!R}^n)$ and measurable functions $u:[t_0, T]~\to~{\rm I\!R}^m$ satisfying
\begin{equation}\label{state control system_SET}
\begin{cases}
\dot x(t)=f(t, x(t), u(t)),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
(t, x(t))\in A, & \mbox{for all }t\in [t_0, T]\\
(t_0, x(t_0), T, x(T))\in B\\
u(t)\in U(t, x(t)), &\mbox{a.e.\ } t\in [t_0, T],
\end{cases}
\end{equation}
where $[t_0, T]$ is a given interval. The problem \eqref{cost functional_SET}--\eqref{state control system_SET} will be denoted by $\mathcal{M}_1$.
A \textit{feasible process} for $\mathcal M_1$ is a pair of functions $(x, u)$ with $x: [t_0, T] \to {\rm I\!R}^n$ being absolutely continuous on $[t_0, T]$, $u:[t_0, T] \to {\rm I\!R}^m$ being measurable, such that all the requirements in \eqref{state control system_SET} are satisfied. If $(x, u)$ is a feasible process for $\mathcal M_1$, then $x$
is said to be a \textit{feasible trajectory}, and $u$ a \textit{feasible control function} for $\mathcal M_1$. The set of all feasible processes for $\mathcal M_1$ is denoted by $\Omega$.
Let $A_0=\big\{t\in\mathbb R\,:\, \exists x\in \mathbb R^n\ {\rm s.t.}\ (t,x)\in A\big\}$, i.e., $A_0$ is the projection of $A$ on the $t-$axis. Set $$A(t)=\big\{x\in {\rm I\!R}^n \;:\; (t, x)\in A\big\}\quad\; (t\in A_0)$$ and $$Q(t, x)=\big\{z\in {\rm I\!R}^n \;:\; z=f(t, x, u),\ u \in U(t, x)\big\} \quad\; ((t, x)\in A).$$
The forthcoming statement is called \textit{Filippov's Existence Theorem for Mayer problems}.
\begin{theorem}[{see \cite[Theorem~9.2.i and Section~9.4]{Cesari_1983}}]\label{Filippov's_Existence_Thm}
Suppose that $\Omega$ is nonempty, $B$ is closed, $g$ is lower semicontinuous on $B$, $f$ is continuous on $M$ and, for almost every $t\in [t_0, T]$, the sets $Q(t, x)$, $x\in A(t)$, are convex. Moreover, assume either that $A$ and $M$ are compact or that $A$ is not compact but closed and the following three conditions hold
\begin{enumerate}[\rm (a)]
\item For any $\varepsilon\geq 0$, the set $M_\varepsilon:=\{(t, x, u)\in M \;:\; \|x\| \leq \varepsilon\}$ is compact;
\item There is a compact subset $P$ of $A$ such that every feasible trajectory $x$ of $\mathcal M_1$ passes through at least one point of $P$;
\item There exists $c\geq 0$ such that
\begin{equation*}
x_1 f_1(t, x, u) + x_2 f_2(t, x, u)+\dots+x_n f_n(t, x, u) \leq c (\|x\|^2+1)\quad\; \forall (t, x, u)\in M.
\end{equation*}
\end{enumerate}
Then, $\mathcal M_1$ has a $W^{1, 1}$ global minimizer.
\end{theorem}
Clearly, condition (b) is satisfied if the initial point $(t_0, x(t_0))$ or the end point $(T, x(T))$ is fixed. As shown in \cite[p.~317]{Cesari_1983}, the following condition implies (c):
\begin{enumerate}[\rm ($c_0$)]
\item \textit{There exists $c\geq 0$ such that $
\|f(t, x, u)\| \leq c (\|x\|+1)$ for all $(t, x, u)\in M$.}
\end{enumerate}
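The implication $(c_0)\Rightarrow$ (c) is a short computation, which we include for completeness. By the Cauchy-Schwarz inequality and the bound in $(c_0)$, for all $(t, x, u)\in M$ one has
\begin{equation*}
x_1 f_1(t, x, u)+\dots+x_n f_n(t, x, u) \leq \|x\|\,\|f(t, x, u)\| \leq c\,\|x\|(\|x\|+1)\leq c\Big(\|x\|^2+\frac{\|x\|^2+1}{2}\Big)\leq \frac{3c}{2}\big(\|x\|^2+1\big),
\end{equation*}
where the third inequality uses $\|x\|\leq \frac{1}{2}(\|x\|^2+1)$. Hence condition (c) holds with the constant $\frac{3c}{2}$ in place of $c$.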
\section{Control Problems without State Constraints}\label{Example 1}
Denote by $(FP_1)$ the finite horizon optimal control problem of the Lagrange type
\begin{equation} \label{cost functional_FP_1}
\mbox{Minimize}\ \; J(x,u)=\int_{t_0}^{T} \big[-e^{-\lambda t}(x(t)+u(t))\big] dt
\end{equation}
over $x \in W^{1,1}([t_0, T], {\rm I\!R})$ and measurable functions $u:[t_0, T] \to {\rm I\!R}$ satisfying
\begin{equation} \label{state control system_FP_1}
\begin{cases}
\dot x(t)=-au(t),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
x(t_0)=x_0\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [t_0, T],\\
\end{cases}
\end{equation}
with $a > \lambda>0$, $T>t_0\geq 0$, and $x_0 \in {\rm I\!R}$ being given.
\medskip
To treat $(FP_1)$ in \eqref{cost functional_FP_1}--\eqref{state control system_FP_1} as a problem of the Mayer type, we set $x(t)=(x_1(t), x_2(t))$, where $x_1(t)$ plays the role of the state variable $x(t)$ in $(FP_1)$, and $$x_2(t):= \int_{t_0}^{t} \big[-e^{-\lambda \tau}(x_1(\tau)+u(\tau))\big] d\tau$$ for all $t\in [t_0, T]$. Then $(FP_1)$ is equivalent to the problem
\begin{equation} \label{cost functional_FP_1a}
\mbox{Minimize}\ \; x_2(T)
\end{equation}
over $x=(x_1, x_2) \in W^{1,1}([t_0, T], {\rm I\!R}^2)$ and measurable functions $u:[t_0, T] \to {\rm I\!R}$ satisfying
\begin{equation}\label{state control system_FP_1a}
\begin{cases}
\dot x_1(t)=-au(t),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
\dot x_2(t)=-e^{-\lambda t}(x_1(t)+u(t)), &\mbox{a.e.\ } t\in [t_0, T]\\
(x(t_0), x(T))\in \{(x_0, 0)\}\times {\rm I\!R}^2\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [t_0, T].\\
\end{cases}
\end{equation}
The problem \eqref{cost functional_FP_1a}--\eqref{state control system_FP_1a} is abbreviated to $(FP_{1a})$.
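Before checking solvability, it is worth verifying that the Mayer reformulation indeed encodes the Lagrange cost. Differentiating the definition of $x_2$ and evaluating it at $t=T$ give

```latex
% Verification that the auxiliary state x_2 reproduces the cost of (FP_1):
\begin{align*}
\dot x_2(t) &= \frac{d}{dt}\int_{t_0}^{t} \big[-e^{-\lambda \tau}(x_1(\tau)+u(\tau))\big]\, d\tau
            = -e^{-\lambda t}(x_1(t)+u(t)) \quad \mbox{a.e.\ } t\in [t_0, T],\\
x_2(T) &= \int_{t_0}^{T} \big[-e^{-\lambda \tau}(x_1(\tau)+u(\tau))\big]\, d\tau = J(x_1, u),
\end{align*}
```

so minimizing $x_2(T)$ subject to \eqref{state control system_FP_1a} is the same as minimizing $J(x,u)$ subject to \eqref{state control system_FP_1}.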
\subsection{Solution Existence}\label{SE_without_S_constraint}
Clearly, $(FP_{1a})$ is of the form $\mathcal{M}_1$ (see Subsection 2.3) with $n=2$, $m=1$, $A=[t_0, T]~\times~{\rm I\!R}^2$, $U(t, x)=[-1, 1]$ for all $(t, x)\in A$, $B=\{t_0\}\times\{(x_0, 0)\}\times {\rm I\!R}\times {\rm I\!R}^2$, $g(t_0, x(t_0), T, x(T))=x_2(T)$, $M=A\times [-1, 1]$, $f(t, x, u)=(-au, -e^{-\lambda t}(x_1+u))$ for all $(t, x, u)\in M$. We are going to show that $(FP_{1a})$ satisfies all the assumptions of Theorem~\ref{Filippov's_Existence_Thm}.
Clearly, the pair $(x, u)$, where $u(t)=0$, $x_1(t)=x_0$, and $x_2(t)= -x_0\displaystyle\int_{t_0}^{t}e^{-\lambda \tau}d\tau$ for all $t\in [t_0, T]$, is a feasible process for $(FP_{1a})$. Thus, the set $\Omega$ of feasible processes is nonempty. Besides, $B$ is closed, $g$ is lower semicontinuous on $B$, and $f$ is continuous on $M$. Moreover, by the formula for $A$, one has $A_0=[t_0, T]$ and $A(t)={\rm I\!R}^2$ for all $t\in A_0$. In addition, from the formulas for $M$, $U$, and $f$, one gets
\begin{align*}
Q(t, x) &=\big\{z\in {\rm I\!R}^2 \;:\; z=f(t, x, u),\ u \in U(t, x)\big\}\\
&=\big\{z\in {\rm I\!R}^2 \;:\; z=(-au, -e^{-\lambda t}(x_1+u)),\ u \in [-1, 1]\big\}\\
&=\big\{(0, -e^{-\lambda t}x_1)\big\}+\big\{(-a, -e^{-\lambda t})u \;:\; u\in [-1, 1]\big\}
\end{align*}
for any $(t, x)\in A$. Thus, for every $t \in [t_0, T]$, the sets $Q(t, x)$, $x\in A(t)$, are line segments; hence they are convex. Since $A$ is closed, but not compact, we have to check the conditions (a)--(c) in Theorem~\ref{Filippov's_Existence_Thm}.
\textit{Condition} (a): For any $\varepsilon\geq 0$, since
\begin{align*}
M_\varepsilon&=\{(t, x, u)\in M \;:\; \|x\| \leq \varepsilon\}\\
&=\{(t, x, u)\in[t_0, T]\times {\rm I\!R}^2\times [-1,1] \;:\; \|x\| \leq \varepsilon\}\\
&= [t_0, T]\times \{x\in {\rm I\!R}^2\;:\;\|x\| \leq \varepsilon\}\times [-1,1],
\end{align*}
one sees that $M_\varepsilon$ is compact.
\textit{Condition} (b): Obviously, $P:=\{t_0\}\times\{(x_0, 0)\}$ is a compact subset of $A$, and every feasible trajectory passes through the unique point of $P$. Thus, condition (b) is fulfilled.
\textit{Condition} (c): Choosing $c=a+1$, we have
\begin{align*}
\|f(t, x, u)\|=\big\|(-au, -e^{-\lambda t}(x_1+u))\big\| & \leq a|u|+e^{-\lambda t}|x_1+u|\\
&\leq a+|x_1|+1\\ & \leq c(\|x\|+1)
\end{align*}
for any $(t, x, u)\in M$, because $u\in [-1,1]$ and $e^{-\lambda t}\leq 1$ for $t\geq t_0 \geq 0$. Thus, condition ($c_0$), which implies (c), is satisfied.
By Theorem~\ref{Filippov's_Existence_Thm}, $(FP_{1a})$ has a $W^{1, 1}$ global minimizer. Therefore, $(FP_{1})$ has a $W^{1, 1}$ global minimizer by the equivalence of $(FP_{1a})$ and $(FP_{1})$.
\subsection{Necessary Optimality Conditions}
To obtain necessary conditions for $(FP_{1a})$, we note that $(FP_{1a})$ is in the form of $\mathcal{M}$ with $g(x, y)=y_2$, $f(t, x, u)=(-au, -e^{-\lambda t}(x_1+u))$, $C=\{(x_0, 0)\}\times {\rm I\!R}^2$, $U(t)=[-1, 1]$, and $h(t, x)=0$ for all $x=(x_1, x_2) \in {\rm I\!R}^2$, $y=(y_1, y_2) \in {\rm I\!R}^2$, $t\in [t_0, T]$, and $u\in {\rm I\!R}$. Since $(FP_{1a})$ is an optimal control problem without state constraints, we can apply both Proposition~\ref{V_thm6.2.1 necessary condition} and Theorem~\ref{V_thm9.3.1 necessary condition} to this problem. In accordance with \eqref{Hamiltonian}, the Hamiltonian of $(FP_{1a})$ is given by
\begin{equation}\label{Hamiltonian-FP1a}
\mathcal{H}(t, x, p, u)=-aup_1-e^{-\lambda t}(x_1+u)p_2 \quad \forall(t, x, p, u)\in [t_0, T]\times {\rm I\!R}^2\times {\rm I\!R}^2 \times {\rm I\!R},
\end{equation}
while by \eqref{h-partial subdiff} we have $\partial^>_x h(t, x)=\emptyset$ for all $(t, x)\in [t_0, T]\times {\rm I\!R}^2$. Let $(\bar x, \bar u)$ be a $W^{1,1}$ local minimizer of $(FP_{1a})$.
\subsubsection{Necessary Optimality Conditions for $(FP_{1a})$ in Terms of Proposition~\ref{V_thm6.2.1 necessary condition}}\label{Sub1-FP1a}
It is clear that the assumptions (H1)--(H3) of Proposition~\ref{V_thm6.2.1 necessary condition} are satisfied for $(FP_{1a})$. So, there exist $p\in W^{1,1}([t_0, T]; {\rm I\!R}^2)$ and $\gamma \geq 0$ such that $(p, \gamma)\neq (0, 0)$, and conditions~(i)--(iii) of Proposition~\ref{V_thm6.2.1 necessary condition} hold true. Let us analyze these conditions.
\textbf{Condition (i)}: By \eqref{Hamiltonian-FP1a}, $\mathcal{H}$ is differentiable in $x$ and $\partial_x\mathcal{H}(t, x, p, u)=\{(-e^{-\lambda t}p_2, 0)\}$ for all $(t, x, p, u)\in [t_0, T]\times {\rm I\!R}^2\times {\rm I\!R}^2 \times {\rm I\!R}$. Thus, condition (i) implies that $\dot p_1(t) = e^{-\lambda t}p_2(t)$ for a.e. $t\in [t_0, T]$ and $p_2(t)$ is a constant function.
\textbf{Condition (ii)}: By the formulas for $g$ and $C$, we have $\partial g(\bar x(t_0), \bar x(T))=\{(0, 0, 0, 1)\}$ and $N_C(\bar x(t_0), \bar x(T))={\rm I\!R}^2\times\{(0, 0)\}$. Thus, condition (ii) implies that
$$(p(t_0), -p(T))\in \gamma \{(0, 0, 0, 1)\}+{\rm I\!R}^2\times\{(0, 0)\};$$
hence $p_1(T)=0$ and $p_2(T)=-\gamma$. As $p_2(t)$ is a constant function, we have $p_2(t)=-\gamma$ for all $t\in [t_0, T]$. So, the above analysis of condition (i) gives $p_1(t)=\dfrac{\gamma}{\lambda}\big(e^{-\lambda t}-e^{-\lambda T}\big)$ for all $t\in [t_0, T]$. Since $(p, \gamma)\neq (0, 0)$, we must have $\gamma > 0$.
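For clarity, the formula for $p_1$ can be obtained by integrating the adjoint equation backward from the transversality condition $p_1(T)=0$:

```latex
% Integration of \dot p_1(t) = e^{-\lambda t} p_2(t) = -\gamma e^{-\lambda t}
% with the end condition p_1(T) = 0:
\begin{equation*}
p_1(t) = p_1(T)-\int_{t}^{T} \dot p_1(\tau)\, d\tau
       = \gamma\int_{t}^{T} e^{-\lambda \tau}\, d\tau
       = \frac{\gamma}{\lambda}\big(e^{-\lambda t}-e^{-\lambda T}\big), \quad t\in [t_0, T].
\end{equation*}
```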
\textbf{Condition (iii)}: Due to \eqref{Hamiltonian-FP1a}, condition (iii) means that
\begin{equation*}
-a\bar u(t)p_1(t)-e^{-\lambda t}[\bar x_1(t)+\bar u(t)]p_2(t)=\max_{u\in [-1, 1]}\left\{-aup_1(t)-e^{-\lambda t}[\bar x_1(t)+u]p_2(t) \right\}
\end{equation*}
for a.e. $t\in [t_0, T]$.
Equivalently,
\begin{equation}\label{min_condition}
[ap_1(t)+e^{-\lambda t}p_2(t)]\bar u(t)=\min_{u\in [-1, 1]}\left\{[ap_1(t)+e^{-\lambda t}p_2(t)]u\right\} \quad \mbox{a.e. } t\in [t_0, T].
\end{equation}
Setting $\varphi(t):=ap_1(t)+e^{-\lambda t}p_2(t)$ for $t\in [t_0, T]$, we have
\begin{align*}
\varphi(t) =a\dfrac{\gamma}{\lambda}\big(e^{-\lambda t}-e^{-\lambda T}\big)-\gamma e^{-\lambda t}=\gamma \Big(\dfrac{a}{\lambda}-1\Big)e^{-\lambda t}-\gamma\dfrac{a}{\lambda} e^{-\lambda T}
\end{align*}
for all $t\in [t_0, T]$. As $\dfrac{a}{\lambda}>1$, we see that the latter expression defines a decreasing function of $t$ on ${\rm I\!R}$. In addition, it is clear that $\varphi(T)=-\gamma e^{-\lambda T}<0$, and $\varphi(t)=0$ if and only if $t=\bar t$, where $\bar t:=T-\dfrac{1}{\lambda}\ln \dfrac{a}{a-\lambda}$.
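The formula for $\bar t$ can be checked by solving $\varphi(t)=0$ directly:

```latex
% Solving \varphi(\bar t) = 0 for \bar t; dividing by \gamma > 0 and by
% (a/\lambda - 1) = (a-\lambda)/\lambda > 0 is legitimate:
\begin{equation*}
\gamma\Big(\frac{a}{\lambda}-1\Big)e^{-\lambda \bar t}=\gamma\,\frac{a}{\lambda}\, e^{-\lambda T}
\;\Longleftrightarrow\;
e^{\lambda (T-\bar t)}=\frac{a}{a-\lambda}
\;\Longleftrightarrow\;
\bar t=T-\frac{1}{\lambda}\ln\frac{a}{a-\lambda}.
\end{equation*}
```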
We have the following cases.
{\sc Case A:} \textit{$t_0\geq \bar t$}. Then $\varphi(t)<0$ for all $t\in (t_0, T]$. Therefore, condition \eqref{min_condition} implies $\bar u(t)=1$ for a.e. $t\in [t_0, T]$. Hence, by \eqref{state control system_FP_1a}, $\bar x_1(t)=x_0-a(t-t_0)$ for all $t\in [t_0, T]$.
{\sc Case B:} \textit{$t_0 < \bar t$}. Then $\varphi(t)>0$ for $t\in [t_0, \bar t)$ and $\varphi(t)<0$ for $t\in (\bar t, T]$. Thus, \eqref{min_condition} yields $\bar u(t)=-1$ for a.e. $t\in [t_0, \bar t)$ and $\bar u(t)=1$ for a.e. $t\in (\bar t, T]$; hence $\bar x_1(t)=x_0+a(t-t_0)$ for every $t\in [t_0, \bar t]$ and $\bar x_1(t)=x_0-a(t+t_0-2\bar t)$ for every $t\in (\bar t, T]$.
\subsubsection{Necessary Optimality Conditions for $(FP_{1a})$ in Terms of Theorem~\ref{V_thm9.3.1 necessary condition}}
Since the assumptions (H1)--(H4) of Theorem~\ref{V_thm9.3.1 necessary condition} are satisfied for $(FP_{1a})$, by that theorem one can find $p\in W^{1,1}([t_0, T]; {\rm I\!R}^2)$, $\gamma \geq 0$, $\mu \in C^\oplus(t_0, T)$, and a Borel measurable function $\nu:[t_0, T]\to {\rm I\!R}^2$ such that $(p, \mu, \gamma)\neq (0, 0, 0)$, and for $q(t):=p(t)+\eta(t)$ with $\eta:[t_0, T]\to {\rm I\!R}^2$ being given by $\eta(t):=
\displaystyle\int_{[t_0, t)}\nu(s)d\mu(s)$ if $t\in [t_0, T)$ and $\eta(T):=\displaystyle\int_{[t_0, T]}\nu(s)d\mu(s)$, conditions (i)--(iv) in Theorem~\ref{V_thm9.3.1 necessary condition} hold true. Since $\partial^>_x h(t, \bar x(t))=\emptyset$ for all $t\in [t_0, T]$, the inclusion $\nu(t)\in \partial^>_x h(t, \bar x(t))$ is violated at every $t\in [t_0,T]$. Hence, condition (i) forces $\mu=0$; consequently, $\eta\equiv 0$ and $q=p$ on $[t_0, T]$. Therefore, the conditions (ii)--(iv) in Theorem~\ref{V_thm9.3.1 necessary condition} recover the conditions (i)--(iii) of Proposition~\ref{V_thm6.2.1 necessary condition}.
\medskip
Going back to the original problem $(FP_1)$, we can summarize the obtained results in the following theorem.
\begin{Theorem}\label{Thm1} Given any $a,\lambda$ with $a>\lambda>0$, define $\rho=\dfrac{1}{\lambda}\ln \dfrac{a}{a-\lambda}>0$ and $\bar t=T-\rho$. Then, problem $(FP_1)$ has a unique local solution $(\bar x,\bar u)$, which is also a global solution, where $\bar u(t)=-a^{-1}\dot{\bar x}(t)$ for almost every $t\in [t_0, T]$ and $\bar x(t)$ is described as follows:
\begin{description}
\item{\rm (a)} If $t_0 \geq \bar t$ (i.e., $T-t_0 \leq \rho$), then
\begin{equation*}
\bar x(t)=x_0-a(t-t_0), \quad t \in [t_0, T].
\end{equation*}
\item{\rm (b)} If $t_0 < \bar t$ (i.e., $T-t_0 > \rho$), then
\begin{equation*}
\bar x(t)=
\begin{cases}
x_0+a(t-t_0), \quad & t \in [t_0, \bar t]\\
x_0-a(t+t_0-2\bar t), & t \in (\bar t, T].
\end{cases}
\end{equation*}
\end{description}
\end{Theorem}
\begin{proof} The assertions (a) and (b) are straightforward from the results obtained in Case~A and Case~B of Subsection~\ref{Sub1-FP1a}, because $\bar x_1(t)$ in $(FP_{1a})$ coincides with $\bar x(t)$ in $(FP_1)$.
\end{proof}
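As an illustration of Theorem~\ref{Thm1}, consider the sample data $a=2$, $\lambda=1$, $t_0=0$, $T=2$, and $x_0=0$ (these values are chosen purely for the example). Then $\rho=\ln 2\approx 0.693$ and $\bar t=2-\ln 2\approx 1.307$, so $T-t_0>\rho$ and assertion (b) applies:

```latex
% Sample data (illustrative only): a=2, \lambda=1, t_0=0, T=2, x_0=0.
\begin{equation*}
\bar u(t)=
\begin{cases}
-1, \quad & \mbox{a.e.\ } t\in [0, \bar t)\\
\phantom{-}1, & \mbox{a.e.\ } t\in (\bar t, 2],
\end{cases}
\qquad
\bar x(t)=
\begin{cases}
2t, \quad & t\in [0, \bar t]\\
2(2\bar t-t), & t\in (\bar t, 2].
\end{cases}
\end{equation*}
```

The optimal trajectory thus rises at the maximal rate until the switching time $\bar t$ and then descends, ending at $\bar x(2)=4-4\ln 2\approx 1.23$.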
\section{Control Problems with Unilateral Constraints}\label{Example 2}
By $(FP_2)$ we denote the finite horizon optimal control problem of the Lagrange type
\begin{equation} \label{cost functional_FP_2}
\mbox{Minimize}\ \; J(x,u)=\int_{t_0}^{T} \big[-e^{-\lambda t}(x(t)+u(t))\big] dt
\end{equation}
over $x \in W^{1,1}([t_0, T], {\rm I\!R})$ and measurable functions $u:[t_0, T] \to{\rm I\!R}$ satisfying
\begin{equation} \label{state control system_FP_2}
\begin{cases}
\dot x(t)=-au(t),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
x(t_0)=x_0\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [t_0, T]\\
x(t)\leq 1, & \forall t\in [t_0, T]
\end{cases}
\end{equation}
with $a > \lambda>0$, $T>t_0\geq 0$, and $x_0 \leq 1$ being given.
\medskip
We transform this problem into one of the Mayer type by setting $x(t)=(x_1(t), x_2(t))$, where $x_1(t)$ plays the role of $x(t)$ in \eqref{cost functional_FP_2}--\eqref{state control system_FP_2} and \begin{equation}\label{phase_second_component}x_2(t):= \int_{t_0}^{t} \big[-e^{-\lambda\tau}(x_1(\tau)+u(\tau))\big] d\tau\end{equation} for all $t\in [t_0, T]$. Thus, $(FP_2)$ is equivalent to the problem
\begin{equation}\label{cost functional_FP_2a}
\mbox{Minimize}\ \; x_2(T)
\end{equation}
over $x=(x_1, x_2) \in W^{1,1}([t_0, T], {\rm I\!R}^2)$ and measurable functions $u:[t_0, T] \to{\rm I\!R}$ satisfying
\begin{equation} \label{state control system_FP_2a}
\begin{cases}
\dot x_1(t)=-au(t),\quad &\mbox{a.e.\ } t\in [t_0, T]\\
\dot x_2(t)=-e^{-\lambda t}(x_1(t)+u(t)), &\mbox{a.e.\ } t\in [t_0, T]\\
(x(t_0), x(T))\in \{(x_0, 0)\}\times {\rm I\!R}^2\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [t_0, T]\\
x_1(t)\leq 1, &\forall t\in [t_0, T].
\end{cases}
\end{equation} We denote problem \eqref{cost functional_FP_2a}--\eqref{state control system_FP_2a} by $(FP_{2a})$.
\subsection{Solution Existence}\label{SE_1-sided_S_constraint}
To check that $(FP_{2a})$ is of the form $\mathcal{M}_1$ (see Subsection 2.3), we choose $n=2$, $m=1$, $A=[t_0, T]~\times(-\infty, 1]\times {\rm I\!R}$, $U(t, x)=[-1, 1]$ for all $(t, x)\in A$, $B=\{t_0\}\times\{(x_0, 0)\}\times {\rm I\!R}\times {\rm I\!R}^2$, $g(t_0, x(t_0), T, x(T))=x_2(T)$, $M=A\times [-1, 1]$, $f(t, x, u)=(-au, -e^{-\lambda t}(x_1+u))$ for all $(t, x, u)\in M$. In comparison with the problem $(FP_{1a})$, the only change in this formulation of $(FP_{2a})$ is that we have $A=[t_0, T]~\times(-\infty, 1]\times {\rm I\!R}$ instead of $A=[t_0, T]~\times {\rm I\!R}^2$. Thus, to show that $(FP_{2a})$ satisfies all the assumptions of Theorem~\ref{Filippov's_Existence_Thm}, we can use the arguments in Subsection~\ref{SE_without_S_constraint}, except those related to the convexity of the sets $Q(t,x)$ and the compactness of $M_\varepsilon$, which have to be verified in a slightly different manner.
By the above formula for $A$, we have $A_0=[t_0, T]$ and $A(t)=(-\infty, 1]\times {\rm I\!R}$ for all $t\in A_0$. As in Subsection~\ref{SE_without_S_constraint}, we have \begin{align*}
Q(t, x)=\big\{(0, -e^{-\lambda t}x_1)\big\}+\big\{(-a, -e^{-\lambda t})u \;:\; u\in [-1, 1]\big\}
\end{align*}
for any $(t, x)\in A$. Thus, the assumption of Theorem~\ref{Filippov's_Existence_Thm} on the convexity of the sets $Q(t, x)$, $x\in A(t)$, for almost every $t\in [t_0, T]$, is satisfied. Since $M=[t_0, T]~\times(-\infty, 1]\times {\rm I\!R}\times [-1, 1]$, for any $\varepsilon\geq 0$, one has
\begin{align*}
M_\varepsilon&=\{(t, x, u)\in M \;:\; \|x\| \leq \varepsilon\}\\
&=\{(t, x, u)\in[t_0, T]~\times(-\infty, 1]\times {\rm I\!R}\times [-1, 1] \;:\; \|x\| \leq \varepsilon\}.
\end{align*}
As $M_\varepsilon$ is closed and contained in the compact set $[t_0, T]\times \{x\in {\rm I\!R}^2\;:\;\|x\| \leq \varepsilon\}\times [-1,1]$, it is compact.
It follows from Theorem~\ref{Filippov's_Existence_Thm} that $(FP_{2a})$ has a $W^{1, 1}$ global minimizer. Therefore, by the equivalence of~$(FP_{2})$ and $(FP_{2a})$, we can assert that $(FP_{2})$ has a $W^{1, 1}$ global minimizer.
\subsection{Necessary Optimality Conditions}\label{Subsection4.2}
In order to apply Theorem~\ref{V_thm9.3.1 necessary condition} for solving $(FP_2)$, we observe that $(FP_{2a})$ is in the form of~$\mathcal M$ with $g(x, y)=y_2$, $f(t, x, u)=(-au, -e^{-\lambda t}(x_1+u)),$ $C=\{(x_0, 0)\}\times {\rm I\!R}^2$, $U(t)=[-1, 1]$, and $h(t, x)=x_1-1$ for all $t\in [t_0, T]$, $x=(x_1, x_2) \in {\rm I\!R}^2$, $y=(y_1, y_2) \in {\rm I\!R}^2$ and $u\in {\rm I\!R}$.
\medskip
The forthcoming two propositions describe fundamental properties of the local minimizers of the problem $(FP_{2a})$, which is obtained from the optimal control problem of the Lagrange type $(FP_{2})$ by introducing the artificial variable $x_2$. Statements similar to those in the first proposition are valid for any optimal control problem of the Mayer type that is obtained from an optimal control problem of the Lagrange type in the same manner. Meanwhile, the claims in the second proposition hold true for every optimal control problem of the Mayer type whose objective function does not depend on the initial point.
\begin{Proposition}\label{lemma_basic_property_1}
Suppose that $(\bar x, \bar u)$ is a $W^{1,1}$ local minimizer for $(FP_{2a})$. Then, for any $\tau_1,\tau_2\in [t_0,T]$ with $\tau_1<\tau_2$, the restriction of $(\bar x, \bar u)$ on $[\tau_1,\tau_2]$, i.e., the process $(\bar x(t), \bar u(t))$ with $t\in [\tau_1,\tau_2]$, is a $W^{1,1}$ local minimizer for the following optimal control problem of the Mayer type
\begin{equation*}\label{cost functional_FP_2aR}
{\rm Minimize}\ \; x_2(\tau_2)
\end{equation*}
{\rm over $x=(x_1, x_2) \in W^{1,1}([\tau_1, \tau_2], {\rm I\!R}^2)$ and measurable functions $u:[\tau_1, \tau_2] \to{\rm I\!R}$ satisfying
\begin{equation*}\label{state control system_FP_2aR}
\begin{cases}
\dot x_1(t)=-au(t),\quad &\mbox{a.e.\ } t\in [\tau_1,\tau_2]\\
\dot x_2(t)=-e^{-\lambda t}(x_1(t)+u(t)),&\mbox{a.e.\ } t\in [\tau_1,\tau_2]\\
(x(\tau_1), x(\tau_2))\in \{(\bar x_1(\tau_1),\bar x_2(\tau_1))\}\times \{\bar x_1(\tau_2)\}\times {\rm I\!R}\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [\tau_1, \tau_2]\\
x_1(t)\leq 1, & \forall t\in [\tau_1, \tau_2],
\end{cases}
\end{equation*}}which is denoted by $(FP_{2a})|_{[\tau_1, \tau_2]}$. In other words, for any $\tau_1,\tau_2\in [t_0,T]$ with $\tau_1<\tau_2$, the restriction of a $W^{1,1}$ local minimizer for $(FP_{2a})$ on the time segment $[\tau_1,\tau_2]$ is a $W^{1,1}$ local minimizer for the Mayer problem $(FP_{2a})|_{[\tau_1, \tau_2]}$, which is obtained from $(FP_{2a})$ by replacing $t_0$ with $\tau_1$, $T$ with $\tau_2$, and $C$ with $\widetilde C:=\{(\bar x_1(\tau_1),\bar x_2(\tau_1))\}\times \{\bar x_1(\tau_2)\}\times {\rm I\!R}$.
\end{Proposition}
\begin{proof}
Since $(\bar x, \bar u)$ is a $W^{1,1}$ local minimizer for $(FP_{2a})$, by Definition \ref{local_minimizer} there exists $\delta>0$ such that the process $(\bar x, \bar u)$ minimizes the quantity $g(x(t_0), x(T))=x_2(T)$ over all feasible processes $(x, u)$ of $(FP_{2a})$ with $\|\bar x-x\|_{W^{1,1}} \leq \delta$.
Clearly, the restriction of $(\bar x, \bar u)$ on $[\tau_1,\tau_2]$ satisfies all the constraints listed above. Thus, it is a feasible process for $(FP_{2a})|_{[\tau_1, \tau_2]}$.
Let $(x(t), u(t))$, $t\in [\tau_1, \tau_2]$, be an arbitrary feasible process of $(FP_{2a})|_{[\tau_1, \tau_2]}$ satisfying $$\|\bar x-x\|_{W^{1,1}([\tau_1, \tau_2],{\rm I\!R}^2)} \leq\delta.$$ Consider the pair of functions $(\widetilde x,\widetilde u)$, where $\widetilde x=(\widetilde x_1,\widetilde x_2)$, which is given by
\begin{align*}
\widetilde x_1(t):=
\begin{cases}
\bar x_1(t), \ & t\in [t_0,\tau_1]\cup [\tau_2,T]\\
x_1(t), & t\in (\tau_1, \tau_2),
\end{cases}
\end{align*}
\begin{align*}
\widetilde x_2(t):=
\begin{cases}
\bar x_2(t), \ & t\in [t_0,\tau_1]\\
x_2(t), & t\in (\tau_1, \tau_2)\\
x_2(\tau_2)+\displaystyle\int_{\tau_2}^{t} \big[-e^{-\lambda\tau}(\bar x_1(\tau)+\bar u(\tau))\big] d\tau , & t\in [\tau_2,T],
\end{cases}
\end{align*}
and
\begin{align*}
\widetilde u(t):=
\begin{cases}
\bar u(t), \ & t\in [t_0,\tau_1]\cup [\tau_2,T]\\
u(t), & t\in (\tau_1, \tau_2).
\end{cases}
\end{align*}
Clearly, $(\widetilde x,\widetilde u)$ is a feasible process of $(FP_{2a})$ satisfying $\|\bar x-\widetilde x\|_{W^{1,1}([t_0, T],{\rm I\!R}^2)} \leq \delta$. Thus, one must have $g(\widetilde x(t_0), \widetilde x(T))\geq g(\bar x(t_0), \bar x(T))$ or, equivalently,
\begin{eqnarray*}
x_2(\tau_2)+\displaystyle\int_{\tau_2}^{T} \omega(\tau) d\tau
\geq \bar x_2(\tau_2) + \displaystyle\int_{\tau_2}^{T} \omega(\tau) d\tau ,
\end{eqnarray*} where $\omega(\tau):=-e^{-\lambda\tau}(\bar x_1(\tau)+\bar u(\tau))$. Hence, one obtains the inequality $x_2(\tau_2)\geq \bar x_2(\tau_2)$, which proves that the restriction of $(\bar x, \bar u)$ on $[\tau_1,\tau_2]$ is a $W^{1,1}$ local minimizer for $(FP_{2a})|_{[\tau_1, \tau_2]}$.
\end{proof}
\begin{Proposition}\label{lemma_basic_property_2}
Suppose that $(\bar x, \bar u)$ is a $W^{1,1}$ local minimizer for $(FP_{2a})$. Then, for any $\tau_1\in [t_0,T)$, the restriction of the process $(\bar x, \bar u)$ on the time segment $[\tau_1, T]$, i.e., the process $(\bar x(t), \bar u(t))$ with $t\in [\tau_1, T]$, is a $W^{1,1}$ local minimizer for the following optimal control problem of the Mayer type
\begin{equation*}\label{cost functional_FP_2b}
{\rm Minimize}\ \; x_2(T)
\end{equation*}
{\rm over $x=(x_1, x_2) \in W^{1,1}([\tau_1, T], {\rm I\!R}^2)$ and measurable functions $u:[\tau_1, T] \to{\rm I\!R}$ satisfying
\begin{equation*}\label{state control system_FP_2b}
\begin{cases}
\dot x_1(t)=-au(t),\quad &\mbox{a.e.\ } t\in [\tau_1,T]\\
\dot x_2(t)=-e^{-\lambda t}(x_1(t)+u(t)),&\mbox{a.e.\ } t\in [\tau_1,T]\\
(x(\tau_1), x(T))\in \{(\bar x_1(\tau_1),\bar x_2(\tau_1))\}\times {\rm I\!R}^2\\
u(t)\in [-1, 1], &\mbox{a.e.\ } t\in [\tau_1, T]\\
x_1(t)\leq 1, & \forall t\in [\tau_1, T],
\end{cases}
\end{equation*}}which is denoted by $(FP_{2b})$. In other words, for any $\tau_1\in [t_0,T)$, the restriction of a $W^{1,1}$ local minimizer for $(FP_{2a})$ on the time segment $[\tau_1, T]$ is a $W^{1,1}$ local minimizer for the Mayer problem $(FP_{2b})$, which is obtained from $(FP_{2a})$ by replacing $t_0$ with $\tau_1$.
\end{Proposition}
\begin{proof} For a fixed $\tau_1\in [t_0,T)$, let $(FP_{2b})$ be defined as in the formulation of the proposition. It is clear that the process $(\bar x(t), \bar u(t))$, $t\in [\tau_1, T]$, is feasible for $(FP_{2b})$. Since $(\bar x, \bar u)$ is a $W^{1,1}$ local minimizer of $(FP_{2a})$, by Definition \ref{local_minimizer} there exists $\delta>0$ such that the process $(\bar x, \bar u)$ minimizes the quantity $g(x(t_0), x(T))=x_2(T)$ over all feasible processes $(x, u)$ of $(FP_{2a})$ with $\|\bar x-x\|_{W^{1,1}} \leq \delta$. Let $(x(t), u(t))$, $t\in [\tau_1, T]$, be an arbitrary feasible process of $(FP_{2b})$ satisfying $\|\bar x-x\|_{W^{1,1}([\tau_1, T])} \leq \delta$. Consider the pair of functions $(\widetilde x,\widetilde u)$ given by
\begin{align*}
\widetilde x(t):=
\begin{cases}
\bar x(t), \quad t\in [t_0, \tau_1)\\
x(t), \quad t\in [\tau_1, T]
\end{cases}
\quad\; \mbox{and} \quad\;
\widetilde u(t):=
\begin{cases}
\bar u(t), \quad t\in [t_0, \tau_1)\\
u(t), \quad t\in [\tau_1, T].
\end{cases}
\end{align*}
Clearly, $(\widetilde x,\widetilde u)$ is a feasible process of $(FP_{2a})$ satisfying $\|\bar x-\widetilde x\|_{W^{1,1}([t_0, T])} \leq \delta$. Thus, one must have $g(\widetilde x(t_0), \widetilde x(T))\geq g(\bar x(t_0), \bar x(T))$. Since $\widetilde x(T)=x(T)$, one obtains the inequality $x_2(T)\geq \bar x_2(T)$, which justifies the assertion of the proposition.
\end{proof}
In accordance with \eqref{Hamiltonian}, the Hamiltonian of $(FP_{2a})$ is given by
\begin{equation}\label{Hamiltonian_FP_2a}
\mathcal{H}(t, x, p, u)=-aup_1-e^{-\lambda t}(x_1+u)p_2 \quad \forall(t, x, p, u)\in [t_0, T]\times {\rm I\!R}^2\times {\rm I\!R}^2 \times {\rm I\!R}.
\end{equation}
By \eqref{h-partial subdiff}, the partial hybrid subdifferential of $h$ at $(t, x)\in [t_0, T]\times {\rm I\!R}^2$ is given by
\begin{equation}\label{hybrid subdiff_FP_2a}
\partial^>_x h(t, x)=
\begin{cases}
\emptyset, &\quad \mbox{if} \ x_1<1\\
\{(1,0)\}, & \quad \mbox{if} \ x_1\geq 1.
\end{cases}
\end{equation}
\textit{From now on, let $(\bar x, \bar u)$ be a $W^{1,1}$ local minimizer for $(FP_{2a})$}.
Since the assumptions (H1)--(H4) of Theorem~\ref{V_thm9.3.1 necessary condition} are satisfied for $(FP_{2a})$, by that theorem one can find $p\in W^{1,1}([t_0, T]; {\rm I\!R}^2)$, $\gamma \geq 0$, $\mu \in C^\oplus(t_0, T)$, and a Borel measurable function $\nu:[t_0, T]\to {\rm I\!R}^2$ such that $(p, \mu, \gamma)\neq (0, 0, 0)$, and for $q(t):=p(t)+\eta(t)$ with \begin{equation}\label{1st_formula_for_eta}\eta(t):=
\displaystyle\int_{[t_0, t)}\nu(\tau)d\mu(\tau)\quad\ (\forall t\in [t_0, T))\end{equation} and
\begin{equation}\label{2nd_formula_for_eta}
\eta(T):=\displaystyle\int_{[t_0, T]}\nu(\tau)d\mu(\tau),
\end{equation}
conditions (i)--(iv) in Theorem~\ref{V_thm9.3.1 necessary condition} hold true.
\textbf{Condition (i)}: Note that
\begin{align*}
& \mu \{t\in [t_0, T] \,:\, \nu(t) \notin \partial^>_x h(t, \bar x(t))\}\\ &= \mu \{t\in [t_0, T] \,:\, \partial^>_x h(t, \bar x(t))=\emptyset\}+\mu \{t\in [t_0, T] \,:\, \partial^>_x h(t, \bar x(t))\neq\emptyset,\; \nu(t) \notin \partial^>_x h(t, \bar x(t))\}.
\end{align*}
Since $\bar x_1(t)\leq 1$ for every $t$, combining this with \eqref{hybrid subdiff_FP_2a} gives
\begin{align*}
& \mu \{t\in [t_0, T] \,:\, \nu(t) \notin \partial^>_x h(t, \bar x(t))\}\\ &= \mu \{t\in [t_0, T] \,:\,\bar x_1(t)<1\} + \mu \{t\in [t_0, T] \,:\, \bar x_1(t)=1,\; \nu(t) \neq (1, 0)\}.
\end{align*}
So, from (i) it follows that
\begin{equation}\label{1st_condition_for_mu}\mu \{t\in [t_0, T] \,:\,\bar x_1(t)<1\}=0\end{equation} and \begin{equation}\label{2nd_condition_for_mu}\mu \big\{t\in [t_0, T] \,:\, \bar x_1(t)=1,\; \nu(t) \neq (1, 0)\big\}=0.\end{equation}
\textbf{Condition (ii)}: By \eqref{Hamiltonian_FP_2a}, $\mathcal{H}$ is differentiable in $x$ and $\partial_x\mathcal{H}(t, x, p, u)=\{(-e^{-\lambda t}p_2, 0)\}$ for all $(t, x, p, u)\in [t_0, T]\times {\rm I\!R}^2\times {\rm I\!R}^2 \times {\rm I\!R}$. Thus, (ii) implies that $-\dot p(t) =(-e^{-\lambda t}q_2(t), 0)$ for a.e. $t\in [t_0, T]$. Hence, $\dot p_1(t) = e^{-\lambda t}q_2(t)$ for a.e. $t\in [t_0, T]$ and $p_2(t)$ is a constant for all $t\in [t_0, T]$.
\textbf{Condition (iii)}: By the formulas for $g$ and $C$, $\partial g(\bar x(t_0), \bar x(T))=\{(0, 0, 0, 1)\}$ and $N_C(\bar x(t_0), \bar x(T))={\rm I\!R}^2\times\{(0, 0)\}$. Thus, (iii) yields
$$(p(t_0), -q(T))\in \{(0, 0, 0, \gamma)\}+{\rm I\!R}^2\times\{(0, 0)\},$$ which means that
$q_1(T)=0$ and $q_2(T)=-\gamma$.
\textbf{Condition (iv)}: By \eqref{Hamiltonian_FP_2a}, from (iv) one gets
\begin{equation*}
-a\bar u(t)q_1(t)-e^{-\lambda t}[\bar x_1(t)+\bar u(t)]q_2(t)=\max_{u\in [-1, 1]}\left\{-auq_1(t)-e^{-\lambda t}[\bar x_1(t)+u]q_2(t) \right\}\ \mbox{a.e.}\; t\in [t_0,T]
\end{equation*}
or, equivalently,
\begin{equation}\label{min_condition_FP_2a}
[aq_1(t)+e^{-\lambda t}q_2(t)]\bar u(t)=\min_{u\in [-1, 1]}\left\{[aq_1(t)+e^{-\lambda t}q_2(t)]u\right\}\ \mbox{a.e.}\; t\in [t_0,T].
\end{equation}
Thanks to Proposition~\ref{lemma_basic_property_1} and the above analysis of Conditions (i)--(iv), we can now prove the following statement.
\begin{Proposition}\label{lemma_interior_traj}
Suppose that $[\tau_1, \tau_2]$ is a subsegment of $[t_0, T]$ with $h (t, \bar x(t))<0$ for all $t\in [\tau_1, \tau_2]$. Then, the curve $t\mapsto \bar x_1(t)$, $t\in [\tau_1, \tau_2]$, cannot have more than one turning point. More precisely, the curve must belong to one of the following three categories {\rm C1--C3}:
\begin{equation}\label{lemma_interior_traj_1}
\bar x_1(t)=\bar x_1(\tau_1)+a(t-\tau_1), \quad t\in [\tau_1, \tau_2],
\end{equation}
\begin{equation}\label{lemma_interior_traj_2}
\bar x_1(t)=\bar x_1(\tau_1)-a(t-\tau_1), \quad t\in [\tau_1, \tau_2],
\end{equation}
and
\begin{equation} \label{lemma_interior_traj_3}
\bar x_1(t)=
\begin{cases}
\bar x_1(\tau_1)+a(t-\tau_1), \quad & t\in [\tau_1, t_{\zeta}]\\
\bar x_1(t_{\zeta})-a(t-t_\zeta), & t\in (t_{\zeta}, \tau_2],
\end{cases}
\end{equation} where $t_{\zeta}$ is a certain point in $(\tau_1, \tau_2)$ (see {\rm Fig.~1--3}).
\begin{figure}[!ht]
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.15]{interior_traj_1_181203.jpg}
\caption{Category C1}
\end{minipage}
\hfill
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.15]{interior_traj_2_181203.jpg}
\caption{Category C2}
\end{minipage}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=.15]{interior_traj_3_181203.jpg}
\caption{Category C3}
\end{figure}
\end{Proposition}
\begin{proof}
Suppose that $[\tau_1, \tau_2]$ is a subsegment of $[t_0, T]$ with $h (t, \bar x(t))<0$ for all $t\in [\tau_1, \tau_2]$, i.e., $\bar x_1(t)< 1$ for all $t\in [\tau_1, \tau_2]$. Then, it follows from Proposition~\ref{lemma_basic_property_1} that the restriction of $(\bar x, \bar u)$ on $[\tau_1, \tau_2]$ is a $W^{1,1}$ local minimizer for $(FP_{2a})|_{[\tau_1, \tau_2]}$. Since the latter satisfies the assumptions (H1)--(H4) of Theorem~\ref{V_thm9.3.1 necessary condition}, by that theorem one finds $\widetilde p\in W^{1,1}([\tau_1, \tau_2]; {\rm I\!R}^2)$, $\widetilde \gamma \geq 0$, $\widetilde \mu \in C^\oplus(\tau_1, \tau_2)$, and a Borel measurable function $\widetilde \nu:[\tau_1, \tau_2]\to {\rm I\!R}^2$ with the property $(\widetilde p, \widetilde \mu, \widetilde \gamma)\neq (0, 0, 0)$, and for $\widetilde q(t):=\widetilde p(t)+\widetilde \eta(t)$ with \begin{equation}\label{1st_formula_for_w_eta_FP3b}\widetilde \eta(t):=
\displaystyle\int_{[\tau_1, t)}\widetilde \nu(\tau)d\widetilde \mu(\tau)\quad\ (\forall t\in [\tau_1, \tau_2))\end{equation} and
\begin{equation}\label{2nd_formula_for_w_eta_FP3b}\widetilde \eta(\tau_2):=\displaystyle\int_{[\tau_1, \tau_2]}\widetilde \nu(\tau)d\widetilde \mu(\tau),\end{equation} the conditions (i)--(iv) in Theorem~\ref{V_thm9.3.1 necessary condition} hold true, provided that $t_0, T, p, \mu, \gamma, \nu, \eta$, and $q$ are changed respectively to $\tau_1, \tau_2, \widetilde p, \widetilde \mu, \widetilde \gamma, \widetilde \nu, \widetilde \eta$, and $\widetilde q$.
By Condition (i), one has
\begin{equation}\label{1st_condition_for_w_mu_FP3b}\begin{cases}\widetilde \mu \{t\in [\tau_1, \tau_2] \,:\,\bar x_1(t)<1\}=0,\\
\widetilde \mu \big\{t\in [\tau_1, \tau_2] \,:\, \bar x_1(t)=1,\; \widetilde \nu(t) \neq (1, 0)\big\}=0.\end{cases}\end{equation}
By Condition (ii), $\dot{\widetilde p_1}(t) = e^{-\lambda t}\widetilde q_2(t)$ for a.e. $t\in [\tau_1, \tau_2]$ and $\widetilde p_2(t)$ is a constant for all $t\in [\tau_1, \tau_2]$.
Since $N_{\widetilde C}(\bar x(\tau_1), \bar x(\tau_2))={\rm I\!R}^3\times\{0\}$, by Condition (iii) one has
$$(\widetilde p(\tau_1), -\widetilde q(\tau_2))\in \{(0, 0, 0, \widetilde\gamma)\}+{\rm I\!R}^3\times\{0\}.$$
This amounts to saying that $\widetilde q_2(\tau_2)=-\widetilde \gamma$.
Condition (iv) means that
\begin{equation}\label{min_condition_FP3c}
[a\widetilde q_1(t)+e^{-\lambda t}\widetilde q_2(t)]\bar u(t)=\min_{u\in [-1, 1]}\left\{[a\widetilde q_1(t)+e^{-\lambda t}\widetilde q_2(t)]u\right\}\ \, \mbox{a.e.}\ t\in [\tau_1, \tau_2].
\end{equation}
Since $\bar x_1(t)<1$ for all $t\in [\tau_1, \tau_2]$, \eqref{1st_condition_for_w_mu_FP3b} yields $\widetilde \mu([\tau_1, \tau_2])=0$, i.e., $\widetilde \mu=0$. Combining this with \eqref{1st_formula_for_w_eta_FP3b} and \eqref{2nd_formula_for_w_eta_FP3b}, one gets $\widetilde \eta(t)=0$ for all $t\in [\tau_1, \tau_2]$. Thus, the relation $\widetilde q(t)=\widetilde p(t)+\widetilde \eta(t)$ implies that $\widetilde q(t)=\widetilde p(t)$ for every $t\in [\tau_1, \tau_2]$. Therefore, together with the Lebesgue Theorem \cite[Theorem~6, p.~340]{Kolmogorov_Fomin_1970}, the properties of $\widetilde p(t)$ and $\widetilde q(t)$ established in the above analyses of the conditions~(ii) and~(iii) give $\widetilde p_2(t)=\widetilde q_2(t)=-\widetilde \gamma$ and $\widetilde p_1(t)=\widetilde q_1(t)=\dfrac{\widetilde\gamma}{\lambda}e^{-\lambda t}+\zeta$ for all $t\in [\tau_1, \tau_2]$, where $\zeta$ is a constant.
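In contrast with Subsection~\ref{Sub1-FP1a}, the transversality condition here places no restriction on $\widetilde q_1(\tau_2)$, so integrating the adjoint equation determines $\widetilde p_1$ only up to an additive constant:

```latex
% The adjoint equation fixes \widetilde p_1 up to the constant \zeta:
\begin{equation*}
\dot{\widetilde p}_1(t)=e^{-\lambda t}\,\widetilde q_2(t)=-\widetilde\gamma\, e^{-\lambda t}
\quad\Longrightarrow\quad
\widetilde p_1(t)=\frac{\widetilde\gamma}{\lambda}\, e^{-\lambda t}+\zeta, \quad t\in [\tau_1, \tau_2].
\end{equation*}
```

The signs of $\widetilde\gamma$ and $\zeta$ are what distinguish the three categories C1--C3 in the subsequent case analysis.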
Substituting these formulas for $\widetilde q_1(t)$ and $\widetilde q_2(t)$ into \eqref{min_condition_FP3c}, we have
\begin{equation*}
\Big[a\Big(\dfrac{\widetilde\gamma}{\lambda}e^{-\lambda t}+\zeta\Big)-\widetilde\gamma e^{-\lambda t}\Big]\bar u(t)=\min_{u\in [-1, 1]}\left\{\Big[a\Big(\dfrac{\widetilde\gamma}{\lambda}e^{-\lambda t}+\zeta\Big)-\widetilde\gamma e^{-\lambda t}\Big]u\right\} \quad \mbox{a.e. } t\in [\tau_1, \tau_2]
\end{equation*}
or, equivalently,
\begin{equation}\label{min_condition_FP3d}
\Big[\widetilde \gamma \Big(\dfrac{a}{\lambda}-1\Big)e^{-\lambda t}+a\zeta\Big]\bar u(t)=\min_{u\in [-1, 1]}\left\{\Big[\widetilde \gamma \Big(\dfrac{a}{\lambda}-1\Big)e^{-\lambda t}+a\zeta\Big]u\right\} \quad \mbox{a.e. } t\in [\tau_1, \tau_2].
\end{equation}
Set $\widetilde\varphi(t)=\widetilde \gamma \Big(\dfrac{a}{\lambda}-1\Big)e^{-\lambda t}+a\zeta$ for all $t\in [\tau_1, \tau_2]$.
If $\widetilde\gamma=0$, then $\widetilde\varphi(t)\equiv a\zeta$ on $[\tau_1, \tau_2]$. Moreover, since $\widetilde\mu=0$ and $\widetilde p_2\equiv-\widetilde\gamma=0$, the condition $(\widetilde p, \widetilde \mu, \widetilde \gamma)\neq (0, 0, 0)$ implies that $\zeta\neq 0$. If $\zeta>0$, then $\widetilde\varphi(t)>0$ for all $t\in [\tau_1, \tau_2]$, and \eqref{min_condition_FP3d} implies that $\bar u(t)=-1$ for a.e. $t\in [\tau_1, \tau_2]$. Similarly, if $\zeta<0$, then $\widetilde\varphi(t)<0$ on $[\tau_1, \tau_2]$ and $\bar u(t)=1$ for a.e. $t\in [\tau_1, \tau_2]$. Hence, applying the Lebesgue Theorem \cite[Theorem~6, p.~340]{Kolmogorov_Fomin_1970} to the absolutely continuous function $\bar x_1(t)$, one has
\begin{equation} \label{interior_traj_4}
\bar x_1(t)=\bar x_1(\tau_1)+a(t-\tau_1) \quad (\forall t\in [\tau_1, \tau_2])
\end{equation}
in the first case, and
\begin{equation} \label{interior_traj_5}
\bar x_1(t)=\bar x_1(\tau_1)-a(t-\tau_1) \quad (\forall t\in [\tau_1, \tau_2])
\end{equation} in the second case.
If $\widetilde\gamma>0$ then, due to the assumption $a>\lambda>0$, $\widetilde\varphi$ is strictly decreasing on $[\tau_1, \tau_2]$. When there exists $t_{\zeta}\in (\tau_1, \tau_2)$ such that $\widetilde\varphi(t_{\zeta})=0$, one has $\widetilde\varphi(t)>0$ for $t\in (\tau_1, t_{\zeta})$ and $\widetilde\varphi(t)<0$ for $t\in (t_{\zeta}, \tau_2)$. Hence, \eqref{min_condition_FP3d} forces $\bar u(t)=-1$ a.e. $t\in [\tau_1, t_{\zeta}]$ and $\bar u(t)=1$ a.e. $t\in [t_{\zeta}, \tau_2]$. Thus, by the above-cited Lebesgue Theorem,
\begin{equation*} \label{interior_traj_6}
\bar x_1(t)=
\begin{cases}
\bar x_1(\tau_1)+a(t-\tau_1), \quad&\mbox{for}\; t\in [\tau_1, t_{\zeta}]\\
\bar x_1(t_{\zeta})-a(t-t_\zeta), &\mbox{for}\; t\in (t_{\zeta}, \tau_2].
\end{cases}
\end{equation*} As $\bar x_1(t)<1$ for all $t\in [\tau_1, \tau_2]$, one must have $\bar x_1(t_{\zeta})<1$, i.e., $t_{\zeta}<\tau_1+a^{-1}(1-\bar x_1(\tau_1))$. When $\widetilde\varphi(t)>0$ for all $t\in (\tau_1, \tau_2)$, condition \eqref{min_condition_FP3d} implies that $\bar u(t)=-1$ a.e. $t\in [\tau_1, \tau_2]$. So, $\bar x_1(t)$ is defined by \eqref{interior_traj_4}. When $\widetilde\varphi(t)<0$ for all $t\in (\tau_1, \tau_2)$, condition \eqref{min_condition_FP3d} implies that $\bar u(t)=1$ a.e. $t\in [\tau_1, \tau_2]$. Hence, $\bar x_1(t)$ is defined by \eqref{interior_traj_5}.
In summary, for any $\tau_1, \tau_2$ with $t_0\leq \tau_1<\tau_2\leq T$ and $\bar x_1(t)<1$ for all $t\in [\tau_1, \tau_2]$, the curve $t\mapsto \bar x_1(t)$, $t\in [\tau_1, \tau_2]$, cannot have more than one turning point. That is, the curve must belong to one of the three categories \eqref{lemma_interior_traj_1}--\eqref{lemma_interior_traj_3}.
\end{proof}
To proceed further, put $ {\mathcal T}_1:=\{t\in [t_0,T]\; :\; \bar x_1(t)=1\}.$ Since $\bar x_1(t)$ is a continuous function, ${\mathcal T}_1$ is a compact set (which may be empty).
{\bf Case 1:} \textit{${\mathcal T}_1=\emptyset$, i.e., $\bar x_1 (t) <1$ for all $t\in [t_0, T]$.} Then, by \eqref{1st_condition_for_mu} one has $\mu([t_0, T])=0$, i.e., $\mu=0$. Combining this with \eqref{1st_formula_for_eta} and \eqref{2nd_formula_for_eta}, one gets $\eta(t)=0$ for all $t\in [t_0, T]$. Thus, the relation $q(t)=p(t)+\eta(t)$ yields $q(t)=p(t)$ for every $t\in [t_0, T]$. Therefore, together with the Lebesgue Theorem \cite[Theorem~6, p.~340]{Kolmogorov_Fomin_1970}, the properties of $p(t)$ and $q(t)$ established in the above analyses of the conditions~(ii) and~(iii) give $$p_2(t)=q_2(T)=-\gamma\quad\ (\forall t\in [t_0, T])$$ and $$p_1(t)=p_1(T)+\int_T^t \dot p_1(\tau)d\tau=q_1(T)+\int_T^t \big(-\gamma e^{-\lambda \tau}\big)d\tau= \dfrac{\gamma}{\lambda}\big(e^{-\lambda t}-e^{-\lambda T}\big)$$ for all $t\in [t_0, T]$. Now, observe that substituting $q(t)=p(t)$ into \eqref{min_condition_FP_2a} yields
\begin{equation}\label{min_condition_FP2b}
[ap_1(t)+e^{-\lambda t}p_2(t)]\bar u(t)=\min_{u\in [-1, 1]}\left\{[ap_1(t)+e^{-\lambda t}p_2(t)]u\right\} \quad \mbox{a.e. } t\in [t_0, T].
\end{equation}
Setting $\varphi(t)=ap_1(t)+e^{-\lambda t}p_2(t)$ for $t\in [t_0, T]$ and using the above formulas of $p_1(t)$ and $p_2(t)$, we have
\begin{align*}
\varphi(t) =a\dfrac{\gamma}{\lambda}\big(e^{-\lambda t}-e^{-\lambda T}\big)-\gamma e^{-\lambda t}=\gamma (\dfrac{a}{\lambda}-1)e^{-\lambda t}-\gamma\dfrac{a}{\lambda} e^{-\lambda T}
\end{align*}
for $t\in [t_0, T]$. Due to the condition $(p, \gamma, \mu)\neq 0$, one must have $\gamma>0$. Moreover, the assumption $a>\lambda>0$ implies $\dfrac{a}{\lambda}>1$. Thus, the function $\varphi(t)$ is decreasing on $[t_0, T]$. In addition, it is clear that $\varphi(T)=-\gamma e^{-\lambda T}<0$, and $\varphi(t)=0$ if and only if $t=\bar t$, where
\begin{equation}\label{special_time}\bar t:=T-\dfrac{1}{\lambda}\ln \dfrac{a}{a-\lambda}.
\end{equation}The assumption $a>\lambda>0$ implies that $\bar t<T$. Note that the number $\rho:=\dfrac{1}{\lambda}\ln \dfrac{a}{a-\lambda}$ does not depend on the initial time $t_0$ and the terminal time $T$.
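Since the switching time $\bar t=T-\rho$ and the sign change of $\varphi$ at $\bar t$ drive the whole case analysis, a quick numerical sanity check can be useful. The following sketch (with illustrative parameter values satisfying $a>\lambda>0$, not taken from the paper) verifies that $\varphi(\bar t)=0$ and that $\varphi$ changes sign from positive to negative at $\bar t$.

```python
import math

# illustrative parameters satisfying a > lam > 0 (not from the paper)
a, lam, T, gamma = 2.0, 1.0, 5.0, 1.0

rho = (1.0 / lam) * math.log(a / (a - lam))  # rho = (1/lambda) ln(a/(a - lambda))
t_bar = T - rho                              # switching time \bar t = T - rho

def phi(t):
    # phi(t) = gamma*(a/lam - 1)*exp(-lam*t) - gamma*(a/lam)*exp(-lam*T)
    return gamma * (a / lam - 1.0) * math.exp(-lam * t) \
           - gamma * (a / lam) * math.exp(-lam * T)
```

As expected, $\varphi$ vanishes exactly at $\bar t$ and is positive before and negative after it.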
If $t_0\geq \bar t$, then one has $\varphi(t)<0$ for all $t\in (t_0, T)$. This situation happens if and only if $T-t_0\leq\rho$ (the time interval of the optimal control problem is rather small). Clearly, condition \eqref{min_condition_FP2b} forces $\bar u(t)=1$ a.e. $t\in [t_0, T]$. Since \eqref{state control system_FP_2a} is fulfilled for $x(t)=\bar x(t)$ and $u(t)=\bar u(t)$, applying the Lebesgue Theorem \cite[Theorem~6, p.~340]{Kolmogorov_Fomin_1970} to the absolutely continuous function $\bar x_1(t)$, one has
\begin{equation}\label{descending_trajectory1}\bar x_1(t)=\bar x_1(t_0)+\int_{t_0}^t\dot{\bar x}_1(\tau)d\tau=\bar x_1(t_0)+\int_{t_0}^t(-a\bar u(\tau))d\tau=x_0-a(t-t_0)\end{equation} for all $t\in [t_0, T]$. In addition, by \eqref{phase_second_component} one finds that \begin{equation}\label{descending_trajectory2}\bar x_2(t)=\int_{t_0}^{t} \big[-e^{-\lambda\tau}(\bar x_1(\tau)+\bar u(\tau))\big] d\tau=\int_{t_0}^{t} \big[-e^{-\lambda \tau}\big(x_0-a(\tau-t_0)+1\big)\big] d\tau\end{equation} for all $t\in [t_0, T]$.
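The closed-form expression \eqref{descending_trajectory2} for $\bar x_2$ can also be checked numerically. The sketch below (hypothetical parameter values) integrates $-e^{-\lambda\tau}\big(x_0-a(\tau-t_0)+1\big)$ by the trapezoidal rule and compares the result with the antiderivative $F(\tau)=e^{-\lambda\tau}(A\tau+B)$, $A=-a/\lambda$, $B=(x_0+1+at_0-a/\lambda)/\lambda$, obtained by elementary integration.

```python
import math

a, lam = 2.0, 1.0          # illustrative parameters with a > lam > 0
t0, x0 = 0.0, 0.5          # illustrative initial data

def x2_quadrature(t, n=20000):
    """Trapezoidal rule for x_2(t) = int_{t0}^t -exp(-lam*tau)*(x0 - a*(tau - t0) + 1) dtau
    (descending regime: u = 1 and x_1(tau) = x0 - a*(tau - t0))."""
    h = (t - t0) / n
    total = 0.0
    for k in range(n + 1):
        tau = t0 + k * h
        f = -math.exp(-lam * tau) * (x0 - a * (tau - t0) + 1.0)
        total += (0.5 if k in (0, n) else 1.0) * f
    return h * total

def x2_closed_form(t):
    """Antiderivative F(tau) = exp(-lam*tau)*(A*tau + B) evaluated between t0 and t."""
    c = x0 + 1.0 + a * t0
    A = -a / lam
    B = (c - a / lam) / lam
    F = lambda tau: math.exp(-lam * tau) * (A * tau + B)
    return F(t) - F(t0)
```

The two computations agree to quadrature accuracy, which supports the displayed integral formula.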
{If $t_0<\bar t$, then $\varphi(t)>0$ for $t\in (t_0, \bar t)$ and $\varphi(t)<0$ for $t\in (\bar t, T)$. This situation happens if and only if $T-t_0>\rho$ (the time interval of the optimal control problem is large enough). Condition \eqref{min_condition_FP2b} yields $\bar u(t)=-1$ for a.e. $t\in [t_0, \bar t]$ and $\bar u(t)=1$ for a.e. $t\in [\bar t, T]$. Hence, by the above-cited Lebesgue Theorem, one has
\begin{equation}\label{up_then_down_traj}
\bar x_1(t)=
\begin{cases}
x_0+a(t-t_0),\quad &\mbox{if }\, t\in [t_0, \bar t]\\
\bar x_1(\bar t)-a(t-\bar t),\quad &\mbox{if }\, t\in (\bar t, T].
\end{cases}
\end{equation}
Therefore, from \eqref{phase_second_component}, we have
\begin{align}\label{up_then_down_traj2}
\bar x_2(t)=\begin{cases}
\displaystyle\int_{t_0}^{t} \big[-e^{-\lambda \tau}\big(x_0+a(\tau-t_0)+1\big)\big] d\tau, \quad &\mbox{if }\, t\in [t_0, \bar t]\\
\displaystyle\int_{t_0}^{t} \big[-e^{-\lambda \tau}\big(\bar x_1(\bar t)-a(\tau-\bar t)+1]d\tau, &\mbox{if }\, t\in (\bar t, T].
\end{cases}
\end{align}
Noting that $\bar x_1 (t) <1$ for all $t\in [t_0, T]$ by our assumption, we must have $\bar x_1(\bar t)<1$, i.e., $\bar t<t_0+a^{-1}(1-x_0)$. Since $\bar t=T- \rho$, the last inequality is equivalent to $T-t_0<\rho+a^{-1}(1-x_0)$.}
{Thus, if ${\mathcal T}_1=\emptyset$ and $T-t_0\leq\rho$, then the unique process $(\bar x,\bar u)$ suspected of being a $W^{1,1}$ local minimizer of $(FP_{2a})$ is the one with $\bar u(t)=1$ a.e. $t\in [t_0, T]$ and $\bar x(t)=(\bar x_1(t),\bar x_2(t))$, where $\bar x_1(t)$ and $\bar x_2(t)$ are given respectively by \eqref{descending_trajectory1} and \eqref{descending_trajectory2}. Otherwise, if ${\mathcal T}_1=\emptyset$ and $$\rho<T-t_0<\rho+a^{-1}(1-x_0),$$ then the unique process $(\bar x,\bar u)$ suspected of being a $W^{1,1}$ local minimizer of $(FP_{2a})$ is the one with $\bar u(t)=-1$ for a.e. $t\in [t_0, \bar t]$ and $\bar u(t)=1$ for a.e. $t\in [\bar t, T]$, and $\bar x(t)=(\bar x_1(t),\bar x_2(t))$, where $\bar x_1(t)$ and $\bar x_2(t)$ are defined respectively by \eqref{up_then_down_traj} and \eqref{up_then_down_traj2}. The situation where ${\mathcal T}_1=\emptyset$ and $T-t_0\geq \rho+a^{-1}(1-x_0)$ (equivalently, $x_0\geq 1-a(\bar t-t_0)$) cannot occur.}
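For the second alternative in Case 1 (time interval longer than $\rho$), the candidate trajectory rises with slope $a$ until $\bar t$ and then descends with slope $a$. A short sketch (with hypothetical parameter values chosen so that $\rho<T-t_0<\rho+a^{-1}(1-x_0)$ holds) illustrates the construction and the admissibility check $\bar x_1(\bar t)<1$:

```python
import math

a, lam, T = 2.0, 1.0, 5.0   # illustrative parameters with a > lam > 0
t0, x0 = 4.0, 0.2           # chosen so that rho < T - t0 < rho + (1 - x0)/a

rho = (1.0 / lam) * math.log(a / (a - lam))
t_bar = T - rho             # switching time of the control

def x1_bar(t):
    """Up-then-down trajectory: u = -1 (slope +a) before t_bar, u = +1 (slope -a) after."""
    if t <= t_bar:
        return x0 + a * (t - t0)
    return x0 + a * (t_bar - t0) - a * (t - t_bar)

peak = x1_bar(t_bar)        # must stay below the state constraint x_1 <= 1
```

The trajectory is continuous at the switching time and its maximal value stays strictly below the constraint, as required.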
Now, suppose that ${\mathcal T}_1\neq\emptyset$, i.e., there exists $t\in [t_0, T]$ with the property $\bar x_1 (t) =1$. Setting
\begin{equation*}\label{two_alphas} \alpha_1:=\min \{t\in [t_0,T]\; :\; \bar x_1(t)=1\},\quad \alpha_2:=\max\{t\in [t_0,T]\; :\; \bar x_1(t)=1\},
\end{equation*} we have $t_0\leq\alpha_1\leq\alpha_2\leq T$. The following situations can occur.
{\bf Case 2:} \textit{$t_0<\alpha_1=\alpha_2=T$, i.e., $\bar x_1(t)<1$ for $t\in [t_0,T)$ and $\bar x_1(T)=1$}. Clearly,~\eqref{1st_condition_for_mu} means that $\mu([t_0, T))=0$. Moreover, if $\nu(T)\neq (1, 0)$, then from \eqref{2nd_condition_for_mu} it follows that $\mu(\{T\})=0$. So, we have $\mu([t_0, T])=\mu([t_0, T))+\mu(\{T\})=0$, i.e., $\mu=0$. Hence, we can repeat the arguments already used in Case 1 to prove that either $\bar x_1(t)=x_0-a(t-t_0)$ for all $t\in [t_0, T]$, or \begin{equation*}
\bar x_1(t)=
\begin{cases}
x_0+a(t-t_0),\quad &\mbox{if }\, t\in [t_0, \bar t]\\
\bar x_1(\bar t)-a(t-\bar t),\quad &\mbox{if }\, t\in (\bar t, T].
\end{cases}
\end{equation*} In particular, either we have $\bar x_1(T)=x_0-a(T-t_0)<1$, or $\bar x_1(T)=\bar x_1(\bar t)-a(T-\bar t)<1$. Both instances are impossible, because $\bar x_1(T)=1$. So, the situation $\nu(T)\neq (1, 0)$ is excluded; thus $\nu(T)=(1, 0)$.
From \eqref{1st_formula_for_eta} and \eqref{2nd_formula_for_eta}, one gets $\eta(t)=0$ for $t\in [t_0, T)$ and $\eta(T)=(\mu(T)-\mu(T-0), 0),$ where $\mu(T-0)$ denotes the left limit of $\mu$ at $T$. Therefore, the relation $q(t)=p(t)+\eta(t)$, which holds for every $t\in [t_0, T]$, yields $q_1(t)=p_1(t)$ for $t\in [t_0, T)$, $q_1(T)=p_1(T)+\mu(T)-\mu(T-0)$, and $q_2(t)=p_2(t)$ for $t\in [t_0, T]$. Combining this with the above results of our analyses of the conditions (ii) and (iii), we have $p_2(t)=-\gamma$ and $p_1(t)=\dfrac{\gamma}{\lambda}e^{-\lambda t}+\zeta$ for all $t\in [t_0, T]$, with~$\zeta$ being a constant. Since $q(t)$ equals $p(t)$ everywhere on $[t_0, T]$, except possibly for $t=T$, condition~\eqref{min_condition_FP_2a} implies that
\begin{equation}\label{min_condition_FP2c}
[ap_1(t)+e^{-\lambda t}p_2(t)]\bar u(t)=\min_{u\in [-1, 1]}\left\{[ap_1(t)+e^{-\lambda t}p_2(t)]u\right\} \quad \mbox{a.e. } t\in [t_0, T].
\end{equation}
As in Case 1, we set $\varphi(t)=ap_1(t)+e^{-\lambda t}p_2(t)$ for every $t\in [t_0, T]$. Here one has
\begin{align*}
\varphi(t) =a\big(\dfrac{\gamma}{\lambda}e^{-\lambda t}+\zeta\big)-\gamma e^{-\lambda t}=\gamma (\dfrac{a}{\lambda}-1)e^{-\lambda t}+a\zeta
\end{align*}
for all $t\in [t_0, T]$.
Since $\dfrac{a}{\lambda}>1$, the function $\varphi(t)$ is decreasing on $[t_0, T]$. Besides, since $\mu(T)-\mu(T-0)\geq 0,$ $q_1(T)=p_1(T)+\mu(T)-\mu(T-0)$, and $q_1(T)=0$, we have $p_1(T)\leq 0$. So, $\varphi(T)=ap_1(T)-\gamma e^{-\lambda T}<0$. If $\varphi(t)<0$ for all $t\in (t_0,T)$, then by~\eqref{min_condition_FP2c} one has $\bar u(t)=1$ for a.e. $t\in [t_0,T]$.
So, as in \eqref{descending_trajectory1}, we have $\bar x_1(t)=x_0-a(t-t_0)$ for all $t\in [t_0,T]$. This yields $\bar x_1(T)<x_0<1$. We have arrived at a contradiction. Now, suppose that there exists $\bar t_\zeta\in [t_0,T)$ satisfying $\varphi(\bar t_\zeta)=0$. Then $\varphi(t)>0$ for $t\in (t_0, \bar t_\zeta)$ and $\varphi(t)<0$ for $t\in (\bar t_\zeta, T)$. Thus, \eqref{min_condition_FP2c} yields $\bar u(t)=-1$ for a.e. $t\in [t_0, \bar t_\zeta]$ and $\bar u(t)=1$ for a.e. $t\in [\bar t_\zeta, T]$. Hence, applying the Lebesgue Theorem \cite[Theorem~6, p.~340]{Kolmogorov_Fomin_1970} to the absolutely continuous function $\bar x_1(t)$, one has $\bar x_1(t)=a(t-t_0)+x_0$ for all $t\in [t_0, \bar t_\zeta]$ and $\bar x_1(t)=-a(t-\bar t_\zeta)+\bar x_1(\bar t_\zeta)$ for every $t\in [\bar t_\zeta, T]$. As $\bar x_1 (t) <1$ for all $t\in [t_0, T)$ by our assumption, we must have $\bar x_1(\bar t_\zeta)<1$. Then we get $\bar x_1(T)=-a(T-\bar t_\zeta)+\bar x_1(\bar t_\zeta)<1,$ which is impossible.
{\bf Case 3:} \textit{$t_0=\alpha_1=\alpha_2<T$, i.e., $x_0=1$ and $\bar x_1(t)<1$ for $t\in (t_0,T]$}. Let $\bar\varepsilon>0$ be such that $t_0+\bar\varepsilon< T$. For any $k\in{\rm I\!N}$ with $k^{-1}\in (0,\bar\varepsilon)$, by Proposition~\ref{lemma_basic_property_2} we know that the restriction of $(\bar x, \bar u)$ on $[t_0+k^{-1}, T]$ is a $W^{1,1}$ local minimizer for the Mayer problem~$(FP_{2b})$, which is obtained from $(FP_{2a})$ by replacing $t_0$ with $t_0+k^{-1}$. Since $\bar x_1(t)<1$ for all $t\in [t_0+k^{-1},T]$, we can repeat the arguments already used in Case 1 to get that either $\bar x_1(t)=\bar x_1(t_0+k^{-1})-a(t-t_0-k^{-1})$ for all $t\in [t_0+k^{-1}, T]$, or
\begin{equation*}
\bar x_1(t)=
\begin{cases}
\bar x_1(\bar t)+a(t-\bar t),\quad &\mbox{if }\, t\in [t_0+k^{-1}, \bar t]\\
\bar x_1(\bar t)-a(t-\bar t),\quad &\mbox{if }\, t\in (\bar t, T]
\end{cases}
\end{equation*} with $\bar t=T-\rho$, $\bar t\in [t_0+k^{-1}, T]$, and $\bar x_1(\bar t)<1$. By the Dirichlet (pigeonhole) principle, there exist infinitely many indices $k$ with $k^{-1}\in (0,\bar\varepsilon)$ such that $\bar x_1(t)$ has the first form (resp., the second form). Without loss of generality, we may assume that this happens for all $k$ with $k^{-1}\in (0,\bar\varepsilon)$. If the first situation occurs, then by letting $k\to\infty$ we can assert that $\bar x_1(t)=1-a(t-t_0)$ for all $t\in [t_0, T]$. If the second situation occurs, then we have
\begin{equation*}
x_0=\displaystyle\lim_{k\to\infty}\bar x_1(t_0+k^{-1})=
\displaystyle\lim_{k\to\infty}\big[\bar x_1(\bar t)+a(t_0+k^{-1}-\bar t)\big]=\bar x_1(\bar t)+a(t_0-\bar t).
\end{equation*} Since $\bar x_1(\bar t)+a(t_0-\bar t)\leq \bar x_1(\bar t)<1$ and $x_0=1$, we have arrived at a contradiction.
{\bf Case 4:} \textit{$t_0<\alpha_1\leq\alpha_2<T$}. Then, $\bar x_1(\alpha_1)=\bar x_1(\alpha_2)=1$, $\bar x_1(t)<1$ for $t\in [t_0,\alpha_1)\cup (\alpha_2,T]$. To find a formula for $(\bar x, \bar u)$ on $[\alpha_2, T]$, observe from Proposition~\ref{lemma_basic_property_2} that the restriction of $(\bar x, \bar u)$ on $[\alpha_2, T]$ is a $W^{1,1}$ local minimizer for the Mayer problem obtained from $(FP_{2a})$ by replacing $t_0$ with $\alpha_2$. Thus, the result in Case 3 applied to the process $(\bar x(t), \bar u(t))$, $t\in [\alpha_2, T]$, implies that $\bar x_1(t)=1-a(t-\alpha_2)$ and $\bar x_2(t)=\displaystyle\int_{\alpha_2}^{t} \big[-e^{-\lambda \tau}\big(1-a(\tau-\alpha_2)+1\big)\big] d\tau$ for all $t\in [\alpha_2, T]$. To obtain a formula for $(\bar x, \bar u)$ on $[t_0, \alpha_2]$, consider the following two subcases.
\underline{\textit{Subcase 4a}}: $t_0<\alpha_1=\alpha_2<T$. Here we have $\bar x_1(\alpha_1)=1$ and $\bar x_1(t)<1$ for all $t\in [t_0, T]\setminus\{\alpha_1\}$. To find a formula for $\bar x_1(.)$ on $[t_0, \alpha_1]$, we temporarily fix a value $\alpha \in (t_0, \alpha_1)$ (later, we will let $\alpha$ converge to $\alpha_1$). Since $\bar x_1(t)<1$ for all $t\in [t_0, \alpha]$, applying Proposition~\ref{lemma_interior_traj} with $\tau_1:=t_0$ and $\tau_2:=\alpha$, we can assert that the restriction of $\bar x_1(.)$ on $[t_0, \alpha]$ is defined by one of the following three formulas:
\begin{equation}\label{interior_traj_1}
\bar x_1(t)=x_0+a(t-t_0), \quad t\in [t_0, \alpha],
\end{equation}
\begin{equation}\label{interior_traj_2}
\bar x_1(t)=x_0-a(t-t_0), \quad t\in [t_0, \alpha],
\end{equation}
and
\begin{equation} \label{interior_traj_3}
\bar x_1(t)=
\begin{cases}
x_0+a(t-t_0), \quad & t\in [t_0, t_{\zeta}]\\
\bar x_1(t_{\zeta})-a(t-t_\zeta), & t\in (t_{\zeta}, \alpha],
\end{cases}
\end{equation}
where $t_{\zeta} \in (t_0, \alpha)$. Hence, the graph of $\bar x_1(.)$ on $[t_0, \alpha]$ is of one of the following types:
C1)~\textit{Going up} as in Fig.~\ref{fig4}; C2) \textit{Going down} as in Fig.~\ref{fig5}; C3) \textit{Going up first and then going down} as in Fig.~\ref{fig6}.
\begin{figure}[!ht] \begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.15]{interior_traj_1a_181203.jpg}
\caption{Category C1}\label{fig4}
\end{minipage}
\hfill
\begin{minipage}[b]{0.5\textwidth}
\centering
\includegraphics[scale=.15]{interior_traj_2a_181203.jpg}
\caption{Category C2}\label{fig5}
\end{minipage}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[scale=.15]{interior_traj_3a_181203.jpg}
\caption{Category C3}\label{fig6}
\end{figure}
Now, let $\alpha=\alpha^{(k)}$ with $\alpha^{(k)}:=\alpha_1-\dfrac{1}{k}$, where $k\in {\rm I\!N}$ is so large that $\alpha^{(k)}\in (t_0, \alpha_1)$. Since for each $k$ the restriction of the graph of $\bar x_1(.)$ to $[t_0,\alpha^{(k)}]$ must be of one of the three types C1--C3, by the Dirichlet (pigeonhole) principle we can find a subsequence $\{k'\}$ of $\{k\}$ such that the corresponding graphs belong to a fixed category. If the latter is~C2, then by~\eqref{interior_traj_2} and the continuity of $\bar x_1(.)$ one has
$$\bar x_1(\alpha_1)=\displaystyle\lim_{k'\to\infty} \bar x_1(\alpha^{(k')})=\displaystyle\lim_{k'\to\infty} \Big[x_0-a(\alpha^{(k')}-t_0)\Big]=x_0-a(\alpha_1-t_0).$$ This is impossible, because $\bar x_1(\alpha_1)=1$. Similarly, the situation where the fixed category is~C3 can be excluded by using \eqref{interior_traj_3}. If the graphs belong to the category~C1, from~\eqref{interior_traj_1} we deduce that
\begin{equation}\label{interior_traj_1a}
\bar x_1(t)=x_0+a(t-t_0) \quad (\forall t\in [t_0, \alpha_1]).
\end{equation} Then, the condition $\bar x_1(\alpha_1)=1$ is satisfied if and only if $\alpha_1=t_0+a^{-1}(1-x_0).$
\underline{\textit{Subcase 4b}}: $t_0<\alpha_1<\alpha_2<T$. Then, one has $\bar x_1(\alpha_1)=\bar x_1(\alpha_2)=1$ and $\bar x_1(t)<1$ for $t\in [t_0,\alpha_1)\cup (\alpha_2,T]$. We are going to show that this situation cannot occur.
Suppose first that $\bar x_1(t)=1$ for all $t\in (\alpha_1, \alpha_2)$. Since $(\bar x, \bar u)$ is a $W^{1,1}$ local minimizer of $(FP_{2a})$, by Definition \ref{local_minimizer} we can find $\delta>0$ such that the process $(\bar x, \bar u)$ minimizes the quantity $g(x(t_0), x(T))=x_2(T)$ over all feasible processes $(x, u)$ of $(FP_{2a})$ with $\|\bar x-x\|_{W^{1,1}} \leq \delta$. By the result given before Subcase~4a, one has $\bar x_1(t)=1-a(t-\alpha_2)$ for all $t\in [\alpha_2, T]$. Fixing a number $\alpha\in (\alpha_1,\alpha_2)$, we consider the pair of functions $(\widetilde x^\alpha, \widetilde u^\alpha)$ defined by
\begin{align*}
\widetilde x^\alpha (t):=
\begin{cases}
\bar x(t), \ \; t\in [t_0, \alpha)\\
1-a(t-\alpha), \ \; t\in [\alpha, T]
\end{cases}
\quad\; \mbox{and} \quad\;
\widetilde u^\alpha(t):=
\begin{cases}
\bar u(t), \ \; t\in [t_0, \alpha)\\
1, \ \; t\in [\alpha, T].
\end{cases}
\end{align*}
It is easy to check that $(\widetilde x^\alpha, \widetilde u^\alpha)$ is a feasible process of $(FP_{2a})$. Besides, by direct computation, we have $$\bar x_2(T)-\widetilde x_2^\alpha (T)=\dfrac{1}{\lambda}[a(\alpha-\alpha_2)e^{-\lambda T}+2(e^{-\lambda \alpha}-e^{-\lambda \alpha_2})].$$
Thus, the condition $\alpha<\alpha_2$ yields $\bar x_2(T)>\widetilde x_2^\alpha (T)$. Since $\displaystyle\lim_{\alpha\to \alpha_2}\|\bar x-\widetilde x^\alpha\|_{W^{1,1}} =0$, one has $\|\bar x-\widetilde x^\alpha\|_{W^{1,1}}\leq\delta$ for all $\alpha\in (\alpha_1,\alpha_2)$ sufficiently close to $\alpha_2$. This contradicts the assumed $W^{1,1}$ local optimality of the process $(\bar x,\bar u)$.
Now, suppose that there exists $\hat t \in (\alpha_1, \alpha_2)$ such that $\bar x_1(\hat t)<1$. By the continuity of $\bar x_1(.)$, the constants $\hat \alpha_1:=\max\{t\in [\alpha_1, \hat t]\,:\, \bar x_1(t)=1\}$ and $\hat \alpha_2:=\min\{t\in [\hat t, \alpha_2]\,:\, \bar x_1(t)=1\}$ are well defined. Note that $\hat t\in \big(\hat \alpha_1,\hat \alpha_2\big)$ and $\bar x_1(t)<1$ for every $t\in (\hat \alpha_1, \hat \alpha_2)$. If $\varepsilon>0$ is small enough, then $\hat \alpha_1+\varepsilon\in \big(\hat \alpha_1,\hat t\big)$. Using the result given in Subcase~4a for the restriction of the function $\bar x_1(t)$ on the segment $[\hat \alpha_1+\varepsilon, \hat \alpha_2]$ (thus, $\hat \alpha_1+\varepsilon$ plays the role of $t_0$ and $\hat \alpha_2$ takes the place of $\alpha_1$), one finds that
\begin{equation*}
\bar x_1(t)=\bar x_1(\hat \alpha_1+\varepsilon)+a(t-\hat \alpha_1-\varepsilon) \quad\; (\forall t\in [\hat \alpha_1+\varepsilon, \hat \alpha_2]).
\end{equation*}
In particular, the function $\bar x_1(t)$ is strictly increasing on $[\hat \alpha_1+\varepsilon, \hat \alpha_2]$. Since $\hat t\in \big(\hat \alpha_1+\varepsilon,\hat \alpha_2\big)$, this implies that $\bar x_1(\hat \alpha_1+\varepsilon)<\bar x_1(\hat t)$. Then, by the continuity of $\bar x_1(t)$ we obtain
\begin{equation*}
\bar x_1(\hat \alpha_1)=\displaystyle\lim_{\varepsilon\to 0} \bar x_1(\hat \alpha_1+\varepsilon)\leq\bar x_1(\hat t)<1.
\end{equation*}
As $\bar x_1(\hat \alpha_1)=1$, we have arrived at a contradiction.
Since Subcase~4b cannot happen, we conclude that the formula for $\bar x_1(t)$ in this case is given by
\begin{equation*}
\bar x_1(t)=
\begin{cases}
x_0+a(t-t_0), \quad & t \in [t_0, \alpha_1]\\
1-a(t-\alpha_1), & t \in (\alpha_1, T],
\end{cases}
\end{equation*}
with $\alpha_1:=t_0+a^{-1}(1-x_0)$. One must have $\alpha_1\leq \bar t$, where $\bar t$ is defined by \eqref{special_time}. Indeed, suppose on the contrary that $\alpha_1>\bar t$. For an arbitrarily given $\alpha \in (\bar t, \alpha_1)$, we consider the problem $(FP_{1b})$ (resp., the problem $(FP_{2b})$) which is obtained from the problem $(FP_{1a})$ in Section~\ref{Example 1} (resp., from the above problem $(FP_{2b})$) by letting $\alpha$ play the role of the initial time $t_0$. Since $\alpha > \bar t$, it follows from Theorem~\ref{Thm1} that $(FP_{1b})$ has a unique global solution $(\bar x^\alpha,\bar u^\alpha)$, where $\bar u^\alpha(t)=1$ for a.e. $t\in [\alpha, T]$, $\bar x_1^\alpha(t)=\bar x_1(\alpha)-a(t-\alpha)$ for all $t \in [\alpha, T]$, and $\bar x_2^\alpha(t)= \int_{\alpha}^{t} \big[-e^{-\lambda\tau}(\bar x_1^\alpha(\tau)+\bar u^\alpha(\tau))\big] d\tau$ for all $t \in [\alpha, T]$. Clearly, the restriction of $(\bar x, \bar u)$ on $[\alpha, T]$ is a feasible process for $(FP_{1b})$. Thus, we have
\begin{equation}\label{intergral_inequality}
\bar x_2^\alpha(T)<\bar x_2(T).
\end{equation}
Besides, by Proposition~\ref{lemma_basic_property_2}, the restriction of $(\bar x, \bar u)$ on $[\alpha, T]$ is a $W^{1,1}$ local solution for $(FP_{2b})$. So, there exists $\delta>0$ such that the restriction of $(\bar x, \bar u)$ on $[\alpha, T]$ minimizes the quantity $x_2(T)$ over all feasible processes $(x, u)$ of $(FP_{2b})$ with $\|x-\bar x\|_{W^{1,1}([\alpha, T]; {\rm I\!R}^n)} \leq \delta$. Clearly, $(\bar x^\alpha,\bar u^\alpha)$ is a feasible process of $(FP_{2b})$. Therefore, since $\|\bar x^\alpha -\bar x\|_{W^{1,1}([\alpha, T]; {\rm I\!R}^n)} \leq \delta$ for all $\alpha$ sufficiently close to $\alpha_1$, we have $\bar x_2^\alpha(T)\geq\bar x_2(T)$ for those $\alpha$. This contradicts~\eqref{intergral_inequality}.
\medskip
Going back to the original problem $(FP_2)$, we can summarize the results obtained in this section as follows.
\begin{Theorem}\label{Thm2} Given any $a,\lambda$ with $a>\lambda>0$, define $\rho=\dfrac{1}{\lambda}\ln \dfrac{a}{a-\lambda}>0$, $\bar t=T-\rho$, $\bar x_0=1-a(\bar t-t_0)$, and $\alpha_1=t_0+a^{-1}(1-x_0)$. Then, problem $(FP_2)$ has a unique local solution $(\bar x,\bar u)$, which is a global solution, where $\bar u(t)=-a^{-1}\dot{\bar x}(t)$ for a.e. $t\in [t_0, T]$ and $\bar x(t)$ can be described as follows:
\begin{description}
\item{\rm (a)} If $t_0 \geq \bar t$ (i.e., $T-t_0 \leq \rho$), then
\begin{equation*}
\bar x(t)=x_0-a(t-t_0), \quad t \in [t_0, T].
\end{equation*}
\item{\rm (b)} If $t_0 < \bar t$ and $x_0<\bar x_0$ (i.e., $\rho< T-t_0<\rho+a^{-1}(1-x_0)$), then
\begin{equation*}
\bar x(t)=
\begin{cases}
x_0+a(t-t_0), \quad & t \in [t_0, \bar t]\\
x_0-a(t+t_0-2\bar t), & t \in (\bar t, T].
\end{cases}
\end{equation*}
\item{\rm (c)} If $t_0 < \bar t$ and $x_0\geq \bar x_0$ (i.e., $T-t_0\geq\rho+a^{-1}(1-x_0)$), then
\begin{equation*}
\bar x(t)=
\begin{cases}
x_0+a(t-t_0), \quad & t \in [t_0, \alpha_1]\\
1-a(t-\alpha_1), & t \in (\alpha_1, T].
\end{cases}
\end{equation*}
\end{description}
\end{Theorem}
\begin{proof} To obtain the assertions (a)--(c), it suffices to combine the results formulated in Case~1, Case~3, and Case~4, keeping in mind that $\bar x_1(t)$ in $(FP_{2a})$ plays the role of $\bar x(t)$ in $(FP_2)$.
\end{proof}
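The three regimes of Theorem~\ref{Thm2} can be assembled into a single synthesis. The sketch below (illustrative parameter values; the name `optimal_trajectory` is ours, not the paper's) selects the case from $(t_0,x_0)$ exactly as in (a)--(c) and returns the corresponding $\bar x(t)$:

```python
import math

def optimal_trajectory(t, t0, T, x0, a, lam):
    """State bar x(t) of problem (FP_2) per cases (a)-(c) of the theorem
    (a sketch; assumes a > lam > 0)."""
    rho = (1.0 / lam) * math.log(a / (a - lam))
    t_bar = T - rho
    if t0 >= t_bar:                        # (a): descend for the whole interval
        return x0 - a * (t - t0)
    x_bar0 = 1.0 - a * (t_bar - t0)
    if x0 < x_bar0:                        # (b): ascend until t_bar, then descend
        if t <= t_bar:
            return x0 + a * (t - t0)
        return x0 - a * (t + t0 - 2.0 * t_bar)
    alpha1 = t0 + (1.0 - x0) / a           # (c): hit the constraint x = 1 at alpha1
    if t <= alpha1:
        return x0 + a * (t - t0)
    return 1.0 - a * (t - alpha1)
```

One can check that the trajectory of case (b) is continuous at $\bar t$ and that the trajectory of case (c) touches the constraint $x=1$ exactly at $\alpha_1$.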
\section{Conclusions}\label{Conclusions}
We have analyzed a maximum principle for finite horizon state constrained problems via two parametric examples of optimal control problems of the Lagrange type, each depending on five parameters. These problems resemble the optimal growth problem in mathematical economics. The first example concerns control problems without state constraints. The second belongs to the class of irregular control problems with unilateral state constraints. We have proved that the control problem in each example has a unique local solution, which is also a global solution. Moreover, we have presented an explicit description of the optimal process in terms of the five parameters.
The results obtained allow a deeper understanding of the maximum principle in question.
It seems to us that, following the approach adopted in this paper, one can study economic optimal growth models by advanced tools from functional analysis and optimal control theory.
% [Source metadata: arXiv:1901.03794 (math.OC), 2019-01-15, https://arxiv.org/abs/1901.03794.
% Title: "Analyzing a Maximum Principle for Finite Horizon State Constrained Problems via Parametric Examples. Part 1: Problems with Unilateral State Constraints".
% Abstract: In the present paper, the maximum principle for finite horizon state constrained problems from the book by R. Vinter [Optimal Control, Birkhäuser, Boston, 2000; Theorem 9.3.1] is analyzed via parametric examples. The latter have their origin in a recent paper by V. Basco, P. Cannarsa, and H. Frankowska, and resemble the optimal growth problem in mathematical economics. The solution existence of these parametric examples is established by invoking Filippov's existence theorem for Mayer problems. Since the maximum principle is only a necessary condition for local optimal processes, a large amount of additional investigation is needed to obtain a comprehensive synthesis of the finitely many processes suspected of being local minimizers. Our analysis not only helps to understand the principle in depth, but also serves as a sample of applying it to meaningful prototypes of economic optimal growth models. Problems with unilateral state constraints are studied in Part 1 of the paper. Problems with bilateral state constraints will be addressed in Part 2.]
% [Source metadata: arXiv:0711.1979, https://arxiv.org/abs/0711.1979. Title: "Galilean Classification of Curves".
% Abstract: In this paper, we classify space-time curves up to the Galilean group of transformations with Cartan's method of equivalence. As an aim, we elicit invariants from the action of the special Galilean group on space-time curves; these are, in fact, conservation laws in physics. We also state a necessary and sufficient condition for equivalent Galilean motions.]
\section{Introduction}
The Galilean transformation group plays an important role in classical
and modern physics, for instance in quantum theory, in gauge
transformations in electromagnetism, in mechanics \cite{AM}, and for
conductivity tensors in fluid dynamics \cite{Ga}, as well as in
mathematical fields such as Lagrangian mechanics, dynamics, and
control theory \cite{Le}. In physics, when we study a curve in the
Galilean space--time ${\Bbb R}^3\times{\Bbb R}$, it is very important
to know the invariants of the curve, which are conservation laws. In
\cite{AM}, for example, a Hamiltonian vector field satisfying certain
conditions was introduced as a Galilean invariant of special Galilean
transformations on $T^{\ast}{\Bbb R}^3$. In this paper, by Cartan's
method of equivalence, we find all invariants. We show that there are
two functionally independent invariants for a curve in Galilean
space--time up to the action of the special Galilean transformation
group, such that all other invariants are functions of these
invariants and their derivatives. We then use these invariants to
classify space--time curves with respect to special Galilean
transformations. In the next section, we state Cartan's theorem, which
is the main key to the classification. In Section 3, we give the
definition of the Galilean group as a matrix group and describe its
properties. In the last section, we determine the invariants and
classify space--time curves up to the special Galilean group. Finally,
we prove that these invariants provide a necessary and sufficient
condition for the specification of space--time curves, and from this
classification we infer a physical consequence for Galilean motions.
\section{Preliminaries}
Let $G\subset{\rm GL}(n,{\Bbb R})$ be a matrix Lie group with Lie
algebra ${\goth g}$ and $P:G\rightarrow Mat(n\times n)$ be a
matrix-valued function which embeds $G$ into $Mat(n\times n)$, the
vector space of $n\times n$ matrices with real entries. Its
differential is $dP_B: T_BG\rightarrow T_{P(B)}Mat(n\times
n)\simeq Mat(n\times n)$.
\paragraph{Definition 2.1} The following ${\goth g}$-valued 1-form on $G$ is called the {\it
Maurer-Cartan form}:
$$\omega_B=\{P(B)\}^{-1}\,.\,dP_B,$$
which is often written as $\omega = P^{-1}\,.\,dP$.
The Maurer-Cartan form is the key to classifying maps into
homogeneous spaces of $G$, and this process relies on the following theorem
(for a proof, see \cite{Iv-La}):
\paragraph{Theorem 2.2 (Cartan)}{\it Let $G$ be a matrix Lie group with Lie
algebra ${\goth g}$ and Maurer-Cartan form $\omega$. Let $M$ be a
manifold on which there exists a ${\goth g}-$valued 1-form $\phi$
satisfying $d\,\phi = -\phi\wedge\phi$. Then for any point $x\in
M$ there exist a neighborhood $U$ of $x$ and a map $f:U\rightarrow
G$ such that $f^{\ast}\,\omega = \phi$. Moreover, any two such
maps $f_1, f_2$ must satisfy $f_1 = L_B\circ f_2$ for some fixed
$B\in G$ ($L_B$ is the left action of $B$ on $G$).}
\paragraph{Corollary 2.3 (\cite{Iv-La})}{\it Given maps $f_1, f_2:M\rightarrow
G$, one has $f_1^{\ast}\,\omega =f_2^{\ast}\,\omega$ (that is, the
pull-back is invariant) if and only if $f_1 = L_B\circ f_2$ for
some fixed $B\in G$.}\\
By Corollary 2.3, in view of Cartan's theorem, the relation
$f_1^{\ast}\,\omega =f_2^{\ast}\,\omega$ provides the invariant
functions. In fact, these functions, which we call invariants,
remain unchanged for maps $f_1$ and $f_2$ under the pull-back of
the Maurer-Cartan form $\omega$ whenever $f_1 = L_B\circ f_2$ for
some fixed $B\in G$.
\section{Galilean Transformation Group}
Let ${\Bbb R}^3\times{\Bbb R}$ be the standard Galilean space--time. A
Galilean transformation is a transformation of ${\Bbb
R}^3\times{\Bbb R}$ defined as follows:
\paragraph{Definition 3.1} A map $\phi:{\Bbb R}^3\times{\Bbb R}\rightarrow{\Bbb R}^3\times{\Bbb
R}$ with the following definition
\begin{eqnarray*}
\left(
\begin{array}{c} {\bf X}\\ t
\end{array}\right)\mapsto
\left(
\begin{array}{cc}
{\bf R}& {\bf v}\\ {\bf 0}^T & 1
\end{array}\right)\,\left(
\begin{array}{c} {\bf X}\\ t
\end{array}\right)+\left(
\begin{array}{c} {\bf y}\\ s
\end{array}\right)
\end{eqnarray*}
is called a {\it Galilean transformation}, where ${\bf R}\in{\rm
O}(3,{\Bbb R})$, $s\in{\Bbb R}$, and ${\bf y},{\bf v}\in{\Bbb
R}^3$.\\
The set of Galilean transformations is a 10-dimensional group
\cite{Iv-La}. We call this group the {\it Galilean transformation
group} or, in brief, the {\it Galilean group}, and denote it by ${\rm
Gal}(4,{\Bbb R})$. We can also identify this group with the
following matrix group
\begin{eqnarray}
{\rm Gal}(4,{\Bbb R})=\left\{ \left(\begin{array}{ccc} 1 & {\bf 0} & s \\ {\bf v} & {\bf R} & {\bf y} \\
0 & {\bf 0} & 1
\end{array}\right)\;\Big|\; {\bf R}\in{\rm
O}(3,{\Bbb R}),\; s\in{\Bbb R},\; \mbox{and}\;\; {\bf y},{\bf
v}\in{\Bbb R}^3 \;\right\},\label{eq1}
\end{eqnarray}
which, under matrix multiplication, is a 10-dimensional group.
The Galilean group is a subgroup of the affine transformation group
$A(5,{\Bbb R})$, and hence a subgroup of ${\rm GL}(5,{\Bbb R})$. It
also carries a smooth structure and is therefore a smooth manifold.
Hence, with the smooth operation of matrix multiplication, it is a
Lie group with Lie algebra ${\goth gal}(4,{\Bbb R})$. From this
representation, we can compute its Maurer--Cartan forms, which provide
a basis for the Lie algebra ${{\goth gal}}(4,{\Bbb R})$.
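Representation (\ref{eq1}) can be checked numerically; the following sketch (our own illustration with made-up helper names, not from the paper) builds two elements of ${\rm Gal}(4,{\Bbb R})$ as $5\times5$ matrices and verifies that products and inverses keep the same block form:

```python
import numpy as np

def gal(R, v, y, s):
    """5x5 matrix of representation (eq. 1): block rows/cols ordered (t, X, 1)."""
    M = np.eye(5)
    M[0, 4] = s          # time translation
    M[1:4, 0] = v        # velocity (boost)
    M[1:4, 1:4] = R      # spatial rotation / reflection
    M[1:4, 4] = y        # spatial translation
    return M

def is_gal(M, tol=1e-9):
    """Check the block form of representation (eq. 1) with an orthogonal middle block."""
    R = M[1:4, 1:4]
    return (np.isclose(M[0, 0], 1) and np.allclose(M[0, 1:4], 0)
            and np.allclose(M[4, 0:4], 0) and np.isclose(M[4, 4], 1)
            and np.allclose(R.T @ R, np.eye(3), atol=tol))

th = 0.3
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1.0]])
A = gal(Rz, np.array([1.0, 0, 0]), np.array([0, 2.0, 0]), 0.5)     # rotation + boost
B = gal(np.eye(3), np.array([0, -1.0, 1.0]), np.array([3.0, 0, 0]), -1.0)

assert is_gal(A) and is_gal(B)
assert is_gal(A @ B)               # closed under composition
assert is_gal(np.linalg.inv(A))    # closed under inversion
```

Adding a `np.linalg.det(R) > 0` condition to `is_gal` would test membership in ${\rm SGal}(4,{\Bbb R})$ instead.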
\paragraph{Definition 3.2} An element of the Galilean transformation
group is called a {\it special Galilean transformation} if, in
representation (\ref{eq1}), ${\bf R}$ lies in ${\rm SO}(3,{\Bbb
R})$. The group of all special Galilean transformations is called
the {\it special Galilean transformation group} (or special Galilean
group, in brief) and is denoted by ${\rm SGal}(4,{\Bbb R})$. So we
have
\begin{eqnarray*}
{\rm SGal}(4,{\Bbb R})=\left\{ \left(\begin{array}{ccc} 1 & {\bf 0} & s \\ {\bf v} & {\bf R} & {\bf y} \\
0 & {\bf 0} & 1
\end{array}\right)\;\Big|\; {\bf R}\in{\rm
SO}(3,{\Bbb R}),\; s\in{\Bbb R},\; \mbox{and}\;\; {\bf y},{\bf
v}\in{\Bbb R}^3 \;\right\}.
\end{eqnarray*}
${\rm SGal}(4,{\Bbb R})$ is a connected component of ${\rm
Gal}(4,{\Bbb R})$, and a Lie group with Lie algebra ${\goth s\goth
g\goth a\goth l}(4,{\Bbb R})$. In the next section, we use the
special Galilean group to classify space--time curves; as for the
Galilean group, the Maurer--Cartan form of the special Galilean
group will be computed, and it provides a basis for the Lie
algebra ${\goth s\goth g\goth a\goth l}(4,{\Bbb R})$.
\section{Classification of Space--time Curves}
Let $c:[a,b]\rightarrow\Bbb{R}\times\Bbb{R}^3$ be a
curve defined by
$$c(t):=(t,{\bf X}(t))=(t,x_1(t),x_2(t),x_3(t)),$$
in which the space coordinate ${\bf X}$ is a smooth
vector-valued function with values in $\Bbb{R}^3$, and the $x_i$, for
$i=1,2,3$, are smooth scalar functions.\par
\paragraph{Definition 4.1} By an {\sl ST--curve}, we mean a curve of class ${\cal C}^5$
in the four-dimensional space--time ${\Bbb R}\times{\Bbb R}^3$
that has no singular point, i.e. $\det({\bf X}',{\bf X}'',{\bf
X}''')={\bf X}'\cdot({\bf X}''\times{\bf X}''')\neq0$. We may
assume that this value is positive.\\
If $c(t)=(t,{\bf X}(t))$ is an ST--curve, then ${\bf X}'(t)\neq 0$
for every $t\in[a,b]$, so the curve ${\bf X}:t\mapsto{\bf X}(t)$ is regular
and can be reparameterized by the arc length parameter $s$, so that
$||{\bf X}'(s)||=1$ at every point $s$.
%
\paragraph{Definition 4.2} We call $c(t)=(t,{\bf X}(t))$ {\sl regular} if the curve ${\bf X}(t)$ is regular. Also,
we say that the parameter of $c$ is an {\sl arc length parameter} if it
is an arc length parameter of ${\bf X}$.\\
The Galilean transformation group acts on an ST--curve at
each point of the domain once we identify $\Bbb{R}^3\times\Bbb{R}$
with
\begin{eqnarray*}
{\Bbb R}^5=\left\{\left(
\begin{array}{c}
t\\ {\bf X}\\1
\end{array}\right)\;\;\Big|\;\;t\in\Bbb{R},\,{\bf X}\in\Bbb{R}^3\right\},
\end{eqnarray*}
hence the action is defined. Accordingly, we say that {\sl two
ST--curves are equivalent if their representations in ${\Bbb
R}^5$ are Galilean equivalent}.
\paragraph{Convention 4.3} Henceforth, we regard the image of an
ST--curve $c$ as lying in ${\Bbb R}^5$ as above. \par
Instead of $c$, we may consider a new curve $\alpha_c:[a,b]\rightarrow{\rm
Gal}(4,{\Bbb R})$ of the following form:
\begin{eqnarray*}
\alpha_c(t)\!:=\!\left(\!\! \begin{array}{ccccc} 1 & 0 & 0 & 0 & t \\[2mm]
{\bf X}' & \displaystyle{\frac{{\bf X}''}{||{\bf X}''||}} &
\displaystyle{\frac{{\bf X}''\times {\bf X}'''}{||{\bf X}''\times
{\bf X}'''||}} & \displaystyle{\frac{{\bf X}''\times({\bf
X}''\times {\bf X}''')}{||{\bf X}''\times({\bf X}''\times {\bf
X}''')||}} & {\bf X}
\\[4mm]
0 & 0 & 0 & 0 & 1
\end{array} \!\!\right)
\end{eqnarray*}
where ${\bf X}$ is regarded as a column matrix and $||\;||$ is the
Euclidean norm. Obviously, for every time $t\in[a,b]$,
$\alpha_c(t)$ is an element of ${\rm Gal}(4,{\Bbb R})$, so
$\alpha_c$ is well-defined.\par We can study $\alpha_c$ instead of
$c$, up to the action of the Galilean transformation group, as
follows:\par
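To make the definition concrete, here is a small numerical sketch (our own example; the helix and helper names are not from the paper) that builds $\alpha_c$ for the ST--curve $c(t)=(t,\cos t,\sin t,t)$ and checks that the middle $3\times3$ block is an orthonormal frame, so that $\alpha_c(t)$ indeed lies in ${\rm Gal}(4,{\Bbb R})$:

```python
import numpy as np

def helix_jets(t):
    """X, X', X'', X''' for the sample space curve X(t) = (cos t, sin t, t)."""
    c, s = np.cos(t), np.sin(t)
    return (np.array([c, s, t]), np.array([-s, c, 1.0]),
            np.array([-c, -s, 0.0]), np.array([s, -c, 0.0]))

def alpha(t):
    """alpha_c(t) built from the jets, as a 5x5 matrix."""
    X, X1, X2, X3 = helix_jets(t)
    e1 = X2 / np.linalg.norm(X2)
    b = np.cross(X2, X3); e2 = b / np.linalg.norm(b)
    f = np.cross(X2, b);  e3 = f / np.linalg.norm(f)
    M = np.eye(5)
    M[0, 4] = t
    M[1:4, 0] = X1
    M[1:4, 1:4] = np.column_stack([e1, e2, e3])
    M[1:4, 4] = X
    return M

X, X1, X2, X3 = helix_jets(0.7)
assert abs(X1 @ np.cross(X2, X3)) > 1e-9       # ST-curve condition holds
M = alpha(0.7)
R = M[1:4, 1:4]
assert np.allclose(R.T @ R, np.eye(3))         # orthonormal frame: M is in Gal(4, R)
```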
Let $c,\bar{c}:[a,b]\rightarrow{\Bbb R}^5$ be two ST--curves
defined by $t\mapsto(t,{\bf X}(t),1)$ and
$\bar{t}\mapsto(\bar{t},\bar{{\bf X}}(\,\bar{t}\,),1)$,
respectively. If $c$ is equivalent to $\bar{c}$ up to ${\rm
Gal}(4,{\Bbb R})$, that is, $\bar{c}=A\circ c$ for some $A\in{\rm
Gal}(4,{\Bbb R})$, then we have
\begin{eqnarray*}
\left(\begin{array}{c} \bar{t} \\ \bar{{\bf X}}\\ 1
\end{array}\right)= A \cdot\left(\begin{array}{c} t \\ {\bf X}\\ 1
\end{array}\right)=
\left(\begin{array}{ccc} 1 & {\bf 0} & s \\ {\bf v} & {\bf R} & {\bf y} \\
0 & {\bf 0} & 1
\end{array}\right)
\cdot\left(\begin{array}{c} t \\ {\bf X}\\ 1
\end{array}\right)
\end{eqnarray*}
hence we conclude that
\begin{eqnarray}
\bar{t}=t+s, \;\;\mbox{and}\;\; \bar{{\bf X}}={\bf R}\cdot{\bf
X}+t{\bf v}+{\bf y}\label{*}
\end{eqnarray}
The first, second, and third derivatives of (\ref{*}) are of the
following form
\begin{eqnarray*}
\bar{{\bf X}}'&=&{\bf R}\cdot{\bf X}'+{\bf v}\\
\bar{{\bf X}}''&=&{\bf R}\cdot{\bf X}''\\
\bar{{\bf X}}'''&=&{\bf R}\cdot{\bf X}'''
\end{eqnarray*}
From the above relations we have
\begin{eqnarray*}
\alpha_{\bar{c}}\! &=&
\!\left(\!\! \begin{array}{ccccc} 1 & 0 & 0 & 0 & \bar{t} \\[2mm]
\bar{{\bf X}}' & \displaystyle{\frac{\bar{{\bf X}}''}{||\bar{{\bf
X}}''||}} & \displaystyle{\frac{\bar{{\bf X}}''\times\bar{{\bf
X}}'''}{||\bar{{\bf X}}''\times\bar{{\bf X}}'''||}} &
\displaystyle{\frac{\bar{{\bf X}}''\times(\bar{{\bf X}}''\times
\bar{{\bf X}}''')}{||\bar{{\bf X}}''\times(\bar{{\bf X}}''\times
\bar{{\bf X}}''')||}} & \bar{{\bf X}}
\\[4mm]
0 & 0 & 0 & 0 & 1
\end{array} \!\!\right)\hspace{0.8cm}\\
&=& {\tiny\hspace{-0.125cm}\left(\hspace{-0.25cm}
\begin{array}{ccccc} 1 & 0 & 0 & 0 & t+s \\[2mm]
{\bf R}\cdot{\bf X}' \!\!&\!\! \displaystyle{\frac{{\bf
R}\!\cdot\!{\bf X}''}{||{\bf R}\!\cdot\!{\bf X}''||}} \!\!&\!\!
\displaystyle{\frac{{\bf R}\!\cdot\!{\bf X}''\!\!\times\!\! {\bf
R}\!\cdot\!{\bf X}'''}{||{\bf R}\!\cdot\!{\bf X}''\!\!\times\!\!
{\bf R}\!\cdot\!{\bf X}'''||}} \!\! &\!\! \displaystyle{\frac{{\bf
R}\!\cdot\!{\bf X}''\!\!\times\!\!({\bf R}\!\cdot\!{\bf
X}''\!\!\times \!\!{\bf R}\!\cdot\!{\bf X}''')}{||{\bf
R}\!\cdot\!{\bf X}''\!\!\times\!\!({\bf R}\!\cdot\!{\bf
X}''\!\!\times\!\! {\bf R}\!\cdot\!{\bf X}''')||}} \!\! & \!\!
{\bf R}\!\cdot\!{\bf X}+ t\!\cdot\!{\bf v}+{\bf y}
\\[4mm]
0 & 0 & 0 & 0 & 1
\end{array} \hspace{-0.25cm}\right)}\\
&=^*& \left(\begin{array}{ccc} 1 & {\bf 0} & s \\ {\bf v} & {\bf R} & {\bf y} \\
0 & {\bf 0} & 1
\end{array}\right)\cdot\\
&&\hspace{0.8cm}\left(\!\! \begin{array}{ccccc} 1 & 0 & 0 & 0 & t \\[2mm]
{\bf X}' & \displaystyle{\frac{{\bf X}''}{||{\bf X}''||}} &
\displaystyle{\frac{{\bf X}''\times {\bf X}'''}{||{\bf X}''\times
{\bf X}'''||}} & \displaystyle{\frac{{\bf X}''\times({\bf
X}''\times {\bf X}''')}{||{\bf X}''\times({\bf X}''\times {\bf
X}''')||}} & {\bf X}
\\[4mm]
0 & 0 & 0 & 0 & 1
\end{array} \!\!\right)\\
&=& A\cdot\alpha_c,
\end{eqnarray*}
where the equality $=^*$ follows from the fact that, for all
vectors ${\bf X}$ and ${\bf Y}$ in ${\Bbb R}^3$ and any element
${\bf R}\in{\rm SO}(3,{\Bbb R})$, we have ${\bf R}\cdot({\bf
X}\times {\bf Y})=({\bf R}\cdot{\bf X})\times({\bf R}\cdot{\bf Y})$
and $||{\bf R}\cdot{\bf X}||=||{\bf X}||$. So we have
\paragraph{Theorem 4.4} {\it Two ST--curves $c,\bar{c}:[a,b]\rightarrow{\Bbb
R}^5$ are equivalent up to $A\in{\rm SGal}(4,\Bbb R)$, that is,
$\bar{c}=A\circ c$, if and only if the associated curves
$\alpha_c$ and $\alpha_{\bar{c}}$ are equivalent up to $A$, that is,
$\alpha_{\bar{c}}=A\circ \alpha_c$.}\\
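The equivariance $\alpha_{\bar{c}}=A\cdot\alpha_c$ can also be verified numerically. The sketch below (illustrative names; the sample curve and transformation are our own) transforms the jets of a curve by an explicit special Galilean transformation and compares the two matrices:

```python
import numpy as np

def frame_matrix(t, X, X1, X2, X3):
    """alpha at parameter t, built from the jets X, X', X'', X''' of a space curve."""
    e1 = X2 / np.linalg.norm(X2)
    b = np.cross(X2, X3); e2 = b / np.linalg.norm(b)
    f = np.cross(X2, b);  e3 = f / np.linalg.norm(f)
    M = np.eye(5)
    M[0, 4] = t
    M[1:4, 0] = X1
    M[1:4, 1:4] = np.column_stack([e1, e2, e3])
    M[1:4, 4] = X
    return M

# jets of the sample curve X(t) = (cos t, sin t, t) at t = 0.7
t = 0.7
c_, s_ = np.cos(t), np.sin(t)
X  = np.array([c_, s_, t]);     X1 = np.array([-s_, c_, 1.0])
X2 = np.array([-c_, -s_, 0.0]); X3 = np.array([s_, -c_, 0.0])

# an explicit special Galilean transformation A (R in SO(3))
th = 0.4
R = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1.0]])
v = np.array([0.3, -1.0, 2.0]); y = np.array([1.0, 0.0, 0.5]); s = 1.2
A = np.eye(5); A[0, 4] = s; A[1:4, 0] = v; A[1:4, 1:4] = R; A[1:4, 4] = y

# transformed jets: Xbar = R X + t v + y, Xbar' = R X' + v, Xbar'' = R X'', Xbar''' = R X'''
alpha_c    = frame_matrix(t, X, X1, X2, X3)
alpha_cbar = frame_matrix(t + s, R @ X + t * v + y, R @ X1 + v, R @ X2, R @ X3)
assert np.allclose(alpha_cbar, A @ alpha_c)    # equivariance: alpha_{A c} = A . alpha_c
```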
Hence our choice of working with $\alpha_c$ instead of $c$ does not
lose information, and it has the benefit that we can apply Cartan's
theorem to $\alpha_c$ and then find its invariants. By Theorem 4.4,
these invariants are invariants of $c$.
Henceforth, we classify the new curves $\alpha_c$ up to ${\rm SGal}(4,\Bbb R)$.\\
From Cartan's theorem, a necessary and sufficient condition for
$\alpha_{\bar{C}}=B\circ\alpha_C=L_B\circ\alpha_C$ with $B\in {\rm
SGal}(4,{\Bbb R})$ is that
$\alpha_{\bar{C}}^{\ast}(\omega^i)=\alpha_{C}^{\ast}(\omega^i)$
for every left invariant 1-form $\omega^i$ on ${\rm SGal}(4,{\Bbb R})$;
this is equivalent to
$\alpha_{\bar{C}}^{\ast}(\omega)=\alpha_{C}^{\ast}(\omega)$ for the
natural ${\goth s\goth g\goth a\goth l}(4,{\Bbb R})$-valued
Maurer--Cartan form $\omega=P^{-1}\cdot dP$, where $P$ denotes a group element.\par
Thus, we must compute $\alpha_C^{\ast}(P^{-1}\cdot dP)$,
which is invariant under special Galilean transformations; that
is, its entries are invariant functions of ST--curves. This
$5\times 5$ matrix of 1-forms has entries that are coefficients
of $dt$.\par
Since
$\alpha_C^{\ast}(P^{-1}\,\cdot\,dP)=\alpha_C^{-1}\,\cdot\,d\alpha_C$,
to find the invariants it suffices to calculate the
matrix $\alpha_C^{-1}\cdot d\alpha_C$. Differentiating $\alpha_c$
entrywise, we have
\begin{eqnarray*}
d\alpha_c(t)=
\left(\begin{array}{ccccc} 0 & 0 & 0 & 0 & 1 \\[2mm]
{\bf X}'' & A_1 & A_2 & A_3 & {\bf X'}\\[4mm]
0 & 0 & 0 & 0 & 0
\end{array}\right)dt
\end{eqnarray*}
where we suppose that {\scriptsize\begin{eqnarray*} A_1
\!\!\!&=&\!\!\!\frac{{\bf X}'''||{\bf X}''||^2-{\bf X}''({\bf
X}''\!\cdot\!{\bf X}''')}{||{\bf X}''||^3},\\
A_2 \!\!\!&=&\!\!\! \frac{({\bf X}''\!\!\times\!\! {\bf
X}'''')||{\bf X}''\!\!\times\!\! {\bf X}'''||^2-\{({\bf
X}''\!\!\times\!\! {\bf X}''')\!\cdot\!({\bf X}''\!\!\times\!\!
{\bf X}'''')\}({\bf X}''\!\!\times\!\! {\bf X}''')}{||{\bf
X}''\!\!\times \!\!{\bf
X}'''||^3}, \\
A_3 \!\!\!&=&\! \!\!\frac{{\bf X}''\!\!\times\!\!({\bf
X}''\!\!\times\!\! {\bf X}'''')||{\bf X}''\!\!\times\!\!({\bf
X}''\!\!\times\!\! {\bf X}''')||^2\!}{||{\bf
X}''\!\!\times\!\!({\bf X}''\!\!\times\!\!
{\bf X}''')||^3}\\
&& -\frac{\!\{({\bf X}''\!\!\times\!\!({\bf X}''\!\!\times\!\!
{\bf X}''')\!\cdot\!({\bf X}''\!\!\times\!\!({\bf
X}''\!\!\times\!\! {\bf X}''''))\}({\bf X}''\!\!\times\!\!({\bf
X}''\!\!\times\!\! {\bf X}'''))}{||{\bf X}''\!\!\times\!\!({\bf
X}''\!\!\times\!\! {\bf X}''')||^3}.
\end{eqnarray*}}
Here we assumed that ${\bf X}$ is the column vector $(x_1 \;\; x_2 \;\;
x_3)^T$. Since $\det\alpha_c=1$, the inverse matrix
$\alpha_c(t)^{-1}$ has the form of the following
matrix
{\tiny
\begin{eqnarray*}
\left( \begin{array}{ccc} 1 & {\bf 0}^T & -t \\[2mm]
-\displaystyle{\frac{{\bf X}'\cdot{\bf X}''}{||{\bf X}''||}}
& \Big\{\displaystyle{\frac{{\bf X}''}{||{\bf X}''||}}\Big\}^T
& \displaystyle{\frac{t\,({\bf X}'\cdot{\bf X}'')-{\bf X}\cdot{\bf X}''}{||{\bf X}''||}} \\[4mm]
-\displaystyle{\frac{{\bf X}'\cdot({\bf X}''\times{\bf X}''')}{||{\bf X}''\times{\bf X}'''||}}
& \Big\{\displaystyle{\frac{{\bf X}''\times{\bf X}'''}{||{\bf X}''\times{\bf X}'''||}}\Big\}^T
& \displaystyle{\frac{t\,\big({\bf X}'\cdot({\bf X}''\times{\bf X}''')\big)-{\bf X}\cdot({\bf X}''\times{\bf X}''')}{||{\bf X}''\times{\bf X}'''||}} \\[4mm]
-\displaystyle{\frac{({\bf X}'\times{\bf X}'')\cdot({\bf X}''\times{\bf X}''')}{A}}
& \Big\{\displaystyle{\frac{{\bf X}''\times({\bf X}''\times{\bf X}''')}{A}}\Big\}^T
& \displaystyle{\frac{t\,({\bf X}'\times{\bf X}'')\cdot({\bf X}''\times{\bf X}''')-({\bf X}\times{\bf X}'')\cdot({\bf X}''\times{\bf X}''')}{A}} \\[4mm]
0 & {\bf 0}^T & 1\end{array} \right),
\end{eqnarray*}}
where $A=||{\bf X}''\times({\bf X}''\times{\bf X}''')||=||{\bf
X}''||\,||{\bf X}''\times{\bf X}'''||$, and the superscript $^T$
denotes the transpose of a column matrix into a row matrix.\par
After some straightforward computations, we find $\alpha_c^{-1}\cdot
d\alpha_c$ as the product of the following matrix with $dt$:
\begin{eqnarray*}
\!\!\!\!\left(\!\!\!\!\!\begin{array}{ccccc}
0 & 0 & 0 & 0 & 1 \\[4mm]
||{\bf X}''|| & 0 & 0 & 0 & 0\\[4mm]
0 & 0 & 0 & \displaystyle{\frac{{\bf B}\cdot({\bf X}''\times{\bf
C})}{A\;||{\bf
X}''\times{\bf X}'''||}} & 0 \\[4mm]
\!\!\!\!\!\! 0 \!\!\!\!\!\!&\!\!\!\!\!
\displaystyle{\frac{A\;({\bf X}''\times{\bf B})\cdot{\bf X}'''}
{||{\bf X}''||^3||{\bf X}''\times{\bf X}'''||^2}} &
\displaystyle{\frac{A\;({\bf X}''\times{\bf B})\cdot{\bf C}}
{||{\bf X}''||^2||{\bf X}''\times{\bf X}'''||^3}}
& 0 & 0 \\[4mm]
0 & 0 & 0 & 0 & 0
\end{array}\!\!\right)
\end{eqnarray*}
where ${\bf B}={\bf X}''\times{\bf X}'''$ and ${\bf C}={\bf
X}''\times{\bf X}''''$. We may assume that the parameter of the
ST--curve is the natural parameter, the arc length $s$;
henceforth we work under this assumption. Thus
$||{\bf X}'||=1$, and so
\begin{eqnarray}
{\bf X}'\cdot{\bf X}''=0.\label{eq2}
\end{eqnarray}
Also if we assume that
\begin{eqnarray}
&&||{\bf X}''||=\mbox{constant},\label{eq3}\\
&&||{\bf X}'''||=\mbox{constant},\label{eq4}
\end{eqnarray}
Then, using (\ref{eq2}) and (\ref{eq3}), we have ${\bf
X}'\cdot{\bf X}'''=-||{\bf X}''||^2=$ constant. Differentiating
(\ref{eq3}) and (\ref{eq4}) with respect to $s$ gives
${\bf X}''\cdot{\bf X}'''=0$ and ${\bf X}'''\cdot{\bf X}''''=0$,
respectively. Since $||{\bf X}''\times{\bf X}'''||^2=||{\bf
X}''||^2||{\bf X}'''||^2-({\bf X}''\cdot{\bf X}''')^2$, we obtain
\begin{eqnarray}
&&||{\bf X}''\times{\bf X}'''||=\mbox{constant},\label{eq5}\\
&&{\bf X}''\cdot{\bf X}''''=-||{\bf
X}'''||^2=\mbox{constant}.\label{eq6}
\end{eqnarray}
From (\ref{eq3}) and (\ref{eq5}), we conclude that $A=||{\bf
X}''\times({\bf X}''\times{\bf X}''')||$ is also constant.\par
Since ${\bf X}''$ is perpendicular to ${\bf X}'''$, we have ${\bf
X}''\times({\bf X}''\times{\bf X}''')=-||{\bf X}''||^2\,{\bf X}'''$,
and then we find that
\begin{eqnarray}
({\bf X}''\times{\bf B})\cdot{\bf X}''' &=&-||{\bf X}''||^2\,||{\bf X}'''||^2\nonumber \\
&=& \mbox{constant},\nonumber \\
({\bf X}''\times{\bf B})\cdot{\bf C} &=& -\,{\bf B}\cdot({\bf X}''\times{\bf C})\nonumber \\
&=&||{\bf X}''||^2\;{\bf X}''\cdot({\bf X}'''\times{\bf X}'''')\nonumber \\
&=&||{\bf X}''||^2\sqrt{\det({\bf X}^{(i)}\cdot{\bf
X}^{(j)})_{2\leq i,j\leq 4}} \label{eq7} \\
&=&||{\bf X}''||^2\,||{\bf X}'''||\sqrt{||{\bf X}''||^2\,||{\bf
X}''''||^2-||{\bf X}'''||^4}\nonumber \\
&=&\mbox{constant},\label{eq8}
\end{eqnarray}
where relation (\ref{eq7}) comes from the fact that for all
vectors ${\bf X_1}$, ${\bf X_2}$, and ${\bf X_3}$ in ${\Bbb R}^3$
we have $\{{\bf X_1}\cdot({\bf X_2}\times{\bf X_3})\}^2=\det({\bf
X}_i\cdot{\bf X}_j)$. Equation (\ref{eq8}) follows by assuming
that ${\bf X}''\cdot({\bf X}'''\times{\bf X}'''')$ is positive
(one can likewise consider the negative case); since the lengths
$||{\bf X}''||$, $||{\bf X}'''||$, and $||{\bf X}''''||$ do not
vanish, ${\bf X}''\cdot({\bf X}'''\times{\bf X}'''')$ is not zero.
Finally, we have
\begin{eqnarray*}
\alpha_c^{-1}(s)\cdot d\alpha_c(s)= \hspace{8cm}\\[2mm]
\!\!\!\!\!\!\!\left(\!\!\!\!\!\begin{array}{ccccc}
0 & 0 & 0 & 0 & 1 \\[4mm]
||{\bf X}''||& 0 & 0 & 0 & 0\\[4mm]
0 & 0 & 0 \!\!&\!\! \displaystyle{\frac{{\bf B}\cdot({\bf
X}''\times{\bf C})}{A\;||{\bf
X}''\times{\bf X}'''||}} & 0 \\[4mm]
\!\!\!\!\!\! 0 \!\!\!\!\!\!&\!\!\!\!\!
\displaystyle{\frac{-A\;||{\bf X}''||^2\,||{\bf X}'''||^2} {||{\bf X}''||^3||{\bf
X}''\times{\bf X}'''||^2}} \!\!&\!\! \displaystyle{\frac{A\;({\bf
X}''\times{\bf B})\cdot{\bf C}} {||{\bf X}''||^2||{\bf
X}''\times{\bf X}'''||^3}}
\!\!\!\!&\!\!\!\! 0 & 0 \\[4mm]
0 & 0 & 0 & 0 & 0
\end{array}\!\!\right)ds.
\end{eqnarray*}
Thus, using the arc length $s$ as parameter, and assuming that
the second and third derivatives of ${\bf X}$ (the space
coordinate of the curve $c(t)=(t,{\bf X})$) have constant
lengths, all the entries of the matrix in
$\alpha_c^{-1}\cdot d\alpha_c$ are invariant. As explained above,
although these invariants were calculated for the curve
$\alpha_c$, they are in fact invariants of the curve $c$.
We can summarize the above results in the following theorem:
\paragraph{Theorem 4.5} {\it Let $c:[a,b]\rightarrow\Bbb{R}\times\Bbb{R}^3$ be an ST--curve
defined by $c(t):=(t,{\bf X}(t))$. Then $\omega_1=||{\bf X}''||$ and $\omega_2=||{\bf X}'''||$ are
differential invariants of $c$ up to the special Galilean group ${\rm SGal}(4,{\Bbb R})$.
In general, every other differential invariant of $c$ is
functionally dependent on $\omega_1$, $\omega_2$, and their
derivatives with respect to the parameter.}
%
\paragraph{Remark 4.6} If we consider an ST--curve, the dimension of
its image is 1, while the dimension of $\Bbb{R}\times\Bbb{R}^3$ is
4; hence the number of invariants must be 3. Besides the
two fundamental invariants $\omega_1$ and $\omega_2$, one can add
another invariant, for instance $\omega_3:=||{\bf X}''\times{\bf
X}'''||$, to complete the set of essential invariants.
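Since $\bar{{\bf X}}''={\bf R}\cdot{\bf X}''$ and $\bar{{\bf X}}'''={\bf R}\cdot{\bf X}'''$ under (\ref{*}), the invariance of $\omega_1$, $\omega_2$, and $\omega_3$ reduces to the fact that rotations preserve norms and cross products. A minimal numerical check (our own illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X2, X3 = rng.normal(size=3), rng.normal(size=3)   # stand-ins for the jets X'', X'''
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q             # a rotation in SO(3)

w1, w2 = np.linalg.norm(X2), np.linalg.norm(X3)
w3 = np.linalg.norm(np.cross(X2, X3))
assert np.isclose(np.linalg.norm(R @ X2), w1)                     # omega_1 invariant
assert np.isclose(np.linalg.norm(R @ X3), w2)                     # omega_2 invariant
assert np.isclose(np.linalg.norm(np.cross(R @ X2, R @ X3)), w3)   # omega_3 invariant
```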
\paragraph{Theorem 4.7} {\it Let $c,
\overline{c}:[a,b]\rightarrow\Bbb{R}\times\Bbb{R}^3$ be two
ST--curves. Then $c$ and $\overline{c}$ are locally equivalent up to
the special Galilean group ${\rm SGal}(4,{\Bbb R})$ if and only if
$\omega_1=\overline{\omega}_1$ and
$\omega_2=\overline{\omega}_2$.}\\
\noindent{{\it Proof:}} We have already proved that two curves which
are locally equivalent up to a special Galilean transformation have
the same differential invariants. We now prove the converse. Let $c$ and
$\overline{c}$ be two ST--curves on $[a,b]$ with representations
$(t,{\bf X})$ and $(\overline{t},\overline{{\bf X}})$, respectively.
Assuming $\omega_1=\overline{\omega}_1$ and
$\omega_2=\overline{\omega}_2$, we show that there is a special
Galilean transformation $A\in{\rm SGal}(4,{\Bbb R})$ such that
$c$ and $\overline{c}$, in the sense of Convention 4.3, are
special Galilean equivalent.\par
Suppose the images of $c$ and $\overline{c}$ lie in ${\Bbb
R}^5$ by the convention. There is an element of ${\rm
SGal}(4,{\Bbb R})$ that transforms a point of $c$ to a point of
$\overline{c}$: if we consider arbitrary points $(t_0,{\bf
X}_0,1)$ of $c$ and $(\overline{t}_0,\overline{{\bf X}}_0,1)$ of
$\overline{c}$, then there are ${\bf R}\in{\rm SO}(3,{\Bbb
R})$ and ${\bf y}\in{\Bbb R}^3$ such that $\overline{{\bf
X}}_0={\bf R}\cdot{\bf X}_0+{\bf y}$. Thus, the following
element of ${\rm SGal}(4,{\Bbb R})$ transforms the first
point to the second:
\begin{eqnarray*}
\left(\begin{array}{ccc} 1 & {\bf 0} & \overline{t}_0-t_0 \\ {\bf 0} & {\bf R} & {\bf y} \\
0 & {\bf 0} & 1
\end{array}\right).
\end{eqnarray*}
So we may assume that $c_1:=(t,{\bf X}_1,1)$ is a special
Galilean transform of $c$ that meets $\overline{c}$ at
time $t_0\in[a,b]$, that is, $c_1(t_0)=\overline{c}(t_0)$. Let
this special Galilean transformation have the following form:
\begin{eqnarray*}
\left(\begin{array}{ccc} 1 & {\bf 0} & s_1 \\ {\bf 0} & {\rm Id}_3 & {\bf y}_1 \\
0 & {\bf 0} & 1
\end{array}\right).
\end{eqnarray*}
Then there are unique $\widehat{{\bf R}}\in{\rm SO}(3,{\Bbb
R})$ and $\widehat{{\bf y}}\in{\Bbb R}^3$ that transfer the
orthonormal frame
\begin{eqnarray*}
\left(\frac{{\bf X}_1''}{||{\bf X}_1''||}\,,\,\frac{{\bf
X}_1''\times{\bf X}_1'''}{||{\bf X}_1''\times{\bf
X}_1'''||}\,,\,\frac{{\bf X}_1''\times({\bf X}_1''\times{\bf
X}_1''')}{||{\bf X}_1''\times({\bf X}_1''\times{\bf X}_1''')||}
\right)(t_0)
\end{eqnarray*}
at $c_1(t_0)$, together with the tangent vector ${\bf X}_1 '(t_0)$, to the
orthonormal frame
\begin{eqnarray*}
\left(\frac{\overline{{\bf X}}''}{||\overline{{\bf
X}}''||}\,,\,\frac{\overline{{\bf X}}''\times\overline{{\bf
X}}'''}{||\overline{{\bf X}}''\times\overline{{\bf
X}}'''||}\,,\,\frac{\overline{{\bf X}}''\times(\overline{{\bf
X}}''\times\overline{{\bf X}}''')}{||\overline{{\bf
X}}''\times(\overline{{\bf X}}''\times\overline{{\bf X}}''')||}
\right)(t_0)
\end{eqnarray*}
at $\overline{c}(t_0)$, together with $\overline{{\bf X}}'(t_0)$,
respectively. Let $\widehat{c}:=(t,\widehat{{\bf
X}},1)$ be the curve obtained by the action of the following
matrix of ${\rm SGal}(4,{\Bbb R})$ on $c_1$:
\begin{eqnarray*}
\left(\begin{array}{ccc}
1 & {\bf 0} & 0 \\
{\bf 0} & \widehat{{\bf R}} & \widehat{{\bf y}} \\
0 & {\bf 0} & 1
\end{array}\right).
\end{eqnarray*}
We suppose that the parameters of $\overline{c}$ and $\widehat{c}$
are arc length parameters (in the sense of Definition 4.2). Now we
can replace the curves $\overline{c}$ and $\widehat{c}$ by their
corresponding curves $\alpha_{\overline{c}}$ and
$\alpha_{\widehat{c}}$, respectively. So if we prove that
$\alpha_{\overline{c}}=\alpha_{\widehat{c}}$, then we will have
$\overline{c}=\widehat{c}$. Moreover, we have
\begin{eqnarray*}
\alpha_{\widehat{c}}&=&\left(\begin{array}{ccc}
1 & {\bf 0} & 0 \\
{\bf 0} & \widehat{{\bf R}} & \widehat{{\bf y}} \\
0 & {\bf 0} & 1
\end{array}\right)
\left(\begin{array}{ccc}
1 & {\bf 0} & s_1 \\
{\bf 0} & {\rm Id}_3 & {\bf y}_1 \\
0 & {\bf 0} & 1
\end{array}\right)\alpha_c\\
&=&
\left(\begin{array}{ccc}
1 & {\bf 0} & s_1 \\
{\bf 0} & \widehat{{\bf R}} & \widehat{{\bf R}}\cdot{\bf y}_1+\widehat{{\bf y}} \\
0 & {\bf 0} & 1
\end{array}\right)\alpha_c,
\end{eqnarray*}
hence $\alpha_c$ and $\alpha_{\widehat{c}}$ are equivalent under an
element of ${\rm SGal}(4,{\Bbb R})$. Thereby, $\alpha_c$ and
$\alpha_{\overline{c}}$ will be equivalent, and by Theorem 4.4,
the proof will be complete. It remains to show that
$\alpha_{\overline{c}}=\alpha_{\widehat{c}}$. \par
For the curves $\overline{c}$ and $\widehat{c}$ we have the following
equations, respectively:
\begin{eqnarray*}
d\,\alpha_{\overline{c}}&=&\alpha_{\overline{c}}\,\cdot\,\overline{b}\\
d\,\alpha_{\widehat{c}}&=&\alpha_{\widehat{c}}\,\cdot\,\widehat{b},
\end{eqnarray*}
where $\overline{b},\widehat{b}\in{\goth s\goth g\goth a\goth
l}(4,{\Bbb R})$. By assumption, at every point of the domain we
have $\omega_1=\overline{\omega}_1$ and
$\omega_2=\overline{\omega}_2$. Furthermore, at each point of
$[a,b]$,
\begin{eqnarray*}
\widehat{\omega}_1&:=&||\widehat{{\bf X}}''||=||\widehat{{\bf
R}}\cdot{\bf X}''||=||{\bf X}''||=\omega_1\\
\widehat{\omega}_2&:=&||\widehat{{\bf X}}'''||=||\widehat{{\bf
R}}\cdot{\bf X}'''||=||{\bf X}'''||=\omega_2.
\end{eqnarray*}
So we have $\widehat{\omega}_1=\overline{\omega}_1$ and
$\widehat{\omega}_2=\overline{\omega}_2$. From the above
expressions we conclude that at all points of $[a,b]$,
$\overline{b}$ and $\widehat{b}$ coincide; call this common value $b$.
Now $\alpha_{\overline{c}}$ and $\alpha_{\widehat{c}}$
satisfy the first order equations
$d\,\alpha_{\overline{c}}=\alpha_{\overline{c}}\,\cdot\,b$ and
$d\,\alpha_{\widehat{c}}=\alpha_{\widehat{c}}\,\cdot\,b$,
respectively, with the initial condition
$\alpha_{\overline{c}}(t_0)=\alpha_{\widehat{c}}(t_0)$. Therefore,
we have $\alpha_{\overline{c}}(t)=\alpha_{\widehat{c}}(t)$ for all
$t \in[a,b]$, and the proof is complete. \hfill $\diamondsuit$
\paragraph{Corollary 4.8} {\it In the physical sense, we may regard
each ST--curve as the trace of a particle with mass $m$
under the influence of a force ${\bf F}$. By Theorem 4.7, we conclude
that:
\noindent{\bf 1.} Two particles with equal masses
$m=\widetilde{m}$, under the influence of forces ${\bf F}$ and
$\widetilde{{\bf F}}$, respectively, have the same trajectory if and only
if the norms of ${\bf F}$ and its derivative ${\bf F}'$ equal
the corresponding norms of $\widetilde{{\bf F}}$ and its
derivative $\widetilde{{\bf F}}'$.
\noindent{\bf 2.} In particular, suppose that two observers
${\cal O}$ and $\widetilde{{\cal O}}$ move with accelerations
${\bf a}$ and $\widetilde{{\bf a}}$, respectively, in an inertial
coordinate system. If we consider the paths of a particle (as
ST--curves) with mass $m$ with respect to the observers ${\cal O}$ and
$\widetilde{{\cal O}}$, under the effect of forces ${\bf F}$
and $\widetilde{{\bf F}}$, respectively, then the paths are equal under
a special Galilean transformation if and only if $||{\bf
F}||=||\widetilde{{\bf F}}||$ and $||{\bf F}'||=||\widetilde{{\bf
F}}'||$.}
\section*{Identifying Maximal Non-Redundant Integer Cone Generators}
% Source: https://arxiv.org/abs/1903.08571
\paragraph{Abstract} A non-redundant integer cone generator (NICG) of dimension $d$ is a set $S$ of vectors from $\{0,1\}^d$ whose vector sum cannot be generated as a positive integer linear combination of a proper subset of $S$. The largest possible cardinality of an NICG of dimension $d$, denoted by $N(d)$, provides an upper bound on the sparsity of systems of integer equations with a large number of integer variables. A better estimate of $N(d)$ means that we can consider smaller sub-systems of integer equations when solving systems with many integer variables. Furthermore, given that we can reduce constraints on set algebra expressions to constraints on cardinalities of Venn regions, a tighter upper bound on $N(d)$ yields a more efficient decision procedure for a logic of sets with cardinality constraints (BAPA), which has been applied in software verification. Previous attempts to compute $N(d)$ using SAT solvers have not succeeded even for $d=3$. The only known values were computed manually: $N(d)=d$ for $d < 4$ and $N(4) > 4$. We provide the first exact values for $d > 3$, namely, $N(4)=5$, $N(5)=7$, and $N(6)=9$, which is a significant improvement of the known asymptotic bound (which would give only e.g. $N(6) \le 29$, making a decision procedure impractical for $d=6$). We also give lower bounds for $N(7)$, $N(8)$, $N(9)$, and $N(10)$, which are: $11$, $13$, $14$, and $16$, respectively. We describe increasingly sophisticated specialized search algorithms that we used to explore the space of non-redundant generators and obtain these results.
\section{Conclusions}
{\m{\small QFBAPA}} has the small model property. We are interested in deriving the number $N(d)$,
defined as the smallest number such that the following property holds: if a formula has a solution, then
it also has a solution of size at most $N(d)$.
In this paper we computed the values $N(4)$, $N(5)$, and $N(6)$. We also significantly improved the known bounds for $N(7)$, $N(8)$, $N(9)$, and $N(10)$. These numbers determine the size of small models for {\m{\small QFBAPA}} formulas.
Our motivation was twofold: first, we obtained bounds that
improve the constant factors in the asymptotically best
algorithm for {\m{\small QFBAPA}}. Second, we provided another case study
in developing domain-specific algorithms for combinatorial
search. Although we found a domain-specific search algorithm
to be the most effective, the problem may prove to be fruitful ground
for future general-purpose constraint-solving techniques, such as
pseudo-Boolean and finite-domain solvers.
\begin{comment}
To solve this problem we started with a very simple approach. That approach was not efficient for computing
$N(5)$ value. Afterward we presented some optimizations and got reasonably fast method for computing $N(5)$.
Computing value $N(6)$ was still unfeasible. Among approaches we encounter, a few of them was not efficient.
However, considering only parts of those approaches and merging them we succeed to get even faster method
for computing $N(5)$, and finally a method for computing $N(6)$.
Randomized solution we presented did not give us a new result(s), but tell us more about nature of the problem.\\
We tried to improve our solution with third party libraries. However, it appears that methods we tried to replace
with third party libraries are very efficient for $d$ values we considered. Therefore, further improvements
should be focused on a way of choosing vectors used to generate the solution.
\section{Further work}
In the subsection \ref{subsec:isomorphic} we presented an idea that is used to detect isomorphic sets, those
that will not be used to generate the solution. However, we also explained the method is not considering all
isomorphic sets. With more efficient method for detecting isomorphic sets, we could improve running time of
the algorithm. We counted number of solutions given by the algorithm, and number of non-isomorphic solutions. We got
the following results, presented as ($d$, the algorithm, non-isomorphic): (3, 2, 2); (4, 13, 7); (5, 165, 27);
(6, 4984, 131). As we can see, for greater $d$, ratio $\frac{\rm{the\ algorithm\ solutions}}{non-isomorphic\ solutions}$
is greater, too.
We noticed that there does not exist such solution for $d = 6$ that contains singleton. It might be the
case for $d > 7$ as well. Having such result could significantly shrink amount of vectors used to
generate the solution, leading to more efficient algorithm.
We used the randomized algorithm that we got by the minimal modification of the deterministic algorithm. Such
algorithm gave as a lower bound. From the introduction and \cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean} it can be seen that
having tight upper bound is more important. Therefore, someone might try to come up with a randomized algorithm
that could give an upper bound.
\end{comment}
\section{Gaussian Elimination: N(6)=9}
In Section~\ref{sec:core} we described different approaches to the construction of $\rm{NICG}$ sets. Most of the approaches consider the $\rm{NICG}$ property only of
the currently calculated set. We also argued why the approach that tries to maintain as much information as possible is not very efficient. Knowing that, we decided
to merge ideas from both approaches and arrive at a more efficient algorithm.\\
As we have explained, maintaining too much information is not a good idea. However, maintaining some amount
of ``not $\rm{NICG}(Y)$'' information might reduce the search space, and thus improve the running time.
The property ``not $\rm{NICG}(Y)$'' allows us to avoid computing over every set $X$ such that $Y \subset X$.
For instance, consider the example $Y = \{(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), $ $(1, 1, 0, 0, 0)\}$.
$Y$ is not $\rm{NICG}$ because $\sum{Y} = 2\, (1, 1, 0, 0, 0)$. There are $\binom{28}{4} + \binom{28}{5} +
\binom{28}{6} + \binom{28}{7} + \binom{28}{8} = 4787640$ sets $X$ such that $|X| \leq 8$ and $Y \subset X$. Most of
those sets would be considered if we did not prune them using the fact that $Y$ is not $\rm{NICG}$. Since quite a lot
of sets are ruled out by a single such set, we decided to use information about not-$\rm{NICG}$ sets.\\
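The not-$\rm{NICG}$ test that drives this pruning can be sketched as a naive brute-force checker (our own reference code, not the paper's \proc{inIntConeTest} or the Gaussian-elimination method). Coefficients can be bounded by the largest component of the sum, since every vector is a nonzero $0/1$ vector:

```python
from itertools import combinations, product

def is_nicg(S):
    """True iff sum(S) is NOT a nonnegative-integer combination of a proper
    subset of S. Coefficients are bounded by the largest component of the sum,
    which is safe because every vector in S is a nonzero 0/1 vector."""
    d = len(S[0])
    target = [sum(v[i] for v in S) for i in range(d)]
    bound = max(target)
    for k in range(1, len(S)):                 # proper subsets only
        for T in combinations(S, k):
            for cs in product(range(bound + 1), repeat=k):
                if all(sum(c * v[i] for c, v in zip(cs, T)) == target[i]
                       for i in range(d)):
                    return False
    return True

Y = [(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (1, 1, 0, 0, 0)]
assert not is_nicg(Y)                               # sum(Y) = 2 * (1,1,0,0,0)
assert is_nicg([(1, 1, 0), (1, 0, 1), (1, 1, 1)])   # columns of the d = 3 solution
```

This exhaustive search is feasible only for very small sets; the point of the algorithms in this section is precisely to avoid it.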
On the other hand, we decided not to store information about $\rm{NICG}$ sets, but to test
$\rm{NICG}(X)$ for a given set $X$ on the fly. In Section \ref{sec:introduction} we saw that answering
$\rm{NICG}(X)$ amounts to answering whether the corresponding system of equations has a solution. Instead of
using the procedure \proc{inIntConeTest} to answer that, we try to solve the system using Gaussian elimination.
Answering whether a given set $X$ has the $\rm{NICG}(X)$ property by solving the corresponding
system with Gaussian elimination might look like an inefficient approach. To understand such a view, consider
a system of five equations and eight variables (which could be the case for $d = 5$) whose
solution contains three parameter-variables.
Each component of the sum of eight binary vectors is a non-negative integer not greater than 8.
Therefore there are $9^3$ possible assignments of values to the three parameters.\\
A system that represents a set for $d \ge 6$ might contain even more parameter-variables,
resulting in even more possible assignments to the parameter-variables.
However, it turned out that, for the systems our search algorithms constructed, the Gaussian method works very well, since
most of the parameter-variable assignments are not valid.
This approach verified the results for $d \leq 5$ and gave $N(6) = 9$ in around thirty minutes.\\
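As a cross-check of this test, the following brute-force Python sketch (our own illustration, not the implementation used in the experiments) enumerates all bounded non-negative assignments instead of performing Gaussian elimination, relying on the fact that the system of an $\rm{NICG}$ set has the all-ones assignment as its unique non-negative integer solution:

```python
from itertools import product

def nicg_by_uniqueness(X):
    """Brute-force NICG test for a set X of distinct nonzero 0/1 vectors:
    X is NICG iff lambda = (1, ..., 1) is the *unique* non-negative
    integer solution of  sum_j lambda_j * x_j = sum(X)."""
    b = [sum(comp) for comp in zip(*X)]
    bound = max(b)  # each multiplier is at most the largest component of b
    solutions = 0
    for lam in product(range(bound + 1), repeat=len(X)):
        if all(sum(l * x[i] for l, x in zip(lam, X)) == b[i]
               for i in range(len(b))):
            solutions += 1
            if solutions > 1:  # a second solution refutes NICG
                return False
    return True
```

For the example above, `nicg_by_uniqueness([(1,0,0,0,0), (0,1,0,0,0), (1,1,0,0,0)])` returns `False`, since $2\,(1, 1, 0, 0, 0)$ is a second solution.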
Below we give examples of sets of vectors that represent solutions for $d = 1 \ldots 6$. These sets
were obtained by applying the described approaches.
\begin{table}
\begin{center}
\begin{tabular}{ c || c | c | c | c | c | c}
$d$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline
$N(d)$ & 1 & 2 & 3 & 5 & 7 & 9 \\ \hline
a solution & $\begin{pmatrix} 1 \end{pmatrix}$
& $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$
& $\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}$
& $\begin{pmatrix} 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1\end{pmatrix}$
& $\begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{pmatrix}$
& $\begin{pmatrix} 1 & 1 & 1 & 1 & 0 & 0 & 1 & 1 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1
\end{pmatrix}$
\end{tabular}
\caption{Solutions for different $d$ values, one example solution per value. The full set of solutions is available at \cite{bapasite}.}
\end{center}
\end{table}
\section{Speeding up Search using Weak Isomorphisms}
In Subsection \ref{subsec:isomorphic} we saw two approaches that might be used to eliminate isomorphic states.
One of the approaches is time inefficient, the other is memory costly. Both the time and the memory costs
grow exponentially, which suggests that either approach can be used only for small $d$ values. On the
other hand, both methods are very strict, in the sense that for a given type of isomorphism the methods
eliminate all isomorphic states (i.e. detect all isomorphic $(X^{(1)}, X^{(2)})$ states, as has already been explained).
Every approach to the problem we have used so far can be described by the following algorithm:
\begin{codebox}
\zi \Comment $currSolution$ represents an NICG set of vectors that is
\zi already considered as part of some solution.
\zi \Comment $nonUsedVectors$ represents the set of vectors that
\zi can be used to build a solution from the current state.
\Procname{$\proc{Solve}(currSolution, nonUsedVectors)$}
\zi update $solutions$ by $currSolution$ \Comment $solutions$ is a global set of solutions.
\zi $vector \gets$ choose element from $nonUsedVectors$
\zi $nonUsedVectors \gets$ $nonUsedVectors \backslash vector$
\zi update isomorphic states
\zi \If $\proc{NICG}(currSolution \cup vector)$
\zi \Then
$\proc{Solve}(currSolution \cup vector, nonUsedVectors)$
\End
\zi $\proc{Solve}(currSolution, nonUsedVectors)$
\end{codebox}
\label{prog:Solve}
The algorithm above generates a search tree. The method we use to eliminate isomorphic states in the search tree
directly affects both the running time and the memory usage.\\
As we can see, on one side are the introduced methods that eliminate a lot of states but
use too much time or too much memory. On the other side, if we do not use any elimination method
we have to search over a huge tree, but we spend no extra time or memory on elimination. Instead of devising
a method that relies on the benefits of only one side or the other, we have tried to ``meet in the middle''.\\
The method we describe is not as strict in elimination as the previous methods, but it is very
efficient, as the results will show.
Suppose the algorithm is in a state $(currSolution = X, nonUsedVectors = A)$, and it chooses to examine
$vector \gets x$. In this
state the algorithm must decide which states $(X \cup y, A \backslash x)$ it is \textbf{not} going to visit, knowing
that it is going to visit $(X \cup x, A \backslash x)$. Note that even if $\proc{NICG}(X \cup x)$ returns
$\const{false}$, the state $(X \cup x, A \backslash x)$ can be considered visited, but as one that
does not lead to a solution. Obviously, if $X \cup x$ is isomorphic to $X \cup y$, there is no
point in searching the subtree represented by $X \cup y$.
By an isomorphism between sets $X$ and $Y$ we denote the existence of a permutation
that maps $X$ to $Y$. Additionally, we observe that
if there exists a permutation $P$ such that $P(X) = X$ and $P(x) = y$,
then there exists a permutation $P'$ such that $P'(X \cup x) = X \cup y$. The converse does not
always hold.\\
Consider an even more specific type of permutation that makes two states isomorphic. We say
that a permutation $P$ ``preserves the order of ones'' of a collection of vectors $X$ if the following holds:
\[
(\forall x \in X) (\forall i \in \{1, \ldots, d\})\ x_i = 1 \Rightarrow P(i) = i,
\]
where $x_i$ denotes the component of the vector $x$ at position $i$.
In other words, when $P$ is applied to $X$ it does not
change the order of ones in that collection of binary vectors. This immediately leads to the conclusion
that $P(X) = X$. We call permutations of this type ``1-order preserving'' permutations.
In our algorithm we describe 1-order preserving permutations using
a single boolean array $fixedPerms$ of size $d$. If $fixedPerms[i] = \const{true}$, the
array represents a collection of permutations such that $P(i) = i$ holds for every permutation $P$
in the collection. The array $fixedPerms$ is updated in the following way:
\begin{itemize}
\item Initially, $fixedPerms = \{\const{false}\}^d$.
\item When a new vector $x$ is added to the current state $X$, array $fixedPerms$ is updated
as follows:
\begin{codebox}
\zi \For $i \gets 1 \ldots d$
\zi \Do
\If the $i$-th component of $x \isequal 1$
\zi \Then
$fixedPerms[i] \gets \const{true}$
\End
\End
\end{codebox}
\end{itemize}
Therefore, for each newly added vector the algorithm updates $fixedPerms$ in $O(d)$ time.
For each state the algorithm needs an additional
$d$ bits to represent the corresponding array.
Testing whether two vectors $x$ and $y$ are isomorphic
with respect to the 1-order preserving collection given by $fixedPerms$ can be done as follows:
\begin{codebox}
\Procname{$\proc{IsomorphicVectors}(x, y, fixedPerms)$}
\zi \If number of ones in $x \neq $ number of ones in $y$
\zi \Then
\Return \const{false}
\End
\zi \For $i \gets 1 \ldots d$
\zi \Do
\If $fixedPerms[i] \isequal \const{true}$
\zi \Then
\If the $i$-th components of $x$ and $y$ differ
\zi \Then
\Return \const{false}
\End
\End
\End
\zi \Return \const{true}
\end{codebox}
The method $\proc{IsomorphicVectors}$ executes in $O(d)$ time for a particular input.
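The two helpers can be transcribed into Python as follows (a sketch of the pseudocode above; the function names are ours):

```python
def update_fixed_perms(fixed_perms, x):
    # Positions where the newly added vector x has a 1 must be fixed
    # by every 1-order preserving permutation from now on.
    return [f or (xi == 1) for f, xi in zip(fixed_perms, x)]

def isomorphic_vectors(x, y, fixed_perms):
    # Weak O(d) test: x and y can be swapped by a 1-order preserving
    # permutation only if they have the same number of ones and agree
    # on every fixed position.
    if sum(x) != sum(y):
        return False
    return all(xi == yi for xi, yi, f in zip(x, y, fixed_perms) if f)
```

For instance, after adding $(1, 1)$ both positions become fixed, and `isomorphic_vectors((1, 0), (0, 1), [True, True])` returns `False`.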
As we have seen, eliminating states according to a 1-order preserving collection is both time and memory
efficient. However, this approach is somewhat weak, and it does not eliminate all isomorphic states. This can
be illustrated with the following example. Suppose $currSolution = \{(1, 1)\}$ and
$nonUsedVectors = \{(1, 0), (0, 1)\}$. The only permutation that does not change the order of ones in
$currSolution$ is $\begin{pmatrix} 1 & 2 \\ 1 & 2 \end{pmatrix}$, therefore $fixedPerms = \{\const{true}, \const{true}\}$.
Thus, the call $\proc{IsomorphicVectors}((1, 0), (0, 1), \{\const{true}, \const{true}\})$ will
return $\const{false}$. However, there exists a permutation $\begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$
that maps $\{(1, 1), (1, 0)\}$ onto $\{(1, 1), (0, 1)\}$, which means that these two states should
be considered isomorphic.
Although this last approach is based on a weak isomorphism, it runs faster than the previously described approaches
for $d = 1 \ldots 6$.
For $d = 6$ there exist 254 non-isomorphic solutions.
The algorithm that uses Gaussian elimination finds 80000 solutions, many of them isomorphic;
the search lasts about 45 minutes.
The algorithm that implements the weak isomorphism finds nearly 5000 solutions in about 5 minutes.
\section{Improvement of $N'(d)$ for Arbitrary $d$}\label{section:row-isomorphism}
\subsection{Isomorphism on row additions}\label{subsection:row-isomorphism}
Each NICG set of vectors $X$ can be described as a matrix of dimension $d \times |X|$, where each column
of the matrix represents a single vector from $X$, and no two columns represent the same vector.
In this section we introduce an isomorphism of NICG solutions that involves additions and
subtractions on the rows of such matrices.
Consider an NICG set of vectors $X$ and the corresponding matrix $M$. We say that two rows, $i_1$ and $i_2$,
do not share a variable if there does not exist $j$ such that $M_{i_1, j} = 1$ and $M_{i_2, j} = 1$.
We can state the following lemma.
\begin{lema}\label{lema:row-addition}
Consider a matrix $M$, and assume that $M$ contains at least two rows that do not share a variable.
Let two such rows be $i_1$ and $i_2$, and let the matrix $M'$ be obtained
from $M$ by replacing the row $i_2$ with the row-sum $i_1 + i_2$. Then $M$ represents an NICG set of vectors
if and only if $M'$ represents an NICG set of vectors.
\end{lema}
\begin{comment}
\begin{proof}
$(\Rightarrow)$ Assume that $M$ represents an NICG set. Such a set has a unique non-negative integer solution, in which
each of the variables is 1. Let the row $i_1$ contain the variables $\{x_1, \ldots, x_s\}$, and
the row $i_2$ the variables $\{y_1, \ldots, y_t\}$.
Assume that a set represented by $M'$ has at least two non-negative integer solutions, and show it leads to
contradiction. For any solution of the system given by $M'$ we will have
\[
\begin{array}{r c l}
v(x_1) + \ldots + v(x_s) & = & s, \\
v(y_1) + \ldots + v(y_t) + v(x_1) + \ldots + v(x_s) & = & s + t,
\end{array}
\]
where $v$ is a function that assigns solution to the variables. From the last two equations we have
\[
v(y_1) + \ldots + v(y_t) = t,
\]
such that all $v(y_i)$ are non-negative integers. Then this solution satisfies the system given
by $M$, which contradicts the assumption that $M$ has a unique solution.
$(\Leftarrow)$ Assume that $M'$ represents an NICG set, but $M$ has at least two solutions. Analogously
to the first part we obtain a contradiction.
This completes the proof.
\end{proof}
\end{comment}
\begin{corollary}\label{cor:subset-row}
Let $X$ be an NICG set with corresponding matrix $M$, and suppose $M$ contains two rows $i_1$ and $i_2$ such that
every variable present in $i_1$ is present in $i_2$ as well. Then there exists an NICG set $X'$ which can be obtained
from $X$ by replacing the row $i_2$ with the row-subtraction $i_2 - i_1$.
\end{corollary}
Lemma \ref{lema:row-addition} gives a new insight into isomorphic NICG sets. Using the lemma,
we give a better estimate of $N'(d)$, as presented
in Section \ref{section:upper-bound-improvement}.
\subsection{Upper Bound Improvement for large $d$}\label{section:upper-bound-improvement}
In this section we give a better estimate of $N'(d)$. The improvement relies on the result presented in
Section \ref{subsection:row-isomorphism}. We state the following lemma:
\begin{lema}\label{lema:upper-bound}
For each NICG set $X$, and its corresponding matrix $M$, there exists an NICG set $X'$,
along with its corresponding matrix $M'$, such that
\begin{enumerate}[(1)]
\item $|X| = |X'|$, and
\item every row in $M'$ contains at least one value 0.
\end{enumerate}
\end{lema}
\begin{proof}
If $M$ satisfies condition (2), then let $X' = X$ and the proof is done. Therefore, assume
that $M$ contains a row $i$ whose values are all 1. This means that the variables of every row $j$
are contained in row $i$ as well. By Corollary \ref{cor:subset-row}, there exists $X_1$ that is obtained from $X$
by replacing the row $i$ with the row $i - j$. If $X_1$ contains a row of 1s only, we apply
Corollary \ref{cor:subset-row} to $X_1$, obtaining $X_2$. We continue this process until we get an $X_r$ that
does not contain a row of all 1s. Because there is a finite number of rows, $X_r$ is obtained in a finite
number of steps. Once we obtain $X_r$, let $X' = X_r$. By the construction and the corollary, $X_r$
satisfies both (1) and (2), and is NICG.
This completes the proof.
\end{proof}
According to Lemma \ref{lema:upper-bound}, there exists a solution $X$ to the problem such that the corresponding matrix
does not contain a row with all values being 1.
Thus every component of the sum of the vectors of $X$ is at most $N - 1$.
Recalling the proof of Theorem 2 in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}, we obtain an upper bound on $N$ as
the maximal value such that
\[
2^N \leq N^d.
\]
The last inequality gives a slight improvement on the upper bound of $N(d)$.
The upper bound can be improved even further by using the result $N(d) > d + 1$, for $d > 4$, given in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}.
Consider a solution $X$ and its corresponding matrix $M$ for $d > 4$. By Lemma \ref{lema:upper-bound}
there exists a solution such that each row contains at least one 0 value. Assume that each of the rows contains
a single 0 value.
Then at least $N(d) - d$ columns would contain 1s only. Since $N(d) - d \ge 2$ for $d > 4$,
the system is not NICG. Therefore the assumption is wrong,
and there exists at least one row that contains two zeros. This gives a further improvement of the
upper bound on $N$, described by the following inequality:
\[
2^N \leq N^{d - 1} \cdot (N - 1).
\]
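The effect of both inequalities on concrete values of $d$ can be checked mechanically; the sketch below (our code, not part of the original procedure) returns the largest $N$ satisfying $2^N \leq N^d$, or $2^N \leq N^{d-1}(N-1)$ for the improved variant:

```python
def upper_bound(d, improved=False):
    """Largest N with 2**N <= N**d (improved: 2**N <= N**(d-1) * (N - 1))."""
    n = 1
    while True:
        m = n + 1
        rhs = m ** (d - 1) * ((m - 1) if improved else m)
        if 2 ** m > rhs:  # the inequality fails at m, so n is maximal
            return n
        n += 1
```

For $d = 7$, for instance, the bound $2^N \leq N^7$ gives $N \leq 36$.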
\section{Introduction}\label{sec:introduction}
The theory of sets and set operations plays an important
role in software verification and data flow analysis
\cite{Aiken99SetConstraintsIntro}. Additionally, reasoning
about sets is used for proving correctness of data
structures, since a natural choice of an abstraction
function is the abstraction function that maps the content
of a data structure to a set. For full functional
verification of complex data structures it is often
important to keep track of the number of elements stored in the
data structure
\cite{ZeeETAL08FullFunctionalVerificationofLinkedDataStructures}. The
logic in which one can express set relations, cardinality
constraints and linear integer arithmetic is known under the
name Boolean Algebra with Presburger Arithmetic
({\m{\small BAPA}})
\cite{KuncakETAL06DecidingBooleanAlgebraPresburgerArithmetic}. The
decidability of this logic was long known
\cite{FefermanVaught59FirstOrderPropertiesProductsAlgebraicSystems},
but it was not until recently that
\cite{KuncakETAL06DecidingBooleanAlgebraPresburgerArithmetic}
proved that {\m{\small BAPA}} admits quantifier-elimination
and has asymptotically the same complexity as Presburger Arithmetic.
The
quantifier-elimination algorithm introduced in \cite{KuncakETAL06DecidingBooleanAlgebraPresburgerArithmetic} reduces a
given {\m{\small BAPA}} formula to a Presburger arithmetic formula
using Venn regions.
Many verification conditions expressing
properties of complex data structures can be
immediately formulated in quantifier-free fragment of {\m{\small BAPA}}
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}, denoted {\m{\small QFBAPA}}.
For these theoretical and practical reasons, we consider only the {\m{\small QFBAPA}}
fragment in this paper.
Checking the satisfiability of
a {\m{\small QFBAPA}} formula is an NP-complete problem, where the non-trivial
aspect is showing the membership in NP
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}.
The recent advances in SAT solvers made SAT instances coming from hardware
and software verification
more amenable to solution attempts than before.
However, despite the existence of a polynomial encoding of {\m{\small QFBAPA}} into SAT,
an efficient {\m{\small QFBAPA}} solver is still
missing.
The most recent {\m{\small QFBAPA}} implementation
\cite{SuterETALL2011BAPASMT} uses the state-of-the-art
SMT solver Z3.
This implementation relies on the DPLL(T)
mechanism of Z3 to reason about the top-level propositional
atoms of a {\m{\small QFBAPA}} formula. Although this implementation is
based on an algorithm that explores all Venn
regions, it automatically decomposes problems into subcomponents when possible, and
applies Venn region construction only within individual components.
This approach is an important practical step forward, but there are still
natural formulas that cannot be decomposed. For such cases, the running time of the procedure increases
doubly-exponentially in the number of variables.\footnote{This paper was written in 2011; subsequently an \emph{implementation}
of another procedure is available in CVC4 and was documented in \cite{DBLP:journals/lmcs/BansalBRT18}, but this does not yield
improved complexity bounds for the general case nor does it impact the problem studied in this paper.}
An alternative approach towards the efficient implementation is
to explore the sparse model property of {\m{\small QFBAPA}}. In
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}
it was shown that if a given {\m{\small QFBAPA}} formula is satisfiable,
then there exists an equisatisfiable linear arithmetic
formula that is polynomial in the size of the original
formula. The decision procedure based on this theorem is
described in detail in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}. The procedure
takes as an input a {\m{\small QFBAPA}} formula and converts it into an
equisatisfiable Presburger arithmetic formula
$F_{\textsf{PA}}$. Based on the newly derived $F_{\textsf{PA}}$ and the
theorem on a sparse solution for integer linear programming
\cite{EisenbrandShmonina06CaratheodoryBoundsIntegerCones},
the algorithm computes a positive integer $N'(d)$, which
denotes an upper bound of the number of non-empty Venn regions for formula that contains $d$ constraints.
The algorithm then runs in a loop and
tries to incrementally construct a model of sparsity $0, 1,
\ldots, N'(d)$. If no model is found after
the loop execution is finished, then the input formula is
unsatisfiable.
The number $N'(d)$ is an upper bound and can easily be computed
from the dimension of a problem. However, this bound is not
tight. Our goal is to establish a bound on $N'(d)$ that is as tight as
possible, in order to make an efficient
implementation of a {\m{\small QFBAPA}} solver more feasible.
We are thus interested in deriving the smallest possible number
$N'(d)$, denoted by $N(d)$, which still preserves the desired property: if a formula
has a solution, then it also has a solution of sparsity
$N(d)$. This paper focuses on computing the values of $N(d)$
using various combinatorial algorithms and their
optimizations.
The existence of $N(d)$ is guaranteed by the main theorem on
a sparse solution for integer linear programming
\cite{EisenbrandShmonina06CaratheodoryBoundsIntegerCones},
which states that if a vector is an element of an integer
cone, then it is also an element of some smaller integer
cone. The key observation in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}
was not to use just any ``small'' integer cone, but the smallest
one. For this purpose,
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}
introduced the so-called {\emph{non-redundant integer
cone}},
an integer cone that does not contain a
smaller cone that could generate a given vector.
This gives us a very simple definition of a function for which we know
linear lower bounds and $O(d\log(d))$ upper bounds.
\section{Randomizing Search Order; New Lower Bounds}
The algorithms we have presented so far choose the next state from the current state deterministically.
We developed a randomized algorithm that chooses the next state randomly, in such a way that all states
reachable from the current state have the same probability of being chosen.
The uniform distribution over the choices is achieved by shuffling the list of
states reachable from the current state, and then picking the first state from the shuffled list as the next state.
Although the implementation difference is minor, the
results are significantly better than with the deterministic algorithm, as can be seen in the following paragraph.
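The change is indeed minor; in Python (our sketch, not the original implementation) the successor choice amounts to:

```python
import random

def choose_next(reachable_states, rng=random):
    # Shuffle the successors, then take the first one: every reachable
    # state is chosen with equal probability.
    order = list(reachable_states)
    rng.shuffle(order)
    return order[0]
```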
We did not succeed in computing the exact values of $N(7)$, $N(8)$, $N(9)$ or $N(10)$. Instead, we obtained better lower estimates
of these values: $N(7) \geq 11$; $N(8) \geq 13$; $N(9) \geq 14$; $N(10) \geq 16$.
It is interesting that in less than a second we obtained the
result $N(6) \geq 9$. In a few minutes we got $N(7) \geq 11$, and in an hour we got $N(8) \geq 13$, $N(9) \geq 14$ and
$N(10) \geq 16$. In contrast, the deterministic algorithm needed the following
amounts of time: $N(6) \geq 9$ in a minute; $N(7) \geq 11$ after a few hours;
$N(8) \geq 13$ we did not obtain even after a day of running the algorithm.
\section{Preliminaries}
This section summarizes the previously known results that
are necessary for a better understanding of the rest of the
paper. We recall the definitions and the theorems introduced
in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}.
Quantifier-free Boolean Algebra with Presburger Arithmetic
({\m{\small QFBAPA}}) is a theory that includes reasoning about set
relations and operations, and reasoning about integer linear
arithmetic. Sets and integers are connected through the
cardinality operator. A simple decision procedure for
{\m{\small QFBAPA}} uses Venn regions and reduces checking satisfiability
of a {\m{\small QFBAPA}} formula to checking satisfiability of a
corresponding linear integer arithmetic formula. As an
illustration consider the following {\m{\small QFBAPA}} formula:
\begin{equation*}
|U| = 100 \wedge \bigwedge_{1 \leq i < j \leq 3} {|x_i \cup
x_j| = 30} \wedge \bigwedge_{1 \leq i \leq 3}{|x_i| = 20}
\wedge \bigwedge_{1 \leq i \leq 3}{x_i \subseteq U}.
\end{equation*}
With $l_i$ we denote fresh integer variables. The above
formula is equisatisfiable with the following formula written in a matrix form:
\begin{equation}\label{fla:derived}
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 1\\
0 & 1 & 0 & 1 & 1 & 1 & 1 & 1\\
0 & 1 & 1 & 1 & 0 & 1 & 1 & 1\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1\\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1\\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{bmatrix}
\begin{pmatrix}
l_{000}\\
l_{001}\\
l_{010}\\
l_{011}\\
l_{100}\\
l_{101}\\
l_{110}\\
l_{111}
\end{pmatrix}
=
\begin{pmatrix}
100\\
30\\
30\\
30\\
20\\
20\\
20
\end{pmatrix}
\end{equation}
The details of the translation algorithm can be found in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}.
However, this newly derived formula might have an
exponential size in the size of the original formula. In
this example too, the number of variables is exponential
in the number of sets in the original formula.
\begin{definition}
Let $X \subseteq \mathbb{Z}^d$ be a set of integer vectors. An
integer cone generated by $X$, denoted with
$\rm{int\_cone}(X)$, is a linear additive closure of vectors
of $X$:
\[
\rm{int\_cone}(X) = \{\lambda_1 x_1 + \ldots + \lambda_n x_n
\mid n \geq 0, \ x_i \in X, \ \lambda_i \ge 0, \ \lambda_i \in \mathbb{Z}\}.
\]
\end{definition}
Note that checking satisfiability of \eqref{fla:derived}
reduces to checking whether a vector belongs to an integer
cone. The number of vectors in the integer cone can be
infinite and we are interested in deriving the ``small''
subset of them that would still generate the same initially
given vector. We apply the results obtained in the
operational research community on sparse solutions of
integer linear programming problems.
\begin{theorem}[Theorem 1 in
\cite{EisenbrandShmonina06CaratheodoryBoundsIntegerCones}]\label{tm:fritz}
Let $X \subseteq \mathbb{Z}^d$ be a finite set of integer
vectors and $M_X = \max\{n \mid n = |x_{ij}|, \ x_{ij}
\text{ is a component of vector } x_i, \ x_i \in
X\}$. Assume that $b \in \rm{int\_cone}(X)$. Then there
exists a subset $\tilde{X} \subseteq X$ such that $b \in
\rm{int\_cone}(\tilde{X})$ and $|\tilde{X}| \leq 2 d
\log_2{(4 d M_X)}$.
\end{theorem}
This theorem establishes a bound on the number of vectors
of the cone needed to generate a given vector. Because in our setting $M_X
= 1$, the number of vectors in the ``smaller'' cone is
bounded by $2d\log_2 d + 4d$. In
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}
it was observed that this bound can be reduced to
$2d\log_2 d$ by taking into account that the vectors are non-negative.
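As a quick numeric sanity check (our code, not part of the decision procedure), the bound of Theorem~\ref{tm:fritz} specializes to the stated expression for 0/1 vectors:

```python
import math

def sparse_cone_bound(d, m_x):
    # |X~| <= 2*d*log2(4*d*M_X), as in Theorem 1 of Eisenbrand & Shmonina
    return 2 * d * math.log2(4 * d * m_x)

def bit_vector_bound(d):
    # For 0/1 vectors M_X = 1, so 2*d*log2(4*d) = 2*d*log2(d) + 4*d
    return 2 * d * math.log2(d) + 4 * d
```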
\begin{definition}
Let $X \subseteq \mathbb{Z}^d$ and let $b$ be an integer
vector. Set $X$ is called a non-redundant integer cone
generator for $b$, denoted by $\rm{NICG}(X, b)$, if:
\begin{itemize}
\item $b \in \rm{int\_cone}(X)$
\item for every $x \in X$ it holds that $b \notin \rm{int\_cone}(X \backslash \{x\})$.
\end{itemize}
\end{definition}
Nevertheless, we want to avoid computing a non-redundant
integer cone generator for every given vector. The following
lemma, proved in
\cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean},
shows that it is enough to consider only one particular
vector, namely $\Sigma X = \sum_{x \in X}{x}$. We define $\rm{NICG}(X)$ as
$\rm{NICG}(X, \Sigma X)$.
\begin{lema}\label{lema:NICG}
Let $X \subseteq \mathbb{Z}^d_{\geq 0}$ be a set of
non-negative integer vectors. The following two statements
are equivalent:
\begin{itemize}
\item there exists a non-negative integer vector $b$ such
that $\rm{NICG}(X, b)$ holds
\item $\rm{NICG}(X)$ holds
\end{itemize}
\end{lema}
Our original motivation was to check the satisfiability of
{\m{\small QFBAPA}} formulas. The decision procedure can be outlined as
follows: we reduce satisfiability of the initial {\m{\small QFBAPA}}
formula to checking membership in an integer cone, where
the generating vectors are bit vectors. Applying
Theorem~\ref{tm:fritz} yields the small model
property. Therefore, our new goal becomes to compute the
number $N(d)$ for a given dimension $d$. The number $N(d)$
sets an upper bound on the cone size: if a vector is a
member of an integer cone, then it is a member of a cone
generated by at most $N(d)$ vectors. Translated back
to the {\m{\small QFBAPA}} satisfiability problem: if a {\m{\small QFBAPA}} formula
is satisfiable, then it also has a model in which at most
polynomially many Venn regions are non-empty. The number of
non-empty Venn regions is determined using $N(d)$. The
decision procedure runs in a loop from 0 to $N(d)$ and
tries to incrementally construct a model of size $0, 1,
\ldots, N(d)$.
Lemma~\ref{lema:NICG} justifies the following definition:
\begin{definition}
Let $d$ be a non-negative integer. By $N(d)$ we denote the
cardinality of a largest set $X \subseteq \{0, 1\}^d$ for which $\rm{NICG}(X)$ holds:
\[
N(d) = \max\{|X| \ | \ X \subseteq \{0, 1\}^d, \ \rm{NICG}(X) \}.
\]
\end{definition}
Lastly, we summarize the known lower and
upper bounds on the value of $N(d)$, as well as the computed
values of $N(d)$ for some $d$:
\begin{theorem}\label{tm:knownNd}
For a positive integer $d \ge 1$ and $N(d)$ the following holds:
\begin{enumerate}
\item $d \le N(d)$
\item $N(d) \le (1 + \varepsilon(d))(d\log_2 d)$, where
$\varepsilon(d) \le 1$ and $\displaystyle\lim_{d
\rightarrow \infty} \varepsilon(d) = 0$
\item $N(d) + 1 \le N(d + 1)$
\item $N(d) = d$, for $ d = 1, 2, 3$
\item $N(d) > d$ for $d \ge 4$
\end{enumerate}
\end{theorem}
In the rest of the paper we will describe the algorithms and
optimizations we used to compute $N(4), N(5)$ and $N(6)$. We
will also provide improved lower bounds on $N(7)$ and
$N(8)$.
\section{Better estimate of $N'(7)$ using Decomposition: from 36 to 19}
The best known estimate for $N'(7)$ so far is 36. We improved this upper bound to 19, as follows:\\
Let $X$ be a solution for $d = 7$, i.e. $X \subseteq \{0, 1\}^7$ and $|X| = N(7)$.
Then set $X$ can be decomposed into two subsets $X_0$ and $X_1$ such that
\begin{itemize}
\item $X_0 \cap X_1 = \emptyset$,
\item $X_0 \cup X_1 = X$,
\item $X_0$ contains only vectors whose first component is 0,
\item $X_1$ contains only vectors whose first component is 1.
\end{itemize}
From Lemma \ref{lema:monotonic} it follows that $X_0$ and $X_1$ are NICG sets. Since the first
component of each vector in $X_0$ is 0, $X_0$ can be considered as
a set of 6-dimensional vectors. Therefore, $|X_0| \le N(6) = 9$.
In order to estimate an upper bound on $|X_1|$, we use the same algorithm as for obtaining $N(6)$,
with the input defined as the set $\{x\ |\ x \in \{0, 1\}^7 \wedge x_1 = 1\}$.
Running the algorithm on this set, we obtain the final result, a set $Y$ with
$|Y| = 10$, after 30 minutes. Therefore, $|X_1| \leq 10$. Since every solution for $d = 7$ can be decomposed into $X_0$
and $X_1$, with upper bounds 9 and 10, respectively, every solution for $d = 7$ has cardinality at
most $9 + 10 = 19$.
\section{Core Techniques: N(4)=5, N(5)=7}\label{sec:core}
In this section we present the methods that we initially used to compute the values of $N(4)$ and $N(5)$. Figure~\ref{fig:isInIntCone} describes a simple algorithm that checks whether a set of vectors $X \subseteq \{0, 1\}^d$ is a non-redundant integer cone generator.
\begin{figure}[h]
\begin{codebox}
\Procname{$\proc{NICG}(X)$}
\zi \Comment Global variable that stores $\rm{NICG}$ property of $X$.
\zi $found \gets \const{false}$
\zi \For each vector $x \in X$
\zi \Do
$\proc{inIntConeTest}(X \backslash \{x\}, \sum{X})$
\zi \If $found \isequal \const{true}$
\zi \Then
\Return $\const{false}$
\End
\End
\zi \Return $\const{true}$
\end{codebox}
\begin{codebox}
\Procname{$\proc{inIntConeTest}(X, b)$}
\zi \If $b \isequal 0$
\zi \Then
$found \gets \const{true}$
\zi \Return
\End
\zi \If $X \isequal \emptyset$
\zi \Then
\Return
\End
\zi $newB \gets b$
\zi $x \gets $ take any element from $X$
\zi \While \const{true}
\zi \Do
$\proc{inIntConeTest}(X \backslash \{x\}, newB)$
\zi $newB \gets newB - x$
\zi \If $found \isequal \const{true}$ or $newB$ contains negative component
\zi \Then
\Return
\End
\End
\end{codebox}
\caption{Program \textsc{NICG}: checks whether $\rm{NICG}(X)$ holds for a set of integer vectors $X$}
\label{fig:isInIntCone}
\end{figure}
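A direct Python transcription of the pseudocode in Figure~\ref{fig:isInIntCone} (restructured to return booleans instead of setting the global $found$ flag) might look as follows:

```python
def in_int_cone(X, b):
    """Does b belong to int_cone(X)?  Mirrors inIntConeTest: pick a
    vector x and branch on how many times x is used."""
    if all(c == 0 for c in b):
        return True
    if not X:
        return False
    x, rest = X[0], X[1:]
    new_b = list(b)
    while all(c >= 0 for c in new_b):
        if in_int_cone(rest, tuple(new_b)):
            return True
        new_b = [bi - xi for bi, xi in zip(new_b, x)]  # use x once more
    return False

def nicg(X):
    """NICG(X): sum(X) must not be generated by any subset X \\ {x}."""
    b = tuple(sum(comp) for comp in zip(*X))
    return all(not in_int_cone([y for y in X if y != x], b) for x in X)
```

For example, `nicg([(1, 0), (0, 1), (1, 1)])` returns `False`, since the sum $(2, 2)$ is already generated by $\{(1, 1)\}$ alone, while `nicg([(1, 0), (1, 1)])` returns `True`.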
A simple incremental algorithm for computing the value $N(d)$ works as follows: the
algorithm starts with a small $n$ and constructs a set $X$ of cardinality $n$ that has the property $\rm{NICG}(X)$. In the next iteration $n$ is increased and the algorithm repeats the same steps.
As soon as the algorithm encounters the first $n$ for which it cannot construct an
$\rm{NICG}$ set of cardinality $n$, it stops and returns $N(d) = n - 1$.
The correctness of this algorithm is guaranteed by the following lemma, originally proved in \cite{KuncakRinard07TowardsEfficientSatisfiabilityCheckingBoolean}:
\begin{lema}\label{lema:monotonic}
If $\rm{NICG}(X)$ and $Y \subseteq X$, then $\rm{NICG}(Y)$.
\end{lema}
Using this approach we computed $N(5) = 7$ after approximately 3 hours.
\vspace{0.5cm}
\smartparagraph{Optimization: Binary Search.} Instead of incrementally constructing all the sets, we can apply Lemma~\ref{lema:monotonic} together with Theorem~\ref{tm:knownNd} to devise an algorithm that computes
the value of $N(d)$ in a binary search manner.
The algorithm makes a guess $n$ for the value of $N(d)$ and tries to construct a set $X$ such that $|X| = n$ and $\rm{NICG}(X)$. If no such set exists, then $N(d) < n$; otherwise $n \le N(d)$.
As an illustration, consider $d = 5$. Applying Theorem~\ref{tm:knownNd} to compute the bounds on the value of $N(5)$,
the algorithm derives the interval in which $N(5)$ occurs: $6 \le N(5) \le 11$. The first guess is $N(5) = 8$.
Then the algorithm tries to construct a set $X$ such that $|X| = 8$ and $\rm{NICG}(X)$. Because such a set does not
exist, the algorithm will not construct it implying $6 \le N(5) \le 7$. The next guess is $N(5) = 7$.
Since there exists a set $X$ such that $|X| = 7$ and $\rm{NICG}(X)$, the algorithm will construct such a set and output
$N(5) = 7$.
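Lemma~\ref{lema:monotonic} is what makes the binary search sound: the predicate ``some NICG set of size $n$ exists'' is monotone in $n$. A generic sketch of that search follows (our own illustration; \texttt{exists\_nicg} is a placeholder oracle, mocked here with the known value $N(5) = 7$, and the midpoint rounding may differ from the sequence of guesses quoted above).

```python
def max_satisfying(lo, hi, pred):
    """Largest n in [lo, hi] with pred(n), for a monotone predicate
    (true up to some threshold, false above it)."""
    while lo < hi:
        mid = (lo + hi + 1) // 2  # round up so [lo, hi] always shrinks
        if pred(mid):
            lo = mid              # n <= N(d): search higher
        else:
            hi = mid - 1          # N(d) < n: search lower
    return lo

# Placeholder oracle: the real search asks whether some X with |X| = n
# satisfies NICG(X); here we mock it with the known value N(5) = 7.
exists_nicg = lambda n: n <= 7
```

With the bounds $6 \le N(5) \le 11$ from Theorem~\ref{tm:knownNd}, \texttt{max\_satisfying(6, 11, exists\_nicg)} returns 7 after testing only a handful of sizes.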
\smartparagraph{Incremental Construction vs Binary Search.} We have implemented both
the incremental construction and the binary search approach to derive $N(5)$.
The binary search approach found $N(5)$ faster than the incremental construction approach. However, our experimental results
show that the binary search approach is slower than the incremental construction approach in computing $N(d)$ for $d > 5$.
The difference in the experimental results is caused by the fact that testing the existence of a set $X$ such that $\rm{NICG}(X)$ becomes
computationally more expensive as $|X|$ grows. Another issue with the binary search approach is that if the initial interval
is not tight enough, the algorithm might make a guess on $N(d)$ that is significantly larger than the value $N(d)$ itself.
As an example consider $d = 5$. In the incremental approach the algorithm must examine at most $\binom{31}{6} + \binom{31}{7} + \binom{31}{8} = 11254581$ sets of vectors. In the binary search approach the algorithm must examine at most $\binom{31}{7} + \binom{31}{8} = 10518300$ sets of vectors, where the value 31 is the cardinality
of the set $\{0, 1\}^5 \setminus \{(0, 0, 0, 0, 0)\}$.
\smartparagraph{Optimization: Preserving Sums.} In order to obtain a more efficient computation of $N(d)$ we tried
an approach based on preserving sums of vectors, so that they can be reused later in the computation.
The idea of preserving sums was motivated by the following observation:
if $Y \subset X$ and $\rm{NICG}(X)$, then $\Sigma X \not\in \rm{int\_cone}(Y)$.
To benefit from the observation, for every examined $Y$ for which $\rm{NICG}(Y)$ holds the
algorithm must keep track of the sum $\Sigma Y$.
The advantage of such an approach is that
the algorithm can compute new sums quickly, and detect non-$\rm{NICG}$ sets faster.
The disadvantage
is the process of maintaining the sums: the search algorithm must be aware which sums should be stored and which removed,
and in certain cases it must
copy the whole data structure that keeps the sums. We have tried this heuristic, but our experiments have shown that maintaining so much
information is more costly than the computation it saves, and we did not obtain any significant improvement.
\subsection{Isomorphic sets}\label{subsec:isomorphic}
So far the algorithms searched for the solution over \textbf{all} sets of vectors of given cardinality. We applied
optimizations to detect early whether a set does not improve the solution, and we introduced
approaches which improved maintaining information about the sets. Common to all those cases, however, was that almost all the sets were examined. This was a big drawback: we will demonstrate that one does not need to examine all the sets.
To motivate our observations, consider the following two sets: $X_1 = \{(1, 1, 0), (0, 1, 0)\}$ and $X_2 = \{(0, 1, 1), (0, 1, 0)\}$.
Performing a permutation $\begin{pmatrix} 1 & 2 & 3 \\ 3 & 2 & 1 \end{pmatrix}$ on indices of components of the
vectors in $X_1$ we obtain $X_2$. Permuting components of the vectors does not affect the solution.
Therefore, if the set $X_1$ does not lead to the solution, then neither does $X_2$; similarly, if
$X_1$ leads to the solution, so does $X_2$. This observation allows us to consider only sets of vectors
that are not isomorphic, where isomorphism between two sets of vectors is defined as follows:
\begin{definition}
We say that two sets of vectors $X, Y \subseteq \{0, 1\}^d$ are isomorphic if there exists
a permutation $P$ over the set $\{1, \ldots, d\}$
and a bijective function $f_P : X \rightarrow Y$ defined as
\[
f_P(x) = y \Leftrightarrow x_i = y_{P(i)}, i = 1, \ldots, d.
\]
\end{definition}
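A direct way to implement this definition (our own sketch) is to canonicalize a set under all $d!$ coordinate permutations and compare canonical forms; this is feasible only for small $d$.

```python
from itertools import permutations

def canon(X):
    """Canonical form of a set of 0/1 vectors: the lexicographically smallest
    sorted tuple of vectors over all permutations of the coordinates."""
    d = len(X[0])
    return min(
        tuple(sorted(tuple(v[p[i]] for i in range(d)) for v in X))
        for p in permutations(range(d))
    )

def isomorphic(X, Y):
    # Two sets are isomorphic iff some coordinate permutation maps one
    # onto the other, i.e. iff their canonical forms coincide.
    return canon(X) == canon(Y)
```

On the example above, \texttt{isomorphic([(1, 1, 0), (0, 1, 0)], [(0, 1, 1), (0, 1, 0)])} is true.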
There are essentially two ways, call them check functions, to determine whether we have already considered an isomorphic set:
\begin{enumerate}
\item For each set $X$ considered so far, mark all sets isomorphic to $X$, i.e. mark all $d!$ sets (note that some
of them might repeat), storing them in a structure $marked$. Before a new set is processed,
check whether it is in $marked$.
\item Store each considered set in a structure $done$. When there is a new set $X$ to be examined, run all
$d!$ permutations on $X$. For each permutation $p$ check if $p(X)$ is in $done$.
\end{enumerate}
We used this approach for $d \leq 7$. There are $2^{64}$ different sets in the case $d = 6$. A particular
set can be isomorphic to at most $6!$ other sets. Because isomorphism is an equivalence relation, at least $\frac{2^{64}}{6!}$
non-isomorphic sets would have to be stored somehow. This is far too many. To avoid this problem, we
can use a slightly different method. Let us define
\[
X^{(k)} = \{x \in X \mid x \mbox{ contains exactly } k \mbox{ non-zero components}\}.
\]
Then we say $X$ and $Y$ are isomorphic if $(X^{(1)}, X^{(2)})$ is isomorphic to $(Y^{(1)}, Y^{(2)})$.
With such a method, the sets $X = \{(1, 0, 0, 0), (1, 1, 1, 0)\}$
and $Y = \{(1, 0, 0, 0), (1, 1, 0, 1)\}$ will not be considered isomorphic, although they are isomorphic
under the permutation $\begin{pmatrix} 1 & 2 & 3 & 4 \\ 1 & 2 & 4 & 3 \end{pmatrix}$. Thus, the method does not cover
all isomorphic pairs, but it allows us to reduce memory usage.
Finally, the required memory is sufficiently small that in both cases we can
store the sets in an array. As a result, the check
function can be executed in constant time.
The first approach uses $O(d! \cdot T)$ time for marking (where $T$ is the number of non-isomorphic sets)
and $O(d! \cdot M)$ memory. Once the marking is done,
the check function is performed in constant time.\\
The second approach uses $O(1)$ time to store an examined set, and it uses $O(M)$ memory.
For each stored set $X$ there are multiple isomorphic sets. Note that a set does not
always have $d!$ distinct isomorphic
sets, as for example $\{(1, 0, 0, 0)\}$; in fact, most of the time there are fewer than $d!$.
Before storing a set $X$ we have to generate $d!$ other sets and check whether they are in $done$.
There are $T'$ isomorphic sets, where for $d = 6$ the value $T'$ is a few hundred times bigger than $T$.
This approach gives time complexity $O((T + T') \cdot d!)$ and memory complexity $O(M)$.\\
In our case we have already reduced the memory usage, so memory is not an issue, but time efficiency is.
Therefore we chose the first approach.
Using this optimization we obtained a method that in a few minutes calculates $N(5) = 7$.
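For tiny dimensions the savings from deduplication can be quantified directly by counting equivalence classes with the canonical-form idea. The following self-contained sketch (our own, assuming full $d!$ canonicalization rather than the two-level approximation) shows that for $d = 3$ the 21 two-element sets collapse to only 6 classes.

```python
from itertools import combinations, permutations, product

def canon(X):
    """Lexicographically smallest sorted tuple over all coordinate permutations."""
    d = len(X[0])
    return min(
        tuple(sorted(tuple(v[p[i]] for i in range(d)) for v in X))
        for p in permutations(range(d))
    )

d = 3
vectors = [v for v in product((0, 1), repeat=d) if any(v)]
pairs = list(combinations(vectors, 2))     # all 21 two-element sets of vectors
classes = {canon(list(X)) for X in pairs}  # 6 distinct canonical forms
```

A search that examines one representative per class therefore touches roughly a $d!$-fraction of the sets, which is exactly the speed-up exploited above.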
\section{Different Approaches}
\begin{comment}
As we have seen, Gaussian elimination gave good results for $d = 6$. But could we do significantly better, as
we did by replacing the procedure \proc{inIntConeTest} by Gaussian method? To come up with that answer, during
execution of the program we counted how many parameter-variables were part of the solutions, and how many times
values were assigned to the parameter-variables. Before we give the results, let us describe the notation.
By operation we consider one value assignment to a variable. For example, if we have a solution
$(t_1 + t_2, \frac{t_1 + t_2}{2}, t_1, t_2)$, and we assign $t_2 = t_1 = 1$, then the solution would be
$(2, 1, 1, 1)$, and we made 4 operations in total. If we assign $t_2 = 1, t_1 = 2$, then we will try to
assign values $(NA, 1.5, 2, 1)$, what would give 3 operations in total. Note that we are looking for
non-negative integer solutions, therefore we can not assign 1.5 to some variable, and we immediately stop
further assignment.\\
For each set of vectors $X$ we define a set of variables. For each of those sets of variables, we try to find
non-negative integer solutions. This kind of variables we call ``defined variables''.
Using a set of variables we get a system of linear equations. In order to check if that system defines a $\rm{NICG}$
set we force variable by variable to be 0. (In order to remind about this method, please, take a look
at the procedure \proc{NICG} in Subsection \ref{subsec:binary}.) Once we detect the system does not define
a $\rm{NICG}$ set, we stop further operations on that set of variables. Therefore, a set of variables $S$
involves solving at most $|S|$ systems of linear equations, where each system contains $|S| - 1$ variables.
These number of variables we call ``all variables''. When we write :``Statistics $(op, defVar, allVar)$.'' it means
:``On average, there are $op$ operations per single set of vectors; $defVar$ operations per single defined variables;
$allVar$ operations per single variable from set of all variables.'' We got the following results.
\begin{itemize}
\item $d = 6$, statistics (6, 4, 2).
\item $d = 7$. In this case, we could not find average number for all the sets of vectors, because we do not have solution
that runs in time. We used randomized solution to track these values. We made a two runs. In each of the
runs we track statistics over 50,000,000 sets of vectors. Result of the runs are the following.
\begin{enumerate}[1.]
\item For each set $X$ from this run $X^{(1)} = \emptyset$, statistics (14, 7, 4).
\item For each set $X$ from this run $X^{(1)} = X^{(2)} = \emptyset$, statistics (32, 16, 9).
\end{enumerate}
\item $d = 8$. As in the case $d = 7$ we used randomized approach. In this case, we made 4 runs. For the first
three of them, we track statistics over 50,000,000 sets. On the last one, we track over 100,000,000 sets.
\begin{enumerate}[1.]
\item For each set $X$ from the run, $X^{(1)} = \emptyset$, statistics (47, 21, 14).
\item For each set $X$ from the run, $X^{(1)} = X^{(2)} = \emptyset$, statistics (89, 38, 23).
\item For each set $X$ from the run, $X^{(1)} = X^{(2)} = X^{(3)} = \emptyset$, statistics (255, 106, 65).
\item For each set $X$ from the run, $X^{(1)} = X^{(2)} = X^{(3)} = \emptyset$, statistics (192, 79, 53).
\end{enumerate}
\end{itemize}
The first thing we can notice is that in the cases $d = 6$ there is not too much what can be improved in
efficiency of Gaussian method. For every system of equations we make it gives result very fast. In the cases
$d = 7$ very fast method might be better than Gaussian method. In the cases $d = 8$, especially if vectors
are not sparse, reasonably fast method for solving integer linear programming problems should perform
better than Gaussian elimination. For $d > 8$ we believe that Gaussian elimination becomes
so slow that it should be avoided.
\end{comment}
In order to compute $N(d)$ we used general solvers for systems of equations with non-negative
integer variables. The solvers were used
to answer whether a particular set of vectors $X$ has the $\rm{NICG}$ property.
If $M$ is the matrix that represents $X$, then the existence of the
NICG property can be decided as follows:
\begin{itemize}
\item Let $M^k$ be defined as the matrix whose $k$-th column contains only 0s, and every other column
is a copy of the corresponding column of $M$.
\item If for every matrix $M^k$, for $1 \le k \le \rm{\# of\ columns \ in \ } M$,
there is no non-negative integer solution,
then $X$ has the $\rm{NICG}$ property; otherwise it does not.
\end{itemize}
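The $M^k$ formulation can also be carried out by a tiny exhaustive solver. The following is our own illustrative sketch for very small systems, a stand-in for an ILP solver, and not the Gaussian-elimination implementation actually used in the experiments.

```python
from itertools import product

def has_nonneg_int_solution(M, b, bound):
    """Does M x = b admit x with non-negative integer entries (each < bound)?
    Exhaustive search; only viable for very small systems."""
    cols = len(M[0])
    for x in product(range(bound), repeat=cols):
        if all(sum(M[i][j] * x[j] for j in range(cols)) == b[i]
               for i in range(len(M))):
            return True
    return False

def nicg_by_columns(X):
    """NICG(X) via the M^k formulation: zero out column k and ask for a
    non-negative integer solution; NICG holds iff no M^k system is solvable."""
    d, n = len(X[0]), len(X)
    b = [sum(v[i] for v in X) for i in range(d)]          # the sum of X
    M = [[X[j][i] for j in range(n)] for i in range(d)]   # vectors as columns
    bound = max(b) + 1
    for k in range(n):
        Mk = [[0 if j == k else M[i][j] for j in range(n)] for i in range(d)]
        if has_nonneg_int_solution(Mk, b, bound):
            return False
    return True
```

Zeroing column $k$ has the same effect as the constraint $x_k = 0$ mentioned in the solver-based encodings below.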
We tried this approach using the standard solvers GLPK and jOpt. Neither of them was nearly as efficient on this problem as our
implementation of the NICG test using Gaussian elimination. We suspect that the reason behind
this is the fact that we were mainly working with systems of a small number of equations and that our internal implementation avoids calls into external libraries.
Furthermore, we faced memory management bugs in GLPK.
\begin{comment}
\subsection{GLPK}
We tried to replace Gaussian method by GLPK (GNU Linear Programming Kit) library.
More information on the library can be found at the address http://bjoern.dapnet.de/glpk/.
GLPK is written in C++. Java (Native) Interface is provided on the link. To implement the solution
using the library, we used methods given by the documentation doing the same as before -- for given $X$,
ignore every $x \in X$, one at time, and check is there non-negative integer solution. When we want
to solve system of equations we can do one of the following:
\begin{itemize}
\item Make a new instance of the solver and set constraints, like $0 \leq x_i \leq UPPER\_BOUND$.
$UPPER\_BOUND$ might be a guess value, similarly to what we used before, or simply $d \log{d}$.
\item Before starting search for the solution, make a single instance of solver. The solver should
include all the variables ($2^d$ of them). For given set of vectors $X$, for which we should
test $\rm{NCIG}(X)$ property, if system representing $X$ should contain $x_i$, then we
set a constraint $0 \leq x_i \leq UPPER\_BOUND$, otherwise we set $x_i = 0$. Note that $x_i = 0$
is same as we do not use $x_i$ in the system.
\end{itemize}
We tried the library using both approaches. In the first case, we got a memory error
after a few hundred iterations. Note that we are solving millions of equations, running the solver repeatedly;
we suspect that memory management in the library is not implemented correctly. When we tried solver on an individual problem instance, it worked without problems;
we got correct solution and messages printed by the solver were user-friendly. The memory error message printed by
the solver is :``ufree: ptr ..... memory allocation error''. More on that is provided on the link
\url{http://www.mail-archive.com/help-glpk@gnu.org/msg00543.html}.
In the second case we had two options -- 1) enable printing the messages by using method
$\proc{enablePrints}(\const{true})$, 2) unable printing the messages by calling $\proc{enablePrints}(\const{false})$.
For unknown reasons to us, not printing the messages raise a fatal error. The exact message we got in that case
is :``A fatal error has been detected by the Java Runtime Environment: EXCEPTION\_ACCESS\_VIOLATION
(0xc0000005) at pc=0x0229fe2f, pid=3016, tid =3248.'' If we enable printing the messages, then everything works
fine, but program becomes too slow. It uses significant amount of time for printing, what do not give us
precise information how fast library is. More information on this problem is provided by the link
http://www.mail-archive.com/bug-glpk@gnu.org/msg00415.html.
Probably by proper handling, by the library itself, exceptions and errors raised by solving systems of equations
would fix the problems we encounter. As we have mentioned, we do not believe that a third party libraries
could improve Gaussian method significantly, thus we did not get into too many details on the library
C++ implementation, and did not find where the errors occur. Probably the library is not meant to be used for
a lot of iterations, but just for a few.
\end{comment}
\begin{comment}
\subsection{jOpt}
We tried, another, library jOpt (Java Optimization Programming Language), this time written in Java.
The whole library and documentation can be found by following the address http://jopt.sourceforge.net/.
We did not find some examples provided along with the library, or in the documentation, but luckily
names of methods and theirs usage is intuitive, therefore it was not a problem to use the library through API.
Using the library we got much slower solution than with our best algorithm. It was more than one hundred
times slower. We are not sure what is the reason.
We noticed that almost every method we used return an object as its result. Almost non of them
is transforming the given object(s). It might be one of the reasons.
\end{comment}
\begin{comment}
Below is provided a source code used for the library.
\begin{lstlisting}[mathescape, title = NICG using jOpt]
public boolean NICGJOPT(ArrayList<VectorElem> X){
VectorElem sum = new VectorElem();
for (VectorElem vector : X)
sum.add(vector);
VectorElem currVector;
// make variables
CspIntVariable vars[] = new CspIntVariable[X.size()];
for (int varIdx = 0; varIdx < X.size(); varIdx++)
vars[varIdx] = new IntVariable("x" + varIdx, 0, UPPER_BOUND);
for (int notUse = 0; notUse < X.size(); notUse++){
vars[notUse] = new IntVariable("x" + notUse, 0, 0);
CspSolver solver = SolverImpl.createSolver();
solver.setAutoPropagate(true);
try{
// make expression, and then use it to make a constraint
for (int eqIdx = 0; eqIdx < d; eqIdx++){
if (sum.getComponent(eqIdx) == 0)
continue;
CspIntExpr expr = null;
for (int idx = 0; idx < X.size(); idx++){
currVector = X.get(idx);
if (currVector.getComponent(eqIdx) == 0)
continue;
if (expr == null)
expr = (CspIntExpr)(vars[idx]);
else
expr = expr.add(vars[idx]);
}
solver.addConstraint(expr.eq(sum.getComponent(eqIdx)));
}
boolean found = solver.solve(vars);
if (found)
return false;
}
catch(Exception ex){
// There is no solution.
}
solver.clear();
vars[notUse] = new IntVariable("x" + notUse, 0, UPPER_BOUND);
}
return true;
}
\end{lstlisting}
In the method $\proc{NICGJOPT}$ we used the class $VectorElem$. It is class that represents a single vector.
Methods invoked on such objects have intuitive names, therefore we are not going to explain them in details.
On the line 9 we define an array of variables. The object $CspIntVariable$ represents an integer variable.
Similar objects are: $CspBooleanVariable$, $CspDoubleVariable$, $CspFloatVariable$, $CspIntSetVariable$,
$CspLongVariable$, $CspNumVariable$
and $CspSetVariable$. By using appropriate object we actually set a constraint. We can define an integer
variable by its name and scope. On the line 17 we set auto propagate on $\const{true}$. Auto propagate
system will check satisfiability of constraints in real time, while we are adding them. If a set of constraints
can not be satisfied, it is good to discover it as soon as possible.\\
On the line 14 we choose which variable to ignore by setting scope to be $[0 ,0]$. On the line 46
we are recovering that variable.\\
From the line 20 to the line 36 we are constructing the constraints. To make a constraint we have to define
an expression first. For example, if we define an int expression $CspIntEpxr expr$, then we can make a constraint
by invoking $expr.\proc{between}(min, max)$, $expr.\proc{eq}(val)$, $expr.\proc{lt}(val)$ and
$expr.\proc{gt}(val)$. The invokes would give the constraints $min < exp < max$, $expr = val$, $exp < val$ and
$exp > val$, respectively. There are a few more methods on $CspIntEpxr$ for making constraints.
Except the class $CspIntEpxr$, there are classes $CspBooleanExpr$, $CspDoubleExpr$, $CspFloatExpr$, $CspLongExpr$
and $CspNumExpr$. Not all of them have the same methods as $CspIntEpxr$, but they are very similar to those
mentioned. More information on that is given in the documentation.\\
It is very convenient that variable is an expression by itself. That fact we used on the line 30. Once we have
an expression $expr$, it is very easy to make a new expression. It can be done by using methods
$expr.\proc{add}(expr1)$, $expr.\proc{divide}(expr1)$, $expr.\proc{subtract}(expr1)$, $expr.\proc{multiply}(expr1)$,
what as the result gives $expr + expr1$, $expr / expr1$, $expr - expr1$, $expr * expr1$, respectively.
The value $expr1$ can be: an object $CspDoubleExpr$; an object $CspFloatExpr$;
an object $CspIntExpr$; an object $CspLongExpr$; a double value; a float value; an int value; a long value.
Constructing other types of expressions can be done in a similar way.
Once we define a constraint, we can add it to the solver as we did on the line 35.
By invoking the method $\proc{solve}(vars)$ on the solver, as the result we get $\const{true}$ if set of the
constraints can be satisfied, otherwise we get $\const{false}$. Probably you ask yourself what is the point of
calling the method \proc{solve} with parameter $vars$, because anyway the solver will take into account all the
constraint. The documentation for this method contains :``Resets the solver and
\textbf{locates a solution for an array of variables within the problem contained by the solver}.''.
The following example will make it a bit more clear.\\
Suppose we have constraints $x + y + z = 14, 1 \leq x, y, z \leq 9$. If we invoke $solver.\proc{solve}(x, y, z)$
then the result would be $x = 1, y = 4, z = 9$. If we invoke $solver.\proc{solve}(y)$, the result would be
$x \in [4, 9], y = 1, z \in [4, 9]$, what means that there exists solution when $x \in [4, 9]$. In other words
specifying a particular variable force solver to solve constraints by computing values to variables.
\end{comment}
| {
"timestamp": "2019-03-21T01:20:45",
"yymm": "1903",
"arxiv_id": "1903.08571",
"language": "en",
"url": "https://arxiv.org/abs/1903.08571",
"abstract": "A non-redundant integer cone generator (NICG) of dimension $d$ is a set $S$ of vectors from $\\{0,1\\}^d$ whose vector sum cannot be generated as a positive integer linear combination of a proper subset of $S$. The largest possible cardinality of NICG of a dimension $d$, denoted by $N(d)$, provides an upper bound on the sparsity of systems of integer equations with a large number of integer variables. A better estimate of $N(d)$ means that we can consider smaller sub-systems of integer equations when solving systems with many integer variables. Furthermore, given that we can reduce constraints on set algebra expressions to constraints on cardinalities of Venn regions, tighter upper bound on $N(d)$ yields a more efficient decision procedure for a logic of sets with cardinality constraints (BAPA), which has been applied in software verification. Previous attempts to compute $N(d)$ using SAT solvers have not succeeded even for $d=3$. The only known values were computed manually: $N(d)=d$ for $d < 4$ and $N(4) > 4$. We provide the first exact values for $d > 3$, namely, $N(4)=5$, $N(5)=7$, and $N(6)=9$, which is a significant improvement of the known asymptotic bound (which would give only e.g. $N(6) \\le 29$, making a decision procedure impractical for $d=6$). We also give lower bounds for $N(7)$, $N(8)$, $N(9)$, and $N(10)$, which are: $11$, $13$, $14$, and $16$, respectively. We describe increasingly sophisticated specialized search algorithms that we used to explore the space of non-redundant generators and obtain these results.",
"subjects": "Logic in Computer Science (cs.LO)",
"title": "Identifying Maximal Non-Redundant Integer Cone Generators",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347883040039,
"lm_q2_score": 0.7279754548076477,
"lm_q1q2_score": 0.7093646081960011
} |
https://arxiv.org/abs/hep-th/9312023 | On free differentials on associative algebras | A free differential for an arbitrary associative algebra is defined as a differential with a uniqueness property. The existence problem for such a differential is posed. The notion of optimal calculi for given commutation rules is introduced and an explicit construction of it for a homogenous case is provided. Some examples are presented. | \section{Introduction}
A differential $d:R \rightarrow\, _{R}\!M\!_{R}$ is called free if the differential
of any element $v$ has a unique presentation of the form $dv=dx^{i}\cdot v_{i}$,
where $x^1,\ldots ,x^n$ are generators of the algebra and $dx^1,\ldots ,dx^n$ their
differentials. Any free differential defines a commutation formula
$vdx^i=dx^k\cdot A(v)^i_k$, where $A:v \mapsto A(v)^i_k$ is an algebra
homomorphism $A:R \rightarrow R_{n \times n}$. It is easy to see that for any
homomorphism $R \rightarrow R_{n \times n}$ there exists not more than one
free differential. We are going to consider the existence problem of such a
differential. We will show that for a given commutation rule
$vdx^i=dx^k\cdot A(v)^i_k$ a free algebra generated by the variables
$x^1,\ldots ,x^n$ has a related free differential. We will define an {\it optimal} algebra with
respect to a fixed commutation rule. In the homogeneous case this algebra
is characterized as the unique algebra which has no nonzero $A$-invariant
subspaces with zero differentials. Finally, we will consider a number of
examples of optimal algebras for different commutation rules. In particular, we will
describe two-variable commutation rules which define a commutative optimal algebra.
This article is closely related to the well-known paper of Wess and Zumino
\cite{Wess}. In our terms, they prove in particular that a system of $n^2$
quadratic forms vanishes in the optimal algebra if the Yang-Baxter equation
holds.
\section{Free differential calculi}
Recall that a differential is a linear mapping from an algebra $R$
to a bimodule $M$ satisfying the Leibniz rule:
$$
d(uv)=d(u)v + ud(v)
\medskip $$
{\bf Lemma 1.1} \em
A differential $d$ has the uniqueness property iff $\Omega_d(R)=R^{\flat}
d(R)R^{\flat}$ is a free right $R$-module freely generated by $dx^1,\ldots ,
dx^n$ (here
$R^{\flat}=R$ if $R$ has a unit and the augmented algebra otherwise).
\medskip \\ \em
Due to the lemma the following definition is natural. \smallskip \\
{\bf Definition 1.2}
A differential is said to be {\it free} if it has the uniqueness property.
\medskip \\
That definition essentially depends on the generating space $V=\sum x^i F$.
Let us consider as an example the case $V=R$. Of course, if $R$ is not finite
dimensional, then we have an infinite set of generators. Nevertheless, there
exists a free differential with respect to that space of generators: it is exactly
the universal derivation.
If $d$ is a free differential, then linear maps $D_k:R\rightarrow R$ (partial
derivatives) can be defined by the formula:
$$
d\,v=dx^k\cdot D_k(v)
\eqno (1)
$$
Those maps satisfy the relations
$$
D_k(x^i)= \delta^i_k ,
\eqno (2)
$$
where $\delta^i_k$ is the Kronecker delta.\medskip \\
{\bf Lemma 1.3} \em
A linear map $A_d:R\rightarrow R_{n\times n}$ from the algebra $R$ into the
algebra of $n$ by $n$ matrices over $R$ given by the formula
$$
A_d(v)^i_k= D_k(vx^i)- D_k(v)x^i \eqno (3)
$$
is an algebra homomorphism i.e.
$$
A_d(uv)^i_k =A_d(u)^l_k A_d(v)^i_l \eqno (4)
$$
\em
{\it Proof:} Let $v \in R$. The left multiplication $A(v):\omega \mapsto
v\omega$ is an endomorphism of the right module $\Omega_d(R)$. The ring of all
endomorphisms of a free module of rank $n$ is isomorphic to the ring of
all $n$ by $n$ matrices. Therefore, we can find a homomorphism $A: R
\rightarrow R_{n\times n}$ defined by the formula
$$
v\,dx^i=dx^k\cdot A(v)_k^i
$$
By the Leibniz rule we have
$$
v\,dx^i=d(vx^i) - d(v)x^i= dx^k[D_k(vx^i)-D_k(v)x^i]
$$
therefore,
$$
dx^k\cdot A_d(v)^i_k=dx^k\cdot A(v)^i_k
$$
i.e. $A_d=A$ and $A_d$ is also a homomorphism.\medskip \hfill $\Box$
Let us consider a linear map $D:R\rightarrow R^n$ from $R$ to the space of columns
of height $n$ acting by the formula
$$
D(v)= \left( \begin{array}{c}
D_1(v)\\ \vdots \\ D_n(v)
\end{array} \right) \ \ \ \!\!\!\! ,\;\; i.e. \ \ \ D= \left( \begin{array}{c}
D_1\\ \vdots \\ D_n
\end{array} \right) \medskip
$$\\
{\bf Proposition 1.4} \em
The map $D$ and homomorphism $A_d$ are connected by the relation
$$
D(uv)=D(u)v + A_d(u)D(v) \eqno (5)
$$\\ \em
{\it Proof: } We have
$$
u\,dv=u\,dx^i\cdot D_i(v)=dx^k\cdot A_d(u)^i_k D_i(v)
$$
and
$$
dx^k\cdot D_k(uv)=d(uv)=d(u)v\,+\,u\,dv=dx^k\cdot [D_k(u)v\,+\,A_d(u)^i_kD_i(v)]
$$
i.e. by the uniqueness condition
$$
D_k(uv)=D_k(u)v\,+\,A_d(u)^i_kD_i(v)
$$ \hfill $\Box$
The converse statement is also valid. \medskip \\
{\bf Proposition 1.5} \em
Let $R$ be an algebra generated by elements $x^1,\ldots ,x^n$
and $A:R \rightarrow R_{n \times n}$ be an algebra homomorphism. If $D:R\rightarrow R^n$
is a linear map such that
$$
D_k(x^i)=\delta^i_k \eqno (6)
$$
$$
D(uv)=D(u)v+A(u)D(v), \eqno (7)
$$
then the map $\Delta:v\mapsto dx^k\cdot D_k(v)$ is a free differential, where
$\Omega_\Delta (R)=\sum dx^i\cdot R$ is a free right module with the left module
structure defined by commutation rule, i.e. $A_\Delta =A$.\\ \em
{\it Proof: } We have to prove $\Delta(x^i)=dx^i$ and the Leibniz formula.
The first equality follows from (6) and the definition of $\Delta$. Finally
$$
\Delta(uv)=dx^k\cdot D_k(uv)=dx^k\cdot [D_k(u)v\,+\,A(u)^i_kD_i(v)]=
\Delta(u)v\,+\,u\Delta(v).
$$
\medskip \hfill $\Box$
A natural question concerning Proposition 1.5 arises here. If a homomorphism
$A$ is given, then formula (7) allows one to calculate partial derivatives
of a product in terms of its factors. That fact and formula (6) show that for
a given $A$ there exists not more than one $D$ satisfying formulas (6) and (7).
It is not clear yet whether there exists at least one $D$ of such a type. Thus, our
first task is to describe those homomorphisms $A$ for which there exist free
differentials with $A_d=A$. \medskip \\
{\bf Theorem 1.6} \em
Let $R=F<x^1,\ldots ,x^n>$ be a free algebra generated by
$x^1,\ldots ,x^n$ and $A^1,\ldots ,A^n$ be any set of $n\times n$ matrices over
$R$. There exists a unique free differential $d$ such that $A_d(x^i)=A^i$.\\
\em
{\it Proof: } The map $x^k \mapsto A^k$ can be uniquely extended to a
homomorphism of algebras $A:R \rightarrow R_{n\times n}$. Let us define a map
$D$ on monomials in $x^1,\ldots ,x^n$ by induction on its degree. Let
$D_k(x^i)=\delta_k^i$ and
$$
D_k(x^iv)=\delta_k^i v\,+\,A(x^i)^j_kD_j(v),\ \ \ \ where \ A(x^i)=A^i
$$
We have to prove formula (7) for arbitrary elements $u,\,v$. It can be done
by induction on the degree of the monomial $u$.\\
If this degree is equal to one, then the defining formula for $D$ is exactly (7) and gives the required result. Let
$u=x^iu_1$. Then by the equality $A(u)=A(x^iu_1)=A(x^i)A(u_1)$ and by
induction supposition we have
$$
D_k(uv)=D_k(x^iu_1v)=\delta^i_ku_1v\,+\,A(x^i)^j_kD_j(u_1v)=
$$
$$
=[\delta^i_ku_1\,+\,A(x^i)^j_kD_j(u_1)]v\,+\,A(x^i)^l_kA(u_1)^j_lD_j(v)=
D_k(u)v\,+\,A(u)^j_kD_j(v)
$$
\nopagebreak \hfill $\Box$
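As a simple illustration of Theorem 1.6 (our own example, not taken from the original text), consider the commutation rule $A(v)^i_k=\delta^i_k v$, i.e. $v\,dx^i=dx^i\cdot v$. This $A$ is clearly an algebra homomorphism, and the inductive construction from the proof yields, on quadratic monomials:

```latex
% The rule A(v)^i_k = \delta^i_k v is multiplicative:
%   A(uv)^i_k = \delta^i_k uv = \delta^l_k u\,\delta^i_l v = A(u)^l_k A(v)^i_l.
% Applying D_k(x^i v) = \delta^i_k v + A(x^i)^l_k D_l(v) with v = x^j:
\[
  D_k(x^i x^j) \;=\; \delta^i_k\, x^j \;+\; x^i\, \delta^j_k ,
  \qquad\mbox{so}\qquad
  d(x^i x^j) \;=\; dx^i\cdot x^j \;+\; dx^j\cdot x^i .
\]
```

Thus when the differentials commute with all algebra elements, the maps $D_k$ recover the usual two-sided Leibniz partial derivatives.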
Let now $R$ be a non-free algebra defined by the set of generators $x^1,\ldots,x^n$
and the set of relations $f_m(x^1,\ldots ,x^n)=0,\ m\in M$, i.e. $R=\hat{R}/ \hat{I}$, where
$\hat{R}=F<\hat{x}^1,\ldots ,\hat{x}^n>$ is a free algebra and $\hat{I}$ is its ideal generated by
elements $f_m(\hat{x}^1,\ldots ,\hat{x}^n),\,m\in M$.
Let us denote by $\pi$ the natural projection $\hat{R} \rightarrow R$ such that
$\pi(\hat{x}^i)=x^i$. Since $R_{n\times n}= R\otimes F_{n\times n}$, $\pi$
defines an epimorphism $\hat{\pi}:\hat{R}_{n\times n}\rightarrow R_{n\times n}$ by the formulae
$\hat{\pi}=\pi \otimes id$, where $id: F_{n\times n}\rightarrow F_{n\times n}$ is the
identity map.\\
If $A:R\rightarrow R_{n\times n}$ is any homomorphism of algebras, then we have the
following diagram of algebra homomorphism
$$
\begin{array}{ccccc}
\hat{R} & \stackrel{\hat{A}}{\longrightarrow} & \hat{R}\otimes F_{n\times n} & = & \hat{R}_{n\times
n}\\
\pi \downarrow & & \downarrow \pi \otimes id & = & \downarrow \hat{\pi} \\
R & \stackrel{A}{\longrightarrow} & R\otimes F_{n\times n} & = & R_{n\times n}
\end{array} \eqno (8)
$$
Let us choose for any generator $x^i$ an arbitrary element $\hat{A}^i\in \hat{R}_{n\times n}$ such
that $\hat{\pi}(\hat{A}^i)=A^i$, where $A^i=A(x^i)$ (recall that $\hat{\pi}$ is an epimorphism). Then the map
$\hat{x}^i\mapsto \hat{A}^i$ can be extended to an algebra homomorphism $\hat{A}:\hat{R}
\rightarrow \hat{R}_{n\times n}$ (recall that $\hat{x}^1,\ldots ,\hat{x}^n$ are free variables). That
homomorphism completes (8) to a commutative diagram. For any relation $f_m
(\hat{x}^1,\ldots ,\hat{x}^n)$ we have:
$$
\hat{\pi}(\hat{A}(f_m(\hat{x}^1,\ldots ,\hat{x}^n))) = A(\pi(f_m(\hat{x}^1,\ldots ,\hat{x}^n)))=0. \eqno (9)
$$
Furthermore,
$$
ker\,\hat{\pi}=ker(\pi\otimes id)=ker\,\pi\otimes F_{n\times n}=\hat{I}_{n\times n},
$$
and finally
$$
\hat{A}(f_m(\hat{x}^1,\ldots ,\hat{x}^n)) \in ker\,\hat{\pi}=\hat{I}_{n\times n}.
$$
Theorem 1.6 claims that for the homomorphism $\hat{A}:\hat{R}\rightarrow \hat{R}_{n\times n}$ there exists
a unique free differential $\hat{d}$ of the free algebra $\hat{R}$. \medskip \\
{\bf Definition 1.7}
The differential $\hat{d}$ is called a {\it cover} differential with
respect to the homomorphism $A:R\rightarrow R_{n\times n}$\,. \medskip \\
Thus we have proved: \smallskip \\
{\bf Theorem 1.8} \em
For any homomorphism $A:R\rightarrow R_{n\times n}$ there exists
a cover differential $\hat{d}$ of the algebra $\hat{R}$. \medskip \\ \em
{\bf Proposition 1.9} \em
An algebra $R$ with generators $x^1,\ldots ,x^n$ and the set of
defining relations $\{f_m,\,m\in M \}$ has a free differential with respect to
a homomorphism $A:R\rightarrow R_{n\times n}$ if and only if
$$
\hat{D}_k(f_m(\hat{x}^1,\ldots ,\hat{x}^n))\in \hat{I}\ \ \mbox{for all}\ m\in M,\ 1\leq k\leq n,
$$
where $\hat{D}_k$ are partial derivatives of the cover differential $\hat{d}$.\\ \em
{\it Proof: } Let the free differential exist. We claim that the diagram
$$
\begin{array}{ccc}
\hat{R} & \stackrel{\hat{D}}{\longrightarrow} & \hat{R}^n\\
\pi \downarrow & & \downarrow \pi^n\\
R & \stackrel{D}{\longrightarrow} & R^n
\end{array}
$$
is commutative. Indeed, the difference $\Delta=D\circ \pi \,-\,\pi^n\circ \hat{D}$
acts trivially on generators:
$$
\Delta_k(\hat{x}^i)=D_k\pi(\hat{x}^i)\,-\,\pi \hat{D}_k(\hat{x}^i)=\delta^i_k\,-\,\pi(\delta^i_k)=0
$$
Commutativity of (8) implies $A(\pi(f))=\hat{\pi}(\hat{A}(f))$ and by (7) we have
$$
\Delta(fh)=D(\pi f\cdot \pi h)\,-\,\pi^n\hat{D}(fh)=
$$
$$
=D(\pi f)\pi h\,+\,A(\pi f)D(\pi h)\,-\,\pi^n(\hat{D}f\cdot h\,+\,\hat{A} f\cdot \hat{D} h)=
$$
$$
=\Delta(f)\pi h\,+\,A(\pi f)\Delta(h)
$$
By evident induction, $\Delta=0$. \\
Finally for any relation $f_m$ we have
$$
\pi^n \hat{D}(f_m(\hat{x}^1,\ldots ,\hat{x}^n))=D(\pi(f_m(\hat{x}^1,\ldots ,\hat{x}^n)))=0
$$
i.e. $\hat{D}_k(f_m)\in ker \pi=\hat{I}$. \\
Conversely, if $\hat{D}_k(f_m)\in ker \pi=\hat{I}$ then we have
$$
\hat{D}(uf_mv)=\hat{D}(u)f_mv\,+\,\hat{A}(u)\hat{D}(f_m)v\,+\,\hat{A}(u)\hat{A}(f_m)\hat{D}(v)
$$
$$
\equiv \hat{A}(u)\hat{A}(f_m)\hat{D}(v)\ \ \ \ \ \ \ (mod\,\hat{I}\, )
$$
By (8) one has $\hat{\pi} \hat{A}(f_m)=A(\pi f_m)=0$ and $\hat{A}(f_m)\in ker\,\hat{\pi}
=I_{n\times n}$. Therefore the maps $\hat{D}_k:\hat{R}\rightarrow \hat{R}$ induce maps
$D_k:R=\hat{R}/\hat{I}\rightarrow R$ in such a way that
$D\circ \pi=\pi^n\circ \hat{D}$. Finally for arbitrary $u=\pi f\in R$ and
$v=\pi h\in R$ we have
$$
D(uv)=D(\pi f\cdot \pi h)=\pi^n\left(\hat{D}(f)h\,+\,\hat{A}(f)\hat{D}(h)\right)=
$$
$$
=D(\pi f)\pi h\,+\,A(\pi f)D(\pi h)=D(u)v\,+\,A(u)D(v)
$$
and by Proposition 1.5 the proposition is proved. \medskip \hfill $\Box$\\
{\bf Corollary 1.10} \em
Let an algebra $R$ be defined by generators $x^1,\ldots ,x^n$
and the set of homogeneous relations $\{f_m \}$ of the same degree. If $A:R
\rightarrow R_{n\times n}$ acts linearly on generators $A(x^j)^i_k=\alpha^{i j}_{kl}
x^l$, then for the pair $(R, A)$ there exists a free differential iff
for all $m$
$$
\hat{d} f_m=0\ \ . \eqno (10)
$$
\\ \em
{\bf Definition 1.11}
An ideal $I$ of a free algebra $\hat{R}=F<\hat{x}^1,\ldots ,\hat{x}^n>$ is said to be
{\it comparable} with a homomorphism $A:\hat{R}\rightarrow \hat{R}_{n\times n}$ if the factor
ring $\hat{R}/I$ has a free differential satisfying the commutation rules
$$
x^jdx^i=dx^k\cdot A(x^j)^i_k\ \ . \eqno (11)
$$\\
If an ideal $I$ is $A$-comparable, then Lemma 1.3 defines a homomorphism
$A:r\mapsto A^i_k(r)$ from the factor algebra into the matrix algebra over it.
Thanks to Proposition 1.9, it follows that $I$ is $A$-invariant and $A$-stable
in the sense of the following definition: \smallskip \\
{\bf Definition 1.12}
An ideal $J$ of the algebra $\hat{R}$ is said to be $A$-{\it invariant}
if $A^i_k(J)\subseteq J$, where $A:r\mapsto A^i_k(r)$ is a homomorphism. An
ideal $I$ is said to be $A$-{\it stable} if $D_k(I)\subseteq I$ for any
of the partial derivatives $D_k$ defined by the differential $d$ corresponding to
$A$ (see Theorem 1.6). \medskip \\
For any homomorphism $A$ there exists the largest $A$-comparable ideal $I(A)$
-- the sum of all $A$-comparable ideals. It is again $A$-comparable
because a sum of invariant ideals is invariant and a sum of stable ideals
is stable.
Now, we are going to describe the ideal $I(A)$ in the homogeneous case. If a
homomorphism $A$ preserves a degree, then it must act linearly on generators
$A^i_k(\hat{x}^j)=\alpha^{ij}_{kl}\hat{x}^l$. Therefore, the homomorphism $A$ is defined by
the 2-covariant 2-contravariant tensor $A=\alpha^{ij}_{kl}$. \medskip \\
{\bf Theorem 1.13} \em
For any 2-covariant 2-contravariant tensor $A=\alpha^{ij}_{kl}$
the ideal $I(A)$ can be constructed by induction as the homogeneous space
$I(A)=
I_1(A)+I_2(A)+I_3(A)+\cdots$ in the following way:
\begin{enumerate}
\item $I_1(A)=0$
\item Assume that $I_{s-1}(A)$ has been defined, and let $U_s$ be the space of all
polynomials $m$ of degree $s$ such that $D_k(m)\in I_{s-1}(A)$ for all
$k$, $1\leq k\leq n$. Then $I_s(A)$ is the largest $A$-invariant
subspace of $U_s$.
\end{enumerate} \em
{\it Proof: } First of all, we should note that the maximal
$A$--comparable ideal has to be homogeneous (graded). It is sufficient
to prove that every $A$--comparable ideal is contained in a homogeneous
one. Since our free algebra $\hat{R}=\hat{R}_1+\hat{R}_2+\ldots$ is graded, every element
$u \in \hat{R}$ has a unique decomposition $u=u_1+u_2+\ldots$ into homogeneous
components. Let $J$ be an arbitrary $A$--comparable ideal. Define $J_s=
\{u_s: u \in J \}$. For $u\in J$ one has $A_k^i(u)=A_k^i(u_1)+A_k^i(u_2)
+\ldots \,\in A_k^i(J)\subseteq J$ and $deg A_k^i(u_s)=deg u_s =s$. Therefore
$A_k^i(J_s)\subseteq J_s$. Analogously, $D_k(u)=D_k(u_1)+D_k(u_2)+\ldots\,
\in D_k(J)\subseteq J$ and $deg D_k(u_s)=s-1$. So $D_k(J_s)\subseteq J_{s-1}$
and the sum $J_1+J_2+\ldots$ is an $A$--comparable subset. Similarly,
$\hat{R}_tJ_s\hat{R}_p\subseteq J_{t+s+p}$, hence $J_1+J_2+\ldots$ is an ideal in $\hat{R}$.\\
The next step is to prove that $I(A)$ is an ideal.
It is sufficient to show that $I_{s-1}\hat{x}^i+\hat{x}^jI_{s-1}\subseteq I_s \ \forall
\,i,\,j$. Let $V$ be the space generated by the variables $\hat{x}^1,\ldots ,\hat{x}^n$.
Let us prove by induction that $I_{s-1}V+VI_{s-1}\subseteq I_s$. We have
$$
D_k(I_{s-1}V+VI_{s-1})\subseteq D_k(I_{s-1})V+A^j_k(I_{s-1})D_j(V)+
D_k(V)I_{s-1}+
$$
$$
+A^j_k(V)D_j(I_{s-1})
\subseteq I_{s-2}\cdot V+I_{s-1}+I_{s-1}+V\cdot I_{s-2}\subset I_{s-1}
$$
It follows that $I_{s-1}V+VI_{s-1}\subseteq U_s$. Finally, the space
$I_{s-1}V+VI_{s-1}$ is $A^j_k-$ invariant as so are $I_{s-1}$ and $V$.
Therefore $I_{s-1}V+VI_{s-1}\subset I_s$ and $I$ is an ideal. \\
Let now $J=J_1+J_2+\ldots$ be an arbitrary (graded) $A-$comparable ideal.
We are going to prove by induction that $J_s\subseteq I_s$. Let
$u=\beta_k\hat{x}^k\in J_1$. Then $\beta_kx^k=0$ in the factor ring $\hat{R}/J$.
Therefore $\beta_kdx^k=0$ and $\beta_k=0$ by uniqueness condition. So
$u=0$ in the free algebra and $0=J_1=I_1$.\\
Let $J_{s-1}\subseteq I_{s-1}$. By Proposition 1.9 one has $D_k(J_s)
\subset J$. All elements from $D_k(J_s)$ have degree equal to $s-1$.
Therefore $D_k(J_s)\subseteq J_{s-1}\subseteq I_{s-1}$ and by the definition
of $U_s$ we have $J_s\subseteq U_s$. Finally $J_s$ is $A$--invariant
space and by the definition of the space $I_s$ we obtained $J_s\subseteq
I_s$. \medskip \hfill $\Box$\\
Let us denote by $\hat{R}_A$ the factor algebra $\hat{R}/I(A)$. In some sense $\hat{R}_A$
is an {\it optimal} algebra which has a free differential with respect to the
commutation rule $A$. Indeed, Theorem 1.13 shows in particular that if a
homogeneous element is such that all elements of the invariant subspace
generated by it have all partial derivatives equal to zero, then that element
vanishes in the optimal algebra.
We also have proved that there exists maximal algebra which has a free
differential with any given commutation rule (this is the free algebra,
see Theorem 1.6). Of course, it is very interesting to consider a number of
concrete commutation rules $A$ and related algebras $\hat{R}_A$. \medskip \\
{\bf Example 1.}
Let us consider the diagonal commutation rule: $x^jdx^i=dx^i\cdot
q^{ij}x^j$, with the symmetry condition $q^{ij}q^{ji}=1, i\neq j$. If none
of the coefficients $q^{ij}$ is a root of a polynomial of the type
$\lambda^{[m]}\doteq \lambda^{m-1}+\lambda^{m-2}+\cdots +1$, then the optimal algebra $\hat{R}
_A$ is equal to $F<x^1,\ldots ,x^n>/\{q^{ij}x^ix^j=x^jx^i, \ i<j \}$.\\
If $(q^{ii})^{[m_i]}=0,\ 1\leq i\leq s$ with minimal $m_i$ then \\
$\hat{R}_A=F<x^1,\ldots ,x^n>/\{q^{ij}x^ix^j=x^jx^i,\ i<j,\ (x^i)^{m_i}=0,\
1\leq i\leq s \}$. \medskip \\
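As a sketch of why condition (10) of Corollary 1.10 admits these relations (using only the stated rule $x^jdx^i=dx^i\cdot q^{ij}x^j$ and the symmetry condition $q^{ij}q^{ji}=1$), take the relation $f=q^{ij}x^ix^j-x^jx^i$ and compute
$$
\hat{d}(q^{ij}x^ix^j)=q^{ij}\,dx^i\cdot x^j\,+\,q^{ij}q^{ji}\,dx^j\cdot x^i=
q^{ij}\,dx^i\cdot x^j\,+\,dx^j\cdot x^i=\hat{d}(x^jx^i),
$$
so that $\hat{d}f=0$ and the relation is indeed admissible. \medskip \\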
{\bf Example 2.}
Let $A=0$ i.e. $x^idx^j=0$. Then $\hat{d}$ is a homomorphism of right
modules and the optimal algebra is free $\hat{R}_A=\hat{R}$.\medskip \\
{\bf Example 3.}
Let $x^idx^j=-dx^i\cdot x^j$. Then the optimal algebra is the
smallest possible algebra generated by the space $V$ i.e. $\hat{R}_A=F<x^1,\ldots
,x^n>/ \{x^ix^j=0 \}$. \medskip \\
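This can be checked directly (a sketch): with $x^idx^j=-dx^i\cdot x^j$ every quadratic monomial has vanishing differential,
$$
\hat{d}(x^ix^j)=dx^i\cdot x^j\,+\,x^idx^j=dx^i\cdot x^j\,-\,dx^i\cdot x^j=0,
$$
hence $D_k(x^ix^j)=0\in I_1(A)$ for all $k$. By Theorem 1.13 the whole degree-two component then lies in $I_2(A)$, so every product $x^ix^j$ vanishes in the optimal algebra. \medskip \\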
{\bf Example 4.}
Let $x^1dx^1= dx^1\cdot (\alpha_2x^2+\cdots +\alpha_nx^n)$ and
x^idx^j=-dx^i\cdot x^j$ if $i\neq 1$ or $j\neq 1$. Then the optimal algebra
is almost isomorphic to the ring of polynomials in one variable. More
precisely,
$\hat{R}_A=F<x^1,\ldots ,x^n>/ \{x^ix^j=0\ \mbox{unless}\ i=j=1 \}$. \medskip \\
{\bf Example 5.}
If $n=2$ and $x^1dx^1=dx^1\cdot \mu x^2, \ \ x^1dx^2=-dx^1\cdot x^2, \ \
x^2dx^1=-dx^2\cdot x^1, \ \ x^2dx^2=dx^2\cdot \lambda x^1$, then the optimal algebra is
isomorphic to the direct sum of two copies of the polynomial algebra
$\hat{R}_A=F<x^1, x^2>/ \{x^1x^2=x^2x^1=0 \}$. \medskip \\
Finally, we can formulate a result which describes families of commutation
rules in two variables for which the optimal algebra is commutative.
\smallskip \\
{\bf Theorem 1.14} \em
In the two variable case, the following five series have commutative optimal
algebra:\\
\begin{enumerate}
\item \ \ \
$
x^1dx^1=dx^1\cdot u\,+\,dx^2\cdot s ,\ \ \ \
x^1dx^2=dx^1\cdot w\,+\,dx^2\cdot (\lambda s+x^1),
$
$$
x^2dx^1=dx^1\cdot(w+x^2)\,+\,dx^2\cdot(\lambda s),
$$
$$
x^2dx^2=dx^1\cdot (\lambda w)\,+\,dx^2\cdot (\lambda^2s-\lambda u+w+\lambda x^1+x^2);
$$
\item \ \ \
$
x^1dx^1=dx^1\cdot(x^1+\lambda w+s)\,+\,dx^2\cdot w, \ \ \ \
x^1dx^2=dx^1\cdot \gamma w\,+\,dx^2\cdot (x^1+s),
$
$$
x^2dx^1=dx^1\cdot (x^2+\gamma w)\,+\,dx^2\cdot s,\ \ \ \ \
x^2dx^2=dx^1\cdot \gamma s\,+\,dx^2\cdot (x^2+\gamma w-\lambda s);
$$
\item \ \ \
$
x^1dx^1=dx^1\cdot (x^1+\gamma w),\ \ \ \ \ x^1dx^2=dx^1\cdot w\,+\,dx^2\cdot x^1,
$
$$
x^2dx^1=dx^1\cdot (x^2+w),\ \ \ \ \ x^2dx^2=dx^1\cdot s\,+\,dx^2\cdot (x^2+w-\gamma s);
$$
\item \ \ \
$
x^1dx^1=dx^1\cdot u,\ \ \ \ \ \ x^1dx^2=dx^2\cdot x^1,
$
$$
x^2dx^1=dx^1\cdot x^2,\ \ \ \ \ \ x^2dx^2=dx^2\cdot v;
$$
\item\ \ \
$
x^1dx^1=dx^1\cdot u,\ \ \ \ \ \ x^1dx^2=dx^2\cdot u,
$
$$
x^2dx^1=dx^1\cdot x^2\,+\,dx^2\cdot (u-x^1),\ \ \ \ \ x^2dx^2=dx^1\cdot w\,+\,dx^2\cdot v.
$$
\end{enumerate}
where $u, v, w, s$ are arbitrary elements from the space $V$ and $\lambda, \gamma$
are parameters from the base field.\smallskip \\ \em
{\bf Remark. }The above series are not independent. For example, the standard
Newton--Leibnitz calculus ($x^idx^j=dx^j\cdot x^i$) belongs, as a special case,
to each of them: put $s=w=0$, $u=x^1$, $v=x^2$. A more detailed discussion
of the above examples and a classification theorem for calculi with a commutative
optimal algebra will be given elsewhere \cite{next}.
\medskip \\
{\small [arXiv:hep-th/9312023, ``On free differentials on associative algebras'', https://arxiv.org/abs/hep-th/9312023]}
\bigskip
\noindent arXiv:1804.11342 --- {\bf Hyperreal Numbers for Infinite Divergent Series}\\
{\it Abstract.} Treating divergent series properly has been an ongoing issue in mathematics. However, many of the problems with divergent series stem from the fact that divergent series were discovered prior to having a number system which could handle them. The infinities that resulted from divergent series led to contradictions within the real number system, but these contradictions are largely alleviated with the hyperreal number system. Hyperreal numbers provide a framework for dealing with divergent series in a more comprehensive and tractable way.
\section{The Problem of Infinite Series}
Historically, infinities have led to many problems in mathematics.
Infinities, when not handled carefully, easily lead to contradictions and indeterminacies.
Therefore, caution has always been urged when dealing with infinite series.
This is especially true with divergent infinite series.
Convergent infinite series generally behave like the value they converge to.
Given a series that converges to $2$ and another that converges to $3$, the sum of the values of the two series will be $5$ and their product will be $6$.
Therefore, the nature of these series can be summarized into a single number.
With divergent series, this is not so straightforward.
A lack of agreement on the rules for handling infinities has led to numerous problems with divergent series.
If a series diverges to infinity, is it greater than or equal to some other series that diverges to infinity?
Can the terms of the series be rearranged?
Can their spacing be modified?
Is $1 + 1 + 1 + \ldots$ equivalent to $1 + 0 + 1 + 0 + 1 + 0 + \ldots$?
A lack of answers to questions like these has stifled work on divergent series, and has caused many mathematicians to regard divergent series as invalid entities to work with rigorously.
\section{Working with Infinities}
Many paradoxes exist with infinities.
For instance, are there the same number of positive even integers as positive integers?
There are an infinity of them, but does that make them the same?
It seems pretty obvious that, on a number line, positive integers occur twice as often as positive even integers.
However, there are an infinite amount of both.
Cantor's solution to this problem is to separate the final quantity of a set (its cardinality) from the arrangement of a set (its ordinality).
The cardinal numbers do not behave in any way similar to real numbers.
The ordinals, on the other hand, behave in many ways similar to real numbers.
However, Cantor's own system for ordinal arithmetic is difficult to use, and doesn't translate well between transfinite and regular real arithmetic.
The hyperreal number line has many similarities to Cantor's ordinals, operating essentially at the level of ``ordinal'' in Cantor's system.
However, the hyperreal number line offers a way to do arithmetic with infinities in a way that very closely matches real arithmetic through the use of the transfer principle \citep{henle2003}.
The transfer principle states that any first-order proposition that is true for the reals is also true for the hyperreals.
This means that the standard arithmetic principles for dealing with real numbers will apply to hyperreal numbers as well.
The hyperreal number line operates with an infinite unit, $\omega$, that represents an order of infinity.\footnote{The choice of character/typography for the unit varies with the author. For instance, Keisler uses $H$ \citep{keisler2012}. $\omega$ was chosen because of its historical connection with ordinal-type infinities.}
The way it is usually handled, $\omega$ isn't a specific number in the typical sense, but rather more of a benchmark of infinity.
Previous work has shown that hyperreal numbers could be a potential solution to how values of divergent series can be represented \citep{gaastra2016}.\footnote{Other work worth mentioning in this area are \citep{paterson2018a} and \citep{paterson2018b}. In the current work, we will use a notation similar to \citep{keisler2012} to notate hyperreal values, and show how infinite series can be simplified to them. Paterson did the opposite, by notating hyperreal values with the infinite sum that represents them.}
The present paper will build on this original idea and establish a system for using hyperreal numbers to assign values to infinite series.
\section{Hyperreals and Partial Sums}
\label{partialsum}
The vast majority of issues with divergent series comes with the transition from partial sums to infinity.
As long as a series remains a \emph{partial} sum, arithmetic with the series is unproblematic.
Therefore, it would be beneficial to develop a system which matched the partial sum behavior of finite sums, but allowed the result to be generalized to infinity.
The value of a partial sum of a given length is sensitive to the order of the terms in the infinite sequence.
Imagine summing the first $n$ terms of an infinite series.
The result will not be the same with different orderings of terms.
If the extent of the partial summation is unknown, then it is also unknown the extent to which numbers can be reordered.
Additionally, tacking zeroes onto the beginning of the series can potentially \emph{change} the partial sum.
Therefore, although adding zeroes to the beginning of a series has the \emph{appearance} of being a null operation, it modifies the value of finite partial sums and can therefore lead to long-term changes in behavior.
Therefore, just as ordinal infinities differ because of order-dependent properties, so too will infinite series exist as heavily order-dependent entities.
To understand many of the rules that will be developed for infinite series, imagine that the rules are being built for merely doing partial sums to an unknown parameter $k$, where $k$ at least acts like a particular finite value, but is larger than any particular list index referenced by any finite manipulation of the series.
Some of these formulas will be further reducible due to the nature of the hyperreals, as will be discussed in Section~\ref{secPrincipalValue}.
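The order sensitivity described above can be illustrated with a small sketch (the helper `partial_sum` is ours, not part of any formal apparatus): compare $k$-term partial sums of $1+1+1+\ldots$ and $1+0+1+0+\ldots$.

```python
# Partial sums are order-sensitive: padding a series with zeroes changes
# the value of its k-term partial sums, even though only zeroes were added.
def partial_sum(terms, k):
    """Sum of the first k terms of a series prefix."""
    return sum(terms[:k])

ones = [1, 1, 1, 1, 1, 1, 1, 1]        # 1 + 1 + 1 + ...
padded = [1, 0, 1, 0, 1, 0, 1, 0]      # 1 + 0 + 1 + 0 + ...

print(partial_sum(ones, 6))    # 6
print(partial_sum(padded, 6))  # 3
```

The two series agree on their nonzero entries, yet no finite partial-sum length makes them interchangeable.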
\section{Pinning Down $\omega$}
Since $\omega$ operates as a benchmark instead of a number, the first task is to identify the benchmark to associate $\omega$ with.
This is actually to some extent an arbitrary decision.
Any infinitely large value could be used to establish a baseline $\omega$.
However, the value that seems most natural for $\omega$ (especially for summation) is the size of the set of positive integers.
Therefore, $\omega$ will be used to refer to the total quantity of positive integers.
Because of this, the notation used will be more specific when writing summations.
Instead of summing to the ambiguous infinity, $\infty$, a summation to the specific infinity of all positive integers, $\omega$, will be used.
Therefore, the series $1 + 2 + 3 + \ldots$ will be written as
\begin{equation}
\sum_{i = 1}^{\omega} i
\end{equation}
This will establish the starting benchmark for relationships among the different series.
\section{The Standard Summation}
Because ordinal infinities are so order-dependent, it is important to establish an official standardization of summation.
That is, $\sum_{1}^{\omega}$ will be different from $\sum_{0}^{\omega}$.
Even though it looks like series with these types of sums will have an identical number of terms (after all, both have infinitely many terms), using this methodology the latter one will actually have one more element than the former.
This is due to the principle established in Section~\ref{partialsum}.
Instead of taking $\omega$ to be infinite, pretend for a moment that $\omega$ is just an ordinary finite integer parameter.
Examine the series
\begin{equation}
\label{basicomega1}
\sum_{i = 1}^{\omega} 1.
\end{equation}
If $\omega$ represented an integer (say, $5$) instead of $\infty$, it would be obvious that this sum represents a different value from the series
\begin{equation}
\label{basicomega0}
\sum_{i = 0}^{\omega} 1.
\end{equation}
Equation~\ref{basicomega1} would represent the value $5$ while Equation~\ref{basicomega0} would represent the value $6$.
Therefore, it is clear that having matching indices matters.
In fact, our ability to sum divergent series will sometimes depend on having summations with equivalent numbers of terms.
Therefore, a ``standard'' starting point for summation will need to be established in order to ensure that like entities are being compared and reasoned about.
In computations it technically does not matter whether the starting point is $1$ or $0$, though the formulas would have to be reworked for the chosen starting index.
However, since $\omega$ has been defined as being the size of the set of all positive integers, it makes sense to start at $1$.
For the purposes of this paper, the ``standard'' way of summing will be to start with $1$ and proceed to $\omega$.
\section{Simple Arithmetic and Geometric Series}
\label{secSimpleSeries}
\subsection{Arithmetic Series}
Arithmetic series take the form
\begin{equation}
\sum_{i = 1}^{n} a + (i - 1)d.
\end{equation}
The sum of an arithmetic series, given a starting value $a$, the number of elements $n$, and distance between elements $d$, can be given by the formula
\begin{equation}
\sum_{i=1}^{n} a + (i - 1)d = \frac{n}{2}\left(2a + (n - 1)d\right).
\end{equation}
To find the sum of an infinite arithmetic series, $\omega$ is used for $n$, forming a hyperreal value.
That reduces the formula to
\begin{equation}
\label{arithmeticSeriesFormula}
\sum_{i=1}^{\omega} a + (i - 1)d = \omega a + \frac{\omega^2d}{2} - \frac{\omega d}{2}.
\end{equation}
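Formula (\ref{arithmeticSeriesFormula}) can be sketched computationally by encoding a hyperreal value as a dictionary from powers of $\omega$ to coefficients (the encoding and the helper name `arith_series` are our own illustration, not standard notation):

```python
from fractions import Fraction

def arith_series(a, d):
    """Hyperreal value of sum_{i=1}^{omega} (a + (i-1)d), encoded as
    {power of omega: coefficient}: omega*a + (d/2)*omega^2 - (d/2)*omega."""
    h = {1: Fraction(a) - Fraction(d, 2), 2: Fraction(d, 2)}
    return {p: c for p, c in h.items() if c != 0}  # drop zero terms

print(arith_series(1, 0))  # 1 + 1 + 1 + ...  ->  omega
print(arith_series(1, 1))  # 1 + 2 + 3 + ...  ->  omega^2/2 + omega/2
print(arith_series(1, 2))  # 1 + 3 + 5 + ...  ->  omega^2
```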
Therefore, to find the summation of the series $1 + 1 + 1 + \ldots$, one must only substitute in the correct parameters.
Since the starting value is $1$ and the distance between terms is $0$, this yields
\begin{align}
\sum_{i=1}^{\omega} 1 &= \omega \cdot 1 + \frac{\omega^2 \cdot 0}{2} - \frac{\omega \cdot 0}{2} \\
&= \omega + 0 - 0\\
&= \omega.
\end{align}
It is intuitively obvious that, since there are $\omega$ $1$s added together, their sum is $\omega$, just as would be true for any finite count.
The arithmetic series $1 + 2 + 3 + \ldots$ can be calculated using hyperreals as well.
\begin{align}
\sum_{i=1}^{\omega} i &= \omega \cdot 1 + \frac{\omega^2 \cdot 1}{2} - \frac{\omega \cdot 1}{2} \\
&= \frac{\omega^2}{2} + \frac{\omega}{2}.
\end{align}
The next arithmetic series to examine is $1 + 3 + 5 + \ldots$, which can be similarly calculated.
\begin{align}
\sum_{i=1}^{\omega} (2i - 1) &= \omega \cdot 1 + \frac{\omega^2 \cdot 2}{2} - \frac{\omega \cdot 2}{2} \\
&= \omega^2.
\end{align}
Thus, the value of $1 + 3 + 5 + \ldots$ is equal to $(1 + 1 + 1 \ldots)^2$.
Interestingly, as noted in Section~\ref{partialsum}, there is nothing intrinsically infinite about the behavior of $\omega$ in these series.
For instance, if $\omega$ was replaced with $5$, the results would hold.
That is, $(1 + 1 + 1 + 1 + 1)^2 = (1 + 3 + 5 + 7 + 9) = 25$.
Even though the sums are divergent, summing them has a very well-defined behavior within the combined hyper-real/partial sum methodology presented here.
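The observation that nothing here is intrinsically infinite can be checked by substituting a finite integer for $\omega$ (a sketch; `arith_hyperreal` and `evaluate` are illustrative helper names):

```python
from fractions import Fraction

def arith_hyperreal(a, d):
    """omega*a + (d/2)*omega^2 - (d/2)*omega as {power: coefficient}."""
    return {1: Fraction(a) - Fraction(d, 2), 2: Fraction(d, 2)}

def evaluate(h, n):
    """Substitute the finite integer n for omega."""
    return sum(c * n**p for p, c in h.items())

# 1 + 3 + 5 + 7 + 9 summed directly, and via the hyperreal with omega = 5:
assert evaluate(arith_hyperreal(1, 2), 5) == sum(range(1, 10, 2)) == 25
print(evaluate(arith_hyperreal(1, 2), 5))  # 25
```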
\subsection{Geometric Series}
Geometric series take the form
\begin{equation}
\sum_{i = 1}^{n} a r^{i - 1},
\end{equation}
where $n$ is the number of terms, $a$ is the starting term, and $r$ is the common ratio.
A value for a geometric series can be given by the formula
\begin{equation}
\sum_{i = 1}^{n} a r^{i - 1} = a \frac{1 - r^n}{1 - r}.
\end{equation}
Because an infinite series will have $\omega$ terms, $n$ can be replaced with $\omega$.
Let us begin by looking at the series $1 + 2 + 4 + 8 + \ldots$.
The value of this series can be given by the formula
\begin{align}
\sum_{i = 1}^{\omega} 2^{i - 1} &= 1\cdot\frac{1 - 2^\omega}{1 - 2} \\
&= 2^\omega - 1.
\end{align}
Divergent geometric series will generally have the same form.
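The same substitution check works for geometric series: with a finite $n$ in place of $\omega$, the closed form matches the partial sum exactly (a sketch; `geom_closed_form` is an illustrative helper):

```python
from fractions import Fraction

def geom_closed_form(a, r, n):
    """a * (1 - r^n) / (1 - r): value of sum_{i=1}^{n} a * r^(i-1)."""
    a, r = Fraction(a), Fraction(r)
    return a * (1 - r**n) / (1 - r)

# 1 + 2 + 4 + ... has hyperreal value 2^omega - 1; with omega replaced by 10:
assert geom_closed_form(1, 2, 10) == sum(2**i for i in range(10)) == 2**10 - 1
print(geom_closed_form(1, 2, 10))  # 1023
```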
Convergent series are also interesting.
The series $1 + \frac{1}{2} + \frac{1}{4} + \ldots$ can be plugged into the formula to yield
\begin{align}
\sum_{i = 1}^{\omega} \left(\frac{1}{2}\right)^{i - 1} &= 1\cdot\frac{1 - \left(\frac{1}{2}\right)^\omega}{1 - \frac{1}{2}} \\
&= 2 - 2\cdot \left(\frac{1}{2}\right)^\omega. \label{simpleConvergentGeometric}
\end{align}
\section{Generalizing to the Principal Value}
\label{secPrincipalValue}
In most discussions of hyperreal numbers, the halo of a number is considered the hyperreal values which are infinitely close to a standard real number.
However, this definition is too focused on real numbers.
We will consider the order of a hyperreal value to be its largest exponent of $\omega$.
This is the most significant term of the hyperreal value.
We will call this most significant term the principal value of the hyperreal.
The halo (also known as a monad) of a hyperreal consists of all of the hyperreals which have the same principal value.\footnote{Most texts on hyperreal numbers define the halo or monad of $x$ to be all of the values $y$ for which $x - y$ is infinitesimal \citep[pg.~21]{loeb2015} \citep[pg.~52]{goldblatt1998}.
However, defined in such a way, the infinitesimals $\omega^{-1}$ and $2\omega^{-1}$ are within a monad.
Using principal values, $\omega^{-1}$ and $2\omega^{-1}$ are in the same galaxy, but not the same monad.
You would have to have a term of lower-order infinity to be within a monad, such as $\omega^{-1}$ and $\omega^{-1} + \omega^{-2}$.
This seems to be the essence of what the other texts are getting at, but, since most mathematics focuses on the reals, their definitions were entirely based on using reals as a starting point.
Here, since we will have results in the hyperreals, we need definitions that are equally useful when the final result is a hyperreal number.}
We will use the $\simeq$ operator to denote two hyperreals which share the same principal value.\footnote{In practice, $\simeq$ can be replaced with $=$, as it denotes equality to the extent normally practiced in mathematics. For instance, the differential $\mathrm{d}\left(xy\right)$ is often stated as being \emph{equal} to $x\,\diff{y} + y\,\diff{x}$, but really it is just the principal value. The actual value is $x\,\diff{y} + y\,\diff{x} + \diff{y}\,\diff{x}$. The $\diff{y}\,\diff{x}$ term is always discarded because it is infinitely less significant than the other pieces. Even when discarding this term, the equality sign is used. Therefore, while the present paper will be pedantic about asserting exact equality or mere principal value, for most general purposes equality can be asserted even when only stating the principal value.}
Therefore, the halo of a hyperreal number consists of all of those numbers which share the same principal value.
Many people use ``infinitely close'' as a colloquialism to describe two hyperreals which share the same principal value.
However, technically it is not correct, since, when dealing with infinities, two hyperreals which differ by multiple infinities can be considered ``infinitely close.''
That is, $\omega^2 + 5\omega$, $\omega^2 - 12\omega$, and $\omega^2 + 23$ all share the same principal value, $\omega^2$.
They are infinitely apart, yet, colloquially, they can be considered ``infinitely close'' because their differences are infinitely less significant than their similarities.
When dealing with hyperreals, the principal value is the main one of concern.
So, for instance, while $1 + 2 + 3 + \ldots$ is exactly described by $\frac{\omega^2}{2} + \frac{\omega}{2}$, its principal value is just $\frac{\omega^2}{2}$.
Therefore, the formula given in (\ref{arithmeticSeriesFormula}) can actually be simplified to
\begin{equation}
\sum_{i=1}^{\omega} a + (i - 1)d \simeq \frac{\omega^2d}{2}
\end{equation}
if $d \neq 0$.\footnote{When $d = 0$, then the $\omega^2$ term goes to zero, and the series simplifies to $a\cdot \omega$ instead.}
Interestingly, we can see that, while the exact value of the hyperreal associated with a series depends on the starting point, the principal value depends only on the distance chosen, provided that $d \neq 0$.
Geometric series can use similar considerations.
You may have noticed that the hyperreal given for the series $1 + \frac{1}{2} + \frac{1}{4} + \ldots$ in Section~\ref{secSimpleSeries} is $2 - 2\cdot \left(\frac{1}{2}\right)^\omega$.
Typically, this series is thought to converge to $2$.
In fact, its principal value is $2$, because $\left(\frac{1}{2}\right)^\omega$ is an infinitesimal.
The use of principal values allows for a great amount of simplification for hyperreal values and formulas.
As an example, the ratio between two given arithmetic series can be solved for very simply.
\begin{align}
S_1 &= \sum_{i=1}^{\omega} a_1 + (i - 1)d_1 \simeq \frac{\omega^2(d_1)}{2} \nonumber \\
S_2 &= \sum_{i=1}^{\omega} a_2 + (i - 1)d_2 \simeq \frac{\omega^2(d_2)}{2} \nonumber \\
\frac{S_1}{S_2} &\simeq \frac{\frac{\omega^2(d_1)}{2}}{\frac{\omega^2(d_2)}{2}} = \frac{d_1}{d_2}
\end{align}
In other words, the principal value of the ratio of two arithmetic series is simply the ratio of the distances.
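Extracting a principal value from a dictionary encoding of a hyperreal is straightforward, and the ratio computation reduces to the leading coefficients (a sketch; the encoding and the helper `leading` are our own illustration):

```python
from fractions import Fraction

def leading(h):
    """(power, coefficient) of the most significant term of a hyperreal
    encoded as {power of omega: coefficient}."""
    p = max(pw for pw, c in h.items() if c != 0)
    return p, h[p]

s1 = {2: Fraction(1, 2), 1: Fraction(1, 2)}   # 1 + 2 + 3 + ...  (d1 = 1)
s2 = {2: Fraction(3, 2), 1: Fraction(-1, 2)}  # 1 + 4 + 7 + ...  (d2 = 3)

p1, c1 = leading(s1)
p2, c2 = leading(s2)
ratio = c1 / c2  # the omega^2 factors cancel since p1 == p2
print(ratio)  # 1/3, i.e. d1/d2
```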
\section{Series Manipulation Rules for Finite Subsets}
Many attempts to manipulate divergent series have resulted in contradictions, to the extent that many suggest that it is best to not attempt to do so.
The reason for these contradictions, however, lies in the treatment of the infinite nature of the number of values.
In the real system, $\infty$ is considered a boundless number.
That is, there is not $\infty + 1$ that is distinct from $\infty$.
Likewise, $\infty - 1$ is also infinity.
Essentially, within the real numbers, $\infty$ is used largely like an ambiguous infinite value, essentially saying that ``the real numbers can't handle this value.''
If, instead, the hyperreal numbers are used, then $\omega$ and $\omega + 1$ are distinct quantities, despite the fact that they are both infinite.
The rules for manipulating series come from these ideas.
\subsection{Finite Term Addition}
\label{finitetermaddition}
To begin with, it is possible to easily add a scalar value to a series, provided that it is added \emph{to} one of the particular terms of the series.
In other words, suppose the value $A$ is added to the series $1 + 2 + 3 + \ldots$.
This can be written as
\begin{equation}
A + \sum_{i = 1}^{\omega} i
\end{equation}
or as
\begin{equation}
A + (1 + 2 + 3 + \ldots).
\end{equation}
To integrate $A$ into the series, $A$ can be added to any distinct position.
The series could read as
\begin{equation}
(A + 1) + 2 + 3 + \ldots
\end{equation}
or
\begin{equation}
1 + 2 + (A + 3) + \ldots.
\end{equation}
All of these yield the same value for the final series, as long as partial sums are taken starting after the index where $A$ is added.
Additionally, $A$ can be spread across multiple finite terms.
For instance, half of $A$ could be added to each of the first two terms, yielding
\begin{equation}
A + (1 + 2 + 3 + \ldots) = \left(1 + \frac{A}{2}\right) + \left(2 + \frac{A}{2}\right) + 3 + \ldots .
\end{equation}
In fact, there is no reason why the same amount would have to be distributed to each position.
\begin{equation}
A + (1 + 2 + 3 + \ldots) = \left(1 + \frac{2}{5}A\right) + \left(2 + \frac{3}{5}A\right) + 3 + \ldots
\end{equation}
\subsection{Finite Term Insertion and Removal}
\label{terminsertion}
Because this method of summation is based on partial sums, it should be apparent that inserting and removing terms will in fact alter the summation.
For instance, let's begin with the arithmetic sum $1 + 1 + 1 + \ldots$.
It may seem intuitive that one should be able to freely add or remove a $1$ from this sum without affecting the sum.
In this particular series, the exact hyperreal value does change, but not the principal value.
Again, remember that, as mentioned in Section~\ref{partialsum}, this conception of summation will be based on partial sums.
So, let us begin by considering the partial sum
\begin{equation}
\sum_{i = 1}^{k} 1.
\end{equation}
If $k$ is a finite number, then adding one to this sequence will in fact alter its value.
Additionally, removing a $1$ from this sequence will also alter its value.
Therefore,
\begin{equation}
\sum_{i = 1}^{k} 1 \neq 1 + \sum_{i = 1}^{k} 1.
\end{equation}
Likewise,
\begin{equation}
\sum_{i = 1}^{k} 1 \neq \sum_{i = 0}^{k} 1 \neq \sum_{i = 2}^{k} 1.
\end{equation}
Because performing these operations will change the value for any partial sum of $k$ terms for a finite $k$, they will also change the value for a hyperreal $k$ such as $\omega$.
However, for these particular series, the principal value will be the same, because $\omega \simeq \omega + 1 \simeq \omega - 1$.
Additionally, a more surprising fact is that removing a term from a series changes its value when the removal does not also reduce the number of terms being summed.
Consider the series
\begin{equation}
\label{seriesunbumped}
1 + 2 + 3 + \ldots = \sum_{i = 1}^{\omega} i.
\end{equation}
This series is not equal to the series
\begin{equation}
\label{seriesbumped}
1 + \sum_{i = 1}^{\omega} (i + 1).
\end{equation}
It does, however, have the same principal value in this case.
In other words,
\begin{equation}
(1 + 2 + 3 + \ldots) \neq 1 + (2 + 3 + 4 + \ldots)
\end{equation}
but
\begin{equation}
(1 + 2 + 3 + \ldots) \simeq 1 + (2 + 3 + 4 + \ldots).
\end{equation}
The reason for this is readily apparent when considering how these work in terms of partial sums.
If the parameter $k$ were used instead of $\omega$, then it is apparent that the series (\ref{seriesbumped}) actually has an \emph{extra} term compared to (\ref{seriesunbumped}).
That is, it is obvious that
\begin{equation}
\sum_{i = 1}^{5} i \neq 1 + \sum_{i = 1}^5 (i + 1).
\end{equation}
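This finite instance can be checked directly; a two-line numerical verification (illustrative only) confirms that the right-hand side carries an extra term:

```python
# Five terms of 1 + 2 + 3 + ... versus 1 prepended to five terms of the
# shifted series (i + 1): the latter effectively contains one extra term.
lhs = sum(range(1, 6))                      # 1 + 2 + 3 + 4 + 5
rhs = 1 + sum(i + 1 for i in range(1, 6))   # 1 + (2 + 3 + 4 + 5 + 6)
```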
This can also be seen in the results of applying the arithmetic series formula to the two series.
For $(1 + 2 + 3 + \ldots)$ the formula yields $\frac{\omega^2}{2} + \frac{\omega}{2}$.
However, for $(2 + 3 + 4 + \ldots)$ the formula yields $\frac{\omega^2}{2} + \frac{3}{2}\omega$.
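Both closed forms (as functions of the number of terms $n$) can be checked against direct summation; the following sketch, using exact rational arithmetic, is illustrative only.

```python
from fractions import Fraction

def s_unshifted(n):
    """Partial sums of 1 + 2 + 3 + ..."""
    return sum(range(1, n + 1))

def s_shifted(n):
    """Partial sums of 2 + 3 + 4 + ..."""
    return sum(range(2, n + 2))

def f_unshifted(n):
    # n^2/2 + n/2, the arithmetic series formula for 1 + 2 + 3 + ...
    return Fraction(n * n, 2) + Fraction(n, 2)

def f_shifted(n):
    # n^2/2 + 3n/2, the arithmetic series formula for 2 + 3 + 4 + ...
    return Fraction(n * n, 2) + Fraction(3 * n, 2)

formulas_match = all(
    s_unshifted(n) == f_unshifted(n) and s_shifted(n) == f_shifted(n)
    for n in range(1, 100)
)
```

The difference of the two closed forms is exactly $n$, reflecting that each of the $n$ terms in the shifted series is one larger.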
Now, terms can be removed without even affecting the exact hyperreal value if they are replaced by zeroes in the sequence, or if the sequence starting index is moved appropriately.
In other words,
\begin{equation}
\label{termremoval}
(1 + 2 + 3 + \ldots) = 1 + (0 + 2 + 3 + \ldots) = 1 + \sum_{i = 2}^{\omega} i.
\end{equation}
This can be easily proved using the principle derived in Section~\ref{finitetermaddition}.
For instance, to move the $1$ outside of the series, $1 + (-1)$ can be added to the series.
\begin{align}
1 + (-1) + (1 + 2 + 3 + \ldots) &= 1 + ( (1 + (-1)) + 2 + 3 + \ldots) \nonumber \\
&= 1 + (0 + 2 + 3 + \ldots).
\end{align}
\subsection{Finite Term Rearrangement}
As can be deduced from Sections~\ref{finitetermaddition} and \ref{terminsertion}, any number of finite terms in a series can be rearranged in position.
That is, for any given series member with a value of $A$, $A - A$ can be added to the series, applying the $-A$ such that it cancels out the value of the series member.
After doing this to several series members, the inverse operations can then be applied to move these values to any finite position in the series.
Doing this will preserve the partial summing behavior of the series for all partial sums after the members which have been manipulated.
\section{More Advanced Series}
\label{advseries}
While basic formulas for divergent series of arithmetic and geometric series can be established using the standard formulas, more advanced series require the use of discrete integral calculus to establish the formulas for series.
Doing so leads to very interesting results.
\subsection{Ces\`aro Sums and Oscillating Series}
Oscillating series have an interesting history of treatment within mathematics.
The standard series to consider is Grandi's series: $1 - 1 + 1 - 1 + \ldots$.
Or, written more formally,
\begin{equation}
\sum_{i = 1}^{\infty} (-1)^{i + 1}.
\end{equation}
Partial sums for this series can be found by performing a discrete integral.
\begin{equation}
\sum_{i = 1}^{n} (-1)^{i + 1} = \frac{1}{2}(-1)^{n + 1} + \frac{1}{2}.
\end{equation}
What is particularly interesting about this formula is that the Ces\`aro sum of the infinite series ($\frac{1}{2}$) is present in the formula.
Now, consider the oscillating series $-1 + 1 - 1 + \ldots$.
This series has the formula
\begin{equation}
\sum_{i = 1}^{\infty} (-1)^i.
\end{equation}
A discrete integral of the partial sums yields the formula
\begin{equation}
\sum_{i = 1}^{n} (-1)^i = \frac{1}{2}(-1)^n - \frac{1}{2}.
\end{equation}
Note that here as well, the constant term $-\frac{1}{2}$ is the Ces\`aro sum of the infinite series.
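Both partial-sum formulas can be verified numerically for finite $n$; the following check is illustrative only.

```python
def grandi_partial(n):
    """Partial sums of Grandi's series 1 - 1 + 1 - 1 + ..."""
    return sum((-1) ** (i + 1) for i in range(1, n + 1))

def grandi_formula(n):
    # (1/2)(-1)^(n+1) + 1/2
    return ((-1) ** (n + 1)) / 2 + 1 / 2

def neg_partial(n):
    """Partial sums of -1 + 1 - 1 + ..."""
    return sum((-1) ** i for i in range(1, n + 1))

def neg_formula(n):
    # (1/2)(-1)^n - 1/2
    return ((-1) ** n) / 2 - 1 / 2

oscillation_ok = all(
    grandi_partial(n) == grandi_formula(n) and neg_partial(n) == neg_formula(n)
    for n in range(1, 100)
)
```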
This leads to the conjecture that, in evaluating infinite series using integral formulas,
\begin{equation}
\label{negoneconjecture}
(-1)^\infty = 0,
\end{equation}
at least for additive offsets of $\omega$.
For instance, in the case of Grandi's series, using the $\omega$ notation, the infinite series would include $(-1)^{\omega + 1}$.
The other series includes $(-1)^{\omega}$.
According to the present conjecture, both of these simplify to $0$, at least for the purpose of creating formulas for infinite series based on partial sums.
This can be understood probabilistically.
Since we have no information about what sign $(-1)^{\omega}$ will have, we can say that
\begin{equation}
(-1)^{\omega} = \pm 1.
\end{equation}
Since both of these possibilities are equally probable, the limit towards infinity resolves to their average, or zero.
Also, since we have no information about the sign of $(-1)^{\omega}$, we have equally little information about the sign of $(-1)^{\omega + 1}$, or of any other variation on $\omega$ which is not biased towards evenness (unlike, e.g., $2\omega$).
The expression $(-1)^{x}$ has an oscillation pattern very similar to $\sin(x)$.
Since \citep{paterson2018a} showed that $\sin(\omega) = 0$ in the surreal numbers, it is possible that a similar proof may be found for $(-1)^\omega = 0$ along similar lines in the hyperreals.
\subsection{Other Oscillatory Behavior}
Because (a) discrete integration can be used to find partial-sum formulas for series, and (b) the value of $(-1)^\infty$ (for infinities without bias towards evenness) is conjectured to be zero, the behavior of a wide variety of oscillatory series can be deduced.
Raising $-1$ to the $i$th power can produce all sorts of oscillatory behavior.
As has been seen with Grandi's series, this can produce a series of values that go back and forth across a mean value (the mean can be shifted by adding a constant, and the amplitude changed by multiplying by one).
However, $(-1)^i$ can also be expanded to blank out members of a series.
For instance, to blank out every other member of a series, the formula
\begin{equation}
\label{blankingformula}
\frac{((-1)^i + 1)}{2}
\end{equation}
can be used.
This simplifies to $1$ when $i$ is even and to $0$ when $i$ is odd.
Therefore, by multiplying a given formula by (\ref{blankingformula}), pieces of the given formula will be zeroed out.
For instance, take the series $1 + 2 + 3 + \ldots$.
This series can be converted to the series $0 + 2 + 0 + 4 + 0 + 6 + \ldots$ by applying (\ref{blankingformula}).
This gives the series
\begin{equation}
\sum_{i = 1}^{\omega} i\cdot\left( \frac{((-1)^i + 1)}{2} \right).
\end{equation}
The discrete integral yields
\begin{equation}
\label{otheroscillator}
\sum_{i = 1}^{n} i\cdot\left(\frac{((-1)^i + 1)}{2}\right) = \frac{1}{8}\left(2n^2 + 2n(-1)^n + 2n + (-1)^n - 1\right)
\end{equation}
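For finite $n$, this closed form can be verified directly against the blanked-out series; the following check (illustrative only) uses exact rational arithmetic.

```python
from fractions import Fraction

def blanked_partial(n):
    """Partial sums of 0 + 2 + 0 + 4 + 0 + 6 + ..."""
    # i * ((-1)^i + 1) / 2 is i for even i and 0 for odd i.
    return sum(i * ((-1) ** i + 1) // 2 for i in range(1, n + 1))

def blanked_formula(n):
    # (1/8)(2n^2 + 2n(-1)^n + 2n + (-1)^n - 1)
    return Fraction(2 * n * n + 2 * n * (-1) ** n + 2 * n + (-1) ** n - 1, 8)

closed_form_ok = all(blanked_partial(n) == blanked_formula(n)
                     for n in range(1, 200))
```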
When $n = \omega$, simplifying this formula through the conjecture (\ref{negoneconjecture}) runs into a problem: the term $2n(-1)^n$ becomes an indeterminate form of the type $\omega\cdot 0$.
This can be resolved, however, through L'Hospital's Rule.
\begin{equation}
\label{lhospital}
\lim_{n\to\infty} \frac{2n}{(-1)^{-n}} = \frac{2}{-\ln(-1)(-1)^{-n}} = -\frac{2}{\ln(-1)}(-1)^n.
\end{equation}
Now (\ref{negoneconjecture}) can be applied without ambiguity, simplifying it to zero.
Therefore, for $n = \omega$, (\ref{otheroscillator}) simplifies to
\begin{equation}
\sum_{i = 1}^{n} i\cdot\left(\frac{((-1)^i + 1)}{2}\right) = \frac{1}{8}\left(2n^2 + 2n - 1\right).
\end{equation}
This means that the value of this sum in the hyperreals is $\frac{1}{4}\omega^2 + \frac{1}{4}\omega - \frac{1}{8} \simeq \frac{1}{4}\omega^2$.
Interestingly, this is a different result than for the simple series $2 + 4 + 6 + \ldots$.
Since $2 + 4 + 6 + \ldots$ is a simple arithmetic series, we can determine the hyperreal sum using (\ref{arithmeticSeriesFormula}).
\begin{equation}
\sum_{i = 1}^{\omega} 2 + (i - 1)2 = \omega^2 + \omega \simeq \omega^2.
\end{equation}
This is a different result than what was obtained for $0 + 2 + 0 + 4 + 0 + 6 + \ldots$, which was $\frac{1}{4}\omega^2$, indicating that the two series have different behaviors.
\subsection{$1 - 2 + 3 - 4 + \ldots$}
Euler's sum for the series $1 - 2 + 3 - 4 + \ldots$ can be confirmed using this method as well.
The partial sums of this series are given by
\begin{equation}
\sum_{i = 1}^n i(-1)^{i - 1} = \frac{1}{4}\left(-2n(-1)^n + (-1)^{n + 1} + 1\right).
\end{equation}
Using (\ref{negoneconjecture}) and (\ref{lhospital}) this simplifies to $\frac{1}{4}$.
Interestingly, this is one of the few series whose value, even its exact hyperreal value, is unchanged by prepending a zero to the series.
\begin{equation}
\sum_{i = 1}^n (i - 1)(-1)^{i} = \frac{1}{4}\left(2n(-1)^n + (-1)^{n + 1} + 1\right).
\end{equation}
Likewise, (\ref{negoneconjecture}) allows this to reduce to $\frac{1}{4}$.
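Both partial-sum formulas can again be checked numerically for finite $n$; the sketch below is illustrative only.

```python
from fractions import Fraction

def alt_partial(n):
    """Partial sums of 1 - 2 + 3 - 4 + ..."""
    return sum(i * (-1) ** (i - 1) for i in range(1, n + 1))

def alt_formula(n):
    # (1/4)(-2n(-1)^n + (-1)^(n+1) + 1)
    return Fraction(-2 * n * (-1) ** n + (-1) ** (n + 1) + 1, 4)

def padded_partial(n):
    """Partial sums of 0 + 1 - 2 + 3 - ... (a zero prepended)."""
    return sum((i - 1) * (-1) ** i for i in range(1, n + 1))

def padded_formula(n):
    # (1/4)(2n(-1)^n + (-1)^(n+1) + 1)
    return Fraction(2 * n * (-1) ** n + (-1) ** (n + 1) + 1, 4)

both_ok = all(
    alt_partial(n) == alt_formula(n) and padded_partial(n) == padded_formula(n)
    for n in range(1, 200)
)
```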
\section{Whole Series Manipulation Rules}
In addition to manipulation of finite partial sums of a series, certain operations can (and can't) be performed to the series as a whole.
In this section, some of these operations will be considered.
\subsection{Scalar Multiplication}
Because of the distributivity of multiplication, multiplication of a series by a scalar value will distribute the scalar multiplication to every term.
\begin{equation}
2(1 + 2 + 3 + \ldots) = (2\cdot 1 + 2\cdot 2 + 2\cdot 3 + \ldots).
\end{equation}
Or, written as a formula, for a scalar $c$,
\begin{equation}
c \sum_{i = 1}^{\omega} f(i) = \sum_{i = 1}^{\omega} c\,f(i).
\end{equation}
\subsection{Whole Series Addition}
\label{seriesaddition}
Adding two series together is equivalent to a term-by-term addition of the series.
Since the method presented here is based on partial sums, term-by-term addition only works when the lower and upper limits of the two sums are identical.
Therefore,
\begin{equation}
\left( \sum_{i = 1}^{\omega} f(i) \right) + \left( \sum_{i = 1}^{\omega} g(i) \right) = \sum_{i = 1}^{\omega} f(i) + g(i).
\end{equation}
However,
\begin{equation}
\label{additionlimitsneq}
\left( \sum_{i = 0}^{\omega} f(i) \right) + \left( \sum_{i = 1}^{\omega} g(i) \right) \neq \sum_{i = 1}^{\omega} f(i) + g(i)
\end{equation}
because the limits of summation differ.
Again, to see why this is the case, imagine replacing $\omega$ with a fixed scalar such as $5$.
In (\ref{additionlimitsneq}), the left-hand addend would have a different number of terms than the right-hand addend.
\subsection{Series Spacing}
As noted in Section~\ref{terminsertion}, adding or removing elements of a series, even if they are zero, has an effect on the sum of the resulting series.
This effect can be calculated using the considerations discussed in Section~\ref{advseries}.
For instance, the series $1 + 1 + 1 + \ldots$ can be spaced out by adding in zeroes, to make $1 + 0 + 1 + 0 + \ldots$.
A variation of the oscillatory pattern in (\ref{blankingformula}) can be used to give the series the formula
\begin{equation}
\sum_{i = 1}^{n} \frac{((-1)^{i + 1} + 1)}{2}.
\end{equation}
The discrete integral of this yields the formula
\begin{equation}
\label{seriesoneblanksright}
\frac{1}{2}n + \frac{1}{4}(-1)^{n + 1} + \frac{1}{4}
\end{equation}
Using conjecture (\ref{negoneconjecture}) this reduces to the hyperreal value $\frac{1}{2}\omega + \frac{1}{4} \simeq \frac{1}{2}\omega$.
This is a slightly different value (but with the same principal value) than for the series $0 + 1 + 0 + 1 + \ldots$.
This series can be represented as
\begin{equation}
\label{seriesoneblanksleft}
\sum_{i = 1}^{n} \frac{((-1)^{i} + 1)}{2} = \frac{1}{2}n + \frac{1}{4}(-1)^n - \frac{1}{4}.
\end{equation}
Using conjecture (\ref{negoneconjecture}), the hyperreal value for this is $\frac{1}{2}\omega - \frac{1}{4} \simeq \frac{1}{2}\omega$.
If the series (\ref{seriesoneblanksright}) and (\ref{seriesoneblanksleft}) are added, the result should be the same whether they are added term-by-term (Section~\ref{seriesaddition}) or by summing their hyperreal values.
Summing term-by-term it is apparent that
\begin{align}
&(1 + 0 + 1 + 0 + \ldots) \nonumber \\
+ ~~ &(0 + 1 + 0 + 1 + \ldots) \nonumber \\
= ~~ &(1 + 1 + 1 + 1 + \ldots).
\end{align}
The value of this series was deduced to be $\omega$ in (\ref{basicomega1}).
Likewise, if the values for each series are added the result is
\begin{equation}
\left(\frac{1}{2}\omega + \frac{1}{4}\right) + \left(\frac{1}{2}\omega - \frac{1}{4}\right) = \omega.
\end{equation}
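The term-by-term addition can be confirmed numerically at the level of partial sums; the following check is illustrative only.

```python
def left_spaced(n):
    """Partial sums of 1 + 0 + 1 + 0 + ..."""
    # ((-1)^(i+1) + 1) / 2 is 1 for odd i and 0 for even i.
    return sum(((-1) ** (i + 1) + 1) // 2 for i in range(1, n + 1))

def right_spaced(n):
    """Partial sums of 0 + 1 + 0 + 1 + ..."""
    return sum(((-1) ** i + 1) // 2 for i in range(1, n + 1))

# Term-by-term the two interleaved series add up to 1 + 1 + 1 + ...,
# whose n-th partial sum is n.
termwise_ok = all(left_spaced(n) + right_spaced(n) == n for n in range(1, 200))
```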
\section{Conclusion}
Here a method of summation was presented that uses the structure of the hyperreal numbers to represent values for divergent series.
This methodology was shown to be stable across a variety of different scenarios.
One unproven, but seemingly correct, conjecture was relied upon for this formulation.
Future work will focus on proving (\ref{negoneconjecture}).
\section{Acknowledgements}
I want to thank Stanley Schmidt.
I was thinking about this problem at the same time I was reading his \textit{Life of Fred} books to my children.
The fundamental idea for this method of summation came from thinking about Gaastra's original presentation \citep{gaastra2016} while reading \textit{Life of Fred: Kidneys} to my children, when Fred was using the formula for arithmetic series \citep{fredkidneys}.
Additionally, \textit{The Infinite} by A. W. Moore provided some help to the imagination in his discussion of the L\"owenheim-Skolem theorem.
The basic point of the discussion was that there is little in the theory of infinities that is really unique to infinity.
Even finite sets can look ``infinite'' in some ways to other sets.
The techniques and ideas explored in Section~\ref{partialsum} were based largely off of thinking about infinities as much more tame and finite-like than is normally considered.
Finally, I want to thank Jessica Hastings, whose interest in the ``Wheat and Chessboard'' problem \citep{weissteinwheat} originally introduced me to the concepts in discrete calculus.
\bibliographystyle{ieeetr}
\section{Introduction}
The classical umkehr homomorphism of Hopf \cite{Hopf}
assigns to a map $f\: M \to N$
of closed manifolds of the same dimension a ``wrong way'' homomorphism
$f_!\: H_*(N)\to H_*(M)$ on singular homology. Hopf showed that
this map is compatible with intersection pairings.
Freudenthal \cite{Freud} showed that $f_!$ corresponds
to the homomorphism $f^*\:H^*(N) \to H^*(M)$ induced by $f$ on
cohomology by means of the Poincar\'e duality isomorphisms
for $M$ and $N$. This identification allows one to give
a definition of the umkehr homomorphism for a map between
closed manifolds of any dimension.
Variants of the umkehr homomorphism, such as those defined
by the Pontrjagin-Thom construction, intersections of chains,
integration along fibers, and the Becker-Gottlieb transfer, have
played central roles in the development of differential and
algebraic topology. Similarly, the ``push-forward" constructions
in cohomology,
Chow groups, and $K$-theory, have been important techniques in
algebraic geometry and index theory. Topological generalizations
of umkehr mappings have played important roles in recent developments in topology, such as Madsen and Weiss's proof of the Mumford conjecture
and its generalizations \cite{madsen-weiss},
\cite{GMTW}, \cite{galatius}, and the development of
string topology \cite{chassullivan}, \cite{cohenjones}.
Considering these various different, but related, constructions, it is natural to ask how they are connected. Similarly, one might ask:
what properties characterize or classify umkehr homomorphisms?
The goal of this note is to describe naturally occurring axioms which completely classify umkehr homomorphisms. These axioms come as a result of considering the basic properties of the umkehr homomorphisms mentioned above. We will show that a Brown-type representability theorem
classifies these umkehr maps.
In more recent applications, such as those in string topology, umkehr homomorphisms were needed in the setting of pullback squares of Serre fibrations,
$$
\begin{CD}
E_1 @>\tilde f >> E_2 \\
@VVV @VVV \\
P @>>f > N
\end{CD}
$$
where $f : P \to N$ is a smooth map of manifolds. That is, one wanted an umkehr homomorphism, $\tilde f_! : H_*(E_2) \to H_*(E_1)$
(with a dimension shift of $\dim N - \dim P$).
This leads us to consider axioms for the existence
and uniqueness of umkehr homomorphism in this fiberwise
setting, using a fiberwise version of Brown representability,
which we prove in the appendix.
\section{Preliminaries}
We will work in the category ${\mathcal T}$ of compactly generated
weak Hausdorff spaces.
Products are to be re-topologized using the compactly
generated topology. Mapping spaces are to be given
the compactly generated, compact open topology.
A {\it weak equivalence} of
spaces denotes a (chain of) weak homotopy equivalence(s).
We will assume that the reader is familiar with
the standard machinery of algebraic topology including
homotopy limits and colimits (the standard reference
for the latter is \cite{Bousfield-Kan}).
A spectrum $E$ is a sequence of
based spaces $E_n$, $n \ge 0$ together with (structure) maps
$\Sigma E_n \to E_{n+1}$, where $\Sigma$ denotes reduced suspension.
A spectrum has homotopy groups $\pi_n(E)$ for $n \in {\mathbb
Z}$ defined by the colimit of the system $\{\pi_{n+j}(E_j)\}_{j\ge 0}$.
A morphism of spectra $E\to E'$ consists of maps
$E_n \to E'_n$ which are compatible with the structure maps.
A morphism is a weak equivalence if it induces an isomorphism in
homotopy groups in each degree. The category of spectra is
denoted ${\mathcal S}$.
We say that a spectrum $E$ is an {\it $\Omega$-spectrum}
if the adjoint maps $E_n \to \Omega E_{n+1}$ are weak equivalences.
For any spectrum $E$, there is a weak equivalence
$E\simeq E^{\text{f}}$ with $E^{\text{f}}$ an $\Omega$-spectrum. This
weak equivalence is natural.
For an unbased space $K$ we let $\text{\rm map}(K,E)$ denote
the mapping spectrum whose $j$-th space is given
by the space of (unbased) maps $K \to E_j$. The basepoint of this mapping space is taken to be the constant map at the basepoint of $E_j$. The structure
maps in this case are induced by suspending and taking
the adjunction. For this to have the ``correct'' homotopy
type, it should be assumed that $E$ is an $\Omega$-spectrum and
that $K$ has the homotopy type of a CW complex.
Although it will not be emphasized in the paper, the above discussion fits
naturally within the context of a Quillen model category structure
on the category of spectra (see for example, \cite{Schwede}).
\section{What should an umkehr map do?}
Umkehr homomorphisms are known to occur in all
cohomology theories. Now, every cohomology theory
is representable, so one can view the umkehr
homomorphism as arising from a certain map of spectra.
Minimally, an umkehr map should assign to
an embedding $f\: P \subset N$ of closed manifolds a wrong way stable map
$$
f^! \: N^+ \to P^\nu
$$
where $\nu$ is the normal bundle
of $f$. One definition of $f^!$ is given by taking the Pontryagin-Thom
construction of the embedding $f$.
For a variety of reasons, it is also desirable to twist the above
by an arbitrary vector bundle $\xi$ over $N$. In this case, the umkehr map
should give a map of Thom spectra
$$
f^!_\xi \: N^\xi \to P^{\nu+ \xi}\, .
$$
Classically, such an $f^!_\xi$ is produced by taking the Pontryagin-Thom
construction of the composite
$$
\begin{CD}
P \subset N @> \text{zero section} >> D(\xi)
\end{CD}
$$
where $D(\xi)$ is the unit disk bundle of $\xi$.
This directly motivates one to consider umkehr maps as being
defined not only for closed manifolds, but more generally
for maps of compact manifolds {\it having a boundary.}
The twisting by bundles then becomes a special case, as
$D(\xi)$ is a manifold with boundary.
For example,
if $f$ fails to be an embedding, we can always approximate
the composite
$$
f_j\: P \subset N = N \times 0 \subset N \times D^j
$$
by an embedding when $j$ is sufficiently large, and therefore,
assuming umkehr maps have been defined for manifolds with boundary,
we obtain an umkehr map for $f_j$ which we simply declare to
be the umkehr map for $f$.
The above suggests the following.
Let ${\mathcal M}$ be the category whose objects are compact
manifolds $P$ (possibly with boundary) in which a morphism
$P \to Q$ is a continuous map (not necessarily preserving the boundary).
Let ${\mathcal S}$ be the category of spectra. We will consider
{\it contravariant} functors
$$
u\: {\mathcal M} \to {\mathcal S}
$$
satisfying certain axioms.
The first two axioms will be
\begin{itemize}
\item {\it Vacuum Axiom.} If $\emptyset$ is
the empty manifold, then $u(\emptyset)$ is
contractible.
\item {\it Homotopy Invariance Axiom.} If $f\: P \to Q$
is a weak (homotopy) equivalence, then so is
$u(f)$.
\end{itemize}
The vacuum axiom is motivated by the
fact that Pontryagin-Thom collapse of an
empty submanifold yields a constant map.
The homotopy invariance axiom is motivated by the following. Let
$P\subset N$ be a homotopy equivalence, where $P$ is closed.
Then
the Pontryagin-Thom collapse $N/\partial N \to P^\nu$ is a
stable homotopy equivalence.
The last axiom umkehr functors are required to satisfy is {\it locality}.
In its most geometric form, locality will mean that
a decomposition of manifolds yields a corresponding decomposition
of their Pontryagin-Thom collapses.
Suppose for example that
$P \subset S^n$ is a closed submanifold with normal
bundle $\nu$ such that $P$ is transverse
to the equator $S^{n-1} \subset S^n$. Let $D^n_\pm$
denote the upper and lower hemispheres.
Setting $P_\pm = P \cap D^n_\pm$ and $Q = P \cap S^{n-1}$,
we obtain a decomposition $P = P_- \cup_Q P_+$.
The Pontryagin-Thom collapse of each inclusion $(P_\pm,Q) \subset
(D^n_\pm,S^{n-1})$
gives maps
$$
k_\pm\:(D^n_\pm,S^{n-1}) \to (P_\pm^\nu,Q^\nu)
$$
which may be glued to yield a map
$$
\begin{CD}
S^n = D^n_- \cup_{S^{n-1}} D^n_+ @> k_- \cup k_+ >>
P_-^\nu \cup_{Q^\nu} P_+^\nu =
P^\nu
\end{CD}
$$
which is just the Pontryagin-Thom construction of $P \subset S^n$.
In general, it seems that
the cleanest way to formulate the locality axiom is in terms
of a (left homotopy) Kan extension of $u$
to the category ${\mathcal T}$ of topological spaces.
The resulting functor will also be homotopy invariant. The
Kan extension $u^\#\: {\mathcal T}\to {\mathcal S}$ is the contravariant functor given by
$$
u^\#(Y) = \underset{P \overset\sim\to Y}{\text{hocolim }} u(P)\, ,
$$
where the homotopy colimit is indexed over the category
of compact manifolds $P$ equipped with a weak equivalence to $Y$.
Notice that $u^\#$ restricted to ${\mathcal M}$
coincides with $u$ up to natural equivalence.
\begin{itemize}
\item {\it Locality Axiom.} The functor $u^\#$ is excisive, i.e.,
it preserves homotopy cocartesian squares.
\end{itemize}
The above axioms imply, by a version of Brown's representability theorem
(cf.\ appendix), that
the composite $u^\#$ is representable:
there is an $\Omega$-spectrum
$E$, unique up to weak equivalence, and a natural weak equivalence
$$
u^\#(X) \,\, \simeq \,\, \text{map}(X,E)\,
$$
where $\text{map}(X,E)$ denotes the spectrum of (unbased) maps from
$X$ to $E$, i.e., the spectrum whose
$j$-th space is the space of unbased maps $X \to E_j$.
(In the above, we are implicitly using the fact that every
compact manifold has the homotopy type of a finite complex. This
will imply that $u^\#$ is determined up to equivalence by its
restriction to the category of finite complexes over $X$.)
Notice that we can recover $E$ by taking $u(*)$, where $*$ is the one-point manifold.
Summarizing,
\begin{thm} An umkehr functor $u\: {\mathcal M} \to {\mathcal S}$ is
characterized up to natural weak equivalence
by its value $E:= u(*)$ at the one point manifold.
Conversely,
an $\Omega$-spectrum $E$ gives rise to
an umkehr functor $u$ by the rule
$$
u(P) \,\, := \text{\rm map}(P,E)\, .
$$
\end{thm}
\subsection*{Examples}
\subsubsection*{Example 1: The Pontryagin-Thom construction}
The traditional Pontryagin-Thom construction comes
from the umkehr functor corresponding to the spectrum $E = u(*) = S^0$,
the sphere spectrum. That is,
$$
u(P) = \text{map}(P, S^0),
$$
the Spanier-Whitehead dual of $P$.
The fact that this functor yields the Pontrjagin-Thom construction
comes from Atiyah duality, which gives a natural equivalence of spectra,
$$
\text{\rm map}(P, S^0) \simeq P^{-\tau_P}
$$
where $P^{-\tau_P}$ is the Thom spectrum of the stable normal bundle, that is, the virtual bundle $-\tau_P$, where $\tau_P$ is the tangent bundle of $P$.
Given an embedding $f\: P \to N$ with normal bundle $\nu(f)$,
the map of Spanier-Whitehead duals,
$f_! \: \text{\rm map}(N, S^0) \to \text{\rm map}(P, S^0)$
is equivalent to the Pontryagin-Thom map
$$
N^{-\tau_N} \to P^{-\tau_N \oplus \nu (f)} = P^{-\tau_P}.
$$
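For the reader's convenience, the identification of the target Thom spectra in the last display comes from the standard splitting of the tangent bundle along an embedding: as virtual bundles over $P$,
$$
\tau_N|_P \,\cong\, \tau_P \oplus \nu(f), \qquad\text{so}\qquad
-\tau_N|_P \oplus \nu(f) \,\cong\, -\tau_P\, ,
$$
which yields $P^{-\tau_N \oplus \nu(f)} = P^{-\tau_P}$ on Thom spectra.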
\subsubsection*{Example 2: Integration along the fibers}
Consider a smooth submersion of closed oriented manifolds,
$$
P^{n+k} \overset p\to M^n
$$
where the superscript denotes dimension. Then integration along fibers defines a homomorphism in de\,Rham cohomology,
$$
p^{\int} : H^q_{\text{dR}}(P) \to H^{q-k}_{\text{dR}}(M).
$$
This can be seen in terms of the umkehr functor defined
by setting $u(*) = h{\mathbb R}$, the Eilenberg-MacLane spectrum
for the real numbers. In other words, $u(N) = \text{map}(N, h{\mathbb R})$.
The map $p$ then induces a homomorphism,
$$
\text{\rm map}(M, h{\mathbb R}) \overset{p_!}\to \text{\rm map}(P, h{\mathbb R})
$$
which can be written as $p_! :\text{\rm map}(M, S^0)\wedge h{\mathbb R} \to
\text{\rm map}(P, S^0)\wedge h{\mathbb R}$. Using Atiyah duality,
as in the previous example, this is equivalent to a map
which, by abuse of notation, we also call $p_!$,
$$
p_!\: M^{-\tau_M}\wedge h{\mathbb R} \to P^{-\tau_P}\wedge h{\mathbb R}.
$$
When one applies homotopy groups to this map,
and the Thom isomorphism, one obtains a homomorphism,
$$
p_! \: H_{q-k}(M; {\mathbb R}) \to H_{q}(P; {\mathbb R})
$$which is linearly dual to the integration map $p^{\int}$. That is,
$$
\int_\alpha p^{\int} (\omega) = \int_{p_!\alpha}\omega.
$$
\subsubsection*{Example 3: Oriented bordism}
For a space $X$, let $\text{\rm MSO}_p(X)$ denote bordism
classes of maps $P \to X$ where $P$ is a closed smooth
oriented $p$-manifold.
If
$$
f\:Q \to N
$$
is a map of closed smooth oriented
manifolds, then we obtain an umkehr homomorphism
$$
f_*^!\:\text{\rm MSO}_p(N) \to \text{\rm MSO}_{p+q-n}(Q)
$$
as follows. Let $\gamma \in \text{\rm MSO}_p(N)$. Choose a representative $g\: P \to N$
of $\gamma$ in such
a way that $f$ and $g$ are mutually transverse. Then the fiber product
$P \times_N Q$ is an oriented manifold of dimension $p+q-n$, and the bordism
class of the evident
map $P \times_N Q \to Q$ defines the umkehr homomorphism. Of course, the spectrum representing the associated umkehr functor is the Thom spectrum ${\bf MSO}$.
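For the reader's convenience, the dimension count behind this construction is the standard transversality computation. Since $f$ and $g$ are mutually transverse, the product map $g \times f\: P \times Q \to N \times N$ is transverse to the diagonal $\Delta_N$, and the fiber product is the preimage
$$
P \times_N Q \,=\, (g \times f)^{-1}(\Delta_N),
$$
a submanifold of $P \times Q$ of codimension $n = \dim N$. Hence
$$
\dim (P \times_N Q) \,=\, (p + q) - n,
$$
as claimed.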
\section{A generalization}
A generalization of the umkehr map arises
naturally within the framework of string topology.
The context is this:
one has an embedding $P \subset N$ of closed manifolds
with normal bundle $\nu$, and also a (not necessarily smooth) fiber bundle
$p\:E\to N$. Let $q\:E_{|P}\to P$ be the restriction of
$p$ to $P$. Then we have a cartesian square
$$
\xymatrix{
E_{|P} \ar[r] \ar[d] & E \ar[d]\\
P \ar[r] & N\, .
}
$$
The spaces $E$ and $E_{|P}$ may not be smooth, and may even be infinite dimensional (for example, in string topology the total space $E$ is typically built from path or loop spaces in the manifold $N$). However, one
still observes that the codimension of $E_{|P}$ in $E$ is finite, and
that one can find a regular neighborhood which is homeomorphic to
the pullback of $\nu$ along $q$. Collapsing a complement
of this tubular neighborhood to a point, we obtain
a based map
$$
E^+ \to (E_{|P})^{q^*\nu}
$$
where the target is the Thom space of $q^*\nu$. Given this
construction, which seems to depend on certain choices,
it is not entirely clear that it carries with it any
uniqueness properties. We will show in fact that it does.
\medskip
It will be convenient for us to categorify the above.
The idea will be that the above umkehr map
can be thought of as arising from a suitable functor on the
category of manifolds {\it over} $N$. The representing objects in this setting will be fiberwise spectra. For the sake of completeness, we begin with a digression describing those aspects of fiberwise spectra that we will need for our purposes. The reader is referred to \cite{May-Sigurdsson} for a more complete discussion.
\subsection{Fibered Spectra}
One may regard a spectrum as a generalization of
an abelian group, where the latter appear
as the Eilenberg-MacLane spectra.
Analogously
a fibered spectrum on a space $X$
can be thought of as a bundle of local coefficients on $X$
in which the fibers, which were formerly abelian groups,
are now replaced by spectra.
\medskip
For a space $X$, let ${\mathcal R}_X$
be the category of {\it retractive spaces} over $X$. An object
is a space $Y$ equipped with maps $s_Y\:X\to Y$ and $r_Y\: Y\to X$
such that $r_Y\circ s_Y$ is the identity map (the structure maps
$r_Y, s_Y$ are usually suppressed from the notation).
A morphism $f\:Y\to Z$ is a
map of underlying spaces which commutes with their structure maps:
$r_Z\circ f = r_Y$ and $f\circ s_Y = s_Z$.
A morphism is a {\it weak equivalence}
if it is a weak homotopy equivalence of underlying spaces.
One has a forgetful functor $u\:{\mathcal R}_X \to {\mathcal T}_X$.
There is also a functor $v$ in the other direction given by
$Y \mapsto Y^+$, where $Y^+$ is the retractive space $Y \amalg X$.
One readily verifies that $u$ is a right adjoint to $v$.
Given objects $Y,Z\in {\mathcal R}_X$, the hom-set
$\hom_{{\mathcal R}_X}(Y,Z)$ may be topologized as a subspace
of the function space of all continuous maps $Y \to Z$ of underlying spaces,
where the function space is equipped with the compactly generated compact
open topology. This gives
${\mathcal R}_X$ the structure of a topological category.
\medskip
\begin{defn}[Fiberwise suspension]
Given an object $Y \in {\mathcal T}_X$, its {\it unreduced fiberwise
suspension} is defined to be the double mapping cylinder
$$
S_X Y \,\, := \,\, X \times 0 \cup Y\times [0,1] \cup X \times 1\, .
$$
It comes with an evident map $S_X Y \to X$, so it is an object
of ${\mathcal T}_X$.
Given an object $Y \in {\mathcal R}_X$, its
{\it reduced fiberwise suspension} is given by
$$
\Sigma_X Y \,\, = \,\, S_X Y \cup_{S_X X} X \, .
$$
Note that $\Sigma_X$ defines an endo-functor of ${\mathcal R}_X$.
If $Y, Z$ are objects of ${\mathcal R}_X$, their fiberwise smash product
$Y\wedge_X Z$ is the object given by
the pushout of the diagram
$$
X \leftarrow Y\cup_X Z \to Y \times_X Z \, .
$$
\end{defn}
\begin{defn}[Fibered spectra]\label{fibered}
A {\it fibered spectrum}
$ {\mathcal E}$ over $X$
consists of objects ${\mathcal E}_j \in {\mathcal R}_X$ for $j \in {\mathbb N}$ together
with (structure) maps
$$
\Sigma_X {\mathcal E}_j \to {\mathcal E}_{j+1} \, ,
$$
for each $j \ge 0$.
A {\it morphism} ${\mathcal E}\to {\mathcal E}'$ is given by
maps ${\mathcal E}_j \to {\mathcal E}'_j$ which are compatible with the
structure maps.
\end{defn}
We say that ${\mathcal E}$ is {\it fibrant} if the adjoints to the structure
maps are weak homotopy equivalences of underlying spaces.
Any fibered spectrum ${\mathcal E}$ can be converted into a fibrant
one ${\mathcal E}^{\text{f}}$ in which
$$
{\mathcal E}^{\text{f}}_j\,\, := \,\, \underset{n}{\text{hocolim\, }}
\Omega^n_X {\mathcal E}_{j+n} \, ,
$$
where the homotopy colimit is taken in ${\mathcal R}_X$, and
$\Omega_X^n$ denotes the adjoint to the $n$-fold reduced fiberwise
suspension. The above is called {\it fibrant
replacement.}
A morphism ${\mathcal E}\to {\mathcal E}'$ is a
{\it weak equivalence} if the associated morphism of
fibrant replacements ${\mathcal E}^\text{f}\to ({\mathcal E}')^\text{f}$
is a {\it levelwise} weak equivalence:
for each $j$, the map ${\mathcal E}_j^\text{f}\to ({\mathcal E}')_j^\text{f}$
is required to be weak equivalence of ${\mathcal R}_X$.
\medskip
\subsubsection*{Examples}
\medskip
\noindent
{(1). Fiberwise suspension spectra}
Let $Y \in {\mathcal R}_X$ be an object. Let $\Sigma^\infty_X Y$ be
the fibered spectrum over $X$ given by the collection $\Sigma^j_X Y$ of
iterated fiberwise suspensions of $Y$.
\medskip
\noindent{(2). Trivial fibered spectra}
Let $C$ be a spectrum. The collection
of spaces $C_j \times X$, as $j$ varies, forms a fibered spectrum over $X$.
The maps $\Sigma_X (C_j \times X) \to C_{j+1} \times X$
use the identification $\Sigma_X (C_j \times X) = (\Sigma C_j) \times X$
together with the structure maps of $C$.
\medskip
\noindent{(3). Fibered Eilenberg-Mac\,Lane spectra}
Let ${\mathcal F}$ be a bundle
of abelian groups on $X$,
and let ${\mathcal F}_x$ denote the fiber at $x$.
Then we have a fibered spectrum $h{\mathcal F}$ on $X$, in which
$h{\mathcal F}_j$ can be described as follows: the fiber at $x \in X$ is given by
$K({\mathcal F}_x,j)$, the Eilenberg-Mac\,Lane space based on the
abelian group ${\mathcal F}_x$.
\medskip
\noindent{(4). Fiberwise smash product with a spectrum}
Let $C$ be a spectrum and let $E\to X$ be
a fibration. Then we obtain a fibered spectrum $E\otimes C$ in
whose $j$-th total space is given by the pushout of the diagram
$$
\begin{CD}
X @<<< E \times *@>\subset >> E \times C_j \, .
\end{CD}
$$
If $E_x$ is the fiber of $E\to X$ at $x\in X$, then the fiber
of $(E\otimes C)_j \to X$ is given by $(E_x)_+ \wedge C_j$.
\medskip
\noindent{(5). Twisted suspension} Let ${\mathcal E}$ be a fibered spectrum over
$X$. If $\xi$ is a vector bundle over $X$ we can form a new
fibered spectrum
$
{}^\xi \!{\mathcal E}
$
called the {\it twist} of ${\mathcal E}$ by $\xi$. The $j$-th total space
of ${}^\xi {\mathcal E}$ takes the form of a fiberwise smash product
$$S^\xi \wedge_X {\mathcal E}_j ,$$
where $S^\xi$ is the object of ${\mathcal R}_X$ given by the fiberwise
one point compactification of $\xi$.
The notion of twisting extends to the case when $\xi$ is a virtual bundle
(we omit the details, but see example (2) below of the Poincar\'e duality equivalence (Theorem \ref{poincare})).
\eject
\noindent {\bf Homology}
\medskip
A fibered spectrum ${\mathcal E}$ gives rise to a
covariant spectrum-valued functor
$$
H_\bullet({-};{\mathcal E})\: {\mathcal T}_X \to {\mathcal S}
$$
called {\it homology} with ${\mathcal E}$-coefficients.
Consider the following construction: let $Y \in {\mathcal T}_X$ be an object and
call the structure map $f\: Y \to X$. Let $f^*{\mathcal E}$
be the fibered spectrum over $Y$ given by the collection of fiber products
$Y \times_X {\mathcal E}_j$. The set of quotient spaces
$$
(Y \times_X {\mathcal E}_j)/Y
$$
yields a spectrum. However,
we must take the derived version of this construction
to ensure homotopy invariance.
Here are the details. First of all, we need to replace ${\mathcal E}$ with
its fibrant replacement ${\mathcal E}^{\text f}$. Secondly, we must replace
the above quotient by a homotopy quotient, i.e., the mapping cone. The result
of these changes will produce a spectrum with $j$-th space
$$
(Y \times_X {\mathcal E}^{\text f}_j) \cup_Y CY \, .
$$
This spectrum is $H_\bullet(Y;{\mathcal E})$.
\begin{exs} The homology spectrum of the trivial fibered spectrum (example (2)
above)
is $C \wedge Y_+$.
For the fibered Eilenberg-Mac\,Lane spectrum $h{\mathcal F}$ (example (3) above),
the homology spectrum of $Y$ has homotopy groups isomorphic
to the homology of $Y$ with coefficients in the bundle
${\mathcal F}$ pulled back to $Y$.
\end{exs}
\noindent {\bf Cohomology}
\medskip
Given a fibered spectrum ${\mathcal E}$,
we obtain a contravariant spectrum-valued functor
$$
H^\bullet({-};{\mathcal E}) \:{\mathcal T}_X \to {\mathcal S}
$$
called {\it cohomology} with ${\mathcal E}$-coefficients.
Roughly, it is given at an object $Y$ by taking the spectrum of sections
of ${\mathcal E}$ along $Y\to X$.
More precisely, consider the spectrum whose $j$-th space
is the hom-space $\hom_{{\mathcal T}_X}(Y, {\mathcal E}_j)$
(or equivalently, the space of sections of ${\mathcal E}_j \to X$
along $Y$). The structure maps for ${\mathcal E}$ yield structure
maps on these hom-spaces, so we obtain a spectrum.
To get a homotopy invariant version of this construction, we
need to replace ${\mathcal E}$ by its fibrant replacement, and
$Y$ by a functorial cellular approximation (for example,
we can replace $Y$ by $|SY|$, the realization of the simplicial
total singular complex of $Y$). The result of these
manipulations yields a spectrum $H^\bullet(Y;{\mathcal E})$
which is homotopy invariant in $Y$.
\bigskip
\noindent {\bf Poincar\'e duality}
\medskip
Let $N$ be a closed manifold of dimension $d$
with tangent bundle $\tau_N$.
Let $-\tau_N$ be the virtual bundle of dimension $-d$ representing the
stable normal bundle of $N$.
We now state the Poincar\'e duality theorem with coefficients
in a fibered spectrum.
\begin{thm}[Poincar\'e Duality]\label{poincare} For any
fibered spectrum ${\mathcal E}$ over $N$, there is a weak equivalence of spectra
$$
H_\bullet(N; {}^{-\tau_N}\!{\mathcal E}) \,\, \simeq \,\,
H^\bullet(N;{\mathcal E})\, .
$$
The equivalence is natural in ${\mathcal E}$.
\end{thm}
Although usually stated differently,
this result appears in the literature (see
\cite[thms.\ A,D]{Klein_dualizing}, \cite[\S5,8]{Klein_dualizing_2},
\cite[th.\ 4.9]{PoHu}, \cite[prop.\ 2.4]{WW1}).
\begin{defn} A closed $n$-manifold $N$ is {\it ${\mathcal E}$-orientable}
if there is a weak equivalence of fibered spectra
$$
{}^{-\tau_N}\!{\mathcal E} \,\, \simeq \,\, {\mathcal E}[-n]
$$
where ${\mathcal E}[-n]$ is the $n$-fold fiberwise desuspension of
${\mathcal E}$.
\end{defn}
\begin{cor} Assume $N$ is ${\mathcal E}$-orientable. Then there is
a weak equivalence of spectra
$$
H_\bullet(N; {\mathcal E}[-n]) \,\, \simeq\,\,
H^\bullet(N;{\mathcal E}) \, .
$$
\end{cor}
\subsubsection*{Examples}
\medskip
\noindent
{(1). (\sl Atiyah and Spanier-Whitehead duality) \rm
\medskip
Let $\mathcal E$ be the trivial suspension
spectrum, $\Sigma^\infty_NN$. In other words, the $j^{th}$-space is given by
$$
(\Sigma^\infty_NN)_j = S^j \times N \to N
$$
so the fiber over any point is the sphere $S^j$. We can describe the twisted spectrum,
$
{}^{-\tau_N}\!{(\Sigma^\infty_NN)}$ in the following way. Suppose we have an embedding in Euclidean space, $N \hookrightarrow \mathbb{R}^L$, with normal bundle $\nu_L \to N$. Then for any $j \geq 0$, the $(j+L)^{th}$ space of the twisted spectrum is given by
$$
{}(^{-\tau_N}\!{\Sigma^\infty_NN})_{j+L} = S^{\nu_L}\wedge_{N}(\Sigma^\infty_NN)_j = S^{\nu_L}\wedge S^j.
$$
Then clearly the homology spectrum is $$H_\bullet(N; {}^{-\tau_N}\!{\Sigma^\infty_NN }) = N^{-\tau_N},
$$
the Thom spectrum of the virtual bundle $-\tau_N$. On the other hand, the cohomology spectrum,
$H^\bullet(N; \, \Sigma^\infty_NN)$ has as its $j^{th}$-space the space of sections,
$$ \hom_{\mathcal{T}_N}(N, S^j \times N) =\text{\rm map}(N, S^j). $$ In other words, this cohomology spectrum is the mapping spectrum $\text{\rm map}(N, S^0)$,
or the Spanier-Whitehead dual of $N_+$.
Thus the Poincar\'e duality equivalence Theorem \ref{poincare} in this case gives the Atiyah duality,
$$
N^{-\tau_N} \simeq \text{map}(N , S^0).
$$
\medskip
(2). (\sl The free loop space and string topology) \rm
\medskip
Let $LN = \text{\rm map}(S^1, N)$ be the
free loop space, and let $e : LN \to N$ be
the fibration that evaluates a loop at the basepoint
$0 \in \mathbb{R}/\mathbb{Z} = S^1$. The fiber at $x_0$ is the based loop space,
$\Omega_{x_0}N$. There is a section $\sigma : N \to LN$ of this fibration by
considering a point $x \in N$ as the constant loop at $x$. We consider the fiberwise suspension spectrum, $\mathcal{E} = \Sigma^\infty_N LN$. This fibered spectrum has as its $j^{th}$ space the $j$-fold fiberwise reduced suspension,
$\Sigma^j_NLN$, which fibers over $N$, with fiber $\Sigma^j (\Omega N)$. We consider the Poincar\'e duality equivalence (Theorem \ref{poincare}) in the case of this fibered spectrum.
We consider the twisted spectrum ${}^{-\tau_N}\!{(\Sigma^\infty_NLN)}$. This fibered spectrum can be described in the following way. Suppose, as above, $N \hookrightarrow \mathbb{R}^L$ with normal bundle $\nu_L \to N$. Then for any $j \geq 0$, the $(j+L)^{th}$ space of the twisted spectrum is given by
$$
{}(^{-\tau_N}\!{\Sigma^\infty_NLN})_{j+L} = S^{\nu_L}\wedge_{N}(\Sigma^j_NLN).
$$
Then clearly the homology spectrum is given by,
\begin{equation}\label{loophom}
H_\bullet(N; {}^{-\tau_N}\!{\Sigma^\infty_NLN }) = LN^{-\tau_N}
\end{equation}
the Thom spectrum of the virtual bundle $e^*(-\tau_N)$. It was shown in \cite{cohenjones} that the spectrum $LN^{-\tau_N}$ is a ring spectrum, whose induced product in homology reflects the Chas-Sullivan loop product in \sl string topology \rm \cite{chassullivan} after one applies the Thom isomorphism. This product can be seen by applying the Poincar\'e duality equivalence (Theorem \ref{poincare}) as follows.
The cohomology spectrum,
$H^\bullet(N; \, \Sigma^\infty_NLN)$ has as its $j^{th}$-space the space of sections,
$\hom_{\mathcal{T}_N}(N, \Sigma^j_N LN).$ We therefore write this spectrum as
$\hom_{\mathcal{T}_N}(N, \Sigma^\infty_NLN)$.
The Poincar\'e duality equivalence in this setting gives an equivalence,
\begin{equation}\label{compare}
LN^{-\tau_N} \simeq \hom_{\mathcal{T}_N}(N, \Sigma^\infty_NLN).
\end{equation}
Now notice that the fiberwise spectrum $\Sigma^\infty_NLN$ is a fiberwise ring spectrum, since the fibration $\Omega N \to LN \to N$ is a fiberwise monoid. (More precisely, it is a fiberwise $A_\infty$-monoid; see \cite{gruhersalvatore}.) Thus the spectrum of sections, $\hom_{\mathcal{T}_N}(N, \Sigma^\infty_NLN)$, is a ring spectrum. This ring spectrum structure reflects
the ring structure on $LN^{-\tau_N}$, and thus reflects the string topology loop product.
}
\medskip
\subsection{Generalized umkehr functors}
Let $X$ be a topological space.
Let ${\mathcal M}_X$ be the category whose objects
are compact manifolds $P$ (possibly
with boundary) equipped with a map $P \to X$; the
map will not usually be specified in the notation. A morphism
is a map $f\:P \to Q$ which is compatible with the maps to $X$ in the
obvious way (again, we do not require that
$f$ preserves boundaries).
A morphism is a weak equivalence if and only if the
underlying map of spaces is a weak homotopy equivalence.
We will consider
contravariant functors
$$
u\: {\mathcal M}_X \to {\mathcal S} \, .
$$
\begin{defn}
A functor $u$ will be called a {\it generalized umkehr functor}
if it satisfies three axioms. The first two axioms are:
\begin{itemize}
\item {\bf Axiom 1} (Vacuum). The value of
$u$ at the empty manifold $\emptyset$ is contractible.
\item {\bf Axiom 2} (Homotopy Invariance).
$u$ is a homotopy functor, i.e., if
a morphism $f\: P \to Q$
is a weak (homotopy) equivalence, then so is
$u(f)$.
\end{itemize}
\end{defn}
Let ${\mathcal T}_X$ be the category of spaces
over $X$. An object of this category consists
of a space $Y$ together with a map $Y\to X$ (the latter
not usually specified). A morphism $Y\to Z$ is a map
of underlying spaces that is compatible with the maps to $X$.
As before, we can perform a left homotopy Kan extension
to $u$ along the full inclusion ${\mathcal M}_X \subset {\mathcal T}_X$
to obtain a contravariant homotopy functor
$$
u^\#\: {\mathcal T}_X \to {\mathcal S}\, .
$$
The final axiom for generalized umkehr functors is
\begin{itemize}
\item {\bf Axiom 3} (Locality).
The functor $u^\#$ preserves homotopy cocartesian squares.
\end{itemize}
(A square of ${\mathcal T}_X$ is homotopy cocartesian if
it is one when considered in ${\mathcal T}$ by means of the forgetful
functor.)
Again, we see that these axioms imply that
$u^\#$ is representable. (The appropriate fiberwise version of Brown representability will be proved in the appendix.) In this fiberwise setting, representability means there is a
fibered spectrum ${\mathcal E}$, unique up to
equivalence, and a natural weak equivalence
$$
u^\#(Y) \,\, \simeq \,\, H^\bullet(Y;{\mathcal E})\, .
$$
In particular ${\mathcal E}$ and $u$ determine one another.
Summarizing,
\begin{thm} A fibered spectrum ${\mathcal E}$ over $X$ gives rise to
an umkehr functor by the rule
$$
u(P) \,\, := \,\, H^\bullet(P; {\mathcal E})\, .
$$
Conversely, a functor $u$ that satisfies axioms 1-3
determines a fibered spectrum ${\mathcal E}$ over $X$,
unique up to weak equivalence, whose associated
cohomology recovers $u$ up to natural equivalence.
\end{thm}
\subsection*{The generalized umkehr homomorphism}
Let ${\mathcal E} \to X$ be a fibered spectrum, and suppose
$$
f\: P \to Q
$$
is a morphism of ${\mathcal M}_X$ such that $P$ and $Q$ are closed manifolds.
We then have an induced map on cohomology spectra
$$
f^\bullet\:H^{\bullet}(Q;{\mathcal E}) \to H^{\bullet}(P;{\mathcal E})
$$
using the Poincar\'e duality equivalence, we can rewrite this
up to homotopy as a map
$$
f^!\: H_\bullet(Q;{}^{-\tau_Q}\!{\mathcal E})\to
H_\bullet(P;{}^{-\tau_P}\!{\mathcal E})
$$
Assume now that $P$ and $Q$ are ${\mathcal E}$-oriented. Then
taking homotopy groups of $f^!$, we get a homomorphism
$$
f^!_*\: H_*(Q;{\mathcal E})\to
H_{*+p-q}(P;{\mathcal E})\, ,
$$
where $p$ and $q$ denote the dimensions of $P$ and $Q$, respectively.
This is the generalized umkehr homomorphism.
\subsection*{Umkehr maps in string topology}
As seen in Example 2 of the Poincar\'e duality equivalence, the basic ring
structure arising in string topology can be seen via the equivalence (\ref{compare}) of $LM^{-\tau_M}$ and the ring spectrum $\hom_{\mathcal{T}_M}(M, \Sigma^\infty_MLM)$. Here $M$ is a closed manifold. However, in its original form \cite{chassullivan}, \cite{cohenjones}, the string topology product was constructed via an umkehr map. We now see how this fits
into our framework.
Let $L^\infty M$ be the space of maps from the figure eight
$S^1 \vee S^1$ to $M$. This space is the fiber product $LM \times_M LM$. That is, we have a pullback square
$$
\begin{CD}
L^\infty M @>\subset >> LM \times LM \\
@VeVV @VVe \times eV \\
M @>> \Delta > M \times M
\end{CD}
$$
where $\Delta$ is the diagonal map, the vertical maps of the square are
the fibrations given by evaluation at the basepoint, and
the upper horizontal map arises from the quotient map
$S^1 \amalg S^1 \to S^1 \vee S^1$ by taking maps into $M$.
Let $h{\mathbb Z}$ be the Eilenberg-Mac\,Lane spectrum on the integers. Consider the product
$$
LM \times LM \xrightarrow{e \times e} M \times M
$$
as an object in $\mathcal{R}_{M\times M}$. We consider the fiberwise smash product
spectrum $\mathcal{E} = (LM \times LM) \otimes h\mathbb{Z}$ (see example (4) after Definition \ref{fibered} above).
In particular, $(LM \times LM) \otimes h\mathbb{Z}$ is the fibered spectrum whose $j$-th space
is given by the pushout of the diagram
$$
\begin{CD}
M \times M @<<< LM\times LM \times * @> \subset >>
L M \times LM \times (h{\mathbb Z})_j \, .
\end{CD}
$$
The umkehr homomorphism taken with respect to the fibered spectrum $(LM \times LM) \otimes h\mathbb{Z}$, applied to the diagonal map $\Delta : M \to M \times M$ viewed as a morphism in $\mathcal{M}_{M\times M}$ is then computed by the induced map in cohomology spectra,
\begin{align}\label{stringcoho}
H^{\bullet}(M\times M;{(LM \times LM) \otimes h\mathbb{Z} }) &\xrightarrow{\Delta^{\bullet}} H^{\bullet}(M;{(LM \times LM) \otimes h\mathbb{Z}}) \\
\hom_{\mathcal{T}_{M\times M}}(M \times M, (LM \times LM) \otimes h\mathbb{Z} ) &\xrightarrow{\Delta^*} \hom_{\mathcal{T}_{M\times M}}(M , (LM \times LM) \otimes h\mathbb{Z} ) \notag \\
&= \hom_{\mathcal{T}_M}(M, L^\infty M\otimes h \mathbb{Z}), \notag
\end{align}
and then apply the Poincar\'e duality equivalence (Theorem \ref{poincare}).
But by an argument completely analogous to that used to verify (\ref{loophom}) in Example (2) of the Poincar\'e duality equivalence, we see that
$$
H_\bullet(M\times M ; {}^{-\tau_{M\times M}}\!{((LM \times LM) \otimes h\mathbb{Z} })) = LM^{-\tau_M} \wedge LM^{-\tau_M}\wedge h\mathbb{Z}
$$
If $M$ is oriented in singular homology, this last spectrum is equivalent to
$\Sigma^{-2m}(LM \times LM)_+ \wedge h\mathbb{Z}$. Similarly, from the above pullback square we see that
$$
H_\bullet(M ; {}^{-\tau_{M\times M}}\!{((LM \times LM) \otimes h\mathbb{Z} })) = (L^\infty M)^{-\tau_M} \wedge h\mathbb{Z}
$$
where the last spectrum is equivalent to $\Sigma^{-m}(L^\infty M)_+ \wedge h\mathbb{Z}$, assuming $M$ is equipped with an orientation. Thus the umkehr map in this situation
gives a map
$$
\Sigma^{-2m}(LM \times LM)_+ \wedge h\mathbb{Z} \to \Sigma^{-m}(L^\infty M)_+ \wedge h\mathbb{Z},
$$ which, upon taking homotopy groups,
takes the form
$$
H_*(L M \times LM ) \to H_{*-m}(L^\infty M) \, .
$$
The Chas-Sullivan loop product is given by the composite
\begin{align}
H_{p}(L M)\otimes H_q(LM) &\to H_{p+q}(L M \times LM ) \notag \\
&\to
H_{p+q-m}(L^\infty M) \to H_{p+q-m}(LM), \notag
\end{align}
where the first homomorphism
is the external product, the second is the umkehr homomorphism
and the third is given by taking
the homology of the map of spaces $L^\infty M \to L M$ arising from
the pinch map $S^1 \to S^1 \vee S^1$.
\section{Appendix: Representability}
The purpose of this section is to outline a proof of the representability theorem
for contravariant functors $f\:{\mathcal T}_X \to {\mathcal S}$.
\begin{defn} A functor $f$ is said to be {\it strongly excisive} if
for any collection of objects $Y_\alpha$, the natural map
$$
f(\coprod_\alpha Y_\alpha) \to \prod_{\alpha} f(Y_\alpha)
$$
is a weak equivalence.
\end{defn}
\begin{rem} This condition can be stated alternatively
as saying that $f$ preserves homotopy pushouts and that
$f$ is determined, up to weak equivalence, by its
restriction to the full subcategory of
finite complexes over $X$.
The last condition means that
the natural map
$$
f(Y) \to \underset{Z \in C_Y }{\text{holim }} f(Z)
$$
is a weak equivalence, where the homotopy limit is
indexed over the category $C_Y$ consisting of spaces over $Y$
which are homeomorphic to a finite complex.
\end{rem}
\begin{defn} A functor $f\: {\mathcal T}_X \to {\mathcal S}$
is {\it cohomological} if
\begin{itemize}
\item $f$ is a homotopy functor.
\item The value of $f$ at the initial object $\emptyset$ is
contractible.
\item $f$ is strongly excisive.
\end{itemize}
\end{defn}
\begin{thm}[Representability] \label{rep} For
cohomological functors $f$,
there is a fibered spectrum ${\mathcal E}$ and a natural equivalence
of functors
$$
f(Y)\,\, \simeq \,\, H^\bullet(Y;{\mathcal E})
$$
\end{thm}
\begin{rems} (1). The fibered spectrum
${\mathcal E}$ is unique up to equivalence. Heuristically, the
value of $H^\bullet({-}; {\mathcal E})$ at
the one-point maps $x\to X$ recovers
the fibers ${\mathcal E}_x$ of ${\mathcal E}$. The homotopy
colimit in the category of {\it un}based spaces of
$({\mathcal E}_x)_j$ recovers the $j$-th total space ${\mathcal E}_j$
up to equivalence.
{\flushleft (2).} Our method of proof can be adapted
to show that the functor
$$
{\mathcal E} \mapsto H^\bullet({-};{\mathcal E})
$$
defines an equivalence between the homotopy category of
fibered spectra over $X$ and the homotopy category
of cohomological functors. We will not need this statement.
\end{rems}
The main tool in the proof of \ref{rep} is
a natural transformation
$$
f(Y) \to f^\natural(Y)\, ,
$$
called the {\it coassembly map},
which is defined for any homotopy functor $f$. The target functor
$f^\natural$ is always strongly excisive.
The idea will then be to show that the coassembly map is
a weak equivalence when $f$ satisfies our assumptions, and
that $f^\natural$ is representable.
\subsection*{The coassembly map}
Let $f^\natural$ be the functor defined by
$$
f^\natural(Y) = \underset{\Delta^p \to Y}{\text{holim }} f(\Delta^p) \, ,
$$
where the homotopy limit is indexed over the category $\Delta_Y$ of singular
simplices in $Y$. This is the category whose objects are maps
$\Delta^p\to Y$ (for $p \ge 0$), where $\Delta^p$ is the standard $p$-simplex,
and morphisms are given by inclusions of faces.
Given any map $\Delta^p \to Y$ we obtain a map $f(Y) \to f(\Delta^p)$.
This assignment is compatible with taking faces, so we get a natural map
$$
c: f(Y) \to f^\natural(Y)\, .
$$
This is the coassembly map.
We now verify the properties of $f^\natural$.
Note that $f^\natural$ is a homotopy functor since
$f$ is and the homotopy limit
construction is homotopy invariant. Furthermore, $f^\natural$
is strongly excisive
because
$$
\Delta_{\amalg_\alpha Y_\alpha} \,\, = \,\, \coprod_{\alpha}
\Delta_{Y_{\alpha}}
$$
and the homotopy limit indexed over a coproduct of categories
is the product of the corresponding homotopy limits.
\begin{proof}[Proof of Theorem \ref{rep}]
Assuming that $f$ is cohomological,
we first show that the coassembly map
$$
c: f(Y) \to f^\natural(Y)
$$
is a weak equivalence. It
clearly is a weak equivalence when
$Y$ is the initial object. It is also a weak equivalence
when $Y$ is a point, since in this case the map $\Delta^p \to *$
is a weak equivalence and $f$ is a homotopy functor. Since $f$ is strongly excisive,
$c$ is a weak equivalence when $Y$ is a finite set over $X$.
A Mayer-Vietoris
argument then shows that the coassembly map is a weak equivalence
whenever $Y$ is a finite complex over $X$.
Because $f$ is strongly excisive, this is enough to show that $c$ is a weak
equivalence in general, since $f$ is determined up to weak equivalence by
its restriction to the category of finite complexes over $X$.
To complete the proof of Theorem \ref{rep}, we will show that $f^\natural$
is representable. For $Y \in {\mathcal T}_X$, consider the functor
$$
f^Y_{j} \: \Delta_Y \to {\mathcal T}
$$
given as follows: for an object $\sigma\:\Delta^p \to Y$, set
$f^Y_j(\sigma) =
f(\Delta^p)_j$, the $j$-th space of the spectrum $f(\Delta^p)$,
which we will consider here as an unbased space.
Define
$$
{\mathcal E(Y)}_j \,\, := \,\, \text{hocolim } f^Y_j\, .
$$
If we let $j$ vary, the ${\mathcal E}(Y)_j$ define a fibered spectrum
${\mathcal E(Y)}$.
By considering the constant natural transformation from $f^Y_j$ to the one-point space
and taking homotopy colimits, we obtain a map
$$
{\mathcal E(Y)}_j \to B\Delta_Y
$$
where $B\Delta_Y$ is
the classifying space of the category $\Delta_Y$, i.e.,
the geometric realization of its nerve
(recall that the homotopy colimit of
the constant functor to a point is $B\Delta_Y$).
This map has the following properties.
\begin{itemize}
\item It is a quasifibration, i.e., the map
from each fiber to its corresponding
homotopy fiber is a weak equivalence.
\item The space of sections of the associated
fibration is weakly equivalent to the homotopy limit of $f^Y_j$.
This is an observation of Dwyer
\cite[prop.\ 3.12]{Dwyer_centralizer}.
\item By definition, the collection
$$
\{\text{holim } f^Y_j\}_{j \ge 0}
$$
is the spectrum $f^\natural(Y)$.
\item
$$
\xymatrix{
{\mathcal E(Y)}_j \ar[r]\ar[d]& {\mathcal E(X)}_j \ar[d] \\
B\Delta_Y \ar[r] & B\Delta_X
}
$$
is homotopy cartesian.
\end{itemize}
Set ${\mathcal E} := {\mathcal E}(X)$. Then ${\mathcal E}$ is a
fibered spectrum over $X$, and
it is a straightforward consequence of the above properties
that there is a natural weak equivalence
$f^\natural(Y) \simeq H^\bullet(Y;{\mathcal E})$. This completes the
proof of Theorem \ref{rep}.
\end{proof}
% Source: https://arxiv.org/abs/1308.1700
% Title: A simple combinatorial interpretation of certain generalized Bell and Stirling numbers
\section{Introduction}
\label{sec:intro}
In \cite{blasiak1,blasiak2,blasiak3,blasiak4} P. Blasiak et al.
introduced coefficients $B_{r,s}(n)$, and $S_{r,s}(n,k)$ that
provide a wide-ranging generalization of Bell numbers, and of
Stirling numbers of the second kind, respectively. In particular
they defined the generalized Bell polynomial (see
\cite[Equations (1.5) and (2.1)]{blasiak1})\footnote{We denote
by $(x)_n$ the falling factorial $x(x-1)\cdots (x-n+1)$.
(Note that the authors of \cite{blasiak1,blasiak2,blasiak3,blasiak4}
use the symbol $x^{\underline{n}}$ instead.)}
\begin{equation}\label{eq:gen_Bell_poly}
\begin{split}
B_{r,s}(n,t)& =\sum_{k=s}^{ns} S_{r,s}(n,k)t^k\\
&=e^{-t}\sum_{k=0}^{\infty}\frac{1}{k!}\left(\prod_{j=1}^n\big(k+(j-1)(r-s)\big)_s\right)t^k,
\end{split}
\end{equation}
\noindent where $r, s, n, k$ are positive integers and $r \geq s$.
These coefficients generalize Bell numbers, and Stirling numbers of
the second kind, usually denoted $B_n$, and $S(n,k)$, respectively,
because by letting $r=s=t=1$ in the above formula, one obtains the
classical formula of Dobinski \cite{comtet}
\begin{equation}\label{eq:dobinski}
B_{1,1}(n)=\frac{1}{e}\sum_{k=0}^{\infty}\frac{k^n}{k!}.
\end{equation}
\noindent In fact $B_n=B_{1,1}(n)$, and $S(n,k)=S_{1,1}(n,k).$
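As a quick numerical sanity check (this snippet and its function name are ours, not part of the paper), truncating Dobinski's series recovers the first few Bell numbers:

```python
from math import e, factorial

def bell_dobinski(n, terms=60):
    """Approximate B_n by truncating Dobinski's formula B_n = (1/e) * sum_{k>=0} k^n / k!."""
    return round(sum(k ** n / factorial(k) for k in range(terms)) / e)

# The first Bell numbers B_1, ..., B_5 are 1, 2, 5, 15, 52.
```

Since $k^n/k!$ decays super-exponentially, a few dozen terms already determine these small Bell numbers exactly after rounding.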
\medskip
The work of P. Blasiak et al. was motivated by the fact that their
coefficients appear to be relevant to the so-called \emph{Boson
normal ordering problem}.
\medskip
In \cite{blasiak2} the authors asked for a combinatorial
interpretation of these coefficients. Later on, in \cite{blasiak4},
they provided one such interpretation, in terms of what they called
\emph{colonies of bugs}. We refer to \cite[Section III]{blasiak4} for
the exact definition, but we remark that a colony of bugs is a fairly
complex object that corresponds to a labelled tree whose vertices
include {\em labels} as well as {\em cells}. Each bug in a colony
corresponds to a subtree, and has a {\em type} $(r,s)$; it consists
of a body of $r$ cells, as well as of $s$ {\em legs}, some of which
can be {\em free} \cite[Section III]{blasiak4}. It turns out
(\cite[Theorem 3.1]{blasiak4}) that $B_{r,s}(n)$ counts the number
of colonies of $n$ bugs each of type $(r,s)$, and that $S_{r,s}(n,k)$
counts the number of such colonies having exactly $k$ free legs.
In this note we suggest a simpler combinatorial interpretation of
these coefficients, at least in some important cases. Our interpretations
are stated in standard combinatorial terminology, in terms of colourings
and labelled Eulerian digraphs.
\medskip
Our focus is the case $r=s$. We supply two simple combinatorial
interpretations of the coefficients $B_{m,m}(n)$ and $S_{m,m}(n,k)$, for
all positive integers $m, n, k$. We note that these coefficients are still
much more general than the Bell numbers $B_{1,1}(n)$ and the Stirling
numbers of the second kind $S_{1,1}(n,k)$. Our first interpretation
(Section \ref{sec:stablepart}) is in terms of colourings of a certain graph.
In Section \ref{sec:cycles} we supply another interpretation of the same
numbers in terms of certain labelled Eulerian digraphs.
Finally, in Section \ref{sec:21case} we remark that in the general case
when $r$ and $s$ are different, there appear to be in certain cases
well-known simple combinatorial interpretations as well; we discuss
mostly the case $r=2, s=1$, but also remark on possible connections
for certain values in the cases $r>2$ and $s=1$.
\section{Colourings}
\label{sec:stablepart}
A $k$-{\em colouring} of a graph $G$ is a partition of the vertex set of $G$
into $k$ non-empty stable sets, \textit{i.e.}{} sets not containing adjacent vertices.
Each such stable set is called a {\em colour-class} of the partition.
Sometimes a $k$-colouring is defined as a mapping of vertices into a set of
$k$ colours, so that adjacent vertices obtain different colours. We note that
for us the names of the colours do not play a role, \textit{i.e.}, two mappings that
yield the same partition are considered the same colouring. Moreover, we
require that each colour-class is non-empty (which corresponds to the
requirement that each colour is used).
We denote by $K_m$ the complete graph on $m$ vertices, and by
$nK_m$ the disjoint union of $n$ copies of $K_m$.
For positive integers $ m,n, k$, let $C_{m}(n,k)$ denote the number of
$k$-colourings of $nK_m$. We first prove a recurrence for the numbers
$C_{m}(n,k)$.
\begin{proposition}
\label{prop:rec1}
We have
\begin{equation}
\label{eq:formula-rec1}
C_{m}(n,k)=\sum_{i=0}^m\binom{m}{i}(k-i)_{m-i} C_{m}(n-1,k-i)\,,
\end{equation}
with initial conditions
\begin{align*}
C_{m}(n,k)& = 0 \text{ whenever } k<m\,,\ \ and\\
C_{m}(1,k)& = \begin{cases}
1 & \text{if }\ k=m\,, \\
0 & \text{otherwise}\,.
\end{cases}
\end{align*}
\end{proposition}
\begin{proof}
The case $k<m$ is trivial (with fewer than $m$ colours we cannot colour $K_m$).
It is also obvious that when $n=1$ we have a unique $k$-colouring of $K_m$ when
$k=m$, and none when $k>m$.
To prove the recurrence, we describe how to obtain, in two steps, all $k$-colourings
of $nK_m$, for $k\geq m$ and $n\neq 1$. Fix an arbitrary copy of $K_m$.
\smallskip
(1) Choose $i$ vertices of the fixed $K_m$, each forming a singleton colour-class.
\smallskip
(2) Insert the remaining $m-i$ vertices of the fixed $K_m$ in the colour-classes of all
$(k-i)$-colourings of $(n-1)K_m$.
\smallskip
Step (1) can be done in $\binom{m}{i}$ ways, and step (2) in
$(k-i)_{m-i} C_{m}(n-1,k-i)$ ways. Our claim is proved.
\end{proof}
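The recurrence (\ref{eq:formula-rec1}) is straightforward to put on a computer. The following short Python sketch is our own illustration (the function name \texttt{C} and its argument order are our choices, not part of the original development); it implements the recurrence with memoization.

```python
from functools import lru_cache
from math import comb

def falling(x, j):
    """Falling factorial (x)_j = x(x-1)...(x-j+1)."""
    r = 1
    for t in range(j):
        r *= x - t
    return r

@lru_cache(maxsize=None)
def C(m, n, k):
    """Number of k-colourings of n disjoint copies of K_m, via the recurrence."""
    if k < m:
        return 0
    if n == 1:
        return 1 if k == m else 0
    return sum(comb(m, i) * falling(k - i, m - i) * C(m, n - 1, k - i)
               for i in range(m + 1))
```

For instance, \texttt{C(3, 2, k)} for $k=3,\dots,6$ returns $6, 18, 9, 1$, and the row sum $6+18+9+1=34$ counts all colourings of $2K_3$.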
\begin{remark}
Note that $C_{m}(n,nm)=1$.
\end{remark}
We now have the following result.
\begin{proposition}
\label{prop:colouring_Smm}
$S_{m,m}(n,k)$ counts the number of $k$-colourings of $nK_m$.
In other words, $S_{m,m}(n,k)=C_{m}(n,k)$.
\end{proposition}
\begin{proof}
A simple manipulation of the formulas shows that recurrence
(\ref{eq:formula-rec1}) coincides with the recurrence (21) in
\cite{blasiak2}, namely:
\begin{align*}
&S_{r,r}(n+1,k)=\sum_{p=0}^r\binom{k+p-r}{p}(r)_p S_{r,r}(n,k+p-r)\,.
\end{align*}
Indeed, setting $r=m$ and $p=m-i$, the two recurrences agree term by term,
by the identity $(m)_i\binom{k+i-m}{i}=(k+i-m)_i\binom{m}{i}$.
\end{proof}
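The identity used in the proof can be sanity-checked numerically; the snippet below (our own check, with arbitrary naming) verifies it over a small range of parameters.

```python
from math import comb

def falling(x, j):
    """Falling factorial (x)_j = x(x-1)...(x-j+1)."""
    r = 1
    for t in range(j):
        r *= x - t
    return r

def identity_holds(m, k, i):
    # (m)_i * binom(k+i-m, i) == (k+i-m)_i * binom(m, i), valid for k >= m
    return falling(m, i) * comb(k + i - m, i) == falling(k + i - m, i) * comb(m, i)
```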
Needless to say, the recurrence (\ref{eq:formula-rec1}) generalizes
the classical recursion for the Stirling numbers of the second
kind. Using (\ref{eq:formula-rec1}), we can compute a few examples.
In Table \ref{tab:S_3} we compute the number of $k$-colourings of $nK_3$.
\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{c|rrrrrrrr}
\toprule
$S_{3,3}(n,k)$ & \text{k=3} & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\midrule
\text{n=1} & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\
2 & 6 & 18 & 9 & 1 & \text{} & \text{} & \text{} & \text{} \\
3 & 36 & 540 & 1242 & 882 & 243 & 27 & 1 & \text{} \\
4 & 216 & 13608 & 94284 & 186876 & 149580 & 56808 & 11025 & 1107 \\
5 & 1296 & 330480 & 6148872 & 28245672 & 49658508 & 41392620 & 18428400 & 4691412 \\
\bottomrule
\end{tabular}
\end{center}
\caption{$S_{3,3}(n,k)$.}
\label{tab:S_3}
\end{table}
Table \ref{tab:S_3} also appears in \cite[Table 1]{blasiak1}.
Denoting by $B_m(n)$ the number of \emph{all} colourings of $n K_m$, we have (cf. \cite[Equation (1.5)]{blasiak1}):
\begin{equation}\label{eq:Bm}
B_m(n) = \sum_{k=m}^{nm} C_{m}(n,k) = \sum_{k=m}^{nm} S_{m,m}(n,k) = B_{m,m}(n)\,.
\end{equation}
For instance, summing the rows of Table \ref{tab:S_3} we obtain
$1, 34, 2971, 513559, \dots$, which is the sequence $B_{3,3}(n)$ of \cite{blasiak1},
i.e., sequence A069223 in \cite{oeis}.
\begin{example}Figure \ref{fig:3k3} shows the graph $2K_3$.
\bigskip
\begin{center}
\begin{tikzpicture}[auto,node distance=2cm,
thick,main node/.style={circle,fill=blue!20,draw,font=\sffamily\Large\bfseries}]
\node[main node] (1) {a};
\node[main node] (2) [above right of=1] {b};
\node[main node] (3) [below right of=2] {c};
\node[main node] (4) [ right of=3] {d};
\node[main node] (5) [ above right of=4] {e};
\node[main node] (6) [ below right of=5] {f};
\path[color= red, every node/.style={color=red}]
(1) edge [] node[above] {} (2)
(2) edge [] node[below] {} (3)
(1) edge [] node[above] {} (3)
(4) edge [] node[above] {} (5)
(5) edge [] node[below] {} (6)
(4) edge [] node[above] {} (6);
\end{tikzpicture}
\captionof{figure}{The graph $2 K_3$.}
\label{fig:3k3}
\end{center}
\smallskip
The eighteen $4$-colourings of $2K_3$ are
\begin{align*}
a|d|be|cf & & a|d|bf|ce & & a|e|bd|cf & & a|e|bf|cd & & a|f|bd|ce & & a|f|be|cd\, \\
ad|b|e|cf & & ad|b|f|ce & & ae|b|d|cf & & ae|b|f|cd & & af|b|d|ce & & af|b|e|cd\, \\
ad|be|c|f & & ad|bf|c|e & & ae|bd|c|f & & ae|bf|c|d & & af|bd|c|e & & af|be|c|d\,.
\end{align*}
\end{example}
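Independently of the recurrence, these counts can be checked by brute force. The sketch below (ours; naming is arbitrary) enumerates the set partitions of the vertex set of $nK_m$ in canonical order and keeps those whose blocks are stable; on $2K_3$ with $k=4$ it recovers the eighteen colourings listed above.

```python
def count_colourings(m, n, k):
    """Brute-force count of k-colourings of n*K_m: partitions of the vertex
    set into exactly k non-empty blocks, no block meeting a clique twice.
    Vertices are pairs (clique index, position within the clique)."""
    verts = [(i, j) for i in range(n) for j in range(m)]

    def rec(idx, blocks):
        if idx == len(verts):
            return 1 if len(blocks) == k else 0
        v = verts[idx]
        total = 0
        for b in blocks:
            # a block stays stable iff it meets each clique at most once
            if all(u[0] != v[0] for u in b):
                b.append(v)
                total += rec(idx + 1, blocks)
                b.pop()
        if len(blocks) < k:  # open a new block (canonical order: counted once)
            blocks.append([v])
            total += rec(idx + 1, blocks)
            blocks.pop()
        return total

    return rec(0, [])
```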
\section{Labelled Eulerian Digraphs}
\label{sec:cycles}
We consider digraphs that allow loops and multiple edges in the same direction.
A digraph $G$ is \emph{Eulerian} if at every vertex the in-degree equals
the out-degree. (Note that we do not require $G$ to be connected.)
The edge set of an Eulerian digraph $G$ can be partitioned into directed cycles.
We call an Eulerian digraph \emph{$(n,m)$-labelled} if
its edge set is partitioned into $n$ directed $m$-cycles, each with a distinguished
first edge (and hence a unique second, third, etc.,
$m$-th edge). Figure \ref{fig:T_3(2)} shows a $(2,3)$-labelled Eulerian digraph, with its 2
directed 3-cycles; the $j^{th}$ edge of the $i^{th}$ cycle is labelled $e_{i,j}$.
\medskip
\begin{center}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm,
thick,main node/.style={circle,fill=blue!20,draw,font=\sffamily\Large\bfseries}]
\node[main node] (2) {};
\node[main node] (1) [below left of=2] {};
\node[main node] (3) [below right of=1] {};
\node[main node] (4) [below right of=2] {};
\path[color= red, every node/.style={color=red}]
(1) edge [bend left] node[left] {e11} (2)
(2) edge [bend left]node[left] {e12} (3)
(3) edge [bend left]node[left] {e13} (1);
\path[every node/.style={font=\sffamily\small}]
(2) edge [bend left]node[left] {e22} (4)
(3) edge [bend left]node[left] {e21} (2)
(4) edge [bend left]node[left] {e23} (3);
\end{tikzpicture}
\captionof{figure}{A $(2,3)$-labelled Eulerian digraph.}
\label{fig:T_3(2)}
\end{center}
\begin{theorem}
\label{Pavol}
The number of $(n,m)$-labelled Eulerian digraphs is equal to $B_{m,m}(n)$.
\end{theorem}
\begin{proof}
We exhibit a bijection between the set of $(n,m)$-labelled Eulerian digraphs
and the set of colourings of $n K_m$. To this end we assign an
arbitrary order to the $n$ cliques of $nK_m$. Thus the vertices of $nK_m$ will be
called $v_{i,j}$ for $i=1, 2, \dots, n$, and $j=1, 2, \dots, m$. We define a
bijective mapping $\phi$ associating $e_{i,j}$ with $v_{i,j}$. (Recall that $e_{i,j}$ is the
$j^{th}$ edge of the $i^{th}$ cycle.)
\medskip\noindent$\bullet$
From graphs to colourings. Let $\mathcal{T}_{m}(n)$ be the set of
$(n,m)$-labelled Eulerian digraphs.
Here we establish a bijection between the $k$-colourings
of $nK_m$ and the elements of $\mathcal{T}_{m}(n)$ with $k$ vertices. Let $\tau$ be an element of $\mathcal{T}_{m}(n)$
with $k$ vertices and, for $t=1, 2, \dots, k$, let $B_t$ be the set of
edges of $\tau$ ending
at vertex $t$. It is obvious that
\[
\{B_1, B_2, \dots, B_k\}
\]
is a partition of the set of edges of $\tau$. Now, by construction, one sees that
\[
\{\phi(B_1), \phi(B_2), \dots, \phi(B_k)\}
\]
is a $k$-colouring of $nK_m$.
For instance, the graph drawn in Figure \ref{fig:T_3(2)} corresponds to the following
colouring of $2K_3$:
\[
v_{1,3}\ |\ v_{1,1} v_{2,1}\ |\ v_{1,2} v_{2,3}\ |\ v_{2,2}\,.
\]
\medskip\noindent$\bullet$
From colourings to graphs. Let $\pi=\{B_1, B_2, ..., B_k\}$ be a colouring of
$nK_m$. We describe the directed graph, $\tau$, associated with $\pi$.
\smallskip
$\tau$ has $k$ vertices, say $w_1, w_2, ...,w_k$. To define the edges of $\tau$ we
assume first $m>1$. Let, for $i=1, 2, ..., n$, and $j=1, 2, ..., m$, $B_{p}$ be the
block of $\pi$ containing vertex $v_{i,j}$, and $B_{q}$ be the block of $\pi$ containing
vertex $v_{i,j+1}$. Notice that the second indices of the vertices of $nK_m$ are considered
cyclically: $v_{i,m+1}\equiv v_{i,1}$. Then edge $e_{i,j}$ starts at $w_{p}$,
and ends at $w_{q}$.
If $m=1$, the edges of $\tau$ are loops. Specifically $e_{i,1}$ starts and ends at
$w_{t}$, where $t$ is the index of the block of $\pi$ containing vertex $v_{i,1}$.
\end{proof}
Thus we can say again that the numbers of $(n,m)$-labelled Eulerian digraphs with
$k$ vertices satisfy the same recurrence as $S_{m,m}(n,k)$.
Therefore counting these digraphs provides another combinatorial interpretation
of the coefficients of \cite{blasiak1}.
\medskip
We close the section with a remark. It is obvious that any $k$-colouring of a given
set is fully described by any $k-1$ of its colour-classes. Accordingly, one can give
a slightly different interpretation of the coefficients $S_{m,m}(n,k)$ by removing the last
edge from each cycle, producing a partition into labelled directed paths instead of
cycles. This model generalizes the concept of loopless, oriented multigraphs on $n$
labeled arcs as in A020556 in \cite{oeis}.
\section{Conclusions}
\label{sec:21case}
We hope that simpler combinatorial interpretations can be found for other generalized
Bell numbers and Stirling numbers of the second kind. In particular, we note that
our bijections (in Sections \ref{sec:stablepart} and \ref{sec:cycles}) extend to disjoint unions of cliques of different
sizes.
For the coefficients $S_{2,1}(n,k)$ we observe that Equation (15) of \cite{blasiak2}
implies that $S_{2,1}(n,k)$ is equal to the (positive) Lah number
\[
L(n,k)=\frac{n!}{k!}\binom{n-1}{k-1}\,.
\]
According to the classical interpretation of Lah numbers, this means that $S_{2,1}(n,k)$
counts the number of ordered placements of $n$ balls into $k$ boxes, and $B_{2,1}(n)$
counts the number of ordered placements of $n$ balls into boxes \cite{comtet}.
Table \ref{tab:S_21} provides some values of $S_{2,1}(n,k)$. Those values also appear in
\cite[Table 1]{blasiak1}, and in sequence A105278 of \cite{oeis}, where
further combinatorial interpretations of such coefficients are proposed.
\begin{table}[h]\footnotesize
\begin{center}
\begin{tabular}{c|rrrrrrrrr}
\toprule
$S_{2,1}(n,k)$ & \text{k=1} & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\midrule
\text{n=1} & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\
2 & 2 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\
3 & 6 & 6 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} & \text{} \\
4 & 24 & 36 & 12 & 1 & \text{} & \text{} & \text{} & \text{} & \text{} \\
5 & 120 & 240 & 120 & 20 & 1 & \text{} & \text{} & \text{} & \text{} \\
6 & 720 & 1800 & 1200 & 300 & 30 & 1 & \text{} & \text{} & \text{} \\
7 & 5040 & 15120 & 12600 & 4200 & 630 & 42 & 1 & \text{} & \text{} \\
8 & 40320 & 141120 & 141120 & 58800 & 11760 & 1176 & 56 & 1 & \text{} \\
9 & 362880 & 1451520 & 1693440 & 846720 & 211680 & 28224 & 2016 & 72 & 1 \\
\bottomrule
\end{tabular}
\end{center}
\caption{$S_{2,1}(n,k)$.}
\label{tab:S_21}
\end{table}
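A direct implementation of the Lah-number formula (our own sketch; function names are arbitrary) reproduces Table \ref{tab:S_21} and its row sums $B_{2,1}(n)$.

```python
from math import comb, factorial

def lah(n, k):
    """Unsigned Lah number L(n, k) = (n!/k!) * binom(n-1, k-1)."""
    return factorial(n) // factorial(k) * comb(n - 1, k - 1)

def B21(n):
    """Row sum of Table 2: number of ordered placements of n balls into boxes."""
    return sum(lah(n, k) for k in range(1, n + 1))
```

For example, \texttt{lah(5, 3)} gives $120$ and \texttt{B21(4)} gives $73$, in accordance with Table \ref{tab:S_21}.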
Finally, we remark that the values of $S_{3,1}(n,1)$ in Table 1 in \cite{blasiak1} appear to be identical to the sequence
A001147 from \cite{oeis}, which counts the number of increasing ordered rooted trees on $n+1$ vertices. (Here
``increasing'' means that the vertices are labelled $0, 1, 2, \dots, n$ so that the labels increase along each path from the root.)
Similarly, the values $S_{4,1}(n,1)$ appear to be identical to the sequence A007559 from \cite{oeis}.
\newcommand{\etalchar}[1]{$^{#1}$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2013-08-09T02:00:46",
"yymm": "1308",
"arxiv_id": "1308.1700",
"language": "en",
"url": "https://arxiv.org/abs/1308.1700",
"abstract": "In a series of papers, P. Blasiak et al. developed a wide-ranging generalization of Bell numbers (and of Stirling numbers of the second kind) that appears to be relevant to the so-called Boson normal ordering problem. They provided a recurrence and, more recently, also offered a (fairly complex) combinatorial interpretation of these numbers. We show that by restricting the numbers somewhat (but still widely generalizing Bell and Stirling numbers), one can supply a much more natural combinatorial interpretation. In fact, we offer two different such interpretations, one in terms of graph colourings and another one in terms of certain labelled Eulerian digraphs.",
"subjects": "Discrete Mathematics (cs.DM); Combinatorics (math.CO)",
"title": "A simple combinatorial interpretation of certain generalized Bell and Stirling numbers"
} |
https://arxiv.org/abs/2202.00754 | On Wilson's theorem about domains of attraction and tubular neighborhoods | In this paper, we show that the domain of attraction of a compact asymptotically stable submanifold of a finite-dimensional smooth manifold of an autonomous system is homeomorphic to its tubular neighborhood. The compactness of the attractor is crucial, without which this result is false; two counterexamples are provided to demonstrate this. |
\section{Introduction}
The domain of attraction of an attractor of a continuous dynamical system has been widely studied. An \emph{attractor} is a closed invariant set for which there exists an open neighborhood such that every trajectory of the dynamical system starting within the neighborhood eventually converges to the attractor, in the sense that the distance between the trajectory and the attractor converges to zero; namely, the attractor is \emph{attractive}. The set of all initial conditions from which the corresponding trajectories converge to the attractor is called the \emph{domain of attraction} of the attractor \cite{khalil2002nonlinear,bhatia2002stability}. Generally, it is difficult or sometimes impossible to find analytically the domain of attraction of an attractor. If an attractor is additionally \emph{Lyapunov stable} \cite[Chapter 4]{khalil2002nonlinear}, then it is called an \emph{asymptotically stable attractor}; sometimes Lyapunov functions can be utilized to estimate its domain of attraction, but the estimate can be conservative \cite[Chapters 4 and 8]{khalil2002nonlinear}.
Partly due to the difficulty of calculating the domain of attraction of an attractor, some studies in the literature instead investigate the ``shapes'' or ``sizes'' of domains of attraction in the topological sense \cite{sontag2013mathematical,bernuau2019topological,moulay2010topological,bhatia2002stability,bhat2000topological,wilson1967structure}. In particular, in the simplest case where the attractor is an asymptotically stable equilibrium point, it has been shown in \cite[Theorem 21]{sontag2013mathematical} that the domain of attraction is contractible. This result characterizes the ``shape'' of the domain of attraction, and it also implies the ``size'' of the domain of attraction. Namely, it leads to the topological obstruction that if the state space of the system is not contractible, then an equilibrium point cannot be stabilized globally \cite[Corollary 5.9.3]{sontag2013mathematical}. Another topological obstruction is shown in \cite{bhat2000topological}, which states that the domain of attraction of an asymptotically stable equilibrium point cannot be the whole state space (i.e., global asymptotic stability of an equilibrium is impossible) if the state space of the continuous dynamical system has the structure of a vector bundle over a compact manifold. Some studies partly generalize these results to asymptotically stable attractors that are not necessarily equilibrium points. In \cite{moulay2010topological}, it is proved that a compact, asymptotically stable attractor defined on a manifold (or more generally, on a locally compact metric space) is a \emph{weak deformation retract} of its domain of attraction. The conclusion is further developed in \cite{bernuau2019topological}, which shows that if the considered manifold is the Euclidean space $\mathbb{R}^n$, then the compact asymptotically stable attractor is a \emph{strong deformation retract} of its domain of attraction.
Assuming that the asymptotically stable attractors are \emph{compact submanifolds} of some ambient finite-dimensional smooth manifolds, stronger conclusions can be made about the domains of attraction. For example, it is proved in \cite[Chapter V, Lemma 3.2]{bhatia2002stability} that the intersection of an $\epsilon$-neighborhood of the attractor and some sublevel set of a corresponding Lyapunov function (whose existence is automatically guaranteed \cite{wilson1969smoothing}) is a deformation retract of the domain of attraction of the attractor. This result is refined in \cite{moulay2010topological,yao2021topo}, which conclude that the attractor itself is a strong deformation retract of its domain of attraction. Therefore, the attractor and its domain of attraction are homotopy equivalent. This result has practical significance. For example, it facilitates the analysis regarding the existence of singular points and the possibility of global convergence of trajectories to desired paths in the vector-field guided path-following problem for robotic control systems \cite{yao2021topo}. Note that the results discussed in this paragraph are strengthened in \cite{wilson1967structure} for the case where the attractor is an embedded submanifold: Theorem 3.4 in \cite{wilson1967structure} claims that the domain of attraction is diffeomorphic to a tubular neighborhood of the attractor, which can be either a compact or non-compact submanifold. However, in this paper, we will show that the compactness of the attractor is crucial, without which such a claim becomes false. In addition, the proof of Theorem 3.4 in \cite{wilson1967structure} is very brief, only indicating the method without giving sufficient detail. In this paper, we will detail the proof for a corrected version of this theorem, where the attractor is required to be compact.
\textit{Contributions}: Throughout the paper, manifolds or submanifolds are \emph{without} boundaries, and they are second countable and paracompact. We assume that the attractor is compact, asymptotically stable and it is a submanifold of some finite-dimensional smooth manifold. We show that the compactness of the attractor is crucial for Theorem 3.4 in \cite{wilson1967structure} by providing counterexamples where Theorem 3.4 in \cite{wilson1967structure} no longer holds if the attractor is \emph{not} compact. Taking the compactness of the attractor into account, Theorem 3.4 in \cite{wilson1967structure} is corrected as below:
\begin{theorem}[Corrected version of Theorem 3.4 in \cite{wilson1967structure}] \label{thm:DA homeomorphic to tub}
The domain of attraction of a compact asymptotically stable submanifold of a finite-dimensional smooth manifold of an autonomous system is homeomorphic to its tubular neighborhood.
\end{theorem}
In this paper, we will give a complete and detailed proof of Theorem \ref{thm:DA homeomorphic to tub}, along with some auxiliary results to gain more insight into the theorem.
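As a simple illustration of Theorem \ref{thm:DA homeomorphic to tub} (a toy example of our own, not taken from \cite{wilson1967structure}), consider the planar autonomous system written in polar coordinates as
\[
\dot r = r(1-r^{2}), \qquad \dot\theta = 1.
\]
The unit circle $\mathcal{S}=S^{1}$ is a compact asymptotically stable submanifold of $\mathcal{M}=\mathbb{R}^{2}$, and its domain of attraction is $\mathbb{R}^{2}\setminus\{0\}$. This open punctured plane is indeed homeomorphic to a tubular neighborhood of $S^{1}$, e.g., the annulus $\{1/2<r<2\}$: both are homeomorphic to the cylinder $S^{1}\times\mathbb{R}$, which is the total space of the normal bundle of $S^{1}$ in $\mathbb{R}^{2}$.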
The remainder of the paper is organized as follows. Section \ref{sec_pre} provides some preparatory results for the convenience of proving Theorem \ref{thm:DA homeomorphic to tub}. Then the detailed proof of Theorem \ref{thm:DA homeomorphic to tub} is elaborated in Section \ref{sec_proof}. To justify the importance of the compactness of the attractor in this theorem, we provide two counterexamples where the attractor is \emph{not} compact and hence Theorem \ref{thm:DA homeomorphic to tub} fails to hold in Section \ref{sec_example}. Finally, Section \ref{sec_conclu} concludes the paper.
\section{Preparatory results} \label{sec_pre}
In this section, we go through some basic notions and facts that will be used in the sequel. Let $\mathcal{M}$ and $\mathcal{N}$ be smooth manifolds, and $\mathcal{S}$ be a submanifold of $\mathcal{M}$. Note that in this section, the submanifold $\mathcal{S}$ can be compact or non-compact unless its compactness is specified explicitly. The notation $:=$ means ``defined to be''. The map ${\rm id}$ is the identity map where the domain and codomain are clear from the context.
First, we recall the definitions of topological and smooth embeddings.
\begin{defn}[Topological and smooth embeddings, {\cite[p. 85]{lee2015introduction}}]
A \emph{(topological) embedding} is an injective continuous map that is a homeomorphism onto its image (with the subspace topology). A \emph{smooth embedding} is a smooth immersion that is also a (topological) embedding.
\end{defn}
If $f: \mathcal{M} \to \mathcal{N}$ is an embedding, the image $f(\mathcal{M})$ can be regarded as a homeomorphic copy of $\mathcal{M}$ inside $\mathcal{N}$. If $f: \mathcal{M} \to \mathcal{N}$ is a smooth embedding, then it is both a topological embedding and a smooth immersion.
For each $p\in \mathcal{M}$, denote by $T_{p}\mathcal{M}$ and $T_{p}\mathcal{S}$ the tangent
spaces respectively of $\mathcal{M}$ and $\mathcal{S}$ at $p$, and by $T\mathcal{M}$ and $T\mathcal{S}$
the tangent bundles. Note that $T\mathcal{S}$ can be regarded as a subbundle of $T\mathcal{M}$ in a natural way.
\begin{defn}[Normal bundle]
The normal bundle $N_{\mathcal{S}}$ of $\mathcal{S}$ in $\mathcal{M}$ is the quotient bundle $T_{\mathcal{S}}\mathcal{M}\big/T\mathcal{S} := \bigsqcup_{p\in \mathcal{S}} (T_{p}\mathcal{M} / T_p \mathcal{S})$, where $\bigsqcup$ denotes the disjoint union.
\end{defn}
\begin{fact}[{\cite[Sections 6.1 and 7.1]{mukherjee2016differential}}]
Let $g$ be any Riemannian metric on $\mathcal{M}$. For each $p\in \mathcal{M}$, let
$\mathcal{N}_{p}$ be the orthogonal complement of $T_{p}\mathcal{S}$ in $T_{p}\mathcal{M}$ with
respect to $g$. Then $\bigsqcup_{p\in \mathcal{S}}\mathcal{N}_{p}$ is a subbundle of
$T_{\mathcal{S}}\mathcal{M}$ and it is isomorphic to $T_{\mathcal{S}}\mathcal{M}\big/T\mathcal{S}$. This gives another way of defining the normal bundle of $\mathcal{S}$ in $\mathcal{M}$.
\end{fact}
\begin{fact}[{\cite[Section 5.1]{mukherjee2016differential}}]
For any vector bundle $\mathcal{E}$ over $\mathcal{S}$, (the image of) the zero section of $\mathcal{E}$ can be canonically identified with $\mathcal{S}$ via
\begin{align*}
\iota_{\mathcal{S}}: \bar{0}_{\mathcal{S}} \subseteq \mathcal{E} &\to \mathcal{S} \\
0_{x} &\mapsto x
\end{align*}
where $\bar{0}_{\mathcal{S}} \subseteq \mathcal{E}$ denotes (the image of) the zero section of $\mathcal{E}$, and $0_{x}$ denotes the zero vector in the vector space $\mathcal{E}_{x}$ for $x \in \mathcal{S}$. Therefore, $\iota_{\mathcal{S}}$ is a diffeomorphism from $\bar{0}_{\mathcal{S}}$ to $\mathcal{S}$. Note that viewing $\mathcal{S}$ as a submanifold of $\mathcal{M}$, $\iota_{\mathcal{S}}$ can also be regarded as an embedding of $\bar{0}_{\mathcal{S}}$ into $\mathcal{M}$.
\end{fact}
\begin{defn}[Tubular neighborhood] \label{def:tubular-neighborhood}
A tubular neighborhood of $\mathcal{S}$ is an open embedding $\tau:\mathcal{E} \rightarrow \mathcal{M}$ from some vector bundle $\mathcal{E}$ over $\mathcal{S}$ to $\mathcal{M}$ satisfying
\[
\tau\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}.
\]
More loosely, we often call the open set $\mathcal{W} := \tau(\mathcal{E})$ a tubular neighborhood of $\mathcal{S}$.
\end{defn}
Whether we refer to a tubular neighborhood as an embedding or an open set should be clear from the context.
\begin{theorem}[Existence of tubular neighborhood, {\cite[Proposition 7.1.3]{mukherjee2016differential}}] \label{thm:existence of tubular neighborhoods-1}
Suppose that $\mathcal{S}$ is a submanifold of $\mathcal{M}$. Then there exists an embedding
$\tau:N_{\mathcal{S}} \to \mathcal{M}$ from the normal bundle $N_{\mathcal{S}}$ of $\mathcal{S}$
into $\mathcal{M}$ such that $\tau$ keeps the zero section of $N_{\mathcal{S}}$ (i.e., $\tau(0_{x})=x$ for all $x\in\mathcal{S}$, or $\tau\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}$).
\end{theorem}
\begin{remark}
This means that $\tau: N_\mathcal{S} \to \mathcal{M}$ is a tubular neighborhood of $\mathcal{S}$, and $\tau$ is a diffeomorphism between $N_\mathcal{S}$ and $\tau(N_\mathcal{S})$.
\end{remark}
Before presenting the uniqueness result of tubular neighborhoods, we first recall the definitions of \emph{isotopy} and \emph{diffeotopy}.
\begin{defn}[Isotopy and diffeotopy, {\cite[pp. 177-178]{hirsch2012differential}}]
An \emph{isotopy} from $\mathcal{M}$ to $\mathcal{N}$ is a map $F : \mathcal{M} \times \mathcal{I} \to \mathcal{N} $, where $\mathcal{I} \subseteq \mathbb{R}$ is an interval, such that for each $t \in \mathcal{I}$, the map
$
F_t: \mathcal{M} \to \mathcal{N}
$
defined by $ x \mapsto F(x,t)$ is an embedding. We also say $F$ is an isotopy from $F_0$ to $F_1$, and $F_0$ and $F_1$ are called \emph{isotopic}.
%
If each $F_t$ is a smooth embedding, then $F$ is a \emph{smooth isotopy} from $\mathcal{M}$ to $\mathcal{N}$.
%
If each $F_t$ is a diffeomorphism, then $F$ is called a \emph{diffeotopy}.
\end{defn}
Throughout the paper, whenever we mention an \emph{isotopy}, we mean a \emph{smooth isotopy}. Now we show the uniqueness result of the tubular neighborhood as follows.
\begin{theorem}[Uniqueness of tubular neighborhood I, {\cite[Theorem 7.4.4]{mukherjee2016differential}}] \label{thm:uniqueness of tubular neighborhoods-1}
Suppose that $f_{i}:\mathcal{E}_{i}\rightarrow\mathcal{M}, \;i=0,1$, are tubular neighborhoods of $\mathcal{S}$. Then there exists a bundle map\footnote{More specifically, the bundle map $\lambda$ is a bundle isomorphism. This is because $f_0$ and $f_1$ are embeddings and their images are open sets in $\mathcal{M}$; therefore, $\mathcal{E}_0$ and $\mathcal{E}_1$ are vector bundles which, as manifolds, have the same dimensions as $\mathcal{M}$ does.} $\lambda:\mathcal{E}_{0}\rightarrow \mathcal{E}_{1}$
such that $f_{0}$ and $f_{1}\circ\lambda$ are isotopic (see Fig. \ref{fig: thm:uniqueness of tubular neighborhoods-1}).
\end{theorem}
\begin{figure}[h]
\centering
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
{\mathcal{E}_0} & {\mathcal{E}_1} \\
{\mathcal{M}} & {} \\};
\path[-stealth]
(m-1-1) edge node [above] {$\lambda$} (m-1-2)
(m-1-1) edge node [left] {$f_0$} (m-2-1)
(m-1-2) edge node [below right] {$f_1$} (m-2-1);
\end{tikzpicture}
\caption{Relations in Theorem \ref{thm:uniqueness of tubular neighborhoods-1}. If $f_0$ and $f_1$ are two tubular neighborhoods, then there exists a bundle map $\lambda: \mathcal{E}_0 \to \mathcal{E}_1$ such that $f_{0}$ and $f_{1}\circ\lambda$ are isotopic.}
\label{fig: thm:uniqueness of tubular neighborhoods-1}
\end{figure}
Due to Theorems \ref{thm:existence of tubular neighborhoods-1} and \ref{thm:uniqueness of tubular neighborhoods-1}, Theorem \ref{thm:DA homeomorphic to tub} implies the following result.
\begin{prop}
The domain of attraction of a compact asymptotically stable submanifold $\mathcal{S}$ of a finite-dimensional smooth manifold $\mathcal{M}$ of an autonomous system is homeomorphic to its normal bundle $N_{\mathcal{S}}$.
\end{prop}
\begin{proof}
Combine Theorems \ref{thm:DA homeomorphic to tub}, \ref{thm:existence of tubular neighborhoods-1} and \ref{thm:uniqueness of tubular neighborhoods-1}.
\end{proof}
Denote by $G: \mathcal{E}_0 \times (-\delta, 1+\delta) \to \mathcal{M}$, for some $\delta>0$, the isotopy from $f_{0}$ to $f_{1}\circ\lambda$. Then Theorem \ref{thm:uniqueness of tubular neighborhoods-1} implies that $G_t: \mathcal{E}_0 \to \mathcal{M}$ is a tubular neighborhood for any $t \in (-\delta, 1+\delta)$. Now let
\[
h(x,t)=G(f_{0}^{-1}(x),t)
\]
for $(x,t)\in f_0(\mathcal{E}_0) \times(-\delta,1+\delta)$. We have the following corollary.
\begin{coroll}[Uniqueness of tubular neighborhood II, {\cite[Theorem 7.4.4]{mukherjee2016differential}}] \label{cor:uniqueness of tubular neighborhoods}
Suppose that $\mathcal{S}$ is a submanifold of $\mathcal{M}$, and $\mathcal{W}_{0}$ and $\mathcal{W}_{1}$ are two tubular neighborhoods (as open sets) of $\mathcal{S}$ in $\mathcal{M}$. Then there exists an isotopy $h:\mathcal{W}_{0}\times(-\delta,1+\delta)\rightarrow\mathcal{M}$ such that
\[
h_{0}=j_{\mathcal{W}_{0}}, \quad h_{1}(\mathcal{W}_{0})=\mathcal{W}_{1}, \quad h_{t}\big|_{\mathcal{S}}=j_{\mathcal{S}}
\]
for every $t\in(-\delta,1+\delta)$, where $h_t := h(\cdot, t)$, and $j_{\mathcal{W}_{0}}$ and $j_{\mathcal{S}}$ are the inclusions of $\mathcal{W}_{0}$ and $\mathcal{S}$ into $\mathcal{M}$
respectively.
\end{coroll}
Therefore, any two tubular neighborhoods $\mathcal{W}_0$ and $\mathcal{W}_1$ are homeomorphic.
\begin{defn}[Closed tubular neighborhood]
Fix a Euclidean metric $g$ on the vector bundle $\mathcal{E}$ over $\mathcal{S}$, and for any $r>0$, let\footnote{Note that $\mathcal{BE}_{r}$ is a submanifold of $\mathcal{E}$ with boundary $\partial(\mathcal{BE}_{r})=\{v\in \mathcal{E} : g(v,v)=r^{2}\}$.}
\[
\mathcal{BE}_{r}=\{v\in \mathcal{E} : g(v,v)\leq r^{2}\}.
\]
A \emph{closed tubular neighborhood} $\mathcal{K}$ of $\mathcal{S}$ is a closed neighborhood of $\mathcal{S}$ in $\mathcal{M}$ such that there is an embedding $\phi:\mathcal{BE}_{r}\rightarrow \mathcal{M}$ satisfying
\[
\phi(\mathcal{BE}_{r}) = \mathcal{K}, \quad \phi\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}.
\]
\end{defn}
\begin{remark} \label{remark: BErEr}
If $\mathcal{S}$ is compact, then $\mathcal{BE}_{r}$ is by definition a closed tubular neighborhood of $\bar{0}_{\mathcal{S}}$ in $\mathcal{E}$, and it is compact. Since $\mathcal{E}_{r}=\{v\in \mathcal{E} : g(v,v)<r^{2}\}$ can be mapped homeomorphically onto $\mathcal{E}$ while keeping the zero section, it is an (open) tubular neighborhood of $\bar{0}_{\mathcal{S}}$ in $\mathcal{E}$. In particular, $\mathcal{E}$ itself is a tubular neighborhood of $\bar{0}_{\mathcal{S}}$ in $\mathcal{E}$.
\end{remark}
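A concrete choice of such a map (a standard radial rescaling, given here for illustration; any fibre-preserving rescaling of this kind works) is
\[
\Phi: \mathcal{E}_{r} \to \mathcal{E}, \qquad
\Phi(v)=\frac{v}{\sqrt{1-g(v,v)/r^{2}}},
\]
with inverse $\Phi^{-1}(w)= w \big/ \sqrt{1+g(w,w)/r^{2}}$. Both maps are fibre-preserving, smooth, and fix the zero section, so $\mathcal{E}_{r}$ is indeed an (open) tubular neighborhood of $\bar{0}_{\mathcal{S}}$ in $\mathcal{E}$.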
Due to Remark \ref{remark: BErEr}, the following proposition holds.
\begin{prop} \label{prop:precompact tubular neighborhoods}
If $\mathcal{S}$ is compact, then there exists some tubular neighborhood $\mathcal{W}$ of $\mathcal{S}$ such that its closure $\bar{\mathcal{W}}$ is a closed tubular neighborhood which is also compact.
\end{prop}
We will use a technique which relies on the following results to prove Theorem \ref{thm:DA homeomorphic to tub} later.
\begin{lemma}[{\cite[Chapter 8, Theorem 1.4]{hirsch2012differential}}] \label{thm: isotopyextension}
Suppose that $\mathcal{U}$ is an open set of the manifold $\mathcal{N}$ and that $\mathcal{S}$ is a compact subset of $\mathcal{N}$ contained in $\mathcal{U}$. Suppose that $h: \mathcal{U} \times(-\delta,1+\delta)\rightarrow\mathcal{N}$ is an isotopy with $h_{0}:\mathcal{U} \rightarrow \mathcal{N}$ being the inclusion. Then for any $\delta'\in(0,\delta)$, there exists a diffeotopy $H:\mathcal{N} \times(-\delta',1+\delta')\rightarrow\mathcal{N}$ with some open neighborhood $\mathcal{U}_{0}$ of $\mathcal{S}$ in $\mathcal{U}$ such that
\[
H\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}=h\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}.
\]
\end{lemma}
\begin{remark}
Let $\tilde{h}$ be the level preserving map\footnote{The map $\tilde{h}$ is called the \emph{track} of $h$ \cite[p. 111]{hirsch2012differential}.}:
\begin{align*}
\tilde{h}: \mathcal{U} \times(-\delta,1+\delta) &\to \mathcal{N}\times(-\delta,1+\delta) \\
(p,t) &\mapsto \big(h_{t}(p),t\big).
\end{align*}
Note that Theorem 1.4 in Chapter 8 of \cite{hirsch2012differential} requires $\tilde{h}\big( \mathcal{U} \times(-\delta,1+\delta)\big)$ to be open in $\mathcal{N}\times(-\delta,1+\delta)$. However, this requirement is unnecessary at least in our case, since it can be easily checked that $\tilde{h}$ is a submersion\footnote{This is because $\tilde{h}$ is an immersion and the dimensions of $\mathcal{U}$ and $\mathcal{N}$ are the same.} and hence an open map.
\end{remark}
\begin{coroll} \label{cor:isotopyextension}
Suppose that $\mathcal{U}$ is an open set of the manifold $\mathcal{N}$ and that $\mathcal{S}$ is a compact subset of $\mathcal{N}$ contained in $\mathcal{U}$. Suppose that $h':\mathcal{U}\times(-\delta,1+\delta)\rightarrow\mathcal{N}$
is an isotopy, and there exists a diffeomorphism $f_{0}:\mathcal{N} \to \mathcal{N}$ that agrees with $h'_{0}$ on $\mathcal{U}$; i.e.,
\begin{equation} \label{eq_f0u}
f_0|_{\mathcal{U}} = h_0'.
\end{equation}
Then for any $\delta'\in(0,\delta)$, there is a diffeotopy $F: \mathcal{N} \times(-\delta',1+\delta')\rightarrow\mathcal{N}$ with some open neighborhood $\mathcal{U}_{0}$ of $\mathcal{S}$ in $\mathcal{U}$ such that
\[
F\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}=h'\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}.
\]
\end{coroll}
\begin{proof}
Let $h=f_{0}^{-1}\circ h'$. Then, from \eqref{eq_f0u}, we have $h_0=f_{0}^{-1}\circ h'_0=j_{\mathcal{U}}$, where $j_{\mathcal{U}}: \mathcal{U} \to \mathcal{N}$ is the inclusion map of $\mathcal{U}$ into $\mathcal{N}$. According to Lemma \ref{thm: isotopyextension}, there is a diffeotopy $H$ such that $H\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}=h\big|_{\mathcal{U}_{0}\times(-\delta',1+\delta')}$. Letting $F=f_{0}\circ H$ completes the proof.
\end{proof}
\begin{remark}
Note that the open set $\mathcal{U}$ in the theorems above may be $\mathcal{N}$ itself, which is the case in Lemma \ref{lem:(extension-of-tubular-neighborhoods)} to be discussed later.
\end{remark}
Now we prove a lemma concerning tubular neighborhoods of the submanifold $\mathcal{S}$ of $\mathcal{M}$. This lemma greatly facilitates the arguments in Section \ref{sec_proof}.
Note that $(N_{\mathcal{S}}, \pi, \bar{0}_{\mathcal{S}})$, where $\pi: N_{\mathcal{S}}\rightarrow\bar{0}_{\mathcal{S}}$ is defined by $p \mapsto 0_{x}$ for any $p\in N_{x}$ and $x\in \mathcal{S}$, is a vector bundle over $\bar{0}_{\mathcal{S}}$. Though this might seem trivial, since $\bar{0}_{\mathcal{S}}$ is canonically identified with $\mathcal{S}$, we still point it out for the sake of clarity from the set-theoretic perspective.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\matrix (m) [matrix of math nodes,row sep=3em,column sep=4em,minimum width=2em]
{
{N_\mathcal{S}} & {N_\mathcal{S}} \\
{N_\mathcal{S}} & {N_\mathcal{S}} \\};
\path[-stealth]
(m-1-1) edge node [right] {$j$} (m-2-1)
(m-1-1) edge node [above] {$\lambda$} (m-1-2)
(m-1-2) edge node [right] {${\rm id}_{N_\mathcal{S}}$} (m-2-2);
\end{tikzpicture}
\caption{Proof of Lemma \ref{lem:(extension-of-tubular-neighborhoods)}. }
\label{fig: lem:(extension-of-tubular-neighborhoods)}
\end{figure}
\begin{lemma}[Extension of tubular neighborhoods] \label{lem:(extension-of-tubular-neighborhoods)}
Suppose that $j:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$ is a tubular
neighborhood of $\bar{0}_{\mathcal{S}}$; i.e., $j$ is an embedding and
$j(0_{x})=0_{x}$ for all $x\in \mathcal{S}$. Then for any compact set $\mathcal{K}$
in $N_{\mathcal{S}}$, there is a diffeomorphism $\beta$ on $N_{\mathcal{S}}$ such that
$\beta$ agrees with $j$ on some neighborhood of $\mathcal{K}$.
\end{lemma}
\begin{proof}
The idea is to use Corollary \ref{cor:isotopyextension}. To this end, we seek an isotopy $h:N_{\mathcal{S}}\times(-\delta,1+\delta)\rightarrow N_{\mathcal{S}}$ such that $h_{1}=j$ and $h_{0}$ is a diffeomorphism on $N_{\mathcal{S}}$.
Note that both ${\rm id}_{N_{\mathcal{S}}}:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$ and $j:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$
are tubular neighborhoods of $\bar{0}_{\mathcal{S}}$ in $N_{\mathcal{S}}$. Hence, according
to Theorem \ref{thm:uniqueness of tubular neighborhoods-1}, there exists a bundle isomorphism $\lambda:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$ such
that there exists an isotopy $h$ from ${\rm id}_{N_{\mathcal{S}}}\circ\lambda$
to $j$ (see Fig. \ref{fig: lem:(extension-of-tubular-neighborhoods)}). Since ${\rm id}_{N_{\mathcal{S}}}\circ\lambda$ is a diffeomorphism, according
to Corollary \ref{cor:isotopyextension}, there exists a diffeotopy
$H:N_{\mathcal{S}}\times(-\delta,1+\delta)\rightarrow N_{\mathcal{S}}$ such that $H$
agrees with $h$ on some neighborhood of $\mathcal{K}$. Let $\beta=H_{1}$
and then it is a diffeomorphism and agrees with $h_{1}=j$ on such
a neighborhood.
\end{proof}
\section{Proof of Theorem \ref{thm:DA homeomorphic to tub}} \label{sec_proof}
The proof of Theorem \ref{thm:DA homeomorphic to tub} is based on \cite[Lemma 3.3]{wilson1967structure}. For clarity, we decompose the proof into several propositions. Denote by $\mathcal{M}$ the state space with a vector field $X$. Denote by
$\varphi$ the flow of $X$ and assume that $\mathcal{S}$ is a \emph{compact} boundaryless
submanifold of $\mathcal{M}$ and an asymptotically stable attractor of $\varphi$. Denote by $\mathcal{D}_{A}$ the domain of attraction of
$\mathcal{S}$.
We start by fixing a precompact tubular neighborhood
\[
f_{o}:N_{\mathcal{S}}\rightarrow\mathcal{W}
\]
of $\mathcal{S}$ in $\mathcal{D}_{A}$, where $\mathcal{W} := f_o(N_{\mathcal{S}})$. The existence of $f_o$ is guaranteed by Proposition \ref{prop:precompact tubular neighborhoods}.
\begin{prop} \label{prop:compactsets in tubularneighborhoodbyflow}
For each compact set $\mathcal{K}$ in the domain of attraction $\mathcal{D}_{A}$, there exists
some $T_{K}>0$, such that $\varphi^{T}(\mathcal{K})\subseteq\mathcal{W}$ for
any $T>T_{K}$. Consequently, $\mathcal{K} \subseteq \varphi^{-T}(\mathcal{W})$ for any $T>T_{K}$.
\end{prop}
\begin{proof}
Due to the asymptotic stability of $\mathcal{S}$, there is some neighborhood $\mathcal{U}$ of $\mathcal{S}$ in $\mathcal{W}$ such that $\varphi^{[0,\infty)}(\mathcal{U})\subseteq\mathcal{W}$.
For any $x\in \mathcal{K}$, there is some $T_{x}>0$ with some neighborhood $\mathcal{B}_{x}$ of $x$ such that $\varphi^{T_{x}}(\mathcal{B}_{x})\subseteq \mathcal{U}$. Since $\mathcal{K}$ is compact, there is a finite subcover $\{\mathcal{B}_{x_{i}}\}_{i=1,\dots,k}$ such that $\bigcup_{i} \mathcal{B}_{x_{i}}\supseteq \mathcal{K}$. Letting $T_{K} := \max_{i=1,\dots,k} T_{x_{i}}$ completes the proof.
\end{proof}
Note that $\mathcal{S}$ is invariant under $\varphi$, and hence $\mathcal{S} \subseteq\varphi^{-T}(\mathcal{W})$
for any $T \in \mathbb{R}$. Since $\varphi^{-T}:\mathcal{W}\rightarrow \mathcal{W}_{T} := \varphi^{-T}(\mathcal{W})$
is a diffeomorphism and $\mathcal{W}$ is a tubular neighborhood
of $\mathcal{S}$, it is natural to conjecture that $\mathcal{W}_{T}$ should also be a tubular neighborhood
of $\mathcal{S}$. This is indeed true as shown in the next proposition, but it is not straightforward. According to Definition \ref{def:tubular-neighborhood}, we still need to find a diffeomorphism $f_{T}$ from $N_{\mathcal{S}}$ to $\mathcal{W}_{T}$
such that $f_{T}\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}$. Although $f=\varphi^{-T}\circ f_{o}$
is a diffeomorphism from $N_{\mathcal{S}}$ to $\mathcal{W}_{T}$, we have $f\big|_{\bar{0}_{\mathcal{S}}}=\varphi^{-T}\circ\iota_{\mathcal{S}}$, which is not necessarily equal to $\iota_{\mathcal{S}}$, and hence $f:N_{\mathcal{S}}\rightarrow \mathcal{W}_{T}$ is not necessarily a tubular neighborhood.
Yet $f\big|_{\bar{0}_{\mathcal{S}}}=\varphi^{-T}\circ\iota_{\mathcal{S}}$ and $\iota_{\mathcal{S}}$
are isotopic as maps from $\bar{0}_{\mathcal{S}}$ to $\mathcal{W}_{T}$ while $f$ and
$\varphi^{T}$ are both diffeomorphisms. This makes it possible to
use Lemma \ref{thm: isotopyextension}.
\begin{prop}
\label{prop:tubular neighborhoods by the flow}For any $T>0$, $\mathcal{W}_{T} := \varphi^{-T}(\mathcal{W})$
is a tubular neighborhood of $\mathcal{S}$ in $\mathcal{D}_{A}$. That is,
there exists a diffeomorphism $f_{T}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{T}$ such
that $f_{T}\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}$.
\end{prop}
\begin{proof}
Obviously $f=\varphi^{-T}\circ f_{o}$ is a diffeomorphism from $N_{\mathcal{S}}$
to $\mathcal{W}_{T}$ with $0_{x}\in\bar{0}_{\mathcal{S}} \mapsto \varphi^{-T}(x)$.
Now we need to ``rectify'' the map. Denote by $f_{\mathcal{S}}$ the restriction
of $f$ to $\bar{0}_{\mathcal{S}}$. Then $j_{1}=f^{-1}\circ\varphi^{T}\circ f_{\mathcal{S}}$
maps $\bar{0}_{\mathcal{S}}$ diffeomorphically onto $\bar{0}_{\mathcal{S}}$. Let $j_{s}=f^{-1}\circ\varphi^{s\cdot T}\circ f_{\mathcal{S}}$
for $s\in(-\delta,1+\delta)$ and then $j:\bar{0}_{\mathcal{S}}\times(-\delta,1+\delta)\rightarrow N_{\mathcal{S}}$
is an isotopy such that $j_{0}$ is the inclusion map, and $f\circ j_{1}=\iota_{\mathcal{S}}$
on $\bar{0}_{\mathcal{S}}$.
Note that $g=\varphi\circ f$ with $g(x,t)=\varphi^{t}\circ f(x)$
is a smooth map from $N_{\mathcal{S}}\times\mathbb{R}$ to $\mathcal{D}_{A}$.
Since $\varphi^{[-\delta,1+\delta]\cdot T}\circ f(\bar{0}_{\mathcal{S}})=\mathcal{S}\subseteq \mathcal{W}_{T}$
and $[-\delta,1+\delta]\cdot T$ is compact, there exists an open
neighborhood $\mathcal{U}$ of $\bar{0}_{\mathcal{S}}$ in $N_{\mathcal{S}}$ such that $\varphi^{[-\delta,1+\delta]\cdot T}\circ f(\mathcal{U})\subseteq \mathcal{W}_{T}$.
Moreover, for any fixed $s\in[-\delta,1+\delta]$, $\varphi^{s\cdot T}\circ f(\cdot)$
is an injective submersion, and hence a smooth embedding. Define
\begin{align*}
h&:\mathcal{U} \times(-\delta,1+\delta)\rightarrow N_{\mathcal{S}} \\
h(x,s)&=f^{-1}\circ\varphi^{s\cdot T}\circ f(x),
\end{align*}
which is an isotopy with $h_{0}$ being the inclusion map of $\mathcal{U}$ into $N_{\mathcal{S}}$
and $h_{s}\big|_{\bar{0}_{\mathcal{S}}}=j_{s}$. Then by Lemma \ref{thm: isotopyextension}, there exists a diffeotopy $H:N_{\mathcal{S}}\times(-\delta',1+\delta')\rightarrow N_{\mathcal{S}}$ for $\delta'\in(0,\delta)$ such that $H$ agrees with $h$ on $\mathcal{U}_{0}\times(-\delta',1+\delta')$
for some open neighborhood $\mathcal{U}_{0}$ of $\bar{0}_{\mathcal{S}}$.
Let $f_{T}=f\circ H_{1}$, which is a diffeomorphism between $N_{\mathcal{S}}$
and $\mathcal{W}_{T}$. Moreover, restricted to $\bar{0}_{\mathcal{S}}$, $f_{T}=f\circ h_{1}=f\circ j_{1}=\iota_{\mathcal{S}}$. Hence, $f_{T}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{T}$ is a tubular neighborhood.
\end{proof}
Since the domain of attraction $\mathcal{D}_{A}$ is a second-countable smooth manifold, there exists an ascending chain of compact subsets $\mathcal{K}_{0}\subseteq \mathcal{K}_{1}\subseteq \cdots$ such that $\bigcup_{i\in\mathbb{N}}\mathcal{K}_{i}=\mathcal{D}_{A}$.
Choose $0<T_{0}<T_{1}<\cdots$ such that
\[
\mathcal{W}_{i} := \varphi^{-T_{i}}(\mathcal{W})
\]
contains $\mathcal{K}_{i}$ for each $i$ and that $\overline{\mathcal{W}}_{i}\subseteq \mathcal{W}_{i+1}$.
This is possible due to the precompactness of $\mathcal{W}$. By Proposition
\ref{prop:tubular neighborhoods by the flow}, there exist tubular
neighborhoods $f_{i}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{i}$ for all $i\in\mathbb{N}$.
The strategy to prove Theorem \ref{thm:DA homeomorphic to tub}
is to construct by induction an ascending chain of compact subsets $\mathcal{C}_{0}\subseteq \mathcal{C}_{1}\subseteq \cdots$ with tubular neighborhoods
$g_{i}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{i}$ ``rectified'' from $f_{i}$ such that
$g_{i}(\mathcal{C}_{i})\supseteq \mathcal{K}_{i}$, $g_{i+1}$ agrees with $g_{i}$ on $\mathcal{C}_{i}$
and $\bigcup_{i}\mathcal{C}_{i}=N_{\mathcal{S}}$. Then the theorem follows by defining
a map $g:N_{\mathcal{S}}\rightarrow\mathcal{D}_{A}$ with $g=g_{i}$ on $\mathcal{C}_{i}$.
\begin{theorem}
There exists a diffeomorphism $g:N_{\mathcal{S}}\rightarrow\mathcal{D}_{A}$
such that $g\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}$.
\end{theorem}
\begin{proof}
Let $\mathcal{K}_{0}\subseteq \mathcal{K}_{1}\subseteq \cdots$ be an ascending chain of compact subsets such that $\bigcup_{i\in\mathbb{N}}\mathcal{K}_{i}=\mathcal{D}_{A}$ and $\mathcal{K}_{0}\supseteq \mathcal{S}$. Since $\mathcal{W}$ is precompact in $\mathcal{D}_{A}$, $\varphi^{-T}(\mathcal{W})$ is precompact in $\mathcal{D}_{A}$ for any $T>0$. Then by Proposition \ref{prop:compactsets in tubularneighborhoodbyflow}
we can choose inductively $0<T_{0}<T_{1}< \cdots$ such that $\overline{\mathcal{W}}_{i}\cup \mathcal{K}_{i+1}\subseteq \mathcal{W}_{i+1}$.
According to Proposition \ref{prop:tubular neighborhoods by the flow},
for each $i\in\mathbb{N}$, there is a diffeomorphism $f_{i}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{i}$
such that $f_{i}(0_{x})=x$ for all $x\in \mathcal{S}$. Now we construct $\{(g_{i},\mathcal{C}_{i}) : i\in\mathbb{N}\}$
with $\mathcal{C}_{i}$ being compact sets in $N_{\mathcal{S}}$ and $g_{i}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{i}$
being tubular neighborhoods such that
\begin{enumerate}[label = (\roman*)]
\item\label{req1} $\mathcal{C}_{i}\subseteq \mathrm{int\,} \mathcal{C}_{i+1}$;
\item\label{req2} $g_{i}(\mathcal{C}_{i})\supseteq \mathcal{K}_{i}$;
\item\label{req3} $g_{i+1}\big|_{\mathcal{C}_{i}}=g_{i}\big|_{\mathcal{C}_{i}}$;
\item\label{req4} $\bigcup_{i\in\mathbb{N}}\mathcal{C}_{i}=N_{\mathcal{S}}$.
\end{enumerate}
Take $g_{0}=f_{0}$ and $\mathcal{C}_{0}=\mathcal{BE}_{r_{0}}$ with $r_{0}$ large enough such that $\mathcal{BE}_{r_{0}}\supseteq g_{0}^{-1}(\mathcal{K}_{0})$. Let $j_{1}=f_{1}^{-1}\circ g_{0}$.
Then $j_{1}:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$ is a tubular neighborhood of
$\bar{0}_{\mathcal{S}}$ in $N_{\mathcal{S}}$ and $f_{1}\circ j_{1}=g_{0}$. According
to Lemma \ref{lem:(extension-of-tubular-neighborhoods)}, there is
a diffeomorphism $\beta_{1}:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$ such that
$\beta_{1}$ agrees with $j_{1}$ on $\mathcal{C}_{0}$. Let $g_{1}=f_{1}\circ\beta_{1}$.
Then $g_{1}:N_{\mathcal{S}}\rightarrow \mathcal{W}_{1}$ is a diffeomorphism and $g_{1}=g_{0}$
on $\mathcal{C}_{0}$. Take $r_{1}$ large enough such that $r_{1}>2r_{0}$
and $\mathcal{C}_{1}=\mathcal{BE}_{r_{1}}$ contains $g_{1}^{-1}(\mathcal{K}_{1})$.
Suppose that for $n\in\mathbb{N}$, $\mathcal{A}_{n}=\{(g_{i},\mathcal{C}_{i}) : 0\leq i\leq n\}$
such that \ref{req1}, \ref{req2}, \ref{req3} are satisfied and $\mathcal{C}_{n}=\mathcal{BE}_{r_{n}}$ with $r_{n}>2^{n}r_{0}$.
Let $j_{n+1}=f_{n+1}^{-1}\circ g_{n}$. Then $j_{n+1}:N_{\mathcal{S}}\rightarrow N_{\mathcal{S}}$
is a tubular neighborhood of $\bar{0}_{\mathcal{S}}$ in $N_{\mathcal{S}}$. Again, according
to Lemma \ref{lem:(extension-of-tubular-neighborhoods)}, there exists
a diffeomorphism $\beta_{n+1}$ on $N_{\mathcal{S}}$ such that $\beta_{n+1}=j_{n+1}$
on $\mathcal{C}_{n}$. Set $g_{n+1}=f_{n+1}\circ\beta_{n+1}$ and then $g_{n+1}=g_{n}$
on $\mathcal{C}_{n}$. Pick a positive number $r_{n+1}$ such that $r_{n+1}>2r_{n}$
and $\mathcal{C}_{n+1}=\mathcal{BE}_{r_{n+1}}\supseteq g_{n+1}^{-1}(\mathcal{K}_{n+1})$. Then $\mathcal{A}_{n+1}=\mathcal{A}_{n}\cup\{(g_{n+1},\mathcal{C}_{n+1})\}$
again satisfies \ref{req1}, \ref{req2}, \ref{req3} with $r_{n+1}>2^{n+1}r_{0}$. By induction
we have $\{(g_{k},\mathcal{C}_{k}) : k\in\mathbb{N}\}$ satisfying \ref{req1}, \ref{req2},
\ref{req3} with $r_{k}>2^{k}r_{0}$ for all $k\in\mathbb{N}$.
Define $g:N_{\mathcal{S}}\rightarrow\mathcal{D}_{A}$ with $g\big|_{\mathcal{C}_{i}}=g_{i}\big|_{\mathcal{C}_{i}}$
for all $i\in\mathbb{N}$. Then $g$ is well defined and $\textrm{Im } g=\mathcal{D}_{A}$
due to \ref{req3} and \ref{req2} respectively. Moreover, since $g_{i}$'s are diffeomorphisms
and $\bigcup_{i} \mathrm{int\,} \mathcal{C}_{i}=N_{\mathcal{S}}$, $g$ is a local diffeomorphism. It is also obvious that \ref{req4} is satisfied. For any $p,q\in N_{\mathcal{S}}$, there exists $i$ such that $\mathcal{C}_{i}$ contains
$p$ and $q$. Then $g(p)=g(q)$ $\implies$ $g_{i}(p)=g_{i}(q)$
$\implies$ $p=q$. Hence $g$ is also injective. Therefore, $g$
is a diffeomorphism from $N_{\mathcal{S}}$ onto $\mathcal{D}_{A}$. Moreover,
since $\mathcal{K}_{0}$ is chosen to contain $\mathcal{S}$ at the beginning and $g_{0}$
preserves the zero section (i.e., $g_{0}(0_{x})=x$ for all $x\in\mathcal{S}$), $\mathcal{C}_{0}\supseteq\bar{0}_{\mathcal{S}}$. Therefore $g\big|_{\bar{0}_{\mathcal{S}}}=g_{0}\big|_{\bar{0}_{\mathcal{S}}}=\iota_{\mathcal{S}}$,
which concludes the proof.
\end{proof}
\section{Two counterexamples} \label{sec_example}
In this section, we present two counterexamples that invalidate the original claim in \cite[Theorem 3.4]{wilson1967structure}. The first counterexample, in Section \ref{sec:count1}, shows how \cite[Theorem 3.4]{wilson1967structure} fails when the attractor is a noncompact manifold. The idea behind this construction is straightforward, but it typically involves an \emph{incomplete} Riemannian manifold as the ambient space. The second counterexample, in Section \ref{sec:count2}, instead has a \emph{complete} Riemannian manifold as the ambient space. The idea there is to present two topologically equivalent dynamical systems whose noncompact attractors have domains of attraction that are not homotopy equivalent. As a result, the domain of attraction of the noncompact attractor of at least one of the systems is of a different homotopy type from its tubular neighborhood, contradicting \cite[Theorem 3.4]{wilson1967structure}. Note that all the vector fields of the dynamical systems in this section are complete; i.e., solutions exist for all $t \in \mathbb{R}$.
\subsection{$\mathcal{M}$ is an incomplete Riemannian manifold} \label{sec:count1}
Theorem 3.4 in \cite{wilson1967structure} states that the domain of attraction of a uniformly asymptotically stable attractor, be it a compact or non-compact manifold, of a complete autonomous system is diffeomorphic to its tubular neighborhood. While the argument in Section \ref{sec_proof} holds for a compact attractor $\mathcal{S}$, it does not hold for a \emph{non-compact} attractor, since Proposition \ref{prop:compactsets in tubularneighborhoodbyflow} may be invalid when the attractor is noncompact. More specifically, when $\mathcal{S}$ is noncompact, it is possible that none of its tubular neighborhoods contains any $\epsilon$-neighborhood of $\mathcal{S}$. To see this, note that if we take one point out of a submanifold, the $\epsilon$-neighborhood of the new submanifold will only miss one point compared to that of the original submanifold, while its tubular neighborhood (viewed as a vector bundle) loses the whole fiber over the missing point. Exploiting this observation, we can construct a counterexample by starting with a compact asymptotically stable attractor and then taking one fixed point out of it.
\begin{example}
Start with the function $\bar{f}(x)=({\rm dist}(x,\mathbb{S}^1))^2$ on $\mathbb{R}^2$, which is smooth away from the origin, and let $\bar{X}=-\,\mathrm{grad}\, \bar{f}$. This system has the unit circle $\mathbb{S}^1 \subseteq \mathbb{R}^2$ as an asymptotically stable attractor, and all points on $\mathbb{S}^1$ are fixed points.
%
Now consider the state space $\mathcal{M}=\mathbb{R}^{2}-\{(1,0)\}$. Let $\mathcal{S}=\mathbb{S}^1-\{(1,0)\}$. It is a closed subset and a submanifold of $\mathcal{M}$, but it is noncompact. Let $f$ be the function on $\mathcal{M}$ defined by $f(x)=({\rm dist}(x,\mathcal{S}))^2$. The function $f$ is the restriction of $\bar{f}$ to $\mathcal{M}$, and hence it is smooth away from the origin. The vector field $X=-\,\mathrm{grad}\, f$ is then the restriction of $\bar{X}$ to $\mathcal{M}$, and it has $\mathcal{S}$ as a uniformly asymptotically stable attractor. The domain of attraction is $\mathcal{M}-\{(0,0)\}$, which is not contractible. However, a tubular neighborhood of $\mathcal{S}$ is homeomorphic to $\mathcal{S} \times \mathbb{R}$, which is contractible, contradicting Theorem 3.4 in \cite{wilson1967structure}.
\end{example}
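As an independent sanity check of the example (not part of the argument above), note that along $X=-\,\mathrm{grad}\,\bar{f}$ the radius $\rho=|x|$ satisfies $\dot{\rho}=-2(\rho-1)$, so every trajectory starting off the origin converges to the unit circle. The following Python sketch, with function names of our choosing, integrates this flow numerically:

```python
import math

def flow_step(x, y, dt):
    """One Euler step of X = -grad f with f = (dist(., S^1))^2 = (|p| - 1)^2,
    so grad f = 2(|p| - 1) p/|p| away from the origin."""
    rho = math.hypot(x, y)
    c = -2.0 * (rho - 1.0) / rho  # radial factor; undefined at the origin
    return x + dt * c * x, y + dt * c * y

def integrate(x, y, t_end=10.0, dt=1e-3):
    for _ in range(int(t_end / dt)):
        x, y = flow_step(x, y, dt)
    return x, y

# A point inside the circle (but off the origin) flows out to S^1 ...
xi, yi = integrate(0.3, 0.4)
print(math.hypot(xi, yi))  # ~ 1.0
# ... and a point outside flows inward to S^1.
xo, yo = integrate(2.0, -1.0)
print(math.hypot(xo, yo))  # ~ 1.0
```

Both trajectories end up at radius $\approx 1$, consistent with the domain of attraction $\mathcal{M}-\{(0,0)\}$.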
\subsection{$\mathcal{M}$ is a complete Riemannian manifold} \label{sec:count2}
In this section we demonstrate a dynamical system $(\mathcal{M},\varphi)$ as
a counterexample to Theorem 3.4 in \cite{wilson1967structure}
where the state space $\mathcal{M}$ is a complete Riemannian manifold and the
asymptotically stable attractor $\mathcal{S}$ is \emph{not} compact. Instead of directly constructing the flow map $\varphi$ on $\mathcal{M}$, we first construct an auxiliary
system $(\mathcal{M}_{0},\varphi_{0})$, and then obtain $(\mathcal{M},\varphi)$ via
a topological conjugacy \cite[Chapter 2]{brin2002introduction} $h:\mathcal{M}_{0}\rightarrow \mathcal{M}$. As an extra benefit
to be seen later, this demonstration shows that uniform
asymptotic stability is a ``geometric'' concept rather than a ``topological'' one. Namely, even if two dynamical systems are topologically conjugate, properties concerning the uniform asymptotic stability of the systems may not be (fully) preserved by the conjugacy.
\subsubsection{The auxiliary system $(\mathcal{M}_{0},\varphi_{0})$}
Let
\[
\mathcal{M}_{0}=\{(x,y,z)\in\mathbb{R}^{3} : x^{2}+z^{2}=1\}
\]
and
\[
\mathcal{S}_{0}=\{(x,y,z)\in \mathcal{M}_{0} : x=0,z=1\}.
\]
Endow $\mathcal{M}_{0}$ with the Riemannian metric $g_{0}$ induced by the
standard Riemannian metric $(dx)^{2}+(dy)^{2}+(dz)^{2}$ on $\mathbb{R}^{3}$.
Then $(\mathcal{M}_{0},g_{0})$ is a complete Riemannian manifold with the distance
$d_{\mathcal{M}_{0}}$.
Let $Y_{0},Z_{0}$ be the vector fields on $\mathcal{M}_{0}$ defined by
\[
Y_{0}(x,y,z)=\begin{cases}
e^{-\frac{1}{y}}\frac{\partial}{\partial y}\big|_{(x,y,z)} & y>0\\
0 & y\leq0
\end{cases}
\]
and
\[
Z_{0}(x,y,z)=x\cdot \left( x\frac{\partial}{\partial z}-z\frac{\partial}{\partial x} \right) \Big|_{\mathcal{M}_{0}}.
\]
Let $X_{0}=Y_{0}+Z_{0}$ and denote by $\varphi_{0}$ the
flow of $X_{0}$ on $\mathcal{M}_{0}$. Then $\mathcal{S}_{0}$ is a uniformly
asymptotically stable manifold of the dynamical system $(\mathcal{M}_{0},\varphi_{0})$
with its domain of attraction being
\[
\mathcal{D}_{0}=\{(x,y,z)\in \mathcal{M}_{0} : z>-1\}.
\]
The following characterization of the stability of $\mathcal{S}_{0}$ will be
needed later. Namely, given any $a>-1$ with $\mathcal{W}'_{z>a}=\{(x,y,z)\in \mathcal{M}_{0} : z>a\}$, for each $\epsilon>0$ there exists some $T_{\epsilon}>0$
such that $d_{\mathcal{M}_{0}}\big(\varphi_{0}^{[T_{\epsilon},+\infty)}(\mathcal{W}'_{z>a}),\mathcal{S}_{0}\big)<\epsilon$.
To see this, denote by $(x'_{t},y'_{t},z'_{t})$ the orbit $\varphi_{0}^{t}(p')$
for $p'=(x',y',z')\in \mathcal{M}_{0}$. Then the curve $(x'_{t},z'_{t})$ stays on $\mathbb{S}^{1}$
and is subject to the equation
\begin{equation} \label{eq:x'(t)-z'(t)}
\frac{d}{dt}(x'_{t},z'_{t})=\big(-x'_{t}z'_{t}, {x'_{t}}^{2}\big).
\end{equation}
Note that the dynamical system (\ref{eq:x'(t)-z'(t)}) on $\mathbb{S}^{1}$
has the point $q'_{0}=(0,1)$ as an asymptotically stable equilibrium
with the domain of attraction $\{(x,z)\in \mathbb{S}^{1} : z\neq-1\}$. Hence
for any $\epsilon>0$, there exists $T'>0$ such that for any $t\geq T'$
and $q'=(x',z')\in \mathbb{S}^{1}$ with $z'\geq a$, ${\rm dist}\big(\phi^{t}(q'),q'_{0}\big)<\epsilon$,
where ${\rm dist}$ is the distance on $\mathbb{S}^{1}$ measured by lengths of minor arcs, and $\phi$ is the flow of \eqref{eq:x'(t)-z'(t)}. Therefore,
\[
d_{\mathcal{M}_{0}}\big(\varphi_{0}^{t}(x',y',z'),\mathcal{S}_0 \big)\leq {\rm dist}\big(\phi^{t}(x',z'),q'_{0}\big)<\epsilon
\]
for all $t\geq T'$ and $(x',y',z')\in \mathcal{W}'_{z>a}$.
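The asymptotic behavior of \eqref{eq:x'(t)-z'(t)} can also be confirmed numerically. The sketch below (ours, for illustration only) integrates the planar system with a standard RK4 step and checks that the orbit stays on $\mathbb{S}^{1}$ and approaches $q'_{0}=(0,1)$:

```python
import math

def rhs(x, z):
    # dx/dt = -x z, dz/dt = x^2; tangent to S^1 since x(-xz) + z(x^2) = 0
    return -x * z, x * x

def rk4(x, z, t_end=60.0, dt=1e-2):
    """Classical Runge-Kutta integration of the planar system."""
    for _ in range(int(t_end / dt)):
        k1x, k1z = rhs(x, z)
        k2x, k2z = rhs(x + 0.5 * dt * k1x, z + 0.5 * dt * k1z)
        k3x, k3z = rhs(x + 0.5 * dt * k2x, z + 0.5 * dt * k2z)
        k4x, k4z = rhs(x + dt * k3x, z + dt * k3z)
        x += dt / 6.0 * (k1x + 2 * k2x + 2 * k3x + k4x)
        z += dt / 6.0 * (k1z + 2 * k2z + 2 * k3z + k4z)
    return x, z

x, z = rk4(1.0, 0.0)    # start on the equator, z = 0 > -1
print(x * x + z * z)    # ~ 1: the orbit stays on the circle
print((x, z))           # ~ (0, 1): convergence to the equilibrium
```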
For a point $(0,y,-1)\in \mathcal{M}_{0}-\mathcal{D}_{0}$ with $y\leq0$, it holds that $X_{0}\big|_{(0,y,-1)}=0$.
For any $y>0$,
\begin{equation}
\label{eq:X0-(0,y,-1)}
X_{0}\big|_{(0,y,-1)}=Y_{0}\big|_{(0,y,-1)}=e^{-\frac{1}{y}}\frac{\partial}{\partial y}\Big|_{(0,y,-1)},
\end{equation}
implying
\begin{equation}
\varphi_{0}^{t}(0,y,-1)=(0,\gamma(t),-1)
\end{equation}
with
\begin{equation}
\label{eq:dot_gamma>0}
\dot{\gamma}(t)=e^{-\frac{1}{\gamma(t)}}>0.
\end{equation}
Therefore, both $\gamma(t)$ and $\dot{\gamma}(t)$ increase strictly with respect to $t>0$.
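A quick numerical sketch (ours; the initial value is arbitrary) illustrates \eqref{eq:dot_gamma>0}: since $\dot{\gamma}=e^{-1/\gamma}>0$, $\gamma$ increases, and since $y\mapsto e^{-1/y}$ is increasing for $y>0$, so does $\dot{\gamma}$:

```python
import math

def gamma_trajectory(g0=0.5, t_end=20.0, dt=1e-3):
    """Euler integration of gamma' = exp(-1/gamma) with gamma(0) = g0 > 0."""
    g, gs = g0, [g0]
    for _ in range(int(t_end / dt)):
        g += dt * math.exp(-1.0 / g)
        gs.append(g)
    return gs

gs = gamma_trajectory()
rates = [math.exp(-1.0 / g) for g in gs]
print(all(b > a for a, b in zip(gs, gs[1:])))        # True: gamma increases
print(all(b > a for a, b in zip(rates, rates[1:])))  # True: gamma' increases
```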
\subsubsection{The system $(\mathcal{M},\varphi)$}
Now we construct the dynamical system $(\mathcal{M},\varphi)$ which will serve
as a counterexample. More specifically, a vector field $X$ on some Riemannian manifold $(\mathcal{M},g)$ is to be constructed with a uniformly asymptotically stable submanifold $\mathcal{S}$ of which the domain of attraction $\mathcal{D}$ is not homotopy equivalent to $\mathcal{S}$ itself.
Let
\[
r(y)=\begin{cases}
1-e^{-\frac{1}{y}} & y>0\\
1 & y\leq0
\end{cases}.
\]
Let $\mathcal{M}$ be the two-dimensional cylinder embedded in $\mathbb{R}^{3}$
defined by
\[
\mathcal{M} = \{(x,y,z) \in \mathbb{R}^3 : x^{2}+z^{2}=r(y) \},
\]
and let
\[
\mathcal{S}=\{(x,y,z) \in \mathcal{M} : x=0,z=\sqrt{r(y)}\}.
\]
Then $\mathcal{S}$ is an embedded submanifold and a closed subset in $\mathcal{M}$. Endowed
with the Riemannian metric $g_{\mathcal{M}}$ induced by the standard Riemannian
metric $g=(dx)^{2}+(dy)^{2}+(dz)^{2}$ on $\mathbb{R}^{3}$, $\mathcal{M}$
is a complete Riemannian manifold. Note that although the Riemannian metric $g_{\mathcal{M}}$ is induced by $g$, the corresponding distance $d_{\mathcal{M}}$
on $\mathcal{M}$ is not the restriction on $\mathcal{M}$ of the Euclidean distance $d$
on $\mathbb{R}^{3}$. Generally speaking, it holds that $d_{\mathcal{M}}(p,q)\geq d(p,q)$
for $p,q\in \mathcal{M}$. However, the topology $\tau_{\mathcal{M}}$ induced by $d_{\mathcal{M}}$
on $\mathcal{M}$ is exactly the subspace topology inherited from $\mathbb{R}^{3}$,
meaning that $\tau_{\mathcal{M}}$ is also the same as the topology induced
by (the restriction of) $d$. Then, if a sequence $\{p_{n}\}$ on $\mathcal{M}$
is a Cauchy sequence with respect to $d_{\mathcal{M}}$, it is also a Cauchy
sequence with respect to $d$. Due to the completeness of $\mathbb{R}^{3}$
and the closedness of $\mathcal{M}$ in $\mathbb{R}^{3}$, there exists $\bar{p}\in \mathcal{M}$ such that $p_{n}\xrightarrow{d}\bar{p}$ (i.e., the sequence $\{p_n\}$ converges to $\bar{p}$ with respect to the metric $d$). Since $d_{\mathcal{M}}$ and $d$
induce the same topology on $\mathcal{M}$, this implies that $p_{n}\xrightarrow{d_{\mathcal{M}}}\bar{p}$ (i.e., the sequence $\{p_n\}$ converges to $\bar{p}$ with respect to the metric $d_{\mathcal{M}}$), ensuring the completeness of $(\mathcal{M},d_{\mathcal{M}})$.
The map $h:\mathcal{M}_{0}\rightarrow \mathcal{M}$ defined by
\[
h(x,y,z)=\big(\sqrt{r(y)}\cdot x,\, y,\, \sqrt{r(y)}\cdot z\big)
\]
is a diffeomorphism between the pairs $(\mathcal{M}_{0},\mathcal{S}_{0})$ and $(\mathcal{M},\mathcal{S})$.
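That $h$ maps $\mathcal{M}_{0}$ into $\mathcal{M}$ is a one-line computation: if $x^{2}+z^{2}=1$, then $(\sqrt{r(y)}\,x)^{2}+(\sqrt{r(y)}\,z)^{2}=r(y)$. The spot-check below (ours) verifies this pointwise, together with the fact that $r\leq1$ and $r$ is decreasing for $y>0$, so the cylinder $\mathcal{M}$ pinches as $y\rightarrow+\infty$:

```python
import math

def r(y):
    # radius-squared profile of M: 1 - exp(-1/y) for y > 0, and 1 for y <= 0
    return 1.0 - math.exp(-1.0 / y) if y > 0 else 1.0

def h(x, y, z):
    s = math.sqrt(r(y))
    return (s * x, y, s * z)

# points of M_0 (x^2 + z^2 = 1) are mapped onto M (x^2 + z^2 = r(y))
ok = True
for theta in (0.0, 0.7, 2.0):
    for y in (-1.0, 0.0, 0.5, 3.0):
        X, Y, Z = h(math.sin(theta), y, math.cos(theta))
        ok = ok and abs(X * X + Z * Z - r(Y)) < 1e-12
print(ok)  # True

# r <= 1 everywhere and r decreases for y > 0: the cylinder pinches
print(max(r(y) for y in (-2.0, 0.0, 0.1, 1.0, 10.0)) <= 1.0)  # True
print(r(0.1) > r(1.0) > r(10.0) > r(100.0))                   # True
```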
Here, we define $X$ to be the vector field on $\mathcal{M}$ related
to $X_{0}$ by $h$. That is, $X=h_{*}(X_{0})$, where $h_{*}: T \mathcal{M}_0 \to T \mathcal{M}$ is the tangent map. Let $\varphi$ be the flow of $X$ on $\mathcal{M}$. Then $h$ is
a conjugacy between the flows $\varphi_{0}$ and $\varphi$. That
is, the identity $h\circ\varphi_{0}=\varphi\circ h$ holds, or equivalently,
\begin{equation}
\varphi^{t}(p'')=h\circ\varphi_{0}^{t}\circ h^{-1}(p'')\label{eq:conjugacy of phi and phi_0}
\end{equation}
for all $p''\in \mathcal{M}$.
Note that for a point $p'=(x',y',z')$ on $\mathcal{M}_{0}$, the distance $d_{\mathcal{M}_{0}}(p',\mathcal{S}_{0})$ is \emph{exactly} the length of the minor arc on the circle $\mathcal{M}_{0}\cap\{(x,y,z) \in \mathbb{R}^3 : y=y'\}$
between $p'$ and $(0,y',1)$. Meanwhile, for a point $p''=(x'',y'',z'')=h(p')$
on $\mathcal{M}$, the distance $d_{\mathcal{M}}(p'',\mathcal{S})$ is \emph{no larger than} the length of the minor arc on the circle $\mathcal{M}\cap\{(x,y,z) \in \mathbb{R}^3 : y=y''\}$ between $p''$ and $\big(0,y'',\sqrt{r(y'')}\big)$.
With $r(y)\leq1$, this implies
\[
d_{\mathcal{M}}\big(h(p'),\mathcal{S}\big)\leq d_{\mathcal{M}_{0}}(p',\mathcal{S}_{0})
\]
for all $p'\in \mathcal{M}_{0}$. Combined with \eqref{eq:conjugacy of phi and phi_0},
it yields the following inequality:
\begin{equation}
d_{\mathcal{M}}\big(\varphi^{t}(p''),\mathcal{S}\big)=d_{\mathcal{M}}\big(h\circ\varphi_{0}^{t}\circ h^{-1}(p''),\mathcal{S}\big)\leq d_{\mathcal{M}_{0}}\big(\varphi_{0}^{t}\circ h^{-1}(p''),\mathcal{S}_{0}\big).\label{eq:d_M=000026d_M0}
\end{equation}
Since $h^{-1}$ maps $\tilde{\mathcal{D}}_{0} := h(\mathcal{D}_{0}) = \{(x,y,z)\in \mathcal{M} : z>-\sqrt{r(y)}\}$ diffeomorphically to $\mathcal{D}_{0}$, it follows that, as $t\rightarrow+\infty$,
$d_{\mathcal{M}}\big(\varphi^{t}(p),\mathcal{S}\big)\rightarrow0$ for all $p\in\tilde{\mathcal{D}}_{0}$.
However, if $\mathcal{S}$ is an attractor, then the domain of attraction of $\mathcal{S}$ should be
\[
\mathcal{D}=\tilde{\mathcal{D}}_{0}\cup\{\big(0,y,-\sqrt{r(y)}\big) : y>0\}.
\]
To see this, first note that for any point $p''=(x'',y'',z'')$ in $\{\big(0,y,-\sqrt{r(y)}\big) : y>0\}$,
\[
\begin{aligned}\varphi^{t}(p'') & =h\circ\varphi_{0}^{t}\circ h^{-1}(p'')\\
 & =h\circ\varphi_{0}^{t}(0,y'',-1)\\
 & =h(0,\gamma''(t),-1)\\
 & =\big(0,\gamma''(t),\sqrt{r\circ\gamma''(t)}\big),
\end{aligned}
\]
where $\frac{d\gamma''}{dt}>0$. Indeed, from (\ref{eq:dot_gamma>0}) we can deduce that $\gamma''(t)$ and $\frac{d\gamma''}{dt}$ both strictly increase with respect to $t$; since $\frac{d\gamma''}{dt}$ is increasing and positive, $\gamma''(t)\rightarrow+\infty$. Hence $d_{\mathcal{M}}(\varphi^{t}(p''),\mathcal{S})\leq\pi\sqrt{r\circ\gamma''(t)}\rightarrow0$
as $t\rightarrow+\infty$. Meanwhile, any point $p\in \mathcal{M}-\mathcal{D}$,
i.e., $p=(0,y,-1)$ with $y\leq0$, satisfies $X|_{p}=h_{*}(X_{0}|_{p})=0$, and hence stays stationary under the flow $\varphi$. Therefore, for $p''\in \mathcal{M}$, $\varphi^{t}(p'')\xrightarrow{d_{\mathcal{M}}}\mathcal{S}$ as $t \to \infty$ if and only if $p''\in\mathcal{D}$. Since $\mathcal{D}$ contains circles of the
form $\{(x,y,z) \in \mathbb{R}^3 : x^{2}+z^{2}=r(y),\,y>0\}$ in $\mathcal{M}$, its fundamental group is
nontrivial, and hence $\mathcal{D}$ is not homotopy equivalent to $\mathcal{S}$.
To show that this is a counterexample, it remains to prove that $\mathcal{S}$ is indeed a uniformly asymptotically stable manifold of
the system $(\mathcal{M},\varphi)$. Let
\[
\mathcal{W} := \{(x,y,z)\in \mathcal{M} : z>0\}\cup\{(x,y,z)\in \mathcal{M} : y>1\}.
\]
We will first show that $\mathcal{W}$ contains some $\alpha$-neighborhood
$\mathcal{N}_{\alpha}$ of $\mathcal{S}$ for some $\alpha>0$, and then show
that for each $\epsilon>0$, there exists some $T_{\epsilon}>0$ such
that $d_{\mathcal{M}}\big(\varphi^{[T_{\epsilon},+\infty)}(\mathcal{W}),\mathcal{S}\big)<\epsilon$.
To see that $\mathcal{W}$ contains some $\alpha$-neighborhood of $\mathcal{S}$, we
only need to show that there is a positive distance between its complement
$\mathcal{W}^{c}$ and $\mathcal{S}$. Note that
\[
\mathcal{W}^{c}=\{(x,y,z)\in \mathcal{M} : z\leq0,y\leq1\}=\mathcal{C}\cup \mathcal{K}
\]
with
\[
\mathcal{C} := \{(x,y,z)\in \mathcal{M} : z\leq0,\,y\leq-\pi\}
\]
and
\[
\mathcal{K} := \{(x,y,z)\in \mathcal{M} : z \leq 0,\,-\pi\leq y\leq1\}.
\]
Then $\mathcal{K}$ is compact and $\mathcal{C}$ is closed in $\mathcal{M}$, and $\mathcal{C}\cap \mathcal{S}$, $\mathcal{K}\cap \mathcal{S}$
are both empty. Since $(\mathcal{M},g_{\mathcal{M}})$ is a complete Riemannian manifold
with the distance $d_{\mathcal{M}}$, it holds that $d_{\mathcal{M}}(\mathcal{S},\mathcal{K})>0$, since a compact set and a closed set that are disjoint lie at a positive distance from each other. To see that $d_{\mathcal{M}}(\mathcal{S},\mathcal{C})>0$, note that $d_{\mathcal{M}}(\mathcal{S}_{y\leq0},\mathcal{C}) = \pi / 2$ and
$d_{\mathcal{M}}(\mathcal{S}_{y\geq0},\mathcal{C})\geq\pi$, where $\mathcal{S}_{y\leq0} := \mathcal{S}\cap\{ (x,y,z) \in \mathbb{R}^3 : y\leq0\}$ and
$\mathcal{S}_{y\geq0} := \mathcal{S}\cap\{ (x,y,z) \in \mathbb{R}^3 : y\geq0\}$. Then for any $0<\alpha<\min\{d_{\mathcal{M}}(\mathcal{S},\mathcal{K}),d_{\mathcal{M}}(\mathcal{S},\mathcal{C})\}$, it holds that $\mathcal{N}_{\alpha}\subset \mathcal{W}$.
Now we proceed to show that for any $\epsilon>0$, there exists $T_{\epsilon}>0$ such that $d_{\mathcal{M}}\big(\varphi^{[T_{\epsilon},+\infty)}(\mathcal{W}),\mathcal{S}\big)<\epsilon$. Denote by $\mathcal{W}_{z>0}$ the set $\{(x,y,z)\in \mathcal{M}: z>0\}$ and
by $\mathcal{W}_{y>1}$ the set $\{(x,y,z)\in \mathcal{M} : y>1\}$. Then $\mathcal{W}=\mathcal{W}_{z>0}\cup \mathcal{W}_{y>1}$.
Note that for each point $p=(x,y,z)\in \mathcal{W}_{y>1}$, $X |_{p}$
takes the form $a_{p}\frac{\partial}{\partial x}+e^{-\frac{1}{y}}\frac{\partial}{\partial y}+c_{p}\frac{\partial}{\partial z}$,
and therefore, $\mathcal{W}_{y>1}$ is an invariant open set of the system $(\mathcal{M},\varphi)$.
It holds that $\varphi^{t}(p)=\big(x_{t},y_{t},z_{t}\big)$ with $\frac{dy_{t}}{dt}>e^{-1}$
for any $p\in \mathcal{W}_{y>1}$. Choose $T''$ to be some positive number
large enough such that $r(e^{-1}\cdot T'')<(\epsilon / \pi)^{2}$.
Then for any $t\geq T''$ and $p\in \mathcal{W}_{y>1}$, it holds that $r(y_{t})<r(e^{-1}\cdot T'')$
and therefore $d_{\mathcal{M}}\big(\varphi^{t}(p),\mathcal{S}\big)\leq\pi\sqrt{r(y_{t})}<\epsilon$.
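The choice of $T''$ can be made explicit. Solving $r(y)<(\epsilon/\pi)^{2}$ for $y>0$ gives $y>-1/\ln\big(1-(\epsilon/\pi)^{2}\big)$, so any $T''>-e/\ln\big(1-(\epsilon/\pi)^{2}\big)$ works (for $0<\epsilon<\pi$). The sketch below (ours; the helper name \texttt{t\_eps} is hypothetical) verifies the resulting bound $\pi\sqrt{r(e^{-1}T'')}<\epsilon$:

```python
import math

def r(y):
    return 1.0 - math.exp(-1.0 / y) if y > 0 else 1.0

def t_eps(eps):
    """A concrete T'' with r(T''/e) < (eps/pi)^2, valid for 0 < eps < pi.
    From 1 - exp(-1/y) < (eps/pi)^2 one gets y > -1/log(1 - (eps/pi)^2)."""
    c = (eps / math.pi) ** 2
    return 1.01 * (-math.e / math.log(1.0 - c))  # 1% margin for strictness

for eps in (1.0, 0.1, 0.01):
    T = t_eps(eps)
    print(math.pi * math.sqrt(r(T / math.e)) < eps)  # True: the bound is met
```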
To see that the points in $\mathcal{W}_{z>0}$ converge uniformly towards $\mathcal{S}$,
first note that $h^{-1}(\mathcal{W}_{z>0})=\mathcal{W}'_{z>0}=\mathcal{M}_{0}\cap\{(x,y,z) \in \mathbb{R}^3 : z>0\}$. Then
combined with \eqref{eq:d_M=000026d_M0}, we obtain $d_{\mathcal{M}}\big(\varphi^{[T',+\infty)}(\mathcal{W}_{z>0}),\mathcal{S}\big)<\epsilon$.
Finally, one only needs to choose $T_{\epsilon}$ to be $\max\{T',T''\}$, and the whole argument is complete.
\section{Conclusion} \label{sec_conclu}
In this paper, we have revisited Wilson's theorem (i.e., Theorem 3.4 in \cite{wilson1967structure}) on the relation between the domain of attraction of an attractor and its tubular neighborhood. Specifically, we show, with detailed and rigorous proofs, that the domain of attraction of a \emph{compact} asymptotically stable submanifold of a finite-dimensional smooth manifold, under a continuous dynamical system, is homeomorphic to its tubular neighborhood. We emphasize that the compactness of the attractor is crucial: without it, Wilson's theorem fails, as shown by two counterexamples in which the attractor is not compact and the state space is either complete or incomplete.
\bibliographystyle{plain}
| {
"timestamp": "2022-02-03T02:03:57",
"yymm": "2202",
"arxiv_id": "2202.00754",
"language": "en",
"url": "https://arxiv.org/abs/2202.00754",
"abstract": "In this paper, we show that the domain of attraction of a compact asymptotically stable submanifold of a finite-dimensional smooth manifold of an autonomous system is homeomorphic to its tubular neighborhood. The compactness of the attractor is crucial, without which this result is false; two counterexamples are provided to demonstrate this.",
"subjects": "Dynamical Systems (math.DS); Optimization and Control (math.OC)",
"title": "On Wilson's theorem about domains of attraction and tubular neighborhoods",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.974434792016126,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7093646051475421
} |
https://arxiv.org/abs/2211.06291 | Do Bayesian Neural Networks Need To Be Fully Stochastic? | We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs. | \section{Proofs}\label{sec:supplement:proofs}
We provide a proof of Theorem~\ref{theorem:one_bias_is_all_you_need}, which states that a number of architectures are universal conditional distribution approximators (UCDAs). First, we restate the architectures that we consider and our theorem statement for convenience. The architectures that we consider are:
\begin{itemize}
\vspace{-5pt}
\item[{[a]}] A deterministic multi-layer perceptron (MLP) with a single hidden layer of arbitrary width; non-polynomial activation function; and which takes $[Z;X]$ as its input.
\item[{[b]}] An MLP with $L=2$ layers; continuous, invertible, and non-polynomial activation functions; $d$ units with deterministic biases and $m$ units with Gaussian random biases in the first layer; and a second layer of arbitrary width.
\item[{[c]}] An MLP with $L=2$ layers; RELU activations; $2d$ units with deterministic biases and $m$ units with Gaussian random biases in the first layer; and a second layer of arbitrary width.
\item[{[d]}] An MLP with $L\ge 2$ layers; continuous and non-polynomial activation functions that are either invertible or RELUs; at least $2\max(d+m,n)$ units with deterministic biases in each hidden layer; finite weights and biases throughout; one non-final hidden layer with $m$ additional units with Gaussian random biases (other layers may also have additional units with random biases, alongside their $2\max(d+m,n)$ deterministic ones); and an arbitrary number of hidden units in one of the subsequent hidden layers.
\vspace{-5pt}
\end{itemize}
We recall Theorem~\ref{theorem:one_bias_is_all_you_need}.
\ucda*
\vfill
\newpage
\begin{proof}
We start by noting that for any Gaussian $Z \in \mathbb{R}^m$, there must be some invertible matrix $V \in \mathbb{R}^{m\times m}$ and vector $u \in \mathbb{R}^m$ such that $Z = V \eta+u$, where $\eta \sim \mathcal{N}(0,I_m)$ can be used as the noise input to our generator function.
This is essentially a reparameterization, and it allows us to express $f_{\theta}(Z,x)$ as $f_{\theta}(V \eta + u,x)$.
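As a numerical aside (not part of the proof), the reparameterization $Z = V\eta + u$ can be checked directly; the following numpy sketch, with function names of our choosing, takes $V$ to be the Cholesky factor of the covariance, which is one invertible choice when the covariance is positive definite.

```python
import numpy as np

def sample_gaussian_affine(u, Sigma, n_samples, rng):
    """Sample Z ~ N(u, Sigma) as Z = V @ eta + u with eta ~ N(0, I_m).

    V is any invertible matrix with V @ V.T == Sigma; the Cholesky
    factor is one such choice when Sigma is positive definite.
    """
    m = u.shape[0]
    V = np.linalg.cholesky(Sigma)            # lower-triangular, invertible
    eta = rng.standard_normal((n_samples, m))
    return eta @ V.T + u                     # each row is one draw of Z

# Usage: empirical moments of the draws approach (u, Sigma).
rng = np.random.default_rng(0)
u = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
Z = sample_gaussian_affine(u, Sigma, 200_000, rng)
```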
We next show that if our network is able to represent the vector $[Z;x]$ exactly in one layer and the downstream subnetwork is a universal function approximator as per Lemma~\ref{theorem:uat}, this provides a sufficient condition for the result to hold.
More formally, assume that all of the following hold for some hidden layer, $h_{\ell}\in \mathcal{H}_\ell \subset \mathbb{R}^\ell$:
\begin{enumerate}
\item $Z$ and $x$ are fully input into the network by this layer;
\item $h_{\ell}$ is compact provided $[Z;X]$ is itself compact;
\item $h_{\ell}$ can exactly represent $[Z;x]$ in the sense that there is some deterministic, surjective, and continuous function, $g : \mathcal{H}_\ell \rightarrow \mathbb{R}^m\times \mathcal{X}$, such that $g(h_{\ell})$ recovers $[Z;x]$ exactly for all $h_\ell$.
\item The downstream network $f_{\theta}^{>\ell}(h_{\ell})$ satisfies the assumptions of Lemma~\ref{theorem:uat}.
\end{enumerate}
Invoking Lemma~\ref{theorem:uat} for approximating the function $\tilde{f}\left([V^{-1};\mathbf{0}](g(h_{\ell})-[u;\mathbf{0}]),[\mathbf{0};I_d]g(h_{\ell})\right)=\tilde{f}(\eta,x)$ (noting that $\tilde{f}$ is continuous by assumption in the Theorem) gives
\begin{align}
\forall \varepsilon>0,~ \exists \theta ~:~ \sup_{h_{\ell}\in\mathcal{H}_\ell} \|f_{\theta}^{>\ell}(h_{\ell}) - \tilde{f}\left([V^{-1};\mathbf{0}](g(h_{\ell})-[u;\mathbf{0}]),[\mathbf{0};I_d]g(h_{\ell})\right)\| < \varepsilon.
\end{align}
Now by the first assumption, $h_{\ell}$ must itself be a function of $[Z;x]=[V\eta+u;x]$, so we can rewrite the above as
\begin{align*}
\forall \varepsilon>0, \lambda<\infty~ \exists \theta ~:~ \sup_{x\in \mathcal{X},\eta\in \mathbb{R}^m, \|\eta\|<\lambda} \|f_{\theta}(V \eta +u,x) - \tilde{f}(\eta,x)\| < \varepsilon,
\end{align*}
which is the desired result, with $V$ and $u$ taking on the values required for $Z=V\eta+u$.
Here $\lambda$ and the assumption $\|\eta\|<\lambda$ have been introduced to ensure that $[Z;x]$ is itself compact, noting this further requires the assumption made in the theorem itself that $Z$ has finite mean and variance.
To complete the proof, we now need to show that the provided architectures are capable of producing networks that satisfy the four assumptions above.
For architecture [a] they are all trivially satisfied as we have $h_0 = [Z;x]$, which directly ensures assumptions 1-3 hold, and $f_\theta^{>0}$ satisfies the assumptions of Lemma~\ref{theorem:uat} and is a suitable universal approximator.
For architecture [b], we start by noting that the fourth assumption directly holds by the architecture construction.
Now by using the weight matrix $W_1=[\mathbf{0}; I_d]$ and the biases $b_1 = [Z;0]$ for this first layer, we have that its pre-activations are exactly $[Z;x]$ for all $Z$ and $x$.
This ensures the first and second assumptions hold, noting that the continuity of the activation functions ensures that $h_{\ell}$ remains compact.
Finally, we can show that the third assumption holds by using the fact that the architecture uses invertible activation functions to simply define the required $g$ to be the corresponding inverse applied element-wise.
We can now view architecture [c] as an extension of architecture [b], wherein we no longer have an invertible activation function, but can exploit properties of the RELU and an increased number of hidden units instead.
Here we will now use the weight matrix $W_1=[\mathbf{0}; I_d; -I_d]$ and the biases $b_1 = [Z;\mathbf{0};\mathbf{0}]$ for this first layer, so that its pre-activations are exactly $[Z;x;-x]$ for all $Z$ and $x$.
This again immediately ensures that the first two assumptions hold, while the fourth assumption is again immediately ensured by the downstream subnetwork construction.
For the third assumption, we note that we have $h_{\ell}=[Z;\max(x,0);-\min(x,0)]$, and thus we immediately have $Z$ and simply need to subtract the third set of hidden units from the second to recover $x$; that is, the assumption is satisfied by taking $g([a;b;c])=[a;b-c]$.
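The recovery map for architecture [c] rests on the identity $\mathrm{relu}(x) - \mathrm{relu}(-x) = x$; a minimal numpy sketch of the construction (function names are ours):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def g(h, m, d):
    """Recover [Z; x] from h = [Z; relu(x); relu(-x)] via g([a;b;c]) = [a; b-c]."""
    a, b, c = h[:m], h[m:m + d], h[m + d:]
    return np.concatenate([a, b - c])

# Pre-activations [Z; x; -x] arise from W_1 = [0; I_d; -I_d], b_1 = [Z; 0; 0];
# after the ReLU the hidden layer is [Z; max(x,0); -min(x,0)].
rng = np.random.default_rng(1)
m, d = 3, 4
Z, x = rng.normal(size=m), rng.normal(size=d)
h = np.concatenate([Z, relu(x), relu(-x)])
recovered = g(h, m, d)
```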
Architecture [d] is now a generalization of those in [b] and [c] to allow additional layers and units in each layer.
We can show that the result holds for this set of architectures by showing that any such architecture can replicate the behavior of one of the architectures in [b] or [c] exactly.
For this, we first set all the weight matrices to the identity mapping and all the biases to zero for any layer which is not the specified layer with $m$ random Gaussian biases, with an arbitrary number of hidden units, or the output layer.
If the number of hidden units varies from one layer to the next, we simply pad the weight matrix with zeros, or truncate appropriately.
Here the assumption that we have at least $2\max(d+m,n)$ deterministic units in each layer means we always have enough units to exactly propagate either $[Z;x;-Z;-x]$ or $[Y;-Y]$, as required depending on the position in the network.
For the weights coming into the layer with the $m$ random biases, we use $W_{\ell}=[\mathbf{0}; I_d; -I_d; \mathbf{0}]$ and $b_{\ell} = [Z;\mathbf{0};\mathbf{0};\mathbf{0}]$, producing preactivations for $h_{\ell}$ that are always identical to the preactivations of $h_1$ in architecture [c], appended with zeros if necessary.
The arguments for architectures [b] and [c] (depending on whether our activations are invertible or RELUs) can now be applied to show that we can always recover $[Z;x]$ from $h_{\ell}$.
From here we simply note that the downstream network will behave identically as if it only had one more hidden layer of arbitrary width.
Thus, this architecture must always exactly emulate an architecture of type either [b] or [c], and is, therefore, a universal approximator as required.
\end{proof}
\section{Ethical Considerations}
\label{sec:supplement:ethics}
We hope that our work will help pave the way for cheap, high-quality uncertainty estimates. Such estimates could help build safe and robust artificial intelligence \cite{hendrycks2020unsolved}. Additionally, partially stochastic networks typically require less computation than fully stochastic networks and are therefore more environmentally friendly. However, strongly performing systems could lead to unintended consequences and pose societal costs \cite{russell2019human}, especially if humans place unwarranted credibility in the uncertainty estimates provided by deep learning systems.
\section{Computational Considerations}
\label{sec:supplement:computational_considerations}
We now briefly discuss some of the computational considerations around partially stochastic networks.
At deployment, the memory cost of partially stochastic networks scales with the number of stochastic parameters; the fewer stochastic parameters used, the lower the memory cost, with the exact savings depending on the specific implementation. However, the cost of computing the subset predictive depends on the particular stochastic subset. For example, a stochastic input layer would \textit{not} reduce the number of forward passes required, whilst a stochastic output layer would.
\vfill
\newpage
\FloatBarrier
\section{Additional results and experiment details}
\FloatBarrier
\subsection{HMC Mixing Analysis (\S\ref{sec:hmc_mixing})}
Here, we provide further results and details relating to the analysis in \S\ref{sec:hmc_mixing}: \nameref{sec:hmc_mixing}. In this section, we analysed the convergence of HMC samples provided by \citet{izmailov2021bayesian}. Table~\ref{tab:hmc_details} contains details pertaining to this analysis.
\paragraph{Analysis Details}To compute the prediction associated with each chain, we averaged the softmax probabilities produced by the samples associated with the chain, in accordance with:
\begin{align}
p(y|x, \mathcal{D}) = \mathbb{E}_{p(\theta | \mathcal{D})}[p(y|x, \theta)].
\end{align}
That is, for each chain, we computed a predictive distribution by averaging the prediction probabilities for each class across the samples from the relevant chain. The ``prediction'' for each datapoint associated with each chain is the class that has the highest predictive probability for that datapoint, i.e., $\arg \max_y p(y|x, \mathcal{D})$.
The agreement metric that we report is the percentage of data-points from a given dataset on which \textit{all three chains agree}. Note that this metric is different to the metric used by \citet{izmailov2021bayesian}, who compute the percentage of points on which one chain and the \textit{ensemble} of the remaining chains agree.
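The two computations just described, per-chain predictive averaging and the all-chains agreement metric, can be sketched as follows, assuming softmax outputs stored as numpy arrays of shape (samples, points, classes); the function names are ours:

```python
import numpy as np

def chain_predictions(probs):
    """probs: (n_samples, n_points, n_classes) softmax outputs of one chain.

    Average the per-sample class probabilities into the chain's predictive
    distribution, then take the argmax class per datapoint.
    """
    return probs.mean(axis=0).argmax(axis=1)

def all_chains_agree(list_of_probs):
    """Fraction of datapoints on which every chain's prediction matches."""
    preds = np.stack([chain_predictions(p) for p in list_of_probs])
    return (preds == preds[0]).all(axis=0).mean()

# Toy example: two chains predicting classes [0, 1], one predicting [0, 0].
p_a = np.array([[[0.9, 0.1], [0.2, 0.8]],
                [[0.8, 0.2], [0.3, 0.7]]])
p_c = np.array([[[0.9, 0.1], [0.6, 0.4]],
                [[0.7, 0.3], [0.9, 0.1]]])
agreement = all_chains_agree([p_a, p_a, p_c])   # chains agree on 1 of 2 points
```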
\paragraph{Additional Results} Although we computed the agreement of each chain on all of the corruptions on the CIFAR-10-C dataset, we presented only a subset of corruptions in Fig.~\ref{fig:hmc_agreement}. Here, we additionally present results for all corruptions in Figure~\ref{fig:all_hmc_agreement} below.
In an additional analysis, we compute the accuracy of each chain on different corruptions (Fig~\ref{fig:all_hmc_acc}). We find differences in accuracy of up to 8\% on certain corruptions, noticeably exceeding the within-chain variability (Fig~\ref{fig:all_hmc_acc_bootstrapped}). For example, the second HMC chain (orange) is less robust than the first and third HMC chains to all corruptions we consider. This further suggests that each HMC chain is exploring a different region of the posterior predictive.
\begin{table}[h]
\centering
\caption{Additional details for analysis into whether full-batch HMC is converging, found in \S\ref{sec:hmc_mixing}: \nameref{sec:hmc_mixing}}
\vspace{5pt}
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Dataset & CIFAR-10 \citep{krizhevsky2009learning} (MIT license) \\
& CIFAR-10-C \citep{hendrycks2018benchmarking} (CC 4.0 license). \\
Use of existing assets & HMC samples from \cite{izmailov2021bayesian} (CC BY 4.0 license). \\
Architecture & ResNet-20-FRN, as in \cite{izmailov2021bayesian}. \\
Compute Infrastructure & Google Colab \\
Hardware & Tesla T4 (or Tesla P100). \\
Runtime & ca. 12 hours.\\
\midrule
\bottomrule
\end{tabular}
\label{tab:hmc_details}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/hmc_agreement_all.pdf}
\caption{ Assessment of function space mixing of ResNet-20-FRN full batch Hamiltonian Monte Carlo (HMC) samples trained on CIFAR-10.
We measure the variability in predictions made across HMC chains released by \citet{izmailov2021bayesian}. To account for the finite sample size, we also measure the variability across simulated chains formed by resampling the first HMC chain, i.e., bootstrapping. (a) We compute the percentage of points, across different corruptions, on which all three chains make the same prediction. While the agreement is 90\% on the CIFAR-10 test set, the agreement decreases to <60\% on certain datasets. (b) The agreement of bootstrapped HMC chains is greater than 94\% across all data considered.}
\label{fig:all_hmc_agreement}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/hmc_all_accs.pdf}
\caption{Assessment of function space mixing of ResNet-20-FRN full batch Hamiltonian Monte Carlo (HMC) samples trained on CIFAR-10.
We measure the variability in predictions made across HMC chains released by \citet{izmailov2021bayesian}. Here, we present the accuracy of each chain on the CIFAR-10 test set and all corruptions of the CIFAR-10-C \cite{hendrycks2018benchmarking} dataset with corruption intensity 5.}
\label{fig:all_hmc_acc}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/hmc_all_accs_bootstrapped.pdf}
\caption{Assessment of within-chain function space variability of ResNet-20-FRN full batch Hamiltonian Monte Carlo (HMC) samples trained on CIFAR-10. We measure the variability in predictions made across simulated HMC chains, using samples released by \citet{izmailov2021bayesian}. Specifically, we generated multiple simulated chains by sampling from the first chain with replacement.}
\label{fig:all_hmc_acc_bootstrapped}
\end{figure}
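The bootstrap construction used for the simulated chains above, resampling one chain's samples with replacement, can be sketched as follows (function name ours):

```python
import numpy as np

def bootstrap_chains(chain_probs, n_chains, rng):
    """Simulate chains by resampling one chain's samples with replacement.

    chain_probs: (n_samples, n_points, n_classes) predictions of one chain.
    Returns n_chains arrays of the same shape, each a bootstrap resample.
    """
    n = chain_probs.shape[0]
    return [chain_probs[rng.integers(0, n, size=n)] for _ in range(n_chains)]

# Usage on toy predictions.
rng = np.random.default_rng(0)
probs = rng.random((5, 3, 2))
sims = bootstrap_chains(probs, 3, rng)
```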
\FloatBarrier
\newpage
\subsection{1D Regression with Hamiltonian Monte Carlo (\S\ref{sec:1d_regression})}
We now provide further details relating to \S\ref{sec:1d_regression}: \nameref{sec:1d_regression}. In this section, we focus on the experiment details related to the experiments that used Hamiltonian Monte Carlo. Please see Table~\ref{tab:1d_regression_details_hmc} for relevant experiment details.
\paragraph{Data} We generate synthetic data as follows. We draw 25 points from $\mathcal{U}(-3, -1.7)$ and 25 points from $\mathcal{U}(2.2, 4)$ to generate a set of 50 input points, $\{x_i\}$. We generate the outputs using $y_i = \sin(4\cdot(x_i-4.3)) + \epsilon_i$, where $\epsilon_i \sim \mathcal{N}(0, 0.05^2)$.
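The data-generating process above can be sketched directly (function name ours):

```python
import numpy as np

def make_regression_data(rng):
    """Two input clusters with noisy sine outputs, as described above."""
    x = np.concatenate([rng.uniform(-3.0, -1.7, size=25),
                        rng.uniform(2.2, 4.0, size=25)])
    y = np.sin(4.0 * (x - 4.3)) + rng.normal(0.0, 0.05, size=x.shape)
    return x, y

rng = np.random.default_rng(0)
x, y = make_regression_data(rng)
```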
\paragraph{Additional Results} In Fig.~\ref{fig:more_hmc_networks}, we show the predictive distributions of additional partially stochastic networks that use two-stage training. We note that for the No-U-Turn Sampler (NUTS), the number of steps is chosen adaptively.
\begin{table}[h]
\caption{Additional experiment details for 1D Regression using Hamiltonian Monte Carlo, found in \S\ref{sec:1d_regression}: \nameref{sec:1d_regression}.}
\vspace{5pt}
\centering
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & Multi-layer perceptron \\
Number of Hidden Layers & 2 \\
Layer Width & 50 \\
Activation Function & SiLU~\citep{hendrycks2016gaussian}\\
Prior Mean & 0 \\
Prior Variance & $\frac{|\Theta|}{|\Theta_S|}$, following \citep{daxberger2021bayesian}. \\
Network Parameterization & Neural Tangent Kernel Parameterization~\citep{jacot2018neural} \\
Inference Algorithm & Hamiltonian Monte Carlo~\citep{neal2012bayesian} with NUTS~\citep{no-u-turn} \\
MCMC chains & 8 \\
Warmup samples per chain & 1000 \\
Samples per chain & 500 \\
Maximum Tree Depth & 15 \\
Likelihood Function & Gaussian \\
Output Noise Variance & $0.05^2$ (As generated) \\
Dataset & Synthetic \\
Dataset Split & 70\% train, 20\% val, 10\% test. \\
Preprocessing & None \\
Computing Infrastructure & Macbook Pro \\
Runtime & ca. 15 minutes (Fully stochastic network). \\
\midrule
\bottomrule
\end{tabular}
\label{tab:1d_regression_details_hmc}
\end{table}
\begin{figure}
\centering
\vspace{-20pt}
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/toy_hmc}
\caption{Additional partially stochastic network configurations using HMC inference over subsets of model parameters.}
\vspace{-10pt}
\label{fig:more_hmc_networks}
\end{figure}
\FloatBarrier
\newpage
\subsection{1D Regression with Variational Inference (\S\ref{sec:1d_regression})}
We now provide further details relating to \S\ref{sec:1d_regression}: \nameref{sec:1d_regression}. In this section, we focus on the experiment details related to the experiments that used variational inference. Please see Table~\ref{tab:1d_regression_details} for relevant experiment details.
\paragraph{Data} We generate synthetic data as follows. We draw 700 points from $\mathcal{U}(-2, -1.4)$ and 700 points from $\mathcal{U}(2, 2.8)$ to generate a set of 1400 input points, $\{x_i\}$. We generate the outputs using $y_i = \sin(4\cdot(x_i-4.3)) + \epsilon_i$, where $\epsilon_i \sim \mathcal{N}(0, 0.05^2)$.
\begin{table}[h]
\caption{Additional experiment details for 1d regression using variational inference, found in \S\ref{sec:1d_regression}: \nameref{sec:1d_regression}.}
\vspace{5pt}
\centering
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & Multi-layer perceptron \\
Number of Hidden Layers & 3 \\
Layer Width & 100 \\
Activation Function & Leaky ReLU \\
Prior & $\mathcal{N}(0, 1)$ \\
Training Monte Carlo Samples & 1 \\
Inference Algorithm & Flipout Mean-Field Variational Inference \citep{wen2018flipout} \\
Posterior Mean Initialisation & $\mu \sim \mathcal{N}(0, 0.1^2)$ \\
Posterior Standard Deviation Initialisation & $\sigma = \log(1+\exp(\rho))$, with $\rho \sim \mathcal{N}(-3, 0.1)$ \\
Stochastic Layers & All, or output layer only. \\
Likelihood Function & Gaussian \\
Output Noise Variance & $0.05^2$ (As generated) \\
Dataset & Synthetic \\
Dataset Split & 70\% train, 20\% val, 10\% test. \\
Preprocessing & None \\
Optimizer & AdamW \citep{loshchilov2017decoupled}\\
Learning Rate & $0.001$\\
Weight Decay & $0.0001$ only on deterministic weights and biases \\
Batch Size & 350 \\
Epochs & $12000$\\
Plotting Epoch & Maximum validation set likelihood \\
Computing Infrastructure & Nvidia Tesla V100-PCIE-32GB \\
Runtime & ca. 15 minutes. \\
Use of existing assets & Bayesian Torch (BSD-3-Clause License)
\citep{krishnan2022bayesiantorch} \\
\midrule
\bottomrule
\end{tabular}
\label{tab:1d_regression_details}
\end{table}
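The posterior standard-deviation initialisation in the table above, $\sigma = \log(1+\exp(\rho))$ with $\rho \sim \mathcal{N}(-3, 0.1)$, can be sketched as follows (function name ours; we read the second argument of the normal as a standard deviation):

```python
import numpy as np

def init_posterior_std(shape, rng, loc=-3.0, scale=0.1):
    """sigma = softplus(rho) with rho ~ N(loc, scale^2).

    The softplus keeps sigma strictly positive; loc = -3 gives an initial
    posterior standard deviation of roughly softplus(-3) ~= 0.0486.
    """
    rho = rng.normal(loc, scale, size=shape)
    return np.log1p(np.exp(rho))

rng = np.random.default_rng(0)
sigma = init_posterior_std((10_000,), rng)
```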
\FloatBarrier
\newpage
\subsection{UCI Regression with Hamiltonian Monte Carlo (\S\ref{sec:uci_regression})}
We now provide further details relating to \S\ref{sec:uci_regression}: \nameref{sec:uci_regression}. Please see Table~\ref{tab:uci_regression_details} for relevant experiment details.
\paragraph{Additional Details.} We note the additional details used in these experiments. (i) We used a homoscedastic noise model $p(y_i|x_i, \theta) = \mathcal{N}(y_i|f_\theta(x_i), \sigma_o^2)$, where $f_\theta(x_i)$ represents the neural network predictions. (ii) We tuned the prior variance so that the deterministic MAP network does not overfit. (iii) For the energy dataset, we predict only the first outcome variable, such that all the tasks we consider have one-dimensional targets. (iv) All stochastic networks use a tempered posterior, where the sampler targets the density whose logarithm is $\lambda\cdot\log p(\mathcal{D}|\theta) + \log p(\theta)$. We tuned $\lambda$ for each dataset by maximising the likelihood of a validation set. (v) We place a prior over the output noise precision, $\lambda_o = 1/\sigma_o^2$.
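The tempered target in item (iv) can be sketched as a log-density helper (names ours; the log-likelihood and log-prior are caller-supplied callables):

```python
def tempered_log_density(theta, log_likelihood, log_prior, lam):
    """Log target lam * log p(D | theta) + log p(theta) for a tempered sampler.

    lam is the likelihood scale tuned on a validation set; lam = 1 recovers
    the untempered posterior (up to the normalising constant).
    """
    return lam * log_likelihood(theta) + log_prior(theta)

# Toy usage with a Gaussian log-likelihood and Laplace log-prior.
log_lik = lambda t: -0.5 * t * t
log_pri = lambda t: -abs(t)
value = tempered_log_density(2.0, log_lik, log_pri, 3.0)   # 3 * (-2) + (-2)
```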
\begin{table}[h]
\caption{Additional experiment details for UCI regression using Hamiltonian Monte Carlo, found in \S\ref{sec:uci_regression}: \nameref{sec:uci_regression}.}
\vspace{5pt}
\centering
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & Multi-layer perceptron \\
Number of Hidden Layers & 2 \\
Layer Width & 50 \\
Activation Function & Leaky ReLU \\
Prior & $\mathcal{N}(0, \sigma^2)$ \\
Prior Variance & $\sigma^2\in[0.1, 0.01, 0.01]$ for UCI Yacht, Boston and Energy respectively.\\
Likelihood Scale & $\lambda\in[6.0, 1.0, 8.0]$ for UCI Yacht, Boston and Energy respectively.\\
Inference Algorithm & Hamiltonian Monte Carlo~\citep{neal2012bayesian} with NUTS~\citep{no-u-turn}\\
MCMC chains & 8 \\
Warmup samples per chain & 325 \\
Samples per chain & 75 \\
Maximum Tree Depth & 15 \\
Output Precision Prior & $\operatorname{Gamma}(3.0, 1.0)$ \\
Likelihood Function & Gaussian \\
Datasets & UCI Yacht, Boston, Energy~\citep{Dua:2019} \\
Dataset Split & 90\% train, 10\% test. Standard and ``gap'' splits~\citep{foong2019between} \\
Preprocessing & Feature normalisation \\
Computing Infrastructure & Internal CPU Cluster \\
Runtime & $\leq$30 minutes; exact time depends on network. \\
\midrule
\bottomrule
\end{tabular}
\label{tab:uci_regression_details}
\end{table}
\FloatBarrier
\clearpage
\newpage
\subsection{Image Classification with Laplace Approximation (\S\ref{sec:vision_laplace})}
We now provide further results and details relating to \S\ref{sec:vision_laplace}: \nameref{sec:vision_laplace}. In this section, we considered the use of the Laplace approximation for fully stochastic and partially stochastic networks on an image classification task. Please see Table~\ref{tab:laplace_details} for relevant experiment details.
Note that the experiments in this section build heavily on the \texttt{Laplace} library, released by \citet{daxberger2021laplace}.
\begin{table}[h]
\centering
\caption{Additional experiment details for image classification experiments using the Laplace approximation, found in \S\ref{sec:vision_laplace}: \nameref{sec:vision_laplace}.}
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & FixUp \citep{zhang2019fixup} WideResNet-16-4 \citep{zagoruyko2016wide}\\ & following \citep{daxberger2021laplace}\\
Dataset & CIFAR-10 \citep{krizhevsky2009learning} (MIT License), \\
& CIFAR-10-C \citep{hendrycks2018benchmarking} (CC 4.0 License). \\
Use of Existing Assets & \texttt{Laplace} Library \citep{daxberger2021laplace} (MIT License) \\
Computing Infrastructure & 4x Nvidia A100 GPU.\\
Preprocessing & Per-channel normalisation $\mu=0$, $\sigma=1$ \\
Number of Seeds & 10 \\
\midrule
\textbf{MAP Training} & \\ \midrule
Data Augmentation & Random crop and horizontal flip\\
Runtime & ca. 2 hours. \\
Epochs & 350 \\
Batch Size & 1024 \\
Optimizer & AdamW \citep{loshchilov2017decoupled}\\
Learning Rate & $0.001$\\
Weight Decay & $0.0001$ \\
\midrule
\textbf{Laplace Approximation} & \\ \midrule
Hessian Structure & Kronecker Factorised (KFAC) \\
Validation Set & $10\%$ of CIFAR-10 test set. \\
Prior Precision Tuning & Min val NLL (log-sweep in $(10^{-2}, 10^{5})$ with 125 increments)\\
Batch Size & 32 \\
Predictive & Linearized GLM Predictive \\
Temperature & 1.0 \\
Runtime & ca. 5 hours for fully stochastic networks \\
& less for partially stochastic networks \\
\midrule
\bottomrule
\end{tabular}
\label{tab:laplace_details}
\end{table}
\FloatBarrier
\clearpage
\newpage
\subsection{Image Classification with SWAG (\S\ref{sec:vision_swag})}
We now provide further results and details relating to \S\ref{sec:vision_swag}: \nameref{sec:vision_swag}. In this section, we considered the use of SWAG inference for fully stochastic and partially stochastic networks on an image classification task. Please see Table~\ref{tab:swag_details} for relevant experiment details. We mostly followed \citet{maddox2019simple} in the choice of hyperparameters, using the hyperparameters they used for their ImageNet experiments from a pre-trained solution. We did, however, tune the learning rate per architecture using a validation set.
\paragraph{Additional Partially Stochastic Network Configurations} We present selected partially stochastic network configurations in Fig.~\ref{fig:swag_results}. Fig.~\ref{fig:swag_results_moreconfig} shows more configurations. Several configurations outperform the fully stochastic network in-distribution, but only the network with a stochastic input layer and first ResNet block outperforms the fully stochastic network at large corruption intensities. Nevertheless, the partially stochastic networks have lower memory cost.
\begin{table}[h]
\centering
\caption{Additional experiment details for image classification experiments using SWAG, found in \S\ref{sec:vision_swag}: \nameref{sec:vision_swag}}
\vspace{5pt}
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & FixUp \citep{zhang2019fixup} WideResNet-16-4 \citep{zagoruyko2016wide}\\ & following \citep{daxberger2021laplace}\\
Dataset & CIFAR-10 \citep{krizhevsky2009learning} (MIT License), \\
& CIFAR-10-C \citep{hendrycks2018benchmarking} (CC 4.0 License). \\
Use of Existing Assets & \texttt{Laplace} Library \citep{daxberger2021laplace} (MIT License) \\
Computing Infrastructure & 4x Nvidia A100 GPU.\\
Preprocessing & Per-channel normalisation $\mu=0$, $\sigma=1$ \\
Number of Seeds & 10 \\
\midrule
\textbf{MAP Training} & \\ \midrule
Data Augmentation & Random crop and horizontal flip\\
Runtime & ca. 2 hours. \\
Epochs & 350 \\
Batch Size & 1024 \\
Optimizer & AdamW \citep{loshchilov2017decoupled}\\
Learning Rate & $0.001$\\
Weight Decay & $0.0001$ \\
\midrule
\textbf{SWAG} & \\ \midrule
Rank of Covariance Matrix ($K$) & 20 \\
Evaluation Monte Carlo Samples & 30 \\
SWAG Epochs & 10 \\
SWAG Snapshots per Epoch & 4 \\
Weight decay & 3e-4 \\
Validation Set & $10\%$ of CIFAR-10 test set. \\
Learning Rate & Tuned: log-sweep in $(10^{-5}, 10^{-2})$ with 25 increments\\
Batch Size & 1024 \\
Runtime & ca. 3 hours \\
\midrule
\bottomrule
\end{tabular}
\label{tab:swag_details}
\end{table}
\begin{table}
\caption{Correspondence between network name and stochastic blocks for additional configurations for SWAG experiments (Fig.~\ref{fig:swag_results_moreconfig}). Note that ResNet block 1 is the ResNet block immediately after the input layer, and as the block number increases, the block is closer to the network output.}
\centering
\begin{tabular}{@{}ll@{}}
\toprule
Name & Stochastic Units \\ \midrule
MAP & None \\
All (Fully Stochastic) & All layers \\
Input Layer & Input Layer \\
Input+ & Input Layer and ResNet Block 1\\
Output Layer & Output Layer \\
Output+ & Output Layer and ResNet Block 3\\
Input and Output Layer & Input and Output Layer \\
Bottleneck & ResNet Block 2 \\
\midrule
\bottomrule
\end{tabular}
\label{tab:swag_moreconfig_names}
\end{table}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/SWAG_subnetwork_relative_more.pdf}
\caption{Relative NLL for various SWAG networks on CIFAR-10 and CIFAR-10-C \cite{hendrycks2018benchmarking}. Results averaged across 10 random seeds. We show many more configurations here---see Table~\ref{tab:swag_moreconfig_names} for correspondence between model name and the stochastic units.}
\label{fig:swag_results_moreconfig}
\end{figure}
\FloatBarrier
\clearpage
\newpage
\FloatBarrier
\subsection{Image Classification with Variational Inference}
We now provide further results and details relating to \S\ref{sec:vision_vi}: \nameref{sec:vision_vi}. In this section, we considered the use of variational inference for fully stochastic and partially stochastic networks on an image classification task. Please see Table~\ref{tab:vi_vision_details} for relevant experiment details.
Note that the experiments in this section build heavily on the \texttt{uncertainty-baselines} library, released by \citet{nado2021uncertainty}.
\begin{table}[h]
\centering
\caption{Additional experiment details for image classification experiments using variational inference, found in \S\ref{sec:vision_vi}: \nameref{sec:vision_vi}.}
\vspace{5pt}
\begin{tabular}{@{}ll@{}}
\toprule
Hyper-parameter & Description \\ \midrule
Architecture & WideResNet-28-10~\cite{zagoruyko2016wide}\\
Dataset & CIFAR-10, CIFAR-100~\cite{krizhevsky2009learning} (MIT License) \\
Use of Existing Assets & \texttt{uncertainty-baselines} \cite{nado2021uncertainty} (Apache 2.0 license) \\
Computing Infrastructure & 4x Nvidia A100 GPU. \\
Inference Algorithm & Flipout Mean-Field Variational Inference \cite{wen2018flipout}. \\
KL Annealing Epochs & 200 \\
Prior $\sigma$ & 0.1 \\
Posterior Standard Deviation Initialisation & 0.001\\
Training Monte Carlo Samples & 1 \\
Evaluation Monte Carlo Samples & 5 \\
Training Epochs & 250 \\
Dataset Split & 95\% train, 5\% validation. \\
$\ell_2$ Weight Decay & $4 \cdot 10^{-4}$ \\
Batch Size & 256 \\
Learning Rate & 0.2 \\
Learning Rate Warmup Epochs & 1 \\
Momentum & 0.9 \\
Learning Rate Decay Ratio & 0.2 \\
Learning Rate Decay Epochs & 60, 120, 160 \\
Optimizer & SGD \\
Preprocessing & Per-channel normalisation $\mu=0$, $\sigma=1$ \\
Runtime & ca. 8 hours (fully stochastic) \\
\bottomrule
\end{tabular}
\label{tab:vi_vision_details}
\end{table}
\paragraph{Variability across random seeds.} Fig.~\ref{fig:vi_results_variation} shows the mean and standard deviation across different random seeds for large-scale image classification with variational inference on the CIFAR test sets. The conclusions in \S\ref{sec:vision_vi}: \nameref{sec:vision_vi} are consistent across random seeds---partially stochastic networks can perform well, while fully stochastic networks do not appear to perform well despite their large computational cost.
\paragraph{Additional network configurations.} We considered several partially stochastic network configurations---see Fig.~\ref{fig:vi_more_configs}---and presented a selection of the results in \S\ref{sec:vision_vi}: \nameref{sec:vision_vi}. Though not every partially stochastic network performs well, there are performant partially stochastic networks. One exciting area for future work is investigating and establishing best practices for the configuration and training of such partially stochastic networks.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/vi_results_variation.pdf}
\caption{We report the accuracy, expected calibration error (ECE) and NLL on the standard CIFAR test sets when performing VI for subsets of parameters and learning the remaining parameters by maximising the (penalised) ELBO. Dots indicate the mean across 3 random seeds, bars indicate the standard deviation. These results are a graphical display of Table~\ref{tab:vi_results_table}, found in \S\ref{sec:vision_vi}: \nameref{sec:vision_vi}.}
\label{fig:vi_results_variation}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/vi_more_configs.pdf}
\caption{NLL and expected calibration error (ECE) on the CIFAR-10 test set for different network configurations. These results were produced using only 1 random seed. Though not every partially stochastic network performs well, there are performant partially stochastic networks.}
\label{fig:vi_more_configs}
\end{figure}
\subsubsection*{\bibname}}
\usepackage[english]{babel}
\usepackage[square,authoryear,sort]{natbib}
\bibliographystyle{abbrvnat}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{hyperref}
\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{xcolor}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{placeins}
\usepackage{amsthm}
\usepackage{thmtools}
\usepackage{thm-restate}
\usepackage{cleveref}
\usepackage{nameref}
\usepackage{mathtools}
\usepackage{float}
\usepackage{FiraMono}
\definecolor{myorange}{rgb}{0.87, 0.56, 0.02}
\definecolor{myblue}{rgb}{0.00392156862745098, 0.45098039215686275, 0.980392156862745}
\definecolor{mygreen}{rgb}{0.00784313725490196, 0.6196078431372549, 0.45098039215686275}
\definecolor{myred}{rgb}{0.8352941176470589, 0.3686274509803922, 0.0}
\definecolor{mypurple2}{rgb}{0.8, 0.47058823529411764, 0.7372549019607844}
\definecolor{mypurple}{rgb}{0.501, 0, 0.501}
\declaretheorem[name=Theorem,numberwithin=section]{thm}
\newtheorem{theorem}{Theorem}
\newtheorem{remark}{Remark}
\newtheorem{lemma}{Lemma}
\hypersetup{
colorlinks,
citecolor=[rgb]{0.501, 0, 0.501},
linkcolor=[rgb]{0.501, 0, 0.501},
urlcolor=[rgb]{0.501, 0, 0.501},
}
\usepackage[symbol]{footmisc}
\renewcommand*{\thefootnote}{\fnsymbol{footnote}}
\begin{document}
\twocolumn[
\aistatstitle{Do Bayesian Neural Networks Need To Be Fully Stochastic?}
\aistatsauthor{Mrinank Sharma$^\dagger$ \And Sebastian Farquhar \And Eric Nalisnick \And Tom Rainforth}
\aistatsaddress{University of Oxford \And University of Oxford \And University of Amsterdam \And University of Oxford }]
\begin{abstract}
We investigate the efficacy of treating \emph{all} the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary.
To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors
for $n$-dimensional predictive problems.
In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs.
\end{abstract}
\section{Introduction}
\footnotetext{$\dagger$~Correspondence to Mrinank Sharma, \texttt{<mrinank@robots.ox.ac.uk>}.}
\renewcommand*{\thefootnote}{\arabic{footnote}}
Bayesian neural networks (BNNs) are often considered to be the most principled approach for uncertainty quantification in deep learning~\citep{wilson2020case,mackay1992bayesian,neal2012bayesian,abdar2021review}.
Indeed, they have a simple and compelling foundation: we use neural networks to define flexible hypotheses classes of predictive functions by defining a prior over \emph{all} their weights and biases, then perform inference to produce posterior predictive distributions.
In practice, full posterior inference for BNNs is intractable and so practitioners must resort to approximate inference schemes~\citep{neal2012bayesian, blundell2015weight, daxberger2021laplace, welling2011bayesian}.
This can lead to practical behaviour that is highly distinct from that of the true posterior \citep{foong2020expressiveness, coker2021wide}, while still being extremely computationally expensive.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{rethinking_fs_bnns_aistats_2023/figures/toy_hmc_intro.pdf}
\vspace{-20pt}
\caption{Perhaps surprisingly, inference over only the first hidden layer weights of a small multi-layer perceptron represents uncertainty as well as inference over all weights, whilst training ca.~7 times faster. We first train a maximum-a-posteriori network and then use Hamiltonian Monte Carlo inference over (a) the first hidden layer parameters only---other parameters are fixed---and (b) all network parameters. Lines: mean predictions. Shaded areas: predictive intervals.}
\label{fig:toy_hmc_intro}
\vspace{-10pt}
\end{figure}
To reduce these costs, the research community has recently considered partially stochastic networks~\citep{daxberger2021laplace,izmailov2020subspace,daxberger2021bayesian,kristiadi2020being,ober2019benchmarking,snoek2015scalable,lei2021spatial}. Though promising, these approaches are usually seen as pragmatic cost-saving measures relative to more expensive but principled fully stochastic approaches. For example, \citet{kristiadi2020being} describe stochastic last-layer approaches as ``approximation schemes,'' \citet{daxberger2021bayesian} see partial stochasticity as a tool for approximating the full posterior, and \citet{ober2019benchmarking} describe a compromise between ``tractability and expressiveness.''
In this work, we question this underlying assumption that full stochasticity is preferable to, and indeed more principled than, partial stochasticity. Despite the prevalence of this assumption, we uncover compelling theoretical and empirical evidence that suggests it may be misguided.
To begin, we first consider whether full stochasticity is necessary for our networks to be sufficiently expressive (§\ref{sec:theory}).
Although one may intuit that reducing the number of stochastic parameters would hamper expressivity, we prove this is not the case.
In fact, many simple architectures using only a handful of stochastic parameters are universal conditional distribution approximators (UCDAs)---they can sample from any continuous conditional distribution arbitrarily well. Moreover, finite-width bounded-variance fully stochastic layers can even destroy information about the input. These results demonstrate full stochasticity is certainly not necessary for expressive predictive distributions.
We then question whether full stochasticity can be justified by its original Bayesian formulation by examining whether approximate inference can faithfully capture the posterior. Here, we find even state-of-the-art inference schemes using impractical amounts of compute do \textit{not} produce faithful representations (§\ref{sec:hmc_mixing}). Thus fully stochastic networks cannot be supported through their Bayesian formulation alone.
Of course, full stochasticity could still be a \textit{practically} helpful construction for learning useful predictive distributions. Accordingly, we empirically investigate whether full stochasticity translates to improved predictive performance over partially stochastic networks (§\ref{sec:experiments}). In fact, across four inference modalities and eight datasets, we find no systematic benefit of full stochasticity; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite reduced memory costs and typically shorter training times (Fig.~\ref{fig:toy_hmc_intro}).
Overall, our work questions the prevalent assumption that full stochasticity is preferable to and more principled than partial stochasticity. We demonstrate that partially stochastic networks are no less principled than fully stochastic ones, challenging the \textit{de facto} default model construction of full stochasticity.
To summarise, our key contributions are:
\begin{enumerate}
\vspace{-5pt}
\item[(i)] We show that there is no tradeoff between the number of stochastic parameters and network expressivity. In particular, we prove partially stochastic networks are universal conditional distribution approximators.
\vspace{-5pt}
\item[(ii)] Across four inference modalities, ranging from high-fidelity Hamiltonian Monte Carlo to crude mean-field variational inference, we surprisingly demonstrate that there is no benefit for full stochasticity in terms of predictive performance. The best-performing partially stochastic network varies by inference modality.
\vspace{-5pt}
\end{enumerate}
\section{Background}
We focus on supervised learning problems. Let the training set be denoted as $\mathcal{D}=\{(x_i, {y}_i) \}_{i=1}^{N}$ with inputs $x_i \in \mathcal{X}$ and outputs $y_i \in \mathcal{Y}$. We assume the data is independently and identically drawn from an underlying distribution $P_{X,Y}$. Our task is to learn a conditional distribution $Y|X=x$.
\textbf{Bayesian Neural Networks (BNNs)}~ Let $f_{\theta}(x)$ be a deep neural network with parameters $\theta$, which represent a set of weights and biases.
Rather than employing empirical risk minimization to train $\theta$, BNNs place a prior $p(\theta)$ over $\theta$ and define a likelihood, $p(y|f_{\theta}(x))$.
By Bayes' rule, this now defines a \textit{posterior}, $p(\theta|{\mathcal{D}}) \propto p(\theta) p(\mathcal{D}|\theta)$---where $p(\mathcal{D}|\theta) = \prod_i p(y_i| f_{\theta}(x_i))$---that represents the updated beliefs about $\theta$ given the data $\mathcal{D}$.
Prediction is performed using the
\textit{posterior predictive}, $p(y | x, \mathcal{D}) = \mathbb{E}_{p(\theta | \mathcal{D})}\left[ p(y|f_{\theta}(x))\right]$, which represents the pushforward distribution of the posterior through the network for a given input $x$.
Given that BNNs are explicitly algorithms for supervised prediction, one ultimately only cares about this posterior predictive distribution, rather than the posterior itself \citep{foong2020expressiveness,farquhar2020liberty}.
The properties of the posterior predictive distribution are often referred to as the ``function space'' properties of a BNN~\citep{izmailov2021bayesian}.
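For concreteness, the Monte Carlo approximation to this posterior predictive can be sketched as follows; this is an illustrative snippet, in which \texttt{forward} and \texttt{likelihood} are hypothetical stand-ins for a concrete network and likelihood, and \texttt{theta\_samples} are draws from an (approximate) posterior:

```python
import numpy as np

def posterior_predictive(x, theta_samples, forward, likelihood):
    """Monte Carlo estimate of p(y|x, D) = E_{p(theta|D)}[p(y|f_theta(x))].

    theta_samples: draws from the (approximate) posterior p(theta|D).
    forward(theta, x): network output f_theta(x).
    likelihood(f): predictive distribution p(y|f) for one posterior
        sample, here a vector of class probabilities.
    """
    probs = [likelihood(forward(theta, x)) for theta in theta_samples]
    return np.mean(probs, axis=0)  # average over posterior samples
```

Averaging the per-sample predictive distributions (rather than, say, the logits) is what makes this an unbiased estimator of the predictive integral.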
\textbf{Approximate Inference in BNNs}~ Unfortunately, exact inference is generally intractable for BNNs. As such, practitioners resort to approximate inference, typically over all model parameters. Sampling-based approaches, such as Hamiltonian Monte Carlo (HMC) \citep{neal2012bayesian} or Stochastic Gradient Langevin Dynamics \citep{welling2011bayesian} attempt to sample from the posterior. Alternatively, traditional variational approaches \citep{mackay1992bayesian,blundell2015weight,gal2016dropout} learn an approximate posterior, $q(\theta; \phi) \approx p(\theta|\mathcal{D})$, for which existing methods usually make some kind of mean-field assumption over $\theta$.
Meanwhile, some modern approaches have instead looked directly to learn variational approximations of the posterior predictive~\citep{sun2019functional,rudner2020rethinking}.
\textbf{Partially Stochastic Networks}~ Let $f_{\Theta}(x)$ be a deep neural network and define a likelihood $p(y|f_\Theta(x))$. In a partially stochastic network \citep{kristiadi2020being,kristiadi2021learnable,daxberger2021bayesian,izmailov2020subspace,dusenberry2020efficient,lei2021spatial,snoek2015scalable}, we partition the parameters into disjoint stochastic and deterministic subsets, $\Theta = \Theta_{S} \cup \Theta_{D}$.
We learn point estimates for $\Theta_D$ and a distribution over $\Theta_S$, which could be learnt jointly with the deterministic parameters or separately in a two-stage training procedure.
To make predictions, we compute the \textit{subset predictive distribution} by holding $\Theta_D$ fixed and pushing forward the distribution over $\Theta_S$ through the network.
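A minimal sketch of this subset predictive, assuming hypothetical \texttt{forward} and \texttt{sample\_theta\_S} helpers in place of a concrete architecture and a learnt distribution over $\Theta_S$:

```python
import numpy as np

def subset_predictive(x, theta_D, sample_theta_S, forward, likelihood,
                      n_samples=100, seed=0):
    """Hold the deterministic parameters theta_D fixed and push the
    distribution over the stochastic parameters theta_S through the
    network, averaging the per-sample predictive distributions."""
    rng = np.random.default_rng(seed)
    probs = [likelihood(forward(sample_theta_S(rng), theta_D, x))
             for _ in range(n_samples)]
    return np.mean(probs, axis=0)
```

Only the $|\Theta_S|$ stochastic parameters need to be resampled per prediction, which is where the memory and compute savings over fully stochastic networks come from.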
\section{Related Work}
\textbf{Limitations of BNNs}~ Several works raise concerns around BNNs. \citet{foong2020expressiveness}, \citet{coker2021wide}, and \citet{trippe2018overpruning} showed that mean-field variational inference behaves pathologically. Others find that deviating from the posterior predictive, for example by sharpening the posterior~\citep{wenzel2020good} or degrading inference quality~\citep{izmailov2021dangers}, improves performance.
Our work complements these observations. Our demonstration of inaccurate inference weakens the theoretical justification for BNNs (\S\ref{sec:hmc_mixing}). Further, we find full stochasticity consistently does not improve predictive performance (\S\ref{sec:experiments}), which similarly questions the value of the full network posterior predictive.
\textbf{Existing Partially Stochastic Networks}~ Partially stochastic networks are gaining popularity. \citet{daxberger2021bayesian} approximate full network inference by performing \textit{expressive} inference over a carefully chosen subset of model weights. Further, \citet{izmailov2020subspace} perform expressive inference in an alternative probabilistic model, constructed by projecting network parameters to a low-dimensional subspace. But we demonstrate that expressive inference is not necessary in theory (\S\ref{sec:theory}) and in practice (\S\ref{sec:experiments}). Moreover, several works consider partial stochasticity as a pragmatic cost-saving measure relative to full stochasticity~\citep{snoek2015scalable,lei2021spatial,kristiadi2020being,dusenberry2020efficient}. We, however, question the value of full stochasticity and demonstrate partial stochasticity is no less justified than full stochasticity. Finally, we show stochastic output layers---the most popular approach---are typically not universal conditional distribution approximators (\S\ref{sec:theory}).
\textbf{Alternative Uncertainty Quantification Approaches}
Other than BNNs, there are many approaches for uncertainty quantification in deep learning~\citep{abdar2021review}. Deep ensembles are popular and performant~\citep{lakshminarayanan2017simple}. Others use entirely deterministic methods~\citep{van2020uncertainty,skafte2019reliable,mukhoti2021deterministic}. Further, \citet{osband2021epistemic} suggest using neural networks to approximate inference in some other probabilistic model, rather than performing inference over a neural network's weights and biases.
Our demonstration of inaccurate inference (\S\ref{sec:hmc_mixing}) supports this perspective by highlighting the challenge of accurate posterior inference.
\section{Expressivity of Partially Stochastic Networks}\label{sec:theory}
\vspace{-5pt}
Fully stochastic networks are typically assumed to be preferable to partially stochastic networks.
We now question this assumption by examining whether fully stochastic networks are necessary for theoretical \textit{expressivity}. That is, can partially stochastic networks, in principle, approximate conditional distributions as well as fully stochastic ones? Our findings are emphatically in the affirmative: we will show that networks using only a number of random variables equal to the dimensionality of the output space are universal conditional distribution approximators.
Our theoretical results leverage the \textit{Noise Outsourcing Lemma} \citep{kallenberg1997foundations, austin2012exchangeable, zhou2022deep} and the Universal Approximation Theorem (UAT) \citep{leshno1993multilayer}. We start by restating these results.
\begin{lemma}[Noise Outsourcing Lemma \citep{kallenberg1997foundations, austin2012exchangeable,zhou2022deep}]\label{lemma:noise_outsourcing}
Let $X$ and $Y$ be random variables in Borel spaces $\mathcal{X}$ and $\mathcal{Y}$. For any given $m\ge1$, there exists a random variable $\eta \sim \mathcal{N}(0,I_m)$
and a Borel-measurable function $\tilde{f}: \mathbb{R}^m \times \mathcal{X} \rightarrow \mathcal{Y}$ such that $\eta$ is independent of $X$ and
\begin{align}
(X, Y) = (X, \tilde{f}(\eta, X))
\end{align}
\vspace{-5pt}almost surely. Thus, $\tilde{f}(\eta, x) \sim Y|X=x,~ \forall x \in \mathcal{X}$.
\end{lemma}
The noise outsourcing lemma states that conditional distribution estimation can always be reduced to learning an appropriate function $\tilde{f}$ that maps from the input and independent noise to the output. Thus, if we can learn such an $\tilde{f}$, we can sample from $Y|X=x$ simply by sampling $\eta \sim \mathcal{N}(0,I_m)$ and calculating $Y=\tilde{f}(\eta,x)$. We term $\tilde{f}$ a \emph{generator function} of the conditional distribution $Y|X$ and note that it is not unique (e.g.\ we can always take $\eta'=-\eta$ and $\tilde{f}'(\eta',X)=\tilde{f}(-\eta',X)$).
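As a toy illustration of our own (not part of the lemma itself): for the conditional $Y|X=x \sim \mathcal{N}(x,1)$, the generator function $\tilde{f}(\eta,x)=x+\eta$ with outsourced noise $\eta\sim\mathcal{N}(0,1)$ reproduces the conditional exactly:

```python
import numpy as np

# Generator function for Y | X = x ~ N(x, 1): the conditional is
# recovered by pushing input-independent noise through f_tilde.
def f_tilde(eta, x):
    return x + eta

rng = np.random.default_rng(0)
x = 2.0
samples = f_tilde(rng.standard_normal(100_000), x)
# empirically, samples have mean close to 2 and standard deviation close to 1
```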
\begin{lemma}[Universal Approximation Theorem for Arbitrary Width Networks~\citep{leshno1993multilayer}]\label{theorem:uat}
Let $\mathcal{X}$ be some compact subspace of $\mathbb{R}^d$ and let $\mathcal{Y}\subseteq \mathbb{R}^n$.
Further, let $f_{\theta} : \mathcal{X} \to \mathcal{Y}$ be a fully connected neural network with one hidden layer of arbitrary width and a non-polynomial activation function, where $\theta \in \Theta$ represents the parameters of the network.
Then for any arbitrary continuous function $g: \mathcal{X} \rightarrow \mathcal{Y}$ and all $\varepsilon>0$,
\begin{align}
\exists \theta\in\Theta ~:~ \sup_{x\in\mathcal{X}} \|f_\theta(x) - g(x)\| < \varepsilon,
\end{align}
provided that the network is sufficiently wide.
\end{lemma}
Informally, Lemma~\ref{theorem:uat} states that we can approximate any continuous function arbitrarily well with a sufficiently wide network, even if that network only has a single hidden layer.
We now combine these two ideas to present our main result below in Theorem~\ref{theorem:one_bias_is_all_you_need}, which shows that arbitrary-sized networks with a small fixed amount of stochasticity \emph{before} their last layer are universal conditional distribution approximators.
Specifically, we show that the following architectures with deterministic weights can approximate any continuous conditional distribution $Y|X=x$ arbitrarily well for all $x\in \mathcal{X} \subset \mathbb{R}^d$, where $Y \in \mathcal{Y}\subseteq \mathbb{R}^n$, using only a finite set of Gaussian random variables, $Z=\{Z_1,\dots,Z_m\}$, $m\ge n$, that are independent of the input $X$ and have finite mean and variance:
\begin{itemize}
\vspace{-8pt}
\item[{(i)}] A deterministic multi-layer perceptron (MLP) with a single hidden layer of arbitrary width; non-polynomial activation function; and which takes $[Z;X]$ as its input.
\item[{(ii)}] An MLP with $L=2$ layers; continuous, invertible, and non-polynomial activation functions; $d$ units with deterministic biases and $m$ units with Gaussian biases in the first layer; and a second layer of arbitrary width.
\item[{(iii)}] An MLP with $L=2$ layers; ReLU activations; $2d$ units with deterministic biases and $m$ units with Gaussian random biases in the first layer; and a second layer of arbitrary width.
\item[{(iv)}] An MLP with $L\ge 2$ layers; continuous and non-polynomial activation functions that are either invertible or ReLUs; at least $2\max(d+m,n)$ units with deterministic biases in each hidden layer; finite weights and biases throughout; one non-final hidden layer with $m$ additional units with Gaussian random biases (other layers may also have additional units with random biases, alongside their $2\max(d+m,n)$ deterministic ones); and an arbitrary number of hidden units in one of the subsequent hidden layers.
\vspace{-8pt}
\end{itemize}
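For instance, architecture (i) amounts to a standard deterministic MLP that simply receives the concatenation $[Z;X]$ as its input; a minimal sketch with arbitrary (hypothetical) weight matrices:

```python
import numpy as np

def sample_arch_i(x, W1, b1, W2, b2, rng, m):
    """Architecture (i): a deterministic single-hidden-layer MLP applied
    to [Z; x], where Z ~ N(0, I_m) is independent of the input x."""
    z = rng.standard_normal(m)                      # outsourced noise Z
    h = np.tanh(W1 @ np.concatenate([z, x]) + b1)   # non-polynomial activation
    return W2 @ h + b2                              # deterministic readout
```

Each call returns one sample from the network's predictive distribution at $x$; all randomness enters through $Z$, while every weight and bias is a point estimate.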
We note that the above set of architectures is by no means exhaustive, as discussed later, but is chosen to be demonstrative of how simple architectures with universal approximation properties can be.
\begin{restatable}[Universal Conditional Distribution with Finite Stochasticity]{theorem}{ucda} \label{theorem:one_bias_is_all_you_need}
Let $X$ be a random variable taking values in $\mathcal{X}$, where $\mathcal{X}$ is a compact subspace of $\mathbb{R}^d$, and let $Y$ be a random variable taking values in $\mathcal{Y}$, where $\mathcal{Y}\subseteq \mathbb{R}^n$.
Further, let $f_{\theta} : \mathbb{R}^m \times \mathcal{X} \rightarrow \mathcal{Y}$ represent one of the neural network architectures defined in (i-iv) with deterministic parameters $\theta \in \Theta$, such that, for input $X=x$, the network produces outputs $f_{\theta}(Z,x)$, where $Z=\{Z_1,\dots,Z_m\}, Z_i\in \mathbb{R}$, are the random variables in the network, which are Gaussian, independent of $X$, and have finite mean and variance.
If there exists a continuous generator function, $\tilde{f} : \mathbb{R}^m \times \mathcal{X} \rightarrow \mathcal{Y}$, for the conditional distribution $Y|X$,
then $f_{\theta}$ can approximate $Y|X$ arbitrarily well.
Formally, $\forall \varepsilon>0, \lambda < \infty$,
\begin{align}
\exists &\theta \in \Theta, V \in \mathbb{R}^{m\times m}, u \in \mathbb{R}^m: \nonumber\\ &\sup_{x\in \mathcal{X},\eta\in \mathbb{R}^m, \|\eta\| \le \lambda} \|f_{\theta}(V
\eta + u,x) - \tilde{f}(\eta,x)\| < \varepsilon.
\end{align}
\end{restatable}
The proof is provided in the Supplement. At a high level, Theorem~\ref{theorem:one_bias_is_all_you_need} shows that the collection of simple partially stochastic architectures (i-iv) are \textit{Universal Conditional Distribution Approximators} (UCDAs). That is, they can form samplers which match \emph{any continuous} target conditional distribution, $Y|X=x$, arbitrarily well: in principle, they can learn to do any probabilistic predictive task perfectly.
The high-level basis for the proof is to show a) that if our network can represent $[Z;x]$ exactly in one of its hidden layers and the downstream network is a universal deterministic approximator (as per Lemma~\ref{theorem:uat}), then it forms a UCDA, and then b) that each of the architectures (i-iv) satisfy these conditions.
Note that the distribution over the random biases in these networks does not need to be learned: we only require the presence of some random noise that can be detached from the input, and the remainder of the network to be able to approximate the conditional generating function $\tilde{f}$.
Many other partially stochastic networks will also satisfy these conditions and thus form UCDAs, though it is difficult to exactly characterize this set.
In practice, we expect \emph{most} partially stochastic networks to form UCDAs, provided that they are sufficiently large, maintain some deterministic (or arbitrarily low variance) units in each layer, and have some stochasticity \emph{before} the final layer.
One could extend our results to more complex architectures, such as those that are not fully connected (e.g.~CNNs~\citep{lecun1995convolutional}) and/or which make use of skip connections (e.g.~ResNets~\citep{he2016deep} and DenseNets~\citep{iandola2014densenet}).
Meanwhile, $Z$ being non-Gaussian should also be perfectly viable, provided it is measurable with respect to an $m$-dimensional Lebesgue measure with a continuous density function.
The following property is important to note in this generalization to other architectures.
\begin{remark}
\label{rem:higher_dim}
If a continuous generator function exists for independent random noise of dimension $p$, then one also exists for any higher noise dimension $q>p$.
\end{remark}
This follows directly from the fact that the generator can simply ignore some of the noise variables.
As such, we can always add more units with stochastic biases and weights to a network without undermining universality.
However, this does not necessarily mean we can \emph{replace} the existing deterministic units with stochastic ones and still maintain universality.
Our results thus explicitly \emph{do not} ratify the standard BNN case, where all the weights and biases are stochastic with \emph{bounded} means and variances: our construction relies on being able to perfectly reconstruct $X$, which is typically not possible when using a fully stochastic layer.
In other words, finite-width fully stochastic layers can, in principle, destroy required information about the input.
\textbf{Discussion of Assumptions}~ Other than considerations about the architecture itself, the key assumption made by Theorem~\ref{theorem:one_bias_is_all_you_need} is that
a \emph{continuous} generator function exists for the conditional distribution we are approximating, $Y|X$.
Thankfully, this is generally a weak assumption; it is the analogue of the need for a continuous target in the UAT.
One can think of it as a formalization of the need for the distribution $Y|X$ itself to be continuous.
Though not an explicit condition of the theorem itself, the architectures we consider further assume that the number of stochastic variables in the network $m$ is greater than or equal to the output dimension $n$.
This is because it is difficult, albeit not necessarily impossible, for a generator function to be continuous when mapping from lower-dimensional noise to a higher-dimensional output.
However, if $Y$ is measurable with respect to an $n$-dimensional Lebesgue measure, then a continuous generator function will usually exist for exactly $m=n$ dimensional noise (and thus all $m\ge n$ by Remark~\ref{rem:higher_dim}), if one exists at all.
For example, we can consider sampling each dimension of $Y$ autoregressively using the inverse cumulative distribution functions of the conditionals $Y_j | X, Y_{<j}$, whenever these all exist and are continuous.
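A concrete (hypothetical) instance of this autoregressive construction, taking $Y_1|X=x \sim U(0,x)$ and $Y_2|Y_1 \sim U(0,Y_1)$: each output dimension consumes one independent noise draw, pushed through the inverse CDF of its conditional, so the resulting generator is continuous in both arguments:

```python
import numpy as np

def sample_autoregressive(x, rng):
    """Autoregressive inverse-CDF sampling sketch for a 2-D output:
    Y1 | X=x ~ U(0, x) and Y2 | Y1 ~ U(0, Y1). One independent noise
    draw per dimension, each mapped through its conditional's
    inverse CDF, yields a continuous generator function."""
    u1, u2 = rng.random(2)   # independent noise, one draw per dimension
    y1 = x * u1              # inverse CDF of U(0, x)
    y2 = y1 * u2             # inverse CDF of U(0, y1)
    return np.array([y1, y2])
```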
\textbf{Comparison to Previous Results}~ Our results share some similarities with previous expressivity results on \emph{fully} stochastic BNNs, most notably those of~\citet{farquhar2020liberty} and \citet{foong2020expressiveness}, who argued that deep, fully stochastic, mean-field BNNs are expressive.
Their results rely on taking some weights in the network to the zero variance limit, so they are no longer fully stochastic.
Thus, though their motivations, formulations, and conclusions are quite different to our own, their results are highly compatible with ours and can be viewed as indirectly hinting at the potential benefits of partially stochastic networks.
\textbf{Classification Problems}~ Classification problems have discrete $\mathcal{Y}$ that will clearly not satisfy our assumption of a continuous generator function from $\mathbb{R}^m \times \mathcal{X}$.
Thankfully, UCDA can be achieved even more easily here by simply regressing the class probabilities $P(Y=k|X=x)$ with a deterministic network, followed by making a simple draw of the class from this categorical distribution (which can be achieved with a single, one-dimensional, random draw).
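This classification construction can be sketched as follows: a deterministic network supplies the class probabilities, and a single one-dimensional uniform draw selects the class by inverting the categorical CDF (the helper name is ours):

```python
import numpy as np

def sample_class(probs, u):
    """Draw a class from the categorical distribution `probs` (assumed
    to be output by a deterministic network) using a single draw
    u ~ U(0, 1): invert the cumulative distribution function."""
    return int(np.searchsorted(np.cumsum(probs), u))

# e.g. with probs = [0.2, 0.5, 0.3], u = 0.65 lands in class 1
```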
\textbf{Stochastic Last Layer Networks are \textit{not} UCDAs}~
As an aside, we also consider the expressivity of partially stochastic networks where only the last layer of the network is stochastic.
Such approaches are used quite commonly in practice with notable success~\citep{daxberger2021laplace, kristiadi2020being, ober2019benchmarking, snoek2015scalable}, often allowing tractable inference.
However, such architectures will generally \emph{not} be UCDAs (except for classification problems) because their distributional form of $Y|X=x$ is limited to a linear mapping of the weights and biases in the last layer.
For example, if their distribution on weights and biases is Gaussian, this will induce a Gaussian distribution on $Y|X=x$ as well.
Though this certainly does not undermine the usefulness of such approaches, it does highlight that care is required in their deployment.
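The limitation is easy to see numerically. In the sketch below (an illustration with arbitrary made-up features), a Gaussian distribution over last-layer weights yields an output $y = w^\top h$ that is exactly Gaussian, regardless of the target conditional:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([1.0, -2.0, 0.5])            # fixed penultimate features for some x
mu_w, sigma_w = np.zeros(3), np.ones(3)   # Gaussian last-layer weight posterior

# y = w^T h is linear in w, so y is Gaussian with mean mu_w^T h and
# variance sum_i sigma_w_i^2 * h_i^2 -- it can never be multimodal or skewed.
ws = rng.normal(mu_w, sigma_w, size=(100_000, 3))
ys = ws @ h
analytic_var = float(np.sum(sigma_w**2 * h**2))   # 1 + 4 + 0.25 = 5.25
```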
\section{Does Bayesian Reasoning Support Fully Stochastic Networks?}
\label{sec:hmc_mixing}
Although we have seen that fully stochastic networks are not necessary in terms of their theoretical expressivity, their use could alternatively be supported through their conformance to Bayesian principles. Indeed, following a strict Bayesian approach, one would place a prior distribution over all unknown parameters and then perform inference over each of them. This corresponds to a fully stochastic network. Such an approach could be justified through one or more of the following benefits: (a) the ability to naturally include prior beliefs through subjective prior distributions~\citep{neal2012bayesian}; (b) improved uncertainty by averaging over different hypotheses consistent with observed data~\citep{wilson2020case}; and (c) coherent updates to uncertainty when observing data~\citep{jaynes2003probability}.
We now examine these benefits in turn.
First, with regard to (a), standard practice is to use vague parameter-space priors~\citep{fortuin2021bayesian}. But these priors are chosen for convenience, not because they well capture our beliefs. Indeed, several studies have raised serious concerns about the suitability of current BNN prior distributions~\citep{wenzel2020good,noci2021disentangling}.
Similarly, (b) does not provide support for full stochasticity. We can average over multiple hypotheses consistent with the data with partially stochastic networks.
Finally, though (c) could still support full stochasticity, it is highly dependent on our ability to perform inference accurately.
In particular, our approximations cannot be said to capture uncertainty in a ``principled'' Bayesian way if they vary significantly from the true posterior.
As such, it is natural to wonder: just how challenging is accurate inference in fully stochastic networks?
To provide some insight, we revisit the posterior samples released by \citet{izmailov2021bayesian}, who used full-batch HMC and 512 Tensor Processing Units (TPUs)---a deliberately extreme computing effort. As they do, we assess the variability of predictions across HMC chains. If each chain is exploring the posterior predictive well, the predictions made by each chain ought to agree. To assess the variability of predictions associated with the finite sample size, we resample the first HMC chain with replacement. Unlike \citet{izmailov2021bayesian},
we focus on out-of-distribution (OOD) data, where poor function space mixing may manifest more strongly.
We compute the percentage of data points on which \textit{all} chains produce the same prediction.\footnote{This is different to the agreement metric of \citet{izmailov2021bayesian}, who report the percentage of data points on which one chain and the ensemble of the other two chains agree.} As shown in Fig.~\ref{fig:hmc_agreement}{\color{mypurple}a}, while the chains agree on 90\% of the CIFAR-10 test set, the agreement falls to less than 60\% on certain OOD corruptions.
However, the agreement of the bootstrapped samples is consistently above 94\% (Fig.~\ref{fig:hmc_agreement}{\color{mypurple}b}).
The variability of predictions between chains far exceeds the variability of predictions within each chain, suggesting that each HMC chain is not well exploring the full posterior predictive distribution. Thus, additional chains would likely sample from previously unexplored regions of the posterior predictive.
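The two quantities compared here can be computed as follows (a sketch assuming arrays of per-chain and per-sample predicted labels; the helper names are ours):

```python
import numpy as np

def all_chain_agreement(preds):
    """Fraction of points on which all chains predict the same label.
    preds: int array of shape (n_chains, n_points) of predicted labels."""
    return np.mean(np.all(preds == preds[0], axis=0))

def bootstrap_agreement(samples_preds, n_chains, rng):
    """Simulate chains by resampling one chain's posterior samples with
    replacement, then majority-vote each simulated chain per point.
    samples_preds: int array of shape (n_samples, n_points)."""
    n = len(samples_preds)
    sim = []
    for _ in range(n_chains):
        idx = rng.integers(0, n, size=n)   # resample with replacement
        votes = samples_preds[idx]
        # majority vote over the resampled posterior samples, per point
        sim.append(np.array([np.bincount(col).argmax() for col in votes.T]))
    return all_chain_agreement(np.stack(sim))
```

Comparing the between-chain agreement against this bootstrapped within-chain baseline separates genuine disagreement between chains from variability due to the finite number of posterior samples.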
\begin{figure}
\centering
\includegraphics[width=\linewidth]{rethinking_fs_bnns_aistats_2023/figures/hmc_agreement.pdf}
\vspace{-20pt}
\caption{\textbf{Assessment of function space mixing of ResNet-20-FRN Hamiltonian Monte Carlo (HMC) samples trained on CIFAR-10.}
We measure the variability in predictions across HMC chains released by \citet{izmailov2021bayesian}. We consider the CIFAR-10 test set and selected corruptions from the CIFAR-10-C dataset \citep{hendrycks2018benchmarking}.
(a) We compute the percentage of points that all three original chains make the same prediction on.
(b) To account for the finite sample size, we measure the variability across simulated chains formed by resampling the first HMC chain (bootstrapping).
The agreement of bootstrapped HMC chains is greater than 94\% across all data considered.
}
\vspace{-10pt}
\label{fig:hmc_agreement}
\end{figure}
Even with astronomical compute and a state-of-the-art unbiased inference scheme, we see that accurate posterior inference remains elusive.
But practical methods tend to use biased and crude posterior approximations, aggravating these concerns and leading to pathological behaviour~\citep{foong2020expressiveness,coker2021wide,trippe2018overpruning,wenzel2020good,farquhar2019unifying}.
Overall, we conclude that the use of fully stochastic methods can \textit{not} be justified by their Bayesian formulation, at least not with current inference methods.
Of course, this does not undermine the use of fully stochastic networks in and of itself. But, it does suggest adopting a holistic viewpoint, such as that of \citet{osband2021epistemic}, and focusing on developing methods that yield networks with the desired practical behaviours, rather than implicitly assuming that full approximate inference should be our ultimate aim.
\FloatBarrier
\section{Does Full Stochasticity Improve Predictions in Practice?} \label{sec:experiments}
\vspace{-5pt}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{rethinking_fs_bnns_aistats_2023/figures/VI_issues_four.pdf}
\vspace{-20pt}
\caption{\textbf{1D regression with fully and partially stochastic mean-field variational inference.} The partially stochastic network has only a stochastic output layer. Lines: mean predictions. Shaded areas: $\pm\sigma, \pm2\sigma, \pm3\sigma$ predictive intervals.}
\label{fig:vi_surprises}
\vspace{-10pt}
\end{figure}
\begin{figure*}[t]
\centering
\vspace{-10pt}
\includegraphics[width=\textwidth]{rethinking_fs_bnns_aistats_2023/figures/subset_hmc.pdf}
\vspace{-20pt}
\caption{\textbf{UCI regression with Hamiltonian Monte Carlo (HMC)}. We use a small MLP with high-fidelity HMC inference. The partially stochastic networks first train a deterministic MAP solution, and then sample only the weights that had the largest absolute value under that MAP solution; the remaining weights are fixed at their MAP value. We consider both standard splits and gap splits \citep{foong2019between}. Diamonds: median across 15 train-test splits. Lines: interquartile range.
}
\label{fig:subset_hmc}
\vspace{-10pt}
\end{figure*}
We saw that full stochasticity is unnecessary for theoretical expressivity (\S\ref{sec:theory}). Further, such networks cannot be supported through their Bayesian formulation alone (\S\ref{sec:hmc_mixing}). Nevertheless, one could hypothesize that full stochasticity is \textit{practically} useful for learning performant predictive distributions. We now examine this hypothesis: does full stochasticity improve predictive performance in practice?
Across four inference modalities and eight datasets, we find \textbf{\color{myblue}no systematic benefit of full stochasticity}. In fact, there usually exist \textbf{\color{myorange}partially stochastic networks that outperform fully stochastic ones}.
Moreover, while previous work often argues that reducing stochasticity improves performance by enabling higher-fidelity inference~\citep{daxberger2021bayesian,izmailov2020subspace}, we show partially stochastic networks can outperform fully stochastic networks, even when both networks use the same posterior approximation families over their stochastic parameters.
\textbf{Partially Stochastic Network Strategies}~ Although there are many ways to train partially stochastic networks, here, we focus on the following relatively simple strategies:
\begin{enumerate}
\vspace{-8pt}
\item[(i)]\textit{Two-stage training}. All parameters of the network are trained deterministically, e.g., using MAP inference with prior $p_1(\Theta)=p_1(\Theta_S, \Theta_D)$. We then perform (approximate) inference over the stochastic subset, targeting $p(\Theta_S|\mathcal{D}; \Theta_D)\propto p_2(\Theta_S)\prod_{i}p(y_i|f_{\Theta_S \cup \Theta_D}(x_i))$. The stochastic subset can be chosen before or after deterministic training. We can also modify the prior over $\Theta_S$, i.e., have $p_2(\Theta_S)\neq \int p_1(\Theta_S, \Theta_D)\, \mathrm{d}\Theta_D$. Here, we consider two-stage partially stochastic variants of Hamiltonian Monte Carlo \citep{neal2012bayesian} (\S\ref{sec:1d_regression},\ref{sec:uci_regression}), the Laplace Approximation \citep{mackay1992bayesian} (\S\ref{sec:vision_laplace}) and SWAG \citep{maddox2019simple} (\S\ref{sec:vision_swag}).
\item[(ii)]\textit{Joint training}. Alternatively, we can choose the stochastic subset \textit{a priori}, and jointly train $\Theta_D$ and $q_\Phi(\Theta_S)$. Here, we use partially stochastic variational inference \citep{hinton1993keeping, graves2011practical,blundell2015weight} (\S\ref{sec:1d_regression},\ref{sec:vision_vi}), where $\Theta_D$ and $\Phi$ are learnt by maximising the evidence lower bound.
\vspace{-8pt}
\end{enumerate}
We note that these strategies do not directly target the full network predictive. As such, these partially stochastic networks \textit{do not} approximate the full network predictive distribution. In this section, we will examine whether their predictive distributions are useful in their own right.
\subsection{1D Regression with Hamiltonian Monte Carlo and Variational Inference}\label{sec:1d_regression}
To visually understand the effects of full and partial stochasticity, we first consider 1D regression. We consider both high-fidelity inference with Hamiltonian Monte Carlo (HMC) on a small dataset (ca.~50 datapoints) and relatively crude approximate inference with mean-field variational inference (MFVI) on a larger dataset (ca.~1000 datapoints). We use a two-hidden-layer MLP with independent $\mathcal{N}(0, \sigma^2)$ priors over the network's weights and biases.
First, on the smaller dataset, we train a deterministic MAP network. We then perform HMC over the first hidden layer weights (others fixed), and also over all weights. We follow \citet{daxberger2021bayesian} and increase the partially stochastic network's prior variance when performing HMC, setting $\sigma_\text{PS}^2 = \sigma_\text{FS}^2 \cdot |\Theta|/|\Theta_S|$, where $\sigma_\text{PS}^2$ and $\sigma_\text{FS}^2$ denote the prior variances of the partially and fully stochastic networks, respectively.
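The prior-rescaling heuristic of \citet{daxberger2021bayesian} used here amounts to a one-line computation; as a sketch (function name ours):

```python
def scaled_prior_variance(sigma_fs_sq, n_total, n_stochastic):
    """Scale the fully stochastic prior variance so that the partially
    stochastic network's prior induces a comparably wide functional prior,
    following the heuristic sigma_PS^2 = sigma_FS^2 * |Theta| / |Theta_S|."""
    return sigma_fs_sq * n_total / n_stochastic
```

For example, a network with 10{,}000 parameters of which 500 are stochastic gets a $20\times$ larger prior variance over the stochastic subset.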
Examining the predictions (Fig.~\ref{fig:toy_hmc_intro}), we find that \textbf{\color{myblue} both networks capture in-between uncertainty well}, but the partially stochastic network trains ca.~7 times faster. Full stochasticity does not necessarily lead to substantially improved predictions, even under high-fidelity inference.
Second, on the larger dataset, we use MFVI to train a fully stochastic network and a partially stochastic network that uses only a stochastic output layer.
We find that the fully stochastic network does not well capture in-between uncertainty (Fig.~\ref{fig:vi_surprises}{\color{mypurple}b}), even though the network is expressive enough to do so \citep{farquhar2020liberty, foong2020expressiveness}. In contrast, \textbf{\color{myorange} the partially stochastic network represents far more in-between uncertainty than the fully stochastic network} (Fig.~\ref{fig:vi_surprises}{\color{mypurple}a}), whilst also using 200 times fewer stochastic parameters. Further, both networks use \textit{the same} crude mean-field approximate posterior, showing that higher fidelity inference is not necessary for partially stochastic networks to improve performance.
\vspace{-10pt}
\subsection{UCI Regression with Hamiltonian Monte Carlo}
\label{sec:uci_regression}
\vspace{-5pt}
We next investigate the effect of increasing stochasticity under high-fidelity inference. That is, how does changing the number of stochastic parameters affect predictive performance? We thus use a small MLP and HMC inference on UCI regression datasets. Here, we consider partially stochastic networks with increasing numbers of stochastic parameters that are trained with two-stage HMC. That is, we first train a MAP network, and then form different stochastic networks by performing HMC over different subsets of parameters. We choose the stochastic subset by picking the weights and biases with the largest absolute values under the trained MAP solution. To understand the generalisation properties of these networks, we additionally consider the ``gap'' data splits from \citet{foong2019between}. To create these splits, we order the data by a chosen input feature, and use the central $10\%$ as the test set. In contrast, the standard splits are created by uniformly sampling the dataset.
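The subset-selection rule just described (keep the parameters with the largest MAP magnitudes stochastic) can be sketched as follows; this is our illustration, not the authors' code:

```python
import numpy as np

def stochastic_subset_by_magnitude(map_weights, n_stochastic):
    """Indices of the n_stochastic parameters with the largest absolute
    value under the trained MAP solution; the remaining parameters stay
    fixed at their MAP values during HMC."""
    flat = np.abs(np.asarray(map_weights).ravel())
    return np.argsort(flat)[::-1][:n_stochastic]
```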
We first consider how increasing stochasticity affects predictive performance on the standard splits (Fig.~\ref{fig:subset_hmc}). We find that increasing the number of sampled parameters first improves performance, but then \textbf{\color{myblue}the benefits of further increasing stochasticity plateau}.
Furthermore, on the gap datasets, we find that \textbf{\color{myorange}increasing stochasticity first improves and then degrades performance}. The underwhelming performance of high-fidelity inference with fully stochastic BNNs on out-of-distribution (OOD) data is reminiscent of observations by \citet{izmailov2021dangers}, who found that even MAP inference can outperform high-fidelity HMC on OOD data.
Together, these results demonstrate that partially stochastic networks can match and even outperform fully stochastic networks, \textit{even when we can perform high-fidelity inference}.
\subsection{Image Classification with Laplace Approximation}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{rethinking_fs_bnns_aistats_2023/figures/laplace_with_subnetwork_relative.pdf}
\vspace{-20pt}
\caption{\textbf{Image classification with the Laplace Approximation}. We compute the average negative log-likelihood on CIFAR-10 and CIFAR-10-C relative to the fully stochastic network. Results are averaged across corruptions and shown for different corruption intensities. Markers and lines show mean and std. over 10 seeds.}
\vspace{-10pt}
\label{fig:laplace_results}
\end{figure}
\label{sec:vision_laplace}
We now evaluate full and partial stochasticity in larger models. To do so, we consider Laplace Approximation networks on CIFAR-10 using a WideResNet-16-4. We use two-stage training, first training a MAP solution and then using post hoc Laplace approximations on subsets of model parameters. We primarily use KFAC covariance approximations \citep{ritter2018scalable}. We also consider using a full covariance approximation using the stochastic subset selection strategy proposed by \citet{daxberger2021bayesian}---selecting parameters with the largest posterior variance under a diagonal SWAG approximation. To evaluate the networks, we compute the holdout likelihood for various networks on the CIFAR-10 and CIFAR-10-C corrupted datasets.
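As a minimal sketch of a post hoc Laplace approximation restricted to a parameter subset, a diagonal (Fisher-style) variant can be written as below. This is a deliberate simplification: the experiments in this section use KFAC and full-covariance approximations rather than this diagonal one, and the function names are ours.

```python
import numpy as np

def diagonal_laplace_subset(grad_samples, stochastic_idx, prior_var):
    """Diagonal Laplace approximation over a chosen parameter subset.
    grad_samples: (n_batches, n_params) per-batch loss gradients at the MAP;
    their squared average is a Fisher-style estimate of the Hessian diagonal.
    Returns the posterior standard deviation of each stochastic parameter."""
    fisher_diag = np.mean(np.asarray(grad_samples) ** 2, axis=0)
    precision = fisher_diag[stochastic_idx] + 1.0 / prior_var
    return 1.0 / np.sqrt(precision)
```

Only the parameters in `stochastic_idx` receive a posterior; all others are kept at their MAP values, exactly as in the two-stage strategy of \S\ref{sec:experiments}.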
When comparing the relative performance between the fully stochastic network and a partially stochastic network where only the input and output layers are stochastic (Fig.~\ref{fig:laplace_results}), we find that \textbf{\color{myorange} the partially stochastic network slightly outperforms the fully stochastic network}.
This may be surprising since \textit{both networks use the same KFAC posterior approximation} over their stochastic parameters, but the partially stochastic network has 900 times fewer of them and predicts faster.\footnote{Although the partially stochastic network has a stochastic input layer, it is much faster than the fully stochastic network at prediction time because we use \textit{linearised} predictive distributions.}
Moreover, despite the additional costs of subnetwork selection, the increased expressivity of the posterior approximation family, and increased numbers of stochastic parameters, the `SWAG subnetwork stochastic' network actually \textit{underperforms} the stochastic input and output layer network.
\subsection{Image Classification with SWAG}
\label{sec:vision_swag}
We now investigate the effects of full and partial stochasticity under a different inference modality. We use SWA-Gaussian (SWAG, \cite{maddox2019simple}), which runs high learning rate stochastic gradient descent (SGD) starting from a set of pre-trained weights. The approximate posterior is formed by fitting a low-rank Gaussian to the SGD iterates. For the partially stochastic networks, we perform SGD only on the stochastic subset i.e., particular subsets of model parameters. We use the default hyperparameters from \citet{maddox2019simple} for SWAG with pre-trained weights, except that we tune the learning rate for each network separately. As before, we use a WideResNet-16-4 and evaluate the holdout likelihood on CIFAR-10 and CIFAR-10-C.
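A minimal sketch of the SWAG construction just described, following our reading of \citet{maddox2019simple} (mean plus diagonal plus low-rank covariance fitted to SGD iterates; function names ours):

```python
import numpy as np

def fit_swag(iterates, rank):
    """Fit a SWAG posterior from late SGD iterates.
    iterates: (n_iterates, n_params).
    Returns (mean, diagonal variance, deviation matrix D of shape (n_params, rank))."""
    iterates = np.asarray(iterates)
    mean = iterates.mean(axis=0)
    diag_var = np.clip(iterates.var(axis=0), 1e-12, None)
    D = (iterates - mean)[-rank:].T  # last `rank` deviations from the mean
    return mean, diag_var, D

def sample_swag(mean, diag_var, D, rng):
    """Draw one parameter sample: mean + diagonal noise + low-rank noise."""
    k = D.shape[1]
    z1 = rng.standard_normal(mean.shape)
    z2 = rng.standard_normal(k)
    return mean + np.sqrt(diag_var / 2) * z1 + D @ z2 / np.sqrt(2 * (k - 1))
```

For the partially stochastic variants, only the rows of `iterates` corresponding to the stochastic subset would vary; the deterministic parameters stay fixed.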
\begin{figure}
\centering
\includegraphics[width=\linewidth]{rethinking_fs_bnns_aistats_2023/figures/SWAG_subnetwork_relative.pdf}
\vspace{-20pt}
\caption{\textbf{Image classification with SWAG inference.}
We compute the average negative log-likelihood on CIFAR-10 and CIFAR-10-C relative to the fully stochastic network. Results are additionally averaged across corruptions, and shown for different corruption intensities. Markers and lines show mean and std. over 10 seeds.}
\vspace{-10pt}
\label{fig:swag_results}
\end{figure}
When comparing the relative performance across networks (Fig.~\ref{fig:swag_results}), we find that the fully stochastic network outperforms the deterministic network, particularly on large corruption intensities. We further find \textbf{\color{myorange}SWAG inference only over the input layer and the first ResNet block consistently outperforms the fully stochastic network}. Even though the fully stochastic network marginalises over more parameters, and thus presumably over more diverse functions, it surprisingly performs worse than the partially stochastic network, despite $11\times$ higher memory costs.
\subsection{Image Classification with Variational Inference}
\label{sec:vision_vi}
Finally, we investigate the effects of full and partial stochasticity on even larger networks. We apply MFVI on CIFAR-10 and CIFAR-100 with a Wide-ResNet-28-10,
using the reference implementation from~\citet{nado2021uncertainty}. We report the accuracy and negative log-likelihood.
To strengthen the comparison, we re-used the tuned hyper-parameters for the fully stochastic and deterministic networks from \citet{nado2021uncertainty}, but did not tune the hyper-parameters for the partially stochastic networks.
We find that the fully stochastic network performs worse than the deterministic network, despite using twice as many parameters. In contrast, even without tuned hyperparameters, \textbf{\color{myorange} the partially stochastic networks outperform the fully stochastic network}. The network with a stochastic input layer performs best in terms of accuracy, and the network where the last block and output layer are stochastic performs best in terms of NLL.
In particular, we emphasise the potential of stochastic input layers rather than the more commonly considered stochastic output layers.
In each case, the partially stochastic networks use only slightly more parameters than deterministic networks.
\begin{table}[]
\caption{\textbf{Partially and fully stochastic networks trained with mean-field variational inference.} We report the accuracy and average negative log-likelihood (NLL) on the CIFAR test set when performing subset VI and learning the remaining parameters by maximising the (penalised) ELBO. Results are averaged across 3 random seeds.}
\vspace{5pt}
\resizebox{\linewidth}{!}{\begin{tabular}{@{}lcccccc@{}}
\toprule
& \multicolumn{2}{c}{CIFAR10} & \multicolumn{2}{c}{CIFAR100} \\
Model & Acc (\%) & NLL & Acc (\%) & NLL\\ \midrule
Deterministic & 95.6 & 0.19 & 79.3 & 0.86 \\
{\color{myblue}Fully stochastic} & 94.7 & 0.21 & 77.7 & 0.94 \\
{\color{myorange}Input layer stochastic} & \textbf{95.7} & 0.19 & \textbf{79.5} & 0.86 \\
{\color{myorange} Output layer stochastic} & 95.6 & 0.19 & 78.9 & 0.93 \\
\begin{tabular}[l]{@{}l@{}}{\color{myorange}Output layer and}\\{\color{myorange}last block stochastic}\end{tabular} & 95.6 & \textbf{0.17} & 79.0 & \textbf{0.83}\\ \bottomrule
\end{tabular}}
\label{tab:vi_results_table}
\vspace{-10pt}
\end{table}
\section{Discussion}
\label{sec:discussion}
We questioned the prevalent assumption that full stochasticity is preferable to and more principled than partial stochasticity. We found full stochasticity is not needed for theoretical expressivity~(\S\ref{sec:theory}). Further, across four inference modalities, we did not find full stochasticity to yield consistent improvements in predictive performance~(\S\ref{sec:experiments}). In fact, there usually existed partially stochastic networks that outperformed their corresponding fully stochastic variants.
Altogether, our results call into question full stochasticity as the \textit{de facto} default model construction. We believe partially stochastic networks are a highly promising model class that are just as principled as fully stochastic networks.
Furthermore, our observations around inaccurate inference in large BNNs (\S\ref{sec:hmc_mixing}) support holistic viewpoints such as those of \citet{osband2021epistemic}, which set aside posterior inference and instead focus on learning useful predictive distributions.
| {
    "timestamp": "2022-11-14T02:13:43",
    "arxiv_id": "2211.06291",
    "url": "https://arxiv.org/abs/2211.06291",
    "title": "Do Bayesian Neural Networks Need To Be Fully Stochastic?",
    "subjects": "Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Machine Learning (stat.ML)",
    "abstract": "We investigate the benefit of treating all the parameters in a Bayesian neural network stochastically and find compelling theoretical and empirical evidence that this standard construction may be unnecessary. To this end, we prove that expressive predictive distributions require only small amounts of stochasticity. In particular, partially stochastic networks with only $n$ stochastic biases are universal probabilistic predictors for $n$-dimensional predictive problems. In empirical investigations, we find no systematic benefit of full stochasticity across four different inference modalities and eight datasets; partially stochastic networks can match and sometimes even outperform fully stochastic networks, despite their reduced memory costs."
} |
https://arxiv.org/abs/1902.10901 | Robust and Local Optimal A Priori Error Estimates for Interface Problems with Low Regularity: Mixed Finite Element Approximations | For elliptic interface problems in two- and three-dimensions with a possible very low regularity, this paper establishes a priori error estimates for the Raviart-Thomas and Brezzi-Douglas-Marini mixed finite element approximations. These estimates are robust with respect to the diffusion coefficient and optimal with respect to the local regularity of the solution. Several versions of the robust best approximations of the flux and the potential approximations are obtained. These robust and local optimal a priori estimates provide guidance for constructing robust a posteriori error estimates and adaptive methods for the mixed approximations. |
\section{Introduction}\label{intro}
\setcounter{equation}{0}
As a prototype of problems with interface singularities, this paper studies {\em a priori} error estimates of mixed finite element methods for the following interface problem (i.e., the diffusion problem with discontinuous coefficients):
\begin{equation}\label{scalar}
-\nabla\cdot \,(\alpha(x)\nabla\, u) = f
\quad \mbox{in} \,\,\Omega
\end{equation}
with homogeneous Dirichlet boundary conditions (for simplicity)
\begin{equation}\label{bc1}
u = 0 \quad \mbox{on } \partial \O,
\end{equation}
where $\Omega$ is a bounded polygonal domain in $\rm I\kern-.19emR^d$ with $d=2$ or $3$; $f \in L^{2}(\O)$ is a given function; and diffusion coefficient $\alpha(x)$ is positive and piecewise constant with possible large jumps across subdomain boundaries (interfaces):
\[
\alpha(x)=\alpha_i > 0\quad\mbox{in }\,\O_i
\quad\mbox{for }\, i=1,\,...,\,n.
\]
Here, $\{\Omega_i\}_{i=1}^n$ is a partition of the domain $\O$ with $\O_i$ being an open polygonal domain.
It is well known that the solution $u$ of problem (\ref{scalar}) belongs to $H^{1+s}(\O)$ with a possibly very small $s> 0$; see, for example, Kellogg \cite{Kel:75}. However, even when the global regularity is low, for a given finite element mesh the singularity, or the set of elements on which the solution has a large gradient, often appears only near some points or along a curve. Thus it is inadvisable to base the a priori error estimate on the global regularity and a global uniform mesh size.
In \cite{CHZ:17}, we introduced the idea of robust and local optimal a priori error estimates. Robustness means that the generic constants appearing in the estimates are independent of the parameters of the equation, the coefficient $\alpha$ in our case. Local optimality means that the upper bound in the error estimate is optimal with respect to the regularity of each element and the local mesh sizes, instead of using a global uniform mesh size and a global regularity.
The local optimal and robust a priori error estimate is very important for adaptive mesh refinement algorithms. Since all mesh refinement algorithms are based on the so-called ``error equi-distribution'' principle \cite{NoVe:12}, that is, each element carries an almost equal share of the error measured in an appropriate norm, we need to show via a priori error estimates that this is possible. In some sense, if we have a known exact solution $u$, so that the a priori error bound can be computed exactly, we should be able to find an optimal mesh with a fixed number of degrees of freedom on which each element has a very similar share of the error. Also, in robust a posteriori error analysis, we always seek an equivalence between some intrinsic norm of the error and a computable error estimator, the so-called reliability and efficiency bounds. When constructing the error estimator, it is essential to realize that the best an adaptive numerical method can achieve is restricted by the robust local a priori estimates on each element. This is especially important for mixed methods, since there are two unknowns, the flux and the potential, and there are various post-processing methods; it is important to identify the right quantity and norm to estimate in the a posteriori error analysis.
The proof of a local optimal and robust a priori error estimate often contains two parts: one is the {\bf robust best approximation} result (a C\'ea-lemma-type result), which is of independent interest; the other is the {\bf robust local approximation property of the interpolation operator}.
Before we discuss the robust best approximation and robust local interpolation results for the mixed approximations, we first discuss the corresponding results for the conforming, Crouzeix-Raviart nonconforming, and discontinuous Galerkin approximations of the interface problem.
For the interface problem (\ref{scalar}), the robust best approximation property is well known and is almost trivial for the $H^1$ conforming approximation:
$$
\|\alpha^{1/2}\nabla (u-u_k^c)\|_0 \leq \inf_{v_k^c \in V_k^c}\|\alpha^{1/2}\nabla (u-v_k^c)\|_0,
$$
where $V_k^c$ is the $k$-th degree $H^1_0$-conforming finite element space, and $u_k^c$ is the corresponding $H^1$ conforming approximation.
On the other hand, the proofs of the robust best approximation property for the Crouzeix-Raviart nonconforming and discontinuous Galerkin methods are not easy. In \cite{CHZ:17}, for the Crouzeix-Raviart nonconforming element approximation, we showed the robust best approximation property (with the constant $C$ independent of $\alpha$ and the mesh size):
$$
\|\alpha^{1/2}\nabla_h (u-u_1^{nc})\|_0 \leq C\left( \inf_{v_1^{nc} \in V_1^{nc}}\|\alpha^{1/2}\nabla_h (u-v_1^{nc})\|_0 +{\mbox{osc}}_{\alpha,nc} \right),
$$
where $V_1^{nc}$ is the Crouzeix-Raviart nonconforming finite element space, $u_1^{nc}$ is the corresponding nonconforming approximation, and ${\mbox{osc}}_{\alpha,nc}$ is a robust oscillation term. Also in \cite{CHZ:17}, for the discontinuous Galerkin approximation, we showed the robust best approximation property (with the constant $C$ independent of $\alpha$ and the mesh size):
$$
|\!|\!| u-u_k^{dg}|\!|\!|_{dg} \leq C\left( \inf_{v_k^{dg} \in D_k}|\!|\!| u-v_k^{dg}|\!|\!|_{dg} +{\mbox{osc}}_{\alpha,dg} \right),
$$
where $D_k$ is the $k$-th degree discontinuous finite element space, and $u_k^{dg}$ is the corresponding discontinuous Galerkin approximation, $|\!|\!| \cdot|\!|\!|_{dg}$ is the $\alpha$-weighted $H^1$ discontinuous Galerkin norm, and ${\mbox{osc}}_{\alpha,dg}$ is a robust oscillation term.
The local approximation properties of the interpolation operators for the DG and Crouzeix-Raviart spaces are easy to show. For the conforming finite element approximation, there are two types of local interpolations: nodal interpolations, which require high regularity of the solution, and the Scott-Zhang or Cl\'ement interpolations, whose regularity requirements are very low. The nodal interpolation is completely local on each element, but it needs very high regularity to exist, especially in three dimensions. The Scott-Zhang/Cl\'ement interpolations are defined on a local patch, so their local robustness depends on an unrealistic assumption, the quasi-monotonicity assumption; see \cite{DrSaWi:96, BeVe:00, CaZh:09, CHZ:17}. Thus, the existence of a robust local optimal result for the conforming finite element approximation of the low-regularity interface problem is still open.
For the mixed methods, we have two unknowns: the flux $\mbox{\boldmath$\sigma$}$ and the potential $u$. For the potential $u$, a discontinuous finite element approximation is used, so the robust local interpolation property is obvious. We use Raviart-Thomas or Brezzi-Douglas-Marini elements to approximate the flux variable; for these, a robust local interpolation property can be proved by the averaged Taylor series technique developed in \cite{DuSc:80}. This leaves, as the main task in proving the robust local optimal error estimates, the proof of the robust best approximation properties of the mixed methods. Unlike the conforming, nonconforming, or DG methods, we have several choices of norms and approximation spaces.
Our first robust best approximation property is simple: the weighted $L^2$-norm of the flux error in the equilibrated discrete spaces; see Theorems 3.2 and 3.3.
For the potential $u$, the $L^2$ norm is used in the standard analysis of the mixed method. It turns out to be difficult to obtain a robust inf-sup condition with the weighted $L^2$ norm for the discrete approximation $u_h$ and a modified $H({\rm div})$ norm. Thus, we use $\alpha$- and mesh-dependent norms for the robust analysis. The norm chosen for $u_h$ is similar to the standard discontinuous Galerkin norm, that is, a weighted discrete $H^1$ norm. With this $\alpha$- and mesh-dependent norm analysis, we show a robust best approximation result for the potential approximation in the $\alpha$-dependent discrete $H^1$ norm. But since the approximation space for the potential $u$ is not rich enough, the order of approximation of $u$ in the $\alpha$-dependent discrete $H^1$ norm is one or two orders lower than that of the flux approximation. This order discrepancy suggests that we should not try to robustly estimate the $\alpha$-weighted discrete $H^1$ norm of the potential approximation in the a posteriori error analysis, as stated in the earlier discussion by Kim \cite{Kim:07}.
For the flux approximation, with the help of the $\alpha$- and mesh-dependent analysis, we show for the first time a robust best approximation result in the non-equilibrated RT/BDM space with an $\alpha$- and $h$-weighted $H({\rm div})$ norm. The corresponding robust and local a priori error estimates are also given without loss of order, even for the BDM approximations.
Finally, since the discrete $H^1$ norm of the potential approximation $u_h$ is often of a lower order than the corresponding flux approximation, we use Stenberg's post-processing to recover a new approximation with a compatible polynomial degree. We show that the robust local best approximation result holds for the recovered potential approximation, and a robust local a priori error estimate of the same order as the flux approximation is obtained. We also prove a new trace inequality for the normal trace. We further point out that any recovery or post-processing should be based on the flux approximation, since it is more accurate.
Many a priori estimates for mixed methods are available. The standard analysis can be found in the books and papers \cite{DR:82, BBF:13, RT:91, Ga:14}. In these analyses, $L^2$ or $H({\rm div})$ norms are used for the flux approximation and the $L^2$ norm is used for the potential approximation; no robust analysis is discussed. The mesh-dependent norm analysis can be found in \cite{BrVe:96,LS:06}, again without robust analysis. In \cite{Voh:07,Voh:10,Kim:07}, many a priori and a posteriori error results are presented for the mixed methods, some robust and some non-robust. Robust and local optimal estimates have not previously been discussed for mixed methods.
The paper is organized as follows. Section 2 describes the mixed finite element methods for the model problem. Various robust best approximation results and robust and local a priori error estimates are presented in Section 3, including the robust best approximation results for the flux in the weighted $L^2$ norm in the discrete equilibrated space and in the weighted $H({\rm div})$ norm in the whole mixed approximation space, and the robust best approximation result for the potential in the weighted discrete $H^1$ norm. In Section 4, we discuss Stenberg's post-processing and show its robust and local optimal a priori error estimates on each element. In the final section, we make some concluding remarks.
\section{Mixed Finite Element Methods}
Introducing the flux
\[
\mbox{\boldmath$\sigma$} = -\alpha(x)\nabla u,
\]
the mixed variational formulation for the problem in (\ref{scalar}) and (\ref{bc1}) is to find
$(\mbox{\boldmath$\sigma$},\,u)\in H({\rm div};\O)\times L^2(\O)$ such that
\begin{equation}\label{mixed}
\left\{\begin{array}{lclll}
(\alpha^{-1}\mbox{\boldmath$\sigma$},\,\mbox{\boldmath$\tau$})-(\nab\cdot \mbox{\boldmath$\tau$},\, u)&=&0 \quad & \forall\,\, \mbox{\boldmath$\tau$} \in H({\rm div};\O),\\[2mm]
(\nab\cdot \mbox{\boldmath$\sigma$}, \,v) &=& (f,\,v)&\forall \,\, v\in L^2(\O).
\end{array}\right.
\end{equation}
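For completeness, the first equation of (\ref{mixed}) follows from $\alpha^{-1}\mbox{\boldmath$\sigma$} = -\nabla u$ by multiplying with a test function $\mbox{\boldmath$\tau$}\in H({\rm div};\O)$ and integrating by parts, using the boundary condition (\ref{bc1}) to drop the boundary term:

```latex
(\alpha^{-1}\mbox{\boldmath$\sigma$},\,\mbox{\boldmath$\tau$})
  = -(\nabla u,\,\mbox{\boldmath$\tau$})
  = (u,\,\nab\cdot\mbox{\boldmath$\tau$})
    - \langle u,\,\mbox{\boldmath$\tau$}\cdot{\bf n}\rangle_{\partial\O}
  = (\nab\cdot\mbox{\boldmath$\tau$},\,u)
  \qquad \forall\,\mbox{\boldmath$\tau$}\in H({\rm div};\O).
```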
Let ${\cal T}=\{K\}$ be a regular triangulation of the domain
$\Omega$ (see, e.g., \cite{Cia:78, BrSc:08}).
Denote by $h_K$ the diameter of the element $K$. Assume that
interfaces $\{\partial\O_i\cap\partial\O_j\,:\, i,j=1,\,...,\,n\}$ do not cut
through any element $K\in{\cal T}$. For any element $K\in{\cal T}$,
denote by $P_k(K)$ the space of polynomials on $K$ with total degree less than or equal to $k$.
Define the discontinuous piecewise polynomial space of degree $k$ by
$$
D_k = \{ v \in L^2(\O)\, :\, v|_K \in P_k(K) \; \forall\, K\in{\cal T}\}.
$$
Define the $H({\rm div})$ conforming Raviart-Thomas (RT) finite element space
and Brezzi-Douglas-Marini (BDM) finite element space of order $k$ by
$$
RT_k = \{ \mbox{\boldmath$\tau$} \in H({\rm div};\O)\, :\, \mbox{\boldmath$\tau$}|_K \in P_k(K)^d + {\bf x} P_k(K) \; \forall\, K\in{\cal T}\}.
$$
and
$$
BDM_k = \{ \mbox{\boldmath$\tau$} \in H({\rm div};\O)\, :\, \mbox{\boldmath$\tau$}|_K \in P_k(K)^d \; \forall\, K\in{\cal T}\}.
$$
For mixed problems, $RT_k\times D_k$ and $BDM_{k+1}\times D_k$ are stable pairs. Thus, we use the notation $\Sigma_k$ to denote $RT_k$ or $BDM_{k+1}$.
The mixed finite element approximation is to find $(\mbox{\boldmath$\sigma$}_h,\,u_h) \in \Sigma_k \times D_k$ such that
\begin{equation}\label{problem_mixed}
\left\{\begin{array}{lclll}
(\alpha^{-1}\mbox{\boldmath$\sigma$}_h,\,\mbox{\boldmath$\tau$}_h)-(\nab\cdot \mbox{\boldmath$\tau$}_h,\, u_h)&=&0
\quad & \forall\,\, \mbox{\boldmath$\tau$}_h \in \Sigma_k,\\[2mm]
(\nab\cdot \mbox{\boldmath$\sigma$}_h,\, v_h) &=& (f,\,v_h)&\forall \,\, v_h\in D_k.
\end{array}\right.
\end{equation}
Taking the difference between (\ref{mixed}) and (\ref{problem_mixed}) yields the following error equations:
\begin{equation}\label{erroreq_mixed}
\left\{\begin{array}{lclll}
(\alpha^{-1}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h),\,\mbox{\boldmath$\tau$}_h)-(\nab\cdot \mbox{\boldmath$\tau$}_h,\, u-u_h)&=&0
\quad & \forall\,\, \mbox{\boldmath$\tau$}_h \in \Sigma_k,\\[2mm]
(\nab\cdot (\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h),\, v_h) &=& 0&\forall \,\, v_h\in D_k.
\end{array}\right.
\end{equation}
\section{Robust and Local Optimal A Priori Error Estimates}
\setcounter{equation}{0}
\subsection{Mixed finite element interpolations and approximation properties}
For a fixed $r>0$, denote by $I^{rt,k}_{h}: H({\rm div};\,\Omega) \cap [H^r(\O)]^d \mapsto RT_k$ the standard $RT$ interpolation operator and $I^{bdm,k}_{h}: H({\rm div};\,\Omega) \cap [H^r(\O)]^d \mapsto BDM_k$ the standard $BDM$ interpolation operator. We have the following local approximation property: for $\mbox{\boldmath$\tau$} \in H^{s_K}(K)$, $s_K >0$,
\begin{eqnarray} \label{rti}
\|\mbox{\boldmath$\tau$} - I^{\Sigma,k}_{h} \mbox{\boldmath$\tau$}\|_{0,K}
&\leq& C h_K^{\min\{k+1,s_K\}} |\mbox{\boldmath$\tau$}|_{\min\{k+1,s_K\},K} \quad\forall\,\, K\in {\cal T}
\end{eqnarray}
with $I^{\Sigma,k}_{h} = I^{rt,k}_{h}$ or $I^{bdm,k}_{h}$. The estimate in (\ref{rti}) is standard for $s_K\geq 1$; for $0<s_K<1$ it can be proved by the averaged Taylor series developed in \cite{DuSc:80} combined with the standard reference element technique and the Piola transformation. We also note that the interpolation operators and their approximation properties are completely local.
Denote by $Q^k_{h}: L^2 (\O) \mapsto D_k$ the $L^2$-projection onto $D_k$. The following commutativity property is well-known:
\begin{eqnarray}\label{comm}
\nabla\cdot (I^{rt,k}_{h}\,\mbox{\boldmath$\tau$})&=&Q^k_{h}\,\nabla\cdot\mbox{\boldmath$\tau$} \qquad
\quad\forall\,\,\mbox{\boldmath$\tau$}\in H({\rm div};\,\Omega) \cap H^r(\O)^d \,\mbox{ with }\, r>0, \\[2mm]
\label{comm_bdm}
\nabla\cdot (I^{bdm,k}_{h}\,\mbox{\boldmath$\tau$})&=&Q^{k-1}_{h}\,\nabla\cdot\mbox{\boldmath$\tau$} \qquad
\quad\forall\,\,\mbox{\boldmath$\tau$}\in H({\rm div};\,\Omega) \cap H^r(\O)^d \,\mbox{ with }\, r>0.
\end{eqnarray}
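As a quick numerical illustration of the commutativity property (not a substitute for the proof; the test function $\mbox{\boldmath$\tau$}=\sin$ and the mesh are arbitrary choices), the following one-dimensional sketch checks that taking the divergence of the $RT_0$ interpolant and projecting the divergence give the same piecewise constant:

```python
import numpy as np

# 1D sanity check of the commutativity property: in 1D the RT_0
# interpolant I_h tau is the piecewise linear nodal interpolant (the
# normal components on faces are point values at nodes), so d/dx(I_h tau)
# is constant per cell, while Q_h^0(tau') is the cell average of tau'.
tau, dtau = np.sin, np.cos
n = 10
x = np.linspace(0.0, 1.0, n + 1)
h = np.diff(x)

# Derivative of the RT_0 (nodal) interpolant on each cell.
d_interp = np.diff(tau(x)) / h

# L^2 projection of tau' onto piecewise constants = cell averages,
# computed with 5-point Gauss-Legendre quadrature.
g, w = np.polynomial.legendre.leggauss(5)
mid, half = (x[:-1] + x[1:]) / 2.0, h / 2.0
proj = np.array([(w * dtau(m + r * g)).sum() * r / hh
                 for m, r, hh in zip(mid, half, h)])

print(np.max(np.abs(d_interp - proj)))  # agrees to quadrature precision
```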
\begin{rem}
The requirement $r>0$ in $H({\rm div};\,\Omega) \cap [H^r(\O)]^d$ is to make sure that the mixed interpolations are well defined. Another choice is $\{\mbox{\boldmath$\tau$}\in L^p(\O)^d \,:\, \nabla\cdot \mbox{\boldmath$\tau$} \in L^2(\O)\}$ for $p>2$, or $W^{1,t}(K)$ for $t>2d/(d+2)$, as in \cite{BBF:13}. We use the Hilbert-space-based choice since it is more suitable for our analysis.
\end{rem}
\subsection{Robust best approximation in the discrete equilibrated space for the flux}
Define the discrete equilibrated space
$$
\Sigma_k^f = \{\mbox{\boldmath$\tau$}_h \in \Sigma_k : \nabla\cdot \mbox{\boldmath$\tau$}_h =Q^k_{h} f\}.
$$
Note that $\Sigma_k^f = RT_k^f = \{\mbox{\boldmath$\tau$}_h \in RT_k : \nabla\cdot \mbox{\boldmath$\tau$}_h =Q^k_{h} f\}$ for the RT case and $\Sigma_k^f = BDM_{k+1}^f= \{\mbox{\boldmath$\tau$}_h \in BDM_{k+1} : \nabla\cdot \mbox{\boldmath$\tau$}_h =Q^k_{h} f\}$ for the BDM case.
The following theorem is almost standard in the mixed finite element analysis.
\begin{thm}\label{apriori_mixed} (Robust best approximation in the discrete equilibrated space)
Let $(\mbox{\boldmath$\sigma$}, u)$ and $(\mbox{\boldmath$\sigma$}_h,\,u_h) \in \Sigma_k \times D_k$ be the solutions of {\em (\ref{mixed})} and {\em (\ref{problem_mixed})}, respectively. Then the following robust best approximation result holds:
\begin{equation}\label{rba_equ}
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0,\O} \leq \inf_{\mbox{\boldmath$\tau$}_h^f \in \Sigma_k^f} \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,\O}.
\end{equation}
\end{thm}
\begin{proof}
To establish (\ref{rba_equ}), denote by
\[
{\bf E} = \mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h
\quad\mbox{and}\quad
e = u- u_h
\]
the respective errors of the flux and the solution.
Now, let $\mbox{\boldmath$\tau$}_h^f$ be an arbitrary function in $\Sigma_k^f$. Then it follows from the first equation in (\ref{erroreq_mixed}), the fact $\mbox{\boldmath$\sigma$}_h \in \Sigma_k^f$, and the Cauchy-Schwarz inequality that
\begin{eqnarray*}
\|\alpha^{-1/2}{\bf E}\|_{0,\O}^2
&= & (\alpha^{-1}{\bf E},\, \mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f) + (\alpha^{-1}{\bf E},\, \mbox{\boldmath$\tau$}_h^f -\mbox{\boldmath$\sigma$}_h)\\
&=&(\alpha^{-1}{\bf E},\, \mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f) + (\nab\cdot (\mbox{\boldmath$\tau$}_h^f-\mbox{\boldmath$\sigma$}_h),\,e)\\
&=&(\alpha^{-1}{\bf E},\, \mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)
\leq \|\alpha^{-1/2}{\bf E}\|_{0,\O}\,\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,\O},
\end{eqnarray*}
which implies the result of the theorem.
\end{proof}
\begin{thm}\label{apriori_mixed2} (Robust local a priori error estimates)
Let $(\mbox{\boldmath$\sigma$}, u)$ and $(\mbox{\boldmath$\sigma$}_h,\,u_h) \in \Sigma_k \times D_k$ $(k\geq 0)$ be the solutions of {\em (\ref{mixed})} and {\em (\ref{problem_mixed})}, respectively. Assume that $u\in H^{1+r}(\O)$ with some $r>0$ and that $u|_K\in H^{1+s_K}(K)$ with an elementwise defined regularity $s_K>0$ for all $K\in{\cal T}$. Then there exists a constant $C>0$, independent of $\alpha$ and $h$ in both two and three dimensions, such that
\begin{eqnarray}\label{err-bound-L2RT}
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0} &\leq& C \sum_{K\in{\cal T}} h_K^{\min\{k+1,s_K\}} |\alpha^{1/2}\nabla u|_{\min\{k+1,s_K\},K}, \quad RT_k \mbox{ case} ,
\\[2mm] \label{err-bound-L2BDM}
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0} &\leq& C \sum_{K\in{\cal T}} h_K^{\min\{k+2,s_K\}} |\alpha^{1/2}\nabla u|_{\min\{k+2,s_K\},K}, \quad BDM_{k+1} \mbox{ case}.
\end{eqnarray}
\end{thm}
\begin{proof}
For the $RT_k \times D_k$ case, the commutativity property in (\ref{comm}) and the second equations in (\ref{mixed}) and (\ref{problem_mixed}) lead to
$$
\nabla\cdot (I_h^{rt,k}\mbox{\boldmath$\sigma$}) = Q^k_{h}\,\nabla\cdot\mbox{\boldmath$\sigma$} = Q^k_{h} f = \nab\cdot
\mbox{\boldmath$\sigma$}_h.
$$
Thus, the result is a direct consequence of the best approximation property in (\ref{rba_equ}) and the local approximation property in (\ref{rti}) by choosing $\mbox{\boldmath$\tau$}_h^f = I_h^{rt,k}\mbox{\boldmath$\sigma$} \in RT_k^f$.
Using the same argument, we obtain the result for the $BDM_{k+1} \times D_k$ case.
\end{proof}
\begin{rem}
For elements with low regularity $0<s_K<1$, $RT_0$ is sufficient; there is no need to use BDM or higher-order RT approximations.
\end{rem}
\begin{rem}
For the case where the diffusion coefficient on each element $K\in {\cal T}$ is a full symmetric positive definite constant matrix $A|_K$ instead of a scalar constant $\alpha_K$, it is clear from the proofs that the above robust best approximation result still holds:
$$
\|A^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0,\O} \leq \inf_{\mbox{\boldmath$\tau$}_h^f \in \Sigma_k^f} \|A^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,\O}.
$$
In each element $K\in{\cal T}$, for ${\bf q} \in P_k(K)^d$, $A^{-1/2} {\bf q}$ is also in $P_k(K)^d$, and thus $A^{-1/2}I_h^{\Sigma,k} {\bf q} = A^{-1/2}{\bf q}$. Hence, for a piecewise constant symmetric positive definite matrix $A$, we have
$$
\|A^{-1/2}(\mbox{\boldmath$\tau$} - I^{\Sigma,k}_{h} \mbox{\boldmath$\tau$})\|_{0,K}
\leq C h_K^{\min\{k+1,s_K\}} |A^{-1/2}\mbox{\boldmath$\tau$}|_{\min\{k+1,s_K\},K} \quad\forall\,\, K\in {\cal T}.
$$
We then have the robust local a priori error estimates:
\begin{eqnarray*}
\|A^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0,\O} &\leq& C \sum_{K\in{\cal T}} h_K^{\min\{k+1,s_K\}} |A^{1/2}\nabla u|_{\min\{k+1,s_K\},K}, \quad RT_k \mbox{ case},
\\[2mm]
\|A^{-1/2}(\mbox{\boldmath$\sigma$} -\mbox{\boldmath$\sigma$}_h)\|_{0,\O} &\leq& C\sum_{K\in{\cal T}} h_K^{\min\{k+2,s_K\}} |A^{1/2}\nabla u|_{\min\{k+2,s_K\},K}, \quad BDM_{k+1} \mbox{ case}.
\end{eqnarray*}
The corresponding results for discontinuous Galerkin methods are not proved here, since the robustness of the DG method for the diffusion problem depends on the right choice of the weights in the averages and of the penalty coefficients. For the full tensor case, the right weight is unclear and probably does not exist for a general matrix $A$; see \cite{CHZ:17}. For conforming finite element approximations, due to the lack of nodal interpolation in the low-regularity case, such robust and locally optimal estimates are not available. For averaging operators like the Scott-Zhang or Cl\'ement interpolations, robustness with respect to the full tensor $A$ is also impossible, since even the well-known quasi-monotonicity assumption is not meaningful in this case. For the Crouzeix-Raviart nonconforming finite element approximation, it may be possible to obtain a similar result by using the relation between the $RT_0$ and Crouzeix-Raviart elements.
\end{rem}
\subsection{Mesh-dependent norm analysis}
In this subsection, we use mesh-dependent norm analysis to derive the robust best approximation properties for the flux and the potential in appropriate norms. Earlier analysis on the mixed methods using mesh-dependent norms can be found in Babu\v{s}ka, Osborn, and Pitk\"{a}ranta \cite{BaOsPi:80}, Braess and Verf\"{u}rth \cite{BrVe:96}, and \cite{CaZh:12}. In the mesh-dependent analysis, we need to restrict ourselves to the scalar case.
First, we discuss the averages of the coefficient on an edge/face $F\in {\cal E}$. For $F = \partial K_F^{+} \cap \partial K_F^{-}\in {\cal E}_{I}$, denote by $\alpha^+_{F}$ and $\alpha^-_{F}$ the restrictions of $\alpha$ to $K_F^{+}$ and $K_F^{-}$, respectively. Denote the harmonic average of $\alpha$ on $F \in {\cal E}$ by
\[
\alpha_{F,H} = \left\{\begin{array}{cl}
\displaystyle\frac{\alpha_F^+ \alpha_F^- }{\alpha_F^+ + \alpha_F^-},&\quad F \in {\cal E}_{I},\\[4mm]
\alpha_F^- &\quad F \in {\cal E}_{{_D}}\cup{\cal E}_{{_N}},
\end{array}\right.
\]
which is equivalent to the minimum of $\alpha$:
\begin{equation}\label{a-h}
\displaystyle\frac{1}{2}\min\{\alpha_F^+, \alpha_F^- \}\leq \alpha_{F,H} \leq \min\{\alpha_F^+, \alpha_F^- \} .
\end{equation}
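The equivalence (\ref{a-h}) is elementary but central to robustness; the following sketch (purely illustrative) confirms it numerically for coefficient jumps over sixteen decades:

```python
import numpy as np

# Numerical illustration of (a-h): the harmonic average
# a_h = a_p * a_m / (a_p + a_m) is equivalent to min(a_p, a_m)
# with constants 1/2 and 1, uniformly in the size of the jump.
rng = np.random.default_rng(0)
a_p = 10.0 ** rng.uniform(-8, 8, 10000)  # jumps over 16 decades
a_m = 10.0 ** rng.uniform(-8, 8, 10000)
a_h = a_p * a_m / (a_p + a_m)
a_min = np.minimum(a_p, a_m)
ok = (np.all(a_h <= a_min * (1 + 1e-12))      # a_h <= min
      and np.all(a_h >= 0.5 * a_min * (1 - 1e-12)))  # a_h >= min/2
print(bool(ok))  # True
```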
\begin{lem} The bilinear form $(\nabla\cdot \mbox{\boldmath$\tau$}, v)$, for $\mbox{\boldmath$\tau$}\in H({\rm div};\O)$ with a well-defined normal trace on each face and for piecewise $H^1$ (with respect to ${\cal T}$) functions $v$,
has the following representation:
\begin{equation} \label{rep}
(\nabla\cdot\mbox{\boldmath$\tau$}, v)
= -\sum_{K\in{\cal T}} (\nabla v,\mbox{\boldmath$\tau$})_{K}
+ \sum_{F\in {\cal E}_{I}} (\mbox{\boldmath$\tau$}\cdot{\bf n}, \jump{v})_F
+ \sum_{F\in {\cal E}_D} (\mbox{\boldmath$\tau$}\cdot{\bf n}, v)_F
\end{equation}
\end{lem}
\begin{proof} The representation (\ref{rep}) is a consequence of integration by parts.
\end{proof}
Define $(\alpha,\,h)$-dependent norms on ${\cal T}$ by
\begin{eqnarray*}
&& \| \mbox{\boldmath$\tau$} \|_{\alpha,h}^2 :=\|\alpha^{-1/2} \mbox{\boldmath$\tau$}\|_{0}^2
+ \displaystyle\sum_{F\in {\cal E} }\frac{h_F}{\alpha_{F,H}} \|\mbox{\boldmath$\tau$} \cdot {\bf n}\|_{0,F}^2,
\quad \forall
\mbox{\boldmath$\tau$} \in \Sigma_k
\\[2mm]
\mbox{and }\,\,&&
|\!|\!| v|\!|\!|_{\alpha, h}^2
=\|\alpha^{1/2} \nabla_h v\|_{0,{\cal T}}^2
+ \displaystyle\sum_{F\in {\cal E}_{I}} \displaystyle\frac{\alpha_{F,H}}{h_F} \|\jump{v}\|_{0,F}^2
+ \displaystyle\sum_{F\in {\cal E}_{D}} \displaystyle\frac{\alpha_{F}}{h_F} \|v\|_{0,F}^2,
\quad \forall v \in D_k.
\end{eqnarray*}
Note that the $|\!|\!|\cdot|\!|\!|_{\alpha,h}$ norm is the standard $\alpha$-weighted DG norm used in the discontinuous Galerkin methods, see \cite{CHZ:17}. For a $v\in H_0^1(\O)$, $|\!|\!| v|\!|\!|_{\alpha, h} = \|\alpha^{1/2}\nabla v\|_{0,\O}$.
\begin{lem} \label{lem_hnorm}
For all $\mbox{\boldmath$\tau$} \in \Sigma_k(K)$, there exists a positive constant $C>0$ independent of $\alpha$ and $h$, such that
\[
\displaystyle\sum_{F\in {\cal E}_K}\frac{h_F}{\alpha_K} \|\mbox{\boldmath$\tau$} \cdot {\bf n}\|_{0,F}^2 \leq C \|\alpha^{-1/2}\mbox{\boldmath$\tau$}\|_{0,K}^2.
\]
\end{lem}
\begin{proof}
The lemma is a simple consequence of the standard scaling argument and the fact that both $RT_k(K)$ and $BDM_{k+1}(K)$ are finite dimensional.
\end{proof}
\begin{thm} \label{thm_hnorm}
The following norm equivalence holds with $C>0$ independent of $\alpha$ and $h$:
\begin{equation} \label{norm_equ}
\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0 \leq \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}
\leq C \|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0, \quad \forall \mbox{\boldmath$\tau$}_h \in \Sigma_k.
\end{equation}
\end{thm}
\begin{proof}
Since for the harmonic average $\alpha_{F,H}$, we have $1/\alpha_{F,H} = 1/\alpha_{F}^+ +1/\alpha_{F}^-$, by Lemma \ref{lem_hnorm}, we immediately get the robust discrete norm equivalence.
\end{proof}
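As a sanity check of the norm equivalence (\ref{norm_equ}) in a one-dimensional analogue (purely illustrative; the mesh sizes, coefficient ranges, and the bound $6$ below are our own choices), the following sketch computes the ratio $\|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}/\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0$ for random continuous piecewise linear fluxes and piecewise constant coefficients with large jumps:

```python
import numpy as np

# 1D check of the norm equivalence: for continuous piecewise linears tau_h
# and piecewise constant alpha with huge jumps, the ratio
# ||tau_h||_{alpha,h} / ||alpha^{-1/2} tau_h||_0
# stays in a fixed band independent of alpha and h.
rng = np.random.default_rng(1)
ratios = []
for n in (8, 64, 512):
    h = 1.0 / n
    for _ in range(50):
        alpha = 10.0 ** rng.uniform(-8, 8, n)   # per-element coefficient
        t = rng.standard_normal(n + 1)          # nodal values of tau_h
        p, q = t[:-1], t[1:]
        # weighted L^2 norm: integral of a linear squared is h(p^2+pq+q^2)/3
        l2w = np.sum(h * (p * p + p * q + q * q) / (3.0 * alpha))
        # harmonic averages on interior faces, one-sided on boundary faces
        a_hm = alpha[:-1] * alpha[1:] / (alpha[:-1] + alpha[1:])
        faces = np.sum(h * t[1:-1] ** 2 / a_hm)
        faces += h * t[0] ** 2 / alpha[0] + h * t[-1] ** 2 / alpha[-1]
        ratios.append(np.sqrt((l2w + faces) / l2w))
print(min(ratios) >= 1.0 and max(ratios) <= 6.0)  # True
```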
For $\mbox{\boldmath$\tau$} \in H({\rm div};\O)$, define the following $\alpha$ and $h$ dependent norm:
\begin{equation}
\|\mbox{\boldmath$\tau$}\|_{\alpha,h,H({\rm div})}:= \left(
\|\alpha^{-1/2} \mbox{\boldmath$\tau$}\|_0^2 + \sum_{K\in{\cal T}}h_K^2\|\alpha^{-1/2}\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}^2
\right)^{1/2}.
\end{equation}
We also use $\|\mbox{\boldmath$\tau$}\|_{\alpha,h,H({\rm div}),K}$ to denote the norm on a single element $K$.
The following trace inequality can be found in Lemma 2.4 and Remark 2.5 of \cite{CHZ:17}.
\begin{lem} Let $F$ be an edge/face of $K\in{\cal T}$ and ${\bf n}_F$ the unit vector normal to $F$. Assume that $\mbox{\boldmath$\tau$}$ is a given function in $H({\rm div};K)\cap [H^r(K)]^d$, $r>0$. Then for any $w_h\in P_k(K)$, we have
\begin{eqnarray}\label{tracecombined}
(\mbox{\boldmath$\tau$}\cdot{\bf n}, w_h)_F
&\leq & C\, h_F^{-1/2}\|w_h\|_{0,F}
\left(\|\mbox{\boldmath$\tau$}\|_{0,K} + h_K\|\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}\right).
\end{eqnarray}
\end{lem}
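The $h$-scaling in (\ref{tracecombined}) can be observed numerically. The following one-dimensional sketch (an illustration under our own choices of polynomial degree and element sizes; in 1D the face norm reduces to a point value) checks that the constant is uniform in $h$:

```python
import numpy as np

# 1D scaling check of the trace bound: for cubic polynomials tau on (0,h),
#   |tau(h)| <= C h^{-1/2} (||tau||_{0,K} + h ||tau'||_{0,K})
# with C independent of h; here the observed C stays below 4 for all h.
rng = np.random.default_rng(2)
g, w = np.polynomial.legendre.leggauss(6)   # exact for the degree-6 integrands
worst = 0.0
for h in (1.0, 1e-2, 1e-4, 1e-6):
    pts, wts = (g + 1.0) * h / 2.0, w * h / 2.0   # map quadrature to (0, h)
    for _ in range(200):
        p = np.polynomial.Polynomial(rng.standard_normal(4))
        dp = p.deriv()
        l2 = np.sqrt(np.sum(wts * p(pts) ** 2))
        dl2 = np.sqrt(np.sum(wts * dp(pts) ** 2))
        worst = max(worst, abs(p(h)) * np.sqrt(h) / (l2 + h * dl2))
print(worst < 4.0)  # True: the observed constant is uniform in h
```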
The following two continuity results hold.
\begin{lem}
The following continuity results hold with constants $C_{con,1}>0$ and $C_{con,2}>0$ independent of $\alpha$ and $h$:
\begin{eqnarray} \label{con1}
(\nabla\cdot \mbox{\boldmath$\tau$}_h,v)
&\leq& C_{con,1}\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0|\!|\!| v|\!|\!|_{\alpha,h}
, \quad
\forall \mbox{\boldmath$\tau$}_h \in \Sigma_k, \quad v \in H^1_0(\O)
\mbox{ or } v\in D_k,\\[2mm]
\label{cont_hDiv}
(\nabla\cdot \mbox{\boldmath$\tau$},v_h)
&\leq & C_{con,2} \|\mbox{\boldmath$\tau$}\|_{\alpha,h,H({\rm div})}|\!|\!| v_h|\!|\!|_{\alpha,h}, \quad
\forall \mbox{\boldmath$\tau$} \in H({\rm div};\O)\cap [H^r(\O)]^d, \quad v_h \in D_k.
\end{eqnarray}
\end{lem}
\begin{proof}
The continuity (\ref{con1}) is clear from the representation (\ref{rep}), Cauchy-Schwarz inequality,
the definition of norms $\|\mbox{\boldmath$\tau$}\|_{\alpha,h}$ and $|\!|\!| v|\!|\!|_{\alpha,h}$, and the robust norm equivalent result (\ref{norm_equ}).
To show (\ref{cont_hDiv}), we still start from the representation (\ref{rep}):
$$
(\nabla\cdot\mbox{\boldmath$\tau$}, v_h)
= -\sum_{K\in{\cal T}} (\nabla v_h,\mbox{\boldmath$\tau$})_{K}
+ \sum_{F\in {\cal E}_{I}} (\mbox{\boldmath$\tau$}\cdot{\bf n}, \jump{v_h})_F
+ \sum_{F\in {\cal E}_D} (\mbox{\boldmath$\tau$}\cdot{\bf n}, v_h)_F.
$$
For the term $(\mbox{\boldmath$\tau$}\cdot{\bf n}, \jump{v_h})_F$, where $F\in{\cal E}_I$, by (\ref{tracecombined}),
\begin{eqnarray*}
(\mbox{\boldmath$\tau$}\cdot{\bf n},\,\, \jump{v_h})_F
&\leq & C\, h_F^{-1/2}\|\jump{v_h}\|_{0,F}
\left(\|\mbox{\boldmath$\tau$}\|_{0,K} + h_K\|\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}\right),
\end{eqnarray*}
where $K$ is one of the elements having $F$ as an edge/face.
Choose $K$ to be the element with the smaller $\alpha_K$;
by (\ref{a-h}), this smaller $\alpha_K$ is equivalent to the harmonic average $\alpha_{F,H}$, so
\begin{eqnarray*}
(\mbox{\boldmath$\tau$}\cdot{\bf n},\,\, \jump{v_h})_F
&\leq & C\, \alpha_{F,H}^{1/2}h_F^{-1/2}\|\jump{v_h}\|_{0,F}
\left(\|\alpha^{-1/2}\mbox{\boldmath$\tau$}\|_{0,K} + h_K\|\alpha^{-1/2}\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}\right).
\end{eqnarray*}
The term $(\mbox{\boldmath$\tau$}\cdot{\bf n}, v_h)_F$, $F\in{\cal E}_D$, can be handled similarly.
Then by the Cauchy-Schwarz inequality, (\ref{cont_hDiv}) can be easily proved.
\end{proof}
\begin{lem}
The following discrete inf-sup condition
\begin{equation} \label{infsup}
\sup_{\mbox{\boldmath$\tau$}_h \in \Sigma_k} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h,v_h)}{ \|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0}
\geq \beta |\!|\!| v_h |\!|\!|_{\alpha, h} \quad \forall\, v_h \in D_k
\end{equation}
holds with a constant $\beta>0$ independent of $\alpha$ and $h$.
\end{lem}
\begin{proof}
By the robust norm equivalence (\ref{norm_equ}), it suffices to prove the result with $\|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}$ in place of $\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0$. Since $RT_k \subset BDM_{k+1}$, we have
$$
\sup_{\mbox{\boldmath$\tau$}_h \in BDM_{k+1}} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h,v_h)}{ \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}}
\geq
\sup_{\mbox{\boldmath$\tau$}_h \in RT_k} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h,v_h)}{ \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}}, \quad \forall\, v_h \in D_k,
$$
so we only need to prove the RT version.
Choose a $\tilde{\mbox{\boldmath$\tau$}}_h\in RT_k$ such that
\[
(\tilde{\mbox{\boldmath$\tau$}}_h,\nabla q)_K = -(\alpha \nabla v_h, \nabla q)_K \quad \forall\, q\in P_{k-1}(K)
\quad\forall\,\, K\in{\cal T}
\]
and that
\begin{equation}\label{n-bc}
\tilde{\mbox{\boldmath$\tau$}}_h \cdot {\bf n} |_F
=\left\{\begin{array}{llll}
\displaystyle\frac{\alpha_{F,H}}{h_F}\jump{v_h}
& \,\, F\in {\cal E}_{I}, \\[3mm]
\displaystyle\frac{\alpha_F}{ h_F} v_h & \,\, F\in {\cal E}_D,
\end{array}\right.
\end{equation}
which, together with (\ref{rep}), gives
\begin{equation}\label{5.10}
(\nabla\cdot \tilde{\mbox{\boldmath$\tau$}}_h,v_h)= |\!|\!| v_h |\!|\!|_{\alpha,h}^2.
\end{equation}
For every $K\in{\cal T}$, by the standard scaling argument, there exists a constant $C>0$ independent of $\alpha$ and the mesh size such that
\[
\|\tilde{\mbox{\boldmath$\tau$}}_h\|_{0,K}^2 \leq C \left (
\|\alpha_K \nabla v_h \|_{0,K}^2 + h_K \sum_{F\in {\cal E}_K\cap{\cal E}_{I}}
\|\displaystyle\frac{\alpha_{F,H}}{h_F}\jump{v_h}\|_{0,F}^2
+h_K \sum_{F\in {\cal E}_K\cap{\cal E}_{D}}
\|\displaystyle\frac{\alpha_{F}}{h_F} v_h\|_{0,F}^2
\right),
\]
which, together with (\ref{a-h}), gives
\[
\|\alpha_K^{-1/2}\tilde{\mbox{\boldmath$\tau$}}_h\|_{0,K}^2 \leq C \left (
\|\alpha_K^{1/2} \nabla v_h \|_{0,K}^2 +\sum_{F\in {\cal E}_K\cap{\cal E}_{I}} \displaystyle\frac{\alpha_{F,H}}{h_F}
\|\jump{v_h}\|_{0,F}^2
+ \sum_{F\in {\cal E}_K\cap{\cal E}_{D}}
\displaystyle\frac{\alpha_{F}}{h_F} \|v_h\|_{0,F}^2
\right).
\]
Hence, there exists a constant $\tilde{C}>0$ independent of $\alpha$ and $h$ such that
\[
\|\tilde{\mbox{\boldmath$\tau$}}_h\|_{\alpha,h} \leq \tilde{C} |\!|\!| v_h |\!|\!|_{\alpha,h},
\]
which, together with (\ref{5.10}), leads to the discrete inf-sup condition of the lemma.
\end{proof}
Define the following discrete divergence-free subspace of $\Sigma_k$:
$$
\Sigma_k^0 =\{\mbox{\boldmath$\tau$}_h \in \Sigma_k : \nabla\cdot \mbox{\boldmath$\tau$}_h =0\}.
$$
Its orthogonal complement is
$$
(\Sigma_k^0)^\perp =\{\mbox{\boldmath$\tau$}_h \in \Sigma_k : (\mbox{\boldmath$\tau$}_h, \boldsymbol{\rho}_h) =0, \,\forall\, \boldsymbol{\rho}_h \in \Sigma_k^0\}.
$$
Note that the inf-sup condition (\ref{infsup}) is also equivalent to the following inf-sup condition with $\beta>0$ independent of $\alpha$ and $h$:
\begin{equation} \label{infsup2}
\sup_{v_h\in D_k} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h,v_h)}{ |\!|\!| v_h|\!|\!|_{\alpha,h}}
\geq \beta \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h} \geq \beta \|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0 \quad \forall\, \mbox{\boldmath$\tau$}_h \in (\Sigma_k^0)^\perp.
\end{equation}
The condition (\ref{infsup}) also guarantees that for each $g\in L^2(\O)$, there exists a unique solution $\mbox{\boldmath$\tau$}_h \in (\Sigma_k^0)^\perp$ such that
\begin{equation} \label{orth}
(\nabla\cdot \mbox{\boldmath$\tau$}_h, v_h) = (g, v_h), \quad \forall v_h \in D_k.
\end{equation}
Now let us prove the following robust best approximation property for $|\!|\!| u-u_h|\!|\!|_{\alpha,h}$.
\begin{thm} (Robust best approximation in the weighted discrete $H^1$ norm)
Let $(\mbox{\boldmath$\sigma$}, u)$ and $(\mbox{\boldmath$\sigma$}_h,\,u_h)\in \Sigma_k\times D_k$
be the solutions of
{\em (\ref{mixed})} and {\em (\ref{problem_mixed})}, respectively.
Assume that $u\in H^{1+r}(\O)$ with $r>0$ and that $u|_K\in H^{1+s_K}(K)$ with an elementwise defined regularity $s_K>0$ for all $K\in{\cal T}$. Then there exists a constant $C>0$, independent of $\alpha$ and $h$ in both two and three dimensions, such that
\begin{equation} \label{aprioriu}
|\!|\!| u-u_h|\!|\!|_{\alpha,h}
\leq C\left( \inf_{\mbox{\boldmath$\tau$}_h^f\in\Sigma_k^f} \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,\O}+\inf_{v_h \in D_k}|\!|\!| u -v_h|\!|\!|_{\alpha,h}\right).
\end{equation}
\end{thm}
\begin{proof}
By the inf-sup condition, for each $v_h \in D_k$ we have
\begin{equation}\label{uhvh}
|\!|\!| u_h - v_h|\!|\!|_{\alpha,h} \leq \frac{1}{\beta} \sup_{\mbox{\boldmath$\tau$}_h \in \Sigma_k} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h, u_h-v_h)}{\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_0}.
\end{equation}
By the first equation in the error equations (\ref{erroreq_mixed}),
$$
(\nabla\cdot \mbox{\boldmath$\tau$}_h, u_h-v_h) = (\nabla\cdot \mbox{\boldmath$\tau$}_h, u-v_h) + (\nabla\cdot \mbox{\boldmath$\tau$}_h, u_h-u) =
(\nabla\cdot \mbox{\boldmath$\tau$}_h, u-v_h) - (\alpha^{-1}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h),\,\mbox{\boldmath$\tau$}_h).
$$
Then, by the continuity result \eqref{cont_hDiv} and the Cauchy-Schwarz inequality,
$$
(\nabla\cdot \mbox{\boldmath$\tau$}_h, u_h-v_h) \leq C \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h} |\!|\!| u-v_h|\!|\!|_{\alpha,h} + \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0 \|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_{0}.
$$
Thus by \eqref{uhvh} and the equivalence of $ \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h}$ and $\|\alpha^{-1/2}\mbox{\boldmath$\tau$}_h\|_{0}$,
$$
|\!|\!| u_h - v_h|\!|\!|_{\alpha,h} \leq C(|\!|\!| u-v_h|\!|\!|_{\alpha,h}+ \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0).
$$
A simple application of the triangle inequality yields
$$
|\!|\!| u-u_h|\!|\!|_{\alpha,h} \leq |\!|\!| u-v_h|\!|\!|_{\alpha,h}+|\!|\!| u_h-v_h|\!|\!|_{\alpha,h}
\leq C\left(\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0+ |\!|\!| u -v_h|\!|\!|_{\alpha,h}\right).
$$
Combining this with the robust best approximation result (\ref{rba_equ}) for $\mbox{\boldmath$\sigma$}_h$, we obtain the robust best approximation result of the theorem.
\end{proof}
\begin{rem}
Even though we have the robust best approximation result (\ref{aprioriu}), due to the fact that the approximation orders of $\Sigma_k$ and $D_k$ differ in the corresponding norms, the order of convergence of $u-u_h$ in the discrete $H^1$ norm $|\!|\!| \cdot|\!|\!|_{\alpha,h}$ is one or two orders lower than the corresponding weighted $L^2$ RT or BDM approximation errors in Theorem \ref{apriori_mixed2}, respectively.
Due to this order difference, in the a posteriori error analysis, we should only construct the error estimator related to $\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0$.
\end{rem}
Now, let us show the robust best approximation property in $\Sigma_k$.
\begin{thm} (Robust best approximation in the mixed approximation space)
The following robust best approximation properties are true with a constant $C$ independent of $\alpha$ and $h$:
\begin{eqnarray}
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0 &\leq& C \inf_{\mbox{\boldmath$\tau$}_h \in \Sigma_k}\|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}, \\[2mm] \label{rbahdiv}
\|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\sigma$}_h\|_{\alpha,h,H({\rm div})} &\leq& C \inf_{\mbox{\boldmath$\tau$}_h \in \Sigma_k}\|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}.
\end{eqnarray}
\end{thm}
\begin{proof}
For an arbitrary $\mbox{\boldmath$\tau$}_h \in \Sigma_k$, by \eqref{orth}, there exists a unique $\boldsymbol{\zeta}_h \in (\Sigma_k^0)^\perp$, such that
$$
(\nabla\cdot \boldsymbol{\zeta}_h, v_h) = (\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h), v_h), \quad\forall v_h \in D_k,
$$
and
\begin{equation}
\beta \|\alpha^{-1/2}\boldsymbol{\zeta}_h\|_0 \leq \sup_{v_h \in D_k} \displaystyle\frac{(\nabla\cdot \boldsymbol{\zeta}_h, v_h)} {|\!|\!| v_h |\!|\!|_{\alpha,h}}=
\sup_{v_h \in D_k} \displaystyle\frac{(\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h), v_h)} {|\!|\!| v_h |\!|\!|_{\alpha,h}}.
\end{equation}
By the continuity (\ref{cont_hDiv}),
$$
(\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h), v_h) \leq C|\!|\!| v_h |\!|\!|_{\alpha,h} \|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}.
$$
Thus,
$$
\|\alpha^{-1/2}\boldsymbol{\zeta}_h\|_0 \leq C \|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}.
$$
Setting $\mbox{\boldmath$\tau$}_h^f := \boldsymbol{\zeta}_h + \mbox{\boldmath$\tau$}_h$, it is clear that $\mbox{\boldmath$\tau$}_h^f \in \Sigma_k^f$. Then by the best approximation (\ref{rba_equ}),
$$
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0 \leq
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_0 \leq
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h)\|_0+\|\alpha^{-1/2}\boldsymbol{\zeta}_h\|_0 \leq C \|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}.
$$
On the other hand, since on each element $K\in {\cal T}$,
$$
(\nabla\cdot \boldsymbol{\zeta}_h, v_h)_K = (\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h), v_h)_K, \quad\forall v_h \in P_k(K),
$$
and $\nabla\cdot \boldsymbol{\zeta}_h \in P_k(K)$, we have
$$
\|\nabla\cdot \boldsymbol{\zeta}_h\|_{0,K} \leq \|\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\tau$}_h)\|_{0,K}.
$$
Since $\nabla\cdot(\mbox{\boldmath$\sigma$}_h-\mbox{\boldmath$\tau$}_h^f)=0$, we have
\begin{eqnarray*}
\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_{0,K}&\leq&
\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,K}+\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}_h-\mbox{\boldmath$\tau$}_h^f)\|_{0,K}\\
&=&
\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h^f)\|_{0,K}\\
&\leq&
\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h)\|_{0,K}+
\|\alpha^{-1/2}\nabla\cdot \boldsymbol{\zeta}_h\|_{0,K} \\
&\leq & 2 \|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\tau$}_h)\|_{0,K}.
\end{eqnarray*}
With this, the robust best approximation property (\ref{rbahdiv}) in $\|\cdot\|_{\alpha,h,H({\rm div})}$ can be proved.
\end{proof}
We classify the elements in the mesh into two sets:
\begin{eqnarray}
{\cal T}_{low} = \{ K\in {\cal T} : 0<s_K<1\} \quad\mbox{and}\quad
{\cal T}_{high} = \{ K\in {\cal T} : 1\leq s_K\}.
\end{eqnarray}
\begin{thm}\label{apriori_mixed3} (Robust local a priori error estimates in weighted $H({\rm div})$ norm)
Let $(\mbox{\boldmath$\sigma$}, u)$ and $(\mbox{\boldmath$\sigma$}_h,\,u_h) \in \Sigma_k \times D_k$ $(k\geq 0)$ be the solutions of {\em (\ref{mixed})} and {\em (\ref{problem_mixed})}, respectively. Assume that $u\in H^{1+r}(\O)$ with some $r>0$ and that $u|_K\in H^{1+s_K}(K)$ with an elementwise defined regularity $s_K>0$ for all $K\in{\cal T}$. Then there exists a constant $C>0$, independent of $\alpha$ and $h$ in both two and three dimensions, such that
\begin{eqnarray}\label{err-bound-Div2RT}
\|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\sigma$}_h\|_{\alpha,h,H({\rm div})} &\leq& C
\sum_{K\in{\cal T}_{low}} \left(h_K^{s_K} |\alpha^{1/2}\nabla u|_{s_K,K} + h_K \|\alpha^{-1/2}f\|_{0,K}\right)\\
&&\quad + C\sum_{K\in{\cal T}_{high}} \left ( h_K^{\min\{k+1,s_K\}} |\alpha^{1/2}\nabla u|_{\min\{k+1,s_K\},K} \right. \\
&&\quad \left. + h_K^{\min\{k+2,s_K\}}\|\alpha^{-1/2}f\|_{\min\{k+1,s_K-1\},K}\right), RT_{k} \mbox{ case}.
\\[2mm] \label{err-bound-Div2BDM}
\|\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\sigma$}_h\|_{\alpha,h,H({\rm div})} &\leq& C
\sum_{K\in{\cal T}_{low}} \left(h_K^{s_K} |\alpha^{1/2}\nabla u|_{s_K,K} + h_K \|\alpha^{-1/2}f\|_{0,K}\right)\\
&&\quad + C\sum_{K\in{\cal T}_{high}} h_K^{\min\{k+2,s_K\}} \left ( |\alpha^{1/2}\nabla u|_{\min\{k+2,s_K\},K} \right. \\
&&\quad \left. + \|\alpha^{-1/2}f\|_{\min\{k+1,s_K-1\},K}\right), BDM_{k+1} \mbox{ case}.
\end{eqnarray}
\end{thm}
\begin{proof}
By the definition of the norm $\|\cdot\|_{\alpha,h,H({\rm div})}$, we only need to discuss the term
$$
h_K\|\alpha^{-1/2}\nabla\cdot (\mbox{\boldmath$\sigma$}- \mbox{\boldmath$\sigma$}_h)\|_{0,K} = h_K\|\alpha^{-1/2}(f-Q^k_h f)\|_{0,K}
$$
for each element $K\in{\cal T}$.
The first case is that the regularity is low in the element $K\in {\cal T}_{low}$, with $0< s_K <1$. In this case, notice that $f \in L^2(K)$, thus
$$
h_K\|\alpha^{-1/2}(f-Q^k_h f)\|_{0,K} \leq h_K\|\alpha^{-1/2}f\|_{0,K}.
$$
Compared with the error $h_K^{s_K} |\alpha^{1/2}\nabla u|_{s_K,K}$ from the weighted $L^2$ approximation, this term is of higher order.
The other case is that $s_K \geq 1$ in the element $K$. Note that $\alpha_K$ is assumed to be a constant in $K$, thus $f = \nabla\cdot(\alpha_K\nabla u) = \alpha_K \Delta u \in H^{s_K-1}(K)$, thus
$$
h_K\|\alpha^{-1/2}(f-Q^k_h f)\|_{0,K} \leq C h_K^
{\min\{s_K,k+2\}}\|\alpha^{-1/2}f\|_{\min\{s_K-1,k+1\},K}.
$$
Compared with the weighted $L^2$ error, this term is of the same order for the $BDM_{k+1}$ approximation and one order higher for the $RT_k$ approximation.
\end{proof}
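The decay of the data-oscillation term used in the proof can be observed numerically. The following sketch (illustrative, with our own choice $f = \sin(\pi x)$ in one dimension) verifies that $\|f-Q_h^0 f\|_0 = O(h)$ for smooth $f$, so that $h_K\|f-Q_h^k f\|_{0,K}$ is indeed one order beyond the $O(h)$ weighted $L^2$ flux error of $RT_0$:

```python
import numpy as np

# Convergence check for the data-oscillation term h_K ||f - Q_h^k f||
# with k = 0: for smooth f, ||f - Q_h^0 f||_0 = O(h), so the term is O(h^2).
f = lambda x: np.sin(np.pi * x)
g, w = np.polynomial.legendre.leggauss(6)
errs = []
for n in (8, 16, 32, 64):
    x = np.linspace(0.0, 1.0, n + 1)
    mid, half = (x[:-1] + x[1:]) / 2.0, 0.5 / n
    pts = mid[:, None] + half * g[None, :]        # quadrature nodes per cell
    avg = (w * f(pts)).sum(axis=1) / 2.0          # Q_h^0 f = cell averages
    err2 = (w * (f(pts) - avg[:, None]) ** 2).sum(axis=1) * half
    errs.append(np.sqrt(err2.sum()))
rates = [e1 / e0 for e0, e1 in zip(errs, errs[1:])]
print(all(0.45 < r < 0.55 for r in rates))  # True: first-order decay in h
```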
\begin{rem}
One may want to apply Brezzi's theory directly, as in \cite{LS:06}, to get the following a priori error estimate:
$$
|\!|\!| u - u_h|\!|\!|_{\alpha,h} +\|\mbox{\boldmath$\sigma$} - \mbox{\boldmath$\sigma$}_h\|_{\alpha, h} \leq C\left(\inf_{v_h\in D_k}|\!|\!| u - v_h|\!|\!|_{\alpha,h} +\inf_{\mbox{\boldmath$\tau$}_h \in \Sigma_k}\|\mbox{\boldmath$\sigma$} - \mbox{\boldmath$\tau$}_h\|_{\alpha, h}\right ).
$$
This is not valid, since for problems with low regularity the $L^2$ norm of the trace $\|\mbox{\boldmath$\sigma$}\cdot{\bf n}\|_{0,F}$ is not defined, and thus $\|\mbox{\boldmath$\sigma$} \|_{\alpha,h}$ is not well-defined. Moreover, the result obtained this way is suboptimal for the flux approximation.
\end{rem}
\begin{rem}
In the standard mixed method analysis, the $L^2$ norm of $u-u_h$ is analyzed and has the same order of convergence as the $RT$ approximation. For the robust local a priori error estimate, however, we cannot get a robust local estimate for $\|\alpha^{1/2}(u-u_h)\|_0$, since a robust inf-sup condition
$$
\sup_{\mbox{\boldmath$\tau$}_h \in \Sigma_k} \displaystyle\frac{(\nabla\cdot \mbox{\boldmath$\tau$}_h,v_h)}{ \|\mbox{\boldmath$\tau$}_h\|_{\alpha,h,H({\rm div})}}
\geq \beta \| \alpha^{1/2} v_h \|_0 \quad \forall\, v_h \in D_k,
$$
with a constant $\beta$ independent of $h$ and $\alpha$ is not available.
\end{rem}
\section{Stenberg's Post-processing}
Since the approximation $u_h$ of the mixed methods, measured in the weighted discrete $H^1$ energy norm, converges at a lower order than the flux approximation, we introduce Stenberg's post-processing to obtain an approximation of the same order.
On each element $K\in {\cal T}$, if $(\mbox{\boldmath$\sigma$}_h,u_h) \in RT_k\times D_k$ ($k\geq 0$) or $(\mbox{\boldmath$\sigma$}_h,u_h) \in BDM_k\times D_{k-1}$ ($k\geq 1$), i.e., the index of the flux approximation space is $k$, we find a $u_{h,K}^* \in P_{k+1}(K)$, such that
\begin{equation}
(\alpha \nabla u_{h,K}^*, \nabla v_h)_K = (f,v_h)_K - (\mbox{\boldmath$\sigma$}_h\cdot{\bf n}, v_h)_{\partial K}, \quad \forall v_h\in P_{k+1}(K)/\rm I\kern-.19emR,
\end{equation}
and
\begin{equation}
\int_K u_{h,K}^* dx = \int_K u_{h} dx.
\end{equation}
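Because the recovery is defined element by element, it is straightforward to prototype. The following sketch is a simplified one-dimensional analog for illustration only, not the 2D/3D implementation analyzed here; `alpha`, `f`, `sigma_h`, and `u_h_mean` are hypothetical element data. It solves: find $u^*\in P_{k+1}(K)$ with $(\alpha u^{*\prime},v')_K=(f,v)_K-[\sigma_h n\, v]_{\partial K}$ for all $v\in P_{k+1}(K)/\rm I\kern-.19emR$, together with the mean constraint.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def stenberg_post_process_1d(alpha, f, sigma_h, u_h_mean, x0, x1, k=1):
    """1-D analog of the element-wise recovery: find u* in P_{k+1}([x0,x1]) with
       (alpha u*', v')_K = (f, v)_K - [sigma_h * n * v]_{dK} for v in P_{k+1}/R,
       and mean(u*) = u_h_mean.  alpha is a constant on the element."""
    m = k + 1                          # polynomial degree of the recovery
    xq, wq = leggauss(m + 2)           # Gauss rule, exact for the integrands below
    xq = 0.5 * (x1 - x0) * (xq + 1.0) + x0
    wq = 0.5 * (x1 - x0) * wq
    h = x1 - x0

    # monomial basis phi_i(x) = ((x - x0)/h)**i, i = 0..m
    def phi(i, x):  return ((x - x0) / h) ** i
    def dphi(i, x): return i * ((x - x0) / h) ** (i - 1) / h if i > 0 else 0.0 * x

    A = np.zeros((m + 1, m + 1))
    b = np.zeros(m + 1)
    for j in range(1, m + 1):          # test functions span P_{k+1}/R (skip constants)
        for i in range(m + 1):
            A[j, i] = np.sum(wq * alpha * dphi(i, xq) * dphi(j, xq))
        # boundary term: (sigma_h * n, v)_{dK} with n = +1 at x1, -1 at x0
        b[j] = np.sum(wq * f(xq) * phi(j, xq)) \
             - (sigma_h(x1) * phi(j, x1) - sigma_h(x0) * phi(j, x0))
    # replace the (trivial) constant-test row by the mean-value constraint
    for i in range(m + 1):
        A[0, i] = np.sum(wq * phi(i, xq)) / h
    b[0] = u_h_mean
    c = np.linalg.solve(A, b)
    return lambda x: sum(c[i] * phi(i, x) for i in range(m + 1))
```

With consistent data ($u=x^2$, $\sigma=-2x$, $f=-2$, $\alpha=1$ on $K=[0,1]$), the $P_2$ recovery reproduces $u$ exactly, reflecting the consistency of the local problem.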
We first prove the following trace theorem by using techniques in \cite{BH:01,CaYeZh:11}.
\begin{thm}
For an element $K\in{\cal T}$ with the mesh size $h_K$, we have
\begin{equation} \label{trace}
\|\mbox{\boldmath$\tau$}\cdot{\bf n}\|_{-1/2,\partial K} \leq C(\|\mbox{\boldmath$\tau$}\|_{0,K} + h_K \|\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}), \quad \forall \mbox{\boldmath$\tau$} \in H({\rm div};K).
\end{equation}
\end{thm}
\begin{proof}
For any $\mbox{\boldmath$\tau$}\in H({\rm div};K)$ and $v\in H^1(K)$, we have the following identity:
\begin{equation}
\langle v, \mbox{\boldmath$\tau$}\cdot{\bf n} \rangle_{\partial K} =(\mbox{\boldmath$\tau$}, \nabla v)_K + (\nabla\cdot \mbox{\boldmath$\tau$}, v)_K,
\end{equation}
where $\langle v, \mbox{\boldmath$\tau$}\cdot{\bf n}\rangle_{\partial K}$ should be viewed as the duality pair between
$H^{1/2}(\partial K)$ and $H^{-1/2}(\partial K)$.
Thus
$$
\|\mbox{\boldmath$\tau$}\cdot{\bf n}\|_{-1/2,\partial K} = \sup_{v\in H^{1/2}(\partial K)} \displaystyle\frac
{(\mbox{\boldmath$\tau$}, \nabla v)_K + (\nabla\cdot \mbox{\boldmath$\tau$}, v)_K}{\|v\|_{1/2,\partial K}}.
$$
On a reference element $\hat{K}$, given $g\in H^{1/2}(\partial \hat{K})$, consider the following equation
$$
- \Delta z + z =0 \mbox{ in } \hat{K}, \quad z = g \mbox{ on } \partial \hat{K}.
$$
By the elliptic stability theory, we have
$$
\|\nabla z\|_{0,\hat{K}} + \| z\|_{0,\hat{K}} \leq C\|g\|_{1/2, \partial \hat{K}}.
$$
Mapping back to the physical element $K$, we have that, given a $g\in H^{1/2}(\partial K)$,
there exists a $w_g\in H^1(K)$ with $w_g=g$ on $\partial K$, such that
$$
\|\nabla w_g\|_{0, K} + h_K^{-1} \| w_g\|_{0, K} \leq C\|g\|_{1/2, \partial K}.
$$
Thus
$$
\|\mbox{\boldmath$\tau$}\cdot{\bf n}\|_{-1/2,\partial K}
= \sup_{g\in H^{1/2}(\partial K)} \displaystyle\frac
{(\mbox{\boldmath$\tau$}, \nabla w_g)_K + (\nabla\cdot \mbox{\boldmath$\tau$}, w_g)_K}{\|g\|_{1/2,\partial K}}
\leq
C(\|\mbox{\boldmath$\tau$}\|_{0,K} + h_K \|\nabla\cdot \mbox{\boldmath$\tau$}\|_{0,K}).
$$
This proves the trace inequality \eqref{trace}.
\end{proof}
\begin{thm} In each element $K\in{\cal T}$, the following robust best approximation property holds:
\begin{equation}
\|\alpha^{1/2}_K\nabla(u-u_{h,K}^*)\|_{0,K} \leq C \left( \inf_{w_h \in P_{k+1}(K)}\|\alpha^{1/2}_K\nabla (u-w_h)\|_{0,K} + \|\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h\|_{\alpha,h, H({\rm div}),K}
\right).
\end{equation}
\end{thm}
\begin{proof}
Let $w_h$ be an arbitrary function in $P_{k+1}(K)$ and set $v_h= u_{h,K}^* - w_h$. Let $\overline{v}_h = \int_K v_h dx /|K|$ be the average of $v_h$ on $K$, so that $v_h-\overline{v}_h$ belongs to the test space $P_{k+1}(K)/\rm I\kern-.19emR$. Then
\begin{eqnarray*}
\|\alpha^{1/2}_K\nabla(u_{h,K}^* - w_h)\|_{0,K}^2
& = &\|\alpha^{1/2}_K\nabla v_h\|_{0,K}^2 = (\alpha\nabla(u_{h,K}^* - w_h), \nabla v_h)_K\\
& = & (\alpha_K\nabla u_{h,K}^*,\nabla (v_h- \overline{v}_h))_K -(\alpha\nabla w_h, \nabla v_h)_K\\
& = & (f,v_h-\overline{v}_h)_K - (\mbox{\boldmath$\sigma$}_h\cdot{\bf n}, v_h-\overline{v}_h)_{\partial K} -(\alpha_K\nabla w_h, \nabla v_h)_K \\
& = & (\alpha_K \nabla (u-w_h),\nabla v_h)_K +((\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\cdot{\bf n}, v_h-\overline{v}_h)_{\partial K},
\end{eqnarray*}
where we use the fact that
$(\alpha_K \nabla u, \nabla v)_K = (f,v)_K - (\mbox{\boldmath$\sigma$}\cdot{\bf n}, v)_{\partial K}$ is true for any $v \in H^1(K)$.
By the Cauchy-Schwarz inequality, $(\alpha_K \nabla (u-w_h),\nabla v_h)_K \leq \|\alpha_K^{1/2}\nabla (u-w_h)\|_{0,K}\|\alpha_K^{1/2}\nabla v_h\|_{0,K}$. By the definition of the dual norm, the trace inequality \eqref{trace}, and the fact $\|v_h-\overline{v}_h\|_{0,K} \leq C h_K\|\nabla v_h\|_{0,K}$, we have
\begin{eqnarray*}
((\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\cdot{\bf n}, v_h-\overline{v}_h)_{\partial K} &\leq& \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\cdot{\bf n}\|_{-1/2,\partial K} \|\alpha^{1/2}(v_h-\overline{v}_h)\|_{1/2,\partial K} \\
&\leq & C h_K^{-1}\|\alpha^{1/2}(v_h-\overline{v}_h)\|_{0,K} \|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\cdot{\bf n}\|_{-1/2,\partial K}\\
&\leq & C \|\alpha^{1/2}\nabla v_h\|_{0,K}(\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_{0,K} + h_K\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_{0,K}).
\end{eqnarray*}
Thus
$$
\|\alpha^{1/2}\nabla(u_{h,K}^* - w_h)\|_{0,K} \leq
C(\|\alpha^{1/2}\nabla (u-w_h)\|_{0,K} +
\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_{0,K} + h_K\|\alpha^{-1/2}\nabla\cdot(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_{0,K}).
$$
By the triangle inequality,
\begin{equation}
\|\alpha^{1/2}\nabla(u-u_{h,K}^*)\|_{0,K} \leq \|\alpha^{1/2}\nabla(u-w_h)\|_{0,K}+\|\alpha^{1/2}\nabla(u_{h,K}^* - w_h)\|_{0,K}.
\end{equation}
The theorem is proved.
\end{proof}
By the approximation property of $P_{k+1}(K)$ and the robust local optimal error estimate of $\mbox{\boldmath$\sigma$}_h$, we immediately have the following robust local optimal error estimate for Stenberg's post-processing.
\begin{thm}
For both the $(\mbox{\boldmath$\sigma$}_h,u_h) \in RT_k\times D_k$ ($k\geq 0$) and $(\mbox{\boldmath$\sigma$}_h,u_h) \in BDM_k\times D_{k-1}$ ($k\geq 1$) cases, Stenberg's recovery $u_{h,K}^* \in P_{k+1}(K)$ satisfies the following robust local a priori error estimate in the low-regularity elements $K\in {\cal T}_{low}$ with $0\leq s_K<1$:
\begin{equation}
\|\alpha^{1/2}_K\nabla(u-u_{h,K}^*)\|_{0,K} \leq
C h_K^{s_K} |\alpha^{1/2}\nabla u|_{s_K,K} + h_K \|\alpha^{-1/2}f\|_{0,K}, K \in {\cal T}_{low}.
\end{equation}
For those elements $K\in {\cal T}_{high}$ with $1\leq s_K$, the following robust local a priori error estimate holds:
\begin{eqnarray}
\|\alpha^{1/2}_K\nabla(u-u_{h,K}^*)\|_{0,K} &\leq&
C \left( h_K^{\min\{k+1,s_K\}} |\alpha^{1/2}\nabla u|_{\min\{k+1,s_K\},K} \right. \\
&& \left.+ h_K^{\min\{k+2,s_K\}}\|\alpha^{-1/2}f\|_{\min\{k+1,s_K-1\},K} \right), \quad RT_{k}\times D_k \mbox{ case},
\\[2mm]
\|\alpha^{1/2}_K\nabla(u-u_{h,K}^*)\|_{0,K} &\leq& C h_K^{\min\{k+1,s_K\}} \left (|\alpha^{1/2}\nabla u|_{\min\{k+1,s_K\},K} \right. \\
&&\quad \left. + \|\alpha^{-1/2}f\|_{\min\{k,s_K-1\},K}\right), \quad BDM_{k}\times D_{k-1} \mbox{ case}.
\end{eqnarray}
\end{thm}
\begin{rem}
There are other post-processings available, such as the one proposed in \cite{AC:95} and analyzed in \cite{Voh:10}. Since the recovered potential is also constructed mainly from the numerical flux $\mbox{\boldmath$\sigma$}_h$, a similar robust and local optimal a priori error estimate can also be derived.
It is also well known that if the mixed method is implemented by hybridization, the Lagrange multiplier is a better approximation of $u$ than $u_h$, and is a good source for post-processing or solution reconstruction. With careful analysis, it should not be hard to derive robust and local optimal results for the Lagrange multiplier and its post-processed solution under a similar weighted discrete $H^1$ norm.
\end{rem}
\section{Final comments}
In this paper, for elliptic interface problems in two and three dimensions with possibly very low regularity, we establish robust and local optimal a priori error estimates for the Raviart-Thomas and Brezzi-Douglas-Marini mixed finite element approximations. For the flux approximation, we show robust best approximation results in the discrete equilibrated space and in the whole mixed approximation space with appropriate norms: an $\alpha$-weighted $L^2$ norm and an $(\alpha,h)$-weighted $H({\rm div})$ norm. We show robust local optimal error estimates for the flux approximation in these norms. For the potential approximation, we show a robust best approximation result in a weighted discrete $H^1$ norm and show that the convergence order is sub-optimal compared with that of the flux approximation. We then show that, with the flux as the main source of the post-processing, Stenberg's post-processing recovers a potential with a robust local optimal error estimate.
These robust and local optimal a priori estimates provide guidance for constructing robust a posteriori error estimates and adaptive methods for the mixed approximations. For robust a posteriori error estimation for the mixed methods of the interface problem, we should focus on $\|\alpha^{-1/2}(\mbox{\boldmath$\sigma$}-\mbox{\boldmath$\sigma$}_h)\|_0$, as in the approaches of \cite{Ain:07,CaZh:10,Kim:07,Voh:10}. The approaches in \cite{BrVe:96,LS:06} are not robust, since they all try to build the estimator from $u_h$. If any post-processing is used to construct the a posteriori error estimator, the main source of information should be the numerical flux $\mbox{\boldmath$\sigma$}_h$, not the numerical potential $u_h$ itself.
% arXiv:1902.10901 -- Robust and Local Optimal A Priori Error Estimates for Interface Problems with Low Regularity: Mixed Finite Element Approximations (Numerical Analysis, math.NA)
% arXiv:2012.09055
\title{Degree Counting Theorems for $2\times 2$ Non-symmetric Singular Liouville Systems}
\begin{abstract}
Let $(M,g)$ be a compact Riemann surface with no boundary and let $u=(u_1,u_2)$ be a solution of the following singular Liouville system:
$$\Delta_g u_i+\sum_{j=1}^2 a_{ij}\rho_j(\frac{h_je^{u_j}}{\int_M h_je^{u_j}dV_g}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1),$$
where $h_1,h_2$ are positive smooth functions, $p_1,\cdots,p_N$ are distinct points on $M$, $\delta_{p_l}$ are Dirac masses, and $\rho=(\rho_1,\rho_2)$ ($\rho_i\geq 0$) and $(\gamma_1,\cdots,\gamma_N)$ ($\gamma_l > -1$) are constant vectors. In previous work, we derived a degree counting formula for the singular Liouville system when $A$ satisfies standard assumptions. In this article, we establish a more general degree counting formula for the $2\times 2$ singular Liouville system when the coefficient matrix $A$ is non-symmetric and non-invertible. Finally, the existence of solutions follows from the degree counting formula, which depends only on the topology of the domain and the location of $\rho$.
\end{abstract}
\section{Introduction}
In this article we study the following Liouville system defined on a compact Riemann surface $(M,g)$ with no boundary:
\begin{equation}\label{1.1}
\begin{aligned}
\Delta_g u_1^*+a_{11}\rho_1(\frac{h_1^*e^{u_1^*}}{\int_M h_1^*e^{u_1^*}}-1)+a_{12}\rho_2(\frac{h_2^*e^{u_2^*}}{\int_M h_2^*e^{u_2^*}}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1), \\
\Delta_g u_2^*+a_{21}\rho_1(\frac{h_1^*e^{u_1^*}}{\int_M h_1^*e^{u_1^*}}-1)+a_{22}\rho_2(\frac{h_2^*e^{u_2^*}}{\int_M h_2^*e^{u_2^*}}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1),
\end{aligned}
\end{equation}
where $h_1^*,h_2^*$ are positive smooth functions on $M$, $p_1,\cdots,p_N$ are distinct points on $M$, $\delta_{p_l}$ are Dirac masses, and $\rho=(\rho_1,\rho_2)$ ($\rho_i\geq 0$) and $(\gamma_1,\cdots,\gamma_N)$ ($\gamma_l > -1$) are constant vectors. Without loss of generality, we assume ${\rm Vol}(M)=1$. Here $\Delta_g$ is the Laplace-Beltrami operator ($-\Delta_g\geq 0$). Equation (\ref{1.1}) is called a Liouville system if all the entries of the coefficient matrix $A=(a_{ij})_{2\times2}$ are nonnegative.
The Liouville system (\ref{1.1}) has many significant applications in mathematics, physics and many other fields. In physics, Liouville systems can be derived from the mean field limit of point vortices of the Euler flow (see \cite{cag-1,cag-2,kie-1,kie-2}). In classical gauge field theory, Liouville systems are closely related to the Chern-Simons-Higgs equation in the non-abelian case \cite{dun,hong,jack,yang}; both the Liouville and Toda equations, together with their solutions, appear prominently in the analysis of the nonrelativistic self-dual Chern-Simons models. Chemotaxis is a prominent feature in the organization of many biological populations, and various Liouville systems are also used to describe such models in the theory of chemotaxis \cite{chil,kel}.
In geometry, the single Liouville equation is closely related to the famous Nirenberg equation. And the Liouville equation with singular sources describes metrics with conic singularity \cite{kuo}.
Among recent developments on related Liouville systems, Chen and Lin completed the program for the Liouville equation in a series of pioneering works \cite{chenlin1,chenlin2}. Chipot, Shafrir and Wolansky showed that solutions of Liouville systems are radially symmetric with respect to a common point \cite{csw}. Chen-Lin's work was extended by Lin and Zhang to Liouville systems \cite{linzhang1,linzhang2,linzhang3}. Then the author and Zhang \cite{lei-y} extended Lin-Zhang's work to systems with Dirac poles. The main purpose of this article is to extend the existing results (the a priori estimate and the degree counting theorem) of \cite{lei-y} to $2\times2$ singular systems with a non-symmetric and non-invertible coefficient matrix $A$.
For the coefficient matrix $A$, throughout the paper, we postulate the following conditions:
$(H): a_{11},a_{22}\geq0$, $a_{12},a_{21}>0$ and $a_{21}\geq a_{11},a_{12}\geq a_{22}$.
Obviously, if $u=(u_1,u_2)$ is a solution of (\ref{1.1}), then after adding a constant, $u+c=(u_1+c_1, u_2+c_2)$ is also a solution of (\ref{1.1}). Hence, we can always assume that each component of $u=(u_1,u_2)$ is in
$$ ^\text{\r{}}\hspace{-.33cm}H^{1}(M):=\{v\in L^2(M);\quad \nabla v\in L^2(M), \mbox{and }\,\, \int_M v dV_g=0\}. $$
Then the equation (\ref{1.1}) is the Euler-Lagrange equation for the following nonlinear functional $J_\rho(u)$ in\hspace{0.1cm} $^\text{\r{}}\hspace{-.33cm}H^{1}(M)$:
$$J_\rho(u)=\frac{1}{2}\int_M \sum_{i,j=1}^{2} a^{ij}\nabla_g u_i\nabla_g u_j dV_g- \sum_{i=1}^{2}\rho_i \log \int_M h_i e^{u_i} dV_g.$$
Let $\mathbb{N}^+$ be the set of positive integers. To state our theorem, we shall use the following notation:
$$\Sigma:=\{8m\pi+\sum_{p_l\in \Lambda}8\pi(1+\gamma_l); \Lambda\subset\{p_1,\cdots,p_N\}, m\in \mathbb{N}^+\cup\{0\}\}\setminus \{0\}. $$
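As a quick illustration, the finitely many elements of $\Sigma$ below any given threshold can be enumerated directly; the following is a small sketch (the helper name is hypothetical), returning the multipliers $n$ with $8\pi n\in\Sigma$:

```python
import itertools

def critical_multiples(gammas, n_max=10):
    """Enumerate the multipliers n with 8*pi*n in Sigma, i.e.
    n = m + sum_{l in Lambda} (1 + gamma_l) for m >= 0 and Lambda a subset
    of the singular strengths, keeping 0 < n <= n_max."""
    values = set()
    for r in range(len(gammas) + 1):
        for subset in itertools.combinations(gammas, r):
            base = sum(1 + g for g in subset)
            for m in range(n_max + 1):
                n = m + base
                if 0 < n <= n_max:
                    values.add(n)
    return sorted(values)
```

For integer strengths the multipliers are simply all positive integers, while a fractional strength such as $\gamma_1=1/2$ interlaces half-integers among them.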
Writing $\Sigma$ as
$$\Sigma=\{8\pi n_k|n_1<n_2<\cdots\},$$
we first establish the following a priori estimate:
\begin{thm}
\label{a priori estimate}
Let $A$ satisfy $(H)$. For $k\in \mathbb{N}^+ \cup \{0\}$, define
$$\mathcal{O}_k=\{(\rho_1,\rho_2)\,|\,\rho_i\geq0,\ i=1,2,\ {\rm and}$$
$$8\pi n_k(\frac{a_{21}}{a_{12}}\rho_1+\rho_2)<\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2<8\pi n_{k+1}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2) \},$$
where we set $n_0=0$.
Suppose $h_1,h_2$ are positive and $C^3$ functions on $M$ and $K$ is a compact subset of $\mathcal{O}_k$. Then there exists a constant $C$ such that for any solution $u=(u_1,u_2)$ of (\ref{1.1}) with $\rho \in K$ and $u_i \in ^\text{\r{}}\hspace{-.33cm}H^{1}(M)$, we have
$$|u_i(x)|\le C, \quad \forall x\in M, \quad i=1,2, $$
where $C$ depends on $M,g,k,K,A,h$.
\end{thm}
Note that the set $\mathcal{O}_k$ is bounded if all $a_{ii}>0$ and is unbounded if $a_{ii}=0$ for some $i$. By Theorem 1.1, the critical parameter set for (\ref{1.1}) is
$$\Gamma_k=\{\rho;\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2=8\pi n_{k}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2) \}. $$
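Numerically, locating the region $\mathcal{O}_k$ containing a given $\rho$, or detecting that $\rho$ lies on some $\Gamma_j$, amounts to evaluating one quadratic and one linear form. The following is a sketch; `region_index` and its tolerance are hypothetical:

```python
from math import pi

def region_index(rho1, rho2, A, n_values, tol=1e-12):
    """Return k such that rho = (rho1, rho2) lies in O_k, i.e.
    n_k < Q(rho) / (8*pi*L(rho)) < n_{k+1} (with n_0 = 0),
    or None if rho sits on a critical set Gamma_j.
    A = [[a11, a12], [a21, a22]]; n_values lists n_1 < n_2 < ..."""
    (a11, a12), (a21, a22) = A
    Q = (a11 * a21 / a12) * rho1 ** 2 + 2 * a21 * rho1 * rho2 + a22 * rho2 ** 2
    L = (a21 / a12) * rho1 + rho2
    t = Q / (8 * pi * L)
    k = 0
    for j, n in enumerate(n_values, start=1):
        if abs(t - n) < tol:
            return None        # rho lies on Gamma_j
        if t > n:
            k = j
    return k
```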
Thanks to the work of Chen and Lin \cite{chenlin1,chenlin2}, we can define the nonlinear map $T_\rho=(T^1,T^2)$ from $\hspace{0.1cm} ^\text{\r{}}\hspace{-.33cm}H^{1,2}=\hspace{0.1cm}^\text{\r{}}\hspace{-.33cm}H^{1}(M)\times\hspace{0.1cm}^\text{\r{}}\hspace{-.33cm}H^{1}(M)$ to $\hspace{0.1cm}^\text{\r{}}\hspace{-.33cm}H^{1,2}$ by
$$T^i=-\Delta_g^{-1}(\sum_{j=1}^2 a_{ij}\rho_j(\frac{h_j e^{u_j}}{\int_M h_j e^{u_j}}-1)),\quad i=1,2.$$
Obviously, $T_\rho$ is compact from $\hspace{0.1cm}^\text{\r{}}\hspace{-.33cm}H^{1,2}$ to itself. Then, thanks to the a priori estimate, for $\rho\notin \Gamma_k$, the Leray-Schauder degree of (\ref{1.1}) can be defined by
$$d_\rho=\deg(I-T_\rho; B_R,0),$$
where $R$ is sufficiently large and $B_R=\{u; u\in\hspace{0.1cm}^\text{\r{}}\hspace{-.33cm}H^{1,2},\hspace{0.1cm}{
\rm and}\hspace{0.2cm}\sum_{i=1}^2 \|u_i\|_{H^1}<R\}.$ By the homotopic invariance and Theorem 1.1, $d_\rho$ is constant for $\rho \in\mathcal{O}_k$ and is independent of $h=(h_1,h_2)$.
To state the degree counting formula for $d_\rho$, we consider the following generating function $g$:
$$g(x)=(1+x+x^2+\cdots)^{-\mathcal{X}(M)+N}\prod_{l=1}^N (1-x^{1+\gamma_l}), $$
where $\mathcal{X}(M)=2-2g_e(M)$ is the Euler Characteristic of $M$ and $g_e(M)$ is the genus of $M$. We note that if $-\mathcal{X}(M)+N< 0$,
$$(1+x+x^2+\cdots)^{-\mathcal{X}(M)+N}=(1-x)^{\mathcal{X}(M)-N}.$$
Writing $g(x)$ in the following form
$$g(x)=1+b_1 x^{n_1}+b_2 x^{n_2}+\cdots+b_k x^{n_k}+\cdots,$$
we use $b_j (j=1,2,\cdots)$ to describe the degree counting theorem:
\begin{thm}
\label{degree counting theorem}
Let $d_\rho$ be the Leray-Schauder degree for (\ref{1.1}). Suppose
$$ 8\pi n_k(\frac{a_{21}}{a_{12}}\rho_1+\rho_2)<\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2<8\pi n_{k+1}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2),$$
then
$$d_\rho=\sum_{j=0}^{k}b_j, \hspace{0.2cm} {\rm where}\hspace{0.2cm} b_0=1.$$
\end{thm}
For most applications $\gamma_l$ are positive integers, which implies that
$$ \Sigma=\{8\pi m; \hspace{0.1cm}m\in\mathbb{N}^+ \}.$$
Thus in this case if $\mathcal{X}(M) \leq0$ we have
$$g(x)=(1+x+x^2+\cdots)^{-\mathcal{X}(M)}\prod_{l=1}^{N}\frac{1-x^{1+\gamma_l}}{1-x}$$
$$=(1+x+x^2+\cdots)^{-\mathcal{X}(M)}\prod_{l=1}^{N}(1+x+\cdots+x^{\gamma_l})$$
$$=1+b_1x+b_2x^2+\cdots+b_kx^k+\cdots.$$
Obviously $b_j\geq0$ for all $j\geq 1$, which implies
$$d_\rho=1+\sum_{j=1}^{k}b_j>0.$$
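In the integer-$\gamma_l$ case, the coefficients $b_j$, and hence $d_\rho$, can be computed by truncated polynomial multiplication. The sketch below assumes all $\gamma_l\in\mathbb{N}^+$, so that $n_j=j$; the function names are hypothetical:

```python
from math import comb

def one_minus_x_pow(e, k_max):
    """Coefficients of (1 - x)**e up to degree k_max, for integer e of any sign;
    for e < 0 this is the series (1 + x + x^2 + ...)**(-e)."""
    if e >= 0:
        return [(-1) ** j * comb(e, j) if j <= e else 0 for j in range(k_max + 1)]
    return [comb(-e - 1 + j, j) for j in range(k_max + 1)]

def poly_mul_trunc(a, b, k_max):
    """Product of two coefficient lists, truncated at degree k_max."""
    out = [0] * (k_max + 1)
    for i, ai in enumerate(a[:k_max + 1]):
        for j, bj in enumerate(b[:k_max + 1 - i]):
            out[i + j] += ai * bj
    return out

def degree_coefficients(chi, gammas, k_max):
    """b_j in g(x) = (1-x)^(chi - N) * prod_l (1 - x^(1+gamma_l)),
    assuming all gamma_l are positive integers, so that n_j = j."""
    g = one_minus_x_pow(chi - len(gammas), k_max)
    for gam in gammas:
        factor = [0] * (k_max + 1)
        factor[0] = 1
        if 1 + gam <= k_max:
            factor[1 + gam] = -1
        g = poly_mul_trunc(g, factor, k_max)
    return g

def leray_schauder_degree(chi, gammas, k):
    """d_rho = sum_{j=0}^{k} b_j when rho lies in O_k (Theorem 1.2)."""
    return sum(degree_coefficients(chi, gammas, k))
```

For example, on the torus ($\mathcal{X}(M)=0$) with one singular point of strength $\gamma_1=1$, $g(x)=(1-x)^{-1}(1-x^2)=1+x$, so $d_\rho=2$ for all $k\geq 1$.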
\noindent{\bf Corollary 1.1.}
{\em Suppose all $\gamma_l\in \mathbb{N}^+$ and $\mathcal{X}(M) \leq 0$. Then $d_\rho>0$ if
\begin{equation*}
\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2\neq8\pi m(\frac{a_{21}}{a_{12}}\rho_1+\rho_2), \quad \forall m\in \mathbb{N}^+.
\end{equation*}
Thus (\ref{1.1}) always has a solution in this case. }
\medskip
The organization of this article is as follows. In section two we prove the a priori estimate for the singular Liouville system (\ref{1.1}). When $A$ is non-invertible, we find that $u_1$ is a scalar multiple of $u_2$ up to a constant, so we can prove the theorem by reducing the system to a single Liouville equation. When $A$ is non-symmetric, we prove the result by reducing it to the symmetric case: if $u$ is a solution of the singular Liouville system corresponding to a non-symmetric $A$, then we can find a $\tilde u$ which is a solution of the same type of equation corresponding to a symmetric $\tilde A$. We then obtain the generalization of the a priori estimate and degree counting theorems for this case with some modifications. In section three we prove the degree counting theorem by reducing the system to a single Liouville equation and using the previous results of Chen-Lin \cite{chenlin1,chenlin2}. Finally, in section four we discuss some applications of the degree counting theorem to related topics.
\section{{Proof} of the a priori estimate }
The main purpose of this section is to prove the Theorem 1.1.
Let $u^*=(u_1^*,u_2^*)$ be a solution of (\ref{1.1}). We set
$$v_i^*=u_i^*-\log\int_M h_i^* e^{u_i^*}dV_g, \hspace{0.1cm} i=1,2,$$
which gives
$$\int_M h_i^*e^{v_i^*}dV_g=1, \hspace{0.1cm} i=1,2.$$
Therefore, for convenience, in this article we assume that
\begin{equation}\label{scaleto1}
\int_M h_i^*e^{u_i^*}dV_g=1, \hspace{0.1cm} i=1,2.
\end{equation}
Thus (\ref{1.1}) can be written as
\begin{equation}\label{1.2}
\begin{aligned}
\Delta_g u_1^*+a_{11}\rho_1(h_1^*e^{u_1^*}-1)+a_{12}\rho_2(h_2^*e^{u_2^*}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1),\\
\Delta_g u_2^*+a_{21}\rho_1(h_1^*e^{u_1^*}-1)+a_{22}\rho_2(h_2^*e^{u_2^*}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1).
\end{aligned}
\end{equation}
Around each singular source, the leading term of $u_i^*$ is a logarithmic function that comes from the following Green's function $G(x,q):$
$$-\Delta_x G(x,q)=\delta_q-1 \quad {\rm and}\quad \int_M G(x,q)dx=0.$$
Define
$$u_i=u_i^*-4\pi \sum_{l=1}^{N} \gamma_l G(x,p_l),$$
and rewrite (\ref{1.2}) as
\begin{equation}\label{manifold}
\begin{aligned}
\Delta_g u_1+a_{11}\rho_1(h_1e^{u_1}-1)+a_{12}\rho_2(h_2e^{u_2}-1)=0,\\
\Delta_g u_2+a_{21}\rho_1(h_1e^{u_1}-1)+a_{22}\rho_2(h_2e^{u_2}-1)=0,
\end{aligned}
\end{equation}
where
$$h_i(x)=h_i^*(x)\exp\{-\sum_{l=1}^{N}4\pi\gamma_l G(x,p_l)\},$$
which implies that around each singular source, say, $p_l$, in local coordinates, $h_j$ can be written as
$$h_j(x)=|x|^{2\gamma_l}g_j(x)$$
for some positive, smooth function $g_j(x)$.
Let $u=(u_1,u_2)$ be a solution of (\ref{manifold}). To prove the a priori estimate for $u$, we only need to prove the upper bound, because the lower bound follows from the upper bound and a standard Harnack inequality.
Therefore our goal is to prove
\begin{equation}\label{upper bound}
u_i(x)\leq C, \quad i=1,2.
\end{equation}
The proof of (\ref{upper bound}) is by contradiction. Suppose there exists a sequence of solutions $u^k$ of (\ref{manifold}) such that $\max_x\max_i u_i^k(x)\rightarrow \infty$ as $k\rightarrow\infty$.
Then the equation for $u^k$ is
\begin{equation}\label{uk}
\begin{aligned}
\Delta_g u_1^k+a_{11}\rho_1^k(h_1e^{u_1^k}-1)+a_{12}\rho_2^k(h_2e^{u_2^k}-1)=0, \\
\Delta_g u_2^k+a_{21}\rho_1^k(h_1e^{u_1^k}-1)+a_{22}\rho_2^k(h_2e^{u_2^k}-1)=0.
\end{aligned}
\end{equation}
By an argument similar to a Brezis-Merle type lemma \cite{brez}, it is easy to see that there are only finitely many blowup points, $\{p_1,\cdots,p_N\}$, and $u_i^k$ is uniformly bounded above on any compact subset away from the blowup set.
Then we consider two cases for the coefficient matrix $A=(a_{ij})$.
{\em Case one:} $\det(A)=0$. This implies $a_{11}a_{22}=a_{12}a_{21}$ and $a_{ij}>0$.
If we multiply $a_{21}$ by the first equation and multiply $a_{11}$ by the second equation in (\ref{uk}) and then subtract, we get
$$\Delta_g (a_{21} u_1^k-a_{11} u_2^k)=0.$$
Therefore,
$$u_1^k=\frac{a_{11}}{a_{21}}u_2^k+C^k.$$
By assumption $(H)$, we have either $a_{11}<a_{21}$ or $a_{11}=a_{21}$. We discuss these two cases separately.
Case (i): $a_{11}< a_{21}.$
Let $a=\frac{a_{11}}{a_{21}}(<1)$. Then the second equation in (\ref{uk}) becomes
\begin{equation}\label{single}
\Delta_g u_2^k+a_{21}\rho_1^k(h_1e^{a u_2^k+C^k}-1)+a_{22}\rho_2^k(h_2e^{u_2^k}-1)=0.
\end{equation}
To apply the local estimate, we rewrite (\ref{single}) in local coordinates. For $p\in M$, let $y=(y^1,y^2)$ be the isothermal coordinates near $p$ such that $y_p(p)=(0,0)$ and $y_p$ depends smoothly on $p$. In this coordinates, $ds^2$ has the form
$$e^{\phi(y_p)}[(dy^1)^2+(dy^2)^2],$$
where
$$\nabla\phi(0)=0,\phi(0)=0.$$
Also near $p$ we have
$$\Delta_{y_p}\phi=-2Ke^\phi,\quad {\rm where}\hspace{0.1cm} K {\rm \hspace{0.1cm}is\hspace{0.1cm} the \hspace{0.1cm}Gauss\hspace{0.1cm} curvature}. $$
When there is no ambiguity, we write $y=y_p$ for simplicity. In this local coordinates, (\ref{single}) is of the form
$$ -\Delta u_2^k=e^\phi a_{21}\rho_1^k(h_1e^{au_2^k+C^k}-1)+e^\phi a_{22}\rho_2^k(h_2e^{u_2^k}-1),\quad {\rm in}\hspace{0.1cm}B(0,\delta).$$
In this article, we always use $B(p,\delta)$ to denote the ball centered at $p$ with radius $\delta>0$.
Let $f_2^k$ be defined as
$$ -\Delta f_2^k=-e^\phi a_{21}\rho_1^k-e^\phi a_{22}\rho_2^k,\quad {\rm in}\hspace{0.1cm}B(0,\delta).$$
and $f_2^k(0)=|\nabla f_2^k(0)|=0$. Let $\tilde u_2^k=u_2^k-f_2^k$ and
$$ H_1^k=e^\phi \rho_1^k h_1 e^{af_2^k}, \hspace{0.1cm}H_2^k=e^\phi \rho_2^k h_2 e^{f_2^k},$$
then the equation for $\tilde u_2^k$ becomes
\begin{equation}\label{local1}
-\Delta \tilde u_2^k=a_{21}H_1^ke^{a\tilde u_2^k+C^k}+a_{22}H_2^k e^{\tilde u_2^k},\quad {\rm in} \quad B(0,\delta).
\end{equation}
Set $M_k=\max_x\tilde u_2^k(x)/(1+\gamma)$.
Let
\begin{equation*}
v_2^k(y)=\tilde u_2^k(\epsilon_k y)+2\log\epsilon_k,\quad {\rm where}\hspace{0.1cm} \epsilon_k=e^{-\frac{1}{2}M_k}.
\end{equation*}
Then it is easy to verify that
\begin{equation}\label{rescale}
-\Delta v_2^k=\epsilon_k^{2-2a} a_{21}H_1^k e^{av_2^k+C^k}+a_{22}H_2^k e^{v_2^k},\quad{\rm in}\quad B(0,\delta\epsilon_k^{-1}).
\end{equation}
Since $u_i^k$ tends to $-\infty$ in $M\backslash \cup_{j=1}^N B(p_j,\delta)$, we have
$$\int_{M\backslash\cup_{j=1}^N B(p_j,\delta)} h_i e^{u_i^k} dV_g \rightarrow 0,\quad i=1,2,$$
and
\begin{equation*}
\begin{aligned}
\lim_{k\rightarrow\infty} \int_{B(p_l,\delta)} \rho_1^k h_1 e^{u_1^k} dV_g=\lim_{k\rightarrow\infty} \int_{B(p_l,\delta)} H_1^k e^{a\tilde u_2^k+C^k}dx \\
=\lim_{k\rightarrow\infty}\int_{B(p_l\epsilon_k^{-1},\delta\epsilon_k^{-1})}\epsilon_k^{2-2a}H_1^k e^{av_2^k+C^k}dx=0.
\end{aligned}
\end{equation*}
Therefore,
$$\int_M h_1 e^{u_1^k}dV_g = o(1)\neq 1,$$
which contradicts the normalization (\ref{scaleto1}). Thus, blowup cannot happen when $a_{11}<a_{21}$.
\bigskip
Case (ii): $a_{11}=a_{21}$.
In this case, the equation (\ref{uk}) becomes
\begin{equation}
\begin{aligned} \label{uk2}
\Delta_g u_1^k+a_{11}\rho_1^k(h_1e^{u_1^k}-1)+a_{12}\rho_2^k(h_2e^{u_2^k}-1)=0, \\
\Delta_g u_2^k+a_{11}\rho_1^k(h_1e^{u_1^k}-1)+a_{12}\rho_2^k(h_2e^{u_2^k}-1)=0.
\end{aligned}
\end{equation}
Subtracting the two equations in (\ref{uk2}), we get
$$\Delta_g (u_1^k-u_2^k)=0,$$
which implies
$$ u_1^k= u_2^k+C^k.$$
Thus, we can rewrite (\ref{uk2}) as a single equation
\begin{equation}\label{single2}
-\Delta_g u_1^k=a_{11}\rho_1^k(h_1e^{u_1^k}-1)+a_{12}\rho_2^k(h_2e^{u_1^k-C^k}-1).
\end{equation}
(\ref{single2}) in local coordinates can be written as
$$ -\Delta u_1^k=e^\phi a_{11}\rho_1^k(h_1e^{u_1^k}-1)+e^\phi a_{12}\rho_2^k(h_2e^{u_1^k-C^k}-1),\quad {\rm in}\hspace{0.1cm}B(0,\delta).$$
Let $f_1^k$ be defined as
$$ -\Delta f_1^k=-e^\phi a_{11}\rho_1^k-e^\phi a_{12}\rho_2^k,$$
and $f_1^k(0)=|\nabla f_1^k(0)|=0$. Let $\tilde u_1^k=u_1^k-f_1^k$ and
$$ H_1^k=e^\phi \rho_1^k h_1 e^{f_1^k}, H_2^k=e^\phi \rho_2^k h_2 e^{f_1^k},$$
then the equation for $\tilde u_1^k$ becomes
\begin{equation}
\begin{aligned}
-\Delta \tilde u_1^k=a_{11} H_1^k e^{\tilde u_1^k}+a_{12} H_2^ke^{\tilde u_1^k-C^k},\quad {\rm in}\hspace{0.1cm}B(0,\delta).
\end{aligned}
\end{equation}
Again, by an argument similar to a Brezis-Merle type lemma \cite{brez}, there are only finitely many blowup points, $\{p_1,\cdots,p_N\}$.
Bartolucci, Chen, Lin and Tarantello characterized the blowup solutions of the single Liouville equation (see \cite{bclt,chenlin1,chenlin2}): the local masses satisfy either (here $\Lambda$ denotes the set of blowup points coming from the singular sources)
\begin{equation*}
\lim_{k\rightarrow\infty}\int_{B(p_l,\delta)} (a_{11} H_1^k e^{\tilde u_1^k}+a_{12} H_2^ke^{\tilde u_1^k-C^k})dx=8\pi (1+\gamma_l), \hspace{0.1cm}p_l\in \Lambda,
\end{equation*}
or
\begin{equation*}
\lim_{k\rightarrow\infty}\int_{B(p_m,\delta)} (a_{11} H_1^k e^{\tilde u_1^k}+a_{12} H_2^ke^{\tilde u_1^k-C^k})dx=8\pi,\hspace{0.1cm}p_m\notin \Lambda.
\end{equation*}
Here we also observe that
$$\int_{B(p_l,\delta)} H_1^k e^{\tilde u_1^k}dx=\int_{B(p_l,\delta)} \rho_1^k h_1 e^{u_1^k} dV_g,\quad l=1,\cdots,N.$$
Therefore, by (\ref{scaleto1}), we have, along a subsequence,
$$a_{11}\rho_1^k+a_{12}\rho_2^k\rightarrow 8\pi n_j \quad\mbox{for some } n_j.$$
Thus if blowup does happen, $(\rho_1,\rho_2)\in \Gamma_j$ for some $j$.
Therefore, if $\rho$ is not on any critical hypersurface, the a priori estimate holds in this case.
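To see why the limit relation above places $\rho$ on a critical set, note that in this case $a_{21}=a_{11}$, and $\det(A)=0$ together with $a_{11}>0$ gives $a_{22}=a_{12}$; a one-line computation (a sketch, added for completeness) then shows

```latex
(a_{11}\rho_1+a_{12}\rho_2)\Big(\frac{a_{11}}{a_{12}}\rho_1+\rho_2\Big)
=\frac{a_{11}^2}{a_{12}}\rho_1^2+2a_{11}\rho_1\rho_2+a_{12}\rho_2^2
=\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2,
```

so $a_{11}\rho_1+a_{12}\rho_2=8\pi n$ is equivalent to the defining equation of the critical set (note that here $\frac{a_{21}}{a_{12}}\rho_1+\rho_2=\frac{a_{11}}{a_{12}}\rho_1+\rho_2$).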
\bigskip
{\em Case two:} $det(A)\neq0$.
To apply the local estimate, we rewrite the equation (\ref{uk}) in local coordinates
$$ -\Delta u_1^k=e^\phi a_{11}\rho_1^k(h_1e^{u_1^k}-1)+e^\phi a_{12}\rho_2^k(h_2e^{u_2^k}-1),\quad{\rm in}\quad B(0,\delta), $$
$$ -\Delta u_2^k=e^\phi a_{21}\rho_1^k(h_1e^{u_1^k}-1)+e^\phi a_{22}\rho_2^k(h_2e^{u_2^k}-1),\quad{\rm in}\quad B(0,\delta).$$
Let $f_i^k$ be defined as
$$ -\Delta f_1^k=-e^\phi a_{11}\rho_1^k-e^\phi a_{12}\rho_2^k,$$
$$ -\Delta f_2^k=-e^\phi a_{21}\rho_1^k-e^\phi a_{22}\rho_2^k,$$
and $f_i^k(0)=|\nabla f_i^k(0)|=0$. Let $\tilde u_i^k=u_i^k-f_i^k$ and
$$ H_1^k=e^\phi \rho_1^k h_1 e^{f_1^k}, H_2^k=e^\phi \rho_2^k h_2 e^{f_2^k},$$
then the equation for $\tilde u_i^k$ becomes
\begin{equation}\label{local2}
\begin{aligned}
\Delta\tilde u_1^k+a_{11}H_1^k e^{\tilde u_1^k}+a_{12}H_2^k e^{\tilde u_2^k}=0, \\
\Delta\tilde u_2^k+a_{21}H_1^k e^{\tilde u_1^k}+a_{22}H_2^k e^{\tilde u_2^k}=0.
\end{aligned}
\end{equation}
Let
$$B=(b_{ij})_{2\times 2},\hspace{0.3cm} b_{11}=a_{11} \frac{a_{12}}{a_{21}},\hspace{0.5cm}b_{12}=b_{21}=a_{12},\hspace{0.5cm} b_{22}=a_{22},$$
and
$$U_1^k=\tilde u_1^k+\log\frac{a_{21}}{a_{12}},\hspace{0.3cm}U_2^k=\tilde u_2^k.$$
Then we can rewrite (\ref{local2}) as
\begin{equation}
\begin{aligned}
\Delta U_1^k+b_{11}H_1^ke^{U_1^k}+b_{12}H_2^ke^{U_2^k}=0, \\
\Delta U_2^k+b_{12}H_1^ke^{U_1^k}+b_{22}H_2^ke^{U_2^k}=0,
\end{aligned}
\end{equation}
where $B=(b_{ij})_{2\times 2}$ is a symmetric matrix.
Here we observe that
$$\int_{B(p_l,\delta)} H_1^k e^{U_1^k}dx=\frac{a_{21}}{a_{12}} \int_{B(p_l,\delta)} H_1^k e^{\tilde u_1^k}dx=\frac{a_{21}}{a_{12}}\int_{B(p_l,\delta)}\rho_1^kh_1e^{u_1^k}dV_g,$$
$$\int_{B(p_l,\delta)} H_2^k e^{U_2^k}dx= \int_{B(p_l,\delta)} H_2^k e^{\tilde u_2^k}dx=\int_{B(p_l,\delta)}\rho_2^kh_2e^{u_2^k}dV_g.$$
Then, invoking Lemma 2.2 and Lemma 2.3 from \cite{lei-y}, it is easy to see that
\begin{equation}
\int_{M\setminus \cup_{j=1}^N B(p_j,\delta)} h_i e^{u_i^k}dV_g\rightarrow 0,
\end{equation}
and
\begin{equation}\label{ratio}
\lim_{k\rightarrow\infty}\int_{B(p_l,\delta)}\rho_i^k h_i e^{u_i^k}dV_g/\mu_l=\lim_{k\rightarrow\infty}\int_{B(p_m,\delta)}\rho_i^k h_i e^{u_i^k}dV_g/\mu_m
\end{equation}
for $i=1,2$ and any pair of $l,m$ between 1 and $N$.
Here we use $\mu_l=1+\gamma_l$ to denote the strength of the singular source at each $p_l$, and, for some fixed $\delta>0$, we use $\sigma_{il}$ to denote the local energy around $p_l$:
$$\sigma_{il}=\lim_{k\rightarrow \infty}\frac{1}{2\pi}\int_{B(p_l,\delta)} H_i^k e^{U_i^k}dx,\hspace{0.1cm} i=1,2.$$
Then by (\ref{ratio}) we have, for each $i=1,2$,
$$\frac{\sigma_{i1}}{\mu_1}=\frac{\sigma_{i2}}{\mu_2}=\cdots=\frac{\sigma_{iN}}{\mu_N},$$
and
$$2\pi(\sigma_{11}+\sigma_{12}+\cdots+\sigma_{1N})=\frac{a_{21}}{a_{12}}\rho_1,$$
$$2\pi(\sigma_{21}+\sigma_{22}+\cdots+\sigma_{2N})=\rho_2.$$
Thus
$$\sigma_{1l}=\frac{a_{21}}{a_{12}}\frac{\rho_1\mu_l}{2\pi\sum_{s=1}^N\mu_s},\quad \sigma_{2l}=\frac{\rho_2\mu_l}{2\pi\sum_{s=1}^N\mu_s},\quad l=1,\cdots,N.$$
For each $l$, the Pohozaev identity (see \cite{lei-y,linzhang1}) for $(\sigma_{1l},\sigma_{2l})$, written with the symmetric matrix $B$, reads
$$\sum_{i,j\in\{1,2\}} b_{ij} \sigma_{il}\sigma_{jl}=4\mu_l\sum_{i\in\{1,2\}} \sigma_{il}.$$
Thus if blowup does happen, $(\rho_1,\rho_2)$ satisfies
\begin{equation}
\frac{a_{11}a_{21}}{a_{12}} \rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2=8\pi\sum_{s=1}^{N} \mu_s \Big(\frac{a_{21}}{a_{12}}\rho_1+\rho_2\Big).
\end{equation}
Thus, if $\rho$ does not lie on the critical hypersurfaces $\Gamma_k$, the a priori estimate holds in this case.
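The bookkeeping behind the last constraint can be checked symbolically. The following sketch (ours, using Python/sympy; all symbol names are illustrative) substitutes $\sigma_{1l}=\frac{a_{21}}{a_{12}}\frac{\rho_1\mu_l}{2\pi\sum_s\mu_s}$ and $\sigma_{2l}=\frac{\rho_2\mu_l}{2\pi\sum_s\mu_s}$ into the Pohozaev identity written with the symmetric matrix $B$, and recovers the quadratic constraint on $(\rho_1,\rho_2)$:

```python
from sympy import symbols, pi, simplify

# coefficients, local strength mu_l, and S = sum of all mu_s
a11, a12, a21, a22 = symbols('a11 a12 a21 a22', positive=True)
r1, r2, mu, S = symbols('rho1 rho2 mu_l S', positive=True)

t = mu / (2 * pi * S)              # common factor mu_l / (2*pi*sum mu_s)
s1 = (a21 / a12) * r1 * t          # sigma_{1l}
s2 = r2 * t                        # sigma_{2l}

# symmetric matrix B of the transformed system
b11, b12, b22 = a11 * a12 / a21, a12, a22

# Pohozaev identity: sum_{ij} b_ij sigma_i sigma_j = 4 mu_l (sigma_1 + sigma_2)
pohozaev = b11 * s1**2 + 2 * b12 * s1 * s2 + b22 * s2**2 - 4 * mu * (s1 + s2)

# dividing by t^2 should reproduce the constraint on (rho1, rho2)
target = ((a11 * a21 / a12) * r1**2 + 2 * a21 * r1 * r2 + a22 * r2**2
          - 8 * pi * S * ((a21 / a12) * r1 + r2))
print(simplify(pohozaev / t**2 - target))  # 0
```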
\section{{Proof} of degree counting theorems}
The main idea of the proof of the degree counting theorem is to reduce the whole system to a single equation.
{\em Case one:} At least one of the $a_{ii}$ is positive.
We may assume $a_{11}>0$. Thanks to Theorem 1.1, the Leray-Schauder degree of (\ref{manifold}) for $\rho\in \mathcal{O}_k$ is equal to the degree for the following specific system corresponding to $(\rho_1,0)$:
\begin{equation}
\begin{aligned}
\Delta_g u_1+a_{11}\rho_1(h_1e^{u_1}-1)=0,\\
\Delta_g u_2+a_{21}\rho_1(h_1e^{u_1}-1)=0,
\end{aligned}
\end{equation}
where $\rho_1$ satisfies
$$8\pi n_k<a_{11}\rho_1<8\pi n_{k+1}.$$
It is easy to see that $(\rho_1,0)\in \mathcal{O}_k$. Using the degree counting formula of Chen-Lin \cite{chenlin2} for the single equation, we obtain the desired formula in this case:
$$d_\rho=\sum_{j=0}^{k}b_j, \hspace{0.2cm} {\rm where}\hspace{0.2cm} b_0=1.$$
{\em Case two:} $a_{ii}=0$ for both $i=1,2$.
Using $a_{12},a_{21}>0$, we reduce the degree counting formula for $\rho\in \mathcal{O}_k$ to the following system:
\begin{equation}
\begin{aligned}
\Delta_g u_1+a_{12}\rho_2(h_2e^{u_2}-1)=0,\\
\Delta_g u_2+a_{21}\rho_1(h_1e^{u_1}-1)=0,
\end{aligned}
\end{equation}
where $\rho_1,\rho_2$ satisfy
\begin{equation*}
8\pi n_k (\frac{a_{21}}{a_{12}}\rho_1+\rho_2)<2a_{21}\rho_1\rho_2<8\pi n_{k+1}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2).
\end{equation*}
It is easy to see that $(\rho_1,\rho_2)\in \mathcal{O}_k$. Now we consider the special case $\rho_2=\frac{a_{21}}{a_{12}}\rho_1$, $h_1=h_2=h$. In this case, the maximum principle gives $u_1=u_2+C$. Since both have average equal to 0, we have $u_1=u_2$. Then (3.2) reduces to a single equation:
$$\Delta_g u_1+a_{21}\rho_1(he^{u_1}-1)=0,$$
where $\rho_1$ satisfies
$$8\pi n_k<a_{21}\rho_1<8\pi n_{k+1}.$$
By applying the degree counting formula of Chen-Lin \cite{chenlin2} for the single equation, the desired formula can also be obtained in this case:
$$d_\rho=\sum_{j=0}^{k}b_j, \hspace{0.2cm} {\rm where}\hspace{0.2cm} b_0=1.$$
This completes the proof of Theorem 1.2.
\section{{Related} topics}
In this section, we will discuss some applications of the degree counting theorem.
For an open, bounded smooth domain $\Omega$ in $\mathbb{R}^2$, we are interested in the following Dirichlet problem to the Liouville equations:
\begin{equation}\label{dirichlet}
\left\{\begin{array}{ll}
\Delta u_i^*+\sum_{j=1}^{2} a_{ij}\rho_j\frac{h_j^*e^{u_j^*}}{\int_\Omega h_j^*e^{u_j^*}}=\sum_{l=1}^{N}4\pi\gamma_l\delta_{p_l}\quad {\rm in}\hspace{0.1cm} \Omega,\\
u_i^*|_{\partial \Omega}=0, \quad i=1,2,
\end{array}\right.
\end{equation}
where $h_1^*,h_2^*$ are smooth functions on $\bar{\Omega}$ and $p_1,\cdots,p_N$ are distinct points in the interior of $\Omega$.
We have the following existence result for (\ref{dirichlet}):
\begin{thm}
\label{dirichlet theorem}
Suppose
$$ 8\pi n_k(\frac{a_{21}}{a_{12}}\rho_1+\rho_2)<\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2<8\pi n_{k+1}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2),$$
then the Leray-Schauder degree $d_\rho$ for (\ref{dirichlet}) is
$$d_\rho=\sum_{j=0}^{k}b_j, \hspace{0.2cm} {\rm where}\hspace{0.2cm} b_0=1,$$
where $\mathcal{X}(\Omega)=1-g_e(\Omega)$ is the Euler characteristic of $\Omega$ and $g_e(\Omega)$ is the number of holes inside $\Omega$. In particular, if $\gamma_1,\cdots,\gamma_N \in \mathbb{N}^+$ and $\Omega$ is not simply connected, we have $d_\rho>0$ and (\ref{dirichlet}) always has a solution.
\end{thm}
\noindent{\bf Remark 4.1.}
{\em To prove Theorem 4.1, we need to show that $u^k$ never blows up near the boundary $\partial \Omega$. Since all the singular sources are in the interior of $\Omega$, this fact can be proved by a standard moving plane argument; we refer readers to \cite{linzhang2} for a more detailed proof. The remaining part of the proof of Theorem 4.1 is then the same as that of Theorem 1.2.}
If the Liouville system on $(M,g)$ is written as
\begin{equation}\label{special}
\Delta_g u_i^*+\sum_{j=1}^{2} a_{ij}h_j^*e^{u_j^*}=4\pi\sum_{l=1}^{N}\gamma_l\delta_{p_l},\quad i=1,2,
\end{equation}
with the same assumptions on $h_i^*$, with $vol(M)=1$, and with $A$ satisfying $(H)$ and invertible, then (\ref{special}) is a special case of (\ref{1.1}). Integrating both sides of (\ref{special}), we get
\begin{equation*}
\sum_{j=1}^{2} a_{ij}\int_M h_j^*e^{u_j^*}=4\pi\sum_{l=1}^{N}\gamma_l,\quad i=1,2.
\end{equation*}
Thus
\begin{equation}
\int_M h_i^*e^{u_i^*}=4\pi (\sum_{j=1}^{2} a^{ij})(\sum_{l=1}^{N}\gamma_l),\quad i=1,2,
\end{equation}
where $(a^{ij})$ is the inverse of $(a_{ij})$.
Setting
$$\rho_i= (\sum_{j=1}^{2}a^{ij})(4\pi\sum_{l=1}^{N}\gamma_l), \quad i=1,2,$$
we can write (\ref{special}) as
$$ \Delta_g u_i^*+\sum_{j=1}^{2} a_{ij}\rho_j(\frac{h_j^*e^{u_j^*}}{\int_M h_j^*e^{u_j^*}}-1)=\sum_{l=1}^{N}4\pi\gamma_l(\delta_{p_l}-1),\quad i=1,2.$$
If $M$ is a torus ($\mathcal{X}(M)=0$) and $\gamma_l\in \mathbb{N}^+$, we can compute the Leray-Schauder degree when $\sum_l \gamma_l$ is odd.
\begin{thm}
Suppose $M$ is a torus, $\gamma_l\in \mathbb{N}^+$ and $\sum_l \gamma_l$ is odd. Then the Leray-Schauder degree for (\ref{special}) is $\frac{1}{2}\prod_{l=1}^{N}(1+\gamma_l)$.
\end{thm}
\noindent{\bf Proof of Theorem 4.2:}
Since the genus of the torus $M$ is 1, $\mathcal{X}(M)=0$ and the generating function is
$$g(x)=\prod_{p=1}^{N}\frac{1-x^{\mu_p}}{1-x}=\prod_{p=1}^{N}(1+x+x^2+\cdots+x^{\gamma_p})$$
$$=1+b_1x+b_2x^2+\cdots+b_kx^k+\cdots+b_mx^m,$$
where $m=\sum_p\gamma_p$. Setting
$$\rho_i=(\sum_{j=1}^{2}a^{ij})(4\pi\sum_{p=1}^{N}\gamma_p),$$
it is easy to see that
$$ 8\pi n_k(\frac{a_{21}}{a_{12}}\rho_1+\rho_2)<\frac{a_{11}a_{21}}{a_{12}}\rho_1^2+2a_{21}\rho_1\rho_2+a_{22}\rho_2^2<8\pi n_{k+1}(\frac{a_{21}}{a_{12}}\rho_1+\rho_2),$$
for $n_k=(m-1)/2$ and $n_{k+1}=(m+1)/2$. Thus the Leray-Schauder degree $d_\rho$ can be computed as
$$d_\rho=\sum_{l=0}^{(m-1)/2}b_l.$$
Using $b_{m-l}=b_l$ for $l=0,1,\cdots,m$, we further write $d_\rho$ as
$$ d_\rho=\frac{1}{2}\sum_{l=0}^mb_l=\frac{g(1)}{2}=\frac{\prod_{p=1}^N(1+\gamma_p)}{2}.$$
Theorem 4.2 is established.
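As a numerical sanity check (ours, not part of the argument), one can expand the generating function $g(x)$ directly and compare the coefficient sum $\sum_{l=0}^{(m-1)/2}b_l$ with the closed form of Theorem 4.2; the helper names below are ours:

```python
from sympy import symbols, expand

x = symbols('x')

def degree_by_expansion(gammas):
    """Sum b_0 + ... + b_{(m-1)/2} of g(x) = prod_p (1 + x + ... + x^{gamma_p})."""
    m = sum(gammas)
    assert m % 2 == 1, "the formula requires sum of gamma_p to be odd"
    g = 1
    for gp in gammas:
        g *= sum(x**i for i in range(gp + 1))
    g = expand(g)
    return sum(g.coeff(x, l) for l in range((m - 1) // 2 + 1))

def degree_closed_form(gammas):
    """Theorem 4.2: d_rho = (1/2) * prod_p (1 + gamma_p)."""
    out = 1
    for gp in gammas:
        out *= 1 + gp
    return out // 2   # the product is even since some gamma_p is odd

# the truncated coefficient sum matches the closed form
for gammas in ([1], [3], [1, 2], [1, 1, 1], [2, 3, 4]):
    assert degree_by_expansion(gammas) == degree_closed_form(gammas)
```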
\vspace{1cm}
\section{Introduction}
The method of solution of the Lagrange partial differential equation is well
known, and can be found in almost every textbook on partial differential
equations \cite{Piaggio,Ayres,Carrier}. Our goal in this article is to show
how the factorization of a non-homogeneous first order differential operator
leads quite naturally to simple relations between the solutions of three
related types of Lagrange equations.
Let $E$ be an open subset of $\Re ^n,$ $L$ be a continuous vector field in $E
$, and denote by $C^k(E)$ the set of real valued functions which are
continuously differentiable of order $k$ on $E.$ The continuous vector field
$L$ may be viewed as a differential operator \cite{Abraham} from $C^1(E)$ to
$C^0(E)$. For each real valued continuous function $q:E\rightarrow \Re $ there
corresponds an operator $L+q:C^1(E)\rightarrow C^0(E)$ defined by $(L+q)\psi
=L\psi +q\psi ,\forall \psi \in C^1(E)$, where $L\psi $ is the Lie
derivative of the function $\psi $ with respect to the field $L,$ and $q\psi
$ is the usual product of two functions. Our goal in this work is to study
the relations between the solutions of the partial differential equations
$$(i)\;L\phi =0,\qquad (ii)\ (L+q)\psi =0,\qquad (iii)\;(L+q)\chi =b,$$
where $b$ is a continuous function on $E$. The approach followed here hinges
on factorization of the first order non-homogeneous differential operator $%
(L+q)$ to a product of a scalar function, a homogeneous differential
operator, and the reciprocal scalar function.
\section{Factorization of a First Order \protect\\Non-Homogeneous Operator}
Let $\eta \in C^1(E)$ be a non-zero solution of the differential equation
\begin{equation}
(L+q)\eta =0. \label{e1}
\end{equation}
Equivalently,$\,\eta $ is any element in the kernel of the linear operator $%
L+q$ that is different from zero. As a first step we assume that $\eta \,$%
has no zeros in $E$, and hence $\eta ^{-1}\,$exists and of class $C^1(E)$.\
The general case in which $\eta $ vanishes on a subset $\delta \subset E$
will be considered in section 4. We start by proving a useful operator
equality on which hinges the method of reducing one type of Lagrange
equations to another.
\begin{theorem}
In $C^1(E)$ the following operator equality holds
\begin{equation}
\eta L\eta ^{-1}=L+q. \label{e2}
\end{equation}
\end{theorem}
Proof: for every $\psi \in C^1(E)$,
\[
(\eta L\eta ^{-1})\psi =\eta (\eta ^{-1}L-\eta
^{-2}(L\eta ))\psi =(L-\eta ^{-1}(L\eta ))\psi =(L+q)\psi .
\]
We have used equation (\ref{e1}) in the last step.%
\setcounter{theorem}{0}
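In the one-dimensional case $L=d/dx$ the identity is easy to verify symbolically. The sketch below (ours; the choice $q=\sin x$ is only illustrative) builds $\eta =e^{-\int q\,dx}$ and checks that $\eta L\eta ^{-1}\psi =(L+q)\psi $ for a generic $\psi $:

```python
from sympy import symbols, Function, exp, integrate, diff, simplify, sin

x = symbols('x')
q = sin(x)                        # illustrative continuous q
eta = exp(-integrate(q, x))       # solves (L + q) eta = 0 for L = d/dx

psi = Function('psi')(x)
lhs = eta * diff(psi / eta, x)    # (eta L eta^{-1}) psi
rhs = diff(psi, x) + q * psi      # (L + q) psi
print(simplify(lhs - rhs))        # 0
```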
\begin{corollary}
Equality (\ref{e2}) is equivalent to
\begin{equation}
L=\eta ^{-1}(L+q)\eta \label{e3}
\end{equation}
which shows that all operators of the form $(L+q)\;$which are based on the
same field $L$ may be transformed to $L,$ and accordingly to each other.\
\end{corollary}
\begin{corollary}
Equality (\ref{e2}) shows that its left-hand side cannot depend
on the particular solution $\eta $ of equation (\ref{e1}), since its right-hand
side does not. Therefore, if $\xi $ is another solution of (\ref{e1}),
then by equation (\ref{e2}) and the analogous equation written for the solution
$\xi $, we have $\xi ^{-1}\eta L\eta ^{-1}\xi =L.$ This yields $L(\xi /\eta )=0.$
\end{corollary}
\begin{corollary}
From (\ref{e3}) we deduce that
\begin{equation}
L^k=\eta ^{-1}(L+q)^k\eta \,\,\,\,\,and\;(L+q)^k=\eta L^k\eta ^{-1}
\label{e4}
\end{equation}
where $k$ is a non-negative integer. If $L$ is invertible then the
latter relation holds for all integers. In relation (\ref{e4}) we assume
that $L$ is a $C^{k-1}$ vector field, $q$ is a $C^{k-1}$ function and
$\eta $ is a $C^k$ function.
\end{corollary}
\begin{corollary}
If (\ref{e1}) holds then it is easily checked that $(L+kq)\eta ^k=0$, and
hence
\begin{equation}
L=\eta ^{-k}(L+kq)\eta ^k \label{e5}
\end{equation}
In general, for any real number $\alpha $, we have
\begin{equation}
L=\mid \eta \mid ^{-\alpha }(L+\alpha q)\mid \eta \mid ^\alpha \label{e6}
\end{equation}
\end{corollary}
\begin{corollary}
If $Q$ is a real valued continuous function on $E$ then
\begin{equation}
\eta ^{-1}(L+q+Q)\eta =L+Q. \label{e7}
\end{equation}
\end{corollary}
\begin{corollary}
Take $Q=-\lambda \,(\lambda \in \Re )$ in (\ref{e7}) to obtain
\begin{equation}
(L-\lambda )\psi _\lambda =0\Leftrightarrow (L+q-\lambda )(\eta \psi
_\lambda )=0 \label{e8}
\end{equation}
The last relation states that if $\psi _\lambda $ is an eigenfunction
of the operator $L$ belonging to the eigenvalue $\lambda $, then $\eta \psi
_\lambda $ is an eigenfunction of the operator $L+q$ belonging to the same
eigenvalue $\lambda $.
\end{corollary}
\begin{example}
Take $L+q=\frac d{dx}+2x:C^1(R)\rightarrow C^0(R).\;$Since $\eta =e^{-x^2}$
is a solution of (\ref{e1}), we have
\[
\frac d{dx}+2x=e^{-x^2}\frac d{dx}e^{x^2}
\]
It is obvious that every complex number $\lambda $ is an eigenvalue of the
operator $\frac d{dx}$ to which an eigenfunction $\psi _\lambda =e^{\lambda
x}$ belongs. In accordance with the last corollary, it is easily checked
that $\lambda $ is an eigenvalue of the operator $\frac d{dx}+2x$ to which
the function $\eta \psi _\lambda =e^{-x^2+\lambda x}$ belongs.
\end{example}
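The computation in the example can be confirmed symbolically, e.g. with sympy (our sketch):

```python
from sympy import symbols, exp, diff, simplify

x, lam = symbols('x lambda')

eta = exp(-x**2)                 # solves (d/dx + 2x) eta = 0
psi_lam = exp(lam * x)           # eigenfunction of d/dx with eigenvalue lambda

def L_plus_q(f):
    """Apply the operator d/dx + 2x."""
    return diff(f, x) + 2 * x * f

# eta * psi_lam is an eigenfunction of d/dx + 2x with the same eigenvalue
print(simplify(L_plus_q(eta * psi_lam) - lam * eta * psi_lam))  # 0
```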
\section{On Totally Linear Partial Differential Equations}
Let $(x_1,...,x_n)$ be a global system of coordinates on the region $E$, in
which $L$ is expressed as
\begin{equation}
L=\sum_{k=1}^{n}a_k(x_1,...,x_n)\,\partial /\partial x_k, \label{e9}
\end{equation}
where the components $a_k$ are of class $C^0$ on $E$. We shall describe the
partial differential equation
\begin{equation}
(L+q)\chi =b \label{e10}
\end{equation}
where $q(x)$ and $b(x)$ are continuous functions on $E$, as totally linear.
The totally linear equation
\begin{equation}
(L+q)\psi =0, \label{e11}
\end{equation}
will be referred to as the non-homogeneous reduced equation corresponding to
(\ref{e10}), or simply as the non-homogeneous equation. Equation (\ref{e10})
is a special type of Lagrange equation. The method of solution of the Lagrange
equation (\ref{e10}), and consequently of equations (\ref{e11}) and (\ref{e12}),
is well known \cite{Piaggio}. However, we aim here to utilize equality (\ref{e2})
to reduce the non-homogeneous equation (\ref{e11}) to the homogeneous
equation
\begin{equation}
L\phi =0, \label{e12}
\end{equation}
and express its general solution in terms of a particular solution and the
general solution of (\ref{e12}). Alternatively, to express the general
solution of (\ref{e11}) in terms of $n$ particular solutions. The results we
have just pointed to are expressed in the following facts in which we assume
that $\eta $ is a solution of (\ref{e11}) on $E$ and that it has no zeros on
$E$.
F1. A function $\phi _0$ is a solution on $E$ of $L\phi =0$ if and only if $%
\psi _0=\eta \phi _0$ is a solution of $(L+q)\psi =0$ on $E$.
The proof is a direct consequence of corollary 1 in the previous section.
F2. Let $Q:E\rightarrow \Re $ be continuous. By corollary 5 in the previous
section, a function $\psi _0$ is a solution of $(L+Q)\psi =0$ on $E$ iff $\Psi
_0=\eta \psi _0$ is a solution of $(L+q+Q)\Psi =0$ on $E.$ In language more
familiar in the theory of differential equations, the
transformation $\Psi =\eta \psi $ reduces the last equation to $(L+Q)\psi =0$.
F3. If $\eta $ and $\xi $ are solutions of (\ref{e11}) then $\xi /\eta $ is
a solution of (\ref{e12}). Indeed, from equation (\ref{e11}) which is
satisfied by $\eta $ and $\xi $ we get $\eta L\xi =\xi L\eta $, and hence
\[
L(\xi /\eta )=\eta ^{-2}(\eta L\xi -\xi L\eta )=0.
\]
F4. The general solution of the reduced non-homogenous equation (\ref{e11})
is given by
\begin{equation}
\psi =\eta \;f(\phi _1,....,\phi _{n-1}), \label{e13}
\end{equation}
$\;$where
\begin{equation}
\phi _i=\phi _i(x_1,...,x_n)\;\;\;\;\;\;\;(i=1,....,n-1) \label{e14}
\end{equation}
are $(n-1)$ functionally independent solutions of (\ref{e12}).
Proof. According to the standard method in solving Lagrange equation \cite
{Piaggio}, the general integral of the homogeneous equation (\ref{e12}) is
given by
\begin{equation}
\phi =f(\phi _1,....,\phi _{n-1}), \label{e15}
\end{equation}
where $f$ is an arbitrary $C^1$function in its arguments. Now if $\psi $ is
a solution of (\ref{e11}) then by F3 $\psi /\eta $ is a solution of (\ref
{e12}), and hence it must be of the form (\ref{e15}). It follows that the
general solution of (\ref{e11}) is given by (\ref{e13}).
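As a concrete illustration of F4 (our example, not from the text), take $n=2$, $L=x\,\partial /\partial x+y\,\partial /\partial y$ and $q=1$ on the quadrant $x,y>0$; then $\eta =1/x$ solves (\ref{e11}), $\phi _1=y/x$ solves (\ref{e12}), and $\psi =\eta \,f(\phi _1)$ solves (\ref{e11}) for any $C^1$ function $f$:

```python
from sympy import symbols, simplify

x, y = symbols('x y', positive=True)

def L(u):
    """The field L = x d/dx + y d/dy."""
    return x * u.diff(x) + y * u.diff(y)

eta = 1 / x     # particular solution of (L + 1) psi = 0
phi1 = y / x    # solution of L phi = 0
assert simplify(L(eta) + eta) == 0 and simplify(L(phi1)) == 0

# general solution psi = eta * f(phi1); here f(s) = s**2 as a concrete choice
psi = eta * phi1**2
print(simplify(L(psi) + psi))  # 0
```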
F5. If
\begin{equation}
\eta (x_1,....,x_n),\;\eta _j(x_1,....,x_n)\;\;\;\;\;\;(j=1,....,n-1)
\label{e16}
\end{equation}
are functionally independent solutions of (\ref{e11}) then the ratios
\begin{equation}
\phi _j=\eta _j/\eta \;\;\;\;(j=1,....,n-1) \label{e17}
\end{equation}
are solutions of (\ref{e12}). It is easily seen that these ratios are
functionally independent. Hence
\begin{equation}
\phi =f(\eta _1/\eta ,....,\eta _{n-1}/\eta ) \label{e18}
\end{equation}
is the general solution of the homogeneous equation (\ref{e12}), and
\begin{equation}
\psi =\eta \;f(\eta _1/\eta ,....,\eta _{n-1}/\eta ) \label{e19}
\end{equation}
is the general solution of the non-homogeneous equation (\ref{e11}). If $%
\eta _n$ is a further solution of (\ref{e11}) then by (\ref{e19})
\begin{equation}
\eta _n/\eta =f_0(\eta _1/\eta ,....,\eta _{n-1}/\eta ), \label{e20}
\end{equation}
where $f_0$ is some specified $C^1$ function.
F6. Let $\chi _0$ be a solution of the totally linear equation (\ref{e10}).
If $\chi $ is any other solution of (\ref{e10}) then $(L+q)(\chi -\chi _0)=0.
$ Hence $\psi =\chi -\chi _0$ is a solution of (\ref{e11}), and consequently
must be of the form (\ref{e13}). It follows that the general solution of (%
\ref{e10}) is given by
\begin{equation}
\chi =\chi _0+\eta \;f(\phi _1,....,\phi _{n-1}). \label{e21}
\end{equation}
If a second particular solution $\chi _1$ of (\ref{e10}) is given, then $%
\eta =\chi _1-\chi _0$ is a solution of (\ref{e11}), and therefore the
general solution of (\ref{e10}) can be written as follows
\begin{equation}
(\chi -\chi _0)/(\chi _1-\chi _0)=f(\phi _1,....,\phi _{n-1}). \label{e22}
\end{equation}
If $\chi _2$ is a third solution of (\ref{e10}), then by (\ref{e22}), $(\chi
_2-\chi _0)/(\chi _1-\chi _0)$ is a solution of (\ref{e12}) and has
accordingly the form (\ref{e15}). If $(n+1)$ solutions $\chi
_0,\chi _1,.....,\chi _n$ of (\ref{e10}) are known, then
\[
\eta =\chi _1-\chi _0,\;\eta _1=\chi _2-\chi _0,....,\;\eta _{n-1}=\chi
_n-\chi _0
\]
are solutions of (\ref{e11}), and hence
\begin{equation}
\phi _1=\frac{\eta _1}\eta =\frac{\chi _2-\chi _0}{\chi _1-\chi _0}%
\;,......,\phi _{n-1}=\frac{\eta _{n-1}}\eta =\frac{\chi _n-\chi _0}{\chi
_1-\chi _0} \label{e23}
\end{equation}
$\;$are solutions of (\ref{e12}). If these ratios are functionally
independent then
\begin{equation}
\frac{\chi -\chi _0}{\chi _1-\chi _0}=f(\frac{\chi _2-\chi _0}{\chi _1-\chi
_0}\;,.....,\;\frac{\chi _n-\chi _0}{\chi _1-\chi _0}) \label{e24}
\end{equation}
$\;$gives the general solution $\chi $ of (\ref{e10}) in terms of $(n+1)$
particular solutions. It is clear of course that these $(n+1)$ solutions of (%
\ref{e10}) are certainly functionally dependent. If $\chi _{n+1}$ is a
further solution of (\ref{e10}) then by (\ref{e24}) $(\chi _{n+1}-\chi
_0)/(\chi _1-\chi _0)$ is some function of the solutions (\ref{e23}) of
equation (\ref{e12}).
\section{The General Case\ }
We proceed here to study the general case in which the solution $\eta $ of
equation (\ref{e1}) is defined only on an open subset $E_\eta \subset E,$
and vanishes on a closed subset $\delta \subset E_\eta $. This latter assumption
embodies most interesting cases which are encountered in practical examples,
such as $\eta $ vanishing at a finite or a countable set, or outside an open
subset of its domain of definition. In the present case the operator $\eta
L\eta ^{-1}$ exists only on $E_\eta -\delta .$ Hence the equality (\ref{e2})
must be replaced by the inclusion relation $(\eta L\eta ^{-1}\subset L+q);$
it is an equality only on $E_\eta -\delta .$\
Some minor modifications have to be made so that the corollaries of Theorem 1
in section 2 and the facts in section 3 conform to the new assumptions. As
an example, the fact F1 in section 3 must be modified as follows:
F1. Let $\eta $ be a solution of (\ref{e1}) on $E_\eta $ which vanishes on a
closed subset $\delta \subset E_\eta $.
(i) $\phi $ is a solution of (\ref{e12}) on $E_\phi $ $\Rightarrow \eta \phi
$ is a solution of (\ref{e11}) on $E_\phi \cap E_\eta .$
(ii) $\psi $ is a solution of (\ref{e11}) on $E_\psi $ $\Rightarrow \psi
/\eta $ is a solution of (\ref{e12}) on $E_\psi \cap E_\eta -\delta .$
Proof: (i) On $E_\phi \cap E_\eta $, where $\eta $ is defined, we have
\[
(L+q)(\eta \phi )=\phi (L+q)\eta +\eta L\phi .
\]
If $L\phi =0$ on $E_\phi $ then $(L+q)(\eta \phi )=0$ on $E_\eta \cap E_\phi
.$
(ii) On $E_\psi \cap E_\eta -\delta ,$ where $\psi /\eta $ is defined and is
of class $C^1,$ we have $L(\psi /\eta )=0$, as was shown in F3.
It must be noted that the last fact determines the smallest domains on which
$\phi $ and $\psi $ are defined. These domains may be extended by continuity
to larger domains.
The remaining facts and corollaries can be modified in a similar way.
Example. Consider the operator $L=x\partial /\partial x+y\partial /\partial
y+z\partial /\partial z$ which is continuous on $\Re ^3$. The general
solution of the equation $L\phi =0$ is $\phi =f(y/x,z/x),$ where $f$ is an
arbitrary $C^1$ function. The equation $(L+3/2)\psi =0$ admits the
particular solution $\eta =x^{-3/2}$ on $E_\eta =\{(x,y,z)\in \Re ^3:x>0\}.$
The function $\phi =x/y$ is a solution of $L\phi =0$ on $E_\phi
=\{(x,y,z)\in \Re ^3:y\neq 0\},$ and the function $\psi =\eta \phi
=x^{-1/2}y^{-1}$ is a solution of $(L+3/2)\psi =0$ on $\{(x,y,z)\in \Re
^3:y\neq 0,x>0\}=E_\eta \cap E_\phi .$ It is clear that we could have
adopted $\eta ^{\prime }=\left| x\right| ^{-3/2}$ and $\phi ^{\prime
}=\left| x\right| /y$ (since $L$ is linear), to obtain a solution $\psi
^{\prime }=\left| x\right| ^{-1/2}y^{-1}$ defined on $\Re
^3\setminus (\{x=0\}\cup \{y=0\})=E_{\eta ^{\prime }}\cap E_{\phi ^{\prime }}.$
If we replace $3/2$ by $1$ in the given equation, then $\eta =1/x,$ and hence
$\eta \phi =1/y$ is a solution of $(L+1)\psi =0$ on $E_\phi .$
Consider now the totally linear equation $(L+1)\chi =3x^2.$ Four particular
solutions of this equation are $\chi _0=x^2$ , $\chi _1=x^2+x^{-1},\;\chi
_2=x^2+y^{-1},\;\chi _3=x^2+z^{-1}.$ It can be checked that the formula (\ref
{e24}) yields, in accordance with (\ref{e21}), the general solution $\chi
=x^2+x^{-1}f(y/x,z/x).$
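These claims are easy to verify symbolically; the sketch below (ours) checks the four particular solutions and the general solution with the concrete choice $f(s,t)=st^2$ (any $C^1$ function $f$ would do):

```python
from sympy import symbols, simplify

x, y, z = symbols('x y z', positive=True)

def L(u):
    """The field L = x d/dx + y d/dy + z d/dz."""
    return x * u.diff(x) + y * u.diff(y) + z * u.diff(z)

# four particular solutions of (L + 1) chi = 3 x^2
for sol in (x**2, x**2 + 1/x, x**2 + 1/y, x**2 + 1/z):
    assert simplify(L(sol) + sol - 3 * x**2) == 0

# general solution chi = x^2 + x^{-1} f(y/x, z/x), here with f(s, t) = s * t**2
chi = x**2 + ((y / x) * (z / x)**2) / x
print(simplify(L(chi) + chi - 3 * x**2))  # 0
```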
\section{An Algebraic Comment}
Let $A$ denote the commutative real algebra formed by the set of all
functions defined in $E$. The set of solutions of the homogeneous equation $%
L\phi =0,$ namely $KerL,$ is clearly a subalgebra of $A$. The set of
solutions of non-homogeneous equation (\ref{e11}) is a coset of this
subalgebra determined by a particular solution of (\ref{e11}). If $\eta $ is
a particular solution of (\ref{e11}) defined everywhere in $E$, then $%
Ker(L+q)=\eta \;KerL.$ It is clear that $Ker(L+q)$ is a sub-vector space of $%
A$. The set of solutions of equation (\ref{e10}) is a coset of the
sub-vector space $Ker(L+q)$ determined by any particular solution of (\ref
{e10}).
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}}
\def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}}
\def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@}
\def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@}
\def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}}
\def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}}
\def\mathpalette\varlimsup@{}@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\textstyle \int}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\textstyle \oint}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\displaystyle \int }%
\def\diint{\mathop{\displaystyle \iint }}%
\def\diiint{\mathop{\displaystyle \iiint }}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\displaystyle \oint }%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}\fi
\fi
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\endequation{%
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\@ifnextchar*{\@tagstar}{\@tag}@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum
$$\global\@ignoretrue
\fi
}
\newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false
\def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1
}
\makeatother
\endinput
| {
"timestamp": "1999-06-12T23:57:24",
"yymm": "9906",
"arxiv_id": "math/9906085",
"language": "en",
"url": "https://arxiv.org/abs/math/9906085",
"abstract": "The relations between solutions of the three types of totally linear partial differential equations of first order are presented. The approach is based on factorization of a non-homogeneous first order differential operator to products consisting of a scalar function, a homogeneous first order differential operator and the reciprocal of the scalar function. The factorization procedure is utilized to show that all totally linear differential equations of first order can be transformed to each other, and in particular to a homogeneous one.",
"subjects": "Functional Analysis (math.FA)",
"title": "Comments on Lagrange Partial Differential Equation",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347823646073,
"lm_q2_score": 0.7279754489059774,
"lm_q1q2_score": 0.7093645981214735
} |
https://arxiv.org/abs/2108.07863 | Computations of relative topological coHochschild homology | Hess and Shipley defined an invariant of coalgebra spectra called topological coHochschild homology, and Bohmann-Gerhardt-Høgenhaven-Shipley-Ziegenhagen developed a coBökstedt spectral sequence to compute the homology of coTHH for coalgebras over the sphere spectrum. We construct a relative coBökstedt spectral sequence to study coTHH of coalgebra spectra over any commutative ring spectrum $R$. Further, we use algebraic structures in this spectral sequence to complete some calculations of the homotopy groups of relative topological coHochschild homology. | \section{Introduction}
We develop computational tools for studying topological coHochschild homology ($\mathrm{coTHH}$), an invariant of coalgebra spectra defined in recent work of Hess and Shipley \cite{hess2018invariance}. Work of Malkiewich \cite{malkiewich2017cyclotomic} and Hess-Shipley \cite{hess2018invariance} shows that for a simply connected space $X$, the $\mathrm{coTHH}$ of its suspension spectrum recovers the suspension spectrum of its free loop space. In particular,
\[
\mathrm{coTHH}(\Sigma^\infty_+ X) \simeq \Sigma^\infty_+ \mathcal{L}X \simeq \mathrm{THH}(\Sigma^\infty_+(\Omega X)),
\]
where the last equivalence comes from work of B\"okstedt and Waldhausen \cite{bokstedtwaldhausen}. Thus $\mathrm{coTHH}$ is relevant for studying the homology of free loop spaces, $\mathcal{L}X$, the main topic of the field of string topology \cite{chas1999string, cohen2003loop}. In fact, Bohmann-Gerhardt-Shipley use topological coHochschild homology to compute the homology of free loop spaces in \cite{bohmanngerhardtshipley}. Further, because $\mathrm{THH}(\Sigma^\infty_+(\Omega X))$ is directly related to the algebraic $K$-theory of the space $X$ via trace methods, $\mathrm{coTHH}$ also has applications for algebraic $K$-theory of spaces.
In 2018, Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen showed that there is a \textit{coB\"okstedt spectral sequence}, namely the Bousfield-Kan spectral sequence for the cosimplicial spectrum $\mathrm{coTHH}(C)^\bullet$, for $C$ a coalgebra spectrum over the sphere spectrum \cite{bohmann2018computational}. This spectral sequence has the classical coHochschild homology of Doi \cite{doi1981homological} as its $E_2$-term, and in the cases where it does converge we have:
\[
E_2^{*,*}= \mathrm{coHH}_*(H_*(C;k)) \implies H_*(\mathrm{coTHH}(C);k).
\]
In their work, however, these tools are only set up to study coalgebra spectra over the sphere spectrum, $\mathbb{S}$. Examples of this sort are closely related to suspension spectra of spaces, and recent work of P\'eroux-Shipley shows that such examples are fairly limited \cite{P_roux_2019}. In this paper, we broaden these tools to apply to $\mathrm{coTHH}$ for coalgebras over any commutative ring spectrum.
We call the spectral sequence that allows us to study the topological coHochschild homology of coalgebras over an arbitrary commutative ring spectrum $R$ the \textit{relative coB\"okstedt spectral sequence}:
\begin{theorem}
Let $E$ and $R$ be commutative ring spectra, $C$ an $R$-coalgebra spectrum that is cofibrant as an $R$-module, and $N$ a $(C,C)$-bicomodule spectrum. If $E_*(C)$ is flat over $E_*(R)$, then there exists a Bousfield-Kan spectral sequence for the cosimplicial spectrum $\mathrm{coTHH}^R(N,C)^\bullet$ that abuts to $E_{t-s}(\mathrm{coTHH}^R(N,C))$ with $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^{E_*(R)}_{s,t}(E_*(N), E_*(C))
\]
given by the classical coHochschild homology of $E_*(C)$ with coefficients in $E_*(N)$.
\end{theorem}
Further, we identify conditions for convergence of this spectral sequence. In particular, in the case $E = \mathbb{S}$, if for every $s$ and $i$ there exists some $r$ such that $E^{s,s+i}_r = E^{s,s+i}_\infty$, then the relative coB\"okstedt spectral sequence converges completely to $\pi_*(\mathrm{coTHH}^R(N,C))$.
By work of Angeltveit-Rognes, the classical B\"okstedt spectral sequence for a commutative ring spectrum has the structure of a spectral sequence of Hopf algebras under certain flatness conditions \cite{angeltveit2005hopf}. Bohmann-Gerhardt-Shipley show that under appropriate coflatness conditions, the coB\"okstedt spectral sequence for a cocommutative coalgebra spectrum has what is called a \textit{$\square$-Hopf algebra structure}, an analog of a Hopf algebra structure for working over a coalgebra \cite{bohmanngerhardtshipley}. We show that the relative coB\"okstedt spectral sequence has a similar structure and use it to prove the following results.
\begin{theorem}
For a field $k$, let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module with $\pi_*(C) \cong \Lambda_k(y)$ for $|y|$ odd and greater than 1. Then the relative coB\"okstedt spectral sequence collapses and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Lambda_k(y) \otimes k[w]
\]
as graded $k$-modules for $|w|=|y|-1$.
\end{theorem}
\begin{theorem}
Let $k$ be a field and let $p=\mathrm{char}(k)$, allowing $p=0$. Let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module with $\pi_*(C) \cong \Lambda_k(y_1, y_2)$ for $|y_1|, |y_2|$ both odd and greater than 1. If $p \ne 2$ and $p^m \ne \frac{|y_2|-1}{|y_1|-1}+1$ for all $m \ge 1$,
then the relative coB\"okstedt spectral sequence collapses and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Lambda_k(y_1, y_2) \otimes k[w_1, w_2],
\]
as graded $k$-modules for $|w_i| = |y_i|-1$. Moreover, the same result holds at the prime $p=2$ with the additional assumption that $p^m \ne \frac{|y_2|-1}{|y_1|-1}$ for $m \ge 1$.
\end{theorem}
\noindent Further, in a result analogous to the work of Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen \cite{bohmann2018computational} we have
\begin{theorem}
Let $k$ be a field and let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module spectrum,
and whose homotopy coalgebra is
\[
\pi_*(C) = \Gamma_k[x_1, x_2, \ldots],
\]
where the $x_i$ are in non-negative even degrees and there are only finitely many of them in each degree. Then the relative coB\"okstedt spectral sequence calculating the homotopy groups of the topological coHochschild homology of $C$ collapses at $E_2$, and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Gamma_k[x_1, x_2, \ldots] \otimes \Lambda_k(z_1, z_2, \ldots)
\]
as $k$-modules, with $z_i$ in degree $|x_i|-1$.
\end{theorem}
\subsection{Organization}
The paper is organized as follows. Section 2 provides background on coalgebras in spectra and (topological) coHochschild homology. In Section 3 we construct the relative coB\"okstedt spectral sequence. Section 4 examines the algebraic structures of this spectral sequence based on work of \cite{bohmanngerhardtshipley}, and Section 5 discusses some explicit calculations of topological coHochschild homology.
\subsection{Acknowledgements}
These results are part of the author's dissertation work, and as such the author would like to thank her advisor, Teena Gerhardt, for her help and guidance over the years. In addition, discussions with Maximilien P\'eroux and \"Ozg\"ur Bay\i nd\i r about their work with coalgebras and conversations with Gabe Angelini-Knoll were particularly helpful.
\section{Topological coHochschild homology}
We recall the definition of a coalgebra for the general setting of a symmetric monoidal category in order to introduce coalgebras in spectra and consider examples.
\begin{definition}
Let $(\mathcal{D}, \otimes, 1)$ be a symmetric monoidal category. A (coassociative, counital) \textbf{coalgebra} in $\mathcal{D}$ is an object $C \in \mathcal{D}$ equipped with a comultiplication $\Delta: C \to C \otimes C$ and a counit morphism $\epsilon: C \to 1$ such that the following coassociativity and counitality diagrams commute:
\begin{minipage}{.45\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
C \arrow{r}{\Delta} \arrow[swap]{d}{\Delta} \& C \otimes C \arrow{d}{\text{Id } \otimes \Delta} \\%
C \otimes C \arrow{r}{\Delta \otimes \text{Id }} \& C \otimes C \otimes C
\end{tikzcd}
\]}
\end{minipage}
\begin{minipage}{.4\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
C \arrow{r}{\Delta} \arrow[swap]{d}{\Delta} \arrow[swap]{dr}{\text{Id}} \& C \otimes C \arrow{d}{\text{Id } \otimes \epsilon} \\%
C \otimes C \arrow{r}{\epsilon \otimes \text{Id }} \& 1 \otimes C \cong C \cong C \otimes 1
\end{tikzcd}
\]}
\end{minipage}
\end{definition}
Examples of coalgebras over a field include the polynomial, exterior, and divided power coalgebras.
\begin{example}
For a field $k$, the polynomial coalgebra $k[w_1, w_2, \ldots]$ for $w_i$ in even degree is the vector space with basis given by $\{w_i^j \}$ for $j \ge 0$ and $i \ge 1$. It has coproduct
\begin{align*}
\Delta(w_i^j) &= \sum_k \binom{j}{k} w_i^k \otimes w_i^{j-k}
\end{align*}
and counit
\begin{align*}
\epsilon (w_i^j) &= \begin{cases} 1 & \text{if } j=0\\ 0 & \text{if } j \ne 0. \end{cases}
\end{align*}
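For instance, in the lowest nontrivial case, since $\binom{2}{0} = \binom{2}{2} = 1$ and $\binom{2}{1} = 2$, the coproduct gives
\[
\Delta(w_i^2) = 1 \otimes w_i^2 + 2\, w_i \otimes w_i + w_i^2 \otimes 1.
\]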
\end{example}
\begin{example}
For a field $k$, the exterior coalgebra $\Lambda_k(y_1, y_2, \ldots)$ for $y_i$ in odd degrees is the vector space with basis given by $\{1, y_i \}$ for $i \ge 1$, which has coproduct
\begin{align*}
\Delta(y_i) &= 1 \otimes y_i + y_i \otimes 1\\
\Delta(1) &= 1 \otimes 1
\end{align*}
and counit
\begin{align*}
\epsilon(y_i) &= 0\\
\epsilon(1)&=1
\end{align*}
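In particular, the generators $y_i$ are primitive, and coassociativity is a quick check:
\[
(\Delta \otimes \text{Id})\Delta(y_i) = 1 \otimes 1 \otimes y_i + 1 \otimes y_i \otimes 1 + y_i \otimes 1 \otimes 1 = (\text{Id} \otimes \Delta)\Delta(y_i).
\]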
\end{example}
\begin{example}
For a field $k$, the divided power coalgebra $\Gamma_k[x_1, x_2, \ldots]$ with $x_i$ in even degrees is the vector space with basis given by $\{ \gamma_j(x_i) \}$ for $j \ge 0$ and $i \ge 1$. It has coproduct
\begin{align*}
\Delta(\gamma_j (x_i)) &= \sum_{a+b=j} \gamma_a(x_i) \otimes \gamma_b(x_i)
\end{align*}
where $\gamma_0(x_i)= 1, \gamma_1(x_i)=x_i$, and counit
\begin{align*}
\epsilon (\gamma_j(x_i)) &= \begin{cases} 1 & \text{if } j=0\\ 0 & \text{if } j \ne 0. \end{cases}
\end{align*}
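For example,
\[
\Delta(\gamma_2(x_i)) = 1 \otimes \gamma_2(x_i) + x_i \otimes x_i + \gamma_2(x_i) \otimes 1.
\]
Over a field of characteristic $0$, the assignment $\gamma_j(x_i) \mapsto w_i^j/j!$ identifies this coalgebra with the polynomial coalgebra, while in positive characteristic the two need not agree.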
\end{example}
Because it will be useful to have the ``dual'' notion of modules on which $C$ \textit{coacts}, we also recall the following definition of a comodule.
\begin{definition}
Let $R$ be a commutative ring and $C$ an $R$-coalgebra. Then $N$ is a \textbf{right $C$-comodule} if it is an $R$-module together with an $R$-linear map $\gamma: N \to N \otimes_R C$ that is coassociative and counital, so that the following diagrams commute:
\begin{minipage}{.45\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
N \arrow{r}{\gamma} \arrow[swap]{d}{\gamma} \& N \otimes_R C \arrow{d}{\text{Id } \otimes \Delta} \\%
N \otimes_R C \arrow{r}{\gamma \otimes \text{Id }} \& N \otimes_R C \otimes_R C
\end{tikzcd}
\]}
\end{minipage}
\begin{minipage}{.4\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
N \arrow{r}{\gamma} \arrow[swap]{dr}{\text{Id}} \& N \otimes_R C \arrow{d}{\text{Id } \otimes \epsilon} \\%
\& N
\end{tikzcd}
\]}
\end{minipage}
\noindent The map $\gamma$ is referred to as a \textit{right $C$-coaction}. Similarly, a \textbf{left $C$-comodule} is an $R$-module together with an $R$-linear map $\psi: N \to C \otimes_R N$ that is coassociative and counital, so that the following diagrams commute:
\begin{minipage}{.45\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
N \arrow{r}{\psi} \arrow[swap]{d}{\psi} \& C \otimes_R N \arrow{d}{\Delta \otimes \text{Id }} \\%
C \otimes_R N \arrow{r}{\text{Id } \otimes \psi} \& C \otimes_R C \otimes_R N
\end{tikzcd}
\]}
\end{minipage}
\begin{minipage}{.4\textwidth}
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
N \arrow{r}{\psi} \arrow[swap]{dr}{\text{Id}} \& C \otimes_R N \arrow{d}{\epsilon \otimes \text{Id }} \\%
\& N
\end{tikzcd}
\]}
\end{minipage}
\noindent The map $\psi$ is referred to as a \textit{left $C$-coaction}.
\end{definition}
The right and left comodule structures together can be used to define a bicomodule:
\begin{definition}
For $R$-coalgebras $C, D$, a $(C,D)$-\textbf{bicomodule} $N$ is a left $C$-comodule with left coaction $\psi: N \to C \otimes_R N$ and right $D$-comodule with right coaction $\gamma: N \to N \otimes_R D$ that satisfies the following commutative diagram:
{\large
\[ \begin{tikzcd}[ampersand replacement=\&]
N \arrow{r}{\gamma} \arrow[swap]{d}{\psi} \& N \otimes_R D \arrow{d}{\psi \otimes \text{Id }} \\%
C \otimes_R N \arrow{r}{\text{Id } \otimes \gamma} \& C \otimes_R N \otimes_R D
\end{tikzcd}
\]}
\end{definition}
Since we want to apply these definitions in the topological setting, we introduce our notation for coalgebras in spectra in the following example.
\begin{example}
A \textbf{coalgebra spectrum} is a coalgebra in one of the symmetric monoidal categories of spectra. For a commutative ring spectrum $R$, an \textbf{$R$-coalge\-bra spectrum} $C$ is a coalgebra in the symmetric monoidal category of $R$-modules. It has comultiplication $\Delta: C \to C \wedge_R C$ and counit $\epsilon: C \to R$, satisfying the coassociativity and counitality conditions.
\end{example}
A familiar example of a coalgebra in spectra comes from suspension spectra, since for a space $X$, the diagonal map $X \to X \times X$ on topological spaces induces a comultiplication map on the suspension spectrum $\Sigma_+^\infty (X)$ making $\Sigma_+^\infty (X)$ into a coalgebra spectrum.
However, most spectra do not have diagonal maps and P\'eroux-Shipley \cite{P_roux_2019} show that examples of this form are quite limited. In particular, they prove that all coalgebra spectra over $\mathbb{S}$ are cocommutative in monoidal categories of spectra such as symmetric spectra, orthogonal spectra, $\mathbb{S}$-modules, etc. As we saw in the above example, some coalgebras over the sphere spectrum come from suspension spectra; in fact, P\'eroux-Shipley further show that in model categories all $\mathbb{S}$-coalgebras are closely related to suspension spectra.
This rigid structure of $\mathbb{S}$-coalgebras thus provides motivation for studying other kinds of coalgebra spectra.
Examples of these kinds of coalgebra spectra include $H\mathbb{F}_p$-coalgebras such as
$H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p$,
which we will examine later on in further detail. We will study these coalgebras via the invariant \textit{topological coHochschild homology}.
Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen \cite{bohmann2018computational} define (topological) coHochschild homology for any general symmetric monoidal category, but classical coHochschild homology as defined by Doi \cite{doi1981homological} can be recovered from this definition by considering the category of coalgebras over a field.
\begin{definition}
Let $(\mathcal{D}, \otimes, 1)$ be a symmetric monoidal model category and let $C \in \mathcal{D}$ be a coalgebra with coassociative comultiplication $\Delta: C \to C \otimes C$ and counit $\epsilon: C \to 1$. Further, let $N$ be a $(C,C)$-bicomodule with left and right coactions $\psi: N \to C \otimes N$ and $\gamma: N \to N \otimes C$.
Define $\mathrm{coTHH}(N, C)^\bullet$ to be the cosimplicial object given in cosimplicial degree $r$ by $\mathrm{coTHH}(N, C)^r = N \otimes C^{\otimes r}$,
with coface maps
\begin{align*}
\delta_i = \begin{cases}
\gamma \otimes \text{Id}^{\otimes r} &i=0\\
\text{Id}^{\otimes i} \otimes \Delta \otimes \text{Id}^{\otimes (r-i)} &0 < i \le r\\
\Tilde{t} \circ (\psi \otimes \text{Id}^{\otimes r}) &i=r+1
\end{cases}
\end{align*}
where $\Tilde{t}$ is the map that twists the first factor to the last, and with codegeneracy maps $\sigma_i: N \otimes C^{\otimes (r+1)} \to N \otimes C^{\otimes r}$ for $0 \le i \le r$
\[
\sigma_i = \text{Id}^{\otimes (i+1)} \otimes \epsilon \otimes \text{Id}^{\otimes (r-i)}.
\]
This gives a cosimplicial object of the form
\begin{align*}
&\vdots\\
N \otimes &C \otimes C\\
\uparrow \downarrow &\uparrow \downarrow \uparrow\\
N &\otimes C\\
\uparrow &\downarrow \uparrow\\
&N
\end{align*}
The \textbf{topological coHochschild homology} of the coalgebra $C$ with coefficients in $N$ is defined by
\[
\mathrm{coTHH}(N,C) = \Tot(\mathcal{R} \mathrm{coTHH}(N,C)^\bullet)
\]
where $\mathcal{R}$ is the Reedy fibrant replacement and $\Tot$ represents the totalization.
\end{definition}
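To unpack the definition in the lowest cosimplicial degree: for $r = 0$, the two coface maps $N \to N \otimes C$ and the codegeneracy $N \otimes C \to N$ are
\[
\delta_0 = \gamma, \qquad \delta_1 = \Tilde{t} \circ \psi, \qquad \sigma_0 = \text{Id} \otimes \epsilon,
\]
so the bottom of the cosimplicial object records the two coactions on $N$, split by the counit.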
The above definition gives topological coHochschild homology with coefficients in a $(C,C)$-bicomodule $N$, but when we consider $C$ as a bicomodule over itself, we write $\mathrm{coTHH}(C)$ for coefficients in $C$.
We will further decorate the notation with $\mathrm{coTHH}^R(C)$ when we consider the topological coHochschild homology of $C$ relative to $R$, i.e. $\mathrm{coTHH}$ of an $R$-coalgebra $C$ for $R$ a commutative ring spectrum.
As mentioned above, working in the category of coalgebras over a field recovers the classical $\mathrm{coHH}$ of Doi \cite{doi1981homological}. Our general convention will be to specifically refer to this construction as \textit{topological} coHochschild homology when we are considering as input some coalgebra \textit{spectrum}. For instance, we will refer to the work of Hess-Parent-Scott \cite{hess2009cohochschild} as studying \textit{coHochschild homology} of differential graded coalgebras (dg-coalgebras) over a field. However, these constructions are all coming from the same general framework by work of Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen \cite{bohmann2018computational}.
\section{Construction of a relative coB\"okstedt spectral sequence}
Given the homology theory $E_*$ associated to the commutative ring spectrum $E$, recall the classical B\"okstedt spectral sequence \cite{bokstedt1985topological} for studying the topological Hochschild homology of an $R$-algebra:
\begin{theorem}[{\cite[Thm X.2.9]{elmendorf1995rings}}]
Suppose $E$ and $R$ are commutative ring spectra,
$A$ is an $R$-algebra, and
$M$ is a cell $(A,A)$-bimodule. If $E_*(A)$
is flat over $E_*(R)$,
then there exists a spectral sequence
\[
E^2_{p,q} = \mathrm{HH}^{E_*(R)}_{p,q}(E_*(A),E_*(M)) \implies E_{p+q}(\mathrm{THH}^R(A,M)).
\]
\end{theorem}
This result follows from the general spectral sequence that arises from the skeletal filtration of the simplicial spectrum $\mathrm{THH}^R(A,M)_\bullet$. Similarly, associated to a cosimplicial spectrum is a Bousfield-Kan spectral sequence \cite{bousfield1972homotopy}, which, when applied to the cosimplicial spectrum $\mathrm{coTHH}(C)^\bullet$, yields a spectral sequence that Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen \cite{bohmann2018computational} call the \textit{coB\"okstedt spectral sequence}. They identify its $E_2$-term as the classical coHochschild homology of coalgebras in the sense of Doi \cite{doi1981homological}:
\begin{theorem}[{\cite[Thm 4.1]{bohmann2018computational}}]
Let $k$ be a field and $C$ a coalgebra spectrum that is cofibrant as a spectrum. Then the Bousfield-Kan spectral sequence for the cosimplicial spectrum $\mathrm{coTHH}(C)^\bullet$ gives a coB\"okstedt spectral sequence for calculating $H_{t-s}(\mathrm{coTHH}(C);k)$ with $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^k_{s,t}(H_*(C;k))
\]
given by the classical coHochschild homology of $H_*(C;k)$ as a graded $k$-module.
\end{theorem}
The Bousfield-Kan spectral sequence does not always converge, so the authors specify conditions under which the coB\"okstedt spectral sequence will converge completely.\footnote{For instance, the coB\"okstedt spectral sequence converges when $C$ is a suspension spectrum $\Sigma_+^\infty X$ for simply connected $X$ \cite{bohmann2018computational}.} Note that this spectral sequence computes the ordinary homology over a field of $\mathrm{coTHH}(C)$ where $C$ is an $\mathbb{S}$-coalgebra.
We want a relative version of this theorem in order to study $R$-coalgebra spectra more broadly, giving the Bousfield-Kan spectral sequence computing the homology of $\mathrm{coTHH}^R(C)$. As in the $\mathrm{THH}$ setting, we would expect that some flatness conditions must be satisfied.
We first formally state and prove that this \textit{relative coB\"okstedt spectral sequence} exists, and then identify a corollary that will be useful for computations. Further, we will describe conditions for convergence of this spectral sequence. Note in particular that this result holds for any generalized homology theory in addition to being over the more general ring spectrum $R$.
\begin{theorem}
Let $E$ and $R$ be commutative ring spectra, $C$ an $R$-coalgebra spectrum that is cofibrant as an $R$-module, and $N$ a $(C,C)$-bicomodule spectrum. If
$E_*(C)$ is flat over $E_*(R)$, then there exists a Bousfield-Kan spectral sequence for the cosimplicial $R$-module $\mathrm{coTHH}^R(N, C)^\bullet$ that abuts to $E_{t-s}(\mathrm{coTHH}^R(N,C))$ with $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^{E_*(R)}_{s,t}(E_*(N), E_*(C))
\]
given by the classical coHochschild homology of $E_*(C)$ with coefficients in $E_*(N)$.
\end{theorem}
\begin{proof}
To begin, as in \cite{bohmann2018computational}, we recall the construction of the Bousfield-Kan spectral sequence \cite{bousfield1972homotopy} for a Reedy fibrant cosimplicial $R$-module $X^\bullet$.
Let $\Delta$ be the cosimplicial space with the standard $n$-simplex $\Delta^n$ as its $n^{th}$ level. The category of $R$-modules is cotensored over pointed spaces (see e.g.\ \cite{barnes2020introduction}), and the notation $D^K$ will be used for the cotensor of an $R$-module $D$ with a simplicial space $K$.
So for a Reedy fibrant cosimplicial $R$-module $X^\bullet$ the totalization of $X^\bullet$ is given by:
\[
\Tot(X^\bullet) = eq\big(\prod_{n \ge 0} (X^n)^{\Delta^n} \rightrightarrows \prod_{\alpha \in \Delta([a], [b])} (X^b)^{\Delta^a}\big).
\]
\noindent Let $sk_s\Delta \subset \Delta$ be the cosimplicial subspace with $n^{th}$ level $sk_s\Delta^n$ that is the $s$-skeleton of the $n$-simplex. Then one can define
\[
\Tot_s(X^\bullet) = eq\big(\prod_{n \ge 0} (X^n)^{sk_s\Delta^n} \rightrightarrows \prod_{\alpha \in \Delta([a], [b])} (X^b)^{sk_s\Delta^a}\big).
\]
The inclusions $sk_s\Delta \hookrightarrow sk_{s+1}\Delta$ then induce a tower of fibrations with fibers $F_i$:
\begin{align*}
&\cdots \to &\Tot_s(X^\bullet) &\xlongrightarrow{p_s} &\Tot_{s-1}(X^\bullet) &\xlongrightarrow{p_{s-1}} &\Tot_{s-2}(X^\bullet) &\to &\cdots &\xlongrightarrow{p_{1}} &\Tot_0(X^\bullet) \cong X^0.\\
& &\uparrow{i_s} & &\uparrow{i_{s-1}} & &\uparrow{i_{s-2}} & & & &\uparrow{i_0}\\
& &F_s & &F_{s-1} & &F_{s-2} & & & &F_0
\end{align*}
\noindent We then have an associated exact couple
\[
\begin{tikzcd}
\pi_*(\Tot_*(X^\bullet)) \arrow{rr}{p_*} && \pi_*(\Tot_*(X^\bullet)) \arrow{dl}{\partial}\\
& \pi_*(F_*) \arrow{ul}{i_*}
\end{tikzcd}
\]
that yields a
cohomological spectral sequence $\{E_r, d_r\}$ with differentials
\[
d_r: E^{s,t}_r \to E^{s+r, t+r-1}_r.
\]
We now want to identify the fiber $F_s$.
Recall that the normalized complex $N^s X^\bullet$ is defined to be:
\begin{align*}
N^sX^\bullet = \bigcap^{s-1}_{i=0} \text{ker}(\sigma_i: X^s \to X^{s-1})
\end{align*}
for codegeneracy maps $\sigma_i$ as given by the cosimplicial structure. Then each fiber $F_s$ can be identified with
\[
F_s = \Omega^s(N^s X^\bullet).
\]
Thus the $E_1$-term of the spectral sequence is given by
\begin{align*}
E^{s,t}_1 &= \pi_{t-s}(F_s) \\
&\cong \pi_{t-s}(\Omega^s (N^s X^\bullet))\\
&\cong \pi_t(N^s X^\bullet)\\
&\cong N^s \pi_t(X^\bullet)
\end{align*}
with differential $d_1 : N^s \pi_t(X^\bullet) \to N^{s+1}\pi_t(X^\bullet)$. This map can then be identified with $\sum_i (-1)^i \pi_t(\delta^i)$, where the $\delta^i$ denote the coface maps of the cosimplicial object $X^\bullet$, and we have
\begin{align*}
&H^s(N^\bullet \pi_t(X^\bullet)) \cong H^s(\pi_t(X^\bullet))\\
\implies &E^{s,t}_2 \cong H^s(\pi_t(X^\bullet), \textstyle\sum_i (-1)^i \pi_t(\delta^i)).
\end{align*}
Here we care about the specific case when $X^\bullet = \mathcal{R}(E \wedge \mathrm{coTHH}^R(N, C)^\bullet)$, where $\mathcal{R}$ indicates the Reedy fibrant replacement,
and so
\begin{align*}
\pi_*(X^\bullet) = \pi_*(\mathcal{R}(E \wedge \mathrm{coTHH}^R(N, C)^\bullet)) \cong \pi_*(E \wedge \mathrm{coTHH}^R(N,C)^\bullet).
\end{align*}
\noindent Recall that $\mathrm{coTHH}^R(N, C)^\bullet$ is the cosimplicial spectrum with $n^{th}$ level
\[
\mathrm{coTHH}^R(N, C)^n = N \wedge_R \underbrace{C \wedge_R \cdots \wedge_R C}_{n},
\]
whose coface maps are built from the comultiplication of $C$ and the $C$-coactions on $N$, and whose codegeneracy maps are induced by the counit of $C$,
so when we take $\pi_*(E \wedge - )$, at the $n^{th}$ level we see that:
\small
\begin{align*}
\pi_*(E \wedge N \wedge_R C \wedge_R \cdots \wedge_R C)
&\cong \pi_*(E \wedge N \wedge_{E \wedge R} (E \wedge R \wedge_R C) \wedge_{E \wedge R} \cdots \wedge_{E \wedge R} (E \wedge R \wedge_R C))\\
&\cong \pi_*(E \wedge N \wedge_{E \wedge R} (E \wedge C) \wedge_{E \wedge R} \cdots \wedge_{E \wedge R} (E \wedge C))\\
&\cong \pi_*(E \wedge N) \otimes_{\pi_*(E \wedge R)} \pi_*(E \wedge C ) \otimes_{\pi_*(E \wedge R)} \cdots \otimes_{\pi_*(E \wedge R)} \pi_*(E \wedge C)\\
&\cong E_*(N) \otimes_{E_*(R)} E_*(C) \otimes_{E_*(R)} \cdots \otimes_{E_*(R)} E_*(C)
\end{align*}
\normalsize
\noindent where the third isomorphism follows since
$\pi_*(E \wedge C)$ is flat over $\pi_*(E \wedge R)$ by hypothesis, and therefore
\begin{align*}
\pi_* \mathcal{R}(E \wedge \mathrm{coTHH}^R(N,C)^n) &\cong \pi_* (E \wedge \mathrm{coTHH}^R(N,C)^n)\\
&\cong E_*(N) \otimes_{E_*(R)} E_*(C)^{\otimes_{E_*(R)}n}.
\end{align*}
Then $\sum_i (-1)^i \pi_*(\delta^i)$ gives the coHochschild differential under this identification, and thus we get the coHochschild complex:
\begin{align*}
E^{s,t}_2 &\cong H^s(\pi_t(X^\bullet), \textstyle\sum_i (-1)^i \pi_t(\delta^i))\\
&\cong H^s\big(E_*(N) \otimes_{E_*(R)} E_*(C)^{\otimes_{E_*(R)}\bullet}\big)_t\\
&\cong \mathrm{coHH}^{E_*(R)}_{s,t}(E_*(N), E_*(C)).
\end{align*}
Therefore the result is the Bousfield-Kan spectral sequence with $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^{E_*(R)}_{s,t}(E_*(N), E_*(C))
\]
abutting to $E_{t-s}(\mathrm{coTHH}^R(N,C))$.
\end{proof}
Because it will be particularly useful in future examples, we state the following special case when $E=\mathbb{S}$
as a corollary:
\hypertarget{corollary}{}
\begin{corollary}\label{Cor-pi}
Let $R$ be a commutative ring spectrum and $C$ an $R$-coalgebra spectrum.
If $\pi_*(C)$ is flat over $\pi_*(R)$, then there exists a Bousfield-Kan spectral sequence that abuts to $\pi_{t-s}(\mathrm{coTHH}^R(C))$ with $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^{\pi_*(R)}_{s,t}(\pi_*(C))
\]
given by the classical coHochschild homology of $\pi_*(C)$.
\end{corollary}
Since the Bousfield-Kan spectral sequence does not converge in general, we must specify the conditions required for convergence. Based on work of Bousfield-Kan \cite{bousfield1972homotopy} and Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen \cite{bohmann2018computational}, we have the following convergence result.
\begin{proposition}
If for every $s$ there exists some $r$ such that $E^{s,s+i}_r = E^{s,s+i}_\infty$ for all $i$, then the relative coB\"okstedt spectral sequence for $\mathrm{coTHH}^R(C)$ converges completely to
\[
\pi_* \text{Tot} \mathcal{R}(E \wedge \mathrm{coTHH}^R(C)^\bullet).
\]
\end{proposition}
\noindent Conditions for complete convergence can be found in Goerss-Jardine \cite{goerss2009simplicial}. Further, from the natural construction of a map $\text{Hom}(X, Y) \wedge Z \to \text{Hom}(X, Y \wedge Z)$ we get a natural map
\[
P: E \wedge \text{Tot}(\mathcal{R}\mathrm{coTHH}^R(C)^\bullet) \to \text{Tot} \mathcal{R}(E \wedge \mathrm{coTHH}^R(C)^\bullet),
\]
giving us the following corollary.
\begin{corollary}
If for every $s$ there exists some $r$ such that $E^{s,s+i}_r = E^{s,s+i}_\infty$ for all $i$, and $P: E \wedge \text{Tot}(\mathcal{R}\mathrm{coTHH}^R(C)^\bullet) \to \text{Tot} \mathcal{R}(E \wedge \mathrm{coTHH}^R(C)^\bullet)$ induces an isomorphism in homotopy, then the relative coB\"okstedt spectral sequence for $\mathrm{coTHH}^R(C)$ converges completely to $E_*(\mathrm{coTHH}^R(C))$.
\end{corollary}
For the computations in this paper we take $E=\mathbb{S}$, and so the condition on the map $P$ is satisfied \cite{bohmann2018computational}.
We formally state this specific case here for easy reference:
\hypertarget{converge}{}
\begin{corollary}\label{converge}
When considering $E=\mathbb{S}$, if for every $s$ there exists some $r$ so that $E^{s,s+i}_r = E^{s,s+i}_\infty$ for all $i$, then the relative coB\"okstedt spectral sequence converges completely to $\pi_*(\mathrm{coTHH}^R(C))$.
\end{corollary}
\section{Algebraic structures in the (relative) (co)B\"okstedt spectral sequence}
We will need to understand additional algebraic structure of the relative co\-B\"okstedt spectral sequence in order to facilitate calculations in the next section. By work of Angeltveit-Rognes, the classical B\"okstedt spectral sequence for a commutative ring spectrum has the structure of a spectral sequence of Hopf algebras under certain flatness conditions \cite{angeltveit2005hopf}:
\begin{theorem}[{\cite[Thm 4.5]{angeltveit2005hopf}}]
If $A$ is a commutative ring spectrum, then:
\begin{enumerate}
\item If $H_*(\mathrm{THH}(A);\mathbb{F}_p)$ is flat over $H_*(A; \mathbb{F}_p)$, then there is a coproduct
\[
\psi: H_*(\mathrm{THH}(A);\mathbb{F}_p) \to H_*(\mathrm{THH}(A);\mathbb{F}_p) \otimes_{H_*(A;\mathbb{F}_p)} H_*(\mathrm{THH}(A);\mathbb{F}_p)
\]
and $H_*(\mathrm{THH}(A);\mathbb{F}_p)$ is an $\mathcal{A}_*$-comodule $H_*(A; \mathbb{F}_p)$-Hopf algebra, where $\mathcal{A}_*$ is the dual Steenrod algebra.
\item If each term $E^r_{*,*}(A)$ for $r \ge 2$ is flat over $H_*(A; \mathbb{F}_p)$, then there is a coproduct
\[
\psi: E^r_{*,*}(A) \to E^r_{*,*}(A) \otimes_{H_*(A; \mathbb{F}_p)} E^r_{*,*}(A)
\]
and $E^r_{*,*}(A)$ is an $\mathcal{A}_*$-comodule $H_*(A; \mathbb{F}_p)$-Hopf algebra spectral sequence. In particular, the differentials $d^r$ respect the coproduct $\psi$.
\end{enumerate}
\end{theorem}
In order to understand the implications of this spectral sequence structure in the setting of Angeltveit-Rognes \cite{angeltveit2005hopf}, we recall the following definitions of indecomposable and primitive elements.
\begin{definition}
For an augmented algebra $A$ over a commutative ring $R$ with augmentation $\epsilon: A \to R$, the \textbf{indecomposable elements} of $A$, denoted by the $R$-module $QA$, are given by the exact sequence
\[
IA \otimes IA \xlongrightarrow{\mu} IA \xlongrightarrow{} QA \xlongrightarrow{} 0
\]
for multiplication map $\mu$ and $IA = \text{ker}(\epsilon)$.
\end{definition}
\begin{definition}
For a coaugmented coalgebra $C$ over a commutative ring $R$ with coaugmentation $\eta: R \to C$ and counit $\epsilon: C \to R$, the \textbf{primitive elements} of $C$, denoted by the $R$-module $PC$, are given by the exact sequence
\[
0 \xlongrightarrow{} PC \xlongrightarrow{} JC \xlongrightarrow{\Delta} JC \otimes JC
\]
for comultiplication map $\Delta$ and $JC = \text{coker}(\eta)$. An element $x \in \text{ker}(\epsilon)$ is primitive if its image in $JC$ under the quotient by $\text{Im}(\eta)$ lies in $PC$.
\end{definition}
\begin{remark}
In a coaugmented coalgebra $C$, an element $x \in \text{ker}(\epsilon)$ is primitive if and only if $\Delta(x)=1 \otimes x + x \otimes 1$. This formulation is equivalent to the above definition because the coproduct on $x \in IC=\text{ker}(\epsilon)$ can be written as
\begin{align*}
\Delta(x) = 1 \otimes x + x \otimes 1 + \sum_i x'_{i} \otimes x''_{i}.
\end{align*}
Since $C$ is coaugmented, it splits as $R \oplus IC$, which means that
\begin{align*}
C \otimes C = (R \otimes R) \oplus (IC \otimes R) \oplus (R \otimes IC) \oplus (IC \otimes IC).
\end{align*}
Because $C$ is counital,
\[
\text{Id} = (\epsilon \otimes \text{Id}) \circ \Delta = (\text{Id} \otimes \epsilon) \circ \Delta,
\]
so $\sum_i x'_{i} \otimes x''_{i} \in IC \otimes IC$. But
\begin{align*}
\eta: R &\xlongrightarrow{} C \cong R \oplus IC\\
r &\mapsto (r,0)
\end{align*}
has cokernel $JC \cong IC$, so for primitive $x \in IC \cong JC$,
\begin{align*}
0 \xlongrightarrow{} PC &\xlongrightarrow{} JC \xlongrightarrow{\Delta} JC \otimes JC\\
x &\mapsto x \mapsto 0
\end{align*}
means that $\sum_i x'_{i} \otimes x''_{i} \in JC \otimes JC$ must be zero, and so $\Delta(x)=1 \otimes x+x \otimes 1$ as desired.
\end{remark}
If we apply these definitions to our examples of coalgebras from above, we have the following indecomposable and primitive elements, which will be useful for the calculations in the final section of this paper.
\begin{example}
Indecomposable elements in the polynomial algebra $k[w_1, w_2, \ldots]$ are classes of the form $w_i$.
The augmentation in this case is
\begin{align*}
\epsilon: k[w_1, w_2, \ldots] &\to k\\
w_i &\mapsto 0
\end{align*}
so $IA= \text{ker}(\epsilon) = (w_1, w_2, \ldots)$. The image of the product on $IA$ then consists of sums of monomials $w_{i_1}^{m_1} \cdots w_{i_j}^{m_j}$ with $\sum_k m_k \ge 2$, which means $QA$ is spanned by the classes $w_i$.
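For instance, in the single-variable case $k[w]$ we have $IA = (w)$ and the image of the multiplication $IA \otimes IA \to IA$ is $(w^2)$, so
\[
QA = IA/(IA \cdot IA) = (w)/(w^2) \cong k\{w\}.
\]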
\end{example}
\begin{example}
Similarly, in the exterior algebra $\Lambda_k(y_1, y_2, \ldots)$, indecomposable elements are classes of the form $y_i$. The augmentation is given by
\begin{align*}
\epsilon: \Lambda_k(y_1, y_2, \ldots) &\to k\\
y_i &\mapsto 0
\end{align*}
so $IA= \text{ker}(\epsilon) = (y_1, y_2, \ldots)$. The image of the product on $IA$ will be terms of the form $y_{i_1} y_{i_2} \ldots y_{i_n}$ for $n>1$, which means $QA$ is given by elements of the form $y_i$.
\end{example}
\begin{example}
Primitive elements in the polynomial coalgebra $k[w_1, w_2, \ldots]$ are classes of the form $w_i^{p^m}$, where $p = \mathrm{char}(k) > 0$. The coaugmentation
\begin{align*}
\eta: k &\xlongrightarrow{} k[w_1, w_2, \ldots]\\
1 &\mapsto 1
\end{align*}
has cokernel $JC$ with basis $\{w_1^{j_1} w_2^{j_2} \ldots\}$ for all $j_i \ge 0$, with at least one nonzero $j_i$.
Recall the comultiplication is given by
\begin{align*}
\Delta(w_i^j) &= \sum_{k=0}^{j} \binom{j}{k} w_i^k \otimes w_i^{j-k}.
\end{align*}
Since $p$ is the characteristic of $k$,
\begin{align*}
\Delta(w_i^{p^m}) = 1 \otimes w_i^{p^m} + w_i^{p^m} \otimes 1
\end{align*}
so $w_i^{p^m}$ is primitive.
The other $w_i^n$ are not primitive: when $n$ is not a power of $p$, some binomial coefficient $\binom{n}{k}$ with $0<k<n$ is nonzero modulo $p$, so $\Delta(w_i^n) \ne 1 \otimes w_i^n + w_i^n \otimes 1$.
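Concretely, when $p=2$ this can be checked directly from the binomial coefficients:
\begin{align*}
\Delta(w_i^2) &= 1 \otimes w_i^2 + 2\, w_i \otimes w_i + w_i^2 \otimes 1 = 1 \otimes w_i^2 + w_i^2 \otimes 1,\\
\Delta(w_i^3) &= 1 \otimes w_i^3 + 3\, w_i \otimes w_i^2 + 3\, w_i^2 \otimes w_i + w_i^3 \otimes 1\\
&= 1 \otimes w_i^3 + w_i \otimes w_i^2 + w_i^2 \otimes w_i + w_i^3 \otimes 1,
\end{align*}
so $w_i^2$ is primitive while $w_i^3$ is not.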
\end{example}
\begin{example}
In the exterior coalgebra $\Lambda_k(y_1, y_2, \ldots)$, primitive elements are classes of the form $y_i$. Recall that the coproduct on $\Lambda_k(y_1, y_2, \ldots)$ is given by
$\Delta(y_i)=1\otimes y_i + y_i \otimes 1$
and therefore those classes are primitive.
\end{example}
\begin{example}
Primitive elements in the divided power coalgebra $\Gamma_k[x_1, x_2, \ldots]$ are classes of the form $x_i$.
Recall that the divided power coalgebra has comultiplication
\begin{align*}
\Delta(\gamma_j (x_i)) &= \sum_{a+b=j} \gamma_a(x_i) \otimes \gamma_b(x_i).
\end{align*}
Since $\gamma_0(x_i) = 1$ and $\gamma_1(x_i)=x_i$, we have
\begin{align*}
\Delta(x_i) = 1 \otimes x_i + x_i \otimes 1.
\end{align*}
The other $\gamma_j(x_i)$ for $j>1$ are not primitive because their image under $\Delta$ will have additional $\gamma_a(x_i) \otimes \gamma_b(x_i)$ terms.
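For instance,
\[
\Delta(\gamma_2(x_i)) = 1 \otimes \gamma_2(x_i) + x_i \otimes x_i + \gamma_2(x_i) \otimes 1,
\]
and the middle term $x_i \otimes x_i$ obstructs primitivity.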
\end{example}
Studying primitive and indecomposable elements can be particularly useful because of results like the following from Angeltveit and Rognes:
\begin{theorem}[{\cite[Prop 4.8]{angeltveit2005hopf}}]
Let $A$ be a commutative $\mathbb{S}$-algebra with $H_*(A; k)$ connected and such that $\mathrm{HH}_*(H_*(A; k))$ is flat over $H_*(A; k)$. Then the $E^2$-term of the B\"okstedt spectral sequence
\[
E^{2}_{*,*}(A) = \mathrm{HH}_*(H_*(A; k))
\] is an
$H_*(A; k)$-Hopf algebra, and a shortest non-zero differential $d^r_{s,t}$ in lowest total degree $s+t$, if one exists, must map from an algebra indecomposable to a coalgebra primitive
in $\mathrm{HH}_*(H_*(A; k))$.
\end{theorem}
Bohmann-Gerhardt-Shipley show that under appropriate coflatness conditions the coB\"okstedt spectral sequence for a cocommutative coalgebra spectrum has what is called a \textit{$\square$-Hopf algebra structure}, an analog of a Hopf algebra structure working over a coalgebra \cite{bohmanngerhardtshipley}, where the $\square$ in this notation is the cotensor product. Recall that
for an $R$-coalgebra $C$, a right $C$-comodule $M$ with coaction $\gamma: M \to M \otimes_R C$, and a left $C$-comodule $N$ with coaction $\psi: N \to C \otimes_R N$,
the \textit{cotensor} of $M$ and $N$ over $C$ is defined to be the following equalizer in $R$-modules:
\[
M \square_C N = eq\big(M \otimes_R N \underset{\text{Id}_M \otimes \psi}{\overset{\gamma \otimes \text{Id}_N}{\rightrightarrows}} M \otimes_R C \otimes_R N\big).
\]
Note that the cotensor does not always yield a $C$-comodule, but under some conditions it does. In particular, if $C$ is a coalgebra over a field and $M$ and $N$ are $C$-bicomodules, then $M \square_C N$ is a $C$-bicomodule.
In order to define a $\square_C$-Hopf algebra for a coalgebra $C$ over a field $k$, we first recall the definitions of a $\square_C$-algebra, a $\square_C$-coalgebra, and a $\square_C$-bialgebra from \cite{bohmanngerhardtshipley}. See Definitions 2.10--2.13 of \cite{bohmanngerhardtshipley} for the coassociativity and counitality diagrams, as well as those specifying the interactions between the algebra and coalgebra structures.
\begin{definition}[{\cite[Def 2.11]{bohmanngerhardtshipley}}]
Let $C$ be a cocommutative coalgebra over a field. A \textbf{$\square_C$-algebra} $D$ is a $C$-comodule along with a multiplication map $\mu: D \square_C D \to D$ and a unit map $\eta: C \to D$ that are associative and unital maps of $C$-comodules.
\end{definition}
\begin{definition}[{\cite[Def 2.10]{bohmanngerhardtshipley}}]
Let $C$ be a cocommutative coalgebra over a field. A \textbf{$\square_C$-coalgebra} $D$ is a $C$-comodule along with a comultiplication map $\Delta: D \to D \square_C D$ and a counit map $\epsilon: D \to C$ that are coassociative and counital maps of $C$-comodules.
\end{definition}
\begin{definition}[{\cite[Def 2.12, 2.13]{bohmanngerhardtshipley}}]
Let $C$ be a cocommutative coalgebra over a field. A \textbf{$\square_C$-bialgebra} $D$ is a $\square_C$-coalgebra that is also equipped with a multiplication map $\mu: D \square_C D \to D$ and a unit map $\eta: C \to D$ that satisfy associativity and unitality. The multiplication must also be compatible with the $\square_C$-coalgebra structure. A \textbf{$\square_C$-Hopf algebra} $D$ is a $\square_C$-bialgebra along with an antipode $\chi: D \to D$ that is a $C$-comodule map satisfying the corresponding hexagonal antipode diagram.
\end{definition}
\noindent In \cite{bohmanngerhardtshipley}, these definitions are further extended to that of a \textbf{differential bigraded $\square_C$-Hopf algebra} (Definition 6.8) and a \textbf{spectral sequence of $\square_C$-Hopf algebras} (Definition 6.9).
We also recall the definitions of coflat and connected coalgebras.
\begin{definition}
For a coalgebra $C$ over a field $k$, a right comodule $M$ over $C$ is called \textbf{coflat} if $M \square_C -$ is exact as a functor from left $C$-comodules to $k$-modules.
\end{definition}
\begin{definition}
A graded $k$-coalgebra $D_*$ is \textbf{connected} if $D_*=0$ for $* <0$ and the counit map $\epsilon: D_* \to k$ is an isomorphism in degree zero.
\end{definition}
\noindent We can now state Bohmann-Gerhardt-Shipley's analog of \cite[Theorem 4.5]{angeltveit2005hopf} for $\mathrm{coTHH}$.
\begin{theorem}[{\cite[Thm 6.14]{bohmanngerhardtshipley}}]
For $C$ a connected cocommutative coalgebra spectrum and $k$ a field,
if for $r \ge 2$ each $E_r^{*,*}(C)$ is coflat over $H_*(C;k)$,
then the coB\"okstedt spectral sequence is a spectral sequence of $\square_{H_*(C;k)}$-Hopf algebras.
\end{theorem}
It follows from their work that the relative coB\"okstedt spectral sequence computing $\pi_*(\mathrm{coTHH}(C))$ for $C$ an $Hk$-coalgebra also has this type of $\square$-Hopf algebra structure. For the remainder of the paper we will focus on this case, in order to use this $\square$-coalgebra structure.
\begin{theorem}
For $C$ a cocommutative $Hk$-coalgebra spectrum,
if for $r \ge 2$ each $E_r^{*,*}(C)$ is coflat over $\pi_*(C)$, then the relative coB\"okstedt spectral sequence computing $\pi_*(\mathrm{coTHH}^{Hk}(C))$ is a spectral sequence of $\square_{\pi_*(C)}$-Hopf algebras.
\end{theorem}
Note that in this case the relative coB\"okstedt spectral sequence is the Bousfield-Kan spectral sequence for the cosimplicial object $\mathrm{coTHH}^{Hk}(C)^\bullet$, and the proof of the above result follows as in Theorem 6.14 of \cite{bohmanngerhardtshipley} for the setting of $Hk$-modules, with $\pi_*(C)$ in place of their $H_*(C;k)$.
As we saw in the B\"okstedt spectral sequence, this additional algebraic structure is computationally useful. In order to understand these ideas in the dual setting, we require the following definitions and results regarding indecomposable and primitive elements.
\begin{definition}
A unital $\square_C$-algebra $A$ with multiplication $\mu: A \square_C A \to A$ and unit $\eta: C \to A$ is \textbf{augmented} if there exists an augmentation map $\epsilon: A \to C$ such that $\epsilon \mu = \epsilon \square \epsilon$ and $\epsilon \eta = \text{Id}$.
\end{definition}
\begin{definition}
A counital $\square_C$-coalgebra $D$ with comultiplication $\Delta: D \to D \square_C D$ and counit $\epsilon: D \to C$ is \textbf{coaugmented} if there exists a coaugmentation map $\eta: C \to D$ such that $\Delta \eta = \eta \square \eta$ and $\epsilon \eta = \text{Id}$.
\end{definition}
The definitions of primitive and indecomposable elements are then analogous to those given earlier.
\begin{definition}[{\cite[Def 2.16]{bohmanngerhardtshipley}}]
Given a coaugmented $\square_C$-coalgebra $D$, let $PD$ be defined by the exact sequence
\[
0 \xlongrightarrow{} PD \xlongrightarrow{} JD \xlongrightarrow{\Delta} JD \square_C JD,
\]
where $JD = \text{coker}(\eta)$. An element in $\text{ker}(\epsilon)$ is \textbf{primitive} if its image in $JD$ is in $PD$.
\end{definition}
\begin{definition}[{\cite[Def 2.15]{bohmanngerhardtshipley}}]
For an augmented $\square_C$-algebra $A$, the \textbf{indecomposables} of $A$, denoted by $QA$, are defined by the exact sequence
\[
IA \square_C IA \xlongrightarrow{\mu} IA \xlongrightarrow{} QA \xlongrightarrow{} 0,
\]
where $IA = \text{ker}(\epsilon)$.
\end{definition}
With these definitions, we can now state the following result that will be particularly useful for computations.
\hypertarget{shortest}{}
\begin{theorem}\label{shortest}
For a field $k$, let $C$ be a cocommutative $Hk$-coalgebra spectrum such that $\mathrm{coHH}_*(\pi_*(C))$ is coflat over $\pi_*(C)$
and the graded coalgebra $\pi_*(C)$ is connected. Then the $E_2$-term of the relative coB\"okstedt spectral sequence calculating $\pi_*(\mathrm{coTHH}^{Hk}(C))$,
\[
E_2^{*,*}(C) = \mathrm{coHH}^k_*(\pi_*(C)),
\]
is a $\square_{\pi_*(C)}$-bialgebra, and a shortest non-zero differential $d_r^{s,t}$ in lowest total degree $s+t$, if one exists, must map from a $\square_{\pi_*(C)}$-algebra indecomposable to a $\square_{\pi_*(C)}$-coalgebra primitive.
\end{theorem}
The proof follows as in the non-relative version in \cite{bohmanngerhardtshipley}.
\section{Explicit calculations}
A natural question that arises when studying $\mathrm{coTHH}$ is what kinds of coalgebra spectra exist and, for those that do, whether the $E_2$-page of the relative coB\"okstedt spectral sequence is computable. Although the coB\"okstedt spectral sequence applies to examples of the form $\Sigma^\infty_{+}X$ for simply connected $X$ as in \cite{bohmann2018computational, bohmanngerhardtshipley}, we can now study general $R$-coalgebra spectra via the relative coB\"okstedt spectral sequence. Here we examine $H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p$, which is an $H\mathbb{F}_p$-coalgebra spectrum by work of Bayindir-P\'eroux \cite{bayindir2020spanierwhitehead}.
We begin by computing the $E_2$-term of the relative coB\"okstedt spectral sequence calculating $\pi_*(\mathrm{coTHH}^{H\mathbb{F}_p}(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p))$.
\hypertarget{HZEX}{}
\begin{proposition}\label{HZEX}
For the $H\mathbb{F}_p$-coalgebra $H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p$, the $E_2$-page of the spectral sequence calculating $\pi_{t-s}(\mathrm{coTHH}^{H\mathbb{F}_p}(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p))$ is
\[
E_2^{s,t} = \mathrm{coHH}^{\mathbb{F}_p}_{s,t}(\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p)) \cong \Lambda_{\mathbb{F}_p} (\tau) \otimes_{\mathbb{F}_p} \mathbb{F}_p[\omega]
\]
for $||\tau||=(0,1), ||\omega||=(1,1)$.
\end{proposition}
\begin{proof}
From \cite{bayindir2020spanierwhitehead}, $H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p$ is an $H\mathbb{F}_p$-coalgebra.
Note that $\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p)$ is flat over $\pi_*(H\mathbb{F}_p) \cong \mathbb{F}_p$ since every module over a field is flat. Thus \hyperlink{corollary}{Corollary} \ref{Cor-pi} states that the $E_2$-page of the relative coB\"okstedt spectral sequence computing $\pi_{t-s}(\mathrm{coTHH}^{H\mathbb{F}_p}(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p))$ has the form:
\[
E_2^{s,t} = \mathrm{coHH}^{\mathbb{F}_p}_{s,t}(\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p)).
\]
We can use the K\"unneth spectral sequence to calculate $\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p)$ \cite[Thm IV.6.2]{elmendorf1995rings}:
\begin{align*}
\mathrm{Tor}^{\pi_*(H\mathbb{Z})}_{s,t}(\pi_*(H\mathbb{F}_p), \pi_*(H\mathbb{F}_p)) \cong \mathrm{Tor}^{\mathbb{Z}}_{s,t}(\mathbb{F}_p, \mathbb{F}_p) \Rightarrow \pi_{s+t}(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p).
\end{align*}
\noindent To compute this $E_2$-term, we form a projective resolution of $\mathbb{F}_p$ as a $\mathbb{Z}$-module:
\[
0 \longrightarrow \mathbb{Z} \xlongrightarrow{\times p} \mathbb{Z} \xlongrightarrow{\text{mod } p} \mathbb{F}_p \longrightarrow 0.
\]
Then we can truncate and apply $-\otimes_\mathbb{Z} \mathbb{F}_p$ in order to simplify the resolution to
\[
\mathbb{F}_p \xlongrightarrow[0]{\times p} \mathbb{F}_p \longrightarrow 0.
\]
Thus $\mathrm{Tor}^{\mathbb{Z}}_*(\mathbb{F}_p, \mathbb{F}_p)$ is a copy of $\mathbb{F}_p$ in degrees 0 and 1, which as a coalgebra is the exterior coalgebra on a single generator in degree 1.
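Since multiplication by $p$ is zero on $\mathbb{F}_p$, the homology of this two-term complex is immediate:
\[
\mathrm{Tor}^{\mathbb{Z}}_0(\mathbb{F}_p, \mathbb{F}_p) \cong \mathbb{F}_p, \qquad \mathrm{Tor}^{\mathbb{Z}}_1(\mathbb{F}_p, \mathbb{F}_p) \cong \mathbb{F}_p, \qquad \mathrm{Tor}^{\mathbb{Z}}_q(\mathbb{F}_p, \mathbb{F}_p) = 0 \text{ for } q \ge 2.
\]
Moreover, all of these classes lie in internal degree zero, so the K\"unneth spectral sequence is concentrated on a single line and collapses, giving $\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p) \cong \Lambda_{\mathbb{F}_p}(\tau)$ with $|\tau|=1$.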
Therefore the $E_2$-page looks like:
\begin{align*}
E_2^{s,t} &= \mathrm{coHH}^{\mathbb{F}_p}_{s,t}(\pi_*(H\mathbb{F}_p \wedge_{H\mathbb{Z}} H\mathbb{F}_p))\\
&\cong \mathrm{coHH}^{\mathbb{F}_p}_{s,t}(\Lambda_{\mathbb{F}_p} (\tau))\\
&\cong \Lambda_{\mathbb{F}_p} (\tau) \otimes_{\mathbb{F}_p} \mathbb{F}_p[\omega]
\end{align*}
with bidegrees $||\tau|| = (0,1)$ and $||\omega||=(1,1)$ by Proposition 5.1 in \cite{bohmann2018computational}.
\end{proof}
Such results confirm that there are examples for which we can compute the $E_2$-page of the relative coB\"okstedt spectral sequence calculating relative topological coHochschild homology. In fact, recent work \cite{bayindir2020spanierwhitehead} completes this computation using the duality between $\mathrm{coTHH}$ and $\mathrm{THH}$, which indicates that the spectral sequence collapses in this case.
In order to apply the $\square$-Hopf algebra structure to find the $E_\infty$-page and complete the computation of the homotopy groups of $\mathrm{coTHH}$ in general, we will consider cocommutative coalgebras with homotopy that is similar to the above $E_2$-page example. We state further computational tools in \hyperlink{primitive}{Proposition} \ref{primitive} and \hyperlink{coalgebrass}{Theorem} \ref{coalgebrass} in order to apply them to the computations involving the relative coB\"okstedt spectral sequence.
Recall from the previous section that the shortest nonzero differential must go from a $\square$-Hopf algebra indecomposable to a $\square$-coalgebra primitive. In later computations we will consider $\square_C$-coalgebras of the form $C \otimes D$, and Bohmann-Gerhardt-Shipley further identify primitive elements in this case:
\hypertarget{primitive}{}
\begin{proposition}[\cite{bohmanngerhardtshipley}]\label{primitive}
For cocommutative coaugmented $k$-coalgebras $C$ and $D$, $C \otimes D$ is a $\square_C$-coalgebra and an element $c \otimes d \in C \otimes D$
is primitive as an element of the $\square_C$-coalgebra $C \otimes D$ if and only if $d$ is primitive in the $k$-coalgebra $D$.
\end{proposition}
They also prove that if $\mathrm{coHH}(D)$ is coflat over $D$ then $\mathrm{coHH}(D)$ is a $\square_D$-algebra \cite{bohmanngerhardtshipley}. However, in order to similarly identify indecomposable elements, we will restrict to each specific computational setting.
Finally, for $Hk$-coalgebras the relative coB\"okstedt spectral sequence is itself a spectral sequence of $k$-coalgebras, which restricts differentials even further to targets that are $k$-coalgebra primitives.
Bohmann-Gerhardt-H{\o}genhaven-Shipley-Ziegenhagen
\cite{bohmann2018computational} showed the following result for the ordinary coB\"okstedt spectral sequence, and the relative case follows from their work since for $E=\mathbb{S}$ we are already in the setting of cosimplicial $Hk$-modules.
\hypertarget{coalgebrass}{}
\begin{theorem}\label{coalgebrass}
If $C$ is a connected cocommutative $Hk$-coalgebra that is cofibrant as an $Hk$-module, the relative coB\"okstedt spectral sequence computing $\pi_*(\mathrm{coTHH}^{Hk}(C))$ is a spectral sequence of $k$-coalgebras. In particular, for every $r>1$ there is a coproduct
\begin{align*}
\psi: E^{*,*}_r \xlongrightarrow{} E^{*,*}_r \otimes_k E^{*,*}_r,
\end{align*}
and the differentials $d_r$ respect the coproduct.
\end{theorem}
Using these algebraic structures, we show the following result:
\begin{theorem}
For a field $k$, let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module with $\pi_*(C) \cong \Lambda_k(y)$ for $|y|$ odd and greater than 1. Then the relative coB\"okstedt spectral sequence collapses and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Lambda_k(y) \otimes k[w]
\]
as graded $k$-modules for $|w|=|y|-1$.
\end{theorem}
\begin{proof}
The $E_2$-page of the relative coB\"okstedt spectral sequence is
\[
E_2^{s,t} = \mathrm{coHH}_{s,t}^k(\Lambda_k(y)) \cong \Lambda_k(y) \otimes k[w]
\]
by Proposition 5.1 in \cite{bohmann2018computational}.
By \hyperlink{shortest}{Theorem} \ref{shortest} we know that the shortest nontrivial differential in lowest total degree must map from a $\square_{\Lambda_k(y)}$-algebra indecomposable to a $\square_{\Lambda_k(y)}$-coalgebra primitive. Since the $E_2$-page is given by $\Lambda_k(y) \otimes k[w]$, \hyperlink{primitive}{Proposition} \ref{primitive} implies that an element of this $\square_{\Lambda_k(y)}$-coalgebra is primitive if and only if its component from $k[w]$ is primitive in the $k$-coalgebra $k[w]$. Recall that primitives in the $k$-coalgebra $k[w_1, w_2, \ldots]$ more generally are of the form $w_i^{p^m}$ for $p = \mathrm{char}(k)$, so here
\begin{align*}
\Lambda_k(y) \otimes \big(\text{primitives in } k[w]\big) &\cong \Lambda_k(y) \otimes k\{w^{p^m} \mid m \ge 0\}.
\end{align*}
Therefore the only terms in the spectral sequence that are possible targets of differentials are $w^{p^m}$ and $y w^{p^m}$ for $m \ge 0$, where $p = \mathrm{char}(k)$.
Bohmann-Gerhardt-Shipley also identify the indecomposable elements for the $\square_{\Lambda_k(y)}$-algebra $\Lambda_k(y) \otimes k[w]$ as those of the form $\Lambda_k(y) \otimes w$
\cite{bohmanngerhardtshipley}. Thus the only terms in the spectral sequence that are possible sources of differentials are $1 \otimes w = w$ and $y \otimes w = yw$.
However, differentials from $w$ must be trivial for degree reasons. Further, since a $d_r$ differential lowers the total degree $t-s$ by exactly one, and the classes $y w^{p^m}$ sit in odd total degree $t-s$ just as $yw$ does, those elements cannot be hit by any $(r,r-1)$-bidegree differential originating at $yw$; they sit in total degree too large to be hit from $w$. Thus we need only justify why differentials from $yw$ cannot hit terms of the form $w^{p^m}$.
Note that writing $|y| = 2n+1$, so that $||y||=(0,2n+1)$
and $||w||=(1,2n+1)$, the elements we are considering live in the following bidegrees ($||-||$)
for $m,n \ge 1$:
\begin{align*}
||w^{p^m}|| &= (p^m, p^m(2n+1)) = (p^m, 2np^m+p^m)\\
||y w|| &= (1, 4n+2)\\
||d_r(y w)|| &= (1+r, 4n+2+r-1) = (1+r, 4n+r+1).
\end{align*}
It remains to justify that $||d_r(yw)|| \ne ||w^{p^m}||$ for any $m \ge 1$. Suppose for contradiction that these terms were in the same bidegree. Then the first coordinate tells us that $r+1 = p^m$, so we have from the second coordinate:
\begin{align*}
4n+p^m &= 2np^m+p^m\\
4n &= 2np^m\\
2 &= p^m,
\end{align*}
which is true only when $m=1$ and $p=2$. However, if $m=1$ then $r+1=p^m=2^1$ implies that $r=1$ and we are already considering the $E_2$-page, so no such differential exists.
Since the spectral sequence thus collapses at the $E_2$-page, the convergence conditions of \hyperlink{converge}{Corollary} \ref{converge} hold,
and the relative coB\"okstedt spectral sequence converges completely to $\pi_*(\mathrm{coTHH}^{Hk}(C))$. Therefore
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Lambda_k(y) \otimes k[w]
\]
as graded $k$-modules.
\end{proof}
We now generalize this result to more cogenerators. In particular, we aim to compute $\pi_*(\mathrm{coTHH}^{Hk}(C))$ for $C$ an $Hk$-coalgebra such that $\pi_*(C) \cong \Lambda_k(y_1, y_2)$.
\hypertarget{Exton2}{}
\begin{theorem}\label{Exton2}
Let $k$ be a field and let $p=char(k)$, allowing $p=0$. Let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module with $\pi_*(C) \cong \Lambda_k(y_1, y_2)$ for $|y_1|, |y_2|$ both odd and greater than 1. If $p \ne 2$ and $p^m \ne \frac{|y_2|-1}{|y_1|-1}+1$ for all $m \ge 1$,
then the relative coB\"okstedt spectral sequence collapses and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Lambda_k(y_1, y_2) \otimes k[w_1, w_2],
\]
as graded $k$-modules for $|w_i| = |y_i|-1$. Moreover, the same result holds at the prime $p=2$ with the additional assumption that $p^m \ne \frac{|y_2|-1}{|y_1|-1}$ for $m \ge 1$.
\end{theorem}
\begin{proof}
Suppose $|y_1|=a$ and $|y_2|=b$, so that on the $E_2$-page of the spectral sequence $y_1$ appears in bidegree $(0,a)$ and $y_2$ appears in bidegree $(0,b)$, which implies $||w_1|| = (1,a)$ and $||w_2||=(1,b)$. We assume without loss of generality that $b \ge a$, and we determine whether any differentials are possible by examining the degrees of the terms in the spectral sequence.
Note that because of the $\square$-coalgebra structure from \hyperlink{shortest}{Proposition} \ref{shortest} the shortest nontrivial differential has to hit a coalgebra primitive. If $char(k)=p$ a prime, then by \hyperlink{primitive}{Proposition} \ref{primitive} coalgebra primitives will be of the form
\[
\Lambda_k(y_1, y_2) \otimes w_i^{p^m}
\]
since the primitives in $k[w_1, w_2]$ are of the form $w_1^{p^m}$ or $w_2^{p^n}$. However, by \hyperlink{coalgebrass}{Theorem} \ref{coalgebrass} the relative coB\"okstedt spectral sequence in this setting also has a coalgebra structure over $k$. Therefore the first nontrivial differential has to hit a $k$-coalgebra primitive, that is, only classes of the form $y_i$ or $w_i^{p^m}$ (and not any of their tensor combinations). Since the $y_i$'s appear in the zero column, they cannot be hit by any differentials, so the only possible targets are classes $w_1^{p^m}$ or $w_2^{p^n}$. Similarly, if $char(k)=0$ then the only primitives in $k[w_1, w_2]$ are $w_1$ and $w_2$.
Further, the source of the shortest nontrivial differential must be a $\square$-algebra indecomposable. By \cite{bohmanngerhardtshipley}, the indecomposable elements in the $\square_{\Lambda_k(y_1, y_2)}$-algebra $\Lambda_k(y_1, y_2) \otimes k[w_1, w_2]$
will be of the form $\Lambda_k(y_1, y_2) \otimes w_i$.
Thus we only consider differentials from the following sources that land in bidegrees:
\begin{align*}
||d_r(w_1)|| &= (1+r, a+r-1)\\
||d_r(w_2)|| &= (1+r, b+r-1)\\
||d_r(y_1 w_1)|| &= (1+r, 2a+r-1)\\
||d_r(y_2 w_1)|| &= (1+r, a+b+r-1)\\
||d_r(y_1 w_2)|| &= (1+r, a+b+r-1)\\
||d_r(y_2 w_2)|| &= (1+r, 2b+r-1)\\
||d_r(y_1 y_2 w_1)|| &= (1+r, 2a+b+r-1)\\
||d_r(y_1 y_2 w_2)|| &= (1+r, a+2b+r-1).
\end{align*}
The primitive elements that could serve as possible targets live in bidegrees:
\begin{align*}
||w_1^{p^m}|| &= (p^m, ap^m)\\
||w_2^{p^m}|| &= (p^m, bp^m).
\end{align*}
Note that if there is a nonzero differential hitting one of these classes, comparing the degree of the first coordinate implies $p^m=1+r$. In the $char(k)=0$ case, $||w_1||= (1,a)$ and $||w_2||=(1,b)$ imply that no nontrivial differentials exist since we are already on the $E_2$-page. Thus we assume $char(k)=p$ is prime and use $p^m = 1+r$ to simplify the second coordinate of the bidegree. Because differentials from $w_1$ will not hit $w_1^{p^m}$ or $w_2^{p^m}$ for degree reasons, we begin by considering differentials from $w_2$.
Suppose that $d_r(w_2)$ hits a class $w_1^{p^m}$. Then the second coordinate gives:
\[
b+p^m-2 = ap^m,
\]
so $b-2=p^m(a-1)$; but $b-2$ is odd while $p^m(a-1)$ is even, so equality cannot hold for parity reasons, and no such differential exists.
Similar parity arguments show that $d_r(w_1) \ne w_2^{p^m}$ and $d_r(w_2) \ne w_2^{p^m}$, as well as $d_r(y_1 y_2 w_1) \ne w_1^{p^m}$ or $w_2^{p^m}$, and $d_r(y_1 y_2 w_2) \ne w_1^{p^m}$ or $w_2^{p^m}$.
Suppose $d_r(y_1 w_1)$ hits a class $w_1^{p^m}$. Then by comparing degrees:
\[
2a+r-1=a(r+1).
\]
So $r=\frac{a-1}{a-1}=1$, but $r \ge 2$ since we are considering the $E_2$-page, so no such differential exists.
A similar justification regarding $r$ determines that $d_r(y_2 w_2) \ne w_2^{p^m}$.
Now suppose $d_r(y_1 w_1)$ hits a class $w_2^{p^m}$. Then by comparing degrees:
\[
2a+r-1=b(1+r),
\]
so $2 \frac{a-1}{b-1} = r+1$. But $a \le b$ so $2 \frac{a-1}{b-1} \le 2(1) < 3 \le r+1$ since $r \ge 2$ and so no such differential exists. Similar justifications based on the assumption that $a \le b$ allow us to conclude that $d_r(y_2 w_1) \ne w_2^{p^m}$ and $d_r(y_1 w_2) \ne w_2^{p^m}$.
Suppose $d_r(y_2 w_1)$ hits a class $w_1^{p^m}$. Then
\[
a+ b + p^m -2 = ap^m,
\]
so $\frac{b-1}{a-1} = p^m-1$. However, we assumed that $p^m$ cannot be equal to $\frac{|y_2|-1}{|y_1|-1}+1$, so no such differential exists. This condition also eliminates the case $d_r(y_1 w_2) = w_1^{p^m}$, since $y_2 w_1$ and $y_1 w_2$ appear in the same bidegree.
Finally suppose $d_r(y_2 w_2)$ hits a class $w_1^{p^m}$. Then
\[
2b+p^m-2=ap^m,
\]
so $2\frac{b-1}{a-1}=p^m$. However, if $m=0$ then $p^m=1$ and this equality does not hold since we assumed above that $b \ge a$. For $m \ge 1$, an odd prime $p$ to any power will still be odd and so $p^m \ne 2\frac{|y_2|-1}{|y_1|-1}$ due to parity.
If $p=2$, we need only consider $m \ge 2$ since the first coordinates implied that $p^m = r+1$ and we are already on the $E_2$-page. If $\frac{|y_2|-1}{|y_1|-1}$ is odd, then $2 \frac{b-1}{a-1}$ will only be equal to a power of $p=2$ if $m=1$; if $\frac{|y_2|-1}{|y_1|-1}$ is even, then our additional assumption implies that $2^m \ne 2\frac{|y_2|-1}{|y_1|-1}$ for $m \ge 2$, so no such differential from $y_2 w_2$ to $w_1^{p^m}$ exists.
We have now shown combinatorially that every possible differential vanishes: either for parity reasons, because we are already on the $E_2$-page, or because of the restrictions placed on $p^m$ in the hypotheses. Thus the spectral sequence collapses and the convergence conditions of \hyperlink{converge}{Corollary} \ref{converge} hold, so we have the desired result.
\end{proof}
Note that the conditions on $p^m$ allow us to avoid cases like $|y_1|=3, |y_2|=5$, which admits a possible $d_2$ differential from $y_2 w_1$ to $w_1^3$ at the prime $p=3$ (eliminated by the first condition) and a possible $d_3$ differential at the prime $p=2$ from $y_2 w_2$ to $w_1^4$ (eliminated by the additional assumption).
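The borderline cases in this remark can also be located mechanically. The sketch below (illustrative only; the names, primes, and search ranges are ours) lists every bidegree coincidence between the indecomposable sources and the primitive targets for given odd degrees $a=|y_1|$, $b=|y_2|$:

```python
from itertools import product

def possible_differentials(a, b, primes=(2, 3, 5), max_m=4, max_r=30):
    # Second coordinates of the E_2 sources; each source sits in column 1,
    # and d_r carries (1, t) to (1 + r, t + r - 1).
    sources = {
        "w1": a, "w2": b,
        "y1*w1": 2 * a, "y2*w1": a + b,
        "y1*w2": a + b, "y2*w2": 2 * b,
        "y1*y2*w1": 2 * a + b, "y1*y2*w2": a + 2 * b,
    }
    targets = {"w1": a, "w2": b}      # w_i^{p^m} sits in (p^m, |w_i| * p^m)
    found = set()
    for (name, t), p, m, r in product(sources.items(), primes,
                                      range(1, max_m + 1),
                                      range(2, max_r + 1)):
        pm = p ** m
        for tgt, s in targets.items():
            if (1 + r, t + r - 1) == (pm, s * pm):
                found.add((name, r, f"{tgt}^{pm}"))
    return sorted(found)
```

For $a=3$, $b=5$ this recovers exactly the $d_2$ into $w_1^3$ (from $y_2w_1$, and from $y_1w_2$, which sits in the same bidegree) and the $d_3$ into $w_1^4$ from $y_2w_2$.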
Finally we state a result analogous to \cite{bohmann2018computational} for the relative coB\"okstedt spectral sequence in the case when $E = \mathbb{S}$ and $R = Hk$ for a field $k$.
\begin{theorem}
Let $C$ be a cocommutative $Hk$-coalgebra spectrum that is cofibrant as an $Hk$-module spectrum,
and whose homotopy coalgebra is
\[
\pi_*(C) = \Gamma_k[x_1, x_2, \ldots],
\]
where the $x_i$ are in non-negative even degrees and there are only finitely many of them in each degree. Then the relative coB\"okstedt spectral sequence calculating the homotopy groups of the topological coHochschild homology of $C$ collapses at $E_2$, and
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Gamma_k[x_1, x_2, \ldots] \otimes \Lambda_k(z_1, z_2, \ldots)
\]
as $k$-modules, with $z_i$ in degree $|x_i|-1$.
\end{theorem}
\begin{proof}
Since $E_*(C) = \pi_*(C) = \Gamma_k[x_1, x_2, \ldots]$ is flat over $E_*(R) = \pi_*(Hk) \cong k$, the relative coB\"okstedt spectral sequence that abuts to $\pi_{t-s}(\mathrm{coTHH}^{Hk}(C))$ has $E_2$-page
\[
E_2^{s,t} = \mathrm{coHH}^{k}_{s,t}(\Gamma_k[x_1, x_2, \ldots]).
\]
By Proposition 5.1 in \cite{bohmann2018computational},
\[
\mathrm{coHH}^k_{*,*}(\Gamma_k[x_1, x_2, \ldots]) \cong \Gamma_k[x_1, x_2, \ldots] \otimes \Lambda_k(z_1, z_2, \ldots),
\]
where $||z_i|| = (1, |x_i|)$. Now we want to examine the differentials on this $E_2$-page of our spectral sequence. In particular, \hyperlink{shortest}{Theorem} \ref{shortest} says that the coalgebra structure implies that the shortest nonzero differential has to hit a $\square$-coalgebra primitive. Since $\mathrm{coHH}_*(\pi_*(C))$ is a $\square$-coalgebra over $\pi_*(C)=\Gamma_k[x_1, x_2, \ldots]$, we know by \hyperlink{primitive}{Proposition} \ref{primitive} that the primitive elements will be of the form
\[
\Gamma_k[x_1, x_2, \ldots] \otimes \big(\text{primitives in } \Lambda_k(z_1, z_2, \ldots)\big),
\]
where the primitives in $\Lambda_k(z_1, z_2, \ldots)$ viewed as a $k$-coalgebra are of the form $z_i$.
Note that since all of the $x_i$'s appear in bidegree $(0, |x_i|)$, all $x_i$'s and all the divided powers will stay in the zero column. Similarly, the exterior cogenerator $z_i$ is in bidegree $(1, |x_i|)$, and so all possible targets, i.e. combinations of $x_i$'s with a single $z_i$, will be in the first column. Because we are on the $E_2$-page, the differentials of bidegree $(2, 1)$ map outside of these two columns, as do all possible $d_r$ differentials on later $E_r$-pages. Thus beyond the zero and first columns, the only elements that may be hit by differentials are those that contain at least a factor $z_i z_j$. However, as noted above, such elements are not primitive, and the shortest non-zero differential $d^{s,t}_r$ in lowest total degree $s+t$ has to hit a $\square_{\pi_*(C)}$-coalgebra primitive. Therefore, our spectral sequence collapses at $E_2$.
Note that the convergence conditions of \hyperlink{converge}{Corollary} \ref{converge} hold,
and the relative coB\"okstedt spectral sequence converges completely to $\pi_*(\mathrm{coTHH}^{Hk}(C))$.
Therefore we have the following isomorphism of $k$-modules:
\[
\pi_*(\mathrm{coTHH}^{Hk}(C)) \cong \Gamma_k[x_1, x_2, \ldots] \otimes \Lambda_k(z_1, z_2, \ldots). \qedhere
\]
\end{proof}
| {
"timestamp": "2021-08-19T02:03:36",
"yymm": "2108",
"arxiv_id": "2108.07863",
"language": "en",
"url": "https://arxiv.org/abs/2108.07863",
"abstract": "Hess and Shipley defined an invariant of coalgebra spectra called topological coHochschild homology, and Bohmann-Gerhardt-Høgenhaven-Shipley-Ziegenhagen developed a coBökstedt spectral sequence to compute the homology of coTHH for coalgebras over the sphere spectrum. We construct a relative coBökstedt spectral sequence to study coTHH of coalgebra spectra over any commutative ring spectrum $R$. Further, we use algebraic structures in this spectral sequence to complete some calculations of the homotopy groups of relative topological coHochschild homology.",
"subjects": "Algebraic Topology (math.AT)",
"title": "Computations of relative topological coHochschild homology",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347875615794,
"lm_q2_score": 0.7279754430043072,
"lm_q1q2_score": 0.7093645961539488
} |
https://arxiv.org/abs/1908.10989 | Revisiting a Cutting Plane Method for Perfect Matchings | In 2016, Chandrasekaran, Végh, and Vempala published a method to solve the minimum-cost perfect matching problem on an arbitrary graph by solving a strictly polynomial number of linear programs. However, their method requires a strong uniqueness condition, which they imposed by using perturbations of the form $c(i)=c_0(i)+2^{-i}$. On large graphs (roughly $m>100$), these perturbations lead to cost values that exceed the precision of floating-point formats used by typical linear programming solvers for numerical calculations. We demonstrate, by a sequence of counterexamples, that perturbations are required for the algorithm to work, motivating our formulation of a general method that arrives at the same solution to the problem as Chandrasekaran et al. but overcomes the limitations described above by solving multiple linear programs without using perturbations. We then give an explicit algorithm that exploits our method, and show that this new algorithm still runs in strongly polynomial time. | \section{Introduction}
Given a graph $G=(V, E)$ with edge cost function $c$, the minimum-cost
(or minimum-weight) perfect matching problem is to find a perfect matching
$E' \subseteq E$ (a subset such that every vertex $v \in V$ is covered
by exactly one $uv \in E'$) so that the sum of the costs of $E'$ is minimized.
As mentioned in \cite{cook_rohe_1999}, the minimum-cost perfect matching
problem is a classical problem in combinatorial optimization with numerous and
varied applications.
Since Edmonds \cite{edmonds1965a} introduced the blossom algorithm (a
polynomial-time combinatorial method of solving the problem), a number of
efficient
implementations have been developed over the years, with Kolmogorov's Blossom
V~\cite{kolmogorov_blossom_2009} being a recent notable version.
The problem can also be formulated as a binary integer program:
\begin{align*}
\min \sum_{e \in E}& c(e) x(e) \\
{\mbox{s.t.}} \sum_{uv \in E} x(uv) &= 1 & \forall~v \in V \\
x(e) &\in \{0,1\} &\forall~e \in E.
\end{align*}
To use linear programming (LP) techniques to solve the problem, the constraints
$x(e) \in \{0,1\}$ are first relaxed to $x(e) \in [0, 1]$ and then to
$x(e) \geq 0$ since the upper bounds are then implied. The linear program
that results turns out to be exact for bipartite graphs in the sense that a
basic optimal solution is the incidence vector of a minimum-weight perfect
matching. Edmonds \cite{edmonds1965b} provides an LP formulation for
non-bipartite graphs that has the same property. It requires the addition of
``blossom inequalities'':
\begin{align*}
\min \sum_{e \in E}& c(e) x(e) \\
{\mbox{s.t.}} \sum_{uv \in E} x(uv) &= 1 &\forall~v \in V \\
\sum_{\substack{uv \in E \\ u \in S, v\notin S}} x(uv) &\geq 1, &\forall
S\subseteq
V,~|S|\mbox{ odd},~ 3 \leq |S| \leq |V|-3 \\
x(e) &\geq 0 &\forall~e \in E.
\end{align*}
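To see why these inequalities cannot be dropped, consider a standard small example (ours, not taken from the cited papers): two triangles joined by an expensive bridge edge. The relaxation with only the degree constraints admits a half-integral point that is strictly cheaper than every perfect matching, and the blossom inequality for either triangle cuts it off. A brute-force sketch:

```python
from itertools import combinations
from fractions import Fraction

# Two triangles {0,1,2} and {3,4,5} joined by an expensive bridge (2,3).
edges = {(0, 1): 1, (0, 2): 1, (1, 2): 1,
         (3, 4): 1, (3, 5): 1, (4, 5): 1, (2, 3): 10}

def perfect_matchings():
    for k in range(1, len(edges) + 1):
        for M in combinations(edges, k):
            # covers every vertex exactly once
            if sorted(v for e in M for v in e) == [0, 1, 2, 3, 4, 5]:
                yield M

best_integral = min(sum(edges[e] for e in M) for M in perfect_matchings())

# Half-integral point: 1/2 on every triangle edge, 0 on the bridge.
x = {e: Fraction(1, 2) for e in edges}
x[(2, 3)] = Fraction(0)
assert all(sum(x[e] for e in edges if v in e) == 1 for v in range(6))
fractional_cost = sum(x[e] * c for e, c in edges.items())

assert fractional_cost == 3 and best_integral == 12
# The blossom inequality for S = {0,1,2} is violated: x(delta(S)) = 0 < 1.
```

Since each triangle is an odd set, every perfect matching must use the bridge, while the fractional point avoids it entirely.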
Unfortunately, the presence of an exponential number of constraints in
this formulation precludes polynomial-time solvability via a generic LP
solver. As a result, researchers in the past have experimented with a
cutting-plane approach, solving the relaxation first without the blossom
inequalities, then iteratively finding and adding violated inequalities until the problem
has an integral solution. A polynomial-time (though impractical) algorithm
follows using the equivalence of separation and optimization via the
ellipsoid method (see Grötschel \textit{et al.}~\cite{GLS}) and the
polynomial-time identification of violated blossom inequalities due to Padberg
and Rao~\cite{padberg_rao_1982}. The existence of a practical LP-based cutting
plane method for the minimum-weight perfect matching remained uncertain until
2016, when Chandrasekaran~\textit{et al.}~\cite{chandrasekaran_cutting_2016}
gave a cutting-plane algorithm which uses only a polynomial number of linear
programs.
Their approach involves carefully selecting the blossom inequalities to be
included at each iteration and requires that the optimal solution to the linear
program be unique. As this uniqueness property does not always hold in
general, their method introduces an edge ordering and a perturbation on the
edge costs. (The edge costs are assumed to be integers.) In particular, if
$c_0(i)$ is the original cost for the $i$-th edge, then the perturbed cost is
$c(i)=c_0(i)+2^{-i}$. Such a perturbation turns out to be sufficient for
providing the required uniqueness property. Even though the increase in
size in representing the perturbed costs is polynomial, when the graph is large
(say with hundreds of edges), the precision required to represent the
perturbed costs exceeds what is typical of the floating-point formats used by
most LP solvers \cite{gunluk_exact_2011}. (For example, $4+2^{-100}=
\frac{5070602400912917605986812821505}{1267650600228229401496703205376}$ requires a mantissa of over 100 bits.)
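The loss of precision is easy to exhibit. In Python, for instance, the perturbation is absorbed entirely by double-precision rounding, while exact rational arithmetic retains it at the cost of a roughly 100-bit representation (a sketch of ours, not tied to any particular LP solver):

```python
from fractions import Fraction

# In IEEE-754 double precision the perturbation vanishes:
# ulp(4) = 2**-50, far larger than 2**-100.
assert 4.0 + 2.0 ** -100 == 4.0

# Exact rational arithmetic retains it, but the representation grows:
exact = Fraction(4) + Fraction(1, 2 ** 100)
assert exact > 4
assert exact.denominator == 2 ** 100   # ~100 bits just for one edge cost
```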
To overcome the potential numerical difficulties caused by
perturbation, we present a variant of the algorithm which does not require
an explicit perturbation to ensure uniqueness. It works instead by solving a
sequence of linear programs for each single linear program that the original
algorithm would solve. We present a method whereby, given the solutions to
these programs, we can derive the optimal solution to a hypothetical perturbed
linear program without any explicit calculations on perturbed costs. After
this, the rest of the proof follows just as it did for the original algorithm.
The trade-off is that our algorithm has a worse runtime than that of
Chandrasekaran \textit{et al.} Theirs requires solving $O(n \log n)$ linear
programs, while ours solves $O(mn \log n)$. This is, however, still
polynomial.
The rest of this paper is organized as follows. After defining some terms
(Section~\ref{sec:prelim}) and
summarizing the algorithm from \cite{chandrasekaran_cutting_2016}
(Section~\ref{sec:cvv}), we give
examples of graphs which show that this algorithm requires some form of perturbation in both the primal and dual problems.
In particular, without perturbing the edge costs, we cannot guarantee that the
intermediate solutions will always be half-integral (Section~\ref{NonHalf})
or that the algorithm will terminate (Section~\ref{sec:cycling}).
This occurs even if we force the primal solution to be the same as it
would have been with perturbations. This motivates our new method, which uses
multiple linear programs to accurately emulate the perturbations. We first explain this in a general case
(Section~\ref{sec:perturb}) and then apply it to the specific problem of finding
perfect matchings (Section~\ref{sec:newalg}).
\section{Notation and definitions}\label{sec:prelim}
The set of $m \times n$ matrices with real entries is denoted by $\mathbb{R}^{m\times
n}$.
For a matrix $A\in\mathbb{R}^{m\times n}$, $A_{i,j}$ denotes the $(i,j)$-entry
of $A$; that is, the entry of $A$ at the intersection of the $i$-th row and the
$j$-th column. $A_{:,j}$ denotes the $j$-th column of $A$ and
$A_{i,:}$ the $i$-th row. The transpose of $A$ is denoted by $A^{\mathsf{T}}$.
Following common usage in combinatorics, for a finite set $E$, $\mathbb{R}^E$ denotes
the set of tuples of real numbers indexed by elements of $E$. For $y \in
\mathbb{R}^E$, $y(i)$ denotes the entry indexed by $i \in E$. For a positive integer
$n$, $\mathbb{R}^n$ is an abbreviation for $\mathbb{R}^{\{1,\ldots,n\}}.$ Depending on the
context, elements of $\mathbb{R}^n$ are treated as if they were elements of
$\mathbb{R}^{n\times 1}$.
We assume familiarity with basic terminology related to matchings and linear
programming. A refresher of the former can be found at \cite[Chapter
5]{cook_combinatorial_1998}, and of the latter at
\cite{schrijver_theory_2000}. We next recall some definitions in
Chandrasekaran \textit{et al.}~\cite{chandrasekaran_cutting_2016} to facilitate
discussion of their minimum-cost perfect matching algorithm.
Let $G=(V,E)$ be a simple undirected graph with integer edge costs given by
$c \in \mathbb{Z}^E$. A family $\mathscr{F}$ of subsets of $V$ is said to be
\textit{laminar} if for all $U,W \in \mathscr{F}$, $U \cap W = \emptyset$ or $U
\subseteq W$ or $W \subseteq U$. For a set $S \subseteq V$,
$\delta(S)$ denotes the set of edges incident to one vertex in $S$ and one
vertex not in $S$. For a vertex $u$, $\delta(u)$ denotes $\delta(\{u\})$. For
$x \in \mathbb{R}^E$ and $T \subseteq E$, $x(T)$ denotes the sum $\sum_{e \in T}
x(e)$.
Let $M$ be a matching of a graph $H = (V, E)$. Let $U \subseteq V$, and let $\mathscr{F}$ be a laminar family
of subsets of $V$. Then $M$ is a \textit{$(U, \mathscr{F})$-perfect-matching} if
${|\delta(S) \cap M|} \leq 1$ for every $S \in \mathscr{F}$ and $M$ covers exactly the
vertex set $U$. A set of vertices $S \in \mathscr{F}$ is said to be \textit{$(H,
\mathscr{F})$-factor-critical} for a graph $H$ if, for every $u \in S$, there exists an
$(S \setminus \{u\}, \mathscr{F})$-perfect-matching using the edges of $H$.
For a laminar family $\mathscr{F}$ of odd subsets of $V$, define the
following primal-dual pair of linear programming problems:
\begin{align*}
\min \sum_{uv\in E}& c(uv) x(uv)\tag{$P_\mathscr{F}(G, c)$}\label{Pf}\\
{\mbox{s.t.}}\ x(\delta(u))&=1&\forall u\in V\\
x(\delta(S))&\ge 1& \forall S\in {\mathscr{F}}\\
x&\ge0,\\
\\
\max \sum_{S\in V \cup \mathscr{F}}&\Pi(S)\tag{$D_\mathscr{F}(G, c)$}\label{Df}\\
{\mbox{s.t.}}\ \sum_{S\in V \cup \mathscr{F}:uv\in \delta(S)} \Pi(S)&\le c(uv) & \forall uv\in E \\
\Pi(S)&\ge0&\forall S\in \mathscr{F}.
\end{align*}
Let $\Pi$ be a feasible solution to \ref{Df}.
$G_\Pi$ denotes the graph $(V,E_\Pi)$ where
$E_\Pi = \{ uv \in E :
\sum_{S\in V \cup \mathscr{F}:uv\in \delta(S)} \Pi(S) = c(uv)\}$. Colloquially,
$E_\Pi$ is the set of ``tight'' edges with respect to $\Pi$. We say that $\Pi$
is an \textit{$\mathscr{F}$-critical dual} if every $S \in \mathscr{F}$ is $(G_\Pi,
\mathscr{F})$-factor-critical and $\Pi(T) > 0$ for every non-maximal $T \in \mathscr{F}$. If
$\Pi$ is an $\mathscr{F}$-critical dual except that some sets $S \in \mathscr{F}$ for which
$\Pi(S)=0$ may not be $(G_\Pi, \mathscr{F})$-factor-critical, we say that $\Pi$ is an
$\mathscr{F}$-positively-critical dual.
Finally, we define a metric on solutions to \ref{Df}
$$\Delta(\Gamma, \Pi)=\sum_{S \in V \cup\mathscr{F}} \frac{1}{|S|}
|\Gamma(S)-\Pi(S)|.$$ It can be easily verified that this has the properties of a metric.
For a given fixed $\Gamma$, we say that $\Pi$ is \textit{$\Gamma$-extremal} if
it minimizes $\Delta(\Gamma, \Pi)$. Given $\Gamma$ and a primal solution $x$,
we may find a $\Gamma$-extremal dual optimal solution by solving the following
linear program \cite[Section 5]{chandrasekaran_cutting_2016}:
\begin{align*}
\min \sum_{S \in V \cup \mathscr{F}}&\frac{1}{|S|}r(S)\tag{$D^*_\mathscr{F}(G,
c)$}\label{D*}\\
{\mbox{s.t.}}\ r(S)+\Pi(S)&\geq\Gamma(S)&\forall S \in V \cup \mathscr{F}_x\\
-r(S)+\Pi(S)&\leq\Gamma(S)&\forall S \in V \cup \mathscr{F}_x\\
\sum_{S\in V \cup \mathscr{F}:uv\in \delta(S)}\Pi(S)&=c(uv)&\forall uv \in \operatorname{supp}(x)\\
\sum_{S\in V \cup \mathscr{F}:uv\in \delta(S)}\Pi(S)&\leq c(uv)&\forall uv \notin \operatorname{supp}(x)\\
\Pi(S)&\geq0& \forall S \in \mathscr{F}_x\\
\Pi(S)&=0&\forall S \in \mathscr{F} \setminus \mathscr{F}_x\\
r(S)&=0&\forall S \in \mathscr{F} \setminus \mathscr{F}_x,
\end{align*}
where $\mathscr{F}_x=\{S \in \mathscr{F} : x(\delta(S))=1\}$. The solution will give us values
for $r$ and $\Pi$; we ignore $r$ and take $\Pi$ to be our $\Gamma$-extremal
solution.
\section{The Chandrasekaran-Végh-Vempala algorithm}\label{sec:cvv}
Algorithm~\ref{alg:cvv} for finding a minimum-cost perfect matching on $G$
is due to Chandrasekaran \textit{et al.}~\cite{chandrasekaran_cutting_2016}.
It assumes, as we will from now on, that the edge costs are integers.
\begin{algorithm}[H]\caption{C-P-Matching Algorithm}\label{alg:cvv}
\KwIn{A graph $G=(V, E)$ with edge costs $c \in \mathbb{Z}^E$.}
\KwOut{A binary vector $x$ representing a minimum-cost perfect matching on
$G$.}
Let $c$ be the cost function on the edges after perturbation (i.e., after
ordering the edges arbitrarily and increasing the cost of each edge $i$ by
$2^{-i}$). \label{cvv:OrderEdges}
$\mathscr{F} \leftarrow \emptyset$, $\Gamma \leftarrow 0$
\While{$x$ is not integral}{
Find an optimal solution $x$ to \ref{Pf}.\label{cvv:primalstep}
Find a $\Gamma$-extremal dual optimal solution $\Pi$ to \ref{Df}
(possibly by solving \ref{D*}).
\label{cvv:extremal}
$\mathscr{H}'\leftarrow\{S\in {\mathscr{F}}: \Pi(S)>0\}$ \label{cvv:H'}
Let $\mathscr{C}$ denote the set of odd cycles in $\operatorname{supp}(x)$. For each
$C\in \mathscr{C}$, define $\hat C$ as the union of $V(C)$ and the
maximal sets of $\mathscr{H}'$ intersecting it.
$\mathscr{H}''\leftarrow \{\hat C: C\in \mathscr{C}\}$
$\mathscr{F} \leftarrow \mathscr{H}'\cup \mathscr{H}''$, $\Gamma
\leftarrow \Pi$
}
\KwRet{$x$}
\end{algorithm}
The authors of the algorithm showed that $\mathscr{F}$ is always a laminar
family and that the algorithm terminates after $O(n \log n)$ iterations,
assuming that \ref{Pf} has a unique optimal solution in every
iteration of the algorithm. This is ensured through the use of perturbations
in the first step. The authors further demonstrate that a $\Gamma$-extremal
dual solution, with an $\mathscr{F}$-critical $\Gamma$, is an $\mathscr{F}$-positively-critical
dual optimal to \ref{Df}, so the result of step~\ref{cvv:extremal} is
$\mathscr{F}$-positively-critical. When combined with the uniqueness assumption, this
leads to $x$ being half-integral in each iteration.
The choice of using powers of $\frac{1}{2}$ for the perturbations is to keep
the increases in input size polynomial. However, to guarantee uniqueness,
powers of a sufficiently small $\epsilon > 0$ can be used instead.
\begin{lemma}\label{cvvepsilon}
There exists a $\delta>0$ such that the perturbations used in Algorithm
\ref{alg:cvv} may be replaced with powers of $\epsilon$ for any
$\delta>\epsilon>0$.
\end{lemma}
\begin{proof}
Consider the proof given for the efficacy of the $2^{-i}$ perturbation in
\cite[Section 7]{chandrasekaran_cutting_2016}. This uses only one property
of the perturbation: that, if $\sum_{i=1}^m a(i) 2^{-i}=\sum_{k=1}^n
b(k) 2^{-k}$, with $a(i), b(k) > 0$, then $m = n$
and $a(i)=b(i)$ for all $i$. We prove this property for all sufficiently small $\epsilon > 0$, after which the
desired result follows.
Assume $\sum_{i=1}^m a(i)\epsilon^i = \sum_{k=1}^n b(k) \epsilon^k$.
Assume further, without loss of generality, that $m \leq n$. Then
$\sum_{i=1}^m (a(i)-b(i))\epsilon^i - \sum_{k=m+1}^n b(k) \epsilon^k = 0$.
Suppose $m < n$. For $\epsilon$ sufficiently small, either $a(i)=b(i)$ for
all $i \in \{1, \ldots, m\}$ or $\sum_{i=1}^m |(a(i)-b(i))|\epsilon^i >
\sum_{k=m+1}^n b(k) \epsilon^k$. In the first case, $ \sum_{k=m+1}^n b(k)
\epsilon^k = 0$, a contradiction since $\epsilon$ and all $b(k)$ are
positive; in the second, $\sum_{i=1}^m (a(i)-b(i))\epsilon^i -
\sum_{k=m+1}^n b(k) \epsilon^k \neq 0$. Therefore $m = n$.
Assume there exists a minimal $l$ such that $a(l)-b(l) \neq 0$. Then
\[
0=\sum_{i=1}^m(a(i)-b(i))\epsilon^i = (a(l)-b(l))\epsilon^l +
\epsilon^{l+1}\sum_{i=l+1}^n(a(i)-b(i))\epsilon^{i-l-1}.
\]
For sufficiently small $\epsilon$, $|(a(l)-b(l))\epsilon^l| >
|\epsilon^{l+1}\sum_{i=l+1}^n(a(i)-b(i))\epsilon^{i-l-1}|$, so
$(a(l)-b(l))\epsilon^l + \epsilon^{l+1} \sum_{i=l+1}^n(a(i)-b(i))
\epsilon^{i-l-1} \neq 0$.
This shows that, for any given $a$ and $b$, there exists a $\delta > 0$ such
that if $\sum a(i)\epsilon^i = \sum b(k)\epsilon^k$ for some
$\delta > \epsilon > 0$, then $a=b$. In fact, we need only consider the cases where
$a$ and $b$ are basic feasible solutions to \ref{Pf}, because if there
exists an optimal solution that is not a basic feasible solution then there
exist two distinct basic feasible solutions that are optimal. Therefore,
if optimal basic feasible solutions are unique, so are optimal solutions in
general.
Fix $\mathscr{F}$. Then, because \ref{Pf} is bounded and finite-dimensional, it
has a finite number of basic feasible solutions $s_1, \dots, s_k$. Every
pair $(s_p, s_q)$ gives us a $\delta$ by setting $a=s_p$, $b=s_q$ and
running through the logic above. Take the smallest of these $\delta$s to
complete the proof.
\end{proof}
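The single property used in this proof — that equal $\epsilon$-power sums force equal coefficient sequences — can be made concrete: for $\epsilon = 1/10$ and coefficients bounded by $9$, the sums $\sum_i a(i)\epsilon^i$ are just decimal expansions, so distinct coefficient vectors give distinct sums. A brute-force check (ours, with a fixed sequence length for simplicity):

```python
from fractions import Fraction
from itertools import product

eps = Fraction(1, 10)

def perturbed_sum(a):
    # sum_i a(i) * eps^i with 1-based exponents, as in the lemma
    return sum(coeff * eps ** (i + 1) for i, coeff in enumerate(a))

# With coefficients in {0,...,9}, the sums are distinct decimal
# expansions, so the map a -> perturbed_sum(a) is injective.
sums = {perturbed_sum(a) for a in product(range(10), repeat=3)}
assert len(sums) == 10 ** 3
```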
For reasons that will become clear later (see Section~\ref{sec:perturb}),
it will be more convenient to use powers of a sufficiently small $\epsilon > 0$
as perturbations instead of powers of $\frac{1}{2}$. In any case, increasing
the bit-length required to represent the edge costs can lead to practical
computation challenges since most LP solvers employ fixed-length floating-point
formats. (Notable exceptions exist, such as
QSopt-Exact~\cite{ApplegateDavidL.2007Estl} and the SoPlex rational
solver~\cite{GleixnerAmbrosM.2016IRfL}, but they are significantly slower than
non-exact solvers.) We feel strongly that the key to a successful
implementation of Algorithm~\ref{alg:cvv} is to not work with any explicit
numerical perturbation.
An obvious way of modifying the algorithm is simply to not perturb the edge
costs and run the rest of the procedure as stated, but this violates the uniqueness assumption and, as demonstrated in \cite[Section 1]{chandrasekaran_cutting_2016}, can lead to non-half-integrality and cycling.
Instead, we may emulate perturbations by ordering the edges (as in step
\ref{cvv:OrderEdges} of Algorithm~\ref{alg:cvv}) and then finding a
lexicographically-minimal optimal solution to \ref{Pf}, where $c$ is now an
unperturbed cost function. This may be accomplished using Algorithm
\ref{alg:LexMinPrimal}, which shows the process in a more general case.
\begin{algorithm}[H]
\caption{Lexicographically-Minimal Primal Algorithm}\label{alg:LexMinPrimal}
\KwIn{A linear program $P$ of the form $\min c^{\mathsf{T}} x\ {\mbox{s.t.}}\ Ax \geq b$, where
$x \in \mathbb{R}^n$.}
\KwOut{The lexicographically-minimal solution $x$ to $P$.}
Solve $P$ and let its optimal value be $\gamma$.
$K \leftarrow \emptyset$, $x \leftarrow 0$
\For{$i \leftarrow 1$ \KwTo $n$}{
Set $x$ to an optimal solution to
\abovedisplayskip=0pt
\belowdisplayskip=0pt
\begin{align}
\min \ & x_i\notag \\
{\mbox{s.t.}}\ c^{\mathsf{T}} x &= \gamma \notag \\
x_j &=z & \forall (j, z) \in K \notag\\
Ax&\geq b. \notag
\end{align}
$K \leftarrow K \cup \{(i, x_i)\}$
}
\KwRet{$x$}
\end{algorithm}
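On a small instance the equivalence between the two tie-breaking rules is easy to observe. The sketch below (ours; a 4-cycle with two equal-cost perfect matchings) checks that the matching minimizing the perturbed cost $c_0(i)+\epsilon^i$ is exactly the one whose incidence vector is lexicographically minimal among optimal solutions:

```python
from fractions import Fraction
from itertools import combinations

# A 4-cycle with unit costs: two perfect matchings of equal cost 2,
# so the unperturbed optimum is not unique.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cost = [1, 1, 1, 1]
eps = Fraction(1, 16)        # any sufficiently small epsilon works

def matchings():
    for M in combinations(range(len(edges)), 2):
        if sorted(v for i in M for v in edges[i]) == [0, 1, 2, 3]:
            yield M

def perturbed(M):
    # perturbed cost: c0(i) + eps^i with 1-based edge indices
    return sum(cost[i] + eps ** (i + 1) for i in M)

def incidence(M):
    return tuple(1 if i in M else 0 for i in range(len(edges)))

opt = min(matchings(), key=perturbed)
best_cost = min(sum(cost[i] for i in M) for M in matchings())
lex = min(incidence(M) for M in matchings()
          if sum(cost[i] for i in M) == best_cost)
assert incidence(opt) == lex   # perturbation and lex-min tie-breaking agree
```

Here both rules select the matching $\{(1,2),(3,0)\}$, whose incidence vector $(0,1,0,1)$ avoids the first edge in the ordering.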
By \cite[p. 138]{schrijver_theory_2000}, the
lexicographically-minimal optimal solution to \ref{Pf} is the same
as the optimal solution to the perturbed \ref{Pf}.
Unfortunately, this on its own ensures neither half-integrality nor
convergence: for an arbitrarily small nonzero value there exist graphs such
that the lexicographically-minimal optimal solution to \ref{Pf} contains
smaller values, and there exist graphs on which this modification of the
algorithm enters an infinite loop. Before giving a slightly more complex
modification of the algorithm that uses a multi-stage approach to mimic solving
with perturbation without actually working with perturbations, we first give
some examples of graphs that demonstrate the problems just mentioned.
\section{Non-half-integral solution}\label{NonHalf}
The following example, which we call the ``dancing robot,'' shows that, if
the edge costs are not perturbed at all, having an $\mathscr{F}$-critical dual is not
sufficient to guarantee that all lexicographically-minimal optimal primal
solutions are half-integral. Chandrasekaran \textit{et
al.}~\cite{chandrasekaran_cutting_2016} provide an example early in their paper
of a graph on which their algorithm as written does not maintain
half-integrality, but this does not entirely suffice for our purposes, as the
lexicographically-minimal primal solution on this graph, for any edge ordering,
is integral.
The graph shown in Figures \ref{DRPerfect}-\ref{DR3}, with all edges having
cost 1, eventually gives non-half-integral values when run through the original
algorithm without any perturbation while enforcing a lexicographically-minimal
optimal primal. During each iteration, an optimal dual solution is given by
$\Pi$, the vector having value $\frac{1}{2}$ on the entries indexed by the
vertices and $0$ on entries indexed by the sets in $\mathscr{F}$. Note that all edges
in the graph are tight with respect to $\Pi$. We can see that, although the
primal solutions in the first and second iterations are half-integral
(shown in Figures~\ref{DR1} and~\ref{DR2}), the solution in the third iteration
no longer is. The $\frac{1}{3}$- and $\frac{2}{3}$-edges are shown in
Figure~\ref{DR3}.
Meanwhile, the dual solution $\Pi$ is a positively-critical
optimal dual for the current $\mathscr{F}$ in every iteration, as well as a critical
dual for the next $\mathscr{F}$. For instance, the $\Pi$ from the second iteration,
feasible to the dual problems from both the second and third iterations, is
trivially an $\mathscr{F}$-positively-critical optimal dual for the second iteration,
since none of the sets $S \in \mathscr{F}$ have positive dual value. For the third
iteration, since there exists an $(S\setminus\{u\}, \mathscr{F})$-perfect-matching for
any node $u \in S \in \mathscr{F}$, and since $\mathscr{F}$ only has maximal sets, that same
$\Pi$ is an $\mathscr{F}$-critical dual.
\begin{figure}
\begin{minipage}{0.84\textwidth}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (0, 4);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\draw (4) -- (12);
\draw[integral] (8) -- (9);
\draw (0) -- (12);
\draw[integral] (0) -- (3);
\draw (0) -- (1);
\draw [integral] (1) -- (5);
\draw (2) -- (13);
\draw[integral] (2) -- (6);
\draw (3) -- (7);
\draw (4) -- (13);
\draw[integral] (4) -- (11);
\draw (5) -- (13);
\draw (5) -- (15);
\draw[integral] (7) -- (12);
\draw (8) -- (11);
\draw (9) -- (11);
\draw[integral] (10) -- (14);
\draw (10) -- (11);
\draw (11) -- (14);
\draw[integral] (13) -- (15);
\node at (0)[draw, thick, circle, fill=white, inner sep = 3.5pt]{0};
\node at (1)[draw, thick, circle, fill=white, inner sep = 3.5pt]{1};
\node at (2)[draw, thick, circle, fill=white, inner sep = 3.5pt]{2};
\node at (3)[draw, thick, circle, fill=white, inner sep = 3.5pt]{3};
\node at (4)[draw, thick, circle, fill=white, inner sep = 3.5pt]{4};
\node at (5)[draw, thick, circle, fill=white, inner sep = 3.5pt]{5};
\node at (6)[draw, thick, circle, fill=white, inner sep = 3.5pt]{6};
\node at (7)[draw, thick, circle, fill=white, inner sep = 3.5pt]{7};
\node at (8)[draw, thick, circle, fill=white, inner sep = 3.5pt]{8};
\node at (9)[draw, thick, circle, fill=white, inner sep = 3.5pt]{9};
\node at (10)[draw, thick, circle, fill=white, inner sep = 3.5pt]{10};
\node at (11)[draw, thick, circle, fill=white, inner sep = 3.5pt]{11};
\node at (12)[draw, thick, circle, fill=white, inner sep = 3.5pt]{12};
\node at (13)[draw, thick, circle, fill=white, inner sep = 3.5pt]{13};
\node at (14)[draw, thick, circle, fill=white, inner sep = 3.5pt]{14};
\node at (15)[draw, thick, circle, fill=white, inner sep = 3.5pt]{15};
\draw [rounded corners = 4pt] (-7.4,-3.5) rectangle (-1.5, -9);
\draw (-7, -4.5) -- (-3, -4.5) [integral];
\draw (-7, -5.5) -- (-3, -5.5) [half integral];
\draw (-7, -6.5) -- (-3, -6.5) [one third];
\draw (-7, -7.5) -- (-3, -7.5) [two thirds];
\node at (-2.25,-4.5){\Large 1};
\node at (-2.25,-5.5){\Large 1/2};
\node at (-2.25,-6.5){\Large 1/3};
\node at (-2.25,-7.5){\Large 2/3};
\node at (-4.45, -8.5) {\Large Zero otherwise};
\end{tikzpicture}}
\caption{Perfect Matching}
\label{Legend}
\label{DRPerfect}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (0, 4);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\draw (4) -- (12) [integral];
\draw (8) -- (9) [integral];
\draw (0) -- (12);
\draw (0) -- (3);
\draw (0) -- (1) [integral];
\draw (1) -- (5);
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [integral];
\draw (4) -- (13);
\draw (4) -- (11);
\draw (5) -- (13) [half integral];
\draw (5) -- (15) [half integral];
\draw (7) -- (12);
\draw (8) -- (11);
\draw (9) -- (11);
\draw (10) -- (14) [half integral];
\draw (10) -- (11) [half integral];
\draw (11) -- (14) [half integral];
\draw (13) -- (15) [half integral];
\node at (0)[draw, thick, circle, fill=white, inner sep = 3.5pt]{0};
\node at (1)[draw, thick, circle, fill=white, inner sep = 3.5pt]{1};
\node at (2)[draw, thick, circle, fill=white, inner sep = 3.5pt]{2};
\node at (3)[draw, thick, circle, fill=white, inner sep = 3.5pt]{3};
\node at (4)[draw, thick, circle, fill=white, inner sep = 3.5pt]{4};
\node at (5)[draw, thick, circle, fill=white, inner sep = 3.5pt]{5};
\node at (6)[draw, thick, circle, fill=white, inner sep = 3.5pt]{6};
\node at (7)[draw, thick, circle, fill=white, inner sep = 3.5pt]{7};
\node at (8)[draw, thick, circle, fill=white, inner sep = 3.5pt]{8};
\node at (9)[draw, thick, circle, fill=white, inner sep = 3.5pt]{9};
\node at (10)[draw, thick, circle, fill=white, inner sep = 3.5pt]{10};
\node at (11)[draw, thick, circle, fill=white, inner sep = 3.5pt]{11};
\node at (12)[draw, thick, circle, fill=white, inner sep = 3.5pt]{12};
\node at (13)[draw, thick, circle, fill=white, inner sep = 3.5pt]{13};
\node at (14)[draw, thick, circle, fill=white, inner sep = 3.5pt]{14};
\node at (15)[draw, thick, circle, fill=white, inner sep = 3.5pt]{15};
\end{tikzpicture}}
$\Pi_1(v)=\frac{1}{2}$ $\forall{v} \in \mathcal{V}$, $\mathscr{F}_1=\emptyset$
\caption{First Iteration}
\label{DR1}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (0, 4);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\draw (4) -- (12) [half integral];
\draw (8) -- (9) [half integral];
\draw (0) -- (12) [half integral];
\draw (0) -- (3);
\draw (0) -- (1) [half integral];
\draw (1) -- (5) [half integral];
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [integral];
\draw (4) -- (13) [half integral];
\draw (4) -- (11);
\draw (5) -- (13);
\draw (5) -- (15) [half integral];
\draw (7) -- (12);
\draw (8) -- (11) [half integral];
\draw (9) -- (11) [half integral];
\draw (10) -- (14) [integral];
\draw (10) -- (11);
\draw (11) -- (14);
\draw (13) -- (15) [half integral];
\node at (0)[draw, thick, circle, fill=white, inner sep = 3.5pt]{0};
\node at (1)[draw, thick, circle, fill=white, inner sep = 3.5pt]{1};
\node at (2)[draw, thick, circle, fill=white, inner sep = 3.5pt]{2};
\node at (3)[draw, thick, circle, fill=white, inner sep = 3.5pt]{3};
\node at (4)[draw, thick, circle, fill=white, inner sep = 3.5pt]{4};
\node at (5)[draw, thick, circle, fill=white, inner sep = 3.5pt]{5};
\node at (6)[draw, thick, circle, fill=white, inner sep = 3.5pt]{6};
\node at (7)[draw, thick, circle, fill=white, inner sep = 3.5pt]{7};
\node at (8)[draw, thick, circle, fill=white, inner sep = 3.5pt]{8};
\node at (9)[draw, thick, circle, fill=white, inner sep = 3.5pt]{9};
\node at (10)[draw, thick, circle, fill=white, inner sep = 3.5pt]{10};
\node at (11)[draw, thick, circle, fill=white, inner sep = 3.5pt]{11};
\node at (12)[draw, thick, circle, fill=white, inner sep = 3.5pt]{12};
\node at (13)[draw, thick, circle, fill=white, inner sep = 3.5pt]{13};
\node at (14)[draw, thick, circle, fill=white, inner sep = 3.5pt]{14};
\node at (15)[draw, thick, circle, fill=white, inner sep = 3.5pt]{15};
\end{tikzpicture}}
$\Pi_2(v)=\frac{1}{2}$ $\forall{v} \in \mathcal{V}$,
$\mathscr{F}_2 = \{\{5, 15, 13\}, \{10, 11, 14\}\}$, $\Pi_2(S)= 0 $ $\forall{S} \in \mathscr{F}_2$
\caption{Second Iteration}
\label{DR2}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (0, 4);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\draw (4) -- (12) [two thirds];
\draw (8) -- (9) [integral];
\draw (0) -- (12);
\draw (0) -- (3) [one third];
\draw (0) -- (1) [two thirds];
\draw (1) -- (5) [one third];
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [two thirds];
\draw (4) -- (13);
\draw (4) -- (11) [one third];
\draw (5) -- (13) [one third];
\draw (5) -- (15) [one third];
\draw (7) -- (12) [one third];
\draw (8) -- (11);
\draw (9) -- (11);
\draw (10) -- (14) [two thirds];
\draw (10) -- (11) [one third];
\draw (11) -- (14) [one third];
\draw (13) -- (15) [two thirds];
\node at (0)[draw, thick, circle, fill=white, inner sep = 3.5pt]{0};
\node at (1)[draw, thick, circle, fill=white, inner sep = 3.5pt]{1};
\node at (2)[draw, thick, circle, fill=white, inner sep = 3.5pt]{2};
\node at (3)[draw, thick, circle, fill=white, inner sep = 3.5pt]{3};
\node at (4)[draw, thick, circle, fill=white, inner sep = 3.5pt]{4};
\node at (5)[draw, thick, circle, fill=white, inner sep = 3.5pt]{5};
\node at (6)[draw, thick, circle, fill=white, inner sep = 3.5pt]{6};
\node at (7)[draw, thick, circle, fill=white, inner sep = 3.5pt]{7};
\node at (8)[draw, thick, circle, fill=white, inner sep = 3.5pt]{8};
\node at (9)[draw, thick, circle, fill=white, inner sep = 3.5pt]{9};
\node at (10)[draw, thick, circle, fill=white, inner sep = 3.5pt]{10};
\node at (11)[draw, thick, circle, fill=white, inner sep = 3.5pt]{11};
\node at (12)[draw, thick, circle, fill=white, inner sep = 3.5pt]{12};
\node at (13)[draw, thick, circle, fill=white, inner sep = 3.5pt]{13};
\node at (14)[draw, thick, circle, fill=white, inner sep = 3.5pt]{14};
\node at (15)[draw, thick, circle, fill=white, inner sep = 3.5pt]{15};
\end{tikzpicture}}
$\Pi_3(v)=\frac{1}{2}$ $\forall{v} \in \mathcal{V}$,
$\mathscr{F}_3 = \{\{0, 1, 5, 15, 13, 4, 12\}, \{8, 11, 9\}\}$, $\Pi_3(S)= 0 $ $\forall{S} \in \mathscr{F}_3$
\caption{Third Iteration}
\label{DR3}
\end{center}
\end{minipage}
\end{minipage}
\begin{minipage}[c]{0.15\textwidth}
\centering
\begin{flushright}
\textbf{Edge ordering:}\\
(1, 5)\\
(2, 13)\\
(10, 14)\\
(0, 3)\\
(4, 12)\\
(5, 13)\\
(7, 12)\\
(5, 15)\\
(3, 7)\\
(8, 9)\\
(0, 1)\\
(11, 14)\\
(0, 12)\\
(4, 13)\\
(2, 6)\\
(10, 11)\\
(9, 11)\\
(4, 11)\\
(8, 11)\\
(13, 15)\\
\end{flushright}
\end{minipage}
\end{figure}
Even worse, the algorithm will eventually enter an infinite loop on this
example. The details are tedious, so we omit them; the example
in the next section also loops but does not lose half-integrality.
On their own, then, a lexicographically-minimal primal and an $\mathscr{F}$-critical
dual can guarantee neither half-integrality nor termination. It is worth
mentioning that we could expand the graph to get one with more
non-half-integral edges by making the $2$-$6$ edge (the ``arm'' of the dancing
robot) overlap with another dancing robot's $6$-$2$ edge. This combination
would have twice as many non-half-integral edges, spread across twice as many
non-half-integral paths, as the original dancing robot. By combining multiple
dancing robots in such a manner, we can get as many non-half-integral paths as
we want, which indicates that we cannot avoid these non-half-integral edges via
a simple combinatorial linear- or constant-time algorithm.
We can even alter the dancing robot in order to give us arbitrarily small but
nonzero values in a lexicographically-minimal optimal solution. Say we want a
primal solution $x$ such that, for some edge $uv$, $x(uv)=\frac{1}{2n+1}$ for some
$n \in \mathbb{Z}_{\geq 1}$. We add $2(n-1)$ new edges between $0$ and the $0$-$1$
edge, alternately without and with adjoining $4$-cycles. The next example shows
how this works for $n=2$.
\begin{figure}
\begin{minipage}{0.84\textwidth}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (3, 5);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\coordinate (16) at (-3, 5);
\coordinate (17) at (0, 6);
\coordinate (18) at (-3.8, 8);
\coordinate (19) at (-0.8, 9);
\draw (4) -- (12);
\draw (8) -- (9) [integral];
\draw (0) -- (12) [integral];
\draw (0) -- (3);
\draw (0) -- (16);
\draw (1) -- (5) [integral];
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [integral];
\draw (4) -- (13);
\draw (4) -- (11) [integral];
\draw (5) -- (13);
\draw (5) -- (15);
\draw (7) -- (12);
\draw (8) -- (11);
\draw (9) -- (11);
\draw (10) -- (14) [integral];
\draw (10) -- (11);
\draw (11) -- (14);
\draw (13) -- (15) [integral];
\draw (16) -- (17) [integral];
\draw (1) -- (17);
\draw (16) -- (18);
\draw (18) -- (19) [integral];
\draw (17) -- (19);
\foreach \n in {0,...,19}
\node at (\n)[draw, thick, circle, fill=white, inner sep = 2.8pt]{\n};
\draw [rounded corners = 4pt] (-8,-3.5) rectangle (-3.5, -11.25);
\draw (-7.5, -4.5) -- (-5, -4.5) [integral];
\draw (-7.5, -5.5) -- (-5, -5.5) [half integral];
\draw (-7.5, -6.5) -- (-5, -6.5) [one third];
\draw (-7.5, -7.5) -- (-5, -7.5) [two fifths];
\draw (-7.5, -8.5) -- (-5, -8.5) [three fifths];
\draw (-7.5, -9.5) -- (-5, -9.5) [two thirds];
\node at (-4.25,-4.5){\Large 1};
\node at (-4.25,-5.5){\Large 1/2};
\node at (-4.25,-6.5){\Large 1/5};
\node at (-4.25,-7.5){\Large 2/5};
\node at (-4.25,-8.5){\Large 3/5};
\node at (-4.25,-9.5){\Large 4/5};
\node at (-5.75,-10.5) {\Large Zero otherwise};
\end{tikzpicture}}
\caption{Perfect Matching}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (3, 5);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\coordinate (16) at (-3, 5);
\coordinate (17) at (0, 6);
\coordinate (18) at (-3.8, 8);
\coordinate (19) at (-0.8, 9);
\draw (4) -- (12) [integral];
\draw (8) -- (9) [integral];
\draw (0) -- (12);
\draw (0) -- (3);
\draw (0) -- (16) [integral];
\draw (1) -- (5) ;
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [integral];
\draw (4) -- (13);
\draw (4) -- (11);
\draw (5) -- (13) [half integral];
\draw (5) -- (15) [half integral];
\draw (7) -- (12);
\draw (8) -- (11);
\draw (9) -- (11);
\draw (10) -- (14) [half integral];
\draw (10) -- (11) [half integral];
\draw (11) -- (14) [half integral];
\draw (13) -- (15) [half integral];
\draw (16) -- (17);
\draw (1) -- (17) [integral];
\draw (16) -- (18);
\draw (18) -- (19) [integral];
\draw (17) -- (19);
\foreach \n in {0,...,19}
\node at (\n)[draw, thick, circle, fill=white, inner sep = 2.8pt]{\n};
\end{tikzpicture}}
\caption{First Iteration}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (3, 5);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\coordinate (16) at (-3, 5);
\coordinate (17) at (0, 6);
\coordinate (18) at (-3.8, 8);
\coordinate (19) at (-0.8, 9);
\draw (4) -- (12) [half integral];
\draw (8) -- (9) [half integral];
\draw (0) -- (12) [half integral];
\draw (0) -- (3);
\draw (0) -- (16) [half integral];
\draw (1) -- (5) [half integral];
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [integral];
\draw (4) -- (13) [half integral];
\draw (4) -- (11);
\draw (5) -- (13);
\draw (5) -- (15) [half integral];
\draw (7) -- (12);
\draw (8) -- (11) [half integral];
\draw (9) -- (11) [half integral];
\draw (10) -- (14) [integral];
\draw (10) -- (11);
\draw (11) -- (14);
\draw (13) -- (15) [half integral];
\draw (16) -- (17) [half integral];
\draw (1) -- (17) [half integral];
\draw (16) -- (18);
\draw (18) -- (19) [integral];
\draw (17) -- (19);
\foreach \n in {0,...,19}
\node at (\n)[draw, thick, circle, fill=white, inner sep = 2.8pt]{\n};
\end{tikzpicture}}
\caption{Second Iteration}
\label{alteredrobot}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tikzpicture}[thick, yscale=0.6]
\tikzset{
font={\fontsize{18pt}{12}\selectfont}}
\coordinate (0) at (-3.46, 2);
\coordinate (1) at (3, 5);
\coordinate (2) at (6, -5);
\coordinate (3) at (-7.46, 2);
\coordinate (4) at (0, -4);
\coordinate (5) at (3.46, 2);
\coordinate (6) at (6.5, -8);
\coordinate (7) at (-7.46, -2);
\coordinate (8) at (-3, -11);
\coordinate (9) at (-1, -12);
\coordinate (10) at (1, -12);
\coordinate (11) at (0, -8);
\coordinate (12) at (-3.46, -2);
\coordinate (13) at (3.46, -2);
\coordinate (14) at (3, -11);
\coordinate (15) at (6.92, 0);
\coordinate (16) at (-3, 5);
\coordinate (17) at (0, 6);
\coordinate (18) at (-3.8, 8);
\coordinate (19) at (-0.8, 9);
\draw (4) -- (12) [two thirds];
\draw (8) -- (9) [integral];
\draw (0) -- (12);
\draw (0) -- (3) [one third];
\draw (0) -- (16) [two thirds];
\draw (1) -- (5) [one third];
\draw (2) -- (13);
\draw (2) -- (6) [integral];
\draw (3) -- (7) [two thirds];
\draw (4) -- (13);
\draw (4) -- (11) [one third];
\draw (5) -- (13) [two fifths];
\draw (5) -- (15) [two fifths];
\draw (7) -- (12) [one third];
\draw (8) -- (11);
\draw (9) -- (11);
\draw (10) -- (14) [three fifths];
\draw (10) -- (11) [two fifths];
\draw (11) -- (14) [two fifths];
\draw (13) -- (15) [three fifths];
\draw (16) -- (17);
\draw (1) -- (17) [two thirds];
\draw (16) -- (18) [one third];
\draw (18) -- (19) [two thirds];
\draw (17) -- (19) [one third];
\foreach \n in {0,...,19}
\node at (\n)[draw, thick, circle, fill=white, inner sep = 2.8pt]{\n};
\end{tikzpicture}}
\caption{Third Iteration}
\end{center}
\end{minipage}
\end{minipage}
\begin{minipage}[c]{0.15\textwidth}
\centering
\begin{flushright}
\textbf{Edge ordering:}\\
(1, 5)\\
(2, 13)\\
(10, 14)\\
(0, 3)\\
(17, 19)\\
(4, 12)\\
(5, 13)\\
(7, 12)\\
(16, 18)\\
(5, 15)\\
(3, 7)\\
(18, 19)\\
(8, 9)\\
(0, 16)\\
(1, 17)\\
(11, 14)\\
(0, 12)\\
(16, 17)\\
(4, 13)\\
(2, 6)\\
(10, 11)\\
(9, 11)\\
(4, 11)\\
(8, 11)\\
(13, 15)
\end{flushright}
\end{minipage}
\end{figure}
Simply by following the algorithm through, we see that we eventually end
up with a cut ($\{4, 12, 0, 16, 17, 1, 5, 15,
13\}$ in Figure~\ref{alteredrobot}) with $2n+1$ edges coming out of it.
Furthermore, by the matching conditions ($x(\delta(u))=1$ for every vertex $u$)
and the fact that these edges form a path in the matching, each edge coming out
of this cut must have the same value, which we call $\zeta$. Since
$x(\delta(S)) \geq 1$, the minimum cost is attained when $(2n+1)\zeta=1$, that
is, $\zeta=\frac{1}{2n+1}$.
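Spelled out, with $S$ the cut above and $\zeta$ the common value of its $2n+1$ crossing edges, the argument reads
\[
1 \;\leq\; x(\delta(S)) \;=\; \sum_{e \in \delta(S)} x(e) \;=\; (2n+1)\,\zeta,
\]
so $\zeta \geq \frac{1}{2n+1}$, and lexicographic minimality forces equality. For $n=2$, the five crossing edges each receive value $\frac{1}{5}$.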
\section{Cycling example}\label{sec:cycling}
Even when seeking a lexicographically-minimal optimal solution to \ref{Pf}
with half-integrality maintained throughout, cycling can still occur in the
absence of perturbation. The graph in Figures \ref{Step1} and \ref{Step2}
(which is easily seen to have a perfect matching), with each edge having cost
$1$, exhibits such behavior. At all times, an optimal $\mathscr{F}$-positively-critical dual is given by a
vector with the vertices having value $\frac{1}{2}$ and the odd sets in $\mathscr{F}$
having value $0$. Since Algorithm \ref{alg:cvv} retains only cuts with
nonzero dual values (step \ref{cvv:H'}), no cuts are preserved between
iterations. Thus, the blossom inequalities that were violated in the previous
iteration are free to be violated again in the next iteration, leading
to cycling.
\begin{figure}
\begin{center}
\begin{minipage}{0.41\textwidth}
\centering
\begin{tikzpicture}[thick, xscale=1]
\coordinate (Zero) at (1,4);
\coordinate (One) at (1,0);
\coordinate (Two) at (5,-1);
\coordinate (Three) at (3,4);
\coordinate (Four) at (5,4);
\coordinate (Five) at (5,0);
\coordinate (Six) at (1,-1);
\coordinate (Seven) at (2,1.5);
\coordinate (Eight) at (3,5.5);
\coordinate (Nine) at (3,2);
\draw (One) -- (Five) [half integral thin];
\draw (Five) -- (Six);
\draw (Three) -- (Five);
\draw (Four) -- (Eight) [integral thin];
\draw (Zero) -- (Nine) [half integral thin];
\draw (Two) -- (Five);
\draw (Five) -- (Seven) [half integral thin];
\draw (Zero) -- (Three) [half integral thin];
\draw (One) -- (Seven) [half integral thin];
\draw (Zero) -- (Eight);
\draw (One) -- (Six);
\draw (Five) -- (Nine);
\draw (Three) -- (Four);
\draw (Three) -- (Eight);
\draw (Four) -- (Five);
\draw (Two) -- (Six) [integral thin];
\draw (Three) -- (Nine) [half integral thin];
\draw (Zero) -- (One);
\node at (Zero)[draw, circle, fill=white, inner sep=2pt]{0};
\node at (One)[draw, circle, fill=white, inner sep=2pt]{1};
\node at (Two)[draw, circle, fill=white, inner sep=2pt]{2};
\node at (Three)[draw, circle, fill=white, inner sep=2pt]{3};
\node at (Four)[draw, circle, fill=white, inner sep=2pt]{4};
\node at (Five)[draw, circle, fill=white, inner sep=2pt]{5};
\node at (Six)[draw, circle, fill=white, inner sep=2pt]{6};
\node at (Seven)[draw, circle, fill=white, inner sep=2pt]{7};
\node at (Eight)[draw, circle, fill=white, inner sep=2pt]{8};
\node at (Nine)[draw, circle, fill=white, inner sep=2pt]{9};
\end{tikzpicture}
\caption{Odd iterations}
\label{Step1}
\end{minipage}
\begin{minipage}{0.41\textwidth}
\centering
\begin{tikzpicture}[thick, xscale=1]
\coordinate (Zero) at (1,4);
\coordinate (One) at (1,0);
\coordinate (Two) at (5,-1);
\coordinate (Three) at (3,4);
\coordinate (Four) at (5,4);
\coordinate (Five) at (5,0);
\coordinate (Six) at (1,-1);
\coordinate (Seven) at (2,1.5);
\coordinate (Eight) at (3,5.5);
\coordinate (Nine) at (3,2);
\draw (One) -- (Five);
\draw (Five) -- (Six) [half integral thin];
\draw (Three) -- (Five);
\draw (Four) -- (Eight) [half integral thin];
\draw (Zero) -- (Nine) [integral thin];
\draw (Two) -- (Five) [half integral thin];
\draw (Five) -- (Seven);
\draw (Zero) -- (Three);
\draw (One) -- (Seven) [integral thin];
\draw (Zero) -- (Eight);
\draw (One) -- (Six);
\draw (Five) -- (Nine);
\draw (Three) -- (Four) [half integral thin];
\draw (Three) -- (Eight)[half integral thin];
\draw (Four) -- (Five);
\draw (Two) -- (Six) [half integral thin];
\draw (Three) -- (Nine);
\draw (Zero) -- (One);
\node at (Zero)[draw, circle, fill=white, inner sep=2pt]{0};
\node at (One)[draw, circle, fill=white, inner sep=2pt]{1};
\node at (Two)[draw, circle, fill=white, inner sep=2pt]{2};
\node at (Three)[draw, circle, fill=white, inner sep=2pt]{3};
\node at (Four)[draw, circle, fill=white, inner sep=2pt]{4};
\node at (Five)[draw, circle, fill=white, inner sep=2pt]{5};
\node at (Six)[draw, circle, fill=white, inner sep=2pt]{6};
\node at (Seven)[draw, circle, fill=white, inner sep=2pt]{7};
\node at (Eight)[draw, circle, fill=white, inner sep=2pt]{8};
\node at (Nine)[draw, circle, fill=white, inner sep=2pt]{9};
\end{tikzpicture}
\caption{Even iterations}
\label{Step2}
\end{minipage}
\begin{minipage}[c]{0.15\textwidth}
\raggedleft
\textbf{Edge ordering:}\\
(5, 9)\\
(3, 5)\\
(4, 5)\\
(1, 6)\\
(3, 9)\\
(0, 8)\\
(5, 7)\\
(3, 4)\\
(1, 5)\\
(5, 6)\\
(0, 3)\\
(0, 1)\\
(1, 7)\\
(0, 9)\\
(2, 6)\\
(3, 8)\\
(2, 5)\\
(4, 8)
\end{minipage}
\end{center}
\end{figure}
\sloppy
This precludes us from ensuring termination of an approach using a
lexicographically-minimal primal with an unperturbed dual. More significantly,
it also means we could not even implement a heuristic version which, in the
event of cycling, would restart the algorithm with randomized edge orderings.
Were the loss of half-integrality always to occur in tandem with cycling, as it
does in Section \ref{NonHalf}, we could simply verify half-integrality at each
iteration and, in the rare cases of non-half-integrality, restart the entire
algorithm with a different edge ordering, giving us a good average-case
runtime.\footnote{We discovered the dancing robot after searching through over
two thousand randomly generated graphs, all of which were rapidly and correctly
solved using a lexicographically-minimal primal and an unperturbed dual.
Furthermore, if a random edge ordering is applied to the dancing robot, for
example, a perfect matching is very likely to be found without issue.} However,
this graph shows that when a possibility of cycling exists, it is likely
undetectable by any means other than direct comparison between iterations. This
forces us to adopt an algorithm that simulates perturbations in the dual.
\section{Solving LP problems with perturbed costs}\label{sec:perturb}
Chandrasekaran \textit{et al.}~\cite{chandrasekaran_cutting_2016} chose a
specific perturbation of the costs, namely, adding $2^{-i}$ to the cost of each
edge $i$. In general, perturbation in linear programming (usually for the
purpose of eliminating degeneracy, as in \cite{charnes_optimality_1952}) takes
the form $\epsilon^i$, where $\epsilon$ is sufficiently small. In theoretical
analyses, $\epsilon$ is simply left unspecified. In the same spirit, we show in
this section how to obtain optimal solutions to both the primal and dual
problems with perturbed costs, exploiting the fact that $\epsilon$ is
sufficiently small without specifying it exactly, and without the need for an
optimal basis. The method we describe therefore avoids working
directly with cost values that exceed the representation capacity of the
fixed-length floating-point formats typically used by LP solvers. Our method
is applicable to any situation in which the objective function of a generic LP
problem is perturbed in order to enforce uniqueness of the optimal solution.
In the next section, we specialize it to Algorithm \ref{alg:cvv}.
Let $A \in \mathbb{R}^{m\times n}$,
$b \in \mathbb{R}^m$, and
$c_0,\ldots,c_k \in \mathbb{R}^n$ for some nonnegative integer $k$.
Let $N \subseteq \{1,\ldots,n\}$.
Let $F = \{1,\ldots,n\}\setminus N$.
Define $c_\epsilon$ as $\sum_{p = 0}^k c_p \epsilon^p$ where $\epsilon \geq 0$.
Consider the linear programming problem:
\begin{align*}
\min \ c_\epsilon^{\mathsf{T}} x \tag{$P(\epsilon)$}\label{eqn:Peps}\\
{\mbox{s.t.}} \
A x & \geq b \\
x_j & \geq 0 & \forall~j \in N.
\end{align*}
Its dual is
\begin{align*}
\max \ & y^{\mathsf{T}} b \tag{$D(\epsilon)$}\label{eqn:Deps}\\
{\mbox{s.t.}} \ y^{\mathsf{T}} A_{:,j} & \leq c_\epsilon(j) & \forall~j \in N \\
y^{\mathsf{T}} A_{:,j} & = c_\epsilon(j) & \forall~j \in F \\
y &\geq 0.
\end{align*}
\begin{algorithm}[H]\label{alg:perturb}
\SetAlgoLined
\caption{Algorithm for perturbed LP primal-dual pair}
\KwIn{\ref{eqn:Peps} with $\epsilon > 0$ sufficiently small.}
\KwOut{An optimal $x'$ to \ref{eqn:Peps} and an optimal $y'$ to
\ref{eqn:Deps}.}
$E \leftarrow \emptyset$, $J \leftarrow \emptyset$
\For{$p \leftarrow 0$ \KwTo $k$}{
$\overline{J} \leftarrow N \setminus J$
\abovedisplayskip=0pt
\belowdisplayskip=-\baselineskip
Set $x_p$ to an optimal solution to
\begin{align*}
\min \displaystyle\sum_{j \notin J} & c_p(j) x(j) \\
{\mbox{s.t.}}
\displaystyle\sum_{j \notin J}{A_{i,j}} x(j) & \geq b(i) & \forall~i \notin E \\
\displaystyle\sum_{j \notin J}{A_{i,j}} x(j)& = b(i) & \forall~i \in E \\
x(j)& \geq 0 & \forall~j \in \overline{J}
\end{align*}
\label{alg:perturb:primal}\\
and $y_p$ to an optimal solution to its dual.
$E \leftarrow E\cup \{ i : y_{p}(i) > 0\}$
$J \leftarrow J \cup \{ j : {y_{p}}^{\mathsf{T}} A_{:,j} < c_p(j)\}$
}
Form $x' \in \mathbb{R}^n$ such that $x'(j) = x_k(j)$ for all $j \notin J$ and
$x'(j) = 0$ for all $j \in J$.
$y' \leftarrow \displaystyle\sum_{p=0}^k \epsilon^p y_p$
\KwRet{$x', y'$}
\end{algorithm}
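To make the loop concrete, the following is an illustrative sketch of Algorithm \ref{alg:perturb} (ours, not a reference implementation). It assumes SciPy's HiGHS-backed \texttt{linprog}, whose \texttt{marginals} fields supply the dual values of each subproblem $M_p$; the name \texttt{solve\_perturbed}, the tolerance, and the zero-based indexing are our choices.

```python
# Sketch of the perturbed-LP procedure, assuming SciPy's HiGHS backend.
import numpy as np
from scipy.optimize import linprog

TOL = 1e-9

def solve_perturbed(A, b, cs, N):
    """Solve min (sum_p eps^p c_p)^T x s.t. Ax >= b, x_j >= 0 (j in N),
    for symbolically small eps > 0.  Returns x' and [y_0, ..., y_k],
    so that y' = sum_p eps^p y_p.  Indices are 0-based."""
    m, n = A.shape
    E, J = set(), set()        # rows forced tight / columns discarded
    ys = []
    x = np.zeros(n)
    for cp in cs:
        cols = [j for j in range(n) if j not in J]
        ineq = [i for i in range(m) if i not in E]
        eq = sorted(E)
        bounds = [(0, None) if j in N else (None, None) for j in cols]
        res = linprog(
            cp[cols],
            A_ub=-A[np.ix_(ineq, cols)] if ineq else None,
            b_ub=-b[ineq] if ineq else None,
            A_eq=A[np.ix_(eq, cols)] if eq else None,
            b_eq=b[eq] if eq else None,
            bounds=bounds, method="highs")
        assert res.status == 0
        # Recover y_p in the paper's sign convention: a ">=" row is passed
        # to SciPy as -A x <= -b, so its dual value is minus the marginal.
        y = np.zeros(m)
        if ineq:
            y[ineq] = -res.ineqlin.marginals
        if eq:
            y[eq] = res.eqlin.marginals
        ys.append(y)
        x = np.zeros(n)
        x[cols] = res.x
        E |= {i for i in range(m) if y[i] > TOL}
        J |= {j for j in cols if j in N and y @ A[:, j] < cp[j] - TOL}
    x[sorted(J)] = 0.0         # x'(j) = 0 for all j in J
    return x, ys
```

On the three-variable instance worked out below, this recovers $x' = (\tfrac{1}{2}, 0, \tfrac{1}{2})$ together with $y_0 = (1,1)$, $y_1 = (4,-2)$, and $y_2 = (-2,\tfrac{3}{2})$, since each subproblem there has a unique optimal dual.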
The correctness of Algorithm \ref{alg:perturb} follows from
Lemma~\ref{cheunglp} below. Before we give the proof, we illustrate the
algorithm with an example. For each $p$, let $M_p$ denote the LP problem in
step \ref{alg:perturb:primal} of the algorithm. Consider \ref{eqn:Peps} with
\begin{align*}
A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 2 \end{pmatrix},&&
b = \begin{pmatrix} 1 \\ 1 \end{pmatrix},&&
c_0 = \begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix},&&
c_1 = \begin{pmatrix} 4 \\ 2 \\ 0 \end{pmatrix},&&
c_2 = \begin{pmatrix} -2 \\ -1 \\ 1 \end{pmatrix},&&
N = \{1,2\}.
\end{align*}
The dual problem is
\[\begin{array}{rrrrcl}
\max & y(1) & + & y(2) \\
{\mbox{s.t.}}
& y(1) & & & \leq & 1+4\epsilon-2\epsilon^2 \\
& & & y(2) & \leq & 1+2\epsilon-\epsilon^2 \\
& y(1) & + & 2y(2)& = &3+\epsilon^2 \\
& y(1) &, & y(2) & \geq & 0.
\end{array}\]
Note that $x_0 = \begin{pmatrix} 1 & 1 & 0\end{pmatrix}^{\mathsf{T}}$
and $y_0= \begin{pmatrix} 1 & 1 \end{pmatrix}^{\mathsf{T}}$
are optimal solutions to $P(0)$ and $D(0)$, respectively, which are in turn equivalent to $M_0$ and its dual.
Since $y_0(1), y_0(2) > 0$, both rows join $E$; and since all the constraints in
$D(0)$ are satisfied with equality at $y_0$, $J$ remains empty. Hence $M_1$ is
\[\begin{array}{rrrrrrcl}
\min & 4x(1) & + & 2x(2) \\
{\mbox{s.t.}}
& x(1) & & & + & x(3) & = & 1 \\
& & & x(2) & + & 2x(3) & = & 1 \\
& x(1) &, & x(2) & & & \geq & 0.
\end{array}\]
The dual of $M_1$ is
\[\begin{array}{rrrrcl}
\max & y(1) & + & y(2) \\
{\mbox{s.t.}}
& y(1) & & & \leq & 4 \\
& & & y(2) & \leq & 2 \\
& y(1) & + & 2y(2)& = & 0.
\end{array}\]
An optimal solution to $M_1$ is $x_1 = \begin{pmatrix} \frac{1}{2} & 0 &
\frac{1}{2}\end{pmatrix}^{\mathsf{T}}$. An optimal dual solution is
$y_1 = \begin{pmatrix} 4 & -2\end{pmatrix}^{\mathsf{T}}$.
The second dual constraint is slack at $y_1$, so $2$ joins $J$ and $x(2)$ is removed.
Hence, $M_2$ is
\[\begin{array}{rrrrcl}
\min & -2x(1) & + & x(3) \\
{\mbox{s.t.}}
& x(1) & + & x(3) & = & 1 \\
& & & 2x(3) & = & 1 \\
& x(1) & & & \geq & 0.
\end{array}\]
The dual of $M_2$ is
\[\begin{array}{rrrrcl}
\max & y(1) & + & y(2) \\
{\mbox{s.t.}}
& y(1) & & & \leq & -2 \\
& y(1) & + & 2y(2)& = & 1.
\end{array}\]
An optimal solution to $M_2$ is
$x_2 = \begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2}\end{pmatrix}^{\mathsf{T}}$.
An optimal dual solution is
$y_2 = \begin{pmatrix} -2 & \frac{3}{2} \end{pmatrix}^{\mathsf{T}}$.
Setting
\[
y' = y_0 + \epsilon y_1 + \epsilon^2 y_2 =\begin{pmatrix}
1+ 4\epsilon - 2\epsilon^2 \\
1 -2\epsilon + \frac{3}{2}\epsilon^2
\end{pmatrix},\]
we have that $y'$ is a feasible solution to \ref{eqn:Deps}
and satisfies complementary slackness with
$x' = \begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2}\end{pmatrix}^{\mathsf{T}}$ for the
primal-dual pair \ref{eqn:Peps} and \ref{eqn:Deps}
for a sufficiently small $\epsilon > 0$.
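As a sanity check (ours, not part of the derivation), one can fix a concrete small $\epsilon$, solve \ref{eqn:Peps} directly with SciPy's \texttt{linprog}, and verify that this $x'$ attains the optimum and that $y'$ is feasible for \ref{eqn:Deps} with matching objective value:

```python
# Numerical sanity check of the worked example for a concrete small eps.
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 1.0])
c0 = np.array([1.0, 1.0, 3.0])
c1 = np.array([4.0, 2.0, 0.0])
c2 = np.array([-2.0, -1.0, 1.0])

eps = 1e-3
c_eps = c0 + eps * c1 + eps**2 * c2

# P(eps): min c_eps^T x  s.t.  A x >= b, x(1), x(2) >= 0, x(3) free.
res = linprog(c_eps, A_ub=-A, b_ub=-b,
              bounds=[(0, None), (0, None), (None, None)],
              method="highs")
assert res.status == 0

x_prime = np.array([0.5, 0.0, 0.5])
y_prime = np.array([1 + 4 * eps - 2 * eps**2,
                    1 - 2 * eps + 1.5 * eps**2])

# x' attains the optimum of P(eps).
assert abs(c_eps @ x_prime - res.fun) < 1e-8
# y' is feasible for D(eps): nonnegative, "<=" on N, "=" on F.
assert np.all(y_prime >= 0)
assert y_prime @ A[:, 0] <= c_eps[0] + 1e-12
assert y_prime @ A[:, 1] <= c_eps[1] + 1e-12
assert abs(y_prime @ A[:, 2] - c_eps[2]) < 1e-12
# Strong duality: y'^T b equals the optimal value.
assert abs(y_prime @ b - res.fun) < 1e-8
```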
\begin{lemma}\label{cheunglp}
Let $M_p$ denote the LP problem solved in step \ref{alg:perturb:primal}
of Algorithm~\ref{alg:perturb}.
\begin{enumerate}
\item
For every $p \in \{1,\ldots,k\}$,
$x_p$ is an optimal solution to each of $M_0,\ldots,M_{p-1}$.
\item $x'$ and $y'$ are feasible to
\ref{eqn:Peps} and \ref{eqn:Deps}, respectively,
and satisfy complementary slackness.
\end{enumerate}
\end{lemma}
\begin{proof}
For each $j = 1,\ldots,p$, $M_j$ is obtained from $M_{j-1}$
by adding constraints to enforce complementary slackness with $y_{j-1}$.
Removing $x(j)$ can be viewed as adding the constraint $x(j) = 0$. It
follows that $x_p$ is feasible to $M_j$ and satisfies complementary slackness
with $y_j$ for all $j \in \{0,\ldots,p-1\}$.
To prove the second part, we start by noting that $x'$ is feasible to \ref{eqn:Peps}. Let $E_p$, $J_p$, and $\overline{J_p}$ be the sets $E$, $J$, and $\overline{J}$ referred to in $M_p$. The dual of $M_p$ is
\begin{align*}
\max \ & y^{\mathsf{T}} b \\
{\mbox{s.t.}} \
y^{\mathsf{T}} A_{:,j}& \leq c_p(j) & \forall~j \in \overline{J_p} \\
y^{\mathsf{T}} A_{:,j}& = c_p(j) & \forall~j \in F \\
y(i) &\geq 0 & \forall~i \notin E_p.
\end{align*}
Clearly, $y'^{\mathsf{T}} A_{:,j} = c_\epsilon(j)$ for all $j \in F$.
Next, we show that $y'^{\mathsf{T}} A_{:,j} \leq c_\epsilon(j)$ for all $j \in N$.
Suppose that $j \in \overline{J_k}$.
Since $\overline{J_p} \subseteq \overline{J_{p-1}}$ for $p = 1,\ldots, k$,
we have $y_p^{\mathsf{T}} A_{:,j} \leq c_p(j)$.
Thus, $$y'^{\mathsf{T}} A_{:,j} =
\sum_{p = 0}^k \epsilon^p y_p^{\mathsf{T}} A_{:,j}
\leq \displaystyle\sum_{p = 0}^k \epsilon^p c_p(j) = c_\epsilon(j).$$
Now, suppose that $j \in J_k$. Then, there exists $r < k$ such that
$y_r^{\mathsf{T}} A_{:,j} < c_r(j)$.
Let $s_i = c_i(j) - y_i^{\mathsf{T}} A_{:,j}$ for $i = 0,\ldots,k$.
Thus,
\begin{align*}
c_\epsilon(j) - {y'}^{\mathsf{T}} A_{:,j}
& = \sum_{p = 0}^k \epsilon^p s_p \\
& = \sum_{p = 0}^r \epsilon^p s_p + \sum_{p = r+1}^k \epsilon^p s_p \\
& \geq \epsilon^r \left(s_r + \sum_{p = r+1}^k \epsilon^{p-r} s_p\right) \\
& = \epsilon^r \left(s_r + \epsilon\sum_{q = 0}^{k-r-1} \epsilon^{q} s_{q+r+1}\right) \\
& > 0
\end{align*}
for $\epsilon > 0$ sufficiently small.
We now show that $y' \geq 0$.
Consider $y'(j)$ for some $j \in \{1,\ldots,m\}$. If $j \notin E_k$,
then $y_p(j) \geq 0$ for $p = 0,\ldots,k$, implying that $y'(j) \geq 0$.
Otherwise, $j \in E_r$ for some $r \in \{1,\ldots,k\}$. Choose $r$
as small as possible. We must have $y_{r-1}(j) > 0$.
Then,
\begin{align*}
y'(j)
& = \sum_{p = 0}^k \epsilon^p y_p(j) \\
& \geq \sum_{p = r-1}^k \epsilon^p y_p(j) \\
& = \epsilon^{r-1} \left(y_{r-1}(j) + \epsilon\sum_{p = 0}^{k-r} \epsilon^{p} y_{p+r}(j)\right) \\
& > 0
\end{align*}
for $\epsilon > 0$ sufficiently small.
Finally, to see that $x'$ and $y'$ satisfy complementary slackness, note that,
by part 1, if $x'(j) > 0$, then $y_p^{\mathsf{T}} A_{:,j} = c_p(j)$ for all $p \in
\{0,\ldots,k\}$. Thus ${y'}^{\mathsf{T}} A_{:,j} = c_\epsilon(j)$. Furthermore, if
$A_{i,:}x' < b(i)$ for some $i$, then ${y_p}(i) = 0$ for all $p \in
\{0,\ldots,k\}$. This implies that $y'(i) = 0$.
\end{proof}
We now make two observations that will be useful in the next section. First,
we can see from the proof of Lemma~\ref{cheunglp} that the dual of $M_p$ can be
obtained directly from the dual of $M_{p-1}$ and an associated optimal solution
$y_{p-1}$ by removing constraints (including the nonnegativity bound
constraints) that are not active at $y_{p-1}$. It follows that one can work
exclusively with the duals of $M_0,\ldots,M_k$ if one is only interested in
obtaining an optimal solution to \ref{eqn:Deps}. Moreover, in practice, $y'$
hardly needs to be worked out for a particular value of $\epsilon$ and can be
represented by the list $y_0,\ldots,y_k$. Then, to determine if $y'(i) \neq 0$
for some $i$, simply check if there exists a $p$ such that $y_p(i) \neq 0$,
since, for sufficiently small $\epsilon$, $y'(i) = 0$ if and only if $y_p(i) =
0$ for all $p$.
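In an implementation, this amounts to keeping the list $y_0,\ldots,y_k$ and scanning it; a minimal sketch (names hypothetical):

```python
def coord_is_nonzero(ys, i, tol=0.0):
    """For y' = sum_p eps^p * y_p with eps > 0 sufficiently small,
    y'(i) != 0 if and only if y_p(i) != 0 for some p."""
    return any(abs(y[i]) > tol for y in ys)

# Example with the y0, y1, y2 of the worked example above:
ys = [(1.0, 1.0), (4.0, -2.0), (-2.0, 1.5)]
assert coord_is_nonzero(ys, 0) and coord_is_nonzero(ys, 1)
assert not coord_is_nonzero([(0.0, 2.0)], 0)
```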
\section{Modified Chandrasekaran-Végh-Vempala algorithm}\label{sec:newalg}
We now modify Algorithm~\ref{alg:cvv} to circumvent the need to utilize an
explicit perturbation of the edge costs. First, we arbitrarily order the edges
and increase the cost of each edge $i$ by $\epsilon^i$ for some sufficiently
small $\epsilon > 0$ that will remain unspecified. By Lemma~\ref{cvvepsilon},
we may assume that with such a perturbation, Algorithm~\ref{alg:cvv} will still
return a minimum-cost perfect matching. In Algorithm~\ref{alg:cvv},
step~\ref{cvv:primalstep} and step~\ref{cvv:extremal} involve solving \ref{Pf}
and \ref{D*} respectively with perturbed data. We emulate perturbations in the
first of these by finding a lexicographically-minimal optimal solution. The
other is handled through the method developed in the previous section applied
to the following LP, which is easily seen to be equivalent to \ref{D*}:
\begin{align*}
\max \displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x} & -\frac{1}{|S|}r(S) \\
{\mbox{s.t.}} \ -r(S)-\Pi(S) & \leq-\Gamma(S) & \forall~S \in \mathscr{V} \cup \mathscr{F}_x \\
-r(S)+\Pi(S) & \leq\Gamma(S) & \forall~S \in \mathscr{V} \cup \mathscr{F}_x \\
\displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x: uv \in \delta(S)}\Pi(S) & =c(uv) & \forall~uv \in \operatorname{supp}(x) \\
\displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x: uv \in \delta(S)}\Pi(S) & \leq c(uv) & \forall~uv \in E\setminus\operatorname{supp}(x) \\
\Pi(S) & \geq0 & \forall~S \in \mathscr{F}_x, \\
r(S) & \geq0 & \forall~S \in \mathscr{V} \cup \mathscr{F}_x
\end{align*}
where $\mathscr{V} = \{ \{v\} : v \in V \}$ and $\mathscr{F}_x = \{ S \in \mathscr{F} :
x(\delta(S)) = 1\}$. With explicit perturbation of the edge costs, $\Gamma$
and $c$ will be polynomials in $\epsilon$. Intuitively, we define $\Gamma_i$
and $c_i$ to be the coefficients of $\epsilon^i$ in $\Gamma$ and $c$; we will
define these rigorously in a moment.
The reason for writing \ref{D*} as above is to make it plain that it can be
viewed as the dual problem \ref{eqn:Deps} of some
\ref{eqn:Peps} with cost values given by polynomials in $\epsilon$.
However, in an actual algorithm as seen below, we can work directly with
\ref{D*} as originally written.
With these changes and the following definitions, we obtain
Algorithm~\ref{alg:new}.
Given an ordering $\sigma : E \to \{1, \dots, |E|\}$ on the edges of $G$,
define the following cost function:
$$ c_i(uv)=\begin{cases}
c(uv) & i = 0\\
1 & i > 0,\ \sigma(uv) = i\\
0 & i > 0,\ \sigma(uv) \neq i
\end{cases}$$
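A direct transcription of this piecewise cost function (a sketch; the dictionaries standing in for the graph's edge costs and ordering are hypothetical):

```python
def c_i(i, uv, c, sigma):
    """Coefficient of eps^i in the perturbed cost of edge uv, where the
    perturbation adds eps^sigma(uv) to the base cost c(uv)."""
    if i == 0:
        return c[uv]
    return 1 if sigma[uv] == i else 0

# Example: two edges with base costs 5 and 7, ordered e1 -> 1, e2 -> 2.
c = {'e1': 5, 'e2': 7}
sigma = {'e1': 1, 'e2': 2}
assert c_i(0, 'e1', c, sigma) == 5
assert c_i(1, 'e1', c, sigma) == 1 and c_i(1, 'e2', c, sigma) == 0
assert c_i(2, 'e2', c, sigma) == 1
```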
With this, we define the following linear program:
\begin{align*}
\min \displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x} & \frac{1}{|S|}r(S)
\tag{$D^i_\mathscr{F}(G, c, \sigma, \Gamma, L, M, N, Q)$}\label{Di} \\
{\mbox{s.t.}} \ r(S)+\Pi(S) & \geq\Gamma_i(S) & \forall~S \in (\mathscr{V} \cup \mathscr{F}_x)\setminus L \\
-r(S)+\Pi(S) & \leq\Gamma_i(S) & \forall~S \in (\mathscr{V} \cup \mathscr{F}_x)\setminus M \\
\displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x: uv \in
\delta(S)}\Pi(S) & =c_i(uv) & \forall~uv \in \operatorname{supp}(x) \\
\displaystyle \sum_{S \in \mathscr{V} \cup \mathscr{F}_x: uv \in
\delta(S)}\Pi(S) & \leq c_i(uv) & \forall~uv \notin \operatorname{supp}(x) \cup N \\
\Pi(S) & \geq0 & \forall~S \in \mathscr{F}_x\setminus Q.
\end{align*}
Intuitively, $c_i$ and $\Gamma_i$ correspond to the coefficients of
$\epsilon^i$ in $c$ and $\Gamma$ if we were to perturb the edge costs on the
graph by $\epsilon^i$ and run Algorithm \ref{alg:cvv}.
\begin{algorithm}[H]
\caption{Unperturbed C-P-Matching Algorithm}\label{alg:new}
\KwIn{A graph $G=(V, E)$ with edge costs $c \in \mathbb{Z}^E$ and an
ordering $\sigma : E \to \{1, \dots, |E|\}$.}
\KwOut{A binary vector $x$ representing a minimum-cost perfect matching on $G$.}
$\mathscr{F} \leftarrow \emptyset$; $\Gamma_0, \dots, \Gamma_{|E|} \leftarrow 0$
\While{$x$ is not integral}{
Let $x$ be the lexicographically-minimal optimal solution to \ref{Pf}
with respect to $\sigma$.
$L \leftarrow \emptyset; M \leftarrow \emptyset; N \leftarrow
\emptyset; Q \leftarrow \emptyset; D_0, \dots, D_{|E|} \leftarrow 0$
$\mathscr{F}_x \leftarrow \{ S \in \mathscr{F} : x(\delta(S)) = 1\}$
\For{$i \leftarrow 0$ \KwTo $|E|$}{
\label{dualline}Obtain an optimal solution $r, \Pi$ to \ref{Di}.\\
\vspace{\baselineskip}
$L \leftarrow L \cup \{S \in \mathscr{V} \cup \mathscr{F}_x :
r(S)+\Pi(S)\neq\Gamma_{i}(S)\}$ \label{removeconstraints_start}
$M \leftarrow M \cup \{S \in \mathscr{V} \cup \mathscr{F}_x : -r(S) + \Pi(S) \neq
\Gamma_i(S)\}$
$N \leftarrow N \cup \{uv \in E : \sum_{S \in \mathscr{V} \cup \mathscr{F}_x : uv \in \delta(S)} \Pi(S) \neq c_i(uv)\}$
$Q \leftarrow Q \cup \{S \in \mathscr{F}_x : \Pi(S) \neq 0
\}$\label{removeconstraints_end}\\
\vspace{\baselineskip}
$D_i \leftarrow \Pi$
}
$\mathscr{H}' \leftarrow \{S \in \mathscr{F} : \exists\ i\ \mathrm{s.t.}\ D_i(S) > 0\}$
Let $\mathscr{C}$ be the set of odd cycles in $\operatorname{supp}(x)$. For each $C \in \mathscr{C}$, let $V(C)$ be the union of $C$ with all sets in $\mathscr{H}'$ intersecting it.
$\mathscr{H}'' \leftarrow \{V(C) : C \in \mathscr{C}\}$
$\mathscr{F} \leftarrow \mathscr{H}' \cup \mathscr{H}''$
$\Gamma \leftarrow D$
}
\end{algorithm}
Steps \ref{removeconstraints_start} through \ref{removeconstraints_end} exist
to remove the slack constraints from the next iterations of \ref{Di}, as in
Algorithm \ref{alg:perturb}.
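In floating-point arithmetic the tests ``$\neq$'' in these steps should be carried out with a small tolerance. A minimal sketch of the bookkeeping (the containers and labels are hypothetical and do not reflect the interface of the reference implementation):

```python
def update_removed_sets(L, M, N, Q, r, Pi, Gamma_i, c_i,
                        families, fx, edges, delta, tol=1e-9):
    """Flag every constraint of D^i that is slack at the optimal (r, Pi),
    so that it is dropped from the next linear program.  families = V u F_x,
    fx = F_x; delta maps each set label to the edges crossing it."""
    for S in families:
        if abs(r[S] + Pi[S] - Gamma_i[S]) > tol:
            L.add(S)
        if abs(-r[S] + Pi[S] - Gamma_i[S]) > tol:
            M.add(S)
    for uv in edges:
        if abs(sum(Pi[S] for S in families if uv in delta[S]) - c_i[uv]) > tol:
            N.add(uv)
    for S in fx:  # nonnegativity constraints exist only for S in F_x
        if abs(Pi[S]) > tol:
            Q.add(S)

# Toy data: a vertex set 'a', a set 'b' in F_x, one edge 'e' crossing 'a'.
L, M, N, Q = set(), set(), set(), set()
update_removed_sets(L, M, N, Q,
                    r={'a': 0.0, 'b': 0.0}, Pi={'a': 1.0, 'b': 0.5},
                    Gamma_i={'a': 0.0, 'b': 0.5}, c_i={'e': 1.0},
                    families=['a', 'b'], fx=['b'], edges=['e'],
                    delta={'a': {'e'}, 'b': set()})
assert L == {'a'} and M == {'a'} and N == set() and Q == {'b'}
```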
A reference implementation, written in Python 3, is available at
\cite{kielstra_code_2019}.
\begin{lemma}
In every iteration of the Unperturbed C-P-Matching Algorithm (\ref{alg:new}), $x$ is equal to its
counterpart in the C-P-Matching Algorithm (\ref{alg:cvv}) with perturbations $c(i)=\epsilon^i$.
\end{lemma}
\begin{proof}
As mentioned in Section \ref{sec:cvv}, by \cite{schrijver_theory_2000}, the
lexicographically-minimal unperturbed primal solution is equal to the
unique perturbed optimal primal solution for a given $\mathscr{F}$, so we need only
show that $\mathscr{F}$ is always equal to its counterpart.
Consider $\Gamma$ as a single vector of polynomials in $\epsilon$, with
the coefficients of the $\epsilon^i$ terms given by $\Gamma_i$. Then, by Lemma
\ref{cheunglp}, $y=\sum_i \epsilon^iD_i$ is an optimal solution to the linear
program in step \ref{dualline}, and $y(S)>0$ if and only if $\max_i D_i(S) > 0$.
But the linear program in question is exactly that which the C-P-Matching
algorithm uses to obtain a $\Gamma$-extremal dual optimal solution. Therefore
$\mathscr{H}'$, which is defined solely based on whether $y(S)>0$ or not, is
equal to its counterpart in the C-P-Matching algorithm. Since $\mathscr{H}''$
is defined exactly the same way as its counterpart, the two are equal, so $\mathscr{F}$
is equal to its counterpart as well.
\end{proof}
Since, by Lemma \ref{cvvepsilon}, neither the correctness nor the complexity of
Algorithm \ref{alg:cvv} are affected by changing from the perturbation
$c(i)=2^{-i}$ to the perturbation $c(i)=\epsilon^i$, we can rephrase this to
give
\begin{theorem}
The Unperturbed C-P-Matching Algorithm gives a minimum-cost perfect matching.
\end{theorem}
The lemma also yields the following:
\begin{corollary}
\sloppy
The Unperturbed C-P-Matching algorithm requires solving $O(mn \log n)$ linear programming problems in the worst case.
\end{corollary}
\begin{proof}
According to \cite[Theorem 1]{chandrasekaran_cutting_2016}, the
C-P-Matching Algorithm takes at most $O(n \log n)$ iterations. The Unperturbed
C-P-Matching Algorithm has the same number of iterations, but each iteration
utilizes $2(m+1)$ linear programming problems, which is $O(m)$. Therefore, the
Unperturbed C-P-Matching Algorithm requires solving $O(m) \times O(n \log n) =
O(mn \log n)$ linear programming problems in total.
\end{proof}
\section{Final remarks}
We have developed a general method for solving perturbed linear programs in
polynomial time without performing explicitly perturbed calculations, and
demonstrated that it applies to the minimum-cost perfect matching problem. The
use of perturbations to guarantee uniqueness is common in linear programming,
and it remains to be seen whether or not our method could be applied to other
algorithms or used to solve other problems. We do not yet know if our new
algorithm, when properly implemented and optimized, can be made competitive
with combinatorial methods such as Edmonds's blossom algorithm. According to
\cite{kolmogorov_blossom_2009}, the best known asymptotic runtime for such an
implementation is $O(n(m + n\log n))$. Our algorithm, which solves $O(mn \log
n)$ linear programs, each of which requires the use of a theoretically
polynomial-time solver, is significantly slower in the worst case.
We encountered a number of interesting phenomena regarding the subroutine for
finding the lexicographically-minimal primal optimal solution. Although, as
written,
it requires solving a fixed number of linear programs ($|E|+1$), we noticed in
empirical testing that it often gave this solution far more quickly than that,
with the last few linear programs all giving the same answer. We did not
investigate this any further, but hypothesize that shortcuts exist that
could reduce the runtime by a quarter or more.
\section*{Acknowledgements}
The authors would like to thank James Addis, as well as the organizers and
supporting organizations of the Fields Undergraduate Summer Research Program
--- Brittany Camp, Bryan Eelhart, Huaxiong Huang, Michael McCulloch, and the
Fields Institute and Fields Centre for Quantitative Analysis and Modelling
(Fields CQAM) --- without whom this research would not have been possible.
\bibliographystyle{splncs04}
{
"timestamp": "2019-08-30T02:04:46",
"yymm": "1908",
"arxiv_id": "1908.10989",
"language": "en",
"url": "https://arxiv.org/abs/1908.10989",
"abstract": "In 2016, Chandrasekaran, Végh, and Vempala published a method to solve the minimum-cost perfect matching problem on an arbitrary graph by solving a strictly polynomial number of linear programs. However, their method requires a strong uniqueness condition, which they imposed by using perturbations of the form $c(i)=c_0(i)+2^{-i}$. On large graphs (roughly $m>100$), these perturbations lead to cost values that exceed the precision of floating-point formats used by typical linear programming solvers for numerical calculations. We demonstrate, by a sequence of counterexamples, that perturbations are required for the algorithm to work, motivating our formulation of a general method that arrives at the same solution to the problem as Chandrasekaran et al. but overcomes the limitations described above by solving multiple linear programs without using perturbations. We then give an explicit algorithm that exploits our method, and show that this new algorithm still runs in strongly polynomial time.",
"subjects": "Combinatorics (math.CO); Discrete Mathematics (cs.DM)",
"title": "Revisiting a Cutting Plane Method for Perfect Matchings"
}
https://arxiv.org/abs/2102.02737
\title{Insight of the Green's function as a defect state in a boundary value problem}
\begin{abstract}
A new perspective of the Green's function in a boundary value problem as the only eigenstate in an auxiliary formulation is introduced. In this treatment, the Green's function can be perceived as a defect state in the presence of a $\delta$-function potential, the height of which depends on the Green's function itself. This approach is illustrated in one-dimensional and two-dimensional Helmholtz equation problems, with an emphasis on systems that are open and have a non-Hermitian potential. We then draw an analogy between the Green's function obtained this way and a chiral edge state circumventing a defect in a topological lattice, which shines light on the local minimum of the Green's function at the source position.
\end{abstract}
\section{Introduction}
Initially distributed to just 51 private members of a subscription library in 1828 \cite{green_book}, the essay of George Green on the application of mathematical analysis to the theories of electricity and magnetism has inspired generations of physicists and mathematicians. By studying the three-dimensional Laplace equation governing the static electric potential, Green recognized the unique utility of a function inversely proportional to the distance between two charged bodies, which now bears his name and preceded Dirac's introduction of the $\delta$-function by more than a century. The importance of the Green's function, together with the Green's theorem, was recognized by masterminds of modern physics such as Lord Kelvin and Julian Schwinger.
Besides its effectiveness as a theoretical and numerical tool to solve linear differential equations \cite{Stakgold2011,Melnikov2017}, the Green's function provides a fundamental connection between the source and the field \cite{Morse1953}. When extended to address dynamical equations and their spectral representations, the Green's function becomes the propagator that is fundamental to field theories \cite{schwinger_book}. As such, it is a powerful and essential tool to study a variety of transport and scattering problems, ranging from condensed matter physics \cite{Rickayzen1980,Economou2007}, optics and photonics \cite{Davy2015,Lin2016,Pick2017} to high energy physics \cite{Newton1982}.
In optical physics, one area of interest in this article, the Green's function has been routinely employed to analyze rich optical phenomena including light-matter interaction in lasers and other light sources \cite{Tureci2006,Tureci2008,Ge2010}, as well as to devise applications for optical computing, information processing, and telecommunications. What we offer here is a novel perspective of the Green's function, i.e., treating it as the single eigenstate in an auxiliary boundary value problem. In addition to further enrichment and shaping of our physical intuition through the Green's function, we find exceptional parallels between the Green's function and defect states due to a local potential, including a chiral edge state circumventing a defect on its path in a topological lattice.
Below we introduce formally the Green's function problem and lay out the fundamentals of our approach. The Green’s function of an operator $\mathcal{L}$ in a variety of physics problems can be defined by
\begin{equation}
[z-\mathcal{L}]G(\bm{r},\bm{r'};z)=\delta(\bm{r}-\bm{r'})\label{eq:G_general}
\end{equation}
with a proper boundary condition. Here $z$ is a parameter (e.g., the energy) and $\bm{r}, \bm{r'}$ are the coordinates of the field and the source, respectively. A new perspective of the Green's function can be obtained using the following auxiliary eigenvalue problem
\begin{equation}
[z-\mathcal{L}]\psi_m(\bm{r},\bm{r'})=\lambda_m(\bm{r'}) f(\bm{r},\bm{r'})\psi_m(\bm{r},\bm{r'}).\label{eq:G_new_def}
\end{equation}
Below we suppress the $\bm{r'}$-dependence of $\psi_m$ and $\lambda_m$ for conciseness. As we will show, this auxiliary problem has a \textit{single} eigenvalue $\lambda_0$ and eigenstate $\psi_0(\bm{r})$, \textit{when} $f(\bm{r},\bm{r'})$ is chosen to be $\delta(\bm{r}-\bm{r'})$. The Green's function is then uniquely determined by
\begin{equation}
G(\bm{r},\bm{r'})=\frac{\psi_0(\bm{r})}{\lambda_0\psi_0(\bm{r'})},\label{eq:G_new}
\end{equation}
and $\lambda_0^{-1}$ gives the value of the Green's function at the source $\bm{r}=\bm{r'}$. As a bonus, we obtain directly the local density of states (LDOS) that is proportional to the imaginary part of $G(\bm{r},\bm{r};z)$ \cite{Economou2007}, i.e., $\text{Im}[\lambda_0^{-1}]$. The reciprocity of the Green's function, though implicit in Equation~(\ref{eq:G_new}), can be easily verified in the absence of an effective magnetic field, as shown in Appendix \ref{sec:reciproc}.
Before we apply this approach to various Hermitian and non-Hermitian problems, we first prove Equation~(\ref{eq:G_new}) and discuss the general properties of $\psi_{m}(\bm{r})$. With the choice of $f(\bm{r},\bm{r'})$ mentioned above, the right hand side of Equation~(\ref{eq:G_new_def}) for $\psi_0(\bm{r})$ is simply
\begin{equation}
\lambda_0\delta(\bm{r}-\bm{r'})\psi_0(\bm{r})=\lambda_0\delta(\bm{r}-\bm{r'})\psi_0(\bm{r'}),\label{eq:trick}
\end{equation}
from which Equation~(\ref{eq:G_new}) follows directly by comparing with Equation~(\ref{eq:G_general}), with the requirement $\psi_0(\bm{r'})\neq 0$.
If there were another eigenstate $\psi_{m\neq 0}(\bm{r})$, then by repeating the same procedure we would find that the Green's function is proportional to $\psi_{m\neq 0}(\bm{r})$ as well, which contradicts the uniqueness of the Green's function with a properly imposed boundary condition. When implemented numerically, there do exist spurious eigenvectors $\psi_{m\neq 0}(\bm{r})$, which however can be easily discarded due to their ill-behaved $\lambda_m$'s as we will discuss in Section~\ref{sec:1A}.
The auxiliary eigenvalue approach equips us with a conceptually different way to treat the Green's function, i.e., as a defect state \cite{qi_defect_2018} emerging due to the $\delta$-function potential:
\begin{equation}
\left[\mathcal{L}+\lambda_0\delta(\bm{r}-\bm{r'})\right]\psi_0(\bm{r})=z\psi_0(\bm{r}).\label{eq:defect}
\end{equation}
As we will show, this point of view is particularly interesting and helpful in a topological system with chiral edge states \cite{ hafezi_robust_2011,hafezi_imaging_2013,bandres_topological_2018,zhao_non-hermitian_2019,barik_topological_2018}. For example, if a point source is placed at the edge of a two-dimensional (2D) topological insulator, the auxiliary eigenvalue approach indicates that an analogy exists between the Green's function and a chiral edge state circumventing a defect at the same location. This interpretation provides an intuitive understanding of the local minimum of the Green's function at the source position, which we will illustrate using a 2D square lattice with a $\pi/2$ Landau gauge.
The rest of the paper is organized into two main parts, where we validate our method and discuss the new insight it provides, respectively. In Section~\ref{sec:II}, we first validate our method in a 1D Hermitian (closed) system where the analytical form of the Green's function is available. We then extend the validation to two non-Hermitian 1D systems with parity-time ({$\cal PT$}) symmetry \cite{Bender1998,Feng2017}, focusing on the Green's function at an exceptional point (EP). An EP is a unique degeneracy found only in non-Hermitian systems, where two or more eigenstates of the system coalesce. It has led to a plethora of intriguing phenomena, and in particular, it was shown recently that the Green's function can be fully decoupled from the coalesced eigenstate in a photonic system, which is instead given by the Jordan vector or the ``missing dimension'' of the Hilbert space \cite{Chen2020}. We show that our method based on Equation~(\ref{eq:G_new_def}) captures this extraordinary behavior nicely in a ring cavity, besides describing correctly the Green's function in a {$\cal PT$}-symmetric photonic molecule. We further validate our method in quasi-1D waveguides, which are frequently employed to study disordered mesoscopic and optical systems \cite{Beenakker1997,Imry2008,Mello2004}. In Section \ref{sec:III}, we first discuss how the defect state corresponding to the Green's function is conceptually different from previous studies of defect states in a 1D continuous Hermitian system and a 1D non-Hermitian lattice. We then highlight the intriguing manifestation of the linkage between the Green's function and a defect state in the aforementioned 2D topological lattice.
\vspace{-10pt}
\section{Validation} \label{sec:II}
\subsection{1D Hermitian case}
\label{sec:1A}
\begin{figure}[b]\centering
\includegraphics[width=\linewidth]{Fig1.pdf}
\caption{The Green's function (a) and DOS (b) in a 1D dielectric cavity with perfect mirrors. The green dashed lines are obtained using the analytical expression (\ref{eq11}), while the black solid lines are from our auxiliary eigenvalue approach (\ref{eq:G_new}). $n=3$, $k = 30/L$ and $x' = 0.45 L$ (marked by the arrow) are used in (a), and 2000 grid points are used for the finite difference implementation of Equation~(\ref{eq:G_new_def}).}\label{fig:1D_Hermitian}
\end{figure}
We start with an example where the analytical form of the Green's function is available. Consider the scalar Helmholtz equation in 1D with a uniform refractive index $n\in\mathbb{R}$:
\begin{equation}
\mathcal{L}=-\frac{1}{n^2}\partial_x^2,\quad z = k^2.\label{eq:Helmholtz}
\end{equation}
Here $k$ is the real-valued wave vector in free space. Below we take the speed of light in vacuum to be 1 and do not distinguish $k$ from the (circular) frequency. We impose the Dirichlet boundary conditions $G(x,x')=0$, $\psi_m(x)=0$ at $x=0,L$, which renders the system Hermitian. Consequently, it can be shown that the Green's function at the source position (i.e., $\lambda_0^{-1}$) is real:
\begin{equation}
\lambda_0=\frac{k}{n}\left[\frac{1}{\tan[nk(x'-L)]}-\frac{1}{\tan(nkx')}\right].\label{eq:lambda_1D_Hermitian}
\end{equation}
When solved using a finite difference scheme \cite{Ge2017}, the numerical value of $\lambda_0$ given by Equation~(\ref{eq:G_new}) shows a good agreement with the analytical result given by Equation~(\ref{eq:lambda_1D_Hermitian}). The corresponding Green's function, which we shown in Figure \ref{fig:1D_Hermitian}(a), also agrees nicely with its analytical expression
\begin{equation}
G(x,x')=\begin{cases}
\frac{\sin[nk(x-L)]}{\lambda_0\sin[nk(x'-L)]}, & (x>x')\\
\frac{\sin(nkx)}{\lambda_0\sin(nkx')}. & (x\leq x')
\end{cases}\label{eq11}
\end{equation}
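As a quick consistency check, this expression is continuous at $x=x'$ and its derivative jumps by $n^2$ there, which follows from integrating Equation~(\ref{eq:G_general}) with $\mathcal{L}=-(1/n^2)\partial_x^2$ across the source: $(1/n^2)[G'(x'^+)-G'(x'^-)]=1$. A short numerical sketch of this check:

```python
import math

# Parameters of Figure 1 (taking L = 1).
n, L_len, k, xp = 3.0, 1.0, 30.0, 0.45
lam0 = (k / n) * (1.0 / math.tan(n * k * (xp - L_len))
                  - 1.0 / math.tan(n * k * xp))

def G(x):
    """Analytic Green's function of the 1D Dirichlet cavity."""
    if x > xp:
        return math.sin(n * k * (x - L_len)) / (lam0 * math.sin(n * k * (xp - L_len)))
    return math.sin(n * k * x) / (lam0 * math.sin(n * k * xp))

h = 1e-7
assert abs(G(xp + h) - G(xp - h)) < 1e-5        # continuity at the source
jump = (G(xp + 2 * h) - G(xp + h)) / h - (G(xp) - G(xp - h)) / h
assert abs(jump - n**2) < 1e-2                  # derivative jump equals n^2
```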
The numerical implementation of the $\delta$-function is usually taken to be the limit of a sharp analytical distribution, such as a Gaussian with the standard deviation $\sigma \rightarrow 0$. At the same time, another small quantity that requires attention in the finite difference implementation of Equation~(\ref{eq:G_new_def}) is the lattice spacing $\Delta x$. If we choose $\sigma>\Delta x$, we find spurious eigenstates $\psi_{m\neq0}(x)$ with almost identical spatial dependence away from the source position (not shown), but they have structures that reside in the finite extension of the approximated $\delta$-function (e.g., fast oscillations). This problem is remedied by letting $\sigma<\Delta x$, which practically leads to an approximation of the $\delta$-function that has a single non-zero element at the position of the source, the value of which is given by $1/\Delta x$. This choice warrants that the integration of the $\delta$-function is 1 over any range enclosing the source and 0 otherwise.
With this choice, we find that all spurious eigenvalues of Equation~(\ref{eq:G_new_def}) approach infinity (e.g., $|\lambda_{m\neq0}L|>10^{17}$ in the case shown in Figure~\ref{fig:1D_Hermitian}), and the sole, physical one $\lambda_0$ is easily obtained by setting the numerical routine to search for the eigenvalue with the smallest modulus.
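The finite-difference recipe above can be condensed into a short script. The sketch below uses SciPy's generalized eigensolver with a grid coarser than the 2000 points of Figure~\ref{fig:1D_Hermitian} (to keep the dense eigensolve cheap, at the price of a few-percent discretization error), and compares the resulting $\lambda_0$ with Equation~(\ref{eq:lambda_1D_Hermitian}):

```python
import numpy as np
from scipy.linalg import eig

# Parameters of Figure 1 (L = 1): uniform index, wave number, source position.
n_ref, L_len, k, xp = 3.0, 1.0, 30.0, 0.45
N = 599                          # interior grid points, chosen so x' is a grid point
dx = L_len / (N + 1)
j = round(xp / dx) - 1           # index of the source grid point
assert abs((j + 1) * dx - xp) < 1e-12

# Dirichlet finite-difference form of L = -(1/n^2) d^2/dx^2.
A = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / (n_ref**2 * dx**2)

# Auxiliary problem (k^2 - L) psi = lam * delta(x - x') * psi, with the
# delta-function represented by a single entry 1/dx at the source.
B = k**2 * np.eye(N) - A
D = np.zeros((N, N))
D[j, j] = 1.0 / dx
lam, _ = eig(B, D)               # rank-1 right-hand side: one finite eigenvalue
lam0 = min(lam[np.isfinite(lam)], key=abs).real

lam0_exact = (k / n_ref) * (1.0 / np.tan(n_ref * k * (xp - L_len))
                            - 1.0 / np.tan(n_ref * k * xp))
assert abs(lam0 - lam0_exact) / abs(lam0_exact) < 0.2   # coarse-grid agreement
```

All remaining generalized eigenvalues come out as (numerically) infinite, in line with the spurious $|\lambda_{m\neq0}|$ discussed above; refining the grid tightens the agreement.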
As mentioned in the introduction, our auxiliary eigenvalue approach also produces the LDOS directly. For a Hermitian system, a limiting procedure is needed to regulate the singularity at real-valued resonant frequencies:
\begin{equation}
\text{LDOS}(x;k) = \lim_{s\to 0^+}-\frac{2k}{\pi}\text{Im} [G(x,x;k+is)],\label{eq:LDOS}
\end{equation}
i.e., a small positive imaginary part is added to the frequency here, constructing in this way the retarded Green's function [See Appendix \ref{sec:reciproc}]. Equation (\ref{eq:LDOS}) also applies to non-Hermitian cases, as long as the complex resonant frequencies are on or below the real axis. The integration of LDOS$(x;k)$ over the whole system then gives the density of states (DOS) as a function of the frequency.
To calculate DOS using our approach based on Equation~(\ref{eq:G_new_def}), we choose a small $s=0.03/L$ and calculate $-\frac{2k}{\pi}\text{Im} [\lambda_0^{-1}]$ numerically. The result agrees well with the analytical result [see Figure~\ref{fig:1D_Hermitian}(b)], where $\lambda_0$ given by Equation~(\ref{eq:lambda_1D_Hermitian}) is used. The latter leads to
\begin{equation}
\text{DOS}(k)=\sum_{m}\delta\left(k-\frac{m\pi}{nL}\right)
\end{equation}
as expected once $s\to0$, where $k_m = m\pi/nL\,(m=1,2\ldots)$ are the real-valued resonant frequencies.
\subsection{1D non-Hermitian cases}
\label{sec:nonHermitian}
For our next validation, we study the Green's function at an EP in a photonic molecule \cite{Liertzer2012}. Such a case presents a serious challenge to the standard approach based on the eigenvalues of $\cal L$, i.e., the bilinear expansion
\begin{align}
G(\bm{r},\bm{r'};z)&=\sum_{m}\frac{\overline{\phi}_m(\bm{r})\phi_m(\bm{r'})}{(z-z_m)\,(m,m)},\label{eq:G_expansion}\\
\mathcal{L}\phi_m(\bm{r})&=z_m\phi_m(\bm{r}).
\end{align}
Depending on the symmetry of $\mathcal{L}$, the partner function $\overline{\phi}_m(\bm{r})$ may or may not be the same as $\phi_m^*(\bm{r})$, and $(m,n)$ is the resulting inner product of $\overline{\phi}_m(\bm{r})$ and $\phi_n(\bm{r})$. At an EP, the inner product $(m,m)$ becomes zero due to the coalescence of two or more eigenstates of the system \cite{Heiss2004,Berry2004,Feng2017}. Although this divergence can be eliminated using a non-Hermitian perturbation theory \cite{Pick2017,Chen2020}, it requires \textit{a priori} knowledge of the EPs and the generalized eigenvectors, which adds to the complexity of the problem. Our auxiliary eigenvalue formulation, on the other hand, does not suffer from this drawback.
Below we exemplify our method and compare it to the result of the perturbation theory, without the need to invoke the Jordan vector that completes the Hilbert space at the EP \cite{Chen2020}. Our photonic molecule is composed of two half-wavelength cavities coupled by a distributed Bragg reflector (DBR) placed in air [See Figure \ref{fig:1D_EP}(a); inset]. If an unbalanced loss distribution is introduced in the two half-wavelength cavities, an emerging effective {$\cal PT$}~symmetry governs the system \cite{Ge2016}.
Here we consider the outgoing boundary condition at a real-valued frequency, which corresponds to the laser and is different from quasi-bound states or resonances with a complex frequency throughout the entire space \cite{Tureci2006}. The resulting complex eigenvalues of $\mathcal{L}$ given in Equation~(\ref{eq:Helmholtz}) are found in the lower half of the complex frequency plane, as a result of the non-Hermiticity caused by the cavity openness. These complex eigenvalues are known as continuous flux (CF) states \cite{Tureci2006,Ge2017}, which have also been used to study nuclear decays \cite{Kapur1937,Goldberger1964}. We note that the EPs of CF states have not been studied before, unlike their counterparts of quasi-bound states or resonances.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig2.pdf}
\caption{Green's function of a $\mathcal{PT}$ photonic molecule at an EP. The source is placed at the center of the right cavity in (a) and in between the two cavities in (b). Inset in (a): Schematic of the photonic molecule and the imposed boundary condition, and labels denoting the refractive indices of each component of the heterostructure. Green dashed lines correspond to the perturbative expression in Equation~(\ref{eq:G_EP}) and black solid lines correspond to our auxiliary eigenvalue approach. The locations of the sources are marked with arrows. (c) Difference between the bilinear expansion with the perturbative correction and the auxiliary method in (b) at the source position.
}
\label{fig:1D_EP}
\end{figure}
In the vicinity of an EP of frequency $k_0$, the CF states in our system can be expressed in terms of waves confined in the left- and right-cavity, i.e., $\psi(x)\approx a_l\psi_l(x)+a_r\psi_r(x)$, where the amplitudes $a_{l,r}$ are determined by the Helmholtz equation. Without the $\mathcal{PT}$-symmetric perturbation, our photonic molecule is symmetric and hence $a_l=\pm a_r$ in the symmetric and anti-symmetric modes with CF frequencies $\tilde{k}_S,\tilde{k}_A$. The introduction of a weak $\mathcal{PT}$-symmetric perturbation in the dielectric function couples the amplitudes $a_l$ and $a_r$, determined by their spatial overlap $C$ with the non-Hermitian perturbation, which represents the strength of gain and loss.
The eigenfrequencies of the perturbed system are then found to be $q_\pm^2=k_0^2 \left(1\pm \Delta\varsigma\right)$. Here $k_0$ is the CF frequency of a single half-wavelength cavity sandwiched by two DBRs and $\Delta$ is a dimensionless detuning defined by $(\tilde{k}_S^2-\tilde{k}_A^2)/2k_0^2$. We have also defined $\varsigma\equiv\sqrt{1-\beta^2}$, where $\beta=C/\Delta$.
One EP is reached when $\varsigma=0$, resulting in $q^2_\pm=k_0^2$. The corresponding eigenstates $\psi_\pm(x)$ are given by
\begin{equation}
\psi_\pm(x)=\psi_l(x)+\beta_\pm\psi_r(x),
\end{equation}
where $\beta_\pm\equiv i\beta\pm\varsigma$. The inner product of the eigenstates is defined as
\begin{equation}
(i,j)\equiv\int \epsilon_0(x) \psi_{i}(x)\psi_{j}(x) dx, \quad (i,j=\pm)\label{eq:inner_EP}
\end{equation}
where $\epsilon_0(x)$ is the dielectric function before the {$\cal PT$}-symmetric perturbation is introduced. In other words, the partner functions in Equation~(\ref{eq:G_expansion}) are chosen as $\overline{\psi}_\pm(x)=\epsilon_0(x)\psi_\pm(x)$, similar to the Hermitian case discussed earlier. This definition of the inner product warrants the biorthogonality $(+,-)=(-,+)=0$, and we find $(+,+)=2\beta_+\varsigma\to0$, $(-,-)=-2\beta_-\varsigma\to0$ as the system approaches the EP.
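The two-mode structure above can be checked numerically. The sketch below builds the effective $2\times2$ matrix in the $(\psi_l,\psi_r)$ basis implied by the text (inter-cavity coupling $\Delta k_0^2$ from the S/A splitting and a gain/loss imbalance $\mp iCk_0^2$); the parameter values are illustrative, not the paper's fitted heterostructure values.

```python
import numpy as np

# Two-mode sketch of the PT photonic molecule near its EP.
# k0, Delta, and C are illustrative numbers, not the paper's fitted values.
k0, Delta = 1.570, 1e-3     # CF frequency and dimensionless detuning
C = 0.6e-3                  # gain/loss coupling; beta = C/Delta < 1 (before the EP)
beta = C / Delta
sigma = np.sqrt(1 - beta**2)          # the text's varsigma

# (psi_l, psi_r) basis: Delta couples the two cavities (the S/A splitting),
# and -/+ i C k0^2 is the PT gain/loss imbalance.
M = k0**2 * np.array([[1 - 1j * C, Delta],
                      [Delta,      1 + 1j * C]])

q2 = np.linalg.eigvals(M)                                    # q_pm^2
expect = k0**2 * np.array([1 + Delta * sigma, 1 - Delta * sigma])
print(np.sort(q2.real), np.sort(expect.real))

# Eigenvectors (1, beta_pm) with beta_pm = i*beta +/- sigma; their
# unconjugated self-products 2*beta_+*sigma and -2*beta_-*sigma vanish
# as sigma -> 0, i.e., as the EP is approached.
for s in (+1, -1):
    v = np.array([1.0, 1j * beta + s * sigma])
    print(s, v @ v, 2 * s * (1j * beta + s * sigma) * sigma)
```

The eigenvalues reproduce $q_\pm^2=k_0^2(1\pm\Delta\varsigma)$, and the self-products match the expressions $(+,+)=2\beta_+\varsigma$ and $(-,-)=-2\beta_-\varsigma$ quoted in the text.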
As seen from Equation~(\ref{eq:G_expansion}), the vanishing inner products $(+,+),\,(-,-)$ at the EP cause a catastrophe in the calculation of the Green's function, because the two corresponding terms diverge independently of the frequency:
\begin{equation}
G(x,x';k)\approx\frac{\overline{\psi}_+(x')\psi_+(x)}{(q^2-q_+^2)(+,+)}+\frac{\overline{\psi}_-(x')\psi_-(x)}{(q^2-q_-^2)(-,-)}.\label{eq:G_EP0}
\end{equation}
However, it can be shown that the diverging behaviors in the two terms cancel each other precisely \cite{Pick2017,Chen2020}, leading to
\begin{align}
G(x,x';k) \approx &\;\epsilon_0(x')\frac{\psi_l(x')\psi_l(x)+\psi_r(x')\psi_r(x)}{k^2-k_0^2} \nonumber\\
&-i\Delta k_0^2\epsilon_0(x')\frac{\psi_{EP}(x')\psi_{EP}(x)}{(k^2-k_0^2)^2}
\label{eq:G_EP}
\end{align}
to the leading order of the small perturbation parameter $\varsigma$. Here $\psi_{EP}=\psi_\pm|_{\beta=1}$ is the coalesced eigenstate at the EP. As expected \cite{Lin2016,Heiss2015}, a second-order pole appears in the second term due to this coalescence. Details on the perturbative analysis can be found in Appendix \ref{sec:pert}.
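The cancellation of the two diverging terms can be illustrated directly in the two-mode picture, where the bilinear expansion of a complex-symmetric $2\times2$ matrix is exact. The sketch below (with illustrative parameters, not the paper's heterostructure values) shows that each term grows like $1/\varsigma$ while their sum remains finite and equal to the directly inverted resolvent.

```python
import numpy as np

# Each bilinear term diverges near the EP, but their sum stays finite.
k0, Delta = 1.570, 1e-3
C = Delta * (1 - 1e-6)               # beta -> 1: very close to the EP
beta = C / Delta
sigma = np.sqrt(1 - beta**2 + 0j)
M = k0**2 * np.array([[1 - 1j * C, Delta],
                      [Delta,      1 + 1j * C]])

k2 = k0**2 * (1 + 5 * Delta)         # probe frequency, off resonance
G_exact = np.linalg.inv(k2 * np.eye(2) - M)

terms = []
for s in (+1, -1):
    v = np.array([1.0, 1j * beta + s * sigma])
    q2 = k0**2 * (1 + s * Delta * sigma)
    # unconjugated bilinear term psi psi^T / [(k^2 - q^2)(psi, psi)]
    terms.append(np.outer(v, v) / ((k2 - q2) * (v @ v)))

print(np.abs(terms[0]).max())        # huge: ~1/sigma
G_sum = terms[0] + terms[1]          # ...but the divergences cancel
print(np.abs(G_sum - G_exact).max())
```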
To verify the robustness of our method based on Equation~(\ref{eq:G_new}) in the vicinity of the EP, we choose the heterostructure of length $L$ shown in Figure~\ref{fig:1D_EP}. It consists of DBRs of refractive indices $n_1=2+0.001i$ and $n_2=3+0.001i$, and each layer accommodates a quarter of the respective wavelength at $ka=1.570$, where $a$ is the lattice constant.
The two half-wavelength cavities with loss are fine-tuned to achieve an EP at $k_\text{EP}a=1.570-0.006913i$
using $n_l=2+0.001i$, $n_r=1.9999+0.0029i$, and $k=\text{Re}[k_\text{EP}]$. We note that the loss in the right cavity is stronger than that in the left cavity, creating an effective $\mathcal{PT}$-symmetric system with its complex eigenvalue spectrum shifted along the imaginary axis \cite{Guo}.
Figure~\ref{fig:1D_EP} shows the Green's function of this system when the source is placed at different locations. First, we place the source at the center of the right cavity, where we expect the approximation (\ref{eq:G_EP}) using just $\psi_\pm(x)$ [and $\psi_{l,r}(x)$] to hold. As Figure~\ref{fig:1D_EP}(a) shows, it is nearly identical to the result of our auxiliary eigenvalue approach (\ref{eq:G_new}), and the inclusion of more CF states away from the EP barely changes the Green's function (not shown).
If instead we place the source at the center of the heterostructure, Equation~(\ref{eq:G_EP}) alone is insufficient to capture the features of the Green's function [see Figure~\ref{fig:1D_EP}(b)]. However, once a large set of additional eigenfunctions of $\mathcal{L}$ is also included, good agreement with our auxiliary eigenvalue approach is again observed [see Figure~\ref{fig:1D_EP}(c)].
These results show that Equation~(\ref{eq:G_new}) provides a reliable and convenient method to calculate the Green's function, even in the presence of EPs.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig3.pdf}
\caption{Chirality-reversal Green's function in a {$\cal PT$}-symmetric ring cavity. (a) Phase of the Green's function (solid line) and the coalesced eigenstate (dotted line). Dashed lines point to the phase of the Green's function at the source (marked by the arrow). Shaded regions show the half periods with higher loss. (b) False color plots showing the constant amplitudes of the coalesced eigenstate (top) and the Green's function (bottom). A finite width is imposed on the ring for visual clarity. Here $n_0=3$, $\delta n=0.003$, $l=1$, $k_\text{EP}L=2.0944 - 0.0021i$, $kL=2.0944$, and $x'=L/8$. }\label{fig:Jordan}
\end{figure}
A similar but more striking behavior of the Green's function at an EP was recently reported in an effective {$\cal PT$}-symmetric ring cavity with refractive index \cite{Chen2020}
\begin{equation}
n(\theta) = (n_0 +i\delta n) + \delta n(\cos2l\theta+i\sin2l\theta).\label{eq:n_PT_ring}
\end{equation}
Here $\theta$ is the azimuthal angle, and below we will use the arc length $x=R\theta\in[0,L]$ as the coordinate, where $R$ is the radius of the ring and $L=2\pi R$ is the circumference. $n_0$ and $\delta n$ are the real and imaginary parts of the background index, and the latter includes both absorption and radiation losses and is positive. $l$ is a positive integer, and the complex index grating, proportional to $e^{2il\theta}$, scatters the clockwise (CW) wave of angular momentum $-l$ to the counterclockwise (CCW) wave of angular momentum $l$
but \textit{not} vice versa. Note that the chirality and the sign of $\text{Im}[n]$ here are defined with respect to the temporal dependence $e^{-i\omega t}$.
Consequently, an EP appears at $k_\text{EP}R = l/(n_0+i\delta n)$ with the coalesced CCW eigenstate $\psi(x)=e^{il\theta}$ \cite{Miao2016}. The Green's function, on the other hand, can be fully decoupled from this mode even on resonance, if the source is placed at $\theta=(m+1/4)\pi/l$ where $m$ is a non-negative integer \cite{Chen2020}, i.e., at one of the most lossy spots in the passive ring cavity. The Green's function is given by the corresponding Jordan vector $J(x)\propto e^{-il\theta}$ instead, i.e., the ``missing dimension'' of the Hilbert space at the EP in the CW direction.
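The one-way scattering by the complex grating can be verified with a short calculation of the angular overlaps between the $e^{\pm il\theta}$ waves. This is only a sketch of the mode-coupling argument, not the full Helmholtz problem of the ring:

```python
import numpy as np

# The grating exp(2 i l theta) couples CW (-l) to CCW (+l) but not vice
# versa, so the 2x2 coupling matrix is a Jordan block.
l = 1
N = 4096
theta = np.arange(N) * 2 * np.pi / N
grating = np.exp(2j * l * theta)

def overlap(m_out, m_in):
    # (1/2pi) * integral of e^{-i m_out theta} * grating * e^{+i m_in theta}
    return np.sum(np.exp(-1j * m_out * theta) * grating
                  * np.exp(1j * m_in * theta)) / N

V = np.array([[overlap(l, l),  overlap(l, -l)],    # CCW <- CCW, CCW <- CW
              [overlap(-l, l), overlap(-l, -l)]])  # CW  <- CCW, CW  <- CW
print(np.round(np.abs(V), 8))   # only the (CCW <- CW) element survives
```

The resulting matrix is proportional to $\begin{pmatrix}0&1\\0&0\end{pmatrix}$, the hallmark of the defective (Jordan-block) coupling behind the EP.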
This extraordinary behavior is captured nicely using our auxiliary eigenvalue approach (see Figure~\ref{fig:Jordan}). In addition, a perturbative approach shows that the Green's function at the source is given by \cite{Chen2020}
\begin{equation}
G(x',x';k) \approx \frac{1}{2(k-k_\text{EP})k_\text{EP}L},
\end{equation}
which is almost purely imaginary on resonance (i.e., $k=\text{Re}[k_\text{EP}]$) for a high-$Q$ mode with $|\text{Im}[k_\text{EP}]|\ll\text{Re}[k_\text{EP}]$. In the case shown in Figure~\ref{fig:Jordan}, this value is $(-113.99i + 0.228)R$, nearly identical to that given by our auxiliary eigenvalue approach, i.e., $(-113.99i + 0.224)R$.
\subsection{Quasi-1D waveguides}
\label{sec3}
Quasi-1D waveguides are frequently used in the study of disordered mesoscopic and optical systems \cite{Beenakker1997,Imry2008,Mello2004}, and the Green's function plays a crucial role in constructing the scattering and transfer matrices \cite{fisher_relation_1981,stone_what_1988}. Here again we consider the scalar Helmholtz equation in a waveguide with background refractive index $n(\bm{r})$:
\begin{equation}
\mathcal{L}=-\frac{1}{n^2(\bm{r})}(\partial_x^2+\partial_y^2),\quad z = k^2.\label{eq:Helmholtz2D}
\end{equation}
A finite width $L_y$ in the transverse direction and the Dirichlet boundary conditions at $y=0,L_y$ lead to a set of transverse modes (``channels'') $f_m(y)=\sin [nk_m^{(y)}y]$, where $nk_m^{(y)}=m\pi/L_y$ is the transverse wave number and $m$ is a positive integer. At a given frequency $k$, the longitudinal wave number in the $m$th channel is given by $nk_m^{(x)}=n\{k^2-[k_m^{(y)}]^2\}^{1/2}$, and this channel is propagating (evanescent) if $k_m^{(x)}$ is real (imaginary).
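The channel-counting rule just stated is easy to check numerically. A minimal sketch, using the uniform-waveguide parameters quoted with Figure~\ref{fig:WG_empty} ($n=1.5$, $L_y=2\,\mu$m, free-space wavelength 1550 nm):

```python
import numpy as np

# A channel m propagates while k > k_m^(y) = m*pi/(n*L_y).
n, L_y = 1.5, 2.0                 # refractive index, width (microns)
k = 2 * np.pi / 1.55              # free-space wavenumber, lambda = 1550 nm
m = np.arange(1, 200)
ky = m * np.pi / (n * L_y)        # transverse wavenumbers k_m^(y)
propagating = m[k > ky]
print(len(propagating))           # -> 3 for these parameters
```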
\begin{figure}[b]\centering
\includegraphics[width=\linewidth]{Fig4.pdf}
\caption{Green's function in a uniform waveguide with $L_y=2 \,\mu$m and $n=1.5$ everywhere. (a) and (b) show the slices along $x$ and $y$ at the source position $x'=0,\,y'=3/13\,\mu$m. The free-space wavelength is $1550$ nm. }\label{fig:WG_empty}
\end{figure}
To validate our auxiliary eigenvalue approach in quasi-1D waveguides, we first consider a uniform waveguide with the outgoing boundary condition. In this case an analytical expression exists for the Green's function, which can be written as the following infinite sum using the channel functions:
\begin{equation}
G(\bm{r},\bm{r'};k) = \sum_m n\frac{\sin[nk_m^{(y)}y']\sin[nk_m^{(y)}y]}{ik_m^{(x)}L_y}e^{\pm ink_m^{(x)}x}.\label{eq:G_wg_anal}
\end{equation}
Here the source point $\bm{r'}=(x',y')$ is placed at $x'=0$, $y'\neq0,L_y$, and the ``$+\,(-)$" sign in the exponent applies to a positive (negative) $x$. Clearly, only the finite number of propagating channels affects the far-field behavior of the Green's function, while the logarithmic divergence of the Green's function at the source, a generic property in 2D (including quasi-1D), is reflected by the infinite number of evanescent channels in the summation. Numerically, this divergence is truncated either by the inclusion of only a finite number of evanescent channels or by the finite resolution of the spatial discretization. We also note that the Green's function is dimensionless in quasi-1D.
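A direct evaluation of the channel sum illustrates both points: far from the source the evanescent channels barely matter, while at the source the sum keeps growing as channels are added. The sketch uses the Figure~\ref{fig:WG_empty} parameters ($n=1.5$, $L_y=2\,\mu$m, $\lambda=1550$ nm, $y'=3/13\,\mu$m):

```python
import numpy as np

# Channel-sum evaluation of the uniform-waveguide Green's function.
n, L_y = 1.5, 2.0
k = 2 * np.pi / 1.55
yp = 3 / 13                     # source height y' (microns)

def G(x, y, n_channels):
    total = 0j
    for m in range(1, n_channels + 1):
        ky = m * np.pi / (n * L_y)                  # k_m^(y)
        kx = np.sqrt(complex(k**2 - ky**2))         # k_m^(x); +i branch decays
        phase = np.exp(1j * n * kx * abs(x))        # the +/- sign via |x|
        total += n * np.sin(n * ky * yp) * np.sin(n * ky * y) \
                 / (1j * kx * L_y) * phase
    return total

g3 = G(3.0, yp, 3)       # the three propagating channels only
g100 = G(3.0, yp, 100)   # plus many evanescent channels
print(abs(g100 - g3))    # small far from the source
print(abs(G(0.0, yp, 100)), abs(G(0.0, yp, 400)))  # keeps growing at the source
```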
Figure~\ref{fig:WG_empty} shows one example where our auxiliary eigenvalue approach, implemented using the finite difference method \cite{Ge_scattering_2015,Ge2017} (solid line), is compared with the analytical result given by Equation~(\ref{eq:G_wg_anal}). With just the three propagating channels available in this case (dotted line), Equation~(\ref{eq:G_wg_anal}) describes the Green's function well far from the source ($>2\,\mu{m}$); with more channels included (e.g., 100; dashed line), a good agreement
between Equation~(\ref{eq:G_wg_anal}) and our auxiliary eigenvalue approach is observed, where the grid spacings $\Delta x=1/30\,\mu$m, $\Delta y=3/130\,\mu$m used in the finite difference method are comparable to the shortest evanescent tail in the summation (i.e., $1/k_{m=100}^{(x)}\approx0.01\,\mu$m.)
For structured or disordered quasi-1D waveguides connected to two semi-infinite regions (``leads"), the Helmholtz equation is no longer separable in $x$ and $y$, and an analytical expression for the Green's function in the form of Equation~(\ref{eq:G_wg_anal}) does not exist. Nevertheless, one can still compare to the bilinear expansion (\ref{eq:G_expansion}) as we show in Figs.~\ref{fig:WG}(a) and (b) for a {$\cal PT$}-symmetric waveguide, despite its slow convergence [see Figure~\ref{fig:WG}(c)].
\begin{figure}[b]\centering
\includegraphics[width=\linewidth]{Fig5.pdf}
\caption{(a,b) Same as Figure~\ref{fig:WG_empty} but for the Green's function in a {$\cal PT$}-symmetric waveguide. Here $L_x=9 \,\mu$m, $L_y=3 \,\mu$m, and the source is at $x'=-1/60\,\mu$m, $y'=1.5\,\mu$m. Inset in (a): Schematic of the waveguide. $\text{Im}[n(x)]=\pm0.05$ for $x\in[-3,0]\mu$m and $[0,3]\mu$m respectively, where $\text{Re}[n(x)]=2$ in the central half of the width. $\text{Re}[n(x)]=1.2$ elsewhere inside the waveguide and $1$ outside. (c) Convergence of the bilinear expansion at the source (upper) and at $x=4.5\,\mu$m, $y=1.5\, \mu$m (lower). }\label{fig:WG}
\end{figure}
Here we also briefly review a standard technique used to calculate the Green's function in quasi-1D waveguides, i.e., the recursive Green's function method \cite{thouless_conductivity_1981,Sols_theory_1989}. The comparison of the Green's functions obtained with our proposed approach and with the recursive Green's function method will be presented in Section \ref{sec:III}.
The heart of the recursive Green's function method lies in the celebrated Dyson's equation:
\begin{equation}
\bm{G}=\bm{G}^0 + \bm{G}^0\bm{VG}.\label{eq:RGF}
\end{equation}
$\bm{G}$ is the Green's matrix expressed in the spatial basis here, i.e., its element $\bm{G}_{i,j}$ gives the value of the Green's function at point $i$ when the source is placed at point $j$. In a quasi-1D system with $N$ segments, $\bm{V}$ represents the couplings between the $(N-1)$th and $N$th segments due to $\mathcal{L}$, and $\bm{G}^0$ is the value of $\bm{G}$ when $\bm{V}$ is taken to be zero. In other words, $\bm{G}^0$ contains the Green's matrices of two separate systems, one for the first $N-1$ segments on the left as a whole and one for the last segment on the right. This recursive procedure then starts with a single segment on the left and proceeds with more segments added from the right, one at a time.
When implemented using the finite difference method that is equivalent to a tight-binding lattice \cite{Sols_theory_1989}, the recursive Green's function method gives results identical to our auxiliary eigenvalue approach, because they solve exactly the same discretized equation. Each segment of the system is simply a single column along $y$. We note, however, that these two methods serve different purposes. The auxiliary eigenvalue approach gives the values of the Green's function inside the entire waveguide for a given position of the source, which can be either in the interior of the waveguide or on its boundaries. The recursive Green's function method, on the other hand, is particularly suitable for solving the Green's function connecting the two boundary layers, with the source placed in either one of them [see Appendix \ref{sec:RGF}]. Extra steps are needed to solve the Green's function inside the waveguide, which is numerically more intensive.
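The recursive idea can be sketched on the simplest possible case, a scalar tight-binding chain, where each "segment" is a single site rather than a column; all parameters below are illustrative. The surface block of the Green's matrix is grown one site at a time via Dyson's equation and then compared with direct inversion:

```python
import numpy as np

# Minimal scalar sketch of the recursive Green's function method.
N, g = 50, 1.0
z = 0.37 + 0.05j                       # probe energy (complex to stay off resonance)
eps = 0.1 * np.cos(np.arange(N))       # arbitrary on-site terms

# Surface Green's function, grown one site at a time (Dyson's equation):
# G_NN = [z - eps_N - g * G^(N-1)_{N-1,N-1} * g]^{-1}
G_surf = 1.0 / (z - eps[0])
for i in range(1, N):
    G_surf = 1.0 / (z - eps[i] - g * G_surf * g)

# Direct inversion for comparison
H = np.diag(eps) + g * (np.eye(N, k=1) + np.eye(N, k=-1))
G_full = np.linalg.inv(z * np.eye(N) - H)
print(abs(G_surf - G_full[-1, -1]))    # agrees to machine precision
```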
\section{New insights} \label{sec:III}
Having validated the proposed auxiliary eigenvalue approach to calculate the Green's function in various systems, we now turn to the new insight this method provides, i.e., viewing the Green's function as a defect state, as manifested by Equation~(\ref{eq:defect}).
We start by revisiting the 1D Hermitian system described by the scalar Helmholtz equation in Figure~\ref{fig:1D_Hermitian}. The eigenstate $\psi_0(x)$ in our auxiliary eigenvalue equation (\ref{eq:G_new_def}) (and the Green's function) for a \textit{given} energy $z=k^2$ then corresponds to the wave function given by the Schr\"odinger equation inside a closed box with a $\delta$-function potential:
\begin{equation}
\left[-\frac{1}{n^2}\partial_x^2+\lambda_0\delta(x-x')\right]\psi_0(x)=z\psi_0(x),\label{eq:Schrodinger}
\end{equation}
and the sign of $\lambda_0$ determines whether the potential is attractive or repulsive, which can be found analytically using Equation~(\ref{eq:lambda_1D_Hermitian}).
This property of $\lambda_0$ is a crucial difference between our approach and previous studies of defect states, where the defect strength (i.e., $\lambda_0$) is treated as a free parameter to find all possible $z$'s. In a nutshell, a given defect strength leads to an infinite set of wave functions with different energies $\{z_i\}$ \cite{Patil2006}, while for a given $z$ there is a single wave function of Equation~(\ref{eq:Schrodinger}) [and more generally, Equation~(\ref{eq:defect})], i.e., the Green's function, the value of which at the source gives $\lambda_0^{-1}$. To the best of our knowledge, such a one-to-one mapping from $z$ to $\lambda_0$ as an eigenvalue problem has not been recognized previously, which has prevented the Green's function from being identified as a defect state in the past.
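The mapping $z\to\lambda_0$ can be checked in a few lines on a discretized version of Equation~(\ref{eq:Schrodinger}). Grid size and energy below are illustrative, and the discrete on-site defect strength differs from the continuum $\lambda_0$ by a grid-spacing factor:

```python
import numpy as np

# Solve for the Green's function column at a given z, read off the defect
# strength lam0 = 1/psi(x'), and verify that psi is an eigenstate of
# H + lam0*delta with eigenvalue z.
N, n = 400, 3.0
dx = 1.0 / N                                   # closed box of length L = 1
t = 1.0 / (n**2 * dx**2)
H = 2 * t * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))  # Dirichlet box

z = 900.0                                      # given energy (units of 1/L^2)
src = int(0.45 * N)                            # source at x' = 0.45 L
psi = np.linalg.solve(z * np.eye(N) - H, np.eye(N)[src])  # G(x, x'; z) column
lam0 = 1.0 / psi[src]                          # inverse height at the source

H_defect = H + lam0 * np.outer(np.eye(N)[src], np.eye(N)[src])
print(lam0, np.abs(H_defect @ psi - z * psi).max())   # residual ~ 0
```

The check is an identity: $(H+\lambda_0\delta)\psi = H\psi + e_{x'} = H\psi + (z-H)\psi = z\psi$, so the residual vanishes to machine precision for any $z$ not in the spectrum of $H$.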
To illustrate this point, we show in Figure~\ref{fig:Schrodinger}(a) several energy eigenvalues $z_i$'s of Equation~(\ref{eq:Schrodinger}) as a function of $\lambda_0$. It is important to note that the ``bandwidths'' of the eigenstates do not overlap [shaded areas in Figure~\ref{fig:Schrodinger}(a)], even when the range of $\lambda_0$ is extended from $-\infty$ to $\infty$. This feature is determined by the uniqueness of the Green's function: for a given energy $z$ and source position $x'$, the Green's function is uniquely determined by the boundary condition, and so is its inverse height at $x'$, which gives the only allowed value of $\lambda_0$ for this pair of $z$ and $x'$. Each of the ``bandgaps'' in Figure~\ref{fig:Schrodinger}(a) shrinks to a single point when $\lambda_0$ is extended from $-\infty$ to $\infty$, and they correspond to the energy eigenvalues of the two subsystems separated by the $\delta$-potential.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig6.pdf}
\caption{(a) Eigenvalues of the Schr\"odinger equation (\ref{eq:Schrodinger}) as a function of $\lambda_0$ for $n=3$ and $x' = 0.45 L$. Only four of them are shown near $z=900/L^2$. (b) Schematics showing the potential as a function of the position. Shaded areas indicate the Dirichlet boundary condition and the $\delta$-function potential with $\lambda_0=38.6/L$. Two unnormalized wave functions $|\psi(x)|^2$ are also shown.}\label{fig:Schrodinger}
\end{figure}
A particularly interesting type of defect states has a spatially localized wave function. It takes place when the energy of the defect state is inside a bandgap, causing the defect state to be largely decoupled from the rest of the system. Here we show that the Green's function in a 1D non-Hermitian lattice can have the same property, which further stresses the new perspective our method provides.
This 1D lattice features a non-Hermitian flatband \cite{qi_defect_2018}, enabled by imposing non-Hermitian particle-hole (NHPH) symmetry \cite{Malzard,zeromodeLaser,NHC_arxiv,Kawabata} via staggered gain and loss along a lattice of optical resonators. In the tight-binding Hamiltonian description, it is given by
\begin{equation}
H \phi_n = [z_0+(-1)^{n-1} i\gamma]\phi_n + g(\phi_{n+1}+\phi_{n-1}),
\end{equation}
where $n$ labels the lattice sites from 1 to $N$, $g\in\mathbb{R}$ is the nearest-neighbor coupling, and the alternating positive and negative imaginary on-site detuning $\gamma$ models gain and loss. Below we shift the energy reference such that the on-site energy $z_0$ of a resonator mode is zero. NHPH symmetry then warrants a symmetric energy spectrum about the origin of the complex plane, i.e., $z_i=-z_j^*$. At a critical value of the gain and loss strength, $\gamma=2g$, the real part of the energy bands collapses to 0, with all the eigenvalues $\{z_i\}$ completely imaginary.
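Both spectral properties are easy to confirm numerically for the staggered chain (the chain length and the sub-critical $\gamma$ below are illustrative):

```python
import numpy as np

# NHPH-symmetric chain: staggered +/- i*gamma on-site terms, real coupling g.
def H_chain(N, g, gamma):
    onsite = 1j * gamma * (-1.0) ** np.arange(N)   # +i gamma, -i gamma, ...
    return np.diag(onsite) + g * (np.eye(N, k=1) + np.eye(N, k=-1))

N, g = 60, 1.0

# (i) spectrum symmetric under z -> -z* at generic gamma
z1 = np.linalg.eigvals(H_chain(N, g, 0.5 * g))
d = np.abs(z1[:, None] + np.conj(z1)[None, :]).min(axis=1)
print(d.max())                      # each eigenvalue has a mirror partner

# (ii) at gamma = 2g the real parts collapse: purely imaginary spectrum
z2 = np.linalg.eigvals(H_chain(N, g, 2.0 * g))
print(np.abs(z2.real).max())
```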
Now let us consider the Green's function of such a system, i.e.,
\begin{equation}
(z-H)G = \delta_{nn'}.
\end{equation}
In the auxiliary eigenvalue problem, it can be rewritten as
\begin{equation}
(H+\lambda_0\delta_{nn'})\psi_n = z\psi_n,\quad G = \frac{\psi_n}{\lambda_0\psi_{n'}}.
\end{equation}
In other words, the Green's function corresponds to a single defect state, where the on-site energy at the position of the source $n'$ is shifted by $\lambda_0$.
We emphasize that this shift $\lambda_0$ is \textit{complex} in general and hence different from the Hermitian case shown in Figure~\ref{fig:Schrodinger} and other typical considerations of a defect state, including that in Reference~\cite{qi_defect_2018}. One example is shown in Figure~\ref{fig:NHPH}, where we place the source at the left edge ($n'=1$) of a lattice with 60 lattice sites. $\lambda_0$ stays in the lower complex plane when $z$ increases from $-g$ to $g$, and the stronger defect strength (and lower local density of states) at $z=\pm g$ gives the Green's function a clearly localized profile, as reflected by the half participation ratio (HPR) defined by $\left(\sum_n |\psi_n|^2\right)^2/\left(2\sum_n|\psi_n|^4\right)$ in units of the lattice constant [see Figure~\ref{fig:NHPH}(b)]. We note that the HPR of an exponentially localized intensity $|\psi_n|^2=e^{-n/\xi}\,(n=1,2,\ldots,N)$ is approximately equal to its localization length $\xi$ \cite{Baboux} when $\xi,N\gg1$. The localization of the Green's function is weakened as the defect strength $\lambda_0$ becomes smaller near $z=0$, with its HPR now greater than 15 lattice sites.\\
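The HPR trend of Figure~\ref{fig:NHPH} can be sketched in a few lines for the same staggered chain ($N=60$, $\gamma=2g$, source at $n'=1$); this is a sketch of the comparison, not a reproduction of the full figure:

```python
import numpy as np

# HPR of the Green's function of the NHPH chain at the band edge |z| = g
# versus the band center z = 0; stronger defect strength -> stronger
# localization at |z| = g.
N, g, gamma = 60, 1.0, 2.0
onsite = 1j * gamma * (-1.0) ** np.arange(N)
H = np.diag(onsite) + g * (np.eye(N, k=1) + np.eye(N, k=-1))

def hpr(z):
    psi = np.linalg.solve(z * np.eye(N) - H, np.eye(N)[0])  # source at n' = 1
    I = np.abs(psi) ** 2
    return I.sum() ** 2 / (2 * np.sum(I ** 2))

print(hpr(g), hpr(0.0))   # strong localization at |z| = g, weaker at z = 0
```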
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{Fig7.pdf}
\caption{(a) Complex defect strength $\lambda_0$ when $z$ increases from $-g$ to $g$. The lattice has 60 sites and $\gamma=2g$. (b) HPR as a function of $z$, showing strong localization (HPR $\sim1$) at $|z|=g$ and weaker localization (HPR $>15$) at $z=0$.}\label{fig:NHPH}
\end{figure}
Finally, we reveal the most intriguing connection between the Green's function and a defect state, i.e., in a 2D topological lattice with a chiral edge state. This system breaks Lorentz reciprocity \cite{Landau,ge_reciprocity2016}, which can be achieved in an electronic system by imposing a magnetic field. An analogy can be introduced to photonic systems by imposing an artificial gauge field, achieved experimentally by asymmetric couplings between two neighboring lattice sites on a tight-binding lattice \cite{hafezi_robust_2011,hafezi_imaging_2013,bandres_topological_2018,zhao_non-hermitian_2019}: while the couplings in both the $x$ and $y$ directions still have the same amplitude, their phases are now different. Here we consider a square lattice (Figure~\ref{fig:topo}) with uniform vertical coupling and horizontal asymmetric couplings of the same amplitude $g$, realizing a Landau gauge with a $\pi/2$-flux through the smallest plaquette \cite{hafezi_robust_2011,hafezi_imaging_2013}. Its bottom right corner is pierced by the opposite flux, and an on-site potential shift of $-2g$ is also introduced to decouple it from the rest of the system.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{Fig8.pdf}
\caption{Analogy between the Green's function and a chiral edge state circumventing a defect in a topological square lattice. (a) The Green's function calculated with $z=1.85g$ and the source at $x=a,y=10a$. Solid, dashed, and dotted lines show the results of our auxiliary eigenvalue approach, the recursive Green's function, and the bilinear expansion, respectively. (b,c) The values of the Green's function along $y$ at $x=40a$ and $a$, respectively.}\label{fig:topo}
\end{figure}
Due to its sublattice symmetry \cite{Hasan}, the energy spectrum of the main region is symmetric about $z=0$, set at the value of the identical on-site potential. It has an edge band with CCW chiral edge states in $z/g\in[1.08, 2.61]$, with their CW counterparts in $z/g\in[-2.61, -1.08]$. We probe the response of the system by placing a point source at $x=a,y=10a$, where $a=1$ is the lattice constant. As Figure~\ref{fig:topo}(a) shows, the Green's function with $z$ at the middle of the CCW chiral edge band displays the same characteristic as the eigenstates of $\mathcal{L}$, i.e., localized near the edge of the topological region. Our auxiliary eigenvalue approach gives results identical to the recursive Green's function method [solid and dashed lines in Figs.~\ref{fig:topo}(b) and (c)], and the Green's function displays two noticeable features when compared with the bilinear expansion: while only a small number ($\sim$20) of edge states are needed to capture the Green's function on the opposite (right) edge of the system (dotted line), they cannot describe the behavior of the Green's function near the source, where it displays a local minimum.
One way to understand this behavior is offered by our auxiliary eigenvalue approach, where the Green's function is effectively a defect state perturbed by the $\delta$-function potential [see Equation~(\ref{eq:defect})]. From this point of view, the local minimum of the Green's function is a consequence of the topological protection of the chiral edge state: it circumvents the defect, as can be clearly seen in Figure~\ref{fig:topo}(a). This detour, of course, creates a local perturbation so strong that it can only be captured by a large number of the unperturbed eigenstates of $\mathcal{L}$ in the bilinear expansion, including even the chiral edge states in the opposite direction that are much different in energy (not shown). The far field, on the other hand, is not affected by this local perturbation, as we have seen in Figure~\ref{fig:topo}(b). We note that the Green's function does not display a local minimum at all energies: if $z$ is almost on resonance with an unperturbed eigenstate in the edge band, the Green's function is given essentially by this single chiral edge mode, which goes through the defect created by the $\delta$-function potential with little scattering.
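The lattice itself is easy to sketch. The snippet below builds a $\pi/2$-flux square lattice in the Landau gauge; the lattice size, the small imaginary energy offset, and the source site are illustrative, and the corner defect of Figure~\ref{fig:topo} is omitted for simplicity. It checks the spectral symmetry about $z=0$ and the edge localization of the Green's function at $z=1.85g$:

```python
import numpy as np

# pi/2-flux square lattice (Landau gauge): x-hoppings carry the phase
# exp(i*pi*y/2), vertical hoppings are uniform.
Nx = Ny = 16
g = 1.0
idx = lambda x, y: x * Ny + y

H = np.zeros((Nx * Ny, Nx * Ny), complex)
for x in range(Nx):
    for y in range(Ny):
        if y + 1 < Ny:                               # uniform vertical coupling
            H[idx(x, y), idx(x, y + 1)] = g
            H[idx(x, y + 1), idx(x, y)] = g
        if x + 1 < Nx:                               # phased horizontal coupling
            ph = np.exp(1j * np.pi * y / 2)
            H[idx(x, y), idx(x + 1, y)] = g * ph
            H[idx(x + 1, y), idx(x, y)] = g * np.conj(ph)

E = np.linalg.eigvalsh(H)        # sublattice symmetry: spectrum symmetric about 0

z = 1.85 * g                     # inside the chiral edge band
src = idx(0, Ny - 2)             # source on the left edge
# small imaginary offset keeps the solve well-conditioned near resonances
psi = np.linalg.solve((z + 1e-3j) * np.eye(Nx * Ny) - H, np.eye(Nx * Ny)[src])
I = np.abs(psi.reshape(Nx, Ny)) ** 2
edge_frac = (I.sum() - I[1:-1, 1:-1].sum()) / I.sum()
print(edge_frac)                 # most weight sits on the boundary ring
```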
\section{Discussion and conclusions}
In summary, we have introduced in this work a new perspective of the time-independent Green's function, which is obtained as a single eigenstate of an auxiliary eigenvalue formulation that embodies a defect state created by the $\delta$-function potential. The height of the $\delta$-function potential is determined by the inverse value of the Green's function at the source position, which is given directly in the form of a generalized eigenvalue problem given by Equation~(\ref{eq:G_new_def}). It is the only well-behaved and finite eigenvalue, easy to identify numerically. Therefore, our approach differs both conceptually and computationally from previous investigations of eigenstates of a $\delta$-function potential, which were not related to the Green's function.
The uniqueness of the eigenstate that gives the Green's function in our approach should be distinguished from the single bound state in an attractive 1D $\delta$-function potential \cite{Gottfried}. In our case the $\delta$-function potential can be repulsive or attractive in a Hermitian system [see Figure~\ref{fig:Schrodinger}(a)], and it becomes complex in general in non-Hermitian systems. Furthermore, the eigenstate in our approach does not depend on the original potential included in the operator $\cal L$, while the aforementioned bound state assumes a vacuum background. Finally, our eigenstate exists in higher dimensions as well, with clearly different boundary conditions and spatial profiles from a bound state [see Figure~\ref{fig:topo}(a), for example].
We have also verified the Green's function obtained in our method by comparing both to analytical results when available and to two other frequently used numerical methods, i.e., the recursive Green's function method and the bilinear expansion in the basis of the system's eigenstates. At an EP, where a perturbative treatment of the bilinear expansion becomes necessary, our method remains robust, as seen from the examples in two $\mathcal{PT}$-symmetric systems. Our method also gives results identical to the recursive Green's function method, implemented by finite difference on the same tight-binding lattice.
Our defect-state approach can also be applied to more numerically demanding cases, e.g., in the study of diffusive transport and wave localization \cite{Beenakker1997,Imry2008}. We have investigated disordered quasi-1D waveguides over 100 wavelengths long and with over 60 transverse channels (not shown). In these cases, the memory storage for the recursive Green's function may become an issue, when the values of the Green's function in the interior of the waveguides are also computed for light-matter interaction or laser emissions. This is because the Green's matrix obtained from the recursive procedure mingles the values of the Green's function generated by sources placed across the entire system. The defect-state approach excels in this regard, because each source is treated independently. We should also mention that this advantage becomes less noticeable or even a disadvantage if the Green's function needs to be evaluated with the source at many different locations.
More importantly, our approach offers a previously unexplored physical insight that both the recursive Green's function method and the bilinear expansion lack, i.e., the linkage between the Green's function and a defect state. We have exemplified an intriguing manifestation of this linkage using a topological chiral edge state, where the local minimum of the Green's function is analogous to a chiral edge state circumventing a boundary defect. Therefore, even though our discussions have focused on the Helmholtz equation for scalar optical waves, we expect this new perspective and the physical insight it offers to have important applications in other related fields as well, including condensed matter systems, acoustics, electronic circuits and so on.
\medskip
\textbf{Acknowledgements} \par
We thank Douglas Stone, Yidong Chong and Azriel Genack for helpful discussions. This project is supported by the NSF under Grant No. PHY-1847240.
\medskip
https://arxiv.org/abs/1604.06127 | The HOMFLY Polynomial of Links in Closed Braid Form

It is well known that any link can be represented by the closure of a braid. The minimum number of strings needed in a braid whose closure represents a given link is called the braid index of the link, and the well-known Morton-Frank-Williams inequality reveals a close relationship between the HOMFLY polynomial of a link and its braid index. In the case that a link is already presented in closed braid form, Jaeger derived a special formulation of the HOMFLY polynomial. In this paper, we prove a variant of Jaeger's result as well as a dual version of it. Unlike Jaeger's original reasoning, which relies on representation theory, our proof uses only elementary geometric and combinatorial observations. Using our variant and its dual version, we provide a direct and elementary proof of the fact that the braid index of a link that has an $n$-string closed braid diagram that is also reduced and alternating is exactly $n$. Until now this fact was only known as a consequence of a result due to Murasugi on fibered links that are star products of elementary torus links and of the fact that alternating braids are fibered.

\section{Introduction}
\medskip
It is well known that any link can be represented by the closure of a braid. The minimum
number of strands needed in a braid whose closure represents a given link is called the
{\em braid index} of the link. Like other link invariants defined as the extreme value of a quantity over the infinitely many diagrams representing a given link (such as the minimum crossing number), the braid index of a link is in general hard to compute~\cite{A}. In the case of the minimum crossing number, there was a long-standing conjecture which states that for a reduced alternating link diagram, the number of crossings in the diagram is equal to the minimum crossing number of the link. It is known that the span of the Jones polynomial \cite{Jo} of a link gives a lower bound on the crossing number of the link. In the case of a reduced alternating link diagram, it was shown that the span of the Jones polynomial of a link equals the number of crossings in the diagram, which leads to the proof of the conjecture \cite{K,M1,T}. In the case of the braid index, there is a similar inequality relating the braid index of a link to the $a$-span of its HOMFLY polynomial (which is a polynomial in two variables $z$ and $a$ to be defined in the next section). S.\ Yamada proved that any link diagram of a given link $L$ with $k$ {\em Seifert circles} can be realized as the closure of a braid on $k$ strands, which implies that the braid index of an oriented link $L$ equals the minimum number of Seifert circles over all link
diagrams of $L$~\cite{Ya}. In ~\cite{Mo}, H.\ Morton showed that the
number of Seifert circles of a link $L$, hence the braid index of $L$
(in light of Yamada's result), is bounded from below by $a$-span$/2+1$
(which is called the Morton-Frank-Williams inequality, or MFW inequality
for short). In analogy to the crossing number conjecture for a reduced
alternating link diagram, K.~Murasugi conjectured that the number of
Seifert circles, hence the braid index, in such a diagram equals $a$-span$/2+1$ (the Murasugi Conjecture) \cite{Mu}. Although this conjecture turned out to be false in general~\cite{MP} (for example, the reduced alternating diagram of the knot $5_2$ has 4 Seifert circles, but the $a$-span of its HOMFLY polynomial is 4, so $a$-span$/2+1=3$, which is indeed the braid index of $5_2$), researchers have shown that the MFW inequality is sharp for many classes of links (including some non-alternating ones), hence the $a$-span of the HOMFLY polynomial for these links can be used to determine their braid index. Examples include the closed positive braids with a full twist (in particular the torus links)~\cite{FW}, the 2-bridge links and fibered alternating links~\cite{Mu}, and a new class of links discussed in a more recent paper by S.~Y.~Lee and M.~Seo~\cite{Lee}. For further reading on related topics, interested readers may refer to J.S.~Birman and W.W.~Menasco~\cite{Bir}, P.R.~Cromwell~\cite{Crom}, E.A.~Elrifai~\cite{El}, H.~Morton and H.~B.~Short~\cite{MS}, T.~Nakamura~\cite{Na1} and A.~Stoimenow~\cite{Sto}.
Motivated by these results, in this paper we seek special and explicit formulations of the HOMFLY polynomial for certain classes of links, where the explicit forms allow us to analyze and derive the $a$-spans of the HOMFLY polynomials of these links. Our main result expresses the HOMFLY polynomial of a link presented in closed braid form by two explicit formulas. We show that one of our formulas is equivalent to the expansion derived by F.\ Jaeger~\cite{Ja}. However, our approach is combinatorial in nature and our proof of the formula is shorter than the proof in~\cite{Ja}. We show that the Morton-Frank-Williams inequality is an immediate consequence of these two HOMFLY polynomial formulas. As a more significant application of our result, we use it to show that if a link has an $n$-string closed braid diagram that is also reduced and alternating, then the braid index of the link is exactly $n$. The proof of this is direct and short. Our approach is very different from the proof of Murasugi's more general results~\cite{Mu} on oriented alternating fibered links, which is inductive in nature.
This paper is structured as follows. In Section~\ref{s2}, we introduce
basic definitions and terminology about link diagrams, braids and the HOMFLY
polynomial. We introduce two special classes of resolving trees for
closed braids, in which every vertex is a closed braid and the leaf vertices are closed braids that
represent trivial links. We call these resolving
trees descending trees and ascending trees respectively. In
Section~\ref{s3}, we state and prove our main result, namely two formulas expressing the HOMFLY
polynomial of a closed braid as a total
contribution of all leaf vertices in our trees and show that the Morton-Frank-Williams inequality~\cite{FW,Mo} is a direct consequence of these two formulas. In Section~\ref{s4}, we show that the $a$-span of the HOMFLY
polynomial of a reduced alternating braid on $n$ strands is exactly
$2n-2$ and an application of the Morton-Frank-Williams inequality shows that the braid index of
a reduced alternating braid equals the number of strands in the braid. As another application of our main result from Section~\ref{s3}, we also give a short proof that the leading coefficient of the Alexander polynomial of such a closed braid equals $\pm 1$.
Finally, in Section~\ref{acp},
we show that one of our formulas is equivalent to the expansion of the HOMFLY polynomial derived by
F.\ Jaeger~\cite{Ja} based on the concept of admissible circuit partitions.
\section{Basic concepts}\label{s2}
\subsection{Link diagrams and Reidemeister moves}
We assume that the reader is familiar with the definition of a link and refer a reader without such knowledge to a textbook such as \cite{A, B, Li}. For the convenience of the reader, however, we will review one important result that is needed in Section \ref{s3}.
Figure \ref{R-moves} defines three moves one can make on a link diagram without changing its topology, and these are called Reidemeister moves of type I, II and III. In 1926, K.~Reidemeister~\cite{Re} proved that two link diagrams represent the same link if and only if one diagram can be changed to the other through a finite sequence of Reidemeister moves.
\begin{figure}[htb!]
\includegraphics[scale=0.6]{Figure1}
\caption{Reidemeister moves of type I, II and III.}
\label{R-moves}
\end{figure}
For a given oriented link diagram $L$, we assign each crossing $+1$ or $-1$ according to its sign as defined in Figure~\ref{fig:cross}. The writhe of $L$, written as $w(L)$, is the sum of these $+1$'s and $-1$'s over all crossings of $L$. However, if we only sum the $+1$'s and $-1$'s over crossings between two different components $\mathcal{C}_1$ and $\mathcal{C}_2$ of the link, then this number is always even and half of it is called the {\em linking number} between $\mathcal{C}_1$ and $\mathcal{C}_2$. Using the Reidemeister moves, it is easy to show that the linking number is a link invariant. In particular, if $\mathcal{C}_1$ and $\mathcal{C}_2$ are separated by a topological plane (in which case $\mathcal{C}_1$ and $\mathcal{C}_2$ are said to be {\em splittable}), then the linking number between $\mathcal{C}_1$ and $\mathcal{C}_2$ is zero. We make the following remark for future reference. Its proof is trivial and is left to the reader.
\begin{remark}\label{R_remark}
{\em Reidemeister moves of type II and III do not change the writhe of a link diagram hence link diagrams related by a finite sequence of Reidemeister moves II and III have the same writhe. In particular, if a link diagram $L_2$ is obtained through $L_1$ by a finite sequence of moves that involve only deformation of segments of the link within planes that are parallel to the projection plane of the link diagram, then $w(L_2)=w(L_1)$ since such moves do not introduce Reidemeister moves of type I. Also, if $L$ is a link with splittable components $\mathcal{C}_1$, $\mathcal{C}_2$, \ldots, $\mathcal{C}_{\tau}$, then $w(L)=\sum_{1\le j\le \tau}w(\mathcal{C}_j)$.}
\end{remark}
A crossing in a link diagram is called {\em nugatory} if there is a simple closed curve such that the link diagram intersects the curve only at the crossing and the link diagram intersects both components of the complement of the curve.
A link diagram is said to be {\em reduced} if no crossing in the diagram is nugatory. A link diagram is {\em alternating} if as we travel through the link diagram by any given orientation, the strands go through the crossings alternately between overpasses and underpasses. For example, the closure of the braid in Figure \ref{fig:bd} is an alternating link diagram. As we mentioned in the introduction, reduced alternating link diagrams are special since the number of crossings in a reduced alternating link diagram is the minimum crossing number of the link \cite{K,M1,T}.
\subsection{Braids}
Consider $\mathbb{R}^2$ as the standard Euclidean $xy$-plane. A {\em braid diagram} (or just a {\em braid}) on $n$ strands is a set $\mathcal{D}\subset \mathbb{R}\times [0,1]$ consisting of $n$ curves called the {\em strands} of $\mathcal{D}$ such that the following four conditions are met. First, each strand is a monotonic curve in the $y$ direction. Second, every point of $\{1,2,\ldots,n\}\times\{1\}$ is the starting point of a unique strand and every point of $\{1,2,\ldots,n\}\times\{0\}$ is the ending point of a unique strand. Third, every point of $\mathbb{R}\times [0,1]$ belongs to at most two strands. A point that belongs to two strands is called a {\em crossing}. At each crossing, one strand is identified as an {\em overpass} and the other as an {\em underpass}~\cite{Ka}. Fourth, there is at most one crossing in $\mathbb{R}\times \{t\}$ for each $t\in [0,1]$. Note that the second condition gives the braid diagram a downward orientation, so the closure of a braid diagram is an oriented link diagram.
Treated as topological objects, one can speak of topological equivalence
of braid diagrams, and such equivalence relations provide the
foundations for one to treat the braids as elements in the algebraic
objects called the {\em braid groups}. Not to deviate from our main
task, we will only point out that a braid group $B_n$ is a group with
$n-1$ generators $\sigma_1, \ldots, \sigma_{n-1}$ satisfying certain
relations. An element of $B_n$ is a
word of these generators and each letter in the word corresponds to a
crossing in the braid. An example of a braid on 5 strands
and its counterpart in $B_5$ is given in Figure \ref{fig:braid}. Please
refer to ~\cite{Ka} for more details on braid groups. In this paper we
are only interested in closed braids as topological objects, not the
braids in the algebraic sense. For this reason, {\em we will not distinguish a braid and its closure}, that is, the word braid (or a symbol of it) can either represent the braid itself or its closure. The reader should rely on the context to determine its meaning (in many cases it really does not matter).
We define the {\em label} of a strand by the $x$ coordinate of its starting point and we denote the corresponding mapping by $\ell$, that is, if a strand $s$ starts at $(m,1)$, then $\ell(s)=m$. On the
other hand, the mapping $p$ that takes the label of a strand to the $x$ coordinate of its ending point defines a permutation of the labels (namely the integers from $1$ to $n$). Denote this permutation by $p(\mathcal{D})$ and write it as a product of disjoint cycles: $p(\mathcal{D})=(s_{11}s_{12} \ldots s_{1k_1})(s_{21}s_{22}\ldots s_{2k_2})\ldots (s_{\tau 1}s_{\tau 2}\ldots s_{\tau k_\tau })$, where $s_{i1}$ is the label of the first strand in the $i$-th cycle, $s_{i2}$ is the label of the second strand in the $i$-th cycle, and so on, and $\tau$ is the number of cycles in the permutation. Furthermore, we can re-arrange the order of the cycles and of the numbers within each cycle so that $s_{i1}$ is the smallest integer in the $i$-th cycle for each $i$, and $s_{i1}<s_{j1}$ if $i<j$. We call this special form of $p(\mathcal{D})$ the {\em standard form}. From now on, $p(\mathcal{D})$ will always be expressed in its standard form. Note that the standard form of $p(\mathcal{D})$ defines a total order among the strand labels, namely
\begin{displaymath}
s_{11}\triangleleft s_{12}\triangleleft \ldots \triangleleft s_{1k_1}\triangleleft s_{21}\triangleleft s_{22}\triangleleft \ldots \triangleleft s_{2k_2}\triangleleft \ldots \triangleleft s_{\tau 1}\triangleleft s_{\tau 2}\triangleleft \ldots\triangleleft s_{\tau k_\tau}.
\end{displaymath}
We call this order the {\em return order} of the strands in the braid diagram $\mathcal{D}$.
We call each $s_{i1}$ in $p(\mathcal{D})$ the {\em pivot label} within its corresponding cycle and $(s_{i1},1)$ the {\em pivot point} of the cycle when $p(\mathcal{D})$ is expressed in its standard form. Note that each cycle in $p(\mathcal{D})$ corresponds to a connected component in the closed braid diagram and we can travel through $\mathcal{D}$ by traveling through each such component. We say that we travel through $\mathcal{D}$ {\em naturally} if we travel along the strands of $\mathcal{D}$ in its return order, starting from the pivot point at each component and follow the orientation of the braid $\mathcal{D}$.
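The standard form and the return order are easy to compute mechanically. The following short Python sketch (ours, not from the paper; the function names are ad hoc) takes the permutation $p(\mathcal{D})$ as a dictionary mapping each strand label to the $x$ coordinate of its ending point, and produces both:

```python
def standard_cycles(p):
    """Cycles of the permutation p in the paper's standard form: each
    cycle starts with its smallest label, and the cycles are ordered by
    their starting labels."""
    seen, cycles = set(), []
    for start in sorted(p):
        if start in seen:
            continue
        cycle, s = [], start
        while s not in seen:
            seen.add(s)
            cycle.append(s)
            s = p[s]
        cycles.append(cycle)
    return cycles

def return_order(p):
    """Concatenating the standard-form cycles gives the return order."""
    return [s for cycle in standard_cycles(p) for s in cycle]
```

For the braid of Figure~\ref{fig:braid}, $p(\mathcal{D})=(14)(2)(35)$ corresponds to the dictionary \verb|{1: 4, 4: 1, 2: 2, 3: 5, 5: 3}|, and \verb|return_order| yields \verb|[1, 4, 2, 3, 5]|, the order displayed above.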
\begin{figure}[htb!]
\includegraphics[scale=.4]{Figure2}
\caption{An example of a braid $\mathcal{D}$ on 5 strands that has 8 crossings. }
\label{fig:braid}
\end{figure}
\begin{example} {\em Consider the braid diagram $\mathcal{D}$ shown in Figure~\ref{fig:braid} whose corresponding word in $B_5$ is $\sigma_1^{-1}\sigma_3\sigma_2^{-1}\sigma_4^{-3}\sigma_1\sigma_3^{-1}$. The dotted curves show how the braid is closed to form a link diagram. The circled points on top are pivot points of the three components. We have $p(\mathcal{D})=(14)(2)(35)$ (in its standard form), so its return order is $1\triangleleft 4\triangleleft 2\triangleleft 3\triangleleft 5$. One can see that if we close the braid using disjoint curves starting and ending at the points $(i,1)$, $(i,0)$ for each $1\le i\le 5$ (shown in Figure \ref{fig:braid} with dotted curves), strands $1$ and $4$ are in the same component, strand $2$ is a component by itself and strands $3$ and $5$ are in the same component. The pivot labels of these components are $1$, $2$ and $3$ respectively.}
\end{example}
Let $c$ be a crossing in a braid diagram $\mathcal{D}$, $O_c$ be the overpassing strand at $c$ and $U_c$ be the underpassing strand at $c$. We say that $c$ is {\em descending} if $\ell(O_c)\triangleleft \ell(U_c)$ and {\em ascending} if $\ell(O_c)\triangleright \ell(U_c)$. If all crossings in $\mathcal{D}$ are descending (ascending), then we say that $\mathcal{D}$ is a descending (ascending) braid diagram. For example, all crossings except the circled one in Figure \ref{fig:braid} are descending crossings. Switching the circled crossing to a descending crossing will then result in a descending braid.
\begin{remark}\label{descending_remark}
{\em Descending (ascending) braid diagrams have the easy and well known property that the closure of any such braid diagram is a trivial link, that is, a link topologically equivalent (ambient isotopic) to the disjoint union of several circles contained in the same plane. Take the braid given in Figure~\ref{fig:braid} as an example, with the circled crossing flipped to make the braid descending. Using the order $1\triangleleft 4\triangleleft 2\triangleleft 3\triangleleft 5$, we can move strands 1, 4, 2, 3 and 5 into the planes $z=5$, 4, 3, 2 and 1 respectively so that their projections still give the descending braid, and this move does not change the topology of the corresponding link (since there are no crossing changes in the process). It is then easy to see that each component corresponding to a cycle $(14)$, $(2)$ or $(35)$ is an unknot, and these unknots are splittable since they are separated by planes. }
\end{remark}
\subsection{The HOMFLY polynomial} Let $L_+$, $L_-$, and $L_0$ be oriented link diagrams that coincide except at a small neighborhood of a crossing where the diagrams are presented as in Figure~\ref{fig:cross}: the crossing in $L_+$ ($L_-$) is positive (negative) and is assigned $+1$ ($-1$) in the calculation of the writhe of the link diagram. We say the crossing presented in $L_+$ is of a {\em positive} sign and the crossing presented in $L_-$ is of a {\em negative} sign. The following result appears in~\cite{Fr,Ja}.
\begin{proposition}\label{Ho}
There is a unique function that maps each oriented link diagram $L$ to a two-variable Laurent polynomial with integer coefficients $P(L,z,a)$ such that
\begin{enumerate}
\item If $L_1$ and $L_2$ are ambient isotopic, then $P(L_1,z,a)=P(L_2,z,a)$.
\item $aP(L_+,z,a) - a^{-1}P(L_-,z,a) = zP(L_0,z,a)$.
\item If $L$ is an unknot, then $P(L,z,a)=1$.
\end{enumerate}
\end{proposition}
\begin{figure}[htb!]
\includegraphics[scale=.6]{Figure3}
\caption{The sign convention at a crossing of an oriented link and the splitting of the crossing.}
\label{fig:cross}
\end{figure}
The Laurent polynomial $P(L,z,a)$ is called the {\em HOMFLY polynomial}
of the oriented link $L$. The second condition in the proposition is
called the {\em skein relation} of the HOMFLY polynomial. With conditions (2) and (3) above, one can easily show that if $L$ is a trivial link with $n$ connected components, then $P(L,z,a)=((a-a^{-1})z^{-1})^{n-1}$ (by applying these two conditions repeatedly to a simple closed curve with $n-1$ twists in its projection).
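For instance, for $n=2$, let $L_+$ be an unknot diagram with a single positive kink and $L_-$ the same diagram with the kink crossing reversed; both close up to unknots, while smoothing the kink produces the $2$-component trivial link $L_0$. Conditions (1)--(3) then give

```latex
\[
a\,P(L_+,z,a)-a^{-1}P(L_-,z,a)=z\,P(L_0,z,a)
\quad\Longrightarrow\quad
P(L_0,z,a)=(a-a^{-1})z^{-1},
\]
```

and induction on the number of components yields the general formula.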
For our purposes, we will actually be using the following two equivalent forms of the skein relation:
\begin{eqnarray}
P(L_+,z,a)&=&a^{-2}P(L_-,z,a)+a^{-1}zP(L_0,z,a),\label{Skein1}\\
P(L_-,z,a)&=&a^2 P(L_+,z,a)-azP(L_0,z,a).\label{Skein2}
\end{eqnarray}
A rooted and edge-weighted binary tree $\mathcal{T}$ is called a {\em resolving tree} of an oriented link diagram $L$ (for the HOMFLY polynomial)
if the following conditions hold. First, every vertex of $\mathcal{T}$ corresponds to an oriented link diagram. Second, the root vertex of $\mathcal{T}$ corresponds to the original link diagram $L$. Third, each leaf vertex of $\mathcal{T}$ corresponds to a trivial link. Fourth, if we direct $\mathcal{T}$ using the directions of the paths from the root vertex to the leaf vertices, then under this direction any internal vertex has exactly two children vertices and the corresponding link diagrams of these three vertices are identical except at one crossing and they are related by one of the two possible relations at that crossing as shown in Figure~\ref{fig:rtree}, where the edges are weighted and the directions of the edges coincide with the direction of $\mathcal{T}$.
\begin{remark}\label{tree_formula_remark} {\em If $L$ admits a resolving tree $\mathcal{T}$, then one can easily show that $P(L,z,a)$ is a summation in which each leaf vertex of $\mathcal{T}$ contributes exactly one term in the following way. Let $\mathcal{U}$ be the trivial link corresponding to a leaf vertex in $\mathcal{T}$ and let $Q$ be the unique path from the root ($L$) to the leaf vertex ($\mathcal{U}$). Then the contribution of the leaf vertex is simply $((a-a^{-1})z^{-1})^{\gamma(\mathcal{U})-1}$ multiplied by the weights of the edges in $Q$, where $\gamma(\mathcal{U})$ is the number of components in $\mathcal{U}$. It is known (and not hard to prove) that resolving trees exist for any given oriented link diagram $L$, and that they are not unique in general. If $L^\prime$ is the mirror image of $L$, a resolving tree for $L^\prime$ can be obtained from a resolving tree of $L$ by taking mirror images of all link diagrams in it and replacing $a$ by $a^{-1}$ and $z$ by $-z$ in the edge weights. From this one obtains the well known fact that $P(L^\prime,z,a)=P(L,-z,a^{-1})$.}
\end{remark}
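In symbols, the contribution rule of the remark reads as follows (our notation: $\mathrm{leaves}(\mathcal{T})$ denotes the set of leaf vertices, and we write $\omega(e)$ for the weight of an edge $e$ to avoid a clash with the writhe $w$):

```latex
\[
P(L,z,a)\;=\;\sum_{\mathcal{U}\,\in\,\mathrm{leaves}(\mathcal{T})}
\Bigl(\prod_{e\in Q_{\mathcal{U}}}\omega(e)\Bigr)
\bigl((a-a^{-1})z^{-1}\bigr)^{\gamma(\mathcal{U})-1},
\]
```

where $Q_{\mathcal{U}}$ is the root-to-leaf path of the remark.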
\begin{figure}[htb!]
\includegraphics[scale=.47]{Figure4}
\caption{A pictorial view of edge weight assignment by the skein relations (\ref{Skein1}) and (\ref{Skein2}). }
\label{fig:rtree}
\end{figure}
Let $\mathcal{N}$ be a braid on $n$ strands and let us define an algorithm (let
us call it Algorithm D) that operates on $\mathcal{N}$ as follows. If $\mathcal{N}$ is
descending, then the algorithm does not do anything and simply returns
$\mathcal{N}$. If $\mathcal{N}$ is not descending, then it contains at least one ascending
crossing. In this case, let us travel through $\mathcal{N}$ naturally, until we
run into the first ascending crossing $c$, which can be a positive or a
negative crossing as shown in either the left hand side or the right
hand side of Figure~\ref{fig:rtree}. The algorithm then returns the two
corresponding braids split from the original one as shown in
Figure~\ref{fig:cpd}, which we name $\mathcal{N}_f$ and $\mathcal{N}_s$ ($f$ for
``flipping" the crossing and $s$ for ``smoothing" the crossing). Notice
that $\mathcal{N}_f$ has one less ascending crossing than $\mathcal{N}$ does since $\mathcal{N}_f$
and $\mathcal{N}$ are identical (including the return order of the strands)
except at the crossing $c$. But one cannot say the same for $\mathcal{N}_s$ since
it does not share the same strands (and the return order of the strands)
with $\mathcal{N}$. However, $\mathcal{N}$ and $\mathcal{N}_s$ share the same strands and crossings
up to crossing $c$ (which are all descending) as we travel through them
naturally, and $\mathcal{N}_s$ has one less crossing than $\mathcal{N}$. So if Algorithm D
is repeatedly applied to a braid and to the braids resulting from this
operation, then this process will end after a finite number of repeats
(in fact this number is bounded above by the number of crossings in
$\mathcal{D}$). It follows that we can construct a special resolving tree $\mathcal{T}$
for ${\mathcal{D}}$ (as a link diagram) as follows. We apply Algorithm D to $\mathcal{D}$
first if it is not descending, and then apply the algorithm again to the
two resulting braids (if they are not descending), and so on, until all
the leaf vertices are descending braids. The closures of the braids
involved in the process are the vertices of $\mathcal{T}$. In particular, the
closures of the resulting descending braids form the leaf vertices of
$\mathcal{T}$ since they are all trivial links. By assigning appropriate weights
to the edges of $\mathcal{T}$, one can easily verify that $\mathcal{T}$ is indeed a
resolving tree of ${\mathcal{D}}$. By the way it is constructed, $\mathcal{T}$ is
unique. In a similar fashion, we can also construct another (unique)
resolving tree of ${\mathcal{D}}$ by replacing ``descending" with ``ascending" in
the above (the algorithm corresponding to Algorithm D would be called
Algorithm A). To distinguish the two, we will call the first the
``descending tree" and the later the ``ascending tree" of ${\mathcal{D}}$, and
denote them by $\mathcal{T}^{\downarrow}({\mathcal{D}})$ and $\mathcal{T}^{\uparrow}({\mathcal{D}})$
respectively. An example of a descending tree is shown on the right-hand
side of Figure~\ref{fig:cpd}.
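To make the construction concrete, here is a short, self-contained Python sketch (ours, not from the paper; the function names are ad hoc) that computes $P(\mathcal{D},z,a)$ for the closure of a braid word via the skein recursion underlying Algorithm D. Laurent polynomials in $a,z$ are stored as dictionaries mapping exponent pairs to integer coefficients. Two stated assumptions: we adopt the convention that at a positive crossing (all strands oriented downward) the strand entering from position $i+1$ passes over, so the writhe is the sum of the exponents in the braid word; and for simplicity we resolve an arbitrary ascending crossing rather than the first one met in the natural traversal, as Algorithm D prescribes. The skein relations (\ref{Skein1}) and (\ref{Skein2}) yield the same polynomial either way, though the intermediate tree may differ from $\mathcal{T}^{\downarrow}(\mathcal{D})$.

```python
def padd(P, Q):
    """Add two Laurent polynomials {(deg_a, deg_z): coeff}."""
    R = dict(P)
    for k, v in Q.items():
        R[k] = R.get(k, 0) + v
        if R[k] == 0:
            del R[k]
    return R

def pmono(P, da, dz, c):
    """Multiply P by the monomial c * a**da * z**dz."""
    return {(i + da, j + dz): c * v for (i, j), v in P.items()}

def homfly_closed_braid(word, n):
    """HOMFLY polynomial of the closure of `word`, a list of pairs
    (i, eps) with eps=+1 for sigma_i and eps=-1 for sigma_i^{-1}."""
    pos = list(range(1, n + 1))        # strand label at each position
    crossings = []                     # (over-label, under-label) pairs
    for i, eps in word:
        x, y = pos[i - 1], pos[i]
        crossings.append((y, x) if eps == 1 else (x, y))
        pos[i - 1], pos[i] = y, x      # the two strands swap positions
    p = {pos[j]: j + 1 for j in range(n)}   # label -> ending x-coordinate
    order, seen, gamma = [], set(), 0       # return order, # components
    for start in sorted(p):
        if start in seen:
            continue
        gamma += 1
        s = start
        while s not in seen:
            seen.add(s)
            order.append(s)
            s = p[s]
    rank = {s: k for k, s in enumerate(order)}
    for c, (over, under) in enumerate(crossings):
        if rank[over] > rank[under]:   # an ascending crossing: resolve it
            i, eps = word[c]
            Pf = homfly_closed_braid(word[:c] + [(i, -eps)] + word[c + 1:], n)
            Ps = homfly_closed_braid(word[:c] + word[c + 1:], n)
            if eps == 1:   # P_+ = a^{-2} P_-  +  a^{-1} z P_0
                return padd(pmono(Pf, -2, 0, 1), pmono(Ps, -1, 1, 1))
            else:          # P_- = a^{2} P_+  -  a z P_0
                return padd(pmono(Pf, 2, 0, 1), pmono(Ps, 1, 1, -1))
    # All crossings descending: trivial link, P = ((a - a^{-1})/z)^(gamma-1).
    P = {(0, 0): 1}
    for _ in range(gamma - 1):
        P = padd(pmono(P, 1, -1, 1), pmono(P, -1, -1, -1))
    return P
```

For the closure of $\sigma_1^3$ in $B_2$ (a trefoil) this returns the dictionary encoding $2a^{-2}-a^{-4}+a^{-2}z^2$, whose $a$-span is $2$, consistent with the MFW bound $a$-span$/2+1\le 2$.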
\begin{figure}[htb!]
\includegraphics[scale=.35]{Figure5}
\caption{Left: Admissible circuit partitions of the braid on the top (including itself), to be covered in Section \ref{acp}; Right: The descending tree of (the closure of) the braid on the top left (only the braids are shown). }
\label{fig:cpd}
\end{figure}
\begin{remark}\label{R_cor}
{\em For a given braid $\mathcal{U}$ obtained from $\mathcal{D}$ by flipping and smoothing
some of its crossings, there is a simple way to check whether $\mathcal{U}\in
\mathcal{F}^{\downarrow}(\mathcal{D})$ (the set of leaf vertices of $\mathcal{T}^{\downarrow}(\mathcal{D})$, defined in Section~\ref{s3}) by the way $\mathcal{T}^{\downarrow}(\mathcal{D})$ is
constructed. We travel through $\mathcal{U}$ naturally and visit each crossing
of $\mathcal{D}$ exactly twice. For each crossing
of $\mathcal{D}$ that we encounter in this process for the first time (including the smoothed
ones, these are marked by small circles in Figure \ref{fig:cpd}) we
perform the following test. If this crossing is smoothed in $\mathcal{U}$,
we check whether we would be approaching it from its underpass if the crossing (in its original form in $\mathcal{D}$) were not smoothed.
On the other hand, if the crossing is also a crossing in $\mathcal{U}$ (which may or may not have been flipped from its original form in $\mathcal{D}$), we check whether we are approaching it from its overpass. If all crossings pass this
check, then $\mathcal{U}\in \mathcal{F}^{\downarrow}(\mathcal{D})$, otherwise $\mathcal{U}\not\in
\mathcal{F}^{\downarrow}(\mathcal{D})$.}
\end{remark}
\section{The HOMFLY polynomial of a closed braid}\label{s3}
In this section, we derive the main result of this paper, namely the two formulas for the HOMFLY polynomial of a braid ${\mathcal{D}}$ based on $\mathcal{T}^{\downarrow}({\mathcal{D}})$ and $\mathcal{T}^{\uparrow}({\mathcal{D}})$ respectively, given in Theorem \ref{p3}. Let $\mathcal{N}$ be a vertex in $\mathcal{T}^{\downarrow}({\mathcal{D}})$ (or $\mathcal{T}^{\uparrow}({\mathcal{D}})$), let $w(\mathcal{N})$ be the writhe of $\mathcal{N}$, and let $\gamma(\mathcal{N})$ be the number of components in ${\mathcal{N}}$. Note that $\mathcal{N}$ is obtained from $\mathcal{D}$ by applying Algorithm D (or Algorithm A) repeatedly, and in this process some crossings of $\mathcal{D}$ may have been smoothed. Let $t(\mathcal{N})$ be the number of smoothed crossings and $t'(\mathcal{N})$ be the number of smoothed crossings that are negative. Note that $t(\mathcal{N})$ is simply the difference between the number of crossings in $\mathcal{D}$ and the number of crossings in $\mathcal{N}$.
\begin{lemma}\label{l3}
If $\mathcal{U}$ is a descending braid on $n$ strands, then
$\gamma(\mathcal{U})-w(\mathcal{U})=n$. On the other hand, if $\mathcal{V}$ is an ascending braid on $n$ strands, then $\gamma(\mathcal{V})+w(\mathcal{V})=n$.
\end{lemma}
\begin{proof}
Since $\mathcal{U}$ is descending, by Remark~\ref{descending_remark}, ${\mathcal{U}}$ can
be realized by space curves whose components are separated by
planes that are parallel to the $xy$-plane, hence the writhe
contribution of crossings whose strands belong to different components
is zero. Let $\tau=\gamma(\mathcal{U})$, $\mathcal{C}_1$, $\mathcal{C}_2$, \ldots, $\mathcal{C}_{\tau}$ be the
cycles of $p(\mathcal{U})$ and $l(\mathcal{C}_j)$ be the length of the cycle $\mathcal{C}_j$. We
have $\sum_{1\le j\le \tau}l(\mathcal{C}_j)=n$ and $\sum_{1\le j\le
\tau}w(\mathcal{C}_j)=w(\mathcal{U})$. We claim that for each component $\mathcal{C}_j$ of ${\mathcal{U}}$,
$l(\mathcal{C}_j)=1-w(\mathcal{C}_j)$. For simplicity let $m=l(\mathcal{C}_j)$ and let the labels
of the strands in $\mathcal{C}_j$ be $s_1$, \ldots, $s_m$, ordered by their return
order in $\mathcal{U}$. Without loss of generality, assume that strand $s_{1}$ is
in the plane $z=m$, strand $s_2$ is in the plane $z=m-1$, ... and strand
$s_m$ is in the plane $z=1$. Furthermore, the curve connecting the
ending point of $s_i$ to the starting point of strand $s_{i+1}$ is
bounded between the two planes $z=m-i+1$ and $z=m-i$ for $1\le i\le
m-1$, while the closing curve from $s_m$ back to $s_1$ runs between the planes $z=1$ and $z=m$. We first observe that each strand and part of the curves connecting
to its ends can be deformed to straight line segments (within the plane
that it is in), in a form as shown in the left hand side of Figure
\ref{fig:layers}, where each strand resides in a plane parallel to $z=0$ (and the equation of the plane is marked in the figure).
The part of the connecting curve attached to the strand residing in the same plane is marked by solid lines and the dotted curves have their end points on
different planes parallel to the $xy$-plane whose $z$ coordinates differ
by exactly one, with the exception of the dotted curve on the far left
(which is between the planes $z=1$ and $z=m$).
\begin{figure}[htb!]
\includegraphics[scale=.6]{Figure6}
\caption{ Left: A connected component corresponding to the cycle $\mathcal{C}_j=(14352)$ with its strands and the connecting curves straightened. Right: A special re-arrangement of the strands of $\mathcal{C}_j$ through a finite sequence of Reidemeister moves of type II and III.}
\label{fig:layers}
\end{figure}
By Remark \ref{R_remark},
this deformation does not change the writhe. For the same reason, these straight
line segments can freely slide within the plane they reside in since
there are no other curves in that plane. See Figure \ref{slide} for one
such move. In particular, nothing prevents us from sliding them
into the position where the strands $s_1,\ldots,s_{m-1}$ are parallel
lines arranged in this order from left to right, as shown in the right
hand side of Figure~\ref{fig:layers}. Since all moves are made within
the planes where these curves reside in, the writhe does not change by
Remark \ref{R_remark}. We see that $w(\mathcal{C}_j)=-(m-1)$ from the right hand side of Figure~\ref{fig:layers} since there are exactly $m-1$ crossings in the projection and all of them are negative. Thus $w(\mathcal{U})=\sum_{1\le j\le \tau}w(\mathcal{C}_j)=-\sum_{1\le j\le \tau}(l(\mathcal{C}_j)-1)=-n+\tau$, {\em i.e.}, $\gamma(\mathcal{U})-w(\mathcal{U})=n$. An ascending braid diagram $\mathcal{V}$ is the mirror image of a descending braid
diagram $\mathcal{U}$. It is known that $w(\mathcal{U})=-w(\mathcal{V})$ and it follows that
$\gamma(\mathcal{V})+w(\mathcal{V})=n$.
\end{proof}
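Lemma~\ref{l3} admits a quick numerical sanity check. The sketch below (ours, not from the paper) takes only the generator indices of a braid word; since the return order depends only on which positions swap, not on the signs, one can assign to each crossing the unique sign that makes it descending (under the assumed convention that at a positive crossing the strand from position $i+1$ passes over, so the writhe equals the sum of the assigned signs) and then verify $\gamma(\mathcal{U})-w(\mathcal{U})=n$:

```python
def lemma_check(shape, n):
    """Assign each crossing of the generator sequence `shape` the sign
    that makes it descending, then verify gamma - writhe = n."""
    pos = list(range(1, n + 1))
    pairs = []                         # labels meeting at each crossing
    for i in shape:
        pairs.append((pos[i - 1], pos[i]))
        pos[i - 1], pos[i] = pos[i], pos[i - 1]
    p = {pos[j]: j + 1 for j in range(n)}   # label -> ending x-coordinate
    order, seen, gamma = [], set(), 0
    for start in sorted(p):
        if start in seen:
            continue
        gamma += 1
        s = start
        while s not in seen:
            seen.add(s)
            order.append(s)
            s = p[s]
    rank = {s: k for k, s in enumerate(order)}
    # Descending means the over-strand comes earlier in the return order,
    # so the crossing is positive iff the strand at position i+1 does.
    writhe = sum(1 if rank[y] < rank[x] else -1 for x, y in pairs)
    return gamma - writhe == n
```

For the generator sequence of the braid in Figure~\ref{fig:braid} this gives $\gamma=3$, $w=-2$ and $n=5$, as the lemma predicts.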
\begin{figure}[htb!]
\includegraphics[scale=.6]{Figure7}
\caption{Left: The strand with label $1$ is the only strand in the top plane and can be slid freely within the plane without causing self intersection of the link; Right: The same strand after a sliding move.}
\label{slide}
\end{figure}
\medskip
\begin{theorem}\label{p3}
Let $\mathcal{D}$ be a braid on $n$ strands, and let $\mathcal{F}^{\downarrow}({\mathcal{D}})$ and $\mathcal{F}^{\uparrow}({\mathcal{D}})$ be the sets of leaf vertices of $\mathcal{T}^{\downarrow}({\mathcal{D}})$ and $\mathcal{T}^{\uparrow}({\mathcal{D}})$ respectively. Then
\begin{equation}\label{e1}
P({\mathcal{D}},z,a) = a^{1-n-w({\mathcal{D}})} \sum_{\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})} (-1)^{t'(\mathcal{U})}z^{t(\mathcal{U})}((a^2-1)z^{-1})^{\gamma(\mathcal{U})-1},
\end{equation}
\begin{equation}\label{e2}
P({\mathcal{D}},z,a) = a^{n-1-w({\mathcal{D}})} \sum_{\mathcal{V}\in \mathcal{F}^{\uparrow}({\mathcal{D}})} (-1)^{t'(\mathcal{V})}z^{t(\mathcal{V})}((1-a^{-2})z^{-1})^{\gamma(\mathcal{V})-1}
\end{equation}
where $t(\mathcal{U})$ is the number of crossings in $\mathcal{D}$ that are smoothed in obtaining $\mathcal{U}$ and $t^\prime(\mathcal{U})$ is the number of negative crossings among these smoothed crossings.
\end{theorem}
\begin{proof} Let us consider the descending tree first. By Remark~\ref{tree_formula_remark}, the contribution of $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})$ to
$P({\mathcal{D}},z,a)$ is $((a-a^{-1})z^{-1})^{\gamma(\mathcal{U})-1}$ multiplied by the
weights of the edges on the unique path of $\mathcal{T}^{\downarrow}({\mathcal{D}})$
from ${\mathcal{D}}$ to ${\mathcal{U}}$. As shown in Figure~\ref{fig:rtree}, the degree
of $a$ in the weight of an edge is exactly the change of writhe from
the starting vertex of the edge (remember that it is directed from the
root to the leaf) to the ending vertex of the edge, whereas a $z$ term
in the weight of the edge indicates that the ending vertex is obtained
from the starting vertex by a crossing smoothing and a negative sign in the weight indicates that the smoothed crossing is a negative crossing. It follows that the total contribution of
$\mathcal{U}$ is
\begin{eqnarray*}
&&(-1)^{t'(\mathcal{U})}z^{t(\mathcal{U})}a^{w(\mathcal{U})-w({\mathcal{D}})}((a-a^{-1})z^{-1})^{\gamma(\mathcal{U})-1}\\
&=& (-1)^{t'(\mathcal{U})}z^{t(\mathcal{U})}a^{w(\mathcal{U})-w({\mathcal{D}})-\gamma(\mathcal{U})+1}((a^2-1)z^{-1})^{\gamma(\mathcal{U})-1}\\
&=& a^{1-n-w({\mathcal{D}})}\left[(-1)^{t'(\mathcal{U})}z^{t(\mathcal{U})}((a^2-1)z^{-1})^{\gamma(\mathcal{U})-1}\right]
\end{eqnarray*}
by Lemma~\ref{l3}. This proves (\ref{e1}). Equation (\ref{e2}) can be proved in a similar fashion and is left to the reader.
\end{proof}
\begin{remark}{\em
Let $L$ be a link and ${\mathcal{D}}$ a braid representation of $L$ on $n$ strands. Let $E$ and $e$ be the maximum and minimum degrees of $a$ in $P(L,z,a)$. The $a$-span of $P(L,z,a)$ is defined as $E-e$. Since $\gamma(\mathcal{U})\le n$ for any $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})$ and $\gamma(\mathcal{V})\le n$ for any $\mathcal{V}\in \mathcal{F}^{\uparrow}({\mathcal{D}})$,
formulas (\ref{e1}) and (\ref{e2}) imply that $E\le 1-n-w({\mathcal{D}})+2(n-1)=n-w({\mathcal{D}})-1$ and $e\ge n-1-w({\mathcal{D}})-2(n-1)=-n-w({\mathcal{D}})+1$. It follows that $a$-span$/2+1\le n$ for every braid representation of $L$; minimizing over all braid representations shows that the braid index of $L$ is at least $a$-span$/2+1$. That is, the Morton-Frank-Williams inequality is a direct consequence of Theorem \ref{p3}.}
\end{remark}
\section{The braid index of reduced alternating braids}\label{s4}
A link is {\em splittable} if there exist components of the link that lie on different sides of a topological plane in $\mathbb{R}^3$, and a link is {\em non-splittable} if it is not splittable~\cite{A}. A braid diagram is {\em reduced} if its closure is a reduced link diagram. A braid diagram is {\em alternating} if its closure is an alternating link diagram. An oriented link is a {\em reduced alternating braid} on $n$ strands if it can be represented by a reduced alternating braid diagram on $n$ strands. A reduced alternating braid is an alternating link, but the converse is not true in general. Figure \ref{fig:bd} is an example of a non-splittable, reduced alternating braid diagram. In this section, we prove the following theorem with a simple and direct proof based on our results from the last section.
\begin{theorem}\cite{Mu}\label{Theorem4.1} Let $\mathcal{D}$ be the closure of a reduced alternating braid on $n$ strands. Then the braid index of $\mathcal{D}$ is $n$.
\end{theorem}
\begin{remark}{\em
The above theorem is a special case of a more general theorem on a class of oriented alternating fibered links due to Murasugi \cite{Mu}, where the proof relies on a sequence of lemmas shown by induction. The alternating links in this class are the $^\ast$-products of $(2,n)$ torus links (which include the alternating closed braids) \cite{Mu} and are also known to be fibered \cite{Sto2}. The fact that the reduced alternating closed braids are fibered can be established directly by using another result due to Murasugi, which states that an alternating link is fibered if the leading coefficient of its Alexander polynomial is $\pm 1$ \cite{Mu2}, and by proving that the leading coefficient of the Alexander polynomial of a reduced alternating closed braid is indeed $\pm 1$. This latter fact seems to be known, but we failed to find a direct proof of it in the literature, so we will provide a short one at the end of this section, which also serves as another application of Theorem \ref{p3} and our method. We also note that the fact that the reduced alternating closed braids are fibered is a direct consequence of a result due to Stallings, who proved that all homogeneous closed braids (which include the alternating ones) are fibered \cite{Stallings}.}
\end{remark}
Before we proceed to the proof of the theorem, we note that it suffices to prove this result for reduced alternating braids that are non-splittable. Indeed, if $\mathcal{D}$ is splittable, say $\mathcal{D}=\mathcal{D}_1\cup \mathcal{D}_2\cup \cdots\cup \mathcal{D}_k$ with the $\mathcal{D}_j$'s being the non-splittable components of $\mathcal{D}$, we can simply apply our result to each $\mathcal{D}_j$. Since the braid index of a link equals the sum of the braid indices of its non-splittable components, and the $a$-span$/2+1$ of $P(\mathcal{D},z,a)$ is the sum of the ($a$-span$/2+1$)'s of the $P(\mathcal{D}_j,z,a)$'s, as one can easily check, the general result then follows.
\begin{figure}[htb!]
\includegraphics[scale=.35]{Figure8}
\caption{A non-splittable reduced positive-leading alternating braid}
\label{fig:bd}
\end{figure}
Let $\mathcal{D}$ be a braid diagram on $n>1$ strands. Consider $\mathcal{D}$ as a word of generators of the braid group $B_n$. Note that each crossing in the braid diagram corresponds to a letter $\sigma_i$ or $\sigma_i^{-1}$ in the word $\mathcal{D}$. In a standard drawing of $\mathcal{D}$ as a braid, a crossing corresponding to $\sigma_i$ or $\sigma_i^{-1}$ is drawn in the space between the vertical lines $x=i$ and $x=i+1$ in the $xy$-plane. We call this space a {\em gap}. A gap is {\em odd} if $i$ is odd, and {\em even} if $i$ is even. Thus, a crossing corresponding to $\sigma_i$ or $\sigma_i^{-1}$ is in an {odd gap} ({even gap}) if $i$ is odd (even). If $\mathcal{D}$ is non-splittable, then each gap in the braid diagram contains at least one crossing. If $\mathcal{D}$ is also reduced, then each gap in the diagram contains at least two crossings. Suppose $\mathcal{D}$ is alternating. Then the overpass (underpass) at a crossing $c$ must be the underpass (overpass) at the next crossing $c^\prime$. So if $c^\prime$ is in the same gap, $c^\prime$ must have the same sign as $c$, and if $c^\prime$ is in an adjacent gap, $c^\prime$ must have the opposite sign. Therefore, if $\mathcal{D}$ is alternating, all odd gaps contain crossings of the same sign and all even gaps contain crossings of the opposite sign. If the first gap contains positive crossings, we say the alternating braid diagram $\mathcal{D}$ is {\em positive-leading}, and if the first gap contains negative crossings, we say the alternating braid diagram $\mathcal{D}$ is {\em negative-leading}.
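The sign pattern just derived (all odd-gap crossings of one sign, all even-gap crossings of the opposite sign) is easy to test mechanically. The following Python sketch is our own illustration; the encoding of a braid word as a list of signed generator indices, e.g.\ $[1,-2,1,-2]$ for $\sigma_1\sigma_2^{-1}\sigma_1\sigma_2^{-1}$, is an assumed convention, and the function checks only this necessary sign condition, not the full alternation of the closure diagram.

```python
def leading_type(word):
    """word: braid word as signed generator indices.  Returns
    'positive-leading' or 'negative-leading' if all odd-gap crossings
    share one sign and all even-gap crossings the opposite sign,
    and None otherwise."""
    odd_gap_sign = None
    for g in word:
        gap, sign = abs(g), (1 if g > 0 else -1)
        # the sign this crossing forces on the odd gaps
        forced = sign if gap % 2 == 1 else -sign
        if odd_gap_sign is None:
            odd_gap_sign = forced
        elif odd_gap_sign != forced:
            return None   # mixed signs: the word cannot be alternating
    if odd_gap_sign is None:
        return None       # empty word
    return 'positive-leading' if odd_gap_sign == 1 else 'negative-leading'
```

For instance, $\sigma_1\sigma_2^{-1}\sigma_1\sigma_2^{-1}$ is recognized as positive-leading, while $\sigma_1\sigma_2$ fails the parity condition.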
\begin{lemma}\label{l4}
Let $\mathcal{D}$ be a non-splittable reduced alternating braid diagram on $n$ strands, $E$ be the highest degree of $a$ in $P(\mathcal{D},z,a)$ and $e$ the lowest degree of $a$
in $P(\mathcal{D},z,a)$. Then $E=n-1-w(\mathcal{D})$ and $e=1-n-w(\mathcal{D})$. It follows that $a$-span$=2(n-1)$.
\end{lemma}
A simple application of the Morton-Frank-Williams inequality and Lemma~\ref{l4}, together with the note in the second paragraph of this section, immediately shows that if $\mathcal{D}$ is a reduced alternating braid on $n$ strands, then the braid index of $\mathcal{D}$ is exactly $n$. We now proceed to prove Lemma \ref{l4}.
\begin{proof} We will consider the positive-leading alternating braid diagrams first. Bear in mind that for a positive-leading alternating braid diagram, all the odd gaps contain only positive crossings and all the even gaps contain only negative crossings.
Consider $P(\mathcal{D},z,a)$ as a Laurent
polynomial of $a$ with coefficients in the ring of Laurent polynomials
in the variable $z$. By (\ref{e1}), the highest possible degree of $a$ is $n-1-w(\mathcal{D})$ and the only leaf vertices $\mathcal{U}\in \mathcal{F}^{\downarrow}(\mathcal{D})$ that can make contributions to the
term of $P(\mathcal{D},z,a)$ with this $a$ degree must satisfy the condition $\gamma(\mathcal{U})=n$. Let us consider
the braid $\mathcal{U}^*$ obtained from $\mathcal{D}$ by first smoothing all crossings in
the odd gaps, then smoothing all crossings in the even gaps except the first one and the last one, and finally
flipping the sign of the last crossing in each even gap.
Claim 1. $\mathcal{U}^*\in \mathcal{F}^{\downarrow}(\mathcal{D})$. This is obvious by the checking method in Remark \ref{R_cor}.
Claim 2. Part of the $\mathcal{U}^*$ contribution to $P(\mathcal{D}, z, a)$ is a term of
the form $\pm z^{t(\mathcal{U}^*)-n+1} a^{n-1-w(\mathcal{D})}$.
Proof of Claim 2. By (\ref{e1}), the contribution of $\mathcal{U}^*$ to $P(\mathcal{D}, z, a)$ is
$$
a^{1-n-w({\mathcal{D}})}(-1)^{t'(\mathcal{U}^*)}z^{t(\mathcal{U}^*)}((a^2-1)z^{-1})^{\gamma(\mathcal{U}^*)-1}.
$$
We have $\gamma(\mathcal{U}^*)=n$ and the result follows.
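The count $\gamma(\mathcal{U}^*)=n$ can also be checked mechanically: an oriented smoothing does not permute the two strands at that site, while a crossing (kept or flipped) transposes the strands in positions $i$ and $i+1$, so $\gamma$ is the number of cycles of the strand permutation of the closure. The following Python sketch is our own bookkeeping, with an assumed encoding that lists only the gap indices of the surviving crossings, in word order.

```python
def closure_components(n, kept_gaps):
    """Number of link components (gamma) of the closure of an n-strand
    braid in which only the crossings in kept_gaps survive; every other
    crossing is orientedly smoothed, leaving strand positions unchanged."""
    perm = list(range(n))                 # perm[p] = strand currently at position p
    for i in kept_gaps:                   # a surviving crossing in gap i swaps
        perm[i - 1], perm[i] = perm[i], perm[i - 1]   # positions i and i+1
    seen, cycles = [False] * n, 0         # gamma = number of permutation cycles
    for s in range(n):
        if not seen[s]:
            cycles += 1
            while not seen[s]:
                seen[s], s = True, perm[s]
    return cycles
```

For the $3$-strand word $\sigma_1\sigma_2^{-1}\sigma_1\sigma_2^{-1}$ (a toy example of ours), $\mathcal{U}^*$ keeps only the two crossings in gap $2$, and indeed $\gamma(\mathcal{U}^*)=3=n$, while the full word closes up to a single component.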
Claim 3. For any $\mathcal{U}\in \mathcal{F}^{\downarrow}(\mathcal{D})$, $\mathcal{U}\not= \mathcal{U}^*$, the contribution of $\mathcal{U}$ to
$P(\mathcal{D}, z, a)$ either has its maximum degree in the variable $a$ less than $n-1-w(\mathcal{D})$, or has its degree in $z$ less than $t(\mathcal{U}^*)-n+1$.
Proof of Claim 3. If $\gamma(\mathcal{U})<n$ then there is nothing to prove. Assume that $\gamma(\mathcal{U})=n$; then the contribution of $\mathcal{U}$ to $P(\mathcal{D},z,a)$ is $a^{1-n-w({\mathcal{D}})}(-1)^{t'(\mathcal{U})}z^{t(\mathcal{U})}((a^2-1)z^{-1})^{n-1}$, so the degree of $z$ is $t(\mathcal{U})-n+1$. We need to show that $t(\mathcal{U})-n+1<t(\mathcal{U}^*)-n+1$, that is, $t(\mathcal{U})<t(\mathcal{U}^*)$. Since $\gamma(\mathcal{U})=n$, the return order of $\mathcal{U}$ is $1\triangleleft 2\triangleleft\cdots\triangleleft n$. It follows that each gap contains either no crossings or at least two crossings of $\mathcal{U}$, since each strand, say with label $i$ and starting point $(i,1)$, has to go through a gap an even number of times in order to end at the point $(i,0)$. We claim that $\mathcal{U}$ contains the first crossing of $\mathcal{D}$ in each even gap in its original sign. If this is not the case, let $c$ be the first such crossing that has been changed (so it is either smoothed or flipped in $\mathcal{U}$). Assume that $c$ is in the $i$-th gap (with $i$ even). Notice that any strand with label $j$ less than $i$ can only cross the $i$-th gap into gaps to the right of the $i$-th gap and must return to the point $(j,0)$ at the end, because the return order of $\mathcal{U}$ is $1\triangleleft 2\triangleleft\cdots\triangleleft n$ and the strands can only travel downward. It follows that the strand entering $c$ from its right side must have label greater than $i$. Now consider the strand of $\mathcal{U}$ that first enters $c$ as we travel through $\mathcal{U}$ naturally. This strand has to come from the left side of $c$ by the above observation. But then it fails the test given in Remark \ref{R_cor}, since $c$ is descending in $\mathcal{D}$ and cannot be changed in $\mathcal{U}$. This gives us the needed contradiction.
It now follows that $\mathcal{U}$ has at least two crossings in each even gap. If $\mathcal{U}$ also contains some crossings in the odd gaps, or contains more than two crossings in some even gaps, then we already have $t(\mathcal{U})<t(\mathcal{U}^*)$ and there is nothing left to prove. So the only case left is the case when $\mathcal{U}$ contains no crossings in the odd gaps and exactly two crossings in each even gap.
Claim 4. If $\mathcal{U}\in \mathcal{F}^{\downarrow}(\mathcal{D})$ and it contains no crossings in the odd gaps and exactly two crossings in each even gap, then $\mathcal{U}=\mathcal{U}^*$, that is, $\mathcal{U}^*$ is the only element in $\mathcal{F}^{\downarrow}(\mathcal{D})$ with this property.
Proof of Claim 4. By the proof of Claim 3, if $\mathcal{U}\not=\mathcal{U}^*$, then there exists an even gap such that $\mathcal{U}$ contains the first crossing of $\mathcal{D}$ in this gap in its original sign, and exactly one other crossing $c$ of $\mathcal{D}$ in this gap which is not the last crossing in this gap.
By Remark \ref{R_cor} again, the sign of $c$ in $\mathcal{D}$ has to be changed to make it descending in $\mathcal{U}$. But as we travel through $\mathcal{U}$ naturally past $c$, we encounter the first crossing of $\mathcal{D}$ in the same gap below $c$ (this crossing exists because $c$ is not the last crossing of $\mathcal{D}$ in this gap, and the other crossings of $\mathcal{D}$ in the adjacent odd gap have all been smoothed in $\mathcal{U}$). This crossing has been smoothed in $\mathcal{U}$, but we are now approaching it from its overpass, so it fails the check in Remark \ref{R_cor} and $\mathcal{U}\not\in \mathcal{F}^{\downarrow}(\mathcal{D})$, contradicting $\mathcal{U}\in \mathcal{F}^{\downarrow}(\mathcal{D})$.
The consequence of Claims 1 to 4 is that if we write $P(\mathcal{D},z,a)$ as a Laurent polynomial of $a$ whose coefficients are Laurent polynomials of $z$, then it contains a nontrivial term of the form $g(z)a^{n-1-w(\mathcal{D})}$ and all other terms have degrees less than $n-1-w(\mathcal{D})$, that is, $E=n-1-w(\mathcal{D})$. To obtain $e$, we will use $\mathcal{V}^*\in \mathcal{T}^{\uparrow}(\mathcal{D})$ and (\ref{e2}), where $\mathcal{V}^*$ is obtained from $\mathcal{D}$ by keeping the first crossing and flipping the last crossing in each odd gap, and smoothing all other crossings. The details are left to the reader.
Finally, if $\mathcal{D}$ is a non-splittable reduced negative-leading alternating braid diagram, then its mirror image $\mathcal{D}^\prime$ is a non-splittable reduced positive-leading alternating braid diagram and we have $w(\mathcal{D})=-w(\mathcal{D}^\prime)$. Let $E^\prime$ and $e^\prime$ be the highest and lowest degrees of $a$ in $P(\mathcal{D}^\prime,z,a)$. Then by the first part of the lemma, we have $E^\prime=n-1-w(\mathcal{D}^\prime)$ and $e^\prime=1-n-w(\mathcal{D}^\prime)$. Therefore by Remark~\ref{tree_formula_remark} we have $E=-e^\prime=-(1-n-w(\mathcal{D}^\prime))=n-1-w(\mathcal{D})$ and $e=-E^\prime=-(n-1-w(\mathcal{D}^\prime))=1-n-w(\mathcal{D})$.
\end{proof}
\begin{remark}\label{remark4.4}
{\em Another way to handle the case when $\mathcal{D}$ is a non-splittable
reduced negative-leading alternating braid diagram is to take its
connected sum with a positive Hopf link, thus creating a new first gap
with a single pair of positive crossings. The statement then follows from the
positive-leading case and from the fact that the braid index is additive
under taking the connected sum.}
\end{remark}
Finally, as another application of Theorem \ref{p3}, we provide a short proof of the following theorem. This result seems to be known \cite{Stallings}, although we failed to find a specific proof in the literature.
\begin{theorem}\label{T4.4}
Let $\mathcal{D}$ be the closure of a reduced alternating braid on $n$ strands. Then the leading coefficient of the Alexander polynomial of $\mathcal{D}$ is $\pm 1$.
\end{theorem}
\begin{proof}
Let $\Delta_\mathcal{D}(x)$ be the Alexander polynomial of $\mathcal{D}$; then $\Delta_\mathcal{D}(x)=P(\mathcal{D},x^{1/2}-x^{-1/2},1)$ \cite{Doll}. Assume that $\mathcal{D}$ is positive-leading. Substituting $a=1$, $z=x^{1/2}-x^{-1/2}$ in (\ref{e1}) leads to
$$
\Delta_\mathcal{D}(x)= \sum_{\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}}), \gamma(\mathcal{U})=1} (-1)^{t'(\mathcal{U})}(x^{1/2}-x^{-1/2})^{t(\mathcal{U})}.
$$
Construct $\mathcal{U}^\prime\in \mathcal{F}^{\downarrow}({\mathcal{D}})$ by the following procedure: (i) in the first gap of $\mathcal{D}$, smooth all crossings (which are positive) except the last one, which we flip and cross into the second gap. Once there, we have no choice but to keep the first (negative) crossing we encounter, which leads us into the third gap. (ii) Now smooth all crossings we encounter (negative and positive ones in the second and third gaps) until we encounter the last positive crossing in the third gap. Notice that in doing so we may have reached the bottom of the braid at $(3,0)$ and returned to the top of the braid at $(3,1)$, so this last positive crossing we encounter may not be the last positive crossing in the third gap. Flip this positive crossing, cross into the next gap, and repeat (ii). This process is repeated until we reach the last gap, where we can and will smooth all crossings except one, regardless of whether the gap is even or odd. At this point, we have exactly one positive crossing in each odd gap but may have negative crossings in the even gaps that we have not visited. However, in the last gap only one crossing is left, and we will be traveling back through this crossing. As one can easily verify, each time we travel back to an even gap it is through the only crossing left in the odd gap to its right, so any negative crossings that we have not visited before lie between the two positive crossings in the odd gaps to the left and right of the said even gap. By the descending rule, we can and will smooth all these crossings. Thus we have shown that
$\mathcal{U}^\prime\in \mathcal{F}^{\downarrow}({\mathcal{D}})$, $\gamma(\mathcal{U}^\prime)=1$ and $t(\mathcal{U}^\prime)=c(\mathcal{D})-n+1$, where $c(\mathcal{D})$ is the total number of crossings in $\mathcal{D}$. Furthermore, it is rather easy to see that $t(\mathcal{U})>c(\mathcal{D})-n+1$ implies $\gamma(\mathcal{U})>1$. Finally, we leave it to the reader to verify that $\mathcal{U}^\prime$ is the only element in $\mathcal{F}^{\downarrow}({\mathcal{D}})$ with the property $t(\mathcal{U})=c(\mathcal{D})-n+1$. The result of the theorem then follows. If $\mathcal{D}$ is negative-leading, we simply apply the above argument to (\ref{e2}) instead.
\end{proof}
\section{Admissible circuit partitions}\label{acp}
In this section we show that the HOMFLY polynomial formation (\ref{e1}) given in Theorem~\ref{p3}
is equivalent to the expansion derived by F.\ Jaeger for braids in \cite{Ja}. Thus our approach used to prove Theorem~\ref{p3}
provides an alternative (and in fact shorter) proof of his expansion. Unlike our approach (which is combinatorial in nature),
Jaeger stated his formula using a concept called the {\em admissible circuit partitions} of a braid diagram. We establish
this equivalence by showing that for any braid $\mathcal{D}$, there is a bijection between the leaf
vertices in the descending resolving tree $\mathcal{T}^{\downarrow}({\mathcal{D}})$ and the admissible circuit partitions of $\mathcal{D}$, such that
each leaf vertex and its corresponding admissible partition under this bijection make the same contributions to their respective
HOMFLY polynomial formulas.
We begin with a brief review of the essential concepts introduced by
F.\ Jaeger, with small modifications of the terminology to make the terms more consistent with the current paper.
Interested readers may refer to \cite{Ja} for the details and his original terminology.
Given a braid diagram $\mathcal{D}$, a circuit partition $\pi$ of $\mathcal{D}$ is a braid diagram obtained
from $\mathcal{D}$ by smoothing every crossing in a
subset $S$ of its crossings, while leaving the other crossings
unchanged. The smoothed crossings are denoted by small dotted circles in Figure 5. We can identify a circuit partition $\pi$ by the ordered pair
$(\mathcal{D},S)$ where $\mathcal{D}$ is the original braid diagram and
$S$ the set of crossings of $\mathcal{D}$ to be smoothed. Two circuit partitions $(\mathcal{D},S)$ and $(\mathcal{D}',S')$
are defined to be equal if and only if $\mathcal{D}=\mathcal{D}'$ and $S=S'$.
Let $c$ be a crossing in $\mathcal{D}$.
If $c$ is smoothed, the upper left part of the strand entering $c$ is connected to the
lower left part of the strand exiting $c$ and the resulting curve is called a {\em left tangence} at $c$.
Similarly one can define the {\em right tangence} at $c$.
While traveling through $\pi$ naturally, we will meet each crossing in $\mathcal{D}$ exactly
twice (including the crossings that have been smoothed). A smoothed crossing $c$ in $\pi$ is said to be {\em admissible}
if it has the following property: if $c$ is positive in $\mathcal{D}$,
then the first passage at $c$ is a left tangence; if $c$ is negative
in $\mathcal{D}$, then the first passage at $c$ is a right tangence.
A circuit partition $\pi$ is {\em admissible} if all the smoothed crossings in $\pi$ are admissible.
In particular, $\mathcal{D}$ itself is an admissible circuit partition, in which no crossing is smoothed.
We denote the set of admissible circuit partitions of $\mathcal{D}$ by $\mathcal{A}(\mathcal{D})$. Then in \cite{Ja} Jaeger showed that
for any braid diagram $\mathcal{D}$ on $n$ strands,
\begin{equation}
P({\mathcal{D}},z,a)=a^{1-n-w({\mathcal{D}})} \sum_{\pi\in \mathcal{A}(\mathcal{D})} (-1)^{t^\prime(\pi)}z^{t(\pi)}((a^2-1)z^{-1})^{\gamma(\pi)-1}
\label{J_equation}
\end{equation}
where $\pi=(\mathcal{D},S)$, $t(\pi)$ is the number of crossings in $S$, $t^\prime(\pi)$ is the number of negative crossings in $S$ and $\gamma(\pi)$
is the number of components in the closure of $\pi$.
\begin{proposition}
\label{thm:da}
For any braid diagram $\mathcal{D}$, there exists a bijection $h_\mathcal{D}:\ \mathcal{F}^{\downarrow}({\mathcal{D}})\to \mathcal{A}(\mathcal{D})$
such that for each $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})$, $\mathcal{U}$ and $\pi=h_\mathcal{D}(\mathcal{U})$ are both obtained
from $\mathcal{D}$ by smoothing crossings from the same set.
\end{proposition}
\begin{proof}
Define a mapping $h_\mathcal{D}$ from $\mathcal{F}^{\downarrow}({\mathcal{D}})$ to the set of all circuit partitions of $\mathcal{D}$ as follows.
For any $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})$, let $h_\mathcal{D}(\mathcal{U})$ be
the circuit partition $(\mathcal{D},S)$ where $S$ is the set of crossings in $\mathcal{D}$ that are smoothed in
the process of obtaining $\mathcal{U}$. We claim that $h_\mathcal{D}$ is a bijection between $\mathcal{F}^{\downarrow}({\mathcal{D}})$ and $\mathcal{A}(\mathcal{D})$.
We proceed by induction on $k$, the number of vertices in the
descending tree $\mathcal{T}^{\downarrow}({\mathcal{D}})$. If $k=1$, then the tree consists of only the root vertex,
that is, $\mathcal{D}$ is descending. So $h_\mathcal{D}(\mathcal{D})=(\mathcal{D},\emptyset)=\mathcal{D}\in \mathcal{A}(\mathcal{D})$.
Assume that there is an admissible circuit partition $\pi\not=\mathcal{D}$, and let
$c$ be the first crossing encountered as we travel through $\pi$ naturally. Up to the crossing $c$,
traveling through $\pi$ naturally is the same as traveling through $\mathcal{D}$ naturally
since nothing has been changed up to that point. Since $c$ is descending, the first strand
entering $c$ is the top strand at $c$, and smoothing $c$
results in a right tangence if $c$ is positive and in a left tangence if
$c$ is negative. This contradicts the definition of an admissible circuit partition. Thus the only
admissible circuit partition of $\mathcal{D}$ is itself. So $h_\mathcal{D}$ is a bijection.
Assume that the statement is true for all $\mathcal{D}$ whose descending tree has at most $k(\ge 1)$ vertices
and let $\mathcal{D}$ be such that $\mathcal{T}^{\downarrow}({\mathcal{D}})$ has at most $k+1$ vertices.
Since $k+1\ge 2$, $\mathcal{D}$ contains at least one ascending crossing. Let
$c$ be the first ascending crossing of $\mathcal{D}$ encountered when we travel through $\mathcal{D}$ naturally. All crossings preceding $c$ in the
return order being descending, they are not switched or smoothed in any
vertex (braid) of $\mathcal{T}^{\downarrow}({\mathcal{D}})$. Let $\mathcal{D}_f$ and $\mathcal{D}_s$ be the children
of the root vertex of $\mathcal{T}^{\downarrow}({\mathcal{D}})$, which are obtained by flipping
and smoothing $c$, respectively. The descendants
of $\mathcal{D}_f$ and $\mathcal{D}_s$ form the rooted trees
$\mathcal{T}^{\downarrow}(\mathcal{D}_f)$ and $\mathcal{T}^{\downarrow}(\mathcal{D}_s)$, respectively. The set of leaves
$\mathcal{F}^{\downarrow}({\mathcal{D}})$ is the union of the sets $\mathcal{F}^{\downarrow}(\mathcal{D}_f)$ and
$\mathcal{F}^{\downarrow}(\mathcal{D}_s)$. Note that this union is disjoint, since $c$ is
not smoothed in elements of $\mathcal{F}^{\downarrow}(\mathcal{D}_f)$ but is smoothed
in elements of $\mathcal{F}^{\downarrow}(\mathcal{D}_s)$. Using an argument similar to the one used in
the case $k=1$, we see that for every admissible circuit partition
$\pi=(\mathcal{D},S)\in \mathcal{A}(\mathcal{D})$, $S$ does not contain any crossing preceding $c$ when we travel along the strands of $\mathcal{D}$ naturally. Thus each $(\mathcal{D},S)\in \mathcal{A}(\mathcal{D})$ can be identified
with $(\mathcal{D}_f,S)\in \mathcal{A}(\mathcal{D}_f)$ if $c\not\in S$, and with $(\mathcal{D}_s,S\setminus \{c\})\in \mathcal{A}(\mathcal{D}_s)$ if $c\in S$.
Let $\jmath: \mathcal{A}(\mathcal{D}_f)\cup \mathcal{A}(\mathcal{D}_s)\to \mathcal{A}(\mathcal{D})$ be the inverse of this identifying map (which is a bijection, of course).
Since $\mathcal{D}_f$ and $\mathcal{D}_s$ are the children
of the root vertex of $\mathcal{T}^{\downarrow}({\mathcal{D}})$, $\mathcal{T}^{\downarrow}({\mathcal{D}_f})$ and $\mathcal{T}^{\downarrow}({\mathcal{D}_s})$
both have at least one less vertex than $\mathcal{T}^{\downarrow}({\mathcal{D}})$ does. By the induction hypothesis, $h_{\mathcal{D}_f}$ and $h_{\mathcal{D}_s}$
are bijections that map $\mathcal{F}^{\downarrow}({\mathcal{D}_f})$ to $\mathcal{A}(\mathcal{D}_f)$ and $\mathcal{F}^{\downarrow}({\mathcal{D}_s})$ to $\mathcal{A}(\mathcal{D}_s)$ respectively.
It follows that the mapping $h_{\mathcal{D}_f\cup \mathcal{D}_s}: \mathcal{F}^{\downarrow}({\mathcal{D}_f})\cup \mathcal{F}^{\downarrow}({\mathcal{D}_s})\to \mathcal{A}(\mathcal{D}_f)\cup \mathcal{A}(\mathcal{D}_s)$ defined
by $h_{\mathcal{D}_f\cup \mathcal{D}_s}(\mathcal{U})=h_{\mathcal{D}_f}(\mathcal{U})$ if $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}_f})$ and $h_{\mathcal{D}_f\cup \mathcal{D}_s}(\mathcal{U})=h_{\mathcal{D}_s}(\mathcal{U})$ if $\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}_s})$
is a bijection. Since $h_\mathcal{D}=\jmath\circ h_{\mathcal{D}_f\cup \mathcal{D}_s}$, $h_\mathcal{D}$ is a bijection between $\mathcal{F}^{\downarrow}({\mathcal{D}})$ and $\mathcal{A}(\mathcal{D})$.
This concludes our proof.
\end{proof}
\begin{example}
{\em
For the descending tree represented in Figure~\ref{fig:cpd}, the
admissible circuit partitions corresponding to the leaf vertices are
depicted on the left hand side of the picture. For each leaf the
corresponding admissible circuit partition is at the same level. Note
that for each corresponding pair the same crossings of the braid at the root
are smoothed.}
\end{example}
The bijection introduced in Proposition~\ref{thm:da} has the property that
$\mathcal{U}\in \mathcal{F}^{\downarrow}({\mathcal{D}})$ and
$h_{\mathcal{D}}(\mathcal{U})$ are identical except for the signs at some crossings. Hence we have
$
\gamma(\mathcal{U})=\gamma(h_{\mathcal{D}}(\mathcal{U}))$, $t(\mathcal{U})=t(h_{\mathcal{D}}(\mathcal{U}))$ and
$t^\prime(\mathcal{U})=t^\prime(h_{\mathcal{D}}(\mathcal{U}))$. This leads to the following result.
\begin{corollary}
F.\ Jaeger's expansion of the HOMFLY polynomial (\ref{J_equation})~\cite[Proposition 3]{Ja} is an immediate consequence of
(\ref{e1}).
\end{corollary}
As a final note to this section, we point out that if we change the definition of admissible circuit partitions by declaring a smoothed crossing $c$ in $\pi$ to be {\em admissible}
if the first passage at $c$ is a left tangence when $c$ is negative, and a right tangence when $c$ is positive, then we can use (\ref{e2}) to show that F.\ Jaeger's expansion of the HOMFLY polynomial (\ref{J_equation}) becomes
\begin{equation}
P({\mathcal{D}},z,a)=a^{n-1-w({\mathcal{D}})} \sum_{\pi\in \mathcal{A}^*(\mathcal{D})} (-1)^{t^\prime(\pi)}z^{t(\pi)}((1-a^{-2})z^{-1})^{\gamma(\pi)-1}
\label{J_equation2}
\end{equation}
where $\mathcal{A}^*(\mathcal{D})$ is the set of admissible circuit partitions under this new definition.
\section{Ending remarks}
We end this paper by noting the potential application of Theorem \ref{p3} to other classes of links that are presented in a closed braid form. It is also an interesting yet challenging question to explore whether there exist other classes of links that allow formulations similar to the ones in Theorem \ref{p3}. These will be the future research directions of the authors.
\section*{Acknowledgement}
The research of the third author was partially supported by a grant from
the Simons Foundation (\#245153 to G\'abor Hetyei). The authors thank
Alex Stoimenow for his insightful comments. They are also indebted to an
anonymous referee for the careful reading of this manuscript and
for suggesting several substantial improvements. The alternative argument given in Remark \ref{remark4.4} is also due to the referee.
% https://arxiv.org/abs/2009.13730
\title{A Fully Parallel Primal-Dual Algorithm for Centralized and Distributed Optimization}
\begin{abstract}
In this paper, a centralized two-block separable optimization is considered, for which a fully parallel primal-dual discrete-time algorithm with fixed step size is derived based on a monotone operator splitting method. In this algorithm, the primal variables are updated in an alternating fashion, as in the Alternating Direction Method of Multipliers (ADMM). However, unlike existing discrete-time algorithms such as the Method of Multipliers (MM), ADMM, the Bi-Alternating Direction Method of Multipliers (BiADMM), and Primal-Dual Fixed Point (PDFP) algorithms, which all suffer from sequential updates, all primal and dual variables are updated in parallel, in the sense that to update a variable at each time, the updated version of the other variable(s) is not required. One of the advantages of the proposed algorithm is that its direct extension to multi-block optimization is still convergent. The method is then applied to distributed optimization, for which a fully parallel primal-dual distributed algorithm is obtained. Finally, since the direct extension of ADMM may diverge for multi-block optimization, a numerical example of a three-block optimization is given for which the direct extension of the proposed algorithm is shown to converge to a solution.
\end{abstract}
\section{Introduction}
In this paper, we consider the following two-block decomposable optimization (and its extension to multi-block optimization) with affine constraint\footnote{For better comparison between ADMM and our proposed algorithm, we use the same notations for (\ref{1}) as in \cite{boydADMM}.}:
\begin{equation}\label{1}
\begin{aligned}
& \underset{x,z}{\text{min}}
& & f(x)+g(z) \\
& \text{subject to}
& & Ax+Bz=c \\
\end{aligned}
\end{equation}
where $x \in \Re^{n}$ and $z \in \Re^{m}$ are decision variables, $f,g$ are convex functions, and $A \in \Re^{p \times n},B \in \Re^{p \times m}, c \in \Re^{p}$, where $\Re$ denotes the set of all real numbers. Problem (\ref{1}) has found applications in many areas of engineering, such as signal and image denoising and restoration \cite{image}, compressed sensing \cite{compressedsensing1}-\cite{compressedsensing2}, and channel estimation and coding \cite{coding}. The research challenge is how to approach an optimal solution of (\ref{1}) by translating the original difficult optimization over the primal and dual variables into easier (unconstrained) subproblems. Motivated by the seminal Arrow-Hurwicz-Uzawa primal-dual dynamics \cite{arrowhurwicz}, several researchers have developed primal-dual algorithms to solve (\ref{1}) \cite{wangelia2011}-\cite{unifiedframework}. Note that the Arrow-Hurwicz-Uzawa dynamics has been used for distributed optimization in \cite{wangelia2011} (see the recent survey \cite{survey2020} for more references).
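To fix ideas, here is a minimal toy instance of (\ref{1}) (our own illustration: $f(x)=\tfrac12\|x\|^2$, $g(z)=\tfrac12 z^2$, $A=I$, $B=(1,1)^T$, with the data $c$ chosen arbitrarily). The KKT optimality conditions couple the primal variables $x,z$ and the dual variable $y$ through a small linear system, which is exactly the primal-dual structure the algorithms discussed below aim to exploit.

```python
# min (1/2)(x1^2 + x2^2) + (1/2) z^2   s.t.   x1 + z = c1,  x2 + z = c2
# (a toy instance of problem (1) with A = I and B = (1, 1)^T).
# KKT conditions:  x + A^T y = 0,  z + B^T y = 0,  A x + B z = c,
# which reduce to (A A^T + B B^T) y = -c, here the 2x2 system
# [[2, 1], [1, 2]] y = -c.
def solve_toy(c1, c2):
    det = 2 * 2 - 1 * 1                      # Cramer's rule on [[2, 1], [1, 2]]
    y1 = (-c1 * 2 - (-c2) * 1) / det
    y2 = (2 * (-c2) - 1 * (-c1)) / det
    x = (-y1, -y2)                           # x = -A^T y
    z = -(y1 + y2)                           # z = -B^T y
    return x, z
```

For $c=(1,1)$ this recovers $x=(1/3,1/3)$, $z=2/3$, which satisfies the affine constraint exactly.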
\textit{Method of Multipliers}\footnote{Method of Multipliers is also called \textit{Augmented Lagrangian Method} (ALM) \cite{survey2020}.} (MM) \cite{MM1}-\cite{MM3} is a general-purpose iterative solver for constrained optimization (\ref{1}). MM decomposes the original problem into sub-problems that can be solved easily. MM can handle nonsmoothness of objective functions and generic problem constraints and has strong convergence guarantees \cite{qlearnbertsekas}. Although MM has been beneficial, it suffers from sequential updates. Distributed MM has been investigated by some researchers \cite{mmdist1}-\cite{mmdist3}.
Another iterative solver for (\ref{1}) is the \textit{Alternating Direction Method of Multipliers} (ADMM), which was proposed in \cite{ADMM1}-\cite{ADMM2} (see \cite{boydADMM} for more references). Similar to MM, ADMM decomposes the original problem into sub-problems that can be solved easily. ADMM can also handle nonsmoothness of objective functions, with strong convergence guarantees \cite{boydADMM}. Unlike in MM, the primal variables $x$ and $z$ in ADMM are updated in an \textit{alternating} fashion that decomposes the updates of the primal variables into easier sub-problems. Although ADMM has been beneficial, its extension to multi-block optimization is still a challenging problem, since the \textit{direct} extension of ADMM to the multi-block case is \textit{not} necessarily convergent if no further assumption is imposed (see \cite{ADMMdiverges} for details). In this regard, several variants of ADMM have been investigated to overcome this disadvantage, such as 3-block ADMM \cite{ADMM3block1}-\cite{ADMM3block2}, Jacobian decomposition of the augmented Lagrangian method with proximal terms \cite{ADMMJacobian}, and prediction-correction methods \cite{couple14}. ADMM and its extensions to multi-block optimization suffer from sequential updates. ADMM has been extended to the distributed case in \cite{ADMMdistributedorigin1}-\cite{ADMMdistributedorigin2} for estimation on wireless sensor networks, and several distributed versions of ADMM have been developed \cite{018}-\cite{adda}, to cite a few. Recently, the \textit{Primal-Dual Method of Multipliers}, which can be seen as an extension of ADMM for solving problems over graphs, has been proposed in \cite{priamdualmm1}-\cite{priamdualmm2}.
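To make the alternating updates concrete, consider a minimal scalar instance of (\ref{1}) (a sketch of ours with toy data: $f(x)=\tfrac12(x-a)^2$, $g(z)=\tfrac12(z-b)^2$, constraint $x-z=0$, so the solution is $x^\star=z^\star=(a+b)/2$). In the scaled form of \cite{boydADMM} both minimizations are closed-form, and the sequential nature of ADMM is visible: the $z$-update uses the freshly computed $x$.

```python
def admm_scalar(a, b, rho=1.0, iters=300):
    """Scaled-form ADMM for min (1/2)(x-a)^2 + (1/2)(z-b)^2 s.t. x - z = 0.
    Note the sequential structure: the z-update needs the fresh x."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # x-update: argmin over x
        z = (b + rho * (x + u)) / (1.0 + rho)   # z-update: uses the new x
        u = u + x - z                           # (scaled) dual update
    return x, z
```

With $a=1$, $b=3$ the iterates converge quickly to $x=z=2$; the per-iteration error roughly halves for $\rho=1$ in this instance.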
The \textit{Bi-Alternating Direction Method of Multipliers} (BiADMM) \cite{priamdualmm1}-\cite{biadmm} is another iterative solver for (\ref{1}). BiADMM iteratively minimizes an augmented bi-conjugate function. Unlike ADMM, which always involves three updates (two primal variables and one dual variable) per iteration, BiADMM performs either two or three updates per iteration, depending on the functional construction. Similar to MM and ADMM, BiADMM suffers from sequential updates.
Another iterative algorithm for solving (\ref{1}) is the \textit{Primal-Dual Fixed Point} (PDFP) algorithm, originally proposed in \cite{fixedpointprimaldualChen} for separable convex optimization; several variants have since been developed (see \cite{fixedpointprimaldual1}-\cite{fixedpointprimaldual2} and references therein). PDFP algorithms are based on proximal forward-backward splitting and fixed-point iterations (see \cite{fixedpointprimaldualChen} for details), and they have been extended to multi-block optimization in \cite{fixedpointprimaldualmultiblock}. Similar to the previous algorithms, PDFP algorithms suffer from sequential updates.
Recently, a unified framework for existing algorithms for centralized separable convex optimization has been proposed in \cite{unifiedframework}, which also requires sequential updates. All aforementioned algorithms suffer from sequential updates, which make them slow in practice. Moreover, applying the \textit{forward-backward-forward method} \cite{modifiedforwardbackwardTseng} to (\ref{1}) does \textit{not} yield parallel algorithms (see \cite[Sec. 4]{modifiedforwardbackwardTseng} for details). A recent survey \cite{survey2020} covers existing distributed primal-dual algorithms.
\textbf{Contribution:} In this paper, we consider the centralized two-block convex optimization (\ref{1}). By applying the \textit{forward-reflected-backward method} recently proposed in \cite{forwardreflectedbackward}, we derive a fully parallel primal-dual discrete-time algorithm with fixed step size, called the \textit{Parallel Alternating Direction Primal-Dual} (PADPD) algorithm. In the PADPD algorithm, the two primal variables $x$ and $z$ are updated in an \textit{alternating} fashion as in ADMM, but all (primal and dual) variables are updated in \textit{parallel} in the sense that the update of each variable does not require the already-updated values of the other variables. One of the main advantages of the proposed algorithm for centralized optimization is that it can be extended \textit{directly} to \textit{any} finite multi-block optimization while preserving its convergence. The approach is then applied to distributed optimization, for which a fully parallel primal-dual distributed algorithm is obtained.
This paper is organized as follows. Section II gives some preliminaries. The PADPD algorithm and its extension to multi-block optimization are derived in Section III. In Section IV, the method is applied to distributed optimization. Finally, since the direct extension of ADMM diverges for a certain three-block optimization problem \cite{ADMMdiverges}, a numerical example of a three-block problem is given for which the direct extension of the proposed algorithm converges.
\textit{Notations:} $\Re$ denotes the set of all real numbers, and $\mathbb{N}$ denotes the set of all natural numbers. $\emptyset$ represents the empty set. $(.)^{T}$ represents the transpose of a matrix or a vector. For any vector $z \in \Re^{n}$, $\Vert z \Vert_{2}=\sqrt{z^{T}z}$, and for any matrix $Z \in \Re^{n \times n}$, $\Vert Z \Vert_{2}=\sqrt{\lambda_{max}(Z^{T}Z)}=\sigma_{max}(Z)$, where $\lambda_{max}$ denotes the maximum eigenvalue and $\sigma_{max}$ the largest singular value. $\lambda_{2}(Z)$ denotes the second eigenvalue of a matrix $Z$ when the eigenvalues are sorted in increasing order of their real parts. For any matrix $Z=[z_{ij}] \in \Re^{n \times n}$, $\Vert Z \Vert_{1}= \max_{1 \leq j \leq n} \{\sum_{i=1}^{n} \vert z_{ij} \vert \}$ and $\Vert Z \Vert_{\infty}= \max_{1 \leq i \leq n} \{\sum_{j=1}^{n} \vert z_{ij} \vert \}$. $col\{x_{1},\hdots,x_{m}\}:=[x_{1}^{T}, \hdots,x_{m}^{T}]^{T}$ denotes the column vector obtained by stacking the vectors $x_{1},\hdots,x_{m}$. $\textbf{0}_{n}$ represents the vector of dimension $n$ whose elements are all zero.
\section{Preliminaries}
A vector $v \in \Re^{n}$ is said to be a \textit{stochastic vector} when its components $v_{i}, i=1,2,...,n$, are non-negative and their sum is equal to 1; a square $n \times n$ matrix $V$ is said to be a \textit{stochastic matrix} when each row of $V$ is a stochastic vector. A square $n \times n$ matrix $V$ is said to be \textit{doubly stochastic} when both $V$ and $V^{T}$ are stochastic matrices.
Let $\mathcal{H}$ be a real Hilbert space with norm $\Vert . \Vert $ and inner product $\langle .,. \rangle$. A mapping $H:\mathcal{H} \longrightarrow \mathcal{H}$ is said to be $K$\textit{-Lipschitz continuous} if there exists a $K > 0$ such that
$\Vert Hx-Hy \Vert \leq K \Vert x-y \Vert$ for all $x,y \in \mathcal{H}$.
\textbf{Definition 1:} The \textit{resolvent} of an operator $A$ is defined as $J_{A}:=(I+A)^{-1}$, where $I$ denotes the identity operator.
\textbf{Remark 1} \cite{minty}-\cite{maximalandproximalRackafler}: If $f: \mathcal{H} \longrightarrow \Re \cup \{+\infty\}$ is a proper closed convex function, the resolvent of its subdifferential, namely $J_{\partial f}$, equals Moreau's proximity operator of $f$ \cite{moreauproximity}, i.e.,
$$prox_{f}(x):=\arg\min_{z \in \mathcal{H}}[f(z)+\frac{1}{2} \Vert z-x \Vert^{2}].$$
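As a sanity check on Remark 1, the following minimal numerical sketch (illustrative only; the instance is assumed here, not part of the development above) compares a brute-force minimization of the proximity objective with the closed-form soft-thresholding operator, which is the resolvent of $\partial f$ for $f(z)=|z|$:

```python
import numpy as np

def prox(f, x, n_grid=200001, radius=10.0):
    # Brute-force Moreau proximity operator:
    # prox_f(x) = argmin_z f(z) + 0.5*(z - x)^2  (grid search, for illustration)
    z = np.linspace(x - radius, x + radius, n_grid)
    return z[np.argmin(f(z) + 0.5 * (z - x) ** 2)]

# For f(z) = |z|, prox_f is the soft-thresholding operator with threshold 1
soft = lambda x: np.sign(x) * max(abs(x) - 1.0, 0.0)

for x in [-3.0, -0.5, 0.7, 2.5]:
    assert abs(prox(np.abs, x) - soft(x)) < 1e-3
```

The agreement illustrates why, in the algorithms below, the resolvent step $J_{\eta \partial f}$ can be computed as a proximal minimization.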
\textbf{Definition 2:} A multi-valued mapping $T:\mathcal{H} \rightrightarrows \mathcal{H}$ is called \textit{monotone operator} if $\langle x'-y',x-y \rangle \geq 0$ whenever $x' \in T(x), y' \in T(y).$ It is called \textit{maximal monotone} if in addition its graph $\{(x,x'): x \in \mathcal{H}, x' \in T(x)\}$ is not properly contained in the graph of any monotone operator on $\mathcal{H}.$
\textbf{Remark 2} \cite{subdiffmaximal}: The subdifferential $\partial f$ of a proper closed convex function $f(x)$ is maximal monotone.
\textbf{Remark 3} \cite{rudinfunctionalbook}: In a finite dimensional space, weak convergence implies strong convergence.
\textbf{Lemma 1} \cite[Cor. 2.6]{forwardreflectedbackward}: Let $\Phi: \mathcal{H} \rightrightarrows \mathcal{H}$ be maximally monotone, let $H:\mathcal{H} \longrightarrow \mathcal{H}$ be monotone and $L$-Lipschitz, and suppose that $(\Phi+H)^{-1}(\textbf{0}) \neq \emptyset.$ Choose $\eta \in (0,\frac{1}{2L})$. Given $x_{0},x_{-1} \in \mathcal{H}$, define the sequence $\{x_{k}\}$ according to
\begin{equation}\label{algorithm1}
x_{k+1}=J_{\eta \Phi}(x_{k}-2 \eta H(x_{k})+\eta H(x_{k-1})), \quad{} \forall k \in \mathbb{N}.
\end{equation}
Then $\{x_{k}\}_{-1}^{\infty}$ converges weakly to a point contained in $(\Phi+H)^{-1}(\textbf{0}).$
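To illustrate Lemma 1, the sketch below runs iteration (\ref{algorithm1}) on a toy scalar inclusion (assumed for illustration): $\Phi=\partial|\cdot|$, whose resolvent is soft-thresholding, and the monotone $1$-Lipschitz map $H(x)=x-2$. The unique zero of $\Phi+H$ is $x^{*}=1$, since $1+(1-2)=0$ with $\partial|1|=\{1\}$:

```python
import numpy as np

# Forward-reflected-backward iteration (toy instance):
# find x with 0 in d|x| + H(x), where H(x) = x - 2 is monotone, L = 1.
soft = lambda v, t: np.sign(v) * max(abs(v) - t, 0.0)  # J_{t * d|.|}
H = lambda x: x - 2.0

eta = 0.4                      # any eta in (0, 1/(2L)) = (0, 0.5)
x_prev, x = 5.0, 5.0           # arbitrary x_{-1}, x_0
for _ in range(500):
    x, x_prev = soft(x - 2 * eta * H(x) + eta * H(x_prev), eta), x

assert abs(x - 1.0) < 1e-6     # iterates converge to the zero x* = 1
```

Note the characteristic pattern of (\ref{algorithm1}): the forward term is evaluated twice at the current point and once, with opposite sign, at the previous point.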
\textbf{Lemma 2} \cite{threedec}: Let $M \in \Re^{m \times m}$. Then $\Vert M \Vert_{2} \leq \sqrt{\Vert M \Vert_{1} \Vert M \Vert_{\infty}}$.
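Lemma 2 is straightforward to check numerically; the snippet below (illustrative only) verifies the bound on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    M = rng.standard_normal((6, 6))
    n1 = np.linalg.norm(M, 1)          # max absolute column sum
    ninf = np.linalg.norm(M, np.inf)   # max absolute row sum
    # Lemma 2: the spectral norm is bounded by sqrt(||M||_1 * ||M||_inf)
    assert np.linalg.norm(M, 2) <= np.sqrt(n1 * ninf) + 1e-12
```

This bound is what Remarks 5 and 6 below use to obtain a cheaply computable Lipschitz constant without forming $M_{\rho}^{T} M_{\rho}$.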
\textbf{Definition 3:} $B:\mathcal{H} \longrightarrow \mathcal{H}$ is $\beta$-\textit{cocoercive}, $\beta >0,$ if
$$\langle x-y,B(x)-B(y) \rangle \geq \beta \Vert B(x)-B(y) \Vert^{2}, \quad{} \forall x,y \in \mathcal{H}.$$
\textbf{Proposition 1} \cite[Props. 6.4.1 and 6.4.2]{bertsekasconvexbook}: Let the optimal value $f^{*}$ of the following optimization problem be finite:
\begin{equation}\label{bertsekasbookoptimization}
\begin{aligned}
& \underset{x}{\text{min}}
& & f(x) \\
& \text{subject to}
& & x \in C \\
&&& e_{i}'x-d_{i}=0, \quad{} i=1, \hdots, m, \\
&&& a_{j}'x-b_{j} \leq 0, \quad{} j=1, \hdots, r,
\end{aligned}
\end{equation}
where $C \subseteq \Re^{n}$ is a nonempty convex set, and the cost function $f:C \longrightarrow \Re$ is convex. If
\begin{align*}
F :=\{x \in C: e_{i}'x-d_{i}=0, \quad{} i=1, \hdots, m, \quad{}
a_{j}'x-b_{j} \leq 0, \quad{} j=1, \hdots, r \}
\end{align*}
contains a relative interior point of $C$, then the set of geometric multipliers is nonempty.
\textbf{Proposition 2} \cite[Prop. 6.1.2]{bertsekasconvexbook}: Assume that the primal problem (\ref{bertsekasbookoptimization}) has at least one optimal solution $x^{*}$. Then the set of Lagrange multipliers associated with $x^{*}$ and the set of geometric multipliers coincide.
\section{Centralized Optimization}
\subsection{Two-Block Optimization}
Consider optimization (\ref{1}). The \textit{augmented Lagrangian} \cite{boydADMM} for (\ref{1}) is
\begin{align}
\mathcal{L}_{\rho}(x,z,y):= f(x)+g(z)+y^{T} (Ax+Bz-c) +\frac{\rho}{2} \Vert Ax+Bz-c \Vert_{2}^{2} \label{augmentedlagrangian}
\end{align}
where $\rho >0$ is called the \textit{penalty parameter} \cite{boydADMM}, and $y$ is the \textit{Lagrange multiplier} \cite{boydconvexbook} associated with the equality constraint in (\ref{1}). Note that $\mathcal{L}_{0}$ is the standard Lagrangian for the problem (see \cite{boydconvexbook}).
Now we impose the following assumptions on (\ref{1}).
\textbf{Assumption 1:} $f:\Re^{n} \longrightarrow \Re \cup \{+\infty\}$ and $g:\Re^{n} \longrightarrow \Re \cup \{+\infty\}$ are proper, closed, and convex.
Assumption 1 ensures that the single-valued operators $prox_{f}$ and $prox_{g}$ exist (see Remark 1).
\textbf{Assumption 2:} The standard Lagrangian $\mathcal{L}_{0}$ has a saddle point, namely
\begin{align}
\max_{y \in \Re^{p}} \quad{} \min_{x \in \Re^{n},z \in \Re^{m}} \mathcal{L}_{0} (x,z,y)=
\min_{x \in \Re^{n},z \in \Re^{m}} \quad{} \max_{y \in \Re^{p}} \quad{} \mathcal{L}_{0} (x,z,y) \label{saddlepointproblem}
\end{align}
has a solution.
Assumption 2 ensures that the inclusion problem below, derived from the first-order optimality condition of the (augmented) Lagrangian (\ref{augmentedlagrangian}), has a solution.
Now we give the main theorem in this subsection.
\textbf{Theorem 1:} Consider optimization (\ref{1}) with Assumptions 1 and 2. Let $ \rho \in [0,+\infty)$ and $\eta \in (0,\frac{1}{2L})$ where $\Vert M_{\rho} \Vert_{2} \leq L$ and $M_{\rho}$ is defined in (\ref{matrixM}). Then starting from any initial points $x_{0},x_{-1},z_{0},z_{-1},y_{0},y_{-1}$, the sequences $\{x_{k}\}_{-1}^{\infty}$ and $\{z_{k}\}_{-1}^{\infty}$ generated by Algorithm 1 converge to a solution of (\ref{1}).
\begin{algorithm}
\caption{Parallel Alternating Direction Primal Dual (PADPD) algorithm}
\begin{align*}
\hat{x}_{k} &=x_{k}-2 \eta A^{T} y_{k}-2 \eta \rho A^{T} A x_{k}-2 \eta \rho A^{T} B z_{k} +\eta A^{T} y_{k-1}
+ \eta \rho A^{T} A x_{k-1}+ \eta \rho A^{T} B z_{k-1} \nonumber \\
&\quad{}+\eta \rho A^{T} c \\
\hat{z}_{k} &= z_{k}-2 \eta B^{T}y_{k}-2 \eta \rho B^{T}A x_{k}-2 \eta \rho B^{T}B z_{k} + \eta B^{T}y_{k-1}+ \eta \rho B^{T}A x_{k-1}+ \eta \rho B^{T}B z_{k-1} \nonumber \\
&\quad{}+\eta \rho B^{T}c \\
x_{k+1}&=\displaystyle \arg\min_{u \in \Re^{n}} (\eta f(u)+\frac{1}{2} \Vert u-\hat{x}_{k} \Vert_{2}^{2}) \\
z_{k+1}&=\displaystyle \arg\min_{r \in \Re^{m}} (\eta g(r)+\frac{1}{2} \Vert r-\hat{z}_{k} \Vert_{2}^{2}) \\
y_{k+1}&= y_{k}+2 \eta A x_{k}+2 \eta B z_{k}-\eta Ax_{k-1}-\eta Bz_{k-1}-\eta c
\end{align*}
\end{algorithm}
\textit{Proof:} Through the first-order optimality condition of (\ref{saddlepointproblem}), the saddle point problem (\ref{saddlepointproblem}) can be formulated as the following inclusion problem:
\textit{Find $col\{x^{*},z^{*},y^{*}\}$ such that}
\begin{align}
\textbf{0}_{n} &\in \partial f(x^{*})+A^{T}y^{*}+\rho A^{T} A x^{*}+ \rho A^{T} B z^{*}- \rho A^{T} c \label{inclusion1} \\
\textbf{0}_{m} &\in \partial g(z^{*})+B^{T}y^{*}+\rho B^{T} Ax^{*}+\rho B^{T}Bz^{*}-\rho B^{T}c \label{inclusion2} \\
\textbf{0}_{p} &= -(Ax^{*}+Bz^{*}-c). \label{inclusion3}
\end{align}
Now we consider the Hilbert space $\mathcal{H}=(\Re^{n+m+p}, \Vert . \Vert_{2})$. The inclusion (\ref{inclusion1})-(\ref{inclusion3}) can be rewritten as
\begin{equation}\label{iiiiinclusion}
\textbf{0}_{n+m+p} \in \Phi (\Pi)+H(\Pi)
\end{equation}
where
\begin{align}
\Pi &:=col\{x,z,y\} \\
\Phi (\Pi) &:=col\{ \partial f(x), \partial g(z),\textbf{0}_{p}\} \label{operatorphi} \\
H(\Pi) &:= M_{\rho} \Pi+V_{\rho} \label{operatorH}
\end{align}
in which
\begin{equation}\label{matrixM}
M_{\rho}:=\begin{pmatrix}
\rho A^{T} A & \rho A^{T}B & A^{T} \\
\rho B^{T}A & \rho B^{T}B & B^{T} \\
-A & -B & \textbf{0}_{p \times p}
\end{pmatrix},
\end{equation}
and
\begin{equation}\label{matrixV}
V_{\rho}:=col\{-\rho A^{T} c, - \rho B^{T} c,c\}.
\end{equation}
From Assumption 1 and Remark 2, we can conclude that the operator $\Phi(\Pi)$ defined in (\ref{operatorphi}) is maximally monotone. Since $\rho \geq 0$, we obtain
\begin{equation}\label{hhhhhh}
(\Pi-\Pi')^{T} (H(\Pi)-H(\Pi')) \geq 0, \quad{} \forall \Pi,\Pi' \in \mathcal{H},
\end{equation}
which implies that the operator $H(\Pi)$ defined in (\ref{operatorH}) is monotone. Since no further assumptions are imposed on the matrices $A$ and $B$, a simple calculation shows that the operator $H(\Pi)$ is, in general, not cocoercive (see Definition 3). For example, when $\rho=0$, the matrix $M_{0}$ defined in (\ref{matrixM}) is skew-symmetric; consequently,
\begin{equation}\label{dddddddddd}
(\Pi-\Pi')^{T} (H(\Pi)-H(\Pi')) = 0, \quad{} \forall \Pi,\Pi' \in \mathcal{H},
\end{equation}
which implies that $H(\Pi)$ is not cocoercive. It is obvious that $H(\Pi)$ defined in (\ref{operatorH}) is $L$-Lipschitz where $\Vert M_{\rho} \Vert_{2} \leq L$.
\textbf{Remark 4:} One might hope that applying the \textit{forward-backward splitting method} \cite{forwardbackwardfirstalgorithm} to (\ref{iiiiinclusion}) would result in a parallel algorithm. However, the forward-backward method \cite{forwardbackwardfirstalgorithm} requires $H(\Pi)$ to be cocoercive (see Definition 3), and we have shown above that $H(\Pi)$ is \textit{not} cocoercive. Therefore, we need a modified forward-backward method that does not require the cocoercivity assumption, such as \cite{modifiedforwardbackwardTseng}. Nevertheless, applying the \textit{forward-backward-forward method} of \cite{modifiedforwardbackwardTseng} does not yield a parallel algorithm (see \cite[Sec. 4]{modifiedforwardbackwardTseng} for details). \textit{Here, we show that applying the forward-reflected-backward method of \cite{forwardreflectedbackward}, which does not require cocoercivity, results in a fully parallel algorithm.}
\textbf{Remark 5:} One may use Lemma 2 to choose $$L=\max\{\Vert \begin{pmatrix}
\rho A^{T} A \\
\rho B^{T}A \\
-A
\end{pmatrix} \Vert_{1},\Vert \begin{pmatrix}
\rho A^{T}B \\
\rho B^{T}B \\
-B
\end{pmatrix} \Vert_{1},\Vert [A,B] \Vert_{\infty} \}.$$
Since the saddle-point problem (\ref{saddlepointproblem}) has a solution, the inclusion problem (\ref{inclusion1})-(\ref{inclusion3}) has a solution, i.e., $(\Phi+H)^{-1}(\textbf{0}) \neq \emptyset.$ Therefore, the conditions of Lemma 1 are satisfied, and we can apply Algorithm (\ref{algorithm1}) given $\Pi_{0},\Pi_{-1} \in \mathcal{H},$ i.e.,
\begin{equation}\label{alggggggg}
\Pi_{k+1}=J_{\eta \Phi}(\Pi_{k}-2 \eta H(\Pi_{k})+\eta H(\Pi_{k-1})).
\end{equation}
The sequence $\{\Pi_{k}\}$ generated by (\ref{alggggggg}) converges strongly to a point in $(\Phi+H)^{-1}(\textbf{0})$ since $\mathcal{H}$ is finite dimensional (see Lemma 1 and Remark 3). We obtain
\begin{equation}\label{bbbbbbbbbbb}
\Pi_{k}-2 \eta H(\Pi_{k})+\eta H(\Pi_{k-1})=col\{\Theta_{1,k},\Theta_{2,k},\Theta_{3,k}\}
\end{equation}
where
\begin{align}
\Theta_{1,k} &:=x_{k}-2 \eta A^{T} y_{k}-2 \eta \rho A^{T} A x_{k}-2 \eta \rho A^{T} B z_{k} +\eta A^{T} y_{k-1}
+ \eta \rho A^{T} A x_{k-1}+ \eta \rho A^{T} B z_{k-1} \nonumber \\
&\quad{}+\eta \rho A^{T} c \\
\Theta_{2,k} &:= z_{k}-2 \eta B^{T}y_{k}-2 \eta \rho B^{T}A x_{k}-2 \eta \rho B^{T}B z_{k} + \eta B^{T}y_{k-1}+ \eta \rho B^{T}A x_{k-1}+ \eta \rho B^{T}B z_{k-1} \nonumber \\
&\quad{}+\eta \rho B^{T}c \\
\Theta_{3,k} &:= y_{k}+2 \eta A x_{k}+2 \eta B z_{k}-\eta c -\eta Ax_{k-1}-\eta Bz_{k-1}.
\end{align}
We also have
\begin{equation}\label{ppppppppppppp}
J_{\eta \Phi}(\Pi)=\begin{pmatrix}
\displaystyle \arg\min_{u \in \Re^{n}} (\eta f(u)+\frac{1}{2} \Vert u-x \Vert_{2}^{2}) \\
\displaystyle \arg\min_{r \in \Re^{m}} (\eta g(r)+\frac{1}{2} \Vert r-z \Vert_{2}^{2}) \\
\textbf{0}_{p}
\end{pmatrix}
\end{equation}
which exists by Assumption 1. From (\ref{alggggggg})-(\ref{ppppppppppppp}), we obtain Algorithm 1. We call Algorithm 1 the \textit{Parallel Alternating Direction Primal Dual} (PADPD) algorithm, since the primal variables are updated in an alternating fashion while the updates of all primal and dual variables are performed in parallel. This completes the proof of Theorem 1.
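As a sanity check on Theorem 1, the following sketch runs Algorithm 1 on a toy scalar instance of (\ref{1}) (assumed here for illustration, not taken from the analysis above): $f(x)=x^{2}/2$, $g(z)=z^{2}/2$, $A=B=1$, $c=2$, whose solution is $x^{*}=z^{*}=1$ with multiplier $y^{*}=-1$:

```python
import numpy as np

rho, c = 1.0, 2.0
# Lemma 2 bound for M_rho = [[1,1,1],[1,1,1],[-1,-1,0]] gives ||M_rho||_2 <= 3
L = 3.0
eta = 0.9 / (2 * L)                   # any eta in (0, 1/(2L))

prox = lambda v: v / (1.0 + eta)      # prox of eta * (u^2 / 2)
x = z = y = 0.0                        # iterates at step k
xp = zp = yp = 0.0                     # iterates at step k-1
for _ in range(5000):
    # Note: xh, zh, yn use only the (k) and (k-1) iterates, so all three
    # updates can be evaluated in parallel.
    xh = x - 2*eta*y - 2*eta*rho*(x + z) + eta*yp + eta*rho*(xp + zp) + eta*rho*c
    zh = z - 2*eta*y - 2*eta*rho*(x + z) + eta*yp + eta*rho*(xp + zp) + eta*rho*c
    yn = y + 2*eta*(x + z) - eta*(xp + zp) - eta*c
    xp, zp, yp = x, z, y
    x, z, y = prox(xh), prox(zh), yn

assert abs(x - 1.0) < 1e-6 and abs(z - 1.0) < 1e-6
assert abs(y + 1.0) < 1e-4
```

Observe that the five update formulas are verbatim instances of Algorithm 1 with $A=B=1$; no update uses a value produced in the same iteration.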
Of particular interest is the case of the standard Lagrangian $\mathcal{L}_{0}$ (i.e., $\rho=0$) defined in (\ref{augmentedlagrangian}). In this case, we have the following corollary.
\textbf{Corollary 1:} Consider optimization (\ref{1}) with Assumptions 1 and 2. Let $\eta \in (0,\frac{1}{2L})$ where $\Vert M_{0} \Vert_{2} \leq L$, and $M_{0}$ is defined in (\ref{matrixM}). Then starting from any initial points $x_{0},x_{-1},z_{0},z_{-1},y_{0},y_{-1}$, the sequences $\{x_{k}\}_{-1}^{\infty}$ and $\{z_{k}\}_{-1}^{\infty}$ generated by Algorithm 2 converge to a solution of (\ref{1}).
\begin{algorithm}
\caption{PADPD Algorithm for (\ref{1}) with $\rho=0$}
\begin{align*}
\hat{x}_{k} &= x_{k}-2 \eta A^{T} y_{k}+\eta A^{T} y_{k-1} \nonumber \\
\hat{z}_{k} &= z_{k}-2 \eta B^{T}y_{k} + \eta B^{T}y_{k-1} \\
x_{k+1}&=\displaystyle \arg\min_{u \in \Re^{n}} (\eta f(u)+\frac{1}{2} \Vert u-\hat{x}_{k} \Vert_{2}^{2}) \\
z_{k+1}&=\displaystyle \arg\min_{r \in \Re^{m}} (\eta g(r)+\frac{1}{2} \Vert r-\hat{z}_{k} \Vert_{2}^{2}) \\
y_{k+1}&= y_{k}+2 \eta A x_{k}+2 \eta B z_{k} -\eta Ax_{k-1}-\eta Bz_{k-1}-\eta c
\end{align*}
\end{algorithm}
\subsection{Extension to Multi-Block Optimization}
Consider the following multi-block extension of optimization (\ref{1}):
\begin{equation}\label{multiblockoptim}
\begin{aligned}
& \underset{x_{i}, i=1, \hdots,q}{\text{min}}
& & \sum_{i=1}^{q} f_{i}(x_{i}) \\
& \text{subject to}
& & \sum_{i=1}^{q} A_{i} x_{i}=c \\
\end{aligned}
\end{equation}
where $x_{i} \in \Re^{n_{i}}, n_{i} \in \mathbb{N},$ are decision variables, $f_{i}$ are convex functions, and $A_{i} \in \Re^{p \times n_{i}}, i=1, \hdots, q, q \in \mathbb{N}$.
The \textit{augmented Lagrangian} \cite{boydADMM} for (\ref{multiblockoptim}) is
\begin{align}
\mathcal{L}_{\rho}(x_{1},\hdots,x_{q},y):=\sum_{i=1}^{q} f_{i}(x_{i}) +y^{T} (\sum_{i=1}^{q} A_{i} x_{i}-c) +\frac{\rho}{2} \Vert \sum_{i=1}^{q} A_{i} x_{i}-c \Vert_{2}^{2} \label{augmlagrangmultiblock}
\end{align}
where $\rho >0$ is the \textit{penalty parameter} \cite{boydADMM}, and $y$ is the \textit{Lagrange multiplier} \cite{boydconvexbook} associated with the equality constraint in (\ref{multiblockoptim}). $\mathcal{L}_{0}$ is the standard Lagrangian for the problem (see \cite{boydconvexbook}).
We impose the following assumptions on (\ref{multiblockoptim}).
\textbf{Assumption 3:} $f_{i}:\Re^{n_{i}} \longrightarrow \Re \cup \{+\infty\}, i=1, \hdots,q,$ are proper, closed, and convex.
Assumption 3 ensures that the single-valued operators $prox_{f_{i}}, i=1, \hdots,q,$ exist (see Remark 1).
\textbf{Assumption 4:} The standard Lagrangian $\mathcal{L}_{0}$ has a saddle point, i.e.,
\begin{align}
\max_{y \in \Re^{p}} \quad{} \quad{} \min_{x_{i} \in \Re^{n_{i}},i=1,\hdots,q} \quad{} \mathcal{L}_{0} (x_{1},\hdots,x_{q},y)= \min_{x_{i} \in \Re^{n_{i}},i=1,\hdots,q} \quad{} \quad{} \max_{y \in \Re^{p}} \quad{} \quad{} \mathcal{L}_{0} (x_{1},\hdots,x_{q},y) \label{saddlepointproblemmulti}
\end{align}
has a solution.
Assumption 4 ensures that the inclusion problem below, derived from the first-order optimality condition of the (augmented) Lagrangian (\ref{augmlagrangmultiblock}), has a solution.
\begin{algorithm}
\caption{Parallel Alternating Direction Primal Dual (PADPD) Algorithm for Multi-Block Optimization (\ref{multiblockoptim})}
\begin{align*}
\hat{x}_{1,k} &=x_{1,k}-2 \eta A_{1}^{T} y_{k}-2 \eta \rho \sum_{j=1}^{q} A_{1}^{T} A_{j} x_{j,k} +\eta A_{1}^{T} y_{k-1}+\eta \rho \sum_{j=1}^{q} A_{1}^{T} A_{j} x_{j,k-1}+\eta \rho A_{1}^{T} c \\
&\quad{} \vdots \nonumber \\
\hat{x}_{i,k} &=x_{i,k}-2 \eta A_{i}^{T} y_{k}-2 \eta \rho \sum_{j=1}^{q} A_{i}^{T} A_{j} x_{j,k} +\eta A_{i}^{T} y_{k-1}+\eta \rho \sum_{j=1}^{q} A_{i}^{T} A_{j} x_{j,k-1}+\eta \rho A_{i}^{T} c \\
&\quad{} \vdots \nonumber \\
\hat{x}_{q,k} &=x_{q,k}-2 \eta A_{q}^{T} y_{k}-2 \eta \rho \sum_{j=1}^{q} A_{q}^{T} A_{j} x_{j,k} +\eta A_{q}^{T} y_{k-1}+\eta \rho \sum_{j=1}^{q} A_{q}^{T} A_{j} x_{j,k-1}+\eta \rho A_{q}^{T} c \\
x_{1,k+1}&=\displaystyle \arg\min_{u_{1} \in \Re^{n_{1}}} (\eta f_{1}(u_{1})+\frac{1}{2} \Vert u_{1}-\hat{x}_{1,k} \Vert_{2}^{2}) \\
&\quad{} \vdots \nonumber \\
x_{i,k+1}&=\displaystyle \arg\min_{u_{i} \in \Re^{n_{i}}} (\eta f_{i}(u_{i})+\frac{1}{2} \Vert u_{i}-\hat{x}_{i,k} \Vert_{2}^{2}) \\
&\quad{} \vdots \nonumber \\
x_{q,k+1}&=\displaystyle \arg\min_{u_{q} \in \Re^{n_{q}}} (\eta f_{q}(u_{q})+\frac{1}{2} \Vert u_{q}-\hat{x}_{q,k} \Vert_{2}^{2}) \\
y_{k+1}&=y_{k}+2 \eta \sum_{i=1}^{q} A_{i}x_{i,k}-\eta \sum_{i=1}^{q} A_{i}x_{i,k-1} -\eta c
\end{align*}
\end{algorithm}
\textbf{Theorem 2:} Consider optimization (\ref{multiblockoptim}) with Assumptions 3 and 4. Let $ \rho \in [0,+\infty)$ and $\eta \in (0,\frac{1}{2 \tilde{L}})$ where $\Vert \tilde{M}_{\rho} \Vert_{2} \leq \tilde{L}$ and $\tilde{M}_{\rho}$ is defined in (\ref{matrixMmulti}). Then starting from any initial points $x_{1,0},x_{1,-1},\hdots,x_{q,0},x_{q,-1},y_{0},y_{-1}$, the sequences $\{x_{i,k}\}_{-1}^{\infty}, i=1,\hdots,q,$ generated by Algorithm 3 converge to a solution of (\ref{multiblockoptim}).
\textit{Proof:} Through the first-order optimality condition of (\ref{saddlepointproblemmulti}), the saddle point problem (\ref{saddlepointproblemmulti}) can be formulated as the following inclusion problem:
\textit{Find $col\{x_{1}^{*},\hdots,x_{q}^{*},y^{*}\}$ such that}
\begin{align}
\textbf{0}_{n_{1}} &\in \partial f_{1}(x_{1}^{*}) +A_{1}^{T} y^{*} +\rho \sum_{j=1}^{q} A_{1}^{T} A_{j} x_{j}^{*}-\rho A_{1}^{T} c \label{inclusionmulti1} \\
\textbf{0}_{n_{2}} &\in \partial f_{2}(x_{2}^{*}) +A_{2}^{T} y^{*} +\rho \sum_{j=1}^{q} A_{2}^{T} A_{j} x_{j}^{*}-\rho A_{2}^{T} c \label{inclusionmulti2} \\
&\vdots \nonumber \\
\textbf{0}_{n_{i}} &\in \partial f_{i}(x_{i}^{*}) +A_{i}^{T} y^{*} +\rho \sum_{j=1}^{q} A_{i}^{T} A_{j} x_{j}^{*}-\rho A_{i}^{T} c \label{inclusionmulti3} \\
&\vdots \nonumber \\
\textbf{0}_{p} &= -(\sum_{i=1}^{q} A_{i} x_{i}-c). \label{inclusionmulti4}
\end{align}
Now we consider the Hilbert space $\tilde{\mathcal{H}}=(\Re^{p+\sum_{i=1}^{q} n_{i}}, \Vert . \Vert_{2})$. The inclusion (\ref{inclusionmulti1})-(\ref{inclusionmulti4}) can be rewritten as
\begin{equation}
\textbf{0}_{p+\sum_{i=1}^{q} n_{i}} \in \tilde{\Phi} (\tilde{\Pi})+\tilde{H}(\tilde{\Pi})
\end{equation}
where
\begin{align}
\tilde{\Pi} &:=col\{x_{1},\hdots,x_{q},y\} \\
\tilde{\Phi} (\tilde{\Pi}) &:=col\{ \partial f_{1}(x_{1}), \hdots, \partial f_{q}(x_{q}),\textbf{0}_{p}\} \label{operatorphimulti} \\
\tilde{H}(\tilde{\Pi}) &:= \tilde{M}_{\rho} \tilde{\Pi}+\tilde{V}_{\rho} \label{operatorHmulti}
\end{align}
in which
\begin{equation}\label{matrixMmulti}
\tilde{M}_{\rho}:=\begin{pmatrix}
\rho A^{T}_{1} A_{1} & \rho A^{T}_{1}A_{2} & \hdots & \rho A^{T}_{1}A_{q} & A^{T}_{1} \\
\rho A^{T}_{2} A_{1} & \rho A^{T}_{2}A_{2} & \hdots & \rho A^{T}_{2}A_{q} & A^{T}_{2} \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
\rho A^{T}_{q} A_{1} & \rho A^{T}_{q}A_{2} & \hdots & \rho A^{T}_{q}A_{q} & A^{T}_{q} \\
-A_{1} & -A_{2} & \hdots & -A_{q} & \textbf{0}_{p \times p}
\end{pmatrix},
\end{equation}
and
\begin{equation}\label{matrixVmulti}
\tilde{V}_{\rho}:=col\{-\rho A^{T}_{1} c, - \rho A^{T}_{2} c, \hdots,- \rho A^{T}_{q} c, c\}.
\end{equation}
From Assumption 3 and Remark 2, we conclude that the operator $\tilde{\Phi} (\tilde{\Pi})$ defined in (\ref{operatorphimulti}) is maximally monotone. Since $\rho \geq 0,$ we obtain
\begin{equation}\label{hhhhhhmulti}
(\tilde{\Pi}-\tilde{\Pi}')^{T} (\tilde{H}(\tilde{\Pi})-\tilde{H}(\tilde{\Pi}')) \geq 0, \quad{} \forall \tilde{\Pi},\tilde{\Pi}' \in \tilde{\mathcal{H}},
\end{equation}
which implies that the operator $\tilde{H}(\tilde{\Pi})$ defined in (\ref{operatorHmulti}) is monotone. Since no further assumptions are imposed on the matrices $A_{i}, i=1,\hdots,q,$ one can show that the operator $\tilde{H}(\tilde{\Pi})$ is, in general, not cocoercive (see Definition 3). It is obvious that $\tilde{H}(\tilde{\Pi})$ defined in (\ref{operatorHmulti}) is $\tilde{L}$-Lipschitz where $\Vert \tilde{M}_{\rho} \Vert_{2} \leq \tilde{L}$.
\textbf{Remark 6:} One may use Lemma 2 to choose
\begin{align*}
\tilde{L}&=\max\{\Vert \begin{pmatrix}
\rho A^{T}_{1} A_{1} \\
\rho A^{T}_{2} A_{1} \\
\vdots \\
\rho A^{T}_{q} A_{1} \\
-A_{1}
\end{pmatrix} \Vert_{1},\Vert \begin{pmatrix}
\rho A^{T}_{1} A_{2} \\
\rho A^{T}_{2} A_{2} \\
\vdots \\
\rho A^{T}_{q} A_{2} \\
-A_{2}
\end{pmatrix} \Vert_{1}, \hdots, \Vert \begin{pmatrix}
\rho A^{T}_{1} A_{q} \\
\rho A^{T}_{2} A_{q} \\
\vdots \\
\rho A^{T}_{q} A_{q} \\
-A_{q}
\end{pmatrix} \Vert_{1}, \Vert [A_{1},A_{2},\hdots,A_{q}] \Vert_{\infty} \}.
\end{align*}
Since the saddle-point problem (\ref{saddlepointproblemmulti}) has a solution, the inclusion problem (\ref{inclusionmulti1})-(\ref{inclusionmulti4}) has a solution, i.e., $(\tilde{\Phi}+\tilde{H})^{-1}(\textbf{0}) \neq \emptyset.$ Therefore, the conditions of Lemma 1 are satisfied, and we can apply Algorithm (\ref{algorithm1}) given $\tilde{\Pi}_{0},\tilde{\Pi}_{-1} \in \tilde{\mathcal{H}},$ i.e.,
\begin{equation}\label{algggggggmulti}
\tilde{\Pi}_{k+1}=J_{\eta \tilde{\Phi}}(\tilde{\Pi}_{k}-2 \eta \tilde{H}(\tilde{\Pi}_{k})+\eta \tilde{H}( \tilde{\Pi}_{k-1})).
\end{equation}
The sequence $\{\tilde{\Pi}_{k}\}$ generated by (\ref{algggggggmulti}) converges strongly to a point in $(\tilde{\Phi}+\tilde{H})^{-1}(\textbf{0})$ since $\tilde{\mathcal{H}}$ is finite dimensional (see Lemma 1 and Remark 3). We obtain
\begin{align}
&\tilde{\Pi}_{k}-2 \eta \tilde{H}(\tilde{\Pi}_{k})+\eta \tilde{H}(\tilde{\Pi}_{k-1})= \nonumber \\
&col\{\tilde{\Theta}_{1,k},\hdots,\tilde{\Theta}_{i,k},\hdots, \tilde{\Theta}_{q+1,k}\} \label{bbbbbbbbbbbmulti}
\end{align}
where
\begin{align}
\tilde{\Theta}_{1,k} &:=x_{1,k}-2 \eta A_{1}^{T} y_{k}-2 \eta \rho \sum_{j=1}^{q} A_{1}^{T} A_{j} x_{j,k} +\eta A_{1}^{T} y_{k-1}+\eta \rho \sum_{j=1}^{q} A_{1}^{T} A_{j} x_{j,k-1}+\eta \rho A_{1}^{T} c \\
&\quad{} \vdots \nonumber \\
\tilde{\Theta}_{i,k} &:=x_{i,k}-2 \eta A_{i}^{T} y_{k}-2 \eta \rho \sum_{j=1}^{q} A_{i}^{T} A_{j} x_{j,k} +\eta A_{i}^{T} y_{k-1}+\eta \rho \sum_{j=1}^{q} A_{i}^{T} A_{j} x_{j,k-1}+\eta \rho A_{i}^{T} c \\
&\quad{} \vdots \nonumber \\
\tilde{\Theta}_{q+1,k} &:=y_{k}+2 \eta \sum_{i=1}^{q} A_{i}x_{i,k}-\eta \sum_{i=1}^{q} A_{i}x_{i,k-1} -\eta c.
\end{align}
We have that
\begin{equation}\label{pppppppppppppmulti}
J_{\eta \tilde{\Phi}}(\tilde{\Pi})=\begin{pmatrix}
\displaystyle \arg\min_{u_{1} \in \Re^{n_{1}}} (\eta f_{1}(u_{1})+\frac{1}{2} \Vert u_{1}-x_{1} \Vert_{2}^{2}) \\
\vdots \\
\displaystyle \arg\min_{u_{q} \in \Re^{n_{q}}} (\eta f_{q}(u_{q})+\frac{1}{2} \Vert u_{q}-x_{q} \Vert_{2}^{2}) \\
\textbf{0}_{p}
\end{pmatrix}
\end{equation}
which exists by Assumption 3. From (\ref{algggggggmulti})-(\ref{pppppppppppppmulti}) we obtain Algorithm 3. Thus the proof of Theorem 2 is complete.
\begin{algorithm}
\caption{PADPD Algorithm for Multi-Block Optimization (\ref{multiblockoptim}) with $\rho=0$}
\begin{align*}
\hat{x}_{1,k} &= x_{1,k}-2 \eta A^{T}_{1} y_{k}+\eta A^{T}_{1} y_{k-1} \nonumber \\
&\quad{} \vdots \\
\hat{x}_{i,k} &= x_{i,k}-2 \eta A^{T}_{i} y_{k}+\eta A^{T}_{i} y_{k-1} \nonumber \\
&\quad{} \vdots \\
\hat{x}_{q,k} &= x_{q,k}-2 \eta A^{T}_{q} y_{k}+\eta A^{T}_{q} y_{k-1} \nonumber \\
x_{1,k+1}&=\displaystyle \arg\min_{u_{1} \in \Re^{n_{1}}} (\eta f_{1}(u_{1})+\frac{1}{2} \Vert u_{1}-\hat{x}_{1,k} \Vert_{2}^{2}) \\
&\quad{} \vdots \\
x_{i,k+1}&=\displaystyle \arg\min_{u_{i} \in \Re^{n_{i}}} (\eta f_{i}(u_{i})+\frac{1}{2} \Vert u_{i}-\hat{x}_{i,k} \Vert_{2}^{2}) \\
&\quad{} \vdots \\
x_{q,k+1}&=\displaystyle \arg\min_{u_{q} \in \Re^{n_{q}}} (\eta f_{q}(u_{q})+\frac{1}{2} \Vert u_{q}-\hat{x}_{q,k} \Vert_{2}^{2}) \\
y_{k+1}&=y_{k}+2 \eta \sum_{i=1}^{q} A_{i}x_{i,k}-\eta \sum_{i=1}^{q} A_{i}x_{i,k-1} -\eta c
\end{align*}
\end{algorithm}
Of particular interest is the case of the standard Lagrangian $\mathcal{L}_{0}$ (i.e., $\rho=0$) defined in (\ref{augmlagrangmultiblock}). In this case, we have the following corollary.
\textbf{Corollary 2:} Consider optimization (\ref{multiblockoptim}) with Assumptions 3 and 4. Let $\eta \in (0,\frac{1}{2 \tilde{L}})$ where $\Vert \tilde{M}_{0} \Vert_{2} \leq \tilde{L}$ and $\tilde{M}_{0}$ is defined in (\ref{matrixMmulti}). Then starting from any initial points $x_{1,0},x_{1,-1},\hdots,$ $x_{q,0},x_{q,-1},y_{0},y_{-1}$, the sequences $\{x_{i,k}\}_{-1}^{\infty}, i=1,\hdots,q,$ generated by Algorithm 4 converge to a solution of (\ref{multiblockoptim}).
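As an illustration of the multi-block case with the standard Lagrangian ($\rho=0$), the sketch below runs Algorithm 4 on a toy three-block instance (assumed for illustration): $f_{i}(u)=u^{2}/2$, $A_{i}=1$, $c=3$, whose solution is $x_{i}^{*}=1$ with multiplier $y^{*}=-1$:

```python
import numpy as np

q, c = 3, 3.0
# Lemma 2 bound for the skew matrix M~_0 gives ||M~_0||_2 <= 3, so L~ = 3
eta = 0.15                                  # eta in (0, 1/(2*3))

prox = lambda v: v / (1.0 + eta)            # prox of eta * (u^2 / 2)
x = xp = np.zeros(q)                        # x_{i,0} and x_{i,-1}
y = yp = 0.0
for _ in range(10000):
    xh = x - 2 * eta * y + eta * yp         # all q primal steps in parallel
    yn = y + 2 * eta * x.sum() - eta * xp.sum() - eta * c
    xp, yp = x, y
    x, y = prox(xh), yn

assert np.all(np.abs(x - 1.0) < 1e-6)
```

Unlike the direct multi-block extension of ADMM, which can diverge \cite{ADMMdiverges}, this three-block run converges, consistent with Corollary 2.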
\section{Distributed Optimization}
Now we consider a network model for the distributed optimization problem below. A network of $m \in \mathbb{N}$ nodes labeled by the set $\mathcal{V}=\lbrace 1,2,...,m \rbrace $ is considered. The topology of the interconnections among the nodes is given by a fixed graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the \textit{unordered} edge set. We write $\mathcal{N}_{i}$ for the set of labels of agent $i$'s neighbors. We define the weight matrix of the graph $\mathcal{W}=[\mathcal{W}_{ij}]$ with $\mathcal{W}_{ij}=a_{ij}$ for $j \in \mathcal{N}_{i} \cup \{ i \}$ and $\mathcal{W}_{ij}=0$ otherwise, where $a_{ij}>0$ is the constant scalar weight that agent $i$ assigns to the information $x_{j}$ received from agent $j$.
Now we define the distributed optimization problem as follows: for each node $i \in \mathcal{V}$, we associate a private convex cost function $f_{i}:\Re^{n} \longrightarrow \Re$ which is known to node $i$. The objective of each agent is to collaboratively seek the solution of the following optimization problem using local information exchange with the neighbors:
$$\underset{s}\min \sum_{i=1}^{m} f_{i}(s)$$
where $s \in \Re^{n}$. We assume that there is no communication delay or noise in delivering a message from agent $j$ to agent $i$.
The full formulation of the above distributed optimization problem is as follows:
\begin{equation}\label{11}
\begin{aligned}
& \underset{x}{\text{min}}
& & f(x):=\sum_{i=1}^{m} f_{i}(x_{i}) \\
& \text{subject to}
& & x_{1}=x_{2}=...=x_{m}
\end{aligned}
\end{equation}
where $x=[x_{1}^{T},...,x_{m}^{T}]^{T}, x_{i} \in \Re^{n}, i=1,2,...,m$, $f_{i}:\Re^{n} \longrightarrow \Re$ is a private cost function known to node $i$, and the constraint is achieved through interactions with neighbors.
Now we impose the following assumptions on (\ref{11}).
\textbf{Assumption 5:} The solution set $\mathcal{X}^{*}$ of optimization (\ref{11}) is nonempty.
\textbf{Assumption 6:} The cost functions $f_{i}:\Re^{n} \longrightarrow \Re, i=1, \hdots, m,$ are convex.
Assumptions 5-6 imply that the solution set of (\ref{11}) is nonempty, closed, and convex. Under Assumption 6, strong duality \cite{boydconvexbook} holds for (\ref{11}) and its Lagrange dual problem (by the weak Slater condition). Moreover, under Assumptions 5-6, the conditions of Proposition 1 are satisfied, and we can guarantee the existence of a Lagrange multiplier associated with $s^{*} \in \mathcal{X}^{*}$ by Proposition 2, since (\ref{11}) is a special case of (\ref{bertsekasbookoptimization}). Note that Assumption 5 is satisfied, for example, if the cost functions $f_{i}$ satisfy suitable growth conditions.
Now we impose the following assumptions on the weighted matrix of the graph.
\textbf{Assumption 7:} $\mathcal{W}=\mathcal{W}^{T}$ and $\mathcal{W} \textbf{1}_{m}=\textbf{1}_{m}.$
Assumption 7 implies that the links in the graph are undirected, and the weighted matrix of the graph is doubly stochastic.
\textbf{Assumption 8} \cite{alavianiACC2017}-\cite{alavianiTAC}: The graph is connected, i.e., $\lambda_{2} (I_{m}-\mathcal{W}) >0.$
Assumption 8 ensures that the information sent from each node will be finally obtained by every other node through a path.
Now, a solution to (\ref{11}) under Assumptions 7-8 can be obtained by solving the following problem:
\begin{equation}\label{12}
\begin{aligned}
& \underset{x}{\text{min}}
& & f(x):=\sum_{i=1}^{m} f_{i}(x_{i}) \\
& \text{subject to}
& & Wx=x
\end{aligned}
\end{equation}
where $W=\mathcal{W} \otimes I_{n}$. The proof that a solution to (\ref{12}) provides a solution to (\ref{11}) is given in \cite[App. B]{alavianiTAC}. Now we have the following corollary for distributed optimization (\ref{12}).
\textbf{Corollary 3}: Consider distributed optimization (\ref{12}) with Assumptions 5-8. Let $\eta \in (0,\frac{1}{4})$. Then, starting from any initial points $x_{i,0},x_{i,-1}, i=1,\hdots,m,$ the sequences $\{x_{i,k}\}_{k=-1}^{\infty}$ generated by Algorithm 5 converge to a solution of (\ref{11}).
\textbf{Remark 7:} Note that choosing the parameter $\eta$ in Corollary 3 does \textit{not} require knowledge of the graph structure or any other global information.
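As a quick sanity check of the reformulation (\ref{12}), the sketch below builds $W=\mathcal{W}\otimes I_{n}$ for an arbitrary three-node weight matrix satisfying Assumptions 7-8 (our own illustrative choice, not from the paper) and verifies that $Wx=x$ holds exactly for consensus vectors and fails otherwise:

```python
import numpy as np

# Hypothetical 3-node weight matrix: symmetric, doubly stochastic (Assumption 7),
# for the connected path graph 1-2-3 (Assumption 8).
Wg = np.array([[0.50, 0.50, 0.00],
               [0.50, 0.25, 0.25],
               [0.00, 0.25, 0.75]])
n = 2                                   # dimension of each local variable x_i
W = np.kron(Wg, np.eye(n))              # W = Wg kron I_n, acting on the stacked x

# lambda_2(I - Wg) > 0 iff the graph is connected (Assumption 8)
lam = np.sort(np.linalg.eigvalsh(np.eye(3) - Wg))
print(lam[1] > 0)                       # True

x_consensus = np.tile([1.0, -2.0], 3)   # x_1 = x_2 = x_3
x_other = np.arange(6, dtype=float)     # not in consensus
print(np.allclose(W @ x_consensus, x_consensus))   # True
print(np.allclose(W @ x_other, x_other))           # False
```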
\begin{algorithm}
\caption{}
\begin{align*}
\hat{x}_{i,k} &= x_{i,k} +(\eta y_{i,k-1}-2 \eta y_{i,k})-\sum_{j \in \mathcal{N}_{i} \cup \{i\}} \mathcal{W}_{ij} (\eta y_{j,k-1}-2 \eta y_{j,k}) \nonumber \\
x_{i,k+1}&=\displaystyle \arg\min_{u_{i} \in \Re^{n}} (\eta f_{i}(u_{i})+\frac{1}{2} \Vert u_{i}-\hat{x}_{i,k} \Vert_{2}^{2}) \\
y_{i,k+1}&= y_{i,k}+ (2 \eta x_{i,k} -\eta x_{i,k-1}) -\sum_{j \in \mathcal{N}_{i} \cup \{i\}} \mathcal{W}_{ij} (2 \eta x_{j,k} -\eta x_{j,k-1})
\end{align*}
\end{algorithm}
\textit{Proof of Corollary 3:} It is clear that optimization (\ref{12}) is of the form (\ref{1}). Hence, we can apply Corollary 1 once its conditions are satisfied. By Assumptions 5-6, a Lagrange multiplier associated with a solution of (\ref{12}) exists from Propositions 1-2. Therefore, Assumptions 1-2 are satisfied. Hence, the conditions of Corollary 1 are satisfied, and we can apply Algorithm 2 to the above distributed optimization problem. By the fact that
$$\Vert I_{mn}-W \Vert_{\infty} \leq 2 \quad{} \text{and} \quad{}\Vert I_{mn}-W \Vert_{1} \leq 2$$
(see Assumption 7), we obtain from Lemma 2 that
$$\Vert M_{0} \Vert_{2}^{2} \leq \Vert M_{0} \Vert_{1} \Vert M_{0} \Vert_{\infty}= 4 \quad{} \Rightarrow \quad{} L=2.$$
Algorithm 2 reduces to the following distributed algorithm:
\begin{align*}
\hat{x}_{k} &= x_{k}-2 \eta (I_{mn}-W) y_{k}+\eta (I_{mn}-W) y_{k-1} \nonumber \\
x_{k+1}&=\displaystyle \arg\min_{u \in \Re^{mn}} (\eta f(u)+\frac{1}{2} \Vert u-\hat{x}_{k} \Vert_{2}^{2}) \\
y_{k+1}&= y_{k}+2 \eta (I_{mn}-W) x_{k} -\eta (I_{mn}-W)x_{k-1}
\end{align*}
The above algorithm is written in compact form; rewriting it componentwise, using only information local to each agent $i$, yields Algorithm 5. This completes the proof of Corollary 3.
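To make the compact iteration above concrete, the following numpy sketch runs it on a toy three-node consensus problem. The quadratic costs $f_i(u)=\frac{1}{2}(u-a_i)^{2}$ (whose proximal map is available in closed form), the weight matrix, and the step size $\eta=0.2\in(0,\frac{1}{4})$ are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

# Toy instance: minimize sum_i 0.5*(x_i - a_i)^2 subject to x_1 = x_2 = x_3.
# The optimal consensus value is mean(a) = 4. All problem data is illustrative.
a = np.array([1.0, 4.0, 7.0])
Wg = np.array([[0.50, 0.50, 0.00],     # symmetric, doubly stochastic, connected
               [0.50, 0.25, 0.25],
               [0.00, 0.25, 0.75]])
L = np.eye(3) - Wg                     # I - W
eta = 0.2                              # step size in (0, 1/4), per Corollary 3

def prox(v):
    # prox of eta*f with f_i(u) = 0.5*(u - a_i)^2: (v + eta*a)/(1 + eta)
    return (v + eta * a) / (1.0 + eta)

x_prev = x = np.zeros(3)               # initial points x_{-1}, x_0
y_prev = y = np.zeros(3)               # initial points y_{-1}, y_0
for _ in range(20000):
    x_hat = x - L @ (2.0 * eta * y - eta * y_prev)   # parallel primal step
    x_next = prox(x_hat)
    y_next = y + L @ (2.0 * eta * x - eta * x_prev)  # dual step uses old x only
    x_prev, x, y_prev, y = x, x_next, y, y_next

print(x)   # each entry approaches mean(a) = 4.0
```

Note that every update uses only a node's own cost and its neighbors' past iterates, which is what makes the scheme fully parallel.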
\section{Numerical Example}
We now give an example on which the \textit{direct} extension of ADMM diverges, while Algorithm 3 converges to a solution of the problem.
\textit{Example 1}: Consider the following three-block optimization \cite[Rem. 3.2]{ADMMdiverges}:
\begin{equation}\label{ADMMdivergentoptim}
\begin{aligned}
& \underset{x_{1},x_{2},x_{3},x_{4}}{\text{min}}
& & \frac{1}{2} x_{1}^{2} \\
& \text{subject to}
& & \begin{pmatrix}
1 & 1\\
1 & 1 \\
1 & 1
\end{pmatrix}\begin{pmatrix}
x_{1} \\
x_{2}
\end{pmatrix}+\begin{pmatrix}
1 \\
1 \\
2
\end{pmatrix} x_{3}+\begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix} x_{4}=0. \\
\end{aligned}
\end{equation}
Optimization (\ref{ADMMdivergentoptim}) has the unique optimal solution $x_{1}=x_{2}=x_{3}=x_{4}=0$, at which the optimal value is finite. Consequently, the conditions of Propositions 1 and 2 are satisfied, and we can guarantee the existence of a Lagrange multiplier associated with the unique solution. Therefore, Assumption 4 is satisfied, and hence the conditions of Theorem 2 are satisfied.
It has been shown analytically in \cite{ADMMdiverges} that the direct extension of ADMM applied to (\ref{ADMMdivergentoptim}) diverges for \textit{any} $\rho >0$ in the augmented Lagrangian (\ref{augmlagrangmultiblock}) and \textit{any} initial conditions. We now simulate Algorithm 3 with $\rho=1$, as well as Algorithm 4 (i.e., Algorithm 3 with $\rho=0$), and show that the sequences generated by both algorithms converge to the solution of (\ref{ADMMdivergentoptim}) from randomly selected initial conditions.
Here, we have that
$$A_{1}=\begin{pmatrix}
1 & 1 \\
1 & 1 \\
1 & 1
\end{pmatrix}, A_{2}=\begin{pmatrix}
1 \\
1 \\
2
\end{pmatrix}, A_{3}=\begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix}, c= \textbf{0}_{3},$$
$$ f_{1}(x_{1},x_{2})=\frac{1}{2} x_{1}^{2}, f_{2}(x_{3})=0, f_{3}(x_{4})=0.$$
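The divergence of the direct three-block ADMM on this data can also be reproduced numerically. The sketch below applies the standard Gauss-Seidel sweep with closed-form block minimizers derived from the augmented Lagrangian of (\ref{ADMMdivergentoptim}); the initialization, $\rho=1$, and the iteration count are our own choices. Consistent with the analytic result of \cite{ADMMdiverges}, the iterates drift away from the solution $0$:

```python
import numpy as np

rho = 1.0
A1 = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
a2 = np.array([1.0, 1.0, 2.0])
a3 = np.array([1.0, 2.0, 2.0])

u = np.array([1.0, 1.0])     # block (x1, x2); nonzero start (0 is the solution)
x3, x4 = 1.0, 1.0
lam = np.zeros(3)            # Lagrange multiplier

H1 = np.diag([1.0, 0.0]) + rho * A1.T @ A1   # Hessian of the (x1, x2)-subproblem
for _ in range(500):
    # Gauss-Seidel sweep over the three blocks, then the multiplier ascent step.
    u = np.linalg.solve(H1, -A1.T @ (lam + rho * (a2 * x3 + a3 * x4)))
    x3 = -(a2 @ (lam + rho * (A1 @ u + a3 * x4))) / (rho * (a2 @ a2))
    x4 = -(a3 @ (lam + rho * (A1 @ u + a2 * x3))) / (rho * (a3 @ a3))
    lam = lam + rho * (A1 @ u + a2 * x3 + a3 * x4)

print(np.linalg.norm(np.r_[u, x3, x4]))  # grows instead of shrinking toward 0
```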
Hence, Algorithm 3 reduces to the following algorithm:
\begin{align}
\begin{pmatrix}
x_{1,k+1} \\
x_{2,k+1}
\end{pmatrix} &=\begin{pmatrix}
\frac{1}{1+\eta} & 0 \\
0 & 1
\end{pmatrix}[\begin{pmatrix}
1-6 \eta \rho & -6 \eta \rho \\
-6 \eta \rho & 1-6 \eta \rho
\end{pmatrix} \begin{pmatrix}
x_{1,k} \\
x_{2,k}
\end{pmatrix} -2 \eta \begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix} y_{k} -2 \eta \rho \begin{pmatrix}
4 \\
4
\end{pmatrix} x_{3,k} \nonumber \\
&\quad{}-2 \eta \rho \begin{pmatrix}
5 \\
5
\end{pmatrix} x_{4,k}+\eta \begin{pmatrix}
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix} y_{k-1} +\eta \rho \begin{pmatrix}
3 & 3 \\
3 & 3
\end{pmatrix} \begin{pmatrix}
x_{1,k-1} \\
x_{2,k-1}
\end{pmatrix} +\eta \rho \begin{pmatrix}
4 \\
4
\end{pmatrix} x_{3,k-1} \nonumber \\
&\quad{}+\eta \rho \begin{pmatrix}
5\\
5
\end{pmatrix} x_{4,k-1} ] \label{examplealg1}
\end{align}
\begin{align}
x_{3,k+1} &=(1-12 \eta \rho) x_{3,k}-2 \eta \begin{pmatrix}
1 & 1 & 2
\end{pmatrix} y_{k} -2 \eta \rho \begin{pmatrix}
4 & 4
\end{pmatrix} \begin{pmatrix}
x_{1,k} \\
x_{2,k}
\end{pmatrix} -14 \eta \rho x_{4,k} \nonumber \\
&\quad{}+\eta \begin{pmatrix}
1 & 1 & 2
\end{pmatrix} y_{k-1}+\eta \rho \begin{pmatrix}
4 & 4
\end{pmatrix} \begin{pmatrix}
x_{1,k-1} \\
x_{2,k-1}
\end{pmatrix} + 6 \eta \rho x_{3,k-1}+7 \eta \rho x_{4,k-1} \\
x_{4,k+1} &=(1-18 \eta \rho)x_{4,k}
-2 \eta \begin{pmatrix}
1 & 2 & 2
\end{pmatrix} y_{k} -2 \eta \rho \begin{pmatrix}
5 & 5
\end{pmatrix} \begin{pmatrix}
x_{1,k} \\
x_{2,k}
\end{pmatrix} -14 \eta \rho x_{3,k} \nonumber \\
&\quad{} +\eta \begin{pmatrix}
1 & 2 & 2
\end{pmatrix} y_{k-1} +\eta \rho \begin{pmatrix}
5 & 5
\end{pmatrix} \begin{pmatrix}
x_{1,k-1} \\
x_{2,k-1}
\end{pmatrix} + 7 \eta \rho x_{3,k-1}+9 \eta \rho x_{4,k-1} \\
y_{k+1} &= y_{k}+2 \eta \begin{pmatrix}
1 & 1 \\
1 & 1 \\
1 & 1
\end{pmatrix} \begin{pmatrix}
x_{1,k} \\
x_{2,k}
\end{pmatrix} +2 \eta \begin{pmatrix}
1 \\
1 \\
2
\end{pmatrix} x_{3,k} +2 \eta \begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix} x_{4,k}- \eta \begin{pmatrix}
1 & 1 \\
1 & 1 \\
1 & 1
\end{pmatrix}\begin{pmatrix}
x_{1,k-1} \\
x_{2,k-1}
\end{pmatrix} \nonumber \\
&\quad{}- \eta \begin{pmatrix}
1 \\
1 \\
2
\end{pmatrix} x_{3,k-1}-\eta \begin{pmatrix}
1 \\
2 \\
2
\end{pmatrix} x_{4,k-1} \label{examplealg2}.
\end{align}
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.6]{Fig1}
\caption{Top: Variables $x_{1,k},x_{2,k},x_{3,k},$ and $x_{4,k}$ of Algorithm (\ref{examplealg1})-(\ref{examplealg2}) where $\rho=1$. The result shows that the variables are approaching the origin. Bottom: The error $e_{k}=\Vert [x_{1,k},x_{2,k},x_{3,k},x_{4,k}] \Vert_{2}$. The result shows that the error is converging to zero.}
\label{figure1}
\end{figure}
We use MATLAB for the simulations. First, we set $\rho=1$ in Algorithm (\ref{examplealg1})-(\ref{examplealg2}) and calculate $\Vert \tilde{M}_{1} \Vert_{2}=21.3217$; hence, we select $\eta=\frac{1}{50}$. The result obtained by the algorithm is given in Figure \ref{figure1}. Finally, we set $\rho=0$ in Algorithm (\ref{examplealg1})-(\ref{examplealg2}) (to obtain Algorithm 4) and calculate $\Vert \tilde{M}_{0} \Vert_{2}=4.5129$; hence, we select $\eta=0.1$. The result obtained by the algorithm is given in Figure \ref{figure2}. Both figures show that the sequences $\{x_{1,k}\},\{x_{2,k}\},\{x_{3,k}\},$ and $\{x_{4,k}\}$ generated by Algorithms 3 and 4 converge to the solution of (\ref{ADMMdivergentoptim}).
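For readers who prefer code to the expanded recursions, the following numpy sketch implements the $\rho=0$ case, i.e., Algorithm 4, which coincides with (\ref{examplealg1})-(\ref{examplealg2}) at $\rho=0$, written compactly with $A=[A_{1}\; A_{2}\; A_{3}]$ and $\eta=0.1$ as selected in the text; the initialization and iteration count are our own choices:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 1.0],    # A = [A1 A2 A3], x = (x1, x2, x3, x4)
              [1.0, 1.0, 1.0, 2.0],
              [1.0, 1.0, 2.0, 2.0]])
eta = 0.1                              # step size selected in the text for rho = 0

def prox(v):
    # prox of eta*f with f(x) = 0.5*x1^2: only the x1 entry is shrunk
    out = v.copy()
    out[0] = v[0] / (1.0 + eta)
    return out

x_prev = x = np.ones(4)                # arbitrary nonzero start
y_prev = y = np.zeros(3)
for _ in range(100000):
    x_hat = x - A.T @ (2.0 * eta * y - eta * y_prev)   # parallel primal step
    x_next = prox(x_hat)
    y_next = y + A @ (2.0 * eta * x - eta * x_prev)    # dual step uses old x only
    x_prev, x, y_prev, y = x, x_next, y, y_next

print(np.abs(x).max())   # approaches 0, the unique solution
```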
\begin{figure}[thpb]
\centering
\includegraphics[scale=0.6]{Fig2}
\caption{Top: Variables $x_{1,k},x_{2,k},x_{3,k},$ and $x_{4,k}$ of Algorithm (\ref{examplealg1})-(\ref{examplealg2}) where $\rho=0$. The result shows that the variables are approaching the origin. Bottom: The error $e_{k}=\Vert [x_{1,k},x_{2,k},x_{3,k},x_{4,k}] \Vert_{2}$. The result shows that the error is converging to zero.}
\label{figure2}
\end{figure}
\section{Conclusions and Future Work}
In this paper, we considered a centralized two-block separable optimization problem for which we derived a fully parallel primal-dual discrete-time algorithm based on a monotone operator splitting method. In this algorithm, the primal variables are updated in an alternating fashion, as in ADMM. However, unlike existing discrete-time algorithms such as MM, ADMM, BiADMM, and PDFP, all of which suffer from sequential updates, all primal and dual variables are updated in parallel. One advantage of the proposed algorithm is that it extends directly to any finite multi-block optimization while preserving convergence. We then applied the method to distributed optimization to derive a fully parallel primal-dual distributed algorithm. Finally, we gave a numerical example of a three-block optimization for which the proposed algorithm converges to a solution, whereas the direct extension of ADMM diverges for any choice of $\rho >0$ and any initial conditions. The rate of convergence of the proposed algorithms remains a topic of future research.
% Metadata: arXiv:2009.13730, submitted 2020-09-30, https://arxiv.org/abs/2009.13730
% Title: A Fully Parallel Primal-Dual Algorithm for Centralized and Distributed Optimization
% Subjects: Optimization and Control (math.OC)
% Next document: arXiv:1706.00439, https://arxiv.org/abs/1706.00439
% Title: Tensor Contraction Layers for Parsimonious Deep Nets
% Abstract: Tensors offer a natural representation for many kinds of data frequently
% encountered in machine learning. Images, for example, are naturally represented as
% third order tensors, where the modes correspond to height, width, and channels.
% Tensor methods are noted for their ability to discover multi-dimensional dependencies,
% and tensor decompositions in particular have been used to produce compact low-rank
% approximations of data. In this paper, we explore the use of tensor contractions as
% neural network layers and investigate several ways to apply them to activation
% tensors. Specifically, we propose the Tensor Contraction Layer (TCL), the first
% attempt to incorporate tensor contractions as end-to-end trainable neural network
% layers. Applied to existing networks, TCLs reduce the dimensionality of the
% activation tensors and thus the number of model parameters. We evaluate the TCL on
% the task of image recognition, augmenting two popular networks (AlexNet, VGG). The
% resulting models are trainable end-to-end. Using the CIFAR100 and ImageNet datasets,
% we evaluate the effect of parameter reduction via tensor contraction on performance.
% We demonstrate significant model compression without significant impact on the
% accuracy and, in some cases, improved performance.
\section{Introduction}
Following their successful application to computer vision,
speech recognition, and natural language processing,
deep neural networks have become ubiquitous
in the machine learning community.
And yet many questions remain unanswered:
Why do deep neural networks work?
How many parameters are really necessary
to achieve state of the art performance?
Recently, tensor methods have been used
in attempts to better understand
the success of deep neural networks
\cite{cohen2015expressive,haeffele2015global}.
One class of broadly useful techniques within tensor methods
are tensor decompositions.
While the properties of tensors have long been studied,
in the past decade they have come to prominence
in machine learning in such varied applications
as learning latent variable models \cite{anandkumar2014tensor},
and developing recommender systems \cite{karatzoglou2010multiverse}.
Several recent papers apply tensor learning
and tensor decomposition to deep neural networks
for the purpose of devising neural network learning algorithms
with theoretical guarantees of convergence
\cite{sedghi2016training,janzamin2015generalization}.
Other lines of research have investigated
practical applications of tensor decomposition
to deep neural networks with aims including
multi-task learning \cite{yang2016deep},
sharing residual units \cite{chen2017sharing},
and speeding up convolutional neural networks \cite{lebedev2014speeding}.
Several recent papers apply decompositions
for either initialization \cite{yang2016deep}
or post-training \cite{novikov2015tensorizing}.
These techniques then often require additional fine-tuning
to compensate for the loss of information \cite{yong2015compression}.
However, to our knowledge,
no attempt has been made to apply tensor contractions
as a generic layer directly on the activations or weights
of a deep neural network and to train the resulting network end-to-end.
In deep convolutional neural networks,
the output of each layer is a tensor.
We posit that tensor algebraic techniques
can
exploit multidimensional dependencies
in the activation tensors.
We propose to leverage that structure
by incorporating Tensor Contraction Layers (TCLs) into neural networks.
Specifically, in our experiments,
we apply TCLs directly to the third-order activation tensors
produced by the final convolutional layer of an image recognition network.
Traditional networks flatten this activation tensor,
passing it to subsequent fully-connected layers.
However, the flattening process loses information
about the multidimensional structure of the tensor.
Our experiments show that incorporating TCLs
into several popular deep convolutional networks
can improve their performance,
despite reducing the number of parameters.
Moreover, inference with TCL-equipped networks,
which contain fewer parameters,
requires considerably fewer floating-point operations.
We organize the rest of this paper as follows:
Section~\ref{seq:math} introduces prerequisite concepts
needed to understand the TCL;
Section~\ref{seq:TCL} explains the TCL in detail; Section~\ref{seq:experiments} experimentally evaluates the TCL.
\subsection{Tensor Contraction}
\label{seq:math}
\paragraph{Notation: }
We define tensors as multidimensional arrays,
denoting first-order tensors \(\myvector{v}\) as \emph{vectors},
second-order tensors \(\mymatrix{M}\) as \emph{matrices},
and denoting tensors of order 3 or greater by \(\mytensor{X}\).
\(\mymatrix{M}\myT\) denotes the transpose of \(\mymatrix{M}\).
\paragraph{Tensor unfolding: }
Given a tensor,
\( \mytensor{X} \in \myR^{D_1 \times D_2 \times \cdots \times D_N}\),
the mode-\(n\) unfolding of \(\mytensor{X}\) is a matrix \(\mymatrix{X}_{[n]} \in \myR^{D_n, D_{(-n)}}\),
with \(D_{(-n)} = \prod_{\substack{k=1,\\k \neq n}}^N D_k\)
and is defined by the mapping from element
\( (d_1, d_2, \cdots, d_N)\) to \((d_n, e)\), with
\(
e = \sum_{\substack{k=1,\\k \neq n}}^N d_k \times \prod_{m=k+1}^N D_m
\).
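In numpy, the mode-\(n\) unfolding can be prototyped in one line; note that unfolding conventions differ in how the remaining modes are ordered, and the sketch below uses the row-major ordering common in Python implementations:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: move mode n to the front, then flatten the rest."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

X = np.arange(24).reshape(2, 3, 4)    # a small 2 x 3 x 4 tensor
print(unfold(X, 0).shape)             # (2, 12)
print(unfold(X, 1).shape)             # (3, 8)
print(unfold(X, 1)[0])                # [ 0  1  2  3 12 13 14 15]
```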
\paragraph{n-mode product: }
For a tensor \(\mytensor{X} \in \myR^{D_1 \times D_2 \times \cdots \times D_N}\) and a matrix \( \mymatrix{M} \in \myR^{R \times D_n} \), the \(n\)-mode product of \(\mytensor{X}\) by \( \mymatrix{M}\) is a tensor \(\mytensor{X} \times_n \mymatrix{M} \in \myR^{D_1 \times \cdots \times D_{n-1} \times R \times D_{n+1} \times \cdots \times D_N}\). It
can be expressed via the unfolding of \(\mytensor{X}\) and classical matrix multiplication as:
\begin{equation}
\left(\mytensor{X} \times_n \mymatrix{M}\right)_{[n]} = \mymatrix{M} \mymatrix{X}_{[n]}
\end{equation}
\paragraph{Tensor contraction: }
Given a tensor \(\mytensor{X} \in \myR^{D_1 \times D_2 \times \cdots \times D_N} \),
we can decompose it into a low-dimensional core tensor \(\mytensor{G} \in \myR^{R_1 \times R_2 \times \cdots \times R_N}\) through projection along each of its modes by projection factors
\( \left( \mymatrix{U}^{(1)}, \cdots,\mymatrix{U}^{(N)} \right) \), with \(\mymatrix{U}^{(k)} \in \myR^{R_k, D_k}, k \in (1, \cdots, N)\).
In other words, we can write:
\begin{equation}
\mytensor{G} =
\mytensor{X} \times_1 \mymatrix{U}^{(1)}
\times_2 \mymatrix{U}^{(2)} \times
\cdots
\times_N \mymatrix{U}^{(N)}
\end{equation}
or, in short:
\begin{equation}
\mytensor{G} =
\mytucker{\mytensor{X}}{\mymatrix{U}^{(1)},
\cdots,
\mymatrix{U}^{(N)}}
\end{equation}
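The contraction can be prototyped by implementing the \(n\)-mode product with \texttt{numpy.tensordot} and applying it along each mode in turn; the shapes and random data below are arbitrary:

```python
import numpy as np

def mode_n_product(X, M, n):
    # X x_n M: contract mode n of X with the second index of M
    return np.moveaxis(np.tensordot(M, X, axes=(1, n)), 0, n)

def contract(X, factors):
    # G = X x_1 U^(1) x_2 U^(2) ... x_N U^(N)
    G = X
    for n, U in enumerate(factors):
        G = mode_n_product(G, U, n)
    return G

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 5, 6))
U = [rng.standard_normal((2, 4)),
     rng.standard_normal((3, 5)),
     rng.standard_normal((2, 6))]
G = contract(X, U)
print(G.shape)                        # (2, 3, 2)

# Cross-check against an explicit einsum of the same contraction
G_ref = np.einsum('ai,bj,ck,ijk->abc', U[0], U[1], U[2], X)
print(np.allclose(G, G_ref))          # True
```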
In the case of tensor decomposition,
the factors of the contraction
are obtained by solving a least squares problem.
In particular, closed form solutions
can be obtained for the factor by considering the \(n-\)mode unfolding of \(\mytensor{X}\)
that can be expressed as:
\footnotesize
\begin{equation}
\mymatrix{G}_{[n]} = \mymatrix{U}^{(n)} \mymatrix{X}_{[n]}
\left(\mymatrix{U}^{(1)}
\otimes \cdots
\otimes \mymatrix{U}^{(n-1)}
\otimes \mymatrix{U}^{(n+1)}
\otimes \cdots
\otimes \mymatrix{U}^{(N)} \right)^T
\label{eq:unfold_tucker}
\end{equation}
\normalsize
We refer the interested reader
to the seminal work of Kolda and Bader \cite{kolda2009tensor}.
\subsection{Networks with large fully-connected layers}
Many popular convolutional neural networks for computer vision,
e.g. AlexNet, ResNet, and Inception,
require hundreds of millions of parameters
to achieve the reported results.
This can be problematic
when running these networks for inference
on resource-constrained devices,
where it may not be easy to execute
hundreds of millions of calculations
just to classify a single image.
While these widely used architectures
exhibit considerable variety,
they also exhibit some commonalities.
Often, they consist of blocks containing
convolution, activation and pooling layers
followed by fully-connected layers
before the final classification layer.
Both the popular networks AlexNet \cite{alexnet}
and VGG \cite{vgg} follow this meta-architecture,
with both containing two fully-connected layers
of $4096$ hidden units each.
In both networks, these fully-connected layers
hold over $80$ percent of the parameters.
In VGG, these fully-connected layers account
for 119,545,856 of the 138,357,544 total parameters,
and in AlexNet they account for 54,534,144
of the 62,378,344 total parameters.
\begin{figure}
\begin{center}
\includegraphics[width=8.3cm]{TCL}
\end{center}
\caption{A representation of the Tensor Contraction Layer (TCL) applied on a tensor of order 3.
The input tensor \(\mytensor{X}\) is contracted into a low-dimensionality core \(\mytensor{G}\).
}
\label{fig:TCL_visual}
\end{figure}
Given the enormous computational costs
for both training and running inference in
these networks,
we desire techniques that preserve high accuracy
while reducing the number of parameters
in the network.
Notable work in this direction includes approaches
to induce and exploit sparsity in the parameters
during training \cite{han2015deep}.
\section{Tensor Contraction Layer}
\label{seq:TCL}
In this paper, we propose to incorporate
the tensor contraction
into convolutional neural networks
as an end-to-end trainable layer,
applying it to the third order activation tensor
output by the final convolutional layer.
In particular, given an activation tensor \(\mytensor{X}\)
of size \( \left( D_1, \cdots, D_N \right) \),
we seek a low dimensional core \(\mytensor{G}\)
of smaller size \( \left( R_1, \cdots, R_N \right) \) such that:
\begin{equation}
\mytensor{G} =
\mytensor{X} \times_1 \mymatrix{V}^{(1)}
\times_2 \mymatrix{V}^{(2)} \times
\cdots
\times_N \mymatrix{V}^{(N)}
\end{equation}
with \(\mymatrix{V}^{(k)} \in \myR^{R_k, D_k}, k \in (1, \cdots, N)\).
We leverage this formulation and define a new layer
that takes the activation tensor \(\mytensor{X}\)
obtained from a previous layer
and applies such a projection to it (Figure.~\ref{fig:TCL_visual}).
We optimize the projection factors \( \left(\mymatrix{V}^{(k)}\right)_{k \in [1, \cdots N]}\)
to obtain a low dimensional projection
of the activation tensor
as the output of the layer.
We learn the projection factors by backpropagation
jointly with the rest of the network's parameters.
We call this new layer the tensor contraction layer
and denote by \emph{size--\(\left(R_1, \cdots, R_N\right)\) TCL}, or \emph{TCL--\(\left(R_1, \cdots, R_N\right)\)}
a TCL producing a contracted output of size \(\left(R_1, \cdots, R_N\right)\).
The gradients with respect to each of the factors
can be derived easily from (\ref{eq:unfold_tucker}).
Specifically, for each \(k \in \{1, \cdots, N\}\),
we use the following equivalences:
\scriptsize
\begin{align*}
\myd{\mytensor{G}}{\mymatrix{V}^{(k)}}=&
\myd{\mytensor{X} \times_1 \mymatrix{V}^{(1)}
\times_2 \mymatrix{V}^{(2)} \times
\cdots
\times_N \mymatrix{V}^{(N)}}{\mymatrix{V}^{(k)}}
= \myd{\mytensor{G}_{[k]}}{\mymatrix{V}^{(k)}} \\
= & \myd{\mymatrix{V}^{(k)} \mymatrix{X}_{[k]}
\left(\mymatrix{V}^{(1)}
\otimes \cdots
\otimes \mymatrix{V}^{(k-1)}
\otimes \mymatrix{V}^{(k+1)}
\otimes \cdots
\otimes \mymatrix{V}^{(N)} \right)^T
}{\mymatrix{V}^{(k)}}\\
\label{eq:layer_gradient}
\end{align*}
\normalsize
\begin{figure}
\begin{center}
\includegraphics[width=7cm, height=10cm]{TCLsym}
\end{center}
\caption{A representation of the symbolic graph of the Tensor Contraction Layer.}
\label{fig:TCL_sym}
\end{figure}
In practice, with minibatch training,
we might think of the first mode of an activation tensor
as corresponding to the batch-size.
Technically, it is possible to apply a transformation
along this dimension too,
but we leave this consideration for future work.
It is trivial to exclude this mode by either
starting the \(n\)-mode products at the second mode
or by setting the first factor
to the identity and not optimizing over it.
Therefore, in the remainder of the paper,
we consider the activation tensor
for a single sample for clarity,
without loss of generality.
Figure.~\ref{fig:TCL_sym} presents the symbolic graph
of the tensor contraction layer.
Note that when taking the \(n\)-mode product
over different modes,
the order in which the \(n\)-mode products are computed
does not matter.
\subsection{Complexity of the TCL}
In this section, we detail the number of parameters
and complexity of the tensor contraction layer.
\paragraph{Number of parameters}
Let \(\mytensor{X}\) be an activation tensor of size \( \left( D_1, \cdots, D_N \right) \) which we pass through a size--\( \left(R_1, \cdots, R_N \right) \) tensor contraction layer.
This TCL has a total of \( \sum_{k=1}^{N} D_k \times R_k \) parameters
(corresponding to the factors of the \(N\) \(n-\)mode products)
and produces as output a tensor of size \( \left(R_1, \cdots, R_N \right) \).
By comparison, a fully-connected layer producing an output of the same size,
i.e. with \(H = \prod_{k=1}^{N} R_k \) hidden units,
and taking the same (flattened) tensor as input
would have a total of \( \prod_{k=1}^{N} D_k \times \prod_{k=1}^{N} R_k \) parameters.
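To make the comparison concrete, the sketch below instantiates both counts for the \((256, 7, 7)\) activation used as a running example later in the paper, with a size-preserving TCL:

```python
import math

D = (256, 7, 7)                       # input activation size (D_1, ..., D_N)
R = (256, 7, 7)                       # size-preserving TCL output size
tcl_params = sum(d * r for d, r in zip(D, R))        # sum_k D_k * R_k
fc_params = math.prod(D) * math.prod(R)              # FC with H = prod_k R_k units
print(tcl_params)   # 65634
print(fc_params)    # 157351936
```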
\input{alexnet_cifar}
\paragraph{Complexity}
As noted above, one way to view the TCL
is as a series of matrix multiplications
between the factors of the contraction
and the unfolded activation tensor.
Consider again an activation tensor \(\mytensor{X}\) of size \( \left( D_1, \cdots, D_N \right) \)
and a TCL--\( \left(R_1, \cdots, R_N \right) \)
of complexity \(O(C_{\text{TCL}})\).
We can write \(C_{\text{TCL}} = \sum_{k=1}^N C_k \)
where \(C_k \) is the complexity of the \(k^{\text{th}}\) \(n-\)mode product.
Note that the order in which the products are taken does not matter
due to the commutativity of the \(n-\)mode product over disjoint modes (e.g. it is commutative for \(\mytensor{X} \times_i \mymatrix{U}^{(i)} \times_j \mymatrix{U}^{(j)}\) as long as \( i \neq j\)).
However, for illustrative purposes,
we consider them to be done in order,
from the first mode to the \(N^{\text{th}}\).
We then have:
\begin{equation}
C_k = R_k \times D_k \prod_{i=1}^{k-1}R_i \prod_{j=k+1}^N D_j
\end{equation}
It follows that the overall complexity of the TCL is:
\begin{equation}
C_{\text{TCL}} = \sum_{k=1}^N \prod_{i=1}^{k}R_i \prod_{j=k}^N D_j
\end{equation}
\paragraph{Comparison with a fully-connected layer}
A fully-connected layer with \(H\) hidden units has complexity \(O(C_{\text{FC}})\), with:
\begin{equation}
C_{\text{FC}} = H \prod_{i=1}^N D_i
\end{equation}
Consider a TCL that maintains the size of its input, i.e., \(R_k = D_k\) for every \(k\) in \(\myrange{1}{N}\).
In this case, \( C_k = D_k \prod_{i=1}^N D_i\), and therefore
\begin{equation}
C_{\text{TCL}} = \sum_{k=1}^N D_k \prod_{i=1}^N D_i
\end{equation}
By comparison, a fully-connected layer that also maintains the size of its input, i.e. \(H = \prod_{k=1}^N D_k\), would have a complexity of:
\begin{equation}
C_{\text{FC}} = \left( \prod_{i=1}^N D_i \right)^2
\end{equation}
Notice the product in the fully-connected case versus a sum for the TCL case.
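Instantiating the two formulas for the same \((256, 7, 7)\) activation makes the gap concrete (multiplication counts only, constants ignored):

```python
import math

D = (256, 7, 7)
P = math.prod(D)                      # prod_i D_i = 12544
c_tcl = sum(d * P for d in D)         # sum_k D_k * prod_i D_i
c_fc = P * P                          # (prod_i D_i)^2
print(c_tcl)                          # 3386880
print(c_fc)                           # 157351936
```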
\subsection{Incorporating TCL in a network}
We see several straightforward ways
to incorporate the TCL
into existing neural network architectures.
\paragraph{TCL as An Additional Layer}
First, we can insert a tensor contraction layer
following the last pooling layer,
reducing the dimensionality of the activation tensor
before feeding it to the subsequent two fully-connected layers
and softmax output of the network.
In general, flattening induces a loss of information.
By applying tensor contraction we reduce dimensionality
efficiently by leveraging the multidimensional dependencies
in the activation tensor.
\paragraph{TCL as Replacement of a Fully Connected Layer}
We can also incorporate the TCL into existing architectures
by completely replacing fully-connected layers.
This has the advantage of significantly reducing
the number of parameters in our model.
Concretely, consider an activation tensor
of size \(\left(256, 7, 7\right)\)
that is fed to either a fully-connected layer
(after having been flattened) or to a TCL.
A fully-connected layer with \(4096\) hidden units
has \(256 \times 7 \times 7 \times 4096 = 51,380,224\) parameters.
A TCL that preserves the size of its input, on the other hand,
only has \(256^2 + 7^2 + 7^2 = 65,634\) parameters.
The TCL has roughly \(780\) times fewer parameters than the fully-connected layer.
Similarly, a TCL--\( \left(128, 5, 5\right)\)
(approximately half size) will have only
\( 256 \times 128 + 7 \times 5 + 7 \times 5 = 32,838\) parameters,
or \(1,500\) times fewer parameters than a fully-connected layer.
\section{Experiments}
\label{seq:experiments}
\input{vgg_cifar}
Our experiments investigate the representational power
of the TCL,
demonstrating results on the CIFAR100 dataset \cite{cifar}.
Subsequently, we offer some preliminary results on the ImageNet 1k dataset \cite{imagenet}.
We hypothesize that a TCL
can efficiently represent an activation tensor
for processing by subsequent layers of the network,
allowing for a large reduction in parameters without a reduction in accuracy.
We conduct our investigation on CIFAR100
using the AlexNet \cite{alexnet} and VGG \cite{vgg} architectures,
each modified to take $32\times32$ images as inputs.
We also present results with a traditional AlexNet on ImageNet.
In all cases we report the accuracy (top-1) as well as the space saved, which we quantify as:
\[
\text{space savings} = 1 - \frac{n_{\text{TCL}}}{n_{\text{original}}}
\]
where \(n_{\text{original}}\) is the number of parameters in the fully-connected layers of the standard network and \(n_{\text{TCL}}\) is the number of parameters in the fully-connected layers of the network modified to include the TCL.
To avoid vanishing or exploding gradients, and to make the TCL more robust to changes in the initialization of the factors, we added a batch normalization layer \cite{ioffe2015batch} before and after the TCL.
\subsection{Results on CIFAR100}
The CIFAR100 dataset is composed of 100 classes
containing 600 $32\times32$ images each,
with 500 training images and 100 testing images per class.
In all cases, we report performance on the test set
in terms of top-1 accuracy.
We implemented all models using the MXNet library \cite{mxnet}
and ran all experiments on Amazon Web Services
with two NVIDIA K80 GPUs, training with data parallelism.
Because both the original AlexNet and VGG architectures
were designed for the ImageNet dataset,
which has a larger input image size,
we adapted them for CIFAR100 by adjusting the stride
of the input convolutional layer of both networks
so that they take $32\times32$ input images.
We investigate two sets of experiments, described below.
\begin{description}
\item[Added TCL]
In the first set of experiments, we add a TCL as an additional layer
after the last pooling layer
and perform the contraction
along the two spatial modes of the image,
leaving the modes corresponding to the channel and the batch size untouched.
We gradually reduce the number of hidden units in the two subsequent fully-connected layers, with and without the TCL included, and retrain the networks until convergence, to demonstrate that the TCL can learn more compact representations without compromising accuracy.
\item[TCL substitution] In this case, we completely replace one or both of the fully-connected layers
by a tensor contraction layer.
We reduce the number of hidden units in the subsequent layers proportionally to the reduction in the size of the activation tensor.
\end{description}
\paragraph{Network architectures}
We experimented with an AlexNet,
with an adjusted stride and filter size
in the final convolutional layer.
From the last convolutional layer,
we get an activation tensor of size
\(\left(\textit{batch\_size}, 256, 3, 3\right)\).
Similarly, in the case of the VGG network, we obtain activation tensors of size
\(\left(\textit{batch\_size}, 512, 3, 3\right)\).
We experiment with several variations
of the tensor contraction layer.
First, we consider the case where we project the activations
to a tensor of identical shape.
Additionally, we evaluate the effect
of reducing the dimensionality of the activation tensor
by 25\% and by 50\%.
For AlexNet, because the spatial modes
are already compact,
we preserve the spatial dimensions
and reduce dimensionality along the channel mode.
\input{alexnet_imagenet}
\subsubsection{Results}
Table~\ref{tab:alexnet_cifar} summarizes our results on CIFAR100 using the AlexNet, while results with VGG are presented in Table~\ref{tab:vgg_cifar}.
The first column presents the method,
the second specifies whether a tensor contraction
was added and when this is the case,
the size of the contracted core.
Columns 3 and 4 specify the number of hidden units
in the fully connected layers
or the size of the TCL used instead when relevant.
Column 5 presents the top-1 accuracy on the validation set. Finally, the last column presents the reduction factor
in the number of parameters in the fully connected layers (which represent, as previously mentioned,
more than 80\% of the total number of parameters
of the networks) where the reference
is the original network without any modification (\emph{Baseline}).
A first observation is that adding a tensor contraction layer (\emph{Added TCL} in Tables ~\ref{tab:alexnet_cifar} and \ref{tab:vgg_cifar}) consistently increases performance while having minimal impact on the overall number of parameters. Replacing the first fully-connected layer (\emph{1 TCL substitution} in the Tables) allows us to reduce the number of parameters in the fully connected layers by a factor of more than $3$, while observing the same performance as the original network. By replacing both fully connected layers (\emph{2 TCL substitutions} in the Tables) we can obtain a reduction of more than $92$\(\times\), with only a $2.5\%$ decrease in performance.
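For intuition about where these reduction factors come from, the following back-of-the-envelope count compares a fully-connected layer on the flattened VGG activation with a size-preserving TCL (the hidden size of 4096 is an illustrative assumption, not the exact table entry):

```python
# Fully-connected layer on the flattened VGG activation (batch, 512, 3, 3):
fc_params = (512 * 3 * 3) * 4096          # weight matrix; bias omitted
# Size-preserving TCL on the same tensor: one factor matrix per non-batch mode
tcl_params = 512 * 512 + 3 * 3 + 3 * 3
print(fc_params, tcl_params, round(fc_params / tcl_params))  # 18874368 262162 72
```

This is why an added size-preserving TCL barely changes the total parameter count, while substituting fully-connected layers with TCLs yields large reductions.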
\subsection{Results on ImageNet}
In this section, we present preliminary experiments using the larger ILSVRC 2012 (ImageNet) dataset \cite{imagenet}, using the AlexNet architecture.
ImageNet is composed of $1.2$ million images
for training and 50,000 for validation,
and comprises 1,000 labeled classes.
For these experiments, we trained each network
simultaneously on 4 NVIDIA k80 GPUs
using data parallelism and report preliminary results.
We report Top-1 accuracy on the validation set,
across all 1000 classes. All experiments were run using the same setting.
\paragraph{Network architecture}
We use a standard AlexNet \cite{alexnet}. From the last convolutional layer, we get an activation tensor of size \(\left(\textit{batch\_size}, 256, 5, 5\right)\).
As in the CIFAR100 case, we experiment with several variations of the tensor contraction layer.
We first insert a TCL before the fully-connected layers,
either a size-preserving TCL (i.e. projecting to a tensor of the same size)
or with a smaller size TCL and a proportionally smaller number
of hidden units in the subsequent fully-connected layers.
We then experiment with replacing completely the first fully-connected layer with a TCL.
\subsubsection{Results}
In Table \ref{tab:alexnet_imagenet}
we summarize the results from a standard AlexNet (\emph{Baseline}, first row), with an added tensor contraction layer (\emph{Added TCL}) that preserves the dimensionality of its input (second row) or reduces it (third row).
We also report results for substituting
the first fully-connected layer
with a TCL (\emph{1 TCL substitution}, last row).
Simply adding the TCL improves performance,
while the increase in the number of parameters
in the fully-connected layers is negligible.
We can obtain similar performance by first adding a TCL to reduce the dimensionality of the activation tensor and reducing the number of hidden units in the fully-connected layers, leading to a large space saving with virtually no decrease in performance. Replacing the first fully-connected layer with a size-preserving TCL results in a similar space savings while maintaining the same performance as the standard network.
\section{Discussion}
We introduced a new neural network layer
that performs a tensor contraction
on an activation tensor to yield
a low dimensional representation of it.
By exploiting the natural multi-linear structure
of the data in the activation tensor,
where each mode corresponds to a distinct modality
(i.e. the dimensions of the image and the channels),
we are able to decrease the size
of the data representation
passed to subsequent layers in the network
without compromising accuracy on image recognition tasks.
The biggest practical contribution of the TCL is the drastic reduction in the number of parameters with little to no performance penalty. This smaller parameter count also allows neural networks to perform faster inference with a reduced memory footprint.
We demonstrated this via the performance of TCLs
on the widely used CIFAR100 dataset
with two established architectures, namely AlexNet and VGG.
We also show results with AlexNet on the ImageNet dataset.
Our proposed tensor contraction layer seems to be able to capture the underlying structure in the activation tensor and improve performance when added to an existing network. When we replace fully-connected layers with TCLs,
we significantly reduce the number of parameters
and nevertheless maintain (or in some cases even improve) performance.
Going forward, we plan to extend our work
to more network architectures,
especially in settings where raw data or learned representations exhibit natural multi-modal structure
that we might capture via high-order tensors.
We also endeavor to advance our experimental study
of TCLs for large-scale, high-resolution vision datasets.
Given the time required to train a
large network on such datasets,
we are investigating ways to reduce the dimension
of the tensor contractions of an already trained model and simply fine-tune.
In addition, recent work \cite{shi2016tensor} has shown that
new extended BLAS primitives can avoid the transpositions needed to compute tensor contractions.
This would further speed up the computations, and we plan to use these primitives in future work.
Furthermore, we will look into methods
to induce and exploit sparsity in the TCL,
to understand the parameter reductions this method can yield
over existing state-of-the-art pruning methods.
Finally, we are working on an extension to the TCL:
a tensor regression layer to replace both
the fully-connected and final output layers,
potentially yielding increased accuracy with even greater parameter reductions.
{\small
\bibliographystyle{ieee}
arXiv:1706.00439, Tensor Contraction Layers for Parsimonious Deep Nets (Machine Learning, cs.LG): https://arxiv.org/abs/1706.00439
https://arxiv.org/abs/2208.03941

\title{A high-resolution dynamical view on momentum methods for over-parameterized neural networks}

\begin{abstract}
Due to the simplicity and efficiency of the first-order gradient method, it has been widely used in training neural networks. Although the optimization problem of the neural network is non-convex, recent research has proved that the first-order method is capable of attaining a global minimum for training over-parameterized neural networks, where the number of parameters is significantly larger than that of training instances. Momentum methods, including the heavy-ball method (HB) and Nesterov's accelerated method (NAG), are the workhorse first-order gradient methods owing to their accelerated convergence. In practice, NAG often exhibits better performance than HB. However, current research fails to distinguish their convergence difference in training neural networks. Motivated by this, we provide a convergence analysis of HB and NAG in training an over-parameterized two-layer neural network with ReLU activation, through the lens of high-resolution dynamical systems and neural tangent kernel (NTK) theory. Compared to existing works, our analysis not only establishes tighter upper bounds of the convergence rate for both HB and NAG, but also characterizes the effect of the gradient correction term, which leads to the acceleration of NAG over HB. Finally, we validate our theoretical results on three benchmark datasets.
\end{abstract}

\section{Introduction}
Momentum methods utilize the history of gradients and exhibit accelerated convergence rates compared to the gradient descent method~(GD).
They are frequently used in training neural networks due to the high computational cost of deep learning.
The convergence properties of two well-known momentum methods, the heavy-ball method~(HB)~\cite{polyak1964some} and Nesterov's accelerated method~(NAG)~\cite{nesterov27method}, have been extensively investigated in the convex setting.
In this paper, we consider optimizing a two-layer neural network with $m$ hidden nodes
\begin{equation}
\label{nn}
f(\bm{W}, \a, \bm{x}) = \frac{1}{\sqrt{m}}\sum_{r=1}^m \a_r \delta(\bm{w}_r^{\top} \bm{x}),
\end{equation}
where $\bm{x} \in \mathbb{R}^d$ is the input, $\delta(z)=\max\{z, 0\}$ denotes the ReLU activation function, $\bm{W}=\{\bm{w}_1,\cdots, \bm{w}_m\} \in \mathbb{R}^{d \times m}$ is the parameter of the hidden layer and $\a \in \mathbb{R}^m$ is the parameter of the output layer.
The network is optimized by minimizing the squared loss
\begin{equation}
\label{two-layer}
L(\bm{W}, \a) = \frac{1}{2}\sum_{i=1}^n (f(\bm{W}, \a, \bm{x}_i) - y_i)^2,
\end{equation}
where $\{(\bm{x}_i, y_i)\}_{i=1}^n$ denotes the training set.
Following~\cite{Du2019,Bu2021}, we only update $\bm{W}$ and keep $\a$ fixed during training.
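For concreteness, the model (\ref{nn}) and loss (\ref{two-layer}) can be written as a short NumPy sketch (an illustration only; the sizes are arbitrary, and the initialization follows the $\mathcal{N}(\bm{0},\bm{I})$ and $\mathrm{unif}\{-1,1\}$ scheme assumed in our theorems):

```python
import numpy as np

def f(W, a, x):
    """Two-layer ReLU net: f(W, a, x) = (1/sqrt(m)) * sum_r a_r * relu(w_r^T x)."""
    m = W.shape[1]
    return (a * np.maximum(W.T @ x, 0.0)).sum() / np.sqrt(m)

def loss(W, a, X, y):
    """Squared loss over the training set; only W is trained, a stays fixed."""
    return 0.5 * sum((f(W, a, X[i]) - y[i]) ** 2 for i in range(len(y)))

rng = np.random.default_rng(0)
d, m, n = 5, 100, 10
W = rng.standard_normal((d, m))          # w_r ~ N(0, I)
a = rng.choice([-1.0, 1.0], size=m)      # a_r ~ unif{-1, 1}
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
print(loss(W, a, X, y))                  # a nonnegative scalar
```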
Note that problem~(\ref{two-layer}) is both non-convex and non-smooth.
Recently, plenty of works theoretically provided global convergence guarantees for gradient-based methods in optimizing (\ref{two-layer})~\cite{Du2019,Bu2021,Arora2019}.
From a continuous viewpoint, Bu~{et al.}~\cite{Bu2021} derived the convergence rates of HB and NAG in optimizing (\ref{two-layer}).
Specifically, they consider a non-linear dissipative dynamical system with $b > 0$
\begin{equation}
\label{ode: momentum}
\ddot{\bm{w}}_r(t) + b\dot{\bm{w}}_r(t) + \frac{\partial L(\bm{W}(t), \a)}{\partial \bm{w}_r(t)} = 0,
\end{equation}
which corresponds to the limiting ODE of both HB and NAG\footnote{NAG has several variants~\cite{nesterov2013introductory}.
In this paper, we consider NAG with a constant momentum parameter.} when the learning rate tends to zero~\cite{Wilson2021}.
Consequently, (\ref{ode: momentum}) fails to distinguish the convergence properties between HB and NAG.
To tackle this problem, Shi~{et al.}~\cite{Shi2021} applied dimensional analysis to obtain high-resolution ODEs for HB and NAG, which are in better agreement with the discrete methods compared to (\ref{ode: momentum}) (see Fig.2 in~\cite{Shi2021}).
However, their convergence analysis is in the convex setting.
In this paper, we analyze the convergence of HB and NAG in training the non-convex neural network.
Based on the high-resolution modeling,
we show that HB and NAG have different convergence rates.
In addition, our results provide tighter upper bounds of convergence for both HB and NAG's high-resolution ODEs.
\section{Preliminary}
\subsection{Prior result of the two-layer ReLU over-parameterized neural network}
We first introduce an important Gram matrix induced by the two-layer ReLU neural network
\begin{equation}
\label{H}
\H(t) := \sum_{r=1}^m \frac{\partial \bm{f}(t)}{\partial \bm{w}_r(t)} (\frac{\partial \bm{f}(t)}{\partial \bm{w}_r(t)})^{\top},
\end{equation}
where $\bm{f}_i(t) = f(\bm{W}(t), \a, \bm{x}_i)$ and $\bm{f}(t) = \{\bm{f}_1(t), \cdots, \bm{f}_n(t)\}$.
As the width $m$ goes to infinity, $\H(0)$ converges entrywise to
\begin{equation}
\H^{\infty}_{ij} := \lim_{m \to \infty} \H_{ij}(0)=\mathbb{E}_{\bm{w} \sim \mathcal{N}(\bm{0},\bm{I})} [\bm{x}_i^{\top}\bm{x}_j \mathbb{I}\{\bm{w}^{\top}\bm{x}_i \geq 0, \bm{w}^{\top}\bm{x}_j \geq 0\}].
\end{equation}
Then $\H^{\infty}$ is positive definite under the following condition.
\begin{lemma}[Theorem 3.1 in \cite{Du2019}]
Suppose $\bm{x}_i \nparallel \bm{x}_j$ for any $i \neq j$, then $\lambda_0 :=\lambda_{min}(\H^{\infty}) > 0$.
\end{lemma}
In addition, when the width is sufficiently large,
$\H(0)$ is also positive definite.
\begin{lemma}[Lemma 3.1 in \cite{Du2019}]
If $m=\Omega\left(\frac{n^2}{\lambda_{0}^{2}} \log \left(\frac{n^2}{\delta}\right)\right)$, with probability at least $1-\delta$, it has $\left\|\H(0)-\H^{\infty}\right\|_{2} \leq \frac{\lambda_{0}}{4}$ and $\lambda_{\min }(\H(0)) \geq \frac{3}{4} \lambda_{0}$.
\label{lem:B1}
\end{lemma}
Next, we introduce a lemma showing that, for any $t$, if $\bm{w}_r(t)$ stays close to $\bm{w}_r(0)$, then $\H(t)$ stays close to $\H(0)$, which implies the positive definiteness of $\H(t)$.
\begin{lemma}[Lemma 3.2 in \cite{Du2019}]
Assume $\bm{w}_r(0) \sim \mathcal{N}(\bm{0},\bm{I})$ for $r \in [m]$ and $\left\|\bm{w}_{r}(0)-\bm{w}_{r}\right\|_{2} \leq \frac{c \delta \lambda_{0}}{n^2} =: R$ for some small positive constant $c$, then the following holds with probability at least $1-\delta$: $\|\bm{H}(t)-\bm{H}(0)\|_2<\frac{\lambda_0}{4}$, $\lambda_{\min}(\bm{H}(t))>\frac{\lambda_0}{2}$ and $\lambda_{max}(\H(t)) < \lambda_m :=\lambda_{max}(\H^{\infty}) + \frac{\lambda_0}{4}$.
\label{lem:B2}
\end{lemma}
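Entrywise, $\H_{ij}(0) = \frac{1}{m}\sum_{r=1}^m \bm{x}_i^{\top}\bm{x}_j \mathbb{I}\{\bm{w}_r^{\top}\bm{x}_i \geq 0, \bm{w}_r^{\top}\bm{x}_j \geq 0\}$, so the positive definiteness asserted by the lemmas above can be probed numerically (a toy sanity check with illustrative sizes, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 5, 5000, 4
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # distinct (non-parallel) unit inputs
W = rng.standard_normal((d, m))                 # w_r ~ N(0, I)

S = (X @ W >= 0).astype(float)                  # S[i, r] = I{w_r^T x_i >= 0}
H0 = (X @ X.T) * (S @ S.T) / m                  # H_ij(0)
lam_min = np.linalg.eigvalsh(H0).min()
print(lam_min)  # strictly positive with high probability at this width
```

$\H(0)$ is positive semidefinite by construction (it is a sum of outer products), and for non-parallel inputs its smallest eigenvalue concentrates around $\lambda_{min}(\H^\infty) > 0$ as $m$ grows.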
\subsection{High-resolution ODEs}
According to \cite{Shi2021}, HB has the following high-resolution ODE
\begin{equation}
\label{eq:HB}
\ddot{\bm{w}}_r(t) + 2\sqrt{\alpha}\dot{\bm{w}}_r(t) + (1+\sqrt{ \alpha s})\frac{\partial L(\bm{W}(t), \a)}{\partial \bm{w}_r(t)} = 0,
\end{equation}
where $\alpha \geq 0$ and $s > 0$.
On the other hand, NAG has a different ODE representation
\begin{equation}
\label{eq:NAG}
\ddot{\bm{w}}_r(t) + 2\sqrt{\alpha}\dot{\bm{w}}_r(t) + \sqrt{s} \frac{\partial^2 L(\bm{W}(t), a)}{\partial \bm{w}_r^2(t)} \dot{\bm{w}}_r(t) + (1+\sqrt{\alpha s}) \frac{\partial L(\bm{W}(t), \a)}{\partial \bm{w}_r(t)} = 0.
\end{equation}
When $s \to 0$, (\ref{eq:HB}) and (\ref{eq:NAG}) degenerate to (\ref{ode: momentum}).
Following~\cite{Bu2021}, it has
\begin{eqnarray}
\label{dot_f}
\dot{\bm{f}}_i = \sum_r \frac{\partial \bm{f}_i}{\partial \bm{w}_r}\dot{\bm{w}_r}, \;\;\;\;
\label{ddot_f}
\ddot{\bm{f}}_i = \sum_{r, l \in [m]} \dot{\bm{w}}_r^{\top} \frac{\partial^2 \bm{f}_i}{\partial \bm{w}_r \partial \bm{w}_l}\dot{\bm{w}}_l + \sum_{r \in [m]} \frac{\partial \bm{f}_i}{\partial \bm{w}_r} \ddot{\bm{w}}_r
\overset{a.s.}{=} \sum_{r\in [m]} \frac{\partial \bm{f}_i}{\partial \bm{w}_r} \ddot{\bm{w}}_r.
\end{eqnarray}
\begin{eqnarray}
\label{second_order}
\frac{\partial^2 L }{\partial \bm{w}_r^2} = \left( \frac{\partial \bm{f}}{\partial \bm{w}_r}\right)^{\top}\frac{\partial \bm{f}}{\partial \bm{w}_r} + \frac{\partial^2 \bm{f}}{\partial \bm{w}_r^2}(\bm{f}-\bm{y}) \overset{a.s.}{=} \left( \frac{\partial \bm{f}}{\partial \bm{w}_r}\right)^{\top}\frac{\partial \bm{f}}{\partial \bm{w}_r}.
\end{eqnarray}
Multiplying (\ref{eq:HB}) by $\frac{\partial \bm{f}}{\partial \bm{w}_r}$ and summing over all $r\in[m]$ yields the following ODE, based on (\ref{H}) and (\ref{dot_f}):
\begin{equation}
\label{eq:HB_predy}
\ddot{\bm{f}}(t) + 2\sqrt{\alpha}\dot{\bm{f}}(t) + (1+\sqrt{\alpha s})\H(t)(\bm{f}(t)-\bm{y})=0,
\end{equation}
where $\bm{y} = \{y_1, \cdots, y_n\}$.
Then the dynamics of the error $\Delta = \bm{f} - \bm{y}$ has
\begin{equation}
\label{eq:HB_predy_err}
\ddot{\Delta}(t) + 2\sqrt{\alpha}\dot{\Delta}(t)+(1+\sqrt{\alpha s})\H(t)\Delta(t) = 0.
\end{equation}
Instead of analyzing the non-convex $L$ over $\bm{W}$, we turn to study the optimization of the pseudo-loss $\hat{L}(t):=\frac{1}{2}\Delta^{\top}(t)\H(t) \Delta(t)$, which is $\frac{\lambda_0}{2}$-strongly convex in $\Delta(t)$ according to Lemma~\ref{lem:B2}.
Then, (\ref{eq:HB_predy_err}) can be transformed into
\begin{equation}
\label{eq:HB_error}
\ddot{\Delta}(t) + \sqrt{2\lambda_0}\dot{\Delta}(t)+(1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\partial \hat{L}(t)}{\partial \Delta(t)} = 0,
\end{equation}
where $\alpha = \frac{\lambda_0}{2}$ according to \cite{Shi2021}.
Similarly, NAG has the following dynamics of error as
\begin{eqnarray}
\label{eq:NAG_error}
\ddot{\Delta}(t) + \sqrt{2\lambda_0}\dot{\Delta}(t)+ \sqrt{s}\H(t)\dot{\Delta}(t) + (1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\partial \hat{L}(t)}{\partial \Delta(t)} = 0.
\end{eqnarray}
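The only difference between (\ref{eq:HB_predy_err}) and (\ref{eq:NAG_error}) is the extra damping term $\sqrt{s}\H(t)\dot{\Delta}(t)$. Its accelerating effect can be seen by numerically integrating both error dynamics with a frozen diagonal Gram matrix (a toy sketch under assumed values of $\lambda_0$, $\lambda_m$ and $s$, not part of the analysis):

```python
import numpy as np

lam0, lam_m = 1.0, 4.0
H = np.diag([lam0, lam_m])            # frozen Gram matrix (NTK regime)
s = 2.0 / lam_m                       # step-size condition 0 < s <= 2/lambda_m
c = np.sqrt(2.0 * lam0)               # friction 2*sqrt(alpha) with alpha = lam0/2
k = 1.0 + np.sqrt(lam0 * s / 2.0)

def simulate(extra_damping, T=10.0, dt=1e-3):
    """Semi-implicit Euler for: dd + c*d' + extra*H*d' + k*H*d = 0."""
    delta = np.array([1.0, 1.0])      # initial error
    v = np.zeros(2)                   # initial velocity (weights start at rest)
    for _ in range(int(T / dt)):
        acc = -c * v - extra_damping * (H @ v) - k * (H @ delta)
        v = v + dt * acc
        delta = delta + dt * v
    return np.linalg.norm(delta)

hb = simulate(extra_damping=0.0)            # HB error dynamics
nag = simulate(extra_damping=np.sqrt(s))    # NAG: extra sqrt(s) H delta' term
print(hb, nag)  # NAG's terminal error is noticeably smaller
```

With these assumed constants both trajectories decay, but the NAG trajectory decays faster because of the additional $\sqrt{s}\H$ friction, mirroring the role of the gradient correction term in our analysis.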
\section{Main results}
In this section, we derive the convergence of HB and NAG by analyzing the Lyapunov function along their training trajectories.
\subsection{Convergence analysis of HB}
\begin{thm}
\label{thm_HB}
Suppose $m = \Omega(\frac{n^6}{\delta^3\lambda_0^4})$, $\bm{w}_r(0) \sim \mathcal{N}(\bm{0}, \bm{I})$ and $\a_r \sim \mathrm{unif}\{-1, 1\}$ for any $r \in [m]$; then, with probability at least $1-\delta$, it holds that
\begin{equation}
L(t) \leq \frac{6\hat{L}(0)}{\lambda_0} e^{-(2-\sqrt{2})\sqrt{\lambda_0/2} t}.
\end{equation}
\end{thm}
The above theorem implies that HB is capable of attaining the global minimum of (\ref{two-layer}) at a linear rate.
Compared to the $\frac{\sqrt{\lambda_0}}{4\sqrt{2}}$ rate of HB in~\cite{Shi2021}, we provide a tighter upper bound.
To prove Theorem~\ref{thm_HB}, we first establish the linear convergence of $L$ and an upper bound on the distance between each $\bm{w}_r$ and its initialization, under the positive definiteness assumption on $\H$.
\begin{lemma}
\label{lemma:B4}
Assume $\lambda_{min}(\H(i)) \geq \frac{\lambda_0}{2}$ for $0 \leq i \leq t$.
With $0<s \leq 2/\lambda_m$ and $\dot{\bm{w}}_r(0)=0$ for any $r\in [m]$, it has $L(t) \leq \frac{6\hat{L}(0)}{\lambda_0} e^{-(2-\sqrt{2})\sqrt{\lambda_0/2} t}$ and $\|\bm{w}_r(t) - \bm{w}_r(0) \|_2 \leq R^{'} := 10\sqrt{\frac{6\hat{L}(0) n}{\lambda^3_0 m}} $.
\end{lemma}
\begin{proof}
Motivated by~\cite{Shi2021,Sun2020}, we use the Lyapunov function
\begin{eqnarray}
V(t) := (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) + \frac{1}{4}\|\dot{\Delta}(t)\|_2^2 + \frac{1}{4}\|\dot{\Delta}(t)+\sqrt{2\lambda_0}\Delta(t)\|_2^2.
\end{eqnarray}
By Young's inequality, it has
\begin{equation}
V(t) \leq (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) + \frac{1}{4}(2+\frac{1}{\phi})\|\dot{\Delta}(t)\|_2^2 + \frac{(1+\phi)\lambda_0}{2} \|\Delta(t)\|_2^2 \nonumber,
\end{equation}
where $\phi > 0$.
In addition, the derivative of $V(t)$ has the bound
\begin{eqnarray}
\label{HB:derivate_lya}
\dot{V}(t) &= &(1+\sqrt{\frac{\lambda_0 s}{2}})(\dot{\Delta}(t)^{\top}H(t)\Delta(t) + \frac{1}{2} \Delta(t)^{\top}\dot{H}(t)\Delta(t)) + \frac{1}{2}\langle \dot{\Delta}(t), \ddot{\Delta}(t) \rangle \nonumber\\
& &+ \frac{1}{2}\langle \dot{\Delta}(t)+\sqrt{2\lambda_0}\Delta(t) , \ddot{\Delta}(t)+\sqrt{2\lambda_0}\dot{\Delta}(t) \rangle \nonumber\\
&\overset{a}{=}& (1+\sqrt{\frac{\lambda_0 s}{2}})\dot{\Delta}(t)^{\top}H(t)\Delta(t) + \frac{1}{2}\langle \dot{\Delta}(t), -\sqrt{2\lambda_0}\dot{\Delta}(t)-(1+\sqrt{\frac{\lambda_0 s}{2}})H(t)\Delta(t) \rangle \nonumber\\
&& + \frac{1}{2}\langle \dot{\Delta}(t)+\sqrt{2\lambda_0}\Delta(t), -(1+\sqrt{\frac{\lambda_0 s}{2}})H(t)\Delta(t) \rangle \nonumber \\
&\overset{}{=}&\!\!\!\! -\sqrt{\frac{\lambda_0}{2}}\bigg(\|\dot{\Delta}(t)\|^2 \!+\! (1\!+\!\sqrt{\frac{\lambda_0 s}{2}})\langle H(t)\Delta(t), \Delta(t) \rangle \bigg) \nonumber\\
&\overset{b}{\leq}& -\sqrt{\frac{\lambda_0}{2}}\bigg((1+\sqrt{\frac{\lambda_0 s}{2}})2z\hat{L}(t) + \|\dot{\Delta}(t)\|_2^2 +(1-z){\frac{\lambda_0}{2}}\|\Delta(t)\|_2^2\bigg),\nonumber
\end{eqnarray}
where (a) uses $\dot{H}(t) \overset{a.s.}{=} 0$ according to~\cite{Bu2021}, (b) uses $\Delta^{\top}(t)H(t)\Delta(t)\geq \frac{\lambda_0}{2}\|\Delta(t)\|_2^2$ and $0\leq z \leq 1$.
Thus, HB has the convergence rate $\rho_{HB}^*$ as
\begin{equation}
\label{convergence_rate_HB}
\rho_{HB}^* = \max_{\phi>0, 0\leq z\leq 1} \min\{2z, \frac{4}{2+\frac{1}{\phi}}, \frac{1-z}{1+\phi}\}\sqrt{\frac{\lambda_0}{2}} = (2-\sqrt{2})\sqrt{\frac{\lambda_0}{2}},
\end{equation}
which results in
\begin{equation}
\dot{V}(t) \leq -\rho_{HB}^* V(t).
\end{equation}
Applying Gronwall’s inequality, it has
\begin{eqnarray}
\label{HB: ode_result}
V(t) \leq e^{-\rho_{HB}^* t} V(0).
\end{eqnarray}
Expanding $V$, it is easy to see that
\begin{eqnarray}
(1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) \leq V(t) &\leq& e^{-\rho_{HB}^* t}\big( (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(0) + \frac{1}{4}\|\dot{\Delta}(0)\|^2 + \frac{1}{4}\|\dot{\Delta}(0)+\sqrt{2\lambda_0}\Delta(0)\|^2 \big). \nonumber
\end{eqnarray}
With the initial value $\dot{w}_r(0)= 0$ for any $r \in [m]$, it has
\begin{eqnarray}
\hat{L}(t) &\leq& e^{-\rho_{HB}^* t} \frac{3+\sqrt{\frac{\lambda_0 s}{2}}}{1+\sqrt{\frac{\lambda_0 s}{2}}} \hat{L}(0) \nonumber \\
L(t) &\leq& \frac{6\hat{L}(0)}{\lambda_0} e^{-\rho_{HB}^* t},
\end{eqnarray}
where the last inequality uses $\hat{L}(t) \geq \frac{\lambda_0}{2} L(t)$.
Then, we turn to prove the bound of the distance between $\bm{w}_r(t)$ and $\bm{w}_r(0)$.
Based on (\ref{eq:HB}), we obtain
\begin{eqnarray}
\frac{d}{dt}(e^{\sqrt{2\lambda_0}t}\dot{\bm{w}}_r) &=& -e^{\sqrt{2\lambda_0}t}(1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\a_r}{\sqrt{m}}\sum_{i=1}^n (\bm{f}_i - y_i)\bm{x}_i\mathbb{I}\{\bm{w}_r^{\top} \bm{x}_i \geq 0\}.
\end{eqnarray}
By integrating both sides of the above formula, we get
\begin{eqnarray}
\dot{\bm{w}}_r(t) &=& -e^{-\sqrt{2\lambda_0}t} \int_0^t e^{\sqrt{2\lambda_0}t^{'}}(1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\a_r}{\sqrt{m}}\sum_i (\bm{f}_i(t^{'}) - y_i)\bm{x}_i\mathbb{I}\{\bm{w}_r^{\top}\bm{x}_i \geq 0\}dt^{'}.\nonumber
\end{eqnarray}
Taking the norm and applying $\sum_{i=1}^n \|\bm{x}_i\|_2 \leq \sqrt{n} \|\bm{x}\|_2$, we have
\begin{eqnarray}
\label{distance_HB_new}
\|\dot{\bm{w}}_r(t)\| &\leq& (1+\sqrt{\frac{\lambda_0 s}{2}})\frac{e^{-\sqrt{2\lambda_0}t}\sqrt{n}}{\sqrt{m}}\int_0^t e^{\sqrt{2\lambda_0}t^{'}} \|\bm{f}(t^{'}) - \bm{y}\|_2 dt^{'} \nonumber\\
&\leq& (1+\sqrt{\frac{\lambda_0 s}{2}})\sqrt{\frac{12\hat{L}(0) n}{\lambda_0 m}} e^{-\sqrt{2\lambda_0}t}\frac{e^{(\sqrt{2\lambda_0} - \rho_{HB}^*/2)t} - 1}{\sqrt{2\lambda_0} - \rho_{HB}^*/2}\leq \sqrt{\frac{24\hat{L}(0) n}{\lambda^2_0 m}}e^{-\frac{\sqrt{2}-1}{2}\sqrt{\lambda_0}t}.
\end{eqnarray}
Applying Cauchy-Schwarz inequality on (\ref{distance_HB_new}), it has
\begin{eqnarray}
\|\bm{w}_r(t) - \bm{w}_r(0)\|_2 \leq \int_0^t \|\dot{\bm{w}}_r(t^{'})\|_2 d t^{'} \leq 10\sqrt{\frac{6\hat{L}(0) n}{\lambda^3_0 m}}. \nonumber
\end{eqnarray}
\end{proof}
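The dimensionless constant $2-\sqrt{2}$ in (\ref{convergence_rate_HB}) can be verified independently by a brute-force grid search over $\phi$ and $z$ (a numerical sanity check):

```python
import numpy as np

phis = np.linspace(0.01, 2.0, 4000)
zs = np.linspace(0.0, 1.0, 4000)
best = 0.0
for phi in phis:
    # for fixed phi, maximize the pointwise min of the three terms over z
    vals = np.minimum.reduce([2 * zs,
                              np.full_like(zs, 4 / (2 + 1 / phi)),
                              (1 - zs) / (1 + phi)])
    best = max(best, vals.max())
print(best)  # approximately 2 - sqrt(2) = 0.5858...
```

The optimum balances all three terms, at roughly $\phi \approx 0.207$ and $z \approx 0.293$.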
When $R^{'}$ defined in Lemma~\ref{lemma:B4} is less than $R$ defined in Lemma~\ref{lem:B2}, we obtain the conclusion of Theorem~\ref{thm_HB}.
\begin{lemma}
\label{lem_B5}
Assume $R^{'} < R$, it has (1) $\lambda_{min}(\H(t)) \geq \lambda_0/2$, (2) $\|\bm{w}_r(t) - \bm{w}_r(0)\|_2 \leq R^{'}$ for all $r \in [m]$ and (3) $L(t) \leq \frac{6\hat{L}(0)}{\lambda_0} e^{-(2-\sqrt{2})\sqrt{\lambda_0/2} t}$.
\end{lemma}
\begin{proof}
Our proof follows~\cite{Du2019,Bu2021}.
Suppose the conclusion fails to hold at time $t$; then at least one of the following three cases occurs: (1) $\lambda_{min}(\H(t)) < \lambda_0/2$, (2) $\|\bm{w}_r(t) - \bm{w}_r(0)\|_2 > R^{'}$ for some $r \in [m]$, or (3) $L(t) > \frac{6\hat{L}(0)}{\lambda_0} e^{-(\sqrt{2}-1)\sqrt{\lambda_0} t}$.
When $\lambda_{min}(\H(t)) < \lambda_0/2$, based on Lemma~\ref{lem:B2}, it has
$\|\bm{w}_r(t) - \bm{w}_r(0)\|_2 > R$.
Thus,
there exists a $t_0$ such that
\begin{equation}
t_0 = \inf\{t^{'}:\max_{r \in [m]}\|\bm{w}_r(t^{'}) - \bm{w}_r(0)\|_2 \geq R\}.
\end{equation}
As a result, there exists an $r \in [m]$ satisfying $\|\bm{w}_r(t_0) - \bm{w}_r(0)\|_2=R$.
Applying Lemma~\ref{lem:B2}, it has $\lambda_{min}(H(t^{'})) > \lambda_0/2$ for $t^{'} \leq t_0$.
According to Lemma~\ref{lemma:B4}, it has $\|\bm{w}_r(t_0) - \bm{w}_r(0)\| \leq R^{'} < R$, which makes a contradiction.
Similarly, according to Lemma~\ref{lem:B2}, there exists an $i$ with $\lambda_{min}(\H(i)) < \lambda_0/2$ in case (2) or (3).
The rest of the proof is similar as case (1).
\end{proof}
To satisfy the assumption $R^{'} < R$ in Lemma~\ref{lem_B5},
it suffices to take $m=\Omega(\frac{n^6}{\delta^3\lambda_0^4})$, using the bound on $\hat{L}(0)$ proved in~\cite{Bu2021}.
This completes the proof of Theorem~\ref{thm_HB} for width $m=\Omega(\frac{n^6}{\delta^3\lambda_0^4})$.
\subsection{Convergence analysis of NAG}
In this subsection, we focus on the high-resolution ODE (\ref{eq:NAG_error}) of NAG.
\begin{thm}
\label{thm_NAG}
Suppose $m = \Omega(\frac{n^6}{\delta^3\lambda_0^4})$, $\bm{w}_r(0) \sim \mathcal{N}(\bm{0}, \bm{I})$ and $\a_r \sim \mathrm{unif}\{-1, 1\}$ for any $r \in [m]$; then, with probability at least $1-\delta$, it holds that
\begin{equation}
L(t) \leq \frac{26\hat{L}(0)}{3\lambda_0} e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t},
\end{equation}
where
$$\rho_{NAG}^*(\alpha) = \frac{1}{2}(4+3\alpha - \sqrt{8+16\alpha + 9\alpha^2}), \;\; \alpha = \sqrt{2\lambda_0 s}/4.$$
\end{thm}
Compared to the result of HB in Theorem~\ref{thm_HB}, NAG has a faster convergence rate.
This result also provides a tighter upper bound compared to~\cite{Shi2021,Sun2020}.
\begin{lemma}
Assume $\lambda_{min}(\H(i)) \geq \frac{\lambda_0}{2}$ for $0 \leq i \leq t$.
With $0<s \leq 2/\lambda_m$ and $\dot{\bm{w}}_r(0) = 0$ for any $r\in[m]$, NAG has $L(t) \leq \frac{26\hat{L}(0)}{3\lambda_0} e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t}$ and $\|\bm{w}_r(t) - \bm{w}_r(0) \|_2 \leq 25\sqrt{\frac{\hat{L}(0) n}{\lambda^3_0 m}}$.
\end{lemma}
\begin{proof}
Consider the Lyapunov function
\begin{eqnarray}
V(t) \!\!\!&=&\!\!\! (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) + \frac{1}{4}\|\dot{\Delta}(t)\|^2 + \frac{1}{4}\|\dot{\Delta}(t)+\sqrt{2\lambda_0}\Delta(t) + \sqrt{s}\H(t) \Delta(t)\|^2 \nonumber \\
&\leq&\!\!\!(1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) + \frac{1}{4}\big(1+(1+\frac{1}{\phi})(1+h)\big)\|\dot{\Delta}(t)\|^2 + \frac{s}{4}(1+\frac{1}{\phi})(1+\frac{1}{h})\|\H(t)\Delta(t)\|^2 \nonumber \\
&&+ \frac{\lambda_0 (1+\phi)}{2}\|\Delta(t)\|^2, \nonumber\\
&\leq& (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) + \frac{1}{4}(2+\frac{1}{\phi})\|\dot{\Delta}(t)\|^2 + \frac{\lambda_0(1+\phi)}{2}\|\Delta(t)\|^2 + \sqrt{\frac{\lambda_0 s}{2}}(1+\phi)2\hat{L}(t) \nonumber \\
&& + \frac{s(1+\phi)}{4}\|\H(t)\Delta(t)\|^2, \nonumber
\end{eqnarray}
where $\frac{\lambda_0}{2}\|\Delta(t)\|^2\leq 2\hat{L}(t)$, $\phi>0$ and $h>0$.
Also, the derivative of V has the bound
\begin{small}
\begin{eqnarray}
\dot{V}(t) \!\!\!\!&=&\!\!\!\! (1+\sqrt{\frac{\lambda_0 s}{2}})(\dot{\Delta}(t)^{\top}\H(t)\Delta(t) + \frac{1}{2} \Delta(t)^{\top}\dot{\H}(t)\Delta(t)) + \frac{1}{2}\langle \dot{\Delta}(t),\ddot{\Delta}(t)\rangle \nonumber \\
&&+ \frac{1}{2}\langle \dot{\Delta}(t)+\sqrt{2\lambda_0}\Delta(t) + \sqrt{s}\H(t) \Delta(t), \ddot{\Delta}(t)+\sqrt{2\lambda_0}\dot{\Delta}(t) + \sqrt{s}\dot{\H}(t) \Delta(t)+\sqrt{s}\H(t)\dot{\Delta}(t)\rangle \nonumber \\
&=& -\frac{\sqrt{2\lambda_0}}{2}\|\dot{\Delta}(t)\|^2 -\frac{\sqrt{s}}{2} \dot{\Delta}^{\top}(t)\H(t)\dot{\Delta}(t)-(1+\sqrt{\frac{\lambda_0 s}{2}})\sqrt{2\lambda_0}\hat{L}(t) - \frac{\sqrt{s}}{2}(1+\sqrt{\frac{\lambda_0 s}{2}})\|\H(t)\Delta(t)\|^2 \nonumber\\
&\leq& -\frac{\sqrt{2\lambda_0}}{2}(1+\frac{\sqrt{2\lambda_0 s}}{4})\|\dot{\Delta}(t)\|^2 -(1+\sqrt{\frac{\lambda_0 s}{2}})\sqrt{2\lambda_0}\hat{L}(t) - \frac{\sqrt{s}}{2}(1+\sqrt{\frac{\lambda_0 s}{2}})\|\H(t)\Delta(t)\|^2 \nonumber
\end{eqnarray}
\end{small}
where the last inequality uses $1\geq \sqrt{\frac{\lambda_0 s}{2}}$.
Thus, NAG has the convergence rate $\rho^*$ as
\begin{eqnarray}
\label{convergence_rate_new}
\rho_{NAG}^* &=& \max_{\phi>0} \min\{ \frac{2}{3+2\phi}, \frac{4(1+\frac{\sqrt{2\lambda_0 s}}{4})}{2+1/\phi}, \frac{4}{1+\phi}\} \sqrt{\frac{\lambda_0}{2}}. \nonumber
\end{eqnarray}
Let $\alpha = \frac{\sqrt{2\lambda_0 s}}{4}$, which satisfies $0\leq \alpha \leq \frac{1}{2\sqrt{\kappa}}\leq 1/2$ with $\kappa := \lambda_m/\lambda_0 \geq 1$.
Then it has
\begin{eqnarray}
\rho_{NAG}^*(\alpha) = \frac{1}{2}(4+3\alpha-\sqrt{8+16\alpha+9\alpha^2}),
\end{eqnarray}
which is monotonically increasing in $\alpha$ for $\alpha > 0$.
Therefore, it has
\begin{eqnarray}
\label{NAG: ode_result}
V(t) \leq e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t} V(0).
\end{eqnarray}
With the initial value $\dot{w}_r(0)= 0$ for any $r \in [m]$, it has
\begin{eqnarray}
V(t) &\leq& e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t} V(0) \nonumber \\
(1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(t) \leq V(t) &\leq& e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t}\big( (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(0) + \frac{1}{4}\|\sqrt{2\lambda_0}\Delta(0)+\sqrt{s}\H(0)\Delta(0)\|^2 \big) \nonumber \\
&\leq& e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t}\big( (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(0) + \lambda_0 \|\Delta(0)\|_2^2 + \frac{s}{2}\|\H(0)\Delta(0)\|_2^2\big) \nonumber \\
&\leq&e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t}\big( (1+\sqrt{\frac{\lambda_0 s}{2}})\hat{L}(0) + \frac{4}{3}\hat{L}(0) + 2\hat{L}(0)\big), \nonumber
\end{eqnarray}
where the last inequality uses $\hat{L}(0) \geq \frac{3\lambda_0}{4}\|\Delta(0)\|_2^2$ and $\|\H(0)\Delta(0)\|_2^2 \leq \Delta(0)^{\top}\H(0)\Delta(0)\|\H(0)\|_2\leq 2\hat{L}(0)\lambda_m$.
Then, it has
\begin{eqnarray}
\hat{L}(t) &\leq& e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t} \frac{13/3+\sqrt{\frac{\lambda_0 s}{2}}}{1+\sqrt{\frac{\lambda_0 s}{2}}} \hat{L}(0) \nonumber \\
L(t) &\leq& \frac{26\hat{L}(0)}{3\lambda_0} e^{-\rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2} t}.
\end{eqnarray}
Then, we turn to prove the bound of the distance between $\bm{w}_r(t)$ and $\bm{w}_r(0)$.
Based on (\ref{eq:NAG}), we obtain
\begin{eqnarray}
\frac{d}{dt}(e^{(\sqrt{2\lambda_0}+\sqrt{s} \frac{\partial^2 L(\bm{W}(t), \a)}{\partial \bm{w}_r^2(t)})t}\dot{\bm{w}}_r) &=& -e^{(\sqrt{2\lambda_0}+\sqrt{s} \frac{\partial^2 L(\bm{W}(t), \a)}{\partial \bm{w}_r^2(t)})t}(1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\a_r}{\sqrt{m}}\sum_{i=1}^n (\bm{f}_i - y_i)\bm{x}_i\mathbb{I}\{\bm{w}_r^{\top} \bm{x}_i \geq 0\}, \nonumber
\end{eqnarray}
where the first equality uses $\frac{\partial^2 \bm{f}_i}{\partial \bm{w}_r \partial \bm{w}_l} \overset{a.s.}{=} 0$ for any $r, l \in [m]$ as proved in~\cite{Bu2021}.
In addition, according to (\ref{second_order}), it has
\begin{eqnarray}
\frac{\partial^2 L(\bm{W}(t), a)}{\partial \bm{w}_r^2(t)} &\overset{a.s.}{=}& \left( \frac{\partial \bm{f}}{\partial \bm{w}_r}\right)^{\top}\frac{\partial \bm{f}}{\partial \bm{w}_r}=\sum_{i=1}^n\|\frac{\partial \bm{f}_i}{\partial \bm{w}_r}\|_2^2 = \sum_{i=1}^n \|\frac{1}{\sqrt{m}}\a_r\bm{x}_i \mathbb{I}\{\bm{w}_r^{\top}\bm{x}_i>0\}\|_2^2\leq n/m.
\end{eqnarray}
By integrating both sides of the above formula, it has
\begin{eqnarray}
\label{distance_HB}
\dot{\bm{w}}_r(t) &=& -e^{-(\sqrt{2\lambda_0}+\sqrt{s} \frac{\partial^2 L(\bm{W}(t), a)}{\partial \bm{w}_r^2(t)})t} \int_0^t e^{(\sqrt{2\lambda_0}+\sqrt{s} \frac{\partial^2 L(\bm{W}(t^{'}), a)}{\partial \bm{w}_r^2(t^{'})})t^{'}}(1+\sqrt{\frac{\lambda_0 s}{2}})\frac{\a_r}{\sqrt{m}}\sum_i (\bm{f}_i(t^{'}) - y_i)\bm{x}_i\mathbb{I}\{\bm{w}_r^{\top}\bm{x}_i \geq 0\}dt^{'}. \nonumber
\end{eqnarray}
Taking the norm and applying $\sum_{i=1}^n \|\bm{x}_i\|_2 \leq \sqrt{n} \|\bm{x}\|_2$, then it has
\begin{eqnarray}
\|\dot{\bm{w}}_r(t)\| &\leq& (1+\sqrt{\frac{\lambda_0 s}{2}})\frac{e^{-\sqrt{2\lambda_0}t}\sqrt{n}}{\sqrt{m}}\int_0^t e^{(\sqrt{2\lambda_0}+\sqrt{s}n/m )t^{'}} \|\bm{f}(t^{'}) - \bm{y}\|_2 dt^{'} \nonumber\\
&\leq& (1+\sqrt{\frac{\lambda_0 s}{2}})\sqrt{\frac{52\hat{L}(0) n}{3\lambda_0 m}} e^{-\sqrt{2\lambda_0}t}\frac{e^{(\sqrt{2\lambda_0} + \sqrt{s}n/m - \rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2}/2)t} - 1}{\sqrt{2\lambda_0} + \sqrt{s}n/m - \rho_{NAG}^*(\alpha)\sqrt{\lambda_0/2}/2}\leq 1.701\sqrt{\frac{52\hat{L}(0) n}{3\lambda^2_0 m}}e^{-0.292\sqrt{\lambda_0}t},\nonumber
\end{eqnarray}
where the last inequality uses $m \geq C \sqrt{s}n/\sqrt{\lambda_0}$ for a sufficiently large $C>0$.
Since $\bm{w}_r(t) - \bm{w}_r(0) = \int_0^t \dot{\bm{w}}_r(t^{'})\, dt^{'}$, integrating the above bound yields
\begin{eqnarray}
\|\bm{w}_r(t) - \bm{w}_r(0)\|_2 \leq \int_0^t \|\dot{\bm{w}}_r(t^{'})\|_2 d t^{'} \leq 25\sqrt{\frac{\hat{L}(0) n}{\lambda^3_0 m}}. \nonumber
\end{eqnarray}
\end{proof}
When $0< s \leq 2/\lambda_m$, we have $0<\alpha\leq 1/(2\sqrt{\kappa}) \leq 0.5$.
\begin{figure}[!t]
\centering
\includegraphics[scale=0.8]{NAG.pdf}
\caption{The value of $\rho_{NAG}^*(\alpha)$ with respect to $\alpha$.}
\label{NAG_con}
\label{interval}
\end{figure}
From Fig.~\ref{NAG_con}, it can be observed that $\rho_{NAG}^*(\alpha)$ is a monotonically increasing function of $\alpha$ for $0< \alpha \leq 0.5$.
Thus, we have
\begin{equation}
2 - \sqrt{2} < \rho_{NAG}^*(\alpha) \leq (11-\sqrt{73})/4,
\end{equation}
which shows that NAG converges faster than HB.
Moreover, we obtain a tighter convergence-rate coefficient than existing analyses, such as $0.25$ in~\cite{Shi2021} and $3/7$ in~\cite{Sun2020}.
Similarly, the conclusion of Theorem 3.2 can be obtained by following the proof of Lemma~\ref{lem_B5}.
In addition, the required width is $m = \Omega\big(\max\{{\sqrt{s}n/\sqrt{\lambda_0}}, \frac{n^6}{\delta^3\lambda_0^4}\}\big) = \Omega(\frac{n^6}{\delta^3\lambda_0^4})$, using $\lambda_0 \leq 1/2$ as proved in~\cite{Bu2021}.
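The numerical claims about the interval for $\rho_{NAG}^*(\alpha)$ can be checked directly. The following is an illustrative script (not part of the paper's experiments); it restates the bounds $2-\sqrt{2}$ (the limit as $\alpha \to 0^+$) and $(11-\sqrt{73})/4$ (the value at $\alpha = 0.5$) and compares them with the coefficients from prior work:

```python
from math import sqrt

# Interval bounds for rho*_NAG(alpha) on 0 < alpha <= 0.5, as derived in the text.
lower = 2 - sqrt(2)            # infimum, approached as alpha -> 0+
upper = (11 - sqrt(73)) / 4    # value attained at alpha = 0.5

assert lower < upper           # the interval (2 - sqrt(2), (11 - sqrt(73))/4] is non-empty
assert lower > 3 / 7 > 0.25    # even the lower end exceeds the coefficients 3/7 and 0.25
```

Numerically, $2-\sqrt{2} \approx 0.586$ and $(11-\sqrt{73})/4 \approx 0.614$, so the whole interval lies above both $0.25$ and $3/7$.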
\section{Conclusion}
In this paper, we distinguish the convergence properties of HB and NAG from a high-resolution dynamical view.
Furthermore, our analysis provides tighter upper bounds on the convergence rates for the high-resolution ODEs of HB and NAG.
\bibliographystyle{unsrt}
% arXiv:2208.03941 -- "A high-resolution dynamical view on momentum methods for over-parameterized neural networks" (cs.LG, cs.AI, math.OC)
\title{Impartial Triangular Chocolate Bar Games}
% arXiv: https://arxiv.org/abs/1711.04954
\begin{abstract}
Chocolate bar games are variants of the game of Nim in which the goal is to leave your opponent with the single bitter part of the chocolate bar. The rectangular chocolate bar game is a thinly disguised form of classical multi-heap Nim. In this work, we investigate the mathematical structure of triangular chocolate bar games, in which the triangular chocolate bar can be cut in three directions. In the triangular chocolate bar game, a position is a $\mathcal{P}$-position if and only if $x \oplus y \oplus z = 0$, where the numbers $x,y,z$ stand for the maximum number of times that the chocolate bar can be cut in each direction. Moreover, the Grundy number of a position $(x,y,z)$ is not always equal to $x \oplus y \oplus z$, and a generic formula for Grundy numbers is not known. Therefore, the mathematical structure of the triangular chocolate bar game is different from that of classical Nim.
\end{abstract}
\section{Introduction}\label{intro}
The original chocolate bar game \cite{robin} consists of square boxes in which one square is blue and other squares are brown.
Brown squares are sweet, and the blue square is considered too bitter to eat. For example, see Figure \ref{robinchoco1}.
Each player takes turns breaking the bar in a straight line along the grooves and eating the piece that does not contain the bitter part. The player who breaks the chocolate bar and leaves his opponent with the single bitter blue square is the winner.
Since the horizontal and vertical grooves are independent, the rectangular chocolate bar of Figure \ref{robinchoco1} is equivalent to a game of Nim with three heaps of 3, 7, and 4 stones. Therefore, a rectangular chocolate bar game is mathematically the same as the game of Nim \cite{bouton}.
We have previously studied chocolate bar games such as those shown in Figure \ref{demochocolate4} and Figure \ref{2yzchoco1} in \cite{integer2015}.
In these chocolate bar games, a vertical break can
reduce the number of horizontal breaks, and the mathematical structure of these games is different from that of classical Nim and rectangular chocolate bar games. We can still consider the game as being
played with heaps, but now a single move may change more than one heap.
We have also studied chocolate bar games such as those shown in Figure \ref{demochocolate4} and Figure \ref{2yzchoco1} with a pass move in \cite{integer2016}.
There are other types of chocolate bar games. One of the most well known is CHOMP \cite{gale}, which uses a rectangular chocolate bar. Players take turns choosing one block and eating it together with all the blocks below it and to its right. The top left block is bitter, and the players cannot eat this block. Although many people have studied this game, an explicit winning strategy has yet to be discovered.
In this study, we consider triangular chocolate bar games such as that shown in Figure \ref{demochocolate3}.
A triangular chocolate bar can be cut in three directions. We previously studied a simple type of triangular chocolate bar game in \cite{ipsj1}, and
the results of \cite{ipsj1} are generalized in this article.
In Section \ref{sectionforcomputer}, Mathematica programs and CGSuite programs are presented. Using these programs, readers can check the results of this article on their own computers.
\begin{exam}
Examples of chocolate bar games.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth,bb=0 0 260 292]{robinchoco3.pdf}
\caption{example (1)}
\label{robinchoco1}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 260 58]{demochocol4.pdf}
\caption{example (2)}
\label{demochocolate4}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 260 93]{demochocol2.pdf}
\caption{example (3)}
\label{2yzchoco1}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 116 57]{choco647k3.pdf}
\caption{example (4)}
\label{demochocolate3}
\end{minipage}
\end{figure}
\end{exam}
\section{Definitions and Theorems of Game Theory}
Throughout this study, we denote the set of non-negative integers by $Z_{\geq0}$, and $N$ is the set of natural numbers.
For completeness, we quickly review the necessary game theory concepts used in this study; see \cite{lesson} or \cite{combysiegel} for more details.
As chocolate bar games are impartial games without draws, there will only be two outcome classes.
\begin{defn}\label{NPpositions}
$(i)$ $\mathcal{N}$-positions are positions from which the next player can force a win, as long as he plays correctly at every stage.\\
$(ii)$ $\mathcal{P}$-positions are positions from which the previous player (the player who will play after the next player) can force a win, as long as he plays correctly at every stage.
\end{defn}
The outcome of this game is not pre-determined; however, there is nothing that the potential loser can do if the potential winner plays correctly at every stage. The potential winner cannot afford to make a single mistake, or his opponent can exploit the mistake and win the game.
One of the most important aims in the study of chocolate bar games is the identification of all $\mathcal{P}$-positions and $\mathcal{N}$-positions.
\begin{defn}\label{sumofgames}
The \textit{disjunctive sum} of two games, denoted by $G+H$, is a super-game in which a player may move either in $G$ or $H$, but not in both.
\end{defn}
\begin{defn}
For any position $\mathbf{p}$, there exists a set of positions that can be reached by making precisely one move from $\mathbf{p}$, which we will denote by \textit{move}$(\mathbf{p})$.
\end{defn}
Examples \ref{chocoexmp1} and \ref{defofmovek} demonstrate the use of \textit{move}.
\begin{defn}\label{defofmexgrundy}
$(i)$ The \textit{minimum excluded value} $(\textit{mex})$ of a set, $S$, of non-negative integers is the smallest non-negative integer not in S. \\
$(ii)$ Each position $\mathbf{p}$ of an impartial game has an associated Grundy number, which is denoted by $\mathcal{G}(\mathbf{p})$.
The Grundy number of the end position is $0$, and the Grundy number is found recursively for all other positions:
$\mathcal{G}(\mathbf{p}) = \textit{mex}\{\mathcal{G}(\mathbf{h}): \mathbf{h} \in move(\mathbf{p})\}.$
\end{defn}
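As an illustration of Definition \ref{defofmexgrundy}, the following Python sketch (illustrative only; the paper's own programs in Section \ref{sectionforcomputer} are written in Mathematica and CGSuite) computes Grundy numbers for a single Nim heap, where a move reduces the heap to any strictly smaller size:

```python
from functools import lru_cache

def mex(s):
    """Minimum excluded value: the smallest non-negative integer not in s."""
    g = 0
    while g in s:
        g += 1
    return g

@lru_cache(maxsize=None)
def grundy(n):
    """Grundy number of a one-heap Nim position of n stones.
    The end position has Grundy number mex({}) = 0; all others recurse
    over the positions reachable in one move."""
    return mex({grundy(m) for m in range(n)})
```

For a single heap, the recursion gives $\mathcal{G}(n) = n$, as expected.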
The power of the Sprague--Grundy theory for impartial games is contained in the following theorem.
\begin{thm}\label{thmforsumofgame}
Let $G$ and $H$ be impartial games, and let $\mathcal{G}_{G}$ and $\mathcal{G}_{H}$ be the Grundy numbers of $G$ and $H$, respectively. Then, the following relationships hold:\\
$(i)$ For any position $\mathbf{g}$ of $G$ we have
$\mathcal{G}_{G}(\mathbf{g})=0$ if and only if $\mathbf{g}$ is a $\mathcal{P}$-position.\\
$(ii)$ The Grundy number of a position $\{\mathbf{g},\mathbf{h}\}$ in the game $G+H$ is
$\mathcal{G}_{G}(\mathbf{g})\oplus \mathcal{G}_{H}(\mathbf{h})$.
\end{thm}
Please see \cite{lesson} for a proof of this theorem.
Finally, we define nim-sum, which is important for the theory of chocolate bar games.
\begin{defn}\label{definitionfonimsum11}
Let $x,y$ be non-negative integers written in base $2$ so that $x = \sum\limits_{i = 0}^n {{x_i}} {2^i}$ and $y = \sum\limits_{i = 0}^n {{y_i}} {2^i}$ with ${x_i},{y_i} \in \{0,1\}$.\\
We define the nim-sum $x \oplus y$ by
\begin{align}
x \oplus y = \sum\limits_{i = 0}^n {{w_i}} {2^i},
\end{align}
where $w_{i} \equiv x_{i}+y_{i} \pmod{2}$.
\end{defn}
When we write $x = \sum\limits_{i = 0}^n {{x_i}} {2^i}$ and $y = \sum\limits_{i = 0}^n {{y_i}} {2^i}$, we assume that at least one of $x_n$ and $y_n$ is not zero.
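Computationally, the nim-sum of Definition \ref{definitionfonimsum11} is exactly the bitwise XOR operation (`x ^ y` in many programming languages). A minimal Python sketch of the digit-by-digit definition:

```python
def nim_sum(x, y):
    """Binary addition without carrying: w_i = x_i + y_i (mod 2)."""
    total, bit = 0, 1
    while x or y:
        total += bit * ((x % 2 + y % 2) % 2)  # i-th binary digit, mod 2
        x, y, bit = x // 2, y // 2, bit * 2
    return total
```

For example, for the rectangular bar of position $(3,7,4)$ in Figure \ref{robinchoco1}, `nim_sum(nim_sum(3, 7), 4)` is $0$.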
\section{Rectangular Chocolate Bar Games}\label{rectangle}
We first define rectangular chocolate bar games. Please consult the chocolate bar in Figure \ref{robinchoco} as an example for Definitions \ref{defofgeneralchoco} and \ref{defofchocoandcoordinate}.
\begin{defn}\label{defofgeneralchoco}
The chocolate bar consists of square boxes, where one block is blue and the others are brown.
Brown blocks are sweet, and the blue block is considered too bitter to eat. This game is played by two players in turn.
Each player breaks the chocolate (along a black line) into two areas.
The player eats the area that does not contain the bitter blue block. The player who breaks the chocolate and leaves his opponent with the single bitter blue block is the winner.
\end{defn}
\begin{exam}
The chocolate bars shown in Figure \ref{robinchoco} and Figure \ref{robinchocog} were proposed by Robin \cite{robin}.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 260 292]{robinchoco3.pdf}
\caption{Position (3,7,4)}
\label{robinchoco}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 170 153]{robinchoco3g.pdf}
\caption{Position (x,y,z)}
\label{robinchocog}
\end{minipage}
\end{figure}
\end{exam}
\begin{defn}\label{defofchocoandcoordinate}
We can cut these chocolates along the segments in three ways:\\
$(i)$ vertically on the left side of the bitter blue block;\\
$(ii)$ horizontally above the bitter blue block; and\\
$(iii)$ vertically on the right side of the bitter blue block.\\
Therefore, this chocolate bar can be represented with $(x,y,z)$, where $x,y,z$ stand for the maximum number of times the chocolate bar can be cut in each direction.
\end{defn}
\begin{rem}
For example, in Figure~\ref{robinchoco}, we can make at most three vertical cuts on the left side of the bitter blue block, seven horizontal cuts above the bitter blue block, and four vertical cuts on the right side of the bitter blue block. Therefore, we have $x=3$, $y = 7$, and $z=4$, and we represent the chocolate bar in Figure \ref{robinchoco} with the position $(3,7,4)$.
We can make chocolates as large as we want. For example, the chocolate bar in Figure \ref{robinchocog} has position $(x,y,z)$, where $x,y,z \in Z_{\geq0} $.
In Definition \ref{defofchocoandcoordinate}, we introduced the three ways to cut rectangular chocolate bars.
Example \ref{chocoexmp1} shows how to cut chocolate bars, and we define cutting mathematically in Definition \ref{defofmoverect}.
\end{rem}
\begin{rem}
Note that, in Definition \ref{defofgeneralchoco}, we are not considering a mis\`ere play since the player who breaks the chocolate bar for the last time is the winner. Therefore, the chocolate bar games in this paper are normal play games.
\end{rem}
\begin{defn}\label{defofmoverect}
For $x,y,z \in Z_{\ge 0}$, we define $move((x,y,z))=\{(u,y,z);0 \leq u<x\} \cup \{(x,v,z);0 \leq v<y\} \cup \{(x,y,w);0 \leq w<z\}$, where $u,v,w \in Z_{\ge 0}$.
\end{defn}
Here, $move((x,y,z))$ is the set of states that can be directly reached from the state $(x,y,z)$.
\begin{exam}\label{chocoexmp1}
$(i)$ If we start with the chocolate bar in Figure \ref{robinchoco} and cut vertically to remove three columns to the right of the blue part, we get the chocolate bar in Figure \ref{robinchoco31}.
Then, from the chocolate bar in Figure \ref{robinchoco}, we cut horizontally to remove the seven rows above the blue part, and we get the chocolate bar in Figure \ref{robinchocol1}.
Therefore, $(3,7,1)$, $(3,0,4)$$\in move((3,7,4))$.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 176 279]{robinchoco31.pdf}
\caption{Position (3,7,1)}
\label{robinchoco31}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 57 43]{robinchoco3l1.pdf}
\caption{Position (3,0,4)}
\label{robinchocol1}
\end{minipage}
\end{figure}
\noindent
$(ii)$ If we start with the chocolate bar in Figure \ref{robinchoco}, we cannot directly go to the chocolates in Figures \ref{robinchoco32}, \ref{robinchoco3l2}, and \ref{robinchoco33}. Therefore, $(3,2,1)$, $(0,4,4)$, $(0,0,0) \notin move((3,7,4))$.
\noindent
On the other hand, we can move directly from Figure \ref{robinchoco31} to Figure \ref{robinchoco32} by removing five rows at the same time. Therefore, $(3,2,1) \in move((3,7,1))$.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.05\columnwidth,bb=0 0 35 176]{robinchoco33.pdf}
\vspace{-10mm}
\caption{Position (0,0,0)}
\label{robinchoco33}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 175 107]{robinchoco32.pdf}
\caption{Position (3,2,1)}
\label{robinchoco32}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.4\columnwidth,bb=0 0 113 115]{robinchoco3l2.pdf}
\caption{Position (0,4,4)}
\label{robinchoco3l2}
\end{minipage}
\end{figure}
\end{exam}
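The membership claims of Example \ref{chocoexmp1} can be checked mechanically. Here is a Python sketch of Definition \ref{defofmoverect} (illustrative only; the paper's programs in Section \ref{sectionforcomputer} use Mathematica and CGSuite):

```python
def move(p):
    """All positions reachable in one break from a rectangular bar (x, y, z):
    exactly one coordinate is reduced (Definition of move)."""
    x, y, z = p
    return ({(u, y, z) for u in range(x)}
            | {(x, v, z) for v in range(y)}
            | {(x, y, w) for w in range(z)})
```

It confirms, e.g., that $(3,7,1)$ and $(3,0,4)$ are in $move((3,7,4))$, while $(3,2,1)$, $(0,4,4)$, and $(0,0,0)$ are not, and that $(3,2,1) \in move((3,7,1))$.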
\begin{rem}
It is easy to prove that the chocolate bar in Figure \ref{robinchoco3l2} is a $\mathcal{P}$-position. Note that the numbers of rows and columns are the same in the initial chocolate bar. With the first move, the number of columns will become different from the number of rows. Then, the opposing player can break the bar to make the numbers of rows and columns the same again. By always keeping the numbers of rows and columns equal in this way, the opposing player wins the game by moving to the single bitter block of Figure \ref{robinchoco33}, which is represented by the position $(0,0,0)$.
\end{rem}
\begin{thm}
A position $(x,y,z)$ of a rectangular chocolate bar is a $\mathcal{P}$-position if and only if $x \oplus y \oplus z = 0$.
\end{thm}
For the proof of this theorem, please see \cite{robin}.
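For small positions, the theorem can also be verified by exhaustive search: a position is a $\mathcal{P}$-position exactly when every move from it leads to an $\mathcal{N}$-position (vacuously true for the end position). A short Python check (an illustrative sketch, not the paper's own program):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_P(p):
    """(x, y, z) is a P-position iff every move leads to an N-position.
    The end position (0, 0, 0) has no moves, so it is a P-position."""
    x, y, z = p
    moves = ([(u, y, z) for u in range(x)]
             + [(x, v, z) for v in range(y)]
             + [(x, y, w) for w in range(z)])
    return all(not is_P(q) for q in moves)

# Agrees with the nim-sum criterion on a small grid of positions:
assert all(is_P((x, y, z)) == (x ^ y ^ z == 0)
           for x in range(8) for y in range(8) for z in range(8))
```

In particular, the position $(3,7,4)$ of Figure \ref{robinchoco} satisfies $3 \oplus 7 \oplus 4 = 0$ and is therefore a $\mathcal{P}$-position.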
\section{Triangular Chocolate Bar Games}\label{introforkyxzgame}
Herein, we assume that $k$ is a fixed natural number such that $k = 4m+3$ for some $m \in Z_{\geq0}$.
In this section, we study triangular chocolate bar games.
Examples of triangular chocolate bars are presented in Example \ref{exampleoftrichoco}.
\begin{exam}\label{exampleoftrichoco} Triangular chocolate bars are shown in Figure \ref{choco747a} and Figure \ref{choco929a}.
These chocolate bars consist of polygons, where only one triangle is blue and other polygons are brown.
Brown polygons are sweet, and the blue triangle is considered too bitter to eat.
A triangular chocolate bar can be cut in three ways: $(i)$ diagonally from the upper right to the lower left above the blue triangle; $(ii)$ horizontally above the blue triangle; or $(iii)$ diagonally from the upper left to the lower right above the blue triangle.
In Figure \ref{choco747a}, the numbers $7,4,7$ represent the maximum number of times we can cut the chocolate bar in directions $(i)$, $(ii)$, and $(iii)$, respectively; in
Figure \ref{choco929a}, the numbers $9,2,9$ represent the maximum number of times we can cut the chocolate bar in directions $(i)$, $(ii)$, and $(iii)$, respectively.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 222 113]{chocok3747.pdf}
\caption{Position $(7,4,7)$}
\label{choco747a}
\end{minipage}
\begin{minipage}[!htb]{0.05\columnwidth}
~
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 228 115]{chocok5727.pdf}
\caption{Position $(9,2,9)$}
\label{choco929a}
\end{minipage}
\end{figure}
\end{exam}
We more precisely define the coordinates and cuts available in triangular chocolate bars in Definition \ref{definitionoflines} and
Definition \ref{definitionofchoco}.
\begin{defn}\label{definitionoflines}
In the $x$ and $y$ coordinate system, we draw three groups of lines:
$(i)$ $y = x+1+2r$ for $r \in Z_{\geq0}$.\\
$(ii)$ $y = ks$ for $s \in N$.\\
$(iii)$ $y = -x+1+2t$ for $t \in Z_{\geq0}$.
\end{defn}
The lines defined in Definition \ref{definitionoflines} are shown in Figure \ref{keq3lines} and
Figure \ref{keq5lines} for $k=3$ and $k=7$, respectively.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 260 137]{nanamelinehline.pdf}
\caption{lines for $k=3$}
\label{keq3lines}
\end{minipage}
\begin{minipage}[!htb]{0.05\columnwidth}
~
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 260 160]{nanamelinehline5.pdf}
\caption{lines for $k=7$}
\label{keq5lines}
\end{minipage}
\end{figure}
\begin{defn}\label{definitionofchoco}
Let $u,v,w \in Z_{\geq0}$ such that $kv \leq u+w$.\\
We denote the area of the chocolate bar described by the following four inequalities as position $(u,v,w)$:\\
$(a)$ $y \leq x+1+2u$;\\
$(b)$ $y \leq k(v+1)$;\\
$(c)$ $y \leq -x+1+2w$; and\\
$(d)$ $y \geq 0$.\\
The areas denoted by $(u,v,w)$ are colored brown, except for the triangular area defined by
$y \leq x+1$, $y \leq -x+1$, and $ y \geq 0$, which is colored blue.
This chocolate bar game is played by two players in turn.
Each player breaks the chocolate bar (along a black line) into two areas.
The player eats the area that does not contain the bitter blue triangle. The player who breaks the chocolate bar and leaves his opponent with the single bitter blue triangle is the winner.
The three numbers $u,v,w$ represent the maximum number of times we can cut the chocolate bar in each direction.
\end{defn}
\begin{exam}\label{examkequal3}
Let $k=3$. Here, we present examples of triangular chocolate bars of position $(u,v,w)$ for $u,v,w \in Z_{\ge 0}$
such that $kv \leq u + w$.
\noindent
$(i)$ Let $(u,v,w) =(7,4,7)$. Then, by inequalities in $(a)$, $(b)$, $(c)$, and $(d)$ of Definition \ref{definitionofchoco},
we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 144]{demok31.pdf}
\caption{Position $(7,4,7)$}
\label{demok31}
\end{figure}
\noindent
$(a.1)$ $y \leq x + 15$;\\
$(b.1)$ $y \leq 15$;\\
$(c.1)$ $y \leq -x+15$; and\\
$(d.1)$ $y \geq 0$.\\
Lines $y = x + 15$, $y = 15$, and $y = -x+15$ are represented by red lines in Figure \ref{demok31}.
Here, we can omit inequality $(b.1)$ since the inequalities in $(a.1)$ and $(c.1)$ imply inequality $(b.1)$.\\
It is easy to see that the three numbers $7,4,7$ represent the maximum number of times we can cut the chocolate bar in each direction.
\noindent
$(ii)$ Let $(u,v,w) =(5,4,7)$. Then, by the inequalities in $(a)$, $(b)$, $(c)$, $(d)$ of Definition \ref{definitionofchoco},
we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 169]{demok32.pdf}
\caption{Position $(5,4,7)$}
\label{demok32}
\end{figure}
\noindent
$(a.2)$ $y \leq x+11$;
\noindent
$(b.2)$ $y \leq 15$;
\noindent
$(c.2)$ $y \leq -x+15$; and
\noindent
$(d.2)$ $y \geq 0$.
\noindent
Lines $y = x+11$, $y = 15$ and $y = -x+15$ are represented by red lines in Figure \ref{demok32}.
Here, we can also omit inequality $(b.2)$ since the inequalities in $(a.2)$ and $(c.2)$ imply inequality $(b.2)$.
Next, we present an example that requires all inequalities in $(a)$, $(b)$, $(c)$, and $(d)$ of Definition \ref{definitionofchoco}.
\noindent
$(iii)$ Let $(u,v,w) =(4,2,7)$. Then, by the inequalities in $(a)$, $(b)$, $(c)$, $(d)$ of Definition \ref{definitionofchoco},
we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 115]{demok33.pdf}
\caption{Position $(4,2,7)$}
\label{demok33}
\end{figure}
\noindent
$(a.3)$ $y \leq x+9$;
\noindent
$(b.3)$ $y \leq 9$;
\noindent
$(c.3)$ $y \leq -x+15$; and
\noindent
$(d.3)$ $y \geq 0$.
\noindent
Lines $y = x+9$, $y=9$, and $y=-x+15$ are represented by red lines in Figure \ref{demok33}.
Here, we cannot omit any inequalities.
\end{exam}
\begin{exam}\label{examkequal7} Let $k =7$.
Here, we present examples of triangular chocolate bars of position $(u,v,w)$ for $u,v,w \in Z_{\ge 0}$
such that $kv \leq u + w$.
\noindent
$(i)$ Let $(u,v,w) =(9,2,9).$ Then, by the inequalities in $(a)$, $(b)$, $(c)$, and $(d)$ of Definition \ref{definitionofchoco},
we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 148]{demok51.pdf}
\caption{Position $(9,2,9)$}
\label{demok51}
\end{figure}
\noindent
$(a.4)$ $y \leq x+19$;
\noindent
$(b.4)$ $y \leq 21$;
\noindent
$(c.4)$ $y \leq -x+19$; and
\noindent
$(d.4)$ $y \geq 0$.
\noindent
Lines $y = x+19$, $y=21$, and $y=-x+19$ are represented by red lines in Figure \ref{demok51}.
Here, we can omit inequality $(b.4)$ since the inequalities in $(a.4)$ and $(c.4)$ imply inequality $(b.4)$.
\noindent
$(ii)$ Let $(u,v,w) =(6,2,8).$ Then, by the inequalities in $(a)$, $(b)$, $(c)$, and $(d)$ of Definition \ref{definitionofchoco}, we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 171]{demok52.pdf}
\caption{Position $(6,2,8)$}
\label{demok52}
\end{figure}
\noindent
$(a.5)$ $y \leq x+13$;
\noindent
$(b.5)$ $y \leq 21$;
\noindent
$(c.5)$ $y \leq -x+17$; and
\noindent
$(d.5)$ $y \geq 0$.
\noindent
Lines $y = x+13$, $y=21$, and $y=-x+17$ are represented by red lines in Figure \ref{demok52}.
\noindent
Here, we can also omit inequality $(b.5)$ since the inequalities in $(a.5)$ and $(c.5)$ imply inequality $(b.5)$.
\noindent
$(iii)$ Let $(u,v,w) =(6,0,8)$. Then, by the inequalities in $(a)$, $(b)$, $(c)$, and $(d)$ of Definition \ref{definitionofchoco},
we have the following inequalities:
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 260 83]{demok53.pdf}
\caption{Position $(6,0,8)$}
\label{demok53}
\end{figure}
\noindent
$(a.6)$ $y \leq x+13$;
\noindent
$(b.6)$ $y \leq 7$;
\noindent
$(c.6)$ $y \leq -x+17$; and
\noindent
$(d.6)$ $y \geq 0$.
Lines $y = x+13$, $y=7$, and $y=-x+17$ are represented by red lines in Figure \ref{demok53}.
\noindent
Here, we cannot omit inequality $(b.6)$ since the inequalities in $(a.6)$ and $(c.6)$ do not imply inequality $(b.6)$.
\end{exam}
In Definition \ref{definitionofchoco}, Example \ref{examkequal7}, and Example \ref{examkequal3}, we used the $x$ and $y$ coordinates to define the shape and position of the chocolate bar. However, in the remainder of this paper, we study chocolate bars and their positions without explicitly describing the coordinate system.
\begin{exam}\label{howtocut}
Let $k=3$. Here, chocolates are presented with their positions $(x,y,z)$.
These positions clearly satisfy the inequality
$3y \leq x + z$, i.e., $y \leq \lfloor \frac{x+z}{3} \rfloor $, where $x,y,z$ are the first, second, and third numbers of the position, respectively.
This example shows how to cut triangular chocolate bars.
\noindent
$(i)$ Suppose that we cut the chocolate bar in Figure \ref{chocok3747} to get the bar in Figure \ref{chocok3547} in the way demonstrated in Figure \ref{chocok3547cut}.
Starting with the chocolate bar in Figure \ref{chocok3747} that has the position $(x,y,z)=(7,4,7)$ and reducing $x$ to $5$,
we have Figure \ref{chocok3547} with the position $(x,y,z)=(5,4,7)$.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 222 113]{chocok3747.pdf}
\caption{Position $(7,4,7)$}
\label{chocok3747}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 193 98]{chocok3547.pdf}
\caption{Position $(5,4,7)$}
\label{chocok3547}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.45\columnwidth,bb=0 0 170 94]{chco547.pdf}
\caption{cut the chocolate bar in Figure \ref{chocok3747}}
\label{chocok3547cut}
\end{figure}
\noindent
$(ii)$ Suppose that we cut the chocolate bar in Figure \ref{chocok3747} to get the bar in Figure \ref{chocok3727} in the way demonstrated in Figure \ref{chococut727}.
Starting with the chocolate bar in Figure \ref{chocok3747} that has the position $(x,y,z)=(7,4,7)$ and reducing $y$ to $2$,
we have Figure \ref{chocok3727} with the position $(x,y,z)=(7,2,7)$.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 260 84]{chocok3727.pdf}
\caption{Position $(7,2,7)$}
\label{chocok3727}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.7\columnwidth,bb=0 0 170 94]{chco727.pdf}
\caption{Cut the chocolate bar in Figure \ref{chocok3727}}
\label{chococut727}
\end{minipage}
\end{figure}
\noindent
$(iii)$ Suppose that we cut the chocolate bar in Figure \ref{chocok3747} to get the chocolate in Figure \ref{chocok3732} in the way demonstrated in Figure \ref{chococut}.
Starting with the chocolate bar in Figure \ref{chocok3747} that has the position $(x,y,z)=(7,4,7)$ and reducing $z$ to $2$,
the second number $y=4$ of the position will also be reduced to $y=3$. Therefore, we have Figure \ref{chocok3732} with the position $(x,y,z)=(7,3,2)$.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 260 84]{chocok3732.pdf}
\caption{Position $(7,3,2)$}
\label{chocok3732}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 113 71]{chco732.pdf}
\caption{Cut the chocolate bar in Figure \ref{chocok3732}}
\label{chococut}
\end{minipage}
\end{figure}
\end{exam}
Next, we define the function $movek((x, y, z))$ for a position $(x, y, z)$ of a triangular chocolate bar, where $movek((x, y, z))$ is the set of all positions that can be reached from position $(x, y, z)$ in one step (directly).
\begin{defn}\label{defofmovek} Let $k$ be a natural number such that $k = 4m+3$ for some $m \in Z_{\geq0}$.
For $x,y,z \in Z_{\ge 0}$, we define $movek((x,y,z))=\{(u,y,z);u<x\} \cup \{(x,v,z);v<y\} \cup \{(x,y,w);w<z\} \cup \{(u,\min(y, \lfloor \frac{u+z}{k} \rfloor ),z);u<x\} $
\\$\cup \{(x,\min(y, \lfloor \frac{x+w}{k} \rfloor ),w);w<z\}$, where $u,v,w \in Z_{\ge 0}$.
\end{defn}
\begin{exam}
Here, we study the case of $k=3$.
\noindent
$(i)$ By Example \ref{howtocut}, $ (5,4,7) \in move3((7,4,7))$.
\noindent
$(ii)$ Starting with the chocolate bar in Figure \ref{chocok3747} and reducing $z$ to $2$,
the second number of the position will be $\min(4, \lfloor \frac{7+2}{3} \rfloor )=\min(4,3)=3$. Therefore, we have the chocolate bar in Figure \ref{chocok3732} with the position $(7,3,2)$, and $ (7,3,2) \in move3((7,4,7))$.
\noindent
$(iii)$ Similarly, the chocolate bar with $(7,2,7)$ can be easily obtained by reducing the second number of the position to 2. Therefore, $ (7,2,7) \in move3((7,4,7))$.
\end{exam}
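The cutting rules behind Definition \ref{defofmovek} can be sketched in a few lines of Python for $k=3$ (illustrative only; this sketch keeps just the $\min$-adjusted cuts in the first and third directions, which subsume the plain reductions whenever $ky \leq u+z$ already holds):

```python
def move_k(p, k=3):
    """Positions reachable in one cut from a triangular bar (x, y, z).
    Reducing the first or third coordinate may also lower y, via
    min(y, floor((u + z) / k)) resp. min(y, floor((x + w) / k))."""
    x, y, z = p
    out = set()
    for u in range(x):                              # first direction
        out.add((u, min(y, (u + z) // k), z))
    for v in range(y):                              # second direction
        out.add((x, v, z))
    for w in range(z):                              # third direction
        out.add((x, min(y, (x + w) // k), w))
    return out
```

It confirms the memberships from the example above: $(5,4,7)$, $(7,2,7)$, and $(7,3,2)$ all lie in $move3((7,4,7))$.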
\section{Sequences of Three Functions}\label{studyofsequences}
Let $k = 4m+3$ for some $m \in Z_{\geq0}$. In this section, we study sequences constructed by three functions. These sequences of three functions will be used in Section \ref{relation1} to study the chocolate bar games that satisfy the inequality $y \leq \lfloor \frac{x+z}{k} \rfloor$.
\begin{defn}\label{definitionp1p2p3}
We define three functions
${P_1}^k(x)=2x+2$, ${P_2}^k(x)=2x$, and ${P_3}^k(x)=2x+1-k$ for any $x \in Z_{\geq0}$.
\end{defn}
Note that ${P_1}^k(x) > {P_2}^k(x) > {P_3}^k(x)$ for any $x \in Z_{\geq0}$.
\begin{exam}\label{nonproperseq}
Let $k = 7$. Then, ${P_1}^7(x)=2x+2$, ${P_2}^7(x)=2x$, and ${P_3}^7(x)=2x+1-7=2x-6$.
\noindent
$(i)$ If we apply ${P_2}^7,{P_3}^7,{P_2}^7,{P_3}^7,{P_1}^7$ repeatedly to $2$, then
we have
$\{2,{P_2}^7(2)=4, {P_3}^7(4)=2,{P_2}^7(2)=4,
{P_3}^7 (4)=2, {P_1}^7 (2)=6\}$
$=\{2, 4,2,4,2,6\}$.\\
$(ii)$
If we apply ${P_2}^7,{P_3}^7,{P_2}^7,{P_2}^7,{P_1}^7$ repeatedly to $2$, then we have
$\{2,{P_2}^7(2)=4, {P_3}^7(4)=2,{P_2}^7(2)=4,
{P_2}^7(4)=8,{P_1}^7 (8)=18\}$
$=\{2, 4,2,4,8,18\}$.\\
$(iii)$ If we apply ${P_3}^7,{P_1}^7,{P_2}^7,{P_1}^7,{P_3}^7,{P_1}^7$ repeatedly to $2$, then we have
$\{2,{P_3}^7(2)=-2, {P_1}^7(-2)= -2,
{P_2}^7 (-2)=-4,{P_1}^7(-4)=-6,{P_3}^7(-6)=-18, {P_1}^7(-18)=-34\}$
$=\{2,-2,-2,-4,-6,-18,-34\}$.
\end{exam}
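The sequences of Example \ref{nonproperseq} can be reproduced with a short Python sketch (illustrative; the function codes $1,2,3$ for ${P_1}^k$, ${P_2}^k$, ${P_3}^k$ are our own labeling):

```python
def build(start, funcs, k):
    """Apply the listed functions (1 = P1, 2 = P2, 3 = P3) to `start`
    in order, collecting every intermediate value."""
    p = {1: lambda x: 2 * x + 2,       # P1^k(x) = 2x + 2
         2: lambda x: 2 * x,           # P2^k(x) = 2x
         3: lambda x: 2 * x + 1 - k}   # P3^k(x) = 2x + 1 - k
    seq = [start]
    for f in funcs:
        seq.append(p[f](seq[-1]))
    return seq

assert build(2, [2, 3, 2, 3, 1], 7) == [2, 4, 2, 4, 2, 6]
assert build(2, [2, 3, 2, 2, 1], 7) == [2, 4, 2, 4, 8, 18]
assert build(2, [3, 1, 2, 1, 3, 1], 7) == [2, -2, -2, -4, -6, -18, -34]
```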
In Example \ref{nonproperseq}, we start with $2$ and construct a sequence by repeatedly applying one of the three functions ${P_1}^k$, ${P_2}^k$, or ${P_3}^k$. In this section, all sequences are constructed in this way. Note that these sequences consist of even numbers, since applying ${P_1}^k$, ${P_2}^k$, or ${P_3}^k$ to an even number yields an even number when $k$ is odd.
\begin{defn}\label{typeabc}
We define three types of sequences of integers:
\noindent
$(i)$ A sequence $\{b_0,b_1,b_2,...,b_n\}$ is said to be Type 1 if $0 \leq b_i < k$ for each $i = 0,1,2,...,n$.
\noindent
$(ii)$ A sequence $\{b_0,b_1,b_2,...,b_n\}$ is said to be Type 2 if there exists a non-negative integer $t$ such that $0 \leq b_i < k$ for each $i = 0,1,2,...,t$ and $b_i \geq k$ for each $i = t+1,...,n$.
\noindent
$(iii)$ A sequence $\{b_0,b_1,b_2,...,b_n\}$ is said to be Type 3 if there exists a non-negative integer $t$ such that $0 \leq b_i < k$ for each $i = 0,1,2,...,t$ and $b_i < 0$ for each $i = t+1,...,n$.
\end{defn}
\begin{rem}
Sequences $(i)$, $(ii)$, and $(iii)$ in Example \ref{nonproperseq} are Type 1, Type 2, and Type 3 sequences, respectively.
\end{rem}
\begin{Lem}\label{therearethreetype}
Any sequence $\{b_0,b_1,b_2,...,b_n\}$ is exactly one of Type 1, Type 2, or Type 3.
\noindent
Regarding the criteria of these types, we have the following conditions:
\noindent
$(i)$ The sequence is Type 1 if and only if $0 \leq b_n < k$.
\noindent
$(ii)$ The sequence is Type 2 if and only if $ b_n \geq k$.
\noindent
$(iii)$ The sequence is Type 3 if and only if $b_n < 0$.
\end{Lem}
\begin{proof}
Suppose that $0 \leq b_i < k$ for each $i = 0,1,2,...,n$. Then, the sequence is Type 1.
\noindent
Suppose that there exists $t$ such that $0 \leq b_i < k$ for each $i = 0,1,2,...,t$ and $0 \leq b_{t+1} < k$ is not true. Then, we have
$b_{t+1} < 0$ or $b_{t+1} \geq k$. We study these cases separately.
\noindent
If $b_{t+1} < 0$, then $b_{t+1} \leq -2$ since $b_{t+1}$ is even. Then, by applying ${P_1}^k,{P_2}^k,{P_3}^k$ repeatedly to $b_{t+1}$, we get only negative numbers, and the sequence we construct is Type 3.
\noindent
If $b_{t+1} \geq k$, by applying ${P_1}^k,{P_2}^k,{P_3}^k$ repeatedly to $b_{t+1}$, we get numbers bigger than $k$, and the sequence we construct is Type 2.
\noindent
Therefore, $\{b_0,b_1,b_2,...,b_n\}$ is Type 1, or Type 2, or Type 3, and nothing else.
\noindent
$(i)$ If the sequence is Type 1, then $0 \leq b_n < k$.
\noindent
Conversely, if $0 \leq b_n < k$, then by Definition \ref{typeabc}, this sequence is neither Type 2 nor Type 3. Therefore, this sequence is Type 1.
\noindent
$(ii)$ If the sequence is Type 2, then $ b_n \geq k$.
\noindent
Conversely, if $ b_n \geq k$, then by Definition \ref{typeabc}, this sequence is neither Type 1 nor Type 3. Therefore, this sequence is Type 2.
\noindent
$(iii)$ If the sequence is Type 3, then $b_n < 0$.
\noindent
Conversely, if $b_n < 0$, then by Definition \ref{typeabc}, this sequence is neither Type 1 nor Type 2. Therefore, this sequence is Type 3.
\end{proof}
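By this lemma, only the last term is needed to decide the type of a sequence constructed from $2$ by ${P_1}^k$, ${P_2}^k$, and ${P_3}^k$. A minimal Python sketch (the name \verb|classify| is ours, not from the text):

```python
def classify(seq, k):
    """Classify a sequence via the last-element criterion of the lemma:
    Type 1 if 0 <= b_n < k, Type 2 if b_n >= k, Type 3 if b_n < 0."""
    b_n = seq[-1]
    if b_n < 0:
        return 3
    if b_n >= k:
        return 2
    return 1

# The three sequences of Example \ref{nonproperseq}
print(classify([2, 4, 2, 4, 2, 6], 7))             # 1
print(classify([2, 4, 2, 4, 8, 18], 7))            # 2
print(classify([2, -2, -2, -4, -6, -18, -34], 7))  # 3
```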
\begin{Lem}\label{type2subsequence}
Let $t,n \in Z_{\geq 0}$ such that $t \leq n$.
If $\{b_0,b_1,b_2,...,b_t\}$ is Type 2, then $\{b_0,b_1,b_2,...,b_n\}$ is Type 2.
\end{Lem}
\begin{proof}
Since $b_t \geq k$, by Definition \ref{typeabc}, $\{b_0,b_1,b_2,...,b_n\}$ is neither Type 1 nor Type 3. Therefore, by Lemma \ref{therearethreetype},
$\{b_0,b_1,b_2,...,b_n\}$ is Type 2.
\end{proof}
\begin{Lem}\label{lemmaforwhichk}
For any even number $h$ with $0 \leq h < k$, we have the following statements:
\noindent
$(i)$ If $0 \leq h \leq 2m$, then $0 \leq {P_2}^k(h) < {P_1}^k(h) < k$ and ${P_3}^k(h) < 0$.
\noindent
$(ii)$ If $2m+2 \leq h < k = 4m+3$, then $0 < {P_3}^k(h) < k$ and $k < {P_2}^k(h) < {P_1}^k(h)$.
\noindent
$(iii)$ If $k < {P_u}^k(h) $ for some $u \in \{1,2,3\}$, then $2m+2 \leq h $ and $u = 1$ or $u = 2$.
\noindent
Note that $h \neq 2m+1$ since $h$ is an even number.
\end{Lem}
\begin{proof}
$(i)$ Suppose that $0 \leq h \leq 2m$. Then, we have $0 \leq 2h < 2h+2 \leq 4m+2 < 4m+3 = k$, and by Definition \ref{definitionp1p2p3},
we have $0 \leq {P_2}^k(h) < {P_1}^k(h) < k$.
\noindent
We also have $2h+1-k \leq 4m+1-(4m+3) = -2 < 0$, and by Definition \ref{definitionp1p2p3}, ${P_3}^k(h) < 0$.
\noindent
$(ii)$ Suppose that $2m+2 \leq h < k = 4m+3$. Then, we have
$2m+2 \leq h \leq k-1$, and
$2(2m+2)+1-k \leq 2h+1-k \leq 2(k-1)+1-k$.
\noindent
Therefore, we have $2 \leq 2h+1-k \leq k-1 < k$.
By Definition \ref{definitionp1p2p3}, $0 < {P_3}^k(h) < k$.
\noindent
We also have $k < 4m+4 \leq 2h < 2h+2$, and by Definition \ref{definitionp1p2p3}, $k < {P_2}^k(h) < {P_1}^k(h)$.
\noindent
$(iii)$ Suppose that
\begin{align}\label{ksmallerthp}
k < {P_u}^k(h)
\end{align}
for some $u \in \{1,2,3\}$.
By the contraposition of $(i)$ in this lemma, $2m+2 \leq h $. By $(ii)$ of this lemma, $0 < {P_3}^k(h) < k$ and $k < {P_2}^k(h) < {P_1}^k(h)$.
Therefore, by the inequality in (\ref{ksmallerthp}), we have $u = 1$ or $u = 2$.
\end{proof}
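Since the lemma is a finite case analysis for each $m$, it can also be confirmed exhaustively for small $m$; the following is our own Python sketch, checking only even $h$ because the sequences of this section consist of even numbers:

```python
# Exhaustive check of cases (i) and (ii) of the lemma for small m.
for m in range(0, 5):
    k = 4 * m + 3
    for h in range(0, k, 2):          # even h with 0 <= h < k
        P1, P2, P3 = 2 * h + 2, 2 * h, 2 * h + 1 - k
        if h <= 2 * m:
            assert 0 <= P2 < P1 < k and P3 < 0   # case (i)
        else:                          # h >= 2m + 2
            assert 0 < P3 < k and k < P2 < P1    # case (ii)
print("cases (i) and (ii) verified for m = 0,...,4")
```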
\begin{Lem}\label{lemmaforwhichk2}
A sequence $\{b_0,b_1,b_2,...,b_n\}$ is Type 1 if and only if
for each $i$ we have one of the following cases:
\noindent
$(i)$ If $0 \leq b_i \leq 2m$, then $b_{i+1} = {P_1}^k(b_i)$ or $b_{i+1} = {P_2}^k(b_i)$.
\noindent
$(ii)$ If $2m+2 \leq b_i < k = 4m+3$, then $b_{i+1} = {P_3}^k(b_i)$.
\noindent
Note that $b_i \neq 2m+1$ since $b_i$ is an even number.
\end{Lem}
\begin{proof}
This follows directly from Lemma \ref{lemmaforwhichk}.
\end{proof}
\begin{Lem}\label{lemmaforlonger}
If a sequence $\{b_0,b_1,b_2,...,b_t\}$ is Type 1 and $t < n$, then
we can define $b_i$ for $i = t+1, t+2,...,n$ so that the sequence $\{b_0,b_1,b_2,...,b_n\}$ is Type 1.
\end{Lem}
\begin{proof}
We define $b_{i+1}$ for $i = t,...,n-1$, and we consider two cases.
\noindent
\underline{Case $(a)$}
If $0 \leq b_i \leq 2m$, let $b_{i+1} = {P_1}^k(b_i)$ or $b_{i+1} = {P_2}^k(b_i)$. Then, by $(i)$ of Lemma \ref{lemmaforwhichk}, we have $0 \leq b_{i+1} < k$.
\noindent
\underline{Case $(b)$} If $2m+2 \leq b_i < k$, let $b_{i+1} = {P_3}^k(b_i)$. Then, by $(ii)$ of Lemma \ref{lemmaforwhichk}, we have $0 \leq b_{i+1} < k$.
By Case $(a)$ and Case $(b)$, we construct a sequence $\{b_0,b_1,b_2,...,b_n\}$ such that $0 \leq b_j < k$ for $j = 0,1,...,n$.
This sequence is Type 1.
\end{proof}
\begin{Lem}\label{lemmaforwhichk3}
A sequence $\{c_0,c_1,c_2,...,c_n\}$ is Type 2 if and only if there is a sequence $\{b_0,b_1,b_2,...,b_n\}$ that satisfies both of the following conditions:
\noindent
$(i)$ $\{b_0,b_1,b_2,...,b_n\}$ is Type 1.
\noindent
$(ii)$ There is a non-negative integer $j$ such that $b_t = c_t$ for $t = 0,1,...,j$, \ $2m+2 \leq c_j=b_j < k$, $b_{j+1} = {P_3}^k(b_j)= {P_3}^k(c_j)$, and $c_{j+1} = {P_1}^k(c_j)$ or ${P_2}^k(c_j)$.
\end{Lem}
\begin{proof}
Let $\{c_0,c_1,c_2,...,c_n\}$ be Type 2. Then, there is a non-negative number $j$ such that $0 \leq c_t < k$ for $t = 0,1,...,j$ and
$k < c_{j+1} $. By $(iii)$ of Lemma \ref{lemmaforwhichk}, we have
$2m+2 \leq c_j < k$ and $c_{j+1} ={P_2}^k(c_j)$ or $ c_{j+1}={P_1}^k(c_j)$.
Let $b_i = c_i$ for $i = 0,...,j$. Then, $\{b_0,b_1,b_2,...,b_j\}$ is Type 1. By Lemma \ref{lemmaforlonger},
we can construct a sequence $\{b_0,b_1,b_2,...,b_n\}$ that is Type 1.
\noindent
Conversely, suppose that
there is a sequence $\{b_0,b_1,b_2,...,b_n\}$ that satisfies $(i)$ and $(ii)$ of this Lemma.
Then, $0 \leq c_t < k$ for $t = 0,1,...,j$ and
$2m+2 \leq c_j=b_j < k$, and $c_{j+1} = {P_1}^k(c_j)$ or ${P_2}^k(c_j)$.
Then, by $(ii)$ of Lemma \ref{lemmaforwhichk},
$c_{j+1} ={P_2}^k(c_j)>k$ or $ c_{j+1}={P_1}^k(c_j)>k$.
Therefore, $\{c_0,c_1,c_2,...,c_n\}$ is Type 2.
\end{proof}
\section{Sequence Generated by {\boldmath $x,y,z$}}\label{sequencemadebyxyz}
Let $x,y,z \in Z_{\geq0}$, written in base $2$ as
\begin{align}\label{xyzinbase}
x = \sum_{i=0}^n x_i 2^i , y = \sum_{i=0}^n y_i 2^i \text{ and } z = \sum_{i=0}^n z_i 2^i \text{ with } x_i,y_i,z_i \in \{0,1\}.
\end{align}
Throughout this section, we suppose that
\begin{align}\label{defnofsequencesn11}
x \oplus y \oplus z = 0
\end{align}
and
\begin{align}\label{defnofsequencesn21}
x_n=z_n=1 \text{ \ and \ } y_n=0.
\end{align}
By (\ref{defnofsequencesn11}),
$x_i + y_i + z_i = 0 \ (mod \ 2)$, and hence
\begin{align}\label{defnofsj00000}
(x_i,y_i,z_i) = (1,0,1) \text{ or } (0,0,0) \text{ or } (1,1,0) \text{ or } (0,1,1).
\end{align}
\begin{defn}\label{defnofsequencesj}
We define a sequence of non-negative integers
$s_0,s_1,...,s_n$ by
\begin{align}\label{defnofsj00}
s_j=\sum\limits_{i = n-j}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j-n}}
\end{align}
for $j = 0,1,2,...,n$.
\noindent
This sequence $s_0,s_1,...,s_n$ is said to be generated by $x,y,z$.
\end{defn}
\begin{Lem}\label{valueofs0sn}
By (\ref{defnofsequencesn21}) and (\ref{defnofsj00}),
\begin{align}
& s_0=\sum\limits_{i = n}^n {({x_i}+{z_i}-k{y_i})} {2^{i-n}} = x_n + z_n -ky_n = 2, \label{sequenceforj0} \\
& s_1=\sum\limits_{i = n-1}^n {({x_i}+{z_i}-k{y_i})} {2^{i+1-n}} =(x_{n-1} + z_{n-1} -ky_{n-1}) 2^0 \nonumber\\
& + (x_n + z_n -ky_n)2^1 \label{sequenceforj1} \\
&\text{and } \nonumber \\
& s_n=\sum\limits_{i = 0}^n {({x_i}+{z_i}-k{y_i})} {2^{i}} = x + z -ky. \label{xzkyrelation}
\end{align}
\end{Lem}
\begin{proof}
This lemma follows directly from Definition \ref{defnofsequencesj}.
\end{proof}
\begin{Lem}\label{threesequence}
$(i)$ If $(x_{n-j-1},y_{n-j-1},z_{n-j-1})=(1,0,1)$, then
\begin{align}\label{caseof1}
s_{j+1}=2s_j + 2 = {P_1}^k(s_j).
\end{align}
$(ii)$ If $(x_{n-j-1},y_{n-j-1},z_{n-j-1})=(0,0,0)$, then
\begin{align}\label{caseof2}
s_{j+1}=2s_j ={P_2}^k(s_j).
\end{align}
$(iii)$ If $(x_{n-j-1},y_{n-j-1},z_{n-j-1})=(1,1,0)$ or $(0,1,1)$, then
\begin{align}\label{caseof3}
s_{j+1}= 2s_j+1-k={P_3}^k(s_j).
\end{align}
\end{Lem}
\begin{proof}
By (\ref{defnofsj00}),
\begin{align}
s_{j+1}=& \sum\limits_{i = n-j-1}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j+1-n}} \nonumber \\
= & 2(\sum\limits_{i = n-j}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j-n}})+x_{n-j-1}+z_{n-j-1}-k y_{n-j-1} \nonumber \\
=& 2s_j+x_{n-j-1}+z_{n-j-1}-k y_{n-j-1}. \label{defsjplus}
\end{align}
By (\ref{defsjplus}), we have $(i)$, $(ii)$, or $(iii)$.
\end{proof}
By Lemma \ref{threesequence}, $s_0,s_1,...,s_n$
is the sequence generated by applying ${P_1}^k$, ${P_2}^k$ and ${P_3}^k$
repeatedly to 2.
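As a sanity check, the sequence of Definition \ref{defnofsequencesj} and the identity $s_n = x+z-ky$ of (\ref{xzkyrelation}) can be computed directly from the binary digits. The following is our own Python sketch (the name \verb|s_sequence| is not from the text):

```python
def s_sequence(x, y, z, k, n):
    """s_j = sum_{i=n-j}^{n} (x_i + z_i - k*y_i) * 2^(i+j-n), j = 0,...,n."""
    xb = [(x >> i) & 1 for i in range(n + 1)]
    yb = [(y >> i) & 1 for i in range(n + 1)]
    zb = [(z >> i) & 1 for i in range(n + 1)]
    return [sum((xb[i] + zb[i] - k * yb[i]) * 2 ** (i + j - n)
                for i in range(n - j, n + 1))
            for j in range(n + 1)]

# x = 5 = (101), y = 3 = (011), z = 6 = (110): the nim-sum is 0,
# x_n = z_n = 1 and y_n = 0 for n = 2.  With k = 7:
s = s_sequence(5, 3, 6, 7, 2)
print(s)                        # [2, -2, -10]
print(s[-1] == 5 + 6 - 7 * 3)   # True: s_n = x + z - k*y
```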
\section{Relationship Between the Inequality and the Sequence}\label{relation1}
\begin{Lem}\label{criteriaofineqa}
For $x,y,z \in Z_{\geq0}$, the following conditions hold:
\noindent
$(i)$ $0 \leq x+z-ky < k$ if and only if $ y = \lfloor \frac{x+z}{k} \rfloor$;
\noindent
$(ii)$ $k \leq x+z-ky $ if and only if $ y < \lfloor \frac{x+z}{k} \rfloor$; and
\noindent
$(iii)$ $ x+z-ky < 0$ if and only if $ y > \lfloor \frac{x+z}{k} \rfloor$.
\end{Lem}
\begin{proof}
$(i)$ $0 \leq x+z-ky < k$ if and only if $ky \leq x+z < ky +k$
if and only if $y \leq \frac{x+z}{k} < y+1$ if and only if
$ y = \lfloor \frac{x+z}{k} \rfloor$.\\
Similarly, we prove $(ii)$ and $(iii)$, and we finish the proof.
\end{proof}
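Lemma \ref{criteriaofineqa} is elementary and can be confirmed by brute force over small values; a sketch of our own, with $k=7$ as an example:

```python
k = 7
for x in range(30):
    for z in range(30):
        for y in range(12):
            d = x + z - k * y
            f = (x + z) // k                   # floor((x+z)/k)
            assert (0 <= d < k) == (y == f)    # (i)
            assert (d >= k) == (y < f)         # (ii)
            assert (d < 0) == (y > f)          # (iii)
print("Lemma verified for x, z < 30, y < 12, k = 7")
```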
\begin{Lem}\label{lemmaforwhichk20}
Let $x,y,z \in Z_{\geq0}$ such that
\begin{align}\label{xyzoplus0}
x \oplus y \oplus z = 0,
\end{align}
$x_n = z_n = 1$, and $y_n = 0$,
and let $s_0,s_1,...,s_n$ be the sequence generated from $x,y,z$.
Then, $y = \lfloor \frac{x+z}{k} \rfloor $ if and only if
$s_0,s_1,...,s_n$ is Type 1. For each $j$,
the following conditions hold:
\noindent
$(a)$ If $s_j \leq 2m$, then we have $(a.1)$ or $(a.2)$:
\noindent
$(a.1)$
$(x_{n-j-1},y_{n-j-1},z_{n-j-1})$$=(1,0,1)$ and
\begin{align}
s_{j+1}={P_1}^k(s_j).
\end{align}
$(a.2)$ $(x_{n-j-1},y_{n-j-1},z_{n-j-1})$$=(0,0,0)$ and
\begin{align}
s_{j+1}=2s_j ={P_2}^k(s_j).
\end{align}
$(b)$ If $s_j \geq 2m+2$, then $(x_{n-j-1},y_{n-j-1},z_{n-j-1})$ $=(1,1,0)$ or $(0,1,1)$,
and
\begin{align}
s_{j+1}= 2s_j+1-k={P_3}^k(s_j).
\end{align}
\end{Lem}
\begin{proof}
Suppose that $y = \lfloor \frac{x+z}{k} \rfloor$.
Then, by Lemma \ref{criteriaofineqa} and (\ref{xzkyrelation}), we have $0 \leq s_n=x+z-ky < k$, and by $(i)$ of Lemma \ref{therearethreetype}, the sequence $s_0,s_1,...,s_n$ is Type 1.
Conversely, if the sequence $s_0,s_1,...,s_n$ is Type 1, then by Lemma \ref{criteriaofineqa}, (\ref{xzkyrelation}), and Lemma \ref{therearethreetype},
$y = \lfloor \frac{x+z}{k} \rfloor$.
By Lemma \ref{lemmaforwhichk2}, for each $j$,
we have the following $(a)$ or $(b)$.
\noindent
$(a)$ If $s_j \leq 2m$, then we have $(a.1)$ or $(a.2)$:
\noindent
$(a.1)$
If
\begin{align}
s_{j+1}={P_1}^k(s_j),
\end{align}
then by (\ref{caseof1}), $(x_{n-j-1},y_{n-j-1},z_{n-j-1})$$=(1,0,1)$.\\
$(a.2)$
If
\begin{align}
s_{j+1}=2s_j ={P_2}^k(s_j),
\end{align}
then by (\ref{caseof2}), $(x_{n-j-1},y_{n-j-1},z_{n-j-1})$ $=(0,0,0)$.\\
$(b)$ If $s_j \geq 2m+2$,
\begin{align}
s_{j+1}= 2s_j+1-k={P_3}^k(s_j)
\end{align}
and $(x_{n-j-1},y_{n-j-1},z_{n-j-1})$ $=(1,1,0)$ or $(0,1,1)$.
\end{proof}
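The equivalence in Lemma \ref{lemmaforwhichk20} can be tested exhaustively for a fixed $n$ and $k$; the sketch below (our own Python) enumerates all $x,z$ with bit $n$ set and takes $y = x \oplus z$, so that the hypotheses of the lemma hold automatically:

```python
def s_sequence(x, y, z, k, n):
    xb = [(x >> i) & 1 for i in range(n + 1)]
    yb = [(y >> i) & 1 for i in range(n + 1)]
    zb = [(z >> i) & 1 for i in range(n + 1)]
    return [sum((xb[i] + zb[i] - k * yb[i]) * 2 ** (i + j - n)
                for i in range(n - j, n + 1))
            for j in range(n + 1)]

k, n = 7, 4
for x in range(16, 32):          # bit n = 4 set
    for z in range(16, 32):
        y = x ^ z                # nim-sum zero; y_n = 0 automatically
        seq = s_sequence(x, y, z, k, n)
        type1 = all(0 <= s < k for s in seq)
        assert type1 == (y == (x + z) // k)
print("Type 1 <=> y = floor((x+z)/k), verified for n = 4, k = 7")
```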
\begin{Lem}\label{nimsum0lemma}
Let $x \in Z_{\geq0}\ $, and let $y_i,z_i \in \{0,1\}$ such that $x_i\oplus y_i\oplus z_i=0 $ for $i = n, n-1,...,n-t$ for a fixed natural number $t$, and $x_n=1, y_n = 0, z_n=1.$
We define the sequence $s_j, j = 0,1,2,...,t$ by
\begin{align}\label{defnofsj002}
s_j=\sum\limits_{i = n-j}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
Suppose that $0 \leq s_i < k$ for $i=1,2,...,t$. Then, there exist unique $y_i,z_i \in \{0,1\}$ \\
for $i = n-t-1, n-t-2,...,0$ such that
\begin{align}\label{nimsum0xyz}
x\oplus y\oplus z=0 \text{ and } y = \lfloor \frac{x+z}{k} \rfloor,
\end{align}
where $y = \sum\limits_{i = 0}^n {{y_i}} {2^i}$ and $z = \sum\limits_{i = 0}^n {{z_i}} {2^i}$.
\end{Lem}
\begin{proof}
Let $x \in Z_{\geq0}\ $.
We write $x$ in base $2$, so
\begin{align}\label{alignfxyz}
x = \sum\limits_{i = 0}^n {{x_i}} {2^i}.
\end{align}
We define sequences $y_i,z_i \in \{0,1\}$ for $i = n-t-1, n-t-2,...,0$, and we define
$s_j, j = t+1,..., n-1,n$ using
(\ref{defnofsj002}) step by step.
First, we define $y_{n-t-1}, z_{n-t-1}$ and $s_{t+1}$.\\
We have the following two cases:\\
\underline{Case $(a)$}
Suppose that
\begin{align}\label{casea12}
0 \leq s_t \leq 2m.
\end{align}
Then, we have Subcase $(a.1)$ and Subcase $(a.2)$:\\
\underline{Subcase $(a.1)$}
If $x_{n-t-1}=1$, then let $(x_{n-t-1},y_{n-t-1},z_{n-t-1}) = (1,0,1)$. Then, by (\ref{caseof1}),
$s_{t+1} = {P_1}^k(s_t)$,
and by Lemma \ref{lemmaforwhichk} and (\ref{casea12}), we have $0 \leq s_{t+1} < k$.\\
\underline{Subcase $(a.2)$}
If $x_{n-t-1}=0$, then let $(x_{n-t-1},y_{n-t-1},z_{n-t-1}) = (0,0,0)$. Then, by (\ref{caseof2}),
$s_{t+1} = {P_2}^k(s_t)$, and by
Lemma \ref{lemmaforwhichk} and (\ref{casea12}), we have $0 \leq s_{t+1} < k$.\\
\underline{Case $(b)$}
Suppose that
\begin{align}\label{caseb12}
2m+2 \leq s_t < k.
\end{align}
Then, we have Subcase $(b.1)$ and Subcase $(b.2)$:\\
\underline{Subcase $(b.1)$}
If $x_{n-t-1}=1$, then let $(x_{n-t-1},y_{n-t-1},z_{n-t-1}) = (1,1,0)$. Then, by (\ref{caseof3}), $s_{t+1} = {P_3}^k(s_t)$, and by
Lemma \ref{lemmaforwhichk} and (\ref{caseb12}), we have $0 \leq s_{t+1} < k$.\\
\underline{Subcase $(b.2)$}
If $x_{n-t-1}=0$, then let $(x_{n-t-1},y_{n-t-1},z_{n-t-1}) = (0,1,1)$. Then, by (\ref{caseof3}), $s_{t+1} = {P_3}^k(s_t)$, and by
Lemma \ref{lemmaforwhichk} and (\ref{caseb12}), we have $0 \leq s_{t+1} < k$.\\
By Case $(a)$ and Case $(b)$, we have
\begin{align}\label{nimsumxyzj1}
x_{n-t-1} \oplus y_{n-t-1} \oplus z_{n-t-1} =0
\end{align}
and
\begin{align}\label{nimsumxyzj12}
0 \leq s_{t+1} < k.
\end{align}
Clearly, $y_{n-t-1}, z_{n-t-1}$ are unique, non-negative integers that satisfy (\ref{nimsumxyzj1}) and (\ref{nimsumxyzj12}) when $x$ is a given non-negative integer.
Next, we define $y_{n-t-2}, z_{n-t-2}$, and $s_{t+2}$ using a method very similar to that used in Case $(a)$ and Case $(b)$.
In this way, we construct sequences $y_i, i=n-t-3, n-t-4, ...,0$, $z_i, i = n-t-3, n-t-4, ...,0$ and $s_j, j = t+3,t+4,...,n$ such that
\begin{align}\label{nimsum0xyz1}
x_i \oplus y_i \oplus z_i =0
\end{align}
and $0 \leq s_j < k$. Then, the sequence $s_n, s_{n-1},...,s_2,s_1,s_0$ is Type 1, and
by Lemma \ref{lemmaforwhichk20}, $y = \lfloor \frac{x+z}{k} \rfloor $, where $y = \sum\limits_{i = 0}^n {{y_i}} {2^i}$ and $z = \sum\limits_{i = 0}^n {{z_i}} {2^i}$.
The uniqueness of $y$ and $z$ is clear from the procedure used to determine the value of $y_i$ and $z_i$.
\end{proof}
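The proof is constructive: given $x$, it determines the digits $y_i,z_i$ greedily so that every partial value $s_j$ stays in $[0,k)$. The following Python sketch of this procedure is our own (assuming $k=4m+3$; the function name is not from the text):

```python
def construct_y_z(x, k):
    """Given x with leading bit x_n = 1, build the unique y, z with
    x XOR y XOR z = 0 and y = floor((x+z)/k), following the proof:
    keep every partial value s in [0, k) by choosing P1/P2 when
    s <= 2m and P3 when s >= 2m+2."""
    m = (k - 3) // 4
    n = x.bit_length() - 1
    yb = [0] * (n + 1)
    zb = [0] * (n + 1)
    zb[n] = 1                        # x_n = z_n = 1, y_n = 0, so s_0 = 2
    s = 2
    for i in range(n - 1, -1, -1):
        x_i = (x >> i) & 1
        if s <= 2 * m:               # Case (a): apply P1 or P2
            yb[i], zb[i] = 0, x_i
        else:                        # Case (b): apply P3
            yb[i], zb[i] = 1, 1 - x_i
        s = 2 * s + x_i + zb[i] - k * yb[i]
        assert 0 <= s < k
    y = sum(b << i for i, b in enumerate(yb))
    z = sum(b << i for i, b in enumerate(zb))
    return y, z

y, z = construct_y_z(21, 7)          # x = 21 = (10101) in base 2
print(y, z)                          # 5 16
print(21 ^ y ^ z == 0, y == (21 + z) // 7)   # True True
```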
\begin{Lem}\label{lammeofkyxz12}
Let $x,y,z,v,w \in Z_{\geq0}\ $ such that
\begin{align}\label{nimsuminequaxyz}
x\oplus y\oplus z=0 \text{ and } y = \lfloor \frac{x+z}{k} \rfloor
\end{align}
and
\begin{align}\label{nimsumineuvw2}
x\oplus v\oplus w=0 \text{ and } v < \lfloor \frac{x+w}{k} \rfloor.
\end{align}
Then, there exists $t \in Z_{\geq0}$ such that
$y_{n-j} = v_{n-j}$ and $z_{n-j} = w_{n-j}$ for $j = 0,1,2,...,t$ and $y_{n-t-1} > v_{n-t-1}$.
In particular, $y > v$.
\end{Lem}
\begin{proof}
We define a sequence of non-negative integers
$r_0,r_1,...,r_n$ by
\begin{align}\label{defnofsjj2}
r_j=\sum\limits_{i = n-j}^n {({x_i}+{w_i}-k{v_i})} {2^{i+j-n}}.
\end{align}
By $(ii)$ of Lemma \ref{therearethreetype}, $(ii)$ of Lemma \ref{criteriaofineqa}, (\ref{xzkyrelation}), and (\ref{nimsumineuvw2}), the sequence $r_0,r_1,...,r_n$ is Type 2. Hence,
there exists a natural number $t$ such that $0 \leq r_j < k$ for $j = 0,1,2,...,t$ and $k < r_{t+1}$.
By $(iii)$ of Lemma \ref{lemmaforwhichk}, $2m+2 \leq r_t < k$ and $r_{t+1} = {P_2}^k(r_t)$ or $r_{t+1} = {P_1}^k(r_t)$.
Therefore, by Lemma \ref{lemmaforwhichk}, (\ref{caseof1}), and (\ref{caseof2}), we have two cases:\\
Case $(a)$ Suppose that $x_{n-t-1} = 1$. Then, $r_{t+1} = {P_1}^k(r_t)>k$ and \\
$(x_{n-t-1},v_{n-t-1},w_{n-t-1})$$=(1,0,1)$.\\
Let $(v^{\prime}_{n-i},w^{\prime}_{n-i})$$=(v_{n-i},w_{n-i})$ for $i = 0,1,...,t$ and
$(v^{\prime}_{n-t-1},w^{\prime}_{n-t-1})$$=(1,0)$. Then,
$x_{n-j} \oplus v^{\prime}_{n-j} \oplus w^{\prime}_{n-j} = 0$ for $j = 0,1,2,...,t+1$.
By Lemma \ref{nimsum0lemma}, there exist
$v^{\prime}_i,w^{\prime}_i \in \{0,1\}$ for $i = n-t-1, n-t-2,...,0$ such that
\begin{align}\label{nimsum0xyzbb}
x\oplus v^{\prime}\oplus w^{\prime}=0 \text{ and } v^{\prime} = \lfloor \frac{x+w^{\prime}}{k} \rfloor,
\end{align}
where $v^{\prime}= \sum\limits_{i = 0}^n {{v^{\prime}_i}} {2^i}$ and $w^{\prime} = \sum\limits_{i = 0}^n {{w^{\prime}_i}} {2^i}$.
By Lemma \ref{nimsum0lemma},
$v^{\prime}, w^{\prime}$ are unique non-negative integers that satisfy (\ref{nimsum0xyzbb}). Hence,
we have $y = v^{\prime}$ and $z = w^{\prime}$. Then, $y_{n-j} = v_{n-j}$ and $z_{n-j} = w_{n-j}$ for $j = 0,1,2,...,t$ and $y_{n-t-1} > v_{n-t-1}$.
In particular, $y > v$.
Case $(b)$ Suppose that $x_{n-t-1} = 0$. Then, $r_{t+1} = {P_2}^k(r_t)>k$ and \\
$(x_{n-t-1},v_{n-t-1},w_{n-t-1})$$=(0,0,0)$.\\
Let $(v^{\prime}_{n-t-1},w^{\prime}_{n-t-1})$$=(1,1)$.
Using a similar method to that used in Case $(a)$, we finish the proof.
\end{proof}
\begin{thm}\label{theoremfor4mplus3a}
Suppose that $x \oplus y \oplus z=0$ and $y \le \lfloor {\frac{x+z}{k}} \rfloor$. Then, the following conditions hold:\\
$(i)$ $u \oplus y \oplus z \ne 0$ for any $u\in Z_{\geq 0}$ with $u<x$;\\
$(ii)$ $x \oplus v \oplus z \ne 0$ for any $v\in Z_{\geq 0}$ with $v<y$;\\
$(iii)$ $x \oplus y \oplus w \ne 0$ for any $w\in Z_{\geq 0}$ with $w<z$;\\
$(iv)$ $x \oplus v \oplus w \ne 0$ for any $v,w\in Z_{\geq 0}$ with $v<y,w<z$ and $v = \lfloor {\frac{x+w}{k}} \rfloor$; and\\
$(v)$ $u \oplus v \oplus z \ne 0$ for some $u,v \in {Z_{ \ge 0}}$ with $u<x,v<y$ and $v=\lfloor {\frac{u+z}{k}}\rfloor$.
\end{thm}
\begin{proof}
Here, $(i)$, $(ii)$, and $(iii)$ come directly from the definition of the nim-sum $\oplus$.\\
We suppose that $x \oplus v \oplus w = 0$ and $v = \lfloor {\frac{x+w}{k}} \rfloor$ for some $v,w \in {Z_{ \ge 0}}$ with $v<y,w<z$.
If $y < \lfloor {\frac{x+z}{k}} \rfloor $, then by Lemma \ref{lammeofkyxz12}, we have $y<v$. This contradicts the fact that $v<y$.
If $y = \lfloor {\frac{x+z}{k}} \rfloor $, then by Lemma \ref{nimsum0lemma},
we have $y=v$. This contradicts the fact that $v<y$. Therefore, $x \oplus v \oplus w \ne 0$, and we have $(iv)$.\\
$(v)$ This case can be proved using the same method as that used in $(iv)$.
\end{proof}
\begin{Lem}\label{lemmaforimportcase}
Suppose that
\begin{align}\label{conditionp1}
y \le \lfloor {\frac{x+z}{k}} \rfloor
\end{align}
and
\begin{align}\label{conditionp2}
x_i +y_i +z_i =0\ (mod\ 2) \ \text{for } i=m+1,m+2,...,n.
\end{align}
We define $s_j$ for $j=0,1,2,...,n-m-1$ by
\begin{align}\label{defnofsjj1b}
s_j=\sum\limits_{i = n-j}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
Then, $s_0,s_1,...,s_{n-m-1}$ is Type 1 or Type 2.
\end{Lem}
\begin{proof}
Let $(u_i,v_i,w_i)=(x_i,y_i,z_i)$ for $i=m+1,m+2,...,n$ and
$(u_i,v_i,w_i)=(0,0,0)$ for $i=0,1,2,...,m$.
Then, by (\ref{conditionp1}), we have $\lfloor \frac{v}{2^{m+1}} \rfloor \le \frac{\lfloor \frac{u}{2^{m+1}} \rfloor+\lfloor \frac{w}{2^{m+1}} \rfloor}{k}$. Multiplying both sides of the inequality by $2^{m+1}$, we get
\begin{align}\label{ineqvux}
v \le \lfloor {\frac{u+w}{k}} \rfloor .
\end{align}
We define $r_j$ for $j=0,1,2,...,n$ by
\begin{align}\label{defnofsjj1}
r_j=\sum\limits_{i = n-j}^n {({u_i}+{w_i}-k{v_i})} {2^{i+j-n}}.
\end{align}
Then, by (\ref{ineqvux}), Lemma \ref{criteriaofineqa}, Lemma \ref{valueofs0sn}, and Lemma \ref{therearethreetype},
$r_0,r_1,...,r_n$ is Type 1 or Type 2. Since $s_j = r_j$ for $j=0,1,...,n-m-1$, the lemma follows.
\end{proof}
\begin{thm}\label{theoremfor4mplus3b}
Suppose that $x \oplus y \oplus z \ne 0$ and $y \le \lfloor {\frac{x+z}{k}} \rfloor$.\\
Then, at least one of the following statements is true:\\
$(i)$ $x \oplus y \oplus w = 0$ for some $w \in {Z_{ \ge 0}}$ with $w<z$ and $y \le \lfloor \frac{x+w}{k} \rfloor$;\\
$(ii)$ $x \oplus v \oplus z = 0$ for some $v \in {Z_{ \ge 0}}$ with $v < y \le \lfloor \frac{x+z}{k} \rfloor$;\\
$(iii)$ $u \oplus y \oplus z = 0$ for some $u \in {Z_{ \ge 0}}$ with $u<x$ and $y \le \lfloor \frac{u+z}{k} \rfloor$;\\
$(iv)$ $x \oplus v \oplus w = 0$ for some $v,w \in {Z_{ \ge 0}}$ with $v<y,w<z$ and $v=\lfloor {\frac{x+w}{k}}\rfloor$; or\\
$(v)$ $u \oplus v \oplus z = 0$ for some $u,v \in {Z_{ \ge 0}}$ with $u<x,v<y$ and $v=\lfloor {\frac{u+z}{k}}\rfloor$.
\end{thm}
\begin{proof}
Suppose that $x_i +y_i +z_i =0\ (mod\ 2)$ for $i=n, n-1,...,n-m+1$ and $x_{n-m} +y_{n-m} +z_{n-m} \neq 0\ (mod\ 2)$.
We consider three cases:\\
\underline{Case $(a)$} Suppose that $z_{n-m} =1$ and $x_{n-m} =y_{n-m} =0$.
We define $s_j $ for $j = 0,1,2,...,m$ by
\begin{align}\label{defnofsjj10}
s_j=\sum\limits_{i = n-j}^n {({x_i}+{z_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
By Lemma \ref{lemmaforimportcase}, the sequence $\{s_0,s_1,...,s_{m-1}\}$
is Type 1 or Type 2.
We have two subcases:\\
\underline{Subcase$(a.1)$} Suppose that the sequence $s_j $ for $j = 0,1,2,...,m-1$ is Type 2.
Let $w_i = x_i+y_i \ (mod \ 2)$ for $i=n-m, n-m-1,...,1,0$ and $w_i = z_i$ for $i=n,n-1,...,n-m+1$.
We define
$r_j $ for $j = 0,1,2,...,n$ by
\begin{align}\label{defnofsjj11}
r_j=\sum\limits_{i = n-j}^n {({x_i}+{w_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
Since the sequence $\{s_0,s_1,...,s_{m-1}\}$ is Type 2 and $r_j=s_j$ for $j=0,1,...,m-1$,
by Lemma \ref{type2subsequence}, the sequence $\{r_0,r_1,...,r_{n}\}$ is Type 2.
Therefore, we have $y \le \lfloor {\frac{x+w}{k}} \rfloor$. Then, we have $(i)$.
\noindent
\underline{Subcase $(a.2)$} Suppose that the sequence $s_j $ for $j = 0,1,2,...,m-1$ is Type 1.
Let $w_i = z_i$ for $i=n,n-1,...,n-m+1$ and $w_{n-m} = 0$.
We define $r_j $ for $j = 0,1,2,...,m$ by
\begin{align}\label{defnofsjj12}
r_j=\sum\limits_{i = n-j}^n {({x_i}+{w_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
Then, we have two subsubcases:
\noindent
\underline{Subsubcase $(a.2.1)$} Suppose that the sequence $r_j $ for $j = 0,1,2,...,m$ is Type 2.
Then,
let $w_i = x_i+y_i \ (mod \ 2)$ for $i=n-m-1,...,1,0$.
We define
$r_j $ for $j = m+1,m+2,...,n$ by
\begin{align}\label{defnofsjj13}
r_j=\sum\limits_{i = n-j}^n {({x_i}+{w_i}-k{y_i})} {2^{i+j-n}}.
\end{align}
Since the sequence $\{r_0,r_1,...,r_{m}\}$ is Type 2,
by Lemma \ref{type2subsequence}, the sequence $\{r_0,r_1,...,r_{n}\}$ is Type 2.
Therefore, we have $y \le \lfloor {\frac{x+w}{k}} \rfloor$. Then, we have $(i)$.
\noindent
\underline{Subsubcase $(a.2.2)$} Suppose that the sequence $r_j $ for $j = 0,1,2,...,m$ is Type 1.
By Lemma \ref{nimsum0lemma}, there exist unique $v_i,w_i \in \{0,1\}$ for $i = n-m-1,...,0$ such that
\begin{align}\label{nimsum0xyz2}
x\oplus v\oplus w=0 \text{ and } v = \lfloor \frac{x+w}{k} \rfloor.
\end{align}
Then, we have $(iv)$.
\noindent
\underline{Case $(b)$} Suppose that $x_{n-m} =1$ and $z_{n-m} =y_{n-m} =0$. We can use the same method used in Case $(a)$.
\noindent
\underline{Case $(c)$} Suppose that $y_{n-m} =1$ and $z_{n-m} =x_{n-m} =0$. Let $v_i = x_i+z_i \ (mod \ 2)$ for $i=n, n-1, n-2,...,1,0$.
Then, we have $x \oplus v \oplus z = 0$ and $v < y \le \lfloor \frac{x+z}{k} \rfloor$, and this is $(ii)$ of this theorem.
\end{proof}
\begin{defn}\label{defofABkybiggerxz}
Here, we define sets of positions of chocolate bars.\\
Let $A_{k}=\{(x,y,z);x,y,z\in Z_{\geq 0},y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$, $B_{k}=\{(x,y,z);x,y,z\in Z_{\geq 0},y \leq \lfloor \frac{x+z}{k} \rfloor$, and $x\oplus y \oplus z\neq 0\}$.
\end{defn}
\begin{thm}\label{theoremforkyxzchocol}
Let $k=4m+3$. Then, $A_k$ and $B_k$ are, respectively, the sets of $\mathcal{P}$-positions
and $\mathcal{N}$-positions
of the chocolate bar game that satisfies the inequality $y \leq \lfloor \frac{x+z}{k} \rfloor$.
\end{thm}
To prove Theorem \ref{theoremforkyxzchocol}, we need two additional theorems.
First, we prove that starting with an element of $A_k$, any move leads to an element of $B_k$.
\begin{rem}\label{remarkforot}
For $k=4m+1$, a conjecture for the generic formula for $\mathcal{P}$-positions is presented in Subsection \ref{4mplus1}.
When $k$ is an even number, there seems to be no formula for the $\mathcal{P}$-positions. See Subsection \ref{evencase}.
\end{rem}
\begin{thm}\label{thforkfrAtoB}
For any $(x,y,z) \in A_k$, we have $movek((x,y,z)) \subset B_k$.
\end{thm}
\begin{proof}
Let $(x,y,z) \in A_k$. Then, we have
\begin{align}
x \oplus y \oplus z =0
\end{align}
and
\begin{align}
y \leq \lfloor \frac{x+z}{k} \rfloor.
\end{align}
Suppose that we move from $(x,y,z)$ to $(p,q,r)$, i.e., $(p,q,r) \in movek((x,y,z))$. We prove that $(p,q,r) \in B_k$.
\noindent
Since $movek((x,y,z))=\{(u,y,z);u<x\} \cup \{(x,v,z);v<y\} \cup \{(x,y,w);w<z\} \cup \{(u,\min(y, \lfloor \frac{u+z}{k} \rfloor ),z);u<x\} \cup \{(x,\min(y, \lfloor \frac{x+w}{k} \rfloor ),w);w<z\}$, where $u,v,w \in Z_{\ge 0}$,
we have one of the following cases:
\noindent
$(1)$ $(p,q,r)= (u,y,z)$ with $u<x$;
\noindent
$(2)$ $(p,q,r)= (x,v,z)$ with $v<y$;
\noindent
$(3)$ $(p,q,r)= (x,y,w)$ with $w<z$;
\noindent
$(4)$ $(p,q,r)= (u,\min(y,\lfloor \frac{u+z}{k} \rfloor ),z)$ with $u<x$; or
\noindent
$(5)$ $(p,q,r)= (x,\min(y,\lfloor \frac{x+w}{k} \rfloor ),w)$ with $w<z$.
\noindent
For each of these cases, we can use Theorem \ref{theoremfor4mplus3a} to get $p \oplus q \oplus r \ne 0$. Therefore, $(p,q,r) \in B_k$.
\end{proof}
Next, we prove that starting with an element of $B_k$, there is a proper move that leads to an element of $A_k$.
\begin{thm}\label{thforkfrBtoA}
Let $(x,y,z) \in B_k$. Then, $movek((x,y,z))\cap A_k \ne \emptyset$.
\end{thm}
\begin{proof}
Let $(x,y,z) \in B_k$. Then, we have
\begin{align}
x \oplus y \oplus z \ne 0
\end{align}
and
\begin{align}
y \leq \lfloor \frac{x+z}{k} \rfloor.
\end{align}
Then, we have one of the five cases of Theorem \ref{theoremfor4mplus3b}.
Since $movek((x,y,z))=\{(u,y,z);u<x\} \cup \{(x,v,z);v<y\} \cup \{(x,y,w);w<z\} \cup \{(u,\min(y, \lfloor \frac{u+z}{k} \rfloor ),z);u<x\} \cup \{(x,\min(y, \lfloor \frac{x+w}{k} \rfloor ),w);w<z\}$, there exists $(p,q,r) \in movek ((x,y,z))$ such that
$p\oplus q\oplus r= 0$. Therefore, $(p,q,r) \in movek((x,y,z))\cap A_k$.
\end{proof}
By Theorems \ref{thforkfrAtoB} and \ref{thforkfrBtoA}, we finish the proof of Theorem \ref{theoremforkyxzchocol}. Starting the game with a position $(x,y,z)\in A_{k}$, by Theorem \ref{thforkfrAtoB}, any option (move) leads to a position $(p,q,r)$ in $B_k$. From this position $(p,q,r)$, by Theorem \ref{thforkfrBtoA}, the opposing player can choose a proper option that leads to a position in $A_k$. Note that any option reduces some of the numbers of the position. In this way, the opposing player can always reach a position in $A_k$, winning by reaching $(0,0,0)\in A_{k}$. Therefore, $A_k$ is the set of $\mathcal{P}$-positions.
Starting the game with a position $(x,y,z)\in B_{k}$, by Theorem \ref{thforkfrBtoA}, we can choose a proper option that leads to a position $(p,q,r)$ in $A_k$. From $(p,q,r)$, any option by the opposing player leads to a position in $B_k$. In this way, we win the game by reaching $(0,0,0)$. Therefore, $B_k$ is the set of $\mathcal{N}$-positions.
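Theorem \ref{theoremforkyxzchocol} can also be checked for small boards by computing the $\mathcal{P}$-positions directly from the game tree. The sketch below is our own Python code; it uses the three families of options that also appear in the Mathematica program of Example \ref{exambymathematica}, and verifies the claim for $k=3$:

```python
from functools import lru_cache

K = 3  # k = 4m + 3 with m = 0

def moves(pos):
    """Options of the game: reducing x or z also caps y at
    floor((u+z)/K) resp. floor((x+w)/K); reducing y is unrestricted."""
    x, y, z = pos
    opts = set()
    for u in range(x):
        opts.add((u, min(y, (u + z) // K), z))
    for v in range(y):
        opts.add((x, v, z))
    for w in range(z):
        opts.add((x, min(y, (x + w) // K), w))
    return opts

@lru_cache(maxsize=None)
def is_P(pos):
    # A position is a P-position iff every option is an N-position.
    return all(not is_P(q) for q in moves(pos))

for x in range(12):
    for z in range(12):
        for y in range((x + z) // K + 1):   # positions with y <= floor((x+z)/K)
            assert is_P((x, y, z)) == (x ^ y ^ z == 0)
print("A_3 = set of P-positions, verified for x, z < 12")
```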
By Theorem \ref{theoremforkyxzchocol}, $(x,y,z)$ is a $\mathcal{P}$-position if and only if $x\oplus y \oplus z=0 $. Then,
it is natural to wonder whether the Grundy number of a position $(x,y,z)$ is equal to $x \oplus y \oplus z$.
The Grundy number of a position does not always equal $x\oplus y\oplus z$, and Example \ref{exampofgrundynotnim} presents a counterexample.
Example \ref{exambymathematica} shows that the number of positions whose Grundy number is equal to the nim-sum is smaller than the number of positions whose Grundy number is not equal to the nim-sum.
\begin{exam}\label{exampofgrundynotnim}
In the chocolate game that satisfies the inequality
$y \leq \lfloor \frac{x+z}{3} \rfloor $, the Grundy number of a position $(x,y,z)$ is not always equal to $x \oplus y \oplus z$.
We show this by example.
By Definition \ref{defofmexgrundy} and Definition \ref{defofmovek},
\begin{align}\label{grundydeff}
\mathcal{G}((x,y,z))= \textit{mex}\{\mathcal{G}((u,v,w)): (u,v,w) \in movek((x,y,z))\}.
\end{align}
We calculate Grundy numbers for the positions of chocolates in Figures \ref{grundyex000}, \ref{grundyex100}, \ref{grundyex001}, \ref{grundyex101}, \ref{grundyex002}, \ref{grundyex102}, and \ref{grundyex112} by using (\ref{grundydeff}).
\noindent
$(i)$ Since $(0,0,0)$ is the end position, by Definition \ref{defofmexgrundy}, we have $\mathcal{G}((0,0,0)) = 0$.
\noindent
$(ii)$ Here, $movek((1,0,0)) =\{(0,0,0)\}$ and $\mathcal{G}((0,0,0)) = 0$. Hence, by Definition \ref{defofmexgrundy}, $\mathcal{G}((1,0,0)) = 1$.
\noindent
$(iii)$ Similarly, we have $\mathcal{G}((0,0,1)) = 1$.
\noindent
$(iv)$ Here, $movek((1,0,1)) =\{(1,0,0),(0,0,1)\}$, $\mathcal{G}((1,0,0)) = 1$, and $\mathcal{G}((0,0,1)) = 1$. Hence, by Definition \ref{defofmexgrundy}, $\mathcal{G}((1,0,1)) = 0$.
\noindent
$(v)$ Here, $movek((0,0,2)) =\{(0,0,0),(0,0,1)\}$, $\mathcal{G}((0,0,0)) = 0$, and $\mathcal{G}((0,0,1)) = 1$. Hence, by Definition \ref{defofmexgrundy}, $\mathcal{G}((0,0,2)) = 2$.
\noindent
$(vi)$ Here, $movek((1,0,2)) =\{(1,0,0),(1,0,1),(0,0,2)\}$, $\mathcal{G}((1,0,0)) = 1$, $\mathcal{G}((1,0,1)) = 0$, and $\mathcal{G}((0,0,2)) = 2$. Hence, by Definition \ref{defofmexgrundy}, $\mathcal{G}((1,0,2)) = 3$.
\noindent
$(vii)$ Here, $movek((1,1,2)) =\{(1,0,0),(1,0,1),(1,0,2),(0,0,2)\}$, $\mathcal{G}((1,0,0)) = 1$,
$\mathcal{G}((1,0,1)) = 0$, $\mathcal{G}((1,0,2)) = 3$, and $\mathcal{G}((0,0,2)) = 2$. Hence, by Definition \ref{defofmexgrundy}, $\mathcal{G}((1,1,2)) = 4$.
\noindent
By $(vii)$, we have $\mathcal{G}((1,1,2)) = 4 \ne 1 \oplus 1 \oplus 2$.
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.3\columnwidth,bb=0 0 57 34]{grundyexch000.pdf}
\caption{Position (0,0,0)}
\label{grundyex000}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth,bb=0 0 56 31]{grundyexch100.pdf}
\caption{Position (1,0,0)}
\label{grundyex100}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 56 31]{grundyexch001.pdf}
\caption{Position (0,0,1)}
\label{grundyex001}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 85 46]{grundyexch101.pdf}
\caption{Position (1,0,1)}
\label{grundyex101}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 85 46]{grundyexch002.pdf}
\caption{Position (0,0,2)}
\label{grundyex002}
\end{minipage}
\begin{minipage}[!htb]{0.45\columnwidth}
\centering
\includegraphics[width=0.6\columnwidth,bb=0 0 57 24]{grundyexch102.pdf}
\caption{Position (1,0,2)}
\label{grundyex102}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.3\columnwidth,bb=0 0 57 30]{grundyexch112.pdf}
\caption{Position (1,1,2)}
\label{grundyex112}
\end{figure}
In the next section, Example \ref{exambymathematica2} and Example \ref{examplecgsuitet2} present calculations by computer that show the Grundy number of a position $(x,y,z)$ is not always equal to $x \oplus y \oplus z$ in this game.
\end{exam}
\section{Computer Program for the Triangle Chocolate Bar Game}\label{sectionforcomputer}
\subsection{Computer Program to Calculate the $\mathcal{P}$-positions in the Chocolate Bar Game}
In this subsection, we present computer programs that show that $\{(x,y,z):x,y,z\in Z_{\geq 0},\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$ is the set of $\mathcal{P}$-positions.
Example \ref{exambymathematica} presents a Mathematica program, and
Example \ref{exambycgsuite} presents a Combinatorial Game Suite $($CGSuite$)$ program.
\begin{exam}\label{exambymathematica}
Here, let $k=3$.\\
$(i)$ This Mathematica program
presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\} - $
$\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$,
that is, a list of $\mathcal{P}$-positions whose nim-sum is not zero.
\begin{verbatim}
k = 3; ss = 20; al =
Flatten[Table[{a, b, c}, {a, 0, ss}, {b, 0, ss}, {c, 0, ss}], 2];
allcases = Select[al, (1/k) (#[[1]] + #[[3]]) >= #[[2]] &];
move[z_] := Block[{p}, p = z;
Union[Table[{t1, Min[Floor[(1/k) (t1 + p[[3]])], p[[2]]],
p[[3]]}, {t1, 0, p[[1]] - 1}],
Table[{p[[1]], t2, p[[3]]}, {t2, 0, p[[2]] - 1}],
Table[{p[[1]], Min[Floor[(1/k) (t3 + p[[1]])], p[[2]]], t3},
{t3, 0, p[[3]] - 1}]]]
Mex[L_] := Min[Complement[Range[0, Length[L]], L]];
Gr[pos_] := Gr[pos] = Mex[Map[Gr, move[pos]]]
pposition = Select[allcases, Gr[#] == 0 &];
Select[pposition, BitXor[#[[1]], #[[2]], #[[3]]] > 0 &]
\end{verbatim}
The output shows that the list is empty, which implies that the nim-sum of a $\mathcal{P}$-position is zero.
\begin{verbatim}
{}
\end{verbatim}
\noindent
$(ii)$ The next Mathematica program presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is an $\mathcal{N}$-position$\} \cap \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$,
that is, a list of $\mathcal{N}$-positions whose nim-sum is zero.
\begin{verbatim}
Select[Complement[allcases, pposition],
BitXor[#[[1]], #[[2]], #[[3]]] == 0 &]
\end{verbatim}
This produces the following list.
\begin{verbatim}
{}
\end{verbatim}
The output shows that the list is empty, which implies that the nim-sum of an $\mathcal{N}$-position is not zero.
By $(i)$ and $(ii)$, for every position in the examined range, $(x,y,z)$ is a $\mathcal{P}$-position if and only if $x\oplus y \oplus z=0$.
\end{exam}
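The same search is easy to reproduce outside Mathematica. The following Python sketch is our own cross-check (the box bound of 8, smaller than the paper's 20, and all names are our choices); for $k=3$ it tests whether a position has Grundy number 0 exactly when its nim-sum vanishes:

```python
from functools import lru_cache

K, BOUND = 3, 8  # our choices; the paper's search uses x, y, z up to 20

def moves(pos):
    x, y, z = pos
    out = set()
    for x1 in range(x):
        out.add((x1, min((x1 + z) // K, y), z))
    for y1 in range(y):
        out.add((x, y1, z))
    for z1 in range(z):
        out.add((x, min((x + z1) // K, y), z1))
    return out

@lru_cache(maxsize=None)
def grundy(pos):
    g = {grundy(q) for q in moves(pos)}
    m = 0
    while m in g:
        m += 1
    return m

positions = [(x, y, z)
             for x in range(BOUND + 1)
             for y in range(BOUND + 1)
             for z in range(BOUND + 1)
             if y <= (x + z) // K]

# Positions where "Grundy number is 0" and "nim-sum is 0" disagree.
bad = [p for p in positions if (grundy(p) == 0) != ((p[0] ^ p[1] ^ p[2]) == 0)]
print(bad)  # prints [] when "P-position <=> zero nim-sum" holds on this box
```

Moves only decrease coordinates, so the search box is closed under the move relation and the Grundy values computed inside it are exact.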
\begin{exam}\label{exambycgsuite}
Here, let $k=3$. This CGSuite $($version 1.1.1$)$ program shows that
$\{(x,y,z):x,y,z\in Z_{\geq 0},\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$ is the set of $\mathcal{P}$-positions.
\noindent
$(i)$ First, we open the following file using CGSuite.
\begin{small}
\begin{verbatim}
class Choco3D extends ImpartialGame
var x,y,z,k;
method Choco3D(x,y,z,k)
end
override method Options(Player player)
result := [];
// x
for x1 from 0 to x-1 do
result.Add(Choco3D(x1,y.Min(((x1+z)/k).Floor),z,k));
end
// y
for y1 from 0 to y-1 do
result.Add(Choco3D(x,y1,z,k));
end
// z
for z1 from 0 to z-1 do
result.Add(Choco3D(x,y.Min(((x+z1)/k).Floor),z1,k));
end
result.Remove(this);
if x==0 and y==0 and z==0 then
return {};
else
return result;
end
end
override property ToString.get
return "Choco3D("+x.ToString+","+y.ToString+","
+z.ToString+","+k.ToString+")";
end
end
\end{verbatim}
\end{small}
\noindent
$(ii)$ By typing the following command, we get the lists in $(a)$ and $(b)$.
\noindent
$(a)$ $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\} - \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$,
that is, a list of $\mathcal{P}$-positions whose nim-sum is not zero. The output is an empty set.
\noindent
$(b)$ $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is an $\mathcal{N}$-position$\} \cap \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $x\oplus y \oplus z=0\}$,
that is, a list of $\mathcal{N}$-positions whose nim-sum is zero.
The output is an empty set.
\begin{small}
\begin{verbatim}
x:=20;
z:=20;
y:=20;
k:=3;
setA:={};
setB:={};
for z1 from 0 to z do
for x1 from 0 to x do
for y1 from 0 to y.Min(((z1+x1)/k).Floor) do
if examples.Choco3D(x1,y1,z1,k).CanonicalForm== 0 then
if *x1+*y1+*z1!=0 then
setA.Add([x1,y1,z1]);
end
else
if *x1+*y1+*z1==0 then
setB.Add([x1,y1,z1]);
end
end
end
end
end
Worksheet.Print("The Set of (Grundy Number = 0 and Nim-Sum > 0) -> "
+ setA.ToString);
Worksheet.Print("The Set of (Grundy Number > 0 and Nim-Sum = 0) -> "
+ setB.ToString);
\end{verbatim}
\end{small}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 380 42]{cgsuiteresult1.pdf}
\caption{Result (1)}
\label{CGSuiteResult1}
\end{figure}
Since the list in $(a)$ is empty, we have $x \oplus y \oplus z =0$ for any $\mathcal{P}$-position $(x,y,z)$.
Since the list in $(b)$ is empty, we have $x \oplus y \oplus z \ne 0$ for any $\mathcal{N}$-position $(x,y,z)$.
Therefore, for every position in the examined range, $(x,y,z)$ is a $\mathcal{P}$-position if and only if $x \oplus y \oplus z = 0$.
\end{exam}
\subsection{Computer Program to Compare the Grundy Numbers and Nim-Sum of Positions in the Chocolate Bar Game}
In this subsection, we present computer programs that compare the Grundy number
$\mathcal{G}((x,y,z))$ and $x\oplus y \oplus z$ for a position $(x,y,z)$.
Example \ref{exambymathematica2} presents a Mathematica program, and Example \ref{examplecgsuitet2} presents a CGSuite program.
\begin{exam}\label{exambymathematica2}
Here, let $k=3$. This Mathematica program
calculates the list $\{\mathcal{G}((x,y,z)):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 20,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor\}$.
\begin{verbatim}
k=3;ss=20;al=
Flatten[Table[{a,b,c},{a,0,ss},{b,0,ss},{c,0,ss}],2];
allcases=Select[al,(1/k)(#[[1]]+#[[3]])>=#[[2]] &];
move[z_]:=Block[{p},p=z;
Union[Table[{t1,Min[Floor[(1/k)(t1+p[[3]])],p[[2]]],
p[[3]]},{t1,0,p[[1]]-1}],
Table[{p[[1]],t2,p[[3]]},{t2,0,p[[2]]-1}],
Table[{p[[1]],Min[Floor[(1/k)(t3+p[[1]])],p[[2]]],t3},
{t3,0,p[[3]]-1}]]]
Mex[L_]:=Min[Complement[Range[0,Length[L]],L]];
Gr[pos_]:=Gr[pos]=Mex[Map[Gr,move[pos]]]
pposition=Select[allcases,Gr[#]==0 &];
\end{verbatim}
\begin{verbatim}
nimequal=Select[allcases,BitXor[#[[1]],#[[2]],#[[3]]]==Gr[#] &]
//Length
\end{verbatim}
The output of this code is the number of positions whose Grundy number is equal to the nim-sum.
\begin{verbatim}
977
\end{verbatim}
\begin{verbatim}
nimnonequal=Select[allcases,!(BitXor[#[[1]],#[[2]],#[[3]]]
==Gr[#]) &]//Length
\end{verbatim}
The output of this code is the number of positions whose Grundy number is not equal to the nim-sum.
\begin{verbatim}
2257
\end{verbatim}
\end{exam}
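These counts can also be cross-checked independently of Mathematica and CGSuite. The Python sketch below is our own re-implementation; to keep the run quick we use a bound of 10 rather than the paper's 20, so the two counts it prints differ from 977 and 2257, but the qualitative conclusion, that the Grundy number frequently disagrees with the nim-sum, is the same:

```python
from functools import lru_cache

K, BOUND = 3, 10  # bound chosen smaller than the paper's 20 for speed

def moves(pos):
    x, y, z = pos
    out = set()
    for x1 in range(x):
        out.add((x1, min((x1 + z) // K, y), z))
    for y1 in range(y):
        out.add((x, y1, z))
    for z1 in range(z):
        out.add((x, min((x + z1) // K, y), z1))
    return out

@lru_cache(maxsize=None)
def grundy(pos):
    g = {grundy(q) for q in moves(pos)}
    m = 0
    while m in g:
        m += 1
    return m

positions = [(x, y, z)
             for x in range(BOUND + 1)
             for y in range(BOUND + 1)
             for z in range(BOUND + 1)
             if y <= (x + z) // K]
nimequal = sum(1 for p in positions if grundy(p) == (p[0] ^ p[1] ^ p[2]))
nimnonequal = len(positions) - nimequal
print(nimequal, nimnonequal)
```

The mismatch count is nonzero already in this small box, since $(1,1,2)$ has Grundy number 4 but nim-sum 2.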
\begin{exam}\label{examplecgsuitet2}
Here, let $k=3$. This CGSuite program
calculates the number of positions whose Grundy numbers are equal to nim-sum and the number of positions whose Grundy numbers are not equal to nim-sum.
\begin{small}
\begin{verbatim}
x:=20;
z:=20;
y:=20;
k:=3;
nimequal:=0;
nimnonequal:=0;
for z1 from 0 to z do
for x1 from 0 to x do
for y1 from 0 to y.Min(((z1+x1)/k).Floor) do
if examples.Choco3D(x1,y1,z1,k).CanonicalForm==0 then
if *x1+*y1+*z1==0 then
nimequal:=nimequal+1;
else
nimnonequal:=nimnonequal+1;
end
else
if examples.Choco3D(x1,y1,z1,k).CanonicalForm== *x1+*y1+*z1 then
nimequal:=nimequal+1;
else
nimnonequal:=nimnonequal+1;
end
end
end
end
end
Worksheet.Print("The Number of Grundy Number == Nim-Sum -> "
+ nimequal.ToString);
Worksheet.Print("The Number of Grundy Number != Nim-Sum -> "
+ nimnonequal.ToString);
\end{verbatim}
\end{small}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 371 44]{cgsuiteresult2.pdf}
\caption{Result (2)}
\label{CGSuiteResult2}
\end{figure}
\end{exam}
\subsection{Computer Program to Calculate the $\mathcal{P}$-positions in the Chocolate Bar Game When {\boldmath $k=4m+1$} for Some {\boldmath $m \in Z_{\geq 0}$}}\label{4mplus1}
Let $k=4m+1$. In this subsection, we present computer programs that show that
$\{(x,y,z):x,y,z\in Z_{\geq 0},\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$ is the set of $\mathcal{P}$-positions when $k=4m+1$ for some $m \in Z_{\geq 0}$.
Example \ref{exambymathematicafor5} presents a Mathematica program, and
Example \ref{exambycgsuitefor5} presents a Combinatorial Game Suite (CGSuite) program.
\begin{exam}\label{exambymathematicafor5}
Here, let $k=5$.
\noindent
$(i)$ This Mathematica program
presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\} - \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$,
that is, a list of $\mathcal{P}$-positions $(x,y,z)$ such that $(x-1)\oplus y \oplus (z-1) \ne 0$.
\begin{verbatim}
k = 5; al =
Flatten[Table[{a,b,c},{a,0,20},{b,0,10},{c,0,20}],2];
allcases=Select[al,(1/k)(#[[1]]+#[[3]])>=#[[2]] &];
move[z_]:=Block[{p},p=z;
Union[Table[{t1,Min[Floor[(1/k)(t1+p[[3]])],p[[2]]],
p[[3]]},{t1,0,p[[1]]-1}],
Table[{p[[1]],t2,p[[3]]},{t2,0,p[[2]]-1}],
Table[{p[[1]],Min[Floor[(1/k)(t3+p[[1]])],p[[2]]],t3},
{t3,0,p[[3]]-1}]]]
Mex[L_]:=Min[Complement[Range[0,Length[L]],L]];
Gr[pos_]:=Gr[pos]=Mex[Map[Gr,move[pos]]]
pposition=Select[allcases,Gr[#]==0 &];
Select[pposition,BitXor[#[[1]]-1,#[[2]],#[[3]]-1]>0 &]
\end{verbatim}
The output shows that the list is empty, which implies that $(x-1)\oplus y \oplus (z-1)=0$ for any $\mathcal{P}$-position $(x,y,z)$.
\begin{verbatim}
{}
\end{verbatim}
\noindent
$(ii)$ The next Mathematica program presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is an $\mathcal{N}$-position$\} \cap $
$\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$,
that is, a list of $\mathcal{N}$-positions $(x,y,z)$ such that $(x-1)\oplus y \oplus (z-1)=0$.
\begin{verbatim}
Select[Complement[allcases,pposition],
BitXor[#[[1]]-1,#[[2]],#[[3]]-1]==0 &]
\end{verbatim}
This produces the following list.
\begin{verbatim}
{}
\end{verbatim}
The output shows that the list is empty, which implies that $(x-1)\oplus y \oplus (z-1) \ne 0$ for any $\mathcal{N}$-position $(x,y,z)$.\\
By $(i)$ and $(ii)$, for every position in the examined range, $(x,y,z)$ is a $\mathcal{P}$-position if and only if $(x-1)\oplus y \oplus (z-1)=0$.
\end{exam}
\begin{exam}\label{exambycgsuitefor5}
Here, let $k=5$. \\
$(i)$ This CGSuite $($version 1.1.1$)$ program shows that
$\{(x,y,z):x,y,z\in Z_{\geq 0},\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$ is the set of $\mathcal{P}$-positions.
First, we open the code in $(i)$ of Example \ref{exambycgsuite}.
\noindent
$(ii)$
By typing the following command, we get the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\} - \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$, that is, a list of $\mathcal{P}$-positions $(x,y,z)$ such that $(x-1)\oplus y \oplus (z-1) \ne 0$. The output is an empty set.
We also get the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is an $\mathcal{N}$-position$\} \cap \{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 20,\ y \leq 10,\ z \leq 20,\ y \leq \lfloor \frac{x+z}{k} \rfloor$ and $(x-1)\oplus y \oplus (z-1)=0\}$,
that is, a list of $\mathcal{N}$-positions $(x,y,z)$ such that $(x-1)\oplus y \oplus (z-1)=0$.
The output is an empty set.
\begin{small}
\begin{verbatim}
x:=20;
z:=20;
y:=10;
k:=5;
setA:={};
setB:={};
for z1 from 0 to z do
for x1 from 0 to x do
for y1 from 0 to y.Min(((z1+x1)/k).Floor) do
if examples.Choco3D(x1,y1,z1,k).CanonicalForm==0 then
if ((x1-1).NimSum(y1)).NimSum(z1-1)>0 then
setA.Add([x1,y1,z1]);
end
else
if ((x1-1).NimSum(y1)).NimSum(z1-1)==0 then
setB.Add([x1,y1,z1]);
end
end
end
end
end
Worksheet.Print("The list of P-positions and Nim-Sum > 0 -> "
+ setA.ToString);
Worksheet.Print("The list of N-positions and Nim-Sum = 0 -> "
+ setB.ToString);
\end{verbatim}
\end{small}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\columnwidth,bb=0 0 319 45]{cgsuiteresult3.pdf}
\caption{Result (3)}
\label{CGSuiteResult1for5}
\end{figure}
\end{exam}
\begin{conjecture}
When $k = 4m+1$ for some $m \in Z_{ \geq 0}$,
$(x,y,z)$ is a $\mathcal{P}$-position if and only if $(x-1)\oplus y \oplus (z-1)=0$.
\end{conjecture}
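The conjecture can be probed numerically without Mathematica or CGSuite. The following Python sketch is ours (the box size 8 is our choice, well inside the paper's verified range). Note that Python's xor on negative integers agrees with Mathematica's \texttt{BitXor} (two's complement), so $(x-1)\oplus y\oplus(z-1)$ is well defined even when $x=0$ or $z=0$:

```python
from functools import lru_cache

K, BOUND = 5, 8  # k = 4m+1 with m = 1; the paper searches x, z <= 20, y <= 10

def moves(pos):
    x, y, z = pos
    out = set()
    for x1 in range(x):
        out.add((x1, min((x1 + z) // K, y), z))
    for y1 in range(y):
        out.add((x, y1, z))
    for z1 in range(z):
        out.add((x, min((x + z1) // K, y), z1))
    return out

@lru_cache(maxsize=None)
def grundy(pos):
    g = {grundy(q) for q in moves(pos)}
    m = 0
    while m in g:
        m += 1
    return m

# Positions where "Grundy number is 0" and "(x-1) xor y xor (z-1) is 0" disagree.
violations = [(x, y, z)
              for x in range(BOUND + 1)
              for y in range(BOUND + 1)
              for z in range(BOUND + 1)
              if y <= (x + z) // K
              and (grundy((x, y, z)) == 0) != (((x - 1) ^ y ^ (z - 1)) == 0)]
print(violations)  # prints [] when the conjecture holds throughout this box
```

An empty output is, of course, only evidence on the box searched, not a proof of the conjecture.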
\subsection{Computer Program to Calculate the $\mathcal{P}$-positions in the Chocolate Bar Game for Some Even Number {\boldmath $k$}}\label{evencase}
In this subsection, we present computer programs that compute the $\mathcal{P}$-positions of the chocolate bar game when $k$ is an even number.
Example \ref{exambymathematicafor2} presents a Mathematica program, and
Example \ref{exambycgsuitefor2} presents a Combinatorial Game Suite (CGSuite) program.
\begin{exam}\label{exambymathematicafor2}
Here, let $k=2$. This Mathematica program
presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},x \leq 10, y \leq 10, z \leq 10, y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\}$.
\begin{verbatim}
k=2;ss=10;al=
Flatten[Table[{a,b,c},{a,0,ss},{b,0,ss},{c,0,ss}],2];
allcases=Select[al,(1/k)(#[[1]]+#[[3]])>=#[[2]] &];
move[z_]:=Block[{p},p=z;
Union[Table[{t1,Min[Floor[(1/k)(t1+p[[3]])],p[[2]]],
p[[3]]},{t1,0,p[[1]]-1}],
Table[{p[[1]],t2,p[[3]]},{t2,0,p[[2]]-1}],
Table[{p[[1]],Min[Floor[(1/k)(t3+p[[1]])],p[[2]]],t3},
{t3,0,p[[3]]-1}]]]
Mex[L_]:=Min[Complement[Range[0,Length[L]],L]];
Gr[pos_]:=Gr[pos]=Mex[Map[Gr,move[pos]]]
pposition=Select[allcases,Gr[#]==0 &]
\end{verbatim}
The following output suggests that there is no simple closed formula for the $\mathcal{P}$-positions.
\begin{verbatim}
{(0,0,0),(1,0,1),(1,1,2),(2,0,2),(2,1,1),(3,0,3),(3,1,4),
(3,2,5),(3,3,6),(3,4,7),(3,5,8),(4,0,4),(4,1,3),(4,2,6),
(4,3,5),(4,4,8),(4,5,7),(5,0,5),(5,1,6),(5,2,3),(5,3,4),
(5,4,9),(5,5,10),(6,0,6),(6,1,5),(6,2,4),(6,3,3),(6,4,10),
(6,5,9),(7,0,7),(7,1,8),(7,2,9),(7,3,10),(7,4,3),(7,5,4),
(8,0,8),(8,1,7),(8,2,10),(8,3,9),(8,4,4),(8,5,3),(9,0,9),
(9,1,10),(9,2,7),(9,3,8),(9,4,5),(9,5,6),(10,0,10),
(10,1,9),(10,2,8),(10,3,7),(10,4,6),(10,5,5)}
\end{verbatim}
\end{exam}
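The printed list can be reproduced with a short Python sketch of our own ($k=2$, bound 10). It also makes the $x\leftrightarrow z$ symmetry of the list visible, which follows from the symmetry of the move rule:

```python
from functools import lru_cache

K, BOUND = 2, 10

def moves(pos):
    x, y, z = pos
    out = set()
    for x1 in range(x):
        out.add((x1, min((x1 + z) // K, y), z))
    for y1 in range(y):
        out.add((x, y1, z))
    for z1 in range(z):
        out.add((x, min((x + z1) // K, y), z1))
    return out

@lru_cache(maxsize=None)
def grundy(pos):
    g = {grundy(q) for q in moves(pos)}
    m = 0
    while m in g:
        m += 1
    return m

pp = {(x, y, z)
      for x in range(BOUND + 1)
      for y in range(BOUND + 1)
      for z in range(BOUND + 1)
      if y <= (x + z) // K and grundy((x, y, z)) == 0}

print(sorted(pp)[:5])  # → [(0, 0, 0), (1, 0, 1), (1, 1, 2), (2, 0, 2), (2, 1, 1)]
# The move rule is symmetric in x and z, and so is the set of P-positions.
print(all((z, y, x) in pp for (x, y, z) in pp))
```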
\begin{exam}\label{exambycgsuitefor2}
Here, let $k=2$. This CGSuite $($version 1.1.1$)$ program presents the list $\{(x,y,z):x,y,z\in Z_{\geq 0},\ x \leq 10,\ y \leq 10,\ z \leq 10,\ y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\}$.
First, we open the code in $(i)$ of Example \ref{exambycgsuite}.
By typing the following command, we get the list $\{(x,y,z):x,y,z\in Z_{\geq 0},x \leq 10, y \leq 10, z \leq 10, y \leq \lfloor \frac{x+z}{k} \rfloor$, where $(x,y,z)$ is a $\mathcal{P}$-position$\}$.
The following output suggests that there is no simple closed formula for the $\mathcal{P}$-positions.
\begin{small}
\begin{verbatim}
x:=10;
z:=10;
y:=10;
k:=2;
setA:={};
for z1 from 0 to z do
for x1 from 0 to x do
for y1 from 0 to y.Min(((z1+x1)/k).Floor) do
if examples.Choco3D(x1,y1,z1,k).CanonicalForm==0 then
setA.Add([x1,y1,z1]);
end
end
end
end
Worksheet.Print(setA.ToString);
\end{verbatim}
\end{small}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth,bb=0 0 507 102]{cgsuiteresult4.pdf}
\caption{Result (4)}
\label{CGSuiteResult1for2}
\end{figure}
\end{exam}
| {
"timestamp": "2017-11-15T02:40:11",
"yymm": "1711",
"arxiv_id": "1711.04954",
"language": "en",
"url": "https://arxiv.org/abs/1711.04954",
"abstract": "Chocolate bar games are variants of the game of Nim in which the goal is to leave your opponent with the single bitter part of the chocolate bar. The rectangular chocolate bar game is a thinly disguised form of classical multi-heap Nim. In this work, we investigate the mathematical structure of triangular chocolate bar games in which the triangular chocolate bar can be cut in three directions. In the triangular chocolate bar game, a position is a $\\mathcal{P}$-position if and only if $x \\oplus y \\oplus z = 0$, where the numbers $x,y,z$ stand for the maximum number of times that the chocolate bar can be cut in each direction. Moreover, the Grundy number of a position $(x,y,z)$ is not always equal to $x \\oplus y \\oplus z $, and a generic formula for Grundy numbers in not known. Therefore, the mathematical structure of triangular chocolate bar game is different from that of classical Nim.",
"subjects": "Combinatorics (math.CO)",
"title": "Impartial Triangular Chocolate Bar Games",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347883040039,
"lm_q2_score": 0.7279754371026367,
"lm_q1q2_score": 0.7093645909436225
} |
https://arxiv.org/abs/1503.00725 | Intrinsic random walks and sub-Laplacians in sub-Riemannian geometry | On a sub-Riemannian manifold we define two type of Laplacians. The \emph{macroscopic Laplacian} $\Delta_\omega$, as the divergence of the horizontal gradient, once a volume $\omega$ is fixed, and the \emph{microscopic Laplacian}, as the operator associated with a sequence of geodesic random walks. We consider a general class of random walks, where \emph{all} sub-Riemannian geodesics are taken in account. This operator depends only on the choice of a complement $\mathbf{c}$ to the sub-Riemannian distribution, and is denoted $L^c$.We address the problem of equivalence of the two operators. This problem is interesting since, on equiregular sub-Riemannian manifolds, there is always an intrinsic volume (e.g. Popp's one $P$) but not a canonical choice of complement. The result depends heavily on the type of structure under investigation. On contact structures, for every volume $\omega$, there exists a unique complement $c$ such that $\Delta_\omega=L^c$. On Carnot groups, if $H$ is the Haar volume, then there always exists a complement $c$ such that $\Delta_H=L^c$. However this complement is not unique in general. For quasi-contact structures, in general, $\Delta_P \neq L^c$ for any choice of $c$. In particular, $L^c$ is not symmetric w.r.t. Popp's measure. This is surprising especially in dimension 4 where, in a suitable sense, $\Delta_P$ is the unique intrinsic macroscopic Laplacian.A crucial notion that we introduce here is the N-intrinsic volume, i.e. a volume that depends only on the set of parameters of the nilpotent approximation. When the nilpotent approximation does not depend on the point, a N-intrinsic volume is unique up to a scaling by a constant and the corresponding N-intrinsic sub-Laplacian is unique. This is what happens for dimension smaller or equal than 4, and in particular in the 4-dimensional quasi-contact structure mentioned above. 
| \section{Convergence of random walks}\label{a:randomwalk}
We let $M$ be a smooth, geodesically complete (sub)-Riemannian manifold. Our goal here is to give fairly general conditions for a sequence of random walks on $M$ to converge. In particular, we will work with a larger class of random walks than those treated elsewhere in the paper, which will require some differences in notation. In this section we work in the general (sub)-Riemannian framework, so that geodesics are specified by the choice of an initial covector $\lambda \in T^*M$ and the (sub)-Riemannian distribution has rank $k \leq n = \dim M$.
\subsection{A general class of random walks}
By a random walk, in this section, we mean a process $X_t$ that travels along a geodesic (with constant speed) for a fixed length of time (say $h>0$), at which time a new geodesic is chosen at random (and independently of the particle's past), which is then traveled for time $h$, and this procedure repeats indefinitely. The result is that $X_0, X_{h},X_{2h},\ldots$ is a Markov chain on $M$, and for $t\in(ih,(i+1)h)$, $X_t$ interpolates between $X_{ih}$ and $X_{(i+1)h}$ along a geodesic between them. We distinguish this from the related scheme of following a randomly chosen geodesic for a random, exponentially distributed length of time, at which point a new geodesic is chosen and a new (independent) exponential clock is started, and so on, which we will refer to as a random flight. (The logic being that a walk involves regular steps, and one does not turn midstep, whereas a flight seems to evoke a mode of travel which can change direction at any time.) A random flight has the nice property that, because the exponential is memory-less, it is a time-homogeneous process (assuming the choice of geodesic is time-homogeneous, of course).
More concretely, for $h>0$, we have a family of probability measures $\mu^{h}_q$ on the cotangent spaces $T^*_qM$. Then $X^{h}_{h(i+1)} = \exp_{X^{h}_{h i}}(h,\lambda/h)$ where $\lambda$ is a covector chosen according to $\mu^{h}_{X^{h}_{h i}}$ independently of the previous steps, and the path travels from $X^{h}_{h i}$ to $X^{h}_{h(i+1)}$ along the geodesic determined by $\lambda$, at constant speed given by $\|\lambda\|/h$ (and for a distance given by $\|\lambda\|$). That is, for $t\in [h i,h(i+1)]$, we have $X^{h}_t = \exp_{X^{h}_{h i}}\left( (t-h i),\lambda/h \right)$.
Note that, unlike earlier in the paper, here we index our random walks by the size of the time-step, instead of by the size of the spatial step. This is because we no longer require that $\mu^{h}_q$ be supported on covectors of fixed length. Indeed, earlier in the paper, we have considered the case when $\mu_q$ is a probability measure on covectors of length 1, and $\mu^{h}_q$ is given by parabolic scaling of $\mu_q$; that is, if $f_{h}$ is the fiber-preserving map that takes a covector $\lambda$ to $\sqrt{2h k} \lambda$, then $\mu^{h}_q$ is the pushforward of $\mu_q$ by $f_{h}$. (Also compare the present set-up with Definition \ref{Def:MainOperator}.) However, here we work in the generality of random walks as just described.
We are interested in the convergence of a sequence of random walks $X^{h}_t$ (on $M$) as $h\rightarrow0$ to a (continuous time) diffusion. A key role is played by the following associated operator (which we have already seen). For any $\phi\in C_0^{\infty}(M)$, let
\[
L_{h} \phi(q) = \frac{1}{h}\left(\mathbb{E}\left[ \left . \phi\left( X^{h}_{h}\right) \right| X^{h}_0=q\right] -\phi(q)\right) =
\frac{1}{h}\left(\int_{T^*_qM} \phi\left( \exp_q(h,\lambda/h)\right) \mu^{h}_q(\lambda)-\phi(q) \right) .
\]
The idea is that, under suitable assumptions, the random walks will converge to the diffusion generated by $\lim_{h\rightarrow0}L_{h}$ as $h\rightarrow0$. That the behavior in the limit is governed only by the second-order operator, and not any other features of the random walks, should be seen as a version of Donsker invariance.
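As a sanity check in the simplest possible setting, one can watch $L_{h}\phi \to L\phi$ numerically. The Python sketch below is purely illustrative and our own construction: it takes the walk on $\mathbb{R}$ whose step measure puts mass $1/2$ on each of $\pm\sqrt{2h}$, the rank-one case $k=1$ of the parabolic scaling $\sqrt{2hk}$, for which the limit operator is $L = d^2/dx^2$:

```python
import math

def L_h(phi, q, h):
    """Generator of one step of the walk q -> q +/- sqrt(2h), prob. 1/2 each."""
    s = math.sqrt(2.0 * h)
    return ((phi(q + s) + phi(q - s)) / 2.0 - phi(q)) / h

# For phi = sin we have L phi = phi'' = -sin, and L_h phi approaches it as h -> 0.
q = 0.7
for h in (1e-2, 1e-4, 1e-6):
    print(h, L_h(math.sin, q, h), -math.sin(q))
```

A Taylor expansion gives $L_h\phi(q) = \phi''(q) + O(h)$, matching the claim that only the second-order part of the walk survives in the limit.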
\subsection{Background on diffusions and pathspace}
Next, we recall some basic facts about the diffusions which will arise as limits of our random walks. In what follows, we assume that $L$ is a smooth second-order operator with non-negative-definite principal symbol and without zeroth-order term. This means that, in local coordinates $x_1,\ldots,x_n$, $L$ can be written as
\[
\sum_{i,j=1}^n a_{ij}\partial_{x_i}\partial_{x_j} + \sum_{i=1}^n b_i \partial_{x_i},
\]
where the $a_{ij}$ and $b_i$ are smooth functions, and the matrix $\left[ a_{ij}\right]$ is symmetric and non-negative definite. Alternatively, $L$ can be locally written as
\[
Y_0 + \sum_{i=1}^n Y_i^2
\]
for smooth vector fields $Y_0,Y_1,\ldots, Y_n$. Recall that, in this case, there exists a unique diffusion generated by $L$, although this diffusion may explode in finite time (that is, with positive probability, a path might exit every compact subset of $M$ in finite time). For simplicity, we assume that the diffusion generated by $L$ does not explode (indeed, our analysis is fundamentally local). Let $P_q$ be the measure on $\Omega_M$ (the space of continuous paths from $[0,\infty)$ to $M$) corresponding to this diffusion (starting from $q$). Then $P_q$ is uniquely characterized by the fact that
\[
\phi(\omega_t)-\int_0^t L \phi\left( \omega_s\right) \, ds \quad\text{is a $P_q$-martingale with $P_q\left( \omega_0=q\right)=1$}.
\]
A flexible and powerful approach to the convergence of random walks, based on martingale theory, was essentially already provided by Stroock and Varadhan in \cite{SAndV} (specifically, in Section 11.2). However, they develop their approach in the setting of Euclidean space. To generalize to the Riemannian or sub-Riemannian setting is relatively straightforward. The main things one needs to do are to replace the systematic use of Euclidean coordinates by the use of coordinate-free or local coordinate-based expressions and to replace the linear interpolation between $X_{ih}$ and $X_{(i+1)h}$ by the appropriate geodesic segment. In regard to the local versus global nature of the problem, in principle we could even allow for the limiting diffusion to explode in finite time, as just described. However, this requires a more substantial reworking of the underlying framework ($M$ should be compactified by adding a point at infinity, namely the Alexandrov one-point compactification; the space of continuous paths on $M$ should be suitably modified; etc.), which would be a substantial digression from the main thrust of this work. Indeed, we are most interested in understanding various choices of sub-Laplacian on a sub-Riemannian manifold, and this is really a local problem. Thus, there is no harm in restricting our attention to the situation when the underlying diffusion on $M$ does not explode.
Following \cite[Section 1.3]{SAndV}, for $\omega\in\Omega_M$, let $\omega_t$ be the position of $\omega$ at time $t$. We define a metric on $\Omega_M$ by
\begin{equation}\label{Eqn:PathspaceDist}
d_{\Omega_M}(\omega,\tilde{\omega}) = \sum_{i=1}^{\infty} \frac{1}{2^i}\frac{\sup_{0\leq t\leq i}d_M(\omega_t,\tilde{\omega}_t)}{1+ \sup_{0\leq t\leq i}d_M(\omega_t,\tilde{\omega}_t)} .
\end{equation}
This metric makes $\Omega_M$ into a Polish space, and the notion of convergence that it induces is that of uniform convergence on bounded time intervals. We equip $\Omega_M$ with its Borel $\sigma$-algebra $\mathcal{M}$ and give it the natural filtration $\mathcal{M}_t$ generated by $\{\omega_s:0\leq s\leq t\}$. For probability measures on $\Omega_M$, we will always work with respect to the weak topology (in probabilists' terminology; this is the weak* topology in the language of functional analysis, corresponding to convergence against bounded, continuous test functions). We will generally be interested in a (Markov) family of probability measures, indexed by points of $M$. (Indeed, we have already seen that our random walks correspond to such families.) In that case, we will write them as $P_q$ for $q\in M$.
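To make Eq.~\eqref{Eqn:PathspaceDist} concrete, the following Python sketch (ours; the grid resolution and the truncation of the series are our choices) approximates $d_{\Omega_M}$ for $M=\mathbb{R}$ with $d_M(a,b)=|a-b|$. Since each summand is at most $2^{-i}$, truncating after 40 terms changes the value by less than $2^{-40}$:

```python
import math

def pathspace_dist(omega, omega_tilde, d_M, terms=40, grid=2000):
    """Truncated series sum_i 2^{-i} s_i / (1 + s_i), where s_i approximates
    sup_{0 <= t <= i} d_M(omega(t), omega_tilde(t)) on a uniform grid."""
    total = 0.0
    for i in range(1, terms + 1):
        s = max(d_M(omega(i * j / grid), omega_tilde(i * j / grid))
                for j in range(grid + 1))
        total += (s / (1.0 + s)) / 2.0 ** i
    return total

flat = lambda t: 0.0
print(pathspace_dist(flat, flat, lambda a, b: abs(a - b)))      # → 0.0
print(pathspace_dist(flat, math.sin, lambda a, b: abs(a - b)))  # about 0.478
```

Note that $d_{\Omega_M}$ is always below $\sum_i 2^{-i} = 1$: the metric stays bounded even when the paths are far apart, which is what makes it suitable for metrizing uniform convergence on bounded time intervals.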
The case of $M=\mathbb{R}^n$ (with the standard Euclidean metric) is the one considered by Stroock and Varadhan. Rather than generalize their approach directly (which would perhaps be more mathematically satisfying, but would also take longer), we exploit a standard trick to make the general case a special case of the Euclidean case. In particular, first note that the sub-Riemannian metric on $M$ can be extended to a Riemannian metric (not necessarily in any canonical way, but that doesn't matter here). Then the Nash embedding theorem implies that $M$ can be isometrically embedded into some Euclidean space of high enough dimension, say $\mathbb{R}^N$ (and this embedding is smooth, although not necessarily proper if $M$ is not compact).
\subsection{Control of the interpolating paths}
Assuming that the operators $L_{h}$ converge to an operator $L$ of the type just discussed, in the sense that, for any $\phi\in C_0^{\infty}(M)$, we have $L_{h} \phi\rightarrow L\phi$ uniformly on compacts as $h\rightarrow0$, is almost the only condition that we need. (Indeed, for $M=\mathbb{R}^N$ this is enough.) However, the convergence of the $L_{h}$ depends only on the positions of the random walks at the sequences of times $0,h, 2h,\ldots$, and does not necessarily control the path traveled between these times. To give a simple example of what can go wrong, consider $M=\mathbb{S}^1$ with the standard metric. If $X^{h}_0=q$, let the walk travel a full circle either clockwise or counter-clockwise, each with probability $1/2$, at constant speed in time $h$. In other words, if $\theta$ is the standard coordinate, we let $\mu^{h}_q = \pm 2\pi d\theta$ where the sign has probability $1/2$ of being positive or negative. Then $L^{h}$ is the zero operator for all $h$ (since $q=X^{h}_0=X^{h}_{h}=X^{h}_{2h}=\cdots$), but it's clear from Eq.~\eqref{Eqn:PathspaceDist} that the walks are not converging. This is in contrast to Euclidean space, where the entire path $X^{h}_t$ for $t\in[ih,(i+1)h]$ is determined by $X^{h}_{ih}$ and $X^{h}_{(i+1)h}$. (Indeed, in \cite{SAndV}, each step of the random walk is determined by choosing a point in $\mathbb{R}^N$ according to, in slightly modified notation, a probability measure $\Pi^h_q$ on $\mathbb{R}^N$. Then the path of the particle during this step is given by linear interpolation. The appropriate geometric generalization of this is our measure $\mu_q^h$ on the co-tangent space, which we can think of as giving the next step in the walk along with a path for the particle to travel to get there.)
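The circle example can be made quantitative with a few lines of Python (ours, purely illustrative): the walk's endpoint data never move, so $L_{h}=0$, while the interpolating path reaches the antipodal point, at distance $\pi$, in the middle of every step, no matter how small $h$ is.

```python
import math

# Degenerate walk on the unit circle: in each step of duration h the particle
# makes one full loop, clockwise or counter-clockwise with probability 1/2.

def circle_dist(a, b):
    """Arc-length distance between angles a and b on the unit circle."""
    d = abs(a - b) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

def position(t, h, sign):
    """Angle at time t in [0, h] along a full loop of duration h."""
    return (sign * 2.0 * math.pi * t / h) % (2.0 * math.pi)

h = 0.01
end = position(h, h, +1)      # back at the start: the endpoint data see nothing
mid = position(h / 2, h, +1)  # antipodal point, at distance pi

print(circle_dist(0.0, end), circle_dist(0.0, mid))  # → 0.0 3.141592653589793
```

Shrinking $h$ only makes the loop faster, not smaller, so the mid-step excursion never satisfies a bound of the kind required below.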
Thus, we will also need to assume that for any $\rho>0$, any compact $Q\subset M$, and any $\alpha>0$, there exists $h_0>0$ such that
\begin{equation}\label{Eqn:PathRegularity}
\frac{1}{h}P^{h}_{q}\left[ \sup_{0\leq s\leq h} d_M\left( X^{h}_{0} , X^{h}_{s}\right) > \rho\right] < \alpha
\end{equation}
whenever $q\in Q$ and $h<h_0$. This assumption is not especially restrictive. First of all, on Euclidean space, Lemmas 11.2.1 and 11.2.2 of \cite{SAndV} show that if $L_{h} \phi\rightarrow L\phi$ uniformly on compacts as $h\rightarrow0$ for any $\phi\in C_0^{\infty}(\mathbb{R}^N)$, then, under some global uniformity assumptions on the $\mu^{h}_q$, for any $\rho>0$ we have
\[
\lim_{h\rightarrow0} \sup_{\substack{q\in \mathbb{R}^N \\ i\in\{0,1,2,\ldots\}}} \frac{1}{h}P^{h}_{q}\left[ d_{\mathbb{R}^N}\left( X^{h}_{ih},X^{h}_{(i+1)h}\right)>\rho\right] =0 ,
\]
and thus for any $0<T<\infty$,
\[
\lim_{h\rightarrow0} \sup_{q\in\mathbb{R}^N} P^{h}_{q}\left[ \sup_{\substack{i:0\leq h i\leq T \\ 0\leq s\leq h}} d_{\mathbb{R}^N}\left( X^{h}_{ih} , X^{h}_{ih+s}\right) \leq \rho\right] =1 .
\]
Thus, we are essentially assuming that the behavior of $X^{h}_t$ for $t\in(ih,(i+1)h)$ is comparable to the behavior of the step $X^{h}_{(i+1)h}|X^{h}_{ih}$, which as already mentioned, is automatic in the Euclidean case but not necessarily in other situations.
Further, we note that the condition of Eq.~\eqref{Eqn:PathRegularity} follows from assuming that $\frac{1}{h}\mu^{h}_q\left[ \|\lambda\| > \rho\right]$ approaches 0 as $h\rightarrow 0$ uniformly for $q$ in any compact set (here $\lambda$ is a covector in $T_q^*M$). In particular, for random walks of the type we consider earlier in the paper, this condition is trivially satisfied by taking $h < \rho^2/(2k)$, because by construction a step of $X^{h}_t$ consists of traveling along a geodesic for distance $\sqrt{2 h k}$; that is, $\mu^{h}_q$ is supported on covectors of length $\sqrt{2h k}$ for all $q$ (again, see Definition \ref{Def:MainOperator}).
\subsection{Convergence results}
We can now prove the following general convergence theorem. Note that the argument is essentially soft, since all of the serious estimates are implicitly already taken care of in \cite{SAndV}.
\begin{theorem}\label{AppendixConvergence}
Let $M$ be a (sub)-Riemannian manifold with a smooth second-order operator $L$ with non-negative-definite principal symbol and without zeroth-order term. Further, suppose that the diffusion generated by $L$, which we call $X^0$, does not explode, and let $P_q$ be the corresponding probability measure on $\Omega_M$ starting from $q$. Similarly, let $P^{h}_q$ be the probability measures on $\Omega_M$ corresponding to a sequence of random walks $X^{h}_{t}$ as above with $X^{h}_0=q$, and let $L_{h}$ be the associated operators. Suppose that, for any $\phi\in C_0^{\infty}(M)$, we have that
\begin{equation}\label{Eqn:OperatorConv}
L_{h} \phi\rightarrow L\phi \quad\text{uniformly on compacts as $h\rightarrow0$,}
\end{equation}
and also suppose that the condition of Eq.~\eqref{Eqn:PathRegularity} holds for the $X^{h}_{t}$.
Then if $q_{h}\rightarrow q$ as $h\rightarrow0$, we have that $P^{h}_{q_{h}}\rightarrow P_q$ as $h\rightarrow0$.
\end{theorem}
\begin{proof}
First, suppose that $M$ is compact. Then $M$ can be realized as a compact (isometrically embedded) submanifold of $\mathbb{R}^N$ via Nash embedding. This makes $\Omega_M$ a closed subset of $\Omega_{\mathbb{R}^N}$. Further, we claim that there is an increasing, continuous function $u:[0,\infty)\rightarrow[0,\infty)$ with $u(0)=0$ such that, for $q,\tilde{q}\in M$,
\begin{equation}\label{Eqn:DistanceComp}
d_{\mathbb{R}^N}\left( q,\tilde{q}\right) \leq d_{M}\left( q,\tilde{q}\right) \leq u\left( d_{\mathbb{R}^N}\left( q,\tilde{q}\right) \right) .
\end{equation}
In particular, the topology on $\Omega_M$ induced as a subset of $\Omega_{\mathbb{R}^N}$ agrees with the original topology on $\Omega_M$. To see that this is true, first observe that the inequality $d_{\mathbb{R}^N}\left( q,\tilde{q}\right) \leq d_{M}\left( q,\tilde{q}\right)$ is immediate from the isometric embedding and the optimality of Euclidean geodesics in $\mathbb{R}^N$. Next, we let $\overline{d}_M$ be the Riemannian distance on $M$ (induced by the Riemannian extension metric of the sub-Riemannian metric). Then \cite[Theorem 1.2]{Jea-2014} implies that every point of $M$ has a neighborhood on which we can find a function $u$ as above for which $d_{M}\left( q,\tilde{q}\right) \leq u\left( \overline{d}_M\left( q,\tilde{q}\right) \right)$. One only needs to check that the constants in \cite[Theorem 1.2]{Jea-2014} can be chosen uniformly, and this follows from the fact that, in \cite[Lemma 1.1]{Jea-2014}, the commutators $X_{I_i}$ can be used on the entire neighborhood (assuming it's small enough) and then the function $\varphi(t_1,\ldots,t_n)$ depends continuously on the base point. Since $M$ is compact, this $u$ can be chosen for all of $M$. Further, since the Riemannian distance $\overline{d}_M$ and the Euclidean distance $d_{\mathbb{R}^N}$ are Lipschitz-comparable under the embedding, potentially multiplying $u$ by a positive constant allows us to replace $\overline{d}_M$ by $d_{\mathbb{R}^N}$. This establishes Eq.~\eqref{Eqn:DistanceComp} (and in fact, $u$ can be taken to be H\"older-continuous, though we don't need that here).
Next, there is no problem extending $L$ to a smooth operator on all of $\mathbb{R}^N$ with bounded coefficients (where we mean the coefficients used when writing $L$ with respect to the Euclidean coordinates). We also extend the family of random walks $X^{h}_t$ to a family of Euclidean random walks $\tilde{X}^{h}$ so that we have a random walk starting from any point of $\mathbb{R}^N$ in such a way that the convergence of Eq.~\eqref{Eqn:OperatorConv} and the condition of Eq.~\eqref{Eqn:PathRegularity} still hold (with $X^{h}_t$ replaced by $\tilde{X}^{h}_t$). Here $\tilde{X}^{h}_t$ is not exactly an extension, since we interpolate between $\tilde{X}^{h}_{ih}$ and $\tilde{X}^{h}_{(i+1)h}$ by a Euclidean geodesic. However, we still have that, if $X^{h}_0=\tilde{X}^{h}_0\in M$, then $X^{h}_{ih}=\tilde{X}^{h}_{ih}$ for all $i$, so that the underlying Markov chains on $M$ agree. Further, it's easy to see that these extensions can be performed in such a way that the assumptions of Theorem 11.2.3 of \cite{SAndV} are satisfied (indeed, this just means some extra global assumptions on the random walks and operators, but since we start from a compact set, we can make $L$ and $\tilde{X}^{h}_t$ be anything we'd like outside of some large ball, just by using a bump function).
We let $\tilde{P}^{h}$ be the measures on $\Omega_{\mathbb{R}^N}$ corresponding to the $\tilde{X}^{h}$. If we now apply Theorem 11.2.3 of \cite{SAndV} to the situation at hand, we see that $\tilde{P}^{h}_{q_{h}}\rightarrow P_q$ as $h\rightarrow0$ (as probability measures on $\Omega_{\mathbb{R}^N}$). It remains only to see that the same holds for $P^{h}_{q_{h}}$, that is, that whether we go from $X_{ih}$ to $X_{(i+1)h}$ via a Euclidean geodesic or via the original $M$-geodesic doesn't matter in the limit.
We now pass to any discrete subsequence $h_j$ such that $h_j\rightarrow 0$ as $j\rightarrow \infty$. However, for simplicity, we continue just to write $h$. By Theorem 1.3.1 of \cite{SAndV} and the fact that $\{\tilde{P}^{h}_{q_{h}}\}$ is precompact (indeed, it converges as $h\rightarrow 0$), we see that, for every $\rho>0$ and $0<T<\infty$,
\[
\lim_{\delta\searrow 0} \inf_{h} \tilde{P}^{h}_{q_{h}}\left[ \sup_{\substack{0\leq s\leq t\leq T \\ t-s\leq\delta}} d_{\mathbb{R}^N}\left( \tilde{X}^{h}_t,\tilde{X}^{h}_s\right)\leq \rho \right] =1 .
\]
(Here the $h$ on both the measure $P^{h}$ and the corresponding random path $\tilde{X}^{h}$ is somewhat redundant, but clear nonetheless. In a moment we will make better use of this notation.)
By assumption, the condition of Eq.~\eqref{Eqn:PathRegularity} holds. Since $M$ is compact and $X^{h}_{ih}$ is a Markov chain, simply summing the probability of the distance exceeding $\rho$ at each step of the walk gives that for any $0<T<\infty$, any $\rho>0$, and any $\alpha>0$, there exists $h_0>0$ such that
\begin{equation}\label{Eqn:PathRegularity2}
\sup_{q\in M}P^{h}_{q}\left[ \sup_{\substack{i : 0\leq h i\leq T \\ s<h}} d_M\left( X^{h}_{ih} , X^{h}_{ih+s}\right) > \rho\right] \leq \frac{T+1}{h} \sup_{q\in M} P^{h}_{q}\left[ \sup_{0\leq s\leq h} d_M\left( X^{h}_{0} , X^{h}_{s}\right) > \rho\right] \leq (T+1)\alpha
\end{equation}
whenever $h<h_0$. By Eq.~\eqref{Eqn:DistanceComp}, this also holds if we replace $d_M$ with $d_{\mathbb{R}^N}$, perhaps by taking $h$ smaller. Thus, for any $0<T<\infty$, any $\rho>0$, any $\alpha>0$ and small enough $h$, we have that
\[
P^{h}_{q_{h}}\left[ \sup_{\substack{i : 0\leq h i\leq T \\ s<h}} d_{\mathbb{R}^N}\left( X^{h}_{ih} , X^{h}_{ih+s}\right) \leq \rho\right] > 1-\alpha.
\]
In order to compare $\tilde{P}^{h}_{q_{h}}$ and $P^{h}_{q_{h}}$, it is convenient to realize them as push-forwards of a single measure on some probability space. In some sense, this is how we have described these random walks as being generated, in terms of a sequence of independent random variables which we use to draw the new cotangent vector at every step. However, it is more direct just to note that the paths $\tilde{X}_t$ can be recovered from the paths $X_t$. More concretely, for any path $\omega$ on $M$ that is piecewise geodesic with respect to the times $0,h,2h,\ldots$, let $F^{h}(\omega)$ be the path in $\mathbb{R}^N$ that interpolates between $\omega_{ih}$ and $\omega_{(i+1)h}$ by the appropriate Euclidean geodesic for each $i$. Then $F^{h}$ is defined on the support of $P^{h}_{q_{h}}$, and $\tilde{P}^{h}_{q_{h}}$ is the pushforward of $P^{h}_{q_{h}}$ under $F^{h}$ for each $h$. So it is natural to view both $X^{h}$ and $\tilde{X}^{h}$ as random variables under $P^{h}_{q_{h}}$, with $\tilde{X}^{h}= F^{h}\left( X^{h}\right)$.
It now follows from the above (recall in particular that $\tilde{P}^{h}_{q_{h}}$ and $P^{h}_{q_{h}}$ have the same marginals on the sequence of times $0, h, 2h,\ldots$) that, given $0<T<\infty$, $\rho>0$, and $\alpha>0$, for all sufficiently small $h$ we have
\[
P^{h}_{q_{h}}\left[ \sup_{0\leq t \leq T} d_{\mathbb{R}^N}\left( \tilde{X}^{h}_{t} , X^{h}_{t}\right) \leq \rho\right] > 1-\alpha .
\]
In light of Eq.~\eqref{Eqn:PathspaceDist}, we conclude that, for any $i\in\{1,2,\ldots\}$, any $\rho>0$, any $\alpha>0$, and sufficiently small $h$ we have:
\[
P^{h}_{q_{h}}\left[ d_{\mathbb{R}^N}\left( \tilde{X}^{h}, X^{h}\right) \leq \rho + \frac{1}{2^i} \right] > 1-\alpha.
\]
Next, let $\Phi$ be a bounded, uniformly continuous function on $\Omega_{\mathbb{R}^N}$, and choose any $\delta>0$. Then there exists $\eta>0$ such that $\left|\Phi(\omega)-\Phi(\tilde{\omega})\right|<\delta$ whenever $d_{\mathbb{R}^N}(\omega,\tilde{\omega})<\eta$. Now choose $i$ large enough so that $\sum_{j=i}^{\infty}1/2^j < \eta/2$ and let $\rho<\eta/2$ and $\alpha<\delta$ in the above. Then there exists $h_0>0$ such that
\[
P^{h}_{q_{h}}\left[ d_{\mathbb{R}^N}\left( \tilde{X}^{h}, X^{h}\right) > \eta \right] < \delta ,
\]
and thus
\[
\mathbb{E}^{P^{h}_{q_{h}}}\left[ \left| \Phi\left( \tilde{X}^{h}\right)-\Phi\left( X^{h}\right)\right| \right] <\delta + 2\delta \|\Phi\|_{\infty}
\]
for $h<h_0$ (here $ \|\Phi\|_{\infty}$ is the $L^{\infty}$-norm of $\Phi$). Further, using $\tilde{P}^{h}_{q_{h}}\rightarrow P_q$, after perhaps decreasing $h_0$, we have
\[
\left| \mathbb{E}^{P^{h}_{q_{h}}}\left[ \Phi\left( \tilde{X}^{h}\right)\right] - \mathbb{E}^{P_{q}}\left[ \Phi\left( X^{0}\right)\right]\right|<\delta
\]
for $h<h_0$. Then linearity of expectation and the triangle inequality imply that
\[
\left| \mathbb{E}^{P^{h}_{q_{h}}}\left[ \Phi\left( X^{h}\right)\right] - \mathbb{E}^{P_{q}}\left[ \Phi\left( X^{0}\right)\right]\right|<2\delta\left( 1+ \|\Phi\|_{\infty}\right)
\]
for $h<h_0$, and thus, since $\delta>0$ is arbitrary,
\[
\lim_{h\rightarrow 0} \mathbb{E}^{P^{h}_{q_{h}}}\left[ \Phi\left( X^{h}\right)\right] = \mathbb{E}^{P_{q}}\left[ \Phi\left( X^{0}\right)\right] .
\]
This convergence holds for any bounded, uniformly continuous function $\Phi$ on $\Omega_{\mathbb{R}^N}$, and this is sufficient (by the portmanteau theorem) to show that $P^{h}_{q_{h}}\rightarrow P_q$. Here there is some ambiguity as to whether we think of $P^{h}_{q_{h}}$ and $P_q$ as probability measures on $\Omega_M$ or $\Omega_{\mathbb{R}^N}$, but the point is that it doesn't matter. As measures on $\Omega_{\mathbb{R}^N}$, they are nonetheless supported on $\Omega_M$. Further, weak convergence over $\Omega_{\mathbb{R}^N}$ implies weak convergence over $\Omega_M$, and thus the convergence pulls back to $\Omega_M$. Since this convergence holds for any discrete sequence $h_j\rightarrow 0$, it holds as $h\rightarrow 0$ via all positive reals, and this completes the proof in the case when $M$ is compact.
If $M$ is not compact, we proceed by exhaustion, using Lemma 11.1.1 of \cite{SAndV}. In this context, note that this lemma is stated for the case when $\Omega$ is $\Omega_{\mathbb{R}^N}$, but it is proved by a general and elementary method, and thus the statement and proof apply, as written, to the case when $\Omega=\Omega_M$.
Let $A_k\subset M$ be an exhaustion of $M$ by compact subsets (with smooth boundary, possible by Sard's theorem). We assume that $q$ and all of the $q_{h}$ are contained in $A_1$. If $\tau_k$ is the first hitting time of the complement of the interior of $A_k$, then $\tau_k$ is a non-decreasing sequence of lower semi-continuous stopping times that increases to $\infty$ for each $\omega\in\Omega_M$. Now let $M_k$ be a sequence of compact Riemannian manifolds into which $A_k$ can be isometrically included. (In other words, we truncate, in a geometrically reasonable way, $M$ outside of $A_k$ to get $M_k$.) For each $k$, extend $L$ to a smooth operator $L_k$ on $M_k$, and extend the random walk to $M_k$ in such a way that the probability measures $P^{h, k}_{q_{h}}$ corresponding to the random walks on $M_k$ converge to the probability measure corresponding to the diffusion generated by $L_k$, which we denote $Q^k_{q}$. The previous argument for compact $M$ makes it clear that this is possible.
By construction, the probability measure $P^{h, k}_{q_{h}}$ agrees with $P^{h}_{q_h}$ on $\mathcal{M}_{\tau_k}$. Also, by the previous argument, for any $k\geq 1$ and any discrete sequence $h_j\rightarrow 0$, the family $\left\{ P^{h_j, k}_{q_{{h_j}}} :j\geq 1 \right\}$ converges to $Q^k_q$, and $Q^k_q$ equals $P_q$ on $\mathcal{M}_{\tau_k}$. These are exactly the assumptions of Lemma 11.1.1 of \cite{SAndV}, and so we conclude that $P^{h_j}_{q_{h_j}}\rightarrow P_q$ as $j\rightarrow \infty$. As before, since this holds for any discrete sequence $h_j\rightarrow 0$ we have that $P^{h}_{q_h}\rightarrow P_q$ as $h\rightarrow 0$.
\end{proof}
\begin{rmk}
Note that nowhere in the proof did we use that the path from $X^{h}_{h i}$ to $X^{h}_{h(i+1)}$ was a geodesic. Indeed, all that matters is that the path is continuous and satisfies the condition of Eq.~\eqref{Eqn:PathRegularity} (and goes from $X^{h}_{h i}$ to $X^{h}_{h(i+1)}$, of course). Thus, Theorem \ref{AppendixConvergence} holds in slightly more generality, where we extend the class of random walks under consideration to include random walks where the interpolations from $X^{h}_{h i}$ to $X^{h}_{h(i+1)}$ are not necessarily geodesics. Perhaps the most natural such situation would be where each step of the random walk is given by flowing along the integral curve of some horizontal vector field for a small time (as opposed to traveling along a geodesic for a small time).
\end{rmk}
Finally, we return to the special case of the type of random walks discussed earlier in the paper. Then we can show that the convergence of a single step of the random walk, used to (first) define the microscopic Laplacian with respect to a splitting, also implies the convergence of the random walk to the natural limiting diffusion.
\begin{theorem}\label{AppendixConvergence2}
For a (sub)-Riemannian manifold $M$, consider a splitting $TM =\distr \oplus \V$ (if $M$ is Riemannian, the splitting is necessarily trivial) and some choice of adapted measure $\{\mu_q\}_{q \in M}$, and let $L^\V$ be the associated microscopic Laplacian, as in Definition \ref{Def:MainOperator}. Then there is a unique diffusion $X^{0,q}_{t}$ generated by $L^\V$ starting from any $q$. Let $X^{h,q}_{t}$ be the random walk, starting from $q$, determined by the parabolic scaling of the family $\{\mu_q\}$, as discussed above. Assume that $X^{0,q}_{t}$ does not explode, for any $q$. If $q_{h}\rightarrow q_0$ as $h\rightarrow 0$, then the random walks $X^{h,q_{h}}_t$ converge to the diffusion $X^{0,q_0}_t$ as $h\rightarrow 0$, in the sense that the corresponding probability measures on $\Omega_M$ converge weakly.
\end{theorem}
\begin{proof}
By Theorem \ref{t:microformula}, the operator $L^\V$ satisfies the assumptions of Theorem \ref{AppendixConvergence}. Also, the existence and uniqueness of $X^{0,q}_t$ follows. Now let $L_{h}$ be the operator associated to the random walks $X^{h,\cdot}_t$, as described above. Then for any $\phi\in C_0^{\infty}(M)$, we see that $L_{h} \phi\rightarrow L^\V \phi$ uniformly (on all of $M$) as $h\rightarrow0$, by Theorem \ref{t:microformula} and the fact that $L_{h} \phi$, for all small $h$, has support in a fixed compact set by construction. Finally, we have already noted that the condition of Eq.~\eqref{Eqn:PathRegularity} holds trivially for the random walks $X^{h,q}_{t}$. Thus we can apply Theorem \ref{AppendixConvergence}, which gives the desired convergence of random walks.
\end{proof}
\section{Carnot groups}\label{s:carnot}
A Carnot group $G$ of step $m$ is a simply connected Lie group whose Lie algebra of left-invariant vector fields $\mathfrak{g}$ admits a nilpotent stratification of step $m$, namely
\begin{equation}
\mathfrak{g} = \mathfrak{g}_1 \oplus \dots\oplus \mathfrak{g}_m, \qquad \mathfrak{g}_i \neq \{0\}, \qquad \forall i=1,\ldots,m,
\end{equation}
with
\begin{equation}
[\mathfrak{g}_1,\mathfrak{g}_j] = \mathfrak{g}_{1+j}, \qquad \forall 1\leq j\leq m-1, \qquad \text{and} \qquad \mathfrak{g}_{m+1} = \{0\}.
\end{equation}
A left-invariant sub-Riemannian structure on $G$ is obtained by defining a scalar product on $\mathfrak{g}_1$ or, equivalently, by declaring a set $X_1,\ldots,X_{k} \in \mathfrak{g}_1$ a global orthonormal frame. In particular, $\distr|_q = \mathfrak{g}_1|_q$, for all $q \in G$. The group exponential map,
\begin{equation}
\mathrm{exp}_{G} : \mathfrak{g} \to G,
\end{equation}
associates with $v \in \mathfrak{g}$ the element $\gamma(1)$, where $\gamma: [0,1] \to G$ is the unique integral curve of the vector field $v$ such that $\gamma(0) = 0$. Since $G$ is simply connected and $\mathfrak{g}$ is nilpotent, $\mathrm{exp}_G$ is a smooth diffeomorphism. Thus we identify $G \simeq \R^n$, endowed with a polynomial product law.
The adjoint endomorphism $\mathrm{ad}_{X}:\mathfrak{g} \to \mathfrak{g}$ is:
\begin{equation}
\mathrm{ad}_X (Y) := [X,Y], \qquad \forall X,Y \in \mathfrak{g}.
\end{equation}
Notice that, if $X \in \mathfrak{g}_j$, then
\begin{equation}
\ad_X|_{\mathfrak{g}_\ell} : \mathfrak{g}_\ell \to \mathfrak{g}_{\ell+j}, \qquad \forall \ell,j=1,\ldots,m,
\end{equation}
by the graded structure of $\mathfrak{g}$.
\begin{rmk}
In the literature, these structures are also referred to as \emph{Carnot groups} of type $(k,n)$, where $k = \dim \distr = \rank \mathfrak{g}_1$ is the \emph{rank} of the distribution and $n$ is the dimension of $G$.
\end{rmk}
By definition, Carnot groups are left-invariant sub-Riemannian structures (see Definition~\ref{d:left-invariant}). It is then natural to fix a left-invariant volume, which is proportional to Popp's volume $\popp$ by Corollary~\ref{c:popp=haar}. Moreover, we restrict to left-invariant complements $\V$. In this setting, we rewrite the compatibility condition $\chi^{(\V,\popp)} = 0$ in a more invariant fashion. To do this, we choose left-invariant orthonormal frames $X_1,\ldots,X_k$ for $\distr$ and left-invariant frames $X_{k+1},\ldots,X_n$ for $\V$. Thanks to the splitting $\distr \oplus \V$ we have the projections $\pi_\V$ and $\pi_\distr$ on $TM$.
\begin{lemma}
For Carnot groups, the compatibility condition for left-invariant volumes and complements is
\begin{equation}
\tr(\pi_\V \circ \ad_{X_i}) = 0, \qquad \forall i =1,\ldots,k.
\end{equation}
\end{lemma}
\begin{proof}
We rewrite the compatibility condition, for each $i=1,\ldots,k$, as
\begin{equation}
\sum_{j=k+1}^{n} c_{ji}^j +\cancel{X_i(\theta)} = \sum_{j=k+1}^n \nu^j([X_j,X_i]) = - \tr(\pi_\V\circ \mathrm{ad}_{X_i}\circ \pi_\V ) = -\tr(\pi_\V \circ \ad_{X_i}) = 0,
\end{equation}
where we used the cyclic property of the trace and the fact that projections are idempotent. The function $\theta$ appearing in the compatibility condition is constant, since both $\popp$ and $X_1,\ldots,X_n$ are left-invariant.
\end{proof}
On Carnot groups, thanks to the canonical identification $\mathfrak{g}_j = \distr^j/\distr^{j-1}$, there is a natural left-invariant Riemannian extension. In fact, it is sufficient to consider the orthogonal direct sum
\begin{equation}
\mathfrak{g} = \distr \oplus \mathfrak{g}_2 \oplus \ldots\oplus \mathfrak{g}_m,
\end{equation}
where on each factor $\mathfrak{g}_j$, with $j\geq 2$, we have a well-defined scalar product induced by the maps $\pi_j$ of Eq.~\eqref{eq:pimap}. This gives a smooth left-invariant scalar product on $TM$. Popp's volume $\popp$ is the Riemannian volume of this natural Riemannian extension. We can address questions \textbf{Q2} and \textbf{Q3} of Section~\ref{s:intro}, restricted to the class of left-invariant volumes and complements.
The most natural complement comes from the very definition of the Carnot structure:
\begin{equation}
\V_0 := \mathfrak{g}_2 \oplus \dots\oplus \mathfrak{g}_m.
\end{equation}
\begin{prop}
For any Carnot group $G$, we have $\Delta_{\popp} = L^{\V_0}$.
\end{prop}
\begin{proof}
By definition of $\V_0$ and the stratified structure of the Carnot algebra, we have
\begin{equation}
\pi_{\V_0}\circ \mathrm{ad}_{X_i}\circ \pi_{\V_0} = \mathrm{ad}_{X_i}|_{\V_0},
\end{equation}
which is nilpotent, being the restriction of the nilpotent operator $\ad_{X_i}$ to the invariant subspace $\V_0$, and hence $\tr(\ad_{X_i}|_{\V_0}) =0$. Then the compatibility condition is satisfied.
\end{proof}
Any other left-invariant complement is the graph of a non-trivial linear map $\ell: \V_0 \to \distr$, that is
\begin{equation}
\V_\ell := \{X+ \ell( X)\mid X \in \V_0\}.
\end{equation}
\begin{prop}\label{p:nonuniquenessCarnot}
If $\Delta_\popp = L^\V$, then $\V = \V_\ell$, with
\begin{equation}
\tr(\ell \circ \ad_{X_i}) = 0, \qquad \forall i=1,\ldots,k.
\end{equation}
\end{prop}
\begin{proof}
We use the shorthand $\pi_\ell$ (resp. $\pi_0$) for the projection $\pi_{\V_\ell}$ (resp. $\pi_{\V_0}$). Note that $\pi_\ell = \pi_0 + \ell \circ \pi_0$. The new complement is compatible iff, for all $i=1,\ldots,k$,
\begin{equation}
0 = \tr(\pi_\ell \circ \ad_{X_i} \circ \pi_\ell) = \tr(\pi_\ell^2 \circ \mathrm{ad}_{X_i}) = \tr(\pi_\ell \circ \mathrm{ad}_{X_i}) = \tr(\ell \circ \pi_0 \circ \ad_{X_i}) = \tr(\ell \circ \ad_{X_i}),
\end{equation}
where we used the cyclic property of the trace, the fact that projectors are idempotent, that $\V_0$ satisfies the compatibility condition and the fact that the image of $\ad$ is contained in $\V_0$.
\end{proof}
Any non-trivial $\ell$ such that $\ell(\mathfrak{g}_2) = 0$ gives an example where the compatible complement is non-unique. Such maps exist for any Carnot group of step $m \geq 3$, while for step $2$ the only such map is trivial.
\begin{example}
We provide an example of non-uniqueness for corank 1 Carnot groups. Choose a structure with
\begin{equation}
\mathfrak{g}_1 = \spn\{X_1,\ldots,X_k\}, \qquad \mathfrak{g}_2 = \spn\{X_0\},
\end{equation}
\begin{equation}
[X_i,X_j] = A_{ij} X_0, \qquad i,j=1,\ldots,k
\end{equation}
for a skew symmetric matrix $A \in \mathfrak{so}(k)$ (the singular values of $A$ define uniquely any Carnot group of type $(k,k+1)$, up to isometries, see for example \cite{LR-Enumerative}). Any other compatible complement is of the form:
\begin{equation}
\V = \spn\left\lbrace X_0 + \sum_{j=1}^k \ell_j X_j\right\rbrace, \qquad \text{with} \qquad \ell \in \ker A.
\end{equation}
Then the space of compatible complements is an affine space of dimension $\dim \ker A$. In particular, the compatible complement is unique precisely when $\ker A = \{0\}$, that is, when the structure is contact.
\end{example}
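For concreteness, we single out two standard special cases of this example (the explicit matrix below is one convenient choice of normalization). For the Heisenberg group, of type $(2,3)$, one may take
\begin{equation}
A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad \ker A = \{0\},
\end{equation}
so the structure is contact and $\V_0 = \mathfrak{g}_2$ is the unique compatible complement. On the other hand, for any Carnot group of type $(3,4)$ the matrix $A \in \mathfrak{so}(3)$ has even rank, hence $\dim \ker A \geq 1$, and the compatible complement is never unique.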
\section{Corank $1$ structures}\label{s:corank1}
Consider a sub-Riemannian structure $(M,\distr,\metr)$ with $\rank \distr= \dim M -1$. We assume that $\distr$ has step $2$, that is $\distr_q^2 = T_qM$ for all $q \in M$. This is a popular class of structures including contact, quasi-contact, and more degenerate ones. Locally $\distr = \ker \eta$ for some one-form $\eta$. The endomorphism $J_\eta: \Gamma(\distr) \to \Gamma(\distr)$ is defined by
\begin{equation}
g(X,JY) = d\eta(X,Y), \qquad \forall X,Y \in \Gamma(\distr).
\end{equation}
It is skew-adjoint w.r.t. the sub-Riemannian scalar product, namely $J^* = -J$. Moreover, $J \neq 0$. In fact, if $J$ were zero, a simple application of Cartan's formula would give, for any local frame $X_1,\ldots,X_k$ of $\distr$:
\begin{equation}
\eta([X_i,X_j]) = -d\eta(X_i,X_j) + \cancel{X_i(\eta(X_j))}-\cancel{X_j(\eta(X_i))} = -g(X_i,JX_j) = 0,
\end{equation}
in contradiction with the step $2$ condition. It follows that $\tr(JJ^*) > 0$. This observation leads to the following construction. If $g \in C^\infty(M)$ is nowhere vanishing, then $g \eta$ defines the same distribution. On the other hand, one can check that if $\eta' = g\eta$, then $d\eta'|_{\distr} = g\, d\eta|_{\distr}$, that is, $J_{\eta'} = g J_\eta$. We can fix $\eta$, up to a sign, with the condition
\begin{equation}
\| J\|^2 = \tr(JJ^*) = \sum_{i,j=1}^k g(X_i,JX_j)^2 = 1.
\end{equation}
In this case, we say that the local one-form $\eta$ is \emph{normalized}. The existence of a global, normalized one-form depends on the distribution. In particular there exists a unique (up to a sign), normalized global one-form if and only if $\distr$ is co-orientable (i.e. the quotient bundle $TM/\distr$ is orientable or, equivalently, trivial).
\begin{rmk}
If $TM/\distr$ is not orientable, a global normalized one-form does not exist. This is the case, for example, for the structure on $M = \R^2 \times \R P^1$ with
\begin{equation}
\distr:=\ker(\sin\theta\, dx - \cos\theta\, dy),
\end{equation}
where we identify $\R P^1 \simeq \R/\pi\mathbb{Z}$ with the coordinate $\theta$. One could still work with a global object $[\eta]$, a section of the bundle whose fibers are $(T^*_q M\setminus\{0\})/\mathbb{Z}_2$. In particular, the value $[\eta](q)$ is the equivalence class $\{\pm\eta(q)\}$. All the results and formulas appearing in the following do not depend on the choice of the representative of the class at each point. For this reason, it is not a restriction to assume that $\eta$ is a well-defined global one-form.
\end{rmk}
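To make the previous remark concrete, the claims can be checked directly (the frame below is chosen by us for illustration). The distribution is spanned by $X_1 = \cos\theta\,\partial_x + \sin\theta\,\partial_y$ and $X_2 = \partial_\theta$, and
\begin{equation}
\eta([X_2,X_1]) = (\sin\theta\, dx - \cos\theta\, dy)\left( -\sin\theta\,\partial_x + \cos\theta\,\partial_y \right) = -1 \neq 0,
\end{equation}
so the structure has step $2$. Moreover, $\eta$ changes sign under $\theta \mapsto \theta + \pi$, so on $\R^2\times\R P^1$ the one-form $\eta$ is well defined only up to a sign, as claimed.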
We investigate the relation $\Delta_\omega = L^\V$. We first rewrite the compatibility condition $\chi^{(\V,\omega)}=0$ in this case. Observe that any complement is (locally) generated by a vector field $X_0$, that we normalize with $\eta(X_0)=1$.
\begin{lemma}\label{l:compatibility-corank1}
Let $\V = \spn\{X_0\}$, with $\eta(X_0)=1$. The compatibility equation $\chi^{(\omega,\V)} = 0$ is
\begin{equation}
\sum_{i=1}^k d\eta(X_0,X_i) X_i = \grad(\theta) \qquad \text{with} \qquad \theta = \log|\omega(X_1,\ldots,X_k,X_0)|.
\end{equation}
\end{lemma}
\begin{proof}
In the local adapted frame $X_1,\ldots,X_k,X_0$, the compatibility equation $\chi^{(\omega,\V)} = 0$ is
\begin{equation}
\sum_{i=1}^k c_{0i}^0 X_i + \grad(\theta) = 0.
\end{equation}
This simple form is due to the fact that the structure has corank $1$. Then we rewrite
\begin{equation}
c_{0i}^0 = \eta([X_0,X_i]) = - d\eta(X_0,X_i) +\cancel{X_0(\eta(X_i))} - \cancel{X_i(\eta(X_0))},
\end{equation}
where we used Cartan's formula and the normalization $\eta(X_0) = 1$.
\end{proof}
The existence and uniqueness of compatible complements in this case depend on the dimension of $\ker J$.
\begin{prop}\label{p:volumeuniquecomplement}
Fix a volume $\omega$, and let $p:=\dim\ker J$. Then:
\begin{itemize}
\item if $p =0$, then $\exists!$ complement $\V$ such that $L^\V = \Delta_\omega$;
\item if $p>0$, the space of $\V$ such that $L^\V = \Delta_\omega$ is either empty, or an affine space over $\ker J$.
\end{itemize}
\end{prop}
\begin{proof}
Let $p=0$; we check that the equation $\chi^{(\omega,\V)} = 0$ has a unique solution for fixed $\omega$. Assume $\V = \spn\{Z + \xi\}$, where $Z$ is transverse to $\distr$ with $\eta(Z)=1$, and $\xi \in \Gamma(\distr)$. Then, by Lemma~\ref{l:compatibility-corank1}, the compatibility condition gives
\begin{equation}
\grad(\theta) = \sum_{i=1}^k d\eta(Z+\xi,X_i)X_i = \sum_{i=1}^k \left[ d\eta(Z,X_i) + g(\xi,JX_i)\right] X_i = \sum_{i=1}^k d\eta(Z,X_i)X_i - J\xi.
\end{equation}
There is a unique $\xi$ satisfying this equation, namely $\xi = -J^{-1}\left[\grad(\theta)-\sum_{i=1}^k d\eta(Z,X_i)X_i\right]$. Now let $p >0$, and suppose that $X_0$ and $X_0'$ are two different (normalized) generators for complements $\V$ and $\V'$, compatible with the volume $\omega$, with $\eta(X_0)=\eta(X_0')=1$. The normalization implies $X_0-X_0' \in \distr$. According to Lemma~\ref{l:compatibility-corank1},
\begin{equation}
\sum_{i=1}^k d\eta(X_0,X_i)X_i = \grad(\theta), \qquad\text{and}\qquad \sum_{i=1}^k d\eta(X_0',X_i)X_i =\grad(\theta)',
\end{equation}
with $\theta' = \log| \omega(X_1,\ldots,X_k,X_0')|$. Since $X_0-X_0' \in \distr$,
\begin{equation}
\theta = \log| \omega(X_1,\ldots,X_k,X_0)| = \log |\omega(X_1,\ldots,X_k,X_0')| = \theta'.
\end{equation}
Then we have $d\eta(X_0-X_0',X_i) = 0$, for all $i =1,\ldots,k$. This implies $X_0-X_0' \in \ker J$. Conversely, one can check that for any (normalized) generator $X_0$ of a compatible complement $\V$, any complement generated by an element of $X_0 + \ker J$ is compatible with the same volume $\omega$.
\end{proof}
\subsection{On natural Riemannian extensions and Popp's volume}
We end this section with a general result about volumes of Riemannian extensions of corank $1$ structures.
\begin{prop}\label{p:naturalriemannian}
Let $(M,\distr,\g)$ be a corank $1$ sub-Riemannian structure with normalized one-form $\eta$. Take any local vector field $Z$ such that $\eta(Z) =1$. Consider the (local) Riemannian extension of $(M,\distr,\g)$ obtained by declaring $Z$ a unit vector orthogonal to $\distr$. Then the Riemannian volume of this Riemannian extension does not depend on the choice of $Z$ and is equal to Popp's volume.
\end{prop}
\begin{proof}
This statement is an application of the explicit formula of \cite[Proposition 16]{nostropopp}, stated in the contact case, whose proof holds unchanged for corank $1$ sub-Riemannian structures (it only requires that $\tr(JJ^*) \neq 0$).
\end{proof}
Proposition~\ref{p:naturalriemannian} gives another reason for considering Popp's volume in the corank $1$ case (besides the fact, valid in general, that Popp's volume is N-intrinsic). We now specialize Proposition~\ref{p:volumeuniquecomplement} to the contact and quasi-contact cases, giving an answer to {\bf Q2} and {\bf Q3} of Section~\ref{s:intro}.
\section{Contact structures}\label{s:contact}
Let $M$ be a smooth manifold with $\dim M = 2d +1$. A corank $1$ sub-Riemannian structure is \emph{contact} if $\ker J = 0$. This is the least degenerate case. Since $M$ is odd-dimensional, there exists a unique vector field $Z$, called the \emph{Reeb vector field}, such that $d\eta(Z,\cdot) = 0$ and $\eta(Z) =1$.
The next corollaries follow from (the proof of) Proposition~\ref{p:volumeuniquecomplement} and give positive answers to {\bf Q2} and {\bf Q3} mentioned in Section~\ref{s:intro}.
\begin{cor}
For any volume $\omega$ there exists a unique complement $\V$ such that $L^\V = \Delta_\omega$. This complement is generated by the vector
\begin{equation}
X_0 = \Z -J^{-1}\grad(\theta),
\end{equation}
where $\theta = \log|\omega(X_1,\ldots,X_k,\Z)|$.
\end{cor}
Contact sub-Riemannian structures have a natural Riemannian extension, obtained by declaring $\Z$ a unit vector orthogonal to $\distr$. By Proposition~\ref{p:naturalriemannian}, Popp's volume is the Riemannian volume of this extension.
\begin{cor}\label{c:reebpopp}
Let $\popp$ be Popp's volume. The unique complement $\V$ such that $L^{\V} = \Delta_{\popp}$ is generated by the Reeb vector field. Moreover, $\popp$ is the unique volume (up to constant rescaling) with this property.
\end{cor}
\begin{proof}
By \cite[Rmk. 4]{nostropopp}, $\theta$ is constant. Then $\grad (\theta )=0$ and $X_0 = \Z$. The uniqueness follows from Lemma~\ref{l:uniqueness}.
\end{proof}
\subsection{Integrability conditions}\label{s:integrability}
The inverse problem, namely, for a fixed complement $\V$, finding a volume $\omega$ such that $L^\V = \Delta_\omega$, is more complicated (and, in general, has no solution). In the contact case we find an explicit integrability condition.
\begin{prop}\label{p:integrability}
Let $\V = \spn\{X_0\}$. Define the one-form $\alpha := \frac{i_{X_0} d\eta}{\eta(X_0)}$ and the function $g=\frac{d\alpha \wedge\eta \wedge (d\eta)^{d-1}}{\eta\wedge (d\eta)^d }$. Then there exists a volume $\omega$ such that $L^{\V} = \Delta_\omega$ if and only if
\begin{equation}
d\alpha - dg \wedge \eta - g d\eta = 0.
\end{equation}
In this case, $\omega$ is unique up to constant rescaling.
\begin{rmk}
If $\V$ is the Reeb direction, $\alpha = 0$, the integrability condition is satisfied and we recover Corollary~\ref{c:reebpopp}.
\end{rmk}
\end{prop}
\begin{proof}
We assume $\V = \spn\{\Z + \xi\}$ for some $\xi \in \Gamma(\distr)$. This is equivalent to normalizing $X_0$ in such a way that $\eta(X_0) =1$. A volume $\omega$ is uniquely specified by the function $\theta = \log|\omega(X_1,\ldots,X_k,\Z)|$. The same argument as in the proof of Proposition~\ref{p:volumeuniquecomplement} leads to the compatibility condition
\begin{equation}
-J \xi = \grad(\theta).
\end{equation}
We have to solve the following problem: given a horizontal vector field $X$, find a function $\theta \in C^\infty(M)$ such that $X = \grad(\theta)$. With $X$ we associate a one-form $\alpha$ such that
\begin{equation}
\alpha(Y) = g(X,Y), \qquad \forall Y \in \Gamma(\distr).
\end{equation}
For our case, $X=-J\xi$. Then $\alpha(Y) = g(-J\xi,Y) = -g(Y,J\xi) = -d\eta(Y,\xi) = i_\xi d\eta\,(Y) = i_{X_0} d\eta (Y)$, where the last equality holds since $i_\Z d\eta = 0$, and in this case we set $\alpha :=i_{X_0} d\eta$. In the language of forms, the above problem is equivalent to $\alpha|_{\distr} = d\theta|_{\distr}$, which has a solution if and only if
\begin{equation}
\alpha + g\eta = d \theta, \qquad \text{ for some } g \in C^\infty(M).
\end{equation}
A (local) necessary and sufficient condition for the existence of such a $\theta$ is that the family $\alpha + g\eta$ contains a closed representative. Namely, $g$ must satisfy
\begin{equation}\label{eq:compat}
d \alpha + dg\wedge \eta + g d\eta = 0.
\end{equation}
If such a $g$ exists, it is uniquely expressed in terms of $\alpha$: taking the wedge product of Eq.~\eqref{eq:compat} with $\eta \wedge (d\eta)^{d-1}$ (the term containing $dg \wedge \eta$ vanishes, since $\eta\wedge\eta = 0$), we get
\begin{equation}
g \eta\wedge (d\eta)^d = - \eta \wedge d\alpha \wedge (d\eta)^{d-1} \qquad \Rightarrow \qquad g = -\frac{d\alpha \wedge\eta \wedge (d\eta)^{d-1}}{\eta\wedge (d\eta)^d }.
\end{equation}
The $n$-form in the denominator never vanishes: this is equivalent to the non-degeneracy assumption $\ker J =\{0\}$. We still have to check that such a $g$ gives a closed representative, i.e. that $g$ solves Eq.~\eqref{eq:compat}. Plugging the explicit expression of $g$ into Eq.~\eqref{eq:compat} gives the condition in the statement. Uniqueness follows from Lemma~\ref{l:uniqueness}.
\end{proof}
\begin{rmk}
In the three-dimensional case $d=1$ and the integrability condition simplifies. Fix a basis $X_1,X_2$. We can assume $d\eta(X_1,X_2) = 1$, and for the function $g$ we get
\begin{equation}
g = -\frac{d\alpha \wedge\eta \wedge (d\eta)^{d-1}}{\eta\wedge (d\eta)^d } = -\frac{d\alpha \wedge\eta(X_1,X_2,\Z)}{\eta\wedge d\eta (X_1,X_2,\Z)} = - d\alpha(X_1,X_2).
\end{equation}
One can check that, when restricted to $\distr$, Eq.~\eqref{eq:compat} is always satisfied. Then the only non-trivial condition is given by taking the contraction with the Reeb field: $i_\Z d\alpha + d(d\alpha(X_1,X_2))= 0$.
\end{rmk}
\section{Quasi-contact structures} \label{s:quasicontact}
Let $M$ be a smooth manifold with $\dim M =2d+2$. A corank $1$ sub-Riemannian structure $(M,\distr,\metr)$ is \emph{quasi-contact} if $\distr_0:=\ker J$ has positive dimension. We assume \emph{minimal degeneration}, that is $\dim \distr_0 = 1$.
\subsection{The quasi-Reeb vector field}\label{s:quasireeb}
We discuss the construction of a ``canonical'' vector transverse to $\distr$, analogous to the Reeb vector field, in the quasi-contact case. We learned this construction from \cite{gregquasi} (where a different normalization is used).
In the construction that follows we always assume that $J_q :\distr_q \to \distr_q$ has distinct eigenvalues at all points of $M$. For a generic quasi-contact structure, this holds outside a closed stratified subset of codimension $3$. Thus, $J$ has distinct imaginary eigenvalues $\pm i \lambda_j$ with
\begin{equation}
0= \lambda_0 < \lambda_1 < \dots < \lambda_d,
\end{equation}
where the $\lambda_i$ are smooth functions on $M$. The normalization condition gives $2\sum_{i=1}^d \lambda_i^2 =1$.
For $j = 1,\ldots,d$, denote by $\distr_j \subset \distr$ the $2$-dimensional (real) eigenspace associated with the eigenvalues $\pm i\lambda_j$ (we use the same notation for the eigenspaces and the associated sub-bundles); recall that $\distr_0 = \ker J$. By definition of $J$, the subspaces $\distr_i$, for $i=0,1,\ldots,d$, are mutually orthogonal. We can choose generators $X_j,Y_j$ of each distribution $\distr_j$ by taking smooth families of (generalized) eigenvectors of $J$. Namely
\begin{equation}
J (X_j+ i Y_j) = i \lambda_j (X_j+iY_j), \qquad \forall j =1,\ldots,d.
\end{equation}
Equivalently
\begin{equation}
J X_j = - \lambda_j Y_j, \qquad J Y_j = \lambda_j X_j, \qquad \forall j=1,\ldots,d.
\end{equation}
Notice that $\lambda_j g(X_j,Y_j) = g(J Y_j,Y_j) = 0$ since $J$ is skew-symmetric. Then $X_j$ and $Y_j$ are orthogonal, and we choose them to be orthonormal. Moreover, let $W \in \ker J$ be of unit norm. In particular, $X_1,Y_1,\ldots,X_d,Y_d,W$ is a local orthonormal frame for $\distr$.
\begin{rmk}
In terms of the orthonormal frame $X_1,Y_1,\ldots,X_d,Y_d,W \in \Gamma(\distr)$, the matrix representing $J$ is
\begin{equation}
J \simeq \begin{pmatrix}
\lambda_1 K & & & \\
& \ddots & &\\
& & \lambda_d K&\\
& & & 0
\end{pmatrix}, \qquad K: = \begin{pmatrix}
0 & 1 \\
-1& 0
\end{pmatrix}.
\end{equation}
\end{rmk}
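From this block form the normalization condition can be read off directly: since $\tr(K K^*) = 2$,
\begin{equation}
\|J\|^2 = \tr(JJ^*) = \sum_{j=1}^d \lambda_j^2 \, \tr(K K^*) = 2 \sum_{j=1}^d \lambda_j^2 = 1.
\end{equation}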
Choose $j \in \{1,\ldots,d \}$. Let $\distr_j^2:=[\distr_j,\distr_j]+\distr_j$ be the distribution generated by taking all possible brackets of sections of $\distr_j$ of length up to $2$. Notice that $[\distr_j,\distr_j]$ is transverse to $\distr$. In fact, for any $V,W \in \Gamma(\distr_j)$:
\begin{equation}
\eta([V,W]) = -d\eta(V,W) + \cancel{V(\eta(W))} - \cancel{W(\eta(V))} = g(W,JV)
\end{equation}
and we conclude using the fact that $J|_{\distr_j}$ is non-degenerate. Then $\distr_j^2$ is a three-dimensional distribution, generated by the orthonormal vector fields $X_j,Y_j$ defined above and their bracket $[X_j,Y_j]$.
\begin{definition}\label{d:quasireeb}
The \emph{quasi-Reeb} vector field $\Z_j$ is the unique vector field such that
\begin{equation}
\Z_j \in \distr_j^2, \qquad d\eta(\Z_j,\distr_j) = 0, \qquad \eta(\Z_j) = 1.
\end{equation}
\end{definition}
\begin{prop}\label{p:quasireebformula}
In terms of the orthonormal generators $X_j,Y_j$ of $\distr_j$, we have
\begin{equation}
\Z_j = - \frac{1}{\lambda_j}[X_j,Y_j] +\frac{d\eta([X_j,Y_j],Y_j)}{\lambda^2_j} X_j -\frac{d\eta([X_j,Y_j],X_j)}{\lambda^2_j} Y_j.
\end{equation}
\end{prop}
\begin{proof}
It's a linear-algebra computation with Cartan's formula. By the first condition
\begin{equation}
\Z_j = \alpha X_j + \beta Y_j + \gamma [X_j,Y_j], \qquad \alpha,\beta,\gamma \in C^\infty(M).
\end{equation}
Using the second condition with $X_j \in \distr_j$ we get
\begin{equation}
0 = d\eta (\Z_j,X_j) = \beta d\eta(Y_j,X_j) + \gamma d\eta([X_j,Y_j],X_j), \qquad \Rightarrow \qquad \beta = \gamma\frac{d\eta([X_j,Y_j],X_j)}{\lambda_j}.
\end{equation}
Using the second condition with $Y_j \in \distr_j$ we get
\begin{equation}
0 = d\eta (\Z_j,Y_j) = \alpha d\eta(X_j,Y_j) + \gamma d\eta([X_j,Y_j],Y_j), \qquad \Rightarrow \qquad \alpha = -\gamma\frac{d\eta([X_j,Y_j],Y_j)}{\lambda_j}.
\end{equation}
We conclude the proof using the third condition (normalization), which gives
\[
1 = \eta(\Z_j) = \gamma\eta([X_j,Y_j]) = -\gamma d\eta(X_j,Y_j) = \gamma g(Y_j,JX_j) = -\gamma \lambda_j. \qedhere
\]
\end{proof}
\begin{rmk}
If the structure is nilpotent (of step $2$), any Lie bracket of length greater than $2$ vanishes, and by the explicit formula above we have
\begin{equation}
\Z_j = - \frac{1}{\lambda_j}[X_j,Y_j].
\end{equation}
If the structure is also a Carnot group, $[X_j,Y_j]$ belongs to the second stratum $\mathfrak{g}_2$, for any $j$. Since there is a unique vector field $Z$ in the (one-dimensional) second stratum such that $\eta(Z) = 1$, it follows that $\Z_j = Z$ for all $j \in \{1,\ldots,d\}$. Then for quasi-contact Carnot groups this construction is canonical and does not depend on the choice of $j$.
\end{rmk}
\begin{rmk}
At the points of the manifold where the eigenvalues of $J$ cross, the quasi-Reeb vector fields $\Z_j$ are no longer well defined. However, the volume of the Riemannian extensions obtained by declaring $\Z_j$ a unit vector orthogonal to $\distr$ can be extended to a well defined, global smooth volume. In fact, as a consequence of Proposition~\ref{p:naturalriemannian}, the Riemannian volume of any one of these extensions coincides with Popp's volume (as $\eta(\Z_j) = 1$), and the latter is clearly a well defined volume form on the whole manifold.
\end{rmk}
\begin{rmk}
The construction above defines $d$ different vector fields, each defined on the region of the manifold where the associated eigenvalue $i\lambda_j$ has algebraic multiplicity equal to $1$. In some cases one can choose a canonical, smooth $\Z_j$: for example, when the smallest eigenvalue has globally minimal multiplicity. For $d=1$ there is always such a choice. In general, one could define a unique natural transverse vector by taking the average $\Z:=\frac{1}{d}\sum_{j} \Z_j$. Its regularity properties, however, are not clear when the spectrum of $J$ is not simple.
\end{rmk}
\subsection{An example of non-existence}
Proposition~\ref{p:volumeuniquecomplement} states that, in the contact case (i.e. when $J$ is non-degenerate), for any volume $\omega$ there exists a unique complement $\V$. In the quasi-contact case the situation changes dramatically. Compatible complements are never unique as soon as $\ker J \neq \{0\}$. Surprisingly, compatible complements might not exist. We discuss an example where for a given volume (Popp's one) there are \emph{no} compatible complements.
\begin{example}
Consider the quasi-contact structure on $M =\mathbb{R}^4$, with coordinates $(x,y,z,w)$, defined by
\begin{equation}
\eta = \frac{g}{\sqrt{2}} \left(dw - \frac{y}{2} dx+\frac{x}{2} dy\right),
\end{equation}
where $g = g(z)$ is any monotone, strictly positive function (e.g. $g = e^z$).
The metric is defined by the following global orthonormal frame for $\distr$:
\begin{equation}
X = \frac{1}{\sqrt{g}}\left( \partial_x + \frac{1}{2}y \partial_w \right), \qquad Y = \frac{1}{\sqrt{g}}\left(\partial_y - \frac{1}{2} x \partial_w\right), \qquad \Z = \frac{1}{\sqrt{g}} \partial_z.
\end{equation}
This is essentially $\mathbb{H}_3 \oplus \mathbb{R}$ with a scaled metric. One can check that
\begin{equation}
d \eta = \frac{\dot{g}}{\sqrt{2}} \left(dz \wedge dw + \frac{y}{2} dx \wedge dz -\frac{x}{2} dy \wedge dz \right) + \frac{g}{\sqrt{2}} dx \wedge dy,
\end{equation}
where $\dot{g} = \partial_z g$. The matrix $A$ representing $d\eta$ in the coordinates $(x,y,z,w)$ is
\begin{equation}
A \simeq \frac{1}{\sqrt{2}}\begin{pmatrix}
0 & g & \tfrac{y}{2}\dot{g} & 0 \\
-g & 0 & -\tfrac{x}{2} \dot{g} & 0 \\
-\tfrac{y}{2} \dot{g} & \tfrac{x}{2}\dot{g} & 0 & \dot{g} \\
0 & 0 & -\dot{g} & 0
\end{pmatrix}, \qquad \text{with} \qquad \det A = \frac{ g^2 \dot{g}^2}{4} > 0.
\end{equation}
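The determinant can be checked via the Pfaffian: for a $4\times 4$ skew-symmetric matrix, $\det A = \mathrm{Pf}(A)^2$ with $\mathrm{Pf}(A) = a_{12}a_{34} - a_{13}a_{24} + a_{14}a_{23}$. Here $a_{24} = a_{14} = 0$, so
\begin{equation}
\mathrm{Pf}(A) = \frac{g}{\sqrt{2}} \cdot \frac{\dot{g}}{\sqrt{2}} = \frac{g\dot{g}}{2}, \qquad \det A = \frac{g^2 \dot{g}^2}{4},
\end{equation}
which is strictly positive precisely because $g$ is monotone (so that $\dot{g} \neq 0$).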
In particular, $\ker d\eta =\{0\}$. Moreover, one can check that
\begin{equation}
J X = -\frac{1}{\sqrt{2}}Y, \qquad J Y = \frac{1}{\sqrt{2}} X, \qquad J \Z = 0.
\end{equation}
Thus we have the correct normalization
\begin{equation}
\|J\|^2 = \tr(JJ^*) = 1.
\end{equation}
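Indeed, using the orthonormal frame $X,Y,\Z$ of $\distr$,
\begin{equation}
\tr(JJ^*) = \|JX\|^2 + \|JY\|^2 + \|J\Z\|^2 = \tfrac{1}{2} + \tfrac{1}{2} + 0 = 1.
\end{equation}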
Choose $\omega = \popp$, Popp's volume. We look for a complement $\V = \spn\{X_0\}$ such that $L^\V = \Delta_\omega$. Rescaling $X_0$, we can assume that $\eta(X_0) = 1$. Using the explicit formula from \cite{nostropopp}, we have
\begin{equation}
e^{\theta_{\popp}} = | \popp(X_1,\ldots,X_k,X_0)| = \frac{1}{\sqrt{\sum_{i,j=1}^k d\eta(X_i,X_j)^2}} = \frac{1}{\|J\|} = 1,
\end{equation}
so that $\theta_{\popp} = 0$. We look for solutions of the equation $\chi^{(\popp,\V)} = 0$. Since $\grad(\theta_{\popp}) = 0$, we get from Lemma~\ref{l:compatibility-corank1}
\begin{equation}
d\eta(X_0,X_i) = 0, \qquad \forall i =1,\ldots,k.
\end{equation}
This implies $X_0 \in \ker d\eta$, but $d\eta$ has trivial kernel: a contradiction. Hence no complement compatible with Popp's volume exists.
\end{example}
\begin{rmk}
It can be shown that the non-existence of a complement $\V$ with $L^\V = \Delta_\popp$ is a generic property for quasi-contact structures. The precise statement and proof involve techniques from transversality theory and are beyond the scope of this paper.
\end{rmk}
This result is particularly surprising in the case $d=1$ (that is, $\dim M =4$, the lowest dimension for quasi-contact structures), for the following reason. The nilpotent approximation of any corank $1$ sub-Riemannian structure is a corank $1$ Carnot group. This is a sub-Riemannian structure on $\R^n$ that, in coordinates $(x,z) \in \R^{n-1} \times \R$, is generated by the global orthonormal frame:
\begin{equation}\label{eq:frameC}
X_i = \frac{\partial}{\partial x_i} - \frac{1}{2}\sum_{j=1}^{n-1}A_{ij} x_j \frac{\partial}{\partial z}, \qquad i =1,\ldots,n-1,
\end{equation}
where $A \in \mathfrak{so}(n-1)$. Isometry classes of corank $1$ Carnot groups are determined by the string $0\leq \alpha_1 < \ldots < \alpha_p$ of the $p$ distinct non-negative singular values of $A$, up to multiplication by a global constant, together with their geometric multiplicities (see \cite{LR-Enumerative} for corank $1$ structures, and \cite[Rmk. 1]{ALG-path} for higher corank). In particular, for $n =4$ and $A \in \mathfrak{so}(3)$, there is a unique such string: $(0,1)$, where $0$ has multiplicity $1$ and $1$ has multiplicity $2$. Therefore any corank $1$ Carnot group in dimension $n=4$ is isometric to the one defined by the frame~\eqref{eq:frameC}, with:
\begin{equation}
A = \begin{pmatrix}
0 & 1 & 0 \\
-1 & 0 & 0 \\
0 & 0 & 0
\end{pmatrix}.
\end{equation}
It follows that any corank $1$ sub-Riemannian structure in dimension $4$ is quasi-contact and equi-nilpotentizable. In this case, we proved that there exists a unique N-intrinsic volume (up to scaling), given by Popp's one, and a unique N-intrinsic Laplacian $\Delta_\popp$ (see Theorem~\ref{t:equinilp}). In our example, this unique N-intrinsic Laplacian has \emph{no} compatible complement or, in other words, the macroscopic diffusion operator has no microscopic counterpart.
\section{On volume forms in (sub)-Riemannian geometry}\label{s-intrinsic}
The aim of this section is to introduce a rigorous definition of N-intrinsic volume as the one that ``depends only on the first order approximation of the (sub)-Riemannian structure'' (see the discussion in Section~\ref{s:intro}).
Let $M$ be an orientable $n$-dimensional smooth manifold. A (smooth) \emph{volume form} is a positive $n$-form $\omega$, that is, $\omega(X_1,\ldots,X_n) > 0$ for any oriented local frame $X_1,\ldots,X_n$. Any volume form $\omega$ defines a positive measure on the Borel sets of $M$, which we still denote by $\omega$. For $f \in C^\infty(M)$ and a Borel set $U$, $\int_U f \omega$ denotes the integral over $U$ of $f$ with respect to the measure induced by $\omega$.
\begin{definition}\label{d:diver}
Let $X \in \Gamma(TM)$ and $\omega$ be a volume form. The \emph{divergence} $\div_\omega (X)$ is the function defined by
\begin{equation}
\mathcal{L}_X \omega = \div_\omega(X) \omega,
\end{equation}
where $\mathcal{L}_X$ denotes the Lie derivative in the direction of $X$.
\end{definition}
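For instance, in $\R^n$ with the standard volume $\omega = dx^1 \wedge \dots \wedge dx^n$ and $X = \sum_{i=1}^n X^i \partial_{x_i}$, Cartan's formula yields
\begin{equation}
\mathcal{L}_X \omega = d(i_X \omega) = \Big( \sum_{i=1}^n \partial_{x_i} X^i \Big)\, \omega,
\end{equation}
so that $\div_\omega(X)$ recovers the usual divergence of $X$.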
The next two lemmas are easy consequences of the definition.
\begin{lemma}
Let $f \in C_0^\infty(M)$ and $\omega$ be a volume form. Then
\begin{equation}
\int_M f \div_\omega(X) \omega = - \int_M df(X) \omega.
\end{equation}
\end{lemma}
\begin{lemma}
Let $f \in C^\infty(M)$, with $f \neq 0$ and $\omega$ a volume form. Let $\omega' = f\omega$. Then
\begin{equation}
\div_{\omega'}(X) = \div_{\omega}(X) + X(\log |f|).
\end{equation}
\end{lemma}
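To spell out the second lemma, it suffices to apply the Leibniz rule for the Lie derivative:
\begin{equation}
\mathcal{L}_X (f\omega) = X(f)\, \omega + f\, \mathcal{L}_X \omega = \left( \frac{X(f)}{f} + \div_\omega(X) \right) f\omega,
\end{equation}
and to observe that $X(f)/f = X(\log|f|)$.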
\subsection{The nilpotent approximation}
A key ingredient is the definition of the \emph{nilpotent approximation}: a (sub)-Riemannian structure on the tangent space $T_q M$ at a point which, in a sense, is the ``first order approximation'' of the (sub)-Riemannian structure. Let $\distr$ be an equiregular, bracket-generating distribution. The \emph{step} of $\distr$ is the smallest $m$ such that $\distr_q^m = T_q M$.
\begin{definition}
The \emph{nilpotentization} of $\distr$ at a point $q \in M$ is the graded vector space
\begin{equation}
\hat{M}_q := \distr_q \oplus \distr_q^2/\distr_q \oplus \dots \oplus \distr^m_q/\distr_q^{m-1}.
\end{equation}
\end{definition}
The vector space $\hat{M}_q$ can be endowed with a Lie algebra structure, induced by the Lie bracket and respecting the grading, as follows. Let $X_q \in \distr^i_q/\distr^{i-1}_q$ and $Y_q \in \distr^j_q/\distr^{j-1}_q$. Let $X,Y \in \Gamma(TM)$ be smooth extensions of any representatives of $X_q,Y_q$. Then the Lie bracket of $X_q$ and $Y_q$ is
\begin{equation}
[X_q,Y_q] := [X,Y]_q \mod \distr_q^{i+j-1} \in \distr_q^{i+j}/\distr_q^{i+j-1}.
\end{equation}
Under the equiregularity assumption, this product is well defined and does not depend on the choice of the representatives and of the extensions. Observe that the graded Lie algebra $\hat{M}_q$ is nilpotent. Then there is a unique connected, simply connected Lie group whose Lie algebra is $\hat{M}_q$ (which we identify with the group itself). The global, left-invariant vector fields obtained by the group action on any orthonormal basis of $\distr_q \subset \hat{M}_q$ give $\hat{M}_q$ the structure of a left-invariant (sub)-Riemannian manifold, called the \emph{nilpotent approximation} of the sub-Riemannian structure $(M,\distr,\g)$ at the point $q$. We stress that:
\begin{itemize}
\item the base manifold is the vector space $\hat{M}_q \simeq T_q M$ (the latter identification is \emph{not} canonical);
\item the distribution and the scalar product are obtained by extending $\distr_q,\g_q$ using the left action of the group.
\end{itemize}
It can be proved that $\hat{M}_q$ is isometric (as a metric space) to the Gromov tangent cone at $q$ of the metric space $(M,d)$ where $d$ is the Carnot-Carath\'eodory distance induced by the (sub)-Riemannian structure (see~\cite{montgomerybook,bellaiche,mitchell}).
\subsection{Popp's volume}\label{s:Popp}
In this section we provide the definition of Popp's volume. Our presentation follows closely the one of \cite{nostropopp,montgomerybook}. The definition rests on the following lemmas.
\begin{lemma} \label{l:mont1}
Let $E$ be an inner product space, and let $\pi:E\to V$ be a surjective linear map. Then $\pi$ induces an inner product on $V$ such that the length of $v \in V$ is
\begin{equation} \label{eq:final}
\|v\|_V = \min\{ \|e\|_E \text{ s.t. } \pi(e) = v \}.
\end{equation}
\end{lemma}
\begin{lemma} \label{l:mont2}
Let $E$ be a vector space of dimension $n$ with a flag of linear subspaces $\{0\} = F^0 \subset F^1\subset F^2 \subset \ldots\subset F^m = E$. Let $\mathbf{gr}(F) = F^1\oplus F^2/F^1\oplus \ldots \oplus F^m/F^{m-1}$ be the associated graded vector space. Then there is a canonical isomorphism $\theta: \wedge^n E \to \wedge^n \mathbf{gr}(F)$.
\end{lemma}
The proofs can be found in \cite{nostropopp}. We report here the proof of Lemma~\ref{l:mont2} since it contains the definition of the important map $\theta$.
\begin{proof}
We only give a sketch of the proof. For $0\leq i \leq m$, let $k_i:= \dim F^i$. Let $X_1,\dots,X_n$ be an adapted basis for $E$, i.e. $X_1,\dots, X_{k_i}$ is a basis for $F^i$. We define the linear map $\widehat{\theta}: E \to \mathbf{gr}(F)$ which, for $0\leq j \leq m-1$, takes $X_{k_j+1}, \dots, X_{k_{j+1}}$ to the corresponding equivalence classes in $F^{j+1}/F^j$. This map is a non-canonical isomorphism, which depends on the choice of the adapted basis. In turn, $\widehat{\theta}$ induces a map $\theta : \wedge^n E \to \wedge^n \mathbf{gr}(F)$, which sends $X_1\wedge\ldots\wedge X_n$ to $\widehat{\theta}(X_1)\wedge\ldots\wedge\widehat{\theta}(X_n)$. The proof that $\theta$ does not depend on the choice of the adapted basis is a straightforward check, and boils down to the fact that two different adapted bases are related by an upper triangular matrix (see \cite{nostropopp,montgomerybook}).
\end{proof}
The idea behind Popp's volume is to define an inner product on each $\distr^i_q/\distr^{i-1}_q$ which, in turn, induces an inner product on the orthogonal direct sum $\hat{M}_q$. The latter has a natural volume form, such that its value on any oriented orthonormal basis is $1$. Then, we employ Lemma~\ref{l:mont2} to define an element of $(\wedge^n T_q M)^*$, which is Popp's volume form computed at $q$.
Fix $q \in M$. Given $v \in \distr_q$, a \emph{horizontal extension} of $v$ is any $V \in \Gamma(\distr)$ such that $V(q) = v$.
Let $2\leq i \leq m$. The linear maps $\pi_i: \otimes^i \distr_q \to \distr_q^i/\distr_q^{i-1}$
\begin{equation}\label{eq:pimap}
\pi_i(v_1\otimes\dots\otimes v_i) = [V_1,[V_2,\dots,[V_{i-1},V_i]]]_q \mod \distr^{i-1}_q,
\end{equation}
are well defined and do not depend on the choice of the horizontal extensions $V_1,\dots,V_i$ of $v_1,\dots,v_i$.
By the bracket-generating condition, $\pi_i$ are surjective and, by Lemma~\ref{l:mont1}, they induce an inner product space structure on $\distr_q^i/\distr_q^{i-1}$. Therefore, the nilpotentization of the distribution at $q$, namely
\begin{equation}
\hat{M}_q = \distr_q \oplus \distr_q^2/\distr_q \oplus \ldots \oplus \distr_q^m/\distr_q^{m-1}\,,
\end{equation}
is an inner product space, as the orthogonal direct sum of a finite number of inner product spaces. As such, it is endowed with a canonical volume (defined up to a sign) $\omega_q \in (\wedge^n \hat{M}_q)^*$ such that $\omega_q(v_1,\ldots , v_n)=1$ for any orthonormal basis $v_1,\ldots,v_n$ of the vector space $\hat{M}_q$.
Finally, Popp's volume (computed at the point $q$) is obtained by transporting the volume of $\hat{M}_q$ to $T_q M$ through the map $\theta_q :\wedge^n T_q M \to \wedge^n \hat{M}_q$ defined in Lemma~\ref{l:mont2}. Namely
\begin{equation}\label{eq:popppoint}
\popp_q =\theta_{q}^{*}(\omega_{q})= \omega_q \circ \theta_q,
\end{equation}
where $\theta_{q}^{*}$ denotes the dual map. Eq.~\eqref{eq:popppoint} is defined only on the domain of the chosen local frame. Since $M$ is orientable, by a standard argument these local $n$-forms can be glued to a global one, called Popp's volume $\popp$. Moreover, Popp's volume is smooth, as follows, for example, from the explicit formula in \cite[Theorem 1]{nostropopp}. It is also clear that, in the Riemannian case, Popp's volume is the standard Riemannian one. Notice that Popp's volume is well defined only for equiregular structures.
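As a concrete instance, consider the Heisenberg group $\mathbb{H}_3$, with global orthonormal frame $X = \partial_x - \frac{y}{2}\partial_z$, $Y = \partial_y + \frac{x}{2}\partial_z$, so that $[X,Y] = \partial_z$. Among the preimages of $[\partial_z] \in \distr^2_q/\distr_q$ under $\pi_2$, the one of minimal norm is $\frac{1}{2}(X_q \otimes Y_q - Y_q \otimes X_q)$, of norm $1/\sqrt{2}$. Hence $X, Y, \sqrt{2}\,\partial_z$ is an orthonormal basis of $\hat{M}_q$, and one obtains
\begin{equation}
\popp = \frac{1}{\sqrt{2}}\, dx \wedge dy \wedge dz,
\end{equation}
proportional to the Lebesgue volume.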
\subsubsection{Behavior under isometries}
In the Riemannian setting, an isometry is a diffeomorphism such that its differential preserves the Riemannian scalar product. The concept is easily generalized to the sub-Riemannian case.
\begin{definition}
Let $(M,\distr_M,\metr_M)$ and $(N,\distr_N,\metr_N)$ be two (sub)-Riemannian structures. A (local) diffeomorphism $\phi: M \to N$ is a \emph{(local) isometry} if its differential $\phi_* : TM \to TN$ preserves the (sub)-Riemannian structure, namely (i) $\phi_* \distr_{M} = \distr_N$, and (ii) $\phi^* \metr_N = \metr_M$.
\end{definition}
The following is a trivial, but crucial property of Popp's volume that follows by its construction.
\begin{prop}\label{p:volumepres}
(Sub)-Riemannian (local) isometries $\phi:M \to M$ preserve Popp's volume, namely $\phi^*\popp = \popp$.
\end{prop}
\subsection{Intrinsic volumes}
An \emph{intrinsic definition of volume} is an algorithm that associates a volume form with any (sub)-Riemannian structure, in such a way that the association is preserved by isometries. Let us be more precise.
\begin{definition}
An \emph{intrinsic definition of volume} is a map that associates, with any (sub)-Riemannian structure $(M,\distr,\g)$ a volume form $\omega_M$ on $M$ such that if $\phi: M \to N$ is a (sub)-Riemannian isometry between $(M,\distr_M,\g_M)$ and $(N,\distr_N,\g_N)$, then $\phi^* \omega_N = \omega_M$.
\end{definition}
In the following we avoid this verbose terminology, and often speak of an ``(intrinsic) volume'' to mean either the actual volume form $\omega_M$ or the map $M \mapsto \omega_M$.
\begin{example}
For any equiregular (sub)-Riemannian structure $(M,\distr,\metr)$, we can consider Popp's volume $\popp_M$. As a consequence of Proposition~\ref{p:volumepres}, this is an intrinsic definition of volume. Restricted to Riemannian structures, this definition of volume is the classical, Riemannian one.
\end{example}
Some intrinsic volumes are more intrinsic than others. Since the nilpotent approximation is the first order approximation of the (sub)-Riemannian structure at a point $q$, particularly simple definitions of volume are those that, loosely speaking, depend only on the metric invariants of the nilpotent approximation.
We want to make the above statement more precise. To do this, we must go back to the very definition of the nilpotent approximation. Recall that there is no canonical way to identify $T_q M$ with the nilpotentization $\hat{M}_q$. Still, as an immediate consequence of Lemma~\ref{l:mont2}, we can identify the top wedge products of these vector spaces.
\begin{cor}
Let $(M,\distr,g)$ be a (sub)-Riemannian manifold, $q \in M$ and $\hat{M}_q$ its nilpotent approximation at $q$. Then there exists a canonical map $\theta_q^* : (\wedge^n T_q M)^* \to (\wedge^n \hat{M}_q)^*$.
\end{cor}
\begin{proof}
This map is induced by the isomorphism $\theta$ defined in the proof of Lemma~\ref{l:mont2}, where $E = T_q M$ and $F^i = \distr^i_q$ (since $\theta$ is an isomorphism, one can take the dual map of its inverse).
\end{proof}
\begin{rmk}
In other terms, the map $\theta_q$ canonically identifies parallelotopes in $T_q M$ with those in $\hat{M}_q = T_0 \hat{M}_q$. Notice that the (sub)-Riemannian metric does not play any role in the definition of $\theta_q$.
\end{rmk}
\begin{definition}
An intrinsic definition of volume $\omega$ is \emph{N-intrinsic} if, for any (sub)-Riemannian manifold $(M,\distr,\g)$ and any $q \in M$ the following diagram commutes
\begin{displaymath}
\begin{CD}
M @>{\omega}>> \omega_M(q) \\
@V{\mathrm{nil}_q}VV @VV{\theta^*_q}V\\
\hat{M}_q @>>\omega > \omega_{\hat{M}_q}(0)
\end{CD}
\end{displaymath}
where $\mathrm{nil}_q$ associates with any (sub)-Riemannian manifold its nilpotent approximation at $q$.
\end{definition}
In other words, an intrinsic definition of volume is N-intrinsic if at any $q$ the volume form $\omega_M(q)$ agrees with the nilpotent volume form $\omega_{\hat{M}_q}(0)$ under the identification given by $\theta_q$.
\begin{example}
Let $(M,\distr,\g)$ be a Riemannian manifold ($\distr = TM$ and $\g$ is a Riemannian metric). The Riemannian volume on $M$ is the volume form $\omega_M$ such that, for any $q \in M$ and any oriented orthonormal parallelotope $v_1 \wedge \dots \wedge v_n$ at $q$, one has $\omega_M(q)(v_1 \wedge \dots \wedge v_n) = 1$.
We prove that this definition of volume is N-intrinsic. In this case, for any $q \in M$, the nilpotent approximation $\hat{M}_q$ is $T_q M$ with the scalar product given by $\g_q$. Then the map $\theta_q$ is just the identity map between $T_q M$ and $\hat{M}_q$ (which are indeed the same vector space). Thus
\begin{equation}
\theta_q (v_1 \wedge \dots \wedge v_n) = v_1 \wedge \dots \wedge v_n,
\end{equation}
where the right hand side is seen as an element of $\wedge^n \hat{M}_q = \wedge^n (T_0 \hat{M}_q)$. Clearly $v_1 \wedge \dots \wedge v_n$ is an orthonormal parallelotope whether seen as an element of $\wedge^n (T_q M)$ or of $\wedge^n \hat{M}_q$. By definition, $\omega_M(q)(v_1 \wedge \dots \wedge v_n) =1$ and also $\omega_{\hat{M}_q}(0)(\theta_q (v_1 \wedge \dots \wedge v_n)) = 1$. Thus the Riemannian volume is N-intrinsic.
\end{example}
\begin{example}
Let $(M,\distr,g)$ be a Riemannian manifold ($\distr = TM$ and $g$ is a Riemannian metric). Let $\kappa : M \to \R$ be the scalar curvature function. We define a volume $\omega$ as follows: for any $q \in M$ and any oriented orthonormal parallelotope $v_1\wedge \dots \wedge v_n$ at $q$,
\begin{equation}
\omega_M(q)(v_1 \wedge \ldots \wedge v_n) = 1 + \kappa(q)^2.
\end{equation}
One can check that this is an intrinsic definition of volume (because $\kappa$ is preserved by isometries). However, it is not N-intrinsic. In fact, without going into the details of the identification $\theta_q$, for any $q$ the nilpotent approximation $\hat{M}_q$ is a flat Riemannian manifold (a vector space with an inner product), thus $\hat{\kappa} = 0$. Then
\begin{equation}
\omega_{\hat M_q}(0)( \theta_q (v_1 \wedge \dots \wedge v_n)) = \omega_{\hat M_q}(0)(v_1 \wedge \ldots \wedge v_n) = 1 \neq 1+ \kappa(q)^2 = \omega_M(q)(v_1\wedge \dots \wedge v_n).
\end{equation}
\end{example}
\begin{example}
Popp's volume is N-intrinsic by construction. In fact, it is defined precisely by requiring the commutativity of the diagram above (see Section~\ref{s:Popp}).
\end{example}
\begin{example}
The spherical Hausdorff volume is N-intrinsic. In fact, in \cite{ABB-Hausdorff} it is proved that the spherical Hausdorff volume is absolutely continuous w.r.t. any smooth volume form. In particular, the Radon-Nikodym derivative of the spherical Hausdorff volume w.r.t. Popp's volume is proportional to the Haar measure of the unit ball in the nilpotent approximation. This function, in general, is not smooth, but only continuous.
\end{example}
We collect the results from these examples in the following propositions.
\begin{prop}\label{p:poppintrinsic}
For any (sub)-Riemannian structure $(M,\distr,\metr)$, let $\popp_M$ be Popp's volume. Then the definition of volume $M \mapsto \popp_M$ is N-intrinsic.
\end{prop}
\begin{prop}\label{p:riemannintrinsic}
For any Riemannian structure $(M,\g)$, let $\mathcal{R}_M$ be the Riemannian volume. Let $f \in C^\infty(M)$ be any function invariant by isometries. Then the definition of volume (for Riemannian manifolds) $M \mapsto f \mathcal{R}_M$ is N-intrinsic if and only if $f$ is constant.
\end{prop}
In other words, up to constant rescaling, the Riemannian volume is the unique N-intrinsic definition of volume for Riemannian manifolds. This is due to the fact that the nilpotent approximations of Riemannian manifolds have no non-trivial metric invariants. In the sub-Riemannian case this is not always true, due to the well-known presence of moduli for the nilpotent approximations. We close this section by discussing classes of (sub)-Riemannian structures admitting a unique N-intrinsic definition of volume.
\subsubsection{Equi-nilpotentizable structures}
Equi-nilpotentizable (sub)-Riemannian structures have the same nilpotent approximation at every point.
\begin{definition}
A sub-Riemannian structure $(M,\distr,\g)$ is \emph{equi-nilpotentizable} if for any $q,p \in M$ the nilpotent approximations $\hat{M}_q$ and $\hat{M}_p$ are isometric.
\end{definition}
The next theorem generalizes Proposition~\ref{p:riemannintrinsic} to equi-nilpotentizable structures (including Riemannian ones).
\begin{theorem}\label{t:equinilp}
Let $\omega$ and $\omega'$ be two N-intrinsic definitions of volume. Then for any (sub)-Riemannian structure $(M,\distr,\g)$ there exists a smooth function $c_M :M \to \R$ such that $\omega_M = c_M \omega'_M$. The value of $c_M$ at $q$ depends only on the isometry class of the nilpotent approximation of the structure at $q$. In particular, for equi-nilpotentizable sub-Riemannian manifolds, any two N-intrinsic definitions of volume agree up to a constant.
\end{theorem}
\begin{rmk}
By Proposition~\ref{p:poppintrinsic} Popp's one is an N-intrinsic definition of volume. Then, for equi-nilpotentizable (sub)-Riemannian structures, any N-intrinsic volume $M \mapsto \omega_M$ gives, up to a constant, Popp's one.\end{rmk}
\begin{proof}
Assume that there are two N-intrinsic definitions of volume $\omega$ and $\omega'$. Then, to any sub-Riemannian manifold $M$ we associate a smooth, never vanishing function:
\begin{equation}
c_M:= \omega_M/\omega'_M : M \to \R.
\end{equation}
For any intrinsic definition of volume $\omega$ and any isometry $\phi: M \to N$ of (sub)-Riemannian manifolds:
\begin{equation}
\phi^* \omega_N = \omega_M.
\end{equation}
Then, if we have two intrinsic definitions of volume $\omega$ and $\omega'$, we get, for isometric structures:
\begin{equation}\label{eq:isometric}
c_M = \frac{\omega_M}{\omega_M'} = \frac{\phi^* \omega_N}{\phi^* \omega_N'} = c_N \circ \phi.
\end{equation}
\begin{lemma}\label{l:constantf}
Let $(M,\distr,\g)$ be a left-invariant structure (see Definition~\ref{d:left-invariant}). Then $c_M$ is constant and depends only on the isometry class of $M$. Namely, if $(M,\distr_M,\g_M)$ and $(N,\distr_N,\g_N)$ are two isometric left-invariant structures, then $c_M = c_N$.
\end{lemma}
\begin{proof}
If in Eq.~\eqref{eq:isometric} we set $M=N$ and $\phi = L_q$, for $q \in M$, we get that $c_M$ is a left-invariant function, and then constant. Moreover, if $M$ and $N$ are two isometric left-invariant structures, then the two constants $c_M$ and $c_N$ must be equal.
\end{proof}
Let us go back to $(M,\distr,\g)$ and the two N-intrinsic definitions of volumes $\omega$ and $\omega'$. By definition of N-intrinsic, we have:
\begin{equation}
c_M(q) = \frac{\omega_M(q)}{\omega_M'(q)} = \frac{\omega_{\hat{M}_q}(0)}{\omega'_{\hat{M}_q}(0)} = c_{\hat{M}_q}(0), \qquad \forall q \in M.
\end{equation}
The structure on $\hat{M}_q$ is left-invariant. Then, by Lemma~\ref{l:constantf}, the function $c_{\hat{M}_q}$ depends only on the isometry class of $\hat{M}_q$ (which is the same for all $q \in M$). So $c_M$ is a constant that depends only on the isometry class of $\hat{M}_q$.
\end{proof}
\subsubsection{Homogeneous (sub)-Riemannian structures}
Another class of structures that admit a unique N-intrinsic definition of volume (actually, a unique intrinsic definition of volume) is the following.
\begin{definition}
A (sub)-Riemannian structure $(M,\distr,\metr)$ is \emph{homogeneous} if the group $\mathrm{Iso}(M)$ of (sub)-Riemannian isometries of $M$ acts transitively.
\end{definition}
Indeed homogeneous structures are equi-nilpotentizable, thus Theorem~\ref{t:equinilp} applies and any two N-intrinsic definitions of volume are proportional (and proportional to Popp's one). Still, something stronger holds. In fact, for these structures, Popp's is the unique volume form (up to scaling) preserved by (sub)-Riemannian isometries (see \cite[Proposition 6]{nostropopp}). Then we have the following corollary.
\begin{cor}
For homogeneous (sub)-Riemannian structures, any two intrinsic (definitions of) volumes are N-intrinsic, and proportional to Popp's one.
\end{cor}
In particular this is true for left-invariant (sub)-Riemannian structures on Lie groups.
\begin{definition}\label{d:left-invariant}
Let $M$ be a Lie group, and, for $q \in M$, let $L_q: M \to M$ denote the left multiplication. A (sub)-Riemannian structure $(M,\distr,\metr)$ is \emph{left-invariant} if $L_q : M \to M$ is an isometry for any $q \in M$.
\end{definition}
Since, on left-invariant structures, the Popp volume is left-invariant, we have the following.
\begin{cor}\label{c:popp=haar}
For left-invariant (sub)-Riemannian structures, any two intrinsic (definitions of) volumes are N-intrinsic, and proportional to the left-invariant Haar volume.
\end{cor}
\section{Introduction}\label{s:intro}
\subsection{The Riemannian setting}\label{RiemIntro}
Let $M$ be a (smooth, connected, orientable, complete) $n$-dimensional Riemannian manifold. In Riemannian geometry the infinitesimal conservation condition for a smooth scalar quantity $\phi$, a function of the point $q$ and of the time $t$ (for example, the temperature, the concentration of a diffusing substance, the noise, the probability density of a randomly-moving particle, etc.), that flows via a flux $F$
(which says how much of $\phi$ is, infinitesimally, crossing a unit area of surface per unit time) is expressed via the ``continuity'' equation $\partial_t\phi +\div(F)=0$, where $\div(\cdot)$ is the divergence computed with respect to the Riemannian volume. If one postulates that the flux is proportional to minus the Riemannian gradient (the constant of proportionality being fixed to 1 for simplicity), that is $F=-\grad(\phi)$, one obtains the Riemannian heat equation
\begin{equation}\label{heat-1}
\partial_t\phi =\Delta\phi,
\end{equation}
where $\Delta = \div \circ \grad$ is the Laplace-Beltrami operator.
Since equation~\eqref{heat-1} has been obtained from a continuity equation that views $\phi$ as a fluid without a microscopic structure (the fluid is modeled as a continuous substance), in the following we refer to it as the {\em macroscopic heat equation} and to the corresponding operator $\Delta$ as the {\em macroscopic Laplacian}. It is useful to write $\Delta$ in terms of an orthonormal frame. If $ X_1,\ldots, X_n $ is a local orthonormal frame for the Riemannian structure we have the formulas
\begin{equation}
\grad(\phi)=\sum_{i=1}^n X_i(\phi) X_i \qquad \text{and} \qquad \Delta(\phi)=\sum_{i=1}^n (X_i^2 +\div(X_i)X_i )(\phi).
\end{equation}
It is well known that the heat equation (\ref{heat-1}) also admits a stochastic interpretation as the evolution equation associated to a diffusion process. Actually, we need to be more precise here, since there are two evolution equations associated to a diffusion process. In particular, there is the evolution of the expectation of a test function $f\in C_0^{\infty}(M)$ evaluated along the paths of the diffusion, called the forward Kolmogorov equation, and there is the evolution equation for the transition density of the diffusion (relative to a smooth volume), called the backward Kolmogorov equation. The second-order operators in these two equations are adjoints (with respect to the same smooth volume as the transition density). However, thanks to the geodesic completeness assumption, the Laplace-Beltrami operator is essentially self-adjoint (with respect to the Riemannian volume) so that both operators, and thus both parabolic PDEs, coincide.
Moreover, the diffusion process associated to the (Riemannian) heat equation can be obtained as a (parabolic) scaling limit of a random walk. We think of the random walk as giving a ``microscopic'' viewpoint on the evolution of the quantity measured by $\phi$, since, at least in an idealized sense, it models the motion of an individual particle. In taking the scaling limit, we pass from the small-scale behavior of an individual particle to the large-scale average behavior of the particle, or equivalently, to the aggregate behavior of a large number of such particles, and thus pass from the microscopic to the macroscopic viewpoint. We now need to explain these random walks and the associated ideas in more detail.
In the Riemannian case currently under discussion, we are interested in an isotropic random walk. In particular, starting from a point $q$, the unit sphere $\mathbb{S}^{n-1}$ in the tangent space $T_{q}M$ has the standard $(n-1)$-volume induced by the metric on $T_{q}M$. Normalizing this to a probability measure $\mu_{q}$ (by dividing by the total volume $2\pi^{n/2}/\Gamma(n/2)$), we can then choose a direction $\theta\in\mathbb{S}^{n-1}$ at $q$ randomly, in a way which is obviously isotropic with respect to the Riemannian structure. The particle then travels along the geodesic tangent to $\theta$ for a distance $\varepsilon$ in time $\delta t$ (at constant speed, determined by these conditions). We let $X^{\varepsilon}_t$ be the (random) position of the particle at time $t\in[0,\delta t]$. We can continue this process by next choosing a direction, at the point $X^{\varepsilon}_{\delta t}$, at random via the measure $\mu_{X^{\varepsilon}_{\delta t}}$ on the unit sphere $\mathbb{S}^{n-1}\subset T_{X^{\varepsilon}_{\delta t}}M$, and independently of the previous choice of direction at $q$. Then the particle follows the geodesic from $X^{\varepsilon}_{\delta t}$ in this direction for distance $\varepsilon$ in time $\delta t$. We can continue this process indefinitely, since after $i$ steps, the particle has traveled along a piecewise geodesic for a total distance of $i\varepsilon$, and since we are assuming that $M$ is geodesically complete, a next step can always be made (equivalently, this process cannot explode by exiting every compact set in finite time). The result is a random (piecewise geodesic) path $X^{\varepsilon}_t$ (for $t\in[0,\infty)$) starting from $q=X^{\varepsilon}_0$.
Further, the positions of the particle at the times when it randomly changes direction, namely, the sequence $X^{\varepsilon}_0, X^{\varepsilon}_{\delta t},X^{\varepsilon}_{2\delta t},\ldots$ is a Markov chain on $M$, and for $t\in(i\delta t,(i+1)\delta t)$, $X^{\varepsilon}_t$ interpolates between $X_{i\delta t}$ and $X_{(i+1)\delta t}$ along a geodesic between them (for $i=0,1,2,\ldots$).
As mentioned, we are interested in a scaling limit of such random walks (and this is why we have already indexed our walk $X^{\varepsilon}_t$ by the step-size $\varepsilon$). We will take $\delta t= \varepsilon^2/\alpha$, where $\alpha$ is a normalizing constant to be chosen later. Then $\delta t\rightarrow 0$ as $\varepsilon\rightarrow 0$, and moreover, this is the parabolic scaling (which we note is in a sense responsible for the infinite velocity of propagation of the heat).
We are interested in the behavior of a single step under this parabolic scaling, which by the Markov property and homogeneity in time may as well be the first step. If we consider the change in the expectation of a function $\phi\in C_0^{\infty}(M)$ sampled after the step, normalized by dividing by $\delta t$, we obtain an operator which we denote by $L_{\varepsilon}$. More concretely, we have
\begin{equation}
\begin{split}
(L_{\varepsilon} \phi)(q) &= \frac{ \mathbb{E}\left[ \left . \phi\left( X^{\varepsilon}_{\delta t}\right) \right| X^{\varepsilon}_0=q\right] -\phi\left( q\right)}{\delta t} \\
&= \frac{\alpha}{\varepsilon^2}\left( \mathbb{E}\left[ \left . \phi\left( X^{\varepsilon}_{\varepsilon^2/\alpha}\right) \right| X^{\varepsilon}_0=q\right] -\phi\left( q\right) \right) \\
&= \frac{\alpha}{\varepsilon^2} \left( \int_{\mathbb{S}^{n-1}}\phi\left(\exp_{q} \left(\varepsilon,\theta\right)\right) \mu_{q}\left( \theta\right) -\phi(q)\right) ,
\end{split}
\end{equation}
where $\exp_{q}(\varepsilon,\theta)$ is the point obtained by following the arc-length parametrized geodesic from $q$ in the direction of $\theta$ for distance $\varepsilon$.
It turns out that the limiting behavior of the sequence of random walks, as $\varepsilon\rightarrow 0$, is governed by the limiting behavior of these operators. Indeed, Section \ref{a:randomwalk} is devoted to a discussion of this issue and to the proof of theorems on the convergence of a sequence of random walks to a diffusion in the Riemannian or sub-Riemannian context (see Theorem~\ref{AppendixConvergence} for a general convergence result and Theorem~\ref{AppendixConvergence2} for the case of random walks of the type just described). At any rate, expanding the exponential map in normal coordinates around $q$ shows that $L_{\varepsilon}$ converges to an operator $L$ as $\varepsilon\rightarrow 0$ (in the sense that $L_{\varepsilon}\phi\rightarrow L\phi$ uniformly on compacts for $\phi\in C_0^{\infty}(M)$), and that $L=\frac{\alpha}{2n}\Delta$ (the computation is a special case of the (sub)-Riemannian computations below, so we don't reproduce it). It follows that the sequence of random walks converges to the diffusion generated by $L$ (or the diffusion that solves the martingale problem for $L$), which we denote $X^0_t$. Thus, if $\phi$ is a smooth scalar quantity depending on a point $q$ and a time $t$ such that its evolution is governed infinitesimally by the dynamics of the random walk (in the sense that $\partial_t \phi(q,t)=\lim_{\delta\rightarrow0}\frac{1}{\delta}\mathbb{E}\left[ \phi\left( X^0_{\delta},t\right) -\phi(q)|X^0_0=q\right]$), we obtain the equation
\begin{equation}\label{eq-heat-2}
\partial_t \phi=L\phi,
\end{equation}
where
\begin{equation}
L\phi(q,t)=\lim_{\varepsilon\to0} \frac{\alpha}{\varepsilon^2} \left(\int_{\mathbb{S}^{n-1}}\phi\left(\exp_{q} (\varepsilon,\theta),t\right) \mu_{q}\left( \theta\right) -\phi(q,t)\right).
\end{equation}
In the following we refer to equation (\ref{eq-heat-2}) as the ``microscopic heat equation'' and to $L$ as the ``microscopic Laplacian''. As we've already seen, if we take $\alpha=2n$, then we have
\begin{equation}\label{eq-Delta=L}
\Delta=L.
\end{equation}
(As an aside, the need for the normalizing constant $\alpha$ to grow with the dimension $n$ is a manifestation of the concentration of measure on the sphere.) Hence the macroscopic diffusion equation and the microscopic one coincide.
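For the reader's convenience, we sketch why $\alpha = 2n$ is the right normalization (the full computation is a special case of the sub-Riemannian one below; this is only an illustration, carried out in normal coordinates centered at $q$). Taylor expanding, for $\phi \in C_0^\infty(M)$,
\begin{equation}
\phi\left(\exp_q(\varepsilon,\theta)\right) = \phi(q) + \varepsilon\, d_q\phi(\theta) + \frac{\varepsilon^2}{2}\,\mathrm{Hess}_q\phi(\theta,\theta) + O(\varepsilon^3).
\end{equation}
Averaging over $\theta \in \mathbb{S}^{n-1}$, the first-order term vanishes by symmetry, while $\int_{\mathbb{S}^{n-1}} \theta_i\theta_j\, \mu_q(\theta) = \delta_{ij}/n$ and the fact that, at the center of normal coordinates, the trace of the Hessian is the Laplace-Beltrami operator give
\begin{equation}
(L_\varepsilon\phi)(q) = \frac{\alpha}{\varepsilon^2}\left(\frac{\varepsilon^2}{2n}\,\Delta\phi(q) + O(\varepsilon^3)\right) \to \frac{\alpha}{2n}\,\Delta\phi(q), \qquad \varepsilon \to 0,
\end{equation}
so the choice $\alpha = 2n$ yields $L = \Delta$.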
Further, define the heat kernel $p_t(q,q_0)$ as the density of the random variable $\left. X^0_t\right| X^0_0=q_0$ with respect to the Riemannian volume $\mathcal{R}$.
Said more analytically, $p_t$ is the fundamental solution to Equation \eqref{eq-heat-2}, so that
\begin{equation}
\phi(q_0,t) = \int_M \phi(q) p_t(q,q_0) \mathcal{R}(q) = \mathbb{E}\left[ \left. \phi\left( X^0_t\right) \right| X^0_0=q_0 \right]
\end{equation}
solves the Cauchy problem for Equation \eqref{eq-heat-2} with initial condition $\phi(q_0,0)=\phi(q_0)$. Then because $L$ is essentially self-adjoint with respect to the Riemannian volume (it is just a scalar multiple of $\Delta$), $p_t(\cdot,q_0)$ also satisfies the microscopic heat equation \eqref{eq-heat-2}. Thus $p_t$, which measures the probability density of the random paths themselves, rather than a quantity that is sampled along them, can be understood in terms of the same equations.
All of this can be viewed in the following way. On the one hand, the microscopic perspective is a good interpretation of the macroscopic heat equation. On the other hand, the microscopic Laplacian $L$ is a good operator because it is essentially self-adjoint with respect to a volume (the Riemannian one). This is due to the fact that it is symmetric, since it can be written in divergence form thanks to Equation \eqref{eq-Delta=L}, and to the geodesic completeness of the manifold.
The essential self-adjointness of $L$ with respect to some volume $\omega$, even if not necessary to define the corresponding process, is important, both because it means that the heat kernel satisfies the same equation and because it permits one to study the evolution equation \eqref{eq-heat-2} in $L^2(M,\omega)$. See Theorem~\ref{t:selfad} below.
\begin{rmk}
The Riemannian volume $\mathcal{R}$ can be defined equivalently by $\mathcal{R}(X_1,\ldots,X_n)=1$ for any oriented local orthonormal frame $X_1,\ldots,X_n$, or as the $n$-dimensional Hausdorff or spherical-Hausdorff volume (up to a constant). The above construction gives an alternative characterization of the Riemannian volume: it is the unique volume (up to constant rescaling) such that the microscopic Laplacian can be written in divergence form. These facts are much less trivial in the sub-Riemannian context. Indeed, in sub-Riemannian geometry there are several notions of intrinsic volume, and the definition of the microscopic Laplacian requires additional structure.
\end{rmk}
\subsection{The sub-Riemannian setting}
In this paper, a sub-Riemannian structure is a triple $(M,\distr,\metr)$, where $M$ is an $n$-dimensional differentiable manifold, $\distr$ is a smooth distribution of constant rank $k<n$ satisfying the H\"ormander condition and $\metr$ is a Riemannian metric on $\distr$.
Locally the structure can be assigned by an orthonormal frame $X_1,\ldots,X_k \in \Gamma(\distr)$. Here, $\Gamma(E)$ denotes the $C^\infty(M)$-module of smooth sections of any vector bundle $E$ over $M$.
With the term (sub)-Riemannian, we mean structures that may also be Riemannian, i.e. defined as above but with $k\leq n$. Riemannian manifolds are, by definition, equiregular. All the information about the structure is contained in the Hamiltonian function $H: T^*M \to \R$ defined by
\begin{equation}
H(\lambda) = \frac{1}{2}\sum_{i=1}^k \langle \lambda,X_i \rangle^2, \qquad \lambda \in T^*M,
\end{equation}
where $\langle \lambda, \cdot\rangle$ denotes the action of covectors on vectors. It is a well known fact in (sub)-Riemannian geometry that integral lines $\lambda(\cdot)$ of the Hamiltonian flow defined by $H$,
lying on the level set $H=1/2$, project to smooth curves $\gamma(t):=\pi(\lambda(t))$ on $M$ that are arc-length parametrized geodesics, i.e. $ \|\dot{\gamma}(t)\|=1$ and, for every sufficiently small interval $[t_1,t_2]$, the restriction $\gamma|_{[t_1,t_2]}$ is a minimizer of the \emph{sub-Riemannian length} $\ell(\gamma) = \int_{t_1}^{t_2} \|\dot{\gamma}(t)\| dt$ (with fixed endpoints). Here $\|\cdot\|$ denotes the norm induced by $\metr$ on $\distr$.
In Riemannian geometry these curves are precisely all the arc-length parametrized geodesics of the structure. In the sub-Riemannian case these are called \emph{normal geodesics}. There is also another class of geodesics, called \emph{abnormal geodesics}, that may not follow the above dynamics.
\subsubsection{The sub-Riemannian macroscopic Laplacian}
As we will see later, in sub-Riemannian geometry the definition of an intrinsic volume playing the role of the Riemannian volume is a subtle question. For the moment, let us assume that a volume $\omega$ is fixed on the sub-Riemannian manifold. In this case one can write the macroscopic heat equation similarly to the Riemannian case. The only difference is that one should postulate that the flux is proportional to the horizontal gradient. The horizontal gradient $\grad_H(\cdot)$ of a $C^\infty$ function $\phi$ is defined similarly to the Riemannian gradient, but is a vector field belonging to the distribution (see, for instance, \cite{laplacian}):
\begin{equation}
\metr_q(v,\grad_H(\phi)_q)=d_q\phi(v), \qquad \forall v\in\distr_q.
\end{equation}
We have then for the macroscopic heat equation
\begin{equation}
\partial_t\phi=\Delta_\omega \phi,
\end{equation}
where
\begin{equation}\label{eq-LB-H}
\Delta_\omega =\div_\omega\circ \grad_H
\end{equation}
is the macroscopic Laplacian.
In terms of a local orthonormal frame of the sub-Riemannian manifold we have the same formulas as in the Riemannian case, but summing up to the rank of the distribution:
\begin{equation}
\grad_H(\phi)=\sum_{i=1}^k X_i(\phi) X_i, \qquad \Delta_\omega(\phi)=\sum_{i=1}^k (X_i^2 +\div_\omega(X_i)X_i )(\phi).
\end{equation}
Since $\grad_H$ coincides with the standard gradient in the Riemannian case, we suppress the $H$ from the notation.
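For a concrete illustration of these formulas (on the standard Heisenberg group, in coordinates used here only as an example and not fixed elsewhere in the text), take $M=\R^3$ with orthonormal frame
\begin{equation}
X_1 = \partial_x - \frac{y}{2}\,\partial_z, \qquad X_2 = \partial_y + \frac{x}{2}\,\partial_z,
\end{equation}
and $\omega = dx\wedge dy\wedge dz$. Then $\div_\omega(X_1) = \partial_x(1) + \partial_z(-y/2) = 0$ and, similarly, $\div_\omega(X_2) = 0$, so the formula above reduces to the standard sub-Laplacian
\begin{equation}
\Delta_\omega = X_1^2 + X_2^2.
\end{equation}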
\subsubsection{On intrinsic volumes}
The difficulty concerning the definition of a macroscopic Laplacian is related to the choice of $\omega$. What is the ``correct'' volume in the sub-Riemannian case, the analogue of the Riemannian one? One needs some algorithm to assign, to any sub-Riemannian structure on $M$, a volume form $\omega_M$. Moreover, the construction of $\omega_M$ should depend, loosely speaking, only on the metric invariants of the structure.
\begin{definition}
An \emph{intrinsic definition of volume} is a map that associates, with any (oriented) (sub)-Riemannian structure $(M,\distr,\g)$, a volume form $\omega_M$ on $M$ such that if $\phi: M \to N$ is a sub-Riemannian isometry between $(M,\distr_M,\g_M)$ and $(N,\distr_N,\g_N)$, then $\phi^* \omega_N = \omega_M$.
\end{definition}
\begin{rmk}
In order to avoid this verbose terminology, with the term ``intrinsic volume'' we mean either the actual volume form $\omega_M$ or the definition of volume given by a map $M \mapsto \omega_M$.
\end{rmk}
Even in the Riemannian case, there are many intrinsic volumes. The classical Riemannian one is the unique volume form $\mathcal{R}$ such that $\mathcal{R}(X_1,\ldots,X_n)=1$ for any (oriented) orthonormal frame. But what about the volume form defined by $\mathcal{R}'(X_1,\ldots,X_n) = 1+\kappa^2$, where $\kappa$ is the scalar curvature? Both are perfectly fine definitions of volume, according to our definition. The first, loosely speaking, is more ``intrinsic'' than the second. In fact, both depend only on the metric invariants of the structure, but $\mathcal{R}'$ involves second-order information about the structure. To rule out $\mathcal{R}'$ we need a more precise definition.
Roughly speaking, we say that an intrinsic definition of volume is \emph{N-intrinsic} if the value of $\omega_M$ at $q$ depends only on the metric invariants of the nilpotent approximation of the structure at $q$ (the metric tangent space to the structure). For the precise definition see Section \ref{s-intrinsic}. In the Riemannian case, there is only one nilpotent approximation, which is the flat $\R^n$, and it has no non-trivial metric invariants. As a consequence, there is a unique N-intrinsic volume, the Riemannian one. As is well known, in the sub-Riemannian case, nilpotent approximations may be different at different points.
\begin{definition}
A (sub)-Riemannian manifold is \emph{equi-nilpotentizable} if nilpotent approximations at different points are isometric.
\end{definition}
As just said above, Riemannian manifolds are equi-nilpotentizable. Equi-nilpotentizable structures are equiregular (see Section~\ref{s:prel}). By definition, all Carnot groups are equi-nilpotentizable.
It is well known that all equiregular sub-Riemannian structures in dimension less than or equal to four are equi-nilpotentizable (see \cite{ABB-Hausdorff}). In particular this is true for 3D sub-Riemannian manifolds (the nilpotent approximation is given by the Heisenberg group $\mathbb{H}_3$ at each point). The simplest non-equi-nilpotentizable sub-Riemannian structure is given by a generic structure of rank $4$ in dimension $5$. Other examples of non-equi-nilpotentizable sub-Riemannian manifolds are given by generic contact manifolds of dimension greater than or equal to 5 and by generic quasi-contact sub-Riemannian manifolds in dimension greater than or equal to 6.
For equiregular sub-Riemannian structures a general N-intrinsic volume, called \emph{Popp volume} and denoted by $\popp$, was defined in Montgomery's book \cite{montgomerybook} and later studied in \cite{laplacian,nostropopp,ABB-Hausdorff}. This definition was pioneered by Brockett for contact structures in \cite{brockett}. It turns out that, in the Riemannian case, Popp's construction recovers the Riemannian volume. Moreover, in \cite{ABB-Hausdorff}, it has been proved that, for non-equi-nilpotentizable structures, the Popp volume does not coincide with the (spherical) Hausdorff one. Finally, in Section~\ref{s-intrinsic} we prove the following result.
\begin{prop}
Let $(M,\distr,\metr)$ be an equi-nilpotentizable (sub)-Riemannian manifold. Then the Popp volume $\popp$ is the unique N-intrinsic definition of volume, up to a multiplicative constant.
\end{prop}
In this sense Popp volume generalizes the Riemannian one. When $\omega$ is an N-intrinsic volume, we say that $\Delta_\omega$ is an \emph{N-intrinsic macroscopic Laplacian}. Since the divergence of a vector field does not change if the volume is multiplied by a constant, we have the following.
\begin{prop}
Let $(M,\distr,\metr)$ be an equi-nilpotentizable (sub)-Riemannian manifold. Then there exists a unique N-intrinsic macroscopic Laplacian, namely $\Delta_\popp$, the one built with the Popp volume.
\end{prop}
\subsubsection{The sub-Riemannian microscopic Laplacian}
Another problem in the sub-Riemannian case is the definition of the microscopic Laplacian as the limit of a random walk along geodesics starting from a given point. Indeed, while in the Riemannian context the geodesics starting from a given point can always be parametrized by the direction of the initial velocity, i.e., by the points of an $(n-1)$-dimensional sphere, in the sub-Riemannian context the geodesics starting from a given point are always parametrized by a non-compact set, namely by the points of a cylinder $\cyl_{q}=H^{-1}(1/2)\cap T_{q}^\ast M$
having the topology of $\mathbb{S}^{k-1}\times \R^{n-k}$. How to define an intrinsic finite volume on $\cyl_{q}$, and thus a probability measure, is a non-trivial question.
Moreover, while in Riemannian geometry, for sufficiently small $\varepsilon$, the Riemannian metric sphere of radius $\varepsilon$ centered at $q$ coincides with the set of endpoints of the geodesics starting from $q$ and having length $\varepsilon$, in sub-Riemannian geometry this is not true: for a fixed point $q$, there are geodesics starting from $q$ that lose optimality arbitrarily close to $q$. Hence one has to decide whether to average only on the sub-Riemannian sphere (i.e., on geodesics that are optimal up to length $\varepsilon$) or on the sub-Riemannian front (i.e., on all geodesics of length $\varepsilon$).
Another approach is to choose a $k$-dimensional linear subspace $\mathbf{h}_q$ in $T^*_{q}M$ transverse to the non-compact directions of the cylinder (playing the role of the ``most horizontal'' geodesics) and to average on $\mathbf{h}_q\cap\cyl_{q}$. This last approach is essentially the one followed in \cite{GordLae,GordLaeOlder,grong1,grong2}. Certainly the problem of finding a canonical subspace $\mathbf{h}_q$ in $T^\ast_{q}M$ is a non-trivial question, and in general it is believed that this is not possible.
All these problems are encompassed in the specification of a measure $\mu_q$ (possibly singular) on $\cyl_{q}$ for all $q \in M$. Once such a collection of measures $\mu = \{\mu_{q}\}_{q \in M}$ is fixed, we define the \emph{microscopic Laplacian} as
\begin{equation}\label{eq-microscopic}
(L_{\mu}\phi)(q):=\lim_{\varepsilon\to0}
\frac{\alpha}{\varepsilon^2} \Big(\int_{\cyl_{q}}\phi\big(\exp_{q} (\varepsilon,\lambda)\big) \mu_{q}(\lambda)-\phi(q)\Big) ,
\end{equation}
assuming the limit exists.
Here $t \mapsto \exp_{q} (t,\lambda)$ is the arc-length parametrized normal geodesic starting from $q$ with initial covector $\lambda\in\cyl_{q}$. Moreover, $\alpha$ is a constant of proportionality that will be fixed later.
\begin{rmk}\label{abnormal}
In this paper, the microscopic Laplacian is built only with the geodesics of the exponential map that, by definition, are normal.
In sub-Riemannian geometry there is also another type of geodesics, called (strict) \emph{abnormal}, that are not described by the sub-Riemannian exponential map (i.e. they are not the projection of the Hamiltonian flow described above). The ``size'' of the set of points reached by abnormal geodesics (starting from a fixed origin) is a hard open problem known as the \emph{Sard conjecture in sub-Riemannian geometry} (see for instance \cite{open,riffordbook}). It is believed that only a set of measure zero is reached by abnormal geodesics. What is known in general is that the set of points reached by optimal normal geodesics is open and dense (see \cite{smoothness,curvature} for precise statements). We notice that, on contact structures, there are no nontrivial abnormal geodesics.
If one wished to include also the abnormal geodesics in the construction of the microscopic Laplacian, one would have to decide which measure to give to them, and in principle one could get a different operator. This research direction is beyond the scope of the present paper.
\end{rmk}
In view of Theorem~\ref{t-lmu}, we restrict the class of possible measures, and we consider only those induced by a complement, as follows. For all $q \in M$, consider a complement $\V_q$ such that $T_qM = \distr_q \oplus \V_q$. By duality $T_q^*M = \mathbf{v}_q \oplus \mathbf{h}_q$, where $\mathbf{v}_q:= \distr_q^\perp$ (resp. $\mathbf{h}_q:=\V_q^\perp$) denotes the annihilator of $\distr_q$ (resp. $\V_q$). We can see $\mathbf{v}_q$ as the space of ``vertical'' covectors, and $\mathbf{h}_q$ as the space of ``horizontal'' ones. Now we can define a Euclidean structure on $\mathbf{h}_q$ by identifying it with $\distr_q$ (this is equivalent to considering the restriction $2H|_{\mathbf{h}_q}$, which is a positive definite quadratic form). The intersection $\mathbb{S}_q^{k-1} = \cyl_q \cap \mathbf{h}_q$ (see Figure~\ref{f:verhor} in Section~\ref{s-luca-formula}) is precisely the Euclidean sphere in this space. Then the cylinder of initial covectors splits as
\begin{equation}
\cyl_q = \mathbb{S}_q^{k-1}\times \mathbf{v}_q.
\end{equation}
We stress that this identification depends on the choice of $\V$. We restrict to the class of product measures on $\cyl_q$ induced by the choice of a complement, of the form
\begin{equation}\label{eq:product}
\mu_{\V_q} = \mu_{\mathbb{S}^{k-1}_q} \times \mu_{\mathbf{v}_q},
\end{equation}
where $\mu_{\mathbb{S}^{k-1}_q}$ is the Euclidean probability measure on $\mathbb{S}_q^{k-1} = \cyl_q \cap \mathbf{h}_q$ and $\mu_{\mathbf{v}_q}$ is any probability measure on $\mathbf{v}_q$. Moreover, we assume that $\mu_{\mathbf{v}_q}$ is sufficiently regular as a function of $q$, and $2$-decreasing, namely that any linear function on $\mathbf{v}_q$ belongs to $L^2(\mathbf{v}_q,\mu_{\mathbf{v}_q})$ (see Definition~\ref{d:decreasing}). Any such measure is called \emph{adapted}.
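Two simple examples of $2$-decreasing probability measures on the fiber (given here only as an illustration, assuming an identification $\mathbf{v}_q \simeq \R^{n-k}$ that depends smoothly on $q$) are the Dirac mass at the origin and the standard Gaussian:
\begin{equation}
\mu_{\mathbf{v}_q} = \delta_0 \qquad \text{or} \qquad \mu_{\mathbf{v}_q} = (2\pi)^{-(n-k)/2} e^{-|v|^2/2}\, dv.
\end{equation}
In both cases every linear function on $\mathbf{v}_q$ is square-integrable, so the resulting product measure $\mu_{\V_q}$ is adapted.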
Repeating this construction at each point we recover a differential operator, as in Eq.~\eqref{eq-microscopic}, that we call $L_{\mu_\V}$. It turns out that the latter does not depend on the choice of adapted measure, but only on the complement $\V$. In Section~\ref{s-luca-formula} we prove the following main result (which does not need any equiregularity assumption).
\begin{theorem}\label{t-lmu}
Let $TM = \distr \oplus \V$ and let $\mu_\V$ be any adapted measure. Then $L_{\mu_\V}$ depends only on $\V$. Moreover, let $X_1,\ldots,X_k$ be a local orthonormal frame for $\distr$, and $X_{k+1},\ldots,X_n$ be a local frame for $\V$. Then
\begin{equation}
L_{\mu_\V} = \sum_{i=1}^k X_i^2 + \sum_{i,j=1}^k c_{ji}^j X_i,
\end{equation}
where the structural functions $c_{ij}^\ell \in C^\infty(M)$ are defined by $[X_i,X_j] = \sum_{\ell=1}^n c_{ij}^\ell X_\ell$ for all $i,j=1,\ldots,n$. Finally, the convergence of Eq.~\eqref{eq-microscopic} is uniform on compact sets.
\end{theorem}
Thanks to Theorem~\ref{t-lmu}, in the following we use the notation $L^\V:=L_{\mu_\V}$.
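As an example of the formula of Theorem~\ref{t-lmu} (on the standard Heisenberg group, a model used here only for illustration), take $M=\R^3$, $\distr = \mathrm{span}\{X_1,X_2\}$ with orthonormal frame $X_1 = \partial_x - \frac{y}{2}\partial_z$, $X_2 = \partial_y + \frac{x}{2}\partial_z$, and $\V = \mathrm{span}\{X_3\}$ with $X_3 = \partial_z$. The only non-vanishing brackets are $[X_1,X_2] = -[X_2,X_1] = X_3$, so all the structural functions $c_{ji}^j$ with $i,j \in \{1,2\}$ vanish, and
\begin{equation}
L^\V = X_1^2 + X_2^2,
\end{equation}
independently of the adapted measure chosen on the fibers $\mathbf{v}_q$.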
\begin{rmk}
One of the byproducts of Theorem \ref{t-lmu} is the following. Fixing $\V$ is equivalent to assigning a subspace of ``horizontal covectors'' $\mathbf{h}_q$ in $T^*_qM$. The expression of $L^\V$ at $q \in M$ is the same whether one averages only on the horizontal geodesics (i.e. with a measure $\mu_{\V_q} = \mu_{\mathbb{S}^{k-1}_q}\times \delta_{\mathbf{v}_q}$, where $\delta$ is the Dirac delta) or on all possible geodesics with a measure of the type~\eqref{eq:product}. The particular choice $\mu_{\mathbf{v}_q} = \delta_{\mathbf{v}_q}$ recovers the construction of \cite{grong1,grong2,GordLae,GordLaeOlder}, where the authors choose a Riemannian extension and use it to define the space of horizontal covectors.
\end{rmk}
\begin{rmk}
We can rewrite the operator $L^\V$ in a more elegant way by introducing the concept of \emph{horizontal divergence}. As we will make precise in Section~\ref{horizontal-div}, the horizontal divergence of a vector field $X$ computes the infinitesimal change of volume of the standard parallelotope of $\distr$ under the flow of $X$. To do this, we need a well defined projection $\pi: TM \to \distr$ that, indeed, requires the choice of a complement $\V$. We denote by $\div^\V(X)$ the horizontal divergence of $X$, and we have
\begin{equation}
L^\V = \div^\V \circ \grad.
\end{equation}
\end{rmk}
\subsection{The equivalence problem}
Once a volume $\omega$ on $M$ is chosen (hence a macroscopic Laplacian is defined) and a complement $\V$ is fixed (hence a microscopic Laplacian is defined), it is natural to ask: \\[2mm]
{\bf Q1:} Under which conditions on $\omega$ and $\V$ do we have $\Delta_\omega=L^\V$?\\[2mm]
In other words, we would like to know when a macroscopic Laplacian admits a microscopic interpretation and when a microscopic Laplacian can be written in divergence form (and hence is symmetric) w.r.t. some volume on the manifold. Moreover:\\[2mm]
{\bf Q2:} Given a volume $\omega$, is it possible to find a complement $\V$ such that $\Delta_\omega=L^\V$? If so, is it unique?\\[2mm]
This question is interesting since on any sub-Riemannian structure there is a smooth, intrinsic volume, Popp's one. Then an answer to {\bf Q2} gives a way to assign an intrinsic complement $\V$. A counting argument suggests that such a question has an affirmative answer. In fact, a volume form is given by a non-zero function, while a complement is given by $(n-k)k$ functions. However the answer is more complicated, because some integrability conditions must be taken into account.
A more specific question is the following:\\[2mm]
{\bf Q3:} Let $\V$ be a complement, and let $\metr_\V$ be a smooth scalar product on $\V$. Then the orthogonal direct sum $\metr \oplus \metr_\V$ is a \emph{Riemannian extension} of the sub-Riemannian structure (also called a taming metric). Let $\omega_{\V}$ be the corresponding Riemannian volume. Is it true that $\Delta_{\omega_{\V}}=L^{\V}$?\\[2mm]
This last question is even more interesting when it is possible to find an intrinsic Riemannian extension, i.e. some choice of $\V$ and $\metr_\V$ that depends only on the sub-Riemannian structure $(M,\distr,\metr)$. Intrinsic Riemannian extensions can be constructed in several cases (see for instance~\cite{diniz,Hladky-connection,Hladky-complement}). However, in general they are not known and (even if this is a non-apophantic statement) it is believed that they do not exist.
In Section~\ref{s:equiv} we answer \textbf{Q1}, with the following theorem.
\begin{theorem}\label{t-general}
For any complement $\V$ and volume $\omega$, the macroscopic operator $\Delta_\omega$ and the microscopic operator $L^\V$ have the same principal symbol and no constant term. Moreover, $L^\V = \Delta_\omega$ if and only if
\begin{equation}\label{eq:espressione}
\chi^{(\V,\omega)} := L^\V - \Delta_\omega = \sum_{i=1}^k \sum_{j=k+1}^n c_{ji}^j X_i +\grad( \theta)= 0,
\end{equation}
where $\theta = \log |\omega(X_1,\ldots,X_n)|$ and $c_{ij}^\ell$ are the structural functions associated with an orthonormal frame $X_1,\ldots,X_k$ for $\distr$ and a frame $X_{k+1},\ldots,X_n$ for $\V$.
\end{theorem}
The condition $\chi^{(\V,\omega)} = 0$ is a coordinate-free version of \cite[Th. 5.13]{GordLae}, which appeared online while this paper was being written. In \cite{grong1} the same condition is obtained when $\V$ is the orthogonal complement w.r.t. some Riemannian extension and $\omega$ is the corresponding Riemannian volume. The particularly simple form of Eq.~\eqref{eq:espressione} in our frame formalism permits us to go further in the study of its solutions.
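As an illustration of how the condition $\chi^{(\V,\omega)} = 0$ can be checked in practice, the following sympy sketch (ours, not one of the paper's computations) uses the standard three-dimensional Heisenberg model with the Lebesgue volume and $\V = \spn\{\partial_z\}$: there all structural functions $c_{ji}^j$ and the function $\theta$ vanish, so the theorem predicts $L^\V = \Delta_\omega$, i.e. the divergence of the horizontal gradient equals the sum-of-squares operator.

```python
# Sanity check (ours) of chi^{(V, omega)} = 0 on the standard 3D Heisenberg
# model: X1 = d_x - (y/2) d_z, X2 = d_y + (x/2) d_z, V = span{d_z},
# omega = dx ^ dy ^ dz (the Lebesgue volume).
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)

X1 = lambda f: sp.diff(f, x) - y/2 * sp.diff(f, z)  # horizontal field 1
X2 = lambda f: sp.diff(f, y) + x/2 * sp.diff(f, z)  # horizontal field 2

# Microscopic Laplacian: sum of squares (no correction term here).
micro = X1(X1(phi)) + X2(X2(phi))

# Macroscopic Laplacian: Euclidean divergence of the horizontal gradient
# grad(phi) = X1(phi) X1 + X2(phi) X2, in the coordinates (x, y, z).
V = [X1(phi), X2(phi), -y/2 * X1(phi) + x/2 * X2(phi)]
macro = sum(sp.diff(c, v) for c, v in zip(V, (x, y, z)))

assert sp.simplify(micro - macro) == 0  # chi^{(V, omega)} = 0
```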
We address {\bf Q2} and {\bf Q3} for Carnot groups, contact and quasi-contact structures and, more generally, corank $1$ structures. Here we collect only the main results. For precise definitions, see the corresponding sections.
\subsubsection{Contact structures}
\begin{theorem}
Let $(M,\distr,\metr)$ be a contact sub-Riemannian structure. For any volume $\omega$ there exists a unique complement $\V$ such that $L^\V = \Delta_\omega$. In this case $\V = \spn\{X_0\}$, with
\begin{equation}
X_0 = \Z - J^{-1}\grad(\theta), \qquad \theta = \log| \omega(X_1,\ldots,X_k,\Z)|,
\end{equation}
where $\Z$ is the Reeb vector field and $J:\distr \to \distr$ is the contact endomorphism.
\end{theorem}
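For concreteness, consider the three-dimensional Heisenberg group (this explicit check is ours, under the normalization $\|J\| = 1$ of the theorem). Take the contact form and orthonormal frame
\begin{equation*}
\eta = dz - \tfrac{y}{2}dx + \tfrac{x}{2}dy, \qquad X_1 = \partial_x - \tfrac{y}{2}\partial_z, \qquad X_2 = \partial_y + \tfrac{x}{2}\partial_z,
\end{equation*}
so that $d\eta = dx \wedge dy$ and $d\eta(X_1,X_2) = 1$. The Reeb field is $\Z = \partial_z$ and, choosing the Lebesgue volume $\omega = dx \wedge dy \wedge dz$, one finds $\theta = \log|\omega(X_1,X_2,\Z)| = 0$. Hence $X_0 = \Z - J^{-1}\grad(\theta) = \partial_z$, and the unique complement is $\V = \spn\{\partial_z\}$.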
Contact structures have a natural Riemannian extension, obtained by declaring the Reeb vector field a unit vector orthogonal to $\distr$. It turns out that the Riemannian volume of this extension is Popp's volume.
\begin{cor}
Let $\popp$ be Popp's volume. The unique complement $\V$ such that $L^{\V} = \Delta_{\popp}$ is generated by the Reeb vector field. Moreover, $\popp$ is the unique volume (up to constant rescaling) with this property.
\end{cor}
\begin{rmk}
In the results above we always use the normalization $\|J\| = 1$ (this fixes the contact form up to a sign). We stress that with a different normalization the Reeb field would be different.
\end{rmk}
In Section~\ref{s:integrability} we also discuss the inverse problem, namely, for a fixed $\V$, to find a volume $\omega$ such that $L^\V = \Delta_\omega$. This is a more complicated problem (and in general it has no solution). In the contact case, thanks to the non-degeneracy of $J$, we find explicitly a necessary and sufficient condition.
\begin{prop}
Let $\V = \spn\{X_0\}$. Define the one-form $\alpha := \frac{i_{X_0} d\eta}{\eta(X_0)}$ and the function $g=\frac{d\alpha \wedge\eta \wedge (d\eta)^{d-1}}{\eta\wedge (d\eta)^d }$. Then there exists a volume $\omega$ such that $L^{\V} = \Delta_\omega$ if and only if
\begin{equation}
d\alpha - dg \wedge \eta - g d\eta = 0.
\end{equation}
In this case, $\omega$ is unique up to constant rescaling.
\end{prop}
\subsubsection{Carnot groups}
By definition, Carnot groups are particular cases of left-invariant sub-Riemannian structures (see Definition~\ref{d:left-invariant}). It is then natural to choose a left-invariant volume, i.e. a volume proportional to the Haar one (for example, Popp's volume $\popp$), and left-invariant complements $\V$. Contrary to the contact case (where we always have existence and uniqueness of $\V$ for any fixed $\omega$), here we lose uniqueness.
\begin{prop}
For any Carnot group $G$, we have $\Delta_{\popp} = L^{\V_0}$, where $\V_0$ is the left-invariant complement
\begin{equation}
\V_0 := \mathfrak{g}_2 \oplus \dots\oplus \mathfrak{g}_m,
\end{equation}
where $\mathfrak{g}_i$ denotes the $i$-th layer of the Carnot group.
\end{prop}
Any left-invariant complement is the graph of a linear map $\ell: \V_0 \to \distr$, that is $\V_\ell := \{X+ \ell( X)\mid X \in \V_0\}$.
\begin{prop}
$\Delta_\popp = L^\V$ if and only if $\V = \V_\ell$, with
\begin{equation}
\tr(\ell \circ \ad_{X_i}) = 0, \qquad \forall i=1,\ldots,k.
\end{equation}
\end{prop}
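For the three-dimensional Heisenberg group (an illustration of ours, with $[X_1,X_2] = X_3$ and $\V_0 = \mathfrak{g}_2 = \spn\{X_3\}$), any left-invariant complement is $\V_\ell$ with $\ell(X_3) = a X_1 + b X_2$, and a direct computation gives
\begin{equation*}
\tr(\ell \circ \ad_{X_1}) = b, \qquad \tr(\ell \circ \ad_{X_2}) = -a.
\end{equation*}
Hence $\ell = 0$ is the only solution, and $\V_0$ is the unique left-invariant complement, consistently with the uniqueness statement of the contact case.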
In many cases the equation above does not have a unique solution $\ell$, as we show in Section~\ref{s:carnot}.
\subsubsection{Quasi-contact structures}
For all the relevant definitions, we refer to Section~\ref{s:quasicontact}. We only stress that, analogously to the contact case, one can define a quasi-contact endomorphism $J:\distr \to \distr$, which is now degenerate (its kernel is usually assumed to have dimension $1$). In this case, if for some $\omega$ there is $\V$ such that $L^\V = \Delta_\omega$, then $\V$ is \emph{never} unique. In particular, we have the following.
\begin{theorem}
For any fixed $\omega$, the space of $\V$ such that $L^\V = \Delta_\omega$ is an affine space over $\ker J$.
\end{theorem}
Surprisingly, we not only lose uniqueness but also existence. Consider the quasi-contact structure on $M= \R^4$, with coordinates $(x,y,z,w)$ defined by $\distr = \ker \eta$ with
\begin{equation}
\eta = \frac{g}{\sqrt{2}}\left(dw-\frac{y}{2}dx+\frac{x}{2}dy\right), \qquad \text{with} \qquad g = e^z.
\end{equation}
($g$ can be any strictly monotone, positive function). The metric is defined by the global orthonormal frame:
\begin{equation}
X = \frac{1}{\sqrt{g}}\left( \partial_x + \frac{1}{2}y \partial_w \right), \qquad Y = \frac{1}{\sqrt{g}}\left(\partial_y - \frac{1}{2} x \partial_w\right), \qquad \Z = \frac{1}{\sqrt{g}} \partial_z .
\end{equation}
Choose $\omega=\popp$, Popp's volume, that is
\begin{equation}
\popp = \tfrac{g^{5/2}}{\sqrt{2}}dx\wedge dy \wedge dz \wedge dw.
\end{equation}
\begin{prop}
For the structure above $L^\V \neq \Delta_\popp$ for any choice of the complement $\V$.
\end{prop}
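As a sanity check on the setup above, the following sympy sketch (ours) verifies that the given frame is indeed horizontal, i.e. that $\eta(X) = \eta(Y) = \eta(\Z) = 0$ for the one-form $\eta$ of the counterexample.

```python
# Sanity check (ours, not from the paper) that the frame X, Y, Z of the
# quasi-contact counterexample is annihilated by eta.
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
g = sp.exp(z)

# Coefficients of eta = (g/sqrt(2))(dw - (y/2) dx + (x/2) dy)
# with respect to the coframe (dx, dy, dz, dw).
eta = [-g*y/(2*sp.sqrt(2)), g*x/(2*sp.sqrt(2)), 0, g/sp.sqrt(2)]

# Coefficients of the orthonormal fields w.r.t. (d_x, d_y, d_z, d_w).
X = [1/sp.sqrt(g), 0, 0,  y/(2*sp.sqrt(g))]
Y = [0, 1/sp.sqrt(g), 0, -x/(2*sp.sqrt(g))]
Z = [0, 0, 1/sp.sqrt(g), 0]

pair = lambda form, vec: sp.simplify(sum(a*b for a, b in zip(form, vec)))
assert pair(eta, X) == 0 and pair(eta, Y) == 0 and pair(eta, Z) == 0
```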
Even though this is out of the scope of the present paper, it turns out that this is the typical (generic) picture in the quasi-contact case. This result is quite surprising because, on sub-Riemannian structures of dimension smaller than or equal to $4$, there is a unique N-intrinsic volume up to scaling, and a unique N-intrinsic Laplacian, given by $\Delta_\popp$. In our example, this unique N-intrinsic Laplacian has \emph{no} compatible complement or, in other words, the macroscopic diffusion operator has no microscopic counterpart.
On a quasi-contact manifold one can build an analogue of the Reeb vector field (actually, a one-parameter family parametrized by the distinct eigenvalues of $J$), which provides a standard Riemannian extension (see Section~\ref{s:quasireeb}). It turns out that the Riemannian volume of these extensions is once again Popp's volume. Thus, the above non-existence result also provides an answer to {\bf Q3}: the macroscopic operator $\Delta_\omega$ provided by the quasi-Reeb Riemannian extension is not the microscopic operator $L^\V$ provided by the quasi-Reeb complement.
\subsubsection{Convergence of random walks}
Finally, in Section \ref{a:randomwalk}, we provide the probabilistic side of the construction of the microscopic Laplacian. Namely, as we see in equation \eqref{eq-microscopic}, the microscopic Laplacian is built from the scaling limit of a single step of a random walk. In Theorem~\ref{AppendixConvergence2}, we show that this single-step convergence (in the sense of the convergence of the induced operator on smooth functions) can be promoted to the convergence of the random walks to the diffusion generated by the limit operator. This connects the microscopic Laplacian back to the heuristic discussion in Section \ref{RiemIntro}. Moreover, in Theorem~\ref{AppendixConvergence}, the convergence of a sequence of random walks to the diffusion generated by some appropriate second-order operator is established in much more generality than required for the present paper (for example, the probability measure for choosing a co-vector at each step of the walk can be supported on the entire co-tangent space), with an eye toward a wider variety of possible approximation schemes.
\subsection{Comparison with recent literature}\label{s:literature}
In the Riemannian setting, different authors have analyzed the convergence of geodesic random walks to the Brownian motion generated by the Laplace-Beltrami operator. In particular, in \cite{PinksyRiem}, Pinsky considers a process (a random flight in our terminology, which he calls the isotropic transport process) that, starting from $x \in M$, moves along a geodesic with initial vector uniformly chosen on the unit sphere, for a randomly chosen, exponentially-distributed time, after which a new vector is chosen. The lift of this walk to the tangent bundle is a Markov process that can be understood in terms of the corresponding semi-group. Using this, the author shows that under appropriate (parabolic) rescaling, the semi-group of the random flight converges to the Brownian motion semi-group (whose generator is the Laplace-Beltrami operator). In \cite{LebeauRiem}, Lebeau and Michel investigate a random walk on smooth, compact, connected Riemannian manifolds that, at each step, jumps to a uniformly chosen (according to the Riemannian volume measure) point in the ball of radius $h$ around the current position. A natural modification of such a random walk (based on the Metropolis algorithm) approximates Brownian motion on the manifold when $h$ is sent to zero and time is rescaled appropriately. Moreover, the authors consider the transition operator of the random walk, and prove that its rescaled spectrum approximates the spectrum of the Laplace-Beltrami operator. This provides a sharp rate of convergence of this random walk to its stationary distribution.
Horizontal Laplacians and diffusions on sub-Riemannian manifolds and random walk approximations to these have appeared several times in the literature recently. We make use of our notation and terminology in describing these results, in order to make the connection with the present paper clearer.
In \cite{LebeauSriem}, Lebeau and Michel study the spectral theory of a reversible Markov chain associated to a random walk on a sub-Riemannian manifold $M$, where, at each step, one walks along the integral lines (not geodesics, in general) of a fixed set of divergence-free vector fields $X_1,\ldots,X_k$ (with respect to some fixed volume $\omega$). This random walk depends on a parameter $h$. In particular, they prove the convergence, as $h\to 0$, to an associated hypoelliptic diffusion whose generator, since the vector fields are divergence-free, coincides with $\Delta_\omega$. In a similar spirit to the above, they also consider the rate of convergence to equilibrium of a random walk of this type.
In \cite{GordLaeOlder}, Gordina and Laetsch use a Riemannian extension of a sub-Riemannian metric to determine an orthogonal complement $\V_{\g}$ to $\distr$, which is equivalent to determining a subspace of horizontal co-vectors. This allows them to define a horizontal Laplacian by averaging over second-derivatives in the horizontal directions. The result is, in our terminology, a microscopic Laplacian that, in fact, depends only on the choice of complement $\V_{\g}$. They then introduce a corresponding family of horizontal random flights (in our terminology, though they call them random walks; see the discussion in Appendix~\ref{a:randomwalk}), given by choosing a horizontal co-vector of fixed length uniformly at random, and then following the resulting geodesic at a constant speed for an exponentially distributed length of time, before repeating the procedure. They show that, under the natural parabolic scaling of the length of the co-vectors and the mean of the exponential ``travel times,'' these random flights converge to the diffusion generated by the horizontal Laplacian. To do this, they use resolvent formalism in order to prove the convergence of the relevant semi-groups, in a similar vein to the paper of Pinsky \cite{PinksyRiem} mentioned above.
In \cite{GordLae}, Gordina and Laetsch give a more systematic discussion of horizontal Laplacians relative to a choice of vertical complement. In particular, they take up the question of when a horizontal Laplacian relative to a choice of complement is equal to the divergence of the horizontal gradient with respect to some volume (or, as we have framed the question here, when the macroscopic Laplacian $\Delta_\omega$ is equal to the microscopic Laplacian $L^\V$). They give (see \cite[Theorem 5.13]{GordLae}) a condition for this that is equivalent to our Theorem \ref{t:compatibility}, though stated in terms of some Riemannian extension of the sub-Riemannian metric. Having given this result, they consider the application to three concrete examples of 3-dimensional Lie groups, the Heisenberg group, $SU(2)$, and the affine group.
In \cite{grong1}, Grong and Thalmaier define the horizontal Laplacian, corresponding to what we have called the microscopic Laplacian, by extending the sub-Riemannian metric to a Riemannian metric, considering an associated connection, and then taking the trace of a projection of the Hessian (see \cite[Def. 2.1]{grong1}). The resulting operator depends only on the orthogonal complement $\V_{\g}$ to $\distr$ with respect to the Riemannian extension $\g$. Random walks are not considered in this approach, although it also gives a construction of the horizontal Laplacian associated to a choice of horizontal subspace. They do consider the question of when their horizontal Laplacian (with respect to $\V_{\g}$) is equal to the Laplacian determined by the Riemannian volume $\mathcal{R}_\g$ of the extension metric, and the result is essentially our Theorem \ref{t:compatibility} applied to this situation, presented in terms of the extension metric $\g$ (and with a similar proof). Nonetheless, the primary interest of \cite{grong1} is curvature-dimension inequalities for sub-Riemannian structures associated to certain Riemannian foliations, in the spirit of Baudoin and Garofalo \cite{FabriceJEMS}, not a discussion of sub-Laplacians per se. Indeed, they go on to select, among all possible complements (and the associated diffusions), special ones that are integrable and ``metric-preserving'' in a way suitable for their purpose.
In light of the above, the novelty of the present paper lies primarily in the following areas. We abandon the Riemannian formalism, and we use frame formalism, which seems better suited to the sub-Riemannian context (for instance, we get operators that depend on the choice of complement $\V$ by construction). Also, having given the condition for the equivalence of macroscopic and microscopic Laplacians (``the equivalence problem'') in frame formalism (again, Theorem \ref{t:compatibility}), we proceed to analyze many broad classes of examples in some detail (for example, any Carnot group and any corank-$1$ structure). In the contact case, we solve completely the equivalence problem. We do the same in the quasi-contact case, for the Popp volume $\omega = \popp$. This leads to the perhaps-surprising fact that there are 4-dimensional quasi-contact structures for which the macroscopic Laplacian with respect to the Popp volume, which is in a precise sense the canonical volume, cannot be realized as a microscopic Laplacian (indeed, this is generically the case, though the complete proof is beyond the scope of the present paper). More generally, we discuss to what extent volume measures on a sub-Riemannian manifold can be thought of as canonical, which has implications for the degree to which a macroscopic Laplacian can be thought of as canonical.
On the probabilistic side, we allow random walks where the choice of co-vector is supported on the entire unit cylinder in the co-tangent space, rather than just on the horizontal subspace (relative to some choice of complement). (This really just says that sub-Riemannian geometry is rather insensitive to adding some independent vertical component to the initial co-vectors in a random walk, which is not surprising.) Also, our Theorem \ref{AppendixConvergence} gives a general convergence result for random walks on (sub)-Riemannian manifolds, going beyond the particular type of random walks considered elsewhere in the paper. From a technical perspective, this convergence of random walks is proved using martingale methods, rather than the semi-group approach mentioned above.
\section{The macroscopic Laplacian}\label{s:macro}
Let $(M,\distr,\metr)$ be a sub-Riemannian structure and fix a volume form $\omega$ (not necessarily intrinsic or N-intrinsic). In this setting, we have a well defined notion of gradient and divergence (see Definitions~\ref{d:grad} and~\ref{d:diver}). The distribution is not assumed to be equiregular, unless one explicitly chooses $\omega = \popp$.
\begin{definition}
The \emph{macroscopic Laplacian} (w.r.t. $\omega$) is the differential operator
\begin{equation}
\Delta_\omega = \div_\omega\circ\grad.
\end{equation}
\end{definition}
This is a second order differential operator, symmetric on $C^\infty_0(M)$ w.r.t.\ the $L^2$ product induced by the measure $\omega$. A classical result by Strichartz (see \cite{strichartz,strichartzerrata}) is the following:
\begin{theorem}\label{t:selfad}
If $(M,\distr,\g)$ is complete as a metric space, then $\Delta_\omega$ is essentially self-adjoint on $C_0^\infty(M)$ and the associated heat operator is given by a positive, smooth, symmetric heat kernel.
\end{theorem}
We provide explicit formulas for $\Delta_\omega$, in terms of orthonormal frames. The proofs are routine computations.
\begin{lemma}
Let $X_1,\ldots,X_k$ be a (local) orthonormal frame for $\distr$, and let $X_{k+1},\ldots,X_n$ be a frame for some complement, so that $X_1,\ldots,X_n$ is an oriented local frame for $TM$. Let $\theta \in C^\infty(M)$ be defined by $\omega(X_1,\ldots,X_n) = e^\theta$. Then
\begin{equation}
\grad(f) = \sum_{i=1}^k X_i(f) X_i, \qquad f \in C^\infty(M),
\end{equation}
\begin{equation}
\div_\omega (X_i) = \sum_{\alpha=1}^n c_{\alpha i}^\alpha + d\theta (X_i), \qquad i = 1,\ldots,n.
\end{equation}
\end{lemma}
\begin{prop}
The macroscopic Laplacian w.r.t. the choice of $\omega$ is
\begin{equation}\label{eq:macroformula}
\Delta_{\omega} = \mathrm{div}_\omega \circ \grad = \sum_{i=1}^k \left( X_i^2 + \mathrm{div}_\omega(X_i) X_i \right) = \sum_{i=1}^k X_i^2 + \sum_{i=1}^k \sum_{\alpha=1}^n c_{\alpha i}^\alpha X_i + \grad(\theta),
\end{equation}
where the horizontal gradient $\grad(\theta)$ is seen as a derivation.
\end{prop}
Notice that, for any choice of $\omega$, the principal symbol of $\Delta_\omega$ is the Hamiltonian function $2H :T^*M \to \R$.
\begin{lemma}[Change of volume]\label{l:changeofvolume} Let $\omega$ be a volume form and $\omega' = e^g \omega$, with $g \in C^\infty(M)$. Then
\begin{equation}
\Delta_{\omega'} = \Delta_\omega + \grad (g),
\end{equation}
where $\grad (g)$ is meant as a derivation.
\end{lemma}
\begin{proof}
It follows from the change of volume formula for the divergence. In fact, for $X \in \Gamma(TM)$,
\begin{equation}
\div_{\omega'}(X) \omega' = \mathcal{L}_X \omega' = \mathcal{L}_X(e^g \omega) = X(g)e^g \omega + e^g \mathcal{L}_X \omega = (X(g) + \div_\omega(X))\omega'.
\end{equation}
Then $\div_{\omega'}(X) = \div_{\omega}(X) + X(g)$. The statement follows from the definition of $\Delta_\omega = \div_\omega \circ \grad$.
\end{proof}
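The lemma can also be tested symbolically; the following sympy sketch (ours, again on the standard three-dimensional Heisenberg model, with $\omega$ the Lebesgue volume and $g = z$) checks that $\Delta_{\omega'}\phi = \Delta_\omega\phi + \grad(g)(\phi)$.

```python
# Symbolic check (ours) of the change-of-volume lemma on the standard 3D
# Heisenberg model: omega = dx^dy^dz (Lebesgue), omega' = e^g omega with
# g = z, horizontal frame X1, X2.
import sympy as sp

x, y, z = sp.symbols('x y z')
phi = sp.Function('phi')(x, y, z)

X1 = lambda f: sp.diff(f, x) - y/2 * sp.diff(f, z)
X2 = lambda f: sp.diff(f, y) + x/2 * sp.diff(f, z)

# Horizontal gradient of phi, in the coordinates (x, y, z).
V = [X1(phi), X2(phi), -y/2 * X1(phi) + x/2 * X2(phi)]

g = z
coords = (x, y, z)
div_w  = sum(sp.diff(c, v) for c, v in zip(V, coords))       # w.r.t. omega
div_wp = sp.exp(-g) * sum(sp.diff(sp.exp(g)*c, v)            # w.r.t. omega'
                          for c, v in zip(V, coords))

grad_g = X1(g)*X1(phi) + X2(g)*X2(phi)  # grad(g), seen as a derivation
assert sp.simplify(div_wp - (div_w + grad_g)) == 0
```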
\section{The microscopic Laplacian}\label{s-luca-formula}
Let $\V$ be a \emph{complement} (for the distribution $\distr$), namely a smooth sub-bundle of $TM$ such that
\begin{equation}
T_qM = \distr_q \oplus \V_q, \qquad \forall q \in M.
\end{equation}
By duality we have an analogous splitting for the cotangent bundle, namely
\begin{equation}
T_q^*M = \mathbf{h}_q \oplus \mathbf{v}_q, \qquad \forall q \in M,
\end{equation}
where $\mathbf{h}$ (resp. $\mathbf{v}$) denotes the annihilator bundle of $\V$ (resp. of $\distr$). We call $\mathbf{v}$ the space of \emph{vertical covectors} and, consequently, $\mathbf{h}$ is the space of \emph{horizontal covectors} (see Fig.~\ref{f:verhor}). The Hamiltonian function is a fiber-wise non-negative quadratic form $H_q:=H|_{T_q^*M}$, with $\ker H_q = \mathbf{v}_q$. Its restriction to $\mathbf{h}_q$ is then positive definite, hence a scalar product on it.
\begin{figure}
\centering
\begin{tikzpicture}
\node[draw, thick, trapezium, trapezium left angle=75, trapezium right angle=-75, minimum height=3cm, minimum width=5cm,
trapezium stretches=true, shape border uses incircle,shape border rotate=0] at (0,0) {};
\draw[thick, -stealth] (0,0) -- (-1,3.5);
\node[draw, cylinder, minimum width=3cm, minimum height=5cm, aspect=4, shape border rotate=90] at (10,0) {};
\draw[thick, -stealth] (10,0) -- (10,3.5);
\draw[-triangle 60] (3,0) -- node[above] {``annihilator''} (6,0);
\draw[dashed, rotate around={12:(10,0)}] (10,0) ellipse (1.5cm and 10pt);
\draw[thick, xshift=0.3cm,yshift=0.2cm,rotate around={12:(6,-2)}] (6,-2) -- (12.5,-2) -- (14,0);
\draw[thick, xshift=0.3cm,yshift=0.2cm,rotate around={12:(6,-2)}] (14,0) -- (11.74,0);
\draw[thick, dashed, xshift=0.3cm,yshift=0.2cm,rotate around={12:(6,-2)}] (11.74,0) -- (8.7,0);
\draw[thick, xshift=0.3cm,yshift=0.2cm,rotate around={12:(6,-2)}] (8.7,0) -- (7.5,0) -- (6,-2);
\node at (-0.6,3.5) {$\V_q$};
\node[above] at (1.35,-1.5) {$\distr_q$};
\node at (13,3.5) {$T^*_qM$};
\node at (12,-1.5) {$\cyl_q$};
\node[right] at (11.7,1) {$\mathbf{h}_q = \V_q^\perp$};
\node[right] at (10,3.5) {$\mathbf{v}_q = \distr_q^\perp$};
\node at (2,3.5) {$T_qM$};
\end{tikzpicture}
\caption{Cotangent splitting $T_q^*M = \mathbf{v}_q \oplus \mathbf{h}_q$. Vertical covectors annihilate horizontal vectors ($\mathbf{v}_q = \distr_q^\perp$), and correspond to the trivial geodesic. Horizontal covectors annihilate the transverse vectors ($\mathbf{h}_q = \V^\perp_q$). They correspond to normal geodesics that are the ``most horizontal'' w.r.t. the choice of $\V$.}\label{f:verhor}
\end{figure}
\begin{rmk}
An alternative definition, in the spirit of \cite{montgomerybook,GordLaeOlder,GordLae}, is the following. Extend $\metr$ to a Riemannian metric such that $\V$ and $\distr$ are orthogonal. Then we have a dual co-metric on $T^*M$. This co-metric depends on the choice of the orthogonal extension, but its restriction to $\mathbf{h}$ does not, and induces a Euclidean structure.
\end{rmk}
We obtain a $(k-1)$-dimensional sphere bundle over $M$ by taking the fiber-wise intersection $\cyl \cap \mathbf{h}$. In fact
\begin{equation}
\cyl_q \cap \mathbf{h}_q =\{\lambda \in T_q^*M \mid 2H(\lambda) =1,\; \lambda \in \mathbf{h}_q\}, \qquad \forall q \in M,
\end{equation}
is the Euclidean sphere $\mathbb{S}^{k-1}_q$ on the vector space $\mathbf{h}_q$ w.r.t. the scalar product $2H|_{\mathbf{h}_q}$.
\subsection{A class of adapted measures}
We define a class of probability measures on $\cyl_q$, obtained as the product of the standard probability measure on $\mathbb{S}^{k-1}_q$ and a probability measure on the vertical space $\mathbf{v}_q$. We are interested in a sufficiently general model, where \emph{all} geodesics on the cylinder potentially have a non-zero probability density. For this reason we include measures with non-compact support. This is indeed possible, but we need the probability to go to zero sufficiently fast at infinity along the ``non-compact directions'' of $\cyl_q = \mathbb{S}_q^{k-1} \times \mathbf{v}_q$.
\begin{definition}
Let $E$ be a vector space and $\alpha \in \mathbb{N}$. A Borel measure $\mu$ on $E$ is $\alpha$-\emph{decreasing} if any linear function $f : E \to \R$ belongs to $L^\alpha(E,\mu)$.
\end{definition}
If a measure is $\alpha$-\emph{decreasing} then it is $\beta$-\emph{decreasing} for any $\beta \leq \alpha$. A compactly supported probability measure is $\alpha$-decreasing for all $\alpha$. Finally, we notice that one needs to check the condition only for a complete set of linear projections (the coordinate functions $(x_1,\ldots,x_m) \mapsto x_i$ in terms of some basis).
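Two examples on $E = \R$ may help to fix ideas (they are our illustrations). The Gaussian measure $(2\pi)^{-1/2}e^{-x^2/2}\,dx$ is $\alpha$-decreasing for every $\alpha$, since all moments of $|x|$ are finite. On the other hand, the Cauchy measure $\pi^{-1}(1+x^2)^{-1}dx$ is not even $1$-decreasing, since
\begin{equation*}
\int_\R \frac{|x|}{\pi(1+x^2)}\, dx = +\infty.
\end{equation*}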
\begin{definition}\label{d:decreasing}
Let $\mu_q$ be a probability measure on $\cyl_q = \mathbb{S}^{k-1}_q \times \mathbf{v}_q$. We say that the collection of measures $\{\mu_q\}_{q \in M}$ is \emph{adapted to the splitting} if there exists a $2$-decreasing probability measure $\mu_{\mathbf{v}_q}$ on $\mathbf{v}_q$ such that
\begin{equation}
\mu_q = \mu_{\mathbb{S}_q^{k-1}} \times \mu_{\mathbf{v}_q}, \qquad \forall q \in M,
\end{equation}
where $\mu_{\mathbb{S}_q^{k-1}}$ denotes the standard uniform measure on $\mathbb{S}_q^{k-1} = \cyl_q \cap \mathbf{h}_q$. Moreover, we assume that if $f: T^*M \to \R$ is a continuous and fiberwise linear function, the map $q \mapsto \int_{\cyl_q} f^2 \mu_q$ (which is well defined) is continuous.
\end{definition}
\begin{rmk}
As we will see, the choice of $\alpha=2$ is forced by the properties of the exponential map. In the Riemannian case, there is no need for this, since the fibers of the unit tangent bundle are compact. The regularity assumption is needed for the uniform convergence in the next definition.
\end{rmk}
For the next definition recall that, by Lemma~\ref{l:prolong}, there exists $\varepsilon_0 >0$ such that all arc-length parametrized geodesics $\gamma_\lambda(t) = \exp_q(t,\lambda)$, with $\lambda \in \cyl_q$, are well defined for $t \in [0,\varepsilon_0)$.
\begin{definition}\label{Def:MainOperator}
Consider a splitting $TM =\distr \oplus \V$ and some choice of adapted measure $\{\mu_q\}_{q \in M}$. The \emph{microscopic Laplacian} is the differential operator:
\begin{equation}
(L^\V \phi)(q):=\lim_{t\to 0^+} \frac{2 k}{ t^2}\int_{\cyl_q} (\phi(\exp_q(t,\lambda))-\phi(q))\mu_q(\lambda), \qquad \forall q \in M.
\end{equation}
\end{definition}
Surprisingly, this definition does not depend on the choice of the measure adapted to the splitting, which justifies the notation $L^\V$. Moreover, the right hand side converges uniformly on compact sets as $t \to 0^+$. The proofs of these facts are contained in the proof of Theorem~\ref{t:microformula}.
\begin{rmk}
As discussed in Section~\ref{s:intro}, this is the operator associated with the limit of a random walk, where, at each step, normal geodesics are chosen on $\cyl_q$ with the given probability measure $\mu_q$. Our construction generalizes the one given in \cite{GordLaeOlder,GordLae}. The latter can be recovered by setting
\begin{equation}
\mu_q = \mu_{\mathbb{S}^{k-1}_q} \times \delta_{\mathbf{v}_q}, \qquad \forall q \in M,
\end{equation}
where $\delta_{\mathbf{v}_q}$ is a Dirac mass centered at $0 \in \mathbf{v}_q$. This is certainly an $\alpha$-decreasing measure for all $\alpha\geq 0$, and it satisfies the regularity assumptions of Definition~\ref{d:decreasing}. With this choice, at each step of the associated random walk one moves only on a $k$-dimensional submanifold.
\end{rmk}
\begin{theorem}\label{t:microformula}
The microscopic sub-Laplacian $L^\V$ depends only on the choice of the complement $\V$, and the convergence in Definition~\ref{Def:MainOperator} is uniform on compact sets. Moreover, in terms of a local orthonormal frame $X_1,\ldots,X_k$ for $\distr$ and a local frame $X_{k+1},\ldots,X_n$ for $\V$, we have:
\begin{equation}\label{eq:microformula}
L^\V = \sum_{i=1}^k X_i^2 + \sum_{i,j=1}^k c_{ji}^j X_i,
\end{equation}
where $c_{ij}^\ell \in C^\infty(M)$ are the structural functions associated with the given frame.
\end{theorem}
\begin{proof}
Let $\nu_1,\ldots,\nu_n$ be the co-frame dual to $X_1,\ldots,X_n$:
\begin{align}
\distr = & \spn\{X_1,\ldots,X_k\}, & \mathbf{h} = & \spn\{\nu_1,\ldots,\nu_k\},\\
\V = & \spn\{X_{k+1},\ldots,X_n\}, & \mathbf{v} = & \spn\{\nu_{k+1},\ldots,\nu_{n}\}.
\end{align}
Fix $q \in M$. In coordinates $(h_1,\ldots,h_n):T_q^*M \to \R^n$ induced by the choice of the frame, we have:
\begin{equation}
\cyl_{q} := \cyl \cap T_{q}^*M = \{(h_1,\ldots,h_k,h_{k+1},\ldots,h_n) \mid h_1^2+\ldots+ h_k^2 = 1\} \simeq \mathbb{S}^{k-1} \times \R^{n-k}.
\end{equation}
Since $q$ is fixed, we write $\mathbf{v}_q = \R^{n-k}$ and $\mu_{\mathbf{v}_q} = \mu_{n-k}$. By plugging the Taylor expansion of Lemma~\ref{l:taylor} into the definition of $L^\V$ we get:
\begin{equation}
(L^\V \phi)(q) := \lim_{t \to 0^+} \frac{2k}{t^2} \int_{\cyl_q}\left\lbrace t h_i X_i(\phi) + \frac{1}{2}t^2\left[h_j c_{ji}^\alpha h_\alpha X_i(\phi) + h_i h_j X_j(X_i(\phi))\right] + t^3 r_\lambda(t) \right\rbrace \mu(\lambda),
\end{equation}
where repeated indices are summed (according to the generalized Einstein convention) and we suppressed some of the explicit evaluations at $q$. We compute the three terms of the integrand.
1. The term linear in $t$ vanishes. In fact, the integrand depends only on the variables $h_1,\ldots,h_k$, and
\begin{equation}
\int_{\cyl_q} h_i X_i(\phi) \mu(\lambda) = X_i(\phi) \int_{\R^{n-k}} \mu_{n-k} \int_{\mathbb{S}^{k-1}} h_i \mu_{\mathbb{S}^{k-1}} = 0,
\end{equation}
since the integral of any linear function on the sphere vanishes.
2. For the term quadratic in $t$, we distinguish two contributions:
\begin{equation}
h_j c_{ji}^\alpha h_\alpha X_i(\phi) + h_i h_j X_j(X_i(\phi)) = \underbrace{ h_j c_{ji}^{\bar\ell} h_{\bar\ell} X_i(\phi)}_{1} + \underbrace{ h_j c_{ji}^\ell h_\ell X_i(\phi) + h_i h_j X_j(X_i(\phi))}_{2},
\end{equation}
where we recall that barred Latin indices range from $k+1$ to $n$. The first term, not present in the Riemannian case, is the product of a linear function on $\mathbb{S}^{k-1}$ and a linear function on $\mathbb{R}^{n-k}$. Since the measure $\mu$ is a product, by Fubini's theorem we can first perform the integral on $\mathbb{S}^{k-1}$:
\begin{equation}
\int_{\cyl_q} h_j c_{ji}^{\bar\ell} h_{\bar\ell} X_i(\phi) = c_{ji}^{\bar\ell}X_i(\phi)\int_{\mathbb{S}^{k-1}} h_j \mu_{\mathbb{S}^{k-1}} \int_{\R^{n-k}}\mu_{n-k} = 0.
\end{equation}
On the other hand, the second term does not contain any $h_{\bar\ell}$, for $\bar\ell=k+1,\ldots,n$, and is the restriction to $\mathbb{S}^{k-1}$ of a quadratic form $Q = Q_{ij}h_i h_j$, with:
\begin{equation}
Q_{ij} = X_i(X_j(\phi)) + c_{i\ell}^j X_\ell(\phi), \qquad i,j=1,\ldots,k.
\end{equation}
Recall that for any quadratic form $Q : \R^k \to \R$ we have
\begin{equation}
\int_{\mathbb{S}^{k-1}} Q(v) \mu_{\mathbb{S}^{k-1}}(v) = \frac{1}{k} \tr(Q).
\end{equation}
Thus we get, for the term quadratic in $t$, the following expression:
\begin{equation}
k\int_{\cyl_q} Q_{ij}h_i h_j \mu = k \int_{\R^{n-k}} \mu_{n-k} \int_{\mathbb{S}^{k-1}} Q_{ij}h_i h_j \mu_{\mathbb{S}^{k-1}} = \tr(Q) = \sum_{i=1}^k X_i^2(\phi) + \sum_{i,\ell =1}^k c_{i\ell}^i X_\ell(\phi),
\end{equation}
where we restored the explicit summations in the last term.
3. The final term to compute is the remainder. By Lemma~\ref{l:taylor}, if $t$ is sufficiently small, the remainder $r_\lambda(t)$ of the Taylor expansion is uniformly bounded by a quadratic polynomial in the unbounded variables $h_{\bar\imath}$:
\begin{equation}
\left\lvert\frac{2k}{t^2}\int_{\cyl_q} r_\lambda(t) t^3\right\rvert \leq 2 k t \int_{\R^{n-k}} \left(A +B_{\bar\ell}|h_{\bar\ell}| + C_{\bar\imath\bar\jmath}|h_{\bar\imath}||h_{\bar\jmath}|\right)\mu_{n-k}\int_{\mathbb{S}^{k-1}} \mu_{\mathbb{S}^{k-1}}, \qquad \forall t \leq \varepsilon_0.
\end{equation}
By Lemma~\ref{l:taylor} this estimate holds uniformly on an open ball $B(q,\varepsilon_0)$. Then, for $q$ in a compact set $K$:
\begin{equation}
\left\lvert\frac{k}{t^2}\int_{\cyl_q} r_\lambda(t) t^3\right\rvert\leq t D \sum_{\bar\ell = k+1}^n \int_{\cyl_q} (h_{\bar\ell})^2 \mu_q = t g(q),
\end{equation}
where $D$ is a constant and, by our assumption on the family of measures, $g(q)$ is continuous (recall that the $h_{\alpha}$ are precisely the linear, smooth functions $\lambda \mapsto \langle \lambda, X_\alpha\rangle$ on $T^*M$). Then the remainder goes to zero uniformly on $K$. The fact that $L^\V$ depends only on the choice of the complement (and not on the adapted measure) is a consequence of Eq.~\eqref{eq:microformula}.
\end{proof}
\begin{rmk}
We stress that equiregularity is not assumed in Theorem~\ref{t:microformula}, for Lemma~\ref{l:taylor} is completely general.
\end{rmk}
\subsection{Horizontal divergence}\label{horizontal-div}
We can write the operator $L^\V$ in a more classical fashion, introducing the so-called \emph{horizontal divergence}. As the classical divergence depends on the choice of a volume form (or a density), the horizontal one depends on the choice of the complement $\V$.
\begin{definition}
Let $m \in \mathbb{N}$. The bundle of \emph{horizontal $m$-alternating tensors} on $M$ is the disjoint union
\begin{equation}
\Lambda_\distr^m M := \bigsqcup_{q\in M} (\wedge^m \distr_q)^*,
\end{equation}
where $(\wedge^m \distr_q)^*$ is the space of $m$-alternating functionals on $\distr_q$. Sections of $\Lambda_\distr^m M$ are called \emph{horizontal $m$-forms}.
\end{definition}
Any $m$-form induces a horizontal $m$-form by restriction. Given the complement $\V$, the fiber-wise linear projection $\pi_\distr : TM \to \distr$ is well defined, and for any horizontal $m$-form $\eta$ we can define an $m$-form $\tilde{\eta}:= \eta \circ \pi_\distr$.
\begin{definition}
Given a horizontal $m$-form $\eta$ and a vector field $X \in \Gamma(TM)$, we define the \emph{horizontal Lie derivative} $\mathcal{L}_X^\V\eta$ as the horizontal $m$-form whose value at $q \in M$ is
\begin{equation}
(\mathcal{L}_X^\V \eta)_q = \left.\frac{d}{dt}\right\rvert_{t=0}(P_t^* \eta \circ \pi_\distr)_q,
\end{equation}
where $P_t$ is the flow of $X$.
\end{definition}
This is the same as the definition of the Lie derivative of $m$-forms, with the addition of $\pi_\distr$, needed here since $\eta$ acts only on $m$-tuples of horizontal vectors. One readily obtains:
\begin{equation}\label{eq:hordivformula}
(\mathcal{L}^\V_X \eta) (Y_1,\ldots,Y_m) = X(\eta(Y_1,\ldots,Y_m)) - \sum_{i=1}^m \eta(Y_1,\ldots,\pi_\distr [X,Y_i],\ldots,Y_m),
\end{equation}
where $Y_1,\ldots,Y_m \in \Gamma(\distr) $. If $\distr$ is oriented (as a vector bundle), with $\rank \distr = k$, we define a canonical horizontal $k$-form $\eta$ such that
\begin{equation}
\eta(X_1,\ldots,X_k) = 1,
\end{equation}
for any oriented orthonormal frame $X_1,\ldots,X_k \in \Gamma(\distr) $.
\begin{definition}
Let $X \in \Gamma(TM)$, and let $\eta$ be a non-vanishing horizontal $k$-form. The \emph{horizontal divergence} $\div^\V(X)$ is the function defined by
\begin{equation}
\mathcal{L}_X^\V \eta = \div^\V(X) \eta.
\end{equation}
\end{definition}
\begin{rmk}
The definition does not depend on the choice of orientation. A general definition for non-orientable distributions can be given in a standard way with horizontal densities (projectivized non-vanishing $k$-forms).
\end{rmk}
Indeed $\div^\V(X)$ computes the infinitesimal change of volume, under the flow of $X$, of the projection on $\distr$ of the standard parallelotope of $\distr$.
As a direct consequence of Eq.~\eqref{eq:microformula} and Eq.~\eqref{eq:hordivformula}, we obtain the following.
\begin{prop}
Let $TM = \distr \oplus \V$. The microscopic Laplacian is
\begin{equation}
L^\V = \div^\V \circ \grad = \sum_{i=1}^k \left( X_i^2 + \div^\V(X_i)\, X_i \right),
\end{equation}
where $X_1,\ldots,X_k$ is an orthonormal frame, $\grad$ is the horizontal gradient and $\div^\V$ is the horizontal divergence.
\end{prop}
This formula must be compared directly with Eq.~\eqref{eq:macroformula} for the macroscopic operator, where the horizontal divergence $\div^\V$ is replaced by the divergence $\div_\omega$ w.r.t. some volume form $\omega$ on $M$.
\section{The equivalence problem}\label{s:equiv}
Fix a volume form $\omega$ and a complement $\V$. We consider both $L^\V$ and $\Delta_\omega$ as second order differential operators on the space of smooth, compactly supported functions $C^\infty_0(M)$, with the $L^2$ product:
\begin{equation}
(f_1,f_2)_\omega = \int_M f_1 f_2 \omega, \qquad f_1,f_2 \in C^\infty_0(M).
\end{equation}
The operator $\Delta_\omega$ is symmetric by construction. What about $L^\V$? The principal symbol of both operators, as a function on the cotangent bundle, is twice the Hamiltonian $2H:T^*M \to \R$.
\begin{theorem}\label{t:compatibility}
Let $\V$ be a complement and $\omega$ a volume form. Then $L^\V$ is symmetric w.r.t.\ the $L^2$ product induced by $\omega$ if and only if $L^\V = \Delta_\omega$ or, equivalently, if and only if
\begin{equation}
\chi^{(\V,\omega)} := \Delta_\omega - L^\V = \sum_{i=1}^k \sum_{j=k+1}^n c_{ji}^j X_i + \grad(\theta) = 0,
\end{equation}
where $\theta = \log |\omega(X_1,\ldots,X_n)|$ and $c_{ij}^\ell \in C^\infty(M)$ are the structural functions associated with an orthonormal frame $X_1,\ldots,X_k$ for $\distr$ and a frame $X_{k+1},\ldots,X_n$ for $\V$.
\end{theorem}
\begin{proof}
To get the explicit formula for $\chi^{(\V,\omega)}$, compare Eq.~\eqref{eq:macroformula} with Eq.~\eqref{eq:microformula}. As for the symmetry, suppose that $L^\V$ is symmetric w.r.t. some choice of $\omega$. Since $\Delta_\omega$ is symmetric, $\chi^{(\V,\omega)}$ is also symmetric, that is (suppressing some notation):
\begin{equation}
(\chi(f_1),f_2)_\omega = (f_1,\chi(f_2))_\omega, \qquad \forall f_1,f_2 \in C^\infty_0(M).
\end{equation}
If we choose $f_2 = 1$ on some domain larger than the support of $f_1$, this is equivalent to $\int_M \chi(f_1) \omega = 0$ for any $f_1 \in C^\infty_0$. In particular $\chi =0$. The converse is clear.
\end{proof}
The condition $\chi^{(\V,\omega)} = 0$ is equivalent to the following system of PDEs:
\begin{equation}
X_i(\theta) + \sum_{\bar\jmath = k+1}^n c_{\bar\jmath i}^{\bar\jmath} = 0, \qquad i=1,\ldots,k.
\end{equation}
In particular $L^\V$ is symmetric w.r.t. some volume $\omega$ if and only if, for fixed $\V$, the above system admits a global solution $\theta$. If this is the case, the associated volume is given by $\omega(X_1,\ldots,X_n) = e^\theta$.
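\begin{rmk}
As an elementary illustration (a standard example, not needed in what follows), consider the Heisenberg group: $M = \mathbb{R}^3$ with orthonormal frame $X_1 = \partial_x - \frac{y}{2}\partial_z$, $X_2 = \partial_y + \frac{x}{2}\partial_z$, and complement $\V = \mathrm{span}\{X_3\}$ with $X_3 = \partial_z$. Here $[X_1,X_2] = X_3$ and all other brackets vanish, so the only non-zero structural functions are $c_{12}^3 = -c_{21}^3 = 1$. In particular $c_{3i}^3 = 0$ for $i=1,2$, and the system reduces to $X_i(\theta) = 0$, solved by $\theta = 0$. The associated volume, determined by $\omega(X_1,X_2,X_3) = 1$, is the Lebesgue (Haar) volume $\omega = dx\wedge dy \wedge dz$, and $L^\V = X_1^2 + X_2^2 = \Delta_\omega$.
\end{rmk}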
\begin{rmk}
The \emph{compatibility condition} $\chi^{(\V,\omega)}= 0$ is the same appearing in \cite[Theorem 5.13]{GordLae}, written in a different form. We call \emph{compatible} the pairs $(\V,\omega)$ solving the compatibility condition.
\end{rmk}
Finally, if $L^\V$ is symmetric w.r.t. some choice of volume $\omega$, the latter is unique (up to constant rescaling).
\begin{lemma}\label{l:uniqueness}
If $L^\V = \Delta_\omega$, then $L^\V = \Delta_{\omega'}$ if and only if $\omega = c \omega'$, where $c$ is a non-zero constant.
\end{lemma}
\begin{proof}
Let $\omega' = e^g \omega$ with $g \in C^\infty(M)$. Then, using the change of volume formula (see Lemma~\ref{l:changeofvolume}):
\begin{equation}
\chi^{(\V,\omega')} = \Delta_{\omega'} - L^\V = \Delta_{\omega} +\grad(g) - L^\V = \chi^{(\V,\omega)} + \grad(g).
\end{equation}
Thus both pairs are compatible iff $\grad (g) = 0$, that is (by the bracket generating condition) iff $g$ is constant.
\end{proof}
\section{Preliminaries}\label{s:prel}
We discuss some preliminaries in sub-Riemannian geometry. We essentially follow \cite{nostrolibro}, but see also \cite{montgomerybook,riffordbook,Jea-2014}.
\begin{definition}
A \emph{sub-Riemannian manifold} is a triple $(M,\distr,\metr)$ where:
\begin{itemize}
\item $M$ is a smooth, connected manifold;
\item $\distr \subset TM$ is a smooth distribution of constant rank $k < n$, satisfying \emph{H\"ormander's condition};
\item $\metr$ is a smooth scalar product on $\distr$: for all $q \in M$, $\metr_q$ is a positive definite quadratic form on $\distr_q$, smooth as a function of $q$.
\end{itemize}
This definition does not include Riemannian structures, for which $k =n$ and $\distr = TM$. We use the term \emph{(sub)-Riemannian} to refer to structures $(M,\distr,\metr)$ that are either Riemannian or sub-Riemannian.
\end{definition}
\begin{definition}
Define $\distr^{1}:=\distr$, $\distr^{i+1}:=\distr^{i}+[\distr^{i},\distr]$, for every $i\geq1$. A sub-Riemannian manifold is said to be \emph{equiregular} if for each $i\geq1$, the dimension of $\distr^{i}_{q}$ does not depend on the point $q\in M$.
\end{definition}
Smooth sections of $\distr$ are called \emph{horizontal vector fields}. H\"ormander's condition guarantees that any two points in $M$ can be joined by a Lipschitz continuous curve whose velocity is a.e. in $\distr$ (Chow-Rashevskii theorem). We call such curves \emph{horizontal}. Horizontal curves $\gamma : I \to M$ have a well-defined length, given by
\begin{equation}
\ell(\gamma) = \int_I \|\dot\gamma(t)\|\, dt,
\end{equation}
where $\| \cdot \|$ is the norm induced by the (sub)-Riemannian scalar product. The \emph{Carnot-Carath\'eodory distance} between two points $p,q \in M$ is
\begin{equation}
d(p,q) = \inf\{ \ell(\gamma) \mid \gamma \text{ horizontal curve connecting $q$ with $p$} \}.
\end{equation}
This distance turns $(M,\distr,\metr)$ into a metric space whose topology coincides with the manifold topology of $M$.
\begin{definition}
The \emph{Hamiltonian function} $H: T^*M \to \R$ associated with a sub-Riemannian structure is
\begin{equation}
H(\lambda) := \frac{1}{2} \sum_{i=1}^k \langle\lambda,X_i\rangle^2,
\end{equation}
for any choice of a local orthonormal frame $X_1,\ldots,X_k$ of horizontal fields, i.e. $\metr(X_i,X_j) =\delta_{ij}$.
\end{definition}
On each fiber, $H$ is a non-negative quadratic form, and provides a way to measure the ``length'' of covectors. By setting $H_q:= H|_{T_q^*M}$, one can check that $\ker H_q = \distr_q^\perp$, the set of covectors that vanish on the distribution. In the Riemannian case, $H$ is precisely the fiber-wise inverse of the metric $\metr$.
Let $\sigma$ be the natural symplectic structure on $T^*M$, and $\pi: T^*M \to M$ the projection. The \emph{Hamiltonian vector field} $\vec{H}$ is the unique vector field on $T^*M$ such that $dH = \sigma(\cdot,\vec{H})$. Integral lines of $\vec{H}$ are smooth curves on the cotangent bundle satisfying Hamilton's equations $\dot{\lambda}(t) = \vec{H}(\lambda(t))$.
\begin{definition}
The projections $\gamma(t) = \pi(\lambda(t))$ of integral lines of $\vec{H}$ are called \emph{normal (sub)-Riemannian geodesics}.
\end{definition}
Normal geodesics are indeed smooth and, as in the Riemannian case, are locally minimizing (i.e. any sufficiently small segment of $\gamma(t)$ minimizes the distance between its endpoints).
For any $\lambda \in T^*M$ we consider the associated normal geodesic $\gamma_\lambda(t)$, obtained as the projection of the integral line $\lambda(t)$ of $\vec{H}$ with initial condition $\lambda(0) = \lambda$. The \emph{initial covector} $\lambda$ plays the same role in sub-Riemannian geometry as the initial vector of Riemannian geodesics, with the important difference that infinitely many distinct sub-Riemannian normal geodesics, with different initial covectors, can have the same initial vector. Notice that the Hamiltonian function, which is constant on integral lines $\lambda(t)$, measures the speed of the normal geodesic:
\begin{equation}
2H(\lambda) = \|\dot\gamma_\lambda(t)\|^2,\qquad \lambda \in T^*M.
\end{equation}
We are ready for the following important definition.
\begin{definition}
Let $D_q\subseteq [0,\infty) \times T_q^*M$ be the set of pairs $(t,\lambda)$ such that the normal geodesic with initial covector $\lambda$ is well defined up to time $t$. The (sub)-Riemannian \emph{exponential map} (at $q \in M$) is the map $\exp_q: D_q \to M$ that associates with $(t,\lambda)$ the point $\gamma_\lambda(t)$.
\end{definition}
When clear, we suppress the initial point $q$ in $\exp_q$. It is easy to show that, for any $\alpha >0 $, we have
\begin{equation}
\gamma_{\alpha \lambda}(t) = \gamma_{\lambda}(\alpha t).
\end{equation}
This rescaling property, due to the fact that $H$ is fiber-wise homogeneous of degree $2$, justifies the restriction to the subset of initial covectors lying in the level set $2H=1$.
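\begin{rmk}
The rescaling property can be checked directly. In canonical coordinates $(x,h)$ on $T^*M$, Hamilton's equations read $\dot x = \partial_h H$, $\dot h = -\partial_x H$. If $\lambda(t) = (x(t),h(t))$ is the integral line with $\lambda(0)=\lambda$, the fiber-wise homogeneity $H(x,\alpha h) = \alpha^2 H(x,h)$ implies that $\mu(t) := (x(\alpha t), \alpha\, h(\alpha t))$ is again a solution of Hamilton's equations, with $\mu(0) = \alpha\lambda$. Projecting, $\gamma_{\alpha\lambda}(t) = x(\alpha t) = \gamma_\lambda(\alpha t)$.
\end{rmk}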
\begin{definition}
The \emph{unit cotangent bundle} is the set of initial covectors such that the associated normal geodesic has unit speed, namely
\begin{equation}
\cyl := \{\lambda \in T^*M \mid 2H(\lambda) = 1\} \subset T^*M.
\end{equation}
\end{definition}
\begin{rmk}
We stress that, in the sub-Riemannian case, the non-negative quadratic form $H_q = H|_{T_q^*M}$ has a non-trivial kernel. It follows that the fibers $\cyl_q$ are cylinders, and thus non-compact, in sharp contrast with the Riemannian case (where the fibers $\cyl_q$ are spheres).
\end{rmk}
For any $\lambda \in \cyl$, the corresponding geodesic $\gamma_\lambda(t)$ is parametrized by arc-length, namely $\ell(\gamma|_{[0,T]}) = T$. Even though $\cyl$ is not compact, all arc-length parametrized geodesics are well defined for a sufficiently small time. The next lemma is a consequence of the form of Hamilton's equations and the compactness of small balls.
\begin{lemma}\label{l:prolong}
There exists $\varepsilon >0$ such that $[0,\varepsilon) \times \cyl_q \subseteq D_q$. In other words all arc-length parametrized normal geodesics $\gamma_\lambda(t)$ are well defined on the interval $[0,\varepsilon)$.
\end{lemma}
In the Riemannian case, the gradient of a function is a vector field that points in the direction of the greatest rate of increase of the function. The generalization to the (sub)-Riemannian setting is straightforward.
\begin{definition}\label{d:grad}
Let $f \in C^\infty(M)$. The \emph{horizontal gradient} $\grad(f) \in \Gamma(\distr)$ is defined by
\begin{equation}
df(X) = \metr(\grad(f),X), \qquad \forall X \in \Gamma(\distr).
\end{equation}
\end{definition}
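In terms of a local orthonormal frame $X_1,\ldots,X_k$ for $\distr$, this definition yields the explicit (standard) expression
\begin{equation}
\grad(f) = \sum_{i=1}^k X_i(f)\, X_i,
\end{equation}
since $\metr(\grad(f),X_j) = df(X_j) = X_j(f)$ for $j=1,\ldots,k$.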
Since, in the Riemannian case, it is the usual gradient, this notation will cause no confusion.
\subsection{Computations with frames}
If $E$ is a smooth vector bundle over $M$, the symbol $\Gamma(E)$ denotes the $C^\infty(M)$-module of smooth sections of $E$. Horizontal vector fields are then elements of $\Gamma(\distr)$.
In sub-Riemannian geometry, computations are most effectively done in terms of orthonormal frames (Riemannian normal coordinates are not available in general). Thus, let $X_1,\ldots,X_k$ be a (local) orthonormal frame for the sub-Riemannian structure. Moreover, consider a complement $X_{k+1},\ldots,X_n$, namely a set of local vector fields that completes $X_1,\ldots,X_k$ to a local frame for $TM$. Let $c_{ij}^\ell \in C^\infty(M)$ (the \emph{structural functions}) be defined by:
\begin{equation}
[X_i,X_j] = \sum_{\ell=1}^n c_{ij}^\ell X_\ell, \qquad i,j = 1,\ldots,n.
\end{equation}
Now define the functions $h_i : T^*M \to \R$ (that are linear on fibers) as:
\begin{equation}
h_i(\lambda) := \langle \lambda, X_i\rangle, \qquad i = 1,\ldots,n.
\end{equation}
We have
\begin{equation}
H = \frac{1}{2}\sum_{i=1}^k h_i^2, \qquad \vec{H} = \sum_{i=1}^k h_i \vec{h}_i,
\end{equation}
where $\vec{h}_i$ is the Hamiltonian vector field associated with $h_i$, namely $\sigma(\cdot,\vec{h}_i) = dh_i$. Indeed, for any fixed $q$, the restriction of $(h_1,\ldots,h_n) : T_{q}^*M \to \R^n$ gives coordinates on $T_{q}^*M$, associated with the choice of $X_1,\ldots,X_n$. In terms of these coordinates, the fibers of the unit cotangent bundle are
\begin{equation}
\cyl_q := T_{q}^*M \cap \cyl = \{(h_1,\ldots,h_k,h_{k+1},\ldots,h_n) \mid h_1^2+\ldots+ h_k^2 = 1\} \simeq \mathbb{S}^{k-1} \times \R^{n-k}.
\end{equation}
The last identification depends on the choice of the frame $X_1,\ldots,X_n$. Finally, for any normal geodesic $\gamma_\lambda(t)$
\begin{equation}
\dot{\gamma}_\lambda(t) = \pi_* \dot{\lambda}(t) = \pi_* \vec{H}(\lambda(t)) = \sum_{i=1}^k h_i(\lambda(t)) X_i,
\end{equation}
where we used the fact that $\pi_* \vec{h}_i = X_i$.
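For later computations it is useful to record the Poisson brackets of the functions $h_\alpha$; with the conventions above (consistent with the computation of $\dot{h}_i$ in the proof of Lemma~\ref{l:taylor}) one finds
\begin{equation}
\{h_\alpha, h_\beta\} = \langle \lambda, [X_\alpha, X_\beta]\rangle = \sum_{\gamma=1}^n c_{\alpha\beta}^\gamma h_\gamma, \qquad \alpha,\beta = 1,\ldots,n.
\end{equation}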
\subsection{A Taylor expansion with frames}
In this section we prove a Taylor expansion for smooth functions along normal geodesics. This formula will play a key role in the following. To keep a compact notation we often employ the \emph{generalized Einstein's convention}: repeated indices are summed, albeit on different ranges: Greek indices ($\alpha,\beta,\gamma,\dots$) from $1$ to $n$; Latin indices ($i,j,\ell,\dots$) from $1$ to $k$; barred Latin indices ($\bar{\imath},\bar{\jmath},\bar{\ell},\dots$) from $k+1$ to $n$.
\begin{lemma}\label{l:taylor}
Let $\phi \in C^\infty(M)$ and consider the geodesic $\gamma_\lambda(t)$, emanating from $q$. Then
\begin{equation}
\phi(\gamma_\lambda(t)) = \phi(q)+ t h_i X_i(\phi)(q) + \frac{1}{2}t^2 \left[h_j c_{ji}^\alpha h_\alpha X_i(\phi)(q) + h_i h_j X_j(X_i(\phi))(q)\right] + t^3 r_\lambda(t),
\end{equation}
where the initial covector $\lambda \in T_q^*M$ has coordinates $(h_1,\ldots,h_n)$ and $r_\lambda(t)$ is a remainder term. Moreover for any $q \in M$, there exists an $\varepsilon_0 >0$ and constants $A,B_{\bar\ell},C_{\bar\imath\bar\jmath} \geq 0$ such that, for any $\lambda \in \cyl$ with $\pi(\lambda) \in B(q,\varepsilon_0)$ (the metric ball with center $q$ and radius $\varepsilon_0$) we have
\begin{equation}
|r_\lambda(t)|\leq A + B_{\bar{\ell}} |h_{\bar\ell}| + C_{\bar\imath\bar\jmath} |h_{\bar\imath}| |h_{\bar\jmath}|, \qquad \forall t \leq \varepsilon_0.
\end{equation}
\end{lemma}
\begin{rmk}
In coordinates $\lambda = (h_1,\ldots,h_n)$ with $h_1^2+\ldots+h_k^2 = 1$. Thus, the estimate above shows how the remainder term depends on the ``unbounded'' coordinates $h_{k+1},\ldots,h_n$ of the initial covector.
\end{rmk}
\begin{proof}
The geodesic $\gamma_\lambda(t)$ is the projection of the integral curve $\lambda(t)$. Its initial covector $\lambda \in T_{q}^*M$ has coordinates $(h_1,\ldots,h_n) \in \R^n$. We drop the subscript $\lambda$, since the covector is fixed. As described above,
\begin{equation}\label{eq:firstder}
\frac{d}{d t}\phi(\gamma(t)) = \dot\gamma(t) ( \phi) = h_i(t) X_i(\phi)(\gamma(t)),
\end{equation}
where $h_i(t)$ is a shorthand for $h_i(\lambda(t))$. Similarly, for any function $g \in C^\infty(T^*M)$, we set $g(t)= g(\lambda(t))$. With this notation, Hamilton's equations are:
\begin{equation}
\dot{g}(t) = \{H,g\}(t),
\end{equation}
where $\{\cdot,\cdot\}$ denotes the Poisson bracket, and the dot the derivative w.r.t. $t$. For $h_i:T^*M \to \R$ we get
\begin{equation}
\dot h_i = \{H,h_i\} = h_j\{h_j,h_i\} = h_j c_{ji}^\alpha h_\alpha,
\end{equation}
where we suppressed the explicit evaluation at $t$. We apply the chain rule to Eq.~\eqref{eq:firstder} and we get
\begin{equation}\label{eq:seconder}
\frac{d^2}{d t^2}\phi(\gamma(t)) = \dot{h}_i X_i(\phi) + h_i h_j X_j(X_i(\phi)) = h_j c_{ji}^\alpha h_\alpha X_i(\phi) + h_i h_j X_j(X_i(\phi)).
\end{equation}
Evaluating at $t=0$ we get the second order term. Now let $t \leq \varepsilon_0$. The remainder of Taylor's expansion in Lagrange's form is:
\begin{equation}
r_\lambda(t) = \frac{1}{3!}\left.\frac{d^3}{d t^3}\right|_{t=t_*}\phi(\gamma(t)), \qquad t_* \in [0,t].
\end{equation}
To compute it we apply the chain rule to Eq.~\eqref{eq:seconder}. By Hamilton's equations $\dot{h}_\alpha = h_i c_{i\alpha}^\beta h_\beta$, we reduce it to a polynomial in $h_1(t),\ldots,h_n(t)$, structural functions $c_{\alpha\beta}^\gamma$ and their first derivatives $X_i(c_{\alpha\beta}^\gamma)$:
\begin{equation}
\begin{aligned}
\frac{d^3}{d t^3}\phi(\gamma(t))& = h_\ell c_{\ell j}^\beta h_\beta c_{ji}^\alpha h_\alpha X_i(\phi) + h_j h_\ell X_\ell(c_{ji}^\alpha) h_\alpha X_i(\phi) + h_j c_{ji}^\alpha h_\ell c_{\ell \alpha}^\beta h_\beta X_i(\phi) + h_j c_{ji}^\alpha h_\alpha h_\ell X_\ell( X_i(\phi))+ \\
& +h_\ell c_{\ell i}^\alpha h_\alpha h_j X_j(X_i(\phi)) + h_i h_\ell c_{\ell j}^\alpha h_\alpha X_j(X_i(\phi)) + h_i h_j h_\ell X_\ell(X_j(X_i(\phi))).
\end{aligned}
\end{equation}
We stress that everything on the r.h.s. is computed at $t$. Since $\lambda \in \cyl$ and $2H$ is a constant of the motion, each $|h_i(t)| \leq 1$ for $i=1,\ldots,k$. Fix $q \in M$, and consider the closed sub-Riemannian ball $\overline{B(q,2\varepsilon_0)}$, which is compact for sufficiently small $\varepsilon_0$. If $\lambda \in \cyl \cap \pi^{-1}(B(q,\varepsilon_0))$, the arc-length parametrized geodesic $\gamma_\lambda$ does not exit the compact set $\overline{B(q,2\varepsilon_0)}$ for $t \leq \varepsilon_0$. Hence the structural functions and their derivatives are bounded by their maxima on $\overline{B(q,2\varepsilon_0)}$. The only a priori uncontrolled terms are then the $h_{\bar\imath}(t)$, for $\bar\imath=k+1,\ldots,n$. Thus:
\begin{equation}\label{eq:remainder_t}
|r_\lambda(t)| \leq P + Q_{\bar{\ell}} |h_{\bar{\ell}}(t)| + R_{\bar{\imath}\bar{\jmath}} |h_{\bar{\imath}}(t)| |h_{\bar{\jmath}}(t)|, \qquad \forall t \leq \varepsilon_0,
\end{equation}
where $P,Q_{\bar{\ell}}, R_{\bar{\imath}\bar{\jmath}} \geq 0$ are constants. Gronwall's lemma shows that, for $t\leq \varepsilon_0$, we have
\begin{equation}
|h_{\bar\imath}(t)| \leq D_{\bar\imath} + E_{\bar\imath\bar\jmath} | h_{\bar{\jmath}}(0)|, \qquad \bar\imath = k+1,\ldots,n,
\end{equation}
for constants $D_{\bar\imath},E_{\bar\imath\bar\jmath} \geq 0$. By plugging this last equation into Eq.~\eqref{eq:remainder_t}, we obtain the result.
\end{proof}
\subsection*{Acknowledgments}
The authors are grateful to Fr\'ed\'eric Jean for his advice on non-equiregular structures, and in particular, for explaining how to derive Eq.~\eqref{Eqn:DistanceComp}. We also thank Andrei Agrachev, Gr\'egoire Charlot, Erlend Grong, and Jose Veloso for useful and stimulating discussions. We finally thank the Institut Henri Poincar\'e (Paris) for the hospitality during the Trimester ``Geometry, Analysis and Dynamics on sub-Riemannian manifolds,'' where most of this research has been carried out.
This research has been partially supported by the European Research Council, ERC StG 2009 ``GeCoMethods'', contract n. 239748, by the iCODE institute (research project of the Idex Paris-Saclay), by the SMAI project ``BOUM'', the Grant ANR-15-CE40-0018 of the ANR. This research benefited from the support of the ``FMJH Program Gaspard Monge in optimization and operation research'' and from the support to this program from EDF.
\bibliographystyle{abbrv}
% End of ``Intrinsic random walks and sub-Laplacians in sub-Riemannian geometry''
% arXiv: https://arxiv.org/abs/1503.00725
% Subjects: Differential Geometry (math.DG); Analysis of PDEs (math.AP); Optimization and Control (math.OC); Probability (math.PR)
% https://arxiv.org/abs/2008.02489
\title{On a minimax principle in spectral gaps}
\begin{abstract}
The minimax principle for eigenvalues in gaps of the essential spectrum in the form presented by Griesemer, Lewis, and Siedentop in [Doc. Math. 4 (1999), 275--283] is adapted to cover certain abstract perturbative settings with bounded or unbounded perturbations, in particular ones that are off-diagonal with respect to the spectral gap under consideration. This in part builds upon and extends the considerations in the author's appendix to [J. Spectr. Theory 10 (2020), 843--885]. Several monotonicity and continuity properties of eigenvalues in gaps of the essential spectrum are deduced, and the Stokes operator is revisited as an example.
\end{abstract}
\section{Introduction and main result}\label{sec:intro}
The standard Courant minimax values $\lambda_k(A)$ of a lower semibounded operator $A$ on a Hilbert space ${\mathcal H}$ are given by
\begin{equation*}
\lambda_k(A)
=
\inf_{\substack{{\mathfrak M}\subset\Dom(A)\\ \dim{\mathfrak M}=k}} \sup_{\substack{x\in{\mathfrak M}\\ \norm{x}=1}} \scprod{ x , Ax }
=
\inf_{\substack{{\mathfrak M}\subset\Dom(|A|^{1/2})\\ \dim{\mathfrak M}=k}} \sup_{\substack{x\in{\mathfrak M}\\ \norm{x}=1}} {\mathfrak a}[x,x]
\end{equation*}
for $k\in\mathbb{N}$ with $k\le\dim{\mathcal H}$, see, e.g.,~\cite[Theorem~12.1]{LL01} and also~\cite[Section~12.1 and Exercise~12.4.2]{Schm12}.
Here, $\langle\cdot,\cdot\rangle$ denotes the inner product of ${\mathcal H}$, and ${\mathfrak a}$ with
${\mathfrak a}[x,x] = \scprod{ \abs{A}^{1/2}x , \sign(A)\abs{A}^{1/2}x }$ for $x\in\Dom(\abs{A}^{1/2})$ is the form associated with $A$.
The above minimax values have proved to be a powerful description of the eigenvalues below the essential spectrum of $A$; in
fact, they agree with these eigenvalues in nondecreasing order counting multiplicities. A standard application in this context is
that these eigenvalues exhibit a monotonicity with respect to the operator: for two self-adjoint operators $A$ and $B$ with
$A\le B$ in the sense of quadratic forms one has $\lambda_k(A) \le \lambda_k(B)$ for all $k$, see,
e.g.,~\cite[Corollary~12.3]{Schm12}.
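As a classical illustration, consider the harmonic oscillator $A = -\tfrac{d^2}{dx^2} + x^2$ on $L^2(\mathbb{R})$, which is lower semibounded with purely discrete spectrum; here the minimax values reproduce the well-known eigenvalues
\begin{equation*}
\lambda_k(A) = 2k - 1
,\qquad
k \in \mathbb{N}
.
\end{equation*}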
Matters get, however, much more complicated when eigenvalues in a gap of the essential spectrum are considered. If $A_+$ is the
(lower semibounded) part of $A$ with spectrum in an interval of the form $(\gamma,\infty)$, $\gamma\in\mathbb{R}$, then the minimax
values for $A_+$ still describe the eigenvalues of $A_+$ below its essential spectrum and thus the eigenvalues of $A$ in
$(\gamma, \infty)$ below the essential spectrum of $A$ above $\gamma$. However, the subspaces over which the corresponding
infimum is taken are chosen within the spectral subspace for $A$ associated with the interval $(\gamma,\infty)$ and therefore
usually depend on the operator itself rather than just its domain. This makes it difficult to compare minimax values in spectral
gaps of two different operators $A$ and $B$, even if their domains agree.
In~\cite{GLS99}, Griesemer, Lewis, and Siedentop devised an abstract minimax principle for eigenvalues in spectral gaps that
allows one to overcome these problems. However, the corresponding hypotheses seem to be hard to verify on an abstract level,
cf.~Remark~\ref{rem:neg}\,(2) below. In the particular situation of bounded additive perturbations, the present author has
adapted this abstract minimax principle in the appendix to~\cite{NSTTV18} with hypotheses that can in some cases be verified
explicitly by means of the Davis-Kahan $\sin2\Theta$ theorem from~\cite{DK70} and variants thereof. This has been successfully
applied in~\cite{NSTTV18} to study lower bounds on the movement of eigenvalues in gaps of the essential spectrum and of edges of
the essential spectrum. In the present note, the considerations from~\cite[Appendix~A]{NSTTV18} are supplemented and extended to
cover also certain unbounded perturbations, in particular ones that are off-diagonal with respect to the spectral gap under
consideration. It should be mentioned that some of the results discussed here might also be obtained with the alternative
approaches from~\cite{DES00, DES06, MM15, LS16}. However, the present work focuses on~\cite{GLS99} as a starting point since the
techniques employed to apply that abstract minimax principle promise to be of a broader interest.
\subsection*{Main results}
In order to formulate our main results, it is convenient to fix the following notational setup in the case where $\gamma = 0$;
the case of general $\gamma \in \mathbb{R}$ can of course always be reduced to this situation by spectral shift,
cf.~Remark~\ref{rem:spectralShift} and also the proofs of Proposition~\ref{prop:boundedPert} below.
\begin{hypothesis}\label{hyp:minimax}
Let $A$ be a self-adjoint operator on a Hilbert space. Denote the spectral projectors for $A$ associated with the intervals
$(0,\infty)$ and $(-\infty,0]$ by $P_+$ and $P_-$, respectively, that is,
\begin{equation*}
P_+ := \mathsf{E}_A\bigl((0,\infty)\bigr)
,\quad
P_- := I-P_+
,
\end{equation*}
and let
\begin{equation*}
{\mathcal D}_\pm := \Ran P_\pm \cap \Dom(A)
,\quad
{\mathfrak D}_\pm := \Ran P_\pm \cap \Dom(\abs{A}^{1/2})
.
\end{equation*}
Moreover, let $B$ be another self-adjoint operator on the same Hilbert space with analogously defined spectral projections
\begin{equation*}
Q_+ := \mathsf{E}_B\bigl((0,\infty)\bigr)
,\quad
Q_- := I - Q_+
,
\end{equation*}
and denote by ${\mathfrak b}$ the form associated with $B$, that is,
\begin{equation*}
{\mathfrak b}[x,y] = \scprod{ \abs{B}^{1/2}x , \sign(B)\abs{B}^{1/2}y }
\end{equation*}
for $x,y\in\Dom({\mathfrak b}) = \Dom(\abs{B}^{1/2})$.
\end{hypothesis}
Here, $\mathsf{E}_A$ and $\mathsf{E}_B$ stand for the projection-valued spectral measures for the operators $A$ and $B$, respectively, and
$\Ran P_\pm$ denotes the range of $P_\pm$. We have also used the notation $I$ for the identity operator.
Denoting the form associated with $A$ by ${\mathfrak a}$, the minimax values of the positive part $A|_{\Ran P_+}$ of $A$ can clearly be
written as
\begin{equation*}
\lambda_k(A|_{\Ran P_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathcal D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}} \scprod{ x , Ax }
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak a}[x,x]
\end{equation*}
for $k\in\mathbb{N}$ with $k \le \dim \Ran P_+$. In our main results below we give conditions on $B$ under which the minimax values for
the positive part $B|_{\Ran Q_+}$ of $B$ admit the same representations with $\scprod{x , Ax }$ and ${\mathfrak a}[x , x]$ replaced by
$\scprod{x , Bx}$ and ${\mathfrak b}[x , x]$, respectively, but with the infima taken over the same respective families of subspaces as for
$A$. It is natural to consider this in a perturbative framework where $B$ is obtained by an operator or form perturbation of $A$
and, thus, one has $\Dom(A) = \Dom(B)$ and/or $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$.
Four results in this direction are presented here, each addressing a different situation. We first treat the case of operator
perturbations and start with the direct extension of~\cite[Theorem~A.2]{NSTTV18} to infinitesimal perturbations. Recall that an
operator $V$ is called $A$-bounded with $A$-bound $b_* \ge 0$ if $\Dom(V) \supset \Dom(A)$ and for all $b > b_*$ there is some
$a \ge 0$ with
\begin{equation*}
\norm{ Vx }
\le
a\norm{x} + b\norm{Ax}
\quad\text{ for all }\
x \in \Dom(A)
.
\end{equation*}
If $b_* = 0$, then $V$ is called infinitesimal with respect to $A$.
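This can be illustrated by a hedged finite-dimensional sketch (the diagonal operators below are illustrative choices, not taken from the text): for $A$ with diagonal entries $k^2$ and $V$ with diagonal entries $k$, the operator $V$ is infinitesimal with respect to $A$, since $k \le bk^2 + 1/(4b)$ for every $b > 0$ by the AM-GM inequality.

```python
# Illustrative sketch (assumed example, not from the text): A = diag(k^2),
# V = diag(k), k = 1..N. Then V is infinitesimal w.r.t. A: for every b > 0
# one may take a = 1/(4b), since k <= b*k**2 + 1/(4*b) entrywise, which for
# diagonal operators yields ||Vx|| <= a*||x|| + b*||Ax|| for all x.
import math

def norm(v):
    return math.sqrt(sum(t * t for t in v))

N = 50
A = [k * k for k in range(1, N + 1)]   # diagonal entries of A
V = [k for k in range(1, N + 1)]       # diagonal entries of V

for b in (1.0, 0.1, 0.01):
    a = 1.0 / (4.0 * b)
    # entrywise bound k <= a + b*k^2 implies the operator bound below
    assert all(V[i] <= a + b * A[i] + 1e-12 for i in range(N))
    # spot-check the resulting relative bound on a few vectors
    for x in ([1.0] * N, [1.0 / (k + 1) for k in range(N)]):
        Vx = [V[i] * x[i] for i in range(N)]
        Ax = [A[i] * x[i] for i in range(N)]
        assert norm(Vx) <= a * norm(x) + b * norm(Ax) + 1e-9

print("relative bounds verified")
```

Note that $a = 1/(4b)$ blows up as $b \to 0$; this is the typical trade-off for infinitesimal perturbations.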
\begin{theorem}\label{thm:genOpInfinitesimal}
Assume Hypothesis~\ref{hyp:minimax}. Suppose, in addition, that $B$ is of the form $B = A + V$ with some symmetric operator $V$
that is infinitesimal with respect to $A$. Furthermore, suppose that $\norm{P_+Q_-} < 1$ and that
\begin{equation*}
\scprod{ x , Bx } \le 0 \quad\text{ for all }\ x \in {\mathcal D}_-.
\end{equation*}
Then,
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathcal D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}} \scprod{ x , Bx }
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak b}[x,x]
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran P_+$.
\end{theorem}
Two remarks regarding Theorem~\ref{thm:genOpInfinitesimal} are in order:
(1)
certain perturbations $V$ that are not infinitesimal with respect to $A$ can also be considered here, but at the cost of a
stronger assumption on $\norm{P_+Q_-}$, see Remark~\ref{rem:relBoundK} below;
(2)
the condition $\norm{P_+Q_-} < 1$ is satisfied if the stronger inequality $\norm{P_+-Q_+} < 1$ holds. In the latter case, the
subspaces $\Ran P_+$ and $\Ran Q_+$ have the same dimension, that is, $\dim \Ran P_+ = \dim \Ran Q_+$, see
Remark~\ref{rem:PQbij}\,(a) below.
The stronger condition~$\norm{P_+-Q_+} < 1$ just mentioned in fact also opens the way to employ a different approach than the one
used to prove Theorem~\ref{thm:genOpInfinitesimal}. This alternative approach has previously been used in the context of block
diagonalization of operators and forms, see Section~\ref{sec:blockDiag} below, and is particularly attractive if the unperturbed
operator $A$ is semibounded.
\begin{theorem}\label{thm:genSemibounded}
Assume Hypothesis~\ref{hyp:minimax}. Suppose, in addition, that $A$ is semibounded and that $\norm{P_+ - Q_+} < 1$.
If $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$ and ${\mathfrak b}[ x , x ] \le 0$ for all $x \in {\mathfrak D}_-$, then
\begin{equation}\label{eq:genSemibounded:form}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak b}[x,x]
\end{equation}
for all $k \le \dim\Ran P_+ = \dim\Ran Q_+$. If even $\Dom(A) = \Dom(B)$ and $\scprod{ x , Bx } \le 0$ for all $x \in {\mathcal D}_-$,
then also
\begin{equation}\label{eq:genSemibounded:op}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathcal D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}} \scprod{ x , Bx }
\end{equation}
for all $k \le \dim\Ran P_+ = \dim\Ran Q_+$.
\end{theorem}
It should be emphasized that the conditions $\Dom(A) = \Dom(B)$ and $\scprod{ x , Bx } \le 0$ for all $x \in {\mathcal D}_-$ in
Theorem~\ref{thm:genSemibounded} indeed imply that one has also $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$ and
${\mathfrak b}[ x , x ] \le 0$ for all $x \in {\mathfrak D}_-$, see Lemma~\ref{lem:GLS} below. Note also that in contrast to
Theorem~\ref{thm:genOpInfinitesimal}, Theorem~\ref{thm:genSemibounded} makes no assumptions on how the operator $B$ is obtained
from $A$. The latter will, however, be relevant when the hypotheses of Theorem~\ref{thm:genSemibounded} are to be verified in
concrete situations.
The condition $\scprod{ x , Bx } \le 0$ for all $x \in {\mathcal D}_-$ plays an important role in both
Theorems~\ref{thm:genOpInfinitesimal} and~\ref{thm:genSemibounded}. In the case where $B = A + V$ with some $A$-bounded symmetric
operator $V$ as in Theorem~\ref{thm:genOpInfinitesimal}, this condition is automatically satisfied if $\scprod{ x , Vx } \le 0$
for all $x \in {\mathcal D}_-$ since $\scprod{ x , Ax } \le 0$ holds for all $x \in {\mathcal D}_-$ by definition. A particular instance of such
perturbations $V$ is given by the so-called~\emph{off-diagonal} perturbations with respect to the decomposition $\Ran P_+ \oplus \Ran P_-$,
in which case the condition $\norm{P_+-Q_+} < 1$ can also be verified efficiently. In comparison with
Theorem~\ref{thm:genOpInfinitesimal}, we may even relax the assumption on the $A$-bound of $V$ here.
\begin{theorem}\label{thm:offdiagOp}
Assume Hypothesis~\ref{hyp:minimax}. Suppose, in addition, that $B$ has the form $B = A + V$ with some symmetric $A$-bounded
operator $V$ with $A$-bound smaller than $1$ and which is off-diagonal on $\Dom(A)$ with respect to the decomposition
$\Ran P_+ \oplus \Ran P_-$, that is,
\begin{equation*}
P_-VP_- x = 0 = P_+VP_+ x \quad\text{ for all }\ x \in \Dom(A).
\end{equation*}
Then, one has $\dim\Ran P_+ = \dim\Ran Q_+$ and
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathcal D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}} \scprod{ x , Bx }
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak b}[x,x]
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran Q_+$.
\end{theorem}
The last theorem can, to some extent, also be formulated for off-diagonal form perturbations, at least in the semibounded setting.
The latter restriction is commented on in Section~\ref{sec:blockDiag} below.
\begin{theorem}\label{thm:offdiagForm}
Assume Hypothesis~\ref{hyp:minimax}. Suppose, in addition, that $B$ is semibounded and that its form ${\mathfrak b}$ is given by
${\mathfrak b} = {\mathfrak a} + {\mathfrak v}$, where ${\mathfrak a}$ is the form associated with $A$ and ${\mathfrak v}$ is a symmetric sesquilinear form satisfying
\begin{equation*}
{\mathfrak v}[ P_+x , P_+y ]
=
0
=
{\mathfrak v}[ P_-x , P_-y ]
\quad\text{ for all }\
x,y \in \Dom[{\mathfrak a}] \subset \Dom[{\mathfrak v}]
\end{equation*}
and
\begin{equation}\label{eq:formRelBound}
\abs{ {\mathfrak v}[ x , x ] }
\le
a\norm{x}^2 + b\abs{{\mathfrak a}[ x , x ]}
\quad\text{ for all }\
x \in \Dom(\abs{A}^{1/2}) = \Dom[{\mathfrak a}]
\end{equation}
with some constants $a,b \ge 0$.
Then, one has $\dim\Ran P_+ = \dim\Ran Q_+$ and
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim{\mathfrak M}_+=k}} \sup_{\substack{x\in{\mathfrak M}_+ \oplus {\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak b}[x,x]
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran Q_+$.
\end{theorem}
The semiboundedness of $B$ in Theorem~\ref{thm:offdiagForm} forces $A$ to be semibounded as well, see the proof of
Theorem~\ref{thm:offdiagForm} below. In turn, it is natural to suppose that $A$ is semibounded and then to guarantee the
semiboundedness of $B$ via the well-known KLMN theorem by ensuring~\eqref{eq:formRelBound} with $b < 1$. In this regard,
Theorem~\ref{thm:offdiagForm} can be interpreted as a particular case of Theorem~\ref{thm:genSemibounded} with
$\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$, in which the remaining hypotheses are automatically satisfied due to the structure
of the perturbation.
The rest of this note is organized as follows. In Section~\ref{sec:applications} we discuss applications of the main theorems and
revisit the Stokes operator as an example in the framework of Theorem~\ref{thm:offdiagForm}.
Section~\ref{sec:abstrMinimax} is devoted to an abstract minimax principle based on~\cite{GLS99}.
Two approaches are then used to verify the hypotheses of this abstract minimax principle, the~\emph{graph norm approach} and
the~\emph{block diagonalization approach}, respectively, which are discussed separately in Sections~\ref{sec:graphNorm}
and~\ref{sec:blockDiag} below.
Theorem~\ref{thm:genOpInfinitesimal} is proved in Section~\ref{sec:graphNorm}, which is based on the author's appendix
to~\cite{NSTTV18} and extends the corresponding considerations to certain unbounded perturbations $V$.
Theorems~\ref{thm:genSemibounded}--\ref{thm:offdiagForm} are proved in
Section~\ref{sec:blockDiag}, which builds upon recent developments on block diagonalization of operators and forms
from~\cite{MSS16} and~\cite{GKMSV17}, respectively.
Finally, Appendix~\ref{sec:heinz} provides some consequences of the well-known Heinz inequality that are used at various places in
this note and are probably folklore.
\section{Applications and examples}\label{sec:applications}
In this section, we use the main results from Section~\ref{sec:intro} to prove monotonicity and continuity properties of minimax
values in gaps of the essential spectrum in various situations and also revisit the well-known Stokes operator in the framework
of Theorem~\ref{thm:offdiagForm} as an example. We first consider the situation of indefinite or semidefinite bounded
perturbations, which has essentially been discussed in a slightly different form in~\cite{NSTTV18}.
For a bounded self-adjoint operator $V$ we define bounded nonnegative operators $V^{(p)}$ and $V^{(n)}$ with
$V = V^{(p)} - V^{(n)}$ via functional calculus by
\begin{equation}\label{eq:defVpVn}
V^{(p)} := (1 + \sign(V))V / 2
,\quad
V^{(n)} := (\sign(V) - 1)V / 2
.
\end{equation}
We clearly have $\norm{ V^{(p)} } \le \norm{V}$ and $\norm{ V^{(n)} } \le \norm{V}$.
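A minimal sketch of the decomposition~\eqref{eq:defVpVn} for a $2\times 2$ symmetric matrix whose functional calculus is explicit (an illustrative example, not from the text): $V = \bigl(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\bigr)$ has eigenvalues $\pm 1$, so $\sign(V) = V$ and $V^2 = I$.

```python
# Sketch of V = V^(p) - V^(n) from the definition above, for the assumed example
# V = [[0, 1], [1, 0]]: its eigenvalues are +1 and -1, so sign(V) = V and
# V @ V = I, which makes (1 + sign(V))V/2 and (sign(V) - 1)V/2 computable by hand.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lincomb(s, X, t, Y):
    return [[s * X[i][j] + t * Y[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
V = [[0.0, 1.0], [1.0, 0.0]]
signV = V  # since the eigenvalues of V are exactly +1 and -1

Vp = matmul(lincomb(0.5, I, 0.5, signV), V)   # (1 + sign(V)) V / 2
Vn = matmul(lincomb(0.5, signV, -0.5, I), V)  # (sign(V) - 1) V / 2

assert lincomb(1.0, Vp, -1.0, Vn) == V        # V = V^(p) - V^(n)
assert Vp == [[0.5, 0.5], [0.5, 0.5]]         # nonnegative, eigenvalues 0 and 1
assert Vn == [[0.5, -0.5], [-0.5, 0.5]]       # nonnegative, eigenvalues 0 and 1
print("decomposition verified")
```

Both parts have norm $1 = \norm{V}$ here, so the bounds $\norm{V^{(p)}}, \norm{V^{(n)}} \le \norm{V}$ are attained.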
\begin{proposition}\label{prop:boundedPert}
Let the finite interval $(c,d)$ belong to the resolvent set of the self-adjoint operator $A$, and let $V$ be a bounded
self-adjoint operator on the same Hilbert space. Define ${\mathcal D}_+ := \Ran \mathsf{E}_A([d,\infty)) \cap \Dom(A)$ and
${\mathcal D}_- := \Ran \mathsf{E}_A((-\infty,c]) \cap \Dom(A)$.
If $\norm{V^{(p)}} + \norm{V^{(n)}} < d - c$ with $V^{(p)}$ and $V^{(n)}$ as in~\eqref{eq:defVpVn}, then the interval
$(c+\norm{V^{(p)}} , d-\norm{V^{(n)}})$ belongs to the resolvent set of the operator $A + V$, and one has
$\dim\Ran \mathsf{E}_A([d,\infty)) = \dim \Ran\mathsf{E}_{A+V}([d-\norm{V^{(n)}},\infty))$ and
\begin{equation*}
\lambda_k\bigl ((A+V)|_{\Ran\mathsf{E}_{A+V}([d-\norm{V^{(n)}},\infty))} \bigr)
=
\inf_{\substack{{\mathfrak M}_+ \subset {\mathcal D}_+\\ \dim{\mathfrak M}_+ = k}} \sup_{\substack{x \in {\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}}
\scprod{ x , (A+V)x }
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim \Ran\mathsf{E}_{A+V}([d-\norm{V^{(n)}},\infty))$.
\end{proposition}
\begin{remark}
A corresponding representation of the minimax values in terms of the form associated with $A+V$ as in
Theorems~\ref{thm:genOpInfinitesimal}--\ref{thm:offdiagOp} holds here as well. However, for the sake of simplicity and since
this is not needed in Corollaries~\ref{cor:monotonicity} and~\ref{cor:continuity} below, this has not been formulated in
Proposition~\ref{prop:boundedPert}.
\end{remark}
The above proposition includes the particular cases where $V$ satisfies $\norm{V} < (d - c)/2$ and where $V$ is semidefinite with
$\norm{V} < d - c$, which essentially have been discussed in the proofs of Theorems~3.14 and~3.15 in~\cite{NSTTV18}; cf.~also the
discussion after Corollaries~\ref{cor:monotonicity} and~\ref{cor:continuity} below. However, Proposition~\ref{prop:boundedPert}
also allows certain indefinite perturbations $V$ with $(d - c)/2 \le \norm{V} < d - c$ that were not covered before.
We discuss here two proofs of Proposition~\ref{prop:boundedPert}, one based on Theorem~\ref{thm:genOpInfinitesimal} that is close
to the proofs of Theorems~3.14 and~3.15 in~\cite{NSTTV18} and the other one based on Theorem~\ref{thm:offdiagOp}. Both emphasize
different aspects of how to deal with the perturbation $V$.
\begin{proof}[Proof of Proposition~\ref{prop:boundedPert} based on Theorem~\ref{thm:genOpInfinitesimal}]
By Proposition~2.1 in~\cite{Seel20}, the interval $(c + \norm{V^{(p)}} , d - \norm{V^{(n)}})$ belongs to the resolvent set of
the operator $A + V$.
Pick $\gamma \in (c + \norm{V^{(p)}} , d - \norm{V^{(n)}})$. We then have $\mathsf{E}_{A-\gamma}((0,\infty)) = \mathsf{E}_A([d,\infty))$ as
well as $\mathsf{E}_{A-\gamma}((-\infty,0]) = \mathsf{E}_A((-\infty,c])$.
For $x \in {\mathcal D}_-$ we clearly have
\begin{align*}
\scprod{ x , (A+V-\gamma)x }
&=
\scprod{ x , (A-\gamma)x } + \scprod{ x , V^{(p)}x } - \scprod{ x , V^{(n)}x }\\
&\le
(c - \gamma + \norm{V^{(p)}}) \norm{x}^2
<
0
.
\end{align*}
Moreover, with $P_+ := \mathsf{E}_A([d,\infty))$ and $Q_+ := \mathsf{E}_{A+V}([d-\norm{V^{(n)}},\infty))$, the variant of the Davis-Kahan
$\sin2\Theta$ theorem in~\cite[Theorem~1.1]{Seel20} implies that
\begin{equation*}
\norm{ P_+ - Q_+ }
\le
\sin\Bigl( \frac{1}{2} \arcsin \frac{\norm{V^{(p)}} + \norm{V^{(n)}}}{d-c} \Bigr)
<
\frac{\sqrt{2}}{2}
<
1
.
\end{equation*}
Taking into account that $\mathsf{E}_{A+V-\gamma}((0,\infty)) = Q_+$ and
\begin{equation*}
\lambda_k((A+V-\gamma)|_{\Ran Q_+}) = \lambda_k((A+V)|_{\Ran Q_+}) - \gamma
\end{equation*}
for all $k \le \dim\Ran Q_+$, the claim now follows by applying Theorem~\ref{thm:genOpInfinitesimal}; cf.~also
Remark~\ref{rem:PQbij}\,(a) below.
\end{proof}%
\begin{proof}[Proof of Proposition~\ref{prop:boundedPert} based on Theorem~\ref{thm:offdiagOp}]
As in the first proof, the interval $(c+\norm{V^{(p)}} , d-\norm{V^{(n)}})$ belongs to the resolvent set of the operator $A+V$.
Pick again $\gamma \in (c+\norm{V^{(p)}} , d-\norm{V^{(n)}})$.
Let $A_+ := A|_{\Ran\mathsf{E}_A([d,\infty))}$ and $A_- := A|_{\Ran\mathsf{E}_A((-\infty,c])}$ denote the parts of $A$ associated with
$\Ran \mathsf{E}_{A}([d,\infty))$ and $\Ran \mathsf{E}_A((-\infty,c])$, respectively. Moreover, for $\bullet \in \{p,n\}$, decompose
$V^{(\bullet)}$ as
\begin{equation*}
V^{(\bullet)}
=
V_{\text{diag}}^{(\bullet)} + V_{\text{off}}^{(\bullet)}
,
\end{equation*}
where $V_{\text{diag}}^{(\bullet)} = V_+^{(\bullet)} \oplus V_-^{(\bullet)}$ is the diagonal part of $V^{(\bullet)}$ and
$V_{\text{off}}^{(\bullet)}$ is the off-diagonal part of $V^{(\bullet)}$ with respect to
$\Ran \mathsf{E}_A([d,\infty)) \oplus \Ran \mathsf{E}_A((-\infty,c])$. We clearly have $V_\pm^{(\bullet)} \ge 0$ and
$\norm{V_\pm^{(\bullet)}} \le \norm{V^{(\bullet)}}$. Thus,
\begin{equation*}
A_- + V_-^{(p)} - V_-^{(n)}
\le
c + \norm{V^{(p)}}
<
\gamma
<
d - \norm{V^{(n)}}
\le
A_+ + V_+^{(p)} - V_+^{(n)}
,
\end{equation*}
so that
\begin{equation*}
\mathsf{E}_A([d,\infty))
=
\mathsf{E}_{A + V_{\text{diag}}^{(p)} - V_{\text{diag}}^{(n)}}((\gamma,\infty))
\end{equation*}
and
\begin{equation*}
\mathsf{E}_A((-\infty,c])
=
\mathsf{E}_{A + V_{\text{diag}}^{(p)} - V_{\text{diag}}^{(n)}}((-\infty,\gamma])
,
\end{equation*}
cf.~the proof of~\cite[Proposition~2.1]{Seel19}. Taking into account that $V_\text{off}^{(p)} - V_\text{off}^{(n)}$ is
off-diagonal with respect to $\Ran \mathsf{E}_A([d,\infty)) \oplus \Ran \mathsf{E}_A((-\infty,c])$ and that
$A+V = (A + V_{\text{diag}}^{(p)} - V_{\text{diag}}^{(n)}) + V_\text{off}^{(p)} - V_\text{off}^{(n)}$, the claim now follows
from Theorem~\ref{thm:offdiagOp} via a spectral shift by $\gamma$ as in the first proof.
\end{proof}%
As corollaries to Proposition~\ref{prop:boundedPert}, we obtain the following monotonicity and continuity statements for the
minimax values in gaps of the essential spectrum.
\begin{corollary}[{cf.~\cite[Theorem~3.15\,(2) and Theorem~3.14]{NSTTV18}}]\label{cor:monotonicity}
Let $A$ be as in Proposition~\ref{prop:boundedPert}, and let $V_0$ and $V_1$ be bounded self-adjoint operators on the same
Hilbert space satisfying $\max\{ \norm{V_0^{(p)}} + \norm{V_0^{(n)}} , \norm{V_1^{(p)}} + \norm{V_1^{(n)}} \} < d - c$.
If, in addition, $V_0 \le V_1$, then
\begin{equation*}
\lambda_k\bigl( (A+V_0)|_{\Ran\mathsf{E}_{A+V_0}([d-\norm{V_0^{(n)}},\infty))} \bigr)
\le
\lambda_k\bigl( (A+V_1)|_{\Ran\mathsf{E}_{A+V_1}([d-\norm{V_1^{(n)}},\infty))} \bigr)
\end{equation*}
for $k \le \dim\Ran\mathsf{E}_A([d,\infty)) = \dim\Ran\mathsf{E}_{A+V_j}([d-\norm{V_j^{(n)}},\infty))$, $j\in\{0,1\}$.
\end{corollary}
\begin{corollary}\label{cor:continuity}
Let $A$ and $V$ be as in Proposition~\ref{prop:boundedPert}. Then, the interval $(c+\norm{V^{(p)}} , d-\norm{V^{(n)}})$ belongs
to the resolvent set of every $A+tV$, $t \in [0,1]$, and for each
$k \le \dim \Ran \mathsf{E}_A([d,\infty)) = \dim \Ran \mathsf{E}_{A+tV}([d-t\norm{V^{(n)}},\infty))$, $t \in [0,1]$, the mapping
\begin{equation*}
[0,1]
\ni
t
\mapsto
\lambda_k((A+tV)|_{\Ran\mathsf{E}_{A+tV}([d-t\norm{V^{(n)}},\infty))})
\end{equation*}
is Lipschitz continuous with Lipschitz constant $\norm{V}$.
\end{corollary}
\begin{proof}
Taking into account that
\begin{equation*}
\scprod{ x , (A+sV)x } - \abs{t-s}\norm{V}
\le
\scprod{ x , (A+tV)x }
\le
\scprod{ x , (A+sV)x } + \abs{t-s}\norm{V}
\end{equation*}
for all $x \in \Dom(A)$, the claim follows immediately from Proposition~\ref{prop:boundedPert}.
\end{proof}%
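The Lipschitz bound of Corollary~\ref{cor:continuity} can be checked numerically in a hedged $2\times 2$ model (an illustrative finite-dimensional example, not from the text), where it reduces to a Weyl-type inequality for eigenvalues of symmetric matrices.

```python
# Assumed 2x2 example: A = diag(d, c) with gap (c, d), V a bounded symmetric
# perturbation. The top eigenvalue of A + tV (the k = 1 minimax value above the
# gap) is Lipschitz in t with constant ||V||, as in the corollary above.
import math

def top_eig(M):
    # largest eigenvalue of the symmetric 2x2 matrix [[p, q], [q, r]]
    p, q, r = M[0][0], M[0][1], M[1][1]
    return (p + r) / 2 + math.sqrt(((p - r) / 2) ** 2 + q ** 2)

c, d = 0.0, 4.0                       # spectral gap of A = diag(d, c)
A = [[d, 0.0], [0.0, c]]
V = [[0.5, 1.0], [1.0, -0.5]]         # bounded symmetric perturbation

# ||V|| for a symmetric 2x2 matrix is its largest absolute eigenvalue
lam_hi = top_eig(V)
lam_lo = V[0][0] + V[1][1] - lam_hi   # second eigenvalue via the trace
norm_V = max(abs(lam_hi), abs(lam_lo))

def lam(t):
    M = [[A[i][j] + t * V[i][j] for j in range(2)] for i in range(2)]
    return top_eig(M)

ts = [k / 20.0 for k in range(21)]    # parameters t, s in [0, 1]
for s in ts:
    for t in ts:
        assert abs(lam(t) - lam(s)) <= norm_V * abs(t - s) + 1e-12

print("Lipschitz bound verified")
```

Here $\norm{V^{(p)}} + \norm{V^{(n)}} = 2\sqrt{1.25} < d - c = 4$, so the hypothesis of Proposition~\ref{prop:boundedPert} is satisfied for every $t \in [0,1]$.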
It should again be mentioned that the above statements include the particular cases where the norm of the perturbations is less
than $(d - c)/2$ or where the perturbations are semidefinite with a norm less than $d - c$. These cases have essentially been
discussed in~\cite{NSTTV18}. There, especially lower bounds on the movement of eigenvalues in gaps of the essential spectrum
under certain conditions and the behaviour of edges of the essential spectrum have been studied. However, since this is not the
main focus of the present note, this is not pursued further here.
The second proof of Proposition~\ref{prop:boundedPert} discussed above is flexible enough to also handle unbounded
perturbations that are sufficiently small in a certain sense, at least in the semibounded setting. This is demonstrated in the
following result for the case where $A$ is lower semibounded.
\begin{proposition}\label{prop:unboundedPert}
Let $A$, $(c,d)$, and ${\mathcal D}_\pm$ be as in Proposition~\ref{prop:boundedPert}, and suppose, in addition, that $A$ is lower
semibounded. Let $V$ be a symmetric operator that is $A$-bounded with $A$-bound smaller than $1$. Moreover, suppose that there
are constants $a, b \ge 0$, $b < 1$, with
\begin{equation*}
\abs{ \scprod{ x , Vx } }
\le
a \norm{x}^2 + b \scprod{ x , Ax }
\quad\text{ for all }\
x \in \Dom(A)
\end{equation*}
and
\begin{equation}\label{eq:unboundedPert}
2a + b(c+d) < d - c.
\end{equation}
Then, the interval $(a+(1+b)c , (1-b)d-a)$ belongs to the resolvent set of $A+V$, and one has
$\dim \Ran \mathsf{E}_A( [d,\infty) ) = \dim \Ran \mathsf{E}_{A+V}( [(1-b)d-a,\infty) )$ and
\begin{equation*}
\lambda_k((A+V)|_{\Ran\mathsf{E}_{A+V}([(1-b)d-a,\infty))})
=
\inf_{\substack{{\mathfrak M}_+ \subset {\mathcal D}_+\\ \dim{\mathfrak M}_+ = k}} \sup_{\substack{x\in{\mathfrak M}_+\oplus{\mathcal D}_-\\ \norm{x}=1}}
\scprod{ x , (A+V)x }
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran \mathsf{E}_{A+V}([(1-b)d-a,\infty))$.
\end{proposition}
\begin{proof}
For $x \in \Dom(A) = \Dom(A+V)$, we clearly have
\begin{equation*}
(1-b)\scprod{ x , Ax } - a\norm{x}^2
\le
\scprod{ x , (A+V)x }
\le
(1+b)\scprod{ x , Ax } + a\norm{x}^2
.
\end{equation*}
According to~\eqref{eq:unboundedPert}, we may pick $\gamma \in \mathbb{R}$ satisfying the two-sided inequality
$a + (1+b)c < \gamma < (1-b)d - a$. Let $A_\pm$ be as in the second proof of Proposition~\ref{prop:boundedPert}, and again
decompose the perturbation $V$ as $V = V_\text{diag} + V_\text{off}$ with diagonal part $V_\text{diag} = V_+ \oplus V_-$
and off-diagonal part $V_\text{off}$. The above then gives
\begin{equation*}
A_- + V_-
\le
a + (1+b)c
<
\gamma
<
(1-b)d - a
\le
A_+ + V_+
,
\end{equation*}
so that $\mathsf{E}_A([d,\infty)) = \mathsf{E}_{A+V_\text{diag}}((\gamma,\infty)) = \mathsf{E}_{A+V_\text{diag}}([(1-b)d-a,\infty))$ and
$\mathsf{E}_A((-\infty,c]) = \mathsf{E}_{A+V_\text{diag}}((-\infty,\gamma])$, and the interval $(a + (1+b)c , (1-b)d - a)$ belongs to the
resolvent set of $A+V_\text{diag}$. By~\cite[Theorem~1]{MS06} (cf.~also~\cite[Theorem~2.1]{AL95}), the interval
$(a + (1+b)c , (1-b)d - a)$ then belongs also to the resolvent set of $A + V = (A+V_\text{diag}) + V_\text{off}$. The rest of
the claim is now proved as in the second proof of Proposition~\ref{prop:boundedPert} via Theorem~\ref{thm:offdiagOp} and a
spectral shift by $\gamma$.
\end{proof}%
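Condition~\eqref{eq:unboundedPert} is precisely the condition that the interval $(a+(1+b)c , (1-b)d-a)$ appearing in the proposition is nonempty; this elementary equivalence can be sanity-checked numerically.

```python
# Sanity check of the interval arithmetic in the proposition above:
# 2a + b(c+d) < d - c  is algebraically equivalent to  a + (1+b)c < (1-b)d - a,
# i.e., to nonemptiness of the interval (a + (1+b)c, (1-b)d - a).
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(0.0, 5.0)
    b = random.uniform(0.0, 1.0)
    c = random.uniform(-5.0, 5.0)
    d = c + random.uniform(0.0, 10.0)   # ensure c < d
    cond = 2 * a + b * (c + d) < d - c
    nonempty = a + (1 + b) * c < (1 - b) * d - a
    assert cond == nonempty

print("equivalence verified")
```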
\begin{remark}\label{rem:unboundedPert}
(1)
If $A$ is lower semibounded and the symmetric operator $V$ is $A$-bounded with $A$-bound smaller than $1$, then constants
$a, b \ge 0$, $b < 1$, with
\begin{equation*}
\abs{ \scprod{ x , Vx } }
\le
a\norm{x}^2 + b \scprod{ x , Ax }
\quad\text{ for all }\
x \in \Dom(A)
\end{equation*}
exist by~\cite[Theorem~VI.1.38]{Kato95}. Condition~\eqref{eq:unboundedPert} can then be guaranteed for $tV$ instead of $V$ for
$t \in \mathbb{R}$ with sufficiently small modulus.
(2)
A similar result as in Proposition~\ref{prop:unboundedPert} holds also if instead $A$ is upper semibounded. In this case, one
requires constants $a, b \ge 0$, $b < 1$, satisfying $\abs{ \scprod{ x , Vx } } \le a\norm{x}^2 - b \scprod{ x , Ax }$ for all
$x \in \Dom(A)$ and $2a - b(c+d) < d-c$. We then get in a completely analogous way a representation for the minimax values of
$(A+V)|_{\Ran\mathsf{E}_{A+V}([(1+b)d-a,\infty))}$.
\end{remark}
As another consequence of Theorem~\ref{thm:offdiagOp}, we obtain the following lower bound for the minimax values in the setting
of off-diagonal operator perturbations.
\begin{corollary}\label{cor:lowerboundOffOp}
In the situation of Theorem~\ref{thm:offdiagOp}, we have
\begin{equation*}
\lambda_k(A|_{\Ran P_+})
\le
\lambda_k(B|_{\Ran Q_+})
\end{equation*}
for all $k \le \dim\Ran P_+ = \dim \Ran Q_+$.
\end{corollary}
\begin{proof}
Let ${\mathfrak M}_+ \subset {\mathcal D}_+$ with $\dim{\mathfrak M}_+ = k$. Since $\scprod{ x , Vx } = 0$ for all $x \in {\mathcal D}_+$ by hypothesis, we have
\begin{equation*}
\sup_{\substack{x \in {\mathfrak M}_+\\ \norm{x}=1}} \scprod{ x , Ax }
=
\sup_{\substack{x \in {\mathfrak M}_+\\ \norm{x}=1}} \scprod{ x , (A+V)x }
\le
\sup_{\substack{x \in {\mathfrak M}_+ \oplus {\mathcal D}_-\\ \norm{x}=1}} \scprod{ x , (A+V)x }
.
\end{equation*}
Taking the infimum over all such subspaces ${\mathfrak M}_+$ proves the claim by Theorem~\ref{thm:offdiagOp}.
\end{proof}%
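Corollary~\ref{cor:lowerboundOffOp} can again be illustrated in a hedged $2\times 2$ model (an assumed example, not from the text): an off-diagonal coupling of $A = \operatorname{diag}(d,c)$ with $d > 0 \ge c$ pushes the positive eigenvalue up and the negative one down.

```python
# Assumed 2x2 instance of the off-diagonal setting: A = diag(d, c) with
# d > 0 >= c, V = [[0, v], [v, 0]] off-diagonal. The eigenvalue of B = A + V
# above the gap satisfies lambda_+(B) >= d = lambda_1(A restricted to Ran P_+).
import math

def eigs(M):
    # both eigenvalues of the symmetric 2x2 matrix [[p, q], [q, r]]
    p, q, r = M[0][0], M[0][1], M[1][1]
    mid, rad = (p + r) / 2, math.sqrt(((p - r) / 2) ** 2 + q ** 2)
    return mid - rad, mid + rad

c, d = -1.0, 2.0
for v in (0.0, 0.5, 1.0, 3.0):
    B = [[d, v], [v, c]]
    lo, hi = eigs(B)
    assert hi >= d - 1e-12      # positive eigenvalue can only move up
    assert lo <= c + 1e-12      # negative eigenvalue can only move down

print("off-diagonal monotonicity verified")
```

This matches the closed-form eigenvalues $(c+d)/2 \pm \sqrt{((d-c)/2)^2 + v^2}$, whose square root is at least $(d-c)/2$ for every $v$.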
As in Corollary~\ref{cor:continuity}, we also obtain a continuity statement in the situation of Theorem~\ref{thm:offdiagOp} with
bounded off-diagonal perturbations. Here, however, we do not have to impose any condition on the norm of the perturbation.
\begin{corollary}\label{cor:continuityOffOp}
Let $A$ and $V$ be as in Theorem~\ref{thm:offdiagOp}, and suppose that $V$ is bounded. Then, for each
$k \le \dim \Ran\mathsf{E}_A((0,\infty)) = \dim \Ran\mathsf{E}_{A+tV}((0,\infty))$, $t \in \mathbb{R}$, the mapping
\begin{equation*}
\mathbb{R}
\ni
t
\mapsto
\lambda_k\bigl( (A+tV)|_{\Ran\mathsf{E}_{A+tV}((0,\infty))} \bigr)
\end{equation*}
is Lipschitz continuous with Lipschitz constant $\norm{V}$.
\end{corollary}
In the particular case where $B$ is semibounded, Theorem~\ref{thm:offdiagForm} allows us to extend
Corollaries~\ref{cor:lowerboundOffOp} and~\ref{cor:continuityOffOp} to some degree to off-diagonal form perturbations. Recall
here that semiboundedness of $B$ implies that $A$ is semibounded as well, see the proof of Theorem~\ref{thm:offdiagForm} below.
\begin{corollary}\label{cor:offForm}
Assume the hypotheses of Theorem~\ref{thm:offdiagForm}.
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item For each $k \in \mathbb{N}$ with $k \le \dim\Ran P_+ = \dim\Ran Q_+$ one has
$\lambda_k(A|_{\Ran P_+}) \le \lambda_k(B|_{\Ran Q_+})$.
\item Denote for $t \in (-1/b , 1/b)$ by $B_t$ the self-adjoint operator associated with the form ${\mathfrak b}_t := {\mathfrak a} + t{\mathfrak v}$ with
form domain $\Dom[{\mathfrak b}_t] := \Dom[{\mathfrak a}]$. Then, for each
$k \le \dim\Ran\mathsf{E}_A((0,\infty)) = \dim\Ran\mathsf{E}_{B_t}((0,\infty))$, the mapping
\begin{equation*}
( -1/b , 1/b )
\ni
t
\mapsto
\lambda_k(B_t|_{\Ran\mathsf{E}_{B_t}((0,\infty))})
\end{equation*}
is locally Lipschitz continuous.
\end{enumerate}
\end{corollary}
\begin{proof}
(a).
Taking into account that ${\mathfrak v}[ x , x ] = 0$ for all $x \in {\mathfrak D}_+$ by hypothesis, the inequality
$\lambda_k(A|_{\Ran P_+}) \le \lambda_k(B|_{\Ran Q_+})$ is proved by means of Theorem~\ref{thm:offdiagForm} in a way analogous
to Corollary~\ref{cor:lowerboundOffOp}.
(b).
Upon a rescaling, we may assume without loss of generality that $b < 1$. Also recall that each $B_t$ is indeed a semibounded
self-adjoint operator with $\Dom[{\mathfrak b}_t] = \Dom(\abs{B_t}^{1/2})$ by the well-known KLMN theorem, and note that each $t{\mathfrak v}$
satisfies the hypotheses of Theorem~\ref{thm:offdiagForm}.
%
Pick $t,s \in (-1/b , 1/b)$ with $b\abs{t-s} \le 1-b\abs{s}$.
Consider first the case where $A$ (and hence ${\mathfrak a}$) is lower semibounded with lower bound $m \in \mathbb{R}$. We then have
$\abs{{\mathfrak a}[x,x]} \le {\mathfrak a}[x,x] + (\abs{m}-m)\norm{x}^2$ for all $x \in \Dom[{\mathfrak a}]$. With $\tilde{a} := a + \abs{m} - m$, this
gives
\begin{equation*}
\abs{{\mathfrak v}[x,x]}
\le
\tilde{a}\norm{x}^2 + b{\mathfrak a}[x,x]
\le
\tilde{a}\norm{x}^2 + b{\mathfrak b}_s[x,x] + b\abs{s}\abs{{\mathfrak v}[x,x]}
\end{equation*}
and, hence,
\begin{equation*}
\abs{{\mathfrak v}[x,x]}
\le
\frac{\tilde{a}}{1-b\abs{s}}\norm{x}^2 + \frac{b}{1-b\abs{s}}{\mathfrak b}_s[x,x]
\end{equation*}
for all $x \in \Dom[{\mathfrak a}] = \Dom[{\mathfrak b}_s]$. Since ${\mathfrak b}_t = {\mathfrak b}_s + (t-s){\mathfrak v}$, we thus obtain
\begin{equation*}
-\frac{\tilde{a}\abs{t-s}}{1-b\abs{s}} + \Bigl( 1 - \frac{b\abs{t-s}}{1-b\abs{s}} \Bigr) {\mathfrak b}_s
\le
{\mathfrak b}_t
\le
\frac{\tilde{a}\abs{t-s}}{1-b\abs{s}} + \Bigl( 1 + \frac{b\abs{t-s}}{1-b\abs{s}} \Bigr) {\mathfrak b}_s
.
\end{equation*}
Abbreviating $\lambda_k(t):=\lambda_k(B_t|_{\Ran\mathsf{E}_{B_t}((0,\infty))})$, Theorem~\ref{thm:offdiagForm} then implies that
\begin{equation*}
-\frac{\tilde{a}\abs{t-s}}{1-b\abs{s}} + \Bigl( 1 - \frac{b\abs{t-s}}{1-b\abs{s}} \Bigr) \lambda_k(s)
\le
\lambda_k(t)
\le
\frac{\tilde{a}\abs{t-s}}{1-b\abs{s}} + \Bigl( 1 + \frac{b\abs{t-s}}{1-b\abs{s}} \Bigr) \lambda_k(s)
\end{equation*}
and, therefore,
\begin{equation}\label{eq:Lipschitz}
\abs{\lambda_k(t) - \lambda_k(s)}
\le
\frac{\tilde{a}\abs{t-s}}{1-b\abs{s}} + \frac{b\abs{t-s}}{1-b\abs{s}} \abs{\lambda_k(s)}
.
\end{equation}
This proves that $t \mapsto \lambda_k(t)$ is continuous on $(-1/b,1/b)$ and, in particular, bounded on every compact
subinterval of $(-1/b , 1/b)$. In turn, it then easily follows from~\eqref{eq:Lipschitz} that this mapping is even locally
Lipschitz continuous, which concludes the case where $A$ is lower semibounded.
If $A$ is upper semibounded with upper bound $m \in \mathbb{R}$, we proceed similarly. We then have
$\abs{{\mathfrak a}[x,x]} \le -{\mathfrak a}[x,x] + (m+\abs{m})\norm{x}^2$ for all $x \in \Dom[{\mathfrak a}]$. With $\tilde{a} := a + m + \abs{m}$, this
leads to
\begin{equation*}
\abs{{\mathfrak v}[x,x]}
\le
\frac{\tilde{a}}{1-b\abs{s}}\norm{x}^2 - \frac{b}{1-b\abs{s}}{\mathfrak b}_s[x,x]
\end{equation*}
for all $x \in \Dom[{\mathfrak a}] = \Dom[{\mathfrak b}_s]$. Analogously as above, we then eventually obtain again~\eqref{eq:Lipschitz}, which
proves the claim in the case where $A$ is upper semibounded. This completes the proof.
\end{proof}%
\begin{remark}\label{rem:offForm}
(1)
In part~(a) of Corollary~\ref{cor:offForm}, one can also give an upper bound for $\lambda_k(B|_{\Ran Q_+})$ in terms of the
form bounds of ${\mathfrak v}$: If $A$ is lower semibounded with lower bound $m \in \mathbb{R}$, then
\begin{align*}
\abs{ {\mathfrak v}[ x , x ] }
&\le
(a + b\abs{m})\norm{x}^2 + b({\mathfrak a}-m)[ x , x ]\\
&=
(a + b\abs{m} - bm)\norm{x}^2 + b{\mathfrak a}[ x , x ]
\end{align*}
for all $x \in \Dom[{\mathfrak a}]$, leading to
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
\le
(1+b)\lambda_k(A|_{\Ran P_+}) + (a + b\abs{m} - bm)
\end{equation*}
for all $k \le \dim\Ran P_+ = \dim\Ran Q_+$. Similarly, if $A$ is upper semibounded with upper bound $m \in \mathbb{R}$, we have
\begin{align*}
\abs{ {\mathfrak v}[ x , x ] }
&\le
(a + b\abs{m})\norm{x}^2 + b(m-{\mathfrak a})[ x , x ]\\
&=
(a + b\abs{m} + bm)\norm{x}^2 - b{\mathfrak a}[ x , x ]
\end{align*}
for all $x \in \Dom[{\mathfrak a}]$. If, in addition, $b \le 1$, this then leads to
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
\le
(1-b)\lambda_k(A|_{\Ran P_+}) + (a + b\abs{m} + bm)
\end{equation*}
for all $k \le \dim\Ran P_+ = \dim\Ran Q_+$.
(2)
A similar continuity result as in part~(b) of Corollary~\ref{cor:offForm} can be formulated also in the framework of
Proposition~\ref{prop:unboundedPert}: in addition to the hypotheses of Proposition~\ref{prop:unboundedPert}, let
$I \subset (-1/b,1/b)$ be an interval such that for all $t \in I$ we have $2a\abs{t} + b\abs{t}(c+d) < d-c$. Then, for all
$k \in \mathbb{N}$ satisfying $k \le \dim \Ran \mathsf{E}_A([d,\infty)) = \dim \Ran \mathsf{E}_{A+tV}([(1-b\abs{t})d-a\abs{t},\infty))$, the mapping
\begin{equation*}
I \ni t
\mapsto
\lambda_k((A+tV)|_{\Ran\mathsf{E}_{A+tV}([(1-b\abs{t})d-a\abs{t},\infty))})
\end{equation*}
is locally Lipschitz continuous. The proof is analogous to the one of part~(b) of Corollary~\ref{cor:offForm}.
A corresponding result can be formulated also in the framework of Theorem~\ref{thm:genSemibounded}, provided that the interval
$I \subset (-1/b,1/b)$ is then chosen such that for all $t \in I$ we have $\norm{\mathsf{E}_A((0,\infty))-\mathsf{E}_{A+tV}((0,\infty))} < 1$
and $\scprod{ x , (A+tV)x } \le 0$ for all $x \in {\mathcal D}_-$.
\end{remark}
\subsection*{An example. The Stokes operator}
We now briefly revisit the Stokes operator in the framework of Theorem~\ref{thm:offdiagForm}. Here, we mainly rely
on~\cite{GKMSV19}, but the reader is also referred to~\cite[Section~7]{GKMSV17}, \cite[Chapter~5]{SchmDiss},
\cite{FFMM00}, and the references cited therein.
Let $\Omega \subset \mathbb{R}^n$, $n \ge 2$, be a bounded domain with $C^2$-boundary, and let $\nu > 0$ and $v_* \ge 0$. On the Hilbert
space ${\mathcal H} = {\mathcal H}_+ \oplus {\mathcal H}_-$ with ${\mathcal H}_+ = L^2(\Omega)^n$ and ${\mathcal H}_- = L^2(\Omega)$, we consider the closed, densely defined,
and nonnegative form ${\mathfrak a}$ with $\Dom[{\mathfrak a}] := H_0^1(\Omega)^n \oplus L^2(\Omega)$ and
\begin{equation*}
{\mathfrak a}[ v \oplus q , u \oplus p ]
:=
\nu \sum_{j=1}^n \int_\Omega \scprod{ \partial_j v(x) , \partial_j u(x) }_{\mathbb{C}^n} \,\mathrm d x
\end{equation*}
for $u \oplus p, v \oplus q \in \Dom[{\mathfrak a}]$. Clearly, ${\mathfrak a}$ is the form associated to the nonnegative self-adjoint operator
$A := -\nu\mathbf{\Delta} \oplus 0$ on the Hilbert space ${\mathcal H} = {\mathcal H}_+ \oplus {\mathcal H}_-$ with
$\Dom(A) := (H^2(\Omega) \cap H_0^1(\Omega))^n \oplus L^2(\Omega)$ and $\Dom(\abs{A}^{1/2}) = \Dom[{\mathfrak a}]$, where
$\mathbf{\Delta} = \Delta\cdot I_{\mathbb{C}^n}$ is the vector-valued Dirichlet Laplacian on $\Omega$. Moreover,
$P_+ := \mathsf{E}_A((0,\infty))$ and $P_- := \mathsf{E}_A((-\infty,0]) = \mathsf{E}_A(\{0\})$ are the orthogonal projections onto ${\mathcal H}_+$ and
${\mathcal H}_-$, respectively. In particular, we have
\begin{equation*}
{\mathfrak D}_+
:=
\Ran P_+ \cap \Dom(\abs{A}^{1/2})
=
H_0^1(\Omega)^n \oplus 0
\end{equation*}
and
\begin{equation*}
{\mathfrak D}_-
:=
\Ran P_- \cap \Dom(\abs{A}^{1/2})
=
0 \oplus L^2(\Omega)
.
\end{equation*}
Define the symmetric sesquilinear form ${\mathfrak v}$ on ${\mathcal H} = {\mathcal H}_+ \oplus {\mathcal H}_-$ with domain $\Dom[{\mathfrak v}] := \Dom[{\mathfrak a}]$ by
\begin{equation*}
{\mathfrak v}[ v \oplus q , u \oplus p ]
:=
-v_* \scprod{ \divgc v , p }_{L^2(\Omega)} - v_* \scprod{ q , \divgc u }_{L^2(\Omega)}
\end{equation*}
for $u \oplus p, v \oplus q \in \Dom[{\mathfrak a}]$. One can show that
$\nu\norm{\divgc u}_{L^2(\Omega)}^2 \le {\mathfrak a}[u \oplus 0 , u \oplus 0]$ for all $u \in {\mathfrak D}_+ = H_0^1(\Omega)^n$, see,
e.g.,~\cite[Proof of Theorem~5.12]{SchmDiss}. Using Young's inequality, this then implies that ${\mathfrak v}$ is infinitesimally form
bounded with respect to ${\mathfrak a}$, see~\cite[Remark~5.1.3]{SchmDiss}; cf.~also~\cite[Section~2]{GKMSV19}. Indeed, for $\varepsilon > 0$ and
$f = u \oplus p \in \Dom[{\mathfrak a}]$ we obtain
\begin{equation}\label{eq:StokesRelBound}
\begin{aligned}
\abs{{\mathfrak v}[f , f]}
&\le
2 v_* \abs{ \scprod{ p , \divgc u }_{L^2(\Omega)} }
\le
2 v_* \norm{p}_{L^2(\Omega)} \norm{\divgc u}_{L^2(\Omega)}\\
&\le
\varepsilon \nu \norm{\divgc u}_{L^2(\Omega)}^2 + \varepsilon^{-1} \nu^{-1} v_*^2 \norm{p}_{L^2(\Omega)}^2\\
&\le
\varepsilon {\mathfrak a}[ u \oplus 0 , u \oplus 0 ] + \varepsilon^{-1} \nu^{-1} v_*^2 \norm{f}_{\mathcal H}^2\\
&=
\varepsilon {\mathfrak a}[ f , f ] + \varepsilon^{-1} \nu^{-1} v_*^2 \norm{f}_{\mathcal H}^2
.
\end{aligned}
\end{equation}
Thus, by the well-known KLMN theorem, the form ${\mathfrak b}_S := {\mathfrak a} + {\mathfrak v}$ with $\Dom[{\mathfrak b}_S] = \Dom[{\mathfrak a}] = \Dom(\abs{A}^{1/2})$ is
associated to a unique lower semibounded self-adjoint operator $B_S$ on ${\mathcal H}$ with $\Dom(\abs{B_S}^{1/2}) = \Dom(\abs{A}^{1/2})$,
the so-called~\emph{Stokes operator}. It is a self-adjoint extension of the (non-closed) upper dominant block operator matrix
\begin{equation*}
\begin{pmatrix}
-\nu\mathbf{\Delta} & v_*\grad\\
-v_*\divgc & 0
\end{pmatrix}
\end{equation*}
defined on $(H^2(\Omega) \cap H_0^1(\Omega))^n \oplus H^1(\Omega)$. In fact, the closure of the latter is a self-adjoint
operator, see~\cite[Theorems~3.7 and~3.9]{FFMM00}, which yields another characterization of the Stokes operator $B_S$.
By rescaling, one obtains from~\cite[Theorem~3.15]{FFMM00} that the essential spectrum of $B_S$ is given by
\begin{equation*}
\spec_\mathrm{ess}(B_S)
=
\Bigl\{ -\frac{v_*^2}{\nu} , -\frac{v_*^2}{2\nu} \Bigr\}
,
\end{equation*}
see~\cite[Remark~2.2]{GKMSV19}. In particular, the essential spectrum of $B_S$ is purely negative. In turn, the positive spectrum
of $B_S$, that is, $\spec(B_S) \cap (0,\infty)$, is discrete~\cite[Theorem~2.1\,(i)]{GKMSV19}.
The above shows that the hypotheses of Theorem~\ref{thm:offdiagForm} are satisfied in this situation, so that
Theorem~\ref{thm:offdiagForm} and Corollary~\ref{cor:offForm} yield the following result.
\begin{proposition}\label{prop:Stokes}
Let $B_S$ be the Stokes operator as above. Then, the positive spectrum of $B_S$, $\spec(B_S) \cap (0,\infty)$, is discrete, and
the positive eigenvalues $\lambda_k(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})$, $k \in \mathbb{N}$, of $B_S$, enumerated in nondecreasing
order and counting multiplicities, admit the representation
\begin{equation*}
\lambda_k(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})
=
\inf_{\substack{{\mathfrak M}_+ \subset H_0^1(\Omega)^n\\ \dim{\mathfrak M}_+ = k}}
\sup_{\substack{u \oplus p \in {\mathfrak M}_+ \oplus L^2(\Omega)\\ \norm{u}_{L^2(\Omega)^n}^2 + \norm{p}_{L^2(\Omega)}^2 = 1}}
{\mathfrak b}_S[ u \oplus p , u \oplus p ]
.
\end{equation*}
The latter depend locally Lipschitz continuously on $\nu$ and $v_*$ and satisfy the two-sided estimate
\begin{equation*}
\nu\lambda_k(-\mathbf{\Delta})
\le
\lambda_k(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})
\le
\nu\lambda_k(-\mathbf{\Delta}) + \frac{v_*^2}{\nu}
.
\end{equation*}
\end{proposition}
\begin{proof}
In view of the above considerations, the representation of the eigenvalues, the continuity statement, and the lower bound on the
eigenvalues follow from Theorem~\ref{thm:offdiagForm} and Corollary~\ref{cor:offForm}. It remains to show the upper bound on
the eigenvalues. To this end, let ${\mathfrak M}_+ \subset H_0^1(\Omega)^n$ with $\dim {\mathfrak M}_+ = k \in \mathbb{N}$, and let
$f = u \oplus p \in {\mathfrak M}_+ \oplus L^2(\Omega)$ be a normalized vector with $u \neq 0$. Then,
$\mu := {\mathfrak a}[ u \oplus 0 , u \oplus 0 ] / \norm{u}_{L^2(\Omega)^n}^2 = {\mathfrak a}[ f , f ] / \norm{u}_{L^2(\Omega)^n}^2$ is positive
and satisfies
\begin{equation}\label{eq:muBound}
\mu
\le
\sup_{\substack{v \in {\mathfrak M}_+\\ \norm{v}_{L^2(\Omega)^n}^2 = 1}} {\mathfrak a}[ v \oplus 0 , v \oplus 0 ]
\end{equation}
and
\begin{equation*}
\frac{\nu\norm{\divgc u}_{L^2(\Omega)}^2}{\mu}
=
\frac{\norm{u}_{L^2(\Omega)^n}^2 \nu \norm{\divgc u}_{L^2(\Omega)}^2}{{\mathfrak a}[ u \oplus 0 , u \oplus 0 ]}
\le
\norm{u}_{L^2(\Omega)^n}^2
\le
1
.
\end{equation*}
Similarly as in~\eqref{eq:StokesRelBound}, we now obtain by means of Young's inequality that
\begin{align*}
\abs{ {\mathfrak v}[ f , f ] }
&\le
2v_*\norm{p}_{L^2(\Omega)} \norm{\divgc u}_{L^2(\Omega)}
\le
\mu \norm{p}_{L^2(\Omega)}^2 + \frac{v_*^2 \norm{\divgc u}_{L^2(\Omega)}^2}{\mu}\\
&\le
\mu \norm{p}_{L^2(\Omega)}^2 + \frac{v_*^2}{\nu}
.
\end{align*}
Since ${\mathfrak a}[ f , f ] = \mu\norm{u}_{L^2(\Omega)^n}^2$, this gives
\begin{equation}\label{eq:StokesFormUpperBound}
{\mathfrak b}_S[ f , f ]
\le
{\mathfrak a}[ f , f ] + \mu \norm{p}_{L^2(\Omega)}^2 + \frac{v_*^2}{\nu}
=
\mu + \frac{v_*^2}{\nu}
.
\end{equation}
In light of ${\mathfrak b}_S[ 0 \oplus p , 0 \oplus p ] = {\mathfrak a}[ 0 \oplus p , 0 \oplus p ] = 0$, we conclude from~\eqref{eq:muBound}
and~\eqref{eq:StokesFormUpperBound} that
\begin{equation*}
\sup_{\substack{u \oplus p \in {\mathfrak M}_+ \oplus L^2(\Omega)\\ \norm{u}_{L^2(\Omega)^n}^2 + \norm{p}_{L^2(\Omega)}^2 = 1}}
{\mathfrak b}_S[ u \oplus p , u \oplus p ]
\le
\sup_{\substack{v \in {\mathfrak M}_+\\ \norm{v}_{L^2(\Omega)^n}^2 = 1}} {\mathfrak a}[ v \oplus 0 , v \oplus 0 ] + \frac{v_*^2}{\nu}
,
\end{equation*}
and taking the infimum over subspaces ${\mathfrak M}_+ \subset H_0^1(\Omega)^n$ with $\dim{\mathfrak M}_+ = k$ proves the upper bound. This
completes the proof.
\end{proof}%
\begin{remark}
(1)
Choosing $\varepsilon = 1$ in~\eqref{eq:StokesRelBound}, the upper bound from Remark~\ref{rem:offForm}\,(1) reads
\begin{equation*}
\lambda_k(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})
\le
2\nu\lambda_k(-\mathbf{\Delta}) + \frac{v_*^2}{\nu}
\end{equation*}
for all $k \in \mathbb{N}$, while the choice $\varepsilon = v_*$ in~\eqref{eq:StokesRelBound} leads to
\begin{equation*}
\lambda_k(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})
\le
(1+v_*)\nu\lambda_k(-\mathbf{\Delta}) + \frac{v_*}{\nu}
\end{equation*}
for all $k \in \mathbb{N}$.
(2)
For the particular case of $k = 1$, a similar upper bound has been established in the proof
of~\cite[Theorem~2.1\,(i)]{GKMSV19}:
\begin{equation*}
\nu\lambda_1(-\Delta)
\le
\lambda_1(B_S|_{\Ran\mathsf{E}_{B_S}((0,\infty))})
\le
\nu\lambda_1(-\Delta) + v_*\norm{\divgc u_0}_{L^2(\Omega)}
,
\end{equation*}
where $u_0 \in (H^2(\Omega) \cap H_0^1(\Omega))^n$ is a normalized eigenfunction for $-\mathbf{\Delta}$ corresponding to the
first positive eigenvalue $\lambda_1(-\mathbf{\Delta}) = \lambda_1(-\Delta)$.
\end{remark}
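The two-sided estimate in Proposition~\ref{prop:Stokes} can be illustrated numerically in a finite-dimensional toy model. The following sketch (an illustration only, not the actual Stokes operator) replaces $-\nu\mathbf{\Delta}$ by $\nu G^\top G$ for a random invertible matrix $G$ and the divergence by $G$ itself, so that the structural inequality $\nu\norm{\divgc u}^2 \le {\mathfrak a}[u \oplus 0, u \oplus 0]$ holds with equality; the names and dimensions are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu, vs = 8, 1.3, 0.7  # toy dimension and parameters nu, v_*

# Toy model: "Dirichlet Laplacian" nu*G^T G, "divergence" G,
# so that nu*||G u||^2 equals the form a[u+0, u+0].
G = rng.standard_normal((n, n))
K = G.T @ G  # symmetric positive definite (G is invertible a.s.)

# Symmetric block operator matrix mimicking the Stokes matrix.
M = np.block([[nu * K, -vs * G.T],
              [-vs * G, np.zeros((n, n))]])

mu = np.sort(np.linalg.eigvalsh(K))  # eigenvalues of G^T G, ascending
lam = np.sort(np.linalg.eigvalsh(M))
lam_pos = lam[lam > 0]               # positive eigenvalues of M, ascending

# Two-sided estimate: nu*mu_k <= lambda_k^+ <= nu*mu_k + vs^2/nu.
assert len(lam_pos) == n
assert np.all(nu * mu <= lam_pos + 1e-9)
assert np.all(lam_pos <= nu * mu + vs ** 2 / nu + 1e-9)
print("two-sided bound verified for all", n, "positive eigenvalues")
```

In this toy model a Schur complement computation even gives the positive eigenvalues in closed form as $\lambda_k^+ = \bigl(\nu\mu_k + \sqrt{\nu^2\mu_k^2 + 4v_*^2\mu_k}\bigr)/2$, from which both bounds follow directly.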
\section{An abstract minimax principle in spectral gaps}\label{sec:abstrMinimax}
We rely on the following abstract minimax principle in spectral gaps, part~(a) of which is extracted from~\cite{GLS99} and
part~(b) of which is its natural adaptation to the operator framework; cf.~also~\cite[Proposition~A.3]{NSTTV18}.
\begin{proposition}[{cf.~\cite[Theorem 1]{GLS99},~\cite[Proposition~A.3]{NSTTV18}}]\label{prop:GLS}
Assume Hypothesis~\ref{hyp:minimax}.
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item
If we have $\Dom(\abs{B}^{1/2}) = \Dom(\abs{A}^{1/2})$, ${\mathfrak b}[x,x]\le0$ for all $x\in{\mathfrak D}_-$, and
$\Ran(P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$, then
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathfrak D}_+\\ \dim({\mathfrak M}_+)=k}} \sup_{\substack{x\in{\mathfrak M}_+\oplus{\mathfrak D}_-\\ \norm{x}=1}} {\mathfrak b}[x,x]
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran P_+$.
\item
If we have $\Dom(B) = \Dom(A)$, $\scprod{ x , Bx } \le 0$ for all $x \in {\mathcal D}_-$, and $\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$,
then
\begin{equation*}
\lambda_k(B|_{\Ran Q_+})
=
\inf_{\substack{{\mathfrak M}_+\subset{\mathcal D}_+\\ \dim({\mathfrak M}_+)=k}} \sup_{\substack{x\in{\mathfrak M}_+\oplus{\mathcal D}_-\\ \norm{x}=1}}
\scprod{ x , Bx }
\end{equation*}
for all $k \in \mathbb{N}$ with $k \le \dim\Ran P_+$.
\end{enumerate}
\end{proposition}
\begin{proof}
For part~(a), we first recall that the spectral projections $P_+$ and $Q_+$ map
${\mathfrak D} := \Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$ into itself, so that $P_+$ maps $\Ran Q_+ \cap {\mathfrak D}$ into ${\mathfrak D}_+$. Next, we
observe that under the hypotheses of part~(a) the restriction
\begin{equation}\label{eq:restriction}
P_+|_{\Ran Q_+\cap{\mathfrak D}} \colon \Ran Q_+\cap{\mathfrak D}\to{\mathfrak D}_+
\end{equation}
is bijective. Indeed, its surjectivity follows directly from the hypothesis $\Ran(P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$. For the
injectivity, we follow Step~2 of the proof of~\cite[Theorem~1]{GLS99}: assume to the contrary that $P_+x = 0$ for some non-zero
$x \in \Ran Q_+ \cap {\mathfrak D}$. Then, on the one hand we have ${\mathfrak b}[x , x] > 0$ since $x$ is a non-zero element of $\Ran Q_+ = \Ran\mathsf{E}_B((0,\infty))$, and on the other hand $x \in \Ran P_-$, that is,
$x \in {\mathfrak D}_-$. The latter gives ${\mathfrak b}[x , x] \le 0$ by hypothesis, a contradiction. The claim of part~(a) now follows by Step~1
of the proof of~\cite[Theorem~1]{GLS99}.
Replacing form domains with operator domains in the above reasoning and the cited Step~1 of the proof in~\cite{GLS99}, that is,
${\mathfrak D}$ by $\Dom(A)=\Dom(B)$ and ${\mathfrak D}_\pm$ by ${\mathcal D}_\pm$, part~(b) can be proved in the same manner.
\end{proof}%
\begin{remark}\label{rem:spectralShift}
The above proposition is tailored towards spectral gaps around zero, but by a spectral shift we can of course also handle
spectral gaps around any point $\gamma \in \mathbb{R}$. Indeed, we have $\mathsf{E}_{A-\gamma}((0,\infty)) = \mathsf{E}_A((\gamma,\infty))$ for
$\gamma \in \mathbb{R}$ and analogously for $B$. Moreover, the form associated to the operator $B - \gamma$ is known to agree with the
form ${\mathfrak b} - \gamma$. The latter can be seen for instance with an analogous reasoning as
in~\cite[Proposition~10.5\,(a)]{Schm12}; cf.~also Lemma~\ref{cor:formpert} in Appendix~\ref{sec:heinz} below.
\end{remark}
\begin{remark}\label{rem:neg}
(1)
The hypothesis ${\mathfrak b}[x,x]\le0$ for all $x\in{\mathfrak D}_-$ in part~(a) of Proposition~\ref{prop:GLS} is used not only to verify the
injectivity of the restriction~\eqref{eq:restriction} but is also a crucial ingredient in the cited Step~1 of the proof
of~\cite[Theorem~1]{GLS99}. The same applies to the hypothesis $\scprod{ x , Bx } \le 0$ for all $x\in{\mathcal D}_-$ in part~(b).
(2)
Since $P_+$ and $Q_+$ are spectral projections for the respective operators, we always have
$\Ran(P_+Q_+|_{{\mathfrak D}_+}) \subset {\mathfrak D}_+$ and $\Ran(P_+Q_+|_{{\mathcal D}_+}) \subset {\mathcal D}_+$. In this respect, the
condition $\Ran(P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$ in part~(a) of Proposition~\ref{prop:GLS} actually means that the restriction
$P_+Q_+|_{{\mathfrak D}_+} \colon {\mathfrak D}_+ \to {\mathfrak D}_+$ is surjective. This has not been formulated explicitly in the statement
of~\cite[Theorem~1]{GLS99} but has instead been guaranteed by the stronger condition
\begin{equation}\label{eq:GLS}
\norm{(\abs{A}+I)^{1/2}P_+Q_-(\abs{A}+I)^{-1/2}} < 1.
\end{equation}
Since ${\mathfrak D}_+ = \Ran ((\abs{A}+I)^{-1/2}|_{\Ran P_+})$, a standard Neumann series argument then even gives bijectivity of the
restriction $P_+Q_+|_{{\mathfrak D}_+}$, see Step~2 of the proof of~\cite[Theorem~1]{GLS99}. In this reasoning, the operators
$(\abs{A}+I)^{\pm 1/2}$ can be replaced by $(\abs{A}+\alpha I)^{\pm 1/2}$ for any $\alpha>0$; if $\abs{A}$ has a bounded
inverse, also $\alpha=0$ can be considered here.
Of course, the above reasoning also applies in the situation of part~(b) of Proposition~\ref{prop:GLS}, but with
$(\abs{A}+\alpha I)^{\pm 1/2}$ replaced by $(\abs{A}+\alpha I)^{\pm 1}$.
Condition~\eqref{eq:GLS} has been considered in~\cite{GLS99} in the setting of Dirac operators, but it seems to be hard to
verify it on an abstract level. This approach is therefore not pursued further here.
\end{remark}
In the context of our main theorems, the restriction $P_+Q_+|_{\Ran P_+}$, understood as an endomorphism of $\Ran P_+$, will
always be bijective, cf.~Remark~\ref{rem:PQbij}\,(1) below. It turns out that then the hypotheses of part~(b) in
Proposition~\ref{prop:GLS} imply those of part~(a), in which case both representations for the minimax values in
Proposition~\ref{prop:GLS} are valid. More precisely, we have the following lemma, essentially based on the well-known Heinz
inequality, cf.~Appendix~\ref{sec:heinz} below.
\begin{lemma}\label{lem:GLS}
Assume Hypothesis~\ref{hyp:minimax} with $\Dom(A) = \Dom(B)$.
\begin{enumerate}
\renewcommand{\theenumi}{\alph{enumi}}
\item One has $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$.
\item If $\scprod{x , Bx} \le 0$ for all $x \in {\mathcal D}_-$, then ${\mathfrak b}[x , x] \le 0$ for all $x \in {\mathfrak D}_-$.
\item If the restriction $P_+Q_+|_{\Ran P_+} \colon \Ran P_+ \to \Ran P_+$ is bijective and
$\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$, then also $\Ran(P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$.
\end{enumerate}
\end{lemma}
\begin{proof}
(a).
This is a consequence of the well-known Heinz inequality, see, e.g., Corollary~\ref{cor:SchmDiss} below. Alternatively, this
follows by classical considerations regarding operator and form boundedness, see Remark~\ref{rem:SchmDiss} below.
(b).
It follows from part~(a) that the operator $\abs{B}^{1/2} ( \abs{A}^{1/2} + I )^{-1}$ is closed and everywhere defined, hence
bounded by the closed graph theorem. Thus,
\begin{equation*}
\norm{\abs{B}^{1/2}x} \le \norm{\abs{B}^{1/2}(\abs{A}^{1/2}+I)^{-1}}\cdot \norm{(\abs{A}^{1/2}+I)x}
\end{equation*}
for all $x\in \Dom(\abs{A}^{1/2})=\Dom(\abs{B}^{1/2})$. Since ${\mathcal D}_-$ is a core for the operator
$\abs{A|_{\Ran P_-}}^{1/2}=\abs{A}^{1/2}|_{\Ran P_-}$ with $\Dom(\abs{A}^{1/2}|_{\Ran P_-})={\mathfrak D}_-$, the inequality
${\mathfrak b}[x,x]\le 0$ for $x\in{\mathfrak D}_-$ now follows from the hypothesis $\scprod{ x,Bx } \le 0$ for all $x\in{\mathcal D}_-$ by approximation.
(c).
We clearly have $\Ran(P_+Q_+|_{{\mathcal D}_+})={\mathcal D}_+$, ${\mathcal D}_+=\Dom(A|_{\Ran P_+})$, and ${\mathfrak D}_+=\Dom(\abs{A|_{\Ran P_+}}^{1/2})$.
Applying Corollary~\ref{cor:fracBij} below with the choices $\Lambda_1=\Lambda_2=A|_{\Ran P_+}$ and $S = P_+Q_+|_{\Ran P_+}$
therefore implies that $\Ran(P_+Q_+|_{{\mathfrak D}_+})={\mathfrak D}_+$, which proves the claim.
\end{proof}%
\begin{remark}\label{rem:PQbij}
(1)
In light of the identity $P_+Q_+ = P_+ - P_+Q_-$, the bijectivity of $P_+Q_+|_{\Ran P_+} \colon \Ran P_+ \to \Ran P_+$ can be
guaranteed, for instance, by the condition $\norm{P_+Q_-} < 1$ via a standard Neumann series argument. Since
$P_+-Q_+=P_+Q_- - P_-Q_+$ and, in particular, $\norm{P_+Q_-}\le\norm{P_+-Q_+}$, this condition holds if the stronger inequality
$\norm{P_+-Q_+}<1$ is satisfied. In the latter case, there is a unitary operator $U$ with $Q_+U=UP_+$, see,
e.g.,~\cite[Theorem~I.6.32]{Kato95}, which implies that $\dim \Ran P_+=\dim\Ran Q_+$. It is this situation we encounter in
Theorems~\ref{thm:genSemibounded}--\ref{thm:offdiagForm}.
(2)
In the case where $B$ is an infinitesimal operator perturbation of $A$, the inequality $\norm{P_+Q_-}<1$ already implies that
$\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$, see the following section; the particular case where $B$ is a bounded perturbation of
$A$ has previously been considered in~\cite[Lemma~A.6]{NSTTV18}. For more general, not necessarily infinitesimal,
perturbations, this remains so far an open problem.
\end{remark}
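The chain of inequalities in part~(1) of the remark, together with the resulting bijectivity of $P_+Q_+|_{\Ran P_+}$, is easy to check numerically in finite dimensions. The following sketch (a toy verification under illustrative choices, not part of the proof) builds $Q_+$ as the orthogonal projection onto a graph subspace $\{ f \oplus Xf \mid f \in \Ran P_+ \}$, for which $\norm{P_+ - Q_+} < 1$ automatically holds.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 5, 7  # dim Ran P_+ and dim Ran P_-

# P_+ projects onto the first k coordinates.
P_plus = np.diag([1.0] * k + [0.0] * m)

# Q_+ projects onto the graph subspace { f + Xf : f in Ran P_+ }.
X = rng.standard_normal((m, k))
T = np.vstack([np.eye(k), X])               # columns span Ran Q_+
Q_plus = T @ np.linalg.solve(T.T @ T, T.T)  # orthogonal projection
Q_minus = np.eye(k + m) - Q_plus

norm = lambda A: np.linalg.norm(A, 2)       # spectral norm

# ||P_+ Q_-|| <= ||P_+ - Q_+|| < 1, as in the remark.
assert norm(P_plus @ Q_minus) <= norm(P_plus - Q_plus) + 1e-9
assert norm(P_plus - Q_plus) < 1

# P_+ Q_+ restricted to Ran P_+ as a k x k matrix; its smallest singular
# value stays above 1 - ||P_+ Q_-|| > 0, so the restriction is bijective.
T_restr = (P_plus @ Q_plus)[:k, :k]
sigma_min = np.linalg.svd(T_restr, compute_uv=False)[-1]
assert sigma_min >= 1 - norm(P_plus @ Q_minus) - 1e-9
```

The lower bound on the smallest singular value reflects the Neumann series argument: $P_+Q_+|_{\Ran P_+} = I_{\Ran P_+} - P_+Q_-|_{\Ran P_+}$ is invertible once $\norm{P_+Q_-} < 1$.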
\section{Proof of Theorem~\ref{thm:genOpInfinitesimal}: The graph norm approach}\label{sec:graphNorm}
In this section we show that the inequality $\norm{P_+Q_-} < 1$ in the context of Theorem~\ref{thm:genOpInfinitesimal} implies
that $\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$, which is essentially what is needed to deduce Theorem~\ref{thm:genOpInfinitesimal}
from Proposition~\ref{prop:GLS} and Lemma~\ref{lem:GLS}. The main technique used to accomplish this can in fact be formulated in
a much more general framework:
Recall that for a closed operator $\Lambda$ on a Banach space with norm $\norm{\,\cdot\,}$, its domain $\Dom(\Lambda)$ can be
equipped with the~\emph{graph norm}
\begin{equation*}
\norm{x}_\Lambda := \norm{x} + \norm{\Lambda x},\quad x\in\Dom(\Lambda),
\end{equation*}
which makes $(\Dom(\Lambda),\norm{\,\cdot\,}_\Lambda)$ a Banach space. Also recall that a linear operator $K$ with
$\Dom(K) \supset \Dom(\Lambda)$ is called~\emph{$\Lambda$-bounded with $\Lambda$-bound $\beta_* \ge 0$} if for all
$\beta > \beta_*$ there is an $\alpha \ge 0$ with
\begin{equation}\label{eq:relBound}
\norm{Kx}
\le
\alpha \norm{x} + \beta \norm{\Lambda x}
\quad\text{ for all }\
x \in \Dom(\Lambda)
.
\end{equation}
The following lemma extends part~(a) of~\cite[Proposition~A.5]{NSTTV18}, taken from Lemma~3.9 in the author's
Ph.D.~thesis~\cite{SeelDiss}, to relatively bounded commutators.
\begin{lemma}\label{lem:specRad}
Let $\Lambda$ be a closed operator on a Banach space, $K$ be $\Lambda$-bounded with $\Lambda$-bound $\beta_* \ge 0$, and let
$S$ be bounded with $\Ran(S|_{\Dom(\Lambda)})\subset\Dom(\Lambda)$ and
\begin{equation*}
\Lambda Sx - S\Lambda x
=
Kx
\quad\text{ for all }\
x\in\Dom(\Lambda)
.
\end{equation*}
Then, $S|_{\Dom(\Lambda)}$ is bounded on $\Dom(\Lambda)$ with respect to the graph norm for $\Lambda$, and the corresponding
spectral radius satisfies
\begin{equation*}
r_\Lambda(S)
:=
\lim_{k\to\infty} \norm{(S|_{\Dom(\Lambda)})^k}_\Lambda^{1/k}\le\norm{S} + \beta_*
.
\end{equation*}
\end{lemma}
\begin{proof}
Only small modifications to the reasoning from~\cite[Lemma~3.9]{SeelDiss},~\cite[Proposition~A.5]{NSTTV18} are necessary. For
the sake of completeness, we reproduce the full argument here:
Let $\beta > \beta_*$ and $\alpha \ge 0$ such that~\eqref{eq:relBound} holds. Then, for $x\in\Dom(\Lambda)$ one has
\begin{equation*}
\norm{\Lambda Sx} \le \norm{S}\norm{\Lambda x} + \norm{Kx} \le (\norm{S}+\beta)\norm{\Lambda x} + \alpha\norm{x},
\end{equation*}
so that
\begin{equation*}
\norm{Sx}_\Lambda = \norm{Sx} + \norm{\Lambda Sx} \le \bigl(\norm{S}+\beta\bigr)\norm{x}_\Lambda + \alpha\norm{x}.
\end{equation*}
In particular, $S|_{\Dom(\Lambda)}$ is bounded with respect to the graph norm $\norm{\,\cdot\,}_\Lambda$ with
$\norm{S}_\Lambda\le \norm{S} + \beta + \alpha$.
Now, a straightforward induction yields
\begin{equation*}
\norm{S^kx}_\Lambda
\le
\bigl(\norm{S} + \beta\bigr)^k\norm{x}_\Lambda + k\alpha\bigl(\norm{S}+\beta\bigr)^{k-1}\norm{x},
\quad x\in\Dom(\Lambda),
\end{equation*}
for $k\in\mathbb{N}$. Hence, $\norm{(S|_{\Dom(\Lambda)})^k}_\Lambda \le (\norm{S}+\beta)^k + k\alpha(\norm{S}+\beta)^{k-1}$, so that
\begin{align*}
r_\Lambda(S)
&=
\lim_{k\to\infty} \norm{(S|_{\Dom(\Lambda)})^k}_\Lambda^{1/k}
\le
\lim_{k\to\infty} \bigl( (\norm{S}+\beta)^k+k\alpha(\norm{S}+\beta)^{k-1} \bigr)^{1/k}\\
&=
\norm{S}+\beta
.
\end{align*}
Since $\beta > \beta_*$ was chosen arbitrarily, this proves the claim.
\end{proof}%
We are now in a position to prove Theorem~\ref{thm:genOpInfinitesimal}.
\begin{proof}[Proof of Theorem~\ref{thm:genOpInfinitesimal}]
We mainly follow the line of reasoning in the proof of~\cite[Lemma~A.6]{NSTTV18}. Only a few additional considerations are
necessary in order to accommodate unbounded perturbations $V$ by means of Lemma~\ref{lem:specRad}. For convenience of the
reader, we nevertheless reproduce the whole argument here.
Define $S,T \colon \Ran P_+ \to \Ran P_+$ by
\begin{equation*}
S := P_+Q_-|_{\Ran P_+},\quad T := P_+Q_+|_{\Ran P_+} = I_{\Ran P_+} - S.
\end{equation*}
By hypothesis, we have $\norm{S} \le \norm{P_+Q_-} < 1$, so that $T$ is bijective. In light of Proposition~\ref{prop:GLS} and
Lemma~\ref{lem:GLS}, it now remains to show the inclusion $\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$, that is,
$\Ran(T^{-1}|_{{\mathcal D}_+})\subset{\mathcal D}_+$. To this end, we rewrite $T^{-1}$ as a Neumann series,
\begin{equation*}
T^{-1}
=
(I_{\Ran P_+} - S)^{-1}
=
\sum_{k=0}^\infty S^k
.
\end{equation*}
Clearly, $S$ maps the domain ${\mathcal D}_+=\Dom(A|_{\Ran P_+})$ into itself, so that the inclusion $\Ran(T^{-1}|_{{\mathcal D}_+})\subset{\mathcal D}_+$
holds if the above series converges also with respect to the graph norm for the closed operator $\Lambda:=A|_{\Ran P_+}$. This,
in turn, is the case if the corresponding spectral radius $r_\Lambda(S)$ of $S$ is smaller than $1$.
For $x\in{\mathcal D}_+\subset\Ran P_+$ we compute
\begin{align*}
\Lambda Sx
&=
AP_+Q_-x
= P_+(A+V)Q_-x - P_+VQ_-x\\
&=
P_+Q_-(A+V)x - P_+VQ_-x\\
&=
S\Lambda x + Kx
\end{align*}
with
\begin{equation*}
K := (P_+Q_-V - P_+VQ_-)|_{\Ran P_+}.
\end{equation*}
We show that the operator $K$ is $\Lambda$-bounded with $\Lambda$-bound $0$. Indeed, let $b \in (0,1)$, and choose $a\ge0$ with
$\norm{Vx}\le a\norm{x}+b\norm{Ax}$ for all $x\in\Dom(A)$; recall that $V$ is infinitesimal with respect to $A$ by hypothesis.
Then,
\begin{equation*}
\norm{VQ_-x}
\le
a\norm{Q_-x} + b\norm{AQ_-x}
\le
a\norm{x} + b\norm{(A+V)x} + b\norm{VQ_-x},
\end{equation*}
so that
\begin{align*}
\norm{VQ_-x}
&\le
\frac{a}{1-b}\norm{x} + \frac{b}{1-b}\bigl(\norm{Ax}+\norm{Vx}\bigr)\\
&\le
\frac{a(1+b)}{1-b}\norm{x} + \frac{b(1+b)}{1-b}\norm{Ax}.
\end{align*}
Thus,
\begin{equation}\label{eq:relBoundK}
\begin{aligned}
\norm{Kx}
&\le
\norm{P_+Q_-}\norm{Vx} + \norm{VQ_-x}\\
&\le
a\Bigl(\norm{P_+Q_-} + \frac{1+b}{1-b}\Bigr)\norm{x} + b\Bigl(\norm{P_+Q_-}+\frac{1+b}{1-b}\Bigr)\norm{\Lambda x}
\end{aligned}
\end{equation}
for $x\in\Dom(\Lambda)={\mathcal D}_+$. Since $b$ may be taken arbitrarily small, this implies that $K$ is $\Lambda$-bounded with
$\Lambda$-bound $0$. It therefore follows from Lemma~\ref{lem:specRad} that $r_\Lambda(S)\le\norm{S}<1$, which completes the
proof.
\end{proof}%
\begin{remark}\label{rem:relBoundK}
(1)
Estimate~\eqref{eq:relBoundK} suggests that also relatively bounded perturbations $V$ that are not necessarily infinitesimal
with respect to $A$ can be considered here. In fact, if $b_*\in[0,1)$ is the $A$-bound of $V$, then by~\eqref{eq:relBoundK} and
Lemma~\ref{lem:specRad} we have
\begin{equation*}
r_\Lambda(S)
\le
\norm{P_+Q_-} + b_*\Bigl(\norm{P_+Q_-}+\frac{1+b_*}{1-b_*}\Bigr)
,
\end{equation*}
and the right-hand side of the latter is smaller than $1$ if and only if
\begin{equation*}
\norm{P_+Q_-} < \frac{1-2b_*-b_*^2}{1-b_*^2}.
\end{equation*}
This is a reasonable condition on the norm $\norm{P_+Q_-}$ only for $b_*<\sqrt{2}-1$.
(2)
A similar result as in (1) can be obtained in terms of the $(A+V)$-bound of $V$: If for some $\tilde{b}\in[0,1)$ and
$\tilde{a}\ge 0$ one has $\norm{Vx} \le \tilde{a}\norm{x} + \tilde{b}\norm{(A+V)x}$ for all $x\in\Dom(A)=\Dom(A+V)$, then
standard arguments as in the above proof of Theorem~\ref{thm:genOpInfinitesimal} show that
\begin{equation*}
\norm{Vx}
\le
\frac{\tilde{a}}{1-\tilde{b}}\norm{x} + \frac{\tilde{b}}{1-\tilde{b}}\norm{Ax}
\end{equation*}
and, in turn,
\begin{align*}
\norm{VQ_-x}
&\le
\tilde{a}\norm{x} + \tilde{b}\norm{(A+V)x}
\le
\tilde{a}\norm{x} + \tilde{b}\norm{Ax} + \tilde{b}\norm{Vx}\\
&\le
\tilde{a}\Bigl( 1 + \frac{\tilde{b}}{1-\tilde{b}} \Bigr)\norm{x}
+ \tilde{b}\Bigl( 1 + \frac{\tilde{b}}{1-\tilde{b}} \Bigr)\norm{Ax}\\
&=
\frac{\tilde{a}}{1-\tilde{b}}\norm{x} + \frac{\tilde{b}}{1-\tilde{b}}\norm{Ax}
\end{align*}
for all $x\in\Dom(A)$. Plugging these into~\eqref{eq:relBoundK} gives
\begin{align*}
\norm{Kx}
&\le
\norm{P_+Q_-}\norm{Vx} + \norm{VQ_-x}\\
&\le
(1+\norm{P_+Q_-})\Bigl( \frac{\tilde{a}}{1-\tilde{b}}\norm{x} + \frac{\tilde{b}}{1-\tilde{b}}\norm{\Lambda x} \Bigr)
\end{align*}
for all $x \in \Dom(\Lambda) = {\mathcal D}_+$, which eventually leads to
\begin{equation*}
r_\Lambda(S)
\le
\norm{P_+Q_-} + \frac{\tilde{b}(1+\norm{P_+Q_-})}{1-\tilde{b}}
=
\frac{\norm{P_+Q_-}+\tilde{b}}{1-\tilde{b}}
.
\end{equation*}
The right-hand side of the latter is smaller than $1$ if and only if
\begin{equation*}
\norm{P_+Q_-} < 1-2\tilde{b}.
\end{equation*}
This is a reasonable condition on $\norm{P_+Q_-}$ only for $\tilde{b} < 1/2$.
\end{remark}
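The algebraic identities and thresholds in this remark can be double-checked mechanically. The following minimal sketch (pure bookkeeping with exact rational arithmetic, no operator theory) verifies the simplification of the bound on $r_\Lambda(S)$ in part~(2) and its equivalence with $\norm{P_+Q_-} < 1 - 2\tilde{b}$ on a grid of sample values.

```python
from fractions import Fraction as F

# Sample values q = ||P_+ Q_-|| in [0,1) and bt = b-tilde in [0,1/2]-ish.
for q in (F(0), F(1, 4), F(1, 2), F(3, 4)):
    for bt in (F(0), F(1, 8), F(1, 4), F(2, 5)):
        lhs = q + bt * (1 + q) / (1 - bt)  # bound on r_Lambda(S)
        rhs = (q + bt) / (1 - bt)          # simplified closed form
        assert lhs == rhs                  # the two expressions agree exactly
        # The bound is < 1 precisely when q < 1 - 2*bt.
        assert (lhs < 1) == (q < 1 - 2 * bt)
print("identities verified")
```

Since the check uses `fractions.Fraction`, both the identity and the threshold equivalence are confirmed exactly, not merely up to floating-point error.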
\section{The block diagonalization approach}\label{sec:blockDiag}
In this section, we discuss an approach to verify the hypotheses of Proposition~\ref{prop:GLS} and Lemma~\ref{lem:GLS} which
relies on techniques previously discussed in the context of block diagonalizations of operators and forms, for instance
in~\cite{MSS16} and~\cite{GKMSV17}, respectively; see also Remark~\ref{rem:MSS16} below.
Recall that for the two orthogonal projections $P_+$ and $Q_+$ from Hypothesis~\ref{hyp:minimax} the inequality
$\norm{P_+-Q_+}<1$ holds if and only if $\Ran Q_+$ can be represented as
\begin{equation}\label{eq:graph}
\Ran Q_+ = \{ f \oplus Xf \mid f\in\Ran P_+ \}
\end{equation}
with some bounded linear operator $X\colon\Ran P_+\to\Ran P_-$; in this case, one has
\begin{equation}\label{eq:normPQX}
\norm{P_+-Q_+}
=
\frac{\norm{X}}{\sqrt{1+\norm{X}^2}}
,
\end{equation}
see, e.g.,~\cite[Corollary~3.4\,(i)]{KMM03:181}. The orthogonal projection $Q_+$ can then be represented as the $2\times2$ block
operator matrices
\begin{equation}\label{eq:reprQ}
\begin{aligned}
Q_+
&=
\begin{pmatrix}
(I_{\Ran P_+}+X^*X)^{-1} & (I_{\Ran P_+}+X^*X)^{-1}X^*\\
X(I_{\Ran P_+}+X^*X)^{-1} & X(I_{\Ran P_+}+X^*X)^{-1}X^*
\end{pmatrix}\\
&=
\begin{pmatrix}
(I_{\Ran P_+}+X^*X)^{-1} & X^*(I_{\Ran P_-}+XX^*)^{-1}\\
(I_{\Ran P_-}+XX^*)^{-1}X & XX^*(I_{\Ran P_-}+XX^*)^{-1}
\end{pmatrix}
\end{aligned}
\end{equation}
with respect to $\Ran P_+\oplus\Ran P_-$, see, e.g.,~\cite[Remark~3.6]{KMM03:181}. In particular, we have
\begin{equation}\label{eq:PQY}
P_+Q_+|_{\Ran P_+}
=
(I_{\Ran P_+} + X^*X)^{-1}
,
\end{equation}
which is in fact the starting point for the current approach. With regard to the desired relations
$\Ran (P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$ and $\Ran (P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$, we need to establish that the operator
$I_{\Ran P_+} + X^*X$ maps ${\mathcal D}_+$ and ${\mathfrak D}_+$ into ${\mathcal D}_+$ and ${\mathfrak D}_+$, respectively.
Define the skew-symmetric operator $Y$ via the $2\times2$ block operator matrix
\begin{equation}\label{eq:defY}
Y
=
\begin{pmatrix} 0 & -X^*\\ X & 0 \end{pmatrix}
\end{equation}
with respect to $\Ran P_+\oplus\Ran P_-$. Then, the operators $I\pm Y$ are bijective with
\begin{equation}\label{eq:Ypm}
(I-Y)(I+Y)
=
\begin{pmatrix}
I_{\Ran P_+}+X^*X & 0\\
0 & I_{\Ran P_-}+XX^*
\end{pmatrix}
.
\end{equation}
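The representations~\eqref{eq:graph}--\eqref{eq:Ypm} are concrete enough to be verified numerically in finite dimensions. The following sketch (an illustration with arbitrary sample dimensions, not part of the argument) builds the graph projection for a random $X$ and checks the norm formula~\eqref{eq:normPQX}, the top-left block of~\eqref{eq:reprQ}, that is, identity~\eqref{eq:PQY}, and the factorization~\eqref{eq:Ypm}.

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 4, 6  # dim Ran P_+ and dim Ran P_-

X = rng.standard_normal((m, k))
T = np.vstack([np.eye(k), X])               # columns span the graph of X
Q_plus = T @ np.linalg.solve(T.T @ T, T.T)  # orthogonal projection onto it
P_plus = np.diag([1.0] * k + [0.0] * m)

norm = lambda A: np.linalg.norm(A, 2)       # spectral norm

# (eq:normPQX): ||P_+ - Q_+|| = ||X|| / sqrt(1 + ||X||^2).
assert np.isclose(norm(P_plus - Q_plus), norm(X) / np.sqrt(1 + norm(X) ** 2))

# (eq:PQY): the top-left block of Q_+ equals (I + X^*X)^{-1}.
assert np.allclose(Q_plus[:k, :k], np.linalg.inv(np.eye(k) + X.T @ X))

# (eq:Ypm): (I - Y)(I + Y) = diag(I + X^*X, I + XX^*).
Y = np.block([[np.zeros((k, k)), -X.T], [X, np.zeros((m, m))]])
I = np.eye(k + m)
D = np.block([[np.eye(k) + X.T @ X, np.zeros((k, m))],
              [np.zeros((m, k)), np.eye(m) + X @ X.T]])
assert np.allclose((I - Y) @ (I + Y), D)
```

The last check also makes the identity $(I-Y)(I+Y) = I - Y^2$ visible, since the skew-symmetric block structure of $Y$ gives $-Y^2 = \operatorname{diag}(X^*X, XX^*)$.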
The following lemma is extracted from various sources. We comment on this afterwards in Remark~\ref{rem:PQXY} below.
\begin{lemma}\label{lem:PQXY}
Suppose that the projections $P_+$ and $Q_+$ from Hypothesis~\ref{hyp:minimax} satisfy $\norm{P_+ - Q_+} < 1$, and let the
operators $X$ and $Y$ be as in~\eqref{eq:graph} and~\eqref{eq:defY}, respectively. Moreover, let ${\mathcal C}$ be an invariant subspace
for $P_+$ and $Q_+$ such that ${\mathcal C} = ({\mathcal C} \cap \Ran P_+) \oplus ({\mathcal C} \cap \Ran P_-) =: {\mathcal C}_+ \oplus {\mathcal C}_-$.
Then, the following are equivalent:
\begin{enumerate}
\renewcommand{\theenumi}{\roman{enumi}}
\item $I_{\Ran P_+} + X^*X$ maps ${\mathcal C}_+$ into itself;
\item $I_{\Ran P_-} + XX^*$ maps ${\mathcal C}_-$ into itself;
\item $Y$ maps ${\mathcal C}$ into itself;
\item $(I+Y)$ maps ${\mathcal C}$ into itself;
\item $(I-Y)$ maps ${\mathcal C}$ into itself.
\end{enumerate}
\end{lemma}
\begin{proof}
Clearly, the hypotheses imply that $P_+Q_+$ maps ${\mathcal C}$ into ${\mathcal C}_+$ and $P_-Q_+$ maps ${\mathcal C}$ into ${\mathcal C}_-$.
(i)$\Rightarrow$(ii).
Let $g \in {\mathcal C}_-$. Using the first representation in~\eqref{eq:reprQ}, we then have
$(I_{\Ran P_+} + X^*X)^{-1}X^*g = (P_+Q_+|_{\Ran P_-})g \in {\mathcal C}_+$. Hence, $X^*g \in {\mathcal C}_+$ by~(i) and, in turn,
$h := (I_{\Ran P_+} + X^*X)X^*g \in {\mathcal C}_+$. Using again~\eqref{eq:reprQ}, this yields
\begin{align*}
(I_{\Ran P_-} + XX^*)g
&=
g + XX^*g
=
g + X(I_{\Ran P_+}+X^*X)^{-1}h\\
&=
g + (P_-Q_+|_{\Ran P_+})h \in {\mathcal C}_-
.
\end{align*}
As a byproduct, we have also shown that $X^*$ maps ${\mathcal C}_-$ into ${\mathcal C}_+$.
(ii)$\Rightarrow$(i).
Using the identities $(I_{\Ran P_-} + XX^*)^{-1}X = P_-Q_+|_{\Ran P_+}$ and $X^*(I_{\Ran P_-}+XX^*)^{-1} = P_+Q_+|_{\Ran P_-}$
taken from the second representation in~\eqref{eq:reprQ}, the proof is completely analogous to the implication
(i)$\Rightarrow$(ii). In particular, we likewise obtain as a byproduct that $X$ maps ${\mathcal C}_+$ into ${\mathcal C}_-$.
(i),(ii)$\Rightarrow$(iii).
We have already seen that $X$ maps ${\mathcal C}_+$ into ${\mathcal C}_-$ and that $X^*$ maps ${\mathcal C}_-$ into ${\mathcal C}_+$. Taking into account that
${\mathcal C} = {\mathcal C}_+ \oplus {\mathcal C}_-$, this means that $Y$ maps ${\mathcal C}$ into itself.
(iii)$\Leftrightarrow$(iv),(v).
This is clear.
(iv),(v)$\Rightarrow$(i),(ii).
This follows immediately from identity~\eqref{eq:Ypm}.
\end{proof}%
\begin{remark}\label{rem:PQXY}
The proof of the equivalence (i)$\Leftrightarrow$(ii) and the one of the implication (i),(ii)$\Rightarrow$(iii) in
Lemma~\ref{lem:PQXY} are extracted from the proof of~\cite[Theorem~5.1]{GKMSV17}; see also~\cite[Theorem~6.3.1 and
Lemma~6.3.3]{SchmDiss}.
The equivalence (iv)$\Leftrightarrow$(v) can alternatively be directly obtained from the identity
\begin{equation*}
\begin{pmatrix}
I_{\Ran P_+} & 0\\
0 & -I_{\Ran P_-}
\end{pmatrix}
(I + Y)
\begin{pmatrix}
I_{\Ran P_+} & 0\\
0 & -I_{\Ran P_-}
\end{pmatrix}
=
I - Y
.
\end{equation*}
Such an argument has been used in the proof of~\cite[Proposition~3.3]{MSS16}.
The implication (iv),(v)$\Rightarrow$(i) can essentially be found in the proof of~\cite[Theorem~5.1]{GKMSV17}
and~\cite[Remark~6.3.2]{SchmDiss}.
\end{remark}
Below, we apply Lemma~\ref{lem:PQXY} with ${\mathcal C} = \Dom(A) = \Dom(B) = {\mathcal D}_+ \oplus {\mathcal D}_-$ or
${\mathcal C} = \Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2}) = {\mathfrak D}_+ \oplus {\mathfrak D}_-$, depending on the situation. The easiest case is
encountered in Theorem~\ref{thm:genSemibounded}:
\begin{proof}[Proof of Theorem~\ref{thm:genSemibounded}]
Suppose first that $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$ and ${\mathfrak b}[x , x] \le 0$ for all $x \in {\mathfrak D}_-$. We then have ${\mathfrak D}_- = \Ran P_-$
if $A$ is bounded from below and ${\mathfrak D}_+ = \Ran P_+$ if $A$ is bounded from above. Hence, item~(ii) or (i) in
Lemma~\ref{lem:PQXY}, respectively, with ${\mathcal C} = {\mathfrak D}_+ \oplus {\mathfrak D}_-$ is automatically satisfied. In any case, we have by
Lemma~\ref{lem:PQXY} that $I_{\Ran P_+} + X^*X$ maps ${\mathfrak D}_+$ into ${\mathfrak D}_+$, which by identity~\eqref{eq:PQY} means that
$\Ran (P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$. The representation~\eqref{eq:genSemibounded:form} now follows from
Proposition~\ref{prop:GLS}\,(a) and Remark~\ref{rem:PQbij}\,(1). If even $\Dom(A) = \Dom(B)$ and $\scprod{ x , Bx } \le 0$ for
all $x \in {\mathcal D}_-$, we use the same reasoning as above with ${\mathfrak D}_+$ and ${\mathfrak D}_-$ replaced by ${\mathcal D}_+$ and ${\mathcal D}_-$, respectively,
and obtain representation~\eqref{eq:genSemibounded:op} from Proposition~\ref{prop:GLS}\,(b) and Remark~\ref{rem:PQbij}\,(1).
The representation~\eqref{eq:genSemibounded:form} is then still valid by Lemma~\ref{lem:GLS} and the first part of the proof.
\end{proof}%
While certain conditions for Proposition~\ref{prop:GLS} and Lemma~\ref{lem:GLS} are part of the hypotheses of
Theorems~\ref{thm:genOpInfinitesimal} and~\ref{thm:genSemibounded}, in the situations of Theorems~\ref{thm:offdiagOp}
and~\ref{thm:offdiagForm} these need to be verified explicitly from the specific hypotheses at hand. Here, we rely on previous
considerations on block diagonalizations for block operator matrices and forms. In case of Theorem~\ref{thm:offdiagOp}, the
crucial ingredient is presented in the following result, extracted from~\cite{MSS16}. An earlier result in this direction is
commented on in Remark~\ref{rem:MSS16}\,(2) below.
\begin{proposition}[see~{\cite[Theorem~6.1]{MSS16}}]\label{prop:MSS16}
In the situation of Theorem~\ref{thm:offdiagOp} one has $\norm{P_+-Q_+} \le \sqrt{2}/2 <1$, and the operator identity
\begin{equation}\label{eq:blockDiag}
(I-Y)(A+V)(I-Y)^{-1} = A-YV
\end{equation}
holds with $Y$ as in~\eqref{eq:defY}.
\end{proposition}
\begin{proof}
Set $V_{\text{off}} := V|_{\Dom(A)}$, so that we have $B = A+V = A+V_\text{off}$ as well as $A-YV = A-YV_\text{off}$. Clearly,
the hypotheses on $V$ ensure that $V_\text{off}$ is $A$-bounded with $A$-bound $b_*<1$ and off-diagonal with respect to the
decomposition $\Ran P_+\oplus\Ran P_-$. By~\cite[Lemma~6.3]{MSS16} we now have
\begin{equation*}
\Ker(A+V_\text{off})
\subset
\Ker A
\subset
\Ran P_-
.
\end{equation*}
In light of~\eqref{eq:normPQX}, the claim therefore is just an instance of~\cite[Theorem~6.1]{MSS16}.
\end{proof}%
\begin{remark}\label{rem:MSS16}
(1)
Let $A_\pm:=A|_{\Ran P_\pm}$ be the parts of $A$ associated with the subspaces $\Ran P_\pm$, and write
\begin{equation*}
V|_{\Dom(A)}
=
\begin{pmatrix} 0 & W\\ W^* & 0 \end{pmatrix}
,
\end{equation*}
where $W\colon \Ran P_-\supset{\mathcal D}_-\to\Ran P_+$ is given by $Wx:=P_+Vx$, $x\in{\mathcal D}_-$. Then,
\begin{equation*}
A - YV
=
\begin{pmatrix}
A_+ - X^*W^* & 0\\
0 & A_- + XW
\end{pmatrix}
.
\end{equation*}
In this sense, identity~\eqref{eq:blockDiag} can be viewed as a block diagonalization of the operator $A+V$. For a more
detailed discussion of block diagonalizations and operator Riccati equations in the operator setting, the reader is referred
to~\cite{MSS16} and the references cited therein.
(2)
In the particular case where $0$ belongs to the resolvent set of $A$, the conclusion of Proposition~\ref{prop:MSS16} can be
inferred also from~\cite[Theorems~2.7.21 and~2.8.5]{Tre08}.
\end{remark}
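In finite dimensions, the block diagonalization of Proposition~\ref{prop:MSS16} can be checked numerically. The following sketch is an editorial illustration, not taken from~\cite{MSS16}; the sign conventions for $X$ and $Y$ are chosen self-consistently here and may differ from those of~\eqref{eq:defY}. It builds a self-adjoint block matrix $B = A + V$ with off-diagonal $V$, extracts $X$ from the spectral projection $Q_+$, and verifies the identity $(I-Y)(A+V)(I-Y)^{-1} = A - YV$.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2                                 # dim Ran P_+, dim Ran P_-

Ap = np.diag(rng.uniform(1.0, 2.0, p))      # A_+ > 0
Am = -np.diag(rng.uniform(1.0, 2.0, q))     # A_- < 0
W = 0.2 * rng.standard_normal((p, q))       # small off-diagonal block
A = np.block([[Ap, np.zeros((p, q))], [np.zeros((q, p)), Am]])
V = np.block([[np.zeros((p, p)), W], [W.T, np.zeros((q, q))]])
B = A + V                                   # self-adjoint; gap at 0 preserved

# Q_+ = spectral projection of B onto its positive spectrum; since
# ||P_+ - Q_+|| < 1 here, Ran Q_+ is the graph of X : Ran P_+ -> Ran P_-.
vals, vecs = np.linalg.eigh(B)
U = vecs[:, vals > 0]                       # basis of Ran Q_+
X = U[p:] @ np.linalg.inv(U[:p])

# With these (illustrative) conventions, Y block-diagonalizes A + V.
Y = np.block([[np.zeros((p, p)), -X.T], [X, np.zeros((q, q))]])
T = np.eye(p + q) - Y
lhs = T @ B @ np.linalg.inv(T)
rhs = A - Y @ V
assert np.allclose(lhs, rhs)
# The conjugated operator is block diagonal w.r.t. Ran P_+ (+) Ran P_-.
assert np.allclose(lhs[:p, p:], 0) and np.allclose(lhs[p:, :p], 0)
```

The identity holds because $\Ran Q_+$ is $B$-invariant, which is exactly the Riccati equation for $X$; the computation only illustrates the mechanism and makes no claim about the unbounded setting.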
\begin{proof}[Proof of Theorem~\ref{thm:offdiagOp}]
For $x \in {\mathcal D}_-$, we have
\begin{equation*}
\scprod{ x , Vx }
=
\scprod{ P_-x , VP_-x }
=
\scprod{ x , P_-VP_-x }
=
0
\end{equation*}
and, thus,
\begin{equation*}
\scprod{ x , (A+V)x }
=
\scprod{ x , Ax }
\le
0
.
\end{equation*}
Moreover, by Proposition~\ref{prop:MSS16} the inequality $\norm{P_+-Q_+}<1$ is satisfied. Let $Y$ be as in~\eqref{eq:defY}.
Since $\Dom(A+V)=\Dom(A)=\Dom(A-YV)$, it then follows from identity~\eqref{eq:blockDiag} that $I-Y$ maps
${\mathcal C} := \Dom(A) = {\mathcal D}_+ \oplus {\mathcal D}_-$ into itself. In turn, Lemma~\ref{lem:PQXY} implies that $I_{\Ran P_+} + X^*X$ maps
${\mathcal D}_+$ into itself, which by identity~\eqref{eq:PQY} means that $\Ran(P_+Q_+|_{{\mathcal D}_+}) \supset {\mathcal D}_+$. The claim now follows
from Proposition~\ref{prop:GLS}, Lemma~\ref{lem:GLS}, and Remark~\ref{rem:PQbij}\,(1).
\end{proof}
To the best of the author's knowledge, no direct analogue of Proposition~\ref{prop:MSS16} is known so far in the setting of form
rather than operator perturbations. Although the inequality $\norm{P_+-Q_+} \le \sqrt{2}/2$ can be established here as well under
fairly reasonable assumptions, see~\cite[Theorem~3.3]{GKMSV17}, the mapping properties of the operators $I \pm Y$ connected with
a corresponding diagonalization related to~\eqref{eq:blockDiag} are much harder to verify. The situation is even more subtle
there since also the domain equality $\Dom(\abs{A}^{1/2}) = \Dom(\abs{B}^{1/2})$ needs careful treatment. The latter is
conjectured to hold in a general off-diagonal form perturbation framework~\cite[Remark~2.7]{GKMV13}. Some characterizations have
been discussed in~\cite[Theorem~3.8]{Schm15}, but they all are hard to verify in a general abstract setting. A compromise in this
direction is to require that the form ${\mathfrak b}$ is semibounded, see~\cite[Lemma~3.9]{Schm15} and~\cite[Lemma~2.7]{GKMSV17}, which
forces the diagonal form ${\mathfrak a}$ to be semibounded as well, see below. As in the proof of Theorem~\ref{thm:genSemibounded} above,
this simplifies the situation immensely:
\begin{proof}[Proof of Theorem~\ref{thm:offdiagForm}]
Set ${\mathfrak v}_{\text{off}} := {\mathfrak v}|_{\Dom[{\mathfrak a}]}$, so that ${\mathfrak b} = {\mathfrak a} + {\mathfrak v} = {\mathfrak a} + {\mathfrak v}_{\text{off}}$. For
$x \in {\mathfrak D}_- = \Ran P_- \cap \Dom[{\mathfrak a}]$ we have
\begin{equation*}
{\mathfrak v}_{\text{off}}[ x , x ]
=
{\mathfrak v}[ P_-x , P_-x ]
=
0
\end{equation*}
and, thus,
\begin{equation*}
{\mathfrak b}[ x , x ]
=
{\mathfrak a}[ x , x ]
\le
0
.
\end{equation*}
In the same way, we see that ${\mathfrak b}[ x , x ] = {\mathfrak a}[ x , x ]$ for $x \in {\mathfrak D}_+$, which by the identity
${\mathfrak a}[ x , x ] = {\mathfrak a}[ P_+x , P_+x ] + {\mathfrak a}[ P_-x , P_-x ]$ for all $x \in \Dom[{\mathfrak a}]$ implies that along with ${\mathfrak b}$ the form ${\mathfrak a}$
is semibounded as well; cf.~also the proof of~\cite[Lemma~2.7]{GKMSV17}. In particular, we have ${\mathfrak D}_- = \Ran P_-$ if ${\mathfrak a}$ is
bounded from below and ${\mathfrak D}_+ = \Ran P_+$ if ${\mathfrak a}$ is bounded from above.
Let $m \in \mathbb{R}$ be the lower (resp.~upper) bound of ${\mathfrak a}$. We then have
\begin{equation*}
\abs{({\mathfrak a}-m)[ x , x ]}
=
\norm{ \abs{A-m}^{1/2}x }^2
\le
\norm{ \abs{A-m}^{1/2}(\abs{A}^{1/2}+I)^{-1} } \norm{ (\abs{A}^{1/2}+I)x }^2
\end{equation*}
for all $x \in \Dom[{\mathfrak a}]$, where $\abs{A-m}^{1/2}(\abs{A}^{1/2}+I)^{-1}$ is closed and everywhere defined, hence bounded by the
closed graph theorem. From this and the hypothesis on ${\mathfrak v}$ we see that
\begin{equation*}
\abs{ {\mathfrak v}[ x , x ] }
\le
\beta\bigl( \norm{\abs{A}^{1/2}x}^2 + \norm{x}^2 \bigr)
\end{equation*}
for some $\beta \ge 0$ and all $x \in \Dom[{\mathfrak a}]$, which means that ${\mathfrak b} = {\mathfrak a} + {\mathfrak v}$ is a semibounded saddle-point form in the
sense of~\cite[Section~2]{GKMSV17}.
Since $\Dom(\abs{B}^{1/2}) = \Dom[{\mathfrak b}] = \Dom[{\mathfrak a}] = \Dom(\abs{A}^{1/2})$ by hypothesis, ${\mathcal C} = \Dom[{\mathfrak a}]$ is invariant for
both $P_+$ and $Q_+$. Moreover, by~\cite[Theorem~3.3]{GKMSV17} (cf.~also~\cite[Theorem~2.13]{Schm15}) we have
\begin{equation*}
\Ker B
\subset
\Ker A
\subset
\Ran P_-
\end{equation*}
and $\norm{ P_+ - Q_+ } \le \sqrt{2}/2 < 1$. Taking into account that ${\mathfrak a}$ is semibounded as observed above,
Lemma~\ref{lem:PQXY} with ${\mathcal C} = \Dom[{\mathfrak a}] = {\mathfrak D}_+ \oplus {\mathfrak D}_-$ and identity~\eqref{eq:PQY} then imply as in the proof of
Theorem~\ref{thm:genSemibounded} that $\Ran(P_+Q_+|_{{\mathfrak D}_+}) \supset {\mathfrak D}_+$. The claim now follows from
Proposition~\ref{prop:GLS}\,(a) and Remark~\ref{rem:PQbij}\,(1).
\end{proof}%
| {
"timestamp": "2020-08-07T02:09:53",
"yymm": "2008",
"arxiv_id": "2008.02489",
"language": "en",
"url": "https://arxiv.org/abs/2008.02489",
"abstract": "The minimax principle for eigenvalues in gaps of the essential spectrum in the form presented by Griesemer, Lewis, and Siedentop in [Doc. Math. 4 (1999), 275--283] is adapted to cover certain abstract perturbative settings with bounded or unbounded perturbations, in particular ones that are off-diagonal with respect to the spectral gap under consideration. This in part builds upon and extends the considerations in the author's appendix to [J. Spectr. Theory 10 (2020), 843--885]. Several monotonicity and continuity properties of eigenvalues in gaps of the essential spectrum are deduced, and the Stokes operator is revisited as an example.",
"subjects": "Spectral Theory (math.SP); Mathematical Physics (math-ph); Functional Analysis (math.FA)",
"title": "On a minimax principle in spectral gaps",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9744347823646075,
"lm_q2_score": 0.7279754371026367,
"lm_q1q2_score": 0.7093645866198878
} |
https://arxiv.org/abs/1803.10918 | A simplified presentation of Specht modules | Fulton and Kraskiewicz gave a presentation of Specht modules as a quotient of the space of column tabloids by dual Garnir relations. We simplify this presentation by showing that it can be generated by a single relation for each pair of columns of a tableau with ordered columns, thereby significantly reducing the number of generators given in the original construction. Our presentation applies to all Specht modules, and is of a similar nature to a recent result by Friedmann-Hanlon-Stanley-Wachs that applies to staircase partitions. We show that our presentation implies the Friedmann-Hanlon-Stanley-Wachs presentation. | \section{Introduction}
Representations of the symmetric group $S_m$ have a long and beautiful history in mathematics. Partitions of $m$ are in bijection with the irreducible representations of $S_{m}$, which are given by Specht modules; these representations have a basis indexed by standard Young tableaux. The relations that allow us to express any tableau as a linear combination of standard Young tableaux are called Garnir relations.
For a partition $\lambda = (\lambda_{1}, \dots , \lambda_k)$ of $m$, let $\lambda^{'} = (\lambda^{'}_{1}, \dots , \lambda^{'}_{j})$ be the conjugate of $\lambda$
and let $S^{\lambda}$ be the Specht module corresponding to $\lambda.$ Also, let $\mathcal{T}_{\lambda}$ be the set of Young tableaux of shape $\lambda$ in which each element of $[m]$ appears exactly once.
For any $t \in \mathcal{T}_{\lambda}$, let $R_{t}$ be the row stabilizer of $t$, let $C_{t}$ be the column stabilizer of $t$, let $\{ t \}$ be the associated row tabloid, and let
\[ \varepsilon_{t} = \sum_{\beta \in C_{t}} \sgn(\beta) \{\beta t \} \] be the associated row polytabloid of $t$.
It is a classical result that the set of all $\varepsilon_{t}$ where $t$ is a standard Young tableau forms a basis of $S^{\lambda}$.
In \cite{Kras} and \cite{fulton}, both Kraskiewicz and Fulton introduce a dual construction of the Specht module, $\tilde{S^{\lambda}},$ using column tabloids rather than row tabloids. Column tabloids are quite similar to row tabloids: a row tabloid is an equivalence class of numberings of a Young diagram such that two row tabloids are equivalent if they have the same entries in each row. Dually, a \emph{column tabloid}, denoted $[t]$, is an equivalence class of numberings of a Young diagram such that two column tabloids are equivalent \emph{up to sign} if they have the same entries in each column. Herein lies a key difference between row and column tabloids: unlike row tabloids, column tabloids are antisymmetric within columns. That is, for a column tabloid $[t]$ and $\beta \in C_{t}$, we have $[t] = \sgn(\beta) \beta [t] = \sgn(\beta) [\beta t].$
Let $\tilde{M}^{\lambda}$ be the vector space generated by all $[t]$ where $t$ is a Young tableau of shape $\lambda$, modulo the antisymmetry relations which are generated by $[t] - \sgn(\beta)[\beta t]$ for each $\beta \in C_{t}$. Thus a basis of $\tilde{M}^{\lambda}$ is given by all {\sl ordered} column tabloids of shape $\lambda$, where by ``ordered" we mean that the numbers in the tableaux increase going down the columns.
The symmetric group acts on $[t] \in \tilde{M}^{\lambda}$ in the natural way: $\sigma [t] = [\sigma t].$ Fulton defines $\tilde{S^{\lambda}}$ to be the subspace of $\tilde{M}^{\lambda}$ spanned by elements of the form $\sum_{\alpha \in R_{t}} \alpha [t]$. He shows that this dual construction of a Specht module is isomorphic to its row tabloid counterpart, $S^{\lambda}$ \cite{fulton}.
In order to prove this result, Fulton defines a dual straightening algorithm which gives a presentation of Specht modules as a quotient space of $\tilde{M}^{\lambda}$ by dual Garnir relations. This presentation also appeared in \cite{Kras} two years earlier. There is a dual Garnir relation for each $t\in \mathcal{T}_\lambda$, each choice of adjacent columns, and each $k$ up to the length of the shorter column. In Section \ref{newSp}, we simplify this presentation significantly: we show that we need only a single relation called $\eta$ for each choice of adjacent columns of an ordered column tabloid $[t]\in \tilde{M}^{\lambda}$ (Theorem \ref{spechtgarnir}). Our result applies to all partitions, thereby extending a simplification achieved in \cite{FHSW} that applied only to staircase partitions.
We then use the relation $\eta$ in the study of the action of the symmetric group on a generalization of free Lie algebras introduced in \cite{FHSW}. This work is based on the following generalization of the bi-linear Lie bracket $[\cdot, \cdot]$ to an $n$-linear commutator $[\cdot, \cdot, \dots, \cdot]$, which arose from the study of the correspondence between ADE singularities and ADE Lie algebras in \cite{Friedmann}, and appeared previously in other contexts \cite{Fi, Ta, DT, Ka, Li, BL, Gu}.
\begin{definition}
A \emph{Lie Algebra $\mathcal{L}$ of the $n^{th}$ kind (LAnKe)} is a vector space equipped with an $n$-linear bracket such that the following hold.
\begin{enumerate}
\item The bracket is antisymmetric: $[x_{1}, \dots, x_{n}] = \sgn(\sigma)[x_{\sigma(1)}, \dots, x_{\sigma(n)}]$ for all $\sigma \in S_{n}$.
\item The \emph{generalized Jacobi identity} holds:
\begin{equation}\label{jacobiidentity}
[[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}] = \sum_{i=1}^{n}(-1)^{n-i} [[y_{1}, \dots, y_{n-1}, x_{i}], x_{1}, \dots, \hat{x_{i}}, \dots, x_{n}]
\end{equation}
for every $x_{i}, y_{j} \in \mathcal{L}$.
\end{enumerate}
\end{definition}
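For $n = 2$, the bracket specializes to an ordinary Lie bracket and the generalized Jacobi identity~(\ref{jacobiidentity}) reduces to the classical one. As a quick numerical sanity check (ours, not from the sources), one can test the $n = 2$ case of~(\ref{jacobiidentity}) on the matrix commutator:

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(a, b):
    # For n = 2 the n-linear bracket is the ordinary matrix commutator.
    return a @ b - b @ a

# Random matrices standing in for x1, x2, y1.
x1, x2, y1 = (rng.standard_normal((4, 4)) for _ in range(3))

# Equation (jacobiidentity) with n = 2 reads
#   [[x1, x2], y1] = -[[y1, x1], x2] + [[y1, x2], x1],
# which is equivalent to the classical Jacobi identity.
lhs = bracket(bracket(x1, x2), y1)
rhs = -bracket(bracket(y1, x1), x2) + bracket(bracket(y1, x2), x1)
assert np.allclose(lhs, rhs)
```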
Many natural objects in the Lie case generalize to the LA$n$Ke. This includes homomorphisms, ideals, and subalgebras; for a more complete description, see \cite{Friedmann}. In particular, we can generalize the free Lie algebra on a set $X$.
\begin{definition} [\cite{FHSW}]
A \emph{free LA$n$Ke} on a set $X$ is a LA$n$Ke $\mathcal L$ and map $i:X \to \mathcal L$ with the universal property that for any LA$n$Ke $\mathcal K$ and map $f: X \to \mathcal K$, there exists a unique LA$n$Ke homomorphism $F$ making the following diagram commute: \\
\begin{center}
\begin{tikzcd}
X \arrow{r}{i} \arrow{dr}{f}
& \mathcal L \arrow{d}{F}\\
& \mathcal K
\end{tikzcd}
\end{center}
\end{definition}
Just as the free Lie algebra on a set $X$ is the space generated by all Lie bracketings subject to the antisymmetry and bi-linearity of the Lie bracket and the Jacobi identity, the free LA$n$Ke on $X$ is the space generated by all $n$-bracketed elements in $X$ subject to the $n$-linear, antisymmetric bracket and the generalized Jacobi identity given in Equation (\ref{jacobiidentity}).
The \emph{multi-linear component} of the free LA$n$Ke on $X$ is the vector subspace spanned by $n$-bracketed words in which each generator in $X$ appears exactly once. In this paper, we will take all vector spaces to be over $\mathbb{C}$.
The free Lie algebra admits a natural grading given by the number of times the Lie bracket is applied, denoted here by $k-1$. This $k$ is still relevant for the free LA$n$Ke. However, the free LA$n$Ke takes into account a second variable as well: the number of entries in each bracket, denoted $n$.
For an element of the form $[[[...]..]..]$, for example, we say $k=4$ and $n=3$. Through this lens, the free Lie algebra is simply the case where $n=2$. It follows that the multi-linear component of the free LA$n$Ke involving $k-1$ bracketings will involve $kn-n-k+2$ generators; in the case of the free Lie algebra, this is exactly $k$.
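The generator count above is elementary arithmetic, but worth recording; the helper name below is ours:

```python
def num_generators(n: int, k: int) -> int:
    # k - 1 bracketings, each n-linear: (k - 1)(n - 1) + 1 = kn - n - k + 2.
    return k * n - n - k + 2

# The free Lie algebra case n = 2 recovers exactly k generators.
assert all(num_generators(2, k) == k for k in range(2, 10))
# The element [[[...]..]..] above has k = 4 and n = 3, hence 7 generators.
assert num_generators(3, 4) == 7
```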
When $n=2$, acting by permutation on these generators gives the famed $(k-1)!$-dimensional representations of the symmetric group $S_{k}$ on the multi-linear component of the free Lie algebra on $k$ generators.
These representations, denoted $\lie(k)$, are an object of longtime fascination to algebraic combinatorialists.
Here we continue the study initiated in \cite{FHSW} of the natural generalization of $\lie(k)$ to the representations of $S_{kn-n-k+2}$ on the multi-linear component of the free LA$n$Ke on $kn-n-k+2$ generators. This representation is called $\rho_{n,k}$. In particular, we study the case where $k=3$; it was proved in \cite{FHSW} that $\rho_{n,3}$ is isomorphic to the Specht module $S^{2^{n-1}1}$ and therefore has dimension given by the Catalan numbers. In Section \ref{catiso} we give an independent proof of this result, which has the advantage of including an explicit isomorphism between the two spaces (Theorem \ref{iso}). This isomorphism allows us to find an elegant basis for $\rho_{n,3}$ corresponding to standard Young tableaux of shape $2^{n-1}1$ (Corollary \ref{rhon3basis}).
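The dimension claim is easy to verify directly: for a two-column shape with column lengths $a \ge b$, the number of standard Young tableaux is the ballot number $\binom{a+b}{b} - \binom{a+b}{b-1}$ (a standard fact, not proved here), and for column lengths $(n, n-1)$ this is the Catalan number $C_n$. A small check (ours):

```python
from math import comb

def dim_two_column_specht(a: int, b: int) -> int:
    """Number of SYT with column lengths a >= b, by the standard
    ballot-number formula C(a+b, b) - C(a+b, b-1)."""
    return comb(a + b, b) - (comb(a + b, b - 1) if b >= 1 else 0)

def catalan(n: int) -> int:
    return comb(2 * n, n) // (n + 1)

# dim S^{2^{n-1} 1} (column lengths n and n - 1) is the Catalan number C_n.
for n in range(1, 10):
    assert dim_two_column_specht(n, n - 1) == catalan(n)
```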
\section{A new presentation of Specht modules}\label{newSp}
\subsection{Garnir relations for column tabloids}
In this section we recall known presentations of Specht modules in terms of dual Garnir relations.
In \cite{fulton}, Fulton introduces a map $$\alpha: \tilde{M}^{\lambda} \to S^{\lambda}$$ given by
\[ \alpha: [t] \mapsto \varepsilon_{t}. \]
The map $\alpha$ is equivariant and surjective. Moreover, $\ker(\alpha)$ is generated by a set of relations which Fulton calls the dual Garnir relations.
The dual Garnir relations are constructed as follows.
For a fixed column $c$ of a tableau $t$ of shape $\lambda$, and for $1\leq k \leq \lambda^{'}_{c+1}$, let $\pi_{c,k}(t)$ be the sum of column tabloids obtained from all possible ways of exchanging the top $k$ elements of the $(c+1)^{st}$ column of $t$ with any subset of size $k$ of the elements of column $c$, and fixing all other elements of $t$. For example, for
\[ t = \ytableausetup{centertableaux}
\begin{ytableau}
1 & 4 & 6\\
2 & 5 \\
3
\end{ytableau} \]
we have
\begin{center}
\includegraphics[scale = .25]{pi11.pdf}
\end{center}
Then the \emph{dual Garnir relation} $g_{c,k}(t)$ is
\begin{equation} \label{garnir} g_{c,k}(t) = [t] - \pi_{c,k}(t) .\end{equation}
Note that $t$ can be any tableau, not necessarily with increasing columns. The relation $g_{c,k}(t)$ is called a dual Garnir relation, and varying over $c$ and $k$ gives a straightening algorithm for column tabloids.
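The exchanges defining $\pi_{c,k}(t)$ are easy to enumerate by machine. The sketch below is our illustration; it tracks only the entries in each column, not the signs needed to reorder them. It reproduces the counts for the example above: $\binom{3}{1} = 3$ column tabloids for $\pi_{1,1}$ and $\binom{3}{2} = 3$ for $\pi_{1,2}$.

```python
from itertools import combinations

def pi_exchanges(columns, c, k):
    """All column fillings obtained by exchanging the top k entries of
    column c+1 with a size-k subset of column c (columns 0-indexed).
    Reordering signs are not tracked here."""
    left, right = columns[c], columns[c + 1]
    top, rest = right[:k], right[k:]
    results = []
    for subset in combinations(range(len(left)), k):
        new_left = list(left)
        moved = [new_left[i] for i in subset]
        for i, v in zip(subset, top):
            new_left[i] = v
        new_cols = list(columns)
        new_cols[c] = new_left
        new_cols[c + 1] = moved + rest
        results.append(new_cols)
    return results

t = [[1, 2, 3], [4, 5], [6]]            # columns of the example tableau
assert len(pi_exchanges(t, 0, 1)) == 3  # exchange 4 with each of 1, 2, 3
assert len(pi_exchanges(t, 0, 2)) == 3  # exchange {4,5} with each 2-subset
assert pi_exchanges(t, 0, 1)[0] == [[4, 2, 3], [1, 5], [6]]
```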
\begin{thm}[\cite{fulton}] \label{fultonthm}
Let $\tilde{G}^{\lambda}$ be the subspace of $\tilde{M}^{\lambda}$ generated by $g_{c,k}(t)$ where $t$ varies across all $t \in \mathcal{T}_{\lambda}$, $1 \leq c \leq \lambda_1 -1$ and $1 \leq k \leq \lambda^{'}_{c+1}.$ Then
the kernel of $\alpha$ is generated by $\tilde{G}^{\lambda}$. That is,
\[ S^{\lambda} \cong \tilde{M}^{\lambda} /\tilde{G}^{\lambda} . \]
\end{thm}
Fulton shows in an exercise in \cite{fulton} that this presentation can be simplified further using only the $g_{c,1}$ relations. A corollary to this theorem is another proof of the classical result that a basis of $S^{\lambda}$ is given by polytabloids of standard Young tableaux of shape $\lambda.$ Our main contribution to this theory will be to give a new presentation of $S^{\lambda}$ that
reduces the number of generators for $\tilde{G}^{\lambda}$ even further.
\subsection{One relation to generate them all}
In this section we derive a presentation of $S^\lambda$ that requires far fewer relations than those needed in Theorem \ref{fultonthm}.
We begin by narrowing our study to partitions $\mu$ of $n+m$ with shape $2^{m}1^{n-m}$, so $\mu$ has a column of size $n$ and a column of size $m$ for $1 \leq m \leq n$, and $\mu'=(n,m)$. We shall generalize these results to partitions of any shape at the end of this section.
Note that by the antisymmetry of column tabloids, $\tilde{M}^{\mu}$ can be induced from the Young subgroup $S_{n} \times S_{m} \leq S_{n+m}$ as follows:
\[ \tilde{M}^{\mu} \cong \ind_{S_{n} \times S_{m}}^{S_{n+m}} \left ( \sgn_{n} \otimes \sgn_{m} \right ) \cong \bigoplus_{i=0}^{m} S^{2^{i}1^{n+m - 2i}} .\]
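As a sanity check on this decomposition (ours, not from the sources), the dimensions add up: the ordered column tabloids of shape $2^m1^{n-m}$ number $\binom{n+m}{n}$, and this matches $\sum_{i=0}^{m} \dim S^{2^{i}1^{n+m-2i}}$ computed via the standard two-column ballot-number formula:

```python
from math import comb

def dim_two_column(a: int, b: int) -> int:
    # Number of SYT with column lengths a >= b (ballot-number formula).
    return comb(a + b, b) - (comb(a + b, b - 1) if b >= 1 else 0)

# Ordered column tabloids of shape 2^m 1^{n-m} are counted by C(n+m, n),
# which must equal the sum of dim S^{2^i 1^{n+m-2i}} over i = 0, ..., m.
for n in range(1, 8):
    for m in range(1, n + 1):
        total = sum(dim_two_column(n + m - i, i) for i in range(m + 1))
        assert total == comb(n + m, n)
```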
To ease our discussion, we introduce the following map. For $t \in \mathcal{T}_\lambda$, let $\pi_{c,1}^{\ell}([t])$ be the sum of column tabloids obtained by switching the $\ell^{th}$ element in column $c+1$ of $[t]$ with each of the elements in column $c$ of $[t]$ in turn. For example,
\begin{center}
\includegraphics[scale = .25 ]{pi211.pdf}
\end{center}
We can therefore define
\[ g_{c,1}^\ell ([t]): =[t]-\pi_{c,1}^\ell ([t]) . \]
\begin{prop}
The set $\{ g_{c,1}^\ell ([t]) \mid [t]\in \tilde{M}^{\lambda} \; , 1\leq \ell \leq \lambda_{c+1}' \}$ is the same as the set of the dual Garnir relations $\{ g_{c,1}(t) \mid t \in\mathcal{T}_\lambda \}$.
\end{prop}
\begin{proof} The statement follows directly from the definitions.
\end{proof}
The maps $\pi_{c,1}^\ell$ and $g_{c,1}^\ell$ allow us to narrow our study to tableaux with ordered columns. Additionally, they allow us to define a new linear transformation from
$\tilde{M}^{\mu}$ to $\tilde{M}^{\mu}$.
\begin{definition} \label{defeta}
For $\mu = 2^m1^{n-m}$, let $\eta: \tilde{M}^{\mu} \to \tilde{M}^{\mu}$ be the map
\[ \eta: [t] \mapsto \sum _{j=1}^m g_{1,1}^j ([t]) = m [t] - \sum_{j=1}^{m} \pi_{1,1}^{j}([t]). \]
\end{definition}
Because $\eta$ is a sum of equivariant linear maps, it follows that $\eta$ is equivariant as well. Furthermore, it follows from Theorem \ref{fultonthm} that $\im(\eta) \subseteq \ker(\alpha)$, as each summand in the definition of $\eta$ is a dual Garnir relation. Using a technique employed in \cite{FHSW}, we will now show that the relations generated by $\eta$ are all that is needed to generate $\tilde{G}^{\mu}$.
\begin{thm} \label{imetakeralpha}
For $\mu = 2^m1^{n-m}$, $\ker(\eta) \cong S^\mu$, and thus $\im(\eta) = \ker(\alpha)$ for $\alpha : \tilde{M}^\mu \rightarrow S^\mu$.
\end{thm}
Note that because
\[ \tilde{M}^{\mu} \cong \bigoplus_{i=0}^{m} S^{2^{i}1^{n+m - 2i}}\]
is multiplicity-free, by Schur's Lemma $\eta$ acts as a scalar on each irreducible submodule of $\tilde{M}^{\mu}$. Thus, finding the kernel of $\eta$ is equivalent to finding the irreducible submodules of $\tilde{M}^{\mu}$ on which $\eta$ acts as the scalar $0$.
We proceed by computing the action of $\eta$ on each irreducible submodule of $\tilde{M}^{\mu}$. For each $T \in \binom{[n+m]}{n}$, let $v_{T}\in \tilde{M}^{\mu}$ be the column tabloid with first column $T$ (both columns assumed to be in increasing order). For any $v\in \tilde{M}^{\mu}$, let $\langle v, v_T \rangle$ be the coefficient of $v_T$ in the expansion of $v$ in the basis of all $v_T$.
\begin{lem}\label{lemma}
For every $S, T \in \binom{[n+m]}{n}$,
\[ \langle \eta(v_{S}), v_{T} \rangle = \begin{cases}
m & \textrm{if } S = T \\
0 & \textrm{if } |S \cap T| < n-1 \\
(-1)^{x+y} & \textrm{if } |S \cap T| = n -1 \textrm{ with}\\
& x \in S \backslash T, y \in T \backslash S
\end{cases} \]
\end{lem}
\begin{proof}
The first two cases follow easily from the definition of $\eta$. For the last case, suppose $x$ is in the $r_{x}^{th}$ row in the first column of $v_{S}$ and $y$ is in the $r_{y}^{th}$ row of the second column of $v_{S}$. Then there are precisely $x-1$ numbers smaller than $x$ altogether, with $r_{x}-1$ of them in the first column. It follows that there are $x - r_{x}$ numbers smaller than $x$ in the second column. Similarly, there are $r_{y} - 1$ numbers smaller than $y$ in the second column and $y - r_{y}$ numbers smaller than $y$ in the first column.
There are two cases: $x< y$ or $y < x$.
Suppose $x<y$ and swap the positions of $x$ and $y$. Then in order to obtain an element in the basis of $\tilde{M}^{\lambda}$, we must move $y$ to the $(y-r_{y})^{th}$ row of the first column and $x$ to the $(x - r_{x} +1)^{st}$ row of the second column. This means moving $y$ from the $r_{x}^{th}$ row down to the $(y-r_{y})^{th}$ row, which requires $y - r_{y} - r_{x}$ transpositions. Similarly moving $x$ up from the $r_{y}^{th}$ row to the $(x - r_{x} + 1)^{st}$ row requires $r_{y} - x + r_{x} - 1$ transpositions. Altogether, this amounts to a sign change of
\[ (-1)^{r_{y} - x + r_{x} - 1 + y - r_{y} - r_{x}} = (-1)^{y-x-1} .\]
Finally, taking into account that $\eta$ itself contributes a sign change of $(-1)$, we obtain the coefficient $(-1)^{x+y}$ for $\langle \eta(v_{S}), v_{T} \rangle$. The case $y<x$ is similar.
\end{proof}
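Lemma~\ref{lemma} is easy to test by machine. The sketch below (ours) builds the matrix of $\eta$ directly from the definition, computing the column-reordering signs explicitly, and checks every entry against the lemma for $n = 3$, $m = 2$.

```python
from itertools import combinations

def perm_sign(seq):
    # Sign of the permutation that sorts seq (parity of inversions).
    s, lst = 1, list(seq)
    for i in range(len(lst)):
        for k in range(i + 1, len(lst)):
            if lst[i] > lst[k]:
                s = -s
    return s

def eta_matrix(n, m):
    """Matrix of eta on ordered column tabloids of shape 2^m 1^{n-m},
    built directly from the definition: eta(v) = m*v minus all single
    exchanges between the two columns, with reordering signs."""
    ground = range(1, n + m + 1)
    basis = [frozenset(S) for S in combinations(ground, n)]
    index = {S: i for i, S in enumerate(basis)}
    M = [[0] * len(basis) for _ in basis]
    for j, S in enumerate(basis):
        M[j][j] = m
        col1, col2 = sorted(S), sorted(set(ground) - S)
        for y in col2:                  # exchange y (column 2) ...
            for x in col1:              # ... with x (column 1)
                sign = (perm_sign([y if v == x else v for v in col1])
                        * perm_sign([x if v == y else v for v in col2]))
                M[index[frozenset(S - {x} | {y})]][j] -= sign
    return basis, M

# Check every entry against the lemma for n = 3, m = 2.
basis, M = eta_matrix(3, 2)
for j, S in enumerate(basis):
    for i, T in enumerate(basis):
        if S == T:
            assert M[i][j] == 2
        elif len(S & T) < 2:
            assert M[i][j] == 0
        else:
            (x,), (y,) = S - T, T - S
            assert M[i][j] == (-1) ** (x + y)
```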
We next calculate the scalar action of $\eta$ on the irreducible submodules of $\tilde{M}^{\mu}$.
\begin{thm} \label{eta}
On the irreducible submodule of $\tilde{M}^{\mu}$ isomorphic to $S^{2^{i}1^{(n+m) - 2i}}$, the operator $\eta$ acts like a scalar $\omega_{i}$, where
\[ \omega_{i} := 2(m-i) .\]
\end{thm}
\begin{proof}
For simplicity, take $T = [n]$.
Then for a given $i$, we take $t$ to be the Young tableau given by
\begin{center}
\includegraphics[scale = .45]{tableau.pdf}
\end{center}
Recall that $C_{t}$ is the column stabilizer of $t$ and $R_{t}$ is the row stabilizer of $t$. Then we denote by $e_{t}$ the Young symmetrizer of $t$:
\[ e_{t} = \sum_{\alpha \in R_{t}} \sum_{\beta \in C_{t}} \sgn(\beta)\alpha \beta. \]
As in \cite{FHSW}, we adopt a slight abuse of notation by referring to the restriction to $S^{2^{i}1^{n+m-2i}}$ of the space spanned by $\tau e_{t}v_{T}$ for $\tau \in S_{n+m}$ to be $S^{2^{i}1^{n+m-2i}}$ itself.
We define $d_{t}$, $f_{t}$ and $r_{t}$ as in \cite{FHSW}; that is, $r_{t} = \sum_{\alpha \in R_{t}} \alpha$, while $d_{t}$ is the signed sum of column permutations stabilizing $\{ 1, 2, \dots, n \}$, $\{ n+1, \dots, n+i \}$, and $\{ n+i+1, \dots, n+m \}$, and $f_{t}$ is the signed sum of permutations in $C_{t}$ that maintain the vertical order of these sets. Then $e_{t}v_{T} = r_{t}f_{t}d_{t}v_{T}$.
The antisymmetry of column tabloids ensures that $d_{t}v_{T}$ is a scalar multiple of $v_{T},$ because it simply permutes within columns. Therefore we can conclude that $r_{t}f_{t}v_{T}$ is a scalar multiple of $e_{t}v_{T}$, and in particular that $e_{t}v_{T}$ is nonzero, as the coefficient of $v_{T}$ in $r_{t}f_{t}v_{T}$ is nonzero.
Consider $\eta(r_{t}f_{t}v_{T}).$ In the subspace restricted to $S^{2^{i}1^{n+m-2i}}$, the fact that $\eta$ acts on $e_{t}v_{T}$ as a scalar implies the same is true of $r_{t}f_{t}v_{T}.$ In fact, because the coefficient of $v_{T}$ in $r_{t}f_{t}v_{T}$ is 1, we can determine precisely what this scalar is by computing $\langle \eta(r_{t}f_{t}v_{T}), v_{T} \rangle$. In particular, we wish to show that
\[ \langle \eta(r_{t}f_{t}v_{T}), v_{T} \rangle = \omega_{i} = 2(m-i) .\]
Again, following \cite{FHSW} we have
\[ r_{t}f_{t}v_{T} = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle v_{S} .\]
Applying the linear operator $\eta$ thus gives
\[ \eta(r_{t}f_{t}v_{T}) = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle \eta(v_{S}) .\]
Note that when $T = S$, by Lemma \ref{lemma} we have $\langle \eta(v_{T}), v_{T} \rangle = m$. With this, we can compute the coefficient of $v_{T}$ in general by
\begin{align} \label{equation}
\hskip -3em \langle \eta (r_{t} f_{t} v_{T}), v_{T} \rangle = \sum_{S \in \binom{[n+m]}{n}} \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle = m + \sum_{S \in \binom{[n+m]}{n} \backslash \{ T \} } \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle .
\end{align}
By Lemma \ref{lemma}, for $T \neq S$, $\langle \eta(v_{S}), v_{T} \rangle \neq 0$ only when $S$ and $T$ differ by a single element. In the sum $r_{t}f_{t}v_{T}$, there are two types of possible $v_{S}$ that fulfill this criterion.
\begin{enumerate}
\item \emph{We can obtain $v_{S}$ from a single row swap.} That is, up to signs, $v_{S}$ is given by $(j, n+j)v_{T}$ for $1 \leq j \leq i$, so $(j, n+j) \in R_{t}$. In this case, in order to write $(j, n+j)v_{T}$ in our basis, we must move $j$ from the $j^{th}$ row to the $1^{st}$ row of the second column and $n+j$ from the $j^{th}$ row to the $n^{th}$ row of the first column. In total, this gives a sign change of $(-1)^{j-1+n-j}= (-1)^{n-1}$.
By Lemma \ref{lemma}, for such a $v_{S}$, we get $\langle \eta(v_{S}), v_{T} \rangle = (-1)^{n+j+j} = (-1)^{n}$. Hence overall we get a contribution to Equation \ref{equation} of
\[ \langle r_{t}f_{t}v_{T}, v_{S} \rangle \langle \eta(v_{S}), v_{T} \rangle = (-1)^{n-1 + n} = -1. \]
There are $i$ such possible $v_{S}$. Therefore this case contributes $-i$ to Equation \ref{equation}.
\item \emph{We can obtain $v_{S}$ by a swap coming from a column permutation $\sigma$ in $f_{t}$.} Note that because $f_{t}$ maintains the order of $\{ 1, 2, \dots, n \}$, $\{ n+1, \dots, n+i \}$, and $\{ n+i+1, \dots, n+m \}$ and we require that $|S \cap T| = n-1$, it must be that $S = \{ 1, 2, \dots, n-1, n+i+1 \}$. Suppose $\sigma$ moves $n$ to the $(n+\ell)^{th}$ row of $t$ for $1 \leq \ell \leq m-i$. To calculate the sign of $\sigma$, note that in order to move $n$ to the $(n+ \ell)^{th}$ row of $t$, we must have
\[ \sigma = (n, n+ i + \ell)(n, n+i + \ell -1) \dots (n, n+ i + 1), \]
so $\sgn(\sigma) = (-1)^{\ell}$.
In $\sigma v_{T}$, $n$ is in the $(i + \ell)^{th}$ row of the second column. In order to put this in our basis, we must move $n$ to the first row in the second column, which requires $i + \ell - 1$ transpositions.
Combining these, we have a sign change of $(-1)^{\ell + i + \ell -1} = (-1)^{i-1}$.
For such a $v_{S}$, the coefficient $\langle \eta(v_{S}), v_{T} \rangle$ is $(-1)^{n+i+1+n} = (-1)^{i+1}$.
Thus for such a $v_{S}$ and $\sigma$, we get a total coefficient of $(-1)^{i+1 + i -1} = 1.$
There are $m-i$ possible $\sigma$ (one for each $\ell$), and so we get a contribution to Equation \ref{equation} of $m-i$.
\end{enumerate}
Thus combining the $T=S$ case with the two cases above, we have
\[ \omega_i=\langle \eta (r_{t} f_{t} v_{T}), v_{T} \rangle = m + (- i) + (m-i) = 2(m-i) .\] \end{proof}
\begin{proof}[Proof of Theorem \ref{imetakeralpha}] By Theorem \ref{eta}, $\omega_i$ is $0$ only when $i=m$, so $\ker(\eta) \cong S^{\mu}$. Since $\eta$ acts as a nonzero scalar on every other irreducible submodule of $\tilde{M}^{\mu}$, $\im(\eta)$ is the direct sum of those submodules, which is precisely $\ker(\alpha)$, and the theorem is proved.
\end{proof}
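Theorem~\ref{imetakeralpha} can likewise be checked numerically in small cases: building $\eta$ from the entry formula of Lemma~\ref{lemma}, the nullity of the resulting matrix equals $\dim S^{\mu}$. For $n = 3$, $m = 2$ (so $\mu = 2^21$), $\dim S^{\mu} = \binom{5}{2} - \binom{5}{1} = 5$. A sketch (ours):

```python
from itertools import combinations
import numpy as np

def eta_from_lemma(n, m):
    # Matrix of eta in the basis v_S, with entries per the lemma:
    # m on the diagonal, (-1)^(x+y) when |S ∩ T| = n - 1, else 0.
    basis = [frozenset(S) for S in combinations(range(1, n + m + 1), n)]
    M = np.zeros((len(basis), len(basis)))
    for j, S in enumerate(basis):
        for i, T in enumerate(basis):
            if S == T:
                M[i, j] = m
            elif len(S & T) == n - 1:
                (x,), (y,) = S - T, T - S
                M[i, j] = (-1) ** (x + y)
    return M

# For mu = 2^2 1 (n = 3, m = 2): ker(eta) has dimension dim S^mu = 5.
M = eta_from_lemma(3, 2)
assert M.shape[0] - np.linalg.matrix_rank(M) == 5
```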
Theorem \ref{imetakeralpha} allows us to generate $\tilde{G}^{\mu}$ for any $\mu$ with two columns using only the single $\eta$ relation.
We now consider any partition $\lambda = (\lambda_{1}, \dots , \lambda_{k})$ with conjugate $\lambda^{'} = (\lambda^{'}_{1}, \dots , \lambda^{'}_{j})$.
For $[t] \in \tilde{M}^{\lambda}$, let $h_{c}([t])$ be the result of applying $\eta$ to the $c^{th}$ and $(c+1)^{st}$ columns of $[t]$, leaving all other columns of $[t]$ fixed.
\begin{thm}\label{spechtgarnir}
For any partition $\lambda$ of $m$, let $\tilde{H}^{\lambda}$ be the space generated by $h_{c}([t])$ for every $[t] \in \tilde{M}^{\lambda}$ and $1 \leq c \leq \lambda_{1}-1$. Then
\[ S^{\lambda} \cong \tilde{M}^{\lambda} / \tilde{H}^{\lambda} .\]
\end{thm}
Theorem \ref{spechtgarnir} dramatically reduces the number of generators needed to find $\tilde{G}^{\lambda}$. The original construction of Theorem \ref{fultonthm} required enumerating over every $1 \leq k \leq \lambda _{c+1}'$ for every pair of columns $c$ and $c+1$ of every $t \in \mathcal{T}_\lambda$. Even Fulton's simplification using only $g_{c,1}$ relations requires enumerating over $t \in \mathcal{T}_\lambda$ for every pair of columns $c$ and $c+1$. By contrast, our construction uses a single relation for every pair of adjacent columns, and $[t]$ varies in $\tilde{M}^{\lambda}$, a significantly smaller space than $\mathcal{T}_\lambda$.
\section{A CataLAnKe Isomorphism}\label{catiso}
In this section, we will restrict our attention to tableaux of shape $2^{n-1}1$, and turn to the multi-linear component of the free LA$n$Ke.
\subsection{The Jacobi identity and Garnir relations}
There is an intimate link between the space of column tabloids and the multilinear component of the free LA$n$Ke.
Take the case $k=3$ and $n=3$. A typical bracket looks like
\[ [[1,2,3],4,5]. \]
Note that when $k=3$ and there are two brackets, we can always have the internal bracket justified to the left:
\[ [4,5,[1,2,3]] = -[4,[1,2,3],5]=[[1,2,3],4,5]. \]
We call brackets that are justified to the left \emph{combs}. The antisymmetry of column tabloids is precisely the antisymmetry in the combs. For example,
$$ [[1,2,3],4,5]=-[[1,2,3],5,4]=[[1,3,2],5,4] $$
corresponds to
\begin{center}
\includegraphics[scale = .25 ]{antisym.pdf}
\end{center}
Following \cite{FHSW}, we let $V_{n,3}$ be the space of antisymmetric, multilinear, left-comb brackets, without imposing the Jacobi identity. It follows that $V_{n,3}$ and $\tilde{M}^{2^{n-1}1}$ are isomorphic as $S_{2n-1}$--modules:
\[ V_{n,3} \cong \bigoplus_{i=0}^{n-1} S^{2^{i}1^{2n-2i-1}} \cong \tilde{M}^{2^{n-1}1}. \]
We can formalize this by defining \[\Omega: V_{n,3} \to \tilde{M}^{2^{n-1}1}\] to be the map that sends a bracket $v_{T}$ to its corresponding column tabloid. For the remainder of this paper, we will abuse notation by referring to $\Omega(v_{T})$ and $v_{T}$ interchangeably.
As in \cite{FHSW}, we define an $S_{2n-1}$--module homomorphism $\varphi: V_{n,3} \to V_{n,3}$ by
\begin{align*}
\varphi([[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}] ) ={}& [[x_{1}, \dots, x_{n}], y_{1}, \dots, y_{n-1}]\\
&- \sum_{i=1}^{n}(-1)^{n-i} [[y_{1}, \dots, y_{n-1}, x_{i}], x_{1}, \dots, \hat{x_{i}}, \dots, x_{n}],
\end{align*}
so $\varphi =0$ if the Jacobi identity holds. Thus by construction, $\ker(\varphi) = \rho_{n,3}$.
Note that by our above argument, we can define $\varphi:\tilde{M}^{2^{n-1}1} \to \tilde{M}^{2^{n-1}1}$ by composition with $\Omega.$
\begin{prop}\label{garnirJI}
The image of $[t]$ under $\varphi: \tilde{M}^{2^{n-1}1} \to \tilde{M}^{2^{n-1}1}$ is a dual Garnir relation for each $[t]$.
\end{prop}
\begin{proof}
The image $\varphi([t])$ is the relation $g_{1,n-1}(t)$.
\end{proof}
Proposition \ref{garnirJI} will prove informative in constructing our isomorphism from $\rho_{n,3}$ to $S^{2^{n-1}1}$, as will the following lemma.
\begin{lem}\label{kernelalpha}
For $\mu = 2^{n-1}1$, we have $\im(\eta) \subseteq \im(\varphi).$
\end{lem}
\begin{proof}[Proof of Lemma \ref{kernelalpha}]
As before, let $T = [n]$ and
$$v_{T} = [[1, \dots ,n ], n+1, \dots , 2n-1].$$
For $i\in [n]$ and $j\in [n-1]$, let $R_{i} = \{ n+1, \dots, 2n-1, i \} $ and $S_{i,j} = \{ 1, \dots, \hat{i}, \dots, n, n+j \}$, so that
\begin{align*}
v_{R_{i}} &= [[ i, n+1, \dots, 2n-1], 1, \dots, \hat{i}, \dots, n],\\
v_{S_{i,j}} &= [[1, \dots, \hat{i}, \dots, n, n+j],i, n+1, \dots, \widehat{(n+j)}, \dots, 2n-1].
\end{align*}
We will show that $\eta( v_{T}) \in \im(\varphi)$.
The image of $v_{T}$ by $\varphi$ is
\begin{equation*}
\varphi (v_{T}) =v_{T} - \sum_{i=1}^{n} (-1)^{i-1} v_{R_{i}}.
\end{equation*}
We now claim that
\begin{equation} \label{eqneta}
\eta(v_{T}) = -\left ( \varphi(v_{T}) + \sum_{i = 1}^{n} (-1)^{i-1} \varphi(v_{R_{i}}) \right ) ,\end{equation}
from which the lemma follows. To see why equation (\ref{eqneta}) is true, consider each $\varphi(v_{R_{i}})$. One can verify that
\begin{equation}
\varphi(v_{R_{i}}) = v_{R_{i}} - \left ( \sum_{j = 1}^{n-1} (-1)^{j+n-1} v_{S_{i,j}} + (-1)^{i-1}v_{T} \right ) .
\end{equation}
Now consider
\begin{equation} \label{x} \varphi(v_{T}) + \sum_{i = 1}^{n} (-1)^{i-1} \varphi(v_{R_{i}}) .\end{equation}
By our above discussion, we can rewrite equation (\ref{x}) as
\[ \Big(v_{T} - \sum_{i =1}^{n} (-1)^{i-1} v_{R_{i}} \Big) + \sum_{i = 1}^{n} (-1)^{i-1} \Big ( v_{R_{i}} - \Big ( \sum_{j = 1}^{n-1} (-1)^{j+n-1} v_{S_{i,j}} + (-1)^{i-1}v_{T} \Big ) \Big) .\]
Noting that the $v_{R_{i}}$ cancel for every $i$ and adjusting for signs, we simplify this to
\begin{align*}
-(n-1)v_{T}-\sum_{i=1}^{n} \Big( \sum_{j=1}^{n-1}(-1)^{n+j+i}v_{S_{i,j}} \Big) .
\end{align*}
Observe that by Definition \ref{defeta} this is precisely $- \eta(v_{T})$.
It follows that $\im(\eta) \subseteq \im(\varphi)$.
\end{proof}
\subsection{The isomorphism between $\rho_{n,3}$ and $S^{2^{n-1}1}$}
We now move from the world of column tabloids back to the more standard one of row tabloids and row polytabloids. Recall that for a tableau $t \in \mathcal{T}_\lambda$,
$\{ t \}$ is the associated row tabloid and $\varepsilon_{t}$ is the associated row polytabloid.
We define a map $\Psi: \rho_{n,3} \to S^{2^{n-1}1}$ by first defining a map $\tilde{\Psi}: V_{n,3} \to S^{2^{n-1}1}$ and then considering the restriction of this map to $\rho_{n,3}.$
For a bracket $v = [[x_{1}, \dots , x_{n}], y_{1}, \dots, y_{n-1}] \in V_{n,3}$, let $t(v)$ be the tableau labeled compatibly with the bracket, as in:
\[ t(v) =\ytableausetup
{mathmode, boxsize=2.25em, centertableaux}
\begin{ytableau}
x_{1} & y_{1} \\
x_{2} & y_{2} \\
\vdots & \vdots \\
x_{n-1} & y_{n-1} \\
x_{n} \\
\end{ytableau} \; . \]
\begin{definition} \label{Psidef}
The map $\tilde{\Psi}: V_{n,3} \to S^{2^{n-1}1} $ is given by
\[ \tilde{\Psi}: v \mapsto \varepsilon_{t(v)} .\]
\end{definition}
Note that $\tilde{\Psi}(v) = (\alpha \circ \Omega)(v).$
It is therefore clear that $\tilde{\Psi}$ is a well-defined $S_{2n-1}$--module homomorphism and that $\ker(\alpha)\cong \ker(\tilde{\Psi})$. By Theorem \ref{fultonthm} we know that every dual Garnir relation is in $\ker(\alpha)$. Since by Proposition \ref{garnirJI}, the Jacobi identity relations are dual Garnir relations, they are in $\ker(\tilde{\Psi})$. That is, $\tilde \Psi (\varphi (u))=0$ for every $u\in V_{n,3}$. An independent proof which makes this fact more explicit appears in Appendix \ref{appendixpf}.
Because $\varphi:V_{n,3} \to V_{n,3}$ is an $S_{2n-1}$-module homomorphism, we can write $V_{n,3} \cong \ker(\varphi) \oplus \im(\varphi)$ and let
$$\gamma: \ker(\varphi) \oplus \im(\varphi) \to \ker(\varphi)$$ be the projection map. Since $\tilde \Psi (\im \varphi ) =0$, we can consider the restriction of $\tilde{\Psi}$ to $\rho_{n,3}\cong \ker(\varphi)$.
\begin{definition}
For $x \in \rho_{n,3}$, let $\Psi: \rho_{n,3} \to S^{2^{n-1}1}$ be defined by
\[\Psi(x) = \tilde{\Psi}|_{{\rho_{n,3}}}(x) = (\tilde{\Psi} \circ \gamma)(x). \]
\end{definition}
We now state the main theorem of this section.
\begin{thm}\label{iso}
The map $\Psi$ is an $S_{2n-1}$--module isomorphism.
\end{thm}
\begin{proof}
It is clear that $\Psi$ is surjective. It remains to show that $\Psi$ is injective. Note that it is sufficient to show that $\im(\varphi)$ is isomorphic to $\ker(\tilde\Psi).$ By Theorem \ref{imetakeralpha}, $\ker(\alpha) = \im(\eta)$ and we have that $\im (\varphi)\subseteq \ker(\tilde\Psi)$. Because $V_{n,3} \cong \tilde{M^{2^{n-1}1}}$, it follows (with a slight abuse of notation) that $\im(\varphi) \subseteq \im(\eta).$ By Lemma \ref{kernelalpha}, the other containment $\im (\eta) \subseteq \im (\varphi)$ holds as well, implying that $\im (\varphi)\cong \ker(\tilde\Psi)$ as needed. \end{proof}
\begin{cor}[The CataLAnKe Theorem, \cite{FHSW}] \label{catalanke} The representation $\rho_{n,3}$ is isomorphic to $S^{2^{n-1}1}$.
\end{cor}
An alternative method to prove Theorem \ref{iso} was pointed out to us by Michelle Wachs. In \cite{lie}, the $g_{1,1}$ relations are shown to be equivalent to the Jacobi identity. Combining this result with the observation in \cite{fulton} that over $\mathbb{C}$, the $g_{c,1}$ relations generate $\ker(\alpha)$ gives another way to show the isomorphism between $\rho_{n,3}$ and $S^{2^{n-1}1}$ by $\Psi$.
We can use Theorem \ref{iso} to find a basis for $\rho_{n,3}$ by using the iconic basis of the Specht module.
\begin{definition}
A bracket $[[x_{1}, \dots, x_{n}],y_{1}, \dots , y_{n-1}]$ is \emph{standard} if $x_{1} < x_{2}< \dots < x_{n}$, $y_{1} < y_{2}< \dots < y_{n-1}$ and $x_{j} < y_{j}$ for every $j \in [n-1].$
\end{definition}
\begin{cor}\label{rhon3basis}
The set of standard brackets forms a basis for $\rho_{n,3}$.
\end{cor}
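As a sanity check on Corollary \ref{rhon3basis} (not part of the argument above), the standard brackets can be enumerated directly: they correspond to standard Young tableaux of shape $2^{n-1}1$, so their number should be the Catalan number $C_{n}$, whence the name CataLAnKe. A minimal Python sketch:

```python
from itertools import combinations
from math import comb

def standard_brackets(n):
    """Enumerate standard brackets [[x_1,...,x_n], y_1,...,y_{n-1}] on
    {1,...,2n-1}: both sequences increasing and x_j < y_j for j in [n-1]."""
    universe = range(1, 2 * n)
    found = []
    for xs in combinations(universe, n):             # xs is already increasing
        ys = tuple(sorted(set(universe) - set(xs)))  # ys is increasing
        if all(xs[j] < ys[j] for j in range(n - 1)):
            found.append((xs, ys))
    return found

def catalan(n):
    return comb(2 * n, n) // (n + 1)

# dim S^{2^{n-1}1} = #SYT(2^{n-1}1) = C_n, so the counts should agree.
for n in range(2, 7):
    assert len(standard_brackets(n)) == catalan(n)
```

For $n=3$ this yields five standard brackets, matching $\dim S^{(2,2,1)} = C_{3} = 5$.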
\paragraph{Acknowledgements} The authors would like to thank Michelle Wachs, Vic Reiner, Marissa Miller, and Leslie Nordstrom for useful conversations. The authors gratefully acknowledge the National Science Foundation (DMS–1143716) and Smith College for their support of the Center for Women in Mathematics and the Post-Baccalaureate Program. This work was also partially funded by a Smith College Summer Research Fund.
\vskip 1cm
\nocite{*}
\bibliographystyle{alpha}
https://arxiv.org/abs/2111.11860

\title{A Discrete-Time Compartmental Epidemiological Model for COVID-19 with a Case Study for Portugal}

\begin{abstract}
In [Ecological Complexity 44 (2020) Art.\ 100885, DOI: https://doi.org/10.1016/j.ecocom.2020.100885] a continuous-time compartmental mathematical model for the spread of the Coronavirus disease 2019 (COVID-19) is presented with Portugal as case study, from 2 March to 4 May 2020, and the local stability of the Disease Free Equilibrium (DFE) is analysed. Here, we propose an analogous discrete-time model and, using a suitable Lyapunov function, we prove the global stability of the DFE point. Using COVID-19 real data, we show, through numerical simulations, the consistency of the obtained theoretical results.
\end{abstract}

\section{Introduction}
Coronaviruses belong to the family \emph{Coronaviridae}. They are viruses that infect
humans, other mammals, and birds. The infection usually affects the respiratory system,
and the symptoms may vary from a simple cold to pneumonia. To date, eight coronaviruses are known.
The new coronavirus SARS-CoV-2 causes COVID-19, and was identified for the first time
in December 2019 in Wuhan, China. Two other coronaviruses have caused outbreaks: SARS-CoV,
in 2002--2003, and MERS-CoV in 2012 \citep{Covid19}. The COVID-19 pandemic is an ongoing pandemic,
probably the most significant one in human history. Its effects are so severe that it has been
necessary to use quarantining to control the spread, and it has forced humans to adapt
to this new situation \cite{MyID:461}.
Mathematical compartmental models have long been used successfully to study the spread
and consequences of infectious diseases.
The techniques to study compartmental models are vast: continuous models, fractional models,
and discrete models, among others \cite{MyID:461}. The complexity of the models is determined
by the effects and consequences of the disease. Several models have been proposed for
COVID-19 spread; \mbox{see \cite{MyID:461,PST,ST,NT}}, among others. In this work, we are
interested in discretizing the model presented in \cite{PST}, using the Mickens method
\cite{Mickens94,Mickens02}, which was applied successfully in several
different contexts \cite{VD}.
In Section~\ref{sec2}, we recall the SAIQH
(susceptible--asymptomatic--infectious--quarantined--hospitalized) continuous-time
mathematical model, the equilibrium points and the stability results of \cite{PST}.
Section~\ref{sec3} is dedicated to the original results of our work:
the new discrete-time model, the proof of its well-posedness,
the computation of the equilibrium points, and the proof
of the stability of the disease-free equilibrium point. We end Section~\ref{sec3}
with numerical simulations showing the consistency of our results.
Finally, Section~\ref{sec4} is dedicated to discussion
of the obtained results and some possible future work directions.
\section{Preliminaries}
\label{sec2}
This section is dedicated to the presentation of the continuous-time model \cite{PST}
and its results, which establish the stability of the equilibrium points.
For more information, \mbox{see \cite{PST}}.
The total living population under study at time $t \geq 0$ is denoted by $N(t)$ as follows:
\begin{equation}
N(t)=S(t)+A(t)+I(t)+Q(t)+H(t)+\overline{H}(t)
\end{equation}
where $S(t)$ represents the susceptible individuals; $A(t)$ the infected individuals
without symptoms or with mild ones; $I(t)$ the infected individuals; $Q(t)$ the
individuals in quarantine, that is, in isolation at home; $H(t)$ the hospitalized
individuals; and, finally, $\overline{H}(t)$ the hospitalized individuals in intensive care units.
There is another class that is also considered, $D(t)$, that gives the cumulative
number of deaths due to COVID-19 for all $t \geq 0$.
Regarding the parameters, all of them are positive.
The recruitment rate into the susceptible class is $\Lambda$;
all individuals of all classes are subject to the natural
death rate $\mu$ along all time $t \geq 0$ under study.
Susceptible individuals may become infected with COVID-19 at the following rate:
\begin{equation}
\lambda(t)=\frac{\beta (l_{A} A(t)+I(t)+l_{H}H(t))}{N(t)},
\end{equation}
where $\beta$ is the human-to-human transmission rate per unit of time (day)
and $l_{A}$ and $l_{H}$ quantify the relative transmissibility of asymptomatic
individuals and hospitalized individuals, respectively. The class $\overline{H}$
does not enter in $\lambda$ because the number of health care workers that become
infected by SARS-CoV-2 in intensive care units is very low and can be neglected.
A fraction $p \in [0,1]$ of the susceptible population is in quarantine at home,
at rate $\phi$. Consequently, only a fraction $1-p \in [0,1]$ of susceptible individuals
are assumed to be able to become infected. Since there is uncertainty about long-immunity
after recovery, it is assumed that individuals of class $Q$ will become susceptible
again at a rate $\omega$. It is also considered that only a fraction $m \in[0,1]$
of the quarantined individuals move from class $Q$ to $S$. It means that $(m \times 100)\%$
of the quarantined individuals return to class $S$ at the end of $\frac{1}{\omega}$ days.
These assumptions are justified by the state of calamity that was immediately decreed
by the government of Portugal to address the state of emergency, which was fully respected
by the Portuguese population. After $\frac{1}{\nu}$ days of infection, only
a fraction $q \in [0,1]$ of infected individuals without (or with mild) symptoms
have severe symptoms. Thus, $(q \times 100)\%$ of individuals of compartment
$A$ move to $I$ at rate $\nu$. A fraction $f_{1} \in [0,1]$ of infected individuals
with severe symptoms are treated at home and the other fraction $(1-f_{1}) \in [0,1]$
are hospitalized, both at rate $\delta_{1}$. The model considers the following three scenarios
for hospitalized individuals:
\begin{enumerate}
\item A fraction $f_{2} \in [0,1]$ of individuals in class $H$ can evolve
to a state of severe health status, needing an invasive intervention,
such as artificial respiration, so they need to move to intensive care,
at rate $\delta_{2}$;
\item A fraction $f_{3} \in [0,1]$ of individuals in class $H$ die due to COVID-19,
the disease related death rate associated with hospitalized individuals being $\alpha_{1}$;
\item A fraction $(1-f_{2}-f_{3}) \in [0,1]$ of individuals in class $H$
recover and, consequently, return home in quarantine/isolation
at rate $\delta_{2}$.
\end{enumerate}
Regarding the hospitalized individuals in intensive care units,
the model considers two possibilities as follows:
\begin{enumerate}
\item A fraction $(1-\kappa) \in [0,1]$ of individuals in class $\overline{H}$
recover and move to the class $H$ at rate $\eta$;
\item A fraction $\kappa \in [0,1]$ of individuals in class $\overline{H}$
die due to COVID-19, the disease-related death rate associated with
hospitalized individuals in intensive care units being $\alpha_{2}$.
\end{enumerate}
Compiling all the previous assumptions, one has the following mathematical model:
\begin{equation}
\label{eq:model}
\begin{cases}
\dot{S}(t) = \Lambda + \omega m Q(t) - [ \lambda(t) (1-p) + \phi p + \mu] S(t),\\[0.2 cm]
\dot{A}(t) = \lambda(t)(1-p) S(t) - (q \nu +\mu) A(t), \\[0.2 cm]
\dot{I}(t) = q \nu A(t) - ( \delta_{1} + \mu)I(t),\\[0.2 cm]
\dot{Q}(t) = \phi p S(t) + \delta_{1} f_{1} I(t) + \delta_{2}(1-f_{2}-f_{3})H(t)
- (\omega m + \mu) Q(t),\\[0.2 cm]
\dot{H}(t)=\delta_{1}(1-f_{1})I(t)+\eta(1-\kappa)\overline{H}(t)
-(\delta_{2}(1-f_{2}-f_{3})+\delta_{2}f_{2}+ \alpha_{1}f_{3}+\mu)H(t),\\[0.2 cm]
\dot{\overline{H}}(t)=\delta_{2}f_{2} H(t)
- (\eta(1-\kappa) + \alpha_{2} \kappa + \mu)\overline{H}(t), \\[0.2 cm]
\dot{D}(t)=\alpha_{1} f_{3} H(t)+ \alpha_{2} \kappa \overline{H}(t).
\end{cases}
\end{equation}
It can be seen in \cite{PST} that \eqref{eq:model} is equivalent to the following:
\begin{equation}
\label{eq:modelC}
\begin{cases}
\dot{S}(t) = \Lambda + \omega m Q(t) - [ \lambda(t) (1-p) + \phi p + \mu] S(t),\\[0.2 cm]
\dot{A}(t) = \lambda(t)(1-p) S(t) - (q \nu +\mu) A(t), \\[0.2 cm]
\dot{I}(t) = q \nu A(t) - ( \delta_{1} + \mu)I(t),\\[0.2 cm]
\dot{Q}(t) = \phi p S(t) + \delta_{1} f_{1} I(t) + \delta_{2}(1-f_{2}-f_{3})H(t)
- (\omega m + \mu) Q(t),\\[0.2 cm]
\dot{H}(t)=\delta_{1}(1-f_{1})I(t)+\eta (1-\kappa)\overline{H}(t)-(\delta_{2}(1-f_{2}-f_{3})
+\delta_{2}f_{2}+ \alpha_{1}f_{3}+\mu)H(t), \\[0.2 cm]
\dot{\overline{H}}(t)=\delta_{2}f_{2} H(t) - (\eta(1-\kappa) + \alpha_{2} \kappa + \mu)\overline{H}(t).
\end{cases}
\end{equation}
Table~\ref{tab1} presents all parameters and initial conditions
of mathematical model \eqref{eq:modelC}.
In \cite{PST}, it is shown that the biologically feasible region is given by the following:
\begin{equation}
\Omega=\left\{(S,A,I,Q,H,\overline{H})
\in ( \mathbb{R}_{0}^{+})^{6}: N \leq \frac{\Lambda}{\mu} \right\},
\end{equation}
which is positively invariant for \eqref{eq:modelC} for all
non-negative initial conditions. To simplify the expressions
in the computations, the following notation is used:
\begin{enumerate}
\item[(i)] $a_{0}:= q \nu + \mu$;
\item[(ii)] $a_{1}:= \delta_{1}+ \mu$;
\item[(iii)] $a_{2}:= m \omega + \mu$;
\item[(iv)] $a_{3}:= \delta_{2}(1-f_{2}-f_{3}) + \delta_{2}f_{2} + \alpha_{1} f_{3} + \mu$;
\item[(v)] $a_{4}:= \delta_{2}(1-f_{2}-f_{3})$;
\item[(vi)] $a_{5}:= p \phi + \mu$;
\item[(vii)] $a_{6}:= \delta_{1} (1 - f_{1})$;
\item[(viii)] $a_{7}:=\alpha_2 \kappa + \eta_{k} + \mu$;
\item[(ix)] $\eta_{k} := \eta(1-\kappa)$;
\item[(x)] $\chi :=a_{3} a_{7} - \delta_{2} \eta_{k} f_{2}$.
\end{enumerate}
\begin{specialtable}[H]
\caption{Description of the parameters and initial conditions
of model \eqref{eq:modelC}.\label{tab1}}
\begin{tabular}{cm{11.2cm}<{\raggedright}}
\toprule
\textbf{Parameter} & \textbf{Description}\\
\midrule
$\Lambda$ & Recruitment Rate \\
$\mu$ & Natural death rate \\
$\beta$ & Human-to-human transmission rate \\
$l_{A}$ & Relative transmissibility of individuals in class $A$\\
$l_{H}$ & Relative transmissibility of individuals in class $H$\\
$\phi$ & Rate associated with movement from $S$ to $Q$\\
$\nu$ & Rate associated with movement from $A$ to $I$\\
$\delta_{1}$ & Rate associated with movement from $I$ to $Q/H$\\
$\delta_{2}$ & Rate associated with movement from $H$ to $Q/\overline{H}$\\
$\eta$ & Rate associated with movement from $\overline{H}$ to $H$\\
$\omega$ & Rate associated with movement from $Q$ to $S$\\
$\alpha_{1}$ & Disease-related death rate of class $H$\\
$\alpha_{2}$ & Disease-related death rate of class $\overline{H}$\\
$p$ & Fraction of susceptible individuals put in quarantine\\
$q$ & Fraction of infected individuals with severe symptoms\\
$f_{1}$ & Fraction of infected individuals with severe symptoms in quarantine\\
$f_{2}$ & Fraction of hospitalized individuals transferred to $\overline{H}$\\
$f_{3}$ & Fraction of hospitalized individuals who die of COVID-19\\
$\kappa$ & Fraction of hospitalized individuals in intensive care units\\
& who die from COVID-19\\
$m$ & Fraction of individuals who move from $Q$ to $S$\\
$S(0)=S_{0}$& Individuals in class $S$ at $t=0$\\
$A(0)=A_{0}$ & Individuals in class $A$ at $t=0$\\
$I(0)=I_{0}$ & Individuals in class $I$ at $t=0$\\
$Q(0)=Q_{0}$ &Individuals in class $Q$ at $t=0$\\
$H(0)=H_{0}$ & Individuals in class $H$ at $t=0$\\
$\overline{H}(0)=\overline{H}_{0}$ & Individuals in class $\overline{H}$ at time $t=0$\\
\bottomrule
\end{tabular}
\end{specialtable}
Model \eqref{eq:modelC} has the following reproduction number:
\begin{equation}
\label{R0}
\mathcal{R}_{0}=\dfrac{\beta a_{2} (1-p)[(l_{H} a_{6} q\nu + (a_{1} + q\nu)a_{3})a_{7}
-\delta_{2} \eta_{k} f_{2} (q \nu + a_{1})]}{a_{0} a_{1}
\chi (p \phi +a_{2})}=\dfrac{\mathcal{N}}{\mathcal{D}}
\end{equation}
and two equilibrium points, the disease free equilibrium (DFE) point
\begin{equation}
\label{DFE}
\Sigma_{0}= \left(S_{0},A_{0}, I_{0},Q_{0}, H_{0}, \overline{H}_{0} \right)
=\left(\frac{\Lambda a_{2}}{(p \phi + a_{2})\mu}, 0, 0,
\frac{p \phi \Lambda}{(p \phi + a_{2}) \mu}, 0, 0 \right)
\end{equation}
and the endemic equilibrium (EE) point that is given by the following:
\begin{equation}
\label{endemic}
\Sigma^{\ast}=\left(S^{\ast}, A^{\ast}, I^{\ast}, Q^{\ast},
H^{\ast}, \overline{H}^{\ast} \right)
\end{equation}
with
\begin{equation*}
\begin{split}
S^{\ast}&=\frac{\Lambda a_{0} a_{1} a_{2} \chi}{D^{\ast}},
\quad A^{\ast}= \frac{a_{1} a_{2} \chi \Lambda \lambda^{\ast}}{ D^{\ast}},\\
I^{\ast}&=\frac{\chi \Lambda a_{2} q \nu \lambda^{\ast} }{D^{\ast}},
\quad Q^{\ast}=\frac{\Lambda((\chi \delta_{1} f_{1}+a_{4} a_{6} a_{7})q \nu \lambda^{\ast}
+ a_{0}a_{1} p \phi \chi)}{D^{\ast}},\\
H^{\ast}&=\frac{\Lambda a_{2}a_{6} a_{7} q \nu \lambda^{\ast}}{ D^{\ast}}, \quad
\overline{H}^{\ast}=\frac{\delta_{2} f_{2} \Lambda a_{2} a_{6} q \nu \lambda^{\ast}}{D^{\ast}},
\end{split}
\end{equation*}
where $D^{\ast}= ( \chi ( -f_{1} m \omega q \nu \delta_{1} + a_{0} a_{1} a_{2})-a_{4} a_{6} a_{7}
m \omega q \nu) \lambda^{\ast} + \mathcal{D} \mu$. Assuming that the transmission rate is strictly
positive, we have the following:
\begin{equation}
\label{steady}
\lambda^{\ast}=\frac{\beta (A^{\ast} +I^{\ast} +l_{H} H^{\ast})(1-p)}{N^{\ast}}.
\end{equation}
Using \eqref{endemic} in \eqref{steady}, it can be seen
that the endemic equilibrium satisfies the following:
\begin{equation}
\lambda^{\ast}=\dfrac{(\mathcal{N}-\mathcal{D}) \beta (1-p)}{\mathcal{N}
+ q\nu(a_{2} a_{6} (a_{7} (1-l_{H}) + \delta_{2} f_{2}) + \delta_{1} f_{1} \chi
+ a_{4} a_{6} a_{7}) \beta (1-p) }.
\end{equation}
Regarding the stability of the equilibrium points, the following results hold.
\begin{Lemma}[Lemma 3.2 of \cite{PST}]
The disease free equilibrium point $\Sigma_{0}$ \eqref{DFE}
is locally asymptotically stable if $\mathcal{R}_{0}<1$ and unstable
if $\mathcal{R}_{0}>1$, where $\mathcal{R}_{0}$ is the basic reproduction
number \eqref{R0}.
\end{Lemma}
\begin{Lemma}[Lemma 3.3 of \cite{PST}]
The model \eqref{eq:modelC} has a unique endemic equilibrium
point whenever $\mathcal{R}_{0} >1.$
\end{Lemma}
\section{Results}
\label{sec3}
In this section, we begin by presenting our proposal for a COVID-19 discrete-time model.
After that, we show the well-posedness of the model, that is, we prove that the solutions
of the model are positive and bounded and that the equilibrium points coincide with those
of the continuous-time model presented in Section~\ref{sec2}. We finalize this section
by proving the global stability of the disease-free equilibrium point, using a
suitable Lyapunov function and with the presentation of some numerical simulations,
using real data, showing the consistency of our theoretical results.
\subsection{The Discrete-Time Model}
An important feature of discrete-time epidemic models obtained by the Mickens method is that
they preserve the qualitative features of the corresponding original continuous-time models. Our
nonstandard finite difference (NSFD) scheme for solving \eqref{eq:modelC} is
a dynamically consistent numerical method based on \cite{Mickens05}. Let us define the time
instants $t_{n} = n h$ with $n$ a non-negative integer, the step size $h = t_{n+1} - t_{n}$,
and $(S_{n}, A_{n}, I_{n},Q_{n}, H_{n}, \overline{H}_{n})$ as the approximated values of the following:
$$
(S(nh), A(nh), I(nh),Q(nh), H (nh), \overline{H}(nh)).
$$
Discretizing system \eqref{eq:modelC}
using the NSFD scheme, we obtain the following:
\begin{adjustwidth}{-4.6cm}{0cm}
\begin{equation}
\label{eq:modelD}
\begin{cases}
\dfrac{S_{n+1}-S_{n}}{\psi(h)} = \Lambda + \omega m Q_{n+1}
- \left[ \lambda_{n} (1-p) + \phi p + \mu\right] S_{n+1},\\[0.2 cm]
\dfrac{A_{n+1}-A_{n}}{\psi(h)}= \lambda_{n}(1-p) S_{n+1} - (q \nu +\mu) A_{n+1}, \\[0.2 cm]
\dfrac{I_{n+1}-I_{n}}{\psi(h)} = q \nu A_{n+1} - ( \delta_{1} + \mu)I_{n+1},\\[0.2 cm]
\dfrac{Q_{n+1}-Q_{n}}{\psi(h)} = \phi p S_{n+1} + \delta_{1} f_{1} I_{n+1}
+ \delta_{2}(1-f_{2}-f_{3})H_{n+1} - (\omega m + \mu) Q_{n+1},\\[0.2 cm]
\dfrac{H_{n+1}-H_{n}}{\psi(h)}=\delta_{1}(1-f_{1})I_{n+1}+\eta (1-\kappa)\overline{H}_{n+1}
-\left(\delta_{2}(1-f_{2}-f_{3})+\delta_{2}f_{2}+ \alpha_{1}f_{3}+\mu\right)H_{n+1}, \\[0.2 cm]
\dfrac{\overline{H}_{n+1}-\overline{H}_{n}}{\psi(h)}=\delta_{2}f_{2} H_{n+1}
- \left(\eta(1-\kappa) + \alpha_{2} \kappa + \mu\right)\overline{H}_{n+1},
\end{cases}
\end{equation}
\end{adjustwidth}
where the denominator function is $\psi(h)=\frac{e^{\mu h}-1}{\mu}$ \cite{Mickens07}.
Throughout this work, for brevity, we write $\psi(h)=\psi$.
\begin{Remark}
In the continuous model, the reproduction number \eqref{R0} can be rewritten as the following:
\begin{equation}
\mathcal{R}_{0}=\dfrac{\beta a_{2} (1-p) [l_{H} a_{6} a_{7} q \nu + (a_{1} + q \nu) \chi]}{ a_{0} a_{1}
\chi (p \phi +a_{2})}=\frac{\beta a_{2} (1-p) [l_{H} a_{6} a_{7} q \nu + (a_{0}
+\delta_{1}) \chi]}{a_{0} a_{1} \chi (p \phi +a_{2})}.
\end{equation}
\end{Remark}
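As a numerical cross-check (parameter values below are hypothetical placeholders, not the fitted values for the Portuguese case study of \cite{PST}), the basic reproduction number \eqref{R0} and its rearranged form from the remark above can both be evaluated; a minimal Python sketch:

```python
# Hypothetical parameter values, for illustration only.
P = dict(beta=1.5, l_H=0.1, p=0.675, phi=1/12, q=0.15, nu=1.0,
         delta1=1/3, delta2=1/3, eta=1/7, omega=1/45, m=0.075,
         f1=0.96, f2=0.21, f3=0.03, kappa=0.4,
         alpha1=1/7, alpha2=1/7, mu=1/(81*365))

def aux(P):
    """Auxiliary quantities a_0,...,a_7, eta_k and chi from the notation list."""
    eta_k = P['eta'] * (1 - P['kappa'])
    a0 = P['q'] * P['nu'] + P['mu']
    a1 = P['delta1'] + P['mu']
    a2 = P['m'] * P['omega'] + P['mu']
    a3 = (P['delta2'] * (1 - P['f2'] - P['f3']) + P['delta2'] * P['f2']
          + P['alpha1'] * P['f3'] + P['mu'])
    a6 = P['delta1'] * (1 - P['f1'])
    a7 = P['alpha2'] * P['kappa'] + eta_k + P['mu']
    chi = a3 * a7 - P['delta2'] * eta_k * P['f2']
    return a0, a1, a2, a3, a6, a7, eta_k, chi

def R0(P):
    """R0 exactly as displayed in the main formula."""
    a0, a1, a2, a3, a6, a7, eta_k, chi = aux(P)
    qnu = P['q'] * P['nu']
    num = P['beta'] * a2 * (1 - P['p']) * (
        (P['l_H'] * a6 * qnu + (a1 + qnu) * a3) * a7
        - P['delta2'] * eta_k * P['f2'] * (qnu + a1))
    return num / (a0 * a1 * chi * (P['p'] * P['phi'] + a2))

def R0_alt(P):
    """Algebraically equivalent rearrangement from the remark."""
    a0, a1, a2, a3, a6, a7, eta_k, chi = aux(P)
    qnu = P['q'] * P['nu']
    num = P['beta'] * a2 * (1 - P['p']) * (P['l_H'] * a6 * a7 * qnu + (a1 + qnu) * chi)
    return num / (a0 * a1 * chi * (P['p'] * P['phi'] + a2))
```

The agreement of the two functions reflects the identity $(l_H a_6 q\nu + (a_1+q\nu)a_3)a_7 - \delta_2 \eta_k f_2 (q\nu + a_1) = l_H a_6 a_7 q\nu + (a_1+q\nu)\chi$.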
Let us consider the region $\Omega=\left\{(S,A,I,Q,H,\overline{H})
\in (\mathbb{R}_{0}^{+})^{6} : 0 < N \le \frac{\Lambda}{\mu}\right\}$.
Our next lemma shows that the feasible region coincides with that of the continuous case.
\begin{Lemma}
Any solution of $(S_{n},A_{n},I_{n},Q_{n}, H_{n},\overline{H}_{n})$
of model \eqref{eq:modelD} with positive initial conditions
is positive and ultimately bounded in $\Omega$.
\end{Lemma}
\begin{proof}
Since model \eqref{eq:modelD} is linear in
$(S_{n},A_{n},I_{n},Q_{n}, H_{n},\overline{H}_{n})$, we can rewrite it as the following:
\begin{equation}
\label{eq:modelED}
\begin{cases}
S_{n+1} = \dfrac{ \Lambda \psi + \omega m \psi Q_{n+1}
+ S_{n}}{1+ [ \lambda_{n} (1-p) + \phi p + \mu] \psi},\\[0.4 cm]
A_{n+1}=\dfrac{A_{n}+ \lambda_{n}(1-p)\psi S_{n+1}}{1 + (q \nu +\mu)\psi }, \\[0.4 cm]
I_{n+1}=\dfrac{I_{n}+ q \nu \psi A_{n+1}}{1+ ( \delta_{1} + \mu)\psi},\\[0.4 cm]
Q_{n+1} = \dfrac{Q_{n} + \psi( \phi p S_{n+1} + \delta_{1} f_{1} I_{n+1}
+ \delta_{2}(1-f_{2}-f_{3})H_{n+1})}{1+ (\omega m + \mu)\psi},\\[0.4 cm]
H_{n+1}=\dfrac{H_{n}+ \psi(\delta_{1}(1-f_{1})I_{n+1}+\eta (1-\kappa)\overline{H}_{n+1})}{1
+(\delta_{2}(1-f_{2}-f_{3})+\delta_{2}f_{2}+ \alpha_{1}f_{3}+\mu)\psi}\\[0.4 cm]
\overline{H}_{n+1}=\dfrac{\overline{H}_{n}+\delta_{2}f_{2}
\psi H_{n+1}}{1 + (\eta(1-\kappa) + \alpha_{2} \kappa + \mu)\psi}.
\end{cases}
\end{equation}
Since all parameters of model \eqref{eq:modelED} are positive and the initial conditions
are also positive, then, by induction, $S_{n}\ge 0$, $A_{n}\ge 0$, $I_{n}\ge 0$, $Q_{n}\ge 0$, $H_{n}\ge 0$
and $\overline{H}_{n}\ge 0$ for all $n \in \mathbb{N}$. Regarding the boundedness of the solutions,
consider the total population $N_{n}=S_{n}+A_{n}+I_{n}+Q_{n}+H_{n}+\overline{H}_{n}$.
Adding all equations of \eqref{eq:modelED}, we obtain the following:
\begin{align*}
\dfrac{N_{n+1}-N_{n}}{\psi}&=\Lambda -\mu N_{n+1}- \alpha_{1}
f_{3} H_{n+1} - \alpha_{2} \kappa \overline{H}_{n+1}\le \Lambda - \mu N_{n+1}\\
\Leftrightarrow N_{n+1} &\le \dfrac{\Lambda \psi}{1+\mu \psi}+ \dfrac{N_{n}}{1+\mu \psi}.
\end{align*}
By Lemma~2.2 of Shi and Dong \cite{S},
\begin{equation*}
N_{n} \le \dfrac{\Lambda}{\mu} + \left( \dfrac{1}{1 + \mu \psi}\right)^{n}\left(
\dfrac{\Lambda}{\mu} - N_{0}\right)
\end{equation*}
so, if $N_{0} \le \frac{\Lambda}{\mu}$, then $N_{n} \le \frac{\Lambda}{\mu}$ for all $n \in \mathbb{N}$.
Thus, $\Omega$ is the biologically feasible region.
\end{proof}
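Note that the scheme is implicit: the step-$(n+1)$ unknowns are coupled (for instance, $S_{n+1}$ depends on $Q_{n+1}$ and vice versa), so each step amounts to solving a small linear system. The sketch below (in Python, with hypothetical parameter values chosen only so that $N_0 = \Lambda/\mu$; the fitted Portuguese values are in the paper's table) iterates one year of daily steps and checks the positivity and boundedness established by the lemma:

```python
import math

def solve(M, b):
    """Solve the 6x6 system M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n + 1):
                A[r][c] -= f * A[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][c] * x[c] for c in range(i + 1, n))) / A[i][i]
    return x

# Hypothetical parameters, for illustration only.
beta, l_A, l_H = 1.5, 1.0, 0.1
p, phi, q, nu = 0.675, 1/12, 0.15, 1.0
delta1, delta2, eta, omega, m = 1/3, 1/3, 1/7, 1/45, 0.075
f1, f2, f3, kappa = 0.96, 0.21, 0.03, 0.4
alpha1, alpha2, mu, Lam = 1/7, 1/7, 1e-4, 1.0

h = 1.0
psi = (math.exp(mu * h) - 1) / mu          # Mickens denominator function
eta_k = eta * (1 - kappa)
a0, a1, a2 = q * nu + mu, delta1 + mu, m * omega + mu
a3 = delta2 * (1 - f2 - f3) + delta2 * f2 + alpha1 * f3 + mu
a4, a6 = delta2 * (1 - f2 - f3), delta1 * (1 - f1)
a7 = alpha2 * kappa + eta_k + mu

def nsfd_step(x):
    S, A_, I, Q, H, Hb = x
    lam = beta * (l_A * A_ + I + l_H * H) / sum(x)   # force of infection at step n
    # Linear system for (S, A, I, Q, H, Hbar) at step n+1.
    M = [
        [1 + psi * (lam * (1 - p) + phi * p + mu), 0, 0, -psi * omega * m, 0, 0],
        [-psi * lam * (1 - p), 1 + psi * a0, 0, 0, 0, 0],
        [0, -psi * q * nu, 1 + psi * a1, 0, 0, 0],
        [-psi * phi * p, 0, -psi * delta1 * f1, 1 + psi * a2, -psi * a4, 0],
        [0, 0, -psi * a6, 0, 1 + psi * a3, -psi * eta_k],
        [0, 0, 0, 0, -psi * delta2 * f2, 1 + psi * a7],
    ]
    b = [S + psi * Lam, A_, I, Q, H, Hb]
    return solve(M, b)

x = [9990.0, 5.0, 5.0, 0.0, 0.0, 0.0]      # N(0) = 10000 = Lam/mu
for _ in range(365):
    x = nsfd_step(x)
    assert min(x) >= 0 and sum(x) <= Lam / mu + 1e-6
```

Summing the six equations of the linear system recovers $(1+\mu\psi)N_{n+1} + \psi(\alpha_1 f_3 H_{n+1} + \alpha_2\kappa \overline{H}_{n+1}) = N_n + \psi\Lambda$, which is exactly the bound used in the proof.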
Solving $X^{\ast}=F(X^{\ast})$ in system \eqref{eq:modelED}
we can see that there exist two equilibrium points:
\begin{itemize}
\item The disease free equilibrium (DFE) point
\begin{equation}
\label{E0}
E_{0}=\left( \dfrac{\Lambda a_{2}}{ \mu (p \phi +a_{2})},0, 0,
\dfrac{\Lambda p \phi}{\mu (p \phi +a_{2})}, 0,0 \right)
=(S_{0}, A_{0}, I_{0}, Q_{0}, H_{0}, \overline{H}_{0});
\end{equation}
\item The endemic equilibrium (EE) point
\begin{equation}
E^{\ast}=(S^{\ast}, A^{\ast}, I^{\ast}, Q^{\ast}, H^{\ast}, \overline{H}^{\ast}),
\end{equation}
where
\begin{equation}
\label{EE}
\begin{gathered}
S^{\ast} =\dfrac{a_{0} a_{1} a_{2} \Lambda \chi}{D_{A}},
\quad A^{\ast}=\dfrac{a_{1} a_{2} \Lambda \chi \lambda^{\ast}(1-p)}{D_{A}}, \\
I^{\ast}=\dfrac{a_{2} q \nu \Lambda \chi \lambda^{\ast}(1-p)}{D_{A}},
\quad Q^{\ast} = \dfrac{\Lambda (a_{0} a_{1} p \phi \chi + \lambda^{\ast} (1-p)
q \nu (\chi f_{1} \delta_{1} + a_{4} a_{6} a_{7}))}{D_{A}},\\
H^{\ast}=\dfrac{a_{2} a_{6} a_{7} q \nu \Lambda \lambda^{\ast}(1-p)}{D_{A}},
\quad \overline{H}^{\ast}=\dfrac{a_{2} a_{6} \delta_{2} f_{2} q \nu \Lambda \lambda^{\ast} (1-p)}{D_{A}},
\end{gathered}
\end{equation}
\end{itemize}
and $D_{A}= \mu \mathcal{D} + \lambda^{\ast} (1-p) ( \chi ( a_{2} a_{1} a_{0}
-f_{1} \delta_{1} \omega m q \nu) - a_{4} a_{6} a_{7} q \nu \omega m)$. At this equilibrium,
\begin{equation}
\lambda^{\ast}=\dfrac{\mathcal{D} (\mathcal{R}_{0} -1)}{(1-p)(\mathcal{N}
+ q \nu ( a_{2} a_{6} ( a_{7} (1-l_{H}) + \delta_{2}f_{2})
+ \chi f_{1} \delta_{1} + a_{4} a_{6} a_{7}))}.
\end{equation}
\subsection{Global Stability}
We now prove the global stability of \eqref{E0}.
\begin{Theorem}
For the discretized system, the DFE point $E_{0}$ is globally stable if
$\mathcal{R}_{0} <1$. If $\mathcal{R}_{0} >1$, then the DFE is unstable.
\end{Theorem}
\begin{proof}
Let us define the discrete Lyapunov function $L_{n}$ as the following:
\begin{equation}
L_{n}(S_{n}, A_{n}, I_{n},Q_{n},H_{n}, \overline{H}_{n})
=\frac{1}{\psi}\left[ S_{0} G\left( \frac{S_{n}}{S_{0}} \right)
+ A_{n} + I_{n} + Q_{0} G\left( \frac{Q_{n}}{Q_{0}}\right) + H_{n}
+ \overline{H}_{n} \right].
\end{equation}
Hence, $L_{n}(S_{n}, A_{n}, I_{n},Q_{n},H_{n}, \overline{H}_{n}) \ge 0$
for all $\xi_{n} \ge 0$, $\xi \in \left\{S, A, I,Q,H, \overline{H} \right\}$.
Moreover, $L_{n}(S_{n}, A_{n}, I_{n},Q_{n},H_{n}, \overline{H}_{n}) =0$
if and only if $(S_{n}, A_{n}, I_{n},Q_{n},H_{n}, \overline{H}_{n})=E_{0}$.
Computing $\Delta L_{n}=L_{n+1}-L_{n}$, we have the following:
\begin{align*}
&\Delta L_{n}=\frac{1}{\psi}\left[ S_{0} \left(G\left( \frac{S_{n+1}}{S_{0}}\right)
-G\left( \frac{S_{n}}{S_{0}}\right) \right) +(A_{n+1}-A_{n}) +(I_{n+1}-I_{n})\right]\\
& + \frac{1}{\psi}\left[Q_{0} \left(G\left( \frac{Q_{n+1}}{Q_{0}}\right)
-G\left( \frac{Q_{n}}{Q_{0}}\right)\right) +(H_{n+1}-H_{n})+ (\overline{H}_{n+1}
-\overline{H}_{n})\right],
\end{align*}
where $G(x)= x - \ln(x) -1$. Note that $G(x) \ge 0$ for all $x > 0$, and $G(x)=0$
if and only if $x=1$. For each $\xi \in \left\{S, A, I,Q,H, \overline{H} \right\}$,
\begin{equation*}
G\left(\frac{\xi_{n+1}}{\xi_{0}}\right)-G\left(\frac{\xi_{n}}{\xi_{0}}\right)
\le \frac{1}{\xi_{0}} \left( 1- \frac{\xi_{0}}{\xi_{n+1}} \right)\left( \xi_{n+1}-\xi_{n}\right).
\end{equation*}
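This displayed bound is the standard convexity inequality for $G$: writing $x=\xi_{n+1}/\xi_{n}$, it reduces to $\ln x \ge 1 - 1/x$. As an illustrative (non-proof) spot-check in Python:

```python
import math
import random

def G(x):
    """G(x) = x - ln(x) - 1, nonnegative for x > 0 and zero only at x = 1."""
    return x - math.log(x) - 1.0

random.seed(1)
for _ in range(10_000):
    xi0 = random.uniform(0.1, 10.0)       # equilibrium value xi_0
    xi_n = random.uniform(0.01, 10.0)     # value at step n
    xi_np1 = random.uniform(0.01, 10.0)   # value at step n+1
    lhs = G(xi_np1 / xi0) - G(xi_n / xi0)
    rhs = (1.0 / xi0) * (1.0 - xi0 / xi_np1) * (xi_np1 - xi_n)
    assert lhs <= rhs + 1e-9              # the displayed inequality
    assert G(xi_np1 / xi0) >= -1e-12      # nonnegativity of G
```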
Therefore,
\begin{align*}
&\Delta L_{n} \le \frac{1}{\psi}\left[ \left( 1 - \frac{S_{0}}{S_{n+1}}\right)(S_{n+1}-S_{n})
+ (A_{n+1}-A_{n})+(I_{n+1}-I_{n})\right]\\
&\quad +\frac{1}{\psi}\left[\left(1-\frac{Q_{0}}{Q_{n+1}}\right)(Q_{n+1}-Q_{n})
+(H_{n+1}-H_{n})+(\overline{H}_{n+1}-\overline{H}_{n})\right].
\end{align*}
From the equations of system \eqref{eq:modelD}, we obtain the following:
\begin{align*}
&\Delta L_{n}\le \left( 1-\frac{S_{0}}{S_{n+1}}\right)(\Lambda + \omega m Q_{n+1}
- (\lambda_{n} (1-p) + \phi p + \mu)S_{n+1} )\\
&+(\lambda_{n} (1-p)S_{n+1}-(q v +\mu)A_{n+1}) + (q v A_{n+1} - (\delta_{1}+ \mu)I_{n+1})\\
&+\left( 1-\frac{Q_{0}}{Q_{n+1}}\right)(\phi p S_{n+1}+\delta_{1}f_{1}I_{n+1}
+\delta_{2}(1-f_{2}-f_{3})H_{n+1} - (\omega m + \mu) Q_{n+1} ) \\
&+(\delta_{1}(1-f_{1}) I_{n+1} + \eta (1-\kappa) \overline{H}_{n+1}
-\left[ \delta_{2}(1-f_{2}-f_{3})+\delta_{2}f_{2} +\alpha_{1} f_{3} + \mu\right] H_{n+1})\\
&+ \delta_{2} f_{2} H_{n+1} - (\eta(1-\kappa) + \alpha_{2} \kappa + \mu) \overline{H}_{n+1}
\end{align*}
and, at the disease-free equilibrium point $E_{0}$, we have the following relations:
\begin{equation}
\begin{cases}
0= \Lambda + \omega m Q_{0} - (\phi p + \mu) S_{0}, \\
0=\phi p S_{0} - (\omega m + \mu) Q_{0},
\end{cases}
\Leftrightarrow
\begin{cases}
\Lambda= - \omega m Q_{0} + (\phi p + \mu) S_{0} ,\\
\phi p S_{0}= (\omega m + \mu) Q_{0}.
\end{cases}
\end{equation}
Substituting in $\Delta L_{n}$, and simplifying, we obtain the following:
\begin{align*}
&\Delta L_{n} \le -\frac{(\phi p + \mu)}{S_{n+1}} (S_{n+1}-S_{0})^{2}
- \mu (A_{n+1}+ I_{n+1}) - \left( \alpha_{1} f_{3} + \mu\right) H_{n+1} \\
&-\frac{Q_{0}}{Q_{n+1}} \left( \delta_{1} f_{1} I_{n+1}
+ \delta_{2} (1-f_{2} -f_{3})H_{n+1}\right)-(\alpha_{2} \kappa + \mu) \overline{H}_{n+1}
+\lambda_{n} (1-p)S_{0}\\
&+ \omega m Q_{0} \left( 1-\frac{S_{0}}{S_{n+1}}\right) \left( \frac{Q_{n+1}}{Q_{0}}-1\right)
+\phi p S_{0} \left( 1-\frac{Q_{0}}{Q_{n+1}}\right)\left(
\frac{S_{n+1}}{S_{0}}-\frac{Q_{n+1}}{Q_{0}}\right).
\end{align*}
Let
\begin{equation*}
F(I,Q,H, \overline{H}) = \left( \alpha_{1} f_{3} + \mu\right) H
+\frac{Q_{0}}{Q} \left( \delta_{1} f_{1} I + \delta_{2} (1-f_{2} -f_{3})H\right)
+(\alpha_{2} \kappa + \mu) \cdot \overline{H}.
\end{equation*}
Then,
\begin{align}
\label{ineqL}
&\Delta L_{n} \le -\frac{(\phi p + \mu)}{S_{n+1}} (S_{n+1}-S_{0})^{2}
- \mu (A_{n+1}+ I_{n+1})-F(I_{n+1},Q_{n+1},H_{n+1}, \overline{H}_{n+1}) \nonumber \\
&+ \omega m Q_{0} \left( 1-\frac{S_{0}}{S_{n+1}}\right) \left( \frac{Q_{n+1}}{Q_{0}}-1\right)
+\phi p S_{0} \left( 1-\frac{Q_{0}}{Q_{n+1}}\right)\left(
\frac{S_{n+1}}{S_{0}}-\frac{Q_{n+1}}{Q_{0}}\right)\\
&+\lambda_{n} (1-p)S_{0}.\nonumber
\end{align}
From the equations of \eqref{eq:modelED}, it can be seen that
\begin{align}
\label{ineqlambda}
\lambda_{n} (1-p) S_{0} & \le \frac{\beta \Lambda (1-p) a_{2}}{ \mu (p \phi
+ a_{2})}(A_{n}+ I_{n}+ l_{H} H_{n}) \nonumber\\
& \le \frac{\beta \Lambda (1-p) a_{2}}{ \mu (p \phi + a_{2})}\left( A_{n}
+ \frac{q v}{a_{1}} A_{n} + \frac{a_{6} a_{7} q v}{ \chi a_{1}} l_{H} A_{n}\right)\nonumber\\
&\le \frac{\Lambda \mathcal{N}}{ \mu (p \phi + a_{2})} A_{n}
\end{align}
and
\begin{align}
\label{ineqDFE}
&\omega m Q_{0} \left( 1-\frac{S_{0}}{S_{n+1}}\right) \left( \frac{Q_{n+1}}{Q_{0}}-1\right)
+\phi p S_{0} \left( 1-\frac{Q_{0}}{Q_{n+1}}\right)\left(
\frac{S_{n+1}}{S_{0}}-\frac{Q_{n+1}}{Q_{0}}\right) \nonumber \\
&\le \omega m Q_{0}\left( g \left( \frac{Q_{n+1}}{Q_{0}}\right)
+ g\left( \frac{S_{0}}{S_{n+1}} \right)- g\left( \frac{S_{0} Q_{n+1}}{S_{n+1} Q_{0}}\right) \right)\\
&\quad +\phi p S_{0} \left( -g\left( \frac{Q_{n+1}}{Q_{0}}\right) - g\left( \frac{Q_{0} S_{n+1}}{Q_{n+1}
S_{0}}\right)+g\left( \frac{S_{n+1}}{S_{0}}\right) \right).\nonumber
\end{align}
Since
\begin{equation*}
\omega m Q_{0} < (\omega m + \mu) Q_{0}=a_{2}Q_{0}=\phi p S_{0},
\end{equation*}
the right-hand side of \eqref{ineqDFE} can be rewritten and bounded as follows:
\begin{align}
\label{ineqDFE1}
&g \left( \frac{Q_{n+1}}{Q_{0}}\right) \left( \omega m Q_{0}-\phi p S_{0}\right)+\omega m Q_{0} g\left(
\frac{S_{0}}{S_{n+1}} \right)+\phi p S_{0} g\left( \frac{S_{n+1}}{S_{0}}\right)\nonumber\\
&-\omega m Q_{0} g\left( \frac{S_{0} Q_{n+1}}{S_{n+1} Q_{0}}\right) -\phi p S_{0}
g\left( \frac{Q_{0} S_{n+1}}{Q_{n+1} S_{0}}\right) \\
\le &-\omega m Q_{0} g\left( \frac{S_{0} Q_{n+1}}{S_{n+1} Q_{0}}\right) -\phi p S_{0} g\left(
\frac{Q_{0} S_{n+1}}{Q_{n+1} S_{0}}\right)
+\phi p S_{0}\frac{(S_{n+1}-S_{0})^{2}}{S_{0}S_{n+1}} \nonumber\\
&+g \left( \frac{Q_{n+1}}{Q_{0}}\right) \left( \omega m Q_{0}-\phi p S_{0}\right). \nonumber
\end{align}
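One can check that the coupling terms in \eqref{ineqDFE} are rewritten through exact algebraic identities for $g(x)=x-\ln x-1$ (the logarithms cancel), and that the final quadratic bound rests on $g(x)+g(1/x)=(x-1)^{2}/x$. A numerical sanity check of these two facts (illustrative only):

```python
import math
import random

def g(x):
    # g(x) = x - ln(x) - 1 (the same function as G in the text)
    return x - math.log(x) - 1.0

random.seed(1)
coupling_identity_ok = True
quadratic_identity_ok = True
for _ in range(10_000):
    a = random.uniform(0.05, 20.0)
    b = random.uniform(0.05, 20.0)
    # coupling term: (1 - 1/a)(b - 1) = g(b) + g(1/a) - g(b/a) exactly
    lhs = (1.0 - 1.0 / a) * (b - 1.0)
    rhs = g(b) + g(1.0 / a) - g(b / a)
    if abs(lhs - rhs) > 1e-8:
        coupling_identity_ok = False
    # identity behind the final quadratic bound
    if abs(g(a) + g(1.0 / a) - (a - 1.0) ** 2 / a) > 1e-8:
        quadratic_identity_ok = False
```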
Gathering the information from \eqref{ineqlambda} and \eqref{ineqDFE1},
\eqref{ineqL} becomes the following:
\begin{align*}
\Delta L_{n} &\le -\frac{\mu}{S_{n+1}} (S_{n+1}-S_{0})^{2} -\omega m Q_{0} g\left(
\frac{S_{0} Q_{n+1}}{S_{n+1} Q_{0}}\right) -\phi p S_{0}
g\left( \frac{Q_{0} S_{n+1}}{Q_{n+1} S_{0}}\right)\\
&-g \left( \frac{Q_{n+1}}{Q_{0}}\right) \left( -\omega m Q_{0}+\phi p S_{0}\right)
-F(I_{n+1},Q_{n+1},H_{n+1}, \overline{H}_{n+1}) \\
&+\frac{\Lambda \mathcal{N}}{ \mu (p \phi + a_{2})} A_{n} - \mu (A_{n+1}+ I_{n+1}).
\end{align*}
Once more, from system \eqref{eq:modelED} it can be seen that
$- \mu (A_{n+1}+ I_{n+1}) < -a_{0}A_{n}$, so
\begin{align*}
&\frac{\Lambda \mathcal{N}}{ \mu (p \phi + a_{2})} A_{n} - \mu (A_{n+1}+ I_{n+1})
< \frac{A_{n} \mathcal{D}}{a_{1}\chi (p \phi + a_{2})} (\mathcal{R}_{0}-1).
\end{align*}
Therefore,
\begin{align*}
\Delta L_{n}& \le -\frac{\mu}{S_{n+1}} (S_{n+1}-S_{0})^{2} -\omega m Q_{0} g\left( \frac{S_{0}
Q_{n+1}}{S_{n+1} Q_{0}}\right) -\phi p S_{0} g\left( \frac{Q_{0} S_{n+1}}{Q_{n+1} S_{0}}\right)\\
&-g \left( \frac{Q_{n+1}}{Q_{0}}\right) \left( -\omega m Q_{0}+\phi p S_{0}\right)-F(I_{n+1},Q_{n+1},
H_{n+1}, \overline{H}_{n+1})\\
&+\frac{A_{n} \mathcal{D}}{a_{1}\chi (p \phi + a_{2})} (\mathcal{R}_{0}-1).
\end{align*}
Hence, if $\mathcal{R}_{0} <1$, then $\Delta L_{n}\le 0$ for all $n \ge 0$,
that is, $(L_{n})$ is a monotone decreasing sequence. Since $L_{n} \ge 0$,
the limit $\underset{n \to \infty}{\lim} L_{n} \ge 0$ exists, and therefore
$\underset{n \to \infty}{\lim} \Delta L_{n}=0$, which implies that
$\underset{n \to \infty}{\lim}S_{n}=S_{0}$, $\underset{n \to \infty}{\lim}Q_{n}=Q_{0}$,
and $\underset{n \to \infty}{\lim}A_{n}= \underset{n \to \infty}{\lim}I_{n}
=\underset{n \to \infty}{\lim}H_{n}=\underset{n \to \infty}{\lim}\overline{H}_{n}=0$.
So, if $\mathcal{R}_{0} < 1$, then $E_{0}$ is globally asymptotically stable.
\end{proof}
\subsection{Numerical Simulations}
In this section, we show, numerically, that our discretized model describes well
the transmission dynamics of COVID-19 in Portugal, from 2 March to 4 May 2020.
Our data were obtained from the daily reports from DGS, available in \cite{DGS},
and since we consider the spread from 2 March, the initial time $t=0$
corresponds to 2 March 2020.
The initial values are the same as those used in \cite{PST}. Regarding the parameters,
the values for 2019 were obtained from \cite{Pordata}, so the parameters are slightly
updated with respect to \cite{PST}, but the differences are very small.
All the values used in our simulations are presented in Table~\ref{tab2}.
\begin{specialtable}[H]
\caption{Parameter values and initial conditions of \eqref{eq:modelC}.\label{tab2}}
\begin{tabular}{lm{9.05cm}<{\raggedright}l}
\toprule
\textbf{Parameter} & \textbf{Value} & \textbf{Reference} \\
\midrule
$\Lambda$ & (86,579 + 26,080)/365 (person day$^{-1}$) & \cite{Pordata} \\
$\mu$ & 111,793/(365 $\times N_{0}$) (day$^{-1}$) & \cite{Pordata}\\
$\beta$ & 1.93 (day$^{-1}$)& \cite{PST} \\
$l_{A}$ & 1 (dimensionless)& \cite{PST}\\
$l_{H}$ & 0.1 (dimensionless)& \cite{PST}\\
$\phi$ & 1/12& \cite{RP}\\
$\nu$ & 1/5& \cite{Who}\\
$\delta_{1}$ & 1/3 (day$^{-1}$)& \cite{PST}\\
$\delta_{2}$ & 1/3 (day$^{-1}$)& \cite{PST}\\
$\eta$ & 1/7 (day$^{-1}$)& \cite{PST}\\
$\omega$ & 1/31 (day$^{-1}$)& \cite{PST}\\
$\alpha_{1}$ & 1/7 (day$^{-1}$)& \cite{PST}\\
$\alpha_{2}$ & 1/15 (day$^{-1}$)& \cite{PST}\\
$p$ & 0.674 & \cite{Pordata,Negocios}\\
$q$ & 0.15 & \cite{Noticias}\\
$f_{1}$ & 0.96 & \cite{DGS}\\
$f_{2}$ & 0.21 & \cite{DGS}\\
$f_{3}$ & 0.03 & \cite{DGS}\\
$\kappa$ & 0.03 & \cite{PST}\\
$m$ & 0.075& \cite{PST}\\
$S_{0}$& 10,286,285 (person)& \cite{DGS,Pordata,Who,Noticias}\\
$A_{0}$ & 13 (person)& \cite{DGS,Who,Noticias}\\
$I_{0}$ & 2 (person)&\cite{DGS}\\
$Q_{0}$ & 0 (person)& \cite{PST}\\
$H_{0}$ & 0 (person)& \cite{DGS} \\
$\overline{H}_{0}$ & 0 (person) & \cite{DGS}\\
$D_{0}$ & 0 (person)& \cite{DGS}\\
\bottomrule
\end{tabular}
\end{specialtable}
Using the values of Table~\ref{tab2}, in Figure~\ref{fig1} we compare the number
of infected individuals predicted by our discrete-time model, the number predicted
by the continuous model of \cite{PST}, and the real data.
\begin{figure}[H]
\includegraphics[width=10.5 cm]{CRD_I}
\caption{Number of infected individuals predicted by the discrete-time model:
solid green line. Real data: dotted line. Number of infected individuals
predicted by the continuous model: solid blue line. \label{fig1}}
\end{figure}
Figure~\ref{fig2} shows the consistency of our results: we compare the predictions
of the continuous model with those of the discrete-time model.
With the parameter values of Table~\ref{tab2}, we obtain $\mathcal{R}_{0} \approx 0.95$, so
the numbers of infected and asymptomatic individuals, as well as of those in hospital
and in intensive care units, tend to vanish, while the susceptible individuals
and those in quarantine tend to the equilibrium point $E_{0}$ as time increases.
Comparing the plots in Figures~\ref{fig2}--\ref{fig7}, we see that
the asymptotic behavior is similar; however, the discrete-time model predicts, in some instances,
smaller values. All computations were done using \textsf{Mathematica}, version 12.1.
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_I}
\caption{The infected individuals tend to disappear with time.\label{fig2}}
\end{figure}
\vspace{-8pt}
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_S}
\caption{The susceptible individuals tend quickly to $S_{0}$. \label{fig3}}
\end{figure}
\vspace{-8pt}
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_A}
\caption{The asymptomatic individuals tend to disappear with time. \label{fig4}}
\end{figure}
\vspace{-8pt}
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_Q}
\caption{The number of individuals in quarantine tends quickly to $Q_{0}$. \label{fig5}}
\end{figure}
\vspace{-8pt}
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_H}
\caption{The hospitalized individuals tend to disappear with time. \label{fig6}}
\end{figure}
\vspace{-8pt}
\begin{figure}[H]
\includegraphics[width=9 cm]{DFE_IH}
\caption{The individuals in intensive care also tend to disappear with time. \label{fig7}}
\end{figure}
\section{Discussion}
\label{sec4}
The COVID-19 pandemic has been a shocking experience for every human being,
and no one expected the consequences felt worldwide. From this, we can also state that
the number of variables and parameters to be taken into account in a \text{COVID-19}
model is not small. Indeed, if there is such a thing as a good model that explains COVID-19,
surely it is not a simple model. Moreover, most of the models that explain epidemiological,
ecological, and economic phenomena are nonlinear. Therefore, given the tools available,
it is impossible to present an exact solution to the problem. That is one of the reasons
for the development of nonstandard finite difference methods. Indeed, such methods
enable us to construct discrete models from continuous ones, allowing one to obtain
numerical solutions.
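As a minimal illustration of this point (a toy decay equation, not the COVID-19 model of this paper), consider $x'=-\lambda x$: the standard Euler scheme $x_{n+1}=(1-h\lambda)x_n$ loses positivity once $h>1/\lambda$, while the Mickens-type nonstandard scheme $x_{n+1}=x_n/(1+h\lambda)$ remains positive and monotonically decreasing for every step size, mirroring the exact dynamics:

```python
def euler_step(x, h, lam):
    # standard forward Euler for x' = -lam * x
    return (1.0 - h * lam) * x

def mickens_step(x, h, lam):
    # Mickens-type nonstandard scheme: the decay term is treated implicitly,
    # (x_{n+1} - x_n)/h = -lam * x_{n+1}  =>  x_{n+1} = x_n / (1 + h*lam)
    return x / (1.0 + h * lam)

lam, h, x0, steps = 1.0, 2.5, 1.0, 20   # step size deliberately > 1/lam

euler_traj, mickens_traj = [x0], [x0]
for _ in range(steps):
    euler_traj.append(euler_step(euler_traj[-1], h, lam))
    mickens_traj.append(mickens_step(mickens_traj[-1], h, lam))

euler_positive = all(x > 0 for x in euler_traj)      # fails: Euler oscillates
mickens_positive = all(x > 0 for x in mickens_traj)  # holds for any h > 0
mickens_decreasing = all(a > b for a, b in zip(mickens_traj, mickens_traj[1:]))
```

This positivity preservation for arbitrary step sizes is precisely the kind of dynamic consistency that motivates the choice of Mickens' scheme in this work.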
Nowadays, there are several nonstandard finite difference schemes.
We have used here the one developed by Mickens because it has been shown to be dynamically consistent.
Precisely, we discretized the continuous-time model presented and analyzed in \cite{PST}.
With our discrete-time model, we analyzed the evolution of the COVID-19 pandemic in Portugal,
from its beginning up to 4 May 2020. This choice allowed us to compare our results
with those \mbox{of \cite{PST}}. We obtained the equilibrium points of the proposed discrete-time model,
showing that they coincide with the ones from the continuous model. From the qualitative analysis
point of view, we believe that it is important to prove the stability of the equilibrium points
if the model describes real data, which is our case here. For this reason,
\mbox{Figures~\ref{fig2}--\ref{fig7}} show the simulation results made over 2000 days: our intention
is to show the stability of the model. Regarding the stability, it should be noted that,
in contrast with \cite{PST}, which only proves local stability, here, we have proved
the global stability of the disease-free equilibrium (DFE) and attempted to do the
same with the endemic equilibrium (EE) point. However, regarding the EE,
we were not able to prove the global stability analytically,
and the question remains open. Figure~\ref{fig1} compares the predictions
of the continuous model with those of the discrete-time model.
One concludes that the discrete-time model fits the real data slightly better.
Figures~\ref{fig2}--\ref{fig7} present the convergence to the disease-free
equilibrium point for both the continuous and the discrete models. The asymptotic
behavior is similar; the only difference worth mentioning is that the discrete-time model,
in some small intervals of time, predicts smaller values than the continuous one.
In this work, we restricted ourselves to the development of the pandemic
in Portugal. We are aware that new models are emerging and, in a future work,
we intend to analyze a model that fits the data of more than one country and region,
possibly globally. This is under investigation and will be addressed elsewhere.
Another line of research concerns models that take vaccination into
account \cite{Couras}. Here, vaccination is not considered because our model
was created to explain the development of COVID-19 at the beginning of the pandemic.
However, since vaccination does not make individuals fully immune,
we think that we need to wait a little longer to see how the situation develops.
The question of how to prove global stability
for the endemic equilibrium remains open.
\authorcontributions{Conceptualization, S.V. and D.F.M.T.;
methodology, S.V. and D.F.M.T.; software, S.V.;
validation, S.V. and D.F.M.T.; formal analysis, S.V. and D.F.M.T.;
investigation, S.V. and D.F.M.T.;
writing---original draft preparation, S.V. and D.F.M.T.;
writing---review and editing, S.V. and D.F.M.T.;
visualization, S.V. All authors have read and agreed
to the published version of \mbox{the manuscript}.}
\funding{The authors were partially supported by
the Portuguese Foundation for Science and Technology (FCT):
Sandra Vaz through the Center of Mathematics and Applications
of \emph{Universidade da Beira Interior} (CMA-UBI),
project UIDB/00212/2020; Delfim F. M. Torres through
the Center for Research and Development in Mathematics
and Applications (CIDMA) of \emph{University of Aveiro},
project UIDB/04106/2020.}
\institutionalreview{Not applicable.}
\informedconsent{Not applicable.}
\dataavailability{The data and information used in this work are accessible
to anyone and can be found in the following links:
\url{https://www.pordata.pt/Portugal} (accessed on 16 September 2021);
\url{http://www.covid19.min-saude.pt} (accessed on 16 September 2021);
\url{https://covid19.min-saude.pt/ponto-de-situacao-atual-em-portugal}
(accessed on 16 September 2021).}
\acknowledgments{\textls[-15]{The authors are grateful
to three reviewers for their several comments and suggestions.}}
\conflictsofinterest{The authors declare no conflict of interest.}
\end{paracol}
\reftitle{References}
https://arxiv.org/abs/1909.01690 | Characterization of $k-$smooth operators between Banach spaces | We study $k-$smoothness of bounded linear operators defined between arbitrary Banach spaces. As an application, we characterize $k-$smooth operators defined from $\ell_1^n$ to an arbitrary Banach space. We also completely characterize $k-$smooth operators defined between arbitrary two-dimensional Banach spaces. | \section{Introduction}
The characterization of smoothness of operators between Banach spaces is a rich, intricate problem to study. It helps one to understand the geometry of operator spaces. Over the years, several mathematicians have studied the smoothness of operators defined between Banach spaces. The reader may go through \cite{DeK,GY,HR,KY,MPRS,PSG,R,Ra,SPM,SPMR} for more results in this direction. Before proceeding further, we introduce the notations and terminology to be used throughout the paper.
The letters $\mathbb{X},\mathbb{Y}$ denote real Banach spaces. The unit ball, unit sphere and the dual space of $\mathbb{X}$ are denoted respectively by $B_{\mathbb{X}}=\{x\in \mathbb{X}:\|x\|\leq 1\},$ $S_\mathbb{X}=\{x\in \mathbb{X}:\|x\|= 1\}$ and $\mathbb{X}^*.$ The set of all extreme points of $B_\mathbb{X}$ is denoted by $Ext(B_{\mathbb{X}}).$ For any set $A,$ $|A|$ denotes the cardinality of $A.$ The space of all bounded (compact) linear operators is denoted by $\mathbb{L}(\mathbb{X},\mathbb{Y})~(\mathbb{K}(\mathbb{X},\mathbb{Y})).$ An element $x^*\in S_{\mathbb{X}^*}$ is said to be a supporting linear functional of $x\in S_{\mathbb{X}}$ if $x^*(x)=1.$ Let $J(x)$ denote the set of all supporting linear functionals of $x,$ i.e., $J(x)=\{x^*\in S_{\mathbb{X}^*}:x^*(x)=1\}.$ Note that $J(x)$ is a weak*-compact convex subset of $S_{\mathbb{X}^*}.$ The set of all extreme points of $J(x)$ is denoted by $Ext~J(x).$ An element $x\in S_{\mathbb{X}}$ is said to be smooth if $J(x)$ is a singleton. So an interesting problem is to study the ``size'' of $J(x)$ whenever $J(x)$ is not a singleton. In $2005$, Khalil and Saleh \cite{KS} turned their attention to this problem: in \cite{KS} they generalized the notion of smoothness and introduced the notion of $k-$smoothness or multi-smoothness. Following \cite{KS}, we say that an element $x\in S_{\mathbb{X}}$ is $k-$smooth, or that the order of smoothness of $x$ is $k,$ if $J(x)$ contains exactly $k$ linearly independent vectors, i.e., if $k=dim ~span ~J(x).$ Similarly, an operator $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ is said to be a $k-$smooth operator if $k=dim~span~J(T),$ i.e., if there exist exactly $k$ linearly independent functionals in $S_{\mathbb{L}(\mathbb{X},\mathbb{Y})^*}$ supporting the operator $T.$ In \cite{H,Ha,KS,LR,Wa}, the authors have extensively studied $k-$smoothness in Banach spaces and in operator spaces.
Though the characterization of $k-$smooth operators defined on Hilbert spaces \cite{Wa} and between some particular Banach spaces is known, the complete characterization between arbitrary Banach spaces is still open. The main purpose of this paper is to proceed substantially in this direction. To do so, we will use the norm attainment set of an operator, defined as follows: for $T\in \mathbb{L}(\mathbb{X},\mathbb{Y}),$ the norm attainment set, denoted by $M_T$, is the collection of all unit vectors at which $T$ attains its norm, i.e., $M_T=\{x\in S_{\mathbb{X}}:\|Tx\|=\|T\|\}.$ To look into the properties of the norm attainment set and its role in the study of smoothness of operators, one may go through \cite{MPRS,PSG,S,SPMR}.
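For instance, when the domain is $\ell_1^n$, the extreme points of the unit ball are $\pm e_1,\ldots,\pm e_n$ and $\|T\|=\max_i\|Te_i\|$, so $M_T\cap Ext(B_{\ell_1^n})$ can be read off from the columns of a matrix representation of $T$. An illustrative computation (taking $\mathbb{Y}=\ell_2^m$; the matrix below is chosen only for illustration):

```python
import math

def l2_norm(v):
    return math.sqrt(sum(t * t for t in v))

def norm_attaining_columns(columns):
    # columns[i] = T e_i in l2^m.  For T : l1^n -> l2^m one has
    # ||T|| = max_i ||T e_i||_2, and
    # M_T ∩ Ext(B_{l1^n}) = {±e_i : ||T e_i|| = ||T||}
    norms = [l2_norm(c) for c in columns]
    op_norm = max(norms)
    return op_norm, [i for i, c in enumerate(norms) if abs(c - op_norm) < 1e-12]

# a map T : l1^3 -> l2^2, given by its columns (illustrative only)
T = [(3.0, 4.0), (0.0, 5.0), (1.0, 1.0)]
op_norm, attaining = norm_attaining_columns(T)
# op_norm = 5.0; the norm is attained exactly at ±e_1 and ±e_2
```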
In this paper, we first characterize the order of smoothness of some classes of operators defined between a finite dimensional Banach space and an arbitrary Banach space, depending on the norm attainment sets of the operators. As a result, we can completely characterize $k-$smooth operators defined between $\ell_1^n$ and an arbitrary Banach space. Finally, we characterize the order of smoothness of $T\in \mathbb{L}(\mathbb{X},\mathbb{Y}),$ where $\mathbb{X},\mathbb{Y}$ are arbitrary two-dimensional Banach spaces. To obtain these results, we mainly use the following lemma from \cite[Lemma 3.1]{W}, which characterizes $Ext~J(T)$ in terms of $Ext~J(Tx)$ for $x\in M_T\cap Ext(B_{\mathbb{X}})$.
\begin{lemma}\cite[Lemma 3.1]{W}\label{lemma-wojcik}
Suppose that $\mathbb{X}$ is a reflexive Banach space. Suppose that $\mathbb{K}(\mathbb{X},\mathbb{Y})$ is an $M-$ideal in $\mathbb{L}(\mathbb{X},\mathbb{Y}).$ Let $T\in \mathbb{L}(\mathbb{X},\mathbb{Y}), \|T\|=1$ and dist$(T,\mathbb{K}(\mathbb{X},\mathbb{Y}))<1.$ Then $M_T\cap Ext(B_\mathbb{X})\neq \emptyset$ and
\[Ext ~J(T)=\{y^*\otimes x\in \mathbb{K}(\mathbb{X},\mathbb{Y})^*:x\in M_T\cap Ext(B_{\mathbb{X}}), y^*\in Ext ~J(Tx)\},\]
where $y^*\otimes x: \mathbb{K}(\mathbb{X},\mathbb{Y})\to \mathbb{R}$ is defined by $y^*\otimes x(S)=y^*(Sx)$ for every $S\in \mathbb{K}(\mathbb{X},\mathbb{Y}).$
\end{lemma}
\section{Main results}
We begin this section with an easy lemma which will be used later to prove some of the theorems of this section. The proof being simple, we omit it here.
\begin{lemma}\label{lemma-01}
Suppose $\mathbb{X},\mathbb{Y}$ are finite dimensional Banach spaces. If $\{x_1,x_2,\ldots,x_m\}$ is a linearly independent subset of $\mathbb{X}$ and $\{y_1^*,y_2^*,\ldots,y_n^*\}$ is a linearly independent subset of $\mathbb{Y}^*$ then $\{y_i^*\otimes x_j:1\leq i\leq n,1\leq j\leq m\}$ is a linearly independent subset of $\mathbb{L}(\mathbb{X},\mathbb{Y})^*.$
\end{lemma}
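In coordinates, $y^*\otimes x$ acts on $S$ as $y^*(Sx)=(x\otimes y^*)^{T}\mathrm{vec}(S)$, so the linear independence asserted in the lemma amounts to linear independence of the Kronecker products $x_j\otimes y_i^*$. A numerical sketch of this (random, hence generically independent, vectors used only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3          # dim X = 4, dim Y (and Y*) = 3
k, l = 2, 2          # two x's and two y*'s

X = rng.standard_normal((k, n))       # rows x_1, ..., x_k (generically independent)
Ystar = rng.standard_normal((l, m))   # rows y*_1, ..., y*_l

# y* ⊗ x acts on a matrix S (a map X -> Y) via y*(Sx) = (x ⊗ y*)^T vec(S),
# so each functional is represented by the Kronecker vector kron(x, y*)
functionals = np.array([np.kron(x, y) for x in X for y in Ystar])
rank = np.linalg.matrix_rank(functionals)   # k * l: the functionals are independent
```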
Observe that, if $\mathbb{X}$ is a finite dimensional Banach space, $\mathbb{Y}$ is an arbitrary Banach space and $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ $(=\mathbb{K}(\mathbb{X},\mathbb{Y}))$ is such that $\|T\|=1,$ then $\mathbb{X},\mathbb{Y}$ and $T$ satisfy all the conditions of Lemma \ref{lemma-wojcik}. Using Lemma \ref{lemma-wojcik}, we now characterize the order of smoothness of a class of operators defined between a finite dimensional Banach space and an arbitrary Banach space.
\begin{theorem}\label{th-ind}
Suppose $\mathbb{X}$ is a finite dimensional Banach space and $\mathbb{Y}$ is an arbitrary Banach space. Suppose that $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ is such that $\|T\|=1$ and $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2,\ldots,\pm x_r\},$ where $\{x_1,x_2,\ldots,x_r\}$ is linearly independent in $\mathbb{X}.$ Then $T$ is $k-$smooth if and only if $Tx_i$ is $m_i-$smooth for each $1\leq i\leq r$ and $m_1+m_2+\ldots+m_r=k.$
\end{theorem}
\begin{proof}
Let $dim(\mathbb{X})=n.$ At first suppose that $r<n.$ Extend $\{x_1,x_2,\ldots,x_r\}$ to a basis $\{x_1,x_2,\ldots,x_n\}$ of $\mathbb{X}.$ Suppose $T$ is $k-$smooth and $Tx_i$ is $m_i-$smooth for each $1\leq i\leq r.$ Then by \cite[Prop. 2.1]{LR}, for each $1\leq i\leq r,$ we have,
\begin{eqnarray*}
m_i&=&dim ~span ~J(Tx_i)\\
&=& dim~ span~ Ext~ J(Tx_i).
\end{eqnarray*}
Let $\{y_{ij}^*:1\leq j\leq m_i, y_{ij}^*\in Ext ~J(Tx_i)\}$ be a basis of $span~Ext~J(Tx_i)$ for each $1\leq i\leq r.$ Let
\[W_i=span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext ~J(Tx_i)\} ~\text{for each } 1\leq i\leq r.\]
We first show that $B_i=\{y_{ij}^*\otimes x_i:1\leq j\leq m_i\}$ is a basis of $W_i.$
Let $\sum\limits_{1\leq j\leq m_i}a_j(y_{ij}^*\otimes x_i)=0,$ where $a_j\in \mathbb{R}$ for all $1\leq j\leq m_i.$ Consider a Hamel basis $\{u_\beta:\beta \in \Lambda\}$ of $\mathbb{Y}.$ For each $\beta\in\Lambda,$ define $S_\beta\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ by
\begin{equation} \label{eq1}
\begin{split}
S_\beta x_i&=u_\beta\\
S_\beta x_l&= 0 \text{~~ for all } 1\leq l(\neq i)\leq n.
\end{split}
\end{equation}
Then for each $\beta \in \Lambda,$\\
$\sum\limits_{1\leq j\leq m_i}a_j(y_{ij}^*\otimes x_i)(S_\beta)=0\Rightarrow\sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*S_\beta(x_i)=0\Rightarrow \sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*(u_\beta)=0\Rightarrow \sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*=0\Rightarrow a_j=0 $ for all $1\leq j\leq m_i.$ Thus, $B_i$ is linearly independent. It can be easily verified that $B_i$ is a spanning set of $W_i.$ Hence, $B_i$ is a basis of $W_i$ and so $dim~W_i=m_i$ for each $1\leq i\leq r.$ Now,
\begin{eqnarray*}
k&=&dim ~span ~J(T)\\
&=& dim~ span~ Ext ~J(T)\\
&=& dim~ span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext ~J(Tx_i),1\leq i\leq r \}\\
&=& dim~W,\text{~where,}\\
W&=&span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext~ J(Tx_i),1\leq i\leq r \}.
\end{eqnarray*}
We now show that $W=\oplus_{i=1}^{r}W_i.$ Clearly, $W=W_1+W_2+\ldots+W_r.$ Suppose that $z\in W_i\cap \sum\limits_{\substack{l=1\\l\neq i}}^{r}W_l$ for some $i,$ $1\leq i\leq r.$ Then \[z=\sum_{j=1}^{m_i}a_{ij}(y_{ij}^*\otimes x_i)=\sum\limits_{\substack{1\leq l(\neq i)\leq r}}w_l,\text{ where }w_l=\sum\limits_{\substack{1\leq j\leq m_l}}a_{lj}(y_{lj}^*\otimes x_l)\in W_l~,a_{ij}\in \mathbb{R}.\]
For each $\beta\in\Lambda,$ considering $S_\beta\in \mathbb{L}(\mathbb{X},\mathbb{Y}),$ as defined in (\ref{eq1}), we have, \\ $\sum_{j=1}^{m_i}a_{ij}y_{ij}^*S_{\beta}(x_i)=\sum\limits_{\substack{1\leq l(\neq i)\leq r \\ 1\leq j\leq m_l}}a_{lj}y_{lj}^*S_\beta(x_l)\Rightarrow \sum_{j=1}^{m_i}a_{ij}y_{ij}^*(u_\beta)=0 \Rightarrow a_{ij}=0$ for all $1\leq j \leq m_i.$ Thus, $z=0\Rightarrow W_i\cap \sum\limits_{\substack{l=1\\l\neq i}}^{r}W_l=\{0\}.$ Therefore, $W=\oplus_{i=1}^{r}W_i.$ Hence, $k=dim~W=dim~\oplus_{i=1}^{r}W_i=\sum_{i=1}^{r}dim~W_i=m_1+m_2+\ldots+m_r.$ \\
If $r=n,$ then proceeding similarly, we can show that $k=m_1+m_2+\ldots+m_r.$ This completes the proof of the theorem.
\end{proof}
Using Theorem \ref{th-ind}, we can completely characterize the order of smoothness of a linear operator defined from $\ell_1^n~(n\in \mathbb{N})$ to an arbitrary Banach space.
\begin{corollary}
Let $\mathbb{Y}$ be an arbitrary Banach space and $T\in \mathbb{L}(\ell_1^n,\mathbb{Y}),\|T\|=1.$ Then $T$ is $k-$smooth if and only if $M_T\cap Ext(B_{\ell_1^n})=\{\pm x_1,\pm x_2,$ $\ldots,\pm x_r\}$ for some $1\leq r\leq n,$ $Tx_i$ is $m_i-$smooth for each $1\leq i\leq r$ and $m_1+m_2+\ldots+m_r=k.$
\end{corollary}
\begin{proof}
The proof easily follows from Theorem \ref{th-ind} and the fact that $B_{\ell_1^n}$ contains only finitely many extreme points, and that if $M_T\cap Ext(B_{\ell_1^n})=\{\pm x_1,\pm x_2,\ldots,\pm x_r\}$ for some $1\leq r\leq n,$ then $\{x_1,x_2,\ldots,x_r\}$ is always a linearly independent set in $\ell_1^n.$
\end{proof}
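To make the corollary concrete, take $\mathbb{Y}=\ell_\infty^m$ and use the standard facts that $\|T\|=\max_i\|Te_i\|_\infty$ and that a unit vector $y\in\ell_\infty^m$ is $m'-$smooth with $m'=|\{j:|y_j|=\|y\|_\infty\}|$. Under these assumptions, the order of smoothness of $T\in\mathbb{L}(\ell_1^n,\ell_\infty^m)$ can be computed directly from a matrix representation; a sketch (the matrix is illustrative only):

```python
def order_of_smoothness(columns, tol=1e-12):
    # columns[i] = T e_i, viewing T : l1^n -> linf^m.
    # ||T|| = max_i ||T e_i||_inf; by the corollary, k is the sum, over the
    # norm-attaining columns (i.e. over ±e_i in M_T), of the order of
    # smoothness of T e_i, which in linf^m is the number of coordinates
    # of maximal modulus.
    op_norm = max(max(abs(t) for t in c) for c in columns)
    k = 0
    for c in columns:
        if max(abs(t) for t in c) >= op_norm - tol:
            k += sum(1 for t in c if abs(t) >= op_norm - tol)
    return op_norm, k

# T : l1^3 -> linf^3, columns chosen for illustration
T = [(1.0, 1.0, 0.0), (1.0, 0.0, -1.0), (0.5, 0.2, 0.1)]
op_norm, k = order_of_smoothness(T)
# op_norm = 1.0; columns 1 and 2 attain the norm with two maximal
# coordinates each, so k = 2 + 2 = 4
```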
\begin{remark}
Note that, if we consider $T\in \mathbb{L}(\ell_\infty^3,\ell_\infty^3)$ defined by $T(x,y,z)=\frac{1}{2}(x+y,x+y,x+y),$ then $M_T\cap Ext(B_{\ell_\infty^3})=\{\pm(1,1,1),\pm(1,1,-1)\}$ and so in this case, we can apply Theorem \ref{th-ind} to conclude that $T$ is $6-$smooth. Whereas if we consider the operator $T\in \mathbb{L}(\ell_\infty^3,\ell_\infty^3)$ defined by $T(x,y,z)=(x,0,0),$ then $M_T\cap Ext(B_{\ell_\infty^3})=\{\pm(1,1,1),\pm(1,1,-1),\pm(-1,1,1),\pm(1,-1,1)\}$ and so we cannot conclude $k-$smoothness of $T$ from Theorem \ref{th-ind}.
\end{remark}
If the dimension of $\mathbb{X}$ is infinite, then Theorem \ref{th-ind} may not be true. To obtain a desired result for an infinite dimensional Banach space $\mathbb{X}$, apart from linear independence, we assume an additional condition on $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2,\ldots,\pm x_r\},$ namely that $ x_i \bot_B x_j$ for all $i \neq j.$ Note that, in a Banach space $\mathbb{X},$ an element $x$ is Birkhoff-James \cite{B,J} orthogonal to an element $y$, written as $ x \bot_B y$, if and only if $ \| x + \lambda y\| \geq \|x\| $ for all scalars $\lambda.$ Although the proof of the following theorem is in the same spirit as that of Theorem \ref{th-ind}, except for the construction of $S_\beta$, we prove it in detail for the convenience of the reader.
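Birkhoff-James orthogonality can be tested numerically straight from the definition by sampling $\lambda$ on a grid; for the smooth spaces $\ell_p$ $(1<p<\infty)$, $x\bot_B y$ is equivalent to $\sum_i \mathrm{sgn}(x_i)|x_i|^{p-1}y_i=0$. An illustrative check in $\ell_4^2$ (grid-based, hence only a sanity check, not a proof):

```python
def lp_norm(v, p=4):
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

def looks_bj_orthogonal(x, y, p=4, grid=2001, span=5.0):
    # x ⊥_B y iff ||x + λ y|| >= ||x|| for every scalar λ; here λ is
    # sampled on a finite grid, so this is only a sanity check
    nx = lp_norm(x, p)
    for i in range(grid):
        lam = -span + 2.0 * span * i / (grid - 1)
        if lp_norm([a + lam * b for a, b in zip(x, y)], p) < nx - 1e-9:
            return False
    return True

x, y = (1.0, 1.0), (1.0, -1.0)
ortho = looks_bj_orthogonal(x, y)               # True: x ⊥_B y in l4^2
not_ortho = looks_bj_orthogonal(x, (1.0, 0.0))  # False: fails near λ = -1
```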
\begin{theorem}\label{th-ind2}
Suppose $\mathbb{X}$ is a smooth, reflexive Banach space and $\mathbb{Y}$ is an arbitrary Banach space. Let $\mathbb{K}(\mathbb{X},\mathbb{Y})$ be an $M-$ideal in $\mathbb{L}(\mathbb{X},\mathbb{Y}).$ Suppose that $T\in \mathbb{L}(\mathbb{X},\mathbb{Y}), \|T\|=1$ and dist$ (T,\mathbb{K}(\mathbb{X},\mathbb{Y})) < 1. $ Suppose that $ M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2,\\ \ldots,\pm x_r\},$ where $\{x_1,x_2,\ldots,x_r\}$ is linearly independent in $\mathbb{X}$ and $ x_i \bot_B x_j$ for all $i \neq j.$ Then $T$ is $k-$smooth if and only if $Tx_i$ is $m_i-$smooth for each $1\leq i\leq r$ and $m_1+m_2+\ldots+m_r=k.$
\end{theorem}
\begin{proof}
Suppose $T$ is $k-$smooth and $Tx_i$ is $m_i-$smooth for each $1\leq i\leq r.$ Then by \cite[Prop. 2.1]{LR}, for each $1\leq i\leq r,$ we have, $ m_i= dim~ span~ Ext~ J(Tx_i).$
Let $\{y_{ij}^*:1\leq j\leq m_i, y_{ij}^*\in Ext ~J(Tx_i)\}$ be a basis of $span~Ext~J(Tx_i)$ for each $1\leq i\leq r.$ Let
\[W_i=span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext ~J(Tx_i)\} ~\text{for each } 1\leq i\leq r.\]
We first show that $B_i=\{y_{ij}^*\otimes x_i:1\leq j\leq m_i\}$ is a basis of $W_i.$
Let $\sum\limits_{1\leq j\leq m_i}a_j(y_{ij}^*\otimes x_i)=0,$ where $a_j\in \mathbb{R}$ for all $1\leq j\leq m_i.$ Since $\mathbb{X}$ is smooth, for each $1\leq i \leq r,$ there exists a unique hyperspace $H_i$ such that $x_i\perp_B H_i.$ Therefore, $x_j\in H_i$ for all $1\leq j(\neq i)\leq r,$ since $x_i\perp_B x_j$ for all $1\leq j(\neq i)\leq r.$ Consider a Hamel basis $\{u_\beta:\beta \in \Lambda\}$ of $\mathbb{Y}.$ For each $\beta\in\Lambda,$ define $S_\beta:\mathbb{X}\to \mathbb{Y}$ as follows:
\begin{equation} \label{eq2}
\begin{split}
S_\beta x_i&=u_\beta\\
S_\beta x&= 0 \text{~~ for all } x\in H_i.
\end{split}
\end{equation}
Then it is easy to see that $S_{\beta}\in \mathbb{L}(\mathbb{X},\mathbb{Y}).$
Now, for each $\beta \in \Lambda,$\\
$\sum\limits_{1\leq j\leq m_i}a_j(y_{ij}^*\otimes x_i)(S_\beta)=0\Rightarrow\sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*S_\beta(x_i)=0\Rightarrow \sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*(u_\beta)=0\Rightarrow \sum\limits_{1\leq j\leq m_i}a_jy_{ij}^*=0\Rightarrow a_j=0 $ for all $1\leq j\leq m_i.$ Thus, $B_i$ is linearly independent. It can be easily verified that $B_i$ is a spanning set of $W_i.$ Hence, $B_i$ is a basis of $W_i$ and so $dim~W_i=m_i$ for each $1\leq i\leq r.$ Now,
\begin{eqnarray*}
k&=&dim ~span ~J(T)\\
&=& dim~ span~ Ext ~J(T)\\
&=& dim~ span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext ~J(Tx_i),1\leq i\leq r \}\\
&=& dim~W,\text{~where,}\\
W&=&span ~\{y_{ij}^*\otimes x_i:y_{ij}^*\in Ext~ J(Tx_i),1\leq i\leq r \}.
\end{eqnarray*}
We now show that $W=\oplus_{i=1}^{r}W_i.$ Clearly, $W=W_1+W_2+\ldots+W_r.$ Suppose that $z\in W_i\cap \sum\limits_{\substack{l=1\\l\neq i}}^{r}W_l$ for some $i,$ $1\leq i\leq r.$ Then \[z=\sum_{j=1}^{m_i}a_{ij}(y_{ij}^*\otimes x_i)=\sum\limits_{\substack{1\leq l(\neq i)\leq r}}w_l,\text{ where }w_l=\sum\limits_{\substack{1\leq j\leq m_l}}a_{lj}(y_{lj}^*\otimes x_l)\in W_l,a_{ij}\in \mathbb{R}.\]
For each $\beta\in\Lambda,$ considering $S_\beta\in \mathbb{L}(\mathbb{X},\mathbb{Y}),$ as defined in (\ref{eq2}), we have, \\ $\sum_{j=1}^{m_i}a_{ij}y_{ij}^*S_{\beta}(x_i)=\sum\limits_{\substack{1\leq l(\neq i)\leq r \\ 1\leq j\leq m_l}}a_{lj}y_{lj}^*S_\beta(x_l)\Rightarrow \sum_{j=1}^{m_i}a_{ij}y_{ij}^*(u_\beta)=0 \Rightarrow a_{ij}=0$ for all $1\leq j \leq m_i.$ Thus, $z=0\Rightarrow W_i\cap \sum\limits_{\substack{l=1\\l\neq i}}^{r}W_l=\{0\}.$ Therefore, $W=\oplus_{i=1}^{r}W_i.$ Hence, $k=dim~W=dim~\oplus_{i=1}^{r}W_i=\sum_{i=1}^{r}dim~W_i=m_1+m_2+\ldots+m_r.$ This completes the proof of the theorem.
\end{proof}
\begin{example}
The above result can be used to determine the order of smoothness of an operator $T$ defined on the infinite-dimensional spaces $\ell_p~(1<p(\neq 2)<\infty).$ As an example, consider the operator $T\in \mathbb{L}(\ell_4,\ell_4)$ defined by $$T(a_1,a_2,a_3,a_4,\ldots)=2^{-\frac{3}{4}}(a_1+a_2,a_1-a_2,0,0,\ldots).$$
Then it is easy to see that $M_T\cap Ext(B_{\ell_4})=\Big\{\pm\Big(\frac{1}{2^{\frac{1}{4}}},\frac{1}{2^{\frac{1}{4}}},0,0,0,\ldots\Big),$ $\pm\Big(-\frac{1}{2^{\frac{1}{4}}},\frac{1}{2^{\frac{1}{4}}},0,0,\ldots\Big)\Big\}.$ Since the space $\ell_4$ and the operator $T$ satisfy all the conditions of Theorem \ref{th-ind2}, we conclude that $T$ is $2-$smooth.
\end{example}
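The claims in the above example can also be checked numerically. The following sketch (ours, for illustration; it restricts to the first two coordinates, on which $T$ acts) verifies that the listed points are unit vectors on which $T$ attains its norm, and that $\|Ta\|_4\leq 1$ on sampled unit vectors:

```python
import numpy as np

# T(a) = 2^{-3/4} (a1 + a2, a1 - a2, 0, ...), truncated to two coordinates.
c = 2.0 ** (-0.75)

def T(a):
    return c * np.array([a[0] + a[1], a[0] - a[1]])

def l4(v):
    # The l_4 norm of a finitely supported vector.
    return float((np.abs(v) ** 4).sum() ** 0.25)

# The claimed norm-attaining extreme points +-(2^{-1/4}, 2^{-1/4}, 0, ...)
# and +-(-2^{-1/4}, 2^{-1/4}, 0, ...):
p = 2.0 ** (-0.25)
for a in [np.array([p, p]), np.array([-p, p])]:
    print(l4(a), l4(T(a)))  # both are 1: a is a unit vector attaining ||T|| = 1

# ||T a||_4 <= 1 on sampled unit vectors, consistent with ||T|| = 1.
rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.standard_normal(2)
    a = a / l4(a)
    assert l4(T(a)) <= 1 + 1e-12
```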
\section{k-smoothness of operators defined between two-dimensional Banach spaces}
In this section, we completely characterize $k-$smoothness of an operator $T\in \mathbb{L}(\mathbb{X},\mathbb{Y}),$ depending on $|M_T\cap Ext(B_{\mathbb{X}})|,$ when both $\mathbb{X},\mathbb{Y}$ are two-dimensional Banach spaces. Consider first the case $|M_T\cap Ext(B_{\mathbb{X}})|=2,$ i.e., $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1\}.$ In this case, $T$ is smooth if $Tx_1$ is smooth and $T$ is $2-$smooth if $Tx_1$ is non-smooth, which follows directly from Theorem \ref{th-ind}. Next, consider the case $|M_T\cap Ext(B_{\mathbb{X}})|=4,$ i.e., $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2\}.$ In this case, by Theorem \ref{th-ind}, $T$ is $2-$smooth when both $Tx_1,Tx_2$ are smooth, $T$ is $3-$smooth when exactly one of $Tx_1,Tx_2$ is smooth, and $T$ is $4-$smooth when both $Tx_1,Tx_2$ are non-smooth. When $|M_T\cap Ext(B_{\mathbb{X}})|>4,$ the situation is a little more complicated and we have to consider two cases: $|M_T\cap Ext(B_{\mathbb{X}})|=6$ and $|M_T\cap Ext(B_{\mathbb{X}})|\geq 8.$ We first prove the following theorem.
\begin{theorem}\label{th-mt6}
Suppose $\mathbb{X}, \mathbb{Y}$ are two-dimensional Banach spaces and $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ is such that $\|T\|=1$ and $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2,\pm x_3\}.$ Then the following holds:\\
(i) If $Tx_i$ is smooth for each $1\leq i\leq 3,$ then $T$ is $3-$smooth.\\
(ii) If $Tx_1$ is not smooth and either $Tx_2,Tx_3$ or $Tx_2,-Tx_3$ are interior points of the same line segment of the unit sphere, then $T$ is $3-$smooth.\\
(iii) If $Tx_1$ is not smooth, and neither $Tx_2,Tx_3$ nor $Tx_2,-Tx_3$ are interior points of the same line segment of the unit sphere, then $T$ is $4-$smooth.
\end{theorem}
\begin{proof}
Clearly, $T$ is $k-$smooth for some $1\leq k\leq 4,$ since $dim(\mathbb{X})=dim(\mathbb{Y})=2.$
$(i)$ Suppose $Tx_i$ is smooth for each $1\leq i\leq 3.$ Then $Tx_i$ has a unique supporting linear functional for each $1\leq i\leq 3.$ We first show that $Tx_1,Tx_2,Tx_3$ cannot have the same supporting linear functional. If possible, suppose that $J(Tx_i)=\{y^*\}$ for all $i=1,2,3.$ Then $y^*(Tx_1)=y^*(Tx_2)=y^*(Tx_3)=1.$ Hence, for all $t\in[0,1],~y^*(tTx_1+(1-t)Tx_2)=1\Rightarrow \|tTx_1+(1-t)Tx_2\|=1,$ since $\|y^*\|=1.$ Thus, $\|T(tx_1+(1-t)x_2)\|=1$ and $\|T\|=1$ together give that $\|tx_1+(1-t)x_2\|=1$ for all $t\in[0,1].$ This implies that $x_1,x_2$ lie on the same line segment of the unit sphere. Similarly, $x_1,x_3$ and $x_2,x_3$ lie on the same line segment of the unit sphere. This contradicts that $x_1,x_2,x_3$ are distinct extreme points of $B_{\mathbb{X}}.$ Therefore, without loss of generality, we may assume that $J(Tx_i)=\{y_i^*\}$ for all $i=1,2,3$ and $y_1^*\neq \pm y_2^*.$ Since $\mathbb{X}$ is two-dimensional and $x_1,x_2,x_3$ are distinct extreme points of $B_{\mathbb{X}},$ we have $x_3=\gamma x_1+\delta x_2$ for some $\gamma(\neq 0),\delta(\neq 0)\in \mathbb{R}.$ Now, $y_1^*\neq \pm y_2^*\Rightarrow \{y_1^*,y_2^*\}$ is linearly independent in $\mathbb{Y}^*.$ Therefore, $y_3^*=\alpha y_1^*+\beta y_2^*$ for some $\alpha,\beta\in \mathbb{R}.$ Since $T$ is $k-$smooth,
\begin{eqnarray*}
k&=&dim ~span ~J(T)\\
&=& dim~ span~ Ext ~J(T)\\
&=& dim~ span ~\{y_i^*\otimes x_i:1\leq i\leq 3 \}.
\end{eqnarray*}
We show that $\{y_i^*\otimes x_i:1\leq i\leq 3 \}$ is linearly independent. Let
\begin{eqnarray*}
&&a_1y_1^*\otimes x_1+a_2y_2^*\otimes x_2+a_3y_3^*\otimes x_3=0,~\text{where~} a_1,a_2,a_3\in \mathbb{R}, \\
&\Rightarrow & a_1y_1^*\otimes x_1+a_2y_2^*\otimes x_2+a_3(\alpha y_1^*+\beta y_2^*)\otimes(\gamma x_1+\delta x_2)=0\\
&\Rightarrow& (a_1+a_3\alpha\gamma)y_1^*\otimes x_1+(a_2+a_3\beta\delta)y_2^*\otimes x_2+a_3\alpha\delta y_1^*\otimes x_2+a_3\beta\gamma y_2^*\otimes x_1=0.
\end{eqnarray*}
Now, using Lemma \ref{lemma-01}, we have, $a_1+a_3\alpha\gamma=0,~a_2+a_3\beta\delta=0,~a_3\alpha\delta=0$ and $a_3\beta\gamma=0.$ Solving these $4$ equations, we get $a_1=a_2=a_3=0.$ Therefore, $\{y_i^*\otimes x_i:1\leq i\leq 3 \}$ is linearly independent. Thus, $T$ is $3-$smooth. \\
$(ii)$ Suppose that $Tx_1$ is not smooth. Without loss of generality, assume that $Tx_2,Tx_3$ are interior points of the same line segment of the unit sphere. Then $Tx_2,Tx_3$ have the same unique supporting linear functional, say $z^*,$ i.e., $J(Tx_2)=J(Tx_3)=\{z^*\}.$ Since $Tx_1$ is not smooth and $\mathbb{Y}$ is two-dimensional, it is easy to see that $Ext ~J(Tx_1)=\{y_1^*,y_2^*\}$ for some linearly independent set $\{y_1^*,y_2^*\}$ of $\mathbb{Y}^*.$ Now, $x_3=ax_1+bx_2$ for some $a(\neq 0),b(\neq 0)\in \mathbb{R}$ and $z^*=\alpha y_1^*+\beta y_2^*$ for some $\alpha,\beta\in \mathbb{R}.$ Therefore, $z^*\otimes x_3=(\alpha y_1^*+\beta y_2^*)\otimes(ax_1+bx_2)=a\alpha y_1^*\otimes x_1+a\beta y_2^*\otimes x_1+bz^*\otimes x_2\in span \{y_1^*\otimes x_1,y_2^*\otimes x_1,z^*\otimes x_2\}.$ Thus,
\begin{eqnarray*}
k&=&dim~span~Ext~J(T)\\
&=&dim~span~\{y_1^*\otimes x_1,y_2^*\otimes x_1,z^*\otimes x_2,z^*\otimes x_3\}\\
&=&dim~span~\{y_1^*\otimes x_1,y_2^*\otimes x_1,z^*\otimes x_2\}.
\end{eqnarray*}
We next show that $\{y_1^*\otimes x_1,y_2^*\otimes x_1,z^*\otimes x_2\}$ is linearly independent. Let $a_1y_1^*\otimes x_1+a_2y_2^*\otimes x_1+a_3z^*\otimes x_2=0,$ where $a_i\in \mathbb{R}~(i=1,2,3).$ Then
\begin{eqnarray}\label{eq-01}
a_1y_1^*S( x_1)+a_2y_2^*S( x_1)+a_3z^*S( x_2)=0~\text{for all} ~S\in \mathbb{L}(\mathbb{X},\mathbb{Y}).
\end{eqnarray}
Define $S_1,S_2\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ as follows:
\begin{align*}
S_1x_1 & = 0 & S_2 x_1 & = u_2\\
S_1x_2 & = u_1 & S_2x_2 & = 0,
\end{align*}
where $u_1\notin \ker(z^*)$ and $u_2\in \ker(y_1^*)\setminus \ker(y_2^*).$ Now, putting $S_1,S_2$ in (\ref{eq-01}), we get, $a_2=a_3=0.$ Thus, $a_1y_1^*\otimes x_1=0.$ Since $x_1\neq 0$ and $y_1^*\neq 0,$ we have $a_1=0.$ Therefore, $\{y_1^*\otimes x_1,y_2^*\otimes x_1,z^*\otimes x_2\}$ is a linearly independent subset of $\mathbb{L}(\mathbb{X},\mathbb{Y})^*.$ Thus, $k=3$ and so $T$ is $3-$smooth.\\
$(iii)$ Suppose $Tx_1$ is not smooth, and neither $Tx_2,Tx_3$ nor $Tx_2,-Tx_3$ are interior points of the same line segment of the unit sphere. Then $Ext~J(Tx_1)=\{y_{11}^*,y_{12}^*\}$ for some linearly independent subset $\{y_{11}^*,y_{12}^*\}$ of $\mathbb{Y}^*$ and there exist $y_2^*\in Ext~J(Tx_2)$ and $y_3^*\in Ext~J(Tx_3)$ such that $y_2^*\neq \pm y_3^*.$ Now,
\begin{eqnarray*}
4\geq k&=&dim~span~Ext~J(T)\\
&\geq&dim~span~\{y_{11}^*\otimes x_1,y_{12}^*\otimes x_1,y_2^*\otimes x_2,y_3^*\otimes x_3\}.
\end{eqnarray*}
As before, choosing $S$ suitably from $\mathbb{L}(\mathbb{X},\mathbb{Y}),$ it can be easily shown that $\{y_{11}^*\otimes x_1,y_{12}^*\otimes x_1,y_2^*\otimes x_2,y_3^*\otimes x_3\}$ is a linearly independent subset of $\mathbb{L}(\mathbb{X},\mathbb{Y})^*.$ Thus, $k=4$ and so $T$ is $4-$smooth. This completes the proof of the theorem.
\end{proof}
In addition to $|M_T\cap Ext(B_{\mathbb{X}})|=6,$ if we assume the strict convexity of either $\mathbb{X}$ or $\mathbb{Y},$ then the $k-$smoothness of $T$ can be characterized as follows.
\begin{corollary}
Suppose $\mathbb{X},\mathbb{Y}$ are two-dimensional Banach spaces and either $\mathbb{X}$ or $\mathbb{Y}$ is strictly convex. Let $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ be such that $M_T\cap Ext(B_{\mathbb{X}})=\{\pm x_1,\pm x_2,\pm x_3\}.$ Then $T$ is $3-$smooth if and only if $Tx_i$ is smooth for all $i=1,2,3,$ otherwise $T$ is $4-$smooth.
\end{corollary}
\begin{proof}
At first suppose that $\mathbb{X}$ is strictly convex. We only show that case $(ii)$ of Theorem \ref{th-mt6} does not hold. If possible, suppose that $Tx_2,Tx_3$ are interior points of the same line segment. Then $Tx_2,Tx_3$ have the same supporting linear functional, i.e., there exists $y^*\in S_{\mathbb{Y}^*}$ such that $y^*(Tx_2)=y^*(Tx_3)=1.$ So for all $t\in[0,1],~y^*((1-t)Tx_2+tTx_3)=1\Rightarrow\|(1-t)Tx_2+tTx_3\|=1,$ and since $\|T\|=1,$ this gives $\|(1-t)x_2+tx_3\|=1,$ which contradicts that $\mathbb{X}$ is strictly convex. Therefore, case $(ii)$ of Theorem \ref{th-mt6} does not hold and the result follows from Theorem \ref{th-mt6}.\\
When $\mathbb{Y}$ is strictly convex, case $(ii)$ of Theorem \ref{th-mt6} does not arise and the result follows easily.
\end{proof}
The only case remaining to completely characterize $k-$smoothness of an operator $T$ between two-dimensional Banach spaces $\mathbb{X}$ and $\mathbb{Y}$ is $|M_T\cap Ext(B_{\mathbb{X}})|\geq 8.$ In the next theorem, we consider this case.
\begin{theorem}\label{th-mt8}
Suppose $\mathbb{X},\mathbb{Y}$ are two-dimensional Banach spaces. Let $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ be such that $|M_T\cap Ext(B_{\mathbb{X}})|\geq 8.$ Then the following holds:\\
(i) If $Tx$ is not smooth for some $x\in M_T\cap Ext(B_{\mathbb{X}}),$ then $T$ is $4-$smooth.\\
(ii) Suppose $Tx$ is smooth for each $x\in M_T\cap Ext(B_{\mathbb{X}}).$ If there exist $x_i\in M_T\cap Ext(B_{\mathbb{X}}),~y_i^*\in J(Tx_i)$ for $i=1,2,3,4$ such that $x_2=ax_1+bx_3, x_4=cx_1+dx_3$ and $y_2^*=\alpha_1y_1^*+\alpha_2y_3^*,y_4^*=\beta_1y_1^*+\beta_2y_3^*$ with $ \beta_1\alpha_2 ad-\beta_2\alpha_1bc\neq 0,$ then $T$ is $4-$smooth. Otherwise $T$ is $3-$smooth.
\end{theorem}
\begin{proof}
Clearly, $T$ is $k-$smooth for some $1\leq k\leq 4,$ since $dim(\mathbb{X})=dim(\mathbb{Y})=2.$ Since $|M_T\cap Ext(B_{\mathbb{X}})|\geq 8,$ we may assume that $\{\pm x_1,\pm x_2,\pm x_3,\pm x_4\}\subseteq M_T\cap Ext(B_{\mathbb{X}}).$ \\
$(i)$ Assume that $Tx_1$ is not smooth. Without loss of generality, we may assume that $x_1=\frac{(1-s)x_2-sx_4}{\|(1-s)x_2-sx_4\|}$ and $x_3=\frac{(1-t)x_2+tx_4}{\|(1-t)x_2+tx_4\|}$ for some $s,t\in(0,1).$ Let $y_{11}^*,y_{12}^*$ be two linearly independent vectors in $Ext~J(Tx_1).$ Suppose $y_2^*\in Ext~J(Tx_2),y_4^*\in Ext~J(Tx_4).$ Then $y_2^*\neq \pm y_4^*,$ for if $y_2^*= y_4^*,$ then as in Theorem \ref{th-mt6} $(i)$, it can be shown that $\|(1-t)x_2+tx_4\|=1$ for all $t\in[0,1].$ This contradicts that $x_3$ is an extreme point of $B_{\mathbb{X}}.$ Thus, $y_2^*\neq y_4^*.$ Similarly, $y_2^*\neq -y_4^*.$ Thus, $y_2^*$ and $y_4^*$ are linearly independent. Since $T$ is $k-$smooth, we have,
\begin{eqnarray*}
4 \geq k&=& dim~span~Ext~J(T)\\
&\geq & dim ~span~\{y_{11}^*\otimes x_1,y_{12}^*\otimes x_1,y_2^*\otimes x_2,y_4^*\otimes x_4\} .
\end{eqnarray*}
We claim that $\{y_{11}^*\otimes x_1,y_{12}^*\otimes x_1,y_2^*\otimes x_2,y_4^*\otimes x_4\}$ is linearly independent. Let $a y_{11}^*\otimes x_1+b y_{12}^*\otimes x_1+c y_2^*\otimes x_2+d y_4^*\otimes x_4=0,$ where $a,b,c,d\in \mathbb{R}.$ Then
\begin{eqnarray}\label{eq-02}
a y_{11}^*S(x_1)+b y_{12}^*S(x_1)+c y_2^*S( x_2)+d y_4^*S( x_4)=0 ~\forall~S\in \mathbb{L}(\mathbb{X},\mathbb{Y}).
\end{eqnarray}
For $1\leq i\leq 4,$ define $S_i\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ as follows:
\begin{align*}
S_1x_1 & = 0 & S_2 x_1 & = 0 & S_3 x_1&=u_3 & S_4 x_1&=u_4\\
S_1x_2 & = u_1 & S_2x_2 & = u_2 & S_3 x_2&=0 & S_4 x_2&=0,
\end{align*}
where $u_1\in \ker(y_2^*) \setminus \ker(y_4^*)$ and $u_2\in \ker(y_4^*)\setminus \ker(y_2^*),u_3\in \ker(y_{11}^*) \setminus \ker(y_{12}^*),u_4\in \ker(y_{12}^*) \setminus \ker(y_{11}^*). $ Now, putting $S_1,S_2,S_3,S_4$ in (\ref{eq-02}), we get, $a=b=c=d=0.$ Therefore, $\{y_{11}^*\otimes x_1,y_{12}^*\otimes x_1,y_2^*\otimes x_2,y_4^*\otimes x_4\}$ is linearly independent. Thus, $k=4$ and so $T$ is $4-$smooth.\\
$(ii)$ Suppose $Tx $ is smooth for each $x \in M_T\cap Ext(B_{\mathbb{X}}) $ and $\beta_1\alpha_2 ad-\beta_2\alpha_1bc\neq 0.$ Clearly
$ 4 \geq k= dim~span~Ext~J(T)\geq dim ~span~\{y_i^*\otimes x_i:1\leq i\leq 4\} $.
We claim that $ \{y_i^*\otimes x_i:1\leq i\leq 4\} $ is linearly independent.\\
Let
$a_1y_1^*\otimes x_1+a_2y_2^*\otimes x_2+a_3y_3^*\otimes x_3+a_4y_4^*\otimes x_4=0, $ where $a_i\in \mathbb{R}, 1 \leq i \leq4.$ Then \\
$ a_1y_1^*\otimes x_1+a_2(\alpha_1y_1^*+\alpha_2y_3^*)\otimes (ax_1+bx_3)+
a_3y_3^*\otimes x_3+a_4(\beta_1y_1^*+\beta_2y_3^*)\otimes (cx_1+dx_3)=0$.\\
$\Rightarrow (a_1+a_2\alpha_1 a+a_4\beta_1 c)y_1^*\otimes x_1+(a_2\alpha_1 b+a_4\beta_1 d)y_1^*\otimes x_3+
(a_2\alpha_2 a+a_4\beta_2 c)y_3^*\otimes x_1+(a_3+a_2\alpha_2 b+a_4\beta_2 d)y_3^*\otimes x_3=0.$
Now, using Lemma \ref{lemma-01}, $\{y_1^*\otimes x_1,y_1^*\otimes x_3,y_3^*\otimes x_1,y_3^*\otimes x_3\}$ is a linearly independent set. Hence, $a_1+a_2\alpha_1 a+a_4\beta_1 c=0,~a_2\alpha_1 b+a_4\beta_1 d=0,~a_2\alpha_2 a+a_4\beta_2 c=0$ and $a_3+a_2\alpha_2 b+a_4\beta_2 d=0.$ Solving these equations, we get $a_1=a_2=a_3=a_4=0,$ since $\beta_1\alpha_2 ad-\beta_2\alpha_1bc\neq 0.$ Therefore, $dim ~span~\{y_i^*\otimes x_i:1\leq i\leq 4\}=4\Rightarrow k=4.$ Thus, $T$ is $4-$smooth.\\
Now, suppose that for every choice of $\{\pm x_i:1\leq i\leq 4\}\subseteq M_T\cap Ext(B_{\mathbb{X}})$ and $y_i^*\in J(Tx_i),$ $i=1,2,3,4,$ with $x_2=ax_1+bx_3,~x_4=cx_1+dx_3$ and $y_2^*=\alpha_1y_1^*+\alpha_2y_3^*,~y_4^*=\beta_1y_1^*+\beta_2y_3^*,$ we have $\beta_1\alpha_2 ad-\beta_2\alpha_1bc= 0.$ Then $\{y_i^*\otimes x_i:1\leq i\leq 4\}$ is a linearly dependent set. Hence, $k<4.$ Proceeding similarly as in Theorem \ref{th-mt6} $(i),$ we can show that $\{y_i^*\otimes x_i:1\leq i\leq 3\}$ is linearly independent. Therefore, $k=3$ and so $T$ is $3-$smooth. This completes the proof of the theorem.
\end{proof}
Observe that if $\mathbb{X}$ is a two-dimensional Banach space such that the unit sphere of $\mathbb{X}$ is a polygon with more than $6$ vertices, then the identity operator on $\mathbb{X}$ satisfies the hypothesis of Theorem \ref{th-mt8} $(i)$ and so it is $4-$smooth. Now, we exhibit two examples to show that there exist two-dimensional Banach spaces $\mathbb{X},\mathbb{Y}$ and operators $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ such that both the cases of Theorem \ref{th-mt8} $(ii)$ hold.
\begin{example}
(i) Suppose $\mathbb{X}$ is a two-dimensional Banach space such that the unit sphere of $\mathbb{X}$ is a regular octagon with vertices $\pm (1,0),\pm (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}), \pm (0,1), \pm (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}).$ Define $T\in \mathbb{L}(\mathbb{X},\mathbb{X})$ by $T(1,0)=(\frac{1}{2}+\frac{1}{2\sqrt{2}},\frac{1}{2\sqrt{2}}),$ $T(0,1)=(-\frac{1}{2\sqrt{2}},\frac{1}{2}+\frac{1}{2\sqrt{2}}).$ Then $M_T\cap Ext(B_{\mathbb{X}})=\{\pm (1,0),\pm (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}), \pm (0,1), \pm (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\}$ and $Tx$ is smooth for each $x\in M_T\cap Ext(B_{\mathbb{X}}).$ In this case, it can be verified that $T$ is $3-$smooth.\\
(ii) Suppose that $\mathbb{X},\mathbb{Y}$ are two-dimensional Banach spaces such that $S_{\mathbb{X}}$ is a regular octagon with vertices $\pm (1,0),\pm (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}), \pm (0,1), \pm (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})$ and $S_{\mathbb{Y}}$ is an irregular octagon with vertices $\pm (1,0),\pm \Big(\frac{17\sqrt{2}-30}{324-234\sqrt{2}},\frac{35\sqrt{2}-56}{324-234\sqrt{2}}\Big), \pm (0,1), \pm (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}).$ Define $T\in \mathbb{L}(\mathbb{X},\mathbb{Y})$ by $T(1,0)=(\frac{5\sqrt{2}+4}{12},\frac{2+3\sqrt{2}}{12}),T(0,1)=(-\frac{\sqrt{2}}{4},\frac{2+\sqrt{2}}{4}).$ Then $M_T\cap Ext(B_{\mathbb{X}})=\{\pm (1,0),\pm (\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}), \pm (0,1), \pm (-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}})\}$ and $Tx$ is smooth for each $x\in M_T\cap Ext(B_{\mathbb{X}}).$ In this case, it can be verified that $T$ is $4-$smooth.\\
\end{example}
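Example (i) above can be verified numerically. In the sketch below (ours; it uses the facet-functional description of the regular octagon's norm, $\|(x,y)\|=\max\{|x|+(\sqrt{2}-1)|y|,\ (\sqrt{2}-1)|x|+|y|\},$ which one can derive from the given vertices), we check that all eight vertices attain $\|T\|=1$ and that $T$ maps the unit ball into itself:

```python
import numpy as np

r2 = np.sqrt(2.0)

def oct_norm(v):
    # Gauge of the regular octagon with vertices (+-1,0), (0,+-1),
    # (+-1/sqrt2, +-1/sqrt2): the maximum of the two facet functionals.
    x, y = abs(v[0]), abs(v[1])
    return max(x + (r2 - 1) * y, (r2 - 1) * x + y)

# The operator of Example (i), as a matrix acting on coordinates.
T = np.array([[0.5 + 1 / (2 * r2), -1 / (2 * r2)],
              [1 / (2 * r2), 0.5 + 1 / (2 * r2)]])

verts = [np.array([1.0, 0.0]), np.array([1 / r2, 1 / r2]),
         np.array([0.0, 1.0]), np.array([-1 / r2, 1 / r2])]
for v in verts:
    print(oct_norm(v), oct_norm(T @ v))  # both 1: each vertex attains ||T|| = 1

# ||T p|| <= 1 on sampled boundary points, so ||T|| = 1.
for t in np.linspace(0, 2 * np.pi, 2000, endpoint=False):
    d = np.array([np.cos(t), np.sin(t)])
    p = d / oct_norm(d)
    assert oct_norm(T @ p) <= 1 + 1e-9
```

Geometrically, this $T$ is a rotation by $22.5^\circ$ scaled by $\cos 22.5^\circ$, which carries each vertex of the octagon to the midpoint of an edge, a smooth point of the sphere.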
In \cite[Th. 4.2]{W}, W\'{o}jcik proved that in an $n-$dimensional Banach space $\mathbb{X},$ if a unit vector $x\in \mathbb{X}$ is $n-$smooth, then $x$ is an exposed point. In the following theorem, we prove the converse of \cite[Th. 4.2]{W} for polyhedral Banach spaces.
\begin{theorem}
Let $\mathbb{X}$ be an $n-$dimensional polyhedral Banach space. If $x\in S_{\mathbb{X}}$ is an exposed point of $\mathbb{X},$ then $x$ is $n-$smooth.
\end{theorem}
\begin{proof}
Suppose $x\in S_{\mathbb{X}}$ is an exposed point of $\mathbb{X}$ and $x$ is $k-$smooth. If possible, suppose that $k<n.$ Let $\{x_1^*,x_2^*,\ldots,x_k^*\}$ be a linearly independent subset of $Ext~J(x).$ It is easy to see that $dim(\ker x_1^*\cap \ker x_2^*\cap\ldots\cap \ker x_k^*)=n-k>0.$ Choose a nonzero $z\in \cap_{i=1}^k\ker x_i^*$ and let $Y=span\{x,z\}.$ Then $Y$ is a two-dimensional polyhedral Banach space. If possible, suppose that $x$ is $2-$smooth in $Y.$ Then there exist linearly independent vectors $y_1^*,y_2^*\in S_{Y^*}$ such that $y_1^*(x)=y_2^*(x)=1.$ Let $z_1^*,z_2^*$ be norm preserving extensions of $y_1^*$ and $y_2^*,$ respectively. Then $z_1^*,z_2^*\in J(x)$ and so $z_1^*,z_2^*\in span ~J(x)=span~Ext~J(x)=span\{x_1^*,\ldots,x_k^*\}.$ Since $x_i^*(z)=0$ for all $1\leq i\leq k,$ we get $z_1^*(z)=z_2^*(z)=0$ and hence $y_1^*(z)=y_2^*(z)=0.$ Thus, $y_1^*$ and $y_2^*$ agree on $Y=span\{x,z\},$ contradicting that $y_1^*,y_2^*$ are linearly independent. This proves that $x$ is a smooth point of $Y.$ Since $Y$ is two-dimensional and polyhedral, every smooth point of $S_Y$ lies in the relative interior of an edge of $S_Y,$ so there exist distinct $x_1,x_2\in S_Y\subseteq S_{\mathbb{X}}$ such that $x=\frac{1}{2}x_1+\frac{1}{2}x_2.$ Thus, $x$ is not an extreme point of $B_{\mathbb{X}}$ and so $x$ is not an exposed point of $B_{\mathbb{X}},$ contradicting the hypothesis of the theorem. Therefore, $k=n.$ This completes the proof of the theorem.
\end{proof}
\bibliographystyle{amsplain}
% Source: https://arxiv.org/abs/2204.04970
% Title: Non-Convex Optimization with Certificates and Fast Rates Through Kernel Sums of Squares
\begin{abstract}
We consider potentially non-convex optimization problems, for which optimal rates of approximation depend on the dimension of the parameter space and the smoothness of the function to be optimized. In this paper, we propose an algorithm that achieves close to optimal a priori computational guarantees, while also providing a posteriori certificates of optimality. Our general formulation builds on infinite-dimensional sums-of-squares and Fourier analysis, and is instantiated on the minimization of multivariate periodic functions.
\end{abstract}
\section{Introduction}
A well-designed optimization algorithm provides two important types of guarantees. First, it guarantees {\em a priori} that its output will achieve a certain degree of accuracy, with computational complexity that is hopefully adaptive to the specific properties of the function to be optimized and possibly even optimal over a certain class of algorithms or functions to minimize. Second, it provides an {\em a posteriori} certificate, i.e., an explicit bound on the solution's accuracy that we can calculate once we run the algorithm. There are many examples of such well-designed optimization algorithms in the convex setting, which often use some form of convex duality \citep[see, e.g.,][]{nemirovski2010accuracy}.
In this paper, our goal is to provide a well-designed algorithm for \emph{non-convex} optimization,
\begin{equation}\label{eq:problem-base}
c_* = \inf_{x \in \mc{X}} f(x),
\end{equation}
with $\mc{X} \subseteq {\mathbb{R}}^d$ and $f$ potentially non-convex. In general, this task is extremely difficult, and in the worst case the computational cost must be exponential in the dimension $d$. However, it is known \citep{novak2006deterministic} that in order for an algorithm to achieve the optimal computational complexity in solving \cref{eq:problem-base}, it must be adaptive to the degree of differentiability of $f$. That is, it should be able to overcome the curse of dimensionality in terms of the approximation error ${\varepsilon}$, when the function is very smooth. More precisely, if $f$ is $m$-times differentiable, then the computational complexity for finding $c_*$ with error ${\varepsilon}$ should be $C_d \, {\varepsilon}^{-d/m}$, where $C_d$ is exponential in $d$ in the worst case, but the dependence on the accuracy scales as only ${\varepsilon}^{-d/m}$, which becomes quite mild once $m$ approaches $d$. In this case, the curse of dimensionality is relegated just to the constant $C_d$, making it possible to efficiently solve non-convex problems to high accuracy as long as $d$ is relatively small, which has applications to tasks like hyperparameter tuning, industrial process optimization, and more.
Establishing well-designed algorithms for non-convex optimization is a difficult task. Many non-convex optimization algorithms in the literature lack a priori guarantees, a posteriori guarantees, or actually any guarantees at all, and many methods used in practice are based on heuristics or can only guarantee convergence to a {\em local} rather than global minimum. Some other algorithms have good a posteriori guarantees, but weak a priori bounds; for example, methods based on polynomial sum of squares \citep{lasserre2001global,parrilo2003semidefinite} are not adaptive, a priori, to the smoothness and are therefore subject to the curse of dimensionality in terms of the accuracy ${\varepsilon}$. The new family of algorithms based on kernel sum of squares~\citep{rudi2020finding} achieves quasi-optimal a priori guarantees, but without any certificate a posteriori.
\paragraph{Our Contribution.} In this paper, we provide a general strategy to derive algorithms that compute a {\em lower bound $\hat{c}$} of $c_*$ with strong guarantees both a priori and a posteriori (see Corollary \ref{cor:all-together}). As a particular example, we consider optimizing smooth, periodic functions $f$ on $\mc{X} = [0,1]^d$ and derive an algorithm that: ({\em a priori}) approximates $c_*$ with almost optimal error ${\varepsilon}$ and computational complexity that scales well with $\varepsilon$, and ({\em a posteriori}) provides a certificate of accuracy, which is adaptive to the specific instance of the problem.
The a priori guarantee is useful since it shows that the proposed algorithm has nearly optimal complexity, which is adaptive to the smoothness of the function to be optimized.
The a posteriori certificate is particularly useful because the accuracy of our estimate of $c_*$ depends on the specific instance at hand, and may be much better than the worst-case, exponential-in-$d$ constant would suggest. Indeed, better-than-worst-case performance is frequently observed in practice, but we need a certificate of accuracy in order to \emph{know} when we are so lucky.
\section{Deriving Well-Designed Algorithms for Smooth Non-Convex Optimization}\label{sec:deriving-well-designed}
Our approach begins with rewriting the problem \eqref{eq:problem-base} as finding the highest lower bound on $f$:
\[
c_* = \max_{c \in {\mathbb{R}}} ~ c ~~~ \textrm{ such that } ~~~ f(x) \geq c ~~ \forall x \in \mc{X}.
\]
The inequality constraint $f - c \geq 0$ can be converted to an equality constraint by introducing a non-negative function $g \geq 0$:
\[
c_* = \max_{c \in {\mathbb{R}}, \ g \geq 0} ~ c ~~~ \textrm{ such that } ~~~ f(x) - c - g(x) = 0 ~~ \forall x \in \mc{X}.
\]
Finally, we can rewrite this again in a penalized form
\begin{equation}\label{eq:prob-Linfty}
\tilde{c} ~~=~~ \max_{c \in {\mathbb{R}}, \ g \geq 0} ~c~ - ~\|f - c - g\|_{L^\infty(\mc{X})}.
\end{equation}
It may not yet be obvious what this accomplishes, but the following Lemma indicates its promise:
\begin{lemma}
The problem \eqref{eq:prob-Linfty} is a concave maximization problem with solution $\tilde{c} = c_*$, and for any feasible $(c,g)$ such that $c \in {\mathbb{R}}$ and $g \geq 0$, $c_\ast$ is lower-bounded by $c- \|f - c - g\|_{L^\infty(\mc{X})}$.
\end{lemma}
\begin{proof}
The objective $V(c,g) := c - \|f - c - g\|_{L^\infty(\mc{X})}$ is concave because it is a linear term $c$ minus a convex term $\|\cdot\|_{L^\infty(\mc{X})}$ composed with a linear function of the optimization variables.
Since $f(x) - c - g(x) \geq -\|f - c - g\|_{L^\infty(\mc{X})}$ and $g(x) \geq 0$ for all $x \in \mc{X}$, any feasible $(c,g)$ satisfies $f(x) \geq c - \|f - c - g\|_{L^\infty(\mc{X})} = V(c,g)$ for all $x \in \mc{X}$. Thus, taking the infimum with respect to $x$ gives $c_\ast \geq V(c,g)$, and maximizing over feasible pairs gives $c_\ast \geq \tilde{c}$. In order to show the other inequality, we notice that $(c_*,f-c_*)$ is feasible and $V(c_*,f-c_*) = c_*$.
\end{proof}
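The lemma's lower-bound mechanism can be illustrated numerically. The sketch below (ours; the non-convex test function $f$ and the feasible pairs $(c,g)$ are arbitrary choices) approximates the $L^\infty$ norm on a grid over $\mc{X}=[0,1]$ and checks that every feasible pair yields a valid lower bound on $c_*$, with $(c_*, f-c_*)$ attaining it exactly:

```python
import numpy as np

# X = [0,1]; f is a non-convex test function.  On a fine grid we
# approximate c_* = inf f and check that every feasible pair (c, g)
# with g >= 0 yields the lower bound c - ||f - c - g||_inf <= c_*.
x = np.linspace(0.0, 1.0, 10001)
f = np.cos(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)
c_star = f.min()

def lower_bound(c, g):
    assert (g >= 0).all()               # g must be non-negative
    return c - np.abs(f - c - g).max()  # V(c, g)

# The pair (c_*, f - c_*) is feasible and attains V = c_* exactly.
print(lower_bound(c_star, f - c_star))  # equals c_star
# Cruder feasible choices still give valid (looser) lower bounds.
print(lower_bound(0.0, np.zeros_like(x)) <= c_star)  # True
print(lower_bound(-2.0, (x - 0.5) ** 2) <= c_star)   # True
```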
So, we have reduced the non-convex minimization problem \eqref{eq:problem-base} to a concave maximization problem~\eqref{eq:prob-Linfty}, which is an improvement. However, whereas the original problem had a $d$-dimensional optimization variable, our new problem requires optimizing over the infinite-dimensional space of non-negative functions, and it is not clear how to do this. Furthermore, the quantity $\nrm{f - c - g}_{L^\infty(\mc{X})}$ is just as difficult to compute as $\inf_{x\in\mc{X}}f(x)$ would be. We will now describe several modifications leading to a more tractable optimization problem that maintains the same desirable properties.
\paragraph{A more tractable formulation.}
Specifically, our approach revolves around introducing a more tractable norm $\nrm{\cdot}_W$ on functions from $\mc{X}$ to ${\mathbb{R}}$, and restricting $g \in \mc{G}$, for $\mc{G}$ a tractable subset of all non-negative functions. We will specify $W$ and $\mc{G}$ later. To provide guarantees on the method we only need them to satisfy the following:
\begin{definition}[Norms and models for non-negative functions]\label{def:norms-and-models}
\begin{enumerate}
\item Let $\|\cdot\|_{W}$ be a norm on the space of real-valued functions over $\mc{X}$, such that $\|\cdot\|_{L^\infty(\mc{X})} \leq \|\cdot\|_{W}$. Denote by ${\cal W}$ the associated Banach space.
\item Let ${\cal G}$ be a convex subset of the set of non-negative functions, such that $\cal G$ is a closed subset of~$\cal W$.
\end{enumerate}
\end{definition}
Now, by restricting the problem in \cref{eq:prob-Linfty} on ${\cal G}$ and considering the norm $\|\cdot\|_{W}$, we obtain what can be a more tractable formulation,
\begin{equation}\label{eq:problem-relax}
\bar{c} = \max_{c \in {\mathbb{R}}, \ g \in {\cal G}} \ \ c - \|f - c - g\|_{W}.
\end{equation}
It is, of course, not the case that \emph{any} choice of $\nrm{\cdot}_W$ and $\mc{G}$ would make \cref{eq:problem-relax} easy to solve. However, we will later discuss examples where \cref{eq:problem-relax} is much easier to solve than \cref{eq:prob-Linfty}.
Regardless, we will see in the next Theorem, that this formulation comes with strong guarantees, expressing the error of the algorithm directly in terms of the approximation properties of the class of models~$\mc{G}$ for non-negative functions.
\begin{theorem}[Tightness of \cref{eq:problem-relax}]\label{thm:tight-problem}
Suppose that $f - c \in {\cal W}$ for any $c \in {\mathbb{R}}$, then
$$c_* - q \leq \bar{c} \leq c_*, \qquad q = \min_{g \in {\cal G}} \|f - c_* - g\|_W.$$
Moreover, if there exists $g_* \in {\cal G}$ satisfying $\|f - c_* - g_*\|_W = 0$, then $\bar{c} = c_*$. Finally, any $c \in {\mathbb{R}}$ and $g \in {\cal G}$ leads to a lower bound $c - \|f - c - g\|_{W}$ on $c_\ast$.
\end{theorem}
\begin{proof}
By construction $\bar{c} \leq c_*$, since $c - \|f-c-g\|_{W} \leq c - \|f-c-g\|_{L^\infty(\mc{X})} \leq c_*$ and ${\cal G} \subseteq \{g~|~g \geq 0\}$.
Now the problem is well defined, since it corresponds to a maximization on the closed subset ${\mathbb{R}} \times {\cal G}$ of a concave and continuous objective function (for the topology inherited from~$\mc{W}$).
By setting $c = c_*$ and optimizing over $g$, the optimized objective is exactly $c_* - q$ with $q$ as above; thus $c_\ast - q \leq \bar{c}$. Moreover, we have $q=0$, i.e., $\bar{c} = c_*$, when there exists $g_* \in {\cal G}$ satisfying $\|f - c_* - g_*\|_W = 0$.
\end{proof}
In the Theorem above we see that $\bar{c}$ is a lower bound of $c_*$ by construction. Moreover, the error $c_* - \bar{c}$ is bounded by $q$, the {\em approximation error} of $f - c_*$ with respect to the class of models for non-negative functions $\mc{G}$ that we consider, measured in the norm $W$. So far, this all holds for any $\nrm{\cdot}_W$ and $\mc{G}$ satisfying Definition \ref{def:norms-and-models}; we continue by analyzing a specific choice.
\paragraph{The model for non-negative functions.}
We would like to use a class of models $\cal G$ that can approximate smooth non-negative functions using as few parameters as possible, while remaining tractable to optimize over. We consider the class of {\em PSD models} introduced by \citet{marteau2020non} and defined as
$$g(x) = \phi(x)^\ast A \phi(x), \quad A \succeq 0,$$
for a suitable map $\phi:{\mathcal X} \to \mathbb{C}^n$ and $A \in \mathbb{C}^{n \times n}$ Hermitian positive semidefinite\footnote{We need complex numbers because of Fourier analysis, but this extends to symmetric real matrices.}, where $\phi^\ast$ denotes the conjugate transpose of $\phi$. By the definition of positive semidefiniteness, $g(x) \geq 0$ for any $x \in \mc{X}$, and this is also a tractable class to optimize over since $g$ is linear in the parameters $A$. The approximation properties of the model depend on the choice of the feature map $\phi$, and have been shown to give rates of the order $n^{-m/d}$ for specific choices \citep{rudi2021psd}. Here, however, we need to study the approximation error with respect to our norm of choice $W$, and we will see that feature maps different from the ones considered by \citet{rudi2020finding} will lead to better rates.
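A minimal one-dimensional sketch (ours; the feature dimension and the random Hermitian PSD matrix are arbitrary illustrations) of why PSD models are non-negative pointwise:

```python
import numpy as np

# A tiny PSD model with Fourier features phi(x) = (e^{2 pi i k x})_{k=0..n-1}.
# For any Hermitian PSD matrix A, g(x) = phi(x)^* A phi(x) equals
# ||B phi(x)||^2 when A = B^* B, hence is real and >= 0.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = B.conj().T @ B                     # Hermitian PSD by construction

def g(x):
    phi = np.exp(2j * np.pi * np.arange(n) * x)  # Fourier feature map
    return (phi.conj() @ A @ phi).real

vals = np.array([g(x) for x in np.linspace(0, 1, 1001)])
print((vals >= -1e-10).all())  # True: non-negativity holds pointwise
```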
\paragraph{The norms.}
We need norms that bound $\|\cdot\|_{L^\infty(\mc{X})}$ from above as tightly as possible, but are also easy to compute. We consider from this viewpoint two norms:
\begin{enumerate}
\item The ``$F$ norm'': $L^1$ norm of the Fourier transform, i.e.,
$$\|u\|_{F} = \int_{{\mathbb{R}}^d} |\hat{u}(\omega)| d\omega,$$
\item The ``$S$ norm'': the norm associated with a richer reproducing kernel Hilbert space, such as the Sobolev space of exponent $(d+1)/2$. Let $S(\omega)$ be non-negative and integrable; we define
$$\|u\|^2_{S} = C^2_S \int_{{\mathbb{R}}^d} \frac{|\hat{u}(\omega)|^2}{S(\omega)} d\omega,$$
and $C^2_S = \int_{{\mathbb{R}}^d} S(\omega) d\omega$. For example for the Sobolev case we set $S(\omega) = (1+\|\omega\|^2)^{(d+1)/2}$.
\end{enumerate}
\begin{lemma}\label{lem:F-weaker-than-S}
The norms above satisfy $\|\cdot\|_{L^\infty({\mathbb{R}}^d)} \leq \|\cdot\|_F \leq \|\cdot\|_S$, for any non-negative and integrable $S$. Moreover,
\[
\|u\|_F = \min_{S \geq 0,\ S \in L^1({\mathbb{R}}^d)} \|u\|_S.
\]
\end{lemma}
\begin{proof}
First, for any $u$ with integrable Fourier transform, we have, by H\"older's inequality,
\[
\abs{u(x)}
= \abs*{\int \hat{u}(\omega) e^{2\pi i \inner{x}{\omega}}d\omega}
\leq \nrm{\hat{u}}_{L^1({\mathbb{R}}^d)}\nrm{e^{2\pi i \inner{x}{\cdot}}}_{L^\infty({\mathbb{R}}^d)}
\leq \nrm{\hat{u}}_{L^1({\mathbb{R}}^d)}
=: \nrm{u}_F, \qquad \forall x \in {\mathbb{R}}^d
\]
from which we conclude that $\nrm{\cdot}_F$ always upper bounds $\nrm{\cdot}_{L^\infty({\mathbb{R}}^d)}$. Analogously, note that, for any $S \geq 0$ with $S \in L^1({\mathbb{R}}^d)$, and for any $u$ such that $\|u\|_S$ is finite, we have, by the Cauchy-Schwarz inequality,
$$
\|u\|^2_F = \|\widehat{u}\|^2_{L^1({\mathbb{R}}^d)} = \big\|S^{1/2} \, \frac{\widehat{u}}{S^{1/2}}\big\|^2_{L^1({\mathbb{R}}^d)} \leq \|S^{1/2}\|^2_{L^2({\mathbb{R}}^d)} \big\|\frac{\widehat{u}}{S^{1/2}}\big\|^2_{L^2({\mathbb{R}}^d)} = C^2_S \int \frac{|\hat{u}(\omega)|^2}{S(\omega)} = \|u\|^2_S.
$$
Finally, for $u$ with finite $F$-norm, the minimum is attained: $\|u\|_F = \|u\|_S$ when $S = |\widehat{u}|$, which satisfies $S \geq 0$ and $S \in L^1({\mathbb{R}}^d)$.
\end{proof}
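As a quick numerical illustration of the first inequality in the lemma (not from the paper), the following NumPy sketch builds a real, $1$-periodic trigonometric polynomial from randomly chosen Fourier coefficients and checks that its sup norm is bounded by its $F$ norm, which for periodic functions reduces to the sum $\sum_k |\hat{f}_k|$:

```python
import numpy as np

# Hypothetical 1-periodic trigonometric polynomial given by Fourier
# coefficients c_k, k = -K..K: f(x) = sum_k c_k exp(2*pi*1j*k*x).
rng = np.random.default_rng(0)
K = 5
ks = np.arange(-K, K + 1)
c = rng.normal(size=ks.size) + 1j * rng.normal(size=ks.size)
# Enforce conjugate symmetry c_{-k} = conj(c_k) so that f is real-valued.
c = 0.5 * (c + np.conj(c[::-1]))

def f(x):
    return np.real(np.sum(c[:, None] * np.exp(2j * np.pi * ks[:, None] * x), axis=0))

F_norm = np.sum(np.abs(c))                               # ||f||_F = sum_k |c_k|
sup_f = np.max(np.abs(f(np.linspace(0.0, 1.0, 2000))))   # grid estimate of ||f||_inf
assert sup_f <= F_norm + 1e-9                            # ||f||_inf <= ||f||_F
```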
The $F$ norm and the $S$ norm are of interest when we can assume that we have access to the Fourier transform of the target function $f$. In particular, the $S$ norm can also be computed in closed form in specific scenarios. For example, consider the case when $f$ is of the form
$$f(x) = \sum_{j=1}^M \beta_j h(x-x_j),$$
for some $\beta_1,\dots,\beta_M \in {\mathbb{R}}$ and $x_1,\dots, x_M \in {\mathbb{R}}^d$. This arises, e.g., for mixtures of Gaussians, and learning linear models or RBF networks. If we know the Fourier transform of $h$, then
$$ \|f\|_{S}^2 := \sum_{i,j=1}^M \beta_i \beta_j H(x_i - x_j),$$
where $H$ is the inverse Fourier transform of the function $C^2_S\,|\hat{h}(\omega)|^2/S(\omega)$.
Perhaps more interestingly, Lemma \ref{lem:F-weaker-than-S} shows that the $F$ norm is weaker than the norm $S$ for any $S$, meaning that the $F$ norm allows us to automatically adapt to certain structures in the function. For example, suppose that $f(x) = h(Px)$ for some unknown $P\in{\mathbb{R}}^{d'\times d}$ with $d' \ll d$ and $h:{\mathbb{R}}^{d'}\to{\mathbb{R}}$. Using the $S$ norm for a certain $S$ that depends on $P$ would allow us to adapt to the low-dimensional structure and depend on $d'$ rather than $d$, but this requires knowing $P$. On the other hand, the $F$ norm is always weaker, so we can take advantage of the low-dimensional structure automatically.
\subsection{PSD Models for Periodic Functions on $[0,1]^d$ and Their Approximation Properties}\label{subsec:psd-models-for-periodic-functions}
The goal of the section is to provide a self-contained introduction to the approximation properties of PSD models. In particular, we consider the problem of approximating smooth $1$-periodic functions (which corresponds to $\mc{X}=[0,1]^d$) using PSD models where $\phi$ is a subset of the Fourier basis. This setting, while already being of interest for practical applications, allows for an elementary proof which highlights the main conceptual steps of the derivation.
The main results of this section are \cref{thm:bound-psd-model} and, in particular, \cref{thm:approximation-error}. With a more refined proof based on the same strategy, it is also possible to obtain results for other scenarios beyond periodic functions on the torus and for more general maps $\phi$. See, for example, \citet{rudi2021psd} for the approximation of non-periodic $C^m$ functions on subsets of ${\mathbb{R}}^d$ via PSD models based on a finite-dimensional feature map defined with respect to the Gaussian kernel, or \citet{rudi2020finding} for a feature map defined with respect to any kernel that satisfies certain algebraic properties, such as the Sobolev kernel.
We have seen in \cref{thm:tight-problem} that the optimization error of \cref{eq:problem-relax} depends on the approximation error of the function $f$ with respect to the class of models for non-negative functions. So, in our analysis there will be three main ingredients: a class of models ${\cal G}_t$ parametrized by its bandwidth $t$, which will depend on the space of functions associated with a feature map $\phi_t$; the space of functions where $f$ lives, which we denote $H_\rho$; and the norm that we use to measure the approximation error, in our case, $\|\cdot\|_F$.
We start by introducing ${\cal G}_t$, parametrized by a bandwidth $t \in {\mathbb{N}}$. We associate each entry in $\phi_t(x)$ with an element of $\crl{k\in\mathbb{Z}^d:\abs{k}\leq t}$ where $|k| = \sum_{j} \abs{k_j}$, with $n = \#( \crl{k\in\mathbb{Z}^d:\abs{k}\leq t}) \leq (2 t + 1)^d$, i.e., $n = O(t^d)$. So, for each $\abs{k}\leq t$, we define the feature map $\phi_t: {\mathcal X} \to {\mathcal C}^n$ elementwise as $(\phi_t(x))_k = e_k(x)$, where $e_k$ is the $k$-th Fourier component, i.e., $e_k(x) = e^{-2\pi i k^\top x}$.
Consider the class of PSD models
\begin{equation}\label{eq:our-psd-models}
{\cal G}_t = \{ g_{A,t} ~|~ A \in {\mathcal C}^{n \times n}, A \succeq 0\}, \quad g_{A,t}(x) = \phi_t(x)^\ast A \phi_t(x).
\end{equation}
We are thus considering the feature map $\phi_t$ associated with the classical band-limited space of functions. This choice is convenient for our analysis, but there are also many other choices of finite dimensional feature maps for PSD models that can have good approximation properties \citep{rudi2021psd,rudi2020finding}.
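To make the construction concrete, here is a small NumPy sketch (illustrative, not from the paper) of the model class \eqref{eq:our-psd-models} in dimension $d=1$: it builds the Fourier feature map, forms a random Hermitian PSD matrix $A = BB^\ast$, and checks that $g_{A,t}(x) = \phi_t(x)^\ast A \phi_t(x)$ is non-negative on a grid:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 3
ks = np.arange(-t, t + 1)       # frequencies |k| <= t (d = 1), so n = 2t + 1
n = ks.size

def phi(x):
    # Fourier feature map (phi_t(x))_k = exp(-2*pi*1j*k*x)
    return np.exp(-2j * np.pi * ks * x)

# Any A = B B^* is Hermitian PSD, hence g_A(x) = phi(x)^* A phi(x) = |B^* phi(x)|^2 >= 0.
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = B @ B.conj().T

# np.vdot conjugates its first argument, so vdot(phi, A @ phi) = phi^* A phi.
g = np.array([np.real(np.vdot(phi(x), A @ phi(x))) for x in np.linspace(0.0, 1.0, 200)])
assert np.all(g >= -1e-8)       # the PSD model is non-negative everywhere
```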
We consider continuous, 1-periodic functions $f$ on ${\mathbb{R}}^d$, i.e., functions satisfying $f(x + k) = f(x)$ for any $x \in {\mathbb{R}}^d$ and $k \in {\mathbb Z}^d$. We note that these can therefore be identified with continuous, periodic functions on the torus $[0,1]^d$.
We now introduce $H_\rho$, where $\rho \in \ell_1({\mathbb Z}^d)$ is a strictly positive summable sequence. The space is a separable Hilbert space of periodic functions defined as $H_{\rho} = \{f \in L^2({\mathcal X}) ~|~ \|f\|_\rho < \infty\}$, where $\|f\|^2_\rho = \sum_{k \in {\mathbb Z}^d} |\widehat{f}_k|^2/\rho_k$ and $\widehat{f}_k = \int_{[0,1]^d} e_k(x) f(x) dx$ are the Fourier coefficients of $f$. One classical example is $\rho_k = (1+\|k\|^2)^{-m}$ with $m > d/2$, corresponding to the Sobolev space $H^m_{2,\textrm{per}}$ of periodic functions whose derivatives up to order $m$ are square integrable \citep{wahba1990spline}; another is $\rho_k = \exp(-\sigma \|k\|)$ for some $\sigma > 0$, corresponding to the space of periodic entire functions (of order~$1$).
We will soon present a Theorem showing that the PSD models ${\cal G}_t$ can approximate functions of the form $f = \sum_{j=1}^q u_j^2$ for $q \in {\mathbb{N}}$ and $u_j \in H_\rho$, using a small $t$ depending on the decreasing quantity
$$R^2_t := \sum_{|k| > t } \rho_k.$$
First, we start with a Lemma concerning the norm $\|\cdot\|_F$ of the pointwise product of functions. This part of the proof is crucial and is handled differently in the other settings \citep[e.g.,][]{rudi2020finding}.
\begin{lemma}\label{lm:pointwise-product-L1}
Let $f, g$ be $1$-periodic functions on $\mc{X}$ with $\nrm{f}_F, \nrm{g}_F < \infty$ and denote by $f \cdot g$ their pointwise product, i.e.,~$(f \cdot g)(x) = f(x)g(x)$. Then
$$\|f \cdot g\|_F \leq \|f\|_F \|g\|_F.$$
\end{lemma}
\begin{proof}
By the convolution property of Fourier series, $(\widehat{f \cdot g})_k = \sum_{j \in {\mathbb Z}^d} \widehat{f}_j \widehat{g}_{k-j}$. By Young's inequality for the convolution of discrete sequences, for any two sequences $u,v \in \ell_1({\mathbb Z}^d)$ we have $\sum_{k \in {\mathbb Z}^d} \big|\sum_{j \in {\mathbb Z}^d} u_j v_{k-j}\big| \leq (\sum_{k \in {\mathbb Z}^d} |u_k|) (\sum_{k \in {\mathbb Z}^d} |v_k|)$. The result is obtained by applying this inequality to the Fourier coefficients of $f \cdot g$ and noting that $\sum_{k \in {\mathbb Z}^d} |\widehat{f}_k|$ is exactly $\|f\|_F$, and similarly for $g$.
\end{proof}
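Since the Fourier coefficients of a pointwise product are the discrete convolution of the coefficient sequences, the lemma can be checked numerically in one line (an illustrative sketch, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
# Fourier coefficient sequences of two 1-periodic functions (finite support).
fh = rng.normal(size=7) + 1j * rng.normal(size=7)
gh = rng.normal(size=9) + 1j * rng.normal(size=9)
# Coefficients of the pointwise product f*g = discrete convolution of the sequences.
fgh = np.convolve(fh, gh)
# ||f.g||_F <= ||f||_F ||g||_F, i.e., Young's inequality for l1 sequences.
assert np.sum(np.abs(fgh)) <= np.sum(np.abs(fh)) * np.sum(np.abs(gh)) + 1e-9
```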
Now we are ready to state the first theorem on the approximation error for the PSD models described above.
\begin{theorem}\label{thm:bound-psd-model}
Let $f(x) = \sum_{j=1}^T u_j(x)^2$ for functions $u_j \in H_\rho$ and $T \in {\mathbb{N}}$, then
$$ \min_{g \in {\cal G}_t} \|f - g\|_{F} ~\leq~ C'_f \,\, R_t \,,$$
where $C_f' = 2\sum_{j=1}^{T} \|u_j\|_\rho \|u_j\|_F$.
\end{theorem}
\begin{proof}
Denote by $u_{j,t}$ the function $u_{j,t}(x) = \sum_{|k| \leq t} (\widehat{u}_j)_k e_k(x)$ (a low-pass filtered version of $u_j$), and by $v_{j,t}$ the $n$-dimensional vector with entries $(v_{j,t})_k = (\widehat{u}_{j})_k$ for $|k| \leq t$.
Now, define $\bar{A} \in {\mathcal C}^{n \times n}$ as
$$\bar{A} = \sum_{j=1}^T v_{j,t} v_{j,t}^\ast.$$
Since, by construction, $v_{j,t}^\ast \phi_t(x) = \sum_{|k| \leq t} (\widehat{u}_{j})_k e_k(x) = u_{j,t}(x)$, then
$$g_{\bar{A},t}(x) = \phi_t(x)^\ast \bar{A} \phi_t(x) = \sum_{j=1}^T \phi_t(x)^\ast(v_{j,t} v_{j,t}^\ast) \phi_t(x) = \sum_{j=1}^T (v_{j,t}^\ast \phi_t(x))^2 = \sum_{j=1}^T u_{j,t}(x)^2.$$
Now note that $u_j^2 - u_{j,t}^2 = (u_j + u_{j,t}) \cdot (u_j - u_{j,t})$, then, by using \cref{lm:pointwise-product-L1},
$$\|f - g_{\bar{A},t}\|_{F} = \Big\|\sum_{j=1}^T ( u_j^2 - u_{j,t}^2) \Big\|_F \leq \sum_{j=1}^T (\|u_j\|_F + \|u_{j,t}\|_F)\|u_{j} - u_{j,t}\|_F.$$
We conclude by noting that $\|u_{j,t}\|_F \leq \|u_j\|_F$ by construction and, by the Cauchy--Schwarz inequality,
$$\|u_j - u_{j,t}\|_{F} = \sum_{|k| > t} |(\widehat{u}_{j})_k| = \sum_{|k| > t} \sqrt{\rho_k}\, \tfrac{|(\widehat{u}_{j})_k|}{\sqrt{\rho_k}} \leq \Big(\sum_{|k| > t}\rho_k\Big)^{1/2} \Big(\sum_{|k| > t}\tfrac{|(\widehat{u}_{j})_k|^2}{\rho_k}\Big)^{1/2} \leq R_t \|u_{j}\|_\rho.$$
Therefore,
$\displaystyle \min_{g \in {\cal G}_t} \|f - g\|_F = \min_{A \in {\mathcal C}^{n \times n}, A \succeq 0} \|f - g_{A,t}\|_F \leq \|f - g_{\bar{A},t}\|_F \leq R_t C_f'$.
\end{proof}
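The mechanism of the proof, approximating each square $u_j^2$ by the square of a truncated Fourier expansion, can be observed numerically. The following sketch (illustrative, with a hypothetical choice of coefficients $c_k \propto (1+k^2)^{-1}$) computes the $F$-norm error $\|u^2 - u_t^2\|_F$ via convolutions of coefficient sequences and checks that it decreases as the bandwidth $t$ grows:

```python
import numpy as np

K = 40
ks = np.arange(-K, K + 1)
c = 1.0 / (1.0 + ks.astype(float) ** 2)        # coefficients of a smooth real u

def F_err(t):
    # F-norm error ||u^2 - u_t^2||_F, with u_t the truncation of u to |k| <= t.
    ct = np.where(np.abs(ks) <= t, c, 0.0)
    full = np.convolve(c, c)                   # Fourier coefficients of u^2
    trunc = np.convolve(ct, ct)                # Fourier coefficients of u_t^2
    return np.sum(np.abs(full - trunc))

errs = [F_err(t) for t in (2, 5, 10, 20)]
assert all(a > b for a, b in zip(errs, errs[1:]))   # error decreases with t
```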
The theorem above controls the approximation error of PSD models of bandwidth $t$ when the target function can be written as a sum of squares of functions belonging to $H_\rho$ for a given $\rho$. In general, it is not clear how to determine whether a function $f$ can be written as a sum of squares of functions in a given space. Luckily, in the case of $m$-times differentiable functions, there exists an easy geometric characterization. We are going to use this fact to specialize the result above to the case when $f$ is an $m$-times differentiable function. First, we need the following lemma, which is the adaptation to periodic functions of Theorem 2 of \citet{rudi2020finding} (more specifically, of Corollary 2, page 23).
\begin{lemma}[\cite{rudi2020finding}]\label{lm:cor2}
Let $f$ be an $(m+2)$-times differentiable, non-negative periodic function. Assume that the minimizers of $f$ in ${\mathcal X}$ are finitely many, each with strictly positive Hessian. Then, there exist $Q \in {\mathbb{N}}$ and periodic, $m$-times differentiable functions $z_1,\dots, z_Q$ such that $f = \sum_{j=1}^Q z_j^2$.
\end{lemma}
The proof of the lemma above is reported in \cref{app:proof-of-lemma-7}. Now we are ready to specify \cref{thm:bound-psd-model} in the case of an $m$-times differentiable function.
\begin{theorem}
\label{thm:approximation-error}
Let $f$ be an $(m + d/2 + 2)$-times differentiable periodic function, with $m > 0$ and let $c_*$ be its global minimum. Assume that the minimizers of $f$ in ${\mathcal X}$ are finitely many and with strictly positive Hessian. Then, for any $t \in {\mathbb{N}}$,
$$ \min_{g \in {\cal G}_t}\ \|f - c_* - g\|_F ~\leq~ C_f ~ t^{-m},$$
where the constant $C_f$ depends only on $f, m, d$.
\end{theorem}
The proof is self-contained and reported in \cref{app:thm-approx}. It is obtained by first applying Lemma~\ref{lm:cor2} on $f$, and \cref{thm:bound-psd-model} on the resulting characterization. To make this possible and to obtain a sharp rate, a crucial step is to show that the resulting functions belong to the space $H_\rho$ for a specific $\rho$ satisfying $\rho_k \propto |k|^{-2m-d}$, then deriving the bound on the associated residual $R_t$.
\subsection{The Resulting Problem and the Associated A Priori Guarantees}
Now the problem \cref{eq:problem-relax}, with the PSD models \eqref{eq:our-psd-models} and the $F$ norm, takes the following form
\begin{equation}\label{eq:specific-sdp}
\bar{c} = \max_{c \in {\mathbb{R}}, \ A \in {\mathcal C}^{n\times{}n}} c - \nrm{f - c - g_A}_F \quad \textrm{ such that }\ \ A\succeq 0,
\end{equation}
and, combining \cref{thm:tight-problem} and \cref{thm:approximation-error} gives the following a priori guarantee
\begin{corollary}
\label{cor:guarantees-appr}
Let $f$ be an $(m + d/2 + 2)$-times differentiable, 1-periodic function with $m > 0$, and let $c_*$ be its global minimum. Also, let $f$ have finitely many minimizers in ${\mathcal X}$, which each have strictly positive Hessian. Then, for any $t \in {\mathbb{N}}$
$$ 0 ~\leq ~ c_* - \bar{c} ~\leq ~C_f \,\, t^{-m}\,.$$
\end{corollary}
Expressing $t$ in terms of $n$, the dimension of the matrix $A$, we have $t = O(n^{1/d})$. The bound above then reads
$$ 0 ~\leq ~ c_* - \bar{c} ~ \leq ~ C' ~ n^{-m/d} \,.$$
This shows that the solution $\bar{c}$ is always a lower bound on the global minimum $c_*$ and converges to~$c_*$ at a rate depending on the dimension of the matrix $A$ and the degree of differentiability of $f$. E.g., when $m \geq d$, the error goes to zero as quickly as $n^{-1}$.
In the following section we see how to solve the optimization problem \cref{eq:specific-sdp} in practice, by making use of the fact that $\|\cdot\|_F$, in the case of the torus, is a sum, which makes it easy to write \cref{eq:specific-sdp} as a stochastic optimization objective.
\section{Solving the Optimization Problem}\label{sec:optimization}
We now describe the process of solving the optimization problem in \cref{eq:problem-relax} in the specific case of the $F$ norm and a PSD model $g_A(x) = \phi(x)^\ast A\phi(x)$ parametrized by positive semidefinite $A \in {\mathcal C}^{n\times{}n}$. For now, we consider an arbitrary feature map $\phi$, but we will also contextualize our results in the specific case of $\phi_t$, the map introduced in Section \ref{subsec:psd-models-for-periodic-functions}.
A serious challenge in solving \eqref{eq:specific-sdp} is that computing $\nrm{f - c - g_A}_F$ or its subgradients exactly will typically be intractable, because the $F$ norm is the infinite series:
\[
\nrm{f - c - g_A}_F
= \sum_{k \in \mathbb{Z}^d} \abs{\hat{f}_k - c\indicator{k=0} - \widehat{g_A}_k}.
\]
To circumvent this issue, we recast the problem as a stochastic optimization objective. In particular, we introduce a probability measure, $\pi$, supported on $\mathbb{Z}^d$ and rewrite
\[
\nrm{f - c - g_A}_F
= \sum_{k \in \mathbb{Z}^d} \pi_k \frac{\abs{\hat{f}_k - c\indicator{k=0} - \widehat{g_A}_k}}{\pi_k} = {\mathbb E}_{k \sim \pi}\brk*{\frac{\abs{\hat{f}_k - c\indicator{k=0} - \widehat{g_A}_k}}{\pi_k}} .
\]
Written this way, we can now attack our objective using any number of methods from the stochastic optimization arsenal, such as projected stochastic gradient ascent.
To see how $\pi$ should be chosen, we first note that $g_A(x) = \inner{A}{\phi(x)\phi(x)^\ast}$, and use $M\upk = \widehat{\phi\phi^\ast}_k \in \mathbb{C}^{n \times n}$ to denote the $k$-th Fourier component of $\phi\phi^\ast$, so that $\widehat{g_A}_k = \inner{A}{M\upk}$. Thus, our optimization problem now reads
\[
\bar{c} = \max_{c\in{\mathbb{R}},\ A\in{\mathcal C}^{n\times{}n}} c - {\mathbb E}_{k \sim \pi}\brk*{\frac{\abs{\hat{f}_k - c\indicator{k=0} - \inner{A}{M\upk}}}{\pi_k}} \quad \textrm{ such that }\ A \succeq 0.
\]
Noting that $c$ only appears in two terms, we can also eliminate this variable by solving
\[
\max_c \ c - \abs{\hat{f}_0 - c - \langle A,\,M^{(0)}\rangle} = \hat{f}_0 - \langle A,\,M^{(0)}\rangle.
\]
Putting this all together, we want to solve the stochastic concave maximization problem
\begin{equation}\label{eq:stoch-opt-problem}
\bar{c} = \max_{A\succeq 0}\
{\mathbb E}_{k\sim\pi}\brk*{L_k(A)}
\end{equation}
where
\begin{equation}
L_k(A) = \begin{cases}
\frac{1}{\pi_0}\prn*{\hat{f}_0 - \inner{A}{M^{(0)}}} & k = 0 \\
\frac{-1}{\pi_k}\abs*{\hat{f}_k - \inner{A}{M\upk}} & k \neq 0.
\end{cases}
\end{equation}
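The key property of this construction is that the stochastic objective is unbiased: $\mathbb{E}_{k\sim\pi}[L_k(A)]$ recovers the $c$-eliminated objective exactly. A minimal sketch (illustrative, collapsing the matrices $M\upk$ and inner products $\inner{A}{M\upk}$ to hypothetical scalars on a finite support) verifies this:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy finite-support instance: Fourier coefficients and scalar surrogates
# m_k standing in for the inner products <A, M^(k)>, k = 0..4.
fh = rng.normal(size=5)
m = rng.normal(size=5)
pi = np.abs(m) + 0.1
pi = pi / pi.sum()               # sampling distribution over the support

def L(k):
    # Importance-weighted terms L_k(A) from the piecewise definition above.
    if k == 0:
        return (fh[0] - m[0]) / pi[0]
    return -abs(fh[k] - m[k]) / pi[k]

exact = (fh[0] - m[0]) - np.sum(np.abs(fh[1:] - m[1:]))       # full objective
expectation = np.sum(pi * np.array([L(k) for k in range(5)]))  # E_{k~pi}[L_k]
assert abs(expectation - exact) < 1e-9                         # unbiasedness
```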
\begin{algorithm}
\caption{\textsc{Projected Stochastic Gradient Ascent}}
\label{alg:projected-sgd}
\begin{algorithmic}
\STATE Initialize $A_0 = 0$
\FOR{$t=0,1,\dots,T-1$}
\STATE $\tilde{A}_{t+1} = A_t + \eta\nabla L_{k_t}(A_t) \quad\textrm{for}\ \ k_t \sim \pi$
\STATE $A_{t+1} = \arg\min_{A}\nrm{A - \tilde{A}_{t+1}}_{Frob.}$ s.t.~$A \succeq 0$, $\nrm{A}_{Frob.} \leq R$.
\ENDFOR
\STATE \textbf{Return:} $\bar{A}_T = \frac{1}{T}\sum_{t=1}^T A_t$
\end{algorithmic}
\end{algorithm}
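The projection step of Algorithm \ref{alg:projected-sgd} has a closed form: project onto the PSD cone by clipping negative eigenvalues, then scale into the Frobenius ball (one can check that composing the two projections in this order gives the exact Frobenius projection onto the intersection, since the PSD cone is a cone and the ball is centered at the origin). A NumPy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def project_psd_ball(A, R):
    # Frobenius projection onto {A Hermitian PSD, ||A||_Frob <= R}:
    # symmetrize, clip negative eigenvalues, then scale into the ball.
    A = 0.5 * (A + A.conj().T)
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)
    P = (V * w) @ V.conj().T        # reconstruct with clipped eigenvalues
    nrm = np.linalg.norm(P)         # matrix norm defaults to Frobenius
    return P if nrm <= R else (R / nrm) * P

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
P = project_psd_ball(A, R=1.0)
assert np.min(np.linalg.eigvalsh(P)) >= -1e-10   # P is PSD
assert np.linalg.norm(P) <= 1.0 + 1e-10          # P lies in the ball
```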
Using projected stochastic gradient ascent yields the following error guarantee:
\begin{theorem}\label{thm:optimization-error}
Let $R \geq \nrm{A^*}_{\rm Frob.}$ be an upper bound on the norm of a maximizer $A^*$, and let $\bar{A}_T$ be the output of Algorithm \ref{alg:projected-sgd} with an optimally-chosen constant stepsize $\eta$ and $\pi_k \propto \nrm{M\upk}_{Frob.} + (1+\sum_{j=1}^d (2 \pi k_j)^{d+1})^{-1}$. Then $\nrm{\bar{A}_T}_{\rm Frob.}\leq R$ and, for any $\delta \in (0,1)$, with probability $1-\delta$,
\[
{\mathbb E}_{k\sim\pi}\brk*{L_k(\bar{A}_T)} \geq \bar{c} - \frac{20R\log(2/\delta)\prn*{1 + \sum_{k\in\mathbb{Z}^d}\nrm{M\upk}_{Frob.}}}{\sqrt{T}}.
\]
\end{theorem}
The proof, which we defer to Appendix \ref{app:proof-of-thm-optimization-error}, simply requires proving that the functions $L_k$ are Lipschitz-continuous and then appealing to Proposition 2.2 from \citet{nemirovski2009robust}. This result bounds, a priori, the optimization error incurred in trying to estimate $A^*$ which realizes the maximum of \eqref{eq:specific-sdp}. In Section \ref{sec:combining-guarantees}, we combine this with Theorem \ref{thm:approximation-error} and the yet to be presented Theorem \ref{thm:a-posteriori-accuracy} to state our a priori guarantees.
\section{A Posteriori Certification}
Obviously, it is nice to know a priori that our estimate $\bar{A}_T$ will be close to attaining the optimum of \eqref{eq:specific-sdp}. However, with this estimate in hand, what we really want is to compute a lower bound on~$c_*$, so we need to actually evaluate ${\mathbb E}_{k\sim\pi}[L_k(\bar{A}_T)]$, which is non-trivial since $\pi$ has infinite support.
Things are easier when $f$ and $\phi\phi^\ast$ are \emph{band-limited}, meaning that for some $K$, $\abs{k}>K$ implies $\hat{f}_k = 0$ and $M\upk = 0$. Specifically, we can choose $\pi_k \propto \nrm{M\upk}_{\rm Frob.}$, which is only supported on $\crl{k:\abs{k}\leq K}$, and then easily compute ${\mathbb E}_{k\sim\pi}[L_k(\bar{A}_T)]$ to obtain an exact lower bound on $c_*$.
However, if one or both of $f$ and $\phi\phi^\ast$ are not band-limited, then we are forced to estimate the value of an infinite sum. One approach is to draw samples $k \sim \pi$ and estimate the value using a sample average, and under suitable conditions on $f$ and the matrices $M\upk$, this allows us to accurately estimate the value of the lower bound with high-probability. Alternatively, under stronger conditions on $f$ and the matrices $M\upk$, we can compute $L_k$ for a finite set of $k$'s and deterministically bound the contribution of the remaining, uncomputed terms. The following Theorem indicates the accuracy of these methods:
\begin{theorem}\label{thm:a-posteriori-accuracy}
Let $f$ satisfy the conditions of Theorem \ref{thm:approximation-error} with $m > d/2$. Then for any $K$ and $k_1,\dots,k_K \overset{i.i.d.}{\sim} \pi$, for any $\delta \in (0,1)$ and $A\succeq 0$, with probability $1-\delta$,
\begin{align*}
c_* \geq \bar{c} \geq \hat{c}_{1-\delta} &:= \frac{1}{K}\sum_{i=1}^K L_{k_i}(A) - \textrm{Err}_{1-\delta} \geq {\mathbb E}_{k\sim\pi}[L_k(A)] - 2\textrm{Err}_{1-\delta} \\
\textrm{Err}_{1-\delta} &:= (\sqrt{d+1}\|f\|_{C^{d+1}({\mathcal X})} + \nrm{A}_{\rm Frob.})\sqrt{\frac{2\log(2/\delta)}{K}}\Big(1 + \sum_{k\in\mathbb{Z}^d}\nrm{M\upk}_{\rm Frob.}\Big),
\end{align*}
where $\|f\|_{C^{d+1}({\mathcal X})} = \max_{1\leq j\leq d} \max_{1\leq q \leq d+1} \|\frac{\partial^q}{\partial x^q_j} f\|_{L^\infty({\mathcal X})}$.
In addition, for any $K$, the following holds deterministically:
\begin{align*}
c_* \geq \bar{c} \geq \hat{c}_{1} &:= \sum_{k:\abs{k}\leq K} \pi_k L_{k}(A) - \textrm{Err}_1 \geq {\mathbb E}_{k\sim\pi}[L_k(A)] - 2\textrm{Err}_1 \\
\textrm{Err}_1 &:= \sum_{k:\abs{k}>K}\brk*{\abs{\hat{f}_k} + \nrm{A}_{\rm Frob.}\nrm{M\upk}_{\rm Frob.}}
\end{align*}
\end{theorem}
The proof, which we defer to Appendix \ref{app:proof-of-thm-a-posteriori-accuracy}, analyzes $\hat{c}_{1-\delta}$ and $\hat{c}_1$ separately. For the former, we first show that $L_k(A)$ is bounded for each $k$, and then apply Hoeffding's inequality. For the latter, we decompose the sum over $k\in\mathbb{Z}^d$ into those $k$'s with $\abs{k}\leq K$, and those $k$'s with $\abs{k} > K$, and then upper bound this second portion of the sum.
The theorem shows that the sample average has additive error that decays as $1/\sqrt{K}$ with high probability. Furthermore, this lower bound is tractable given a bound on $\|f\|_{C^{d+1}({\mathcal X})}$ and enough knowledge of our feature map for us to upper bound $\sum_{k\in\mathbb{Z}^d}\nrm{M\upk}_{\rm Frob.}$. For the deterministic lower bound on $c_*$, we need some control over how quickly $\nrm{M\upk}_{\rm Frob.}$ decays with increasing~$\abs{k}$, but if the feature map is chosen so that this decay is (eventually) rapid, then this lower bound can be tight.
In the particular case of $\mc{G}_t$ introduced in Section \ref{subsec:psd-models-for-periodic-functions}, we show in Appendix \ref{app:bound-our-Mk} that $\sum_{k\in\mathbb{Z}^d} \nrm{M\upk}_{\rm Frob.} \leq n (8t)^{d}$ and that $\sum_{k:\abs{k}> K} \nrm{M\upk}_{\rm Frob.} = 0$ for $K \geq 2t$. Therefore, $\hat{c}_{1-\delta}$ provides a tight approximation of $\bar{c}$ using $K \gg n (8t)^d$ samples, and $\hat{c}_1$ does as well once $K \geq 2t$, as long as the Fourier coefficients $\hat{f}_k$ decay sufficiently quickly.
With Theorem \ref{thm:a-posteriori-accuracy}, we can use the solution returned by our optimization algorithm to compute a lower bound on $c_*$, one that holds with high probability and one that holds deterministically. However, to actually compute a certificate of the accuracy of our lower bound, we also need an upper bound on $c_*$. Upper bounds are easy to produce: $f(x_0)$ for any point $x_0 \in {\mathcal X}$ is a valid upper bound, although most points will not be close to minimizing $f$, so this may not give much information about~$c_*$. Much of the non-convex optimization literature is devoted to designing algorithms for computing approximate minimizers of $f$, i.e., better upper bounds on~$c_*$; for instance, we could use the point $x_0$ produced by \citet{rudi2020finding}, which provably converges to a global minimizer at a rate that avoids the curse of dimensionality. In our experiments (in low dimensions), we simply compute $f(x_1),\dots,f(x_N)$ for $N$ random points and use the upper bound $c_* \leq \min_i f(x_i)$, which allows for tight enough certificates.
\section{A Priori and A Posteriori Guarantees}\label{sec:combining-guarantees}
In the previous sections, we have described a method for estimating $c_*$ in the case of periodic functions on $[0,1]^d$, and all the pieces are in place to state our method's a priori and a posteriori guarantees. To summarize so far:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=0ex,parsep=1ex]
\item Theorem \ref{thm:tight-problem} shows that the solution of the relaxed problem \eqref{eq:problem-relax}, $\bar{c}$, is a lower bound on $c_*$ which is tight up to the error of approximating $f-c_*$ with the class of non-negative functions, $\mc{G}$.
\item In Theorem \ref{thm:approximation-error}, we bound this approximation error for smooth, periodic functions with respect to $\mc{G}_t$, the class of PSD models defined in \eqref{eq:our-psd-models} with the band-limited kernel $\phi_t$.
\item But, we need to actually solve \eqref{eq:problem-relax} defined using the $F$ norm and $\mc{G}_t$. So, in Theorem \ref{thm:optimization-error}, we bound the optimization error of the solution returned by projected stochastic gradient ascent.
\item However, our optimization algorithm returns the parameters $\bar{A}_T$ of a PSD model, and to compute a lower bound on $c_*$ we need to actually evaluate the value of the objective at $\bar{A}_T$. So, finally, Theorem \ref{thm:a-posteriori-accuracy} bounds the estimation error when using $\bar{A}_T$ to estimate lower bounds $\hat{c}_{1-\delta}$ and $\hat{c}_1$ on $c_*$ that holds with high probability and deterministically, respectively.
\end{enumerate}
Therefore, our a priori guarantees amount to combining
(Approximation Error) $+$ (Optimization Error) $+$ (Estimation Error).
On the other hand, given any PSD model parameters, $A$, we can evaluate an a posteriori bound on the error by upper bounding $c_* \leq f(x)$ for any $x$ and lower bounding $c_*$ using Theorem \ref{thm:a-posteriori-accuracy}. The following Corollary summarizes these guarantees:
\begin{corollary}\label{cor:all-together}
For the $F$ norm and family of PSD models $\mc{G}_t$ defined using $\phi_t$, under the conditions of Theorems \ref{thm:approximation-error}, \ref{thm:optimization-error}, and \ref{thm:a-posteriori-accuracy}, let $\hat{c}_{1-\delta}(\bar{A}_T)$ and $\hat{c}_1(\bar{A}_T)$ be lower bound estimates defined in Theorem \ref{thm:a-posteriori-accuracy}. Then for any $\delta \in (0,1)$, we provide the following a priori guarantee with probability $1-2\delta$:
\[
c_* \geq \hat{c}_{1-\delta}(\bar{A}_T)
\geq c_* - C_f t^{-m} - C_d n t^{d} \prn*{\frac{20R\log(2/\delta)}{\sqrt{T}} + \frac{2(\sqrt{d+1}\|f\|_{C^{d+1}({\mathcal X})} + R)\sqrt{2\log(2/\delta)}}{\sqrt{K}}},
\]
with $C_d = 8^d$. At the same time, given any point $x$ and parameters $A$ for the PSD model, we guarantee a posteriori that $f(x) \geq c_* \geq \hat{c}_{1}(A)$ and $f(x) \geq c_* \geq \hat{c}_{1-\delta}(A)$ with probability $1-\delta$.
\end{corollary}
The Corollary follows immediately by combining Theorems \ref{thm:approximation-error}, \ref{thm:optimization-error}, and \ref{thm:a-posteriori-accuracy}. Since $t = O(n^{1/d})$, by choosing $T = K = O(n^{4 + 2m/d})$, we have
$$ c_* \geq \hat{c}_{1-\delta}(\bar{A}_T)
\geq c_* - C' n^{-m/d},$$
where $n$ is the dimension of the matrix $\bar{A}_T$. In this case the algorithm has complexity $O(Tn^3 + K n^2) = O(n^{7 + 2m/d})$. In particular, for the class of $(m+d/2+2)$-times differentiable functions with $m > d/2$, we achieve a bound $c_* \geq \hat{c}_{1-\delta}(\bar{A}_T)
\geq c_* - C' n^{-1}$, with a computational cost of $O(n^{8})$. There is much room for improvement in the constants in the exponents, but the algorithm shows that it is possible to approximate the global optimum of a function with both a posteriori guarantees and an a priori error rate that is adaptive to the degree of differentiability of the function to be minimized and that avoids the curse of dimensionality for very smooth functions.
\section{Empirical Evaluation}\label{subsec:empirical}
Finally, we apply our method to two simple non-convex optimization problems in one and two dimensions. The results are summarized in Figures \ref{fig:empirical-results1d} and \ref{fig:empirical-results2d}, and all of the details of the experiments are deferred to Appendix \ref{app:experimental-details}, in which we describe a new feature map $\phi$, and describe a more practical algorithm for solving \eqref{eq:specific-sdp} based on reparametrizing $A = UU^\ast$ \citep{burer2003nonlinear}.
\section{Discussion}
\paragraph{Convex duality.} Following~\citet{rudi2020finding}, we can provide a dual interpretation of the use of PSD models. Indeed, the minimization problem we solve is
$$
\inf_{ \mu\ \rm{ probability } \ {\rm measure} } \int_{\mc{X}} f(x) d\mu(x)
$$
which we reformulate as
$$
\inf_{ \mu \ {\rm signed}\ {\rm measure} } \int_{\mc{X}} f(x) d\mu(x)
\mbox{ such that } \int_{\mc{X}} d\mu(x) = 1 \mbox{ and } \int_{\mc{X}} \phi(x) \phi(x)^* d\mu(x) \succcurlyeq 0.
$$
Given that we expect the solution of the original problem to be a combination of Dirac measures supported at global minimizers, we can add constraints that are satisfied by Dirac measures, such as
$$
\int_{\mc{X}}|d\mu(x)| \leqslant 1 \mbox{ or } \Omega(\mu) \leqslant 1,
$$
for any norm $\Omega$ on signed measures that is larger than the total variation norm.
The first constraint leads to a dual problem
$$
\sup_{c \in {\mathbb{R}}, \ B \succcurlyeq 0} c - \big\| f - c 1 - \phi(\cdot)^\top B \phi(\cdot) \big\|_\infty,
$$
while the second one leads to
$$
\sup_{c \in {\mathbb{R}}, \ B \succcurlyeq 0} c - \Omega^\ast\big( f - c 1 - \phi(\cdot)^\top B \phi(\cdot) \big).
$$
It turns out that the duals of the $S$ norm and of the $F$ norm dominate the total variation norm, and thus admit this dual interpretation. Thus, our method for obtaining a posteriori certificates extends directly to optimization problems that are defined through probability measures and already tackled by kernel sums-of-squares, such as optimal transport \citep{vacher2021dimension} or optimal control \citep{berthier2021infinite}.
\paragraph{Comparison to previous work on kernel sums-of-square.}
Compared to \citet{rudi2020finding}, the subsampling is now done differently: the constraint $\int_{\mc{X}} \phi(x) \phi(x)^* d\mu(x) \succcurlyeq 0$ is replaced by requiring that the projection onto the span of $\phi(x^{(1)}),\dots,\phi(x^{(n)})$ be a positive semidefinite matrix. This is a relaxation in the dual, which still leads to a lower bound for the optimization problem.
This also suggests a candidate optimal solution when applied to the torus. Indeed, at optimality, we expect $\mu$ to be close to a Dirac at some $x_0$; then (in 1D for simplicity) $\hat{\mu}_1$ should be close to $e^{-2i\pi x_0}$, and we can read off a candidate minimizer from the argument of the first Fourier coefficient (we could imagine using more than one).
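This read-off step can be sketched in a few lines (illustrative; assumes $\mu$ is exactly a Dirac at a hypothetical $x_0$ on the 1D torus):

```python
import numpy as np

# If mu is a Dirac at x0 on the 1D torus, its first Fourier coefficient is
# mu_hat_1 = exp(-2*pi*1j*x0), so x0 = -angle(mu_hat_1) / (2*pi)  (mod 1).
x0 = 0.3
mu_hat_1 = np.exp(-2j * np.pi * x0)
x0_recovered = (-np.angle(mu_hat_1) / (2.0 * np.pi)) % 1.0
assert abs(x0_recovered - x0) < 1e-12
```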
\begin{figure}
\centering
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{p1.png}
\end{minipage}
\hfill
\begin{minipage}{0.49\textwidth}
\centering
\includegraphics[width=\textwidth]{p2.png}
\end{minipage}
{\vspace{-2.5mm}\small\caption{For $f:{\mathbb{R}}\to{\mathbb{R}}$ as described in Appendix \ref{app:experimental-details}, we use PSD models using feature maps of dimension $n=25$ and $n=100$. To the left, we see that for $n=25$, the model $g(x)+c$ does not approximate $f$ well, so our a posteriori error guarantee is $0.24$. But, when $n=100$, the model approximates $f$ much better, and our a posteriori guarantee is $0.01$. To the right, we plot the absolute difference between $\hat{f}_k$ and $\widehat{(g+c)}_k$ for small $k$'s, which is what drives the difference in performance between $n=25$ and $n=100$.
\label{fig:empirical-results1d}}\vspace{-1mm}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{gap_vs_n.png}
{\vspace{-3mm}\small\caption{For $f:{\mathbb{R}}^2\to{\mathbb{R}}$ as described in Appendix \ref{app:experimental-details}, we plot the a posteriori error guarantee of our algorithm's estimate vs.~$n$, i.e.,~$\min_j f(x_j) - \hat{c}_1$, which is the difference between the minimum value of $f$ achieved on a random grid of points, which upper bounds $c_*$, and our estimate of the minimum, $\hat{c}_1$, which lower bounds $c_*$. The error is at most 10\% of the function's range with $n=150$, and can be made less than 1\% with $n=750$.\label{fig:empirical-results2d}}\vspace{-2mm}}
\end{figure}
\paragraph{Acknowledgements}
This work was supported by the French government under the management of the Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). We also acknowledge support from the European Research Council (grants SEQUOIA 724063 and REAL 947908).
https://arxiv.org/abs/1209.3617 | Strategy complexity of finite-horizon Markov decision processes and simple stochastic games | Markov decision processes (MDPs) and simple stochastic games (SSGs) provide a rich mathematical framework to study many important problems related to probabilistic systems. MDPs and SSGs with finite-horizon objectives, where the goal is to maximize the probability of reaching a target state within a given finite time, constitute a classical and well-studied problem. In this work we consider the strategy complexity of finite-horizon MDPs and SSGs. We show that for all $\epsilon>0$, strategies in the natural class of counter-based strategies require memory of size at most $\log \log (\frac{1}{\epsilon}) + n+1$, and that memory of size $\Omega(\log \log (\frac{1}{\epsilon}) + n)$ is required. Thus our bounds are asymptotically optimal. We then study the periodic property of optimal strategies, and show a sub-exponential lower bound on the period of optimal strategies. | \section{Introduction}
\smallskip\noindent{\bf Markov decision process and simple stochastic games.}
The class of \emph{Markov decision processes (MDPs)} is a classical model for probabilistic
systems that exhibit both stochastic and non-deterministic
behavior~\cite{Howard}.
MDPs have been widely used to model and solve control problems for
stochastic systems~\cite{FV97}: there, non-determinism represents the freedom
of the controller to choose a control action, while the probabilistic
component of the behavior describes the system response to control actions.
\emph{Simple stochastic games (SSGs)} enrich MDPs by allowing two types of
non-determinism (angelic and demonic non-determinism) along with stochastic
behavior~\cite{Condon92}.
MDPs and SSGs provide a rich mathematical framework to
study many important problems related to probabilistic systems.
\smallskip\noindent{\bf Finite-horizon objective.}
One classical problem widely studied for MDPs and SSGs is
the \emph{finite-horizon objective}.
In a finite-horizon objective, a finite time horizon $T$ is given and the goal
of the player is to maximize the payoff within the time horizon $T$ in MDPs
(in SSGs against all strategies of the opponent).
The complexity of MDPs and SSGs with finite-horizon objectives has been
well studied, with book chapters dedicated to them~\cite{FV97,Puterman}.
The complexity results basically show that iterating the Bellman equation for
$T$ steps yields the desired result~\cite{FV97,Puterman}.
While the computational complexity has been well studied, perhaps surprisingly
the strategy complexity has not received much attention.
In this work we consider several problems related to the strategy complexity
of MDPs and SSGs with finite-horizon objectives, where the
objective is to reach a target state within a finite time horizon $T$.
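The Bellman iteration mentioned above is short enough to sketch directly. The following Python snippet (the three-state example and all names in it are ours, purely for illustration, not taken from the paper) performs backward induction for $T$ steps: max states optimize over their two arcs, coin-toss states average them.

```python
# Finite-horizon Bellman iteration: after t rounds, v[s] is the optimal
# probability of reaching the target within t moves starting from s.
def finite_horizon_values(states, kind, arcs, T, target="TARGET"):
    """kind[s] in {'max', 'min', 'coin'}; arcs[s] = the two destinations."""
    v = {s: 0.0 for s in states}
    v[target] = 1.0
    for _ in range(T):
        nv = {}
        for s in states:
            a, b = arcs[s]
            if kind[s] == "max":
                nv[s] = max(v[a], v[b])
            elif kind[s] == "min":
                nv[s] = min(v[a], v[b])
            else:  # coin-toss state: each arc taken with probability 1/2
                nv[s] = (v[a] + v[b]) / 2
        nv[target] = 1.0  # the target, once reached, stays reached
        v = nv
    return v

# Hypothetical toy MDP: a max state chooses between a coin state that
# reaches the target with probability 1/2 per visit and a losing sink.
states = ["s0", "risky", "lose", "TARGET"]
kind = {"s0": "max", "risky": "coin", "lose": "coin", "TARGET": "coin"}
arcs = {"s0": ("risky", "lose"), "risky": ("TARGET", "risky"),
        "lose": ("lose", "lose"), "TARGET": ("TARGET", "TARGET")}
```

In this toy example the value at `s0` after $T\geq 1$ rounds is $1-2^{-(T-1)}$, which is exactly what the iteration returns.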
\smallskip\noindent{\bf Our contribution.} In this work we consider the
memory requirement for $\epsilon$-optimal strategies, for $\epsilon>0$,
and a periodic property of optimal strategies in finite-horizon MDPs and
SSGs.
A strategy is an $\epsilon$-optimal strategy, for $\epsilon>0$, if the
strategy ensures a payoff within $\epsilon$ of the optimal value against all
strategies of the opponent.
For finite-horizon objectives, the natural class of strategies is that of counter-based
strategies, which have a counter to count the number of time steps.
Our first contribution is to establish asymptotically optimal memory bounds
for $\epsilon$-optimal counter-based strategies, for $\epsilon>0$, in finite-horizon
MDPs and SSGs.
We show that $\epsilon$-optimal counter-based strategies require
memory of size at most $\log \log (\frac{1}{\epsilon}) + n+1$,
and that memory of size $\Omega(\log \log (\frac{1}{\epsilon}) + n)$ is required,
where $n$ is the size of the state space.
Thus our bounds are asymptotically optimal.
The upper bound holds for SSGs and the lower bound is for
MDPs.
We then consider the periodic (or regularity) property of optimal strategies.
The period of a strategy is the number $P$ such that the strategy repeats
every $P$ steps (i.e., it is periodic with period $P$).
We show a sub-exponential lower bound on the period of optimal strategies
for MDPs with finite-horizon objectives, by presenting a family of MDPs
with $n$ states where all optimal strategies are periodic and the period
is $2^{\Omega(\sqrt{n\cdot \log(n)})}$.
\smallskip\noindent{\bf Organization of the paper.}
The paper is organized as follows:
In Section~\ref{sec:def} we present all the relevant definitions related
to stochastic games and strategies.
In Section~\ref{sec:counter} we show that $\Theta(n+\log \log \epsilon^{-1})$ bits are
necessary and sufficient for $\epsilon$-optimal counter-based strategies, for all $\epsilon>0$,
in both finite-horizon MDPs and SSGs.
In Section~\ref{sec:period} we show that there are finite-horizon MDPs where all
optimal strategies are periodic and have a period of $2^{\Omega(\sqrt{n\log n})}$.
\section{Definitions}\label{sec:def}
The class of {\em infinite-horizon simple stochastic games} (SSGs) consists of
two-player, zero-sum, turn-based games, played on a (multi-)graph.
The class was first defined by Condon~\cite{Condon92}.
Below we define SSGs, the finite-horizon version, and the important sub-class of MDPs.
\smallskip\noindent{\em SSGs, finite-horizon SSGs, and MDPs.}
An SSG $G=(S_1,S_2,S_R,\perp,(A_s)_{s\in S_1\cup S_2\cup S_R},s_0)$ consists of a terminal state $\perp$ and three disjoint sets of non-terminal states, $S_1$ (max states), $S_2$ (min states), $S_R$ (coin toss states). We will use $S$ to denote their union, i.e., $S=S_1\cup S_2 \cup S_R$. For each state $s\in S$, let $A_s$ be a (multi-)set of {\em outgoing arcs of $s$}. We will use $A=\bigcup_s A_s$ to denote the (multi-)set of all arcs. Each state $s\in S$ has two outgoing arcs. If $a$ is an arc, then $d(a)\in S\cup \{\perp\}$ is the {\em destination} of $a$. There is also a designated start state $s_0\in S$.
The class of {\em finite-horizon simple stochastic games} (FSSGs) also consists
of two-player, zero-sum, turn-based games, played on a (multi-)graph.
An FSSG $(G,T)$ consists of an SSG $G$ and a finite time limit (or horizon) $T\geq 0$.
Let $G$ be an SSG and $T\geq 0$, then we will write the FSSG $(G,T)$ as $G^T$.
Given an SSG $G$ (resp. FSSG $G^T$), for a state $s$, we denote by $G_s$ (resp. $G^T_s$)
the same game as $G$ (resp. $G^T$), except that $s$ is the start state.
The class of {\em infinite (resp. finite) horizon Markov decision processes} (MDPs and FMDPs respectively)
is the subclass of SSGs (resp. FSSGs) where $S_2=\emptyset$.
\smallskip\noindent{\em Plays and objectives of the players.}
An SSG $G$ is {\em played} as follows. A pebble is moved on to $s_0$. For $i\in \{1,2\}$, whenever the pebble is moved on to a state $s$ in $S_i$, then Player~$i$ chooses some arc $a\in A_s$ and moves the pebble to $d(a)$. Whenever the pebble is moved on to a state $s$ in $S_R$, then an $a\in A_s$ is chosen uniformly at random and the pebble moves to $d(a)$. If the pebble is moved on to $\perp$, then the game is over.
For all $T\geq 0$ the FSSG $G^T$ is played like $G$, except that the pebble can be moved at most $T+1$ times.
The {\em objective} of both SSGs and FSSGs is for Player~1 to maximize the probability that the pebble is moved on to $\perp$
(eventually in SSGs and within $T+1$ time steps in FSSGs).
The objective of Player~2 is to minimize this probability.
\smallskip\noindent{\em Strategies.}
Let $S^*$ be the set of finite sequences of states.
For all $T$, let $S^{\leq T}\subset S^*$ be the set of sequences of states, which have length at most $T$.
A {\em strategy $\sigma_i$ for Player~$i$} in an SSG is a map from $S^*\times S_i$ into $A$,
such that for all $w\in S^*$ and $s\in S_i$ we have $\sigma_i(w \cdot s)\in A_s$.
Similarly, a {\em strategy $\sigma_i$ for Player~$i$} in an FSSG $G^T$ is a map from $S^{\leq T}\times S_i$ into $A$,
such that for all $w\in S^{\leq T}$ and $s\in S_i$ we have $\sigma_i(w \cdot s)\in A_s$.
In all cases we denote by $\Pi_i$ the set of all strategies for Player~$i$. If $S_i=\emptyset$, we will let $\emptyset$ denote the corresponding
strategy set.
Below we define some special classes of strategies.
\smallskip\noindent{\em Memory-based, counter-based and Markov strategies.}
Let $M= \{0,1\}^*$ be the set of possible {\em memories}.
A {\em memory-based strategy $\sigma_i$ for Player~$i$} consists of a pair
$(\sigma_u,\sigma_a)$, where
\begin{itemize}
\item{} $\sigma_u$, the memory-update function, is a map from $M\times S$ into $M$
\item{} $\sigma_a$, the next-action function, is a map from $M\times S_i$ into $A$, such that for
all $m\in M$ and $s\in S_i$ we have $\sigma_a(m,s)\in A_s$.
\end{itemize}
A {\em counter-based strategy} is a special case of memory-based strategies, where for all $m\in M$
and $s,s'\in S$ we have $\sigma_u(m,s)=\sigma_u(m,s')$. That is, the memory can only contain a counter of some type.
We will therefore write $\sigma_u(m,s)$ as $\sigma_u(m)$ for all $m,s$ and any counter-based strategy $\sigma$.
A {\em Markov strategy $\sigma_i$ for Player~$i$} is a special case of strategies where
\[\forall p,p'\in S^{\leq T}: |p|=|p'| \wedge p_{|p|}=p'_{|p'|}\in S_i \Rightarrow \sigma_i(p',p'_{|p'|})=\sigma_i(p,p_{|p|}).\]
That is, a Markov strategy depends only on the length of the history and the current state.
Let $\Pi'_i$ be the set of all Markov strategies for Player~$i$.
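Concretely, the strategy classes above nest into one another. The small Python sketch below (the helper names and the example switching rule are ours, purely for illustration) represents a memory-based strategy as the pair $(\sigma_u,\sigma_a)$, and recovers a Markov strategy as the counter-based special case whose memory is the move count.

```python
# A memory-based strategy is a pair (update, act):
#   update(m, s) -> m'   plays the role of sigma_u
#   act(m, s)    -> arc  plays the role of sigma_a
def make_counter_strategy(update, act):
    """Counter-based: the memory update ignores the current state."""
    return (lambda m, s: update(m)), act

def make_markov_strategy(act_at_time):
    """Markov: decisions depend only on elapsed moves and current state,
    i.e. a counter-based strategy whose memory is the move count."""
    return make_counter_strategy(lambda m: m + 1, act_at_time)

# Hypothetical rule: play arc 0 for the first 5 moves, then arc 1.
update, act = make_markov_strategy(lambda t, s: 0 if t < 5 else 1)
m, choices = 0, []            # m^0 = initial memory
for _ in range(8):
    choices.append(act(m, "x"))
    m = update(m, "x")        # m^j = sigma_u(m^{j-1})
```

The counter here uses the integers $0,1,2,\dots$ as memories; encoding them in binary gives the $\log T$ space bound mentioned below.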
\smallskip\noindent{\em Following a strategy.}
For a strategy $\sigma_i$ for Player~$i$, we will say that Player~$i$ {\em follows} $\sigma_i$ if, for all $n$, given that $(p_j)_{j\leq n}$ is the sequence of states the pebble has visited up to move $n$ and that $p_n\in S_i$, Player~$i$ chooses $\sigma_i((p_j)_{j\leq n},p_n)$.
For a memory-based strategy $\sigma_i$ for Player~$i$, we will say that Player~$i$ {\em follows} $\sigma_i$ if, for all $n$, given that $(p_j)_{j\leq n}$ is the sequence of states the pebble has visited up to move $n$, that $p_n\in S_i$, and that $m^0=\emptyset$ and $m^j=\sigma_u(m^{j-1},p_j)$ for $1\leq j\leq n$, Player~$i$ chooses $\sigma_a(m^n,p_n)$.
\smallskip\noindent{\em Space required by a memory-based strategy.}
The {\em space usage of a memory-based strategy} is the logarithm of the number of distinct memory states used by the strategy at any point, if the player follows that strategy. A memory-based strategy is {\em memoryless} if there is only one memory state used by the strategy. For any FSSG $G^T$ with $n$ states it is clear that the set of strategies is a subset of the memory-based strategies that use space at most $T\log n$, since for any strategy $\sigma$ we can construct a memory-based strategy $\sigma'$ by using the memory to store the sequence of states seen so far and then choosing the same action as $\sigma$ would on that sequence of states. Hence we will also talk about $\epsilon$-optimal memory-based strategies.
Also note that for any FSSG $G^T$ it is clear that the set of Markov strategies is a subset of the set of counter-based strategies that uses space at most $\log T$.
\smallskip\noindent{\em Period of a counter-based strategy.}
We will distinguish between two kinds of memories for a counter-based strategy $\sigma$. One kind is only used once (the initial phase) and the other kind is used arbitrarily many times (the periodic phase). Let $m^0=\emptyset$ and $m^i=\sigma_u(m^{i-1})$. Then if $m^i=m^j$ for some $i<j$, we also have that $m^{i+c}=m^{j+c}$ and $m^{i}=m^{i+c(j-i)}$ for all $c\geq 0$. Hence if a memory is used twice, it will be reused again. We will let the number of memories that are only used once be $N$ and the number of memories used more than once be $p$, which we will call the period. The number $N$ is mainly important for $\epsilon$-optimal strategies and the period is mainly important for optimal strategies.
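The split into an initial phase of size $N$ and a periodic phase of size $p$ can be computed by iterating $\sigma_u$ until the first repeated memory. A sketch (the concrete update function at the end is a made-up example, not from the paper):

```python
def initial_phase_and_period(update, m0):
    """Iterate m^j = update(m^{j-1}) from m0 until a memory repeats.
    Returns (N, p): N memories used exactly once, p memories reused."""
    seen = {}                      # memory -> index of first occurrence
    m, j = m0, 0
    while m not in seen:
        seen[m] = j
        m, j = update(m), j + 1
    return seen[m], j - seen[m]    # (initial phase length, period)

# Made-up update: ramp for 3 steps, then cycle through 7 memories.
upd = lambda m: m + 1 if m < 3 else 3 + (m - 2) % 7
```

For `upd` starting at memory `0` this returns $N=3$ and $p=7$, matching the observation that once a memory repeats, the remaining memories cycle forever.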
\smallskip\noindent{\em Probability measure and values.}
A pair of strategies $(\sigma_1,\sigma_2)$, one for each player (in either an SSG or an FSSG),
defines a probability that the pebble is eventually moved to $\perp$. We denote this probability by
$P^{\sigma_1,\sigma_2}$.
For all SSGs $G$ (resp. FSSGs $G^T$) it follows from the results of Everett~\cite{Everett}
that
\[\sup_{\sigma_1 \in \Pi'_1}\inf _{\sigma_2 \in \Pi_2} P^{\sigma_1,\sigma_2}=\inf _{\sigma_2 \in \Pi'_2}\sup_{\sigma_1 \in \Pi_1} P^{\sigma_1,\sigma_2}.\]
We call this common value the \emph{value} of $G$ (resp. $G^T$) and denote it ${\rm val}(G)$ (resp. ${\rm val}(G^T)$).
\smallskip\noindent{\em $\epsilon$-optimal and optimal strategies.}
For all $\epsilon\geq 0$, we will say that a strategy $\sigma_1$ is {\em $\epsilon$-optimal for Player~1} if
\[\inf _{\sigma_2 \in \Pi_2} P^{\sigma_1,\sigma_2}+\epsilon\geq \sup_{\sigma'_1 \in \Pi'_1} \inf _{\sigma_2 \in \Pi_2} P^{\sigma'_1,\sigma_2}.\]
Similarly, a strategy $\sigma_2$ is {\em $\epsilon$-optimal for Player~2} if
\[\sup_{\sigma_1 \in \Pi_1} P^{\sigma_1,\sigma_2}-\epsilon\leq \inf _{\sigma'_2 \in \Pi'_2}\sup_{\sigma_1 \in \Pi_1} P^{\sigma_1,\sigma'_2}.\]
A strategy $\sigma$ is {\em optimal for Player~$i$} if it is $0$-optimal.
Condon~\cite{Condon92} showed that {\em there exist optimal memoryless strategies} for any SSG $G$ that are also optimal for $G_s$ for all $s\in S$.
This also implies that there are optimal Markov strategies for FSSGs $G^T$ that are also optimal for $G^T_s$ for all $s\in S$.
\section{Bounds on $\epsilon$-optimal counter-based strategies}\label{sec:counter}
We will first show an upper bound on the size of the memory used by a counter-based strategy for playing $\epsilon$-optimally in time-limited games. The upper bound on memory size follows from a result of Ibsen-Jensen and Miltersen~\cite{IJM11}.
The idea of the proof is that if we play an optimal strategy of $G$ in $G^T$ for sufficiently high $T$, then the value we get approaches the value of $G$.
\begin{theorem}\label{thm:upper}{\em (Upper bound)} For all FSSGs $G^T$ with $n$ states and $\epsilon>0$, there is an $\epsilon$-optimal counter-based
strategy for both players such that the memory size is at most $\log \log \epsilon^{-1}+n+1$.
\end{theorem}
\begin{proof}
Since there is an optimal Markov strategy, there is a counter-based strategy, which uses memory at most $\log T$.
As shown by Ibsen-Jensen and Miltersen~\cite{IJM11}
for any game $G^T$, if the horizon is greater than $2 \log \epsilon^{-1}2^n$, then the value of $G^T$ approximates the value of $G$
within $\epsilon$. It is clear that the value of each state is unchanged in an infinite-horizon game if either player is forced to play an optimal strategy. Hence, if $T\geq 2 \log \epsilon^{-1}2^n$ and either player plays an optimal strategy of $G$ in $G^T$, then the value of each state is within $\epsilon$ of its value in $G$. But there are optimal memoryless strategies in $G$, as shown by Condon~\cite{Condon92}. Therefore we may assume that, in the worst case, $T< 2 \log \epsilon^{-1}2^n$. Since $\log T$ is an upper bound on the memory size, $\log \log \epsilon^{-1}+n+1$ is also an upper bound, and hence the result.
\end{proof}
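The bookkeeping in the proof can be sanity-checked numerically. Assuming base-2 logarithms throughout (the paper leaves the base implicit), a horizon of $T = 2\log\epsilon^{-1}2^n$ gives $\log T = \log\log\epsilon^{-1} + n + 1$ exactly:

```python
from math import log2

def counter_bits(eps, n):
    """Bits needed for a step counter up to T = 2 * log2(1/eps) * 2^n."""
    T = 2 * log2(1 / eps) * 2 ** n
    return log2(T)

# For eps = 2^(-2^k), log2 log2 (1/eps) = k, so the bound is k + n + 1.
for n in range(1, 8):
    for k in range(2, 10):
        eps = 2.0 ** -(2 ** k)
        assert abs(counter_bits(eps, n) - (k + n + 1)) < 1e-9
```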
We will now lower bound the size of the memory needed for a counter-based strategy to be $\epsilon$-optimal.
Our lower bound will be divided into two parts. The first part will show that $\log \log \epsilon^{-1}$ is a lower bound on the memory required
even for some MDPs with constantly many states.
The second part will show that even for fixed $\epsilon$, an $\epsilon$-optimal counter-based strategy will need to use memory of size $\Omega(n)$.
Both lower bounds will show explicit MDPs with the required properties. See Figure~\ref{fig:epsilon} and Figure~\ref{fig:h4} respectively.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3,->,>=stealth',shorten >=1pt,auto,node distance=4.2cm*0.5,
semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\tikzstyle{max}=[state,regular polygon,regular polygon sides=3]
\tikzstyle{min}=[max,regular polygon rotate=180]
\node[state,accepting] (goal) {$\perp$};
\node[state] (trap) [below of =goal]{$\top$};
\node[state] (m1) [right of=goal] {1};
\node[state] (m2) [right of=m1] {2};
\node[max] (max) [below right of=m2] {$x$};
\node[state] (start) [right of=max] {${\rm start}$};
\node[state] (h) [below of=m1] {$h$};
\path
(m1) edge [bend left] (goal)
(m1) edge [bend right] (goal)
(m2) edge [bend left] (m1)
(m2) edge [bend right] (m1)
(max) edge (m2)
(max) edge (h)
(h) edge (goal)
(h) edge (trap)
(trap) edge [loop above] (trap)
(trap) edge [loop below] (trap)
(start) edge [loop above] (start)
(start) edge (max)
;
\end{tikzpicture}
\caption{An MDP $G$, such that for all $\epsilon>0$ there is a $T$, such that all $\epsilon$-optimal memory-based strategies for $G^T$ require memory size of at least $\Omega(\log \log \epsilon^{-1})$. Circle vertices are the coin toss states. The triangle vertex is the max state. The vertex $\perp$ is the terminal state. }
\label{fig:epsilon}
\end{figure}
\smallskip\noindent{\em MDP for the lower bound of $\log\log \epsilon^{-1}$.}
Our first lower bound shows that in the MDP $M$ (Figure~\ref{fig:epsilon})
all $\epsilon$-optimal memory-based strategies require at least $\log \epsilon^{-1}$ distinct memory states,
i.e., the size of memory is at least $\log\log \epsilon^{-1}$.
The MDP $M$ is defined as follows. There is one state $x$ in $S_1$, the rest are in $S_R$.
\begin{itemize}
\item{} The state $\top\in S_R$ has $A_\top=\{(\top,\top),(\top,\top)\}$.
\item{} The state $h\in S_R$ has $A_h=\{(h,\top),(h,\bot)\}$.
\item{} The state $1\in S_R$ has $A_1=\{(1,\perp),(1,\perp)\}$.
\item{} The state $2\in S_R$ has $A_2=\{(2,1),(2,1)\}$.
\item{} The state $x\in S_1$ has $A_m=\{(x,2),(x,h)\}$.
\item{} The state start$\in S_R$ has $A_{\text{start}}=\{(\text{start},\text{start}),(\text{start},x)\}$.
\end{itemize}
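A direct backward induction over $M$ (a Python sketch; state names follow the figure, with `BOT` for $\perp$ and `TOP` for $\top$) confirms the values used in the proof below: ${\rm val}(M^2_{x})=\frac{1}{2}$ and ${\rm val}(M^T_{x})=1$ for all $T>2$.

```python
def val_M(T, s0):
    """v_t(s): probability of reaching BOT within t moves from s in M."""
    coin = {"TOP": ("TOP", "TOP"),       # the trap, both arcs loop
            "h":   ("TOP", "BOT"),       # fair coin: trap or terminal
            "1":   ("BOT", "BOT"),
            "2":   ("1", "1"),
            "start": ("start", "x")}
    v = {s: 0.0 for s in list(coin) + ["x"]}
    v["BOT"] = 1.0
    for _ in range(T):
        nv = {"BOT": 1.0}
        for s, (a, b) in coin.items():
            nv[s] = (v[a] + v[b]) / 2    # coin-toss states average
        nv["x"] = max(v["2"], v["h"])    # the single max state
        v = nv
    return v[s0]
```

With two moves left the max state can only secure the coin toss at $h$; with more than two moves left, the deterministic path through $2$ and $1$ reaches $\perp$ with certainty.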
\begin{lemma}\label{lem:lowere}
All $\epsilon$-optimal memory-based strategies in $M^{T}$, for $T=\log \epsilon^{-1} - 1$, require at least $\log \epsilon^{-1}-2$
distinct states of memory, i.e., the size of memory is at least $\log\log \epsilon^{-1}$.
\end{lemma}
\begin{proof}
We will first show the proof for counter-based strategies. At the end we will then extend it to memory-based strategies.
It is clear that ${\rm val}(M^2_{x})=\frac{1}{2}$ and for all $T>2$ we have ${\rm val}(M^T_{x})=1$. If Player~$1$ chooses $(x,h)$ in $M^2_{x}$, then he gains $\frac{1}{2}$; otherwise, if he chooses $(x,2)$, then he gains $0$. Also, for all $T>2$, if Player~1 chooses $(x,2)$ in $M^T_{x}$, then he gains $1$; otherwise, if he chooses $(x,h)$, then he gains $\frac{1}{2}$.
In $M_{{\rm start}}$ the pebble ends up at $x$ after precisely $k\geq 2$ moves with probability $2^{-k+1}$. Therefore, by the preceding, any optimal memory-based strategy $\sigma$ must be able to determine from the memory whether $T$ minus the length of the history is greater than $2$.
Let $\epsilon>0$ be given. For simplicity we will assume that $\epsilon=2^{-k}$ for some $k>0$. Let $c=\log \epsilon^{-1}$. Assume now that there is an $\epsilon$-optimal counter-based strategy $\sigma=(\sigma_u,\sigma_a)$ that uses at most $c-3$ states of memory in $M^{c-1}_{{\rm start}}$. The pebble ends up at $x$ after $c-3$ moves with probability $2^{-(c-3)+1}=16\epsilon$. Let the sequence of memories until then be $m^0,m^1,\dots,m^{c-3}$. Since $\sigma$ is $\epsilon$-optimal, we must have that $\sigma_a(m^{c-3},x)=(x,h)$. On the other hand, for all $i<c-3$ we must also have that $\sigma_a(m^i,x)=(x,2)$. Therefore $m^{c-3}$ differs from $m^i$ for $i< c-3$. Now assume that $m^i=m^j$ for $i<j$ and $i,j< c-3$. But then $\sigma_u(m^i)=\sigma_u(m^j)$ and hence $m^{i+1}=m^{j+1}$, and by repeating this argument we have that $m^{k'}=m^{c-3}$ for some $k'<c-3$, a contradiction. Therefore $m^i$ differs from $m^j$ for $i\neq j$ and $i,j\leq c-3$, and hence we need at least $c-2$ different memory states.
For general memory-based strategies the proof remains the same. This is because, if the pebble ends up at $x$ after $c-3$ moves, then $m^0=\emptyset$ and $m^j=\sigma_u(m^{j-1},{\rm start})$ for $1\leq j\leq c-3$, and hence the memories must all differ by the same argument as before.
\end{proof}
For our second lower bound we will use an infinite family of MDPs \[H=\{H(1),H(2),\dots,H(i),\dots\},\] such that $H(i)$ contains $2i+4$ states, one of which is a max state, and all $\epsilon$-optimal counter-based strategies require memory of size at least $i-5$, for some fixed $\epsilon$.
\smallskip\noindent{\em Family of MDPs for the lower bound of $n$.}
The MDP $H(i)$ is defined as follows. There is one state $x$ in $S_1$, the rest are in $S_R$. \begin{itemize}
\item{} The state $\top\in S_R$ has $A_\top=\{(\top,\top),(\top,\top)\}$.
\item{} The state $h\in S_R$ has $A_h=\{(h,\top),(h,\bot)\}$.
\item{} The state $1\in S_R$ has $A_1=\{(1,\perp),(1,i)\}$.
\item{} For $j\in \{2,\dots,i\}$, the state $j\in S_R$ has $A_j=\{(j,i),(j,j-1)\}$.
\item{} The state $x\in S_1$ has $A_m=\{(x,i),(x,h)\}$.
\item{} The state $1^*\in S_R$ has $A_{1^*}=\{(1^*,i^*),(1^*,x)\}$.
\item{} For $j\in \{2,\dots,i\}$, the state $j^*\in S_R$ has $A_{j^*}=\{(j^*,i^*),(j^*,(j-1)^*)\}$.
\end{itemize}
There is an illustration of $H(4)$ in Figure~\ref{fig:h4}.
Let $i$ be some number. It is clear that ${\rm val}(H(i)^2_{x})=\frac{1}{2}$. It is also easy to see that ${\rm val}(H(i)_i)=1$, but the time to reach $\perp$ from $i$ is quite long. Hence, one can deduce that there must be a $k$ (depending on $i$) such that for all $k'\geq k$ it is an optimal strategy in $H(i)_{x}^{k'}$ to choose $(x,i)$, and for all $2\leq k''<k$ it is an optimal strategy in $H(i)_{x}^{k''}$ to choose $(x,h)$. In case there are multiple such numbers, let $k$ be the smallest. The number $k-1$ is then the smallest number of moves within which the pebble reaches $\perp$ from $i$ with probability $\geq \frac{1}{2}$ (to simplify the proofs we will assume equality).
Let $p^t$ be the probability for the pebble to reach $x$ from $i^*$ in $t$ or less moves (note that this is also the probability to reach $\perp$ in $t$ moves or less from $i$). It is clear that $p^t$ is equal to the
probability that a sequence of $t$ fair coin tosses contains $i$ consecutive tails. This is known to be exactly
$1-F_{t+2}^{(i)}/2^t$, where $F_{t+2}^{(i)}$ is the $(t+2)$-nd $i$-step Fibonacci
number, i.e., the number given by the linear homogeneous recurrence
$F_c^{(i)} = \sum_{j=1}^{i} F_{c-j}^{(i)}$ and the
boundary conditions $F_{c}^{(i)} = 0$, for $c \leq 0$,
$F_{1}^{(i)} = F_{2}^{(i)} = 1$ (this fact is also mentioned in Ibsen-Jensen and Miltersen~\cite{IJM11}).
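The identity $p^t = 1 - F^{(i)}_{t+2}/2^t$ can be verified mechanically with exact arithmetic, comparing the $i$-step Fibonacci count against a run-length dynamic program (a standalone sketch using only the Python standard library):

```python
from fractions import Fraction

def fib_i(i, c):
    """i-step Fibonacci number F_c: F_c = 0 for c <= 0, F_1 = F_2 = 1,
    F_c = F_{c-1} + ... + F_{c-i} for c >= 3."""
    F = {c0: 0 for c0 in range(2 - i, 1)}   # zeros for c <= 0
    F[1] = F[2] = 1
    for c0 in range(3, c + 1):
        F[c0] = sum(F[c0 - j] for j in range(1, i + 1))
    return F[c]

def p_run(i, t):
    """Probability that t fair coin tosses contain i consecutive tails,
    via a DP on the length of the current run of tails."""
    run = {0: Fraction(1)}                  # run-length distribution
    done = Fraction(0)
    for _ in range(t):
        new = {0: Fraction(0)}
        for r, pr in run.items():
            new[0] += pr / 2                # heads: run resets
            if r + 1 == i:
                done += pr / 2              # i-th consecutive tail
            else:
                new[r + 1] = new.get(r + 1, Fraction(0)) + pr / 2
        run = new
    return done

# Check the identity p^t = 1 - F_{t+2} / 2^t exactly for small i, t.
for i in range(1, 6):
    for t in range(15):
        assert p_run(i, t) == 1 - Fraction(fib_i(i, t + 2), 2 ** t)
```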
The next lemmas will prove various properties of $p^t$, $F^{(i)}_a$ and $k$. We will first show two technical lemmas that will be used in many of the remaining lemmas. Next, we will show that $k$ is exponential in $i$ and show various bounds on $p^t$. We will use all that to show that the number of states in the game is a lower bound on the memory requirement for $\epsilon$-optimal counter-based strategies.
\begin{lemma}\label{lem:Fai}
Let $i$ and $a\geq i+3$ be given. Then \[F_a^{(i)}\leq (2-2^{-i-1})F_{a-1}^{(i)}\]
Let $b\geq 3$ be given. Then \[F_b^{(i)}\leq 2F_{b-1}^{(i)}\]
\end{lemma}
\begin{proof} We can see that \[F_b^{(i)}=\sum_{j=1}^{i} F_{b-j}^{(i)}=2F_{b-1}^{(i)}-F_{b-1-i}^{(i)}\] for $b\geq 3$. Hence we have that $F_b^{(i)}\leq 2F_{b-1}^{(i)}$.
Applying this repeatedly, we have that $F_{a-1-i}^{(i)}\geq 2^{-i}F_{a-1}^{(i)}\geq 2^{-i-1}F_{a-1}^{(i)}$, and we can deduce that \[F_a^{(i)}\leq 2F_{a-1}^{(i)}-2^{-i-1}F_{a-1}^{(i)}.\]
The desired result follows.
\end{proof}
Now for the proof that $k$ is exponential in $i$.
\begin{lemma}\label{lem:kexp}
For all $i$, we have that $k\geq 2^{i-2}+i$.
\end{lemma}
\begin{proof}
We will first show that $p^a\leq p^{a-1}+2^{-i}$. We can divide the event that there are $i$ consecutive tails among $a$ fair coin tosses into two possibilities: either the first $i$ coin tosses were all tails, or there are $i$ consecutive tails among the last $a-1$ coin tosses (or both). The first case happens with probability $2^{-i}$ and the second with probability $p^{a-1}$. We can then apply the union bound and get that $p^a\leq p^{a-1}+2^{-i}$.
Clearly we have that $p^{i-1}=0$ and that $p^a$ is increasing in $a$. But we also have that \[
\begin{split}
p^k& \leq 2^{i-2} 2^{-i} + p^{k-2^{i-2}}\Rightarrow\\
\frac{1}{2} & \leq \frac{1}{4} + p^{k-2^{i-2}}\Rightarrow\\
\frac{1}{4} & \leq p^{k-2^{i-2}},
\end{split}
\]
which means that $k> 2^{i-2}+i-1$.
\end{proof}
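For small $i$ the threshold $k$ can be computed exactly from the Fibonacci identity, confirming the lemma numerically (a standalone sketch with exact integer arithmetic):

```python
def threshold_k(i):
    """Smallest t with p^t = 1 - F_{t+2}/2^t >= 1/2, i.e. the smallest t
    such that at most half of all 2^t toss sequences avoid a run of i
    consecutive tails (F is the i-step Fibonacci sequence)."""
    F = {c: 0 for c in range(2 - i, 1)}     # F_c = 0 for c <= 0
    F[1] = F[2] = 1
    c = 2
    while True:
        c += 1
        F[c] = sum(F[c - j] for j in range(1, i + 1))
        t = c - 2
        if 2 * F[c] <= 2 ** t:              # F_{t+2} <= 2^{t-1}, exactly
            return t

# 2^{i-2} + i is a lower bound on k, as the lemma states.
for i in range(2, 12):
    assert threshold_k(i) >= 2 ** (i - 2) + i
```

For example, `threshold_k(3)` returns $10$: ten tosses are the first point where a run of three tails appears with probability at least $\frac{1}{2}$, comfortably above the bound $2^{1}+3=5$.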
\begin{lemma}\label{lem:k}
Let $i\geq 12$ be given. The number $k$ is such that
\[e^{\frac{-1}{8}} \geq (1-2^{-i-2})^{k}\geq \frac{1}{4}\]
and such that
\[e^{\frac{-1}{8}} \geq (1-2^{-i-2})^{k-i}\geq \frac{1}{2}\]
\end{lemma}
\begin{proof}
We have that $1-F_{k+2}^{(i)}/2^k=\frac{1}{2}$, which we can then use to show that \[
\begin{split}
1-F_{k+2}^{(i)}/2^k&=\frac{1}{2}\Rightarrow\\
F_{k+2}^{(i)}/2^k&=\frac{1}{2}\Rightarrow\\
\frac{(2-2^{-i-1})^{k-i}2^i F_{2}^{(i)}}{2^k}&\geq \frac{1}{2}\Rightarrow\\
(1-2^{-i-2})^{k-i}&\geq \frac{1}{2}
\end{split}
\]
where we used Lemma \ref{lem:Fai} for the second implication. We used that $F_{2}^{(i)}=1$ for the third implication.
Since $k\geq 2^{i-2}+i> 2i$ by Lemma \ref{lem:kexp}, we also have that $(1-2^{-i-2})^{k}\geq \frac{1}{4}$.
But we can also use Lemma \ref{lem:kexp} more directly. Notice that since $i\geq 12$ we have that $2^{i+2}\geq 72$. We have that, \[
(1-2^{-i-2})^{k-i}\leq (1-2^{-i-2})^{2^{i-2}}=((1-2^{-i-2})^{2^{i+2}})^{\frac{1}{8}}\leq e^{\frac{-1}{8}},
\]
where we used that $\lim_{x\rightarrow \infty} (1-x^{-1})^x=e^{-1}$ and that $(1-x^{-1})^x$ is increasing in $x$ for $x\geq 1$. We also have that $e^{\frac{-1}{8}} \geq (1-2^{-i-2})^{k}$, by the same argument.
\end{proof}
\begin{lemma}\label{lem:half t}
For all $i$ and $t$, we have \[
p^{2t-2i}\leq 2p^t
\]
\end{lemma}
\begin{proof}
Let $t'=t-i$. Hence, we need to show that $p^{2t'}\leq 2p^{t'+i}$.
The proof comes from the fact that to have $i$ consecutive tails out of $2t'$ fair coin tosses, the $i$ consecutive tails must either start in the first half or end in the second half (or both). But to start in the first half means that it must end in the first $t'+i$ elements. Therefore we can overestimate that probability with $p^{t'+i}$. Similar with the second half. We can then add them together by union bound and the result follows.
\end{proof}
\begin{lemma}\label{lem:upper bound on pdk}
Let $i\geq 12$ and $\frac{1}{10}<d<1$ be given. Then $p^{dk}\leq 1-\frac{e^{\frac{1-d}{8}}}{2}<\frac{1}{2}$.
\end{lemma}
\begin{proof}
Since $d>\frac{1}{10}$, we have that $dk>i$, by Lemma \ref{lem:kexp} and because $i\geq 12$.
We will show that $F_{dk+2}^{(i)}/2^{dk}\geq \frac{e^{\frac{1-d}{8}}}{2}$.
We have that \[
\begin{split}
F_{dk+2}^{(i)}/2^{dk}& \geq \frac{F_{k+2}^{(i)}}{(2-2^{-i-1})^{(1-d)k} 2^{dk}}\\
& =\frac{F_{k+2}^{(i)}}{(1-2^{-i-2})^{(1-d)k} 2^{k}}\\
& =\frac{1}{2\cdot (1-2^{-i-2})^{(1-d)k}}\\
& =\frac{1}{2\cdot ((1-2^{-i-2})^{k})^{1-d}}\\
& \geq \frac{1}{2\cdot (e^{-\frac{1}{8}})^{1-d}}\\
& = \frac{e^{\frac{1}{8}(1-d)}}{2}
\end{split}
\]
where we used Lemma \ref{lem:Fai} for the first inequality, the identity $F_{k+2}^{(i)}=2^{k-1}$ for the third equality, and Lemma \ref{lem:k} for the second inequality.
\end{proof}
\begin{lemma}\label{lem:greater than k}
Let $i\geq 12$ and $0<d$ be given. Then $p^{(1+d)k}\geq 1-(e^{\frac{-d}{8}}) \frac{1}{2}>\frac{1}{2}$.
\end{lemma}
\begin{proof}
We will show that $F_{(1+d)k+2}^{(i)}/2^{(1+d)k}\leq (e^{\frac{-d}{8}}) \frac{1}{2}$.
We have that \[
\begin{split}
F_{(1+d)k+2}^{(i)}/2^{(1+d)k}& \leq \frac{(2-2^{-i-1})^{dk}F_{k+2}^{(i)}}{2^{(1+d)k}} \\
& = (1-2^{-i-2})^{dk}\frac{1}{2}\\
& = ((1-2^{-i-2})^{k})^d \frac{1}{2}\\
& \leq (e^{\frac{-1}{8}})^d \frac{1}{2}\\
& = (e^{\frac{-d}{8}}) \frac{1}{2}
\end{split}
\]
where we used Lemma \ref{lem:Fai} for the first inequality and Lemma \ref{lem:k} for the second.
\end{proof}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3,->,>=stealth',shorten >=1pt,auto,node distance=4.2cm*0.5,
semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\tikzstyle{max}=[state,regular polygon,regular polygon sides=3]
\tikzstyle{min}=[max,regular polygon rotate=180]
\node[state,accepting] (goal) {$\perp$};
\node[state] (trap) [left of =goal]{$\top$};
\node[state] (m1) [below of=goal] {1};
\node[state] (m2) [below of=m1] {2};
\node[state] (m3) [below of=m2] {3};
\node[state] (m4) [below of=m3] {4};
\node[max] (max) [below left of=m4] {$x$};
\node[state] (h) [left of=m1] {$h$};
\node[state] (m1s) [left of=max] {$1^*$};
\node[state] (m2s) [above of=m1s] {$2^*$};
\node[state] (m3s) [above of=m2s] {$3^*$};
\node[state] (m4s) [above of=m3s] {$4^*$};
\path
(m1) edge (goal)
(m1) edge [bend left,in=120] (m4)
(m2) edge (m1)
(m2) edge [bend right] (m4)
(m3) edge (m2)
(m3) edge [bend left] (m4)
(m4) edge (m3)
(m4) edge [loop left] (m4)
(max) edge (m4)
(max) edge (h)
(h) edge (goal)
(h) edge (trap)
(trap) edge [loop left] (trap)
(trap) edge [loop right] (trap)
(m1s) edge (max)
(m1s) edge [bend left,in=120] (m4s)
(m2s) edge (m1s)
(m2s) edge [bend right] (m4s)
(m3s) edge (m2s)
(m3s) edge [bend left] (m4s)
(m4s) edge (m3s)
(m4s) edge [loop right] (m4)
;
\end{tikzpicture}
\caption{The MDP $H(4)$. It is the fourth member of a family that will show that there exist FSSGs where, for a fixed $\epsilon$, all $\epsilon$-optimal counter-based strategies require memory size to be at least $\Omega(i)$. Circle vertices are the coin toss states. The triangle vertex is the max state. The vertex $\perp$ is the terminal state. }
\label{fig:h4}
\end{figure}
\begin{lemma}
There is an $\epsilon$ such that for all $i\geq 12$, there is a time-bound $T$ such that all $\epsilon$-optimal counter-based strategies for $H(i)^T$ require memory size at least $i-5$.\label{lem:lowern}
\end{lemma}
The proof basically goes as follows: The pebble starts at $i^{*}$ with $2k+1$ moves remaining. First we show that there is a super-constant probability for the pebble to reach $x$ using somewhere between $\frac{k}{5}$ and $\frac{4k}{5}$ moves. In that case there are at least $\frac{6k}{5}+1$ moves left. We then show that there is some number $p>\frac{1}{2}$ independent of $i$ such that the probability to reach $\perp$ from $i$ in $\frac{6k}{5}$ moves is more than $p$. Secondly, we show that there is a super-constant probability for the pebble to reach $x$ using somewhere between $\frac{6k}{5}$ and $\frac{9k}{5}$ moves. In that case there are at most $\frac{4k}{5}+1$ moves left. We then show that there is some number $q<\frac{1}{2}$ independent of $i$ such that the probability to reach $\perp$ from $i$ in $\frac{4k}{5}$ moves is less than $q$. We can then pick $\epsilon$ such that any $\epsilon$-optimal strategy must distinguish between plays that used between $\frac{k}{5}$ and $\frac{4k}{5}$ moves to reach $x$ from $i^{*}$ and plays that used between $\frac{6k}{5}$ and $\frac{9k}{5}$ moves to reach $x$ from $i^{*}$. We then show that this requires at least $\Omega(k)$ distinct states of memory, and the result then follows from $k$ being exponential in $i$, by Lemma \ref{lem:kexp}.
\begin{proof}The probability for the pebble to reach $x$ using somewhere between $\frac{k}{5}$ and $\frac{4k}{5}$ moves is
\[
\begin{split}
p^{\frac{4k}{5}}-p^{\frac{k}{5}}&=1-F_{\frac{4k}{5}+2}^{(i)}/2^{\frac{4k}{5}}-(1-F_{\frac{k}{5}+2}^{(i)}/2^{\frac{k}{5}})\\
&=\frac{2^{\frac{3k}{5}}F_{\frac{k}{5}+2}^{(i)}-F_{\frac{4k}{5}+2}^{(i)}}{2^{\frac{4k}{5}}}\\
&\geq \frac{2^{\frac{3k}{5}}F_{\frac{k}{5}+2}^{(i)}-(2-2^{-i-1})^{\frac{3k}{5}}F_{\frac{k}{5}+2}^{(i)}}{2^{\frac{4k}{5}}}\\
&= \frac{(2^{\frac{3k}{5}}-(2-2^{-i-1})^{\frac{3k}{5}})F_{\frac{k}{5}+2}^{(i)}}{2^{\frac{4k}{5}}}\\
&= \frac{(1-(1-2^{-i-2})^{\frac{3k}{5}})F_{\frac{k}{5}+2}^{(i)}}{2^{\frac{k}{5}}}\\
&= (1-((1-2^{-i-2})^{k})^{\frac{3}{5}})(1-p^{\frac{k}{5}})\\
&\geq (1-e^{\frac{-3}{40}} )\frac{e^{\frac{1}{10}}}{2}
\end{split}
\]
where we used Lemma \ref{lem:Fai} for the first inequality and Lemma \ref{lem:k} and Lemma \ref{lem:upper bound on pdk} for the second.
In this case at least $\frac{6k}{5}+1$ moves are left. Therefore, if the player chooses to move to $i$, at least $\frac{6k}{5}$ moves remain. In that case, by Lemma \ref{lem:greater than k}, the pebble reaches $\perp$ with probability at least $1-(e^{\frac{-3}{40}})\frac{1}{2}>\frac{1}{2}$. As we show next, in the complementary case the probability is strictly below $\frac{1}{2}$, so the two cases are strictly separated by $\frac{1}{2}$.
The probability for the pebble to reach $x$ using somewhere between $\frac{6k}{5}$ and $\frac{9k}{5}$ moves can be calculated similarly to the case of between $\frac{k}{5}$ and $\frac{4k}{5}$ moves. We end up with
\[
p^{\frac{9k}{5}}-p^{\frac{6k}{5}}\geq (1-e^{\frac{-1}{8}} )(1-p^{\frac{6k}{5}}).
\]
Hence, we need an upper bound on $p^{\frac{6k}{5}}$ that is smaller than 1 and does not depend on $k$ or $i$. We obtain one by noting that $\frac{6k}{5}\leq \frac{8k}{5}-2i$, which follows from Lemma \ref{lem:kexp} and $i\geq 12$. Hence we can apply Lemma \ref{lem:half t} followed by Lemma \ref{lem:upper bound on pdk} to get $p^{\frac{6k}{5}}\leq 2p^{\frac{4k}{5}}\leq 2(1-\frac{e^{\frac{1}{40}}}{2})<1$.
In this case at most $\frac{4k}{5}+1$ moves are left. Therefore, if the player chooses to move to $i$, at most $\frac{4k}{5}$ moves remain. In that case, by Lemma \ref{lem:upper bound on pdk}, the pebble reaches $\perp$ with probability at most $1-\frac{e^{\frac{1}{40}}}{2}<\frac{1}{2}$.
Let $\sigma$ be some $\epsilon$-optimal counter-based strategy and assume that $\sigma$ uses fewer than $\frac{k}{5}-1$ states. We will show that if $\epsilon$ is a sufficiently small constant, this yields a contradiction, and hence every $\epsilon$-optimal counter-based strategy uses at least $\frac{k}{5}$ states. The result then follows from Lemma \ref{lem:kexp}.
Let $m^0=\emptyset$ and $m^i=\sigma_u(m^{i-1})$. Since $\sigma$ uses fewer than $\frac{k}{5}$ states, we have $m^a=m^b$ for some $a<b<\frac{k}{5}$. Hence also $m^{a+c}=m^{b+c}$ for all $c\geq 0$, by definition, and therefore $m^{a+c}=m^{a+c+(b-a)d}$ for all $c,d\geq 0$. Hence, we can construct a one-to-one map between the memories $m^a$ for $a\in A=\{\frac{k}{5},\dots,\frac{4k}{5}\}$ and memories $m^b$ for $b\in B=\{\frac{6k}{5},\dots,\frac{9k}{5}\}$ such that $m^a=m^b$, except for up to $\frac{k}{5}$ of them, which is less than a third of the size of both $A$ and $B$.
Let $q^t$ be the probability of reaching $x$ from $i^{*}$ using exactly $t$ moves of the pebble. For $t\geq i+1$ we have \[q^t=p^t-p^{t-1}=\frac{2F_{t+1}^{(i)}-F_{t+2}^{(i)}}{2^t}=\frac{F_{t+1-i}^{(i)}}{2^t}=2^{-i-1}(1-p^{t-1-i}).\]
(To see a run of $i$ tails completed after precisely $t$ coin flips, for $t>i$, we must fail to get $i$ tails in a row during the first $t-1-i$ coin flips and then get a head followed by $i$ tails, which is exactly what the expression says.)
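This identity can be checked by brute force for small parameters. The sketch below is ours, not from the paper; it interprets the walk from $i^{*}$ to $x$ as flipping a fair coin until $i$ consecutive tails appear, as in the parenthetical above, and reads $F^{(i)}_{m+2}$ (via $1-p^m=F^{(i)}_{m+2}/2^m$) as the number of length-$m$ binary strings with no run of $i$ consecutive tails.

```python
from itertools import product
from fractions import Fraction

def has_run(seq, i):
    """True if the 0/1 sequence (1 = tails) contains i consecutive tails."""
    run = 0
    for b in seq:
        run = run + 1 if b else 0
        if run >= i:
            return True
    return False

def first_run_end(seq, i):
    """1-based flip index at which the first run of i tails completes, or None."""
    run = 0
    for pos, b in enumerate(seq, 1):
        run = run + 1 if b else 0
        if run >= i:
            return pos
    return None

def q_brute(t, i):
    """Exact probability that the first run of i tails completes at flip t."""
    hits = sum(1 for s in product((0, 1), repeat=t) if first_run_end(s, i) == t)
    return Fraction(hits, 2 ** t)

def q_closed(t, i):
    """Closed form q^t = F^{(i)}_{t+1-i} / 2^t for t > i, with F^{(i)}_{t+1-i}
    read as the number of run-free strings of length t-1-i."""
    prefix = sum(1 for s in product((0, 1), repeat=t - 1 - i) if not has_run(s, i))
    return Fraction(prefix, 2 ** t)
```

For $t\leq i$ the closed form does not apply, which is why any such check should start at $t>i$.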
We see that $q^t$ is decreasing for $t\geq i+1$, because $p^t$ is increasing. We can therefore bound the ratio between the probabilities of reaching $x$ at the extreme times in $A$ as
\[
\begin{split}
\frac{q^{\frac{k}{5}}}{q^{\frac{4k}{5}}}&=\frac{2^{-i-1}(1-p^{\frac{k}{5}-1-i})}{2^{-i-1}(1-p^{\frac{4k}{5}-1-i})}\\
&=\frac{\frac{F_{\frac{k}{5}+1-i}^{(i)}}{2^{\frac{k}{5}-1-i}}}{\frac{F_{\frac{4k}{5}+1-i}^{(i)}}{2^{\frac{4k}{5}-1-i}}}\\
&\geq \frac{ F_{\frac{k}{5}+1-i}^{(i)}2^{\frac{3k}{5}}}{(2-2^{-i-1})^{\frac{3k}{5}}F_{\frac{k}{5}+1-i}^{(i)}}\\
&= (1-2^{-i-2})^{-\frac{3k}{5}}\\
&= ((1-2^{-i-2})^{k})^{-\frac{3}{5}}\\
&\geq e^{\frac{3}{40}},
\end{split}
\]
where we used Lemma \ref{lem:Fai} for the first inequality and Lemma \ref{lem:k} for the second.
We can show similarly that the $q^t$ for $t\in B$ are also equal up to a factor of $e^{\frac{3}{40}}$.
Hence, the probability of reaching $x$ from $i^{*}$ with $t$ time remaining, for $t-1\in A$, is nearly uniformly distributed over $A$ (up to a factor of $e^{\frac{3}{40}}$), and similarly for $t-1\in B$.
Therefore we can pick an $\epsilon_1$ (independent of $i$) such that $\sigma_a(m^t,x)=(x,h)$ for all but $\frac{1}{10}$ of the $t$'s in $A$. Similarly, we can pick an $\epsilon_2$ (independent of $i$) such that $\sigma_a(m^t,x)=(x,i)$ for all but $\frac{1}{10}$ of the $t$'s in $B$.
Using $\epsilon=\min(\epsilon_1,\epsilon_2)$, at least $\frac{9}{10}$ of all $t\in A$ satisfy $\sigma_a(m^t,x)=(x,h)$ and at least $\frac{9}{10}$ of all $t\in B$ satisfy $\sigma_a(m^t,x)=(x,i)$. But this contradicts the existence of a one-to-one map matching at least two thirds of the memories $m^a$ for $a\in A$ with memories $m^{b}$ for $b\in B$ such that $m^a=m^b$ (with at least two thirds of the $b$'s being matched).
Hence every $\epsilon$-optimal counter-based strategy uses memory at least $\frac{k}{5}$. The result then follows from $k\geq 2^{i-2}+i$, by Lemma \ref{lem:kexp}.
\end{proof}
\begin{theorem}\label{thm:lower}{\em (Lower bound)} For all sufficiently small $\epsilon>0$ and all $n\geq 5$, there is an FMDP with $n$ states where every $\epsilon$-optimal counter-based strategy requires memory of size $\Omega(\log \log \epsilon^{-1}+n)$.
\end{theorem}
\begin{proof}
The proof is a simple combination of the two lower bounds in Lemma \ref{lem:lowere} and Lemma \ref{lem:lowern}.
\end{proof}
\section{A lower bound on the period of optimal strategies in MDPs}\label{sec:period}
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3,->,>=stealth',shorten >=1pt,auto,node distance=4.2cm*0.5,
semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\tikzstyle{max}=[state,regular polygon,regular polygon sides=3]
\tikzstyle{min}=[max,regular polygon rotate=180]
\node[state,accepting] (goal) {$\perp$};
\node[state] (m1) [below of=goal] {1*};
\node[state] (m2) [below of=m1] {2*};
\node[state] (m3) [below of=m2] {3*};
\node[state] (m4) [below of=m3] {4*};
\node[state] (q5) [below of=m4] {5};
\node[state] (q4) [right of=q5] {4};
\node[state] (q3) [right of=q4] {3};
\node[state] (q2) [right of=q3] {2};
\node[state] (q1) [right of=q2] {1};
\node[max](max)[below of=q2]{};
\path
(m1) edge [bend left] (goal)
(m1) edge [bend right] (goal)
(m2) edge [bend left] (m1)
(m2) edge [bend right] (m1)
(m3) edge [bend left] (m2)
(m3) edge [bend right] (m2)
(m4) edge [bend left] (m3)
(m4) edge [bend right] (m3)
(q5) edge (m4)
(q5) edge (q4)
(q4) edge [in=-25,out=90](m3)
(q4) edge (q3)
(q3) edge [in=-25,out=90] (m2)
(q3) edge (q2)
(q2) edge [in=-25,out=90] (m1)
(q2) edge (q1)
(q1) edge [in=-25,out=90] (goal)
(q1) edge [bend left] (q5)
(max) edge (q1)
(max) edge (q2)
;
\end{tikzpicture}
\caption{The MDP $G_5$. Circle vertices are the coin toss states. The triangle vertex is the Max state. The vertex $\perp$ is the terminal state. }
\label{fig:g5}
\end{figure}
In this section we show that there exist FMDPs $G$ with $n$ states such that every optimal strategy can be implemented as a counter-based strategy, and the period is $2^{\Omega(\sqrt{n\log n})}$. We construct such FMDPs in two steps. First we construct a family whose $i$th member requires that one state use one action every $\Theta(i)$ steps and the other action in all remaining steps; a member of this family is illustrated in Figure \ref{fig:g5}. We then play many such games in parallel, which forces a large period for every optimal strategy; such a game is illustrated in Figure \ref{fig:f2}.
Let $G_p$, for $p\in \{2,3,\dots\}$, be the following FMDP with $2p-1$ coin toss states and one max state. The coin toss states are divided into the sets $\{1^*,2^*,\dots,(p-1)^*\}$ and $\{1,2,\dots,p\}$. To simplify the following description, let state $0^*$ denote the terminal state $\perp$. A description of $G_p$ is then \begin{itemize}
\item{}State $i^*$ has state $(i-1)^*$ as both of its successors.
\item{} State $i$ has states $(i-1)^*$ and $i-1$ as successors, except state $1$, which has $\perp$ and state $p$ as successors.
\item{} The max state has $1$ and $2$ as successors.
\end{itemize}
There is an illustration of $G_5$ in Figure \ref{fig:g5}.
\begin{lemma}\label{value of gp}
Let $p\geq 2$ be given. State $i$ has value $1-2^{-f_i(k)}$ in $G_p^{k}$ for $k>0$, where $f_i(k)=\max\left(\{k'\leq k : k'\equiv i \pmod{p}\}\cup\{0\}\right)$.
\end{lemma}
\begin{proof}
It is easily seen by induction that $i^*$ has value 1 in $G_p^i$.
Note that $f_i(k)=k$ whenever $k\equiv i \pmod p$.
The proof is by induction on $k$, with one base case and two inductive cases: one for $1< k\leq p$ and one for $k>p$.
It is easy to see that in $G_p^1$ state $1$ has value $\frac{1}{2}=1-\frac{1}{2}=1-2^{-f_1(1)}$ and state $j$ has value 0 for $j\neq 1$. That settles the base case.
For $1< k\leq p$: none of the successors of state $j$, for $j\neq k$, has changed value from $G_p^{k-2}$ to $G_p^{k-1}$. For state $k$, both of its successors have changed value: the value of state $(k-1)^*$ has become ${\rm val}(G_p^{k-1})_{(k-1)^*}=1$ and the value of state $k-1$ has become ${\rm val}(G_p^{k-1})_{k-1}=1-2^{-f_{k-1}(k-1)}$. The value of state $k$ is then \[{\rm val}(G_p^{k})_{k}=\frac{1+1-2^{-f_{k-1}(k-1)}}{2}=\frac{1+1-2^{-(k-1)}}{2}=1-2^{-(k-1)-1} =1-2^{-f_k(k)}.\]
For $k>p$: let $i=k \bmod p$ (read as $p$ when $p\mid k$). None of the successors of state $j$, for $j\neq i$, has changed value from $G_p^{k-2}$ to $G_p^{k-1}$. The value of state $i'=(i-1) \bmod p$ (read as $p$ when $i=1$) in iteration $k-1$ is ${\rm val}(G_p^{k-1})_{i'}=1-2^{-f_{i'}(k-1)}$. The value of state $i$ is then \[{\rm val}(G_p^{k})_{i}=\frac{1+1-2^{-f_{i'}(k-1)}}{2}=\frac{1+1-2^{-(k-1)}}{2}=1-2^{-(k-1)-1} =1-2^{-f_i(k)}.\]
The desired result follows.
\end{proof}
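Lemma \ref{value of gp} is easy to check numerically with finite-horizon value iteration. The sketch below is ours; it builds $G_p$ exactly as described above, reads the condition on $k'$ as a congruence modulo $p$ (so that $i=p$ corresponds to residue $0$), and compares the horizon-$k$ values of states $1,\dots,p$ against $1-2^{-f_i(k)}$.

```python
def gp_values(p, k):
    """Horizon-k values of G_p by value iteration; keys are 'bot' (the terminal
    state), ('star', j) for states j*, and ('chain', j) for states 1..p."""
    def star(j):                      # 0* denotes the terminal state
        return 'bot' if j == 0 else ('star', j)
    vals = {star(j): 0.0 for j in range(1, p)}
    vals.update({('chain', j): 0.0 for j in range(1, p + 1)})
    vals['bot'] = 1.0
    for _ in range(k):
        new = {'bot': 1.0}
        for j in range(1, p):         # state j* has (j-1)* as both successors
            new[star(j)] = vals[star(j - 1)]
        # state 1 moves to the terminal state or to state p
        new[('chain', 1)] = (vals['bot'] + vals[('chain', p)]) / 2
        for j in range(2, p + 1):     # state j averages (j-1)* and j-1
            new[('chain', j)] = (vals[star(j - 1)] + vals[('chain', j - 1)]) / 2
        vals = new
    return vals

def f_lemma(i, k, p):
    """f_i(k): the largest k' <= k with k' congruent to i mod p, or 0 if none."""
    return max([kp for kp in range(1, k + 1) if kp % p == i % p] + [0])
```

Since all the values involved are dyadic rationals, floating-point value iteration reproduces the closed form exactly for moderate horizons.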
The idea behind the construction of $F_k$ is that to determine which of states $1$ and $2$ has the larger value in $G^T_p$, for $p\geq 2$ and $T\geq 1$, one needs to know whether $T\!\!\mod p=1$.
Let $p_i$ be the $i$'th smallest prime number. The FMDP $F_k$ consists of a copy of $G_{p_i}$ for each $i\in \{1,\dots,k\}$; let the max state in the copy of $G_{p_i}$ be $m_i$.
There is an illustration of $F_2$ in Figure \ref{fig:f2}.
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.3,->,>=stealth',shorten >=1pt,auto,node distance=4.2cm*0.5,
semithick]
\tikzstyle{every state}=[fill=white,draw=black,text=black]
\tikzstyle{max}=[state,regular polygon,regular polygon sides=3, inner sep=-0.05cm]
\tikzstyle{min}=[max,regular polygon rotate=180]
\node[state,accepting] (goal) {$\perp$};
\node[state] (m1n2) [below right of=goal] {};
\node[state] (q2n2) [below of=m1n2] {};
\node[state] (q1n2) [right of=q2n2] {};
\node[max](maxn2)[below of=q2n2]{$m_1$};
\node[state] (m1n3) [below left of=goal] {};
\node[state] (m2n3) [below of=m1n3] {};
\node[state] (q3n3) [below of=m2n3] {};
\node[state] (q2n3) [left of=q3n3] {};
\node[state] (q1n3) [left of=q2n3] {};
\node[max](maxn3)[below of=q2n3]{$m_2$};
\path
(m1n2) edge [bend left] (goal)
(m1n2) edge [bend right] (goal)
(q2n2) edge (m1n2)
(q2n2) edge (q1n2)
(q1n2) edge [in=0,out=90] (goal)
(q1n2) edge [bend left] (q2n2)
(maxn2) edge (q1n2)
(maxn2) edge (q2n2)
(m1n3) edge [bend left] (goal)
(m1n3) edge [bend right] (goal)
(m2n3) edge [bend left] (m1n3)
(m2n3) edge [bend right] (m1n3)
(q3n3) edge (m2n3)
(q3n3) edge (q2n3)
(q2n3) edge [in=180,out=90] (m1n3)
(q2n3) edge (q1n3)
(q1n3) edge [in=180,out=90] (goal)
(q1n3) edge [bend right] (q3n3)
(maxn3) edge (q1n3)
(maxn3) edge (q2n3)
;
\end{tikzpicture}
\caption{The FMDP $F_2$. Circle vertices are the coin toss states. Triangle vertices are the max states. The vertex $\perp$ is the terminal state. }
\label{fig:f2}
\end{figure}
We now show that all optimal strategies for $F_k$ are counter-based strategies with a period determined by $k$. We then express the number of states in $F_k$ in terms of $k$, and finally combine these two lemmas to obtain our result.
\begin{lemma}\label{lem:bits needed for Fk}
Any optimal strategy $\sigma(k,T')$ in $F_k$ is a finite-memory counter-based strategy with period $P=\prod_{i\in \{1,\dots,k\}} p_i$, where $p_i$ is the $i$'th smallest prime number.
\end{lemma}
\begin{proof}
Let $i$ be some number in $\{1,\dots,k\}$. By Lemma \ref{value of gp}, the only optimal choice at $m_i$ for $T'>0$ is to use the action that goes to state $1$ in $G_{p_i}$ if $T'\!\!\mod p_i=1$ and otherwise the action that goes to state $2$ in $G_{p_i}$. Hence, by the Chinese remainder theorem, there are precisely $P$ steps between consecutive times at which an optimal strategy uses the action going to state $1$ in all the $m_i$'s simultaneously. That is, any optimal strategy must repeat the same action pattern at least every $P$ steps. Conversely, any optimal strategy repeats at most every $P$ steps, since $(T'+P)\!\!\mod p_i$ is 1 if and only if $T'\!\!\mod p_i$ is 1, again by Lemma \ref{value of gp}. A strategy that repeats every $P$ steps can be expressed as a counter-based strategy with period $P$, which also uses memory at most $P$.
\end{proof}
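The Chinese-remainder argument can be seen concretely by recording, for each step $T$, the vector of optimal choices at $m_1,\dots,m_k$ (whether $T \bmod p_i = 1$ for each $i$) and computing the minimal period of the resulting sequence. A small illustrative sketch (ours):

```python
def first_primes(k):
    """The k smallest primes, by trial division."""
    primes, n = [], 2
    while len(primes) < k:
        if all(n % q for q in primes):
            primes.append(n)
        n += 1
    return primes

def minimal_period(seq):
    """Smallest d with seq[t] == seq[t + d] for all valid t."""
    n = len(seq)
    for d in range(1, n):
        if all(seq[t] == seq[t + d] for t in range(n - d)):
            return d
    return n

def action_period(k, horizon):
    """Minimal period of the optimal action vector over the first `horizon` steps:
    at m_i, play the action toward state 1 exactly when T % p_i == 1."""
    primes = first_primes(k)
    seq = [tuple(T % p == 1 for p in primes) for T in range(horizon)]
    return minimal_period(seq), primes
```

The minimal period comes out to $\prod_{i\leq k} p_i$, as the lemma asserts, once the observation window is a few multiples of $P$ long.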
\begin{lemma}\label{lem:states in Fk}
The number of states in $F_k$ is $2 \sum_{i\in \{1,\dots,k\}} p_i$.
\end{lemma}
\begin{proof}
For any $i$, $G_{p_i}$ consists of $2p_i$ states. $F_k$ therefore consists of $2 \sum_{i\in \{1,\dots,k\}} p_i$ states.
\end{proof}
\begin{theorem}\label{thm:period}
There are FMDPs $G$, with $n$ states, where all optimal strategies are finite memory counter-based strategies with period $2^{\Omega(\sqrt{n\log n})}$.
\end{theorem}
\begin{proof}
Let $n$ be such that there exists a game $F_k$ with $n$ states. Note that for any number there is always a larger number $a$ such that $F_b$ has $a$ states for some $b$, so arbitrarily large such $n$ exist.
By Lemma \ref{lem:states in Fk}, we have that $n=2\sum_{i\in \{1,\dots,k\}} p_i$. By the prime number theorem (see e.g. Newman~\cite{Newman}) we have $p_i = O(i\log i)$, and hence $\sum_{i\in \{1,\dots,k\}} p_i = O(k^2\log k)$.
Let $f(x)=x^2\log x$ for $x> 1$. The function $f(x)$ is strictly monotone increasing and hence, has an inverse function. Let that function be $f^{-1}(y)$. We have that $f^{-1}(y)\geq \sqrt{\frac{y}{\log y}}$, for $y\geq 2$, because \begin{eqnarray*}
f^{-1}(y)\geq \sqrt{\frac{y}{\log y}} & \Leftarrow & f(f^{-1}(y)) \geq f(\sqrt{\frac{y}{\log y}})\\
& \Leftarrow & y \geq (\sqrt{\frac{y}{\log y}})^2\log (\sqrt{\frac{y}{\log y}})\\
& \Leftarrow & y \geq \frac{y}{\log y} \log (\sqrt{\frac{y}{\log y}})\\
& \Leftarrow & y \geq \frac{y}{\log y} \log y\\
& \Leftarrow & y \geq y\\
\end{eqnarray*}
Here, the first $\Leftarrow$ follows by taking $f^{-1}$ on both sides. The function $f^{-1}$ is strictly monotone increasing, because $f(x)$ was. The fourth $\Leftarrow$ follows from $y\geq \sqrt{\frac{y}{\log y}}$ for $y\geq 2$ and $\log$ being monotone increasing.
Therefore, let $g(k)=2\sum_{i\in \{1,\dots,k\}} p_i$, then $g^{-1}(n)=\Omega(\sqrt{\frac{n}{\log n}})$.
By Lemma \ref{lem:bits needed for Fk}, we have that the period is $\prod_{i\in \{1,\dots,k\}} p_i$. Trivially we have that
\[
\prod_{i\in \{1,\dots,k\}} p_i\geq \prod_{i\in \{1,\dots,k\}} i =k!= 2^{\Omega ( k \log k)}
\]
We now insert $\Omega(\sqrt{\frac{n}{\log n}})$ in place of $k$ and get
\[
\prod_{i\in \{1,\dots,k\}} p_i= 2^{\Omega\left(\sqrt{\frac{n}{\log n}}\,\log \sqrt{\frac{n}{\log n}}\right)}=2^{\Omega\left(\sqrt{\frac{n}{\log n}}\,(\log n - \log \log n)\right)}=2^{\Omega(\sqrt{n\log n})}.
\]
The result follows.
\end{proof}
\section{Conclusion}
In the present paper we have considered properties of finite-horizon Markov decision processes and simple stochastic games. The $\epsilon$-optimal strategies considered in Section \ref{sec:counter} indicate the hardness of playing such games with a short horizon, while the concept of period from Section \ref{sec:period} indicates the hardness of playing them with a long horizon. Along with our lower bound from Section~\ref{sec:period}, we conjecture the following:
\begin{conjecture}
Every FSSG has an optimal strategy that is a finite-memory counter-based strategy with period at most $2^n$.
\end{conjecture}
| {
"timestamp": "2012-09-18T02:04:38",
"yymm": "1209",
"arxiv_id": "1209.3617",
"language": "en",
"url": "https://arxiv.org/abs/1209.3617",
"abstract": "Markov decision processes (MDPs) and simple stochastic games (SSGs) provide a rich mathematical framework to study many important problems related to probabilistic systems. MDPs and SSGs with finite-horizon objectives, where the goal is to maximize the probability to reach a target state in a given finite time, is a classical and well-studied problem. In this work we consider the strategy complexity of finite-horizon MDPs and SSGs. We show that for all $\\epsilon>0$, the natural class of counter-based strategies require at most $\\log \\log (\\frac{1}{\\epsilon}) + n+1$ memory states, and memory of size $\\Omega(\\log \\log (\\frac{1}{\\epsilon}) + n)$ is required. Thus our bounds are asymptotically optimal. We then study the periodic property of optimal strategies, and show a sub-exponential lower bound on the period for optimal strategies.",
"subjects": "Computer Science and Game Theory (cs.GT)",
"title": "Strategy complexity of finite-horizon Markov decision processes and simple stochastic games",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES\n\n",
"lm_q1_score": 0.9828232869627774,
"lm_q2_score": 0.7217432182679957,
"lm_q1q2_score": 0.7093460421212449
} |
https://arxiv.org/abs/2006.14078 | Machine learning the real discriminant locus | Parameterized systems of polynomial equations arise in many applications in science and engineering with the real solutions describing, for example, equilibria of a dynamical system, linkages satisfying design constraints, and scene reconstruction in computer vision. Since different parameter values can have a different number of real solutions, the parameter space is decomposed into regions whose boundary forms the real discriminant locus. This article views locating the real discriminant locus as a supervised classification problem in machine learning where the goal is to determine classification boundaries over the parameter space, with the classes being the number of real solutions. For multidimensional parameter spaces, this article presents a novel sampling method which carefully samples the parameter space. At each sample point, homotopy continuation is used to obtain the number of real solutions to the corresponding polynomial system. Machine learning techniques including nearest neighbor and deep learning are used to efficiently approximate the real discriminant locus. One application of having learned the real discriminant locus is to develop a real homotopy method that only tracks the real solution paths unlike traditional methods which track all~complex~solution~paths. Examples show that the proposed approach can efficiently approximate complicated solution boundaries such as those arising from the equilibria of the Kuramoto model. | \section{Introduction}
Systems of polynomial equations are collections of multivariate nonlinear equations in which each equation is a multivariate polynomial. Such systems arise naturally in many areas of science and engineering, including chemistry, particle physics, string theory, mathematical biology, phylogenetics, control theory, robotics, power systems, and computer vision \cite{BHSW13, Cox20, CLO:98, CLO:07, SW05}.
In many of these applications, the coefficients of the equations depend upon one or more parameters
yielding parameterized systems of polynomial equations.
The solutions and the number of real solutions are
functions of the parameters. Investigating the solution structure
as a function of the parameters is typically more difficult
than solving the system for a given value of~the~parameters.
Due to the ubiquity of the problem, many methods
have been proposed to characterize the solution structure
over the parameter space. Classically, the discriminant
describes the boundary between regions where the solution structure
changes \cite{GKZ}. For example, the discriminant of
\begin{equation}\label{eq:Quadratic}
f(x;a,b,c) = ax^2 + bx + c
\end{equation}
is $D = b^2-4ac$.
Since real parameters and the number of real solutions are of most interest
in applications, we take the real discriminant locus
to be the boundary in the parameter space where the number of real solutions change.
For the quadratic expression in Eq.~\eqref{eq:Quadratic}, the solution set in~$\mathbb{R}^3$
of $D=0$ is the real discriminant locus which is the boundary
between the region in $\mathbb{R}^3$ with $D>0$ where $f = 0$ has two real solutions
and the region in $\mathbb{R}^3$ with $D<0$ where $f=0$ has no real~solutions.
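In the terminology used throughout this article, each parameter point $(a,b,c)$ receives as its label the number of real solutions, and the sign of $D$ determines that label. A minimal numerical sketch (ours; the tolerance used to decide that a computed root is real is an ad hoc choice):

```python
import numpy as np

def real_root_count(a, b, c, tol=1e-8):
    """Label the parameter point (a, b, c) by the number of real solutions
    of a x^2 + b x + c = 0 (assumes a != 0)."""
    roots = np.roots([a, b, c])
    return int(np.sum(np.abs(roots.imag) < tol))

# The two open regions of the real parameter space:
assert real_root_count(1.0, 0.0, -1.0) == 2   # D = 4 > 0: x^2 - 1
assert real_root_count(1.0, 0.0, 1.0) == 0    # D = -4 < 0: x^2 + 1
```

Away from the measure-zero set $D=0$, this numerical label agrees with the sign of the discriminant.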
Comprehensive Gr\"obner basis computations \cite{Weis:92}
can be used to symbolically compute the discriminant polynomial over the complex numbers
whose solution set is called the classical discriminant locus.
Since the discriminant actually defines the boundaries over the complex
parameter space, one can develop specialized methods over the real numbers.
Some examples include
cylindrical algebraic decomposition~ \cite{bpr:03,hanan2010stability,hernandez2011towards,DBLP:LazardR07,xia2007discoverer},
Brouwer degree \cite{conradi2017identifying}, and polyhedral methods \cite{bihan2018lower,giaroli2019regions}
which have been utilized for modest size systems. However, computational methods
which depend upon Gr\"obner bases or other symbolic computations suffer severely
from exponential complexity.
To mitigate these issues, global symbolic methods can be replaced by local, numerical approximations to determine the discriminant locus. For larger systems, numerical methods based on a form
of homotopy continuation \cite{allgower2012numerical,BHSW13,SW05} have been employed
in which one tracks the solution structure as the parameters
are varied continuously.
Several computational packages such as AUTO~\cite{doedel1981auto} and MATCONT \cite{dhooge2003matcont} employ such techniques
for parameterized differential equations.
To avoid sweeping directly into the real discriminant locus, a perturbed
sweeping approach was presented in \cite{harrington2016decomposing}.
In particular, when all complex solutions over a general parameter point can be computed,
homotopy continuation provides an approach to obtain global information
about the solutions which, for example, can be used to obtain
the number of real solutions at selected sample points
in the parameter space
\cite{Paramotopy,chandra2017locating,greene2013tumbling,he2013exploring,MartinezPedrera:2012rs}.
Our method aims to numerically approximate the real discriminant locus by viewing this as a classification problem in machine learning. The problem of approximating the
real discriminant locus is posed
as a problem of approximating the decision boundaries that separate data points
according to their labels, where the input features
are the parameters of the given parametric system and the target labels are the number of real solutions at the corresponding parameter values. Given a parameter point, homotopy continuation can be used to generate
the labels, i.e., compute the number of real solutions. A novel approach
for selecting sample points is developed
by leveraging domain knowledge obtained
from numerical algebraic geometry \cite{BHSW13,SW05}
to help guide the approximation of the decision boundaries.
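To make the classification viewpoint concrete on the quadratic from the Introduction (a toy stand-in for the homotopy-generated labels; everything below is our illustration, not the paper's pipeline), fix $a=1$, sample parameter points $(b,c)$, label each by its number of real roots, and let a simple nearest-neighbor rule approximate the decision boundary $b^2=4c$:

```python
import numpy as np

def label(b, c):
    """Number of real solutions of x^2 + b x + c = 0 at a generic point."""
    return 2 if b * b - 4 * c > 0 else 0

def nn_predict(train_pts, train_labels, query):
    """1-nearest-neighbor rule, the simplest of the classifiers considered."""
    d2 = np.sum((train_pts - query) ** 2, axis=1)
    return train_labels[np.argmin(d2)]

rng = np.random.default_rng(1)
train = rng.uniform(-2, 2, size=(2000, 2))          # sampled (b, c) points
train_y = np.array([label(b, c) for b, c in train]) # labels from root counts

test_pts = rng.uniform(-2, 2, size=(300, 2))
correct = sum(nn_predict(train, train_y, q) == label(*q) for q in test_pts)
accuracy = correct / len(test_pts)
```

With uniform random training data, errors concentrate in a thin band around the parabola $c=b^2/4$; the careful sampling scheme of Section~\ref{sec:sampling} is designed to place points near exactly that boundary.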
Although there has been a collection of papers,
e.g., \cite{Das2012,huang2002constrained,huang2004constructive,huang2001neural,huang2004neural,huang2018using,perantonis1998constrained},
attempting to solve polynomial equations and improving algorithms using
neural networks, the approach closest to the present work is~\cite{mourrain2006determining}.
This method uses a feed-forward neural network with one hidden layer to predict the
number of real solutions for univariate polynomials.
All sample points, including both training and testing data, are combinations of
integer coefficients in the parameter space.
Their results indicate that the ability of an artificial neural network to generalize
on the test sets is comparable to its performance on the training sets.
They employed several algorithms to train the network and concluded that, when the polynomial has high degree, the choice of training algorithm impacts the performance of the network.
Other applications of deep networks for analyzing polynomial equations
include \cite{andoni2014learningNeuralNet,andoni2014learning}
which investigate the effectiveness of deep networks to learn
a target function that is a low degree polynomial.
A neural network used as a pre-training method for finding the number of real
solutions, and then used to design a neural network-like model that computes
the real solutions of univariate polynomials in parallel, is provided in \cite{Das2012}.
Finally, \cite{breiding2018learning} employed machine learning algorithms to
learn the geometry and topology of the complex solution set of systems of
polynomial equations.
This manuscript provides a novel approach to analyzing the
real solution structure of parameterized polynomial equations
which are multivariate and depend upon many parameters.
The specific contributions are as follows:
\begin{enumerate}
\item Transform the problem of computing the real discriminant locus of
parameterized polynomial equations into a supervised classification problem in order to use machine learning constructs such as nearest neighbor and deep learning techniques;
\item Devise a novel sampling technique that leverages domain knowledge from
numerical algebraic geometry which can be thought of as a static active learning implementation where the desired training set is determined in advance;
\item Show that machine learning techniques can quickly approximate the real discriminant loci even when they contain cusps and other singular regions which, in turn, provides a decomposition of the parameter space into regions where the number of real solutions remains constant;
\item Design a real homotopy method that utilizes an approximation of the real discriminant locus to track only real paths and computes only real solutions, thus improving efficiency.
\end{enumerate}
The rest of the paper is organized as follows.
Section~\ref{sec:computational_methods}
provides background information on numerical algebraic geometry,
homotopy continuation, and parameterized systems.
Similarly, Section~\ref{sec:machine_learning}
provides an overview of the machine learning techniques
utilized: nearest neighbor and deep learning.
The novel sampling scheme
that leverages domain knowledge from numerical algebraic geometry is presented in Section~\ref{sec:sampling}.
Section~\ref{sec:results} applies the proposed approach to several examples. A real homotopy method is outlined in
Section~\ref{sec:realHomotopy}, which utilizes the learned real discriminant locus
to track only real solution paths.
Finally, a conclusion is provided in Section~\ref{sec:conclusion}.
\section{Numerical algebraic geometry}\label{sec:computational_methods}
The following provides a short description of
two topics in numerical algebraic geometry, namely
parameter homotopies and pseudowitness point sets,
that will be used to learn the real discriminant locus.
More details regarding numerical algebraic geometry are provided in \cite{BHSW13,SW05}.
\subsection{Parameter homotopies}\label{sec:parameterHomotopies}
For simplicity, we consider parameterized polynomial systems of the form
\begin{equation}\label{eq:parameterSystem}
f(x;p) = f(x_1,\dots,x_n;p_1,\dots,p_k) = \left[\begin{array}{c} f_1(x_1,\dots,x_n;p_1,\dots,p_k) \\
\vdots \\ f_n(x_1,\dots,x_n;p_1,\dots,p_k)
\end{array}\right]
\end{equation}
such that, for a generic $p^*\in\mathbb{C}^k$, $f(x;p^*) = 0$ has finitely many solutions in $\mathbb{C}^n$, say $d$,
all of which are nonsingular. A solution $x^*$ of $f(x;p^*)=0$
is nonsingular if $J_x f(x^*;p^*)$ is invertible where
$J_x f(x;p)$ is the Jacobian matrix of $f$ with respect to $x$.
See \cite{HauensteinRegan} and the references therein
for reducing overdetermined parameterized systems to
well-constrained parameterized systems adhering to Eq.~\eqref{eq:parameterSystem}.
With this setup, the real parameter space $\mathbb{R}^k$ contains open subsets where the number
of real solutions to $f(x;p) = 0$ is constant and the boundaries
of these open subsets form the real discriminant locus.
\begin{example}\label{ex:Quadratic}
The parameterized quadratic described by Eq.~\eqref{eq:Quadratic}
generically has $d=2$ solutions in $\mathbb{C}$.
As mentioned in the Introduction, the real parameter space $\mathbb{R}^3$
contains two open subsets: $D>0$ and $D<0$ where $D = b^2-4ac$ in which the number
of real solutions is constant, namely $2$ and $0$. The real discriminant
locus is the set $D=0$ in $\mathbb{R}^3$.
\end{example}
The classical discriminant locus consists of the parameter points in $\mathbb{C}^k$
where $f(x;p)=0$ does not have $d$ solutions.
Since $f(x;p)$ is well-constrained, the classical discriminant locus
is either empty or has codimension $1$ in $\mathbb{C}^k$, i.e., it is a hypersurface.
This will be exploited in Section~\ref{sec:sampling} to generate sample
points, since the real discriminant locus is contained in the classical discriminant locus.
The following example illustrates this relationship and shows that the real discriminant locus
can have smaller dimension in $\mathbb{R}^k$.
\begin{example}\label{ex:ParameterDisc}
Consider the following parameterized polynomial system from \cite[Ex~2.1]{RealMonodromy}
$$f(x;p) = \left[\begin{array}{c}
x_1^2-x_2^2-p_1 \\ 2x_1x_2-p_2
\end{array}\right]$$
which generically has $d=4$ solutions. In fact, $p_1^2+p_2^2=0$ in $\mathbb{C}^2$ is the classical
discriminant locus. Thus, the classical discriminant locus is a curve in $\mathbb{C}^2$,
but the only point in $\mathbb{R}^2$ on this complex hypersurface is the origin.
Moreover, for all $p\in\mathbb{R}^2\setminus\{(0,0)\}$,
$f(x;p) = 0$ has $2$ real solutions showing that the real discriminant locus in $\mathbb{R}^2$
is indeed $\{(0,0)\}$.
\end{example}
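This count is easy to reproduce numerically: for $p_2\neq 0$, eliminating $x_2=p_2/(2x_1)$ gives the quartic $4x_1^4-4p_1x_1^2-p_2^2=0$ in $x_1$, and for random real $(p_1,p_2)$ exactly two of the four complex solutions turn out to be real. A sketch (our elimination, with an ad hoc tolerance for deciding realness):

```python
import numpy as np

def real_solution_count(p1, p2, tol=1e-8):
    """Number of real solutions of x1^2 - x2^2 = p1, 2*x1*x2 = p2, for p2 != 0.
    Eliminating x2 = p2/(2*x1) yields 4*x1^4 - 4*p1*x1^2 - p2^2 = 0."""
    x1_roots = np.roots([4, 0, -4 * p1, 0, -p2 ** 2])
    count = 0
    for x1 in x1_roots:
        x2 = p2 / (2 * x1)            # nonzero constant term, so x1 != 0
        if abs(x1.imag) < tol and abs(x2.imag) < tol:
            count += 1
    return count
```

The quartic factors through $x_1^2=(p_1\pm\sqrt{p_1^2+p_2^2})/2$; only the plus branch gives real $x_1$, hence two real solutions.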
Given $p\in\mathbb{C}^k$ outside of the classical discriminant locus,
the goal is to compute the solutions to $f(x;p) = 0$
which will be accomplished using a parameter homotopy \cite{morgan1989coefficient}.
Suppose that one knows $p^*\in\mathbb{C}^k$ and a set $S\subset\mathbb{C}^n$
consisting of the $d$ solutions to $f(x;p^*) =0$. Therefore,
one has the parameter homotopy
\begin{equation}\label{eq:ParameterHomotopy}
H(x,t) = f(x;\tau(t)\cdot p^* + (1-\tau(t))\cdot p) = 0
\end{equation}
where $t\in[0,1]$, $\gamma\in\mathbb{C}$, and
$$\tau(t) = \frac{\gamma t}{1+(\gamma-1)t}.$$
In particular, $H(x,1) = f(x;p^*)=0$ has known solutions $S$
and one aims to compute the solutions to $H(x,0) = f(x;p)=0$.
For generic values of the constant $\gamma\in\mathbb{C}$, the
arc $\tau(t)\cdot p^* + (1-\tau(t))\cdot p$
for $t\in[0,1]$ that connects $p^*$ to $p$ avoids the classical discriminant locus
so that $H(x,t) =0$ defines $d$ solution paths
for $t\in[0,1]$ which start at the $d$ points in $S$
and end at the $d$ solutions of $f(x;p)=0$.
The paths can be traversed using a variety of numerical methods \cite{BHSW13}
and a certified count on both the number of real
and nonreal solutions can be obtained using \cite{alphaCertified}.
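A stripped-down tracker conveys the idea. The sketch below is ours, using an Euler predictor and a Newton corrector with a fixed number of uniform steps, with no adaptive step control or certification; it tracks the real solution $(1,0)$ of the system from Example~\ref{ex:ParameterDisc} from $p^*=(1,0)$ to a target parameter $p$:

```python
import numpy as np

# f(x; p) = [x1^2 - x2^2 - p1, 2*x1*x2 - p2]
def f(x, p):
    return np.array([x[0]**2 - x[1]**2 - p[0], 2*x[0]*x[1] - p[1]])

def Jx(x, p):
    """Jacobian of f with respect to x."""
    return np.array([[2*x[0], -2*x[1]], [2*x[1], 2*x[0]]])

def track(x, p_start, p_target, steps=200, gamma=0.6 + 0.8j):
    """Parameter homotopy H(x,t) = f(x; tau(t) p* + (1-tau(t)) p) with the
    gamma-trick arc tau(t) = gamma*t / (1 + (gamma-1)*t)."""
    def q(t):
        tau = gamma * t / (1 + (gamma - 1) * t)
        return tau * p_start + (1 - tau) * p_target
    x = np.asarray(x, dtype=complex)
    ts = np.linspace(1.0, 0.0, steps + 1)
    for t0, t1 in zip(ts[:-1], ts[1:]):
        # Euler predictor: dx = -Jx^{-1} (dH/dp) dq, and here dH/dp = -I
        x = x + np.linalg.solve(Jx(x, q(t0)), q(t1) - q(t0))
        for _ in range(5):            # Newton corrector at t1
            x = x - np.linalg.solve(Jx(x, q(t1)), f(x, q(t1)))
    return x
```

With $z=x_1+ix_2$ the system is equivalent to $z^2=p_1+ip_2$, so the tracked endpoint can be compared against a directly computed complex square root; the complex $\gamma$ keeps the arc off the discriminant locus $p_1^2+p_2^2=0$.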
When $p\in\mathbb{R}^k$, the number of complex solutions $d$ can be significantly larger than the
number of real solutions to $f(x;p)=0$.
Thus, Section~\ref{sec:realHomotopy} considers a real parameter homotopy
aiming to only track real solution paths by trying to stay within
each open subset of the parameter space where the number of real solutions~is~constant.
In particular, if the real discriminant locus has smaller dimension,
such as in Ex.~\ref{ex:ParameterDisc}, this is beneficial since it becomes
easier to avoid intersecting the real discriminant locus.
Therefore, our learning of the discriminant locus in Section~\ref{sec:machine_learning}
and sampling scheme in Section~\ref{sec:sampling}
is only concerned with the codimension $1$ boundaries in~$\mathbb{R}^k$.
In order to utilize a parameter homotopy, one first needs to compute the solution set
$S$ to $f(x;p^*)=0$. This can be accomplished by treating the polynomial system $f(x;p^*)$
as a member of another parameterized family and performing a parameter homotopy in
that family. Classical examples include constructing a parameterized family
based on the degrees of $f_i$, the multihomogeneous structure of $f$,
and the monomial structure of $f$, e.g., see \cite[Chap.~8]{SW05}.
\subsection{Pseudowitness point sets}\label{sec:WitnessSets}
The key to the sampling method in Section~\ref{sec:sampling}
is to utilize domain knowledge from the classical discriminant locus
to select sample points to guide the learning of the real discriminant locus.
Rather than computing a polynomial defining the classical discriminant locus,
which can often be a computationally challenging problem,
the method in Section~\ref{sec:sampling} computes a
pseudowitness point set~\cite{HSprojection,HSmembership}
by intersecting with a real line.
For $f(x;p)$ as in Eq.~\eqref{eq:parameterSystem}, consider
the system
$$F(x,p) = \left[\begin{array}{c} f(x;p) \\ \det J_x f(x;p) \end{array}\right].$$
Suppose that $V\subset\mathbb{C}^{n+k}$ is the solution set of $F(x,p) = 0$,
$\pi(x,p) = p$, and $\mathcal{L}\subset\mathbb{C}^k$ is a general line.
Then, the pseudowitness point set for $V$ with respect to
the projection map $\pi$ and line $\mathcal{L}$
is $\pi(V)\cap\mathcal{L}$. In fact, the degree of the classical discriminant locus
is the number of points in $\pi(V)\cap\mathcal{L}$.
Moreover, by using a parameter homotopy, one can deform the line $\mathcal{L}$
to compute a pseudowitness point set for other lines in $\mathbb{C}^k$
providing a method for sampling points on the classical discriminant locus.
Instead of $\det J_x f(x;p)$, one may also use a null space
approach $J_x f(x;p)\cdot w$ for $w\in\mathbb{P}^{n-1}$ \cite{BHPS10}.
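For the quadratic family $f(x;b,c)=x^2+bx+c$, eliminating $x$ from $F(x,p)=0$ gives the classical discriminant $b^2-4c=0$, so the pseudowitness point set $\pi(V)\cap\mathcal{L}$ can be written down explicitly. The following hypothetical sketch intersects this discriminant with a line; in general, the intersection must be computed numerically via homotopy continuation.

```python
import cmath

def pseudowitness_on_line(p_star, v):
    """For f(x;b,c) = x^2 + b*x + c, compute pi(V) intersected with L, where
    V = V(f, df/dx) projects onto the discriminant {b^2 - 4c = 0} and
    L is the line p(lam) = p_star + lam*v (assumed generic, with v[0] != 0)."""
    b0, c0 = p_star
    vb, vc = v
    # Substitute the line into b^2 - 4c = 0: a quadratic in lam.
    A = vb * vb
    B = 2 * b0 * vb - 4 * vc
    C = b0 * b0 - 4 * c0
    root = cmath.sqrt(B * B - 4 * A * C)
    lams = [(-B + root) / (2 * A), (-B - root) / (2 * A)]
    return [(b0 + lam * vb, c0 + lam * vc) for lam in lams]
```

The two intersection points reflect that the classical discriminant locus has degree $2$ in this family.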
\section{Machine learning}\label{sec:machine_learning}
Parameter homotopies discussed in Section~\ref{sec:parameterHomotopies}
provide a means for counting the number of real solutions
corresponding to a given parameter value. This creates labels for training
data which, with machine learning techniques, can be used to make predictions
about previously unseen parameter points.
This setup follows a supervised learning paradigm in machine learning
since the labels are known for training data.
Moreover, approximating the real discriminant locus is equivalent to approximating
the decision boundaries between different classes.
The following describes two machine learning techniques, namely
$K$-nearest neighbors
and feedforward neural networks (popularly known as deep neural networks).
\subsection{\texorpdfstring{$K$}{K}-nearest neighbors}\label{subsec:knn}
The underlying premise of a nearest neighbor classification algorithm
is that the class to which a previously unseen data sample belongs
can be inferred from the class to which the most similar samples in the
training set belong.
In our context, similarity will be measured in the form of the Euclidean distance
using $K$ samples in the training set nearest to the test sample
thereby yielding the $K$-nearest neighbors.
The label assigned to the previously unseen data sample
is simply the class to which the majority of the $K$-nearest neighbors belongs.
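A hypothetical minimal implementation of this voting rule (Section~\ref{sec:results} instead uses an off-the-shelf implementation) can be sketched as:

```python
from collections import Counter
import math

def knn_classify(train, query, K=1):
    """Assign `query` the majority label among its K nearest training samples.
    `train` is a list of ((coordinates...), label) pairs; similarity is
    measured by Euclidean distance, as in the text."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))[:K]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```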
In addition to being easy to implement, a $1$-nearest neighbor
classification algorithm has properties that are desirable for our problem.
Recall that the Bayes error rate is the lowest misclassification rate achievable
by any classifier on the associated data~\cite{Fukunaga:1990:ISP:92131,546912}.
Since the labels are deterministic and the classes do not overlap for our problem, the Bayes error rate is equal to 0.
This is summarized in the following theorem.
\begin{theorem}\label{thm:1NN}
Provided the parameter space is sampled densely enough,
no other classifier will outperform a $1$-nearest neighbor
classification algorithm for determining
the number of real solutions associated with a given parameter point.
\end{theorem}
\begin{proof}
As the number of training samples tends to infinity, the error rate of a
$1$-nearest neighbor classifier is at most twice the
Bayes error rate~\cite{1053964,Ripley:1995:PRN:546466},
while no classifier can achieve an error rate below the Bayes error rate.
Since, in this case, the Bayes error rate is 0 due to the non-overlapping nature of the classes,
the asymptotic error rate of the $1$-nearest neighbor classifier is also 0,
so no other classifier can improve upon its asymptotic behavior.
\end{proof}
Clearly, Theorem~\ref{thm:1NN} has significant practical limitations since both the
complexity and the storage requirements of naive implementations, i.e., non-tree-based methods,
for a $1$-nearest neighbor classification algorithm
are $\mathcal{O}(k\ell)$ when the parameter space is $\mathbb{R}^k$
and $\ell$ is the cardinality of the training set~\cite{Weber:1998:QAP:645924.671192}.
Therefore, implementing a truly optimal version would be infeasible.
One approach to partially overcome these strict computational requirements
is by implementing a sampling technique that utilizes
domain knowledge as described in Section~\ref{sec:sampling}
which can be viewed as a form of selective sampling~\cite{pmlr-v23-dasgupta12,Lindenbaum:1999:SSN:315149.315323}, a type of active learning~\cite{Aggarwal_chapter22,settles2009active}.
This enables us to ameliorate the impact of the trade-off between the number of samples
stored and algorithmic~performance.
Another approach for improving performance
is to perform preliminary computations on the training data.
One such method is to train a neural network as discussed next.
\subsection{Deep networks}
Backed by the universal approximation theorem~\cite{cybenko1989approximation,hornik1989multilayer},
deep learning techniques \cite{bengio2015deep,lecun2015deep} have garnered significant
popularity in recent times based on success in a wide array of applications.
In particular, the feedforward neural network, i.e., a multi-layer structure of
compositions of activation functions,
has been shown to be a universal approximator
for any mildly constrained target function provided that the network parameters (or weights) and the multilayer structure are chosen appropriately~\cite{cybenko1989approximation,hornik1989multilayer}.
These layers of function compositions form the multilayer structure of a deep network,
where the depth refers to the number of composition levels.
A practical way to obtain a sensible model and its corresponding weights is to start with a large architecture (as a rule of thumb, as many weights as the number of training data points)
and apply an optimization routine, e.g., stochastic gradient descent method,
to achieve numerical values of the weights which best approximate the underlying function.
Since overfitting based on noise in the training data typically results
in poor generalization abilities of the network to classify unseen data and is usually associated with excessive network capacity, regularization techniques are often implemented~\cite{Goodfellow-et-al-2016}. Commonly used regularization techniques include $L_1$ (Lasso) and $L_2$ (Ridge) regularization, dropout, and early stopping \cite{bengio2015deep,bishop2006pattern}.
We adopt a strategy that goes against this widely accepted principle.
The reason for this is that we know {\em a priori} that the
training data originated from counting the number of real solutions to a
parameterized system of polynomial equations, which can be certifiably computed
as discussed in Section~\ref{sec:parameterHomotopies}.
The benefit of knowing the provenance of the data is the awareness that the data in question will not be affected by noise. Therefore, in order to closely approximate the underlying structure
from data involving no noise, we deliberately chose not to regularize~our~models.
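As a hypothetical minimal illustration of this unregularized setup, the following numpy sketch trains a small feedforward network (a single hidden $\tanh$ layer rather than the architectures of Section~\ref{sec:results}) by full-batch gradient descent on noise-free labels given by the sign of the quadratic discriminant $b^2-4c$; the sizes, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise-free labels: x^2 + b*x + c has 2 real roots iff b^2 - 4c > 0.
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 - 4 * X[:, 1] > 0).astype(int)

# One hidden layer of 20 tanh units, 2-class softmax output, no regularization.
W1 = rng.normal(0, 0.5, (2, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, (20, 2)); b2 = np.zeros(2)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    Z = H @ W2 + b2
    Z = Z - Z.max(axis=1, keepdims=True)       # numerically stable softmax
    P = np.exp(Z)
    return H, P / P.sum(axis=1, keepdims=True)

def cross_entropy(P):
    return float(-np.log(P[np.arange(len(y)), y] + 1e-12).mean())

initial_loss = cross_entropy(forward(X)[1])
lr = 0.2
for _ in range(2000):                           # full-batch gradient descent
    H, P = forward(X)
    G = P.copy(); G[np.arange(len(y)), y] -= 1; G /= len(y)   # dL/dZ
    dW2, db2 = H.T @ G, G.sum(0)
    GH = (G @ W2.T) * (1 - H ** 2)              # backpropagate through tanh
    dW1, db1 = X.T @ GH, GH.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

final_loss = cross_entropy(forward(X)[1])
accuracy = float((forward(X)[1].argmax(1) == y).mean())
```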
\section{Sampling method}\label{sec:sampling}
Given a compact subset of the parameter space $\mathbb{R}^k$, one approach to generate sample points is to
randomly, e.g., uniformly, select a parameter value and use a parameter homotopy
to count the number of real solutions.
Such an approach has been applied
to a variety of problems, e.g.,~\cite{Paramotopy,alphaCertified,ParameterGeography}.
With the aim of approximating the real discriminant
locus, i.e., the classification boundaries,
the following method uses domain knowledge regarding the
classical discriminant locus to provide
sample points near the boundaries to guide
the learning of the boundaries.
For a parameterized polynomial system $f(x;p)$
as in Eq.~\eqref{eq:parameterSystem}, we start
with parameter homotopies for solving $f=0$ (see Section~\ref{sec:parameterHomotopies})
and computing a pseudowitness point set
for the classical discriminant locus (see Section~\ref{sec:WitnessSets}).
The general framework of the sampling method
starts with a randomly
selected parameter value $p^*\in\mathbb{R}^k$,
e.g., uniformly sampled in a compact subset $\Omega$
of the parameter space $\mathbb{R}^k$.
For simplicity, we assume that $\Omega$ is a rectangular
box. The parameter homotopy for $f=0$
is used to count the number of real solutions
to $f(x;p^*)=0$ thereby obtaining the label for $p^*$.
The key addition is to then
select a random direction $v^*$ uniformly in $\mathbb{S}^{k-1}$,
the unit sphere in $\mathbb{R}^k$.
Let $\mathcal{L}^*\subset\mathbb{C}^k$ be the line parameterized
by $p^*+\lambda\cdot v^*$ for $\lambda\in\mathbb{C}$. Then,
the parameter homotopy for computing a pseudowitness
point set for the classical discriminant locus
is used to compute the real points
in the corresponding pseudowitness point set along $\mathcal{L}^*$
inside of $\Omega$,
say $p_1=p^*+\lambda_1\cdot v^*,\dots,p_\ell=p^*+\lambda_\ell\cdot v^*$.
Without loss of generality, we can assume $\lambda_1<\lambda_2<\cdots<\lambda_\ell$.
Compute $\lambda_0$ and $\lambda_{\ell+1}$
such that $\lambda_0 < \lambda_1 < \lambda_\ell < \lambda_{\ell+1}$ where
$p_0=p^*+\lambda_0\cdot v^*$ and $p_{\ell+1}=p^*+\lambda_{\ell+1}\cdot v^*$
are the intersection points of $\mathcal{L}^*$ with
the boundary of $\Omega$.
Along $\mathcal{L}^*$, the classical discriminant locus
yields that the number of real solutions
is constant on the intervals $(p_i,p_{i+1})$
contained in $\mathcal{L}^*$ for $i=0,\dots,\ell$.
Hence, the next step
is to determine the number of real solutions
associated with each interval $(p_i,p_{i+1})$.
This is accomplished by
selecting the midpoint of each interval,
namely $m_i = p^*+(\lambda_i+\delta_i/2)\cdot v^*$
for $i=0,\dots,\ell$
and $\delta_{i}=\lambda_{i+1}-\lambda_i$.
The parameter homotopy for $f=0$ is used
to count the number of real solutions of $f(x;m_i)=0$.
Our sampling scheme takes the midpoints $m_i$
of each interval, which we call ``near center'' points
in the corresponding cell. We add
``near boundary'' points as follows.
Given $\alpha > 0$, the near boundary points
are $b_{i,f} = p^*+(\lambda_i+\Delta_i^f)\cdot v^*$
and $b_{i,b} = p^*+(\lambda_i-\Delta_i^b)\cdot v^*$
for $i=1,\dots,\ell$
where $\Delta_i^f = \min\{\alpha,\delta_i/20\}$
and $\Delta_i^b = \min\{\alpha,\delta_{i-1}/20\}$.
Since $b_{i,f}\in(p_i,p_{i+1})$
and $b_{i,b}\in(p_{i-1},p_i)$, the number of real solutions
of $f(x;b_{i,f})=0$
and $f(x;b_{i,b})=0$ are known from the computation above.
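The construction of these sample points along $\mathcal{L}^*$ can be sketched as follows (a hypothetical minimal implementation of the formulas above, taking the sorted $\lambda_i$ and the box-boundary values $\lambda_0$, $\lambda_{\ell+1}$ as input):

```python
def sample_points_on_line(p_star, v, lams, lam_lo, lam_hi, alpha=0.05):
    """Return the near center and near boundary parameter points along the
    line p(lam) = p_star + lam*v, where `lams` holds the discriminant
    intersections lambda_1 < ... < lambda_ell, and lam_lo, lam_hi are the
    box-boundary values lambda_0, lambda_{ell+1}."""
    pt = lambda lam: tuple(p + lam * w for p, w in zip(p_star, v))
    all_lams = [lam_lo] + sorted(lams) + [lam_hi]
    # near center points: midpoints m_i = lambda_i + delta_i/2 of each interval
    centers = [pt((all_lams[i] + all_lams[i + 1]) / 2)
               for i in range(len(all_lams) - 1)]
    # near boundary points: forward/backward offsets min{alpha, delta/20}
    boundary = []
    for i in range(1, len(all_lams) - 1):
        d_f = min(alpha, (all_lams[i + 1] - all_lams[i]) / 20)
        d_b = min(alpha, (all_lams[i] - all_lams[i - 1]) / 20)
        boundary += [pt(all_lams[i] + d_f), pt(all_lams[i] - d_b)]
    return centers, boundary
```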
The aim of the near center points is
to provide a parameter point sufficiently
in the interior of each region of $\mathbb{R}^k$
with a constant number of real solutions.
The aim of the near boundary points is
to help learn the boundary by providing
points on either side of the boundary.
Of course, one could also explicitly
force the learned boundary to pass
through the sampled boundary points.
However, these boundary points are not utilized in
Section~\ref{sec:results} since the
near boundary points both provide interior
points of the corresponding regions
and guide the learning of the boundary.
In total, our sampling scheme utilized
in Section~\ref{sec:results}
provides three different
types of data points: uniform points, near center points, and near boundary points.
Figure~\ref{fig:sampling} provides an illustration
of these point categories based on a uniformly
selected sample point (star) along a randomly selected line $\mathcal{L}^*$ (dotted).
The boundary points (circles),
near center points (triangles), and
near boundary points (diamonds) are also shown.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.3]{SamplingScheme_new.png}
\caption{A visual representation of the sampling scheme, where the star represents the uniform random sample point, circles are points on the boundary, midpoints are marked by triangles, and near boundary points as diamonds. The points are color coded based on the number of real solutions.}
\label{fig:sampling}
\end{figure}
\section{Computational setup and results}\label{sec:results}
The sampling method in Section~\ref{sec:sampling}
utilizes domain knowledge about the location of the boundary
to provide carefully chosen
sample points to guide the learning of the boundary
which is demonstrated in the following four examples:
two warm-up examples utilizing a quadratic
and cubic followed by two examples involving
the Kuramoto model \cite{acebron2005kuramoto,dorfler2014synchronization,strogatz2000kuramoto}.
The data sets
used for training and testing in these examples were
generated using the sampling scheme and are summarized in
Table~\ref{tab:dataSet_sizes}.
\begin{table}[htbp]
\centering
\begin{tabular}{l|c|c|c|c}
\toprule
& \multicolumn{1}{l|}{Quadratic} & \multicolumn{1}{l|}{Cubic} & \multicolumn{1}{l|}{Kuramoto $N=3$} & \multicolumn{1}{l}{Kuramoto $N=4$} \\
\midrule
Uniform & 10,000 & 10,000 & 972 & 8,995 \\
\midrule
Uniform (large) & \textbf{---} & \textbf{---} & 8,000 & \textbf{---} \\
\midrule
NearBoundary & 12,934 & 12,022 & 5,192 & 54,040 \\
\midrule
NearBoundary+NearCenter & 25,860 & 22,036 & 8,440 & 78,823 \\
\bottomrule
\end{tabular}%
\caption{Number of data points in each data set used for training/testing for each example.}
\label{tab:dataSet_sizes}%
\end{table}%
With these data sets, the one nearest neighbor (1-NN)
classification computations were based on
\textit{KNeighborsClassifier} in {\tt scikit-learn} and
performed on a laptop with a 2.50 GHz Intel processor
and 12 GB RAM.
Additionally, a feedforward network
was utilized with computations performed
on a laptop with a six-core Intel i7 2.60 GHz processor,
32~GB RAM, and Nvidia Quadro P2000 GPU with 4 GB of video RAM.
The code was implemented in {\tt PyTorch} which
leveraged CUDA acceleration. Multi-layer, fully connected feedforward networks with ReLU activation functions
\cite{Hahnloser:2003:PFS:762330.762336}
were used. A loss function based on multi-class cross-entropy without regularization was optimized during the learning process utilizing an adaptive~learning~rate~scheme.
\subsection{Quadratic}\label{sec:quadratic}
As a first example, consider the quadratic
$f(x;b,c) = x^2 + bx + c = 0$
with parameters $b$ and $c$. This toy
system provides a demonstration of the method
with the parameter space restricted to $[-1,1]^2$.
Of course, the boundary between $f$ having
$2$ real solutions and $0$ real solutions
is defined by $b^2 - 4c = 0$.
Figure~\ref{fig:quadratic}(a) plots
uniformly selected data in $[-1,1]^2$
with Figure~\ref{fig:quadratic}(b)
showing the near boundary~data.
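Since the boundary is known in closed form here, labeled data can be generated directly; the following hypothetical sketch stands in for the parameter homotopy that produces the labels in practice.

```python
import random

def label(b, c):
    """Number of real solutions of x^2 + b*x + c = 0 away from the
    discriminant locus: 2 if b^2 - 4c > 0, else 0."""
    return 2 if b * b - 4 * c > 0 else 0

def uniform_labeled_sample(n, seed=0):
    """Uniformly sample n labeled parameter points in [-1,1]^2."""
    rng = random.Random(seed)
    pts = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    return [((b, c), label(b, c)) for b, c in pts]
```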
\begin{figure}[h!]
\centering
\includegraphics[scale=0.18]{Uniform_quadratic.png} \includegraphics[scale=0.18]{Perturbed_quadratic.png}
\includegraphics[scale=0.38]{QuadPaper.png}
\\
(a) \hspace{1.8in} (b) \hspace{1.8in} (c)
\caption{(a) Uniform random sampled data, (b) near boundary data, and (c) decision boundary from neural network trained on data from~(b)
for $f(x;b,c) = x^2 + bx + c$.}
\label{fig:quadratic}
\end{figure}
Table~\ref{tab:knn-quadratic} summarizes
the results of using various training and testing
data sets with a
one nearest neighbor (1-NN) classification method.
This shows that including sample point data
near the boundary for training produces classification results which are highly accurate.
However, the accuracy of classifying points
near the boundary
severely declines when training with uniform data.
\begin{table}[!htbp]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Testing Data} \\
\cmidrule{2-4} & \multicolumn{1}{c|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4}
\multicolumn{1}{c|}{Training Data}
& \multicolumn{1}{c|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.5392 & 0.7678 \\
\midrule
NearBoundary & 0.9999 & 1 & 1 \\
\midrule
NearBoundary+NearCenter & 0.9999 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of 1-NN method on the univariate quadratic $f(x;b,c)=x^2+bx+c$.}
\label{tab:knn-quadratic}
\end{table}%
A feedforward, fully connected neural network with three hidden layers each with 20 neurons was trained on the data from Figure~\ref{fig:quadratic}(b).
We employed $\tanh$ as the activation function for the neurons and used a 2-neuron softmax layer as the output layer -- use of a single output neuron with a sigmoid activation would have been equivalent. The network was trained to separate the data on either side of the discriminant locus into two different classes. A binary cross-entropy loss without regularization was used to train the network,
together with a variable learning rate scheme.
The network was trained until it was able to correctly classify every sample in the training set.
Once trained, testing data was used where each of the data points was fed to the neural network and the classification decision recorded.
Figure~\ref{fig:quadratic}(c) illustrates the decision boundary learned with training data
shown in Figure~\ref{fig:quadratic}(b).
This plot was obtained by densely and uniformly sampling the parameter region $[-1,1]^2$,
feeding the resulting samples to the trained network, and color-coding the response of the network for each of the input values in the densely sampled region.
A summary of the quantitative results with various training and testing data sets is presented in Table~\ref{tab:nn-quadratic}. This shows
that the network was able to learn
the real discriminant locus.
Moreover, it behaves well even in regions of the parameter space that were not represented in the training data, namely those located away from the boundary.
Similar to the 1-NN results,
the neural network behaves best
when data near the boundary is used
within the training data set.
\begin{table}[!htbp]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4}
\multicolumn{1}{c|}{Training Data}
& \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.9559 & 0.9779 \\
\midrule
NearBoundary & 1 & 1 & 1 \\
\midrule
NearBoundary+NearCenter & 1 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of feedforward neural network on the univariate quadratic $f(x;b,c) = x^2 + bx + c$.}
\label{tab:nn-quadratic}%
\end{table}%
\subsection{Cubic}\label{sec:cubic}
Since the real discriminant locus
for the quadratic in Section~\ref{sec:quadratic}
is smooth, we increase the degree to obtain
a boundary with a cusp.
In particular, we consider
the cubic $f(x;b,c) = x^3+bx+c=0$.
The boundary between $f$ having $3$ real solutions
and $1$ real solution is
defined by $4 b^3 + 27 c^2 = 0$
which has a cusp at the origin.
Figure~\ref{fig:cubic}(a) plots uniformly
selected data in $[-1,1]^2$ with
Figure~\ref{fig:cubic}(b) showing
the near boundary data zoomed in near the cusp.
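The labels for this example can again be checked against the closed-form boundary; the following hypothetical sketch counts real roots numerically and should return $3$ when $4b^3+27c^2<0$ and $1$ when $4b^3+27c^2>0$.

```python
import numpy as np

def count_real_roots(coeffs, tol=1e-8):
    """Count roots of a univariate polynomial (numpy convention: highest
    degree coefficient first) whose imaginary part is below `tol`."""
    return int(sum(abs(r.imag) < tol for r in np.roots(coeffs)))

def cubic_label(b, c):
    """Number of real roots of x^3 + b*x + c, computed numerically."""
    return count_real_roots([1.0, 0.0, b, c])
```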
\begin{figure}[h!]
\centering
\includegraphics[scale=0.18]{Uniform_cubic.png} \includegraphics[scale=0.18]{Perturbed_cubic.png}
\includegraphics[scale=0.38]{CubicPaper.png}\\
(a) \hspace{1.8in} (b) \hspace{1.8in} (c)
\caption{(a) Uniform random sampled data, (b) near boundary data near the cusp, and (c) decision boundary from neural network trained on data from~(b)
for $f(x;b,c) = x^3 + bx + c$.}\label{fig:cubic}
\end{figure}
A 1-NN classification method was used with various training and testing data sets, with results summarized in Table~\ref{tab:knn-cubic}. As for the quadratic
in Section~\ref{sec:quadratic},
including sample point data near the boundary for training yields higher accuracy. In addition, when uniform data is used for training, the accuracy declines when boundary data is included in the testing data set.
\begin{table}[h!]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4} \multicolumn{1}{c|}{Training Data} & \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.5259 & 0.7259 \\
\midrule
NearBoundary & 0.9999 & 1 & 1 \\
\midrule
NearBoundary+NearCenter & 0.9999 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of 1-NN method on the univariate cubic $f(x;b,c) = x^3 + bx + c$.}
\label{tab:knn-cubic}%
\end{table}%
A feedforward network with the same hyperparameters as that used for the quadratic in Section~\ref{sec:quadratic}
was trained on the data from Figure~\ref{fig:cubic}(b),
with the resulting decision boundary illustrated in Figure~\ref{fig:cubic}(c).
Table~\ref{tab:nn-cubic} presents
the quantitative results with the
various training and testing data sets.
This shows that the network learned a good approximation of the real discriminant locus even in this challenging case where the boundary has a cusp.
As before, the network response behaves well across the full parameter space. In addition, when near boundary data is included in the training data set, the model performs better across all testing data sets, similarly to the 1-NN method.
\begin{table}[h!]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4} \multicolumn{1}{c|}{Training Data} & \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.7848 & 0.8689\\
\midrule
NearBoundary & 1 & 1 & 1 \\
\midrule
NearBoundary+NearCenter & 1 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of feedforward neural network on the univariate cubic $f(x;b,c)=x^3 + bx + c$.}
\label{tab:nn-cubic}%
\end{table}%
\subsection{Kuramoto Model}\label{sec:kuramoto}
To move beyond toy examples,
we consider the Kuramoto model \cite{acebron2005kuramoto,dorfler2014synchronization,strogatz2000kuramoto}
which is a popular model to study synchronization phenomena observed in systems consisting of coupled oscillators.
Such systems arise in a wide variety of applications such as synchronizing fireflies, rhythmic applause, biological neural networks, laser arrays, power grids, particle coordination, and spin glass models.
For $N$ oscillators, the mathematical model is
a system of ordinary differential equations:
\begin{equation}
\frac{d\theta_i}{d t} = \omega_i - \frac{1}{N} \sum_{j=1}^{N}\sin(\theta_i - \theta_j),
\end{equation}
for $i=1, \dots, N$
where $\theta_i$ is the phase of the $i^{\rm th}$
oscillator (at time $t$).
The oscillators are coupled through the sine function.
The parameters $\omega_i$
are the natural frequencies of the respective oscillators.
We aim to study the number of equilibria
of the Kuramoto model as a function of the
parameters~$\omega_i$.
The equilibria satisfy
\begin{equation}
\omega_i - \frac{1}{N}\sum_{j=1}^{N}\sin(\theta_i - \theta_j) = 0.
\label{eq:kuramoto_trig}
\end{equation}
Since these equations
are invariant under the transformation
$\theta_i \rightarrow \theta_i + \alpha$
for any $\alpha$,
this continuous global symmetry gives rise to solution curves. In order to remove the symmetry and, in turn,
restrict the solution space generically
to isolated solutions, we fix $\theta_N = 0$.
Since the sum of all of the equations
in Eq.~\eqref{eq:kuramoto_trig} yields
$$\omega_1+\cdots+\omega_N = 0,$$
we can remove the $N^{\rm th}$ equation from
the system which is now parameterized by
$\omega_1,\dots,\omega_{N-1}$ with
$\omega_N = -(\omega_1+\cdots+\omega_{N-1})$.
Finally, we can rewrite using trigonometric identities
and replace each $\cos \theta_i$ and $\sin \theta_i$
by variables $c_i$ and $s_i$ which are constrained
by the Pythagorean theorem.
Hence, the final parameterized polynomial system under consideration is
\begin{equation}
F(c_1,s_1,\dots,c_{N-1},s_{N-1};
\omega_1,\dots,\omega_{N-1}) =
\left[\begin{array}{c}
\displaystyle \omega_i - \dfrac{1}{N}\sum_{j=1}^N(s_i c_j - s_j c_i) \\[0.1in]
c_i^2 + s_i^2 - 1 \\[0.1in]
i = 1,\dots,N-1 \end{array}\right] = 0.
\label{eq:kuramoto_poly}
\end{equation}
From Eq.~\eqref{eq:kuramoto_trig}, it is easy
to observe that if $\omega_i\notin\left[-\frac{N-1}{N},\frac{N-1}{N}\right]$,
then Eq.~\eqref{eq:kuramoto_poly} can have no real solutions.
Hence, the parameter space is naturally restricted
to a compact subset of $\mathbb{R}^{N-1}$.
Moreover, the number of real solutions
is invariant under permutations of
the parameters. In particular,
we do not label the axes in
Figures~\ref{fig:kuramoto3}
and~\ref{fig:kuramoto4} since
equivalent pictures hold for any labelling.
For generic parameter values,
$F$ from Eq.~\eqref{eq:kuramoto_poly} has
$2^{N}-2$ solutions \cite[Thm.~4.3]{coss2018locating}.
Thus, for $N =3$, there can be a maximum
of $6$ isolated real solutions and
there are parameters for any possible even
number of solutions, e.g., see Figure~\ref{fig:kuramoto3}(d).
For $N = 4$, it was conjectured in
\cite{xin2016analytical} to have a maximum
of~$10$ isolated real solutions
by scanning over a grid of the parameter space.
This conjecture was proven
to be correct in \cite[Thm.~8.1]{Harris2020}.
The following considers learning the
real discriminant locus of the parameter space
for $N =3$ and $N=4$.
It is known that the classical discriminant
locus has degree $12$ and $48$, respectively,
and has many singularities.
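The system of Eq.~\eqref{eq:kuramoto_poly} is straightforward to evaluate; the following hypothetical sketch (with $\theta_N=0$ encoded as $c_N=1$, $s_N=0$) is useful for sanity-checking residuals, e.g., the synchronized state $\theta_i=0$ is an equilibrium for $\omega=0$.

```python
def kuramoto_F(cs, omega, N=3):
    """Evaluate the parameterized Kuramoto polynomial system with
    theta_N = 0, i.e. c_N = 1 and s_N = 0.
    cs = (c_1, s_1, ..., c_{N-1}, s_{N-1}); omega = (omega_1, ..., omega_{N-1})."""
    c = list(cs[0::2]) + [1.0]
    s = list(cs[1::2]) + [0.0]
    # first N-1 residuals: omega_i - (1/N) * sum_j (s_i c_j - s_j c_i)
    residuals = [omega[i] - sum(s[i] * c[j] - s[j] * c[i] for j in range(N)) / N
                 for i in range(N - 1)]
    # remaining N-1 residuals: the Pythagorean constraints c_i^2 + s_i^2 - 1
    residuals += [c[i] ** 2 + s[i] ** 2 - 1.0 for i in range(N - 1)]
    return residuals
```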
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.25]{Uniform_kuramoto3.png} \includegraphics[scale=0.25]{Perturbed_kuramoto3.png}\\
\hspace{0.2in} (a) \hspace{2.8in} (b)\hspace{1.75in} \\
\includegraphics[scale=0.25]{Perturbed_kuramoto3_zoom.png}
\includegraphics[scale=0.5]{Kura3Paper.png} \\
\hspace{0.2in} (c) \hspace{2.8in} (d)\hspace{1.75in} \\
\caption{
For the 3-Oscillator Kuramoto model,
(a) uniformly selected parameter values, (b) data perturbed from the boundary, (c) a zoomed view of data perturbed from the boundary, and (d) decision boundary from neural network trained on data from (b).}
\label{fig:kuramoto3}
\end{figure}
\subsubsection{3-Oscillators}
For $N =3$, there are two parameters $(\omega_1,\omega_2)\in[-1,1]^2$.
A uniform sampling is provided in
Figure~\ref{fig:kuramoto3}(a),
near boundary data points in
Figure~\ref{fig:kuramoto3}(b),
and a zoomed in version of near
boundary points in Figure~\ref{fig:kuramoto3}(c).
A 1-NN classification method was used with various training and testing data sets to learn the real
discriminant locus with the results summarized in Table~\ref{tab:knn-k3}.
As in the previous examples, including sample point data near the boundary for training yields higher accuracy when any testing data set is used.
Due to the increasing difference in size
of the data sets, tests were performed to
determine whether training with a data set roughly
ten times smaller than the testing data set
impacted the accuracy results.
To do this, the original uniform data set of approximately 1,000 data points as well as a uniform data set of 8,000 data points were used for training, while data sets of approximately 1,000 (Uniform), 5,000 (NearBoundary), and 8,000 (NearBoundary+NearCenter) points were used for testing.
As summarized in Table~\ref{tab:knn-k3}, the accuracy does not drastically increase when the training and testing data sets are of comparable size. Most importantly, this does not change the conclusion that including near boundary data in the training data set yields the highest accuracy across all testing data sets.
\begin{table}[!ht]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4} \multicolumn{1}{c|}{Training Data} & \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.4921 & 0.6525 \\
\midrule
Uniform (large) & 1 & 0.5306 & 0.6973\\
\midrule
NearBoundary & 1 & 1 & 0.9985 \\
\midrule
NearBoundary+NearCenter & 1 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of 1-NN method on the 3-Oscillator Kuramoto model.}
\label{tab:knn-k3}%
\end{table}%
A feedforward, fully connected neural network with five hidden layers each with 20 neurons was trained on the data from Figure~\ref{fig:kuramoto3}(b)
and shown in Figure~\ref{fig:kuramoto3}(d).
We employed the ReLU activation function
for the neurons and used a 4-neuron softmax layer as the output layer since this is a 4-class classification task corresponding to $0$, $2$, $4$, and $6$ real solutions.
The network was trained to classify the parameter points according to the number of real solutions associated with the portion of the space they reside in. A multi-class cross-entropy loss without regularization was used to train the network with a variable learning rate scheme.
The network was trained until it correctly classified
every sample in the training set.
For various testing data sets, each sample point was fed to the neural network and the classification decision recorded. Table~\ref{tab:nn-k3} shows
that including data points near the boundary in the training data set decidedly improves the accuracy across all
testing~data~sets.
\begin{table}[!tbp]
\centering
\begin{tabular}{l|c|c|c}
\toprule
& \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4} \multicolumn{1}{c|}{Training Data} & \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.5027 & 0.6554 \\
\midrule
NearBoundary & 1 & 1 & 0.9861 \\
\midrule
NearBoundary+NearCenter & 1 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of feedforward network on the 3-Oscillator Kuramoto model.}
\label{tab:nn-k3}%
\end{table}%
\subsubsection{4-Oscillators}
Similar computations were performed
on the $4$-Oscillator Kuramoto model,
which has a three-dimensional parameter space.
Following the theoretical bounds,
we only considered sample points in $[-3/4,3/4]^3$.
Figure~\ref{fig:kuramoto4}(a) shows
a two-dimensional slice of the parameter space
using uniformly selected points
while Figure~\ref{fig:kuramoto4}(b)
illustrates some of the near boundary data.
Table~\ref{tab:knn-k4} summarizes the results
when using a 1-NN classification method
which are consistent with previous experiments.
In our experiment using the neural network method,
the network was unable to fully separate the data samples,
mislabeling some of the near boundary points.
We hypothesize that, although training converged, it likely reached a local minimum
in the optimization landscape.
As the dimensionality of the data and the number of training data points grow,
the complexity of the optimization landscape increases
which makes it less likely to reach the global minimum or at least one that is truly optimal.
This scenario is worsened by the absence of a regularization term,
since it has been shown empirically~\cite{mehta2018}
that the number of local minima in the optimization landscape of a neural network
decreases as stronger regularization is enforced.
\begin{figure}[!hb]
\centering
\includegraphics[scale=0.25]{Uniform_kuramoto4_2D.png} \includegraphics[scale=0.25]{perturbed_kuramoto4.png}\\
(a) \hspace{2.5in} (b)\hspace{3in} \\
\caption{
For the 4-Oscillator Kuramoto model,
(a) uniformly selected parameter values on a 2D slice
and (b) some of the data perturbed from the boundary.
}
\label{fig:kuramoto4}
\end{figure}
\begin{table}[tbp]
\centering
\begin{tabular}{l|r|c|c}
\toprule
\multicolumn{1}{c|}{\multirow{3}[6]{*}{Training Data}} & \multicolumn{3}{c}{Test Data} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Uniform} & \multicolumn{1}{l|}{NearBoundary} & \multicolumn{1}{l}{NearBoundary+NearCenter} \\
\cmidrule{2-4} & \multicolumn{1}{l|}{Accuracy} & Accuracy & Accuracy \\
\midrule
Uniform & 1 & 0.463 & 0.5911 \\
\midrule
NearBoundary & 0.9782 & 1 & 0.9901 \\
\midrule
NearBoundary+NearCenter & 0.9869 & 1 & 1 \\
\bottomrule
\end{tabular}%
\caption{Performance of 1-NN method on the 4-Oscillator Kuramoto model.}
\label{tab:knn-k4}%
\end{table}%
\section{Real parameter homotopy leveraging learned boundaries}\label{sec:realHomotopy}
The examples presented in Section~\ref{sec:results}
show that machine learning techniques
coupled with the sampling scheme
from Section~\ref{sec:sampling}
produce accurate results for classifying,
i.e., predicting the number
of real solutions, over the parameter space.
Often in science and engineering
applications, one is not only
interested in the number of real solutions,
but also in actually computing the real solutions.
Typically, for these applications, the number
of real solutions is significantly smaller
than the number of complex solutions,
so developing a parameter homotopy that
only tracks real solution paths
can drastically reduce the computational time.
The key to developing such a real parameter homotopy
is to track along a segment in the parameter
space which does not intersect the real discriminant locus.
Thus, after learning, one can develop
a robust and efficient real parameter homotopy setup
as follows, which we demonstrate on
the 3-Oscillator and 4-Oscillator Kuramoto models.
Given a real parameter $p\in\mathbb{R}^k$, the
real parameter homotopy method uses
the nearest neighbor method to select
the closest parameter point $p^*$ to $p$ in
the sampled (training) data set.
Since the real solutions for $f(x;p^*)=0$
have already been computed, one only
tracks the solution paths starting at real solutions
for the homotopy defined by
$$H(x,t) = f(x;t\cdot p^* + (1-t)\cdot p) = 0,$$
which is simply Eq.~\eqref{eq:ParameterHomotopy} with $\gamma=1$.
Therefore, if the line segment $[p^*,p]$
does not intersect the real discriminant locus,
then there is a bijection between the real solutions
of $f(x;p) = 0$ and $f(x;p^*)=0$,
and every real solution path of $H=0$ is smooth
for $t\in[0,1]$.
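To make this concrete, here is a minimal path-tracking sketch (ours, not the authors' implementation; a serious solver would use adaptive step sizes and endgames). The toy system $f(x;p) = x^2 + p$ and all names here are hypothetical:

```python
def track_real_path(f, dfdx, dfdp, x0, p_star, p, steps=100):
    """Track a real solution of H(x,t) = f(x; t*p_star + (1-t)*p) = 0
    from t=1 (known root x0 at p_star) to t=0 (root at p)."""
    x = x0
    for k in range(steps):
        t0, t1 = 1 - k / steps, 1 - (k + 1) / steps
        pt = t0 * p_star + (1 - t0) * p
        # Euler predictor from the Davidenko ODE: f_x x' + f_p (p_star - p) = 0
        x += -dfdp(x, pt) * (p_star - p) / dfdx(x, pt) * (t1 - t0)
        pt = t1 * p_star + (1 - t1) * p
        for _ in range(5):  # Newton corrector at the new t
            x -= f(x, pt) / dfdx(x, pt)
    return x

# toy system f(x; p) = x^2 + p: real roots are +/- sqrt(-p) for p < 0
f = lambda x, p: x * x + p
root = track_real_path(f, lambda x, p: 2 * x, lambda x, p: 1.0,
                       x0=2.0, p_star=-4.0, p=-1.0)
print(root)  # ~ 1.0
```

If the segment $[p^*,p]$ crossed the real discriminant locus, $f_x$ would vanish along the way and the tracker would fail, which is exactly what the nearest-neighbor selection of $p^*$ tries to avoid.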
Using sample points from the sampling scheme
in Section~\ref{sec:sampling} on either side of
the boundary increases the chance that
the segment between~$p$ and
the nearest sample point $p^*$ is
contained in a single region, and thus that this real parameter
homotopy method succeeds. Figure~\ref{fig:RealHomotopy}
is Figure~\ref{fig:kuramoto3}(d)
with two added segments.
One segment (black) is within the same region
so that the real parameter homotopy method would succeed.
Although the other segment (purple) has endpoints in the
same region, there is no guarantee of success
since it intersects the real discriminant locus.
\begin{figure}[!h]
\centering
\includegraphics[scale=1.4]{Kura3Paper2.png}
\caption{Illustration of
two segments added to Figure~\ref{fig:kuramoto3}(d),
one (black) which is guaranteed to succeed while the other (purple) may fail since it intersects
the real discriminant locus.}\label{fig:RealHomotopy}
\end{figure}
\subsection{3-Oscillators}
As an illustration, consider the
3-Oscillator Kuramoto model.
Since, from Section~\ref{sec:kuramoto},
the generic number of complex solutions is $6$,
one of course can easily track all $6$ complex solution
paths using a classical parameter homotopy in Eq.~\eqref{eq:ParameterHomotopy}.
In our experiment,
using a single core of a
2.4 GHz AMD Opteron Processor,
this took on average $1.33$ seconds.
Nonetheless, we utilize this as a test
case to show some improvement as well as
to analyze the success rate,
which was determined by comparing a classical
parameter homotopy with this
machine learning assisted real parameter homotopy.
Table~\ref{tab:K3_real_parameter_homotopy}
shows that, on average, the real parameter
homotopy took less than $0.1$ seconds
and was successful on every randomly selected
parameter value tested.
One reason for the order of magnitude
reduction in computational time is that,
by selecting the closest parameter value,
the homotopy solution paths are much shorter
and thus faster to track.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|c|c|c}
\toprule
Number of data points & Number of paths & Average time (in seconds) & Success rate \\
\midrule
249 & 2 & 0.077 & $100\%$\\
\midrule
26& 4 & 0.081 & $100\%$\\
\midrule
17 & 6 & 0.086 &$100\%$ \\
\bottomrule
\end{tabular}
\caption{The average computation time for finding all real roots for the 3-Oscillator Kuramoto model using a machine learning assisted real parameter homotopy method.}
\label{tab:K3_real_parameter_homotopy}
\end{table}
\subsection{4-Oscillators}
Following a similar setup, we also applied
the method to the 4-Oscillator Kuramoto model.
In this case, the generic number of complex
solutions is $14$, but the maximum number of real
solutions is $10$ showing that there will always
be wasted computational effort when computing
the real solutions using a classical parameter homotopy.
In our experiment, the average time
for tracking the 14 complex solution paths
using a classical parameter homotopy was $3.40$ seconds.
Table~\ref{tab:K4_2_real_parameter_homotopy}
summarizes the results that again show
over an order of magnitude reduction
in computational time with a success rate
in accordance with the classification accuracy
in Table~\ref{tab:knn-k4}.
\begin{table}[!htbp]
\centering
\begin{tabular}{c|c|c|c}
\toprule
Number of data points & Number of paths & Average time (in seconds) & Success rate \\
\midrule
30504 & 2 & 0.114 & $98.2\%$\\
\midrule
17088& 4 & 0.121 & $98.8\%$\\
\midrule
9041 & 6 & 0.126 &$99.2\%$ \\
\midrule
4383 & 8 & 0.128 &$98.0\%$ \\
\midrule
345 & 10 & 0.132 &$100\%$ \\
\bottomrule
\end{tabular}
\caption{The average computation time for finding all real roots for the 4-Oscillator Kuramoto model using a machine learning assisted real parameter homotopy method.}
\label{tab:K4_2_real_parameter_homotopy}
\end{table}
\section{Outlook and conclusions}\label{sec:conclusion}
This paper provides a novel viewpoint
on the mathematical problem of identifying
the boundaries, called the real discriminant
locus, of the parameter space
that separate the regions corresponding
to different numbers of real solutions
to a parameterized polynomial system.
Although there is a discriminant polynomial
which vanishes on the real discriminant locus,
it can be difficult to compute, motivating
the need to approximate it numerically.
Our approach is based on the correspondence
between the real discriminant locus
and decision boundaries of
a supervised classification problem in machine learning.
By utilizing domain knowledge from numerical algebraic
geometry, we developed a sampling strategy for selecting
points near the boundary to assist the machine learning techniques in providing an accurate approximation
of the boundary.
With a parameter homotopy, one is able to accurately
label the data so that there is no noise
in the data. Hence, no regularization techniques
need to be utilized, which would have
forced the algorithm to strictly learn only
smooth boundaries. This is important
since singularities often arise on real
discriminant loci as illustrated
in Section~\ref{sec:results}.
One challenge with using deep networks
to learn a real discriminant locus
is how to properly select the number of
layers and neurons within each layer
needed to develop an accurate approximation.
We utilized hyperparameter optimization
to search for reasonable choices
along with stochastic gradient descent methods
to determine weights to fit the data.
Another challenge is the presence
of singularities which
seem to make training more difficult for deep networks.
Therefore, these types of problems
provide a unique benchmarking opportunity for multi-class machine learning algorithms as the ground truth
regarding both labels and classification
boundaries can be explicitly computed
for some examples, such as univariate polynomials
as in Sections~\ref{sec:quadratic} and~\ref{sec:cubic}.
We overcome some of these difficulties
by developing a sampling scheme
that produces significantly more points near the boundaries than in other areas of the parameter space
so that one is able to quickly obtain an accurate
approximation of the real discriminant locus.
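A toy version (ours) of this idea for the quadratic $x^2 + bx + c$, whose real discriminant locus is the parabola $c = b^2/4$: perturb points off the curve to either side and label them by the resulting number of real roots. All names are hypothetical.

```python
import random

def near_boundary_samples(n, eps=0.05, seed=0):
    # For x^2 + b x + c, the real discriminant locus is c = b^2 / 4.
    # Perturbing to c = b^2/4 - eps gives 2 real roots; to + eps gives 0.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        b = rng.uniform(-2, 2)
        side = rng.choice([-1, 1])
        c = b * b / 4 + side * eps
        samples.append(((b, c), 2 if side < 0 else 0))
    return samples

pts = near_boundary_samples(8)
```

For a general parameterized system the exact locus is unknown, so in practice the perturbed points are labeled by solving $f(x;p)=0$ with a parameter homotopy rather than by a closed-form discriminant.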
When deep networks take an inordinate
amount of time to train, one can instead utilize
local approximation methods such as
the $K$-nearest neighbor classification algorithm.
In fact, as shown in Theorem~\ref{thm:1NN},
no classifier can outperform the $1$-nearest neighbor
classification algorithm provided that the
parameter space is sampled densely enough.
The examples in Section~\ref{sec:results}
show that deep networks are useful when the
parameter space is not sampled very densely.
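For illustration, a minimal 1-NN sketch (ours, with hypothetical names) on the quadratic $x^2 + bx + c$, where the class is the number of real roots and the training data lie on a uniform grid:

```python
def n_real_roots(b, c):
    # ground-truth label: number of real roots of x^2 + b x + c
    disc = b * b - 4 * c
    return 2 if disc > 0 else (1 if disc == 0 else 0)

def one_nn(train, query):
    # train: list of ((b, c), label); return the label of the nearest point
    bq, cq = query
    _, label = min(train, key=lambda s: (s[0][0] - bq) ** 2 + (s[0][1] - cq) ** 2)
    return label

# uniform grid of training data on [-2, 2]^2 with step 0.25
train = [((b / 4, c / 4), n_real_roots(b / 4, c / 4))
         for b in range(-8, 9) for c in range(-8, 9)]
print(one_nn(train, (0.1, -0.5)), one_nn(train, (0.0, 1.0)))  # 2 0
```

The prediction is wrong only when the query and its nearest training point lie on opposite sides of the discriminant locus, which is why dense sampling near the boundary matters most.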
Although our proposed sampling method
can be viewed as active learning,
one can also employ a more explicit active learning
approach where an algorithm interactively queries
the parameter space and samples more densely near
singularities such as cusps and other difficult regions.
One could also attempt to first construct an algorithm to
remove $\epsilon$-neighborhoods surrounding all singularities,
learn the remaining parameter space
and real discriminant locus,
and then take $\epsilon \rightarrow 0$. These approaches will be explored in the future.
Finally, as an application of learning
the real discriminant locus,
we developed a real parameter homotopy method
that tracks only real solution paths in Section~\ref{sec:realHomotopy}.
Even for relatively small problems,
this method reduced the computational
time by over an order of magnitude.
After generating sample data ``offline,''
this method is easy to implement
in an ``online'' solver which could drastically
improve the computation of real solutions.
With proper adjustments,
this method is easily extensible to other
nonlinear functions such as rational, exponential,
logarithmic, trigonometric, and piecewise~functions.
\section*{Acknowledgement}
The paper is a result of exploratory and fundamental research, and statements made in it are Dhagash Mehta's and his co-authors' personal views which do not represent The Vanguard Group's views.
The authors thank Martin Pol\'{a}\v{c}ek for insightful comments.
\bibliographystyle{abbrv}
% https://arxiv.org/abs/1201.2853
\title{Renormalized energy concentration in random matrices}
\begin{abstract}
We define a ``renormalized energy'' as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of [SS1,SS3]. Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix $\beta$-sine processes on the real line ($\beta=1,2,4$), and the Ginibre point process and the process of zeros of Gaussian analytic functions in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the $\beta=2$ sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.
\end{abstract}
\section{Introduction}
The aim of this paper is to introduce and compute a function, called the ``renormalized energy", for some specific random point processes that arise in random matrix models, and in this way to associate to each of these processes a unique number, which is expected to measure its ``disorder".
Our ``renormalized energy", that we denote $\mathcal{W}$, is defined over configurations of points lying either on the real line or on the plane, as the limit as $N \to \infty$ of
\begin{align*}
\mathcal{W}_N(\{a_i\}) & = - \frac{1}{N} \sum_{i\neq j, a_i, a_j \in [0, N]} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N\qquad \text{in dimension 1},\\
\mathcal{W}_N(\{a_i\}) & = \frac{1}{2\pi N^2 } \sum_{i\neq j, a_i, a_j \in [0,N]^2} E_N(a_i-a_j) + \log \frac{N}{2\pi \eta(i)^2}\qquad \text{in dimension 2},\end{align*} where $E_N$ is an explicit Eisenstein series, and $\eta$ is the Dedekind Eta function.
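As a quick numerical sanity check (ours, not part of the paper's argument), the one-dimensional $\mathcal{W}_N$ can be evaluated directly for a finite window. Taking the $N$ lattice points $a_i = 0, 1, \dots, N-1$, the identity $\prod_{k=1}^{N-1} 2\sin(\pi k/N) = N$ forces $\mathcal{W}_N = 0$ exactly; the sketch below (all names ours) confirms this:

```python
import math

def W_N(points, N):
    # one-dimensional renormalized energy of a finite configuration in [0, N)
    s = 0.0
    for i, a in enumerate(points):
        for j, b in enumerate(points):
            if i != j:
                s += math.log(abs(2 * math.sin(math.pi * (a - b) / N)))
    return -s / N + math.log(N)

print(W_N(list(range(10)), 10))  # 0 up to rounding error
```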
This definition is inspired by that of the ``renormalized energy", denoted $W$,
introduced by Sandier and the second author in \cite{ss1} in the case of points in the plane and in \cite{ma1d} in the case of points on the real line. The definitions for $W$ and $\mathcal{W}$ coincide when the point configuration has some periodicity (this is where our new definition originates), and in that case they amount to computing a sum of pairwise interactions
$$\sum_{i\neq j} G(a_i-a_j)$$ where $a_i$ are the points and $G$ is a suitable logarithmic kernel (the Green's function on the underlying torus); however they are not identical in general. We will give more details on the connection in Section \ref{secdefW}.
In \cite{ma1d} it is shown that in dimension 1, $W$ is bounded below and its minimum is achieved at the perfect lattice $\mathbb{Z}$. In dimension 2, the situation is more complex; it is also shown in \cite{ss1} that the minimum of $W$ is achieved, but it is only conjectured that this minimum value is achieved at the perfect triangular lattice or ``Abrikosov lattice" according to the terminology of the physics of superconductors (which was the first main motivation for $W$ in \cite{ss1} where it was introduced). This conjecture is supported by the result that, among configurations of points which form a perfect lattice (of fixed volume), the renormalized energy is minimal if and only if the lattice is the perfect triangular lattice, i.e. with 60$^\circ$ angles (that result is shown in \cite{ss1} based on the use of modular functions and results in number theory).
It is thus natural to think of $W$ or $\mathcal{W}$ as a way to measure the disorder of a configuration of points. With this outlook, in dimension 1 the lattice $\mathbb{Z}$ is the most ordered configuration of points with prescribed density 1, while in dimension 2, it is expected to be the triangular lattice (which is better ordered than any other lattice, say the square lattice).
In addition, due to its logarithmic nature, $\mathcal{W}$ has some nice scaling and additive properties, which we believe make it a very good object.
A further motivation for choosing to study $\mathcal{W}$ as opposed to any other total pairwise interaction function is
that, as seen in \cite{ma2d} and \cite{ma1d}, $W$ arises very naturally from the statistical mechanics of Coulomb or log gases, which contain as particular cases the Ginibre and GOE/GUE/GSE ensembles of random matrices.
In \cite{ss1}, $W$ was introduced and derived in the context of the minimization of the Ginzburg-Landau model of superconductivity. In \cite{ma2d} it was derived as a sort of limiting interaction energy for two dimensional Coulomb gases, and similarly in \cite{ma1d} with log gases. These works are based on analysis and energy estimates (upper and lower bounds). Both the questions we pursue here and the methods we use are quite different: they aim at obtaining explicit formulas for specific random matrix models. In particular, this is a way to compute some interesting statistics over random matrix eigenvalues, as initiated by Dyson-Mehta \cite{dysonmehta}. We will comment more on this just below.
\medskip
Let us now briefly introduce the notion of a random point process.
A (simple) random point process is a probability measure on the set of all locally finite collections of (mutually distinct) points in the space,
cf. e.g. \cite{dvj}.
It can also be viewed as a random measure of the form $\xi(\omega)=\sum_{p \in \Lambda} \delta_p$, with the points $p$ distinct and $\Lambda $ discrete.
Random point processes are essentially characterized by their ``$k$-point correlation functions" $\rho_k(x_1, \dots, x_k)$, which give the probability densities of finding $k$ points at the locations $x_1, \dots, x_k$.
We will normalize our processes so that the average number of points per unit volume is always $1$, which is equivalent to $\rho_1(x)\equiv 1$.
Perhaps the most famous random point process is the Poisson process, characterized by the facts that the number of points in disjoint subsets are independent, and the number of points in any finite volume subset of the space follows a Poisson distribution with parameter equal to the volume of the set with respect
to a reference measure.
An important class of point processes is that of determinantal point processes, see \cite{Sos00}, \cite{Lyo03}, \cite{Joh05}, \cite{Kon05},
\cite{Hou06}, \cite{Sos06}, \cite{Bor11} and references therein. That class is characterized by the fact that the $k$-point correlation functions are given by symmetric minors of a (correlation) kernel, cf. Section \ref{correl}.
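Explicitly (a standard formula, recalled here for convenience), for a determinantal point process with correlation kernel $K$ one has
$$\rho_k(x_1, \dots, x_k) = \det\big(K(x_i, x_j)\big)_{i,j=1}^k,$$
and for the $\beta=2$ sine process the kernel is $K(x,y)=\frac{\sin \pi (x-y)}{\pi (x-y)}$.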
It is easy to see that $\lim_{N\to\infty}\mathcal{W}_N=+\infty$ for the translation invariant Poisson process, which
means that it is ``too chaotic'' from the point of view of the renormalized energy. Let us list the (stationary)
processes for which we show that $\mathcal W$ provides more meaningful information.
We will be interested in one dimension in the $\beta$-sine processes ($\beta=1,2,4$) which arise as the local limit of the law of eigenvalues in random matrix ensembles
with orthogonal, unitary, and symplectic symmetry groups (they are determinantal for $\beta=2$ and Pfaffian otherwise). In two dimensions we will examine the ``Ginibre" point process, which is also a determinantal process arising as the local limit of the law of eigenvalues of matrices from the complex Ginibre ensemble (i.e. square matrices with complex Gaussian iid entries), for further reference, see \cite{forrester,agz,mehta}; as well as the random process of zeros of random Gaussian analytic functions, often denoted GAF, whose description can be found in \cite{Hou06}.
As our processes are always translation invariant, the $2$-point correlation function can always be written in the form $\rho_2(x,y)=1-T_2(x-y)$ for the
2-point \emph{cluster function} $T_2$ (we will come back to this notation in Sections \ref{correl} and \ref{sec4}).
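As a concrete instance (our own check), the $\beta=2$ sine process has $\rho_2(x,y) = 1 - \left(\frac{\sin \pi(x-y)}{\pi(x-y)}\right)^2$, so $T_2(x) = \left(\frac{\sin \pi x}{\pi x}\right)^2$; its total integral equals $1$, which a direct quadrature over a large window reproduces:

```python
import math

def T2(x):
    # cluster function of the beta=2 sine process
    if x == 0:
        return 1.0
    s = math.sin(math.pi * x) / (math.pi * x)
    return s * s

# trapezoid rule on [-M, M]; the tail beyond M contributes
# roughly 2 * integral_M^inf dx / (pi x)^2 = 2 / (pi^2 M)
M, n = 200.0, 400000
h = 2 * M / n
total = sum(T2(-M + k * h) for k in range(n + 1)) * h
total -= 0.5 * h * (T2(-M) + T2(M))
print(total)  # close to 1 (truncation ~ 1e-3)
```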
The main results we obtain are the following:
\begin{itemize}
\item For general stationary processes we identify sufficient conditions on the process and its $2$-point correlation function $\rho_2$ for the existence of $\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N$,
and give an explicit formula in terms of $\rho_2$ which is (up to constants)
$$\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N= \int_{\mathbb{R}^d} \log |x|T_2(x)\, dx,$$ with $d=1 $ or $2$ according to the dimension.
\footnote{Interestingly enough, the sufficient conditions involve the equality $\int_{\mathbb{R}^d} \log |x|T_2(x)\, dx=1$
that we check for the above mentioned processes. We expect
it to hold for general $\beta$-sine processes but we do not see an \emph{a priori} reason for that.
It is also not clear to us whether this condition has a physical meaning.}
\item We apply this formula to the specific point processes mentioned above.
\item For the specific point processes above, we explicitly compute the limit of the variance of $\mathcal{W}_N$ and obtain that it is $0$. This implies that for such processes, $\mathcal{W}_N$ concentrates around its expected value, and converges in probability to $\lim\mathbb{E}\mathcal{W}_N$ as $N \to \infty$.
\item We prove that in the class of determinantal point processes with translation invariant kernels in dimensions
1 and 2, $\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N$ is minimized by the processes whose correlation kernel is the Fourier transform of
the characteristic function of the ball. A complete physical interpretation of this fact seems to be missing.
In dimension 1 the optimization gives the $\beta=2$ sine process, which can be seen as a heuristic
explanation of the fact that this process is the universal local limit of $\beta=2$ random matrix ensembles.
Indeed, such a local limit has to be translation invariant and determinantal, and it is natural to expect that it also minimizes an appropriate energy functional. In other dimensions, point processes with such specific kernels called ``Fermi-sphere point processes" appear in \cite{t} as higher-dimensional analogues of the sine process.
\end{itemize}
For our set of specific processes, we thus show that we can attribute to the process a unique number, which we compute explicitly. Whenever these numbers are distinct this implies that the processes are mutually singular (as measures
on the space of point configurations).
Moreover, we check that the processes that are expected to have the highest level of ``order" or rigidity indeed have a lower value of $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$.
For example for the $\beta$-sine processes we find
$$\begin{array}{ll}
\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N= 2-\gamma - \log 2 &\quad \text{for } \ \beta=1\\
\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N= 1-\gamma & \quad \text{for } \ \beta=2\\
\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N= \frac{3}{2}- \log 2 - \gamma & \quad \text{for } \ \beta=4,\end{array}$$
where $\gamma$ is the Euler constant. These three numbers form a decreasing sequence, as can be expected.
In two dimensions, we obtain that the Ginibre process has a higher $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$, hence less rigidity, than that of zeros of Gaussian
analytic functions, in agreement with the results and philosophy of \cite{gnps}.
The values of $\lim \mathbb{E}\mathcal{W}_N$ for the $\beta=1,2,4$ sine processes above equal twice the thermodynamic
``energy per particle'' or ``mean internal energy'' for the log-gas with infinitely many particles, as obtained by Dyson in 1962 \cite[I.IX, III.VI]{dyson}.
This is not surprising as our definition of $\mathcal{W}_N$ essentially coincides with that of Dyson \cite[I.VI]{dyson} once the points on the interval $[0, N]$
are identified with points on a circle of the same length. Furthermore, one can use
this fact to infer finer asymptotic properties of $\mathcal{W}_N$ as follows (we are very grateful to Peter
Forrester for the idea).
If, instead of considering growing windows, one approximates the $\beta$-sine
processes by the \emph{circular ensembles} of particles on the unit circle with joint probability density
$\prod_{i<j}|z_i-z_j|^\beta$, one observes that in rescaled variables, $\mathcal{W}_N$ is simply $-2N^{-1}\sum_{i\ne j}\log|z_i-z_j|+\log N$,
and its characteristic function can be immediately obtained from Selberg's formula for the partition function:
$$
\mathbb{E} \exp(it\,\mathcal{W}_N^\mathrm{circular})=N^{it}\,\frac{Z(\beta-\frac{2it}N)}{Z(\beta)}\,, \qquad Z(\beta)=\frac{\Gamma(1+\frac{\beta N}2)}
{(\Gamma(1+\frac\beta 2))^N}\,,
$$
see e.g.
\cite[Section 4.7.2]{forrester} for the formula for $Z(\beta)$.
Using Stirling's formula it is not hard to see that for any $\beta>0$ and $t\in\mathbb{R}$
$$
\lim_{N\to\infty}\mathbb{E} \exp\left(itN^{1/2}\left(\mathcal{W}_N^\mathrm{circular}-u(\beta)\right)\right)=\exp\left(-\frac{v(\beta)t^2}2\right)
$$
with
$$
u(\beta)=\Psi\left(1+\frac\beta2\right)-\log\left(\frac\beta
2\right),\qquad v(\beta)=\frac 2\beta -\Psi'\left(1+\frac\beta 2\right),\qquad \Psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}\,,
$$
and the expression of $u(\beta)/2$ through the partition function $Z(\beta)$ coincides with that of the thermodynamic energy per particle.
By L\'evy's continuity theorem this implies the central limit theorem
$$
\lim_{N\to\infty}\mathrm{Prob} \left\{\frac{\mathcal{W}_N^{\mathrm{circular}}-u(\beta)}{\sqrt{v(\beta)/N}}\le s\right\} =\frac 1{\sqrt{2\pi}}
\int_{-\infty}^s e^{-x^2/2}dx, \qquad s\in\mathbb{R}.
$$
It is natural to conjecture that the same central limit theorem holds for $\mathcal{W}_N$ on general $\beta$-sine processes
(constructed in \cite{VV}) as well, in particular
providing asymptotic values $\lim\mathbb{E}\mathcal{W}_N=u(\beta)$ and $\lim N\cdot\mathrm{Var(\mathcal{W}_N)}=v(\beta)$
for any $\beta>0$. At the moment we do not know how to prove it, although
in the $\beta=1,2,4$ cases one may hope to obtain a proof via controlling the asymptotics
of moments of $\mathcal{W}_N$ through explicit formulas for the correlation functions.
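As a consistency check (ours), evaluating $u(\beta)$ numerically at $\beta = 1, 2, 4$ reproduces the three limit values listed earlier for the sine processes; the digamma implementation below (shift recurrence plus asymptotic series) is our own sketch:

```python
import math

def digamma(z):
    # psi(z) via the recurrence psi(z) = psi(z+1) - 1/z up to a large
    # argument, then the asymptotic expansion of psi
    s = 0.0
    while z < 20:
        s -= 1.0 / z
        z += 1.0
    return s + math.log(z) - 1 / (2 * z) - 1 / (12 * z ** 2) + 1 / (120 * z ** 4)

def u(beta):
    return digamma(1 + beta / 2) - math.log(beta / 2)

g = 0.5772156649015329  # Euler's constant
print(u(1) - (2 - g - math.log(2)))    # ~ 0
print(u(2) - (1 - g))                  # ~ 0
print(u(4) - (1.5 - math.log(2) - g))  # ~ 0
```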
In the follow-up paper \cite{dysonmehta}, prompted by the wish to analyze experimental data, Dyson and Mehta
looked for a way of approximating the energy per particle, in the case $\beta=1$, by averaging a statistic
of the form $\sum F(a_i,a_j)$ over windows of increasing length in the random matrix spectrum. The concentration
requirement of the asymptotically vanishing variance led them to the conclusion that with their choice
of $F$, the statistic had to be corrected with further interaction terms \cite[Eq. (109)]{dysonmehta}.
Our renormalized energy $\mathcal{W}_N$ seems to be a better solution to Dyson-Mehta's problem, although one
should note that Dyson-Mehta's statistic is purely local, while $\mathcal{W}_N$ involves long range
interaction (between leftmost and rightmost particles in $[0,N]$).
\medskip
The paper is organized as follows:
In Section \ref{secdefW} we give the precise definitions of $W$,
the context about it and its connection to Coulomb energy, from \cite{ss1,ma1d}. This leads us to the definition of $\mathcal{W}_N$. In Section \ref{sec3} we compute limits of expectations of $\mathcal{W}_N$: First on the real line and for general processes, then
for the $\beta=1,2,4$ sine processes, then in the plane and for general processes, finally for the explicit Ginibre and zeros of GAF processes. In Section \ref{optim} we find minimizers of $\lim \mathcal{W}_N$
among determinantal processes with translation-invariant correlation kernels. In Section~\ref{sec4} we compute the limit variance of $\mathcal{W}_N$ for our specific processes (and show it is $0$).
In Section~\ref{sc:misc} we gather some miscellaneous computations: the effect on $\lim \mathbb{E} \mathcal{W}_N$ of superposition and decimation of processes and the computation of $\lim \mathbb{E} \mathcal{W}_N$ for the $\beta=2$ discrete sine processes.
\vskip .5cm
{\bf Acknowledgements:} We are very grateful to Peter Forrester for his idea of using circular ensembles for the
heuristics above, and also for drawing our attention to \cite{jancovici}. We also thank the anonymous referee for a careful reading and very interesting comments.
A. B. thanks Alan Edelman for help with numerics and S. S. thanks C. Sinan G\"unt\"urk for helpful discussions related to Section \ref{optim}.
A. B. was partially supported by NSF grant DMS-1056390, S. S. was supported by a EURYI award. We also thank the MSRI for support to attend a workshop where this work was initiated.
\section{Definitions of $W$ and $\mathcal{W}_N$} \label{secdefW} The aim of this section is to present a definition of $W$ for a configuration of points either on the line or in the plane, which is directly deduced from that of \cite{ss1,ma1d}, but depends only on the data of the points. We do not attempt here to fully and rigorously connect the next definition with that of \cite{ss1,ma1d} (since we believe it presents some serious technical difficulties) but the link will be readily apparent.
We start by recalling the definitions from \cite{ss1}, but rather in the form presented in \cite{ma2d}.
\subsection{Definition of $W(j)$ in the plane}
In \cite{ss1}, $W$ was
associated to a discrete set of points (with asymptotic density) $m$ in
the plane, via a vector field $j$: if $j$ is a vector field in $\mathbb{R}^2$ satisfying
\begin{equation}\label{eqj}
{\rm curl\,} j = 2\pi (\nu - m), \qquad \mathrm{div} \ j=0,\end{equation} where $m$ is a positive number (corresponding to the average point density) and $\nu$
has the form
\begin{equation}\label{eqnnu}
\nu= \sum_{p \in\Lambda} \delta_{p}\quad \text{ for
some discrete set} \ \Lambda\subset\mathbb{R}^2,\end{equation} then
for any function $\chi$ we define
\begin{equation}\label{WR}W(j, \chi): = \lim_{\eta\to 0} \left(
\frac{1}{2}\int_{\mathbb{R}^2 \backslash \cup_{p\in\Lambda} B(p,\eta) }\chi |j|^2
+ \pi \log \eta \sum_{p\in\Lambda} \chi (p) \right).
\end{equation}
\begin{defi}\label{defA} Let $m$ be a nonnegative number.
Let $j$
be a vector field in $\mathbb{R}^2$. We say $j$
belongs to the admissible class
$\mathcal{A}_m $
if \eqref{eqj}, \eqref{eqnnu} hold
and
\begin{equation}\label{densbornee}
\frac{ \nu (B_R ) } {|B_R|}\quad \text{ is bounded by a constant independent of $R>1$}.
\end{equation}
\end{defi}
\begin{defi}
The ``renormalized energy" $W(j)$ relative to the family of balls (centered at the origin) $\{B_R\}_{R>0}$ (and the number $m$) in $\mathbb{R}^2$ is defined for $j \in \mathcal{A}_m$ by
\begin{equation}\label{WU} W(j):= \limsup_{R \to \infty}
\frac{W(j, \chi_{R})}{|B_R|} ,
\end{equation}
where
$\chi_{R}$ denotes positive cutoff functions satisfying, for some
constant $C$ independent of $R$,
\begin{equation}
\label{defchi} |\nabla \chi_{R}|\le C, \quad \text{Supp}(\chi_{R})
\subset B_R, \quad \chi_{R}(x)=1 \ \text{if } d(x, (B_R)^c) \ge
1.\end{equation}
\end{defi}
Note that by scaling, the density of points can be changed to any value; the rule for this change of scales is
\begin{equation}\label{chscale}
W(j)= m \(W(j') - \frac{\pi}{2} \log m\),\end{equation}
where $\nu$ has density $m$ and $j'= \frac{1}{\sqrt{m}}j (\frac{\cdot}{\sqrt{m}})$ (hence the associated configuration $\nu'$ has density $1$).
This function was first introduced in \cite{ss1} and derived as a limiting interaction
energy for vortices of Ginzburg-Landau configurations.
Independently of Ginzburg-Landau it can be viewed
as a Coulombian interaction energy for an infinite number of points in the plane, computed via a renormalization. Many of its properties are stated in \cite{ss1}. In \cite{ma2d} it was directly connected to 2D log gases. We will give more details and properties of $W$ in Section~\ref{sec:background}.
\subsection{Definition of $W(j)$ on the line}
In \cite{ma1d}, a one-dimensional analogue of the above definition is introduced for the study of 1D log gases; we now present it.
The renormalized energy between points in 1D is obtained by ``embedding'' the real line in the plane and computing the renormalized energy in the plane, as defined in \cite{ss1}.
More specifically, we introduce the following definitions:
$\mathbb{R}$ denotes the set of real numbers, but also the real line of the plane $\mathbb{R}^2$ i.e. points of the form $(x,0)\in \mathbb{R}^2$. For the sake of clarity, we denote points in $\mathbb{R}$ by the letter $x$ and points in the plane by $z=(x,y)$. For a function $\chi $ on $\mathbb{R}$, we define its natural extension $\bar{\chi}$ to a function on $\mathbb{R}^2$ by $\bar{\chi}(x,y)=\chi(x)$.
$I_R$ denotes the interval $[-R/2, R/2]$ in $\mathbb{R}$.
${\delta_\mathbb{R}}$ denotes the measure of length on $\mathbb{R}$ seen as embedded in $\mathbb{R}^2$, that is
$$\int_{\mathbb{R}^2} \varphi {\delta_\mathbb{R}} = \int_\mathbb{R} \varphi(x, 0)\, dx$$
for any smooth compactly supported test function $\varphi $ in $\mathbb{R}^2$.
This measure can be multiplied by bounded functions on the real line.
\begin{defi}For any function $\chi $ on $\mathbb{R}$, and any function $h$ in $\mathbb{R}^2$ such that
\begin{equation}\label{eqj1}
-\Delta h = 2\pi (\nu - m {\delta_\mathbb{R}}) \qquad \text{in } \ \mathbb{R}^2\end{equation} where
$m$ is a nonnegative number and $\nu$ has the form
\begin{equation} \label{eqnnu1}\nu= \sum_{p \in\Lambda} \delta_{p}\quad \text{ for
some discrete set of points of $\mathbb{R}$} ,\end{equation}
we denote $j=-\nab^{\perp} h :=(\partial_2 h, - \partial_1 h)$ and
\begin{equation}\label{WR1}W(j, \chi): = \lim_{\eta\to 0} \(
\frac{1}{2}\int_{\mathbb{R}^2 \backslash \cup_{p\in\Lambda} B(p,\eta) }\bar{\chi}
|j|^2 + \pi \log \eta \sum_{p\in\Lambda} \bar{\chi} (p) \),
\end{equation}
where $\bar{\chi}$ is the natural extension of $\chi$.
\end{defi}
\begin{defi}\label{defA1} Let $m $ be a nonnegative number.
We say $j=-\nab^{\perp} h$
belongs to the admissible class
$\mathcal{A}_m $ if \eqref{eqj1}, \eqref{eqnnu1} hold
and
\begin{equation}\label{densbornee1}
\frac{ \nu (I_R ) } {R}\quad \text{ is bounded by a constant independent of $R$}.
\end{equation}
\end{defi}
We use the
notation $\chi_{R}$ for positive cutoff functions over $\mathbb{R}$ satisfying, for some
constant $C$ independent of $R$,
\begin{equation}
\label{defchi1} |\nabla \chi_{R}|\le C, \quad \text{Supp}(\chi_{R})
\subset I_R, \quad \chi_{R}(x)=1 \ \text{if } |x|<\frac{R}{2}-1 .\end{equation}
\begin{defi}The renormalized energy $W$ is defined,
for $j \in \mathcal{A}_m$, by
\begin{equation} \label{Wroi1} W(j):= \limsup_{R \to \infty}
\frac{W(j, \chi_{R})}{R} .
\end{equation}
\end{defi}
Note that while $W$ in 2D can be viewed as a renormalized way of computing $\|h\|_{H^1(\mathbb{R}^2)}$, in 1D it amounts rather to a renormalized computation of $\|h\|_{H^{1/2}(\mathbb{R})}$.
In one dimension, the formula for change of scales is
\begin{equation}\label{minalai1}
W(j)= m \(W(j') - \pi \log m\),\end{equation}
where $j$ corresponds to density $m$ and $j'$ to density $1$.
\subsection{Background}\label{sec:background}
We recall here some properties and background from \cite{ss1,ma1d}.
\begin{itemize}
\item[-] Since in the neighborhood of $p\in\Lambda$ we have ${\rm curl\,} j = 2\pi(\delta_p - m)$, $\mathrm{div} \ j = 0$, we have near $p$ the decomposition $j(x) = \nabla^\perp \log|x-p| + f(x)$ where $f$ is locally bounded, and it easily follows that the limits \eqref{WR}, \eqref{WR1} exist. It also follows that $j$ belongs to $L^p_{\text{\rm loc}}$ for any $p<2$.
\item[-] Because the number of points is infinite,
the interaction over large balls needs to be normalized
by the volume as in a thermodynamic limit, and thus $W$ does not feel compact perturbations of the configuration of points. Even though the interactions are long-range, this is not difficult to justify rigorously.
\item[-] The cut-off function $\chi_R$ cannot simply be replaced by the characteristic function of $B_R$ because for every $p\in\Lambda$
$$\lim_{\substack{R\to|p|\\R<|p|}} W(j,\mathbf{1}_{B_R}) = +\infty,\quad \lim_{\substack{R\to|p|\\R>|p|}} W(j,\mathbf{1}_{B_R}) = -\infty.$$
\item[-]In dimension 1, there is a unique $h$ satisfying \eqref{eqj1} and for which $W(- \nab^{\perp} h)$ is finite. Thus $W$ amounts to a function of $\nu$ only.
\item[-] In dimension 2, there is not a unique $j$ for which $W(j)<\infty$. It would be tempting to define $W$ as a function of $\nu$ only, and not of $j$, by minimizing over $j$; however, we do not know how to prove that the resulting object has good properties, such as measurability.
\end{itemize}
An important question is that of characterizing the minimum and minimizers of $W$.
For the case of dimension 1, it is proven in \cite{ma1d} that the value
of $W$ does not depend on the choice of $\chi_R$ satisfying \eqref{defchi1}, that $W$ is Borel-measurable over $L^p_{loc}(\mathbb{R}^2, \mathbb{R}^2)$ for $p<2$, and that
\begin{equation}\label{minen1d}
\min_{\mathcal{A}_1} W = - \pi \log (2\pi)\end{equation} is achieved for $\nu=\sum_{p\in \mathbb{Z}} \delta_p$.
For the case of dimension 2,
it is proven in Theorem 1 of \cite{ss1} that the value of $W$ does not depend on the choice of $\{\chi_{R}\}_R$ as long as it satisfies \eqref{defchi}, that $W$ is Borel-measurable over $L^p_{loc}(\mathbb{R}^2, \mathbb{R}^2)$ for $p<2$, and that $\min_{\mathcal{A}_m} W$
is achieved and finite. Moreover, there exists a minimizing sequence consisting of periodic vector fields.
The value of the minimum of $W$ is not known; it is conjectured (see \cite{ss1}) that it is asymptotically equal to the value at the perfect triangular lattice. This conjecture is supported by the fact that the triangular lattice can be proved to be the minimizer among lattice configurations of fixed volume.
In addition to arising from the study of Ginzburg-Landau \cite{ss1},
$W$ is naturally connected, as shown in \cite{ma2d,ma1d}, to Vandermonde factors such as $e^{-\frac{\beta}{2} w_n}$ with
\begin{equation}
\label{wn}
w_n (x_1, \dots, x_n)= - \sum_{i \neq j} \log |x_i-x_j| +n \sum_{i=1}^n V(x_i),\end{equation} where $x_1, \dots , x_n \in \mathbb{R}^d$ with $d=1 $ or $2$,
for any potential $V$ (for example $V$ quadratic).
To explain this, let us introduce some more notation. We set
\begin{equation}\label{Ib}
I (\mu)= \int_{\mathbb{R}^d\times \mathbb{R}^d} - \log |x-y|\, d\mu(x) \, d\mu(y) + \int_{\mathbb{R}^d} V(x)\, d\mu(x). \end{equation} It is well known in potential theory (see \cite{safftotik}) that, provided $V(x)- \log |x| \to +\infty $ as $|x|\to \infty$, $I$ has a unique minimizer among probability measures, called the equilibrium measure -- let us denote it ${\mu_0}$. It is characterized by the fact that
there exists a constant $c$ such that
\begin{equation}\label{optcondmo}
U^{{\mu_0}} + \frac{V}{2}= c\ \text{ in the support of ${\mu_0}$, and} \ U^{\mu_0} + \frac{V}{2}\ge c \text{ everywhere}\end{equation}
where for any $\mu$, $U^\mu$ is the potential generated by $\mu$, defined by
\begin{equation}
U^\mu(x)= - \int_{\mathbb{R}^d} \log |x-y|\, d\mu(y).\end{equation}We may then set
$\zeta= U^{\mu_0} + \frac{V}{2} - c$ where $c$ is the constant in \eqref{optcondmo}. It satisfies
\begin{equation}\label{eqz}
\left\{\begin{array}{rlll}
\zeta &=& 0 & \text{in $\text{Supp}(\mu_0)$}\\
\zeta&>& 0 & \text{in $\mathbb{R}^d\setminus \text{Supp}(\mu_0)$}\end{array}\right.\end{equation}
It is easy to check that $\zeta$ grows like $V$ at infinity.
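As a concrete illustration (an aside, not needed for the sequel): in dimension $d=1$ with the quadratic potential $V(x)=x^2$, the equilibrium measure is classically the semicircle law on $[-\sqrt2,\sqrt2]$, with density $\frac{1}{\pi}\sqrt{2-x^2}$, and the characterization \eqref{optcondmo} can be checked numerically. A minimal Python sketch (the test points and tolerance are our own choices):

```python
# Check the equilibrium condition (optcondmo): U^{mu_0} + V/2 = c on the support,
# for V(x) = x^2 in dimension 1, with mu_0 the semicircle law on [-sqrt(2), sqrt(2)].
import numpy as np
from scipy.integrate import quad

R = np.sqrt(2.0)

def rho(y):
    # density of the equilibrium (semicircle) measure; total mass 1
    return np.sqrt(max(2.0 - y * y, 0.0)) / np.pi

def U(x):
    # logarithmic potential U^{mu_0}(x) = -int log|x - y| d mu_0(y)
    val, _ = quad(lambda y: -np.log(abs(x - y)) * rho(y), -R, R, points=[x], limit=200)
    return val

# U^{mu_0}(x) + V(x)/2 should be constant for |x| < sqrt(2)
vals = [U(x) + 0.5 * x * x for x in (-0.8, 0.0, 0.3, 1.0)]
c_exact = 0.5 + 0.5 * np.log(2.0)
```

The explicit constant $c=\frac12+\frac12\log 2$ follows from the known logarithmic potential of the semicircle law.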
The connection between $w_n$ and $W$ is given by the following exact ``splitting formula'', valid in both 1 and 2 dimensions.
\begin{lem}[\cite{ma1d,ma2d}]\label{lem26}
For any $x_1, \dots, x_n\in \mathbb{R}^d$, $d=1$ or $d=2$, the following holds
\begin{equation}\label{idwn}
w_n(x_1, \dots, x_n)= n^2 I ({\mu_0})- \frac{n}{d} \log n + \frac{1}{\pi} W(-\nab^{\perp} H', \mathbf{1}_{\mathbb{R}^d} ) +2 n \sum_{i=1}^n \zeta (x_i)\end{equation}
where $W$ is defined in \eqref{WR} or \eqref{WR1} respectively, and where
\begin{equation}\label{2.2}
H= - 2\pi\Delta^{-1} \(\sum_{i=1}^n \delta_{x_i} - n \mu_0 \) \ \text{in } \ \mathbb{R}^2,\quad H'(n^{1/d} x) = H(x),\end{equation} the equation being solved in $\mathbb{R}^2$, with
$\mu_0$ naturally extended into a measure on $\mathbb{R}^2$ if $d=1$. (Note that $\Delta^{-1}$ is the convolution with $\frac{1}{2\pi}\log |\cdot|$.)
\end{lem}
\begin{proof}[Sketch of the proof]
We start by writing
$$w_n(x_1, \dots, x_n) =\int_{\triangle^c}\,- \log |x-y| \, d\nu(x)\, d\nu(y) + n\int V(x)\, d\nu(x)$$ where $\triangle $ denotes the diagonal of $(\mathbb{R}^d)^2$ and $\nu = \sum_{i=1}^n \delta_{x_i}$. The idea is to compute the right-hand side by splitting $\nu$ as $n {\mu_0} + (\nu - n{\mu_0})$. This way, using the fact that ${\mu_0}\otimes{\mu_0}$ gives no mass to the diagonal $\triangle$, we obtain
\begin{multline*}
w_n(x_1, \dots, x_n) = n^2 I({\mu_0})+ 2n \int U^{{\mu_0}}(x)\, d(\nu - n {\mu_0})(x)+ n\int V(x)\, d(\nu- n {\mu_0})(x)\\+
\int_{\triangle^c}\,- \log |x-y| \, d(\nu - n {\mu_0})(x)\, d(\nu- n {\mu_0})(y) .
\end{multline*}
Since $U^{{\mu_0}} + \frac{V}{2}= c + \zeta$ and since $\nu$ and $n {\mu_0}$ have same mass $n$, we have
$$2n \int U^{{\mu_0}}(x)\, d(\nu - n {\mu_0})(x)+ n\int V(x)\, d(\nu- n {\mu_0})(x)= 2 n \int \zeta \,d(\nu - n{\mu_0})= 2n \int \zeta \,d\nu,$$
where we used that $\zeta=0$ on the support of ${\mu_0}$. Thus
\begin{equation}\label{avantchvar}w_n(x_1, \dots, x_n) = n^2 I({\mu_0})+ 2n \int \zeta \,d\nu
+
\int_{\triangle^c}\,- \log |x-y| \, d(\nu - n {\mu_0})(x)\, d(\nu- n {\mu_0})(y) .
\end{equation}
We then claim that the last term in the right-hand side is equal to
$\frac{1}{\pi} W(-\nab^{\perp} H ,\mathbf{1}_{\mathbb{R}^d})$.
We define
$H_i(x) := H(x)+\log|x-x_i|$. We have $H_i = -\log\ast (\nu_i - n{\mu_0})$, with $\nu_i = \nu-\delta_{x_i}$, and near $x_i$, $H_i$ is $C^1$. It is then not difficult to deduce
\begin{equation}\label{rhsss}\int_{\triangle^c}\,- \log |x-y| \, d(\nu - n {\mu_0})(x)\, d(\nu- n {\mu_0})(y)= \sum_i H_i(x_i) - n\int H(x)\,d{\mu_0}(x).\end{equation}
On the other hand, by definition \eqref{WR}, \eqref{WR1} and using Green's formula, we have
$$\frac{1}{2\pi} \int_{\mathbb{R}^2 \setminus\cup_i B(x_i,\eta)} |\nabla H|^2 =\frac{1}{2\pi} \sum_i \int_{\partial B(x_i,\eta)} H\frac{\partial H}{\partial n} + \int_{\mathbb{R}^2 \setminus \cup_i B(x_i,\eta)}H\,d(\nu - n{\mu_0}).$$ Using the decomposition $H= H_i - \log |x-x_i|$ near each $x_i$, adding $n \log \eta$ and then letting $\eta \to 0$, we arrive at the same result as the right-hand side of \eqref{rhsss}. This establishes the claim, and the final result then follows from the change of scales $x' = n^{1/d} x$ in $W$.
\end{proof}
\subsection{Calculation for points on a torus in dimension two}
The reason why we need another definition here is that, as we already mentioned, we wish to define $W$ based on the knowledge of $\nu$, i.e. a set of points, alone. For a given $\nu$, there is no uniqueness of the $j$ satisfying \eqref{eqj} or \eqref{eqj1} (in fact the indetermination is exactly a constant, see \cite[Lemma 1.4]{ma2d}), and this makes it problematic to define $W$ for points only in a measurable manner.
However, if the configuration of points is periodic, then $W$ can be computed nonambiguously from $\nu$ only.
The formula for this in dimension 2 is given in \cite{ss1}.
If the periodicity is
with respect to a torus $\mathbb{T}$, and the number of points in that torus is $n$ (denote them $a_1, \dots, a_n$) then there exists a unique (up to a constant) solution to
\begin{equation}\label{Ha}
-\Delta H_{\{a_i\}}= 2\pi \(\sum_{i=1}^n \delta_{a_i
} - \frac{n}{|\mathbb{T}|} \) \quad \text{on} \ \mathbb{T} \end{equation}
(i.e. a periodic solution) and we let \begin{equation}\label{ja}
j_{\{a_i\}}= - \nab^{\perp} H_{\{a_i\}}.\end{equation} It was proved in \cite{ss1} that $W(j_{\{a_i\}})$ is the smallest of the $W(j)$ over all $j$ satisfying \eqref{eqj} which are $\mathbb{T}$-periodic, and it was established
(see formula (1.10)) that
$$W(j_{\{a_i\}})= \frac{1}{2} \sum_{i \neq j} G(a_i- a_j) + n c$$
where $c$ is a constant, and $G$ is the Green's function associated to the torus in the following way:
\begin{equation}\label{green2d}-\Delta G= 2\pi\( \delta_0 -\frac{1}{|\mathbb{T}|} \) \qquad \text{in} \ \mathbb{T}.\end{equation}
In \cite{ss1}, (1.11), an explicit formula is given for this:
\begin{equation}\label{formexpl}
W(j_{\{a_i\}} )= \frac{1}{2} \sum_{i \neq j} \sum_{p \in (\mathbb{Z}^2)^* \setminus \{0\}} \frac{e^{2i\pi p \cdot (a_i- a_j)} }{4\pi^2 |p|^2 } + \frac{n}{2}
\lim_{x\to 0} \( \sum_{p \in (\mathbb{Z}^2)^*\backslash
\{0\} } \frac{ e^{2i\pi p \cdot x}}{ 4\pi^2 |p|^2} + \log
|x|\),\end{equation}
which can in turn be expressed using Eisenstein series.
For simplicity we prefer to work with density $1$ in a square torus, which is then necessarily
$\mathbb{T}_N:=\mathbb{R}^2 /(N\mathbb{Z})^2$ where $n=N^2$. Compared to \cite{ss1}, this will change the constants in the formulae.
So for the sake of clarity we will include below a complete proof of the following:
\begin{lem}\label{Wnper}
Let $a_1, \dots , a_n$ be $n=N^2$ points on $\mathbb{T}_N$. Let $j_{\{a_i\}}$ be the $\mathbb{T}_N$-periodic vector field associated through \eqref{ja} to the configuration of
points $\{a_i\}$, extended by periodicity to the whole plane. Then $W(j_{\{a_i\}})$ as defined in \eqref{WU} is a function of $a_1, \dots, a_n$ only, which is equal to
\begin{equation}\label{Wtor}
W_N(a_1, \dots, a_n)= \frac{\pi}{N^2} \sum_{i \neq j} G(a_i-a_j) + \pi R \end{equation}
where $G$ is the unique solution of
\begin{equation}\label{G2d}
-\Delta G= 2\pi\( \delta_0 -\frac{1}{N^2}\) \qquad \text{in} \ \mathbb{T}_N,\end{equation}
with $\Xint-_{\mathbb{T}_N} G(x)\, dx=0$; and $R$ is a constant, equal to $\lim_{x\to 0} \( G(x)+ \log |x|\)$.
\end{lem}
\begin{proof} First, arguing as in \cite[Proposition 3.1]{ss1}, we have
\begin{equation}\label{f1} W(j_{\{a_i\}})= \frac{1}{|\mathbb{T}_N|} \(\lim_{\eta \to 0} \frac{1}{2} \int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} |j_{\{a_i\}} |^2 + \pi n \log \eta\),\end{equation} which means we reduce the computation in the plane to a computation on the torus.
The next step is a renormalized computation \`a la \cite{bbh}, as sketched in the proof of Lemma \ref{lem26}.
Solving for \eqref{Ha} via the Green function \eqref{G2d}, in view of the translation invariance of the equation, we can choose
\begin{equation}\label{f2} H_{\{a_i\}}(x) = \sum_{i=1}^n G(x- a_i).\end{equation} Let us denote \begin{equation}\label{RG}
R(x)= G(x)+ \log |x|,\end{equation} which is well known to extend to a continuous function; we set $R:=R(0)$.
Inserting \eqref{ja} into \eqref{f1}, we have
\begin{equation}\label{f3}
W(j_{\{a_i\}})= \frac{1}{|\mathbb{T}_N|} \(\lim_{\eta \to 0} \frac{1}{2} \int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} |\nabla H_{\{a_i\}} |^2 + \pi n \log \eta\).\end{equation}
Using Green's formula for integration by parts, we have
$$\int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} |\nabla H_{\{a_i\}} |^2= \int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} (- \Delta H_{\{a_i\}} ) H_{\{a_i\}} + \sum_{i=1}^n \int_{\partial B(a_i, \eta)} H_{\{a_i\}} \frac{\partial H_{\{a_i\}} }{\partial n} .$$
Inserting \eqref{Ha} and \eqref{f2}, we have
\begin{multline*}\int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} |\nabla H_{\{a_i\}} |^2 = - \sum_{j=1}^n \int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} 2\pi G(x-a_j) \, dx \\+ \sum_{i =1}^n \sum_{j=1}^n \int_{\partial B(a_i, \eta)} G(x-a_j) \frac{\partial H_{\{a_i\}} }{\partial n} , \end{multline*} with $n$ the inner unit normal to the circles.
Using for the first term the fact that $G$ is chosen to have average zero, and for the second term splitting $G(x- a_j)$ as $ -\log |x-a_j|+ R(x-a_j)$, using the continuity of $R(x)$ and $\lim_{\eta \to 0} \int_{\partial B(a_i, \eta)} \frac{\partial H_{\{a_i\}} }{\partial n}=2\pi $ (from \eqref{Ha}), and letting $\eta \to 0$ we find
$$\lim_{\eta\to 0} \frac{1}{2} \int_{\mathbb{T}_N \backslash \cup_{i=1}^n B(a_i, \eta)} |\nabla H_{\{a_i\}} |^2 + \pi n \log \eta= \pi \sum_{i =1}^n R(0) + \pi \sum_{i \neq j} (- \log |a_i-a_j| + R(a_i-a_j)).$$
Inserting into \eqref{f3} yields \eqref{Wtor}.
\end{proof}
It turns out that $G$ can be expressed through an Eisenstein series. The Eisenstein series with parameter $\tau$ in the upper half-plane and parameters $u,v \in \mathbb{R}$ (cf. \cite{lang}) is defined by
\begin{equation}
\label{eis}
E_{u,v}(\tau):= \sum_{(m,n)\in \mathbb{Z}^2 \setminus \{0\}} e^{2i\pi (mu+nv)}\frac{ Im (\tau) }{|m\tau +n|^2}.
\end{equation}
Let us now define, for $x \in \mathbb{C}$,
\begin{equation}\label{defE}
E_N(x):= E_{Re(x/N), Im (x/N)} (i).\end{equation}
As in \cite{ss1} we will also need another classical modular function: the Dedekind $\eta $ function. It is defined, for $\tau$ in the upper-half of the complex plane by
\begin{equation}\label{defeta}
\eta(\tau)= q^{1/24} \prod_{k=1}^\infty (1-q^k)\qquad \text{where} \ q=e^{2i\pi \tau}.\end{equation}
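As a quick numerical aside (not part of the argument), the special value $\eta(i)$, which enters the formulas below, can be evaluated from the product \eqref{defeta} and compared against the classical closed form $\eta(i)=\Gamma(1/4)/(2\pi^{3/4})$:

```python
# Evaluate the Dedekind eta function at tau = i via the product formula (defeta)
# and compare with the classical closed-form value eta(i) = Gamma(1/4) / (2 pi^{3/4}).
import math

q = math.exp(-2.0 * math.pi)        # q = e^{2 i pi tau} at tau = i
eta_i = q ** (1.0 / 24.0)
for k in range(1, 100):             # q ~ 1.87e-3, so the product converges very fast
    eta_i *= 1.0 - q ** k

eta_closed = math.gamma(0.25) / (2.0 * math.pi ** 0.75)
```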
The following holds
\begin{pro}\label{form2d}
Let $a_1, \dots , a_n$ be $n=N^2$ points on $\mathbb{T}_N$, and let $W_N$ be defined as in
\eqref{Wtor}. $E_N$ being defined in \eqref{defE}, we have
\begin{equation}\label{WNE}
W_N (a_1, \dots , a_n)=\frac{1}{2N^2} \sum_{i\neq j} E_N(a_i-a_j) + \pi \log \frac{N}{2\pi} - 2\pi \log \eta(i),\end{equation} where points in the plane are identified with complex numbers.
\end{pro}
\begin{proof}
As in \cite{ss1}, $G$, introduced in Lemma \ref{Wnper}, can be computed from \eqref{G2d} using Fourier series and this yields:
\begin{equation}\label{Gser}
G(x)= \sum_{p\in \mathbb{Z}^2 \setminus \{0\}} \frac{e^{\frac{2i\pi}{N} p \cdot x}}{2\pi |p|^2}.\end{equation}
We recognize here an Eisenstein series:
\begin{equation}\label{Geis}
G(x)=\frac{1}{2\pi} E_{Re(\frac{x}{N}), Im (\frac{x}{N}) } (i) =\frac{1}{2\pi} E_N( x) .\end{equation}
Note that the fact that $G$ has zero average implies that
\begin{equation}\label{zeroavE}
\int_{\mathbb{T}_N} E_N(x)\, dx=0.\end{equation}
It remains to compute the value of the constant $R= \lim_{x\to 0} \( G(x)+ \log |x|\)$. For that we use the ``second Kronecker limit formula'' (see \cite{lang}), which asserts that
$$E_{u,v}(\tau)=-2\pi \log |f(u-v\tau, \tau)q^{v^2/2}|, $$
where $q=e^{2i\pi \tau}$, $p=e^{2i\pi z}$, $z=u-v\tau$, and
\begin{equation}\label{f}
f(z,\tau)=q^{1/12} (p^{1/2}-p^{-1/2}) \prod_{k\ge 1}(1-q^k p)(1-q^k/p).\end{equation}
We use this formula with $\tau=i$, $u= Re(\frac{x}{N})$, $v=Im (\frac{x}{N})$. In that case $q=e^{-2\pi}$, $z=\frac{\overline{x}}{N}$ and $p=e^{2i\pi \frac{\overline{x}}{N} }
$ (where the bar denotes complex conjugation), and the formula yields
\begin{equation}\label{klf}
E_N(x)= -2\pi \log \left|f\(\frac{\overline{x}}{N} , i\) e^{- \pi (Im \frac{x}{N})^2 }\right| .
\end{equation}
As $x\to 0$ we have $p\to 1$ and
$$p^{1/2}-p^{-1/2} \sim 2i \pi \frac{\overline{x}}{N} $$
and also (see \eqref{defeta})
$$ q^{1/12}\prod_{k\ge 1} (1-q^k p)(1-q^k/p) \sim_{x \to 0} q^{1/12} \prod_{k\ge 1}
(1-q^k)^2 = \eta(i)^2 . $$
So as $x \to 0$, we have
\begin{equation*}
E_N(x) \sim -2\pi \log \left|\eta(i)^2 2i\pi \frac{\overline{x}}{N}\right|
= -2\pi \log \( \frac{2\pi |x|}{N}\) - 4\pi \log \eta(i).\end{equation*}
Combining with \eqref{Geis}, it follows that $R= - \log \frac{2\pi}{N} - 2\log \eta(i).$ Inserting this and \eqref{Geis} into \eqref{Wtor} we get the result.
\end{proof}
We emphasize here that this formula is the exact value for $W$ as defined in \eqref{WU} provided we assume full periodicity with respect to $\mathbb{T}_N$.
\subsection{Calculation for points on a torus in dimension one}
We do here the analogue in dimension 1, i.e. we compute $W$ given by \eqref{Wroi1} assuming that the point configuration is periodic with respect to $\mathbb{T}_N:=\mathbb{R}/(N\mathbb{Z})$. We assume that there are $n=N$ points $a_1, \dots, a_N$ in $\mathbb{T}_N$, hence $W$ is computed with respect to the average density $1$, i.e. $m=1$. There exists a unique (up to a constant) $H_{\{a_i\}}$ satisfying
$$-\Delta H_{\{a_i\}}= 2\pi \(\sum_{i=1}^n \delta_{(a_i,0)}-{\delta_\mathbb{R}}\)$$ in $\mathbb{R}^2$ and which is $N$-periodic with respect to the first variable, i.e $H_{\{a_i\}}(x+ N, y)=H_{\{a_i\}} (x, y)$ for every $(x, y) \in \mathbb{R}^2$. We then set $j_{\{a_i\}}= - \nab^{\perp} H_{\{a_i\}}$.
Then, as in \cite{ma1d} we have
\begin{lem}
Let $a_1, \dots , a_N$ be $N$ points on $\mathbb{T}_N=\mathbb{R}/(N \mathbb{Z})$. Let $j_{\{a_i\}}$ be as above.
Then $W(j_{\{a_i\}})$ as defined in \eqref{Wroi1} is a function of $a_1, \dots, a_N$ only, which is equal to
\begin{equation}\label{Wtor1d}
W_N(a_1, \dots, a_N)= \frac{\pi}{N} \sum_{i \neq j} G(a_i-a_j, 0) +\pi R \end{equation}
where
$G(z)$ is the Green function of $\mathbb{T}_N \times \mathbb{R}$ (only its restriction to the real axis enters \eqref{Wtor1d}), more precisely the solution to
\begin{equation}\label{G3}
- \Delta_z G(z)=2\pi \(\delta_{(0,0)} - \frac{1}{N} \delta_{\mathbb{T}_N}\) \qquad z\in \mathbb{T}_N \times \mathbb{R} \end{equation} with $\Xint-_{\mathbb{T}_N} G(x,0)\, dx=0$, and $R$ is a constant, equal to $$\lim_{x\to 0} \( G(x,0) + \log |x|\).$$ Here $\delta_{\mathbb{T}_N} $ is the distribution defined over $\mathbb{T}_N\times \mathbb{R}$ by $\int \delta_{\mathbb{T}_N} \phi= \int_{\mathbb{T}_N} \phi(x,0)\, dx.$
\end{lem}
\begin{proof}The proof is analogous to that of Lemma \ref{Wnper}.
First we have
\begin{equation}\label{f5}
W(j_{\{a_i\}})= \frac{1}{|\mathbb{T}_N|} \(\lim_{\eta \to 0} \frac{1}{2} \int_{(\mathbb{T}_N\times \mathbb{R} ) \backslash \cup_{i=1}^N B((a_i,0), \eta)} |\nabla H_{\{a_i\}} |^2 + \pi N \log \eta\).\end{equation}
We again write
\begin{equation}\label{f6} H_{\{a_i\}}(z) = \sum_{i=1}^n G(z- (a_i,0) ), \end{equation} with $G(z)= -\log |z|+R(z)$, $R$ a continuous function.
Using Green's formula we have
\begin{multline*}
\int_{(\mathbb{T}_N\times \mathbb{R} )\backslash \cup_{i=1}^N B((a_i,0) , \eta)} |\nabla H_{\{a_i\}} |^2= -2\pi \int_{(\mathbb{T}_N\times \mathbb{R} ) \backslash \cup_{i=1}^N B((a_i,0), \eta)} H_{\{a_i\}} \delta_{\mathbb{T}_N} \\+ \sum_{i=1}^N \int_{\partial B((a_i,0) , \eta)} H_{\{a_i\}} \frac{\partial H_{\{a_i\}} }{\partial n} .\end{multline*}
From
\eqref{f6} we have $$\lim_{\eta\to 0} \int_{(\mathbb{T}_N\times \mathbb{R} ) \backslash \cup_{i=1}^N B((a_i,0), \eta)} H_{\{a_i\}} \delta_{\mathbb{T}_N} = \sum_{j=1}^N \int_{\mathbb{T}_N} G(x-a_j,0) \, dx =0$$ by choice of $G$.
The rest of the proof is exactly as in Lemma \ref{Wnper}, inserting the splitting of $G$ and \eqref{G3}.
\end{proof}
On the other hand we shall see below that we have the following explicit formula for $G$ restricted to the real axis:
\begin{equation}
\label{Glogsin}G(x,0)= - \log \left|2\sin \frac{\pi x}{N}\right|.
\end{equation}
Note that a consequence of this and the zero average condition on $G$ is that
\begin{equation}\label{log0}
\int_{\mathbb{T}_N} \log \left|2\sin \frac{\pi v}{N}\right|\, dv=0.\end{equation}
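The identity \eqref{log0} is classical (after substituting $u=\pi v/N$ it reduces to $\int_0^\pi \log(2\sin u)\,du=0$); as a sanity check it can also be confirmed numerically, e.g.:

```python
# Numerical check of (log0): the integral of log|2 sin(pi v / N)| over [0, N] vanishes.
# The integrand has integrable log singularities at the endpoints, which adaptive
# quadrature handles well.
import math
from scipy.integrate import quad

N = 7.0
val, _ = quad(lambda v: math.log(abs(2.0 * math.sin(math.pi * v / N))), 0.0, N, limit=200)
```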
The previous lemma and \eqref{Glogsin} lead us to the following
\begin{pro}
Let $a_1, \dots, a_N$ be $N$ points on $\mathbb{T}_N= \mathbb{R}/(N\mathbb{Z})$ and let $W_N$ be defined as in \eqref{Wtor1d}. We have
\begin{equation}\label{WNlog}
W_N(a_1, \dots, a_N)= -\frac{\pi}{N} \sum_{i \neq j} \log \left|2\sin \frac{\pi(a_i - a_j)}{N} \right|- \pi \log \frac{2\pi}{N}.\end{equation}
\end{pro}
\begin{proof} The first step is to prove \eqref{Glogsin}.
This is done by solving \eqref{G3} in Fourier series/transform. We choose the following normalization for Fourier transforms and series:
$$\widehat{f}(\xi)= \int f(x) e^{-2i\pi x \cdot \xi }\, dx$$
$$c_k(f)= \int_{\mathbb{T}_N} f(x) e^{-\frac{2i\pi kx}{N}} \, dx.$$
Then the Fourier inversion formula is $f(x) =\int \widehat{f}(\xi) e^{2i\pi x\cdot \xi} \, d\xi$ and respectively
$$f(x)= \frac{1}{N} \sum_{k \in \mathbb{Z}} c_k(f) e^{\frac{2i\pi k}{N} x} .$$
Since $G$ is $\mathbb{T}_N$-periodic in its first variable, we may define its Fourier transform as a Fourier series in the first variable and a Fourier transform in the second, i.e., for $m \in \mathbb{Z}$ and $\xi \in \mathbb{R}$,
$$\widehat{G}(m , \xi)= \int_{x\in \mathbb{T}_N} \int_{y \in \mathbb{R}} G(x,y) e^{-\frac{2i \pi mx}{N}} e^{-2i \pi y \xi} \, dx\, dy.
$$
If $G$ solves \eqref{G3} then by Fourier transform, $\widehat{G}$ has to satisfy
$$4\pi^2 \(\frac{m^2}{N^2}+\xi^2\)\widehat{G}(m , \xi)= 2\pi \widehat{\delta_{(0,0)}} - \frac{2\pi}{N} \widehat{\delta_{\mathbb{T}_N}}. $$
It is straightforward to establish that
$\widehat{\delta_{\mathbb{T}_N}} =N \delta_m^0$, where $\delta_m^0$ is by definition equal to $1$ if $m =0$ and $0$ otherwise.
Combining these facts,
we obtain
$$ \widehat{G}(m,\xi) = \frac{1 - \delta_m^0}{2\pi \( \frac{m^2}{N^2}+\xi^2\)}
\qquad \text{for } (m, \xi) \neq (0,0).$$
The indetermination of $\widehat{G}$ at $(0,0)$ corresponds to the fact that $G$ is only determined by \eqref{G3} up to a constant.
By Fourier inversion, it follows that
\begin{multline*}
G(x,y)= \frac{1}{N} \sum_{m \in \mathbb{Z}} \int_{\xi \in \mathbb{R}}
\frac{1 - \delta_m^0 } {2\pi ( \frac{m^2}{N^2}+\xi^2 ) } e^{2i\pi \frac{m}{N} x + 2i \pi y \xi} \, d\xi+c
= \frac{1}{N} \sum_{m \in \mathbb{Z}^* } \int_{ \mathbb{R}}
\frac{e^{2i\pi \frac{m}{N} x + 2i \pi y\xi }} {2\pi ( \frac{m^2}{N^2}+\xi^2)} \, d\xi+c.
\end{multline*}
Using the formula $\int_0^\infty \frac{\cos (bx)}{x^2+z^2} \, dx= \frac{\pi e^{-|b|z} }{2z}$ (cf. \cite{pbm1}) with $b=2\pi y$ and $z=|m|/N$, we arrive at
\begin{equation}
\label{Gcos}
G(x,y)= \sum_{m=1}^\infty \frac{\cos (2\pi\frac{ m}{N}x) }{m} e^{-2\pi |y|\frac{|m|}{N} }+c.\end{equation}
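The tabulated cosine-transform identity invoked above can itself be double-checked numerically; SciPy's Fourier-type quadrature (`weight='cos'`) is designed for exactly such oscillatory integrals. A small sketch (the sample values of $b$ and $z$ are arbitrary):

```python
# Check the identity int_0^infty cos(b x) / (x^2 + z^2) dx = pi e^{-|b| z} / (2 z),
# used above with b = 2 pi y and z = |m| / N.
import math
import numpy as np
from scipy.integrate import quad

def lhs(b, z):
    # QAWF quadrature for int_0^infty f(x) cos(b x) dx
    val, _ = quad(lambda x: 1.0 / (x * x + z * z), 0.0, np.inf, weight='cos', wvar=b)
    return val

def rhs(b, z):
    return math.pi * math.exp(-abs(b) * z) / (2.0 * z)

checks = [abs(lhs(b, z) - rhs(b, z)) for b, z in [(2.0 * math.pi, 0.4), (1.0, 1.0)]]
```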
We next particularize to the case $y=0$, and use Clausen's formula (cf. \cite[Chap. 4]{lewin})
\begin{equation}\label{clausen}
\sum_{k=1}^\infty\frac{\cos (kx)}{k} = - \log \left|2\sin \frac{x}{2}\right| \quad \text{for} \ 0<x<2\pi,
\end{equation} and thus
we find $$G(x,0)= - \log \left|2\sin \frac{\pi x}{N}\right|+ c.$$
The constant $c$ should be chosen so that $\Xint-_{\mathbb{T}_N} G(x,0)\, dx=0$, which imposes
$c=0$, in view of the relation
$\int_{\mathbb{T}_N} \log \left|2\sin \frac{\pi x}{N} \right|\, dx =0$, a direct consequence of \eqref{clausen}.
This establishes \eqref{Glogsin}. In addition the value of $R$ follows: $R= \lim_{x\to 0} G(x,0)+ \log |x|= - \log \frac{2\pi}{N}$ and inserting into \eqref{Wtor1d}, the result follows.
\end{proof}
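Clausen's formula \eqref{clausen}, which underlies \eqref{Glogsin}, also lends itself to a direct numerical check; the partial sums converge like $O(1/K)$ at fixed $x$, so a large truncation suffices:

```python
# Numerical check of Clausen's formula (clausen):
# sum_{k=1}^infty cos(k x) / k = -log|2 sin(x/2)| for 0 < x < 2 pi.
import math

def clausen_partial(x, K):
    # K-term partial sum of the Fourier series
    return sum(math.cos(k * x) / k for k in range(1, K + 1))

x = 1.3
closed_form = -math.log(abs(2.0 * math.sin(0.5 * x)))
approx = clausen_partial(x, 200_000)
```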
\subsection{New definition for a general point configuration in the plane/line}
In order to define $W$ for a general configuration of points, without referring to the current they generate, we choose to base ourselves on the formulas \eqref{WNE} and \eqref{WNlog} of the previous subsection.
However, to get rid of some constants, we use a different normalization\footnote{The choice of normalization in dimension 1 is based on \eqref{minen1d}.} and, for a given family of points $\{a_i\}$, we define
\begin{equation}
\label{wc1d2}\mathcal{W}_N = - \frac{1}{N} \sum_{i\neq j, a_i, a_j \in [0, N]} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N\qquad \text{in dimension 1}\end{equation} respectively
\begin{equation}\label{wc2d2}
\mathcal{W}_N = \frac{1}{2\pi N^2 } \sum_{i\neq j, a_i, a_j \in [0,N]^2} E_N(a_i-a_j) + \log \frac{N}{2\pi \eta(i)^2}\qquad \text{in dimension 2}.\end{equation}
The results of the previous sections suggest trying to define $\mathcal{W}$ as the limit as $N\to \infty$ of these quantities.
More precisely, for a given random point process, $\mathcal{W}_N$ becomes a random variable, and we try to see whether it has a limit as $N \to \infty$.
Again, we do not claim a complete rigorous connection between such a quantity (which only depends on the point configuration) and the original $W$, which we recall, was defined via a vector field $j$. We also emphasize that in \eqref{wc1d2} and \eqref{wc2d2} the number of points in $[0,N]$ resp. $[0,N]^2$ is no longer necessarily equal to $N$ resp. $N^2$.
Let us comment a little on the minimization of $\mathcal{W}_N$. Since our definition has been relaxed, it is no longer clear whether $\mathcal{W}_N$ achieves a minimum. However, in dimension 1,
observing that we still have $\mathcal{W}_N(\mathbb{Z})=0$, we obtain the following statement
(no such result is available in dimension 2):
\begin{lem}
Assume $a_1, \dots, a_k $ are $k$ points in $[0,N]$. Then
\begin{equation}\label{eq:new1}
- \frac{1}{N} \sum_{i\neq j} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N \ge \( 1- \frac{k}{N}\) \log N + \frac{k}{N} \log \frac{N}{k}.
\end{equation}
Thus, for any point configuration $\mathcal{W}_N\ge \left( 1- \frac{k}{N}\right) \log N + \frac{k}{N} \log \frac{N}{k} $,
where $k$ is the number of points in $[0,N]$.
\end{lem}
\begin{proof}
The proof is a simple adaptation of the proof of minimality of the perfect lattice in \cite{ma1d}. Let $a_1,\dots, a_k\in [0,N]$, and assume $a_1<\dots <a_k$.
Let us also denote $u_{1,i}= a_{i+1}-a_i $, with the convention $a_{ k+1}=a_1+ N $. We have $\sum_{i=1}^k u_{1,i}= N$.
Similarly, let
$u_{p,i}= a_{i+p}-a_{i}$, with the convention $a_{k+l}=a_l+N$. We have $\sum_{i=1}^k u_{p,i}= p N$.
By periodicity of $\sin$, we may view the points $a_i$ as living on the circle $\mathbb{R}/(N\mathbb{Z})$. When adding up the terms in $a_i-a_j$ in the sum of \eqref{eq:new1}, we can group them according to the index gap $p=j-i$ taken modulo $k$. This way, there remains
\begin{equation}\label{3.12}
- \frac{1}{N} \sum_{i\neq j} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N=-
\frac{2}{N}\sum_{p=1}^{[k/2]} \sum_{i=1}^k \log \left| 2\sin\frac{ \pi u_{p,i}}{N} \right|+ \log N,\end{equation} where $[\cdot ]$ denotes the integer part.
But the function $\log |2\sin x|$ is strictly concave on $(0,\pi)$. It follows that
$$\frac{1}{k}\sum_{i=1}^k \log \left| 2\sin\frac{ \pi u_{p,i}}{N} \right|\le \log \left|2\sin \( \frac{\pi }{N k} \sum_{i=1}^k u_{p,i} \) \right|= \log \left|2\sin\frac{ p\pi}{k}\right|.$$
Inserting into \eqref{3.12} we obtain
\begin{equation}
\label{3.2}
- \frac{1}{N} \sum_{i\neq j} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N\ge
- \frac{2k}{N} \sum_{p=1}^{[k/2]} \log \left|2\sin\frac{ p\pi}{k}\right|+ \log N.
\end{equation}
On the other hand, we know that $\mathcal{W}_N(\mathbb{Z})=0$ which means that $- \frac{2}{N} \sum_{p=1}^{[ N/2]}N \log \left|2\sin \frac{p \pi} N\right| + \log N= 0$, but also, since this is true for arbitrary integers $N$,
\begin{equation}\label{wdez}
- 2 \sum_{p=1}^{[k/2]} \log \left|2\sin \frac{p \pi} k\right| + \log k=0 .\end{equation}
Inserting into \eqref{3.2} we are led to
\begin{equation}
\label{3.3}
- \frac{1}{N} \sum_{i\neq j} \log \left|2\sin \frac{\pi(a_i-a_j)} N\right| + \log N\ge
- \frac{k}{N} \log k + \log N= \( 1- \frac{k}{N}\) \log N + \frac{k}{N} \log \frac{N}{k}.
\end{equation}
\end{proof}
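To illustrate \eqref{wc1d2} and the lemma, one can evaluate $\mathcal{W}_N$ numerically: for the integer lattice it vanishes exactly, while the lower bound holds for arbitrary configurations (in this sketch we take $k$ odd, for simplicity; the parameters are our own choices):

```python
# Evaluate W_N from (wc1d2) for k points a_1..a_k in [0, N] and compare with
# the lower bound (1 - k/N) log N + (k/N) log(N/k) from the lemma.
import math
import random

def W_N(points, N):
    # double sum over ordered pairs of distinct points
    s = sum(math.log(abs(2.0 * math.sin(math.pi * (a - b) / N)))
            for a in points for b in points if a != b)
    return -s / N + math.log(N)

N = 50
w_lattice = W_N(list(range(N)), N)   # a_i = i: W_N(Z) = 0 exactly

random.seed(0)
k = 31                               # an odd number of points
pts = [random.uniform(0.0, N) for _ in range(k)]
w_random = W_N(pts, N)
bound = (1.0 - k / N) * math.log(N) + (k / N) * math.log(N / k)
```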
\section{Expectation of $\mathcal{W}_N$}\label{sec3}
\subsection{Expectation and 2-point correlation functions}\label{correl}
We now turn to evaluating $\mathcal{W}_N$ for random point processes.
In view of its form \eqref{wc1d2}--\eqref{wc2d2}, the expectation
of the random variable $\mathcal{W}_N$ can be computed from the sole knowledge of the second correlation function of the process.
Indeed, recall that for any $k\ge 1$, the $k$-point correlation function $ \rho_k$ of a random point process in $\mathbb{R}^d$ is
characterized by the property that
\begin{equation}\label{corr}\mathbb{E} \sum_{i_1, \dots, i_k \text{ pairwise distinct}} F(x_{i_1},
\dots, x_{i_k})= \int_{(\mathbb{R}^d)^k} F(x_{1}, \dots, x_{k}) \rho_k(x_{1}, \dots, x_{k})
\, dx_{1} \dots \, dx_{k} ,\end{equation}
where the expectation is with respect to our measure on locally finite subsets
$X=\{x_j\}\subset \mathbb{R}^d$, and $F$ ranges over a suitable space of test functions,
see e.g. \cite{dvj}.
We note here that determinantal processes are a particular class of processes characterized by the fact that
the correlation functions can be expressed as
\begin{equation}\label{rodet}
\rho_k(x_1,\dots, x_k)= \det \(K(x_i, x_j)\)_{i,j \in [1, k]}
\end{equation}
for some kernel $K(x,y)$, see \cite{Sos00}, \cite{Lyo03}, \cite{Joh05}, \cite{Kon05},
\cite{Hou06}, \cite{Sos06}, \cite{Bor11} and references therein. This will be used later.
Here we need specifically the two-point correlation function.
In addition, for our function the formula simplifies when the process
is assumed to be stationary (i.e. translation invariant).
From now on we make the basic
assumption that we are dealing with a translation invariant multiplicity-free random
point process in $\mathbb{R} $ or $\mathbb{R}^2$, of density $1$ (i.e.\ $\rho_1\equiv 1$) with
absolutely continuous correlation functions (hence, \eqref{corr} holds).
If $\rho_2(x,y)$ is the two-point correlation function of such a process, it is of the
form $r_2(x-y)$ since the process is stationary. It is more convenient to work
with the ``second cluster function'' $T_2= 1- r_2$ (we will give a general definition of
the cluster functions in Section \ref{sec4}).
Our basic assumptions thus imply
\begin{equation}\label{basicass}
\rho_1 \equiv 1 , \quad \rho_2(x,y)=1-T_2(x-y) \ \text{for some function } T_2.\end{equation}
By definition of $\rho_2$, the expectation of $\mathcal{W}_N$ is, in dimension 1,
\begin{equation}\label{ew1d}\mathbb{E} \mathcal{W}_N= \frac{1}{N} \int_{[0,N]^2} \log \left|2 \sin \frac{\pi (x-y)}{N} \right|T_2(x-y)\, dx\, dy +\log N \end{equation} (where we have used \eqref{wc1d2} and \eqref{log0}) and
respectively in dimension 2
\begin{equation}\label{ew2d}\mathbb{E} \mathcal{W}_N =- \frac{1}{2\pi N^2} \int_{[0,N]^2 \times [0, N]^2} E_N(x-y) T_2(x-y) \, dx \, dy + \log \frac{N}{2\pi \eta(i)^2} \end{equation} (where we have used \eqref{wc2d2} and \eqref{zeroavE}).
The question is then whether these quantities have a limit as $N \to \infty$. As we show below, this will only be
true under additional assumptions which in particular ensure a sufficient decay of $T_2$.
It would also be most interesting to find natural conditions on the behavior of the random points
(their spacings, etc.) that would guarantee the existence of a limit for $ \mathbb{E}\mathcal{W}_N$.
\subsection{Expectation for one-dimensional processes: theoretical formula}
\begin{theo}\label{th1}
Consider a random point process $\mathcal{X}$ on the real line with
the one-point correlation function $\rho_1(x) $ and a two-point correlation
function $\rho_2(x,y)$, satisfying \eqref{basicass}. Under the following assumptions
\begin{enumerate}
\item[1)] $\sup_{v\in \mathbb{R}} |T_2(v)|<\infty$;
\item[2)] there exists a sequence $\{\alpha_N\}_{N\ge 1}$ such that $\log N \ll \alpha_N \ll N^{1/2 -\varepsilon}$
as $N \to \infty$ (for some $\varepsilon >0$) and uniformly in $ A\in [\alpha_N, N-\alpha_N]$ we have
$$\int_{\alpha_N}^A T_2(v) \log \left|2\sin \frac{\pi v}{N}\right|\, dv
=o(1)\quad \text{as }N\to\infty;$$ \end{enumerate}
the following holds:
\\
- if $\displaystyle\int_{-\infty}^\infty T_2(v)\log |v| \, dv<\infty$ and $ \displaystyle \int_{-\infty}^\infty T_2(v)\, dv = c \neq 1 $, then $\mathbb{E} \mathcal{W}_N \to \infty $ as $N\to \infty$;\footnote{We believe that $\mathcal{W}_N$ should be bounded below or at least that the value $-\infty$ is in fact not taken.}\\
- if $\displaystyle \int_{-\infty}^\infty T_2(v)\, dv =1$ and $1-\displaystyle\int_{-\alpha_N}^{\alpha_N} T_2(v)\, dv=o\bigl((\log N)^{-1}\bigr)$ for $\{\alpha_N\}_{N\ge 1}$ as above,
then $\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N $ exists and is finite if and only if $\int_{-\infty}^\infty T_2(v)\log |v| \, dv$ converges, and if so then
\begin{equation}\label{resth1}
\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N = \log 2\pi + \int_{-\infty}^\infty \log | v| T_2(v) \, dv.\end{equation}
\end{theo}
\begin{remark} \label{2.1}
Condition 2) is satisfied by the stronger one:
\begin{equation}\label{rem2.1}\int_B^\infty |T_2(v)|\, dv= o\( \frac{1}{\log B}\) \quad \text{as} \ B \to +\infty.\end{equation} To see this it suffices to observe that on $[\alpha_N, N-\alpha_N]$ we have $|\sin\frac{\pi v}{N}|\ge|\sin \frac{\pi \alpha_N}{N}|$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{th1}]
In view of \eqref{ew1d} we need to compute
\begin{equation}\label{tocomp}
\lim_{N\to \infty}\frac{1}{N} \int_{[0,N]^2} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy + \log N.\end{equation}
For $(x,y) \in [0,N]^2$ we denote $u=x+y$ and $v=x-y$.
We then split $[0,N]^2$ into the disjoint union of the following domains, see Figure \ref{fig1}, where $\alpha_N$ is as in condition 2):
\begin{eqnarray*}
& \cdot & D_0= \{(x, y) \in [0,N]^2, |v| \le \alpha_N, \alpha_N \le u \le 2N- \alpha_N \}\\
& \cdot & D_{1} = \{(x, y)\in [0,N]^2, v\ge N- \alpha_N \} \\
& \cdot & D_{1'}=\{ (x, y)\in [0,N]^2, v\le -N+ \alpha_N \} \\
& \cdot & D_{2} = \{(x,y) \in [0,N ]^2 , u \ge 2N - \alpha_N\}\\
& \cdot & D_{2'} = \{(x,y) \in [0,N]^2, u \le \alpha_N\}\\
& \cdot & D_3= \{(x,y) \in [0,N ]^2 , \alpha_N \le v \le N-\alpha_N\}\\
& \cdot & D_{3'} = \{(x,y) \in [0,N ]^2 , - N + \alpha_N \le v \le -\alpha_N\}\end{eqnarray*}
\begin{figure}
\begin{center}
\includegraphics{figure-domain.pdf}\caption{Splitting of the domain of integration}\label{fig1}
\end{center}\end{figure}
We evaluate the integral in \eqref{tocomp} over each of these domains successively.
We start with the contribution of $D_1'$.
Making the change of variables $a=\frac{x}{N}, b=\frac{N-y}{N}$ we have
\begin{multline*}
\int_{D_1'} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy
\\= N^2 \int_{a\ge 0, b\ge 0, a+b \le\frac{\alpha_N}{N}} \log |2\sin \pi (a+b)|T_2(N(a+b-1))\, da\, db.\end{multline*}
Using assumption 1) and noting that in $D_{1'}$, $\sin \pi (a+b)=\pi (a+b)+O((a+b)^3)$, and thus
$|\log |2\sin \pi (a+b)||= |\log |2\pi (a+b)||+ O(\frac{\alpha_N^2}{N^2})$,
we deduce
\begin{multline*}
\left|\int_{D_1'} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy \right|\\
\le N^2 \sup |T_2| \int_{a\ge 0, b\ge 0, a+b \le\frac{ \alpha_N}{N} } |\log |2\pi (a+b)||\, da \, db +O\(\frac{\alpha_N^4}{N^2}\).\end{multline*}
Using $\int_0^r \log s \, ds= O( r |\log r|)$, it follows that
\begin{multline}\label{intd1}
\left|\int_{D_1'} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy \right| \\ \le C N^2 \sup |T_2|\frac{\alpha_N^2}{N^2} \left|\log \frac{\alpha_N}{N}\right| +O\(\frac{\alpha_N^4}{N^2}\) =o(N)\end{multline} using $\alpha_N\ll N^{1/2-\varepsilon}.$
The estimates for the domains $D_{1}, D_{2}, D_{2'}$ are similar.
For the contribution over $D_3$, using the change of variables $(x, y) \to (u,v)$ we have
$$ \int_{D_3} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy =
\frac{1}{2} \int_{\alpha_N}^{N-\alpha_N} (2N-2v)\, T_2(v)\log \left|2\sin \frac{\pi v}{N}\right|\, dv,$$
since for fixed $v$ the variable $u$ ranges over an interval of length $2N-2v$. Writing $F(A)=\int_{\alpha_N}^A T_2(v)\log \left|2\sin \frac{\pi v}{N}\right|\, dv$ and integrating by parts in $v$, assumption 2) (whose uniformity in $A$ is used here) shows that this is $o(N)$. The estimate on $D_{3'}$ is completely analogous.
We have thus found that all contributions over $D_{1}, D_{1'}, D_2, D_{2'}, D_{3}, D_{3'}$ are negligible. The behavior of the integral will thus be determined by the contribution of $D_0$.
Changing again the variables $(x,y) $ into $(u,v)$,
we have
$$ \int_{D_0} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy =
\frac{1}{2} (2N-2\alpha_N)\int_{-\alpha_N}^{\alpha_N} T_2(v) \log \left|2\sin \frac{\pi v}{N}\right|\, dv.$$
But in $D_0$ we have $\sin \frac{\pi v}{N}= \frac{\pi v}{N}(1+ O(\frac{\alpha_N^2}{N^2}))$ as $N \to \infty$, hence $\log |2\sin \frac{\pi v}{N}|= \log |\frac{ 2\pi v }{N}|+ O(\frac{\alpha_N^2}{N^2}).$
Therefore,
\begin{multline*}
\int_{D_0} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy \\ = (N -\alpha_N) \Big( - \log N \int_{-\alpha_N}^{\alpha_N} T_2(v)\, dv
+ \int_{-\alpha_N}^{\alpha_N} \log |2\pi v| T_2(v) \, dv \Big) + \sup|T_2| O\(\frac{\alpha_N^3}{N}\).\end{multline*}
Using $\alpha_N \ll N^\frac{1}{2}$, we easily deduce that if $\int_{-\infty}^\infty T_2(v)\, dv \neq 1$ and $\int_{-\infty}^\infty \log |2\pi v|T_2(v)\, dv<\infty$, then
$$\frac{1}{N} \int_{D_0} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy + \log N \to \infty$$as $N \to \infty$, and we conclude as desired.
If $\int_{-\infty}^\infty T_2(v)\, dv =1$, then we may proceed and find
\begin{multline}\label{d0}
\frac{1}{N} \int_{D_0} \log \left|2\sin \frac{\pi (x-y)}{N}\right| T_2(x-y)\, dx\, dy + \log N=
\int_{-\alpha_N}^{\alpha_N} \log |2\pi v|T_2(v)\, dv \\
+ \log N \(1-\int_{-\alpha_N}^{\alpha_N} T_2(v)\, dv\) +O\(\frac{\alpha_N}{N} \log N \) + O\( \frac{\alpha_N^3}{N}\) \end{multline}
and we conclude (returning to \eqref{ew1d} and \eqref{tocomp}, and using the assumptions) that \eqref{resth1} holds.
\end{proof}
\subsection{Specific computations on the line}\label{sec:speci}
In this subsection, we use the result from the previous theorem to compute explicit asymptotic values for $\mathbb{E} \mathcal{W}_N$ for some well-known point processes, namely the homogeneous Poisson process, and the $\beta$-sine processes with $\beta=1,2,4$. (Recall that $\mathcal{W}_N$ was defined in such a way that
for the lattice $\mathbb{Z}$ we have $\mathcal{W}_N = 0$.)
The homogeneous Poisson process satisfies $\rho_1(x)=1$ and $\rho_2(x,y)=1$, hence $T_2=0$. We immediately deduce from \eqref{ew1d} that $\mathbb{E} \mathcal{W}_N = \log N \to + \infty$.
Hence, the Poisson process can be viewed as having the `value of $W$' equal to $+\infty$.
The $\beta$-sine processes for $\beta=1,2,4$ arise in random matrices as the local limits for random matrix ensembles with orthogonal, Hermitian, and symplectic symmetries, see \cite{mehta,agz,forrester} and references therein.
These are stationary processes whose correlations can be computed as follows: introduce the kernels
\begin{align}\label{k2} & K^{(2)}(x,y)= \frac{\sin \pi(x-y)}{\pi(x-y)} &\quad \text{for } \ \beta=2\\
\label{k1} & K^{(1)}(x,y)= \displaystyle\left(\begin{array}{lr}\displaystyle \frac{\sin\pi(x-y)}{\pi(x-y)}
& \displaystyle\frac{\partial }{\partial x} \frac{\sin\pi(x-y)}{\pi(x-y)} \\[2mm]
\displaystyle\frac 1\pi\int_0^{\pi(x-y)} \frac{\sin t}{t}\, dt - \displaystyle\frac{1}{2} sgn(x-y) &
\displaystyle\frac{\sin\pi(x-y)}{\pi(x-y)}\end{array}\right) & \quad \text{for } \ \beta=1\\
\label{k4}& K^{(4)}(x,y)= \displaystyle\left(\begin{array}{lr}\displaystyle\frac{\sin 2\pi(x-y)}{2\pi(x-y)} & \displaystyle \frac{\partial }{\partial x} \frac{\sin 2\pi(x-y)}{2\pi(x-y)}\\[2mm]
\displaystyle\frac{1}{2\pi}\int_0^{2\pi(x-y)} \frac{\sin t}{t}\, dt &\displaystyle
\frac{\sin 2\pi(x-y)}{2\pi(x-y)}\end{array}\right)& \quad \text{for } \ \beta=4\end{align}
where all the indeterminacies $0/0$ at $x=y$ are resolved by continuity. \\
The $\beta=2$ sine process is a determinantal process with kernel $K^{(2)}(x,y)$, thus from \eqref{rodet}, its $2$-point correlation function is given by
\begin{equation}\label{rodet2}
\rho_2(x_1,x_2)= \det \(K^{(2)}(x_i, x_j)\)_{i,j \in [1,2]}.
\end{equation}
The correlation functions for the $\beta=1,4$ sine processes have the form
\begin{equation}\label{qdet}
\rho_n(x_1,\dots, x_n)= \mathrm{qdet} \(K^{(\beta)}(x_i, x_j)\)_{i,j \in [1, n]},\qquad n=1,2,\dots
\end{equation}
where $\mathrm{qdet}$ denotes the quaternion determinant, see e.g. \cite[Section 6.1.1]{forrester}
for a definition. Alternatively, the right-hand side of \eqref{qdet} can be expressed
as the Pfaffian of a closely related matrix, cf. \cite[Proposition 6.1.5]{forrester}.
Random point processes with correlation functions of such form are often called \emph{Pfaffian},
see \cite[Section 10]{Bor11} and references therein.
The three processes above satisfy $\rho_1\equiv 1$,
and their second cluster functions can be easily seen to be given by
\begin{eqnarray}\label{t21}
& T_2^{(2)} (v)& =\(\frac{\sin \pi v}{\pi v}\)^2\qquad \text{for } \ \beta=2\\
\label{t22}
& T_2^{(1)}(v)& = \( \frac{\sin \pi v}{\pi v}\)^2 - \frac{1}{\pi} \frac{\partial }{\partial v} \frac{\sin \pi v}{\pi v} \(\int_0^{\pi v} \frac{\sin t}{t}\, dt- \frac{\pi}{2}sgn(v)\) \qquad \text{for } \ \beta=1\\
\label{t24}& T_2^{(4)} (v) & = \(\frac{\sin 2 \pi v}{2\pi v }\)^2- \frac{1}{2\pi} \frac{\partial }{\partial v} \frac{\sin 2\pi v}{2\pi v}\int_0^{2\pi v} \frac{\sin t}{t}\, dt \qquad \text{for } \ \beta=4
.\end{eqnarray}
\begin{pro} \label{pro23}
The $\beta$-sine processes for $\beta=1,2,4$ satisfy the assumptions of Theorem \ref{th1}, and
\begin{eqnarray*}
&\displaystyle \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N & = 1-\gamma
\qquad \text{for } \ \beta=2\\
& \displaystyle \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N & = 2-\gamma - \log 2
\qquad \text{for } \ \beta=1\\
& \displaystyle\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N & = \frac{3}{2}-\log 2-\gamma\qquad \text{for } \ \beta=4,\end{eqnarray*}
where $\gamma$ is the Euler constant.
\end{pro}
Before giving the proof, we recall some integrals of classical functions that we will need. We state them without proof and refer to \cite{pbm1,pbm2}.
\begin{eqnarray}
\label{dirichlet}
\int_0^{+\infty} \frac{\sin v }{v} \, dv & = & \frac{\pi}{2}
\\
\label{2632}
\int_0^{+\infty} \frac{\sin v}{v} \log v \, dv & = & -\frac{\pi}{2}\gamma
\\
\label{2569} \int_0^{+\infty}\(\frac{\sin v}{v}\)^2 \, dv & = & \frac{\pi}{2}
\\
\label{225} \int_0^{+\infty} \(\frac{\sin v}{v}\)^2 \log v \, dv & = & - \frac{\pi}{2} (\gamma+\log 2 - 1).\end{eqnarray}
These formulas can be found in \cite{pbm1}; they are, respectively, (2.5.3.12), (2.6.32.3), (2.5.6.9), and (2.6.32.7).
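As an independent sanity check (editorial, not from \cite{pbm1}), the two absolutely convergent integrals \eqref{2569} and \eqref{225} can be verified numerically; the sketch below uses a composite Simpson rule with an analytic tail correction beyond a cutoff $B$, and the substitution $v=e^t$ near $0$ to tame the logarithmic singularity:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

B = 1000.0
f1 = lambda v: (math.sin(v) / v) ** 2
f2 = lambda v: f1(v) * math.log(v)

# (2.5.6.9): int_0^inf (sin v/v)^2 dv = pi/2; the tail beyond B contributes
# int_B^inf (1 - cos 2v)/(2 v^2) dv = 1/(2B) + O(1/B^2).
I1 = simpson(f1, 1e-12, B, 400000) + 1.0 / (2.0 * B)
assert abs(I1 - math.pi / 2) < 1e-3

# (2.6.32.7): int_0^inf (sin v/v)^2 log v dv = -(pi/2)(gamma + log 2 - 1).
# On (0,1] substitute v = e^t to absorb the log singularity at 0;
# the tail correction is (log B + 1)/(2B) up to O(log B / B^2).
I2 = (simpson(lambda t: f2(math.exp(t)) * math.exp(t), -40.0, 0.0, 20000)
      + simpson(f2, 1.0, B, 400000)
      + (math.log(B) + 1.0) / (2.0 * B))
assert abs(I2 + (math.pi / 2) * (GAMMA + math.log(2) - 1)) < 1e-3
```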
Finally we need a few more integrals that are based on
$$Si(x)=\int_0^x \frac{\sin t}{t}\, dt , \quad si(x)= Si(x)- \frac{\pi}{2} = -\int_{x}^{+\infty} \frac{\sin t}{t} \, dt$$ and
$$Ei(x)= \int_{-\infty}^x \frac{e^t}{t}\, dt.$$
These are
\begin{eqnarray}
\label{26411}
\int_0^{+\infty} \frac{ si(v) \sin v}{v^2+z^2} \, dv & = &\frac{\pi}{2z}\sinh(z)Ei(-z)\\
\label{26210}
\int_0^{+\infty} \frac{v \, si(v)}{v^2+z^2} \, dv & = & \frac{\pi}{2}\,Ei(-z)\\
\label{264} \int_0^{+\infty}\frac{Si(v) \sin v }{v^2+z^2 } \, dv &=& \frac{\pi}{4}e^{-z}\,\frac{Ei(z)-Ei(-z)}{z}\,,\end{eqnarray}
cf. respectively (2.6.4.11), (2.6.2.10) and (2.6.4.16) in \cite{pbm2}.
\begin{proof}[Proof of Proposition \ref{pro23}]
We start with the simplest case.
\\
{\it Case $\beta=2$:} First it is easy to check that assumptions \eqref{basicass} as well as assumption 1) of Theorem \ref{th1} are verified. Also, we have $T_2(v)=O(|v|^{-2})$ as $v\to \infty$,
so \eqref{rem2.1} holds and assumption 2) follows; moreover $1-\int_{-\alpha_N}^{\alpha_N} T_2=O((\alpha_N)^{-1})=o((\log N)^{-1})$.
According to Theorem \ref{th1} we then have
$$ \lim_{N\to \infty} \mathbb{E}\mathcal{W}_N = \int_{-\infty}^\infty \log |2\pi v|
\(\frac{\sin \pi v}{\pi v}\)^2
\, dv.
$$
From \eqref{2569} and \eqref{225} (after the change of variables $u=\pi v$), we obtain
\begin{equation}\label{intsinlog}
\int_{-\infty}^\infty \left(\frac{\sin\pi v}{\pi v}\right)^2 \log|2\pi v|\, dv=1-\gamma,\end{equation}
hence the result.
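The value \eqref{intsinlog} can also be confirmed by direct numerical integration; the following sketch (an independent editorial check, not part of the argument) substitutes $u=\pi v$ and evaluates $\frac{2}{\pi}\int_0^\infty \left(\frac{\sin u}{u}\right)^2\log (2u)\, du$:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

g = lambda u: (math.sin(u) / u) ** 2 * math.log(2.0 * u)
B = 1000.0
# u = e^t on (0,1] handles the logarithmic singularity at 0; the tail beyond B
# contributes (log(2B) + 1)/(2B) up to O(log B / B^2).
I = (simpson(lambda t: g(math.exp(t)) * math.exp(t), -40.0, 0.0, 20000)
     + simpson(g, 1.0, B, 400000)
     + (math.log(2.0 * B) + 1.0) / (2.0 * B))
# (2/pi) * I should equal 1 - gamma
assert abs((2.0 / math.pi) * I - (1.0 - GAMMA)) < 1e-3
```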
\\
{\it Case $\beta=1$:}
Similarly to the case of $\beta=2$, one checks that $T_2(v)=O(|v|^{-2})$. Indeed, note that $\int_0^{\pi |v|} \frac{\sin t}{t} \, dt - \frac{\pi}{2} = -\int_{\pi |v|}^{+\infty} \frac{\sin t}{t}\, dt$ and, integrating by parts, $\int_{\pi |v|}^{+\infty} \frac{\sin t}{t}\, dt = \frac{\cos (\pi |v|) }{\pi |v|} -\int_{\pi |v|}^{+\infty} \frac{\cos t}{t^2}\, dt = O({|v|^{-1}})$.
Assumptions \eqref{basicass} and 1), 2) are then verified as in the $\beta =2$ case.
Next we check that $\int T_2^{(1)}=1$.
By evenness and integration by parts
\begin{multline*}\int_{-\infty}^\infty T_2^{(1)}(v)\, dv= 2 \int_{0}^\infty
\left( \( \frac{\sin \pi v}{\pi v}\)^2 - \frac{1}{\pi} \frac{\partial }{\partial v} \frac{\sin \pi v}{\pi v} \(\int_0^{\pi v} \frac{\sin t}{t}\, dt- \frac{\pi}{2}\)\right)dv\\
=4\int_0^\infty \( \frac{\sin \pi v}{\pi v}\)^2 \, dv -1 =1.\end{multline*}
The convergence is also fast enough, since $T_2(v) = O(|v|^{-2})$.
According to Theorem \ref{th1} we thus have
\begin{multline*} \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N = \int_{-\infty}^\infty \log | 2\pi v| \( \( \frac{\sin \pi v}{\pi v}\)^2 - \frac{1}{\pi} \frac{\partial }{\partial v}
\frac{\sin \pi v}{\pi v} \(\int_0^{\pi v} \frac{\sin t}{t}\, dt- \frac{\pi}{2}sgn(v)\)\) \, dv\\ =
2 \int_{0}^\infty \log |2\pi v| \( \( \frac{\sin \pi v}{\pi v}\)^2 - \frac{1}{\pi} \frac{\partial }{\partial v} \(\frac{\sin \pi v}{\pi v}-1\) \(\int_0^{\pi v} \frac{\sin t}{t}\, dt- \frac{\pi}{2}\)\) \, dv.
\end{multline*}
Integrating by parts, we are led to
\begin{multline}\label{rfg1}
\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N= 4 \int_{0}^\infty \log (2 \pi v)\(\frac{\sin \pi v}{\pi v}\)^2\, dv+\frac{2}{\pi} \int_0^\infty \(\frac{\sin \pi v}{\pi v}-1\) \( \int_0^{\pi v} \frac{\sin t}{t}-\frac{\pi}{2}\) \frac{dv }{v}\\
-\frac 2\pi\int_0^{+\infty} \log(2\pi v)\,\frac{\sin \pi v}{v}\,dv .\end{multline}
In view of \eqref{intsinlog} the first term on the right-hand side is equal to $2-2\gamma$.
The second is equal to
\begin{multline*} \frac{2}{\pi} \int_0^{+\infty} \(\frac{\sin \pi v}{\pi v}-1\) si(\pi v) \frac{dv}{v} = \frac{2}{\pi} \int_0^{+\infty} \( \frac{\sin u}{u}-1\) si(u) \frac{du}{u}
\\ = \frac{2}{\pi} \lim_{z\to 0} \int_0^\infty \( \frac{si(u) \sin u }{u^2+z^2} - \frac{si(u) u}{u^2+z^2}\) \, du.
\end{multline*}
With \eqref{26411} and \eqref{26210},
$$ \frac{2}{\pi} \int_0^{+\infty} \(\frac{\sin \pi v}{\pi v}-1\) si(\pi v) \frac{dv}{v} = \lim_{z\to 0} Ei(-z) \( \frac{\sinh(z)}{z}-1\).$$
On the other hand, near $x=0$ one has \begin{equation}\label{expei}
Ei(x)=\gamma + \log |x|+ \sum_{n=1}^\infty \frac{x^n}{n n!}, \end{equation} hence the above right-hand side limit is $0$. The second term in \eqref{rfg1} thus vanishes. From \eqref{2632} and \eqref{dirichlet}, the third one is equal to $\gamma-\log 2.$ This concludes the case $\beta=1$.
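Both limits involving $Ei$ (the one just used, and $\lim_{z\to 0} e^{-z}\frac{Ei(z)-Ei(-z)}{z}=2$, used in the case $\beta=4$ below) can be sanity-checked numerically from the series \eqref{expei}; an editorial sketch:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def Ei(x):
    # series \eqref{expei}: Ei(x) = gamma + log|x| + sum_{n>=1} x^n/(n * n!),
    # accurate for the small |x| used here
    s, term = 0.0, 1.0
    for n in range(1, 40):
        term *= x / n
        s += term / n
    return GAMMA + math.log(abs(x)) + s

for z in [1e-2, 1e-3, 1e-4]:
    # Ei(-z)*(sinh z / z - 1) -> 0: the O(log z) growth of Ei(-z) is beaten
    # by the O(z^2) vanishing of sinh(z)/z - 1.
    assert abs(Ei(-z) * (math.sinh(z) / z - 1.0)) < 10.0 * z
    # e^{-z} (Ei(z) - Ei(-z))/z -> 2: gamma and log|z| cancel, and only the
    # odd terms of the series survive.
    assert abs(math.exp(-z) * (Ei(z) - Ei(-z)) / z - 2.0) < 10.0 * z
```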
\\
{\it Case $\beta=4$:}
Assumptions \eqref{basicass} and 1) are easily verified; we proceed to 2).
We may write
$$T_2(v)= \( \(\frac{\sin 2\pi v}{2\pi v}\)^2 - \frac{1}{2\pi} \frac{\partial }{\partial v} \frac{\sin 2\pi v}{2\pi v} \( \int_0^{2\pi v} \frac{\sin t}{t}\, dt - \frac{\pi}{2} sgn(v) \) \) - \frac{1}{4}\frac{\partial}{\partial v} \frac{\sin 2\pi v}{2\pi v}sgn(v)
.
$$
The first part is $O(|v|^{-2})$ just as in the case above, so it remains to check that
$$\int_{\alpha_N}^A \frac{\partial }{\partial v}\frac{\sin 2\pi v}{2\pi v} \log \left|2\sin \frac{\pi v}{N}\right|\, dv=o(1)$$ uniformly in $A\in [\alpha_N, N-\alpha_N].$
Integrating by parts, we have
\begin{multline*}
\int_{\alpha_N}^A \frac{\partial }{\partial v}\frac{\sin 2\pi v}{2\pi v} \log \left|2\sin \frac{\pi v}{N}\right|\, dv\\
= \frac{\sin 2\pi A}{2\pi A} \log \left|2\sin \frac{\pi A}{N}\right|- \frac{\sin 2\pi \alpha_N}{2\pi \alpha_N}\log \left|2\sin \frac{\pi \alpha_N}{N}\right| - \int_{\alpha_N}^A \frac{\sin 2\pi v}{2 v} \frac{\cos \frac{\pi v}{N}}{N\sin \frac{\pi v}{N}}\, dv.\end{multline*}
In $[\alpha_N, A]$ we may bound $\sin \frac{\pi v}{N} $ from below by $\sin \frac{\pi \alpha_N}{N}$, which is asymptotically
equivalent to $\frac{\pi \alpha_N}{N}$ as $N \to \infty$. Hence the integral on the right-hand side may be bounded by
$\frac{C}{\alpha_N}\int_{\alpha_N}^N\frac{dv}{v}\le \frac{C\log N}{\alpha_N} =o(1)$, since $\log N \ll \alpha_N$ by assumption 2).
The other terms are also easily found to be $o(1)$ by a similar argument.
We next check that $\int T_2=1$ with fast enough convergence.
By evenness and integration by parts
\begin{multline*}\int_{-\alpha_N}^{\alpha_N} T_2^{(4)}(v)\, dv= 2 \int_{0}^{\alpha_N}\left(
\( \frac{\sin 2\pi v}{2\pi v}\)^2 - \frac{1}{2\pi} \frac{\partial }{\partial v} \frac{\sin 2 \pi v}{2 \pi v} \(\int_0^{2\pi v} \frac{\sin t}{t}\, dt\)\right) dv\\
=4\int_0^{\alpha_N} \( \frac{\sin 2\pi v}{2\pi v}\)^2 \, dv - \frac{1}{2\pi^2} \frac{\sin 2\pi \alpha_N}{\alpha_N} \int_0^{2\pi \alpha_N} \frac{\sin t}{t}\, dt =1+O((\alpha_N)^{-1}).\end{multline*}
According to Theorem \ref{th1} we thus have
\begin{equation*} \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N = \int_{-\infty}^\infty \log |2\pi v|
\(
\(\frac{\sin 2 \pi v}{2\pi v }\)^2- \frac{1}{2\pi} \frac{\partial }{\partial v} \frac{\sin 2\pi v}{2\pi v}\int_0^{2\pi v} \frac{\sin t}{t}\, dt\)
\, dv.\end{equation*}
Using evenness and integration by parts as above, we find
\begin{equation*}
\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N
=4 \int_0^\infty \(\frac{\sin 2 \pi v}{2\pi v }\)^2 \log | 2\pi v|\, dv
+\frac{1}{\pi} \int_0^\infty \frac{\sin 2\pi v}{2\pi v^2}\( \int_0^{2\pi v} \frac{\sin t}{t}\, dt\) \, dv.\end{equation*}
By \eqref{225} the first term on the right-hand side is equal to $-\gamma - \log 2 +1$. By change of variables, the second term is equal to
$$\frac{1}{\pi} \int_0^{+\infty} \frac{Si(u) \sin u}{u^2}\, du= \frac{1}{\pi} \lim_{z\to 0} \int_0^{+\infty} \frac{Si(u) \sin u}{ u^2+z^2}\, du= \frac{1}{4} \lim_{z\to 0} e^{-z} \frac{Ei(z)-Ei(-z)}{z} $$
by \eqref{264}. Combining with \eqref{expei} we find that the second term is equal to $\frac{1}{2}$ and we conclude the proof.
\end{proof}
As expected $\lim_{N\to \infty}\mathbb{E} \mathcal{W}_N$ decreases as $\beta=1,2,4$ increases.
\subsection{Expectation for two-dimensional processes: theoretical formula}
In the plane, the computations are easier because we can take advantage of the fast (exponential) decay of the correlation kernels.
\begin{theo}\label{th2}
Consider a random point process $\mathcal{X}$ in the plane, with a one-point correlation function $\rho_1(x) $ and a two-point correlation function $\rho_2(x,y)$, satisfying \eqref{basicass}. We identify the plane with the complex plane $\mathbb{C}$. Under the assumption
$$\int_{\mathbb{R}^2} |v|^k |T_2(v)|\, dv <\infty \qquad \text{for } \ k=1,2,3;$$
the following holds:
\\
- if $\displaystyle \int_{\mathbb{R}^2} T_2(v)\, dv=c \neq 1 $, we have $\mathbb{E}\mathcal{W}_N \to \infty $ as $N\to \infty$;\\
- if $\displaystyle \int_{\mathbb{R}^2} T_2(v)\, dv =1$ and $1-\int_{[-N,N]^2} T_2(v)\, dv= o((\log N)^{-1})$, then $\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N $ exists and is finite if and only if $\int_{\mathbb{R}^2} T_2(v)\log |v| \, dv$ converges, and if so, then
\begin{equation}\label{resth2}
\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N = \int_{\mathbb{R}^2} \log | v|\, T_2(v) \, dv.\end{equation}
\end{theo}
\begin{proof}
Returning to \eqref{ew2d} we have to compute
$$\lim_{N\to \infty} -\frac{1}{2\pi N^2}\int_{[0,N]^2 \times [0,N]^2 } E_N(x-y) T_2(x-y)\, dx \, dy +
\log \frac{N}{2\pi \eta(i)^2}.$$
Making the change of variables $(u,v)= (x+y,x-y)$, we have
$$\int_{[0,N]^2 \times [0,N]^2}E_N(x-y) T_2(x-y)\, dx \, dy = \frac{1}{4} \int_{v\in [-N,N]^2}\int_{ u \in S_N(v)}E_N(v) T_2(v)\,du \, dv,$$
where $S_N(v)=\{ x+y:\ x\in [0,N]^2,\, y\in [0,N]^2,\, x-y=v\}$. We may compute that
$|S_N(v)|= 4(N-|v_1|)(N-|v_2|)= 4N^2 - 4N|v_1|- 4N|v_2|+4|v_1||v_2|,$
so
\begin{multline}\label{et1}
\int_{[0,N]^2\times [0,N]^2} E_N(x-y)
T_2(x-y)\, dx \, dy \\ = \int_{[-N,N]^2} (N-|v_1|)(N-|v_2|)\, E_N(v) T_2(v)\, dv.\end{multline}
Next we return to \eqref{klf} where $f$ is given by \eqref{f} and perform an asymptotic analysis as $N\to \infty$.
We have
$$p^{1/2}-p^{-1/2}= e^{i\pi \frac{\overline{x}}{N} } - e^{-i \pi \frac{\overline{x}}{N}}=
2i \pi \frac{\overline{x}}{N}+O\(\frac{|x|^2}{N^2}\),$$
while, since $p=1+ O(\frac{|x|}{N})$ we may write (with $q=e^{-2\pi}$)
$$
q^{1/12}\prod_{k\ge 1} (1-q^k p)(1-q^k/p)= \( q^{1/12} \prod_{k \ge 1} (1-q^k) \) (1+ O(p-1))= \eta(i)^2 + O\left(\frac{|x|}{N}\right),$$ hence
$$f\left(\frac{\overline{x}}{N}, i\right) = 2i \pi \frac{\overline{x}}{N} \eta(i)^2 \left(1+O\left(\frac{|x|}{N}\right)\right).$$
Inserting into \eqref{klf} and combining with $e^{-\pi (\mathrm{Im}\, \frac{x}{N})^2 }=1+O(\frac{|x|^2}{N^2})$ we obtain
\begin{equation}\label{asE}
E_N(x)= -2\pi \log |x|-2\pi \log \frac{2\pi \eta(i)^2 }{N}+O\left( \frac{|x|}{N}\right)\quad \text{as} \ N \to \infty.\end{equation}
Inserting this into \eqref{et1}, we are led to
\begin{multline*}
\int_{[0,N]^2\times [0,N]^2 } E_N(x-y) T_2(x-y)\, dx \, dy \\
= \int_{[-N,N]^2 } (N-|v_1|)(N-|v_2|) \( -2\pi \log |v|-2\pi \log \frac{2\pi \eta(i)^2 }{N}+O\left( \frac{|v|}{N}\right)\) T_2(v) \, dv.\end{multline*}
Therefore
\begin{multline*} - \frac{1}{2\pi N^2} \int_{[0,N]^2 \times [0,N]^2} E_N(x-y) T_2(x-y) \, dx\, dy + \log \frac{N}{2\pi \eta(i)^2} \\= \log \frac{N}{2\pi \eta(i)^2} \( 1-\int_{[-N,N]^2 }\(1 +O\left(\frac{|v|}{N}\right) +O\left(\frac{|v|^2}{N^2}\right)\) T_2(v)\,dv \)\\
+ \int_{[-N,N]^2} \( \log |v| + O\left(\frac{|v|}{N}\right) \)\(1 +O\left(\frac{|v|}{N}\right) +O\left(\frac{|v|^2}{N^2}\right) \) T_2(v) \, dv.\end{multline*}
Using the assumption we have \begin{eqnarray*}
& \int_{[-N,N]^2} \frac{|v|}{N}T_2(v)\, dv=o(1), \\
& \int_{[-N,N]^2 } \frac{|v|^2}{N^2} T_2(v)\, dv=o(1),\\
&\int_{[-N,N]^2 } \frac{|v|^3}{N^3}T_2(v)\, dv=o(1),\\
&\int_{[-N,N]^2 } \frac{|v|}{N} \log |v|T_2(v)\, dv=o(1), \\
&\int_{[-N,N]^2 } \frac{|v|^2}{N^2} \log |v|T_2(v)\, dv=o(1).\end{eqnarray*}
It follows that, as $N \to \infty$,
\begin{multline*} - \frac{1}{2\pi N^2} \int_{[0,N]^2\times [0,N]^2 } E_N(x-y) T_2(x-y) \, dx\, dy + \log \frac{N}{2\pi \eta(i)^2} \\= \log \frac{N}{2\pi \eta(i)^2} \( 1-\int_{[-N,N]^2 } T_2(v)\,dv \)
+ \int_{[-N,N]^2} \log |v| T_2(v) \, dv +o(1).\end{multline*}
The result then easily follows.\end{proof}
\subsection{Specific computations in the plane}
We turn to computing that limit for two specific processes. The first one is
the determinantal random point process with correlation kernel
\begin{equation}\label{kginibre}
K(x,y)= e^{-\frac{\pi }{2} (|x|^2 + |y|^2 - 2x\overline{y})}.\end{equation}
This process arises in random matrices as the local limit of the complex Ginibre ensemble, see e.g. \cite[Proposition 15.2.3]{forrester}, and thus it is sometimes called the Ginibre point process.
From the determinantal structure of the correlation functions, cf. \eqref{rodet},
we have $\rho_2(x,y)=1-|K(x,y)|^2=
1- e^{-\pi |x-y|^2}$ and $T_2(v)=e^{-\pi |v|^2} $.
This easily satisfies all the assumptions of Theorem \ref{th2} (in particular $\int T_2=1$) and we obtain
\begin{pro}
The determinantal process with kernel \eqref{kginibre} satisfies
$$\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N= -\frac{1}{2}\left(\gamma+\log \pi\right).$$
\end{pro}
This statement can be compared to a computation done in \cite{jancovici}, see also
\cite[Ex.15.3.1(iv)]{forrester}.
\begin{proof} According to Theorem \ref{th2} it suffices to compute
$$\int_{\mathbb{R}^2}\log |v| e^{-\pi |v|^2}\, dv= \int_0^\infty \log r \, e^{-\pi r^2 } 2\pi r \, dr=\int_0^\infty \frac{1}{2} (\log s - \log \pi)e^{-s}ds ,$$ using the change of variables $s=\pi r^2$.
We have $\int_0^\infty e^{-s} \log s\, ds=-\gamma$, and the result follows.\end{proof}
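The value can be double-checked numerically; the sketch below (an editorial check) evaluates $\int_0^\infty e^{-s}\log s\, ds=-\gamma$ after the substitution $s=e^t$, which removes the logarithmic singularity at $0$:

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

# int_0^inf e^{-s} log s ds = -gamma, computed as int e^{-e^t} t e^t dt (s = e^t);
# the integrand decays like t e^t as t -> -inf and double-exponentially as t -> +inf.
I = simpson(lambda t: math.exp(-math.exp(t)) * t * math.exp(t), -40.0, 6.0, 50000)
assert abs(I + GAMMA) < 1e-7

# Hence the Ginibre limit (1/2)(I - log pi) = -(gamma + log pi)/2 ~ -0.8610.
limit = 0.5 * (I - math.log(math.pi))
assert abs(limit + 0.8609728) < 1e-6
```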
The second one is the process of the zeros of a Gaussian analytic function (often denoted GAF).
It consists of the random zeros of the analytic function
$\sum_{n=0}^\infty \frac{\xi_n}{\sqrt{n!}} z^n$ when the $\xi_n$ are i.i.d.\ Gaussians suitably normalized, and
it is a stationary process in the plane. The general background can be found e.g. in \cite{Hou06}.
The second cluster function for the process, when the density $\rho_1$ is taken to be $1$, is given according to \cite{forresterh} by
$$T_2(x)=1-h\(\frac{\pi|x|^2}{2}\)$$
where $$h(x)= 1+ \frac{1}{2} \frac{d^2}{dx^2} \( x^2 (\coth x -1 )\) .$$
It is easy to check that the assumptions of Theorem \ref{th2} are satisfied, and we deduce
\begin{pro}
The ``zeros of Gaussian analytic functions'' process satisfies
$$\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N= - \frac{1}{2}(1+\log \pi)
.$$
\end{pro}
\begin{proof}
To check that we may apply Theorem \ref{th2} we first compute
\begin{eqnarray*}
\int_{\mathbb{R}^2} T_2(v)\, dv & = & \int_{\mathbb{R}^2} \(1- h\(\frac{\pi |v|^2}{2}\)\) \, dv\\
& = & \int_0^\infty \( 1- h\(\frac{\pi r^2}{2}\) \) 2\pi r \, dr \\
& = & 2 \int_0^\infty \( 1-h(u)\) \, du\\
& = & - \left[ \frac{d}{dx} ( x^2 (\coth x -1))\right]_0^\infty \\
& = & - \left[ 2x(\coth x-1) + x^2 (1-\coth^2 x)\right]_0^\infty \\
& = & 1
\end{eqnarray*}
where we have used the change of variables $u = \pi r^2/2$ and the asymptotic relation
$ \coth x\sim \frac{1}{x}$ as $x \to 0$.
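This normalization can be verified numerically; in the sketch below (an editorial check), $g(u)=u^2(\coth u-1)$ is differentiated twice by hand, and $1-h(u)=-\frac{1}{2}g''(u)$ is integrated directly:

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3.0

def one_minus_h(u):
    # 1 - h(u) = -(1/2) g''(u) with g(u) = u^2 (coth u - 1); differentiating
    # by hand, g''(u) = 2(coth u - 1) + 4u(1 - coth^2 u) - 2u^2 coth(u)(1 - coth^2 u).
    c = 1.0 / math.tanh(u)
    gpp = 2.0 * (c - 1.0) + 4.0 * u * (1.0 - c * c) - 2.0 * u * u * c * (1.0 - c * c)
    return -0.5 * gpp

# int_0^inf (1 - h(u)) du = 1/2, so int_{R^2} T_2(v) dv = 2 * (1/2) = 1 as above.
# (1 - h(u) -> 1 as u -> 0+ and decays exponentially, so [1e-6, 40] suffices.)
I = simpson(one_minus_h, 1e-6, 40.0, 40000)
assert abs(I - 0.5) < 1e-4
```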
It is also easy to check that the convergence of $\int T_2$ is exponential, hence fast enough, and we may apply Theorem \ref{th2}.
This yields
\begin{eqnarray*}
\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N& =& \int_{\mathbb{R}^2} \log |v|T_2(v)\, dv\\
& = & 2\pi \int_0^\infty \log r\(1-h\(\frac{\pi r^2}{2}\)\) r\, dr \\
& = & 2 \int_0^\infty \log \sqrt{\frac{2u}{\pi}}\, (1- h(u))\, du\\
& = & - \frac{1 }{2} \int_0^\infty\( \log\frac{ 2u}{\pi} \)\frac{d^2}{du^2} (u^2 (\coth u -1)) \, du.\end{eqnarray*}
Let us now compute
\begin{multline*}
\int_\varepsilon^\infty \(\log \frac{2u}{\pi}\) \frac{d^2}{du^2} (u^2 (\coth u -1)) \, du\\
= \left[ \(\log \frac{2x}{\pi}\) \frac{d}{dx} ( x^2 (\coth x -1) ) \right]_{\varepsilon}^\infty - \int_{\varepsilon}^\infty \frac{1}{u} \frac{d}{du} ( u^2 (\coth u -1))\, du \\
= - \log \frac{2\varepsilon}{\pi} \( 2\varepsilon (\coth \varepsilon -1) + \varepsilon^2 (1- \coth^2 \varepsilon)\) - \left[ x ( \coth x -1) \right]_{\varepsilon}^\infty - \int_{\varepsilon}^\infty (\coth u - 1) \, du\\
= - \log \frac{2\varepsilon}{\pi} (1+O(\varepsilon)) + 1+O(\varepsilon) - \left[\log \sinh x - x \right]_{\varepsilon}^\infty.\end{multline*}
Taking the limit $\varepsilon \to 0$, we conclude
$$\lim_{N \to \infty} \mathbb{E} \mathcal{W}_N = - \frac{1}{2}(1+\log \pi) .$$
\end{proof}
It is well known (cf. \cite{ns} and references therein) that
the ``GAF process'' is more ``rigid'' than the ``Ginibre point process'' (cf. also the recent work \cite{gnps}). We have just demonstrated that it also exhibits more order, as measured by the renormalized energy.
\section{Optimization over determinantal processes}\label{optim}
As explained in the introduction and Section \ref{sec:background}, the question of minimizing $W$ is an important one, and open in dimension 2. It thus seems interesting to try to minimize $\lim_{N\to \infty} \mathbb{E} \mathcal{W}_N$ as expressed in \eqref{resth1} and \eqref{resth2}, over a subclass of processes.
In this section we show that we can characterize the minimizer of this expression over the class of
determinantal random point processes whose correlation kernel $K(x,y)$ is Hermitian and
translation invariant, i.e.
$K(x,y)= k(x-y)$ for some function $k$. For those processes, we have $T_2=k^2$. Note, however, that the important
determinantal process with kernel \eqref{kginibre} is not in this class: while all its
correlation functions are translation invariant, the correlation kernel is not.
We prove the following statement. The proof relies on a rearrangement inequality.
\begin{theo}\label{th3}
Let $\cal{K}$ be the class of determinantal processes on the real line, respectively the plane,
with self-adjoint translation-invariant kernels $K(x,y)=k(x-y)$, and $k(v)\in L^2(\mathbb{R}^d)$, $d=1$ or $d=2$, such that
\begin{enumerate}
\item[1)] $\rho_1(x)=k(0)=1$ and $\int_{\mathbb{R}^d} k^2(x)\, dx=1$;
\item[2)] $ \int_{\mathbb{R}^d} \log |x|k^2(x)\, dx<\infty $.
\end{enumerate}
Let ${\cal F}(k)=
\int_{\mathbb{R}^d} \log |x|k^2(x)\, dx $. Then for
any process from $\cal K$ with correlation kernel $K(x,y)=k(x-y)$, we have
$${\mathcal F} (k)\ge {\cal F} ( \widehat{\mathbf{1}_{B}}),$$
where $B$ is the ball centered at $0$ of volume one, and \ $\widehat{\cdot} $ is the Fourier transform.
Thus, on the real line ${\cal F}(k)$ is minimized over $\cal K$ by the $\beta=2$ sine process, while
on the plane the minimizing determinantal process has the kernel given by
$k(v)=\widehat{\mathbf{1}_B}(v)= \frac{J_1(2\sqrt{\pi} |v| ) }{\sqrt{\pi}|v|}$, where $J_1$ is a Bessel function of the first kind.
\end{theo}
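As a quick numerical illustration of the two-dimensional closed form (not part of the proof; the grid resolutions below are arbitrary choices), one can compare a direct Fourier integral over the unit-area disk with the Bessel expression, computing $J_1$ from its integral representation $J_1(x)=\frac{1}{\pi}\int_0^\pi \cos(\theta-x\sin\theta)\,d\theta$:

```python
import numpy as np

def j1(x):
    # Bessel J_1 via the integral representation
    # J_1(x) = (1/pi) * int_0^pi cos(theta - x sin(theta)) d(theta)
    h = np.pi / 4000
    theta = np.arange(h / 2, np.pi, h)          # midpoint rule
    return np.sum(np.cos(theta - x * np.sin(theta))) * h / np.pi

def ft_disk(xi):
    # Fourier transform int_B cos(2 pi xi . x) dx of the indicator of the
    # disk B of area 1 (radius 1/sqrt(pi)), evaluated at the point (xi, 0).
    r = 1.0 / np.sqrt(np.pi)
    h = 2.0 * r / 800
    g = np.arange(-r + h / 2, r, h)             # midpoint grid
    X, Y = np.meshgrid(g, g)
    inside = (X ** 2 + Y ** 2 <= r ** 2)
    return np.sum(np.cos(2 * np.pi * xi * X) * inside) * h * h

for xi in (0.3, 0.7, 1.2):
    bessel = j1(2 * np.sqrt(np.pi) * xi) / (np.sqrt(np.pi) * xi)
    assert abs(ft_disk(xi) - bessel) < 2e-2
```

The agreement is to within the discretization error coming from the pixelated disk boundary.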
\begin{remark} Condition 1) says that our processes have density 1, which we have assumed throughout,
while condition 2) guarantees that $\lim_{N\to\infty} \mathbb{E}\mathcal{W}_N$ is finite, cf. Theorems \ref{th1} and \ref{th2}. The functional $\cal F$ coincides with
$\lim_{N\to\infty} \mathbb{E} \mathcal{W}_N$, provided the decay assumptions are satisfied.
\end{remark}
\begin{remark} Numerical integration shows that in dimension 2, ${\cal F} ( \widehat{\mathbf{1}_{B}})\approx
-0.65$, which is greater than $\lim_{N\to\infty} \mathbb{E}\mathcal{W}_N$ for the Ginibre ensemble
$(=-\tfrac 12(\gamma+\log \pi)\approx -0.86)$ and for the zeroes of the Gaussian analytic function
$(=-\tfrac 12(1+\log\pi)\approx -1.07)$. Thus, the latter two processes are ``more rigid''.
\end{remark}
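The numerical constants quoted in the remark can be reproduced in a few lines (an illustrative Python sketch; numpy's euler_gamma is the Euler-Mascheroni constant $\gamma$):

```python
import numpy as np

# Closed-form values of lim E[W_N] quoted in the remark:
ginibre = -0.5 * (np.euler_gamma + np.log(np.pi))  # Ginibre ensemble
gaf = -0.5 * (1.0 + np.log(np.pi))                 # zeros of the GAF

assert abs(ginibre - (-0.86)) < 5e-3
assert abs(gaf - (-1.07)) < 5e-3
# Both lie below the minimum over the translation-invariant determinantal
# class, which is approximately -0.65:
assert ginibre < -0.65 and gaf < -0.65
```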
\begin{proof}[Proof of Theorem \ref{th3}]
Let us denote by $f$ the inverse Fourier transform of $k$, and by $T_K$ the integral operator
corresponding to $K$ via $T_K(\varphi)= \int K(x,y)\varphi(y)\, dy.$ We have
$$T_K(\varphi)= \int k(x-y)\varphi(y)\, dy = k * \varphi.$$
By the Macchi--Soshnikov theorem, see \cite{Sos00}, any self-adjoint translation-invariant
correlation kernel of a determinantal process gives rise to an integral operator
with spectrum between $0$ and $1$. Hence, the spectrum of $T_K$ is in $[0,1]$. Since $T_K$ is also the convolution by $k$, this implies that $f=\check{k}$ takes values in $[0,1]$. Moreover $f \in L^2 $ with $\int |f|^2=\int k^2=1$, and $\int f=k(0)=1$ from assumption 1).
A function $f $ with values in $[0,1]$ which satisfies $\int f=\int f^2=1$ must be (almost everywhere) the characteristic function of a set $A$, denoted $f=\mathbf{1}_A$, with $A$ measurable of measure 1. Writing $k_A$ for the corresponding $k$, we may write
$$k_A(x)= \int e^{- 2i \pi \xi \cdot x} \mathbf{1}_{A}(\xi)\, d\xi.$$
It remains to optimize, over measurable sets $A$ of measure 1, the quantity
\begin{equation}\label{ia}
I(A):= \int_{\mathbb{R}^d} k_A^2(x) \log |x|\, dx.
\end{equation}
Let $A$ be measurable of measure 1, such that $I(A) < +\infty$.
Since $k_A $ is an $L^\infty$ (and in fact continuous) function, for every $\alpha>0$ the integrals
$$\int_{\mathbb{R}^d} k_A^2(x) \frac{1-|x|^{-\alpha}}{\alpha} \, dx$$
also converge, by comparison.
Given any $\tau\in \mathbb{R}$, $(e^{\tau h}-1)/h$ converges to $\tau $ monotonically as $h \to 0$ (it suffices to check that
this function is increasing in $h$). It then follows that in each of the domains $|x|<1$
and $|x|\ge 1$, we have
$\frac{1-|x|^{-\alpha}}{\alpha} \to \log |x|$ as $\alpha \to 0$, monotonically. Splitting the integrals as sums over these two regions,
it follows by the monotone convergence theorem that
\begin{equation}
\label{cvmonotone}
I(A)= \lim_{\alpha\to 0} \int_{\mathbb{R}^d} k_A^2(x) \frac{1-|x|^{-\alpha}}{\alpha} \, dx.\end{equation}
We then remark (in view of the formula $\widehat{f*g}=\hat{f}\hat{g}$) that
$k_A^2(x)$ is the Fourier transform of
$$f_A(x)=\int_{\mathbb{R}^d}\mathbf{1}_A(y)\mathbf{1}_A (y-x)\, dy= |(A+x)\cap A|.$$
We next claim that $f_A$ is a continuous function.
First, consider the case where $A$ is an open set. Then, as $x\to x_0$ we have $\mathbf{1}_A(y) \mathbf{1}_A(y-x)
\to \mathbf{1}_A (y) \mathbf{1}_A(y-x_0)$ almost everywhere, by openness of $A$, while
$|\mathbf{1}_A (y)\mathbf{1}_A(y-x)|\le \mathbf{1}_A(y) $ and $\mathbf{1}_A \in L^1$. The claim is thus true by dominated
convergence theorem.
Second, if $A$ is a general measurable set, by outer regularity of the measure we may approximate
it by an open set $U$ such that $A\subset U$ and $|U\backslash A|<\varepsilon$.
Then it is immediate that for any $x$, $|f_A(x)-f_U(x)|= ||(A+x)\cap A|- |(U+x)\cap U||\le 2\varepsilon$.
Since $f_U$ is continuous and $\varepsilon $ is arbitrary, it follows that $f_A$
has to be continuous too as a uniform limit of continuous functions. The claim is proved.
We also note that $f_A(0)=|A|=1$.
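As a concrete illustration of this Fourier pair (a numerical sketch for the particular choice $A=[-\frac12,\frac12]$, not needed for the proof), one has $k_A(x)=\frac{\sin\pi x}{\pi x}$ and $f_A(t)=\max(0,1-|t|)$, and the relation between $k_A^2$ and $f_A$ can be checked by direct integration:

```python
import numpy as np

# A = [-1/2, 1/2]:  k_A(x) = sin(pi x)/(pi x)  and  f_A(t) = |(A+t) cap A|.
dx = 1e-3
x = np.arange(-400 + dx / 2, 400, dx)   # midpoint grid, symmetric about 0
kA2 = np.sinc(x) ** 2                   # np.sinc(x) = sin(pi x)/(pi x)

def fourier_of_kA2(t):
    # int k_A(x)^2 cos(2 pi t x) dx   (k_A^2 is even, so a cosine suffices)
    return np.sum(kA2 * np.cos(2 * np.pi * t * x)) * dx

for t in (0.0, 0.25, 0.5, 0.8, 1.5):
    triangle = max(0.0, 1.0 - abs(t))   # f_A(t) for the unit interval
    assert abs(fourier_of_kA2(t) - triangle) < 1e-2
```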
The next ingredient is that in dimension 1
\begin{equation}\label{f1d}
\widehat{(1-|x|^{-\alpha})}= \delta- 2 \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2} (2\pi |\xi|)^{\alpha-1}, \end{equation}
while in dimension 2
\begin{equation}\label{f2d}
\widehat{(1-|x|^{-\alpha})}= \delta - \pi^{\alpha-1}\frac{\Gamma(\frac{2-\alpha}{2})}{\Gamma(\frac{\alpha}{2})} |\xi|^{\alpha-2}
\end{equation}
For a reference see e.g. \cite[Chapter 5]{edwards}, or \cite[page 113]{sch}.
Let us continue with the one-dimensional case.
We deduce from the above facts that
\begin{equation}\label{fourier3}
I_\alpha(A):= \int_{\mathbb{R}} k_A^2(x) \frac{1-|x|^{-\alpha}}{\alpha} \, dx= \frac{1}{\alpha}\int_{\mathbb{R}} f_A(\xi)\(
\delta- 2 \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2} (2\pi |\xi|)^{\alpha-1} \)\, d\xi.\end{equation}
The relation can be justified
by convolving $ \delta- 2 \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2} (2\pi |\xi|)^{\alpha-1}$ with a Gaussian
kernel approximating $\delta$ at scale $\varepsilon$,
using the fact that $\int \hat{f} g=\int f \hat{g}$ in the Schwartz class, the continuity of $f_A$
and then letting $\varepsilon \to 0$ on both sides. Moreover, this argument shows that
$\int f_A(\xi) |\xi|^{\alpha-1}\, d\xi $ is convergent.
Using $f_A(0)=1$ and Fubini's theorem, we may rewrite \eqref{fourier3} as
\begin{multline}\label{fourier4}
I_\alpha(A) = \frac{1}{\alpha}\( 1- 2 \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2}
\int_{\mathbb{R}^2} \mathbf{1}_A(y)\mathbf{1}_A(y-x) (2\pi |x|)^{\alpha-1} \, dx\, dy\) \\
= \frac{1}{\alpha}- \frac{2}{\alpha} \Gamma(-\alpha +1) \sin \frac{\pi \alpha}{2}
\int_{\mathbb{R}^2} \mathbf{1}_A(y)\mathbf{1}_A(z) (2\pi |y-z|)^{\alpha-1}\, dz\, dy.
\end{multline} Notice that $\frac{1}{\alpha} \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2} \sim \frac{\pi}{2}$ as $\alpha\to 0$
so for $\alpha$ small enough, $- \frac{2}{\alpha} \Gamma(-\alpha+1) \sin \frac{\pi \alpha}{2}(2\pi |y-z|)^{\alpha-1}$ is increasing in
$|y-z|$.
Now Riesz's rearrangement inequality (see \cite[Theorem 3.7]{ll})
asserts that a quantity of the form of the right-hand side of \eqref{fourier4}
is always decreased by changing $\mathbf{1}_A$ into its symmetric rearrangement
$(\mathbf{1}_A)^*=\mathbf{1}_{A^*}$.
This means that for all $\alpha$ small enough,
$I_\alpha(A) \ge I_\alpha(A^*)$. But $I(A)= \lim_{\alpha\to 0} I_\alpha(A)$ hence the same is true also for $I$
i.e. $I(A) \ge I(A^*).$ The symmetric rearrangement $A^*$ of $A$ is the ball centered at $0$ and of volume $|A|=1$.
We have thus
found that $\int k_A^2(x) \log |x|\, dx$ is minimal when $A$
is the ball centered at $0$ and of volume 1.
In dimension one, the Fourier transform of $\mathbf{1}_{[-\frac{1}{2}, \frac{1}{2}]}$ is $\frac{\sin \pi x}{\pi x}$, which corresponds to the determinantal process with $K(x,y)=\frac{ \sin \pi (x-y)}{\pi(x-y)}$, that is the sine process (for $\beta=2$).
In dimension 2, the argument is exactly parallel, starting again from \eqref{f2d}. \end{proof}
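The rearrangement step can be illustrated numerically on a toy example (a sketch with an arbitrary symmetric decreasing kernel $h(r)=e^{-r}$ standing in for the true one): the quadratic form $\iint \mathbf{1}_A(y)\mathbf{1}_A(z)\,h(|y-z|)\,dy\,dz$ is larger for an interval than for a split set of the same measure, and since this form enters $I_\alpha(A)$ with a negative sign, the interval minimizes $I$:

```python
import numpy as np

def quadratic_form(chi, g, h):
    # Approximates the double integral of chi(y) chi(z) h(|y - z|).
    dy = g[1] - g[0]
    Y, Z = np.meshgrid(g, g)
    return np.sum(np.outer(chi, chi) * h(np.abs(Y - Z))) * dy * dy

g = np.linspace(-6.0, 6.0, 1201)
h = lambda r: np.exp(-r)                          # symmetric decreasing kernel
ball = (np.abs(g) <= 0.5).astype(float)           # A*: interval of length 1
split = ((np.abs(g - 2) <= 0.25) | (np.abs(g + 2) <= 0.25)).astype(float)

# Riesz rearrangement: the symmetric rearrangement maximizes the form; the
# kernel enters I_alpha(A) with a negative sign, so A* minimizes I.
assert quadratic_form(ball, g, h) > quadratic_form(split, g, h)
```

For the interval the value is close to the exact $2(1-1+e^{-1})=2/e$.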
\section{Computations of variance of $\mathcal{W}_N$}\label{sec4}
In Section \ref{sec3} we dealt with the expectation of $W$.
In this section we turn to examining its variance in the sense of computing
$\lim_{N\to \infty} \mathrm{Var}(\mathcal{W}_N) $ for the same specific random
point processes.
In what follows we will need the formalism of (higher) cluster functions to efficiently
deal with the $k$-point correlation functions for $k=2,3,4$; we refer to \cite{tw,faris}
for details and further references on this formalism.
For any nonempty subset $S=\{i_1, \dots, i_k\}$ of $\{1, \dots, N\}$ we write
$\rho_S=\rho_k(x_{i_1}, \dots, x_{i_k})$, where $\rho_k$ is the $k$-point correlation function,
and define the $n$-point cluster function as
\begin{equation}\label{deftn}
T_n(x_1, \dots, x_n)=\sum (-1)^{n-m} (m-1)! \rho_{S_1} \dots \rho_{S_m}\end{equation}
with the sum running over all partitions of $\{1, \dots, n\}$ into nonempty subsets $S_1, \dots , S_m$.
From the $T_n$, the $\rho_n$ can be recovered through the reciprocal formula
\begin{equation}\label{reciprocal}
\rho_n=\sum (-1)^{n-m}T_{S_1}\dots T_{S_m}.
\end{equation}
If a random point process is determinantal (cf. \eqref{rodet}) with correlation kernel $K$,
then (see e.g. \cite{tw}) for any $k\ge 1$
\begin{equation}\label{tdet}
T_k(x_1, \dots, x_k)=\frac{1}{k}\,\sum_{\sigma\in \mathbf{S}_k}K(x_{\sigma(1)}, x_{\sigma(2)})\dots K( x_{\sigma(k)}, x_{\sigma(1)}),\end{equation}
where $\mathbf{S}_k$ denotes the symmetric group on $k$ symbols.
If the correlation functions of a point process are given by quaternion determinants
(cf. \eqref{qdet}) then (see e.g. \cite{tw}) for any $k\ge 1$
\begin{equation}\label{tqdet}
T_k(x_1, \dots, x_k)=\frac{1}{2k}\,Tr\sum_{\sigma\in \mathbf{S}_k}K(x_{\sigma(1)}, x_{\sigma(2)}) \dots K( x_{\sigma(k)}, x_{\sigma(1)}).\end{equation}
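As a sanity check (an illustration with the sine kernel, not used in the sequel), the cyclic-product formula \eqref{tdet} can be compared numerically with the partition-sum definition \eqref{deftn} for $n=2,3$:

```python
import numpy as np
from itertools import permutations

def K(x, y):
    # beta = 2 sine kernel, with the diagonal value K(x, x) = 1
    d = x - y
    return 1.0 if abs(d) < 1e-12 else np.sin(np.pi * d) / (np.pi * d)

def rho(pts):
    # k-point correlation function: det [K(x_i, x_j)]
    return np.linalg.det(np.array([[K(a, b) for b in pts] for a in pts]))

def T_cyclic(pts):
    # (1/k) sum over sigma of K(x_s1, x_s2) ... K(x_sk, x_s1)
    k = len(pts)
    return sum(
        np.prod([K(pts[s[i]], pts[s[(i + 1) % k]]) for i in range(k)])
        for s in permutations(range(k))) / k

x, y, z = 0.3, 1.1, 2.6
# n = 2: the partition sum gives T_2 = rho_1 rho_1 - rho_2
assert abs(T_cyclic([x, y]) - (rho([x]) * rho([y]) - rho([x, y]))) < 1e-10
# n = 3: the partition sum gives T_3 = rho_3 - sum rho_2 rho_1 + 2 rho_1^3
t3 = (rho([x, y, z])
      - rho([x, y]) * rho([z]) - rho([x, z]) * rho([y])
      - rho([y, z]) * rho([x]) + 2 * rho([x]) * rho([y]) * rho([z]))
assert abs(T_cyclic([x, y, z]) - t3) < 1e-10
```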
\begin{lem}\label{prform}
We have
$$\mathrm{Var}\(\sum_{i\neq j}G_N(a_i-a_j)\)= I_1 + \dots + I_6$$ where
\begin{align}
\label{t1} &
I_1= 2 \int_{[0,N]^2} G_N(x-y)^2 \, dx\, dy\\
\label{t2} & I_2= - 4 \int_{[0,N]^3} G_N(x-y)G_N(x-z) T_2(y,z)\, dx\, dy\, dz\\
\label{t3} &
I_3 = 2 \int_{[0,N]^4}
G_N(x-y)G_N(z-t) T_2(x,z)T_2(y,t) \, dx\, dy\, dz\, dt \\
\label{t4} & I_4= - 2\int_{[0,N]^2} G_N(x-y)^2 T_2(x,y)\, dx\, dy\\
\label{t5} &
I_5= 4 \int_{[0,N]^3 } G_N(x-y)G_N(x-z)T_3(x,y,z)\, dx\, dy\, dz\\
\label{t6} &
I_6= - \int_{[0,N]^4 } G_N(x-y)G_N(z-t)T_4(x,y,z,t)\, dx\, dy\, dz\, dt,\end{align}
where $G_N(x)= -\log \left|2\sin \frac{\pi x}{N}\right|$ in dimension 1, resp.
$G_N(x)=\frac{1}{2\pi} E_N(x) $ defined in \eqref{defE} in dimension 2.
\end{lem}
\begin{proof}
Expanding the square, we have
\begin{align}
\label{li1}& \(\sum_{i\neq j, a_i, a_j \in [0,N]} G_N(a_i- a_j) \)^2 =
\sum_{i,j,k,l \ \mathrm{p.d.}} G_N(a_i- a_j)G_N(a_k- a_l) \\
\label{li2}& + \sum_{i,j,l\ \mathrm{p.d.}} G_N(a_i- a_j)G_N(a_i- a_l) +
\sum_{i,j,k \ \mathrm{p.d.}} G_N(a_i- a_j) G_N(a_k- a_i)\\
\label{li3}& + \sum_{i,j,l \
\mathrm{p.d.}} G_N(a_i- a_j) G_N(a_j-a_l)+
\sum_{i,j,k \
\mathrm{p.d.}} G_N(a_i- a_j) G_N(a_k-a_j)\\
\label{li4} & +
\sum_{i\neq j} G_N(a_i-a_j)^2 + \sum_{i\neq j} G_N(a_i- a_j) G_N(a_j-a_i),\end{align} where the sums are still taken over points in $[0,N]$, and p.d. stands for ``pairwise distinct''.
Since $G_N$ is even, it is clear that all the sums in \eqref{li2} and \eqref{li3} are equal, and the sums in \eqref{li4} as well.
Using $k$-point correlation functions (cf. \eqref{corr}), we thus may write
\begin{multline}
\label{avecro}
\mathbb{E} \(\sum_{i\neq j, a_i, a_j \in [0,N]} G_N(a_i- a_j)\)^2\\
= \int_{[0,N]^4}
G_N(x-y)G_N(z-t) \rho_4(x,y,z,t) \, dx\, dy\, dz\, dt \\
+ 4\int_{[0,N]^3} G_N(x-y)G_N(x-z)\rho_3(x,y,z)\, dx\, dy\, dz+ 2\int_{[0,N]^2} G_N(x-y)^2 \rho_2(x,y)\, dx\, dy.
\end{multline}
It is now convenient to express this in terms of the cluster functions $T_k$, using \eqref{reciprocal}, which yields
\begin{eqnarray*}
\rho_2(x,y) & = & T_1(x)T_1(y)-T_2(x,y)=1-T_2(x,y)\\
\rho_3(x,y,z)& =& 1-T_2(x,y)-T_2(x,z)- T_2(y,z)+T_3(x,y,z)\\
\rho_4(x,y,z,t)& =& 1-T_2(x,y) -T_2(x,z)-T_2(x,t) -T_2(y,z)-T_2(y,t)- T_2(z,t) \\
& & + T_3(x,y,z)+T_3(x,y,t)+T_3(x,z,t)+T_3(y,z,t)\\
& & + T_2(x,y)T_2(z,t)+T_2(x,z)T_2(y,t)+T_2(x,t)T_2(y,z) - T_4(x,y,z,t).\end{eqnarray*}
Substituting these relations into \eqref{avecro} and using that $\int_0^N G_N=0$, we obtain (writing the terms in the same order)
\begin{align} \label{avect}
& \mathbb{E}\(\sum_{i\neq j, a_i, a_j \in [0,N]} G_N(a_i- a_j)\)^2 =
\( \int_{[0,N]^2} G_N(x-y)T_2(x,y) \, dx\, dy \)^2 \\
\nonumber & + 2 \int_{[0,N]^4}
G_N(x-y)G_N(z-t) T_2(x,z)T_2(y,t) \, dx\, dy\, dz\, dt \\
\nonumber & - \int_{[0,N]^4 } G_N(x-y)G_N(z-t)T_4(x,y,z,t)\, dx\, dy\, dz\, dt\\
\nonumber & + 4 \int_{[0,N]^3 } G_N(x-y)G_N(x-z)T_3(x,y,z)\, dx\, dy\, dz\\
\nonumber &
-4 \int_{[0,N]^3} G_N(x-y)G_N(x-z)T_2(y,z)\, dx\, dy\, dz\\
\nonumber & + 2 \int_{[0,N]^2} G_N(x-y)^2 \, dx\, dy- 2\int_{[0,N]^2} G_N(x-y)^2 T_2(x,y)\, dx\, dy.\end{align}
Similarly (and as we have seen in the proof of Theorem \ref{th1}) we have
$$\mathbb{E}\sum_{i\neq j}G_N(a_i-a_j)= - \int_{[0,N]^2} G_N(x-y)T_2(x,y)\, dx\, dy$$ and the result follows.\end{proof}
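The cluster expansion of $\rho_4$ used in the proof, with the three products of pair terms $T_2(x,y)T_2(z,t)$, $T_2(x,z)T_2(y,t)$, $T_2(x,t)T_2(y,z)$, can be verified numerically on a determinantal example, computing each $T_k$ by the cyclic formula \eqref{tdet} (a Python sketch; the sine kernel and the sample points are arbitrary choices):

```python
import numpy as np
from itertools import permutations

def K(x, y):
    d = x - y
    return 1.0 if abs(d) < 1e-12 else np.sin(np.pi * d) / (np.pi * d)

def rho(pts):
    # k-point correlation function: det [K(x_i, x_j)]
    return np.linalg.det(np.array([[K(a, b) for b in pts] for a in pts]))

def T(pts):
    # cluster function of a determinantal process (cyclic-product formula)
    k = len(pts)
    return sum(
        np.prod([K(pts[s[i]], pts[s[(i + 1) % k]]) for i in range(k)])
        for s in permutations(range(k))) / k

x, y, z, t = 0.2, 0.9, 1.7, 3.1
expansion = (1.0
    - T([x, y]) - T([x, z]) - T([x, t]) - T([y, z]) - T([y, t]) - T([z, t])
    + T([x, y, z]) + T([x, y, t]) + T([x, z, t]) + T([y, z, t])
    + T([x, y]) * T([z, t]) + T([x, z]) * T([y, t]) + T([x, t]) * T([y, z])
    - T([x, y, z, t]))
assert abs(rho([x, y, z, t]) - expansion) < 1e-10
```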
\subsection{The one-dimensional case}
\begin{theo} \label{th4}
For the sine-$\beta$ processes with $\beta=1,2,4$ as described above, we have
$$\lim_{N\to \infty}\mathrm{Var} (\mathcal{W}_N)=0.$$
\end{theo}
\begin{proof}
Since we already know from Proposition \ref{pro23} that $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ exists, and in view of \eqref{wc1d2}, it suffices
to show that
\begin{equation}\label{vardef}
\lim_{N\to \infty}\frac{1}{N^2}\( \mathbb{E}\( \sum_{i\neq j, a_i, a_j\in [0,N]} G_N(a_i- a_j)\)^2 - \(\mathbb{E}\sum_{i\neq j, a_i, a_j\in [0,N]} G_N(a_i- a_j)\)^2\) =0.\end{equation}
We apply Lemma \ref{prform} and
now deal with all the terms $I_1$ to $I_6$ in \eqref{t1}--\eqref{t6}.
First, we have
\begin{equation}
\label{t1r}I_1= 2\int_{[0,N]^2} G_N(x-y)^2 \, dx\, dy= 2N^2 \int_{[0,1]^2} \( \log \left|2\sin \pi (x-y)\right|\)^2\, dx\, dy.\end{equation}
For $I_2$, using the explicit expression for $G_N$, making the change of variables $x'=x/N, y'=y/N, t=y-z$, and recalling that $T_2$ is translation-invariant since the process is,\footnote{hence by abuse of notation we write $T_2(x,y)=T_2(x-y)$ as for Theorem \ref{th1}}
we find
\begin{multline}\label{t2r}
I_2= - 4 \int_{[0,N]^3} G_N(x-y)G_N(x-z) T_2(y-z)\, dx\, dy\, dz\\
= - 4 N^2 \int_{[0,1]^2}\int_{[N(y-1), Ny]} \log |2\sin \pi (x-y)|\log \left|2\sin \pi
\left(x-y+\frac{t}{N}\right)\right| T_2(t)\, dx\, dy\, dt.\end{multline}
One might be tempted to argue that
$\int f(x-y+\frac{t}{N}) T_2(t)\, dt \to f(x-y)$ since $\int T_2=1$. However, $\log |2\sin\cdot |$ is not regular enough for this reasoning to apply, and $\int T_2$ may converge only conditionally. First notice that the cluster functions given in \eqref{t21}-\eqref{t22}-\eqref{t24} satisfy
\begin{equation}
\label{assv1} |T_2(v)|=O\(\frac{1}{|v|}\), \end{equation}
\begin{equation}
\label{assv2}
\int_{|v|>M} T_2(v)\, dv=O\(\frac{1}{M}\).\end{equation}
Pick two exponents $a,b>0 $ with $a+b<1$. Let us examine the $t$ integral in the right-hand side of \eqref{t2r}. Assume first that $[-N^b, N^b]\subset [N(y-1), Ny]$ and that $|x-y|>N^{-a}$. Note that for this to be satisfied it suffices that
\begin{equation}\label{restric}
(x,y)\in S_{N,a}:=\{ (x, y) \in \mathbb{R}^2:
N^{-a}<y<1-N^{-a}, \ |x-y|>N^{-a}\}.\end{equation}
By the mean value formula, we may write for some $|\theta|<N^b$
\begin{equation*}\log \left|2\sin \pi \left(x-y+\frac{t}{N}\right) \right|- \log \left|2\sin \pi (x-y)\right|= \frac{\pi \cos \pi (x-y+\frac{\theta}{N}) }{N\sin \pi (x-y+\frac{\theta}{N})}=O(N^{a-1})\end{equation*} since we assumed $|x-y|>N^{-a}$.
Thus
\begin{multline}\label{contribpr}\int_{[-N^b, N^b]} \log |2\sin \pi (x-y)|\log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right| T_2(t)\, dt\\
=\int_{[-N^b, N^b]} \log^2|2\sin \pi (x-y)| T_2(t)\, dt +O(N^{a-1} \log N) \\
= \log^2 |2\sin \pi(x-y)| \bigl(1- O(N^{-b})\bigr) +O(N^{a-1} \log N)=
\log^2 |2\sin \pi(x-y)| +O(N^{-b} \log^2 N).\end{multline}
where we have used \eqref{assv1}, then \eqref{assv2} and $|x-y|>N^{-a}$.
We then claim that if $|x-y|>N^{-a}$ we have
\begin{equation}\label{reste1}
\int_{|t|>N^b} \log |2\sin \pi (x-y)|\log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right| T_2(t)\, dt= o(1).\end{equation}
Assuming this, and combining with \eqref{contribpr}
we obtain that
\begin{multline*}\int_{(x,y)\in [0,1]^2\cap S_{N, a} } \int_{[N(y-1), Ny]} \log |2\sin \pi (x-y)|\log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right| T_2(t)\, dx\, dy\, dt \\=
\int_{(x,y)\in [0,1]^2\cap S_{N,a}}\log^2 |2\sin \pi (x-y)|\, dx\, dy+o(1).\end{multline*}
But it is easy to check, since the integrals converge and \eqref{assv1} holds, that the contributions of the set where \eqref{restric} does not hold are $o(1)$ as $N \to \infty$. We may thus conclude that
\begin{equation}\label{resultt1}
I_2 = - 4 N^2\int_{(x,y)\in [0,1]^2}
\log^2 |2\sin \pi (x-y)|\, dx\, dy +o(N^2).\end{equation}
To finish with this $I_2$ term, it remains to prove \eqref{reste1}.
For $\beta=1,2$ this is immediately true since $T_2(v)=O(|v|^{-2})$.
For $\beta =4$, we notice that the same argument that was used above to restrict to $|x-y|>N^{-a}$ can be used to restrict to $|x-y + \frac{t}{N}|>N^{-c}$ (note that the initial integral is symmetric in $y$ and $z$).
Inserting the formula for $T_2$ \eqref{t24}, and neglecting the $O(1/t^2)$ part of $T_2$, we thus have to prove that
$$\int_{|t|>N^{b}, |x-y+\frac{t}{N}| >N^{-c}}
- \frac{\partial }{\partial t}\frac{\sin 2\pi t}{2\pi t}\, \frac{1}{2\pi}\int_0^{2\pi t} \frac{\sin s}{s}\, ds\, \log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right|\, dt=o(1).$$
We integrate by parts and find that the boundary terms are negligible, and there remains to show that
$$\int_{|t|>N^{b}, |x-y+\frac{t}{N}| >N^{-c}}
\frac{\sin 2\pi t}{2\pi t}
\frac{\partial }{\partial t} \( \log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right|\frac{1}{2\pi} \int_0^{2\pi t} \frac{\sin s}{s}\, ds\) \, dt=o(1).$$
If the derivative falls on the second factor, we are back to the $O(1/t^2)$ situation which gives a negligible term, and for the other term we use
$$\frac{\partial}{\partial t} \log \left|2\sin \pi \left(x-y+\frac{t}{N}\right)\right|= O(N^{c-1})$$ by explicit computation, which gives that the integral is $O(N^{c-1} \log N)=o(1)$.
This completes the treatment of $I_2$.
We turn to $I_3$. Using a similar change of variables, we may write this term
\begin{multline*}
I_3= 2N^2 \int_{[0,1]^2} \int_{[N(x-1),Nx]}\int_{[N(y-1), Ny]}
\log \left|2\sin\pi (x-y)\right|\log \left|2\sin \pi \left(x-y+\frac{v-u}{N}\right)\right| \\
\times T_2(u)
T_2(v)\, dx\, dy\, du\, dv.
\end{multline*}
Very similar manipulations to those above show that $\log \left|2\sin \pi \left(x-y+\frac{v-u}{N}\right)\right| $ can be replaced by $\log |2\sin \pi (x-y)|$ with a $o(N^2)$ correction. This leads us to
\begin{equation*}
I_3=2N^2 \int_{[0,1]^2 }\log^2 |2\sin \pi(x-y)|\, dx\, dy+o(N^2).
\end{equation*}
For $I_4$, we have
\begin{equation*}
I_4=-2\int_{[0,N]^2} \log^2 \left|2\sin \frac{\pi(x-y)}{N}\right|T_2(x-y)\, dx\, dy=o(N^2).\end{equation*}
Indeed, off the diagonal $\log^2 \left|2\sin \frac{\pi(x-y)}{N}\right|$ is bounded by $C\log^2 N$ (and its logarithmic singularity at the diagonal is integrable), while $\int_{[0,N]} |T_2|$ is controlled by $\log N $, using \eqref{assv1}. Thus the whole integral is controlled by $N\log^3 N=o(N^2)$.
Adding the above results we find that
$$I_1+I_2+I_3+I_4=o(N^2).$$
It remains to show that $I_5$ and $I_6$ also
give $o(N^2)$ contributions.
The expressions $I_5$ and $I_6$ are estimated using explicit formulas for cluster functions of the sine-$\beta$ processes. First returning to \eqref{k2}--\eqref{k1} we see that for $\beta=1,2,$ the entries of $K^{(2)}$ and $K^{(1)}$ are
$O\bigl(\frac{1}{1+|x-y|}\bigr)$. Combining with \eqref{tdet}--\eqref{tqdet}, it follows that $T_3(x,y,z)= O\(\frac{1}{1+|(x-y)(y-z)(z-x)|}\)$ in both these cases.
For \eqref{k4}, we have
\begin{equation}\label{entryk4}
K^{(4)}(x,y)= \left(\begin{array}{lr}
O\(\frac{1}{1+|x-y|} \)& O \( \frac{1}{1+|x-y|}\)\\
O(1) & O \( \frac{1}{1+|x-y|}\)\end{array}\right).\end{equation}
We thus obtain $$T_3(x,y,z)=O\left(\frac{1}{1+|(x-y)(y-z)|}\right)+O\left( \frac{1}{1+|(y-z)(z-x)|}\right)+O\left(\frac{1}{1+|(x-y)(z-x)|}\right).$$
In \eqref{t5} we may first (as above) remove a small neighborhood of the diagonals, off of which $|G_N(x-y)|$ and $|G_N(x-z)|$ are bounded by $O(\log N)$. It then remains to estimate $$\int_{[0,N]^3} |T_3(x,y,z)|\, dx\, dy\, dz.$$
Replacing $T_3$ by its above estimates, and changing variables to $x+y+z$ and successively two out of $x-y$, $y-z$, and $z-x$, we find $\int_{[0,N]^3} |T_3(x,y,z)|\, dx\, dy\, dz\le O(N \log^3 N)$, and $I_5=o(N^2)$.
We finally turn to $I_6$.
The formula for $T_4 $ is given by \eqref{tdet}--\eqref{tqdet}. Comparing to \eqref{k2}--\eqref{k1}, we see that for $\beta=1,2$, we have $$T_4(x,y,z,t)= O\left( \frac{1}{1+|(x-y)(y-z)(z-t)(t-x)|}\right).$$ The same reasoning as for $I_5$ gives $I_6=o(N^2)$.
For $\beta=4$, in the formula for $T_4$ obtained with \eqref{k4}, in view of \eqref{entryk4}, there are terms which a priori have insufficient decay: they are terms of the form
$$ \frac{\partial}{\partial x_1} \frac{\sin 2\pi(x_1-x_2)}{2\pi(x_1-x_2)}\, \frac{\partial }{\partial x_3}
\frac{\sin 2\pi(x_3-x_4)}{2\pi(x_3-x_4)} \int_0^{2\pi(x_2-x_3)}\frac{\sin s}{s}\, ds \int_0^{2\pi(x_4-x_1)}\frac{\sin s}{s}\, ds,
$$ where $(x_1,\dots, x_4)$ is a permutation of the variables $x,y,z,t$.
This leads to two different types of integrals
\begin{multline}\label{typea}
\int_{[0,N]^4}
\log \left|2\sin \frac{\pi(x-y)}{N}\right|
\log \left|2\sin \frac{\pi(z-t)}{N}\right| \\ \times
\int_0^{2\pi(x-y)}\frac{\sin s}{s}\, ds\int_0^{2\pi(z-t)} \frac{\sin s}{s}\,ds \cdot \frac{\partial }{\partial x} \frac{\sin 2\pi(x-z)}{2\pi(x-z)}\frac{\partial }{\partial y}
\frac{\sin 2\pi(y-t)}{2\pi(y-t)}\, dx\,dy\,dz\,dt\end{multline}
and
\begin{multline}\label{typeb}
\int_{[0,N]^4}
\log \left|2\sin \frac{\pi(x-y)}{N}\right|
\log \left|2\sin \frac{\pi(z-t)}{N}\right| \\ \times
\int_0^{2\pi(x-z)}\frac{\sin s}{s}\, ds\int_0^{2\pi(y-t)} \frac{\sin s}{s}\,ds \cdot \frac{\partial }{\partial x} \frac{\sin 2\pi(x-y)}{2\pi(x-y)}\frac{\partial }{\partial z}
\frac{\sin 2\pi(z-t)}{2\pi(z-t)}\, dx\,dy\,dz\,dt.\end{multline}
For \eqref{typea}, we may again restrict the domain to $|x-y|>N^a$ with $0<a<1$.
Then, integrating by parts in $x$ gives boundary terms which are negligible,
and a new integrand with extra decay, involving $\frac{\partial}{\partial x} \int_0^{2\pi(x-y)} \frac{\sin s}{s}\, ds$ or $\frac{\partial }{\partial x} \log \left|2\sin \frac{\pi(x-y)}{N}\right|$. This leads again to $o(N^2)$ contributions.
For \eqref{typeb}, we may first restrict the integral to $|x-z|>N^a$ and $|y-t|>N^a$, using arguments as above. Then, we may replace $
\int_0^{2\pi(x-z)}\frac{\sin s}{s}\, ds$ and $\int_0^{2\pi(y-t)} \frac{\sin s}{s}\,ds $ by $\frac{\pi}{2}\,\mathrm{sgn}(x-z) $ and $\frac{\pi}{2}\,\mathrm{sgn}(y-t)$ respectively, making only a $o(N^2)$ error.
Then note that the integrand is locally odd as a function of $x-y$, so we can remove the domains $|x-y|<N^b$ and $|z-t|<N^b$ from the integration domain. Finally, integration by parts in $x$ gives additional decay, yielding a $o(N^2)$ contribution.
We conclude that $I_6=o(N^2)$, and the result follows.
\end{proof}
\begin{coro}For all point processes with finite $\lim_{N\to\infty} \mathbb{E}(\mathcal{W}_N)$ and $\lim_{N\to \infty} \mathrm{Var}(\mathcal{W}_N)=0$,
$$\mathcal{W}_N \to \lim_{N\to \infty} \mathbb{E}\mathcal{W}_N \quad \text{as } \ N \to \infty$$
in $L^2(\Omega)$ and thus in probability.\end{coro}
The proof is immediate.
Processes satisfying these assumptions but having
different values of $\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N$ are thus mutually singular; this applies, for instance, to the sine-$\beta$ processes with distinct $\beta\in\{1,2,4\}$.
\subsection{The two-dimensional case}
\begin{theo}
For the determinantal random point process with kernel \eqref{kginibre} we have
$$\lim_{N\to \infty}\mathrm{Var}
(\mathcal{W}_N)=0.$$\end{theo}
\begin{proof} The starting point is again Lemma \ref{prform}.
We note that in view of \eqref{kginibre}, \eqref{rodet} and \eqref{deftn}, all the cluster functions for that process are exponentially
decreasing when viewed as functions of pairwise distances between arguments.
We start with the term \eqref{t2}. Replacing $G_N$ by $\frac{1}{2\pi} E_N$, using the translation invariance, and changing variables as before ($y-z=u$), we find
\begin{multline*}
I_2+2I_1=\\ \frac{2N^2}{\pi}\int_{[0,1]^2}\int_{Ny-[0,N]^2}E_N(N(x-y))\( E_N(N(x-y))- E_N(N(x-y+u/N))\) T_2(u)dx\, dy\, du.
\end{multline*}
Note that, in view of the definition of $E$ (cf. \eqref{eis}--\eqref{defE}), we have $E_N(Nx)=E_1(x)$, and from \eqref{asE}, $E_1$ behaves like $C\log|x|$ near $x=0$ (and similarly near
the points of the lattice $\mathbb{Z}^2$).
Given $\eta>0$ there thus exists $\delta>0$ such that
$$\int_{[0,1]^2 \cap \{|x-y- \mathbb{Z}^2|<\delta\}} \int_{Ny-[0,N]^2}E_1(x-y)\( E_1(x-y)- E_1(x-y+u/N)\) T_2(u)\, dx\, dy\, du<\eta.$$
On the other hand, still in view of the definition \eqref{eis}--\eqref{defE}, $E_1$ is uniformly continuous away from $\mathbb{Z}^2$, hence
we may write, as $N \to \infty$, $$
\int_{[0,1]^2 \backslash \{|x-y- \mathbb{Z}^2|<\delta\}} \int_{Ny-[0,N]^2}E_1(x-y)\( E_1(x-y)- E_1(x-y+u/N)\) T_2(u)\, dx\, dy\, du=o(1).$$
Since this is true for any $\eta$, it follows that
$$I_2+ 2I_1=o(N^2).$$
Similarly,
\begin{multline*}
I_3 -I_1=
\\
\int_{[0,1]^2 }\int_{Nx-[0,N]^2}\int_{Ny-[0,N]^2} E_1(x-y)\(E_1\left(x-y+\frac{u-v}{N}\right) - E_1(x-y)\) T_2(u)T_2(v)\, dx\, dy\, du\, dv.\end{multline*}
The same reasoning shows that this is $o(N^2)$.
For $I_4$, the change of variables $x'=x/N$ and $y'=y/N$ yields
$$I_4 =-2N^2\int_{[0,1]^2} E_1(x-y)^2 T_2(N(x-y))\, dx\, dy.$$
We may take out a $\delta$-neighborhood of the diagonal and its translates by $\mathbb{Z}^2$, off of which $E_1(x-y)$ can be bounded by $C \log |x-y|$ and $T_2(N(x-y))$ by $e^{-C N^2 \delta^2}$. The whole term is thus $o(N^2)$.
We turn to \eqref{t5}.
From \eqref{tdet} and \eqref{kginibre} we find that $|T_3(x,y,z)|\le e^{-C(|x-y|^2+|y-z|^2 + |x-z|^2)}.$
As above we have
$$I_5= 4\int_{[0,1]^3}E_1(x-y)E_1(x-z)T_3(Nx,Ny,Nz)\, dx\, dy\, dz
$$
and as above we may take out the $\delta$-neighborhoods $|x-y|<\delta$, $|x-z|<\delta$, $|y-z|<\delta$ (and their translates by $\mathbb{Z}^2$), outside of which the $E_1$ terms are bounded by logarithms
and $T_3$ by $e^{-C N^2 \delta^2}$. The whole term is thus $o(N^2)$.
A very similar reasoning applies to $I_6$.
Combining all these, we find the result.
\end{proof}
\begin{remark} In view of the proof above, the same result holds for any process such
that the cluster functions decay sufficiently fast away from diagonals, for example exponentially.
\end{remark}
\section{Miscellaneous computations}\label{sc:misc}
In this section we gather various additional computations of expectations, together with related facts.
\subsection{Operations on processes}
In this subsection, we examine the effect on $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ of two common operations on independent processes: superposition and decimation (see \cite{dvj}).
\begin{pro}\label{pro26}
Let $\mathcal{X}_1,\dots , \mathcal{X}_M$ be $M$ independent translation invariant
point processes with density $1$ and two-point correlation functions $\bigl\{\rho_2^{(i)}\bigr\}_{1\le i\le M}$, satisfying the assumptions of Theorem \ref{th1}.
Assume that $\lim_{N\to \infty}\mathbb{E} \mathcal{W}_N(\mathcal{X}_i)<\infty$ for $i=1,\dots,M$. Let $\mathcal{X}$ denote
the superposition of independent processes $\bar{\mathcal{X}}_i$, where $\bar{\mathcal{X}}_i$ denotes the image of the process $\mathcal{X}_i$ under the dilation by factor $M$ of the line. Then, with obvious notation,
$$\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N (\mathcal{X})= \log M + \frac{1}{M} \sum_{i=1}^M \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N (\mathcal{X}_i).$$
\end{pro}
\begin{proof}
Let $T_2^{(i)}$ be the second cluster functions corresponding to $\mathcal{X}_i$.
Let $\bar{\rho}_2^{(i)}$ now denote the second correlation function for the process $\bar{\mathcal{X}_i}$, which has density $1/M$. We have
$\bar{\rho}_2^{(i)}(x,y)= \frac{1}{M^2}\rho_2^{(i)}(\frac{x}{M}, \frac{y}{M})$.
The process $\mathcal{X}$ clearly has density $1$, and its second correlation function is
$$\rho_2(x,y)= \sum_{i \neq j \in [1,M]} \bar{\rho}_1^{(i)} \bar{\rho}_1^{(j)}
+ \sum_{i=1}^M \bar{\rho}_2^{(i)} (x,y) =M(M-1) \frac{1}{M^2}+ \frac{1}{M^2} \sum_{i=1}^M\rho_2^{(i)}\left(\frac{x}{M}, \frac{y}{M}\right).$$
We also denote by $T_2$ the corresponding second cluster function. We thus
have $$T_2(v)= \frac{1}{M} - \frac{1}{M^2} \sum_{i=1}^M \(1- T_2^{(i)} \left(\frac{v}{M}\right)\)=
\frac{1}{M^2} \sum_{i=1}^M T_2^{(i)} \left(\frac{v}{M}\right),$$ and it easily follows, with a change of variables, that
$\int_{- \infty}^\infty T_2(v)\, dv= 1.$
In addition,
\begin{multline}
\int_{-\infty}^{\infty} T_2(v) \log |2\pi v|\, dv=
\int_{-\infty}^\infty \frac{1}{M^2} \sum_{i=1}^M T_2^{(i)} \left(\frac{v}{M}\right)\log |2\pi v|\, dv\\
= \frac{1}{M}\sum_{i=1}^M \int_{-\infty}^\infty T_2^{(i)} (s) \log |2\pi Ms|\, ds=
\log M + \frac{1}{M} \sum_{i=1}^M \int_{-\infty}^\infty T_2^{(i)}(v) \log|2\pi v|\, dv.
\end{multline}
The result follows easily using Theorem \ref{th1}.
\end{proof}
In dimension 2, reproducing the proof, but replacing the dilations by factor $M$ by dilations by factor $\sqrt{M}$, we obtain instead the result:
$$\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N (\mathcal{X})= \frac{1}{2}\log M + \frac{1}{M} \sum_{i=1}^M \lim_{N\to \infty} \mathbb{E} \mathcal{W}_N (\mathcal{X}_i).$$
For example, superposing two independent processes with the same second correlation function
leads to an increase of $\lim_{N\to \infty}\mathbb{E} \mathcal{W}_N$ by $\log 2$ in dimension 1 and
$\frac{1}{2} \log 2 $ in dimension 2. Superposition can thus be seen as ``increasing the disorder".
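The $\log M$ shift can be observed numerically (an illustrative sketch for $M=2$ copies of the $\beta=2$ sine process, for which $T_2(v)=\left(\frac{\sin\pi v}{\pi v}\right)^2$; the truncation of the integrals at $|v|=600$ is an arbitrary numerical choice):

```python
import numpy as np

dv = 1e-3
v = np.arange(-600 + dv / 2, 600, dv)     # midpoint grid avoiding v = 0
T2 = np.sinc(v) ** 2                      # T_2 of the beta = 2 sine process
T2_sup = 0.5 * np.sinc(v / 2) ** 2        # superposition of M = 2 dilated copies

def energy(T):
    # the quantity int T_2(v) log|2 pi v| dv appearing in the proof above
    return np.sum(T * np.log(np.abs(2 * np.pi * v))) * dv

assert abs(np.sum(T2) * dv - 1.0) < 1e-2      # both have int T_2 = 1
assert abs(np.sum(T2_sup) * dv - 1.0) < 1e-2
# the superposition shifts the energy by exactly log M = log 2:
assert abs(energy(T2_sup) - (energy(T2) + np.log(2.0))) < 5e-2
# consistent with the value 1 - gamma for the beta = 2 sine process:
assert abs(energy(T2) - (1.0 - np.euler_gamma)) < 2e-2
```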
We next turn to decimation.
One can define a `random decimation' of a process by erasing each point independently with probability $1/2$. The second correlation function then transforms into
$$R_2(x,y)= \frac{1}{4}\rho_2 (x,y).$$ But the space needs to be rescaled by a factor 2 in order
to maintain a density one, so the correlation function after that is
$$\rho_2'(x,y)= \rho_2(2x, 2y).$$
It is clear in view of Theorems \ref{th1} and \ref{th2} that if a process has finite $\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N$, its decimation will not, since the condition $\int T_2=1$ will be destroyed by this operation.
On the other hand, one can define a `deterministic decimation' by erasing every even (or odd)
point of an (ordered) random point configuration followed by rescaling of the space to keep
the density at $1$. While we cannot say anything about this operation in general, one can observe
what it does in a couple of cases.
It is known, see e.g. \cite[page 66]{agz}, that the $\beta=2$ sine process is the
deterministic decimation of superposition of two $\beta=1$ sine processes, or symbolically
$$
(\text{sine } \beta=2) = \text{decimation}((\text{sine } \beta=1)\sqcup (\text{sine } \beta=1)).
$$
From Proposition~\ref{pro23} we know that $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ is $1-\gamma$ for the left-hand side,
and Proposition~\ref{pro26} says that $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ is $2-\gamma$ for
$(\text{sine } \beta=1)\sqcup (\text{sine } \beta=1)$. Thus, the deterministic decimation decreased
the value of $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ by 1.
Similarly,
$$
(\text{sine } \beta=4) = \text{decimation}((\text{sine } \beta=1)),
$$
and we see that the decimation decreased the value of $\lim_{N\to \infty} \mathbb{E}\mathcal{W}_N$ from $2-\gamma-\log 2$ to
$\frac32-\gamma-\log 2$.
\subsection{Discrete $\beta=2$ sine process}\label{sec5}
The $\beta=2$ discrete sine process was first obtained in \cite{boo} as the bulk scaling limit
of the Plancherel measure for symmetric groups, and it was shown in \cite{bkmm} to be
a universal local scaling limit for a broad family of discrete probabilistic models
of random matrix type with $\beta=2$. The goal of this section is to compute
$\lim_{N\to\infty}\mathbb{E}\mathcal{W}_N$ for the suitably scaled discrete sine process embedded into the real
line. By the construction, this provides an interpolation between the case of the perfect
lattice, for which $\mathcal{W}_N\equiv 0$, and the case of the continuous $\beta=2$ sine process
treated in the previous sections.
Let $\rho \in (0,1)$. The discrete sine process with density $\rho$ is a random point process on $\mathbb{Z}$ with the correlation functions ($k\ge 1$)
$$\rho_k(x_1, \dots, x_k)= \det \left[\frac{\sin \pi \rho(x_i-x_j)}{\pi(x_i-x_j)}\right]_{i,j=1}^k.$$
\begin{pro}
Embed $\mathbb{Z} $ into $\mathbb{R}$ via $n \mapsto \rho n$; this turns the discrete sine process of density $\rho$
into a random point process on $\mathbb{R}$ with density $1$.
For the latter process we have
\begin{equation}\label{formdiscsin}\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N= \rho \log \rho + \frac{2}{\rho} \sum_{u=1}^\infty \( \frac{\sin (\rho \pi u)}{\pi u}\)^2 \log (2\pi \rho u).
\end{equation}
\end{pro}
\begin{proof}
For the calculation, we assume that $\frac{N}{\rho}$ is an integer (the same argument should, however, hold without this assumption, after a more careful examination of the error terms).
The calculation can then be viewed as a discrete version of that of Theorem \ref{th1}.
First, by definition \eqref{wc1d2} of $\mathcal{W}_N$, we have
\begin{equation*}
\mathbb{E} \mathcal{W}_N= - \frac{1}{N} \sum_{i\neq j \in [1, N/\rho]} \rho_2 (i,j)
\log \left|2\sin \frac{\pi \rho (i-j)}{N} \right|+\log N\end{equation*}
and since $\rho_2(i,j)=\rho^2 - \left( \frac{\sin \pi \rho (i-j)}{\pi(i-j)}\right)^2$
we find
\begin{multline}\label{Wcsin}
\mathbb{E} \mathcal{W}_N= \frac{1}{N} \sum_{i \neq j \in [1, N/\rho] } \( \frac{\sin \pi \rho (i-j)}{\pi(i-j)} \)^2 \log \left|2\sin \frac{\pi \rho (i-j)}{N}\right| + \log N \\ - \frac{\rho^2}{N} \sum_{i \neq j \in [1, N/\rho] } \log \left|2\sin \frac{\pi \rho (i-j)}{N}\right| .\end{multline}
Let us first examine the contribution of the last sum.
From the knowledge of $\mathcal{W}_N(\mathbb{Z})=0$, we know that for $K\in \mathbb{Z}$
we have
$$\lim_{K\to \infty} \frac{1}{K}\sum_{i\neq j \in [1,K]} \log \left|2\sin \frac{ \pi (i-j)}{K}\right|+\log K=0.$$
Applying this to $K = N/\rho$ we find that
\begin{equation}\label{knr}
\frac{\rho^2}{N} \sum_{i \neq j \in [1, N/\rho] } \log \left|2\sin \frac{\pi (i-j)}{N/\rho}\right| = - \rho \log (N/\rho) + o(1).\end{equation}
We next turn to the first two terms in \eqref{Wcsin}. As in Theorem \ref{th1}, we expect only the near diagonal terms to contribute, so that
$$\log \left|2\sin \frac{\pi \rho(i-j)}{N}\right|\sim \log \left|\frac{2\pi \rho (i-j)}{N}\right|=\log |2\pi \rho (i-j)|- \log N.$$
Inserting this and \eqref{knr} into \eqref{Wcsin} we find
\begin{equation}\label{Wcsin2}
\mathbb{E} \mathcal{W}_N\sim\frac{1}{N} \sum_{i \neq j \in [1, N/\rho] }
\left( \frac{\sin \pi \rho (i-j)}{\pi(i-j)} \right)^2
( \log \left|2\pi \rho (i-j)\right| -\log N ) -
\rho \log (N/\rho) + \log N +o(1).
\end{equation}
We first focus on
\begin{align}\label{lonn}
\frac{- \log N }{N} \sum_{i \neq j \in [1, N/\rho] } \( \frac{\sin \pi \rho (i-j)}{\pi(i-j)} \)^2 &= - \frac{\log N}{N} \sum_{u=1}^{N/\rho -1} (N/\rho - u) \frac{2\sin^2 (\pi \rho u)}{\pi^2 u^2}\\ \nonumber &= - \frac{\log N}{\rho} \sum_{u=1}^{N/\rho -1}
\frac{1-\cos 2\pi \rho u}{\pi^2 u^2}+o(1).\end{align}
Indeed, we can bound $ \sum_{u=1}^{N/\rho } u\, \frac{2 \sin^2 (\pi \rho u)}{\pi^2 u^2} $ by $\sum_{u=1}^{N/\rho} \frac{1}{u}=O(\log (N/\rho))$, and this multiplied by $\frac{\log N}{N}$ is negligible as $N\to + \infty$.
The last sum then appears as a Fourier series and can be computed explicitly: using $\sum_{u=1}^{\infty} \frac{\cos 2\pi \rho u}{u^2}=\frac{\pi^2}{6}-\pi^2\rho+\pi^2\rho^2$ for $\rho\in[0,1]$, we obtain
\begin{equation}\label{rm1} - \frac{\log N}{\rho} \sum_{u=1}^{N/\rho -1}
\frac{1-\cos 2\pi \rho u}{\pi^2 u^2}= -\frac{\log N}{\pi^2 \rho} \(\pi^2 \rho - \pi^2 \rho^2\) +o(1) = (\rho -1)\log N+o(1).\end{equation}
We next turn to
\begin{equation}\label{rm2}\frac{1}{N} \sum_{i \neq j \in [1, N/\rho] }
\( \frac{\sin \pi \rho (i-j)}{\pi(i-j)} \)^2 \log |2\pi \rho (i-j)|= \frac{2}{N} \sum_{u=1}^{N/\rho-1} (N/\rho -u) \frac{
\sin^2 (\pi \rho u)}{\pi^2 u^2}\log (2\pi \rho u).\end{equation}
Again, the term containing $u$ can be neglected since it is bounded by $\frac{1}{N} \sum_{u=1}^{N/\rho} \frac{1}{u} \log (2\pi \rho u)\le O(\frac{\log^2 N}{N})=o(1)$.
Combining \eqref{Wcsin2}--\eqref{rm2} and letting $N \to \infty$, we finally arrive at \eqref{formdiscsin}.
\end{proof}
The graph of $\lim_{N\to \infty}\mathbb{E}\mathcal{W}_N$ in \eqref{formdiscsin} is presented in Fig.~\ref{fig2}; it shows a function decreasing
from $1-\gamma$ at $\rho =0$ to $0$ at $\rho =1$, as expected.
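Formula \eqref{formdiscsin} is easy to evaluate numerically by truncating the series. The following sketch is an illustration added here, not part of the original computation; the function name and the truncation level are our own choices. It checks that the truncated right-hand side vanishes at $\rho=1$ and decreases along a few sample densities, staying below $1-\gamma\approx 0.4228$.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def discrete_sine_energy(rho, terms=200_000):
    """Truncated right-hand side of the formula for lim E W_N."""
    s = sum((math.sin(rho * math.pi * u) / (math.pi * u)) ** 2
            * math.log(2 * math.pi * rho * u)
            for u in range(1, terms))
    return rho * math.log(rho) + (2.0 / rho) * s

e02, e05, e08 = (discrete_sine_energy(r) for r in (0.2, 0.5, 0.8))
e10 = discrete_sine_energy(1.0)  # perfect-lattice case: should vanish
```

The tail of the truncated series is of order $\log U/U$ at truncation level $U$, so the values above are accurate to roughly $10^{-4}$.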
\begin{figure}
\begin{center}
\includegraphics[height=7cm]{plot_discretesine.jpg}
\caption{Numerical evaluation of $\lim_{N\to \infty}\mathbb{E} \mathcal{W}_N$ for the discrete sine process}\label{fig2}
\end{center}
\end{figure}
| {
"timestamp": "2012-10-24T02:04:38",
"yymm": "1201",
"arxiv_id": "1201.2853",
"language": "en",
"url": "https://arxiv.org/abs/1201.2853",
"abstract": "We define a \"renormalized energy\" as an explicit functional on arbitrary point configurations of constant average density in the plane and on the real line. The definition is inspired by ideas of [SS1,SS3]. Roughly speaking, it is obtained by subtracting two leading terms from the Coulomb potential on a growing number of charges. The functional is expected to be a good measure of disorder of a configuration of points. We give certain formulas for its expectation for general stationary random point processes. For the random matrix $\\beta$-sine processes on the real line (beta=1,2,4), and Ginibre point process and zeros of Gaussian analytic functions process in the plane, we compute the expectation explicitly. Moreover, we prove that for these processes the variance of the renormalized energy vanishes, which shows concentration near the expected value. We also prove that the beta=2 sine process minimizes the renormalized energy in the class of determinantal point processes with translation invariant correlation kernels.",
"subjects": "Probability (math.PR); Mathematical Physics (math-ph)",
"title": "Renormalized energy concentration in random matrices"
} |
https://arxiv.org/abs/1902.08249 | A new stability test for linear neutral differential equations | We obtain new explicit exponential stability conditions for the linear scalar neutral equation with two bounded delays $
\dot{x}(t)-a(t)\dot{x}(g(t))+b(t)x(h(t))=0, $
where $
0\leq a(t)\leq A_0<1$, $0<b_0\leq b(t)\leq B$, using the Bohl-Perron theorem and a transformation of the neutral equation into a differential equation with an infinite number of delays. The results are applied to the neutral logistic equation. | \section{Introduction}
Many applied problems lead to neutral differential equations as their mathematical models: a model of the controlled motion of a rigid body, a distributed network (a long line with tunnel diodes), models of infectious diseases, and a price model in economic dynamics; see, for example, \cite{Krisztin,KolmMysh,Kuang}.
Though neutral delay differential equations describe important applied models,
from mechanics to disease spread in epidemiology,
the stability theory for neutral equations with variable coefficients and delays
is less developed than for other classes of equations.
In particular, there are no explicit stability results for general linear equations but only for particular classes of neutral equations, see \cite{Cahlon,Gil,Gop,Gyori,TangZou} and references therein.
The aim of the present paper is to
obtain stability conditions for the equation
\begin{equation}
\label{1}
\dot{x}(t)-a(t)\dot{x}(g(t))=-b(t)x(h(t))
\end{equation}
which depend on both delays.
To this end, we transform \eqref{1}
into a linear delay differential equation with an infinite number of delays.
This method has not previously been applied to stability problems,
but it was used to study oscillation in \cite{BB2}.
As an application, we give local asymptotic stability tests
for the logistic neutral
equation
\begin{equation}\label{log1}
\dot{x}(t)=r(t) x(t)\left(1-\frac{x(h(t))-\rho \dot{x}(g(t))}{K} \right),
\end{equation}
where $\rho>0$ corresponds to higher resource consumption by a shrinking population. The model
\begin{equation}\label{log2}
\dot{x}(t)=r_0 x(t)\left(1- \frac{x(t-\tau)-\rho \dot{x}(t-\tau)}{K}\right)
\end{equation}
which is an autonomous version of (\ref{log1}), was studied in \cite{GopLog,FK,Yu1}.
\section{Preliminaries}
We consider the scalar delay differential equation (\ref{1})
under the following conditions:\\
(a1) $a, b, g, h$ are Lebesgue measurable, $a$ and $b$ are essentially
bounded on $[0,\infty)$ functions;\\
(a2) $ 0\leq a_0\leq a(t)\leq A_0<1, 0<b_0\leq b(t)\leq B_0$ for all $t\geq t_0\geq 0$ and some fixed $t_0\geq 0$;\\
(a3) $mes~ E=0\Longrightarrow mes~ g^{-1}(E)=0$,
where $mes~E$ is the Lebesgue
measure of the set $E$;\\
(a4) $0\leq t-g(t)\leq \sigma$, $0 \leq t-h(t) \leq \tau$ for $t \geq t_0$ and some $\sigma>0$, $\tau>0$ and $t_0 \geq 0$.
Along with (\ref{1}), we consider for each $t_0 \geq 0$ an initial value problem
\begin{equation}
\label{2}
\dot{x}(t)-a(t)\dot{x}(g(t))+b(t)x(h(t))=f(t), ~t\geq t_0,~
x(t)=\varphi(t), ~ t \leq t_0,~\dot{x}(t)=\psi(t),~ t<t_0,
\end{equation}
where $f:[t_0,\infty)\rightarrow {\mathbb R}$ is a Lebesgue measurable locally essentially bounded function,
$\varphi:(-\infty,t_0] \rightarrow {\mathbb R}$ and $\psi :(-\infty,t_0)\rightarrow {\mathbb R}$
are Borel measurable bounded functions.
Further, we assume that the above conditions hold without mentioning it.
\begin{definition}
A locally absolutely continuous on $[t_0,\infty)$
function $x: {\mathbb R} \rightarrow {\mathbb R}$ is called {\bf a solution of problem} (\ref{2})
if it satisfies the equation in (\ref{2}) for almost all $t\in [t_0,\infty)$ and
the equalities in (\ref{2})
for $t\leq t_0$.
For each $s\geq t_0$ the solution $X(t,s)$ of the problem
\begin{equation}
\label{7.6}
\dot{x}(t)-a(t)\dot{x}(g(t))+b(t)x(h(t))=0, ~x(t)=0,~\dot{x}(t)=0,~t<s,~x(s)=1
\end{equation}
is called {\bf the fundamental function} of equation (\ref{1}). We assume $X(t,s)=0$ for $0\leq t<s$.
We will say that equation (\ref{1}) is {\bf uniformly exponentially stable}
if there exist
$M>0$ and $\gamma>0$ such that
the solution of problem (\ref{2}) with $f \equiv 0$
has the estimate
$\displaystyle
|x(t)|\leq M e^{-\gamma (t-t_0)} \sup_{t \in (-\infty, t_0]}(|\varphi(t)|+|\psi(t)|)$, $t\geq t_0$,
where $M$ and $\gamma$ do not depend on $t_0 \geq 0$, $\varphi$ and $\psi$.
The fundamental function $X(t,s)$ of equation (\ref{1}) {\bf has an exponential estimate} if there exist
$M_0>0$ and $\gamma_0>0$ such that
$\displaystyle |X(t,s)|\leq M_0 e^{-\gamma_0(t-s)}$ for all $t\geq s\geq t_0$.
\end{definition}
For a fixed bounded interval $J=[t_0,t_1]$, consider the space $L_{\infty}[t_0,t_1]$ of all functions essentially bounded on $J$, with the
norm $|y|_J= \esssup_{t\in J} |y(t)|$;
for an unbounded interval, denote $\|f\|_{[t_0,\infty)}=\esssup_{t\geq t_0} |f(t)|$. Let $I$ denote the identity operator.
Define an operator
on the space $L_{\infty}[t_0,t_1]$ as
$\displaystyle
(Sy)(t)=\left\{\begin{array}{ll}
a(t)y(g(t)),& g(t)\geq t_0,\\
0,& g(t)<t_0.\\
\end{array}\right.
$
Note that there exists a unique
solution of
problem (\ref{2}),
and it
can be presented as
\begin{equation*}
x(t) = X(t,t_0)x_0+\int_{t_0}^t X(t,s)[(I-S)^{-1}f](s)ds
+ \int_{t_0}^t X(t,s)[(I-S)^{-1}F](s)ds,
\end{equation*}
where
$F(t)=a(t)\psi(g(t))-b(t)\varphi(h(t))$ and $\psi(g(t))=0$ for $g(t)\geq t_0$,
$\varphi(h(t))=0$ for $h(t)\geq t_0 $, see, for example, \cite{AzbSim}.
Existence of an exponential estimate for the fundamental function is equivalent \cite{AzbSim}
to the exponential stability for equations with bounded delays.
The following result is usually referred to as the Bohl-Perron principle.
\begin{uess}\label{lemma3}\cite[Theorem 4.7.1]{AzbSim}
Assume that
the solution of the problem
\begin{equation}\label{10}
\dot{x}(t)-a(t)\dot{x}(g(t))+b(t)x(h(t))=f(t),~~t \geq t_0, ~x(t)=0,~t\leq t_0,~\dot{x}(t)=0,~t<t_0
\end{equation}
is bounded on $[t_0,\infty)$ for any
essentially bounded on $[t_0,\infty)$ function $f$.
Then equation (\ref{1}) is uniformly exponentially stable.
\end{uess}
In Lemma~\ref{lemma3}, it suffices to check boundedness of solutions not for all functions $f$ essentially bounded on $[t_0,\infty)$, but only
for functions $f$ that vanish on $[t_0, t_0+\delta)$ and are essentially bounded on $[t_0+\delta,\infty)$, for an arbitrary fixed $\delta>0$; see \cite{BB3}.
We use this fact throughout the paper without further reference.
Denote by $X_0(t,s)$ the fundamental function of
the equation with a single delay
\begin{equation}\label{8}
\dot{x}(t)+B(t)x(h_0(t))=0, ~~B(t)\geq 0,~~ 0\leq t-h_0(t)\leq \tau_0.
\end{equation}
\begin{uess}\label{lemma4}\cite{BB3}
If $X_0(t,s)>0$ for $t\geq s\geq t_0$ then
$\displaystyle
\int_{t_0+\tau_0}^t X_0(t,s) B(s)\, ds\leq 1.
$
\end{uess}
\begin{uess}\label{lemma5}\cite{BB3,GL}
If for some $t_0\geq 0$,
$\displaystyle
\int_{h_0(t)}^t B(s)\, ds\leq \frac{1}{e},~t\geq t_0
$
then $X_0(t,s)>0$ for $t\geq s\geq t_0$.
If in addition $B(t)\geq b_0>0$ then equation (\ref{8}) is
exponentially stable.
\end{uess}
Finally, the properties of the operator $S$ are outlined in
the following lemma.
\begin{uess}\label{lemma6} \cite{ABR}
If $\|a\|_{[t_0,\infty)}\leq A_0<1$ then $I-S$ is invertible in the space $L_{\infty}[t_0,\infty)$, and
we have $ \displaystyle
((I-S)^{-1}y)(t)= y(t)+ \sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(t) \right) y \left( g^{[j]}(t) \right),
$
where $g^{[0]}(t)=t$,
$g^{[1]}(t)=g(t)$, $g^{[n]}(t)=g\left(g^{[n-1]}(t)\right)$, and the operator norm satisfies
\begin{equation}
\label{star}
\|(I-S)^{-1}\|_{L_{\infty}[t_0,\infty)\to L_{\infty}[t_0,\infty)}\leq \frac{1}{1-A_0}.
\end{equation}
\end{uess}
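To make the action of $(I-S)^{-1}$ concrete, here is a small numerical sketch (our illustration, not part of the paper) for the simplest data $a(t)\equiv A_0$ and $g(t)=t-1$ on an integer grid, where $S$ becomes a weighted backward shift. It checks the Neumann series of Lemma~\ref{lemma6} against the norm bound \eqref{star}.

```python
A0 = 0.5  # any constant with 0 <= A0 < 1

def apply_S(y):
    # (Sy)(t) = A0 * y(t - 1), with y = 0 before the initial point t_0 = 0
    return [0.0] + [A0 * v for v in y[:-1]]

def inv_IS(y, terms=80):
    # truncated Neumann series (I - S)^{-1} y = sum_j S^j y
    out = [0.0] * len(y)
    power = list(y)
    for _ in range(terms):
        out = [o + p for o, p in zip(out, power)]
        power = apply_S(power)
    return out

y = [1.0] * 50
z = inv_IS(y)
# residual of (I - S) z = y, and the sup-norm of z
residual = max(abs(zi - si - yi) for zi, si, yi in zip(z, apply_S(z), y))
sup_norm = max(abs(v) for v in z)
```

Here `sup_norm` approaches $1/(1-A_0)=2$ from below, in agreement with \eqref{star}.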
\section{Explicit Stability Conditions}
\begin{guess}\label{theorem1}
Assume that for $t\geq t_0$ at least one of the following conditions holds:
\\
a)
$\displaystyle
\tau B_0+\frac{\sigma A_0 B_0^2 (1-a_0)}{(1-A_0)^2 b_0}<1-A_0;
$
\\
b)
$\displaystyle
t-h(t)\geq \frac{1-A_0}{e B_0}$ ~~ and ~~ $\displaystyle \tau B_0+\frac{\sigma A_0 B_0^2 (1-a_0)}{(1-A_0)^2 b_0}<\left(1+\frac{1}{e}\right)(1-A_0).
$
\\
Then equation (\ref{1}) is uniformly exponentially stable.
\end{guess}
\begin{proof}
Applying $(I-S)^{-1}$ to
(\ref{10}), using (\ref{star}) on $J$ instead of $[t_0,\infty)$ and (a2), we get
\begin{equation}
\label{2star}
|\dot{x}|_J \leq \left\|(I-S)^{-1} \right\|_{L_{\infty}(J)\to L_{\infty}(J)} \left[ B_0 |x|_J + |f|_J \right] \leq
\frac{B_0}{1-A_0}|x|_J+M_1,
\end{equation}
where $\displaystyle M_1=\frac{\|f\|_{[t_0,\infty)}}{1-A_0}$ and
$\|f\|_{[t_0,\infty)}<\infty$. By Lemma \ref{lemma6}, (\ref{10})
is equivalent to the equation with an infinite number of delays
\begin{equation}\label{11}
\dot{x}(t)=
- b(t)x(h(t)) - \sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(t) \right) b \left( g^{[j]}(t) \right)
x \left( h(g^{[j]}(t) ) \right)+f_1(t),
\end{equation}
where $~ f_1(t)=((I-S)^{-1}f)(t)$ and $\|f_1\|_{[t_0,\infty)}<\infty$.
Since $x(t)=0$ for $t\leq t_0$, we can assume that $b(t)=b_0$, $t\leq t_0$.
Denote
$$\displaystyle
B(t)=b(t) + \sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(t) \right) b \left( g^{[j]}(t) \right).
$$
By Lemma~\ref{lemma6}, using the bounds for $a$ and $b$, we obtain
$ \displaystyle \frac{b_0}{1-a_0}\leq B(t)\leq \frac{B_0}{1-A_0}.$
Equation (\ref{11}) can be rewritten in the form
$$
\dot{x}(t)+B(t)x(t)= b(t)\int_{h(t)}^t \!\!\!\! \dot{x}(\xi)d\xi+
\sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(t) \right) b \left( g^{[j]}(t) \right)
\int_{h(g^{[j]}(t))}^t \!\!\!\!\!\! \dot{x}(\xi)d\xi+ f_1(t),
$$
therefore
$$
x(t)=\int\limits_{t_0}^t e^{-\int\limits_s^t B(\xi)d\xi}B(s) \left( \frac{1}{B(s)}\left[b(s) \!\! \int\limits_{h(s)}^s
\!\! \dot{x}(\xi)d\xi\right.
\left.+\sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(s) \right) b \left( g^{[j]}(s) \right)
\!\!\!\!\!\! \int\limits_{h(g^{[j]}(s))}^s
\!\!\!\!\!\! \dot{x}(\xi)d\xi\right] \right)ds+ f_2(t),
$$
where $B(t)\geq b_0>0$ implies $\displaystyle \|f_2\|_{[t_0,\infty)} \leq \sup_{t\geq t_0}\int_{t_0}^t e^{-\int_s^t B(\xi)d\xi} |f_1(s)|\, ds \leq \frac{\|f_1\|_{[t_0,\infty)}}{b_0}<\infty$.
We have
$$
t-g(t)\leq \sigma, t-g^{[2]}(t)=t-g(t)+(g(t)-g(g(t)))\leq 2\sigma,\dots, t-g^{[n]}(t)\leq n\sigma,
$$$$
t-h(t)\leq \tau, t-h(g(t))=t-g(t)+(g(t)-h(g(t)))\leq \sigma+\tau,\dots, t-h(g^{[n]}(t))\leq n\sigma+\tau.
$$
Hence for $t\in J$,
\begin{align*}
& \frac{1}{B(s)}\left[b(s)\int_{h(s)}^s \dot{x}(\xi)d\xi+
\sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(s) \right) b \left( g^{[j]}(s) \right)
\int_{h(g^{[j]}(s))}^s \dot{x}(\xi)d\xi\right]
\\
\leq & \frac{1}{B(s)}\left[ b(s)\tau+
\sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(s) \right) b \left( g^{[j]}(s) \right)(\tau+j \sigma)\right] |\dot{x}|_J
\\
\leq & \left[ \tau+\frac{(1-a_0)A_0B_0\sigma}{b_0}\sum_{j=1}^{\infty} jA_0^{j-1} \right] |\dot{x}|_J
\leq \left[ \tau+\frac{\sigma A_0B_0(1-a_0)}{b_0(1-A_0)^2}\right]\frac{B_0}{1-A_0}|x|_J+M_2,
\end{align*}
where the constant $M_2$ does not depend on $J$, and the last inequality is due to (\ref{2star}).
By Lemma~\ref{lemma4}, the solution of problem (\ref{10}) satisfies
$\displaystyle
|x|_J \leq \left[\tau+\frac{\sigma A_0B_0(1-a_0)}{b_0(1-A_0)^2}\right]\frac{B_0}{1-A_0}|x|_J+M_3,
$
where $M_3$ is a constant not dependent on $J$.
Condition a) of the theorem implies
$\displaystyle
\left[\tau+\frac{\sigma A_0B_0(1-a_0)}{b_0(1-A_0)^2}\right]\frac{B_0}{1-A_0}<1.
$
Hence $|x(t)|\leq M$ for $t \geq t_0$, for some constant $M$ which does not depend on $J$.
By Lemma \ref{lemma3}, equation (\ref{1}) is uniformly exponentially stable.
Next, assume that the conditions in b) hold. Consider the following delay equation
\begin{equation}\label{12}
\dot{x}(t)+B(t)x\left(t-\frac{1-A_0}{B_0 e}\right)=0.
\end{equation}
Since $B(t)\geq b_0$ and $\displaystyle \frac{B(t)(1-A_0)}{B_0 e}\leq \frac{1}{e}$, by Lemma~\ref{lemma5}
equation (\ref{12}) is exponentially stable, and its fundamental function is positive: $X_0(t,s)>0$,
$t\geq s\geq t_0$.
We have
$$
\tau\geq t-h(t)\geq \frac{1-A_0}{B_0 e}, ~~\tau+\sigma\geq t-h(g(t))\geq \frac{1-A_0}{B_0 e},~~
\tau+n\sigma\geq t-h(g^{[n]}(t))\geq \frac{1-A_0}{B_0 e}.
$$
Problem (\ref{10}) is equivalent to (\ref{11}) which has a solution
$$
x(t)=\int_{t_0}^t X_0(t,s) B(s) \left( \frac{1}{B(s)}\left[b(s) \!\!\!\!\! \int\limits_{h(s)}^{s-\frac{1-A_0}{B_0 e}}
\!\!\!\!\! \dot{x}(\xi)d\xi\right.
\left.+\sum_{j=1}^{\infty} \prod_{k=0}^{j-1} a\left( g^{[k]}(s) \right) b \left( g^{[j]}(s) \right)
\!\!\!\!\! \int\limits_{h(g^{[j]}(s))}^{s-\frac{1-A_0}{B_0 e}} \!\!\!\!\!\!\! \dot{x}(\xi)d\xi\right] \right)ds+f_3(t),
$$
where $f_3(t)=\int_{t_0}^t X_0(t,s) f_1(s)ds$, and $\|f_3\|_{[t_0,\infty)}<\infty$, since (\ref{12}) is exponentially stable.
By the same calculations as in a) we have
$$
|x|_J \leq \left(\left[\tau+\frac{\sigma A_0B_0(1-a_0)}{b_0(1-A_0)^2}\right]\frac{B_0}{1-A_0}-\frac{1}{e}\right)|x|_J+M_4,
$$
where $M_4$ does not depend on the interval $J$.
By the second condition in b), we have
$\displaystyle
\left[\tau+\frac{\sigma A_0B_0(1-a_0)}{b_0(1-A_0)^2}\right]\frac{B_0}{1-A_0}-\frac{1}{e}<1.
$
Hence $|x(t)|\leq M$ for $t \geq t_0$ for some constant $M$ which does not depend on $J$.
By Lemma \ref{lemma3}, equation (\ref{1}) is uniformly exponentially stable.
\end{proof}
Consider now two partial cases of equation (\ref{1}), one with constant coefficients
\begin{equation}\label{13}
\dot{x}(t)-a\dot{x}(g(t))=-bx(h(t)),
\end{equation}
where $a,b$ are positive constants, and another with a non-delayed term
\begin{equation}\label{14}
\dot{x}(t)-a(t)\dot{x}(g(t))=-b(t)x(t).
\end{equation}
\begin{corollary}\label{corollary1}
If either a)
$
\displaystyle \tau b+\frac{\sigma a b }{1-a}<1-a;
$ or b) $\displaystyle t-h(t)\geq \frac{1-a}{e b}$ and $
\displaystyle \tau b+\frac{\sigma a b }{1-a}<\left(1+\frac{1}{e}\right)(1-a)
$
then equation (\ref{13}) is uniformly exponentially stable.
\end{corollary}
\begin{corollary}\label{corollary2}
If ~
$\displaystyle
\frac{\sigma A_0 B_0^2 (1-a_0)}{(1-A_0)^3 b_0}<1
$
~ then equation (\ref{14}) is uniformly exponentially stable.
\end{corollary}
\section{Examples and Applications}
First, we illustrate the results obtained in the paper with examples.
\begin{example}
Equation (\ref{13}) with
$g(t)\equiv t$ and variable $h(t)$,
by Corollary \ref{corollary1},
is uniformly exponentially stable if $\displaystyle \frac{b\tau}{1-a}<1+\frac{1}{e}\approx 1.37$.
The well-known Myshkis test establishes stability for $\displaystyle \frac{b\tau}{1-a}<\frac{3}{2}$, under the assumption that the delay function is continuous.
Corollary~\ref{corollary1} gives a close estimate for a measurable delay.
\end{example}
\begin{example}
Consider an equation with a variable coefficient and time-dependent $h(t)$
\begin{equation}
\label{ex2eq1}
\dot{x}(t)-0.6\dot{x}(t-0.1)=-r(1+0.1 \sin t)x(h(t)), ~0.9 \leq t-h(t)\leq 1,
\end{equation}
and its particular case with a constant delay
\begin{equation}
\label{ex2eq2}
\dot{x}(t)-0.6\dot{x}(t-0.1)=-r(1+0.1 \sin t)x(t-1).
\end{equation}
We compare Theorem~\ref{theorem1} with applicable results obtained in \cite{TangZou}.
For both (\ref{ex2eq1}) and (\ref{ex2eq2}), we have $\tau=1$, $\sigma=0.1$, $b_0=0.9r$,
$B_0=1.1r$, $a=0.6$. By Part a) of Theorem~\ref{theorem1}, $r<0.307$ implies exponential stability. Part b) requires $r>0.149$ for (\ref{ex2eq1}) and $r>0.134$ for (\ref{ex2eq2}), together with $r<r_0 \approx 0.420347$ in both cases.
In \cite{TangZou}, a positive integer $N$ is introduced such that in (\ref{ex2eq2}), $a+\frac{3}{2} a^N =0.6+1.5 \cdot 0.6^N \leq 1$; obviously, $N=3$.
The first asymptotic stability condition for (\ref{ex2eq2}) from \cite{TangZou}
$$
\limsup_{t \to \infty} \int_{t-(3\tau+(N-1)\sigma)}^t b(s)\, ds < \frac{3}{2}-2a \left( 1-\frac{1}{4} a \right)=0.48
$$
is satisfied for $r<r_1 \approx 0.14118$, while
the second sufficient inequality from \cite{TangZou}
$$
\limsup_{t \to \infty} \int_{t-(\tau+(N-1)\sigma)}^t b(s)\, ds < \frac{3-4a^N}{2(1-a^N)} (1-a) \approx 0.544898
$$
holds for $r<r_2 \approx 0.415025$. Note that in this case Theorem~\ref{theorem1} gives a sharper
estimate $\approx 0.420347$ for $r$; in addition, it provides a sufficient exponential stability condition for (\ref{ex2eq1}),
while \cite{TangZou} applies to (\ref{ex2eq2}) only. To the best of our knowledge,
other known conditions are also not applicable to (\ref{ex2eq1}).
\end{example}
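The thresholds quoted in this example can be reproduced mechanically. The short script below is our illustration (not part of the paper): substituting $\tau=1$, $\sigma=0.1$, $a_0=A_0=0.6$, $b_0=0.9r$, $B_0=1.1r$ into the conditions of Theorem~\ref{theorem1} makes the left-hand side of both conditions linear in $r$.

```python
import math

tau, sigma, A0 = 1.0, 0.1, 0.6
a0 = A0
# with b0 = 0.9 r and B0 = 1.1 r, the left-hand side of conditions a) and b),
# tau*B0 + sigma*A0*B0^2*(1-a0)/((1-A0)^2*b0), equals C*r with
C = tau * 1.1 + sigma * A0 * 1.1**2 * (1 - a0) / ((1 - A0) ** 2 * 0.9)

r_a = (1 - A0) / C                             # condition a): C*r < 1 - A0
r_b = (1 + 1 / math.e) * (1 - A0) / C          # upper bound in condition b)
r_low_var = (1 - A0) / (math.e * 1.1 * 0.9)    # b): 0.9 >= (1-A0)/(e*B0), eq. (ex2eq1)
r_low_const = (1 - A0) / (math.e * 1.1 * 1.0)  # b): 1 >= (1-A0)/(e*B0), eq. (ex2eq2)
```

The computed values reproduce the constants $0.307$, $0.420347$, $0.149$ and $0.134$ quoted above.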
Next, let us apply the results of Theorem~\ref{theorem1} to
logistic neutral equations (\ref{log1}) and (\ref{log2}),
where $\rho>0$, $K>0$, $t-h(t)\leq \tau$, $t-g(t)\leq \tau$, $0<r_0\leq r(t)\leq R_0$, $r_0 \rho \leq R_0 \rho<1$,
$r,g$ and $h$ are measurable functions.
Equation (\ref{log2}) was studied in \cite{GopLog, FK, Yu1}.
\begin{proposition}\label{p1} \cite{Yu1}
If
$\displaystyle 2r_0|\rho|(2-r_0|\rho|)+r_0\tau<\frac{3}{2}$
then the positive equilibrium $K$ of equation (\ref{log2}) is locally asymptotically stable.
\end{proposition}
Note that the inequalities $2r_0|\rho|(2-r_0|\rho|)<\frac{3}{2}$ and $r_0 \rho<1$ imply $r_0|\rho|<0.5$.
\begin{guess}\label{thlog}
If either a) $\displaystyle
\tau R_0 \rho+\frac{\sigma R_0^2 \rho (1-r_0)}{(1-R_0)^2 r_0}<1-R_0,
$ or \\ b) $\displaystyle t-h(t)\geq \frac{1-R_0}{eR_0 \rho}$ ~~ and ~~
$\displaystyle
\tau R_0 \rho+\frac{\sigma R_0^2 \rho (1-r_0)}{(1-R_0)^2 r_0}<\left(1+\frac{1}{e}\right)(1-R_0)
$
\\
then the positive equilibrium $K$ of equation (\ref{log1}) is locally asymptotically stable.
\end{guess}
\begin{proof}
Substituting $x=y-K$ in (\ref{log1}) leads to
$\displaystyle
\dot{y}(t)=-\frac{r(t)}{K}(y(t)+K)\left[ y(h(t))-\rho \dot{y}(g(t)) \right],
$
its linearization about the zero equilibrium is
$\displaystyle \dot{z}(t)=-r(t) \left[ z(h(t))-\rho \dot{z}(g(t)) \right]$.
Applying Theorem~\ref{theorem1} with $a_0=r_0$, $A_0=R_0$, $b_0=r_0 \rho$, $B_0=R_0 \rho$, we deduce that
the linearization
is exponentially stable, and thus $K$ is locally asymptotically stable.
\end{proof}
\begin{remark}
The fact that exponential stability of the linearized equation implies local (and in some cases even global,
see \cite{BB_NA_2009} and references therein) asymptotic stability of a nonlinear scalar equation was applied
to conclude the proof of Theorem~\ref{thlog}.
\end{remark}
\begin{corollary}\label{corollarylog}
If either $\displaystyle \tau r_0\rho< (1-r_0)^2$ or $\displaystyle
\frac{1-r_0}{e} < \tau r_0\rho < \left(1+\frac{1}{e}\right) (1-r_0)^2$
then the positive equilibrium $K$ of equation (\ref{log2}) is locally asymptotically stable.
\end{corollary}
Compared to Proposition~\ref{p1},
Theorem~\ref{thlog}
is applicable to non-autonomous equations with different delays.
Also, for $r_0=0.2$, $\rho=4$ and any $\displaystyle 0 \leq \tau<0.8 \left(1+\frac{1}{e}\right)$,
Theorem~\ref{thlog} establishes local asymptotic stability of (\ref{log2}), while for these $r_0$ and $\rho$, Proposition~\ref{p1}
fails for any $\tau$.
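The comparison above amounts to a few arithmetic checks, sketched below (our illustration, not part of the paper): for $r_0=0.2$, $\rho=4$, the two branches of Corollary~\ref{corollarylog} overlap and jointly cover every $\tau<0.8\left(1+\frac{1}{e}\right)$, while the first term in the condition of Proposition~\ref{p1} already exceeds $\frac{3}{2}$.

```python
import math

r0, rho = 0.2, 4.0
x = r0 * rho                                        # r0*rho = 0.8 < 1, as required

tau_a = (1 - r0) ** 2 / x                           # branch a): tau*r0*rho < (1-r0)^2
tau_b_low = (1 - r0) / math.e / x                   # branch b), lower endpoint
tau_b_high = (1 + 1 / math.e) * (1 - r0) ** 2 / x   # branch b), upper endpoint

prop_term = 2 * x * (2 - x)                         # 2 r0 |rho| (2 - r0 |rho|)
```

Since `tau_b_low < tau_a`, the union of the two branches is the full interval $[0,\,0.8(1+1/e))$, while `prop_term` $=1.92>3/2$, so the condition of Proposition~\ref{p1} cannot hold for any $\tau\ge 0$.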
\section*{Acknowledgment}
The second author was partially supported by the NSERC research grant RGPIN-2015-05976.
| {
"timestamp": "2019-02-25T02:02:23",
"yymm": "1902",
"arxiv_id": "1902.08249",
"language": "en",
"url": "https://arxiv.org/abs/1902.08249",
"abstract": "We obtain new explicit exponential stability conditions for the linear scalar neutral equation with two bounded delays $\n\\dot{x}(t)-a(t)\\dot{x}(g(t))+b(t)x(h(t))=0, $\nwhere $\n0\\leq a(t)\\leq A_0<1$, $0<b_0\\leq b(t)\\leq B$, using the Bohl-Perron theorem and a transformation of the neutral equation into a differential equation with an infinite number of delays. The results are applied to the neutral logistic equation.",
"subjects": "Dynamical Systems (math.DS)",
"title": "A new stability test for linear neutral differential equations"
} |
https://arxiv.org/abs/1906.08944 | Explicit Artin maps into ${\rm PGL}_2$ | Let $G$ be a subgroup of ${\rm PGL}_2({\mathbb F}_q)$, where $q$ is any prime power, and let $Q \in {\mathbb F}_q[x]$ such that ${\mathbb F}_q(x)/{\mathbb F}_q(Q(x))$ is a Galois extension with group $G$. By explicitly computing the Artin map on unramified degree-1 primes in ${\mathbb F}_q(Q)$ for various groups $G$, interesting new results emerge about finite fields, additive polynomials, and conjugacy classes of ${\rm PGL}_2({\mathbb F}_q)$. For example, by taking $G$ to be a unipotent group, one obtains a new characterization for when an additive polynomial splits completely over ${\mathbb F}_q$. When $G = {\rm PGL}_2({\mathbb F}_q)$, one obtains information about conjugacy classes of ${\rm PGL}_2({\mathbb F}_q)$. When $G$ is the group of order 3 generated by $x \mapsto 1 - 1/x$, one obtains a natural tripartite symbol on ${\mathbb F}_q$ with values in ${\mathbb Z}/3{\mathbb Z}$. Some of these results generalize to ${\rm PGL}_2(K)$ for arbitrary fields $K$. Apart from the introduction, this article is written from first principles, with the aim to be accessible to graduate students or advanced undergraduates. An earlier draft of this article was published on the Math arXiv in June 2019 under the title {\it More structure theorems for finite fields}. | \section{Introduction} \label{sec:intro}
Let $K$ be a field and $G$ a finite subgroup of $\PGL_2(K)$. It is
well known, and will be proved in Section~\ref{sec:Q}, that there is
$Q \in K(x)$
such that $K(x)/K(Q(x))$ has Galois group~$G$. Normalize $Q$ so that
$Q(\infty) = \infty$; $Q$ will be called a {\it quotient map} for~$G$. If $\tau \in \cj K\cup\{\infty\}$, then $Q^{-1}(\tau)$ is a $G$-orbit in~$\cj K\cup\{\infty\}$.
If $|Q^{-1}(\tau)|=|G|$, then $\tau$ is said to be {\it regular}.
Let $\s \in \Aut(\cj K/K)$ such that $\s(\tau)=\tau$. Then $Q^{-1}(\tau)$ is closed under $\sigma$, and it is a $G$-orbit,
so for each $v \in Q^{-1}(\tau)$ there is $\gamma \in G$ such that $\sigma(v) = \gamma(v)$.
If $\tau$ is regular, then $\gamma$ is uniquely determined by $v$ and~$\sigma$, and the conjugacy class
$\calC_{\gamma,G} = \set{\a \g \a^{-1} : \a \in G}$ is uniquely determined by $\tau$ and $\sigma$.
In that case, define
$$\inv_Q(\tau,\sigma) = \calC_{\gamma,G}.$$
If $G$ is abelian, so that $\calC_{\gamma,G} = \{\gamma\}$, then we write more simply $\inv_Q(\tau,\sigma)=\gamma$. We abbreviate $\calC_\g=\calC_{\g,G}$ if $G$ is clear from the context.
If $K=\F_q$ and $\sigma(v)=v^q$, where $q$ is any prime power, then we write $\inv_Q(\tau,q)$ instead of $\inv_Q(\tau,\s)$,
or just $\inv(\tau)$ if $Q$ and $q$ are clear from the context.
In that case,
Xander Faber astutely observed that the map $\tau \mapsto \inv(\tau) = \calC_\g$ is essentially the Artin map for the extension
$\F_q(x)/\F_q\left(Q(x)\right)$.
The connection is as follows.
Let $\tau \in \F_q$.
The polynomial $Q-\tau\in \F_q(Q)$ corresponds to a degree-1 place $P=(Q-\tau)$ of $\F_q(Q(x))$. This place is unramified in $\F_q(x)$
if and only if $\tau$ has $|G|$ distinct preimages in $\cj \F_q$;
this coincides with our definition that $\tau$ is regular.
If $v \in Q^{-1}(\tau)$
and $g$ is its minimal polynomial over $\F_q$, then $g$ corresponds to a place $\calP$ in $\F_q(x)$ that lies over $P$.
The element $\gamma$ such that $v^q = \gamma(v)$ is the {\it Frobenius automorphism} of $\calP$ for the extension $\F_q(x)/\F_q(Q)$,
and the map $P \mapsto \calC_\g$
is the {\it Artin map}. For a more general discussion of the Artin map over function fields, see Rosen \cite[Ch.~9]{Rosen}.
The Artin map is defined in \cite[page~122]{Rosen}. Because of this close relation with the Artin map, we call $\inv_Q(\tau,\s)$
the {\it Artin invariant} of $\tau$ with respect to $Q$ and $\s$.
While the existence and general properties of the Artin map are widely known, what has not been previously appreciated is
how interesting the examples are, even in the genus-0 case, {\it i.e.}, over the rational function field $\F_q(x)$.
Thus, the emphasis in this article is not so much on the existence of $\inv(\tau)$, but rather on the
wealth of arithmetic information that is revealed by specific examples. Some of this arithmetic information was previously known, and some is new.
\begin{example} \label{example:QRes}
The simplest example is $G=\{\textmatrix 1001,\textmatrix{-1}001\}$ and $Q(x)=x^2$ over $\F_q$.
Here one should assume $1\ne -1$, \ie, $q$ is odd.
All $\tau \in \F_q$ are regular except $\tau=0$. If $v^2=\tau\ne 0$ then
$$v^q = (v^2)^{(q-1)/2}v = \tau^{(q-1)/2} v = \jacobi \tau q v,$$
where $\jacobi \tau q$ is the quadratic residue symbol: 1 if $\tau$ is a nonzero square, or $-1$ if
$\tau$ is a nonsquare. Thus, $v^q = \gamma(v)$ with $\gamma = \textmatrix {\jacobi \tau q} 0 0 1$, and $\inv(\tau) = \gamma$.
As can be seen, the Artin invariant for this group is intimately connected with the quadratic residue symbol.
\qed
\end{example}
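This computation is easy to check by machine for a small prime. The snippet below is our addition (the prime $q=23$ is an arbitrary choice); it verifies that $\tau^{(q-1)/2} \bmod q$ is always $\pm 1$ and detects exactly the nonzero squares, which is Euler's criterion.

```python
q = 23  # an arbitrary odd prime
nonzero_squares = {pow(v, 2, q) for v in range(1, q)}
for t in range(1, q):
    symbol = pow(t, (q - 1) // 2, q)   # tau^((q-1)/2) mod q
    assert symbol in (1, q - 1)        # i.e. the symbol is +1 or -1
    assert (symbol == 1) == (t in nonzero_squares)
```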
The next example is related to the theory of Kummer extensions. For an exposition on Kummer extensions,
see \cite[\S II.L]{Artin} or \cite[Ch.~4, \S4]{Neukirch}. If $(n,q)=1$ then let $\mu_n$ denote the group of $n$th roots of unity
in $\cj\F_q$.
\begin{example} \label{example:Kummer}
Let $q$ be any prime power (possibly even).
Suppose that $n$ divides $q-1$, and let $G = \set{\textmatrix \zeta001 : \zeta^n=1} \subset \PGL_2(\F_q)$.
The quotient map is $Q(x)=x^n$, and every element of $\F_q$ is regular except~0.
Let $\tau \in \F_q^\x$ and $\inv(\tau) = \textmatrix \z 001$.
Then $\tau = v^n \in \F_q^\x \iff v^q = \zeta v$.
The field extension $\F_q(v)=\F_q(\tau^{1/n})$ is a Kummer extension. The Galois group sends $v$ to $v^q=\inv(\tau) (v) = \z v$.
To express $\inv(\tau)$ directly in terms of $\tau$, note that $\zeta = v^{q-1} = (v^n)^{(q-1)/n}=\tau^{(q-1)/n}$.
Example~\ref{example:QRes} is the special case $n=2$.
\qed
\end{example}
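As in the previous example, the identity $\zeta=\tau^{(q-1)/n}$ can be tested by machine. Below is a quick verification (our addition, for the arbitrary choice $q=13$, $n=3$): the computed $\zeta$ is always an $n$th root of unity, and $\zeta=1$ exactly on the $n$th powers.

```python
q, n = 13, 3                          # n divides q - 1
assert (q - 1) % n == 0
nth_powers = {pow(v, n, q) for v in range(1, q)}
for t in range(1, q):
    zeta = pow(t, (q - 1) // n, q)    # zeta = tau^((q-1)/n)
    assert pow(zeta, n, q) == 1       # zeta is an n-th root of unity
    assert (zeta == 1) == (t in nth_powers)
```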
\begin{example}{\bf (Klein Group)} \label{example:Klein} Let $G = \set{\textmatrix 1001, \textmatrix {-1}001, \textmatrix 0110,\textmatrix 0{-1}10}\subset
\PGL_2(\F_q)$. Here assume
$1 \ne -1$, so $q$ is odd.
$Q(x) = (x+1/x)^2/4$ is a quotient map. Let $\tau=Q(v) \in \F_q$.
Then $\tau$ is regular iff $\tau \not \in \{0,1\}$ iff $v^4 \ne 1$. The following theorem pertains to this example.
\newpage
\noindent{\bf Theorem.}\ {\it Let $q$ be an
odd prime power. Every element $\tau \in \F_q$ can be written as
$$\tau = (v+1/v)^2/4,\qquad\text{$v\in\mu_{2(q-1)}\cup\mu_{2(q+1)}$.}$$
Moreover, if $\tau \not\in\{0,1\}$, then $v^{q-AB}=A$, where
$A = \jacobi\tau q$ and $B=\jacobi{\tau-1}q$. }
\medskip
The theorem implies $v^q = A v^{AB} = \gamma(v)$, where
\begin{equation} \g=\inv(\tau,q) = \textmatrix A001 \textmatrix 0110^{(1-AB)/2}.
\label{eq:Klein}
\end{equation}
In other words, $\inv(\tau,q)$ is given explicitly in terms of $\jacobi \tau q$ and $\jacobi{\tau-1}q$.
The above theorem is proved in \cite[Theorem~4.1]{Wilson-like} and is used in \cite{Permutation,Wilson-like}
to obtain a new factorization formula for Dickson and Chebyshev polynomials
and new theorems in elementary number theory. For a short and self-contained exposition on these topics, see \cite{Structure}.
In fact, the article \cite{Structure} directly motivated the current article.
\qed
\end{example}
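The theorem above admits a brute-force verification over a small field; every preimage $v\in\F_{q^2}$ of a $\tau\in\F_q$ lies in $\mu_{2(q-1)}\cup\mu_{2(q+1)}$, so checking all of $\F_{q^2}^\x$ suffices. The following sketch (the choice $q=7$ and the realization $\F_{49}=\F_7[w]/(w^2-3)$ are mine) confirms both parts of the statement:

```python
# Brute-force check of the theorem for q = 7 (my choice), with F_{q^2} realized as
# F_7[w]/(w^2 - 3); 3 is a nonsquare mod 7. Elements are pairs (a, b) = a + b*w.
q, ns = 7, 3

def mul(x, y):
    a, b = x; c, d = y
    return ((a * c + b * d * ns) % q, (a * d + b * c) % q)

def power(x, e):
    r = (1, 0)
    while e:
        if e & 1:
            r = mul(r, x)
        x = mul(x, x)
        e >>= 1
    return r

def legendre(t):
    return 1 if pow(t, (q - 1) // 2, q) == 1 else -1

inv4 = pow(4, q - 2, q)                      # 1/4 in F_q
seen = set()
for a in range(q):
    for b in range(q):
        v = (a, b)
        if v == (0, 0):
            continue
        vinv = power(v, q * q - 2)           # v^(-1) = v^(q^2 - 2)
        u = ((a + vinv[0]) % q, (b + vinv[1]) % q)   # v + 1/v
        t = mul(u, u)
        tau = (t[0] * inv4 % q, t[1] * inv4 % q)     # (v + 1/v)^2 / 4
        if tau[1] != 0:
            continue                          # tau not in F_q
        seen.add(tau[0])
        if tau[0] in (0, 1):
            continue                          # tau not regular
        A, B = legendre(tau[0]), legendre((tau[0] - 1) % q)
        assert power(v, q - A * B) == (A % q, 0)     # v^(q-AB) = A
assert seen == set(range(q))                  # every tau in F_q is attained
```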
The next example (explained in Section~\ref{sec:unipotent}) computes the Artin invariant when $G$ is a unipotent subgroup of
$\PGL_2(\F_q)$, and leads to unexpected results about additive polynomials that have all their roots in the ground field.
\begin{example} \label{example:1b01} Let $G = \set{\textmatrix 1b01 : b \in \F_q}$.
A quotient map is $Q_G(x)=x^q-x$. If $Q_G(v)=\tau \in \F_q$, then $v^q=v+\tau = \textmatrix 1\tau 01(v)$. Thus, $\inv(\tau)=\textmatrix 1\tau 01$.
More generally,
let $q=p^n$, where $p$ is a prime or a prime power, let $W\subset \F_q$ be a $d$-dimensional $\F_p$-vector subspace
of $\F_q$, and $G_W=\set{\textmatrix 1w01 : w \in W}$. As is well known, the quotient map is $Q_W(x)= \prod_{w\in W}(x-w)$, which is an $\F_p$-additive polynomial,
\ie, $Q_W(\l x+y)=\lambda Q_W(x)+Q_W(y)$ for $\lambda \in \F_p$. $Q_W$ can be regarded as an $\F_p$-linear map from $\F_q$ to $\F_q$ with kernel $W$.
Let $Y=Q_W(\F_q)$. Then $Y$
is an $(n-d)$-dimensional $\F_p$-vector subspace of $\F_q$, and for $\tau \in \F_q$ we will prove in Section~\ref{sec:unipotent}
that
$$\inv_{Q_W}(\tau) = \textmatrix {1\ } {Q_Y(\tau)} {0\ } 1 \in G_W.$$
In particular, $Q_Y(\tau) \in W$, which is not otherwise obvious. This observation implies the
following symmetric relation between $W$ and~$Y$. Though easy to prove, to our knowledge it was not previously noticed.
\medskip
\noindent{\bf Proposition.}\ {\it Let $q=p^n$, let $W$ be an $\F_p$-vector subspace of $\F_q$, $Q_W(x) = \prod_{w \in W} (x-w)$, and $Y=Q_W(\F_q)$. Then
$W=Q_Y(\F_q)$, and there are short exact sequences
$$0 \to W \stackrel{inc.}{\longrightarrow} \F_q \stackrel{Q_W}{\longrightarrow} Y \to 0 $$
and
$$0 \to Y \stackrel{inc.}{\longrightarrow} \F_q \stackrel{Q_Y}{\longrightarrow} W \to 0. $$ }
\qed
\end{example}
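The symmetric relation $W = Q_Y(\F_q)$ in the proposition can be confirmed by direct computation in a small field. The sketch below (my choices: $p=2$, $q=8$, $\F_8 = \F_2[t]/(t^3+t+1)$ encoded as 3-bit integers, and $W$ the span of $\{1,t\}$) checks it:

```python
# Sketch verifying W = Q_Y(F_q) for q = 8, p = 2 (my choices), with F_8 encoded
# as F_2[t]/(t^3 + t + 1), elements as 3-bit integers, and W the span of {1, t}.
def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 8:
            a ^= 0b1011              # reduce by t^3 + t + 1
        b >>= 1
    return r

def Q(S, x):
    # Q_S(x) = prod over w in S of (x - w); subtraction is XOR in characteristic 2
    r = 1
    for w in S:
        r = mul(r, x ^ w)
    return r

F8 = list(range(8))
W = [0, 1, 2, 3]                     # the 2-dimensional F_2-subspace spanned by 1 and t
Y = sorted({Q(W, x) for x in F8})
assert len(Y) == 2                   # Y = Q_W(F_8) is 1-dimensional
assert sorted({Q(Y, x) for x in F8}) == W    # the symmetric relation W = Q_Y(F_q)
```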
These observations lead to a simple characterization of when an $\F_p$-additive polynomial splits in $\F_q$.
(See Proposition~\ref{prop:linPoly}).
\medskip
\noindent{\bf Splitting Criterion.}\
{\it
Let $q=p^n$ where $p$ is a prime or a prime power, and let $L(x) = x^{p^d} + \sum_{i=0}^{d-1} a_i\, x^{p^i}$ be an $\F_p$-additive polynomial, where $a_i \in \F_q$, $a_0 \ne 0$, and $d\ge1$. Then all the roots of $L$
are in $\F_q$ if and only if there is an $\F_p$-additive polynomial $M(x) = x^{p^{n-d}} + \sum_{i=0}^{n-d-1} b_i x^{p^i} \in \F_q[x]$ with $M \circ L(x) = x^q - x$.
In that case, it is also true that $L \circ M(x) = x^q - x$.}
\medskip
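A minimal instance of the criterion (the choice $p=2$, $q=4$, $L(x)=x^2+x$ is mine): $L$ splits in $\F_4$, and indeed $M(x)=x^2+x$ satisfies $M\circ L(x) = x^4-x$, as does $L\circ M$. The compositions can be checked with exact polynomial arithmetic over $\F_2$:

```python
# Check M(L(x)) = x^4 - x over F_2 for L = M = x^2 + x (a minimal example of my choosing).
def compose(M, L):
    # M, L are dicts {exponent: coefficient}; returns M(L(x)) with coefficients mod 2
    out = {}
    for e, c in M.items():
        term = {0: 1}                         # build L(x)^e by repeated multiplication
        for _ in range(e):
            new = {}
            for e1, c1 in term.items():
                for e2, c2 in L.items():
                    new[e1 + e2] = (new.get(e1 + e2, 0) + c1 * c2) % 2
            term = {k: v for k, v in new.items() if v}
        for k, v in term.items():
            out[k] = (out.get(k, 0) + c * v) % 2
    return {k: v for k, v in out.items() if v}

L = {2: 1, 1: 1}                              # L(x) = x^2 + x, which splits in F_4
M = {2: 1, 1: 1}
assert compose(M, L) == {4: 1, 1: 1}          # x^4 + x = x^4 - x in characteristic 2
assert compose(L, M) == {4: 1, 1: 1}          # and L o M = x^q - x as well
```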
This splitting criterion is simpler than
characterizations that are currently in the literature.
The criterion in current use is as follows. (See McGuire and Sheekey \cite{MS} and Csajb\'{o}k {\it et al} \cite{CMPZ19}.)
Let $L(x) = x^{p^d} + \sum_{i=0}^{d-1} a_i x^{p^i}\in \F_q[x]$, and define $d \x d $ matrices $C_L$ and $A_L$ by
$$C_L = \begin{pmatrix} 0 & 0 & \cdots & 0 & -a_0 \\
1 & 0 & \cdots & 0 & -a_1 \\
0 & 1 & \cdots & 0 & -a_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -a_{d-1}
\end{pmatrix} $$
$$A_L = C_L C_L^{(p)} \cdots C_L^{(p^{n-1})}$$
where $C_L^{(p^i)}$ means raising every entry of $C_L$ to the power $p^i$. Then $L$ has all its roots in $\F_q$ if and only if $A_L$ is equal to the identity matrix.
As an example, let $q=P^7$ and $$L(x) = x^{P^3} - b x^P - a x,$$ where $a,b \in \F_{q}$ and $a\ne0$. It was shown by Csajb\'{o}k {\it et al} \cite[Theorem 3.3]{CMPZ18} using combinatorial arguments that $L$ can have all its roots in $\F_{q}$ only if $q$ is even. A complete characterization of when $L$ has all its roots in $\F_{q}$ was found by G. McGuire and D. Mueller \cite{MM}. One can obtain this result more simply using the new splitting criterion. Namely, let $M(x) = x^{P^4}+u_3 x^{P^3}+u_2 x^{P^2}+u_1 x^P+u_0 x\in\F_{P^7}[x]$ and try to solve $M\circ L(x) = x^{P^7}-x$. From the coefficients of $x^{P^6}$, $x^{P^5}$, $x^{P^4}$, and $x$ one finds that $u_3=0$, $u_2=b^{P^4}$, $u_1 = a^{P^4}$, and $u_0=1/a$. The equation $M\circ L(x)=x^{P^7}-x$ then simplifies to
$$0 = (1/a-b^{P^4+P^2})x^{P^3} - (a^{P^2}b^{P^4}+a^{P^4}b^P)x^{P^2}
-(a^{P^4+P} + b/a)x^P.$$
Then $a=b^{-P^4-P^2}$, and in particular $b\ne0$. From the coefficient of $x^{P^2}$, and using $b^{P^7}=b$, we have
$0=a^{P^2} b^{P^4} + a^{P^4} b^P = b^{-P^6-P^4} b^{P^4} + b^{-P^8-P^6}b^P = 2b^{-P^6}$. Thus, $2=0$, showing $q$ is even. Finally,
$a^{P^4+P}+b/a=0$ yields $b=a^{P^4+P+1}=(b^{-P^4-P^2})^{P^4+P+1}$, which simplifies to ${\rm N}_{\F_{q}/\F_P}(b)=1$. The conclusion is that $L$ has all its roots in $\F_{q}$ iff $q$ is even,
${\rm N}_{\F_{q}/\F_P}(b)=1$, and $a=b^{-P^4-P^2}$.
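This conclusion can be tested by brute force when $P=2$, $q=2^7$ (my implementation below; note that ${\rm N}_{\F_q/\F_2}(b)=b^{127}=1$ holds automatically and $q$ is even, so the characterization reduces to $a=b^{-P^4-P^2}=b^{-20}$):

```python
# Brute-force check for P = 2, q = 2^7 (my choice): L(x) = x^8 + b x^2 + a x splits
# in F_128 iff a = b^(-20). F_128 is F_2[x]/(x^7 + x + 1), elements as 7-bit integers.
def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 128:
            a ^= 0b10000011          # reduce by x^7 + x + 1
        b >>= 1
    return r

def power(a, e):
    r = 1
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def num_roots(a, b):
    count = 0
    for x in range(128):
        x2 = mul(x, x)
        x8 = mul(mul(x2, x2), mul(x2, x2))
        if x8 ^ mul(b, x2) ^ mul(a, x) == 0:
            count += 1
    return count

for b in range(1, 128):
    good_a = power(b, 107)                   # b^(-20) = b^(127-20), since b^127 = 1
    assert num_roots(good_a, b) == 8         # L splits completely in F_128
    bad_a = good_a ^ 1 if good_a != 1 else 2 # some other nonzero a
    assert num_roots(bad_a, b) < 8
```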
\begin{example} \label{example:pgl2}
The case $G=\PGL_2(\F_q)$, studied in Section~\ref{sec:PGL2}, reveals nontrivial information about conjugacy classes of $\PGL_2(\F_q)$.
If $K$ is any field and if $g=\textmatrix abcd \in \GL_2(K)$, let
\begin{equation}
\iota(g) = \frac{(a+d)^2}{ad-bc}. \label{eq:iota2}
\end{equation}
Then $\iota(cg) = \iota(g)$ for $c\in K^\x$, so $\iota$ is well defined on $\PGL_2(K)$. Also, $\iota(hgh^{-1})=\iota(g)$ for $h \in \GL_2(K)$,
so $\iota$ is constant on conjugacy classes. If $e_1,e_2$ are the roots of the characteristic polynomial of $g$, then $\iota(g)=e_1/e_2+e_2/e_1+2$.
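The invariance of $\iota$ under scaling and conjugation amounts to the invariance of $\tr(g)^2/\det(g)$, which is easy to confirm with exact rational arithmetic (the sample matrices below are my choices):

```python
# Sanity check (matrices are my choice) that iota(g) = (a+d)^2/(ad - bc) is invariant
# under scalar multiples and under conjugation.
from fractions import Fraction

def matmul(g, h):
    (a, b), (c, d) = g
    (e, f), (i, j) = h
    return ((a * e + b * i, a * f + b * j), (c * e + d * i, c * f + d * j))

def iota(g):
    (a, b), (c, d) = g
    return Fraction((a + d) ** 2, a * d - b * c)

g = ((2, 3), (1, 4))
h = ((1, 1), (1, 2))                     # det(h) = 1, so its inverse is integral
hinv = ((2, -1), (-1, 1))
assert iota(((6, 9), (3, 12))) == iota(g)             # iota(3g) = iota(g)
assert iota(matmul(matmul(h, g), hinv)) == iota(g)    # constant on conjugacy classes
```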
The following results concerning the map $\iota$ will be proved in Section~\ref{sec:PGL2}.
\noindent{\bf Theorem.}\ {\it A quotient map for $\PGL_2(\F_q)$ is
$Q(x) = (x^{q^2}-x)^{q+1}/(x^q-x)^{q^2+1}$, and $\tau \in \F_q$ is regular with respect to $Q$ iff $\tau \ne 0$.
Let $\calK$ denote the set of conjugacy classes $\calC_\g$ such that $\circ(\g)\ge 3$.
Then $\iota$ induces a bijection from $\calK$ onto $\F_q^\x$, and the inverse bijection is $\inv_Q$.
\qed}
The bijections in the theorem are pictured here:
$$ \fbox{$\calC_\g$ s.t. $\circ(\g)\ge 3$}\qquad {{\stackrel\iota\longrightarrow} \atop \stackrel{\inv_Q}\longleftarrow}\qquad \fbox{$\F_q^\x$ } $$
From this we derive: If $\g\ne 1$ and $\iota(\g)\ne0$, then $$\calC_\g=\{\a\in\PGL_2(\F_q) : \text{$\a\ne1$ and $\iota(\a)=\iota(\g)$}\}.$$ This is false when
$\iota(\g)=0$; for example the two matrices $\g=\textmatrix 100{-1}$ and $\d=\textmatrix 01{-1}0$ in $\PGL_2(\F_3)$ have $\iota(\g)=\iota(\d)=0$, but they are nonconjugate
because $\g$ has two fixed points in $\F_3\cup\{\infty\}$ whereas $\d$ has none.
Let $H$ be any subgroup of $G=\PGL_2(\F_q)$, $Q_H$ a quotient map for $H$, and $Q_G(x)=1+(x^3-x)/(x^q-x)^{q^2-q+1}$.
It is shown in Lemma~\ref{prop:H} that there is a unique rational function $h\in \F_q(x)$ such that $Q_G=h\circ Q_H$. We will prove:
\bigskip\noindent{\bf Theorem.}\ {\it Let $H,Q_H,h$ be as above. Suppose $\tau \in \F_q$ is regular with respect to $Q_H$ and let $\inv_{Q_H}(\tau,q)=\calC_{\g,H}$. If $\g=1$ then
$h(\tau)=\infty$. If $\g\ne1$ then $h(\tau)=\iota(\g)$. \qed}
\end{example}
\begin{example} \label{example:order3} Section~\ref{sec:three} considers
$G=\{I,\beta,\beta^2\}\subset \PGL_2(K)$, where $\beta = \textmatrix 1{-1}10$ and $K$ is any field.
A quotient map is $Q(x)=(x^3-3x+1)/(x(x-1))$, and $\tau\in\cj K$ is regular if and only if $\tau^2-3\tau+9\ne0$.
Let $\s \in \Aut(\cj K/K)$, $\tau\in \cj K$
such that $\tau^2-3\tau + 9\ne 0$ and $\s(\tau)=\tau$. Let $v\in\cj K$ such that $Q(v)=\tau$.
Then there is a unique $\ell\in \Z/3\Z$ such that $\s(v)=\b^\ell(v)$. By definition, $\beta^\ell=\inv_Q(\tau,\s)$.
Section~\ref{sec:three} presents formulae for $\ell$ in terms of $\tau$, as follows.
\noindent{\bf Theorem.}\ {\it With notation as above, if $\inv_Q(\tau,\s)=\b^\ell$ then $\ell \pmod 3$ is determined by:
\begin{enumerate}
\item If char$(K)\ne 3$, let $\omega\in \cj K$ denote a primitive cube root of unity and let $\zeta\in\cj K$ satisfy
$\zeta^3=(\tau+3\omega^2)/(\tau+3\omega)$. Then $\s^2(\zeta)/\zeta = \omega^{\ell}$.
\item If char$(K)=3$, let $\zeta\in\cj K$ satisfy $\zeta^3-\zeta=1/\tau$. Then $\ell=\s(\zeta)-\zeta$.
\item In the special case where $K=\F_q$ and $\sigma$ is the Frobenius, $\s(x)=x^q$, then
$$\begin{cases}
\omega^\ell= \left(\frac{\tau+3\omega^2}{\tau+3\omega}\right)^{(q^2-1)/3} & \text{if $3\nmid q$,}\\
\ell=\Tr_{\F_q/\F_3}(1/\tau) & \text{if $3|q$.}
\end{cases}
$$
\end{enumerate}
}
\qed
\end{example}
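The defining property of $Q$, its invariance under the group generated by $\beta$, can be confirmed numerically over $\Q$ (the sample points below are my choice); note $\beta(x)=(x-1)/x$ and $\beta^3=1$ in $\PGL_2$:

```python
# Confirm over Q, at rational sample points (my choice), that
# Q(x) = (x^3 - 3x + 1)/(x(x-1)) is invariant under beta(x) = (x-1)/x,
# and that beta has order 3.
from fractions import Fraction

def Q(x):
    return (x ** 3 - 3 * x + 1) / (x * (x - 1))

def beta(x):
    return (x - 1) / x

for n in range(2, 12):
    x = Fraction(n, 7)
    if x == 1:
        continue
    assert Q(beta(x)) == Q(x) == Q(beta(beta(x)))
    assert beta(beta(beta(x))) == x
```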
This article has three parts. Part~\ref{part:general} (Sections~\ref{sec:orbits}--\ref{sec:invariant}) presents the general theory.
Specifically, Section~\ref{sec:orbits} discusses $G$-orbits in $\cj K \cup\{\infty\}$,
Section~\ref{sec:Q} discusses existence and computation of quotient maps,
and Section~\ref{sec:invariant} defines the invariant $\inv(\tau)$.
Part~\ref{part:K} (Sections~\ref{sec:known}--\ref{sec:six}) considers finite subgroups of $\PGL_2(K)$ that are naturally defined for an arbitrary field $K$.
Section~\ref{sec:known} considers $G=\{\textmatrix 1001,\textmatrix 0110\}\subset \PGL_2(K)$ and shows how the Artin invariant for this group
in the case $K=\F_q$
is related to the well-known fact: $\F_q = \set{\z+1/\z : \z^{q-1}=1\ {\rm or}\ \z^{q+1}=1}$.
Section~\ref{sec:K} generalizes Examples~\ref{example:Kummer} and~\ref{example:Klein} to arbitrary fields~$K$.
Section~\ref{sec:three} considers the order-3 group given in Example~\ref{example:order3}.
Section~\ref{sec:six} considers the dihedral group of order~six in $\PGL_2(K)$ generated by $\textmatrix1{-1}10$ and $\textmatrix 0110$.
Part~\ref{part:q} considers subgroups of $\PGL_2(\F_q)$, including Borel subgroups, unipotent subgroups, cyclic subgroups,
$\PGL_2(\F_q)$, and $\PSL_2(\F_q)$. Some applications are given.
\part{General Theory} \label{part:general}
\section{Orbits} \label{sec:orbits}
Let $K$ be any field and $\cj K$ its algebraic closure.
The projective linear group $\PGL_2(K)$ is defined as the group of invertible $2\x2$ matrices
with entries in $K$, modulo the scalar matrices, $\textmatrix c00c$, where $c \in K^\x$.
As is well known, if $K \subset L$ where $L$ is a field, then $\PGL_2(K)$ acts on $L\cup \{\infty\}$ via
$$\textmatrix abcd (v) = \frac{av+b}{cv+d}.$$
This equation is self-explanatory if $v \in L$ and $cv+d\ne 0$. If $v=\infty$, then
$\textmatrix abcd (v) = a/c$, where we interpret $a/0 = \infty$. Also, if $v\in L$ and $cv+d=0$, then $\textmatrix abcd (v) = \infty$.
The reader can verify that if $\gamma,\delta \in \PGL_2(K)$ then $\gamma\left(\delta (v)\right) = (\gamma \delta)(v)$.
$\PGL_2(K)$ acts triply transitively on $K\cup\{\infty\}$, \ie, for any distinct $a,b,c \in K \cup \{\infty\}$ there is $\gamma \in \PGL_2(K)$
taking $\infty$ to $a$, 0 to $b$, and 1 to $c$. In fact, $\gamma$ is uniquely determined and it equals
$$\gamma = \begin{pmatrix} a(b-c) & b(c-a) \\ b-c & c-a \end{pmatrix}$$
if $a,b,c$ are all finite, or $\textmatrix {c-b} b01$ if $a=\infty$, $\textmatrix a{c-a}10$ if $b=\infty$, $\textmatrix a{-b}1{-1}$ if $c=\infty$.
Thus, $\PGL_2(K)$ is in one-to-one correspondence with the set of ordered triples $(a,b,c)$ of
distinct elements in $K \cup \{\infty\}$, and in particular
\begin{equation} |\PGL_2(\F_q)| = (q+1)q(q-1). \label{eq:cardPGL2} \end{equation}
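The displayed matrix can be verified directly: it sends $\infty \mapsto a$, $0 \mapsto b$, $1 \mapsto c$. A quick check with exact rational arithmetic (the sample values are my choice):

```python
# Verify (sample values mine) that gamma = [[a(b-c), b(c-a)], [b-c, c-a]]
# sends infinity -> a, 0 -> b, 1 -> c.
from fractions import Fraction

a, b, c = Fraction(5), Fraction(-2), Fraction(7)
A, B = a * (b - c), b * (c - a)
C, D = b - c, c - a

assert A / C == a                    # gamma(infinity) = A/C
assert B / D == b                    # gamma(0) = B/D
assert (A + B) / (C + D) == c        # gamma(1) = (A+B)/(C+D)
```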
Let $G$ be a finite subgroup of $\PGL_2(K)$, and let $|G|$ denote its cardinality.
If $v\in L\cup \{\infty\}$, then the $G$-orbit containing $v$, or simply {\it orbit} if $G$ is clear from context, is
defined as
$$\calO_v = \set{\gamma(v) : \gamma \in G}.$$
Orbits partition $L \cup \{\infty\}$ into disjoint sets.
We will say an orbit is {\it short} if it has fewer than $|G|$ elements;
otherwise the orbit is {\it full-sized}.
\begin{lemma} \label{lem:short} Let $G$ be a finite subgroup of $\PGL_2(K)$ and let $L/K$ be an extension of fields.
An element $v\in L\cup\{\infty\}$ belongs to a short orbit if and only if there is $\gamma \in G$, $\gamma \ne \textmatrix 1001$,
such that $\gamma (v) = v$. Every short orbit is contained in $K\cup\{\infty\}$ or a quadratic extension of $K$.
The union of short orbits has at most $2(|G|-1)$ elements.
\end{lemma}
\begin{proof} Let $S$ denote the union of short orbits in $L$. Then
$$ v \in S \iff \text{$\calO_v$ is short} \iff \gamma_1(v) = \gamma_2(v)\ \text{for a pair of distinct elements $\gamma_1,\gamma_2 \in G$},$$
and in that case, $\gamma_1^{-1}\gamma_2$ fixes $v$. Thus,
$$S = \cup_{1\ne \gamma \in G} F_\gamma,$$
where $F_\gamma = \set{v \in L\cup\{\infty\} : \gamma(v)=v}$. Since $|F_\g|\le 2$, and $F_\g$ consists of elements of $K\cup\{\infty\}$ or a pair of conjugate elements in a quadratic extension of $K$ (the fixed points of $\g$ satisfy a quadratic equation over $K$), the result follows.
\end{proof}
\begin{lemma} \label{lem:orbitMultiplicity} Let $G$ be a finite subgroup of $\PGL_2(K)$, and let $\calO \subset L \cup \{\infty\}$ be a $G$-orbit,
where $L/K$ is an extension field.
Then $|\calO|$ divides $|G|$, and each element of $\calO$ is fixed by exactly $|G|/|\calO|$ elements of $G$. The integer
$\mult(\calO)=|G|/|\calO|$ is called the {\it multiplicity of $\calO$}.
If $\calO \ne \calO_\infty$ and $v \in \calO$, then
$$\prod_{\gamma \in G} (x-\gamma(v)) = \left(\prod_{w \in \calO} (x-w)\right)^{\mult(\calO)}.$$
\end{lemma}
\begin{proof} This follows from standard facts about groups acting on sets, as can be found for example in \cite[Section 4.1, Prop.~2]{DF}. \end{proof}
If $v \in \cj K$,
define $\deg_K(v) = [K(v):K]$. Note that $\deg_K(v) = \deg_K(\gamma(v))$ for all
$\gamma \in \PGL_2(K)$, because $v$ and $\gamma(v)$ generate the same field over $K$. Consequently,
$\deg_K(v)$ is constant on orbits. If $K=\F_q$, we write $\deg_q(v)$ instead of $\deg_K(v)$. Then $\F_q(v)=\F_{q^t}$, where $t = \deg_q(v)$.
\begin{lemma} \label{lem:degree} Let $\gamma \in \PGL_2(\F_q)$ and suppose $\order(\gamma)=t>1$.
If $v \in \cj\F_q$ and
$v^q = \gamma(v)$ then $v^{q^i}=\g^i(v)$ for all $i\ge1$, and $\deg_q(v)$ divides $t$. If in addition $v,\g(v),\ldots,\g^{t-1}(v)$ are distinct,
then $\deg_q(v) = t$.
\end{lemma}
\begin{proof} Note that $v^{q^2} = (v^q)^q = (\gamma(v))^q$. Since $\gamma$ has entries in $\F_q$, this equals $\gamma(v^q) = \gamma(\gamma(v))=\gamma^2(v)$.
By induction one can show that $v^{q^i} = \gamma^i(v)$ for all $i\ge1$. Thus, the $G$-orbit $\calO_v$ is the set of $\F_q$-conjugates of $v$, where
$G$ is the cyclic group of order $t$ generated by~$\g$. Then $\deg_q(v)=|\calO_v|=t/\mult(\calO_v)$. This shows that $\deg_q(v)$
divides $t$, and $\deg_q(v)=t$ iff $\calO_v$ has full size, \ie, iff $\g^i(v)$ for $0\le i < t$ are distinct.
\end{proof}
The next proposition will be useful in determining how many field elements $\tau$ have the same invariant $\calC_\g$, assuming that $\order(\gamma)\ge 3$.
A stronger version that includes the case where $\circ(\g)=2$ will be proved in Section~\ref{sec:cyclic}.
\begin{proposition} \label{prop:counting} Suppose that $\gamma \in \PGL_2(\F_q)$ has order $t \ge 3$. Let
$$A_{\g,q} = \set{v \in \cj\F_q : v^q = \gamma(v)},\qquad \calL_{\g,q} = A_{\g,q} \setminus \F_{q^2}.$$
Let $G \subset \PGL_2(\F_q)$ be a group that contains $\gamma$ and let $\calC_\g = \set{\a\g\a^{-1}:\a\in G}$.
\begin{enumerate}
\item[({\it i})] If $v \in A_{\g,q}$ then the $G$-orbit $\calO_v = \set{\b(v):\b \in G}$ has full size if and only if $v \in \calL_{\g,q}$.
\item[({\it ii})] $|\calL_{\g,q}|=q+\kappa$ with $\kappa \in \{0,1,-1\}$, and $t$ divides $q+\kappa$. (Note that $\kappa$ is uniquely determined
from $t$, since $t\ge 3$ and $\kappa \equiv - q \pmod t$).
\item[({\it iii})] Let $\calL = \cup_{\b \in \calC_\g} \calL_{\b,q}$.
Then $\calL$ decomposes into exactly $r$ $G$-orbits, all of full size, where $r = |\calC_\g|(q+\kappa)/|G|$.
\end{enumerate}
\end{proposition}
\begin{proof} {\it (i)} Let $v \in A_{\g,q}$, so $v^q=\gamma(v)$. We will show that the $G$-orbit $\calO_v$ has full size if and only if $v\in \calL_{\g,q}$.
Since all short orbits are contained in $\F_{q^2} \cup \{\infty\}$, $v \in \calL_{\g,q}$ implies that $\calO_v$ has full size.
Conversely, if $\calO_v$ has full size then $\gamma^i(v)$ are distinct for $i=0,1,\ldots,t-1$, so $\deg_q(v)=t$
by Lemma~\ref{lem:degree}. In particular, $v \not \in \F_{q^2}$ so $v \in \calL_{\g,q}$.
{\it (ii)} Write $\gamma = \textmatrix abcd$. $A_{\gamma,q}$ is the set
of solutions in $\cj\F_q$ to $f(x) = 0$, where $f(x)=x^q(cx+d)-ax-b$. We claim the roots are distinct.
For if $r$ is a repeated root, then $f(r)=f'(r)=0$. Since $f'(x)=cx^q-a$ in characteristic $p$, this gives $r^qc-a=0$.
Either $c=a=0$ (contradicting that $ad-bc \ne 0$) or $r^q=a/c$. But $r^q=a/c$ implies
$f(r)=(a/c)(cr+d)-ar-b=(ad-bc)/c\ne0$, contradicting that $r$ is a root of $f$. This establishes that $f$ has no repeated roots, so it has $\deg(f)$
distinct roots in $\cj\F_q$. So $|A_{\g,q}|=\deg(f)$, which equals $q+1$ if $c\ne 0$, or $q$ if $c=0$.
Let $X=A_{\g,q}\cap \F_{q^2}$. Any $v \in X$ satisfies $v = v^{q^2} = \gamma^2(v)$, so it is a fixed
point of $\gamma^2$. There are at most two fixed points, so $|X|\le 2$. Further, if $c=0$ then $\gamma^2$ fixes $\infty$,
so it can fix at most one other point, and it follows that $|X|\le 1$ when $c=0$.
Since $A_{\g,q}$ is the disjoint union of $\calL_{\g,q}$ and $X$, $|\calL_{\g,q}|=|A_{\g,q}| - |X|$. If $c=0$ then $|A_{\g,q}|=q$ and $|X| \in \{0,1\}$,
and if $c\ne 0$ then $|A_{\g,q}| = q+1$ and $|X| \in \{0,1,2\}$. In either case, $|\calL_{\g,q}|\in \{q-1,q,q+1\}$, \ie,
$|\calL_{\g,q}|=q+\kappa$ where $\kappa \in \{-1,0,1\}$.
To see that $t$ divides $|\calL_{\g,q}|$, observe that $\g$ permutes $\calL_{\g,q}$ and has no fixed points. The permutation breaks into cycles, each of order $t$,
so the cardinality of $\calL_{\g,q}$ must be a multiple of~$t$.
{\it (iii)} If $v \in \calL$ and $\a \in G$ then we claim $\a(v) \in \calL$. Indeed, $v^q = \b (v)$ for some
$\b=\ep\g\ep^{-1} \in \calC_\g$,
so if $w=\a(v)$ then $w^q=\a(v^q)=\a\ep\g\ep^{-1}(v)= \a\ep\g\ep^{-1}\a^{-1}(w)=(\a\ep)\g(\a\ep)^{-1}(w)$. Further, $\deg_q(w)=\deg_q(v)=t$, so $w\not\in \F_{q^2}$.
This shows $\a(v)\in \calL$, as claimed. Then $\calL$ splits into $G$-orbits. All have full size by {\it (i)}, so $|G|$ divides $|\calL|$, and the number of $G$-orbits is $|\calL|/|G|$.
Finally, $\calL$ is a disjoint union of the sets $\calL_{\b,q}$ with $\b \in \calC_\g$, because if $v^q=\b(v)$ and $v^q=\b'(v)$ then $\b(v)=\b'(v)$, $\b^{-1}\b'(v)=v$.
Since $\deg_q(v)>2$, this forces $\b=\b'$. Note that $\order(\ep\g\ep^{-1})=\order(\g) = t$ for each $\b = \ep\g\ep^{-1} \in \calC_\g$, so each set\
$\calL_{\b,q}$ has the same cardinality, $q + \kappa$, where $\kappa\in \{-1,0,1\}$ and $\kappa \equiv -q \pmod t$. We conclude that $|\calL|=(q+\kappa)|\calC_\g|$
and the number of $G$-orbits is $r=(q+\kappa)|\calC_\g|/|G|$.
\end{proof}
\section{Quotient maps} \label{sec:Q}
Let $G$ be a finite subgroup of $\PGL_2(K)$, where $K$ is any field.
A {\it quotient map} for $G$ is a rational function $Q(x)$ such that the extension field $K(x)/K(Q)$ has Galois group $G$. We further require that $Q(\infty)=\infty$.
This section establishes the existence of quotient maps and discusses their properties, examples, and computation.
\subsection{Existence of quotient maps.}
The existence of a quotient map essentially follows from the Galois theory of the field $K(x)$ (see Artin \cite{Artin}), together with some facts about subfields of the
purely transcendental extension $K(x)/K$ (see van der Waerden \cite{VdW}).
Every nonzero $f \in K(x)$ can be written uniquely as $p_1(x)/p_2(x)$, where
$p_1$ and $p_2$ are relatively prime polynomials and $p_2$ is monic. Define $$\deg(f) = \max\{\deg(p_1),\deg(p_2)\}.$$
If $\gamma = \textmatrix abcd \in \PGL_2(K)$, then it may be viewed as an element of $K(x)$ of degree~1: $$\gamma(x) = (ax+b)/(cx+d),\qquad \deg(\g)=1.$$
\begin{lemma} \label{lem:degf} {\it (i)}\ If $f\in K(x)$ is nonconstant then $[K(x):K(f)] = \deg(f)$. \\ \noindent{\it (ii)}\ If $f,g \in K(x)$
are nonconstant then $\deg(f\circ g)= \deg(f)\,\deg(g)$.
\end{lemma}
\begin{proof} {\it (i)}\ See \cite[Section 10.2]{VdW}. \\ {\it (ii)}\ Let $y=g(x)$. Since $[K(x):K(y)] = \deg(g)$, we have $[K(y):K] = \infty$ and so $y$ is transcendental.
By {\it (i)}, $\deg(f \circ g) = [K(x):K(f(g(x)))] = [K(x):K(y)] [K(y):K(f(y))] = \deg(g)\,\deg(f)$. \end{proof}
\begin{corollary} \label{cor:autKx}
If $\gamma \in \PGL_2(K)$ and $f \in K(x)$, define $A_\gamma(f) = f\circ \gamma^{-1}$. Then $\gamma \mapsto A_\gamma$ is an isomorphism
from $\PGL_2(K)$ onto $\Aut(K(x)/K)$.
\end{corollary}
\begin{proof} This is well known, but we give a proof for completeness. $\Aut(K(x)/K)$ is defined as the group of isomorphisms from $K(x)$ to $K(x)$ that
fix all elements of $K$.
First, $A_\gamma \in \Aut(K(x)/K)$ because $A_\g(f+g)=A_\g(f)+A_\g(g)$, $A_\g(f) A_\g(g) = A_\g(fg)$ when $f,g \in K(x)$,
$A_\g(c) = c$ when $c \in K$, and $A_\g^{-1}=A_{\g^{-1}}$. Clearly $A_\g=1 \iff \g^{-1}(x)=x \iff \g=1$.
Further, $A_{\gamma} (A_\delta(f)) = A_\gamma (f \circ \delta^{-1}) = f\circ \delta^{-1} \circ
\gamma^{-1} = f \circ (\gamma\delta)^{-1} = A_{\gamma\delta}(f)$. So $\PGL_2(K)$ injects into $\Aut(K(x)/K)$, and we just need to show it is surjective.
Let $A$ be any automorphism of $K(x)/K$.
Since $x$ generates $K(x)$ over $K$, so does $A(x)$. Then $\deg(A(x))=1$ by Lemma~\ref{lem:degf}.
Write $A(x)= (ax+b)/(cx+d)$, where $ax+b$ and $cx+d$ have no common factor and
$a$ or $c$ is nonzero. Then $ad-bc\ne 0$, so $\gamma=\textmatrix abcd$ is in $\PGL(2,K)$. Evidently $A(x)=A_{\gamma^{-1}}(x)$, and since an automorphism
of $K(x)/K$ is determined by the image of $x$, it follows that $A = A_{\gamma^{-1}}$. This proves surjectivity.
\end{proof}
Let $\Sigma$ be the fixed field of $G$:
\begin{equation} \Sigma = \set{f(x) \in K(x) : \text{$f\circ \gamma(x) = f(x)$ for all $\gamma \in G$} }. \label{eq:Sigma}
\end{equation}
\begin{proposition} \label{prop:Q0}
There is a function $Q(x) \in K(x)$ of degree $|G|$ such that $\Sigma = K(Q)$. Moreover, $[K(x):\Sigma]=|G|$, $K(x)/\Sigma$ is Galois,
and its Galois group is isomorphic to $G$. If $Q'(x) \in \Sigma$ and $\deg(Q')=|G|$ then there is $\alpha \in \PGL_2(K)$ such that $Q' = \alpha \circ Q$.
\end{proposition}
\begin{proof} Let $x,y$ be independent transcendentals and consider
\begin{equation} F(y) = \prod_{\gamma \in G} \left(y - \gamma(x)\right) \in K(x)[y]. \label{eq:Fy} \end{equation}
$F(y)$ has degree $|G|$ and its coefficients are in $\Sigma$.
Since $F \in \Sigma[y]$ and $F(x)=0$, this shows $K(x)$ is an algebraic extension of $\Sigma$ and $[K(x):\Sigma] \le |G|$.
The group $G$ is contained in $\Aut(K(x)/K)$ by Corollary~\ref{cor:autKx}, and it fixes all elements of $\Sigma$, therefore $[K(x):\Sigma] \ge |G|$
by Galois theory (see the corollary to Theorem~13 in \cite{Artin}). Combining these inequalities gives $[K(x):\Sigma]=|G|$. Since the degree of the extension equals
the order of the group of automorphisms of $K(x)$ that fix $\Sigma$, the extension is Galois.
L\"uroth's Theorem \cite[\S10.2, {\it p.}~218]{VdW} states that any field $E$ such that $K\subset E \subset K(x)$ and $[K(x):E]<\infty$ has the form $E=K(f)$,
where $f\in K(x)\setminus K$. Therefore, $\Sigma = K(Q)$ for some $Q \in K(x)$. By Lemma~\ref{lem:degf}{\it (i)},
$\deg(Q)=[K(x):K(Q)]$, which equals $|G|$.
If $Q' \in \Sigma=K(Q)$ then $Q'=h(Q)$ for some $h \in K(x)$. By Lemma~\ref{lem:degf}{\it (ii)},
if $\deg(Q')=|G|$ then $\deg(h)=1$, so $h \in \PGL_2(K)$.
\end{proof}
\begin{proposition} \label{prop:QExists}
Let $K$ be any field and let $G$ be a finite subgroup of $\PGL_2(K)$.
There is a rational function $Q\in K(x)$ such that
\begin{enumerate}
\item $Q(\gamma x) = Q(x)$ for all $\gamma \in G$;
\item If $Q$ is written as a reduced fraction, \ie, $Q=f/g$ where $f,g \in K[x]$
and $\GCD(f,g)=1$, then $|G|=\deg(f) > \deg(g)$.
\end{enumerate}
Further, if $\widetilde Q$ is another function with these properties, then
$\widetilde Q(x) = a Q(x) + b$ for some $a\in K^\x$ and $b\in K$.
\end{proposition}
\begin{proof} Let $Q_0=f_0/g_0$ be a function as in Proposition~\ref{prop:Q0}, so $\deg(Q_0)=|G|$ and $Q_0\circ\gamma = Q_0$ for all $\gamma \in G$.
Then $\alpha \circ Q_0$ satisfies
these conditions also, for any $\alpha \in \PGL_2(K)$. We claim that $\alpha$ can be chosen so that $\alpha\circ Q_0 = f/g$, where $\deg(f) = |G| > \deg(g)$.
If $\deg(g_0) < |G|$, take $\alpha=\textmatrix 1001$, the identity map. If $\deg(g_0)=|G|$ and $\deg(f_0)<|G|$, then take $\alpha = \textmatrix 0110$, the reciprocal map.
Finally, if $\deg(f_0) = \deg(g_0) = |G|$, then there is $c \in K$ such that $\deg(f_0+cg_0) < |G|$, and $\textmatrix 0110 \textmatrix 1c01 \circ Q_0 =
g_0/(f_0+cg_0)$ has the desired form.
For the last statement, $\widetilde Q = \alpha \circ Q$ for $\alpha \in \PGL_2(K)$ by Proposition~\ref{prop:Q0}. The condition on the degrees of the denominators
forces $\alpha$ to have the form $\textmatrix a b 0 1$.
\end{proof}
Because the functions in Proposition~\ref{prop:QExists} are so central to this article, we give them a name.
\begin{definition} \label{def:quotient} A function $Q\in K(x)$ that satisfies the two conditions of Proposition~\ref{prop:QExists} is called
a {\it quotient map} for $G$.
\end{definition}
\begin{proposition} \label{prop:KQSigma} If $Q(x)$ is a quotient map for $G$ then $K(Q(x))=\Sigma$, where $\Sigma$ is defined in~(\ref{eq:Sigma}).
In particular, $K(x)/K(Q)$ is Galois, and
its automorphism group is isomorphic to~$G$.
\end{proposition}
\begin{proof} $Q \in \Sigma$ by the first part of the definition, so $K(Q)\subset \Sigma$.
Also, $\deg(Q)=|G|$ by the second part of the definition, so $[K(x):K(Q)]=\deg(Q)=|G|=[K(x):\Sigma]$. We conclude that $K(Q)=\Sigma$. Then $K(x)/K(Q)=K(x)/\Sigma$ is Galois, and its Galois group is isomorphic to $G$ by
Proposition~\ref{prop:Q0}.
\end{proof}
\subsection{Properties of quotient maps.} \label{sec:Qproperties}
\begin{proposition} \label{prop:H}
If $H \subset G \subset \PGL_2(K)$ are finite subgroups and $Q_H$, $Q_G$ are quotient maps for these groups then $Q_G=h(Q_H)$ for a unique $h\in K(x)$,
and $\deg(h)=|G|/|H|$.
\end{proposition}
\begin{proof} Let $\Sigma_H = \set{ u \in K(x) : \text{$u \circ \gamma = u$ for all $\g \in H$}}$ and define $\Sigma_G$ similarly.
By Proposition~\ref{prop:KQSigma}, $\Sigma_G=K(Q_G)$ and $\Sigma_H = K(Q_H)$.
Since $Q_G \in \Sigma_G \subset \Sigma_H=K(Q_H)$, $Q_G=h(Q_H)$ for some rational function~$h$. Since $Q_H$ is transcendental over $K$, $h$ is unique.
$|G|=\deg(Q_G)=\deg(h(Q_H)) = \deg(h)\,\deg(Q_H) = \deg(h)|H|$ by Definition~\ref{def:quotient} and Lemma~\ref{lem:degf}. Thus, $\deg(h)=|G|/|H|$.
\end{proof}
The next proposition (especially {\it (i)}) illustrates that quotient maps have very strong arithmetic properties.
\begin{proposition} \label{prop:Qproperties}
Let $Q(x)\in K(x)$ be a quotient map for $G$, where $G \subset \PGL_2(K)$. Write $Q(x)=f(x)/g(x)$ where $f,g$ are relatively prime polynomials and $f$ is monic.
Let $L/K$ be an extension field and let $x,y$ be independent transcendentals over $L$. Then
\begin{enumerate}
\item[{\it (i)}] {$f(y)-Q(x)g(y) = \prod_{\gamma \in G} \left(y - \gamma(x)\right)$.} \\
\item[{\it (ii)}] If $v_1,v_2 \in L$ and $Q(v_2)\ne \infty$ then $Q(v_1)=Q(v_2)$ if and only if $v_2 = \gamma(v_1)$ for some $\gamma \in G$. Consequently, if $w \in L$ then $Q^{-1}(w)$
is a $G$-orbit in $\cj L$. \\
\item[{\it (iii)}] If $w \in L$ and $\calO=Q^{-1}(w)$ is the corresponding orbit in $\cj L$, then $f(x)-w g(x) = \left(\prod_{v \in \calO} x-v\right)^{\mult(\calO)}$. \\
\item[{\it (iv)}]
$g(x) = a\prod_{v \in \calO_\infty, v \ne \infty} (x-v)^{\mult(\calO_\infty)}$, where $a \in K^\x$. Here
$\calO_\infty=\set{\gamma(\infty) : \gamma \in G}$ and $\mult(\calO_\infty)=|H|$,
where $H=\{\g \in G : \g(\infty)=\infty\} = \{\textmatrix abcd \in G : c=0\}$.
\end{enumerate}
\end{proposition}
\begin{proof} {\it (i)}\ By Definition~\ref{def:quotient}, $\deg(f)=|G|>\deg(g)$. The left and right sides of the equation in~{\it (i)}, when regarded as polynomials in~$y$,
are both monic polynomials
of degree $|G|$ with coefficients in $\Sigma$, where $\Sigma$ is defined in (\ref{eq:Sigma}). Also, both vanish at $y=x$.
Since $[K(x):\Sigma]=|G|$ by Proposition~\ref{prop:Q0}, both are minimal polynomials for $x$ over $\Sigma$. Then each divides the other, so they are equal. \\
{\it (ii)}\ If $v_2=\g(v_1)$ for some $\g\in G$, then $Q(v_2)=Q\circ \g(v_1)=Q(v_1)$, since $Q\circ \g=Q$. Now assume
$Q(v_1)=Q(v_2)$. By hypothesis, this is finite, so $g(v_2)\ne 0$. By part {\it (i)},
$$f(v_2)-Q(v_1) g(v_2) = \prod_{\gamma \in G} \left(v_2 - \gamma(v_1)\right).$$
The left side vanishes since $Q(v_1)=Q(v_2)$. Thus, $v_2 = \gamma(v_1)$ for some $\gamma \in G$. \\
{\it (iii)}\ Let $v\in Q^{-1}(w)$. Set $x=v$ in the identity of part {\it (i)}
to obtain that $f(y)-wg(y) = \prod_{\gamma \in G} \left(y - \gamma(v)\right)$, then
apply
Lemma~\ref{lem:orbitMultiplicity}. \\
{\it (iv)}\
Let $F(x,y)=g(x)f(y)-f(x)g(y) \in K[x,y]$. Since $\deg(f)=|G|>\deg(g)$, this polynomial has degree~$|G|$ in each variable, and by~{\it (i)},
$$F(x,y) =g(x)\prod_{\g\in G} \left(y-\g(x)\right).$$
Let
$$ u(x) = \prod_{\textmatrix abcd \in G} (cx+d).$$
(Since $G$ is projective, $u(x)$ is well-defined only up to a nonzero scalar multiple in $K^\x$.)
Let $H=\{\textmatrix abcd \in G : c = 0\}$, so $|H|=\mult(\calO_\infty)$.
Then $\deg(u)=|G|-|H|$, and
$$F(x,y) = \frac{g(x)}{u(x)} \prod_{\textmatrix abcd\in G} \left((cx+d)y-(ax+b)\right).$$
Note that $(cx+d)y-(ax+b)$ has degree~1 in $y$ because $c$ or $d$ is nonzero; also it has degree~1 in $x$ because $c$ or $a$ is nonzero. Let
$$P(x,y) = \prod_{\textmatrix abcd\in G} \left((cx+d)y-(ax+b)\right).$$
This has degree $|G|$ in $x$ and in~$y$.
Also, $P(x,y)$ is not divisible by any linear factor $rx+s \in K[x]$ with $r\ne0$: the factor $(cx+d)y-(ax+b)$ is divisible by $rx+s$ only if
both $cx+d$ and $ax+b$ vanish at $x=-s/r$, \ie, $\textmatrix abcd \binom{-s/r}{1} = \binom{0}{0}$, contradicting that $\textmatrix abcd$ is invertible.
In particular, $P(x,y)$ is not divisible by any nonconstant factor $cx+d$ of $u(x)$, and so it is relatively prime to~$u(x)$.
Since $u(x)F(x,y)=g(x)P(x,y)$ and $u(x)$ is relatively prime to $P(x,y)$, $u(x)$ must divide~$g(x)$.
$F(x,y)$ and $P(x,y)$ both have degree~$|G|$ in~$x$, therefore $\deg_x(g/u)=0$, \ie, $g/u$ is constant.
To complete the proof, it remains only to prove that $u(x)$ is a constant multiple of $\prod_{v \in \calO_\infty, v \ne \infty} (x-v)^{|H|}$.
Since $\textmatrix abcd^{-1} = \textmatrix d{-b}{-c}a$ in $\PGL_2(K)$ and $\g \to \g^{-1}$ is a bijection of~$G$,
$$ u(x) \equiv^* \prod_{\textmatrix abcd \in G} (-cx + a),$$
where the symbol $\equiv^*$ indicates ``up to a constant multiple in~$K^\x$''. Now
$-cx+a\equiv^* 1$ if $c=0$, $-cx+a\equiv^* x-a/c$ if $c\ne 0$, and $a/c = \textmatrix abcd(\infty)\in \calO_\infty$.
Thus,
$$u(x) \equiv^* \prod_{\g \in G\setminus H} (x-\g(\infty)).$$
Let $R$ be a complete set of coset representatives for $G/H$, excluding the identity coset, so $G \setminus H$ is the disjoint union of $rH$ for $r \in R$.
Writing $\gamma = r h$ with $r \in R$, $h\in H$ we have $\gamma(\infty)=rh(\infty)=r(\infty)$, so $\calO_\infty \setminus \{\infty\} = \set{r(\infty):r \in R}$.
Also, the points $r(\infty)$ for $r\in R$ are distinct, since $r(\infty)=r'(\infty)$ would imply $r^{-1}r'\in H$ and consequently $r'H=rH$.
Thus,
\begin{equation*} u(x) \equiv^*
\prod_{r \in R} \prod_{h \in H} (x-rh(\infty))
= \prod_{r \in R} (x-r(\infty))^{|H|}
= \prod_{v\in \calO_\infty,\ v\ne \infty}(x-v)^{|H|}.
\end{equation*}
Since $g$ is a constant multiple of $u$ and both have coefficients in~$K$, this proves the result.
\end{proof}
Parts {\it (ii)} and {\it (iv)} of Proposition~\ref{prop:Qproperties}
together imply:
\begin{equation} \text{If $v_1,v_2 \in L \cup \{\infty\}$ then $Q(v_1)=Q(v_2)$ if and only if
$v_2 = \gamma(v_1)$ for some $\gamma \in G$.} \label{orbitStatement}
\end{equation}
\begin{proposition} \label{prop:Qorbit} Let $G$ be a finite subgroup of $\PGL_2(K)$ and $Q$ a quotient map for~$G$. If $w \in \cj K \cup \{\infty\}$ then
$Q^{-1}(w)$ is a $G$-orbit in $\cj K \cup \{\infty\}$. \end{proposition}
\begin{proof} First we show that $Q^{-1}(w)$ is nonempty. If $w = \infty$ then $\infty \in Q^{-1}(w)$. Now assume $w$ is finite, and let $v\in \cj K$
be a root of $g(x)w-f(x)$, where $Q=f(x)/g(x)$ and $f,g$ are relatively prime. If $g(v)=0$ then the equation $g(v)w-f(v)=0$ forces $f(v)=0$, contradicting
that $f,g$ are relatively prime. We conclude that $g(v) \ne 0$, so $w = f(v)/g(v) = Q(v)$ and $v \in Q^{-1}(w)$. The fact that $Q^{-1}(w)$ is a
$G$-orbit follows from Proposition~\ref{prop:Qproperties}{\it (ii)} if $w\in\cj K$ or Proposition~\ref{prop:Qproperties}{\it (iv)} if $w=\infty$.
\end{proof}
\subsection{Computation of quotient maps.} \label{sec:computeQ}
From the perspective of Galois theory,
quotient maps arise from invariant theory. We show in this section that they may also be computed
by considering their zeros and poles.
\begin{theorem} \label{thm:computeQ}
Let $G$ be a finite subgroup of $\PGL_2(K)$. Let $\calO\subset \cj K$ be a $G$-orbit that does not contain $\infty$ and let $\mult(\calO)=|G|/|\calO|$ be its multiplicity.
Let
$$f_\calO(x)= \left(\prod_{v \in \calO} (x-v)\right)^{\mult(\calO)}\quad {\rm and}\quad
g(x) = \left(\prod_{v \in \calO_\infty, v \ne \infty}(x-v)\right)^{\mult(\calO_\infty)}.$$
Then there is $w \in \cj K$ such that $f_\calO(x)/g(x) + w $ is a
quotient map for $G$.
\end{theorem}
\begin{proof}
Let $Q$ be a quotient map for $G$. By Proposition~\ref{prop:Qproperties}{\it (iv)}, $Q=f/g$ for some $f \in K[x]$. On replacing $Q$ by a constant multiple, we can assume that $f$ is monic.
Let $v \in \calO$ and $w=Q(v)\in \cj K$.
By Proposition~\ref{prop:Qproperties}{\it (iii)}, $f(x)-wg(x)=f_\calO(x)$. Thus, $Q(x)=f(x)/g(x) = f_\calO(x)/g(x) + w$.
\end{proof}
\begin{example} \label{example:G3Q}
Let $G= \set{1,\b,\b^2}$ where $\b = \textmatrix 1{-1}10$. Then
$$\calO_\infty = \set{\infty,\beta(\infty),\beta^2(\infty)} = \set{\infty,1,0}.$$
The denominator of $Q$ is therefore $x(x-1)$. To compute the numerator, select any $v \in \cj K \setminus \{0,1\}$ and compute its orbit $\calO$.
If the characteristic is not~2, taking $v=-1$ gives
$\calO=\{-1,2,1/2\}$, and $f_\calO(x)=(x+1)(x-2)(x-1/2)$.
The formula for $Q$ will be prettier if we add 3/2, so we take
$$Q(x) = \frac{f_\calO(x)}{g(x)} + \frac 32 = \frac{(x+1)(x-2)(x-1/2)}{x(x-1)} + \frac 32 = \frac{x^3-3x+1}{x(x-1)}.$$
It turns out that this formula works for characteristic~2 as well. To see this, suppose that $K$ has characteristic~2 and let $\omega$ be a primitive
cube root of 1 in $\cj K$. Then $\{\omega\}$ is a $G$-orbit of multiplicity~3, and a quotient map is
$$\frac{(x-\omega)^3}{x(x-1)} + \omega = \frac{x^3+\omega x^2 + \omega^2 x + 1}{x(x-1)} + \omega = \frac{x^3+x+1}{x(x-1)}.$$
This equals $(x^3-3x+1)/(x(x-1))$ since $-3=1$ in characteristic~2. Thus, the formula $Q(x)=(x^3-3x+1)/(x(x-1))$ works for all fields.
\qed
\end{example}
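As a sanity check of the formula above (our own check, not part of the original text), the $G$-invariance of $Q(x)=(x^3-3x+1)/(x(x-1))$ can be verified numerically over small prime fields; the function names below are of course ours.

```python
# Our own sanity check (not from the paper): verify that
# Q(x) = (x^3 - 3x + 1)/(x(x-1)) takes the same value on v and on
# beta(v) = 1 - 1/v = (v-1)/v over several small prime fields F_p.
def Q(v, p):
    num = (v**3 - 3*v + 1) % p
    den = (v * (v - 1)) % p          # nonzero for v outside {0, 1}
    return (num * pow(den, -1, p)) % p

def beta(v, p):
    return ((v - 1) * pow(v, -1, p)) % p

for p in (3, 5, 7, 11, 13):
    for v in range(2, p):            # skip the orbit of infinity {inf, 0, 1}
        assert Q(beta(v, p), p) == Q(v, p)
```

Note that $p=3$ is included: the formula was shown above to be characteristic-independent.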
\begin{example} \label{example:PGL2Q}
Consider $G=\PGL_2(\F_q)$.
Then $\calO_\infty = \F_q \cup \{\infty\}$, the multiplicity of this orbit is $(q^3-q)/(q+1)=q^2-q$, and
$\prod_{v\in \calO_\infty, v\ne \infty}(x-v)=\prod_{v\in\F_q}(x-v)=x^q-x$, so $g(x)=(x^q-x)^{q^2-q}$.
To compute the numerator, select $v \in \F_{q^3} \setminus \F_q$.
Every element $\gamma(v)$ has the same degree as $v$, so the
orbit is contained in $\F_{q^3}\setminus \F_q$.
Further, $\calO_v$ has full size, because all short orbits are contained
in $\F_{q^2} \cup \{\infty\}$ by Lemma~\ref{lem:short}. Since $|\PGL_2(\F_q)|=q^3-q=|\F_{q^3}\setminus \F_q|$, it
follows that $\calO_v=\F_{q^3}\setminus \F_q$
and we may take the numerator to be
$$f(x) = \prod_{v \in \F_{q^3}}(x-v)/\prod_{v\in \F_q}(x-v) = (x^{q^3}-x)/(x^q-x).$$
We obtain $Q_G(x)=f(x)/g(x) = (x^{q^3}-x)/(x^q-x)^{q^2-q+1}$. After working out formulas for $\inv(\tau)$, we decided to alter the definition to
$Q_G(x)=(x^{q^3}-x)/((x^q-x)^{q^2-q+1}) + 1$ because that made the statement of our main theorem for $\PGL_2(\F_q)$ cleaner.
If instead we had selected $v \in \F_{q^2} \setminus \F_q$, the orbit would be $\calO=\F_{q^2} \setminus \F_q$, with multiplicity $|G|/|\calO|
=(q^3-q)/(q^2-q)=q+1$. Then $f_\calO(x)=\left((x^{q^2}-x)/(x^q-x)\right)^{q+1}$ and
a quotient map is $f_\calO(x)/g(x)=(x^{q^2}-x)^{q+1}/(x^q-x)^{q^2+1}$.
It turns out that $Q_G$ and $f_\calO(x)/g(x)$ are equal. In fact, since $(x^{q^2}-x)^{q+1}=(x^{q^2}-x)^q(x^{q^2}-x)$,
\begin{eqnarray*} \frac{x^{q^3}-x}{(x^q-x)^{q^2-q+1}}-\frac{(x^{q^2}-x)^{q+1}}{(x^q-x)^{q^2+1}} &=& \frac{(x^{q^3}-x)(x^q-x)^q-(x^{q^3}-x^q)(x^{q^2}-x)}
{(x^q-x)^{q^2+1}} \\
&=& \frac{-x^{q^3+q}+x^{q^3+1}+x^{q^2+q}-x^{q^2+1}}{(x^q-x)^{q^2+1}} \\
&=& \frac{-x^{q^3}(x^q-x)+x^{q^2}(x^q-x)}{(x^q-x)^{q^2+1}} \\
&=& \frac{-x^{q^3}+x^{q^2}}{(x^q-x)^{q^2} } = -1.
\end{eqnarray*}
Then
\begin{equation}
Q_G(x) = \frac{x^{q^3}-x}{(x^q-x)^{q^2-q+1}} + 1 = \frac{(x^{q^2}-x)^{q+1}}{(x^q-x)^{q^2+1}}.\label{eq:QGidentity}
\end{equation}
\qed
\end{example}
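The polynomial identity obtained by clearing denominators above, $(x^{q^3}-x)(x^q-x)^q-(x^{q^2}-x)^{q+1}=-(x^q-x)^{q^2+1}$ over $\F_q$, can be machine-checked for small prime values of $q$; the following sketch (ours, with naive coefficient arithmetic over $\Z/p\Z$) does so for $q=2,3$.

```python
# A machine check (ours) of the polynomial identity behind (eq:QGidentity):
#   (x^{q^3}-x)(x^q-x)^q - (x^{q^2}-x)^{q+1} = -(x^q-x)^{q^2+1}  over F_q,
# verified for the prime values q = 2, 3.
def pmul(a, b, p):
    """Product of coefficient lists a, b over F_p (index = degree)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def ppow(a, n, p):
    out = [1]
    for _ in range(n):
        out = pmul(out, a, p)
    return out

def xe_minus_x(e, p):
    """Coefficients of x^e - x."""
    c = [0] * (e + 1)
    c[1], c[e] = (-1) % p, 1
    return c

def trim(c):
    c = list(c)
    while len(c) > 1 and c[-1] == 0:
        c.pop()
    return c

for q in (2, 3):
    p = q                            # take q prime, so F_q = Z/pZ
    lhs = pmul(xe_minus_x(q**3, p), ppow(xe_minus_x(q, p), q, p), p)
    rhs = ppow(xe_minus_x(q**2, p), q + 1, p)
    diff = trim([(l - r) % p for l, r in zip(lhs, rhs)])
    assert diff == trim([(-c) % p for c in ppow(xe_minus_x(q, p), q**2 + 1, p)])
```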
The above method of computing $f$ and $g$ from orbits finds quotient maps for most groups we considered. However, it did not work well
when we attempted to find a quotient map
for a cyclic subgroup of $\PGL_2(\F_q)$ of order $\ell$ with $\ell \ge 3$ and $\ell\mid q+1$, as it is difficult to find an expression for $\prod_{v' \in \calO_v} (x-v')$.
Instead, we took advantage of the fact that such a $G$ is conjugate over a quadratic extension to a diagonal subgroup, for which it is easy to find
$Q$. Composing $Q$ with the element of $\PGL_2(\F_{q^2})$ that establishes the conjugacy yields an invariant function $Q_0$, but its coefficients need not lie in $\F_q$,
and its denominator has degree $|G|$. By applying an appropriate linear fractional transformation to $Q_0$, one can regain rationality over $\F_q$ and the property that the
degree of the denominator is smaller than the degree of the numerator.
This lengthy computation was done in an earlier draft of the article. Fortunately, Xander Faber found a much easier method to compute this quotient map,
given in Proposition~\ref{prop:Qell}.
The following lemma describes how quotient maps are related when $G_1$ and $G_2$ are conjugate subgroups of
$\PGL_2(K)$, where $K$ is any field.
\begin{lemma} \label{lem:conjugateGroups} Let $G_1$, $G_2$ be finite subgroups of $\PGL_2(K)$ that are conjugate to one another; \ie, there is $\a\in \PGL_2(K)$
such that $G_2=\set{\alpha\gamma \alpha^{-1} : \gamma \in G_1 }$.
Let $Q_1$ be a quotient map for $G_1$, and set $Q'=Q_1 \circ\alpha^{-1}$ and $k=Q'(\infty)$.
Let $\beta$ be any element of $\PGL_2(K)$ such that $\beta(k)=\infty$. Then $Q_2= \beta \circ Q_1 \circ \alpha^{-1}$ is a quotient map for $G_2$.
\end{lemma}
\begin{proof} $Q_2 \circ \delta(x) = Q_2(x)$ for all $\delta \in G_2$ because for $\gamma \in G_1$,
$$Q_2(\alpha\gamma \alpha^{-1} (x)) = \beta \circ Q_1(\gamma\alpha^{-1} (x)) = \beta \circ Q_1(\alpha^{-1} (x)) = Q_2(x).$$
Further, $\deg(Q_2)=\deg(Q_1)=|G_1|=|G_2|$, since composition with linear fractional transformations does not change the degree.
Finally, $Q_2(\infty) = \beta \circ Q'(\infty) = \beta(k)=\infty$, therefore the degree of the numerator of $Q_2$ exceeds the degree of the denominator.
Thus, $Q_2(x)$ is a quotient map for $G_2$.
\end{proof}
\section{Artin invariant} \label{sec:invariant}
Let $K$ be a field and let $G$ be a finite subgroup of $\PGL_2(K)$.
In the previous section we defined a quotient map for $G$ to be a $G$-invariant function $Q(x)=f(x)/g(x)\in K(x)$ such that $\deg(f)=|G|>\deg(g)$, and we proved
existence and some properties. In particular, if $\tau \in \cj K\cup\{\infty\}$ then $Q^{-1}(\tau)$ is a $G$-orbit in $\cj K\cup\{\infty\}$.
\begin{definition} Let $\tau \in \cj K \cup \{\infty\}$.
If the $G$-orbit $Q^{-1}(\tau)$ has full size, \ie, $|Q^{-1}(\tau)|=|G|$, then we say that $\tau$ is {\it regular (with respect to~$Q$)}; otherwise it is irregular.
\end{definition}
\begin{proposition}[Definition of Artin invariant]
Let $\tau \in \cj K$ and $\s \in \Aut(\cj K/K)$ such that $\s(\tau)=\tau$.
Let $v \in \cj K$ such that $Q(v)=\tau$. Then there is $\gamma \in G$ such that $\sigma(v)=\gamma(v)$.
If $\tau$ is regular, then the conjugacy class $\calC_\g =\{\d \g \d^{-1} : \d \in G\}$ is independent of the
choice of $v\in Q^{-1}(\tau)$.
In that case, we write $\inv_Q(\tau,\sigma)=\calC_\g$, and we call $\inv_Q(\tau,\sigma)$ the {\it Artin invariant} of $\tau$ with respect to $Q$ and~$\sigma$.
If $\infty$ is regular, \ie, $Q^{-1}(\infty)$ has full size, then we define $\inv_Q(\infty,\s)=\calC_\textmatrix1001$.
\end{proposition}
\begin{proof} $Q^{-1}(\tau)$ is a $G$-orbit by Proposition~\ref{prop:Qorbit}. Since
$Q\left(\sigma(v)\right)=\sigma\left(Q(v)\right)=\sigma(\tau)=\tau$, $v$ and $\sigma(v)$
are both in $Q^{-1}(\tau)$, therefore there is $\gamma \in G$ such that $\sigma(v)=\gamma(v)$.
Now suppose that $\tau$ is regular. Then $|Q^{-1}(\tau)|=|G|$, so $\gamma$ is uniquely determined from $\sigma$ and~$v$. We claim that $\calC_\g$
depends only on $\sigma$ and $\tau$, and not on the choice of $v\in Q^{-1}(\tau)$. Indeed, suppose that $w\in Q^{-1}(\tau)$, and we will show that
$\s(w)=\a(w)$ where $\a \in \calC_\g$.
There is $\d \in G$ such that $w=\d(v)$. Since the entries of $\delta$ are rational, $\s(w)=\s(\d(v)) = \d(\s(v))=\d(\g(v))=\d\g\d^{-1}(w)$. Here
$\d\g\d^{-1}\in\calC_\g$, as required.
\end{proof}
When $\tau=\infty$, then $Q^{-1}(\tau)=\{\g(\infty) : \g \in G\} \subset \{\infty\} \cup K$. If one defines $\s(\infty)=\infty$ for all $\s \in \Aut(\cj K/K)$,
then $\s(v)=v$ for all $v\in Q^{-1}(\tau)$. This is why it makes sense to define $\inv(\infty,\s)=\calC_{\textmatrix 1001}$ when $\infty$ is regular.
A benefit of this definition is that it makes certain statements cleaner, for example Proposition~\ref{prop:cyclicCount}.
If $K=\F_q$ and $\sigma={\rm Frob}_q \in \Aut(\cj\F_q/\F_q)$
is the $q$-power Frobenius, then we will write $\inv_Q(\tau,q)$ instead of $\inv_Q(\tau,{\rm Frob}_q)$, or simply
$\inv(\tau)$ if $Q$ and $q$ are understood from the context.
For future reference, the definition of Artin invariant may be briefly summarized as follows when $K=\F_q$ and $\tau \in \F_q$ is regular:
\begin{equation}\text{If $Q(v)=\tau$, then $v^q=\gamma(v)$ for some $\g \in G$, and $\inv_Q(\tau,q)=\calC_\g.$} \label{eq:invariantFq}
\end{equation}
For arbitrary $K$, when $\tau \in \cj K$ is regular, $\sigma \in \Aut(\cj K/K)$, and $\s(\tau)=\tau$, the criterion is:
\begin{equation} \text{If $\tau=Q(v)$, then $\sigma(v)=\g(v)$ for some $\g \in G$, and $\inv_Q(\tau,\s)=\calC_\g$.}
\label{eq:invariantK}
\end{equation}
If $G$ is abelian, then $\calC_\g = \{\g\}$. In that case, we sometimes write $\inv_Q(\tau,\sigma)=\g$, instead of $\inv_Q(\tau,\sigma)=\calC_\g=\{\g\}$.
\medskip
For every subgroup of $\PGL_2(\F_q)$ that we have investigated,
$\inv(\tau)$ can be described directly in terms of $\tau$, {\it e.g.}, involving Legendre symbols or other numerical invariants, without reference to $v$.
As a matter of notation, we often use a symbol
$[\tau/q]$ to denote these values that are directly computed from $\tau$. For instance, in Example~\ref{example:Klein}, we can define $[\tau/q] =
\left(\jacobi\tau q,\jacobi{\tau-1}q\right)$ for $\tau \in \F_q \setminus \{0,1\}$, and then (\ref{eq:Klein}) describes $\inv(\tau,q)$ directly in terms of $[\tau/q]$.
\begin{proposition} \label{prop:main} Let $q$ be a prime power, $G$ a subgroup of $\PGL_2(\F_q)$, and $Q\in \F_q(x)$
a quotient map for $G$, as in Definition~\ref{def:quotient}.
\begin{enumerate}
\item[{\it (i)}] For $\gamma\in G$, let
\begin{equation} \text{$V_{\gamma,q} = \set{v \in \cj\F_q\setminus\calO_\infty : v^q = \gamma (v)}$ and
$V_{G,q} = \cup_{\b \in G} V_{\b,q}$.} \label{eq:VgammaDef}
\end{equation}
Then $V_{G,q}$ decomposes into exactly $q$ $G$-orbits, and $Q$ induces a bijection between these orbits and $\F_q $.
\item[{\it (ii)}] If $v \in V_{G,q}$ and $|\calO_v|=|G|$, then there is a unique $\gamma\in G$
such that $v\in V_{\gamma,q}$.
\item[{\it (iii)}] For each $\tau \in\F_q$ there is a conjugacy class $\calC \subset G$
such that $Q^{-1}(\tau) \subset \cup_{\gamma \in \calC} V_{\gamma,q}$.
If $\tau$ is regular (\ie, $|Q^{-1}(\tau)|=|G|$), then $\calC$ is uniquely determined and $\calC=\inv_Q(\tau,q)$.
\item[{\it (iv)}] If $G$ is abelian, then for each $\tau \in \F_q$ there is $\gamma \in G$ such that $Q^{-1}(\tau) \subset V_{\gamma,q}$. If in addition
$\tau$ is regular, then $\g = \inv_Q(\tau,q)$.
\item[{\it (v)}] Suppose $\g\in G$ has order $t\ge 3$, and let $\calC_\g = \{\a\g\a^{-1} : \a\in G\}$. Then $t|(q+\kappa)$ for a unique $\kappa\in\{0,1,-1\}$,
and the number of regular elements $\tau\in \F_q$ with $\inv_Q(\tau,q)=\calC_\g$ is
exactly $|\calC_\g|(q+\kappa)/|G|$. In particular, if $G$ is abelian then there are exactly
$(q+\kappa)/|G|$ regular elements $\tau\in\F_q$ with $\inv_Q(\tau,q)=\g$.
\end{enumerate}
\end{proposition}
\begin{proof} {\it (i)}\ Let $v \in \cj\F_q \setminus \calO_\infty $ and $\tau = Q(v)\in\cj\F_q$.
Since each preimage set $Q^{-1}(\tau)$ is a $G$-orbit by Proposition~\ref{prop:Qorbit},
\begin{eqnarray*} \tau\in\F_q &\iff& \tau = \tau^q \iff Q(v)=Q(v^q) \\
&\iff& \text{$v^q = \gamma (v)$ for some $\gamma \in G$} \iff v \in V_{G,q}.
\end{eqnarray*}
This shows
\begin{equation} Q^{-1}(\F_q) = \cup_{\gamma \in G} V_{\gamma,q} = V_{G,q}.\label{eq:VGQ} \end{equation}
Since each preimage set $Q^{-1}(\tau)$ is a $G$-orbit
and $\F_q$ has $q$ elements, $V_{G,q}$ partitions into exactly $q$ orbits.
{\it (ii)}\ If $v \in V_{G,q}$, then $v \in V_{\gamma,q}$ for some $\gamma \in G$, so $v^q=\gamma(v)$. If in addition the orbit of $v$ has full size,
then the elements $\gamma(v)$ for $\gamma \in G$ are distinct, so that $\gamma$ is uniquely determined from $v$ and $q$.
{\it (iii)}\ and {\it (iv)}\ Suppose $\tau \in \F_q$ and $v \in Q^{-1}(\tau)$.
By part {\it (i)}, which we have already proved,
there is $\gamma \in G$ such that $v^q = \gamma(v)$.
Let $\calC = \set{\alpha \gamma \alpha^{-1} : \alpha \in G}$, the conjugacy class of $\gamma$. We claim that
$\calO_v \subset \cup_{\beta \in \calC} V_{\beta,q}$. To see this, let $w = \alpha(v) \in \calO_v$, where $\a \in G$.
Since the entries of $\a$ are in $\F_q$,
$$w^q = \left(\a(v)\right)^q = \a(v^q) = \a \gamma(v) = \a \gamma \a^{-1} (w),$$
therefore $w \in V_{\beta,q}$ where $\beta = \a \gamma \a^{-1} \in \calC$. This proves the claim. Now suppose $\tau$ is regular.
For any $v\in Q^{-1}(\tau)$ there is $\b\in\calC$ such that $v^q=\beta(v)$. Then
$\inv_Q(\tau,q)=\calC_\beta$ by~(\ref{eq:invariantFq}). Since $\beta\in\calC$, $\calC=\calC_\b$ also.
{\it (v)}\ By~{\it (iii)}, the number of regular $\tau\in\F_q$ with $\inv_Q(\tau,q)=\calC_\g$ is the number of full-sized $G$-orbits in
$\cup_{\b\in\calC_\g} V_{\b,q}$. By Proposition~\ref{prop:counting}, this number is $|\calC_\g|(q+\kappa)/|G|$.
\end{proof}
Recall that if $G \subset \PGL_2(K)$, there was some choice in the definition of $Q$, as one could change it to $aQ+b$, where $a,b \in K$ and $a\ne0$.
Since $\tau = Q(v) \iff a\tau + b = (aQ+b)(v)$, the set of preimages of $\tau$ under $Q$ is the same as the set of preimages of $a\tau + b$ under $aQ+b$.
In particular, $\tau$ is regular with respect to $Q$ iff $a\tau+b$ is regular with respect to $aQ+b$, and in that case
\begin{equation} \inv_{a Q+b}(a\tau+b) = \inv_Q(\tau). \label{eq:aQb} \end{equation}
We select $a,b$ so that the invariants, when expressed in terms of $\tau$, have simple and natural expressions.
If two finite subgroups of $\PGL_2(K)$ are conjugate to one another by a rational linear fractional transformation, then their Artin invariants are
essentially equivalent, as shown below.
Thus, we are free to normalize groups via rational conjugation when possible. If $\s\in\Aut(\cj K/K)$ then we define $\s(\infty)=\infty$.
\begin{lemma} \label{lem:conjugateInv}
Suppose that $G_1,G_2$ are finite subgroups of $\PGL_2(K)$ that are conjugate to one another; that is, there is $\alpha \in \PGL_2(K)$ such that
$G_2=\set{\alpha\gamma \alpha^{-1} : \gamma \in G_1 }$. If $Q_1$ is a quotient map for $G_1$ then let $Q_2 = \b \circ Q_1 \circ \a^{-1}$
be a quotient map for $G_2$, as in Lemma~\ref{lem:conjugateGroups}. Then $\tau \in \cj K\cup\{\infty\}$ is regular with respect to $Q_1$ iff $\beta(\tau)$ is regular
with respect to $Q_2$. Further, if $\tau\in \cj K\cup\{\infty\}$ is regular, $\s \in \Aut(\cj K/K)$, and $\s(\tau)=\tau$ then
$$\inv_{Q_2}(\beta(\tau),\s) = \a\, \inv_{Q_1}(\tau,\s)\, \a^{-1}.$$
\end{lemma}
\begin{proof} $Q_1(v) = \tau$ iff $\beta\circ Q_1 (v) = \beta(\tau)$
iff $Q_2(\alpha(v))=\beta(\tau)$. Therefore, $Q_2^{-1}(\beta(\tau))=\a (Q_1^{-1}(\tau))$.
It follows that $Q_1^{-1}(\tau)$ has full size iff $Q_2^{-1}(\beta(\tau))$ has full size, so $\tau$ is regular with respect to $Q_1$ iff $\beta(\tau)$ is regular
with respect to $Q_2$. In that case, $\g\in \inv_{Q_1}(\tau,\s)$ iff there is $v\in Q_1^{-1}(\tau)$ with $\s(v) = \gamma(v)$ iff there is $w=\a(v) \in Q_2^{-1}(\b(\tau))$ with
$\sigma(w)=\a(\sigma(v))=\a\g(v)=\a\g\a^{-1}(w)$ iff $\a\g\a^{-1} \in \inv_{Q_2}(\beta(\tau))$.
Note that this proof is valid even when $\tau=\infty$ or $\b(\tau)=\infty$.
\end{proof}
The next lemma shows that if $H\subset G$ are finite subgroups
of $\PGL_2(K)$ then their Artin invariants
are closely related. If $\d\in H$, let $\calC_{\d,H} = \{\g\d\g^{-1} : \g \in H\}$.
Then $\calC_{\d,H} \subset \calC_{\d,G}$.
Let $Q_G$, $Q_H$ be quotient maps for $G$ and $H$.
By Proposition~\ref{prop:H}, there is a unique rational function
$h\in K(x)$ of degree~$|G|/|H|$ such that $Q_G = h\circ Q_H$.
\begin{lemma} \label{lem:Hinv} Let $G,H,Q_G=h(Q_H)$ be as above.
Suppose $h(\tau)$ is regular with respect to~$G$, where $\tau \in \cj K\cup\{\infty\}$.
Then $\tau$ is regular with respect to~$H$, and for any $\s\in\Aut(\cj K/K)$
such that $\s(\tau)=\tau$, there is $\d\in H$ such that
$$ \inv_{Q_H}(\tau,\s)= \calC_{\d,H}, \qquad \inv_{Q_G}(h(\tau),\s) = \calC_{\d,G}.$$
\end{lemma}
\begin{proof} Let $V=Q_H^{-1}(\tau)$; this is an $H$-orbit by Proposition~\ref{prop:Qorbit}.
Let $v\in V$. Then $\s(v)\in V$, and there is $\d \in H$ such that $\s(v)=\d(v)$.
Since $Q_G(v)=h(Q_H(v))=h(\tau)$, $v$ is in the $G$-orbit $Q_G^{-1}(h(\tau))$.
By hypothesis, $h(\tau)$ is regular with respect to $Q_G$, therefore
the elements $\g(v)$ for $\g \in G$ are distinct.
Then $V=\{\g(v) : \g \in H\}$ has $|H|$ distinct elements, so $\tau$ is regular with respect to $Q_H$.
Since $Q_H(v)=\tau$, $Q_G(v)=h(\tau)$, and $\s(v)=\d(v)$, (\ref{eq:invariantK}) implies
$\inv_{Q_H}(\tau,\s) = \calC_{\d,H}$ and $\inv_{Q_G}(h(\tau),\s) = \calC_{\d,G}$.
\end{proof}
\part{Small subgroups of $\PGL_2(K)$} \label{part:K}
This part considers finite subgroups that are contained in $\PGL_2(K)$ for any $K$. Sections~\ref{sec:known} and~\ref{sec:K}
contain known examples, and Sections~\ref{sec:three} and \ref{sec:six} contain new ones.
\section{An example related to Dickson polynomials} \label{sec:known}
Anyone who has studied Dickson polynomials is probably familiar with the lemma that $v \mapsto v + 1/v$
gives a surjective map from $\mu_{q-1} \cup \mu_{q+1}$ onto $\F_q$.
This lemma appears in a 1961 article by Brewer \cite{Brewer}, and was probably known earlier.
This section examines that lemma from the Artin invariant perspective.
Let $K$ be any field, $c \in K^\x$, and
$G = \set{\textmatrix 1001,\textmatrix 0c10 }\subset\PGL_2(K)$.
The $G$-orbits are $\calO_v = \set{v,c/v}$ for $v\in\cj K \cup\{\infty\}$, and the short orbits are $\{\sqrt c\}$ and $\{-\sqrt c\}$.
A quotient map is $Q(x)=x+c/x$.
The short orbits $\{\sqrt c\}$ and $\{-\sqrt c\}$ are sent under $Q$ to $2\sqrt c $ and $-2\sqrt c $, respectively;
these are the irregular elements of $\cj K \cup\{\infty\}$.
\begin{proposition} \label{prop:0c10} Let $G$ and $Q$ be as above.
Let $\tau \in \cj K$ be regular (equivalently, $\tau^2-4c\ne0$) and let $\s \in \Aut(\cj K/K)$
such that $\s(\tau)=\tau$. If char$(K)\ne 2$, then
$\inv(\tau,\sigma)= \textmatrix 0c10^{(1-A)/2}$, where $A=\sigma\left(\sqrt{\tau^2-4c\,}\,\right)/\sqrt{\tau^2-4c\,}\in \{1,-1\}$.
If char$(K)=2$, let $X$ be a root of $X^2+X+c/\tau^2=0$. Then $\s(X)=X+j$, where $j\in \F_2$, and $\inv(\tau,\s)=\textmatrix 0c10^j$.
\end{proposition}
\begin{proof} The solutions to $Q(v)=\tau$ are the roots of $x^2-\tau x +c$. Denote these roots by $v$ and $v'$. Let $\g=\inv(\tau,\s)\in G$, so $\s(v)=\g(v)$.
If char$(K)\ne 2$ then $\{v,v'\}=\{(\tau\pm \sqrt{\tau^2-4c})/2\}$.
If $\s(v)=v$ then $\s(\sqrt{\tau^2-4c})=\sqrt{\tau^2-4c}$ and $\g=1$; otherwise $\s(v)=v'$, $\s(\sqrt{\tau^2-4c})=-\sqrt{\tau^2-4c}$, $\g=\textmatrix 0c10$.
This proves the proposition when the characteristic is not~2.
If char$(K)=2$, then $v/\tau$, $v'/\tau$ are the two solutions to $X^2+X=c/\tau^2$, and $v'/\tau = (v/\tau)+1$.
If $\s(v)=v$ then $\s(X)=X$ and $\g=\textmatrix 1001$. If $\s(v)\ne v$ then $\s(v)=v'$, $\s(X)=X+1$ and $\g=\textmatrix 0c10$. The result follows.
\end{proof}
Consider the case $K=\F_q$ and $\s(x)=x^q$. If $\tau\in \cj \F_q$ then $Q^{-1}(\tau)$ is a $G$-orbit, say $\{v,c/v\}$, and $\tau^q=\tau$ iff $v^q \in \{v,c/v\}$.
When $c=1$, then $v^q\in \{v,1/v\}$ iff $v\in \mu_{q-1}\cup \mu_{q+1}$, and one obtains the result mentioned above
that $v\mapsto v+1/v$ gives a surjective map of $\mu_{q-1}\cup \mu_{q+1}$ onto $\F_q$.
Proposition~\ref{prop:0c10} in this case is due to Brewer \cite{Brewer} for $q$ odd, and Dillon-Dobbertin \cite{DD} when $q$ is even.
In the case where $q$ is even, the element $j$ in Proposition~\ref{prop:0c10} is equal to $\Tr_{\F_q/\F_2}(c/\tau^2)$.
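The surjectivity statement can be brute-forced for small odd $q$. The sketch below (ours, not from the text) identifies $\F_{q^2}$ with $\F_q(\sqrt d\,)$ for a non-residue $d$; an element $z=a+b\sqrt d$ satisfies $z^q=a-b\sqrt d$, so $z\in\mu_{q+1}$ iff its norm $a^2-db^2$ equals $1$, in which case $z+1/z=z+z^q=2a$.

```python
# Brute-force confirmation (ours) for small odd prime q: v -> v + 1/v maps
# mu_{q-1} union mu_{q+1} onto F_q.  Write F_{q^2} = F_q(sqrt(d)) with d a
# non-residue; z = a + b*sqrt(d) lies in mu_{q+1} iff a^2 - d*b^2 = 1,
# and then z + 1/z = z + z^q = 2a.
def image_of_units(p):
    d = next(x for x in range(2, p) if pow(x, (p - 1) // 2, p) != 1)
    img = {(v + pow(v, -1, p)) % p for v in range(1, p)}          # mu_{q-1}
    img |= {(2 * a) % p for a in range(p) for b in range(p)
            if (a * a - d * b * b) % p == 1}                      # mu_{q+1}
    return img

for p in (5, 7, 11, 13):
    assert image_of_units(p) == set(range(p))
```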
\begin{proposition} \label{prop:BDD}
{\bf (Brewer \cite{Brewer} for $q$ odd; Dillon and Dobbertin \cite{DD} for $q$ even).}
Let $q$ be a prime power and $Q(x) = x + c/x$, where $c \in \F_q^\x$. Every $\tau \in \F_q$ may be written as $\tau = Q(v)$, where $v\in\cj\F_q^\x$
and where
$v^q = v$ or $v^q = c/v$. Conversely, if $v^q=v$ or $v^q=c/v$ then $Q(v) \in \F_q$.
Let $\tau=Q(v)\in\F_q$.
If $\tau^2=4c$ then $v=\pm \sqrt c$ and $v^q=v=c/v$. If $\tau^2 \ne 4c$, then
\begin{enumerate}
\item for $q$ odd: $v^q=v \iff \jacobi{\tau^2-4c}q = 1$;\qquad $v^q=c/v \iff \jacobi{\tau^2-4c}q = -1$;
\item for $q$ even: $v^q=v \iff \Tr_{\F_q/\F_2}(c/\tau^2)=0$;\qquad $v^q=c/v \iff \Tr_{\F_q/\F_2}(c/\tau^2)=1$.
\end{enumerate}
\end{proposition}
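Part (1) can be checked exhaustively for small odd $q$ (our check, not part of the proposition): a root $v$ of $x^2-\tau x+c$ lies in $\F_q$, \ie\ $v^q=v$, exactly when the discriminant $\tau^2-4c$ is a nonzero square. The even case would require $\F_{2^k}$ arithmetic and is omitted from this sketch.

```python
# Exhaustive check (ours) of the q-odd dichotomy in part (1): x^2 - tau*x + c
# has a root in F_q exactly when the Legendre symbol of tau^2 - 4c is +1.
def legendre(a, p):
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

for p in (7, 11, 13):
    for c in range(1, p):
        for t in range(p):
            d = (t * t - 4 * c) % p
            if d == 0:
                continue             # tau^2 = 4c: the excluded case
            root_in_Fq = any((v * v - t * v + c) % p == 0 for v in range(p))
            assert root_in_Fq == (legendre(d, p) == 1)
```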
Brewer proved Proposition~\ref{prop:BDD}(1) and used it to compute the number of $\F_p$-rational points
of a curve $y^2 = D_n(x)$ over a prime field $\F_p$, where $D_n(x)$ is a Dickson polynomial, determined by the property that $D_n(x+1/x)=x^n+1/x^n$.
These point-counting formulas were applied to determine some character sums.
Dillon and Dobbertin proved Proposition~\ref{prop:BDD}(2) and used it to show that for $q$ even, $D_n(S_j) \subset S_j \cup \{0\}$, where
$S_j= \set{\tau\in\F_q^\x : \Tr_{\F_q/\F_2}(1/\tau)=j} $ for $j \in \{0,1\}$.
Analogues of the result of Dillon and Dobbertin for the case of odd characteristic are formulated in \cite{Permutation}.
\medskip
\noindent{\bf Remark.}\ The large difference in behavior between odd or even characteristic in Proposition~\ref{prop:BDD}
is attributable to the fact that the transformation $x\mapsto c/x$ has a unique fixed point in the algebraic closure in characteristic~2, but two fixed points in odd characteristic.
Consequently, $G$ is conjugate to the unipotent group $\{\textmatrix 1001,\textmatrix 1101\}$ in characteristic~2.
Unipotent groups are related to additive or linear
maps (see Section~\ref{sec:unipotent}), consistent with the appearance of a trace map in Proposition~\ref{prop:BDD}(2).
We will encounter this phenomenon again in Section~\ref{sec:three}, when we study a group of order~3.
\qed
\medskip
As shown in \cite[Section 9]{Wilson-like}, Proposition~\ref{prop:BDD}
can be used to obtain quick proofs of the Legendre symbol formulas for $\jacobi 2q$, $\jacobi 3q$, and $\jacobi 5q$.
\section{Kummer and Klein examples} \label{sec:K}
This section shows that two of the groups considered in the introduction over finite fields generalize to an arbitrary field $K$.
\bigskip
\noindent {\it Kummer extensions.}\ (See Example~\ref{example:Kummer}.)
Suppose $K$ contains the primitive $n$th roots of unity, so in particular $p\nmid n$ if $K$ has finite characteristic~$p$.
Let $G= \set{\textmatrix a001 : a^n=1}\subset \PGL_2(K)$. Then $Q(x)=x^n$ is a quotient map. If $\tau \in K^\x$ then $\tau$ is regular, $Q^{-1}(\tau)=
\set{a \tau^{1/n} : a^n=1}$, and if $\s \in \Aut(K(\tau^{1/n})/K)$ then $\inv_Q(\tau,\sigma)=\textmatrix a001$, where $a=\sigma(\tau^{1/n})/\tau^{1/n}\in \mu_n$.
\bigskip
\noindent{\it Klein group.}\ (See Example~\ref{example:Klein} and \cite{Structure}.)
Let $G= \set{\textmatrix 1001,\textmatrix {-1}001,\textmatrix 0b10, \textmatrix 0{-b}10}\subset \PGL_2(K)$, where $b$ is a fixed element of $K^\x$. Assume the
characteristic of $K$ is not~2. The short orbits are
$\calO_\infty=\{\infty,0\}$, $\{\pm\sqrt b\}$, and $\{\pm\sqrt{-b}\}$. A quotient map is $Q(x)=(x+b/x)^2/4$, and $b,0,\infty$ are irregular.
Let $\tau \in \cj K$. Then $Q^{-1}(\tau) = \set{\pm \sqrt\tau \pm \sqrt{\tau-b}}$, and if $v=\sqrt\tau+\sqrt{\tau-b}$
(for some choices of square root), then $b/v=\sqrt\tau-\sqrt{\tau-b}$. Suppose that $\tau$ is regular ({\it i.e.}, $\tau \not \in \{0,b\}$),
$\s \in \Aut(\cj K/K)$, and $\s(\tau)=\tau$.
Let $A=\sigma(\sqrt\tau)/\sqrt\tau$
and $B=\sigma(\sqrt{\tau-b})/\sqrt{\tau-b}$. Then $\inv(\tau,\sigma) = \textmatrix A001 \textmatrix 0b10^{(1-AB)/2}$.
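As with the previous examples, the constancy of $Q(x)=(x+b/x)^2/4$ on the orbit $\{v,-v,b/v,-b/v\}$ is easy to confirm numerically over small odd prime fields; the sketch below (ours) does so.

```python
# Numerical confirmation (ours) that the Klein quotient map
# Q(x) = (x + b/x)^2 / 4 is constant on the orbit {v, -v, b/v, -b/v}
# over small odd prime fields F_p.
def klein_Q(v, b, p):
    t = (v + b * pow(v, -1, p)) % p
    return (t * t * pow(4, -1, p)) % p

for p in (5, 7, 11):
    for b in range(1, p):
        for v in range(1, p):
            q0 = klein_Q(v, b, p)
            assert klein_Q(-v % p, b, p) == q0
            assert klein_Q(b * pow(v, -1, p) % p, b, p) == q0
            assert klein_Q(-b * pow(v, -1, p) % p, b, p) == q0
```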
\section{Artin invariant for a transformation group of order~3} \label{sec:three}
Consider
$G_3= \set{I,\beta,\beta^2}$ where
$$\beta = \textmatrix{1\,}{-1}{1\,}{\,\,0}\in\PGL_2(K)$$
and $K$ is any field.
The $G_3$-orbits of $\cj K \cup \{\infty\}$ are $\calO_\infty = \{\infty,1,0\}$ and
$$\calO_v = \set{v, 1-1/v, 1/(1-v)}, \quad
\text{$v\in \cj K \setminus \{0,1\}$.}$$
$G_3$ is normalized by the map $\rho = \textmatrix 0110$ that takes $v$ to $1/v$:
$\rho\beta\rho=\beta^2=\beta^{-1}$. This feature will appear in our
analysis. (See Lemma~\ref{lem:G3invProperty}.)
The characteristic-3 case turns out to be very different from other characteristics.
In fact, our result for characteristic~3 is strikingly similar to the theorem
of Dillon and Dobbertin (see Prop.~\ref{prop:BDD}(2)).
This phenomenon will be explained at the end of this section.
\newpage
\subsection{Short orbits, quotient map, irregular elements.} \label{sec:G3Q}
\begin{lemma} \label{lem:G3short} The short orbits of $G_3$ (that is, the orbits with fewer than three
elements) are $\set{-1}$ in characteristic~3, or $\set{-\omega}$
and $\set{-\omega^2}$ in characteristic different from~3, where $\omega$
is a primitive cube root of unity in $\cj K$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:short}, $v$ is in a short orbit if
and only if $v=\beta(v)$ or $v=\beta^{-1}(v)$, or equivalently,
$v^2-v+1=0$. In characteristic~3, this factors as $(v+1)^2=0$, so
$\{-1\}$ is the only short orbit. If the characteristic is not three,
then $v^2-v+1=0 \iff v \in \set{-\omega,-\omega^2}$.
\end{proof}
\begin{lemma} \label{lem:G3Q}
$Q_3(x) = (x^3-3x+1)/(x(x-1))$ is a quotient map for $G_3$ over any field.
The set of irregular elements of $\cj K \cup \{\infty\}$ is $\{0\}$ if char$(K)=3$ and
$\{-3\omega ,-3\omega^2\} $ if char$(K)\ne3$, where $\omega$ is a
primitive cube root of unity in~$\cj K$. That is, the irregular elements are the roots
of $\tau^2-3\tau+9=0$.
\end{lemma}
\begin{proof} The formula for the quotient map was computed in Example~\ref{example:G3Q}.
The irregular points are the images under $Q_3$ of the short orbits. These are
$Q_3(-1)=0$ in char.~3, and $Q_3(-\omega)=-3\omega$,
$Q_3(-\omega^2)=-3\omega^2$ in char.~$\ne 3$.
\end{proof}
\begin{lemma} \label{lem:reciprocal} $Q_3(1/x) = 3-Q_3(x)$.
\end{lemma}
\begin{proof} This is a simple computation. \end{proof}
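The computation can also be spot-checked with exact rational arithmetic (our check, not part of the proof); since both sides are rational functions of bounded degree, agreement at a handful of points is strong evidence.

```python
# A spot check (ours) of the identity Q3(1/x) = 3 - Q3(x), using exact
# rational arithmetic at several points x with x, 1/x outside {0, 1}.
from fractions import Fraction as F

def Q3(x):
    return (x**3 - 3*x + 1) / (x * (x - 1))

for x in (F(2), F(5, 3), F(-7, 2), F(9, 4), F(13, 5)):
    assert Q3(1 / x) == 3 - Q3(x)
```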
\begin{lemma} \label{lem:G3invProperty} If $\tau \in \cj K$ is regular then so is $3-\tau$, and for any $\s \in \Aut(\cj K/K)$
such that $\s(\tau)=\tau$ we have
$\inv_{Q_3}(3-\tau,\sigma)=\inv_{Q_3}(\tau,\sigma)^{-1}$.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:reciprocal}, $\tau=Q_3(v) \iff 3-\tau=Q_3(1/v)$. The set of elements lying in short orbits is closed under taking reciprocals,
therefore $\tau$ is regular iff $3-\tau$ is regular. Let $\rho = \textmatrix 0110$. Now
\begin{eqnarray*}
\inv(\tau,\s)=\beta^j &\iff& \sigma(v)=\beta^j(v) \\
&\iff& \sigma(1/v)=1/\sigma(v)=\rho\sigma(v)=\rho\beta^j(v)=\beta^{-j}\rho(v)=\beta^{-j}(1/v).
\end{eqnarray*}
Since $Q_3(1/v)=3-\tau$ and $\s(1/v)=\b^{-j}(1/v)$, this shows that $\inv(3-\tau,\sigma)=\b^{-j}=\inv(\tau,\s)^{-1}$.
\end{proof}
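One consequence of the lemma is testable without extension-field arithmetic (our sketch, not from the text): $\inv(\tau)=\beta^0$ holds exactly when the cubic $x^3-3x+1-\tau x(x-1)$ has a root in $\F_q$, and since $\beta^0$ is its own inverse, this happens for $\tau$ iff it happens for $3-\tau$.

```python
# Brute-force check (ours) of a consequence of the lemma: inv(tau) is the
# identity iff x^3 - 3x + 1 - tau*x*(x-1) has a root in F_q, and since
# beta^0 is self-inverse this happens for tau iff it happens for 3 - tau.
def cubic_has_root(t, p):
    return any((v**3 - 3*v + 1 - t*v*(v - 1)) % p == 0 for v in range(p))

for p in (3, 5, 7, 11, 13):
    for t in range(p):
        if (t*t - 3*t + 9) % p == 0:
            continue                 # tau irregular: roots of t^2 - 3t + 9
        assert cubic_has_root(t, p) == cubic_has_root((3 - t) % p, p)
```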
If $\tau\in \F_q$ is regular then $\inv(\tau)\in G_3$
is the unique element $\gamma \in G_3$ such that $v^q=\gamma(v)$ for any (hence every) $v \in Q_3^{-1}(\tau)$. Recalling that $\beta = \textmatrix 1{-1}10$,
and noting that $v \not \in \calO_\infty = \{\infty,0,1\}$,
\begin{equation}
\inv(\tau) = \begin{cases} \beta^0 & \text{iff $v^{q-1}=1$,} \\
\beta & \text{iff $v^{q+1}-v+1=0$,} \\
\beta^{-1} & \text{iff $v^{q+1}-v^q+1=0$.} \end{cases} \label{eq:type}
\end{equation}
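For $q=2$ this classification can be verified exhaustively (our sketch, not from the text). Realizing $\F_8$ as $\F_2[x]/(x^3+x+1)$, the two full $G_3$-orbits in $\F_8\setminus\F_2$ turn out to satisfy $\inv(0)=\beta$ and $\inv(1)=\beta^{-1}$, consistent with Lemma~\ref{lem:G3invProperty} since $3-0\equiv 1 \pmod 2$.

```python
# Exhaustive verification (ours) of the q = 2 case.  We realize
# F_8 = F_2[x]/(x^3 + x + 1); an element is an int 0..7 whose bit i is the
# coefficient of x^i.  The roots of x^3 + x + 1 satisfy v^2 = beta(v)
# (so inv(0) = beta); the roots of x^3 + x^2 + 1 give inv(1) = beta^{-1}.
def mul8(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 8:                    # reduce by x^3 = x + 1
            a ^= 0b1011
        b >>= 1
    return r

def inv8(a):
    return next(b for b in range(1, 8) if mul8(a, b) == 1)

def q3(v):
    """Q3(v) = (v^3 + v + 1)/(v^2 + v): the characteristic-2 form of Q3."""
    num = mul8(mul8(v, v), v) ^ v ^ 1
    return mul8(num, inv8(mul8(v, v) ^ v))

for v in range(2, 8):                # all of F_8 minus {0, 1}
    tau = q3(v)
    frob = mul8(v, v)                # v^q with q = 2
    if frob == mul8(v ^ 1, inv8(v)): # v^q = (v-1)/v = beta(v)
        assert tau == 0              # inv(0) = beta
    else:
        assert tau == 1              # inv(1) = beta^{-1}
```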
\subsection{Explicit description of $\inv(\tau,\sigma)$ when char$(K)\ne 3$.}
We wish to express $\inv(\tau,\sigma)$
purely in terms of $\tau$, without reference to~$v$.
Our method is to solve for $v$ in terms of $\tau$
and then determine how the Galois group permutes the solutions.
The equation relating $v$ and $\tau$ is cubic in $v$,
namely,
\begin{equation} v^3-3v+1-\tau v(v-1)=0. \label{eq:cubic} \end{equation}
There are well-documented ways to solve a cubic explicitly, dating back to the 1500s; however,
these fail in characteristic~3. This section assumes char$(K)\ne3$;
the characteristic-3 case will be considered
in Section~\ref{sec:G3char3}.
Let $\omega$ denote a {\it fixed} primitive cube root of~1 in $\cj K$.
Then $\omega^2+\omega+1=0$.
The first step to solve a cubic $x^3+Ax^2+Bx+C$ is to make a change of
variables $y=x+A/3$ so as to eliminate the $x^2$ term. Writing
$y^3 + Dy + E =0$, substitute $y=z+k/z$ to obtain
$$z^3 + k^3/z^3 + (3k+D)z + (3k^2+Dk) z^{-1} + E = 0.$$
By setting $k=-D/3$, the $z$ and $z^{-1}$ terms drop out:
$$z^6 + E z^3 - D^3/27 = 0.$$
Then $z^3 = \left(-E \pm \sqrt{E^2+4D^3/27}\right)/2$.
The right side has two possible values, so $z$ has six possible values,
however $y=z+k/z$ turns out to have only three possible values.
To solve the cubic (\ref{eq:cubic}),
it is convenient to let
$$\hat\tau = \tau/3,\qquad R = \hat\tau^2-\hat\tau+1=(\hat\tau+\omega)(\hat\tau+\omega^2).$$
Substitute $y=v-\hat \tau$ to obtain
$$y^3 -3Ry - (2\hat\tau-1)R = 0.$$
Next, substitute $y = z + R/z$ to obtain
$$z^3 + R^3/z^3 - (2\hat\tau-1)R = 0.$$
Then $z^6-(2\hat\tau-1)Rz^3+R^3=0$, so $(z^3/R)^2-(2\hat\tau-1)(z^3/R)+R=0$ and
$$z^3/R = (1/2)\left( 2\hat\tau-1 \pm \sqrt{(2\hat\tau-1)^2-4R}\right).$$
Note that $(2\hat\tau-1)^2-4R=-3$ and $\omega = (-1+\sqrt{-3})/2$ for
an appropriate choice of $\sqrt{-3}$. Thus, one choice of $z$ satisfies
$$z^3 = R\cdot(\hat\tau + \omega).$$
Fix $\lambda,\mu \in
\cj K$ such that
\begin{equation} \text{$\lambda^3 = \tau/3+\omega$ and $\mu^3 = \tau/3+\omega^2$.}
\label{eq:lambdamu} \end{equation}
Then $\lambda\mu$ is a cube root of $R$, so one solution for $z$ is
$z = \lambda^2 \mu$. Then $v=y+\hat\tau=\hat\tau + z+R/z
=\hat\tau + \lambda^2\mu + \lambda^3 \mu^3/(\lambda^2\mu)
= \hat\tau + \lambda^2 \mu + \lambda \mu^2$.
If we set $\lambda'=\omega^j \lambda$ for $j\in\Z/3\Z$ and use $\lambda'$ in the above
construction instead of $\lambda$, then we arrive at a solution
\begin{equation} v_j = \tau/3 + \omega^{-j} \lambda^2 \mu + \omega^{j} \lambda \mu^2.
\label{eq:vj} \end{equation}
\begin{proposition} \label{prop:vjRel} Suppose char$(K)\ne 3$, let $\tau\in \cj K$
and $\l,\mu,v_j$ as in (\ref{eq:lambdamu}) and~(\ref{eq:vj}).
Then $Q_3(v_j) = \tau$, where $Q_3(x)=(x^3-3x+1)/(x^2-x)$. Also,
\begin{equation}
v_1 = 1-1/v_0, \quad v_2 = 1-1/v_1. \label{eq:vjRel}
\end{equation}
\end{proposition}
\begin{proof} That $Q_3(v_j)=\tau$ was proved above. By Proposition~\ref{prop:Qorbit}, $v_0$, $v_1$, and $v_2$
belong to the same $G_3$-orbit. Either $v_1 = 1-1/v_0$, in which case $v_0 v_1 = v_0-1$,
or $v_1=1/(1-v_0)$, in which case $v_0v_1 = v_1-1$.
To see which of these holds, we compute $v_0 v_1$. As before, let $\hat\tau = \tau/3$. Then
\begin{eqnarray*} v_0 v_1 &=& (\hat\tau + \lambda^2 \mu + \lambda \mu^2)(\hat\tau + \omega^2 \lambda^2\mu + \omega \lambda \mu^2) \\
&=& (\hat\tau + \lambda \mu (\lambda + \mu))(\hat\tau + \lambda\mu (\omega^2 \lambda + \omega \mu)) \\
&=& \hat\tau^2 + \hat\tau \lambda\mu (\lambda + \mu + \omega^2 \lambda+\omega \mu )
+ \lambda^2 \mu^2 ( \omega^2 \lambda^2 + \omega^2\lambda\mu + \omega\lambda\mu + \omega\mu^2) \\
&=& \hat\tau^2 + \hat\tau \lambda\mu (-\omega\lambda - \omega^2 \mu )
+ \lambda^2 \mu^2 ( \omega^2 \lambda^2 - \lambda\mu + \omega\mu^2) \\
&=& \hat\tau^2 - \hat\tau (\omega \lambda^2\mu + \omega^2 \l\mu^2 )
+ \lambda^3 \omega^2\lambda\mu^2 -\lambda^3\mu^3 + \mu^3\omega\lambda^2\mu \\
&=& \hat\tau^2 - \hat\tau (\omega \lambda^2\mu + \omega^2 \l\mu^2 )
+ (\hat\tau+\omega) \omega^2\lambda\mu^2 -(\hat\tau+\omega)(\hat\tau+\omega^2) + (\hat\tau+\omega^2)\omega\lambda^2\mu \\
&=& \hat\tau^2 +\lambda\mu^2 - (\hat\tau^2-\hat\tau+1) + \lambda^2\mu \\
&=& \hat\tau + \lambda^2\mu + \lambda\mu^2 - 1 = v_0-1.
\end{eqnarray*}
This computation shows that $v_1=1-1/v_0$, \ie, $v_1=\beta(v_0)$, where $\beta=\textmatrix1{-1}10$.
Since $\set{v_0,v_1,v_2}$ is an orbit, it must be that $v_2 = \beta(v_1)$.
\end{proof}
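The construction of the $v_j$ can also be checked numerically over $\mathbb{C}$. The sketch below picks arbitrary branches of the cube roots (any choice satisfying (\ref{eq:lambdamu}) works) and verifies $Q_3(v_0)=\tau$ together with the relations (\ref{eq:vjRel}); it is illustrative only.

```python
import cmath

w = cmath.exp(2j * cmath.pi / 3)      # a primitive cube root of unity

def Q3(x):
    return (x**3 - 3*x + 1) / (x**2 - x)

tau = 2 + 0j                          # any regular value (tau not in {-3w, -3w^2})
lam = (tau/3 + w) ** (1/3)            # arbitrary branches of the cube roots in
mu = (tau/3 + w*w) ** (1/3)           # (eq:lambdamu); any choice works

v = [tau/3 + w**(-j) * lam**2 * mu + w**j * lam * mu**2 for j in range(3)]

print(abs(Q3(v[0]) - tau))            # ~ 0
print(abs(v[1] - (1 - 1/v[0])))       # ~ 0: v_1 = beta(v_0)
print(abs(v[2] - (1 - 1/v[1])))       # ~ 0: v_2 = beta(v_1)
```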
It is interesting to note what happens when $\tau$ is irregular.
Then, as shown in Section~\ref{sec:G3Q}, $\tau \in \{-3\omega,-3\omega^2\}$, therefore
$\lambda\mu=0$ and $v_j = \tau/3$ for all $j$.
The labeling of $v$'s depends on the choices for $\omega$, $\lambda$ and $\mu$.
However, for any such choice, $\beta(v_j)=1-1/v_j = v_{j+1}$ for each $j$. In other words,
\begin{equation} \beta^k(v_j) = v_{j+k}\quad\text{for $j,k\in\Z/3\Z$.} \label{eq:betavj} \end{equation}
The next theorem describes $\inv_{Q_3}(\tau,\sigma)$ directly in terms of $\sigma$ and $\tau$.
\begin{theorem} \label{thm:G3charneq3} Suppose char$(K)\ne 3$, and let $\s \in \Aut(\cj K/K)$.
If $\tau \in \cj K \setminus\{-3\omega,-3\omega^2\}$ and $\s(\tau)=\tau$, then
$\inv_{Q_3}(\tau,\sigma)=\b^\ell$ where $\ell$ is determined from $\tau$ as follows.\\
{\it (i)}\ Let $\z \in \cj K$ such that
$$\z^3=\frac{\tau+3\omega^2}{\tau+3\omega}.$$
Then $\s^2(\z)/\z = \omega^\ell$. \\
{\it (ii)}\
If $K=\F_q$ and $\s(x)=x^q$, where $3\nmid q$, then
$$\left(\frac{\tau+3\omega^2}{\tau+3\omega}\right)^{(q^2-1)/3} = \omega^\ell.$$
\end{theorem}
\begin{proof}
Since $Q(v_j)=\tau$ and $\inv(\tau,\s)=\beta^\ell$, (\ref{eq:invariantK}) and (\ref{eq:betavj}) imply
$\sigma(v_j)=\beta^{\ell}(v_j)=v_{j+\ell}$, hence
$$\sigma^2(v_j)=\beta^{2\ell}(v_j)=v_{j+2\ell}.$$
Let $\l,\mu$ be as in (\ref{eq:lambdamu}) and $\z_0=\mu/\l$.
Then $\z_0^3=(\tau+3\omega^2)/(\tau+3\omega)=\z^3$, therefore $\z=\omega^i\z_0$
for some $i\in \Z/3\Z$. Since $\omega$ is defined over at most a quadratic
extension of $K$, we have $\sigma^2(\omega)=\omega$ and hence $\s^2(\z)/\z=\s^2(\z_0)/\z_0$.
Now $\sigma^2(\z_0)/\z_0$
is a cube root of unity, because its cube is equal to $\sigma^2(\z_0^3)/\z_0^3=1$.
Let $\omega^k = \sigma^2(\z_0)/\z_0$.
By~(\ref{eq:vj}),
$$v_j/\l^3= \tau/(3\l^3) + \omega^{-j} \mu/\l + \omega^j(\mu/\l)^2 =
\frac\tau{\tau+3\omega} + \omega^{-j} \z_0 + \omega^j \z_0^2.$$
Then
$$ \sigma^2(v_j/\l^3) =
\frac\tau{\tau+3\omega} + \omega^{-j} \omega^k\z_0 + \omega^j \omega^{2k}\z_0^2
= v_{j-k}/\l^3,$$
which implies $\sigma^2(v_j)=v_{j-k} = v_{j+2k}$.
We have shown $\s^2(v_j)=v_{j+2k}=v_{j+2\ell}$. Since $v_0,v_1,v_2$
are distinct when $\tau$ is regular, it follows that $k\equiv \ell \pmod3$.
This proves {\it (i)}.
If $K=\F_q$ and $\s(x)=x^q$, then
$$ \omega^\ell = \s^2(\z)/\z = \z^{q^2-1} = (\z^3)^{(q^2-1)/3}
= \left( \frac{\tau+3\omega^2}{\tau+3\omega}\right)^{(q^2-1)/3},$$
proving {\it (ii)}.
\end{proof}
If the choice of $\omega$ is changed to $\tilde\omega=\omega^2$, then $\ell$ does not change:
$\l$ and $\mu$ are exchanged, so we may take the new value of $\z$ to be $\widetilde\z=1/\z$,
and $\s^2(\widetilde\z)/\widetilde\z = \s^2(\z^{-1})/\z^{-1}=\omega^{-\ell}=\widetilde \omega^\ell$.
This is to be expected; for example when $K=\F_q$, $\b^\ell$ reflects the form of
equation satisfied by $v\in Q_3^{-1}(\tau)$ (see (\ref{eq:type})),
and this is certainly independent of the choice of $\omega$.
\subsection{Explicit description of $\inv(\tau,\sigma)$ in characteristic 3. } \label{sec:G3char3}
As promised, we return to the case of char.~3.
Then the only short $G_3$-orbit is $\{-1\}$, and the only irregular element is $Q_3(-1)=0$. Let $\tau \in \cj K^\x$ and write $\tau = Q_3(v)=(v+1)^3/(v(v-1))$.
Note that $v \not \in \F_3$ since $\tau \not \in \{\infty,0\}$.
Suppose $\s \in \Aut(\cj K/K)$ and $\s(\tau)=\tau$.
Then $\inv_{Q_3}(\tau,\sigma)=\beta^\ell$ is determined from
$\sigma(v)=\beta^\ell(v)$. We wish to describe $\inv_{Q_3}(\tau,\sigma)$
purely in terms of $\tau$ and $\sigma$, without reference to $v$. The approach of solving for $v$ in terms of $\tau$
no longer works in characteristic~3, so we must try something different.
\begin{proposition} \label{prop:G3char3} Let $K$ be a field of characteristic~3 and let $\tau \in \cj K$ be regular (so $\tau \ne 0$).
Let $\sigma \in \Aut(\cj K/K)$
and assume $\s(\tau)=\tau$. Let $\z\in \cj K$ satisfy $\z^3-\z=1/\tau$. Then there is $\ell \in \F_3$ such that $\sigma(\z)=\z+\ell$,
and we have $\inv_{Q_3}(\tau,\sigma)=\beta^\ell$.
\end{proposition}
\begin{proof}
Let $v \in Q_3^{-1}(\tau)$. Since $\tau = (v^3+1)/(v^2-v)$, $v^3 - \tau v^2 + \tau v + 1 = 0$. Since $\tau\not\in\{ 0,\infty\}$,
$v\not\in\F_3$. The substitution $y=v+1$ eliminates the linear term:
$$y^3 - \tau y^2 + \tau = 0.$$
Let $\z = -1/y = -1/(v+1)$. (Here note that $y \ne 0$ since $v \not\in\F_3$.) Then
$$\z^3-\z = 1/\tau.$$
Let $\ell=\sigma(\z)-\z$. Then $\ell\in\F_3$, because
$$\ell^3=\s(\z^3)-\z^3=\s(\z+1/\tau)-(\z+1/\tau)=\s(\z)-\z=\ell.$$
The other roots of $x^3-x=1/\tau$ are $\z+i$, $i \in \F_3$, and $\s(\z+i)-(\z+i)=\ell$ for all three roots.
Since $\z=-1/(v+1)= \textmatrix0{-1}11(v)$,
\begin{eqnarray*} \s(v) &=& \s\left(\textmatrix {1}1{-1}0 (\z)\right) = \textmatrix {1}1{-1}0 \textmatrix 1\ell01 (\z) \\
&=& \textmatrix {1}1{-1}0 \textmatrix 1\ell01 \textmatrix 0{-1}11 (v) =\begin{pmatrix} \ell+1&\ell \\ -\ell & 1-\ell \end{pmatrix}(v) = \beta^\ell(v).
\end{eqnarray*}
Thus, $\inv_{Q_3}(\tau,\sigma)=\beta^\ell$, where $\ell = \sigma(\z)-\z$.
\end{proof}
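The matrix identity at the end of the proof can be checked mechanically. The following sketch verifies, for each $\ell$, that the composite of the three matrices equals $\beta^\ell$ in $\PGL_2(\F_3)$ (equality up to a nonzero scalar):

```python
p = 3   # the characteristic

def mmul(A, B):
    """Product of 2x2 matrices mod 3."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a*e + b*g) % p, (a*f + b*h) % p),
            ((c*e + d*g) % p, (c*f + d*h) % p))

def proj_eq(A, B):
    """Equality in PGL_2(F_3): A = s*B for some scalar s != 0."""
    ents = lambda M: (M[0][0], M[0][1], M[1][0], M[1][1])
    return any(ents(A) == tuple((s*x) % p for x in ents(B)) for s in (1, 2))

beta = ((1, 2), (1, 0))        # beta(x) = 1 - 1/x = (x-1)/x, with -1 = 2 mod 3

results = []
for ell in range(3):
    # sigma(v) = [recover v from zeta] o [zeta -> zeta + ell] o [zeta = -1/(v+1)]
    lhs = mmul(mmul(((1, 1), (2, 0)), ((1, ell), (0, 1))), ((0, 2), (1, 1)))
    rhs = ((1, 0), (0, 1))
    for _ in range(ell):
        rhs = mmul(rhs, beta)  # rhs = beta^ell
    results.append(proj_eq(lhs, rhs))
print(results)                 # [True, True, True]
```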
\begin{corollary} \label{cor:G3char3Fq}
If $\tau \in \F_q^\x$, where $q=3^n$, then $\inv_{Q_3}(\tau,q)=\b^\ell$, where $\ell=\Tr_{\F_q/\F_3}(1/\tau)$.
\end{corollary}
\begin{proof} Let $\z \in \cj\F_q$ satisfy $\z^3-\z=1/\tau$. By Proposition~\ref{prop:G3char3}, $\z^q=\z+\ell$ for some $\ell \in \F_3$, and $\inv_{Q_3}(\tau,q)=\b^\ell$.
We claim that $\ell=\Tr_{\F_q/\F_3}(1/\tau)$. Indeed,
\begin{equation*} \Tr_{\F_q/\F_3}(1/\tau) = \sum_{i=0}^{n-1} (1/\tau)^{3^i} = \sum_{i=0}^{n-1}(\z^3-\z)^{3^i} = \sum_{i=0}^{n-1} \left( \z^{3^{i+1}} - \z^{3^i}\right)
= \z^q-\z = \ell. \end{equation*}
\end{proof}
\subsection{A symbol with values in $\Z/3\Z$.} \label{sec:symbol}
Let $K=\F_q$, where $q$ is any prime power. Then
$\inv(\tau,q)\in G_3 = \set{I,\beta,\beta^{-1}}$ for regular $\tau \in \F_q$. The group $G_3$ is abstractly isomorphic to $\Z/3\Z$, but making this explicit
requires selecting a preferred generator, which seemingly could equally well be $\beta$ or $\beta^{-1}$. On the other hand, Corollary~\ref{cor:G3char3Fq} relates
$\inv(\tau,q)$ with the absolute trace map in char.~3, which
is genuinely a map to $\Z/3\Z$. Specifically, if $q=3^n$ and $\tau = Q_3(v) \in \F_q^\x$, then
$$\text{$v^q = \beta^j(v) \iff \Tr_{\F_q/\F_3}(1/\tau)=j$, \ie,
$\log_\beta(\inv(\tau,q)) = \Tr_{\F_q/\F_3}(1/\tau)$.}$$
The trace map
determines the preferred generator $\beta\in G_3$, or equivalently a preferred isomorphism $\log_\beta : G_3 \to \Z/3$.
Then, for all characteristics, $\log_\beta (\inv(\tau))$ takes values in $\Z/3\Z$. We summarize this in the theorem below.
\begin{theorem} \label{thm:G3symbol} {\bf (A tripartite symbol)}
Let $q$ be any prime power. Let $\tau \in \F_q$, and assume $\tau^2-3\tau+9 \ne 0$.
Define a symbol $[\tau/q] \in \Z/3\Z$ by
$$ \left[\tau/ q\right] = \Tr_{\F_q/\F_3}(1/\tau), \qquad \text{if $3|q$;} $$
$$ \left(\frac{\tau + 3\omega^2}{\tau+3\omega}\right)^{(q^2-1)/3} = \omega^{[ \tau/ q]},\qquad \text{if $3\nmid q$}, $$
where $\omega$ is any primitive cube root of unity in $\cj\F_q$ when $3\nmid q$.
Let $\beta = \textmatrix1{-1}10 \in \PGL_2(\F_q)$.
Then $\beta$ has order~3, so $\beta^j$ is well defined for $j \in \Z/3\Z$.
If $v \in \cj\F_q$ satisfies $v^3-3v+1 = v(v-1)\tau$, then
\begin{equation} v^q=\beta^{[\tau/ q]}(v). \label{eq:vqbeta} \end{equation}
Moreover, (\ref{eq:vqbeta}) determines $[\tau/ q]\in\Z/3\Z$ uniquely and could serve as an alternative definition for the symbol.
\end{theorem}
\begin{proof} This is immediate from Theorem~\ref{thm:G3charneq3} and
Corollary~\ref{cor:G3char3Fq}.
\end{proof}
Theorem~\ref{thm:G3symbol} weaves together fields of different characteristic in a remarkable way.
The group $G_3$ is defined over the integers, so $G_3\subset \PGL_2(K)$ for any field $K$. Taking $K=\F_q$,
this is consistent with the fact that 3 divides $|\PGL_2(\F_q)|=q(q-1)(q+1)$; in particular, 3 could divide $q$, $q-1$, or $q+1$.
Generally speaking, these three cases behave very differently, \eg,
when $3|q$, $\beta$ has a single eigenvalue and it is conjugate to $\textmatrix 1101$. If $3|q-1$ then $\beta$ has two rational eigenvalues and it is
conjugate to a diagonal matrix of order~3. If $3|q+1$, then $\beta$ has a pair of quadratic irrational eigenvalues. The equation~(\ref{eq:vqbeta})
applies to all these cases simultaneously.
We point out the similarity between Corollary~\ref{cor:G3char3Fq}
(when $q=3^n$) and Proposition~\ref{prop:BDD}(2) (when $q=2^n$).
(See the remark following Proposition~\ref{prop:BDD}.)
In Corollary~\ref{cor:G3char3Fq}, the order-3 element $\beta$ is conjugate to $\textmatrix 1101$ in characteristic~3, and
$$\inv(\tau,q) = \beta^{\Tr_{\F_q/\F_3}(1/\tau)}.$$
In Proposition~\ref{prop:BDD}(2) with $G=\{\textmatrix 1001,\textmatrix 0110\}$, the order-2 element $\gamma=\textmatrix 0110$ is conjugate to
$\textmatrix 1101$ in characteristic~2, and
$$\inv(\tau,q) = \gamma^{\Tr_{\F_q/\F_2}(1/\tau)}.$$
In both cases, the appearance of the trace map can be explained by Lemma~\ref{lem:conjugateInv}
and Proposition~\ref{prop:dim1codim1}{\it (i)}.
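For a concrete instance of Theorem~\ref{thm:G3symbol}, take $q=7$, where $\omega=2$ is a primitive cube root of unity in $\F_7$. Since $v^q=v$ for $v\in\F_7$, and the $v_j$ are distinct for regular $\tau$, equation (\ref{eq:vqbeta}) predicts that the cubic $v^3-3v+1=v(v-1)\tau$ has a root in $\F_7$ exactly when $[\tau/7]=1$. A brute-force sanity check (illustrative only, not a proof):

```python
q = 7
w = 2                         # a primitive cube root of unity in F_7 (2**3 = 8 = 1 mod 7)

def symbol(tau):
    """[tau/q] as an element of {1, w, w^2} in F_7 (here 3 does not divide q)."""
    num = (tau + 3 * w * w) % q
    den = (tau + 3 * w) % q
    return pow(num * pow(den, -1, q), (q * q - 1) // 3, q)

def has_root(tau):
    """Does v^3 - 3v + 1 = v(v - 1)*tau have a solution v in F_7?"""
    return any((v**3 - 3*v + 1 - tau * v * (v - 1)) % q == 0 for v in range(q))

# tau = 1, 2 are the irregular values (tau^2 - 3*tau + 9 = 0 mod 7); skip them.
regular = [t for t in range(q) if (t*t - 3*t + 9) % q != 0]
# The theorem predicts: the cubic has a root in F_7 iff the symbol is trivial.
consistent = all(has_root(t) == (symbol(t) == 1) for t in regular)
print(consistent)             # True
```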
\section{The dihedral group of order~6} \label{sec:six}
Consider the subgroup $G_6 \subset \PGL_2(K)$ that is generated by the transformations
$\beta(x)=1-1/x$ and $\rho(x)=1/x$. This group is dihedral of order~6 since $\beta^3=1$, $\rho^2=1$,
and $\rho\beta^i\rho = \beta^{-i}$.
The orbit of $v\in\cj K \cup \{\infty\}$ is
$$\calO_v = \set{v,1-1/v,1/(1-v),1/v,1-v,v/(v-1)}.$$
Note that $\calO_\infty= \{\infty,1,0\}$ is short.
To find the other short orbits, we find all $v \in \cj K \setminus \{0,1\}$ such that $v=\gamma(v)$ for some $1\ne \gamma \in G$.
If char$(K) \not \in \{2,3\}$, then
\begin{itemize}
\item $v = 1-1/v$ iff $v \in \{-\omega,-\omega^2\}$, where $\omega^2+\omega+1=0$;
\item $v = 1/(1-v)$ iff $v \in \{-\omega,-\omega^2\}$;
\item $v = 1/v$ iff $v = -1$. (Here $v=1$ is excluded since it belongs to $\calO_\infty$.)
\item $v=v/(v-1)$ iff $v = 2$. (Here $v=0$ is excluded since it belongs to $\calO_\infty$.)
\item $v = 1-v$ iff $v = 1/2$.
\end{itemize}
The short orbits are $\{-\omega,-\omega^2\}$, $\{-1,2,1/2\}$, and $\calO_\infty$ in that case.
If char$(K)=2$ then the latter three equations have no solutions in $\cj K \setminus \F_2$, and the only short orbits in $\cj K \cup \{\infty\}$
are $\{-\omega,-\omega^2\} = \F_4\setminus \F_2$ and $\calO_\infty = \F_2 \cup \{\infty\}$. Finally, if char$(K)=3$ then
the short orbits are $\{-1\}$ and $\calO_\infty$.
\begin{lemma} \label{lem:G6Q} The function
\begin{equation} \label{def:Q6}
Q_6(x) = \frac{(x^3-3x+1)(x^3-3x^2+1)}{x^2(x-1)^2}
\end{equation}
is a quotient map for $G_6$ over any field $K$.
The set of irregular elements of $\cj K\cup \{\infty\}$ is $S\cup\{\infty\}$, where
\begin{equation} \label{def:S}
S = \begin{cases} \{-9,-9/4\} & \text{if char$(K) \not \in \{2,3\}$} \\ \{1\} & \text{if char$(K)=2$} \\ \{0\} & \text{if char$(K)=3$.} \end{cases}
\end{equation}
\end{lemma}
\begin{proof} Note that $Q_6(x)=-Q_3(x)Q_3(1/x)$, where $Q_3(x)=(x^3-3x+1)/(x(x-1))$ is the quotient map for $G_3$
given in Lemma~\ref{lem:G3Q}.
It is clear that $Q_6(x)=Q_6(1/x)$. Also,
$Q_6(\beta(x)) = -Q_3(\beta(x)) Q_3(\rho(\beta(x))) = -Q_3(x) Q_3(\beta^2\rho(x)) = Q_6(x)$. This proves $G$-invariance. The degree of
the numerator is $6=|G_6|$ and the degree of the denominator is $<|G_6|$. Thus, $Q_6$ satisfies all required properties to be a quotient map for $G_6$.
The irregular elements are $Q_6(v)$, where $v$ is in a short $G_6$-orbit. The short $G_6$-orbits were computed in the paragraph preceding the statement of the lemma.
Computing their images under $Q_6$ demonstrates that the irregular elements are $S\cup\{\infty\}$.
\end{proof}
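Over $\Q$, the orbit structure and the $G_6$-invariance of $Q_6$ are easy to test with exact rational arithmetic. The sketch below also confirms that the short orbit $\{-1,2,1/2\}$ maps to the irregular value $-9/4$:

```python
from fractions import Fraction as F

def orbit(v):
    """The G_6-orbit of v (exact rational arithmetic; v not in {0, 1})."""
    return {v, 1 - 1/v, 1/(1 - v), 1/v, 1 - v, v/(v - 1)}

def Q6(x):
    return (x**3 - 3*x + 1) * (x**3 - 3*x**2 + 1) / (x**2 * (x - 1)**2)

orb = orbit(F(3))
print(len(orb))                                # 6: a generic orbit is full
print(len({Q6(u) for u in orb}))               # 1: Q_6 is constant on the orbit
print(orbit(F(2)) == {F(-1), F(2), F(1, 2)})   # True: the short orbit
print(Q6(F(2)) == F(-9, 4))                    # True: it maps to an irregular value
```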
Amusingly, Artin \cite[\S II.G]{Artin} considers the
particular example of finding the fixed field in $K(x)$ of the group of automorphisms $f(x) \mapsto f(\gamma(x))$ for $\gamma \in G_6$.
He finds that the fixed field is $K(I)$, where $I(x)=(x^2-x+1)^3/\left(x^2(x-1)^2\right)$.
Note that $I(x)=Q_6(x) + 9$. In the notation of Theorem~\ref{thm:computeQ}, $I(x) = f_\calO(x)/g(x)$, where $\calO=\{-\omega,-\omega^2\}$.
Recall that $\inv_{Q_6}(\tau,\sigma)$ is a certain conjugacy class in $G_6$. The conjugacy classes are
$$\calC_\beta = \{\beta,\beta^2\}, \quad \calC_\rho = \set{\rho,\rho\beta,\rho\beta^2},\quad \calC_1=\{1\}.$$
The next proposition computes $\inv_{Q_6}(\tau,\s)$.
\begin{proposition} Let $\tau \in \cj K$ be regular with respect to $Q_6$
and let $\sigma \in \Aut(\cj K/K)$ such that $\s(\tau)=\tau$.
Let $z \in \cj K$ satisfy $z^2-3z=\tau$. Then \\
{\it (i)}\ $z$ is regular with respect to $Q_3$. \\
{\it (ii)}\ If $\sigma(z)=z$ then $\gamma = \inv_{Q_3}(z,\sigma) \in G_3$ is defined, and $\inv_{Q_6}(\tau,\sigma)=\calC_\g$. \\
{\it (iii)}\ If $\sigma(z)\ne z$, then $\inv_{Q_6}(\tau,\sigma) = \calC_\rho$. \\
{\it (iv)}\ If {\rm char}$(K)\ne 2$, then $\sigma(z)=z$ iff $\s(\sqrt{9+4\tau}\,)=\sqrt{9+4\tau}$. If $K=\F_{2^n}$ and $\sigma(x)=x^{2^n}$, then
$\sigma(z)=z$ iff $\Tr_{\F_{2^n}/\F_2}(\tau)=0$.
\end{proposition}
\begin{proof} By Lemma~\ref{lem:reciprocal}, $Q_6(x)=-Q_3(x)Q_3(1/x) = -Q_3(x)(3-Q_3(x))$. Thus, $Q_6(x)=h\circ Q_3(x)$
and $\tau = h(z)$, where $h(x)=x^2-3x$. The statements {\it (i)} and {\it (ii)} are true by Lemma~\ref{lem:Hinv}, and {\it (iv)} is well known.
It remains only to prove~{\it (iii)}. Let $v\in Q_3^{-1}(z)$.
Then $Q_6(v)=h(Q_3(v))=h(z)=\tau$, so that by Proposition~\ref{prop:Qorbit}, $Q_6^{-1}(\tau)=\{\g(v) : \g \in G_6\}$.
Since $Q_6(\s(v))=\s(\tau)=\tau$, $\s(v)=\g(v)$ for some $\g \in G_6$, and by (\ref{eq:invariantK}), $\inv_{Q_6}(\tau,\s)=\calC_\g$.
We claim that $\g \not\in G_3$. Indeed,
$$\sigma(z)=\sigma(Q_3(v))=Q_3(\sigma(v))=Q_3(\g(v)),$$
therefore $\sigma(z)=z$ iff $v,\g(v)$ are both in $Q_3^{-1}(z)$
iff $v$ and $\g(v)$ are in the same $G_3$-orbit iff $\g(v)=\d(v)$ for some $\d \in G_3$ iff $\g=\d \in G_3$, where in the last
step we used the fact that $\g(v)$ for $\g\in G_6$ are distinct because of the hypothesis that $\tau$ is $G_6$-regular.
Since we assume in {\it (iii)} that $\s(z)\ne z$, this proves the claim that $\g \in G_6\setminus G_3$.
Then $\inv_{Q_6}(\tau,\sigma)=\calC_\g = \calC_\rho$, since $\calC_\rho = G_6 \setminus G_3$.
\end{proof}
\part{Subgroups of $\PGL_2(\F_q)$} \label{part:q}
This part considers many different subgroups of $\PGL_2(\F_q)$, including Borel subgroups, unipotent subgroups, cyclic subgroups,
$\PGL_2(\F_q)$, and $\PSL_2(\F_q)$. As indicated in the introduction (Examples~\ref{example:1b01} and~\ref{example:pgl2}), explicit computation of Artin invariants
reveals arithmetic information about additive polynomials and conjugacy classes of $\PGL_2(\F_q)$.
\section{Borel subgroup of $\PGL_2(\F_q)$}
The Borel subgroup $B_q\subset \PGL_2(\F_q)$ is defined as
$$B_q=\set{\g \in \PGL_2(\F_q) : \g(\infty)=\infty} = \set{\textmatrix ab01 \in \PGL_2(\F_q) : a \in \F_q^\x,\ b \in \F_q}.$$
The cardinality is $q(q-1)$, and the short orbits are $\{\infty\}$ and $\F_q$. The orbit $\F_q$ has multiplicity $|B_q|/q=q-1$. By
Theorem~\ref{thm:computeQ}, a quotient map is given by
$$Q(x) = (x^q-x)^{q-1}.$$
The irregular elements are the images of the short orbits under $Q$, namely 0 and $\infty$.
The conjugacy classes of $B_q$ are $\calC_{\textmatrix a001}$ for $a \in \F_q^\x$ and $\calC_{\textmatrix 1101}$.
No element $\tau \in \F_q^\x$ has $\inv(\tau) = \calC_{\textmatrix 1001}$, because $v^q = v$ implies that $v$ belongs to a short $B_q$-orbit.
\begin{proposition} \label{prop:Borel} If $\tau \in \F_q^\x$ then
$$ \inv(\tau) = \begin{cases} \calC_{\textmatrix 1101} & \text{if $\tau = 1$} \\
\calC_{\textmatrix \tau 001} & \text{otherwise.}
\end{cases}
$$
\end{proposition}
\begin{proof} If $v \in \cj\F_q\setminus \F_q$ and $v^q = \textmatrix 1101 (v) = v+1$ then $Q(v) = (v^q-v)^{q-1} = 1$. Thus, $\inv(1)=\calC_{\textmatrix 1101}$.
If $\tau\ne1$ and $v^q= \textmatrix \tau 001(v) = \tau v$ then
$Q(v) = (v^q-v)^{q-1}=v^{q-1}(v^{q-1}-1)^{q-1}=\tau (\tau -1)^{q-1}=\tau $. Thus, $\inv(\tau )=\calC_{\textmatrix \tau 001}$ when $\tau\ne1$.
\end{proof}
Now we generalize. Suppose that $q=P^e$, where $P$ is a prime power, and let
$$H = B_P = \set{ \textmatrix ab01 : a \in \F_P^\x,\ b \in \F_P}.$$
Then $Q_H(x) = (x^P-x)^{P-1}$ is a quotient map, $\{\infty\}$ and $\F_P$ are the only short $B_P$-orbits, and 0 and $\infty$ are irregular.
The conjugacy classes of $H$ are $\calC_{\textmatrix a001,H}$ with $a \in \F_P^\x$ and $\calC_{\textmatrix 1101,H}$, where $\calC_{\g,H}$ denotes $\{\a\g\a^{-1} : \a\in H\}$.
Let N and Tr denote the polynomials in $\F_q[x]$:
$$N(x) = \prod_{i=0}^{e-1} x^{P^i} = x^{1+P+P^2+\cdots + P^{e-1}} = x^{(q-1)/(P-1)},\qquad
\Tr(x) = \sum_{i=0}^{e-1} x^{P^i}.$$
If $\tau \in \F_q$ then $N(\tau)=N_{\F_q/\F_P}(\tau)$ and $\Tr(\tau)=\Tr_{\F_q/\F_P}(\tau)$.
For each $a\in\F_P$, there are exactly $q/P$ elements in $\F_q$ with trace~$a$,
and $q/P$ is the degree of Tr$(x)$, thus
$$\text{If $\tau \in \cj\F_q$ and Tr$(\tau)=a\in\F_P$ then $\tau \in \F_q$ and $\Tr_{\F_q/\F_P}(\tau)=a$.}$$
Likewise, if $\tau\in\cj\F_q$ and $N(\tau)\in\F_P^\x$ then $\tau\in\F_q^\x$, because $\tau^{q-1}=N(\tau)^{P-1}=1$.
Note also that if $\tau=s^{P-1}$ and $N(\tau)=1$ then $s\in\F_q^\x$, because $s^{q-1}=N(s^{P-1}) = N(\tau)=1$.
\begin{proposition} \label{prop:BorelGen}
With respect to $H$ and $Q_H$ given above, for $\tau \in \F_q^\x$: \\
{\it (i)}\ If N$(\tau)\ne 1$ then $ \inv_{Q_H}(\tau,q) = \ \calC_{\textmatrix {N(\tau)} 001,H}$. \\
{\it (ii)}\ If N$(\tau) = 1$ then we may write $\tau=s^{P-1}$ with $s \in \F_q^\x$, and
$$ \inv_{Q_H}(\tau,q) = \begin{cases} \calC_{\textmatrix 1001,H} & \text{if $\Tr(s)=0$} \\
\calC_{\textmatrix 1 101,H} & \text{if $\Tr(s)\ne 0$.}
\end{cases}
$$
\end{proposition}
\begin{proof} We will apply Lemma~\ref{lem:Hinv}, taking $G=B_q$ and $H=B_P$. First, we compute a function $h$ such that $Q_G=h\circ Q_H$:
\begin{eqnarray*} Q_G &=& (x^q-x)^{q-1} = \left( \sum_{i=0}^{e-1} (x^P-x)^{P^i}\right)^{q-1} = (x^P-x)^{q-1}\left( \sum_{i=0}^{e-1}(x^P-x)^{P^i-1} \right)^{q-1}
\\ &=& N(Q_H)f(Q_H)^{q-1}, \quad
\text{where $f(x)=\sum_{i=0}^{e-1} x^{(P^i-1)/(P-1)}$.}
\end{eqnarray*}
Thus, $h(x)=N(x)f(x)^{q-1}$.
Let $\tau \in \F_q^\x$ and $v \in Q_H^{-1}(\tau)$, so $\tau = (v^P-v)^{P-1}$. Then $h(\tau)=h\circ Q_H(v)=Q_G(v)=(v^q-v)^{q-1}$. Since
$f(\tau)\in\F_q$, $h(\tau)=N(\tau)f(\tau)^{q-1} \in \{ N(\tau),0\}$. On the other hand, $h(\tau)=(v^q-v)^{q-1}$ vanishes if and only if $v\in \F_q$. Thus,
$$h(\tau) = \begin{cases} N(\tau) & \text{if $v \not \in \F_q$} \\ 0 & \text{if $v\in \F_q$.} \end{cases} $$
Since $(v^P-v)^{P-1}=\tau\ne 0$, $v \not \in \F_P$.
Let $s\in \cj\F_q$ such that $\tau=s^{P-1}$. Then
$(s/(v^P-v))^{P-1}=1$, so $s=c(v^P-v)$ with $c\in\F_P^\x$, and $\Tr(s)=c\Tr(v^P-v)=c(v^q-v)$. In particular, $\Tr(s)=0$ iff $v\in \F_q\setminus\F_P$. Thus, if $\Tr(s)=0$ then
$N(\tau)=(v^P-v)^{q-1}=1$,
and the formulas $\tau=Q_H(v)$, $v^q=v$ imply $\inv_{Q_H}(\tau)=\calC_{\textmatrix1001}$.
Now suppose $\Tr(s)\ne 0$, so $v\not\in\F_q$.
Then $h(\tau)=N(\tau)\ne0$, so $h(\tau)$ is regular with respect to $Q_G$. By Lemma~\ref{lem:Hinv},
there is $\d \in H$ such that $\inv_{Q_H}(\tau,q)=\calC_{\d,H}$ and $\inv_{Q_G}(N(\tau),q)=\calC_{\d,G}$. On the other hand, Proposition~\ref{prop:Borel} implies
$\inv_{Q_G}(N(\tau),q) = \calC_{\g,G}$, where $\g=\textmatrix{N(\tau)}001$ if $N(\tau)\ne 1$ and $\g=\textmatrix 1101$ if $N(\tau)=1$. In both cases,
$\calC_{\g,G}=\left\{\textmatrix {N(\tau)} b01 : b \in \F_q\right\}$ and $\calC_{\g,G}\cap H=\calC_{\g,H}$. Since $\d \in \calC_{\d,G}\cap H = \calC_{\g,G}\cap H=\calC_{\g,H}$,
it follows that $\inv_{Q_H}(\tau,q)=\calC_{\d,H}=\calC_{\g,H}$.
\end{proof}
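The factorization $Q_G = N(Q_H)f(Q_H)^{q-1}$ used in the proof can be verified directly for small parameters. The sketch below takes $P=2$, $e=2$ (so $q=4$), where $h(y)=y^3(1+y)^3$, and compares coefficients over $\F_2$:

```python
def pmul(a, b):
    """Multiply polynomials over F_2 (coefficient lists, lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= x & y          # addition mod 2 is XOR
    return out

def ppow(a, n):
    r = [1]
    for _ in range(n):
        r = pmul(r, a)
    return r

# P = 2, e = 2, q = 4.  Q_H(x) = x^P - x = x^2 + x over F_2:
QH = [0, 1, 1]
# h(y) = N(y) f(y)^{q-1} = y^3 (1 + y)^3, so h(Q_H) = Q_H^3 (1 + Q_H)^3:
lhs = pmul(ppow(QH, 3), ppow([1, 1, 1], 3))
# Q_G(x) = (x^q - x)^{q-1} = (x^4 + x)^3 over F_2:
rhs = ppow([0, 1, 0, 0, 1], 3)
print(lhs == rhs)                        # True
```

Indeed, $(x^2+x)(x^2+x+1)=x^4+x$ in characteristic~2, so both sides equal $(x^4+x)^3$.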
\section{Unipotent subgroups of $\PGL_2(\F_q)$} \label{sec:unipotent}
Let $q = p^n$ where $p$ is prime.
A {\it unipotent subgroup} of $\PGL_2(\F_q)$ is a group $G_W=\set{\textmatrix 1 w 0 1 : w \in W}$, where $W$ is an $\F_p$-vector subspace of $\F_q$.
Note that $\{\infty\}$ is the only short orbit. All other orbits are cosets of $W$ in $\cj\F_q$, and each has cardinality $|G_W|=p^{\dim(W)}$.
Every element of $\cj\F_q$ is regular.
Thus, for each $\tau \in \F_q$, there is a unique $\g = \textmatrix 1j01 \in G_W$ such that
$v^q=\gamma(v)$ for all $v \in Q_W^{-1}(\tau)$. Otherwise put, if $Q_W(v)=\tau \in \F_q$, then $j := v^q-v \in W$, and $j$ depends only on $\tau$.
The simplest example is $W = \F_p\subset \F_q$, and $G_W\subset \PGL_2(\F_q)$ is the subgroup of order $p$ generated by $\textmatrix 1101$. Then $Q(x)=x^p-x$
is a quotient map\footnote{The polynomial $x^p-x-\tau$ is called an Artin-Schreier polynomial.
If $\Tr_{\F_q/\F_p}(\tau)\ne 0$ then its splitting field has degree~$p$
and Galois group $\Z/p\Z$. See \cite[VI, \S6, Th.~6.4]{Lang}.}.
Let $\tau = Q(v) \in \F_q$. Then
$v^q= \textmatrix 1j01 v = v+j$ for some $j \in \F_p$, and $\inv(\tau)=\textmatrix 1j01$.
To relate $\inv(\tau)$ to a quantity that is directly computable from $\tau$, we note that
\begin{eqnarray*}
\Tr_{\F_q/\F_p}(\tau) &=& \tau + \tau^p + \tau^{p^2} + \cdots + \tau^{q/p} \\
&=& (v^p-v)+(v^{p^2}-v^p)+(v^{p^3}-v^{p^2})+ \cdots + (v^q-v^{q/p})
= v^q-v=j.
\end{eqnarray*}
Thus, the invariant coming from this group is essentially the absolute trace of $\tau$.
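As a small worked example (with $p=3$, $q=9$): realize $\F_{27}$ as $\F_3[t]/(t^3-t-1)$, which is legitimate since $t^3-t-1$ has no roots in $\F_3$ and hence is irreducible. Taking $v=t$ gives $\tau=v^3-v=1$ and $\Tr_{\F_9/\F_3}(\tau)=1+1^3=2$, so the invariant should be $j=v^9-v=2$. A sketch (illustrative only):

```python
p = 3

def gmul(a, b):
    """Multiply in GF(27) = F_3[t]/(t^3 - t - 1); elements are coefficient triples."""
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    # reduce using t^3 = t + 1 and t^4 = t^2 + t
    c[0] = (c[0] + c[3]) % p
    c[1] = (c[1] + c[3] + c[4]) % p
    c[2] = (c[2] + c[4]) % p
    return tuple(c[:3])

def gpow(a, n):
    r = (1, 0, 0)
    for _ in range(n):
        r = gmul(r, a)
    return r

def gsub(a, b):
    return tuple((x - y) % p for x, y in zip(a, b))

v = (0, 1, 0)                 # v = t, a root of x^3 - x - 1
tau = gsub(gpow(v, 3), v)     # tau = Q(v) = v^3 - v
print(tau)                    # (1, 0, 0): tau = 1, an element of F_9
print(gsub(gpow(v, 9), v))    # (2, 0, 0): v^q - v = 2 = Tr_{F_9/F_3}(1)
```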
Now let $W$ be an arbitrary $\F_p$-vector subspace of $\F_q$. Then
$$Q_W(x) = \prod_{w \in W} (x-w)$$
is easily seen to be $G_W$-invariant, so it is a quotient map for $G_W$.
\begin{lemma} $Q_W(x)$ is an additive polynomial, \ie, $Q_W(x+y)=Q_W(x)+Q_W(y)$. \end{lemma}
\begin{proof} See Goss \cite{Goss}, Theorem 1.2.1. Alternatively, observe that $Q_W(x)$ and $Q_W(x+y)$ are both quotient maps for $G_W$
over the field $\F_q(y)$, as both are invariant under $G_W$ and of the right form. By Proposition~\ref{prop:QExists},
there are $a,b\in \F_q(y)$ with $a\ne0$ such that $Q_W(x+y)=aQ_W(x)+b$. Since $Q_W(x+y)$ and $Q_W(x)$ are both monic as polynomials in~$x$,
$a$ must be~1. The value for $b$ can be found by setting $x=0$: $Q_W(y)=Q_W(0)+b=b$. Hence, $Q_W(x+y)=Q_W(x)+Q_W(y)$.
\end{proof}
Suppose now that $P$ is a power of $p$ and $\F_p\subset \F_P \subset \F_q$, and that $W$ is an $\F_P$-vector subspace of $\F_q$. In that case, $Q_W$ is $\F_P$-additive, meaning that it is additive and it satisfies the additional property that $Q_W(ax)=aQ_W(x)$ for all $a\in \F_P$.
\begin{lemma} A monic polynomial $L(x)\in\F_q[x]$ is $\F_P$-additive if and only if it has the form $x^{P^d} + \sum_{i=0}^{d-1} a_i x^{P^i}$,
where $a_i \in \F_q$.
\end{lemma}
\begin{proof} See Goss \cite{Goss}, Proposition 1.1.5. \end{proof}
\begin{proposition} \label{prop:YW} Let $q=P^e$.
Let $W\subset\F_q$ be a $d$-dimensional $\F_P$-vector subspace and $Y = Q_W(\F_q)$. Then $Y$ is an $(e-d)$-dimensional
$\F_P$-vector subspace of $\F_q$, and $Q_Y \circ Q_W(x) = x^q-x$.
\end{proposition}
\begin{proof} Since $Q_W$ is $\F_P$-additive, it may be viewed as an $\F_P$-linear map from $\F_q$ to $\F_q$. Its image $Y$ is then an $\F_P$-vector subspace of $\F_q$.
Note that $W$ is the kernel. Because of the exact sequence $0 \to W \to \F_q \to Y \to 0$, $\dim_{\F_P}(W) + \dim_{\F_P}(Y) = \dim_{\F_P}(\F_q)=e$.
Let $Z$ be a complementary subspace to $W$, that is, $\dim_{\F_P}(Z)=e-d$ and $Z+W=\F_q$. Then $Q_W$ maps $Z$ isomorphically onto $Y$, and
\begin{eqnarray*}
x^q-x &=& \prod_{a \in \F_q} (x-a) = \prod_{w \in W, z \in Z}(x-w-z) \\
&=& \prod_{z\in Z}Q_W(x-z) = \prod_{z \in Z} (Q_W(x)-Q_W(z)) \\
&=& \prod_{y\in Y} (Q_W(x)-y) = Q_Y\circ Q_W(x).
\end{eqnarray*}
\end{proof}
\begin{proposition} \label{prop:YWinv}
Let $W$ and $Y$ be as in Proposition~\ref{prop:YW}.
If $\tau\in \F_q$ then $Q_W^{-1}(\tau)$ is a $G_W$-orbit (\ie, a coset $v+W\subset \cj\F_q$), and there is a unique $\gamma = \textmatrix 1w01 \in G_W$
such that $v^q = \gamma(v) = v + w$ for all $v \in Q_W^{-1}(\tau)$. Moreover, $w = Q_Y(\tau)\in W$.
\end{proposition}
\begin{proof} Writing $Q_W(v)= \tau$, we have $0=\tau^q-\tau=Q_W(v^q)-Q_W(v)=Q_W(v^q-v)$, so $v^q-v \in W$. Set $w=v^q-v$.
By Proposition~\ref{prop:YW}, $w=v^q-v=Q_Y(Q_W(v))=Q_Y(\tau)$.
\end{proof}
In the above proposition, $\gamma = \inv_{Q_W}(\tau)$ is expressed directly in terms of $\tau$:
$$\inv_{Q_W}(\tau) = \textmatrix 1 {Q_Y(\tau)} 0 1 \in G_W.$$
This is surprising, as it is not even obvious that $Q_Y(\tau)$ belongs to $W$.
The following corollary may be of independent interest.
\begin{corollary} \label{cor:duality} {\bf (Reciprocity)} If $q=P^e$, $W$ is a $d$-dimensional $\F_P$-vector subspace of $\F_q$, and $Y=Q_W(\F_q)$, then there are short exact
sequences of $\F_P$-vector spaces:
$$0 \to W \stackrel{inc.}{\longrightarrow} \F_q \stackrel{Q_W}{\longrightarrow} Y \to 0 $$
and
$$0 \to Y \stackrel{inc.}{\longrightarrow} \F_q \stackrel{Q_Y}{\longrightarrow} W \to 0. $$
Moreover, $Q_Y \circ Q_W(x) = Q_W \circ Q_Y(x) = x^q-x$.
\end{corollary}
\begin{proof} $Q_W$ is an $\F_P$-linear map from $\F_q$ to $\F_q$, has kernel~$W$, and has image~$Y$, thus the first short exact sequence holds, and $Y$ is an $\F_P$-subspace of $\F_q$ of dimension
$e-d$. Likewise, there is a short exact sequence of $\F_P$-vector spaces
$$0 \to Y \stackrel{inc.}{\longrightarrow} \F_q \stackrel{Q_Y}{\longrightarrow} V \to 0, $$
where $\dim_{\F_P}(V)=e-(e-d)=d$.
By Proposition~\ref{prop:YWinv}, if $\tau \in \F_q$ then
$Q_Y(\tau) \in W$, thus
$V=Q_Y(\F_q) \subset W$. Since $V$ and $W$ have the same dimension, they are equal. Proposition~\ref{prop:YW}
shows $Q_Y \circ Q_W(x) = x^q-x$. Applying Proposition~\ref{prop:YW} again, but with the roles of $Y$ and $W$ reversed,
and using that $W = Q_Y(\F_q)$ (which we have already proved), we deduce $Q_W \circ Q_Y = x^q-x$ also.
\end{proof}
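Corollary~\ref{cor:duality} can be tested concretely with $q=9$, $P=3$, and $W=\F_3$, realizing $\F_9$ as $\F_3[i]/(i^2+1)$ (valid since $-1$ is not a square mod~3). The sketch checks that $Y=Q_W(\F_9)$ is one-dimensional and that $Q_Y$ maps $\F_9$ back onto $W$:

```python
p = 3

def mul(u, v):
    """Multiply in GF(9) = F_3[i]/(i^2 + 1); elements are pairs (a, b) = a + b*i."""
    a, b = u
    c, d = v
    return ((a*c - b*d) % p, (a*d + b*c) % p)

def sub(u, v):
    return ((u[0] - v[0]) % p, (u[1] - v[1]) % p)

F9 = [(a, b) for a in range(3) for b in range(3)]
W = {(a, 0) for a in range(3)}          # W = F_3, a 1-dimensional F_3-subspace

def QW(x):                              # Q_W(x) = x(x - 1)(x - 2) = x^3 - x
    return sub(mul(mul(x, x), x), x)

Y = {QW(x) for x in F9}                 # the image Q_W(F_9)
print(sorted(Y))                        # [(0, 0), (0, 1), (0, 2)]: Y = {0, i, 2i}

def QY(x):                              # Q_Y(x) = product over y in Y of (x - y)
    r = (1, 0)
    for y in Y:
        r = mul(r, sub(x, y))
    return r

print({QY(x) for x in F9} == W)         # True: Q_Y(F_9) = W, as the corollary asserts
```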
\begin{proposition} \label{prop:linPoly} Let $q=P^e$ and let $L(x) = x^{P^d} + \sum_{i=0}^{d-1} a_i x^{P^i}$ be an $\F_P$-additive polynomial,
where $a_i \in \F_q$ and $a_0 \ne 0$. Then all the roots of $L$
are in $\F_q$ if and only if there is an $\F_P$-additive polynomial $M(x) = x^{P^{e-d}} + \sum b_i x^{P^i}\in\F_q[x]$ with $M \circ L(x) = x^q - x$.
In that case, it is also true that $L \circ M(x) = x^q - x$, all the roots of $M$ are in $\F_q$, and $M=Q_Y$ where $Y=L(\F_q)$.
\end{proposition}
\begin{proof} The roots of $L$ in $\cj\F_q$ comprise a $d$-dimensional $\F_P$-vector subspace $W\subset \cj\F_q$. If $W\subset \F_q$, then $L=Q_W$ and the result follows
from Corollary~\ref{cor:duality}. Conversely, if there is an $\F_P$-additive polynomial $M(x)$ satisfying $M\circ L(x) = x^q-x$, then for any root $w\in\cj\F_q$ of $L$ we have
$0 = M\circ L(w) = w^q-w$, showing that $w \in \F_q$. Thus, $L=Q_W$ where $W\subset \F_q$. By Corollary~\ref{cor:duality}, $Q_Y \circ L=x^q-x$, where $Y=Q_W(\F_q)$.
Then $M \circ L(x) = x^q-x = Q_Y \circ L(x)$. Set $z=L(x)$, which is transcendental. Since $M(z)=Q_Y(z)$, $M=Q_Y$.
By Corollary~\ref{cor:duality}, $L \circ M = Q_W \circ Q_Y = x^q-x$.
\end{proof}
\begin{proposition} \label{prop:dim1codim1} Suppose $q=P^e$, where $P$ is a prime power. \\
{\it (i)}\ Let $W$ be a one-dimensional $\F_P$-subspace of $\F_q$, so $W=c\,\F_P$ where $c \in \F_q^\x$. A quotient map is $Q_W(x)=x^P-c^{P-1}x$.
If $\tau \in \F_q$ then $\inv_{Q_W}(\tau) = \textmatrix 1w01$, where $w=c \Tr_{\F_q/\F_P}(\tau/c^P)$. That is, $Q_W(v)=\tau\in\F_q$ implies $v^q-v = c \Tr_{\F_q/\F_P}(\tau/c^P)$.\\
{\it (ii)}\ If $Y$ is an $(e-1)$-dimensional $\F_P$-vector subspace of $\F_q$, then for $\tau \in \F_q$,
$$\inv_{Q_Y}(\tau) = \begin{pmatrix} 1 & {\tau^P-\tau/a_0} \\ 0& 1 \end{pmatrix},\qquad \text{where $a_0 = \prod_{0\ne y \in Y} y$.} $$
That is, $Q_Y(v)=\tau\in\F_q$ implies $v^q-v=\tau^P-\tau/a_0$.
Also, $a_0=1/c^{P-1}$ for some $c \in \F_q^\x$, and $Q_Y(\F_q)=c\,\F_P$.
\end{proposition}
\begin{proof} Corollary~\ref{cor:duality} implies that $W\longleftrightarrow Y$ gives a bijection between the set of all 1-dimensional $\F_P$-vector spaces $W=c\,\F_P$ and the set of all
$(e-1)$-dimensional vector spaces $Y$. In this correspondence, $Y=Q_W(\F_q)$, $W=Q_Y(\F_q)$, and $Q_W\circ Q_Y=Q_Y\circ Q_W=x^q-x$.
Assume $W$ and $Y$ are so related.
First, $Q_W(x)= \prod_{j\in\F_P} (x-jc) = c^P \prod_{j \in \F_P} ((x/c)-j) = c^P ( (x/c)^P - (x/c) ) = x^P - c^{P-1}x$.
If Tr is the polynomial $\sum_{i=0}^{e-1} x^{P^i}$ and $L(x)=c\Tr(x/c^P)$ then
$$L\circ Q_W(x)=c\Tr(x^P/c^P-x/c)=c( (x/c)^q-x/c)=x^q-x,$$
therefore $L=Q_Y$. Since $\inv_{Q_W}(\tau)=\textmatrix 1 {Q_Y(\tau)} 01 $, {\it (i)} follows.
Next, $\inv_{Q_Y}(\tau)=\textmatrix 1{Q_W(\tau)}01$, and $Q_W(\tau)=\tau^P-c^{P-1}\tau$. Let $a_0 = \prod_{0\ne y\in Y} y$. Since $Q_Y(x) = \prod_{y\in Y}(x-y) =
\prod_{y\in Y}(x+y)$, $a_0$ is
the coefficient of $x$ in $Q_Y(x)$. Since $Q_Y=L$, this coefficient is $1/c^{P-1}$. Thus, $Q_W(\tau)=\tau^P-\tau/a_0$. This proves {\it (ii)}.
\end{proof}
\section{Cyclic subgroups of $\PGL_2(\F_q)$} \label{sec:cyclic}
In this section, we find quotient maps and Artin invariants of cyclic subgroups of $G\subset \PGL_2(\F_q)$, and prove that the Artin invariant is equidistributed -- every $\g\in G$
has the same number of regular elements $\tau \in \F_q\cup\{\infty\}$ such that $\inv(\tau)=\g$. We also study
the equation $v^q=\g(v)$ when $\g\in \PGL_2(\F_q)$ and $v \in \cj\F_q\cup\{\infty\}$.
\subsection{Dickson's analysis.}
Cyclic subgroups of $\PSL_2(\F_q)$ were analyzed
by Dickson \cite{Dickson}. We modify his analysis to obtain the cyclic subgroups of $\PGL_2(\F_q)$.
It is useful to introduce the following matrices.
For $\lambda\in \F_{q^2} \setminus \F_q$, define $C_\lambda \in \GL_2(\F_{q^2})$ by
\begin{equation} C_\lambda = \begin{pmatrix} \lambda & -\lambda^q \\ 1 & -1 \end{pmatrix},\qquad {\rm so}\quad
C_\l^{-1} = (\l-\l^q)^{-1} \begin{pmatrix} 1&-\l^q \\ 1 & -\l \end{pmatrix}.
\label{eq:Clambda}
\end{equation}
Then
\begin{equation} C_{\l^q} = \begin{pmatrix} \l^q & -\l \\ 1&-1 \end{pmatrix}
= C_\l \begin{pmatrix} 0&-1 \\-1&0 \end{pmatrix},
\label{eq:Clambdaq} \end{equation}
so $C_{\l^q}(x) = C_\l(1/x)$. Note that $C_\l(\infty)=\l$, $C_\l(0)=\l^q$, $C_\l(1)=\infty$.
If $M \in \cj\F_q^\x\GL_2(\F_q)$ then its order as an element of $\PGL_2(\F_q)$ is
the least $\ell\ge1$ such that $M^\ell$ is a scalar matrix.
\begin{proposition} \label{prop:Dickson}
Suppose that $M \in \GL_2(\F_q)$ and that $M$ has order $\ell > 1$ as an element of $\PGL_2(\F_q)$.
Then exactly one of the following holds: (a)\ $M$ has a unique fixed point $z \in \cj\F_q \cup \{\infty\}$, which is rational over $\F_q$;
(b)\ $M$ has two distinct fixed points $z_1,z_2 \in \cj\F_q \cup \{\infty\}$, which are both rational;
or (c)\ $M$ has two fixed points in $\cj \F_q \cup \{\infty\}$, which
are a conjugate pair $\{\l,\l^q\}$, with $\l \in \F_{q^2} \setminus \F_q$.
If (a) holds, then $\ell=p$, the prime that divides $q$. If $z = \infty$ then
$kM = \textmatrix 1b01$ for some $b,k\in \F_q^\x$. If $z \in \F_q$ then $kM = \a \textmatrix 1 b01 \a^{-1}$ for some $b,k \in \F_q^\x$,
where $\a=\textmatrix z011 \in \PGL_2(\F_q)$.
If (b) holds then $\ell$ divides $q-1$.
Let $\a \in \GL_2(\F_q)$ such that $\a(z_1) = \infty$ and $\a(z_2) = 0$, for example $\a=\textmatrix 1{-z_2}1{-z_1}$.
Then $\a M \a^{-1} = \textmatrix a00d$, where $a,d \in \F_q^\x$
and $a/d$ has order $\ell$.
If (c) holds then $\ell$ divides $q+1$, and there is
$\d\in \F_{q^2}\setminus \F_q$ such that $M = D_{\d,\l}$, where
\begin{equation} D_{\d,\l} = C_\l \textmatrix {\d^q} 0 0 {\delta} C_\l^{-1}. \label{def:Ddel} \end{equation}
Further, $\d^{q-1}$ is a primitive $\ell$th root of unity.
Conversely, if $\d,\l \in \F_{q^2} \setminus \F_q$ and $M=D_{\d,\l}$, then $M \in \GL_2(\F_q)$, the fixed points of $M$ in $\cj\F_q\cup\{\infty\}$ are
$\{\l,\l^q\}$, and the order of $M$ as an element of $\PGL_2(\F_q)$ equals $\circ(\d^{q-1})$, the multiplicative order of $\d^{q-1}$.
\end{proposition}
\begin{proof} Write $M = \textmatrix abcd$. The fixed points of $M$ in $\cj\F_q$ are $z$ such that
$az+b=z(cz+d)$. Note that $\infty$ is a fixed point if and only if $c=0$, and it is the unique fixed point if and only if $c=0$, $a=d$, and $b\ne 0$.
The quadratic either has a single repeated root in $\F_q \cup \{\infty\}$,
two distinct roots in $\F_q \cup \{\infty\}$, or a pair of conjugate roots
$\l,\l^q$ where $\l \in \F_{q^2}\setminus \F_q$. This gives rise to the mutually exclusive cases (a), (b), and (c).
(a)\ Suppose the quadratic equation has a single repeated root $z\in\F_q\cup \{\infty\}$. Let $\a \in \GL_2(\F_q)$ such that $\a(z)=\infty$.
Then $\a M\a^{-1}$ fixes $\infty$ only, so it
is a scalar multiple of $\textmatrix 1b01$, where $b \in \F_q$ and $b\ne 0$. The order of $M$ as an element of $\PGL_2(\F_q)$ is
equal to $p$, the characteristic of $\F_q$.
(b)\ Suppose the quadratic equation has two distinct roots $z_1,z_2 \in \F_q \cup \{\infty\}$. Let $\a\in \GL_2(\F_q)$ such that $\a(z_1)=\infty$ and $\a(z_2)=0$.
Then $\a M\a^{-1}$ fixes 0 and $\infty$, so it has the form $\textmatrix a00d$.
(c)\
Finally, suppose that the quadratic equation has no rational roots. Then $c\ne0$. The roots of the quadratic are a pair $\l,\l^q \in \F_{q^2}$.
$C_\l^{-1} M C_\l$ fixes $\infty$ and $0$, so it has the form $\textmatrix \a 0 0 \d$, where
$\a + \d = \Tr(M)$ and $\a\d = \det(M)$. Thus, $M=C_\l \textmatrix \a 00\d C_\l^{-1}$. Note that $(x-\a)(x-\d) = x^2 -\Tr(M) x + \det(M)$, so $\a$
and $\d$ are either rational, or they form a conjugate pair in $\F_{q^2}$. If they are rational, then by applying the Frobenius to all coefficients we find:
$$M=C_{\l^q} \textmatrix \a 00 \d C_{\l^q}^{-1} = C_\l \textmatrix 0{-1}{-1}0 \textmatrix \a 0 0 \d \textmatrix 0{-1}{-1}0 C_\l^{-1} = C_\l \textmatrix \d 0 0 \a C_\l^{-1},$$
which would imply that $\a = \d$, so that $M$ is a scalar matrix. However, we assumed that $M$ has order $\ell>1$ as an element of $\PGL_2(\F_q)$, so
we obtain a contradiction. We conclude that $\a,\d$ are a conjugate pair in $\F_{q^2}$, \ie, $\d \in \F_{q^2}\setminus \F_q$ and $\a=\d^q$. Then
$$M = C_\l \textmatrix {\d^q} 00 \d C_\l^{-1},\qquad\text{where $\d \in \F_{q^2} \setminus \F_q$.} $$
As an element of $\PGL_2(\cj\F_q)$, $M$ is equivalent to $\d^{-1}M = C_\l \textmatrix \z001 C_\l^{-1}$, where $\z = \delta^{q-1}$.
If $E$ denotes the matrix on the right, then it is clear that $E^i$ is scalar if and only if $\z^i=1$, therefore the order of $M$ in $\PGL_2(\F_q)$
is $\circ(\zeta)$. Since the order of $\d$ divides $q^2-1$, $\ell=\circ(\z)$ divides $q+1$.
Now we prove the final statement of (c). Let $\l,\d \in \F_{q^2}\setminus \F_q$ and let $M=D_{\d,\l}$. Applying the Frobenius, we find that
$$ M^{(q)} = C_{\l^q} \textmatrix \d 00{\d^q} C_{\l^q}^{-1} = C_\l \textmatrix 0{-1}{-1}0 \textmatrix \d 00{\d^q} \textmatrix 0{-1}{-1}0 C_\l^{-1} = M.$$
Thus, $M$ is rational.
\end{proof}
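Dickson's trichotomy lends itself to a brute-force verification. The following Python sketch is an illustrative sanity check only (the choice $q=7$ is ours, and the script is not part of the proof): for every nonscalar element of $\PGL_2(\F_7)$, the number of rational fixed points determines which divisibility condition the order satisfies.

```python
# Brute-force check of Dickson's trichotomy over the prime field F_7
# (illustrative sanity check; the choice q = 7 is hypothetical and the
# script is not part of the proof).
q = 7  # q = p = 7, so case (a) forces order p = 7

def act(M, x):
    """Moebius action of M = (a, b, c, d) on F_q ∪ {∞}, with ∞ encoded as None."""
    a, b, c, d = M
    if x is None:
        return None if c == 0 else (a * pow(c, q - 2, q)) % q
    num, den = (a * x + b) % q, (c * x + d) % q
    if den == 0:
        return None
    return (num * pow(den, q - 2, q)) % q

def mul(M, N):
    a, b, c, d = M
    e, f, g, h = N
    return ((a*e + b*g) % q, (a*f + b*h) % q, (c*e + d*g) % q, (c*f + d*h) % q)

def is_scalar(M):
    a, b, c, d = M
    return b == 0 and c == 0 and a == d

def pgl_order(M):
    """Order of M in PGL_2(F_q): least ell with M^ell scalar."""
    P, ell = M, 1
    while not is_scalar(P):
        P, ell = mul(P, M), ell + 1
    return ell

points = [None] + list(range(q))
for a in range(q):
    for b in range(q):
        for c in range(q):
            for d in range(q):
                M = (a, b, c, d)
                if (a*d - b*c) % q == 0 or is_scalar(M):
                    continue
                fixed = sum(1 for x in points if act(M, x) == x)
                ell = pgl_order(M)
                if fixed == 1:
                    assert ell == q            # case (a): ell = p
                elif fixed == 2:
                    assert (q - 1) % ell == 0  # case (b): ell | q - 1
                else:
                    assert fixed == 0 and (q + 1) % ell == 0  # case (c)
print("trichotomy verified for q =", q)
```

The same loop runs unchanged for any other prime modulus in place of 7.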
If $\z^{q+1} = 1$, define
\begin{equation} E_{\z,\l} = C_\l \textmatrix \z001 C_\l^{-1}. \label{eq:Ezeta} \end{equation}
This is not rational as a matrix, but it is rational as an element of $\PGL_2(\F_q)$, and in fact if $\delta^{q-1}=\z$ (so $\delta^{q^2-1}=1$, \ie,
$\delta \in \F_{q^2}^\x$) then
$$E _{\z,\l} = \delta^{-1} D_{\d,\l} \in \cj\F_q^\x \GL_2(\F_q).$$
In particular, $E_{\z,\l}=D_{\d,\l}$ as elements of $\PGL_2(\F_q)$.
\subsection{Quotient map and Artin invariant of a cyclic group.}
Let $G$ be a cyclic subgroup of $\PGL_2(\F_q)$ of order $\ell>1$ generated by
$M\in\PGL_2(\F_q)$.
Because of Lemma~\ref{lem:conjugateInv}, to understand the Artin invariant of $G$, we may first conjugate by any $\a\in\PGL_2(\F_q)$. By Proposition~\ref{prop:Dickson},
there are three cases:
(a)\quad $M=\textmatrix 1b01$ where $b \in \F_q^\x$ and $\ell=p$.
By Proposition~\ref{prop:dim1codim1}{\it (i)}, a quotient map is $Q_G(x)=x^p-b^{p-1}x$, every $\tau \in \F_q$ is regular, and $\inv(\tau)= \textmatrix 1w01$,
where $w=b\Tr_{\F_q/\F_p}(\tau/b^p)$.
(b)\quad $M=\textmatrix a001$ and $\ell=\circ(a)$.
This is the Kummer case (Example~\ref{example:Kummer}).
A quotient map is $Q(x)=x^\ell$. The irregular elements are 0 and $\infty$,
and for $\tau \in \F_q^\x$, $\inv(\tau) = \textmatrix {\tau^{(q-1)/\ell}}001$.
(c)\quad $M=D_{\d,\l}=E_{\z_\ell,\l}$, where $\l,\d\in\F_{q^2}\setminus\F_q$, $\z_\ell=\d^{q-1}$, $\ell=\circ(\z_\ell)$, and $\ell|(q+1)$. Then the group generated by $M$ is
$$ G_\ell = \{\,E_{\z_\ell^i,\l} : 0\le i<\ell\} = \set{E_{\z,\l} : \z^\ell = 1 }.$$
We will compute a quotient map and Artin invariant for this group.
We begin by finding the short orbits.
\begin{lemma} \label{lem:AppBfixed} If $\zeta^{q+1}=1$ and $\zeta\ne1$ then for $v \in \cj\F_q \cup \{\infty\}$,
$E_{\zeta,\lambda}(v)=v$ if and only if $v \in \{\lambda,\lambda^q\}$. \end{lemma}
\begin{proof} Since $E_{\zeta,\lambda}=C_\lambda \textmatrix{\zeta} 001 C_\lambda^{-1}$ with $C_\l = \textmatrix \l{-\l^q}1{-1}$,
\begin{eqnarray*} E_{\zeta,\lambda}(v)=v &\iff& \zeta C_\lambda^{-1}(v)=C_\lambda^{-1}(v)
\iff C_\lambda^{-1}(v)\in \{0,\infty\} \\
&\iff& v \in \{\lambda,\lambda^q\}.
\end{eqnarray*}
\end{proof}
\begin{lemma} \label{lem:AppBshort} The short $G_\ell$-orbits in $\cj\F_q \cup \{\infty\}$ are $\{\lambda\}$ and $\{\lambda^q\}$.
$\calO_\infty$ consists of $\infty$ together with $\ell-1$ elements of $\F_q$.
\end{lemma}
\begin{proof}
First, $\lambda$ and $\lambda^q$ are each fixed by every element of $G_\ell$ by Lemma~\ref{lem:AppBfixed}, so they form singleton orbits.
The same lemma shows that $\lambda$ and $\lambda^q$ are the only elements of $\cj\F_q$ that are fixed by a nontrivial element of $G_\ell$.
By Lemma~\ref{lem:short} it follows that $\{\lambda\}$ and $\{\lambda^q\}$ are the only short orbits.
For the last statement, note that $E_{\z,\l}(\infty) = D_{\d,\l}(\infty) \in \F_q$ where $\d^{q-1}=\z$, and no element of $\F_q$ is in a short orbit.
Therefore $\calO_\infty \subset \F_q \cup \{\infty\}$ and $|\calO_\infty|=\ell$.
\end{proof}
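The orbit structure described in the two lemmas above can be confirmed numerically in a small case. The sketch below is a hypothetical sanity check (not part of the proof), with our choices $q=3$, $\l=i\in\F_9$ where $i^2=-1$ (irrational since $-1$ is a nonsquare mod 3), and $\ell=4=q+1$; $\F_9$ is modeled as pairs $(x,y)\leftrightarrow x+yi$ and $\infty$ is encoded as \texttt{None}.

```python
# Orbit check for the short orbits of G_ell, in the illustrative case q = 3,
# lambda = i in F_9 (i^2 = -1) and ell = 4 = q + 1.  A hypothetical
# sanity check, not part of the proof.
q = 3

def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % q, (x[0]*y[1] + x[1]*y[0]) % q)
def sub(x, y): return ((x[0] - y[0]) % q, (x[1] - y[1]) % q)
def inv(x):
    n = (x[0]*x[0] + x[1]*x[1]) % q          # norm of x; nonzero for x != 0
    ninv = pow(n, q - 2, q)
    return ((x[0]*ninv) % q, (-x[1]*ninv) % q)

lam, lamq = (0, 1), (0, q - 1)               # lambda = i and lambda^q = -i

def E(zeta, v):
    """Moebius action of E_{zeta,lambda} = C_lam diag(zeta,1) C_lam^{-1}."""
    if v == lam:
        return lam                           # C_lam(infinity) = lambda
    u = (1, 0) if v is None else mul(sub(v, lamq), inv(sub(v, lam)))
    zu = mul(zeta, u)
    if zu == (1, 0):
        return None                          # C_lam(1) = infinity
    # C_lam(z) = (lambda*z - lambda^q)/(z - 1)
    return mul(sub(mul(lam, zu), lamq), inv(sub(zu, (1, 0))))

mu4 = [(1, 0), (0, 1), (q - 1, 0), (0, q - 1)]   # the 4th roots of unity in F_9
for z in mu4:
    assert E(z, lam) == lam and E(z, lamq) == lamq   # singleton short orbits
orbit = {E(z, None) for z in mu4}
assert orbit == {None, (0, 0), (1, 0), (2, 0)}       # orbit of infinity = F_3 + {inf}
print("G_4-orbit of infinity is all of F_3 together with infinity")
```

In this tiny case $\calO_\infty$ exhausts $\F_3\cup\{\infty\}$, as $|\calO_\infty|=\ell=q+1$ forces.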
\begin{proposition} \label{prop:Qell} Let $\ell$ divide $q+1$ and $G_\ell = \set{ E_{\z,\l} : \z^\ell = 1}$.
A quotient map for $G_\ell$ is given by
\begin{equation}
Q_\ell(x)=C_\l \circ [\ell] \circ C_\l^{-1}(x) = \frac{\l(x-\l^q)^\ell - \l^q(x-\l)^\ell}{(x-\l^q)^\ell - (x-\l)^\ell},
\label{eq:Qell}
\end{equation}
where $[\ell]$ denotes the $\ell$th power map: $[\ell](x) = x^\ell$.
\end{proposition}
\begin{proof} Let $Q_\ell=C_\l\circ [\ell]\circ C_\l^{-1}$. Since $[\ell]\circ C_\l^{-1}(x)=\left(\frac{x-\l^q}{x-\l}\right)^\ell$, (\ref{eq:Qell}) holds.
We need to prove that $Q_\ell$ is $G$-invariant, has degree $\ell$, is $\F_q$-rational, and carries $\infty$ to $\infty$.
Note that $[\ell]\circ\textmatrix \z001(x) = (\z x)^\ell=x^\ell = [\ell](x)$ when $\z\in \mu_\ell$. Therefore,
$$
Q_\ell \circ E_{\z,\l} = (C_\l \circ [\ell] \circ C_\l^{-1}) \circ (C_\l \circ \textmatrix \z001 \circ C_\l^{-1}) =Q_\ell.$$
By Lemma~\ref{lem:degf}, $\deg(Q_\ell) = \deg([\ell])=\ell$.
By (\ref{eq:Qell}), the numerator of $Q_\ell$
has degree~$\ell$ and the denominator has degree~$<\ell$. (Alternatively, the degree of the denominator is less than the degree of the numerator
iff $Q_\ell(\infty)=\infty$. We have $Q_\ell(\infty)=C_\l[\ell]C_\l^{-1}(\infty)=C_\l[\ell](1)=C_\l(1)=\infty$.)
To complete the proof that $Q_\ell$ is a quotient map, it remains only to prove rationality.
When Frobenius is applied to the coefficients of the rational function $Q_\ell$, $\l$ and $\l^q$ are exchanged.
Both the numerator and denominator of (\ref{eq:Qell}) are negated, so $Q_\ell$ remains invariant. Alternatively,
since $C_{\l^q}(x)=C_\l(1/x)$, \ie, $C_{\l^q}=C_\l \circ [-1]$, the conjugate of $Q_\ell$ is
$$C_{\l^q} \circ [\ell] \circ C_{\l^q}^{-1} = (C_\l \circ [-1]) \circ [\ell] \circ (C_\l \circ [-1])^{-1} = C_\l \circ [-1] \circ [\ell] \circ [-1] \circ C_\l^{-1} = C_\l \circ [\ell] \circ C_\l^{-1}=Q_\ell.$$
\end{proof}
\begin{lemma} \label{lem:irregQell}
The only irregular elements for $Q_\ell$ are $\l$ and $\l^q$.
In particular, every element of $\F_q\cup\{\infty\}$ is regular with respect to $Q_\ell$.
\end{lemma}
\begin{proof} Recall $\tau$ is irregular iff $Q_\ell^{-1}(\tau)$ is a short orbit for $G_\ell$, and the only short orbits are $\{\l\}$ and
$\{\l^q\}$. Thus, the only irregular elements are $Q_\ell(\l)$ and $Q_\ell(\l^q)$.
By~(\ref{eq:Qell}), $Q_\ell(\l)=\l$ and $Q_\ell(\l^q)=\l^q$.
\end{proof}
\begin{lemma} \label{lem:uv} If $v = C_\l(u)$ then $v^q = C_\l(u^{-q})$.
\end{lemma}
\begin{proof} By (\ref{eq:Clambdaq}), $v^q=C_{\l^q} (u^q) = C_\l\textmatrix 0{-1}{-1}0 (u^q) = C_\l(u^{-q})$.
\end{proof}
\begin{theorem} \label{thm:invQell} If $\tau \in \F_q$, then
$\inv_{Q_\ell}(\tau) = E_{\z,\l}$ where
$\z = \left(\frac{\tau-\l}{\tau-\l^q}\right)^{(q+1)/\ell}$.
\end{theorem}
\begin{proof}
We have $\inv(\tau) = E_{\z,\l}$ for some $\z\in\mu_\ell$, and we must show $\zeta = \left(\frac{\tau-\l}{\tau-\l^q}\right)^{(q+1)/\ell}$.
Let $v\in Q_\ell^{-1}(\tau)$ and $u = C_\l^{-1}(v)$, so that $v^q = E_{\z,\l}(v) = C_\l(\z u)$. Since $v^q = C_\l(u^{-q})$ by Lemma~\ref{lem:uv},
$C_\l(\z u) = C_\l(u^{-q})$, and so $\z = u^{-(q+1)}$.
Since $\tau = Q_\ell(v) = C_\l\left([\ell](u)\right) = C_\l(u^\ell)$, it follows that $u^\ell = C_\l^{-1}(\tau)$. Consequently,
$$\z = u^{-(q+1)} =(u^\ell)^{-(q+1)/\ell} = \left(C_\l^{-1}(\tau)\right)^{-(q+1)/\ell} = \left(\frac{\tau-\l}{\tau-\l^q}\right)^{(q+1)/\ell}.$$
\end{proof}
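The formula of the theorem can be tested numerically. The following sketch is a hypothetical sanity check (not part of the argument), with our choices $q=7$, $\l=i\in\F_{49}$ (where $i^2=-1$; $-1$ is a nonsquare mod 7), and $\ell=8=q+1$, so that $(q+1)/\ell=1$ and equidistribution predicts $N_\g=(q+\kappa)/\ell=1$ for each $\g$.

```python
# Numerical check of the Artin-invariant formula zeta = ((tau-lam)/(tau-lam^q))^{(q+1)/ell}
# in the illustrative case q = 7, lambda = i in F_49, ell = 8 = q + 1.
# A hypothetical sanity check, not part of the proof.
q, ell = 7, 8

def mul(x, y): return ((x[0]*y[0] - x[1]*y[1]) % q, (x[0]*y[1] + x[1]*y[0]) % q)
def sub(x, y): return ((x[0] - y[0]) % q, (x[1] - y[1]) % q)
def inv(x):
    n = (x[0]*x[0] + x[1]*x[1]) % q          # norm; nonzero for x != 0
    ninv = pow(n, q - 2, q)
    return ((x[0]*ninv) % q, (-x[1]*ninv) % q)
def power(x, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, x)
    return r

lam, lamq = (0, 1), (0, q - 1)               # lambda = i, lambda^q = -i
zetas = []
for t in range(q):                           # every tau in F_q is regular here
    tau = (t, 0)
    z = power(mul(sub(tau, lam), inv(sub(tau, lamq))), (q + 1) // ell)
    assert power(z, ell) == (1, 0)           # zeta is an ell-th root of unity
    assert z != (1, 0)                       # zeta = 1 occurs only at tau = infinity
    zetas.append(z)
# tau -> zeta_tau is injective, as equidistribution (N_gamma = 1) predicts:
assert len(set(zetas)) == q
print("Artin-invariant formula consistent for q = 7, ell = 8")
```

Together with $\inv(\infty)=1$, the seven distinct values of $\z_\tau$ exhaust $\mu_8$, one value per group element.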
The next lemmas find a symmetry of $\inv_{Q_\ell}$.
\begin{lemma} \label{lem:QellR} Let
$R = C_\l \textmatrix 0110 C_\l^{-1}$.
Then
\begin{equation} Q_\ell\circ R(x) =
\frac{\l^q (x-\l^q)^\ell -\l(x-\l)^{\ell}}{(x-\l^q)^\ell - (x-\l)^{\ell}},
\label{eq:QellR} \end{equation}
and
\begin{equation} Q_\ell(x) + Q_\ell\circ R(x) = \l + \l^q. \label{eq:QellPlusQellR} \end{equation}
\end{lemma}
\begin{proof}
$R=C_\l \circ \textmatrix 0110 \circ C_\l^{-1} = C_\l \circ [-1] \circ C_\l^{-1}$, therefore
\begin{eqnarray*}
Q_\ell\circ R(x) &=& C_\l\circ[-\ell]\circ C_\l^{-1}(x)= C_\l\left(\left((x-\l)/(x-\l^q)\right)^\ell\right) \\
&=& \frac{\l^q (x-\l^q)^\ell -\l(x-\l)^{\ell}}{(x-\l^q)^\ell - (x-\l)^{\ell}}.
\end{eqnarray*}
A simple computation using (\ref{eq:Qell}) then shows $ Q_\ell(x) + Q_\ell\circ R(x) = \l+\l^q$.
\end{proof}
\begin{lemma} \label{lem:QellSymmetry}
Let $\tau \in \F_q$. If $\inv_{Q_\ell}(\tau) = \g $ then $\inv_{Q_\ell}(\l+\l^q-\tau)=\g^{-1}$. \end{lemma}
\begin{proof} Write $\tau = Q_\ell(v)$. Then $Q_\ell\left(R(v)\right)=\l+\l^q-Q_\ell(v)=\l+\l^q-\tau$. If $\inv_{Q_\ell}(\tau)=\g$ then $v^q = \g(v)$,
so $(R(v))^q = R(v^q) = R \left(\g(v)\right) = \g^{-1}\circ R(v)$. Let $w=R(v)$. Since $Q_\ell(w) = \l+\l^q-\tau$ and $w^q=\g^{-1}(w)$, it
follows that $\inv(\l+\l^q-\tau)= \g^{-1}$.
\end{proof}
\subsection{Equidistribution of the Artin invariant when $G$ is cyclic.}
We show that when $G$ is cyclic, the number of regular $\tau\in\F_q\cup\{\infty\}$ with $\inv(\tau)=\g$ is the same for all $\g \in G$.
If $G$ is a subgroup of $\PGL_2(\F_q)$ and $\g \in G$, let
$$N_{\g,G} = \#\{ \tau\in \F_q\cup\{\infty\}: \text{$\tau$ is regular and $\inv(\tau)=\calC_{\g,G}$}\},$$
{\it i.e.}, the number of $\tau\in \F_q\cup\{\infty\}$ such that $|Q^{-1}(\tau)|=|G|$ and $v^q=\g(v)$ for some $v\in Q^{-1}(\tau)$.
By (\ref{eq:aQb}), $N_{\g,G}$ does not depend on the particular choice of quotient map.
Since $G$ maps $\F_q \cup \{\infty\}$ to itself, this set is a union of $G$-orbits, and
$N_{\textmatrix 1001,G}$ is the number of full-sized $G$-orbits in $\F_q\cup\{\infty\}$.
If $G$ is abelian then $\sum_{\g\in G} N_{\g,G}$ equals the total number of regular elements in $\F_q\cup\{\infty\}$ with respect to any quotient map.
\begin{proposition} \label{prop:cyclicCount}
If $G$ is a cyclic subgroup of $\PGL_2(\F_q)$ of order $\ell\ge2$ and $M$ is a generator of $G$, then $N_{\g,G}=(q+\kappa)/\ell$ for all $\g\in G$, where $\kappa\in\{1,0,-1\}$ and
$1-\kappa$ is the number of $\tau\in\F_q\cup\{\infty\}$ that are fixed by~$M$.
\end{proposition}
\begin{proof} By Lemma~\ref{lem:conjugateInv}, if $G'=\a G \a^{-1}$ where $\a \in \PGL_2(\F_q)$ then $N_{\g,G} = N_{\a \g \a^{-1},G'}$ for all $\g\in G$.
Also, the value $\kappa$ is the same for both $M$ and $\a M \a^{-1}$. Thus, the assertion holds for $G$ iff it holds
for $\a G \a^{-1}$. Proposition~\ref{prop:Dickson} therefore reduces the problem to three cases: (a) $M=\textmatrix 1b01$;
(b) $M=\textmatrix a001$; or (c) $M=E_{\z,\l}$. These three cases correspond to $\kappa=0$, $-1$, and 1, respectively. The short orbits are
$\{\infty\}$ in case~(a); $\{\infty\}$ and $\{0\}$ in case~(b); and $\{\l\}$ and $\{\l^q\}$ in case~(c). In particular, $\F_q\cup\{\infty\}$ contains exactly $1-\kappa$ elements lying in short orbits,
so the number of full-sized orbits in $\F_q\cup\{\infty\}$ is $((q+1)-(1-\kappa))/\ell=(q+\kappa)/\ell$. Thus, $N_{\textmatrix 1001,G} = (q+\kappa)/\ell$.
In particular, $\ell$ divides $q+\kappa$.
Proposition~\ref{prop:main}{\it (v)} implies that $N_\g=(q+\kappa)/\ell$ when $\circ(\g)\ge 3$.
This completes the proof when $\ell$ is odd.
If $\ell$ is even, then $G$ contains exactly one element $\g_2$ of order~2. The irregular elements of $\cj \F_q\cup \{\infty\}$ (with respect to the quotient maps described in
this section) are $\{\infty\}$ in case~(a), $\{0,\infty\}$
in case~(b), and $\{\l,\l^q\}$ in case~(c), so the total number of regular elements in $\F_q\cup \{\infty\}$ is $q+\kappa$.
Then $\sum_{\g\in G} N_\g = q+\kappa$. We have already shown $N_\g=(q+\kappa)/\ell$ when $\g=1$ or $\circ(\g)\ge 3$,
\ie, for all $\g\in G$ with $\g\ne \g_2$. Then $N_\g = (q+\kappa)/\ell$ for $\g=\g_2$ also.
\end{proof}
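Equidistribution is easy to observe in the Kummer case (b). The sketch below is a hypothetical small-scale check (not part of the proof) with our choices $q=13$ and $M=\textmatrix a001$ for $a=3$, which has multiplicative order $\ell=3$; here $\kappa=-1$, $\inv(\tau)=\textmatrix {\tau^{(q-1)/\ell}}001$ for $\tau\in\F_q^\x$, and each $\g\in G$ should be hit $N_\g=(q+\kappa)/\ell=4$ times.

```python
# Equidistribution check in the Kummer case: q = 13, ell = 3, a = 3.
# Each cube root of unity mod 13 should arise as tau^{(q-1)/ell} for exactly
# (q - 1)/ell = 4 values of tau in F_q^x.  (Hypothetical sanity check.)
from collections import Counter

q, ell = 13, 3
counts = Counter(pow(tau, (q - 1) // ell, q) for tau in range(1, q))
# the values are exactly the cube roots of unity mod 13 ...
assert set(counts) == {pow(3, i, q) for i in range(ell)}
# ... and each is attained (q + kappa)/ell = (q - 1)/ell times
assert all(n == (q - 1) // ell for n in counts.values())
print(dict(counts))
```

The two irregular elements $0$ and $\infty$ account for the difference between $q+1$ and $q+\kappa=q-1$.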
\subsection{The equation $v^q=\g(v)$.}
Let $\g=\textmatrix abcd\in\PGL_2(\F_q)$ have order $t$, and consider the equation $v^q=\g(v)$, where $v \in \cj\F_q \cup \{\infty\}$. By convention, we interpret
$\infty^q=\infty$. Denote the solution set by $S_{\g,q}$. Then $\infty\in S_{\g,q}$ iff $c=0$,
and the remaining solutions are roots of the polynomial $f(x)=x^q(cx+d)-(ax+b)$. As shown in the proof of Proposition~\ref{prop:counting}{\it (ii)}, this polynomial has no repeated roots.
Thus, $|S_{\g,q}|=q+1$, where in the case $c=0$ the
``$+1$'' accounts for $v=\infty$. Note that $S_{\g,q}\setminus \{\infty\}$ is the same as the set $A_{\g,q}$ that was studied in Section~\ref{sec:orbits}. For an integer $d\ge 1$ (not to be confused with the matrix entry $d$), let
$$S_{\g,q}^{(d)} = \{v \in S_{\g,q} : \deg_q(v)=d \},$$
where $\deg_q(v)=[\F_q(v):\F_q]$ when $v\in \cj\F_q$ and $\deg_q(\infty)=1$.
\begin{lemma} \label{lem:SgammaConj} If $\a,\g\in \PGL_2(\F_q)$ and $d\ge1$, then $S_{\a \g \a^{-1},q}^{(d)} = \{\a(v) : v \in S_{\g,q}^{(d)} \}$.
\end{lemma}
\begin{proof} If $v\in\cj\F_q\cup\{\infty\}$ and $w=\a(v)$, then $\deg_q(w)=\deg_q(v)$, and
$$v\in S_{\g,q} \iff v^q=\g(v) \iff (\a v)^q=\a\g(v) \iff w^q= \a\g\a^{-1}(w) \iff w \in S_{\a\g\a^{-1},q}.$$
\end{proof}
If $1\ne \g \in \PGL_2(\F_q)$, let
$$Z_\g=\{\a\in\PGL_2(\F_q) : \a \g = \g \a \}.$$
Lemma~\ref{lem:SgammaConj} immediately implies that $Z_\g$ acts on $S_{\g,q}^{(d)}$. The next lemma is well known.
\begin{lemma} \label{lem:Zgamma}
If $1\ne \g \in \PGL_2(\F_q)$, let $1-\kappa$ denote the number of fixed points of $\g$ in $\F_q\cup\{\infty\}$. Then $\kappa \in \{0,1,-1\}$, and
$|Z_\g|=q+\kappa$.
\end{lemma}
\begin{proof} Write $\g\sim\b$ to denote that $\g$ is conjugate to $\b$. By Proposition~\ref{prop:Dickson}, there are three cases: (a)\ $\g \sim \textmatrix 1b01$ for
some $b \in \F_q^\x$ and $\kappa=0$; (b)\ $\g \sim \textmatrix a001$ for some $1\ne a\in\F_q^\x$ and $\kappa=-1$; or (c)\ $\g \sim D_{\d,\l}$ for some $\d,\l \in\F_{q^2}\setminus \F_q$ and
$\kappa=1$.
In case~(a), it is easy to see that $\textmatrix rstu$ commutes with $\textmatrix 1b01$ (in $\PGL_2$, \ie, up to a scalar multiple) iff $t=0$ and $r=u$, therefore
$Z_\g \cong Z_{\textmatrix 1b01}=\{\textmatrix 1 e01 : e \in \F_q\}$, and $|Z_\g|=q$. In case~(b), $\textmatrix rstu$ commutes with $\textmatrix a001$ in $\PGL_2$
iff $s=t=0$, so
$Z_\g\cong Z_{\textmatrix a001} = \{\textmatrix e001 : e \in \F_q^\x\}$,
and $|Z_\g|=q-1$. In case~(c), $C_\l \textmatrix rstu C_\l^{-1}$ commutes with $D_{\d,\l}$ in $\PGL_2$ iff $\textmatrix rstu$
commutes with $\textmatrix {\d^q}00\d$ in $\PGL_2$ iff
$s=t=0$, and $C_\l \textmatrix r00u C_\l^{-1}$ is rational in $\PGL_2$ iff $C_{\l^q} \textmatrix {r^q}00{u^q}C_{\l^q}^{-1} = C_\l \textmatrix {kr} 00{ku} C_\l^{-1}$ with $k\ne0$
iff $\textmatrix {u^q}00{r^q} = \textmatrix {kr}00{ku}$
iff $(u/r)^q=r/u $ iff $u^{-1} C_\l \textmatrix r00u C_\l^{-1} = E_{\z,\l}$ with $\z=r/u \in \mu_{q+1}$. Thus, $Z_\g
= \{E_{\z,\l} : \z^{q+1}=1\}$ and $|Z_\g|=q+1$. In each case, $|Z_\g|=q+\kappa$.
\end{proof}
\begin{proposition} \label{prop:vqgammav} Let $1\ne \g\in\PGL_2(\F_q)$ have order~$t$, and let $\kappa$ be as in Lemma~\ref{lem:Zgamma}. Then
$S_{\g,q}=S_{\g,q}^{(1)} \cup S_{\g,q}^{(t)}$, and $|S_{\g,q}^{(t)}|=q+\kappa$. If $v$ is any element of $S_{\g,q}^{(t)}$, then
$S_{\g,q}^{(t)}=\{z(v) : z \in Z_\g\}$.
\end{proposition}
\begin{proof}
By Lemma~\ref{lem:SgammaConj}, if the proposition holds for $\g$ then it also holds for $\a\g\a^{-1}$, where $\a\in\PGL_2(\F_q)$. Using Proposition~\ref{prop:Dickson},
we may therefore assume that one of the three cases holds: (a)\ $\g=\textmatrix 1b01$, $b\in\F_q^\x$, $\kappa=0$; (b)\ $\g=\textmatrix a001$, $1\ne a \in \F_q^\x$, $\kappa=-1$;
or (c)\ $\g=D_{\d,\l}$ and $\kappa=1$, where $\d,\l\in\F_{q^2}\setminus \F_q$.
In case~(a), $S_{\g,q}$ contains $\{\infty\}$, together with all $v\in\cj\F_q$ such that
$v^q=v+b$.
Since $v^{q^i}=v+ib$, $v$ has exactly $p$ conjugates, where $p$ is the prime dividing~$q$, and so $\deg_q(v)=p=\circ(\g)$.
If $v_0\in\cj\F_q$ is one solution to $v^q=v+b$, then the full solution set is $\{v_0+c : c\in\F_q\}$. Since $Z_\g=\{\textmatrix 1c01 : c \in \F_q\}$, the proposition holds in this case.
In case~(b), $S_{\g,q}$ consists of $0,\infty$, and the nonzero solutions to $v^q=av$. If $v\ne0$, then the distinct conjugates of $v$ are
$v^{q^i}=a^iv$ for $0\le i<\circ(a)$, so $\deg_q(v)=\circ(a)=\circ(\g)$.
If $v_0$ is one nonzero solution, so $v_0^{q-1}=a$, then the others
are $cv_0$ for $c\in\F_q^\x$. Since $Z_\g=\{\textmatrix c001:c\in\F_q^\x\}$, the nonzero solutions form a single $Z_\g$-orbit. This analysis shows that $S_{\g,q}^{(1)}=\{0,\infty\}$ and that the remaining $q-1$ elements of $S_{\g,q}$ comprise a single $Z_\g$-orbit of size $|Z_\g|=q-1$.
In case~(c), we may write $\g=E_{\z_0,\l}$, and $Z_\g = \{E_{\z,\l} : \z^{q+1}=1\}$. As shown in Lemma~\ref{lem:AppBfixed}, if $1\ne \a\in Z_{\g}$ then $\l$ and $\l^q$
are its only fixed points in $\cj \F_q \cup \{\infty\}$. In particular, $\a$ has no fixed points in $\F_q$, so $v^q=v=\a(v)$ has no solutions. Taking $\a=\g$, this implies that
$S_{\g,q}^{(1)}=\emptyset$. If $v\in\{\l,\l^q\}$ then $\g(v)=v\ne v^q$, so $v \not \in S_{\g,q}$.
Now let $v$ be any element of $S_{\g,q}$. We have shown that $v\not\in\F_q\cup\{\infty\}\cup\{\l,\l^q\}$. Thus, $\deg_q(v)>1$ and $E_{\z,\l}(v)\ne v$ for every $E_{\z,\l}\in Z_\g$.
Then the elements $\a(v)$, $\a \in Z_\g$, are distinct. Calling this set $S$, we have $|S|=|Z_\g|=q+1$. Also, $S\subset S_{\g,q}$ by Lemma~\ref{lem:SgammaConj}. Since both sets have cardinality
$q+\kappa=q+1$, $S=S_{\g,q}$.
Since $\a(v)$ are distinct for $\a\in Z_\g$, $\g^i(v)$ are distinct for $0\le i<\circ(\g)$. Then $\deg_q(v)=\circ(\g)$ by Lemma~\ref{lem:degree}.
\end{proof}
\begin{corollary} \label{cor:factoring1} Let $1\ne \g=\textmatrix abcd \in \PGL_2(\F_q)$, let $t=\circ(\g)$, and let $1-\kappa$ be the number of fixed points of $\g$ in $\F_q\cup\{\infty\}$.
Then $\kappa\in \{0,1,-1\}$, and the polynomial $x^q(cx+d)-(ax+b)\in\F_q[x]$
factors into exactly $(q+\kappa)/t$ irreducible polynomials of degree~$t$. The remaining factors are linear.
If $r$ is one irrational root of $f$, then the others are $\{z(r): z\in Z_\g\}$.
\end{corollary}
\begin{proof} The roots of $f$ are the finite elements of $S_{\g,q}$. Each degree-$t$ factor of $f$ corresponds to $t$ conjugate roots in $S_{\g,q}^{(t)}$. The result now follows from
Proposition~\ref{prop:vqgammav}.
\end{proof}
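The factorization pattern of the corollary can be tested directly. The sketch below is a hypothetical consistency check (our choice $q=11$ and a few sample matrices; not part of the proof); it uses the observation that for $x\in\F_q$ one has $x^q=x$, so the roots of $f$ in $\F_q$ are exactly the finite fixed points of $\g$, and the remaining $q+\kappa$ roots fall into degree-$t$ factors.

```python
# Consistency check for the factorization corollary, with q = 11 and a few
# hand-picked sample matrices.  Hypothetical sanity check, not part of the proof.
q = 11

def pgl_order(M):
    a, b, c, d = M
    P, ell = M, 1
    while not (P[1] == 0 and P[2] == 0 and P[0] == P[3]):
        e, f, g, h = P
        P = ((e*a + f*c) % q, (e*b + f*d) % q,
             (g*a + h*c) % q, (g*b + h*d) % q)
        ell += 1
    return ell

for M in [(1, 1, 0, 1), (2, 0, 0, 1), (0, 1, 1, 0), (1, 2, 3, 4)]:
    a, b, c, d = M
    assert (a*d - b*c) % q != 0
    t = pgl_order(M)
    # fixed points of gamma in F_q + {inf}
    fixed = sum(1 for z in range(q) if (c*z*z + (d - a)*z - b) % q == 0)
    fixed += 1 if c == 0 else 0
    kappa = 1 - fixed
    # roots of f(x) = x^q (cx + d) - (ax + b) in F_q, i.e., linear factors of f
    linear = sum(1 for x in range(q)
                 if (pow(x, q, q)*(c*x + d) - (a*x + b)) % q == 0)
    deg_f = q + 1 if c != 0 else q            # the x^{q+1} term vanishes when c = 0
    assert deg_f - linear == q + kappa        # count of irrational roots
    assert (q + kappa) % t == 0               # they form degree-t factors
print("factor structure consistent for q = 11")
```

The sample list mixes the three cases: orders $p$, $q-1$, $2$, and (for the last matrix, which has $\iota=4$) again $p$.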
\section{$G=\PGL_2(\F_q)$.} \label{sec:PGL2}
This section considers the case $G=\PGL_2(\F_q)$, and we prove the statements from the introduction (Example~\ref{example:pgl2}). As shown in (\ref{eq:QGidentity}),
a quotient map is
\begin{equation*} Q(x) = 1 + \frac {x^{q^3} - x } {\left(x^q-x\right)^{q^2-q+1}}
= \frac{(x^{q^2}-x)^{q+1}}{(x^q-x)^{q^2+1}}.\end{equation*}
As usual, we begin by considering short orbits, \ie, orbits of size less than $|G|$. Recall from Section~\ref{sec:orbits} that $|G|=q^3-q$.
\begin{lemma} \label{lem:PGL2Short}
$v \in \cj\F_q$ belongs to a short orbit of $\PGL_2(\F_q)$ if and only if $v \in \F_{q^2}$. The orbit of $\infty$ is $\F_q \cup \{\infty\}$.
\end{lemma}
\begin{proof} $\PGL_2(\F_q)$ maps $\F_{q^2}\cup\{\infty\} $ to itself. Since $q^2+1 < q^2+q \le q(q-1)(q+1)$, each element of $\F_{q^2}$ belongs to a short orbit.
Conversely, all elements of short orbits are in $\F_{q^2} \cup \{\infty\}$ by Lemma~\ref{lem:short}.
For the last statement, we know $\PGL_2(\F_q)$ preserves $\F_q \cup \{\infty\}$. It is a single orbit, because if $a \in \F_q$ then
$\textmatrix a 1 1 0(\infty) = a$.
\end{proof}
\begin{lemma} \label{lem:pgl2Q}
The only irregular elements in $\cj\F_q\cup\{\infty\}$ with respect to $Q$ are~0 and~$\infty$. Their preimages are the short $G$-orbits
\begin{equation} Q^{-1}(\infty)=\F_q\cup\{\infty\},\qquad Q^{-1}(0) = \F_{q^2}\setminus \F_q. \label{eq:deg2Orbit} \end{equation}
\end{lemma}
\begin{proof} By Lemma~\ref{lem:PGL2Short}, the union of the short orbits is $\F_{q^2} \cup \{\infty\}$, and $\calO_\infty=
\F_q \cup \{\infty\}$. The images of the short orbits under $Q$ are the irregular elements. If $v \in\calO_\infty$ then $Q(v)=\infty$,
and if $v \in \F_{q^2}\setminus \F_q$ then $Q(v)=0$.
Then $\infty$ and 0 are the only irregular elements of $\cj\F_q\cup\{\infty\}$, and $\F_{q^2}\cup\{\infty\}$
is the union of exactly two short orbits:
$Q^{-1}(\infty)=\F_q\cup\{\infty\}$ and $Q^{-1}(0)=\F_{q^2}\setminus \F_q$.
\end{proof}
\begin{lemma} \label{lem:ge3}
Let $\tau\in\F_q^\x$. Then $\tau$ is regular, and $\inv(\tau) = \calC_\g$ has the property that $\circ(\gamma)\ge3$.
\end{lemma}
\begin{proof} By Lemma~\ref{lem:pgl2Q}, $\tau$ is regular.
Then $Q^{-1}(\tau)$ is a full-sized orbit of $\PGL_2(\F_q)$, and $\inv(\tau)$ is defined as the unique conjugacy
class $\calC \subset \PGL_2(\F_q)$ such that $v^q = \gamma(v)$ with $\gamma \in \calC$ whenever $v \in Q^{-1}(\tau)$.
Since all elements of $\F_{q^2}$ are in short orbits, $Q^{-1}(\tau)$ misses $\F_{q^2}$, so $\deg_q(v)\ge 3$ for each $v \in Q^{-1}(\tau)$.
By Lemma~\ref{lem:degree}, $\deg_q(v) = \circ(\gamma)$.
Thus, $\inv(\tau) = \calC_\g$ always has the property that $\circ(\g)\ge 3$.
\end{proof}
\begin{lemma} \label{lem:xyz}
If $\beta = \textmatrix abcd \in \PGL_2(K)$ then for any $x,y,z$ in a field containing $K$,
\begin{equation} \beta(x)-\beta(z) = \frac{(ad-bc)(x-z)}{(cx+d)(cz+d)} \label{eq:xz} \end{equation}
\begin{equation} \frac{\beta(x)-\beta(z)}{\beta(y)-\beta(z)} = \frac{(x-z)(cy+d)}{(y-z)(cx+d)} \label{eq:xyz} \end{equation}
\end{lemma}
\begin{proof} This is a straightforward computation. \end{proof}
Recall in (\ref{eq:iota2}) we defined
$$\iota\left(\textmatrix abcd\right) = \frac{e_1}{e_2} + \frac{e_2}{e_1} + 2 = \frac{(a+d)^2}{ad-bc}$$
where $e_1,e_2$ are the roots of the characteristic equation of $\textmatrix abcd$. We noted that $\iota$ is well defined on $\PGL_2(\F_q)$
and is constant on conjugacy classes. Also, recall from Proposition~\ref{prop:BDD} (with $c=1$) that
\begin{equation} \F_q = \{ \z + 1/\z : \z \in \mu_{q-1}\cup \mu_{q+1} \}. \label{eq:zeta} \end{equation}
\begin{theorem} \label{thm:pgl2bijection}
The map $\iota$ induces a bijection between conjugacy classes $\calC_\g$ in $\PGL_2(\F_q)$ such that $\circ(\g)\ge 3$ and
$\F_q^\x$.
Further, $\iota(\gamma)=\tau$ iff $\inv_Q(\tau) = \calC_\g$. That is, $\inv_Q$ is the inverse bijection to $\iota$.
\end{theorem}
\begin{proof}
Let $\gamma\in \PGL_2(\F_q)$ have order $\ell \ge 3$. Then case~(a), (b), or~(c) of Proposition~\ref{prop:Dickson} holds.
In case~(a),
$\calC_\g = \calC_{\textmatrix 1b01}$ where $b\in\F_q^\x$, $\ell =p\ge 3$, and $\iota(\gamma) = 4$.
Let $v \in \cj\F_q$ such that $v^q = \textmatrix 1b01 v = v +b$. Then
$$Q(v)=\frac{(v^{q^2}-v)^{q+1}}{(v^q-v)^{q^2+1}} = \frac{((v+2b)-v )^{q+1}}{((v+b)-v)^{q^2+1}} = \frac{(2b)^2}{b^2}=4= \iota(\g).$$
In case~(b), $\calC_\g = \calC_{\textmatrix a001}$ where $a \in \F_q^\x$ and $\circ(a)\ge 3$.
Let $v \in \cj\F_q^\x$ such that $v^q = \textmatrix a001 v = av $. Then
\begin{eqnarray*} Q(v) &=& \frac{ (v^{q^2}-v)^{q+1}}{(v^q-v)^{q^2+1}} = \frac{ (a^2 v-v)^{q+1}}{(av-v)^{q^2+1}} = \frac{ v^{q+1}(a^2-1)^2}{v^{q^2+1}(a-1)^2} \\
&=& \frac {(a+1)^2}{v^{q^2-q}} = \frac{(a+1)^2}{a^q}=\frac{(a+1)^2}a = \iota(\g).
\end{eqnarray*}
In case~(c), $\g = E_{\z,\l}$ for some $\l \in \F_{q^2} \setminus \F_q$ and $\z$ of order $\ell$. Here, $\iota(\gamma)=\iota(\textmatrix \z 001) = (\z+1)^2/\z$.
Let $v \in \cj\F_q$ such that
$v^q = \g(v)$. Since $\g$ is rational as an element of $\PGL_2(\F_q)$, $v^{q^i} = \g^i(v) = E_{\z^i,\l}(v)$.
Let $u = C_\l^{-1}(v)$. Then $v^{q^i} = E_{\z^i,\l}(v) = C_\l(\z^i u)$. Using Lemma~\ref{lem:xyz},
\begin{eqnarray*}
Q(v) &=& \frac{(v^{q^2}-v)^{q+1}}{(v^q-v)^{q^2+1}} =
\left(\frac{v^{q^3}-v^q}{v^{q^3}-v^{q^2}}\right) \left(\frac{v^{q^2}-v}{v^{q}-v}\right) \\
&=& \left(\frac{C_\l\left(\zeta u\right) - C_\l(\z^3 u)}{C_\l\left(\zeta^2 u \right)-C_\l(\z^3 u)}\right)
\left(\frac{C_\l(\z^2 u) - C_\l(u) } {C_\l(\z u) - C_\l(u) }\right) \\
&=& \frac{(\zeta u - \z^3 u)(\z^2 u - 1)}{(\z^2 u - \z^3u)(\z u - 1)} \frac{(\z^2 u - u)(\z u - 1)}{(\z u - u)(\z^2 u - 1)} \\
&=& \frac{ (\z - \z^3)(\z^2-1)}{(\z^2-\z^3)(\z-1)} = \frac{(\z+1)^2}{\z} = \iota(\g).
\end{eqnarray*}
Combining the three cases, we see that if $v^q = \g(v)$ and $\circ(\g)\ge 3$ then $Q(v)=\iota(\g)$. In each case, $\tau=\iota(\g) \in \F_q^\x$,
so it is regular and $\inv(\tau)$ is defined. Since $Q(v)=\tau$ and $v^q=\g(v)$, $\inv(\tau) = \calC_\g$. We have shown $\inv\circ \iota$ is
the identity on $\{\calC_\g : \circ(\g) \ge 3 \}$.
To prove that $\iota$ and $\inv_Q$ are bijections, it remains to prove that $\iota$ is surjective
from $\{\calC_\g : \circ(\gamma)\ge 3 \}$ onto $\F_q^\x$. Let $\tau \in \F_q^\x$. By (\ref{eq:zeta}), $\tau-2=\z + 1/\z$ where $\z^{q-1}=1$
or $\z^{q+1}=1$. If $\z=1$ then $\tau=4$; in even characteristic $4=0 \not\in \F_q^\x$, so this case does not occur, while in odd characteristic $\tau=4=\iota(\textmatrix 1101)$ and $\circ\left(\textmatrix 1101\right)=p\ge 3$. If $\z=-1$ then $\tau=0 \not\in \F_q^\x$, so this case does not occur either. If $\z^{q-1}=1$ and $\z\not\in \{1,-1\}$ then $\tau = \iota(\g)$ for $\g=\textmatrix \z001$, and $\circ(\g)
\ge 3$. Finally, if $\z^{q+1}=1$ and $\z \not \in \{1,-1\}$ then $\tau = \iota(E_{\z,\l})$ and $\circ(E_{\z,\l})\ge3$. Thus, $\iota$ is surjective and the theorem is proved.
\end{proof}
\begin{corollary} \label{cor:PGL2inv}
Let $\tau \in \F_q$. By (\ref{eq:zeta}), $\tau-2 = \zeta+1/\zeta$, where $\zeta^{q-1}=1$ or $\zeta^{q+1}=1$.
Let $Q$ be the quotient map for $\PGL_2(\F_q)$ given in (\ref{eq:QGidentity}), and let $\inv = \inv_Q$.
\begin{enumerate}
\item[{\it (i)}] If $\zeta = -1$ (so $\tau = 0$) then $Q^{-1}(\tau) = \F_{q^2} \setminus \F_q$, and $\inv(\tau)$ is undefined since this orbit is short.
\item[{\it (ii)}] If $\zeta = 1$ and $1 \ne -1$ (so $\tau = 4$ and $q$ is odd), then $\inv(\tau) = \calC_{\textmatrix 1101}$.
\item[{\it (iii)}] If $\zeta \not \in \{1,-1\}$ and $\zeta^{q-1}=1$, then $\inv(\tau) = \calC_{\textmatrix \z 001}$.
\item[{\it (iv)}] If $\zeta \not \in \{1,-1\}$ and $\zeta^{q+1}=1$, then $\inv(\tau) = \calC_\gamma$, where $\gamma = E_{\z,\lambda}$
for any $\lambda \in \F_{q^2}\setminus \F_q$.
(All such matrices $E_{\z,\lambda}$ belong to the same conjugacy class in $\PGL_2(\F_q)$.)
\end{enumerate}
\end{corollary}
\begin{proof} {\it (i)} was shown in (\ref{eq:deg2Orbit}), {\it (ii)}-{\it (iv)} follow from Theorem~\ref{thm:pgl2bijection}.
\end{proof}
\begin{corollary} \label{cor:order_gamma}
Let $1\ne \g \in \PGL_2(\F_q)$, where $q=p^e$ and $p$ is any prime. \\
{\it (i)} $\iota(\g)=0$ if and only if $\circ(\g)=2$. \\
{\it (ii)} $\iota(\g)=4$ if and only if $\circ(\g)=p$. \\
{\it (iii)} Write $\iota(\g) = \z + 1/\z + 2$, where $\z \in \mu_{q-1}\cup \mu_{q+1}$. If $\z\ne 1$ (equivalently, $\iota(\g)\ne 4$), then $\circ(\g)=\circ(\z)$.
\end{corollary}
\begin{proof} {\it (i)}\ If $\g = \textmatrix abcd$ then $$\g^2=\begin{pmatrix} a^2+bc & b(a+d) \\ c(a+d) & d^2+bc \end{pmatrix}.$$
This matrix is scalar iff $b(a+d)=c(a+d)=a^2-d^2=0$. These equations hold iff $a+d=0$ or $b=c=a-d=0$. The latter is excluded since we assume $\g \ne 1$. Thus $\circ(\g)=2$ iff $a+d=0$, which holds iff $\iota(\g)=(a+d)^2/(ad-bc)=0$.
{\it (ii)} and {\it (iii)}\ Assume first that $\iota(\g)\ne0$ and write $\iota(\g)=\z+1/\z+2$, where $\z\ne -1$.
By {\it (i)}, $\circ(\g)\ge 3$. By Theorem~\ref{thm:pgl2bijection}, if $\iota(\a)=\iota(\g)$ and $\circ(\a)\ge 3$, then $\calC_\g=\calC_{\a}$, so $\circ(\g)=\circ(\a)$.
If $\z=1\ne-1$, then $p$ is odd and $\iota(\g)=4=\iota(\textmatrix 1101)$. Since $\circ(\textmatrix 1101)=p\ge3$, $\circ(\g)=p$.
If $\z \in\mu_{q-1}\setminus \mu_2$ then $\iota(\g)=\iota(\textmatrix \z001)$, and $\circ(\textmatrix \z001)=\circ(\z)\ge3$, so $\circ(\g)=\circ(\z)$. Finally, if $\z \in \mu_{q+1}\setminus \mu_2$
then $\iota(\g)=\iota(E_{\z,\l})$ and $\circ(E_{\z,\l})=\circ(\z)\ge3$, so $\circ(\g)=\circ(\z)$.
The proofs of {\it (ii)} and {\it (iii)} are complete when $\iota(\g)\ne 0$. In {\it (ii)}, $\iota(\g)=0$ iff $4=0$ iff $p=2$. In that case
{\it (ii)} follows from {\it (i)}. In {\it (iii)}, $\iota(\g)=0$ iff $\z=-1$. Since the case $\z=1$ is excluded, $q$ must be odd. Then $\circ(\g)=2$ by {\it (i)}, but also $\circ(\z)=\circ(-1)=2$.
\end{proof}
\begin{corollary} \label{cor:PGL2conjugate}
If $\g,\g' \in \PGL_2(\F_q) \setminus \{1\}$ and $\iota(\g)=\iota(\g')\ne 0$ then $\g$ and $\g'$ are conjugate. In particular, if $\g\ne1$ and $\iota(\g)\ne0$
then $$\calC_\g = \{\a \in \PGL_2(\F_q) : \text{$\a\ne1$ and $\iota(\a)=\iota(\g)$}\}.$$
\end{corollary}
\begin{proof} By Corollary~\ref{cor:order_gamma}{\it (i)}, the hypothesis implies that $\circ(\g)\ge 3$ and $\circ(\g')\ge3$. Then $\iota(\g)=\iota(\g')$ implies $\calC_\g=\calC_{\g'}$
by Theorem~\ref{thm:pgl2bijection}.
\end{proof}
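The last two corollaries admit an exhaustive check over a small field. The sketch below is illustrative only (our choice $q=5$; not part of the proof): it confirms that $\iota(\g)=0$ iff $\circ(\g)=2$, that $\iota(\g)=4$ iff $\circ(\g)=p$, and that nonidentity elements sharing a value $\iota\not\in\{0,4\}$ all have the same order, as conjugacy demands.

```python
# Brute-force check of the iota/order/conjugacy corollaries over F_5.
# Hypothetical sanity check, not part of the proof.
from itertools import product

q = 5
orders_by_iota = {}
for a, b, c, d in product(range(q), repeat=4):
    det = (a*d - b*c) % q
    if det == 0 or (b == 0 and c == 0 and a == d):
        continue                              # skip singular and scalar matrices
    iota = ((a + d)**2 * pow(det, q - 2, q)) % q
    M = (a, b, c, d)
    P, ell = M, 1                             # order in PGL_2(F_q)
    while not (P[1] == 0 and P[2] == 0 and P[0] == P[3]):
        e, f, g, h = P
        P = ((e*a + f*c) % q, (e*b + f*d) % q,
             (g*a + h*c) % q, (g*b + h*d) % q)
        ell += 1
    if iota == 0:
        assert ell == 2                       # iota = 0  <=>  order 2
    elif iota == 4:
        assert ell == q                       # iota = 4  <=>  order p = 5
    else:
        orders_by_iota.setdefault(iota, set()).add(ell)
# each remaining iota value determines a single order (a single class)
assert all(len(s) == 1 for s in orders_by_iota.values())
print({i: min(s) for i, s in sorted(orders_by_iota.items())})
```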
\begin{proposition} \label{prop:factoring} Let $q=p^e$ where $p$ is prime, and let $\textmatrix abcd \in \GL_2(\F_q)$ be a nonscalar matrix.
Let $f(x)=x^q(cx+d)-(ax+b)$, and write $(a+d)^2/(ad-bc)-2=\z+1/\z$ where $\z\in\mu_{q-1}\cup\mu_{q+1}$. \\
{\it (i)}\ If $\z=1$ then $f$ has exactly $p^{e-1}$ irreducible factors of degree~$p$, and the remaining factors are linear. \\
{\it (ii)} If $\z=-1$ and $1\ne-1$ (so $q$ is odd), then $f$ has exactly $(q+\kappa)/2$ irreducible quadratic factors and the remaining factors are linear, where
$\kappa=-\jacobi{-(ad-bc)}q$. \\
{\it (iii)} If $\z\in\mu_{q+\kappa}\setminus \mu_2$, where $\kappa\in\{1,-1\}$, then $f$ has $(q+\kappa)/t$ irreducible factors of degree~$t$ and the remaining factors are linear,
where $t=\circ(\z)$. \\
In each case, if $v$ is one irrational root of $f$, then the others are $\a(v)$ such that $\a \in \PGL_2(\F_q)$ and $\a\textmatrix abcd=\textmatrix abcd\a$.
\end{proposition}
\begin{proof} Let $\g=\textmatrix abcd$, considered as an element of $\PGL_2(\F_q)$, and let $1-\kappa$ denote the number of fixed points of $\g$ in $\F_q\cup\{\infty\}$.
Note that $\iota(\g)=2+\z+1/\z=(\z+1)^2/\z$.
By Corollary~\ref{cor:factoring1}, $f$ has $(q+\kappa)/\circ(\g)$ irreducible factors of degree~$\circ(\g)$, the remaining factors are linear, and the irrational
roots comprise a $Z_\g$-orbit of full size.
So to prove the proposition, it suffices to compute $\circ(\g)$ and $\kappa$ in each of the cases {\it (i)}--{\it (iii)}.
{\it (i)}\ Given that $\z=1$, we must show $\circ(\g)=p$ and $\kappa=0$. Since $\iota(\g)=2+\z+1/\z=4$, $\circ(\g)=p$ by Corollary~\ref{cor:order_gamma}{\it (ii)}.
If $p$ is odd, then $\g\sim\textmatrix 1b01$ by Corollary~\ref{cor:PGL2conjugate},
so $\g$ has a unique fixed point in $\F_q\cup\{\infty\}$ and $\kappa=0$. If $p=2$, then $\iota(\g)=4=0$, so $\g=\textmatrix abc{-a} =\textmatrix abca$. Note that $b$, $c$
cannot both be zero, as otherwise $\g$ would be scalar. It is easy to see that $\g$ has a unique fixed point $(b/c)^{1/2}$ if $c\ne0$, or $\infty$ if $c=0$.
Thus, $1-\kappa=1$ and $\kappa=0$. We have shown $\circ(\g)=p$ and $\kappa=0$ for any $q$, even or odd, as required.
{\it (ii)}\ Given that $\z=-1$ and $p$ is odd, we must show $\circ(\g)=2$ and $\kappa=-\jacobi{-\det(\g)}q$. Since $\iota(\g)=2+\z+1/\z=0$, $\g=\textmatrix abc{-a}$, and
$\circ(\g)=2$ by Corollary~\ref{cor:order_gamma}{\it (i)}.
The fixed points of $\g$ are the roots of $cz^2-2az-b$, and the number of rational roots is $1+\jacobi{4a^2+4bc}q=1+\jacobi{-\det(\g)}q$. Thus, there are $1-\kappa$ fixed points
in $\F_q\cup\{\infty\}$, where $\kappa=-\jacobi{-\det(\g)}q$, as was to be shown.
{\it (iii)}\ The hypothesis is that $\iota(\g)=(\z+1)^2/\z$ where $\z\in\mu_{q-1}\cup\mu_{q+1}$ and $\z^2\ne 1$.
By Corollary~\ref{cor:order_gamma}{\it (iii)}, $\circ(\g)=\circ(\z)$, and by Corollary~\ref{cor:PGL2conjugate},
$\g \sim \textmatrix \z001$ if $\z^{q-1}=1$, and $\g\sim E_{\z,\l}$ if $\z^{q+1}=1$. In the former case, $\kappa=-1$, and in the latter case, $\kappa=1$.
{\it (iii)} now follows from Corollary~\ref{cor:factoring1}.
\end{proof}
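As a quick illustration (again not part of the proof), one can factor $f$ for small matrices over $\F_5$ and compare the factor degrees with the order of $\g$ in $\PGL_2(\F_5)$; the helper names are ad hoc and SymPy is assumed to be available:

```python
# Illustrative check of the proposition for q = p = 5, e = 1.
from sympy import Poly, symbols

x = symbols('x')
q = 5

def pgl2_order(a, b, c, d):
    """Order of the matrix (a b; c d) in PGL_2(F_q), by brute force."""
    A = (1, 0, 0, 1)
    for n in range(1, q * (q + 1) + 1):
        A = ((A[0]*a + A[1]*c) % q, (A[0]*b + A[1]*d) % q,
             (A[2]*a + A[3]*c) % q, (A[2]*b + A[3]*d) % q)
        if A[1] == 0 and A[2] == 0 and A[0] == A[3]:  # scalar matrix
            return n

def factor_degrees(a, b, c, d):
    f = Poly(x**q * (c*x + d) - (a*x + b), x, modulus=q)
    return sorted(g.degree() for g, _ in f.factor_list()[1])

# Case (i): zeta = 1 for gamma = (1 1; 0 1); p^{e-1} = 1 factor of degree p = 5.
assert pgl2_order(1, 1, 0, 1) == 5
assert factor_degrees(1, 1, 0, 1) == [5]     # x^5 - x - 1 is irreducible over F_5

# Case (iii): gamma = (2 0; 0 1); here zeta has order 4 in mu_{q-1},
# so (q - 1)/4 = 1 quartic factor and the remaining factors are linear.
assert pgl2_order(2, 0, 0, 1) == 4
assert factor_degrees(2, 0, 0, 1) == [1, 4]  # x^5 - 2x = x (x^4 - 2)
```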
We conclude this section by proving the second theorem from Example~\ref{example:pgl2}. Let $G=\PGL_2(\F_q)$, $Q_G$ the quotient map given by~(\ref{eq:QGidentity}), $H$ a subgroup of $G$,
and $Q_H$ a quotient map for~$H$.
By Proposition~\ref{prop:H}, there is a unique function $h\in \F_q(x)$ such that $Q_G = h\circ Q_H$.
\begin{theorem} \label{thm:Fq} Let $H\subset \PGL_2(\F_q)$ and $Q_H,h$ be as above. Suppose $\tau \in \F_q$ is regular with respect to $Q_H$ and let $\inv_{Q_H}(\tau,q)=\calC_{\g,H}$.
If $\g=1$ then $h(\tau)=\infty$. If $\g\ne1$ then $h(\tau)=\iota(\g)$.
\end{theorem}
\begin{proof} By Proposition~\ref{prop:Qorbit}, $V=Q_H^{-1}(\tau)$ is an $H$-orbit, and since $\tau$ is regular with respect to $Q_H$, the orbit has full size.
Let $v\in V$. Since $Q_H(v^q)=\tau^q=\tau$, $v^q \in V$, and consequently $v^q=\d(v)$ for a unique $\d \in H$. By~(\ref{eq:invariantFq}), $\inv_{Q_H}(\tau)=\calC_{\d,H}$, therefore
$\d$ is conjugate to $\g$. In particular, $\d$ has the same order as $\g$ and $\iota(\d)=\iota(\g)$.
Note that $Q_G(v)=h(Q_H(v))=h(\tau)$. By (\ref{eq:deg2Orbit}), $Q_G^{-1}(\infty)=\F_q\cup\{\infty\}$ and $Q_G^{-1}(0)=\F_q^2\setminus \F_q$.
First, $\g=1\Rightarrow \d=1\Rightarrow v\in\F_q\Rightarrow h(\tau)=Q_G(v)=\infty$.
Next, suppose $\circ(\g)=2$. Since $v,\g(v)=v^q$ are distinct and $v^{q^2}=\g^2(v)=v$, $v$ belongs to $\F_{q^2}\setminus \F_q$. Then $h(\tau)=Q_G(v)=0$. On the other hand,
$\circ(\g)=2\iff \iota(\g)=0$. So $h(\tau)=\iota(\g)=0$ in this case.
Finally, if $\circ(\g)\ge 3$ then $[\F_q(v):\F_q]=\circ(\d)=\circ(\g)\ge3$ by Lemma~\ref{lem:degree}, so $v \not\in\F_{q^2}$. Then $h(\tau)=Q_G(v)\in\F_q^\x$, so that $h(\tau)$ is regular with respect to $Q_G$.
Lemma~\ref{lem:Hinv} then implies that $\inv_{Q_G}(h(\tau))=
\calC_{\g,G}$. Finally, Theorem~\ref{thm:pgl2bijection} implies $h(\tau)=\iota(\gamma)$.
\end{proof}
\section{$G=\PSL_2(\F_q)$.} \label{sec:PSL2}
The projective special linear group is defined as $\SL_2(\F_q)$ modulo the scalar matrices $\textmatrix a00a\in \SL_2(\F_q)$.
If $\a \in \GL_2(\F_q)$ and $\det(\a)=c^2$ with $c \in \F_q$, then $c^{-1}\a \in \SL_2(\F_q)$, so ($\a$ mod scalars) represents an element of $\PSL_2(\F_q)$.
On the other hand, if $\det(\a)$ is a nonsquare, then it has no scalar rational multiple in $\SL_2(\F_q)$. Thus, there is a short exact sequence
$$ 1 \longrightarrow \PSL_2(\F_q) \longrightarrow \PGL_2(\F_q) \longrightarrow \{\pm 1 \} \longrightarrow 1 $$
where the first map is inclusion and the second map is $\jacobi{\det(\a)}q$. The square-class of the determinant is well defined, because a
scalar matrix has square determinant.
It follows that $[\PGL_2(\F_q):\PSL_2(\F_q)] = |\{\pm1\}|$, which is 1 if $q$ is even and 2 if $q$ is odd. In particular, $\PSL_2(\F_q)=\PGL_2(\F_q)$ when
$q$ is even, and we already studied this group in Section~\ref{sec:PGL2}.
For this reason, in this section we assume $q$ is odd. Then $|\PSL_2(\F_q)| = (1/2)q(q-1)(q+1)$.
Usually we first find short orbits and then find the quotient map. However, for this example it turns out to be easier to do these steps in reverse order.
\begin{proposition} \label{prop:PSL2Q} A quotient map for $\PSL_2(\F_q)$ is
$$Q_S(x) = \frac{(x^{q^2}-x)^{(q+1)/2}}{(x^q-x)^{(q^2+1)/2}}.$$
If $\gamma \in \PGL_2(\F_q)$ then
\begin{equation} Q_S\circ \gamma(x) = \jacobi{\det(\gamma)}q Q_S(x). \label{eq:QSgamma} \end{equation}
\end{proposition}
\begin{proof}
First we prove (\ref{eq:QSgamma}). If the equation holds for $\gamma_1$ and for $\gamma_2$, then it holds for $\gamma_1\circ \gamma_2$ as well, because
$$Q_S\circ (\gamma_1 \circ \gamma_2) = (Q_S \circ \gamma_1) \circ \gamma_2 = \jacobi{\det(\g_1)}q Q_S\circ \g_2 = \jacobi{\det(\g_1)}q \jacobi{\det(\g_2)}q Q_S,$$
and $\jacobi{\det(\g_1)}q \jacobi{\det(\g_2)}q = \jacobi{\det(\g_1) \det(\g_2)}q = \jacobi{\det(\g_1\g_2)}q$.
Thus, it suffices to prove (\ref{eq:QSgamma}) for $\gamma =\textmatrix c001$,
$\gamma = \textmatrix 1b01$, and $\gamma = \textmatrix 0110$, as these generate $\PGL_2(\F_q)$.
If $\gamma = \textmatrix c001$ with $c \in \F_q^\x$ then
$$Q_S(\gamma (x) ) = Q_S(cx) = \frac{\left((cx)^{q^2}-cx\right)^{(q+1)/2}}{\left((cx)^q-cx\right)^{(q^2+1)/2}}
= \frac{c^{(q+1)/2}(x^{q^2}-x)^{(q+1)/2}}{c^{(q^2+1)/2}(x^q-x)^{(q^2+1)/2}}.$$
The right side is $Q_S(x)$ times
\begin{eqnarray*} c^{(q-q^2)/2} &=& (c^q)^{-(q-1)/2} = c^{-(q-1)/2} = c^{q-1} c^{-(q-1)/2} \\
&=& c^{(q-1)/2} = \jacobi cq = \jacobi {\det(\gamma)} q.
\end{eqnarray*}
Thus, $Q_S(\gamma(x)) = \jacobi {\det(\gamma)} q Q_S(x)$.
If $\gamma = \textmatrix 1b01$ with $b\in\F_q$ then
$$Q_S(\gamma(x)) = Q_S(x+b) = \frac{ ((x+b)^{q^2} - (x+b))^{(q+1)/2}}{((x+b)^q-(x+b))^{(q^2+1)/2}} = Q_S(x)$$
and $\det(\gamma)=1$.
Finally, if $\gamma = \textmatrix 0110$ then
$$Q_S(\gamma(x)) = Q_S(1/x) = \frac{ (x^{-q^2} - x^{-1})^{(q+1)/2}}{(x^{-q}-x^{-1})^{(q^2+1)/2}}.$$
Multiply numerator and denominator by $x^{(q^2+1)(q+1)/2}$ to obtain
$$ Q_S(1/x) = \frac{ (x - x^{q^2})^{(q+1)/2}}{(x-x^{q})^{(q^2+1)/2}} = (-1)^{(q-q^2)/2} Q_S(x).$$
Since $q^2 \equiv 1 \pmod 4$, $(-1)^{(q-q^2)/2}=(-1)^{(q-1)/2} = \jacobi{-1}q$.
Noting that $\det(\gamma)=-1$, the result follows.
Since $\det(\gamma)$ is a square for all $\gamma \in \PSL_2(\F_q)$, eq.~(\ref{eq:QSgamma}) shows
that $Q_S \circ \gamma = Q_S$ for all $\gamma \in \PSL_2(\F_q)$.
Note that $Q_S^2=Q_G$, where
$Q_G$ is the quotient map for $\PGL_2(\F_q)$ given by (\ref{eq:QGidentity}). Then
$Q_S(\infty)=\infty$ and $\deg(Q_S)=(1/2)\deg(Q_G)=(1/2)|\PGL_2(\F_q)|=
|\PSL_2(\F_q)|$.
Thus, $Q_S$ is a quotient map for $\PSL_2(\F_q)$.
\end{proof}
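The transformation law~(\ref{eq:QSgamma}) can be checked symbolically for a small field; the sketch below (illustrative only, assuming SymPy) verifies it for $q=3$ and two of the three generators, omitting the inversion $x\mapsto 1/x$ so as to stay with polynomial arithmetic:

```python
# Symbolic check of the transformation law for Q_S with q = 3.
from sympy import Poly, symbols

x = symbols('x')
q = 3
num = Poly((x**(q**2) - x)**((q + 1) // 2), x, modulus=q)
den = Poly((x**q - x)**((q**2 + 1) // 2), x, modulus=q)

def substitute(p, expr):
    """Substitute x -> expr in the polynomial p, over F_q."""
    return Poly(p.as_expr().subs(x, expr), x, modulus=q)

# gamma = diag(2, 1): det = 2 is a nonsquare mod 3, so Q_S(2x) = -Q_S(x),
# i.e., num(2x) * den(x) = -num(x) * den(2x) as polynomials over F_3.
assert substitute(num, 2*x) * den == -(num * substitute(den, 2*x))

# gamma = translation by 1: det = 1, so Q_S(x + 1) = Q_S(x).
assert substitute(num, x + 1) * den == num * substitute(den, x + 1)
```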
\begin{lemma} \label{lem:PSL2short} Let $S= \PSL_2(\F_q)$. \\
{\it (i)}\ $\calO_\infty = \F_q \cup \{\infty\}$, and it has multiplicity $(1/2)(q^2-q)$. \\
{\it (ii)}\ $\F_{q^2} \setminus \F_q$ is an $S$-orbit, and it has multiplicity $(1/2)(q+1)$. \\
{\it (iii)}\ $\calO_\infty$ and $\F_{q^2}\setminus \F_q$ are the only short orbits. \\
{\it (iv)}\ The irregular elements of $\cj\F_q\cup\{\infty\}$ with respect to $Q_S$ are 0 and~$\infty$.\\
{\it (v)}\ If $\tau \in \F_q^\x$ and $\inv_{Q_S}(\tau)=\calC_{\g,S}$ then $\circ(\g)\ge 3$ and $\iota(\g)=\tau^2$, where $\iota(\textmatrix abcd)=(a+d)^2/(ad-bc)$.
\end{lemma}
\begin{proof}
{\it (i)}\ Certainly $\calO_\infty \subset \F_q \cup \{\infty\}$. If $b\in \F_q$ then $b=\textmatrix 1b01 \textmatrix {0\,}{-1}{1\,}0 (\infty)$, so equality holds.
The multiplicity is $|S|/|\calO_\infty|=(1/2)(q^3-q)/(q+1)=(1/2)(q^2-q)$.
{\it (ii)}\ Since $Q_S^2=Q_G$, $Q_S^{-1}(0)=Q_G^{-1}(0)$, which is $\F_{q^2}\setminus \F_q$ by~(\ref{eq:deg2Orbit}).
By Proposition~\ref{prop:Qorbit}, it is an $S$-orbit. The size of the orbit is $q^2-q$,
and the multiplicity is $|S|/(q^2-q)
= (1/2)(q^3-q)/(q^2-q)=(q+1)/2$.
{\it (iii)} Both $\calO_\infty$ and $\F_{q^2}\setminus \F_q$ are short as their multiplicities are greater than~1. There are no other short orbits
by Lemma~\ref{lem:short}.
{\it (iv)} holds because the images of the short orbits under $Q_S$ are $\infty$ and 0.
{\it (v)} Write $\inv_{Q_S}(\tau)=\calC_{\g,S}$ and apply Theorem~\ref{thm:Fq} with $h(x)=x^2$. If $\g$ were the identity, the theorem would give
$h(\tau)=\infty$, and if $\circ(\g)=2$ (equivalently $\iota(\g)=0$), it would give $h(\tau)=\iota(\g)=0$.
However, $h(\tau)=\tau^2\in\F_q^\x$, so it must be that $\circ(\g)\ge 3$. The theorem then gives $\iota(\g)=h(\tau)=\tau^2$.
\end{proof}
\begin{theorem} \label{thm:QSinv} Let $\tau \in \F_q^\x$. Then $\inv_{Q_S}(\tau)=
\calC_{\gamma,S}$, where $\gamma$ is as follows.
\begin{enumerate}
\item[{\it (i)}\ ] If $\tau = 2$ then $\gamma = \textmatrix 1201$.
\item[{\it (ii)}\ ] If $\tau = -2$ then $\gamma = \textmatrix 1{2u}01$, where $u \in \F_q$ and $\jacobi uq=-1$.
\item[{\it (iii)}\ ] If $\tau = \pm(a + 1/a)$ with $a \in \F_q^\x $ and $a^4 \ne 1$ then $\gamma = \textmatrix {a} 0 0 {a^{-1}}$.
\item[{\it (iv)}\ ] If $\tau = \pm(\z+1/\z)$ with $\zeta^{q+1}=1$ and $\zeta^4\ne 1$ then $\gamma =E_{\zeta^2,\lambda}$,
where $\l$ is any element of $\F_{q^2} \setminus \F_q$.
Here $E_{\zeta,\l} $ is defined by (\ref{eq:Ezeta}), and $E_{\zeta^2,\l}=E_{\zeta,\l}^2 \in \PSL_2(\F_q)$.
\end{enumerate}
\end{theorem}
\begin{proof} All elements of $\F_q^\x$ are regular by Lemma~\ref{lem:PSL2short}, so $\inv_{Q_S}(\tau)=\calC_{\g,S}$ is defined.
{\it (i)} and {\it (ii)}\ Let $v$ be a solution to $v^q=v+b$ where $b \in \F_q^\x$. Then
$$Q_S(v) = \frac{(v^{q^2}-v)^{(q+1)/2}}{(v^q-v)^{(q^2+1)/2} }
= \frac{(v + 2b - v)^{(q+1)/2}}{(v + b -v)^{(q^2+1)/2}} = 2^{(q+1)/2} b^{(q-q^2)/2}.$$
Now $b^{(q-q^2)/2} = (b^q)^{(1-q)/2} = b^{(1-q)/2}=b^{q-1}b^{(1-q)/2} = b^{(q-1)/2}$, so
$$Q_S(v)= 2^{(q+1)/2}b^{(q-1)/2} = 2 (2b)^{(q-1)/2} = 2 \jacobi{2b} q.$$
If $b=2$ then $Q_S(v)=2$ and if $b=2u$ then $Q_S(v) = -2$. Since $v^q=v+b=\textmatrix 1b01(v)$, we conclude that $\inv_{Q_S}(2) = \calC_{\textmatrix 1201,S}$ and
$\inv_{Q_S}(-2) = \calC_{\textmatrix 1{2u}01,S}$.
{\it (iii)}\ By Lemma~\ref{lem:PSL2short}{\it (v)}, $\iota(\g)=\tau^2=(a+1/a)^2=\iota(\textmatrix a00{a^{-1}})$ and $\circ(\g)\ge3$. By Corollary~\ref{cor:PGL2conjugate},
$\g$ and $\textmatrix a00{a^{-1}}$ are conjugate in $\PGL_2(\F_q)$,
say $\g=\a \textmatrix a00{a^{-1}} \a^{-1}$. Let $\a' = \a \textmatrix d001$, where $d=1/\det(\a)$. Then $\a' \in \PSL_2(\F_q)$ and
$\g=\a' \textmatrix a00{a^{-1}} (\a')^{-1}$. Thus, $\inv_{Q_S}(\tau)=\calC_{\g,S}=\calC_{\textmatrix a00{a^{-1}},S}$.
{\it (iv)}\ By Lemma~\ref{lem:PSL2short}{\it (v)}, $\iota(\g)=\tau^2=(\z+1/\z)^2=(\z^2+1)^2/\z^2=\iota(E_{\z^2,\l})$ and $\circ(\g)\ge3$. By Corollary~\ref{cor:PGL2conjugate},
$\g$ is conjugate to $E_{\z^2,\l}$ in $\PGL_2(\F_q)$,
say $\g=\a E_{\z^2,\l} \a^{-1}$ where $\a \in \PGL_2(\F_q)$.
Here, $E_{\z^2,\l}=E_{\z,\l}^2\in \PSL_2(\F_q)$.
If $\a\in \PSL_2(\F_q)$ then $\inv_{Q_S}(\tau)=\calC_{\g,S}=\calC_{E_{\z^2,\l},S}$ as required.
If $\a \not\in\PSL_2(\F_q)$ then $\det(\a)$ is
a nonsquare. Let $\d$ be a primitive element of $\F_{q^2}$ and let $D_{\d,\l}=C_\l \textmatrix {\d^q}00\delta C_\l^{-1}$. Then $\det(D_{\d,\l})=\d^{q+1}$ is a primitive
element of $\F_q$, and in particular a nonsquare in $\F_q$.
By Proposition~\ref{prop:Dickson}, its entries are rational, so it belongs to $\PGL_2(\F_q)$. It commutes with $E_{\z^2,\l}$, therefore
$\g=\a' E_{\z^2,\l} (\a')^{-1}$, where $\a'=\a D_{\d,\l}$. Since $\jacobi{\det(\a')}q=\jacobi{\det(\a)} q \jacobi {\det(D_{\d,\l})} q = (-1)\cdot(-1) = 1$, $\a'\in \PSL_2(\F_q)$,
so that $\g$ and $E_{\z^2,\l}$ are in the same conjugacy class of $\PSL_2(\F_q)$.
Then $\inv_{Q_S}(\tau) = \calC_{\g,S} = \calC_{E_{\z^2,\l},S}$ as required.
\end{proof}
\section{Acknowledgements}
The author thanks Xander Faber for reviewing this article and providing some very insightful comments. First, he observed that $\inv(\tau,q)$ is
essentially the Artin map, as explained in the introduction. This led to a change in emphasis, and even a change in the title.
Second, he greatly simplified Section~\ref{sec:cyclic} by finding
a more direct way to compute the quotient map for a cyclic group of order $\ell$ in the case where $\ell|q+1$.
In addition, he gave many other suggestions that greatly improved the exposition.
His collegiality is immensely appreciated.
\section{Introduction}\label{sec:local-background}
Local search is a widely used technique to design efficient algorithms for optimization problems.
Roughly speaking, local search algorithms start with an initial solution and gradually increase the objective value by repeatedly moving the solution to a nearby point.
While this approach leads to effective heuristics for many optimization problems in practice, it is not always easy to provide approximation guarantees on their performance.
In this paper, we propose a generic framework for providing approximation guarantees of local search algorithms for \textit{set function optimization}.
Set function optimization is a problem of finding an (approximately) optimal set from all feasible sets.
Various machine learning tasks have been formulated as set function optimization problems, such as feature selection \citep{Das2011,Elenberg18}, summarization \citep{LB11,BMKK14}, or active learning \citep{HJZL06,GK11}.
A promising approach to analyze local search algorithms for set function optimization is to utilize \textit{submodularity}.
Submodularity \citep{Fujishige2005} is a property of set functions useful for designing efficient algorithms and has been extensively studied.
Existing studies showed that for maximizing a submodular function subject to a certain constraint, local search procedures yield a constant-factor approximate solution to an optimal solution \citep{NWF78,FNW78,LSV10,FNSW11}.
Local search algorithms have been applied to several machine learning tasks that have submodularity \citep{IJB13,Balkanski2016}, but there are many other practical set functions that deviate from submodularity.
To analyze local search algorithms for these problems, we propose a novel analysis framework that can be applied to local search algorithms for non-submodular functions.
The key notion of our framework is a property of set functions, which we call \textit{localizability}.
Intuitively, localizability is a property that implies any local optimum is a good approximation to a global optimal solution.
By utilizing this property, we show that for maximizing a set function with localizability under a certain constraint, simple local search algorithms achieve a good approximation.
\begin{table*}[t]
\vskip 0.15in
\centering
\caption{
Comparison of existing bounds on approximation ratios of local search algorithms, greedy algorithms, and modular approximation for sparse optimization with combinatorial constraints.
The result of \citet{Elenberg2017} is indicated by $\dagger$.
The result of \citet{CFK18} is indicated by $\ddagger$.
$M_s$ and $M_{s,t}$ are restricted smoothness constants and $m_s$ is a restricted strong concavity constant (See \Cref{def:restricted} for details).
$T$ is the number of iterations of local search algorithms and $s$ is the maximum cardinality of feasible solutions.
}
\label{table:sparse-ratio}
\begin{tabular}{lccc}
\toprule
Constraint & Local search & Greedy-based & Modular approx.\\
\midrule
Cardinality & $\frac{m_{2s}^2}{M_{s,2}^2} \left( 1 - \exp\left(-\frac{M_{s,2}T}{sm_{2s}}\right) \right)$ & $1 - \exp\left(- \frac{m_{2s}}{M_{s,1}}\right)$ $\dagger$ & $\frac{m_1 m_{s}}{M_1 M_s}$\\
\hline
Matroid & $\frac{m_{2s}^2}{M_{s,2}^2} \left( 1 - \exp\left(-\frac{M_{s,2}T}{sm_{2s}}\right) \right)$ & $\frac{1}{(1 + \frac{M_{s,1}}{m_s})^2}$ $\ddagger$ & $\frac{m_1 m_{s}}{M_1 M_s}$\\
\hline
\begin{tabular}{@{}c@{}}$p$-Matroid intersection\\or $p$-Exchange systems\end{tabular} & $\frac{1}{p-1+1/q}\frac{m_{2s}^2}{M_{s,2}^2} \left( 1 - \exp\left(-\frac{(p-1+1/q)M_{s,2}T}{sm_{2s}}\right) \right) $ & N/A & $\frac{1}{p-1+1/q}\frac{m_1 m_{s}}{M_1 M_s}-\epsilon$\\
\bottomrule
\end{tabular}
\end{table*}
The main application of our framework is \textit{sparse optimization}.
Sparse optimization is the problem of finding a sparse vector that optimizes a continuous objective function.
It has various applications such as feature selection for sparse regression and structure learning of graphical models.
One approach to sparse optimization, adopted by \citet{JRD16} and \citet{Elenberg2017}, is to reduce it to a set function optimization problem.
We show that localizability of this set function is derived from restricted strong concavity and restricted smoothness of the original objective function, which implies approximation guarantees of local search algorithms.
Furthermore, we devise accelerated variants of our proposed local search algorithms by utilizing the structure of sparse optimization.
An advantage of our approach over existing methods is its applicability to a broader class of combinatorial constraints.
\paragraph{Our contribution.}
In this paper, we propose a new property of set functions called \textit{localizability} and provide a lower bound on the approximation ratio of local search algorithms for maximizing a set function with this property.
Our contribution is summarized as follows.
\begin{itemize}
\item We define localizability of set functions and show that localizability of sparse optimization is derived from restricted strong concavity and restricted smoothness of the original objective function.
\item Under the assumption of localizability, we provide lower bounds on the approximation ratio of a standard local search algorithm under a matroid constraint, $p$-matroid intersection constraint, or $p$-exchange system constraint.
\item For sparse optimization, we propose two accelerated variants of local search algorithms, which we call \textit{semi-oblivious} and \textit{non-oblivious} local search algorithms.
\item We conduct experiments on sparse regression and structure learning of graphical models to confirm the practical efficiency of our accelerated local search algorithms.
\end{itemize}
\subsection{Related Work}
\paragraph{Local search for submodular maximization.}
For monotone submodular maximization, several algorithms have been designed based on local search.
\citet{NWF78} proposed a $1/2$-approximation local search procedure for a cardinality constraint, which they call an \textit{interchange heuristic}.
\citet{FNW78} generalized this result to a single matroid constraint.
For a $p$-matroid intersection constraint, \citet{LSV10} proposed a $(1/p-\epsilon)$-approximation local search algorithm.
\citet{FNSW11} proposed a novel class of constraints called $p$-exchange systems and devised a $(1/p-\epsilon)$-approximation local search algorithm.
\citet{FW14} devised a $(1 - 1/\mathrm{e})$-approximation local search algorithm for a matroid constraint.
Also for non-monotone submodular maximization, constant-factor approximation local search algorithms have been devised \citep{FMV11,LMNS09}.
While local search algorithms for submodular maximization have been studied extensively, there are only a few results on those for non-submodular maximization.
\paragraph{Approximation algorithms for non-submodular function maximization and sparse optimization.}
For non-submodular function maximization, many existing studies have adopted an approach based on \textit{greedy algorithms}.
\citet{Das2011} analyzed greedy algorithms for sparse linear regression by introducing the notion of submodularity ratio.
\citet{Elenberg18} extended their results to sparse optimization problems with restricted strong concavity and restricted smoothness.
\citet{CFK18} showed that the random residual greedy algorithm achieves $(\gamma / (1 + \gamma))^2$-approximation for set function maximization with submodularity ratio $\gamma$ under a single matroid constraint\footnote{They did not specify the subscripts of $\gamma$, but it is not larger than $\min_{i=1,\cdots,s} \gamma_{i-1,s-i}$, where $s$ is the rank of the matroid.}.
We compare our results with existing methods for sparse optimization in \Cref{table:sparse-ratio}.
Note that the results of \citet{Das2011} and \citet{CFK18} hold for any monotone set function whose submodularity ratio is bounded, while we utilize a stronger property derived from restricted strong concavity and restricted smoothness.
\citet{Bian17} analyzed the greedy algorithm for maximizing a set function whose submodularity ratio and generalized curvature are both bounded.
\citet{Sakaue19} considered sparse optimization with a constraint expressed by a monotone set function with bounded superadditivity ratio and restricted inverse curvature, but their framework cannot deal with matroid constraints and $p$-exchange system constraints.
\citet{FS18} developed a similar analysis to ours, but their focus lies in a different problem called dictionary selection.
The Frank-Wolfe algorithm~\citep{FW56,Jaggi13} is a continuous optimization method that is often applied to sparse optimization.
A variant called the \textit{pairwise Frank-Wolfe algorithm}~\citep{LJ15} incorporates the technique of moving weight between two atoms at each iteration, which is similar to our local search procedures, but their guarantees are incomparable to ours.
\paragraph{Modular approximation for sparse optimization.}
For sparse optimization with structured constraints, there exists a trivial benchmark called \textit{modular approximation} \citep{CK11}.
Modular approximation maximizes a linear function that approximates the original set function by ignoring all interactions between elements.
We provide a detailed description and analysis of modular approximation in \Cref{sec:modular-approximation}.
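As a concrete illustration (a minimal sketch; all names are ad hoc), under a cardinality constraint the modular surrogate is maximized by simply taking the $s$ elements with the largest singleton values; for a general matroid one would instead feed these weights to the matroid greedy algorithm:

```python
# A minimal sketch of modular approximation under a cardinality constraint
# (a special case of a matroid). Illustrative only.

def modular_approximation(f, ground_set, s):
    """Maximize the additive surrogate X -> sum_{v in X} f({v}) over |X| <= s."""
    weights = {v: f({v}) for v in ground_set}
    return set(sorted(ground_set, key=lambda v: weights[v], reverse=True)[:s])

# Toy coverage function: f(X) = number of items covered by the sets chosen in X.
covers = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1}}
f = lambda X: len(set().union(*(covers[v] for v in X))) if X else 0
X_mod = modular_approximation(f, range(4), 2)  # picks the two largest singletons
```

The surrogate ignores the overlap between elements 0 and 1, which is exactly the interaction that the localizability analysis accounts for.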
\paragraph{Sparse recovery.}
Many existing studies on sparse optimization focus on sparse recovery guarantees, which cannot be compared directly with our guarantees on approximation ratios.
In the context of compressed sensing, several algorithms similar to our non-oblivious local search algorithms have been developed \citep{NT10,BRB13}.
\citet{KC12} and \citet{BLSGB16} developed frameworks that can be applied to a matroid constraint.
For structure learning of graphical models, \citet{JJR11} provided sparse recovery guarantees for the forward-backward greedy algorithm by assuming restricted strong concavity and restricted smoothness.
Recently, algorithms with recovery guarantees under weaker assumptions have been developed \citep{Bresler15,KM17,WSD19}.
\subsection{Organization}
The rest of this paper is organized as follows.
\Cref{sec:local-setting} specifies the problem settings that we tackle in this paper.
\Cref{sec:local-approximate} introduces the notion of localizability and shows localizability of sparse optimization.
In \Cref{sec:local-algorithms}, we propose local search algorithms for a matroid constraint, $p$-matroid intersection constraint, or $p$-exchange system constraint.
In \Cref{sec:local-acceleration}, we devise accelerated local search algorithms for sparse optimization.
In \Cref{sec:local-applications}, we describe applications of our problem settings: sparse regression and structure learning of graphical models.
In \Cref{sec:local-experiments}, we empirically compare our proposed algorithms with existing methods.
Due to space constraints, we defer all proofs to the appendix.
\section{Problem Setting}\label{sec:local-setting}
In this section, we introduce the problem settings that we deal with in this paper.
\paragraph{Set function maximization.}
Let $N \coloneqq [n]$ be the ground set and define a non-negative set function $f \colon 2^N \to \mathbb{R}_{\ge 0}$.
Throughout the paper, we assume $f$ is monotone, i.e., $f(X) \le f(Y)$ holds for any $X \subseteq Y \subseteq N$.
We say $f$ is submodular when $f(S \cup \{v\}) - f(S) \ge f(T \cup \{v\}) - f(T)$ for any $S \subseteq T \subseteq N$ and $v \in N \setminus T$.
Let $\mathcal{I} \subseteq 2^N$ be a set family that represents all feasible solutions.
We assume $(N, \mathcal{I})$ is an independence system, that is, $\emptyset \in \mathcal{I}$ and $X \in \mathcal{I}$ for any $X \subseteq Y$ such that $Y \in \mathcal{I}$.
A set function maximization problem can be written as
\begin{equation}\label{eq:set-max}
\text{Maximize} \quad f(X) \qquad \text{subject to} \quad X \in \mathcal{I}.
\end{equation}
We assume access to a value oracle and an independence oracle, which, for any input $X \subseteq N$, return the value $f(X)$ and a Boolean indicating whether $X \in \mathcal{I}$, respectively.
We consider three classes of independence systems: matroid constraints, $p$-matroid intersection, and $p$-exchange systems, which include structures that appear in applications.
A standard setting of sparse optimization where $\mathcal{I} = \{ X \subseteq N \mid |X| \le s \}$ is a special case of matroid constraints.
\begin{definition}[{Matroids}]
An independence system $(N, \mathcal{I})$ is called a \textit{matroid} if for any $S, T \in \mathcal{I}$ with $|S| < |T|$, there exists $v \in T \setminus S$ such that $S \cup \{v\} \in \mathcal{I}$.
\end{definition}
\begin{definition}[{$p$-Matroid intersection}]
An independence system $(N, \mathcal{I})$ is a \textit{$p$-matroid intersection} if there exist $p$ matroids $(N, \mathcal{I}_1), \cdots, (N, \mathcal{I}_p)$ such that $\mathcal{I} = \bigcap_{i=1}^p \mathcal{I}_i$.
\end{definition}
\begin{definition}[{$p$-Exchange systems~\citep{FNSW11}}]
An independence system $(N, \mathcal{I})$ is a \textit{$p$-exchange system} if for any $S, T \in \mathcal{I}$, there exists a map $\varphi \colon (T \setminus S) \to 2^{S \setminus T}$ such that (a) for any $v \in T \setminus S$, it holds that $|\varphi(v)| \le p$, (b) each $u \in S \setminus T$ appears in $(\varphi(v))_{v \in T \setminus S}$ at most $p$ times, and (c) for any $X \subseteq T \setminus S$, it holds that $(S \setminus \bigcup_{v \in X} \varphi(v)) \cup X \in \mathcal{I}$.
\end{definition}
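To fix ideas, the single-swap special case of local search under a cardinality constraint (the interchange heuristic of \citet{NWF78}) can be sketched as follows; this is an illustrative sketch with ad-hoc names, while the algorithms analyzed later use more general exchange neighborhoods:

```python
# Single-swap local search under a cardinality constraint |X| <= s.

def local_search(f, ground_set, s, max_iter=1000):
    X = set(list(ground_set)[:s])              # arbitrary feasible start
    for _ in range(max_iter):
        best_gain, best_swap = 0, None
        for v in X:
            for w in set(ground_set) - X:
                gain = f(X - {v} | {w}) - f(X)
                if gain > best_gain:           # strictly improving swap
                    best_gain, best_swap = gain, (v, w)
        if best_swap is None:                  # no improving swap: local optimum
            return X
        v, w = best_swap
        X = X - {v} | {w}
    return X

# Toy monotone objective: coverage of items by the chosen sets.
covers = {0: {1, 2}, 1: {1, 2}, 2: {3}, 3: {4, 5}}
f = lambda X: len(set().union(*(covers[v] for v in X))) if X else 0
X = local_search(f, range(4), 2)               # a local (here also global) optimum
```

Localizability is precisely the property that bounds how far such a locally optimal $X$ can be from the global optimum.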
\paragraph{Sparse optimization.}
Sparse optimization is the problem of finding a sparse solution that maximizes a continuously differentiable function $u \colon \mathbb{R}^n \to \mathbb{R}$.
We assume access to zeroth- and first-order oracles that return the value $u(\mathbf{w})$ and the gradient $\nabla u(\mathbf{w})$ for a given $\mathbf{w} \in \mathbb{R}^n$.
To define the approximation ratio properly, we need to assume $u(\mathbf{0}) \ge 0$, but we can normalize any function $u' \colon \mathbb{R}^n \to \mathbb{R}$ by setting $u(\mathbf{w}) \coloneqq u'(\mathbf{w}) - u'(\mathbf{0})$.
Let $N = [n]$ be the set of all variables and $\mathcal{I} \subseteq 2^N$ a family of feasible supports.
We can write a sparse optimization problem with structured constraints as
\begin{equation}
\text{Maximize} \quad u(\mathbf{w}) \qquad \text{subject to} \quad \mathrm{supp}( \mathbf{w} ) \in \mathcal{I},
\end{equation}
where $\mathrm{supp}(\mathbf{w})$ represents the set of non-zero elements of $\mathbf{w}$, that is, $\mathrm{supp}(\mathbf{w}) = \{ i \in N \mid \mathbf{w}_i \neq 0 \}$.
We define $\| \mathbf{w} \|_0 = |\mathrm{supp}(\mathbf{w})|$.
We assume restricted strong concavity and restricted smoothness of the objective function $u$, which are defined as follows.
\begin{definition}[{Restricted strong concavity and restricted smoothness~\citep{NRWY12,JTK14}}]\label{def:restricted}
Let $\Omega$ be a subset of $\mathbb{R}^d \times \mathbb{R}^d$ and $u \colon \mathbb{R}^d \to \mathbb{R}$ be a continuously differentiable function.
We say that $u$ is \emph{restricted strongly concave} with parameter $m_\Omega$ and \emph{restricted smooth} with parameter $M_\Omega$ on domain $\Omega$ if
\begin{align*}
- \frac{m_\Omega}{2} \| \mathbf{y} - \mathbf{x} \|_2^2
&\ge u(\mathbf{y}) - u(\mathbf{x}) - \langle \nabla u (\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle\\
&\ge - \frac{M_\Omega}{2} \| \mathbf{y} - \mathbf{x} \|^2_2
\end{align*}
for all $(\mathbf{x}, \mathbf{y}) \in \Omega$.
\end{definition}
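As a numerical sanity check of this definition (a sketch under the assumption of a least-squares objective $u(\mathbf{w}) = -\tfrac{1}{2}\|\mathbf{A}\mathbf{w} - \mathbf{y}\|_2^2$, for which the two parameters over $s$-sparse directions reduce to extremal eigenvalues of sub-Gram matrices; all names ours):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_obs, n, s = 30, 6, 2
A = rng.standard_normal((n_obs, n))
y = rng.standard_normal(n_obs)

def u(w):
    return -0.5 * np.sum((A @ w - y) ** 2)

def grad_u(w):
    return -A.T @ (A @ w - y)

# For this quadratic u, u(z) - u(x) - <grad u(x), z - x> = -||A d||^2 / 2 with d = z - x,
# so RSC/RSM parameters over s-sparse directions are the extremal eigenvalues of the
# sub-Gram matrices A_T^T A_T over supports T with |T| = s.
eigs = [np.linalg.eigvalsh(A[:, list(T)].T @ A[:, list(T)])
        for T in itertools.combinations(range(n), s)]
m_s, M_s = min(e.min() for e in eigs), max(e.max() for e in eigs)

for _ in range(100):  # verify the two inequalities on random s-sparse pairs
    T = list(rng.choice(n, size=s, replace=False))
    x = np.zeros(n); x[T] = rng.standard_normal(s)
    z = np.zeros(n); z[T] = rng.standard_normal(s)
    d = z - x
    gap = u(z) - u(x) - grad_u(x) @ d
    assert gap <= -0.5 * m_s * (d @ d) + 1e-9   # restricted strong concavity
    assert gap >= -0.5 * M_s * (d @ d) - 1e-9   # restricted smoothness
```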
Let $\Omega_{s} = \{ (\mathbf{x}, \mathbf{y}) \in \mathbb{R}^n \times \mathbb{R}^n \mid \| \mathbf{x} \|_0 \le s, \| \mathbf{y} \|_0 \le s, \| \mathbf{x} - \mathbf{y} \|_0 \le s \}$ and $\Omega_{s,t} = \{ (\mathbf{x}, \mathbf{y}) \in \mathbb{R}^n \times \mathbb{R}^n \mid \| \mathbf{x} \|_0 \le s , ~ \| \mathbf{y} \|_0 \le s, ~ \| \mathbf{x} - \mathbf{y} \|_0 \le t \}$.
Let $m_s$ denote the restricted strong concavity parameter on $\Omega_s$ and $M_{s,t}$ the restricted smoothness parameter on $\Omega_{s,t}$ for positive integers $s, t \in \mathbb{Z}_{> 0}$.
Due to the restricted strong concavity of $u$, $\mathrm{argmax}_{\mathrm{supp}(\mathbf{w}) \subseteq X} u(\mathbf{w})$ is uniquely determined.
We denote this maximizer by $\mathbf{w}^{(X)}$.
By introducing a set function $f \colon 2^{N} \to \mathbb{R}_{\ge 0}$ defined as
\begin{equation*}
f(X) = \max_{\mathrm{supp}(\mathbf{w}) \subseteq X} u(\mathbf{w}),
\end{equation*}
we can regard the sparse optimization problem as a set function optimization problem \eqref{eq:set-max}.
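For instance (an illustrative sketch assuming a least-squares objective normalized so that $u(\mathbf{0}) = 0$; not the paper's implementation), $f(X)$ can be evaluated in closed form by solving the restricted least-squares problem on the columns indexed by $X$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 8))
y = rng.standard_normal(20)

def u(w):
    # Normalized so that u(0) = 0, as required for the approximation ratio.
    return 0.5 * np.sum(y ** 2) - 0.5 * np.sum((A @ w - y) ** 2)

def f(X):
    """f(X) = max over w with supp(w) in X of u(w), via least squares on A[:, X]."""
    if not X:
        return 0.0
    cols = sorted(X)
    w_X, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
    w = np.zeros(A.shape[1])
    w[cols] = w_X
    return u(w)

assert f(set()) == 0.0
assert f({0}) >= 0.0 and f({0, 1}) >= f({0})   # non-negativity and monotonicity
```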
\paragraph{Notations.}
Vectors are denoted by bold lower-case letters (e.g. $\mathbf{x}$ and $\mathbf{y}$) and matrices are denoted by bold upper-case letters (e.g. $\mathbf{A}$ and $\mathbf{B}$).
Sets are denoted by upper-case letters (e.g. $X$ and $Y$).
For $X \subseteq N$ and $a \in N$, we define $X + a \coloneqq X \cup \{a\}$ and $X - a \coloneqq X \setminus \{a\}$.
We define the symmetric difference by $X \triangle Y \coloneqq (X \setminus Y) \cup (Y \setminus X)$.
\section{Localizability of Set Functions}\label{sec:local-approximate}
In this section, we define a property of set functions, which we call \textit{localizability}.
Since the general version of the definition of localizability is complicated, we first introduce a simplified definition as a warm-up, and then state the general definition.
Intuitively, localizability measures how much small modifications of a solution increase the objective value.
Localizability requires that the sum of the increases yielded by small modifications be no less than the increase yielded by a single large modification.
\begin{definition}[{Localizability (simplified version)}]\label{def:localizability}
Let $f \colon 2^N \to \mathbb{R}_{\ge 0}$ be a non-negative monotone set function.
For some $\alpha, \beta \in \mathbb{R}_{\ge 0}$, we say $f$ is \textit{$(\alpha, \beta)$-localizable with size $s$} if for arbitrary subsets $X, X^* \subseteq N$ of size $s$ and bijection $\phi \colon X \setminus X^* \to X^* \setminus X$, we have
\begin{equation*}
\sum_{x \in X \setminus X^*} \left\{ f(X - x + \phi(x)) - f(X) \right\} \ge \alpha f(X^*) - \beta f(X).
\end{equation*}
\end{definition}
This property is sufficient to provide an approximation guarantee of local search algorithms for matroid constraints.
However, we need a generalized version of localizability that considers exchanges of multiple elements to deal with more complicated constraints.
\begin{definition}[{Localizability}]\label{def:localizability-general}
Let $f \colon 2^N \to \mathbb{R}_{\ge 0}$ be a non-negative monotone set function.
For some $\alpha, \beta_1, \beta_2 \in \mathbb{R}_{\ge 0}$, we say $f$ is \textit{$(\alpha, \beta_1, \beta_2)$-localizable with size $s$ and exchange size $t$} if for arbitrary subsets $X, X^* \subseteq N$ of size at most $s$ and any collection $\mathcal{P}$ of subsets of $X \triangle X^*$ such that $|P| \le t$ for each $P \in \mathcal{P}$, we have
\begin{equation*}
\sum_{P \in \mathcal{P}} \left\{ f(X \triangle P) - f(X) \right\} \ge \alpha k f(X^*) - ( \beta_1 \ell + \beta_2 k) f(X),
\end{equation*}
where $k$ and $\ell$ are positive integers such that each element in $X^* \setminus X$ appears at least $k$ times in $\mathcal{P}$ and each element in $X \setminus X^*$ appears at most $\ell$ times in $\mathcal{P}$.
\end{definition}
If we consider the case when $k=1$ and $\ell=1$, this definition coincides with the simplified version with $\beta = \beta_1 + \beta_2$.
In existing studies on submodular maximization, \citet{LSV10} and \citet{FNSW11} utilized this property of linear functions and non-negative monotone submodular functions to prove the approximation bounds for local search algorithms.
\begin{proposition}[{Proved in the proof of Lemma 3.1 of \citet{LSV10}}]
Any linear function is $(1, 1, 0)$-localizable and any non-negative monotone submodular function is $(1, 1, 1)$-localizable with any size and any exchange size.
\end{proposition}
In the following proposition, we show that the set function derived from sparse optimization satisfies localizability under the restricted strong concavity and restricted smoothness assumption.
\begin{proposition}\label{lem:feature-exchange}
Suppose $u \colon \mathbb{R}^n \to \mathbb{R}$ is a continuously differentiable function with $u(\mathbf{0}) \ge 0$.
Let $s,t \in \mathbb{Z}_{\ge 0}$ be arbitrary integers.
Assume $u$ is restricted strongly concave on $\Omega_{2s}$ and restricted smooth on $\Omega_{s,t}$.
If $f \colon 2^N \to \mathbb{R}$ is a set function defined as $f(X) = \max_{\mathrm{supp}(\mathbf{w}) \subseteq X} u(\mathbf{w})$, then $f$ is $\displaystyle \left( \frac{m_{2s}}{M_{s,t}}, \frac{M_{s,t}}{m_{2s}}, 0 \right)$-localizable with size $s$ and exchange size $t$.
\end{proposition}
\section{Local Search Algorithms}\label{sec:local-algorithms}
In this section, we describe our proposed local search algorithms for a matroid constraint, a $p$-matroid intersection constraint, and a $p$-exchange system constraint, and provide approximation ratio bounds in terms of localizability.
\subsection{Algorithms for a Matroid Constraint}\label{sec:local-matroid}
Here, we describe our proposed algorithm for a matroid constraint.
The algorithm starts with an initial solution, which is any base of the given matroid.
The main procedure of the algorithm is to repeatedly improve the solution by replacing an element in the solution with another element.
At each iteration, the algorithm seeks a pair of an element $x \in X$ and an element $x' \in N \setminus X$ that maximizes $f(X - x + x')$ while maintaining feasibility, which requires $\mathrm{O}(sn)$ oracle calls.
The detailed description of the algorithm is given in \Cref{alg:matroid-anytime}.
\begin{algorithm}[t]
\caption{Local search algorithms for a matroid constraint}\label{alg:matroid-anytime}
\begin{algorithmic}[1]
\STATE Let $X \gets \emptyset$.
\STATE Add arbitrary elements to $X$ until $X$ is maximal in $\mathcal{I}$.
\FOR{$i = 1,\cdots,T$}
\STATE Find the pair of $x \in X$ and $x' \in N \setminus X$ such that $\displaystyle (x, x') \in \mathrm{argmax} \{ f(X - x + x') \mid X - x + x' \in \mathcal{I} \}$
\IF{$f(X - x + x') - f(X) > 0$}
\STATE Update the solution $X \gets X - x + x'$.
\ELSE
\STATE \textbf{return} $X$.
\ENDIF
\ENDFOR
\STATE \textbf{return} $X$.
\end{algorithmic}
\end{algorithm}
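A minimal Python sketch of this local search (our own illustrative implementation, instantiated with a modular objective and a uniform matroid; any independence oracle can be plugged in) could look like:

```python
import itertools

def local_search_matroid(f, ground, is_independent, T=100):
    """Oblivious local search sketch: greedily complete a base, then swap-improve."""
    X = set()
    for a in sorted(ground):                 # build an arbitrary maximal independent set
        if is_independent(X | {a}):
            X.add(a)
    for _ in range(T):
        best, best_val = None, f(X)
        for x, xp in itertools.product(sorted(X), sorted(ground - X)):
            Y = (X - {x}) | {xp}
            if is_independent(Y) and f(Y) > best_val:
                best, best_val = Y, f(Y)
        if best is None:                     # no improving swap: local optimum
            return X
        X = best
    return X

# Toy check: a modular objective under a rank-2 uniform matroid.
weights = {0: 5.0, 1: 1.0, 2: 4.0, 3: 2.0, 4: 3.0}
sol = local_search_matroid(lambda X: sum(weights[i] for i in X),
                           set(weights), lambda S: len(S) <= 2)
assert sol == {0, 2}
```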
We can provide an approximation guarantee in terms of localizability of the objective function as follows.
\begin{theorem}\label{thm:matroid-anytime}
Suppose $\mathcal{I}$ is the independence set family of a matroid and $s = \max \{ |X| \mid X \in \mathcal{I} \}$.
Assume the objective function $f$ is non-negative, monotone, and $(\alpha, \beta_1, \beta_2)$-localizable with size $s$ and exchange size $2$.
If $X$ is the solution obtained by executing $T$ iterations of \Cref{alg:matroid-anytime} and $X^*$ is an optimal solution, then we have
\begin{equation*}
f(X) \ge \frac{\alpha}{\beta_1 + \beta_2} \left( 1 - \exp\left( - \frac{(\beta_1 + \beta_2) T}{s} \right) \right) f(X^*).
\end{equation*}
If $X$ is the output returned by \Cref{alg:matroid-anytime} when it stops by finding no pair to improve the solution, then we have
\begin{equation*}
f(X) \ge \frac{\alpha}{\beta_1 + \beta_2} f(X^*).
\end{equation*}
\end{theorem}
\subsection{Algorithms for $p$-Matroid Intersection and $p$-Exchange System Constraints}\label{sec:local-intersection}
In this section, we consider two more general constraints, $p$-matroid intersection and $p$-exchange system constraints with $p \ge 2$.
The proposed algorithms for these two constraints can be described as almost the same procedure by using the different definitions of $q$-reachability as follows.
\begin{definition}[{$q$-Reachability for $p$-matroid intersection~\citep{LSV10}}]\label{def:reachability-intersection}
Let $\mathcal{I} \subseteq 2^N$ be a $p$-matroid intersection.
A feasible solution $T \in \mathcal{I}$ is $q$-reachable from $S \in \mathcal{I}$ if $|T \setminus S| \le 2q$ and $|S \setminus T| \le 2pq$.
\end{definition}
\begin{definition}[{$q$-Reachability for $p$-exchange systems~\citep{FNSW11}}]\label{def:reachability-system}
Let $\mathcal{I} \subseteq 2^N$ be a $p$-exchange system.
A feasible solution $T \in \mathcal{I}$ is $q$-reachable from $S \in \mathcal{I}$ if $|T \setminus S| \le q$ and $|S \setminus T| \le pq - q + 1$.
\end{definition}
We denote by $\mathcal{F}_q(X)$ the set of all $q$-reachable sets from $X$ that is determined by each definition of $q$-reachability for $p$-matroid intersection or $p$-exchange systems.
First, we must choose a parameter $q \in \mathbb{Z}_{\ge 1}$ that determines the neighborhood to be searched at each iteration.
Selecting a larger $q$ means searching a larger solution space for improvement at each iteration; thus we obtain a better bound on the approximation ratio, but the number of oracle calls, $n^{\mathrm{O}(q)}$, grows as well.
The initial solution of the proposed algorithms is any feasible solution.
Then the algorithms repeatedly replace the solution with a $q$-reachable solution that increases the objective value the most.
The detailed description of this local search algorithm is given in \Cref{alg:system-anytime}.
\begin{algorithm}[t]
\caption{Local search algorithms for a $p$-matroid intersection or $p$-exchange system constraint ($p \ge 2$)}\label{alg:system-anytime}
\begin{algorithmic}[1]
\STATE Let $X \gets \emptyset$.
\FOR{$i = 1,\cdots,T$}
\STATE Find $\displaystyle X' \in \mathrm{argmax}_{X' \in \mathcal{F}_q(X)} f(X')$.
\IF{$f(X') - f(X) > 0$}
\STATE Update the solution $X \gets X'$.
\ELSE
\STATE \textbf{return} $X$.
\ENDIF
\ENDFOR
\STATE \textbf{return} $X$.
\end{algorithmic}
\end{algorithm}
We can provide an approximation ratio bound under the assumption of localizability of the objective function as follows.
\begin{theorem}\label{thm:system-anytime}
Suppose $\mathcal{I}$ is the independence set family of a $p$-matroid intersection or $p$-exchange system and $s = \max \{ |X| \mid X \in \mathcal{I} \}$.
Let $t = 2p(q+1)$ for the $p$-matroid intersection case and $t = pq+1$ for the $p$-exchange system case.
Assume the objective function $f$ is non-negative, monotone, and $(\alpha, \beta_1, \beta_2)$-localizable with size $s$ and exchange size $t$.
If $X$ is the output obtained by executing $T$ iterations of \Cref{alg:system-anytime} with parameter $q$ and $X^*$ is an optimal solution, then the approximation ratio is lower-bounded by
\begin{equation*}
\frac{\alpha \left( 1 - \exp\left( - \frac{(\beta_1 (p - 1 + 1/q) + \beta_2 ) T}{s}\right) \right)}{\beta_1 (p - 1 + 1/q) + \beta_2}.
\end{equation*}
If $X$ is the output returned by \Cref{alg:system-anytime} when it stops by finding no better $q$-reachable solution, then we have
\begin{equation*}
f(X) \ge \frac{\alpha}{\beta_1 (p - 1 + 1/q) + \beta_2} f(X^*).
\end{equation*}
\end{theorem}
\section{Acceleration for Sparse Optimization}\label{sec:local-acceleration}
We consider two accelerated variants of the proposed local search algorithms in the case of sparse optimization.
To distinguish the original one from the accelerated variants, we call \Cref{alg:matroid-anytime} and \Cref{alg:system-anytime} \textit{oblivious local search algorithms}.
\subsection{Acceleration for a Matroid Constraint}
The oblivious version computes the value of $f(X - x + x')$ for $\mathrm{O}(sn)$ pairs of $(x, x')$ at each iteration.
We can reduce the computational cost by utilizing the structure of sparse optimization.
The first variant is the \textit{semi-oblivious} local search algorithm.
For each element $x' \in N \setminus X$ to be added, it computes the value of $f(X - x + x')$ only for $x \in X$ with the smallest $(\mathbf{w}^{(X)})^2_x$ among those satisfying $X - x + x' \in \mathcal{I}$.
Thus, we can reduce the number of times we compute the value of $f(X - x + x')$ from $\mathrm{O}(sn)$ to $\mathrm{O}(n)$.
The second variant is the \textit{non-oblivious} local search algorithm.
It uses the value of
\begin{equation*}
\frac{1}{2M_{s,2}} \left( \nabla u(\mathbf{w}^{(X)}) \right)^2_{x'} - \frac{M_{s,2}}{2} \left(\mathbf{w}^{(X)}\right)_{x}^2
\end{equation*}
in place of the increase of the objective function $f(X - x + x') - f(X)$.
We need to evaluate $\nabla u (\mathbf{w}^{(X)})$ and $\mathbf{w}^{(X)}$ at the beginning of each iteration, but it is not necessary to compute the value of $f(X - x + x')$.
Detailed descriptions of these algorithms are given in \Cref{alg:matroid-anytime-full} in \Cref{sec:full-pseudocodes}.
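The non-oblivious surrogate score is cheap to evaluate once $\nabla u(\mathbf{w}^{(X)})$ and $\mathbf{w}^{(X)}$ are available; a small sketch (the array values below are hypothetical, and $M$ stands for $M_{s,2}$ or an upper bound on it):

```python
import numpy as np

def swap_score(grad, w, add_idx, remove_idx, M):
    """Surrogate gain for swapping remove_idx out and add_idx in, with no f-evaluation.

    grad: gradient of u at the current restricted maximizer w^(X);
    M:    (an upper bound on) the restricted smoothness parameter M_{s,2}.
    """
    return grad[add_idx] ** 2 / (2.0 * M) - (M / 2.0) * w[remove_idx] ** 2

grad = np.array([0.0, 2.0, 0.0, 4.0])   # hypothetical gradient at w^(X)
w = np.array([1.0, 0.0, 0.5, 0.0])      # hypothetical maximizer w^(X) with X = {0, 2}
# Swapping element 2 out for element 3 scores higher than swapping 2 out for 1:
assert swap_score(grad, w, add_idx=3, remove_idx=2, M=2.0) > \
       swap_score(grad, w, add_idx=1, remove_idx=2, M=2.0)
```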
\begin{theorem}\label{thm:matroid-sparse}
Suppose $f(X) = \max_{\mathrm{supp}(\mathbf{w}) \subseteq X} u(\mathbf{w})$ and $\mathcal{I}$ is the independence set family of a matroid.
If $X$ is the solution obtained by executing $T$ iterations of the semi-oblivious or non-oblivious local search algorithms and $X^*$ is an optimal solution, then we have
\begin{equation*}
f(X) \ge \frac{m_{2s}^2}{M_{s,2}^2} \left( 1 - \exp\left( - \frac{M_{s,2} T}{s m_{2s}} \right) \right) f(X^*),
\end{equation*}
where $s = \max \{|X| \colon X \in \mathcal{I} \}$.
If $X$ is the output returned when the algorithm stops by finding no pair to improve the solution, then we have
\begin{equation*}
f(X) \ge \frac{m_{2s}^2}{M_{s,2}^2} f(X^*).
\end{equation*}
\end{theorem}
\subsection{Acceleration for $p$-Matroid Intersection and $p$-Exchange System Constraints}
Similarly to the case of matroid constraints, we can develop the semi-oblivious and non-oblivious local search algorithms for $p$-matroid intersection and $p$-exchange system constraints.
For each candidate set of elements to add, the semi-oblivious variant checks only the $X' \in \mathcal{F}_q(X)$ that minimizes $\left\| \left( \mathbf{w}^{(X)} \right)_{X \setminus X'} \right\|^2$ among all $X'' \in \mathcal{F}_q(X)$ such that $X'' \setminus X = X' \setminus X$.
The non-oblivious version selects the solution $X' \in \mathcal{F}_q(X)$ that maximizes
\begin{equation*}
\frac{1}{2M_{s,t}} \left\| \left( \nabla u(\mathbf{w}^{(X)}) \right)_{X' \setminus X} \right\|^2 - \frac{M_{s,t}}{2} \left\| \left(\mathbf{w}^{(X)}\right)_{X \setminus X'} \right\|^2.
\end{equation*}
Detailed descriptions of these algorithms are given in \Cref{alg:system-anytime-full} in \Cref{sec:full-pseudocodes}.
While the oblivious local search algorithm generally requires $O(n^q)$ evaluations of $f$ to find the most suitable exchange at each iteration, the non-oblivious local search reduces this step to a linear function maximization problem.
In several cases such as a partition matroid constraint or a matching constraint, we can find the most suitable exchange in time polynomial in $n$ and $q$ by using standard techniques of combinatorial optimization.
We can provide the same approximation guarantees for these accelerated variants as for the oblivious variant.
\begin{theorem}\label{thm:system-sparse}
Suppose $f(X) = \max_{\mathrm{supp}(\mathbf{w}) \subseteq X} u(\mathbf{w})$ and $\mathcal{I}$ is the independence set family of a $p$-matroid intersection or $p$-exchange system.
Let $t = 2p(q+1)$ for the $p$-matroid intersection case and $t = pq+1$ for the $p$-exchange system case.
If $X$ is the output obtained by executing $T$ iterations of the semi-oblivious or non-oblivious local search algorithms with parameter $q$ and $X^*$ is an optimal solution, then its approximation ratio is lower-bounded by
\begin{equation*}
\frac{1}{p - 1 + 1/q} \frac{m_{2s}^2}{M_{s,t}^2} \left( 1 - \exp\left( - \frac{(p - 1 + 1/q) M_{s,t} T}{s m_{2s}} \right) \right),
\end{equation*}
where $s = \max \{|X| \colon X \in \mathcal{I} \}$.
If $X$ is the output returned when the algorithm stops by finding no better $q$-reachable solution, then we have
\begin{equation*}
f(X) \ge \frac{1}{p - 1 + 1/q} \frac{m_{2s}^2}{M_{s,t}^2} f(X^*).
\end{equation*}
\end{theorem}
\begin{remark}
We also develop another version of our local search algorithms that increases the objective value at a predetermined rate, which is described in \Cref{sec:local-geometric}.
\end{remark}
\begin{remark}
The parameter $M_{s,t}$ used in the non-oblivious variant can be replaced with an upper bound on $M_{s,t}$, which yields the corresponding approximation ratio bounds with $M_{s,t}$ replaced by that upper bound.
\end{remark}
\section{Applications}\label{sec:local-applications}
In this section, we provide two applications of our framework: feature selection for sparse regression and structure learning of graphical models.
\subsection{Feature Selection for Sparse Regression}
In sparse regression, given a design matrix $\mathbf{A} \in \mathbb{R}^{n \times d}$ and a response vector $\mathbf{y}$, we aim to find a sparse vector $\mathbf{w} \in \mathbb{R}^n$ that optimizes some criterion.
We can formulate this problem as a sparse optimization problem of maximizing $u(\mathbf{w})$ subject to $|\mathrm{supp}(\mathbf{w})| \le s$, where $u \colon \mathbb{R}^{n} \to \mathbb{R}$ is the criterion determined by $\mathbf{A}$ and $\mathbf{y}$.
\citet{Das2011} devised approximation algorithms in the case where $u$ is the squared multiple correlation $R^2$, i.e., $u(\mathbf{w}) \coloneqq 1 - \| \mathbf{y} - \mathbf{A} \mathbf{w} \|_2^2 / \| \mathbf{y} \|_2^2$, and \citet{Elenberg18} extended their results to general objectives with restricted strong concavity and restricted smoothness.
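As an illustration (a minimal sketch; variable names are ours), the $R^2$ objective and its built-in normalization $u(\mathbf{0}) = 0$ can be checked as:

```python
import numpy as np

def r_squared(A, y, w):
    """u(w) = 1 - ||y - A w||_2^2 / ||y||_2^2; note u(0) = 0, as the framework requires."""
    return 1.0 - np.sum((y - A @ w) ** 2) / np.sum(y ** 2)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
w_true = np.array([1.0, -2.0, 0.0, 0.0])
y = A @ w_true                               # noiseless response for this check

assert r_squared(A, y, np.zeros(4)) == 0.0   # u(0) = 0
assert abs(r_squared(A, y, w_true) - 1.0) < 1e-12   # perfect fit gives R^2 = 1
```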
Here we consider sparse regression with \textit{structured constraints}.
In practical scenarios, we often have prior knowledge of relationships among features and can improve the quality of the estimation by incorporating it into structured constraints \citep{BCDH10,Huang2009}.
We formulate sparse regression with structured constraints as the problem of maximizing $u(\mathbf{w})$ subject to $\mathrm{supp}(\mathbf{w}) \in \mathcal{I}$, where $\mathcal{I}$ is the set family of feasible supports.
An advantage of our local search framework is its applicability to a broad class of structured constraints, including matroid constraints.
For example, the following constraint is a special case of matroid constraints.
Suppose the set of features are partitioned into several categories.
Due to a balance among categories, it is often the case that we should select almost the equal number of features from each category.
Such a constraint can be expressed by using a partition matroid.
Partition matroid constraints were used for multi-level subsampling by \citet{BLSGB16} and detecting splice sites in precursor messenger RNAs by \citet{CFK18}.
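A partition matroid independence oracle is straightforward to implement; the following sketch (names ours) encodes the per-category capacity constraint and can be plugged into any of the local search algorithms above:

```python
from collections import Counter

def partition_matroid_oracle(categories, capacities):
    """Independence oracle: at most capacities[c] features may be chosen from category c.

    categories: dict feature -> category; capacities: dict category -> int.
    """
    def is_independent(S):
        counts = Counter(categories[i] for i in S)
        return all(counts[c] <= capacities[c] for c in counts)
    return is_independent

cats = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
indep = partition_matroid_oracle(cats, {'a': 1, 'b': 2})
assert indep({0, 2, 3})        # one from 'a', two from 'b': feasible
assert not indep({0, 1})       # two from 'a': violates capacity 1
```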
If there are multiple matroid constraints, we can formulate them as a $p$-matroid intersection constraint.
To our knowledge, our proposed algorithms are the first to cope with multiple matroid constraints.
\subsection{Structure Learning of Graphical Models}
Undirected graphical models, or Markov random fields, express the conditional dependence relationships among random variables.
We consider the problem of estimating the graph structure of an undirected graphical model given samples generated from this probability distribution.
The goal of this problem is to restore the set of edges, that is, the set of all conditionally dependent pairs.
To obtain a more interpretable graphical model, we often impose a sparsity constraint on the set of edges.
This task can be formulated as a sparse optimization problem.
While most existing methods solve the neighborhood estimation problem separately for each vertex under a sparsity constraint \citep{JJR11,KM17}, our framework provides an optimization method that handles the sparsity constraints for all vertices simultaneously.
Suppose we aim to maximize some likelihood function (e.g., pseudo-log-likelihood \citep{Besag75}) under the sparsity constraint on each vertex, i.e., the degree of each vertex is at most $b$, where $b \in \mathbb{Z}_{\ge 0}$ is the maximum degree.
This degree constraint is called a $b$-matching constraint, which is a special case of $2$-exchange system.
Hence, we can apply our local search algorithms to this problem.
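The $b$-matching feasibility check itself is simple; a sketch (names ours) with edges as vertex pairs:

```python
from collections import Counter

def is_b_matching(edges, b):
    """An edge set is a b-matching if every vertex has degree at most b."""
    deg = Counter(v for e in edges for v in e)
    return all(d <= b for d in deg.values())

assert is_b_matching([(0, 1), (1, 2)], b=2)
assert not is_b_matching([(0, 1), (1, 2), (1, 3)], b=2)   # vertex 1 has degree 3
```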
\section{Experiments}\label{sec:local-experiments}
In this section, we conduct experiments on two applications: sparse regression and structure learning of graphical models.
All the algorithms are implemented in Python 3.6.
We conduct the experiments on a machine with an Intel Xeon E3-1225 V2 (3.20 GHz, 4 cores) and 16 GB RAM.
\begin{figure*}[t]
\centering
\subfigure[regression, time, $n = 200$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/regression_n200_time.pdf}\label{fig:regression_n200_time}
}
\subfigure[regression, objective, $n = 200$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/regression_n200.pdf}\label{fig:regression_n200}
}
\subfigure[regression, objective, $n = 1000$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/regression_n1000.pdf}\label{fig:regression_n1000}
}
\subfigure[graphical, time, $|V| = 10$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/graphical_n10_time.pdf}\label{fig:graphical_n10_time}
}
\subfigure[graphical, objective, $|V| = 10$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/graphical_n10.pdf}\label{fig:graphical_n10}
}
\subfigure[graphical, objective, $|V| = 20$]{
\includegraphics[width=0.3\textwidth]{./figures_local_search/graphical_n20.pdf}\label{fig:graphical_n20}
}
\caption{
The experimental results on sparse linear regression under a partition matroid constraint (\ref{fig:regression_n200_time}, \ref{fig:regression_n200}, and \ref{fig:regression_n1000}) and structure learning of graphical models under a $b$-matching constraint (\ref{fig:graphical_n10_time}, \ref{fig:graphical_n10}, and \ref{fig:graphical_n20}).
\ref{fig:regression_n200_time} shows the running time in the case where $n = 200$.
\ref{fig:regression_n200} and \ref{fig:regression_n1000} show the objective value ($R^2$) in the case where $n = 200$ and $n = 1000$, respectively.
\ref{fig:graphical_n10_time} shows the running time in the case where $|V| = 10$.
\ref{fig:graphical_n10} and \ref{fig:graphical_n20} show the ratio between the objective achieved by the algorithms and the optimal objective value in the case where $|V| = 10$ and $|V| = 20$, respectively.
}
\label{fig:local_search_regression}
\end{figure*}
\subsection{Experiments on Sparse Regression}
\paragraph{Datasets.}
We generate synthetic datasets with a partition matroid constraint.
First, we determine the design matrix $\mathbf{A} \in \mathbb{R}^{n \times d}$ by generating each of its entries according to the uniform distribution on $[0, 1]$.
Then we normalize each column so that its mean is $0$ and its standard deviation is $1$.
Suppose the set of all features is partitioned into $n_c$ equal-size categories.
We randomly select a sparse subset $S^*$ by selecting $n_p$ parameters from each category.
The response vector is determined by $\mathbf{y} = \mathbf{A}_{S^*} \mathbf{w} + \epsilon$, where $\mathbf{w}$ is a random vector generated from the standard normal distribution $\mathcal{N}(0, 1)$ and $\epsilon$ is a noise vector each of whose elements is generated from $\mathcal{N}(0, 0.2)$.
We consider two settings with different parameters.
We set $(n, d, n_c, n_p) = (200, 50, 5, 5)$ in one setting and $(n, d, n_c, n_p) = (1000, 100, 10, 10)$ in the other setting.
We use $R^2$ as the objective function to be maximized.
For each parameter, we conduct $10$ trials and plot the average.
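The data generation described above can be sketched as follows (our own illustrative code; we read $\mathcal{N}(0, 0.2)$ as a standard deviation of $0.2$, which is an assumption, and the helper name is ours):

```python
import numpy as np

def make_dataset(n, d, n_c, n_p, rng):
    """Synthetic sparse-regression data with a partition matroid ground truth (sketch)."""
    A = rng.uniform(0.0, 1.0, size=(n, d))
    A = (A - A.mean(axis=0)) / A.std(axis=0)   # column-wise standardization
    cats = np.arange(d) % n_c                  # d features split into n_c equal categories
    S = np.concatenate([rng.choice(np.flatnonzero(cats == c), n_p, replace=False)
                        for c in range(n_c)])  # n_p true features per category
    w = rng.standard_normal(len(S))
    y = A[:, S] @ w + rng.normal(0.0, 0.2, size=n)
    return A, y, S

A, y, S = make_dataset(200, 50, 5, 5, np.random.default_rng(0))
assert A.shape == (200, 50) and len(S) == 25
assert np.allclose(A.mean(axis=0), 0.0) and np.allclose(A.std(axis=0), 1.0)
```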
\paragraph{Methods.}
We implement the oblivious, semi-oblivious, and non-oblivious local search algorithms.
As benchmarks, we implement the random residual greedy algorithm \citep{CFK18} and modular approximation.
We apply these methods to a partition matroid constraint with capacity $n'_p$ for each $n'_p \in \{1,\cdots,10\}$.
\paragraph{Results.}
First, we compare the proposed methods in the case of $n = 200$ (\Cref{fig:regression_n200_time} and \Cref{fig:regression_n200}).
We can observe that the non-oblivious variant is approximately $10$ times faster than the oblivious variant, while achieving an objective value comparable to that of the oblivious variant.
In comparison to the random residual greedy algorithm, the non-oblivious variant achieves a higher objective value in a similar running time.
Modular approximation is considerably faster than the other methods, but the quality of its solution is poor.
Next, we conduct experiments on larger datasets with $n = 1000$ (\Cref{fig:regression_n1000}).
The oblivious and semi-oblivious local search algorithms cannot be applied to this setting due to their slow running time.
Moreover, in this setting, we can observe that the non-oblivious variant outperforms the benchmarks.
\subsection{Experiments on Structure Learning of Graphical Models}
\paragraph{Datasets.}
We consider Ising models $G = (V, E)$ with parameters $(w_{uv})_{u, v \in V}$.
First, we generate the true graphical model randomly from the configuration model with degree $d$ for all vertices.
For each edge $(u, v) \in E$, we set the parameter $w_{uv} = +0.5$ or $w_{uv} = -0.5$ uniformly at random.
We synthetically generate $100$ samples by Gibbs sampling from this Ising model.
We consider two settings with different parameters.
We set $(|V|, d) = (10, 5)$ in one setting and $(|V|, d) = (20, 7)$ in the other setting.
We consider the problem of maximizing the pseudo-log-likelihood.
For each setting, we conduct $10$ trials and plot the average.
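The Gibbs sampler for the Ising model can be sketched as follows (our own illustrative code, using the standard single-site conditional $P(x_v = +1 \mid x_{-v}) = \sigma(2\sum_u w_{vu} x_u)$; names and burn-in length are our choices):

```python
import numpy as np

def gibbs_sample_ising(W, n_samples, burn_in=100, rng=None):
    """Gibbs sampler for an Ising model with symmetric coupling matrix W (zero diagonal)."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    x = rng.choice([-1, 1], size=n)
    out = []
    for it in range(burn_in + n_samples):
        for v in range(n):
            field = W[v] @ x - W[v, v] * x[v]          # local field, excluding self
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[v] = 1 if rng.random() < p_plus else -1
        if it >= burn_in:
            out.append(x.copy())
    return np.array(out)

W = np.array([[0.0, 0.5], [0.5, 0.0]])
draws = gibbs_sample_ising(W, n_samples=500, rng=np.random.default_rng(0))
assert draws.shape == (500, 2)
# Positive coupling: the two spins agree more often than not.
assert (draws[:, 0] == draws[:, 1]).mean() > 0.5
```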
\paragraph{Methods.}
We implement the oblivious, semi-oblivious, and non-oblivious local search algorithms with parameter $q = 1$.
Since calculating $M_{s,3}$ is computationally expensive, we use an upper bound $4 \sum_{i=1}^N \| \mathbf{x}^i \|_2^3$ in place of $M_{s,3}$ in the non-oblivious variant.
As a benchmark, we implement modular approximation; to maximize a linear function under a $b$-matching constraint, it uses the reduction from the max-weight $b$-matching problem to the max-weight matching problem \citep[Theorem 32.4]{Schrijver} together with the max-weight matching solver in the NetworkX library.
We also implement random selection, which randomly samples a subgraph whose degree is $d$ at all vertices.
In all methods, we use the L-BFGS-B solver in the scipy.optimize library for evaluating the value of $f$.
We apply these methods to pseudo-log-likelihood maximization under a $b$-matching constraint for each $b \in \{1, \cdots, d\}$.
\paragraph{Results.}
First, we compare the proposed methods in the case of $|V| = 10$ (\Cref{fig:graphical_n10_time} and \Cref{fig:graphical_n10}).
We can observe that our acceleration techniques work well in practice.
The running time of the non-oblivious variant is competitive with that of modular approximation and its solution quality is higher than that of modular approximation for larger solution size.
Next, we conduct experiments on larger graphs with $|V| = 20$ (\Cref{fig:graphical_n20}).
Since the oblivious and semi-oblivious local search algorithms are too slow to be applied to this setting, we omit them.
Also, in this setting, we can observe that the non-oblivious variant outperforms the benchmarks particularly in cases of larger solution size.
\section*{Acknowledgements}
The author would like to thank Andreas Krause for providing insightful comments in the early stages of this study.
The author is thankful to Takeru Matsuda and Kazuki Matoya for inspiring discussions.
This study was supported by JSPS KAKENHI Grant Number JP 18J12405.
\nocite{langley00}
% Source metadata: arXiv:2006.01400 (cs.DS, cs.LG), "Approximation Guarantees of Local Search Algorithms via Localizability of Set Functions", 2020-06-03.
% Source: https://arxiv.org/abs/2212.03501
\title{Eight times four bialgebras of hypergraphs, cointeractions, and chromatic polynomials}
\begin{abstract}
We consider the bialgebra of hypergraphs, a generalization of Schmitt's Hopf algebra of graphs, and show it has a cointeracting bialgebra. So one has a double bialgebra in the sense of L.~Foissy, who recently proved there is then a unique double bialgebra morphism to the double bialgebra structure on the polynomial ring ${\mathbb Q}[x]$. We show the polynomial associated to a hypergraph is the hypergraph chromatic polynomial. Moreover, hypergraphs occur in quartets: there is a dual, a complement, and a dual complement hypergraph. These correspondences are involutions and give rise to three other double bialgebras, and three more chromatic polynomials. In all we give eight quartets of bialgebras, which include recent bialgebras of M.~Aguiar and F.~Ardila, and of L.~Foissy.
\end{abstract}
\section{Introduction}
\label{sec:intro}
We introduce twelve bialgebras of hypergraphs, or three quartets of
bialgebras. Two of these quartets come as four double bialgebras.
This latter notion was recently introduced, and its general theory developed,
by L.~Foissy \cite{Fo22}. A double bialgebra is the same as two
cointeracting bialgebras, where the underlying algebras are the same.
Foissy shows \cite{Fo22} that such a double bialgebra comes with
a unique morphism to the double bialgebra structure naturally defined on the polynomial
ring ${\mathbb Q}[x]$, when the comodule
bialgebra is a Hopf algebra. In our case, i.e., for hypergraphs, this associates to any hypergraph
a quartet of polynomials in ${\mathbb Q}[x]$. One of these
is the chromatic polynomial of the hypergraph \cite{Doh,Hel},
which generalizes the classical chromatic polynomial of a graph.
\vskip 2mm
For a finite set $V$ let $P(V)$ be the power set, i.e., the set of all subsets
of $V$. A hypergraph is simply a map of sets
$$
h : E \rightarrow P(V),
$$
where the elements of $E$ are called edges, and those of $V$ vertices. Such a hypergraph
induces three other hypergraphs, thus giving a quartet of hypergraphs:
\begin{itemize}
\item A {\it{dual hypergraph}}
$$
h^d : V \rightarrow P(E),
$$
by considering for each vertex $v \in V$ all the edges containing $v$.
\item A {\it{complement hypergraph}}
$$
h^c : E \rightarrow P(V),
$$
where $h^c(e) = V \backslash h(e)$.
\item A {\it{dual complement hypergraph}}
$$
h^{cd} : V \rightarrow P(E),
$$
by taking the dual of $h^c$ or equivalently the complement of $h^d$.
\end{itemize}
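Although the discussion here is purely algebraic, these three constructions are easy to make concrete. The following illustrative Python sketch (hypergraphs encoded as dicts from edges to vertex sets; all names ours) also checks the asserted equivalence of the two descriptions of $h^{cd}$:

```python
def dual(h, V):
    """Dual hypergraph h^d: V -> P(E), sending v to the set of edges containing v."""
    return {v: {e for e, verts in h.items() if v in verts} for v in V}

def complement(h, V):
    """Complement hypergraph h^c: e -> V minus h(e)."""
    return {e: V - verts for e, verts in h.items()}

V = {1, 2, 3}
h = {'e1': {1, 2}, 'e2': {3}}
assert dual(h, V) == {1: {'e1'}, 2: {'e1'}, 3: {'e2'}}
assert complement(h, V) == {'e1': {3}, 'e2': {1, 2}}
# h^{cd}: the complement of the dual equals the dual of the complement.
assert complement(dual(h, V), set(h)) == dual(complement(h, V), V)
```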
Let $H$ be the vector space with basis all isomorphism classes of finite
hypergraphs.
We first consider the restriction bialgebra
$(H, \mu,\Delta, \eta, \epsilon_\Delta)$ on hypergraphs, which
straightforwardly generalizes the restriction bialgebra on graphs
introduced by W.~Schmitt \cite{Schmitt}.
Let $H^\circ \subseteq H$ be the subspace generated by the hypergraphs
with no empty edges. It becomes a sub-bialgebra of the above.
We then exhibit an extraction-contraction bialgebra
$(H^\circ, \mu,\delta, {\bf 1}_\delta)$ such that $(H,\mu,\Delta)$ is a comodule
bialgebra over $(H^\circ, \mu, \delta)$. So
$(H^\circ, \mu,\Delta,\delta)$
becomes a double bialgebra in the sense of Foissy \cite{Fo22}.
Dualization and complementation are involutions on $H$. This transports
the double bialgebra $H^\circ$ to three other double bialgebras, resulting in
a quartet of double bialgebras:
\[
H^\circ, \quad H^d, \quad H^c, \quad H^{cd}.
\]
Each of them comes with a unique {\it double bialgebra} morphism to ${\mathbb Q}[x]$.
We may extend
each of these morphisms to a (single) bialgebra morphism from the restriction
bialgebra $(H, \mu, \Delta)$ to ${\mathbb Q}[x]$. So for each
hypergraph $h$ we get four corresponding polynomials
\[
\chi_h(x), \quad \chi_h^d(x), \quad \chi_h^c(x), \quad \chi_h^{cd}(x).
\]
The first polynomial, $\chi_h(x)$, is the chromatic polynomial of
the hypergraph $h$.
The value $\chi_h(k)$ counts the number of colorings of the vertices of
$h$ with $k$ colors, such that {\it no edge is monochromatic}.
As an example, consider the hypergraph $h$ with a single edge $E=\{e\}$
containing all of its $|V|=n$ vertices, i.e., $h(e)=V$. The hypergraph chromatic polynomial is then
$\chi_h(x) = x^n - x$.
We note that the chromatic polynomial of a hypergraph $h$
vanishes whenever $h$ has an edge with exactly one vertex.
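As an illustrative check of this counting interpretation, one may verify by brute force that the single-edge hypergraph above has $\chi_h(k) = k^n - k$, and that an edge with exactly one vertex forces the polynomial to vanish. The following Python sketch is purely illustrative and not part of the mathematical development; the function name is ours.

```python
from itertools import product

def count_colorings(edges, n_vertices, k):
    """Count k-colorings of the vertices such that no edge is monochromatic.

    `edges` is a list of sets of vertex indices.  An edge with a single
    vertex is monochromatic under every coloring, so the count is then 0.
    """
    total = 0
    for coloring in product(range(k), repeat=n_vertices):
        if all(len({coloring[v] for v in e}) > 1 for e in edges):
            total += 1
    return total

# One edge containing all n = 3 vertices: chi_h(k) = k^n - k.
n = 3
for k in range(1, 5):
    assert count_colorings([set(range(n))], n, k) == k**n - k

# An edge with exactly one vertex: the chromatic polynomial vanishes.
assert count_colorings([{0}], 2, 3) == 0
```

Evaluating such counts at sufficiently many integers determines the polynomial, so brute-force checks of this kind can also cross-check the quartet of polynomials computed later.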
The notion of chromatic polynomials for hypergraphs seems to date back to
the article \cite{Hel}. It has been further considered in various
articles, such as \cite{Doh,Tom,ZD}.
The other polynomials above also have interpretations in terms of colorings.
Finally, we give another quartet of bialgebras of hypergraphs.
It comes from the bialgebras of graphs and hypergraphs
introduced in \cite[Subsections 3.1 and 20.1]{AA}.
The coproduct $\Delta^\prime$
is now no longer co-commutative. It is restriction on the first factor,
but contraction (of all edges) on the second factor.
\vskip 2mm
The organization of the article is as follows.
Section \ref{sec:hyp} recalls basics around hypergraphs, and
the notions of dualization $d$ and complementation $c$. We give the restriction
bialgebra of hypergraphs,
and the three other bialgebras derived using $d$, $c$ and $cd$. Section
\ref{sec:coint} recalls the notion of cointeracting bialgebras, and
double bialgebras from \cite{Fo22}.
Section \ref{sec:ext-con}
introduces the extraction-contraction bialgebra on hypergraphs.
We show that the restriction bialgebra of Section \ref{sec:hyp}
is a comodule bialgebra over this bialgebra.
Section \ref{sec:poledge}
considers the maps from the double bialgebras of hypergraphs
to ${\mathbb Q}[x]$ and computes the associated polynomial for the simplest
of hypergraphs: i.~the hypergraphs with no edges, ii.~the discrete hypergraphs,
where each edge is paired with exactly one vertex, and iii.~the hypergraphs with
only one edge containing all vertices.
Section \ref{sec:chrompol}
shows that the associated polynomial of hypergraphs is the chromatic
polynomial, counting non-monochromatic colorings.
Section \ref{sec:quartpol}
gives examples of the quartet of polynomials for various
hypergraphs. In Section \ref{sec:newq}
we introduce another quartet of bialgebras
of hypergraphs, derived from the bialgebras of graphs and hypergraphs
introduced in \cite{AA}.
Moreover we ask if there are cointeracting bialgebras associated to these
bialgebras.
\vskip 2mm
\noindent {\it Acknowledgements:} The second author thanks NTNU for
hosting a longer stay where the initial phases of this work were done.
He received support from Lorentz Meltzers h{\o}yskolefond.
We thank L.~Foissy for bringing to our attention some inaccuracies
in the first version of this article, concerning the cointeractions of
bialgebras.
\section{Bialgebras of hypergraphs}
\label{sec:hyp}
We give our notion of hypergraphs, and the hypergraphs one may derive
using the process of dualization and complementation.
We give the restriction coalgebra on hypergraphs, and the three
other coalgebras one gets by transporting this using dualization and
complementation. We list examples of the coproducts.
There are also two commutative products. In total we get four bialgebras
of hypergraphs.
\subsection{Hypergraphs, relations, and bipartite graphs}
For a set $S$ denote by $P(S)$ the
power set, consisting of all subsets of $S$. The complement
of a subset $T \subseteq S$ is written $S \backslash T$, but when
$S$ is understood, we simply write $T^c$. The subset $T$ may be
identified with a map $S \rightarrow \{0,1\}$ by sending the elements of $T$
to $1$ and other elements to $0$. Thus
\[
P(S) = \text{Hom}(S, \{0,1\}).
\]
\begin{definition}
Let $V$ and $E$ be sets. A {\it hypergraph} is a map $h : E \rightarrow P(V)$.
The elements of $V$ and $E$ are {\it vertices} and {\it edges}, respectively.
\end{definition}
For an edge $e \in E$, the subset $h(e) \subseteq V $ is the set of vertices
of $e$. By abuse of notation we may write $e \subseteq V$.
We have {\it no restrictions} on the map $h$.
We allow edges to have an empty set of vertices.
We allow different edges to have the same set of vertices, or more
generally that their vertex sets may be related by inclusion.
A hypergraph is thus an element in
\begin{equation}
\label{eq:hyp-EV}
\text{Hom}(E, \text{Hom}(V, \{0,1\}))
= \text{Hom}(E \times V, \{0,1\})
= \text{Hom}(V, \text{Hom}(E, \{0,1\})).
\end{equation}
By the middle part above, our general notion of a hypergraph is
simply equivalent to a subset of $E \times V$, or a relation between $E$
and $V$. We could thus equally well (and maybe more appropriately) have
called this a relation. However, the connotations suggested by
vertices and (hyper)edges will be natural, so we keep this terminology.
\begin{remark}
A relation between $E$ and $V$ also identifies as a
bipartite graph with vertices $E \cup V$.
So there is the ``paradox'':
\[
\text{bipartite graphs } \subset \text{ graphs }
\subset \text{ hypergraphs } = \text{ bipartite graphs}.
\]
\end{remark}
\begin{remark}
A map $E \rightarrow \text{Hom}(V,\{0,1\})$ can be considered a {\it promap}
or {\it profunctor} $E \mathrlap{{\hskip 2.8mm}{\llin}}{\lpil} V$. This is a special case of a profunctor
between partially ordered sets \cite{Fl}, or even of profunctors
between categories \cite[Ch.4]{ACT}.
\end{remark}
\subsection{Derived hypergraphs}
By \eqref{eq:hyp-EV} above, we get a dual hypergraph
\[
h^d : V \rightarrow P(E),
\]
where $V$ becomes the edges of the dual hypergraph, and $E$ the vertices.
There is a complementation map
\[
P(V) \mto{c} P(V), \quad S \mapsto S^c = V\backslash S
\]
which is an involution. Composing with $h$ we get a complement hypergraph
\[
h^c : E \mto{h} P(V) \mto{c} P(V).
\]
We may also take the dual of the complement $h^{cd} = (h^c)^d$ and the
complement of the dual $h^{dc} = (h^d)^c$. These are equal (see
Example \ref{ex:hyp-EV} below). The hypergraph $h$ induces four hypergraphs:
\[
h, \quad h^d, \quad h^c, \quad h^{cd}.
\]
By \eqref{eq:hyp-EV}, the hypergraph $h : E \rightarrow P(V)$ is
equivalent to a relation between $E$ and $V$, and so may be represented by
a $0,1$-matrix with rows indexed by $E$ and columns indexed by $V$.
\begin{example} \label{ex:hyp-EV}
Let the hypergraph $h$ be given by the $0,1$-matrix below.
The three other hypergraphs are then displayed in the same way.
\begin{equation*}
h = E \, \overset{\large{V}}{\left [ \begin{matrix} 0 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \end{matrix} \right ]}, \quad
h^c = E \, \overset{V}{\left [ \begin{matrix} 1 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \end{matrix} \right ]}, \quad
h^d = V \, \overset{E}{\left [ \begin{matrix} 0 & 1 \\ 1 & 0 \\
0 & 1 \\ 1 & 1 \end{matrix} \right ]}, \quad
h^{cd} = V \, \overset{E}{\left [ \begin{matrix} 1 & 0 \\ 0 & 1 \\
1 & 0 \\ 0 & 0 \end{matrix} \right ]}
\end{equation*}
\end{example}
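In matrix terms dualization is transposition, and complementation flips every entry. A small Python check (not part of the paper; function names are ours) reproduces the four matrices above and confirms $h^{cd} = h^{dc}$.

```python
# Hypergraph as a 0,1-matrix: rows indexed by edges E, columns by vertices V.
h = [[0, 1, 0, 1],
     [1, 0, 1, 1]]

def complement(m):
    """h^c: flip each entry, so each edge's vertex set is complemented."""
    return [[1 - x for x in row] for row in m]

def dual(m):
    """h^d: transpose, swapping the roles of edges and vertices."""
    return [list(col) for col in zip(*m)]

assert complement(h) == [[1, 0, 1, 0], [0, 1, 0, 0]]            # h^c
assert dual(h) == [[0, 1], [1, 0], [0, 1], [1, 1]]              # h^d
assert dual(complement(h)) == [[1, 0], [0, 1], [1, 0], [0, 0]]  # h^{cd}
# The involutions commute, so h^{cd} = h^{dc}.
assert dual(complement(h)) == complement(dual(h))
```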
\subsection{The coalgebra}
For $U \subseteq V$, the {\it restriction} is
\[
E_{|U} = \{ e \in E \, | \, e \subseteq U \}.
\]
Let $H$ be the ${\mathbb Q}$-vector space generated by isomorphism
classes of hypergraphs $(E,V,h)$ where $V$ and $E$ are finite sets.
For $U \subseteq V$, the complement set is $U^c = V \backslash U$.
We have a coproduct $\Delta$ defined as follows (we omit
the maps $h$ as they are understood):
\[
(E,V) \overset{\Delta}{\longmapsto}
\sum_{U \subseteq V} (E_{|U}, U) \otimes (E_{|U^c}, U^c).
\]
There is a counit $\epsilon_\Delta$ on $H$:
\[
(E,V) \overset{}{\mapsto} \begin{cases} 1, & V = \emptyset \\
0, & \text{ otherwise } \end{cases}.
\]
This gives a coalgebra structure on $H$.
\begin{example}
\begin{equation*}
\cherry \overset{\Delta} \mapsto
{\bf 1} \otimes \cherry + 2 \, \point \otimes \linvert + \point \otimes \dpoint
+ 2 \, \linvert \otimes \point + \dpoint \otimes \point
+ \cherry \otimes {\bf 1}
\end{equation*}
\end{example}
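The coproduct $\Delta$ can be enumerated mechanically: one term per subset $U \subseteq V$. The following Python sketch (helper names are ours, purely illustrative) lists the $2^{|V|} = 8$ terms for the path graph in the example above.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as tuples, from the empty set up to s itself."""
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def restriction_coproduct(edges, vertices):
    """All terms (E|_U, U) tensor (E|_{U^c}, U^c), one per subset U of V."""
    terms = []
    for U in subsets(vertices):
        U = frozenset(U)
        Uc = vertices - U
        terms.append((([e for e in edges if e <= U], U),
                      ([e for e in edges if e <= Uc], Uc)))
    return terms

# Path on three vertices 0-1-2 (the "cherry" above): 2^3 = 8 terms,
# matching the 6 displayed terms with their multiplicities 1+2+1+2+1+1.
V = frozenset({0, 1, 2})
E = [frozenset({0, 1}), frozenset({1, 2})]
terms = restriction_coproduct(E, V)
assert len(terms) == 8
# U = {0,1} keeps the edge {0,1}; its complement {2} keeps no edge.
left, right = next(t for t in terms if t[0][1] == frozenset({0, 1}))
assert left[0] == [frozenset({0, 1})] and right[0] == []
```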
More examples are gathered in Subsection \ref{subsec:hyp-ex}.
\subsection{The coalgebra from dualization}
Due to the symmetric situation of edges and vertices in a relation
we also have a dual coproduct.
Since $d$ is an involution, we get a coproduct
\[
\Delta^d = (d \otimes d) \circ \Delta \circ d
\]
on hypergraphs. Explicitly, for each $F \subseteq E$, let the
{\it restriction}
\[
V_{|F} = \{ v \in V \, | \, \text{ the edges incident to } v \text{ are in } F \}.
\]
Also let $F^c = E \backslash F$ be the complement.
The coproduct $\Delta^d$ is given by:
\[
(E,V) \mapsto \sum_{F \subseteq E} (F, V_{|F}) \otimes (F^c, V_{|F^c}),
\]
and a counit $\epsilon^d_\Delta$:
\[
(E,V) \overset{}{\mapsto} \begin{cases} 1, & E = \emptyset \\
0, & \text{ otherwise } \end{cases}.
\]
This gives a second coalgebra structure on $H$.
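As a quick check of the restriction $V_{|F}$, consider a single edge $e$ containing all three vertices: $F = \emptyset$ gives $V_{|\emptyset} = \emptyset$ and $F = \{e\}$ gives $V_{|F} = V$, so $\Delta^d$ of this hypergraph has just the two trivial terms $1 \otimes h$ and $h \otimes 1$. This can be verified mechanically (Python; the helper name is ours, purely illustrative).

```python
def v_restrict(edges, vertices, F):
    """V|_F: vertices all of whose incident edges have index in F.

    `edges` is a list of frozensets of vertices; `F` a set of edge indices.
    """
    return {v for v in vertices
            if all(v not in edges[i]
                   for i in range(len(edges)) if i not in F)}

# A single edge containing all three vertices.
V = {0, 1, 2}
E = [frozenset({0, 1, 2})]

assert v_restrict(E, V, set()) == set()      # F = {}: no vertex survives
assert v_restrict(E, V, {0}) == {0, 1, 2}    # F = {e}: all vertices survive
```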
\subsection{The coalgebra from complementation}
The involution $c$ gives a coproduct
\[
\Delta^c = (c \otimes c) \circ \Delta \circ c.
\]
For a subset $U \subseteq V$, let the {\it link}
\[
\text{lk}_U E = \{ e \in E \, | \, h(e) \supseteq U \}
= \{ e \in E \, | \, e \text{ is incident to every } u \in U \}.
\]
The complement $\Delta^c$ is then defined by
\[
(E,V) \mapsto \sum_{U \subseteq V} (\text{lk}_{U^c} E, U) \otimes (\text{lk}_U E, U^c),
\]
and the counit $\epsilon^c_\Delta$ is the same as $\epsilon_\Delta$:
\[
(E,V) \overset{}{\mapsto} \begin{cases} 1, & V = \emptyset \\
0, & \text{ otherwise } \end{cases}.
\]
\subsection{The coalgebra from the dual-complement}
The involution $d \circ c$ gives a coproduct
\[
\Delta^{cd} = (d \circ c \otimes d \circ c) \circ \Delta \circ (d \circ c).
\]
For a subset $F \subseteq E$, let the {\it core}
\[
\text{cor}_F V = \cap_{f \in F} h(f) = \{ v \in V \, | \, v \text{ incident to every } f \in F\}.
\]
The dual-complement $\Delta^{cd}$ is then defined by
\[
(E,V) \mapsto \sum_{F \subseteq E} (F, \text{cor}_{F^c} V) \otimes (F^c, \text{cor}_F V),
\]
and the counit $\epsilon^{cd}_\Delta$ is the same as $\epsilon^d_\Delta$:
\[
(E,V) \overset{}{\mapsto} \begin{cases} 1, & E = \emptyset \\
0, & \text{ otherwise } \end{cases}.
\]
\subsection{Examples}
\label{subsec:hyp-ex}
Here are the various coproducts for the path graph with three vertices:
We write $1$ for the hypergraph $(\emptyset,\emptyset)$.
\begin{align*}
\cherry & \overset{\Delta} \longmapsto
1 \otimes \cherry + 2 \, \point \otimes \linvert + \point \otimes \dpoint
+ 2 \, \linvert \otimes \point + \dpoint \otimes \point
+ \cherry \otimes 1 \\
\cherry & \overset{\Delta^d} \longmapsto 1 \otimes \cherry +
2 \, \pointl\otimes \pointl + \cherry \otimes 1 \\
\cherry & \overset{\Delta^c} \longmapsto
1 \otimes \cherry + 2 \, \point \, \edge \otimes \pointl \point +
\point \otimes \pointl \pointl
+ 2 \, \pointl \point \otimes \point \, \edge + \pointl \pointl \otimes \point
+ \cherry \otimes 1 \\
\cherry & \overset{\Delta^{cd}} \longmapsto 1 \otimes \cherry +
2 \, \pointl \point \otimes \pointl \point + \cherry \otimes 1
\end{align*}
Here are the coproducts for the hypergraph with one edge on three vertices:
\begin{align*}
\trekant & \overset{\Delta}{\longmapsto} 1 \otimes \trekant
+ 3 \, \point \otimes \dpoint + 3 \, \dpoint \otimes \point + \trekant \otimes 1 \\
\trekant & \overset{\Delta^d}{\longmapsto} 1 \otimes \trekant
+ \trekant \otimes 1 \\
\trekant & \overset{\Delta^c}{\longmapsto} \edge \otimes \trekant
+ 3 \pointl \otimes \linvert + 3 \, \linvert \otimes \pointl +
\trekant \otimes \edge \\
\trekant & \overset{\Delta^{cd}}{\longmapsto} \trev \otimes \trekant
+ \trekant \otimes \trev
\end{align*}
Note that for $\Delta^c$ we get an empty edge instead of $1$ in the
first tensor term. This is due to the complement of the triangle edge being
an empty edge.
\subsection{Relation between the coproducts}
There is the following relation between these coproducts.
\begin{proposition} Let the indices $\ell$ and $k$ be either $c,d,cd$ or empty.
Write ${\bf 1}$ for the identity map. Then:
\[
({\bf 1} \otimes \Delta^\ell) \circ \Delta^k = \tau_{1,3} (\Delta^\ell \otimes {\bf 1}) \circ \Delta^k.
\]
\end{proposition}
Note that $\tau_{1,3}(a \otimes b \otimes c):=c \otimes b \otimes a$.
\begin{proof}
This is a formal consequence of $\Delta^\ell$ and $\Delta^k$ both being cocommutative. Using Sweedler notation
\begin{equation} \label{eq:rel-D1}
h \overset{\Delta^k} \longmapsto h_{(1)} \otimes h_{(2)} = h_{(2)} \otimes h_{(1)}.
\end{equation}
Then ${\bf 1} \otimes \Delta^\ell$ maps $\Delta^k(h)=h_{(1)} \otimes h_{(2)}$ to (using cocommutativity of $\Delta^\ell$)
\[
h_{(1)} \otimes h_{(21)} \otimes h_{(22)} = h_{(1)} \otimes h_{(22)} \otimes h_{(21)}.
\]
Switching the first and third terms, this is
\begin{equation} \label{eq:rel-D12}
h_{(21)} \otimes h_{(22)} \otimes h_{(1)}.
\end{equation}
On the other hand applying $(\Delta^\ell \otimes {\bf 1})$ to the right side of \eqref{eq:rel-D1}, i.e., $\Delta^k(h)=h_{(2)} \otimes h_{(1)}$, we get precisely \eqref{eq:rel-D12}.
\end{proof}
\subsection{Products and bialgebras}
We have a product $\mu$ on $H$ by:
\[
(E,V,h) \cdot (E^\prime, V^\prime,h^\prime)
= (E \sqcup E^\prime, V \sqcup V^\prime,h \sqcup h^\prime),
\]
with unit $1 = (\emptyset, \emptyset)$.
Write $\eta : \Bbbk \rightarrow H$ sending $1 \mapsto 1 = (\emptyset, \emptyset)$.
With this product, we have two bialgebra structures on $H$:
\[
(H, \mu, \Delta, \eta, \epsilon_\Delta), \quad
(H, \mu, \Delta^d, \eta, \epsilon^d_\Delta).
\]
There is also another product $\mu^c$ on $H$ given by
$\mu^c = c \circ \mu \circ (c \otimes c)$:
\[
(E,V,h) \cdot (E^\prime, V^\prime,h^\prime)
= (E \sqcup E^\prime, V \sqcup V^\prime,h \hat{\sqcup} h^\prime),
\]
where $h \hat{\sqcup} h^\prime$ sends $e \in E$ and $e^\prime \in E^\prime$ to respectively:
\[
e \mapsto h(e) \cup V^\prime, \quad e^\prime \mapsto V \cup h^\prime(e^\prime).
\]
The unit is again $1 = (\emptyset, \emptyset)$. With this product
we have another two bialgebra structures on $H$, which we denote by
\[
(H, \mu^c, \Delta^c, \eta, \epsilon^c_\Delta), \quad
(H, \mu^c, \Delta^{cd}, \eta, \epsilon^{cd}_\Delta).
\]
The bialgebras $(H,\mu,\Delta) $ and $(H, \mu^c, \Delta^c)$
are graded by cardinalities
of vertices.
On the other hand, the bialgebras $(H, \mu, \Delta^d)$ and
$(H, \mu^c, \Delta^{cd})$ are graded by cardinalities of edges.
\section{Cointeracting bialgebras}
\label{sec:coint}
We recall the notion of cointeracting bialgebras.
For more details and examples, see the concise review of D.~Manchon \cite{Man}.
Let
\[ (A,\mu_A, \Delta_A, \eta_A, \epsilon_A), \quad
(B,\mu_B, \delta_B, \eta_B, \epsilon_B) \]
be bialgebras over the field $k$. We suppose $A$ is a (left) comodule over $B$:
\[ \delta : A \rightarrow B \otimes A, \] and require this to satisfy the following
conditions (${\bf 1}$ denotes identity maps):
\begin{itemize}
\item[1)] The counit $\epsilon_A : A \rightarrow k$ is a comodule morphism:
\begin{equation} \label{eq:dobi-counit}
({\bf 1}_B \otimes \epsilon_A) \circ \delta = \eta_B \circ \epsilon_A.
\end{equation}
\item[2)] The coalgebra map $\Delta_A : A \rightarrow A \otimes A$ is a
comodule morphism:
\begin{equation} \label{eq:dobi-comod}
({\bf 1}_B \otimes \Delta_A) \circ \delta = m_{13,2,4} \circ (\delta \otimes \delta)
\circ \Delta_A,
\end{equation}
where $m_{13,2,4}$ multiplies the first and third tensor factors in $B$, keeping the second and fourth factors.
\item[3)] The unit $\eta : k \rightarrow A$ is a comodule morphism, which amounts
to $\delta(1_A) = 1_B \otimes 1_A$.
\item[4)] The multiplication $\mu : A \otimes A \rightarrow A$ is a comodule morphism:
\[ \delta \circ \mu = ({\bf 1}_B \otimes \mu_A) \circ m_{13,2,4} \circ
(\delta \otimes \delta). \]
\end{itemize}
\begin{remark}
The set of characters of $A$ (the ``dual object'' of $A$)
is then a monoid $M(A)$, whose
multiplication is the dual of the coproduct $\Delta_A$.
(If $(A, \mu_A, \Delta_A)$ is a Hopf algebra, then $M(A)$ is a group.)
The characters of $B$ also get a monoid structure $E(B)$ by the dual of
the coproduct $\delta_B$. This monoid acts as endomorphisms on
the monoid of characters $M(A)$ by the duals of $\delta$ and
\eqref{eq:dobi-comod}.
\end{remark}
When $A = B$ and:
\begin{itemize}
\item $\mu_A = \mu_B$, so they are the same as algebras, and
\item $\delta = \delta_B$,
\end{itemize}
following
L.~Foissy \cite{Fo22}, we call $(B,\mu, \Delta_B, \delta_B)$ with its two
counits $\epsilon_\Delta$ and $\varepsilon_\delta$ a {\it double bialgebra}.
Then 3) and 4) above follow from $B$ being a bialgebra.
\begin{remark}
By \cite[Cor.2.4]{Fo22}, for a double bialgebra,
if $(B, \mu, \Delta)$ is a Hopf algebra, the
multiplication $\mu$ is commutative.
\end{remark}
\begin{remark}
The first example of a double bialgebra seems to be
\cite{CEM}, where $B_\Delta$ is the Connes--Kreimer Hopf algebra
\cite{CK}.
cuts in non-planar rooted trees, and $\delta$ is given by partitioning trees
into subtrees and contracting these ({\it{extraction-contraction}}).
Quasi-shuffle double bialgebras occur in \cite{EM}, and
in a slightly more general setting in \cite[Sec.1.3]{Fo22}.
\cite{FFM} has double bialgebras for finite topologies
(which may be identified with finite preorders).
For more details and examples see \cite{Man}.
\end{remark}
In the following
our focus is on double bialgebras,
but we would like to allow for
a slightly more general setting. We suppose $B \subseteq A$ as vector spaces, and:
\begin{itemize}
\item The multiplication
restricts: $\mu_A|_B = \mu_B$.
\item The comodule morphism $\delta$ restricts to the coproduct $\delta_B$.
\end{itemize}
In our case $A$ is the vector space $H$ spanned by (isomorphism classes) of
finite hypergraphs. For $B$ we restrict to various
subclasses of hypergraphs.
With each of these restrictions $(B,\mu_B, \Delta_B)$ becomes a
{\it connected}
sub-bialgebra of $(A, \mu_A, \Delta_A)$ and so
a Hopf algebra. Furthermore $(B,\mu_B, \Delta_B, \delta_B)$ is a double
bialgebra.
In each case there will also be a sub-bialgebra $C$ of $A$
such that
$A \cong B \otimes C$.
\section{Extraction-contraction bialgebras}
\label{sec:ext-con}
We introduce the extraction-contraction bialgebra for hypergraphs.
This generalizes the extraction-contraction bialgebra for graphs.
However, in the literature this bialgebra seems to have been constructed
only for simple graphs. In the coproduct one then sums over
certain partitions of the vertex set. But to get this for hypergraphs
we must rather sum over (nearly) all subsets of edges. This
point of view occurs in \cite{DFM}, which considers Tutte polynomials for
minor systems, in particular for graphs \cite[Section 4.2]{DFM},
or in \cite{KMT} considering Tutte polynomials for species, which
connects to bialgebras via the Fock functor, \cite[Chap.15]{AM}.
See also \cite{Man12}.
We present an extraction-contraction bialgebra in cointeraction
with the restriction bialgebra on hypergraphs.
Via dualization and complementation we transport this to four double
bialgebras on hypergraphs.
\subsection{Connectedness and contractions}
\label{subsec:EC-quot}
For each vertex set $W$, there is the distinguished {\it discrete} hypergraph
with edge set $E = W$, where each edge $w \in W$ has vertex set $\{w\}$.
That is, the hypergraph $(W,W)$ is given by the canonical map $W \rightarrow P(W)$.
Given a map $\phi : V \rightarrow W$ we get a map $P(V) \mto{P\phi} P(W)$
sending $S \mapsto \phi(S)$.
Assume now $\phi$ is surjective and there is a map $\psi : E \rightarrow W$
giving a commutative diagram:
\begin{equation} \label{eq:EC-EVmor}
\xymatrix{ E \ar[r]^{h} \ar[d]_{\psi} & P(V) \ar[d]^{P\phi} \\
W \ar[r] & P(W)}.
\end{equation}
Note that in such a diagram, the map $\psi$ is uniquely determined
by $\phi$.
\begin{definition} For a hypergraph $(E,V)$ let $E^*$ be the non-empty
edges in $E$.
The hypergraph $(E,V)$ has {\it connected vertex set}
if the only such diagram for $(E^*,V)$
with $V \twoheadrightarrow W$ surjective is when $W$ is a single point.
This means that the equivalence relation on $V$ generated by $u \sim u^\prime$
whenever $u$ and $u^\prime$ lie on a common edge has only one equivalence class,
the whole of $V$.
\end{definition}
That $\phi : V \twoheadrightarrow W$ gives a diagram \eqref{eq:EC-EVmor} with
a discrete hypergraph $(W,W)$, means that the vertex set of each
non-empty edge $e \in E$ maps
to a single point in $W$ (depending on $e$).
Let $(E^*_j,V_j), j \in J$ be the connected components of $(E^*,V)$ for the
equivalence relation above. Let $V^\prime = \cup_{j \in J} V_j$ and
$W = J \cup (V \backslash V^\prime)$. (Note that $V \backslash V^\prime$
are the vertices which are not incident to any edge.)
This gives a surjection $V \twoheadrightarrow W$, and
it gives a commutative diagram \eqref{eq:EC-EVmor}
for $(E^*,V)$ and the discrete hypergraph
$(W,W)$, which is initial among such diagrams.
\vskip 2mm
For each subset $F \subseteq E^*$, we get a sub-hypergraph $(F,V)$
with connected components $(F_i,U_i), i \in I$. Let $U = \cup_{i \in I} U_i$,
the {\it vertex support} of $(F,V)$,
and let $V/F = I \cup (V\backslash U)$ be the {\it contraction} of
$V$ by $F$.
This gives a surjection $V \twoheadrightarrow V/F$, and a commutative diagram
with \eqref{eq:EC-EVmor} for $(F,V)$ and
$(V/F,V/F)$, which is initial among
all commutative diagrams to discrete hypergraphs.
\subsection{Coproduct $\delta$}
\label{subsec:EC-d1}
Let $H^\circ \subseteq H$ be generated by the hypergraphs where each edge
has a non-empty vertex set. That is, hypergraphs $(E,V)$ with $E = E^*$,
or equivalently $E_{|\emptyset} = \emptyset$.
Note that $H^\circ \subseteq H$ is actually
a sub-bialgebra of $(H,\mu,\Delta)$ as the coproduct $\Delta$ restricts to
$H^\circ$.
We can now make $H$ a comodule over $H^\circ$ by the coproduct:
\begin{equation} \label{eq:extcon-delta}
\delta : H \rightarrow H^\circ \otimes H, \quad
(E,V) \overset{\delta}{\mapsto} \sum_{F \subseteq E^*} (F,V) \otimes (F^c,V/F).
\end{equation}
The pair $(F^c,V/F)$ is a hypergraph by the composition
\[ F^c \subseteq E \rightarrow P(V) \rightarrow P(V/F). \]
Note that all edges with empty vertex sets go into $F^c$ in the right
tensor factor above.
\begin{example}
\begin{equation*}
\cherry \overset{\delta}{\mapsto} \trev \otimes \cherry +
2 \, \linP \otimes \linvert + \cherry \otimes \bullet
\end{equation*}
The next example is a single edge with three vertices:
\begin{equation*}
\trekant \overset{\delta}{\mapsto} \trev \otimes \trekant
+ \trekant \otimes \bullet
\end{equation*}
Now we consider two edges, one with two vertices, and one with three vertices:
\begin{equation*}
\trekantKant \overset{\delta}{\mapsto} \trev \otimes \trekantKant
+ \linP \otimes \linvert + \trekant \otimes \pointl + \trekantKant \otimes \bullet
\end{equation*}
Note the vertex with a single edge attached. In the literature an edge
with a single vertex is often displayed as a loop, but here we draw
it as above.
\end{example}
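The contraction $V/F$ and the resulting coproduct $\delta$ can also be computed mechanically: find the connected components of $(F,V)$, collapse each component to a point, and keep the untouched vertices. The following Python sketch (helper names are ours, purely illustrative) enumerates the $2^{|E|}$ terms of $\delta$ for the path graph in the first display above.

```python
from itertools import chain, combinations

def components(edges, vertices):
    """Partition of V: vertices joined by an edge merge into one class."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in edges:
        e = sorted(e)
        for v in e[1:]:
            parent[find(v)] = find(e[0])
    comps = {}
    for v in vertices:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

def delta_terms(edges, vertices):
    """Terms (F, V) tensor (F^c, V/F) of delta, one per subset F of E."""
    idx = list(range(len(edges)))
    terms = []
    for Fi in chain.from_iterable(combinations(idx, r)
                                  for r in range(len(idx) + 1)):
        F = [edges[i] for i in Fi]
        # Contraction V -> V/F: each component collapses to a single point.
        quot = {v: min(c) for c in components(F, vertices) for v in c}
        Fc = [frozenset(quot[v] for v in edges[i]) for i in idx if i not in Fi]
        terms.append((F, Fc, set(quot.values())))
    return terms

# Path 0-1-2 ("cherry"): delta has 2^2 = 4 terms, matching 1 + 2 + 1 above.
V = {0, 1, 2}
E = [frozenset({0, 1}), frozenset({1, 2})]
terms = delta_terms(E, V)
assert len(terms) == 4
# F = {{0,1}}: V/F has two points, and the other edge contracts to a 2-edge.
F, Fc, VF = terms[1]
assert len(VF) == 2 and Fc == [frozenset({0, 2})]
```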
The comodule coproduct $\delta$ restricts to a coproduct
$\delta^\circ : H^\circ \rightarrow H^\circ \otimes H^\circ$.
There is a counit $\varepsilon_\delta$:
\[ (E,V) \overset{}{\mapsto} \begin{cases} 1, & E = \emptyset \\
0, & \text{ otherwise } \end{cases}.
\]
which is the same as the map
$\epsilon^d_\Delta$ restricted to $H^\circ$.
\begin{proposition}
$(H^\circ,\mu, \delta^\circ, \eta, \varepsilon_\delta)$ is a bialgebra.
\end{proposition}
\begin{proof}
The essential thing is to prove that $\delta$ is coassociative. Letting
the edge sets $A, B$ be disjoint subsets of $E = E^*$,
this amounts to showing that the map
$V \rightarrow V/(A \cup B)$ identifies as the composition
$V \rightarrow V/A \rightarrow (V/A)/B$, which is clear.
By looking at the terms of \eqref{eq:extcon-delta} when $F = \emptyset$ and
when $F = E^* = E$ we also see:
\[ (\varepsilon_\delta \otimes {\bf 1}) \circ \delta = ({\bf 1} \otimes \varepsilon_\delta) \circ
\delta = {\bf 1}. \]
\end{proof}
\begin{theorem} \label{pro:EC-cointer1}
The bialgebra $(H,\mu, \Delta, \eta, \epsilon_\Delta)$
is a comodule bialgebra over
the bialgebra $(H^\circ,\mu, \delta^\circ, \eta, \varepsilon_\delta)$
via $\delta$ in \eqref{eq:extcon-delta}.
In consequence $(H^\circ, \mu, \Delta^\circ, \delta^\circ)$
is a double bialgebra.
\end{theorem}
\begin{proof}
The coproducts are:
\begin{align*} (E,V) & \overset{\Delta}{\mapsto}
\sum_{V^\prime \subseteq V} (E_{|V^\prime }, V^\prime)
\otimes (E_{|V \backslash V^\prime }, V \backslash V^\prime), \\
(E,V) & \overset{\delta}{\mapsto} \sum_{F \subseteq E^*} (F,V) \otimes (F^c,V/F).
\end{align*}
We must show properties 1)--4) of Section \ref{sec:coint}.
Properties 3) and 4) are easy, as multiplication is disjoint
union of hypergraphs.
\noindent{\bf Property 1).}
For $(E,V)$ with $V \neq \emptyset$, both sides of $\eqref{eq:dobi-counit}$
vanish. When $V = \emptyset$ we have:
\[
\xymatrix{ (E,\emptyset) \ar[rr]^{\epsilon_\Delta} \ar[d]_{\delta}
& & 1 \ar[d]^{\eta} \\
(\emptyset, \emptyset) \otimes (E,\emptyset)
\ar[rr]^{{\bf 1} \otimes \epsilon_\Delta}& & 1_{H^\circ} = (\emptyset, \emptyset). }
\]
\noindent{\bf Property 2).}
\noindent 1.
Let us describe the composition $({\bf 1} \otimes \Delta) \circ \delta$.
Using the notation at the end of the last subsection, we may write
$V/F = I \cup (V\backslash U)$, where $U$ is the support of $(F,V)$.
Now let $S \subseteq I$ and $R \subseteq V \backslash U$,
so we get decompositions $I = S \sqcup S^c$ and
$V \backslash U = R \sqcup R^c$.
Further we define $U_S = \cup_{i \in S} U_i$ and
$U_{S^c} = \cup_{i \in S^c} U_i$.
As $S$ and $R$ vary, the composition $({\bf 1} \otimes \Delta) \circ \delta$
is the sum of terms
\begin{equation} \label{eq:EC-Dd}
(F,V) \otimes (F^c_{|S \cup R}, S \cup R) \otimes (F^c_{|S^c \cup R^c}, S^c \cup R^c).
\end{equation}
\vskip 2mm
\noindent 2. Let us describe the composition $(\delta \otimes \delta) \circ \Delta$.
For $F^1 \subseteq E_{|V^\prime}$, let its components be $(F^1_i,U_i), i \in S^1$
and $U^1 = \cup_{i \in S^1} U_i$. Similarly for
$F^2 \subseteq E_{|V \backslash V^\prime}$ we get $S^2$ and $U^2$.
Write $R^1 = V^\prime \backslash U^1$ and
$R^2 = (V \backslash V^\prime) \backslash U^2$.
As $F^1$ and $F^2$ vary, the composition $(\delta \otimes \delta) \circ \Delta$
then sends $(E,V)$ to a sum of terms:
\begin{equation} \label{eq:EC-mdd}
(F^1,V^\prime ) \otimes ((E_{|V^\prime} \backslash F^1), S^1 \cup R^1)
\otimes (F^2, V\backslash V^\prime) \otimes
((E_{|V \backslash V^\prime} \backslash F^2), S^2 \cup R^2).
\end{equation}
\vskip 2mm
\noindent 3.
The bialgebras are in cointeraction if
\[
m_{13,2,4} \circ (\delta \otimes \delta) \circ \Delta
= ({\bf 1} \otimes \Delta) \circ \delta,
\]
which amounts to showing that the maps giving terms
\eqref{eq:EC-Dd} and \eqref{eq:EC-mdd} coincide
after the multiplication of the first and third entry in the tensor product in \eqref{eq:EC-mdd}.
\vskip 2mm
\noindent 4.
The data in part 1 are $F,S, R$ and derived data $S^c, R^c, U, U_S, U_{S^c}$.
The data in part 2 are
$F^1, F^2, V^\prime$ and derived data $R^1, R^2, S^1, S^2$.
Let us see how these correspond to each other.
Given the data in part 1, we let
\[
U^1 = U_S, \quad F^1 = F_{|U_S}, \quad S^1 = S, \quad R^1 = R.
\]
Note that $V^\prime = U^1 \cup R^1$. Also
\[
U^2 = U_{S^c}, \quad F^2 = F_{|U_{S^c}}, \quad S^2 = S^c, \quad R^2 = R^c.
\]
Then $V \backslash V^\prime = U^2 \cup R^2$. Note also
\[
R^1 \cup R^2 = V \backslash (U^1 \cup U^2) = V \backslash U.
\]
Finally note that $F^1 \cup F^2 = F$ since $U_S$ and $U_{S^c}$
decompose $(F,V)$ into two components.
\vskip 2mm
\noindent 5. Conversely, given the data in part 2,
we get
\[
S = S^1, S^c = S^2, R = R^1, R^c = R^2, \quad F = F^1 \cup F^2.
\]
Hence the two maps with terms in \eqref{eq:EC-Dd} and \eqref{eq:EC-mdd}
identify (after multiplying the first and last terms in the latter).
\end{proof}
\begin{remark}
The double bialgebra for graphs was considered by L.~Foissy \cite{FoChrom}, and before that by W.~Schmitt \cite{Schmitt} (but Schmitt does not state the cointeraction property). See also D.~Manchon's article \cite{Man12}. Both consider simple graphs, so one does not allow loops (edges with a single vertex) or multiple edges. Letting $G \subseteq H$ be the subspace of graphs, i.e., hypergraphs where all edges have cardinality $\leq 2$, the simple graphs are a quotient of the subalgebra $G$. This goes well, as the coproduct $\delta$ descends: if a graph $(E,V)$ has a loop or a multiple edge, then this is also the case for every term in the resulting coproduct by $\delta$. The same holds for hypergraphs. So one may consider hypergraphs with no loops (edges with only a single vertex) and no multiple edges, and one will still get bialgebras for $\delta$ and $\Delta$.
\end{remark}
\subsection{Coproduct $\delta^d$}
Let $H^d \subseteq H$ be the subspace of hypergraphs with no isolated vertices,
i.e. every vertex is contained in at least one edge. Alternatively
$V = V^*$ or equivalently $V_{|\emptyset} = \emptyset$.
So $H^d \subseteq H$ is a sub-bialgebra of $(H, \mu, \Delta^d)$
as the coproduct $\Delta^d$ restricts to $H^d$.
Since $d(H^\circ) = H^d$, from
Subsection \ref{subsec:EC-d1} we get a
comodule coproduct:
\begin{equation*} \delta^d : H \rightarrow H^d \otimes H, \quad
\delta^d = (d \otimes d) \circ \delta \circ d.
\end{equation*}
We describe this explicitly.
Consider discrete hypergraphs $(G,G)$ given by the canonical $G \rightarrow P(G)$
and commutative diagrams from maps $\phi : E \rightarrow G$:
\begin{equation}
\xymatrix{ V \ar[r]^{h^d} \ar[d]_{\psi} & P(E) \ar[d]^{P\phi} \\
G \ar[r] & P(G)}.
\end{equation}
\begin{definition}
Let $V^* \subseteq V$ be the vertices which are incident to at least one edge. The hypergraph $(E,V)$ has {\it connected edge set} if the only such diagram for $(V^*,E)$ with $E \twoheadrightarrow G$ surjective is when $G$ is a single point.
This means that the equivalence relation on $E$ generated by $e \sim e^\prime$ whenever $e$ and $e^\prime$ share a common vertex has only one equivalence class, the whole of $E$.
\end{definition}
Note that if each edge is incident to a vertex and each vertex is
incident to an edge, the two notions of connectedness coincide.
But if there are vertices not on any edge, the hypergraph may have connected
edge set but will not have connected vertex set.
Similarly if there are edges with an empty vertex set.
Given $(E,V)$ let $(E_k,V^*_k), k \in K$ be the
connected components for $(E,V^*)$ in the above
relation. Let $E^\prime = \cup_{k \in K} E_k$, and $G = K \cup (E \backslash
E^\prime)$ (the edges of $E \backslash E^\prime$ are edges with no vertices).
We have a surjection $E \twoheadrightarrow G$, and a commutative diagram
for $(E,V^*)$ and $(G,G)$ which is initial among such diagrams.
For each subset $U \subseteq V^*$, we get a sub-hypergraph $(E,U)$ (here each edge $e$ is contracted onto $U$, i.e. we remove from each edge the vertices in $V \backslash U$), with connected components $(F_\ell,U_\ell), \ell \in L$. Let $F = \cup_{\ell \in L} F_\ell$ and $E/U = L \cup (E\backslash F)$. This gives a surjection $E \twoheadrightarrow E/U$ where for each vertex $u \in U$ all edges containing $u$ are mapped to a single element in $E/U$. So we have a commutative diagram
with $(E,U)$ and $(E/U,E/U)$, which is initial among all commutative diagrams to discrete hypergraphs.
The comodule coproduct is then:
\[ \delta^d : H \rightarrow H^d \otimes H, \quad
(E,V) \overset{\delta^d}{\longmapsto} \sum_{U \subseteq V^*} (E,U) \otimes (E/U,U^c).
\]
The comodule coproduct $\delta^d$ restricts to a coproduct
$\delta^d : H^d\rightarrow H^d \otimes H^d$. (By abuse of notation we also use
$\delta^d$ for its restriction to $H^d$.)
There is a counit $\varepsilon^d_\delta$:
\[ (E,V) \mapsto \begin{cases} 1, & V = \emptyset \\
0, & \text{ otherwise, } \end{cases}
\]
which is the same as the map
$\epsilon_\Delta$ restricted to $H^d$.
This gives a bialgebra $(H^d,\mu, \delta^d, \eta, \varepsilon^d_\delta)$.
\begin{proposition}
The bialgebra $(H, \mu, \Delta^d, \eta, \epsilon^d_\Delta)$
is a comodule bialgebra
over the bialgebra $(H^d,\mu, \delta^d, \eta, \varepsilon^d_\delta)$.
As a consequence we have a double bialgebra $(H^d, \mu, \Delta^d, \delta^d)$.
\end{proposition}
\begin{proof}
This is similar to the proof of Proposition \ref{pro:EC-cointer1}.
\end{proof}
\subsection{Coproduct $\delta^c$}
Let $H^c \subseteq H$ be the image of $H^\circ \subseteq H$ by complementation $c$.
Let $E^\times \subseteq E$ denote the edges whose vertex set is not the whole of $V$.
Then $H^c$ is generated by the hypergraphs $(E,V)$ in which no edge contains all
the vertices; equivalently, the hypergraphs with
$E = E^\times $ or $\text{lk}_EV = \emptyset$.
Again $H^c \subseteq H$ is a sub-bialgebra of $(H, \mu^c, \Delta^c)$.
Using the complement involution we get a coproduct:
\begin{equation*} \delta^{c} : H \rightarrow H^{c} \otimes H, \quad
\delta^c = (c \otimes c) \circ \delta \circ c.
\end{equation*}
For a hypergraph $(F,V)$ denote the complement hypergraph as $(\ov{F},V)$.
So the latter is given by the composition $F \mto{h} P(V) \mto{c} P(V)$.
For $F \subseteq E^\times$ consider surjections $V \twoheadrightarrow W$ such that for each
$f \in F$, the complement vertex set $h(f)^c \subseteq V$ maps to a single point
in $W$. Let $V \twoheadrightarrow V/\ov{F}$ be initial among such surjections.
The comodule coproduct $\delta^c$ is then:
\[ \delta^{c} : H \rightarrow H^{c} \otimes H, \quad
(E,V) \overset{\delta^c}{\mapsto} \sum_{F \subseteq E^\times} (F,V) \otimes (F^c,V/\ov{F}).
\]
This again restricts to a coproduct on $H^c$ giving a double bialgebra
$(H^c, \mu^c, \Delta^c, \delta^c)$.
\subsection{Coproduct $\delta^{cd}$}
Let $H^{cd} \subseteq H$ be the image of $H^\circ \subseteq H$ by $c \circ d$.
Let $V^\times \subseteq V$ denote the vertices not on every edge of $E$.
Then $H^{cd}$ is generated by the hypergraphs $(E,V)$
in which no vertex is contained
in all edges; equivalently, the hypergraphs with
$V = V^\times $ or $\text{cor}_VE = \emptyset$.
Again $H^{cd} \subseteq H$ is a sub-bialgebra of $(H, \mu^c, \Delta^{cd})$.
Using the complement dual involution we get a coproduct:
\begin{equation*} \delta^{cd} : H \rightarrow H^{cd} \otimes H, \quad
\delta^{cd} = (c\circ d \otimes c \circ d) \circ \delta \circ (c \circ d).
\end{equation*}
For $U \subseteq V^\times$ consider surjections $E \twoheadrightarrow G$ such that
for each $u \in U$, all edges not containing $u$ map to a single point in $G$.
Let $E \twoheadrightarrow E/\ov{U}$ be initial among such surjections.
The comodule coproduct is then:
\begin{equation*} \delta^{cd} : H \rightarrow H^{cd} \otimes H, \quad
(E,V) \overset{\delta^{cd}}{\mapsto} \sum_{U \subseteq V^\times} (E,U) \otimes
(E/\ov{U}, U^c).
\end{equation*}
Note that vertices in the intersection of all edges
only occur on the right-hand side of the tensor product.
The map $\delta^{cd}$ again restricts to a coproduct on $H^{cd}$ giving
a double bialgebra
$(H^{cd}, \mu^c, \Delta^{cd}, \delta^{cd})$.
\subsection{Examples with $\delta$-coproducts}
\label{subsec:coint-ex}
Here are the various comodule coproducts for the path graph with three vertices:
\begin{align*}
\cherry & \overset{\delta} \longmapsto
\trev \otimes \cherry + 2 \, \linvert \, \point \otimes \linvert
+ \cherry \otimes \point
\\
\cherry & \overset{\delta^d} \longmapsto \edge \, \edge \otimes \cherry +
2 \, \pointl \, \edge \otimes \cher + \pointll \otimes \linvert
+ 2 \cher \otimes \pointl + \pointl \pointl \otimes \pointl
+ \cherry \otimes \edge \\
\cherry & \overset{\delta^{c}} \longmapsto \trev \otimes \cherry +
2 \, \point \, \linvert \otimes \linvert \, \point
+ \cherry \otimes \trev \\
\cherry & \overset{\delta^{cd}} \longmapsto
\edge \, \edge \edge \otimes \cherry + 2 \, \edge \pointl \otimes \cher
+ \pointl \, \pointl \otimes \pointll
\end{align*}
\begin{remark} The path graph is in $H^\circ, H^d, H^c$ but not in $H^{cd}$.
The first three coproducts may be taken in
the extraction/contraction bialgebras, but
the last case only occurs as a comodule coproduct.
\end{remark}
Here are the comodule coproducts for the triangle hypergraph
with one edge on three vertices:
\begin{align*}
\trekant & \overset{\delta}{\longmapsto} \trev \otimes \trekant
+ \trekant \otimes \point \\
\trekant & \overset{\delta^d}{\longmapsto} \edge \otimes \trekant
+ 3 \pointl \otimes \linvert + 3 \linvert \otimes \pointl
+ \trekant \otimes \edge \\
\trekant & \overset{\delta^c}{\longmapsto} \trev \otimes \trekant
\\
\trekant & \overset{\delta^{cd}}{\longmapsto} \edge \otimes \trekant
\end{align*}
\begin{remark} The triangle is in $H^\circ$ and $H^d$, so the first
two are also coproducts in the extraction/contraction algebras.
The last two cases only occur as comodule coproducts.
\end{remark}
\section{Associated polynomials}
\label{sec:poledge}
In the following $\Bbbk$ is a field of characteristic zero, as we relate
to \cite{Fo22}.
The polynomial algebra $\Bbbk[x]$ has a Hopf algebra
structure $(\mu, \Delta, 1, \epsilon_\Delta)$ where
$$
\Delta(x) = x \otimes 1 + 1 \otimes x,
$$
and a bialgebra structure $(\mu, \delta, 1, \epsilon_\delta)$ where
$$
\delta(x) = x \otimes x.
$$
We shall identify $\Bbbk[x] \otimes \Bbbk[x]$ as $\Bbbk[x,y]$, so these
maps can be written
\[
x \mapsto x+y \text{ resp.~} x \mapsto x\cdot y.
\]
If $(B, \mu, \Delta, \delta)$ is a connected double bialgebra, by
the main result of \cite{Fo22} there is a unique algebra morphism
$\Phi : B \rightarrow \Bbbk[x]$ such that $\Phi$ is a bialgebra morphism
$(B,\mu,\Delta) \rightarrow (\Bbbk[x],\mu,\Delta)$ and a bialgebra
morphism $(B,\mu,\delta) \rightarrow (\Bbbk[x], \mu, \delta)$.
We apply this to the double bialgebra $(H^\circ, \mu, \Delta, \delta)$,
noting that $(H^\circ,\mu,\Delta)$ is
a connected bialgebra.
Hence there is a unique double bialgebra morphism
\begin{equation} \label{eq:pol-chio}
\chi^\circ : (H^{\circ},\mu, \Delta, \delta) \rightarrow
(\Bbbk[x], \mu, \Delta, \delta).
\end{equation}
We first compute this for some simple hypergraphs.
In the following let $n$ be the cardinality $|V|$ of the vertex set $V$.
\begin{proposition} \label{pro:ass-noedge}
Consider the hypergraph $(\emptyset, V)$ with no edges.
Then
$$
\chi^\circ(\emptyset, V) = x^n.
$$
\end{proposition}
\begin{proof}
Since $(\emptyset, V) = \prod_{v \in V} (\emptyset, \{v\})$, it
is enough to show that $p(x) := \chi^\circ(\emptyset, \{v \}) = x$.
We have:
\[
(\emptyset, \{v\}) \overset{\Delta}{\longmapsto}
(\emptyset, \emptyset) \otimes (\emptyset, \{v\}) +
(\emptyset, \{v \}) \otimes (\emptyset, \emptyset).
\]
Mapping this to $\Bbbk[x]$ and $\Bbbk[x] \otimes \Bbbk[x] \cong \Bbbk[x,y]$
we get the equation
\[
p(x+y) = 1 \cdot p(x) + p(y) \cdot 1.
\]
We also have:
\[
(\emptyset, \{v\}) \overset{\delta}{\longmapsto}
(\emptyset, \{v\}) \otimes (\emptyset, \{v\}),
\]
giving $p(xy) = p(x) \cdot p(y)$. These two equations give $p(x) = x$.
\end{proof}
\begin{proposition}
Let $(e,V)$ be the hypergraph consisting of a single edge $e$ containing all
the vertices of $V$. Then for $n \geq 1$:
$$
\chi^\circ(e,V) = x^n - x.
$$
\end{proposition}
\begin{remark} We see that for the hypergraph with a single edge containing
a single vertex $(e, \{v \})$,
the associated polynomial is zero, $\chi^\circ(e,\{v \})=0$.
However, by Proposition \ref{pro:ass-noedge}, if we have a single vertex
and no edge $(\emptyset, \{v\})$, the associated polynomial is $x$.
\end{remark}
\begin{proof} Let $p(x) = \chi^\circ(e,V)$. The $\delta$-coproduct is:
\[
(e,V) \overset{\delta}{\longmapsto}
(\emptyset, V) \otimes (e,V) + (e, V) \otimes (\emptyset, \{*\}).
\]
Again mapping this to $\Bbbk[x]$ and $\Bbbk[x] \otimes \Bbbk[x] \cong \Bbbk[x,y]$ we get the
equation
\[
p(xy) = x^n \cdot p(y) + p(x) \cdot y. \]
The only solutions are $p(x) = a(x^n - x)$: writing $p(x) = \sum_k c_k x^k$
and comparing coefficients of $x^jy^k$ on both sides forces $c_k = 0$ for
$k \notin \{1,n\}$ and $c_1 = -c_n$.
The $\Delta$-coproduct is:
\[
(e, V) \overset{\Delta}{\longmapsto} 1 \otimes (e,V)
+ \sum_{\emptyset \subset U \subset V} (\emptyset, U) \otimes (\emptyset, U^c) +
(e,V) \otimes 1 .
\]
Mapping to $\Bbbk[x]$ and $\Bbbk[x,y]$, this gives for instance for $n = 3$:
\[
p(x+y) = p(x) + 3x^2y + 3xy^2 + p(y).
\]
We easily verify that $a = 1$.
\end{proof}
By sending hypergraphs $(E,\emptyset) \mapsto 1$, \eqref{eq:pol-chio}
may be extended to a bialgebra morphism
\begin{equation} \label{eq:pol-chi}
\chi : (H,\mu, \Delta) \rightarrow (\Bbbk[x], \mu, \Delta).
\end{equation}
\section{The chromatic polynomial for hypergraphs}
\label{sec:chrompol}
The map $\chi$ of the previous section assigns to
each hypergraph $(E,V)$ a polynomial $\chi_{E,V}(x)$. For classical
graphs, $\chi_{E,V}(n)$ is the chromatic polynomial, counting
the number of colorings of the vertices of $(E,V)$ with
$n$ colors such that the two vertices of every edge
have different colors; see \cite{FoChrom}.
For general hypergraphs we now show
that it counts the number of colorings of the vertices, using $n$ colors,
such that no edge is monochromatic. In this case a more informative name
than {\it chromatic} polynomial might therefore be the
{\it non-monochromatic} polynomial; however, the former is standard
in the literature.
\subsection{Colorings}
By \eqref{eq:extcon-delta}
for the bialgebra coproduct $\delta^\circ$, the image of $(E,V) \in H^\circ$
is a sum of terms for $F \subseteq E$:
\[
(F,V) \otimes (F^c, V/F).
\]
\begin{lemma} Let $U \subseteq V$ be the vertex support of edges in $(F,V)$
(i.e.~all vertices incident to some edge in $F$).
The hypergraph $(F^c,V/F)$ has no loops (edges with a single vertex) if and only if
for each connected component $(F_i,U_i)$ of $(F,U)$, the
edge set $F_i$ is induced, i.e.,~$F_i = F_{|U_i}$.
\end{lemma}
\begin{proof}
If there were an edge in $F_{|U_i}$ which is not in $F_i$, this edge would
become a loop in $(F^c, V/F)$, since the vertices of $U_i$ all map to a
single point in $V/F$.
\end{proof}
\begin{definition}
A {\it coloring} of the hypergraph $(E,V)$ by a set $C$, whose
elements are called colors,
is a map $V \rightarrow C$ such that no edge $e \in E$ is monochromatic,
i.e., for each edge not all the vertices have the same color.
\end{definition}
In particular note:
\begin{itemize}
\item If there is some edge with only one vertex,
the hypergraph $(E,V)$ has no coloring,
\item If $V \neq \emptyset$ and $C = \emptyset$, there is no coloring
since there is no map $V \rightarrow C$.
\item If $V = \emptyset$ there is a unique map $V \rightarrow C$ for
every $C$ so there is {\it one} coloring for each $C$.
\end{itemize}
Let $\chi_{E,V}(n)$ be the number of colorings of a hypergraph $(E,V)$ with
$n$ colors. We see that $\chi_{E,V}(0) = 0$ when $|V| \geq 1$.
From \cite{Doh,Hel} it follows that $\chi_{E,V}(n)$ is a polynomial,
the {\it chromatic polynomial} of the hypergraph $(E,V)$.
\begin{example} {} \hfill
\begin{itemize}
\item When $V = \emptyset $ then
$\chi_{E,V}(n) = 1$ for all $n \geq 0$.
\item When $E = \emptyset$, then $\chi_{E,V}(n) = n^{|V|}$.
\item When $E$ is a single edge containing all the vertices $V$,
then $\chi_{E,V}(n) = n^{|V|} - n$, the polynomial
we computed in the previous section.
\end{itemize}
\end{example}
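The counting definition is easy to check by exhaustive enumeration. The following brute-force sketch (in Python; the function name and the convention that an empty edge imposes no constraint are ours, not from the paper) reproduces the bullet points and the example above:

```python
from itertools import product

def count_colorings(edges, vertices, n):
    """Number of maps vertices -> {0,...,n-1} such that no edge is
    monochromatic.  A singleton edge is always monochromatic; an empty
    edge imposes no constraint (our convention here)."""
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

# V empty: exactly one coloring (the empty map) for every n.
assert count_colorings([], [], 7) == 1
# E empty: n^|V| colorings.
assert count_colorings([], [1, 2, 3], 4) == 4 ** 3
# A single edge containing all the vertices: n^|V| - n colorings.
assert count_colorings([{1, 2, 3}], [1, 2, 3], 4) == 4 ** 3 - 4
# An edge with a single vertex: no colorings at all.
assert count_colorings([{1}], [1, 2], 5) == 0
```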
\begin{theorem} \label{thm:col-chi} Let $\Bbbk = {\mathbb Q}$.
The double bialgebra morphism $\chi^\circ$ of
\eqref{eq:pol-chio}
maps the hypergraph $(E,V)$ to the chromatic polynomial $\chi_{E,V}(x)$.
\end{theorem}
We prove this at the end of this section.
\begin{remark}
Colorings of hypergraphs
are discussed in \cite[Chap.19]{Ber}, but the discussion concerns the chromatic number,
not the polynomial.
There as well, the chromatic number is defined by requiring that no edge
be monochromatic. This definition of the chromatic number of
a hypergraph was introduced by Erd\H{o}s and Hajnal \cite{EH}.
The chromatic {\it polynomial} for hypergraphs seems first to have
been discussed
by T.~Helgason in \cite{Hel}. Somewhat later K.~Dohmen \cite{Doh}
discusses basic properties
of this polynomial (based on his Ph.D.~dissertation) and shows among other things that its
coefficients are integers.
I.~Tomescu \cite{Tom} computes the polynomial for classes of hypergraphs
and considers its coefficients. R.~Zhang and F.~Dong \cite{ZD} give
a recent presentation of various facts
and properties concerning this polynomial, as well as some
open questions, especially concerning its zeroes.
In \cite[Pro. 18.1]{AA} M.~Aguiar and F.~Ardila associate a polynomial
character to graphs. The immediate extension of this to hypergraphs,
the character $\epsilon_\delta$, gives precisely the polynomial we consider here.
\end{remark}
\subsection{Colorings built from two disjoint color sets}
Before proving Theorem \ref{thm:col-chi} we develop some auxiliary results.
\begin{proposition} Let $C = X \cup Y$ be a disjoint union. The colorings of $(E,V)$ by $C$ are in bijection with the following data:
\begin{itemize}
\item A subset $U \subseteq V$,
\item A coloring of $(E_{|U}, U)$ by $X$,
\item A coloring of $(E_{|U^c},U^c)$ by $Y$.
\end{itemize}
\end{proposition}
\begin{proof}
If the map $V \rightarrow C$ gives a coloring, we get a subset $U \subseteq V$, namely those vertices colored
by $X$, while the complement $U^c$ is colored by $Y$.
The map $U \rightarrow X$ is then a coloring of $(E_{|U}, U)$ and
$U^c \rightarrow Y$ is a coloring of $(E_{|U^c}, U^c)$.
Conversely, given such data, we get a map $V \rightarrow X \cup Y$.
This gives a coloring, since if $e \in E$ is an edge not in $E_{|U}$
or in $E_{|U^c}$, then $e$ contains vertices from both
$U$ and $U^c$ and these will have different colors.
\end{proof}
\begin{corollary} \label{cor:colXuY}
The number of colorings $\chi_{E,V}(n)$ is a polynomial in $n$ with zero constant term if $V$ is not empty, and
\begin{equation} \label{eq:col-nm}
\chi_{E,V}(n+m) = \sum_{U \subseteq V}
\chi_{E_{|U},U}(n) \cdot \chi_{E_{|U^c}, U^c}(m).
\end{equation}
\end{corollary}
\begin{proof}
The identity above is immediate from the proposition above.
We show that $\chi_{E,V}(n)$ is a polynomial by induction on
the cardinalities of $E$ and $V$, with zero constant term
when $V \neq \emptyset$.
If $V$ is empty $\chi_{E,V}(n)$ is the constant polynomial $1$.
If $E$ is empty,
$\chi_{E,V}(n) = n^{|V|}$.
Let then $m = 1$: the difference $\chi_{E,V}(n+1)- \chi_{E,V}(n)$ is
a polynomial for $n \geq 0$, by induction and the expression
\eqref{eq:col-nm} above.
Whence $\chi_{E,V}(n)$ is a polynomial for $n \geq 1$. But the
above identity with $n = m$ gives by induction that the
constant term of $\chi_{E,V}(2n)$ is twice the constant
term of $\chi_{E,V}(n)$, and so this constant term must be zero.
\end{proof}
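The identity in the corollary above can be confirmed by direct enumeration. Here is a small numerical check (a Python sketch; the test hypergraph and all names are our own choices):

```python
from itertools import chain, combinations, product

def count_colorings(edges, vertices, n):
    # no edge monochromatic; empty edges impose no constraint
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def restrict(edges, U):
    # E_{|U}: the edges entirely contained in U
    return [e for e in edges if e <= set(U)]

V = [1, 2, 3, 4]
E = [{1, 2}, {2, 3}, {3, 4}, {1, 2, 3}]   # an arbitrary small hypergraph
n, m = 2, 3
lhs = count_colorings(E, V, n + m)
rhs = 0
for U in subsets(V):
    Uc = [v for v in V if v not in U]
    rhs += (count_colorings(restrict(E, U), list(U), n)
            * count_colorings(restrict(E, Uc), Uc, m))
assert lhs == rhs
```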
Now we consider colorings by $X \times Y$.
\begin{proposition}
Let $C = X \times Y$. The colorings of $(E,V) \in H^\circ$ by $C$ are in bijection
with the following data:
\begin{itemize}
\item A subset $F \subseteq E$ and a coloring of $(F,V)$ by $X$,
\item A coloring of $(F^c, V/F)$ by $Y$.
\end{itemize}
Note that in any such coloring, both hypergraphs $(F,V)$ and
$(F^c, V/F)$ must be loopless.
\end{proposition}
\begin{proof}
Given a coloring by $X \times Y$, let $F \subseteq E$ be all edges
which are monochromatic for the composition $V \rightarrow X \times Y \rightarrow Y$.
Let $U$ be the vertex support of $F$, and let $I$ index the
connected components of $(F,U)$. Then $V/F = I \cup (V \backslash U)$
and the map $V \rightarrow Y$ descends to a map $V/F \rightarrow Y$.
Clearly the composition $V \rightarrow X \times Y \rightarrow X$ gives
a coloring of $(F,V)$. Furthermore the composition $V \rightarrow Y$
gives a coloring of $(F^c, V/F)$, since the edges in $F^c$ are
non-monochromatic for $Y$.
Conversely, given data as stated in the proposition above,
we get a map $V \rightarrow X \times Y$,
and this is obviously a coloring of $(E,V)$. Furthermore these
correspondences are seen to be inverse to each other.
\end{proof}
\begin{corollary} \label{cor:colXxY}
The number of colorings fulfils the identity
\[
\chi_{E,V}(nm) = \sum_{F \subseteq E}
\chi_{F,V}(n) \cdot \chi_{F^c,V/F}(m),
\]
where $(F,V)$ and $(F^c,V/F)$ are as in the definition
(of the restriction $\delta^\circ$) of $\delta$, in \eqref{eq:extcon-delta}.
\end{corollary}
\begin{proof}
This follows by the proposition above. Note that if $(F,V)$ has a loop
then $\chi_{F,V}(n)$ is zero, and if $(F^c,V/F)$ has a loop, then
$\chi_{F^c,V/F}(m)$ is zero.
\end{proof}
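The product identity can likewise be checked mechanically. In the sketch below (Python; names ours) the contraction $V/F$ is computed by collapsing each connected component of the vertex support of $F$ to a single point; we test on the path with three vertices, where $\chi_{E,V}(6) = 6\cdot 5\cdot 5$:

```python
from itertools import chain, combinations, product

def count_colorings(edges, vertices, n):
    # no edge monochromatic; a loop (singleton edge) forces the count to 0
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def contract(edges, vertices, F_idx):
    """Return (images of the edges outside F, V/F), where each connected
    component of the support of F = {edges[i] : i in F_idx} is collapsed."""
    F = [edges[i] for i in F_idx]
    parent = {v: v for v in vertices}      # union-find over the vertices
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in F:
        vs = list(e)
        for w in vs[1:]:
            parent[find(w)] = find(vs[0])
    support = set().union(*F) if F else set()
    q = {v: ('comp', find(v)) if v in support else ('vert', v) for v in vertices}
    W = sorted(set(q.values()))
    Fc = [{q[v] for v in edges[i]} for i in range(len(edges)) if i not in F_idx]
    return Fc, W

V = [1, 2, 3]
E = [{1, 2}, {2, 3}]          # the path with three vertices
n, m = 2, 3
lhs = count_colorings(E, V, n * m)
rhs = 0
for F_idx in subsets(range(len(E))):
    F = [E[i] for i in F_idx]
    Fc, W = contract(E, V, F_idx)
    rhs += count_colorings(F, V, n) * count_colorings(Fc, W, m)
assert lhs == rhs == 6 * 5 * 5
```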
\begin{proof}[Proof of Theorem \ref{thm:col-chi}.]
This follows from Corollaries \ref{cor:colXuY} and
\ref{cor:colXxY}.
\end{proof}
\begin{remark}
As mentioned in \eqref{eq:pol-chi} $\chi^\circ$ extends
to a bialgebra homomorphism $\chi : (H,\mu,\Delta) \rightarrow (\Bbbk[x], \mu, \Delta)$,
sending each $(E,\emptyset) \mapsto 1$.
This will be convenient, as every hypergraph then has a chromatic
polynomial. However, for $(\{e\}, V)$, the single edge containing
all the vertices of $V$, the chromatic polynomial is $x^n - x$ where
$n = |V|$. This formula is valid for $n \geq 1$, but not for $n = 0$,
where the chromatic polynomial given by our extension is $1$. This suggests
that the double bialgebra setting $(H^\circ, \Delta, \delta)$ is the natural one
for the chromatic polynomial, and that the extension has some lack
of naturality.
\end{remark}
\section{Four chromatic polynomials}
\label{sec:quartpol}
Associated to the double bialgebra $(H^d,\mu,\Delta^d, \delta^d)$, there
is also as above a unique double bialgebra morphism to
$(\Bbbk[x], \mu, \Delta, \delta)$. It gives a polynomial $\chi^d_{E,V}(x)$, and
as at the end of the previous section, we may extend its domain to
all hypergraphs $(E,V) \in H$. Then
$\chi^d_{E,V}(n)$ counts the number of colorings of the edges of
$(E,V)$ with $n$ colors
such that there is no vertex at which all the incident edges
have the same color.
In particular, for a tree this polynomial always vanishes, as
every tree has a leaf. However, one may remove the leaf vertices to get
trees whose ``leaves'' are edges; these give non-trivial polynomials.
\begin{center}
\begin{tikzpicture}[scale = 0.7]
\draw[] (0,0)--(-0.5,0.75) node[anchor=west] at (-0.33,0.5){};
\draw[] (0,0)--(-0.5,-0.75) node[anchor=west] at (-0.33,-0.5) {};
\draw[] (0,0)--(1,0) node[anchor=north] at (0.5,0) {};
\draw[] (1,0)--(2,0) node[anchor=north] at (1.5,0) {};
\filldraw (0,0) circle (2pt);
\filldraw (-0.5,0.75) circle (2pt);
\filldraw (-0.5,-0.75) circle (2pt);
\filldraw (1,0) circle (2pt);
\filldraw (2,0) circle (2pt);
\draw node at (3,0) {$\rightsquigarrow$};
\draw[] (4,0)--(3.5,0.75) node[anchor=west] at (3.67,0.5){};
\draw[] (4,0)--(3.5,-0.75) node[anchor=west] at (3.67,-0.5) {};
\draw[] (4,0)--(5,0) node[anchor=north] at (4.5,0) {};
\draw[] (5,0)--(6,0) node[anchor=north] at (5.5,0) {};
\filldraw (4,0) circle (2pt);
\filldraw (5,0) circle (2pt);
\end{tikzpicture}
\end{center}
\vskip 2mm
From the four double bialgebras
\[
H^\circ, \quad H^d, \quad H^c, \quad H^{cd}
\]
we get four chromatic polynomials associated to every hypergraph
$(E,V)$ in $H$:
\[
\chi_{E,V}(x), \quad \chi^d_{E,V}(x), \quad \chi^c_{E,V}(x), \quad
\chi^{cd}_{E,V}(x) .
\]
\subsection{Examples}
\begin{example}
Consider the hypergraph $h$ with $a$ vertices and $m$ edges, each edge
containing all the vertices; alternatively phrased, we have an
$m$-tuple edge containing all vertices. We
represent this below to the left, together with its derived hypergraphs:
\[
h = \pointli{a}{m}, \quad h^d = \pointli{m}{a}, \quad
h^c = \pointe{a}{m},
\quad h^{cd} = \pointe{m}{a}.
\]
In the last two hypergraphs the edges are empty and the vertices are
not incident to any edge. Note that (for $a,m \geq 1$)
$h$ is in $H^\circ$ and $H^d$ but not in $H^c$ nor $H^{cd}$.
As $0,1$-matrices, $h$ is the $m\times a$-matrix
with all entries $1$, and $h^d$ its transpose. Further $h^c$ is the
$m \times a$-matrix with all entries $0$ and $h^{cd}$ its transpose.
Their chromatic polynomials are (for $a,m \geq 1 $):
\[
\chi(x) = x^a - x, \quad \chi^d(x)= x^m - x, \quad \chi^c(x) = x^a,
\quad \chi^{cd}(x) = x^m.
\]
\end{example}
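The four formulas of this example can be verified by brute force, computing $\chi^d$ from the transposed incidence matrix, $\chi^c$ from the complemented one, and treating empty edges as imposing no constraint (consistent with the extension $(E,\emptyset) \mapsto 1$). A Python sketch with our own encoding:

```python
from itertools import product

def count_colorings(edges, vertices, n):
    # no edge monochromatic; empty edges impose no constraint
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

a, m = 3, 2
h    = ([set(range(a))] * m, list(range(a)))   # m parallel edges on a vertices
h_d  = ([set(range(m))] * a, list(range(m)))   # dual: transpose the 0,1-matrix
h_c  = ([set()] * m, list(range(a)))           # complement: m empty edges
h_cd = ([set()] * a, list(range(m)))
for n in range(1, 5):
    assert count_colorings(*h, n)    == n ** a - n   # chi
    assert count_colorings(*h_d, n)  == n ** m - n   # chi^d
    assert count_colorings(*h_c, n)  == n ** a       # chi^c
    assert count_colorings(*h_cd, n) == n ** m       # chi^cd
```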
\begin{example}
Consider the hypergraph $h$ given schematically as:
\[
h = \trekantl{a}{b}{c}{\ell}{m}{n}.
\]
By this we mean vertex sets $A,B,C$ consisting of $a,b$ and $c$ vertices,
an $\ell$-multiple edge consisting of $A \cup B$, an $m$-multiple edge
consisting of $A \cup C$ and an $n$-multiple edge consisting of
$B \cup C$. Its derived hypergraphs are:
\[
h^c = \pointli{c}{\ell} \, \pointli{b}{m} \, \pointli{a}{n}, \quad
h^d = \trekantl{\ell}{m}{n}{a}{b}{c}, \quad
h^{cd} = \pointli{\ell}{c} \, \pointli{m}{b} \, \pointli{n}{a}
\]
If all integers here are positive, these are in all four double
bialgebras and the
chromatic polynomials are:
\begin{align*}
\chi(x) &= x^{a+b+c} - x^{a+1} - x^{b+1} - x^{c+1} + 2x\\
\chi^c(x) &= (x^a-x)(x^b-x)(x^c-x) \\
\chi^d(x) &= x^{\ell + m + n} - x^{\ell+1} - x^{m+1} - x^{n+1} + 2x\\
\chi^{cd}(x) &= (x^\ell-x)(x^m-x)(x^n-x).
\end{align*}
\end{example}
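The formula for $\chi(x)$ here follows from inclusion-exclusion over which of the three multiple edges are monochromatic (any two monochromatic edges force all of $V$ to be monochromatic). A brute-force check for one small choice of parameters (a Python sketch; the encoding is ours):

```python
from itertools import product

def count_colorings(edges, vertices, n):
    # no edge monochromatic
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

a, b, c, ell, m, n_ = 2, 1, 2, 1, 2, 1
A = [('A', i) for i in range(a)]
B = [('B', i) for i in range(b)]
C = [('C', i) for i in range(c)]
V = A + B + C
E = [set(A + B)] * ell + [set(A + C)] * m + [set(B + C)] * n_
for x in range(1, 6):
    expected = x**(a + b + c) - x**(a + 1) - x**(b + 1) - x**(c + 1) + 2 * x
    assert count_colorings(E, V, x) == expected
```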
\begin{example}
All trees with $n$ edges have the same chromatic
polynomial $x(x-1)^n$. However the four chromatic polynomials
refine things considerably. There are two trees with three edges,
the path and the star:
\begin{center}
\begin{tikzpicture}[scale = 0.7]
\draw[] (0,0)--(1,0) node[anchor=west] at (-0.33,0.5){};
\draw[] (1,0)--(2,0) node[anchor=west] at (-0.33,-0.5) {};
\draw[] (2,0)--(3,0) node[anchor=north] at (0.5,0) {};
\filldraw (0,0) circle (2pt);
\filldraw (1,0) circle (2pt);
\filldraw (2,0) circle (2pt);
\filldraw (3,0) circle (2pt);
\draw[] (6,0)--(7,0) node[anchor=west] at (-0.33,0.5){};
\draw[] (7,0)--(7.5,0.85) node[anchor=west] at (-0.33,-0.5) {};
\draw[] (7,0)--(7.5,-0.85) node[anchor=north] at (0.5,0) {};
\filldraw (6,0) circle (2pt);
\filldraw (7,0) circle (2pt);
\filldraw (7.5,0.85) circle (2pt);
\filldraw (7.5,-0.85) circle (2pt);
\draw node at (7.7,0) {.};
\end{tikzpicture}
\end{center}
The chromatic polynomials of the path are:
\begin{equation*} \chi(x) = x(x-1)^3,
\quad \chi^d(x)= 0, \quad
\chi^c(x) = x(x-1)^3,
\quad \chi^{cd}(x) = 0.
\end{equation*}
The chromatic polynomials of the star are:
\begin{equation*} \chi(x) = x(x-1)^3,
\quad \chi^d(x)= 0, \quad
\chi^c(x) = x^2(x-1)(x-2),
\quad \chi^{cd}(x) = x(x-1)(x-2).
\end{equation*}
\end{example}
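These values can be reproduced by brute force: $\chi^d$ colors the dual (transposed incidence) hypergraph, $\chi^c$ complements every edge, and $\chi^{cd}$ composes the two. As before we take an edge with empty vertex set to impose no constraint, which is the convention that reproduces the table (a Python sketch, with names of our own choosing):

```python
from itertools import product

def count_colorings(edges, vertices, n):
    # no edge monochromatic; empty edges impose no constraint
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

def dual(edges, vertices):
    # swap the roles of edges and vertices (transpose the 0,1-matrix)
    return [{i for i, e in enumerate(edges) if v in e} for v in vertices], \
           list(range(len(edges)))

def complement(edges, vertices):
    return [set(vertices) - e for e in edges], vertices

def four_values(edges, vertices, n):
    cd = dual(*complement(edges, vertices))
    return (count_colorings(edges, vertices, n),
            count_colorings(*dual(edges, vertices), n),
            count_colorings(*complement(edges, vertices), n),
            count_colorings(*cd, n))

path = ([{1, 2}, {2, 3}, {3, 4}], [1, 2, 3, 4])
star = ([{1, 2}, {2, 3}, {2, 4}], [1, 2, 3, 4])
for n in range(1, 5):
    assert four_values(*path, n) == (n*(n-1)**3, 0, n*(n-1)**3, 0)
    assert four_values(*star, n) == (n*(n-1)**3, 0,
                                     n**2*(n-1)*(n-2), n*(n-1)*(n-2))
```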
\subsection{When all chromatic polynomials are zero}
It is not hard to come up with examples where all four chromatic
polynomials vanish. The chromatic polynomial $\chi_h(x)$ of a hypergraph
$h$ vanishes precisely when it has an edge with exactly one vertex.
Using the representation of the hypergraph as $0,1$-matrix $h$,
we then have:
\begin{itemize}
\item $\chi_h(x) = 0$ iff $h$ has a row with exactly one $1$.
\item $\chi_h^d(x) = 0$ iff $h$ has a column with exactly one $1$.
\item $\chi_h^c(x) = 0$ iff $h$ has a row with exactly one $0$.
\item $\chi_h^{cd}(x) = 0$ iff $h$ has a column with exactly one $0$.
\end{itemize}
Using this, any hypergraph may be extended, by adding two vertices and two edges, to a hypergraph where all four chromatic polynomials vanish: one need only add rows and columns with the above properties to the $0,1$-matrix.
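The four criteria are immediate to test on the incidence matrix, and they can be cross-checked against brute-force coloring counts. In the sketch below (Python, our own encoding) we extend the single edge on two vertices by two vertices and two edges so that all four chromatic polynomials vanish:

```python
from itertools import product

def count_colorings(edges, vertices, n):
    # no edge monochromatic; empty edges impose no constraint
    total = 0
    for colors in product(range(n), repeat=len(vertices)):
        color = dict(zip(vertices, colors))
        if all(len({color[v] for v in e}) != 1 for e in edges):
            total += 1
    return total

def vanishing_flags(M):
    """(chi, chi^d, chi^c, chi^cd) == 0 flags, for a 0,1-matrix M with
    rows indexed by edges and columns by vertices."""
    rows, cols = M, list(zip(*M))
    return (any(sum(r) == 1 for r in rows),
            any(sum(c) == 1 for c in cols),
            any(len(r) - sum(r) == 1 for r in rows),
            any(len(c) - sum(c) == 1 for c in cols))

# extend the single edge on two vertices by two vertices and two edges:
M = [[1, 1, 0, 0],   # the original edge
     [1, 0, 0, 0],   # a row with exactly one 1  -> chi   = 0
     [0, 1, 1, 1]]   # a row with exactly one 0  -> chi^c = 0
# column 2 has exactly one 1 -> chi^d = 0; column 0 has exactly one 0 -> chi^cd = 0
assert vanishing_flags(M) == (True, True, True, True)

# cross-check against brute force for the four associated hypergraphs
edges   = [{j for j, x in enumerate(r) if x} for r in M]
edges_d = [{i for i, r in enumerate(M) if r[j]} for j in range(len(M[0]))]
comp    = [{j for j, x in enumerate(r) if not x} for r in M]
comp_d  = [{i for i, r in enumerate(M) if not r[j]} for j in range(len(M[0]))]
V, Vd = list(range(len(M[0]))), list(range(len(M)))
for n in (2, 3):
    assert count_colorings(edges, V, n) == 0
    assert count_colorings(edges_d, Vd, n) == 0
    assert count_colorings(comp, V, n) == 0
    assert count_colorings(comp_d, Vd, n) == 0
```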
\section{Further bialgebras}
\label{sec:newq}
\subsection{Restriction contraction bialgebras}
Aguiar and Ardila in \cite[Subsec.3.1]{AA} introduce a variation on
Schmitt's Hopf algebra
of graphs \cite{Schmitt}. This generalizes readily to hypergraphs
\cite[Subsec.20.1]{AA}.
Aguiar and Ardila essentially do not allow empty edges: in the graph
case there are no empty edges, and in the hypergraph case they require
the hypergraph to have exactly one empty edge. But these are of course
in bijection with hypergraphs with no empty edge.
We give a slightly more general version of this in that we place no
restriction on the edges.
For a subset $U$ of $V$, let $U^c$ be the complement.
We get a hypergraph $(E,U^c)$ by the composite
$E \rightarrow P(V) \rightarrow P(U^c)$.
We get a bialgebra of hypergraphs $(H,\mu,\Delta^\prime, \eta, \epsilon^\prime)$
where the coproduct is:
\[
(E,V) \overset{\Delta^\prime} \longmapsto
\sum_{U \subseteq V} (E_{|U}, U) \otimes (E, U^c).
\]
This is not cocommutative. For the edge this coproduct is now:
\begin{equation} \label{eq:pro-cop3-full}
\linvert \, \overset{\Delta^\prime}\longmapsto \, 1 \otimes \linvert +
2 \bullet \otimes \pointl + \linvert \otimes \edge,
\end{equation}
while in $(H, \mu, \Delta)$ the coproduct is:
\[
\linvert \, \overset{\Delta}\longmapsto \, 1 \otimes \linvert +
2 \bullet \otimes \bullet + \linvert \otimes 1.
\]
The counit $\epsilon^\prime$ is:
\[
(E,V) \mapsto \begin{cases} 1, & V = \emptyset \\
0, & \text{ otherwise. } \end{cases}
\]
As in Section \ref{sec:hyp} we get four bialgebras:
\[ (H, \mu, \eta, \Delta^\prime, \epsilon^\prime), \quad
(H, \mu, \eta, \Delta^{\prime d}, \epsilon^{\prime d}), \quad
(H, \mu^c, \eta, \Delta^{\prime c}, \varepsilon^{\prime c}), \quad
(H, \mu^c, \eta, \Delta^{\prime cd}, \varepsilon^{\prime cd}).
\]
\vskip 2mm
By sending a hypergraph $(E,V) \mapsto (E^*,V)$ (i.e.~omitting all empty edges),
we get a connected {\it quotient} bialgebra
$(H^\circ, \mu, \Delta^\prime)$, which now sends the edge to:
\begin{equation} \label{eq:pro-cop3}
\linvert \, \overset{\Delta^\prime}\longmapsto \, 1 \otimes \linvert +
2 \bullet \otimes \pointl + \linvert \otimes 1.
\end{equation}
This is essentially the Hopf algebra of hypergraphs in \cite[Subsec.20.1]{AA}.
Let $H^\emptyset$ be the hypergraphs with empty vertex set and coproduct:
\[ (E,\emptyset) \overset{\Delta^\emptyset}{\mapsto} (E,\emptyset) \otimes
(E,\emptyset). \]
Tensoring the bialgebras on $H^\circ$ and $H^\emptyset$ we get
another bialgebra structure on $H = H^\circ \otimes H^\emptyset$.
However we consider
$(H, \mu, \Delta^\prime, \eta, \epsilon^\prime)$ to be the fundamental one
as the others are derived from it.
\subsection{Question on cointeraction}
The bialgebra $(H^\circ,\mu,\Delta^\prime)$
does not seem to come with a
cointeraction.
To see the problem, consider the single edge. The natural coproduct
on a cointeracting algebra would be:
\[ \linvert \, \overset{\delta^\prime}\longmapsto \, \tobull \otimes \linvert +
\linvert \otimes \bullet. \]
Considering the half-edge, we should also expect to
sum over the subsets of the
edges, giving the two-term coproduct:
\begin{equation} \label{eq:pro-half}
\pointl \, \overset{\delta^\prime}\longmapsto \, \bullet \otimes \pointl + \pointl
\otimes \bullet.
\end{equation}
The requirement for cointeraction is the equation
\begin{equation*}
({\bf 1}_B \otimes \Delta) \circ \delta = m_{13,2,4} \circ (\delta \circ \delta)
\circ \Delta.
\end{equation*}
However applying this to the edge in $(H^\circ, \mu, \Delta^\prime)$, the left
side gets $6$ terms, and the right side $8$ terms.
The map $\delta^\prime \otimes \delta^\prime $ applied to \eqref{eq:pro-cop3} gives
two extra terms due to the half-edge \eqref{eq:pro-half},
compared to $\delta \otimes \delta$ in $(H^\circ, \mu, \Delta)$.
\begin{question}
Is it possible in some way to get cointeraction for the bialgebra
$(H^\circ, \mu, \Delta^\prime)$? If not, why is it not possible?
\end{question}
\bibliographystyle{amsplain}
| {
"timestamp": "2023-01-02T02:05:24",
"yymm": "2212",
"arxiv_id": "2212.03501",
"language": "en",
"url": "https://arxiv.org/abs/2212.03501",
"abstract": "We consider the bialgebra of hypergraphs, a generalization of Schmitt's Hopf algebra of graphs, and show it has a cointeracting bialgebra. So one has a double bialgebra in the sense of L. Foissy, who recently proved there is then a unique double bialgebra morphism to the double bialgebra structure on the polynomial ring ${\\mathbb Q}[x]$. We show the polynomial associated to a hypergraph is the hypergraph chromatic polynomial.Moreover hypergraphs occurs in quartets: there is a dual, a complement, and a dual complement hypergraph. These correspondences are involutions and give rise to three other double bialgebras, and three more chromatic polynomials. In all we give eight quartets of bialgebras which includes recent bialgebras of M. Aguiar and F. Ardila, and by L. Foissy.",
"subjects": "Rings and Algebras (math.RA); Combinatorics (math.CO)",
"title": "Eight times four bialgebras of hypergraphs, cointeractions, and chromatic polynomials",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232904845687,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7093460387806809
} |
https://arxiv.org/abs/1709.09738 | Formulations of the PFR Conjecture over $\mathbb{Z}$ | The polynomial Fre\u{\i}man--Ruzsa conjecture is a fundamental open question in additive combinatorics. However, over the integers (or more generally $\mathbb{R}^d$ or $\mathbb{Z}^d$) the optimal formulation has not been fully pinned down. The conjecture states that a set of small doubling is controlled by a very structured set, with polynomial dependence of parameters. The ambiguity concerns the class of structured sets needed. A natural formulation in terms of generalized arithmetic progressions was recently disproved by Lovett and Regev. A more permissive alternative is in terms of \emph{convex progressions}; this avoids the obstruction, but uses a significantly larger class of objects, yielding a weaker statement. Here we give another formulation of PFR in terms of Euclidean ellipsoids (and some variations). We show it is in fact equivalent to the convex progression version; i.e. that the full range of convex progressions is not needed. The key ingredient is a strong result from asymptotic convex geometry. | \section{Introduction}
A celebrated theorem of Fre\u{\i}man \cite{freiman} states that if $A \subseteq \ZZ$ is a finite set satisfying the small doubling hypothesis $|A+A| \le K |A|$ for some small $K$, where $A +A$ is the sumset $\{x+y \colon x,y \in A\}$, then $A$ must be contained in a generalized arithmetic progression, i.e.~a set of the form
\[
P = \big\{ a_0 + n_1 a_1 + \dots + n_d a_d \colon n_i \in \{0,\dots,N_i - 1\} \big\}
\]
for some integers $a_i$, where the rank $d$ and size\footnote{Note this quantity may not be the same as the cardinality $|P|$ if the sums are not distinct. Such a case is called an \emph{improper} generalized arithmetic progression. We use the term ``size'' in this technical sense throughout.} $N_1 \dots N_d$ of the generalized arithmetic progression are bounded by functions of $K$ only. An analogue for subsets of general abelian groups was obtained by Green and Ruzsa \cite{green-ruzsa}.
The polynomial Fre\u{\i}man--Ruzsa conjecture is a central open question in additive combinatorics, and asks for essentially optimal quantitative bounds in modified versions of these structural results. The most commonly discussed case is when $A \subseteq \FF_p^n$ for some bounded $p$; then the conjecture states that if $|A+A| \le K |A|$ then $A$ is contained in $O(K^{O(1)})$ cosets of the same subgroup $H \le \FF_p^n$, where $|H| = O(K^{O(1)}) |A|$.
For subsets of $\ZZ$, or more generally $\ZZ^m$ or $\RR^m$, there has been much less consensus on what the correct statement should be. One natural formulation is the following.
\begin{conjecture}[PFR; GAP formulation]
\label{conj:gap-pfr}
If $G = \RR^m$ or $\ZZ^m$ and $A \subseteq G$ is a finite set satisfying $|A+A| \le K |A|$, then there exists a generalized arithmetic progression
\[
P = \big\{ a_0 + n_1 a_1 + \dots + n_d a_d \colon n_i \in \{0,\dots,N_i - 1\} \big\}
\]
for some choice of $a_i \in G$, with rank\footnote{Here and subsequently all $O(\cdot)$ constants are absolute and in particular independent of $m$.} $d = O(\log 2 K)$ and size $O(K^{O(1)}) |A|$; and a set $X \subseteq G$, $|X| = O(K^{O(1)})$; such that $A \subseteq P + X$.
\end{conjecture}
Unfortunately this formulation is false: this was shown recently by Lovett and Regev \cite{lovett-regev}, answering a question of Green \cite{green}. Their counterexample (in $\RR^m$) has the form $A = B \cap \cL$ where $B \subseteq \RR^m$ is a large Euclidean ball and $\cL \subseteq \RR^m$ is a randomly chosen lattice of full rank.
This leaves open the following formulation, first discussed (in a closely related form) in \cite{green}.
\begin{conjecture}[PFR; convex formulation]
\label{conj:convex-pfr}
For $A$, $G$, $K$ as in Conjecture \ref{conj:gap-pfr}, there exists a \emph{convex progression}
\[
P = \big\{ a_0 + n_1 a_1 + \dots + n_d a_d \colon (n_1,\dots,n_d) \in \ZZ^d \cap B \big\}
\]
where $B \subseteq \RR^d$ is some centrally symmetric convex body and $a_0,\dots,a_d \in G$ are given, with rank $d = O(\log 2 K)$ and size $|B \cap \ZZ^d| = O(K^{O(1)}) |A|$; and a set $X \subseteq G$, $|X| = O(K^{O(1)})$; such that $A \subseteq P + X$.
\end{conjecture}
The previous formulation is (equivalent to) the special case of this one where $B$ must be an axis-aligned cuboid $[-N_1,N_1] \times \dots \times [-N_d, N_d]$; and such $B$ do not suffice. It is natural to ask how large a collection of convex sets $B$ is necessary for the conjecture to have a chance of being true.
Our main formulation will use only convex sets $B$ which are (not necessarily axis-aligned) \emph{Euclidean ellipsoids}, i.e.~sets of the form $B = \{ v \in \RR^d \colon \|\gamma(v)\|_2 \le 1\}$ for some $\gamma \in \GL_d(\RR)$. That is, we state the following:
\begin{conjecture}[PFR; ellipsoid formulation]
\label{conj:ellipsoid-pfr}
For $A$, $G$, $K$ as before, there exists an \emph{ellipsoid progression}
\[
P = \big\{ a_0 + n_1 a_1 + \dots + n_d a_d \colon \vec{n} = (n_1,\dots,n_d) \in \ZZ^d,\, \|\gamma(\vec{n})\|_2 \le 1 \big\}
\]
where $a_0,\dots,a_d \in G$ and $\gamma \in \GL_d(\RR)$ are given, with rank $d = O(\log 2 K)$ and size $|\{ \vec{n} \in \ZZ^d \colon \|\gamma(\vec{n})\|_2 \le 1 \}| = O(K^{O(1)}) |A|$; and a set $X \subseteq G$, $|X| = O(K^{O(1)})$; such that $A \subseteq P + X$.
\end{conjecture}
Again, the $P$ here are a special case of those in Conjecture \ref{conj:convex-pfr}. Our main result is:
\begin{theorem}
\label{thm:main}
Conjectures \ref{conj:convex-pfr} and \ref{conj:ellipsoid-pfr} are equivalent.
\end{theorem}
That is, if Conjecture \ref{conj:convex-pfr} is true at all, then it suffices to consider convex sets $B$ that are ellipsoids.
\begin{remark}
In fact there is nothing special about ellipsoids: it is true, and our proof will implicitly show, that the class of all convex bodies $\{ \gamma(B_0) \colon \gamma \in \GL_d(\RR) \}$ is sufficient for any fixed convex body $B_0$ (or rather, one for each $d$). For instance, yet another formulation would be in terms of \emph{skew progressions} (not a standard term)
\[
P = \big\{ a_0 + n_1 a_1 + \dots + n_d a_d \colon \vec{n} = (n_1,\dots,n_d) \in \ZZ^d,\, |\phi_1(\vec{n})|, \dots, |\phi_d(\vec{n})| \le 1 \big\}
\]
for some $a_i$ and some basis of linear forms $\phi_1,\dots,\phi_d \colon \RR^d \to \RR$; i.e., taking $B_0 = [-1,1]^d$. So, the weakness of Conjecture \ref{conj:gap-pfr} exploited by the Lovett--Regev argument is not the restriction on the shape of $B$, but the requirement that it be aligned to some lattice basis for $\ZZ^d$.
\end{remark}
\begin{remark}
Another variant would be to replace the set $P$ by a Gaussian density
\[
\theta(x) = \sum_{\vec{n} \colon a_0 + n_1 a_1 + \dots + n_d a_d = x} \exp(-\|\gamma(\vec{n})\|_2^2)
\]
and replace the covering requirement $A \subseteq X + P$ by a correlation one such as $\langle 1_A, \theta \rangle \gg K^{-O(1)} \|\theta\|_2 \|1_A\|_2$. This is readily seen to be equivalent to Conjecture \ref{conj:ellipsoid-pfr} using standard tools.
\end{remark}
The non-trivial ingredient in the proof of Theorem \ref{thm:main} comes from asymptotic convex geometry, and can be encapsulated in the following (very much non-trivial) result due to Milman \cite{milman}.\footnote{The reader could consult \cite{gm} for an overview of these ideas. In the case that $B_2$ is a Euclidean ball and $\gamma_1 = \id$, the ellipsoid $\gamma_2(B_2)$ is referred to as the \emph{$M$-ellipsoid} of $B_1$.}
\begin{theorem}[Milman's reverse Brunn--Minkowski inequality]
\label{thm:mil}
There is an absolute constant $c > 0$ such that the following holds. For any $d$, and any two convex bodies $B_1$ and $B_2$ in $\RR^d$, there exist volume-preserving linear maps $\gamma_1, \gamma_2 \in \SL_d(\RR)$ such that for all $t_1, t_2 > 0$:
\[
\vol(t_1 \gamma_1(B_1) + t_2 \gamma_2(B_2))^{1/d} \le c \left(t_1 \vol(B_1)^{1/d} + t_2 \vol(B_2)^{1/d} \right) \, .
\]
It is clear that one can take $\gamma_1 = \id$ if desired.
\end{theorem}
\section{Proof of the main theorem}
As we have stated, most of the work in the proof is done by Theorem \ref{thm:mil}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Suppose $A \subseteq G$ with $|A+A| \le K |A|$ is given. Applying Conjecture \ref{conj:convex-pfr} to $A$, we are given a symmetric convex body $C \subseteq \RR^d$, elements $a_0 \in G$, $\vec{a} \in G^d$ and $X \subseteq G$ such that $d = O(\log 2 K)$, $|C \cap \ZZ^d| = O(K^{O(1)}) |A|$, $|X| = O(K^{O(1)})$ and $A \subseteq P + X$ where
\[
P = \{ a_0 + \vec{a} \cdot \vec{n} \colon \vec{n} \in C \cap \ZZ^d \} \, .
\]
Let $B_0$ denote the standard Euclidean ball $\{ v \in \RR^d \colon \|v\|_2 \le R\}$ where $R$ is chosen so that $\vol B_0 = \vol C = V$. Applying Theorem \ref{thm:mil}, we obtain $\gamma \in \SL_d(\RR)$ such that, writing $B$ for the ellipsoid $\gamma(B_0)$, we have
\begin{equation}
\label{eq:vol}
\vol(t_1 C + t_2 B) \le c^d (t_1 + t_2)^d V
\end{equation}
for any $t_1, t_2 > 0$. We make the following claim:
\begin{claim*}
There exist finite sets $Y, Z \subseteq \ZZ^d$ with $|Y|, |Z| = \exp(O(d))$, such that
\[
(C \cap \ZZ^d) \subseteq Y + (B \cap \ZZ^d)
\]
and
\[
(B \cap \ZZ^d) \subseteq Z + (C \cap \ZZ^d)\, .
\]
\end{claim*}
Given this, we can deduce that
\[
|B \cap \ZZ^d| \le |Z|\, |C \cap \ZZ^d| = \exp(O(d)) O(K^{O(1)}) |A| = O(K^{O(1)}) |A|
\]
as $d = O(\log 2 K)$, and that $A \subseteq P + X \subseteq P' + X'$ where
\[
P' = \big\{ a_0 + \vec{a} \cdot \vec{n} \colon \vec{n} \in B \cap \ZZ^d \big\}
\]
and
\[
X' = X + \big\{ \vec{a} \cdot \vec{y} \colon \vec{y} \in Y \big\} \, ,
\]
meaning $|X'| \le |X|\, |Y| = O(K^{O(1)})$; so this suffices to prove the result.
\begin{proof}[Proof of claim]
This is a fairly standard packing/covering argument. Let $Y$ be a maximal subset of $C \cap \ZZ^d$ such that the sets $y + B/2$ for $y \in Y$ are disjoint. By maximality, $C \cap \ZZ^d \subseteq Y + B$, and hence
\[
C \cap \ZZ^d \subseteq (Y + B) \cap \ZZ^d = Y + (B \cap \ZZ^d) \, .
\]
Also, each set $y + B/2$ for $y \in Y$ is contained in $C + B/2$, so by disjointness and volume-counting we have
\[
|Y| \le \frac{\vol(C + B/2)}{\vol(B/2)} \le \frac{c^d (3/2)^d V}{(1/2)^{d} V} = (3 c)^d
\]
by \eqref{eq:vol}. The argument for $Z$ is analogous, exchanging the roles of $B$ and $C$.
\end{proof}
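The packing/covering step can be checked numerically in low dimension. The following sketch (our own illustration with arbitrarily chosen parameters: $d=2$, $C$ a square, $B$ a closed Euclidean disk) builds a maximal $Y$ with the translates $y + B/2$ pairwise disjoint, then verifies the covering $C \cap \ZZ^2 \subseteq Y + B$ forced by maximality.

```python
import math

# Our own toy check (d = 2): C = [-10,10]^2, B = closed disk of radius 4.
r_B = 4.0
lattice_pts = [(x, y) for x in range(-10, 11) for y in range(-10, 11)]

Y = []
for p in lattice_pts:
    # closed disks of radius r_B/2 about two centres are disjoint
    # exactly when the centres are more than r_B apart
    if all(math.dist(p, q) > r_B for q in Y):
        Y.append(p)

# maximality forces every lattice point of C into Y + B
covered = all(any(math.dist(p, q) <= r_B for q in Y) for p in lattice_pts)
print(len(Y), covered)
```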
This completes the proof of Theorem \ref{thm:main}.
\end{proof}
% Source: arXiv:1709.09738, ``Formulations of the PFR Conjecture over $\mathbb{Z}$'',
% Number Theory (math.NT); Combinatorics (math.CO), September 2017.
% Source: arXiv:math/9506224, ``Topological conjugacy of circle diffeomorphisms''.
% Abstract: The classical criterion for a circle diffeomorphism to be topologically
% conjugate to an irrational rigid rotation was given by A. Denjoy. In 1985, one of
% us (Sullivan) gave a new criterion. There is an example satisfying Denjoy's bounded
% variation condition but not Sullivan's Zygmund condition, and vice versa. This
% paper gives a third criterion which is implied by either of the above criteria.
\section{Introduction}
Given an orientation preserving homeomorphism $f:
S^{1}\rightarrow S^{1}$,
the rotation
number \[\rho(f)=\lim_{n\rightarrow \infty}\frac{F^{n}(x)-x}{n} \;\;mod
\;\; 1\]
is independent of the point $x\in R^{1}$ and of the choice of lift
$F:R^{1}\rightarrow R^{1}$ of $f$, and it is invariant under
topological conjugation. The rotation number $\rho(f)$ is
rational if and only if $f$ has a periodic orbit.
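As a numeric illustration (our own sketch, with an arbitrarily chosen Arnold-type lift, not from the text), the averages $(F^{n}(x)-x)/n$ converge to $\rho(f)$ uniformly in the starting point, since $|F^{n}(x)-x-n\rho(f)|<1$ for any lift of a circle homeomorphism.

```python
import math

# Hypothetical parameters: F(x) = x + a + (b/(2*pi))*sin(2*pi*x); b < 1
# makes F the lift of an orientation preserving circle diffeomorphism.
a, b = 0.3, 0.5

def F(x):
    return x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)

def rho_estimate(x0, n):
    y = x0
    for _ in range(n):
        y = F(y)
    return (y - x0) / n

# two different starting points and iteration counts agree to O(1/n)
r1 = rho_estimate(0.0, 2000)
r2 = rho_estimate(0.7, 4000)
print(r1, r2)
```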
By the theory of Poincar\'e, for an orientation preserving homeomorphism
$f:S^{1}\rightarrow S^{1}$: if $f$ has
a periodic orbit then its dynamics are trivial (any two periodic
orbits have the same period and every orbit tends to a periodic orbit); if
$f$ has no periodic orbit then it is semi-conjugate to an
irrational rigid
rotation. A natural question is whether or not the semi-conjugacy
can be improved to a topological conjugacy. In what
follows, a rigid rotation always means an
irrational rigid rotation. Denjoy proved the following.
\begin{description}\item[Theorem A] Given an orientation preserving
homeomorphism $f$ of the circle $S^{1}$ with an irrational rotation
number,
$f$ is topologically conjugate to a rigid rotation provided $f$ is
$C^{1}$ and the logarithm of the
derivative of $f$ is of bounded variation.
\end{description}
There is an example (the Denjoy
counterexample) showing that $C^{1}$ smoothness is not enough [6];
it is shown in [15] that even $C^{\infty }$ smoothness is not enough.
Actually
an orientation preserving circle homeomorphism with an irrational
rotation
number is topologically conjugate to an irrational rigid rotation if
and only if it has no wandering interval. Denjoy achieved this by
controlling the variation of the derivative.
Recently one of us proved the nonexistence of wandering intervals by
assuming that the logarithm of the derivative satisfies the Zygmund
condition.
\begin{description}\item[Definition]
A continuous map $f:R^{1}\rightarrow R^{1}$ satisfies the Zygmund
condition if there
exists $B>0$ such that \[\sup_{x,t}|\frac{f(x+t)+f(x-t)-2f(x)}{t}| \leq
B.\]
\end{description}
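For intuition, $f(x)=x\log x$ (extended by $f(0)=0$) is a standard example of a function in the Zygmund class that fails to be Lipschitz near $0$. The following sketch (our own, restricted to a grid with $x-t\geq 0$) samples the symmetric difference quotient.

```python
import math

# Our own numeric sketch: sample |f(x+t)+f(x-t)-2f(x)|/t for
# f(x) = x*log x (f(0) = 0); the largest sampled value is 2*log 2.
def f(x):
    return 0.0 if x == 0.0 else x * math.log(x)

vals = []
for k in range(1, 200):
    t = k / 1000.0
    j = 0
    while True:
        x = t + j / 400.0   # keep x - t >= 0
        if x + t > 1.0:
            break
        vals.append(abs(f(x + t) + f(x - t) - 2.0 * f(x)) / t)
        j += 1
print(max(vals))
```

The maximum $2\log 2\approx 1.386$ is attained at $x=t$, and the quotient stays bounded even though $f'(x)=\log x+1$ blows up at $0$.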
\begin{description}\item[Theorem B] Given an orientation preserving
homeomorphism $f$ of the circle $S^{1}$ with an irrational rotation
number, $f$
is topologically conjugate to a rigid rotation if $f$ is $C^{1}$ and the
logarithm of the derivative satisfies the Zygmund condition.
\end{description}
But there is an example satisfying Denjoy's bounded variation condition
but not the Zygmund condition, and vice versa (see section 5). This paper
gives a third criterion which is implied by either of the above
two and which implies that $f$ is topologically conjugate to a rigid
rotation.
\begin{description}\item[Definition] Let $I$ be a closed interval of
$R^{1}$.
A continuous map $f:I\rightarrow R^{1}$ is of bounded Zygmund variation
if there exists $B>0$ such that
\[\sup_{\{x_{0},x_{1},\cdots,x_{n}\}}\sum_{i=0}^{n-1}
|f(x_{i})+f(x_{i+1})-2f(\frac{x_{i}+x_{i+1}}{2})| \leq B ,\]
where $\{ x_{0},x_{1},\cdots,x_{n} \} $ is a partition of the interval
$I$. The supremum is called Zygmund variation of $f$ over $I$. It is denoted by
$ZV(f|_{I})$.
\end{description}
\begin{description}\item[Definition] Let $I$ be a closed interval of
$R^{1}$. A continuous map $f:I\rightarrow R^{1}$ is of bounded quadratic
variation if there exists $B>0$ such that
\[\sup_{\{x_{0},x_{1},\cdots,x_{n}\}
}\sum_{i=0}^{n-1}(f(x_{i+1})-f(x_{i}))^{2} \leq B ,\]
where $\{x_{0},x_{1},\cdots,x_{n}\}$ is a partition of the interval
$I$. The supremum is called quadratic variation of $f$ over $I$. It is denoted by $QV(f|_{I})$.
\end{description}
\begin{description}\item[Theorem C] Given an orientation preserving
homeomorphism $f$ of the circle $S^{1}$ with an irrational rotation
number, $f$
is topologically conjugate to a rigid rotation if $f$ is $C^{1}$ and the
logarithm of the derivative has bounded Zygmund variation and
bounded quadratic variation.
\end{description}
\section{Cross ratio distortion}
In this section we control cross ratio distortion for
standard 4-tuples in terms of Zygmund variation and quadratic variation
(compare $\S 1$ of [5]).
Let $a,b,c,d\in R^{1}$ and $a<b<c<d$.
One cross ratio $[a,b,c,d]=\frac{(d-b)(c-a)}{(c-b)(d-a)}$ can be
computed by \[log[a,b,c,d]=\int \int_{S}\frac{dxdy}{(x-y)^{2}},\]
where $S$ is $\{(x,y):a\leq x\leq b, c\leq y\leq d\}$.
Another cross ratio $(a,b,c,d)$ is $\frac{(c-b)(d-a)}{(b-a)(d-c)}$
and, obviously, \[[a,b,c,d]=1+\frac{1}{(a,b,c,d)}.\]
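Both cross ratios and the identity relating them are easy to sanity-check numerically (our own check, not from the text); the value $4/3$ for an equally spaced 4-tuple is used later in this section.

```python
# Our own numeric check of the two cross ratios and the identity
# [a,b,c,d] = 1 + 1/(a,b,c,d).
def cr1(a, b, c, d):
    # [a,b,c,d] = (d-b)(c-a) / ((c-b)(d-a))
    return (d - b) * (c - a) / ((c - b) * (d - a))

def cr2(a, b, c, d):
    # (a,b,c,d) = (c-b)(d-a) / ((b-a)(d-c))
    return (c - b) * (d - a) / ((b - a) * (d - c))

print(cr1(0.0, 1.0, 2.0, 3.0))   # standard (equally spaced) 4-tuple: 4/3
print(cr1(0.0, 0.5, 2.0, 5.0), 1 + 1 / cr2(0.0, 0.5, 2.0, 5.0))
```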
Given a homeomorphism $h$, the distortion of the second cross ratio
under $h$ is \[\frac{(ha,hb,hc,hd)}{(a,b,c,d)}.\]
In this paper, by the cross ratio distortion we mean the distortion of
the second cross ratio.
We call a 4-tuple $a<b<c<d$ standard if $b-a=c-b=d-c$. The cross ratio
distortion under $h$ of a standard 4-tuple is bounded away from zero
and from above if and
only if $(ha,hb,hc,hd)$ is also. If $h$ is a $C^{1}$ diffeomorphism, then
\[log[1+\frac{1}{(ha,hb,hc,hd)}]=log[ha,hb,hc,hd]=\int \int _{S}(h\times
h)^{*}\mu ,\]
where $\mu $ is the measure $\frac{dxdy}{(x-y)^{2}}$.
Clearly the cross ratio distortion under $h$ of a standard 4-tuple is
bounded
away from zero and from above if and only if $log[ha,hb,hc,hd]$ is.
Calculating the integrand, we get \[\frac{h'xh'y}{(hx-hy)^{2}}=
\frac{1}{(x-y)^{2}}\frac{h'xh'y}{[h']^{2}_{xy}},\]
where $[h']_{xy}$ is the average of $h'$ over the interval $[x,y]$.
Since $b-a=c-b=d-c$,
$\int \int_{S}\frac{dxdy}{(x-y)^{2}}=log([a,b,c,d])=log\frac{4}{3}$.
Thus a bound on $\frac{h'xh'y}{[h']^{2}_{xy}}$ yields a bound on
the cross ratio distortions for standard 4-tuples.
We say $h$ satisfies the {\em bounded Koebe condition} if one of the
following equivalent conditions holds:
\[1)\;\;\frac{1}{M}\leq \frac{h'xh'y}{[h']^{2}_{xy}} \leq M \; for\;
some\; M>0,\]
\[2)\;\;|log\frac{h'xh'y}{[h']^{2}_{xy}}|\leq M' \; for\; some
\;M'>0.\]
The following proposition is trivial.
\newtheorem{prop}{Prop.}
\begin{prop}
If $h$ satisfies the bounded Koebe condition then the cross ratio
distortion under $h$ of a standard 4-tuple is bounded away
from zero and from above.
\end{prop}
In order to estimate the $\log $ in 2), i.e.,
\[logh'x+logh'y-2log[h']_{xy},\]
let us consider the following two terms:
\[a)\;\;logh'x+logh'y-2[logh']_{xy}\] and
\[b)\;\;log[h']_{xy}-[logh']_{xy}\;.\]
Remark: If both a) and b) are bounded, then 1) and 2) hold.
Expression a) can be controlled by the Zygmund
variation of $logh'$ on the interval $[x,y]$ because of the following
proposition.
\begin{prop}
Let $\phi $ be a continuous function from $R^{1}$ to $R^{1}$. Then
\[|\phi (x)+\phi (y)-2[\phi ]_{xy}|\]
is no more than the Zygmund variation $ZV(\phi |_{[x,y]})$ of $\phi $ over $[x,y]$.
\end{prop}
Remark: As we define the Zygmund variation of $\phi $ on the interval
$[a, b]$ in the introduction, we can also define the average Zygmund
variation of $\phi $ on $[a, b]$ by replacing the value $\phi (\frac{x_{i}+
x_{i+1}}{2})$ of $\phi
$ at the middle point by the average $\frac{1}{|x_{i+1}-x_{i}|}
\int _{[x_{i}, x_{i+1}]}\phi $ of $\phi $ over $[x_{i}, x_{i+1}]$.
Prop. 2 tells us that the average Zygmund variation of $\phi $ over
$[a, b]$ is no more than the Zygmund variation of $\phi $ over $[a, b]$.
Conversely one can show that the Zygmund variation of $\phi $ over $[a, b]$
is no more than twice the average Zygmund variation of $\phi $ over
$[a, b]$.
Hence these two conditions are actually equivalent.
Proof: Without loss of generality, assume $[x,y]=[0,1]$.
Then \[[\phi ]_{01}=\int _{0}^{1}\phi dx. \]
Suppose that we define successive approximations to the average of $\phi $
over $[a,b]$ by
\[A_{0}[a,b] = (\phi (a) + \phi (b))/2\]
and
\[A_{n+1}[a,b] = (A_{n}[a,m] + A_{n}[m,b])/2 ,\]
where $m = (a+b)/2 $. Similarly, measure non-linearity by expressions
\[N_0[a,b] = A_0[a,b] - A_1[a,b] = (\phi(a) - 2 \phi(m) + \phi(b))/4\]
and
\[N_{n+1}[a,b] =(N_n[a,m] + N_n[m,b])/2 ,\]
or equivalently
\[N_{n}[a,b] = A_{n}[a,b] - A_{n+1}[a,b] .\]
Then
\[A_{0}[0,1] -A_{n}[0,1] = N_{0}[0,1] + \cdots + N_{n-1}[0,1]\]
with
\[| N_{k}[0,1] | \leq ZV(\phi|_{[0,1]})/2^{k+2}\]
hence
\[2 |A_{0}[0,1] - \lim _{n\rightarrow \infty } A_{n}[0,1]|
\leq ZV(\phi |_{[0,1]}) ,\]
i.e.,
\[|\phi (0)+\phi (1)-2[\phi ]_{01}|\leq ZV(\phi |_{[0,1]}).\]
Next we estimate the expression b) in terms of the quadratic variation.
\newtheorem{lemma}{Lemma}
\begin{lemma}
If $\epsilon \geq \delta >-1$ and we write \[log(1+\epsilon)=\epsilon
-\frac{\epsilon ^{2}}{2}\Delta (\epsilon ),\]
then there exists $B(\delta )>0$ depending only on $\delta $ such that
$|\Delta (\epsilon )|\leq B(\delta )$.
\end{lemma}
Proof: Since \[\Delta (\epsilon )=\frac{\epsilon -log(1+\epsilon )
}{\epsilon ^{2}/2},\]
the bound follows by an elementary calculation.
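A quick numeric sketch of the lemma (our own, with $\delta=-0.9$ chosen arbitrarily): sampling $\Delta$ on $[\delta,10]$ gives $B(-0.9)\approx 3.46$, attained at the left endpoint, while $\Delta(\epsilon)\to 1$ as $\epsilon \to 0$ and $\Delta(\epsilon)\to 0$ as $\epsilon \to \infty$.

```python
import math

# Our own numeric check: Delta(eps) = (eps - log(1+eps)) / (eps^2/2)
# stays bounded on [delta, 10] for delta = -0.9.
def Delta(eps):
    return (eps - math.log1p(eps)) / (eps * eps / 2)

delta = -0.9
samples = [delta + k * 0.001 for k in range(10901)]   # up to 10.0
vals = [abs(Delta(e)) for e in samples if abs(e) > 1e-9]
print(max(vals))
```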
\newtheorem{definition}{Definition}
\begin{definition}
A quantity $C_{1}$ (depending on parameters) is a big O of another
quantity $C_{2}$ (depending on the parameters) if there exists a
constant $B$ (independent of the parameters) such that
\[|C_{1}|\leq B|C_{2}|.\]
\end{definition}
\begin{prop}
Suppose the derivative $h'$ satisfies $1/C\leq h' \leq C $ for some
$C>0$. Then
the expression b) is equal to the big $O$ of the quadratic variation
of $logh'$ over the interval $[x,y]$.
\end{prop}
Proof: Let $h'(x)=a$. The expression b) is unchanged if we multiply
$h'$ by $1/a$. Write $(1/a)h'$ on $J=[x,y]$ as $1+\epsilon $ where
$\epsilon $ is a function of $(t-x), t\in J$. Expand the two terms of b)
\[log\frac{1}{|J|} \int_{J}(1+\epsilon)
-\frac{1}{|J|}\int_{J}log(1+\epsilon )\]
\[=log(1+\frac{1}{|J|} \int_{J}\epsilon )
-\frac{1}{|J|}\int_{J}[\epsilon -\frac{\epsilon ^{2}}{2}\Delta
(\epsilon )]\]
\[=[\frac{1}{|J|}\int_{J} \epsilon -\frac{1}{2}(\frac{1}{|J|}\int_{J}
\epsilon)^{2} \Delta (\frac{1}{|J|}\int_{J} \epsilon
)]-[\frac{1}{|J|}\int_{J} \epsilon -\frac{1}{|J|}\int_{J}\frac{\epsilon
^{2}}{2}\Delta (\epsilon )]\]
\[=-\frac{1}{2}(\frac{1}{|J|}\int_{J}\epsilon )^{2}\Delta
(\frac{1}{|J|}\int_{J}\epsilon )+\frac{1}{|J|}\int_{J}\frac{\epsilon
^{2}}{2}\Delta (\epsilon ).\]
By the Cauchy inequality, $(\frac{1}{|J|}\int _{J}\epsilon )^{2}\leq
\frac{1}{|J|}\int _{J}\epsilon ^{2}$.
Since $1/C\leq h'\leq C$ for some $C>0$, there exists $\delta (C)>-1$
such that $\epsilon =\frac{h^{'}t}{h^{'}x}-1\geq \delta $ for any
$t\in J$, $J=[x, y]$.
Hence $\frac{1}{|J|}\int _{J}\epsilon \geq \delta $. By
the Lemma 1, there exists $B(\delta )>0$ such that $|\Delta (\epsilon )|
\leq B(\delta )$. Hence $|\Delta (\frac{1}{|J|}\int _{J}\epsilon )|\leq
B(\delta )$.
Furthermore, $\epsilon =\frac{h't}{h'x}-1$ is
a big $O$ of $log\frac{h't}{h'x}=logh't-logh'x$. So the
expression b) is a big $O$ of the quadratic variation of $logh'$ over
$J$.
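As a numeric illustration of Prop. 3 (our own, for an arbitrarily chosen derivative), take $logh'=g$ with $g(t)=0.2\sin(2\pi t)$ on $J=[0,1]$: the Jensen gap b) is about $0.01$, well below the quadratic variation of $g$, which is at least $0.24$ (obtained by summing squared increments over the partition at the extrema of $g$).

```python
import math

# Our own sketch: expression b) = log([h']_J) - [log h']_J for
# h'(t) = exp(g(t)), g(t) = 0.2*sin(2*pi*t), J = [0,1].
g = lambda t: 0.2 * math.sin(2 * math.pi * t)
N = 100_000
mean_hp = sum(math.exp(g((i + 0.5) / N)) for i in range(N)) / N
mean_log = sum(g((i + 0.5) / N) for i in range(N)) / N
gap = math.log(mean_hp) - mean_log            # expression b)

# lower bound for QV(g) via the partition at the extrema of g
extrema = [0.0, 0.25, 0.75, 1.0]
qv_lower = sum((g(v) - g(u)) ** 2 for u, v in zip(extrema, extrema[1:]))
print(gap, qv_lower)
```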
The following proposition will be applied in section 4 to the
iterates of a circle diffeomorphism $f$.
\begin{prop}
Suppose $h:I\rightarrow R^{1}$ is a $C^{1}$ diffeomorphism with $h'>0$,
and $logh'$ has bounded Zygmund variation and bounded quadratic
variation over $I$. Assume $J_{0}\subset I$ and $J_{0},\;
J_{1}=h(J_{0}),\;\cdots,\;J_{n}=h^{n}(J_{0})$ are pairwise
disjoint. Then the cross ratio distortion under $h^{n}$ of a standard
4-tuple in the interval $J_{0}$ is the big $O$ of the sum of the
Zygmund variation and the quadratic variation of $\log h^{'}$ on
$\cup _{i=0}^{n-1}J_{i}$.
\end{prop}
Proof: From the expression 2) above Prop. 1, we need to
estimate \[log
\frac{(h^{n})^{'}(x)(h^{n})^{'}(y)}{[(h^{n})^{'}]^{2}_{xy}}.\]
By the chain rule for the derivative of $h^{n}$,
\[log\frac{(h^{n})^{'}(x)(h^{n})^{'}(y)}{[(h^{n})^{'}]^{2}_{xy}}
=\sum _{i=0}^{n-1}log
\frac{h^{'}(h^{i}(x))h^{'}(h^{i}(y))}{[h^{'}]^{2}_{h^{i}(x)h^{i}(y)}}.\]
Each summand can be decomposed into the expression a) and
expression b), by the Prop. 2 and Prop. 3, each summand is the big
$O$ of the sum of the Zygmund variation and the quadratic variation of
$logh^{'}$ over the interval $[h^{i}(x), h^{i}(y)]$, where $i=0, 1, 2,
\cdots , n-1$.
So the cross ratio distortion under $h^{n}$ of a standard 4-tuple in
$J_{0}$ is the big $O$ of the sum of the Zygmund variation and the
quadratic variation of
$logh^{'}$ on $\cup _{i=0}^{n-1}J_{i}$.
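The chain-rule identity $log(h^{n})'(x)=\sum_{i=0}^{n-1}logh'(h^{i}(x))$ underlying the proof can be sanity-checked numerically (our own sketch, with an arbitrarily chosen circle-map lift) against a central finite difference.

```python
import math

# Hypothetical lift h(x) = x + c + (b/(2*pi))*sin(2*pi*x), so
# h'(x) = 1 + b*cos(2*pi*x) > 0 for b < 1.
b, c = 0.5, 0.3
h = lambda x: x + c + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)
hp = lambda x: 1 + b * math.cos(2 * math.pi * x)

def iterate(x, n):
    for _ in range(n):
        x = h(x)
    return x

n, x0, eps = 8, 0.2, 1e-6
# sum of log-derivatives along the orbit vs. finite-difference log (h^n)'
chain = sum(math.log(hp(iterate(x0, i))) for i in range(n))
fd = math.log((iterate(x0 + eps, n) - iterate(x0 - eps, n)) / (2 * eps))
print(chain, fd)
```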
\section{Nonwandering set and ergodicity}
In this section we review some basic techniques due to Denjoy [1].
Suppose $f:S^{1}\rightarrow S^{1}$ is an orientation preserving
homeomorphism with an irrational rotation number.
For $x\in S^{1}$, let \[\omega (x)=\cap_{n\in N}Cl(\{f^{k}(x)|k\geq
n\}),\] \[\alpha (x)=\cap_{n\in N}Cl(\{f^{-k}(x)|k\geq n\}),\]
where $Cl(A)$ means the closure of the set $A$.
They are called $\omega $ limit set of the orbit of $x$ and
$\alpha $ limit set of the orbit of $x$ respectively.
$x\in S^{1}$ is called a wandering point of $f$ if there exists a
neighborhood $U$ of $x$ such that \[f^{k}(U)\cap U=\emptyset , \forall
k\in Z\setminus \{0\}.\]
A point is called a nonwandering point if it is not a wandering point.
$\Omega (f)$ denotes the set of all nonwandering points, which is
called nonwandering set. Clearly it is a closed subset.
A subset $A$ is invariant under $f$ if \[f(A)\subset
A,\;f^{-1}(A)\subset A.\]
A non-empty subset $A$ is minimal for $f$ if it is closed, invariant
under $f$ and there is no non-empty proper closed subset of $A$ which is
invariant under $f$.
\begin{prop}
Suppose $f$ has no periodic point, then
(1) $\Omega (f)=\omega (x)=\alpha (x),\;\forall x\in S^{1};$
(2) $\Omega (f)$ is a minimal set of $f$;
(3) either $\Omega (f)$ is a nowhere dense perfect subset of $S^{1}$ or
$\;\Omega (f)=S^{1}$.
\end{prop}
Proof: (1) $\omega (x)$ is a non-empty closed invariant subset of
$S^{1}$. Let $(\gamma ,\delta )$ be a component of
$S^{1}\setminus \omega
(x)$, then $f^{j}((\gamma ,\delta ))$ is also a component of
$S^{1}\setminus \omega (x)$ for any $j\in Z$. Since $f$ has no periodic
point, $\{ f^{j}([\gamma ,\delta ])| j\in Z\} $ must be pairwise
disjoint and hence $(\gamma,\delta)$ is a wandering interval
of $f$. So $S^{1}\setminus \omega (x) \subset
S^{1}\setminus \Omega (f)$. Hence $\Omega (f)\subset \omega (x)$.
Clearly $\omega (x)\subset \Omega (f)$. So $\Omega (f)=\omega (x)$.
Similarly $\Omega (f)=\alpha (x)$.
(2) Clearly from (1).
(3) Let $\partial \Omega $ denote the boundary of $\Omega $, $\partial
\Omega $ is closed. Since \[\partial \Omega \subset \Omega ,\;f(\partial
\Omega )=\partial f(\Omega )=\partial \Omega ,\]
either $\partial \Omega =\emptyset $ hence $\Omega =S^{1}$ or $\partial
\Omega =\Omega $ hence $\Omega $ is nowhere dense. For the second case,
$\Omega $ is perfect since $\Omega =\omega (y),\; \forall y\in \Omega
$.
\begin{definition}
Suppose $f$ has no periodic point. We say $f$ is ergodic if $\Omega
(f)=S^{1}$, otherwise we say $f$ is not ergodic.
\end{definition}
The following result is well known and its proof can be found in
several references ([1], [2], [3], [4], etc.).
\begin{prop}
Suppose an orientation preserving homeomorphism $f:S^{1}\rightarrow
S^{1}$ has no periodic point and is ergodic, and let $\alpha =\rho (f)$. Then
$f$ is topologically conjugate to an irrational rigid rotation
$\tau _{\alpha }:S^{1}\rightarrow S^{1}$ given by \[\tau _{\alpha }(\xi
) =e^{2\pi i\alpha }\xi .\]
\end{prop}
\section{Proofs of results}
A circle homeomorphism with an irrational rotation number is
topologically conjugate to a rigid rotation if and only if it is
ergodic, in other words if and only if it has no wandering interval.
Denjoy's $C^{1+b.v.}$-condition and the $C^{1+Z}$-condition of [5] both
guarantee the nonexistence of a wandering interval. In this section we
prove that the $C^{1}$ condition, plus bounded Zygmund variation and bounded
quadratic variation of the logarithm of the derivative, guarantees the
nonexistence of a wandering interval.
Before we get into the proofs of these results, we need the following
technical lemmas.
\begin{prop} [Contraction Principle]([10], [6])
Suppose $f: S^{1}\rightarrow S^{1}$ is a circle
homeomorphism with no periodic orbits and $I$ is a subinterval of
$S^{1}$. If $\inf _{n\geq 0} \{|f^{n}(I)|\}=0$, then $I$ is a wandering
interval of $f$.
\end{prop}
Proof: Let $I_{n}= f^{n}(int I)$ and $\Sigma =\cup _{n\geq 0}I_{n}$.
Case 1: If $\Sigma =S^{1}$, then $S^{1}$ is covered by finitely many
$I_{n_{i}}, i=1, 2, ..., k$. Since $\inf _{n\geq 0}\{|I_{n}|\}=0$,
Lebesgue's number lemma implies that there is some $I_{l}$, $l\in N$, contained
in one of the $I_{n_{i}}, i=1, 2, ..., k$; name it $I_{j}$. Without loss of
generality we assume $l>j$. Note $I_{l}=f^{l-j}(I_{j})$. So $f^{l-j}$
has a periodic point in the closure of $I_{j}$. This is a
contradiction.
Case 2: Suppose $\Sigma \neq S^{1}$. If there is no component $U$ of
$\Sigma $ such that some iterate of $U$ intersects with $U$, then $U$
and hence $I$ are wandering intervals. If there is a component $U$ of
$\Sigma $ such that $f^{n}(U)\cap U\neq \emptyset $ for some $n\geq 0$,
then $f^{n}(U)\subseteq U$ and hence $f^{n}$ has a periodic point in the
closure of $U$. This is again a contradiction.
\begin{lemma}[Real Koebe Principle]
If $h:I\rightarrow R^{1}$ does not increase the cross ratio distortions
for standard 4-tuples too much then the quasisymmetric distortions
for standard interior triples are controlled.
More precisely, if $x,\; y\in I$ are such that $|x-y|$ is at most the
distance to the boundary $\partial I $ of $I$, and $z=(x+y)/2 $, then
\[\frac{1}{C}\leq |h(x)-h(z)|/|h(z)-h(y)| \leq C,\]
where $C$ only depends on the bound of the cross ratio distortions for
standard 4-tuples.
\end{lemma}
Proof: See $\S 2$ of [5]. The idea of the proof is to use the
four interval argument.
Let $J, L, M, R$ be four contiguous intervals of equal
length. Suppose the length of $h(L)$ is much smaller than that of $h(M)$.
Since the cross ratio distortion
$\frac{|h(M)||h(T)|}{|h(L)||h(R)|}/3$ on $L, M, R$ is greater than the
ratio distortion $\frac{|h(M)|}{|h(L)|}$, no bound on ratio distortions
implies no bound on cross ratio distortions.
\vspace{.2in}
It is easy to use the Real Koebe Principle to get the following
Macroscopic Koebe Distortion Principle.
\begin{definition}
Let $M$ and $T$ be two intervals with $M\subset T$, and $L$ and $R$
be components of $T\setminus M$. If $\epsilon >0$
we say $T$ is an $\epsilon $-scaled
neighborhood of $M$ if
\[\frac{|L|}{|M|}\geq \epsilon \;and \;\frac{|R|}{|M|}\geq \epsilon .\]
\end{definition}
\begin{prop} [Macroscopic Koebe Distortion Principle]
Given any $B>0$, $\epsilon >0$, there exists $\delta >0$ only
depending on $B$ and $\epsilon $ such that,
for any homeomorphism $f$ of the circle, any subintervals $M\subset T$
and any $n\geq 0$, if the cross ratio distortion under $f^{n}$ of any
standard 4-tuple in $T$ is bounded by $B$ and $f^{n}(T)$ contains an
$\epsilon $-scaled neighborhood of $f^{n}(M)$ then $T$ contains a
$\delta $-scaled neighborhood of $M$.
\end{prop}
Proof: Let $T\setminus M=L\cup R$. Without loss of generality, we only
need to prove that $\frac{|M|}{|L|}$ cannot be very large. Suppose
$\frac{|M|}{|L|}$ is large. We cut $M$ into pieces $L_{i}$ from left to
right with lengths $2^{i-1}|L|, i=1, 2, 3, \cdots $. We also denote
$L_{0}=L$. From the Real Koebe Principle, there exists a constant $C$
depending only on $B$ such that
\[\frac{|f^{n}(L_{i})|}{|\cup _{j=0}^{i-1}f^{n}(L_{j})|}\geq
\frac{1}{C},\]
where $i=1, 2, 3, \cdots $. Hence
\[\frac{|\cup _{j=0}^{i}f^{n}(L_{j})|}
{|\cup _{j=0}^{i-1}f^{n}(L_{j})|}\geq
1+\frac{1}{C},\]
where $i=1, 2, 3, \cdots $. So
\[\frac{|\cup _{j=0}^{i}f^{n}(L_{j})|}
{|f^{n}(L_{0})|}\geq
(1+\frac{1}{C})^{i},\]
where $i=1, 2, 3, \cdots $. This means
\[\frac{|\cup _{j=1}^{i}f^{n}(L_{j})|}
{|f^{n}(L_{0})|}\geq
(1+\frac{1}{C})^{i}-1,\]
where $i=1, 2, 3, \cdots $.
Clearly $i$ cannot be very large, since otherwise $f^{n}(T)$ could not be an
$\epsilon $-scaled neighborhood of $f^{n}(M)$. Hence we can find a bound
on $i$ depending only on $B$ and $\epsilon $, which means there exists
$\delta >0$ depending only on $B$ and $\epsilon $ such that $T$ contains
a $\delta $-scaled neighborhood of $M$.
\begin{definition}
The intersection multiplicity of a collection of sets
$\{X_{\alpha }\}_{\alpha \in \Lambda }$ is the maximal cardinality of a
subcollection with non-empty intersection.
\end{definition}
Using the Contraction Principle (Prop. 7), it is easy
to obtain the following proposition.
\begin{prop}
Suppose $f:S^{1}\rightarrow S^{1}$ is an orientation preserving
homeomorphism without
periodic orbits. Let $I$ be a wandering interval and
not contained in any larger wandering interval. If $I$ is a proper subset
of an interval $J$, then the intersection
multiplicity of the pullbacks $\{f^{-i}(J): i=0, 1, 2, ...\}$ is
infinite.
\end{prop}
Proof: Suppose the intersection multiplicity of the pullbacks
$\{f^{-i}(J): i=0, 1, 2, ...\}$ is finite. Then
$|f^{-n}(J)|\rightarrow 0$ as $n\rightarrow \infty $. Now apply the
Contraction Principle to $J$ and the map $f^{-1}$. It says that $J$ is a wandering interval. But this is false because $I$ was a maximal wandering interval.
\begin{definition}
Let $f:S^{1}\rightarrow S^{1}$ be an orientation preserving
homeomorphism. The variation of the logarithm of cross ratio distortion
under $f$ is defined as
\[\sup _{\{x_{0},x_{1},\cdots ,x_{n}\}}\sum _{i=0}^{n-1}\sup
_{b_{i},c_{i}\in (x_{i},x_{i+1})}\log \frac{(f(x_{i}), f(b_{i}), f(c_{i}), f(x_{i+1}))}{(x_{i},b_{i},c_{i},x_{i+1})}\]
where $b_{i}$ and $c_{i}$ belong to the open interval
$(x_{i},x_{i+1})$ from $x_{i}$ to
$x_{i+1}$ counterclockwise, and $\{x_{0},x_{1},\cdots ,x_{n}\}$ is a
partition of $S^{1}$.
\end{definition}
Let $f:S^{1}\rightarrow S^{1}$ be an orientation preserving
homeomorphism with an irrational rotation number. Let $I$ be a
wandering interval for $f$. The following combinatorial machinery on
wandering intervals, $I_{n}=f^{n}(I): n=0,1,2,...$, was
developed in [10] and can be found in [6].
\begin{definition} If $n\in N$, we say $I_{k}$ is a left (or right)
predecessor
of $I_{n}$ if there is no $I_{l}, 0\leq l <n $, in the gap $(I_{k},
I_{n})$ (or $(I_{n}, I_{k})$), where $(I_{k},I_{n})$
denotes the
counter-clockwise gap from $I_{k}$ to $I_{n}$. We
denote them by $I_{L(n)}$ and $I_{R(n)}$.
$I_{n}$ has a successor $I_{n+a}$ if
1. $I_{n-a}$ is a left (or right) predecessor (with $0<a\leq n)$;
2. $f^{a}|_{[I_{n-a},I_{n+a}]}$ (or $f^{a}|_{[I_{n+a},I_{n-a}]}$)
contains no predecessor of $I_{n}$;
3. if $I_{n}$ is to the left (or right) of $I_{n+a}$, then there is no
$I_{k}, 0\leq k< n+a$ in the gap $(I_{n}, I_{n+a})$ (or $(I_{n+a},
I_{n})$).
\end{definition}
Furthermore we define the natural neighborhood $T_{n}$ of $I_{n}$ to be
the biggest closed interval containing $I_{n}$ which contains no
$I_{i},i\in N$, except its nearest predecessor or successor.
Remark: Of course $I_{n}$ can have at most one predecessor on each side.
Moreover $I_{n}$ has at most one successor, denote it by $I_{S(n)}$.
Therefore $T_{n}=[I_{L(n)}, I_{R(n)}]$ if $I_{n}$ has two predecessors
and no successor and $T_{n}=[I_{L(n)}, I_{S(n)}]$ (or
$T_{n}=[I_{S(n)}, I_{R(n)}]$) if $I_{n}$ has a successor.
One can prove the following lemmas ([6], p. 309).
\begin{lemma}
For every $n\in N$, $I_{n}$ can have at most one successor.
\end{lemma}
\begin{lemma}
Assume that the interval $I_{n}$ has two predecessors $I_{L(n)}, I_{R(n)}$
and a successor $I_{S(n)}$. If this successor is to the right of
$I_{n}$, then the predecessors of $I_{S(n)}$ are $I_{n}$ and $I_{R(n)}$,
and if $I_{S(n)}$ has a successor, then this successor must again be to
the right of $I_{S(n)}$.
\end{lemma}
Remark: This lemma implies that if $I_{n}$ has a successor $I_{S(n)}$
and $I_{S(n)}$ also has a successor $I_{S(S(n))}$, then
$S(n)-n=S(S(n))-S(n)$ and $I_{S(n)}$ lies between $I_{n}$ and
$I_{S(S(n))}$. Continuing in this way, if there exists a maximal integer $k$
such
that $I_{S^{i+1}(n)}$ is a successor of $I_{S^{i}(n)}$ for $0\leq i \leq
k-1$, then the intervals $I_{S^{i}(n)}$, $0\leq i \leq k-1$, are
ordered and $f^{a}$ acts as a translation on these intervals, where
$a=S(n)-n$.
\newtheorem{theorem}{Theorem}
\begin{theorem} ([6], p. 310)
Let $n\in N$ and assume that $I_{n}$ has two predecessors $I_{L(n)}$
and $I_{R(n)}$. Let $M_{n}\supset I_{n}$ be an interval contained either
in $[I_{L(n)}, I_{n}]$ or in $[I_{n}, I_{R(n)}]$. Assume that
$\{M_{t_{0}}, M_{t_{0}+1}, ..., M_{n}\}$ are pullbacks of $M_{n}$. If
the intersection multiplicity of this collection is at least $2m$ with
$m\geq 2$, then there exists $t\in \{t_{0}, ..., n\}$ such that
(1) $I_{S(t)}, I_{S^{2}(t)}, ..., I_{S^{2m-2}(t)}$ are defined;
(2) $n=S^{m}(t)$ and $I_{S^{j}(t)}$ is contained in $M_{n}$ for $j=m,
..., 2m-2$.
\end{theorem}
\newtheorem{cor}{Corollary}
\begin{cor}
Assume that $T\supset I_{n}$ is an interval contained in the
natural neighborhood $T_{n}$ of $I_{n}$. Then the intersection
multiplicity of the pullbacks of $T$ is at most $15$.
\end{cor}
Proof: Consider the pullbacks of $T\cap [I_{L(n)}, I_{n}]$ and $T\cap
[I_{n}, I_{R(n)}]$ separately. Suppose the intersection multiplicity of
the pullbacks of $T$ is at least $16$. Then either the pullbacks of
$T\cap [I_{L(n)}, I_{n}]$ or the pullbacks of $T\cap [I_{n}, I_{R(n)}]$
have intersection multiplicity $\geq 8$. Taking $m=4$, the previous theorem
implies that $I_{S^{2}(n)}$ is contained in $T\cap [I_{n}, I_{R(n)}]$.
This is impossible since $T\subset T_{n}$.
Now we can prove the following theorem.
\begin{theorem}
Let $f:S^{1}\rightarrow S^{1}$ be an orientation preserving
homeomorphism
with an irrational rotation number. If the variation of the logarithm of the
cross ratio distortion under $f$ is bounded by $B$, then $f$ has no
wandering interval; hence it is topologically conjugate to a rigid
rotation.
\end{theorem}
Proof: Suppose $I$ is a maximal wandering interval for $f$ and
$I_{n}=f^{n}(I)$, $n=0,1,2,\ldots$.
There exist arbitrarily large $n\in N$ and $l, r< n$ such that
$I_{n}\subset (I_{l}, I_{r})$, $I_{k}\cap (I_{l}, I_{r})=\emptyset$ for
$0\leq k<n$, and $|I_{n}|\leq \min \{|I_{l}|, |I_{r}|\}$. This property
is proved as follows. Pick $I_{l}$
and $I_{r}$ such that the gap $(I_{l}, I_{r})$ contains no $I_{k}$ for
$0\leq k\leq \max \{l, r\}$. By the density of any
orbit under an irrational rotation, there is a first $I_{n}$ that enters
the gap $(I_{l}, I_{r})$. If $|I_{n}|\leq \min \{|I_{l}|, |I_{r}|\}$,
we are done; otherwise replace $I_{r}$ by $I_{n}$ and continue.
Since the sum of the lengths of the $I_{k}$ is bounded, eventually we
get $|I_{n}|\leq |I_{r}|$, and furthermore $|I_{n}|\leq |I_{l}|$.
We have seen that $I_{l}$ and $I_{r}$ are two predecessors of $I_{n}$.
Let $T_{n}$ be the natural neighborhood of $I_{n}$. If $I_{n}$ has no
successor, then $T_{n}=[I_{l}, I_{r}]$. By Corollary 1, the
intersection multiplicity of the pullbacks of $T_{n}$ is bounded by
$15$. Using the Macroscopic Koebe Distortion Principle, we get
a bigger wandering interval $J$ strictly containing $I$. This
contradicts the maximality of $I$. Hence $I_{n}$ has a
successor $I_{S(n)}$. In the same way as above, we get
$|I_{S(n)}|<|I_{n}|$. Inductively we get infinitely many
successors $I_{S^{i}(n)}$, $i=1,2,\ldots$, and by Theorem 1, all successors
are contained in $[I_{l}, I_{r}]$ and are ordered. Moreover,
$S^{i}(n)-S^{i-1}(n)$ is the constant $a=S(n)-n$. It follows that
$I_{S^{i}(n)}$ converges to a fixed point of $f^{a}$ as $i\rightarrow
\infty $. This contradicts the fact that $f$ has no periodic points.
\vspace{.2in}
${\bf Proof\; of\; Theorem \;C:}$ Let $I$ be a maximal wandering
interval for
$f$ and $I_{n}=f^{n}(I)$, $n=0,1,2,\ldots$. Let $T_{n}$ be the natural
neighborhood of $I_{n}$. The intersection multiplicity of the pullbacks
of $T_{n}$ is bounded by $15$. By Prop.~4, the cross ratio distortion
of $f^{n}$ on $T_{0}$ is uniformly bounded by a constant $B$. The rest
of the proof follows the proof of the above theorem.
\vspace{.2in}
The remainder of this section explains why the condition of
Theorem C is weaker than Denjoy's condition and the condition of [5].
It is almost trivial that Denjoy's condition implies the condition of
Theorem C.
\begin{prop}
Let $h: I\rightarrow R^{1}$ be a $C^{1}$ smooth function such that $\log h'$
is of bounded variation. Then $\log h'$ is of bounded Zygmund variation and
bounded quadratic variation.
\end{prop}
Proof: By the triangle inequality, the Zygmund variation of $\log h'$ is
no more than the variation of $\log h'$ on the interval $I$.
Let $M$ be the maximal value of $|\log h'|$ on the interval $I$;
then the quadratic variation of $\log h'$ is no more than $2M$
multiplied by the variation of $\log h'$ on the interval $I$.
Clearly the Zygmund condition implies bounded Zygmund variation.
Furthermore, the Zygmund condition implies $\alpha $-H\"older continuity
for every $0<\alpha <1$, and $1/2$-H\"older continuity implies
bounded quadratic variation.
\begin{lemma}
If $\phi :I\rightarrow R^{1}$ satisfies the Zygmund condition:
there exists $B>0$ such that
\[\sup_{x,t}|\frac{\phi (x+t)+\phi (x-t)-2\phi (x)}{t}|\leq B,\]
then $\phi $ is $\alpha $-H\"older continuous for any $0<\alpha <1$.
\end{lemma}
Proof: Denote $D(x,t)=\frac{\phi (x+t)-\phi (x)}{t}$. Then
\[D(x,t/2)+D(x+t/2,t/2)=2D(x,t),\;|D(x,t/2)-D(x+t/2,t/2)|\leq B,\]
\[D(x,t/4)+D(x+t/4,t/4)=2D(x,t/2),\;|D(x,t/4)-D(x+t/4,t/4)|\leq B,\]
\[\vdots \]
\[D(x,\frac{t}{2^{n}})+D(x+\frac{t}{2^{n}},\frac{t}{2^{n}})
=2D(x,\frac{t}{2^{n-1}}),\;
|D(x,\frac{t}{2^{n}})-D(x+\frac{t}{2^{n}},\frac{t}{2^{n}})|\leq B.\]
These give us
\[|D(x,t/2^{n})|\leq |D(x,t)|+nB, \] i.e.,
\[\Big|\frac{\phi (x+t/2^{n})-\phi (x)}{t/2^{n}}\Big|\leq |D(x,t)|+nB.\]
Then \[\Big|\frac{\phi (x+t/2^{n})-\phi (x)}{(t/2^{n})^{\alpha }}\Big|\leq
(|D(x,t)|+nB)\Big(\frac{|t|}{2^{n}}\Big)^{1-\alpha },\]
and since $n(|t|/2^{n})^{1-\alpha }$ is bounded, this tells us that $\phi $
is $\alpha $-H\"older continuous for any $0<\alpha <1$.
\begin{prop}
If $\phi :I\rightarrow R^{1}$ satisfies the Zygmund condition, then
$\phi $ is of bounded Zygmund variation and bounded quadratic variation
over the interval $I$.
\end{prop}
\section{Three examples}
In the introduction it is mentioned that there exists an example
satisfying
Denjoy's bounded variation condition but not [5]'s Zygmund condition,
and vice versa. In this section, we give these two examples, and
we also give an example which is
of bounded quadratic variation but not of bounded Zygmund
variation.
\newtheorem{example}{Example}
\begin{example} Let $\phi :[-1,\;1]\rightarrow [-1,\;1]$ be the
following function \[\phi (x)=x,\;\;\;x\in [-1,\;0],\]
\[\phi (x)=\sqrt {x},\;\;\;x\in (0,\;1].\]
Clearly $\phi $ is monotone, hence of bounded variation, but the
Zygmund condition fails since the right derivative of $\phi $ at the
point $0$ is infinite while the left derivative is $1$.
\end{example}
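To make the failure of the Zygmund condition at $0$ explicit (a direct verification we add for the reader), note that for small $t>0$,
\[\frac{\phi (t)+\phi (-t)-2\phi (0)}{t}=\frac{\sqrt{t}-t}{t}
=\frac{1}{\sqrt{t}}-1\longrightarrow \infty \quad \mbox{as}\;\; t\rightarrow 0^{+},\]
so the Zygmund quotient at $x=0$ is unbounded.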
\begin{example}Let $\phi _{0}(x)=2x$ for $x\in [0,
\frac {1}{2}]$ and $\phi _{0}(x)=2-2x$ for $x\in [\frac {1}{2}, 1]$. And
let \[\phi _{n}(x)=\frac {\phi _{0}(2^{n}x-i)}{2^{n}} \quad\mbox{for}\quad x\in [\frac
{i}{2^{n}}, \frac{i+1}{2^{n}}], \]
where $i=0, 1, ..., 2^{n}-1$.
Let $\phi (x)=\sum _{n=0}^{\infty } \phi _{n} (x)$.
Then $\phi $ is differentiable only on a set of measure $0$. It cannot be
of bounded variation, for otherwise it would be differentiable almost
everywhere, a contradiction. [7] and [8] study the general
theory of the differentiability of functions satisfying the Zygmund
condition.
\end{example}
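A quick numerical sketch of this construction (our own illustration, not part of the paper; the names `tent` and `phi` are ours). Truncating the series is exact at dyadic rationals, since all later terms vanish there:

```python
def tent(x):
    # phi_0: the tent map on [0, 1], with peak value 1 at x = 1/2
    return 2 * x if x <= 0.5 else 2 - 2 * x

def phi(x, terms=30):
    # Partial sum of phi = sum_n phi_n, where on [i/2^n, (i+1)/2^n]
    # phi_n(x) = phi_0(2^n x - i) / 2^n
    total = 0.0
    for n in range(terms):
        y = (2 ** n) * x
        frac = y - int(y)  # 2^n x - i with i = floor(2^n x)
        total += tent(frac) / (2 ** n)
    return total
```

For instance, $\phi(1/2)=\phi_0(1/2)=1$ and $\phi(1/4)=\phi_0(1/4)+\phi_1(1/4)=\frac12+\frac12=1$.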
\begin{example}Let $\phi : [0,\;1]\rightarrow [0,\;1]$ be defined by
Figure 1.
\begin{figure}[t]
\par \centerline{ \hbox{\psfig{figure=qnotz.ps,width=3.0in }}}\par
\end{figure}
It is easy to see that the quadratic variation of $\phi $ on $[0,\;1]$ is
equal to $\sum_{n=1}^{\infty }\frac{2}{n^{2}}$, which is finite. But the
difference between the left derivative and the right derivative
of $\phi $ at $1/2^{n}$ is equal to
\[\frac{2^{n+2}}{n+1}-\frac{2^{n+1}}{n}=\frac{2^{n+1}}{n}\cdot\frac{n-1}{n+1},\]
which tends to $\infty $ as $n\rightarrow \infty $. Therefore $\phi $ has
bounded quadratic variation but not bounded Zygmund variation.
\end{example}
The remaining question is to study whether or not
bounded Zygmund variation implies bounded quadratic
variation.
\section{Appendix}
The nonexistence of wandering domains for any rational map of the
complex sphere was proved by Dennis Sullivan in 1985 [9]. The
analogue of this theorem for one-dimensional dynamical systems was done
for certain smooth multimodal maps by Martens, de Melo and van Strien in
1992 [10]. In the latest publication [6], the smoothness condition used by
de Melo and van Strien is that a multimodal map is piecewise
$C^{1+b.v}$ (or $C^{1+Z}$) and can be written, around every turning point,
as a power map ($x\mapsto
|x|^{\alpha }$, $\alpha >1$) composed with a $C^{1+b.v}$ (or $C^{1+Z}$)
diffeomorphism. Combining the analysis of this paper, which bounds
cross ratio distortions, with the combinatorial machinery on wandering
intervals in [10] or [6, pp. 308-312], we obtain a weak version of the
Martens-de Melo-van Strien theorem on the nonexistence of wandering
intervals for multimodal maps.
Before we state the theorem, let us give the definition of a wandering
interval for a multimodal map of an interval.
\begin{definition} Let $f: I\rightarrow I$ be a continuous map of an
interval $I$.
An open interval $J\subset I$ is called a wandering interval of $f$ if
1) $f^{n}(J)\cap f^{m}(J) =\emptyset$ for any $n\neq m, n,m\in N$;
2) $f^{n}(J)$ does not converge to a periodic orbit.
\end{definition}
\begin{theorem}$[11]$ Let $f: I\rightarrow I$ be a $C^{1}$
smooth map satisfying
1) $f$ is $C^{1+b.Z.v+b.q.v}$ away from critical points;
2) Let $K_{f}$ be the set of critical points of $f$. For each $x_{0}\in
K_{f}$, there exist $\alpha >1$, a neighborhood $U(x_{0})$ of $x_{0}$
and a $C^{1+b.Z.v+b.q.v}$ diffeomorphism $\phi :U(x_{0})\rightarrow
(-1, 1)$ such that $\phi (x_{0})=0$ and \[f(x)=f(x_{0})\pm |\phi
(x)|^{\alpha }, \forall x \in U(x_{0}).\]
Then $f$ has no wandering intervals.
\end{theorem}
Norton, Sullivan and Velling [12, 13, 14] have begun the work of
generalizing
the setting of Denjoy's theorem to two-dimensional dynamical systems by
considering diffeomorphisms of the torus. The quasiconformal theory
has found a place there.
\vspace{.2in}
\noindent{\em Acknowledgements}.
Both authors wish to thank Frederick P. Gardiner for his help with the
writing, and Yunping Jiang and Meiyu Su for helpful discussions.
We are grateful to John Milnor for his suggestions to improve the writing
and especially for his simplification of the proof of Prop. 2.
% Source: https://arxiv.org/abs/math/9506224
% Title: Topological conjugacy of circle diffeomorphisms
% Subjects: Dynamical Systems (math.DS)
% Abstract: The classical criterion for a circle diffeomorphism to be topologically conjugate to an irrational rigid rotation was given by A. Denjoy. In 1985, one of us (Sullivan) gave a new criterion. There is an example satisfying Denjoy's bounded variation condition rather than Sullivan's Zygmund condition and vice versa. This paper will give the third criterion which is implied by either of the above criteria.
% Source: https://arxiv.org/abs/2104.02635
% Title: Quantitative ergodic theorems for actions of groups of polynomial growth
% Abstract: We strengthen the maximal ergodic theorem for actions of groups of polynomial growth to a form involving jump quantity, which is the sharpest result among the family of variational or maximal ergodic theorems. As a consequence, we deduce in this setting the quantitative ergodic theorem, in particular, the upcrossing inequalities with exponential decay. The ideas or techniques involve probability theory, non-doubling Calderón-Zygmund theory, almost orthogonality argument and some delicate geometric argument involving the balls and the cubes on the group equipped with a not necessarily doubling measure.
\section{Introduction}\label{ST1}
\subsection{Background and main results}
In the past few decades, a great number of significant results on pointwise ergodic theorems for group actions have been established. The earliest pointwise ergodic theorem, to our knowledge, was obtained by Birkhoff~\cite{Birkhoff31}, who established pointwise ergodic theorems for one-parameter flows (such as a translation group on $\mathbb{R}$ or $\mathbb{Z}$). Wiener~\cite{Wiener39} extended Birkhoff's result to the case of several commuting flows. These pointwise ergodic results were further generalized by Calder\'{o}n~\cite{Cal53} to increasing families of compact symmetric neighborhoods of the identity satisfying the doubling condition, which are abundant in non-commutative groups of polynomial volume growth. Calder\'{o}n's works~\cite{Cal53, Cal69} motivated further research on pointwise ergodic theorems, such as \cite{Bewley71,Chatard70, Coifman-Weiss76, Emerson74, Herz71, Hong-Liao-Wang17, Tempelman67, Tempelman72}. In particular,
Breuillard \cite{Breuillard14} (see also Tessera \cite{Tessera07}) showed that the balls with respect to any fixed word metric on a group of polynomial growth satisfy the doubling condition and are asymptotically invariant, and thus established the corresponding pointwise ergodic theorem; actually these results apply to more general metrics, such as the periodic pseudodistances defined in \cite{Breuillard14} (and recalled in Section~\ref{ST7}). This settled a long-standing problem in ergodic theory, open since Calder\'on's classical paper \cite{Cal53} of 1953.
Lindenstrauss~\cite{Lindenstrauss01} established the pointwise ergodic theorem for tempered F{\o}lner sequences; this result resolves the problem of the existence of a F{\o}lner sequence which satisfies the pointwise ergodic theorem on an arbitrary amenable group. For more details we refer the reader to the survey works~\cite{AAB+, Nevo06}.
In this paper, we aim at establishing quantitative pointwise ergodic theorems for actions of groups of polynomial growth in terms of the following jump quantity. Given a family of measurable functions $\a=\{\a_r(x):r\in \mathcal I\}$, where $\mathcal I$ is a subset of $(0,\infty)$, and given $\lambda>0$, the $\lambda$-jump function of $\a$ is defined by
\begin{equation*}
\mathcal{N}_{\lambda}(\a)(x)=\sup\{N\,|\,\exists~r_0<r_1<\cdots<r_N,\ r_i\in\mathcal{I}:\min_{0<i\le N}|\a_{r_{i}}(x)-\a_{r_{i-1}}(x)|>\lambda\},
\end{equation*}
where the supremum is taken over all finite increasing sequences in $\mathcal I$.
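Restricted to a finite family, the $\lambda$-jump count can be computed by a simple dynamic program over chains whose consecutive gaps exceed $\lambda$; the following sketch (our own illustration, not from the paper) just unwinds the definition for a finite sequence of sampled values.

```python
def jump_count(values, lam):
    # Largest N such that indices r_0 < ... < r_N exist with
    # |a_{r_i} - a_{r_{i-1}}| > lam for every i: the lambda-jump
    # quantity N_lambda restricted to a finite index set.
    n = len(values)
    best = [0] * n  # best[i]: maximal number of jumps of a chain ending at i
    for i in range(n):
        for j in range(i):
            if abs(values[i] - values[j]) > lam:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)
```

For example, `jump_count([0, 2, 0, 2], 1)` finds three jumps, while a sequence oscillating by less than $\lambda$ has none.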
\bigskip
Let $G$ be a locally compact group equipped with a measure $m$, and let $d$ be a metric on $G$. For $r>0$ and $h\in G$, we define the ball $B(h,r)=\{g\in G : d(g, h) \leq r\}$, and we write it simply $B_r$ when $h=e$ (the identity of $G$). Let $r_0>0$ and $\epsilon\in(0,1]$. We say that $(G,d,m)$ satisfies the $(\epsilon,r_0)$-annular decay property if there exists a constant $K>0$ such that for all $h\in G$, $r\in (r_0,\infty)$ and $s\in (0,r]$,
\begin{equation}\label{decay property}
m(B(h,r+s))-m(B(h,r))\le K\bigg(\frac{s}{r}\bigg)^{\epsilon}m(B(h,r)).
\end{equation}
Let $D_0>0$. We say that $(G,d)$ satisfies the $(D_0,4r_0)$-geometric doubling property if for every $r\in(0,4r_0]$ and every ball $B(h,r)$, there are at most $D_0$ balls $B(h_i,r/2)$ such that
\begin{equation}\label{geo-doubling}
B(h,r)\subseteq \bigcup_{1\le i\le{D_0}}B(h_i,r/2).
\end{equation}
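For intuition, in $G=\mathbb{Z}^{2}$ with the word metric induced by the standard generators, ball volumes grow polynomially and the doubling ratio $m(B_{2r})/m(B_r)$ stays bounded (it tends to $4$); a brute-force sketch (our own illustration, not from the paper):

```python
from itertools import product

def ball_size(r):
    # |B_r| for the word metric on Z^2 with generators {+-e1, +-e2}:
    # count the lattice points with |x| + |y| <= r (the l^1 ball)
    return sum(1 for x, y in product(range(-r, r + 1), repeat=2)
               if abs(x) + abs(y) <= r)

# Polynomial growth: |B_r| = 2r^2 + 2r + 1, so |B_{2r}|/|B_r| is bounded.
```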
Let $p\in[1,\infty)$ and $f\in L^p(G,m)$. We consider the following averages:
\begin{equation}\label{averaging operator1}
A^\prime_rf(h)=\frac{1}{m(B(h,r))}\int_{B(h,r)}f(g)dm(g).
\end{equation}
One of the main results of this paper is the following theorem.
\begin{thm}\label{main-thm1}
Assume that $(G,d,m)$ satisfies~\eqref{decay property} and~\eqref{geo-doubling}. Let $\mathbf{A}^\prime=\{A^\prime_r:r\ge r_0\}$ be the sequence of averaging operators given by~\eqref{averaging operator1}. Then the following assertions hold true.
\begin{enumerate}[\noindent]
\item \emph{(i)}~For any $p\in(1,\infty)$, $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}$ is of strong type $(p,p)$ uniformly in $\lambda>0$, that is, there exists a constant $c_{p}>0$ such that
\begin{equation*}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)(g)}\|_{L^p(G,m)}\le c_{p}\|f\|_{L^p(G,m)},\;\forall f\in L^p(G,m).
\end{equation*}
\item \emph{(ii)}~For $p=1$, $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}$ is of weak type $(1,1)$ uniformly in $\lambda>0$, that is, there exists a constant $c>0$ such that for any $\gamma>0$,
\begin{equation*}
\sup_{\lambda>0}m\big(\{g\in G:\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)(g)}>\gamma\}\big)\le\frac{c}{\gamma}\|f\|_{L^1(G,m)},\;\forall f\in L^1(G,m).
\end{equation*}
\end{enumerate}
\end{thm}
Conditions~\eqref{decay property} and~\eqref{geo-doubling} are related to the geometric structure of group $G$; there exist lots of examples such as the ones introduced in~\cite{Nevo06,Tessera07}, see Section~\ref{ST7} for more details. {Moreover, as a consequence of Theorem~\ref{main-thm1}, we obtain the quantitative ergodic theorems for actions of groups of polynomial growth.}
Let $(X,\Sigma,\mu)$ be a $\sigma$-finite measure space and $T$ an action of $G$ on the associated $L^p$-spaces $L^p(X,\mu)$, under some additional assumptions recalled in later sections. In particular, if $T$ is induced by a $\mu$-preserving measurable transformation $\tau$ on $X$, then $T$ extends to an isometric action on $L^p(X,\mu)$ for all $1\le p\leq \infty$, given by $T_gf(x)=f(\tau_{g^{-1}}x)$. {Given an action $T$ and a ball $B_r$, the associated averaging operator is given by
\begin{equation}\label{averaging operator}
A_rf(x)=\frac{1}{m(B_r)}\int_{B_r}T_gf(x)dm(g).
\end{equation}
}
\begin{thm}\label{main-thm2}
Assume that $G$ is of polynomial growth with a symmetric compact generating set $V$ and that $d$ is the associated word metric on $G$, that is,
$$d(g,h)=\min\{n\in \mathbb N : g^{-1}h\in V^n\}.$$
Let $m$ be a Haar measure, and let $\mathbf{A}=\{{A}_r:r\in\mathbb{N}\}$ be the corresponding sequence of averaging operators with respect to an action $T$.
\begin{enumerate}[\noindent]
\item \emph{(i)}~If $T$ is an action induced by a measure-preserving measurable transformation, then $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}$ is of weak type $(1,1)$ and of strong type $(p,p)$ for all $1<p<\infty$ uniformly in $\lambda>0$.
\item \emph{(ii)}~If $T$ is a strongly continuous regular action of $G$ on $L^p(X,\mu)$ \emph{($1<p<\infty$)}, then $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}$ is of strong type $(p,p)$ uniformly in $\lambda>0$.
\end{enumerate}
\end{thm}
The notion of a regular action will be recalled in Subsection~\ref{strong-type}.
If we take $G$ to be the integer group $\mathbb Z$ and $d$ to be the usual word metric, then we recover the usual ergodic average $A_n = \frac1{2n+1}\sum^n_{k=-n} T^k$ for an automorphism $T$, as treated in \cite{Bour89,JKRW98}. Moreover, from the definition of the jump quantity, Theorem~\ref{main-thm2} implies that the underlying sequence of functions $A_rf$ converges almost everywhere as $r\rightarrow\infty$; that is, the pointwise ergodic theorem holds.
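A quick numerical illustration of this $G=\mathbb Z$ case (our own sketch, not from the paper; the golden-ratio rotation and the indicator $f=\mathbf{1}_{[0,1/2)}$ are choices made for the demo): the two-sided averages $A_nf(x)=\frac{1}{2n+1}\sum_{k=-n}^{n}f(x+k\alpha \bmod 1)$ approach the space average $\mu([0,1/2))=1/2$, as the pointwise ergodic theorem predicts.

```python
import math

def birkhoff_average(f, x, alpha, n):
    # Two-sided ergodic average A_n f(x) for the circle rotation by alpha
    total = sum(f((x + k * alpha) % 1.0) for k in range(-n, n + 1))
    return total / (2 * n + 1)

alpha = (math.sqrt(5) - 1) / 2            # golden-ratio rotation (irrational)
f = lambda t: 1.0 if t < 0.5 else 0.0     # indicator of [0, 1/2)
avg = birkhoff_average(f, 0.1, alpha, 2000)
# avg is close to the space average 1/2 by equidistribution
```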
The jump quantity appeared first implicitly in \cite{PX88}, and then explicitly in \cite{JKRW98}; its study was motivated by research on variational inequalities arising in probability theory \cite{DL76}. For $q\in(0,\infty]$, the $q$-variation (semi-)norm $V_q$ of a family
$\{\a_r(x):r\in(0,\infty)\}$ of complex-valued functions is defined by
$$V_q(\a_r(x):r\in \mathcal I)=\sup_{\substack{0<r_0<\cdots<r_J\\ r_j\in \mathcal I}}\bigg(\sum_{j=0}^{J-1}|\a_{r_{j+1}}(x)-\a_{r_j}(x)|^q\bigg)^{1/q}.$$
The $\infty$-variation norm is equivalent to the maximal norm; moreover, any $q$-variation norm dominates the maximal norm:
\begin{equation*}
\sup_{r\in\mathcal I}|\a_r(x)|\le |\a_{r_1}(x)|+V_q(\a_{r}(x):r\in\mathcal I),~\forall~r_1\in\mathcal I.
\end{equation*}
Like the jump quantity, a finite $q$-variation norm with $q<\infty$ immediately yields the pointwise convergence of the underlying sequence of operators without a density argument. This idea was first exploited by Bourgain~\cite{Bour89} to study pointwise ergodic theorems for dynamical systems where the density argument is not available. Bourgain's work \cite{Bour89} inspired many studies on variational inequalities and ergodic theory, see \cite{JKRW98, JKRW03, Krause14, Kra-Mirek-Tro18, Mir-stein-Tro17, Zorin-Kranich15} and the references therein.
On the other hand, by Chebyshev's inequality, the $q$-variation norm dominates the jump quantity:
\begin{equation*}
\sup_{\lambda>0}\lambda\big(\mathcal N_\lambda(\a_r(x):r\in \mathcal I)\big)^{1/q}\leq V_q(\a_r(x):r\in \mathcal I).
\end{equation*}
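The one-line verification of this domination, which we spell out for completeness: if $\mathcal{N}_{\lambda}(\a_r(x):r\in\mathcal I)\geq N$, pick $r_0<r_1<\cdots<r_N$ in $\mathcal I$ with $|\a_{r_i}(x)-\a_{r_{i-1}}(x)|>\lambda$ for every $i$; then
\[V_q(\a_r(x):r\in \mathcal I)^{q}\geq \sum_{i=1}^{N}|\a_{r_{i}}(x)-\a_{r_{i-1}}(x)|^{q}> N\lambda ^{q},\]
whence $\lambda\,\big(\mathcal{N}_{\lambda}(\a_r(x):r\in\mathcal I)\big)^{1/q}\leq V_q(\a_r(x):r\in \mathcal I)$.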
However, it is shown in \cite{JW04, Lewko-Lewko12} that the $2$-variational inequality (and hence those for $q<2$) does not hold true in general. Conversely, for any family of linear operators $\mathcal T=(T_t)_{t\in\mathcal I}$, the mapping properties of $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathcal T)}$ imply the corresponding $q$-variational inequalities for all $q>2$ by interpolation, see e.g. \cite{Bour89, Jones-Seeger-Wright08,Mirek-Stein-Zor20}. Thus for all $q>2$, the $q$-variational versions of Theorems~\ref{main-thm1} and~\ref{main-thm2}, and in particular the maximal ergodic theorem, hold true.
Furthermore, restricted to bounded functions, we may deduce from Theorem \ref{main-thm2} a kind of exponential decay estimate by using the Vitali covering lemma and a geometric argument.
\begin{thm}\label{exponential-estimate}
Assume that $G$ is a group of polynomial growth with a symmetric finite generating set and $d$ is the resulting word metric. Let $T$ be an action induced by a measure-preserving measurable transformation and $\mathbf{A}=\{A_r:r\in\mathbb{N}\}$ the sequence of averaging operators with respect to the action $T$ given by~\eqref{averaging operator}. Let $\lambda>0$. Then for every $p\in[1,\infty)$, there are two constants $\tilde{c}_1>0$ and $\tilde{c}_2\in (0,1)$ depending on $\lambda$, $p$ and the group $G$ such that for any $f\in L^p(X,\mu)$ with $\|f\|_{L^{\infty}(X,\mu)}\le 1$,
\begin{equation*}
\mu\big(\{x\in X:\mathcal{N}_{\lambda}(\mathbf{A}f)(x)>n\}\big)\le \tilde{c}_1\tilde{c}_2^n\|f\|^p_{L^p(X,\mu)}.
\end{equation*}
\end{thm}
This result immediately yields the upcrossing inequalities with exponential decay. Recall that for two real numbers $a$ and $b$ with $b>a$, the number of upcrossings of a family of real-valued functions $\a=\{\a_r(x):r\in\mathcal{I}\}$, denoted $\mathcal{N}_{a,b}(\a)(x)$, is defined by
\begin{equation*}
\sup\{N\in\mathbb{N}\,|\,\exists~s_1<r_1<\cdots<s_N<r_N,\ r_l,s_l\in\mathcal{I}~\textit{such that}~\a_{s_l}(x)<a~\textit{and}~\a_{r_{l}}(x)>b~\textit{for all}~l\}.
\end{equation*}
Taking $\lambda=b-a$, it is easy to see that $\mathcal{N}_{a,b}(\a)(x)\leq 2\mathcal{N}_{\lambda/2}(\a)(x)$. Thus we obtain
\begin{corollary}
Let $a$ and $b$ be two real numbers with $b>a$. Then for every $p\in[1,\infty)$, there are two constants $\tilde{c}_1>0$ and $\tilde{c}_2\in (0,1)$ depending on $a$, $b$, $p$ and the group $G$ such that for any real-valued function $f\in L^p(X,\mu)$ with $\|f\|_{L^{\infty}(X,\mu)}\le 1$,
\begin{equation}\label{upcrossings-estimate}
\mu\big(\{x\in X:\mathcal{N}_{a,b}(\mathbf{A}f)(x)>n\}\big)\le \tilde{c}_1\tilde{c}_2^n\|f\|^p_{L^p(X,\mu)}.
\end{equation}
\end{corollary}
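The comparison $\mathcal{N}_{a,b}(\a)(x)\leq 2\mathcal{N}_{\lambda/2}(\a)(x)$ with $\lambda=b-a$ can be verified directly: if $s_1<r_1<\cdots<s_N<r_N$ realize $N$ upcrossings, then any two consecutive terms of the chain $\a_{s_1}(x),\a_{r_1}(x),\a_{s_2}(x),\ldots,\a_{r_N}(x)$ differ by more than $b-a>\lambda/2$, since $\a_{s_l}(x)<a<b<\a_{r_l}(x)$ and $\a_{s_{l+1}}(x)<a<b<\a_{r_l}(x)$; these $2N$ points thus witness
\[\mathcal{N}_{\lambda/2}(\a)(x)\geq 2N-1\geq N \quad (N\geq 1),\]
which is even slightly stronger.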
A variant of (\ref{upcrossings-estimate}) for non-negative functions has recently been proved by Moriakov~\cite{Moriakov18}, as a generalization of Kalikow and Weiss's exponential estimate for $G=\mathbb Z$ \cite{Kalikow-Weiss99}. Note that our method here is completely different from the one used by Moriakov, which is based on a generalization of the Vitali covering theorem. On the other hand, Kalikow and Weiss's estimate was motivated by Bishop's fundamental result \cite{Bishop67} and the similar estimates in the martingale setting \cite{Doo53, Dub62}. Recently the upcrossing inequalities with exponential decay have been extended to stationary processes and have found many applications in ergodic theory and information theory, see e.g. \cite{Hoc09}.
\begin{comment}
.
For $\beta>\alpha$, letting $\lambda=\beta-\alpha$ in this proposition, one has
\begin{corollary}\label{upcrossings-estimate}
Let $G$ be a locally compactly group of polynomial growth with a symmetric compact generating set $V$. Let $T$ be an action induced by a $\mu$-preserving measurable transformation $\tau$ on $X$. Let $\mathbf{A}f=(\mathcal{A}_rf:r\in\mathbb{N})$ be the sequence of averaging operators given by~\eqref{averaging operator}. Then for every $f\in L^1(X,\mu)$ and $\lambda>0$, there exists a positive constant $C$ such that
\begin{equation*}%
\mu\{x\in X:\mathcal{N}_{\lambda}(\mathbf{A}f)(x)>n\}\le \frac{C}{\lambda\sqrt{n}}\|f\|_{L^1(X,\mu)}.
\end{equation*}
\end{corollary}
\begin{comment}
\begin{remark}
Kalikow and Weiss~\cite{Kalikow-Weiss99} proved when $G=\mathbb{Z}$, $\mathcal{A}_nf(x)=\sum_{i=-n}^n\frac{1}{2n+1}f(T^ig)$ and $\mathbf{A}f=(\mathcal{A}_nf:n\in\mathbb{N})$,
\begin{equation*
\mu\{x\in X:\mathcal{N}_{\lambda}(\mathbf{A}f)(x)>n\}\le \frac{C}{\lambda}\sqrt{\frac{\log n}{n}}\|f\|_{L^1(X,\mu)}.
\end{equation*}
Corollary~\ref{upcrossings-estimate} improves Kalikow and Weiss's estimate in twofold. On one hand, the constant $\sqrt{\frac{\log n}{n}}$ is improved to $\frac{1}{\sqrt{n}}$. On the other hand, the group $G$ is a general group of polynomial growth. Nevertheless, we don't know whether the constant about $n$ of our result is the best
\end{remark}
Taking $\lambda=\beta-\alpha$, it follows easily that $\mathcal{N}_{\alpha,\beta}(\a)(x)\le \mathcal{N}_\lambda(\a)(x)$. Let $\mathcal{A}^+_nf(x)=\sum_{i=0}^{n-1}\frac{1}{n}f(T^ig)$ and $\mathbf{A}^+f=(\mathcal{A}^+_nf:n\in\mathbb{N})$, Bishop~\cite{Bishop67} showed
\begin{equation*
\mu\{x\in X:\mathcal{N}_{\alpha,\beta}(\mathbf{A}^+f)(x)>n\}\le \frac{1}{(\beta-\alpha)n}\|f\|_{L^1(X,\mu)}.
\end{equation*}
In~\cite{Kalikow-Weiss99}, Kalikow and Weiss said that the constant $1/n$ in Bishop~\cite{Bishop67}'s result cannot improve for integrable functions, but they established the exponential estimate, namely for $f\ge 0$ and $0<\alpha<\beta$,
\begin{equation*
\mu\{x\in X:\mathcal{N}_{\alpha,\beta}(\mathbf{A}^+f)(x)>n\}\le c_1c_2^n\|f\|_{L^1(X,\mu)},
\end{equation*}
where the constants $c_1>0$ and $c_2\in (0,1)$, which depend on $\alpha$, $\beta$ and group $G$.
Recently, Moriakov~\cite{Moriakov18} extend Kalikow and Weiss's exponential estimate to a group of polynomial growth with a discrete generating set, where the proof is based on a generalization Vitali covering theorem. There we use another method, i.e., the jump inequality, to obtain this exponential estimate
\end{comment}
\subsection{Methods and more results}
The theorems rely on several key results obtained in this paper.
The first step in proving Theorem~\ref{main-thm1} is by now standard, namely to control the jump quantity by the `dyadic' jump and the short variation operator, see e.g. \cite{Jones-Seeger-Wright08}. More precisely, for $f\in L^p(G,m)$
we dominate the jump quantity $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)}$ as follows:
\begin{equation*}
\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)}\le 2\lambda\sqrt{\mathcal{N}_{\lambda/6}(A^\prime_{\delta^n}f:n> n_{r_0})}+16\bigg(\sum_{n\geq n_{r_{0}}}V_2(A^\prime_{r}f:r\in[\delta^{n},\delta^{n+1}))^2\bigg)^{1/2},
\end{equation*}
where $\delta>1$ is a constant depending on $G$ that will be determined in Proposition~\ref{dyadic cube} and $n_{r_0}$ is the unique integer such that $\delta^{n_{r_0}}<r_0\leq \delta^{n_{r_0}+1}$.
Moreover, by comparison with the associated martingale $\mathbb{E}f=(\mathbb{E}_nf:n\in\mathbb{N})$ (see Definition~\ref{martingale sequence}), the dyadic jump is controlled by
\begin{equation*}
96\sqrt{2}\bigg(\sum_{n> n_{r_0}}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}+ 2\sqrt{2}\lambda\sqrt{\mathcal{N}_{\lambda/24}(\mathbb{E}_nf:n>n_{r_0})}.
\end{equation*}
Thus we obtain the following pointwise estimate,
\begin{equation}\label{deal with variational operator}
\begin{split}
\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)}&\le 96\sqrt{2}\bigg(\sum_{n> n_{r_0}}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}\\
&+16\bigg(\sum_{n\ge n_{r_0}}V_2(A^\prime_{r}f:r\in[\delta^{n},\delta^{n+1}))^2\bigg)^{1/2}
+2\sqrt{2}\lambda\sqrt{\mathcal{N}_{\lambda/24}(\mathbb{E}_nf:n>n_{r_0})}.
\end{split}
\end{equation}
This inequality implies that in order to bound $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)}$, it suffices to estimate the three parts on the right-hand side separately. Note that the boundedness of the jump operator $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbb{E})}$ was proved in~\cite{JKRW98,PX88}, see Lemma~\ref{lem:jump}; so for our purposes we only need to estimate the first two terms on the right-hand side of inequality~\eqref{deal with variational operator}.
For abbreviation, we denote
$$S(f):=\bigg(\sum_{n> n_{r_0}}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}$$
and
$$SV(f):=\bigg(\sum_{n\ge n_{r_0}}V_2(A^\prime_{r}f:r\in[\delta^{n},\delta^{n+1}))^2\bigg)^{1/2}.$$
For these two operators, we will show further results, including an ($L^\infty$, BMO) estimate, which is also necessary to obtain the result for $2<p<\infty$ (see Section~\ref{ST5} for the definition of BMO). In what follows, $L^\infty_c$ denotes the space of compactly supported $L^\infty$ functions.
\begin{thm}\label{the estimate of square function}
Assume that $(G,d,m)$ satisfies conditions~\eqref{decay property} and~\eqref{geo-doubling}, then
\begin{enumerate}[\noindent]
\item\emph{(i)}~for every $p\in(1,\infty)$, there exists a constant $c_{p}>0$ such that for every $f\in L^p(G,m)$,
\begin{equation}\label{strong-type inequalities of square function}
\|S(f)\|_{L^p(G,m)}\le c_{p}\|f\|_{L^p(G,m)};
\end{equation}
\item \emph{(ii)}~there exists a constant $c>0$ such that for every $f\in L^1(G,m)$,
\begin{equation}\label{weak-type inequalities of square function}
m\big(\{g\in G:S(f)(g)>\gamma\}\big)\le \frac{c}{\gamma}\|f\|_{L^1(G,m)},~\forall~\gamma>0,
\end{equation}
and for every $f\in L^\infty_c(G,m)$,
\begin{equation}\label{BMO-type inequalities of square function}
\|S(f)\|_{BMO}\le c\|f\|_{L^{\infty}_c(G,m)}.
\end{equation}
\end{enumerate}
\end{thm}
The same results still hold for the short variation operator $SV$.
\begin{thm}\label{the estimate of short variation}
Assume that $(G,d,m)$ satisfies conditions~\eqref{decay property} and~\eqref{geo-doubling}, then
\begin{enumerate}[\noindent]
\item \emph{(i)}~for every $p\in(1,\infty)$, there exists a constant $c_{p}>0$ such that for every $f\in L^p(G,m)$,
\begin{equation}\label{strong-type inequalities of short variation}
\|SV(f)\|_{L^p(G,m)}\le c_{p}\|f\|_{L^p(G,m)};
\end{equation}
\item \emph{(ii)}~there exists a constant $c>0$ such that for every $f\in L^1(G,m)$,
\begin{equation}\label{weak-type inequalities of short variation}
m\big(\{g\in G:SV(f)(g)>\gamma\}\big)\le \frac{c}{\gamma}\|f\|_{L^1(G,m)},~\forall~\gamma>0,
\end{equation}
and for every $f\in L^\infty_c(G,m)$,
\begin{equation}\label{BMO-type inequalities of short variation}
\|SV(f)\|_{BMO}\le c\|f\|_{L^{\infty}_c(G,m)}.
\end{equation}
\end{enumerate}
\end{thm}
The above two theorems, {parallel to Theorems 2.3 and 2.4 of \cite{GXHTM17}}, are not a surprise (cf. \cite{GXHTM17}). However, note that under conditions~\eqref{decay property} and~\eqref{geo-doubling}, $(G,d,m)$ is not necessarily a doubling metric measure space; this induces several new difficulties in proving Theorems \ref{the estimate of square function} and \ref{the estimate of short variation}: for instance, the `dyadic cubes' constructed in Proposition \ref{dyadic cube} may not admit the small boundary property (cf.~\cite[Theorem 11]{Christ90}), and the standard Calder\'{o}n-Zygmund decomposition for homogeneous spaces is not sufficient. For these reasons, we have to pay close attention to the geometric arguments involving the cubes and the balls, and to develop a non-doubling Calder\'on-Zygmund theory.
\bigskip
To deduce Theorem \ref{main-thm2} from Theorem \ref{main-thm1}, we need two transference principles: the first one is for actions induced by measure-preserving measurable transformations, while the other is for regular actions. We refer the reader to~\eqref{regular} for the definition of regular actions and the related constant $\|\cdot\|_r$.
{
A group $G$ is called amenable if it admits a F{\o}lner sequence $(F_n)_{n\in \mathbb N}$, that is, for every $g\in G$,
\begin{align}\label{asymptotically invariant}
\lim_{n\rightarrow\infty} \frac{m((F_n g) \bigtriangleup F_n)}{m(F_n)}= 0,
\end{align}
where $\bigtriangleup$ denotes the usual symmetric difference of two sets. }
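As a standard illustration (not needed later), the integers $G=\mathbb{Z}$ with counting measure form an amenable group: the intervals $F_n=\{1,\dots,n\}$ are a F{\o}lner sequence, since for every $g\in\mathbb{Z}$ and every $n>|g|$,

```latex
% For g >= 0 (the case g < 0 is symmetric), F_n + g = {1+g,...,n+g}, so the
% symmetric difference consists of the |g| points lost on the left and the
% |g| points gained on the right:
$$\frac{m\big((F_n+g)\,\bigtriangleup\, F_n\big)}{m(F_n)}
 =\frac{2|g|}{n}\xrightarrow[n\to\infty]{}0,$$
```

which verifies condition~\eqref{asymptotically invariant}.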
\begin{thm}\label{thm:trans}
Let $G$ be an amenable group equipped with invariant metric $d$ and a right Haar measure $m$, and $T$ an action on $L^p(X,\mu)$. Let $\mathbf{A}^\prime=\{A^\prime_r:r\in\mathcal I\}$ and $\mathbf{A}=\{A_r:r\in\mathcal I\}$ be two sequences of averaging operators given by~\eqref{averaging operator1} and~\eqref{averaging operator}, respectively.
\begin{enumerate}[\noindent]
\item \emph{(i)} Let $p\in[1,\infty)$. If $T$ is an action induced by a measure-preserving measurable transformation and $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}$ is of weak (resp. strong) type $(p,p)$ uniformly in $\lambda>0$, then $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}$ is of weak (resp. strong) type $(p,p)$ uniformly in $\lambda>0$, {and moreover $\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}\|_{L^p\rightarrow L^{p,\infty}}\le \sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}\|_{L^p\rightarrow L^{p,\infty}}$ (resp. $\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}\|_{L^p\rightarrow L^{p}}\le \sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}\|_{L^p\rightarrow L^{p}}$)}.
\item \emph{(ii)} Let $p\in(1,\infty)$. If $T$ is a strongly continuous {regular} action of $G$ on $L^p(X)$ and $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}$ is of strong type $(p,p)$ uniformly in $\lambda>0$, then $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}$ is of strong type $(p,p)$ uniformly in $\lambda>0$, {and moreover there exists a constant $c_p>0$ such that $\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A})}\|_{L^p\rightarrow L^{p}}\le c_p\sup_{h\in G}\|T_h\|^2_r\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime)}\|_{L^p\rightarrow L^p}$}.
\end{enumerate}
\end{thm}
It is a little bit surprising that Theorem \ref{thm:trans}(i) holds true, given that $\lambda\sqrt{\mathcal{N}_{\lambda}(\cdot)}$ is \emph{a priori} not a norm; {while to prove Theorem \ref{thm:trans}(ii), in addition to using the fact that $\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\cdot)}\|_p$ is equivalent to a norm for $p>1$ (cf. \cite{Mirek-Stein-Zor20}), the argument is subtle due to the appearance of the supremum over $\lambda$ outside the $L^p$ norm.}
\bigskip
\begin{comment}
\begin{thm}\label{jump ineq in nonhomo-space}
{Let $(G,d,m)$ satisfy conditions~\eqref{decay property} and~\eqref{geo-doubling} and $\mathbf{A}^\prime=\{A^\prime_r:r\ge r_0\}$ be the sequence of averaging operators given by~\eqref{averaging operator1}. }Then the following assertions hold
\begin{enumerate}[\noindent]
\item\emph{(i)}~{When $p\in(1,\infty)$, there exists a constant $c_{p}>0$ such that for all $f\in L^p(G,m)$},
\begin{equation}\label{Lp ineq}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)(g)}\|_{L^p(G,m)}\le c_{p}\|f\|_{L^p(G,m)}.
\end{equation}
\item \emph{(ii)}~For $p=1$, {there exists a constant $c>0$ such that for every $\gamma>0$ and $f\in L^1(G,m)$,}
\begin{equation}\label{weak L1 ineq}
\sup_{\lambda>0} m\big(\{g\in G:\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbf{A}^\prime f)(g)}>\gamma\}\big)\le \frac{c}{\gamma}\|f\|_{L^1(G,m)}.
\end{equation}
\end{enumerate}
\end{thm}
\end{comment}
\bigskip
An outline of this paper is as follows. In Section~\ref{ST2}, we first recall some necessary preliminaries concerning the definition of `dyadic cubes', which were constructed by Hyt\"{o}nen and Kairema~\cite{Hyt-Anna12} in metric spaces, and then present some technical lemmas, which are essential in proving Theorems~\ref{the estimate of square function} and~\ref{the estimate of short variation}. The last two theorems will be proved in Sections~\ref{ST3}-\ref{ST5}. In Section~\ref{ST6}, we prove the transference principles, namely Theorem~\ref{thm:trans}. In Section~\ref{ST7}, we discuss the $(\epsilon,r_0)$-annular decay property, providing examples and formulating problems; in particular, the balls with respect to a word metric over a group of polynomial growth satisfy conditions~\eqref{decay property},~\eqref{geo-doubling} and~\eqref{asymptotically invariant}, and thus we obtain Theorem~\ref{main-thm2}.
Finally, in Section~\ref{ST8}, we give a proof of Theorem~\ref{exponential-estimate}.
Throughout this paper, we denote by $C$ a positive constant that may vary from line to line, while $c_p$ denotes a positive constant possibly depending on the subscript.
\section{Preliminaries and some technical lemmas}\label{ST2}
In this section, we first recall the `dyadic cubes' constructed on a measure space $(G,d,m)$ satisfying conditions~\eqref{decay property} and~\eqref{geo-doubling}, which might not be a measure doubling metric space; the resulting martingale inequalities will play a key role in the probabilistic approach to jump and variational inequalities. We then collect several technical lemmas which involve or rely on estimates of the boundaries of `dyadic cubes' or balls, or of certain configurations among them; these estimates are subtle, and thus we set and fix throughout the paper several constants, such as $k_1,n_0,n_1, c_0, C_0, C_1, L_0,L_1,\delta$, which depend on conditions~\eqref{decay property} and~\eqref{geo-doubling} and on the construction. These preliminary results and technical lemmas, collected in this way, will greatly facilitate the presentation of the proofs of Theorems \ref{the estimate of square function} and~\ref{the estimate of short variation}.
We will exploit the system of `dyadic cubes' constructed in the setting of geometrically doubling metric measure spaces, which means that each ball of radius $r>0$ can be covered by a fixed finite number of balls of radius $r/2$
(cf. \cite[Theorem 2.2]{Hyt-Anna12}). It is not difficult to see that
a measure space $(G,d,m)$ satisfying conditions~\eqref{decay property} and~\eqref{geo-doubling} is geometrically doubling, but might not be measure doubling. Indeed, by a simple computation, the $(\epsilon,r_0)$-annular decay property---condition~\eqref{decay property}---implies the measure doubling condition for large balls, that is, for every $x\in G$ and $r_0<r\le R<\infty$,
\begin{align}\label{int}
\frac{m(B(x,R))}{m(B(x,r))}\leq (K+1)\Big(\frac{R}{r}\Big)^{\epsilon}.
\end{align}
This yields the geometrically doubling condition for large balls, which, combined with condition~\eqref{geo-doubling}, yields the geometrically doubling property of the space $(G,d,m)$; namely, each ball of radius $r>0$ in $G$ can be covered by no more than $D=\max\{D_0, [9^\epsilon(K+1)]+1\}$ balls of radius $r/2$.
{Moreover, from the geometrically doubling condition, one can easily deduce the following property.
\begin{prop}\label{geometry-doubling}
Let $(G,d,m)$ satisfy the conditions of Theorem \ref{main-thm1} and let $0<r\le R$. Then any ball $B(x,R)$ can be covered by no more than $D^{[\log_2({R}/{r})]+1}$ balls of radius $r$.
\end{prop}
For more information about geometrically doubling property we refer the reader to~\cite{Coifman-Weiss71}.
\begin{prop}\label{dyadic cube}\cite[Theorem 2.2]{Hyt-Anna12}
Let $(G,d,m)$ satisfy the conditions of Theorem \ref{main-thm1}. Fix constants $0<c_0<C_0<\infty$ and $\delta>1$ such that
$$18C_0\delta^{-1}\le c_0.$$
Let $I_k$ \emph{($k\in\mathbb Z$)} be an index set and $\{z_\alpha^k\in G:\alpha\in I_k,k\in\mathbb{Z}\}$ be a collection of points with the properties that
\begin{equation}\label{distance}
d(z_\alpha^k,z_\beta^k)\ge c_0\delta^k~(\alpha\neq\beta),~\min_{\alpha}d(x,z_\alpha^k)<C_0\delta^k,~\forall~x\in G,~k\in\mathbb{Z}.
\end{equation}
Then there exist a family of sets $\big\{Q_\alpha^k\big\}_{\alpha\in I_k}$ associating with $\{z_\alpha^k\}_{\alpha\in I_k}$, and constants $a_0:=c_0/3$ and $C_1:=2C_0$ such that
\begin{enumerate}[\noindent]
\item\emph{(i)}~$\forall~k\in\mathbb{Z}$,~$\cup_{\alpha\in I_k} Q_\alpha^k=G$.
\item \emph{(ii)}~If $k\le l$ then either $Q_\alpha^k\subset Q_\beta^l$ or $Q_\alpha^k\cap Q_\beta^l=\emptyset$.
\item \emph{(iii)}~For each $(k,\alpha)$ and each $k<n$ there is a unique $\beta$ such that $Q_\alpha^k\subset Q_\beta^n$, and for $n=k+1$, we call such $Q_\beta^{k+1}$ the parent of $Q_\alpha^{k}$.
\item \emph{(iv)}~$B(z_\alpha^k, a_0\delta^k)\subseteq Q_\alpha^k\subseteq B(z_\alpha^k, C_1\delta^k)$.
\end{enumerate}
\end{prop}
{We remark that the geometrically doubling property ensures that the minimum of the second inequality in~\eqref{distance} is attained.}
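For intuition, here is a minimal one-dimensional illustration of Proposition~\ref{dyadic cube} (with hypothetical parameter values, not used elsewhere): on $G=\mathbb{R}$ with the Euclidean metric, take $\delta=36$, $c_0=1$, $C_0=2$ (so that $18C_0\delta^{-1}=1\le c_0$) and reference points $z_\alpha^k=(\alpha+\tfrac12)\delta^k$, $\alpha\in\mathbb{Z}$.

```latex
% Separation and covering conditions \eqref{distance}:
% d(z_alpha^k, z_beta^k) >= delta^k = c_0 delta^k, and every x lies within
% delta^k/2 < C_0 delta^k of some z_alpha^k. The half-open intervals
$$Q_\alpha^k=\big[\alpha\delta^k,(\alpha+1)\delta^k\big)$$
% cover R, nest across scales (each is a union of delta = 36 cubes of the
% next finer generation), and satisfy (iv) with a_0 = c_0/3 = 1/3 and
% C_1 = 2C_0 = 4:
$$B\big(z_\alpha^k,\tfrac13\delta^k\big)\subseteq Q_\alpha^k\subseteq B\big(z_\alpha^k,4\delta^k\big).$$
```

In general the construction of~\cite{Hyt-Anna12} produces such a family from any point collection satisfying~\eqref{distance}, without any group or homogeneity structure.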
For $k\in\mathbb Z$, let $\mathcal{F}_k$ be the $\sigma$-algebra generated by the `dyadic cubes' $\{Q_\alpha^{k}:\alpha\in I_k\}$. We recall the following notions associated to the (reverse) martingale theory.
\begin{definition}\label{martingale sequence}
Let $f:G\rightarrow \mathbb{C}$ be a locally integrable function and let $k\in\mathbb Z$. The conditional expectation of $f$ with respect
to $\mathcal{F}_k$ is defined by
\begin{equation}\label{martingale}
\mathbb{E}_kf(x)=\sum_{\alpha\in I_k}\mathds{1}_{Q_\alpha^k}(x)\,\frac{1}{m(Q_\alpha^k)}\int_{Q_\alpha^k}f(y)\,dm(y);
\end{equation}
the resulting martingale difference operator $\mathbb{D}_k$ is defined as
\begin{equation*}
\mathbb{D}_kf=\mathbb{E}_{k-1}f-\mathbb{E}_{k}f.
\end{equation*}
\end{definition}
We check at once that $\mathbb{E}_k\circ \mathbb{E}_j=\mathbb{E}_{\max(j,k)}$ and that for $f\in L^2$, $f=\sum_{k\in\mathbb{Z}}\mathbb{D}_kf$ and $$\Big\|\big(\sum_{k\in\mathbb{Z}}|\mathbb{D}_kf|^2\big)^{1/2}\Big\|_{L^2}=\|f\|_{L^2}.$$
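The square-function identity above follows from the orthogonality of the martingale differences; a short verification, using only the semigroup property $\mathbb{E}_k\circ\mathbb{E}_j=\mathbb{E}_{\max(j,k)}$ and the self-adjointness of conditional expectations on $L^2$, runs as follows.

```latex
% For j < k, expand D_j = E_{j-1} - E_j and D_k = E_{k-1} - E_k; each cross
% term reduces to <f, E_{max} f> via self-adjointness and E_a E_b = E_{max(a,b)}:
\begin{align*}
\langle \mathbb{D}_jf,\mathbb{D}_kf\rangle
&=\langle \mathbb{E}_{j-1}f,\mathbb{E}_{k-1}f\rangle
 -\langle \mathbb{E}_{j-1}f,\mathbb{E}_{k}f\rangle
 -\langle \mathbb{E}_{j}f,\mathbb{E}_{k-1}f\rangle
 +\langle \mathbb{E}_{j}f,\mathbb{E}_{k}f\rangle\\
&=\langle f,\mathbb{E}_{k-1}f\rangle-\langle f,\mathbb{E}_{k}f\rangle
 -\langle f,\mathbb{E}_{k-1}f\rangle+\langle f,\mathbb{E}_{k}f\rangle=0,
\end{align*}
% since j-1 < j <= k-1 < k. The Pythagorean theorem applied to the expansion
% f = sum_k D_k f then gives
$$\|f\|_{L^2}^2=\sum_{k\in\mathbb{Z}}\|\mathbb{D}_kf\|_{L^2}^2
 =\Big\|\Big(\sum_{k\in\mathbb{Z}}|\mathbb{D}_kf|^2\Big)^{1/2}\Big\|_{L^2}^2.$$
```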
Denote $\mathbb{E}=\{\mathbb{E}_k:k\in\mathbb Z\}$. We remark that the strong type $(p,p)$ estimates with $1<p<\infty$ and the weak type $(1,1)$ estimate for the operator $\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbb{E})}$ were given implicitly in \cite{PX88} and explicitly in~\cite{JKRW98}. We state the results as follows.
\begin{comment}
\begin{lemma}\label{lem:lepingle}
Let $\mathbb{E}=(\mathbb{E}_k:k\in\mathbb{Z})$ be a martingale sequence given by~\eqref{martingale}. Let $q\in(2,\infty)$, then
\begin{enumerate}[\noindent]
\item\emph{(i)}~for $p\in(1,\infty)$, there is a positive constant $C_{p,q}$ such that
\begin{equation*}
\|V_q(\mathbb{E}_kf:k\in\mathbb{Z})\|_{L^p(G,m)}\le C_{p,q}\|f\|_{L^p(G,m)},\forall~f\in L^p(G,m);
\end{equation*}
\item \emph{(ii)}~For $p=1$, there is a positive constant $C_q$ such that
\begin{equation*}
m\{x\in G:V_q(\mathbb{E}_kf(x):k\in\mathbb{Z})>\gamma\}\le\frac{C_q}{\gamma}\|f\|_{L^1(G,m)},~\forall~\gamma>0, f\in L^1(G,m).
\end{equation*}
\end{enumerate}
\end{lemma}
\end{comment}
\begin{lemma}\label{lem:jump}
Let $\mathbb{E}=\{\mathbb{E}_k:k\in\mathbb Z\}$ be defined as above.
\begin{enumerate}[\noindent]
\item\emph{(i)}~When $p\in(1,\infty)$, there is a constant $c_p>0$ such that for all $f\in L^p(G,m)$,
\begin{equation*}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbb{E}f)}\|_{L^p(G,m)}\le c_p\|f\|_{L^p(G,m)}.
\end{equation*}
\item \emph{(ii)}~For $p=1$, {there is a constant $c>0$ such that for every $\gamma>0$ and $f\in L^1(G,m)$,}
\begin{equation*}
\sup_{\lambda>0}m\big(\{x\in G:\lambda\sqrt{\mathcal{N}_{\lambda}(\mathbb{E}f)(x)}>\gamma\}\big)\le\frac{c}{\gamma}\|f\|_{L^1(G,m)}.
\end{equation*}
\end{enumerate}
\end{lemma}
\bigskip
Since $(G,d,m)$ might not be a measure doubling metric space, the small boundary property of the `dyadic cubes' constructed in Proposition~\ref{dyadic cube} (see e.g. \cite{Christ90}) does not hold in general. However, from the $(\epsilon,r_0)$-annular decay property, we do have a weaker boundary property---Lemma \ref{boundary}---which will suffice for our purposes in the present paper.
Set
$$L_0=[\log_\delta(12/c_0)]+1, L_1=[\log_\delta(36r_0/c_0)]+1.$$
\begin{lemma}\label{boundary condition}
Let $k,L\in\mathbb{Z}$ satisfy $L_0<L<k+L_0-L_1$ and $\alpha\in I_k$. Then we have
\begin{equation*}
m\big(\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le \delta^{k-L}\}\big)\le \frac{(K+1)^2}{(L-L_0+1)}\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(Q_\alpha^k).
\end{equation*}
\end{lemma}
\begin{proof}
For a fixed point $x\in Q_\alpha^k$ with $d(x,G\setminus Q_\alpha^k)\le\delta^{-L}\delta^k$, we claim that there exists a chain $Q_{\sigma_{k-L+L_0}}^{k-L+L_0}\subset\cdots\subset Q_{\sigma_{k-1}}^{k-1}\subset Q_{\sigma_k}^k=Q_\alpha^k$ such that $x\in Q_{\sigma_{k-L+L_0}}^{k-L+L_0}$ and
\begin{equation}\label{points}
d(z_{\sigma_j}^j,z_{\sigma_i}^i)\ge c_0\delta^i/12,~\forall~k-L+L_0\le j< i\le k.
\end{equation}
Indeed, by Proposition~\ref{dyadic cube}(i)-(iii), for any $n\le k$ there exists a `dyadic cube' $Q_{\sigma}^n$ such that $x\in Q_{\sigma}^n\subseteq Q_\alpha^k$. Hence there exists a chain $Q_{\sigma_{k-L+L_0}}^{k-L+L_0}\subset\cdots\subset Q_{\sigma_{k-1}}^{k-1}\subset Q_{\sigma_k}^k=Q_\alpha^k$ with $x\in Q_{\sigma_{k-L+L_0}}^{k-L+L_0}$.
Now we verify \eqref{points}. First, by {Proposition~\ref{dyadic cube}(iv)}, for every $k-L+L_0\le j\le k$, we have $x\in B(z_{\sigma_{j}}^j, C_1\delta^j)$. We also have $B(z_{\sigma_i}^i,a_0\delta^i)\subset Q_{\sigma_i}^i\subset Q_\alpha^k$. If~\eqref{points} were not true, then
\begin{align*}
a_0\delta^i&\le d(z_{\sigma_i}^i,G\setminus Q_\alpha^k)\le d(z_{\sigma_i}^i,z_{\sigma_j}^j)+d(z_{\sigma_j}^j,x)+d(x,G\setminus Q_\alpha^k)< \frac{c_0}{12}\delta^i+C_1\delta^j+\delta^{-L}\delta^k\\
&\le \frac{a_0}{4}\delta^i+\frac{a_0}{3}\delta^{j+1}+\delta^{-L_0}\delta^{i}\le \frac{a_0}{4}\delta^i+\frac{a_0}{3}\delta^{i}+\frac{a_0}{4}\delta^{i}< a_0\delta^i,
\end{align*}
since $a_0=c_0/3$, $C_1=2C_0$, $3C_1\le a_0\delta$ and $\delta^{-L_0}\le \delta^{-\log_\delta(12/c_0)}=c_0/12$. This leads to a contradiction and the claim is proved.
For every `dyadic cube' $Q_\beta^m$, we denote it briefly by $(m,\beta)$. Let $E=\big\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le \delta^{-L}\delta^k\big\}$. For each $x\in E$, there exists a chain of pairs $(i,\beta(i,x))$ with the properties proved in the first paragraph. Set $S_i=\cup_{x\in E}\{z_{\beta(i,x)}^i\}$ for $k-L+L_0\le i\le k$. In the following, we abbreviate $z^i_{\beta(i,x)}$ to $z_{i(x)}$. We have the following observation: for $z_{i(x)}\neq z_{j(y)}$,
\begin{equation}\label{intersect}
B(z_{i(x)},c_0\delta^i/36)\cap B(z_{j(y)},c_0\delta^j/36)=\emptyset,~\forall~z_{i(x)}\in S_i,~z_{j(y)}\in S_j,~k-L+L_0\le i,j\le k.
\end{equation}
For $i=j$, by~\eqref{distance}, the above assertion is trivially true. For $i\neq j$, without loss of generality, we assume $i>j$. Note that, by the definition of $S_j$, for each $z_{j(y)}\in S_j$ there exists a point $\zeta\in E$ such that $d(z_{j(y)},G\setminus Q_\alpha^k)\le d(z_{j(y)},\zeta)+d(\zeta,G\setminus Q_\alpha^k)\le C_1\delta^j+\delta^{-L}\delta^k$. It follows that if $B(z_{i(x)},c_0\delta^i/36)\cap B(z_{j(y)},c_0\delta^j/36)\neq \emptyset$, then there exists a point $z\in B(z_{i(x)},c_0\delta^i/36)\cap B(z_{j(y)},c_0\delta^j/36)$ such that
\begin{align*}
a_0\delta^i&\le d(z_{i(x)},G\setminus Q_\alpha^k)\le d(z_{i(x)},z_{j(y)})+d(z_{j(y)},G\setminus Q_\alpha^k)\\
&\le d(z_{i(x)},z)+d(z_{j(y)},z)+d(z_{j(y)},G\setminus Q_\alpha^k)\\
&\le \frac{c_0}{36}\delta^i+\frac{c_0}{36}\delta^j+C_1\delta^j+\delta^{-L}\delta^k\le \frac{c_0}{18}\delta^i+C_1\delta^j+\delta^{-L}\delta^k\le \frac{a_0}{6}\delta^i+\frac{a_0}{3}\delta^{j+1}+\delta^{-L_0}\delta^{i}\\
&\le \frac{a_0}{6}\delta^i+\frac{a_0}{3}\delta^{i}+\frac{a_0}{4}\delta^{i}< a_0\delta^i,
\end{align*}
where we used $a_0=c_0/3$, $C_1=2C_0$, $3C_1\le a_0\delta$ and $\delta^{-L_0}\le \delta^{-\log_\delta(12/c_0)}=c_0/12$ again. This leads to a contradiction and so~\eqref{intersect} holds.
We now prove the desired result. In the following, we write $z_i$ {for} $z_{i(x)}$. {Setting} $G_i=\cup_{z_{i}\in S_i}B(z_{i},c_0\delta^i/36)$, for any $k-L+L_0\le i\le k$, we have
\begin{align*
m(E)&\le\sum_{z_{k-L+L_0}\in S_{k-L+L_0}}m(B(z_{k-L+L_0}, C_1\delta^{k-L+L_0}))\\
&\le (K+1)\bigg(\frac{36C_1}{c_0}\bigg)^\epsilon\sum_{z_{k-L+L_0}\in S_{k-L+L_0}}m(B(z_{k-L+L_0},c_0\delta^{k-L+L_0}/36))\\
&=(K+1)\bigg(\frac{36C_1}{c_0}\bigg)^\epsilon\sum_{w_i\in S_i}\sum_{\substack{z_{k-L+L_0}\preceq w_i,\\z_{k-L+L_0}\in S_{k-L+L_0}}}m(B(z_{k-L+L_0},c_0\delta^{k-L+L_0}/36))\\
&\le (K+1)\bigg(\frac{36C_1}{c_0}\bigg)^\epsilon\sum_{w_i\in S_i}m(B(w_i,C_1\delta^i))\\
&\le(K+1)^2\bigg(\frac{36C_1}{c_0}\bigg)^{2\epsilon} \sum_{w_i\in S_i}m(B(w_i,c_0\delta^i/36))=(K+1)^2\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(G_i),
\end{align*}
where $z_{k-L+L_0}\preceq w_i$ in the third line means the inclusion of the corresponding `dyadic cubes', i.e. $Q_{\beta(k-L+L_0)}^{k-L+L_0}\subset Q^i_{\beta(i)}$, and we used~\eqref{int} in the second and last inequalities since $\delta^i\ge \delta^{k-L+L_0}\ge \delta^{L_1}> 36r_0/c_0$. The equality in the third line follows from~\eqref{intersect}, as does the equality in the last line.
From the above inequalities and the disjointness of the sets $G_i$, we obtain
\begin{equation*
m(E)\le \frac{(K+1)^2}{(L-L_0+1)}\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}\sum_{i=k-L+L_0}^km(G_i)\le \frac{(K+1)^2}{(L-L_0+1)}\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(Q_\alpha^k),
\end{equation*}
and the lemma follows.
\end{proof}
Set
\begin{align*}
&L_2=[\log_{\delta}(4C_0+1)]+1,~L_3=[2(K+1)^2\big(\frac{72C_0}{c_0}\big)^{2\epsilon}]+L_0+L_2,\\
&\eta=(\log_\delta2)/L_3,~C_2=4(K+1)^2(72C_0/c_0)^{2\epsilon},~{C_2^\prime=(K+1)(3C_1/a_0)^{\epsilon}.}
\end{align*}
\begin{lemma}\label{boundary}
Under the assumption of Lemma~\ref{boundary condition}, we have
\begin{equation*}
\begin{split}
&m\big(\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le \delta^{k-L}\}\big)\le C_2\delta^{-L\eta}m(Q_\alpha^k),\\
&m\big(\{x\in G\setminus Q_\alpha^k:d(x,Q_\alpha^k)\le \delta^{k-L}\}\big)\le C_2C^\prime_2\delta^{-L\eta}m(Q_\alpha^k).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
{Let us focus on the first inequality.} Let $\ell\in\mathbb{N}$, set
\begin{equation*}
E_{-\ell}(Q_\alpha^k)=\{Q_\beta^{k-\ell}\subset Q_\alpha^k:d(Q_\beta^{k-\ell},G\setminus Q_\alpha^{k})\le (C_1+1)\delta^{k-\ell}\},
\end{equation*}
where $d(Q_\beta^{k-\ell},G\setminus Q_\alpha^{k})=\inf_{x\in Q_\beta^{k-\ell}}d(x,G\setminus Q_\alpha^{k})$. We denote by
\begin{equation*}
e_{-\ell}(Q_\alpha^k)=\{x:x\in Q_\beta^{k-\ell}~\textit{with}~Q_\beta^{k-\ell}\in E_{-\ell}(Q_\alpha^k)\}
\end{equation*}
the underlying point set. We proceed to show that
\begin{equation}\label{inequality}
\big\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le \delta^{k-\ell}\big\}\subseteq e_{-\ell}(Q_\alpha^k)\subseteq\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le\delta^{k-\ell+L_2}\}.
\end{equation}
Fix $x\in Q_\alpha^k$ such that $d(x,G\setminus Q_\alpha^k)\le \delta^{k-\ell}$. There exists a `dyadic cube' $Q_{\beta}^{k-\ell}$ such that $x\in Q_{\beta}^{k-\ell}\subset Q_\alpha^k$, and then
\begin{align*}
d(Q_{\beta}^{k-\ell},G\setminus Q_\alpha^k)&\le C_1\delta^{k-\ell}+d(x,G\setminus Q_\alpha^k)\le(C_1+1)\delta^{k-\ell}.
\end{align*}
On the other hand, fix a point $x\in e_{-\ell}(Q_\alpha^k)$; then there exists $Q_\beta^{k-\ell}\in E_{-\ell}(Q_\alpha^k)$ such that
\begin{equation*}
d(x,G\setminus Q_\alpha^k)\le C_1\delta^{k-\ell}+d(Q_\beta^{k-\ell},G\setminus Q_\alpha^k)\le (2C_1+1)\delta^{k-\ell}\le\delta^{k-\ell+L_2},
\end{equation*}
since $C_1=2C_0$, and~\eqref{inequality} is proved.
To achieve our goal, we split $L$ into two cases: $L_0<L\le2L_3$ and $L>2L_3$. We first treat the case $L>2L_3$.
Set $M_0=[L/L_3]$; {here and below, $[t]$ denotes the integer part of a real number $t$}. Let $F_1(Q_\alpha^k)= E_{-L_3}(Q_\alpha^k)$ and
\begin{equation*}
F_{n}(Q_\alpha^k)=\bigcup_{Q_\beta^{k-(n-1)L_3}\in F_{n-1}(Q_\alpha^k)}E_{-L_3}(Q_\beta^{k-(n-1)L_3}),
\end{equation*}
for $2\le n\le M_0$. Let $f_n(Q_\alpha^k)$ be the underlying point set. It is easy to check that
\begin{equation}\label{contain}
e_{-nL_3}(Q_\alpha^k)\subset f_n(Q_\alpha^k).
\end{equation}
Moreover, for each $Q_\beta^{k-(n-1)L_3}\in F_{n-1}(Q_\alpha^k)$ and $1\le n\le M_0$, combining~\eqref{inequality} with Lemma~\ref{boundary condition}, we obtain
\begin{align*}
m(e_{-L_3}(Q_\beta^{k-(n-1)L_3}))&\le m\{x\in Q_\beta^{k-(n-1)L_3}:d(x,G\setminus Q_\beta^{k-(n-1)L_3})\le\delta^{k-nL_3+L_2}\}\\
&\le\frac{(K+1)^2}{(L_3-L_2-L_0+1)}\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(Q_\beta^{k-(n-1)L_3})\le\frac{1}{2}m(Q_\beta^{k-(n-1)L_3}),
\end{align*}
since $nL_3\le L<k+L_0-L_1$. Then
\begin{equation*}
m(f_n(Q_\alpha^k))\le \frac{1}{2}m(f_{n-1}(Q_\alpha^k)),
\end{equation*}
and iterating the above inequality we have $m(f_{M_0}(Q_\alpha^k))\le 2^{-M_0}m(Q_\alpha^k)$. From this inequality and~\eqref{contain}, one has
\begin{align*}
m\big(\{x\in Q_\alpha^k:&d(x,G\setminus Q_\alpha^k)\le \delta^{k-L}\}\big)\le m(e_{-L}(Q_\alpha^k))\le m(e_{-M_0L_3}(Q_\alpha^k))\\
&\le m(f_{M_0}(Q_\alpha^k))\le 2^{-M_0}m(Q_\alpha^k)\le 2^{-L/L_3+1}m(Q_\alpha^k)=2\delta^{-\eta L}m(Q_\alpha^k),
\end{align*}
where the second inequality in the first line follows from the definition of $e_{-\ell}(Q_\alpha^k)$.
For the case $L_0<L\le 2L_3$, using Lemma~\ref{boundary condition} again and noting that $\delta^{\eta L_3}=\delta^{\log_\delta 2}=2$, we have
\begin{align*}
m\big(\{x\in Q_\alpha^k:d(x,G\setminus Q_\alpha^k)\le \delta^{k-L}\}\big)&\le(K+1)^2\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(Q_\alpha^k)\\
&\le \delta^{-\eta L}\delta^{2\eta L_3}(K+1)^2\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}m(Q_\alpha^k)\\
&=4(K+1)^2\bigg(\frac{72C_0}{c_0}\bigg)^{2\epsilon}\delta^{-\eta L}m(Q_\alpha^k).
\end{align*}
{We now prove the second inequality.} Let $\widetilde{E}=\{x\in G\setminus Q_\alpha^k:d(x,Q_\alpha^k)\le \delta^{k-L}\}$. {Define the set $\widetilde{I}=\{\beta: Q_\beta^k\cap\widetilde{E}\neq\emptyset,\beta\in I_k\}$ and $e(Q_\beta^k)=\{x\in Q_\beta^k:d(x,G\setminus Q_\beta^k)\le \delta^{k-L}\}$. Then by Proposition~\ref{dyadic cube}(i), we obtain $\widetilde{E}=\cup_{\beta\in \widetilde{I}}e(Q_\beta^k)$.
Fix $\beta\in \widetilde{I}$. There exists a point $y_0\in Q_\beta^k\cap \widetilde{E}$. By Proposition~\ref{dyadic cube}(iv), for every $y\in Q_\beta^k$, we have
$$d(y,z_\alpha^k)\le d(y,y_0)+d(y_0,z_\alpha^k)\le C_1\delta^k+\delta^{k-L}+C_1\delta^k\le 3C_1\delta^k.$$
Hence $Q_\beta^k\subseteq B(z_\alpha^k,3C_1\delta^k)$. By Proposition~\ref{dyadic cube}(ii), the `dyadic cubes' $Q_\beta^k$ are pairwise disjoint, hence $\cup_{\beta\in\widetilde{I}}Q_\beta^k\subseteq B(z_\alpha^k,3C_1\delta^k)$}. Then by the first inequality and~\eqref{int}, {we obtain
\begin{align*}
m(\widetilde{E})&\le m\big(\cup_{\beta\in\widetilde{I}}e(Q_\beta^k)\big)\le \sum_{\beta\in\widetilde{I}}m(e(Q_\beta^k))\\
&\le \sum_{\beta\in\widetilde{I}}C_2\delta^{-L\eta}m(Q_\beta^k)\le C_2\delta^{-L\eta}m(\cup_{\beta\in\widetilde{I}}Q_\beta^k)\le C_2\delta^{-L\eta}m(B(z_\alpha^k,3C_1\delta^k))\\
&\le C_2(K+1)(3C_1/a_0)^{\epsilon}\delta^{-L\eta}m(B(z_\alpha^k,a_0\delta^k))\\
&\le C_2C_2^\prime\delta^{-L\eta}m(Q_\alpha^k),
\end{align*}
}which completes the proof.
\end{proof}
\begin{comment}
Set $C_3=(K+1)(2C_1/a_0)^\epsilon$.
\begin{lemma}\label{outer
With the $k$ and $L$ satisfies the condition of Lemma~\ref{boundary condition}, we have
\begin{equation*}
m(\{x\in G\setminus Q_\alpha^k:d(x,Q_\alpha^k)\le \delta^{k-L}\})\le C_2C_3\delta^{-L\eta}m(Q_\alpha^k).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\widetilde{E}=\{x\in G\setminus Q_\alpha^k:d(x,Q_\alpha^k)\le \delta^{k-L}\}$. Define the set $\widetilde{I}=\{\beta: Q_\beta^k\cap\widetilde{E}\neq\emptyset,\beta\in I_k\}$ and $E(Q_\beta^k)=\{x\in Q_\beta^k:d(x,G\setminus Q_\beta^k)\le \delta^{k-L}\}$. Recall that $L_0=[\log_{\delta}(12/c_0)]+1$, $L_1=[\log_{\delta}(36r_0/c_0)]+1$ and $c_0=3a_0$, for $L>L_0$ and $k>L_1$ , one can see that $\delta^{-L}<\delta^{-L_0}\le c_0/12=a_0/4$ and $a_0\delta^k>a_0\delta^{L_1}\ge12r_0$. By these observations and Proposition~\ref{dyadic cube} (i), we have
\begin{equation*}
\widetilde{E}\subseteq \cup_{\beta\in\widetilde{I}}E(Q_\beta^k)\subseteq B(z_\alpha^k,2C_1\delta^k).
\end{equation*}
Moreover, we have
\begin{equation*}
|\widetilde{I}|\le \frac{m(B_{2C_1\delta^k})}{m(B_{a_0\delta^k})}\le (K+1)(2C_1/a_0)^\epsilon,
\end{equation*}
where we used~\eqref{int} in the last inequality. Then applying Lemma~\ref{boundary} and~\eqref{int}, we obtain
\begin{align*}
m(\widetilde{E})&\le m\big(\cup_{\beta\in\widetilde{I}}E(Q_\beta^k)\big)\le \sum_{\beta\in\widetilde{I}}m(E(Q_\beta^k))\\
&\le \sum_{\beta\in\widetilde{I}}\delta^{-L\eta}m(Q_\beta^k)\le (K+1)(2C_1/a_0)^\epsilon \delta^{-L\eta} m(B_{C_1\delta^k})\\
&\le (K+1)^2(2C_1/a_0)^{2\epsilon}\delta^{-L\eta}m(B_{a_0\delta^k})\le (K+1)^2(2C_1/a_0)^{2\epsilon}\delta^{-L\eta}m(Q_\alpha^k),
\end{align*}
which completes the proof.
\end{proof}
\end{comment}
Given a ball $B_{\delta^{n}}$ and a `dyadic cube' $Q^{k}_\alpha$, define
$$\mathcal{H}(B_{\delta^{n}},Q^{k}_\alpha)=\{x\in Q_\alpha^{k}:B(x,\delta^{n})\cap (Q^{k}_\alpha)^{c}\neq \emptyset\}.$$
Set
$$n_0=\max\{L_1-L_0,0\}.$$
\begin{lemma}\label{measure estimate of cube}
Let $n>n_0$ and $k>L_0$. Then for every `dyadic cube' $Q_\alpha^{n+k}$, we have
\begin{equation*}
m(\mathcal{H}(B_{\delta^{n}},Q^{n+k}_\alpha))\le C_2\delta^{-k\eta} m(Q^{n+k}_\alpha).
\end{equation*}
\end{lemma}
\begin{proof}
We check at once that for every $x\in \mathcal{H}(B_{\delta^{n}},Q^{n+k}_\alpha)$, the distance $d(x, (Q^{n+k}_\alpha)^{c})$ is at most $\delta^{n}$, and so
\begin{equation*}
\begin{split}
\mathcal{H}(B_{\delta^{n}},Q^{n+k}_\alpha)
&\subseteq \{x\in Q_\alpha^{n+k}:d(x,G\setminus Q_\alpha^{n+k})\le \delta^{n}\}.
\end{split}
\end{equation*}
Note that $k>L_0$ and $n>L_1-L_0$, then by Lemma~\ref{boundary}, we obtain
\begin{equation*}
\begin{split}
m(\mathcal{H}(B_{\delta^{n}},Q^{n+k}_\alpha))\le m\big(\{x\in Q_\alpha^{n+k}:d(x,G\setminus Q_\alpha^{n+k})\le \delta^{n+k-k}\}\big)\le C_2\delta^{-k\eta} m(Q^{n+k}_\alpha),
\end{split}
\end{equation*}
which is the desired conclusion.
\end{proof}
\begin{remark}
\emph{
Set $\widetilde{\mathcal{H}}(B_{\delta^{n}},Q^{k}_\alpha)=\{x\in G\setminus Q_\alpha^{k}:B(x,\delta^{n})\cap Q^{k}_\alpha\neq \emptyset\}$. Similar to Lemma~\ref{measure estimate of cube}, by Lemma~\ref{boundary}, we have
\begin{equation*}
m(\widetilde{\mathcal{H}}(B_{\delta^{n}},Q^{n+k}_\alpha))\le C_2\delta^{-k\eta} m(Q^{n+k}_\alpha).
\end{equation*}}
\end{remark}
\begin{comment}
The proof is similar in spirit to~\cite[Lemma 2.5]{GXHTM17}. Let us prove it briefly.
\begin{proof}
For every $n\in\mathbb{N}$, by the assumption that $\mathbb{S}_n$ is sub-additivity, it follows that
\begin{equation*}
\sup_{j,m}|\mathbb{S}_n\big(\sum_{j\le k\le m}u_{k+n}\big)|\le \sup_{j,m}\sum_{j\le k\le m}|\mathbb{S}_nu_{k+n}|\le \sum_{k\in\mathbb{Z}}|\mathbb{S}_nu_{k+n}|.
\end{equation*}
Using the triangle inequality for the $L^2$-norm, condition~\eqref{condition of orth-prin2} and the Minkowski inequality imply
\begin{equation*}
\begin{split}
\bigg(&\sum_{n\in\mathbb{N}\setminus [0,\tilde{n}_0]}\|\sup_{j,m}|\mathbb{S}_n\big(\sum_{j\le k\le m}u_{k+n}\big)|\|^2_{L^2}\bigg)^{1/2}\le
\bigg(\sum_{n\in\mathbb{N}\setminus[0,\tilde{n}_0]}\bigg(\sum_{k\in\mathbb{Z}}\|\mathbb{S}_nu_{k+n}\|_{L^2}\bigg)^2\bigg)^{1/2}\\
&\le C\bigg(\sum_{n\in\mathbb{N}\setminus[0,\tilde{n}_0]}\bigg(\sum_{k\in\mathbb{Z}}|a(k)|\|v_{k+n}\|_{L^2}\bigg)^2\bigg)^{1/2}\le C\sum_{k\in\mathbb{Z}}
\bigg(\sum_{n\in\mathbb{N}\setminus[0,\tilde{n}_0]}|a(k)|^2\|v_{k+n}\|^2_{L^2}\bigg)^{1/2}\\
&\le C \sum_{k\in\mathbb{Z}}|a(k)|\bigg(\sum_{n\in\mathbb{Z}}\|v_{n}\|^2_{L^2}\bigg)^{1/2}.
\end{split}
\end{equation*}
Finally, if for each $n\in [0,\tilde{n}_0]$, $\mathbb{S}_n$ is controlled by maximal operator $M$, then
\begin{equation*}
\sum_{n\in[0,\tilde{n}_0]}\|\mathbb{S}_nf\|_{L^2}\le\sum_{n\in[0,\tilde{n}_0]}\|Mf\|_{L^2}\le C(\tilde{n}_0+1)\|f\|_{L^2},
\end{equation*}
which completes the proof.
\end{proof}
\end{comment}
Set
$$K_\epsilon=(2^\epsilon+1)K+2^\epsilon.$$
\begin{lemma}\label{lem:decay}
Let $r\ge2r_0$. Then for every $s\in(0,r]$, we have
\begin{equation*}
\begin{split}
m(B(x, r+s))-m(B(x,r-s))\le K_{\epsilon}\bigg(\frac{s}{r}\bigg)^{\epsilon}m(B(x,r)).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
We only need to estimate the part $m(B(x, r))-m(B(x,r-s))$, since by the $(\epsilon,r_0)$-annular decay property---\eqref{decay property}, we have $m(B(x, r+s))-m(B(x,r))\le K(s/r)^{\epsilon}m(B(x,r))$.
In the following, we split into two cases: $s\in(0,r/2)$ and $s\in[r/2,r]$.
For $s\in(0,r/2)$, since $r\ge 2r_0$, we have $r-s>r/2\ge r_0$; applying the $(\epsilon,r_0)$-annular decay property again, we obtain
\begin{align*}
m(B(x, r))-m(B(x,r-s))&\le K\bigg(\frac{s}{r-s}\bigg)^{\epsilon}m(B(x,r-s))\\
&\le K\bigg(\frac{r}{r-s}\bigg)^{\epsilon}\bigg(\frac{s}{r}\bigg)^{\epsilon}m(B(x,r))\le K2^{\epsilon}\bigg(\frac{s}{r}\bigg)^{\epsilon}m(B(x,r)).
\end{align*}
For $s\in[r/2,r]$, since $1/2\le s/r\le1$, we have $ 2^\epsilon(s/r)^{\epsilon}\ge 1$ and $ m(B(x, r))-m(B(x,r-s))\le2^\epsilon(s/r)^{\epsilon}m(B(x, r))$.
{Combining the two cases}, we obtain
\begin{equation*}
m(B(x, r+s))-m(B(x,r-s))\le \big((2^\epsilon+1)K+2^\epsilon\big)\bigg(\frac{s}{r}\bigg)^{\epsilon}m(B(x,r)),
\end{equation*}
and the lemma follows.
\end{proof}
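As a sanity check on Lemma~\ref{lem:decay} (an elementary Euclidean example, not used in the sequel), Lebesgue measure on $\mathbb{R}^d$ satisfies the $(\epsilon,r_0)$-annular decay property with $\epsilon=1$ and any $r_0>0$: for $0<s\le r$,

```latex
% m(B(x,r)) = c_d r^d with c_d the volume of the unit ball; by the mean
% value theorem, (r+s)^d - r^d <= d(r+s)^{d-1} s <= d(2r)^{d-1} s, hence
$$m(B(x,r+s))-m(B(x,r))=c_d\big[(r+s)^d-r^d\big]
 \le d\,2^{d-1}\,\frac{s}{r}\,m(B(x,r)),$$
```

i.e. condition~\eqref{decay property} holds with $K=d\,2^{d-1}$, and Lemma~\ref{lem:decay} then applies with $K_\epsilon=3d\,2^{d-1}+2$.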
{Given a ball $B(x,r)$ and an integer $n$, we define}
$$\mathcal{I}(B(x,r),n)=\cup_\alpha\{Q_\alpha^{n}\cap B(x,r): Q_\alpha^{n}\cap\partial B(x,r) \neq\emptyset\}.$$
Set
$$n_1=\min\{n\in\mathbb{N}:\delta^n\ge2r_0\}, k_1=\max\{k\in\mathbb{Z}:C_1\delta^k\le 1\}.$$
Unless otherwise stated, we assume that $n>n_1$ and $k<k_1$ in the following three lemmas.
\begin{comment}
From the doubling condition \eqref{doubling condition of ball}, it is well-known that
\begin{align}\label{int}
\frac{m(B(x,R))}{m(B(x,r))}\leq C_m\Big(\frac{R}{r}\Big)^{\theta},\;\forall~0<r\leq R,~x\in G,
\end{align}
where $\theta=\log_2 C_m$ and $C_m$ is the smallest constant such that \eqref{doubling condition of ball} holds.
\end{comment}
\begin{lemma}\label{measure estimate of annulus}
For any $x\in G$, we have
\begin{equation*}
\begin{split}
&\sup_{r\in[\delta^{n},\delta^{n+1}]}\frac{m(\mathcal{I}(B(x,r),n+k))}{m(B(x,r))}\le K_\epsilon C_1^\epsilon\delta^{\epsilon k}.\\
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Note that for $k<k_1$ and $r\in[\delta^n,\delta^{n+1}]$, ${\mathcal{I}}(B(x,r),n+k)$ is contained in the annulus $B(x,r+C_1\delta^{n+k})\setminus B(x,r-C_1\delta^{n+k})$. For every $n>n_1$ and $r\in[\delta^{n},\delta^{n+1}]$, Lemma~\ref{lem:decay} yields
\begin{equation*}
\begin{split}
m({\mathcal{I}}(B(x,r),n+k))&\le m\bigg(B(x,r+C_1\delta^{n+k})\setminus B(x,r-C_1\delta^{n+k})\bigg)\\
&\le K_\epsilon\bigg(\frac{C_1\delta^{n+k}}{r}\bigg)^{\epsilon}m(B(x,r))\\
&\le K_\epsilon C_1^\epsilon\delta^{\epsilon k}m(B(x,r)),\\
\end{split}
\end{equation*}
which is the desired conclusion.
\end{proof}
To state the next technical lemmas, we need the following estimate (cf. \cite[Theorem 3.5]{AL19}). Recall that $A^\prime_r$ is the averaging operator given by~\eqref{averaging operator1}.
\begin{prop}\label{aver}
Let $r>0$, then for every $p\in[1,\infty]$, we have $\|A^\prime_r\|_{L^p(G,m)\rightarrow L^p(G,m)}\le D^{1/p}$.
\end{prop}
With $\mathcal{I}(B(x,r),n+k)$ being defined as above, we define
\begin{equation*}
\begin{split}
&M_{n+k}f(x)=\sup_{r\in[\delta^n,\delta^{n+1}]}|\frac{1}{m(B(x,r))} \int_{\mathcal{I}(B(x,r),n+k)}f(y)dm(y)|.\\
\end{split}
\end{equation*}
\begin{lemma}\label{lem:Akn}
Let {$p\in[1,\infty]$} and $p^\prime$ be its conjugate index. There exists a constant {$D_{p}=\big((K+1)(\delta+1)^\epsilon\big)^{1/p}D^{1/p}(K_\epsilon C_1^\epsilon)^{1/p^\prime}$} such that for all $f\in L^p(G,m)$,
\begin{equation*}
\|M_{n+k}f\|_{L^p(G,m)}\le D_{p}\delta^{\epsilon k/p^\prime}\|f\|_{L^p(G,m)}
\end{equation*}
\end{lemma}
\begin{proof}
For $p=\infty$, the conclusion is immediate from Lemma~\ref{measure estimate of annulus}. For $p\in[1,\infty)$, fix $x\in G$ and note that ${\mathcal{I}}(B(x,r),n+k)\subseteq B(x,r+C_1\delta^{n+k})$. Then, using H\"{o}lder's inequality and Lemma~\ref{measure estimate of annulus}, we obtain
\begin{equation*}
\begin{split}
{M}_{n+k}f(x)&\le \sup_{r\in[\delta^n,\delta^{n+1}]}\frac{m({\mathcal{I}}(B(x,r),n+k))^{1/p^\prime}}{m(B(x,r))}\bigg(\int_{{\mathcal{I}}(B(x,r),n+k)}
|f(y)|^pdm(y)\bigg)^{1/p}\\
&\le (K_\epsilon C_1^\epsilon)^{1/p^\prime}\delta^{\epsilon k/p^\prime}\bigg(\frac{1}{m(B(x,\delta^n))}\int_{B(x,\delta^{n+1}+C_1\delta^{n+k})}|f(y)|^pdm(y)\bigg)^{1/p},\\
\end{split}
\end{equation*}
{and by inequality~\eqref{int}, the above inequality yields
\begin{align*}
&{M}_{n+k}f(x)\le (K_\epsilon C_1^\epsilon)^{1/p^\prime}\delta^{\epsilon k/p^\prime}\bigg(\frac{(K+1)(\delta+1)^\epsilon}{m(B(x,\delta^{n+1}+C_1\delta^{n+k}))}\int_{B(x,\delta^{n+1}+C_1\delta^{n+k})}|f(y)|^pdm(y)\bigg)^{1/p}.
\end{align*}
{Then applying Proposition~\ref{aver}, we have}
\begin{equation*}
\|{M}_{n+k}f\|_{L^{p}}\le\big((K+1)(\delta+1)^\epsilon\big)^{1/p}D^{1/p}(K_\epsilon C_1^\epsilon)^{1/p^\prime}\delta^{\epsilon k/p^\prime}\|f\|_{L^p},
\end{equation*}}
and the lemma follows.
\end{proof}
Given an annulus $B(x,r)\setminus B(x,s)$ and integers $k,n$, define
\begin{align*}
&\mathcal{I}(B(x,r)\setminus B(x,s),n)\\
&=\cup_\alpha\{Q_\alpha^{n}\cap (B(x,r)\setminus B(x,s)): Q_\alpha^{n}\cap\partial (B(x,r)\setminus B(x,s)) \neq\emptyset\},
\end{align*}
and
\begin{equation*}
M^S_{n+k}f(x)=\sup_{\delta^n\le r_0<\cdots<r_J\le\delta^{n+1}}\bigg(\sum_{i=1}^J|\frac{1}{m(B(x,r_{i}))} \int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)}f(y)dm(y)|^2\bigg)^{1/2}.
\end{equation*}
\begin{lemma}\label{lem:Mkn}
{There exists a constant $C_{3}=\big(2(K+1)D\delta^\epsilon C_1^\epsilon K_\epsilon\big)^{1/2}$ such that for all $f\in L^2(G,m)$,
\begin{equation*}
\|M^S_{n+k}f\|_{L^2(G,m)}\le C_3 \delta^{\epsilon k/2}\|f\|_{L^2(G,m)}.
\end{equation*}}
\end{lemma}
\begin{proof}
Let $x\in G$ and fix $\delta^n\le r_{i-1}<r_i\le \delta^{n+1}$. By the Cauchy-Schwarz inequality and Lemma~\ref{measure estimate of annulus}, we have
\begin{align*}
&|\frac{1}{m(B(x,r_{i}))}\int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)}f(y)dm(y)|^2\\%\sup_{\delta^n\le r_0<\cdots<r_J\le\delta^{n+1}}\sup_{\delta^n\le r_0<\cdots<r_J\le\delta^{n+1}}\sum_{i=1}^J
&\le \frac{m(\mathcal{I}(B(x,r_{i})\setminus B(x,r_{i-1}),n+k))}{m(B(x,r_{i}))^2}\int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)}|f(y)|^2dm(y)\\
&\le \frac{2C_1^\epsilon K_\epsilon\delta^{\epsilon k}}{m(B(x,\delta^{n}))} \int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)}|f(y)|^2dm(y),
\end{align*}
where we used the fact that $\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)\subseteq \mathcal{I}(B(x,r_i),n+k)\cup\mathcal{I}(B(x,r_{i-1}),n+k)$ in the last inequality. From this, inequality~\eqref{int} and the observation that $\cup_{i}\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)\subseteq B(x,\delta^{n+1})$, it follows that
\begin{align*}
M^S_{n+k}f(x)&\le \bigg(\frac{2C_1^\epsilon K_\epsilon\delta^{\epsilon k}}{m(B(x,\delta^{n}))} \int_{B(x,\delta^{n+1})}|f(y)|^2dm(y)\bigg)^{1/2}\\
&\le \bigg(\frac{2(K+1)\delta^\epsilon C_1^\epsilon K_\epsilon\delta^{\epsilon k}}{m(B(x,\delta^{n+1}))} \int_{B(x,\delta^{n+1})}|f(y)|^2dm(y)\bigg)^{1/2}&.
\end{align*}
{Then using Proposition~\ref{aver}, we conclude}
\begin{equation*}
\|M^{S}_{n+k}f\|_{L^2} \le\big(2(K+1)D\delta^\epsilon C_1^\epsilon K_\epsilon\big)^{1/2}\delta^{\epsilon k/2}\|f\|_{L^2}.
\end{equation*}
\end{proof}
\begin{comment}
Recall that for $f(x)\in L^p(G,m)$ and $r>0$, the ball averaging operator is defined by
\begin{equation*}
A_rf(x)=\frac{1}{m(B(x,r))}\int_{B(x,r)}f(y)dm(y).
\end{equation*
\end{comment}
Let $f$ be a locally integrable function on $G$. We define
\begin{equation*}
S_n(f)=|A^\prime_{\delta^n}f-\mathbb{E}_nf|,~SV_n(f)=V_2(A^\prime_{r}f:r\in[\delta^{n},\delta^{n+1})).
\end{equation*}
Note that
\begin{equation*}
\frac{1}{m(B_r)}\int_{B_r}f(xy)dm(y)=\frac{1}{m(B(x,r))}\int_{B(x,r)}f(y)dm(y).
\end{equation*}
Fixing $\delta^{n}\le r_{i-1}<r_i<\delta^{n+1}$, we have
\begin{equation*}
\begin{split}
\big|\frac{1}{m(B(x,r_i))}&\int_{B(x,r_i)}f(y)dm(y)-\frac{1}{m(B(x,r_{i-1}))}
\int_{B(x,r_{i-1})}f(y)dm(y)\big|\\
&\le\big|\frac{1}{m(B(x,r_i))}\int_{B(x,r_i)\setminus B(x,r_{i-1})}f(y)dm(y)\big|\\
&+\big|\bigg(\frac{1}{m(B(x,r_i))}-\frac{1}{m(B(x,r_{i-1}))}\bigg)\int_{B(x,r_{i-1})}f(y)dm(y)\big|.
\end{split}
\end{equation*}
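For clarity, we record the elementary identity behind the preceding display, obtained by adding and subtracting $\frac{1}{m(B(x,r_i))}\int_{B(x,r_{i-1})}f(y)dm(y)$:
\begin{equation*}
\begin{split}
\frac{1}{m(B(x,r_i))}\int_{B(x,r_i)}f(y)dm(y)&-\frac{1}{m(B(x,r_{i-1}))}\int_{B(x,r_{i-1})}f(y)dm(y)\\
&=\frac{1}{m(B(x,r_i))}\int_{B(x,r_i)\setminus B(x,r_{i-1})}f(y)dm(y)\\
&\quad+\bigg(\frac{1}{m(B(x,r_i))}-\frac{1}{m(B(x,r_{i-1}))}\bigg)\int_{B(x,r_{i-1})}f(y)dm(y),
\end{split}
\end{equation*}
and the triangle inequality then gives the stated bound.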
From the above observations and the triangle inequality for the $\ell^2$-norm, we obtain
\begin{equation}\label{controll the short variation}
SV_n(f)(x)\le SV_I(f)(x)+SV_{II}(f)(x),
\end{equation}
where
\begin{equation*}
\begin{split}
SV_I(f)(x)&=\bigg(\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J\frac{1}{m(B(x,r_i))^2}\big|\int_{B(x,r_i)\setminus B(x,r_{i-1})}f(y)dm(y)\big|^2\bigg)^{1/2}
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&SV_{II}(f)(x)=\\
&\bigg(\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\big(\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}\big)\int_{B(x,r_{i-1})}f(y)dm(y)\big|^2\bigg)^{1/2}.
\end{split}
\end{equation*}
Moreover, since the $\ell^1$ norm dominates the $\ell^2$ norm, we have
\begin{equation}\label{estimate of short variation}
\begin{split}
&SV_n(f)(x)\le \sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J\frac{1}{m(B(x,r_i))}\big|\int_{B(x,r_i)\setminus B(x,r_{i-1})}f(y)dm(y)\big|\\
&+\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\bigg(\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}\bigg)\int_{B(x,r_{i-1})}f(y)dm(y)\big|\\
&\le \frac{1}{m(B(x,\delta^{n}))}
\int_{B(x,\delta^{n+1})\setminus B(x,\delta^{n})}|f(y)|dm(y)\\
&+\bigg(\frac{1}{m(B(x,\delta^{n}))}-\frac{1}{m(B(x,\delta^{n+1}))}\bigg)\int_{B(x,\delta^{n+1})}|f(y)|dm(y)\\
&\le \frac{2}{m(B(x,\delta^{n}))}\int_{B(x,\delta^{n+1})}|f(y)|dm(y).\\
\end{split}
\end{equation}
{By~\eqref{int} and Proposition~\ref{aver}, the above discussions imply}
\begin{lemma}\label{controlled by maximal operator}
Let $p\in[1,\infty]$. There exists a constant $C_4=2(K+1)D^{1/p}\delta^\epsilon$ such that
\begin{equation*}
\sup_{{n\ge n_{r_0}}}\|S_n\|_{L^p(G,m)\rightarrow L^p(G,m)}\le D^{1/p}+1,~\sup_{n\ge n_{r_0}}\|SV_n\|_{L^p(G,m)\rightarrow L^p(G,m)}\le C_4.
\end{equation*}
\end{lemma}
Finally, we state the following version of the almost orthogonality principle; see e.g.~\cite{JKRW03} for a proof.
\begin{lem}\label{orthogonality principle}
Let $\{T_n\}_{n\in\mathbb{N}}$ be a sequence of sub-linear operators from $L^2$ to $L^2$ on some $\sigma$-finite measure space. Let $\{u_n\}_{n\in\mathbb{Z}}$ and $\{v_n\}_{n\in\mathbb{Z}}$ be two sequences of $L^2$ functions, and let $N$ be a positive integer. Assume that for $n\ge N$ there exists a sequence of positive constants $\{a(k)\}_{k\in\mathbb{Z}}$ with $w=\sum_{k\in\mathbb{Z}}a(k)<\infty$ such that
\begin{equation}\label{condition of orth-prin2}
\|T_n(u_{k+n})\|_{L^2}\le a(k)\|v_{k+n}\|_{L^2},
\end{equation}
then
\begin{equation*}
\sum_{n\ge N}\|\sup_{j,m}|T_n\big(\sum_{j\le k\le m}u_{k+n}\big)|\|^2_{L^2}\le w^2\sum_{n\in\mathbb{Z}}\|v_{n}\|^2_{L^2}.
\end{equation*}
Furthermore, if $T_*:=\max_{0<n\le N}\|T_n\|_{L^2\rightarrow L^2}<\infty$, $T_n$ is strongly continuous for every $n\in\mathbb{N}$, $f=\sum_{n\in\mathbb{Z}}u_n$
and $\sum_{n\in\mathbb{Z}}\|v_n\|^2_{L^2}\le C\|f\|^2_{L^2} $, then we have
\begin{equation*}
\sum_{n\in\mathbb{N}} \|T_nf\|^2_{L^2}\le(Cw^2+NT_*^2)\|f\|^2_{L^2}.
\end{equation*}
\end{lem}
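For the reader's convenience, we sketch how the first conclusion follows from~\eqref{condition of orth-prin2} (a sketch only; the full argument can be found in \cite{JKRW03}). By sublinearity,
\begin{equation*}
\Big\|\sup_{j,m}\big|T_n\big(\sum_{j\le k\le m}u_{k+n}\big)\big|\Big\|_{L^2}\le\sum_{k\in\mathbb{Z}}\|T_n(u_{k+n})\|_{L^2}\le\sum_{k\in\mathbb{Z}}a(k)\|v_{k+n}\|_{L^2},
\end{equation*}
and the Cauchy-Schwarz inequality with weights $a(k)$ yields
\begin{equation*}
\Big(\sum_{k\in\mathbb{Z}}a(k)\|v_{k+n}\|_{L^2}\Big)^2\le w\sum_{k\in\mathbb{Z}}a(k)\|v_{k+n}\|^2_{L^2};
\end{equation*}
summing over $n\ge N$ and interchanging the order of summation gives the bound $w^2\sum_{n\in\mathbb{Z}}\|v_n\|^2_{L^2}$.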
\section{Strong type $(2,2)$ estimates}\label{ST3}
In this section, we prove that the square function $S(f)$ and the short variation operator $SV(f)$ are of strong type $(2,2)$. We begin with the square function $S(f)$.
\begin{proof}[Proof of \eqref{strong-type inequalities of square function} in the case $p=2$]
Fix $f\in L^2(G,m)$. Recall that
$$S(f)=\bigg(\sum_{n>n_{r_0}}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}.$$
Set $n_2=\max\{n_0,n_1,n_{r_0}\}$. By Lemma~\ref{controlled by maximal operator}, we first have
\begin{equation*}
\bigg\|\bigg(\sum_{n_{r_0}<n\le n_2}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}\bigg\|_{L^2}\le 2(n_2-n_{r_0})\|f\|_{L^2}.
\end{equation*}
Note that $f=\sum_{n\in\mathbb{Z}}\mathbb{D}_{n}f$. Set $T_n=A^\prime_{\delta^n}-\mathbb{E}_n$, $u_n=v_n=\mathbb{D}_{n}f$, $N=n_2$ in Lemma~\ref{orthogonality principle}; it suffices for our purposes to prove that for every $n>n_2$ and $k\in\mathbb{Z}$, there exists a sequence of positive numbers $\{a(k)\}_{k\in\mathbb{Z}}$ with $w=\sum_{k\in\mathbb{Z}}a(k)<\infty$ such that
\begin{equation}\label{L2-estimate of square function}
\|A^\prime_{\delta^n}\mathbb{D}_{n+k}f-\mathbb{E}_n\mathbb{D}_{n+k}f\|_{L^2}\le a(k)\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{equation}
In order to achieve our goal, we first set
\begin{equation}\label{k2}
k_2=\max\{L_0+1,|k_1|,\min\{k\in\mathbb{N}:a_0\delta^{k-1}>1\}\},
\end{equation}
and then split into the three cases $-k_2\le k\leq k_2$, $k>k_2$ and $k<-k_2$.\\
\noindent{\bf{Case $-k_2\le k\le k_2$}}. Applying Lemma~\ref{controlled by maximal operator} to the function $\mathbb{D}_{n+k}f$, we obtain
\begin{equation*}
\begin{split}
\|A^\prime_{\delta^n}\mathbb{D}_{n+k}f-\mathbb{E}_n\mathbb{D}_{n+k}f\|_{L^2}\le {(D^{1/2}+1)}\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{split}
\end{equation*}
\noindent{\bf{Case $k>k_2$}}. Since $\mathbb{E}_n\mathbb{D}_{n+k}f=\mathbb{D}_{n+k}f$, we write
\begin{equation*}
\begin{split}
\|A^\prime_{\delta^n}\mathbb{D}_{n+k}f-\mathbb{E}_n\mathbb{D}_{n+k}f\|^2_{L^2(G,m)}
=\sum_\alpha\int_{Q_\alpha^{n+k-1}}|A^\prime_{\delta^n}\mathbb{D}_{n+k}f(x)-\mathbb{D}_{n+k}f(x)|^2dm(x).
\end{split}
\end{equation*}
Fix a `dyadic cube' $Q^{n+k-1}_\alpha$. Note that $\mathbb{D}_{n+k}f$ is constant on $Q^{n+k-1}_\alpha$, and so on $Q^{n+k-1}_\alpha$ we have $A^\prime_{\delta^n}\mathbb{D}_{n+k}f(x)-\mathbb{D}_{n+k}f(x)\neq 0$ only if $x$ belongs to the set $\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha)$. Applying Lemma~\ref{measure estimate of cube}, we have
\begin{equation}\label{ball}
m(\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha))\le C_2\delta^{(1-k)\eta} m(Q^{n+k-1}_\alpha).
\end{equation
For $x\in \mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha)$, define $I_{x}=\{\beta:B(x,\delta^n)\cap Q_{\beta}^{n+k-1}\neq\emptyset\}$, and set $I_{\alpha}=\cup_{x\in\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha)}I_x$. {Fix $\beta\in I_{\alpha}$; then there exist a point $x\in\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha)$ and a point $y_0\in Q_{\beta}^{n+k-1}$ with $y_0\in B(x,\delta^n)$. Then for every $y\in Q_{\beta}^{n+k-1}$, by Proposition~\ref{dyadic cube}(iv), we have
\begin{equation*}
d(y, z_{\alpha}^{n+k-1})\le d(y,y_0)+d(y_0, x)+d(x,z_{\alpha}^{n+k-1})\le 2C_1\delta^{n+k-1}+\delta^n.
\end{equation*}
It follows that for every $\beta\in I_{\alpha}$, $Q_\beta^{n+k-1}\subseteq B(z_{\alpha}^{n+k-1},2C_1\delta^{n+k-1}+\delta^n)$. By Proposition~\ref{dyadic cube}(ii), the `dyadic cubes' $Q_\beta^{n+k-1}$ are disjoint, hence $\cup_{\beta\in I_{\alpha}}Q_\beta^{n+k-1}\subseteq B(z_\alpha^{n+k-1},2C_1\delta^{n+k-1}+\delta^n)$. On the other hand, by Proposition~\ref{geometry-doubling}, $B(z_{\alpha}^{n+k-1},2C_1\delta^{n+k-1}+\delta^n)$ is covered by at most $D^{\log_2[(2C_1+1)/a_0]+1}$ balls of radius $a_0\delta^{n+k-1}$. Since, by Proposition~\ref{dyadic cube}(iv), each $Q_{\beta}^{n+k-1}$ contains a ball $B(z^{n+k-1}_\beta,a_0\delta^{n+k-1})$, we conclude
\begin{equation}\label{measure}
\#\{I_{\alpha}\}\le D^{\log_2[(2C_1+1)/a_0]+1}.
\end{equation}}
{Here and} in what follows, $\#\{A\}$ stands for the cardinality of the set $A$.
Set
$$m_{\alpha}=\max_{\beta\in I_{\alpha}}|\mathbb{D}_{n+k}f(z_\beta^{n+k-1})|,$$
where $z_\beta^{n+k-1}$ is the point associated with the corresponding `dyadic cube' $Q_\beta^{n+k-1}$. {Since $\mathbb{D}_{n+k}f$ is constant on each $Q_\beta^{n+k-1}$, it follows that
\begin{equation}\label{maximal}
m_{\alpha}^2\le\sum_{\beta\in I_{\alpha}}\frac{1}{m(Q_\beta^{n+k-1})}\int_{Q_\beta^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x).
\end{equation}}
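In detail, if $\beta_0\in I_\alpha$ attains the maximum defining $m_\alpha$, then, since $\mathbb{D}_{n+k}f$ is constant on $Q_{\beta_0}^{n+k-1}$,
\begin{equation*}
m_{\alpha}^2=|\mathbb{D}_{n+k}f(z_{\beta_0}^{n+k-1})|^2=\frac{1}{m(Q_{\beta_0}^{n+k-1})}\int_{Q_{\beta_0}^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x),
\end{equation*}
which is bounded by the sum over all $\beta\in I_\alpha$ in~\eqref{maximal}.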
By the above inequalities, Proposition~\ref{dyadic cube}(iv),~\eqref{ball} and \eqref{int}, we conclude
\begin{equation*}
\begin{split}
&\sum_\alpha\int_{Q_\alpha^{n+k-1}}|A^\prime_{\delta^n}\mathbb{D}_{n+k}f(x)
-\mathbb{D}_{n+k}f(x)|^2dm(x)\\
&= \sum_\alpha \int_{\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha)}|A^\prime_{\delta^n}\mathbb{D}_{n+k}f(x)
-\mathbb{D}_{n+k}f(x)|^2dm(x)\\
&\le 2\sum_\alpha m_{\alpha}^2m(\mathcal{H}(B_{\delta^n},Q^{n+k-1}_\alpha))\\
&\le 2C_2\delta^{(1-k)\eta}\sum_\alpha m_{\alpha}^2m(Q^{n+k-1}_\alpha)\\
&\le {2C_2\delta^{(1-k)\eta}\sum_\alpha\sum_{\beta\in I_{\alpha}}\frac{m(Q^{n+k-1}_\alpha)}{m(Q_\beta^{n+k-1})}\int_{Q_\beta^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x)}\\
&\le { 2C_2\delta^{(1-k)\eta}\sum_\alpha\sum_{\beta\in I_{\alpha}}\frac{m(B(z_\beta^{n+k-1},2C_1\delta^{n+k-1}+\delta^n))}{m(B(z_\beta^{n+k-1},a_0\delta^{n+k-1}))}
\int_{Q_\beta^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x)}\\
&\le{ 2C_2(K+1)\big({(2C_1+1)}/{a_0}\big)^\epsilon D^{\log_2[(2C_1+1)/a_0]+1}\delta^{(1-k)\eta}\int_{G}|\mathbb{D}_{n+k}f(x)|^2dm(x)}.
\end{split}
\end{equation*}
\noindent{\bf{Case}} $k<-k_2$. Since $\mathbb{E}_{n}\mathbb{D}_{n+k}f=0$ and
$\int_{Q^{k+n}_\alpha}\mathbb{D}_{n+k}f=0$ for every $Q^{k+n}_\alpha\in\mathcal{F}_{k+n}$, we have, for any $x\in G$,
\begin{align*}
{|A^\prime_{\delta^n}\mathbb{D}_{n+k}f(x)|}&=|\frac{1}{m(B(x,\delta^n))}\sum_{\alpha}\int_{Q^{k+n}_\alpha\cap B(x,\delta^n)}\mathbb{D}_{n+k}f(y)dm(y)|\\
&=|\frac{1}{m(B(x,\delta^n))}\int_{\mathcal{I}(B(x,\delta^n),n+k)}\mathbb{D}_{n+k}f(y)dm(y)|\\
&\le M_{n+k}\mathbb{D}_{n+k}f(x).
\end{align*}
From this and Lemma~\ref{lem:Akn}, we obtain
\begin{equation*}
\|A^\prime_{\delta^n}\mathbb{D}_{n+k}f\|_{L^2}\le\|M_{n+k}\mathbb{D}_{n+k}f\|_{L^2}\le D_2 \delta^{k\epsilon/2}\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{equation*}
Therefore, for every $n>n_2$, we may take{
\begin{equation*}
a(k)= \left\{
\begin{array}{ll}
\bigg(2C_2(K+1)\big({(2C_1+1)}/{a_0}\big)^\epsilon D^{\log_2[(2C_1+1)/a_0]+1}\delta^{\eta}\bigg)^{1/2}\delta^{-\eta k/2}, & \hbox{$k>k_2$,} \\
D^{1/2}+1, & \hbox{$-k_2\le k\le k_2$,} \\
D_2\delta^{\epsilon k/2}, & \hbox{$k<-k_2$,}
\end{array}
\right.
\end{equation*}}%
and $\sum_{k\in\mathbb{Z}}a(k)<\infty$, since both tails are geometric series (recall that $\delta>1$), which completes the proof.
\end{proof}
The proof of strong type $(2,2)$ estimate for $SV(f)=\bigg(\sum_{n\ge n_{r_0}}|SV_n(f)|^2\bigg)^{1/2}$ is similar in spirit to that of the square function $S(f)$.
\begin{proof}[Proof of \eqref{strong-type inequalities of short variation} in the case $p=2$.]
First by Lemma~\ref{controlled by maximal operator}, we have
\begin{equation*}
\bigg\|\bigg(\sum_{n_{r_0}\le n\le n_2}|SV_n(f)|^2\bigg)^{1/2}\bigg\|_{L^2}\le 2(n_2-n_{r_0}+1)C_4\|f\|_{L^2}.
\end{equation*}
Set $T_n=SV_n$, $u_n=v_n=\mathbb{D}_{n}f$, $N=n_2$ in Lemma~\ref{orthogonality principle}. For every $n>n_2$, we split into the three cases $-k_2\le k\leq k_2$, $k> k_2$ and $k<-k_2$, where $k_2$ is defined in~\eqref{k2}.\\
\noindent{\bf{Case}} $-k_2\le k\leq k_2$. Applying Lemma~\ref{controlled by maximal operator} to the function $\mathbb{D}_{n+k}f$ again, we have
\begin{equation*}
\|V_2(A^\prime_{r}\mathbb{D}_{n+k}f:r\in[\delta^{n},\delta^{n+1}))\|_{L^2}\le C_4\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{equation*}
\noindent{\bf{Case}} $k>k_2$. We write
\begin{equation*}
\begin{split}
\|&V_2(A^\prime_{r}\mathbb{D}_{n+k}f:r\in[\delta^{n},\delta^{n+1}))\|^2_{L^2(G,m)}\\
&=\sum_\alpha\int_{Q_\alpha^{n+k-1}}|V_2(A^\prime_{r}\mathbb{D}_{n+k}f(x):r\in[\delta^{n},\delta^{n+1}))|^2dm(x).
\end{split}
\end{equation*}
Fix a `dyadic cube' $Q_\alpha^{n+k-1}$. Recall that $\mathbb{D}_{n+k}f$ is constant on $Q_\alpha^{n+k-1}$; it follows that for every $\delta^n\le r_i<r_{i+1}<\delta^{n+1}$ and $x\in Q_\alpha^{n+k-1}$, $|A^\prime_{r_{i+1}}\mathbb{D}_{n+k}f(x)-A^\prime_{r_i}\mathbb{D}_{n+k}f(x)|\neq 0$ only if at least one of the balls $B(x,r_i)$ and $B(x,r_{i+1})$ intersects $(Q^{n+k-1}_\alpha)^{c}$. {So $V_2(A^\prime_{r}\mathbb{D}_{n+k}f:r\in[\delta^{n},\delta^{n+1}))$} is supported on $\mathcal{H}(B_{\delta^{n+1}},Q^{n+k-1}_\alpha)$.
By Lemma~\ref{measure estimate of cube}, we have
\begin{equation}\label{ball2}
m(\mathcal{H}(B_{\delta^{n+1}},Q^{n+k-1}_\alpha))\le C_2\delta^{(2- k)\eta} m(Q^{n+k-1}_\alpha).
\end{equation}
On the other hand, by~\eqref{controll the short variation}, we know that $V_2(A^\prime_{r}\mathbb{D}_{n+k}f(x):r\in[\delta^{n},\delta^{n+1}))$ is controlled by the sum of $SV_I(\mathbb{D}_{n+k}f)(x)$ and $SV_{II}(\mathbb{D}_{n+k}f)(x)$, where
\begin{equation*}
\begin{split}
SV_I(\mathbb{D}_{n+k}f)(x)&=\bigg(\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J\frac{1}{m(B(x,r_i))^2}\big|\int_{B(x,r_i)\setminus B(x,r_{i-1})}\mathbb{D}_{n+k}f(y)dm(y)\big|^2\bigg)^{1/2}
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&SV_{II}(\mathbb{D}_{n+k}f)(x)=\\
&\bigg(\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\big(\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}\big)\int_{B(x,r_{i-1})}\mathbb{D}_{n+k}f(y)dm(y)\big|^2\bigg)^{1/2}.
\end{split}
\end{equation*}
Let $x\in \mathcal{H}(B_{\delta^{n+1}},Q^{n+k-1}_\alpha)$ and set $I_{x}=\{\beta:B(x,\delta^{n+1})\cap Q_{\beta}^{n+k-1}\neq\emptyset\}$. Define $I_\alpha=\cup_{x\in\mathcal{H}(B_{\delta^{n+1}},Q^{n+k-1}_\alpha)}I_x$. {As in}~\eqref{measure}, we have {$\#\{I_{\alpha}\}\le D^{\log_2[(2C_1+1)/a_0]+1}$}. Write
$$m_{\alpha}=\max_{\beta\in I_\alpha}|\mathbb{D}_{n+k}f(z_\beta^{n+k-1})|.$$
{It follows from~\eqref{int} that
\begin{equation*}
\begin{split}
SV_I&(\mathbb{D}_{n+k}f)^2(x)\\
&\le \frac{m(B(x,\delta^{n+1})\setminus B(x,\delta^{n}))}{m(B(x,\delta^{n}))^2} \sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J\int_{B(x,r_i)\setminus B(x,r_{i-1})}|\mathbb{D}_{n+k}f(y)|^2dm(y)\\
&\le \bigg(\frac{m(B(x,\delta^{n+1})\setminus B(x,\delta^{n}))}{m(B(x,\delta^{n}))}\bigg)^2m_{\alpha}^2\le (K+1)^2\delta^{2\epsilon} m_{\alpha}^2
\end{split}
\end{equation*}}
and
\begin{equation*}
\begin{split}
&SV_{II}(\mathbb{D}_{n+k}f)^2(x)\\
&\le m_{\alpha}^2m(B(x,\delta^{n+1}))^2 \bigg(\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
|\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}|\bigg)^2\\
&\le m_{\alpha}^2m(B(x,\delta^{n+1}))^2\bigg(\frac{1}{m(B(x,\delta^{n}))}-\frac{1}{m(B(x,\delta^{n+1}))}\bigg)^2\le (K+1)^2\delta^{2\epsilon} m_{\alpha}^2.
\end{split}
\end{equation*}
{Combining the above two inequalities with~\eqref{maximal} and~\eqref{ball2}}, we have
\begin{equation*}
\begin{split}
&\int_{Q_\alpha^{n+k-1}}|V_2(A^\prime_{r}\mathbb{D}_{n+k}f(x):r\in[\delta^{n},\delta^{n+1}))|^2dm(x)\\
&=\int_{\mathcal{H}(B_{\delta^{n+1}},Q^{n+k-1}_\alpha)}|V_2(A^\prime_{r}\mathbb{D}_{n+k}f(x):r\in[\delta^{n},\delta^{n+1}))|^2dm(x)\\
&\le 2\int_{\mathcal{H}(B_{\delta^{n+1}},Q_\alpha^{n+k-1})}SV_I(\mathbb{D}_{n+k}f)^2(x)+SV_{II}(\mathbb{D}_{n+k}f)^2(x)dm(x)\\
&\le 4(K+1)^2\delta^{2\epsilon} m(\mathcal{H}(B_{\delta^{n+1}},Q_\alpha^{n+k-1}))m_{\alpha}^2\\
&\le{4C_2(K+1)^2\delta^{2\epsilon}\delta^{(2-k)\eta}\sum_{\beta\in I_{\alpha}}\frac{m(Q_\alpha^{n+k-1})}{m(Q_\beta^{n+k-1})}\int_{Q_{\beta}^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x)}\\
&\le 4C_2(K+1)^2\big((2C_1+1)/a_0\big)^\epsilon \delta^{2\epsilon}\delta^{(2-k)\eta}\sum_{\beta\in I_{\alpha}}\int_{Q_{\beta}^{n+k-1}}|\mathbb{D}_{n+k}f(x)|^2dm(x),
\end{split}
\end{equation*}
and summing over all $\alpha$ yields
\begin{align*}
&\|V_2(A^\prime_{r}\mathbb{D}_{n+k}f:r\in[\delta^{n},\delta^{n+1}))\|_{L^2}\\
&\le \bigg(4C_2(K+1)^2\big((2C_1+1)/a_0\big)^\epsilon D^{\log_2[(2C_1+1)/a_0]+1}\delta^{2\epsilon}\delta^{(2-k)\eta}\bigg)^{1/2}\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{align*}
\noindent{\bf{Case}} $k<-k_2$. Let $x\in G$. Note that for every $\delta^n\le r_{i-1}<r_i\le\delta^{n+1}$,
\begin{equation*}
\begin{split}
&\int_{B(x,r_i)\setminus B(x,r_{i-1})}\mathbb{D}_{n+k}f(y)dm(y)=\int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),n+k)}\mathbb{D}_{n+k}f(y)dm(y),\\
&\int_{B(x,r_{i-1})}\mathbb{D}_{n+k}f(y)dm(y)=\int_{\mathcal{I}(B(x,r_{i-1}),n+k)}\mathbb{D}_{n+k}f(y)dm(y),
\end{split}
\end{equation*}
so $SV_I(\mathbb{D}_{n+k}f)(x)= M^S_{n+k}(\mathbb{D}_{n+k}f)(x)$. Moreover, using the estimate~\eqref{int} and the fact that the $\ell^1$-norm dominates the $\ell^2$-norm, we obtain
\begin{align*}
&SV_{II}(\mathbb{D}_{n+k}f)(x)\\
&\le\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\big(\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}\big)\int_{\mathcal{I}(B(x,r_{i-1}),n+k)}\mathbb{D}_{n+k}f(y)dm(y)\big|\\
&\le (K+1)\delta^\epsilon \sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\frac{m(B(x,\delta^{n+1}))}{m(B(x,r_{i-1}))}-\frac{m(B(x,\delta^{n+1}))}{m(B(x,r_i))}\big|{M}_{n+k}(\mathbb{D}_{n+k}f)(x)\\
& \le (K+1)^2\delta^{2\epsilon}{M}_{n+k}(\mathbb{D}_{n+k}f)(x).
\end{align*}
Using Lemma~\ref{lem:Mkn} and Lemma~\ref{lem:Akn}, respectively, we have
\begin{align*}
\|&V_2(A^\prime_{r}\mathbb{D}_{n+k}f:r\in[\delta^{n},\delta^{n+1}))\|_{L^2}\le\|SV_I(\mathbb{D}_{n+k}f)\|_{L^2}+\|SV_{II}(\mathbb{D}_{n+k}f)\|_{L^2}\\
&\le \|M^S_{n+k}(\mathbb{D}_{n+k}f)\|_{L^2}+(K+1)^2\delta^{2\epsilon}\|{M}_{n+k}(\mathbb{D}_{n+k}f)\|_{L^2}\\
&\le(C_3+ (K+1)^2\delta^{2\epsilon} D_2)\delta^{\epsilon k/2}\|\mathbb{D}_{n+k}f\|_{L^2}.
\end{align*}
Hence, for every $n>n_2$, we determine
\begin{equation*}
a(k)= \left\{
\begin{array}{ll}
{\bigg(4C_2(K+1)^2\big((2C_1+1)/a_0\big)^\epsilon D^{\log_2[(2C_1+1)/a_0]+1}\delta^{2(\epsilon+\eta)}\bigg)^{1/2}\delta^{-k\eta/2}}, & \hbox{$k>k_2$,} \\
C_4, & \hbox{$-k_2\le k\le k_2$,} \\
(C_3+ (K+1)^2\delta^{2\epsilon} D_2)\delta^{\epsilon k/2}, & \hbox{$k<-k_2$,}
\end{array}
\right.
\end{equation*}
and $\sum_k a(k)<\infty$, which completes the proof.%
\end{proof}
\section{Weak type $(1,1)$ estimates}\label{ST4}
In this section, we prove that the square function $f\rightarrow S(f)$ and the short variation operator $f\rightarrow SV(f)$ are of weak type $(1,1)$.
Under the conditions \eqref{decay property} and \eqref{geo-doubling}, the space $(G,d,m)$ might not be a doubling measure space, and the usual Calder\'on-Zygmund decomposition no longer works. We need the following version of the Calder\'{o}n-Zygmund decomposition, which is motivated by Gundy's decomposition from martingale theory. This decomposition was constructed in~\cite{Lop-Mar-Pra14} for non-doubling measures on ${\mathbb R}^d$, and the construction works without alteration for the space $(G,d,m)$.
Let $f:G\rightarrow \mathbb{C}$ be a locally integrable function. The `dyadic' maximal function is defined by
\begin{equation*}
M_df(x)=\sup_{k\in\mathbb{Z}}\mathbb{E}_k(|f|)(x).
\end{equation*}
Given a `dyadic cube' $Q_\alpha^k$, let $\widetilde{Q}_\alpha^k=\{y\in G:d(y,z_\alpha^k)\le 3C_1\delta^{k+1}\}$ and let $\widehat{Q}_\alpha^k$ be its parent. We denote by $\widehat{z}_\alpha^k$ the point corresponding to $\widehat{Q}_\alpha^k$. Set $\langle f\rangle_{Q_\alpha^k}=\frac{1}{m(Q_\alpha^k)}\int_{Q_\alpha^k}f$.
\begin{lem}\label{C-Z decomposition}
Let $f\in L^1(G,m)$ and $\gamma>0$, and let
\begin{equation*}
\{x\in G:M_df(x)>\gamma\}=\cup_{k\in\mathbb{Z}}\Omega_k,
\end{equation*}
where
\begin{equation*}
\Omega_k=\{x\in G:\mathbb{E}_k(|f|)(x)>\gamma, \mathbb{E}_j(|f|)(x)\le\gamma, j>k\}.
\end{equation*}
Then we decompose
\begin{equation*}
\Omega=\cup_{k\in\mathbb{Z}}\Omega_k=\cup_{k\in\mathbb{Z}}(\cup_{\alpha\in\Lambda_k} Q_\alpha^k)
\end{equation*}
into a disjoint union of maximal `dyadic cubes' $Q_\alpha^k$, where $\{\Lambda_k\}_k$ stands for the sequence of the corresponding index sets. Let
\begin{equation*}
\begin{split}
&g(x)=f\mathds{1}_{\Omega^c}+\sum_{k}\sum_{\alpha\in\Lambda_k}\langle f \rangle_{\widehat{Q}_\alpha^k}\mathds{1}_{Q_\alpha^k}(x)+\sum_{k}\sum_{\alpha\in\Lambda_k}\big(\langle f\rangle_{Q_\alpha^k}-\langle f\rangle_{\widehat{Q}_\alpha^k}\big)\frac{m(Q_\alpha^k)}{m(\widehat{Q}_\alpha^k)}\mathds{1}_{\widehat{Q}_\alpha^k}(x),\\
&b(x)=\sum_{k}b_k=\sum_{k}\sum_{\alpha\in\Lambda_k}b_{\alpha}^{k}(x)=\sum_{k}\sum_{\alpha\in\Lambda_k}\big(f(x)-\langle f \rangle_{{Q}_\alpha^k}\big)\mathds{1}_{Q_\alpha^k}(x),\\
&\xi(x)=\sum_{k}\xi_k=\sum_{k}\sum_{\alpha\in\Lambda_k}\xi_{\alpha}^{k}(x)=\sum_{k}\sum_{\alpha\in\Lambda_k}\big(\langle f\rangle_{Q_\alpha^k}-\langle f\rangle_{\widehat{Q}_\alpha^k}\big)\bigg(\mathds{1}_{Q_\alpha^k}(x)-\frac{m(Q_\alpha^k)}{m(\widehat{Q}_\alpha^k)}\mathds{1}_{\widehat{Q}_\alpha^k}(x)\bigg).
\end{split}
\end{equation*}
Then
\begin{enumerate}[\noindent]
\item\emph{(i)}~$f=g+b+\xi$;
\item\emph{(ii)}~for $p\in[1,\infty)$ and $m=[p]+1$, the function $g$ satisfies
$$\|g\|^p_{L^p}\le 3\cdot2^p(m!)^{\frac{p-1}{m-1}}\gamma^{p-1}\|f\|_{L^1};$$
\item \emph{(iii)}~the function $b$ satisfies
$$\int_G b_\alpha^k=0,~\|b\|_{L^1}=\sum_{k}\sum_{\alpha\in\Lambda_k}\|b_\alpha^k\|_{L^1}\le 2\|f\|_{L^1};$$
\item\emph{(iv)}~the function $\xi$ satisfies
$$\int_G \xi_\alpha^k=0,~\|\xi\|_{L^1}=\sum_{k}\sum_{\alpha\in\Lambda_k}\|\xi_\alpha^k\|_{L^1}\le 4\|f\|_{L^1}.$$
\end{enumerate}
\end{lem}
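For the reader's convenience, we verify part (i). The third sum in the definition of $g$ is exactly cancelled by the second part of $\xi$, so that, since the cubes $Q_\alpha^k$ are pairwise disjoint and cover $\Omega$,
\begin{equation*}
g+b+\xi=f\mathds{1}_{\Omega^c}+\sum_{k}\sum_{\alpha\in\Lambda_k}\Big(\langle f\rangle_{\widehat{Q}_\alpha^k}+\big(f-\langle f\rangle_{Q_\alpha^k}\big)+\big(\langle f\rangle_{Q_\alpha^k}-\langle f\rangle_{\widehat{Q}_\alpha^k}\big)\Big)\mathds{1}_{Q_\alpha^k}=f\mathds{1}_{\Omega^c}+f\mathds{1}_{\Omega}=f.
\end{equation*}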
With the above decomposition, the almost orthogonality principle, which was exploited to good effect in~\cite{JKRW98} and~\cite{GXHTM17}, does not seem to work. We therefore provide another method to achieve our goal. A new input is the observation that the operators $S$ and $SV$ are essentially dyadic operators; in particular, they are perfect dyadic at small scales---small cubes {\it versus} large balls.
We first deal with the operator $S$.
\begin{proof}[Proof of \eqref{weak-type inequalities of square function}]
Fix $f\in L^1(G,m)$ and $\gamma>0$. Keeping the notation of Lemma~\ref{C-Z decomposition}, we write $f=g+b+\xi$. We first have
\begin{align*}
m\big(\{x\in G:S(f)(x)>\gamma\}\big)&\le m\big(\{x\in G:S(g)(x)>{\gamma}/{3}\}\big)\\
&+m\big(\{x\in G:S(b)(x)>{\gamma}/{3}\}\big)+m\big(\{x\in G:S(\xi)(x)>{\gamma}/{3}\}\big).
\end{align*}
By the $L^2$-boundedness of the operator $S$ and Lemma~\ref{C-Z decomposition}(ii), the first term on the right side is controlled by
\begin{equation*}
\begin{split}
m\big(\{x\in G:S(g)(x)>{\gamma}/{3}\}\big)&\le \frac{9}{\gamma^2}\int_G|S(g)(x)|^2dm(x)\le\frac{9c_2}{\gamma^2}\int_G|g(x)|^2dm(x)\\
&\le \frac{108\sqrt{6} c_2}{\gamma}\int_G|f(x)|dm(x).
\end{split}
\end{equation*}
It remains to handle the other two terms. Set
\begin{equation*}
k_3=\min\{k:a_0\delta^k>r_0\},~\widetilde{\Omega}=\big(\cup_{k<k_3}\Omega_k\big)\cup\big(\cup_{k\ge k_3}\widetilde{\Omega}_k\big),
\end{equation*}
where $\widetilde{\Omega}_k=\cup_{\alpha\in\Lambda_k} \widetilde{Q}_\alpha^k$. By~\eqref{int} and the fact that $M_d$ is of weak type $(1,1)$, $m(\widetilde{\Omega})$ is controlled by
\begin{equation*}
\begin{split}
m(\widetilde{\Omega})&=\sum_{k<k_3}m(\Omega_k)+\sum_{k\ge k_3}m(\widetilde{\Omega}_k)\le \sum_{k<k_3}\sum_{\alpha\in\Lambda_k}m(Q_\alpha^k)+\sum_{k\ge k_3}\sum_{\alpha\in\Lambda_k}m(\widetilde{Q}_\alpha^k)\\
&\le\sum_{k<k_3}\sum_{\alpha\in\Lambda_k}m(Q_\alpha^k)+\sum_{k\ge k_3}\sum_{\alpha\in\Lambda_k}\frac{m(\widetilde{Q}_\alpha^k)}{m(Q_\alpha^k)}m(Q_\alpha^k)\\
&\le \sum_{k<k_3}\sum_{\alpha\in\Lambda_k}m(Q_\alpha^k)+(K+1)(3C_1\delta/a_0)^{\epsilon}\bigg(\sum_{k\ge k_3}\sum_{\alpha\in\Lambda_k}m(Q_\alpha^k)\bigg)\\
&\le \frac{(K+1)(3C_1\delta/a_0)^{\epsilon}}{\gamma}\|f\|_{L^1}.
\end{split}
\end{equation*}
We now focus on the term $m\big(\{x\in G\setminus\widetilde{\Omega}:S(\xi)(x)>{\gamma}/{3}\}\big)$. Recall that $n_2=\max\{n_0,n_1,n_{r_0}\}$; we split
\begin{equation*}
\begin{split}
m\big(\{x\in G\setminus\widetilde{\Omega}:S(\xi)(x)>{\gamma}/{3}\}\big)&\le m\bigg(\{x\in G\setminus\widetilde{\Omega}:\big(\sum_{n_{r_0}<n\le n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\big)^{1/2}>{\gamma}/{6}\}\bigg)\\
&+ m\bigg(\{x\in G\setminus\widetilde{\Omega}:\big(\sum_{n>n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\big)^{1/2}>{\gamma}/{6}\}\bigg).
\end{split}
\end{equation*}
For the first part of the right side of the above inequality, using Lemma~\ref{controlled by maximal operator} and Lemma~\ref{C-Z decomposition}(iv), we have
\begin{equation*}
\begin{split}
m&\bigg(\{x\in G\setminus\widetilde{\Omega}:\big(\sum_{n_{r_0}<n\le n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\big)^{1/2}>{\gamma}/{6}\}\bigg)\\
&\le\sum_{n_{r_0}<n\le n_2}\frac{6}{\gamma} \int_{G\setminus\widetilde{\Omega}}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|dm(x)\\
&\le\frac{12(n_2-n_{r_0})}{\gamma}\|\xi\|_{L^1(G,m)}\le\frac{48(n_2-n_{r_0})}{\gamma}\|f\|_{L^1(G,m)}.
\end{split}
\end{equation*}
For the second part, we first have
\begin{equation}\label{bad-function}
\begin{split}
&m\bigg(\{x\in G\setminus\widetilde{\Omega}:\big(\sum_{n>n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\big)^{1/2}>{\gamma}/{6}\}\bigg)\\
&\le\frac{6}{\gamma}\int_{ G\setminus\widetilde{\Omega}}\bigg(\sum_{n>n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\bigg)^{1/2}dm(x)\\
&\le \frac{6}{\gamma}\int_{ G\setminus\widetilde{\Omega}}\sum_{n>n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|dm(x)\\
&\le\frac{6}{\gamma}\sum_{n>n_2}\sum_{k\in\mathbb{Z}}\sum_{\alpha\in\Lambda_{n+k}}\int_{G\setminus {\widetilde{\Omega}}}|A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)-\mathbb{E}_n\xi_\alpha^{n+k}(x)|dm(x).
\end{split}
\end{equation}
We now deal with the integral term $\int_{G\setminus \widetilde{\Omega}}|A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)-\mathbb{E}_n\xi_\alpha^{n+k}(x)|dm(x)$. Let
$$k_4=\max\{k\in\mathbb{Z}:k<0,\,C_1\delta^{k+1}\le 1\},~k_5=\max\{|k_4|,k_3\}.$$
We split $k$ into three cases: $-k_5\le k\le k_5$, $k>k_5$, and $k<-k_5$. We will prove that
\begin{equation*}
\int_{G\setminus \widetilde{\Omega}}|A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)-\mathbb{E}_{n}\xi_\alpha^{n+k}(x)|dm(x)\le a(k)\|\xi_\alpha^{n+k}\|_{L^1(G,m)},
\end{equation*}
where
\begin{equation}\label{bad}
a(k)=
\left\{
\begin{array}{ll}
{(D+1)}, & \hbox{$-k_5\le k\le k_5$;} \\
0, & \hbox{$k>k_5$;} \\
{6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\delta^{\epsilon k}}, & \hbox{$k<-k_5$.}
\end{array}
\right.
\end{equation}
Assume this result momentarily. Combining~\eqref{bad-function} and \eqref{bad} with Lemma~\ref{C-Z decomposition}(iv), one can see immediately that
\begin{align*}
&m\bigg(\{x\in G\setminus\widetilde{\Omega}:\big(\sum_{n>n_2}|A^\prime_{\delta^n}\xi(x)-\mathbb{E}_n\xi(x)|^2\big)^{1/2}>{\gamma}/{6}\}\bigg)\le
\frac{6}{\gamma}\sum_{n>n_2}\sum_{k\in\mathbb{Z}}\sum_{\alpha\in\Lambda_{n+k}}a(k)\|\xi_\alpha^{n+k}\|_{L^1(G,m)}\\
&\le \frac{6}{\gamma}\bigg(\sum_{k\in\mathbb{Z}}a(k)\bigg)\bigg(\sum_{n\in\mathbb{Z}}\sum_{\alpha\in\Lambda_{n}}\|\xi_\alpha^{n}\|_{L^1(G,m)}\bigg)\le
\frac{C_{\epsilon,\delta}}{\gamma}\|f\|_{L^1(G,m)}.
\end{align*}
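For completeness, the finiteness of $\sum_{k\in\mathbb{Z}}a(k)$ used in the second inequality above can be checked directly from~\eqref{bad}: since $\delta>1$,
\begin{equation*}
\sum_{k\in\mathbb{Z}}a(k)=(2k_5+1)(D+1)+6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\sum_{k<-k_5}\delta^{\epsilon k}\le (2k_5+1)(D+1)+\frac{6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\,\delta^{-\epsilon(k_5+1)}}{1-\delta^{-\epsilon}}<\infty.
\end{equation*}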
We now prove~\eqref{bad}.\\
{\noindent \bf{Case}} $-k_5\le k\le k_5$. Using Lemma~\ref{controlled by maximal operator} for the function $\xi_\alpha^{n+k}$, we have
\begin{align*}
\int_{G\setminus\widetilde{\Omega}}|A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)-\mathbb{E}_n\xi_\alpha^{n+k}(x)|dm(x)&\le\int_{G}|A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)
-\mathbb{E}_n\xi_\alpha^{n+k}(x)|dm(x)\\
&\le{(D+1)}\|\xi_{\alpha}^{n+k}\|_{L^1(G,m)}.
\end{align*}
{\noindent\bf{Case}} $k>k_5$. Note that $\xi^{n+k}_\alpha$ is supported on $\widehat{Q}_\alpha^{n+k}$. {Recall that $\widetilde{Q}_\alpha^{n+k}=\{y\in G:d(y,z_\alpha^{n+k})\le 3C_1\delta^{n+k+1}\}$}. Let $y\in\widehat{Q}_\alpha^{n+k}$; then $d(y,z_\alpha^{n+k})\le d(y,\widehat{z}_\alpha^{n+k+1})+d(z_\alpha^{n+k},\widehat{z}_\alpha^{n+k+1})\le 2C_1\delta^{n+k+1}$. This gives $\widehat{Q}_\alpha^{n+k}\subseteq\widetilde{Q}_\alpha^{n+k}$.
Fix $x\in G\setminus \widetilde{\Omega}$. There exists a unique `dyadic cube' $Q_\beta^n$ containing $x$. We claim that
\begin{equation}\label{emptyset}
B(x,\delta^n)\cap\widehat{Q}_\alpha^{n+k}=\emptyset,~Q_\beta^n\cap\widehat{Q}_\alpha^{n+k}=\emptyset.
\end{equation}
Since
$$A^\prime_{\delta^n}\xi_\alpha^{n+k}(x)=\frac{1}{m(B(x,\delta^n))}\int_{B(x,\delta^n)\cap \widehat{Q}_\alpha^{n+k}}\xi_\alpha^{n+k}(y)dm(y)$$
and
$$\mathbb{E}_n\xi_\alpha^{n+k}(x)=\frac{1}{m(Q_\beta^{n})}\int_{Q_\beta^n\cap\widehat{Q}_\alpha^{n+k}}\xi_\alpha^{n+k}(y)dm(y),$$
it follows from the claim that $A^\prime_{\delta^n}\xi_\alpha^{n+k}=\mathbb{E}_n\xi_\alpha^{n+k}=0$.
We now prove the claim. If $Q_\beta^n\cap\widehat{Q}_\alpha^{n+k}\neq\emptyset$, by Proposition~\ref{dyadic cube}(ii), it follows that $x\in Q_\beta^n\subseteq\widehat{Q}_\alpha^{n+k}\subseteq \widetilde{Q}_\alpha^{n+k}$, a contradiction
with $x\in G\setminus \widetilde{\Omega}$. On the other hand, if there exists a point $z\in B(x,\delta^n)\cap \widehat{Q}_\alpha^{n+k}$, then $d(x,\widehat{z}_\alpha^{n+k+1})\le d(x,z)+d(z,\widehat{z}_\alpha^{n+k+1})\le\delta^n+C_1\delta^{n+k+1}< 2C_1\delta^{n+k+1}$, and so it follows that
$$d(x,z_\alpha^{n+k})\le d(x,\widehat{z}_\alpha^{n+k+1})+d(z_\alpha^{n+k},\widehat{z}_\alpha^{n+k+1})<3C_1\delta^{n+k+1},$$
contrary to $x\in G\setminus \widetilde{\Omega}$, and \eqref{emptyset} is proved. This gives the conclusion.\\
{\noindent\bf{Case}} $k<-k_5$. We also have
\begin{equation*}
\mathbb{E}_n\xi_{\alpha}^{n+k}(x)=0,~\forall~x\in G\setminus\widetilde{\Omega}.
\end{equation*}
Indeed, let $x\in Q_\beta^{n}$. If $ Q_\beta^{n}\cap \widehat{Q}_\alpha^{n+k}\neq \emptyset$, then by Proposition~\ref{dyadic cube}(ii), it follows that $\widehat{Q}_\alpha^{n+k}\subseteq Q_\beta^{n}$ and
\begin{equation*}
\mathbb{E}_n\xi_\alpha^{n+k}(x)=\frac{1}{m(Q_\beta^n)}\int_{\widehat{Q}_\alpha^{n+k}}\xi_\alpha^{n+k}(y)dm(y)=0.
\end{equation*}
{ Given a ball $B(x,r)$ and `dyadic cube' $Q_\alpha^k$, we define the set $\mathcal{I}(B(x,r),Q^{k}_\alpha)=\{Q_\alpha^{k}\cap B(x,r): Q_\alpha^{k}\cap\partial B(x,r) \neq\emptyset\}$. }Since $\xi_\alpha^{n+k}$ is supported on $\widehat{Q}_\alpha^{n+k}$ and $\int_{\widehat{Q}_\alpha^{n+k}}\xi_{\alpha}^{n+k}=0$, it follows that {$\forall x\in G$},
\begin{align*}
|A_{\delta^n}\xi_{\alpha}^{n+k}(x)|&=\big|\frac{1}{m(B(x,\delta^n))}\int_{\mathcal{I}(B(x,\delta^n),\widehat{Q}^{n+k}_\alpha)}\xi_{\alpha}^{n+k}(y)dm(y)\big|\\
&\le \frac{1}{m(B(x,\delta^n))}\int_{\delta^n-C_1\delta^{n+k+1}\le d(x,y)\le\delta^n+C_1\delta^{n+k+1}}|\xi_{\alpha}^{n+k}(y)|dm(y).
\end{align*}
Then by the above estimate we obtain
\begin{equation}\label{L1-norm}
\begin{split}
&\int_{G}|A_{\delta^n}\xi_{\alpha}^{n+k}(x)|dm(x)\\
&\le\int_{G}\frac{1}{m(B(x,\delta^n))}\int_{\delta^n-C_1\delta^{n+k+1}\le d(x,y)\le\delta^n+C_1\delta^{n+k+1}}|\xi_{\alpha}^{n+k}(y)|dm(y)dm(x)\\
&\le \int_{G}|\xi_{\alpha}^{n+k}(y)|\cdot\int_{G}\frac{\mathds{1}_{B(y,\delta^n+C_1\delta^{n+k+1})\setminus B(y,\delta^n-C_1\delta^{n+k+1})}(x)}{m(B(x,\delta^n))}dm(x)dm(y)
\end{split}
\end{equation}
Set $A(y)={B(y,\delta^n+C_1\delta^{n+k+1})\setminus B(y,\delta^n-C_1\delta^{n+k+1})}$. {Fix $y\in G$. By~\eqref{int}, we have
\begin{equation*}
\frac{\mathds{1}_{A(y)}(x)}{m(B(x,\delta^n))}\le 2^\epsilon(K+1)\frac{\mathds{1}_{A(y)}(x)}{m(B(x,\delta^{n}+C_1\delta^{n+k+1}))}\mathds{1}_{B(y,\delta^{n}+C_1\delta^{n+k+1})}(x)
\end{equation*}
By the same argument as in the proof of~\cite[Theorem 3.5]{AL19}, we have the following property. For any $0<\varepsilon<1$, there exist $M$ points $\{u_i:1\le i\le M\}$ with $M\le D$ in $B(y,\delta^{n}+C_1\delta^{n+k+1})$ such that $B(y,\delta^n+C_1\delta^{n+k+1})\setminus \bigcup_{i=1}^M B(u_i,\delta^n+C_1\delta^{n+k+1})=\emptyset$. Fix $x\in B(y,\delta^{n}+C_1\delta^{n+k+1})$, and let $j$ be the first index such that
\begin{equation*}
x\in B(u_j,\delta^n+C_1\delta^{n+k+1}),~m(B(u_j,\delta^n+C_1\delta^{n+k+1}))\le (1+\varepsilon) m(B(x,\delta^n+C_1\delta^{n+k+1})).
\end{equation*}
By the above discussions, we first have
\begin{equation}\label{L1-norm-1}
\begin{split}
&\int_{G}\frac{\mathds{1}_{A(y)}(x)}{m(B(x,\delta^n))}dm(x)\\
&\le 2^\epsilon(K+1)\int_G\frac{\mathds{1}_{A(y)}(x)}{m(B(x,\delta^{n}+C_1\delta^{n+k+1}))}\mathds{1}_{B(y,\delta^{n}+C_1\delta^{n+k+1})}(x)dm(x)\\
&\le 2^\epsilon(K+1)\int_G\frac{(1+\varepsilon)\mathds{1}_{A(y)}(x)}{m(B(u_j,\delta^{n}+C_1\delta^{n+k+1}))}\mathds{1}_{B(y,\delta^{n}+C_1\delta^{n+k+1})\cap B(u_j,\delta^{n}+C_1\delta^{n+k+1})}(x)dm(x)\\
&\le 2^\epsilon(K+1)\int_G\sum_{i=1}^M\frac{(1+\varepsilon)\mathds{1}_{A(y)\cap B(u_i,\delta^{n}+C_1\delta^{n+k+1})}(x)}{m(B(u_i,\delta^{n}+C_1\delta^{n+k+1}))}dm(x)\\
&\le 2^\epsilon(K+1)\int_G\sum_{i=1}^M\frac{(1+\varepsilon)\mathds{1}_{A(y)\cap B(u_i,\delta^{n}+C_1\delta^{n+k+1})}(x)}{m(B(u_i,\delta^{n}))}dm(x).
\end{split}
\end{equation}
Note that each $u_i\in B(y,\delta^n+C_1\delta^{n+k+1})$; hence $B(y,\delta^n)\subseteq B(u_i,2\delta^{n}+C_1\delta^{n+k+1})$. It follows from~\eqref{int} that
\begin{equation*}
\frac{m(B(y,\delta^n))}{m(B(u_i,\delta^n))}\le \frac{m(B(u_i,2\delta^{n}+C_1\delta^{n+k+1}))}{m(B(u_i,\delta^n))}\le 3^\epsilon(K+1).
\end{equation*}
Moreover, by Lemma~\ref{lem:decay}, we have ${m(A(y))}/{m(B(y,\delta^n))}\le K_\epsilon C_1^\epsilon\delta^{\epsilon k}$. Combining these two estimates with~\eqref{L1-norm-1}, we conclude
\begin{equation}\label{L1-norm-2}
\int_{G}\frac{\mathds{1}_{A(y)}(x)}{m(B(x,\delta^n))}dm(x)\le (1+\varepsilon)6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\delta^{\epsilon k}.
\end{equation}}
Combining \eqref{L1-norm-2} with~\eqref{L1-norm}, and then letting $\varepsilon\rightarrow 0$, we obtain $\|A_{\delta^n}\xi_{\alpha}^{n+k}\|_{L^1}\le 6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\delta^{\epsilon k}\|\xi_{\alpha}^{n+k}\|_{L^1}$.
Combining the three cases for $k$ above, \eqref{bad} is proved.
\bigskip
The estimate of $m\big(\{x\in G\setminus\widetilde{\Omega}:S(b)(x)>{\gamma}/{3}\}\big)$ is similar, and {will only be indicated briefly}. Fix $n>n_2$. We first prove that
\begin{equation}\label{cancellation}
\mathbb{E}_n{b}(x)=0,~\forall~x\in G\setminus{\widetilde{\Omega}}.
\end{equation}
Note that $b=\sum_{k}\sum_{\alpha\in\Lambda_k}b_\alpha^k$; by the linearity of the operator $\mathbb{E}_n$, we only need to prove that $\mathbb{E}_n{b_\alpha^k}=0$ for each $b_\alpha^k$.
Fix $x\in G\setminus\widetilde{\Omega}$. Let $Q_\beta^n\ni x$. Since $b_\alpha^k$ is supported on $Q_\alpha^k$, it follows that $\mathbb{E}_n{b^k_\alpha}(x)=\frac{1}{m(Q_\beta^n)}\int_{Q_\alpha^k\cap Q_\beta^n}b_\alpha^k(y)dm(y)$. If $Q_\alpha^k\cap Q_\beta^n\neq \emptyset$, we split $k$ into two cases: $k\ge n$ and $k<n$. For $k\ge n$, by Proposition~\ref{dyadic cube}(ii), it follows that $x\in Q_\beta^n\subseteq Q_\alpha^k$, which leads to a contradiction since $Q_\alpha^k\subseteq \widetilde{\Omega}$. For $k<n$, using Proposition~\ref{dyadic cube}(ii) again, we have $Q_\alpha^k \subseteq Q_\beta^n$, and it follows immediately that
\begin{equation*}
\mathbb{E}_n{b^k_\alpha}(x)=\frac{1}{m(Q_\beta^n)}\int_{Q_\alpha^k}b_\alpha^k(y)dm(y)=0.
\end{equation*}
This gives~\eqref{cancellation}. Similar to~\eqref{bad}, we can also establish the following inequality
\begin{equation}\label{bad-1}
\int_{G\setminus {\widetilde{\Omega}}}|A_{\delta^n}b_\alpha^{n+k}(x)|dm(x)\le
\left\{
\begin{array}{ll}
D\|b_\alpha^{n+k}\|_{L^1}, & \hbox{$-k_5\le k\le k_5$;} \\
0, & \hbox{$k>k_5$;} \\
6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\delta^{\epsilon k}\|b_\alpha^{n+k}\|_{L^1}, & \hbox{$k<-k_5$.}
\end{array}
\right.
\end{equation}
Note that for $-k_5\le k\le k_5$, the estimate holds by Proposition~\ref{aver}. For $k>k_5$, the estimate holds true by \eqref{emptyset} and the fact that $b_\alpha^{n+k}$ is supported on $Q_\alpha^{n+k}$. For $k<-k_5$, we first have
\begin{align*}
|A_{\delta^n}b_{\alpha}^{n+k}(x)|&=\big|\frac{1}{m(B(x,\delta^n))}\int_{\mathcal{I}(B(x,\delta^n),{Q}^{n+k}_\alpha)}b_{\alpha}^{n+k}(y)dm(y)\big|\\
&\le \frac{1}{m(B(x,\delta^n))}\int_{\delta^n-C_1\delta^{n+k}\le d(x,y)\le\delta^n+C_1\delta^{n+k}}|b_{\alpha}^{n+k}(y)|dm(y).
\end{align*}
{By the same estimates as~\eqref{L1-norm} and~\eqref{L1-norm-2}, the above inequality yields
\begin{align*}
\int_{G\setminus\widetilde{\Omega}}|A_{\delta^n}b_{\alpha}^{n+k}(x)|dm(x)&\le \int_{G}\frac{1}{m(B(x,{\delta^n}))}\int_{\delta^n-C_1\delta^{n+k}\le d(x,y)\le\delta^n+C_1\delta^{n+k}}|b_{\alpha}^{n+k}(y)|dm(y)dm(x)\\
&\le 6^{\epsilon}(K+1)^2C_1^\epsilon DK_\epsilon\delta^{\epsilon k}\|b_{\alpha}^{n+k}\|_{L^1(G,m)};
\end{align*}}
together with the above two cases, this proves~\eqref{bad-1}, and the proof is complete.
\end{proof}
The proof of the weak type $(1,1)$ estimate for the operator $SV$ is similar to that for $S$. Let us explain it briefly.
\begin{proof}[Proof of \eqref{weak-type inequalities of short variation}]
Fix $f\in L^1(G,m)$. Decompose $f=g+b+\xi$. The desired estimate for $g$ is true by the fact that $SV$ is of strong type $(2,2)$. In what follows, we only state the proof for $\xi$ since the proof for $b$ is similar.
By similar arguments as in the previous proof, we mainly need to prove the following inequality for $n>n_2$:
\begin{equation*}
\begin{split}
&\int_{G\setminus\widetilde{\Omega}}V_2(A^\prime_{r}\xi^{n+k}_{\alpha}(x):r\in[\delta^{n},\delta^{n+1}))dm(x)\le a(k)\|\xi_\alpha^{n+k}\|_{L^1(G,m)},
\end{split}
\end{equation*}
where
\begin{equation*}
a(k)=\left\{
\begin{array}{ll}
C_4, & \hbox{$-k_5\le k\le k_5$;} \\
0, & \hbox{$k>k_5$;} \\
2\cdot 3^\epsilon (\delta+1)^\epsilon (K+1)^2C_1^\epsilon D K_\epsilon\delta^{\epsilon k}, & \hbox{$k<-k_5$.}
\end{array}
\right.
\end{equation*}
We now focus on the above inequality. Fix $n>n_2$. Note that for $-k_5\le k\le k_5$, the estimate is true by Lemma~\ref{controlled by maximal operator}. For $k>k_5$, similar to~\eqref{emptyset}, we can prove that for every $x\in G\setminus\widetilde{\Omega}$ and $r\in[\delta^n,\delta^{n+1})$, $B(x,r)\cap \widehat{Q}_\alpha^{n+k}=\emptyset$, hence $V_2(A^\prime_{r}\xi^{n+k}_{\alpha}(x):r\in[\delta^{n},\delta^{n+1}))=0$. It remains to prove the case $k<-k_5$.
Given an annulus $B(x,R)\setminus B(x,r)$ and a `dyadic cube' $Q_\alpha^k$, we define the set $\mathcal{I}(B(x,R)\setminus B(x,r),Q^{k}_\alpha)=\{Q_\alpha^{k}\cap(B(x,R)\setminus B(x,r)): Q_\alpha^{k}\cap\partial(B(x,R)\setminus B(x,r)) \neq\emptyset\}$. Since $\int_{\widehat{Q}_\alpha^{n+k}}\xi_{\alpha}^{n+k}=0$, it follows that
\begin{align*}
&V_2(A^\prime_{r}\xi^{n+k}_{\alpha}(x):r\in[\delta^{n},\delta^{n+1}))\\
&\le\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J\frac{1}{m(B(x,r_i))}\big|\int_{\mathcal{I}(B(x,r_i)\setminus B(x,r_{i-1}),\widehat{Q}^{n+k}_\alpha)}\xi_\alpha^{n+k}(y)dm(y)\big|\\
&+\sup_{\delta^{n}\le r_0<\cdots<r_J<\delta^{n+1}}\sum_{i=1}^J
\big|\bigg(\frac{1}{m(B(x,r_{i-1}))}-\frac{1}{m(B(x,r_i))}\bigg)\int_{\mathcal{I}(B(x,r_i), \widehat{Q}_\alpha^{n+k})}\xi_\alpha^{n+k}(y)dm(y)\big|\\
&\le \frac{2}{m(B(x,\delta^{n}))}\int_{\delta^{n+1}-C_1\delta^{n+k+1}\le d(x,y)\le\delta^{n+1}+C_1\delta^{n+k+1}}|\xi_{\alpha}^{n+k}(y)|dm(y).
\end{align*}
{Using the same argument as~\eqref{L1-norm} and~\eqref{L1-norm-2}, the above inequality yields
\begin{align*}
\int_{G\setminus\widetilde{\Omega}}V_2(A^\prime_{r}\xi^{n+k}_{\alpha}(x):r\in[\delta^{n},\delta^{n+1}))dm(x)\le 2\cdot 3^\epsilon(\delta+1)^\epsilon (K+1)^2C_1^\epsilon D K_\epsilon\delta^{\epsilon k}\|\xi_{\alpha}^{n+k}\|_{L^1(G,m)},
\end{align*}}
which completes the proof.
\end{proof}
\section{($L^\infty$, BMO) estimates}\label{ST5}
In this section, we prove that both the operator $S$ and the operator $SV$ map $L_c^\infty$ to the dyadic BMO space.
Given a locally integrable function $f\in L^1_{loc}(G,m)$,
the dyadic sharp maximal function is defined by
\begin{equation*}
M^\sharp f(x)=\sup_{(\alpha,k),x\in Q^k_{\alpha}}\inf_c\frac{1}{m(Q^k_\alpha)}\int_{Q^k_\alpha}|f(y)-c|dm(y),
\end{equation*}
and then the dyadic BMO space is defined as $BMO=\{f\in L^1_{loc}(G,m):\| M^\sharp f\|_{L^{\infty}}<\infty\}$ with the norm $\|f\|_{BMO}=\| M^\sharp f\|_{L^{\infty}}$.
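For instance, taking $c$ to be the average $f_{Q^k_\alpha}=\frac{1}{m(Q^k_\alpha)}\int_{Q^k_\alpha}f\,dm$ shows that $L^\infty(G,m)\subseteq BMO$ with
\begin{equation*}
\inf_c\frac{1}{m(Q^k_\alpha)}\int_{Q^k_\alpha}|f(y)-c|dm(y)\le \frac{1}{m(Q^k_\alpha)}\int_{Q^k_\alpha}\big(|f(y)|+|f_{Q^k_\alpha}|\big)dm(y)\le 2\|f\|_{L^\infty},
\end{equation*}
so that $\|f\|_{BMO}\le 2\|f\|_{L^\infty}$.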
We first handle operator $S$, which was defined as
$$S(f)=\bigg(\sum_{n>n_{r_0}}|A^\prime_{\delta^n}f-\mathbb{E}_nf|^2\bigg)^{1/2}.$$ As in dealing with the weak type $(1,1)$ estimate, we also need the non-doubling analysis.
\begin{proof}[Proof of~\eqref{BMO-type inequalities of square function}]
Fix a `dyadic cube' $Q^k_\beta$. Recall that $\widetilde{Q}_\beta^k=\{y\in G:d(y,z_\beta^k)\le 3C_1\delta^{k+1}\}$ and $k_3=\min\{k:a_0\delta^k>r_0\}$. Set
\begin{equation}\label{dilation}
{Q^k_\beta}^*=\left\{
\begin{array}{ll}
\widetilde{Q^k_\beta}, & \hbox{$k>k_3$;}\\
Q^k_\beta, & \hbox{$k\le k_3$,}
\end{array}
\right.
\end{equation}
and
\begin{equation*}
f(x)=f(x)\mathds{1}_{{Q^k_\beta}^*}(x)+f(x)\mathds{1}_{G\setminus {Q^k_\beta}^*}(x):=f_1(x)+f_2(x).
\end{equation*}
By the triangle inequality, it follows that
\begin{equation*}
\begin{split}
\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f)(y)-c|&dm(y)\le \frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f)(y)-S(f_2)(y)|dm(y)\\
&+\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_2)(y)-c|dm(y)\\
&\le \frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_1)(y)|dm(y)+\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_2)(y)-c|dm(y).
\end{split}
\end{equation*}
We now focus on the first term of the right hand side. By the Cauchy-Schwarz inequality, \eqref{dilation}, \eqref{int} and the fact that the operator $S$ is of strong type $(2,2)$, we have
\begin{equation*}
\begin{split}
\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_1)(y)|dm(y)&\le \bigg(\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_1)(y)|^2dm(y)\bigg)^{1/2}\\
&\le \bigg(\frac{c_2^2}{m(Q_\beta^k)}\int_{{Q^k_\beta}^*}|f(y)|^2dm(y)\bigg)^{1/2}\\
&\le c_2(3C_1\delta/a_0)^{\epsilon/2}\|f\|_{L^\infty}.
\end{split}
\end{equation*}
It remains to handle the term $\frac{1}{m(Q_\beta^k)}\int_{Q_\beta^k}|S(f_2)(y)-c|dm(y)$. Taking $c=S(f_2)(z^k_\beta)$, we first claim that
$$\mathbb{E}_{n}f_2(x)-\mathbb{E}_{n}f_2(z^k_\beta)=0,~\forall~x\in Q^k_\beta.$$
Indeed, let $Q_\alpha^n\ni x$. If $n\le k$, then by Proposition~\ref{dyadic cube}(ii), $Q^n_{\alpha}\subseteq Q^k_{\beta}$. Since $f_2$ is supported on $G\setminus {Q_\beta^k}^*$, it follows that $\mathbb{E}_{n}f_2(x)=\mathbb{E}_{n}f_2(z^k_\beta)=0$. If $n>k$, using Proposition~\ref{dyadic cube}(ii) again, we have $Q_\beta^k\subseteq Q_\alpha^n$ and $x,z_\beta^k\in Q_\alpha^n$. It follows that $\mathbb{E}_{n}f_2(x)=\mathbb{E}_{n}f_2(z^k_\beta)$, and the claim is proved.
On the other hand, let $n_3=k_3+[\log_{\delta}(2C_1/a_0)]+2$. By the triangle inequality, we first have
\begin{equation}\label{BMO-ineq}
\begin{split}
&\bigg(\sum_{n>n_{r_0}}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}\le \bigg(\sum_{n_{r_0}<n\le n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}\\
&+\bigg(\sum_{n>n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}\\
&\le 2(n_3-n_{r_0})\|f\|_{L^\infty}+\bigg(\sum_{n>n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}.
\end{split}
\end{equation}
Let $d(Q_\beta^k)$ be the diameter of $Q_\beta^k$. Set $n_4=\min\{n:\delta^n>d(Q_\beta^k)\}$. We now consider two cases for $n_3$: $n_3\ge n_4$ and $n_3<n_4$.
\noindent{\bf{Case} $n_3\ge n_4$}. Note that $\delta^{n_3}>d(Q_\beta^k)$. By~\eqref{BMO-ineq}, the proof will be completed if we prove the following inequality:
\begin{equation*}
\bigg(\sum_{n>n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}\le \frac{2\big(K+2^\epsilon K(K+1)\big)}{1-\delta^{-\epsilon}}\|f\|_{L^\infty},~\forall~x\in Q_\beta^k.
\end{equation*}
We now focus on the above inequality. By the triangle inequality, we have
{
\begin{equation*}
\begin{split}
&|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|\\
&=|\frac{1}{m(B(x,\delta^n))}\int_{B(x,\delta^n)}f_2(y)dm(y)-
\frac{1}{m(B(z_\beta^k,\delta^n))}\int_{B(z_\beta^k,\delta^n)}f_2(y)dm(y)|\\
&\le |\frac{1}{m(B(x,\delta^n))}\int_{B(x,\delta^n)}f_2(y)dm(y)-
\frac{1}{m(B(x,\delta^n))}\int_{B(z_\beta^k,\delta^n)}f_2(y)dm(y)|\\
&+|\frac{1}{m(B(x,\delta^n))}\int_{B(z_\beta^k,\delta^n)}f_2(y)dm(y)-
\frac{1}{m(B(z_\beta^k,\delta^n))}\int_{B(z_\beta^k,\delta^n)}f_2(y)dm(y)|\\
&\le \frac{m(B(x,\delta^n)\triangle B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}\|f\|_{L^\infty}+\big|\frac{m(B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}-1\big|\|f\|_{L^\infty}\\
&\le 2\frac{m(B(x,\delta^n)\triangle B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}\|f\|_{L^\infty}.
\end{split}
\end{equation*}
Note that{
$$B(x,\delta^n)\triangle B(z_\beta^k,\delta^n)\subseteq \bigg(B(x,d(x,z_\beta^k)+\delta^n)\setminus B(x,\delta^n)\bigg)\cup \bigg(B(z_\beta^k,d(x,z_\beta^k)+\delta^n)\setminus B(z_\beta^k,\delta^n)\bigg).$$}
For $n>n_3$, we have $\delta^n>r_0$. { Note that $x\in Q_\beta^k$, then by Proposition~\ref{dyadic cube}(iv), we have $B(z_\beta^k, \delta^n)\subseteq B(x,\delta^n+d(Q_\beta^k))$. It follows from~\eqref{int} that
\begin{equation*}
\frac{m(B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}\le \frac{m(B(x,\delta^n+d(Q_\beta^k)))}{m(B(x,\delta^n))}\le 2^\epsilon(K+1).
\end{equation*}
Combining the above inequality with condition~\eqref{decay property}, we have
\begin{equation*}
\begin{split}
&\frac{m(B(x,\delta^n)\triangle B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}\\
&\le\frac{m\bigg(B(x,d(x,z_\beta^k)+\delta^n)\setminus B(x,\delta^n)\bigg)}{m(B(x,\delta^n))}+\frac{m\bigg(B(z_\beta^k,d(x,z_\beta^k)+\delta^n)\setminus B(z_\beta^k,\delta^n)\bigg)}{m(B(z_\beta^k,\delta^n))}\frac{m(B(z_\beta^k,\delta^n))}{m(B(x,\delta^n))}\\
&\le \big(K+2^\epsilon K(K+1)\big)\bigg(\frac{d(x,z_\beta^k)}{\delta^n}\bigg)^{\epsilon}.
\end{split}
\end{equation*}}
Therefore,
\begin{equation}\label{estimate of averaging operator}
|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|\le 2\big(K+2^\epsilon K(K+1)\big)\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon}\|f\|_{L^\infty}.
\end{equation}
By the above inequality, we have
\begin{equation*}
\begin{split}
\bigg(\sum_{n>n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z_\beta^k)|^2\bigg)^{1/2}
&\le 2\big(K+2^\epsilon K(K+1)\big)\|f\|_{L^\infty}\sum_{n:\delta^n>d(Q_\beta^k)}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon}\\
&\le \frac{2\big(K+2^\epsilon K(K+1)\big)}{1-\delta^{-\epsilon}}
\|f\|_{L^\infty}.
\end{split}
\end{equation*}
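Here the last inequality is a geometric series estimate: since $d(Q_\beta^k)<\delta^{n_4}$ and $\delta>1$, we have
\begin{equation*}
\sum_{n:\delta^n>d(Q_\beta^k)}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon}\le\sum_{n\ge n_4}\delta^{-\epsilon(n-n_4)}=\sum_{j\ge 0}\delta^{-\epsilon j}=\frac{1}{1-\delta^{-\epsilon}}.
\end{equation*}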
\noindent{\bf{Case} $n_3<n_4$}. Note that $d(Q_\beta^k)\ge \delta^{n_3-1}$. It follows {from Proposition \ref{dyadic cube}}(iv) that
\begin{equation*}
a_0\delta^{k}\ge ({a_0}/{{2}C_1})\delta^{(n_3-1)}>\delta^{k_3}\ge r_0,
\end{equation*}
and $f_2=f\mathds{1}_{G\setminus \widetilde{Q^k_\beta}}$. We claim that for $n_3<n\le n_4$,
\begin{equation}\label{BMO-ineq1}
A^\prime_{\delta^n}f_2(x)=0, ~\forall~x\in Q_\beta^k.
\end{equation}
Indeed, fix $x\in Q_\beta^k$ and let $y\in B(x,\delta^n)$; then $d(y,z_\beta^k)\le d(y,x)+d(z_\beta^k,x)\le \delta^n+C_1\delta^k\le d(Q_\beta^k)+C_1\delta^k\le 3C_1\delta^k$. From this, we have
$B(x,\delta^n)\subset\widetilde{Q}_{\beta}^k$. Then the claim {follows from the fact that $f_2$ is supported on $G\setminus \widetilde{Q^k_\beta}$.}
Combining~\eqref{BMO-ineq1} with a similar argument as in the previous proof, we also have
\begin{equation*}
\bigg(\sum_{n>n_3}|A^\prime_{\delta^n}f_2(x)-A^\prime_{\delta^n}f_2(z^k_\beta)|^2\bigg)^{1/2}\le {\frac{2\big(K+2^\epsilon K(K+1)\big)}{1-\delta^{-\epsilon}}}\|f\|_{L^\infty},~\forall~x\in Q_\beta^k,
\end{equation*}
and the proof is complete.
\end{proof}
The proof of the $(L^\infty_c, BMO)$ boundedness of $SV$ is similar to that of $S$. Let us give a sketch of the proof.
\begin{proof}[Proof of \eqref{BMO-type inequalities of short variation}]
Let $f$ be an $L^\infty_c$ function. Recall that the short variation operator is defined by $SV(f)=\big(\sum_{n\ge n_{r_0}}V_2(A^\prime_{r}f:r\in[\delta^{n},\delta^{n+1}))^2\big)^{1/2}$. Fix a `dyadic cube' $Q^k_\beta$ and let $f=f\mathds{1}_{{Q^k_\beta}^*}+f\mathds{1}_{G\setminus {Q^k_\beta}^*}:=f_1+f_2$. We only state the proof for the case $n_3\ge n_4$, namely $\delta^{n_3}>d(Q_\beta^k)$, since the proof for the case $n_3<n_4$ is similar. By an argument similar to that in the proof for the operator $S$, it is sufficient to prove that there exists a constant $C_{\epsilon,\delta, K}>0$ such that for all $x\in Q_\beta^k$,
\begin{equation*}
\bigg(\sum_{n>n_3}V_2(A^\prime_rf_2(x)-A^\prime_rf_2(z_\beta^k):r\in[\delta^{n},\delta^{n+1}))^2\bigg)^{1/2}\le C_{\epsilon,\delta, K}\|f\|_{L^\infty}.
\end{equation*}
Fix $n>n_3$ and $x\in Q_\beta^k$. By the definition of the short variation operator, there exists a sequence $\{r_i\}\subseteq [\delta^n,\delta^{n+1})$, attaining at least half of the supremum defining $V_2$, such that
\begin{equation}\label{controll-ineq}
\begin{split}
&V_2(A^\prime_rf_2(x)-A^\prime_rf_2(z_\beta^k):r\in[\delta^{n},\delta^{n+1}))\\
&\le 2\bigg(\sum_{i}|A^\prime_{r_i}f_2(x)-A^\prime_{r_i}f_2(z_\beta^k)-(A^\prime_{r_{i-1}}f_2(x)-A^\prime_{r_{i-1}}f_2(z_\beta^k))|^2\bigg)^{1/2}.
\end{split}
\end{equation}
We split the indices of the sequence $\{r_i\}$ into two sets: ${J_1}=\{i:r_i-r_{i-1}\le d(Q_\beta^k)^\epsilon/\delta^{(\epsilon-1)n}\}$ and $J_2=\{i:r_i-r_{i-1}>d(Q_\beta^k)^\epsilon/\delta^{(\epsilon-1)n}\}$. Therefore, the right hand side of the above inequality is controlled by the sum of
\begin{equation*}
\begin{split}
SV&_{I_n}(f_2)(x)=\bigg(\sum_{i:i\in J_1}
|A^\prime_{r_i}f_2(x)-A^\prime_{r_i}f_2(z_\beta^k)-(A^\prime_{r_{i-1}}f_2(x)-A^\prime_{r_{i-1}}f_2(z_\beta^k))|^2\bigg)^{1/2},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
SV&_{II_n}(f_2)(x)=\bigg(\sum_{i:i\in J_2}|A^\prime_{r_i}f_2(x)-A^\prime_{r_{i}}f_2(z_\beta^k)-
(A^\prime_{r_{i-1}}f_2(x)-A^\prime_{r_{i-1}}
f_2(z_\beta^k))|^2\bigg)^{1/2}.
\end{split}
\end{equation*}
We now focus on $SV_{I_n}(f_2)(x)$. By the triangle inequality, we have
\begin{equation*}
\begin{split}
SV_{I_n}(f_2)(x)&\le \bigg(\sum_{i:i\in J_1}|A^\prime_{r_i}f_2(x)-A^\prime_{r_{i-1}}f_2(x)|^2\bigg)^{1/2}\\
&+\bigg(\sum_{i:i\in J_1}|A^\prime_{r_i}f_2(z_\beta^k)-A^\prime_{r_{i-1}}f_2(z_\beta^k)|^2\bigg)^{1/2}.
\end{split}
\end{equation*}
Note that for any $z\in Q_\beta^k$, we have
\begin{align*}
|&A^\prime_{r_i}f_2(z)-A^\prime_{r_{i-1}}f_2(z)|\le\frac{1}{m(B(z,r_i))}\int_{B(z,r_i)\setminus B(z,r_{i-1})}|f_2(y)|dm(y)\\
&+\big(\frac{1}{m(B(z,r_{i-1}))}-\frac{1}{m(B(z,r_{i}))}\big)\int_{B(z,r_{i-1})}|f_2(y)|dm(y)\\
&\le \frac{m(B(z,r_i))-m(B(z,r_{i-1}))}{m(B(z,r_i))}\|f_2\|_{L^\infty}+\bigg(\frac{m(B(z,r_{i-1}))}{m(B(z,r_{i-1}))}-\frac{m(B(z,r_{i-1}))}
{m(B(z,r_{i}))}\bigg)\|f_2\|_{L^\infty}\\
&\le 2\bigg(\frac{m(B(z,r_{i}))-m(B(z,r_{i-1}))}{m(B(z,r_{i}))}\bigg)\|f_2\|_{L^\infty}\\
&\le 2\|f\|_{L^\infty}\int_{m(B(z,r_{i-1}))}^{m(B(z,r_{i}))}\frac{1}{u}du,
\end{align*}
where the last step follows from $\frac{b-a}{b}\le\int_{a}^{b}\frac{du}{u}$ for $0<a\le b$. By the Cauchy-Schwarz inequality and condition~\eqref{decay property}, the integral term of the above inequality is controlled by
\begin{align*}
\int_{m(B(z,r_{i-1}))}^{m(B(z,r_{i}))}&\frac{1}{u}du\le \big(m(B(z,r_{i}))-m(B(z,r_{i-1}))\big)^{1/2}
\bigg(\int_{m(B(z,r_{i-1}))}^{m(B(z,r_{i}))}\frac{1}{u^2}du\bigg)^{1/2}\\
&\le \bigg(\frac{m(B(z,r_{i}))-m(B(z,r_{i-1}))}{m(B(z,r_{i-1}))}\bigg)^{1/2}\bigg(
m(B(z,r_{i-1}))\int_{m(B(z,r_{i-1}))}^{m(B(z,r_{i}))}\frac{1}{u^2}du\bigg)^{1/2}\\
&\le K\bigg(\frac{r_i-r_{i-1}}{r_{i-1}}\bigg)^{\epsilon/2}{\bigg(
\int_{m(B(z,{r_{i-1}}))}^{m(B(z,r_{i}))}\frac{m(B(z,\delta^{n+1}))}{u^2}du\bigg)^{1/2}}.
\end{align*}
{Then using~\eqref{int}, the above inequality yields
\begin{align*}
&\bigg(\sum_{i:i\in J_1}|A^\prime_{r_i}f_2(x)-A^\prime_{r_{i-1}}f_2(x)|^2\bigg)^{1/2}\\
&\le 2K\|f\|_{L^\infty}\bigg(\sum_{i:i\in J_1}\bigg(\frac{r_i-r_{i-1}}{r_{i-1}}\bigg)^{\epsilon}
\int_{m(B(x,{r_{i-1}}))}^{m(B(x,r_{i}))}\frac{m(B(x,\delta^{n+1}))}{u^2}du\bigg)^{1/2}\\
&\le 2K\|f\|_{L^\infty}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon^2/2}\bigg(
\int_{m(B(x,{\delta^{n}}))}^{m(B(x,\delta^{n+1}))}\frac{m(B(x,\delta^{n+1}))}{u^2}du\bigg)^{1/2}\\
&\le 2K\big((K+1)\delta^\epsilon\big)^{1/2}\|f\|_{L^\infty}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon^2/2}.
\end{align*}
By a similar argument, we have
\begin{equation*}
\bigg(\sum_{i:i\in J_1}|A^\prime_{r_i}f_2(z_\beta^k)-A^\prime_{r_{i-1}}f_2(z_\beta^k)|^2\bigg)^{1/2}\le 2K\big((K+1)\delta^\epsilon\big)^{1/2}\|f\|_{L^\infty}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon^2/2}.
\end{equation*}
Therefore,
\begin{align*}
SV_{I_n}(f_2)(x)&\le 4K\big((K+1)\delta^\epsilon\big)^{1/2}\|f\|_{L^\infty}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon^2/2}.
\end{align*}}
For the part $SV_{II_n}(f_2)(x)$, using the triangle inequality, we first have
\begin{equation*}
\begin{split}
SV_{II_n}(f_2)(x)&\le\bigg(\sum_{i:i\in J_2}|A^\prime_{r_i}f_2(x)-A^\prime_{r_{i}}f_2(z_\beta^k)|^2\bigg)^{1/2}\\
&+ \bigg(\sum_{i:i\in J_2}|A^\prime_{r_{i-1}}f_2(x)-A^\prime_{r_{i-1}}f_2(z_\beta^k)|^2\bigg)^{1/2}.
\end{split}
\end{equation*}
Note that $\# J_2\le{(\delta-1)\delta^{\epsilon n}}/{d(Q_\beta^k)^\epsilon}$, since the gaps $r_i-r_{i-1}$ with $i\in J_2$ exceed $d(Q_\beta^k)^\epsilon/\delta^{(\epsilon-1)n}$ and the $r_i$ lie in an interval of length $(\delta-1)\delta^n$. Similar to~\eqref{estimate of averaging operator}, for any $z\in Q_\beta^k$ and $r\in [\delta^n,\delta^{n+1})$, we have
\begin{equation*}
|A^\prime_{r}f_2(z)-A^\prime_{r}f_2(z^k_\beta)|\le{2\big(K+2^\epsilon K(K+1)\big)\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon}}\|f\|_{L^\infty}.
\end{equation*}
Combining the above inequalities, we have
\begin{align*}
SV_{II_n}(f_2)(x) &\le 4\big(K+2^\epsilon K(K+1)\big)\|f\|_{L^\infty}\bigg(
(\delta-1)\bigg(\frac{\delta^n}{d(Q_\beta^k)}\bigg)^\epsilon\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{2\epsilon}\bigg)^{1/2}\\
&=4\big(K+2^\epsilon K(K+1)\big)(\delta-1)^{1/2}\|f\|_{L^\infty}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon/2}.
\end{align*}
Finally, combining the estimates of $SV_{I_n}(f_2)(x)$ and $SV_{II_n}(f_2)(x)$ with~\eqref{controll-ineq}, we have
\begin{equation*}
\begin{split}
\bigg(\sum_{n>n_3}&V_2(A^\prime_rf_2(x)-A^\prime_rf_2(z_\beta^k):r\in[\delta^{n},\delta^{n+1}))^2\bigg)^{1/2}\\
&\le 8\big(K+2^\epsilon K(K+1)\big)(\delta-1)^{1/2}\|f\|_{L^\infty}\bigg(\sum_{n:\delta^n>d(Q_\beta^k)}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon}\bigg)^{1/2}\\
&+ 8K\big((K+1)\delta^\epsilon\big)^{1/2}\|f\|_{L^\infty}\bigg(\sum_{n:\delta^n>d(Q_\beta^k)}\bigg(\frac{d(Q_\beta^k)}{\delta^n}\bigg)^{\epsilon^2}\bigg)^{1/2}\\
&\le 8\big(K+2^\epsilon K(K+1)\big)\bigg(\frac{\delta-1}{1-\delta^{-\epsilon}}\bigg)^{1/2}\|f\|_{L^\infty}
+8K\bigg(\frac{(K+1)\delta^\epsilon}{1-\delta^{-\epsilon^2}}\bigg)^{1/2}\|f\|_{L^\infty},
\end{split}
\end{equation*}
and the proof is complete.
\end{proof}
\section{Transference principles}\label{ST6}
In this section, we establish the transference principles for the jump operator. Recall that a sequence of compact sets $\{F_n\}_{n\in\mathbb{N}}$ with positive measures in a locally compact group $G$ is called a F{\o}lner sequence if for every $g\in G$,
\begin{equation*}
\lim_{i}\frac{m((F_ng)\triangle F_n)}{m(F_n)}=0,
\end{equation*}
or, equivalently, if for every compact set $K$ in $G$,
\begin{equation}\label{amen}
\lim_{n}\frac{m(F_nK)}{m(F_n)}=1.
\end{equation}
A group $G$ is called amenable if it admits such a F{\o}lner sequence. It is well known that
if $G$ is a group with polynomial volume growth, then it is amenable (cf. \cite{Gui}), and in particular the family of balls $\{B_r\}_{r>0}$ generated by any word metric on $G$ is a F{\o}lner sequence (cf. \cite{Breuillard14, Nevo06, Tessera07}). For more information about F{\o}lner sequences and amenable groups we refer the reader to~\cite{Pat88}.
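As a simple illustration, take $G=\mathbb{R}^d$ with Lebesgue measure and $F_n=B(0,n)$: for fixed $g\in\mathbb{R}^d$ and $n>|g|$,
\begin{equation*}
(F_n+g)\triangle F_n\subseteq B(0,n+|g|)\setminus B(0,n-|g|),\quad\text{hence}\quad \frac{m((F_n+g)\triangle F_n)}{m(F_n)}\le \frac{(n+|g|)^d-(n-|g|)^d}{n^d}\longrightarrow 0,
\end{equation*}
so the Euclidean balls form a F{\o}lner sequence.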
\subsection{Weak type inequalities.}
\begin{proof}[Proof of Theorem~\ref{thm:trans}\,\emph{(i)}]
{We only give the proof of the transference principle for weak type inequalities, since the strong type one can be proved verbatim. }Let $p\in[1,\infty)$ and $f\in L^p(X,\mu)$. Let $T$ be an action induced by a $\mu$-preserving measurable transformation $\tau$, that is, $T_gf(x)=f(\tau_{g^{-1}}x)$ for all $g\in G$. Fix $x\in X$ and a compact set $A$, and define
\begin{equation*}
F_A(g)=\left\{
\begin{array}{ll}
T_gf(x), & \hbox{$g\in A$;} \\
0, & \hbox{$g\notin A$.}
\end{array}
\right.
\end{equation*}
Let $N$ be an integer large enough and $K$ a compact set such that for $r\le N$ we have $ B_r\subseteq K$. Clearly, $F_{AK}(g)=T_gf(x)\mathds{1}_{AK}(g)$. Moreover, if $h\in A$ and $k\in K$, we have
$$T_hT_kf(x)=T_{hk}f(x)=F_{AK}(hk).$$
It follows that for all $h\in A$ and $r\in(0,N]\cap \mathcal{I}$, we have
\begin{equation}\label{equality-1}
T_hA_rf(x)=\frac{1}{m(B_r)}\int_{B_r}T_{hg}f(x)\mathds{1}_{AK}(hg)dm(g)=\frac{1}{m(B_r)}\int_{B_r}F_{AK}(hg)dm(g).
\end{equation}
Let $\mathbf{A}_Nf=\{A_rf:r\in(0,N]\cap \mathcal{I}\}$ and $\mathbf{A}^\prime_N F=\{A^\prime_rF:r\in(0,N]\cap \mathcal{I}\}$. For all $h\in G$, set $T_h\mathbf{A}_Nf=\{T_hA_rf:r\in(0,N]\cap \mathcal{I}\}$; here and subsequently, for {a sequence of measurable functions} $\a=\{\a_r:r\in\mathcal{I}\}$, $T\a$ stands for $\{T\a_r:r\in\mathcal{I}\}$. From~\eqref{equality-1}, we have
\begin{equation}\label{equality-2}
\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}=\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_NF_{AK})(h)}.
\end{equation}
Fix $\gamma>0$ and define the set
\begin{equation*}
\mathcal{D}(\gamma)=\big\{(h,x)\in A\times X:\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}>\gamma\big\}.
\end{equation*}
Fix $h\in A$ and define the set
\begin{equation*}
\mathcal{D}^h(\gamma)=\big\{x\in X:\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}>\gamma\big\}.
\end{equation*}
Since $T_{h}(\mathds{1}_{\mathcal{D}^e(\gamma)})(x)=\mathds{1}_{\mathcal{D}^h(\gamma)}(x)$, it follows that
\begin{equation}\label{equality-3}
\int_{X}\mathds{1}_{\mathcal{D}^h(\gamma)}(x)d\mu(x)=\int_{X}T_{h}(\mathds{1}_{\mathcal{D}^e(\gamma)})(x)d\mu(x)
=\int_{X}\mathds{1}_{\mathcal{D}^e(\gamma)}(x)d\mu(x).
\end{equation}
On the other hand, fix $x\in X$ and define the set
\begin{equation*}
\mathcal{D}_x(\gamma)=\big\{h\in A:\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}>\gamma\big\}.
\end{equation*}
By~\eqref{equality-2}, one can see that
\begin{equation*}
\mathcal{D}_x(\gamma)=\big\{h\in A:\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_NF_{AK})(h)}>\gamma\big\}.
\end{equation*}
Moreover, using the assumption that the jump operator is of weak type $(p,p)$, we have
\begin{equation*}
m(\mathcal{D}_x(\gamma))\le \frac{\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p,\infty}}^p}{\gamma^p}\|F_{AK}\|^p_{L^p(G,m)}.
\end{equation*}
It follows from the above inequality that
\begin{equation}\label{inequality-1}
\begin{split}
\int_X\int_{A}\mathds{1}_{\mathcal{D}_x(\gamma)}(h)dm(h)d\mu(x)&\le \frac{\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p,\infty}}^p}{\gamma^p}\int_{X}\int_G|F_{AK}(h)|^pdm(h)d\mu(x)\\
&= \frac{\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p,\infty}}^p}{\gamma^p}m(AK)\int_{X}|f(x)|^pd\mu(x).
\end{split}
\end{equation}
By the Fubini theorem, we can see that
\begin{equation}\label{inequality-2}
\begin{split}
\int_X\int_A\mathds{1}_{\mathcal{D}_x(\gamma)}(h)dm(h)d\mu(x)&=\int_{G\times X}\mathds{1}_{\mathcal{D}(\gamma)}(h,x)dm(h)d\mu(x)\\
&=\int_{A}\int_X\mathds{1}_{\mathcal{D}^h(\gamma)}(x)d\mu(x)dm(h).
\end{split}
\end{equation}
Using~\eqref{equality-3}, we have
\begin{equation*}
\begin{split}
\int_{A}\int_X\mathds{1}_{\mathcal{D}^h(\gamma)}(x)d\mu(x)dm(h)&=m(A)\int_X\mathds{1}_{\mathcal{D}^e(\gamma)}(x)d\mu(x)\\
&=m(A)\mu\big(\{x\in X:\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)(x)}>\gamma\}\big).
\end{split}
\end{equation*}
Combining~\eqref{inequality-1} and~\eqref{inequality-2} with the above equality, we conclude
\begin{equation*}
\mu\big(\{x\in X:\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)(x)}>\gamma\}\big)\le \frac{\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p,\infty}}^p}{\gamma^p}\frac{m(AK)}{m(A)}\int_{X}|f(x)|^pd\mu(x).
\end{equation*}
Since $G$ is an amenable group, by~\eqref{amen}, for any $\varepsilon>0$ we can choose the above subset $A$ such that $m(AK)/m(A)\le(1+\varepsilon)$. By the arbitrariness of $\varepsilon$ and the monotone convergence theorem, letting $N\rightarrow\infty$, we have
\begin{equation*}
\mu\big(\{x\in X:\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}f)(x)}>\gamma\}\big)\le \frac{\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime)}\|_{L^p\rightarrow L^{p,\infty}}^p}{\gamma^p}\int_{X}|f(x)|^pd\mu(x),
\end{equation*}
which is the desired conclusion.
\end{proof}
\subsection{Strong type inequalities.}\label{strong-type}
In this subsection, we {assume that} the action $T$ is a strongly continuous regular action of $G$ on $L^p(X,\mu)$. Before stating the result, we introduce some notation and lemmas. The following lemma was proved in~\cite{Mirek-Stein-Zor20}.
\begin{lemma}\label{equivalent norm}
Let $\a=\{\a_r(x),r\in \mathcal{I}\}$ be a sequence of measurable functions on a measure space $(X,\mu)$. For every $p\in (1,\infty)$ and $\theta\in(0,1)$, there exists a positive constant $c_{p,\theta}$ such that
\begin{equation*}
c^{-1}_{p,\theta}\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\alpha)}\|_{L^p}\le [L^{\infty}(X;V_{\infty}),L^{\theta p}(X;V_{2\theta})]_{\theta,\infty}(\a)\le c_{p,\theta}\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\alpha)}\|_{L^p}.
\end{equation*}
Moreover, if $\max\{1/p,1/2\}<\theta<1$, then the vector-valued interpolation space
$[L^{\infty}(X;V_{\infty}),$ $L^{\theta p}(X;V_{2\theta})]_{\theta,\infty}$
admits an equivalent norm; in particular, if $p>1$, $\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\cdot)}\|_{L^p}$ admits an equivalent norm.
\end{lemma}
See \cite{Mirek-Stein-Zor20} for the definition of vector-valued interpolation spaces and more details about jump quasi-norms.
{An} operator $T:L^p(X,\mu)\rightarrow L^p(X,\mu)$ is called regular if there exists a constant $C>0$ such that
\begin{equation}\label{regular}
\|\sup_{k\ge 1}|T(f_k)|\|_{L^p}\le C\|\sup_{k\ge 1}|f_k|\|_{L^p},
\end{equation}
for any finite sequence $\{f_k:k\ge 1\}$ in $L^p(X,\mu)$. Let us denote by $\|T\|_r$ the smallest $C$ for which this holds. Let $\mathfrak{B}$ be a Banach space. If $T$ is a regular operator on $L^p(X,\mu)$, then the tensor product operator $T\otimes id_{\mathfrak{B}}:L^p(X,\mu)\otimes\mathfrak{B}\rightarrow L^p(X,\mu)\otimes\mathfrak{B}$ extends to a bounded operator $\widetilde{T\otimes id_{\mathfrak{B}}}$ from the Bochner space $L^p(X;\mathfrak{B})$ to $L^p(X;\mathfrak{B})$, and
\begin{equation}\label{extension}
\|\widetilde{T\otimes id_{\mathfrak{B}}}\|_{L^{p}(X;\mathfrak{B})\rightarrow L^{p}(X;\mathfrak{B})}\le \|T\|_{r}.
\end{equation}
For more information {on regular operators} we refer the reader to~\cite{Pis94}. {A group action $T$ of $G$ on $L^p(X,\mu)$ is called regular if for any $g\in G$, $T_g$ is regular and $\sup_{g\in G}\|T_g\|_r<\infty$.}
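A standard source of regular operators is the class of positive (positivity-preserving) operators: if $T\ge 0$, then $\sup_k|Tf_k|\le T(\sup_k|f_k|)$ pointwise, so \eqref{regular} holds with $C=\|T\|$. A minimal numerical sketch of this pointwise domination, with a random entrywise nonnegative matrix standing in for $T$ (all names here are illustrative, not part of the text):

```python
import random

random.seed(0)
n, K = 6, 4
# Entrywise nonnegative matrix T: a positive operator on functions on n points.
T = [[random.random() for _ in range(n)] for _ in range(n)]
fs = [[random.gauss(0, 1) for _ in range(n)] for _ in range(K)]

def apply_T(f):
    # (Tf)(i) = sum_j T[i][j] f(j)
    return [sum(T[i][j] * f[j] for j in range(n)) for i in range(n)]

Tfs = [apply_T(f) for f in fs]
# Pointwise domination sup_k |T f_k| <= T(sup_k |f_k|): positivity of T
# yields the regularity inequality with constant at most the norm of T.
lhs = [max(abs(Tf[i]) for Tf in Tfs) for i in range(n)]
rhs = apply_T([max(abs(f[j]) for f in fs) for j in range(n)])
assert all(lhs[i] <= rhs[i] + 1e-12 for i in range(n))
```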
Moreover, combining Lemma~\ref{equivalent norm} with~\eqref{extension}, one can obtain the following lemma.
\begin{lemma}\label{extension of Tg}
Fix $p\in(1,\infty)$. Let $T:L^p(X,\mu)\rightarrow L^p(X,\mu)$ be a regular operator. Given a sequence of measurable functions $\a=\{\a_r(x):r\in \mathcal{I}\}$ in $L^p(X,\mu)$, there exists a constant $c_p>0$ such that
\begin{equation*}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(T\a)}\|_{L^p}\le c_p \|T\|_{r} \sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\a)}\|_{L^p}.
\end{equation*}
\end{lemma}
We now prove the strong type $(p,p)$ transference principle for strongly continuous regular group actions.
\begin{proof}[Proof of Theorem~\ref{thm:trans}$\emph{(ii)}$]
Let $p\in (1,\infty)$ and $f\in L^p(X,\mu)$. Fix $x\in X$ and a compact set $A$, and define $F_{A}(g)=T_gf(x)\mathds{1}_{A}(g)$. Let $N$ be a sufficiently large integer and $K$ a compact set such that $B_r\subseteq K$ for every $r\le N$. Clearly, $F_{AK}(h)=T_hf(x)\mathds{1}_{AK}(h)$.
Keeping the notations introduced in the proof of weak type inequalities and using~\eqref{equality-2}, we have
\begin{align*}
\int_A\big|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}\big|^pdm(h)&=\int_A\big| \lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_NF_{AK})(h)}\big|^pdm(h)\\
&\le \int_G\big|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_NF_{AK})(h)}\big|^pdm(h).
\end{align*}
Using the strong type $(p,p)$ jump inequality which is assumed for the translation action, we obtain
\begin{equation*}
\begin{split}
\int_A\big|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}\big|^pdm(h)&\le\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p}}^p\int_{G}|F_{AK}(h)|^pdm(h)\\
&= \|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p}}^p\int_{AK}|T_hf(x)|^pdm(h).
\end{split}
\end{equation*}
Moreover, integrating both sides of the above inequality over $X$ and using the Fubini theorem, we have
\begin{equation}\label{ineq-1}
\begin{split}
&\int_{X}\int_A\big|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}\big|^pdm(h)d\mu(x)\\
&\le \|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p}}^p\int_{AK}\int_{X}|T_hf(x)|^pd\mu(x)dm(h)\\
&= \|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p}}^p\sup_{h\in G}\|T_h\|_{r}^pm(AK)\int_{X}|f(x)|^pd\mu(x).
\end{split}
\end{equation}
On the other hand, by the assumption that $T$ is a strongly continuous {regular} action of $G$ on $L^p(X,\mu)$ and Lemma~\ref{extension of Tg}, we have
\begin{equation*}
\begin{split}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)}\|_{L^p}&= \inf_{h\in A}\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(T_{h^{-1}}T_h\mathbf{A}_Nf)}\|_{L^p}\\
&\le c_p \sup_{h\in G}\|T_h\|_{r} \inf_{h\in A}\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)}\|_{L^p}.
\end{split}
\end{equation*}
{By the above inequality, we have
\begin{equation*}
\begin{split}
&\sup_{\lambda>0}\int_{X}|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)(x)}|^pd\mu(x)= \frac{1}{m(A)}\int_{A}\sup_{\lambda>0}\int_{X}|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)(x)}|^pd\mu(x)dm(h)\\
&\le c^p_p\sup_{h\in G}\|T_h\|^p_{r} \frac{1}{m(A)}\int_{A}\sup_{\lambda>0}\inf_{h\in A}\int_X|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}|^pd\mu(x)dm(h)\\
&= c^p_p \sup_{h\in G}\|T_h\|^p_{r} \sup_{\lambda>0}\frac{1}{m(A)}\int_{A}\inf_{h\in A}\int_X|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}|^pd\mu(x)dm(h)\\
&\le c^p_p \sup_{h\in G}\|T_h\|^p_{r} \sup_{\lambda>0}\frac{1}{m(A)}\int_A
\int_X|\lambda\sqrt{\mathcal{N}_\lambda(T_h\mathbf{A}_Nf)(x)}|^pd\mu(x)dm(h).
\end{split}
\end{equation*}}
Using the Fubini theorem and~\eqref{ineq-1}, the above inequality yields
\begin{align*}
&\sup_{\lambda>0}\int_X\big|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}_Nf)(x)}\big|^pd\mu(x)\\
&\le c^p_p\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime_N)}\|_{L^p\rightarrow L^{p}}^p\sup_{h\in G}\|T_h\|_{r}^{2p}\frac{m(AK)}{m(A)}\int_{X}|f(x)|^pd\mu(x).
\end{align*}
By~\eqref{amen} and a similar argument as in the proof of weak type inequalities, letting $N\rightarrow\infty$, we have
\begin{equation*}
\sup_{\lambda>0}\|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}f)}\|_{L^p}\le c_p\sup_{\lambda>0} \|\lambda\sqrt{\mathcal{N}_\lambda(\mathbf{A}^\prime)}\|_{L^p\rightarrow L^{p}}\sup_{h\in G}\|T_h\|^2_{r}\|f\|_{L^p},
\end{equation*}
which is the desired conclusion.
\end{proof}
\section{{Annular decay property}}\label{ST7}
{In this section, we discuss the annular decay property. We first recall the $(\epsilon,1)$-annular decay property of word metrics and verify that this property is stable under $(1,C)$-quasi-isometry (recalled below); we thus obtain the quantitative ergodic theorems, including Theorem \ref{main-thm2}, on polynomial growth groups equipped with a metric that is $(1,C)$-quasi-isometric to a word metric. We then check that all the known examples of periodic metrics, a notion introduced by Breuillard~\cite{Breuillard14}, satisfy some $(\epsilon,r_0)$-annular decay property. At the time of writing, we do not know how to verify this property for all periodic metrics. }
Let $G$ be a polynomial growth group with a symmetric compact generating set $V$. Recall that the word metric $d$ is defined by
\begin{equation*}
\forall~x,y\in G,\qquad~d(x,y)=\inf\{n\in\mathbb{N}:x^{-1}y\in V^n\}.
\end{equation*}
It is clear that $d$ is a {(left-)} invariant metric on $G$. Let $r>0$ and let $B_r$ be the ball of radius $r$ in $G$ with respect to the word metric $d$. It is well known that there exist two constants $C_{V}>0$ and $D_G>0$ such that for every $r\in(0,\infty)$,
\begin{equation}\label{ball-ineq1}
C_V^{-1}r^{D_G}\le m(B_r)\le C_Vr^{D_G}.
\end{equation}
From the above inequality, it is easy to check that $(G,d,m)$ satisfies the measure doubling condition; it follows that such $(G,d,m)$ satisfies condition~\eqref{geo-doubling}. By the same argument as in the proof of~\cite[Theorem 4]{Tessera07}, we have the following proposition.
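For a concrete illustration of~\eqref{ball-ineq1}, word-metric balls can be generated by breadth-first search on the Cayley graph; a minimal sketch for $G=\mathbb{Z}^2$ with the standard symmetric generating set, where $D_G=2$ and in fact $|B_r|=2r^2+2r+1$ (this closed form is specific to this choice of generators):

```python
def word_ball_size(r, gens):
    """|B_r| for the word metric on Z^2, computed by BFS on the Cayley graph."""
    ball = {(0, 0)}
    frontier = {(0, 0)}
    for _ in range(r):
        nxt = set()
        for x, y in frontier:
            for dx, dy in gens:
                p = (x + dx, y + dy)
                if p not in ball:
                    ball.add(p)
                    nxt.add(p)
        frontier = nxt
    return len(ball)

gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # symmetric generating set of Z^2
sizes = [word_ball_size(r, gens) for r in range(1, 6)]
# polynomial growth of degree D_G = 2: |B_r| = 2r^2 + 2r + 1
assert sizes == [2 * r * r + 2 * r + 1 for r in range(1, 6)]
```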
\begin{prop}\label{word}
Let $G$ be a polynomial growth group with a symmetric compact generating set $V$ and $\{B_r\}_{r>0}$ be the balls given by the corresponding word metric. Then there exist two constants $\theta=\log_2(1+\frac{1}{C_V^2 10^{D_G}})$ and $c_V=(1+\frac{1}{C_V^2 10^{D_G}})^3$ such that for all $r\in[1,\infty)$ and $s\in(0,r]$,
\begin{equation}\label{shell}
m(B_{r+s}\setminus B_{r})\le c_V\bigg(\frac{s}{r}\bigg)^{\theta}m(B_r).
\end{equation}
\end{prop}
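The shell estimate~\eqref{shell} can also be checked numerically on $\mathbb{Z}^2$ with the standard generators, where $m(B_r)=2r^2+2r+1$ for the counting measure; a minimal sketch with the illustrative constants $c=3$ and $\theta=1$ in place of the constants of the proposition:

```python
def ball(r):
    # m(B_r) on Z^2 with the standard generators (counting measure)
    return 2 * r * r + 2 * r + 1

# Check m(B_{r+s} \ B_r) <= c (s/r)^theta m(B_r) with c = 3, theta = 1
# (illustrative constants, not those of the proposition).
for r in range(1, 50):
    for s in range(1, r + 1):
        shell = ball(r + s) - ball(r)
        assert shell <= 3 * (s / r) * ball(r)
```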
\begin{remark}
\emph{In terms of our terminology, the above ball annular decay property is just the $(\epsilon, 1)$-annular decay property~\eqref{decay property} of $(G,d,m)$. Note that all the groups of polynomial growth are amenable (cf.~\cite{Gui}),
thus Theorem \ref{main-thm2} follows from Theorem \ref{main-thm1} by the transference principles. }
\end{remark}
In fact, in~\cite[Theorem 4]{Tessera07}, {the $(\epsilon,1)$-annular decay property is established for every metric measure space satisfying the measure doubling condition and Property ($M$).}
Recall that a metric measure space $(\mathcal{X},d,\mu)$ is said to satisfy Property $(M)$ if there exists a constant $C>0$ such that the Hausdorff distance between any pair of balls with the same center and radii between $r$ and $r+1$ is less than $C$. In other words, for all $x\in\mathcal{X}$, $r>0$ and $y\in B(x,r+1)$, we have $d(y,B(x,r))\le C$. Property $(M)$ is equivalent to the following property: there exists a constant $C<\infty$ such that for all $x\in\mathcal{X}$, $r>0$, $s\ge1$ and $y\in B(x,r+s)$,
\begin{equation}\label{monotone}
d(y,B(x,r))\le Cs;
\end{equation}
and is also equivalent to the metric space $(\mathcal{X},d,\mu)$ admitting monotone geodesics; {see e.g.~\cite[Proposition 2]{Tessera07} for the relevant definitions and proofs}. Moreover, Property (${M}$) is invariant under Hausdorff equivalence but unstable under quasi-isometry in the sense of \cite[Page 50]{Tessera07}, where one can find the relevant counterexamples.
We prove that Property ($M$) is invariant under $(1,C)$-quasi-isometry.
Two metrics $d_1$ and $d_2$ on $\mathcal{X}$ are called $(1,C)$-quasi-isometric if there exists a constant $C>0$ such that for any $x,y\in\mathcal{X}$, $|d_{1}(x,y)-d_{2}(x,y)|\le C$.
\begin{prop}\label{quasi-iso}
Let $d_{1}$ and $d_2$ be two metrics on $\mathcal{X}$. Assume that $d_1$ is $(1,C_{\mathcal{X}})$-quasi-isometric to $d_2$ and $(\mathcal{X}, d_{1})$ satisfies property ($M$), then $(\mathcal{X}, d_{2})$ satisfies property ($M$).
\end{prop}
\begin{proof}
Fix a point $x\in \mathcal{X}$, $s\ge 1$ and $r>0$. We denote by $B_1(x,r)$ and $B_2(x,r)$ the balls generated by $d_1$ and $d_2$, respectively. Since $d_{1}$ is $(1,C_{\mathcal{X}})$-quasi-isometric to $d_{2}$, it follows that $B_2(x,r)\subseteq B_1(x,r+C_\mathcal{X})$. Let $z\in B_2(x,r+s)$. We split $r$ into two cases: $r>C_{\mathcal{X}}$ and $r\le C_{\mathcal{X}}$. For $r>C_{\mathcal{X}}$, we have $B_1(x,r-C_\mathcal{X})\subseteq B_2(x,r)$ and it follows that
\begin{equation*}
d_2(z,B_2(x,r))\le d_1(z,B_1(x,r-C_{\mathcal{X}}))+C_{\mathcal{X}}.
\end{equation*}
Combining \eqref{monotone} for $d_1$ with the fact that $s\ge 1$ and $z\in B_1(x,r+s+C_\mathcal{X})$, the above inequality yields
\begin{equation*}
d_2(z,B_2(x,r))\le C(s+2C_{\mathcal{X}})+C_{\mathcal{X}}\le (C+2CC_{\mathcal{X}}+C_{\mathcal{X}})s.
\end{equation*}
For $r\le C_{\mathcal{X}}$, it is easy to check that $d_2(z,B_2(x,r))\le r+s\le (C_{\mathcal{X}}+1)s$. Combining this case with the case $r>C_{\mathcal{X}}$, we prove that $(\mathcal{X}, d_{2})$ satisfies~\eqref{monotone}, which completes the proof.
\end{proof}
\begin{remark}
\emph{By the above observations, we can see that any left-invariant metric, defined on a polynomial volume growth group $G$, which is $(1,C)$-quasi-isometric to a word metric satisfies the $(\epsilon,1)$-annular decay property. Thus Theorem \ref{main-thm2} holds for any metric that is $(1,C)$-quasi-isometric to a word metric.}
\end{remark}
{Motivated by Property $(M)$, we can introduce Property $(M_{r_0})$: there exists a positive constant $C<\infty$ such that the Hausdorff distance between any pair of balls with the same center and radii belonging to $[r,r+1]$ with $r>r_0$ is less than $C$.
Similarly to \cite[Theorem 4]{Tessera07}, one can show that if a doubling metric measure space $(X,d,\mu)$ has Property $(M_{r_0})$, then there exist $\theta>0$ and a constant $C>0$ such that for all $x\in X$, $r\in[r_0,\infty)$ and $s\in (0,r]$, we have $\mu(B_{r+s}\setminus B_{r})\le C\big({s}/{r}\big)^{\theta}\mu(B_r)$.} As shown in Proposition \ref{quasi-iso}, Property $(M_{r_0})$ is also stable under $(1,C)$-quasi-isometry.
\bigskip
{If a metric measure space satisfies the $(\epsilon,r_0)$-annular decay property for all $r_0>0$, then we say that it satisfies the $\epsilon$-annular decay property.}
This terminology, to the best of our knowledge, was introduced by Buckley~\cite[(1.1)]{Buck99} for metric spaces. A slight variant for manifolds was introduced by Colding and Minicozzi~\cite{col-Min98}, which they called the $\epsilon$-volume regularity property. In recent years the $\epsilon$-annular decay property has been widely exploited in harmonic analysis; see~\cite{Arr-Llo19, Aus-Rou13, Kin-Shu, Kin-Shu14, Lin-Nakai-Yang11, Zorin-Kranich20} for more details.
{The following example shows that there are metric measure spaces satisfying the $(\epsilon,r_0)$-annular decay property for some $r_0$ but not for all $r_0>0$.}
\begin{example}
Fix a positive integer $r_0$. Let $\mathcal{X}=\mathbb{Z}$ endowed with the counting measure $\mu$. The metric $d$ is given by
\begin{equation*}
d(x,y)=\left\{
\begin{array}{ll}
0, & \hbox{$x=y$,} \\
r_0/2,&\hbox{$0<|x-y|\le r_0/2$,}\\
\max\{|x-y|,r_0\}, & \hbox{$|x-y|>r_0/2$.}
\end{array}
\right.
\end{equation*}
One can check that for any $k\in (-\infty,-1)\cap\mathbb{Z}$, $\mu(B(0,r_0)\setminus B(0,r_0-2^k))=r_0$; it follows that this metric space satisfies the $(\epsilon,r_0)$-annular decay property but not the $\epsilon$-annular decay property.
\end{example}
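The computation in the example can be checked directly against the counting measure; a minimal sketch, taking $r_0$ even so that the shell count is exactly $r_0$:

```python
def dist(x, y, r0):
    """The metric of the example on Z (r0 a fixed positive integer)."""
    a = abs(x - y)
    if a == 0:
        return 0
    if a <= r0 / 2:
        return r0 / 2
    return max(a, r0)

def ball(c, r, r0, window):
    # B(c, r) intersected with [-window, window]; window = 4*r0 suffices here
    return {x for x in range(-window, window + 1) if dist(c, x, r0) <= r}

r0 = 10  # taken even, so that the shell count below is exactly r0
for k in (-2, -3, -4):  # representatives of k in (-inf, -1) ∩ Z
    shell = ball(0, r0, r0, 4 * r0) - ball(0, r0 - 2 ** k, r0, 4 * r0)
    assert len(shell) == r0
```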
In \cite{Buck99}, Buckley proved that a metric measure space $(\mathcal{X},d,\mu)$ has the $\epsilon$-annular decay property when it satisfies the measure doubling condition and the $(\a,\b)$-chain ball property. {Lin, Nakai and Yang~\cite{Lin-Nakai-Yang11} established the $\epsilon$-annular decay property for a metric measure space $(\mathcal{X},d,\mu)$ satisfying the measure doubling condition and the weak geodesic property (i.e., \eqref{monotone} holds for all $s>0$)}. Actually, Lin {\it et al.} proved that the weak geodesic property is equivalent to the $(\a,\b)$-chain ball property, and is also equivalent to the monotone geodesic property. A typical class of metric spaces having the $(\a,\b)$-chain ball property (or the weak geodesic property) is the class of length spaces. The following proposition was established in \cite[Corollary 2.2]{Buck99}.
\begin{prop}\label{length space}
If $(\mathcal{X},d,\mu)$ is a length space (a metric space in which the distance between any two points is the infimum of the lengths of all curves joining them) and satisfies the measure doubling condition, then $(\mathcal{X},d,\mu)$ satisfies the $\epsilon$-annular decay property.
\end{prop}
Basic examples of length spaces include the graphs defined in~\cite[Page 51]{Tessera07}, homogeneous groups endowed with homogeneous metrics (such as ${\mathbb R}^n$ and the Heisenberg group $\mathbb{H}^n$), Riemannian metric spaces, Finsler metric spaces, and sub-Riemannian and sub-Finsler metric spaces (also called Carnot-Carath\'{e}odory metric spaces). For more examples we refer the reader to the book~\cite{Gromov99}.
In addition to length spaces, the RD-spaces with condition ($H_\a$), $\a\in(0,1]$, also satisfy the $\epsilon$-annular decay property; see e.g. \cite[Example 4.1(iii)]{Lin-Nakai-Yang11} for the assertion and the related notions.
\bigskip
Finally, we focus on periodic metric spaces. Recall that a pseudodistance $d$ on a locally compact group $G$ is called a periodic metric if it satisfies the following properties:
\begin{enumerate}[\noindent]
\item~{(i)}~$d$ is invariant under left translations by a closed co-compact subgroup $H$, meaning that for all $x,y\in G$ and all $h\in H$, $d(hx,hy)=d(x,y)$;
\item~{(ii)}~$d$ is locally bounded and proper;
\item~{(iii)}~$d$ is asymptotically geodesic.
\end{enumerate}
For more information about periodic metrics we refer the reader to~\cite[Section 4]{Breuillard14}. Moreover, the following result is proved in \cite{Breuillard14}: there exist two constants $d_G>0$ and $c_d>0$ such that
\begin{equation*}
\lim_{r\rightarrow\infty}\frac{m(B_r)}{r^{d_G}}=c_d,
\end{equation*}
where $B_r$ is the ball of radius $r$ given by the periodic metric in a polynomial growth group $G$. From the above identity, we can see that, equipped with a left-invariant periodic metric $d$, $(G,d,m)$ satisfies the measure doubling condition and~\eqref{asymptotically invariant}. However, condition~\eqref{decay property} seems quite inaccessible from the above estimate alone. {Nevertheless, all the periodic metrics provided in~\cite[Section 4]{Breuillard14} satisfy condition~\eqref{decay property}.}
\begin{example}
\begin{enumerate}[\noindent]
\item~{(i)}~Let $G$ be a polynomial volume growth group with a compact symmetric generating set. {The corresponding word metric is a periodic metric.}
\item~{(ii)}~Let $G$ be a simply connected nilpotent Lie group. Let $\Gamma$ be a finitely generated torsion free nilpotent group which is embedded as a co-compact discrete subgroup of $G$. We denote by $V$ the generating set of $\Gamma$ and $d_V$ the word metric on $\Gamma$. {The metric $d$ on $G$ which is defined by $d(x,y)=d_V(h_x,h_y)$ is a periodic metric, where $x\in h_xF$ and $y\in h_yF$ and $F$ is some fixed fundamental domain for $\Gamma$ in $G$.}
\item~{(iii)}~Let $G/\Gamma$ be a nilmanifold with universal cover $G$ and fundamental group $\Gamma$. Let $d$ be a Riemannian metric on $G/\Gamma$. {The Riemannian metric $\tilde{d}$ on $G$ which is extended by Riemannian metric $d$ is a periodic metric.}
\item~{(iv)}~Let $G$ be a connected Lie group. {The left invariant Carnot-Carath\'{e}odory metric or left invariant Riemannian metric on $G$ is a periodic metric.}
\end{enumerate}
\end{example}
Note that Examples (i) and (ii) are spaces endowed with word metrics, so by Proposition~\ref{word} such spaces satisfy the $(\epsilon,1)$-annular decay property; Examples (iii) and (iv) are actually length spaces, and so by Proposition~\ref{length space} such spaces satisfy the $\epsilon$-annular decay property.
In view of the above examples, one cannot expect the $\epsilon$-annular decay property for all periodic metrics, but the following question remains open; an affirmative answer would yield the quantitative ergodic theorems for all periodic metrics on polynomial growth groups.
\begin{que}
Let $G$ be a polynomial growth group endowed with a left-invariant periodic metric $d$ and a Haar measure $m$. Does there exist $r_0>0$ such that $(G,d,m)$ satisfies the $(\epsilon,r_0)$-annular decay property?
\end{que}
Combining Proposition~\ref{quasi-iso} with the proof of Corollary 1.6 in~\cite{Breuillard14}, we can see that in order to prove the $(\epsilon,r_0)$-annular decay property for a polynomial growth group $G$ endowed with a left-invariant periodic metric, it remains to treat the case where $G$ is a simply connected solvable Lie group of polynomial growth.
\section{Exponential decay estimates}\label{ST8}
In this section, we establish the jump estimates with exponential decay, namely Theorem~\ref{exponential-estimate}. Throughout this section, let $G$ be a group of polynomial growth with a symmetric finite generating set $V$, and let $\mathbf{A}^\prime=\{A^\prime_r:r\in\mathbb{N}\}$ be the sequence of averaging operators given by~\eqref{averaging operator1}. {It would certainly be interesting to establish Theorem~\ref{exponential-estimate} in a more general setting; however, we do not know how to prove it there.}
We start with several lemmas.
The first one is a transference principle, which can be established verbatim by the arguments in the proof of Theorem~\ref{thm:trans}(i); we omit the details.
\begin{lemma}\label{flu}
Let $p\in[1,\infty)$. Let $T$ be an action induced by a $\mu$-preserving measurable transformation $\tau$ on $X$. Let $\mathbf{A}=\{A_r:r\in\mathbb{N}\}$
be the sequence of averaging operators given by~\eqref{averaging operator}. If there exist two constants $C_\lambda>0$ and $c_\lambda\in (0,1)$ such that for every $n\in\mathbb N$ and $F\in L^p(G,m)$,
\begin{equation*}
m\big(\{g\in G:\mathcal{N}_{\lambda}(\mathbf{A}^\prime F)(g)>n\}\big)\le C_{\lambda}c_{\lambda}^n\|F\|^p_{L^p(G,m)},
\end{equation*}
then for every $n\in\mathbb N$ and $f\in L^p(X,\mu)$,
\begin{equation*}%
\mu\big(\{x\in X:\mathcal{N}_{\lambda}(\mathbf{A}f)(x)>n\}\big)\le C_{\lambda}c_{\lambda}^n\|f\|^p_{L^p(X,\mu)}.
\end{equation*}
\end{lemma}
With the above transference principle, it suffices to show Theorem~\ref{exponential-estimate} when $X=G$.
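For a finite sequence, the $\lambda$-jump counting function $\mathcal{N}_\lambda$ can be computed exactly by a quadratic-time dynamic program over chains of indices (a simple left-to-right greedy scan can undercount, since the choice of starting index matters); a minimal sketch with illustrative data:

```python
def jump_count(a, lam):
    """N_lambda of a finite sequence: the largest N for which there exist
    indices r_0 < ... < r_N with |a[r_i] - a[r_{i-1}]| > lam for each i,
    computed by an O(n^2) dynamic program over chains ending at each index."""
    best = [0] * len(a)  # best[i]: most jumps of a chain ending at index i
    for i in range(len(a)):
        for j in range(i):
            if abs(a[i] - a[j]) > lam:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

assert jump_count([0, 2, 1.5, 3], 1) == 2  # chain 0 -> 1.5 -> 3 beats greedy
assert jump_count([0, 2, 0, 2], 1) == 3
assert jump_count([0, 0.5, 1], 1) == 0     # jumps must strictly exceed lambda
```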
The second lemma that we need is a trivial jump estimate following from Theorem~\ref{main-thm2}.
\begin{lem}\label{averaging-oper}
For every $p\in[1,\infty)$, there exists a constant $c_p>0$ such that for all $F\in L^p(G,m)$,
\begin{equation*}
m\big(\{g\in G:\mathcal{N}_{\lambda}(\mathbf{A}^\prime F)(g)>n\}\big)\le \frac{c_p}{\big(\lambda\sqrt{n}\big)^p}\|F\|^p_{L^p(G,m)}.
\end{equation*}
\end{lem}
To present the next two lemmas, we fix $\lambda>0$ and $F\in L^p(G,m)$ with $\|F\|_{L^\infty}\le 1$. For $q\in\mathbb{N}$, define
\begin{equation*}
\mathcal{F}_q= \mathcal{F}_q(\lambda,F)=\{x\in G:\mathcal{N}_{\lambda}(\mathbf{A}^\prime F)(x)>q\},
\end{equation*}
and
\begin{equation*}
\mathcal{F}^\prime_q(F,\lambda)=\{(x,r_0):x\in G,\exists~1\le r_0<r_1<\cdots<r_q~\textit{such~that}~\min_{1\le i\le q}|A^\prime_{r_i}F(x)-A^\prime_{r_{i-1}}F(x)|>\lambda\}.
\end{equation*}
Let $\mathcal{G}^\prime_q=\mathcal{G}^\prime_q(F,\lambda)=\{B(x,r_0):(x,r_0)\in \mathcal{F}^\prime_q(F,\lambda)\}$ and $\mathcal{G}_q=\mathcal{G}_q(F,\lambda)=\cup_{B\in \mathcal{G}^\prime_q}B$. It is clear that $\mathcal{F}_q\subseteq \mathcal{G}_q$.
Set
$$C_{V,\lambda}=\min\{1/(8c_V),1/\lambda\},$$
$$\Phi(q)=2^p\cdot3^{D_G}c_pC^4_VC_{V,\lambda}^{-\frac{D_G}{\theta}}\lambda^{-\frac{D_G}{\theta}-p}q^{-\frac{p}{2}},$$
where the constants $D_G$, $C_V$, $c_V$ and $\theta$ were given in Section~\ref{ST7}.
\begin{lemma}\label{measure estimate of Gq}
For any $q\in\mathbb{N}$, one has
\begin{equation*}
m(\mathcal{G}_q(F,\lambda))\le \Phi(q)\|F\|^p_{L^p(G,m)}.
\end{equation*}
\end{lemma}
\begin{proof}
We first prove the following inequality:
\begin{equation}\label{con}
m(B(x,r))\le C^2_V(C_{V,\lambda}\lambda)^{-{D_G}/\theta} m(B(x,(C_{V,\lambda}\lambda)^{1/\theta}r)),~\forall~r\in\mathbb{N}.
\end{equation}
Indeed, if $(C_{V,\lambda}\lambda)^{1/\theta}r<1$, since $m$ is a counting measure, then $ m(B(x,(C_{V,\lambda}\lambda)^{1/\theta}r))=1$. By~\eqref{ball-ineq1}, we have $m(B(x,r))\le C_V(C_{V,\lambda}\lambda)^{-\frac{D_G}{\theta}}m(B(x,(C_{V,\lambda}\lambda)^{1/\theta}r))$. If $(C_{V,\lambda}\lambda)^{1/\theta}r\ge1$, using~\eqref{ball-ineq1} again, we have $m(B(x,r))\le C^2_V(C_{V,\lambda}\lambda)^{-\frac{D_G}\theta} m(B(x,(C_{V,\lambda}\lambda)^{1/\theta}r))$, and so~\eqref{con} is proved.
Applying the Vitali covering lemma, we can select a subset
$$\{B_{j_1},B_{j_2}, \cdots, B_{j_n},\cdots\}\subseteq\mathcal{G}^\prime_q$$
of pairwise disjoint balls satisfying
\begin{equation}\label{Vitali covering}
\mathcal{G}_q\subseteq\cup_{i} 3B_{j_i}.
\end{equation}
For each ball $B_{j_i}=B(x_{j_i},r_{j_i})$ selected, by the definition of $\mathcal{G}^\prime_q$, there exists a sequence $1\le r_{j_{i}}<r_{j_{i}+1}<\cdots<r_{j_{i}+q}$ such that
\begin{equation*}
|A^\prime_{r_{j_{i}+k}}F(x_{j_i})-A^\prime_{r_{j_{i}+k-1}}F(x_{j_i})|>\lambda,~\forall~1\le k\le q.
\end{equation*}
Now we fix such a ball $B(x_{j_i},r_{j_{i}})$ and sequence $1\le r_{j_{i}}<r_{j_{i}+1}<\cdots<r_{j_{i}+q}$. We claim that for all $y\in B(x_{j_i},(C_{V,\lambda}\lambda)^{1/\theta}r_{j_{i}})$ and $1\le k\le q$,
\begin{equation*}
|A^\prime_{r_{j_{i}+k}}F(y)-A^\prime_{r_{j_{i}+k-1}}F(y)|>\lambda/2,
\end{equation*}
namely $B(x_{j_i},(C_{V,\lambda}\lambda)^{1/\theta}r_{j_{i}})\subseteq \mathcal{F}_q\big(\frac{\lambda}{2},F\big)$.
Assume this claim momentarily. We have
\begin{align*}
m\big(\cup_{i} B_{j_i}\big)&=\sum_im(B_{j_i})\\
&\le C^2_V\big(C_{V,\lambda}\lambda\big)^{-\frac{D_G}{\theta}}\sum_i m\bigg(B\big(x_{j_i},(C_{V,\lambda}\lambda)^{1/\theta}r_{j_{i}}\big)\bigg)\\
&\le C^2_V\big(C_{V,\lambda}\lambda\big)^{-\frac{D_G}{\theta}} m\bigg(\cup_iB\big(x_{j_i},(C_{V,\lambda}\lambda)^{1/\theta}r_{j_{i}}\big)\bigg)\\
&\leq C^2_V\big(C_{V,\lambda}\lambda\big)^{-\frac{D_G}{\theta}}m\bigg(\mathcal{F}_q\big(\frac{\lambda}{2},F\big)\bigg)\le C^2_V\big(C_{V,\lambda}\lambda\big)^{-\frac{D_G}{\theta}}\frac{2^pc_p}{\big(\lambda\sqrt{q}\big)^p}\|F\|^p_{L^p},
\end{align*}
where {the equality} follows from the disjointness of the balls $B_{j_i}$, the first inequality follows from~\eqref{con}, the second inequality follows from the fact $C_{V,\lambda}\lambda\le 1$ and the disjointness of the balls $B_{j_i}$, the third inequality follows from the claim and the last inequality follows from Lemma~\ref{averaging-oper}.
Moreover, combining the above inequality with~\eqref{Vitali covering}, we have
\begin{equation*}
m(\mathcal{G}_q)\le m(\cup_{i} 3B_{j_i})\le 3^{D_G}C^2_V \sum_im(B_{j_i})\le 2^p\cdot3^{D_G}c_pC^4_VC_{V,\lambda}^{-\frac{D_G}{\theta}}\lambda^{-\frac{D_G}{\theta}-p}q^{-\frac{p}{2}}\|F\|^p_{L^p},
\end{equation*}
and the conclusion is proved.
We now prove the claim. Fix $y\in B(x_{j_i},(C_{V,\lambda}\lambda)^{1/\theta}r_{j_{i}})$.
By a simple geometric argument, one checks at once that for every $0\le k\le q$, $B(x_{j_i},r_{j_i+k})\triangle B(y,r_{j_i+k})$ is contained in
\begin{equation*}
\bigg(B\big(x_{j_i},r_{j_i+k}+(C_{V,\lambda}\lambda)^{1/\theta}r_{j_i}\big)\setminus B(x_{j_i},r_{j_i+k})\bigg)\cup\bigg( B(y,r_{j_i+k}+(C_{V,\lambda}\lambda)^{1/\theta}r_{j_i})\setminus B(y,r_{j_i+k})\bigg).
\end{equation*}
Using the inequality~\eqref{shell} and the fact that the measure $m$ is translation invariant, the above inclusion implies
\begin{equation*}
\begin{split}
m(B(x_{j_i},r_{j_i+k})\triangle B(y,r_{j_i+k}))&\le m\bigg(B(x_{j_i},r_{j_i+k}+(C_{V,\lambda}\lambda)^{1/\theta}r_{j_i})\setminus B(x_{j_i},r_{j_i+k})\bigg)\\
&+m\bigg(B(y,r_{j_i+k}+(C_{V,\lambda}\lambda)^{1/\theta}r_{j_i})\setminus B(y,r_{j_i+k})\bigg)\\
&\le 2c_V\bigg(\frac{(C_{V,\lambda}\lambda)^{1/\theta}r_{j_i}}{r_{j_i+k}}\bigg)^{\theta}m(B_{r_{j_i+k}})\\
&\le \frac{\lambda}{4}m(B_{r_{j_i+k}}).
\end{split}
\end{equation*}
Combining the above inequality with $\|F\|_{L^{\infty}}\le 1$, one has
\begin{equation}\label{controlled by averaging operator}
\begin{split}
\bigg|\int_{B(y,r_{j_i+k})}F(z)dm(z)-\int_{B(x_{j_i},r_{j_i+k})}F(z)dm(z)\bigg|&\le \int_{B(x_{j_i},r_{j_i+k})\triangle B(y,r_{j_i+k})}|F(z)|dm(z)\\
&\le \frac{\lambda}{4}m(B_{r_{j_i+k}}).
\end{split}
\end{equation}
It follows that
\begin{equation*}
|A^\prime_{r_{j_i+k}}F(x_{j_i})-A^\prime_{r_{j_i+k}}F(y)|\le \frac{\lambda}{4},~\forall~0\le k\le q.
\end{equation*}
Using the triangle inequality, for every $1\le k\le q$, we have
\begin{align*}
|A^\prime_{r_{j_i+k}}F(x_{j_i})-A^\prime_{r_{j_i+k-1}}F(x_{j_i})|&\le |A^\prime_{r_{j_i+k}}F(y)-A^\prime_{r_{j_i+k-1}}F(y)|+|A^\prime_{r_{j_i+k}}F(x_{j_i})-A^\prime_{r_{j_i+k}}F(y)|\\
&+|A^\prime_{r_{j_i+k-1}}F(x_{j_i})-
A^\prime_{r_{j_i+k-1}}F(y)|\\
&\le |A^\prime_{r_{j_i+k}}F(y)-A^\prime_{r_{j_i+k-1}}F(y)|+\lambda/2.
\end{align*}
Since $|A^\prime_{r_{j_i+k}}F(x_{j_i})-A^\prime_{r_{j_i+k-1}}F(x_{j_i})|>\lambda$, the above inequality gives $|A^\prime_{r_{j_i+k}}F(y)-A^\prime_{r_{j_i+k-1}}F(y)|>\lambda/2$, and the claim is proved.
\begin{comment}
For each $\beta>\alpha$ and ball $B_{j_i}=B(x_{j_i},r_{j_i})$, our purpose is to prove that there exists a constant $\tilde{c}>0$ such that $\tilde{c}(\beta-\alpha)< 1$ and
\begin{equation}\label{controlled the ball}
B(x_{j_i},(\tilde{c}(\beta-\alpha))^{1/\theta}r_{j_i})\subseteq \mathcal{F}_q(\alpha^\prime,\beta^\prime),
\end{equation}
where $\theta>0$ is a constant defined in~\eqref{shell}.
If~\eqref{controlled the ball} is proved. Note that $m$ is a counting measure on $G$, using the doubling property~\eqref{int}, there exists a positive constant $C_m$ such that
\begin{equation*}
m(B(x_{j_i},r_{j_i}))\le C_m\bigg(\frac{1}{\tilde{c}(\beta-\alpha)}\bigg)^{c_m/\theta}m(B(x_{j_i},(\tilde{c}(\beta-\alpha))^{1/\theta}r_{j_i})).
\end{equation*}
Summing this over $j_i$, using \eqref{controlled the ball} and the fact that the balls $B_{j_i}$ are disjoint, we obtain
\begin{equation*}
\begin{split}
m\big(\cup_{i} B_{j_i}\big)&\le C_m\bigg(\frac{1}{\tilde{c}(\beta-\alpha)}\bigg)^{c_m/\theta}m\bigg(\cup_{j_i}B(x_{j_i},(\tilde{c}(\beta-\alpha))^{1/\theta}r_{j_i})\bigg)\\
&\le C_m\bigg(\frac{1}{\tilde{c}(\beta-\alpha)}\bigg)^{c_m/\theta}\mathcal{F}_q(\alpha^\prime,\beta^\prime),
\end{split}
\end{equation*}
and so we have~\eqref{measure eastimate}.
First, using the Vitali covering lemma, we can select a sequence of pairwise disjoint balls
$$\{B_{j_1},B_{j_2}, \cdots, B_{j_i},\cdots\}\subseteq\mathcal{K}.$$
We claim that
\begin{equation*}
m(\cup_{B\in\mathcal{B}}B)\le 3^DC^2_V\Phi(q)m(\cup_{j_i}B_{j_i}).
\end{equation*}
Note that~\eqref{measure control} follows immediately from this claim.
We now prove the claim.
\end{comment}
\end{proof}
\begin{lemma}\label{comparing lemma}
For positive integers $p$ and $q$, one has
\begin{equation*}
m(\mathcal{G}_{(p+1)q}(F,\lambda))\le 3^{D_G}C^2_V\Phi(q)m(\mathcal{G}_{pq}(F,\lambda)).
\end{equation*}
\end{lemma}
The proof of this lemma is inspired by the proof of \cite[inequality (5.7)]{JKRW98}.
\begin{proof}
For every ball $B=B(x,r)\in \mathcal{G}^\prime_{(p+1)q}$, by the definition of $\mathcal{G}^\prime_{(p+1)q}$, there exists a sequence $r=r_0<r_1<\cdots<r_{(p+1)q}$ such that
\begin{equation*}
|A^\prime_{r_k}F(x)-A^\prime_{r_{k-1}}F(x)|>\lambda,~\forall~1\le k\le (p+1)q.
\end{equation*}
So we have $B(x,r_{q})\in\mathcal{G}^\prime_{pq}$. Write $\widetilde{B}=B(x,r_{q})$. Set $\mathcal{B}=\mathcal{G}^\prime_{(p+1)q}$ and
\begin{equation*}
\mathcal{B}^\prime=\{B^\prime\in\mathcal{G}^\prime_{pq}: B^\prime=\widetilde{B}~\text{for some}~B\in\mathcal{B}\}.
\end{equation*}
Note that $\mathcal{B}^\prime\subseteq \mathcal{G}^\prime_{pq}$; hence, by the definition of $\mathcal{G}_{(p+1)q}$, the proof is finished once we show
\begin{equation}\label{measure control}
m(\cup_{B\in\mathcal{B}}B)\le 3^{D_G}C^2_V\Phi(q)m(\cup_{B^\prime\in\mathcal{B}^\prime}B^\prime).
\end{equation}
We now focus on the above inequality. Before proving this estimate, we introduce some notation. Let $B_1$ be a ball of maximal size in $\mathcal{B}^\prime$. We set $\mathcal{B}_1=\mathcal{B}$, $\mathcal{B}^\prime_1=\mathcal{B}^\prime$ and define the sets
\begin{align*}
&\mathcal{I}_1=\{B|B\in\mathcal{B}_1, \widetilde{B}\cap B_1\neq \emptyset\},\\
&\mathcal{I}^\prime_1=\{B^\prime|B^\prime\in\mathcal{B}^\prime_1,B^\prime\cap B_1\neq \emptyset\}.
\end{align*}
We first prove the following estimate
\begin{equation}\label{measure control-1}
m(\cup_{B\in\mathcal{I}_1}B)\le \Phi(q)m(3B_1),
\end{equation}
where $3B_1$ denotes the ball with the same center as $B_1$ and its radius is $3$ times that of $B_1$.
Fix $B=B(x,r)\in\mathcal{I}_1$. Since $B(x,r)\in\mathcal{G}_{(p+1)q}^\prime$, then there exists $r=r_0<r_1<\cdots<r_{(p+1)q}$ such that for all $1\le k\le (p+1)q$,
\begin{equation*}
|A^\prime_{r_k}F(x)-A^\prime_{r_{k-1}}F(x)|>\lambda.
\end{equation*}
Since $B_1$ is a ball of maximal size in $\mathcal{B}^\prime$ and $B_1\cap B(x,r_{q})\neq\emptyset$, we have $B(x,r_k)\subseteq 3B_1$ for all $0\le k\le q$. It follows that
\begin{equation*}
|A^\prime_{r_k}(F\mathds{1}_{3B_1})(x)-A^\prime_{r_{k-1}}(F\mathds{1}_{3B_1})(x)|>\lambda,~\forall~1\le k\le q.
\end{equation*}
Hence $B\in\mathcal{G}^\prime_q(F\mathds{1}_{3B_1},\lambda)$. Combining this observation with Lemma~\ref{measure estimate of Gq} and~\eqref{ball-ineq1}, we have
\begin{align*}
m(\cup_{B\in\mathcal{I}_1}B)&\le m\big(\cup_{B\in\mathcal{G}^\prime_q(F\mathds{1}_{3B_1},\lambda)} B\big)= m({\mathcal{G}_q(F\mathds{1}_{3B_1},\lambda)})\le \Phi(q)\|F\mathds{1}_{3B_1}\|^p_{L^p(G,m)}\\
&\le \Phi(q)m(3B_1)\le 3^{D_G}C^2_V\Phi(q)m(B_1).
\end{align*}
So~\eqref{measure control-1} is proved.
Let $\mathcal{B}_2=\mathcal{B}_1\setminus\mathcal{I}_1$ and $\mathcal{B}^\prime_2=\mathcal{B}^\prime_1\setminus\mathcal{I}^\prime_1$, and select a ball $B_2$ of maximal size in $\mathcal{B}^\prime_2$. Note that $B_1\in \mathcal{I}^\prime_1$, so $B_2$ is disjoint from $B_1$. We define the sets
\begin{align*}
&\mathcal{I}_2=\{B|B\in\mathcal{B}_2, \widetilde{B}\cap B_2\neq \emptyset\},\\
&\mathcal{I}^\prime_2=\{B^\prime|B^\prime\in\mathcal{B}^\prime_2,B^\prime\cap B_2\neq \emptyset\}.
\end{align*}
Arguing as in the proof of~\eqref{measure control-1}, we have
\begin{equation*}
m(\cup_{B\in\mathcal{I}_2}B)\le \Phi(q)m(3B_2)\le 3^{D_G}C^2_V\Phi(q)m(B_2).
\end{equation*}
Repeating the above process, we select pairwise disjoint balls $B_1,~B_2,~\cdots$ belonging to $\mathcal{B}^\prime$, together with sets $\mathcal{I}_1,~\mathcal{I}_2,\cdots$ and $\mathcal{I}^\prime_1,~\mathcal{I}^\prime_2,\cdots$ satisfying
\begin{equation*}
\cup_i\mathcal{I}_i=\mathcal{B},~\cup_i\mathcal{I}^\prime_i=\mathcal{B}^\prime,~m(\cup_{B\in\mathcal{I}_i}B)\le 3^{D_G}C^2_V\Phi(q)m(B_i).
\end{equation*}
Summing the latter inequality over $i$ and using the fact that the balls $\{B_i\}_i$ are pairwise disjoint, we have
\begin{align*}
m(\cup_{B\in\mathcal{B}}B)&=m(\cup_i\cup_{B\in\mathcal{I}_i}B)\le \sum_im(\cup_{B\in\mathcal{I}_i}B)\le 3^{D_G}C^2_V\Phi(q)\sum_im(B_i)\\
&= 3^{D_G}C^2_V\Phi(q)m(\cup_iB_i)\le 3^{D_G}C^2_V\Phi(q)m(\cup_{B^\prime\in\mathcal{B}^\prime}B^\prime).
\end{align*}
This proves~\eqref{measure control}, and the lemma follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{exponential-estimate}]
By Lemma~\ref{flu}, it suffices for our purpose to show that for any $\lambda>0$, there exist constants $\tilde{c}_{1}>0$ and $\tilde{c}_{2}\in(0,1)$ such that for any $n\in\mathbb N$ and any $F\in L^p(G,m)$ with $\|F\|_{L^\infty}\le 1$,
\begin{equation*}
m\big(\{g\in G:\mathcal{N}_{\lambda}(\mathbf{A}^\prime F)(g)>n\}\big)\le \tilde{c}_{1}\tilde{c}_{2}^n\|F\|^p_{L^p(G,m)}.
\end{equation*}
From now on, fix $\lambda>0$ and $F\in L^p(G,m)$ with $\|F\|_{L^\infty}\le 1$. In view of Lemma~\ref{measure estimate of Gq}, we may set $q_0=\min\{q\in\mathbb{N}:\Phi(q)\le \frac{1}{2}\}$.
For each $n\ge 1$, we distinguish two cases: $n\ge q_0$ and $1\le n<q_0$.
We first consider the case $1\le n<q_0$. By Lemma~\ref{averaging-oper}, we can take $\tilde{c}_1={2c_p}/{\lambda^p}$ and $\tilde{c}_2=\big({1}/{2}\big)^{{1}/{q_0}}$.
It remains to consider the case $n\ge q_0$. Write $n=sq_0+r$ with $0\le r<q_0$. Using Lemma~\ref{comparing lemma}, we have
\begin{equation*}
m(\mathcal{F}_n)\le m(\mathcal{F}_{sq_0})\le m(\mathcal{G}_{sq_0})\le \bigg(\frac{1}{2}\bigg)^{s-1}m(\mathcal{G}_{q_0}).
\end{equation*}
On the other hand, using Lemma~\ref{measure estimate of Gq} again, we have
\begin{equation*}
m(\mathcal{G}_{q_0})\le\Phi(q_0)\|F\|^p_{L^p}\le \frac{1}{2}\|F\|^p_{L^p}.
\end{equation*}
Since $s=(n-r)/q_0$, the above estimates give
\begin{equation*}
m(\mathcal{F}_n)\le\bigg(\frac{1}{2}\bigg)^{-r/q_0}\bigg(\frac{1}{2}\bigg)^{n/q_0}\|F\|^p_{L^p},
\end{equation*}
and so we can set $\tilde{c}_1=2$ and $\tilde{c}_{2}=({1}/{2})^{1/q_0}$ in this case.
Finally, we can take
$$\tilde{c}_1=\max\{2, 2c_p/{\lambda}^p\},~\tilde{c}_2=({1}/{2})^{{1}/{q_0}},$$
and the proof is complete.
\end{proof}
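The arithmetic behind the case $n\ge q_0$ can be checked mechanically. The snippet below is an illustrative sanity check (not part of the proof): writing $n=sq_0+r$ with $0\le r<q_0$, it verifies that $(1/2)^{s}\le 2\cdot(1/2)^{n/q_0}$, which is why the constants $\tilde c_1=2$ and $\tilde c_2=(1/2)^{1/q_0}$ suffice in this case.

```python
def geometric_bound_holds(n, q0):
    """Check the bound used in the proof: with n = s*q0 + r, 0 <= r < q0,
    (1/2)**s = (1/2)**(-r/q0) * (1/2)**(n/q0) <= 2 * (1/2)**(n/q0),
    because (1/2)**(-r/q0) <= 2 whenever 0 <= r < q0."""
    s, r = divmod(n, q0)
    lhs = 0.5 ** s
    rhs = 2 * 0.5 ** (n / q0)
    return lhs <= rhs
```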
\bigskip
{
"timestamp": "2021-04-07T02:23:38",
"yymm": "2104",
"arxiv_id": "2104.02635",
"language": "en",
"url": "https://arxiv.org/abs/2104.02635",
"abstract": "We strengthen the maximal ergodic theorem for actions of groups of polynomial growth to a form involving jump quantity, which is the sharpest result among the family of variational or maximal ergodic theorems. As a consequence, we deduce in this setting the quantitative ergodic theorem, in particular, the upcrossing inequalities with exponential decay. The ideas or techniques involve probability theory, non-doubling Calderón-Zygmund theory, almost orthogonality argument and some delicate geometric argument involving the balls and the cubes on the group equipped with a not necessarily doubling measure.",
"subjects": "Dynamical Systems (math.DS); Classical Analysis and ODEs (math.CA)",
"title": "Quantitative ergodic theorems for actions of groups of polynomial growth",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232889752296,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7093460376913258
} |
https://arxiv.org/abs/1311.4450
Counting subgraphs in hyperbolic graphs with symmetry

\section{Introduction}
This note addresses some questions that arise in the series of works
by Kyoji Saito on the growth functions of graphs \cite{Sa}, \cite{Sa2}.
We study ``hyperbolike'' graphs, which include Cayley graphs of
hyperbolic groups. We generalize some well-known results on hyperbolic
groups to the hyperbolike setting (Theorem \ref{theorem:rationality},
Theorem \ref{theorem:exponential}), including rationality of generating
functions, and sharp estimates on the growth rate of vertices.
We then apply these results to confirm a conjecture of Saito on the
``opposite series'', which was originally posed for hyperbolic groups
(Theorem \ref{main}, Corollary \ref{hyp}).
We also give a (standard) example of a hyperbolike graph with positive density of
dead ends, and point out its implications for the applicability of the
main theorems in \cite{Sa}.
\subsection{Acknowledgements}
We would like to thank Laurent Bartholdi, Pierre-Emmanuel Caprace,
Markus Pfeiffer, Kyoji Saito and Yasushi Yamashita
for helpful comments and discussions. Danny Calegari was partly supported by NSF grant DMS 1358592.
Koji Fujiwara was partly supported by a Grant-in-Aid for Scientific
Research No. 23244005.
\section{Generating functions of hyperbolike graphs}\label{section:definition_statement}
\begin{definition}[hyperbolike graph]
A connected graph $X$ of finite valence is {\em $\delta$-hyperbolike}
for some $\delta\ge 0$ if it satisfies the following properties:
\begin{enumerate}
\item{$X$ is $\delta$-hyperbolic; and}
\item{$\textnormal{Aut}(X)$ is transitive on the vertices.}
\end{enumerate}
\end{definition}
Condition (2) implies that all vertices have the same valence --- i.e.\/ $X$ is
regular. Moreover, by hypothesis, this (common) valence is finite. Thus $X$ is
proper as a (path) metric space.
\begin{example}[Hyperbolic group]
The main example of a hyperbolike graph is the Cayley graph of a hyperbolic group
with respect to a finite generating set. Different choices of generating sets
give rise to graphs which are $\delta$-hyperbolike for different $\delta$. Moreover,
the automorphism group of the graph depends on the choice of generating set. We
always have $G \subset \textnormal{Aut}(X)$ where $G$ acts (freely and transitively on the
vertices) on its Cayley graph by left multiplication.
\end{example}
\begin{example}[Free group]
Even for $X$ the Cayley graph of a hyperbolic group $G$, the group $\textnormal{Aut}(X)$ may be
much bigger than $G$. For example, we can take $G$ to be a free group, with
a free generating set. Then $X$ is a regular tree, and $\textnormal{Aut}(X)$ is uncountable.
\end{example}
\begin{example}[Quasi tree with parabolic symmetry group]\label{example:quasitree}
The following example was described to us by Pierre-Emmanuel Caprace. Let $T$ be a
$k$-regular tree (with $k\ge 3$ finite) and fix an end $e$ of $T$. For each vertex
$v$, let $\gamma_v$ denote the geodesic from $v$ to $e$, and let $v'$ denote the
vertex on $\gamma_v$ at distance $2$ from $v$. We obtain $X$ from $T$ by attaching
an edge from each vertex $v$ to the corresponding $v'$. Then $\textnormal{Aut}(X)$ is just the
subgroup of $\textnormal{Aut}(T)$ fixing $e$; in particular, it is vertex transitive, but not
unimodular, therefore there is no discrete subgroup acting cocompactly on $X$.
\end{example}
Pick a base point $x\in X$. Since $\textnormal{Aut}(X)$ is transitive, any two
choices are isomorphic. For any $n$ let $X_n$ denote the ball of
radius $n$ about $x$ in $X$ (i.e.\/ the complete subgraph spanned by
the vertices at distance $\le n$ from $x$).
We let $\textnormal{Aut}(X,x)$ denote the subgroup of $\textnormal{Aut}(X)$ fixing $x$. Evidently, $\textnormal{Aut}(X,x)$
fixes each subgraph $X_n$, so that there are homomorphisms
$$p_n:\textnormal{Aut}(X,x) \to \textnormal{Aut}(X_n)$$
and we can identify $\textnormal{Aut}(X,x)$ with the inverse limit
$$\textnormal{Aut}(X,x) = \varprojlim p_n(\textnormal{Aut}(X,x))$$
In particular, $\textnormal{Aut}(X,x)$ is compact, and therefore either finite or uncountable.
In the first case, $\textnormal{Aut}(X)$ is itself finitely generated and a hyperbolic group, and
the orbit map to $X$ is a quasi-isometry. But in general we do not know the answer to
the following:
\begin{question}\label{question:group}
Let $X$ be hyperbolike. Is there a hyperbolic group $G$ quasi-isometric to $X$?
\end{question}
\begin{remark}
If one removes the hypothesis that $X$ be $\delta$-hyperbolic, the analogue of Question~\ref{question:group}
has a {\em negative} answer in general.
If $T_r$ and $T_s$ are regular trees of valence $r$, $s$ respectively (where $r\ne s$ and
$\infty > r,s > 2$), and if $h_r$, $h_s$ are horofunctions on $T_r$ and $T_s$ respectively,
the {\em Diestel--Leader graph} $DL(r,s)$ is the subgraph of the product $T_r\times T_s$
where $h_r+h_s=0$. These graphs were introduced in \cite{Diestel_Leader}.
Firstly, it was shown in
\cite{Bartholdi_Neuhauser_Woess} that
they do not admit a group action with finitely many orbits and finite vertex stabilizers, and
then it was shown in \cite{Eskin_Fisher_Whyte} that $DL(r,s)$ is not even quasi-isometric
to a Cayley graph.
The Diestel--Leader graphs are reminiscent of non-unimodular solvable groups, and it is harder to
imagine an analogue in the hyperbolic world.
\end{remark}
\begin{remark}
Random walks on hyperbolike graphs (and more general graphs with vertex transitive symmetry
groups, which might not be hyperbolic or finite valence) are studied in
\cite{Kaimanovich_Woess}.
\end{remark}
\begin{definition}
Let $Y,Z$ be any two {\em finite} graphs. Let $(Y| Z)$ denote the
number of distinct embeddings of $Y$ as a complete subgraph of $Z$.
For any finite graph $Y$, define the generating function
$$b_Y(t):= \sum (Y|X_n) t^n$$
In words: the coefficients of $b_Y(t)$ count the number of copies of $Y$
in the balls of each fixed radius in $X$.
\end{definition}
By abuse of notation, we can think of $x$ as a graph with 1 vertex, so
that $b_x(t)$ is the generating function for the sizes of the
balls $|X_n|$.
With this notation, we have the formula
$$b_{X_n}(t)/|\textnormal{Aut}(X_n)| = b_x(t)/t^n - \text{polar part at 0}$$
To see this, observe first that every embedding of
$X_n$ into $X$ as a complete subgraph (taking $x$ to $x$ without loss of
generality) has image equal to exactly $X_n$. For, every point in $X_n$
is within distance $n$ of $x$, so the image is contained in $X_n$. So
the claim follows by counting.
Then the formula follows, since embeddings of $X_n$ in $X_m$ up to
automorphisms are in bijection with points in $X_{m-n}$.
The following theorem generalizes a result in \cite{Ep2}:
\begin{theorem}[Rationality]\label{theorem:rationality}
Let $X$ be $\delta$-hyperbolike.
For any connected graph $Y$, let $b_Y(t)$ be the generating function
whose coefficient of $t^n$ is the number of distinct embeddings of $Y$ as
a complete subgraph of $X_n$. Then $b_Y(t)$ is rational.
\end{theorem}
The result is known for a Cayley graph of a hyperbolic group, \cite{Ep2}.
\section{Proof of the Rationality Theorem}
In this section we give the proof of Theorem~\ref{theorem:rationality}.
The argument borrows heavily from the well-known proof by Cannon \cite{Ca} in the
case of a hyperbolic group; but there are some subtleties, which are
worth spelling out now in informal language.
The main subtlety is the possibility that there are distinct geodesics
$\gamma$, $\gamma'$ between points $x$ and $y$, and some $\phi \in \textnormal{Aut}(X)$
with $\phi(\gamma)=\gamma'$. This situation can certainly occur: consider
a surface group with a presentation like $\langle a,b,c,d\; | \; [a,b][c,d]\rangle$.
The Cayley graph is the 1-skeleton of the tiling of $\mathbb H^2$ by regular octagons with
angles $\pi/4$ at the vertices. Two antipodal vertices of an octagon may be joined
by two distinct paths of length 4 in the Cayley graph, and these paths may
be interchanged by an automorphism of the graph.
This ambiguity makes it tricky to define a regular language of geodesics in bijection
with the elements of $X$. Simply put, there is no way to make such a choice
without breaking the symmetry --- in other words, without finding a subgroup $G$
of $\textnormal{Aut}(X)$ which is still (coarsely) transitive, but acts freely on some rich set
of (sufficiently long) geodesics. Such a subgroup does not exist in general
(e.g.\/ Example~\ref{example:quasitree}), and it is not clear what to use as a substitute;
morally this is the sort of issue we are raising with Question~\ref{question:group}.
\begin{definition}[Synchronous fellow travelers]
Let $\gamma$ and $\gamma'$ be two geodesics with the same initial vertex. Let
their lengths be $\ell$ and $\ell'$ respectively.
For any $T\ge 0$, the geodesics $\gamma$, $\gamma'$ are said to
{\em $T$-synchronously fellow travel} if
for all $i$ up to $\min(\ell,\ell')$, there is
an inequality
$$d(\gamma(\ell-i),\gamma'(\ell'-i))\le T$$
\end{definition}
\begin{definition}[Competitor]
Let $B_{2\delta+1}(y)$ be the ball of radius $(2\delta+1)$ about $y$ in $X$.
For any $y$, an element $z \in B_{2\delta+1}(y)$ is a {\em competitor} of $y$
if $d(x,z) \le d(x,y)$, and if {\em some} geodesic from $x$ to $z$ $(2\delta+1)$-synchronously
fellow travels {\em every} geodesic from $x$ to $y$.
\end{definition}
Note that with this definition, $y$ is a competitor of itself, since every geodesic
from $x$ to $y$ $(2\delta+1)$-synchronously fellow travels every (other)
geodesic from $x$ to $y$.
\begin{definition}[Tournament and tournament type]
A function $F$ from
the vertices of $B_{2\delta+1}(y)$ to $\mathbb Z$ is a {\em tournament} if it
satisfies the following conditions:
\begin{enumerate}
\item{for any $z$ there is an inequality $d(x,z) - d(x,y) \le F(z) \le d(y,z)$; and}
\item{if $z$ is a competitor of $y$, then $d(x,z) = d(x,y)+F(z)$.}
\end{enumerate}
Two tournaments $F:B_{2\delta+1}(y) \to \mathbb Z$ and $F':B_{2\delta +1}(y') \to \mathbb Z$
have the same {\em type} if there is an automorphism $\phi \in \textnormal{Aut}(X)$ with
$\phi(y)= y'$ so that $F = F' \circ \phi$.
\end{definition}
Note that if $F$ is a tournament then $|F(z)|\le 2\delta+1$ for all $z$, so
there are only finitely many types of tournament.
\begin{remark}\label{remark:tournament}
The meaning of a tournament is roughly as follows.
As we march along a path, we would like to know the relative distance from $x$
to the different elements $z$ in $B_{2\delta+1}(y)$ in order to certify that
we are really traveling along a geodesic. The problem is that it is hard to keep
track of relative distance to points $z$ that are on the periphery. So we keep track
of an {\em upper bound} on their relative distance (i.e.\/ the value $F(z)$),
which measures (roughly) the length of the shortest path from $x$ to $z$ which
stays (synchronously) close to the geodesic we have traveled along.
\end{remark}
\begin{definition}[cone and cone type]
The {\em cone} associated to a point $y$,
denoted $\textnormal{cone}(y)$, is the full
subgraph of $X$ consisting of points $z$ so that $d(x,z)=d(x,y)+d(y,z)$.
We say that $y$ and $y'$ have the same {\em cone-type} if there is
$\phi \in \textnormal{Aut}(X)$ taking $y$ to $y'$ and taking $\textnormal{cone}(y)$ to $\textnormal{cone}(y')$.
\end{definition}
The following lemma is the analogue of Cannon's key lemma, that $(2\delta+1)$-level
determines cone type in hyperbolic groups.
\begin{lemma}[Tournament determines cone type]\label{lemma:Cannon}
Let $X$ be $\delta$-hyperbolike with base point $x$. Let $y$ and $y'$
with tournaments $F$ and $F'$ be given.
Suppose there
is $\phi \in \textnormal{Aut}(X)$ with $F=F'\circ \phi$. Then
$\phi$ takes $\textnormal{cone}(y)$ to $\textnormal{cone}(y')$.
\end{lemma}
\begin{proof}
Let $z \in \textnormal{cone}(y)$. We need to show that $\phi(z) \in \textnormal{cone}(y')$.
This is proved by induction on $d(y,z)$. If $\gamma$ is a geodesic
from $y$ to $z$, and $w$ is the penultimate point on the geodesic,
then $\phi(w) \in \textnormal{cone}(y')$ by the induction hypothesis.
So if $\phi(z)$ is not in $\textnormal{cone}(y')$ we must have
$d(x,y')+d(y',\phi(z))\ge d(x,\phi(z))+1$. A geodesic from $x$ to
$\phi(z)$ must pass through $B_{\delta}(y')$ and therefore some
point on that geodesic must be a competitor to $y'$. Applying
$\phi^{-1}$ to the restriction of this geodesic gives a shortcut from
a corresponding competitor to $z$, contrary to the fact that $z$ is in
$\textnormal{cone}(y)$. So $\phi(z) \in \textnormal{cone}(y')$ as claimed.
\end{proof}
\begin{lemma}\label{lemma:tournament}
Let $y \in X$, and let $F:B_{2\delta+1}(y) \to \mathbb Z$ be a tournament.
Then for any $y' \in X$ with $d(y,y')=1$ and $d(x,y')=d(x,y)+1$ there is a tournament
$F':B_{2\delta+1}(y') \to \mathbb Z$ whose type depends only on the type of $F$
and the choice of $y'$ in the type of $\textnormal{cone}(y)$.
\end{lemma}
\begin{proof}
We construct $F'$ as follows. First, note that $d(x,y')=d(x,y)+1$ means that
there is a geodesic $\gamma$ from $x$ to $y'$ whose penultimate vertex is $y$.
Now, let $z' \in B_{2\delta+1}(y')$ be a competitor of $y'$. Thus, by definition,
there is some geodesic $\gamma'$ from $x$ to $z'$ which $(2\delta+1)$-synchronously
fellow travels $\gamma$.
Let $z$ be on $\gamma'$ with $d(z,z')=1$. Then by the definition of synchronous
fellow-traveling, $d(z,y)\le 2\delta+1$, and $z$ is a competitor of $y$.
So, we define
$$F'(z') = \min \lbrace F(z)+d(z,z')-1 \; | \; z \in B_{2\delta+1}(y) \rbrace$$
Then $F'(z') \le d(y',z')$ since $F(z) \le d(y,z)$.
Evidently, $d(x,z')=d(x,y')+F'(z')$ for every competitor $z'$ of $y'$. Moreover, for
every $z'$ there is some $z\in B_{2\delta+1}(y)$ with
$$d(x,z') - d(x,y') \le d(x,z) + d(z,z') - d(x,y) - 1 \le F(z) + d(z,z') - 1 = F'(z')$$
(take $z$ to attain the minimum in the definition of $F'(z')$; if $z'$ is a competitor of $y'$, then each $\le$ becomes an equality).
By definition, the type of $F'$ depends only on the type of $F$ and
the choice of $y'$ in the type of cone($y$).
\end{proof}
\begin{definition}[child and parent]
A point $y'$ is a {\em child} of $y$, and $y$ is a {\em parent} of $y'$, if
$d(y,y')=1$ and $d(x,y') = d(x,y)+1$.
\end{definition}
Every parent of $y'$ is a competitor of every other parent. Therefore the tournament
type of any parent $y$ determines the number of parents of each child of $y$.
We now give the proof of Theorem~\ref{theorem:rationality}.
\begin{proof}
Define a finite directed graph as follows. Each vertex corresponds to a possible
tournament type. There is a directed edge from the tournament type of
$y,F$ to the tournament type of $y',F'$ if $y'$ is a child of $y$, and $F'$ is
the tournament type constructed from the tournament type of $F$ by
Lemma~\ref{lemma:tournament}.
There is a unique vertex, the base vertex, for the tournament type for the base vertex $x$.
We take the connected component of the base vertex in the following argument.
We put a (rational) weight $w$ on the edge from $(y,F)$ to $(y',F')$ which is
equal to the reciprocal of the number of parents of $y'$. We need to show that this
is well-defined; i.e.\/ it can be determined from the tournament type of
$(y,F)$ and $(y',F')$.
In the ball $B_2(y)$, count the number of $z$ with $F(z)=0$ and $d(z,y')=1$.
We claim that these are in bijection with the parents of $y'$. For, if
$F(z)=0$ then $d(x,z) \le d(x,y)$ by definition, so if furthermore
$d(z,y')=1$ then $d(x,y') \le d(x,z) + 1 \le d(x,y) + 1 = d(x,y')$, so these
inequalities are equalities. Conversely, every parent $z$ of $y'$ satisfies $d(z,y')=1$ and is a
competitor of $y$, so $F(z)=0$ because $d(x,z)=d(x,y)$. This proves the claim, and shows that the weight
is well-defined.
Label the vertices of the graph by distinct integers, and define a
non-negative matrix $M$ whose $ij$ entry is equal to $w(e)$ if there is an edge
$e$ from vertex $i$ to vertex $j$, and $0$ otherwise. Let $\iota$ be the row vector
$(1,0,0\cdots 0)$ and let $\pmb{1}$ be the column vector whose entries are all $1$s.
Then there is a formula
$$b_x(t) = \sum_n (\iota M^n \pmb{1}) t^n$$
whose coefficients satisfy a finite linear recurrence (since $M$ is a root of its own
characteristic polynomial, by the Cayley--Hamilton theorem), and therefore
$b_x(t)$ is a rational function.
In \S~\ref{section:definition_statement} we saw that
$b_{X_n}(t)/|\textnormal{Aut}(X_n)|$ is derived from $b_x(t)/t^n$ by throwing away the
polar part at $0$. Thus $b_{X_n}(t)$ is also a rational function.
Finally, for any connected graph $Y$ we choose a base point $y \in Y$ and
an integer $n$ which is at least as big as the diameter of $Y$, and we count how
many copies of $Y$ there are in $X_n$ with $y$ at the center. For each of these
copies, we let $D$ be the least number so that $Y$ is in $X_D$, and call these the
{\em $D$-copies}. From these finitely many coefficients we can reconstruct
$b_Y(t)$ from $b_{X_n}(t)$ in an obvious way and express it as a finite linear
combination of series of the form $b_D(t)$ for $D\le n$.
\end{proof}
\begin{remark}[explanation of the formula]
Let $v_0$ be the base vertex of the directed graph $\Gamma$
in the argument.
Lemma \ref{lemma:tournament} gives a natural map $B$ from the set
of all finite geodesics starting at $x$ in $X$ to
the set of all directed finite paths
starting at $v_0$ in $\Gamma$.
Indeed the map $B$ is a bijection.
The inverse $B^{-1}$ is defined by induction on the length
of a path; we call the inverse image a {\em lift}. Suppose $v(i_j)$, $j=0,1,2, \cdots, n+1$,
is a directed path in $\Gamma$ with $v(i_0)=v_0$, and that its initial segment with
$0 \le j \le n$ has been lifted to a geodesic starting at $x$ and ending at $z$ in $X$.
Since there is a directed edge from $v(i_n)$ to $v(i_{n+1})$,
there must be a point $y \in X$ and a child of $y$, $y'$ such that
the tournament type of $y$ is $v(i_n)$ and the tournament type
of $y'$ is $v(i_{n+1})$, and that there is $\phi\in \textnormal{Aut}(X)$ with
$\phi(y)=z$. Now extend the geodesic by adding $\phi(y')$
after $z$, which is the lift of $v(i_{n+1})$.
This is a geodesic by Lemma \ref{lemma:Cannon}.
The map $P$ assigning the end points
to those geodesics is a surjection to $X$.
Notice that for each point $y \in X$ with $y\not=x$, by the
definition of the weight of each edge in $\Gamma$, the total weight of the paths
in the set $B P^{-1}(y)$ is always $1$ (again, by the induction
on $d(x,y)$). Now the formula follows.
\end{remark}
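To make the transfer-matrix formula concrete, consider the simplest hyperbolike example, the $4$-regular tree (the Cayley graph of the free group of rank $2$). There every vertex other than the base point has the same cone type, each vertex has a unique parent (so all edge weights are $1$), and the whole machine reduces to a $2\times 2$ matrix. The following Python sketch is purely illustrative (it is not the construction for a general hyperbolike graph); it reproduces the sphere sizes $|X_{=0}|=1$ and $|X_{=n}|=4\cdot 3^{n-1}$.

```python
def sphere_sizes_tree(n_max):
    """Sphere sizes |X_{=n}| of the 4-regular tree via a transfer matrix:
    state 0 is the cone type of the base point (4 children of type 1),
    state 1 is the cone type of every other vertex (3 children of type 1).
    All weights are 1 since each vertex of a tree has a unique parent."""
    M = [[0, 4], [0, 3]]
    v = [1, 0]                         # the row vector iota: start at the base type
    sizes = []
    for _ in range(n_max + 1):
        sizes.append(sum(v))           # iota * M^n * (1,1)^T
        v = [sum(v[i] * M[i][j] for i in range(2)) for j in range(2)]
    return sizes
```

The resulting sequence satisfies the finite linear recurrence $|X_{=n}|=3\,|X_{=n-1}|$ for $n\ge 2$, so the associated generating functions are rational, in line with Theorem~\ref{theorem:rationality}.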
\section{Patterson--Sullivan measures for hyperbolike graphs}
From Theorem~\ref{theorem:rationality} and from elementary linear algebra it
follows that if $X$ is $\delta$-hyperbolike for some $\delta$, and is not
quasi-isometric to a point or a line, then there are constants $\lambda>1$
and $C>1$, and an integer $k\ge 0$ so that there is an estimate of the form
$$C^{-1} \lambda^n n^k \le |X_n| \le C \lambda^n n^k$$
In this section we refine this estimate, showing that $k=0$. Explicitly, we show
\begin{theorem}[Exponential]\label{theorem:exponential}
Let $X$ be $\delta$-hyperbolike. Then there are constants $\lambda>1$ and $C>1$ so
that there is an estimate of the form
$$C^{-1} \lambda^n \le |X_n| \le C \lambda^n$$
\end{theorem}
\begin{remark}
If we use the notation $X_{=n}$ for the subset of $X$ at distance {\em exactly} $n$
from the base point, then a similar estimate
$$C^{-1} \lambda^n \le |X_{=n}| \le C \lambda^n$$
holds, with the same constant $\lambda$ but a possibly different constant $C$.
\end{remark}
If $X$ is the Cayley graph of a hyperbolic group, Theorem~\ref{theorem:exponential}
is due to Coornaert \cite{Co}, and is proved by generalizing the theory of Patterson--Sullivan
measures. As explained in \cite{Calegari_ergodic} \S~2.5, the proof of Coornaert's theorem
can be considerably simplified by first showing that the generating function
$b_x(t)$ is rational, as a corollary of Cannon's theorem for hyperbolic groups.
Our proof of Theorem~\ref{theorem:exponential} runs along very similar lines, and
amounts to little more than the verification that the steps in the argument
given in \cite{Calegari_ergodic} hold in the more general context of hyperbolike graphs.
We carry out this verification in the remainder of the section.
\subsection{Visual boundary}
The first step is to metrize $\partial_\infty X$ following Gromov. Let
$d_X$ denote the ordinary (path) metric in $X$.
\begin{definition}
Fix some base point $x\in X$ and some constant $a>1$. The
{\em $a$-length} of a rectifiable path $\gamma$ in $X$, denoted $\textnormal{length}_a(\gamma)$,
is the integral along $\gamma$ of $a^{-d_X(x,\cdot)}$ with respect to its ordinary
length, and the {\em $a$-distance} from $y$ to $z$, denoted $d_X^a(y,z)$, is the
infimum of the $a$-lengths of paths between $y$ and $z$.
\end{definition}
The following comparison lemma, due to Gromov, lets us compare $a$-length to ordinary
length.
\begin{lemma}[Gromov]\label{lemma:comparison}
There is some $a_0>1$ so that for $1<a<a_0$ the completion $\overline{X}$ of $X$ in
the $a$-length metric is homeomorphic to $X \cup \partial_\infty X$. Moreover, for
such an $a$ there is a constant $C$ so that for all $y,z \in \partial_\infty X$ there
is an inequality
$$C^{-1}a^{-(y|z)} \le d_X^a(y,z) \le Ca^{-(y|z)}$$
where $(y|z)$ denotes the Gromov product.
\end{lemma}
The Gromov product $(y|z)$ is usually taken to denote the expression
$$(y|z):=\frac 1 2 \left(d_X(x,y) + d_X(x,z) - d_X(y,z)\right)$$
but since we are only ever interested in the value of this expression up to a
(uniformly bounded) additive constant, we could just as easily use the normalization
$(y|z) = d_X(x,yz)$, i.e.\/ the distance from $x$ to some (equivalently, any) geodesic
$yz$ from $y$ to $z$. We stress that this expression is to be interpreted as
denoting ``equality up to a uniform additive constant''; this (unspecified but effective)
constant will later be absorbed into a multiplicative constant.
\subsection{Patterson--Sullivan measure}
The next step is to construct a (so-called) Patterson--Sullivan (probability)
measure on $\overline{X}$. Theorem~\ref{theorem:rationality} has the key corollary
that this measure will be supported on the {\em boundary}.
Define the Poincar\'e {\em zeta function} $\zeta_X(s)$, well-defined for $s$ sufficiently
large, by the formula
$$\zeta_X(s):= \sum_{y \in X} e^{-sd_X(x,y)}$$
Recall that we have already shown that there is an estimate of the form
$$C^{-1} \lambda^n n^k \le |X_n| \le C \lambda^n n^k$$
It follows that $\zeta_X$ converges if $s>h:=\log(\lambda)$ and {\em diverges} at $h$.
We may therefore define, for each $s>h$, a probability measure $\nu_s$ on $\overline{X}$
(supported in $X$) by putting an atom of size $e^{-sd_X(x,y)}/\zeta_X(s)$ at each
$y \in X$. Take a subsequence of measures that converges as $s \to h$ from above,
and define $\nu$ to be the limit. By construction, this is a probability measure
{\em supported on $\partial_\infty X$}.
\subsection{Quasiconformal measure}
Recall that if $y \in \partial_\infty X$, a {\em horofunction} $b_y$ centered at
$y$ is a limit of a convergent
subsequence of functions of the form $d_X(y_i,\cdot) - d_X(y_i,x)$
for $y_i \to y$. Such a horofunction is not unique, but is well-defined up to
a uniformly bounded additive constant.
\begin{definition}[Coornaert]
For $\phi \in \textnormal{Aut}(X)$ define $j_\phi:\partial_\infty X \to \mathbb R$ by
$$j_\phi(y) = a^{b_y(x) - b_y(\phi(x))}$$
for some horofunction $b_y$ centered at $y$.
A probability measure $\nu$ on $\partial_\infty X$ is {\em quasiconformal of dimension
$D$} if for every $\phi \in \textnormal{Aut}(X)$, the measure $\phi_*\nu$ is absolutely continuous
with respect to $\nu$, and there is a constant $C$ (independent of $\phi$) so that
$$C^{-1} j_\phi(y)^D \le d(\phi_*\nu)/d\nu \le C j_\phi(y)^D$$
\end{definition}
Note that the uniform additive ambiguity in the definition of $b_y$ is absorbed into
a uniform multiplicative ambiguity in the definition of $j_\phi$, which is then
absorbed into the constant $C$; so this definition makes sense.
\begin{proposition}\label{proposition:quasiconformal}
The measure $\nu$ is quasiconformal of dimension $D$, where $D=h/\log{a}$.
\end{proposition}
\begin{proof}
From the definition of the Radon--Nikodym derivative, it suffices to show that there is
a constant $C$, so that for all
$y \in \partial_\infty X$ there is a neighborhood $V$ of $y$ in $\overline{X}$
so that for all $A \subset V$,
$$C^{-1}j_\phi(y)^D \nu(A) \le \nu(\phi^{-1}A) \le Cj_\phi(y)^D\nu(A)$$
By the definition of a horofunction, and $\delta$-thinness, there is a neighborhood
$V$ of $y$ in $\overline{X}$ so that
$$d_X(x,\phi^{-1}z) - d_X(x,z) - C \le b_y(\phi(x)) - b_y(x) \le d_X(x,\phi^{-1}z) - d_X(x,z) + C$$
for some $C$, and for all $z \in V$.
For each $s>h$ we have
$$\phi_*\nu_s(z)/\nu_s(z) = \nu_s(\phi^{-1}z)/\nu_s(z) = e^{-s(d_X(x,\phi^{-1}z) - d_X(x,z))}$$
Taking $s \to h$ and defining $a^D = e^h$ proves the proposition.
\end{proof}
\subsection{Shadows}
We now recall Sullivan's definition of {\em shadows}:
\begin{definition}
For $y \in X$ and $R>0$ the {\em shadow} $S(y,R)$ is the set of $z \in \partial_\infty X$
such that every geodesic ray from $x$ to $z$ comes within distance $R$ of $y$.
\end{definition}
\begin{lemma}\label{lemma:uniform_cover}
Fix $R>2\delta$. Then there is a constant $N$ so that for any $z\in\partial_\infty X$
and any $n$ there are at least $1$ and at most $N$ elements $y$ with
$d_X(x,y)=n$ and $z\in S(y,R)$.
\end{lemma}
\begin{proof}
If $\gamma$ is any geodesic from $x$ to $z$, and if $y$ is any point on $\gamma$, then
$z \in S(y,R)$. Conversely, if $y$ and $y'$ are two elements with $d_X(x,y)=d_X(x,y')$
and $z \in S(y,R)\cap S(y',R)$ then $d_X(y,y')\le 2R$.
\end{proof}
\begin{lemma}\label{lemma:uniform_size}
Fix $R$. Then there is a constant $C$ so that for any $y \in X$ there is an inequality
$$C^{-1} a^{-d_X(x,y)D} \le \nu(S(y,R)) \le Ca^{-d_X(x,y)D}$$
\end{lemma}
\begin{proof}
First observe by $\delta$-thinness and the definition of a shadow, that there is
some constant $C'$ so that
$$d_X(x,y) - C' \le b_z(x) - b_z(y) \le d_X(x,y) + C'$$
for any $z \in S(y,R)$. Since $j_\phi(z) = a^{b_z(x) - b_z(\phi(x))}$ it follows
that there is a constant $C$ so that
$$C^{-1}a^{d_X(x,\phi(x))} \le j_\phi(z) \le Ca^{d_X(x,\phi(x))}$$
for any $\phi \in \textnormal{Aut}(X)$ and any $z \in S(\phi(x),R)$.
Now, since $\nu$ is a quasiconformal measure, $\nu$ cannot consist of a single
atom. So let $m_0<1$ be the measure of the biggest atom of $\nu$, and fix $m_0 < m < 1$.
By compactness of $\partial_\infty X$ there is some $\epsilon$ so that every ball
in $\partial_\infty X$ of diameter $\le \epsilon$ (in the $a$-metric) has mass at most $m$.
Now, for any $\phi \in \textnormal{Aut}(X)$, the set $\phi^{-1}S(\phi(x),R)$ consists of exactly
the $y \in \partial_\infty X$ for which every geodesic ray from $\phi^{-1}(x)$ to $y$
comes within distance $R$ of $x$. As $R \to \infty$, the diameter of
$\partial_\infty X - \phi^{-1}S(\phi(x),R)$ goes to zero uniformly in $\phi$, and
so for some $R_0$, and for all $R\ge R_0$, we have
$$1-m \le \nu(\phi^{-1} S(\phi(x),R)) \le 1$$
independent of $\phi$.
But by Proposition~\ref{proposition:quasiconformal} and the discussion above,
there is some constant $C_1$ so that
$$C_1^{-1}a^{d_X(x,\phi(x))D} \le \nu(\phi^{-1}S(\phi(x),R))/\nu(S(\phi(x),R)) \le C_1a^{d_X(x,\phi(x))D}$$
Taking reciprocals, and using $1-m \le \nu(\phi^{-1} S(\phi(x),R)) \le 1$ completes the proof.
\end{proof}
We now give the proof of Theorem~\ref{theorem:exponential}.
\begin{proof}
We already know the lower bound. For each $y$ with $d(x,y)=n$ we have
$e^{-hn} = a^{-Dn} \le C\nu(S(y,R))$. On the other hand, by Lemma~\ref{lemma:uniform_cover},
every point $z \in \partial_\infty X$ is contained in at least 1 and at most $N$ sets
$S(y,R)$ with $d(x,y)=n$. So
$$|X_{=n}|e^{-hn}C^{-1} \le \sum_{d(x,y)=n} \nu(S(y,R)) \le N\nu\Bigl(\bigcup_{d(x,y)=n} S(y,R)\Bigr) = N$$
so $|X_{=n}| \le CN\lambda^n$. Since $\lambda>1$, summing this bound over spheres of radius at most $n$ gives the corresponding upper bound for $|X_n|$.
\end{proof}
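As a quick illustration of purely exponential growth (our own sanity check, not part of the proof), the Cayley graph of the free group $F_2$ is $0$-hyperbolic and its spheres satisfy $|X_{=n}| = 4\cdot 3^{n-1}$, so the estimate of Theorem~\ref{theorem:exponential} holds with $\lambda = 3$. A minimal sketch:

```python
def free_group_spheres(rank, radius):
    """Sphere sizes |X_{=n}| around the identity in the Cayley graph
    of the free group of the given rank, by enumerating freely
    reduced words of each length."""
    gens = [(i, s) for i in range(rank) for s in (1, -1)]

    def neighbors(word):
        for g in gens:
            # appending the inverse of the last letter would cancel;
            # skipping it keeps every word freely reduced
            if word and word[-1] == (g[0], -g[1]):
                continue
            yield word + (g,)

    sizes = [1]              # |X_{=0}| = 1: just the identity
    frontier = [()]
    for _ in range(radius):
        frontier = [w for u in frontier for w in neighbors(u)]
        sizes.append(len(frontier))
    return sizes

print(free_group_spheres(2, 5))   # [1, 4, 12, 36, 108, 324], i.e. 4 * 3**(n-1)
```

For rank $1$ (the line, excluded by the hypothesis of the theorem) the same count gives spheres of constant size $2$, which is why $\lambda>1$ requires ruling that case out.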
\section{The opposite series $\Omega(P)$ for a power series $P$}
\subsection{Definitions}\label{subsection:definitions}
We recall the definition of {\em opposite series} from Saito \cite{Sa}.
Let $P(t)=\sum_{n=0}^\infty a_nt^n$ be a power series in $t$
with $a_n$ real numbers. Assume there exist $u, v$ such that
for all $n$, $u \le a_{n-1}/a_n \le v$.
Define a polynomial in $s$ for each
$n \ge 0$ as follows:
$$X_n(P) = \sum_{k=0}^{n} \frac{a_{n-k}}{a_n} s^k.$$
Define $\Omega(P)$ as the set of accumulation points
of the sequence $\{X_n\}_n$ in the space of formal power series in $s$
(with respect to the product topology, i.e.\ coefficientwise convergence).
An element in $\Omega(P)$ is called an {\em opposite series}
in \cite[\S 11.2]{Sa}.
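To make the definition concrete, here is a small computation (ours, purely illustrative) of truncated coefficient lists of $X_n(P)$ for the sample series $P(t) = \sum_{n\ge 0}(n+1)t^n = (1-t)^{-2}$; since $a_{n-k}/a_n = (n-k+1)/(n+1) \to 1$ for each fixed $k$, the sequence $X_n$ has the single accumulation point $\sum_k s^k$:

```python
from fractions import Fraction

def opposite_polynomial(a, n, deg):
    """First deg+1 coefficients (in s) of X_n(P) = sum_{k<=n} (a_{n-k}/a_n) s^k."""
    return [Fraction(a[n - k], a[n]) for k in range(min(n, deg) + 1)]

a = [n + 1 for n in range(200)]       # coefficients of P(t) = 1/(1-t)^2
for n in (10, 50, 150):
    # the k-th coefficient (n-k+1)/(n+1) tends to 1, so Omega(P) = {1/(1-s)}
    print(n, [float(c) for c in opposite_polynomial(a, n, 3)])
```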
Now, let $G$ be a group with a finite generating set $S$.
Let $a_n$ denote the number of elements $g \in G$
whose word length is $n$ with respect to $S$.
From these coefficients we define the series $P(t)$, denoted
by $P_{G,S}$, and obtain $\Omega(P_{G,S})$.
Saito also defined another set $\Omega(G,S)$,
a map $\pi_{\Omega}:\Omega(G,S) \to \Omega(P_{G,S})$
and proved (Theorem in \S11.2) that the map is surjective under two
assumptions ({\bf S} and {\bf I} in his paper; we will discuss {\bf S}).
Saito's theory is most interesting when $\Omega(G,S)$ or $\Omega(P_{G,S})$
is finite, but his paper gives only a few examples where finiteness
is shown to hold.
Saito proposed the following conjecture in the last section of his paper
\cite[\S 12. Conjecture 4]{Sa}:
\begin{conjecture}[Saito] \label{question:saito}
$\Omega(G,S)$ is finite if $G$ is a word hyperbolic group.
\end{conjecture}
In view of Saito's theorem relating $\Omega(G,S)$ to $\Omega(P_{G,S})$, it is natural to ask:
\begin{question}
Is $\Omega(P_{G,S})$ finite if $G$ is hyperbolic?
\end{question}
Saito conjectures
that this will be the case \cite{Sa2}, and
we will answer this question
in the affirmative (Corollary \ref{hyp}).
Conjecture \ref{question:saito} is still open.
Interestingly it turns out that there is an example of a hyperbolic group
which does not satisfy the assumption {\bf S} (see Example
\ref{example:triangle}).
\begin{theorem}[finiteness]\label{main}
Let $X$ be a hyperbolike graph, and for any connected graph $Y$, let $b_Y(t)$
be the generating function whose coefficient of $t^n$ is the number of
distinct embeddings of $Y$ as a complete subgraph of $X_n$. Then $\Omega(b_Y)$
is finite.
\end{theorem}
A special case, which answers Saito's question, is:
\begin{corollary}\label{hyp}
If $G$ is a word hyperbolic group, then
$\Omega(P_{G,S})$ is finite for any finite generating set $S$.
\end{corollary}
\subsection{Proof of Theorem \ref{main} and Corollary \ref{hyp}}
We recall a well-known result from analytic combinatorics.
\begin{theorem}\cite[Th IV.9]{CUP}\label{cup}
If $f(z)$ is a rational function that is analytic at zero and has poles at points
$\alpha_1, \alpha_2, \dots, \alpha_m$, then its coefficients are a sum of
{\em exponential-polynomials}: there exist $m$ polynomials $\Pi_j(x)$ such that, for
$n$ larger than some fixed $n_0$,
$$f_n = \sum_j \Pi_j(n) \alpha_j^{-n}$$
where $f_n$ is the coefficient of $z^n$ in $f(z)$. Furthermore, the degree of $\Pi_j$
is equal to the order of the pole of $f$ at $\alpha_j$ minus one.
\end{theorem}
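As a hedged illustration of this theorem (our own example, not taken from \cite{CUP}), take $f(z) = 1/(1-2z)^2$: a single pole of order $2$ at $\alpha = 1/2$, so the coefficients should be a polynomial of degree $2-1=1$ times $\alpha^{-n} = 2^n$, namely $f_n = (n+1)2^n$:

```python
def series_coeffs_inverse_square(c, N):
    """Coefficients of 1/(1 - c z)^2 up to z^N, via the Cauchy
    product of the geometric series 1/(1 - c z) with itself."""
    geom = [c**n for n in range(N + 1)]          # 1/(1 - c z)
    return [sum(geom[i] * geom[n - i] for i in range(n + 1))
            for n in range(N + 1)]

coeffs = series_coeffs_inverse_square(2, 8)
# Theorem IV.9 predicts f_n = Pi(n) * alpha^{-n} with alpha = 1/2 and
# Pi(n) = n + 1 (degree = order of the pole minus one = 1)
assert coeffs == [(n + 1) * 2**n for n in range(9)]
print(coeffs)  # [1, 4, 12, 32, 80, 192, 448, 1024, 2304]
```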
Let's apply this to prove Theorem~\ref{main}.
\proof
The power series $b_Y(t)$ is a rational
function, whose poles are (a subset of) the reciprocals of the eigenvalues of the matrix $M$
constructed in the proof of Theorem~\ref{theorem:rationality}. Since $M$ is a non-negative
matrix, Perron--Frobenius theory says that there is an eigenvalue of largest absolute value which
is real and positive, and all other eigenvalues of this absolute value differ from it by multiplication
by a root of unity. From Theorem~\ref{theorem:exponential} and Theorem~\ref{cup} we conclude
that these roots of maximum modulus are {\em simple}, or else the dominant term in the
growth rate of the coefficients of $b_Y(t)$ would be of the form polynomial times exponential,
where the polynomial had positive degree (contrary to Theorem~\ref{theorem:exponential}).
It follows from Theorem~\ref{cup} that for $n$ sufficiently big, there is some $m_0 \le m$
so that after reordering the poles of $b_Y(t)$ in non-decreasing modulus,
we have an expression of the form
$$f_n = \sum_{j\le m_0} \pi_j \alpha_j^{-n} + \sum_{j>m_0} \Pi_j(n) \alpha_j^{-n}$$
where $\alpha_1$ is real and positive, where $\alpha_j$ for $j\le m_0$ is of the form
$\alpha_1 \omega_j$ for some root of unity $\omega_j$, and where $|\alpha_j| > \alpha_1$
for $j>m_0$.
Evidently $\alpha_1^{-1} = \lambda$ with notation from Theorem~\ref{theorem:exponential}.
Moreover, if $N$ is the least common multiple of the order of the roots of unity $\omega_j$,
then we can rewrite this expression as
$$f_n = C_{[n]} \lambda^n + o(\lambda^n)$$
where $C_{[n]}$ depends only on the residue of $n$ mod $N$. Again, by Theorem~\ref{theorem:exponential}
we can conclude that $C_{[n]}$ is real and {\em positive} for all $n$ mod $N$.
If we define the polynomial $X_n(b_Y) = \sum_{k=0}^n \frac {f_{n-k}} {f_n} s^k$ as in
\S\ref{subsection:definitions}, then as $n \to \infty$, for every fixed $k$, the
coefficient of $s^k$ in $X_n(b_Y)$ approaches a value depending only on $n$ mod $N$. Hence there
are finitely many accumulation points of the $X_n$, which is exactly the conclusion of
Theorem~\ref{main}.
\qed
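The periodicity mechanism in this proof can be seen on a toy sequence (ours, purely illustrative): if $f_n = (3+(-1)^n)2^n$, so that $C_{[n]}$ has period $N=2$, then $f_{n-k}/f_n$ depends only on $k$ and on $n \bmod 2$, and the polynomials $X_n$ take exactly two limiting values:

```python
from fractions import Fraction

# toy coefficient sequence f_n = C_{[n]} * lambda^n with lambda = 2, N = 2
f = [(3 + (-1) ** n) * 2 ** n for n in range(60)]

def X(n, deg):
    """Truncated coefficient tuple of X_n, as in the definition of Omega."""
    return tuple(Fraction(f[n - k], f[n]) for k in range(deg + 1))

even_limit, odd_limit = X(50, 4), X(51, 4)
assert X(58, 4) == even_limit and X(59, 4) == odd_limit  # period 2 in n
assert even_limit != odd_limit          # exactly two accumulation points
print(even_limit)
print(odd_limit)
```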
\section{Dead ends}\label{section:dead}
\begin{definition}
Let $X$ be a graph and $x$ a base point. A vertex $y$ is a {\em dead end} if there
is no $z \ne y$ with $d(x,z) = d(x,y) + d(y,z)$.
\end{definition}
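On a finite graph the definition can be tested mechanically: for graphs, $y$ is a dead end precisely when no neighbor of $y$ is strictly farther from the base point (given a witness $z$, the first vertex of a geodesic from $y$ to $z$ is such a neighbor). A minimal sketch, with hypothetical adjacency-list input:

```python
from collections import deque

def dead_ends(adj, x):
    """Vertices y with no neighbor strictly farther from the base point
    x; for graphs this is equivalent to the dead-end condition that
    d(x,z) = d(x,y) + d(y,z) fails for every z != y."""
    dist = {x: 0}
    q = deque([x])
    while q:                          # BFS distances from x
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [y for y in adj
            if all(dist[z] <= dist[y] for z in adj[y])]

# 6-cycle: the vertex antipodal to the base point is a dead end
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(dead_ends(cycle6, 0))   # [3]
```

(The reduction to neighbors is exact on a whole finite graph; on a truncated ball of an infinite graph, boundary vertices would appear spuriously.)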
Saito's main theorem requires the additional hypothesis that
the asymptotic density of dead end elements is zero;
this is Assumption 2 ({\bf S}) in \cite[\S 11.2]{Sa}.
Unfortunately, we show now
that this hypothesis is genuinely restrictive, since there are (very simple) hyperbolic
groups with finite generating sets whose Cayley graphs have a positive density of
dead ends. Actually, these examples are already well-known; we simply bring them
up to point out the implications for Saito's theory.
The following example is worked out in detail by Pfeiffer \cite{Pf}, Appendix~C; we summarize the
story.
\begin{example}[Triangle group]\label{example:triangle}
Let $G$ be the $(2,3,7)$ triangle group; i.e.\/ the group with the following presentation
$$G:=\langle a,b \; | \; a^2, b^3, (ab)^7\rangle$$
We abbreviate $b^{-1}$ by $B$.
Every geodesic word in $G$ alternates between $a$ and either $b$ or $B$.
Moreover, {\em infinite} geodesics are exactly those that don't contain
(except possibly at the very start)
substrings of the form $ababab$ or $aBaBaB$. For, suppose $ababab$ appears in the
middle of the word. It must be followed by an $a$, and preceded by either $b$ or $B$.
If we have $babababa$ then of course we can replace it by $aBaBaB$ which is shorter.
If we have $Babababa$ we can rewrite it as $BBaBaBaB = baBaBaB$ which is shorter.
Now, if $W$ is any word with at most 2 consecutive $ab$s or $aB$s in a row (and is
therefore a geodesic), we can extend it to something like
$WXBabaBababab$ which now we claim is a dead-end.
For, it can only be extended to $$WXBabaBabababa = WXBabaBBaBaBaB = WXBababaBaBaB$$
which is definitely shorter. On the other hand, $WXBabaBababab$ is itself a geodesic;
trying to rewrite it, one can only replace $ababab$ by $BaBaBaBa$ giving
$$WXBabaBBaBaBaBa = WXBababaBaBaBa$$ which is longer.
Thus this group has dead end elements with positive density (at least $2^{-6}$).
\end{example}
% arXiv:1311.4450 --- ``Counting subgraphs in hyperbolic graphs with symmetry'' (math.GR, math.GT)
% https://arxiv.org/abs/1709.00313
\title{A Simple Proof Characterizing Interval Orders with Interval Lengths between 1 and $k$}
\begin{abstract}
A poset $P= (X, \prec)$ has an interval representation if each $x \in X$ can be assigned a real interval $I_x$ so that $x \prec y$ in $P$ if and only if $I_x$ lies completely to the left of $I_y$. Such orders are called \emph{interval orders}. Fishburn proved that for any positive integer $k$, an interval order has a representation in which all interval lengths are between $1$ and $k$ if and only if the order does not contain $\mathbf{(k+2)+1}$ as an induced poset. In this paper, we give a simple proof of this result using a digraph model.
\end{abstract}
\section{Introduction}
\subsection{Posets and Interval Orders}
A poset $P$ consists of a set $X$ of \emph{points} and a relation $\prec$ that is irreflexive and transitive, and therefore antisymmetric. It is sometimes convenient to write $y \succ x$ instead of $x \prec y$. If $x \prec y$ or $y \prec x$, we say that $x$ and $y$ are \emph{comparable}, and otherwise we say they are \emph{incomparable}, and denote the incomparability by $x \parallel y$. An \emph{interval representation} of a poset $P=(X,\prec)$ is an assignment of a closed real interval $I_v$ to each $v\in X$ so that
$x \prec y$ if and only if $I_x$ is completely to the left of $I_y$. A poset with such a representation is called an \emph{interval order}. It is well-known that the classes studied in this paper are the same if open intervals are used instead of closed intervals, e.g., see Lemma 1.5 in \cite{GoTr04}.
The poset $\mathbf{2+2}$ shown in Figure~\ref{chains-fig} consists of four elements $\{a,b,x,y\}$ and the only comparabilities are $a \prec x$ and $b \prec y$. The following elegant theorem characterizing interval orders was anticipated by Wiener in 1914 (see \cite{FiMo92}) and shown by Fishburn \cite{Fi70}:
Poset $P$ is an interval order if and only if it contains no induced $\mathbf{2+2}$.
Posets that have an interval representation in which all intervals are the same length are known as \emph{unit interval orders} or \emph{semiorders}. Scott and Suppes \cite{ScSu58} characterize unit interval orders as those posets with no induced $\mathbf{2+2}$ and no induced $\mathbf{3+1}$. Figure~\ref{chains-fig} shows the posets $\mathbf{2+2}$, \ $\mathbf{3+1}$, and $\mathbf{4+1}$. More generally, the poset $\mathbf{n+1}$ consists of a chain of $n$ distinct elements $a_1 \prec a_2 \prec \cdots \prec a_n$ and an additional element that is incomparable to each $a_i$.
\begin{figure}
\begin{center}
\begin{picture}(300,65)(0,15)
\thicklines
\put(20,40){\circle*{5}}
\put(20,70){\circle*{5}}
\put(50,40){\circle*{5}}
\put(50,70){\circle*{5}}
\put(20,40){\line(0,1){30}}
\put(50,40){\line(0,1){30}}
\put(7,38){$a$}
\put(7,68){$x$}
\put(55,38){$b$}
\put(55,68){$y$}
\put(22,10){$\mathbf{2+2}$}
\put(105,10){$\mathbf{3+1}$}
\put(196,10){$\mathbf{4+1}$}
\put(120,35){\circle*{5}}
\put(120,55){\circle*{5}}
\put(120,75){\circle*{5}}
\put(140,55){\circle*{5}}
\put(120,35){\line(0,1){40}}
\put(107,33){$a$}
\put(107,53){$b$}
\put(107,73){$c$}
\put(145,53){$x$}
\put(210,30){\circle*{5}}
\put(210,50){\circle*{5}}
\put(210,70){\circle*{5}}
\put(210,90){\circle*{5}}
\put(230,60){\circle*{5}}
\put(210,30){\line(0,1){60}}
\put(197,28){$a$}
\put(197,48){$b$}
\put(197,68){$c$}
\put(197,88){$d$}
\put(235,58){$x$}
\end{picture}
\end{center}
\caption{The posets $\mathbf{2+2}$, \ $\mathbf{3+1}$, and $\mathbf{4+1}$.}
\label{chains-fig}
\end{figure}
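A brute-force test for the first forbidden configuration pictured above can be sketched as follows (our own helper, not part of the paper's proofs):

```python
from itertools import permutations, combinations

def has_induced_2_plus_2(elements, less):
    """Brute-force test for an induced 2+2: four elements a, b, x, y
    whose only comparabilities are a < x and b < y."""
    def comparable(u, v):
        return less(u, v) or less(v, u)
    for quad in combinations(elements, 4):
        for a, x, b, y in permutations(quad):
            if (less(a, x) and less(b, y)
                    and not comparable(a, b) and not comparable(x, y)
                    and not comparable(a, y) and not comparable(b, x)):
                return True
    return False

# the poset 2+2 itself: a < x and b < y only
order = {("a", "x"), ("b", "y")}
print(has_induced_2_plus_2("abxy", lambda u, v: (u, v) in order))  # True

# a chain is an interval order, so it has no induced 2+2
chain = {(u, v) for u in "abcd" for v in "abcd" if u < v}
print(has_induced_2_plus_2("abcd", lambda u, v: (u, v) in chain))  # False
```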
In this paper, we consider an intermediate class between the extremes of interval orders (no restrictions on interval lengths) and unit interval orders (all intervals the same length). In particular, we allow interval lengths to range from 1 to $k$, where $k$ is a positive integer. Fishburn \cite{Fi83} characterizes this class as those posets with no induced $\mathbf{2+2}$ and no induced $\mathbf{(k+2)+1}$, generalizing the result of Scott and Suppes. In fact, Fishburn characterizes those posets that have an interval representation by intervals whose lengths are between $m$ and $n$ for any relatively prime integers $m,n$ in terms of what he calls \emph{picycles}. The proof is technical, and it does not immediately yield a forbidden poset characterization in the general case.
We use a digraph model from Isaak~\cite{Is09} to give a shorter and more accessible proof in the case $m=1, n=k$. Our digraph model and the equivalence of statements (1) and (3) in Theorem~\ref{lengths1tok} can easily be extended to general $m,n$. It is also natural to consider allowing the interval lengths to vary between 1 and any real value. Fishburn and Graham \cite{FiGr85} study the classes $C(\alpha)$ of interval graphs that have a representation by intervals with lengths between $1$ and $\alpha$ for any real $\alpha\geq 1$, showing that the points where $C(\alpha)$ expands are the rational values of $\alpha$.
The problem of characterizing posets that have an interval representation in which the possible interval lengths come from a discrete set (rather than from an interval) is more challenging, and we consider two variants of this question in \cite{BoIsTr17}.
\subsection{Digraphs and Potentials}
A {\em directed graph}, or {\em digraph}, is a pair $G=(V,E)$, where $V$ is a finite set of {\em vertices}, and $E$ is a set of ordered pairs $(x,y)$ with $x,y\in V$, called {\em arcs}. A \emph{weighted digraph} is a digraph in which each arc $(x,y)$ is assigned a real number weight $w_{xy}$. We sometimes denote the arc $(x,y)$ by $x\rightarrow y$, and in a weighted digraph, by $x \xrightarrow{w_{xy}}y$. A \emph{potential function} $p:V\rightarrow \mathbb{R}$, defined on the vertices of a weighted digraph, is a function satisfying $p(y) - p(x) \leq w_{xy}$ for each arc $(x,y)$. Theorem~\ref{potential-no-neg} is a well-known result that specifies precisely which digraphs have potential functions.
A \emph{cycle} in digraph $G$ is a subgraph with vertex set $\{ x_1, x_2, x_3, \dots, x_t \}$ and arc set $\{ (x_{i}, x_{i+1}): 1 \le i \le t-1\} \cup \{(x_t,x_1)\}$.
In a weighted digraph, the \emph{weight} of cycle $C$, denoted by $wgt(C)$, is the sum of the weights of the arcs
of $C$. A cycle with negative weight is called a \emph{negative cycle}. The following theorem is well-known, see Chapter 8 of \cite{Sc03} for example, and we provide a proof in \cite{BoIsTr17}.
\begin{theorem}
A weighted digraph has a potential function if and only if it contains no negative cycle.
\label{potential-no-neg}
\end{theorem}
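Theorem~\ref{potential-no-neg} is effective: Bellman--Ford relaxation from a virtual source joined to every vertex by a weight-$0$ arc either converges to a potential function or certifies a negative cycle. A minimal sketch (our own, with illustrative arc data):

```python
def potential_or_none(vertices, arcs):
    """Bellman-Ford from a virtual zero source.  Returns p with
    p[v] - p[u] <= w for every arc (u, v, w), or None when a negative
    cycle exists -- by the theorem these are the only possibilities."""
    p = {v: 0 for v in vertices}          # virtual-source distances
    for _ in range(len(vertices) - 1):
        for u, v, w in arcs:
            if p[u] + w < p[v]:
                p[v] = p[u] + w
    for u, v, w in arcs:
        if p[u] + w < p[v]:               # still relaxable: negative cycle
            return None
    return p

bad = [("a", "b", 1), ("b", "c", 1), ("c", "a", -3)]   # cycle weight -1
good = [("a", "b", 1), ("b", "c", 1), ("c", "a", -2)]  # cycle weight 0
print(potential_or_none("abc", bad))    # None
p = potential_or_none("abc", good)
assert all(p[v] - p[u] <= w for u, v, w in good)
print(p)
```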
\section{Orders with a $[1,k]$-Interval Representation}
We say that poset $P$ has an $[a,b]$-interval representation if it has a representation by intervals whose lengths are between $a$ and $b$ (inclusive). When $a=b>0$, the posets with such a representation are the unit interval orders. Because representations can be scaled, for any $b>0$, all interval orders have a $[0,b]$-interval representation.
This motivates us to consider the lower bound $a=1$, and in particular, posets that have a $[1,k]$-interval representation where $k$ is a positive integer. Fishburn characterized this class in \cite{Fi83} by showing the equivalence of (1) and (2) in Theorem~\ref{lengths1tok}; however, the proof is quite technical.
Using the framework in \cite{Is09}, we construct a weighted digraph $G_{P,k}$ associated with poset $P$ and show that $P$ has a $[1,k]$-interval representation if and only if $G_{P,k}$ has no negative cycle. This allows for a more accessible proof of Theorem~\ref{lengths1tok}. We choose the value of $\epsilon$ appearing as a weight in $G_{P,k}$ so that $0 < \epsilon < \frac{1}{2|X|}$.
\begin{Def} {\rm
Let $P=(X,\prec)$ be a partial order. Define $G_{P,k}$ to be the weighted digraph with vertices $\{\ell_x,r_x\}_{x\in X}$ and the following arcs:
\begin{itemize}
\item $(\ell_y, r_x)$ with weight $-\epsilon$ for all $x,y\in X$ with $x\prec y$,
\item $(r_x,\ell_y)$ with weight $0$ for all $x,y\in X$ with $x||y$,
\item $(r_x,\ell_x)$ with weight $-1$ for all $x\in X$,
\item $(\ell_x,r_x)$ with weight $k$ for all $x\in X$.
\end{itemize}
}
\label{GPk-def}
\end{Def}
It is helpful to think of the arcs of $G_{P,k}$ as coming in two categories: $\ell \to r$ and $r \to \ell$. We list the arcs by category for easy reference.
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Type & Arc & Weight & $x,y$ Relation \\
\hline
$\ell \to r$ & $(\ell_y,r_x)$ & $-\epsilon $ & $y \succ x$ \\
\hline
& $(\ell_x,r_x)$ & $k $ & \\
\hline
$r \to \ell$ & $(r_x,\ell_y)$ & $0 $ & $x \parallel y$ \\
\hline
& $(r_x,\ell_x)$ & $-1 $ & \\
\hline
\end{tabular}
\end{center}
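As a concrete check of this construction (our own sketch; the vertex encoding and the Bellman--Ford negative-cycle test are illustrative, not part of the paper), the following builds $G_{P,k}$ and confirms the expected outcomes on $\mathbf{2+2}$ and $\mathbf{3+1}$:

```python
from fractions import Fraction

def G_Pk_arcs(X, less, k, eps):
    """Arcs (u, v, weight) of G_{P,k}; vertices are ('l', x) and ('r', x)."""
    arcs = []
    for x in X:
        arcs.append((('r', x), ('l', x), -1))  # interval length >= 1
        arcs.append((('l', x), ('r', x), k))   # interval length <= k
        for y in X:
            if less(x, y):
                arcs.append((('l', y), ('r', x), -eps))
            elif x != y and not less(y, x):    # x incomparable to y
                arcs.append((('r', x), ('l', y), 0))
    return arcs

def has_negative_cycle(X, arcs):
    """Bellman-Ford relaxation from a virtual zero source."""
    p = {(side, x): 0 for x in X for side in ('l', 'r')}
    for _ in range(len(p) - 1):
        for u, v, w in arcs:
            p[v] = min(p[v], p[u] + w)
    return any(p[u] + w < p[v] for u, v, w in arcs)

def rep_exists(X, relation, k):
    """True iff G_{P,k} has no negative cycle, i.e. iff P has a
    [1,k]-interval representation."""
    eps = Fraction(1, 2 * len(X) + 1)          # 0 < eps < 1/(2|X|)
    less = lambda u, v: (u, v) in relation
    return not has_negative_cycle(X, G_Pk_arcs(X, less, k, eps))

# 2+2 (a < x, b < y) is not an interval order: negative cycle for every k
print(rep_exists("abxy", {("a", "x"), ("b", "y")}, 3))    # False
# 3+1 (chain a < b < c plus incomparable x) is not a unit interval
# order, but is representable with lengths in [1, 2]
chain31 = {("a", "b"), ("b", "c"), ("a", "c")}
print(rep_exists("abcx", chain31, 1), rep_exists("abcx", chain31, 2))  # False True
```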
Any negative cycle in $G_{P,k}$ with a minimum number of arcs will have at most $2|X|$ arcs since $G_{P,k}$ has $2|X|$ vertices. Since $\epsilon$ satisfies $0 < \epsilon < \frac{1}{2|X|}$, the
arcs of weight $- \epsilon$ will have combined weight $w$, where $-1 < w \le 0$. We record a consequence of this observation in the following remark.
\begin{remark} {\rm
If $C$ is a negative weight cycle in $G_{P,k}$ containing the minimum number of arcs, then $C$ contains at least $k$ arcs of weight $-1$ for every arc of weight $k$. }
\label{remark-k}
\end{remark}
\begin{theorem}
Let $P=(X,\prec)$ be a partial order and let $k\in \mathbb{Z}_{\geq 1}$. The following are equivalent:
\begin{enumerate}
\item $P$ has a $[1,k]$-interval representation.
\item $P$ contains no induced $\mathbf{2+2}$ or $\mathbf{(k+2)+1}$.
\item The weighted digraph $G_{P,k}$ contains no negative cycle.
\end{enumerate}
\label{lengths1tok}
\end{theorem}
\begin{proof}
\noindent $(1) \Rightarrow (3)$ Suppose that $P$ has an interval representation \mbox{$\mathcal{I} = \{I_x\}_{x\in X}$}, where $I_x = [L(x), R(x)]$, and for each $x \in X$ we have $1 \le |I_x| \le k$. Choose
$\epsilon = \min\{\frac{1}{2|X| + 1},\delta \}$, where $\delta$ is the smallest distance between unequal endpoints in the representation $\mathcal{I}$. By the definition of an interval representation and the conditions on the interval lengths, we have
\begin{enumerate}
\item $R(x) - L(y) \leq -\epsilon$ for all $x,y\in X$ with $x\prec y$,
\item $L(y) - R(x) \leq 0$ for all $x,y\in X$ with $x||y$,
\item $L(x) - R(x)\leq -1$ for all $x\in X$,
\item $R(x) - L(x) \leq k$ for all $x\in X$.
\end{enumerate}
Now define the function $p$ on the vertex set of $G_{P,k}$ as follows. For each $x \in X$ let
$p(r_x) = R(x)$ and $p(\ell_x) = L(x)$. So $p$ satisfies
\begin{enumerate}
\item[(a)] $p(r_x) - p(\ell_y) \leq -\epsilon$ for all $x,y\in X$ with $x\prec y$,
\item[(b)] $p(\ell_y) - p(r_x) \leq 0$ for all $x,y\in X$ with $x||y$,
\item[(c)] $p(\ell_x) - p(r_x)\leq -1$ for all $x\in X$,
\item[(d)] $p(r_x) - p(\ell_x) \leq k$ for all $x\in X$.
\end{enumerate}
Thus, for all $(u,v) \in E(G_{P,k})$, we have $p(v) - p(u)\leq w_{uv}$.
Hence $p$ is a potential function on $G_{P,k}$ and by
Theorem~\ref{potential-no-neg}, $G_{P,k}$ has no negative cycle.
\medskip
\noindent$(3) \Rightarrow (1)$
Given that $G_{P,k}$ has no negative cycle, Theorem~\ref{potential-no-neg} provides a potential function $p$ on $G_{P,k}$, which by definition satisfies (a), (b), (c), (d). For each $x \in X$, let $L(x) = p(\ell_x)$ and $R(x) = p(r_x)$. By (c) we know $L(x) + 1 \le R(x)$, so $I_x = [L(x),R(x)]$ is indeed an interval with $|I_x| \ge 1$. By (d), the length of interval $I_x$ satisfies $ |I_x| \le k$, and by (a) and (b), $x \prec y$ in $P$ if and only if $R(x) < L(y)$. Thus the set of intervals $\{I_x\}_{x \in X}$ forms a representation of $P$ in which each interval has length between 1 and $k$.
\medskip
\noindent$(3) \Rightarrow (2)$
If $P$ contains an induced $\mathbf{2+2}$, denoted by $(x \succ a)||(y \succ b)$, then $\ell_x \xrightarrow{-\epsilon}r_a\xrightarrow{0}\ell_y\xrightarrow{-\epsilon}r_b\xrightarrow{0}\ell_x$ is a cycle in $G_{P,k}$ with weight $-2\epsilon$. Similarly, if $P$ contains an induced $\mathbf{(k+2)+1}$, denoted by $x \parallel (a_{k+2} \succ a_{k+1 } \succ \cdots \succ a_2 \succ a_1) $, then $G_{P,k}$ contains the cycle
\noindent
\scalebox{0.9}{\parbox{\linewidth}{
$$r_x\xrightarrow{0}\ell_{a_{k+2}}\xrightarrow{-\epsilon}r_{a_{k+1}}\xrightarrow{-1}\ell_{a_{k+1}}\xrightarrow{-\epsilon}r_{a_k} \xrightarrow{-1}\ell_{a_k} \xrightarrow{-\epsilon}
\cdots \xrightarrow{-\epsilon} r_{a_2}\xrightarrow{-1}\ell_{a_2}\xrightarrow{-\epsilon}r_{a_1}\xrightarrow{0}\ell_x\xrightarrow{k}r_x,$$
}}
\noindent whose weight is $(-1)k + k + (-\epsilon)(k+1) = -(k+1)\epsilon < 0.$ In either case, $G_{P,k}$ contains a negative cycle, contradicting (3).
\medskip
\noindent $(2) \Rightarrow (3)$
Now assume $P$ contains no induced $\mathbf{2+2}$ or $\mathbf{(k+2)+1}$.
For a contradiction, assume that $G_{P,k}$ contains a negative cycle, and
let $C$ be a negative cycle in $G_{P,k}$ containing the minimum number of arcs. By definition of $G_{P,k}$, the arcs in $C$ must alternate between arcs of type $\ell \rightarrow r$ and arcs of type $r \rightarrow \ell$, thus $C$ has the form \mbox{$\ell_{x_1}\rightarrow r_{x_2} \rightarrow \ell_{x_3} \rightarrow \dots \rightarrow r_{x_n} \rightarrow \ell_{x_1}$} for some $x_1,x_2,\dots,x_n \in X$, not necessarily distinct. Since the only two-arc cycles in $G_{P,k}$ have the form $\ell_x \rightarrow r_x \rightarrow \ell_x$, with weight $k-1 \geq 0$, we know $n \ge 4$. Furthermore, since the vertices of a cycle are distinct, we know that $x_i \neq x_{i+2}$ for $1 \le i \le n$, where the indices are taken modulo $n$.
Next we show $wgt(C) \le -2\epsilon$. Since $x_i \neq x_{i+2}$ for $1 \le i \le n$ (indices taken modulo $n$), the arcs of $C$ immediately before and after a weight $k$ arc must have weight $0$, and likewise the arcs immediately before and after a weight $-1$ arc must have weight $-\epsilon$. If $C$ contained at most one arc of weight $-\epsilon$, then $C$ would contain no arc of weight $-1$, so every $r \to \ell$ arc of $C$ would have weight $0$ and the remaining $\ell \to r$ arcs would have weight $k$, giving $wgt(C) \ge k - \epsilon > 0$, a contradiction. Thus $C$ contains at least
two arcs of weight $-\epsilon$, and Remark~\ref{remark-k} implies that $wgt(C) \le -2\epsilon$.
We next claim that $C$ does not contain a segment of three consecutive arcs of weights $-\epsilon, 0, -\epsilon$. For a contradiction, suppose $C$ contains the segment
\mbox{$S_1: \ell_{a}\xrightarrow{-\epsilon} r_{b} \xrightarrow{0}\ell_{c} \xrightarrow{-\epsilon}r_{d}$}. Then by the definition of $G_{P,k}$, we have $a \succ b$, \ $b \parallel c$, \ and $c \succ d$. If $d \succ a$, we get $c \succ d \succ a \succ b$, contradicting $ b \parallel c$. If $a \parallel d$, then the elements $a,b,c,d$ induce in $P$ the poset
$\mathbf{2+2}$, a contradiction. Otherwise, $a \succ d$ and we can replace the segment $S_1$ by
\mbox{$\ell_{a}\xrightarrow{-\epsilon} r_{d}$} to yield a shorter cycle $C'$ with $wgt(C') = wgt(C) + \epsilon \le -2\epsilon + \epsilon = -\epsilon < 0$. This contradicts the minimality of $C$.
We now consider two cases depending on whether or not $C$ contains an arc of weight $k$.
\noindent
{\bf Case 1: $C$ has no arc of weight $k$.}
In this case, $C$ alternates between arcs with weight $-\epsilon$ and arcs with weight in the set $\{0,-1\}$. Since $C$ has at least four arcs and no segment of the form $(-\epsilon, 0, -\epsilon)$, there must be an arc of weight $-1$. Without loss of generality, choose a starting point for $C$ so that it begins with the segment \mbox{$S_2: \ell_{x_1}\xrightarrow{-\epsilon} r_{x_2} \xrightarrow{-1} \ell_{x_3} \xrightarrow{-\epsilon} r_{x_4}.$} By the definition of $G_{P,k}$ we have $x_1 \succ x_2 = x_3 \succ x_4$, so $x_1 \succ x_4$. Replace segment $S_2$ by \mbox{$\ell_{x_1}\xrightarrow{-\epsilon} r_{x_4}$} to obtain a cycle $C'$ whose weight is also negative since it contains no arcs of weight $k$. Since $C'$ has fewer arcs than $C$, this contradicts the minimality of $C$.
\noindent
{\bf Case 2: $C$ contains an arc of weight $k$.}
By Remark~\ref{remark-k}, there is a segment of $C$ that starts with an arc of weight $k$ and has at least $k$ arcs of weight $-1$ before the next arc of weight $k$. Without loss of generality, we can choose the starting point of $C$ so that it begins with the segment
\mbox{$\ell_{x_1}\xrightarrow{k} r_{x_2} \xrightarrow{} \ell_{x_3} \xrightarrow{-\epsilon} r_{x_4} \xrightarrow{} \cdots \xrightarrow{-\epsilon} r_{x_{2k}} \xrightarrow{} \ell_{x_{2k+1}} $}.
If the arc $(r_{x_2},\ell_{x_3})$ has weight $-1$, then $x_1 = x_2 = x_3$, a contradiction since $x_1 \neq x_3$. Thus, the arc $(r_{x_2},\ell_{x_3})$ has weight 0 and $C$ begins with the segment \mbox{$\ell_{x_1}\xrightarrow{k} r_{x_2} \xrightarrow{0} \ell_{x_3} \xrightarrow{-\epsilon} r_{x_4} .$}
If any of the next $k$ arcs of the type $r \rightarrow \ell$ on $C$ had weight 0, then $C$ would contain a segment of the form $(-\epsilon, 0, -\epsilon)$, contradicting our earlier claim.
Thus each of these arcs has weight $-1$ and $C$ starts with the following segment:
\mbox{$ \ell_{x_1}\xrightarrow{k} r_{x_2} \xrightarrow{0} \ell_{x_3} \xrightarrow{-\epsilon} r_{x_4} \xrightarrow{-1} \ell_{x_5} \xrightarrow{-\epsilon} r_{x_6} \xrightarrow{-1} \cdots \xrightarrow{-\epsilon} r_{x_{2k+2}} \xrightarrow{-1} \ell_{x_{2k+3}}.$}
\smallskip
By the definition of $G_{P,k}$, we have the following relations in $P$:
\noindent
$x_1 = x_2 \parallel x_3 \succ x_4 = x_5 \succ x_6 = x_7 \succ \cdots = x_{2k+1} \succ x_{2k+2} = x_{2k+3}$.
If $x_1 = x_{2k+3}$, then by transitivity, $x_1 \prec x_3$, contradicting the relation $x_1 = x_2 \parallel x_3$. Thus $C$ contains at least two more arcs $(\ell_{x_{2k+3}} , r_{x_{2k+4}})$ and $(r_{x_{2k+4}}, \ell_{x_{2k+5}})$. If arc $(\ell_{x_{2k+3}} , r_{x_{2k+4}})$ had weight $k$, then $x_{2k+2} = x_{2k+3} =x_{2k+4}$, a contradiction since $x_{2k+2} \neq x_{2k+4}$. Thus arc $(\ell_{x_{2k+3}} , r_{x_{2k+4}})$ has weight $- \epsilon$, and $x_{2k+3} \succ x_{2k+4}$ in $P$, and $C$ starts with the following segment:
\noindent
\scalebox{0.95}{\parbox{\linewidth}{
$$S: \ell_{x_1}\xrightarrow{k} r_{x_2} \xrightarrow{0} \ell_{x_3} \xrightarrow{-\epsilon} r_{x_4} \xrightarrow{-1} \ell_{x_5} \xrightarrow{-\epsilon} r_{x_6} \xrightarrow{-1} \cdots \xrightarrow{-\epsilon} r_{x_{2k+2}} \xrightarrow{-1} \ell_{x_{2k+3}} \xrightarrow{-\epsilon} r_{x_{2k+4}} .$$ } }
Finally, we consider the relation between $x_1$ and $x_{2k+4}$ in $P$. If $x_1 \prec x_{2k+4}$, then by transitivity, $x_1 \prec x_3$, a contradiction. If $x_1 \succ x_{2k+4}$, we can replace segment $S$ by $\ell_{x_1}\xrightarrow{-\epsilon} r_{x_{2k+4}} $ to obtain a shorter cycle $C'$ in $G_{P,k}$. As noted earlier, the combined weight of the arcs of $C$ that have weight $- \epsilon$ is strictly greater than $-1$, so $C'$ also has negative weight, contradicting the minimality of $C$. Hence $x_1 \parallel x_{2k+4}$ and
the $k+3$ elements in the set $\{x_1, x_3, x_5, \ldots, x_{2k+3}, x_{2k+4}\}$ induce a $\mathbf{(k+2) + 1}$ in $P$, a contradiction.
\end{proof}
We end by describing an algorithm that constructs a $[1,k]$-interval representation of a poset $P$ if one exists and otherwise produces a forbidden poset, either $\mathbf{2+2}$ or $\mathbf{(k+2) + 1}$. Use a standard shortest-paths algorithm, such as Bellman-Ford or the matrix multiplication method, on $G_{P,k}$ to compute the weight of a minimum-weight path between each pair of vertices or detect a negative cycle.
If $G_{P,k}$ contains a negative cycle, these algorithms detect one with a minimum number of arcs. Then, as in the proof of $(2) \Rightarrow (3)$ of Theorem~\ref{lengths1tok}, either the cycle contains the segment $-\epsilon, 0, -\epsilon$, and a $\mathbf{2+2}$ is detected in $P$, or else, as in Case 2 of that proof, a $\mathbf{(k+2) + 1}$ is detected in $P$.
If there is no negative cycle,
Theorem~\ref{potential-no-neg} ensures that a potential function $p$ exists for $G_{P,k}$. Indeed, setting $p(v)$ to be the minimum weight of a walk ending at $v$ produces a potential function. As we showed in the proof of $(3) \Rightarrow (1)$, the intervals
$[p(\ell_x),p(r_x)]$ provide a $[1,k]$-interval representation of $P$.
Thus there is a polynomial-time certifying algorithm.
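The certifying computation described above can be sketched with a standard Bellman-Ford relaxation. The helper below is illustrative only (the vertex numbering and arc-list encoding are our assumptions, not the paper's); initializing every vertex at 0 makes $p(v)$ the minimum weight of a walk ending at $v$, exactly the potential function used in the representation.

```python
def bellman_ford_potentials(n, arcs):
    """Bellman-Ford sketch on a digraph with vertices 0..n-1 and
    weighted arcs given as (u, v, w) triples.  Starting every vertex
    at 0 lets p(v) track the minimum weight of any walk ending at v.
    Returns the potential function, or None if a negative cycle is
    detected."""
    p = [0.0] * n
    for _ in range(n):
        changed = False
        for u, v, w in arcs:
            if p[u] + w < p[v]:
                p[v] = p[u] + w
                changed = True
        if not changed:
            return p  # stable: p(u) + w >= p(v) holds on every arc
    return None  # still relaxing after n passes: negative cycle exists
```

When `None` is returned, re-running the relaxation while recording predecessors recovers a minimum-arc negative cycle, from which the forbidden poset is read off as in the proof.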
| {
"timestamp": "2017-09-04T02:08:42",
"yymm": "1709",
"arxiv_id": "1709.00313",
"language": "en",
"url": "https://arxiv.org/abs/1709.00313",
"abstract": "A poset $P= (X, \\prec)$ has an interval representation if each $x \\in X$ can be assigned a real interval $I_x$ so that $x \\prec y$ in $P$ if and only if $I_x$ lies completely to the left of $I_y$. Such orders are called \\emph{interval orders}. Fishburn proved that for any positive integer $k$, an interval order has a representation in which all interval lengths are between $1$ and $k$ if and only if the order does not contain $\\mathbf{(k+2)+1}$ as an induced poset. In this paper, we give a simple proof of this result using a digraph model.",
"subjects": "Combinatorics (math.CO)",
    "title": "A Simple Proof Characterizing Interval Orders with Interval Lengths between 1 and $k$"
} |
https://arxiv.org/abs/1909.07270 | A Weighted $\ell_1$-Minimization Approach For Wavelet Reconstruction of Signals and Images | In this effort, we propose a convex optimization approach based on weighted $\ell_1$-regularization for reconstructing objects of interest, such as signals or images, that are sparse or compressible in a wavelet basis. We recover the wavelet coefficients associated to the functional representation of the object of interest by solving our proposed optimization problem. We give a specific choice of weights and show numerically that the chosen weights admit efficient recovery of objects of interest from either a set of sub-samples or a noisy version. Our method not only exploits sparsity but also helps promote a particular kind of structured sparsity often exhibited by many signals and images. Furthermore, we illustrate the effectiveness of the proposed convex optimization problem by providing numerical examples using both orthonormal wavelets and a frame of wavelets. We also provide an adaptive choice of weights which is a modification of the iteratively reweighted $\ell_1$-minimization method. | \section{Introduction}
We investigate recovering an object of interest (OoI) from either a small number of samples
or a noisy version using a weighted $\ell_1$-norm regularized convex optimization scheme
with a specific choice of weights.
Throughout this effort, the functional representation of an OoI is given by
\begin{equation} \label{eq:approximation_expansion}
f(\bm{y}) := \sum_{\bm{\nu} \in \mathcal{S}} c_{\bm{\nu}} \Phi_{\bm{\nu}}(\bm{y})
+ \sum_{\bm{\nu} \in \mathcal{W}} c_{\bm{\nu}} \Psi_{\bm{\nu}}(\bm{y}),
\end{equation}
where $\bm{y}$ is in the domain $\mathcal{U}$ of $f$,
$\mathcal{S}$ and $\mathcal{W}$ are two finite sets of multi-indices
which we will specify later, $\{ \Phi_{\bm{\nu}} \}_{\bm{\nu \in \mathcal{S}}}$
is a family of scaling functions, $\{ \Psi_{\bm{\nu}} \}_{\bm{\nu \in \mathcal{W}}}$ is a family of wavelet
functions, and $c_{\bm{\nu}}$ is either a wavelet or scaling function coefficient.
We will discuss the wavelet and scaling functions in Section \ref{sec:theoretical_discussion}.
The recovery of $f$ is achieved by identifying a vector of coefficients,
$\bm{c} :=(c_{\bm{\nu}})_{\bm{\nu} \in \mathcal{S} \cup \mathcal{W}}$,
from our proposed convex optimization problem. The weighted $\ell_1$-norm,
$\| \cdot \|_{\bm{\omega},1}$ is defined as
\begin{equation}
\| \bm{c} \|_{\bm{\omega},1} = \sum_{{\bm{\nu}} \in \mathcal{J}} \omega_{\bm{\nu}} |c_{\bm{\nu}}|,
\end{equation}
given the vector of $N$ weights $\bm{\omega} = (\omega_{\bm{\nu}})_{{\bm{\nu}} \in \mathcal{J}}$
where $\mathcal{J} := \mathcal{S} \cup \mathcal{W}$ and the cardinality of $\mathcal{J}$ is $N$.
The coefficients $\bm{c}$ are obtained by solving
\begin{equation} \label{eq:weighted_l1_min}
\min_{\bm{c} \in \mathbb{C}^N} \lambda \| \bm{c} \|_{\bm{\omega},1}
+ \| \bm{A} \bm{c} - \tilde{\bm{f}} \|_{2}^2,
\end{equation}
where $\bm{f} = (f(\bm{y}_1),\dots,f(\bm{y}_m))$ is an $m$-dimensional vector ($m \le N$) of evaluations
of $f$ at the points $\bm{y}_i \in \mathbb{R}^d$, which may or may not be noisy,
$\tilde{\bm{f}} = \bm{f}/\sqrt{m}$ is the scaled vector, and
$\bm{A}$ is the $m \times N$ matrix whose entries are
\begin{equation} \label{eq:measurement_matrix}
A_{i, \rho({\bm{\nu}})} = \left \{
\begin{array}{cc}
\frac{\Phi_{\rho(\bm{\nu})}(\bm{y}_i)}{\sqrt{m}} & \text{ if } \bm{\nu} \in \mathcal{S} \\
\frac{\Psi_{\rho(\bm{\nu})}(\bm{y}_i)}{\sqrt{m}} & \text{ if } \bm{\nu} \in \mathcal{W},
\end{array}
\right .
\end{equation}
given the bijective mapping $\rho: \mathcal{J} \rightarrow \{1,\dots,N\}$,
$m$ evaluation points $\{\bm{y}_i\}_{i=1}^m \subset \mathbb{R}^d$ and $\bm{\nu} \in \mathcal{J}$.
The parameter $\lambda$ in \eqref{eq:weighted_l1_min} controls the trade-off between
the regularization of the solution enforced by the weighted $\ell_1$-norm and the
fidelity to the observations $\tilde{\bm{f}}$ enforced by the squared $\ell_2$-norm.
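Problem \eqref{eq:weighted_l1_min} is a convex (weighted-lasso) program and can be solved with any proximal-gradient method. The following ISTA-style sketch in NumPy is one standard option, not the solver used in our experiments, and assumes real data for simplicity (the problem above is posed over $\mathbb{C}^N$).

```python
import numpy as np

def weighted_ista(A, f, omega, lam=0.1, n_iter=500):
    """Proximal-gradient (ISTA) sketch for
    min_c  lam * sum(omega * |c|) + ||A c - f||_2^2."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    t = 1.0 / L                           # step size
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ c - f)    # gradient of the quadratic term
        z = c - t * grad
        thresh = t * lam * omega          # entrywise weighted threshold
        c = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return c
```

With $\bm{A} = \bm{I}$ this reduces to entrywise soft-thresholding of the data at level $\lambda \omega_{\bm{\nu}} / 2$, which makes the role of the weights transparent.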
The effectiveness of $\ell_1$-minimization is highlighted by its use in compressed sensing
(CS) \cite{original_cs_paper,donoho_compressed_sensing} and has been successfully deployed
in many applications such as photography \cite{single_pix_camera},
medical imaging \cite{cs_mri} or radar and electromagnetic imaging \cite{cs_electromagnetics}.
Wavelet representations are extensively employed in data compression and denoising
\cite{wavelet_shrinkage,wavelet_compress_and_denoise}. Despite these triumphs,
standard, unweighted $\ell_1$-minimization, i.e., the minimization problem \eqref{eq:weighted_l1_min}
where $\bm{\omega} = (1,\dots,1)$, does not seem suitable for the recovery of wavelet coefficients
even for functions with sparse or compressible representations in a wavelet basis. Consider Figure
\ref{fig:sine_and_subsam} where a piecewise smooth function is plotted.
As seen in Figure \ref{fig:all_coefs}, many of its coefficients
are relatively small (only $95$ out of the
$1053$ plotted coefficients have magnitude larger than $0.01$), so this function is compressible in a wavelet basis.
The indices of these large coefficients are given in
Figure \ref{fig:thresh_coefs}.
From Figure \ref{fig:recovs}, which plots the recovery of the piecewise smooth function from $80$ randomly chosen
samples, it is readily seen that unweighted $\ell_1$-minimization is not satisfactory.
Comparing the distribution of the large wavelet coefficients
recovered by unweighted $\ell_1$-minimization to those of
the original signal, shown in Figure \ref{fig:all_coefs},
it is clear that the unweighted approach leads to the
recovery of spurious large coefficients that do not correspond to the true signal's coefficients.
Figure \ref{fig:thresh_coefs} shows the indices of the $123$ coefficients larger
than the threshold $0.01$ recovered by unweighted $\ell_1$-minimization.
In particular, we notice that most of the large coefficients of the original signal
are those with low indices, whereas the large coefficients recovered by
unweighted $\ell_1$-minimization are more uniformly distributed.
In this effort, we study a model for the structured sparsity of wavelet coefficients of OoI's
and consider several choices of weights chosen in a particular way which encourage that structure.
We will use the weights
\begin{equation} \label{eq:choice_of_weight_alpha}
\omega_{\pmb{\nu}} = \left \{
\begin{array}{cc}
\| \Phi_{\bm{\nu}} \|_{L_\infty} & \text{ if } \bm{\nu} \in \mathcal{S} \\
\| \Psi_{\bm{\nu}} \|_{L_\infty} & \text{ if } \bm{\nu} \in \mathcal{W}
\end{array}
\right . .
\end{equation}
This choice is inspired by \cite{high_d_poly_approx} where
recovering the polynomial coefficients of high-dimensional functions by weighted $\ell_1$-minimization
is considered, and the indices of large polynomial coefficients of smooth functions typically
fall in certain kinds of sets called ``lower sets". They show that using
\eqref{eq:choice_of_weight_alpha} vastly improves the recovery of the functions by
proving that the recovered vector of coefficients has support which is very close to
a lower set. In other words, the choice of weights promotes
structure in the recovered coefficients.
The same choice of weights, but defined with respect to wavelet functions instead of polynomial ones,
also promotes structure in the wavelet coefficients. Consider Figure \ref{fig:thresh_coefs} which
compares the indices of the $66$ coefficients larger than
the threshold $0.01$ for the original signal, those recovered
by unweighted $\ell_1$-minimization, and those recovered by weighted $\ell_1$-minimization.
Notice that the distribution of those coefficients recovered by weighted $\ell_1$-minimization
more closely resembles the distribution of the coefficients of the original signal.
Furthermore, this choice of weights makes weighted $\ell_1$-minimization robust in the sense that
the recovered sparse vector is close to the true coefficients even when the measurements have been
perturbed by noise. Our numerical examples in Section \ref{sec:numerics} show that weighted
$\ell_1$-minimization improves recovery for both inpainting and denoising, and
encourages structured sparsity associated with wavelet coefficients. We also consider
solving the inpainting problem using a frame of wavelets.
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/original_and_samples.png}
\caption{}
\label{fig:sine_and_subsam}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/weighted_vs_unweighted_recov.png}
\caption{}
\label{fig:recovs}
\end{subfigure}
\caption{Reconstruction of the original signal with both weighted and unweighted $\ell_1$-minimization.
Here we plot: in Figure (\ref{fig:sine_and_subsam}) the piecewise smooth signal
where the circles indicate 80 randomly subsampled values; and in Figure
(\ref{fig:recovs}) the reconstruction from the 80 subsampled values using weighted and unweighted $\ell_1$-minimization.}
\label{fig:weighted_vs_unweighted}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/wave_coef_all.png}
\caption{}
\label{fig:all_coefs}
\end{subfigure}
~ %
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/threshold_wave_coef.png}
\caption{}
\label{fig:thresh_coefs}
\end{subfigure}
\caption{A visualization of how weighted $\ell_1$-minimization recovers a set of coefficients whose sparsity is structured similarly to the original signal.
The coefficients plotted here are associated with the Daubechies $3$ wavelet basis, also denoted $db3$.
For a construction of this wavelet see \cite{book:10_lec_daubechies}.
Here we plot: in Figure (\ref{fig:all_coefs}) the values of all wavelet coefficients where the coefficients recovered by
unweighted and weighted $\ell_1$-minimization
are shifted so that their differences are more readily seen; and in Figure (\ref{fig:thresh_coefs})
the coefficients whose magnitudes are larger than $0.01$.}
\label{fig:weighted_vs_unweighted_coefs}
\end{figure}
In this effort we also provide a choice of weights which can adapt
to the structure of the wavelet coefficients of a given OoI.
Since wavelet functions are scaled, shifted versions of a mother wavelet,
the weights \eqref{eq:choice_of_weight_alpha} depend only on the scale of the
associated coefficient. More complicated structures beyond the parent-child relationship
may exist. That is, coefficients with large values are not randomly distributed within each scale.
They may depend on other values within the same scale in addition to those
on adjacent scales. Intuitively, improved performance can be obtained by choosing weights which are
adapted to the inherent structure of a given set of wavelet coefficients both across and within scale.
We consider a modification of iterative reweighted $\ell_1$-minimization (IRW $\ell_1$-minimization),
introduced in \cite{reweightedl1}, where a sequence of weighted $\ell_1$-minimization problems are solved.
The weights used in IRW $\ell_1$-minimization are updated based on the previously recovered vector of coefficients.
Our modification to IRW $\ell_1$-minimization described in Section \ref{sec:theoretical_discussion}
updates the weights based on both the scale of the associated coefficients and the value of the
coefficients recovered at the previous iteration.
Our numerical examples which follow show that this adaptive choice of weights
produces better results at the cost of solving several weighted $\ell_1$-minimization
problems.
\subsection{Related Results}
Compressed Sensing based approaches for recovering a function from a limited
collection of measurements or evaluations of a function were considered in
\cite{model_based_cs,bui2015, Candes2008, dwb2008, LI2017, polania2014,fourcartCSbook}
among others. Many of these works use the underlying assumption that the OoI can be
well approximated by an expansion like \eqref{eq:approximation_expansion} where only a few
coefficients are large. Both the recovery of signals using weighted $\ell_1$ minimization
and the use of structured sparsity have also been considered previously.
For example, \cite{WardWeighted} studies a weighted $\ell_1$ approach and proposes some conditions for the weights,
but does not provide a specific choice. An iterative process for choosing adaptive weights
was introduced in \cite{reweightedl1} where weights are updated
based on the coefficients recovered on the previous iteration.
A specific choice of weights is given in \cite{high_d_poly_approx}
which yields a quantifiable improvement to the sample
complexity. Binary weights are considered in \cite{cs_electromagnetics}.
A general class of structured sparse signals is considered in \cite{model_based_cs},
where the authors establish a recovery guarantee with complexity estimates for two kinds
of greedy algorithms. Another example where the structure of the wavelet trees is utilized is
\cite{bui2015}, where a novel, Gram-Schmidt process inspired implementation of an orthogonal matching
pursuit algorithm is developed.
The practicality of using sparse tree structures for real world signals has also
been shown. The work \cite{polania2014} uses Compressed Sensing based recovery of the wavelet
coefficients of electrocardiogram signals.
Under certain structured sparsity assumptions on the representation coefficients,
the authors in \cite{adcock2018oracletype,ADCOCK_2017} show that optimal sampling complexity can be
achieved by unweighted $\ell_1$-minimization if a special sampling strategy is adopted.
In particular this applies to the inpainting problem,
however, in our case we assume that the samples are uniform and we do not have the freedom to choose the
sampling strategy.
Moreover, our structured sparsity assumption does not fit into their paradigm.
Exploiting the structure of wavelet coefficients has also been used to solve the denoising problem.
Notice that noise added to the measurement $\bm{f}$ principally contributes to the high frequency wavelet
coefficients. Therefore, a naive wavelet denoising scheme is to take the wavelet transform of the noisy
vector $\bm{f}$, threshold the wavelet coefficients, and transform back into the original domain. By
thresholding the wavelet coefficients we remove some high-frequency information, and
therefore we can expect that some of the noise is also removed.
More sophisticated thresholding methods have been considered, see e.g.,
\cite{wavelet_shrinkage, Donoho_denoise, denoise_across_scale, bayesian_denoise}.
Whereas these works employ statistical estimation to find important wavelet coefficients,
our work shows that with a simple choice of weights, independent of the OoI,
we can obtain satisfactory denoising results.
Our proposed weighted $\ell_1$-minimization recovers a vector of coefficients which,
due to our choice of weights, is less likely to be affected by the high-frequency perturbations
in the function samples.
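As a concrete illustration of the naive scheme described above, here is a one-level orthonormal Haar version in NumPy. This is a sketch only; our experiments in Section \ref{sec:numerics} use $db3$ and a frame of wavelets rather than Haar.

```python
import numpy as np

def haar_step(x):
    # One level of the orthonormal Haar transform (len(x) must be even).
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation (low-pass)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail (high-pass)
    return a, d

def inv_haar_step(a, d):
    # Inverse of haar_step; exact because the transform is orthonormal.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def naive_denoise(x, thresh):
    """The naive scheme from the text: transform, zero the small
    detail coefficients, transform back.  One-level sketch."""
    a, d = haar_step(x)
    d[np.abs(d) < thresh] = 0.0
    return inv_haar_step(a, d)
```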
\subsection{Organization}
In Section \ref{sec:theoretical_discussion} we present our choices of weights and
review the relevant research which influenced our approach. We also introduce
a model for wavelet coefficients which further supports our choice of weights.
In Section \ref{sec:numerics}, we present some numerical experiments
which show that an OoI can be successfully recovered using \eqref{eq:weighted_l1_min}
with our specific choices of weights \eqref{eq:choice_of_weight_alpha} and \eqref{eq:wirwl1}.
In particular, we consider the recovery of signals, images, and hyperspectral
images from a set of incomplete measurements. We also solve the denoising problem
for signals and images.
In Section \ref{sec:conclusion} we discuss possible extensions of this work.
\section{Theoretical Discussion} \label{sec:theoretical_discussion}
In this section we discuss several theoretical elements that inspired our
choice of weights, which we claim promote the natural structure exhibited by
the important wavelet coefficients of real-world OoIs.
Before justifying this claim and presenting a model for wavelet coefficients,
we will first define $k$-ary trees, which are a special case of a kind of graph called a tree.
A directed graph is called a tree if it satisfies the following two conditions:
(i) there is a single node, $\bm{\nu}_0$, which is called the root; and,
(ii) there exists one and only one path from $\bm{\nu}_0$ to any other node
$\bm{\nu}$ in the graph \cite{arborescence_def}.
The indices of the wavelet coefficients can be identified
with a node on a \textit{full $k$-ary tree}, i.e., a tree so that every node has either
$k$ edges or zero edges leaving it. For example, Figure \ref{fig:full_tree}
shows an example of a $2$-tree with the indices
$\{ \bm{\nu_0},\dots,\bm{\nu_6} \}$. In our model, the edges between nodes are directed
and the direction determines a parent-child relationship between nodes.
We say that node $\bm{\nu}_i$ is the \textit{parent} of node $\bm{\nu}_j$,
or equivalently, the node $\bm{\nu}_j$ is the \textit{child} of node $\bm{\nu}_i$
if one of the edges emanating from $\bm{\nu}_i$ terminates at node $\bm{\nu}_j$.
In general, we denote the parent of node $\bm{\nu}_j$ as $p(\bm{\nu}_j)$.
To illustrate, consider Figure \ref{fig:full_tree} where $\bm{\nu}_0$ has two child nodes,
$\bm{\nu}_1$ and $\bm{\nu}_2$, so that $p(\bm{\nu}_2) = p(\bm{\nu}_1) = \bm{\nu}_0$.
We consider the closed tree model for describing the subsets of large coefficients
of signals and images.
\begin{definition}[Closed Tree]\label{def:closedtree}
A multi-index set $T$ is called a closed tree if the
following two conditions hold:
\begin{enumerate}
\item Each $\bm{\nu} \in T$ may be uniquely identified with a node on a
$k$-ary tree.
\item For each node $\bm{\nu} \in T$,
\begin{equation*}
\bm{\nu} \in T \implies p(\bm{\nu}) \in T.
\end{equation*}
That is, if a node is in $T$, then so is its parent.
\end{enumerate}
\end{definition}
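In the standard array (heap) encoding of a full binary tree, Definition \ref{def:closedtree} becomes a one-line membership check. The helper below is illustrative and assumes $k=2$ with node $0$ as the root.

```python
def heap_parent(i):
    # In the array encoding of a full 2-ary tree, node i > 0 has
    # parent (i - 1) // 2 and node 0 is the root.
    return None if i == 0 else (i - 1) // 2

def is_closed_tree(T):
    """A node set T is a closed tree (Definition above) when the
    parent of every non-root node in T also lies in T."""
    return all(heap_parent(v) is None or heap_parent(v) in T for v in T)
```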
\begin{figure}
\centering
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{fig/full_binary_tree.pdf}
\caption{}
\label{fig:full_tree}
\end{subfigure}
~
\begin{subfigure}[b]{0.47\textwidth}
\includegraphics[width=\textwidth]{fig/closed_tree.pdf}
\caption{}
\label{fig:closed_tree}
\end{subfigure}
\caption{The wavelet coefficients of real-world signals are associated with
$2$-trees. Here we plot: in Figure (\ref{fig:full_tree}) an example of a $2$-tree; and in Figure (\ref{fig:closed_tree})
an example of a subset of nodes of the $2$-tree which forms a closed tree.}
\label{fig:full_vs_closed}
\end{figure}
An example of a closed tree is given in Figure \ref{fig:closed_tree}. The
motivation for considering closed trees as a model for wavelet coefficients is three-fold.
\begin{itemize}
\item One can construct orthogonal wavelets from a set of the nested approximation spaces
called multi-resolution analyses that satisfy certain properties, see for example \cite{HernandezWave}.
The nested relationship between these induces an association between certain wavelet functions on adjacent levels.
With appropriate indexing of wavelet function, the parent and child relationship of the closed tree
corresponds to this association.
\item The coefficients of a function expressed in an orthonormal wavelet system
are given by
the inner product of the function with a wavelet function. In practice, this value
is approximated using a quadrature rule. This quadrature can be implemented as a
linear combination
of scaling function coefficients at the previous scale
\cite{book:10_lec_daubechies}. Calculating coefficients in this
way clearly associates the value of the coefficient at a parent node with
the coefficients at its child nodes.
\item The successful application of hidden Markov tree models
in works such as \cite{hmtModel,dwb2008,choi2000hidden,crouse1998wavelet}
to image and signal processing shows that it is beneficial to enforce
correlation between parent nodes and child nodes.
\end{itemize}
This model makes rigorous a widely known property of wavelet representations of
signals and images: nodes associated with small wavelet coefficients
are more likely to have small children and nodes associated
with large wavelet coefficients may have either large or small children.
In light of this, it is natural to seek a choice of weights which promotes this structure.
Our choice of weights is inspired by \cite{high_d_poly_approx} where it was proven that polynomial coefficients
that are associated with certain kinds of subsets, called lower sets,
can be recovered with weighted $\ell_1$-minimization with weights equal to
the uniform norms of the tensor product polynomials associated with the coefficients.
\begin{definition}[lower set]
A multi-index set $\mathcal{S} \subset \mathbb{N}_0^d$ is called a lower set if and only if
\begin{equation*}
\bm{\nu} \in \mathcal{S} \text{ and } \bm{\mu} \le \bm{\nu} \implies \bm{\mu} \in \mathcal{S},
\end{equation*}
where $ \bm{\mu} \le \bm{\nu}$ is interpreted as $\mu_k \le \nu_k$ for each $k=1,\dots,d$.
\end{definition}
Closed trees have analogous structure to lower sets in the sense that the parent of
every node in the closed tree is also in the closed tree.
Given a family of pre-defined wavelets, such as Haar, Daubechies, etc.,
the weight given in \eqref{eq:choice_of_weight_alpha} is
\begin{equation} \label{eq:wavenorm}
\omega_{\bm{\nu}} = \left \{
\begin{array}{cc}
\| \Phi_{\bm{\nu}} \|_{L_\infty} = 2^{jd/2}, & \text{ if } \bm{\nu} \in \mathcal{S} \\
\| \Psi_{\bm{\nu}} \|_{L_\infty} = 2^{jd/2}, & \text{ if } \bm{\nu} \in \mathcal{W}
\end{array}
\right .
\end{equation}
where the multi-index $\bm{\nu} = (j,k_1,\dots,k_{d})$ and $j$ is
the level on which the coefficient $c_{\bm{\nu}}$ lies.
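For a concrete example, the weight vector \eqref{eq:wavenorm} can be assembled level by level. The sketch below assumes a separable dyadic decomposition whose level-$j$ functions have sup-norm $2^{jd/2}$, with level $0$ holding the coarsest coefficients.

```python
import numpy as np

def scale_weights(level_sizes, d=1):
    """Weights from the formula above: omega_nu = 2^{j d / 2} for a
    coefficient on level j.  level_sizes[j] is the number of
    coefficients on level j."""
    return np.concatenate(
        [np.full(n, 2.0 ** (j * d / 2.0)) for j, n in enumerate(level_sizes)]
    )
```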
In this section we established a structured sparsity model for wavelet coefficients
and related wavelet and tensor product polynomial representations.
In the next section we consider using weighted $\ell_1$ to recover
a signal from incomplete or noisy measurements and justify our approach
using these connections.
\subsection{Recovery of OoI from incomplete measurements}
The minimum number of measurements $m$ required for the guaranteed recovery of a
sparse vector is sometimes called the \textit{sampling complexity} in the compressed
sensing literature. For a measurement scheme arising from a bounded, orthonormal system,
as in \eqref{eq:measurement_matrix}, the number of samples $m$ required for recovery
using unweighted $\ell_1$-minimization depends on the maximum of the uniform
norms of the orthonormal system \cite{fourcartCSbook}. That is, let
\begin{equation} \label{eq:theta_def}
\Theta := \max_{\bm{\nu}\in \mathcal{J}} \| \Psi_{\bm{\nu}} \|_{\infty},
\end{equation}
then whenever $m$ satisfies
\begin{equation} \label{eq:theta_bound}
m \ge \Theta^2s \times \text{ log factors }
\end{equation}
one can recover the \textit{best s-term approximation} to the target function,
i.e., an approximation formed by superimposing the $s$ functions from the orthonormal
system corresponding to the $s$ largest coefficients. This condition is sharp or optimal for many sparse
recovery problems of interest, for example, from Fourier measurements.
However, for wavelets and high-dimensional polynomials, $\Theta$ can become so large that it renders \eqref{eq:theta_bound}
useless; see \cite{TranWebster18}.
Motivated by the need for improved algorithms which can exploit the structure of sparse polynomial
expansions with better recovery guarantees, \cite{high_d_poly_approx} proposes a weighted $\ell_1$
approach where the sampling complexity depends on a quantity
$K(s)$ which is strictly smaller than $\Theta^2 s$. More rigorously, they showed that
\begin{equation} \label{eq:high_d_sampling_complex}
m \ge K(s) \times \text{log factors},
\end{equation}
where
\begin{equation} \label{eq:def_k_omega}
K(s) := \sup_{S \text{ is a lower set}, |S| \le s }
\left \| \sum_{\bm{\nu} \in S} |\Psi_{\bm{\nu}}|^2 \right \|_{L_{\infty}}
\end{equation}
is sufficient for the recovery of best $s$-term approximations with lower set structures.
Assuming that an OoI has large wavelet coefficients lying on a closed tree, a similar conclusion
about the sampling complexity of weighted $\ell_1$-minimization \eqref{eq:weighted_l1_min} and
\eqref{eq:choice_of_weight_alpha} can be made. Let us define the analogous quantity to
\eqref{eq:def_k_omega} for wavelets
\begin{equation} \label{eq:def_k_tree}
K_{\mathcal{T}}(s) := \sup_{\substack{T \text{ is a closed tree, } |T|\le s}}
\left \| \sum_{\bm{\nu} \in T} |\Psi_{\bm{\nu}}|^2 \right \|_{L_{\infty}}.
\end{equation}
Then it can be shown that the recovery guarantee is
\begin{equation} \label{eq:tree_sample_complexity}
m \ge K_{\mathcal{T}}(s) \times \text{ log factors },
\end{equation}
and that,
\begin{equation} \label{eq:compare_tree_sample}
K_{\mathcal{T}}(s) \le \Theta^2 s,
\end{equation}
so the sufficient condition on sampling
complexity is improved.
Unlike the polynomial bases considered in \cite{high_d_poly_approx},
the guarantee \eqref{eq:tree_sample_complexity} for wavelet bases
is still too demanding. Moreover, it does not reflect the successful recovery from
\textit{underdetermined} systems, which is the main objective of a compressed sensing
approach. We postulate that this is due to the limitation
of our current analysis technique, and plan to address this issue in future work.
In experiments, some shown in the following sections, we consistently observe that
weighted $\ell_1$-minimization is able to reconstruct
signals and images given a small percentage of pixels.
Therefore, \eqref{eq:tree_sample_complexity} may be very pessimistic.
More remarkably, the superiority of our proposed weighted $\ell_1$-minimization approach
over the unweighted approach is clear.
In fact, our numerical examples show that it performs much better not only
for orthonormal systems of wavelets but also for a frame of wavelets, which we introduce in
Section \ref{sec:numerics}.
\subsection{Recovery of OoI from noisy measurements}
Suppose that the samples used for the recovery of a function
using \eqref{eq:weighted_l1_min} are noisy. In particular, we
assume that $\hat{f}(\bm{y}) := f(\bm{y}) + \eta$ where $\eta$ is modeled
as Gaussian noise.
The denoising problem is to recover $f$ given $\hat{\bm{f}} := ( \hat{f}(\bm{y}_k) )_{k=1}^m$.
This can be solved by using our proposed weighted $\ell_1$-minimization problem to
recover the true coefficients of $f$. In Section \ref{sec:numerics}, we give numerical
examples of denoising full, noisy signals and images, i.e., $m=N$.
As mentioned in the introduction, a basic denoising approach is to threshold
the wavelet coefficients of the noisy signal or image. This simple approach
is effective if the noise level is small. For larger noise levels, more advanced thresholding
algorithms have been proposed which adapt to the signal itself, for example, \cite{wavelet_shrinkage}.
Our proposed weighted $\ell_1$-minimization problem can be related to an iterative
weighted soft-thresholding approach, where our choice of weights encourages the
recovered wavelet coefficients to exhibit structure similar to the original signal.
According to \eqref{eq:wavenorm}, the deeper a wavelet coefficient lies in the
tree, the larger the weight associated with it is, resulting in more aggressive thresholding.
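The connection to weighted soft-thresholding can be sketched numerically. The following is a minimal proximal-gradient (ISTA) loop for a generic objective $\lambda \sum_{\bm{\nu}} \omega_{\bm{\nu}} |c_{\bm{\nu}}| + \|A\bm{c}-\bm{f}\|_2^2$; it is an illustrative stand-in, not the solver used in our experiments, and the level-dependent weights shown are hypothetical.

```python
import numpy as np

def soft(c, t):
    """Entry-wise soft-thresholding: the prox of t * |.|."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def weighted_ista(A, f, w, lam, n_iter=50):
    """Minimize lam * sum(w * |c|) + ||A c - f||^2 by proximal gradient.

    Deeper coefficients carry larger weights w, hence larger thresholds
    tau * lam * w, i.e. more aggressive shrinkage."""
    tau = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # step size from the Lipschitz bound
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ c - f)
        c = soft(c - tau * grad, tau * lam * w)
    return c

# Denoising case (A = I, m = N): the minimizer is soft(f, lam * w / 2).
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)
w = 2.0 ** (np.arange(N) / 2.0)   # hypothetical level-dependent weights
lam = 0.4
c = weighted_ista(np.eye(N), f, w, lam)
assert np.allclose(c, soft(f, lam * w / 2.0))
```

In the denoising case the iteration reaches the closed-form weighted soft-thresholding solution in a single step, which makes the shrinkage interpretation of the weights explicit.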
\subsection{Scale and Wavelet Aware Iteratively Updated Weights}
Our choice of weights \eqref{eq:choice_of_weight_alpha} naturally
encourages the property that wavelet coefficients of different scales have
appropriately scaled values. A natural extension would be to pick weights
which take into account the intra-level magnitude correlation of coefficients.
Although the true wavelet coefficients of an OoI have large and
small values within each scale, our chosen weights do not discriminate between large and small
coefficients within each scale. A method introduced in \cite{reweightedl1} iteratively solves several
weighted $\ell_1$-minimizations and updates the weights at each iteration based on the recovered
sparse vector, specifically,
\begin{equation} \label{eq:rwl1}
\omega_{\bm{\nu}}^{(t)} = \frac{1}{|c_{\bm{\nu}}^{(t-1)}| + \varepsilon}
\end{equation}
where $c_{\bm{\nu}}^{(t-1)}$ is the $\bm{\nu}^{th}$ coefficient recovered at step
$t-1$ and $\varepsilon$ is a parameter that must be chosen.
Intuitively, this approach tries to find and minimize a concave penalty
function that more closely resembles $\ell_0$ minimization.
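For concreteness, the update \eqref{eq:rwl1} is a one-line rule; the sketch below (with an arbitrary choice of $\varepsilon$) shows that large recovered coefficients receive small weights and vice versa.

```python
import numpy as np

def irw_weights(c_prev, eps=0.1):
    """IRW l1 weight update of \eqref{eq:rwl1}: w = 1 / (|c| + eps)."""
    return 1.0 / (np.abs(c_prev) + eps)

w = irw_weights(np.array([2.0, 0.0]))
# a large coefficient gets a small weight, a zero coefficient a large one
assert np.allclose(w, [1.0 / 2.1, 10.0])
```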
In practice however, this weighting strategy does
not lead to significantly better results for recovering wavelet coefficients.
In Figure \ref{fig:rwl1}, we see that, similarly to the unweighted
$\ell_1$-minimization case, reweighted $\ell_1$-minimization overemphasizes
coefficients very deep in the wavelet tree, leading to poor recovery.
We recreated the results from the paper using the parameters provided by the authors.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/rational_poly_haar_thresh_coef.png}
\caption{}
\label{fig:rwl1_thresh}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/rational_poly_haar_coef_recov.png}
\caption{}
\label{fig:rwl1_coefs}
\end{subfigure}
\caption{A comparison of the performance of unweighted, weighted, IRW, and wavelet reweighted
$\ell_1$-minimization for recovering the coefficients of a given signal. The IRW example
uses the same parameters as \cite{reweightedl1} and the wavelet reweighted example uses the
weights given in \eqref{eq:wirwl1}.
}
\label{fig:rwl1}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{fig/compare_weights.png}
\caption{A comparison of the weights used by IRW and wavelet reweighted $\ell_1$-minimization
after $5$ iterations relative to the choice of weights
\eqref{eq:choice_of_weight_alpha}. These weights were obtained in the
experiment associated with Figures \ref{fig:closed_tree_coef_recov_magnitude}
and \ref{fig:closed_tree_coef_recov} described in Section \ref{sec:numerics}. }
\label{fig:compare_weights}
\end{figure}
From \eqref{eq:wavenorm}, it is clear that our choice of weights \eqref{eq:choice_of_weight_alpha}
depends only on the level of the coefficients. On the other hand, the adaptive choice of weights
used in the usual IRW $\ell_1$-minimization does not take the level of the coefficients into account.
We propose an alteration of IRW $\ell_1$-minimization, where the weights are updated by the formula
\begin{equation} \label{eq:wirwl1}
\omega_{\bm{\nu}}^{(t)} = \omega_{p(\bm{\nu})}^{(0)}
+
\frac{1}{ |c_{\bm{\nu}}^{(t-1)}| + \varepsilon_{\bm{\nu}} },
\end{equation}
where $\omega_{\bm{\nu}}^{(0)}$ is the weight $\omega_{\bm{\nu}}$ from \eqref{eq:choice_of_weight_alpha}
and $\varepsilon_{\bm{\nu}} := 1 / (\omega_{\bm{\nu}}^{(0)} - \omega_{p(\bm{\nu})}^{(0)})$.
Observe that the parent of each node is on a shallower level, which implies that
$\omega_{\bm{\nu}}^{(0)} - \omega_{p(\bm{\nu})}^{(0)} \ge 0$, hence
$\varepsilon_{\bm{\nu}} \ge 0$. The update \eqref{eq:wirwl1} takes into account both the scale and specific choice of
wavelet function and can be called scale and wavelet aware iteratively reweighted
$\ell_1$-minimization (hereafter referred to as wavelet reweighted $\ell_1$-minimization).
The motivation for the updates used in wavelet reweighted $\ell_1$-minimization is twofold. First, on the first iteration,
the weights \eqref{eq:wirwl1} are the same as \eqref{eq:choice_of_weight_alpha}, and therefore,
they similarly encourage wavelet structured sparsity across levels. On later iterations,
by \eqref{eq:wirwl1}, $\omega_{\bm{\nu}}^{(t)} \ge \omega_{p(\bm{\nu})}^{(t)}$, hence
the relative scales of recovered coefficients are maintained.
Second, the term $1/(|c_{\bm{\nu}}^{(t-1)}| + \varepsilon_{\bm{\nu}})$ ensures that
large coefficients have smaller weights than their sibling coefficients on the
same scale. Our numerical examples show that the adaptive choice of weights
\eqref{eq:wirwl1} can perform somewhat better than the choice of weights
\eqref{eq:choice_of_weight_alpha}, but at the cost of having to
solve several weighted $\ell_1$-minimization problems. We also see that it
consistently performs much better than the usual IRW $\ell_1$-minimization.
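The update \eqref{eq:wirwl1} can be sketched as follows. The heap-indexed tree layout and the Haar-style base weights $2^{\gamma/2}$ are our own illustrative assumptions (mimicking \eqref{eq:choice_of_weight_alpha}); the root, whose base weight equals its parent's, is handled by letting $\varepsilon_{\bm{\nu}} \to \infty$ so that its weight stays fixed.

```python
import numpy as np

# Heap-indexed wavelet tree: node i has parent (i - 1) // 2 and
# level floor(log2(i + 1)); base weights 2^(level / 2) mimic the
# Haar case of \eqref{eq:choice_of_weight_alpha} (an assumption here).
n_nodes = 7
level = np.floor(np.log2(np.arange(n_nodes) + 1)).astype(int)
parent = np.maximum(np.arange(n_nodes) - 1, 0) // 2   # root is its own parent
w0 = 2.0 ** (level / 2.0)

def wavelet_irw_weights(c_prev):
    """Update \eqref{eq:wirwl1}: w_v = w0_{p(v)} + 1 / (|c_v| + eps_v),
    with eps_v = 1 / (w0_v - w0_{p(v)})."""
    diff = w0 - w0[parent]                            # >= 0; zero at the root
    eps = np.where(diff > 0, 1.0 / np.where(diff > 0, diff, 1.0), np.inf)
    return w0[parent] + 1.0 / (np.abs(c_prev) + eps)

# Initialized at c = 0 the update reproduces the base weights.
assert np.allclose(wavelet_irw_weights(np.zeros(n_nodes)), w0)
# A large coefficient gets a smaller weight than its zero sibling.
c = np.zeros(n_nodes); c[1] = 5.0
w = wavelet_irw_weights(c)
assert w[1] < w[2]
```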
\section{Numerical experiments} \label{sec:numerics}
In this section, we provide numerical results which show the effectiveness of weighted
$\ell_1$-minimization with our choice of weights
for the recovery of the wavelet representations of signals, images
and hyperspectral images.
We also consider the weights
\begin{equation} \label{eq:choice_of_weight_alpha_pos}
\omega_{\bm{\nu}} = \left \{
\begin{array}{cc}
\| \Phi_{\bm{\nu}} \|_{L_\infty} & \text{ if } \bm{\nu} \in \mathcal{S} \\
\| \Psi_{\bm{\nu}} \|_{L_\infty}^{\alpha} & \text{ if } \bm{\nu} \in \mathcal{W}
\end{array}
\right . .
\end{equation}
Our experiments indicate that choosing $\alpha \ge 1$ consistently performs well, whereas
choosing $0 < \alpha < 1$ consistently performs poorly. There is not much difference
among the choices $\alpha > 1$; therefore, the choice $\alpha = 1$ seems to be sufficient in
general. We additionally present examples related to a frame
of wavelets for use in the recovery of a signal from partial measurements
as well as experiments using our adaptive
choice of weights \eqref{eq:wirwl1}.
Recovery of a functional representation of an OoI
\eqref{eq:approximation_expansion} is achieved by identifying the coefficients $\bm{c}$
which minimize \eqref{eq:weighted_l1_min}, then applying an inverse discrete
wavelet transform to $\bm{c}$. The recovered signals and images presented below
were obtained using \texttt{SPGL1} \cite{spgl1:2007, BergFriedlander:2008} for both the
unweighted and the weighted cases. The wavelet transforms used are from the
built-in \texttt{MATLAB} wavelet toolbox.
\subsection{Recovery of synthetic data compressible in wavelet basis}
In this section we consider a synthetic example where the wavelet coefficients of a signal
are exactly supported on a closed tree. We construct such a signal by
randomly choosing a closed wavelet tree with $s$ nodes which is a sub-tree
of a full binary tree with $N=2^J$ nodes. The coefficient values of these $s$ nodes
are randomly assigned according to a Gaussian distribution whose mean and variance
depend on the depth of the node. We reconstruct the signal using an
inverse wavelet transform and randomly sample this signal at $m$ locations.
These samples are used to recover coefficients using \eqref{eq:weighted_l1_min}
with several choices of weights,
IRW $\ell_1$-minimization, and our wavelet reweighted $\ell_1$-minimization.
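The support construction can be sketched as follows. The heap indexing of the binary tree and the growth rule (repeatedly attach a uniformly random child of the current tree) are our own illustrative choices; any rule that only ever adds children of existing nodes produces a closed tree.

```python
import random

def random_closed_tree(s, n_nodes, rng):
    """Return a closed (parent-including) subtree with s nodes of the
    heap-indexed binary tree on {0, ..., n_nodes - 1}."""
    support = {0}                  # always include the root
    frontier = {1, 2}              # nodes whose parent is already in the tree
    while len(support) < s:
        v = rng.choice(sorted(frontier))
        frontier.remove(v)
        support.add(v)
        for child in (2 * v + 1, 2 * v + 2):
            if child < n_nodes:
                frontier.add(child)
    return support

support = random_closed_tree(90, 511, random.Random(0))
assert len(support) == 90
# closed-tree property: every non-root node's parent is in the support
assert all(v == 0 or (v - 1) // 2 in support for v in support)
```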
Our numerical experiments indicate that
\begin{itemize}
\item the weighted approach outperforms the unweighted approach,
\item the success of weighted $\ell_1$-minimization
does not depend too heavily on the choice of $\alpha$, and
\item our wavelet reweighted approach slightly improves recovery.
\end{itemize}
Figure \ref{fig:closed_tree_coef_recov_magnitude} and Figure \ref{fig:closed_tree_coef_recov}
compare the recovery of a randomly generated closed tree with $90$ nodes, which is a
subtree of a wavelet tree with $2^9-1=511$ total nodes, using
weighted and unweighted $\ell_1$-minimization. In each figure, the recovered coefficients are
associated with the vertical axis and the true coefficients are associated with
the horizontal axis. If exact recovery is achieved then the points should all lie on the red
line. Using a random sample of $m = 179$ evaluations, we see that unweighted,
weighted with $\alpha < 1$, and reweighted $\ell_1$-minimization all identify the significant
coefficients. This can be seen in Figure \ref{fig:closed_tree_coef_recov_magnitude}, where the magnitudes
of the recovered coefficients are plotted. Notice that the weighted approach is better able to capture the
small coefficients. This is highlighted by Figure \ref{fig:closed_tree_coef_recov} where
we plot the recovered coefficients against the true coefficients in the interval $[-1,1]$.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/magnitude_unweighted.png}
\caption{}
\label{fig:ct_mag_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/magnitude_alphaone.png}
\caption{}
\label{fig:ct_mag_w_half}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/magnitude_reweighted.png}
\caption{}
\label{fig:ct_mag_w_1}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/magnitude_wavelet_reweighted.png}
\caption{}
\label{fig:ct_mag_w_threehalf}
\end{subfigure}
\caption{A series of plots of the magnitude of the recovered coefficients on the vertical axis and the true
coefficients on the horizontal axis for various choices of weights.}
\label{fig:closed_tree_coef_recov_magnitude}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/coefficient_unweighted.png}
\caption{}
\label{fig:ct_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/coefficient_alphaone.png}
\caption{}
\label{fig:ct_w_half}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/coefficient_reweighted.png}
\caption{}
\label{fig:ct_w_1}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{fig/coefficient_wavelet_reweighted.png}
\caption{}
\label{fig:ct_w_threehalf}
\end{subfigure}
\caption{A series of plots of the recovered coefficients which correspond to true coefficients on the interval $[-1,1]$, with the
value of the recovered coefficient on the vertical axis and the true value of the coefficient on the horizontal axis,
for various choices of weights.}
\label{fig:closed_tree_coef_recov}
\end{figure*}
Real-world signals and images do not possess wavelet coefficients which are exactly sparse,
and the large coefficients are unlikely to be supported on exactly closed trees. Rather,
they are often compressible in a wavelet basis. In this section we show that
signals and images can be recovered from a relatively small number of measurements
using weighted $\ell_1$-minimization for the specific choice of weights
\eqref{eq:choice_of_weight_alpha_pos}. Our numerical experiments show that
for $\alpha = 1$, weighted $\ell_1$-minimization far outperforms both unweighted
$\ell_1$-minimization and the usual reweighted $\ell_1$-minimization.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_rational_poly_80samples.png}
\caption{}
\label{fig:rational_poly}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_rational_poly_unweighted_80samples.png}
\caption{}
\label{fig:rational_poly_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_rational_poly_weighted_80samples.png}
\caption{}
\label{fig:rational_poly_weighted}
\end{subfigure}
\caption{Reconstruction of a rational polynomial using unweighted and weighted $\ell_1$-minimization. Here we plot:
in Figure
(\ref{fig:rational_poly}) a plot of the rational polynomial $f(x) = 1/(1 + 25x^2)$.
The black dots indicate 80 randomly subsampled values;
in Figure (\ref{fig:rational_poly_unweighted}) reconstruction using unweighted
$\ell_1$-minimization and 80 subsampled values; and in Figure (\ref{fig:rational_poly_weighted}) reconstruction using weighted $\ell_1$-minimization and 80 subsampled values.
}
\label{fig:rational_poly_recovery}
\end{figure*}
Figure \ref{fig:rational_poly_recovery} compares the recovery of the function $1/(1+25x^2)$
from $80$ uniformly subsampled points chosen in the interval $[-1,1]$ for different values of $\alpha$
from \eqref{eq:choice_of_weight_alpha_pos} as well as unweighted and reweighted $\ell_1$-minimization.
The chosen wavelets are the one-dimensional \textit{coiflets} constructed in \cite{coiflet_construction}.
The black dots in Figure \ref{fig:rational_poly} are the sampling points used in the reconstruction.
Notice that the function recovered by our weighted approach is better than the one obtained
using the unweighted approach. To quantify this, we calculated the root-mean-square error (RMSE) in each case.
The unweighted case produced an RMSE of $0.3100$, whereas the weighted case produced an
RMSE of $0.0072$.
We compare two denoising schemes in Figure \ref{fig:denoise_1d}.
Gaussian noise was added to the piecewise smooth function as shown
in Figure \ref{fig:noise_heavisine} so that the PSNR between
the original Heavisine function and the noisy one is $26.0184$.
Figure \ref{fig:matlab_denoise} shows the reconstruction using the built-in
\texttt{MATLAB} function \texttt{wden} which automatically denoises using
the adaptive wavelet shrinkage of the work \cite{wavelet_shrinkage}.
This produces a reconstruction with
PSNR $ = 29.2454$. Figure \ref{fig:weighted_denoise} shows the reconstruction using our
proposed weighted $\ell_1$-minimization scheme and the PSNR is $27.6637$.
While the built-in \texttt{MATLAB} function \texttt{wden} yields a reconstruction with better
PSNR, notice that our reconstruction is more faithful to the
features of the original signal and does not exhibit the extraneous fluctuations seen
in Figure \ref{fig:matlab_denoise}.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_noise_heavisine.png}
\caption{}
\label{fig:noise_heavisine}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_noise_heavisine_matlab_recov.png}
\caption{}
\label{fig:matlab_denoise}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_noise_heavisine_weighted_recov.png}
\caption{}
\label{fig:weighted_denoise}
\end{subfigure}
\caption{Denoising a perturbed HeaviSine function. Here we plot: in Figure
(\ref{fig:noise_heavisine}) the HeaviSine function perturbed by noise; in Figure
(\ref{fig:matlab_denoise}) denoised using db3-based wavelet thresholding with the
built-in \texttt{MATLAB} function \texttt{wden}; and in Figure
(\ref{fig:weighted_denoise}) denoised using db3-based weighted
$\ell_1$-minimization.
}
\label{fig:denoise_1d}
\end{figure*}
\subsection{Recovery of Images}
In this section we consider the problem of reconstructing an image from a small percentage of
its pixels. In the RGB color model, each pixel of an image is associated with a 3-tuple describing
a color. Images may be recovered by solving the multiple measurement vectors
(MMV) version of weighted $\ell_1$-minimization, i.e., we solve
\begin{equation} \label{eq:mmv_weighted_l1_min}
\min_{\bm{C} \in \mathbb{C}^{N \times k}} \lambda \| \bm{C} \|_{\bm{\omega},1,2}
+ \| \bm{A} \bm{C} - \tilde{\bm{F}} \|_{F}^2,
\end{equation}
where $\| \bm{C} \|_{\bm{\omega},1,2}$ is a mixed norm defined as the weighted sum of the $\ell_2$-norms
of the rows of the $N \times k$ matrix $\bm{C}$, $\tilde{\bm{F}}$ is an $m \times k$ matrix (here $k=3$) whose columns are
the normalized observations of $f$ along each color band, and $\| \cdot \|_F$ is the Frobenius norm.
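The mixed norm in \eqref{eq:mmv_weighted_l1_min} is straightforward to evaluate; the sketch below (with made-up data) spells out both the norm and the objective.

```python
import numpy as np

def mixed_norm(C, w):
    """|| C ||_{w,1,2}: weighted sum of the l2 norms of the rows of C."""
    return float(np.sum(w * np.linalg.norm(C, axis=1)))

def mmv_objective(C, A, F, w, lam):
    """Objective of the MMV weighted l1 problem \eqref{eq:mmv_weighted_l1_min}."""
    return lam * mixed_norm(C, w) + np.linalg.norm(A @ C - F, "fro") ** 2

C = np.array([[3.0, 4.0], [0.0, 0.0]])
w = np.array([2.0, 5.0])
assert mixed_norm(C, w) == 10.0                       # 2 * ||(3,4)||_2 + 5 * 0
assert mmv_objective(C, np.eye(2), C, w, 0.5) == 5.0  # zero residual case
```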
Figure \ref{fig:house_recovery} shows the recovery of a greyscale house image using several choices of $\alpha$.
The original image has $256 \times 256$ pixels and can be represented in the Haar wavelet basis
with $256^2$ coefficients. The measurements, $F$, are $m = 9830$ randomly chosen pixels of the image,
that is, $15\%$ of the $256^2$ pixels.
Notice that the cases when $\alpha \ge 1$ vastly outperform IRW $\ell_1$-minimization and unweighted $\ell_1$-minimization.
However, the differences between $\alpha = 3/2$, $\alpha = 2$, and $\alpha = 1$ are minimal. Therefore,
choosing the weights as \eqref{eq:choice_of_weight_alpha} is a reasonable choice in a general situation.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_orig.png}
\caption{}
\label{fig:cameraman_orig}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_unweighted.png}
\caption{}
\label{fig:cameraman_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_alphahalf.png}
\caption{}
\label{fig:cameraman_alpha12}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_alphaone.png}
\caption{}
\label{fig:cameraman_alpha1}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_alphathreehalf.png}
\caption{}
\label{fig:cameraman_32}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_alphatwo.png}
\caption{}
\label{fig:cameraman_2}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_reweighted.png}
\caption{}
\label{fig:cameraman_rw}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/cameraman_wavelet_reweighted.png}
\caption{}
\label{fig:cameraman_nrw}
\end{subfigure}
~
\caption{A comparison of the recovered image of a cameraman for a subsample of $10\%$ randomly chosen pixels
using several choices of weights and
iterated weight choices. The measurements were taken with respect to the Daubechies 2 (db2) wavelet basis.
}
\label{fig:cameraman_recovery}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_orig.png}
\caption{}
\label{fig:house_orig}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_unweighted.png}
\caption{}
\label{fig:house_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_alphahalf.png}
\caption{}
\label{fig:house_alpha12}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_alphaone.png}
\caption{}
\label{fig:house_alpha1}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_alphathreehalf.png}
\caption{}
\label{fig:house_32}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_alphatwo.png}
\caption{}
\label{fig:house_2}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_normal_reweighted.png}
\caption{}
\label{fig:house_rw}
\end{subfigure}
~
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{fig/house_gray_wavelet_reweighted.png}
\caption{}
\label{fig:house_nrw}
\end{subfigure}
~
\caption{A comparison of the recovered image of a house from $15\%$ randomly chosen pixels
using several choices of weights and
iterated weight choices. The measurements were taken with respect to the Daubechies 2 (db2) wavelet basis.
}
\label{fig:house_recovery}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_original.png}
\caption{}
\label{fig:lighthouse}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_unweighted_15perc.png}
\caption{}
\label{fig:lighthouse_unweighted}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_weighted_15perc.png}
\caption{}
\label{fig:lighthouse_weighted}
\end{subfigure}
\caption{Subsampling reconstruction of an image with unweighted and weighted $\ell_1$-minimization. Here we plot: in Figure (\ref{fig:lighthouse}) the original $640 \times 480$ pixel image of a lighthouse; in Figure (\ref{fig:lighthouse_unweighted}) reconstruction using db3-based unweighted
$\ell_1$-minimization and $15\%$ randomly subsampled pixels; and in Figure (\ref{fig:lighthouse_weighted}) reconstruction using db3-based weighted
$\ell_1$-minimization and $15\%$ randomly subsampled pixels.
}
\label{fig:lighthouse_recovery}
\end{figure*}
We can also recover color images by solving the minimization problem
\eqref{eq:mmv_weighted_l1_min}. Figure \ref{fig:lighthouse_recovery} shows that
the weighted approach performs better than unweighted for color images. The PSNR
of the reconstruction using unweighted $\ell_1$-minimization is $21.3119$,
see Figure \ref{fig:lighthouse_unweighted}. On the other hand,
the PSNR using weighted $\ell_1$-minimization is $24.5694$, see Figure \ref{fig:lighthouse_weighted}.
Notice that the unweighted recovery features blurring of edges and does not recover the texture
of either the grass or the red roof tile. The weighted recovery exhibits a better recovery of sharp edges and the
texture of the grass with yellow flowers. Weighted $\ell_1$-minimization can also be deployed to
recover other kinds of images besides the ``natural landscape" type images typified by the lighthouse.
Below we consider recovering cartoons, textures, and scientific data.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_noisy.png}
\caption{}
\label{fig:noisy_lighthouse}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_unweighted_denoised.png}
\caption{}
\label{fig:unweighted_denoised_lighthouse}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{fig/_fig_lighthouse_weighted_denoised.png}
\caption{}
\label{fig:weighted_denoised_lighthouse}
\end{subfigure}
\caption{Denoising an image with unweighted and weighted $\ell_1$-minimization. Here we plot: in Figure
(\ref{fig:noisy_lighthouse}) an image of a lighthouse with noise; in Figure
(\ref{fig:unweighted_denoised_lighthouse}) denoised using db3-based unweighted
$\ell_1$-minimization; and in Figure
(\ref{fig:weighted_denoised_lighthouse}) denoised using db3-based weighted
$\ell_1$-minimization.
}
\label{fig:denoise_2d}
\end{figure*}
We also present an example of image denoising. Figure \ref{fig:noisy_lighthouse} is a noisy
image generated by adding Gaussian noise so that the PSNR of the noisy version is $26.0184$. The reconstruction obtained using unweighted $\ell_1$-minimization
has PSNR $=30.6720$, see Figure \ref{fig:unweighted_denoised_lighthouse},
and the weighted $\ell_1$-minimization reconstruction has a PSNR $=31.1165$,
see Figure \ref{fig:weighted_denoised_lighthouse}.
\subsection{Recovering Hyperspectral Images}
The pixels of the color images we recovered in the previous section
can be viewed as $3$-tuples of numbers which represent the color at each pixel.
The image itself can then be viewed as an object in $\mathbb{R}^{M \times N \times 3}$
where $M$ is the number of pixels along the width and $N$ is the number
of pixels along the length.
A hyperspectral image is an object in
$\mathbb{R}^{M \times N \times k}$ for some $k>1$ where
$M$ and $N$ are the spatial dimensions and $k$ is the number of spectral bands.
One can use the information stored in a hyperspectral image in a variety of contexts.
Frequently, hyperspectral images
are used for remote detection or classification \cite{hyper_image}. In particular,
they have been used in medicine \cite{hyper_medicine} for the detection and classification
of disease, and in geology \cite{hyper_geo} for the detection and classification of minerals or oil.
In our numerical experiment, we consider recovering a hyperspectral image from a set of
subsampled spectral profiles at $m$ randomly chosen locations. In other words, we sample
$m$ vectors $\mu_{i,j} \in \mathbb{R}^k$ from the hyperspectral image
and wish to recover the full tensor. We do this by solving \eqref{eq:mmv_weighted_l1_min}.
For our experiment we have used a hyperspectral image associated with
a natural landscape of fields. The spectrum at each pixel corresponds to
the presence of certain wavelengths of light. For a sample of the spectral profiles
at $25\%$ of the pixels we recover the tensor using weighted and unweighted
$\ell_1$-minimization.
In Figure \ref{fig:hyper_slice_1} and Figure \ref{fig:hyper_slice_100} we compare
recovered slices of the tensor at spectral index $1$ and spectral index $100$
respectively. Notice that the unweighted approach does not yield as good results
as the weighted approach.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig/slice_1_indian_pines.png}
\caption{A comparison of the recovered slices at the first spectral index.}
\label{fig:hyper_slice_1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig/slice_100_25_p.png}
\caption{A comparison of the recovered slices at the $100^{th}$ spectral index.}
\label{fig:hyper_slice_100}
\end{figure*}
For a particular pixel we can compare the recovery by looking at the spectral profile
associated with that pixel. The spectral profile for the pixel $(50,25)$ and the recovered versions
are plotted in Figure \ref{fig:hyper_spec_pro}.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{fig/50_25_spectral_profile.png}
\caption{A comparison of the true and recovered spectral profiles at the pixel $(50,25)$.}
\label{fig:hyper_spec_pro}
\end{figure}
\subsection{Haar Framelets} \label{sec:framelets}
Many successful image processing methods incorporate both local
and global information about a signal to increase performance
\cite{nonlocal_means,morel_denoising2005,Kheradmand2014,LDMM}.
In this section we consider a specific case of a representation system introduced in
\cite{a_tale_of_two_bases} where the simultaneous local and global feature
analysis of an OoI is performed by a dictionary called a
\textit{framelet}. A sparse representation in the framelet dictionary is
recovered from a subsampled set of measurements using our proposed
weighted $\ell_1$-minimization problem. The dictionary is constructed
by taking the convolution of so-called ``local'' and ``global''
bases, discussed in more detail below.
Let $\bm{F} =(F_0,F_1,\dots,F_{N-1})\in \mathbb{R}^N$ be the vector representing
the target digital signal. Local information is gathered by grouping
neighboring evaluations around every point
together into an array called a \textit{patch}. For each $k$,
$0\leq k< N$, the patch of length $\ell$ at location $k$ is defined as
$\bm{p}_{k} = (F_k,F_{k+1}, \dots,F_{k+\ell-1})$, where the index arithmetic is
circular, i.e., $(N-1)+1$ is identified with $0$,
$(N-1)+2$ is identified with $1$, and so on.
The \textit{patch matrix} $P$ is constructed by setting the vector $\bm{p}_{k}$ as
the $k^{th}$ row of $P$. Notice that $P$ has $N$ rows, one for each
value in $\bm{F}$, and $\ell$ columns corresponding to the patch size.
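The patch matrix is easy to form with circular shifts; a minimal sketch:

```python
import numpy as np

def patch_matrix(F, ell):
    """Rows are the circular patches p_k = (F_k, ..., F_{k+ell-1})."""
    return np.stack([np.roll(F, -k)[:ell] for k in range(len(F))])

P = patch_matrix(np.array([0, 1, 2, 3]), ell=2)
assert P.shape == (4, 2)                    # N rows, ell columns
assert (P[3] == np.array([3, 0])).all()     # wrap-around at the boundary
```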
The global basis is given as a matrix $G \in \mathbb{R}^{N \times N}$ with its columns forming
an orthonormal basis in $\mathbb{R}^N$, and the local basis is given as a matrix
$L \in \mathbb{R}^{\ell \times \ell}$ with its columns forming
an orthonormal basis in $\mathbb{R}^{\ell}$.
The patch matrix $P$ can be represented in the tensor product
basis generated from $G$ and $L$ with the coefficients computed by
\begin{equation}\label{eq:C_coef}
C=G^TPL.
\end{equation}
The entries of the matrix $C=(c_{i,j})$ can also be viewed as coefficients of
$\bm{F}$ in the convolutional framelet formed by the columns of $G$ and $L$.
\begin{definition}[Discrete, Circular Convolution]
For two vectors $\bm{v},\bm{w}$ of length $N$ we define the discrete,
circular convolution as an operator which returns a length
$N$ vector $(\bm{v} \ast \bm{w})$ whose
$k^{th}$ component is
\begin{equation}
(\bm{v} \ast \bm{w})[k] = \sum_{p=0}^{N-1} \bm{v}[(k-p) \bmod N]\, \bm{w}[p].
\end{equation}
\end{definition}
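The definition can be checked directly, and by the convolution theorem it is equivalent to pointwise multiplication of DFTs; a short sketch:

```python
import numpy as np

def circ_conv(v, w):
    """Discrete, circular convolution, directly from the definition."""
    N = len(v)
    return np.array([sum(v[(k - p) % N] * w[p] for p in range(N))
                     for k in range(N)])

v = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 1.0, 0.0, 0.0])
out = circ_conv(v, w)                       # (v * w)[k] = v[k] + v[k-1]
assert np.allclose(out, [5.0, 3.0, 5.0, 7.0])
# Equivalent FFT implementation (convolution theorem):
assert np.allclose(out, np.real(np.fft.ifft(np.fft.fft(v) * np.fft.fft(w))))
```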
Let $\bm{G}_i$ be the $i^{th}$ column of the matrix
$G$ and $\bm{L}_j$ be the $j^{th}$ column of $L$. Denote by $\bar{\bm{L}}_j$
the vector in $\mathbb{R}^N$ whose first $\ell$ entries are identical to the corresponding
entries of $\bm{L}_j$, and whose remaining entries are equal to $0$. The convolutional framelets
are constructed as the circular convolution of
$\bm{G}_i$ with $\bar{\bm{L}}_j$:
\begin{equation}
\bm{\varphi}_{i,j} = \frac{1}{\sqrt{\ell}}\, \bm{G}_i \ast \bar {\bm{L}}_j.
\end{equation}
The vectors $\bm{\varphi}_{i,j}$ form a Parseval frame in $\mathbb{R}^N$
(see \cite{christensen2016introduction} for definitions).
The coefficients $c_{i,j}$ from \eqref{eq:C_coef} satisfy
\begin{equation*}
c_{i,j}=\langle \bm{F},\bm{\varphi}_{i,j}\rangle,
\end{equation*}
and the vector $\bm{F}$ can be recovered by the reconstruction formula
\begin{equation} \label{eq:recov_f}
\bm{F} = \sum_{i=1}^N \sum_{j=1}^\ell c_{ij} \bm{\varphi}_{ij}.
\end{equation}
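The Parseval-frame property behind \eqref{eq:recov_f} can be verified numerically. The sketch below uses random orthonormal matrices (via QR) for $G$ and $L$ rather than Haar bases, and computes the coefficients directly as inner products $\langle \bm{F}, \bm{\varphi}_{i,j}\rangle$; normalization conventions may differ from \eqref{eq:C_coef} by a constant factor.

```python
import numpy as np

rng = np.random.default_rng(0)
N, ell = 16, 4
G, _ = np.linalg.qr(rng.standard_normal((N, N)))      # global orthonormal basis
L, _ = np.linalg.qr(rng.standard_normal((ell, ell)))  # local orthonormal basis
F = rng.standard_normal(N)

# Build the framelets phi_{i,j} = (1 / sqrt(ell)) G_i * Lbar_j.
frame = []
for i in range(N):
    for j in range(ell):
        Lbar = np.zeros(N)
        Lbar[:ell] = L[:, j]                          # zero-padded local atom
        phi = np.real(np.fft.ifft(np.fft.fft(G[:, i]) * np.fft.fft(Lbar)))
        frame.append(phi / np.sqrt(ell))
Phi = np.array(frame)                                 # (N * ell) x N

c = Phi @ F                                           # c_{ij} = <F, phi_{ij}>
recon = Phi.T @ c                                     # reconstruction formula
assert np.allclose(recon, F)                          # Parseval frame property
```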
We choose the Haar basis for both the global and local basis in our numerical example. Consequently, for the weights $\omega_{i,j}$ we have
\begin{equation}
\omega_{i,j} := \|\bm{\varphi}_{i,j}\|_{L_\infty} = 2^{(\gamma_i + \lambda_j)/2},
\end{equation}
where $\gamma_i$ is the depth of the node associated with the $i^{th}$ wavelet function, whose discretization
is the $i^{th}$ column of $G$, and where $\lambda_j$ is defined similarly for the Haar basis associated with $L$.
In Figure \ref{fig:framelets_ortho_1d}
each of the reconstructions was created from 80 samples of a piecewise smooth function.
Since the function is only piecewise smooth, it is not necessarily compressible
in the Haar basis. The reconstructions using weighted and unweighted $\ell_1$-minimization
with the orthonormal Haar basis show ``step''-like artifacts. On the other hand,
the recovered framelet representation does not
exhibit these step-like effects. Heuristically, the observed improved performance may be
explained by the property that Haar framelets
use local and global information simultaneously.
\begin{figure*}
\centering
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{fig/_fig_heavisine_u_orthonormal_haar_80ssp.png}
\caption{}
\label{fig:unweighted_ortho}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{fig/_fig_heavisine_w_orthonormal_haar_80ssp.png}
\caption{}
\label{fig:weighted_ortho}
\end{subfigure}
\bigskip
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{fig/_fig_heavisine_u_framelet_haar_by_haar_80ssp.png}
\caption{}
\label{fig:unweighted_framelet}
\end{subfigure}
\hfill
\begin{subfigure}{0.45\linewidth}
\includegraphics[width=\linewidth]{fig/_fig_heavisine_w_framelet_haar_by_haar_80ssp}
\caption{}
\label{fig:weighted_framelet}
\end{subfigure}
\caption{(\ref{fig:unweighted_ortho}) The recovery of the HeaviSine function using
an orthonormal Haar basis and unweighted $\ell_1$-minimization.
(\ref{fig:weighted_ortho}) The recovery of the HeaviSine function using
an orthonormal Haar basis and weighted $\ell_1$-minimization.
(\ref{fig:unweighted_framelet}) The recovery of the HeaviSine function
using Haar framelets and unweighted $\ell_1$-minimization.
(\ref{fig:weighted_framelet}) The recovery of the HeaviSine function using
Haar framelets and weighted $\ell_1$-minimization.
}
\label{fig:framelets_ortho_1d}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
This effort has shown that weighted $\ell_1$-minimization is effective for
solving the interpolation/inpainting and denoising problems by recovering
wavelet coefficients. Moreover, this effort provides two explicit choices for weights that
do not require the identification of parameters beyond the choice of a wavelet family for use as
a representation system. The provided numerical examples indicate that the choice of weights
\eqref{eq:choice_of_weight_alpha} far outperforms unweighted $\ell_1$-minimization for recovering
wavelet coefficients, and that there is little difference between the cases $\alpha > 1$ and $\alpha = 1$ for
the weights \eqref{eq:choice_of_weight_alpha_pos}; hence, $\alpha = 1$ is a good choice.
According to Figure \ref{fig:compare_weights}, the weights used in IRW $\ell_1$-minimization are not scaled
appropriately. Our choice of weights \eqref{eq:wirwl1} both iteratively updates weights so that large coefficients have
smaller associated weights and ensures that the updated weights do not become too small. We also show
that weighted $\ell_1$-minimization can be used for measurement systems that do not happen
to be orthonormal; see Section \ref{sec:framelets}. We also prove that
the sampling complexity of our weighted $\ell_1$-minimization is no worse than that
of unweighted $\ell_1$-minimization, provided that the sparse signal satisfies the closed tree assumption.
In future work, it would be interesting to establish sharp estimates associated with wavelet based measurement systems.
Such a result would theoretically explain the gap in performance between unweighted and weighted $\ell_1$-minimizations for
recovering wavelet coefficients. In this work we mainly consider images and signals. Another interesting
direction to pursue would be to apply our choice of weights for recovering wavelet coefficients of functions
which are solutions to partial differential equations.
\section*{Acknowledgements}
This material is based upon work supported in part by: the U.S. Department of Energy, Office of Science, Early Career Research Program under award number ERKJ314; U.S. Department of Energy, Office of Advanced Scientific Computing Research under award numbers ERKJ331 and ERKJ345; the National Science Foundation, Division of Mathematical Sciences, Computational Mathematics program under contract number DMS1620280; Scientific Discovery through Advanced Computing (SciDAC) program through the FASTMath Institute under Contract No. DE-AC02-05CH11231; and by the Laboratory Directed Research and Development program at the Oak Ridge National Laboratory, which is operated by UT-Battelle, LLC., for the U.S. Department of Energy under contract DE-AC05-00OR22725.
% [end of arXiv:1909.07270 -- ``A Weighted $\ell_1$-Minimization Approach For Wavelet Reconstruction of Signals and Images'' (eess.IV; eess.SP; math.NA)]
% https://arxiv.org/abs/2006.10430
% The turnpike property and the long-time behavior of the Hamilton-Jacobi-Bellman equation for finite-dimensional LQ control problems
\begin{abstract}
We analyze the consequences that the so-called turnpike property has on the long-time behavior of the value function corresponding to a finite-dimensional linear-quadratic optimal control problem with general terminal cost and constrained controls. We prove that, when the time horizon $T$ tends to infinity, the value function asymptotically behaves as $W(x) + c\, T + \lambda$, and we provide a control interpretation of each of these three terms, making clear the link with the turnpike property. As a by-product, we obtain the long-time behavior of the solution to the associated Hamilton-Jacobi-Bellman equation in a case where the Hamiltonian is not coercive in the momentum variable. As a result of independent interest, we show that linear-quadratic optimal control problems with constrained controls enjoy a turnpike property, in particular when the steady optimum saturates the control constraints.
\end{abstract}
\section{Introduction}
\subsection{Motivation and setting}
We are interested in the asymptotic behavior of the value function associated to an optimal control problem, when the time-horizon tends to infinity.
In particular, we want to deduce it as a consequence of a property that is satisfied by a large class of optimal control problems and arises when the time horizon is considered to be sufficiently large.
This is the so-called \emph{turnpike property}, which establishes that the optimal strategy in a controlled system over a sufficiently long time interval is to quickly stabilize from the initial state to the optimal steady one and to stay there until the time is close to the end. Moreover, as a by-product of our study, we obtain the long-time behavior of the associated Hamilton-Jacobi-Bellman (HJB for short) equation in a case where the Hamiltonian is not strictly convex and not even coercive, a scenario much less considered in the literature.
This indicates that, in some cases, such structural assumptions on the Hamiltonian can be relaxed to weaker assumptions concerning the controllability and observability of the optimal control problem.
Let us introduce the mathematical framework that we will use throughout the paper. We denote by $\mathcal{M}_{n,m}(\mathbb{R})$ (resp. $\mathcal{M}_{n}(\mathbb{R})$) the set of matrices over $\mathbb{R}$ with $n$ rows and $m$ columns (resp. $n$ rows and columns).
We consider the following optimal control problem in the finite-dimensional linear-quadratic setting, with constrained controls:
for a given time horizon $T>0$ and an initial state $x\in \mathbb{R}^n$,
we denote the trajectory of the system by $y (\cdot)$, which is determined by the solution to the following controlled linear ODE:
\begin{equation}\label{eq: linear ODE}
\begin{array}{ll}
\dot{y}(s) = A\, y(s) + B\, u(s), & s\in (0,T) \\
y (0) = x,
\end{array}
\end{equation}
where $A\in \mathcal{M}_n (\mathbb{R})$, $B\in \mathcal{M}_{n,m} (\mathbb{R})$, with $n,m\geq 1$, are two given matrices,
and $u$, that will be referred to as the control, can be any function in the set of admissible controls $\mathcal{U}_T \coloneqq L^2(0,T; U)$, i.e. square-integrable functions $[0,T]\to U$, where $U\subseteq\mathbb{R}^m$ is a given nonempty closed and convex set, which can be either bounded or not.
The optimal control problem is to minimize, over the admissible controls $u\in \mathcal{U}_T$, the cost functional
\begin{equation}\label{eq: functional}
J_{T,x} (u) \coloneqq \dfrac{1}{2} \int_0^T \left[\| u(s)\|^2 + \| C \, y (s)-z\|^2\right] ds + g(y(T)),
\end{equation}
where $C\in \mathcal{M}_n(\mathbb{R})$ is a given matrix, $z\in \mathbb{R}^n$ is the prescribed \emph{target} and $g:\mathbb{R}^n\to \mathbb{R}$ is a given locally Lipschitz function bounded from below, known as the \emph{final cost}. The value function associated to the optimal control problem \eqref{eq: linear ODE}-\eqref{eq: functional} is defined as
\begin{equation}\label{eq: value function}
V (x,T) \coloneqq \inf_{u\in \mathcal{U}_T} J_{T,x}(u),\quad \text{s.t.}\; \eqref{eq: linear ODE}.
\end{equation}
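To make the objects above concrete, here is a minimal scalar sketch (not the paper's solver; the data $A=-1$, $B=C=1$, $z=0$, constant control $u=0$, and $g\equiv 0$ are arbitrary choices) that simulates \eqref{eq: linear ODE} by forward Euler and approximates the running cost in \eqref{eq: functional} by a left Riemann sum:

```python
import math

# Sketch: forward-Euler simulation of  y' = A y + B u,  y(0) = x,
# and a left-Riemann approximation of the running cost in J_{T,x}(u),
# for scalar A, B, C, z and a constant control u (final cost g = 0).

def cost(A, B, C, z, x, u, T, n_steps=20000):
    dt = T / n_steps
    y, J = x, 0.0
    for _ in range(n_steps):
        J += 0.5 * (u * u + (C * y - z) ** 2) * dt  # accumulate running cost
        y += (A * y + B * u) * dt                   # Euler step for the ODE
    return J, y

# with A = -1, B = C = 1, z = 0, u = 0:  y(t) = x e^{-t},
# so the exact cost is  J = x^2 (1 - e^{-2T}) / 4
J, yT = cost(A=-1.0, B=1.0, C=1.0, z=0.0, x=1.0, u=0.0, T=5.0)
print(J, (1 - math.exp(-10)) / 4)  # numerical vs. exact cost
```

Minimizing such a discretized cost over admissible controls would give a numerical approximation of $V(x,T)$; the sketch only evaluates the functional for one fixed control.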
We also consider the associated stationary problem, consisting in the minimization of the steady functional
\begin{equation}\label{eq: steady_functional}
J_s (u,y) := \dfrac{1}{2} \left( \|u\|^2 + \|C\, y-z\|^2\right),
\end{equation}
over the set of controlled steady states
\begin{equation}\label{def_M}
M_s\coloneqq \left\{\left(u,y\right)\in U\times \mathbb{R}^n \ | \ Ay+Bu = 0\right\}.
\end{equation}
We denote by $V_{s}$ the value of the optimal steady cost, defined as
\begin{equation}\label{V_s def}
V_{s} := \min \left\{ J_{s}(u,y),\ \text{s.t.}\ (u,y)\in M_s\, \right\}.
\end{equation}
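In the scalar case $n=m=1$ the steady problem can be solved in closed form, which gives a concrete (illustrative, not from the paper) picture of how a constraint set $U$ can make the steady optimum saturate the bound. Here $Ay+Bu=0$ gives $y=-(B/A)u$, so $J_s$ becomes a convex quadratic in $u$ alone, and its minimizer over an interval is the projection of the unconstrained minimizer:

```python
# Sketch (scalar n = m = 1): on M_s we have y = -(B/A) u, hence
#   J_s(u) = 0.5 * (u^2 + (m u - z)^2)   with  m = -C*B/A,
# a convex quadratic whose minimizer over U = [lo, hi] is the
# clamp of the unconstrained minimizer  u* = m z / (1 + m^2).

def steady_optimum(A, B, C, z, lo=float("-inf"), hi=float("inf")):
    m = -C * B / A                    # C*y as a linear function of u
    u_free = m * z / (1.0 + m * m)    # unconstrained minimizer of J_s
    u_bar = min(max(u_free, lo), hi)  # projection onto U = [lo, hi]
    y_bar = -B * u_bar / A
    Vs = 0.5 * (u_bar ** 2 + (C * y_bar - z) ** 2)
    return u_bar, y_bar, Vs

# unconstrained: A = -1, B = C = 1, z = 2  ->  u_bar = y_bar = 1, V_s = 1
print(steady_optimum(-1.0, 1.0, 1.0, 2.0))
# constrained U = [0, 0.5]: the steady optimum saturates the upper bound
print(steady_optimum(-1.0, 1.0, 1.0, 2.0, 0.0, 0.5))
```

The second call illustrates the situation emphasized below: the steady optimal control lies on the boundary of $U$, and the turnpike results of the paper still apply.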
\subsection{A turnpike result for control-constrained LQR}
In \cite{porretta2013long}, it is proved that, in the case where
$U=\mathbb{R}^m$, exponential turnpike holds under \textit{controllability} of $(A,B)$ and \textit{observability} of $(A,C)$.
As we shall prove in \Cref{th_TURNPIKE} below, for the constrained control case (i.e. $U\subseteq \mathbb{R}^m$), the validity of a weaker version of the turnpike property follows from the \textit{detectability} of $(A,C)$, the \textit{invertibility} of $A$ and the $U$-\textit{stabilizability} of $(A,B)$ (see Definition \ref{definition_U_stab}). In \cite{grune2021relation}, for the constrained case, the validity of an even weaker version of the turnpike property (measure turnpike) was investigated under strong dissipativity assumptions and for the case when the steady optimum lies in the interior of the admissible set.
We point out that we shall not make the latter assumption, and thus, our turnpike result also applies to the case when the steady optimal control is on the boundary of the set of admissible controls $U$.
In the sequel, we will sometimes refer to the steady optimal control $\overline{u}$ and its corresponding state $\overline{y}$ as the turnpike.
Note that the steady functional $J_s$, hence also the turnpike, are independent of the final cost $g$.
Let us first of all make precise the notion of constrained stabilizability.
\begin{definition}\label{definition_U_stab}
Let $A\in\mathcal{M}_{n}(\mathbb{R})$, $B\in\mathcal{M}_{n,m}(\mathbb{R})$ and $U\subseteq \mathbb{R}^m$ be closed and convex. We say that $(A,B)$ is $U$-\textit{stabilizable} to a trajectory $y_{1}$, with some control $u_1\in L^2_{loc}(0,+\infty;U)$, solution to
\begin{equation}\label{target_controlled_ODE}
\frac{d}{ds}y_1(s)=Ay_1(s)+Bu_1\hspace{1 cm} s\in (0,+\infty),
\end{equation}
if there exists a control $u\in L^2_{loc}(0,+\infty;U)$ for which the corresponding trajectory $y$ solves \eqref{target_controlled_ODE} with an initial datum $x\in \mathbb{R}^n$, and such that
\begin{enumerate}
\item $u-u_1\in L^1(0,+\infty;\mathbb{R}^m)\cap L^2(0,+\infty;\mathbb{R}^m)$;
\item $y-y_1\in L^1(0,+\infty;\mathbb{R}^n)\cap L^2(0,+\infty;\mathbb{R}^n)$;
\item $\left\|u-u_1\right\|_{L^1\cap L^2}+\left\|y-y_1\right\|_{L^1\cap L^2}\leq K\left\|y_1\left(0\right)-x\right\|$,\\
where $\left\|\cdot\right\|_{L^1\cap L^2}\coloneqq \left\|\cdot\right\|_{L^1\left(0,+\infty\right)}+\left\|\cdot\right\|_{L^2\left(0,+\infty\right)}$\; and $K=K(A,B,U)$.
\end{enumerate}
We say that $(A,B)$ is $U$-\textit{stabilizable}, when it is $U$-\textit{stabilizable} to any trajectory $y_{1}$ solution to \eqref{target_controlled_ODE} with some control $u_1\in L^2_{loc}(0,+\infty;U)$ and for every initial datum $x\in \mathbb{R}^n$.
\end{definition}
We refer to Remark \ref{rmk_stab} for additional comments on the notion of $U$-stabilizability.
We may now give the statement of the turnpike result for the optimal control problem \eqref{eq: linear ODE}--\eqref{eq: functional} with control constraints.
\begin{theorem}\label{th_TURNPIKE}
Assume $(A,B)$ is $U$-stabilizable, $(A,C)$ is detectable and $A$ is invertible. Let $(\overline{y},\overline{u})$ be the unique pair in $M_s$ minimizing \eqref{eq: steady_functional}, and for any $T>0$, let $u_{_T}\in \mathcal{U}_T$ be an optimal control minimizing $J_{T,x}$ in \eqref{eq: functional}, and let $y_{_{T}}$ be its associated state trajectory, solution to \eqref{eq: linear ODE}. Then, for any $\varepsilon \in (0,1)$, there exists $\tau=\tau(A,B,C,U,x,z,g,\varepsilon)>0$ such that, if $T\geq 2\tau+1$, we have
\begin{equation}\label{epsturnpike}
\left\|y_{_{T}}\left(t\right)-\overline{y}\right\|<\varepsilon
\qquad \forall t\in \left[\tau,T-\tau\right].
\end{equation}
Furthermore, there exists $K=K(A,B,C,U,x,z,g)$, such that
\begin{equation}\label{lemma_unif_boun_2_intro}
\int_0^T \left[\| u_{_{T}}(s)-\overline{u}\|^2 + \| y_{_{T}}(s)-\overline{y}\|^2\right] ds\leq K.
\end{equation}
\end{theorem}
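The $\varepsilon$-turnpike estimate \eqref{epsturnpike} can be visualized on a model curve with two exponential boundary layers and a flat middle arc; the following sketch checks it numerically. The curve and all constants below are illustrative choices, not the actual optimal trajectory of Theorem \ref{th_TURNPIKE}:

```python
import math

# Illustrative check (not the paper's optimal trajectory): a model curve
# with the qualitative turnpike shape,
#   y(t) = (x - ybar) e^{-mu t} - c e^{-mu (T - t)} + ybar,
# i.e. boundary layers near t = 0 and t = T and a flat middle arc.
# All constants below are arbitrary choices.

def model_trajectory(t, T, x=8.0, ybar=4.0, c=2.0, mu=1.8):
    return (x - ybar) * math.exp(-mu * t) - c * math.exp(-mu * (T - t)) + ybar

x, ybar, c, mu = 8.0, 4.0, 2.0, 1.8
T, eps = 10.0, 0.05
# tau chosen so that each boundary layer is at most eps/2 on [tau, T - tau]
tau = math.log(2 * max(x - ybar, c) / eps) / mu
assert 2 * tau + 1 <= T
for k in range(1001):
    t = tau + (T - 2 * tau) * k / 1000
    assert abs(model_trajectory(t, T) - ybar) < eps
print("epsilon-turnpike holds on [tau, T - tau] with tau =", round(tau, 3))
```

Note that $\tau$ grows only logarithmically in $1/\varepsilon$ for such exponential layers, consistent with the exponential turnpike discussed later for the unconstrained case.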
We note that the existence and uniqueness of a minimizer $(\overline{u},\overline{y})\in M_s$ for \eqref{eq: steady_functional}, and hence of the turnpike,
follow from the detectability of $(A,C)$ (see Remark \ref{rmk: steady system - appendix} in Appendix \ref{sec:Proof of the turnpike property}).
The uniqueness of the turnpike may fail, for instance, if the constraint set is nonconvex \cite{pighin2020nonuniqueness}, and also if the constraint set is convex but the pair $(A,C)$ is not detectable \cite{pighin2020turnpike2}.
\subsection{Main result}
Our main result establishes the connection between the turnpike property
and the long-time behavior of the value function \eqref{eq: value function}.
The latter is closely related to the value function associated to the corresponding infinite-horizon optimal control problem, that we define as
\begin{equation}\label{eq: W def}
W(x) := \inf_{u\in \mathscr{A}_x} J_{\infty,x}(u), \qquad J_{\infty,x}(u) := \int_{0}^{\infty}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right] ds,
\end{equation}
where, for each $u\in L^2_{loc}(0,+\infty;U)$, the function $y\in C([0,+\infty);\mathbb{R}^n)$ is the solution to \eqref{eq: linear ODE} in the time-interval $(0,+\infty)$, with initial condition $x$ and control $u$.
Here, the set of admissible controls is
\begin{equation}
\label{eq: admissible controls}
\mathscr{A}_x\coloneqq \left\{u\in L^2_{\mbox{\tiny{loc}}}(0,+\infty;U) \ : \ \int_0^{\infty}\left|\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s\right|ds < +\infty\right\}.
\end{equation}
\begin{theorem}\label{th_turnpikeHJB}
Assume $(A,B)$ is $U$-stabilizable, $(A,C)$ is detectable and $A$ is invertible. Let $z\in \mathbb{R}^n$ be given, and let $g:\mathbb{R}^n\to\mathbb{R}$ be a given locally Lipschitz function bounded from below. Let $V$ and $W$ be the value functions defined in \eqref{eq: value function} and \eqref{eq: W def} respectively.
Then, the following statements hold true:
\begin{enumerate}[label = (\roman*)]
\item For any bounded set $\Omega\subset \mathbb{R}^n$, we have
\begin{equation*}
V(x,T) - V_{s}\, T \longrightarrow W(x) + \lambda, \quad \text{as} \ T\to \infty,\quad \mbox{uniformly in}\ x\in\Omega,
\end{equation*}
where $V_{s}$ is the constant defined in \eqref{V_s def}
and the constant $\lambda$ is given by
\begin{equation}\label{lambda in main thm}
\lambda = \lim\limits_{T\to +\infty} \left[ V(\bar{y},T)-V_s\,T \right],
\end{equation}
where $\bar{y}$ is the state in the unique pair $(\bar{u},\bar{y})\in M_s$ minimizing \eqref{eq: steady_functional}.
\item Moreover, $W(\cdot)$ is, up to an additive constant, the unique viscosity solution bounded from below to the stationary problem
\begin{equation}\label{eq: ergodic HJB_thm}
V_{s} + \max\limits_{u\in U} \left\{ -\nabla W(x)\cdot(Ax+Bu) - \frac{1}{2}\|u\|^{2} \right\} = \frac{1}{2}\|Cx-z\|^{2}
\quad \quad x\in \mathbb{R}^{n}.
\end{equation}
In addition, the equation \eqref{eq: ergodic HJB_thm} with a different constant $c\neq V_s$ does not admit any viscosity solution bounded from below.
\end{enumerate}
\end{theorem}
As is well known, the value function $V(x,T)$ defined in \eqref{eq: value function} is the unique viscosity solution to the following Cauchy problem\footnote{Note that, as defined in \eqref{eq: value function}, the value function $V$ depends on the initial condition $x$ and the time horizon $T$. We use the notation $\nabla V$ for the derivative of $V$ with respect to $x$, and $\partial_{T}V$ for its derivative with respect to $T$. These derivatives must be interpreted in the appropriate classical or viscosity sense, which will be specified in each case.}
\begin{equation}\label{eq: HJ_mainth}
\left\{
\begin{array}{l}
\partial_{T} V + \max\limits_{u\in U}\left\{ -\nabla V \cdot (Ax+Bu) - \frac{1}{2}\|u\|^{2} \right\} = \frac{1}{2}\|Cx-z\|^{2} \\
\noalign{\vskip 2mm}
V(x,0) = g(x).
\end{array}\right.
\end{equation}
A proof can be found in \cite[\S 5, Thm.5 and Thm.6]{kouhkouh2018dynamic} (see also \cite[Thm. 7.2.4]{cannarsa2004semiconcave}). It relies on the methods
in \cite[Section III.3]{bardi2008optimal} (see also \cite[Section 10.3]{evans2010partial}) and is based on the Dynamic Programming Principle.
Our main result in Theorem \ref{th_turnpikeHJB} then describes the long-time behavior of the solution to this problem.
Note that in the case where the final cost $g$ is a nonconvex function, even if it is very smooth, the solution to \eqref{eq: HJ_mainth} eventually loses regularity for $T$ sufficiently large (see Example \ref{lemma_nouniq}), and the equation must be interpreted in the viscosity sense \cite{crandall1992user,crandall1983viscosity,lions1982generalized} (see also \cite{bardi2008optimal,evans2010partial}).
\begin{figure}
\begin{center}
\begin{tikzpicture}
\begin{axis}[
axis x line=middle, axis y line=middle,
ymin=-1, ymax=11, ytick={4, 8}, yticklabels={$\overline{y}$, $y_0$}, ylabel=$y$,
xmin=-1, xmax=11, xtick={2,8,10}, xticklabels={$\tau$,$T-\tau$,$T$}, xlabel=$t$,
domain=0:10,samples=101,
]
{\addplot [dashed, red,line width=0.06cm] {4};}
\addlegendentry{steady optimum}
\draw [dashed] (30,10) -- (30,50);
\draw [dashed] (90,10) -- (90,50);
{\addplot [black,line width=0.1cm] {4*exp(-1.8*x)-2*exp(-1.8*(10-x))+4};}
\addlegendentry{time evolution optimum}
\node [below left, black] at (10,10) {$O$};
\end{axis}
\draw [decorate,decoration={brace,amplitude=8pt,mirror,raise=0pt},yshift=0pt]
(0.57,-0.1) -- (1.75,-0.1) node [black,below,midway,yshift=-0.24cm] {$W(x)$};
\draw [decorate,decoration={brace,amplitude=8pt,mirror,raise=0pt},yshift=0pt]
(1.75,-0.1) -- (5.15,-0.1) node [black,below,midway,yshift=-0.24cm] {$V_s T$};
\draw [decorate,decoration={brace,amplitude=8pt,mirror,raise=0pt},yshift=0pt]
(5.15,-0.1) -- (6.29,-0.1) node [black,below,midway,yshift=-0.24cm] {$\lambda$};
\end{tikzpicture}
\caption{Optimal state fulfilling the turnpike property and associated asymptotic decomposition of the value function.}
\label{turnpike_asymptotic decomposition of the value function}
\end{center}
\end{figure}
\begin{remark}\label{Rmk main thm}
In view of Theorem \ref{th_turnpikeHJB}, the value function $V(x,T)$ admits the asymptotic decomposition
$$
V(x,T) \sim W(x) + V_s\, T + \lambda \qquad \text{as} \quad T\to \infty,
$$
where each term can be identified with one of the three stages in the turnpike optimal trajectory (see Figure \ref{turnpike_asymptotic decomposition of the value function}).
\begin{enumerate}
\item[1.] The term $W(x)$ represents the cost of stabilizing the trajectory from the initial state $x$ to the turnpike. Indeed,
in a large time interval, optimal strategies will spend most of the time close to the turnpike where, in view of \eqref{eq: W def}, the running cost of the infinite-horizon problem equals zero.
\item[2.] The term $V_s\, T$ corresponds to the running cost accumulated in the intermediate arc, where the time-evolution optima are close to the steady ones.
\item[3.] The constant $\lambda$ represents the cost of leaving the turnpike
in order to minimize the final cost $g$.
This final arc does not appear in the infinite horizon problem, but
it is always present in the finite horizon one, no matter how long the time-horizon is.
Therefore, it has to be considered in the long-time decomposition of the value function.
The way to single out this final arc from the rest of the trajectory is to consider the finite time horizon problem taking $\overline{y}$ as initial state, so that the cost of reaching the turnpike is $0$ and then to subtract the cost during the transient arc $V_s\, T$ (see the definition of $\lambda$ in \eqref{lambda in main thm}).\\
Let us also mention that such a constant $\lambda$ is usually referred to as \textit{the ergodic constant} (see \S \ref{sec: survey HJB}), which, roughly speaking, ensures the existence of a set which attracts all controlled trajectories (see \cite{arisawa1997ergodic,arisawa1998ergodic}). The \textit{turnpike} plays the role of such a set, which all the trajectories approach as closely as we wish regardless of their initial position. Moreover, the value of $\lambda$ represents the \textit{averaged cost} of being around the \textit{turnpike} (after omitting the first and second arcs).
\end{enumerate}
\end{remark}
In case the control constraints are not imposed (i.e. $U=\mathbb{R}^m$), one can actually follow similar arguments as in \cite{porretta2013long} to prove exponential turnpike for the optimal control problem \eqref{eq: linear ODE}-\eqref{eq: functional}, under the assumptions of $(A,B)$ being stabilizable and $(A,C)$ detectable.
We recall that exponential turnpike stands for the existence of constants $K,\mu>0$, independent of $T$, such that any optimal control-state pair $(u_T,y_T)$ satisfies
$$
\|u_{T}(t)-\overline{u}\|+\|y_{T}(t)-\overline{y}\|\leq K\left[e^{-\mu t}+e^{-\mu \left(T-t\right)}\right] \quad \forall t\in [0,T].
$$
This allows us to deduce the same conclusions of Theorem \ref{th_turnpikeHJB} for the unconstrained case without assuming that $A$ is invertible.
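The exponential bound above immediately yields a $T$-uniform $L^2$ bound of the type \eqref{lemma_unif_boun_2_intro}, since the squared right-hand side integrates to $(1-e^{-2\mu T})/\mu + 2Te^{-\mu T}$, which stays bounded as $T\to\infty$. A quick numerical sanity check (with arbitrary illustrative values $K=\mu=1$):

```python
import math

# Illustrative check (K and mu arbitrary): the exponential turnpike bound
# forces the L^2 deviation to be bounded uniformly in T, since
#   int_0^T (e^{-mu t} + e^{-mu (T-t)})^2 dt
#     = (1 - e^{-2 mu T})/mu + 2 T e^{-mu T}  ->  1/mu  as T -> infinity.

def l2_bound(K, mu, T, n=20000):
    # left-Riemann sum of  K^2 (e^{-mu t} + e^{-mu (T-t)})^2  on [0, T]
    dt = T / n
    return sum(K * K * (math.exp(-mu * k * dt)
                        + math.exp(-mu * (T - k * dt))) ** 2 * dt
               for k in range(n))

for T in (5.0, 10.0, 20.0):
    print(T, round(l2_bound(1.0, 1.0, T), 4))  # approaches 1/mu = 1
```

The printed values decrease toward $1/\mu$, mirroring the constant $K$ of \eqref{lemma_unif_boun_2_intro} being independent of $T$.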
We note that in this unconstrained case $U=\mathbb{R}^m$, the value function $W$ for the infinite horizon problem, defined in \eqref{eq: W def}, is the viscosity solution to the Hamilton-Jacobi-Bellman equation
\begin{equation}\label{eq: ergodic HJB_thm unconstrained}
V_s + \dfrac{1}{2}\|B^{*}\nabla W(x)\|^{2} - A\, x\cdot \nabla W(x) = \dfrac{1}{2}\|C\,x - z\|^{2}
\quad \quad x\in \mathbb{R}^{n},
\end{equation}
and the solution can be given as a quadratic form using Riccati Theory.
We recall this result in the following proposition, and give a sketch of the proof in Appendix B for the sake of completeness.
\begin{proposition}\label{prop_rappr_riccati}
Let $\left(\overline{u},\overline{y}\right)$ be the minimizer for $J_s$ defined in \eqref{eq: steady_functional}. Then,
\begin{equation}\label{feedback}
F(y)\coloneqq -B^*\widehat{E}\left(y-\overline{y}\right)+\overline{u}
\end{equation}
defines an optimal feedback law for $J_{\infty,x}$ defined in the right hand side of \eqref{eq: W def}, meaning that, for any $x\in\mathbb{R}^n$, the unique optimal control is given by
\begin{equation}\label{opt_inf_1}
u^{*}(s)= -B^*\widehat{E}\left(y^{*}(s)-\overline{y}\right)+ \overline{u},\qquad s\in (0,+\infty)
\end{equation}
where $\widehat{E}$ is the unique symmetric positive semidefinite solution to the Algebraic Riccati Equation
\begin{equation*}\label{ARE_3}
-\widehat{E}A-A^*\widehat{E}+\widehat{E}BB^*\widehat{E}=C^*C \hspace{1 cm} \mbox{(ARE)}
\end{equation*}
and $y^{*}$ solves the closed loop equation
\begin{equation*}\label{}
\left\{ \begin{array}{ll}
\frac{d}{ds}y^{*}(s) = \left(A\, + B\,F\right) y^{*}(s), & s\in (0,\infty) \\
y^{*}(0) = x.
\end{array}\right.
\end{equation*}
Moreover, the value function $W$ defined in \eqref{eq: W def} is given by
\begin{equation*}\label{infinity_value_function}
W(x) =\dfrac{1}{2}\left(x-\overline{y}\right)^*\widehat{E}\left(x-\overline{y}\right)+(\overline{p},x-\overline{y})_{\mathbb{R}^n},
\end{equation*}
and is, up to an additive constant, the unique viscosity solution, bounded from below, to the equation \eqref{eq: ergodic HJB_thm}.
\end{proposition}
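In the scalar case $n=m=1$ the (ARE) can be solved by hand, which gives a small self-contained illustration of Proposition \ref{prop_rappr_riccati} (the numerical values of $A$, $B$, $C$ below are arbitrary):

```python
import math

# Sketch (scalar n = m = 1): the Riccati equation (ARE)
#   -E A - A E + E B B E = C C
# reduces to  B^2 E^2 - 2 A E - C^2 = 0, whose unique nonnegative
# root is  E = (A + sqrt(A^2 + B^2 C^2)) / B^2.  The closed-loop
# matrix  A - B B E  then equals  -sqrt(A^2 + B^2 C^2) < 0, so the
# feedback  u = -B E (y - ybar) + ubar  is stabilizing.

def riccati_scalar(A, B, C):
    return (A + math.sqrt(A * A + B * B * C * C)) / (B * B)

A, B, C = -1.0, 1.0, 1.0
E = riccati_scalar(A, B, C)
closed_loop = A - B * B * E
print(E, closed_loop)  # 0.4142..., -1.4142...
```

The closed-loop pole $-\sqrt{A^2+B^2C^2}$ also quantifies the exponential rate $\mu$ at which trajectories approach the turnpike in the unconstrained case.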
\begin{comment}
\begin{remark}[Turnpike property with lack of observability]
The proof of Theorem \ref{th_turnpikeHJB} is based on the validity of the turnpike property \eqref{epsturnpike_eq1}. In Theorem \ref{th_TURNPIKE}, we prove that this property is satisfied for any final cost $g$ bounded from below whenever $(A,B)$ is $U$-stabilizable and $(A,C)$ is detectable.
However, if $U=\mathbb{R}^m$ and for special final costs, the conclusion of Theorem \ref{th_turnpikeHJB} can be deduced as well from a weaker version of the turnpike property. If for instance $g\equiv 0$, it suffices that the following inequality is satisfied
\begin{equation}\label{C turnpike prop intro}
\|u_{_{T}}(s)-\overline{u}\|+\|Cy_{_{T}}(s)-C\overline{y}\|\leq K\left[\exp\left(-\mu s\right)+\exp\left(-\mu \left(T-s\right)\right)\right],
\end{equation}
where $K$ and $\mu>0$ are $T$-independent constants. For this particular case, the detectability of $(A,C)$ is no longer necessary and it is sufficient to only assume the $C$-stabilizability, i.e. the stabilizability of observable modes (see \cite{TLA} and references therein). Given a specific final cost $g$ bounded from below, it would be interesting to obtain sharp conditions for a weaker turnpike property as \eqref{C turnpike prop intro} to hold.
\end{remark}
\end{comment}
\subsection{Related literature}\label{subsec:related results}
In the definition of the value function $W$ in \eqref{eq: W def} associated to the infinite horizon problem, the normalisation of the running cost by subtracting the cost at the optimal steady state is indeed very often used in dissipativity-based approaches to turnpike properties, and in the analysis of receding-horizon optimal control. In this context, the normalised running cost is known as the \textit{supply rate}, see \cite[Thm.2]{angeli2011average}.
In this direction, let us mention the seminal work of Willems \cite{willems1971least}, where existence results for infinite horizon optimal control problems are proved under frequency-domain and time-domain conditions. Moreover, a full characterisation of the solutions to the algebraic Riccati equation is also provided. We also refer to the books \cite{anderson2007optimal,kwakernaak1972linear} for the classical Riccati theory.
The relation between the value function and the turnpike property has been investigated by Gr\"une in \cite{grune2016approximation} for the discrete-time setting in the context of receding horizon control (see also \cite{grune2018turnpike,grune2016relation}). However, their motivations and conclusions are different from ours. There, the author presents an iterative method based on the turnpike property, which is used to approximate the infinite horizon problem by a sequence of finite horizon ones. Assumption 4.1 in \cite{grune2016approximation} represents, in fact, a discrete-time version of the turnpike property (see also Chapters 6 and 8 in \cite{grune2017nonlinear} for further details).
Other related results in connection with Model Predictive Control can be found in \cite{zanon2016tracking, zanon2018economic} and the references therein.
Other recent works related to our study are \cite{faulwasser2020continuous} and \cite{breiten2020turnpike}. In \cite{faulwasser2020continuous}, the interplay between dissipativity and stability properties in continuous-time infinite horizon optimal control problems has been studied, and moreover, the question of the link between the latter problem and the associated HJB equation has also been raised. In \cite{breiten2020turnpike}, the authors analyse the receding horizon control problem with finite stages in infinite dimensions in order to study the corresponding infinite horizon problem. In particular, their Theorem 6.4 provides an error estimate between these two problems which is intimately related to the turnpike property. A recent survey of dissipativity methods in optimal control can be found in \cite{grune2021dissipativity}.
A similar result to Theorem \ref{th_turnpikeHJB}(i) also appears in \cite{kouhkouh2018dynamic}, where the terms $\lambda, V_{s}$ and $W(\cdot)$ are represented by a different approach: a Riccati operator in an augmented state space is introduced, taking into account the target $z$ as a state variable (with zero dynamics). A description of the asymptotic behavior of such Riccati operators is provided in \cite[Lemma 2]{kouhkouh2018dynamic}, which then is the key ingredient to determine the desired asymptotic behavior.
In the present manuscript, our analysis relies on the study of the value functions of both the finite-time and infinite-time horizon problems and their corresponding partial differential equations, together with the optimality conditions they satisfy. The latter tools (in particular the PDE characterization) are different from those usually encountered in Riccati theory and in dissipativity-based approaches, and help us to establish the link between the turnpike property as known in control theory with the asymptotic behavior of a certain class of PDEs, that is, a time-evolutive HJB equation and its corresponding ergodic version.
\begin{comment}
\begin{itemize}
\item quadratic functions of the form $g(y(T)) = K \|D\, y(T) - z_T\|^{2}$ where $K>0$, $D\in \mathcal{M}_{n}(\mathbb{R})$ and $z_T\in\mathbb{R}^n$ are given. This corresponds to the fully quadratic LQ problem,
and can be seen as a penalization for the final state. By letting $K\to \infty$, it converges to the optimal control problem with fixed final state;
\item the $L^{1}$-norm of the final state, i.e. $g(y(T)) = \|y(T)\|_{1} = \sum\limits_{i=1}^{n}|y_{i}(T)|$ which has the effect of optimally sparsifying the vector $y(T)$.
\item distance function to a given set of points, i.e.
$$
g(y(T))=\min \{ \| y(T)-z_1\|, \ldots,\|y(T) - z_N\| \}.
$$
\end{itemize}
\end{comment}
\begin{comment}
\begin{figure}[h]
\begin{center}
\includegraphics[width=12cm]{longtimeversussteadyoptimalcontrol}\\
\caption{Optimal control $u(s)$ for \eqref{eq: linear ODE}-\eqref{eq: functional} (in blue) and optimal steady control $\overline{u}$ (in red).}\label{longtimeversussteadyoptimalcontrol}
\end{center}
\end{figure}
\end{comment}
\subsection{Known results on long-time behavior for Hamilton-Jacobi equations}\label{sec: survey HJB}
Observe that the PDE in \eqref{eq: HJ_mainth} is a Hamilton-Jacobi equation of the form
\begin{equation}\label{eq: HJ general}
\partial_{T}V + H(x,\nabla V) = \ell(x),\quad \text{in} \ \mathbb{R}^n\times (0,+\infty),
\end{equation}
where, in our case, the function $H: \mathbb{R}^n\times \mathbb{R}^n \to \mathbb{R}$, known as the \emph{Hamiltonian}, and the function $\ell :\mathbb{R}^n \to \mathbb{R}$ are given by
\begin{equation}
\label{intro - ham}
\begin{aligned}
H(x,p) = \; \max\limits_{u\in U}\left\{ -p\cdot (Ax+Bu) - \dfrac{1}{2}\|u\|^{2}\right\}\quad\text{and }\quad
\ell(x) = \dfrac{1}{2}\|Cx-z\|^{2}.
\end{aligned}
\end{equation}
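In the unconstrained case $U=\mathbb{R}^m$, the maximum defining $H$ is attained at $u^*=-B^*p$, which yields the closed form $H(x,p)=-p\cdot Ax+\frac12\|B^*p\|^2$. The following small numerical sketch (with arbitrarily chosen matrices, not data from the paper) checks this closed form against direct evaluation of the expression inside the maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
x = rng.standard_normal(n)
p = rng.standard_normal(n)

def objective(u):
    # the expression inside the max defining H(x, p), for U = R^m
    return -p @ (A @ x + B @ u) - 0.5 * u @ u

u_star = -B.T @ p                                  # unconstrained maximizer
H_closed = -p @ (A @ x) + 0.5 * (B.T @ p) @ (B.T @ p)

assert np.isclose(objective(u_star), H_closed)
# strict concavity in u: any perturbation gives a smaller value
for _ in range(100):
    v = u_star + rng.standard_normal(m)
    assert objective(v) <= H_closed + 1e-12
```

In particular, $H(x,\cdot)$ is only quadratic in the range of $B^*$, which is the source of the lack of coercivity discussed below.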
The long-time behavior of equations like \eqref{eq: HJ general}
has been widely studied in the literature, especially on the flat torus, but also in more general settings;
see, e.g., \cite{barles2019large,barles2000large,fujita2006asymptotic,ishii2006asymptotic,ishii2008asymptotic,ishii2013short,roquejoffre2001convergence} and the references therein.
Here, we deal with unbounded solutions in the whole space $\mathbb{R}^n$,
a scenario much less studied than its counterpart on the $n$-dimensional torus.
In the recent work \cite{barles2019large}, it is proved, under suitable hypotheses on $H$, that there exists a constant $c\in\mathbb{R}$ such that
\begin{equation}\label{eq: long-time intro}
V(x,T) - c\, T \to \varphi (x), \qquad \text{as} \ T\to \infty,
\end{equation}
where $\varphi$ is called a \textit{corrector}, and is a viscosity solution to the \emph{stationary} Hamilton-Jacobi equation
\begin{equation}
\label{eq: stationary equation}
c + H(x, \nabla \varphi ) = \ell(x) , \qquad \text{in} \ \mathbb{R}^n.
\end{equation}
This is also called the \emph{ergodic problem} \cite{arisawa1997ergodic,arisawa1998ergodic,barles2006ergodic} and $c$ is known as the \textit{ergodic constant}.
For equations like \eqref{eq: stationary equation}, a solution is understood as a pair $(c,\varphi)$, where $c$ is a constant and $\varphi$ is a (continuous) viscosity solution to \eqref{eq: stationary equation}.
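As a simple illustration of an ergodic pair $(c,\varphi)$ (a hypothetical scalar example, not taken from the results of this paper), let $n=m=1$, $A=-1$, $B=C=1$, $U=\mathbb{R}$ and $z=2$, so that $\ell(x)=\frac{1}{2}(x-2)^2$ and, by \eqref{intro - ham}, $H(x,p)=xp+\frac{1}{2}p^2$. Plugging the quadratic ansatz $\varphi(x)=\frac{1}{2}Px^2+qx$ into \eqref{eq: stationary equation} and matching coefficients gives
\begin{equation*}
P+\tfrac{1}{2}P^2=\tfrac{1}{2},\qquad q(1+P)=-2,\qquad c+\tfrac{1}{2}q^2=2,
\end{equation*}
whence $P=\sqrt{2}-1$, $q=-\sqrt{2}$ and $c=1$; one checks directly that $c$ coincides with the optimal steady value of this example.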
\begin{comment}
\begin{definition}\label{def: viscosity}
A continuous function $\varphi$ is a \textit{viscosity solution} to the nonlinear partial differential equation\footnote{For the time-dependent case, it suffices to see $x\in \mathbb{R}^{n+1}$ in the definition as $(y,t)\in\mathbb{R}^{n}\times (0,\infty)$.} $F(x,\varphi(x),D\varphi(x))=0$, $x\in\mathbb{R}^n$, if the following holds
\begin{equation*}
\begin{aligned}
& F(x,\varphi(x),p)\leq 0,\quad \forall\,x\in\mathbb{R}^n,\,\forall\,p\in D^{+}\varphi(x)\\
& F(x,\varphi(x),q)\geq 0,\quad \forall\,x\in\mathbb{R}^n,\,\forall\,q\in D^{-}\varphi(x)
\end{aligned}
\end{equation*}
\end{definition}
\end{comment}
In Theorem \ref{th_turnpikeHJB}, we have obtained long-time asymptotics of the form \eqref{eq: long-time intro} for the solution to the HJB equation \eqref{eq: HJ_mainth}.
Although the unbounded case (space domain $\Omega =\mathbb{R}^n$) is treated in \cite{barles2019large}, we point out that in our setting the Hamiltonian does not satisfy all the assumptions required there. In particular, the function $H(x,p)$ defined in \eqref{intro - ham} is neither strictly convex in the $p$ variable nor coercive (in view of \eqref{intro - ham}, the Hamiltonian is coercive if and only if $B^*$ has a trivial kernel).
\begin{comment}
Concerning the solutions to the stationary equation \eqref{eq: stationary equation}, the ergodic constant is commonly identified in the literature as the limit of averages
\begin{equation*}
c = \lim_{T\to +\infty}\frac{1}{T}V (x,T),
\end{equation*}
Here we give an alternative characterization of $c$ as the minimum value of the steady functional $J_s$ defined in \eqref{eq: steady_functional}. Observe that this characterization is based on the turnpike property and does not involve the value function $V$.
\end{comment}
The paper is structured as follows.
In \textbf{subsection \ref{sec:CTLT Hamilton-Jacobi}}, we prove a first result which is a direct consequence of the turnpike property, namely, the time-averages of the value function converge to the ergodic constant as the time horizon tends to infinity. In \textbf{subsection \ref{subsection infinte time horizon pbm}}, we study the auxiliary infinite horizon optimal control problem introduced in \eqref{eq: W def}. Finally, in \textbf{subsection \ref{subsection: proof Thm 1.1 (1)}}, we give the proof of Theorem \ref{th_turnpikeHJB}.
In \textbf{Section \ref{sec:Conclusions and open problems}}, we sum up the conclusions of the paper and give a list of possible research lines.
Finally, for the reader's convenience and to keep the paper self-contained, we include \textbf{Appendix A}, with the proof of the turnpike property stated in Theorem \ref{th_TURNPIKE}, and \textbf{Appendix B}, with some elements of classical Riccati theory, stated without proof, which are needed to justify Proposition \ref{prop_rappr_riccati}.
\section{Infinite horizon problem and proof of Theorem \ref{th_turnpikeHJB}}
\label{sec: Proof main thm}
The proof of Theorem \ref{th_turnpikeHJB} relies on the turnpike property, which ensures that the optimal control for problem \eqref{eq: value function} and its corresponding state trajectory remain close to the steady optimal pair for all times $t$ far away from $0$ and $T$.
\subsection{A first consequence of the turnpike property}
\label{sec:CTLT Hamilton-Jacobi}
We start with a result which is a direct consequence of the turnpike property in Theorem \ref{th_TURNPIKE}. It ensures that the time-averages of the cost functional $J_{T,x}(\cdot)$, evaluated at the optimal control $u_{_{T}}$, converge to the value of the steady optimal control problem as $T\to +\infty$.
\begin{proposition}\label{lemma_convaver}
Under the assumptions of Theorem \ref{th_turnpikeHJB}, let $V(x,T)$ be the value function defined in \eqref{eq: value function} and $V_s$ defined as in \eqref{V_s def}. Then, for any $x\in \mathbb{R}^n$, we have
\begin{equation}
\label{eq: limit time average}
\frac{1}{T}V (x,T)\underset{T\to +\infty}{\longrightarrow}V_s.
\end{equation}
\end{proposition}
In order to prove the above proposition, we need to rewrite the functional $J_{T,x}$ defined in \eqref{eq: functional} in a different way. Roughly speaking, we need the running cost to be centered at the turnpike. This is the content of the following lemma.
\begin{lemma}\label{lemma_rappr_lower_bound}
Under the assumptions of Theorem \ref{th_turnpikeHJB}, let $(\overline{u},\overline{y})$ be the steady optimal control-state pair for the functional $J_s$ defined in \eqref{eq: steady_functional} and $V_s := J_s (\overline{u},\overline{y})$.
Then, for any $T>0$, $x\in\mathbb{R}^n$ and $u\in \mathcal{U}_T$, we have
\begin{eqnarray}\label{eq}
J_{T,x}(u) &=&T\, V_s +\dfrac{1}{2} \int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y (s)-\overline{y}\right)\|^2\right] ds\nonumber\\
&\;& +\int_0^T \left[\left(\overline{u},u(s)-\overline{u}\right)_{\mathbb{R}^m}+\left(C\overline{y}-z,\, C \left( y (s)-\overline{y}\right)\right)_{\mathbb{R}^n}\right] ds+g(y(T))
\end{eqnarray}
and
\begin{eqnarray}\label{ineq}
J_{T,x}(u) &\geq&T\, V_s+\frac{1}{2} \int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\\
&\;&+\left(\overline{p},x-y(T)\right)_{\mathbb{R}^n}+g(y(T)),\nonumber
\end{eqnarray}
where $\overline{p}\in \mathbb{R}^n$ is the optimal adjoint steady state (Lagrange multiplier) and is independent of $T,x$ and $u$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma_rappr_lower_bound}]
In view of the definition of $J_{T,x}$ in \eqref{eq: functional}, we can compute
\begin{eqnarray}\label{eq: value T and J_2}
J_{T,x}(u)&=& \dfrac{1}{2} \int_0^T \left[\| u(s)-\overline{u}+\overline{u}\|^2 + \| C \, y (s)-C\overline{y}+C\overline{y}-z\|^2\right] ds + g(y(T))\nonumber\\
&=&\dfrac{T}{2} \left[\| \overline{u}\|^2 + \| C \, \overline{y}-z\|^2\right] +\dfrac{1}{2} \int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y (s)-\overline{y}\right)\|^2\right] ds\nonumber\\
&\;&\quad \quad +\int_0^T \left[\left(\overline{u},u(s)-\overline{u}\right)_{\mathbb{R}^m}+\left(C\overline{y}-z,\, C \left( y (s)-\overline{y}\right)\right)_{\mathbb{R}^n}\right] ds+g(y(T))\nonumber\\
&=& T\, V_s +\dfrac{1}{2} \int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y (s)-\overline{y}\right)\|^2\right] ds\nonumber\\
&\;& \quad \quad +\int_0^T \left[\left(\overline{u},u(s)-\overline{u}\right)_{\mathbb{R}^m}+\left(C\overline{y}-z,\, C \left( y (s)-\overline{y}\right)\right)_{\mathbb{R}^n}\right] ds+g(y(T)).
\end{eqnarray}
We now focus on the term
\begin{equation}\label{term to be computed}
\int_0^T\left(C\, \overline{y}-z,\, C \left( y(s)-\overline{y}\right)\right)_{\mathbb{R}^n}ds.
\end{equation}
We recall that the pair $(\overline{u},\overline{y})$ is optimal. Then, by using the convexity of $U$ and the invertibility of $A$, for any $u\in U$ we have the first order optimality condition\footnote{Indeed, consider the function $f:\left[0,1\right]\longrightarrow \mathbb{R}$, defined as $f\left(\delta\right)\coloneqq J_s\left(\left(\overline{u},\overline{y}\right)+\delta \left(u-\overline{u},y-\overline{y}\right)\right)$. Since $\left(\overline{u},\overline{y}\right)$ minimizes $J_s$, $f$ achieves its minimum at $\delta = 0$, whence $f^{\prime}\left(0\right)\geq 0$. Now, by \eqref{eq: steady_functional}, $f^{\prime}\left(0\right)=\left(\overline{u},u-\overline{u}\right)_{\mathbb{R}^m}+\left(C \, \overline{y}-z,C\, \left(y-\overline{y}\right)\right)_{\mathbb{R}^n}$. Now, the invertibility of $A$ guarantees the existence of an adjoint state $\overline{p}$ solving $0=A^*\overline{p}+C^*(C\, \overline{y}-z)$. Then, we can rewrite $f^{\prime}\left(0\right)=\left(\overline{u}+B^*\overline{p},u-\overline{u}\right)_{\mathbb{R}^m}$, whence (remembering that $f^{\prime}\left(0\right)\geq 0$) $\left(\overline{p},B\left(u-\overline{u}\right)\right)_{\mathbb{R}^n}\geq -\left(\overline{u},u-\overline{u}\right)_{\mathbb{R}^m}$.
}
\begin{equation}\label{steady_opt_cond}
\left(\overline{p},B\left(u-\overline{u}\right)\right)_{\mathbb{R}^n}\geq -\left(\overline{u},u-\overline{u}\right)_{\mathbb{R}^m},
\end{equation}
where $0=A^*\overline{p}+C^*(C\, \overline{y}-z)$,
which means that $\overline{u}$ is the projection of $-B^*\overline{p}$ onto $U$.
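The projection characterization can be checked on a small constrained example (hypothetical data, not from the paper): in dimension $n=m=1$, take $A=-1$, $B=C=1$, $z=2$ and the box constraint $U=[0,\tfrac12]$, so that the steady map is $y=-A^{-1}Bu=u$.

```python
import numpy as np

# Scalar steady problem: minimize J_s(u) = 0.5*u**2 + 0.5*(y-2)**2
# subject to 0 = -y + u (A=-1, B=1), i.e. y = u, over u in U = [0, 0.5].
U_lo, U_hi = 0.0, 0.5
grid = np.linspace(U_lo, U_hi, 100001)
Js = 0.5 * grid**2 + 0.5 * (grid - 2.0)**2
u_bar = grid[np.argmin(Js)]        # constrained minimizer (grid search)
y_bar = u_bar                      # steady state y = -A^{-1} B u

# Adjoint: 0 = A^T p_bar + C^T (C y_bar - z), i.e. p_bar = y_bar - z here
p_bar = y_bar - 2.0
# Euclidean projection of -B^T p_bar onto U
proj = min(max(-p_bar, U_lo), U_hi)

assert abs(u_bar - 0.5) < 1e-4     # the constraint is active at the upper bound
assert abs(proj - u_bar) < 1e-4    # u_bar = Proj_U(-B^T p_bar)
```

Here the unconstrained minimizer would be $u=1$, so the constraint is active and $\overline{u}$ is indeed the projection of $-B^*\overline{p}=1.5$ onto $U$.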
On the other hand, the pairs $(u(\cdot),y(\cdot))$ and $(\overline{u},\overline{y})$ satisfy the equation in \eqref{eq: linear ODE}. Hence, we have
\begin{equation}\label{difference_eq: linear ODE}
\begin{cases}
\frac{d}{ds}(y-\overline{y})=A(y-\overline{y})+B(u-\overline{u})\hspace{1 cm}& s\in (0,T)\\
y(0)-\overline{y}=x-\overline{y}.
\end{cases}
\end{equation}
Now, since $A$ is invertible, the set $M_s\coloneqq \left\{\left(u,-A^{-1}Bu\right) \ | \ u\in U\right\}$ is well defined. Therefore, for a.e. $s\in (0,T)$, $u(s)$ (i.e., the time-evolution control evaluated at time $s$) is an admissible steady control in $M_s$. Hence, we may apply \eqref{steady_opt_cond}, obtaining
\begin{equation*}
\int_0^T \left(\overline{p},B(u(s)-\overline{u})\right)_{\mathbb{R}^n} ds\geq -\int_0^T \left(\overline{u},u(s)-\overline{u}\right)_{\mathbb{R}^m} ds.
\end{equation*}
Employing the above inequality and \eqref{difference_eq: linear ODE}, and taking into account $y(0)=x$, the term \eqref{term to be computed} can be rewritten as:
\begin{eqnarray}\label{lemma_rappr_lower_bound_eq6}
\int_0^T\left(C\overline{y}-z,\, C \left( y(s)-\overline{y}\right)\right)_{\mathbb{R}^n}ds &=&\int_0^T\left(C^*\left(C\overline{y}-z\right),\, y(s)-\overline{y}\right)_{\mathbb{R}^n}ds\nonumber\\
&=&-\int_0^T\left(\overline{p},\, A\left(y(s)-\overline{y}\right)\right)_{\mathbb{R}^n}ds\nonumber\\
&=&-\int_0^T\left(\overline{p}, \frac{d}{ds}(y-\overline{y})-B(u-\overline{u})\right)_{\mathbb{R}^n}ds\nonumber\\
&=&\left(\overline{p},y(0)-\overline{y}\right)_{\mathbb{R}^n}-\left(\overline{p},y(T)-\overline{y}\right)_{\mathbb{R}^n}\nonumber\\
&\;&\quad \quad + \int_0^T \left(\overline{p},B(u-\overline{u})\right)_{\mathbb{R}^n} ds\nonumber\\
&\geq&\left(\overline{p},x-y(T)\right)_{\mathbb{R}^n} -\int_0^T \left(\overline{u},u(s)-\overline{u}\right)_{\mathbb{R}^m} ds.
\end{eqnarray}
Finally, the conclusion follows by combining \eqref{eq: value T and J_2} and \eqref{lemma_rappr_lower_bound_eq6}.
\end{proof}
We can now give the proof of Proposition \ref{lemma_convaver}, which follows from Lemma \ref{lemma_rappr_lower_bound} and the $U$-stabilizability assumption in Definition \ref{definition_U_stab}.
\begin{proof}[Proof of Proposition \ref{lemma_convaver}]
Let $x\in \mathbb{R}^n$ be fixed, and consider, for all $T>0$, the trajectory $y_1(t) = \overline{y}$ for all $t\in [0,T]$, which is associated to the constant control $u_1(t) = \overline{u}$ for all $t\in (0,T)$. By the $U$-stabilizability assumption (see Definition \ref{definition_U_stab}), there exists a control $u$ and an associated state trajectory $y$ such that
$$
\|u-\overline{u}\|_{L^1\cap L^2} + \|y-\overline{y}\|_{L^1\cap L^2} \leq K \| y_1(0) - x\|.
$$
Using the linearity of the dynamics, we can deduce from this that
$\|y(t) - \overline{y}\|\leq K\| y_1(0) - x\|$ for all $t\in [0,T]$.
Then, using Lemma \ref{lemma_rappr_lower_bound} and the definition of the value function, we deduce that
\begin{equation}\label{upper bound V}
V(x,T) \leq J_{T,x} (u) \leq T\, V_s + C,
\end{equation}
for some constant $C>0$ independent of $T$.
The lower bound follows from \eqref{ineq} applied to the optimal control $u_{_{T}}$, that is
\begin{eqnarray}
J_{T,x}(u_{_{T}}) &\geq&T\, V_s+\frac{1}{2} \int_0^T \left[\| u_{_{T}}(s)-\overline{u}\|^2 + \| C \, \left(y_{_{T}}(s)-\overline{y}\right)\|^2\right] ds\nonumber\\
&\;&+\left(\overline{p},x-y_{_{T}}(T)\right)_{\mathbb{R}^n}+g(y_{_{T}}(T)).\nonumber
\end{eqnarray}
It then suffices to notice that the integral term is positive and that, by the $T$-uniform bound of the optimal trajectory in \Cref{lemma_unif_bound}, the last two terms in the above inequality are bounded by a constant $K$ independent of $T$. Hence, one has
\begin{equation}\label{lower bound V}
V(x,T) = J_{T,x}(u_{_{T}}) \geq T V_{s} - K.
\end{equation}
The conclusion then follows after dividing the inequalities \eqref{upper bound V} and \eqref{lower bound V} by $T$ and taking the limit as $T\to +\infty$.
\end{proof}
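Proposition \ref{lemma_convaver} can be checked numerically on a scalar example (illustrative data, not from the paper): $A=-1$, $B=C=1$, $U=\mathbb{R}$, $z=2$, $g\equiv 0$, for which $\overline{y}=\overline{u}=1$ and $V_s=1$. With the quadratic ansatz $V=\frac12 Py^2+qy+r$ for the value-to-go, the HJB equation reduces to backward Riccati ODEs, integrated below by explicit Euler:

```python
# Scalar LQ tracking (illustration, not from the paper): dy = (-y + u) ds,
# running cost 0.5*u**2 + 0.5*(y - 2)**2, terminal cost g = 0, V_s = 1.
# Ansatz V = 0.5*P*y**2 + q*y + r; integrate the backward Riccati ODEs
# by explicit Euler from the terminal data P(T) = q(T) = r(T) = 0.
A, z, V_s = -1.0, 2.0, 1.0
T, dt = 40.0, 1e-3
P = q = r = 0.0
for _ in range(int(T / dt)):           # backward in time, from T down to 0
    dP = 1.0 + 2.0 * A * P - P**2      # -P' = C^2 + 2*A*P - B^2*P^2
    dq = -z + A * q - P * q            # -q' = -C*z + A*q - B^2*P*q
    dr = 0.5 * z**2 - 0.5 * q**2       # -r' = z^2/2 - B^2*q^2/2
    P, q, r = P + dt * dP, q + dt * dq, r + dt * dr

x = 0.0
V = 0.5 * P * x**2 + q * x + r         # approximates V(x, T)
assert abs(V / T - V_s) < 0.1          # time-average of V close to V_s
```

For large $T$ the gain $P$ stabilizes at $\sqrt{2}-1$ and $r$ grows at rate $V_s=1$, so $V(x,T)/T\to V_s$, in line with \eqref{eq: limit time average}.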
We end this subsection with the following Lipschitz estimate, uniform in $T$, which is also a consequence of the turnpike property and will be useful in the proof of Theorem \ref{th_turnpikeHJB}.
\begin{lemma}\label{lemma V lip estimate}
Assume
$(A,B)$ is $U$-stabilizable and $(A,C)$ is detectable. Let $V$ be the function defined in \eqref{eq: value function}.
Then, for any $M>0$, there exists a constant $K_M>0$ such that for all $T>0$ and all $x_1$ and $x_2$ in $\mathbb{R}^n$ satisfying $\|x_i\|\leq M$, we have
\begin{equation*}
\left|V(x_2,T)-V(x_1,T)\right|\leq K_M\left\|x_2-x_1\right\|.
\end{equation*}
\end{lemma}
\begin{proof}
We prove this lemma by using the definition of $V(x,T)$ as the minimal value of $J_{T,x}$. Let $u_{_{T},x_1}\in \mathcal{U}_T$ be an optimal control for $J_{T,x_1}$.
\textit{Step 1} \ \textbf{Construction of stabilizing control}\\
Since $(A,B)$ is $U$-stabilizable, there exists a control $\hat{u}\in L^2(0,T;U)$ such that
\begin{equation}
\left\|\hat{u}-u_{_{T},x_1}\right\|_{L^1\cap L^2(0,T)}+\left\|\hat{y}-y_{_{T},x_1}\right\|_{L^1\cap L^2(0,T)}\leq K\left(A,B,U\right)\left\|x_2-x_1\right\|,
\end{equation}
where $y_{_{T},x_1}$ is the optimal trajectory associated with $u_{_{T},x_1}$, $\hat{y}$ is the solution to \eqref{eq: linear ODE} with initial datum $x_2$ and control $\hat{u}$, and $\left\|\cdot\right\|_{L^1\cap L^2}\coloneqq \left\|\cdot\right\|_{L^1\left(0,T\right)}+\left\|\cdot\right\|_{L^2\left(0,T\right)}$.
We have then
\begin{equation}\label{diff_functionals}
\left|J_{T,x_2}(\hat{u})-J_{T,x_1}(u_{_{T},x_1})\right|\leq K_M\left\|x_2-x_1\right\|,
\end{equation}
where $K_M$ is independent of $T>0$.
\textit{Step 2} \ \textbf{Conclusion}\\
For $i=1,2$, let $u_{_{T},x_i}$ be optimal controls for $J_{T,x_i}$ and let $\hat{u}$ be defined as above. Then, by the definition of the value function and \eqref{diff_functionals},
\begin{eqnarray*}
V(x_2,T)-V(x_1,T)&=&J_{T,x_2}(u_{_{T},x_2})-J_{T,x_1}(u_{_{T},x_1})\nonumber\\
&\leq &J_{T,x_2}(\hat{u})-J_{T,x_1}(u_{_{T},x_1})\nonumber\\
&\leq &K_M\left\|x_2-x_1\right\|.
\end{eqnarray*}
By the arbitrariness of $x_1$ and $x_2$, we obtain the desired Lipschitz property.
\end{proof}
\subsection{The infinite horizon linear-quadratic problem}
\label{subsection infinte time horizon pbm}
Here we introduce the auxiliary infinite time horizon optimal control problem announced in the introduction,
that allows us to compute the optimal cost of stabilizing the trajectory to the turnpike from the initial state.
For each $x\in\mathbb{R}^n$, the dynamics are determined by the same ODE in \eqref{eq: linear ODE}, in this case considering the time interval $(0,\infty)$:
\begin{equation}\label{eq: ODE infinity}
\begin{array}{ll}
\dot{y}(s) = A\, y(s) + B\, u(s), & s\in (0,\infty) \\
y (0) = x.
\end{array}
\end{equation}
The set of admissible controls is $\mathscr{A}_x$ as defined in \eqref{eq: admissible controls}, where $V_s = J_s(\overline{u},\overline{y})$ is the constant defined in \eqref{V_s def}. The problem we consider is to minimize the cost functional
\begin{equation}\label{eq: functional_infinity}
J_{\infty,x}(u):= \displaystyle\int_{0}^{\infty}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right]\;ds,
\end{equation}
over the controls $u\in \mathscr{A}_x$. The value function for this problem is $W(x)$, as defined in \eqref{eq: W def}. Note that the set of admissible controls is different for each $x$. In addition, since $(A,B)$ is $U$-stabilizable to $\overline{y}$, we deduce that $\mathscr{A}_x$ is nonempty for all $x$.
The following lemma collects some direct consequences of the definition of $\mathscr{A}_x$.
\begin{lemma}\label{lemma_rappr_infinity 1}
Let $\left(\overline{u},\overline{y}\right)$ be the minimizer for $J_s$ defined in \eqref{eq: steady_functional}. For any $x\in\mathbb{R}^n$ and any control $u\in \mathscr{A}_x$, we denote by $y$ the solution to \eqref{eq: ODE infinity} with control $u$ and initial datum $x$. Then it holds
$$
u-\overline{u}\in L^2(0,+\infty;\mathbb{R}^m) \quad \text{and} \quad y-\overline{y}\in L^2(0,+\infty;\mathbb{R}^n).
$$
In addition, $\{y(t)\}_{t> 0}$ is bounded in $\mathbb{R}^n$ and satisfies
$$
y(t)\longrightarrow\overline{y} \quad \text{as} \quad t\to +\infty.
$$
Moreover, the functional $J_{\infty,x}$ satisfies
\begin{equation}\label{lemma_rappr_infinity_eq1}
J_{\infty,x}(u) \geq \dfrac{1}{2} \int_0^{\infty} \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds +(\overline{p},x-\overline{y})_{\mathbb{R}^n}
\end{equation}
and it admits a minimizer $u^{*}$ in $\mathscr{A}_x$.
\end{lemma}
\begin{proof}
\textit{Step 1} \ \textbf{Boundedness of $\left\{y(t)\right\}_{t>0}\subset \mathbb{R}^n$}\\
Take any $u\in \mathscr{A}_x$ and let $y$ be the solution to \eqref{eq: ODE infinity}, with initial datum $x$ and control $u$.
By Lemma \ref{lemma_obs_conseq_dynamical} applied to $y-\overline{y}$, we have
\begin{equation*}
\|y(t)-\overline{y}\|^2\leq K\left[\|x-\overline{y}\|^2+\int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\right],
\end{equation*}
whence
\begin{equation*}
\dfrac{1}{2} \int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\geq \alpha \|y(t)-\overline{y}\|^2-K,
\end{equation*}
where $\alpha = \alpha(A,C)>0$ and $K=K(A,B,C,x,z)\geq 0$. Using the above inequality and adapting \eqref{ineq} yields
\begin{eqnarray*}
J_{\infty,x}(u)&=& \lim_{t\to +\infty}\int_{0}^{t}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right]\;ds\nonumber\\
&\geq&\lim_{t\to +\infty}\left[\dfrac{1}{2} \int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\right.\nonumber\\
&\;&\quad\quad\quad +(\overline{p},x-y(t))_{\mathbb{R}^n}\bigg]\nonumber\\
&\geq&\limsup_{t\to +\infty}\left[\dfrac{1}{2} \int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\right.\nonumber\\
&\;&\quad\quad\quad -K\left(1+\left\|y(t)-\overline{y}\right\|\right)\bigg]\nonumber\\
&\geq&\limsup_{t\to +\infty}\left[\alpha \|y(t)-\overline{y}\|^2-K\left(\|y(t)-\overline{y}\|+2\right)\right]\nonumber\\
&\geq&\dfrac{\alpha}{2}\limsup_{t\to +\infty} \|y(t)-\overline{y}\|^2-K.\nonumber\\
\end{eqnarray*}
Now, since $u\in \mathscr{A}_x$, we have $J_{\infty,x}(u)<+\infty$. This, together with the above estimate, implies the boundedness of $\left\{y(t)\right\}_{t>0}\subset \mathbb{R}^n$.\\
\textit{Step 2} \ \textbf{Proof of $u-\overline{u}\in L^2(0,+\infty;\mathbb{R}^m)$ and $y-\overline{y}\in L^2(0,+\infty;\mathbb{R}^n)$.}\\
By Step 1, there exists a constant $K(u)\geq 0$ such that $\left\|y(t)\right\|\leq K(u)$ for all $t>0$.
By \eqref{ineq}, one gets
\begin{equation}
\label{lemma_rappr_infinity_eq3}
\begin{aligned}
&\int_{0}^{t}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right]\;ds\\
& \qquad \qquad \geq \dfrac{1}{2} \int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds + (\overline{p},x-y(t))_{\mathbb{R}^n}
\end{aligned}
\end{equation}
and using the above bound, for any $t>0$, we have
\begin{equation*}
\begin{aligned}
&\int_{0}^{t}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right]\;ds\\
& \qquad \qquad \geq \dfrac{1}{2} \int_0^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds -K(u),
\end{aligned}
\end{equation*}
whence, since $u\in \mathscr{A}_x$,
\begin{eqnarray*}\label{}
+\infty > J_{\infty,x}(u)&=&\lim_{t\to +\infty}\int_{0}^{t}\left[\dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\, y(s) - z\|^{2} - V_s \right]\;ds\nonumber\\
&\geq&\dfrac{1}{2} \int_0^{\infty} \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds-K(u),
\end{eqnarray*}
which in turn implies $u-\overline{u}\in L^2(0,+\infty;\mathbb{R}^m)$ and $C(y-\overline{y})\in L^2(0,+\infty;\mathbb{R}^n)$. Now, since the pair $(A,C)$ is detectable, adapting the techniques of the proof of Lemma \ref{lemma_obs_conseq_dynamical}, we have in fact $y-\overline{y}\in L^2(0,+\infty;\mathbb{R}^n)$.
\textit{Step 3} \ \textbf{Proof of $y(t)\longrightarrow \overline{y}$ as $t\to +\infty$.}\\
Now, since $y-\overline{y}\in L^2(0,+\infty;\mathbb{R}^n)$, there exists a sequence $t_m\to +\infty$, such that
\begin{equation*}
y(t_m)\underset{m\to +\infty}{\longrightarrow}\overline{y}.
\end{equation*}
By the above convergence and $u-\overline{u}\in L^2(0,+\infty;\mathbb{R}^m)$ and $C(y-\overline{y})\in L^2(0,+\infty;\mathbb{R}^n)$, for any $\varepsilon >0$, there exists $m_{\varepsilon}\in \mathbb{N}$ such that for every $m>m_{\varepsilon}$
\begin{equation*}
\left\|y(t_m)-\overline{y}\right\|< \varepsilon \hspace{0.3 cm}\mbox{and}\hspace{0.3 cm}\int_{t_m}^{+\infty} \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds<\varepsilon^2.
\end{equation*}
Then, by Lemma \ref{lemma_obs_conseq_dynamical}, for any $m>m_{\varepsilon}$ and for any $t>t_m$ we have
\begin{equation*}
\left\|y(t)-\overline{y}\right\|^2\leq K\left[\left\|y(t_m)-\overline{y}\right\|^2+\int_{t_m}^t \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\right]<2K \varepsilon^2,
\end{equation*}
whence $y(t) \longrightarrow\overline{y}\,$ as $\,t\to +\infty$.
\textit{Step 4} \ \textbf{Proof of \eqref{lemma_rappr_infinity_eq1}}\\
The representation formula \eqref{lemma_rappr_infinity_eq1} is a consequence of \eqref{eq: functional_infinity}, \eqref{lemma_rappr_infinity_eq3}, $u-\overline{u}\in L^2(0,+\infty;\mathbb{R}^m)$, $\,y-\overline{y}\in L^2(0,+\infty;\mathbb{R}^n)$ and $y(t)\underset{t \to +\infty}{\longrightarrow}\overline{y}$. Existence of the minimizer follows from \eqref{lemma_rappr_infinity_eq1} and the Direct Method in the Calculus of Variations.
\end{proof}
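For the unconstrained scalar example $A=-1$, $B=C=1$, $U=\mathbb{R}$, $z=2$ (illustrative, not from the paper), the shifted cost in \eqref{lemma_rappr_infinity_eq1} is a standard LQ functional, so classical Riccati theory (cf. Appendix B) suggests the feedback $u=\overline{u}-\widehat{E}(y-\overline{y})$ and the candidate value $W(x)=\frac12\widehat{E}(x-\overline{y})^2+\overline{p}(x-\overline{y})$, where $\widehat{E}=\sqrt{2}-1$ solves the scalar ARE $\widehat{E}^2+2\widehat{E}=1$. A sketch simulating $J_{\infty,x}$ under this candidate feedback:

```python
# Scalar example (illustration, not from the paper): A=-1, B=C=1, U=R, z=2,
# hence y_bar = u_bar = 1, V_s = 1, and adjoint p_bar = y_bar - z = -1.
# Scalar ARE: E**2 + 2*E - 1 = 0 => E = sqrt(2) - 1. Candidate feedback:
#   u(s) = u_bar - E * (y(s) - y_bar)
E = 2 ** 0.5 - 1.0
y_bar, u_bar, p_bar, z, V_s = 1.0, 1.0, -1.0, 2.0, 1.0

x, T, dt = 0.0, 30.0, 1e-3
y, J = x, 0.0
for _ in range(int(T / dt)):           # forward Euler evaluation of J_infinity
    u = u_bar - E * (y - y_bar)
    J += dt * (0.5 * u**2 + 0.5 * (y - z)**2 - V_s)
    y += dt * (-y + u)

W_candidate = 0.5 * E * (x - y_bar)**2 + p_bar * (x - y_bar)
assert abs(y - y_bar) < 1e-3           # trajectory converges to the turnpike
assert abs(J - W_candidate) < 0.02     # simulated cost matches the candidate W(x)
```

The simulated cost agrees with the candidate value $W(0)=\frac12(\sqrt{2}-1)+1\approx 1.207$, consistently with the representation \eqref{lemma_rappr_infinity_eq1} (which holds with equality in the unconstrained case).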
Next we prove a local Lipschitz estimate for $W$ that will be used in the proof of Theorem \ref{th_turnpikeHJB}.
\begin{lemma}\label{lemma W lip estimate}
Assume
$(A,B)$ is $U$-stabilizable to $\overline{y}$ and $(A,C)$ is detectable and let $W$ be the function defined in \eqref{eq: W def}.
Then, for any $M>0$, there exists a constant $K_M>0$ such that
\begin{equation*}
\left|W(x_2)-W(x_1)\right|\leq K_M\left\|x_2-x_1\right\|,
\end{equation*}
for all $x_1$ and $x_2$ in $\mathbb{R}^n$ satisfying $\|x_i\|\leq M$.
\end{lemma}
\begin{proof}
The proof follows the techniques in the proof of Lemma \ref{lemma V lip estimate}.
\end{proof}
\begin{comment}
\begin{proof}
Let us present the strategy to prove this Lemma. Let $u^{*}_{x_1}\in \mathscr{A}_{x_1}$ be an optimal control for $J_{\infty,x_1}$. Since $(A,B)$ is stabilizable, there exists a feedback matrix $F\in \mathcal{M}_{m,n} (\mathbb{R})$, such that $A+BF$ generates an exponentially stable semigroup. Set the control
\begin{equation}\label{opt_inf_infinity}
\hat{u}(s)\coloneqq F\tilde{y}(s)+ u^{*}_{x_1}(s),
\end{equation}
where $\tilde{y}$ solves the closed loop equation
\begin{equation*}\label{}
\begin{array}{ll}
\frac{d}{ds}\tilde{y}(s) = \left(A\, +B\,F\right) \tilde{y}(s), & s\in (0,\infty) \\
\tilde{y}(0) = x_2-x_1.
\end{array}
\end{equation*}
We start by proving
\begin{equation}\label{diff_functionals_infinity}
\left|J_{\infty,x_2}(\hat{u})-J_{\infty,x_1}(u^{*}_{x_1})\right|\leq K_M\left\|x_2-x_1\right\|.
\end{equation}
\textit{Step 1} \ \textbf{Proof of \eqref{diff_functionals_infinity}}\\
Set $y^{*}_{x_1}$ solution to \eqref{eq: linear ODE} with initial datum $x_1$ and control $u^{*}_{x_1}$ and $\hat{y}$ solution to
\begin{equation}
\begin{array}{ll}
\frac{d}{ds}\hat{y}(s) = A\, \hat{y}(s) + B\, \hat{u}(s), & s\in (0,T) \\
\hat{y} (0) = x_2.
\end{array}
\end{equation}
By definition of $F$, we have for any $s\geq 0$
\begin{equation}
\left\|\hat{y}(s)-y^{*}_{x_1}(s)\right\|=\left\|\tilde{y}(s)\right\|\leq K\left\|x_2-x_1\right\|\exp\left(-\mu s\right),
\end{equation}
whence
\begin{equation}
\left\|\hat{u}(s)-u^{*}_{x_1}(s)\right\|=\left\|F\tilde{y}(s)\right\|\leq K\left\|x_2-x_1\right\|\exp\left(-\mu s\right),
\end{equation}
the constants $K$ and $\mu >0$ being independent of $s\geq 0$. From the above inequalities, the definition of $\mathscr{A}_x$ and \eqref{lemma_rappr_infinity_eq1}, \eqref{diff_functionals_infinity} follows.\\
\textit{Step 2} \ \textbf{Conclusion}\\
For $i=1,2$, let $u^{*}_{x_i}$ be optimal controls for $J_{\infty,x_i}$ and let $\hat{u}$ defined as above for $u\coloneqq u^{*}_{x_1}$. Then, by definition of value function and \eqref{diff_functionals_infinity}
\begin{eqnarray}
W(x_2)-W(x_1)&=&J_{\infty,x_2}(u^{*}_{x_2})-J_{\infty,x_1}(u^{*}_{x_1})\nonumber\\
&\leq &J_{\infty,x_2}(\hat{u})-J_{\infty,x_1}(u^{*}_{x_1})\nonumber\\
&\leq &K_M\left\|x_2-x_1\right\|.
\end{eqnarray}
By the arbitrariness of $x_1$ and $x_2$, we obtain the desired Lipschitz estimate.
\end{proof}
\end{comment}
\begin{comment}
We start by proving that the value function $W$ is finite for all $x\in\mathbb{R}^n$.
\begin{lemma}
The value function $W$ of the problem \eqref{auxiliary OCP} satisfies
\begin{equation*}
-\infty < W(x) < +\infty,\quad \forall\;x\in \mathbb{R}^n
\end{equation*}
\end{lemma}
\begin{proof}
We first start by proving that $W(x)<+\infty$ for any $x$. Recall the definition of $V_s = \dfrac{1}{2}\|\bar{u}\|^{2}+\dfrac{1}{2}\|C\bar{y}-z\|^{2}$. Then, for $0<T<\infty$ one has
\begin{equation*}
\begin{array}{l}
\displaystyle\int_{0}^{T}\left[\dfrac{1}{2}\|u_{_{T}}(s)\|^{2} + \dfrac{1}{2}\|C\, y_{_{T}}(s) - z\|^{2} - V_s \right]\;ds = \\
\noalign{\vskip 2mm}
\qquad \qquad \dfrac{1}{2} \displaystyle\int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds +\left(\overline{p},x-y(T)\right)_{\mathbb{R}^n}.
\end{array}
\end{equation*}
Applying theorem \ref{th_TURNPIKE}, one gets an upperbound independent of $T$. Taking $T\to \infty$ implies $W(x)<+\infty$.
We now concentrate on proving $W(x)>-\infty$. Take any control $u\in L^2_{\mbox{\tiny{loc}}}([0,+\infty))$. Then, for any time horizon $T>0$, we have
\begin{equation}\label{eq: proof lower bound}
\int_0^T \left(\dfrac{1}{2}\| u(s)\|^2 + \dfrac{1}{2}\| C\, y(s)-z\|^2 -V_s\right)ds = J_{T,x} (u) - T\, V_s.
\end{equation}
By Lemma \ref{lemma_rappr_lower_bound},
\begin{eqnarray}\label{lemma_rappr_lower_bound_eq1_2}
\int_0^T \left(\dfrac{1}{2}\| u(s)\|^2 + \dfrac{1}{2}\| C\, y(s)-z\|^2 -V_s\right)ds &=&J_{T,x}(u)-T\inf_{M} J_s\nonumber\\
&=&+\dfrac{1}{2} \int_0^T \left[\| u(s)-\overline{u}\|^2 + \| C \, \left(y(s)-\overline{y}\right)\|^2\right] ds\nonumber\\
&\;&+\left(\overline{p},x-y(T)\right)_{\mathbb{R}^n}.\nonumber\\
\end{eqnarray}
Now, let $u_{_{T}}$ be the optimal control for problem \eqref{eq: linear ODE}-\eqref{eq: value function} and let $y_{_{T}}$ be the corresponding state trajectory.
In view of the turnpike property, i.e. Theorem \ref{th_TURNPIKE}, we have
$$
\| y_{_{T}} (T)\| \leq K\left(e^{-\mu T} + 1 \right)\leq 2K.
$$
Using this inequality, \eqref{eq: proof lower bound} and \eqref{lemma_rappr_lower_bound_eq1_2},
along with the fact that $J_{T,x}(u) \geq J_{T,x}(u_{_{T}}),$ we obtain
\begin{eqnarray*}
J_{T,x} (u) - T\, V_s & \geq &
\dfrac{1}{2}\int_0^T \left(\| u_{_{T}}(s)\|^2 + \| C\, y_{_{T}}(s)-z\|^2 \right)ds - T\, V_s \\
& \geq & \left(\overline{p},x-y(T)\right)_{\mathbb{R}^n} \\
& \geq & -C_1 - C_2 \left( e^{-\mu T} + 1\right) \\
&\geq & -C_3.
\end{eqnarray*}
where $C>0$ is independent of $T$ and $u$.
It then follows that $J_{\infty, x}(u)$ is bounded from below for all $u\in L_{loc}^2(0,\infty)$.
\end{proof}
\end{comment}
\subsection{Proof of Theorem \ref{th_turnpikeHJB}}
\label{subsection: proof Thm 1.1 (1)}
We are now in a position to give the proof of Theorem \ref{th_turnpikeHJB}. We split the proof into three steps. In the first one, we prove statement (i) of the theorem, concerning the convergence of the value function. In Step 2, we prove the uniqueness result for the solution of the Hamilton-Jacobi-Bellman equation associated with the infinite horizon problem. Finally, in Step 3, we prove that, in the unconstrained case $U=\mathbb{R}^m$, the value function for the infinite horizon problem is in $C^1(\mathbb{R}^n)$.
\begin{proof}[Proof of Theorem \ref{th_turnpikeHJB}]
\textit{Step 1:} \ \textbf{Convergence.}
Let $\Omega\subset \mathbb{R}^n$ be a bounded set. For any given $x\in\Omega$ and $T>0$, let $u_{_{T}}(\cdot)$ and $y_{_{T}}(\cdot)$ be an optimal control for problem \eqref{eq: linear ODE}--\eqref{eq: functional} and its corresponding state trajectory. Then, as a consequence of the DPP, for any $T>0$ we can write
\begin{equation}\label{proof Thm 1.1 DPP}
V(x,T) = \dfrac{1}{2}\int_0^{\frac{T}{2}} \left[ \| u_{_{T}}(s)\|^2 + \|C\, y_{_{T}}(s)-z\|^2\right]ds +
V\left( y_{_{T}}\left(\dfrac{T}{2}\right), \dfrac{T}{2} \right).
\end{equation}
Now, using Lemma \ref{lemma V lip estimate} and that, as a consequence of the turnpike property \eqref{epsturnpike}, $y_{_{T}}(T/2)\to \bar{y}$ as $T\to \infty$, we deduce that
\begin{equation*}\label{Proof Thm 1.1 second term}
\lim_{T\to \infty} \left| V\left( y_{_{T}}\left(\dfrac{T}{2}\right), \dfrac{T}{2} \right) - V\left( \bar{y}, \dfrac{T}{2} \right)\right|=0.
\end{equation*}
Hence, we have
\begin{eqnarray}
\lim_{T\to \infty} V\left( y_{_{T}}\left(\dfrac{T}{2}\right), \dfrac{T}{2} \right) - \dfrac{T}{2} V_s &=& \lim_{T\to \infty} \left[V\left( y_{_{T}}\left(\dfrac{T}{2}\right), \dfrac{T}{2} \right) - V\left( \bar{y}, \dfrac{T}{2} \right)\right. \nonumber \\
& & \left.\quad \quad + V\left( \bar{y}, \dfrac{T}{2} \right) - \dfrac{T}{2}V_s \right]\nonumber \\
&=& \lim_{T\to \infty} V\left( \bar{y}, \dfrac{T}{2} \right) - \dfrac{T}{2}V_s\; =:\; \lambda.
\label{proof Thm1.1 lambda}
\end{eqnarray}
The existence of this limit can be justified by proving that the function
$$
T\longmapsto V(\bar{y},T) - T\, V_s
$$
is decreasing and bounded from below.
Indeed, observe that if $u_{_{T}}$ is an optimal control for $J_{T,\bar{y}}$, then for any $T'>T$, we can use the control
$$
\hat{u} (s) := \left\{
\begin{array}{cc}
\bar{u} & s\in (0,T'-T) \\
u_{_{T}} (s) & s\in [T'-T,T')
\end{array}
\right.
$$
to prove the monotonicity. The boundedness from below can be obtained from the turnpike property.
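To spell out the monotonicity step (a sketch: here $u_{_{T}}$ is an optimal control for the horizon $T$ starting from $\bar{y}$, and we use that the steady pair $(\overline{u},\overline{y})$ has running cost $V_s$ per unit time, while the terminal costs of the two trajectories coincide):

```latex
V(\bar{y},T') \;\le\; J_{T',\bar{y}}(\hat{u})
  \;=\; (T'-T)\,V_s + J_{T,\bar{y}}(u_{_{T}})
  \;=\; (T'-T)\,V_s + V(\bar{y},T),
\qquad\text{i.e.}\qquad
V(\bar{y},T') - T'\,V_s \;\le\; V(\bar{y},T) - T\,V_s .
```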
Let us now prove that
\begin{equation}\label{proof thm1.1 first half convergence}
\lim_{T\to+\infty} \dfrac{1}{2}\int_{0}^{\frac{T}{2}}\left[\|u_{_{T}}(s)\|^{2} + \|C\, y_{_{T}}(s) - z\|^{2} \right]\;ds-\dfrac{T}{2}V_s = W(x).
\end{equation}
Let $u^*\in \mathscr{A}_x$ be the optimal control for the functional $J_{\infty,x}$ defined in \eqref{eq: W def} and $y^*$ its corresponding state trajectory. For any $T>0$, as a consequence of the DPP for the infinite horizon problem, we have
\begin{eqnarray}
W(x) &=& \int_0^{\frac{T}{2}} \left[ \dfrac{1}{2} \| u^*(s)\|^2 + \dfrac{1}{2}\|C\, y^*(s)-z\|^2 - V_s\right] ds +W\left(y^*\left(\frac{T}{2}\right)\right) \nonumber \\
&\leq & \dfrac{1}{2}\int_0^{\frac{T}{2}} \left[ \| u_{_{T}}(s)\|^2 + \|C\, y_{_{T}}(s)-z\|^2 \right] ds - \dfrac{T}{2}V_s +W\left(y_{_{T}}\left(\frac{T}{2}\right)\right).\label{proof Thm 1.1 DPP W}
\end{eqnarray}
Now, observe that by plugging $\Bar{y}$ in formula \eqref{lemma_rappr_infinity_eq1} in Lemma \ref{lemma_rappr_infinity 1}, one can easily see that $W(\Bar{y})=0$.
Then, using Theorem \ref{th_TURNPIKE} and that, by Lemma \ref{lemma W lip estimate}, the function $W(\cdot)$ is continuous, we deduce that
\begin{equation}\label{proof thm1.1 liminf}
\liminf_{T\to+\infty} \dfrac{1}{2}\int_{0}^{\frac{T}{2}}\left[\|u_{_{T}}(s)\|^{2} + \|C\, y_{_{T}}(s) - z\|^{2} \right]\;ds-\dfrac{T}{2}V_s \geq W(x).
\end{equation}
Using again the DPP, this time for the value function $V$, we obtain for any $T>0$:
\begin{eqnarray}
V(x,T) &=& \dfrac{1}{2}\int_0^{\frac{T}{2}} \left[\|u_{_{T}} (s)\|^2 + \|C\,y_{_{T}}(s)-z\|^2\right] ds + V\left(y_{_{T}}\left(\dfrac{T}{2}\right),\dfrac{T}{2}\right) \nonumber \\
&\leq & \dfrac{1}{2}\int_0^{\frac{T}{2}} \left[\|u^*(s)\|^2 + \|C\,y^*(s)-z\|^2\right] ds + V\left(y^*\left(\dfrac{T}{2}\right),\dfrac{T}{2}\right). \label{proof thm 1.1 limsup}
\end{eqnarray}
Using this time the DPP for $W$ (the first equality in \eqref{proof Thm 1.1 DPP W}), we can compute
$$
\dfrac{1}{2}\int_0^{\frac{T}{2}} \left[ \| u^*(s)\|^2 + \|C\, y^*(s)-z\|^2\right] ds = W(x) + \dfrac{T}{2}V_s - W\left(y^*\left(\frac{T}{2}\right)\right).
$$
And combining this identity with \eqref{proof thm 1.1 limsup}, we obtain
\begin{eqnarray*}
& &\dfrac{1}{2}\int_0^{\frac{T}{2}} \left[\|u_{_{T}} (s)\|^2 + \|C\,y_{_{T}}(s)-z\|^2\right] ds - \dfrac{T}{2}V_s \leq W(x) - W\left(y^*\left(\dfrac{T}{2}\right)\right) \\
& &\qquad \qquad \qquad \qquad + V\left(y^*\left(\dfrac{T}{2}\right),\dfrac{T}{2}\right) - V\left(y_{_{T}}\left(\dfrac{T}{2}\right),\dfrac{T}{2}\right).
\end{eqnarray*}
This inequality, together with $W(\bar{y})=0$, the Lipschitz continuity of $V$ from Lemma \ref{lemma V lip estimate} and the fact that, by the turnpike property and Lemma \ref{lemma_rappr_infinity 1}, we have that $y_{_{T}}(T/2)$ and $y^*(T/2)$ converge to $\bar{y}$ as $T\to \infty$, gives
$$
\limsup_{T\to+\infty} \dfrac{1}{2}\int_{0}^{\frac{T}{2}}\left[\|u_{_{T}}(s)\|^{2} + \|C\, y_{_{T}}(s) - z\|^{2} \right]\;ds-\dfrac{T}{2}V_s \leq W(x).
$$
From this inequality and \eqref{proof thm1.1 liminf}, \eqref{proof thm1.1 first half convergence} follows.
Finally, combining \eqref{proof Thm 1.1 DPP}, \eqref{proof Thm1.1 lambda} and \eqref{proof thm1.1 first half convergence} we obtain
\begin{equation}\label{conv_value_function_identification}
V(x,T)-TV_s\underset{T\to +\infty}{\longrightarrow}W(x)+\lambda.
\end{equation}
\textit{Step 2:} \textbf{Uniqueness for the ergodic equation.}
The proof that $(V_s,W(\cdot))$ satisfies the equation \eqref{eq: ergodic HJB_thm} can be carried out by standard methods in optimal control theory.
In the case where the inclusion $U\subset \mathbb{R}^m$ is strict (i.e. when we have constraints on the control), the function $W$ is not expected to enjoy $C^{1}$ regularity, and one has to use the theory of viscosity solutions \cite{crandall1983viscosity,crandall1992user}. We omit the proof since it follows exactly the arguments in \cite[Thm. 5]{kouhkouh2018dynamic} (which is an adaptation of \cite[Thm. 7.2.4]{cannarsa2004semiconcave} to the LQ setting), dropping the dependency on time.
In order to prove that $W(x)$ is the unique (up to an additive constant) viscosity solution to \eqref{eq: ergodic HJB_thm} bounded from below, we argue by contradiction.
Let $c\in\mathbb{R}$, and let $W_1\in C(\mathbb{R}^n)$ be a continuous function, bounded from below, satisfying the equation
$$
c + H(x,\nabla W_1) = \ell (x)
$$
in the viscosity sense. Here, the Hamiltonian $H$ and the function $\ell$ are defined as in \eqref{intro - ham}.
Observe that the function given by
$$
V_1(x,T) = c \, T + W_1(x)
$$
is a viscosity solution to the problem \eqref{eq: HJ_mainth} with initial condition $g(x) = W_1(x)$, which is bounded from below.
We can then deduce that $V_1(x,T)$ is actually the value function associated to the optimal control problem \eqref{eq: linear ODE}--\eqref{eq: functional} with final cost $g(x)=W_1(x)$.
And since $W_1(\cdot)$ is bounded from below, we can use the statement (i) in Theorem \ref{th_turnpikeHJB} to deduce that
$$
\lim_{T\to +\infty} V_1(x,T) - V_s\, T = W(x) + \lambda, \qquad \text{for all}\ x\in \mathbb{R}^n,
$$
for some $\lambda\in\mathbb{R}$ depending on the final cost $W_1(\cdot)$.
Hence, using the definition of $V_1(x,T)$ we obtain
$$
\lim_{T\to +\infty} W_1(x) + (c-V_s)\, T = W(x) + \lambda,
\qquad \text{for all}\ x\in \mathbb{R}^n.
$$
This implies that $c=V_s$ and also that $W_1(x)-W(x) = \lambda$, for all $x\in \mathbb{R}^n$.
\end{proof}
\begin{comment}
\begin{remark}\label{lemma_nouniq}
The value function is in general not differentiable although the final cost is smooth enough. This happens for example when the final cost is nonconvex hence yielding multiple optimal solutions. Indeed, singularities of the value function are related to the points where the optimal trajectory is not unique, see \cite[page 200]{cannarsa2004semiconcave} for some examples. And this can be seen when using the method of characteristics (see \cite[\S 5.1]{cannarsa2004semiconcave}) for solving the HJB equation: It can be shown (see \cite[Thm. 1.5.3]{cannarsa2004semiconcave}) that there exists a finite time horizon during which the solution is smooth, but afterwards, it develops singularities (see \cite[Thm. 1.5.6]{cannarsa2004semiconcave}).
\end{remark}
\end{comment}
Let us finish this section with an illustrative example that shows why the value function $V(x,T)$ is not in general differentiable.
As we will see, for a suitable nonconvex final cost $g$, the global minimizer for $J_{T,x}$ with $x=0$ and $T$ sufficiently large is not unique. This implies in particular that the subdifferential of $V(\cdot, T)$ contains more than one element, and hence $V(\cdot,T)$ is not differentiable at $0$ for $T$ sufficiently large (see \cite[Theorem 7.4.17]{cannarsa2004semiconcave}; further examples can be found in \cite[page 200]{cannarsa2004semiconcave}). In general, it can be shown (see \cite[Theorem 1.5.3]{cannarsa2004semiconcave}) that there exists a finite time horizon during which the solution is smooth, but afterwards it develops singularities (see \cite[Theorem 1.5.6]{cannarsa2004semiconcave}).
\begin{example}\label{lemma_nouniq}
Let us consider the optimal control problem \eqref{eq: linear ODE}--\eqref{eq: functional} with the pair of matrices $(A,B)$ being controllable and $C$ being any matrix.
As a final cost, we consider the function
$$
g_\varepsilon(x) = \dfrac{1}{\varepsilon} [\| x\|^4 - \|x\|^2],
$$
where $\varepsilon>0$ will be chosen later.
Our goal is to show that,
if $\varepsilon>0$ is sufficiently small, the functional
\begin{equation}\label{eq: functional_nouniq}
J_{T,0} (u) \coloneqq \dfrac{1}{2} \int_0^T \left[\| u(s)\|^2 + \| C \, y (s)\|^2\right] ds + g_\varepsilon (y(T)),
\end{equation}
admits (at least) two distinct global minimizers whenever $T>1$.
Let us first prove that, if $\varepsilon>0$ is sufficiently small, then for any $T>1$, the control $u\equiv 0$ is not optimal.
Fix $x_1$ a minimizer of the function $g:\mathbb{R}^n\longrightarrow \mathbb{R}$ defined as $g(x)\coloneqq \left\|x\right\|^4-\left\|x\right\|^2$ and set
\begin{equation*}
\tilde{u}(s)=\begin{cases}
0 \quad &s \in \ (0,T-1)\\
u_1(s-T+1) \quad &s\in \ (T-1,T),
\end{cases}
\end{equation*}
where $u_1$ is any control solving the controllability problem
\begin{equation*}
\begin{array}{ll}
\dot{y_1}(s) = A\, y_1(s) + B\, u_1(s), & s\in [0,1] \\
y_1 (0) = 0, \ y_1 (1) = x_1.
\end{array}
\end{equation*}
Let $\tilde{y}$ be the solution to \eqref{eq: linear ODE}
with control $\tilde{u}$. Since $0$ is an equilibrium of \eqref{eq: linear ODE} with control $u=0$, by uniqueness of solutions
we have
\begin{equation*}
\tilde{y}(s)=\begin{cases}
0 \quad &s \in \ (0,T-1)\\
y_1(s-T+1) \quad &s\in \ (T-1,T).
\end{cases}
\end{equation*}
Let us now evaluate the functional $J_{T,0}$ defined in \eqref{eq: functional_nouniq} at $\tilde{u}$ and compare it with the control $u\equiv 0$. Since $\min_{\mathbb{R}^n} g(x)<0$, we have
\begin{eqnarray*}
J_{T,0} (\tilde{u})
&=&\dfrac{1}{2} \int_0^1 \left[\| u_1(s)\|^2 + \| C \, y_1 (s)\|^2\right] ds + \frac{1}{\varepsilon}\left[\left\|x_1\right\|^4-\left\|x_1\right\|^2\right]\nonumber\\
&=&\dfrac{1}{2} \int_0^1 \left[\| u_1(s)\|^2 + \| C \, y_1 (s)\|^2\right] ds + \frac{1}{\varepsilon}\min_{\mathbb{R}^n}g \nonumber \\
&<& 0 = J_{T,0}(0)
\end{eqnarray*}
for a sufficiently small $\varepsilon$. This means that $u\equiv 0$ is not a global minimizer of \eqref{eq: functional_nouniq}.
Finally, since the final cost $g_\varepsilon$ and the running cost in \eqref{eq: functional_nouniq} are continuous and bounded from below, by the Direct Method in the Calculus of Variations there exists a minimizer $u_{_{T}}$ of \eqref{eq: functional_nouniq}. Moreover, $u_{_{T}}\neq 0$ whenever $T>1$. Since the initial condition of the admissible trajectories is $0$, if $y_{_{T}}$ denotes the optimal trajectory corresponding to $u_{_{T}}$, then, by linearity, $-y_{_{T}}$ is the trajectory corresponding to $-u_{_{T}}$. Now, by the definition \eqref{eq: functional_nouniq}, $J_{T,0} (-u_{_{T}})=J_{T,0} (u_{_{T}})=\min_{\mathcal{U}_T}J_{T,0}$, whence $u_{_{T}}$ and $-u_{_{T}}$ are two distinct global minimizers of \eqref{eq: functional_nouniq}.
\end{example}
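As a sanity check on the mechanism behind the example, in the scalar case the function $g(x)= x^4 - x^2$ indeed has a strictly negative minimum attained at two symmetric points $\pm 1/\sqrt{2}$; the grid search below is a toy illustration only.

```python
import numpy as np

# Scalar instance of the final cost g(x) = x**4 - x**2 from the example.
g = lambda x: x**4 - x**2

xs = np.linspace(-2.0, 2.0, 400001)
vals = g(xs)
gmin = vals.min()
# Grid points (numerically) attaining the minimum.
argmins = xs[np.isclose(vals, gmin, atol=1e-9, rtol=0)]

# The minimum is negative (so u ≡ 0, which gives final state 0 and
# g(0) = 0, cannot be optimal for small epsilon) ...
print(gmin)                                     # ≈ -0.25
# ... and it is attained at two symmetric points ±1/sqrt(2).
print(np.unique(np.round(argmins, 3)).tolist())  # [-0.707, 0.707]
```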
\begin{comment}
Let us recall that, for each $x\in \mathbb{R}^n$ and $T>0$, the value function is defined as
\begin{equation}\label{value function V sec 3}
V(x,T) := \inf_{u\in \mathcal{U}_T} J_{T,x}(u),
\end{equation}
where the functional $J_{T,x}$ is given by
\begin{equation*}
J_{T,x}(u) = \dfrac{1}{2} \int_0^T \left[ \| u(s)\|^2 + \|C\, y(s)-z\|^2\right] ds + g(y(T)),
\end{equation*}
and for each $u$, the function $y:(0,T)\to\mathbb{R}^n$ is the solution to
\begin{equation*}
\left\{ \begin{array}{ll}
\dot{y}(s) = A\, y(s) + B\, u(s) & s\in (0,T) \\
y(0) = x.
\end{array}\right.
\end{equation*}
\end{comment}
\begin{comment}
\begin{remark}\label{rmk: bounded controls in general case}
Assuming the turnpike property holds, the problem with unconstrained control set $\mathcal{U}_{T}$ (i.e. $U=\mathbb{R}^m$) can be equivalently represented as a problem with controls having values in a compact set. Indeed, from the turnpike property \eqref{epsturnpike_eq1} we know that $\|u_{_{T}}(s)\|^{2} \leq \|\overline{u}\|^{2} + 2K$ for any $s\in (0,+\infty)$. Therefore, setting $M^{2}\coloneqq \|\overline{u}\|^{2} + 2K$, we can consider the optimal control problem with admissible control set $\{u\in L^{2}(0,T;\mathbb{R}^{m})\; :\; \|u(s)\|\leq M\}$, and the maximum in the Hamiltonian $H(x,\nabla V(x))$ as defined in \eqref{intro - ham} is an interior maximum, achieved at $u_{_{T}}(s) = -B^{*}\nabla V(x)$. The PDE in \eqref{eq: HJ sec 3} then writes as
\begin{equation*}
\partial_{T}V(x,T) + \frac{1}{2}\|B^{*}\nabla V(x,T)\|^{2} - Ax\cdot \nabla V(x,T) = \frac{1}{2}\|C\,x - z\|^{2}.
\end{equation*}
Another argument in this direction is presented later in Remark \ref{remark_bounded}.
\end{remark}
\end{comment}
\begin{comment}
Now, using the DPP from Lemma \ref{DPP V}, we prove that $V$ is the unique viscosity solution to \eqref{eq: HJ sec 3}.
Here we recall the definition of viscosity solution.
\begin{definition}\label{Def viscosity sol}
Let $H$ and $\ell$ be defined as in \eqref{intro - ham}. The continuous function $V\in C(\mathbb{R}^n\times[0,\infty))$ is a viscosity solution to \eqref{eq: HJ sec 3} if:
\begin{enumerate}
\item[(i)] $V(x,0) = g(x)$, for all $x\in \mathbb{R}^n$;
\item[(ii)] for each $\varphi\in C^\infty(\mathbb{R}^n\times (0,\infty))$ it holds
\begin{equation}\label{visc subsol}
\begin{aligned}
&\partial_T \varphi(x_0,T_0) + H(x_0,\nabla \varphi(x_0,T_0))\leq \ell(x_0),\\
& \text{whenever $V-\varphi$ has a local maximum at $(x_0,T_0)\in \mathbb{R}^n\times (0,\infty)$;}
\end{aligned}\end{equation}
and
\begin{equation}\label{visc supersol}
\begin{aligned}
&\partial_T \varphi(x_0,T_0) + H(x_0,\nabla \varphi(x_0,T_0))\geq \ell(x_0), \\
&\text{whenever $V-\varphi$ has a local minimum at $(x_0,T_0)\in \mathbb{R}^n\times (0,\infty)$.}
\end{aligned}
\end{equation}
\end{enumerate}
\end{definition}
\end{comment}
\begin{comment}
\textit{Step 1} \textbf{We show that $(V_s,W(\cdot))$ satisfies $V_s+H(x,\nabla W(x))\leq \ell(x)$ for every $x\in \mathbb{R}^n$.}
For any $u_{o}\in \mathbb{R}^n$, let us consider a continuous admissible control $u\in \mathscr{A}_x$ such that $u(0) = u_{o}$.
Thanks to the $C^1$ regularity of $W$ proved in subsection \ref{subsection: proof Thm 1.1 (1)}, we can compute
\begin{eqnarray}
W(y(\delta)) &=& W(x) + \int_0^\delta \frac{d}{ds} \left( W(y(s)) - W(x)\right) ds \nonumber \\
&=& W(x) + \int_0^\delta (\nabla W(y(s)),\, \dfrac{d}{ds} y(s))_{\mathbb{R}^n} ds \nonumber \\
&=& W(x) + \int_0^\delta (\nabla W(y(s)),\, Ay(s) + Bu(s))_{\mathbb{R}^n} ds \label{W integral}
\end{eqnarray}
On the other hand, by DPP, one has
\begin{equation*}
W(x) \leq \int_{0}^{\delta} \left[ \dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\,y(s)-z\|^{2} - V_s \right] \; ds + W(y(\delta)),
\end{equation*}
which combined with \eqref{W integral} gives
$$
0 \leq \int_{0}^{\delta} \left[ \dfrac{1}{2}\|u(s)\|^{2} + \dfrac{1}{2}\|C\,y(s)-z\|^{2} - V_s + \left(\nabla W(y(s)),\, Ay(s) + Bu(s)\right)_{\mathbb{R}^n} \right] \; ds.
$$
Dividing the above inequality by $\delta$ and taking the limit as $\delta\to 0$, we deduce, from the arbitrariness of $u_{o}$, that
\begin{equation*}
V_s + \max_{u_{o}\in \mathbb{R}^m}\left\{-\dfrac{1}{2}\|u_{o}\|^{2} - \nabla W(x)\cdot (Ax+Bu_{o})\right\} \leq \ell(x).
\end{equation*}
\textit{Step 2} \textbf{We show that $(V_s,W(\cdot))$ satisfies $V_s+H(x,\nabla W(x))\geq \ell(x)$ for every $x\in \mathbb{R}^n$.}
Let us assume by contradiction that for some $x_0\in\mathbb{R}^n$ and $r>0$, there exists $\varepsilon >0$ such that, for any $u_{o}\in \mathbb{R}^m$,
\begin{equation}\label{contradd}
V_s -\dfrac{1}{2}\|u_{o}\|^{2} - \nabla W(x)\cdot (Ax+Bu_{o}) \leq \ell(x) - \varepsilon
\end{equation}
for any $x\in B(x_0,r)$, where $B(x_0,r)$ is a ball of $\mathbb{R}^n$ centered at $x_0$ of radius $r>0$. By remark \ref{remark_bounded}, it suffices to consider $u_{o}$, with $\left\|u_{o}\right\|\leq M$, for some $M\geq 0$.
Take any $u\in L^2(0,1;\mathbb{R}^m)$, with $\left\|u(t)\right\|\leq M$, a.e. in $(0,1)$. By continuous dependence from the data for \eqref{eq: linear ODE}, there exists $\overline{\delta}\in (0,1)$, such that for any $\delta \in [0,\overline{\delta})$ the state $y$ associated to control $u$ and initial datum $x_0$, verifies $y(\delta)\in B(x_0,r)$.
Now, using remark \ref{remark_bounded}, there exists $u^{\varepsilon}$ and an associated state $y^{\varepsilon}$, such that
\begin{eqnarray}
W(x) & \geq & \int_{0}^{\delta} \left[ \dfrac{1}{2}\|u^{\varepsilon}(s)\|^{2} + \dfrac{1}{2}\|C\,y^\varepsilon(s)-z\|^{2} - V_s \right]\; ds + W(y^\varepsilon(\delta)) - \dfrac{\varepsilon\delta}{2}\nonumber\\
& = &\int_{0}^{\delta} \left[ \dfrac{1}{2}\|u^{\varepsilon}(s)\|^{2} + \dfrac{1}{2}\|C\,y^\varepsilon(s)-z\|^{2} - V_s \right] ds + W(x) \label{employ_W integral}\\
& \; &+ \int_0^\delta \left( \nabla W(y^{\varepsilon}(s)),\, Ay^{\varepsilon}(s) + Bu^{\varepsilon}(s)\right)_{\mathbb{R}^n}\; ds - \dfrac{\varepsilon\delta}{2}\nonumber,
\end{eqnarray}
where in \eqref{employ_W integral} we have employed the identity \eqref{W integral}. We have then
\begin{equation*}\label{conseq_W integral}
\int_0^{\delta}\left[V_s -\dfrac{1}{2}\|u^{\varepsilon}\|^{2} - \left(\nabla W(x), Ay^{\varepsilon}(s)+Bu^{\varepsilon}\right)_{\mathbb{R}^n}-\ell(y^{\varepsilon}(s))\right]ds \geq - \dfrac{\varepsilon\delta}{2},
\end{equation*}
so obtaining a contradiction \eqref{contradd}. Then one recovers the second inequality and hence the desired result.
Finally, notice that in the case $U=\mathbb{R}^{m}$, the equation \eqref{eq: stationary HJB LQ} writes as
\begin{equation*}
V_s + \dfrac{1}{2}\|B^{*}\nabla W(x)\|^{2} - A\, x\cdot \nabla W(x) = \dfrac{1}{2}\|C\,x - z\|^{2}
\quad \quad x\in \mathbb{R}^{n}.
\end{equation*}
which also holds for $U\subset \mathbb{R}^m$ whenever the maximum is attained in an interior point (see also Remark \ref{rmk: bounded controls in general case}).
\end{comment}
\begin{comment}
\begin{proof}[Proof Theorem \ref{th_turnpikeHJB} (2)]
Since $W(x)$ satisfies the dynamic programming in lemma \ref{lem: DPP}, one has (for $y(0)=x$)
\begin{equation*}
0= \inf_{u\in L^{2}(0,\delta)}\left\{\frac{1}{\delta}\int_{0}^{\delta} L(y(s),u(s)) \;ds - c + \frac{W(y(\delta))-W(y(0))}{\delta}\right\}
\end{equation*}
Thanks to Theorem \ref{th_turnpikeHJB} (1), we have $(W(y(\delta))-W(y(0))) = \nabla W(x)\cdot (Ax+Bu(0))\delta + o(\delta)$ and $\int_{0}^{\delta} L(y(s),u(s)) \;ds = L(x,u(0))\delta + o(\delta)$. Moreover, the only variable in the optimization problem is $u(0)\in \mathbb{R}^{m}$. Hence, we get
\begin{equation*}
0 = -c + \inf\limits_{u\in \mathbb{R}^{m}}\left\{\dfrac{1}{2}\|u\|^{2} + \nabla W(x)\cdot Bu\right\} + \dfrac{1}{2}\|Cx -z\|^{2} + \nabla W(x)\cdot Ax
\end{equation*}
The minimization yields $u=-B^{*}\nabla W(x)$ and hence we get
\begin{equation*}
c + \left(\dfrac{1}{2}\|B^{*}\nabla W(x)\|^2-\nabla W(x)\cdot Ax\right) = \dfrac{1}{2}\|Cx-z\|^{2}, \quad \forall\;x\in\mathbb{R}^n
\end{equation*}
i.e. $c + H(x,\nabla W(x)) = \ell(x)$, for all $x\in\mathbb{R}^n$.
\end{proof}
\end{comment}
\section{Conclusions and open problems}
\label{sec:Conclusions and open problems}
In this manuscript, we have studied the long time behavior of the value function associated to a finite-dimensional linear-quadratic optimal control problem with an arbitrary target $z$, a general terminal cost $g$ and constrained controls. To do so, we have introduced an infinite-time horizon optimal control problem and studied its value function $W(x)$. This allows us to provide an asymptotic decomposition of the value function $V(x,T)$ for the original finite time horizon control problem, of the form $W(x) + V_s\, T + \lambda$, where each of the terms corresponds to the cost of the optimal trajectory during one of the three stages of the turnpike strategy.
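The three-term decomposition can also be observed numerically. The sketch below uses a toy scalar instance of our own choosing ($\dot y=-y+u$, running target $z$, no terminal cost) and a crude direct discretization; it is an illustration only, not part of the analysis above.

```python
import numpy as np

# Toy scalar LQ instance: y' = -y + u, running cost (1/2)(u^2 + (y - z)^2),
# y(0) = x, no terminal cost. The steady problem minimizes
# (1/2)(ubar^2 + (ybar - z)^2) subject to -ybar + ubar = 0, giving
# ybar = ubar = z/2 and V_s = z^2/4.
def value(x, z, T, n=600):
    dt = T / n
    r = np.exp(-dt)                      # exact one-step decay factor
    # Piecewise-constant controls: y_{k+1} = r*y_k + (1-r)*u_k, so the
    # trajectory is y = a + M u with a the free response.
    a = x * r ** np.arange(n + 1)
    M = np.zeros((n + 1, n))
    for k in range(n):
        M[k + 1:, k] = (1 - r) * r ** np.arange(n - k)
    # Left-endpoint quadrature of the cost gives a least-squares problem in u.
    A_ls = np.vstack([np.eye(n), M[:n]])
    b_ls = np.concatenate([np.zeros(n), z - a[:n]])
    u, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)
    y = a + M @ u
    return 0.5 * dt * (u @ u + (y[:n] - z) @ (y[:n] - z)), y

z, x = 2.0, 5.0
Vs = z * z / 4                            # = 1 for z = 2
c1, y1 = value(x, z, 10.0)
c2, y2 = value(x, z, 12.0)
print(abs(y1[len(y1) // 2] - z / 2))      # turnpike: mid-trajectory near ybar
print(abs((c1 - 10 * Vs) - (c2 - 12 * Vs)))  # V(x,T) - Vs*T stabilizes in T
```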
We now present some open problems.
\subsection{Control problems governed by nonlinear state equations}
\label{subsec:Conclusions and open problems_Control problems governed by nonlinear state equations}
We formulate this for a special control problem. Let $A$ be an $n\times n$ symmetric positive definite matrix and let $f:\mathbb{R}\longrightarrow \mathbb{R}$ be an increasing nonlinearity of class $C^1$ with $f(0)=0$. For a given time horizon $T>0$, an initial state $x\in \mathbb{R}^n$ and a control $u\in \mathcal{U}_T\coloneqq L^2(0,T;U)$, the corresponding trajectory $y (\cdot)$ solves
\begin{equation*}\label{eq: semilinear ODE}
\begin{array}{ll}
y'(s) + A\, y(s) + f\left(y(s)\right)= B\, u(s), & \text{for} \ s\in [0,T] \\
y (0) = x,
\end{array}
\end{equation*}
where the control operator is given by the matrix $B\in \mathcal{M}_{n,m} (\mathbb{R})$ and the nonlinear term acts componentwise, $f\left(y(s)\right)=\left(f\left(y_1(s)\right),\dots,f\left(y_n(s)\right)\right)$.
The optimal control problem is to minimize, over the admissible controls $u\in L^2(0,T;U)$, the cost functional $J_{T,x} (u) \coloneqq \dfrac{1}{2} \int_0^T \left[\| u(s)\|^2 + \| C \, y (s)-z\|^2\right] ds $
where $C\in \mathcal{M}_n(\mathbb{R})$ is a given matrix and $z\in \mathbb{R}^n$ is the prescribed running target. The value function
is defined as $V (x,T) \coloneqq \inf_{u\in \mathcal{U}_T} J_{T,x}(u)$.
In the same line as for the LQ problem treated in this manuscript, one can also introduce the steady functional $J_{s} \left(\overline{u},\overline{y}\right) \coloneqq \dfrac{1}{2} \left[\| \overline{u}\|^2 + \| C \, \overline{y}-z\|^2\right]$
to be minimized over the subset of controlled steady states $M_s\coloneqq \left\{\left(\overline{u},\overline{y}\right)\in U\times \mathbb{R}^n \ | \ A\overline{y}+f\left(\overline{y}\right)=B\overline{u}\right\}$ and define $V_s\coloneqq \min_{M_s}J_s$. These kinds of problems have been treated both in a finite-dimensional framework \cite{trelat2015turnpike} and in a PDE framework \cite{PZ2,trelat2018steady,trelat2018integral,pighin2020turnpike}. Available results in the literature typically require smallness conditions on the running target.
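As a small illustration of the steady problem on $M_s$ (the scalar data $a=b=1$, $f(y)=y^3$, $z=2$ are our own toy choices, and the grid search below is purely illustrative), one can eliminate $\overline{u}$ via the steady-state constraint and minimize over $\overline{y}$ alone:

```python
import numpy as np

# Scalar toy instance of the semilinear steady problem: on M_s we have
# ubar = (a*ybar + f(ybar)) / b, so J_s reduces to a 1-d minimization.
a, b, z = 1.0, 1.0, 2.0
f = lambda y: y**3

ys = np.linspace(-3.0, 3.0, 600001)
us = (a * ys + f(ys)) / b
Js = 0.5 * (us**2 + (ys - z)**2)
Vs = Js.min()
ybar = ys[Js.argmin()]
# For this toy data: ybar ≈ 0.56 and V_s ≈ 1.31.
print(round(float(ybar), 2), round(float(Vs), 2))
```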
By using the techniques developed in the above references, it is possible to get bounds on the space derivatives of the value function, which allow one to apply the Ascoli-Arzel\`a Theorem as in the proof of Theorem \ref{th_turnpikeHJB}. For small targets, by using the turnpike results of \cite{pighin2020turnpike} and adapting the techniques of the present manuscript, we can deduce the large time asymptotics of the value function as in Theorem \ref{th_turnpikeHJB}. However, for large targets, to the best of our knowledge, the turnpike theory is not complete. In particular, we cannot identify the limit as we do in \eqref{conv_value_function_identification}, because we do not have a result like
\begin{equation*}
\label{eq: limit time average_semil}
\frac{1}{T}V (x,T)\underset{T\to +\infty}{\longrightarrow}V_s,
\end{equation*}
which identifies the limit of the time-average of the value function as the value function for the steady problem. Note that, by adapting the techniques of \cite[Lemma 2.1, page 12]{pighin2020turnpike}, it is possible to prove that $\limsup_{T\to +\infty}\frac{1}{T}V (x,T)\leq V_s.$
But we are not able to prove the converse inequality
\begin{equation}
\label{eq: liminfgeq time average}
\liminf_{T\to +\infty}\frac{1}{T}V (x,T)\geq V_s.
\end{equation}
From a control perspective, the above inequality means that, in large time, there is no time-evolving strategy significantly better than the steady ones. In fact, when the time-evolving functional is restricted to time-independent controls, \eqref{eq: liminfgeq time average} has been proved in \cite[section 4]{PZ2} by $\Gamma$-convergence. However, to the best of our knowledge, the inequality is unknown when the time-evolving functional is minimized over time-dependent controls; this is an interesting open problem.
\subsection{Complete turnpike theory in constrained control}
Throughout our manuscript, we assumed $A$ to be invertible. It would be interesting to obtain a turnpike result under constraints without this assumption.
Furthermore, the rate of convergence of the time-evolution optima towards the steady ones should be investigated. An exponential bound could be obtained by applying the arguments of \cite{esteve2020turnpike} to $\tilde{u}\coloneqq u-\overline{u}$ and $\tilde{y}\coloneqq y-\overline{y}$.
\subsection{Characterization of the ergodic constant in a more general case}
\label{subsec:Conclusions and open problems_Characterization of the ergodic constant in a more general case}
In our setting, the constant $c$ in \eqref{eq: stationary equation} corresponds to $V_s$, which is the minimal value of the steady problem. This has been obtained as a consequence of the validity of the turnpike property for our problem. It would be interesting to generalize this characterization for more general problems, by using Hamilton-Jacobi techniques instead of turnpike theory.
\subsection{Hamilton-Jacobi equations with non-coercive Hamiltonian}
\label{subsec:Conclusions and open problems_Revise Hamilton-Jacobi literature to include the case of lack of coercivity and lack of Lipschitz property}
As we have anticipated, the function $p \longmapsto H(x,p)$, as defined in \eqref{intro - ham}, is not coercive whenever $B^*$ has a nontrivial kernel. This prevented us from using available results in the Hamilton-Jacobi literature. We have then employed turnpike theory to obtain long time behavior results in our context.
\subsection{The infinite dimensional case}
\label{subsec:Conclusions and open problems_Tunrpike in infinite dimension}
It is well known that the turnpike property holds as well in the infinite dimensional case (see \cite{porretta2013long,PZ2,trelat2018steady,trelat2018integral,pighin2020turnpike}). In this setting, one can still associate to the infinite dimensional optimal control problem an analogue of the HJB equation that captures the evolution of the value function. Indeed, this can be handled for instance by means of the so-called \textit{Master} equation, whose characteristics are of HJB type. Such an equation appears in the context of Mean Field Games, and its long time behavior was studied for instance in \cite{cardaliaguet2019long}. We refer also to \cite{bensoussan2015master, bensoussan2017interpretation}.
| {
"timestamp": "2021-11-23T02:22:06",
"yymm": "2006",
"arxiv_id": "2006.10430",
"language": "en",
"url": "https://arxiv.org/abs/2006.10430",
"abstract": "We analyze the consequences that the so-called turnpike property has on the long-time behavior of the value function corresponding to a finite-dimensional linear-quadratic optimal control problem with general terminal cost and constrained controls.We prove that, when the time horizon $T$ tends to infinity, the value function asymptotically behaves as $W(x) + c\\, T + \\lambda$, and we provide a control interpretation of each of these three terms, making clear the link with the turnpike property.As a by-product, we obtain the long-time behavior of the solution to the associated Hamilton-Jacobi-Bellman equation in a case where the Hamiltonian is not coercive in the momentum variable. As a result of independent interest, we showed that linear-quadratic optimal control problems with constrained control enjoy a turnpike property, also particularly when the steady optimum may saturate the control constraints.",
"subjects": "Analysis of PDEs (math.AP); Optimization and Control (math.OC)",
"title": "The turnpike property and the long-time behavior of the Hamilton-Jacobi-Bellman equation for finite-dimensional LQ control problems",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.982823294509472,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7093460358032347
} |
https://arxiv.org/abs/1511.04580 | On Erasure Combinatorial Batch Codes | Combinatorial batch codes were defined by Paterson, Stinson, and Wei as purely combinatorial versions of the batch codes introduced by Ishai, Kushilevitz, Ostrovsky, and Sahai. There are $n$ items and $m$ servers, each of which stores a subset of the items. A batch code is an arrangement for storing items on servers so that, for prescribed integers $k$ and $t$, any $k$ items can be retrieved by reading at most $t$ items from each server. Silberstein defined an erasure batch code (with redundancy $r$) as a batch code in which any $k$ items can be retrieved by reading at most $t$ items from each server, while any $r$ servers are unavailable (failed). In this paper, we investigate erasure batch codes with $t=1$ (each server can read at most one item) in a combinatorial manner. We determine the optimal (minimum) total storage of an erasure batch code for several ranges of parameters. Additionally, we relate optimal erasure batch codes to maximum packings. We also identify a necessary lower bound for the total storage of an erasure batch code, and we relate parameters for which this trivial lower bound is achieved to the existence of graphs with appropriate girth. | \section{Introduction}
We study a class of combinatorial objects that we call \emph{combinatorial batch codes with redundancy}. These are motivated by a data retrieval problem in which a collection of items (such as files) are stored, with possible duplication, on a collection of servers. After the items are stored, a demand will be made for some $k$-subset of the items, where $k$ is fixed in advance. The goal is to store as few total copies of the items as possible, while still being able to retrieve any $k$-subset of the items without taking too many from any one server. The study of such problems was initiated by Ishai, Kushilevitz, Ostrovsky, and Sahai~\cite{ishai}, who proposed a class of \emph{combinatorial batch codes} that provide solutions to the following particular data retrieval problem.
\begin{question}[Ishai~et~al.~\cite{ishai}]\label{q1}
Suppose a collection of $n$ items is to be stored over a set of $m$ servers. Can the items be stored so that any $k$ items are simultaneously accessible by taking at most $t$ items from each server? If so, what amount $N$ of total storage is needed? An optimal solution has the smallest total storage for given parameters $n$, $k$, $m$, and~$t$.
\end{question}
We generalize the question by adding a requirement of \defn{redundancy}. At each moment, we allow some of the servers to be unavailable (for example, they may be down for maintenance). If we place a bound, $r$, on the number of servers that may be unavailable at the same time, we are faced with a problem of retrieving each $k$-subset of the items from every collection of $m-r$ servers. To ensure this is possible, intuitively, it will be necessary to store some ``redundant'' copies of items. We thus study the following particular question.
\begin{question}\label{q2}
Suppose a collection of $n$ items is to be stored over a set of $m$ servers. At each moment, some number $r$ of the servers may be unavailable.
Can the items be stored so that any $k$ items are simultaneously accessible from each collection of $m - r$ servers, while taking at most $t$ items from each server?
What amount $N$ of total storage is needed? An optimal solution has the smallest total storage for given parameters $n$, $k$, $m$, $t$, and~$r$.
\end{question}
In Section~\ref{sec:prelim}, we generalize previous definitions to include the redundancy parameter~$r$, and derive several construction lemmas paralleling the results of Bujt\'as and Tuza~\cite{tuza}. The following two sections establish
the optimal value of $N$ for certain parameter ranges, which are illustrated in
Figure~\ref{fig1}.
In Section~\ref{sec:extremal}, we study the extremal cases when $n$ is large, or small, compared to $m$. Theorem~\ref{thm:tall} characterizes $N$ when $k \leq n \leq m$. %
Theorem~\ref{thm:nbddbelow} characterizes $N$ when $n \geq (k-1)\binom{m}{r+k-1}$.
Section~\ref{sec:gap} studies the ``gap'' between these results. We obtain a result characterizing $N$ for certain values of $n$ as small as $\frac{k-1}{r+k-1}\binom{m}{r+k-2}$.
In Section~\ref{sec:minimum}, we discuss a natural lower bound for $N(n,k,m;r)$ and discuss some nontrivial cases when the lower bound is achieved. These lower bounds are related to the existence of graphs with a lower bound on their girth.
\begin{figure}
\begin{center}
\begin{tikzpicture}[>=stealth',thick]
\draw[->] (0,0)--(12.25,0);
\draw (0,0.15)--(0,-0.15);
\draw (3,0.15)--(3,-0.15);
\draw (6,0.15)--(6,-0.15);
\draw (9,0.15)--(9,-0.15);
\node at (0,-0.5) {$k$};
\node at (3,-0.5) {$m$};
\node at (12.5,0) {$n$};
\node at (5.5,-0.8) {};
\node at (9,-0.8) {\footnotesize $\displaystyle (k-1)\binom{m}{r+k-1}$};
\node at (1.5,0.9) {Theorem~\ref{thm:tall}};
\node at (7.5,0.9) {Theorem~\ref{thm:fmkr}};
\node at (10.5,0.9) {Theorem~\ref{thm:nbddbelow}};
\node at (7.5, 1.8) {Theorem~\ref{thm:maxk} for $k = m-r$};
\foreach \A/\B in { 2.9/0, 8.9/6, 12/9.1 }
{
\draw [
thick,
decoration={
brace,
mirror,
raise=0.4cm
},
decorate
] (\A,0)--(\B,0);
}
\draw [
thick,
decoration={
brace,
mirror,
raise=1.3cm
},
decorate
] (12,0)--(3.1,0);
\end{tikzpicture}
\caption{Ranges of the parameter $n$ addressed here, in terms of $k$, $m$, and $r$, always assuming $t = 1$ and that the conditions of Lemma~\ref{lem:exists} are met. Theorem~\ref{thm:tall} applies when $k \leq n \leq m$, and Theorem~\ref{thm:maxk} applies when $n > m$ and $k = m-r$. Theorem~\ref{thm:fmkr} applies to certain values of $n$ less than $(k-1)\binom{m}{r+k-1}$, while Theorem~\ref{thm:nbddbelow} applies when $(k-1)\binom{m}{r+k-1} \leq n$. When $n = (k-1)\binom{m}{r+k-1}$, the constructions in the latter two theorems are the same.}\label{fig1}
\end{center}
\end{figure}
Solutions to Question~\ref{q1} are provided by \defn{combinatorial batch codes}, which were introduced by Paterson, Stinson, and Wei~\cite{psw} and have been the subject of several subsequent papers~\cite{brualdi,tuza3,tuza,tuza2}. Although we will not directly use combinatorial batch codes here, we state their definition for comparison with our definition of combinatorial batch codes with redundancy in the next section.
\begin{definition}[Paterson~et~al.~\cite{psw}]
A \defn{combinatorial batch code} with parameters $(n,k,m,t)$, abbreviated CBC or CBC$(n,k,m,t)$, is a multifamily $\mathcal{B}=\{B_1,\ldots,B_m\}$ of $m$ subsets, called \defn{servers}, of a set $X=\{x_1,\dots,x_n\}$ of $n$ items, called \defn{files}, such that for each $Y\subseteq X$ with $|Y|\leq k$ there exist subsets $C_i\subseteq B_i$ for which $|C_i|\leq t$ and $Y=C_1\cup C_2\cup \cdots \cup C_m$.
The \defn{weight} of a CBC $\mathcal{B}$ is the value
\[
N(\mathcal{B}) = |B_1|+|B_2|+\cdots+|B_m|.
\]
A CBC that obtains the minimal weight (as a function of the other parameters) is \defn{optimal}, and the minimum value is denoted $N(n,k,m,t)$.
\end{definition}
Previous work has characterized $N(n,k,m,1)$ for several ranges of parameters. In many cases, our theorems yield these previous results as a special case when we set the redundancy parameter $r$ to zero. We note this after each theorem, as appropriate.
\section{Combinatorial batch codes with redundancy}\label{sec:prelim}
To address Question~\ref{q2}, we generalize combinatorial batch codes to include redundancy. We introduce a new parameter, $r$, to measure the number of servers that may be inaccessible at one time. As usual, we let $[n]$ denote the set $\{1,2,\dots,n\}$. The codes studied in the following definition
have also been investigated by Silberstein~\cite{silberstein} under the name ``erasure batch codes''.
\begin{definition}
A \defn{combinatorial batch code with redundancy $r$} with parameters $(n,k,m,t)$ (abbreviated $r{\rm\textnormal{-}CBC}$, $r{\rm\textnormal{-}CBC}(n,k,m,t)$, or $r{\rm\textnormal{-}CBC}(n,k,m)$ when $t=1$) is a multifamily $\mathcal{B}=\{B_1,\dots,B_m\}$ of $m$ subsets of $[n]$ such that for each $Y\subseteq [n]$ with $|Y|\leq k$ and $J\subseteq[m]$ with $|J|\geq m-r$, there exist subsets $C_j\subseteq B_j$, one for each $j\in J$, such that $|C_j|\leq t$ and $Y=\bigcup_{j\in J}C_j$. Informally put, this means that for each collection $Y$ of $k$ or fewer files, and each collection $J$ of $m-r$ or more servers, it is possible to obtain all the files in $Y$ from the servers in $J$ while taking no more than~$t$ from each server.
The \defn{weight} of an $r{\rm\textnormal{-}CBC}$ $\mathcal{B}$ is
\[
N(\mathcal{B}) = |B_1|+|B_2|+\cdots+|B_m|,
\]
and an $r{\rm\textnormal{-}CBC}$ that obtains the minimal $N$ (as a function of $n$, $k$, $m$, $t$, and~$r$) is \defn{optimal}.
We denote this minimal value as $N(n,k,m,t;r)$, or simply $N(n,k,m;r)$ if $t=1$.
\end{definition}
We may represent an $r{\rm\textnormal{-}CBC}(n,k,m)$ $\mathcal{B}$ by an $m \times n$ incidence matrix $A$
such that row $i$ of $A$ represents the set $B_i \subseteq [n]$. We let $A_j$ denote the subset of $[m]$ represented by column $j$ of~$A$.
We let $N(A)$ be the number of $1$s that appear in $A$, and say that $A$ is optimal if $N(A) = N(n,k,m;r)$.
In this paper, we study the case $t = 1$ exclusively. The next lemma establishes the
basic relations between the remaining parameters that are required for the existence of
an $r{\rm\textnormal{-}CBC}$. For the remainder of the paper, we will always assume that our parameters satisfy the inequalities stated in the lemma.
\begin{lemma}\label{lem:exists}
There is an $r{\rm\textnormal{-}CBC}$ with parameters $(n,k,m,1)$, where $k \geq 1$, if and only if $r< m$ and $k\leq \min\{n, m-r\}$.
\end{lemma}
\begin{proof}
For the forward direction, it is enough to note that an $m \times n$ matrix of all 1s will represent an $r{\rm\textnormal{-}CBC}$ with the stated parameters, under the stated conditions. There will
be $m - r$ servers available, each containing every file, and thus we can retrieve any collection of $k$ files for each $k \leq\min\{n,m-r\}$.
For the reverse direction, we show that the conditions are necessary. If $r \geq m$ then it would be possible for every server to be down, which cannot yield an $r{\rm\textnormal{-}CBC}$ when $k \geq 1$. The inequality $n < k$ is impossible because one cannot retrieve more than the total number of files. The inequality $k > m - r$ is impossible because there will be $m-r $ available servers, and with $t = 1$ we may only take one file from each server.
\end{proof}
Our first theorem extends Theorem~3 of Bujt\'as and Tuza~\cite{tuza2} to combinatorial batch codes with redundancy. To achieve this, we use the following extension of Hall's Marriage Theorem, which is also stated by Silberstein~\cite[Theorem 5]{silberstein}.
\begin{theorem}
\label{thm:hall_extension}
Let $\{A_1,\dots,A_n\}$ be a family of finite subsets of a set $M$, and let $r\geq 0$ be an integer.
The following are equivalent:
\begin{enumerate}
\item For each $r$-subset $M'\subseteq M$, there exist distinct elements $a_1,\dots,a_n$ such that $a_i\in A_i\setminus M'$ for each $i\in [n]$.
\item $\left |\bigcup_{j\in J}A_j\right |\geq r+c$ for every $c\in[n]$ and every $c$-subset $J$ of $[n]$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{thm:grhc}
Suppose that $A$ is an $m\times n$ matrix with values in $\{0,1\}$. For each $j \leq n$, let $A_j$ be the subset of $[m]$ determined by column $j$ of~$A$. The following are equivalent:
\begin{enumerate}
\item\label{grhc3} The matrix $A$ represents an $r{\rm\textnormal{-}CBC}(n,k,m)$.
\item\label{grhc1} For every $c \in [k]$ and every $c$-subset $J$ of $[n]$, $\left | \bigcup_{j \in J} A_j \right | \geq r+c$. Informally put, for each $c \in [k]$, each collection of $c$ columns of $A$ spans at least $r+c$ rows.
\item\label{grhc2} For every $d$-subset $I$ of $[m]$, with $r \leq d \leq r + k -1$, $|\{ i : A_i \subseteq I \}| \leq d-r$. Informally put, for each collection of $d$ rows of $A$, with $r \leq d \leq r+k-1$, the number of columns whose 1s are completely contained by these rows is at most $d-r$.
\end{enumerate}
\end{theorem}
\begin{proof}
The implication from (\ref{grhc3}) to (\ref{grhc1}) follows from the definition of an $r{\rm\textnormal{-}CBC}$ with $t = 1$. The implication from (\ref{grhc1}) to (\ref{grhc3}) is a direct application of Theorem~\ref{thm:hall_extension}.
Therefore, it suffices to prove that (\ref{grhc1}) and (\ref{grhc2}) are equivalent.
First, assume that $A$ satisfies condition~(\ref{grhc1}).
Choose $d$ with $r \leq d < r + k $
and let $I$ be a $d$-subset of $[m]$.
Let $J = \{ i : A_i \subseteq I \}$ and let $w = |J|$.
We want to show that $w \leq d - r$.
Suppose otherwise: then $w > d - r$, that is, $d < r+ w$.
So we have a collection of more than $d - r$ columns that
are contained in at most $d$ rows, where $r \leq d < r + k$.
So, if we let $c = d - r + 1$, we have a collection of $c$ columns
that are contained in fewer than $c + r$ rows.
Because $r \leq d$, we have $c > 0$. Because $d < r + k$,
we have $c \leq k$. Thus $c \in [k]$. This contradicts~(\ref{grhc1}), which
states that each collection of $c$ columns
must span at least $r+c$ rows. Thus we have $w \leq d - r$,
as desired.
Now assume that $A$ satisfies condition~(\ref{grhc2}).
First, we verify that $A$ satisfies condition~(\ref{grhc1}) in the special case
$c = 1$. This follows from the special case of
(\ref{grhc2}) with $d = r$, which says that
for every $r$-subset $I$ of $[m]$ we have
$|\{i : A_i \subseteq I\}| \leq 0$. Thus there is no column
with fewer than $r+1$ ones, which is precisely the statement of (\ref{grhc1}) in the case~$c=1$.
It remains to prove condition~(\ref{grhc1}) for $c \geq 2$. To this end,
choose $c \in [k]$ with $c \geq 2$ and let $J$ be a $c$-subset of~$[n]$.
Let $I = \bigcup_{j \in J} A_j$ and let $d = |I|$.
Note that, by the previous paragraph, each $A_j$ contains at least $r+1$ ones,
and thus $d \geq r+1$.
We want to show that $|I| \geq r + c$. Suppose otherwise; then
we have $r + 1 \leq d \leq r + c - 1$, so $r \leq d \leq r + k - 1$, as $c \leq k$.
Thus, by~(\ref{grhc2}), we have
\[
|\{i : A_i \subseteq I\}| \leq d-r \leq (r+c-1) - r = c-1.
\]
However, we also have
$J \subseteq \{i : A_i \subseteq I\}$, so $|\{i : A_i \subseteq I\}| \geq c$. This is a contradiction, so we conclude $d \geq r + c$, as desired.
\end{proof}
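Condition (\ref{grhc1}) of the theorem gives a direct, if exponential, test for the $r{\rm\textnormal{-}CBC}$ property when $t=1$: check that every $c$ columns span at least $r+c$ rows. A minimal Python sketch of such a checker (the function name and the small examples are our own illustrations, not part of the argument):

```python
from itertools import combinations

def is_rcbc(columns, k, m, r):
    # Condition (2) of the theorem: every c of the columns (subsets of
    # {1, ..., m}) must together span at least r + c rows, for c = 1..k.
    assert all(B <= set(range(1, m + 1)) for B in columns)
    return all(len(set().union(*J)) >= r + c
               for c in range(1, k + 1)
               for J in combinations(columns, c))

# The all-ones arrangement from Lemma lem:exists: every server stores
# every file, so any k <= min(n, m - r) files can be retrieved.
m, n, r = 5, 4, 2
full = [set(range(1, m + 1)) for _ in range(n)]
assert is_rcbc(full, 3, m, r)

# Columns with only r ones already fail the condition at c = 1.
thin = [{1, 2}] * n
assert not is_rcbc(thin, 1, m, r)
```

Since the test ranges over all $c$-subsets of columns for $c\leq k$, it is practical only for small instances.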
\begin{lemma}\label{lem:minmax}
Let $A$ be a matrix that represents an $r{\rm\textnormal{-}CBC}(n,k,m)$.
Then the number of $1$s in each column of $A$ is at least $r+1$, and if $A$ is optimal, at most $r+k$.
\end{lemma}
\begin{proof}
By setting $c=1$ in Theorem~\ref{thm:grhc}~(\ref{grhc1}), we see that each column of $A$ has cardinality at least $r+1$.
Now let $A$ represent an $r{\rm\textnormal{-}CBC}(n,k,m)$, and assume without loss of generality that $A_1$ has cardinality greater than $r+k$. We will show that $A$ is not optimal.
Remove an element from $A_1$ and call the resulting matrix $A'$.
Thus $|A'_1|+1=|A_1|> r+k$ and $A'_j=A_j$ for $2\leq j \leq n$.
To show $A'$ represents an $r{\rm\textnormal{-}CBC}(n,k,m)$, let $J$ be a $c$-subset of~$[n]$ with $c\leq k$.
If $1\notin J$, then \[\left |\bigcup_{j\in J}A'_j \right | = \left |\bigcup_{j\in J}A_j \right |\geq r+c.\]
If $1\in J$, then
$$\left |\bigcup_{j\in J}A'_j \right| \geq \left |A'_1 \right | = |A_1|-1\geq r+k\geq r+c.$$
Thus, by Theorem~\ref{thm:grhc}, $A'$ represents an $r{\rm\textnormal{-}CBC}(n,k,m)$. By construction, $N(A') = N(A) - 1$, and thus $A$ is not optimal.
\end{proof}
The lemma allows us to bound the weight of an optimal $r{\rm\textnormal{-}CBC}$.
\begin{corollary}\label{cor:bounds}
The following inequalities hold for $N(n,k,m;r)$:
\[
(r+1)n \leq N(n,k,m;r) \leq (r+k)n.
\]
\end{corollary}
The corollary allows us to prove $N(n,1,m;r)=(r+1)n$. When $k = 1$, let $A$ be a matrix with $n$ columns, each of cardinality $r+1$. This matrix $A$ is an $r{\rm\textnormal{-}CBC}$ with $N(A) = (r+1)n$, and by the corollary it is impossible to have a smaller value of $N$.
\section{Extremal Results}\label{sec:extremal}
In this section, we establish the exact value of $N(n,k,m;r)$ for several families of
parameter values. We first consider $r{\rm\textnormal{-}CBC}$s where there are at least as many servers as files, that is, $n\leq m$.
\newcommand{\Mod}[1]{\ (\text{mod}\ #1)}
\begin{theorem}\label{thm:tall}
Let $n\leq m$. Then $$N(n,k,m;r) = (r+1)n.$$
\end{theorem}
\begin{proof}
Let $A$ be the $m\times n$ matrix with columns given by
$$A_j = \{j+x \Mod m : x = 0, 1, \ldots, r \}.$$
See Figure~\ref{fig:tall} for an example. Let $J \subset [n]$ with $|J|=c$, $c\leq k$. Then $J = \{j_1, j_2,\ldots, j_c\}$ with $j_i<j_{i+1}$ for each $i \in [c-1]$. We consider two cases, based on the size of the gaps between consecutive entries of $J$ (considering $j_c$ and $j_1$ consecutive in $J$). Let $g_i = j_{i+1}-j_i$ for each $i\in [c-1]$ and $g_c=m+j_1-j_c$.
Case 1: Suppose $g_i>r$ for some $i \in [c]$. If $i=c$, then $m+j_1-j_c>r$, so $j_c+r-m<j_1$. If $j_c+r\leq m$, then $A_{j_c}\cap (J-\{j_c\})=\emptyset$. Similarly, if $j_c+r>m$, then since $j_c+r-m<j_1$, $A_{j_c}=\{j_c,j_c+1,\ldots, m, 1,\ldots, j_c+r-m\}$ and $A_{j_c}\cap (J-\{j_c\})=\emptyset$. Thus
\[\left|\bigcup_{j\in J}A_j\right|\geq |A_{j_c}\cup (J-\{j_c\})|=r+1+c-1=r+c.\]
If $i<c$, then $j_i+r<j_{i+1}\leq n\leq m$, so $A_{j_i} = \{j_i,j_i+1, \ldots, j_i+r \}$ and $j_{i+1}>j_i+r$, and thus $A_{j_i} \cap (J-\{j_i\}) = \emptyset$. It follows that
\[\left|\bigcup_{j \in J} A_j\right| \geq |A_{j_i}\cup (J- \{j_i\})| = r+1 + c-1 = r+c.\]
Case 2: Suppose $g_i\leq r$ for all $i \in [c]$, where indices are taken cyclically with $j_{c+1}=j_1$. Since $g_i\leq r$, the column $A_{j_i}$ contains every element of $[m]$ from $j_i$ through $j_{i+1}$ (cyclically). Taking the union over all $i \in [c]$, we obtain $\bigcup_{j\in J}A_j = [m]$ and
\[\left|\bigcup_{j \in J} A_j\right| = m \geq r+k \geq r+c.\]
Therefore, by Theorem~\ref{thm:grhc}, $A$ is an $r{\rm\textnormal{-}CBC}$.
Now, because $A$ is an $r{\rm\textnormal{-}CBC}$ and $\sum_{j\in [n]} |A_j| = (r+1)n$, $A$ is optimal by Corollary~\ref{cor:bounds}.
\end{proof}
When $r=0$, Theorem~\ref{thm:tall} reduces to $N(n,k,m;0)= n$, which is Theorem~3 of Paterson, Stinson, and Wei~\cite{psw}.
We next consider $r{\rm\textnormal{-}CBC}$s with at least as many files as servers, that is, $n\geq m$, and a maximal number of files being retrieved, $k=m-r$.
\begin{figure}
\[A = \begin{bmatrix}1&0&0&1\\1&1&0&0\\1&1&1&0\\1&1&1&1\\0&1&1&1\\0&0&1&1\end{bmatrix}\]
\caption{The matrix $A$ constructed as in the proof of Theorem~\ref{thm:tall} for $m=6, n=4, r=3,$ and $k\leq 3$.}\label{fig:tall}
\end{figure}
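The circulant construction in the proof of Theorem~\ref{thm:tall} is easy to generate and test mechanically. The following Python sketch (our own illustration, with a brute-force checker for condition (\ref{grhc1}) of Theorem~\ref{thm:grhc}) reproduces the $m=6$, $n=4$, $r=3$ example of Figure~\ref{fig:tall}:

```python
from itertools import combinations

def circulant_columns(n, m, r):
    # Column j stores servers j, j+1, ..., j+r, wrapped modulo m into [m].
    return [{(j + x - 1) % m + 1 for x in range(r + 1)} for j in range(1, n + 1)]

def is_rcbc(columns, k, m, r):
    # Condition (2) of Theorem thm:grhc: every c columns span >= r + c rows.
    assert all(B <= set(range(1, m + 1)) for B in columns)
    return all(len(set().union(*J)) >= r + c
               for c in range(1, k + 1)
               for J in combinations(columns, c))

m, n, r = 6, 4, 3
cols = circulant_columns(n, m, r)
assert cols[3] == {4, 5, 6, 1}                    # the wrapped fourth column of Figure fig:tall
assert sum(len(B) for B in cols) == (r + 1) * n   # weight (r+1)n, optimal by Corollary cor:bounds
assert is_rcbc(cols, 3, m, r)                     # k may be as large as m - r = 3
```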
\begin{theorem}\label{thm:maxk}
If $m \leq n$ and $k=m-r$, then $$N(n,m-r,m;r) = mn-m(m-r-1).$$
\end{theorem}
\begin{proof} Let $A$ be the $m\times n$ matrix with columns given by
$$A_j = \{j+x \Mod m : x = 0, 1, \ldots, r \} \text{ for }j\leq m$$
and $A_j = \{1,2,\ldots,m\}$ for $m<j\leq n$. See Figure~\ref{fig:maxk} for an example. We first show that $A$ is an $r{\rm\textnormal{-}CBC}$. Let $c\leq m-r$ and let $J\subseteq [n]$ with $|J|=c$. Then, $|\bigcup_{j\in J} A_j| = m\geq r+c$ if $J\cap\{m+1, \ldots, n\}\neq\emptyset$. So, suppose $J \cap \{m+1,\ldots,n\} = \emptyset$. Then, we are only considering the first $m$ columns of $A$, so let $A'$ be the $m\times m$ matrix with columns given by $\{A_1,\ldots,A_m\}$. We can use the argument from the proof of Theorem~\ref{thm:tall} to show that $A'$ is an $r{\rm\textnormal{-}CBC}$, and thus satisfies Theorem~\ref{thm:grhc}. Therefore $A$ is an $r{\rm\textnormal{-}CBC}$. Note that in $A$, for each $d \in [m]$, $|\{i:d \in A_i\}|=n-m+r+1$.
Because $A$ is an $r{\rm\textnormal{-}CBC}$ with $N(A)=m(n-m+r+1)$, we know that $N(n,m-r,m;r)\leq m(n-m+r+1)$. Suppose that $N(n,m-r,m;r)<m(n-m+r+1)$. Let $B$ be an $r$-CBC with $N(B)=N(n,m-r,m;r)$. Then there must be some $d \in [m]$ for which $|\{i:d \in B_i\}|<n-m+r+1$. Let $J \subseteq [n]\setminus \{i:d\in B_i\}$ with $|J|=m-r$. Such a $J$ exists because
$$\left |[n]\setminus \{i:d\in B_i\} \right |>n-(n-m+r+1) = m-r-1.$$
Then $|\bigcup_{j\in J}B_j|<(m-r)+r$, because $d\notin B_j$ for any $j \in J$. Therefore, by Theorem~\ref{thm:grhc}, $B$ is not an $r{\rm\textnormal{-}CBC}$.
We can thus conclude that $A$ is optimal and \[N(n,m-r,m;r) = m(n-m+r+1).\qedhere\]
\end{proof}
When $r=0$, Theorem~\ref{thm:maxk} reduces to $N(n,m,m;0)=mn-m(m-1)$, which appears as Theorem~4 of Paterson, Stinson, and Wei~\cite{psw}.
\begin{figure}
\[A=\begin{bmatrix} 1&0&0&1&1&1&1\\1&1&0&0&1&1&1\\1&1&1&0&0&1&1\\1&1&1&1&0&0&1\\0&1&1&1&1&0&1\\0&0&1&1&1&1&1\end{bmatrix}\]
\caption{The matrix $A$ constructed as in the proof of Theorem~\ref{thm:maxk} when $m=6, n=7, r=3,$ and $k=3$.}\label{fig:maxk}
\end{figure}
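Similarly, the construction in the proof of Theorem~\ref{thm:maxk} can be checked mechanically for the parameters of Figure~\ref{fig:maxk}. A Python sketch (again our own illustration, with the same brute-force checker):

```python
from itertools import combinations

def maxk_columns(n, m, r):
    # First m columns are the circulant (r+1)-sets of Theorem thm:tall;
    # the remaining n - m columns store every file on every server.
    circ = [{(j + x - 1) % m + 1 for x in range(r + 1)} for j in range(1, m + 1)]
    return circ + [set(range(1, m + 1)) for _ in range(n - m)]

def is_rcbc(columns, k, m, r):
    # Condition (2) of Theorem thm:grhc: every c columns span >= r + c rows.
    assert all(B <= set(range(1, m + 1)) for B in columns)
    return all(len(set().union(*J)) >= r + c
               for c in range(1, k + 1)
               for J in combinations(columns, c))

m, n, r = 6, 7, 3
k = m - r                          # the maximal number of retrieved files
cols = maxk_columns(n, m, r)
assert is_rcbc(cols, k, m, r)
assert sum(len(B) for B in cols) == m * (n - m + r + 1)   # = mn - m(m-r-1) = 30
```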
We now examine what occurs when $n$ is large.
We first prove two technical lemmas.
\begin{lemma}
\label{lem:move_ones}
Let $A$ be an $r{\rm\textnormal{-}CBC}$, and assume there are $i, j \in[n]$ with $A_i \subsetneq A_j$.
Let $R$ be a nonempty set with $R \subseteq A_j \setminus A_i$. Replacing columns $A_i$ and $A_j$ with $A'_i = A_i \cup R$ and $A'_j = A_j \setminus R$, respectively, produces a matrix $A'$ which is an $r{\rm\textnormal{-}CBC}$ with the same weight as $A$.
\end{lemma}
\begin{proof}
Assume that $A$, $A'$, $i$, and $j$ are as in the statement of the lemma.
We wish to prove that $A'$ satisfies condition~(iii) of Theorem~\ref{thm:grhc}.
To this end, assume that $I$ is a $d$-subset of $[m]$.
It is sufficient to show that the number $\sigma'$ of indices $\ell\in[n]$ for which $A'_\ell \subseteq I$ is no more than the number $\sigma$ of indices $\ell$ for which $A_\ell \subseteq I$.
The proof has several cases.
If $A_j \subseteq I$, then $A_i \subseteq I$ as well, because $A_i \subseteq A_j$.
Thus $A'_i$ and $A'_j$ are also subsets of $I$.
Thus if $A_j \subseteq I$ then $\sigma = \sigma'$.
If $A_i \not \subseteq I$ then, because $A_i \subseteq A_j$, we have that $A_j \not \subseteq I$.
Thus neither of $A'_i$ and $A'_j$ is a subset of $I$, so again $\sigma = \sigma'$.
Because $A_i \subseteq A_j$, there is only one additional case, which occurs when $A_i \subseteq I$ but $A_j \not \subseteq I$.
There are two subcases: if $A_j \setminus I \subseteq R$, then $A'_j \subseteq I$ and $A'_i \not \subseteq I$.
Otherwise, if $A_j \setminus I \not \subseteq R$, then neither $A'_i$ nor $A'_j$ is a subset of~$I$.
Thus, in both subcases, we have $\sigma' \leq \sigma$.
So $A'$ is an $r{\rm\textnormal{-}CBC}$ and since $|A_i|+|A_j|=|A_i'|+|A_j'|$, it follows that $A'$ has the same weight as $A$.
\end{proof}
\begin{lemma}
\label{lem:types}
For every matrix $A$ representing an optimal $r{\rm\textnormal{-}CBC}(n,k,m)$, there exists a matrix $A'$ also representing an optimal $r{\rm\textnormal{-}CBC}(n,k,m)$ for which one of the following two properties holds:
\begin{enumerate}
\item\label{lem:prop1} $r+1\leq |A'_j| \leq r+k-1$ for each $j \leq n$, or
\item\label{lem:prop2} $r+k-1\leq |A'_j| \leq r+k$ for each $j\leq n$.
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:minmax}, if $A$ represents an optimal $r{\rm\textnormal{-}CBC}$ then all columns of $A$ have cardinality at most $r+k$.
Assume $A$ satisfies neither property (\ref{lem:prop1}) nor property (\ref{lem:prop2}).
Then $A$ contains columns $C$ and $C'$ of cardinality $i$ $(r+1\leq i\leq r+k-2)$ and $r+k$, respectively.
Observe that if an $r{\rm\textnormal{-}CBC}$ has a column of cardinality $r+k$, one can replace that column with any other column of cardinality $r+k$ and still satisfy the $r{\rm\textnormal{-}CBC}$ property.
So we can assume that $C\subseteq C'$.
By Lemma~\ref{lem:move_ones}, we can then produce another $r{\rm\textnormal{-}CBC}$ with one fewer column of cardinality $r+k$, but with the same weight.
Proceed inductively until either the resulting $r{\rm\textnormal{-}CBC}$ has no columns of cardinality $r+k$ or it has no columns of cardinality less than $r+k-1$.
In these cases, we produce an $r{\rm\textnormal{-}CBC}$ of type (\ref{lem:prop1}) or (\ref{lem:prop2}), respectively.
At every step, the weight of the $r{\rm\textnormal{-}CBC}$ remains unchanged, and therefore the final $r{\rm\textnormal{-}CBC}$ is optimal.
\end{proof}
\begin{theorem}\label{thm:nbddbelow}Let $r\geq 0$ and $k\leq m-r$ be integers. If $n \geq (k-1)\binom{m}{r+k-1}$, then
\[N(n,k,m;r) = (r+k)n - (k-1)\binom{m}{r+k-1}.\]
\end{theorem}
\begin{proof}
In Section~\ref{sec:prelim}, following Corollary~\ref{cor:bounds}, we showed that $N(n,1,m;r)=(r+1)n$, so the result holds when $k=1$.
We may thus assume that $k\geq 2$.
Set $M = \binom{m}{r+k-1}$.
Let $\{A_i\mid 1\leq i\leq (k-1)M\}$ consist of $k-1$ copies of each possible subset of $[m]$ with cardinality $r+k-1$, and let $\{A_i\mid (k-1)M+1 \leq i \leq n\}$ be a set of any $n-(k-1)M$ subsets of $[m]$ with cardinality $r+k$.
Let $A$ be the $m\times n$ matrix whose columns are the sets $A_1,\dots,A_n$. For an example, see Figure~\ref{fig:nbddbelow}. It follows that
\begin{align*}
N(A) &= (r+k-1)(k-1)M+(r+k)[n-(k-1)M] \\
&= (r+k)n - (k-1)M.
\end{align*}
Let $c \leq k$ and $J \subseteq[n]$ with $|J|=c$. If there is some $j' \in J$ with $j'\geq (k-1)M+1$, then $\left|\bigcup_{j\in J}A_j\right|\geq |A_{j'}|=r+k \geq r+c$. Suppose then that $J \subseteq \{1,\ldots,(k-1)M\}$. Then, for each $j$, $|A_j| = r+k-1$, so if $c<k$, $\left|\bigcup_{j\in J}A_j\right|\geq r+k-1\geq r+c$. Suppose then that $c=k$. Consider $j_1, j_2 \in J$ with $A_{j_1} \neq A_{j_2}$. Such a pair of sets must exist since there are only $k-1$ copies of each subset of $[m]$ with cardinality $r+k-1$. Thus $\left|\bigcup_{j\in J}A_j \right| \geq |A_{j_1}\cup A_{j_2}| \geq r+k-1+1 = r+c$. Thus, in all cases, Theorem~\ref{thm:grhc}~(ii) is satisfied and $A$ is an $r{\rm\textnormal{-}CBC}$.
Let $B$ be an optimal $r{\rm\textnormal{-}CBC}(n,k,m)$.
Without loss of generality, we can assume $B$ is of type (i) or (ii) as outlined in Lemma \ref{lem:types}.
Suppose that $B$ is of type (i), so that $r+1\leq |B_j|\leq r+k-1$ for each $j \leq n$.
Since $B$ is an $r{\rm\textnormal{-}CBC}$, every $(r+k-1)$-subset of $[m]$ contains at most $k-1$ columns of $B$, meaning that $n\leq (k-1)M$ and thus $n=(k-1)M$.
Let $\mathcal{C}$ be the set of ordered pairs
\[ \mathcal{C} = \{ (B_j,I)\mid j\in[n],\ B_j\subseteq I \subseteq[m],\ |I|=r+k-1\}.\]
Observe that for each $j\in[n]$, since $|B_j|\leq r+k-1$, there is at least one such ordered pair in $\mathcal{C}$ including $B_j$.
Therefore $|\mathcal{C}|\geq n$.
However, for each $I\subseteq[m]$ with $|I|=r+k-1$, there are at most $k-1$ columns of $B$ which $I$ contains.
So $|\mathcal{C}| \leq (k-1)M$.
Therefore $|\mathcal{C}|=(k-1)M=n$, and hence each column of $B$ is contained in exactly one subset of cardinality $r+k-1$.
Since $m>r+k-1$, a column of cardinality less than $r+k-1$ would be contained in more than one such subset; thus each column of $B$ has cardinality $r+k-1$, and it follows that $N(B) = (r+k-1)n = (r+k)n-(k-1)M$.
Now, suppose that $B$ is of type (ii), that is, $r+k-1\leq |B_j|\leq r+k$ for each $j\leq n$.
Because $B$ is an $r{\rm\textnormal{-}CBC}$, the maximal number of columns of $B$ with cardinality $r+k-1$ is $(k-1)M$.
Therefore
\begin{align*}
N(B)&\geq (r+k-1)(k-1)M+(r+k)[n-(k-1)M] \\
&= (r+k)n - (k-1)M. \qedhere
\end{align*}
\end{proof}
If $r=0$, the previous result simplifies to $N(n,k,m;0) = kn-(k-1)\binom{m}{k-1}$, which is Theorem~8 of Paterson, Stinson, and Wei~\cite{psw}.
\begin{figure}
\[A=\begin{bmatrix} 1&1&1&0&0&0&1&0\\1&0&0&1&1&0&1&1\\0&1&0&1&0&1&1&1\\0&0&1&0&1&1&0&1\end{bmatrix}\]
\caption{The matrix $A$ constructed as in the proof of Theorem \ref{thm:nbddbelow} when $m=4, k=2, r=1,$ and $n=8\geq (k-1)\binom{m}{r+k-1}$.}\label{fig:nbddbelow}
\end{figure}
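The two-part construction in the proof of Theorem~\ref{thm:nbddbelow} can likewise be verified for the small example of Figure~\ref{fig:nbddbelow}. In the Python sketch below (our own illustration), the padding $(r+k)$-subsets are chosen arbitrarily as $\{1,\dots,r+k\}$:

```python
from itertools import combinations
from math import comb

def nbddbelow_columns(n, k, m, r):
    # k-1 copies of every (r+k-1)-subset of [m], padded to n columns with
    # arbitrary (r+k)-subsets (here all equal to {1, ..., r+k}).
    small = [set(S) for S in combinations(range(1, m + 1), r + k - 1)
             for _ in range(k - 1)]
    return small + [set(range(1, r + k + 1)) for _ in range(n - len(small))]

def is_rcbc(columns, k, m, r):
    # Condition (2) of Theorem thm:grhc: every c columns span >= r + c rows.
    assert all(B <= set(range(1, m + 1)) for B in columns)
    return all(len(set().union(*J)) >= r + c
               for c in range(1, k + 1)
               for J in combinations(columns, c))

m, k, r = 4, 2, 1
M = comb(m, r + k - 1)                   # M = 6
n = 8                                    # n >= (k-1)M, as the theorem requires
cols = nbddbelow_columns(n, k, m, r)
assert is_rcbc(cols, k, m, r)
assert sum(len(B) for B in cols) == (r + k) * n - (k - 1) * M   # = 18
```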
\section{Narrowing the gap}\label{sec:gap}
In the previous section, for fixed $m$, $k$, and $r$, we established $N(n,k,m;r)$ when $n\leq m$ and when $n\geq (k-1)\binom{m}{r+k-1}$.
In this section, we address the ``gap'' between these results, and establish $N(n,k,m;r)$ for values of $n$ immediately below the latter interval.
We begin by identifying $N(n,2,m;r)$ for all possible parameters $n$, $m$, and $r$.
If $n\geq \binom{m}{r+1}$, then by Theorem~\ref{thm:nbddbelow}, $N(n,2,m;r)=(r+2)n-\binom{m}{r+1}$.
Suppose that $n\leq \binom{m}{r+1}$.
Let $A$ be an $m\times n$ matrix whose columns are distinct $(r+1)$-subsets of $[m]$.
Then $A$ is an $r$-CBC$(n,2,m)$ and $N(A)=(r+1)n$.
So, by Corollary~\ref{cor:bounds}, $N(n,2,m;r)=(r+1)n$.
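The $k=2$ construction above can be tested the same way; for instance (a Python sketch of our own, with a brute-force checker for condition (\ref{grhc1}) of Theorem~\ref{thm:grhc}):

```python
from itertools import combinations

def is_rcbc(columns, k, m, r):
    # Condition (2) of Theorem thm:grhc: every c columns span >= r + c rows.
    assert all(B <= set(range(1, m + 1)) for B in columns)
    return all(len(set().union(*J)) >= r + c
               for c in range(1, k + 1)
               for J in combinations(columns, c))

# For k = 2 and n <= comb(m, r+1), take any n distinct (r+1)-subsets:
# two distinct (r+1)-sets always span at least r + 2 servers.
m, r, n = 5, 1, 7
cols = [set(S) for S in combinations(range(1, m + 1), r + 1)][:n]
assert len(cols) == n                            # n <= comb(5, 2) = 10
assert is_rcbc(cols, 2, m, r)
assert sum(len(B) for B in cols) == (r + 1) * n  # meets the lower bound of Corollary cor:bounds
```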
Having completed the case $k = 2$ we assume that $k\geq 3$ for the remainder of this section.
\begin{theorem}
Let $r\geq 0$, $k\geq 3$, and $m\geq r+k$.
If
\begin{equation}
\label{eqn:midinterval}
(k-1)\tbinom{m}{r+k-1}-(m-r-k+1)\cdot F(k,m,r) \leq n \leq (k-1)\tbinom{m}{r+k-1}
\end{equation}
where $F(k,m,r)\leq \frac{k-1}{r+k-1}\binom{m}{r+k-2}$ is the quantity specified in Definition~\ref{def:f} below, then
\[ N(n,k,m;r) = (r+k-1)n - \left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}\right\rfloor.\]
\label{thm:fmkr}
\end{theorem}
Before proving the theorem, we first prove some technical lemmas that are in line with the argument of Bujt\'as and Tuza~\cite{tuza2}.
After proving Theorem~\ref{thm:fmkr}, we follow up with some concepts from design theory, then show that the existence of a design with certain parameters significantly reduces the complexity of the lower bound in Theorem~\ref{thm:fmkr}.
\begin{lemma}
\label{prop:upperbound}
Let $A$ represent an $r$-CBC and
for each $i > r$,
let $\ell_i$ denote the number of columns of $A$ with cardinality $i$.
Then
\[ \sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i}\leq (k-1)\binom{m}{r+k-1}.\]
\end{lemma}
\begin{proof}
We count in two ways the number $a$ of pairs $(R,A_j)$ where $R\subseteq[m]$, $|R|=r+k-1$, $j\in[n]$, and $A_j\subseteq R$.
By Theorem~\ref{thm:grhc}, every $(r+k-1)$-subset $R$ of $[m]$ contains at most $k-1$ columns of $A$.
Therefore $a\leq (k-1)\binom{m}{r+k-1}$.
Furthermore, for each column of $A$ with cardinality $i$, there are $\binom{m-i}{r+k-1-i}$ $(r+k-1)$-subsets $R$ of $[m]$ containing it.
Therefore
\[ a = \sum_{\substack{R\subseteq[m]\\|R|=r+k-1}}
|\{j:A_j\subseteq R\}| = \sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i}.\]
The result follows.
\end{proof}
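The double count in the proof can be replicated numerically on a small example. In the Python sketch below (our own illustration; the four columns form a $1{\rm\textnormal{-}CBC}(4,3,5)$), the direct and per-column counts of the pairs $(R,A_j)$ agree:

```python
from itertools import combinations
from math import comb

m, k, r = 5, 3, 1
# Columns of a small 1-CBC(4, 3, 5); cardinalities lie between r+1 and r+k-1.
cols = [{1, 2}, {3, 4}, {2, 5}, {1, 3, 4}]

# Count the pairs (R, A_j) with |R| = r+k-1 and A_j contained in R, directly...
direct = sum(1 for R in combinations(range(1, m + 1), r + k - 1)
             for B in cols if B <= set(R))

# ...and column by column: a column of cardinality i lies in
# comb(m-i, r+k-1-i) of the (r+k-1)-subsets R of [m].
per_column = sum(comb(m - len(B), r + k - 1 - len(B)) for B in cols)

assert direct == per_column
assert direct <= (k - 1) * comb(m, r + k - 1)   # the bound of the lemma
```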
The next result identifies the maximum number of columns of cardinality $r+k-1$ that can be appended to an $r$-CBC to obtain a larger $r$-CBC.
\begin{lemma}
\label{prop:extension}
Let $A$ represent an $r{\rm\textnormal{-}CBC}(n,k,m)$.
Then $A$ can be extended to an $r{\rm\textnormal{-}CBC}(n+t,k,m)$ $A'$ with $t$ additional columns of cardinality $r+k-1$ if and only if
\begin{equation}
t \leq (k-1)\binom{m}{r+k-1} - \sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i},
\label{eqn:prop3}
\end{equation}
where, for each $i>r$, $\ell_i$ denotes the number of columns in $A$ of cardinality $i$.
\end{lemma}
\begin{proof}
It is sufficient to show the result holds when $A$ has no column with cardinality greater than $r+k-1$.
Suppose such an extension is possible and let $\ell_i'$, for $r+1\leq i\leq r+k-1$, denote the number of columns in $A'$ of cardinality $i$.
Observe that $\ell_{r+k-1}'=\ell_{r+k-1}+t$ and $\ell_i'=\ell_i$ for all other $i$.
By Lemma \ref{prop:upperbound}, we have that
\[ \sum_{i=r+1}^{r+k-1}\ell_i'\binom{m-i}{r+k-1-i} \leq (k-1)\binom{m}{r+k-1} \]
and therefore
\[
t + \sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i}\leq (k-1)\binom{m}{r+k-1}. \]
Now suppose that (\ref{eqn:prop3}) holds.
Let $\mathcal{C}$ be the set of all possible columns of cardinality $r+k-1$.
Let $C\in\mathcal{C}$.
Since there are at most $k-1$ columns of $A$ contained in $C\in \mathcal{C}$, we can define $t_C \geq0$ so that there are $k-1-t_C$ columns of $A$ contained in $C$.
Hence we can append up to $t_C$ copies of $C$ to $A$, and the resulting matrix will be an $r{\rm\textnormal{-}CBC}$.
Then the lemma follows if we show that $t\leq \sum_{C\in\mathcal{C}}t_C$.
Recall that each column of $A$ with cardinality $i$ is contained in $\binom{m-i}{r+k-i-1}$ columns of $\mathcal{C}$.
Therefore by an argument similar to the one above,
\[\sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i} = \sum_{C\in\mathcal{C}}(k-1-t_C) = (k-1)\binom{m}{r+k-1}-\sum_{C\in\mathcal{C}}t_C.\]
The result follows.
\end{proof}
We will require Lemma~1 of Bujt\'as and Tuza~\cite{tuza3}, which we restate here.
\begin{lemma}[Bujt\'as and Tuza~\cite{tuza3}]
\label{lem:tuza}
For any three integers $i$, $p$, and $m$ satisfying $1\leq i\leq p\leq m-1$, the following inequality holds:
\[\left\lfloor \dfrac{\binom{m-i}{p-i}-1}{m-p}\right\rfloor \geq p-i.\]
\end{lemma}
\begin{definition}\label{def:f}
For parameters $k\geq 3$, $r\geq 0$, and $m\geq k+r$, let $F(k,m,r)$ be the largest $n$ such that an $r{\rm\textnormal{-}CBC}(n,k,m)$ exists in which each column has cardinality $r+k-2$.
Such an $r{\rm\textnormal{-}CBC}(n,k,m)$ and $F(k,m,r)$ are closely related to \defn{packing designs} and \defn{packing numbers} \cite{millsmullin}, which will be discussed at the end of the section.
\end{definition}
\begin{lemma}
Letting $F(k,m,r)$ be as in Definition~\ref{def:f}, we have
\[F(k,m,r) \leq \dfrac{k-1}{r+k-1}\dbinom{m}{r+k-2}.\]
\label{lem:fmkr_inequality}
\end{lemma}
\begin{proof}
Let $P$ be an $r{\rm\textnormal{-}CBC}(n,k,m)$ with $n=F(k,m,r)$ and suppose the columns of $P$ have cardinality $r+k-2$.
We enumerate the set
\[ \mathcal{C}=\{(C,I)\mid C\mbox{ is a column of $P$}, C\subseteq I\subseteq[m], |I|=r+k-1\} \]
in two ways.
For any given column $C$ of $P$, we have that $|C|=r+k-2$ and so there are $m-(r+k-2)$ possible subsets $I\subseteq[m]$ for which $|I|=r+k-1$ and $C\subset I$.
Therefore $|\mathcal{C}| = F(k,m,r)\cdot(m-r-k+2)$.
For any given $I\subseteq[m]$ with $|I|=r+k-1$, since $P$ is an $r{\rm\textnormal{-}CBC}(n,k,m)$, there are at most $k-1$ columns $C$ of $P$ for which $C\subset I$.
Therefore $|\mathcal{C}| \leq \binom{m}{r+k-1}\cdot (k-1)$.
So
$F(k,m,r)\cdot (m-r-k+2) \leq (k-1)\binom{m}{r+k-1}.$
Since $\binom{m}{r+k-1} = \frac{m-r-k+2}{r+k-1}\binom{m}{r+k-2}$, the result follows.
\end{proof}
We now prove Theorem \ref{thm:fmkr}, letting $F(k,m,r)$ take the value from
Definition~\ref{def:f}.
\begin{proof}[Proof of Theorem \ref{thm:fmkr}]
Suppose that $m=r+k$.
Then $N(n,k,m;r)=mn-m(m-r-1)$ by Theorem~\ref{thm:maxk}, and our formula gives
\[\begin{array}{rcl}
N &=& (m-1)n - \left\lfloor(k-1)m-n\right\rfloor\\
&=& mn-km+m \\
&=& mn-(m-r)m+m \\
&=& mn-(m-r-1)m. \\
\end{array}\]
So the formula holds.
We now assume that $m>r+k$.
Let $P$ be an $r{\rm\textnormal{-}CBC}(F(k,m,r),k,m)$, as in Definition~\ref{def:f}, whose columns all have cardinality $r+k-2$.
Let $\mathcal{C}$ be the set of all columns contained in $[m]$ with cardinality $r+k-1$.
Let $$x=\left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}\right\rfloor.$$
It follows from the restriction on $n$ that $0\leq x \leq F(k,m,r)$.
Let $B$ be an $r{\rm\textnormal{-}CBC}(x,k,m)$ consisting of $x$ columns from $P$.
By Lemma~\ref{prop:extension}, $B$ can be extended by appending up to $(k-1)\binom{m}{r+k-1} - x(m-r-k+2)$ columns
from $\mathcal{C}$, and
\begin{align*}
&(k-1)\binom{m}{r+k-1} - x(m-r-k+2){\rule[-15pt]{0pt}{0pt}}\\
&= (k-1)\binom{m}{r+k-1} - \left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}\right\rfloor(m-r-k+2){\rule[-25pt]{0pt}{0pt}}\displaybreak[0]\\
&\geq (k-1)\binom{m}{r+k-1} - \left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}+(k-1)\binom{m}{r+k-1}-n\right\rfloor{\rule[-25pt]{0pt}{0pt}}\displaybreak[0]\\
&= (k-1)\binom{m}{r+k-1} - \left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}\right\rfloor-(k-1)\binom{m}{r+k-1}+n{\rule[-15pt]{0pt}{0pt}}\displaybreak[0]\\
&= n-x.
\end{align*}
Let $B'$ be an extension of $B$ obtained by appending the appropriate $n-x$ columns from $\mathcal{C}$.
Then $N(B')=(r+k-1)n-x$ and hence $N(n,k,m;r)\leq (r+k-1)n-x$.
Let $A$ be an optimal $r$-CBC$(n,k,m)$.
In what follows, we show that $N(A)\geq (r+k-1)n - x$.
By Lemma~\ref{lem:types}, we may assume that $A$ is type (\ref{lem:prop1}) or (\ref{lem:prop2}).
If $A$ is type (\ref{lem:prop2}), then $|A_j|\geq r+k-1$ for each $j\in[n]$, and therefore $N(A)\geq(r+k-1)n\geq(r+k-1)n-x$, since $x\geq0$.
Suppose now that $A$ is type (\ref{lem:prop1}); that is $r+1\leq |A_j|\leq r+k-1$ for each $j\in[n]$.
It is sufficient to show that
\[ N(A) = \sum_{i=r+1}^{r+k-1} i\ell_i = (r+k-1)n -\sum_{i=r+1}^{r+k-1}(r+k-1-i)\ell_i \geq (r+k-1)n-x,\]
where again $\ell_i$ $(r+1\leq i\leq r+k-1)$ denotes the number of columns of $A$ with cardinality $i$.
Hence it is sufficient to show that
\begin{equation}
\label{eqn:lowerbound}
\sum_{i=r+1}^{r+k-1}(r+k-1-i)\ell_i \leq x.
\end{equation}
By Lemma~\ref{prop:upperbound}, we have that
\[\sum_{i=r+1}^{r+k-1}\ell_i\binom{m-i}{r+k-1-i} \leq (k-1)\binom{m}{r+k-1}.\]
Since $\ell_{r+k-1} = n - (\ell_{r+1} + \cdots + \ell_{r+k-2})$, we can substitute:
\[ n-(\ell_{r+1} + \cdots + \ell_{r+k-2}) + \sum_{i=r+1}^{r+k-2}\ell_i\binom{m-i}{r+k-1-i} \leq (k-1)\binom{m}{r+k-1}.\]
Rearranging, and dividing both sides by $m-r-k+1$ (which is positive, since $m>r+k$), we obtain
\[ \sum_{i=r+1}^{r+k-2}\ell_i\left(\dfrac{\dbinom{m-i}{r+k-1-i}-1}{m-r-k+1}\right) \leq \dfrac{(k-1)\dbinom{m}{r+k-1}-n}{m-r-k+1}.\]
For each $i$ with $r+1\leq i\leq r+k-2$, write $t=m-i$ and $j=r+k-1-i$, so that $t-j=m-r-k+1$; since $\binom{t}{j}\geq j(t-j)+1$, the coefficient of $\ell_i$ on the left-hand side is at least $r+k-1-i$. Therefore the integer $\sum_{i=r+1}^{r+k-1}(r+k-1-i)\ell_i$ is at most the right-hand side, and hence at most its floor, which is~$x$. This establishes~(\ref{eqn:lowerbound}), and the result follows.
\end{proof}
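The coefficient bound that powers the last step of this proof, namely $\binom{t}{j}-1\geq j(t-j)$ with $t=m-i$ and $j=r+k-1-i$, can be sanity-checked numerically over a range of values:

```python
from math import comb

# For r+1 <= i <= r+k-2 put j = r+k-1-i and t = m-i, so that
# t - j = m-r-k+1 >= 2 whenever m > r+k.  The final step of the
# proof uses that binom(t, j) - 1 >= j * (t - j).
for j in range(1, 10):
    for s in range(2, 12):      # s plays the role of m-r-k+1
        t = j + s
        assert comb(t, j) - 1 >= j * s
```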
We now classify when equality is reached in Lemma \ref{lem:fmkr_inequality}; in this case the lower bound of the interval (\ref{eqn:midinterval}) in Theorem \ref{thm:fmkr} takes its simplest form.
\begin{definition}
Let $X$ be a set and $\mathcal{B}$ be a family of subsets of $X$.
Recall that the ordered pair $(X,\mathcal{B})$ is a $t{\rm\textnormal{-}}(v,k,\lambda)$ \defn{design} if $|X|=v$, $|B|=k$ for each $B\in\mathcal{B}$, and each $t$-subset of $X$ is contained in exactly $\lambda$ sets in $\mathcal{B}$.
Moreover, in such a design, it is known that $|\mathcal{B}|=\binom{v}{t}\cdot\lambda/\binom{k}{t}$. See Khosrovshahi and Laue~\cite{tdesign} for more information about $t$-designs.
\end{definition}
\begin{definition}
Let $X$ be a set and $\mathcal{B}$ be a family of subsets of $X$.
The ordered pair $(X,\mathcal{B})$ is a $t{\rm\textnormal{-}}(v,k,\lambda)$ \defn{packing design} if $|X|=v$, $|B|=k$ for each $B\in\mathcal{B}$, and each $t$-subset of $X$ is contained in \defn{at most} $\lambda$ sets in $\mathcal{B}$.
The \defn{packing number} $D_\lambda(v,k,t)$ is the maximum number of blocks in a $t{\rm\textnormal{-}}(v,k,\lambda)$ packing design.
Therefore $D_\lambda(v,k,t) \leq \binom{v}{t}\cdot\lambda/\binom{k}{t}$ with equality when a $t{\rm\textnormal{-}}(v,k,\lambda)$ design exists.
For more information on packing designs, see Mills and Mullin~\cite{millsmullin}.
\end{definition}
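For intuition, the packing bound can be checked by brute force for tiny parameters; the sketch below (parameters chosen purely for illustration) computes $D_1(5,3,2)$ exhaustively:

```python
from itertools import combinations
from math import comb

v, blk, t, lam = 5, 3, 2, 1
blocks = list(combinations(range(v), blk))

def is_packing(fam):
    """Every t-subset of the point set lies in at most lam blocks of fam."""
    return all(sum(1 for B in fam if set(T) <= set(B)) <= lam
               for T in combinations(range(v), t))

# Exhaustive search over all families of distinct blocks (2^10 of them).
best = max(len(fam)
           for size in range(len(blocks) + 1)
           for fam in combinations(blocks, size)
           if is_packing(fam))

assert best <= lam * comb(v, t) // comb(blk, t)   # the packing bound
assert best == 2                                  # D_1(5,3,2) = 2
```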
There is a connection between an $r{\rm\textnormal{-}CBC}$ and the complement of a packing design with appropriate parameters satisfying an additional property.
\begin{construction}
\label{con:design}
Let $g=m-(r+k)$ and $\mathcal{D}$ be a maximal $(g+1){\rm\textnormal{-}}(m,g+2,k-1)$ packing design with vertex set $[m]$, block set $\mathcal{B}$, with the additional property (P):
\begin{quote}
(P): no block in $\mathcal{B}$ appears more than $k-2$ times.
\end{quote}
Observe that, by the packing bound, $|\mathcal{B}| \leq \frac{k-1}{g+2}\binom{m}{g+1}=\frac{k-1}{r+k-1}\binom{m}{r+k-2}$.
Let $A$ be a matrix whose columns are the complements of the sets in $\mathcal{B}$; each of these at most $\frac{k-1}{r+k-1}\binom{m}{r+k-2}$ columns of $A$ has cardinality $r+k-2$.
\end{construction}
\begin{lemma}
\label{lemma:design_rcbc}
The matrix $A$ in Construction \ref{con:design} is an $r{\rm\textnormal{-}CBC}$.
\end{lemma}
\begin{proof}
Let $1\leq c\leq k$ and $J\subseteq[n]$ have cardinality $c$.
If $c\leq k-2$, then $\left |\bigcup_{j\in J}A_j \right| \geq r+k-2\geq r+c$.
If $c=k-1$ then, by property (P), not all the sets $A_j$ ($j\in J$) are equal, so $\left|\bigcup_{j\in J}A_j\right|\geq r+k-1$.
Therefore, to prove $A$ is an $r$-CBC, we need to show that if $|J|=k$, then $\left |\bigcup_{j\in J}A_j \right |\geq r+k$.
Assume that $A$ is not an $r{\rm\textnormal{-}CBC}$.
Then there exists $J\subseteq[n]$ of cardinality $k$ such that $\left |\bigcup_{j\in J}A_j \right|< r+k$.
Let $B_j\in\mathcal{B}$ ($j\in J$) be the complements of $A_j$.
Then $\left |\bigcap_{j\in J}B_j \right| > m-(r+k)$, so $ \left |\bigcap_{j\in J}B_j \right| \geq g+1$.
So there exists a $(g+1)$-subset of $[m]$ which is contained in $k$ blocks of a $(g+1){\rm\textnormal{-}}(m,g+2,k-1)$ packing design, which is a contradiction.
Therefore $\left |\bigcup_{j\in J}A_j\right |\geq r+k$ for every $k$-subset $J$ of~$[n]$.
\end{proof}
If the packing design used in Construction \ref{con:design} is, in fact, a design, then we are able to restate Theorem~\ref{thm:fmkr} in a simplified form.
\begin{corollary}
\label{cor:ifdesignexists}
Let $r\geq 0$, $k\geq 3$, $m\geq r+k$, $g=m-(r+k)$, and suppose there exists a $(g+1){\rm\textnormal{-}}(m,g+2,k-1)$ design with property $(P)$.
If \[\frac{k-1}{r+k-1}\binom{m}{r+k-2}\leq n \leq (k-1)\binom{m}{r+k-1},\]
then
\[ N(n,k,m;r) = (r+k-1)n - \left\lfloor\dfrac{(k-1)\binom{m}{r+k-1}-n}{m-r-k+1}\right\rfloor.\]
\end{corollary}
Let $r=0$ and $\mathcal{B}$ be the set of all $(g+2)$-subsets of $[m]$.
Then $([m],\mathcal{B})$ is a $(g+1)$-$(m,g+2,k-1)$ design with property $(P)$ (in fact, each block is distinct).
Therefore the hypotheses of Corollary~\ref{cor:ifdesignexists} are satisfied.
Thus Theorem 1 of Bujt\'as and Tuza~\cite{tuza3} is a special case of Corollary~\ref{cor:ifdesignexists}.
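The $r=0$ construction above can be verified mechanically: for illustrative values of $m$ and $k$, every $(g+1)$-subset of $[m]$ lies in exactly $k-1$ of the $(g+2)$-subsets:

```python
from itertools import combinations

m, k = 6, 3                 # illustrative parameters with r = 0
g = m - k                   # g = m - (r+k)
blocks = list(combinations(range(m), g + 2))

# Every (g+1)-subset of [m] extends to exactly m-(g+1) = k-1 blocks,
# so ([m], blocks) is a (g+1)-(m, g+2, k-1) design.
for T in combinations(range(m), g + 1):
    count = sum(1 for B in blocks if set(T) <= set(B))
    assert count == k - 1
```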
\section{Achieving the trivial minimum}\label{sec:minimum}
We close with an inverse problem involving $r{\rm\textnormal{-}CBC}$s.
By Corollary~\ref{cor:bounds}, an $r{\rm\textnormal{-}CBC}(n,k,m)$ must have weight at least $(r+1)n$.
Given parameters $k$, $m$, and $r$, let $n(k,m;r)$ be the maximum value of $n$ such that $N(n,k,m;r)=(r+1)n$.
If $N(n,k,m;r)>(r+1)n$ for every $n\geq k$, we say that $n(k,m;r)$ does not exist.
In this section, we construct $r{\rm\textnormal{-}CBC}$s whose weight is $(r+1)n$ and identify $n(k,m;r)$ in some special cases.
Observe that if $k=1$, then any matrix in which each column has cardinality $r+1$ is sufficient, so $n(1,m;r)=\infty$.
We now consider when $k\geq 2$.
Suppose that $r=0$.
Then a CBC$(n,k,m)$ has weight $n$ if and only if its columns are pairwise distinct singletons, which requires $m\geq n$.
So $n(k,m;0)$ exists (and equals $m$) whenever $m\geq k$.
For larger $r$, this family of $r{\rm\textnormal{-}CBC}$s can be quite complex.
In what follows, we give a correspondence between $1{\rm\textnormal{-}CBC}$s with weight $2n$ and graphs on $m$ vertices with appropriate girth.
\begin{theorem}
Let $2 \leq k < m\leq n$.
Then $A$ is a $1{\rm\textnormal{-}CBC}(n,k,m)$ with weight $2n$ if and only if $A$ is the incidence matrix for a simple graph with $m$ vertices and girth at least $k+1$.
\end{theorem}
\begin{proof}
Let $G$ be a simple graph with girth at least $k+1$ and let $A$ be its incidence matrix.
Let $e_j$ denote the edge corresponding to column $A_j$.
Observe that $|A_j|=2$ for each $j\in[n]$ and hence $N(A)=2n$.
Let $J$ be a $c$-subset of $[n]$ with $c\leq k$ and $G'$ be the graph induced by the edge set $\{e_j\mid j\in J\}$.
Since $G'$ has $c\leq k$ edges and $G$ has girth at least $k+1$, the graph $G'$ has no cycles, and therefore $G'$ is a forest.
Hence $G'$ spans $a+c$ vertices, where $a\geq 1$ is the number of connected components of $G'$.
Therefore $\left |\bigcup_{j\in J}A_j\right|=a+c\geq 1+c$.
So $A$ is a $1{\rm\textnormal{-}CBC}(n,k,m)$.
Suppose $A$ is a $1{\rm\textnormal{-}CBC}(n,k,m)$ for which $N(A)=2n$.
Then $|A_j|=2$ for each $j\in[n]$, and therefore $A$ can be interpreted as the incidence matrix of a graph $G$.
If $G$ is not simple, then $G$ has two parallel edges, say those corresponding to columns $A_p$ and $A_q$.
Then $|A_p\cup A_q| = 2 < 3$, which contradicts $A$ being a $1{\rm\textnormal{-}CBC}$.
Assume that $G$ does not have girth at least $k+1$.
Then there exists a cycle in $G$ with $c\leq k$ edges.
Let $J\subseteq[n]$ index the edges of the cycle.
So $\left |\bigcup_{j\in J}A_j\right | = c < 1+c$, a contradiction.
Therefore the graph $G$ has girth at least $k+1$.
\end{proof}
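The correspondence can be illustrated computationally. The sketch below (graphs chosen purely for illustration) checks the union condition directly for the incidence columns of $K_{2,3}$, which has girth $4$, and of a triangle, which has girth $3$:

```python
from itertools import combinations

def is_1cbc(cols, k):
    """Check that any c <= k columns (edge vertex-sets) cover >= 1+c vertices."""
    for c in range(1, k + 1):
        for J in combinations(cols, c):
            if len(set().union(*J)) < 1 + c:
                return False
    return True

# Incidence columns of the complete bipartite graph K_{2,3} (girth 4)
parts = ([0, 1], [2, 3, 4])
edges = [frozenset((u, v)) for u in parts[0] for v in parts[1]]
assert is_1cbc(edges, k=3)          # girth 4  =>  1-CBC for k = 3

# A triangle has girth 3, so the union condition fails for k = 3
tri = [frozenset(e) for e in ((0, 1), (1, 2), (0, 2))]
assert not is_1cbc(tri, k=3)
```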
\begin{corollary} For all $m \geq 1$, $n(2,m;1)=\binom m2$ and
$n(3,m;1)=\left\lfloor {m^2}/4\right\rfloor$.
\end{corollary}
\begin{proof}
A graph with $m$ vertices and girth at least 3 is a simple graph, so the complete graph maximizes the number of edges.
So $n(2,m;1)=\binom m2$.
A graph with $m$ vertices and girth at least 4 is a triangle-free graph.
By Mantel's theorem (the triangle-free case of Tur\'{a}n's theorem), a triangle-free graph on $m$ vertices with a maximum number of edges is a complete bipartite graph whose parts have sizes $\left\lfloor m/2\right\rfloor$ and $\left\lceil m/2\right\rceil$.
So $n(3,m;1)=\left\lfloor m/2\right\rfloor\cdot\left\lceil m/2\right\rceil=\left\lfloor{m^2}/4\right\rfloor$.
\end{proof}
The maximum number of edges in a graph on $m$ vertices with girth at least $k+1\geq 4$ is known only for certain pairs of $m$ and $k$; beyond these, no infinite families of exact values are known at this time~\cite{girth}.
\bibliographystyle{amsplain}
% arXiv:1511.04580, ``On Erasure Combinatorial Batch Codes''
% https://arxiv.org/abs/1103.4258
\title{$k$-Sum Decomposition of Strongly Unimodular Matrices}
\begin{abstract}
Networks are frequently studied algebraically through matrices. In this work, we show that networks may be studied at a more abstract level using results from the theory of matroids, by establishing connections to networks via decomposition results of matroids. First, we present the implications of the decomposition of regular matroids to networks and related classes of matrices, and secondly we show that strongly unimodular matrices are closed under $k$-sums for $k=1,2$, implying a decomposition into highly connected network-representing blocks, which are also shown to have a special structure.
\end{abstract}
\section{Introduction}
Totally unimodular (TU) matrices form an important class of matrices for integer and linear programming due to the integrality properties of
the associated polyhedron. A matrix $A$ is \emph{totally unimodular} if each square submatrix of $A$ has determinant $0,+1,$ or $-1$.
The class of TU matrices has been studied extensively and combinatorial characterizations for these matrices can be found
in~\cite{NemWols:1988,Schrijver:86,Schrijver:2004}. An important subclass of TU matrices is defined as follows.
A matrix $A$ is \emph{strongly unimodular} (SU) if: (i) $A$ is TU, and (ii) every matrix obtained from $A$ by setting a
$\pm{1}$ entry to $0$ is also TU. Strongly unimodular matrices have appeared several times in the
literature \cite{Cora:87,CraLoPo:92,LoPo:89} since they were first introduced in \cite{CraHaIb:86}. Another subclass of TU matrices
discussed in this paper is the class of network matrices. A \emph{network matrix} may be viewed as an edge-path matrix of
a directed graph with respect to a particular tree of the graph; results regarding network matrices can be found in~\cite{NemWols:1988,Schrijver:86, Schrijver:2004}. Seymour has shown in~\cite{Seymour:1980} that network matrices and their
transposes are the main building blocks for TU matrices.
In this paper we show that SU matrices are closed under $k$-sum operations for $k=1,2$, implying a decomposition into smaller SU matrices representing
$3$-connected regular matroids; we also provide a characterization of these matrices.
The rest of the paper is organized as follows.
In section~\ref{sec_ksm}, we show that SU matrices are closed under the $k$-sum operations $(k=1,2)$ and thereby, they can be
decomposed into smaller SU matrices via these operations. The special structure of these smaller matrices is discussed in
section~\ref{sec_3c}. Finally, we assume that the reader is familiar with the basic notions of graph theory and matroid theory.
Our references for graph theory are~\cite{BonMur:07,Diestel:05} and for matroid theory are~\cite{Oxley:06,Truemper:98}.
\section{$k$-sums of strongly unimodular matrices} \label{sec_ksm}
The following two results (Lemmata~\ref{lem_ew} and~\ref{lem_su22}) can be obtained easily from the definition of SU matrices and the
fact that TU matrices are closed under deletions of rows and columns. The proof of Lemma~\ref{lem_ew} is straightforward and is omitted.
\begin{lemma} \label{lem_ew}
Every submatrix of a strongly unimodular matrix is strongly unimodular.
\end{lemma}
\begin{lemma} \label{lem_su22}
A $\{0,\pm{1}\}$ TU matrix having at most two nonzeros in every column (row) is SU.
\end{lemma}
\begin{proof}
Let $A$ be a TU matrix with at most two nonzeros in every column. The case in which $A$ has at most two nonzeros in every row can be handled in much the same way. Let us change a nonzero of column $i$ of $A$ to zero and call $A'$ the matrix so-obtained. Every submatrix of $A'$ either is equal to the corresponding submatrix of $A$, or contains part of column $i$; in the latter case we can expand its determinant along column $i$ (which now has at most one nonzero, equal to $\pm{1}$) and observe that this determinant is, up to sign, the determinant of a submatrix of $A$. Thus, in all cases the determinant of any submatrix of $A'$ lies in $\{0,\pm{1}\}$, and therefore $A$ is SU.
\end{proof}
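Lemmas of this kind are easy to confirm by brute force on small instances. The sketch below implements naive TU/SU tests (our own helper functions, practical only for tiny matrices) and applies them to a network matrix of a directed path, which has at most two nonzeros per column:

```python
from itertools import combinations

def det(M):
    # integer determinant by Laplace expansion (fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def is_tu(A):
    # check every square submatrix for determinant in {0, +1, -1}
    m, n = len(A), len(A[0])
    for s in range(1, min(m, n) + 1):
        for R in combinations(range(m), s):
            for C in combinations(range(n), s):
                if abs(det([[A[i][j] for j in C] for i in R])) > 1:
                    return False
    return True

def is_su(A):
    # SU = TU, and still TU after zeroing any single nonzero entry
    if not is_tu(A):
        return False
    for i in range(len(A)):
        for j in range(len(A[0])):
            if A[i][j] != 0:
                B = [row[:] for row in A]
                B[i][j] = 0
                if not is_tu(B):
                    return False
    return True

# A network matrix of a directed path: at most two nonzeros per column.
P = [[1, 0], [-1, 1], [0, -1]]
assert is_su(P)     # as guaranteed by the lemma
```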
\noindent
As shown in the following result, strongly unimodular matrices are closed under some matrix operations.
\begin{lemma} \label{lem_opra}
Strongly unimodular matrices are closed under the following operations:
\begin{itemize}
\item [(i)] transposing,
\item[(ii)] adding a zero row or column,
\item [(iii)] adding a unit column or a unit row, and
\item [(iv)] repeating a column or a row.
\end{itemize}
\end{lemma}
\begin{proof}
Part (i) is trivial since the determinant of any submatrix remains unchanged under transposition. For (ii), the addition of a zero column (row) to a matrix $A$ results in a matrix $A'$ which is TU, since TU matrices are closed under the addition of a zero column (row). Furthermore, the replacement of any nonzero of $A'$ by a zero must occur within the submatrix $A$ of $A'$. But $A$ is SU, and therefore the matrix so-obtained is a TU matrix plus a zero column (row). The result now follows from the fact that TU matrices are closed under the addition of a zero column (row).
For (iii), let us add a unit column $a$ to an SU matrix $A$ and call $A'=[A\;a]$ the matrix so-obtained. The case in which a unit row is added can be handled similarly. If we change the nonzero of column $a$ to zero, then this is equivalent to adding a zero column to a TU matrix, and therefore the matrix so-obtained remains TU. If we change any other nonzero of $A'$ to zero, then this nonzero lies in the part $A$ of $A'$; we change such a nonzero to zero and call $A''=[B\;a]$ the new matrix. We shall show that any submatrix of $A''$ is TU. Obviously, any submatrix of $B$ is TU because $A$ is an SU matrix. In the remaining case, we can expand the determinant of a submatrix along column $a$ and observe that this determinant is a $\pm{1}$ multiple of the determinant of a submatrix of $B$.
For (iv), let $A'=[A\; a_1]$ be an SU matrix and let $a_1$ be a column of $A'$
which we repeat in order to construct the matrix $A_r=[A \;a_1 \;a_1]$. We note
here that the case of repeating a row can be handled in the same way. The only
case which has to be examined is the one in which a nonzero element of a
column $a_1$ becomes zero, since for all the other cases all the submatrices
of the matrix obtained are easily checked to be TU. Let $a_1'$ be the column
obtained by turning a nonzero of one copy of $a_1$ into zero; then the only
submatrices of $A''=[A \; a_1' \; a_1]$ which have to be examined for being TU are
those containing parts of both columns $a_1'$ and $a_1$, since all the other
submatrices are trivially TU. Expanding now the determinant of such a
submatrix of $A''$ along the column $a_1'$, and expanding the determinant of
the same submatrix of $[A \; a_1 \; a_1]$ along $a_1$, we see that these two
determinants differ by a determinant of a TU matrix. Thus, these
determinants differ by $0$ or $\pm{1}$. But the determinant of the submatrix
of $[A \; a_1 \; a_1]$ is equal to zero and therefore we have that the
determinant of the corresponding submatrix of $A''$ is either $0$ or $\pm{1}$.
\end{proof}
\noindent
The $k$-sum operations $(k=1,2,3)$ on matrices are of central importance for our work and are defined as follows:
\begin{definition}\label{def_k-sums}
If $A, B$ are matrices, $a,d$ are column vectors and $b,c$ are row vectors of
appropriate size in $\mathbb R$ then
\begin{description}
\item[1-sum:] $A\oplus_1 B:=\begin{bmatrix}A & 0\\0& B\end{bmatrix}$
\item[2-sum:] $\begin{bmatrix}A & a\end{bmatrix}\oplus_2 \begin{bmatrix}b\\B\end{bmatrix}:=\begin{bmatrix}A & ab\\0&
B\end{bmatrix}$
\item[3-sum:] $\begin{bmatrix}A & a & a\\c & 0 & 1\end{bmatrix}\oplus_3 \begin{bmatrix}1 & 0 & b\\d & d & B\end{bmatrix}:=
\begin{bmatrix}A & ab\\dc& B\end{bmatrix}$ or \\
\hspace*{.13in}$\begin{bmatrix}A & 0 \\b & 1 \\ c & 1 \end{bmatrix}\oplus^3 \begin{bmatrix}1 & 1 & 0\\
a& d & B\end{bmatrix}:=\begin{bmatrix}A & 0 \\ D & B\end{bmatrix}$ \\
where, in the $\oplus^3$, $b$ and $c$ are $\mathbb{R}$-independent row vectors and $a$ and $d$ are $\mathbb{R}$-independent column vectors
such that $[\frac{b}{c}]=[D_1|\bar{D}]$, $[a|d]=[\frac{\bar{D}}{D_2}]$ and $\bar{D}$ is a square non-singular matrix.
Then, $D=[a|d]\bar{D}^{-1}[\frac{b}{c}]$.
\end{description}
\end{definition}
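The $1$-sum and $2$-sum are direct block constructions, which a minimal sketch makes concrete (the function names are ours, not from the literature):

```python
import numpy as np

def one_sum(A, B):
    """A (+)_1 B = block diagonal [[A, 0], [0, B]]."""
    Z1 = np.zeros((A.shape[0], B.shape[1]), dtype=int)
    Z2 = np.zeros((B.shape[0], A.shape[1]), dtype=int)
    return np.block([[A, Z1], [Z2, B]])

def two_sum(Aa, bB):
    """[A a] (+)_2 [b; B] = [[A, a*b], [0, B]] (a: last column, b: first row)."""
    A, a = Aa[:, :-1], Aa[:, -1:]
    b, B = bB[:1, :], bB[1:, :]
    Z = np.zeros((B.shape[0], A.shape[1]), dtype=int)
    return np.block([[A, a @ b], [Z, B]])

Aa = np.array([[1, 0, 1], [0, 1, -1]])   # [A' | a]
bB = np.array([[1, -1], [0, 1]])         # [b ; B']
N = two_sum(Aa, bB)
assert N.tolist() == [[1, 0, 1, -1], [0, 1, -1, 1], [0, 0, 0, 1]]
```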
\noindent
We show that SU matrices are closed under the $1$-sum and $2$-sum operations.
\begin{lemma} \label{lem_1-s}
If $A$ and $B$ are SU matrices then the matrix
$N=A\oplus_{1}B=\begin{bmatrix}A & 0\\0& B\end{bmatrix}$ is an SU matrix.
\end{lemma}
\begin{proof}
Since $A$ and $B$ are TU and from the fact that TU matrices are closed under $1$-sums we have that $N$ is TU. It remains to be shown that if we change a nonzero of the submatrix $A$ (or $B$) of $N$ to zero then the matrix $N'=\begin{bmatrix}A' & 0\\0& B'\end{bmatrix}$ obtained by this change is TU. Since $A$ and $B$ are SU we have that $A'$ and $B'$ are TU and from the fact that TU matrices are closed under the $1$-sum operation we have that $N'$ is TU as well.
\end{proof}
\begin{lemma} \label{lem_2-s}
If $A=\begin{bmatrix}A' & a\end{bmatrix}$ and
$B=\begin{bmatrix}b\\B'\end{bmatrix}$ are SU matrices then the matrix
$N=A\oplus_2 B=\begin{bmatrix}A' & ab\\0&
B'\end{bmatrix}$ is an SU matrix.
\end{lemma}
\begin{proof}
Since TU matrices are closed under $2$-sums we have that the matrix $N$, which
is the $2$-sum of the TU matrices $A$ and $B$, is TU. It remains to be shown
that changing a nonzero of $N$ to zero the matrix $N'$ so-obtained is also TU.
We consider the following two cases separately: (i) we replace a nonzero of
the submatrix $A'$ or $B'$ of $N$ by zero, and (ii) we replace a nonzero
element of the $ab$ submatrix of $N$ by zero.
For case (i) we can assume without loss of generality that we change a nonzero element of $A'$ to zero and let us call $\bar{N}=\begin{bmatrix} \bar{A'} & ab\\0&
B'\end{bmatrix}$ the matrix so-obtained (the case in which a nonzero element of $B'$ is changed is similar). Therefore, matrix $\bar{N}$ is the $2$-sum of the matrix $\begin{bmatrix} \bar{A'} & a\end{bmatrix}$ and $\begin{bmatrix}b\\B'\end{bmatrix}$, where $\begin{bmatrix} \bar{A'} & a\end{bmatrix}$ is a TU matrix since it is obtained from the SU matrix $A$ by replacement of a nonzero by a zero, and $\begin{bmatrix}b\\B'\end{bmatrix}$ is TU since it is equal to matrix $B$. From the fact that TU matrices are closed under $2$-sums the result follows.
For case (ii), let $N'$ be the matrix obtained from changing a nonzero of the
$ab$ part of $N$ to zero. We shall show that $N'$ is TU. Initially, we can see that
$N'=
\left[
\begin{array}{rr|r}
A' & a_1 & ab_2 \\
\mathbf{0} & b_1 & B_1
\end{array}
\right]
$, where $a_1$ contains the nonzero having changed and thus differs from
column $a$ only to that element; furthermore, we assume that $b_1$ is the
first column of $B'$ and $B_1$ is the rest of it, i.e. $B'=[b_1 \; B_1]$, and
that $B$ is the matrix having as first row the vector $[1 \; b_2]$, where the
first element of this vector has to be $1$ otherwise we could not find a
nonzero to change in order to create $a_1$, i.e.
$B=\left[\begin{array}{rr}
1 & b_2 \\
b_1 & B_1
\end{array}
\right]$.
We can easily see that $N'$ is the $3$-sum of the following two matrices
\[
\hat{A}=
\left[
\begin{array}{rr|rr}
A' & a_1 & a & a \\
\mathbf{0} & 1 & 0 & 1
\end{array}
\right]
\]
\[
\hat{B}=
\left[
\begin{array}{rrr}
0 & 1 & b_2 \\
b_1 & b_1 & B_1
\end{array}
\right]
\]
Since TU matrices are closed under $3$-sums it suffices to show that each of
$\hat{A}$ and $\hat{B}$ is TU. We know that $[A' \; a \; a]$ is SU because of
Lemma~\ref{lem_opra}~(iv); moreover, from (iii) of the same Lemma we have that
$
\left[
\begin{array}{rrr}
A' & a & a \\
\mathbf{0} & 0 & 1
\end{array}
\right]
$
is SU. Finally, applying again (iv) of Lemma~\ref{lem_opra} we have that
$A_m=
\left[
\begin{array}{rr|rr}
A' & a & a & a \\
\mathbf{0} & 1 & 0 & 1
\end{array}
\right]
$
is SU. Thus, changing a specific nonzero from a column
$\left[
\begin{array}{r}
a \\
1
\end{array}
\right] $ of $A_m$ to zero we obtain
$\hat{A}$ which has to be TU.
We know that $B$ is SU, so by Lemma~\ref{lem_opra}, we have that the matrix $B_m=
\left[
\begin{array}{rrr}
1 & 1 & b_2 \\
b_1 & b_1 & B_1
\end{array}
\right]
$ is SU. Thus, replacing by zero a $1$ of a column
$\left[
\begin{array}{r}
1 \\
b_1
\end{array}
\right] $ of $B_m$, we obtain matrix $\hat{B}$, which therefore has to be TU. Since both
$\hat{A}$ and $\hat{B}$ are TU the result follows.
\end{proof}
In what follows we shall make use of the following regular matroid decomposition theorem by Seymour~\cite{Seymour:1980}.
\begin{theorem} \label{th_Seym}
Every regular matroid $M$ may be constructed by means of $1$-, $2$-, and
$3$-sums starting with matroids each isomorphic to a minor of $M$ and each
either graphic or cographic or isomorphic to $R_{10}$.
\end{theorem}
\noindent
The regular matroid $R_{10}$ is a ten-element matroid, which can be found in~\cite{Oxley:06, Truemper:98}; up to row and column permutations and scaling of rows and columns by $-1$, it has exactly two totally unimodular compact representation matrices, $B_1$ and $B_2$.
\begin{equation}\label{eq_B1}
B_1=
\kbordermatrix{\mbox{}& 1 & 2 & 3 & 4 & 5 \\
1 & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} \\
2 & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} \\
3 & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} & {\;\,\! 0} \\
4 & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1} & {\;\,\! 0} \\
5 & {\;\,\! 0} & {\;\,\! 0} & {\;\,\! 1} & {\!\!\! -1} & {\;\,\! 1}
}
\mspace{40mu}
B_2=\kbordermatrix{\mbox{}& 1 & 2 & 3 & 4 & 5 \\
1 & 1 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 0 & 0 \\
3 & 1 & 0 & 1 & 1 & 0 \\
4 & 1 & 0 & 0 & 1 & 1 \\
5 & 1 & 1 & 0 & 0 & 1 \\
}
\end{equation}
A consequence of Theorem~\ref{th_Seym}
is the construction Theorem~\ref{Seymour_matrix} for totally unimodular
matrices, which appears in \cite{Seymour:95,Truemper:98}.
\begin{theorem} \label{Seymour_matrix}
Any TU matrix is, up to row and column permutations and scaling by $\pm{1}$ factors, a network matrix, the transpose of a network matrix, the matrix $B_1$
or $B_2$ of (\ref{eq_B1}), or may be constructed recursively from these matrices using matrix $1$-, $2$- and $3$-sums.
\end{theorem}
\noindent
According to Theorem~\ref{Seymour_matrix}, the building blocks for totally
unimodular matrices are the network matrices and their transposes as well as
matrices $B_1$ and $B_2$ of (\ref{eq_B1}).
\begin{lemma}\label{lem_bb}
$B_1$ and $B_2$ are not SU.
\end{lemma}
\begin{proof}
If we change the $(4,3)$ entry of $B_1$ from $-1$ to $0$, then in the matrix so-obtained the
$3\times{3}$ submatrix defined by rows $3,4$ and $5$ and columns $2,3$ and $4$
has determinant of absolute value $2$. Therefore, $B_1$ is not SU. Similarly, if we change the $(4,1)$ entry of
$B_2$ from $+1$ to $0$, then in the matrix so-obtained the $3\times{3}$
submatrix defined by rows $3,4$ and $5$ and columns $1,4$ and $5$ has
determinant of absolute value $2$, and thus $B_2$ is not SU.
\end{proof}
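The two determinant computations can be verified mechanically:

```python
B1 = [[1, 0, 0, 1, -1],
      [-1, 1, 0, 0, 1],
      [1, -1, 1, 0, 0],
      [0, 1, -1, 1, 0],
      [0, 0, 1, -1, 1]]
B2 = [[1, 1, 1, 1, 1],
      [1, 1, 1, 0, 0],
      [1, 0, 1, 1, 0],
      [1, 0, 0, 1, 1],
      [1, 1, 0, 0, 1]]

def minor_det(M, rows, cols):
    # 3x3 determinant of the submatrix on the given (0-indexed) rows/columns
    a, b, c = [[M[i][j] for j in cols] for i in rows]
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
            - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

B1[3][2] = 0   # zero the (4,3) entry of B1
assert abs(minor_det(B1, [2, 3, 4], [1, 2, 3])) == 2

B2[3][0] = 0   # zero the (4,1) entry of B2
assert abs(minor_det(B2, [2, 3, 4], [0, 3, 4])) == 2
```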
\noindent
By Theorem~\ref{Seymour_matrix} and Lemma~\ref{lem_bb} we obtain the following result.
\begin{theorem}
Any SU matrix is, up to row and column permutations and scaling by $\pm{1}$ factors, a network
matrix, the transpose of a network matrix, or may be constructed recursively from these matrices using matrix $1$-, $2$- and $3$-sums.
\end{theorem}
The following theorem, known as the splitter theorem for regular matroids, is one of the most important
steps which led to the regular matroid decomposition theorem~\cite{Seymour:1980}.
\begin{theorem}\label{th_r100}
Every regular matroid can be obtained from copies of $R_{10}$ and from
$3$-connected minors without $R_{10}$ minors by a sequence of $1$-sums and $2$-sums.
\end{theorem}
\noindent
Combining Lemmata~\ref{lem_1-s},~\ref{lem_2-s} and~\ref{lem_bb} and Theorem~\ref{th_r100} we can now state the main result of this section.
\begin{theorem} \label{th_lak}
A matrix is strongly unimodular if and only if it is decomposed via $1$- and $2$-sums into strongly unimodular matrices
representing $3$-connected regular matroids without $R_{10}$ minors.
\end{theorem}
\noindent
In view of Theorem~\ref{th_lak} we can see that an SU matrix can be decomposed via $1$-sums and $2$-sums into a special class of SU matrices.
This class will be characterized in the following section.
\section{SU matrices of $3$-connected regular matroids} \label{sec_3c}
By Theorem~\ref{th_lak} we have that SU matrices are decomposed into smaller SU matrices which represent $3$-connected regular matroids without
$R_{10}$ minors. In this section we shall characterize the structure of these smaller matrices in Theorem~\ref{th_final}.
It is known that any $3$-connected binary matroid contains the wheel matroid $\mathcal{W}_3$ as a minor (Lemma~5.2.10 in~\cite{Truemper:98}).
In the following result we show that there exist two TU representation matrices for $\mathcal{W}_3$, one SU and one non-SU.
\begin{lemma} \label{lem_w3ne}
Up to row and column permutations and scaling by $-1$, the matroid $\mathcal{W}_3$ has two different totally unimodular compact representation matrices, namely
\begin{enumerate}
\item an SU representation
$
N_1=\left[
\begin{array}{rcc}
1 & 0 & 1 \\
-1 & 1 & 0 \\
0 & 1 & 1
\end{array}
\right]
$, and
\item a non-SU representation
$
N_2=\left[
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{array}
\right]
$
\end{enumerate}
\end{lemma}
\begin{proof}
Since the graphic matroids are uniquely representable over any field, given a TU compact representation of $\mathcal{W}_3$
we can obtain any other compact representation by row and column permutations, scaling of rows and columns by $-1$ and pivoting.
Since $\mathcal{W}_3$ is a graphic matroid, each of its TU compact representation matrices is a network matrix as well.
Pivoting in a network matrix results to a network matrix with respect to another tree of the same graph.
Specifically, up to graph isomorphism, graph $W_3$ has two different trees which are depicted in Figure~\ref{fig_netone}, where the bold edges
correspond to the tree edges. Thus, up to row and column permutations and scaling by $-1$, there are two different network matrices
representing $\mathcal{W}_3$; namely:
$
N_1=\left[
\begin{array}{rcc}
1 & 0 & 1 \\
-1 & 1 & 0 \\
0 & 1 & 1
\end{array}
\right]
$, and
$
N_2=\left[
\begin{array}{ccc}
1 & 1 & 0 \\
0 & 1 & 1 \\
1 & 1 & 1
\end{array}
\right]
$. It is now easy to see that if we replace any nonzero of $N_1$ by a $0$, then each of the matrices so-obtained is TU. On the other hand, if we replace by $0$ the nonzero of $N_2$ in the third row and second column, then the matrix so-obtained is not TU.
\end{proof}
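The non-TU claim for $N_2$ can be confirmed with a short determinant check:

```python
N2 = [[1, 1, 0], [0, 1, 1], [1, 1, 1]]
N2[2][1] = 0    # zero the nonzero in row 3, column 2

# 3x3 determinant of the resulting matrix
a, b, c = N2
d = (a[0]*(b[1]*c[2] - b[2]*c[1]) - a[1]*(b[0]*c[2] - b[2]*c[0])
     + a[2]*(b[0]*c[1] - b[1]*c[0]))
assert abs(d) == 2    # not in {0, +1, -1}, so the matrix is not TU
```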
\begin{figure}[h]
\begin{center}
\centering
\psfrag{(1)}{\footnotesize $(1)$}
\psfrag{(2)}{\footnotesize $(2)$}
\includegraphics*[scale=0.5]{w3.eps}
\label{fig_netone}
\end{center}
\caption{The two possible network representations of $W_3$, where (1) gives rise to an SU network matrix while (2) gives rise
to a non-SU network matrix.}
\label{fig:w3}\end{figure}
We shall now prove the following important theorem which shows that SU representation matrices of $3$-connected regular matroids
can not have certain $2\times{2}$ matrices as submatrices.
\begin{theorem} \label{th_tr23}
If $N$ is an $m\times{n}$ representation matrix $(m,n\geq{3})$ of a $3$-connected regular matroid
containing, up to row and column permutations and scalings by $-1$, the submatrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$,
then $N$ is not SU.
\end{theorem}
\begin{proof}
Since $N$ is the representation matrix of a connected matroid we have that
it has an $M(W_2)$ minor (see Lemma~5.2.10 in \cite{Truemper:98}). Furthermore,
the matrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$, under any row and column permutations and scalings by $-1$
factors, displays $M(W_2)$. Enlarge this $2\times{2}$ submatrix
to a maximal submatrix containing only $1$s. Let us call $D$ that submatrix and index its
rows and columns by $R$ and $S$, respectively. Furthermore, in
the partitioned $N$ of~(\ref{eq_1}) each row of the submatrix $U$ and each column of
the submatrix $V$ is assumed to be nonzero. From our assumption that $D$ is
maximal we have that each row and each column of $U$ and $V$, respectively, must have at least one
zero element.
\begin{equation} \label{eq_1}
N=\begin{tabular}{c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{$S$} &\multicolumn{1}{c}{$Q$} &
\multicolumn{1}{c}{} \\ \cline{2-4}
$R$ & $D$ & $V$ & $0$ \\ \cline{2-4}
$P$ & $U$ & $0/\pm{1}$ & $0/\pm{1}$ \\ \cline{2-4}
& $0$ & $0/\pm{1}$ & $0/\pm{1}$ \\ \cline{2-4}
\end{tabular}
\end{equation}
Let $BG(N)$ be the bipartite graph of $N$ and let $F$ be its
subgraph obtained from the deletion of the edges corresponding to the $1$s of
$D$. By the proof of Lemma~5.2.10 and since $N$ is the representation matrix of
a $3$-connected regular matroid, we have that there must exist a path in $F$
connecting a vertex of $R$ with a vertex of $S$ which, due to the
bipartiteness of $F$, has to be of odd
length. If the length of that path is $3$, then
the matrix $N_2$ of Lemma~\ref{lem_w3ne} is a submatrix of $N$,
which implies that $N$ is not SU.
If the shortest path connecting a vertex of $R$ with a vertex of $S$ has
length greater than $3$, then we
will show that $N$ is again not SU. Say
that the shortest path lies between the vertices $r_2$ and $s_2$ of $R$ and
$S$, respectively (see Figure~\ref{fig:rp}). Then $N$ will have the following
submatrix $M$:
\begin{equation*}
M=
\kbordermatrix{\mbox{}& q_1 & q_2 & & \ldots & & q_n & s_2 & s_1\\
r_2 & \pm{1} & 0 & 0 & \ldots & 0 & 0 & \pm{1} & \pm{1}\\
p_n & \pm{1} & \pm{1} & 0 & \ldots & 0 & 0 & 0& 0 \\
p_{n-1} & 0 & \pm{1} & \pm{1} & \ldots &0 &0 &0 &0\\
\vdots & \cdots & & \cdots & & \cdots & &\cdots & \\
p_1 & 0 & 0 & 0 & \ldots & 0 & \pm{1} & \pm{1} & 0\\
r_1 & 0 & 0 & 0 &\ldots & 0 & 0 & \pm{1} & \pm{1}
}
\end{equation*}
where $r_1,r_2\in{R}$, $s_1,s_2\in{S}$, $p_1,\ldots,p_n\in{P}$ and $q_1,\ldots,q_n\in{Q}$.
Moreover, $M$ has no zeros on the main diagonal or on the diagonal below the
main diagonal, because of the path existing between $r_2$ and $s_2$ (see
Figure~\ref{fig:rp}). The submatrix of $M$ having rows indexed by $r_1$ and $r_2$ and columns indexed by
$s_1$ and $s_2$ is full of ones because it is a submatrix of $D$. Furthermore,
we have zeros in the position indexed by $r_1$ and $q_1$ and in the position
indexed by $p_1$ and $s_1$, because we can assume that there exists at least one
vertex of $R$ not adjacent to $q_1$, which we call $r_1$, and similarly we
can assume that there exists a vertex of $S$ not adjacent to $p_1$, which
we call $s_1$. All the other zeros in $M$ are due to the fact that the path
between $r_2$ and $s_2$ is the shortest between a vertex of $R$ and a vertex of
$S$ in the graph $F$.
We now show that the matrix $M$ is not SU. Expanding
the determinant of $M$ along the first row expresses it
as the sum of the determinants of three TU matrices that are triangular with no
zero on the diagonal. Therefore, there exists a nonzero entry in the
first row of $M$ such that if we replace it by a zero and expand the determinant
of the matrix so obtained along the first row, then the determinant of this
matrix equals $2$ or $-2$. Therefore, $N$ has a non-SU submatrix $M$,
and by Lemma~\ref{lem_ew}, $N$ is not SU.
\end{proof}
\begin{figure}[h]
\begin{center}
\centering
\psfrag{r1}{\footnotesize $r_1$}
\psfrag{r2}{\footnotesize $r_2$}
\psfrag{rk}{\footnotesize $r_k$}
\psfrag{s1}{\footnotesize $s_1$}
\psfrag{s2}{\footnotesize $s_2$}
\psfrag{sl}{\footnotesize $s_l$}
\psfrag{p1}{\footnotesize $p_1$}
\psfrag{p2}{\footnotesize $p_2$}
\psfrag{pn}{\footnotesize $p_n$}
\psfrag{q1}{\footnotesize $q_1$}
\psfrag{q2}{\footnotesize $q_2$}
\psfrag{qn}{\footnotesize $q_n$}
\psfrag{pr}{\footnotesize $p_r$}
\psfrag{qt}{\footnotesize $q_t$}
\psfrag{...}{\footnotesize $\vdots$}
\includegraphics*[scale=0.4]{rp.eps}
\end{center}
\caption{The shortest path from a vertex of $R$ to a vertex of $S$ in the
graph $BG(N)$; the non-path edges are not depicted.}
\label{fig:rp}
\end{figure}
Crama et al. in~\cite{CraLoPo:92} proved that if $A$ is an SU matrix then we can partition its rows as stated in the following theorem.
\begin{theorem} \label{th_cr11}
If $A$ is an SU matrix, then there exists a partition $(S_1,\ldots,S_k)$ of the rows of $A$ with the following properties:
\begin{itemize}
\item[(i)] every column of $A$ has $0, 1$ or $2$ nonzero entries in each $S_i$, for $i=1,\ldots,k$;
\item[(ii)] if a column has exactly one nonzero entry in some $S_i$, then all its entries in $S_{i+1},\ldots,S_k$ are zeros.
\end{itemize}
\end{theorem}
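Conditions (i) and (ii) are easy to verify mechanically for a given candidate partition. The following hypothetical checker (function name and examples are ours, not from \cite{CraLoPo:92}) scans each column against a proposed ordered row partition:

```python
def satisfies_crama_partition(A, parts):
    """Check conditions (i) and (ii) for a given ordered row
    partition `parts` (a list of lists of row indices of A)."""
    for col in range(len(A[0])):
        seen_single = False               # part with exactly one nonzero seen?
        for part in parts:
            nz = sum(1 for r in part if A[r][col] != 0)
            if nz > 2:
                return False              # violates (i): more than 2 nonzeros
            if seen_single and nz > 0:
                return False              # violates (ii): nonzeros after a single
            if nz == 1:
                seen_single = True
    return True

# A lower bidiagonal matrix with the one-part partition satisfies both conditions:
A = [[1, 0, 0],
     [1, 1, 0],
     [0, 1, 1]]
assert satisfies_crama_partition(A, [[0, 1, 2]])
```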
Since, by Lemma~\ref{lem_opra}(i), SU matrices are closed under taking the transpose, we can restate Theorem~\ref{th_cr11}
for the columns of an SU matrix. Consider an SU matrix $A'$ and let
$\mathcal{S}=(S_1, S_2,\ldots,S_k)$ be the partition of its rows as
determined by Theorem~\ref{th_cr11} and
$\mathcal{T}=(T_1, T_2,\ldots,T_l)$ be the partition of the rows of the transpose of $A'$
as determined by Theorem~\ref{th_cr11}. Then by permuting rows and
columns of $A'$ we can obtain the following SU matrix $A$:
\begin{equation} \label{eq_3}
A=\begin{tabular}{c|c|c|c|c|}
\multicolumn{1}{c}{} & \multicolumn{1}{c}{$T_1$} &\multicolumn{1}{c}{$T_2$} &
\multicolumn{1}{c}{$\cdots$} &\multicolumn{1}{c}{$T_l$} \\ \cline{2-5}
$S_1$ & $A_{1,1}$ & $A_{1,2}$ & $\cdots$ & $A_{1,l}$ \\ \cline{2-5}
$S_2$ & $A_{2,1}$ & $A_{2,2}$ & $\cdots$ & $A_{2,l}$ \\ \cline{2-5}
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ \\ \cline{2-5}
$S_k$ & $A_{k,1}$ & $A_{k,2}$ & $\cdots$ & $A_{k,l}$ \\ \cline{2-5}
\end{tabular}
\end{equation}
where each $A_{i,j}$ is the submatrix
of $A'$ defined by the rows in $S_i$ and the columns in $T_j$.
\begin{theorem}\label{th_final}
Let $A$ be an SU representation matrix of a $3$-connected regular matroid in
the form of (\ref{eq_3}). Then the following hold:
\begin{itemize}
\item[(i)] $A_{1,1}$ has $0$ or $2$ nonzeros in each column and row;
\item[(ii)] each column of $A_{1,j}$ has $0$ or $2$ nonzeros and each row of
$A_{i,1}$ has $0$ or $2$ nonzero elements;
\item[(iii)] if an $A_{i,j}$ has $2$ nonzeros in each column and each
row then, up to row and column permutations,
\[
A_{i,j}=
\left[
\begin{array}{cccccc}
\pm{1} & & & & & \pm{1} \\
\pm{1} & \pm{1} & & & & \\
& \pm{1} & \vdots & & & \\
& & & \vdots & & \\
& & & & \pm{1} & \\
& & & & \pm{1} & \pm{1}
\end{array}
\right]
\]
\end{itemize}
\end{theorem}
\begin{proof}
For (i), it is enough to observe that if there was a column (row) of $A_{i,j}$
with exactly one nonzero, then by Theorem~\ref{th_cr11} this column (row) would be a unit column (row). This
would mean that the matroid represented by $A$ has a $2$-separation. This is in
contradiction with our hypothesis that this matroid is $3$-connected.
For (ii), if a column (row) of some $A_{1,j}$ ($A_{i,1}$) had exactly
one nonzero then by Theorem~\ref{th_cr11} it would be a unitary column
(row). Again this is in contradiction with our hypothesis that the matroid
represented by $A$ is $3$-connected.
For (iii), from Theorem~\ref{th_tr23} we have that $A_{i,j}$ cannot have the
matrix
$
\left[
\begin{array}{rr}
1 & 1 \\
1 & 1
\end{array}
\right]
$ as a submatrix. It is now straightforward to see that $A_{i,j}$ has the form
described in (iii).
\end{proof}
\bibliographystyle{plain}
% arXiv:1708.08024 -- Analyticity of Bounded Solutions of Analytic State-Dependent Delay Differential Equations
\section{Introduction}\label{SOPS-4-1}
The analyticity of bounded solutions of delay differential equations with constant delay, such as the well-known Wright's equation, was established in the work of Nussbaum \cite{Nussbaum-analyticity}. It is natural to conjecture that this analyticity result holds for many differential equations with state-dependent delay such as
\begin{align}\label{eqn-2-oldd}\left\{
\begin{aligned}
\dot{x}(t)&= f(x(t),\,x(t- \tau)),\\
\tau & =r(x(t)),
\end{aligned}
\right.
\end{align}
with analytic $f$ and $r$. In this paper, we settle this conjecture.
We should remark that the work of Mallet-Paret and Nussbaum \cite{Nussbaum-2} also presented some examples where bounded solutions are no longer analytic, while Krisztin \cite{Tibor} showed that globally defined bounded solutions of threshold type delay equations are analytic.
An important theoretical problem, then, is to determine the most general form of state-dependent delay differential equations for which the conjecture remains true. We also notice that establishing the analyticity of bounded solutions such as periodic solutions is essential for describing the global dynamics of some state-dependent delay differential equations. For example, in \cite{HWZ} the nonexistence of a nonconstant $p$-periodic real-valued solution which is constant in a small interval in $\mathbb{R}$ was assumed in order to obtain the global continuation of periodic solutions of the following system
\begin{align}\label{eqn-2-old}\left\{
\begin{aligned}
\dot{x}(t)=& f(x(t),\,x(t- \tau(t))),\\
\dot{\tau}(t)=& g(x(t), \tau (t)),
\end{aligned}
\right.
\end{align}
with analytic $f$ and $g$. Specifically, one needs to exclude the case where there is a nonconstant $p$-periodic solution for which
\begin{align}
\tau(t)=\tau_0,\,t\in I+kp, k\in\mathbb{Z}
\end{align}
where $\tau_0>0$ is a constant and $I$ is an interval in $\mathbb{R}$ with length less than $p$. On the one hand, if there is such a periodic solution and if this solution is analytic on $\mathbb{R}$, then the delay $\tau$ must be constant on the whole real line $\mathbb{R}$. On the other hand, under certain technical conditions, the existence of such a periodic solution with constant delay can be ruled out by considering a cyclic system of ordinary differential equations (see \cite{HWZ} for more details), and hence these technical conditions ensure the nonexistence of a nonconstant $p$-periodic solution for which $\tau$ remains constant in a small interval in $\mathbb{R}$.
In this paper, we first note that bounded solutions of system (\ref{eqn-2-oldd}) and system (\ref{eqn-2-old}) and many others including those with ``threshold delay" must satisfy the following differential equations with state-dependent delays
\begin{align}\label{eqn-2}\left\{
\begin{aligned}
\dot{x}(t)=& f(x(t),\,x(t- \tau(t))),\\
\dot{\tau}(t)=& g(x(t),\,x(\eta(t)),\,\cdots,\,x(\eta^{M-1}(t)),\,\,\tau(t)),
\end{aligned}
\right.
\end{align} where
$\eta^0(t)=t$, $\eta(t)=t-\tau(t)$, $\eta^j(t)=\eta(\eta^{j-1}(t))$ for $j=1,\,2,\,\cdots,\,M$ with $M\in\mathbb{N}$, and we assume
\begin{description}
\item[(A1)]The maps $f$:
$U\times U\ni
(\theta_1,\theta_2) \rightarrow
f(\theta_1,\theta_2)\in\mathbb{C}^N$ and $g$:
$U^M\times V \ni
(\gamma_1,\,\gamma_2)\rightarrow
g(\gamma_1,\,\gamma_2)\in\mathbb{C}$
are analytic with respect to $(\theta_1,\theta_2)$ and $(\gamma_1,\,\gamma_2)$, respectively, where $U\subset \mathbb{C}^N,\, V\subset\mathbb{C}$ are bounded open sets, $U^M=\underbrace{U\times U\times\cdots\times U}_{M}$.
\item[(A2)] There exist $l\in (0,\,1)$ and $c>1$ such that
$ |1-g(\gamma_1,\,\gamma_2)-\frac{c+l}{2}|<\frac{c-l}{2}$ for all
$(\gamma_1,\,\gamma_2)\in \overline{U}^M \times \overline{V}$, where $\overline{U}^M \times \overline{V}$ is the closure of $U^M\times V$.
\end{description}
(A1) is a natural assumption on the analyticity of $f$ and $g$ on their domains. (A2) assumes that $g$ satisfies $l<|1-g|<c$, which ensures that the mapping $\mathbb{R}\ni t\rightarrow t-\tau(t)\in\mathbb{R}$ is increasing with a bounded rate.
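To see why (A2) yields $l<|1-g|<c$: the reverse triangle inequality gives $|1-g|\geq \frac{c+l}{2}-|1-g-\frac{c+l}{2}|>\frac{c+l}{2}-\frac{c-l}{2}=l$, and similarly $|1-g|\leq \frac{c+l}{2}+|1-g-\frac{c+l}{2}|<c$. A minimal numerical sanity check (the values $c=2$, $l=0.5$ and the helper name are illustrative assumptions, not from the paper):

```python
import math, random

def in_A2_disk(z, c, l):
    """Membership of z = 1 - g(...) in the open disk of assumption (A2)."""
    return abs(z - (c + l) / 2) < (c - l) / 2

c, l = 2.0, 0.5                   # assumed test values with 0 < l < 1 < c
random.seed(1)
for _ in range(10000):
    # sample z uniformly from the open disk of (A2)
    r = ((c - l) / 2) * random.random() ** 0.5
    theta = 2 * math.pi * random.random()
    z = (c + l) / 2 + r * complex(math.cos(theta), math.sin(theta))
    assert in_A2_disk(z, c, l) and l < abs(z) < c
```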
Let $(x,\,\tau)\in C(\mathbb{R};\mathbb{R}^{N+1})$ be a bounded solution of system~(\ref{eqn-2}) and define the sequence $\left((y_1,\,z_1),\,(y_2,\,z_2),\,\cdots\right)$ by
\begin{align*}(y_j(t),\, z_j(t))=\left(\frac{1}{c^{j}}x(\eta^{j-1}(t)),\,\frac{1}{c^{j}}\tau(\eta^{j-1}(t))\right)
\mbox{ for $j\geq 1,\,j\in\mathbb{N},\,t\in\mathbb{R}$.}
\end{align*} The reason that we carry the factor $\frac{1}{c^{j}}$ will become clear by the end of this section.
For $j=1$ we have for every $t\in\mathbb{R}$,
\begin{align}
\frac{d}{dt}y_1(t)& = \frac{1}{c}\dot{x}(t) =\frac{1}{c}{f(cy_1(t),\, c^2y_{2}(t) )},\label{analyticity-ODE-1-J} \\
\frac{d}{dt}z_1(t)& =\frac{1}{c}\dot{\tau}(t) =\frac{1}{c}g(cy_1(t),\,c^2y_2(t),\,\cdots,\,c^{M}y_{M}(t),\, cz_{1}(t) ).\label{analyticity-ODE-2-J}
\end{align}
For $j\geq 2,\,j\in\mathbb{N}$, we have for every $t\in\mathbb{R}$,
\begin{align}
\frac{d}{dt}y_j(t)& =\frac{1}{c^j}\dot{x}(\eta^{j-1}(t))\prod_{i=0}^{j-2}\dot{\eta}(\eta^i(t))\notag\\
& =\dot{x}(\eta^{j-1}(t))\frac{1}{c^j} \prod_{i=0}^{j-2}\left(1-g(x(\eta^i(t)), \,x(\eta^{i+1}(t)),\,\cdots,\,x(\eta^{i+M-1}(t)),\,\tau(\eta^i(t)))\right)\notag\\
& =\frac{f(c^{j}y_j(t),\, {c^{j+1}}y_{j+1}(t))}{1-g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t)) }\notag\\
& \quad \times\frac{1}{c^j} \prod_{i=0}^{j-1}\left(1-g(c^{i+1}y_{i+1}(t), \,c^{i+2}y_{i+2}(t),\,\cdots,\,c^{i+M}y_{i+M}(t),\,c^{i+1}z_{i+1}(t))\right),\label{analyticity-ODE-1}
\intertext{and}
\frac{d}{dt}z_j(t)& =\frac{1}{c^j}\dot{\tau}(\eta^{j-1}(t))\prod_{i=0}^{j-2}\dot{\eta}(\eta^i(t))\notag\\
& =\dot{\tau}(\eta^{j-1}(t))\frac{1}{c^j} \prod_{i=0}^{j-2}\left(1-g(x(\eta^i(t)), \,x(\eta^{i+1}(t)),\,\cdots,\,x(\eta^{i+M-1}(t)),\,\tau(\eta^i(t)))\right)\notag\\
& =\frac{g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t))}{1-g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t))}\notag\\
& \quad \times\frac{1}{c^j} \prod_{i=0}^{j-1}\left(1-g(c^{i+1}y_{i+1}(t), \,c^{i+2}y_{i+2}(t),\,\cdots,\,c^{i+M}y_{i+M}(t),\,c^{i+1}z_{i+1}(t))\right).\label{analyticity-ODE-2}
\end{align}
Then the sequence $((y_1,\,z_1),\,(y_2,\,z_2),\,\cdots)$, $t\in\mathbb{R}$, satisfies the system of ordinary differential equations formed by (\ref{analyticity-ODE-1-J}), (\ref{analyticity-ODE-2-J}), (\ref{analyticity-ODE-1}) and (\ref{analyticity-ODE-2}). Namely, for every $t\in\mathbb{R}$ and for $j\geq 1$, we have
\begin{align} \label{New-Eqn-1}
\left\{
\begin{aligned}
\frac{d}{dt}y_j(t) & =\frac{f(c^{j}y_j(t),\, {c^{j+1}}y_{j+1}(t))}{1-g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t)) } \\
& \quad \times\frac{1}{c^j} \prod_{i=0}^{j-1}(1-g(c^{i+1}y_{i+1}(t), \,c^{i+2}y_{i+2}(t),\,\cdots,\,c^{i+M}y_{i+M}(t),\,c^{i+1}z_{i+1}(t))),\\
\frac{d}{dt}z_j(t) & =\frac{g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t))}{1-g(c^{j}y_j(t), \,c^{j+1}y_{j+1}(t),\,\cdots,\,c^{j+M-1}y_{j+M-1}(t),\,c^{j}z_j(t))} \\
& \quad \times\frac{1}{c^j} \prod_{i=0}^{j-1}(1-g(c^{i+1}y_{i+1}(t), \,c^{i+2}y_{i+2}(t),\,\cdots,\,c^{i+M}y_{i+M}(t),\,c^{i+1}z_{i+1}(t))).
\end{aligned}\right.
\end{align}
It remains to choose an appropriate space in which $((y_1(t),\,z_1(t)),\,(y_2(t),\,z_2(t)),\,\cdots)$, $t\in\mathbb{R}$, lives. In view of the arguments of $f$ and $g$ on the right hand side of (\ref{New-Eqn-1}), it turns out that we can set $w(t)=((y_1(t),\,z_1(t)),\,(y_2(t),\,z_2(t)),\,\cdots)$ to be in the sequence space $l_c^{\infty} (\mathbb{R}^{N+1})$ defined by
\begin{align}\label{l-c-def}
l_c^{\infty} (\mathbb{R}^{N+1})=\{v=(v_1,\,v_2,\,\cdots,v_j,\cdots)\in l^{\infty}(\mathbb{R}^{N+1}): \sup_{j\in\mathbb{N}} c^j|v_j|<+\infty\},
\end{align} in which we can find a subset such that the terms of $f$ and $g$ in system~(\ref{New-Eqn-1}) are well defined. Besides, the product terms
in system~(\ref{New-Eqn-1}) need to be controlled so that the right hand side of system~(\ref{New-Eqn-1}) remains bounded as $j\rightarrow\infty$. We address this issue in Lemma~\ref{Lemma-l-infty}.
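The role of the weight $\frac{1}{c^j}$ can be illustrated numerically: by (A2) each factor of the form $1-g$ has modulus less than $c$, so the product of $j$ such factors is below $c^j$ and the weighted product stays bounded uniformly in $j$. A minimal sketch (test values and helper name are illustrative assumptions):

```python
import random

def weighted_product(factors, c):
    """(1/c^j) * prod(factors), the weighted product from the transformed system."""
    prod = 1.0
    for f in factors:
        prod *= f
    return prod / c ** len(factors)

c, l = 2.0, 0.5                   # assumed test values
random.seed(0)
for _ in range(1000):
    j = random.randint(1, 50)
    # each factor plays the role of 1 - g, whose modulus lies in (l, c) by (A2)
    factors = [random.uniform(l, c) for _ in range(j)]
    assert weighted_product(factors, c) <= 1.0
```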
With the above preparations we can represent system~(\ref{New-Eqn-1}) by the following abstract ordinary differential equation:
\begin{align}\label{Abs-ODE-S-infty-new}
\frac{d}{dt}w(t)= H(Tw(t)),
\end{align}
where
the mapping $T: l_c^{\infty} (\mathbb{R}^{N+1})\rightarrow l^{\infty}(\mathbb{R}^{N+1})$
is defined by
\[
T(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)=(cv_1,\,c^2v_2,\,\cdots,c^{j}v_j,\cdots),
\] and $H: l^{\infty} (\mathbb{R}^{N+1})\rightarrow l^{\infty}(\mathbb{R}^{N+1})$ is defined by the right hand side of system~(\ref{New-Eqn-1}).
To obtain the analyticity of bounded solutions $(x(t),\,\tau(t))$, $t\in\mathbb{R}$, of system~(\ref{eqn-2}), we follow the idea of \cite{Nussbaum-analyticity} to show that the solution $w(t)$ of system~(\ref{Abs-ODE-S-infty-new}) has a complex extension, and hence $(x(t),\,\tau(t))$, $t\in\mathbb{C}$, satisfies system~(\ref{eqn-2}) on the complex domain. We remark that this paper faces significant new challenges not present in \cite{Nussbaum-analyticity}. First, the operator $T$ is not a self-mapping on $l_c^{\infty} (\mathbb{C}^{N+1})$ and the range of $H$ is in $l^{\infty} (\mathbb{R}^{N+1})$. This means that the right hand side of system~(\ref{Abs-ODE-S-infty-new}) does not define a vector field on $l_c^{\infty} (\mathbb{C}^{N+1})$, while we are looking for solutions in $l_c^{\infty} (\mathbb{C}^{N+1})$. Secondly, when we transform system~(\ref{Abs-ODE-S-infty-new}) into an integral form and consider the associated fixed point problem on $l^{\infty} (\mathbb{C}^{N+1})$ using the uniform contraction principle in Banach spaces, we cannot obtain a contractive mapping on $l^{\infty} (\mathbb{C}^{N+1})$ unless we introduce a small perturbation. The problem is then reduced to showing that the solution of the initial value problem associated with system~(\ref{Abs-ODE-S-infty-new}) is the limit of that of the perturbed system.
We organize the remaining part of the paper as follows. In section~\ref{preliminary}, we develop results on the analyticity of $H$ on the right hand side of system~(\ref{Abs-ODE-S-infty-new}) and some basic functional analysis necessary for proving the existence of complex extensions of solutions to system~(\ref{Abs-ODE-S-infty-new}), using the uniform contraction principle in Banach spaces.
We present the main results in section~\ref{Main-results}
and illustrate these general results with an example in the last section.
\section{Notations and Preliminary Results}\label{preliminary}
Let $E$ be a complex Banach space and $D$ an open subset of the complex plane $\mathbb{C}$. A continuous mapping $u: D\ni t\rightarrow u(t)\in E $ is called analytic if for every $t_0\in D$,
$
\lim_{t\rightarrow t_0} \frac{u(t)-u(t_0)}{t-t_0}=u'(t_0)
$ exists. If $W$ is an open subset of $E$ and $\tilde{E}$ is a complex Banach space, a continuous mapping $G: W \ni u\rightarrow G(u)\in \tilde{E}$ is called analytic if for all $u_0\in W$ and all $h\in E$, the mapping $t\rightarrow G(u_0+th)$ is analytic in a neighbourhood of $0\in \mathbb{C}$.
Let $\mathbb{K}$ stand for the space of real numbers ($\mathbb{R}$) or complex numbers ($\mathbb{C}$). In the following, we develop some basic properties of the map $T$ and the spaces $l_c^{\infty} (\mathbb{K}^{N+1})$ and $l^{\infty} (\mathbb{K}^{N+1})$. We denote by $(v_j)_{j=1}^\infty$ the element $(v_1,\,v_2,\,\cdots,v_j,\cdots) $ in the sequence spaces.
\begin{lemma}\label{Banach-S-space}
Let $c>1$ be a constant and $l_c^{\infty} (\mathbb{K}^{N+1})$ be defined by
\[l_c^{\infty} (\mathbb{K}^{N+1})=\{v=(v_j)_{j=1}^\infty\in l^{\infty}(\mathbb{K}^{N+1}): \sup_{j\in\mathbb{N}} c^j|v_j|<+\infty\}. \] Then $l_c^{\infty} (\mathbb{K}^{N+1})$ is a Banach space under the norm \mbox{$\|\cdot\|_{l_c^{\infty} (\mathbb{K}^{N+1})}$} defined by
\[
\|v\|_{l_c^{\infty} (\mathbb{K}^{N+1})}=\sup_{j\in\mathbb{N}} c^j|v_j|.
\] \end{lemma}
\begin{lemma}\label{l-m-space}
Let $m\in\mathbb{N}, m\geq 2$ be a constant and $l_m^{\infty} (\mathbb{K}^{N+1})$ be defined by
\[l_m^{\infty} (\mathbb{K}^{N+1})=\{v=(v_j)_{j=1}^\infty\in l^{\infty}(\mathbb{K}^{N+1}): \sup_{j\in\mathbb{N}} j^m|v_j|<+\infty\}. \] Then $l_m^{\infty} (\mathbb{K}^{N+1})$ is a Banach space under the norm \mbox{$\|\cdot\|_{l_m^{\infty} (\mathbb{K}^{N+1})}$} defined by
\[
\|v\|_{l_m^{\infty} (\mathbb{K}^{N+1})}=\sup_{j\in\mathbb{N}} j^m|v_j|.
\]
Moreover, the embedding $I_m: l_m^{\infty} (\mathbb{K}^{N+1})\rightarrow l^{\infty} (\mathbb{K}^{N+1})$ is compact.
\end{lemma}
\begin{proof}
It is clear that $l_m^{\infty} (\mathbb{K}^{N+1})$ is a subspace of $l^{\infty}(\mathbb{K}^{N+1})$ and $\|v\|_{l_m^{\infty} (\mathbb{K}^{N+1})}=\sup_{j\in\mathbb{N}} j^m|v_j|$ defines a norm on $l_m^{\infty} (\mathbb{K}^{N+1})$. Let $\{v^n\}_{n=1}^{\infty}$ be a Cauchy sequence in $l_m^{\infty} (\mathbb{K}^{N+1})$. For every $n\in\mathbb{N}$, let $b^n=(v_1^{n},\,2^mv_2^{n},\,\cdots,j^mv_j^{n}, \cdots)$. Then $\{b^n\}_{n=1}^{\infty}$ is a Cauchy sequence in $l^{\infty}(\mathbb{K}^{N+1})$. Since $l^{\infty}(\mathbb{K}^{N+1})$ is a Banach space, there exists $b^*\in l^{\infty}(\mathbb{K}^{N+1})$ so that
\[
\lim_{n\rightarrow+\infty}\|b^n-b^*\|_{l^{\infty}(\mathbb{K}^{N+1})}=0.
\]
Then we have $v^*=(\frac{b_1^{*}}{1},\,\frac{b_2^{*}}{2^m},\,\cdots,\frac{b_j^{*}}{j^m},\cdots)\in l_m^{\infty} (\mathbb{K}^{N+1})$ and
\[
\lim_{n\rightarrow+\infty}\|v^n-v^*\|_{l_m^{\infty} (\mathbb{K}^{N+1})}=\lim_{n\rightarrow+\infty}\|b^n-b^*\|_{l^{\infty}(\mathbb{K}^{N+1})}=0.\]
Next we show that the embedding $I_m: l_m^{\infty} (\mathbb{K}^{N+1})\rightarrow l^{\infty} (\mathbb{K}^{N+1})$ is compact.
For every $k\in\mathbb{N}$ we define the ``cut-off'' operator $H_k: l_m^{\infty} (\mathbb{K}^{N+1})\rightarrow l^{\infty} (\mathbb{K}^{N+1})$
by
\[
H_k(v_1,\,v_2,\,\cdots,\,v_k,\,\cdots)=(v_1,\,v_2,\,\cdots, v_k,\,0,\cdots).
\] Then $H_k$ is compact since the dimension of the range is finite. Moreover we have
\[
\|(I_m-H_k)(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)\|_{_{l^{\infty} (\mathbb{K}^{N+1})}}=\sup_{j\geq k+1}|v_j|\leq \frac{1}{(k+1)^m}\|v\|_{l_m^{\infty} (\mathbb{K}^{N+1})},
\]
which implies that $\|I_m-H_k\|\rightarrow 0$ as $k\rightarrow +\infty$ and hence $I_m$ is compact.
\hfill\hspace*{1em}\qed\end{proof}
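The tail bound behind this finite-rank approximation, $\sup_{j\geq k+1}|v_j|\leq (k+1)^{-m}$ for $v$ in the unit ball of $l_m^{\infty}(\mathbb{K}^{N+1})$, can be checked on the extremal sequence $|v_j|=j^{-m}$ (a finite truncation with illustrative parameter values):

```python
import numpy as np

m, k, n = 2, 10, 100              # assumed test values
# extremal element of the unit ball of l_m^infty: |v_j| = j^{-m}
v = np.array([1.0 / j ** m for j in range(1, n + 1)])
# the cut-off H_k keeps the first k entries, so (I_m - H_k)v has sup norm
tail = np.max(np.abs(v[k:]))      # sup over j >= k+1 of |v_j|
assert abs(tail - 1.0 / (k + 1) ** m) < 1e-15
```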
\begin{lemma}\label{Banach-spaces}Let $c>1$ be a constant. The closed unit ball of
$l_c^{\infty} (\mathbb{K}^{N+1})$ is closed under the norm $\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})}$.
\end{lemma}
\begin{proof} Let $B_c(1)=\{v\in l_c^{\infty}(\mathbb{K}^{N+1}): \|v\|_{l_c^{\infty}(\mathbb{K}^{N+1})}\leq 1\}.$ Let $\{v^n\}_{n=1}^{+\infty}\subset B_c(1)$ be a Cauchy sequence in the norm $\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})}$. Since $l_c^{\infty} (\mathbb{K}^{N+1})$ is a subspace of the Banach space $(l^{\infty} (\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})})$, there exists $v^0\in l^{\infty} (\mathbb{K}^{N+1})$ such that
\begin{align}\label{Bs-0}
\lim_{n\rightarrow +\infty}\|v^n-v^0\|_{l^{\infty}(\mathbb{K}^{N+1})}=0. \end{align}Now we show that $v^0\in B_c(1)$. By way of contradiction, assume that $v^0\not\in B_c(1)$. Then we distinguish the following two cases:\\
\textit{Case 1}. $v^0\not\in l_c^{\infty}(\mathbb{K}^{N+1})$.
Then for every $K>0$, there exists $j_0\in\mathbb{N}$ such that $c^{j_0}|(v^0)_{j_0}|>K$. That is,
\begin{align}\label{Bs-1}
|(v^0)_{j_0}|>\frac{K}{c^{j_0}}.
\end{align}
On the other hand, it follows from
(\ref{Bs-0}) that
for every $\epsilon>0$, there exists $N_0\in\mathbb{N}$ such that for every $n>N_0$, we have
$
\sup_{j\in\mathbb{N}}|(v^0)_{j}-(v^n)_{j}|<\epsilon
$
which leads to
$
|(v^0)_{j}|- |(v^n)_{j}|<\epsilon,\,\mbox{for every $j\in\mathbb{N}$}, n>N_0.
$
It follows that
\begin{align}\label{Bs-tri-inequality-0}
|(v^n)_{j}|> |(v^0)_{j}|-\epsilon,\,\mbox{for every $j\in\mathbb{N}$}, n>N_0.
\end{align}
Choosing $j=j_0$ and $\epsilon= \frac{K}{2c^{j_0}}$ in (\ref{Bs-tri-inequality-0}), by (\ref{Bs-0}) and (\ref{Bs-1}) we obtain that
$
|(v^n)_{j_0} | \geq |(v^0)_{j_0}|-\frac{K}{2c^{j_0}} >\frac{K}{2c^{j_0}},
$ which leads to
$ |c^{j_0}(v^n)_{j_0}|>K/2$ for every $n>N_0$. Since $K>0$ was arbitrary, $\lim_{n\rightarrow+\infty}\|v^n\|_{l_c^{\infty}(\mathbb{K}^{N+1})}=+\infty.$
This is a contradiction since $\{v^n\}_{n=1}^{+\infty}\subset B_c(1)$. \\
\textit{Case 2}. $v^0\in l_c^{\infty}(\mathbb{K}^{N+1})$ but $\|v^0\|_{l_c^{\infty}(\mathbb{K}^{N+1})}>1$. Let $s=\|v^0\|_{l_c^{\infty}(\mathbb{K}^{N+1})}$. Then $s>1$, and we may choose $j_1\in\mathbb{N}$ at which the supremum is (nearly) attained, so that
\begin{align}\label{Bs-2}
\frac{s}{c^{j_1}}=|(v^0)_{j_1}|>\frac{1}{c^{j_1}}.
\end{align}
On the other hand, it follows from
(\ref{Bs-0}) that
for every $\epsilon>0$, there exists $N_1\in\mathbb{N}$ such that for every $n>N_1$, we have
$
\sup_{j\in\mathbb{N}}|(v^n)_{j}-(v^0)_{j}|<\epsilon
$
which leads to
$
|(v^0)_{j}|- |(v^n)_{j}|<\epsilon,\,\mbox{for every $j\in\mathbb{N}$}, n>N_1.
$
It follows that
\begin{align}\label{Bs-tri-inequality}
|(v^n)_{j}|> |(v^0)_{j}|-\epsilon,\,\mbox{for every $j\in\mathbb{N}$}, n>N_1.
\end{align}
Note that $\{v^n\}_{n=1}^{+\infty}\subset B_c(1)$. Then by (\ref{Bs-tri-inequality}) we have \begin{align}\label{Bs-tri-inequality-1}
\frac{1}{c^j}\geq |(v^n)_{j}|> |(v^0)_{j}|-\epsilon,\,\mbox{for every $j\in\mathbb{N}$}, n>N_1.
\end{align}
Choosing $j=j_1$ and $\epsilon=\frac{s-1}{2c^{j_1}}$ in (\ref{Bs-tri-inequality-1}), we obtain from (\ref{Bs-2}) that
\begin{align}\label{Bs-tri-inequality-2}
\frac{1}{c^{j_1}}\geq |(v^n)_{j_1}|> |(v^0)_{j_1}|-\epsilon=\frac{s}{c^{j_1}}-\frac{s-1}{2c^{j_1}},\,\mbox{for every }\, n>N_1.
\end{align}
This yields $s<1$, a contradiction.
\hfill\hspace*{1em}\qed \end{proof}
We remark that the unit sphere of $l_c^{\infty} (\mathbb{K}^{N+1})$ is not closed under the norm $\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})}$.
In light of Lemma~\ref{Banach-spaces} we will equip bounded sets of $l_c^{\infty} (\mathbb{K}^{N+1})$ with the norm $\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})}$. The following three lemmas discuss the properties of a linear operator on $l_c^{\infty} (\mathbb{K}^{N+1})$ equipped with the norm $\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})}$.
\begin{lemma}\label{OMT}Let $c>1$ be a constant.
The mapping $T: (l_c^{\infty} (\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty} (\mathbb{K}^{N+1})})\rightarrow (l^{\infty}(\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})})$
defined by
\[
T(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)=(cv_1,\,c^2v_2,\,\cdots,c^{j}v_j,\cdots),
\]
has a compact inverse $T^{-1}$ with norm $\|T^{-1}\|=\frac{1}{c}$. Moreover, $T$ is a closed operator.
\end{lemma}
\begin{proof} We first show that $T^{-1}$ exists and is continuous. By the definition of $T$ and since $c>1$, we know that $T$ is one-to-one and onto. Therefore $T^{-1}: (l^{\infty} (\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty} (\mathbb{K}^{N+1})})\rightarrow (l_c^{\infty}(\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})})$ exists and is given by
\[
T^{-1}(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)=(c^{-1}v_1,\,c^{-2}v_2,\,\cdots,c^{-j}v_j,\cdots).
\] Moreover, we have
\begin{align*}
\|T^{-1}\|
= & \sup_{v\in l^{\infty} (\mathbb{K}^{N+1})}\frac{\|T^{-1}v\|_{l^{\infty}(\mathbb{K}^{N+1})}}{\|v\|_{l^{\infty}(\mathbb{K}^{N+1})}}
= \sup_{\|v\|_{l^{\infty}(\mathbb{K}^{N+1})}=1}\|T^{-1}v\|_{l^{\infty}(\mathbb{K}^{N+1})}
= \frac{1}{c}.
\end{align*}
Next we show that $T^{-1}$ is compact. For every $m\in\mathbb{N}$ we define an operator $H_m: (l^{\infty} (\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty} (\mathbb{K}^{N+1})})\rightarrow (l_c^{\infty}(\mathbb{K}^{N+1}),\,\|\cdot\|_{l^{\infty}(\mathbb{K}^{N+1})})$
by
\[
H_m(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)=(c^{-1}v_1,\,c^{-2}v_2,\,\cdots, c^{-m}v_m,\,0,\cdots).
\] Then $H_m$ is compact since the dimension of the range is finite. Moreover we have
\[
\|(T^{-1}-H_m)(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)\|_{_{l^{\infty} (\mathbb{K}^{N+1})}}=\sup_{j\geq m+1}c^{-j}|v_j|\leq c^{-(m+1)}\|(v_1,\,v_2,\,\cdots,\,v_j,\,\cdots)\|_{_{l^{\infty} (\mathbb{K}^{N+1})}},
\]
which implies that $\|T^{-1}-H_m\|\rightarrow 0$ as $m\rightarrow +\infty$ and hence $T^{-1}$ is compact.
Next we show that $T$ is a closed operator. Let $\{v^n\}_{n=1}^\infty\subset {l_c^{\infty} (\mathbb{K}^{N+1})}$ be a convergent sequence such that $\lim_{n\rightarrow+\infty}\|v^n-v\|_{l^{\infty} (\mathbb{K}^{N+1})}=0$ for some $v\in l^{\infty} (\mathbb{K}^{N+1})$, and such that $\lim_{n\rightarrow+\infty}\|Tv^n-u\|_{l^{\infty} (\mathbb{K}^{N+1})}=0$ for some $u\in l^{\infty} (\mathbb{K}^{N+1})$. Then we have
\begin{align*}
\|T^{-1}u-v\|_{l^{\infty} (\mathbb{K}^{N+1})} & =\|T^{-1}u-v^n+v^n-v\|_{l^{\infty} (\mathbb{K}^{N+1})} \\
& \leq\|T^{-1}u-v^n\|_{l^{\infty} (\mathbb{K}^{N+1})} +\|v^n-v\|_{l^{\infty} (\mathbb{K}^{N+1})} \\
& \leq \|T^{-1}\|\cdot\|u-Tv^n\|_{l^{\infty} (\mathbb{K}^{N+1})} +\|v^n-v\|_{l^{\infty} (\mathbb{K}^{N+1})} \\
&\rightarrow 0 \mbox{ as } n\rightarrow+\infty.
\end{align*}
Therefore we have $T^{-1}u=v$, that is, $Tv=u$, so $T$ is closed.
\hfill\hspace*{0.05em}\qed \end{proof}
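On a finite truncation, $T^{-1}$ is the diagonal matrix $\mathrm{diag}(c^{-1},\ldots,c^{-n})$, and the $l^{\infty}$ operator norm is the maximum absolute row sum, which here is $c^{-1}$, attained at $j=1$. A minimal numerical sketch (the values $c=2$, $n=50$ are illustrative assumptions):

```python
import numpy as np

c, n = 2.0, 50                    # assumed test values
# finite truncation of T^{-1}: diag(c^{-1}, ..., c^{-n})
Tinv = np.diag([c ** -j for j in range(1, n + 1)])
# l^infty operator norm = maximum absolute row sum = largest entry c^{-1}
assert abs(np.linalg.norm(Tinv, ord=np.inf) - 1 / c) < 1e-12
```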
Denote by $\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))$ the space of bounded linear operators from $l^{\infty}(\mathbb{K}^{N+1})$ to $l^{\infty}(\mathbb{K}^{N+1})$. We have the following two lemmas, which will be used when we deal with the integral forms of the relevant abstract ordinary differential equations.
\begin{lemma}\label{Compact-operator}
Let the mapping $T: l_c^{\infty} (\mathbb{K}^{N+1})\rightarrow l^{\infty}(\mathbb{K}^{N+1})$
be as in Lemma~\ref{OMT} and $\lambda\geq 0$. Then the mappings $I - T^{-1}$ and $\lambda I+ T^{-1}: l^{\infty}(\mathbb{K}^{N+1})\rightarrow l^{\infty}(\mathbb{K}^{N+1})$ are bounded linear operators with
\begin{align*}
\| I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}&=1,\\
\| \lambda I + T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}&=\lambda+\frac{1}{c}.
\end{align*}
Moreover, if $\lambda\in (0,\,1-1/c)$ then \[\| (1-\lambda) I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=1-\lambda. \]
\end{lemma}
\begin{proof}
Let $S(1)=\{v\in l^{\infty}(\mathbb{K}^{N+1}):\sup_{j\in\mathbb{N}}|v_j|=1\}\subset l^{\infty}(\mathbb{K}^{N+1}).$ Note that
\begin{align*}
\| I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}
=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(1-c^{-j})|v_j|\\
\leq & \sup_{v\in S(1)} \left(\sup_{j\in\mathbb{N}}|v_j|-\inf_{j\in\mathbb{N}} c^{-j} |v_j|\right)\\
=& 1.
\end{align*}Taking $v_0=\{\frac{j}{j+1}\vec{e}\}_{j=1}^{\infty}\in S(1)$ where $\vec{e}$ is a unit vector on the boundary of the unit ball of $\mathbb{K}^{N+1}$, we have
\begin{align*}
\| I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}
=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(1-c^{-j})|v_j|\\
\geq & \sup_{v=v_0} \left( \sup_{j\in\mathbb{N}}\frac{j}{j+1}(1-c^{-j}) \right)\\
=& 1.
\end{align*} It follows that $\| I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=1$. Moreover, we have
\begin{align*}
\| \lambda I + T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}
=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(\lambda+c^{-j})|v_j|\\
\leq & \sup_{v\in S(1)} \left(\lambda \sup_{j\in\mathbb{N}}|v_j|+\sup_{j\in\mathbb{N}} c^{-j} |v_j|\right)\\
=& \lambda+\frac{1}{c}.
\end{align*}Taking $v'_0=\{c^{-(j-1)}\vec{e}\}_{j=1}^{\infty}\in S(1)$, we have
\begin{align*}
\|\lambda I + T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}
=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(\lambda+c^{-j})|v_j|\\
\geq & \sup_{v=v'_0} \left( \sup_{j\in\mathbb{N}}c^{-(j-1)}(\lambda+c^{-j}) \right)\\
=& \lambda+\frac{1}{c}.
\end{align*} It follows that $\|\lambda I + T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}= \lambda+\frac{1}{c}$.
Finally, we show that $\| (1-\lambda) I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=1-\lambda.$ Note that we have $1-\lambda-c^{-j}>0$ for all $j\in\mathbb{N}$ since $\lambda\in (0,\,1-1/c)$. Then on the one hand we have
\begin{align*}
\| (1-\lambda) I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(1-\lambda-c^{-j})|v_j|\\
\leq & \sup_{j\in\mathbb{N}}(1-\lambda-c^{-j})\\
=& 1-\lambda.
\end{align*}On the other hand,
\begin{align*}
\| (1-\lambda) I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}(1-\lambda-c^{-j})|v_j|\\
\geq & \sup_{v=v_0}\sup_{j\in\mathbb{N}}(1-\lambda-c^{-j})|v_j|\\
=& \sup_{j\in\mathbb{N}}(1-\lambda-c^{-j})\frac{j}{j+1}\\
=& 1-\lambda.
\end{align*}It follows that $\|(1-\lambda) I - T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=1-\lambda.$\hfill \qed
\end{proof}
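To illustrate the identities just proved with a concrete instance (an illustration only, not part of the proof), recall from Lemma~\ref{OMT} that $T^{-1}$ acts coordinatewise, multiplying the $j$-th coordinate by $c^{-j}$. Taking, say, $c=2$ and $\lambda=\frac{1}{4}\in \left(0,\,1-\frac{1}{2}\right)$, the operator $(1-\lambda)I-T^{-1}$ multiplies the $j$-th coordinate by
\[
1-\lambda-c^{-j}=\frac{3}{4}-2^{-j},
\]
which increases towards $\frac{3}{4}=1-\lambda$ as $j\rightarrow+\infty$ without attaining it, so the operator norm is exactly $1-\lambda$. Similarly, $\lambda I+T^{-1}$ multiplies the $j$-th coordinate by $\lambda+c^{-j}$, which is largest at $j=1$, giving the norm $\lambda+\frac{1}{c}$.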
\begin{lemma}\label{Extension-operator}
Let the mapping $T: l_c^{\infty} (\mathbb{K}^{N+1})\rightarrow l^{\infty}(\mathbb{K}^{N+1})$
be as in Lemma~\ref{OMT}. Then for every $\lambda\geq 0$, the mapping
$(\lambda T + I)^{-1}: l^{\infty}(\mathbb{K}^{N+1})\rightarrow l_c^{\infty}(\mathbb{K}^{N+1})\subset l^{\infty}(\mathbb{K}^{N+1})$ is continuous with norm
\begin{align*}
\| (\lambda T + I)^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}&=\frac{1}{c\lambda+1}.
\end{align*}
\end{lemma}
\begin{proof} We compute $\|(\lambda T + I)^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}$. Let $S(1)=\{v\in l^{\infty}(\mathbb{K}^{N+1}):\sup_{j\in\mathbb{N}}|v_j|=1\}\subset l^{\infty}(\mathbb{K}^{N+1}).$ Note that
\begin{align*}
\|(\lambda T + I)^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}
=& \sup_{v\in S(1)} \sup_{j\in\mathbb{N}}\frac{|v_j|}{\lambda c^j +1}\\
\leq & \sup_{j\in\mathbb{N}} \frac{1}{\lambda c^j +1}\\
=& \frac{1}{\lambda c +1}.
\end{align*} Taking $v_0=\{c^{-(j-1)}\vec{e}\}_{j=1}^{\infty}\in S(1)$, we have
\begin{align*}\|(\lambda T + I)^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}=&\sup_{v\in S(1)} \sup_{j\in\mathbb{N}}\frac{|v_j|}{\lambda {c^j}+1}\\
\geq & \sup_{v=v_0} \sup_{j\in\mathbb{N}}\frac{1}{\lambda {c^j}+1}|v_j|\\
\geq & \sup_{j\in\mathbb{N}}\frac{c^{-(j-1)}}{\lambda {c^j}+1} \\
=& \frac{1}{\lambda c +1}.
\end{align*} It follows that $\|(\lambda T + I)^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))}= \frac{1}{\lambda c +1}$.
\hfill\hspace*{1em}\qed
\end{proof}
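As a sanity check (again under the coordinatewise description of $T$ from Lemma~\ref{OMT}), take $c=2$ and $\lambda=1$: the operator $(\lambda T+I)^{-1}$ multiplies the $j$-th coordinate by $\frac{1}{\lambda c^j+1}=\frac{1}{2^j+1}$, which is largest at $j=1$, where it equals $\frac{1}{3}=\frac{1}{c\lambda+1}$, in agreement with the lemma.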
The following three lemmas address the well-posedness of system~(\ref{Abs-ODE-S-infty-new}) and the analyticity of the map $H$.
\begin{lemma}\label{Lemma-l-infty}Assume $\textrm{(A1)-(A2)}$. For every sequence
$\{(u_i,\,v_i)\}_{i=0}^{+\infty}\subset U\times V$, let $\mu_i=(u_i,\,u_{i+1},\,\cdots,\,u_{i+M-1},\,v_i)\in U^M\times V$. Then we have
\begin{align*}
\lim_{j\rightarrow+\infty} \frac{1}{c^j}\prod_{i=0}^{j-1}|1-g(\mu_i)|=0.
\end{align*}
Moreover, for every $m\in\mathbb{N}$, we have
\[
\lim_{j\rightarrow+\infty} \frac{j^m}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|=0.
\]
\end{lemma}
\begin{proof} By (A2), we have $|1-g(\gamma_1,\,\gamma_2)|<c$ and $|1-g(\gamma_1,\,\gamma_2)|$ with $(\gamma_1,\,\gamma_2)\in \overline{U}^M\times \overline{V}$ has a supremum less than $c$. Let $s>0$ be such that $c=e^s$. Then there exists $N_0\geq 1$, $N_0\in\mathbb{N}$ so that $|1-g(\mu_i)|\leq e^{s(1-\frac{1}{N_0})}$ for all $i\in\mathbb{N}$. Then for every $n\in\mathbb{N}$ we have
\[|1-g(\mu_i)|\leq e^{s(1-\frac{1}{N_0})}\leq e^{s(1-\frac{n}{i})}\textrm { for all } i\geq nN_0. \]
It follows that $\ln \left( \frac{|1-g(\mu_i)|}{c}\right)\leq -\frac{ns}{i} $ for all $ i\geq nN_0$. Then for $j> nN_0$ we have
\begin{align}
\sum_{i=0}^{j-1}\ln \left(\frac{|1-g(\mu_i )|}{c}\right)& =\sum_{i=0}^{nN_0-1}\ln \left(\frac{|1-g(\mu_i)|}{c}\right)+\sum_{i=nN_0}^{j-1}\ln \left(\frac{|1-g(\mu_i)|}{c}\right)\notag\\
& \leq \sum_{i=0}^{nN_0-1}\ln \left(\frac{|1-g(\mu_i)|}{c}\right)+s\sum_{i=nN_0}^{j-1} \left(-\frac{n}{i}\right).\label{analyticity-infty-new}
\end{align}
Let $c_0=\sum_{i=0}^{nN_0-1}\ln \left(\frac{|1-g(\mu_i)|}{c}\right)$. Then by (\ref{analyticity-infty-new}) and (A2), we have
\begin{align}
0<\frac{1}{c^j}\prod_{i=0}^{j-1}|1-g(\mu_i)|& = \exp{\sum\limits_{i=0}^{j-1}\ln \left(\frac{|1-g(\mu_i)|}{c}\right)}\notag\\
& \leq e^{c_0}\exp{\left(s\sum\limits_{i=nN_0}^{j-1} -\frac{n}{i}\right)}.\label{analyticity-infty-inequality}
\end{align}
Taking limits as $j\rightarrow +\infty$ in (\ref{analyticity-infty-inequality}) we have
\[
\lim_{j\rightarrow+\infty} \frac{1}{c^j}\prod_{i=0}^{j-1}|1-g(\mu_i)|=0.
\]
Given $m\in\mathbb{N}$, choose $n\in\mathbb{N}$ with $ns\geq m$ in the inequality (\ref{analyticity-infty-inequality}). Then
\begin{align}
0< \frac{j^m}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|& \leq j^m e^{c_0}\exp{\left(-sn\sum\limits_{i=nN_0}^{j-1} \frac{1}{i} \right)}\notag\\
& \leq j^m e^{c_0}\exp{\left(-m\sum\limits_{i=nN_0}^{j-1} \frac{1}{i} \right)}\notag\\
& = \exp\left(c_0+\sum\limits_{i=1}^{nN_0-1}\frac{m}{i}+\frac{m}{j}\right)\exp(m\ln j-mH_j),\label{analyticity-infty-inequality-2}
\end{align}
where $H_j=1+\frac{1}{2}+\cdots+\frac{1}{j}$ and the sum $\sum_{i=1}^{nN_0-1}\frac{m}{i}$ is regarded as $0$ if $nN_0=1$. We note that $\lim_{j\rightarrow+\infty}(\ln j-H_j)=-\gamma$, where $\gamma>0$ is the Euler--Mascheroni constant. Taking the limit superior as $j\rightarrow +\infty$ in (\ref{analyticity-infty-inequality-2}) we have
\[
\limsup_{j\rightarrow+\infty}\frac{j^m}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|\leq \exp\left(c_0+\sum\limits_{i=1}^{nN_0-1}\frac{m}{i}-m\gamma\right)<+\infty.
\]Then we have
\begin{align*}
0\leq \limsup_{j\rightarrow+\infty}\frac{j^{m-1}}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|
\leq \left(\limsup_{j\rightarrow+\infty}\frac{j^m}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|\right)\lim_{j\rightarrow+\infty}\frac{1}{j}
=0.
\end{align*}
Since $m\in\mathbb{N}$ is arbitrary, it follows that $ \lim_{j\rightarrow+\infty} \frac{j^m}{c^j} \prod_{i=0}^{j-1}|1-g(\mu_i)|=0$ for all $m\in\mathbb{N}$. \hfill \qed
\end{proof}
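A simple special case may help in checking the statement of Lemma~\ref{Lemma-l-infty}: if $|1-g(\mu_i)|\equiv\rho$ for some constant $\rho<c$ (an illustration, not an assumption of the lemma), then
\[
\frac{j^m}{c^j}\prod_{i=0}^{j-1}|1-g(\mu_i)|=j^m\left(\frac{\rho}{c}\right)^{j}\rightarrow 0 \quad\mbox{as } j\rightarrow+\infty,
\]
for every fixed $m\in\mathbb{N}$, since $\rho/c<1$. The content of the lemma is that the same conclusion persists when $|1-g(\mu_i)|$ varies with $i$, provided it stays uniformly below $c$.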
Let $ l^{\infty}(U\times V)$ be the subset of $l^{\infty}(\mathbb{K}^{N+1})$ defined by
\[
l^{\infty}(U\times V)=\prod_{j=0}^\infty (U\times V).
\]
Note that $ l^{\infty}(U\times V)$ is not an open subset of $l^{\infty}(\mathbb{K}^{N+1})$ if $l^{\infty}(\mathbb{K}^{N+1})$ is equipped with the product topology. However, we are concerned with the following set:
\begin{align}\label{set-A}
A=\{w=(w_0,\,w_1,\,\cdots)\in l^{\infty}(U\times V): \mbox{$\{w_j\}_{j=0}^\infty\subset Q_0$ for some compact $Q_0\subset U\times V$} \}.
\end{align}
For every $w=(w_0,\,w_1,\,\cdots)\in A$, we can find an open set $P$ and a compact set $Q$ such that $\{w_j\}_{j=0}^\infty\subset P\subset Q\subset U\times V$. Then $w\in l^\infty(P)\subset A\subset l^{\infty}(U\times V)$, where $l^\infty(P)=\prod_{j=0}^\infty P$. Namely, $A$ is open under the box topology.
We also define the projections $\chi_i: l^{\infty}(U\times V)\rightarrow U^M\times V$ with $i\in\{0,\,1,\,2,\,\cdots\}$ by
\begin{align}\label{chi}
\chi_i (w)=(u_i,\,u_{i+1},\,\cdots,\,u_{i+M-1},\,v_i)
\end{align}for every $w=((u_j,\,v_j))_{j=0}^\infty\in l^{\infty}(U\times V).$
\begin{lemma}\label{Lemma-G}Let $A$ be defined at (\ref{set-A}).
Assume $($\textrm{A1 -- A2}$\,)$.
The mapping $G$ defined by
\[
G: A\ni w=(w_0,\,w_1,\,w_2,\,\cdots,\,w_i,\,\cdots)\mapsto G(w)=\left(\frac{1}{c^j}\prod_{i=0}^{j-1}(1-g(\chi_i (w)))\right)_{j=1}^{+\infty},
\]where $w_i=(u_i,\,v_i)\in U\times V$, is continuous and analytic from $A\subset l^{\infty}(U\times V)$ to $ l^{\infty}(\mathbb{C}^{N+1})$.
\end{lemma}
\begin{proof}
By Lemma~\ref{Lemma-l-infty}, we know that $G$ maps $A$ into $ l^{\infty}(\mathbb{C}^{N+1})$. Note that for every $i,\,j\in\mathbb{N}$ with $0\leq i\leq j-1$, we have
\begin{align}\label{product-derivative}
\frac{\partial }{\partial \mu_i}\prod_{k=0}^{j-1}(1-g(\mu_k ))&=
\frac{-\frac{\partial }{\partial \mu_i}g(\mu_i)}{1-g(\mu_i)}\prod_{k=0}^{j-1}(1-g(\mu_k )).
\end{align}
Let $(\mu_0,\,\mu_1,\,\cdots,\,\mu_{j-1})$ denote a column vector in $\bigoplus\limits_{i=0}^{j-1}\mathbb{C}^{MN+1}$. Then we have
\begin{align*}
&\frac{\partial }{\partial (\mu_0,\,\mu_1,\,\cdots,\,\mu_{j-1})}\prod_{i=0}^{j-1}(1-g(\mu_i ))\\
=&\left(\prod_{i=0}^{j-1}(1-g(\mu_i))\right)
\left(\frac{-\frac{\partial }{\partial \mu_0} g(\mu_0)}{(1-g(\mu_0))},\,\frac{-\frac{\partial }{\partial \mu_1} g(\mu_1)}{(1-g(\mu_1))},\,\cdots,\,\frac{-\frac{\partial }{\partial \mu_{j-1}} g(\mu_{j-1})}{(1-g(\mu_{j-1}))}\right),
\end{align*}
which is also regarded as a column vector in $\bigoplus\limits_{i=0}^{j-1}\mathbb{C}^{MN+1}$.
For every $\epsilon>0$, choose $\delta=\epsilon$. For every $w_1=({w_1}_i),\,w_2=({w_2}_i)\in A$ with $|w_1-w_2|_{l^{\infty}(\mathbb{C}^{N+1})}<\delta$, by (\ref{product-derivative}) and the Integral Mean Value Theorem we have
\begin{align*}
|G(w_1)-G(w_2)|_{l^{\infty}(\mathbb{C}^{N+1})} &=\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\prod_{i=0}^{j-1}(1-g(\chi_i(w_1) ))-\prod_{i=0}^{j-1}(1-g(\chi_i(w_2)))\right|\\
&\leq \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\bar{\chi}_i ))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\bar{\chi}_i )}{(1-g(\bar{\chi}_i ))}\left(\chi_i(w_1)-\chi_i(w_2)\right)\right|,
\end{align*}
where $\bar{\chi}_i=\chi_i(w_2)+\theta \left(\chi_i(w_1)-\chi_i(w_2)\right)$ for some $\theta\in [0,\,1]$. By (A2) we have
$l<|1-g(\bar{\chi}_i )|<c$. By (A1), there exists $M_0>0$ so that $ |\frac{\partial }{\partial \chi_i} g(\bar{\chi}_i )|<M_0$. By Lemma~\ref{Lemma-l-infty}, there exists $M_1>0$ so that $\sup_{j\in\mathbb{N}}\frac{j}{c^j} \prod_{i=0}^{j-1}|1-g(\bar{\chi}_i )| <M_1$. It follows that
\begin{align}\label{G-continuity}
|G(w_1)-G(w_2)|_{l^{\infty}(\mathbb{C}^{N+1})} &\leq \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left(\prod_{i=0}^{j-1}|1-g(\bar{\chi}_i )|\right)
\sum_{i=0}^{j-1}\frac{M_0}{l}\left| (\chi_i(w_1)-\chi_i(w_2))\right|\notag\\
&\leq \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left(\prod_{i=0}^{j-1}|1-g(\bar{\chi}_i )|\right)
\frac{jM_0}{l}|w_1-w_2|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
&= \sup_{j\in\mathbb{N}}\frac{j}{c^j}\left(\prod_{i=0}^{j-1}|1-g(\bar{\chi}_i )|\right)
\frac{M_0}{l}|w_1-w_2|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
&\leq \frac{M_0M_1}{l}\, \epsilon,
\end{align}which implies that $G$ is continuous.
Next, we show that for every $w=(w_i)\in A\subset l^{\infty}(U\times V)$, and for all $h=(h_i)\in l^{\infty}(\mathbb{C}^{N+1})$, the mapping $\mathscr{G}:t\mapsto G(w+th)$ is analytic in a neighborhood of $0\in \mathbb{C}$. Denote by $\bar{G}h$ the sequence
\[ \left(\frac{1}{c^j} \left(\prod_{i=0}^{j-1}(1-g(\chi_i(w)))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))}{(1-g(\chi_i(w) ))}\chi_i(h)\right)_{j=1}^{\infty}.\] Then by the same argument leading to (\ref{G-continuity}), we know that $\bar{G}h\in l^{\infty}(\mathbb{C}^{N+1})$ and
\begin{align}\label{analyticity-G}
&\left|\frac{G(w+th)-G(w)}{t}-\bar{G}h\right|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
= & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\frac{1}{t}\left(\prod_{i=0}^{j-1}(1-g(\chi_i(w+th) ))-\prod_{i=0}^{j-1}(1-g(\chi_i(w)))\right)\right.\notag\\
& \left.-\left(\prod_{i=0}^{j-1}(1-g(\chi_i(w)))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w) )\chi_i(h)}{ 1-g(\chi_i(w) ) }\right|\notag\\
= & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{1-g(\tilde{\chi}_i )}\right.\notag\\
& \left.-\left(\prod_{i=0}^{j-1}(1-g(\chi_i(w) ))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{1-g(\chi_i(w) )}\right|\notag\\
\leq & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))-\prod_{i=0}^{j-1}(1-g({\chi}_i(w)))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{1-g(\tilde{\chi}_i)}\right|\notag\\
& +\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\chi_i(w)))\right)
\left(\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial\chi_i} g(\tilde{\chi}_i)\chi_i(h)}{1-g(\tilde{\chi}_i)}-
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{1-g(\chi_i(w))}\right)\right|,
\end{align}
where $\tilde{\chi}_i=\chi_i(w+t\theta\, h)$ for some $\theta\in [0,\,1]$. By applying the same argument leading to (\ref{G-continuity}) on the first term of the last inequality of (\ref {analyticity-G}) and by Lemma~\ref{Lemma-l-infty} we have
\begin{align}\label{Analyticity-G-1}
& \lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))-\prod_{i=0}^{j-1}(1-g(\chi_i(w) ))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{1-g(\tilde{\chi}_i )}\right|\notag\\
\leq & \lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))-\prod_{i=0}^{j-1}(1-g(\chi_i(w)))\right)
\right|\frac{jM_0}{l}|h|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
\leq & \lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\tilde{\chi}}_i ))\right)
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\tilde{\chi}}_i )}{(1-g(\tilde{\tilde{\chi}}_i ))}\theta \, t \chi_i(h)\right|\frac{jM_0}{l}|h|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
= & \lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left| \prod_{i=0}^{j-1}(1-g(\tilde{\tilde{\chi}}_i ))
\right|\frac{j^2M_0^2}{l^2}|h|_{l^{\infty}(\mathbb{C}^{N+1})}^2\cdot |t|\notag\\
= & \lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{j^2}{c^j}\left| \prod_{i=0}^{j-1}(1-g(\tilde{\tilde{\chi}}_i ))
\right|\frac{M_0^2}{l^2}|h|_{l^{\infty}(\mathbb{C}^{N+1})}^2\cdot |t|\notag\\
= & \,0,
\end{align}
where $\tilde{\tilde{\chi}}_i=\chi_i(w+ t\theta\theta'h)$ for some $\theta'\in [0,\,1].$ By (A1), there exists $M_2>0$ so that $ |\frac{\partial^2 }{\partial \chi_i^2} g({\chi}_i(w) )|<M_2$ for every $w\in A$ and $i\in\mathbb{N}$.
Then it follows from the Integral Mean Value Theorem that the second term of the last inequality of (\ref {analyticity-G}) satisfies that
\begin{align*}
& \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i))\right)
\left(\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{(1-g(\tilde{\chi}_i ))}-
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{(1-g(\chi_i(w) ))}\right)\right| \notag\\
= & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)\left[
\left(\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{(1-g(\tilde{\chi}_i ))}-
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{(1-g(\tilde{\chi}_i ))} \right)\right.\right. \notag\\
&\left.\left. +\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{(1-g(\tilde{\chi}_i ))}-
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{(1-g(\chi_i(w) ))}\right] \right|\notag\\
\leq & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)\left[
\frac{jM_2|t|}{l}|h|_{l^{\infty}(\mathbb{C}^{N+1})}^2\right.\right. \notag\\
&\left.\left. +\left(-\frac{\partial }{\partial \chi_i} g(\chi_i(w) )\chi_i(h)\right)\sum_{i=0}^{j-1}\left(\frac{1}{(1-g(\tilde{\chi}_i ))}-
\frac{1}{(1-g(\chi_i(w)))}\right)\right] \right|\notag\\
\leq & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)\left[
\frac{jM_2t}{l}|h|_{l^{\infty}(\mathbb{C}^{N+1})}^2\right.\right. \notag\\
&\left.\left. +M_0|h|_{l^{\infty}(\mathbb{C}^{N+1})}\sum_{i=0}^{j-1}\left(\frac{g(\tilde{\chi}_i )-g(\chi_i(w))}{(1-g(\tilde{\chi}_i ))(1-g(\chi_i(w)))}
\right)\right] \right|\notag\\
\leq & \sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)\left[
\frac{jM_2 |h|_{l^{\infty}(\mathbb{C}^{N+1})}^2|t|}{l} +\frac{jM_0^2|h|_{l^{\infty}(\mathbb{C}^{N+1})}^2|t|}{l^2} \right] \right|.
\end{align*}
Then by Lemma~\ref{Lemma-l-infty} we have
\begin{align}\label{Analyticity-G-2}
\lim_{t\rightarrow 0}\sup_{j\in\mathbb{N}}\frac{1}{c^j}\left|\left(\prod_{i=0}^{j-1}(1-g(\tilde{\chi}_i ))\right)
\left(\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\tilde{\chi}_i )\chi_i(h)}{(1-g(\tilde{\chi}_i ))}-
\sum_{i=0}^{j-1}\frac{-\frac{\partial }{\partial \chi_i} g(\chi_i(w))\chi_i(h)}{(1-g(\chi_i(w) ))}\right)\right|=0.
\end{align}
By (\ref{analyticity-G}), (\ref{Analyticity-G-1}) and (\ref{Analyticity-G-2}) we have
\[
\lim_{t\rightarrow 0}\left|\frac{G(w+th)-G(w)}{t}-\bar{G}h\right|_{l^{\infty}(\mathbb{C}^{N+1})}=0.
\]
\hfill\hspace*{1em}\qed
\end{proof}
\begin{lemma}\label{complete-continuity}Assume $($\textrm{A1 -- A2}$\,)$.
Let the set $A$ and the map $G$ be as in Lemma~\ref{Lemma-G}.
Define $H: A\subset l^{\infty} (U\times V)\rightarrow l^{\infty} (\mathbb{K}^{N+1})$ by
\[
H(\theta)=(F_j(\theta)G_j(\theta))_{j=1}^{\infty}\in l^{\infty} (\mathbb{K}^{N+1}),
\]
where
\begin{align*}
\theta = & (\theta_1,\,\theta_2,\,\cdots,\,\theta_j,\,\cdots)=((u_1,\,v_1),\,(u_2,\,v_2),\,\cdots,\,(u_j,\,v_j),\,\cdots)\in A,\\
F_j(\theta)= & \,\left(\frac{f(u_j,\, u_{j+1} )}{1-g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}, \frac{g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}{1-g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}\right),
\end{align*} for $j\geq 1,\,j\in\mathbb{N}$, and $G_j$ denotes the $j$-th coordinate of the map $G$.
Then $H: \bar{A}\rightarrow l^{\infty} (\mathbb{K}^{N+1})$ is completely continuous and analytic.
\end{lemma}
\begin{proof} According to Lemma~\ref{l-m-space}, we only need to show that for every bounded set $B\subset A$, $H(B)$ is bounded in $ l_m^{\infty} (\mathbb{K}^{N+1})$. By (A1)-(A2), we know that $F(B)=\{(F_j(\theta))_{j=1}^\infty:\theta\in B\}$ is bounded in $ l^{\infty} (\mathbb{K}^{N+1})$. Then by Lemma~\ref{Lemma-l-infty}, we have
\begin{align*}
\lim_{j\rightarrow+\infty}{j^m} \left|F_j(\theta)G_j(\theta)\right|=0.
\end{align*}Therefore we have $H(\theta)\in l_m^{\infty} (\mathbb{K}^{N+1})$. By Lemma~\ref{l-m-space}, $H$ is completely continuous. Moreover, by (A1)--(A2), $F: l^{\infty} (\mathbb{C}^{N+1})\ni \theta\mapsto F(\theta)\in l^{\infty} (\mathbb{C}^{N+1})$ is analytic. Then, by the same argument as in the proof of Lemma~\ref{Lemma-G}, one shows that for every $w=(w_i)\in A\subset l^{\infty}(U\times V)$ and every $h=(h_i)\in l^{\infty}(\mathbb{C}^{N+1})$, the mapping $t\mapsto H(w+th)$ is analytic in a neighborhood of $0\in \mathbb{C}$.
\hfill\hspace*{1em}\qed
\end{proof}
\section{Main Results}\label{Main-results}
Let $\Omega$ be a bounded closed ball in $\mathbb{K}$. We denote by $C(\Omega; l^{\infty}(\mathbb{K}^{N+1}))$ the space of continuous functions $u: \Omega\ni t\rightarrow u(t)\in l^{\infty}(\mathbb{K}^{N+1})$ and denote by $C^1(\Omega; l^{\infty}(\mathbb{K}^{N+1}))$ the space of continuously differentiable functions $u: \Omega\ni t\rightarrow u(t)\in l^{\infty}(\mathbb{K}^{N+1})$. Then it is clear that $C(\Omega; l^{\infty}(\mathbb{K}^{N+1}))$
and $C^1(\Omega; l^{\infty}(\mathbb{K}^{N+1}))$ are Banach spaces equipped, respectively, with the norms
$\|u\| =\max_{t\in \Omega }|u(t)|_{l^{\infty}(\mathbb{K}^{N+1})}$ and
\[\|u\| =\max\{\max_{t\in \Omega }|u(t)|_{l^{\infty}(\mathbb{K}^{N+1})},\,\max_{t\in \Omega }|u'(t)|_{ {l}^{\infty}(\mathbb{K}^{N+1})}\}.
\]
\begin{theorem}\label{Analyticity-th}Assume $($\textrm{A1 -- A2}$\,)$. Let $(x,\,\tau):\mathbb{R}\rightarrow\mathbb{R}^{N+1}$ be a bounded solution of system (\ref{eqn-2}). Suppose that there exists a compact set $Q\subset U\times V$ such that $(x(t),\,\tau(t))\in Q$ for all $t\in\mathbb{R}$. Then $(x,\,\tau)$ is analytic on $\mathbb{R}$.
\end{theorem}
\begin{proof} We define $\left((y_j,\,z_j)\right)_{j=1}^\infty\in C(\mathbb{R}; l_c^{\infty}(\mathbb{R}^{N+1}))$ by
\begin{align*}(y_j(t),\, z_j(t))=\left(\frac{1}{c^{j}}x(\eta^{j-1}(t)),\,\frac{1}{c^{j}}\tau(\eta^{j-1}(t))\right)
\mbox{ for $j\geq 1,\,j\in\mathbb{N},\,t\in\mathbb{R}$.}
\end{align*}
Then by the derivation in Section~\ref{SOPS-4-1}, for every $t\in\mathbb{R}$, $((y_j(t),\,z_j(t)))_{j=1}^{\infty}\in l_c^{\infty}(\mathbb{R}^{N+1})$ satisfies system~(\ref{New-Eqn-1}).
Let
\begin{align}\label{map-F}
F(\theta)=(F_1(\theta),\,F_2(\theta),\,\cdots,\,F_j(\theta),\,\cdots),
\end{align}
where
\begin{align*}
\theta = & (\theta_1,\,\theta_2,\,\cdots,\,\theta_j,\,\cdots)=((u_1,\,v_1),\,(u_2,\,v_2),\,\cdots,\,(u_j,\,v_j),\,\cdots)\in l^{\infty} (\mathbb{R}^{N+1}),\\
F_j(\theta)= & \,\left(\frac{f(u_j,\, u_{j+1} )}{1-g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}, \frac{g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}{1-g(u_j,\,u_{j+1},\,\cdots,u_{j+M-1}, \,v_j )}\right),
\end{align*} for $j\geq 1,\,j\in\mathbb{N}$.
Let $T$ be as in Lemma~\ref{OMT} and $G$ as in Lemma~\ref{Lemma-G}. Then $w=\left((y_1,\,z_1),\, (y_2,\,z_2),\,\cdots\right)\in C(\mathbb{R};l_c^{\infty} (\mathbb{R}^{N+1}))$ is a solution of the following ordinary differential equation
\begin{align}\label{Abs-ODE-S-infty}
\frac{d}{dt}w(t)= H(Tw(t)),
\end{align}
where $H(T(w))=(F_1(T(w))G_1(T(w)),\,F_2(T(w))G_2(T(w)),\,\cdots)\in l^{\infty} (\mathbb{R}^{N+1})$ and $G_j$ is the $j$-th coordinate of $G$.
Moreover, we notice that for every $j\in\mathbb{N}$,
\[
\{(c^jy_j(t),\,c^jz_j(t)): t\in\mathbb{R}\}\subset\{(x(t),\,\tau(t)):t\in\mathbb{R}\}\subset Q.
\] Then every coordinate of the sequence $(c^jy_j(t),\,c^jz_j(t))_{j=1}^\infty$ lies in the compact set $Q$, and
\[
Tw(t)=(c^jy_j(t),\,c^jz_j(t))_{j=1}^\infty\in A
\]for every $t\in\mathbb{R}$, where $A$ is defined by (\ref{set-A}).
Let $w_{t_0}=\left((y_j(t_0),\,z_j(t_0))\right)_{j=1}^\infty\in l_c^{\infty} (\mathbb{R}^{N+1})$, $t_0\in\mathbb{R}$. Then $w(t)$ is a solution of the following initial value problem
\begin{align}\label{Abs-ODE-IVP}\left\{
\begin{aligned}
\frac{d}{dt}w(t)& = H(Tw(t)), \\
w(t_0)&= w_{t_0}.
\end{aligned}
\right.
\end{align}
To prove the existence of a complex extension of $w(t)\in l_c^{\infty} (\mathbb{R}^{N+1})$,
we put $\nu(t)=T w(t)\in l^{\infty} (\mathbb{C}^{N+1})$ and consider equation (\ref{Abs-ODE-IVP}) in $l^{\infty} (\mathbb{C}^{N+1})$. Then equation (\ref{Abs-ODE-IVP})
is transformed into the following integral equation
\begin{align}\label{Abs-Integral-Eq}
T^{-1}{\nu}(t)= w_{t_0}+\int_{t_0}^tH(\nu(s)) ds,
\end{align}where the integral is taken along the linear path $\xi\rightarrow t_0+\xi(t-t_0)$, $0\leq \xi\leq 1$.
Let $\Omega_h=\{t\in\mathbb{C}: |t-t_0|\leq h\}$ for some $h>0$. To prove the existence and uniqueness of the solution using the Uniform Contraction Principle, we consider the fixed point problem associated with the following mapping
\begin{align}\label{L0-lambda-operator}
L_0(\nu)(t)=(I-T^{-1})\nu(t) + w_{t_0}+\int_{t_0}^tH(\nu(s)) ds,
\end{align} on $C(\Omega_h; A)$.
However, by Lemma~\ref{Compact-operator}, $L_0$ is in general not contractive on $C(\Omega_h; l^\infty(\mathbb{C}^{N+1}))$. Instead, we consider its perturbation
$L: C(\Omega_h; l^\infty(\mathbb{C}^{N+1}))\times [0,\,1] \rightarrow C(\Omega_h; l^\infty(\mathbb{C}^{N+1}))$ defined by
\begin{align}\label{L-lambda-operator}
L(\nu,\,\lambda)(t)=((1-\lambda)I-T^{-1})\nu(t) + (T^{-1}+\lambda I)\nu_{t_0}+\int_{t_0}^tH(\nu(s)) ds,
\end{align} is contractive on $C(\Omega_h; l^\infty(\mathbb{C}^{N+1}))$ for $\lambda\in (0,\,1-1/c)$ and some $h>0$, where $\nu_{t_0}=Tw_{t_0}$.
If there exists $\nu\in C(\Omega_h; A)$ such that $L(\nu,\,\lambda)=\nu$, then $\nu$ is a solution of the following initial value problem:
\begin{align}\label{Abs-ODE-IVP-lambda}\left\{
\begin{aligned}
(\lambda I+T^{-1})\frac{d}{dt}\nu(t)& = H(\nu(t)), \\
\nu(t_0)&=T w_{t_0}.
\end{aligned}
\right.
\end{align} Writing (\ref{Abs-ODE-IVP-lambda}) in integral form, we have
\begin{align}\label{Abs-ODE-IVP-integral-lambda}
\begin{aligned}
T^{-1}\nu (t)& = w_{t_0}+\int_{t_0}^t(\lambda T+ I)^{-1} H\left(\nu (s)\right)ds.
\end{aligned}
\end{align}
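For the reader's convenience, we indicate how (\ref{Abs-ODE-IVP-integral-lambda}) follows from (\ref{Abs-ODE-IVP-lambda}): since $\lambda I+T^{-1}=(\lambda T+I)T^{-1}$, integrating the differential equation in (\ref{Abs-ODE-IVP-lambda}) from $t_0$ to $t$ gives
\[
(\lambda T+I)\left(T^{-1}\nu(t)-T^{-1}\nu(t_0)\right)=\int_{t_0}^t H(\nu(s))\, ds,
\]
and applying the bounded operator $(\lambda T+I)^{-1}$ of Lemma~\ref{Extension-operator}, which commutes with the integral, yields (\ref{Abs-ODE-IVP-integral-lambda}) with $w_{t_0}=T^{-1}\nu(t_0)$.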
By Lemma~\ref{complete-continuity}, $H$ is analytic and $L$ is an analytic mapping from $A\subset l^{\infty} (\mathbb{C}^{N+1})$ to $ l^{\infty} (\mathbb{C}^{N+1})$.
We organize the remaining part of the proof as follows. First, we show in Claims 1 and 2 the existence and uniqueness of solutions $\nu_\lambda$ of system~(\ref{Abs-ODE-IVP-integral-lambda}), and in Claim 3 that $\lim_{\lambda\rightarrow 0^+}T^{-1}\nu_\lambda =w_0\in C(\Omega_{h_0}; T^{-1}(\bar{A}))$ for some $h_0>0$. Secondly, we show in Claim 4 that the right-hand side of system~(\ref{Abs-ODE-IVP-integral-lambda}) converges coordinate-wise to that of system~(\ref{Abs-Integral-Eq}), with
$\lim_{n\rightarrow+\infty}H(\nu_{\lambda_n}(t))=H(\nu_0(t))$, where $\nu_0: \Omega_{h_0}\rightarrow\bar{A}$ is a map such that $H(\nu_0)$ is continuous in $t\in \Omega_{h_0}$ and $\{\lambda_n\}_{n=1}^{\infty}\subset (0,\,1-\frac{1}{c})$ satisfies $\lim_{n\rightarrow+\infty}\lambda_n=0$. Lastly, we show in Claim 5 that $H(Tw_0)=H(\nu_0)$, which implies that $w_0$ satisfies system~(\ref{Abs-Integral-Eq}) and hence is a solution of the initial value problem (\ref{Abs-ODE-IVP}).
Now we show the following
\textbf{Claim 1}: For every $\lambda\in (0,\,1-1/c)$, there exists $h>0$ and a unique $\nu_\lambda\in C(\Omega_h; \bar{A})$ such that $L(\nu_\lambda,\,\lambda)=\nu_\lambda$; moreover, $\nu_\lambda$ is analytic in $t$ and differentiable with respect to $\lambda$.
{\textit{Proof of Claim 1:}} It suffices to show that $L$ is a contractive mapping on some closed neighborhood of $w_{t_0}$ in $C(\Omega_h; l^{\infty}(\mathbb{C}^{N+1}))$, where $\Omega_h=\{t\in\mathbb{C}: |t-t_0|\leq h\}$ for some $h>0$ to be determined. Denote by $\|\cdot\|_{C}$ the supremum norm on the Banach space ${C(\Omega_h; l^{\infty}(\mathbb{C}^{N+1}))}$. For every $w_1,\,w_2\in C(\Omega_h; l^{\infty}(\mathbb{C}^{N+1}))$ we have
\begin{align}\label{eqn-condensing}
&\|L(w_1,\,\lambda)-L(w_2,\,\lambda)\|_{C}\notag\\
=& \max_{t\in \Omega_h} \left\|((1-\lambda)I-T^{-1})w_1(t)+\int_{t_0}^tH(w_1(s)) ds\right.\notag\\
& \left.-((1-\lambda)I-T^{-1})w_2(t)-\int_{t_0}^tH(w_2(s)) ds \right\|_{l^{\infty}(\mathbb{C}^{N+1})}\notag\\
\leq & \max_{t\in \Omega_h} \|(1-\lambda) I -T^{-1}\|_{\mathscr{L}(l^{\infty}(\mathbb{K}^{N+1});\, l^{\infty}(\mathbb{K}^{N+1}))} |w_1(t)-w_2(t)|_{l^{\infty}(\mathbb{C}^{N+1})}\notag \\
& +\max_{t\in \Omega_h}\int_{t_0}^t \left\|H(w_1(s))-H(w_2(s))\right \|_{l^{\infty}(\mathbb{C}^{N+1})} ds.
\end{align}
Since $H$ is analytic on $A$, there exist constants $\delta>0$ and $l_0>0$ so that
$|H(\nu_1 )-H(\nu_2) |_{l^{\infty}(\mathbb{C}^{N+1})}\leq l_0 |\nu_1-\nu_2|_{l^{\infty}(\mathbb{C}^{N+1})}$ for every $\nu_1,\,\nu_2\in A$ with $|\nu_1-\nu_{t_0}|_{l^{\infty}(\mathbb{C}^{N+1})}\leq \delta$ and $|\nu_2-\nu_{t_0}|_{l^{\infty}(\mathbb{C}^{N+1})}\leq \delta$.
Let $X=\{\nu\in C(\Omega_h; \bar{A}): \max_{t\in\Omega_h}|\nu(t)-\nu_{t_0}|_{l^{\infty}(\mathbb{C}^{N+1})}\leq \delta\}$. Then $X$ is a closed subset of the Banach space $C(\Omega_h; l^{\infty}(\mathbb{C}^{N+1}))$. By (\ref{eqn-condensing}) and Lemma~\ref{Compact-operator}, we have
\begin{align*}
\|L(w_1,\,\lambda)-L(w_2,\,\lambda)\|_{C}\leq & (1-\lambda) \|w_1-w_2\|_{C}+ l_0 h \|w_1-w_2\|_{C}\\ = & (1-\lambda+l_0 h) \|w_1-w_2\|_{C},
\end{align*} for every $w_1,\,w_2\in X$.
Therefore, if $h\in (0,\,\frac{\lambda}{l_0})$, then $1-\lambda+l_0 h\in (0,\,1)$. Moreover, we choose $h>0$
small enough so that, for every $\nu\in X$,
\begin{align*}
& \max_{t\in\Omega_h}\left\|L(\nu,\,\lambda)(t)-\nu_{t_0}\right\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
=&\max_{t\in\Omega_h}\left\|((1-\lambda)I-T^{-1})(\nu(t)-\nu_{t_0})+\int_{t_0}^tH(\nu(s)) ds\right\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
\leq & \,\delta.
\end{align*}
Then by the Uniform Contraction Principle in Banach spaces, $L(\cdot,\,\lambda): X\rightarrow X$ is a contractive mapping with a unique fixed point $\nu_\lambda\in C(\Omega_h; \bar{A})$, and $\nu_{\lambda}$ is analytic. Since $L$ is linear in $\lambda$, $\nu_{\lambda}$ is differentiable with respect to $\lambda$. This completes the proof of Claim 1.
\textbf{Claim 2}: There exists $h_0>0$ such that $\Omega_{h_0}$ is a common existence region of the fixed points $\nu_\lambda$ of $L(\cdot,\,\lambda)$ for all $\lambda\in (0,\,1-1/c).$
{\textit{Proof of Claim 2:}}
Let $w_\lambda= T^{-1}\nu_\lambda,\,\nu_\lambda\in X$ where $X$ is as in Claim 1.
Note that $\nu_{\lambda}\in C(\Omega_{h_\lambda}; \bar{A})$ where $h_\lambda>0$ is a constant depending on $\lambda$.
Let $\widetilde{M}>0$ be the supremum of $\|H(\nu)\|_{l^{\infty}(\mathbb{C}^{N+1})}$ on $\bar{A}$. Let $0< \beta\leq +\infty$ be such that $\{t\in\mathbb{C}: |t-t_0|<\beta\}$ is the maximal existence region of $\nu_\lambda(t)$ in $\bar{A}$. If $\beta=+\infty$, then $\nu_\lambda$ can be extended to the whole complex plane $\mathbb{C}$ with $\nu_\lambda(t)\in l^{\infty}(\bar{U}\times \bar{V})$ for all $t\in\mathbb{C}$. Otherwise, by Theorem 10.5.5 of \cite{Dieudonne}, there exists $t_1 \in\{t\in\mathbb{C}: |t-t_0|<\beta\}$ so that
$\nu_\lambda$ achieves value in the boundary of $A$. Let $B$ denote
the boundary of $A$. Let $r$ be defined by
\begin{align*}
r= \inf_{\nu\in B}\|T^{-1}(\nu-\nu_{t_0})\|_{l^{\infty}(\mathbb{C}^{N+1})}.
\end{align*}
Now we show that $r>0$. Suppose, to the contrary, that $r=0$. By Lemma~\ref{OMT}, $T^{-1}$ is compact, and $B$ is closed and bounded in $l^{\infty}(\mathbb{C}^{N+1})$; hence the infimum defining $r$ is attained: there exists $\nu^*\in B$ such that $\|T^{-1}(\nu^*-\nu_{t_0})\|_{l^{\infty}(\mathbb{C}^{N+1})}=0.$
Since $T^{-1}$ is injective,
we have $\nu_{t_0}=\nu(t_0)=\nu^*\in B$. This is a contradiction since $\nu(t_0)$ is in the interior of $A$. It follows that $r>0$.
By Lemma~\ref{Compact-operator}, we know that $\lambda I +T^{-1}\in \mathscr{L}(l^{\infty}(\mathbb{C}^{N+1}); l^{\infty}(\mathbb{C}^{N+1}))$ has norm equal to $ \lambda+\frac{1}{c}$. Then we have
\begin{align*}
r = & \inf_{\nu\in B}\|T^{-1}(\nu-\nu(t_0))\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
\leq &\inf_{\nu\in B}\|(\lambda I +T^{-1})(\nu-\nu(t_0))\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
\leq & \left\|(\lambda I +T^{-1})(\nu_\lambda(t_1)-\nu(t_0))\right\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
= & \left\| \int_{t_0}^{t_1} (\lambda I +T^{-1}) \nu_\lambda' (s)ds\right\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
\leq & \sup_{t\in\Omega_h}\int_{t_0}^t \left\| H(\nu_\lambda (s)) \right\|_{l^{\infty}(\mathbb{C}^{N+1})}ds\\
\leq &\widetilde{M}\beta.
\end{align*}
It follows that $\beta\geq \frac{r}{\widetilde{M}}$. Let $h_0=\frac{r}{\widetilde{M}}$. Then $\Omega_{h_0}$ is the common existence region of $\nu_\lambda$ for all $\lambda\in (0,\,1-1/c).$ This completes the proof of Claim 2.
\textbf{Claim 3}: Let $\nu_\lambda$, and $h_0$ be as in Claim 2. There exists an analytic function $w_0\in C(\Omega_{h_0};T^{-1}(\bar{A}))$ so that $\lim_{\lambda\rightarrow 0^+} \|T^{-1}\nu_{\lambda}-w_0\|_{C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))}=0$.
{\textit{Proof of Claim 3:}} By Claim 2, we have $w_\lambda=T^{-1}\nu_\lambda\in C(\Omega_{h_0}; T^{-1}(\bar{A}))$. Moreover, the uniformly bounded set $\left\{w_{\lambda}: \lambda\in (0,\,1-1/c)\right\}$ is relatively compact in $C(\Omega_{h_0}; T^{-1}(\bar{A}))$ by the Arzel\'{a}--Ascoli theorem, since for every $\varepsilon>0$ there exists $\tilde{\delta}=\frac{\varepsilon}{\widetilde{M}} >0$ so that $|t-t'|<\tilde{\delta}$ implies that
\begin{align*}
\left\|w_{\lambda}(t)-w_{\lambda}(t')\right\|_{l^{\infty}(\mathbb{C}^{N+1})}
\leq & \left\|\int_{t'}^t (\lambda T+I)^{-1} H(Tw_{\lambda} (s))ds \right\|_{l^{\infty}(\mathbb{C}^{N+1})}\\
\leq & \widetilde{M} \tilde{\delta}\\
=&\varepsilon,
\end{align*}
where $\widetilde{M}>0$ was defined in the proof of Claim 2, and Lemma~\ref{Extension-operator} was applied to obtain the second inequality.
Therefore, there exists $w_0\in C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))$ so that
\begin{align}\label{w-uniform}\lim_{\lambda\rightarrow 0}\|w_{\lambda}-w_0\|_{C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))}=0.
\end{align} Since the functions $w_{\lambda}$, $\lambda\in (0,\,1-1/c)$, are uniformly bounded in the norm $\|\cdot\|_{ C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))}$ and analytic in the norm $\|\cdot\|_{C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))}$, $w_0$ is also analytic in the norm $\|\cdot\|_{C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))}$.
Now we show that $w_0\in C(\Omega_{h_0}; T^{-1}(\bar{A}))$. First we show that $w_0\in C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))$.
Suppose that $w_0\not\in C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))$. Then for every $K>0$ there exists $j_0\in\mathbb{N}$ such that $\sup_{t\in \Omega_{h_0}}c^{j_0}|(w_0)_{j_0}(t)|>K$. That is,
\begin{align}\label{MJ0}
\sup_{t\in \Omega_{h_0}}|(w_0)_{j_0}(t)|>\frac{K}{c^{j_0}}.
\end{align}
On the other hand, it follows from
$\lim_{\lambda\rightarrow 0^+}\|w_{\lambda}-w_0\|_{C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))}=0$, that
for every $\epsilon>0$, there exists $\delta>0$ such that for every $\lambda\in (0,\,\delta)$, we have
\begin{align*}
\sup_{t\in \Omega_{h_0}}\sup_{j\in\mathbb{N}}|(w_0)_{j}(t)-(w_\lambda)_{j}(t)|<\epsilon,
\end{align*}
which leads to
\begin{align*}
\sup_{t\in \Omega_{h_0}} |(w_0)_{j}(t)|-\sup_{t\in \Omega_{h_0}}|(w_\lambda)_{j}(t)|<\epsilon,\,\mbox{for every $j\in\mathbb{N}$}.
\end{align*}
It follows that
\begin{align}\label{tri-inequality}
\sup_{t\in \Omega_{h_0}}|(w_\lambda)_{j}(t)|>\sup_{t\in \Omega_{h_0}} |(w_0)_{j}(t)|-\epsilon,\,\mbox{for every $j\in\mathbb{N}$}.
\end{align}
Choosing $j=j_0$ and $\epsilon=\frac{K}{2c^{j_0}}$ in (\ref{tri-inequality}), by (\ref{MJ0}) we obtain
\begin{align*}
\sup_{t\in \Omega_{h_0}}|(w_\lambda)_{j_0}(t)|& \geq \sup_{t\in \Omega_{h_0}} |(w_0)_{j_0}(t)|-\frac{K}{2c^{j_0}}\\
& >\frac{K}{2c^{j_0}},
\end{align*}which leads to
$ \sup_{t\in \Omega_{h_0}}|c^{j_0}(w_\lambda)_{j_0}(t)|>K/2$ for every $\lambda\in (0,\,\delta)$. That is, $w_\lambda\not\in C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))$ as $\lambda\rightarrow 0$ and hence $\nu_\lambda=Tw_\lambda\not\in C(\Omega_{h_0}; l^{\infty}(\mathbb{C}^{N+1}))$. This is a contradiction and hence $w_0\in C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))$.
Next we show that $w_0\in C(\Omega_{h_0}; T^{-1}(\bar{A}))$. Suppose not. Since $w_0\in C(\Omega_{h_0}; l_c^{\infty}(\mathbb{C}^{N+1}))$, there exists $t^*\in\Omega_{h_0}$ so that $w_0(t^*)\in l_c^{\infty}(\mathbb{C}^{N+1})\setminus T^{-1}(\bar{A})$. By (\ref{w-uniform}) we have
\begin{align}\label{w-t-star}
\lim_{\lambda\rightarrow 0}\|w_\lambda(t^*)-w_0(t^*)\|_{l^{\infty}(\mathbb{C}^{N+1})}=0.
\end{align}
Since $\left\{w_{\lambda}: \lambda\in (0,\,1-1/c)\right\}$ is uniformly bounded in $C(\Omega_{h_0}; T^{-1}(\bar{A}))$, there exists a closed ball
$B'$ in $T^{-1}(\bar{A})$ which contains the closure of $\{w_\lambda(t^*)\}_{\lambda\in (0,\,1-1/c)}$. Then by Lemma~\ref{Banach-spaces}
and by (\ref{w-t-star}), we have $w_0(t^*)\in B'\subset T^{-1}(\bar{A})$ which is a contradiction. This completes the proof of Claim 3.
{\bf Claim 4:} Let $h_0$ be as in Claim 2. There exists a map $\nu_0: \Omega_{h_0}\rightarrow \bar{A}$ such that $H(\nu_0)$ is continuous and such that for every $t\in \Omega_{h_0}$,
there exists a sequence $\{\lambda_n\}_{n=1}^{\infty}\subset (0,\,1-\frac{1}{c})$ with $\lim_{n\rightarrow+\infty}\lambda_n=0$ such that
$\lim_{n\rightarrow+\infty}H(\nu_{\lambda_n}(t))=H(\nu_0(t)).$
{\it Proof of Claim 4}: Note that by Claim 1,
$\nu_\lambda \in C(\Omega_{h_0};\bar{A})$ is uniformly bounded with respect to $\lambda\in (0,\,1-\frac{1}{c})$.
Since by Lemma~\ref{complete-continuity} $H$ is completely continuous, for every $t\in \Omega_{h_0}$, the set \[
\left\{H(\nu_\lambda(t)): \lambda\in \left(0,\,1-\frac{1}{c}\right)\right\},
\]is pre-compact in $l^\infty(\mathbb{C}^{N+1})$.
So there exists a sequence $\{\lambda_n\}_{n=1}^{\infty}\subset (0,\,1-\frac{1}{c})$ with $\lim_{n\rightarrow+\infty}\lambda_n=0$ and $\nu_0(t)\in \bar{A}$, with $T^{-1}\nu_0(t)$ in the compact set $T^{-1}(\bar{A})$, such that
\begin{align}\label{new-limit-01}
\lim_{n\rightarrow+\infty}H(\nu_{\lambda_n}(t))=\lim_{n\rightarrow+\infty} H(Tw_{\lambda_n}(t))=H(\nu_0(t)).
\end{align}
Next we show that $H(\nu_0): \Omega_{h_0}\ni t\rightarrow H(\nu_0(t)) \in l^\infty(\mathbb{C}^{N+1})$ is continuous in $t\in \Omega_{h_0}$. Let $t\in \Omega_{h_0}$. By (\ref{new-limit-01}), for every $\epsilon>0$, there exists $N_1\in\mathbb{N}$ such that for every $n>N_1$,
\begin{align}\label{nu-1}
\|H(\nu_{\lambda_n}(t))-H(\nu_0(t))\|_{ l^\infty(\mathbb{C}^{N+1})}<\frac{\epsilon}{3}.
\end{align}
Since $H(\nu_{\lambda_n})$ is continuous, there exists $\delta>0$ such that for every $t'\in \Omega_{h_0} $ with $|t-t'|<\delta$ we have
\begin{align}\label{nu-2}
\|H(\nu_{\lambda_n}(t))-H(\nu_{\lambda_n}(t'))\|_{ l^\infty(\mathbb{C}^{N+1})}<\frac{\epsilon}{3}.
\end{align}
Taking a subsequence of $\{\lambda_n\}$ if necessary, by (\ref{new-limit-01}) there exists $N'$ such that for every $n>N'$, we have
\begin{align}\label{nu-3}
\|H(\nu_{\lambda_n}(t'))-H(\nu_0(t'))\|_{ l^\infty(\mathbb{C}^{N+1})}<\frac{\epsilon}{3}.
\end{align}
By (\ref{nu-1}), (\ref{nu-2}) and (\ref{nu-3}) we have for $n>\max\{N_1,\,N'\}$,
\[
\|H(\nu_0(t))-H(\nu_0(t'))\|_{ l^\infty(\mathbb{C}^{N+1})}<\epsilon.
\]That is, $H(\nu_0)$ is continuous. This completes the proof of Claim 4.
{\bf Claim 5:} Let $h_0$ be as in Claim 2, $w_0$ be as in Claim 3, $\nu_0$ be as in Claim 4. Then $H(Tw_0)=H(\nu_0)$ and $w_0$ is the solution of the initial value problem (\ref{Abs-ODE-IVP}).
{\it Proof of Claim 5}:
It follows from Claim 3 that $w_0$ is in $C(\Omega_{h_0}; T^{-1}(\bar{A}))$ and $w_0$ is the limit of $w_\lambda$ as $\lambda\rightarrow 0^+$ in the norm $\|\cdot\|_{C(\Omega_{h_0}; l^\infty(\mathbb{C}^{N+1}))}$. We first show that $Tw_{\lambda} $ converges to $Tw_0$ coordinate-wise. That is, for every $j\in\mathbb{N}$,
\begin{align}\label{claim-5-1}
\lim_{\lambda\rightarrow 0}\sup_{t\in \Omega_{h_0}}|(Tw_{\lambda})_j(t)-(Tw_0)_j(t)|= 0.
\end{align}
If not, there exist $j_0\in\mathbb{N}$, $\epsilon_0>0$ and a sequence $\{\lambda_{n}\}_{n=1}^{\infty}\subset (0,\,1-\frac{1}{c}) $ converging to 0 such that
\[
\sup_{t\in \Omega_{h_0}}|(w_{\lambda_{n}})_{j_0}(t)-(w_0)_{j_0}(t)|\geq \frac{\epsilon_0}{c^{j_0}}, \mbox{for all } n\in\mathbb{N},
\]which leads to
\[
\sup_{t\in \Omega_{h_0}}\sup_{j\in\mathbb{N}}|(w_{\lambda_{n}})_{j}(t)-(w_0)_{j}(t)|\geq\sup_{t\in \Omega_{h_0}}|(w_{\lambda_{n}})_{j_0}(t)-(w_0)_{j_0}(t)|\geq \frac{\epsilon_0}{c^{j_0}},
\]
for all $n\in\mathbb{N}$. This is a contradiction, since $w_0$ is the limit of $w_\lambda$ as $\lambda\rightarrow 0^+$ in the norm $\|\cdot\|_{C(\Omega_{h_0}; l^\infty(\mathbb{C}^{N+1}))}$.
Since each coordinate of $H(Tw_\lambda)$ involves only finitely many coordinates of $Tw_\lambda$ and $H$ is analytic, $H(Tw_{\lambda})$ converges to $H(Tw_0)$ coordinate-wise as $Tw_{\lambda}$ converges to $Tw_0$ coordinate-wise with $\lambda\rightarrow 0^+$. By Claim 4, we have
\begin{align}\label{eqn-last0}
H(\nu_0)=H(Tw_0).
\end{align}
Note that by Lemma~\ref{Extension-operator}, $(\lambda T+I)^{-1}\in \mathscr{L}(l^{\infty}(\mathbb{C}^{N+1}); l^{\infty}(\mathbb{C}^{N+1}))$ is bounded for every $\lambda\in[0,\,1)$, and that $\nu_\lambda$, $\lambda\in (0,\,1-\frac{1}{c})$, satisfies (\ref{Abs-ODE-IVP-integral-lambda}). On the one hand, by Claim 3 we have
\begin{align}\label{eqn-last2}
\lim_{\lambda\rightarrow 0^+}\|T^{-1}\nu_{\lambda} (t)-w_0(t)\|_{l^\infty(\mathbb{C}^{N+1})}=0.
\end{align}On the other hand, for every $j\in\mathbb{N}$ and $t\in\Omega_{h_0}$ we have
\begin{align}\label{eqn-last}
& \left|\int_{t_0}^t\left[(\lambda T+ I)^{-1} H\right]_j\left(\nu_{\lambda}(s)\right)ds-\int_{t_0}^t H_j(\nu_0(s))ds\right|\notag\\
= & \left|\int_{t_0}^t\left(\left[(\lambda T+ I)^{-1} H\right]_j\left(\nu_{\lambda}(s)\right)- H_j(\nu_0(s))\right)ds\right|\notag\\
= &\left|\int_{t_0}^t \left(\frac{1}{(\lambda c^j+ 1)} H_j\left(\nu_{\lambda}(s)\right)- H_j(\nu_0(s))\right) ds\right|\notag\\
= &\left|\int_{t_0}^t \left( \frac{1}{(\lambda c^j+ 1)} (H_j\left(\nu_{\lambda}(s)\right)- H_j(\nu_0(s)))-\frac{\lambda c^j}{\lambda c^j+1} H_j(\nu_0(s))\right)ds\right|\notag\\
\leq &\left|\int_{t_0}^t \frac{1}{(\lambda c^j+ 1)} (H_j\left(\nu_{\lambda}(s)\right)- H_j(\nu_0(s)))ds\right|+\left|\int_{t_0}^t\frac{\lambda c^j}{\lambda c^j+1} H_j(\nu_0(s)) ds\right|\notag\\
= & \frac{1}{(\lambda c^j+ 1)} \left|\int_{t_0}^t(H_j\left(\nu_{\lambda}(s)\right)- H_j(\nu_0(s)))ds\right|+\frac{\lambda c^j}{\lambda c^j+1}\left|\int_{t_0}^t H_j(\nu_0(s)) ds\right|\notag\\
= & \frac{1}{(\lambda c^j+ 1)} \left|\int_{t_0}^t(H_j\left(Tw_{\lambda}(s)\right)- H_j(Tw_0(s)))ds\right|+\frac{\lambda c^j}{\lambda c^j+1}\left|\int_{t_0}^t H_j(\nu_0(s)) ds\right|
\end{align} where $H_j$ denotes the $j$-th coordinate of $H$. Recall that $H(Tw_{\lambda})$ converges to $H(Tw_0)$ coordinate-wise, uniformly with respect to $t\in\Omega_{h_0}$, as $\lambda\rightarrow 0^+$.
Letting $\lambda \rightarrow 0^+$ in (\ref{eqn-last}), we have for every $j\in\mathbb{N}$ and $t\in\Omega_{h_0}$,
\begin{align}\label{eqn-last3} \left|\int_{t_0}^t\left[(\lambda T+ I)^{-1} H\right]_j\left(\nu_{\lambda}(s)\right)ds-\int_{t_0}^t H_j(\nu_0(s))ds\right|\rightarrow 0 \mbox{ as $\lambda\rightarrow0^+$}.
\end{align}
By (\ref{eqn-last2}) and (\ref{eqn-last3}), we have for every $t\in\Omega_{h_0}$,
\[
w_0(t)=w_{t_0}+\int_{t_0}^tH(\nu_0(s))ds,
\]
which combined with (\ref{eqn-last0}) gives
\[
w_0(t)=w_{t_0}+\int_{t_0}^tH(Tw_0(s))ds.
\]
That is, $w_0$ is a solution of the initial value problem (\ref{Abs-ODE-IVP}). Since $w_0$ is analytic, it is the unique solution of (\ref{Abs-ODE-IVP}), namely the complex extension of the real-valued solution $w=\left((y_1,\,z_1),\,(y_2,\,z_2),\,\cdots\right)\in C(\mathbb{R}; l_c^{\infty}(\mathbb{R}^{N+1}))$ at $t=t_0\in\mathbb{R}$. It follows that $(x,\,\tau)=(cy_1,\,cz_1)$ is analytic at $t_0$. Since $t_0\in\mathbb{R}$ is arbitrary, $(x,\,\tau)$ is analytic on $\mathbb{R}$. This completes the proof of Claim 5 and that of the theorem.\qed
\end{proof}
\section{Example}\label{section-3}
In this section, we present an example arising in applications: we study the analyticity of periodic
solutions of the following system of delay differential equations with adaptive delay:
\begin{align}\label{ch3-eqn-4-23}
\left\{
\begin{aligned}
\dot{x}_1(t)&=-\mu x_1(t)+\sigma b( x_2(t-\tau(t))),\\
\dot{x}_2(t)&=-\mu x_2(t)+\sigma b( x_1(t-\tau(t))),\\
\dot{\tau}(t)&=1-h(x(t))\cdot(1+\tanh\tau(t)),
\end{aligned}
\right.
\end{align}
where $x(t)=(x_1(t),\,x_2(t))\in\mathbb{R}^2$, $\tau(t)\in\mathbb{R}$, $\tanh(\tau)= (e^{2\tau}-1)/(e^{2\tau}+1)$ and $\mu>0$ is a constant.
We make the following assumptions:
\begin{description}
\item[$(\alpha_1)$] $b: \mathbb{R}\rightarrow\mathbb{R}$ and $h: \mathbb{R}^2\rightarrow\mathbb{R}$ are continuously differentiable functions with $b'(0)=-1$;
\item[$(\alpha_2)$] There exist $h_0<h_1$ in $(1/2,\,1)$ such that $h_1> h(x)> h_0$ for all $x\in \mathbb{R}^2$;
\item[$(\alpha_3)$] $b$ is decreasing on $\mathbb{R}$ and the map $\mathbb{R}\ni y\rightarrow yb(y)\in\mathbb{R}$ is injective;
\item[$(\alpha_4)$]$yb(y)<0$ for $y\neq 0$, and there exists a continuous function $M:\mathbb{R}\ni \sigma\rightarrow M(\sigma)\in (0,\,+\infty)$ so that
\[
\frac{b(y)}{y}>-\frac{\mu}{2|\sigma|},
\]
for $|y|\geq M(\sigma)$;
\item[$(\alpha_5)$] $h_0>(1+e^{-\pi})/2$ and there exists $\epsilon>0$ so that $b$ and $h$ have analytic complex extensions on
\[
U_0\times V_0=\{(p,\,q)\in \mathbb{C}^2\times\mathbb{C}: \Re(p,\,q)\in \overline{\Omega}_1,\, |\Im(p,\,q)|\leq\epsilon\}
\] where $\Omega_1= (-M(\sigma),\,M(\sigma))\times (-M(\sigma),\,M(\sigma))\times \left(0,\,-\frac{\ln (2h_0-1)}{2}\right).$
\end{description}
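The assumptions above are not vacuous. The following short numerical sanity check (a sketch only: the choices $b(y)=-\tanh y$ and $h(x)=\frac{3}{4}+\frac{1}{10}\sin(x_1+x_2)$ are our own illustrative assumptions, not taken from the text) verifies $(\alpha_1)$, $(\alpha_2)$ and $(\alpha_4)$, the latter with $M(\sigma)=\max(1,\,2|\sigma|/\mu)$.

```python
import math

# Illustrative choices (our own assumptions, not from the paper):
#   b(y) = -tanh(y):             b'(0) = -1 and y*b(y) < 0 for y != 0;
#   h(x) = 3/4 + sin(x1+x2)/10:  values in [0.65, 0.85] ⊂ (h_0, h_1) = (0.6, 0.9).
def b(y):
    return -math.tanh(y)

def h(x):
    return 0.75 + 0.1 * math.sin(x[0] + x[1])

# (alpha_1): b'(0) = -1, checked with a central difference quotient.
eps = 1e-6
b_prime_0 = (b(eps) - b(-eps)) / (2 * eps)
assert abs(b_prime_0 + 1.0) < 1e-8

# (alpha_2): h_0 < h(x) < h_1 with h_0 = 0.6, h_1 = 0.9 in (1/2, 1).
for x1 in range(-10, 11):
    for x2 in range(-10, 11):
        assert 0.6 < h((x1, x2)) < 0.9

# (alpha_4): y*b(y) < 0 for y != 0, and b(y)/y > -mu/(2|sigma|) whenever
# |y| >= M(sigma) := max(1, 2|sigma|/mu), since |tanh(y)/y| <= 1/|y|.
mu, sigma = 1.0, 2.0
M = max(1.0, 2 * abs(sigma) / mu)
for y in (M, -M, 2 * M, -10 * M):
    assert y * b(y) < 0
    assert b(y) / y > -mu / (2 * abs(sigma))
```

Note that $y\mapsto y\,b(y)=-y\tanh y$ is even, so this particular $b$ does not satisfy the injectivity part of $(\alpha_3)$; the sketch is meant only to illustrate the remaining conditions.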
\begin{lemma}[\cite{HWZ}]\label{ch3-lemma-4-6}
Assume $(\alpha_1)$--$(\alpha_4)$ hold. Then the range of every periodic solution
$(x_1,\,x_2,\,\tau)$ of $($\ref{ch3-eqn-4-23}$\,)$ with $\sigma\in\mathbb{R}$ is contained in
\[
\Omega_1=(-M(\sigma),\,M(\sigma))\times(-M(\sigma),\,M(\sigma))\times \left(0, -\frac{\ln(2h_0-1)}{2}\right).
\]
\end{lemma}
\begin{theorem}
Assume that $(\alpha_1)$--$(\alpha_5)$ hold. Then all the periodic solutions of (\ref{ch3-eqn-4-23}) are analytic on $\mathbb{R}$.
\end{theorem}
\begin{proof} By Lemma~\ref{ch3-lemma-4-6}, the range of every periodic solution
$(x_1,\,x_2,\,\tau)$ of $($\ref{ch3-eqn-4-23}$\,)$ with $\sigma\in\mathbb{R}$ is contained in
$\Omega_1$. Now we apply Theorem~\ref{Analyticity-th}. Let $l=1/2\in (0,\,1)$. For every $(x(t),\,\tau(t))=(x_1(t),\,x_2(t),\,\tau(t))\in \overline{\Omega}_1$, $t\in\mathbb{R}$, we have \[1\leq 1+\tanh \tau(t)\leq\frac{1}{h_0}<\frac{2}{1+e^{-\pi}}\] and hence by $(\alpha_2)$ and $(\alpha_5)$ we obtain
\begin{align*}
& 1-(1-h(x(t))\cdot(1+\tanh\tau(t)))-\frac{e+l}{2}\\
= &\, h(x(t))\cdot(1+\tanh\tau(t)) -\frac{e+l}{2}\\
< &\, 1+\tanh \tau(t)-\frac{e+l}{2}\\
< &\frac{2}{1+e^{-\pi}}-\frac{e+l}{2}\\
< & \, \frac{e-l}{2},
\intertext{and}
& 1-(1-h(x(t))\cdot(1+\tanh\tau(t)))-\frac{e+l}{2}\\
= &\, h(x(t))\cdot(1+\tanh\tau(t)) -\frac{e+l}{2}\\
> &\, \frac{1+e^{-\pi}}{2}(1+\tanh \tau(t))-\frac{e+l}{2}\\
\geq &\frac{1+e^{-\pi}}{2}-\frac{e+l}{2}\\
>& \, -\frac{e-l}{2}.
\end{align*}Therefore, we have $| 1-(1-h(x(t))\cdot(1+\tanh\tau(t)))-\frac{e+l}{2}|<\frac{e-l}{2}$ for all $(x(t),\,\tau(t))\in\overline{\Omega}_1$. Note that $1>h_0>(1+e^{-\pi})/2$ and $(x(t),\,\tau(t))\in\overline{\Omega}_1$ imply that $0\leq \tau(t)<\frac{\pi}{2}$. Moreover, the complex extension of $1+\tanh q$ is analytic for $|q|<\frac{\pi}{2},\,q\in \mathbb{C}$. Then by $(\alpha_5)$ we can choose $\epsilon_0\in (0,\,\epsilon)$
small enough so that $| 1-(1-h(p)\cdot(1+\tanh q))-\frac{e+l}{2}|<\frac{e-l}{2}$ for all $(p,\,q)\in U\times V$ where
\begin{align*}
U\times V= \{(p,\,q)\in \mathbb{C}^2\times\mathbb{C}: \Re(p,\,q)\in {\Omega}_1,\, |\Im(p,\,q)|<\epsilon_0\}\subset U_0\times V_0.
\end{align*}
Then by applying Theorem~\ref{Analyticity-th} on $U\times V$, analyticity of all the periodic solutions of (\ref{ch3-eqn-4-23}) follows.
\hfill\hspace*{1em}\qed
\end{proof}
| {
"timestamp": "2017-08-29T02:06:00",
"yymm": "1708",
"arxiv_id": "1708.08024",
"language": "en",
"url": "https://arxiv.org/abs/1708.08024",
"abstract": "We study the analyticity of bounded solutions of systems of analytic state-dependent delay differential equations. We obtain the analyticity of solutions by transforming the system of state-dependent delay equations into an abstract ordinary differential equation in a subspace of the sequence space $l^{\\infty}(\\mathbb{R}^{N+1})$ and prove the existence of complex extension of the bounded solutions. An example is given to illustrate the general results.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Analyticity of Bounded Solutions of Analytic State-Dependent Delay Differential Equations"
} |
% https://arxiv.org/abs/1811.06380
\title{Embeddings of Free Magmas with Applications to the Study of Free Non-Associative Algebras}
\begin{abstract}
We introduce an embedding of the free magma on a set $A$ into the direct product of the free magma on a singleton set and the free semigroup on $A$. This embedding is then used to prove several theorems related to algebraic independence of subsets of the free non-associative algebra on $A$. Among these theorems is a generalization of a result due to Kurosh, which states that every subalgebra of a free non-associative algebra is also free.
\end{abstract}
\section{Preliminaries}
Throughout our entire discussion, we will let $A$ denote a fixed, nonempty set and we will let $\mathscr{M}[A]$ denote the free magma on $A$. Associated with each element $m$ of $\mathscr{M}[A]$ is a positive integer $\text{deg}(m)$ called the \emph{degree} of $m$. If $\circ_M$ denotes the product defined on $\mathscr{M}[A]$, then the degree function on $\mathscr{M}[A]$ satisfies
\[ \text{deg}(m_1 \circ_M m_2) = \text{deg}(m_1) + \text{deg}(m_2) \]
for all $m_1, m_2 \in \mathscr{M}[A]$. The elements of $A$ are the elements of $\mathscr{M}[A]$ with $\text{deg}(m) = 1$. For $n > 1$, the elements $m$ of $\mathscr{M}[A]$ with $\text{deg}(m) = n$ are of the form $m_1 \circ_M m_2$ where $\text{deg}(m_1) + \text{deg}(m_2) = n$. (See \cite{bourbaki_algebra}.) The following proposition states one of the fundamental properties of the operation $\circ_M$.
\begin{proposition}\cite{bourbaki_algebra} \label{prop:magma1}
If $m_1, m'_1, m_2, m'_2 \in \mathscr{M}[A] $, then
\[ m_1 \circ_M m_2 = m'_1 \circ_M m'_2 \text{ if and only if } m_1 = m'_1 \text{ and } m_2 = m'_2 \,. \]
\end{proposition}
\noindent We will refer to elements of $\mathscr{M}[A]$ as \emph{$A$-monomials}. Rather than using the symbol $\circ_M$, we will from now on denote the product of two elements $m_1, m_2 \in \mathscr{M}[A]$ by $(m_1, m_2)$.
\\
As a simple illustration, let $Z = \{z_1, z_2, z_3, z_4\}$ where $z_i \neq z_j$ if $i \neq j$. The elements of $Z$ are elements of $\mathscr{M}[Z]$ of degree 1. Products of the elements of $Z$,
\begin{equation} \label{deg2}
(z_1, z_1),\, (z_1, z_2),\, (z_1, z_3),\, (z_2, z_1),\, \ldots
\end{equation}
are also in $\mathscr{M}[Z]$. The elements in \eqref{deg2} have degree $2$. Products of elements in \eqref{deg2} and elements of $Z$,
\begin{equation} \label{deg3}
((z_1, z_2), z_3), \, (z_1, (z_2, z_3)), \, ((z_2, z_1), z_3), \, \ldots
\end{equation}
are in $\mathscr{M}[Z]$ and have degree $3$. Products of elements in \eqref{deg3} and elements of $Z$ together with products of pairs of elements in \eqref{deg2} are the elements of $\mathscr{M}[Z]$ having degree $4$. So, $\mathscr{M}[Z]$ contains the set $Z$, products of elements of $Z$, products of products of $Z$ and elements of $Z$, and so on.
\\
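The recursive construction of $\mathscr{M}[Z]$ illustrated above can be mirrored directly in code. The following sketch (the nested-tuple encoding of $A$-monomials is our own illustrative convention, not notation from the paper) models a generator as a string and a product as an ordered pair, so the degree function and the cancellation property of Proposition \ref{prop:magma1} come for free.

```python
# An A-monomial is either a generator (a string) or an ordered pair of
# A-monomials; this mirrors the free-magma construction: no relations hold,
# and a product is literally just the pair of its factors.
def deg(m):
    """Degree of a monomial: 1 on generators, additive on products."""
    if isinstance(m, str):
        return 1
    left, right = m
    return deg(left) + deg(right)

z1, z2, z3, z4 = "z1", "z2", "z3", "z4"

assert deg(z1) == 1
assert deg((z1, z2)) == 2                    # a degree-2 product
assert deg(((z1, z2), z3)) == 3              # a degree-3 product
assert deg(((z1, z2), (z3, z4))) == 4
# Proposition 1 (cancellation) holds automatically: tuple equality is
# component-wise, so (m1, m2) == (m1', m2') iff m1 == m1' and m2 == m2'.
assert ((z1, z2) == (z1, z2)) and ((z1, z2) != (z2, z1))
```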
The \emph{free semigroup} on the set $A$, denoted $\mathscr{S}[A]$, is the set of all finite sequences
\[ f: \{1, 2, \ldots, n\} \to A \]
of elements of $A$ together with an operation $\circ_S$ defined as follows:
\\
\\
If $f_1 : \{1,2, \ldots, m\} \to A$ and $f_2 : \{1,2, \ldots, n\} \to A$ are elements of $\mathscr{S}[A]$, then
\[f_1 \circ_S f_2 : \{1,2, \ldots, m + n\} \to A\]
is such that
\[ (f_1 \circ_S f_2)(k) = f_1(k) \]
if $1 \leq k \leq m$ and
\[ (f_1 \circ_S f_2)(k) = f_2(k - m) \]
if $m+1 \leq k \leq m+n$ \cite{bourbaki_algebra}.
\\
The operation $\circ_S$, which is associative, is called \emph{concatenation}. It is standard notation to write, for example, the element $f: \{1, 2, 3\} \to Z$ with $f(i) = z_i$ for all $i$ as
\[ z_1 z_2 z_3 \,. \]
If
\[ f: \{1, 2, \ldots, n\} \to A \]
is an element of $\mathscr{S}[A]$, then we say that the \emph{degree} of $f$ is $n$ and we write $\text{deg}(f) = n$.
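A similarly minimal sketch (again using our own tuple encoding, not the paper's notation) realizes the free semigroup: an element $f:\{1,\ldots,n\}\to A$ is stored as the tuple of its values, concatenation is tuple concatenation, and the degree is the length.

```python
# An element f: {1,...,n} -> A of the free semigroup is stored as the tuple
# (f(1), ..., f(n)); concatenation of sequences is tuple concatenation.
def concat(f1, f2):
    return f1 + f2

def deg_seq(f):
    return len(f)

f = ("z1", "z2", "z3")          # the word written z1 z2 z3
g = ("z4",)

fg = concat(f, g)
assert fg == ("z1", "z2", "z3", "z4")
assert deg_seq(fg) == deg_seq(f) + deg_seq(g)   # degree is additive
# Associativity of concatenation:
assert concat(concat(f, g), f) == concat(f, concat(g, f))
```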
The following proposition, which is analogous to Proposition \ref{prop:magma1}, has a straightforward proof.
\begin{proposition} \label{prop:semigroup1}
Let $f_1, f'_1, f_2, f'_2 \in \mathscr{S}[A] $. If $\text{deg}(f_1) = \text{deg}(f'_1)$ or $\text{deg}(f_2) = \text{deg}(f'_2)$, then
\[ f_1 \circ_S f_2 = f'_1 \circ_S f'_2 \text{ if and only if } f_1 = f'_1 \text{ and } f_2 = f'_2 \,. \]
\end{proposition}
For the remainder of our discussion, we let $\mathbb{F}$ denote a fixed, arbitrarily chosen field. If $\mathbb{F}[\mathscr{M}[A]]$ is the free $\mathbb{F}$-vector space over the set $\mathscr{M}[A]$, then let
\[ \circ_N : \mathbb{F}[\mathscr{M}[A]] \times \mathbb{F}[\mathscr{M}[A]] \to \mathbb{F}[\mathscr{M}[A]] \]
be the unique bilinear product on $\mathbb{F}[\mathscr{M}[A]]$ which satisfies $m_1 \circ_N m_2 = m_1 \circ_M m_2$ for all $m_1, m_2 \in \mathscr{M}[A]$. The vector space $\mathbb{F}[\mathscr{M}[A]]$ together with the operation $\circ_N$ is an $\mathbb{F}$-algebra, which we will denote by $\mathscr{N}[A]$. $\mathscr{N}[A]$ is called the \emph{free (non-associative) $\mathbb{F}$-algebra on the set $A$} \cite{bourbaki_algebra}. Rather than using the symbol $\circ_N$, we will from now on denote the product of two elements $p_1, p_2 \in \mathscr{N}[A]$ by $(p_1, p_2)$. If $\{a_1, a_2, \ldots, a_n\}$ is a finite set of $n$ elements, then we will denote the free non-associative algebra $\mathscr{N}[\{a_1, a_2, \ldots, a_n\}]$ by $\mathscr{N}[a_1, a_2, \ldots, a_n]$.
Under the obvious identification, $\mathscr{M}[A]$ is a subset of $\mathscr{N}[A]$ that is a basis for the vector space $\mathscr{N}[A]$. We will call elements of $\mathscr{N}[A]$ $A$-polynomials. Thus, every $A$-polynomial can be expressed uniquely as a linear combination of $A$-monomials. For a nonzero $A$-polynomial $p$, if
\[ p = \sum_{i = 1}^k c_i n_i \]
for nonzero $c_i \in \mathbb{F}$ and $A$-monomials $n_i$, where $n_i \neq n_j$ if $i \neq j$, then the $A$-monomials $n_i$ are called the \emph{monomial terms} of $p$. The \emph{degree} of an $A$-polynomial $p$, denoted $\text{deg}(p)$, is defined to be the degree of the highest-degree monomial term of $p$. The degree of the zero $A$-polynomial is left undefined. If $p_1$ and $p_2$ are any nonzero $A$-polynomials, then we have
\[ \text{deg}((p_1, p_2)) = \text{deg}(p_1) + \text{deg}(p_2) \,. \]
If, for example, $\mathbb{F} = \mathbb{Q}$ and $Z$ is as given previously, then
\[ p_1 = ((z_2, z_1), (z_4, z_4)) + 2(z_1, z_1) \,,\]
and
\[ p_2 = 4 (z_3, (z_1, z_1)) + z_2 + 3 z_3 \,,\]
are elements of the free $\mathbb{Q}$-algebra over the set $Z$. We see that $\text{deg}(p_1) = 4$ and $\text{deg}(p_2) = 3$.
\\
For a positive integer $n$, let $\{X_1, \ldots, X_n\}$ be a finite set of $n$ indeterminates. We will refer to elements of $\mathscr{M}[X_1, \ldots, X_n]$ and $\mathscr{N}[X_1, \ldots, X_n]$ as simply \emph{monomials} and \emph{polynomials}, respectively.
If $\mathscr{A}$ is an $\mathbb{F}$-algebra, then for any $n$-tuple $(y_1, \ldots, y_n)$ of elements $y_1, \ldots, y_n \in \mathscr{A}$, there is a unique algebra homomorphism
\[ \phi_{(y_1, \ldots, y_n)} : \mathscr{N}[X_1, \ldots, X_n] \to \mathscr{A} \]
with
\[ \phi_{(y_1, \ldots, y_n)} (X_i) = y_i \]
for all $1 \leq i \leq n$. For a polynomial $P = P(X_1, \ldots, X_n)$, we will write
\[ \phi_{(y_1, \ldots, y_n)} (P) = P(y_1, \ldots, y_n) \,. \]
If $P$ is a polynomial, then we will write $P = P(X_1, \ldots, X_n)$ to express that $P$ is in $\mathscr{N}[X_1, \ldots, X_n]$.
For a subset $G$ of $\mathscr{A}$, we will let $\langle G \rangle$ denote the subalgebra of $\mathscr{A}$ generated by $G$. It is easy to verify that $\langle G \rangle$ can be described as the set of all elements of $\mathscr{A}$ of the form $P(g_1, \ldots, g_n)$, where, for some positive integer $n$, $P \in \mathscr{N}[X_1, \ldots, X_n]$ and $g_1, \ldots , g_n \in G$.
\section{Sequence Type and Product Type}
The following theorem describes an embedding of the free magma $\mathscr{M}[A]$ into the direct product of two magmas which, unless $A$ is a singleton set, are both easier to work with than $\mathscr{M}[A]$. An intuitive description of this embedding is given after the proof of the theorem.
\begin{thm} \label{thm:embedding}
Let $\mathscr{S}[A]$ be the free semigroup on $A$, let $\mathscr{M}[X]$ be the free magma on a singleton set $\{X\}$, and let $\pi_1$ and $\pi_2$ denote the canonical projections of the direct product of magmas $\mathscr{M}[X] \times \mathscr{S}[A]$ onto $\mathscr{M}[X]$ and $\mathscr{S}[A]$, respectively. There is a unique injective magma morphism $\phi$ of the free magma $\mathscr{M}[A]$ into $\mathscr{M}[X] \times \mathscr{S}[A]$ such that
\\
\begin{center}
\begin{varwidth}{\textwidth}
\begin{enumerate}[label=(\roman*)]
\item $\pi_1 \circ \phi$ is surjective and constant on the subset $A$ of $\mathscr{M}[A]$ and
\item $\pi_2 \circ \phi$ restricted to $A$ is the inclusion map $\iota_A: A \to \mathscr{S}[A]$.
\\
\end{enumerate}
\end{varwidth}
\end{center}
Up to isomorphism, $\mathscr{M}[X]$ is the unique magma with this property.
\end{thm}
\begin{proof}
Let $\phi : \mathscr{M}[A] \to \mathscr{M}[X] \times \mathscr{S}[A]$ be the unique magma morphism with $\phi(a) = (X, a)$ for all $a \in A$. It is easy to see that $\phi$ satisfies the two given conditions. A straightforward induction argument shows that
\begin{equation} \label{eq:embedding}
\text{deg}(m) = \text{deg}(\pi_1(\phi(m))) = \text{deg}(\pi_2(\phi(m)))
\end{equation}
for all $m \in \mathscr{M}[A]$.
We now prove injectivity of $\phi$ by induction. If $m, m' \in \mathscr{M}[A]$ both have degree 1 (so $m, m' \in A$) and $m \neq m'$, then clearly $\phi(m) \neq \phi(m')$. Suppose that $n \geq 1$ is such that for any $m, m' \in \mathscr{M}[A]$ with $m \neq m'$ and $1 \leq \text{deg}(m), \text{deg}(m') \leq n$, we have $\phi(m) \neq \phi(m')$. If $m, m' \in \mathscr{M}[A]$ are such that $m \neq m'$ and $1 \leq \text{deg}(m), \text{deg}(m') \leq n+1$, we have $m = (m_1, m_2)$ and $m' = (m'_1, m'_2)$ for some $m_i, m'_i \in \mathscr{M}[A]$ with $1 \leq \text{deg}(m_i), \text{deg}(m'_i) \leq n$. Since $m \neq m'$, we have $m_1 \neq m'_1$ or $m_2 \neq m'_2$. Without loss of generality, assume that $m_1 \neq m'_1$. By the induction hypothesis, we have $\phi(m_1) \neq \phi(m'_1)$ and so $ \pi_1(\phi(m_1)) \neq \pi_1(\phi(m'_1)) $ or $\pi_2(\phi(m_1)) \neq \pi_2(\phi(m'_1))$. If $ \pi_1(\phi(m_1)) \neq \pi_1(\phi(m'_1)) $, then
\[ \pi_1(\phi(m)) = (\pi_1(\phi(m_1)), \pi_1(\phi(m_2))) \neq (\pi_1(\phi(m'_1)), \pi_1(\phi(m'_2))) = \pi_1(\phi(m')) \]
by Proposition \ref{prop:magma1}. If $ \pi_1(\phi(m_1)) = \pi_1(\phi(m'_1)) $, then we must have
\[ \text{deg}(\pi_2(\phi(m_1))) = \text{deg}(\pi_2(\phi(m'_1))) \]
by \eqref{eq:embedding}. If, in addition, we have $ \pi_2(\phi(m_1)) \neq \pi_2(\phi(m'_1)) $, then
\[ \pi_2(\phi(m)) = (\pi_2(\phi(m_1)), \pi_2(\phi(m_2))) \neq (\pi_2(\phi(m'_1)), \pi_2(\phi(m'_2))) = \pi_2(\phi(m')) \]
by Proposition \ref{prop:semigroup1}. In either case, we have $\phi(m) \neq \phi(m')$. Injectivity of $\phi$ now follows easily.
We now show uniqueness of $\phi$. If $\psi$ is any such function, then, because $\pi_1 \circ \psi$ is both surjective and constant on $A$, $(\pi_1 \circ \psi)(A)$ is a one-element generating set for $\mathscr{M}[X]$. This implies that $(\pi_1 \circ \psi)(A) = \{X\}$ or, equivalently, that $(\pi_1 \circ \psi)(a) = X$ for every $a$ in $A$. Combining this together with the fact that $\pi_2 \circ \psi$ is the inclusion map from $A$ into $\mathscr{S}[A]$, we see that
\[ \psi(a) = ((\pi_1 \circ \psi)(a), (\pi_2 \circ \psi)(a)) = (X, a) \]
for each $a$ in $A$. Therefore, we must have $\psi = \phi$.
Assume that there is a magma $M'$ that can be used in place of $\mathscr{M}[X]$ in Theorem \ref{thm:embedding}. Let
\[ \phi' : \mathscr{M}[X] \to M' \times \mathscr{S}[X] \]
be the unique injective magma morphism described in the statement of the theorem. It is easily verified that $\pi'_1 \circ \phi'$, where $\pi'_1$ is the canonical projection of $M' \times \mathscr{S}[X]$ onto $M'$, is a magma isomorphism of $\mathscr{M}[X]$ with $M'$.
\end{proof}
To illustrate, let $Z = \{z_1, z_2, z_3, z_4\}$, let $\pi_{Z,1}$ and $\pi_{Z,2}$ be the canonical projections of $\mathscr{M}[X] \times \mathscr{S}[Z]$ onto $\mathscr{M}[X]$ and $\mathscr{S}[Z]$, respectively, and let $\phi_Z$ be the embedding described in Theorem \ref{thm:embedding}. If $m = (z_2, (z_3, (z_1, z_4)))$, then
\[ (\pi_{Z,1} \circ \phi_Z)(m) = (X, (X, (X, X))) \, \in \mathscr{M}[X] \]
and
\[ (\pi_{Z,2} \circ \phi_Z)(m) = z_2 z_3 z_1 z_4 \, \in \mathscr{S}[Z]. \]
This example illustrates a general principle regarding the functions $\pi_1 \circ \phi$ and $\pi_2 \circ \phi$ appearing in Theorem \ref{thm:embedding}. If an element of $\mathscr{M}[A]$ is expressed symbolically as in this example, then the function $\pi_1 \circ \phi$ encodes the arrangement of parentheses and the function $\pi_2 \circ \phi$ encodes the sequence obtained if all parentheses are removed. From now on, we will write $\pi_1 \circ \phi$ as $\Pi$ and $\pi_2 \circ \phi$ as $\Sigma$. For a given $m$ in $\mathscr{M}[A]$, we will refer to $\Pi(m)$ and $\Sigma(m)$ as the \emph{product type} of $m$ and the \emph{sequence type} of $m$, respectively.
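Both maps are straightforward to compute on a nested-tuple encoding of monomials (the encoding and the function names below are our own illustrative assumptions). The last assertion checks, by brute force, that $\phi=(\Pi,\Sigma)$ is injective on all monomials of degree at most $4$ over a two-letter alphabet, as Theorem \ref{thm:embedding} guarantees.

```python
X = "X"

def prod_type(m):
    """Product type Pi(m): replace every generator by the single letter X."""
    if isinstance(m, str):
        return X
    left, right = m
    return (prod_type(left), prod_type(right))

def seq_type(m):
    """Sequence type Sigma(m): the word obtained by erasing all parentheses."""
    if isinstance(m, str):
        return (m,)
    left, right = m
    return seq_type(left) + seq_type(right)

def phi(m):
    """The embedding phi(m) = (Pi(m), Sigma(m))."""
    return (prod_type(m), seq_type(m))

# The worked example: m = (z2, (z3, (z1, z4))).
m = ("z2", ("z3", ("z1", "z4")))
assert prod_type(m) == (X, (X, (X, X)))
assert seq_type(m) == ("z2", "z3", "z1", "z4")

# Brute-force injectivity check over {a, b} up to degree 4.
def monomials(gens, n):
    """All monomials of degree exactly n over the given generators."""
    if n == 1:
        return list(gens)
    out = []
    for k in range(1, n):
        out += [(l, r) for l in monomials(gens, k)
                       for r in monomials(gens, n - k)]
    return out

all_m = [w for n in range(1, 5) for w in monomials(("a", "b"), n)]
assert len(set(map(phi, all_m))) == len(all_m)   # phi is injective here
```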
We point out that the core idea underlying the embedding of Theorem \ref{thm:embedding} is by no means a novel one. If one regards, as a computer scientist might, the free magma on $A$ as the set of complete, planar, rooted binary trees with leaves labelled by elements of $A$, then for $m$ in $\mathscr{M}[A]$, $\Pi(m)$ gives the corresponding binary tree and $\Sigma(m)$ gives the labelling of its leaves. It is our hope that the framework provided by this embedding will lend itself well to effective computation in free non-associative algebras, and will ultimately enable us to discover a wealth of new Schreier varieties. It is the author's view that the proofs of the results presented in this paper serve as a testament to the promise of this approach.
\\
The next proposition utilizes the notion of product type. It will be used in Section 4.
\begin{proposition} \label{prop:differentpt_eval}
Let $m_1, \ldots, m_n \in \mathscr{M}[A]$ be of the same degree, and let $M = M(X_1, \ldots, X_n)$ and $M' = M'(X_1, \ldots, X_n)$ be monomials. If $\Pi(M) \neq \Pi(M')$, then
\[ \Pi(M(m_1, \ldots, m_n)) \neq \Pi(M'(m_1, \ldots, m_n)) \,. \]
\end{proposition}
\begin{proof}
The proof is by induction on deg($M$). The result is vacuously true if $\text{deg}(M) = 1$. For some $k > 1$, assume that the result holds if $\text{deg}(M) = j$ for any $j$ with $1 \leq j < k$ and assume now that $\text{deg}(M) = k$. If $M'$ is a monomial of degree 1, then the result is trivial. If $M'$ is of degree greater than one and if $M = (M_1, M_2)$ and $M' = (M'_1, M'_2)$ for monomials $M_i, M'_i$, then (since $\Pi(M) \neq \Pi(M')$) we must have $\Pi(M_1) \neq \Pi(M'_1)$ or $\Pi(M_2) \neq \Pi(M'_2)$. Without any loss of generality, assume that $\Pi(M_1) \neq \Pi(M'_1)$. By the induction hypothesis, we have $\Pi(M_1(m_1, \ldots, m_n)) \neq \Pi(M'_1(m_1, \ldots, m_n))$. Since
\[ \Pi(M(m_1, \ldots, m_n)) = (\Pi(M_1(m_1, \ldots, m_n)), \Pi(M_2(m_1, \ldots, m_n))) \]
and
\[ \Pi(M'(m_1, \ldots, m_n)) = (\Pi(M'_1(m_1, \ldots, m_n)), \Pi(M'_2(m_1, \ldots, m_n))) \,, \]
Proposition \ref{prop:magma1} implies that
\[ \Pi(M(m_1, \ldots, m_n)) \neq \Pi(M'(m_1, \ldots, m_n)) \,. \]
\end{proof}
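The proposition just proved can be checked on small instances. In the sketch below (the integer encoding of indeterminates and the helper names are our own assumptions, not the paper's notation), substituting same-degree monomials for $X_1, X_2, X_3$ in two degree-3 monomials with different product types yields evaluations whose product types again differ.

```python
def substitute(M, values):
    """Evaluate a monomial M(X_1,...,X_n): a leaf is the index i of X_i
    (encoded as an int), a pair is a product; values[i-1] replaces X_i."""
    if isinstance(M, int):
        return values[M - 1]
    left, right = M
    return (substitute(left, values), substitute(right, values))

def prod_type(m):
    """Pi(m): replace every leaf (generator or indeterminate) by X."""
    if not isinstance(m, tuple):
        return "X"
    return (prod_type(m[0]), prod_type(m[1]))

# Two degree-3 monomials with different product types:
M1 = ((1, 2), 3)      # ((X1, X2), X3)
M2 = (1, (2, 3))      # (X1, (X2, X3))
assert prod_type(M1) != prod_type(M2)          # hypothesis of the proposition

# Substituends of the same degree (all of degree 2):
ms = [("z1", "z2"), ("z3", "z4"), ("z1", "z4")]

e1 = substitute(M1, ms)
e2 = substitute(M2, ms)
assert prod_type(e1) != prod_type(e2)          # conclusion of the proposition
```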
\section{Direct Sum Decompositions}
For each positive integer $n$, let $\mathscr{H}_n[A]$ be the subspace of $\mathscr{N}[A]$ generated by the $A$-monomials of degree $n$. We have
\begin{equation*}
\mathscr{N}[A] = \bigoplus_{n = 1}^{\infty} \mathscr{H}_n[A] \,,
\end{equation*}
and $\mathscr{N}[A]$ is a graded $\mathbb{F}$-algebra under this decomposition. For each $n$, $\mathscr{H}_n[A]$ is called the \emph{homogeneous component of $\mathscr{N}[A]$ of degree $n$}, and nonzero elements of $\mathscr{H}_n[A]$ are called \emph{homogeneous $A$-polynomials of degree $n$} \cite{bourbaki_algebra}.
\\
Thus, if $Z = \{z_1, z_2, z_3, z_4\}$ and $\mathbb{F} = \mathbb{Q}$, then the $Z$-polynomial
\[ p_1 = (z_2, z_1) + 4 (z_1, z_4) + 5 (z_2, z_2) + (z_1, z_3) \]
is homogeneous of degree 2, and the $Z$-polynomial
\[ p_2 = (((z_1, z_3), z_4), z_2) + 2 (((z_3, z_2), z_2), z_2) + (z_3, (z_1, (z_2, z_2))) \]
is homogeneous of degree 4.
\\
For the remainder of our work, for any $n$, $\pi_n$ will denote the canonical projection map
\[ \pi_n : \mathscr{N}[A] \to \mathscr{H}_n[A] \,. \]
For any $A$-polynomial $p$, $\pi_n(p)$ will be called the \emph{homogeneous component of $p$ of degree $n$}.
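The projections $\pi_n$ act on an $A$-polynomial simply by discarding the monomial terms of the wrong degree. A dictionary-based sketch (our own encoding of polynomials as maps from monomials to coefficients, not the paper's notation) makes this concrete for the polynomial $p_1=((z_2,z_1),(z_4,z_4))+2(z_1,z_1)$ considered earlier.

```python
from fractions import Fraction

# Our encoding: an A-polynomial over Q is a dict mapping A-monomials
# (nested tuples / generator strings) to nonzero rational coefficients.
def deg(m):
    """Degree of an A-monomial."""
    return 1 if isinstance(m, str) else deg(m[0]) + deg(m[1])

def homogeneous_component(p, n):
    """pi_n(p): keep only the monomial terms of degree n."""
    return {m: c for m, c in p.items() if deg(m) == n}

# p1 = ((z2, z1), (z4, z4)) + 2 (z1, z1): homogeneous parts of degrees 4 and 2.
p1 = {(("z2", "z1"), ("z4", "z4")): Fraction(1), ("z1", "z1"): Fraction(2)}

assert homogeneous_component(p1, 4) == {(("z2", "z1"), ("z4", "z4")): Fraction(1)}
assert homogeneous_component(p1, 2) == {("z1", "z1"): Fraction(2)}
assert homogeneous_component(p1, 3) == {}

# deg(p) is the largest n for which pi_n(p) is nonzero:
assert max(deg(m) for m in p1) == 4
```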
The following proposition states that homogeneity is preserved when homogeneous $A$-polynomials are substituted into a monomial $M$.
\begin{proposition} \label{prop:eval_degree}
If $M(X_1, \ldots, X_n)$ is a monomial and if $h_1, \ldots, h_n$ are nonzero homogeneous $A$-polynomials, then $M(h_1, \ldots, h_n)$ is a homogeneous $A$-polynomial with
\[ \text{deg}(M(h_1, \ldots, h_n)) = \sum\limits_{i=1}^{\text{deg}(M)} \text{deg}(M_i(h_1, \ldots, h_n)) \,,\]
where $M_i = (\Sigma(M))(i)$ for all $i$. In particular, if $\text{deg}(h_i) = k$ for all $i$, then $M(h_1, \ldots, h_n)$ is a homogeneous $A$-polynomial of degree $k(\text{deg}(M))$.
\end{proposition}
\begin{proof}
If $\text{deg}(M) = 1$, then the result is trivial. Assume that for some $n \geq 1$, the result holds for all monomials of degree $n$ and suppose now that $\text{deg}(M) = n + 1$. If $M'$ and $M''$ are monomials such that $M = (M', M'')$, then we have
\begin{align*}
\text{deg}(M(h_1, \ldots, h_n)) &= \text{deg}(M'(h_1, \ldots, h_n), M''(h_1, \ldots, h_n)) \\
&= \text{deg}(M'(h_1, \ldots, h_n)) + \text{deg}(M''(h_1, \ldots, h_n)) \\
&= \sum\limits_{i=1}^{d'} \text{deg}(M'_i(h_1, \ldots, h_n)) + \sum\limits_{i=1}^{n + 1 - d'} \text{deg}(M''_i(h_1, \ldots, h_n))
\end{align*}
where $d' = \text{deg}(M')$. Under the concatenation operation on $\mathscr{S}[A]$, we have
\[ (\Sigma(M'))(j) = (\Sigma(M))(j) \]
for all $j$ with $1 \leq j \leq d'$ and
\[ (\Sigma(M''))(j) = (\Sigma(M))(d' + j) \]
for all $j$ with $1 \leq j \leq n + 1 - d'$. So,
\[ \sum\limits_{i=1}^{d'} \text{deg}(M'_i(h_1, \ldots, h_n)) = \sum\limits_{i=1}^{d'} \text{deg}(M_i(h_1, \ldots, h_n)) \]
and
\[ \sum\limits_{i=1}^{n + 1 - d'} \text{deg}(M''_i(h_1, \ldots, h_n)) = \sum\limits_{i = d' + 1}^{n + 1} \text{deg}(M_i(h_1, \ldots, h_n)) \,. \]
Therefore,
\begin{align*}
\text{deg}(M(h_1, \ldots, h_n)) &= \sum\limits_{i=1}^{d'} \text{deg}(M'_i(h_1, \ldots, h_n)) + \sum\limits_{i=1}^{n + 1 - d'} \text{deg}(M''_i(h_1, \ldots, h_n)) \\
&= \sum\limits_{i=1}^{d'} \text{deg}(M_i(h_1, \ldots, h_n)) + \sum\limits_{i=d' + 1}^{n+1} \text{deg}(M_i(h_1, \ldots, h_n)) \\
&= \sum\limits_{i=1}^{n+1} \text{deg}(M_i(h_1, \ldots, h_n))
\end{align*}
and so the proposition holds by induction.
\end{proof}
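To illustrate Proposition \ref{prop:eval_degree}, let $Z$ and $\mathbb{F}$ be as above, let $M = M(X_1, X_2) = (X_1, (X_2, X_1))$, and let $h_1 = (z_1, z_2)$ and $h_2 = z_3 + 2 z_4$, which are homogeneous $Z$-polynomials of degrees 2 and 1, respectively. Here $\Sigma(M)$ is the sequence $(X_1, X_2, X_1)$, so Proposition \ref{prop:eval_degree} gives
\[ \text{deg}(M(h_1, h_2)) = \text{deg}(h_1) + \text{deg}(h_2) + \text{deg}(h_1) = 5 \,, \]
and indeed
\[ M(h_1, h_2) = ((z_1, z_2), (z_3, (z_1, z_2))) + 2 ((z_1, z_2), (z_4, (z_1, z_2))) \]
is homogeneous of degree 5.
\\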
A subalgebra $S$ of $\mathscr{N}[A]$ is said to be \emph{homogeneous} if for every positive integer $n$ and $p \in S$ we have $\pi_n(p) \in S$ \cite{bourbaki_algebra}. The following proposition states that a subalgebra of $\mathscr{N}[A]$ that is generated by a set of homogeneous $A$-polynomials is a homogeneous subalgebra.
\begin{proposition} \cite{lewin} \label{prop:homog_subalgebra}
If $H$ is a set of homogeneous $A$-polynomials, then $\langle H \rangle$ is a homogeneous subalgebra of $\mathscr{N}[A]$.
\end{proposition}
\begin{proof}
Let $P = P(X_1, \ldots, X_n)$ be a nonzero polynomial and let $h_1, \ldots, h_n$ be elements of $H$. Assume $m$ is a positive integer such that
\[ \pi_m(P(h_1, \ldots, h_n)) \neq 0 \,. \]
We have $P = \sum\limits_{i=1}^l c_i M_i$ for some nonzero $c_i \in \mathbb{F}$ and distinct monomials $M_1, \ldots, M_l$. By re-indexing if necessary, we may assume that $s$ with $1 \leq s \leq l$ is such that $ \text{deg}(M_i(h_1, \ldots, h_n)) = m $ if and only if $1 \leq i \leq s$. If $P' = \sum\limits_{i=1}^s c_i M_i$, then we have
\[ \pi_m(P(h_1, \ldots, h_n)) = P'(h_1, \ldots, h_n) \in \langle H \rangle \,. \]
\end{proof}
A direct sum decomposition of each homogeneous component $\mathscr{H}_n[A]$ of $\mathscr{N}[A]$ is afforded to us by the product type function $\Pi$. If $n$ is a positive integer, then for any $t$ in $\mathscr{M}[X]$ with $\text{deg}(t) = n$, we denote by $\mathscr{H}_{n,t}[A]$ the subspace of $\mathscr{H}_n[A]$ generated by the $A$-monomials $m \in \mathscr{M}[A]$ with $\Pi(m) = t$. If, for each positive integer $n$, $\mathscr{M}_n$ is the set of degree-$n$ elements of $\mathscr{M}[X]$, then we have
\begin{equation*}
\mathscr{H}_n[A] = \bigoplus_{t' \in \mathscr{M}_n} \mathscr{H}_{n,t'}[A] \,.
\end{equation*}
This direct sum decomposition will be referred to as the \emph{product type direct sum decomposition} of $\mathscr{H}_n[A]$.
\\
To illustrate, if $t_1 = ((X, X), X)$ and $t_2 = ((X, (X, X)), X)$, then
\[ p_3 = ((z_2, z_2), z_1) - 4 ((z_1, z_2), z_1) \]
is an element of $\mathscr{H}_{3, t_1}[Z]$ and
\[ p_4 = 3((z_4, (z_4, z_4)), z_4) + 2((z_2, (z_3, z_1)), z_4) + 3((z_1, (z_2, z_3)), z_4) \]
is an element of $\mathscr{H}_{4, t_2}[Z]$.
\\
We now present a proposition that demonstrates the utility of the embedding described in Theorem \ref{thm:embedding}. This proposition will play a key role in the proof of Theorem \ref{thm:algebra5}.
\begin{proposition} \label{prop:algebra1}
Let $M = M(X_1, \ldots, X_n)$ be a monomial and let $p_1, \ldots, p_n \in \mathscr{N}[A]$ be linearly independent. If
\[ \gamma: \Pi^{-1}(\{\Pi(M)\}) \subset \mathscr{N}[X_1, \ldots, X_n] \to \mathscr{N}[A] \]
is such that
\[ \gamma(M') = M'(p_1, \ldots, p_n) \]
for all $M' \in \Pi^{-1}(\{\Pi(M)\})$, then $\gamma$ is injective and
\[ \gamma(\Pi^{-1}(\{\Pi(M)\})) \]
is a linearly independent subset of $\mathscr{N}[A]$.
\end{proposition}
\begin{proof}
We will prove the result by induction on $\text{deg}(M)$. If $\text{deg}(M) = 1$, then
\[ \Pi^{-1}(\{\Pi(M)\}) = \{X_1, \ldots, X_n\} \,, \]
\[ \gamma(X_i) = p_i \text{ for each $i$} \,, \]
and
\[ \gamma(\Pi^{-1}(\{\Pi(M)\})) = \{p_1, \ldots, p_n\} \,. \]
By assumption, $\{p_1, \ldots, p_n\}$ is a linearly independent set. It is clear that $\gamma$ is injective.
Assume that for some $n > 1$, the result holds for all $M' = M'(X_1, \ldots, X_n)$ with $1 \leq \text{deg}(M') < n$. Suppose now that $\text{deg}(M) = n$, and let $M_1$ and $M_2$ be the unique monomials such that $M = (M_1, M_2)$. For each $i$, let
\[ \gamma_i : \Pi^{-1}(\{\Pi(M_i)\}) \subset \mathscr{N}[X_1, \ldots, X_n] \to \mathscr{N}[A] \]
be such that
\[ \gamma_i(M') = M'(p_1, \ldots, p_n) \]
for each $M' \in \Pi^{-1}(\{\Pi(M_i)\})$. Since
\[ \Pi^{-1}(\{\Pi(M)\}) = \left\{ (M', M'') : M' \in \Pi^{-1}(\{\Pi(M_1)\}) \text{ and } M'' \in \Pi^{-1}(\{\Pi(M_2)\}) \right\} \,, \]
we may define a function
\[ \phi_M : \Pi^{-1}(\{\Pi(M)\}) \to \mathscr{N}[A] \otimes \mathscr{N}[A] \]
by letting
\[ \phi_M ((M',M'')) = M'(p_1, \ldots, p_n) \otimes M''(p_1, \ldots, p_n) \]
for each $M' \in \Pi^{-1}(\{\Pi(M_1)\})$ and $M'' \in \Pi^{-1}(\{\Pi(M_2)\})$.
Because $\text{deg}(M_i) < n$ for each $i$, it follows from the induction hypothesis that $\phi_M$ is injective and
\[ \phi_M(\Pi^{-1}(\{\Pi(M)\})) \]
is a linearly independent subset of $\mathscr{N}[A] \otimes \mathscr{N}[A]$. If
\[ \mathscr{N}[A] \mathscr{N}[A] = \text{span}\left(\{(y_1, y_2) : y_1, y_2 \in \mathscr{N}[A]\}\right) \,, \]
then let
\[ \phi: \mathscr{N}[A] \mathscr{N}[A] \to \mathscr{N}[A] \otimes \mathscr{N}[A] \]
be the unique linear isomorphism such that
\[ \phi((m_1, m_2)) = m_1 \otimes m_2 \]
for all $m_1, m_2 \in \mathscr{M}[A]$. We have
\[ \gamma = \phi^{-1} \circ \phi_M \]
and thus $\gamma$ is injective and
\[ \gamma(\Pi^{-1}(\{\Pi(M)\})) \]
is a linearly independent subset of $\mathscr{N}[A]$.
\end{proof}
To illustrate Proposition \ref{prop:algebra1}, let $Z = \{z_1, z_2, z_3, z_4\}$, $\mathbb{F} = \mathbb{Q}$, $p_1 = (z_1, z_2)$, and $p_2 = ((z_3, z_3), z_2)$. If $M = (X_1, X_1)$, then
\[ \Pi^{-1}(\{\Pi(M)\}) = \{ (X_1, X_1), (X_1, X_2), (X_2, X_1), (X_2, X_2) \} \]
and $\gamma(\Pi^{-1}(\{\Pi(M)\}))$ consists of the elements
\[ ((z_1, z_2), (z_1, z_2)) \,, \]
\[ ((z_1, z_2), ((z_3, z_3), z_2)) \,, \]
\[ (((z_3, z_3), z_2), (z_1, z_2)) \,, \]
and
\[ (((z_3, z_3), z_2), ((z_3, z_3), z_2)) \]
which are all distinct and linearly independent by Proposition \ref{prop:algebra1}.
\\
\section{Algebraic Independence}
Let $\mathscr{A}$ be an $\mathbb{F}$-algebra. A subset $\alpha$ of $\mathscr{A}$ will be called \emph{algebraically independent} if for any distinct $y_1, \ldots, y_n \in \alpha$ and any polynomial $P = P(X_1, \ldots, X_n)$,
\[ P(y_1, \ldots, y_n) = 0 \text{ implies } P = 0. \]
By considering homogeneous degree-$1$ polynomials, we see that every algebraically independent set is also linearly independent.
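For instance, the set $\{z_1, (z_1, z_1)\}$ is linearly independent in $\mathscr{N}[Z]$ but is not algebraically independent, since the nonzero polynomial $P(X_1, X_2) = (X_1, X_1) - X_2$ satisfies
\[ P(z_1, (z_1, z_1)) = (z_1, z_1) - (z_1, z_1) = 0 \,. \]
Thus, linear independence does not imply algebraic independence.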
The significance of algebraic independence is the following:
\begin{thm} \label{thm:ai_meaning}
A nonempty subset $\alpha$ of an $\mathbb{F}$-algebra $\mathscr{A}$ is algebraically independent if and only if the algebra homomorphism
\[ \psi : \mathscr{N}[\alpha] \to \langle \alpha \rangle \]
with $\psi(y) = y$ for all $y \in \alpha$ is an isomorphism.
\end{thm}
\begin{proof}
It is clear that $\psi$ is surjective. We will show that $\alpha$ is algebraically independent if and only if $\psi$ is injective.
For any $y' \in \text{ker} \, \psi$, we have
\[ y' = P(y'_1, \ldots, y'_n) \]
for some polynomial $P = P(X_1, \ldots, X_n)$ and some $y'_1, \ldots, y'_n \in \alpha$. It follows from this that $\psi$ is injective if and only if for any choice of distinct $y_1, \ldots, y_n \in \alpha$, the restriction of $\psi$ to the subalgebra $\langle y_1, \ldots, y_n \rangle$ of $\mathscr{N}[\alpha]$ is injective.
For distinct $y_1, \ldots, y_n \in \alpha$, let
\[ \gamma: \langle y_1, \ldots, y_n \rangle \subset \mathscr{N}[\alpha] \to \mathscr{N}[y_1, \ldots, y_n] \]
be the isomorphism with $\gamma(y_i) = y_i$ for all $i$ and let
\[ \gamma_g : \mathscr{N}[y_1, \ldots, y_n] \to \mathscr{N}[X_1, \ldots, X_n] \]
be the isomorphism induced by the function
\[ g: \{y_1, \ldots, y_n\} \to \{X_1, \ldots, X_n\} \]
with $g(y_i) = X_i$ for all $i$ \cite{ismail}. If
\[ \phi_{(y_1, \ldots, y_n)} : \mathscr{N}[X_1, \ldots, X_n] \to \langle \alpha \rangle \]
is the homomorphism with $\phi_{(y_1, \ldots, y_n)}(X_i) = y_i$ for all $i$, then we see that $\phi_{(y_1, \ldots, y_n)} \circ \gamma_g \circ \gamma$ is injective if and only if $\phi_{(y_1, \ldots, y_n)}$ is injective. Since $\phi_{(y_1, \ldots, y_n)} \circ \gamma_g \circ \gamma$ is the restriction of $\psi$ to the subalgebra $\langle y_1, \ldots, y_n \rangle$ of $\mathscr{N}[\alpha]$, we see that $\psi$ is injective if and only if $\phi_{(y_1, \ldots, y_n)}$ is injective.
It follows easily from the definition of algebraic independence that $\phi_{(y_1, \ldots, y_n)}$ is injective for any choice of distinct $y_1, \ldots, y_n \in \alpha$ if and only if $\alpha$ is an algebraically independent set.
\end{proof}
Theorem \ref{thm:ai_meaning} implies that a subset $\alpha$ of an $\mathbb{F}$-algebra $\mathscr{A}$ is algebraically independent if and only if the subalgebra $\langle \alpha \rangle$ of $\mathscr{A}$ generated by $\alpha$ is free over $\alpha$. Thus, our definition of algebraic independence is equivalent to the one given in \cite{umirbaev}. Our definition closely resembles the definition of algebraic independence that is used in ring theory \cite{lang}.
\\
We now present several key results involving algebraic independence in free non-associative algebras. The first two results are well-known, and are both consequences of the fact that the variety of non-associative algebras possesses what is known as the \emph{Nielsen property} \cite{mikhalev} (see the paragraph following the statement of Theorem \ref{thm:algebra6}). We give proofs of these results in order to demonstrate our alternative framework for studying free non-associative algebras. In particular, Proposition \ref{prop:algebra1}, which makes use of the embedding given in Theorem \ref{thm:embedding}, is used in the final step of the proof of Theorem \ref{thm:algebra5}.
\begin{thm} \label{thm:algebra5}
Let $\alpha \subset \mathscr{N}[A]$ be a set of nonzero, homogeneous $A$-polynomials of the same degree. If $\alpha$ is a linearly independent set, then $\alpha$ is also an algebraically independent set.
\end{thm}
\begin{proof}
Let $k \geq 1$ be such that every $h \in \alpha$ is a homogeneous $A$-polynomial of degree $k$, let $h_1, \ldots, h_n$ be distinct elements of $\alpha$, and let $P = P(X_1, \ldots, X_n)$ be a nonzero polynomial. We will show that $P(h_1, \ldots, h_n)$ is nonzero.
For some $t \geq 1$, we have
\[ P(X_1, \ldots, X_n) = \sum\limits_{i = 1}^{t} c_i M_i(X_1, \ldots, X_n) \]
where $c_i \in \mathbb{F}$ are nonzero for all $i$ and $M_1, \ldots, M_t$ are distinct monomials. It follows from the bilinearity of the product operation on $\mathscr{N}[A]$ together with Proposition \ref{prop:eval_degree} that each $M_i(h_1, \ldots, h_n)$ is a homogeneous $A$-polynomial of degree $k(\text{deg}(M_i))$. By reindexing if necessary, we can assume that some $l$ with $1 \leq l \leq t$ is such that
\[ \text{deg}(M_1) = \text{deg}(M_2) = \cdots = \text{deg}(M_l) = \text{max}\left(\text{deg}(M_1), \text{deg}(M_2), \ldots, \text{deg}(M_t)\right) \]
and $\text{deg}(M_j) < \text{deg}(M_1)$ for all $l < j \leq t$. If $P_1 = \sum\limits_{i=1}^l c_i M_i $ and $P_2 = \sum\limits_{i=l + 1}^t c_i M_i $, then we have
\[ \pi_{k(\text{deg}(M_1))}(P_1(h_1, \ldots, h_n)) = P_1(h_1, \ldots, h_n) \]
and
\[ \pi_{k(\text{deg}(M_1))}(P_2(h_1, \ldots, h_n)) = 0 \,. \]
It suffices to show that $P_1(h_1, \ldots, h_n)$ is nonzero.
\\
Starting now with $P_1$, we can assume (by reindexing if necessary) that some $l'$ with $1 \leq l' \leq l$ is such that
\[ \Pi(M_1) = \Pi(M_2) = \cdots = \Pi(M_{l'}) \]
and $\Pi(M_{j'}) \neq \Pi(M_1)$ for all $j'$ with $l' < j' \leq l$. It follows from Proposition \ref{prop:differentpt_eval} that no monomial term of $M_{j'}(h_1, \ldots, h_n)$ for any $j'$ with $l' < j' \leq l$ has the same product type as any monomial term of $M_i(h_1, \ldots, h_n)$ for any $i$ with $1 \leq i \leq l'$. Thus, if $ P'_1 = \sum\limits_{i=1}^{l'} c_i M_i $, then it suffices to show that $P'_1(h_1, \ldots, h_n)$ is nonzero. Because $M_1, M_2, \ldots, M_{l'}$ are distinct and have the same product type, and because $h_1, \ldots, h_n$ are linearly independent, the fact that $P'_1(h_1, \ldots, h_n)$ is nonzero follows from Proposition \ref{prop:algebra1}.
\end{proof}
\vspace{3mm}
In light of the following theorem, we emphasize that for a nonzero $A$-polynomial $p$, $\pi_{deg(p)}(p)$ is the highest-degree nonzero homogeneous component of $p$.
\begin{thm} \label{thm:algebra6}
Let $\alpha$ be a set of nonzero $A$-polynomials and let
\[ \alpha' = \{ \pi_{deg(p)} (p) : p \in \alpha \} \,. \]
If $\alpha'$ is algebraically independent and if
\[ \pi_{deg(p_1)}(p_1) \neq \pi_{deg(p_2)}(p_2) \]
for all $p_1, p_2 \in \alpha$ with $p_1 \neq p_2$, then $\alpha$ is algebraically independent.
\end{thm}
In other words, if no two elements of $\alpha$ have the same highest-degree homogeneous component and if the subset of $\mathscr{N}[A]$ consisting of the highest-degree homogeneous components of elements of $\alpha$ is algebraically independent, then $\alpha$ is algebraically independent. Such a set $\alpha$ is said to be \emph{reduced} \cite{mikhalev}. Theorem \ref{thm:algebra6} states that every reduced set is also an algebraically independent set. In the language of universal algebra, this is what is known as the Nielsen property.
\begin{proof}
Let $P = P(X_1, \ldots, X_n)$ be a nonzero polynomial and let $p_1, \ldots, p_n$ be distinct elements of $\alpha$. We will show that $P(p_1, \ldots, p_n)$ is nonzero.
For some $t \geq 1$, $P = \sum\limits_{i = 1}^{t} c_i M_i $ where $M_1, \ldots, M_t$ are distinct monomials and each $c_i \in \mathbb{F}$ is nonzero. For each $i$ with $1 \leq i \leq t$,
\[ M_i(p_1, \ldots, p_n) = M_i(\pi_{deg(p_1)}(p_1), \ldots, \pi_{deg(p_n)}(p_n)) + r_i \]
where $r_i$ is a (possibly zero) $A$-polynomial with
\[ \text{deg}(r_i) < \text{deg}\left(M_i \left(\pi_{\text{deg}(p_1)}(p_1), \ldots, \pi_{\text{deg}(p_n)}(p_n)\right)\right) \]
if $r_i$ is nonzero.\footnote{This is easily shown using the bilinearity of the product operation on $\mathscr{N}[A]$.} By re-indexing if necessary, we may assume that some $s \geq 1$ is such that
\[ \text{deg}\left(M_i\left(\pi_{deg(p_1)}(p_1), \ldots, \pi_{deg(p_n)}(p_n)\right)\right) = \text{deg}(P(p_1, \ldots, p_n)) \]
if and only if $1 \leq i \leq s$. We have
\[ \pi_{\text{deg}(P(p_1, \ldots, p_n))} (P(p_1, \ldots, p_n)) = \sum\limits_{i = 1}^{s} c_i M_i \left (\pi_{deg(p_1)} (p_1), \ldots, \pi_{deg(p_n)}(p_n) \right) \]
and by algebraic independence of $\alpha'$,
\[ \pi_{deg(P(p_1, \ldots, p_n))} (P(p_1, \ldots, p_n)) \neq 0 \,. \]
Thus, the highest degree homogeneous component of $P(p_1, \ldots, p_n)$ is nonzero which implies that $P(p_1, \ldots, p_n)$ is nonzero.
\end{proof}
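To illustrate Theorem \ref{thm:algebra6}, let $\alpha = \{z_1 + (z_3, z_4), \; z_2 + (z_4, z_3)\} \subset \mathscr{N}[Z]$. The highest-degree homogeneous components of the two elements of $\alpha$ are $(z_3, z_4)$ and $(z_4, z_3)$, which are distinct and linearly independent homogeneous $Z$-polynomials of degree 2. By Theorem \ref{thm:algebra5}, $\alpha' = \{(z_3, z_4), (z_4, z_3)\}$ is algebraically independent, and so Theorem \ref{thm:algebra6} implies that $\alpha$ is algebraically independent.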
\vspace{3mm}
The next three results can be viewed, collectively, as a generalization of Kurosh's Theorem \cite{kurosh}. Indeed, Kurosh's Theorem is a direct corollary of Theorem \ref{thm:algebra8}.
We will say that a subset $S$ of $\mathscr{N}[A]$ is of \emph{bounded degree} if there is a positive integer $N$ such that $deg(p) \leq N$ for all nonzero $p$ in $S$. If $S$ is nonempty and of bounded degree, then we will denote the smallest such $N$ by $\text{deg}(S)$. If $S$ is empty, then we define $\text{deg}(S) = 0$.
\begin{thm} \label{thm:algebra7}
Let $\alpha_1 \subset \mathscr{N}[A]$ be an algebraically independent set of homogeneous $A$-polynomials which is of bounded degree. If $\alpha_2 \subset \mathscr{N}[A]$ is a linearly independent set of homogeneous $A$-polynomials of the same degree $k \geq \text{deg}(\alpha_1)$ and if
\[ \langle \alpha_1 \rangle \cap \text{span}(\alpha_2) = \{ 0 \} \,, \]
then $\alpha = \alpha_1 \cup \alpha_2$ is an algebraically independent set.
\end{thm}
\begin{proof}
Let $h_1, \ldots, h_n$ be distinct elements of $\alpha = \alpha_1 \cup \alpha_2$ and let $P = P(X_1, \ldots, X_n)$ be a nonzero polynomial with $P = \sum\limits_{i=1}^t c_i M_i$ for distinct monomials $M_1, \ldots, M_t$ and nonzero $c_1, \ldots, c_t$ in $\mathbb{F}$. We will show that $P(h_1, \ldots, h_n)$ is nonzero.
If $h_i \in \alpha_1$ for all $i$ or if $h_i \in \alpha_2$ for all $i$, then the result holds by algebraic independence of $\alpha_1$ and $\alpha_2$. (Algebraic independence of $\alpha_2$ follows from Theorem \ref{thm:algebra5}.) So, by re-indexing if necessary, suppose that $j$ with $1 \leq j < n$ is such that $h_1, \ldots, h_j$ are elements of $\alpha_1$ and $h_{j+1}, \ldots, h_n$ are elements of $\alpha_2$. By considering the direct sum decomposition of $\mathscr{N}[A]$ into homogeneous components we may assume that for every $i$, $M_i(h_1, \ldots, h_n)$ is a homogeneous $A$-polynomial of degree $d \geq 1$.
We will prove the result by induction on $d$. Because at least one of the $h_i$ is in $\alpha_2$, we can assume that $d \geq k$. Otherwise, we would have $M_i = M_i(X_1, \ldots, X_j)$ for each $i$, and the fact that $P(h_1, \ldots, h_n)$ is nonzero would then follow from the algebraic independence of $\alpha_1$. If $d = k$, then the fact that $P(h_1, \ldots, h_n)$ is nonzero follows easily from the algebraic independence of $\alpha_1$, the linear independence of $\alpha_2$, and the assumption that \[ \langle \alpha_1 \rangle \cap \text{span}(\alpha_2) = \{ 0 \}. \]
Suppose $l > k$ is such that $P(h_1, \ldots, h_n)$ is nonzero if $k \leq d < l$, and assume now that $d = l$. Since $l > k$, for each $i$ with $1 \leq i \leq t$ we have $M_i = (M'_i, M''_i)$ for some monomials $M'_i$ and $M''_i$. By re-indexing if necessary, we may assume that $r \geq 1$ is such that
\[ M'_1(h_1, \ldots, h_n), \ldots, M'_r(h_1, \ldots, h_n) \]
(which are all homogeneous $A$-polynomials by Proposition \ref{prop:eval_degree}) are all of the same degree $d' \geq 1$ and that
\[ \text{deg}(M'_j(h_1, \ldots, h_n)) \neq d' \]
for all $j > r$. Under this assumption, every $A$-monomial term of
\[ M_i(h_1, \ldots, h_n) = (M'_i(h_1, \ldots, h_n), M''_i(h_1,\ldots, h_n)) \]
is of the form $(m'_i, m''_i)$ for $A$-monomials $m'_i$ and $m''_i$ such that $\text{deg}(m'_i) = d'$ if and only if $i \leq r$. It follows that for any $j > r$ and $i \leq r$, no $A$-monomial term of $M_j(h_1, \ldots, h_n)$ is an $A$-monomial term of $M_i(h_1, \ldots, h_n)$. Thus, it suffices to show that the polynomial $P' = \sum\limits_{i=1}^r c_i M_i$ is such that $P'(h_1, \ldots, h_n)$ is nonzero.
Using bilinearity and re-indexing if necessary, for some polynomials $Q_i$ we may write
\[ P' = \sum\limits_{i=1}^r c_i M_i = \sum\limits_{i=1}^s (M'_i, Q_i) \]
where $s \leq r$, $M'_i \neq M'_j$ for $i \neq j$, and each $Q_i(h_1, \ldots, h_n)$ is a homogeneous polynomial of degree $d - d'$. We may also assume that for some $u$ with $1 \leq u \leq s$, there is an $A$-monomial $m$ of degree $d - d'$ which is an $A$-monomial term of $Q_i(h_1, \ldots, h_n)$ if and only if $1 \leq i \leq u$. Using bilinearity, we can now write
\[ P'(h_1, \ldots, h_n) = \sum\limits_{i=1}^s (M'_i(h_1, \ldots, h_n), Q_i(h_1, \ldots, h_n)) = \left( \left(\sum\limits_{i=1}^u c'_i M'_i(h_1, \ldots, h_n)\right), m \right) + R \,, \]
where each $c'_i \in \mathbb{F}$ is nonzero and $R$ is a homogeneous $A$-polynomial of degree $d$ which shares no common $A$-monomial terms with $\left( \left(\sum\limits_{i=1}^u c'_i M'_i(h_1, \ldots, h_n)\right), m \right)$. We see that if $P'(h_1, \ldots, h_n) = 0$, then we must have
\[ \left( \left(\sum\limits_{i=1}^u c'_i M'_i(h_1, \ldots, h_n)\right), m \right) = 0. \]
But, by the induction hypothesis,
\[ \sum\limits_{i=1}^u c'_i M'_i(h_1, \ldots, h_n) \neq 0 \]
and so
\[ \left( \left(\sum\limits_{i=1}^u c'_i M'_i(h_1, \ldots, h_n)\right), m \right) \neq 0 , \]
which implies that $P'(h_1, \ldots, h_n)$ is nonzero. This shows that the result holds if $d = l$, which completes the proof by induction.
\end{proof}
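To illustrate Theorem \ref{thm:algebra7}, let $\alpha_1 = \{z_1\}$ and $\alpha_2 = \{(z_2, z_2)\}$ in $\mathscr{N}[Z]$. Then $\alpha_1$ is an algebraically independent set of homogeneous $Z$-polynomials with $\text{deg}(\alpha_1) = 1$, and $\alpha_2$ is a linearly independent set of homogeneous $Z$-polynomials of degree $2 \geq \text{deg}(\alpha_1)$. Since the degree-2 homogeneous component of $\langle z_1 \rangle$ is spanned by $(z_1, z_1)$, we have $\langle \alpha_1 \rangle \cap \text{span}(\alpha_2) = \{0\}$, and so Theorem \ref{thm:algebra7} implies that $\{z_1, (z_2, z_2)\}$ is algebraically independent.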
\vspace{3mm}
Theorem \ref{thm:algebra8} gives a situation in which an algebraically independent subset of a subalgebra $S$ of $\mathscr{N}[A]$ can be extended to an algebraically independent set which generates the entire subalgebra $S$. The proof of Theorem \ref{thm:algebra8} makes use of the following fact that is easily verified:
\begin{center}
If $\{\alpha_n\}_{n=1}^{\infty}$ is a collection of algebraically independent subsets of an $\mathbb{F}$-algebra $\mathscr{A}$ and if $i \leq j$ implies $\alpha_i \subset \alpha_j$ for all $i$ and $j$, then $\displaystyle{\bigcup\limits_{n=1}^{\infty} \alpha_n}$ is an algebraically independent subset of $\mathscr{A}$.
\end{center}
\vspace{2mm}
\begin{thm} \label{thm:algebra8}
Let $S$ be a homogeneous subalgebra of $\mathscr{N}[A]$ and let $\alpha_S \subset S$ be an algebraically independent set of homogeneous $A$-polynomials which is of bounded degree. If
\[ \{p \in S : deg(p) \leq \text{deg}(\alpha_S) \} \subset \langle \alpha_S \rangle \,, \]
then there exists an algebraically independent set $\alpha \subset S$ of homogeneous $A$-polynomials such that
\[ \alpha_S \subset \alpha \text{\quad and \quad} \langle \alpha \rangle = S \,. \]
\end{thm}
We remark that the recursive construction utilized in the following proof first appeared in Kurosh's paper \cite{kurosh}. A similar construction was later used by Shirshov \cite{shirshov} to show that every subalgebra of a free Lie algebra is free.
\begin{proof} If $\langle \alpha_S \rangle = S$, then we can take $\alpha = \alpha_S$. Otherwise, assume that $\langle \alpha_S \rangle \neq S$. We will recursively define a set $\alpha$ that satisfies the theorem.
Let $k_1 \geq 1$ be smallest such that $deg(p_1) = k_1$ for some $p_1 \in S - \langle \alpha_S \rangle$. By hypothesis, $k_1 > \text{deg}(\alpha_S)$. Let
\[ W_{k_1} = \text{span}(\{p \in S : deg(p) = k_1\}) \]
and let $\gamma_{k_1}$ be a basis for $W_{k_1}$. If
\[ W_{k_1} \cap \langle \alpha_S \rangle \neq \{0\} \,, \]
assume that $\gamma_{k_1}$ extends a basis $\beta_{k_1}$ for $W_{k_1} \cap \langle \alpha_S \rangle$ to a basis for $W_{k_1}$. If
\[W_{k_1} \cap \langle \alpha_S \rangle = \{0\} \,, \]
let $\beta_{k_1} = \emptyset$. Note that in either case, every element of $\gamma_{k_1} - \beta_{k_1}$ is of degree $k_1$. By Theorem \ref{thm:algebra7},
\[ \alpha_{k_1} = (\gamma_{k_1} - \beta_{k_1}) \cup \alpha_S \]
is an algebraically independent set. We see that for all $p \in S$ with $deg(p) \leq k_1$, $p \in \langle \alpha_{k_1} \rangle$.
\\
For some $n > 1$ assume that $\alpha_{k_1}, \alpha_{k_2}, \ldots, \alpha_{k_{n-1}}$, where $ 1 \leq k_1 < k_2 < \ldots < k_{n-1}$, are algebraically independent subsets of $S$ consisting of homogeneous $A$-polynomials and are such that
\[ (1) \hspace{.4cm} \alpha_{k_{i}} \subset \alpha_{k_{j}} \text{ if } i \leq j \, \]
and
\[ (2) \hspace{.4cm} \text{ for all } p \in S \text{ with } deg(p) \leq k_{n-1} \,, \,\, p \in \langle \alpha_{k_{n-1}} \rangle \,. \]
If $\langle \alpha_{k_{n-1}} \rangle = S$, then let $k_n = k_{n-1} + 1$ and let $\alpha_{k_n} = \alpha_{k_{n-1}}$. Otherwise, assume that $\langle \alpha_{k_{n-1}} \rangle \neq S$, and let $k_n > k_{n-1}$ be the smallest integer such that $deg(p) = k_n$ for some $\displaystyle{p \in S - \langle \alpha_{k_{n-1}} \rangle}$. Let
\[ W_{k_n} = \text{span}(\{p \in S : deg(p) = k_n\}) \]
and let $\gamma_{k_n}$ be a basis for $W_{k_n}$. If \[ W_{k_n} \cap \langle \alpha_{k_{n-1}} \rangle \neq \{0\} \,, \]
assume that $\gamma_{k_n}$ extends a basis $\beta_{k_n}$ for $W_{k_n} \cap \langle \alpha_{k_{n-1}} \rangle$ to a basis for $W_{k_n}$. If
\[W_{k_n} \cap \langle \alpha_{k_{n-1}} \rangle = \{0\} \,, \]
let $\beta_{k_n} = \emptyset$. By Theorem \ref{thm:algebra7},
\[ \alpha_{k_n} = (\gamma_{k_n} - \beta_{k_n}) \cup \alpha_{k_{n-1}} \]
is an algebraically independent set. We also have
\[ \alpha_{k_{n-1}} \subset \alpha_{k_n} \]
and for all $p \in S$ with $deg(p) \leq k_n$, $p \in \langle \alpha_{k_n} \rangle$.
\\
We have a sequence of algebraically independent sets of homogeneous $A$-polynomials such that
\[ \alpha_{k_1} \subset \alpha_{k_2} \subset \ldots \subset \alpha_{k_n} \subset \ldots \,, \]
and so
\[ \alpha = \bigcup\limits_{n = 1}^{\infty} \alpha_{k_n} \]
is algebraically independent. Clearly, $\langle \alpha \rangle = S$. We also have $\alpha_S \subset \alpha_{k_1} \subset \alpha$.
\end{proof}
Letting $\alpha_S = \emptyset$ in the statement of Theorem \ref{thm:algebra8}, we see that every homogeneous subalgebra of $\mathscr{N}[A]$ is free.
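For example, the homogeneous $Z$-polynomials $(z_1, z_1)$ and $(z_2, z_2)$ are linearly independent and of the same degree, so they are algebraically independent by Theorem \ref{thm:algebra5}; hence, by Theorem \ref{thm:ai_meaning}, the homogeneous subalgebra $\langle (z_1, z_1), (z_2, z_2) \rangle$ of $\mathscr{N}[Z]$ is free over these two generators.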
\vspace{3mm}
The following theorem extends Theorem \ref{thm:algebra8}.
\begin{thm} \label{thm:algebra9}
Let $S$ be a subalgebra of $\mathscr{N}[A]$, and let $\alpha_S \subset S$ be a set of $A$-polynomials such that
\[ \alpha_{S'} = \{ \pi_{deg(p)} (p) : p \in \alpha_S \} \]
is algebraically independent and
\[ \pi_{deg(p_1)}(p_1) \neq \pi_{deg(p_2)}(p_2) \]
for all $p_1, p_2 \in \alpha_S$ with $p_1 \neq p_2$.
If $\alpha_S$ is of bounded degree and such that
\[ \{p \in S : deg(p) \leq \text{deg}(\alpha_S) \} \subset \langle \alpha_S \rangle \,, \]
then there exists an algebraically independent set $\alpha \subset S$ such that
\[ \alpha_S \subset \alpha \text{\quad and \quad} \langle \alpha \rangle = S \,. \]
\end{thm}
\begin{proof}
If $\langle \alpha_S \rangle = S$, then we can take $\alpha = \alpha_S$. Assume that $\langle \alpha_S \rangle \neq S$ and let
\[ H = \{ \pi_{\text{deg}(p)}(p) : p \in S \text{ and } p \neq 0 \} \]
and $S' = \langle H \rangle$. Proposition \ref{prop:homog_subalgebra} implies that $S'$ is a homogeneous subalgebra of $\mathscr{N}[A]$. By Theorem \ref{thm:algebra8}, $\alpha_{S'}$ can be extended to an algebraically independent set $\alpha'$ of homogeneous $A$-polynomials with $\langle \alpha' \rangle = S'$. For each $h \in \alpha'$, let $f(h) \in S$ be such that
\[ \pi_{\text{deg}(f(h))}(f(h)) = h \]
and such that $f(h) \in \alpha_S$ if $h \in \alpha_{S'}$. (It is easy to verify that such an $f$ exists.) Now, let $\alpha = f(\alpha')$. We have $\alpha_S \subset \alpha$, and Theorem \ref{thm:algebra6} implies that $\alpha$ is an algebraically independent set. To show that $\langle \alpha \rangle = S$, suppose that $S - \langle \alpha \rangle$ is nonempty and assume that $N$ is the smallest positive integer such that $p \in S - \langle \alpha \rangle$ for some $p$ with $\text{deg}(p) = N$. It follows easily from the fact that $\langle \alpha' \rangle = S'$ that we must have $N > 1$.
Let $h_1, \ldots, h_n \in \alpha'$ and the polynomial $P$ be such that
\[ P(h_1, \ldots, h_n) = \pi_{\text{deg}(p)}(p) \,. \]
If $p_1, \ldots, p_n$ are the elements of $\alpha$ with $p_i = f(h_i)$ for all $i$, then we have
\[ p - P(p_1, \ldots, p_n) = r_p \]
where $r_p$ is a nonzero $A$-polynomial with $\text{deg}(r_p) < \text{deg}(p) = N$. Since $p_i \in \alpha$ for all $i$, it follows that
\[P(p_1, \ldots, p_n) \in \langle \alpha \rangle \]
which implies that $r_p \not\in \langle \alpha \rangle$. This contradicts the minimality of $N$, and so we must have $\langle \alpha \rangle = S$.
\end{proof}
\vspace{3mm}
Applying Theorem \ref{thm:algebra9} with $\alpha_S = \emptyset$ gives us Kurosh's Theorem.
\begin{corollary}[Kurosh's Theorem]\label{kurosh}
For any subalgebra $S$ of $\mathscr{N}[A]$, there exists an algebraically independent set $\alpha \subset \mathscr{N}[A]$ with $\langle \alpha \rangle = S$.
\end{corollary}
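For instance, the subalgebra $S = \langle z_1 + (z_1, z_1) \rangle$ of $\mathscr{N}[Z]$ is not homogeneous, since its degree-1 component $z_1$ does not lie in $S$. Nevertheless, the set $\{z_1 + (z_1, z_1)\}$ is reduced: its only highest-degree homogeneous component is $(z_1, z_1)$, which is algebraically independent by Theorem \ref{thm:algebra5}. Theorem \ref{thm:algebra6} therefore implies that $\{z_1 + (z_1, z_1)\}$ is algebraically independent, and so $S$ is free, as Corollary \ref{kurosh} guarantees.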
\section{Concluding Remarks}
As stated in the introduction, a complete description of all Schreier varieties is our main goal. We are naturally led to the study of free non-associative algebras satisfying certain relations, which may be obtained by forming quotient algebras of the free non-associative algebra. As a first step, we are interested in studying free non-associative algebras satisfying relations which take the form of a finite set of homogeneous polynomials. It is hoped that the framework established in this paper will enable us to generate a wealth of new examples of Schreier varieties.
\section{Acknowledgements}
The author would like to first thank Dr. Elizabeth Jurisich, who served as his thesis advisor. The developments described in this paper would not have been possible without her patience and support. The author would also like to thank Dr. Ben Cox and Dr. Iana Anguelova, who served as thesis committee members.
| {
"timestamp": "2018-11-16T02:13:30",
"yymm": "1811",
"arxiv_id": "1811.06380",
"language": "en",
"url": "https://arxiv.org/abs/1811.06380",
"abstract": "We introduce an embedding of the free magma on a set A into the direct product of the free magma on a singleton set and the free semigroup on A. This embedding is then used to prove several theorems related to algebraic independence of subsets of the free non-associative algebra on A. Among these theorems is a generalization of a result due to Kurosh, which states that every subalgebra of a free non-associative algebra is also free.",
"subjects": "Rings and Algebras (math.RA)",
"title": "Embeddings of Free Magmas with Applications to the Study of Free Non-Associative Algebras",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232899814557,
"lm_q2_score": 0.7217432062975979,
"lm_q1q2_score": 0.7093460325351697
} |
https://arxiv.org/abs/1802.08246 | Characterizing Implicit Bias in Terms of Optimization Geometry | We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum. | \section{Introduction}\label{sec:intro}
Implicit bias from the optimization algorithm plays a crucial
role in learning deep neural networks as it introduces effective capacity control not directly specified in the objective \citep{neyshabur2015search,neyshabur2015path,zhang2017understanding, keskar2016large,wilson2017marginal,neyshabur2017geometry}.
In overparameterized models where the
training objective has many global minima, optimizing using a specific algorithm, such as gradient descent, \textit{implicitly biases} the solutions to some special
global minima. The properties of the learned model, including its generalization performance, are thus crucially influenced by the choice of optimization algorithm used. In neural networks especially, characterizing these special global minima for common algorithms such as stochastic gradient descent (SGD) is essential for understanding what the inductive bias of the learned model is and why such large capacity networks often show remarkably good generalization even in the absence of explicit regularization \citep{zhang2017understanding} or early stopping \citep{Hoffer2017}.
Implicit bias from optimization depends on the choice of algorithm, and
changing the algorithm, or even changing an associated
hyperparameter, can change the implicit bias.
For example, \citet{wilson2017marginal} showed that for
some standard deep learning architectures, variants of the SGD algorithm
with different choices of momentum and adaptive gradient updates
(AdaGrad and Adam) exhibit different biases and thus have different
generalization performance;
\citet{keskar2016large}, \citet{Hoffer2017} and \citet{Smith2018} study how the size of the
mini-batches used in SGD influences generalization; and
\citet{neyshabur2015path} compare the bias of path-SGD (steepest
descent with respect to a scale invariant path-norm) to standard SGD.
It is therefore important to explicitly relate different optimization algorithms to their implicit biases.
Can we precisely characterize which global minima different algorithms converge to?
How does this depend on the loss function?
What other choices, including initialization,
step-size, momentum, stochasticity, and adaptivity, does the
implicit bias depend on?
In this paper, we provide answers to some of these questions for simple linear regression and classification models.
While neural networks are certainly more complicated than these simple linear models, the results here provide a segue into understanding such biases for more complex models.
For linear models, we already have an understanding of the implicit bias of gradient
descent. For an underdetermined least-squares objective, gradient descent can be shown to converge to the minimum Euclidean norm solution. Recently, \citet{soudry2017implicit} studied gradient descent for linear
logistic regression. The logistic loss is fundamentally different
from the squared loss in that the loss function has no attainable
global minimum. Gradient descent iterates therefore diverge (the norm
goes to infinity), but \citeauthor{soudry2017implicit} showed that they
diverge in the direction of the hard margin support vector machine
solution, and therefore the decision boundary converges to this maximum
margin separator.
Can we extend such characterization to other optimization methods
that work under different (non-Euclidean) geometries such as mirror
descent with respect to some potential, natural gradient descent with
respect to a Riemannian metric, and steepest descent with respect to
a generic norm? Can we relate the implicit bias to these geometries?
As we shall see, the answer depends on whether the loss function is
similar to a squared loss or to a logistic loss. This difference is captured
by two families of losses: \begin{inparaenum}[(a)]\item loss functions
that have a unique finite root, like the squared loss and \item
strictly monotone loss functions where the infimum is unattainable,
like the logistic loss. \end{inparaenum} For losses with a unique
finite root, we study the {\em limit point} of the optimization iterates,
$w_\infty=\lim_{t\to\infty} \W{t}$. For monotone losses, we study the
{\em limit direction}
$\bar{w}_\infty=\lim_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}$.
In Section~\ref{sec:unique-root} we study linear models with loss functions that have unique finite
roots. We obtain a robust characterization of the limit point for
mirror descent, and discuss how it is independent of step-size and
momentum. For natural gradient descent, we show that the step-size does
play a role, but we obtain a characterization for infinitesimal step-size.
For steepest descent, we show that not only does the step-size affect the limit point, but even with infinitesimal step-size, the expected characterization does not hold. The situation is fundamentally
different for strictly monotone losses such as the logistic loss
(Section~\ref{sec:monotonic}) where we do get a precise
characterization of the limit direction for generic steepest descent. We also study the adaptive gradient descent method (AdaGrad) \cite{duchi2011adaptive} (Section~\ref{sec:adagrad})
and optimization over matrix factorization (Section~\ref{sec:mf}). Recent studies
considered the bias of such methods for least squares problems
\citep{wilson2017marginal,gunasekar2017implicit}, and here we study
these algorithms for monotone loss functions, obtaining a more robust characterization
for matrix factorization problems, while concluding that the implicit bias of AdaGrad depends on initial conditions, including the step-size, even for strictly monotone losses.
\section{Losses with a Unique Finite Root}\label{sec:unique-root}
We first consider learning linear models using losses with a unique finite root, such as the squared loss, where the loss $\ell(\hat{y},y)$ between a prediction $\hat{y}$ and label $y$ is minimized at a unique and finite value of $\hat{y}$. We assume, without loss of generality, that $\min_{\hat{y}} \ell(\hat{y},y)=0$ and the unique minimizer is $\hat{y}=y$.
\begin{property} [\textbf{Losses with a unique finite root}] \label{ass:finite-root} For any $y$, a sequence $\{\hat{y}_t\}_{t=1}^\infty$ minimizes $\ell(.,y)$, i.e., $\ell(\hat{y}_t,y)\overset{t\to\infty}\longrightarrow\inf_{\hat{y}}\ell(\hat{y},y)=0$ if and only if $\hat{y}_t\overset{t\to\infty}\longrightarrow y$.
\end{property}
Denote the training dataset $\{(\x{n},\y{n}):n=1,2,\ldots,N\}$ with features $\x{n}\in \b{R}^{d}$ and labels $\y{n}\in \mathbb{R}$. The empirical loss (or risk) minimizer of a linear model $f(x)=\innerprod{w}{x}$ with parameters $w\in\b{R}^{d}$ is given by,
\begin{equation}
\min_{w} \c{L}(w):=\sum_{n=1}^N {\ell}(\innerprod{w}{\x{n}},\y{n}).
\label{eq:lm}
\end{equation}
We are particularly interested in the case where $N<d$ and the observations are realizable, i.e., $\min_w \c{L}(w)=0$. Under these conditions, the optimization problem in eq. \eqref{eq:lm} is underdetermined and has multiple global minima denoted by $\c{G}=\{w:\c{L}(w)=0\}=\{w:\forall n,\;\innerprod{w}{\x{n}}=\y{n}\}.$ Note that the set of global minima $\c{G}$ is the same for any loss $\ell$ with a unique finite root (Property \ref{ass:finite-root}), including, e.g., the Huber loss and the truncated squared loss.
Which specific global minima $w\in\c{G}$ do different optimization algorithms reach when minimizing the empirical loss objective $\c{L}(w)$?
\subsection{Gradient descent}\label{sec:gd-finite}
Consider gradient descent updates for minimizing $\c{L}(w)$ with step-size sequence $\{\eta_t\}_t$ and initialization $\W{0}$,
\begin{equation*}\W{t+1}=\W{t}-\eta_t\nabla\c{L}(\W{t}).\end{equation*}
If the iterates $\W{t}$ converge to a minimizer of the empirical loss in eq.~\eqref{eq:lm}, then they converge to the unique global minimum that is closest to the initialization $\W{0}$ in $\ell_2$ distance, i.e., $\W{t}\to\argmin_{w\in\c{G}} \|w-\W{0}\|_2$. This is easily seen: for any $w$, the gradients $\nabla\c{L}(w)=\sum_n{\ell}^\prime (\innerprod{w}{\x{n}},\y{n})\x{n}$ are always constrained to the fixed subspace spanned by the data $\{x_n\}_n$, and thus the iterates $\W{t}$ are confined to the low dimensional affine manifold ${\W{0}}+\text{span}(\{x_n\}_n)$. Within this low dimensional manifold, there is a unique global minimizer $w$ that satisfies the linear constraints in $\c{G}=\{w:\innerprod{w}{\x{n}}=\y{n},\forall n\in[N]\}$.
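This minimum-norm bias is easy to check numerically. The following sketch is our own illustration (not code from the paper), using a single hypothetical datapoint $x=[1,2]$, $y=1$ and the loss $\ell(u,y)=\nicefrac{1}{2}(u-y)^2$; the minimum $\ell_2$-norm interpolant is $x\,y/\|x\|_2^2=[0.2,0.4]$.

```python
# Sketch (ours, not from the paper): gradient descent on one linear equation
# <w, x> = y in two unknowns, with loss l(u, y) = 0.5 * (u - y)^2.
# From w0 = 0 the iterates stay in span{x}, so the limit is the
# minimum l2-norm solution x * y / ||x||^2 = [0.2, 0.4].
x, y = [1.0, 2.0], 1.0
w = [0.0, 0.0]
eta = 0.1
for _ in range(1000):
    r = w[0] * x[0] + w[1] * x[1] - y            # residual <w, x> - y
    w = [w[0] - eta * r * x[0], w[1] - eta * r * x[1]]

print(round(w[0], 6), round(w[1], 6))  # 0.2 0.4
```

The step-size and iteration count here are arbitrary choices; any step-size below $2/\|x\|_2^2$ gives the same limit, consistent with the claim that only the initialization matters.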
The same argument also extends for updates with instance-wise stochastic gradients, where we use a stochastic estimate $\tilde{\nabla}\c{L}(\W{t})$ of the full gradient $\nabla\c{L}(\W{t})$ computed from a random subset of instances $S_t\subseteq[N]$,
\begin{equation}\label{eq:stoc}
\tilde{\nabla}\c{L}(\W{t})=\sum\nolimits_{n\in S_{t}} \nabla_{w} \ell(\innerprod{\W{t}}{\x{n}},\y{n}).
\end{equation}
Moreover, when initialized with ${\W{0}}=0$,
the implicit bias characterization also extends to the following generic momentum and acceleration based updates,
\begin{equation}\W{t+1}\!=\!\W{t}\!+\!\beta_t\Delta\W{t-1}\!-\!\eta_t\nabla\c{L}\!\left(\!\W{t}\!+\!\gamma_t\Delta\W{t-1}\!\right)\!,
\label{eq:mom}
\end{equation}
where $\Delta\W{t-1}=\W{t}-\W{t-1}$. This includes Nesterov's acceleration ($\beta_t=\gamma_t$) \citep{nesterov1983method} and Polyak's heavy ball momentum ($\gamma_t=0$) \citep{polyak1964some}.
For losses with a unique finite root, the implicit bias of gradient descent therefore depends only on the initialization, and not on the step-size, momentum, or mini-batch size. Can we get such a succinct characterization for other optimization algorithms? That is, can we characterize the bias in terms of the optimization geometry and initialization, independently of the choices of step-size, momentum, and stochasticity?
\subsection{Mirror descent}\label{sec:md-finite}
Mirror descent (MD) \citep{beck2003mirror,nemirovskii1983problem} was introduced as a generalization of gradient descent for optimization over geometries beyond the Euclidean geometry of gradient descent. In particular, mirror descent updates are defined for any strongly convex and differentiable potential $\psi$ as
\begin{equation}
\W{t+1}=\argmin_{w\in\c{W}} \eta_t \innerprod{w}{\nabla\c{L}(\W{t})}+D_\psi(w,\W{t}),
\label{eq:md-update}
\end{equation}
where $D_\psi(w,w')\!=\!\psi(w)-\psi(w')-\innerprod{\nabla \psi(w')}{w-w'}$ is the \textit{Bregman divergence} \citep{bregman1967relaxation} w.r.t. $\psi$, and $\c{W}$ is some constraint set for parameters $w$.
We first look at unconstrained optimization where $\c{W}=\mathbb{R}^d$ and the update in eq.~\eqref{eq:md-update} is equivalent to
\begin{equation}
\nabla\psi(\W{t+1})=\nabla\psi(\W{t})-\eta_t \nabla\c{L}(\W{t}).
\label{eq:md-upd-opt}
\end{equation}
For a strongly convex potential $\psi$, $\nabla\psi$ is called the link function and is invertible. Hence, the above updates are uniquely defined. Also, $w$ and $\nabla\psi(w)$ are referred to as \textit{primal} and \textit{dual} variables, respectively.
Examples of potentials $\psi$ for mirror descent include the squared $\ell_2$ norm $\psi(w)=\nicefrac{1}{2}\|w\|_2^2$, which leads to gradient descent; the entropy potential $\psi(w)=\sum_i w[i]\log{w[i]}-w[i]$; the spectral entropy for matrix valued $w$, where $\psi(w)$ is the entropy potential on the singular values of $w$; general quadratic potentials $\psi(w)=\nicefrac{1}{2}\|w\|_D^2=\nicefrac{1}{2}\,w^\top Dw$ for any positive definite matrix $D$; and the squared $\ell_p$ norms for $p\in(1,2]$.
From eq.~\eqref{eq:md-upd-opt}, we see that rather than the primal iterates $\W{t}$, it is the dual iterates $\nabla\psi(\W{t})$ that are constrained to the low dimensional data manifold $\nabla\psi(\W{0})+\text{span}(\{\x{n}\}_{n\in[N]})$. The arguments for gradient descent can now be generalized to get the following result.
\begin{restatable}{theorem}{thmmdfinite} \label{thm:md-finite} For any loss $\ell$ with a unique finite root (Property~\ref{ass:finite-root}), any realizable dataset $\{\x{n},\y{n}\}_{n=1}^N$, and any strongly convex potential $\psi$, consider the mirror descent iterates $\W{t}$ from eq. \eqref{eq:md-upd-opt} for minimizing the empirical loss $\c{L}(w)$ in eq. \eqref{eq:lm}. For all initializations $\W{0}$, if the step-size sequence $\{\eta_t\}_t$ is chosen such that the limit point of the iterates $w_\infty=\lim_{t\to\infty}\W{t}$ is a global minimizer of $\c{L}$, i.e., $\c{L}(w_{\infty})=0$, then $w_\infty$ is given by
\begin{equation}\label{eq:md_mnopt}
w_{\infty}=\argmin_{w:\forall n, \innerprod{w}{\x{n}}=\y{n}} D_\psi(w,\W{0})
\end{equation}
\end{restatable}
In particular, if we start at ${\W{0}}=\argmin_w \psi(w)$ (so that $\nabla\psi({\W{0}})=0$), then we get $w_\infty=\argmin_{w\in\c{G}} \psi(w)$, where recall that $\c{G}=\{w:\forall n, \innerprod{w}{\x{n}}=\y{n}\}$ is the set of global minima for $\c{L}(w)$.
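As a numeric illustration of this prediction (our own sketch, not from the paper), take the entropy potential and the single datapoint $x=[1,2]$, $y=1$ with loss $\nicefrac{1}{2}(u-y)^2$. Since $\nabla\psi(w)=\log w$ elementwise, the update in eq. \eqref{eq:md-upd-opt} is multiplicative, and the predicted limit $\argmin_{w_1+2w_2=1}\psi(w)$ has the closed form $\log w_1=\lambda$, $\log w_2=2\lambda$, i.e., $w_2=w_1^2$, giving $w_\infty=[0.5,0.25]$.

```python
import math

# Sketch (ours): mirror descent w.r.t. the entropy potential, where
# grad psi(w) = log(w) elementwise, so the dual update
# grad psi(w_{t+1}) = grad psi(w_t) - eta * grad L(w_t) becomes
# w[i] <- w[i] * exp(-eta * r * x[i]), with r = <w, x> - y the residual
# of the squared loss l(u, y) = 0.5 * (u - y)^2.
x, y = [1.0, 2.0], 1.0
w = [1.0, 1.0]                     # w0 = argmin psi, so grad psi(w0) = 0
eta = 0.05
for _ in range(5000):
    r = w[0] * x[0] + w[1] * x[1] - y
    w = [w[i] * math.exp(-eta * r * x[i]) for i in range(2)]

# Theorem 1 predicts w_inf = argmin_{<w,x>=y} psi(w) = [0.5, 0.25]
print(round(w[0], 4), round(w[1], 4))  # 0.5 0.25
```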
The analysis of Theorem~\ref{thm:md-finite} can also be extended for special cases of constrained mirror descent (eq. \eqref{eq:md-update}) when $\c{L}(w)$ is minimized over realizable affine equality constraints.
\begin{restatable}{subtheorem}{thmlinc} \label{thm:linc}
Under the conditions of Theorem~\ref{thm:md-finite}, consider constrained mirror descent updates $\W{t}$ from eq. \eqref{eq:md-update} with realizable affine equality constraints, that is $\c{W}=\{w:Gw=h\}$ for some $G\in\b{R}^{d^\prime\times d}$ and $h\in\b{R}^{d^\prime}$ and additionally, $\exists w\in\c{W}$ with $\c{L}(w)=0$. For all initializations $\W{0}$, if the step-size sequence $\{\eta_t\}_t$ is chosen to asymptotically minimize $\c{L}$, i.e., $\c{L}(w_{\infty})=0$, then $w_\infty=\argmin_{w\in\c{G}\cap \c{W}}D_\psi(w,\W{0})$.
\end{restatable}
For example, in exponentiated gradient descent \citep{kivinen1997exponentiated}, which is mirror descent w.r.t $\psi(w)=\sum_i w[i]\log{w[i]}-w[i]$, under the explicit simplex constraint $\c{W}=\{w:\sum_iw[i]=1\}$, Theorem~\ref{thm:linc} shows that using uniform initialization ${\W{0}}=\frac{1}{d}\mathbf{1}$, mirror descent will return the maximum entropy solution ${w_\infty=\argmin_{w\in\c{G}\cap \c{W}} \sum_{i}w[i]\log{w[i]}}$.
Let us now consider momentum for mirror descent. There are two possible generalizations of the gradient descent momentum in eq.~\eqref{eq:mom}: adding momentum either to primal variables $\W{t}$, or to dual variables $\nabla\psi(\W{t})$,
\begin{flalign}
&\text{Dual momentum:}&&\!\!\!\!\!\nabla\psi(\W{t+1})=\nabla\psi(\W{t})+\beta_t \Delta z_{(t-1)}-\eta_t \nabla\c{L}\left(\W{t}+\gamma_t \Delta w_{(t-1)}\right)\label{eq:dual-mom}&\\
&\text{Primal momentum:}&&\!\!\!\!\!\nabla\psi(\W{t+1})=\nabla\psi\big(\W{t}+\beta_t \Delta w_{(t-1)}\big)-\eta_t \nabla\c{L}\left(\W{t}+\gamma_t \Delta w_{(t-1)}\right)\label{eq:primal-mom}&
\end{flalign}
where $\Delta z_{(-1)}=\Delta w_{(-1)}=0$, and for $t\ge1$, $\Delta z_{(t-1)}=\nabla\psi(\W{t})-\nabla\psi(\W{t-1})$ and $\Delta w_{(t-1)}=\W{t}-\W{t-1}$ are the momentum terms in the dual and primal space, respectively; and $\{\beta_t\ge 0,\gamma_t\ge 0\}_t$ are the momentum parameters.
If we initialize at ${\W{0}}=\argmin_w \psi(w)$, then even with dual momentum, $\nabla\psi(\W{t})$ remains in the data manifold. This leads to the following extension of Theorem~\ref{thm:md-finite}.
\begin{restatable}{subtheorem}{thmmdfinitea} \label{thm:md-finite1a}Under the conditions in Theorem~\ref{thm:md-finite}, if initialized at ${\W{0}}=\argmin_w \psi(w)$, then the mirror descent updates with dual momentum also converge to \eqref{eq:md_mnopt}, i.e., for all $\{\eta_t\}_t,\{\beta_t\}_t,\{\gamma_t\}_t$, if $\W{t}$ from eq. \eqref{eq:dual-mom} converges to $w_{\infty}\in\c{G}$, then $w_{\infty}=\argmin_{w\in\c{G}}\psi(w)$.
\end{restatable}
\begin{restatable}{remark}{remsgd}\label{rem:sgd}Following the same arguments, we can show that Theorems~\ref{thm:md-finite}--\ref{thm:md-finite1a} also hold when instance-wise stochastic gradients defined in eq. \eqref{eq:stoc} are used in place of $\nabla\c{L}(\W{t})$.
\end{restatable}
{\small
\begin{figure*}
\centering
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{md-primal.pdf}
\caption{\label{fig:md}\centering Mirror descent primal \newline momentum (Example~\ref{ex:md})}
\end{subfigure}
\begin{subfigure}[b]{0.27\textwidth}
\includegraphics[width=\textwidth]{ngd-primal.pdf}
\caption{\label{fig:ngd}\centering Natural gradient descent\newline (Example~\ref{ex:ngd})}
\end{subfigure}
\begin{subfigure}[b]{0.35\textwidth}
\includegraphics[width=\textwidth]{sd-primal.pdf}
\caption{\label{fig:sd}\centering Steepest descent w.r.t $\|.\|_{\nicefrac{4}{3}}$ \newline (Example~\ref{ex:sd})}
\end{subfigure}
\caption{\label{fig:finite-root}\small Dependence of implicit bias on step-size and momentum: In $(a)$--$(c)$, the blue line denotes the set $\c{G}$ of global minima for the respective examples. In $(a)$ and $(b)$, $\psi$ is the entropy potential and all algorithms are initialized with ${\W{0}}=[1,1]$ so that $\psi({\W{0}})=\argmin_{w}\psi(w)$. $w^*_\psi=\argmin_{w\in\c{G}}\psi(w)$ denotes the minimum-potential global minimum we expect to converge to.
$(a)$ \textbf{Mirror descent with primal momentum (Example~\ref{ex:md}):} the global minimum that eq. \eqref{eq:primal-mom} converges to depends on the momentum parameters---the sub-plots contain the trajectories of eq. \eqref{eq:primal-mom} for different choices of $\beta_t=\beta$ and $\gamma_t=\gamma$.
$(b)$ \textbf{Natural gradient descent (Example~\ref{ex:ngd}):} for different step-sizes $\eta_t=\eta$, eq. \eqref{eq:ngd-update} converges to different global minima. Here, $\eta$ was chosen to be small enough to ensure $\W{t}\in\text{dom}(\psi)$. $(c)$ \textbf{Steepest descent w.r.t $\|.\|_{\nicefrac{4}{3}}$ (Example~\ref{ex:sd}):} the global minimum that eq. \eqref{eq:sd-update} converges to depends on $\eta$. Here ${\W{0}}=[0,0,0]$, $w^*_{\|.\|}=\argmin_{w\in\c{G}}\|w\|_{\nicefrac{4}{3}}$ denotes the minimum-norm global minimum, and $w^\infty_{\eta\to0}$ denotes the solution of infinitesimal SD with $\eta\to0$. Note that even as $\eta\to 0$, the expected characterization does not hold, i.e., $w^\infty_{\eta\to0}\neq w^*_{\|.\|}$.}
\end{figure*}
}
Let us now look at primal momentum.
For general potentials $\psi$, the dual iterates $\nabla\psi(\W{t})$ from the primal momentum can fall off the data manifold and the additional components influence the final solution. Thus, the specific global minimum that the iterates $\W{t}$ converge to will depend on the values of momentum parameters $\{\beta_t,\gamma_t\}_t$ and step-sizes $\{\eta_t\}_t$ as demonstrated in the following example.
\begin{example}\label{ex:md}
Consider optimizing $\c{L}(w)$ with dataset $\{(\x{1}=[1,2], \y{1}=1)\}$ and squared loss $\ell(u,y)=(u-y)^2$ using primal momentum updates from eq. \eqref{eq:primal-mom} for MD w.r.t. the entropy potential $\psi(w)=\sum_i w[i]\log{w[i]}-w[i]$. For initialization ${\W{0}}=\argmin_w \psi(w)$, Figure~\ref{fig:md} shows how different choices of momentum $\{\beta_t,\gamma_t\}$ change the limit point $w_\infty$. Additionally, we show the following:
\begin{subproposition} \label{prop:md-primal} In Example~\ref{ex:md}, consider the case where primal momentum is used only in the first step, but $\gamma_t=0$ and $\beta_{t}=0$ for all $t\ge2$. For any $\beta_1>0$, there exists $\{\eta_t\}_t$, such that $\W{t}$ from \eqref{eq:primal-mom} converges to a global minimum, but not to $\argmin_{w\in\c{G}}\psi(w)$.
\end{subproposition}
\end{example}
\subsection{Natural gradient descent}
Natural gradient descent (NGD) was introduced by \citet{amari1998natural} as a modification of gradient descent, wherein the updates are chosen to be the steepest descent direction w.r.t a Riemannian metric tensor $H$ that maps $w$ to a positive definite local metric $H(w)$. The updates are given by,
\begin{equation}
\W{t+1}= \W{t}-\eta_t H\big(\W{t}\big)^{-1}{\nabla{\c{L}}}(\W{t}).
\label{eq:ngd-update}
\end{equation}
In many instances, the metric tensor $H$ is specified by the Hessian $\nabla^2\psi$ of a strongly convex potential $\psi$. For example, when the metric over the Riemannian manifold is the KL divergence between distributions $P_w$ and $P_{w'}$ parameterized by $w$, the metric tensor is given by $H(w)=\nabla^2\psi(P_w)$, where the potential $\psi$ is the entropy potential over $P_w$.
\paragraph{Connection to mirror descent} When $H(w)=\nabla^2\psi(w)$ for a strongly convex potential $\psi$,
as the step-size $\eta$ goes to zero, the iterates $\W{t}$ from natural gradient descent in eq. \eqref{eq:ngd-update} and mirror descent w.r.t $\psi$ in eq. \eqref{eq:md-update} converge to each other, and the common dynamics in the limit is given by,
\begin{equation}
\dv{\nabla \psi(\W{t})}{t}=-{\nabla{\c{L}}}(\W{t})\implies\dv{\W{t}}{t}=-\nabla^2\psi(\W{t})^{-1}{\nabla{\c{L}}}(\W{t}).
\label{eq:md-ode}
\end{equation}
Thus, as the step-sizes are made infinitesimal, the limit point of natural gradient descent $w_\infty= \lim_{t\to\infty}\W{t}$ is also the limit point of mirror descent and hence will be biased towards solutions with minimum divergence to the initialization, i.e., as $\eta\to0$, $w_\infty=\argmin_{w\in\c{G}} D_\psi(w, \W{0})$.
For general step-sizes $\{\eta_t\}$, if the potential $\psi$ is quadratic, $\psi(w)=\nicefrac{1}{2}\|w\|_D^2$ for some positive definite $D$, we get linear link functions $\nabla\psi(w)=Dw$ and constant metric tensors $\nabla^2\psi(w)=H(w)=D$, and the natural gradient descent updates \eqref{eq:ngd-update} are the same as the mirror descent updates \eqref{eq:md-upd-opt}. Otherwise, the update in eq. \eqref{eq:ngd-update} is only an approximation of the mirror descent update $\nabla\psi^{-1}(\nabla\psi(\W{t})-\eta_t\nabla\c{L}(\W{t}))$.
For natural gradient descent with finite step-sizes and non-quadratic potentials $\psi$, the characterization in eq. \eqref{eq:md_mnopt} generally does not hold. To see this, note that for any initialization $\W{0}$, a finite $\eta_1>0$ leads to a $\W{1}$ whose dual variable $\nabla\psi(\W{1})$ is no longer in the data manifold $\text{span}(\{\x{n}\})+\nabla\psi(\W{0})$; hence the iterates converge to a different global minimum that depends on the step-sizes $\{\eta_t\}_t$.
\begin{example}\label{ex:ngd} Consider optimizing $\c{L}(w)$ with squared loss over dataset $\{(\x{1}=[1,2], \y{1}=1)\}$ using the natural gradient descent w.r.t. the metric tensor given by $H(w)=\nabla^2\psi(w)$, where $\psi(w)=\sum_iw[i]\log{w[i]}-w[i]$, and initialization ${\W{0}}=[1,1]$. Figure~\ref{fig:ngd} shows that NGD with different step-sizes $\eta$ converges to different global minima. For a simple analytical example: take one finite step $\eta_1>0$ and then follow the continuous time path in eq. \eqref{eq:md-ode}.
\begin{subproposition}\label{prop:ngd} For almost all $\eta_1>0$, $\lim_{t\to\infty}\W{t}=\argmin_{w\in\c{G}}D_\psi(w,\W{1})\neq \argmin_{w\in\c{G}}D_\psi(w,\W{0})$.
\end{subproposition}
\end{example}
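A numeric version of Example~2 (our own sketch, not from the paper) makes the step-size dependence concrete. For the entropy potential, $H(w)=\nabla^2\psi(w)=\text{diag}(1/w[i])$, so the natural gradient step simply rescales each gradient coordinate by $w[i]$. Both runs below reach a global minimum, but different ones; the specific step-sizes and iteration counts are arbitrary choices of ours.

```python
# Sketch (ours) of Example 2: natural gradient descent with metric tensor
# H(w) = diag(1/w[i]) (Hessian of the entropy potential) on x = [1, 2],
# y = 1 with loss l(u, y) = 0.5 * (u - y)^2.
def ngd(eta, steps):
    x, y = [1.0, 2.0], 1.0
    w = [1.0, 1.0]
    for _ in range(steps):
        r = w[0] * x[0] + w[1] * x[1] - y
        # H(w)^{-1} grad L(w) = diag(w) * (r * x)
        w = [w[i] - eta * w[i] * r * x[i] for i in range(2)]
    return w

w_small, w_large = ngd(0.001, 200000), ngd(0.1, 2000)
# both limits satisfy <w, x> = y, so both are global minima ...
print(round(w_small[0] + 2 * w_small[1], 6))  # 1.0
print(round(w_large[0] + 2 * w_large[1], 6))  # 1.0
# ... but they are different global minima:
print(abs(w_small[0] - w_large[0]) > 1e-3)  # True
```

As $\eta\to0$ the limit approaches the mirror descent limit $[0.5, 0.25]$, consistent with eq. \eqref{eq:md-ode}.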
\remove{
\paragraph{Necessary conditions: }
So far, we saw conditions under which MD w.r.t $\psi$ and NGD w.r.t. $\nabla^2\psi$ converges to the global optimum that minimizes the Bregman divergence to the initialization.
We can also ask the inverse question of what kind of updates will lead to an implicit bias towards minimizing the Bregman divergence to an initialization.
\begin{lemma}\label{lem:cond}Consider any strongly convex and differentiable function $\phi:\mathbb{R}^d\to \mathbb{R}$ and the associated Bregman divergence $D_\phi(w,w')=\phi(w)-\phi(w')-\innerprod{\nabla \phi(w')}{w-w'}$.
For any $\ell(u,y)$ with finite roots and sequence of $\W{t}$ that converges to a global optimum of $\c{L}(w)$ in \eqref{eq:lm}, we get $\lim_{t\to\infty}\W{t}= \argmin_{w\in\c{G}} D_\phi(w,\W{0})$ for \textit{every} $\W{0}$ if and only if $\forall t, \nabla\phi(\W{t+1})-\nabla\phi(\W{t})\in\text{span}(\{\x{n}\})$.
\end{lemma}
In particular, for first order algorithms where $\W{t}$ is updated only based on past updates $\{\W{t'}\}_{t'<t}$ and gradients $\{\c{L}(\W{t'})\}_{t'<t}$, to get the implicit bias of $\argmin_{w\in\c{G}} D_\phi(w,\W{0})$ for every $\W{0}$, we need to necessarily use updates of the form $ \nabla\phi(\W{t+1})=\nabla\phi(\W{t})+\sum_{t'<t} \xi_{t'}\nabla\c{L}(\W{t})$ and its stochastic variants, for some $\{\xi_{t'}\in\mathbb{R}\}$---essentially minor variants of MD.
The above results do not preclude cases where we can get implicit regularization under additional conditions on $\W{0}$, for example, in Theorem~\ref{thm:md-finite1a} we get implicit regularization to $\psi(w)$ for MD with dual momentum when $\nabla\psi(\W{0})=0$, but this does not extend to \textit{any} other initialization, and hence is not covered by Lemma~\ref{lem:cond}. }
\subsection{Steepest Descent}
Gradient descent is also a special case of steepest descent (SD) w.r.t a generic norm $\|.\|$ \citep{boyd2004convex} with updates given by,
\begin{equation}
\W{t+1}= \W{t}+\eta_t \Delta \W{t}, \text{ where }\Delta \W{t}=\argmin_{v} \innerprod{\nabla{\c{L}}(\W{t})}{v}+\frac{1}{2}\|v\|^2.
\label{eq:sd-update}
\end{equation}
The optimality of $\Delta\W{t}$ in eq. \eqref{eq:sd-update} requires $-\nabla{\c{L}}(\W{t})\in \partial\|\Delta\W{t}\|^2$, which is equivalent to,
\begin{equation}
\langle\Delta\W{t},\!-\nabla\c{L}(\W{t})\rangle\!=\!\|\Delta\W{t}\|^2\!=\!\|\nabla\c{L}(\W{t})\|_\star^2.
\label{eq:sd-upd-opt}
\end{equation}
Examples of steepest descent include gradient descent, which is
steepest descent w.r.t $\ell_2$ norm and coordinate descent, which is
steepest descent w.r.t $\ell_1$ norm. In general, the update
$\Delta\W{t}$ in eq. \eqref{eq:sd-update} is not uniquely defined, and
there could be multiple directions $\Delta\W{t}$ that minimize eq.
\eqref{eq:sd-update}. In such cases, any minimizer of eq. \eqref{eq:sd-update}
is a valid steepest descent update and satisfies eq.
\eqref{eq:sd-upd-opt}.
Generalizing gradient descent, we might expect the limit point $w_\infty$ of steepest
descent w.r.t an arbitrary norm $\|.\|$ to be the solution closest to the initialization in the corresponding norm,
$\argmin_{w\in\c{G}} \|w-{\W{0}}\|$. This is indeed the case for
quadratic norms $\|v\|_D=\sqrt{v^\top Dv}$, when eq.~\eqref{eq:sd-update} is equivalent to mirror descent
with $\psi(w)=\nicefrac{1}{2}\|w\|_D^2$. Unfortunately, this
does not hold for general norms.
\begin{example}\label{ex:sd}
Consider minimizing $\c{L}(w)$ with dataset $\{(\x{1}=[1,1,1], \y{1}=1), (\x{2}=[1,2,0], \y{2}=10)\}$ and loss $\ell(u,y)=(u-y)^2$ using steepest descent updates w.r.t. the $\ell_{4/3}$ norm. The empirical results for this problem in Figure~\ref{fig:sd} clearly show that even for $\ell_p$ norms, where $\|.\|^2_p$ is smooth and strongly convex, the corresponding steepest descent converges to a global minimum that depends on the step-size. Further, even in the continuous step-size limit of $\eta\to 0$, $\W{t}$ does not converge to $\argmin_{w\in\c{G}} \|w-{\W{0}}\|$.
\end{example}
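A numeric version of Example~3 (our own sketch; the closed-form update below is a standard fact, not stated in the paper): for $\|.\|_p$ the direction solving eq. \eqref{eq:sd-update} is $v_i=-\text{sign}(g_i)|g_i|^{q-1}/\|g\|_q^{q-2}$ with $\nicefrac{1}{p}+\nicefrac{1}{q}=1$, which satisfies $\innerprod{v}{-g}=\|v\|_p^2=\|g\|_q^2$ as in eq. \eqref{eq:sd-upd-opt}. Here $p=\nicefrac{4}{3}$, so $q=4$ and $\text{sign}(g_i)|g_i|^{3}=g_i^3$. Running from $\W{0}=0$ with two step-sizes (our arbitrary choices) reaches two different global minima.

```python
# Sketch (ours) of Example 3: steepest descent w.r.t. the l_{4/3} norm on
# the paper's dataset. For q = 4 the update direction is v = -g^3 / ||g||_4^2
# (elementwise cube), which satisfies <v, -g> = ||v||_{4/3}^2 = ||g||_4^2.
X = [[1.0, 1.0, 1.0], [1.0, 2.0, 0.0]]
y = [1.0, 10.0]

def steepest_descent_l43(eta, steps=20000):
    w = [0.0, 0.0, 0.0]
    for _ in range(steps):
        g = [0.0, 0.0, 0.0]
        for xn, yn in zip(X, y):
            r = sum(wi * xi for wi, xi in zip(w, xn)) - yn
            for j in range(3):
                g[j] += 2 * r * xn[j]          # gradient of (u - y)^2
        gq2 = sum(gj ** 4 for gj in g) ** 0.5  # ||g||_4^2
        if gq2 == 0.0:
            break
        w = [wi - eta * gj ** 3 / gq2 for wi, gj in zip(w, g)]
    return w

def res(w):  # worst-case constraint violation
    return max(abs(sum(wi * xi for wi, xi in zip(w, xn)) - yn)
               for xn, yn in zip(X, y))

w_a, w_b = steepest_descent_l43(0.005), steepest_descent_l43(0.05)
print(res(w_a) < 1e-6, res(w_b) < 1e-6)  # True True
print(max(abs(a - b) for a, b in zip(w_a, w_b)) > 1e-4)  # different limits
```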
\paragraph{Coordinate descent}
Steepest descent w.r.t.~the $\ell_1$ norm is called coordinate descent, with updates:
\begin{equation*}
\Delta\W{t+1}\in\text{conv}\left\{-\eta_t \frac{\partial \c{L}(w)}{\partial w[j_t]}e_{j_t}:j_t=\argmax_j \left|\frac{\partial \c{L}(w)}{\partial w[j]}\right|\right\},
\label{eq:sdl1-finite}
\end{equation*}
where $\text{conv}(S)$ denotes the convex hull of the set $S$, and $\{e_j\}$ are the standard basis vectors; i.e., when multiple partial derivatives are maximal, we can choose any convex combination of the maximizing coordinates, leading to many possible coordinate descent optimization paths.
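A minimal numeric sketch of these updates (ours, on a hypothetical one-equation problem with loss $\nicefrac{1}{2}(u-y)^2$): each step moves only the coordinate with the largest-magnitude partial derivative.

```python
# Sketch (ours): coordinate descent, i.e. steepest descent w.r.t. the l1
# norm. Data are hypothetical: one example, three features.
X = [[1.0, 2.0, 0.5]]
y = [1.0]
w = [0.0, 0.0, 0.0]
eta = 0.1
for _ in range(2000):
    g = [0.0, 0.0, 0.0]                       # full gradient sum_n r_n * x_n
    for xn, yn in zip(X, y):
        r = sum(wi * xi for wi, xi in zip(w, xn)) - yn
        for j in range(3):
            g[j] += r * xn[j]
    j = max(range(3), key=lambda j: abs(g[j]))  # argmax_j |dL/dw[j]|
    w[j] -= eta * g[j]                          # update only that coordinate

residual = sum(wi * xi for wi, xi in zip(w, X[0])) - y[0]
print(abs(residual) < 1e-8)          # True: a global minimum is reached
print(w[0] == 0.0 and w[2] == 0.0)   # True: only coordinate 1 ever moved
```

Here the path happens to end at the minimum $\ell_1$-norm solution $[0, 0.5, 0]$, which is the monotone case discussed next; in general this need not happen.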
The connection between optimization paths
of coordinate descent and the $\ell_1$ \textit{regularization path} given by,
$\hat{w}(\lambda)=\argmin_{w}\c{L}(w)+\lambda \|w\|_1$, has been studied by \citet{efron2004least}. The specific coordinate descent path where updates are along the average of all optimal coordinates and the step-sizes are infinitesimal is equivalent to
forward stage-wise selection, a.k.a.~$\epsilon$-boosting
\citep{friedman2001greedy}. When the $\ell_1$ regularization path
$\hat{w}(\lambda)$ is monotone in each of the coordinates, it is
identical to this stage-wise selection path, i.e.,~to a coordinate
descent optimization path (and also to the related LARS path)
\citep{efron2004least}. In this case, in the limit of $\lambda\to0$ and $t\to\infty$, the optimization and regularization paths both converge to the minimum $\ell_1$
norm solution. However, when the regularization path $\hat{w}(\lambda)$ is not
monotone, which can and does happen, the optimization and regularization paths diverge, and forward stage-wise selection can converge to solutions with sub-optimal $\ell_1$ norm.
This matches our understanding that steepest descent w.r.t.~a norm $\|.\|$, in
this case the $\ell_1$ norm, might converge to a solution that is {\em not} always the minimum $\|.\|$ norm solution.
\subsection{Summary for losses with a unique finite root}
For losses with a unique finite root, we
characterized the implicit bias of generic mirror descent algorithm in
terms of the potential function and initialization. This
characterization extends for momentum in the dual space as well as to
natural gradient descent in the limit of infinitesimal step-size. We also saw that the characterization breaks down for mirror
descent with primal momentum and natural gradient descent with finite
step-sizes. Moreover, for steepest descent with general norms, we were
unable to get a useful characterization even in the infinitesimal step
size limit. In the following section, we will see that for
strictly monotone losses, we {\em can} get a characterization also for
steepest descent.
\section{Strictly Monotone Losses}\label{sec:monotonic}
We now turn to strictly monotone loss functions $\ell$, where the behavior of the implicit bias is fundamentally different, and so are the situations in which it can be characterized. Such losses are common in classification problems where $y\in\{-1,1\}$ and $\ell(f(x),y)$ is typically a continuous surrogate of the $0$-$1$ loss. Examples of such losses include the logistic, exponential, and probit losses.
\begin{property} [\textbf{Strict monotone losses}] \label{ass:mon-bdd}
$\ell(\hat{y},y)$ is bounded from below, and $\forall y$, $\ell(\hat{y},y)$ is strictly monotonically decreasing in $\hat{y}$.
Without loss of generality, $\forall y$, $\inf_{\hat{y}} \ell(\hat{y},y)= 0$ and $\ell(\hat{y},y)\overset{\hat{y} y\to \infty}\longrightarrow 0$.
\end{property}
We look at classification models that fit the training data $\{\x{n},\y{n}\}_n$ with linear decision boundaries $f(x)=\innerprod{w}{x}$ with decision rule given by $\hat{y}(x)=\text{sign}(f(x))$.
In many instances of the proofs, we also assume without loss of generality that $y_n=1$ for all $n$, since for linear models, the sign of $\y{n}$ can equivalently be absorbed into $\x{n}$.
We again look at unregularized empirical risk minimization objective of the form in eq. \eqref{eq:lm}, but now with strictly monotone losses.
When the training data $\{\x{n},\y{n}\}_n$ is not linearly separable, the empirical objective $\c{L}(w)$ can have a finite global minimum. However, if the dataset is linearly separable, i.e., $\exists w:\forall n, \y{n}\innerprod{w}{\x{n}}>0$, minimizing the empirical loss $\c{L}(w)$ is again ill-posed, and moreover $\c{L}(w)$ does not have any finite minimizer, i.e., $\c{L}(w)\to0$ only as $\|w\|\to\infty$.
Thus, for any sequence $\{\W{t}\}_{t=0}^\infty$, if $\c{L}(\W{t})\to 0$, then $\W{t}$ necessarily diverges to infinity rather than converging, and hence we cannot talk about $\lim_{t\to\infty}\W{t}$. Instead, we look at the limit direction $\bar{w}_{\infty}=\lim\limits_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}$ whenever the limit exists. We refer to the existence of this limit as convergence in direction. Note that the limit direction fully specifies the decision rule of the classifier that we care about.
\remove{To show convergence of ${\W{t}}$ in direction, we further restrict to losses with exponential tails.
\begin{property} [\textbf{Tight exponential tail}] \label{ass:exp-tail} $\ell(u,y)$ has a tight exponential tail if $\exists \mu>0$ and $u_0>0$ such that
$$\forall yu>u_0,\;\;(1-e^{-\mu yu})e^{-yu}\le -{\ell}^\prime (u,y)\le (1+e^{-\mu yu})e^{-yu}$$
\end{property}
This includes, exponential, logistic, and sigmoid losses. \remove{chk}More specifically, our results are proved only for the case of exponential loss $\ell(u,y)=\exp(-uy)$. But generalization to tight exponential tails can be obtained for with bit of additional algebra along the lines of \citet{soudry2017implicit} and \citet{telgarsky2013margins}. }
We focus on the exponential loss $\ell(u,y)=\exp(-uy)$. However, our results can be extended to loss functions with tight exponential tails, including logistic and sigmoid losses, along the lines of \citet{soudry2017implicit} and \citet{telgarsky2013margins}.
\subsection{Gradient descent}
\citet{soudry2017implicit} showed that for almost all linearly separable datasets, gradient descent with \textit{any initialization and any bounded step-size} converges in direction to the maximum margin separator with unit $\ell_2$ norm, i.e., the hard margin support vector machine classifier,
\[
\bar{w}_\infty=\lim_{t\to\infty}\frac{\W{t}}{\|\W{t}\|_2}= w^*_{\|.\|_2}:=\argmax_{\|w\|_2\le 1} \min_n \y{n}\innerprod{w}{\x{n}}.\]
This characterization of the implicit bias is independent of both the step-size and the initialization. We already see a fundamental difference from the implicit bias of gradient descent for losses with a unique finite root (Section~\ref{sec:gd-finite}), where the characterization depended on the initialization.
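This characterization is easy to probe numerically. The sketch below (numpy; a hypothetical symmetric two-point dataset whose $\ell_2$ max-margin direction is $(1,0)$ by symmetry) runs plain gradient descent on the exponential loss and checks that the normalized iterate approaches that direction while the norm of the iterates diverges:

```python
import numpy as np

# Signed data z_n = y_n * x_n; by symmetry the l2 max-margin unit separator
# is w* = (1, 0) = argmax_{||w||_2 <= 1} min_n <w, z_n>.
Z = np.array([[2.0, 1.0], [2.0, -1.0]])
w_star = np.array([1.0, 0.0])

w = np.array([0.5, 0.4])               # arbitrary initialization
eta = 0.1
for _ in range(20000):
    residuals = np.exp(-Z @ w)         # exp(-y_n <w, x_n>)
    w += eta * Z.T @ residuals         # gradient descent on the exp loss

direction = w / np.linalg.norm(w)
# The norm diverges, but the direction approaches the max-margin separator.
assert np.linalg.norm(w) > 3.0
assert np.linalg.norm(direction - w_star) < 0.05
```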
Can we similarly characterize the implicit bias of different algorithms, establishing that $\W{t}$ converges in direction and calculating $\barw_\infty$? Can we do this even when we \textit{could not} characterize the limit point $w_\infty=\lim_{t\to\infty} \W{t}$ for losses with unique finite roots? As we will see in the following section, we can indeed answer these questions for steepest descent w.r.t.~arbitrary norms.
\subsection{Steepest Descent}\label{sec:sd-finite}
Recall that for squared loss, the limit point of steepest descent
depends on the step-size, and we were unable to obtain a useful
characterization even for infinitesimal step-size and zero initialization. In contrast, for exponential loss, the following
theorem provides a crisp characterization of the limit direction of
steepest descent as a maximum margin solution, independent of
step-size (as long as it is small enough) and initialization. Let $\|.\|_\star$ denote the dual norm of $\|.\|$.
\begin{restatable}{theorem}{thmsdexp} \label{thm:sd-exp}
For any separable dataset $\{\x{n},\y{n}\}_{n=1}^N$ and any norm $\lVert \cdot \rVert$, consider the steepest
descent updates from eq. \eqref{eq:sd-upd-opt} for minimizing $\c{L}(w)$ in eq. \eqref{eq:lm} with the exponential loss $\ell(u,y)=\exp(-uy)$. For all initializations $\W{0}$, and all bounded step-sizes satisfying $\eta_t \le \min\{\eta_+, \frac{1}{B^2\c{L}(\W{t})}\}$, where $B:=\max_n \|x_n\|_\star$ and $\eta_+<\infty$ is any finite upper bound, the iterates $\W{t}$ satisfy the following,
\[ \lim_{t\to\infty} \min_n\frac{\y{n}\innerprod{\W{t}}{x_n}}{\|\W{t}\|}= \max_{w:\|w\|\le 1} \min_{n} {\y{n}\innerprod{w}{x_n}}.\]
In particular, if there is a unique maximum-$\|.\|$ margin solution $w^\star_{\|.\|}=\argmax_{w:\|w\|\le1} \min_{n} {\y{n}\innerprod{w}{x_n}}$, then the limit direction is given by $\bar{w}_\infty=\lim\limits_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}=w^\star_{\|.\|}$.
\end{restatable}
A special case of Theorem~\ref{thm:sd-exp} is for steepest descent
w.r.t.~the $\ell_1$ norm, which as we already saw corresponds to
coordinate descent. More specifically, coordinate descent on the
exponential loss can be thought of as an alternative presentation of
AdaBoost \citep{schapire2012boosting}, where each coordinate represents
the output of one ``weak learner''. Indeed, initially mysterious
generalization properties of boosting have been understood in terms of
implicit $\ell_1$ regularization \citep{schapire2012boosting}, and
later on AdaBoost with small enough step-size was shown to converge in
direction precisely to the maximum $\ell_1$ margin solution \citep{zhang2005boosting,shalev2010equivalence,telgarsky2013margins}, just as
guaranteed by Theorem~\ref{thm:sd-exp}.
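This connection can also be checked numerically. The sketch below (numpy and scipy assumed available; the three-point dataset is hypothetical) computes the $\ell_1$ max margin by solving the corresponding linear program with \texttt{scipy.optimize.linprog}, runs coordinate descent on the exponential loss, and compares the attained normalized margin:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical signed data y_n * x_n; the l1 max-margin direction is (1, 0)
# with margin 1.5 (limited by the third point).
Z = np.array([[2.0, 1.0], [2.0, -1.0], [1.5, 0.0]])
N, d = Z.shape

# l1 max margin as an LP: max gamma s.t. Z(p - q) >= gamma, sum(p + q) <= 1,
# p, q >= 0, with variables stacked as [p, q, gamma].
c = np.zeros(2 * d + 1)
c[-1] = -1.0                           # linprog minimizes, so maximize gamma
A_ub = np.vstack([np.hstack([-Z, Z, np.ones((N, 1))]),
                  np.hstack([np.ones(2 * d), [0.0]])])
b_ub = np.concatenate([np.zeros(N), [1.0]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * (2 * d) + [(None, None)])
gamma_l1 = res.x[-1]

# Coordinate descent (steepest descent w.r.t. the l1 norm) on the exp loss.
w, eta = np.zeros(d), 0.1
for _ in range(5000):
    g = -Z.T @ np.exp(-Z @ w)          # gradient of L(w)
    i = np.argmax(np.abs(g))           # coordinate with the largest gradient
    w[i] -= eta * g[i]

margin = np.min(Z @ (w / np.abs(w).sum()))
assert abs(margin - gamma_l1) < 0.05   # attains the l1 max margin
```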
In fact, \citet{telgarsky2013margins} generalized the result to a
richer variety of exponential tailed loss functions including logistic
loss, and a broad class of non-constant step-size rules. Interestingly, coordinate descent with exact line search can result in infinite step-sizes, leading the iterates to
converge in a different direction that is not a max-$\ell_1$-margin
direction \citep{rudin2004dynamics}, hence the maximum step-size bound in Theorem~\ref{thm:sd-exp}.
Theorem~\ref{thm:sd-exp} is a generalization of the result of \citeauthor{telgarsky2013margins}
to steepest descent with respect to other norms, and our proof
follows the same strategy as \citeauthor{telgarsky2013margins}. We
first prove a generalization of the duality result of
\citet{shalev2010equivalence}: if there is a unit norm linear
separator that achieves margin $\gamma$, then $\norm{\nabla
\c{L}(w)}_\star \ge \gamma \c{L}(w)$ for all $w$. By using this lower
bound on the dual norm of the gradient, we are able to
show that the loss decreases faster than the increase in the norm of
the iterates, establishing convergence in a margin maximizing direction.
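The duality inequality itself is easy to verify numerically in the $\ell_2$ case (where the dual norm is again $\ell_2$); a small sketch checking it for random $w$ on a hypothetical dataset whose unit-norm separator $(1,0)$ attains margin $\gamma=2$:

```python
import numpy as np

rng = np.random.default_rng(0)
Z = np.array([[2.0, 1.0], [2.0, -1.0]])   # signed separable data
gamma = 2.0                                # margin of the unit separator (1, 0)

# Duality: ||grad L(w)||_2 = ||sum_n e^{-<w,z_n>} z_n||_2
#   >= <w*, sum_n e^{-<w,z_n>} z_n> >= gamma * L(w)   for every w.
for _ in range(100):
    w = rng.normal(size=2) * 3.0
    r = np.exp(-Z @ w)                     # per-example exponential losses
    grad = -Z.T @ r
    assert np.linalg.norm(grad) >= gamma * r.sum() - 1e-12
```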
In relating the optimization path to the regularization path, it is
also relevant to relate Theorem \ref{thm:sd-exp} to the result by \citet{rosset2004boosting} that
for monotone loss functions and $\ell_p$ norms, the $\ell_p$ regularization path $\hat{w}(c)=\argmin_{w:\|w\|_p\le c}
\c{L}(w)$ also converges in direction to the maximum margin
separator, i.e.,~$\lim_{c\to\infty}\frac{\hat{w}(c)}{\|\hat{w}(c)\|_p}=w^\star_{\|.\|_p}$. Although the
optimization path and regularization path are not the same, they both
converge to the same max-margin separator in the limits of $c\to \infty$ and $t\to\infty$, for the regularization path and steepest descent optimization path, respectively.
\subsection{Adaptive Gradient Descent (AdaGrad)} \label{sec:adagrad}
Adaptive gradient methods, such as AdaGrad \citep{duchi2011adaptive} or Adam \citep{kingma2015adam}, are very popular for neural network training. We now look at the implicit bias of the basic (diagonal) AdaGrad update:
\begin{equation}
w_{\left(t+1\right)} =w_{\left(t\right)}-\eta\mathbf{G}_{\left(t\right)}^{-1/2}\nabla\mathcal{L}\left(w_{\left(t\right)}\right),
\label{eq: AdaGrad}
\end{equation}
where $\mathbf{G}_{\left(t\right)}\in\mathbb{R}^{d\times d}$ is a diagonal matrix such that,
\begin{equation}
\,\forall i:\,\,\mathbf{G}_{\left(t\right)}[i,i]=\sum_{u=0}^{t}\left(\nabla\mathcal{L}\left(w_{\left(u\right)}\right)[i]\right)^{2}\,.\label{eq: G}
\end{equation}
AdaGrad updates described above correspond to a pre-conditioned gradient descent, where the pre-conditioning matrix $\mathbf{G}_{(t)}$ adapts across iterations.
It was observed by \citet{wilson2017marginal} that for neural networks with squared loss, adaptive methods tend to degrade generalization performance in comparison to non-adaptive methods (e.g., SGD with momentum), even when both methods are used to train the network until convergence to a global minimum of the training loss. This suggests that adaptivity does indeed affect the implicit bias. For squared loss, by inspecting the updates in eq. \eqref{eq: AdaGrad}, we do not expect to get a characterization of the limit point $w_\infty$ that is independent of the step-sizes.
However, we might hope that, as with steepest descent, the situation is different for strictly monotone losses, where the asymptotic behavior could potentially nullify the effect of the initial conditions. Examining the updates in eq. \eqref{eq: AdaGrad}, we can see that the robustness to the initialization and the initial updates depends on whether the matrices $\mathbf{G}_{(t)}$ diverge or converge: if $\mathbf{G}_{(t)}$ diverges, then we expect the asymptotic effects to dominate, but if it is bounded, then the limit direction will depend on the initial conditions.
Unfortunately, the following theorem shows that the components of the matrix $\mathbf{G}_{(t)}$ are bounded, and hence even for strictly monotone losses, the initial conditions ${\W{0}},\mathbf{G}_{\left(0\right)}$ and the step-size $\eta$ have a non-vanishing contribution to the asymptotic behavior of $\mathbf{G}_{(t)}$, and hence to the limit direction $\bar{w}_{\infty}=\lim\limits_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}$, whenever it exists. In other words, the implicit bias of AdaGrad does indeed depend on the initialization and step-size.
\begin{restatable}{theorem}{thmlemadagrad}
\label{lem:AdaGrad} For any linearly separable training data $\{\x{n},\y{n}\}_{n=1}^N$, consider the AdaGrad iterates $\W{t}$ from eq.~\eqref{eq: AdaGrad} for minimizing $\mathcal{L}\left(w\right)$ with exponential loss $\ell(u,y)=\exp(-uy)$.
For any fixed and bounded step-size $\eta<\infty$, and any initialization of ${\W{0}}$ and $\mathbf{G}_{\left(0\right)}$ such that $\frac{\eta}{2}\mathcal{L}\left({\W{0}}\right)<1$ and $\left\Vert \mathbf{G}_{\left(0\right)}^{-1/4}\x{n}\right\Vert _{2}\leq1$, we have $\forall i,\forall t:\,\,\mathbf{G}_{\left(t\right)}[i,i]<\infty.$
\end{restatable}
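A quick simulation is consistent with this: in the sketch below (numpy; hypothetical separable data), diagonal AdaGrad on the exponential loss accumulates almost all of the squared-gradient mass in the early iterations, and the entries of $\mathbf{G}_{(t)}$ plateau:

```python
import numpy as np

# Hypothetical separable signed data y_n * x_n.
Z = np.array([[2.0, 1.0], [2.0, -1.0], [1.0, 1.5]])
d = Z.shape[1]

w = np.zeros(d)
G = np.zeros(d)                        # diagonal of G_(t)
eta = 0.1
snapshots = []
for t in range(20000):
    g = -Z.T @ np.exp(-Z @ w)          # gradient of the exponential loss
    G += g ** 2                        # accumulate squared gradients
    w -= eta * g / np.sqrt(G)          # diagonal AdaGrad step
    if t in (2000, 19999):
        snapshots.append(G.copy())

# The loss is driven toward zero, yet G plateaus: late increments are a small
# fraction of the accumulated total, consistent with bounded G_(t)[i,i].
assert np.exp(-Z @ w).sum() < 0.5
assert np.all(snapshots[1] - snapshots[0] < 0.2 * snapshots[1])
```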
\section{Gradient descent on the factorized parameterization}\label{sec:mf}
Consider the empirical risk minimization in eq. \eqref{eq:lm} for matrix valued $X_n\in\mathbb{R}^{d\times d}$ and $W\in\mathbb{R}^{d\times d}$:
\begin{equation}
\min_{W} \c{L}(W)=\sum_{n=1}^N\ell(\innerprod{W}{X_n},\y{n}).
\label{eq:lmW}
\end{equation}
This is the exact same setting as eq. \eqref{eq:lm} obtained by arranging $w$ and $\x{n}$ as matrices. We can now study another class of algorithms for learning linear models based on matrix factorization, where we reparameterize $W$ as $W=UV^\top$ with \textit{unconstrained} $U\in\mathbb{R}^{d\times d}$ and $V\in\mathbb{R}^{d\times d}$ to get the following equivalent objective,
\begin{equation}
\min_{U,V} \c{L}(UV^\top)=\sum_{n=1}^N\ell(\innerprod{UV^\top}{X_{n}},\y{n}).
\label{eq:lm-uv}
\end{equation}
Note that although non-convex, eq. \eqref{eq:lm-uv} is equivalent to eq. \eqref{eq:lmW} with the exact same set of global minima over $W=UV^\top$.
\citet{gunasekar2017implicit} studied this problem for squared loss $\ell(u,y)=(u-y)^2$ and noted that gradient descent on the factorization yields radically different implicit bias compared to gradient descent on $W$. In particular, gradient descent on $U,V$ is often observed to be biased towards low nuclear norm solutions, which in turn ensures generalization \citep{srebro2005generalization} and low rank matrix recovery \citep{recht2010guaranteed,candes2009exact}. Since the matrix factorization objective in eq. \eqref{eq:lm-uv} can be viewed as a two-layer neural network with linear activation, understanding the implicit bias here could provide direct insights into characterizing the implicit bias in more complex neural networks with non-linear activations.
\citet{gunasekar2017implicit} noted that, the optimization problem in eq. \eqref{eq:lm-uv} over factorization $W\!=\!UV^\top\!$ can be cast as a special case of optimization over p.s.d. matrices with unconstrained symmetric factorization $W\!=\!UU^\top$:
\begin{equation}
\min_{U\in\mathbb{R}^{d\times d}}\bar{\c{L}}(U)=\c{L}(UU^\top)=\sum_{n=1}^N\ell\left(\innerprod{UU^\top}{X_{n}},\y{n}\right).
\label{eq:lm-u}
\end{equation}
Specifically, in terms of both the objective as well as the gradient descent updates, a problem instance of eq. \eqref{eq:lm-uv} is equivalent to a problem instance of eq. \eqref{eq:lm-u} with larger data matrices $\tilde{X}_n=\left[\begin{smallmatrix}0&X_n\\X_n^\top&0\end{smallmatrix}\right]$ and the loss optimized over a larger p.s.d. matrix of the form $\tilde{U}\tilde{U}^\top=\left[\begin{smallmatrix}A_1&W\\W^\top&A_2\end{smallmatrix}\right]$, where $W=UV^\top$ corresponds to the optimization variables in the original problem instance of eq.~\eqref{eq:lm-uv}, and $A_1$ and $A_2$ are p.s.d. matrices that are irrelevant for the objective.
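This equivalence is a short computation; the sketch below verifies it numerically for random matrices (the factor of $2$ contributed by the two off-diagonal blocks is inconsequential for the minimizers):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
U = rng.normal(size=(d, d))
V = rng.normal(size=(d, d))
X = rng.normal(size=(d, d))

# Lift: stack U over V and embed X off-diagonally.
U_tilde = np.vstack([U, V])
X_tilde = np.block([[np.zeros((d, d)), X], [X.T, np.zeros((d, d))]])
W_tilde = U_tilde @ U_tilde.T

# <U_tilde U_tilde^T, X_tilde> = 2 <U V^T, X>: the symmetric objective sees
# the asymmetric one, up to an inconsequential factor of 2.
lhs = np.trace(X_tilde.T @ W_tilde)
rhs = 2.0 * np.trace(X.T @ (U @ V.T))
assert np.isclose(lhs, rhs)

# The off-diagonal block of U_tilde U_tilde^T is W = U V^T; the diagonal
# blocks are the p.s.d. A_1, A_2 that do not enter the objective.
assert np.allclose(W_tilde[:d, d:], U @ V.T)
```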
Henceforth, we will also consider the symmetric matrix factorization in \eqref{eq:lm-u}.
Let $\U{0}\in\mathbb{R}^{d\times d}$ be any full rank initialization; the gradient descent updates in $U$ are given by,
\begin{equation}
\U{t+1}=\U{t}-\eta_t\nabla\bar{\c{L}}(\U{t}),
\label{eq:mf-updateU}
\end{equation}
with corresponding updates in $\WW{t}=\U{t}\U{t}^\top$ given by,
\begin{flalign}
\WW{t+1}=\WW{t}\! &-\eta_t\big[\nabla{\c{L}}(\WW{t})\WW{t}+\WW{t}\nabla{\c{L}}(\WW{t})\big]+\eta_t^2\nabla{\c{L}}(\WW{t})\WW{t}\nabla{\c{L}}(\WW{t})
\label{eq:mf-updateW}
\end{flalign}
\paragraph{Losses with a unique finite root }For squared loss, \citet{gunasekar2017implicit} showed that the implicit bias of the iterates in eq.~\eqref{eq:mf-updateW} crucially depends on both the initialization $\U{0}$ and the step-size $\eta$. \citeauthor{gunasekar2017implicit} conjectured, and provided theoretical and empirical evidence, that
gradient descent on the factorization converges to the minimum
nuclear norm global minimum, but only if the initialization is infinitesimally close to zero and the step-sizes
are infinitesimally small. \citet{li2017algorithmic} later proved the conjecture under the additional assumption that the measurements $X_n$ satisfy a certain \textit{restricted isometry property (RIP)}.
In the case of squared loss, it is evident that for finite step-sizes and finite initialization, the implicit bias towards the minimum nuclear norm global minima is not exact. In practice, not only do we need $\eta>0$, but we also cannot initialize very close to zero since zero is a saddle point for eq. \eqref{eq:lm-u}. The natural question motivated by the results in Section~\ref{sec:monotonic} is: for strictly monotone losses, can we get a characterization of the implicit bias of gradient descent for the factorized objective in eq. \eqref{eq:lm-u} that is more robust to initialization and step-size?
\paragraph{Strict monotone losses } In the following theorem, we again see that the characterization of the implicit bias of gradient descent for factorized objective is more robust in the case of strict monotone losses.
\begin{restatable}{theorem}{thmmfexp} \label{thm:mf-exp}
For almost all datasets $\{X_{n},\y{n}\}_{n=1}^N$ separable by a p.s.d. linear classifier, consider the gradient descent iterates $\U{t}$ in eq. \eqref{eq:mf-updateU} for minimizing $\bar{\c{L}}(U)$ with the exponential loss $\ell(u,y)=\exp(-uy)$ and the corresponding sequence of linear predictors $\WW{t}$ in eq. \eqref{eq:mf-updateW}. For any full rank initialization $\U{0}$ and
any finite step-size sequence $\{\eta_t\}_t$,
if $\WW{t}$ asymptotically minimizes $\c{L}$, i.e., $\c{L}(\WW{t})\to0$, and additionally the updates $\U{t}$ and the gradients $\nabla\c{L}(\WW{t})$ converge in direction,
then the limit direction $\bar{U}_\infty=\lim\limits_{t\to\infty}\frac{\U{t}}{\|\U{t}\|_*}$
is a scaling of a first order stationary point (f.o.s.p.) of the following non-convex optimization problem
\begin{equation}
\bar{U}_\infty\propto\text{ f.o.s.p. }\min_{U\in\mathbb{R}^{d\times d}} \norm{U}^2_2 \quad \text{ s.t., } \quad \forall n, {\y{n}\innerprod{UU^\top}{X_{n}}}\ge 1.
\label{eq:uopt}
\end{equation}
\end{restatable}
\begin{remark}Any global minimum $U^*$ of eq.~\eqref{eq:uopt} corresponds to a predictor $W^*$ that minimizes the nuclear norm $\|.\|_*$ among linear p.s.d. classifiers satisfying the margin constraints,
\begin{equation}W^*=\argmin_{W\succcurlyeq0}\|W\|_* \text{ s.t., } \forall n, {\y{n}\innerprod{W}{X_{n}}}\ge1.\label{eq:wopt}\end{equation}
Additionally, in the absence of rank constraints on $U$, all second order stationary points of eq.~\eqref{eq:uopt} are global minima for the problem. More generally, we expect a stronger result that $\bar{W}_\infty=\bar{U}_\infty\bar{U}_\infty^\top$, which is also the limit direction of $\WW{t}$, is a minimizer of eq. \eqref{eq:wopt}. Showing the stronger result that $\WW{t}$ indeed converges in direction to $W^*$ is of interest for future work.
\end{remark}
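The first claim of the remark rests on the identity $\|UU^\top\|_*=\mathrm{tr}(UU^\top)=\|U\|_F^2$ for the p.s.d. matrix $UU^\top$ (reading the factor norm in eq.~\eqref{eq:uopt} as the entrywise $\ell_2$, i.e., Frobenius, norm); a quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.normal(size=(4, 4))
W = U @ U.T                            # p.s.d. predictor W = U U^T

# For p.s.d. W: nuclear norm = sum of eigenvalues = trace, and
# tr(U U^T) = ||U||_F^2, linking the objectives of the two problems.
nuclear = np.linalg.svd(W, compute_uv=False).sum()
assert np.isclose(nuclear, np.trace(W))
assert np.isclose(nuclear, np.linalg.norm(U, "fro") ** 2)
```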
Here we note that convergence of $\U{t}$ in direction is necessary for the characterization of the implicit bias to be relevant, but in Theorem~\ref{thm:mf-exp}, we require the stronger condition that the gradients $\nabla\c{L}(\WW{t})$ also converge in direction. Relaxing this condition is of interest for future work.
\paragraph{Key property } Let us look at the exponential loss when $\WW{t}$ converges in direction to, say, $\bar{W}_\infty$. Then $\WW{t}$ can be expressed as $\WW{t}=\bar{W}_\infty g(t)+\rho(t)$ for some scalar $g(t)\to\infty$ and $\frac{\rho(t)}{g(t)}\to 0$. Consequently, the gradients $\nabla\c{L}(\WW{t})=\sum_{n}\text{e}^{-g(t)\y{n}\innerprod{\bar{W}_\infty}{X_{n}}}e^{-\y{n}\innerprod{\rho(t)}{X_{n}}}\,y_nX_{n}$ will asymptotically be dominated by linear combinations of the examples $X_{n}$ that have the smallest distance to the decision boundary, i.e.,~the support vectors of $\bar W_\infty$. This behavior can be used to show optimality of $\bar U_\infty$ with $\bar W_\infty=\bar U_\infty \bar U_\infty^\top$ for the first order stationarity conditions of the maximum margin problem in eq.~\eqref{eq:uopt}.
This idea is formalized in the following lemma, which is of interest beyond the results in this paper.
\begin{restatable}{lemma}{lemgradconv} \label{lem:grad-conv}
For almost all linearly separable datasets $\{\x{n},\y{n}\}_{n=1}^N$, consider any sequence $\W{t}$ that minimizes $\c{L}(w)$ in eq. \eqref{eq:lm} with exponential loss, i.e., $\c{L}(\W{t})\to0$.
If $\frac{\W{t}}{\|\W{t}\|}$ converges, then for every accumulation point $z_\infty$ of $\Big\{\frac{-\nabla\c{L}(\W{t})}{\norm{\nabla\c{L}(\W{t})}}\Big\}_t$,
$\exists \{\alpha_n\ge 0\}_{n\in S} \text{ s.t., } z_\infty=\sum\limits_{n\in S}\alpha_n\y{n}\x{n},$
where $\barw_\infty=\lim\limits_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}$ and $S=\{n:\y{n}\innerprod{\bar w_\infty}{\x{n}}=\min_n \y{n}\innerprod{\bar w_\infty}{\x{n}}\}$ are the indices of the data points with smallest margin to $\bar w_\infty$.
\end{restatable}
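The lemma can be observed numerically. In the sketch below (numpy; the data are hypothetical, with support vectors $z_1=(2,1)$, $z_2=(2,-1)$ at margin $2$ and a non-support point $z_3=(3,3)$ at margin $3$ w.r.t.\ the max-margin direction $(1,0)$), the normalized negative gradient along the gradient descent path approaches the positive span of the support vectors alone:

```python
import numpy as np

# Hypothetical signed data: z_1, z_2 are the support vectors (margin 2) of
# the max-margin direction (1, 0); z_3 has larger margin 3.
Z = np.array([[2.0, 1.0], [2.0, -1.0], [3.0, 3.0]])

w = np.zeros(2)
eta = 0.05
for _ in range(50000):
    w += eta * Z.T @ np.exp(-Z @ w)    # gradient descent on the exp loss

g_neg = Z.T @ np.exp(-Z @ w)           # -grad L(w_t)
g_dir = g_neg / np.linalg.norm(g_neg)

# The normalized negative gradient approaches the positive span of the
# support vectors alone: (z_1 + z_2) / ||z_1 + z_2|| = (1, 0).
assert np.linalg.norm(g_dir - np.array([1.0, 0.0])) < 0.1
```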
\section{Summary}
We studied the implicit bias of different optimization algorithms for two families of losses, losses with a unique finite root and strict monotone losses, where the biases are fundamentally different.
In the case of losses with a unique finite root, we have a simple characterization of the limit point $w_\infty=\lim_{t\to\infty}\W{t}$ for mirror descent. But for this family of losses, such a succinct characterization does not extend to steepest descent with respect to general norms.
On the other hand, for strict monotone losses, we noticed that the initial updates of the algorithm, including initialization and initial step-sizes are nullified when we analyze the asymptotic limit direction $\bar{w}_\infty=\lim\limits_{t\to\infty}\frac{\W{t}}{\|\W{t}\|}$. We show that for steepest descent, the limit direction is a maximum margin separator within the unit ball of the corresponding norm.
We also looked at other optimization algorithms for strictly monotone losses. For matrix factorization, we again get a more robust characterization that relates the limit direction to the maximum margin separator with unit nuclear norm. This, again in contrast to squared loss \citep{gunasekar2017implicit}, is independent of the initialization and step-size.
However, for AdaGrad, we show that even for strict monotone losses, the limit direction $\bar{w}_\infty$ could depend on the initial conditions.
In our results, we characterize the implicit bias for linear models as minimum norm (potential) or maximum margin solutions. These are indeed very special among all the solutions that fit the training data, and in particular, their generalization performance can in turn be understood from standard analyses \cite{bartlett2003rademacher}.
Going forward, for more complicated non-linear models, especially neural networks, further work is required in order to get a more complete understanding of the implicit bias. The preliminary result for matrix factorization provides us tools to attempt extensions to multi-layer linear models, and eventually to non-linear networks.
Even for linear models, the question of what the implicit bias is when $\c{L}(w)$ is optimized with explicit constraints $w\in\c{W}$ is an open problem. We believe similar characterizations can be obtained when there are multiple feasible solutions with $\c{L}(w)=0$.
We also believe the results for single outputs considered in this paper can be extended to multi-output loss functions.
Finally, we would like a more fine-grained analysis connecting the iterates $\W{t}$ along the optimization path of various algorithms to the regularization path, $\hat{w}(c)=\argmin_{\c{R}(w)\le c}\c{L}(w)$, where an explicit regularization is added to the optimization objective. In particular, our positive characterizations show that the optimization and regularization paths meet at the limits $t\to\infty$ and $c\to \infty$, respectively. It would be desirable to further understand the relations between the entire optimization and regularization paths, which will help us understand the non-asymptotic effects of early stopping.
\section*{Acknowledgments} The authors are grateful to M.S. Nacson, Y. Carmon, and the anonymous ICML reviewers for helpful comments on the manuscript. The research was supported in part by NSF IIS award 1302662. The work of DS was supported by the Taub Foundation.
\subsection{Convergence of $-\nabla\c{L}(\W{t})$}
{\lemgradconv*}
Here, \textit{for almost all $\{\x{n},y_n\}$} means that the statement holds with probability $1$ when the signed features $y_n\x{n}$ are drawn independently from a distribution that is absolutely continuous w.r.t.\ the $d$-dimensional Lebesgue measure.
\begin{proof} Without loss of generality, assume $\forall n, y_n=1$; otherwise the sign of $\y{n}$ can be absorbed into $\x{n}$ as $\x{n}\gets y_n\x{n}$.
Let $X\in\mathbb{R}^{N\times d}$ denote the data matrix with $\x{n}\in\mathbb{R}^d$ along the rows of $X$. Also, for any $J\subseteq [N]$, $X_J\in\mathbb{R}^{|J|\times d}$ denotes the submatrix of $X$ with only the rows corresponding to indices in $J$.
For a strictly monotone loss over separable data, $\lim_{t\to\infty}\c{L}(\W{t})=0$ implies that asymptotically $\W{t}$ satisfies $X\W{t}>0$ and $\|\W{t}\|\to\infty$.
Since $\W{t}$ converges in direction to $\bar w_\infty$, we can write $\W{t} = g(t) \bar w_\infty +\rho(t)$ for a scalar $g(t)=\|\W{t}\|\to\infty$ and a vector $\rho(t)\in\mathbb{R}^d$ such that $\frac{\rho(t)}{g(t)}\to0$. Additionally, this implies $X\bar w_{\infty}>0$.
We introduce some additional notation:
\begin{compactitem}
\item Denote the asymptotic margin of $x_n$ as $\bar{\gamma}_n:=\innerprod{\x{n}}{\bar{w}_\infty}$. Additionally, we define the following:
\begin{compactitem}
\item
Let $\gamma=\min_n\innerprod{\x{n}}{\bar w_\infty}=\min_n e_n^\top X\bar{w}_\infty>0$ denote the smallest margin, where $e_n\in\mathbb{R}^N$ are the standard basis vectors.
\item Let $S:=\{n:\innerprod{\x{n}}{\bar{w}_\infty}=\gamma\}$ denote the indices of the support vectors of $\bar{w}_\infty$.
\item Denote the second smallest margin of $\bar{w}_\infty$ as $\bar{\gamma}:=\min_{n\notin S}\innerprod{\x{n}}{\bar{w}_\infty}>\gamma$.
\end{compactitem}
\item Define $\alpha_n(t) := \exp(-\innerprod{\rho(t)}{x_n})$ and let $\alpha(t)\in\mathbb{R}^N$ be a vector of $\alpha_n(t)$ stacked. For any $J\subset[N]$ and $\alpha\in\mathbb{R}^N$, similar to $X_J$, let $\alpha_J\in\mathbb{R}^{|J|}$ be the sub-vector with components corresponding to the indices in $J$
\item Let $B= \max_n \norm{x_n}_2$.
\end{compactitem}
Since $\norm{\rho(t)}/g(t)\to0$ and $\gamma,\bar{\gamma}>0$, we have $\forall \epsilon_1,\epsilon_2>0$, $\exists t_{\epsilon_1},t_{\epsilon_2}$ such that
\begin{equation}
\begin{split}
\forall t>t_{\epsilon_1},\;\forall n,\quad &\innerprod{\rho(t)}{x_n}\le \|{\rho(t)}\|_2 B\le \epsilon_1\gamma g(t),\text{ and }\\
\forall t>t_{\epsilon_2},\;\forall n,\quad &\innerprod{\rho(t)}{x_n}\ge -\|{\rho(t)}\|_2 B\ge -\epsilon_2\bar{\gamma} g(t)
\end{split}
\label{eq:rho}
\end{equation}
We first prove the following claim:
\begin{claim} For almost all $\{\x{n}\}$, $|S|\le d$ and $\sigma_{|S|} (X_{S}) >0$, where $\sigma_k (A)$ is the $k^{th}$ singular value of $A$.
\end{claim}
\begin{proof}
Since, $S= \{n:\innerprod{\bar{w}_\infty}{\x{n}}=\gamma\}$, we have $X_{S} \bar{w}_\infty = \gamma 1_{S}\in\mathbb{R}^{|S|}.$
If $X$ is randomly drawn from a continuous distribution, then for any fixed subset $J$ with $|J| >d$, the column span of $X_J$ is a proper subspace of $\mathbb{R}^{|J|}$ and will miss any fixed vector $v$ that is independent of $X$ with probability $1$. Thus
\begin{align}
\mathbb{R}^{|J|}\ni1_{J} \notin \text{colspan}(X_J),\text{ for almost all }X_J\in\mathbb{R}^{|J|\times d}.
\end{align}
Since we always have $1_{S} \in \text{colspan} (X_S)$, this implies for almost all $X$, $|S| \le d$ and $\sigma_{|S|} (X_{S} )>0$.
\end{proof}
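The genericity argument in the claim can be illustrated numerically: for a random $X_J$ with $|J|>d$, the all-ones vector has a strictly positive least-squares residual against the columns of $X_J$:

```python
import numpy as np

rng = np.random.default_rng(3)
N, d = 6, 3                            # more rows than columns: |J| > d

# For random X_J the (at most d-dimensional) column span almost surely
# misses the fixed all-ones vector, so its least-squares residual is > 0.
X_J = rng.normal(size=(N, d))
ones = np.ones(N)
coeffs, residual, rank, sv = np.linalg.lstsq(X_J, ones, rcond=None)
assert rank == d
assert residual[0] > 1e-6
```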
\paragraph{Exponential loss: }
For exponential loss, the gradient at $\W{t}$ is given by
\begin{align}
\nonumber -\nabla \c{L}( \W{t} ) &= \sum_{n \in S} \exp(-\gamma g(t)) \exp(- \rho(t) ^\top x_n) x_n + \sum_{n \in S^c} \exp( - \bar \gamma_ng(t)) \exp(-\rho(t)^\top x_n) x_n\\
&:= I(t) + II(t),
\label{eq:1p2}
\end{align}
where $I(t)=\sum_{n \in S} \exp(-\gamma g(t)) \exp(- \rho(t) ^\top x_n) x_n $ and $II(t)=\sum_{n \notin S} \exp( - \bar \gamma_ng(t)) \exp(-\rho(t)^\top x_n) x_n$.
To prove the lemma, we need to show that the gradient is asymptotically dominated by the positive span of the support vectors. Towards this goal, we will now show that $\lim\limits_{t\to\infty} \frac{\norm{II(t)}}{\norm{I(t)}} =0$.
Recall that $\alpha(t)=[\alpha_n(t)]_n$ is defined as $\alpha_n(t) = \exp(-\innerprod{\rho(t)}{x_n})$ and $\alpha_S(t)\in\mathbb{R}^{|S|}$ is a subvector restricted to indices in $S$. The following are true for any $\epsilon_1,\epsilon_2>0$.
\begin{asparaenum}[Step 1.]
\item \textit{Lower bound on $I(t)$: } There exists $t_{\epsilon_1}$ such that for all $t>t_{\epsilon_1}$, we have
\begin{align}
\nonumber \norm{I}_2 &= \exp(-\gamma g(t) ) \norm{X_{S} \alpha_S(t)} _2
\ge \exp(-\gamma g(t)) \sigma_{|S|} (X_S) \norm{\alpha_S(t)}_2\\
\nonumber&\ge \exp(-\gamma g(t)) \sigma_{|S|} (X_S) \max_{n\in S} \alpha_n(t)\\
&\overset{(a)}\ge\sigma_{|S|} (X_S) \exp(- (1+\epsilon_1)\gamma g(t)):= C_1 \exp(- (1+\epsilon_1)\gamma g(t)),
\label{eq:1}
\end{align}
where $(a)$ follows from \eqref{eq:rho}, from which we get $\alpha_n(t)=\exp(-\innerprod{\rho(t)}{x_n})\ge\exp(- \epsilon_1\gamma g(t)) $, and $C_1=\sigma_{|S|} (X_S)>0$ is a constant independent of $t$.
\item \textit{Upper bound on $II(t)$: } Again, for large enough $t>t_{\epsilon_2}$, we have
\begin{align}
\nonumber \norm{II(t)}_2 &=\Big\|\sum_{n \notin S} \exp( - \bar \gamma_ng(t)) \exp(-\rho(t)^\top x_n) x_n\Big\|_2\le N\max_{n\notin S}\exp(- \bar \gamma_n g(t)) \alpha_n(t)\|x_n\|_2\\
\nonumber&\overset{(a)}\le \exp(- \bar \gamma g(t))BN \max_{n\notin S}\alpha_n(t)\\
&\overset{(b)}\le BN\exp(-(1-\epsilon_2)\bar{\gamma} g(t)):=C_2 \exp(-(1-\epsilon_2)\bar{\gamma}g(t)),
\label{eq:2}
\end{align}
where $(a)$ uses $\forall n\notin S, \bar \gamma_n\ge \bar\gamma$ (recall that $\bar\gamma$ is the second smallest margin to $\bar{w}_\infty$) and $(b)$ follows from \eqref{eq:rho}, using $\alpha_n(t)=\exp(-\innerprod{\rho(t)}{x_n})\le\exp(\epsilon_2\bar{\gamma}g(t))$, and $C_2=BN>0$ is again a constant independent of $t$.
\item \textit{Remaining steps in the proof: } By combining \eqref{eq:1} and \eqref{eq:2} using $\epsilon_1=\nicefrac{(\bar{\gamma}-\gamma)}{4\gamma}$ and $\epsilon_2=\nicefrac{(\bar{\gamma}-\gamma)}{4\bar{\gamma}}$ and an appropriate constant $C>0$, we have for any norm $\|.\|$
\begin{align}
\frac{\norm{II(t)}}{\norm{I(t)}} &\le C\exp( - \frac12(\bar \gamma -\gamma)g(t))\overset{(a)}\to 0,
\end{align}
where $(a)$ follows from $\bar{\gamma}>\gamma$ and $g(t)=\|\W{t}\|\to \infty$.
Finally, note that $ -\frac{\nabla \mathcal{L}(\W{t})}{\norm{\nabla \mathcal{L}(\W{t})}} = \frac{I(t)}{\norm{I(t)+II(t)}} +\frac{II(t)}{\norm{I(t)+II(t)}}.$
Since $\norm{\frac{II(t)}{\norm{I(t)+II(t)}} }\le \frac{\norm{II(t)}/{\|I(t)\|}}{1-\norm{II(t)}/\norm{I(t)}}\overset{t\to\infty}\to0$, and $I(t) \propto \sum_{n\in S}\alpha_{n}(t)\x{n}$ with $\alpha_n(t)>0$, every limit point of $-\frac{\nabla \mathcal{L}( \W{t})}{\norm{\nabla \mathcal{L}( \W{t})}}$ is of the form $\sum_{n\in S}\alpha_n\x{n}$ for some $\alpha_n\ge 0$.
\end{asparaenum}
Recall that at the beginning of the proof we made the change of variable $\x{n}\gets \y{n}\x{n}$. Reversing this change of variable finishes the proof for the exponential loss.
\end{proof}
\remove{
\subsection{Convergence of $\U{t}$ in direction}
The following general lemma shows the first part of Theorem~\ref{thm:mf-exp} on convergence of $\U{t}$.
\begin{lemma} [$\U{t}$ converges in direction if $\Delta\U{t}$ converges in direction] Let $\U{t}$ be any diverging sequence of iterates, i.e., $\|\U{t}\|\to \infty$, iteratively defined using bounded discrete updates of $\U{t+1}=\U{t}+\eta_t\Delta\U{t}$ for any $0<\eta_t<\infty$, any finite incremental update direction $\Delta\U{t}$ such that $\|\Delta\U{t}\|>0$, and any finite initialization $\|\U{0}\|<\infty$.
For such sequences, if $\lim\limits_{t\to\infty}\frac{\Delta\U{t}}{\norm{\Delta\U{t}}}=\bar{U}_\infty$, then $\lim\limits_{t\to\infty}\frac{\U{t}}{\norm{\U{t}}}=\bar{U}_\infty$.
\label{lem:wconv}
\end{lemma}
\begin{proof}
For all $\eta_t>0$, since $\frac{\Delta\U{t}}{\norm{\Delta\U{t}}}\to\bar{U}_\infty$, we can write $\Delta\U{t}=\bar{U}_\infty h(t)+\xi(t)$ where $h(t)=\|\Delta\U{t}\|$ and $\frac{\xi(t)}{h(t)}\to0$.
Also, define $g(t)$ and $\rho(t)$ as follows\footnote{Note that $g(t)=\sum_{u<t}\eta_u h(u)$ here is different from the notation in the proof of Lemma~\ref{lem:grad-conv}, where we defined $g(t)=\norm{\W{t}}$.}:
\begin{equation}
g(t):=\sum_{u<t}\eta_{u}h(u)\text{, and }\rho(t):=\U{t}-\bar{U}_\infty g(t)=\sum_{u<t}\eta_{u}\xi(u)+\U{0}.
\label{eq:hgt}
\end{equation}
In order to prove the lemma, we need to show that $\frac{{\rho}(t)}{g(t)}\to 0$. We observe the following:
\begin{asparaenum}
\item We have $\norm{\U{t}}-\|\U{0}\|\le \sum_{u<t}\eta_{u} \|\Delta\U{u}\|=g(t)$. Since $\|\U{t}\|\to \infty$, we thus get $g(t) \to \infty.$
\item
Also, $g(t)=\sum_{u<t}\eta_uh(u)$ is strictly monotonically increasing, since for all $t<\infty$, $h(t)=\norm{\Delta \U{t}}>0$.
\item Finally, $\lim\limits_{t\to\infty} \frac{\rho(t+1)-\rho(t)}{g(t+1)-g(t)}=\lim\limits_{t\to\infty} \frac{\xi(t)}{ h(t)}=0.$
\end{asparaenum}
Thus, by the Stolz-Cesaro theorem (Theorem~\ref{thm:stolzcesaro}), we get $\lim\limits_{t\to\infty }\frac{\rho(t)}{g(t)}=0$.
\end{proof}
\remove{
Recall the set of global minima for $\c{L}(w)$, $\c{G}=\{w: \forall n, \y{n}\innerprod{w}{\x{n}}> 0\text{ and }\|w\|=\infty\}$ and the set of $1$-margin solutions $\c{G}^{(1)}=\{w:\forall n, \y{n}\innerprod{w}{\x{n}}\ge 1\}$
\begin{proposition} If $\lim\limits_{t\to\infty}\c{L}(\W{t})=0$ and Lemma~\ref{lem:d-conv} holds, then $\exists \gamma>0$, such that $\frac{w_{\infty}}{\gamma}\in \c{G}^{(1)}$.
\end{proposition}
\begin{proof}
Since $\lim_{t\to\infty}\c{L}(\W{t})=0$, we have for all $n$ that $\lim_{t\to\infty}\y{n}\innerprod{\W{t}}{\x{n}}=\infty>0$, i.e., $\forall M>0,\exists t_M\text{ s.t. } \forall t>t_M$, $\y{n}\innerprod{\W{t}}{\x{n}}\ge M$ for all $n$. This implies that for all $t>t_M$, $\y{n}\innerprod{\frac{\W{t}}{\|\W{t}\|}}{\x{n}}>0$, and hence under Lemma~\ref{lem:d-conv}, $\lim_{t\to\infty}\y{n}\innerprod{\frac{\W{t}}{\|\W{t}\|}}{\x{n}}=\innerprod{w_\infty}{\x{n}}>0$. Since $N$ is finite, $\gamma:=\min_n \innerprod{w_\infty}{\x{n}}>0$, and hence $w_\infty/\gamma\in\c{G}^{(1)}$.
\end{proof}
}
}
\subsection{Proof of Theorem~\ref{thm:mf-exp}}
{\thmmfexp*}
\begin{proof}
In this proof, $\norm{.}_F$, $\norm{.}_*$, and $\norm{.}_\text{op}$ denote the Frobenius norm, nuclear norm, and operator norm, respectively.
From the assumptions of the theorem, we have that $\U{t}$ converges in direction. Let $\bar{U}_\infty=\lim\limits_{t\to\infty}\frac{\U{t}}{\norm{\U{t}}_F}$. Noting that for $\WW{t}=\U{t}\U{t}^\top$, $\norm{\WW{t}}_*=\norm{\U{t}}_F^2$, we have that $\lim\limits_{t\to\infty}\frac{\WW{t}}{\norm{\WW{t}}_*}=\lim\limits_{t\to\infty}\frac{\U{t}}{\norm{\U{t}}_F}\frac{\U{t}^\top}{\norm{\U{t}}_F}=\bar{U}_\infty\bar{U}_\infty^\top$. Denote $\bar{W}_\infty=\lim\limits_{t\to\infty}\frac{\WW{t}}{\norm{\WW{t}}_*}=\bar{U}_\infty\bar{U}_\infty^\top$.
Since $\WW{t}$ minimizes a strictly monotone loss, we have that $\|\WW{t}\|_*\to\infty$ and $\forall n, y_n\innerprod{\bar W_\infty}{X_n}>0$. Let $\gamma=\min_n y_n\innerprod{\bar W_\infty}{X_n}$ denote the margin of $\bar W_\infty$ and $S=\{n:\y{n}\innerprod{\bar W_\infty}{X_n}=\gamma\}$ denote the indices of the support vectors of ${\bar{W}}_\infty$.
In order to prove the theorem, we can equivalently show that the positive scaling of $\bar U_\infty$ given by $\bar{\bar{U}}_\infty=\bar{U}_\infty/\sqrt{\gamma}$ is a first-order stationary point of eq.~\eqref{eq:uopt}.
In the remainder of the proof we show that $\bar{\bar{U}}_\infty$ satisfies the following KKT optimality conditions of \eqref{eq:uopt}:
\begin{flalign}
\textbf{To show:}&&& \y{n}\innerprod{\bar{\bar{U}}_\infty\bar{\bar{U}}_\infty^\top}{X_n} \ge 1 \text{ and }\exists \alpha\ge0 \text{ s.t., } &&\textbf{(primal and dual feasibility)}&\label{eq:pf}\\
&&& \forall n\notin {S}: \alpha_n=0\text{ and }&&\textbf{(complementary slackness) }&\label{eq:cs}\\
&&& \bar{\bar{U}}_\infty=\sum_n\alpha_n y_nX_{n}{\bar{\bar{U}}}_\infty.&&\textbf{(stationarity) }&\label{eq:s}
\end{flalign}
\paragraph{Primal feasibility} This holds by definition, since $\bar{\bar{U}}_\infty\bar{\bar{U}}_\infty^\top=\bar{W}_\infty/\gamma$ has unit margin by the scaling.
\paragraph{Dual feasibility and complementary slackness}
Denote $Z_{(t)}=-\nabla\c{L}(\WW{t})=\sum_n\exp(-y_n\innerprod{\WW{t}}{X_n}) y_n X_n$. From the assumptions in the theorem, we have that $Z_{(t)}$ converges in direction. Let $\bar{Z}_\infty=\lim\limits_{t\to\infty}\frac{\Z{t}}{\norm{\Z{t}}_\text{op}}$.
In addition, we also assume that $\c{L}(\WW{t})\to0$ and that $\U{t}$ converges in direction, which in turn implies convergence in direction of $\WW{t}=\U{t}\U{t}^\top$. Thus, from Lemma~\ref{lem:grad-conv}, we have $\bar{Z}_\infty=\sum_{n\in S} {\alpha}_ny_nX_n$ for some $\{{\alpha}_n\}_{n=1}^N$ such that ${\alpha}_n\ge 0$ and ${\alpha}_n=0$ for all $n\notin S$. We propose this $\{\alpha_n\}_{n=1}^N$ as our candidate dual certificate, which satisfies both dual feasibility and complementary slackness.
\textbf{Stationarity: }To prove the theorem, we now need to show that: $\bar{\bar{U}}_\infty= D\bar Z_\infty \bar{\bar{U}}_\infty$, for some positive scalar $D$, or equivalently that ${\bar{U}}_\infty= D\bar Z_\infty {\bar{U}}_\infty$. This forms the main part of the proof.
Using the assumptions of the theorem, we have that $\U{t}$ and $Z_{(t)}$ converge in direction; we introduce the following notation to conveniently represent these quantities.
\begin{compactenum}
\item Since $\frac{\U{t}}{\|\U{t}\|_F}\to\bar U_\infty$, we define $g(t)$ and $\rho_{(t)}$ satisfying the following,
\begin{equation}
\U{t}=\bar U_\infty g(t)+\rho_{(t)}
\;\text{ s.t., }\; g(t):=\norm{\U{t}}_F\to\infty\text{ and }\frac{\rho_{(t)}}{g(t)}\to 0.
\label{eq:u}
\end{equation}
\item For exponential loss, $\c{L}(\WW{t})\to0$ implies $Z_{(t)}=-\nabla\c{L}(\WW{t})\to 0$.
Thus, using the previously introduced notation $\bar Z_\infty=\lim\limits_{t\to\infty}\frac{Z_{(t)}}{\norm{Z_{(t)}}_\text{op}}$, we define ${p(t)}$ and $\zeta_{(t)}$ as follows
\begin{equation}
Z_{(t)}=-\nabla\c{L}(\WW{t})=\bar Z_\infty p(t)+\zeta_{(t)} \;\text{ s.t., }\; p(t):=\norm{Z_{(t)}}_\text{op}\to 0 \text{ and }\frac{\zeta_{(t)}}{p(t)}\to0 .
\label{eq:g}
\end{equation}
\end{compactenum}
To show stationarity, we need to show that $\bar U_\infty=D\bar Z_\infty \bar U_\infty$, which requires that the columns of $\bar U_\infty$ be spanned by a subset of eigenvectors of $\bar Z_\infty$ that correspond to the same eigenvalue.
Let $\Delta\U{t}=\U{t+1}-\U{t}$. Substituting the expressions for $\U{t}$ and $Z_{(t)}$ from \eqref{eq:u} and \eqref{eq:g}, respectively, into the update $\Delta\U{t}$ from eq.~\eqref{eq:mf-updateU}, we have
\begin{flalign}
\nonumber \Delta \U{t}&=\eta_t Z_{(t)}\U{t}=\eta_t p(t) g(t)\left[\bar Z_\infty \bar U_\infty+ \bar Z_\infty\frac{\rho_{(t)}}{g(t)}+\frac{\zeta_{(t)}}{p(t)}\bar U_\infty\right]&\\
&\overset{(a)}=\eta_t p(t) g(t)[\bar Z_\infty \bar U_\infty+\delta_{(t)}]&
\label{eq:last}
\end{flalign}
where in $(a)$ we collect all the diminishing terms into $\delta_{(t)}=\bar Z_\infty\frac{\rho_{(t)}}{g(t)}+\frac{\zeta_{(t)}}{p(t)}\bar U_\infty \to 0$, since from eqs. \eqref{eq:u}--\eqref{eq:g} we have $\frac{\rho_{(t)}}{g(t)},\frac{\zeta_{(t)}}{p(t)}\to0$, and $\bar Z_\infty$ and $\bar{U}_\infty$ are finite quantities independent of $t$.
Summing over $t$, we have that
\begin{flalign}
\U{t}-\U{0}&=\bar Z_\infty \bar U_\infty\sum_{u<t}\eta_u p(u) g(u)+\sum_{u<t}\delta_{(u)}\eta_u p(u) g(u)
\label{eq:last1}
\end{flalign}
\begin{claim}$\norm{\bar Z_\infty \bar U_\infty}>0$ and $\sum_{u<t}\eta_u p(u) g(u)\to\infty$.
\end{claim}
\begin{proof}
First, recall that for the limit direction $\bar{W}_\infty=\bar U_\infty \bar U_\infty^\top$, $\min_ny_n\innerprod{\bar{W}_\infty}{X_n}=\gamma>0$ and $\bar Z_\infty=\sum_{n\in S}\alpha_n y_n X_n$ for $\alpha_n\ge0$.
Thus, $\innerprod{\bar Z_\infty}{\bar W_\infty}=\innerprod{\bar Z_\infty \bar U_\infty}{\bar U_\infty}=\sum_{n\in S}\alpha_n \gamma >0$ for $\bar U_\infty\neq 0$, and hence $\norm{\bar Z_\infty \bar U_\infty}>0$.
Secondly, since $\delta_{(t)}\to0$ in eq. \eqref{eq:last1}, $\exists t_0$ such that $\forall t>t_0$, $\norm{\delta_{(t)}}\le 1$, and since all the incremental updates of gradient descent are finite, we have that $\sup_{t}\norm{\delta_{(t)}}<\infty$. Additionally, since $p(t)=\norm{\Z{t}}_\text{op}$ and $g(t)=\norm{\U{t}}_F$ are positive, we have that $b_t=\sum_{u<t}\eta_u p(u) g(u)$ is monotonically increasing; thus, if $\limsup_{t\to\infty} b_t=\infty$, then $\lim_{t\to\infty} b_t=\infty$.
To the contrary, if $\limsup_{t\to\infty} b_t=C<\infty$, then from eq. \eqref{eq:last1} we have $\norm{\U{t}}\le\norm{\U{0}}+\norm{\bar Z_\infty\bar U_\infty}C +\left(\sup_{t}\norm{\delta_{(t)}}\right) C<\infty$, which contradicts $\norm{\U{t}}\to\infty$.
\end{proof}
From the above claim, we have that the sequence $b_t=\sum_{u<t}\eta_u p(u) g(u)$ is monotonically increasing and divergent. Thus, for $a_t=\sum_{u<t}\delta_{(u)}\eta_u p(u) g(u)$, using the Stolz-Cesaro theorem~(Theorem~\ref{thm:stolzcesaro}), we have that
\begin{flalign}
\nonumber\lim_{t\to\infty}\frac{a_t}{b_t}=\lim_{t\to\infty}\frac{\sum_{u<t}\delta_{(u)}\eta_u p(u) g(u)}{\sum_{u<t}\eta_u p(u) g(u)}=\lim_{t\to\infty}\frac{a_{t+1}-a_{t}}{b_{t+1}-b_{t}}=\lim_{t\to\infty}\delta_{(t)}=0.\\
\implies \text{there exists }\tilde{\delta}_{(t)}\to0\;\text{ such that }\sum_{u<t}\delta_{(u)}\eta_u p(u) g(u)=\tilde{\delta}_{(t)}\sum_{u<t}\eta_u p(u) g(u).
\label{eq:last2}
\end{flalign}
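The Stolz-Cesaro step just applied can be sanity-checked numerically. A minimal sketch with assumed toy sequences: increments $\xi(u)=\sqrt u$ and $h(u)=u$, so the increment ratio $\xi(u)/h(u)=1/\sqrt u\to 0$ while $b_t=\sum_{u<t}h(u)$ is strictly increasing and divergent, and indeed $a_t/b_t\to 0$.

```python
# Stolz-Cesaro sanity check: if b_t is strictly increasing and divergent and
# the increment ratio (a_{t+1}-a_t)/(b_{t+1}-b_t) -> 0, then a_t/b_t -> 0.
a = b = 0.0
ratios = []
for u in range(1, 200001):
    a += u ** 0.5    # increment xi(u) = sqrt(u)
    b += float(u)    # increment h(u) = u, so xi(u)/h(u) = 1/sqrt(u) -> 0
    if u % 50000 == 0:
        ratios.append(a / b)
# the recorded ratios decrease toward 0, roughly like (4/3)/sqrt(t)
```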
Substituting eq. \eqref{eq:last2} in eq. \eqref{eq:last1}, we have
\begin{flalign}
\U{t}&\overset{(a)}=\left[\bar Z_\infty \bar U_\infty+{\delta}'_{(t)}\right]\left[\sum_{u<t}\eta_u p(u) g(u)\right]\\
\implies \frac{\U{t}}{\norm{\U{t}}}&=\frac{\bar Z_\infty \bar U_\infty+{\delta}'_{(t)}}{\norm{\bar Z_\infty \bar U_\infty+\delta'_{(t)}}_F}\overset{(b)}\to \frac{\bar Z_\infty \bar U_\infty}{\norm{\bar Z_\infty \bar U_\infty}}\\
\implies \bar U_\infty&=\lim_{t\to\infty}\frac{\U{t}}{\norm{\U{t}}}=\frac{1}{\norm{\bar Z_\infty \bar U_\infty}}\bar Z_\infty \bar U_\infty,
\end{flalign}
where in $(a)$ we absorbed all the diminishing terms into $\delta'_{(t)}=\tilde{\delta}_{(t)}+\U{0}/\sum_{u<t}\eta_up(u)g(u)\to0$, and $(b)$ follows since $\bar Z_\infty \bar U_\infty\neq 0$ and hence dominates $\delta'_{(t)}$.
We have thus shown that $\bar U_\infty=D\bar Z_\infty \bar U_\infty$ for $D=\frac{1}{\norm{\bar Z_\infty \bar U_\infty}}$ which completes the proof of the theorem.
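The fixed-point relation $\bar U_\infty = D\,\bar Z_\infty\bar U_\infty$ can be visualized with a deliberately simplified sketch in which $Z_{(t)}$ is frozen at a fixed symmetric matrix (an assumption made purely for illustration; in the proof $Z_{(t)}$ varies with $t$). The update $\U{t+1}=\U{t}+\eta Z\U{t}$ then behaves like power iteration: the direction of the iterate converges to the top eigenvector of $Z$, which satisfies exactly the stationarity relation with $D=1/\norm{Z\bar U_\infty}$.

```python
# Illustration only: freeze Z at a fixed symmetric matrix (in the proof,
# Z_(t) = -grad L(W_(t)) varies with t).  The update u <- u + eta * Z u
# aligns u with the top eigenvector of Z, a fixed point of u -> Zu/||Zu||.
Z = [[2.0, 0.0], [0.0, 1.0]]
u, eta = [1.0, 1.0], 0.1
for _ in range(200):
    Zu = [Z[0][0] * u[0] + Z[0][1] * u[1], Z[1][0] * u[0] + Z[1][1] * u[1]]
    u = [u[0] + eta * Zu[0], u[1] + eta * Zu[1]]

n = (u[0] ** 2 + u[1] ** 2) ** 0.5
d = (u[0] / n, u[1] / n)                      # limit direction of u
Zd = (Z[0][0] * d[0] + Z[0][1] * d[1], Z[1][0] * d[0] + Z[1][1] * d[1])
nz = (Zd[0] ** 2 + Zd[1] ** 2) ** 0.5
fixed = (Zd[0] / nz, Zd[1] / nz)              # Z d / ||Z d||
```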
\end{proof} | {
"timestamp": "2018-10-23T02:20:38",
"yymm": "1802",
"arxiv_id": "1802.08246",
"language": "en",
"url": "https://arxiv.org/abs/1802.08246",
"abstract": "We study the implicit bias of generic optimization methods, such as mirror descent, natural gradient descent, and steepest descent with respect to different potentials and norms, when optimizing underdetermined linear regression or separable linear classification problems. We explore the question of whether the specific global minimum (among the many possible global minima) reached by an algorithm can be characterized in terms of the potential or norm of the optimization geometry, and independently of hyperparameter choices such as step-size and momentum.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG)",
"title": "Characterizing Implicit Bias in Terms of Optimization Geometry"
} |
https://arxiv.org/abs/0811.1701 | Sufficient enlargements of minimal volume for finite dimensional normed linear spaces | Let $B_Y$ denote the unit ball of a normed linear space $Y$. A symmetric, bounded, closed, convex set $A$ in a finite dimensional normed linear space $X$ is called a {\it sufficient enlargement} for $X$ if, for an arbitrary isometric embedding of $X$ into a Banach space $Y$, there exists a linear projection $P:Y\to X$ such that $P(B_Y)\subset A$. The main results of the paper: {\bf (1)} Each minimal-volume sufficient enlargement is linearly equivalent to a zonotope spanned by multiples of columns of a totally unimodular matrix. {\bf (2)} If a finite dimensional normed linear space has a minimal-volume sufficient enlargement which is not a parallelepiped, then it contains a two-dimensional subspace whose unit ball is linearly equivalent to a regular hexagon. | \section{Introduction}
This paper is devoted to a generalization of the main results of
\cite{Ost04}, where similar results were proved in the dimension
two. We refer to \cite{Ost04,Ost07+} for more background and
motivation.
\subsection{Notation and definitions}
All linear spaces considered in this paper will be over
the reals.
By a {\it space} we mean a normed linear space,
unless it is explicitly mentioned otherwise.
We denote by $B_X$ ($S_X$)
the unit ball (sphere) of a space $X$. We say that
subsets $A$ and $B$ of finite dimensional linear spaces $X$ and $Y$,
respectively, are {\it linearly equivalent} if there exists a
linear isomorphism $T$ between the subspace spanned by $A$ in
$X$ and the subspace spanned by $B$ in $Y$ such that $T(A)=B$.
By a {\it symmetric} set $K$ in a linear space we mean a
set such that $x\in K$ implies $-x\in K$.
\medskip
Our terminology and notation of Banach space theory follows
\cite{JL01}. By $B_p^n$, $1\le p\le\infty$, $n\in \mathbb{N}$ we
denote the closed unit ball of $\ell_p^n$. Our terminology and
notation of convex geometry follows \cite{Sch93}.
\medskip
We use the term {\it ball}~ for a
symmetric, bounded, closed, convex set with interior points
in a finite dimensional linear space.
\begin{definition} {\rm\cite{extracta} A ball $A$ in a finite dimensional normed
space $X$ is called a {\it sufficient enlargement} (SE) for $X$
(or of $B_X$) if, for an arbitrary isometric embedding of $X$ into
a Banach space $Y$, there exists a projection $P:Y\to X$ such that
$P(B_Y)\subset A$. A sufficient enlargement $A$ for $X$ is called
a {\it minimal-volume sufficient enlargement} (MVSE) if ${\rm vol}\hskip0.02cm
A\le{\rm vol}\hskip0.02cm D$ for each SE $D$ for $X$.}
\end{definition}
It can be proved, using a standard compactness
argument and Lemma \ref{L:H} below,
that minimal-volume sufficient enlargements exist for every
finite dimensional space.
\medskip
Recall that a real matrix $A$ with entries $-1$, $0$, and $1$ is
called {\it totally unimodular} if all minors (that is,
determinants of square submatrices) of $A$ are equal to $-1, 0$,
or $1$. See \cite{padberg} and \cite[Chapters~19--21]{Sch86} for a
survey of results on totally unimodular matrices and their
applications.
\medskip
A Minkowski sum of finitely many line segments in a linear space
is called a {\it zonotope} (see
\cite{bolker,martini,McM71,Sch93,schneiderweil} for basic facts on
zonotopes). We consider zonotopes that are sums of line segments
of the form $I(x)=\{\lambda x:~ -1\le\lambda\le 1\}$. For a
$d\times m$ totally unimodular matrix with columns $\tau_i$
$(i=1,\dots,m)$ and real numbers $a_i$ we consider the zonotope
$Z$ in $\mathbb{R}^d$ given by
$$Z=\sum_{i=1}^mI(a_i\tau_i).$$
The set of all zonotopes that are linearly equivalent to zonotopes
obtained in this way over all possible choices of $m$, of a rank
$d$ totally unimodular $d\times m$ matrix, and of positive numbers
$a_i~ (i=1,\dots,m)$ will be denoted by ${\cal T}_d$. Observe that
each element of ${\cal T}_d$ is $d$-dimensional in the sense that
it spans a $d$-dimensional subspace. It is easy to describe all
$2\times m$ totally unimodular matrices and to show that ${\cal
T}_2$ is the union of the set of all symmetric hexagons and the
set of all symmetric parallelograms.
\medskip
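Total unimodularity can be verified directly from the definition by enumerating all square minors. A small sketch (with an assumed $2\times 3$ example whose columns generate a hexagon, as in the description of ${\cal T}_2$ above):

```python
from itertools import combinations

def det(M):
    # integer determinant via cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_totally_unimodular(A):
    # every k x k minor must equal -1, 0, or 1
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if det([[A[i][j] for j in cols] for i in rows]) not in (-1, 0, 1):
                    return False
    return True

A = [[1, 0, 1], [0, 1, 1]]   # totally unimodular; its zonotope is a hexagon
B = [[1, 1], [-1, 1]]        # has a 2x2 minor equal to 2, so not totally unimodular
```

The brute-force enumeration is exponential in $\min(m,n)$ and is meant only as a check on small examples.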
The class ${\cal T}_d$ of zonotopes has been characterized in several
different ways, see \cite{Cox,Erd99,J,McM75,laa,She}. We shall use
a characterization of ${\cal T}_d$ in terms of lattice tiles. Recall
that a compact set $K\subset \mathbb{R}^d$ is called a {\it
lattice tile} if there exists a basis $\{x_i\}_{i=1}^d$ in
$\mathbb{R}^d$ such that
$$\mathbb{R}^d=\bigcup_{m_1,\dots,m_d\in\mathbb{Z}}\left(\left(
\sum_{i=1}^dm_ix_i\right)+K\right),$$ and the interiors of the
sets $(\sum_{i=1}^dm_ix_i)+K$ are disjoint. The set
$$\Lambda=\left\{\sum_{i=1}^dm_ix_i:~m_1,\dots,m_d\in\mathbb{Z}\right\}$$ is
called a {\it lattice}. The absolute value of the determinant of
the matrix whose columns are the coordinates of $\{x_i\}_{i=1}^d$
is called the {\it determinant} of $\Lambda$ and is denoted
$d(\Lambda)$, see \cite[\S~3]{GL87}.
\medskip
\begin{theorem}\label{T:tiles} {\rm\cite{McM75}, \cite{Erd99}} A $d$-dimensional zonotope is a
lattice tile if and only if it is in ${\cal T}_d$.
\end{theorem}
It is worth mentioning that lattice tiles in $\mathbb{R}^d$ do not
have to be zonotopes, see \cite{V,McM80,McM81}, and \cite[Chapter
3]{Z}.
\subsection{Statements of the main results}
The main result of \cite{laa} can be restated in the following
way. (A finite dimensional normed space is called {\it polyhedral}
if its unit ball is a polytope.)
\begin{theorem}\label{T:laa+} A ball $Z$ is linearly equivalent to
an MVSE for some $d$-dimensional polyhedral space
$X$ if and only if $Z\in{\cal T}_d$.
\end{theorem}
In \cite{Ost04} it was shown that for $d=2$ the statement of
Theorem \ref{T:laa+} is valid without the restriction of
polyhedrality of $X$. The main purpose of the present paper is to
prove the same for each $d\in\mathbb{N}$. It is clear that it is
enough to prove
\begin{theorem}\label{T:MVSE}
Each MVSE for a $d$-dimensional space is in ${\cal T}_d$.
\end{theorem}
Using Theorem \ref{T:MVSE} we show that spaces having
non-parallelepipedal MVSE cannot be strictly convex or smooth.
More precisely, we prove
\begin{theorem}\label{T:NP} Let $X$ be a finite dimensional
normed linear space having an MVSE that is not a parallelepiped.
Then $X$ contains a two-dimensional subspace whose unit ball is
linearly equivalent to the regular hexagon.
\end{theorem}
\begin{remark}{Remarks. 1} Theorem \ref{T:NP} is a simultaneous
generalization of \cite[Theorem 4]{Ost04} (which is a special case
of Theorem \ref{T:NP} corresponding to the case $\dim X=2$) and of
\cite[Theorem 7]{archiv} (which states that each MVSE for
$\ell_2^n$ is a cube circumscribed about $B_2^n$).
\smallskip
\noindent{\bf 2.} The fact that $X$ contains a two-dimensional
subspace whose unit ball is linearly equivalent to a regular
hexagon does not imply that $X$ has an MVSE that is not a
parallelepiped. The simplest example supporting this statement is
$\ell_\infty^3$.
\end{remark}
\section{Proof of Theorem \ref{T:MVSE}}
First we show that it is enough to prove the following lemmas. It
is worth mentioning that our proof of Theorem \ref{T:MVSE} goes
along the same lines as the proof of its two-dimensional version
in \cite{Ost04}. The most difficult part of the proof is a
$d$-dimensional version of the approximation lemma (\cite[Lemma 2,
p.~380]{Ost04}); it is the content of Lemma \ref{L:APPROX} of the
present paper. Also, a two-dimensional analogue of Lemma
\ref{L:closed} is completely trivial.
\begin{lemma}\label{L:closed}
Let $T_n\subset \mathbb{R}^d,~ n\in\mathbb{N}$ be such that
$T_n\in {\cal T}_d$, and $\{T_n\}_{n=1}^\infty$ converges with
respect to the Hausdorff metric to a $d$-dimensional set $T$. Then
$T\in {\cal T}_d$.
\end{lemma}
\begin{remark}{Remark} If a sequence $\{T_n\}_{n=1}^\infty\subset{\cal T}_d$ converges to a lower-dimensional set
$T$, the set $T$ does not have to be in ${\cal T}_{\dim T}$. In fact,
as was already mentioned, ${\cal T}_2$ is the set of all
symmetric hexagons and parallelograms. On the other hand, it is
easy to find a Hausdorff convergent sequence of elements of ${\cal
T}_3$ whose limit is an octagon.
\end{remark}
\begin{lemma}[Main lemma]\label{L:APPROX} For each $d\in\mathbb{N}$ there exist
$\psi_d>0$ and a function $\mathfrak{t}_d:(0,\psi_d)\to(1,\infty)$
satisfying the conditions:
\medskip
\noindent{\rm (1)} $\lim_{\varepsilon\downarrow
0}\mathfrak{t}_d(\varepsilon)=1$;
\medskip
\noindent{\rm (2)} If $Y$ is a $d$-dimensional polyhedral space,
$B$ is an MVSE for $Y$, and $A$ is an SE for $Y$ satisfying
\begin{equation}\label{E:lemma1}
{\rm vol}\hskip0.02cm A\le (1+\varepsilon)^d{\rm vol}\hskip0.02cm B
\end{equation}
for some $0<\varepsilon<\psi_d$, then $A$ contains a ball $\tilde A$
satisfying the conditions:
\medskip
\noindent{\rm (a)} $d(\tilde A,T)\le \mathfrak{t}_d(\varepsilon)$
for some $T\in{\cal T}_d$, where by $d(\tilde A,T)$ we denote the
Banach--Mazur distance;
\medskip
\noindent{\rm (b)} $\tilde A$ is an SE for $Y$.
\end{lemma}
\begin{lemma}\label{L:H} {\rm\cite[Lemma 3]{Ost04}}
The set of all sufficient enlargements for a finite dimensional
normed space $X$ is closed with respect to the Hausdorff metric.
\end{lemma}
\begin{proof}{Proof of Theorem \ref{T:MVSE}} (We assume that
Lemmas \ref{L:closed} and \ref{L:APPROX} have been proved.) Let
$X$ be a $d$-dimensional space and let $A$ be an MVSE for $X$. Let
$\{\varepsilon_n\}_{n=1}^\infty$ be a sequence satisfying
$\psi_d>\varepsilon_n>0$ and $\varepsilon_n\downarrow 0$. Let
$\{Y_n\}_{n=1}^\infty$ be a sequence of polyhedral spaces
satisfying
\begin{equation}\label{E:Y_n}
\frac1{1+\varepsilon_n}B_X\subset
B_{Y_n}\subset B_X.
\end{equation}
Then $A$ is an SE for $Y_n$. Let $B_n$ be an MVSE for $Y_n$. Then
$(1+\varepsilon_n)B_n$ is an SE for $X$. Since $A$ is a
minimal-volume SE for $X$, we have
$${\rm vol}\hskip0.02cm A\le {\rm vol}\hskip0.02cm\left((1+\varepsilon_n)B_n\right)=
(1+\varepsilon_n)^d{\rm vol}\hskip0.02cm B_n.$$
\smallskip
By Lemma \ref{L:APPROX} for every $n\in\mathbb{N}$ there exists an
SE $\tilde A_n$ for $Y_n$ satisfying
$$\tilde A_n\subset A$$
and
\begin{equation}\label{E:n}
d(\tilde A_n, T_n)\le \mathfrak{t}_d(\varepsilon_n)
\end{equation}
for some $T_n\in{\cal T}_d$.
\medskip
The condition (\ref{E:Y_n}) implies that
$(1+\varepsilon_n)\tilde A_n$ is an SE for $X$.
\medskip
The sequence $\{(1+\varepsilon_n)\tilde A_n\}_{n=1}^\infty$ is
bounded (all of its terms are contained in $(1+\varepsilon_1)A$). By the
Blaschke selection theorem \cite[p.~50]{Sch93} the sequence
$\{(1+\varepsilon_n)\tilde A_n\}_{n=1}^\infty$ contains a
subsequence convergent with respect to the Hausdorff metric. We
denote its limit by $D$, and assume that the sequence
$\{(1+\varepsilon_n)\tilde A_n\}_{n=1}^\infty$ itself converges to $D$.
\medskip
Observe that each $\tilde A_n$ contains $(1/(1+\varepsilon_1))B_X$
and is contained in $A$. By (\ref{E:n}) we may assume without loss
of generality that $T_n$ are balls in $X$ satisfying
\begin{equation}\label{E:inclusions}
\frac1{1+\varepsilon_1}B_X\subset \tilde A_n\subset T_n\subset
\mathfrak{t}_d(\varepsilon_n)\tilde A_n\subset
\mathfrak{t}_d(\varepsilon_n) A.
\end{equation}
It is clear that $D$ is the Hausdorff limit of $\{\tilde
A_n\}_{n=1}^\infty$. From (\ref{E:inclusions}) we get that $D$ is
the Hausdorff limit of $\{T_n\}_{n=1}^\infty$. By Lemma
\ref{L:closed} we get $D\in{\cal T}_d$.
\medskip
By Lemma \ref{L:H} the set $D$ is an SE for $X$. Since
$(1+\varepsilon_n)\tilde A_n\subset (1+\varepsilon_n)A$, and
$(1+\varepsilon_n)A$ is Hausdorff convergent to $A$, we have
$D\subset A$. On the other hand, $A$ is an MVSE for $X$, hence
$D=A$ and $A\in{\cal T}_d$.
\end{proof}
\begin{proof}{Proof of Lemma \ref{L:closed}} By
Theorem \ref{T:tiles} the sets $T_n$ are lattice tiles. Let
$\{\Lambda_n\}_{n=1}^\infty$ be lattices corresponding to these
lattice tiles. Since volume is continuous with respect to the
Hausdorff metric (see \cite[p.~55]{Sch93}), the supremum
$\sup_n{\rm vol}\hskip0.02cm (T_n)$ is finite. Since $T_n$ is a lattice tile with
respect to $\Lambda_n$, the determinant of $\Lambda_n$ satisfies
$d(\Lambda_n)={\rm vol}\hskip0.02cm (T_n)$. (Although I have not found this result
in the stated form, it is well known. It can be proved, for
example, using the argument from \cite[pp.~42--43, Proof of
Theorem 2]{GL87}.) Hence $\displaystyle{\sup_n
d(\Lambda_n)<\infty}$. Since $T$ is $d$-dimensional, there exists
$r>0$ such that $rB_2^d\subset T$. Choosing a smaller $r>0$, if
necessary, we may assume that $rB_2^d\subset T_n$ for each $n$.
Therefore the lattices $\{\Lambda_n\}_{n=1}^\infty$ satisfy the
conditions of the selection theorem of Mahler (see, for example,
\cite[\S 17]{GL87}, where the reader can also find the standard
definition of convergence for lattices). Hence the sequence
$\{\Lambda_n\}_{n=1}^\infty$ contains a subsequence which
converges to some lattice $\Lambda$. It is easy to verify that $T$
tiles $\mathbb{R}^d$ with respect to $\Lambda$.
On the other hand, the number of possible distinct columns of a
totally unimodular matrix with columns from $\mathbb{R}^d$ is
bounded from above by $3^d$, because each entry is $0$, $1$, or
$-1$. (Actually a much better exact bound is known, see
\cite[p.~299]{Sch86}.) Using this we can show that $T$ is a
zonotope by a straightforward argument. Alternatively, we can use the
argument from \cite[Theorem 3.5.2]{Sch93} and the observation that
a convergent sequence of measures on the sphere of $\ell_2^d$,
each of which has finite support of cardinality $\le 3^d$,
converges to a measure supported on $\le 3^d$ points.
Thus, $T$ is a zonotope and a lattice tile. Applying Theorem
\ref{T:tiles} again, we get $T\in {\cal T}_d$.
\end{proof}
\section{Proof of the Main Lemma}
\subsection{Coordinatization}
\begin{proof}{Proof of Lemma \ref{L:APPROX}} In our argument the dimension $d$ is
fixed. Many of the parameters considered below depend on $d$,
although we do not reflect this dependence in our notation.
\medskip
Since $Y$ is polyhedral, we can consider $Y$ as a subspace of
$\ell_\infty^m$. Let $P:\ell_\infty^m\to Y$ be a linear projection
satisfying $P(B_\infty^m)\subset A$ (such a projection exists
because $A$ is an SE). Let $\tilde A=P(B_\infty^m)$. It is easy to
see that $\tilde A$ is an SE for $Y$. It remains to show that
$\tilde A$ is close to some $T\in {\cal T}_d$ with respect to the
Banach--Mazur distance.
\medskip
We consider the standard inner product on $\ell_\infty^m$. (The
unit vector basis is an orthonormal basis with respect to this
inner product.)
\medskip
Let $\{q_1,\dots,q_{m-d}\}$ be an orthonormal basis in $\ker P$.
Let $\{y_1,\dots, y_d\}$ be an orthonormal basis in $Y$. Let
$\tilde q_1,\dots,\tilde q_d$ be such that $\{\tilde
q_1,\dots,\tilde q_d,q_1,\dots,q_{m-d}\}$ is an orthonormal basis
in $\ell_\infty^m$.
\begin{lemma}\label{L:shape} {\bf (Image Shape Lemma)} Let
$P$ and $\tilde q_1,\dots,\tilde q_d$ be as above. Denote by
$\tilde Q=[\tilde q_1,\dots,\tilde q_d]$ the matrix whose columns
are $\tilde q_1, \dots,\tilde q_d$. Let $z_1,\dots,z_m$ be the
columns of the transpose matrix $\tilde Q^T$. Then $P(B_\infty^m)$
is linearly equivalent to the zonotope
$\sum_{i=1}^mI(z_i)\subset\mathbb{R}^d$.
\end{lemma}
\begin{proof}{Proof} It is enough to observe that:
\medskip
\noindent(i) Images of $B_\infty^m$ under
two linear projections with the
same kernel are linearly equivalent. Hence, $P(B_\infty^m)$ is
linearly equivalent to the image of the orthogonal projection
with the kernel $\ker P$.
\medskip
\noindent(ii) The matrix $\tilde Q\tilde Q^T$
is the matrix of the orthogonal projection with the kernel
$\ker P$.
\end{proof}
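The identity behind the lemma, $P(B_\infty^m)=\sum_{i=1}^m I(Pe_i)$, can be checked by comparing support functions: $\sup_{v\in B_\infty^m}\langle x,Pv\rangle=\sum_i|\langle x,Pe_i\rangle|$. A minimal numerical sketch, with an assumed $2\times 3$ matrix standing in for $P$ written in coordinates:

```python
from itertools import product

P = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]   # assumed 2x3 stand-in for P in coordinates
cols = list(zip(*P))                      # the images P e_i of the cube's generators

def support_image(x):
    # h_{P(B_inf^3)}(x) = max over the 8 cube vertices v of <x, P v>
    best = float("-inf")
    for v in product((-1.0, 1.0), repeat=3):
        Pv = [sum(P[r][i] * v[i] for i in range(3)) for r in range(2)]
        best = max(best, x[0] * Pv[0] + x[1] * Pv[1])
    return best

def support_zonotope(x):
    # h_{sum_i I(P e_i)}(x) = sum_i |<x, P e_i>|
    return sum(abs(x[0] * c[0] + x[1] * c[1]) for c in cols)

directions = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, -3.0), (-1.0, 0.5)]
gaps = [abs(support_image(x) - support_zonotope(x)) for x in directions]
```

Since two compact convex sets with equal support functions coincide, agreement over all directions certifies the identity; the sketch samples a few directions only.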
By Lemma \ref{L:shape} we may replace $\tilde A$ by
\begin{equation}\label{E:Z}
Z=\sum_{i=1}^mI(z_i)
\end{equation}
in the estimate (a) of Lemma \ref{L:APPROX}.
\medskip
Let $M=\binom md$. We denote by $u_i$ $(i=1,\dots,M)$ the $d\times d$
minors of $[y_1,\dots,y_d]$ (ordered in some way). We denote by
$w_i$ $(i=1,\dots,M)$ the $d\times d$
minors of $[\tilde q_1,\dots,\tilde q_d]$ ordered in the same
way as the $u_i$. We denote by
$v_i$ $(i=1,\dots,{\binom m{m-d}}=M)$ their complementary
$(m-d)\times(m-d)$
minors of $[q_1,\dots,q_{m-d}]$. Using the word {\it complementary}
we mean that all minors are considered as minors of the matrix
$[\tilde q_1,\dots,\tilde q_d,q_1,\dots,q_{m-d}]$,
see \cite[p.~76]{aitken}.
\medskip
By the Laplacian expansion (see
\cite[p.~78]{aitken})
$$
\det[y_1,\dots,y_d,q_1,\dots,q_{m-d}]=
\sum_{i=1}^M\theta_iu_iv_i
$$
and
\begin{equation}\label{E:laplace}
\det[\tilde q_1,\dots,\tilde q_d,q_1,\dots,q_{m-d}]=
\sum_{i=1}^M\theta_iw_iv_i
\end{equation}
for proper signs $\theta_i$.
\medskip
Since the matrix $[\tilde q_1,\dots,\tilde q_d,q_1,\dots,q_{m-d}]$
is orthogonal, we have
\begin{equation}\label{E:pm1}
\det[\tilde q_1,\dots,\tilde q_d,q_1,\dots,q_{m-d}]=\pm1.
\end{equation}
We need the following result on compound matrices.
(We refer to \cite[Chapter V]{aitken} for necessary definitions and
background.)
\medskip
{\it A compound matrix of an orthogonal
matrix is orthogonal} (see \cite[Example 4 on p.~94]{aitken}).
\medskip
This result implies, in particular, that the Euclidean norms of
the vectors $\{w_i\}_{i=1}^M$ and $\{v_i\}_{i=1}^M$ in
$\mathbb{R}^M$ are equal to $1$.
\medskip
From (\ref{E:laplace}) and (\ref{E:pm1}) we get that
either
\smallskip
(a) $w_i=\theta_iv_i$ for every $i$
\smallskip
\noindent or
\smallskip
(b) $w_i=-\theta_iv_i$ for every $i$.
\medskip
Without loss of generality, we assume that $w_i=\theta_iv_i$
for all $i$ (we replace $q_1$ by $-q_1$ if it
is not the case).
\medskip
We compute the volume of $\tilde A$ and $B$
with the normalization that
comes from the Euclidean structure introduced above. It is
well known (see \cite[p.~318]{jfa}) and is easy to verify that
with this normalization
$${\rm vol}\hskip0.02cm\tilde A=\frac{2^d}{\left|\sum_{i=1}^M\theta_iu_iv_i\right|}
\sum_{i=1}^M|v_i|$$
and
$${\rm vol}\hskip0.02cm B=\frac{2^d}{\max_i|u_i|}$$
for each MVSE $B$ for $Y$.
\begin{remark}{Remark} After the publication of \cite{jfa}
I learned that the formula for the volume of a zonotope used in
\cite{jfa} can be found in \cite[Appendix, Section VI]{blaschke}.
\end{remark}
Since ${\rm vol}\hskip0.02cm\tilde A\le{\rm vol}\hskip0.02cm A$, the inequality (\ref{E:lemma1})
implies that
\begin{equation}\label{E:min2}
\max_i|u_i|\sum_{i=1}^M|v_i|\le
(1+\varepsilon)^d
\left|\sum_{i=1}^M\theta_iu_iv_i\right|.
\end{equation}
By (a) the inequality (\ref{E:min2}) can be rewritten as
\begin{equation}\label{E:min3}
\max_i|u_i|\sum_{i=1}^M|w_i|\le
(1+\varepsilon)^d
\left|\sum_{i=1}^Mu_iw_i\right|.
\end{equation}
We need the following two observations:
\medskip
\noindent(i)~ $2^d\sum_{i=1}^M|w_i|$ is the volume of $Z$ in
$\mathbb{R}^d$.
\medskip
\noindent(ii) The vector $\{u_i\}_{i=1}^M$ is what is called the
{\it Grassmann coordinates}, or the {\it Pl\"ucker coordinates} of
the subspace $Y\subset\mathbb{R}^m$, see \cite[Chapter VII]{HP} and
\cite[p.~42]{sh}. Recall that $Y$ is spanned by the columns of the
matrix $[y_1,\dots,y_d]$. It is easy to see that if we choose
another basis in $Y$, the Grassmann (Pl\"ucker) coordinates will be
multiplied by a constant.
\medskip
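Observation (i) can be checked on a concrete example. For the hexagon generated by the assumed segments $z_1=(1,0)$, $z_2=(0,1)$, $z_3=(1,1)$, the formula gives ${\rm vol}\hskip0.02cm Z=2^2(|w_1|+|w_2|+|w_3|)=4\cdot 3=12$, which agrees with a direct shoelace computation of the hexagon's area:

```python
# Volume formula in d = 2: vol(sum_i I(z_i)) = 2^2 * (sum of |2x2 minors|).
z = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
minor_sum = sum(abs(z[i][0] * z[j][1] - z[i][1] * z[j][0])
                for i in range(3) for j in range(i + 1, 3))
vol_formula = 4 * minor_sum

# Shoelace area of the same hexagon; its vertices are the extreme points
# among the sign combinations sum_i eps_i z_i, listed counterclockwise.
verts = [(2, 0), (2, 2), (0, 2), (-2, 0), (-2, -2), (0, -2)]
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1]))) / 2
```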
We denote by $\mathcal{Z}_\varepsilon$ ($\varepsilon>0$) the set of all
$d$-dimensional zonotopes in $\mathbb{R}^d$ satisfying the
condition (\ref{E:min3}) with an equality. More precisely, we
define $\mathcal{Z}_\varepsilon$ as the set of those
$d$-dimensional zonotopes $Z$ in $\mathbb{R}^d$ for which
\begin{itemize}
\item[(1)] There exist $m\in\mathbb{N}$ and a rank $d$ matrix
$\tilde Q$ of size $m\times d$ such that
$Z=\sum_{i=1}^mI(z_i)$, where $z_i\in\mathbb{R}^d$, $i=1,\dots,
m$, are rows of $\tilde Q$.
\item[(2)] There exists a rank $d$ matrix $Y$ of size $m\times d$
such that, if we denote the $d\times d$ minors of $\tilde Q$ by
$\{w_i\}_{i=1}^M$, where $M=\binom{m}d$, and the $d\times d$
minors of $Y$, ordered in the same way as the $w_i$, by
$\{u_i\}_{i=1}^M$, then
\begin{equation}\label{E:equal}
\max_i|u_i|\sum_{i=1}^M|w_i|=(1+\varepsilon)^d
\left|\sum_{i=1}^Mu_iw_i\right|,
\end{equation}
and there is no $Y$ for which
$$\max_i|u_i|\sum_{i=1}^M|w_i|<(1+\varepsilon)^d
\left|\sum_{i=1}^Mu_iw_i\right|.$$
\end{itemize}
\begin{remark}{Remarks. 1} It is clear that the zonotope property of being in
$\mathcal{Z}_\varepsilon$ is invariant under changes of the system of
coordinates.
\medskip
\noindent{\bf 2.} We do not consider the class $\mathcal{Z}_0$
because, as was shown in \cite{laa}, this class is contained in
$\mathcal{T}_d$.
\end{remark}
Many objects introduced below depend on $Z$ and $\varepsilon$, although
sometimes we do not reflect this dependence in our notation.
Let $Z\in\mathcal{Z}_\varepsilon$. We shall change the system of
coordinates in $\mathbb{R}^d$ twice. First we introduce in
$\mathbb{R}^d$ a new system of coordinates such that the unit
(Euclidean) ball $B_2^d$ of $\mathbb{R}^d$ is the maximal volume
ellipsoid in $Z$. From now on we consider the vectors $z_i$
introduced in Lemma \ref{L:shape} as vectors in $\mathbb{R}^d$ and
not as $d$-tuples of real numbers.
\medskip
It is easy to see that the support function of $Z$ is
given by
$$h_Z(x)=\sum_{i=1}^m|\langle x, z_i\rangle|.$$
It is more convenient for us to write this formula
in a different way. We consider the set
\begin{equation}\label{E:z_i}
\left\{\frac{z_1}{||z_1||},\dots,\frac{z_m}{||z_m||},-\frac{z_1}{||z_1||},
\dots,-\frac{z_m}{||z_m||}\right\}.
\end{equation}
If the vectors in (\ref{E:z_i}) are pairwise distinct, we let
$\mu$ be the atomic measure on the unit (Euclidean) sphere $S$
whose atoms are given by
$\mu(z_i/||z_i||)=\mu(-z_i/||z_i||)=||z_i||/2$. It is easy to see
that
\begin{equation}\label{E:support}
h_Z(x)=\int_S|\langle x,z\rangle|d\mu(z).
\end{equation}
The defining formula for $\mu$ should be adjusted in the natural way
if some of the vectors in (\ref{E:z_i}) are equal.
\medskip
Conversely, if $\mu$ is a nonnegative measure on $S$ supported on
a finite set, then (\ref{E:support}) is a support function of some
zonotope (see \cite[Section 3.5]{Sch93} for more information on
this matter).
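The equality (\ref{E:support}) for an atomic measure is easy to test numerically. The following Python sketch (an illustration only; the generators and the test point are arbitrary) evaluates the support function both as the sum over generators and as the integral over the atoms $\pm z_i/||z_i||$ with weights $||z_i||/2$, and confirms that the two values agree.

```python
import math

def h_generators(x, gens):
    # support function of Z = sum_i [-z_i, z_i]:  h_Z(x) = sum_i |<x, z_i>|
    return sum(abs(sum(xa * za for xa, za in zip(x, z))) for z in gens)

def h_measure(x, gens):
    # the same value written as an integral over the atoms:
    # mu(z_i/||z_i||) = mu(-z_i/||z_i||) = ||z_i|| / 2
    total = 0.0
    for z in gens:
        norm = math.sqrt(sum(c * c for c in z))
        u = tuple(c / norm for c in z)
        dot = sum(xa * ua for xa, ua in zip(x, u))
        total += (norm / 2) * abs(dot) + (norm / 2) * abs(-dot)
    return total

gens = [(1, 0), (0, 1), (3, 4)]
x = (1, 2)
print(h_generators(x, gens))  # 14
print(h_measure(x, gens))     # 14.0
```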
\medskip
When dealing with subsets of $S$ we use the following terminology and
notation. Let $x_0\in S$, $r>0$. The set $\Delta(x_0,r):=\{x\in
S:~||x-x_0||<r\hbox{ or }||x+x_0||<r\}$, where $||\cdot||$ is the
$\ell_2$-norm, is called a {\it cap}. If $0<r<\sqrt{2}$, then
$\Delta(x_0,r)$ consists of two connected components. In such a case
both $x_0$ and $-x_0$ will be considered as {\it centers} of
$\Delta(x_0,r)$.
\medskip
We are going to show that if $\varepsilon>0$ is small, then the inequality
(\ref{E:min3}) implies that all but a very small part of the
measure $\mu$ is supported on a union of small caps centered at a
set of vectors which are multiples of a set of vectors satisfying
the condition: if we write their coordinates with respect to a
suitably chosen basis, we get a totally unimodular matrix. Having
such a set, it is easy to find $T\in{\cal T}_d$ which is close to $Z$
with respect to the Banach--Mazur distance, see Lemma \ref{L:BM}.
\medskip
For any two numbers $\omega,\delta>0$ we introduce the set
$$\Omega(\omega,\delta):=\{x\in S:~ \mu(\Delta(x,\omega))\ge\delta\}$$
(recall that by $S$ we denote the unit sphere of $\ell_2^d$). In
what follows $c_1(d), c_2(d), \dots$, $C_1(d)$, $C_2(d), \dots$
denote quantities depending on the dimension $d$ only. Since $d$
is fixed throughout our argument, we regard them as constants.
\medskip
First we find conditions on $\omega$ and $\delta$ under which the
set $\Omega(\omega,\delta)$ contains a normalized basis
$\{e_i\}_{i=1}^d$ whose distance to an orthonormal basis can be
estimated in terms of $d$ only.
\begin{lemma}\label{L:bases} There exist $0<c_1(d), C_1(d), C_2(d)<\infty$, such that for
$\omega\le\frac1{6d}$ and $\delta\le c_1(d)\omega^{d-1}$ there is
a normalized basis $\{e_i\}_{i=1}^d$ in the space $\mathbb{R}^d$
satisfying the conditions:
\smallskip
\noindent{\bf (a)} $\mu(\Delta(e_i,\omega))\ge\delta$.
\smallskip
\noindent{\bf (b)} If $\{o_i\}_{i=1}^d$ is an orthonormal basis in
$\mathbb{R}^d$, then the operator $N:\mathbb{R}^d\to\mathbb{R}^d$
given by $No_i=e_i$ satisfies $||N||\le C_1(d)$ and $||N^{-1}||\le
C_2(d)$, where the norms are the operator norms of $N, N^{-1}$
considered as operators from $\ell_2^d$ into $\ell_2^d$.
\end{lemma}
\begin{proof}{Proof} We need an estimate for $\mu(S)$. Observe
that if $K_1$ and $K_2$ are two symmetric zonotopes and
$K_1\subset K_2$, then $\mu_1(S)\le\mu_2(S)$ for the corresponding
measures $\mu_1$ and $\mu_2$ (defined as even measures satisfying
(\ref{E:support}) with $Z=K_1$ and $Z=K_2$, respectively). To
prove this statement we integrate the equality (\ref{E:support})
with respect to $x$ over the Haar measure on $S$.
\medskip
Now we use the assumption that $B_2^d$ is the maximal volume
ellipsoid in $Z$. Let $\sum_{i=1}^n\gamma_ix_i\otimes x_i$ be the
F.~John representation of the identity operator corresponding to
$Z$ (see \cite[p.~46]{JL01}). Then
$$Z\subset\left\{x:~ |\langle x, x_i\rangle|\le 1~ \forall
i\in\{1,\dots,n\}\right\}.
$$
Since $\displaystyle{x=\sum_{i=1}^n\langle x, x_i\rangle
\gamma_ix_i}$ for each $x\in\mathbb{R}^d$, we have
$\displaystyle{Z\subset\sum_{i=1}^n[-\gamma_ix_i,\gamma_ix_i]}$.
Since $\displaystyle{\sum_{i=1}^n\gamma_i=d}$, this implies
$\mu(S)\le d$.
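\medskip
For orientation, the bound $\mu(S)\le d$ is sharp: for the cube
$Z=[-1,1]^d$ (for which $B_2^d$ is the maximal volume ellipsoid)
we have $h_Z(x)=\sum_{i=1}^d|x_i|$, so $\mu$ consists of atoms of
weight $\frac12$ at the points $\pm e_1,\dots,\pm e_d$ of an
orthonormal basis, and $\mu(S)=d$.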
\medskip
Using a well-known computation going back to B.~Gr\"unbaum
(\cite[p.~462, (5.2)]{Gru60}; see also \cite[pp.~94--95]{Jam87}),
one can find estimates for $\mu(S)$ from below, which imply
$\mu(S)\ge\sqrt{d}$. For our purposes the trivial estimate
$\mu(S)\ge 1$ is sufficient (this estimate follows immediately
from $Z\supset B_2^d$, because this inclusion implies $h_Z(x)\ge
||x||$).
\medskip
We denote the normalized Haar measure on $S$ by $\eta$. It is well
known that there exists $c_2(d)>0$ such that
\begin{equation}\label{E:c_2}
\eta(\Delta(x,r))\ge c_2(d)r^{d-1}~\forall r\in(0,1)~\forall x\in
S.
\end{equation}
Using a standard averaging argument and $\mu(S)\ge 1$, we get that
there exists $e_1\in S$ such that
$$\mu(\Delta(e_1,\omega))\ge c_2(d)\omega^{d-1}.$$
\medskip
Consider the closed
$\displaystyle{\left(\frac1{3d}+\omega\right)}$-neighborhood (in
the $\ell_2^d$ metric) of the line $L_1$ spanned by $e_1$. Let
$\Delta_1$ be the intersection of this neighborhood with $S$. Our
purpose is to estimate $\mu(S\backslash \Delta_1)$ from below. Let
$x\in S$ be orthogonal to $e_1$. Then
$$1\le h_Z(x)\le 1\cdot\mu(S\backslash \Delta_1)+\left(\frac1{3d}+\omega\right)\cdot d,$$
where the left-hand side inequality follows from the fact that $Z$
contains $B_2^d$. Therefore $\mu(S\backslash \Delta_1)\ge
1-\displaystyle\left(\frac1{3d}+\omega\right)d$.
\medskip
We discard the part of the measure $\mu$ supported on $\Delta_1$,
use the standard averaging argument again, and find a vector $e_2$ such that
$$\mu(\Delta(e_2,\omega)\backslash \Delta_1)\ge
c_2(d)\omega^{d-1}\left(1-\left(\frac1{3d}+\omega\right)d\right).$$
Since $\mu(\Delta(e_2,\omega)\backslash \Delta_1)>0$, the vector
$e_2$ is not in the $\frac1{3d}$-neighborhood of $L_1$.
\medskip
Let $\Delta_2$ be the intersection of $S$ with the closed
$\displaystyle{\left(\frac1{3d}+\omega\right)}$-neighborhood of
$L_2={\rm lin}\hskip0.02cm\{e_1,e_2\}$ (that is, $L_2$ is the linear span of
$\{e_1,e_2\}$). Let $x\in S$ be orthogonal to $L_2$. Then
$$1\le h_Z(x)\le 1\cdot\mu(S\backslash \Delta_2)+\left(\frac1{3d}+\omega\right)\cdot d,$$
where the left-hand side inequality follows from the fact that $Z$
contains $B_2^d$. Therefore $\mu(S\backslash \Delta_2)\ge
1-\displaystyle\left(\frac1{3d}+\omega\right)d$.
\medskip
Using the standard averaging argument in the same way as in the
previous step we find a vector $e_3$ such that
$$\mu(\Delta(e_3,\omega)\backslash \Delta_2)\ge
c_2(d)\omega^{d-1}\left(1-\left(\frac1{3d}+\omega\right)d\right).$$
Since $\mu(\Delta(e_3,\omega)\backslash \Delta_2)>0$, the vector
$e_3$ is not in the $\frac1{3d}$-neighborhood of $L_2$.
\medskip
We continue in the obvious way. As a result we construct a
normalized basis $\{e_1,\dots,e_d\}$ satisfying the conditions
\begin{itemize}
\item[{\bf (i)}] $\displaystyle{\mu(\Delta(e_i,\omega))\ge
c_2(d)\omega^{d-1}\left(1-\left(\frac1{3d}+\omega\right)d\right)}$.
\smallskip
\item[{\bf (ii)}]
$\displaystyle{\hbox{dist}(e_i,\hbox{lin}\{e_j\}_{j=1}^{i-1})\ge
\frac1{3d}}$, $i=2,\dots,d$, where $\hbox{dist}(\cdot,\cdot)$
denotes the distance from a vector to a subspace.
\end{itemize}
If $\omega\le\frac1{6d}$, the inequality {\bf (i)} implies
$$\mu(\Delta(e_i,\omega))\ge\frac12
c_2(d)\omega^{d-1},$$ and we get the estimate {\bf (a)} of Lemma
\ref{L:bases} with $c_1(d)=c_2(d)/2$.
To estimate $||N||$ and $||N^{-1}||$, we let $\{o_i\}_{i=1}^d$ be
the basis obtained from $\{e_i\}$ using the Gram--Schmidt
orthonormalization process. Let $N:\mathbb{R}^d\to\mathbb{R}^d$ be
defined by $No_i=e_i$. The estimate $||N||\le C_1(d)$ with
$C_1(d)=\sqrt{d}$ follows because the vectors $\{e_i\}_{i=1}^d$
are normalized and the vectors $\{o_i\}_{i=1}^d$ form an
orthonormal set.
\medskip
To estimate $||N^{-1}||$ we observe that the matrix of $N$ with
respect to the basis $\{o_i\}$ is of the form
$$N=\left(\begin{array}{cccc} N_{11} & N_{12} &\dots & N_{1d}\\
0 & N_{22} & \dots & N_{2d}\\
\vdots &\vdots &\ddots &\vdots\\
0 & 0 & \dots & N_{dd}\end{array}\right),$$ and that the
inequality {\bf (ii)} implies $N_{ii}\ge \frac1{3d}$. We have
$$N=\left(\begin{array}{cccc} N_{11} & 0 &\dots & 0\\
0 & N_{22} & \dots & 0\\
\vdots &\vdots &\ddots &\vdots\\
0 & 0 & \dots & N_{dd}\end{array}\right)\cdot
\left(\begin{array}{cccc} 1 & \frac{N_{12}}{N_{11}} &\dots & \frac{N_{1d}}{N_{11}}\\
0 & 1 & \dots & \frac{N_{2d}}{N_{22}}\\
\vdots &\vdots &\ddots &\vdots\\
0 & 0 & \dots & 1\end{array}\right)=D(I+U),$$ where $I$ is the
identity matrix,
$$D=\left(\begin{array}{cccc} N_{11} & 0 &\dots & 0\\
0 & N_{22} & \dots & 0\\
\vdots &\vdots &\ddots &\vdots\\
0 & 0 & \dots & N_{dd}\end{array}\right),\hbox{ and } U=
\left(\begin{array}{ccccc} 0 & \frac{N_{12}}{N_{11}} &\dots & \frac{N_{1,{d-1}}}{N_{11}} & \frac{N_{1d}}{N_{11}}\\
0 & 0 & \dots & \frac{N_{2,{d-1}}}{N_{22}} & \frac{N_{2d}}{N_{22}}\\
\vdots &\vdots &\ddots &\vdots &\vdots\\
0 & 0 & \dots & 0 & \frac{N_{d-1,d}}{N_{d-1,d-1}}\\
0 & 0 & \dots & 0 & 0
\end{array}\right)
$$
Therefore
\begin{equation}\label{E:inverse}
N^{-1}=(I+U)^{-1}D^{-1}=(I-U+U^2-\dots+(-1)^{d-1}U^{d-1})D^{-1},
\end{equation}
where the identity $(I+U)^{-1}=(I-U+U^2-\dots+(-1)^{d-1}U^{d-1})$
follows from the obvious equality $U^d=0$. The definition of $U$
and $N_{ii}\ge \frac1{3d}$ imply that columns of $U$ are vectors
with Euclidean norm at most $3d$, hence $||U||\le 3d^{\frac32}$.
Therefore the identity (\ref{E:inverse}) implies the following
estimate for $||N^{-1}||$:
$$||N^{-1}||\le \frac{||U||^d-1}{||U||-1}\cdot||D^{-1}||\le
\frac{3^dd^{\frac{3d}2}-1}{3d^{\frac32}-1}\cdot3d.
$$
Denoting the right-hand side of this inequality by $C_2(d)$ we get
the desired estimate.
\end{proof}
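The inversion step via the nilpotent matrix $U$ can be checked numerically. The Python sketch below (the entries of $N$ are illustrative values only, chosen with diagonal entries well above $\frac1{3d}$) decomposes an upper triangular $N$ as $D(I+U)$ and recovers $N^{-1}$ from the finite series $(I-U+U^2-\dots)D^{-1}$, which terminates because $U^d=0$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

# an illustrative upper triangular N
N = [[0.5, 0.2, -0.1],
     [0.0, 0.4,  0.3],
     [0.0, 0.0,  0.6]]
d = 3

# N = D(I + U): D is the diagonal part, U is strictly upper triangular
D_inv = [[1.0 / N[i][i] if i == j else 0.0 for j in range(d)] for i in range(d)]
U = [[N[i][j] / N[i][i] if j > i else 0.0 for j in range(d)] for i in range(d)]

# (I + U)^{-1} = I - U + U^2 - ... + (-1)^{d-1} U^{d-1}, since U^d = 0
series = identity(d)
power = identity(d)
for k in range(1, d):
    power = matmul(power, U)
    for i in range(d):
        for j in range(d):
            series[i][j] += (-1) ** k * power[i][j]

N_inv = matmul(series, D_inv)
check = matmul(N, N_inv)  # should equal the identity up to rounding
print(max(abs(check[i][j] - (1.0 if i == j else 0.0))
          for i in range(d) for j in range(d)))  # small floating-point residual
```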
\begin{remark}{Remark} We do not need sharp estimates for $c_1(d),
C_1(d)$, and $C_2(d)$ because $d$ is fixed in our argument, and
the dependence on $d$ of the parameters involved in our estimates
is not essential for our proofs.
\end{remark}
We use the following notation: for a set $\Gamma\subset S$ and a
real number $r>0$ we denote the set $\{x\in S:~
\inf\{||x-y||:~y\in\Gamma\}\le r\}$ by $\Gamma_r$.
\begin{lemma}\label{L:complement} Let $c_2(d)$ be the constant from {\rm (\ref{E:c_2})}, then
$\mu(S\backslash ((\Omega(\omega,\delta))_\omega))\le
\displaystyle{\frac{\delta}{c_2(d)\omega^{d-1}}}$.
\end{lemma}
\begin{proof}{Proof} Assume the contrary, that is, $\mu(S\backslash ((\Omega(\omega,\delta))_\omega))>
\frac{\delta}{c_2(d)\omega^{d-1}}$. Then, using a standard
averaging argument as in Lemma \ref{L:bases}, we find a point $x$
such that
$$\mu(\Delta(x,\omega)\backslash ((\Omega(\omega,\delta))_\omega))\ge
c_2(d)\omega^{d-1}\cdot\frac{\delta}{c_2(d)\omega^{d-1}}=\delta.$$
By the definition of $\Omega(\omega,\delta)$ this implies $x\in
\Omega(\omega,\delta)$. On the other hand, since the set
$\Delta(x,\omega)\backslash ((\Omega(\omega,\delta))_\omega)$ is
non-empty, it follows that $x\notin \Omega(\omega,\delta)$. We get
a contradiction. \end{proof}
\subsection{Notation and definitions used in the rest of the
proof}\label{S:notation}
For each $Z\in\mathcal{Z}_\varepsilon$ we apply Lemma \ref{L:bases} with
$\omega=\omega(\varepsilon)=\varepsilon^{4k}$ and $\delta=\delta(\varepsilon)=\varepsilon^{4dk}$,
where $0<k<1$ is a number satisfying the conditions
\begin{equation}
k<\frac{1}{6+4d^2}~\hbox{ and }~k<\frac{1}{2d+4d^2};
\end{equation}
we choose and fix such a number $k$ for the rest of the proof. It is
clear that there is $\Xi_0=\Xi_0(d,k)>0$ such that the conditions
$\omega(\varepsilon)\le\frac1{6d}$ and $\delta(\varepsilon)\le
c_1(d)(\omega(\varepsilon))^{d-1}$ are satisfied for all $\varepsilon\in(0,\Xi_0)$,
where $c_1(d)$ is the constant from Lemma \ref{L:bases}. In the rest
of the argument we consider $\varepsilon\in(0,\Xi_0)$ only. Let
$\{e_i\}_{i=1}^d$ be one of the bases satisfying the conditions of
Lemma \ref{L:bases} with the described choice of $\omega$ and
$\delta$. Now we change the system of coordinates in
$\mathbb{R}^d\supset Z$ the second time. The new system of
coordinates is such that $\{e_i\}_{i=1}^d$ is its unit vector basis.
We shall modify the objects introduced so far ($\Omega,\mu$, etc.)
and denote their versions corresponding to the new system of
coordinates by ${\check\Omega}, \check\mu$, etc. All these objects depend on
$Z$, $\varepsilon$, and the choice of $\{e_i\}_{i=1}^d$.
\medskip
We denote by $\check S$ the Euclidean unit sphere in the new
system of coordinates. We denote by $\mathfrak{N}:S\to\check S$ the natural
normalization mapping, that is, $\mathfrak{N}(z)=z/||z||$, where $||z||$ is
the Euclidean norm of $z$ with respect to the new system of
coordinates. The estimates for $||N||$ and $||N^{-1}||$ from Lemma
\ref{L:bases} imply that the Lipschitz constants of the mapping
$\mathfrak{N}$ and its inverse $\mathfrak{N}^{-1}:\check S\to S$ can be estimated in
terms of $d$ only.
We introduce a measure ${\check\mu}$ on $\check S$ as an atomic measure
supported on a finite set and such that ${\check\mu}(\mathfrak{N}(z))=\mu(z)||z||$
for each $z\in S$, where $||z||$ is the norm of $z$ in the new
system of coordinates. Using the definition of the zonotope $Z$ it
is easy to check that the function
$$\check h_Z(x)=\int_{\check S}|\langle x,\check
z\rangle|d\check\mu(\check z),$$ where $\langle\cdot,\cdot\rangle$
is the inner product in the new coordinate system, is the support
function of $Z$ in the new system of coordinates.
\medskip
We define ${\check\Omega}={\check\Omega}(\omega,\delta)$ as
$\mathfrak{N}(\Omega(\omega,\delta))$. It is clear that $e_i\in{\check\Omega}$.
Everywhere below we mean coordinates in the new system of
coordinates (when we refer to $||\cdot||$, $\Delta$, etc).
\medskip
The observation that $\mathfrak{N}$ and $\mathfrak{N}^{-1}$ are Lipschitz, with
Lipschitz constants estimated in terms of $d$ only, implies the
following statements:
\begin{itemize}
\item There exist $C_3(d), C_4(d)<\infty$ such that
\begin{equation}\label{E:Obs1} {\check\mu}(\check
S\backslash(({\check\Omega}(\omega,\delta))_{C_3(d)\omega(\varepsilon)}))\le
C_4(d)\frac{\delta}{\omega^{d-1}}
\end{equation}
(we use Lemma \ref{L:complement}).
\item There exist $c_3(d)>0$ and $C_5(d)<\infty$ such that
\begin{equation}\label{E:Obs2}{\check\mu}(\Delta(x, C_5(d)\omega))\ge c_3(d)\delta~~\forall
x\in{\check\Omega}(\omega,\delta)\end{equation} (we use the definitions of
$\Omega(\omega,\delta)$ and ${\check\Omega}(\omega,\delta)$).
\item There exists a constant $C_6(d)$ depending on $d$ only, such
that
\begin{equation}\label{E:vol}
{\rm vol}\hskip0.02cm(Z)\le C_6(d).\end{equation}
\end{itemize}
Let $\check Q$ be the transpose of the matrix whose columns are
the coordinates of $z_i$ in the new system of coordinates. We
denote by $\check w_i$ $(i=1,\dots,M)$ the $d\times d$ minors of
$\check Q$ ordered in the same way as the $w_i$. The vector
$\{\check w_i\}_{i=1}^M$ is a scalar multiple of
$\{w_i\}_{i=1}^M$. Therefore (\ref{E:equal}) implies
\begin{equation}\label{E:1+ep}
\max_i|u_i|\sum_{i=1}^M|\check w_i|=(1+\varepsilon)^d
\left|\sum_{i=1}^Mu_i\check w_i\right|.
\end{equation}
The volume of $Z$ in the new system of coordinates is
$2^d\sum_{i=1}^M|\check w_i|$.
\subsection{Lemma on six large minors}
To show that if $\varepsilon>0$ is small, then the inequality (\ref{E:1+ep})
implies that all but a very small part of the measure ${\check\mu}$ is
supported ``around'' multiples of vectors represented by a totally
unimodular matrix in some basis, we need the following lemma. It
shows that the inequality (\ref{E:1+ep}) implies that the measure
${\check\mu}$ cannot have non-trivial ``masses'' near $(d+2)$-tuples of
vectors satisfying a certain condition.
\begin{lemma}\label{L:six} Let $\chi(\varepsilon)$, $\sigma(\varepsilon)$, and $\pi(\varepsilon)$ be functions
satisfying the following conditions:
\begin{itemize}
\item[{\bf (1)}] $\displaystyle{\lim_{\varepsilon\downarrow 0}\chi(\varepsilon)=\lim_{\varepsilon\downarrow
0}\sigma(\varepsilon)=\lim_{\varepsilon\downarrow 0}\pi(\varepsilon)=0}$;
\item[{\bf (2)}] $\varepsilon=o((\chi(\varepsilon))^2(\sigma(\varepsilon))^d)\hbox{
as }\varepsilon\downarrow 0$;
\item[{\bf (3)}] $\pi(\varepsilon)=o(\chi(\varepsilon)) \hbox{ as }\varepsilon\downarrow
0$;
\item[{\bf (4)}] There is a subset $\Phi_0\subset (0,\Xi_0)$ such
that the closure of $\Phi_0$ contains $0$, and for each
$\varepsilon\in\Phi_0$ there exist $Z\in \mathcal{Z}_\varepsilon$ and points
$x_1,\dots,x_{d-2},p_1,p_2,p_3,p_4$ in the corresponding $\check
S$, such that
\begin{equation}\label{E:measure}
{\check\mu}(\Delta(z,\pi(\varepsilon)))\ge\sigma(\varepsilon)~~\forall
z\in\{x_1,\dots,x_{d-2},p_1,p_2,p_3,p_4\}.
\end{equation}
\end{itemize}
Let $\mathcal{U}_0$ be the set of pairs $(\varepsilon,Z)$ in which
$\varepsilon\in\Phi_0$ and $Z$ satisfies the condition from {\bf (4)}.
\smallskip
Let $\Phi_1\subset\Phi_0$ be the set of those $\varepsilon\in\Phi_0$ for
which there exists $(\varepsilon,Z)\in\mathcal{U}_0$ such that the
corresponding points $x_1,\dots,x_{d-2},p_1,p_2,p_3,p_4$ satisfy
the condition
\begin{equation}\label{E:detge} |\det(H_{\alpha,\beta})|\ge\chi(\varepsilon)\end{equation}
for all matrices $H_{\alpha,\beta}$ whose columns are the
coordinates of $\{x_1,\dots,x_{d-2},p_\alpha,p_\beta\}$,
$\alpha,\beta\in\{1,2,3,4\}$, $\alpha\ne\beta$, with respect to an
orthonormal basis $\{e_i\}_{i=1}^d$ in $\mathbb{R}^d$. Then there
exists $\Xi_1>0$ such that $\Phi_1\cap(0,\Xi_1)=\emptyset$.
\end{lemma}
\begin{proof}{Proof} We assume the contrary, that is, we assume that
$0$ belongs to the closure of $\Phi_1$. For each $\varepsilon\in\Phi_1$ we
choose $Z\in\mathcal{Z}_\varepsilon$ such that $(\varepsilon,Z)\in\mathcal{U}_0$
and the condition (\ref{E:measure}) is satisfied. We show that for
sufficiently small $\varepsilon>0$ this leads to a contradiction.
\medskip
We consider the following perturbation of the matrix
$H_{\alpha,\beta}$: each column vector $z$ in it is replaced by a
vector from $\Delta(z,\pi(\varepsilon))$. We denote the obtained
perturbation of the matrix $H_{\alpha,\beta}$ by
$H_{\alpha,\beta}^p$. We claim that
\begin{equation}\label{E:Hp}
|\det(H_{\alpha,\beta}^p)|\ge\chi(\varepsilon)-d\cdot\pi(\varepsilon).\end{equation}
To prove this claim we need the following lemma, which we state in
a bit more general form than is needed now, because we shall need
it later.
\begin{lemma}\label{L:detperturb} Let $x_1,\dots, x_d,z\in \ell_2^d$ be such that
$\displaystyle{\max_{2\le i\le d}}||x_i||\le\mathfrak{m}$ and
$||z-x_1||\le\mathfrak{l}$. Then
$$\left|\det[z,x_2,\dots,x_d]-\det[x_1,x_2,\dots,x_d]\right|\le\mathfrak{l}\cdot\mathfrak{m}^{d-1}.$$
\end{lemma}
This lemma follows immediately from the volumetric interpretation
of determinants.
\medskip
To get the inequality (\ref{E:Hp}) we apply Lemma
\ref{L:detperturb} $d$ times with $\mathfrak{m}=1$ and
$\mathfrak{l}=\pi(\varepsilon)$.
\medskip
Since $Z\in\mathcal{Z}_\varepsilon$, it can be represented in the form
$Z=\sum_iI(z_i)$. First we complete our proof in a special case
when the following condition is satisfied:
\smallskip
\noindent{\bf (*)} {\it All vectors $z_i$ whose normalizations
$z_i/||z_i||$ belong to the sets $\Delta(z, \pi(\varepsilon))$, $z\in
\{x_1,\dots$, $x_{d-2},p_1,p_2,p_3, p_4\}$, have the same norm
$\tau$, and each of these sets contains the same number of such
vectors; we denote this common number by $F$.}
\smallskip
The inequality (\ref{E:measure}) implies
$$F\cdot\tau\ge \sigma(\varepsilon).$$
We denote by $\Lambda$ the set of all numbers $i\in\{1,\dots, M\}$
satisfying the condition: the normalizations of columns of the
minor $\check w_i$ form a matrix of the form $H_{\alpha,\beta}^p$,
for some $\alpha,\beta\in\{1,2,3,4\}$.
\medskip
We need an estimate for $\displaystyle{\sum_{i\in\Lambda}|\check
w_i|}$. The inequality (\ref{E:Hp}) implies $|\check
w_i|\ge\tau^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))$ for each $i\in\Lambda$.
\medskip
On the other hand, the cardinality $|\Lambda|$ of $\Lambda$ is
$6F^d$. In fact, there are $F^{d-2}$ ways to choose $z_i/||z_i||$
in the sets $\Delta(x_j,\pi(\varepsilon))$, $j=1,\dots, d-2$. There are
$\binom{4}2=6$ ways to choose two of the sets
$\Delta(p_j,\pi(\varepsilon))$, $j=1,2,3,4$, and there are $F^2$ ways to
choose one vector $z_i/||z_i||$ in each of them. Therefore
$|\Lambda|=6F^d$ and
\begin{equation}\label{E:2.16}
\sum_{i\in\Lambda}|\check w_i|\ge
6F^d\tau^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))\ge6(\sigma(\varepsilon))^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon)).
\end{equation}
We assume for simplicity that $\max_i|u_i|=1$ (if this is not the
case, some of the sums below should be multiplied by
$\max_i|u_i|$). The $u_i$ are defined above the equality
(\ref{E:equal}). Then the condition (\ref{E:1+ep}) can be
rewritten as
\begin{equation}\label{E:2.17}
(1+\varepsilon)^d\left|\sum_{i=1}^Mu_i\check w_i\right|\ge
\sum_{i\in\Lambda}|\check w_i|+\sum_{i\notin\Lambda}|\check w_i|.
\end{equation}
On the other hand,
\begin{equation}\label{E:2.18}
(1+\varepsilon)^d\left|\sum_{i=1}^Mu_i\check w_i\right|\le
(1+\varepsilon)^d\left|\sum_{i\in\Lambda}u_i\check w_i\right|+
(1+\varepsilon)^d\sum_{i\notin\Lambda}|\check w_i|.
\end{equation}
From (\ref{E:2.17}) and (\ref{E:2.18}) we get
\begin{equation}\label{E:2.19}
(1+\varepsilon)^d \left|\sum_{i\in\Lambda}u_i\check w_i\right|\ge
\sum_{i\in\Lambda}|\check w_i|- \left((1+\varepsilon)^d-1\right)
\sum_{i\notin\Lambda}|\check w_i|.
\end{equation}
As is well known, $2^d\sum_{i=1}^M|\check w_i|$ is the volume of
$Z$, hence $\sum_{i=1}^M|\check w_i|\le 2^{-d}C_6(d)$.
\medskip
Using this observation and the inequalities (\ref{E:2.16}) and
(\ref{E:2.19}) we get
$$
\left|\sum_{i\in\Lambda}u_i\check w_i\right|\ge \left(
\frac1{(1+\varepsilon)^d}-
\frac{((1+\varepsilon)^d-1)C_6(d)2^{-d}}{6(\sigma(\varepsilon))^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))}
\right) \sum_{i\in\Lambda}|\check w_i|.
$$
(We use the fact that $\chi(\varepsilon)-d\cdot\pi(\varepsilon)>0$ if $\varepsilon>0$ is
small enough.) The conditions {\bf (2)} and {\bf (3)} imply that
there exists $\psi>0$ such that
\begin{equation}\label{E:0.04}
\left( \frac1{(1+\varepsilon)^d}-
\frac{((1+\varepsilon)^d-1)C_6(d)2^{-d}}{6(\sigma(\varepsilon))^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))}
\right)>(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon)))
\end{equation}
is satisfied if $\varepsilon\in(0,\psi)$. The right-hand side is
chosen in the form needed below.
\medskip
Let $\psi>0$ be such that the statement above is true. Then for
$\varepsilon\in(0,\psi)$ we have
\begin{equation}\label{E:not}
\left|\sum_{i\in\Lambda}u_i\check w_i\right|\ge
(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon))) \sum_{i\in\Lambda}|\check w_i|.
\end{equation}
Recall that $u_i$ are $d\times d$ minors of some matrix
$[y_1,\dots,y_d]$. We need the Pl\"ucker relations, see
\cite[p.~312]{HP} or \cite[p.~42]{sh}. The result that we need can
be stated in the following way: if $\gamma_1,\dots,\gamma_{d-2},
\kappa_1, \kappa_2, \kappa_3, \kappa_4$ are indices of $d+2$ rows
of $[y_1,\dots,y_d]$, then
\begin{equation}\label{E:P}
t_{1,2}t_{3,4}-t_{1,4}t_{3,2}+t_{2,4}t_{3,1}=0,
\end{equation}
where $t_{\alpha,\beta}$ is the determinant of the $d\times d$
matrix whose rows are the rows of $[y_1,\dots,y_d]$ with the
indices $\gamma_1,\dots,\gamma_{d-2}, \kappa_\alpha$, and
$\kappa_\beta$. Note that (\ref{E:P}) can be verified by a
straightforward computation (which is very simple if we make a
suitable change of coordinates before the computation).
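The three-term relation (\ref{E:P}) is also easy to test numerically. The Python sketch below (illustrative only) takes an arbitrary integer matrix $[y_1,y_2]$ with $d=2$ and $m=4$, so that $\kappa_1,\dots,\kappa_4$ run over all the rows and there are no rows $\gamma_i$, and checks that the signed combination of products of $2\times 2$ minors vanishes.

```python
def det2(a, b):
    # 2x2 determinant with rows a and b
    return a[0] * b[1] - a[1] * b[0]

# rows of an arbitrary matrix [y_1, y_2] with d = 2 and m = 4
Y = [(1, 2), (3, 5), (2, 7), (4, 1)]

def t(alpha, beta):
    # the minor built from rows kappa_alpha and kappa_beta (1-based)
    return det2(Y[alpha - 1], Y[beta - 1])

# the Pluecker relation: t_{1,2} t_{3,4} - t_{1,4} t_{3,2} + t_{2,4} t_{3,1} = 0
lhs = t(1, 2) * t(3, 4) - t(1, 4) * t(3, 2) + t(2, 4) * t(3, 1)
print(lhs)  # 0
```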
\medskip
Now we show that (\ref{E:not}) cannot be satisfied. Let $\Psi$ be
a set consisting of $(d+2)$ vectors $z_{\kappa_1}, z_{\kappa_2},
z_{\kappa_3}, z_{\kappa_4}, z_{\gamma_1}, \dots,
z_{\gamma_{d-2}}$, formed in the following way. We choose vectors
$(z_{\kappa_i}/||z_{\kappa_i}||)\in \Delta(p_i,\pi(\varepsilon))$,
$i=1,2,3,4$, and choose vectors
$(z_{\gamma_i}/||z_{\gamma_i}||)\in \Delta(x_i,\pi(\varepsilon))$,
$i=1,\dots,d-2$. To each such selection there corresponds a set of
$6$ minors $\check w_i$ of the form
$\tau^d\det(H_{\alpha,\beta}^p)$, we denote this set of six minors
by $\{\check w_i\}_{i\in M(\Psi)}$.
\medskip
One of the immediate consequences of the Pl\"ucker relation
(\ref{E:P}) is that for any such $(d+2)$-tuple $\Psi$
\begin{equation}\label{E:sqrt2}
|u_i|\le\frac1{\sqrt{2}}~\hbox{ for some }~i\in M(\Psi).
\end{equation}
(Here we use the assumption that $\max_i|u_i|=1$: if all six
numbers $|u_i|$, $i\in M(\Psi)$, were greater than
$\frac1{\sqrt{2}}$, each of the three products in (\ref{E:P})
would have absolute value in $(\frac12,1]$; but (\ref{E:P})
expresses the product $t_{1,4}t_{3,2}$ as the sum of the other two
products, and this sum has absolute value greater than $1$ when
the two products have the same sign and less than $\frac12$
otherwise, a contradiction.)
\medskip
For each $\Psi$ we choose one such $i\in M(\Psi)$ and denote it by
$s(\Psi)$. The estimate (\ref{E:Hp}) and the condition {\bf (*)}
imply that
\begin{equation}\label{E:rho}
\tau^d\ge |\check w_i|\ge\tau^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))
\end{equation}
for every $i\in\Lambda$.
\medskip
Hence for every $(d+2)$-tuple $\Psi$ of the described type we have
\begin{align*}\begin{split}\left|\sum_{i\in M(\Psi)}u_i\check w_i\right|&\le \sum_{i\in
M(\Psi)\backslash\{s(\Psi)\}}|\check w_i|+ \frac1{\sqrt{2}}|\check
w_{s(\Psi)}|\le \sum_{i\in M(\Psi)}|\check
w_i|-\frac{\sqrt{2}-1}{\sqrt{2}} |\check w_{s(\Psi)}|\\
&=\sum_{i\in M(\Psi)}|\check w_i|\left(1-
\frac{(\sqrt{2}-1)|\check
w_{s(\Psi)}|}{\sqrt{2} \sum_{i\in M(\Psi)}|\check w_i|}\right)\\
&\le \sum_{i\in M(\Psi)}|\check w_i| \left(1-
\frac{(\sqrt{2}-1)\tau^d(\chi(\varepsilon)-d\cdot\pi(\varepsilon))}{\sqrt{2}
\cdot6\tau^d}\right)\\
&< \sum_{i\in M(\Psi)}|\check
w_i|\left(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon))\right).\end{split}\end{align*}
Thus
\begin{equation}\label{E:quad}
\left|\sum_{i\in M(\Psi)}u_i\check w_i\right| < \sum_{i\in
M(\Psi)}|\check w_i|\left(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon))\right).
\end{equation}
Recall that $F$ is the number of vectors $z_i$ corresponding to
each of the sets $\Delta(z, \pi(\varepsilon))$, $z\in
\{x_1,\dots,x_{d-2},p_1,p_2,p_3, p_4\}$. Simple counting shows
that for an arbitrary collection $\{\Upsilon_i\}_{i\in\Lambda}$ of
numbers we have
$$
\sum_\Psi\sum_{i\in M(\Psi)}\Upsilon_i=F^2
\sum_{i\in\Lambda}\Upsilon_i.
$$
Using (\ref{E:quad}) we get that
\begin{align*}\begin{split}
F^2\left|\sum_{i\in\Lambda}u_i\check w_i\right|&=
\left|\sum_\Psi\sum_{i\in M(\Psi)}u_i\check w_i\right|\le
\sum_\Psi\left|\sum_{i\in M(\Psi)}u_i\check w_i\right|\\
&<\sum_\Psi\sum_{i\in M(\Psi)}|\check w_i|
(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon)))\\
&= F^2\sum_{i\in\Lambda}|\check w_i|
(1-0.04(\chi(\varepsilon)-d\cdot\pi(\varepsilon))).
\end{split}\end{align*}
If $\varepsilon\in(0,\psi)$, we get a contradiction with
(\ref{E:not}).
\medskip
To see that the general case can be reduced to the case {\bf (*)}
we need the following observation:
\medskip
Let $\tau_1, \tau_2>0$ be such that $\tau_1+\tau_2=1$. We replace
the row with the coordinates of $z_j$ in $\check Q$ by two rows,
one of them is the row of coordinates of $\tau_1 z_j$ and the
other is the row of coordinates of $\tau_2 z_j$. The zonotope
generated by the rows of the obtained matrix coincides with $Z$.
In the matrix $[y_1,\dots, y_d]$ we replace the $j^{\rm th}$ row
by two copies of it. It is easy to see that if we replace the
sequences $\{u_i\}_{i=1}^M$ and $\{\check w_i\}_{i=1}^M$ by
sequences of $d\times d$ minors of these new matrices, the
condition (\ref{E:1+ep}) is still satisfied.
\medskip
We can repeat this `cutting' of vectors $z_j$ into `pieces' with
(\ref{E:1+ep}) still being valid.
\medskip
Therefore, we may assume the following: among $z_j$ corresponding
to each of the sets $\Delta(z, \pi(\varepsilon))$, $z\in
\{x_1,\dots,x_{d-2},p_1,p_2,p_3, p_4\}$ there exists a subset
$\Phi(z, \pi(\varepsilon))$ consisting of vectors having the same length
$\tau$, and such that the sum of norms of vectors from $\Phi(z,
\pi(\varepsilon))$ is $\ge\displaystyle{\frac{\sigma(\varepsilon)}2}$;
moreover, we may assume that the numbers of such vectors in the
subsets $\Phi(z,\pi(\varepsilon))$ are the same for all $z\in
\{x_1,\dots,x_{d-2},p_1,p_2,p_3, p_4\}$.
\medskip
Lemma \ref{L:six} in this case can be proved using the same
argument as before, but with $\Lambda$ being the set of indices of those
minors $\check w_i$ whose rows come from the sets $\Phi(z,\pi(\varepsilon))$.
Everything starting with the inequality (\ref{E:2.16}) can be
shown in the same way as before; only some constants will be
changed (because we need to replace $\sigma(\varepsilon)$ by
$\frac{\sigma(\varepsilon)}2$).
\end{proof}
\subsection{Searching for a totally unimodular matrix}
Let $\rho(\varepsilon)=\varepsilon^k$, $\nu(\varepsilon)=\varepsilon^{3k}$. For a vector $s$ we
denote its coordinates with respect to $\{e_i\}_{i=1}^d$ by
$\{s_i\}_{i=1}^d$. (Here $k$ and $\{e_i\}_{i=1}^d$ are the same as
in Section \ref{S:notation}.)
\begin{lemma}\label{L:smallangles} If
\begin{equation}\label{E:k1}
k<\frac{1}{6+4d^2},
\end{equation}
then there exists $\Xi_2>0$ such that for $\varepsilon\in(0,\Xi_2)$,
$s,t\in{\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$, and
$\alpha,\beta\in\{1,\dots,d\}$, the inequality
\begin{equation}\label{E:aboverho}
\min\{|s_\alpha|, |s_\beta|, |t_\alpha|, |t_\beta|\}\ge\rho(\varepsilon),
\end{equation}
implies
\begin{equation}\label{E:largedet}
\left|\det\left(\begin{array}{cc} s_\alpha &
t_\alpha\\
s_\beta & t_\beta\end{array}\right)\right|<\nu(\varepsilon).
\end{equation}
\end{lemma}
\begin{proof}{Proof} Assume the contrary, that is, there exists a subset $\Phi_2\subset (0,1)$, having $0$ in its
closure and such that for each $\varepsilon\in\Phi_2$ there exist
$Z\in{\cal Z}_\varepsilon$, $s,t\in{\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$ and
$\alpha,\beta$ satisfying the condition (\ref{E:aboverho}), and
such that
$$\left|\det\left(\begin{array}{cc} s_\alpha &
t_\alpha\\
s_\beta & t_\beta\end{array}\right)\right|\ge\nu(\varepsilon).$$ We apply
Lemma \ref{L:six} with $\{x_1,\dots,x_{d-2}\}=
\{e_i\}_{i\ne\alpha,\beta}$,
$\{p_1,p_2,p_3,p_4\}=\{e_\alpha,e_\beta, s, t\}$. Using a
straightforward determinant computation we see that the condition
(\ref{E:detge}) is satisfied with $\chi(\varepsilon)=\min\{1,\rho(\varepsilon),
\nu(\varepsilon)\}=\varepsilon^{3k}$ (we consider $\varepsilon<1$).
\medskip
The inequality (\ref{E:Obs2}) implies that the condition {\bf (4)}
of Lemma \ref{L:six} is satisfied with
$\pi(\varepsilon)=C_5(d)\omega(\varepsilon)=C_5(d)\varepsilon^{4k}$ and
$\sigma(\varepsilon)=c_3(d)\delta(\varepsilon)=c_3(d)\varepsilon^{4dk}$. It is clear that
the conditions {\bf (2)} and {\bf (3)} of Lemma \ref{L:six} are also
satisfied; to verify {\bf (2)} we use the condition (\ref{E:k1}).
Applying Lemma \ref{L:six}, we get the existence of the desired
$\Xi_2$.
\end{proof}
For each vector from ${\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$ we define its
{\it top set} as the set of indices of coordinates whose absolute
values are at least $\rho(\varepsilon)$.
\medskip
The collection of all possible top sets is a subset of the set of
all subsets of $\{1,\dots,d\}$, hence its cardinality is at most
$2^d$. We create a collection $\Theta(\omega(\varepsilon),
\delta(\varepsilon))\subset{\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$ in the following
way: for each subset of $\{1,\dots,d\}$ which is a top set for at
least one vector from ${\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$, we choose
one of such vectors; the set $\Theta(\omega(\varepsilon), \delta(\varepsilon))$ is
the set of all vectors selected in this way.
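The selection of $\Theta(\omega(\varepsilon), \delta(\varepsilon))$ is purely combinatorial and can be sketched in code. The following is a minimal illustration (the function names are ours; \texttt{rho} plays the role of $\rho(\varepsilon)$, and the input is any finite family of vectors):

```python
def top_set(v, rho):
    """Indices of the coordinates of v with absolute value at least rho."""
    return frozenset(i for i, x in enumerate(v) if abs(x) >= rho)

def representatives(vectors, rho):
    """For every top set realized by some vector in the family,
    keep exactly one vector realizing it (the first one encountered)."""
    theta = {}
    for v in vectors:
        theta.setdefault(top_set(v, rho), v)
    return list(theta.values())
```

Since the top sets are subsets of $\{1,\dots,d\}$, the resulting family has at most $2^d$ elements.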
\medskip
In our next lemma we show that each vector from ${\check\Omega}(\omega(\varepsilon),
\delta(\varepsilon))$ can be reasonably well approximated by a vector from
$\Theta(\omega(\varepsilon), \delta(\varepsilon))$. Therefore (as we shall see
later), to prove Lemma \ref{L:APPROX} it is sufficient to find a
``totally unimodular'' set approximating $\Theta(\omega(\varepsilon),
\delta(\varepsilon))$.
\begin{lemma}\label{L:smalldistance} Let $\rho(\varepsilon)$ and $\nu(\varepsilon)$ be as above and let $k$ and $\Xi_2$ be numbers
satisfying the conditions of Lemma {\rm \ref{L:smallangles}}. Let
$\varepsilon\in(0,\Xi_2)$, $Z\in{\cal Z}_\varepsilon$, and let
$s,t\in{\check\Omega}(\omega(\varepsilon),\delta(\varepsilon))$ be two vectors with the same
top set $\Sigma$. Then
\begin{equation}\label{E:topset}
\min\{||t+s||,
||t-s||\}\le\sqrt{2\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}+4d\rho(\varepsilon)^2}.
\end{equation}
\end{lemma}
\begin{proof}{Proof} Observe that if
$\rho(\varepsilon)=\varepsilon^k>\displaystyle{\frac{1}{\sqrt{d}}}$, the statement
of the lemma is trivial. Therefore we may assume that
$\rho(\varepsilon)\le\displaystyle{\frac{1}{\sqrt{d}}}$. In such a case
$\Sigma$ contains at least one element.
First we show that the signs of different components of $s$ and
$t$ ``agree'' on $\Sigma$ in the sense that either they are the
same everywhere on $\Sigma$, or they are the opposite everywhere
on $\Sigma$. In fact, assume the contrary, and let $\alpha,
\beta\in\Sigma$ be indices for which the signs ``disagree''. Then,
as is easy to check,
$$\left|\det\left(\begin{array}{cc} s_\alpha &
t_\alpha\\
s_\beta &
t_\beta\end{array}\right)\right|=|s_\alpha||t_\beta|+|s_\beta||t_\alpha|\ge
2(\rho(\varepsilon))^2>\nu(\varepsilon),$$ and we get a contradiction. We consider
the case when the signs of $t_\alpha$ and $s_\alpha$ are the same
for each $\alpha\in\Sigma$, the other case can be treated
similarly (we can just consider $-s$ instead of $s$).
\medskip
We may assume without loss of generality that $|t_\alpha|\ge
|s_\alpha|$ for some $\alpha\in\Sigma$. We show that in this case
\begin{align*}
|t_\beta|\ge\left(1-\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}\right)|s_\beta|
\end{align*}
for all $\beta\in\Sigma$. In fact, if
$|t_\beta|<\displaystyle{\left(1-\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}\right)}|s_\beta|$
for some $\beta\in\Sigma$, then
$$\nu(\varepsilon)>\left|\det\left(\begin{array}{cc} s_\alpha &
t_\alpha\\
s_\beta & t_\beta\end{array}\right)\right|\ge|t_\alpha|
|s_\beta|-|s_\alpha||t_\beta|\ge|s_\alpha||s_\beta|\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}\ge\nu(\varepsilon),
$$
a contradiction.
\medskip
We have
\begin{align*}\begin{split}
||t-s||^2&=||t||^2+||s||^2-2\langle t, s\rangle\le
2-2\sum_{\alpha\in\Sigma}\left(1-\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}\right)s_\alpha^2+2\sum_{\alpha\notin\Sigma}\rho(\varepsilon)^2\\
&\le
2\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}+4\sum_{\alpha\notin\Sigma}\rho(\varepsilon)^2\le
2\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}+4d\rho(\varepsilon)^2.\end{split}
\end{align*}\end{proof}
Let $\Theta(\omega(\varepsilon),\delta(\varepsilon))=\{\mathfrak{b}_j\}_{j=1}^J$,
where $J\le 2^d$. We may and shall assume that
$\{e_i(\varepsilon)\}_{i=1}^d\subset \Theta(\omega(\varepsilon),\delta(\varepsilon))$ (see
Lemma \ref{L:bases} and Section \ref{S:notation}). We denote
$d\cdot 2^d$ by $\mathfrak{n}$ and introduce $d\cdot\mathfrak{n}$
functions:
$\varphi_1(\varepsilon),\dots,\varphi_{d\cdot\mathfrak{n}}(\varepsilon)$, such
that
\begin{equation}\label{E:phi1}
\varphi_1(\varepsilon)\ge\dots\ge\varphi_{d\cdot
\mathfrak{n}}(\varepsilon)=\rho(\varepsilon)=\varepsilon^k,
\end{equation}
and
\begin{equation}\label{E:phi2}
\displaystyle{\varphi_{\alpha}(\varepsilon)=(\varphi_{\alpha+1}(\varepsilon))^{\frac1{d+1}}}.
\end{equation}
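Iterating (\ref{E:phi2}) starting from $\varphi_{d\cdot\mathfrak{n}}(\varepsilon)=\varepsilon^k$ determines the whole ladder of functions. A hedged numerical sketch (the helper name and parameters are ours):

```python
def phi_ladder(eps, k, d, n):
    """Return [phi_1, ..., phi_{d*n}] defined by phi_{d*n} = eps**k
    and phi_alpha = phi_{alpha+1} ** (1 / (d + 1))."""
    phis = [eps ** k]                       # phi_{d*n}
    for _ in range(d * n - 1):
        phis.append(phis[-1] ** (1.0 / (d + 1)))
    phis.reverse()                          # now phis[0] is phi_1
    return phis
```

In particular $\varphi_1(\varepsilon)=\varepsilon^{k(d+1)^{-(d\mathfrak{n}-1)}}$, and for $\varepsilon\in(0,1)$ the sequence is non-increasing, as required by (\ref{E:phi1}).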
We consider the matrix $X$ whose columns are
$\{\mathfrak{b}_j\}_{j=1}^J$. We order the absolute values of
entries of this matrix in non-increasing order and denote them by
$\mathfrak{a}_1\ge\mathfrak{a}_2\ge\dots\ge\mathfrak{a}_{d\cdot J}$. Let $j_0$ be the least
index for which
\begin{equation}\label{E:j_0}
\varphi_{d\cdot j_0}(\varepsilon)>\mathfrak{a}_{j_0}. \end{equation} The existence
of $j_0$ follows from $\{e_i(\varepsilon)\}_{i=1}^d\subset
\Theta(\omega(\varepsilon),\delta(\varepsilon))$. The definition of $j_0$ implies
that $\mathfrak{a}_j\ge \varphi_{d\cdot j}(\varepsilon)$ for $j<j_0$, hence
$\mathfrak{a}_j\ge\varphi_{d\cdot(j_0-1)}(\varepsilon)$ for $j\le j_0-1$.
\medskip
We replace all entries of the matrix $X$ except
$\mathfrak{a}_1,\dots,\mathfrak{a}_{j_0-1}$ by zeros and denote the obtained matrix
by $G=(G_{ij})$, $i=1,\dots,d$, $j=1,\dots,J$, and its columns by
$\{g_j\}_{j=1}^{J}$. It is clear that
\begin{equation}\label{E:gx}||g_j-\mathfrak{b}_j||\le d\cdot\varphi_{dj_0}(\varepsilon).
\end{equation}
We form a bipartite graph $\mathcal{G}$ on the vertex set $\{\bar
1, \dots,\bar d\}\cup\{1,\dots,J\}$, where we use bars in $\bar 1,
\dots, \bar d$ because these vertices are considered as different
from the vertices $1,\dots,d$, which are in the set
$\{1,\dots,J\}$. The edges of $\mathcal{G}$ are defined in the
following way: the vertices $\bar i$ and $j$ are adjacent if and
only if $G_{ij}\ne 0$. So there is a one-to-one correspondence
between edges of $\mathcal{G}$ and non-zero entries of $G$. We
choose and fix a maximal forest $\mathcal{F}$ in $\mathcal{G}$.
(We use the standard terminology, see, e.g., \cite[p.~11]{Sch86}.)
\medskip
For each non-zero entry of $G$ we define its {\it level} in the
following way:
\smallskip
The level of entries corresponding to edges of $\mathcal{F}$ is
$1$.
\medskip
For a non-zero entry of $G$ which does not correspond to an edge
in $\mathcal{F}$ we consider the cycle in $\mathcal{G}$ formed by
the corresponding edge and edges of $\mathcal{F}$. We define the
{\it level} of the entry as the half of the length of the cycle
(recall that the graph $\mathcal{G}$ is bipartite, hence all
cycles are even).
\medskip
\noindent{\bf Observation.} One of the classes of the bipartition
has $d$ vertices. Hence no cycle can have more than $2d$ edges,
and the level of each entry is at most $d$.
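The levels can be computed mechanically: build the bipartite graph of non-zero entries, pick a maximal forest, and assign to each non-forest entry half the length of its fundamental cycle. A sketch under the assumption that \texttt{G} is given as a list of rows; the BFS forest below is one possible choice of $\mathcal{F}$:

```python
from collections import deque

def entry_levels(G):
    """Levels of the non-zero entries of G: entries whose edges lie in a
    maximal forest of the bipartite graph get level 1; every other entry
    gets half the length of the cycle it closes with the forest."""
    d, J = len(G), len(G[0])
    verts = [('r', i) for i in range(d)] + [('c', j) for j in range(J)]
    adj = {v: [] for v in verts}
    edges = [(i, j) for i in range(d) for j in range(J) if G[i][j] != 0]
    for i, j in edges:
        adj[('r', i)].append(('c', j))
        adj[('c', j)].append(('r', i))
    parent, forest = {}, set()
    for root in verts:                       # BFS maximal forest
        if root in parent:
            continue
        parent[root] = None
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    forest.add(frozenset((u, v)))
                    queue.append(v)
    def tree_dist(u, v):                     # distance inside the forest
        def path(x):
            p = []
            while x is not None:
                p.append(x)
                x = parent[x]
            return p
        pu, pv = path(u), path(v)
        seen = set(pu)
        k = next(i for i, x in enumerate(pv) if x in seen)
        return pu.index(pv[k]) + k
    levels = {}
    for i, j in edges:
        u, v = ('r', i), ('c', j)
        if frozenset((u, v)) in forest:
            levels[(i, j)] = 1
        else:                                # fundamental cycle has even length
            levels[(i, j)] = (tree_dist(u, v) + 1) // 2
    return levels
```

For the all-ones $2\times 2$ matrix, three entries correspond to forest edges and get level $1$, while the remaining entry closes a $4$-cycle and gets level $2$.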
\medskip
To each entry $G_{ij}$ of level $f$ we assign a square submatrix
$G(ij)$ of $G$ all other entries in which are of levels at most
$f-1$. We do this in the following way. To entries corresponding
to edges of $\mathcal{F}$ we assign the $1\times 1$ matrices
containing these entries. For an entry $G_{ij}$ which does not
correspond to an edge in $\mathcal{F}$ we consider the
corresponding edge $\mathfrak{e}$ in $\mathcal{G}$ and the cycle
$\mathcal{C}$ formed by $\mathfrak{e}$ and edges of $\mathcal{F}$.
Then we consider the entries in $G$ corresponding to edges of
$\mathcal{C}$ and the minimal submatrix in $G$ containing all of
these entries. Now we consider all edges in $\mathcal{G}$
corresponding to non-zero entries of this submatrix. We choose and
fix in this set of edges a minimum-length cycle $\mathcal{M}$
containing $\mathfrak{e}$. We define $G(ij)$ as the minimal
submatrix of $G$ containing all entries corresponding to edges of
$\mathcal{M}$. It is easy to verify that:
\begin{itemize}
\item $G(ij)$ is a square submatrix of $G$. \item Non-zero entries
of $G(ij)$ are in one-to-one correspondence with entries of
$\mathcal{M}$. \item The expansion of the determinant of $G(ij)$
according to the definition contains exactly two non-zero terms.
\item All non-zero entries of $G(ij)$ except $G_{ij}$ have level
$\le f-1$.
\end{itemize}
\begin{lemma}\label{L:det0} Let $k<1/(2d+4d^2)$.
If $\varepsilon>0$ is small enough, then there exists a $d\times J$ matrix $\tilde G$ such that:
\smallskip
\noindent{\rm (1)} If some entry of $G$ is zero, the corresponding
entry of $\tilde G$ is also zero.
\smallskip
\noindent{\rm (2)} The entries of level $1$ of $\tilde G$ are the
same as for $G$;
\smallskip
\noindent{\rm (3)} All other non-zero entries of $\tilde G$ are
perturbations of entries of $G$ satisfying the following
conditions:
\begin{itemize}
\item[{\rm (a)}] If $G_{ij}$ is of level $f$, then $|G_{ij}-\tilde
G_{ij}|<\varphi_{d\cdot j_0-f+1}(\varepsilon)$.
\item[{\rm (b)}] For each non-zero entry $G_{ij}$ of level $\ge 2$
of $G$ the determinant of the submatrix $\tilde G(ij)$ of $\tilde
G$ corresponding to $G(ij)$ is zero.
\end{itemize}
\end{lemma}
\begin{proof}{Proof} Let $G_{ij}$ be an entry of level $f$. Since,
as was observed above, all entries of $G(ij)$ other than $G_{ij}$
have level $\le f-1$, we can prove the lemma by induction as follows.
(1) We let $\tilde G_{ij}=G_{ij}$ for all $G_{ij}$ of level one.
(2) Let $f\ge 2$. {\bf Induction hypothesis:} We assume that for
all entries $G_{ij}$ of levels $\ell(G_{ij})$ satisfying
$2\le\ell(G_{ij})\le f-1$ we have found perturbations $\tilde
G_{ij}$ satisfying
$$|G_{ij}-\tilde G_{ij}|\le \varphi_{d\cdot j_0-\ell(G_{ij})+1}(\varepsilon),$$
such that $\det(\tilde G(ij))=0$. (Note that this assumption is
vacuous if $f=2$.)
\medskip
{\bf Inductive step:} Let $G_{ij}$ be an entry of level $f$. If
$\varepsilon>0$ is small enough we can find a number $\tilde G_{ij}$ such
that $|\tilde G_{ij}-G_{ij}|\le\varphi_{d\cdot j_0-f+1}(\varepsilon)$ and
$\det(\tilde G(ij))=0$. Observe that by the induction hypothesis
and the observation that all other entries of $G(ij)$ have levels
$\le f-1$, all other entries of $\tilde G(ij)$ have already been
defined.
\medskip
So let $G_{ij}$ be an entry of level $f$, and $G(ij)$ be the
corresponding square submatrix. Renumbering rows and columns of
the matrix $G$ we may assume that the matrix $G(ij)$ looks like
the one sketched below for some $h\le f$.
$$G(ij)=\left(\begin{array}{ccccc}
a_1 & 0 & \dots & 0 & G_{ij}\\
b_1 & a_2 & \dots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \dots & a_{h-1} & 0\\
0 & 0 & \dots & b_{h-1} & a_h
\end{array}\right)$$
Therefore the matrix $G$ (possibly, after renumbering of columns
and rows) has the form
\begin{equation}\label{E:G}
\left(\begin{array}{ccccccccccccc} a_1 & 0 & \dots & 0 & G_{ij} &
0 & 0 & \dots & 0 & 0 & 1 & 0 &
\dots\\
b_1 & a_2 & \dots & 0 & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 1 &
\dots\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots &
\ddots & \vdots & \vdots &\vdots & \vdots & \ddots \\
0 & 0 & \dots & a_{h-1} & 0 & 0 & 0 & \dots & 0 & 0 & 0 & 0 &
\dots\\
0 & 0 & \dots & b_{h-1} & a_h & 0 & 0 & \dots & 0 & 0 & 0 & 0 &
\dots\\
* & * & \dots & * & * & 1 & 0 & \dots & 0 & 0 & 0 & 0 &
\dots\\
* & * & \dots & * & * & 0 & 1 & \dots & 0 & 0 & 0 & 0 &
\dots\\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots &
\ddots & \vdots & \vdots &\vdots & \vdots & \ddots \\
* & * & \dots & * & * & 0 & 0 & \dots & 1 & 0 & 0 & 0 &
\dots\\
* & * & \dots & * & * & 0 & 0 & \dots & 0 & 1 & 0 & 0 &
\dots
\end{array}\right)\end{equation}
We have assumed that we have already found entries $\{\tilde
a_n\}_{n=1}^h$ and $\{\tilde b_n\}_{n=1}^{h-1}$ of $\tilde G$
which are perturbations of $\{a_n\}_{n=1}^h$ and $\{
b_n\}_{n=1}^{h-1}$. The entries $1$ shown in (\ref{E:G}) are the only
non-zero entries in their columns, therefore the corresponding
edges of $\mathcal{G}$ should be in $\mathcal{F}$. Let us denote
the perturbation of $G_{ij}$ we are looking for by $\tilde
G_{ij}$. The condition (b) of Lemma \ref{L:det0} can be written as
\begin{equation}\label{E:tilde}\prod_{n=1}^h\tilde
a_n+(-1)^{h-1}\prod_{n=1}^{h-1}\tilde b_n\cdot \tilde
G_{ij}=0.\end{equation}
So it suffices to show that the number $\tilde G_{ij}$, found as the
solution of (\ref{E:tilde}), satisfies $|\tilde
G_{ij}-G_{ij}|<\varphi_{d\cdot j_0-f+1}(\varepsilon)$. To show this we
assume the contrary. Since there are finitely many possibilities
for $j_0$ and $f$, the contrary assumption can be described as the
existence of $j_0$ and $f$ such that there is a subset
$\Phi_3\subset(0,1)$, whose closure contains $0$, satisfying the
following condition:
For each $\varepsilon\in\Phi_3$ there is $Z\in\mathcal{Z}_\varepsilon$ such that
after proceeding with all steps of the construction we get: all
the conditions above are satisfied, but
\begin{equation}\label{E:dettilde}
\left|\prod_{n=1}^h\tilde a_n+(-1)^{h-1}\prod_{n=1}^{h-1}\tilde
b_n\cdot G_{ij}\right|>\varphi_{d\cdot
j_0-f+1}(\varepsilon)\prod_{n=1}^{h-1}|\tilde b_n|.
\end{equation}
We need to get from here an estimate for $|\det(G(ij))|$ from
below. To get it we observe that the inequality (\ref{E:dettilde})
is an estimate from below of the determinant of the matrix
$$G'(ij)=\left(\begin{array}{ccccc}
\tilde a_1 & 0 & \dots & 0 & G_{ij}\\
\tilde b_1 & \tilde a_2 & \dots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \dots & \tilde a_{h-1} & 0\\
0 & 0 & \dots & \tilde b_{h-1} & \tilde a_h
\end{array}\right).$$
To get from here an estimate for $\det(G(ij))$ from below we
observe the following: The $\ell_2$-norm of each column of
$G(ij)$ is $\le 1$, and the $\ell_2$-distance between a column of
$G(ij)$ and the corresponding column of $G'(ij)$ is at most
$2\varphi_{dj_0-f+2}(\varepsilon)$. Hence the $\ell_2$-norm of each column
of $G'(ij)$ is $\le 1+2\varphi_{dj_0-f+2}(\varepsilon)$. Applying Lemma
\ref{L:detperturb} $h$ times we get
$$|\det(G(ij))|\ge|\det(G'(ij))|-h\cdot
2\varphi_{dj_0-f+2}(\varepsilon)(1+2\varphi_{dj_0-f+2}(\varepsilon))^{h-1}.
$$
The induction hypothesis implies
$$|\tilde b_i|\ge\varphi_{d(j_0-1)}(\varepsilon)-\varphi_{dj_0-f+2}(\varepsilon).
$$
Combining this with the preceding estimates, we get
\begin{equation}\label{E:detG}\begin{split}
|\det(G(ij))|&\ge
\varphi_{dj_0-f+1}(\varepsilon)\cdot(\varphi_{d(j_0-1)}(\varepsilon)-\varphi_{dj_0-f+2}(\varepsilon))^{h-1}\\
&\quad-h\cdot
2\varphi_{dj_0-f+2}(\varepsilon)(1+2\varphi_{dj_0-f+2}(\varepsilon))^{h-1}.\end{split}
\end{equation}
Let us keep the notation $\{g_j\}_{j=1}^J$ for columns of the
matrix (\ref{E:G}). We consider the following six $d\times d$
minors of this matrix: the corresponding submatrices contain the
columns $\{g_2,\dots,g_{h-1},g_{h+1},\dots,g_d\}$, and two out of
the four columns $\{g_1,g_h,g_{d+1}, g_{d+2}\}$. Observe that
$g_{h+1}=e_{h+1},\dots,g_d=e_d,g_{d+1}=e_1, g_{d+2}=e_2$.
\medskip
The absolute values of the minors are equal to
\begin{equation}\label{E:minors}
|\det G(ij)|, ~\left|\prod_{n=2}^h a_n\right|,
~\left|\prod_{n=1}^{h-1} b_n\right|,
~|a_1|\cdot\left|\prod_{n=2}^{h-1} b_n\right|,
~\left|\prod_{n=2}^h b_n\right|, ~\left|\prod_{n=2}^{h-1}
b_n\right|.
\end{equation}
The first number in (\ref{E:minors}) was estimated in
(\ref{E:detG}). All other numbers are at least
$(\varphi_{d(j_0-1)}(\varepsilon))^{h-1}$; it is clear that this bound
exceeds the bound from (\ref{E:detG}).
\medskip
We are going to use Lemma \ref{L:six} with $\{x_1,\dots,x_{d-2}\}=
\{\mathfrak{N}(g_2),\dots,\mathfrak{N}(g_{h-1}),\mathfrak{N}(g_{h+1}),\dots$,
$\mathfrak{N}(g_d)\}$ and
$\{p_1,p_2,p_3,p_4\}=\{\mathfrak{N}(g_1),\mathfrak{N}(g_h),\mathfrak{N}(g_{d+1}),
\mathfrak{N}(g_{d+2})\}$. (Recall that $\mathfrak{N}(z)=z/||z||$.)
Our definitions imply that $||\mathfrak{b}_j||=1$ and $||g_j||\le
1$, because $g_j$ is obtained from $\mathfrak{b}_j$ by replacing
some of the coordinates by zeros. Hence the inequality
(\ref{E:detG}) and the remark above on the numbers
(\ref{E:minors}) imply that the condition (\ref{E:detge}) is
satisfied with
\begin{equation}\label{E:chi}\begin{split} \chi(\varepsilon)&=
\varphi_{dj_0-f+1}(\varepsilon)\cdot(\varphi_{d(j_0-1)}(\varepsilon)-\varphi_{dj_0-f+2}(\varepsilon))^{h-1}\\
&\quad-h\cdot
2\varphi_{dj_0-f+2}(\varepsilon)(1+2\varphi_{dj_0-f+2}(\varepsilon))^{h-1}.
\end{split}
\end{equation}
The inequality (\ref{E:gx}), the inclusion
$\mathfrak{b}_j\in{\check\Omega}(\omega(\varepsilon),\delta(\varepsilon))$ and (\ref{E:Obs2})
imply that the condition (\ref{E:measure}) is satisfied with
$\pi(\varepsilon)=2d\cdot\varphi_{dj_0}(\varepsilon)+C_5(d)\omega(\varepsilon)$ and
$\sigma(\varepsilon)=c_3(d)\delta(\varepsilon)$. So it remains to show that the
condition (\ref{E:phi2}) implies that the conditions {\bf (2)} and
{\bf (3)} of Lemma \ref{L:six} are satisfied.
By (\ref{E:phi2}), (\ref{E:chi}), the inequality $2\le h\le f\le
d$, and the trivial observation that all functions
$\varphi_\alpha(\varepsilon)$ do not exceed $1$ for $0\le \varepsilon\le 1$, we
have
\begin{equation}\label{E:Ochi}(\varphi_{dj_0-f+1}(\varepsilon))^d=O(\chi(\varepsilon)).
\end{equation}
Now we verify the condition {\bf (3)} of Lemma \ref{L:six}. The
part (b) can be verified as follows. The conditions (\ref{E:phi1})
and (\ref{E:phi2}), together with $f\ge 2$ and
$\omega(\varepsilon)=\varepsilon^{4k}$, imply that
$\pi(\varepsilon)=O(\varphi_{dj_0}(\varepsilon))=o((\varphi_{dj_0-f+1}(\varepsilon))^d)=o(\chi(\varepsilon))$.
To verify the condition {\bf (2)} of Lemma \ref{L:six} it suffices
to observe that (\ref{E:Ochi}) and (\ref{E:phi1}) imply
$(\rho(\varepsilon))^d=O(\chi(\varepsilon))$. Hence {\bf (2)} is satisfied if
$2dk+4d^2k<1$. This inequality is among the conditions of Lemma
\ref{L:det0}. Hence we can apply Lemma \ref{L:six} and get the
conclusion of Lemma \ref{L:det0}.
\end{proof}
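The inductive step above amounts to solving a one-variable linear equation: in the sketched matrix $G(ij)$, the determinant expansion has exactly two non-zero terms, so a single corner entry can always be adjusted to make the determinant vanish. A minimal numerical illustration (the helper names are ours; \texttt{a} and \texttt{b} play the roles of $\{\tilde a_n\}$ and $\{\tilde b_n\}$):

```python
def cycle_matrix(a, b, g):
    """h-by-h matrix with diagonal a[0..h-1], subdiagonal b[0..h-2],
    and g in the upper-right corner, as in the sketch of G(ij)."""
    h = len(a)
    M = [[0.0] * h for _ in range(h)]
    for n in range(h):
        M[n][n] = a[n]
    for n in range(h - 1):
        M[n + 1][n] = b[n]
    M[0][h - 1] = g
    return M

def zeroing_corner(a, b):
    """Corner value g solving prod(a) + (-1)**(h-1) * prod(b) * g = 0,
    i.e. making det(cycle_matrix(a, b, g)) vanish (assumes h >= 2)."""
    h = len(a)
    pa = pb = 1.0
    for x in a:
        pa *= x
    for x in b:
        pb *= x
    return -pa / ((-1.0) ** (h - 1) * pb)
```

For $h=3$ the expansion is $a_1a_2a_3 + g\,b_1b_2$; with $a=(2,3,4)$ and $b=(1,2)$ the zeroing value is $g=-12$.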
Now let $\tilde G$ be an approximation of $G$ by a matrix
satisfying the conditions of Lemma \ref{L:det0}. We use the same
maximal forest $\mathcal{F}$ in $\mathcal{G}$ as above. It is
easy to show (and the corresponding result is well known in the
theory of matroids, see, for example, \cite[Theorem 6.4.7]{Oxl92})
that multiplying columns and rows of $\tilde G$ by positive
numbers we can make entries corresponding to edges of
$\mathcal{F}$ to be equal to $\pm 1$. Denote the obtained matrix
by $\widehat G$.
\begin{lemma}\label{L:pm1} If $\tilde G$ satisfies the conditions
of Lemma {\rm \ref{L:det0}}, then $\widehat G$ is a matrix with
entries $-1,0$, and $1$.
\end{lemma}
\begin{proof}{Proof} Assume the contrary, that is, there are
entries $\widehat G_{ij}$ which are not in the set $\{-1,0,1\}$.
Let $\widehat G_{ij}$ be one of such entries satisfying the
additional condition: the level $\ell(G_{ij})$ is the minimal
possible among all entries $\widehat G_{ij}$ which are not in
$\{-1,0,1\}$. Denote by $\widehat{G}(ij)$ the submatrix of
$\widehat G$ which corresponds to $G(ij)$.
Then, by observations preceding Lemma \ref{L:det0}, the expansion
of $\det\widehat{G}(ij)$ contains two non-zero terms: one of them
is $1$ or $-1$, the other is $\widehat{G}_{ij}$ or
$-\widehat{G}_{ij}$. Our assumptions imply that
$\det\widehat{G}(ij)\ne 0$. This contradicts
$\det\tilde{G}(ij)=0$, because $\widehat{G}$ is obtained from
$\tilde G$ using multiplications of columns and rows by numbers.
\end{proof}
In Lemma \ref{L:TU} we show that for functions
$\varphi_\alpha(\varepsilon)$ chosen as above, the matrix $\widehat G$
should be totally unimodular for sufficiently small $\varepsilon$. In
Lemma \ref{L:BM} we show how to estimate the Banach--Mazur
distance between $Z$ and $\mathcal{T}_d$ in the case when
$\widehat G$ is totally unimodular.
\begin{lemma}\label{L:TU} If $\varepsilon>0$ is small enough, the matrix
$\widehat G$ is totally unimodular.
\end{lemma}
\begin{proof}{Proof} The conclusion of Lemma \ref{L:det0} implies
that each entry of $\tilde G$ is a
$\varphi_{d(j_0-1)+1}(\varepsilon)$-approximation of an entry from $G$.
Therefore for small $\varepsilon$ the absolute value of each non-zero
entry of $\tilde G$ is at least $\varphi_{d(j_0-1)}(\varepsilon)/2$. This
implies the following observation.
\medskip
\noindent{\bf Observation.} Each $d\times d$ minor of $\tilde G$
is a product of the corresponding minor of $\widehat G$ and a
number $\zeta$ satisfying
$(\varphi_{d(j_0-1)}(\varepsilon)/2)^d\le\zeta\le 1$.
\begin{proof}{Proof} Consider a square submatrix
$\tilde{\mathcal{S}}$ in $\tilde G$ and the corresponding
submatrix $\widehat{\mathcal{S}}$ in $\widehat G$. If the
corresponding minor is zero, there is nothing to prove. If it is
non-zero, we reorder columns and rows of $\tilde{\mathcal{S}}$ in
such a way that all entries on the diagonal become non-zero, and
do the same reordering with $\widehat{\mathcal{S}}$. Let
$\mathfrak{r}_i,\mathfrak{c}_j>0$ be such that after multiplying
rows of $\widehat{\mathcal{S}}$ by $\mathfrak{r}_i$ and columns of
the resulting matrix by $\mathfrak{c}_j$ we get
$\tilde{\mathcal{S}}$. Then
$$\det(\tilde{\mathcal{S}})=\det(\widehat{\mathcal{S}})\prod_i\mathfrak{r}_i\prod_j\mathfrak{c}_j.$$
On the other hand,
$\mathfrak{r}_i\mathfrak{c}_i\ge\varphi_{d(j_0-1)}(\varepsilon)/2$,
because the diagonal entry of $\widehat{\mathcal{S}}$ is $\pm1$,
and the absolute value of the diagonal entry of
$\tilde{\mathcal{S}}$ is $\ge\varphi_{d(j_0-1)}(\varepsilon)/2$. The
conclusion follows.
\end{proof}
\begin{lemma}\label{L:Gomory} Let $D$ be a $d\times J$ matrix with entries $-1,0$, and $1$, containing a $d\times d$ identity
submatrix. If $D$ is not totally unimodular, then it contains
$(d+2)$ columns $\{\widehat x_1,\dots,\widehat x_{d-2}$, $\widehat
p_1,\widehat p_2,\widehat p_3, \widehat p_4\}$ such that for all
six choices of two vectors from the set $\{\widehat p_1,\widehat
p_2$, $\widehat p_3,\widehat p_4\}$ minors obtained by joining
them to $\{\widehat x_1,\dots,\widehat x_{d-2}\}$ are non-zero.
\end{lemma}
\begin{proof}{Proof} Our argument follows
\cite[pp.~1068--1069]{Cam65} (see, also,
\cite[pp.~269--271]{Sch86}), where a similar statement is
attributed to R.~Gomory.
\medskip
Suppose that $D$ is not totally unimodular, then it has a square
submatrix $\mathcal{S}$ with $|\det(\mathcal{S})|\ge 2$. Let
$\mathcal{S}$ be of size $h\times h$. Reordering columns and rows
of $D$ (if necessary), we may assume that $D$ is of the form:
$$D=\left(\begin{array}{cccc}\mathcal{S} & 0 & I_h & *\\
* & I_{d-h} & 0 & *\end{array}\right),
$$
where $I_h$ and $I_{d-h}$ are identity matrices of sizes $h\times
h$ and $(d-h)\times (d-h)$, respectively, $0$ denote matrices with
zero entries of the corresponding dimensions, and $*$ denote
matrices of the corresponding dimensions with unspecified entries.
\medskip
We consider all matrices which can be obtained from $D$ by a
sequence of the following operations:
\begin{itemize}
\item Addition or subtraction of a row to or from another row, \item
Multiplication of a column by $-1$, \end{itemize} provided that
after each such operation we get a matrix with entries $-1,0$, and
$1$.
Among all matrices obtained from $D$ in such a way we select a
matrix $\widehat D$ which satisfies the following conditions:
\begin{itemize}
\item[(1)] Has all unit vectors among its columns; \item[(2)] Has
the maximal possible number $\xi$ of unit vectors among the first
$d$ columns.
\end{itemize}
Observe that $\xi<d$ because the operations listed above preserve
the absolute value of the determinant and at the beginning the
absolute value of the determinant formed by the first $d$ columns
was $\ge 2$. Let $d_r$ be one of the first $d$ columns of
$\widehat D$ which is not a unit vector. Let $\{i_1,\dots,i_t\}$
be indices of its non-zero coordinates. Then at least one of the
unit vectors $e_{i_1},\dots,e_{i_t}$ is not among the first $d$
columns of $\widehat D$ (the first $d$ columns of $\widehat D$ are
linearly independent). Assume that $e_{i_1}$ is not among the
first $d$ columns of $\widehat D$. We can try to transform
$\widehat D$, by adding/subtracting the row number $i_1$ to/from the
rows number $i_2,\dots, i_t$ (and multiplying the column number $r$ by
$(-1)$, if necessary), into a new matrix $\tilde D$ which satisfies
the following conditions:
\begin{itemize}
\item Has among the first $d$ columns all the unit vectors it had
before; \item Has $e_{i_1}$ as its column number $r$; \item Has
all the unit vectors among its columns.
\end{itemize}
It is not difficult to verify that the only possible obstacle is
that there exists another column $d_t$ in $\widehat D$, such that
for some $s\in\{2,\dots,t\}$
\begin{equation}\label{E:D}\left|\det\left(\begin{array}{cc} D_{i_1r} &
D_{i_1t}\\D_{i_sr} &
D_{i_st}\end{array}\right)\right|=2,\end{equation} where by
$D_{ij}$ we denote entries of $\widehat D$. By the maximality of
$\xi$, this transformation must fail, so a submatrix satisfying
(\ref{E:D}) exists.
It is easy to see that letting $\{\widehat p_1,\widehat
p_2,\widehat p_3, \widehat p_4\}=\{d_r,d_t,e_{i_1},e_{i_s}\}$, and
$\{\widehat x_1,\dots,\widehat
x_{d-2}\}=\{e_1,\dots,e_d\}\backslash\{e_{i_1},e_{i_s}\}$, we get a
set of columns of $\widehat D$ satisfying the required condition.
Since the operations listed above preserve the absolute values of
$d\times d$ minors, the corresponding columns of $D$ form the
desired set.
\end{proof}
\begin{remark}{Remark} Lemma \ref{L:Gomory} can also be obtained by
combining known characterizations of regular and binary matroids,
see \cite{Oxl92} (we mean, first of all, Theorem 9.1.5, Theorem
6.6.3, Corollary 10.1.4, and Proposition 3.2.6).
\end{remark}
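For small matrices, total unimodularity can be checked directly from the definition, which gives a convenient sanity check for statements of this kind. A brute-force sketch, exponential in the matrix size and intended only for tiny examples:

```python
from itertools import combinations

def is_totally_unimodular(M):
    """Check that every square minor of M lies in {-1, 0, 1}."""
    d = len(M)
    J = len(M[0])
    def det(rows, cols):
        # Laplace expansion along the first listed row
        if len(rows) == 1:
            return M[rows[0]][cols[0]]
        return sum((-1) ** k * M[rows[0]][cols[k]]
                   * det(rows[1:], cols[:k] + cols[k + 1:])
                   for k in range(len(cols)))
    for h in range(1, min(d, J) + 1):
        for rows in combinations(range(d), h):
            for cols in combinations(range(J), h):
                if det(list(rows), list(cols)) not in (-1, 0, 1):
                    return False
    return True
```

For instance, the interval matrix with rows $(1,1,0)$ and $(0,1,1)$ passes the check, while $\left(\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}\right)$ fails, since its determinant is $2$.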
We continue our proof of Lemma \ref{L:TU}. Assume the contrary.
Since there are finitely many possible values of $j_0$, there is
$j_0$ and a subset $\Phi_4\subset(0,1)$, whose closure contains
$0$, satisfying the condition:
For each $\varepsilon\in\Phi_4$ there is $Z\in\mathcal{Z}_\varepsilon$ such that
following the construction, we get the preselected value of $j_0$,
and the obtained matrix $\widehat G$ is not totally unimodular.
\medskip
Since the entries of $\widehat G$ are integers, the absolute
values of the non-zero minors provided by Lemma \ref{L:Gomory} are
at least one. We are going to show that
the corresponding minors of $G$ are also `sufficiently large', and
get a contradiction using Lemma \ref{L:six}.
\medskip
By the observation above the corresponding minors of $\tilde G$
are at least $(\varphi_{d(j_0-1)}(\varepsilon)/2)^d$. The Euclidean norm
of a column in $\tilde G$ is at most
$1+d\varphi_{d(j_0-1)+1}(\varepsilon)$. Applying Lemma \ref{L:detperturb}
$d$ times we get that the corresponding minors of $G$ are at least
$$(\varphi_{d(j_0-1)}(\varepsilon)/2)^d-d^2\varphi_{d(j_0-1)+1}(\varepsilon)\cdot(1+d\varphi_{d(j_0-1)+1}(\varepsilon))^{d-1}.$$
We are going to use Lemma \ref{L:six} for $x_1,\dots,x_{d-2},
p_1,p_2,p_3,p_4$ defined in the following way. Let $\check
x_1,\dots,\check x_{d-2},\check p_1,\check p_2,\check p_3,\check
p_4$ be the columns of $G$ corresponding to the columns $\widehat
x_1,\dots,\widehat x_{d-2}, \widehat p_1,\widehat p_2,\widehat
p_3, \widehat p_4$ of $\widehat G$, and $x_1,\dots,x_{d-2},
p_1,p_2,p_3,p_4$ be their normalizations (that is, $x_1=\check
x_1/||\check x_1||$, etc). Since norms of columns of $G$ are $\le
1$, the condition (\ref{E:detge}) of Lemma \ref{L:six} is
satisfied with
$$\chi(\varepsilon)=(\varphi_{d(j_0-1)}(\varepsilon)/2)^d-d^2\varphi_{d(j_0-1)+1}(\varepsilon)\cdot(1+d\varphi_{d(j_0-1)+1}(\varepsilon))^{d-1}.$$
Now we recall that columns $\{g_j\}$ of $G$ satisfy (\ref{E:gx})
for some vectors $\mathfrak{b}_j\in{\check\Omega}(\omega(\varepsilon), \delta(\varepsilon))$.
Hence the distance from $x_1,\dots,x_{d-2}, p_1,p_2,p_3,p_4$ to
the corresponding vectors $\mathfrak{b}_j$ is $\le
2d\varphi_{dj_0}(\varepsilon)$. By (\ref{E:Obs2}) the condition
(\ref{E:measure}) is satisfied with
$$\pi(\varepsilon)=2d\varphi_{dj_0}(\varepsilon)+C_5(d)\omega(\varepsilon)$$
and
$$\sigma(\varepsilon)=c_3(d)\delta(\varepsilon).$$
The fact that the conditions {\bf (2)} and {\bf (3)} of Lemma
\ref{L:six} are satisfied is verified in the same way as at the
end of the proof of Lemma \ref{L:det0}; the only difference is that
instead of (\ref{E:Ochi}) we have
$(\varphi_{d(j_0-1)}(\varepsilon))^d=O(\chi(\varepsilon))$.
This does not affect the rest of the argument. Therefore, under
the same condition on $k$ as in Lemma \ref{L:det0}, Lemma
\ref{L:six} yields a contradiction, and hence $\widehat G$ must be
totally unimodular if $\varepsilon>0$ is small enough.
\end{proof}
\begin{lemma}\label{L:BM} If $\widehat G$ is totally unimodular,
then there exists a zonotope $T\in\mathcal{T}_d$ such that
$$d(Z,T)\le\mathfrak{t}_d(\varepsilon),$$
where $\mathfrak{t}_d(\varepsilon)$ is a function satisfying
$\lim_{\varepsilon\downarrow 0}\mathfrak{t}_d(\varepsilon)=1$.
\end{lemma}
\begin{proof}{Proof} Observe that the matrix $\tilde G$ can be
obtained from $\widehat G$ using multiplications of rows and
columns by positive numbers. Hence, re-scaling the basis
$\{e_i\}$, if necessary, we get: columns of $\tilde G$ with
respect to the re-scaled basis are of the form $a_i\tau_i$, where
$\tau_i$ are columns of a totally unimodular matrix (see the
definition of $\mathcal{T}_d$ in the introduction).
We are going to approximate the measure ${\check\mu}$ by a measure
$\widehat{\mu}$ supported on vectors which are normalized columns
of $\tilde G$. Recall that ${\check\mu}$ is supported on a finite subset
of $\check S$.
The approximation is constructed in the following way. We discard
the part of the measure ${\check\mu}$ supported outside
$({\check\Omega}(\omega(\varepsilon),\delta(\varepsilon)))_{C_3(d)\omega(\varepsilon)}$. The total
mass of the measure erased in this way is small by (\ref{E:Obs1}).
As for the measure supported on
$\mathcal{B}:=({\check\Omega}(\omega(\varepsilon),\delta(\varepsilon)))_{C_3(d)\omega(\varepsilon)}$,
we approximate each atom of it by the atom of the same mass
supported on the nearest normalized column of $\tilde G$. We
denote by $\mathcal{A}(z)$ the normalized column of $\tilde G$
nearest to $z\in{\rm supp}\hskip0.02cm\check\mu$. If there are several such
columns, we choose one of them.\medskip
Now we estimate the distance from a point of
$({\check\Omega}(\omega(\varepsilon),\delta(\varepsilon)))_{C_3(d)\omega(\varepsilon)}$ to the
nearest normalized column of $\tilde G$. The distance from this
point to ${\check\Omega}(\omega(\varepsilon),\delta(\varepsilon))$ is at most ${C_3(d)\omega(\varepsilon)}$,
the distance from a point from ${\check\Omega}(\omega(\varepsilon),\delta(\varepsilon))$ to
the point from $\Theta(\omega(\varepsilon),\delta(\varepsilon))$ with the same top
set (or its opposite), by Lemma \ref{L:smalldistance}, can be
estimated from above by
$\sqrt{2\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}+4d\rho(\varepsilon)^2}$. The
distance from a point in $\Theta(\omega(\varepsilon),\delta(\varepsilon))$ to the
corresponding column of $G$ is estimated in (\ref{E:gx}): it is $\le
d\cdot\varphi_{dj_0}(\varepsilon)$, hence $\le
d\cdot\varphi_{1}(\varepsilon)$; and the distance from a column of $G$ to
the corresponding column of $\tilde G$ is $\le
d\cdot\varphi_{d(j_0-1)+1}(\varepsilon)\le d\cdot\varphi_1(\varepsilon)$. Since we
have to normalize this vector, the total distance from a point of
$({\check\Omega}(\omega(\varepsilon),\delta(\varepsilon)))_{C_3(d)\omega(\varepsilon)}$ to the
nearest normalized column of $\tilde G$ can be estimated from
above by
$${C_3(d)\omega(\varepsilon)}+\sqrt{2\frac{\nu(\varepsilon)}{(\rho(\varepsilon))^2}+4d\rho(\varepsilon)^2}+4
d\cdot\varphi_{1}(\varepsilon).$$ It is clear that this function, let us
denote it by $\zeta(\varepsilon)$, tends to $0$ as $\varepsilon\downarrow 0$;
recall that $\rho(\varepsilon)=\varepsilon^k$, $\nu(\varepsilon)=\varepsilon^{3k}$,
$\omega(\varepsilon)=\varepsilon^{4k}$, and
$\displaystyle{\varphi_1(\varepsilon)=\varepsilon^{k\left(\frac1{d+1}\right)^{d\mathfrak{n}-1}}}$.
The obtained measure corresponds to a zonotope from
$\mathcal{T}_d$. Let us denote this zonotope by $T$.
\medskip
Since the dual norms to the gauge functions of $Z$ and $T$ are
their support functions, we get the estimate
$$d(T,Z)\le\sup_{u\in\check S}\frac{\check h_Z(u)}{\check h_T(u)}\cdot
\sup_{u\in\check S}\frac{\check h_T(u)}{\check h_Z(u)}.$$
So it is enough to show that
\begin{equation}\label{E:C1C2}
C_1(d,\varepsilon)\le \frac{\check h_T(u)}{\check h_Z(u)}\le
C_2(d,\varepsilon),\end{equation} where $\lim_{\varepsilon\downarrow
0}C_1(d,\varepsilon)=\lim_{\varepsilon\downarrow 0}C_2(d,\varepsilon)=1$. \medskip
Observe that Lemma \ref{L:bases} implies that there exists a
constant $0<C_7(d)<\infty$ such that
\begin{equation}\label{E:C7}C_7(d)\le \check h_Z(u),~~ \forall
u\in\check S.\end{equation}
We have
\begin{align*}\begin{split} \check h_Z(u)&=\int_{\check S}|\langle
u,z\rangle|d{\check\mu}(z)\le \int_{\check S\backslash
\mathcal{B}}|\langle
u,z\rangle|d{\check\mu}(z)\\
&\quad+ \int_{\check S} |\langle
u,z\rangle|d\widehat{\mu}(z)+\sum_{z\in{\rm supp}\hskip0.02cm\check \mu\cap
\mathcal{B}}\left(|\langle u,z\rangle-\langle
u,\mathcal{A}(z)\rangle|\right)\check\mu(z)
\\&
\le C_4(d)\frac{\delta(\varepsilon)}{\omega^{d-1}(\varepsilon)}+\check
h_T(u)+\zeta(\varepsilon){\check\mu}(\check S), ~~ \forall u\in\check S.
\end{split}\end{align*}
In a similar way we get
\begin{align*}\begin{split} \check h_T(u)&=\int_{\check S}|\langle
u,z\rangle|d\widehat{\mu}(z)\le \int_{\mathcal{B}} |\langle
u,z\rangle|d{\check\mu}(z)\\
&\quad+\sum_{z\in{\rm supp}\hskip0.02cm\check \mu\cap\mathcal{B}}\left(|\langle
u,z\rangle-\langle u,\mathcal{A}(z)\rangle|\right)\check\mu(z)
\\&
\le \check h_Z(u)+\zeta(\varepsilon){\check\mu}(\check S), ~~ \forall u\in\check S.
\end{split}\end{align*}
Using (\ref{E:C7}) we get
$$1-\frac{C_4(d)\frac{\delta(\varepsilon)}{\omega^{d-1}(\varepsilon)}}{C_7(d)}-\frac{\zeta(\varepsilon){\check\mu}(\check
S)}{C_7(d)}\le\frac{\check h_T(u)}{\check h_Z(u)}\le
1+\frac{\zeta(\varepsilon){\check\mu}(\check S)}{C_7(d)}.$$ This is an estimate of
the form (\ref{E:C1C2}), as required.
\end{proof}
It is clear that Lemma \ref{L:BM} completes our proof of Lemma
\ref{L:APPROX}.
\end{proof}
\section{Proof of Theorem \ref{T:NP}}
\begin{proof} We start by proving Theorem \ref{T:NP} for polyhedral
$X$. In this case we can consider $X$ as a subspace of
$\ell_\infty^m$ for some $m\in {\bf N}$. Since $X$ has an MVSE
which is not a parallelepiped, there exists a linear projection
$P:\ell_\infty^m \to X$ such that $P(B_\infty^m)$ has the minimal
possible volume, but $P(B_\infty^m)$ is not a parallelepiped. Let
$d=\dim X$, let $\{q_1,\dots,q_{m-d}\}$ be an orthonormal basis in
$\ker P$ and let $\{\tilde q_1,\dots,\tilde q_d\}$ be an
orthonormal basis in the orthogonal complement of $\ker P$. As
was shown in Lemma \ref{L:shape}, $P(B_\infty^m)$ is linearly
equivalent to the zonotope spanned by the rows of $\tilde Q=[\tilde
q_1,\dots,\tilde q_d]$. By the assumption this zonotope is not a
parallelepiped. It is easy to see that this assumption is
equivalent to the following: there exists a minimal linearly dependent
collection of rows of $\tilde Q$ containing $\ge 3$ rows. This
condition implies that we can reorder the coordinates in
$\ell_\infty^m$ and multiply the matrix $\tilde Q$ from the right
by an invertible $d\times d$ matrix $C_1$ in such a way that
$\tilde QC_1$ has a submatrix of the form
$$\left(
\begin{array}{cccc}
1 & 0 & \dots & 0\\
0 & 1 & \dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & 1\\
a_1 & a_2 & \dots & a_d
\end{array}\right),
$$
where $a_1\ne 0$ and $a_2\ne 0$. Let $\mathcal{X}$ be a matrix
whose columns form a basis of $X$. The argument of \cite{laa} (see
the conditions (1)--(3) on p.~96) implies that $\mathcal{X}$ can
be multiplied from the right by an invertible $d\times d$ matrix
$C_2$ in such a way that $\mathcal{X}C_2$ is of the form
$$\left(
\begin{array}{cccc}
1 & 0 & \dots & 0\\
0 & 1 & \dots & 0\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \dots & 1\\
{\rm sign}\hskip0.02cm a_1 & {\rm sign}\hskip0.02cm a_2 & \dots & *\\
\vdots & \vdots & \ddots & \vdots
\end{array}\right),
$$
where at the top there is a $d\times d$ identity matrix, and all
minors of the matrix $\mathcal{X}C_2$ have absolute values $\le
1$.
\medskip
Changing signs of the first two columns, if necessary, we get that
the subspace $X\subset\ell_\infty^m$ is spanned by columns of the
matrix
\begin{equation}\label{Z}
\left(
\begin{array}{ccccc}
\pm1 & 0 & 0 &\dots & 0\\
0 & \pm1 & 0 &\dots & 0\\
0 & 0 & 1 &\dots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
0 & 0 & 0 &\dots & 1\\
1 & 1 & * & \dots & *\\
b_1 & c_1 & * & \dots & *\\
b_2 & c_2 & * & \dots & *\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
b_{m-l-1} & c_{m-l-1} & * & \dots & *
\end{array}\right).
\end{equation}
The condition on the minors implies that $|b_i|\le 1$, $|c_i|\le
1$, and $|b_i-c_i|\le 1$ for each $i$. Therefore the subspace
spanned in $\ell_\infty^m$ by the first two columns of the matrix
(\ref{Z}) is isometric to ${\bf R}^2$ with the norm
$$||(\alpha, \beta)||=\max(|\alpha|, |\beta|, |\alpha+\beta|).$$
It is easy to see that the unit ball of this space is linearly
equivalent to a regular hexagon. Thus, Theorem \ref{T:NP} is
proved in the case when $X$ is polyhedral.
\medskip
To prove the result for a general, not necessarily polyhedral,
space, which we shall denote by $Y$, we use Theorem \ref{T:MVSE}.
Actually we need only the following corollary of it: {\it Each
MVSE is a polyhedron.} Therefore we can apply the following result
to each MVSE.
\begin{lemma}\label{T:polyhedral} {\rm\cite[Lemma 1]{Ost04}}
Let $Y$ be a finite dimensional space and let $A$ be a polyhedral
MVSE for $Y$. Then there exists another norm on $Y$ such that the
obtained normed space $X$ satisfies the conditions:
\medskip
\noindent{\rm (1)} $X$ is polyhedral;
\noindent{\rm (2)} $B_X\supset B_Y$;
\noindent{\rm (3)} $A$ is an MVSE for $X$.
\end{lemma}
So we consider the space $Y$ as being embedded into a polyhedral
space $X$ with the embedding satisfying the conditions of Lemma
\ref{T:polyhedral}. By the first part of the proof the space $X$
satisfies the conditions of Theorem \ref{T:NP} and we may assume
that $X$ is a subspace of $\ell_\infty^m$ in the way described in the
first part of the proof. So $X$ is spanned by the columns of the
matrix (\ref{Z}) in $\ell_\infty^m$; we denote these columns by
$e_1, \dots, e_d$. It is easy to see that to finish the proof it is
enough to show that the vectors $e_1$, $e_2$, $e_1-e_2$ are in
$B_Y$.
\medskip
It turns out that each of these points is the center of a facet of a
minimum-volume parallelepiped containing $B_X$. In fact, let
$\{f_i\}_{i=1}^m$ be the unit vector basis of $\ell_\infty^m$. Let
$P_1$ and $P_2$ be the projections onto $Y$ with the kernels
${\rm lin}\hskip0.02cm\{f_{d+1},\dots,f_m\}$ and ${\rm lin}\hskip0.02cm\{f_1,f_{d+2},\dots,f_m\}$,
respectively (recall that $Y$, as a linear space, coincides with
$X$). The analysis from \cite[pp.~318--319]{jfa} shows that
$P_1(B_\infty^m)$ and $P_2(B_\infty^m)$ have the minimal possible
volume among all linear projections of $B_\infty^m$ into $X$. It
is easy to see that $P_1(B_\infty^m)$ and $P_2(B_\infty^m)$ are
parallelepipeds.
\medskip
We show that $e_1$, $e_2$ are centers of facets of
$P_1(B_\infty^m)$, and that $e_1-e_2$ is the center of a facet of
$P_2(B_\infty^m)$. In fact, the centers of facets of
$P_1(B_\infty^m)$ coincide with $P_1(f_1),\dots,P_1(f_d)$, and it
is easy to check that $P_1(f_i)=e_i$ for $i=1,\dots,d$. As for
$P_2$, we observe that
$e_1-e_2\in{\rm lin}\hskip0.02cm\{f_1,f_2,f_{d+2},\dots,f_m\}$, and the coefficient
near $f_2$ in the expansion of $e_1-e_2$ is $\pm 1$. Therefore
$P_2(f_2)=\pm(e_1-e_2)$.
\medskip
Since the projections $P_1$ and $P_2$ satisfy the minimality
condition from \cite[Lemma 1]{laa} (see, also
\cite[pp.~318--319]{jfa}), the parallelepipeds $P_1(B_\infty^m)$
and $P_2(B_\infty^m)$ are MVSE for $X$. Hence, by the conditions
of Lemma \ref{T:polyhedral}, they are MVSE for $Y$ also. Hence,
they are minimum-volume parallelepipeds containing $B_Y$. On the
other hand, it is known (see \cite[Lemma 3$\cdot$1]{PS}) that the
centers of facets of minimal-volume parallelepipeds containing
$B_Y$ must belong to $B_Y$; hence $e_1,e_2,e_1-e_2\in B_Y$. The
theorem follows.
\end{proof}
I would like to thank Gideon Schechtman for drawing my attention
to the fact that the class ${\cal T}_d$ was studied in works on lattice
tiles.
\end{large}
\section{Introduction} \label{intro}
Let $\mathfrak{Q}_D$ be the set of all quadratic functions $Q=ax^2+bx+c=[a,b,c]$ with integer coefficients and of discriminant $D=b^2-4ac>0$. For an even positive integer $k \geq 2$, Zagier \cite{Zagier} defines the function $F_{k,D}: \mathbb{R} \to \mathbb{R}$ by
\begin{equation*}
\displaystyle F_{k,D}(x):=\sum_{\substack{Q \in \mathfrak{Q}_D \\a<0<Q(x)}}Q(x)^{k-1}
\end{equation*}
and investigates its striking properties. The construction raises an obvious question: what happens if $k$ is odd?
The function $ F_{k,D}(x)$ then fails to have all these properties.
In \cite[Section 9]{Zagier}, Zagier explains how one can gain the extra freedom and allow $k$ to be odd:
he suggests considering a symmetrization
\[
F_{k,\mathcal A}(x):=\sum_{\substack{Q \in \mathcal{A}\\a<0<Q(x)}}Q(x)^{k-1} + (-1)^k\sum_{\substack{Q \in \mathcal{-A}\\a<0<Q(x)}}Q(x)^{k-1}
\]
where the summation is restricted to quadratic forms in one equivalence class $\mathcal {A} \subset \mathfrak{Q}_D$ which is an
orbit in $\mathfrak{Q}_D$ under the action of $PSL_2(\Z)$, and
\[
-\mathcal{A} = \left\{-Q \ \vert \ Q \in \mathcal{A}\right\}.
\]
However, restricting to one class $\mathcal A$ does not allow for a generalization to odd $k$ of one of the important properties of
$F_{k,D}(x)$ which is discussed in \cite[Section 14]{Zagier}. Namely, one can define a constant
$F_{k,0}$ such that
for every $x$, for even $k\geq 2$, the generating function $F_{k,0}+ \sum_D F_{k,D}(x) q^D$, where the sum is taken over all discriminants $D>0$, is the $q$-expansion of
a modular form of weight $k+1/2$ in Kohnen's $+$-space. The functions $F_{k,D}(x)$ are $1$-periodic, and their average values are calculated by Zagier in \cite[Section 8]{Zagier}. These are, up to a common multiple, $q$-expansion coefficients of H. Cohen's Eisenstein series. In order to state the result of this calculation, we
denote by ${\mathcal H}_k(\tau)$ the weight $k+1/2$ Eisenstein series on $\Gamma_0(4)$ introduced by H. Cohen in \cite{Cohen}:
\[
{\mathcal H}_k(\tau)=\zeta(1-2k)+\sum_{(-1)^kD > 0} H(k,|D|) q^{|D|} \hspace{3mm} \textup{with $q=\exp(2 \pi i \tau)$ and $\Im(\tau)>0$ throughout.}
\]
The summation runs over discriminants $D$ such that $(-1)^kD > 0$, and the $H(k,|D|)$ are Cohen's numbers.
These are essentially the values at negative integers of the Dirichlet $L$-function of the
quadratic character associated with the field extension $\Q(\sqrt{D})/\Q$.
We refer to \cite{Cohen} for the definition of $H(k,D)$ and do not duplicate Cohen's definition in this paper.
The result of Zagier's calculation in \cite[Section 8]{Zagier} can now be stated as the identity
\begin{equation} \label{zc}
\frac{\zeta(1-2k)}{\zeta(1-k) }\left(\frac{1}{2}\zeta(1-k)+\sum_{D>0} \int_0^1 F_{k,D}(x) dx \ q^D\right)= \frac{1}{2} {\mathcal H}_k(\tau)
\end{equation}
which holds for even $k \geq 2$.
In this paper, we present a generalization of $ F_{k,D}(x)$ which allows us to produce an exact analog of (\ref{zc}) for odd $k$.
Let $D$ be any discriminant and let $d$ be a fundamental discriminant such that $\Delta := Dd>0$. For a quadratic form $Q=ax^2+bx+c=[a,b,c]$ with integer coefficients and of discriminant
\[
b^2-4ac = \Delta,
\]
the value of genus character $\chi_d(Q)$ is defined (cf. \cite{Gross}) by
\[
\chi_d(Q) =
\begin{cases}
0 & \textup{if $(a,b,c,d)>1$ }\\
\left(\frac{d}{r}\right) & \textup{if $(a,b,c,d)=1$, where $Q$ represents $r$ with $(r,d)=1$.}
\end{cases}
\]
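As a concrete illustration (the choice of the form here is ours, purely for exposition): take $d=-3$ and $D=-4$, so that $\Delta=12$, and consider $Q=[-1,2,2]$, whose discriminant is $2^2-4(-1)(2)=12$. Here $(a,b,c,d)=1$, and the associated binary form $-u^2+2uv+2v^2$ takes the value $r=2$ at $(u,v)=(0,1)$ and $r=11$ at $(u,v)=(1,2)$, both coprime to $d$. Accordingly,
\[
\chi_{-3}([-1,2,2])=\left(\frac{-3}{2}\right)=\left(\frac{-3}{11}\right)=-1,
\]
and the two Kronecker symbols agree, as they must, since the value of the genus character does not depend on the choice of $r$.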
We now assume that $k>1$ is an integer, and
\[
\sign d = \sign D = (-1)^k.
\]
We define
\[
\displaystyle F_{k,D,d}(x):=\sum_{\substack{Q \in \mathfrak{Q}_{Dd} \\a<0<Q(x)}} \chi_d (Q) Q(x)^{k-1}.
\]
Note that our $F_{k,D,d}(x)$ generalizes Zagier's $F_{k,D}(x)$ directly. Namely, for even $k>1$, we have
\[
F_{k,D,1}(x) = F_{k,D}(x).
\]
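The definition lends itself to a direct numerical sanity check. The following is a disposable sketch, entirely ours (the truncation cutoff `A`, the helper names, and the brute-force search for a represented value coprime to $d$ do not come from the paper): it evaluates the defining sum restricted to $-A\le a<0$ and exploits the fact that $x\mapsto x+1$ permutes the forms with a fixed leading coefficient $a$, so the truncated sum is already exactly $1$-periodic.

```python
from math import gcd, floor, ceil

def kronecker(a, n):
    """Kronecker symbol (a/n) for arbitrary integers a, n."""
    if n == 0:
        return 1 if abs(a) == 1 else 0
    sign = 1
    if n < 0:
        n = -n
        if a < 0:
            sign = -sign
    while n % 2 == 0:              # peel off factors of 2 via the (a/2) rule
        n //= 2
        if a % 2 == 0:
            return 0
        if a % 8 in (3, 5):
            sign = -sign
    a %= n                         # now n is odd and positive: Jacobi symbol
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                sign = -sign
        a, n = n, a                # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            sign = -sign
        a %= n
    return sign if n == 1 else 0

def genus_char(d, a, b, c):
    """chi_d([a,b,c]) per the definition in the text: (d/r) for a value r
    represented by the binary form and coprime to d (small search)."""
    if gcd(gcd(abs(a), abs(b)), gcd(abs(c), abs(d))) > 1:
        return 0
    for u in range(8):
        for v in range(-7, 8):
            r = a * u * u + b * u * v + c * v * v
            if r != 0 and gcd(r, d) == 1:
                return kronecker(d, r)
    raise ValueError("no represented value coprime to d found")

def F_trunc(k, D, d, x, A=40):
    """Truncation of F_{k,D,d}(x) to forms with -A <= a < 0.
    For a < 0, the condition Q(x) > 0 is equivalent to |2ax + b| < sqrt(Dd)."""
    Delta = D * d
    s = Delta ** 0.5
    total = 0.0
    for a in range(-A, 0):
        for b in range(floor(-2 * a * x - s), ceil(-2 * a * x + s) + 1):
            if (b * b - Delta) % (4 * a) == 0:
                c = (b * b - Delta) // (4 * a)
                q = a * x * x + b * x + c
                if q > 0:
                    total += genus_char(d, a, b, c) * q ** (k - 1)
    return total

# x -> x+1 permutes the forms with fixed a, so the truncation is 1-periodic:
assert abs(F_trunc(3, -4, -3, 0.3) - F_trunc(3, -4, -3, 1.3)) < 1e-8
# for d = 1 the character is identically 1 and every term is positive:
assert F_trunc(2, 5, 1, 0.25) > 0
```

The truncation converges to $F_{k,D,d}(x)$ as $A\to\infty$ since the terms with leading coefficient $a$ are of size $O(|a|^{-k})$.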
By the same argument as in \cite{Zagier}, our functions $F_{k,D,d}(x)$ are $1$-periodic and continuous for $k>1$, thus
their average values make sense. The main result of this paper is the following generalization of (\ref{zc}).
\begin{theorem}
For an integer $k>1$, and a fundamental discriminant $d$ such that $\sign d = (-1)^k$,
\[
\frac{\zeta(1-2k)}{H(k,|d|)}\left(\frac{1}{2}H(k,|d|)+\sum_{(-1)^kD>0} \int_0^1 F_{k,D,d}(x) dx \ q^{|D|}\right)=
\frac{1}{2} {\mathcal H}_k(\tau).
\]
\end{theorem}
It is quite natural to ask about the boundary case $k=1$.
It follows from \cite{Jameson} that $F_{1,D,d}(x)$ is defined if and only if $x$ is rational, so no averaging is possible.
At the same time, the series ${\mathcal H}_1$ is not modular (see \cite{Cohen,Zn}). The following result is consistent with these observations.
\begin{theorem}
For a fundamental discriminant $d<0$ and a discriminant $D<0$ with $Dd$ being non-square, and $x \in \Q$, we have that
\[
F_{1,D,d}(x)=0.
\]
\end{theorem}
The proof of Theorem 1 is presented in Section 2.
The equality of the constant terms of the $q$-series in Theorem 1 follows directly from the definition of Cohen's numbers $H(k,N)$
in \cite{Cohen}.
Thus Theorem 1
is equivalent to the term-by-term identity
\begin{equation} \label{id}
\int_0^1 F_{k,D,d}(x) dx = \frac{H(k,|D|)H(k,|d|)}{2\zeta(1-2k)},
\end{equation}
and that is what we prove in Section 2. This proof depends on two technical propositions
(Propositions \ref{mult} and \ref{fact} in Section 2), which establish a decomposition of a certain
Dirichlet series into an Euler product, and calculate its Euler factors. The proofs of these propositions
are presented in Section 3 of the paper.
The value of the genus character $\chi_d(Q)=\chi_d({\mathcal A})$ depends only on the class ${\mathcal A} \in \mathfrak{Q}_{Dd}$ such that $Q\in {\mathcal A}$, not
on the individual form $Q$ (see \cite{Gross} for details). It follows that
\begin{equation} \label{sumid}
F_{k,D,d}(x) = \sum_{{\mathcal A}} \chi_d({\mathcal A}) F^*_{k,{\mathcal A}}(x),
\end{equation}
where the sum is taken over all classes ${\mathcal A}$ of quadratic forms of discriminant $Dd$, and
\[
F^*_{k,{\mathcal A}}(x) = \sum_{\substack{Q \in {\mathcal A} \\ a<0<Q(x)}} Q(x)^{k-1}
\]
are introduced and briefly discussed in \cite[Section 9]{Zagier}. In particular, since the $F^*_{k,{\mathcal A}}(x)$ are periodic functions with period $1$, so are our $F_{k,D,d}(x)$, and the integrals on the left-hand side of (\ref{id}) may be interpreted as average values of these functions.
In Section 4, we address the case when $k=1$. We show cancellations in (\ref{sumid}) which prove Theorem 2.
\vskip 10pt
{\sc Acknowledgement}
The author is grateful to Prof. Pavel Guerzhoy for his advice and great support. His comments were very valuable to the writing of this paper.
\section{Proof of Theorem 1} \label{pt1}
In this section, we prove Theorem 1.
\begin{proof}
All we need is to prove (\ref{id}). As in \cite[Section 8]{Zagier}, we have
\begin{align*}
\int_0^1 F_{k,D,d}(x) dx = \sum_{\substack{Q=[a,b,c]\in \mathfrak{Q}_{Dd}/\Gamma_{\infty} \\ a<0}} \chi_d (Q) \beta_k(Q),
\end{align*}
where $\beta_k(Q):=\int_{-\infty}^{\infty} [\text{max}(0,Q(x))]^{k-1} dx$.
We evaluate this integral using the substitution $x=\frac{-b+t\sqrt{Dd}}{2a}$:
\begin{align*}
\beta_k(Q)&= c_k(Dd)^{k-\frac{1}{2}}|a|^{-k} \hspace*{5mm} \text{with} \hspace*{5mm} c_k:=\frac{1}{2^{2k-1}}\int_{-1}^1(1-t^2)^{k-1} dt = \frac{1}{2^{2k-1}}\frac{\Gamma(k) \Gamma (\frac{1}{2})}{\Gamma (k+\frac{1}{2})}.
\end{align*}
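For the reader's convenience, here is the routine computation behind this formula (with $\Delta=Dd$ and $a<0$): the substitution maps $t\in(-1,1)$ onto the interval where $Q>0$, with
\[
Q(x)=\frac{\Delta(t^2-1)}{4a}=\frac{\Delta(1-t^2)}{4|a|}, \qquad |dx|=\frac{\sqrt{\Delta}}{2|a|}\,dt,
\]
so that
\[
\beta_k(Q)=\int_{-1}^{1}\left(\frac{\Delta(1-t^2)}{4|a|}\right)^{k-1}\frac{\sqrt{\Delta}}{2|a|}\,dt
=\frac{\Delta^{k-\frac12}}{2^{2k-1}\,|a|^{k}}\int_{-1}^1(1-t^2)^{k-1}\,dt,
\]
in agreement with the stated value $c_k(Dd)^{k-\frac12}|a|^{-k}$.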
It follows that
\begin{align*}
\int_0^1 F_{k,D,d}(x) dx &= c_k |Dd|^{k-1/2}\sum_{\substack{Q=[a,b,c]\in \mathfrak{Q}_{Dd}/\Gamma_{\infty} \\ a<0}} \frac{\chi_d (Q)}{|a|^k}\\
&= c_k |Dd|^{k-1/2}\sum_{n=1}^{\infty} \left( \sum_{\substack{0 \leq b \leq 2n-1 \\ b^2 \equiv Dd \text{ mod } 4n }} \chi_d \left(\left[-n,b,\frac{Dd-b^2}{4n}\right]\right) \right) \frac{1}{n^k}.\\
\end{align*}
\begin{prop} \label{mult}
For a positive integer $n$, let
\[
N_{D,d}(n):=\sum_{\substack{0 \leq b \leq 2n-1 \\ b^2 \equiv Dd \text{ mod } 4n }} \chi_d \left(\left[-n,b,\frac{Dd-b^2}{4n}\right]\right).
\]
The function $ (-1)^k N_{D,d} \colon \N \rightarrow \Z$
is multiplicative.
\end{prop}
We postpone a proof of Proposition \ref{mult} till Section 3, and continue with our proof of Theorem 1.
Proposition \ref{mult} allows us to write an Euler product expansion for the series
$\sum_{n=1}^{\infty} (-1)^kN_{D,d}(n)n^{-k}$, and we have that
\begin{align*}
\int_0^1 F_{k,D,d}(x) dx &= c_k |Dd|^{k-1/2}\sum_{n=1}^{\infty} \frac{N_{D,d}(n)}{n^k}\\
&= (-1)^kc_k |Dd|^{k-1/2}\sum_{n=1}^{\infty} \frac{(-1)^kN_{D,d}(n)}{n^k}\\
&= (-1)^kc_k |Dd|^{k-1/2} \prod_p \sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}}.\\
\end{align*}
Our next proposition calculates the Euler factors in the above product.
\begin{prop} \label{fact}
Let $p$ be a prime.
Let $D=D_0f^2$ with a fundamental discriminant $D_0$. Let $e\geq 0$ be the integer defined by $p^e || f$.
Then
\[
\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}}= \frac{1-p^{-2k}}{\left(1-\left(\frac{D_0}{p}\right)p^{-k}\right)\left(1-\left(\frac{d}{p}\right)p^{-k}\right)}
\frac{1}{(p^e)^{2k-1}} \left(\sigma_{2k-1}(p^{e})-\left(\frac{D_0}{p}\right)p^{k-1}\sigma_{2k-1}(p^{e-1})\right),
\]
where we adopt the usual convention $\sigma_{2k-1}(1/p)=0$.
\end{prop}
We postpone a proof of Proposition \ref{fact} till Section 3, and continue with our proof of Theorem 1.
Assume that $D=D_0f^2$ with a fundamental discriminant $D_0$, and let
$f=\prod_{i=1}^{m} p_i^{e_i}$.
An inductive argument on the number of prime factors of $f$ allows us to conclude that
\[
\frac{1}{f^{2k-1}}\sum_{r|f} \mu (r)\left(\frac{D_0}{r}\right)r^{k-1}\sigma_{2k-1}\left(\frac{f}{r}\right) = \prod_{i=1}^{m} \frac{1}{(p_i^{e_i})^{2k-1}} \left(\sigma_{2k-1}(p_i^{e_i})+\mu(p_i)\left(\frac{D_0}{p_i}\right)p_i^{k-1}\sigma_{2k-1}(p_i^{e_i-1})\right).
\]
We take this equality into account and use Proposition \ref{fact} to find that
\begin{align*}
&\int_0^1 F_{k,D,d}(x) dx \\
=& (-1)^k c_k |Dd|^{k-1/2}\prod_p \frac{1-p^{-2k}}{\left(1-\left(\frac{D_0}{p}\right)p^{-k}\right)\left(1-\left(\frac{d}{p}\right)p^{-k}\right)}\frac{1}{f^{2k-1}} \sum_{r|f} \mu (r)\left(\frac{D_0}{r}\right)r^{k-1}\sigma_{2k-1}\left(\frac{f}{r}\right)\\
=& (-1)^k c_k |Dd|^{k-1/2} L_{D_0}(k)L_{d}(k)\frac{1}{\zeta(2k)}\frac{1}{f^{2k-1}} \sum_{r|f} \mu (r)\left(\frac{D_0}{r}\right)r^{k-1}\sigma_{2k-1}\left(\frac{f}{r}\right)\\
=& (-1)^k c_k |D_0d|^{k-1/2} L_{D_0}(k)L_{d}(k)\frac{1}{\zeta(2k)}\sum_{r|f} \mu (r)\left(\frac{D_0}{r}\right)r^{k-1}\sigma_{2k-1}\left(\frac{f}{r}\right).
\end{align*}
Now the standard functional equation for Dirichlet $L$-functions (together with the definition of Cohen's numbers from \cite{Cohen})
allows us to derive
\begin{align*}
\int_0^1 F_{k,D,d}(x) dx =\frac{H(k,|D|)H(k,|d|)}{2H(k,0)}
\end{align*}
which is equivalent to (\ref{id}).
\end{proof}
\section{Proofs of Propositions \ref{mult} and \ref{fact}} \label{ppp12}
\begin{proof}[Proof of Proposition \ref{mult}]
Let $n_1$ and $n_2$ be two positive integers such that $(n_1,n_2)=1$.
We want to prove that
\[
N_{D,d}(n_1n_2) = N_{D,d}(n_1) N_{D,d}(n_2).
\]
Without loss of generality, assume that $n_2$ is odd. Thus,
$(n_2,4)=1$ and $(4n_1,n_2)=1$.
We use our definition of $N_{D,d}(n)$ to transform these quantities. We obtain
\begin{align} \label{sum1}
\displaystyle N_{D,d}(n_1n_2)&=\sum_{\substack{0 \leq b \leq 2n_1n_2-1 \\ b^2 \equiv Dd \text{ mod } 4n_1n_2 }} \chi_d \left(\left[-n_1n_2,b,\frac{Dd-b^2}{4n_1n_2}\right]\right) \notag\\
&=\sum_{\substack{0 \leq b \leq 2n_1n_2-1 \\ b^2 \equiv Dd \text{ mod } 4n_1n_2 }} \chi_d \left(\left[-n_1,b,\frac{Dd-b^2}{4n_1n_2}\cdot n_2\right]\right)\chi_d \left(\left[n_2,b,\frac{Dd-b^2}{4n_1n_2}\cdot (-n_1)\right]\right) \notag\\
&=\sum_{\substack{0 \leq b \leq 2n_1n_2-1 \\ b^2 \equiv Dd \text{ mod } 4n_1n_2 }} \chi_d \left(\left[-n_1,b,\frac{Dd-b^2}{4n_1}\right]\right)\chi_d \left(\left[n_2,b,-\frac{Dd-b^2}{4n_2}\right]\right) \notag\\
&=(-1)^k\sum_{\substack{0 \leq b \leq 2n_1n_2-1 \\ b^2 \equiv Dd \text{ mod } 4n_1n_2 }} \chi_d \left(\left[-n_1,b,\frac{Dd-b^2}{4n_1}\right]\right)\chi_d \left(\left[-n_2,b,\frac{Dd-b^2}{4n_2}\right]\right).
\end{align}
Now consider
\begin{align} \label{sum2}
\displaystyle N_{D,d}(n_1) N_{D,d}(n_2)&=\sum_{\substack{0 \leq b_1 \leq 2n_1-1 \\ b_1^2 \equiv Dd \text{ mod } 4n_1}} \chi_d \left(\left[-n_1,b_1,\frac{Dd-b_1^2}{4n_1}\right]\right)\sum_{\substack{0 \leq b_2 \leq 2n_2-1 \\ b_2^2 \equiv Dd \text{ mod } 4n_2 }} \chi_d \left(\left[-n_2,b_2,\frac{Dd-b_2^2}{4n_2}\right]\right) \notag\\
&=\sum_{\substack{0 \leq b_1 \leq 2n_1-1 \\ b_1^2 \equiv Dd \text{ mod } 4n_1 \\ 0 \leq b_2 \leq 2n_2-1 \\ b_2^2 \equiv Dd \text{ mod } 4n_2 }} \chi_d \left(\left[-n_1,b_1,\frac{Dd-b_1^2}{4n_1}\right]\right)\chi_d \left(\left[-n_2,b_2,\frac{Dd-b_2^2}{4n_2}\right]\right).
\end{align}
Note that the sums (\ref{sum1}) and (\ref{sum2}) have the same number of summands. Indeed,
denote by $v(n)$ the number of solutions of $b^2-Dd \equiv 0 \text{ (mod } n)$. Then
the number of summands in (\ref{sum1}) is
\[
\frac{1}{2}v(4n_1n_2)=\frac{1}{2}v(4n_1)v(n_2)
\] while the number of summands in
(\ref{sum2}) is
\[
\frac{1}{2}v(4n_1)\cdot \frac{1}{2}v(4n_2)=\frac{1}{2}v(4n_1)\cdot \frac{1}{2}v(4)v(n_2)=\frac{1}{2}v(4n_1)v(n_2).
\]
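This count is easy to confirm by brute force. The sketch below is ours (the helper `v` and the sample values are arbitrary); the identity $v(4n_1n_2)=v(4n_1)v(n_2)$ is just the Chinese Remainder Theorem, since $(4n_1,n_2)=1$ when $n_2$ is odd.

```python
from math import gcd

def v(n, delta):
    """Number of residues b modulo n with b^2 ≡ delta (mod n)."""
    return sum(1 for b in range(n) if (b * b - delta) % n == 0)

# v(4*n1*n2) = v(4*n1) * v(n2) for coprime n1, n2 with n2 odd
for delta in (61, 13, 28):
    for n1, n2 in ((3, 5), (4, 9), (6, 35)):
        assert gcd(n1, n2) == 1 and n2 % 2 == 1
        assert v(4 * n1 * n2, delta) == v(4 * n1, delta) * v(n2, delta)

# v(4) = 2 for any discriminant delta (delta ≡ 0 or 1 mod 4)
assert v(4, 61) == 2 and v(4, 28) == 2
```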
We now establish a one-to-one correspondence between these sets of summands such that corresponding summands are equal.
Summands in (\ref{sum2}) are enumerated by pairs $(b_1,b_2)$ of residues modulo $2n_1$ and $2n_2$ respectively
(which satisfy additional congruence conditions modulo $4n_1$ and $4n_2$).
The Chinese Remainder Theorem allows us to find $B$ (unique modulo $4n_1n_2$) such that
\[
B \equiv b_1 \mod 4n_1 \hspace{3mm} \text{and $B \equiv b_2 \mod n_2$}.
\]
We now lift $B$ to an integer, which we also denote by $B$ such that $0 \leq B < 4n_1n_2$, and
set
\[
b= \begin{cases}
B & \text{if $B< 2n_1n_2$} \\
4n_1n_2-B & \text{if $B \geq 2n_1n_2$}
\end{cases}
\]
It is easy to see that the above procedure establishes a one-to-one correspondence between the sets of summands
in (\ref{sum1}) and (\ref{sum2}), and we now want to check that corresponding summands are equal.
Since $b \equiv b_1\pmod {4n_1}$, we set $b= b_1+4n_1m = b_1+(2n_1)(2m)$ for some integer $m$ and find that
\[
\chi_d \left(\left[-n_1,b_1,\frac{Dd-b_1^2}{4n_1}\right]\right)= \chi_d \left(\left[-n_1,b,\frac{Dd-b^2}{4n_1}\right]\right).
\]
Since $b \equiv b_2 \pmod {n_2}$, we set $b=b_2+n_2m$ for some integer $m$.
The congruence $b_2^2 \equiv Dd \pmod 4$ implies $b_2 \equiv Dd \pmod 2$.
Similarly, $b^2 \equiv Dd \pmod 4$ implies $b \equiv Dd \pmod 2$ and $b \equiv b_2 \pmod 2$.
Since $n_2$ is odd, $m$ must be even, $m=2m'$.
Thus, $b=b_2+n_2m=b_2+n_2(2m')=b_2+2n_2(m')$. Now we have
\[
\chi_d \left(\left[-n_2,b_2,\frac{Dd-b_2^2}{4n_2}\right]\right)= \chi_d \left(\left[-n_2,b,\frac{Dd-b^2}{4n_2}\right]\right).
\]
It follows that
\[
N_{D,d}(n_1n_2)=(-1)^k N_{D,d}(n_1) N_{D,d}(n_2),
\]
therefore
\[
(-1)^kN_{D,d}(n_1n_2)= [(-1)^k N_{D,d}(n_1)] [(-1)^k N_{D,d}(n_2)]
\]
as required.
\end{proof}
We now turn to the proof of Proposition \ref{fact}.
This proof varies slightly depending on whether the involved quantities are or are not divisible by $p$.
Also, the case $p=2$ has to be considered separately.
In particular, we say that we are in {\bf Case 1} if $p \nmid f$, and in {\bf Case 2} if $p | f$.
In each case, we consider the following sub-cases
\begin{enumerate}[{\bf(i)}]
\item $p \nmid d$, $p \nmid D_0$
\item $p \nmid d$, $p|D_0$
\item $p | d$, $p \nmid D_0$
\item $p| d$, $p|D_0$,
\end{enumerate}
and in every sub-case we will have part {\bf(a)} if $p$ is odd, and part {\bf(b)} for $p=2$.
For the sake of space and clarity, we present here proofs only for {\bf Case 1(i)(a)} and {\bf Case 2(iii)(a)}. While the former is the simplest generic case, we will use the latter to illustrate the ideas involved in these proofs. In the remaining cases, one exploits the same
set of ideas; specifically, one uses an explicit calculation of the quantities $N_{D,d}(p^n)$.
\begin{proof}[Proof of Proposition \ref{fact} in {\bf Case 1(i)(a)}]
Recall the assumptions: $p \nmid f$, $p \nmid d$ and $p \nmid D_0$ with $p$ odd.
We need to prove the identity
\[
\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}} = \frac{1-p^{-2k}}{\left(1-\left(\frac{D_0}{p}\right)p^{-k}\right)\left(1-\left(\frac{d}{p}\right)p^{-k}\right)}.
\]
As long as $p \nmid d$, we can use an explicit formula for the genus character proved in \cite{Gross} to get
\[
\chi_d([-p^n,b,c])=\left(\frac{d}{-p^n}\right)\left(\frac{1}{c}\right)=\left(\frac{d}{-p^n}\right).
\]
We thus have that
\[
N_{D,d}(p^n)=\sum_{\substack{0 \leq b \leq 2p^n-1 \\ b^2 \equiv Dd \text{ mod } 4p^n }} \chi_d \left(\left[-p^n,b,\frac{Dd-b^2}{4p^n}\right]\right)=\left(\frac{d}{-p^n}\right)\sum_{\substack{0 \leq b \leq 2p^n-1 \\ b^2 \equiv Dd \text{ mod } 4p^n }} 1.
\]
We make use of the notation (cf. \cite[Section 8]{Zagier})
\[
N_{\Delta}(n)=\sum_{\substack{0 \leq b \leq 2n-1 \\ b^2 \equiv \Delta \text{ mod } 4n }} 1
\]
to obtain
\begin{align*}
\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}} &= \sum_{n=0}^{\infty} \frac{(-1)^k\left(\frac{d}{-p^n} \right) N_{Dd}(p^n)}{p^{nk}}\\
&=\sum_{n=0}^{\infty} \frac{(-1)^k \left(\frac{d}{-1} \right) \left(\frac{d}{p^n} \right) N_{Dd}(p^n)}{p^{nk}}\\
&=\sum_{n=0}^{\infty} \frac{\left(\frac{d}{p} \right)^n N_{Dd}(p^n)}{p^{nk}}.
\end{align*}
Recall that $v(n)$ denotes the number of solutions of $b^2-Dd \equiv 0 \text{ (mod } n)$.
Since $p$ is odd,
\[
N_{Dd}(p^n)= \frac{1}{2} \cdot v(4p^n)
= \frac{1}{2} \cdot v(4) \cdot v(p^n)
= \frac{1}{2} \cdot 2 \cdot v(p^n)
= v(p^n).
\]
If $\left(\frac{d}{p} \right) \not = \left(\frac{D}{p} \right)= \left(\frac{D_0}{p} \right)$, then $\left(\frac{Dd}{p} \right)=-1$ means that
$Dd$ is a quadratic non-residue $\mod p$, therefore $v(p^n)=N_{Dd}(p^n) =0$ for $n \geq 1$, and
\[
\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}}=1= \frac{(1+p^{-k})(1-p^{-k})}{\left(1-\left(\frac{D_0}{p} \right)p^{-k}\right)\left(1-\left(\frac{d}{p} \right)p^{-k}\right)}
\]
as required.
If $\left(\frac{d}{p} \right) = \left(\frac{D}{p} \right)$, then $\left(\frac{Dd}{p} \right)=1$ and $Dd$ is a quadratic residue modulo $p$.
Then Hensel's lemma implies that $v(p^n)=N_{Dd}(p^n) =2$ for $n \geq 1$, and we calculate
\begin{align*}
\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}} & =\sum_{n=0}^{\infty} \frac{\left(\frac{d}{p} \right)^n N_{Dd}(p^n)}{p^{nk}}\\
&=1+\frac{\left(\frac{d}{p} \right)\cdot 2}{p^k}+\frac{ \left(\frac{d}{p^2} \right) \cdot 2}{p^{2k}}+\frac{ \left(\frac{d}{p^3} \right)\cdot 2}{p^{3k}}+\cdots\\
&=1+\frac{2\left(\frac{d}{p} \right)p^{-k}}{1-\left(\frac{d}{p} \right)p^{-k}}\\
&=\frac{1+\left(\frac{d}{p} \right)p^{-k}}{1-\left(\frac{d}{p} \right)p^{-k}}\\
&=\frac{\left(1+\left(\frac{d}{p} \right)p^{-k}\right)\left(1-\left(\frac{d}{p} \right)p^{-k}\right)}{\left(1-\left(\frac{d}{p} \right)p^{-k}\right)\left(1-\left(\frac{d}{p} \right)p^{-k}\right)}\\
&=\frac{(1+p^{-k})(1-p^{-k})}{\left(1-\left(\frac{D_0}{p} \right)p^{-k}\right)\left(1-\left(\frac{d}{p} \right)p^{-k}\right)}.
\end{align*}
as required.
\end{proof}
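The appeal to Hensel's lemma in the second computation ($v(p^n)=N_{Dd}(p^n)=2$ for all $n\ge 1$ when $Dd$ is a quadratic residue and $p\nmid Dd$) can be checked directly; the sample primes and residues below are our own throwaway choices.

```python
def count_sqrts(delta, m):
    """Number of b modulo m with b^2 ≡ delta (mod m)."""
    return sum(1 for b in range(m) if (b * b - delta) % m == 0)

# p odd and p not dividing delta: each square root mod p lifts uniquely
# to every power p^n by Hensel's lemma, so the count is constant in n
for p, delta in ((7, 2), (11, 5), (13, 61)):
    assert count_sqrts(delta, p) == 2              # delta is a QR mod p
    for n in (2, 3):
        assert count_sqrts(delta, p ** n) == 2

# a quadratic non-residue has no square roots at any level
assert count_sqrts(3, 7) == 0 and count_sqrts(3, 7 ** 2) == 0
```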
\begin{proof}[Proof of Proposition \ref{fact} in {\bf Case 2(iii)(a)}]
Recall the assumptions: $p | f$, $p | d$ and $p \nmid D_0$ with $p$ odd. Furthermore, recall that the integer $e >0$ is defined by the condition $p^e || f$, that is, $p^e$ is the exact power of $p$ dividing $f$.
Under these assumptions, one can calculate the quantities $N_{D,d}(p^n)$ to be:
\[
\begin{array}{lccc}
N_{D,d}(p^{2s-1})&= & 0 & \text{for} \hspace{3mm}1 \leq s \leq e, \\
N_{D,d}(p^{2s})&=& (-1)^k(p^{s}-p^{s-1}) & \text{for} \hspace{3mm} 1 \leq s \leq e, \\
N_{D,d}(p^{2e+1})&=&(-1)^k\left(\frac{D_0}{p}\right)p^e ,\\
N_{D,d}(p^{n})&=&0 & \text{for} \hspace{3mm} n \geq 2e+2.
\end{array}
\]
Thus we have that
\begin{align*}
&\sum_{n=0}^{\infty} \frac{(-1)^kN_{D,d}(p^n)}{p^{nk}}
= 1+\frac{p-1}{p^{2k}}+\frac{p^2-p}{p^{4k}}+\frac{p^3-p^2}{p^{6k}}+\cdots+\frac{p^e-p^{e-1}}{p^{2ek}}+\frac{\left(\frac{D_0}{p}\right)p^e}{p^{(2e+1)k}}\\
=&1+\frac{1}{p^{2k-1}}+\frac{1}{p^{2(2k-1)}}+\cdots+\frac{1}{p^{e(2k-1)}}-\left(\frac{1}{p^{2k}}+\frac{1}{p^{4k-1}}+\frac{1}{p^{6k-2}}+\cdots+\frac{1}{p^{2ek-e+1}}\right)+\frac{\left(\frac{D_0}{p}\right)p^e}{p^{(2e+1)k}}\\
=&\frac{\sigma_{2k-1}(p^e)}{p^{e(2k-1)}}-\frac{p^{(e-1)(2k-1)}+p^{(e-2)(2k-1)}+\cdots+p^{2k-1}+1}{p^{e(2k-1)+1}}+\frac{\left(\frac{D_0}{p}\right)p^e}{p^{(2e+1)k}} \\
=&\frac{\sigma_{2k-1}(p^e)}{p^{e(2k-1)}}-\frac{\sigma_{2k-1}(p^{e-1})}{p^{e(2k-1)+1}}+\frac{\left(\frac{D_0}{p}\right)}{p^{e(2k-1)+k}}\\
=&\frac{p^k\sigma_{2k-1}(p^{e})+\left(\frac{D_0}{p}\right)\sigma_{2k-1}(p^{e})-\left(\frac{D_0}{p}\right)p^{2k-1}\sigma_{2k-1}(p^{e-1})-p^{k-1}\sigma_{2k-1}(p^{e-1})}{p^{e(2k-1)+k}}\\
=& \frac{p^k+\left(\frac{D_0}{p}\right)}{p^{k}} \cdot \frac{\sigma_{2k-1}(p^{e})-\left(\frac{D_0}{p}\right)p^{k-1}\sigma_{2k-1}(p^{e-1})}{p^{e(2k-1)}} \\
=& \left(1+\left(\frac{D_0}{p}\right)p^{-k}\right)\frac{1}{p^{e(2k-1)}} \left(\sigma_{2k-1}(p^{e})-\left(\frac{D_0}{p}\right)p^{k-1}\sigma_{2k-1}(p^{e-1})\right)\\
=& \frac{1-p^{-2k}}{\left(1-\left(\frac{D_0}{p}\right)p^{-k}\right)\left(1-\left(\frac{d}{p}\right)p^{-k}\right)} \frac{1}{(p^e)^{2k-1}} \left(\sigma_{2k-1}(p^{e})-\left(\frac{D_0}{p}\right)p^{k-1}\sigma_{2k-1}(p^{e-1})\right).
\end{align*}
\end{proof}
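As a quick consistency check of the closed form above (an illustrative sketch, not part of the proof), the following snippet compares the partial sum on the left with the final expression using exact rational arithmetic. Here $\chi = \left(\frac{D_0}{p}\right) = \pm 1$, and the factor $1-\left(\frac{d}{p}\right)p^{-k}$ equals $1$ because $p \mid d$; the function names are illustrative.

```python
from fractions import Fraction

def sigma(s, p, e):
    # Divisor sum sigma_s(p^e) = 1 + p^s + ... + p^{e s}
    return sum(Fraction(p) ** (i * s) for i in range(e + 1))

def local_sum(p, k, e, chi):
    # Left-hand side: 1 + sum_{s=1}^{e} (p^s - p^{s-1}) / p^{2sk} + chi * p^e / p^{(2e+1)k}
    total = Fraction(1)
    for s in range(1, e + 1):
        total += Fraction(p ** s - p ** (s - 1), p ** (2 * s * k))
    total += Fraction(chi * p ** e, p ** ((2 * e + 1) * k))
    return total

def closed_form(p, k, e, chi):
    # Right-hand side; the factor (1 - (d/p) p^{-k}) is 1 since p | d
    pref = (1 - Fraction(1, p ** (2 * k))) / (1 - chi * Fraction(1, p ** k))
    num = sigma(2 * k - 1, p, e) - chi * p ** (k - 1) * sigma(2 * k - 1, p, e - 1)
    return pref * num / Fraction(p) ** (e * (2 * k - 1))
```

For example, with $p=3$, $k=1$, $e=1$, $\chi=1$ both sides equal $4/3$.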
\section{Proof of Theorem 2} \label{pt2}
The statement follows easily from
\begin{equation} \label{one}
F_{1,D,d}(x+1)=F_{1,D,d}(x),
\end{equation}
\begin{equation} \label{two}
F_{1,D,d}(0)=0
\end{equation}
and
\begin{equation} \label{three}
F_{1,D,d}\left(\frac{1}{x}\right)=F_{1,D,d}(x)
\end{equation}
for every $x \in \Q$.
It is easy to verify that (\ref{one}) holds.
In order to check (\ref{two}), notice that
\begin{equation} \label{four}
\sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0<c}} \chi_d ([a,b,c])=
\sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0<a}} \chi_d ([a,b,c])=
0,
\end{equation}
since if $[a,b,c]$ appears in the sum, so does $[-c,b,-a]$ and $ \chi_d ([a,b,c])= -\chi_d ([-a,-b,-c])=-\chi_d ([-c,b,-a])$.
Equation (\ref{two}) follows immediately, because the first sum equals $F_{1,D,d}(0)$.
We now prove (\ref{three}). We start with a transformation of $F_{1,D,d}(1/x)$:
\begin{align*}
F_{1,D,d}\left(\frac{1}{x}\right)=&\sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0\\Q\left(\frac{1}{x}\right)>0}} \chi_d ([a,b,c])\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0\\a\left(\frac{1}{x}\right)^2+b\left(\frac{1}{x}\right)+c>0}} \chi_d ([a,b,c])\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0\\a+bx+cx^2>0}} \chi_d ([a,b,c])\\
=& \sum_{\substack{Q=[c,b,a]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0\\ax^2+bx+c>0}} \chi_d ([c,b,a]) \hspace*{1cm}\text{ (switched the names } a \text{ and } c)\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0\\ax^2+bx+c>0}} \chi_d ([a,b,c]) \hspace*{1cm}\text{ (by properties of } \chi_d).\\
\end{align*}
It follows that
\begin{align*}
&F_{1,D,d}\left(\frac{1}{x}\right)-F_{1,D,d}(x)\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0\\ax^2+bx+c>0}} \chi_d ([a,b,c]) - \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0\\ax^2+bx+c>0}} \chi_d ([a,b,c]) \\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0<a\\ax^2+bx+c>0}} \chi_d ([a,b,c])- \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a<0<c\\ax^2+bx+c>0}} \chi_d ([a,b,c]) \hspace*{1cm}{(ac \not = 0 \text{ since } Dd \text{ is not a square})}\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0<a\\ax^2+bx+c>0}} \chi_d ([a,b,c])+ \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\-a>0>-c\\-ax^2-bx-c<0}} \chi_d ([-a,-b,-c])\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0<a\\ax^2+bx+c>0}} \chi_d ([a,b,c])+ \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\a>0>c\\ax^2+bx+c<0}} \chi_d ([a,b,c]) \hspace*{1cm}
\begin{tabular}{l}
(\text{replaced } $-a,-b,-c$ \text{ by } $a,b,c$ \\
\text{ in the second sum})
\end{tabular}
\\
=& \sum_{\substack{Q=[a,b,c]\in \mathbb{Z}^3 \\ b^2-4ac=Dd \\c<0<a}} \chi_d ([a,b,c]) \hspace*{1cm} (ax^2+bx+c \not = 0 \text{ since } Dd \text{ is not a square})\\
=& 0 \text{ by (\ref{four})}.
\end{align*}
\qed
| {
"timestamp": "2017-03-24T01:03:22",
"yymm": "1703",
"arxiv_id": "1703.07951",
"language": "en",
"url": "https://arxiv.org/abs/1703.07951",
"abstract": "Zagier in [4] discusses a construction of a function $F_{k,D}(x)$ defined for an even integer $k \\geq 2$, and a positive discriminant $D$. This construction is intimately related to half-integral weight modular forms. In particular, the average value of this function is a constant multiple of the $D$-th Fourier coefficient of weight $k+1/2$ Eisenstein series constructed by H. Cohen in \\cite{Cohen}.In this note we consider a construction which works both for even and odd positive integers $k$. Our function $F_{k,D,d}(x)$ depends on two discriminants $d$ and $D$ with signs sign$(d)=$ sign$(D)=(-1)^k$, degenerates to Zagier's function when $d=1$, namely, \\[ F_{k,D,1}(x)=F_{k,D}(x), \\] and has very similar properties. In particular, we prove that the average value of $F_{k,D,d}(x)$ is again a Fourier coefficient of H. Cohen's Eisenstein series of weight $k+1/2$, while now the integer $k \\geq 2$ is allowed to be both even and odd.",
"subjects": "Number Theory (math.NT)",
"title": "Sums of quadratic functions with two discriminants"
} |
https://arxiv.org/abs/0810.0092 | On the pre-image of a point under an isogeny | Given a rational point on a curve in a rational isogeny class, a natural question concerns the field of definition of its pre-images. The multiplication by m endomorphism is a powerful and much-used tool. The pre-images for this map are found by factorizing a monic polynomial of degree m^2. For m = 2, Everest and King gave examples where the existence of a quadratic factor coincided with the existence of a rational pre-image via a 2-isogeny. Nelson Stephens asked if this always happens and the question is answered in the affirmative. It is also shown that the analogue for m = 3 can only be false when there exists a rational point of order three and a small number of counterexamples are found. The results are proven over any field with characteristic not two or three. | \section{Introduction}
Given an elliptic curve $E/ \mathbb{Q}$, the set of all curves $E'$ isogenous to $E$ over $\mathbb{Q}$ is finite (up to
isomorphism) and is known as an isogeny class. V\'{e}lu's formulae \cite{1} and the Weierstrass parameterization of
the elliptic curve can be used to find an isogeny class. This is best illustrated in an algorithm developed by Cremona
\cite{2}. He has used his algorithm to produce tables of isogeny classes \cite{3}. For each curve in the class,
non-torsion generators of the Mordell-Weil group are also given.
Methods which find isogeny classes over an arbitrary field have also been developed \cite{4}.
Given a curve in an isogeny class and a point in the Mordell-Weil group, the focus here is on where the pre-images
of the point are defined.
\begin{defn}
Let $K$ be a field, $E/K$ an elliptic curve, $P \in E(K)$ and $\phi:E' \to E$ an isogeny. Suppose that $E'$, $\phi$ and
a point in $\phi^{-1}(P)$ are all defined over a finite extension $L/K$ with $[L:K]<\deg \phi$. Then $P$ is called
\emph{magnified}. If $L=K$ then $P$ is called \emph{$K$-magnified}.
\end{defn}
Suppose that $E/\mathbb{Q}$ is an elliptic curve. In \cite{5} Everest,
Miller and Stephens showed that if a rational non-torsion point is $\mathbb{Q}$-magnified by $\phi:E' \to E$ then the
corresponding elliptic divisibility sequence has only finitely many prime power terms. In \cite{6} it was shown
that these finitely many terms correspond to $S$-integral points on $E'$, where $S$ is given explicitly. By the main
result in \cite{7}, the
$\mathbb{Q}$-magnified condition can be weakened to magnified when the isogeny is multiplication by two or three.
Stephens asked if a non-torsion point is $\mathbb{Q}$-magnified whenever it is magnified by doubling a point. The
following theorem resolves this question.
\begin{thm} \label{1.2}
Let $E$ be an elliptic curve defined over a field $K$ with $\car{K} \ne 2$. If a non-torsion point $P \in E(K)$ is
magnified by doubling a point then it is $K$-magnified.
\end{thm}
It should be noted that factorizing a monic quartic polynomial determines whether a point is magnified by
doubling a point (see Section \ref{2}). This polynomial depends only on the point and the coefficients of a Weierstrass
equation for $E$. For multiplication by three the conclusion has to be weakened.
\begin{thm} \label{1.3}
Let $E$ be an elliptic curve defined over a field $K$ with $\car{K} \ne 2,3$. If a non-torsion point $P \in E(K)$ is
magnified by tripling a point then either it is $K$-magnified or $E$ has a non-trivial $K$-rational $3$-torsion point.
\end{thm}
Listed in Table~\ref{table1} are curves having a non-torsion rational point which is magnified by tripling a point but
not $\mathbb{Q}$-magnified. They were found using PARI/GP \cite{11} and in Section~\ref{3} it is explained that such
examples are rare.
\section{Proofs of Theorems \ref{1.2} and \ref{1.3}} \label{2}
Let $K$ be a field with $\car{K} \ne 2$. Let $E/K$ be an elliptic curve with Weierstrass coordinate
functions $x$ and $y$. Let $P \in E(K)$ be a non-torsion point. The following observation plays an important role.
\begin{lem} \label{2.1}
Suppose that $E'/K$ is an elliptic curve and $\phi: E' \to E$ is an isogeny defined over $K$ with $\phi(R)=P$. Then
$K(x(R), y(R))=K(x(R))$.
\end{lem}
\begin{proof}
Put $L=K(x(R))$ and $L'=K(x(R), y(R))$. Then $[L':L] \le 2$. Suppose that $[L':L]=2$ and choose
$\sigma \in \gal(L'/L)$ to be non-trivial. Then $T=\sigma(R)-R$ is in the kernel of $\phi$ since
$\sigma(\phi(R))-\phi(R)= \mathcal{O}$. But $x(\sigma(R))=\sigma(x(R))=x(R)$, so $\sigma(R)=R+T=\pm R$. If $\sigma(R)=-R$ then
$2P=\phi(2R)=\phi(-T)=\mathcal{O}$, contradicting that $P$ is non-torsion. Hence $\sigma(R)=R$ and $L'=L$.
\end{proof}
Let $\psi_m, \theta_m \in K[E]$ be the standard division polynomials (see p. 39 of \cite{9}). Define
$\delta_m^P \in K[x]$ by $\delta_m^P=\theta_m-x(P)\psi_m^2$. Then $\delta_m^P$ is monic and has degree $m^2$. The
zeros of $\delta_m^P$ determine the values of $x(R)$ for which $mR=P$.
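For instance, if $E$ is given in short Weierstrass form $y^2 = x^3 + Ax + B$ (available when $\car{K} \ne 2,3$), then $\psi_2^2 = 4(x^3 + Ax + B)$ and $\theta_2 = x^4 - 2Ax^2 - 8Bx + A^2$, so that
\[
\delta_2^P(x) = x^4 - 4x(P)x^3 - 2Ax^2 - \left(8B + 4Ax(P)\right)x + A^2 - 4Bx(P),
\]
a monic quartic whose roots are the $x$-coordinates of the points $R$ with $2R = P$.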
\begin{proof}[Proof of Theorem \ref{1.2}] Assume that $P$ is magnified by doubling $R \in E(L)$. Using
Lemma \ref{2.1}, $L=K(x(R))$. We may choose $R$ so that $[L:K] \le 2$. Suppose that $[L:K]=2$ and choose
$\sigma \in \gal(L/K)$ to be
non-trivial. Then $T=\sigma(R)-R$ is a $2$-torsion point since $\sigma(2R)-2R=\mathcal{O}$. Also $T \in E(K)$ since
$\sigma(T)=-T$. Using this torsion point, we can construct an elliptic curve $E'/K$ and a $2$-isogeny $\phi: E \to E'$
with $\ker \phi=\{ \mathcal{O},T \}$ (see p. 95 of \cite{10}). Moreover, both $\phi$ and its dual $\hat{\phi}: E' \to E$
are defined over $K$. Put $\phi(R)=Q$. It follows that $\sigma(Q)=\phi(\sigma(R))=\phi(R+T)=\phi(R)$. Hence $Q \in E'(K)$
and $\hat{\phi}(Q)=P$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{1.3}]
We may assume that $\delta_3^P$ has an irreducible factor $f$ of degree $3$. Let $r_1,r_2,r_3$ be the zeros of $f$ and
let $R_1,R_2,R_3$ be the corresponding solutions to $3R=P$. Then the splitting field $L$ of $f$ is $K(r_1, \sqrt{D})$,
where $D$ is the discriminant of $f$. Moreover, $\gal(L/K)$ is isomorphic to $A_3$ or $S_3$. Let $\alpha=(1,2,3)$ be the
generator of $\gal(L/K(\sqrt{D}))$ and put $T=\alpha(R_1)-R_1$. Suppose that $E$ does not have a non-trivial $K$-rational
$3$-torsion point. Any permutation of the $R_i$ preserves $R_1+R_2+R_3-P$; since $3(R_1+R_2+R_3-P)=\mathcal{O}$, this is a $K$-rational $3$-torsion point and hence equals $\mathcal{O}$. Thus
$\alpha(T)=R_3-R_2=R_2-R_1=T$ and $D$ is not a square in $K$. Choose $\sigma \in \gal(K(\sqrt{D})/K)$ to be non-trivial.
Since $\sigma(T)+T$ is a $K$-rational $3$-torsion point, $x(T) \in K$. Using V\'{e}lu's formulae \cite{1}, we can
construct an elliptic curve $E'/K$ and a $3$-isogeny $\phi: E \to E'$ with kernel $\{ \mathcal{O},T,-T \}$. Moreover,
both $\phi$ and its dual $\hat{\phi}: E' \to E$ are defined over $K$. Now $\phi(R_2)=\phi(R_1+T)$ and
$\phi(R_3)=\phi(R_1-T)$. Hence $\phi(R_1)$ is fixed by $\gal(L/K)$.
\end{proof}
\section{Computations} \label{3}
Examples where $\delta_3^P$ factorizes (over $K$) but $P$ is not $K$-magnified are relatively hard to find.
Let $K=\mathbb{Q}$. The first $12$ of Cremona's ``generators'' tables from \cite{3} were considered.
There are $22,962$ pairs $(E,P)$ such that $\delta_3^P$ factorizes (over $\mathbb{Q}$).
In all but $14$ of these pairs $P$ is $\mathbb{Q}$-magnified by a $3$-isogeny, which can be constructed using V\'{e}lu's
formulae \cite{1} and a rational zero of $\psi_3$.
\begin{table}[h]
\caption{Curves having magnified points which are not $\mathbb{Q}$-magnified}
\hbox{
\vtop{\hsize=1.5in
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$N$ & $C$ & \# \\
\hline
17739 & g & 1 \\
19926 & l & 2 \\
26730 & y & 2 \\
39710 & z & 1 \\
45662 & h & 1 \\
\hline
\end{tabular}
\end{center}
}
\vtop{\hsize=1.7in
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$N$ & $C$ & \# \\
\hline
47526 & f & 1 \\
49818 & j & 1 \\
57222 & bw & 2 \\
62814 & r & 1 \\
64395 & f & 1 \\
\hline
\end{tabular}
\end{center}
}
\vtop{\hsize=1.5in
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$N$ & $C$ & \# \\
\hline
70470 & m & 2 \\
92055 & u & 1 \\
113866 & d & 1 \\
119646 & dd & 2 \\ [0.6ex]
\hline
\end{tabular}
\end{center}
}
}
\label{table1}
\end{table}
The $14$ counterexamples found are listed in Table \ref{table1}. They are given in the same format as Cremona uses, where
$N$ is the conductor, $C$ is the isogeny class and \# is the number of the curve in the class. Each curve listed has rank
$1$ and $P$ is taken to be the generator which Cremona gives. Moreover, they each lie in an isogeny class of size two. Let
$Q$ be a generator for the other curve. Denote by $\hat{h}$ the canonical height (see \cite{10}). If $P$ is
$\mathbb{Q}$-magnified by a $d$-isogeny then it is the image of $mQ+T$, for some non-zero integer $m$ and torsion point
$T$. This implies $dm^2\hat{h}(Q)=\hat{h}(P)$. But in fact, $\hat{h}(Q)=3\hat{h}(P)$. Thus $P$ is not
$\mathbb{Q}$-magnified.
It is worth noting that for all of the pairs $(E,P)$ considered and for all odd primes $l \le 7$, if $\delta_l^P$
factorizes (over $\mathbb{Q}$) then $\psi_l$ has a factor of degree at most $(l-1)/2$.
| {
"timestamp": "2008-10-01T09:38:04",
"yymm": "0810",
"arxiv_id": "0810.0092",
"language": "en",
"url": "https://arxiv.org/abs/0810.0092",
"abstract": "Given a rational point on a curve in a rational isogeny class, a natural question concerns the field of definition of its pre-images. The multiplication by m endomorphism is a powerful and much-used tool. The pre-images for this map are found by factorizing a monic polynomial of degree m^2. For m = 2, Everest and King gave examples where the existence of a quadratic factor coincided with the existence of a rational pre-image via a 2-isogeny. Nelson Stephens asked if this always happens and the question is answered in the affirmative. It is also shown that the analogue for m = 3 can only be false when there exists a rational point of order three and a small number of counterexamples are found. The results are proven over any field with characteristic not two or three.",
"subjects": "Number Theory (math.NT); Algebraic Geometry (math.AG)",
"title": "On the pre-image of a point under an isogeny"
} |
https://arxiv.org/abs/2104.02853 | An SEIR epidemic model of fractional order to analyze the evolution of the COVID-19 epidemic in Argentina | A pandemic caused by a new coronavirus (COVID-19) has spread worldwide, inducing an epidemic still active in Argentina. In this chapter, we present a case study using an SEIR (Susceptible-Exposed-Infected-Recovered) diffusion model of fractional order in time to analyze the evolution of the epidemic in Buenos Aires and neighboring areas (Región Metropolitana de Buenos Aires, (RMBA)) comprising about 15 million inhabitants. In the SEIR model, individuals are divided into four classes, namely, susceptible (S), exposed (E), infected (I) and recovered (R). The SEIR model of fractional order allows for the incorporation of memory, with hereditary properties of the system, being a generalization of the classic SEIR first-order system, where such effects are ignored. Furthermore, the fractional model provides one additional parameter to obtain a better fit of the data. The parameters of the model are calibrated by using as data the number of casualties officially reported. Since infinite solutions honour the data, we show a set of cases with different values of the lockdown parameters, fatality rate, and incubation and infectious periods. The different reproduction ratios R0 and infection fatality rates (IFR) so obtained indicate the results may differ from recent reported values, constituting possible alternative solutions. A comparison with results obtained with the classic SEIR model is also included. The analysis allows us to study how isolation and social distancing measures affect the time evolution of the epidemic. | \section{Introduction}\label{intro}
We present an SEIR subdiffusion model
of fractional order $\nu$, with $0 <\nu \le 1$ to analyze the time evolution of the COVID-19 epidemic in Buenos Aires and neighboring areas (Region Metropolitana de Buenos Aires, (RMBA)) with a population of about 15 million inhabitants.
{\rm RMBA consists of Ciudad Aut\'onoma de Buenos Aires (CABA) plus forty municipalities,
some of which include rural areas, covering about thirteen thousand square kilometers. Thus, RMBA has an average population density
of 1100 people/km$^2$, but in CABA and many of its neighboring cities this number is significantly higher. For example, CABA has a population density of about 14000 people/km$^2$.
In this work, we consider that RMBA has a uniform population distribution.}
The epidemic started officially on March 9th with the number of cases and deaths
still increasing {\rm at the time of writing (September 22nd, 2020)}.
The classical SEIR model ($\nu = 1$) has been used by Carcione et al. \cite{carcione2020} and Santos et al. \cite{Santos2020} to model the COVID-19 epidemic in Italy and Argentina, respectively.
Fractional calculus has been used to define diffusion and wave propagation models in biological and viscoelastic materials \cite{caputo67,mainardi96,carcione2002,mainardi2010,caputo2011,caputo2011b,kochubei2011,carcione2017}.
One important
property of the fractional-order SEIR model is that it incorporates memory and
hereditary properties, a behavior exhibited by most biological systems.
{\rm The use of fractional-order derivatives affects the
duration of the epidemic, the peaks of infected and dead individuals per day, and the number of casualties.}
Among other authors that have applied fractional calculus to obtain solutions of the SEIR model,
we mention
Scherer et al. \cite{scherer2011}, who used a Gr\"unwald-Letnikov time-discrete procedure, introduced by
Ciesielski and Leszczynski \cite{cl2003} (CL method).
Besides,
Zeb et al. \cite{zeb2013} presented an analysis of several numerical methods to solve the SEIR model
of fractional order. For general works on fractional calculus including numerical methods,
we refer to Podlubny \cite{podlubny99} and Li and Zeng \cite{lizeng2015}.
We first formulate an initial-value problem (IVP) for the {\rm classical SEIR model ($\nu = 1$)} and the SEIR subdiffusion equations of
fractional order $\nu$ at the continuous level using the
Caputo definition of the fractional derivative \cite{mainardi2010}.
Existence and uniqueness of the solution of this IVP, with positive values, is demonstrated in \cite{zeb2013}.
The numerical solutions of the
continuous IVP are computed by using the time-explicit
algorithm of Gorenflo-Mainardi-Moretti-Paradisi (GMMP method)
\cite{gorenflo2002,gorenflo2007}.
The conditional stability of the time-explicit GMMP method (and also of the CL method) was demonstrated by
Murillo et al. \cite{murillo2017} [see their equation (19)].
The validation of the GMMP method is performed by comparison of its results against those of the classic SEIR model and those of the fractional Adams-Bashford-Moulton method (ABM method) as defined in \cite{lizeng2015}.
The parameters of the SEIR model are the birth and death rates,
infection and incubation periods, probability of disease transmission
per contact, fatality rate and initial number of exposed
individuals. These parameters, together with the
order of the fractional derivative, are obtained by fitting the number of fatalities officially reported.
This is an inverse problem with an infinite number of solutions (local minima) honouring the data, which is solved by using a quasi-Newton technique for nonlinear least squares problem with the formula of Broyden-Fletcher-Goldfarb-Shanno \cite{Gill81}.
The numerical simulations give an effective procedure to study the spread of the evolution of virus, analyze the effects of the
lockdown measures and predict the peak of infected and dead individuals {\rm per day}.
\section{The Caputo derivative and initial value problems}\label{sec:1}
For $0 < \nu \le 1$, the time-fractional Caputo derivative $D^{\nu}_c f(t)$ is defined as \cite{caputo67,gorenflo2002,gorenflo2007,mainardi2010}
\begin{eqnarray}\label{defcaputo}
D^{\nu}_c f(t)
= \displaystyle\frac{1}{\Gamma(1 - \nu)}
\displaystyle\int_0^t \displaystyle\frac{\partial f(\tau)}{\partial \tau}
\displaystyle\frac{ d \tau}{(t - \tau)^\nu},
\end{eqnarray}
where $\Gamma(\cdot )$ denotes the Euler's Gamma function.
Note that the Caputo derivative of a constant function such as $f(t) = 1$ vanishes, while that of a power $f(t) = t^k$ is
\[
\displaystyle\frac{\Gamma(k+1)}{\Gamma(k - \nu +1)} t^{k - \nu}.
\]
The advantage of using the Caputo derivative in Caputo-type IVP's is that the initial conditions are the same as those of the classical
ordinary differential equations. For details on the Caputo derivative and its relation with the
Riemann-Liouville fractional derivative we refer to \cite{mainardi2010}.
To approximate the time-fractional Caputo derivative, we use a backward Gr\"unwald-Letnikov
approximation at time $t_n = n \Delta t$, $n = 0,1,\cdots$, with $f_n = f(n \Delta t)$, $\Delta t$ being
the time step, as follows
\cite{gorenflo2002,gorenflo2007}:
\begin{eqnarray}\label{eq2}
D^{\nu}_c f(t)\big|_{t_{n+1}} \approx \displaystyle\frac{1}{(\Delta t)^{\nu}}\sum_{j=0}^{n+1}
c^\nu_j f_{n+1-j}.
\end{eqnarray}
The coefficients
\[
c^\nu_j = (-1)^j \binom \nu j
\]
can be obtained in terms of Euler's Gamma function using the recurrence relation
\begin{eqnarray}\label{eq2a}
&&\binom \nu j = \displaystyle\frac{\Gamma(\nu+1)}{\Gamma(j+1) \Gamma(\nu-j+1)} = \displaystyle\frac{\nu-j+1}{j}
\binom \nu {j-1}, \quad \binom \nu 0 = 1.
\end{eqnarray}
The work by Abdullah et al. \cite{abdullah2017} presents an
analysis of the fractional-order SEIR model formulated in terms of the Caputo derivative and
its GMMP time discretization.
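As a sanity check on \eqref{eq2}-\eqref{eq2a}, the following sketch (an illustration, not the code used in this chapter; function names are ours) computes the coefficients $c^\nu_j$ by the recurrence and compares the Gr\"unwald-Letnikov value for $f(t)=t$, for which $f(0)=0$ so the Caputo and Riemann-Liouville derivatives coincide, with the closed form $\Gamma(2)/\Gamma(2-\nu)\,t^{1-\nu}$.

```python
import math

def gl_coeffs(nu, n):
    # c_j = (-1)^j binom(nu, j), via c_0 = 1 and c_j = c_{j-1} * (j - 1 - nu) / j
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (j - 1 - nu) / j)
    return c

def caputo_gl(f, t, nu, dt):
    # Backward Grunwald-Letnikov approximation of the fractional derivative at
    # t = n*dt; agrees with the Caputo derivative when f(0) = 0
    n = int(round(t / dt))
    c = gl_coeffs(nu, n)
    return sum(c[j] * f((n - j) * dt) for j in range(n + 1)) / dt ** nu

nu = 0.5
approx = caputo_gl(lambda t: t, 1.0, nu, 1.0e-3)
exact = math.gamma(2.0) / math.gamma(2.0 - nu)   # = 2/sqrt(pi) for nu = 1/2
print(approx, exact)
```

For $\nu = 1$ the coefficients collapse to $c_0 = 1$, $c_1 = -1$, $c_j = 0$ for $j \ge 2$, recovering the classical backward difference.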
\section{The classical and fractional-order SEIR models}\label{seir}
The IVP for the classic SEIR system of nonlinear ordinary differential equations is
\begin{eqnarray}\label{eq4}
&&\dot S = f_1(S,E,I,R)(t) = \Lambda - \mu S(t) - \beta S(t) \displaystyle\frac{I(t)}{N(t)},\\
&&\dot E = f_2(S,E,I,R)(t) = \beta S(t) \displaystyle\frac{I(t)}{N(t)} - (\mu + \epsilon) E(t), \nonumber\\
&&\dot I = f_3(S,E,I,R)(t) = \epsilon E(t) - (\gamma + \mu + \alpha) I(t), \nonumber \\
&&\dot R = f_4(S,E,I,R)(t) = \gamma I(t) - \mu R(t), \nonumber
\end{eqnarray}
with initial conditions $S(0), E(0), I(0)$ and $R(0)$. A dot above a variable indicates the time derivative, while $N(t)$ is the number of live
individuals at time $t$, i.e., $N = S + E + I + R \le N_0$, {\rm $N_0$ being the total initial population.}
In \eqref{eq4},
$S$ is the number of individuals
susceptible to be exposed while $E$ is the number of exposed individuals, in which the disease is latent; they are infected but not infectious. Individuals in the $E$-class
become infected ($I$) with a rate $\epsilon$ and infected become recovered ($R$) with a rate $\gamma$.
People in the $R$ class do not move back to the $S$ class since lifelong immunity is assumed.
Furthermore, $1/\gamma$ and $1/\epsilon$ are
the infection and incubation periods, respectively, $\Lambda$ is the birth rate, $\mu$ is
the natural per capita death rate, $\alpha$ is the average fatality rate, and $\beta$ is the probability of disease transmission per contact.
All of these coefficients have units of 1/time.
{\rm
Given the short period of the epidemic in Argentina (6 months at the time of writing),
and that the average life expectancy
is about 76 years, it is reasonable to assume that
$ \Lambda = \mu N$, so that the deaths balance the newborns.}
Dead individuals $D(t)$ are computed as $D(t) = N_0 - N(t)$, so that
the number of dead people per unit time, $\dot D (t)$, can be obtained as \cite{Sen17}:
\begin{equation} \label{eq3}
\dot D (t) = \alpha I (t).
\end{equation}
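As an illustration (a minimal sketch, not the calibrated code of the later sections), the classical system \eqref{eq4} together with \eqref{eq3} can be integrated with a simple forward-Euler scheme; the parameter values in the call are the illustrative ones used later in Section 4.1.

```python
def seir_euler(N0, E0, I0, beta, eps, gamma, alpha, mu, dt, days):
    # Forward-Euler integration of the classical SEIR system (4);
    # Lambda = mu*N balances natural deaths and newborns, as assumed in the text.
    S, E, I, R = N0 - E0 - I0, E0, I0, 0.0
    out = []
    for _ in range(int(days / dt)):
        N = S + E + I + R
        Lam = mu * N
        dS = Lam - mu * S - beta * S * I / N
        dE = beta * S * I / N - (mu + eps) * E
        dI = eps * E - (gamma + mu + alpha) * I
        dR = gamma * I - mu * R
        S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
        out.append((S, E, I, R, N0 - (S + E + I + R)))  # last entry: D = N0 - N
    return out

# 1/eps = 3 days, 1/gamma = 8 days, alpha = 0.006/day, beta = 0.75/day
hist = seir_euler(N0=1.0e7, E0=1.0, I0=1.0, beta=0.75, eps=1/3, gamma=1/8,
                  alpha=0.006, mu=0.0, dt=0.01, days=300)
```

With these parameters the cumulative deaths approach $N_0$ times the attack rate times $\alpha/(\alpha+\gamma)$, i.e., roughly $4.6\times 10^5$ individuals.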
Next, we reformulate the system \eqref{eq4} into a fractional-order system by using the Caputo derivative in \eqref{defcaputo}:
\begin{eqnarray}
&&D^{\nu}_c S(t) = f^\nu_1(S,E,I,R)(t) =\mu^\nu N - \mu^\nu S(t) - \beta^\nu S(t) \displaystyle\frac{I(t)}{N(t)},\nonumber\\
&&D^{\nu}_c E(t) = f^\nu_2(S,E,I,R)(t) = \beta^\nu S(t) \displaystyle\frac{I(t)}{N(t)}
- (\mu^\nu + \epsilon^\nu) E(t)\label{eq5}\\
&&D^{\nu}_c I(t) = f^\nu_3(S,E,I,R)(t) = \epsilon^\nu E(t) - (\gamma^\nu + \mu^\nu + \alpha^\nu) I(t), \nonumber\ \\
&&D^{\nu}_c R(t)= f^\nu_4(S,E,I,R)(t) = \gamma^\nu I(t) - \mu^\nu R(t). \nonumber\
\end{eqnarray}
The reproduction ratio, $R_0$, indicates
the number of cases induced by a single infectious individual. When $R_0 < 1$, the disease dies out; when $R_0 > 1$, an epidemic occurs. Al-Sheikh \cite{alsheikh2012} analyzes the behavior of the SEIR models
in terms of $R_0$. For the SEIR model, $R_0$ is given by \cite{zhang2013}
\begin{equation} \label{31}
R_0 = \frac{\beta^\nu \epsilon^\nu}{(\epsilon^\nu+\mu^\nu) (\gamma^\nu+ \alpha^\nu +
\mu^\nu)}.
\end{equation}
The infection fatality rate (IFR) is defined as
\begin{equation} \label{IFR1}
{\rm IFR} \ (\%) = 100 \cdot \frac{\alpha^\nu}{\alpha^\nu + \gamma^\nu} \approx
100 \cdot \frac{\alpha^\nu}{\gamma^\nu} ,
\end{equation}
where this relation holds at all times, not only at the end of the epidemic.
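For quick reference, eqs. \eqref{31} and \eqref{IFR1} translate directly into code. The sketch below (function names are ours) evaluates both for the illustrative validation parameters of Section 4.1, reproducing the quoted $R_0 = 5.72$ up to rounding.

```python
def reproduction_ratio(beta, eps, gamma, alpha, mu, nu=1.0):
    # R_0 of eq. (31); nu = 1 recovers the classical SEIR expression
    return beta**nu * eps**nu / ((eps**nu + mu**nu) * (gamma**nu + alpha**nu + mu**nu))

def infection_fatality_rate(alpha, gamma, nu=1.0):
    # IFR in percent, eq. (IFR1), without the approximation alpha << gamma
    return 100.0 * alpha**nu / (alpha**nu + gamma**nu)

r0 = reproduction_ratio(beta=0.75, eps=1/3, gamma=1/8, alpha=0.006, mu=0.0)
ifr = infection_fatality_rate(alpha=0.006, gamma=1/8)
print(round(r0, 2), round(ifr, 2))   # -> 5.73 4.58
```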
\subsection{Time discretization}
An explicit conditionally stable GMMP algorithm for the fractional order system \eqref{eq5} is formulated as follows
\cite{gorenflo2002,gorenflo2007}:
\begin{eqnarray}
&&S_{n+1} = -\sum_{j=1}^{n+1} c^\nu_j S_{n+1 - j} + S(0) \sum_{j=0}^{n+1} c^\nu_j
+ (\Delta t)^\nu f^\nu_1(S_n, E_n, I_n, R_n),\label{gmmp1}\\
&&E_{n+1} = -\sum_{j=1}^{n+1} c^\nu_j E_{n+1 - j} + E(0) \sum_{j=0}^{n+1} c^\nu_j
+ (\Delta t)^\nu f^\nu_2(S_n, E_n, I_n, R_n),\label{gmmp2}\\
&&I_{n+1} = -\sum_{j=1}^{n+1} c^\nu_j I_{n+1 - j} + I(0) \sum_{j=0}^{n+1} c^\nu_j
+ (\Delta t)^\nu f^\nu_3(S_n, E_n, I_n, R_n),\label{gmmp3}\\
&&R_{n+1} = -\sum_{j=1}^{n+1} c^\nu_j R_{n+1 - j} + R(0) \sum_{j=0}^{n+1} c^\nu_j
+ (\Delta t)^\nu f^\nu_4(S_n, E_n, I_n, R_n).\label{gmmp4}
\end{eqnarray}
The results of the GMMP method \eqref{gmmp1}-\eqref{gmmp4} will be validated against the solution of the classical SEIR model ($\nu = 1$) and the Adams-Bashford-Moulton (ABM) time-explicit scheme as defined in
\cite{lizeng2015} and included in the Appendix.
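A minimal sketch of the scheme \eqref{gmmp1}-\eqref{gmmp4} follows (an illustration under the parameter choices shown, not the production code; names are ours). The Gr\"unwald-Letnikov coefficients are precomputed once, and each step costs $O(n)$ because of the memory term.

```python
def gmmp_seir(N0, E0, I0, nu, beta, eps, gamma, alpha, mu, dt, days):
    # Explicit GMMP scheme for the fractional SEIR system (5). For nu = 1 the
    # memory term collapses (c_1 = -1, c_j = 0 for j >= 2) and the scheme
    # reduces to forward Euler.
    steps = int(days / dt)
    c = [1.0]
    for j in range(1, steps + 1):
        c.append(c[-1] * (j - 1 - nu) / j)
    S, E, I, R = [N0 - E0 - I0], [E0], [I0], [0.0]
    h = dt ** nu
    for n in range(steps):
        N = S[n] + E[n] + I[n] + R[n]
        rhs = (mu**nu * N - mu**nu * S[n] - beta**nu * S[n] * I[n] / N,
               beta**nu * S[n] * I[n] / N - (mu**nu + eps**nu) * E[n],
               eps**nu * E[n] - (gamma**nu + mu**nu + alpha**nu) * I[n],
               gamma**nu * I[n] - mu**nu * R[n])
        csum = sum(c[: n + 2])
        for u, f in zip((S, E, I, R), rhs):
            # memory term: -sum_{j=1}^{n+1} c_j u_{n+1-j} + u(0) sum_{j=0}^{n+1} c_j
            mem = -sum(c[j] * u[n + 1 - j] for j in range(1, n + 2)) + u[0] * csum
            u.append(mem + h * f)
    return S, E, I, R

S, E, I, R = gmmp_seir(N0=1.0e6, E0=1.0, I0=1.0, nu=1.0, beta=0.75,
                       eps=1/3, gamma=1/8, alpha=0.006, mu=0.0, dt=0.05, days=100)
```

Running the same call with $\nu < 1$ illustrates the delayed, flattened peaks discussed in Section 4.1; note that the full history must be kept, so the cost grows quadratically with the number of steps.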
\section{Numerical results}
\subsection{Validation of the GMMP algorithm} \label{vali}
The results of the GMMP algorithm are cross-checked with those of the ABM solver for the
classical SEIR model ($\nu = 1$ ) and SEIR models of fractional orders $\nu = 0.9$ and $0.8$.
We use the following parameters, given in Chowell et al. \cite{chowel2003} and used by Carcione et al. \cite{carcione2020} to perform a parametric analysis of the model: average disease incubation period $1/\epsilon = 3$ days,
infectious period $1/\gamma = 8$ days, fatality rate $\alpha = 0.006$/day, $\beta = 0.75$/day, and $\Lambda = \mu = 0$. The initial conditions are
$E(0) = 1$, $I(0) = 1$, $R(0) = 0$ and $S(0) = N(0) - E(0) - I(0)$.
The time step is $\Delta t = 0.01$ day and
$N_0$ = 10 million. This case corresponds to a high reproduction ratio, $R_0 = 5.72$.
Figures \ref{fig1}--\ref{fig6} show the results for the four classes, S, E, I and R, together with the dead individuals and the dead individuals per day,
computed by using the GMMP and ABM algorithms.
First, an excellent agreement between the results of the two algorithms is observed for all
values of the fractional order derivative $\nu$. {\rm To quantify this agreement, we compute a mean squared relative error
between the estimations of both methods. For example, in the computation of infected individuals,
the following errors are obtained: 1.512 $\times$ 10$^{-5}$ for $\nu = 1$, 9.880 $\times$ 10$^{-6}$ for $\nu = 0.9$
and 1.053 $\times$ 10$^{-5}$ for $\nu = 0.8$.}
In particular, the results for $\nu = 1$ agree with those of Figures 1 and 2 in \cite{carcione2020}.
%
Figure \ref{fig1} shows that decreasing the order of the fractional derivative causes a delay and an increase in the number of susceptible individuals. While for the classical model the number of infectious individuals vanishes at long times, this is not the case for the orders $\nu = 0.8$ and $\nu = 0.9$ (Figure \ref{fig3}).
We ran the simulator up to very long times, but the number of infectious individuals does not vanish, so that
the epidemic never ends (in theory). This happens because $R_0 \ge 1$. We ran other examples with different parameters such that $R_0 < 1$ and, as expected, the number of infectious individuals vanishes and the epidemic dies out. For brevity these plots are not shown. {\rm The case $R_0 < 1$ is analyzed in Subsection 4.2, when simulating the evolution of the epidemic in the RMBA using fractional derivatives. This value of $R_0$ is associated with the strict lockdown imposed
by the government, with a corresponding decrease in the number of infected individuals.}
Regarding the exposed and infected classes (Figures \ref{fig2}-\ref{fig3}), a decrease in $\nu$
delays and reduces the amplitude of the peaks of these classes.
Furthermore, as $\nu$ decreases the number of casualties increases, as seen in Figure \ref{fig5}, while
Figure \ref{fig6} shows a delay and an increase of the peak in the
number of dead individuals per day. {\rm Also, note that Figure \ref{fig4} shows a delay and a decrease in the number of recovered individuals as
the order of the fractional derivative decreases.}
These simulations consider a single value of $\beta$, the lockdown parameter. In a realistic case, $\beta$ is a function of time, and every time $\beta$ changes, the algorithm has to be fully reinitialized from the beginning. Changing $\beta$ within the same time loop yields wrong results. This fact has been verified by cross-checking different algorithms and several fractional orders.
\vskip1cm
\begin{figure}
\includegraphics[scale=0.35]{figure1.eps}
\vskip0.3cm
\caption{Susceptible individuals for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig1}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure2.eps}
\vskip0.3cm
\caption{Exposed individuals for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig2}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure3.eps}
\vskip0.3cm
\caption{Infected individuals for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig3}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure4.eps}
\vskip0.3cm
\caption{Dead individuals for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig5}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure5.eps}
\vskip0.3cm
\caption{Recovered individuals for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig4}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure6.eps}
\vskip0.3cm
\caption{Dead individuals per day for the classical SEIR model ($\nu = 1$) and fractional-order
derivatives $\nu = 0.8$ and $0.9$}
\label{fig6}
\end{figure}
\subsection{Analysis of the COVID-19 epidemic in the RMBA} \label{sec:rmba}
We model the COVID-19 epidemic in the RMBA, with a population $N_0$ = 14839026 individuals
according to the 2010 Census (\url {https://www.indec.gob.ar/indec/web/Nivel4-Tema-2-41-135}).
The prediction of the time evolution of the epidemic is very difficult due to the uncertainty
of the parameters defining the SEIR model. Virus properties such as the infectious
and incubation periods ($\gamma^{-1}$ and $\epsilon^{-1}$) and the life expectancy
of an infected individual ($\alpha^{-1}$)
lie in certain bounded intervals. In contrast, the parameter $\beta$ is time dependent, since it changes according to
the lockdown
and social-distancing measures imposed by the government.
Most authors use the infectious individuals to calibrate the model, e.g., Gonz\'alez-Parra et al. \cite{gonzalez2014}, who model the AH1N1/09 influenza epidemic in Bogot\'a, Colombia
and in the Nueva Esparta state in Venezuela.
Since the number of asymptomatic, undiagnosed infectious individuals in the RMBA is unknown, we
choose to calibrate the model with the number of officially reported casualties, as the most
reliable data, from day 1 (March 9, 2020)
to day {\rm 198 (September 22nd, 2020)} (\url{https://www.argentina.gob.ar/coronavirus/informe-diario}).
Concerning the parameters, fractional order
and initial conditions of the model, we assume $\mu = 3.6 \times 10^{-5}$/day, corresponding to
a life expectancy of 76 years. Changes in the $\beta$ parameter are associated with the different
lockdown and social-distancing measures imposed by the government. Thus, we assume that $\beta$ is a piecewise-constant
function, whose variations are related to the inflection points observed in the curve of casualties.
After the initial time $t_0 = 1$ day, this curve shows two inflection points at times $t_1 = 31$ days
and $t_2 = 50$ days. The fractional-order derivative $\nu$, the values of $\alpha$, $\beta$, $\epsilon$,
$\gamma$ and the initial number of exposed individuals $E(0)$ are estimated by minimizing the $L^2$-norm of the difference between the simulated and actual casualties. This is an inverse problem with an infinite number of solutions due to the existence of local minima. The estimation is also performed for the classical case $\nu = 1$.
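The assumed piecewise-constant transmission rate can be sketched as follows (a hypothetical snippet, not the authors' code; the change points are the inflection times of the casualty curve, at days 31 and 50, and the default values are the optimal $\beta_1$, $\beta_2$, $\beta_3$ of Case 1 in Table \ref{table2}):

```python
# Hypothetical sketch (not the authors' code): piecewise-constant
# transmission rate beta(t), with change points at the inflection
# times of the casualty curve (days 31 and 50). Default values are
# the optimal beta_1, beta_2, beta_3 of Case 1 in Table 2.
def beta(t, betas=(0.66090, 0.12507, 0.34002), t_a=31.0, t_b=50.0):
    """Transmission rate at time t (in days)."""
    if t < t_a:
        return betas[0]
    elif t < t_b:
        return betas[1]
    return betas[2]
```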
This inverse problem is solved by using a quasi-Newton approximation technique for nonlinear least-squares problems, based on the
Broyden-Fletcher-Goldfarb-Shanno formula \cite{Gill81}. An application of this technique to inverse problems in reservoir engineering can be found in \cite{Savioli94}. Table \ref{table1} shows ranges of
the fractional derivative $\nu$, of the parameters $\alpha$, $\beta$, $\epsilon$,
$\gamma$ and the initial exposed individuals $E(0)$ used in the inversion procedure.
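The objective being minimized is the squared $L^2$ misfit between simulated and officially reported cumulative casualties; a minimal sketch of this objective (hypothetical code, not the authors' implementation — in the paper it is minimized by the BFGS quasi-Newton method subject to the bounds of Table \ref{table1}):

```python
# Hypothetical sketch of the calibration objective: squared L2 misfit
# between simulated and officially reported cumulative deaths.
def misfit(simulated, reported):
    """Sum of squared residuals over the reported days."""
    assert len(simulated) == len(reported)
    return sum((s - r) ** 2 for s, r in zip(simulated, reported))
```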
Table \ref{table2} displays the initial values and results of {\rm four} outputs (Cases) of the fitting procedure.
\begin{table}
\caption{Constraints and ranges of the estimation procedure}
\label{table1}
\begin{tabular}{lllllll}
\hline\noalign{\smallskip}
Variable $\rightarrow$ &$\nu$ & $\alpha$ & $\beta$ & $\epsilon^{-1}$ & $\gamma^{-1}$ & $E(0)$ \\
& & day$^{-1}$ & day$^{-1}$ & day & day\\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
Lower bound & 0.8 & 10$^{-5}$ & 0.1 & 3 & 3 & 10$^2$\\
Upper bound & 1.0 & 10$^{-1}$ & 0.9 & 9 & 9 & 10$^4$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
\end{tabular}
\end{table}
\begin{table}
\caption{Initial values and results of the estimation procedure.}
\label{table2}
\begin{tabular}{lllllllll}
\hline\noalign{\smallskip}
Variable $\rightarrow$ &$\nu$ & $\alpha$ & $\beta_1$ & $\beta_2$ & $\beta_3$ & $\epsilon^{-1}$ & $\gamma^{-1}$& $E(0)$ \\
& & day$^{-1}$ & day$^{-1}$ & day$^{-1}$ &day$^{-1}$ & day & day & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
Case 1 & & & & & & & &\\
Initial & 0.9 &6.00$\times$10$^{-3}$ & 0.5 & 0.2 & 0.3 & 5.0 & 4.0 & 500\\
Optimum & 0.919 &2.130761$\times$10$^{-4}$ & 0.66090 & 0.12507 &0.34002 & 8.976007 & 5.335143 & 1623 \\
$R_0$ & & & 3.178 & 0.688 & 1.725 & & & \\
IFR =0.197 & & & & & & & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
Case 2 & & & & & & & &\\
Initial & 0.85 &6.00$\times$10$^{-3}$ & 0.4 & 0.2 & 0.3 & 5.0 & 4.0 & 1000\\
Optimum &0.812 &4.179268$\times$ 10$^{-4}$&0.77273 &0.47231 & 0.56801 &8.121503 &3.022527 & 1138 \\
$R_0$ & & & 1.982 & 1.329 & 1.539 & & & \\
IFR = 0.444 & & & & & & & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
Case 3 & & & & & & & &\\
Initial & 1 &6.00$\times$10$^{-3}$ & 0.5 & 0.2 & 0.3 & 5.0 & 4.0 & 500\\
Optimum & 1 &2.822018 $\times$10$^{-4}$ & 0.49040 & 0.10396 & 0.27568 & 8.975264 &6.212071 & 2821\\
$R_0$ & & & 3.041 & 0.645 & 1.710 & & & \\
IFR = 0.175 & & & & & & & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
Case 4 & & & & & & & &\\
Initial & 0.9 &6.00$\times$10$^{-3}$ & 0.4 & 0.2 & 0.3 & 5.0 & 4.0 & 1000\\
Optimum & 0.929 &2.787611 $\times$10$^{-4}$ & 0.47289 & 0.10168 & 0.31122 & 8.244641 &5.751017 & 4110\\
$R_0$ & & & 2.526 & 0.606 & 1.713 & & & \\
IFR = 0.254 & & & & & & & & \\
\noalign{\smallskip}\hline\noalign{\smallskip}\\
\end{tabular}
\end{table}
Let us analyze the {\rm four} cases resulting from the
minimization algorithm, in which the SEIR parameters, the fractional order and
the initial number of exposed individuals were obtained by fitting the data.
In all cases, the initial number of infected individuals is
assumed to be $I(0)$ = 100.
{\rm Figures \ref{fig7} and \ref{fig8} show the dead individuals and dead individuals per day for Case 1. The inflection point at $t_1 = 31$ days,
related to a change of $R_0$ from 3.178
to 0.688, produces
a decay in the simulated curves, reflecting the effect of the lockdown.
After $t_2 = 50$ days, the curves exhibit a continuous
increase in casualties, due to the relaxation of the lockdown measures, with $R_0 = 1.725$.
Figure \ref{fig9} shows the behavior of all classes, with a
peak of 555 thousand infected individuals at day 188 (September 12th, 2020), while Figure \ref{fig10}
exhibits a death toll of 19000 people after 800 days (May 17th, 2022)
and a peak of 234 casualties at day 188.
The parameters of Cases 2 and 3 in Table \ref{table2} also fit the data, with graphs similar to those in
Figures \ref{fig7} and \ref{fig8}. Case 2
estimates peaks of 309 deaths and 285 thousand infected individuals
at day 222 (October 16th, 2020). At day 800 (May 17, 2022),
there are 34 thousand deaths and 7457 thousand recovered individuals.
This increase in the
number of casualties is due to the higher infection fatality
rate (IFR) and the higher reproduction ratios $R_0$ compared with those of Case 1 (see
Table \ref{table2}).
Case 3, which corresponds to the classical SEIR model ($\nu$ = 1), exhibits a peak of 171 casualties at day 184 (September 8th, 2020) and
607 thousand people infected.
The end of the epidemic is considered to be
the day on which the number of infected individuals falls below 1, which is day 594 (October 24th, 2021)
for this case.
On this day, the numbers of recovered and dead individuals
are 10157 thousand and
18 thousand, respectively, so that the total number of infected people at the end of the epidemic is
10175 thousand individuals. This is the case
predicting the smallest number of casualties.}
{\rm Finally, since the reported number of deceased people could be underestimated due to undeclared cases
and delays in the upload of official data, we also consider a case with
30\% more casualties to date (Case 4 in Table \ref{table2}), giving IFR = 0.254\% and values
of the parameters similar to those of Case 1. Moreover, the peak occurs on almost the same day as in Case 1
(day 187: September 11th, 2020), with 592 thousand infected individuals and 296 casualties. This peak of casualties
and the death toll of 24400 individuals are
approximately 30\% higher than those of Case 1.}
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure7.eps}
\vskip0.3cm
\caption{Dead individuals. The red dots represent the data and the solid line the fit using
the SEIR model of fractional order with $\nu = 0.919$}
\label{fig7}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35]{figure8.eps}
\vskip0.3cm
\caption{Dead individuals per day. The red dots represent the data and the solid line the fit using
the SEIR model of fractional order with $\nu = 0.919$}
\label{fig8}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure9.eps}
\vskip0.3cm
\caption{Number of individuals in all classes (millions) for the SEIR model of fractional order with $\nu = 0.919$}
\label{fig9}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure10.eps}
\vskip0.3cm
\caption{Total number of deaths and deaths per day for the SEIR model of fractional order with $\nu = 0.919$}
\label{fig10}
\end{figure}
{\rm In the following, we compare the behavior of all classes for the different orders of the fractional derivative
used in this analysis, i.e., $\nu = 1, 0.919$ and $0.812$.
Figure \ref{fig11} displays the number of infected individuals, showing a delay and a decrease
of the peak values as the order of the fractional derivative decreases. This behavior
is consistent with that observed in Figure \ref{fig3}.
Figure \ref{fig12} shows an increase in the number of casualties as the order
of the fractional derivative decreases, with a 47\% increase between $\nu = 1$ and $\nu = 0.812$. Moreover,
the curves stabilize at later times as the fractional order decreases.
Finally, Figures \ref{fig13} and \ref{fig14} exhibit the estimated recovered and susceptible individuals for the three values of $\nu$.
Recovered individuals increase and, consequently, susceptible individuals decrease as the order of the fractional derivative increases.
The curves approach their asymptotic values at later times as $\nu$ decreases: the lower the value of $\nu$,
the later individuals recover from the virus infection.
Note that the general trends of Figures \ref{fig11}--\ref{fig14} are similar to those of the figures in Subsection \ref{vali},
in spite of the fact that parameters obtained from the adjustment are different for the three cases.}
{\rm In the four cases described above, we take the initial number of infected individuals to be $I(0)= 100$. Nevertheless, we tested other values:
if $I(0)$ belongs to the interval $[10, 150]$, a reasonable adjustment is obtained, with values similar to those shown in Table \ref{table2} and a slight delay
in the peak of infected individuals as
$I(0)$ decreases. Outside this interval, the fit is poor and the results have no physical meaning.}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure11.eps}
\vskip0.3cm
\caption{Infected individuals for the SEIR model of fractional orders $\nu =1, 0.919$ and $0.812$}
\label{fig11}
\end{figure}
\vskip1cm
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure12.eps}
\vskip0.3cm
\caption{Dead individuals for the SEIR model of fractional orders $\nu =1, 0.919$ and $0.812$}
\label{fig12}
\end{figure}
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure13.eps}
\vskip0.3cm
\caption{Recovered individuals for the SEIR model of fractional orders $\nu =1, 0.919$ and $0.812$}
\label{fig13}
\end{figure}
\begin{figure}
\vskip1cm
\includegraphics[scale=0.35,angle=0]{figure14.eps}
\vskip0.3cm
\caption{Susceptible individuals for the SEIR model of fractional orders $\nu =1, 0.919$ and $0.812$}
\label{fig14}
\end{figure}
\section{Conclusions}
{\rm We use a fractional SEIR (Susceptible, Exposed, Infected, Recovered) diffusion model to analyze the evolution of the COVID-19 epidemic in Argentina, particularly in the Regi\'on Metropolitana de Buenos Aires (RMBA), where a significant fraction of the population is concentrated.
We solve the SEIR system of fractional order $\nu$, $0< \nu< 1$, and the classical ($\nu = 1$) SEIR model by using a time-explicit
Gorenflo-Mainardi-Moretti-Paradisi (GMMP) method. To validate this method, the results were cross-checked with those of the time-explicit fractional Adams-Bashforth-Moulton (ABM) method, obtaining an excellent agreement between the two schemes.
Assuming that the birth and death rates are balanced, the parameters that characterize the model are the infection and incubation periods, the probability of disease transmission per contact, the fatality rate and the initial number of exposed individuals.
These parameters and the order $\nu$ of the fractional derivative are estimated by fitting the number of casualties officially reported. This inverse problem is solved by using a quasi-Newton technique for nonlinear least-squares problems, based on the Broyden-Fletcher-Goldfarb-Shanno formula.
In all the simulations we used three lockdown parameters (denoted by $\beta$),
associated with the different
measures taken by the government during the evolution of the epidemic.
One important conclusion related with this time-dependent parameter is that both the fractional GMMP and ABM algorithms need to be fully initialized from the beginning in order to obtain correct results.
Different cases have been analyzed, since the inverse problem has an infinite number of solutions. We observe a similar behavior in all the cases, with an infection fatality rate IFR varying in the range $[0.175, 0.444]$. After the 50th day of lockdown, a continuous increase in casualties is observed,
due to the relaxation of the preventive social isolation and the community circulation of the virus.
The numerical simulations in the RMBA show that when the order of the fractional derivative decreases,
i.e., for higher subdiffusion of the virus,
the duration of the epidemic is extended, and the peak of infected individuals and the number of casualties increase. Furthermore, the classical SEIR model yields smaller numbers of casualties and infected
individuals, with associated peaks located at earlier times compared with those of the fractional-order cases.
}
\section{Appendix}
The Adams-Bashforth-Moulton (ABM) explicit scheme for the fractional-order SEIR equations is formulated as follows \cite{lizeng2015}.
\vskip0.3cm
\noindent
{\bf Predictor}
\begin{eqnarray}
&&S^p_{n+1} = S_0 + \sum_{j=0}^n b_{j,n+1} f_1^{\nu}(S_j, E_j, I_j, R_j)\label{abm1}\\
&&E^p_{n+1} = E_0 + \sum_{j=0}^n b_{j,n+1} f_2^{\nu}(S_j, E_j, I_j, R_j)\nonumber\\
&&I^p_{n+1} = I_0 + \sum_{j=0}^n b_{j,n+1} f_3^{\nu}(S_j, E_j, I_j, R_j)\nonumber\\
&&R^p_{n+1} = R_0 + \sum_{j=0}^n b_{j,n+1} f_4^{\nu}(S_j, E_j, I_j, R_j)\nonumber\\
&&N^p_{n+1} = S^p_{n+1} + E^p_{n+1} + I^p_{n+1} + R^p_{n+1}.\nonumber
\end{eqnarray}
{\bf Corrector}
\begin{eqnarray}
&&S_{n+1} = S_0
+ \sum_{j=0}^n a_{j,n+1} f_1^{\nu}(S_j, E_j, I_j, R_j)
+ a_{n+1,n+1} f_1^{\nu}(S^p_{n+1}, E^p_{n+1}, I^p_{n+1}, R^p_{n+1}) \label{abm2}\\
&&E_{n+1} = E_0
+ \sum_{j=0}^n a_{j,n+1} f_2^{\nu}(S_j, E_j, I_j, R_j)
+ a_{n+1,n+1} f_2^{\nu}(S^p_{n+1}, E^p_{n+1}, I^p_{n+1}, R^p_{n+1}) \nonumber\\
&&I_{n+1} = I_0
+ \sum_{j=0}^n a_{j,n+1} f_3^{\nu}(S_j, E_j, I_j, R_j)
+ a_{n+1,n+1} f_3^{\nu}(S^p_{n+1}, E^p_{n+1}, I^p_{n+1}, R^p_{n+1}) \nonumber\\
&&R_{n+1} = R_0
+ \sum_{j=0}^n a_{j,n+1} f_4^{\nu}(S_j, E_j, I_j, R_j)
+ a_{n+1,n+1} f_4^{\nu}(S^p_{n+1}, E^p_{n+1}, I^p_{n+1}, R^p_{n+1}) \nonumber\\
&&N_{n+1} = S_{n+1} + E_{n+1} + I_{n+1} + R_{n+1}.\nonumber
\end{eqnarray}
In \eqref{abm1}-\eqref{abm2} the coefficients $b_{j,n+1}, a_{j,n+1}$ are
\begin{eqnarray*}
&&b_{j,n+1} = \displaystyle\frac{\Delta t^{\nu}}{\Gamma(1 + \nu)} \left[(n -j +1)^\nu - (n -j)^\nu \right],\\
&& a_{j,n+1} = \displaystyle\frac{\Delta t^{\nu}}{\Gamma(2 + \nu)}
\times\begin{cases} n^{\nu +1} - (n-\nu) (n+1)^\nu, & j=0,\\
(n - j +2)^{\nu+1} + (n - j)^{\nu+1} - 2 (n-j+1)^{\nu+1}, & 1\le j \le n,\\
1 , & j=n+1.\end{cases}
\end{eqnarray*}
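These weights can be sketched in code as follows (a hypothetical snippet, not the authors' implementation; it follows the standard form of the ABM weights, with the step-size factor $\Delta t^{\nu}$ absorbed into the coefficients). For $\nu = 1$ the weights reduce to the classical rectangle (predictor) and trapezoid (corrector) rules, which is a convenient sanity check:

```python
import math

def abm_weights(n, nu, dt):
    """Predictor weights b_{j,n+1} (j = 0..n) and corrector weights
    a_{j,n+1} (j = 0..n+1) of the fractional ABM scheme."""
    # Predictor (fractional rectangle rule).
    b = [dt**nu / math.gamma(1 + nu) * ((n - j + 1)**nu - (n - j)**nu)
         for j in range(n + 1)]
    # Corrector (fractional trapezoid rule), three-case formula.
    c = dt**nu / math.gamma(2 + nu)
    a = []
    for j in range(n + 2):
        if j == 0:
            a.append(c * (n**(nu + 1) - (n - nu) * (n + 1)**nu))
        elif j <= n:
            a.append(c * ((n - j + 2)**(nu + 1) + (n - j)**(nu + 1)
                          - 2 * (n - j + 1)**(nu + 1)))
        else:  # j = n + 1, the weight of the predictor term
            a.append(c)
    return b, a
```

For instance, `abm_weights(3, 1.0, 0.1)` gives `b = [0.1, 0.1, 0.1, 0.1]` and `a = [0.05, 0.1, 0.1, 0.1, 0.05]` (up to floating-point rounding), i.e., the classical Adams weights.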
{\rm Concerning the error of the numerical scheme ABM, Abdullah et al. \cite{abdullah2017}
give a bound in terms of the time step size $\Delta t$.
On the other hand, Li and Zeng \cite{lizeng2015} and Li et al. \cite{li2011} show that the fractional forward Euler and ABM methods are stable and convergent of order one in $\Delta t$. }
% arXiv:math/0605754
\title{String cohomology groups of complex projective spaces}
\begin{abstract}
Let $X$ be a space and write $LX$ for its free loop space, equipped with the action of the circle group ${\mathbb T}$ given by dilation. We compute the equivariant cohomology $H^*(LX_{h{\mathbb T}};{\mathbb Z}/p)$ as a module over $H^*(B{\mathbb T};{\mathbb Z}/p)$ when $X={\mathbb{C}\mathrm{P}}^r$ for any positive integer $r$ and any prime number $p$. The computation implies that the associated mod $p$ Serre spectral sequence collapses from the $E_3$-page.
\end{abstract}
\section{Introduction}
Let $LX$ be the space of maps from the circle to a space
$X$. The circle acts on itself by rotation, and this action
induces an action of the circle group ${{\mathbb T}}=S^1=SO(2)$ on $LX$.
(The action extends to an $O(2)$-action, but we will not consider
the extended action in this paper.)
The homotopy orbit under the circle action is
the space $E{{\mathbb T}}\times_{{\mathbb T}} LX$, which we will
also write as $LX_{h{\mathbb T} }$.
The purpose of this paper is to compute the ${{\mathbb T}}$
Borel cohomology with ${\mathbb F}_p$ coefficients of the free loop space
on ${\mathbb{C}\mathrm{P}}^r$, that is $H^*(L {\mathbb{C}\mathrm{P}}^r_{h{\mathbb T} };{\mathbb F}_p )$.
There are several motivations for studying this question.
In differential topology one studies the spectrum $TC(M)$,
which is related to the diffeomorphisms of the manifold $M$.
There is a long exact sequence for the spectrum cohomology
of $TC(M)$, where the other terms are given by the cohomology
and the Borel cohomology of the free loop space on $M$.
So this result gets us closer to understanding $TC({\mathbb{C}\mathrm{P}}^r )$.
The theory of Chas and Sullivan \cite{SullivanChas} constructs
algebraic operations on groups related to the free loop space.
Part of this structure has been computed in the case $M={\mathbb{C}\mathrm{P}}^r$
\cite{CohenJones}. It would be interesting to write down as much
as possible of the Chas-Sullivan structure for the particularly
simple example $M={\mathbb{C}\mathrm{P}}^r$. Since one of the groups carrying the
structure is the Borel cohomology of the free loop space, a first
step is to compute this group.
The usual cohomology of the free loop space has been used
to study the existence of closed geodesics
\cite{GM}, \cite{Klingenberg}, \cite{VigueSullivan}. The main
problem studied is to decide whether any Riemannian manifold
has infinitely many geometrically distinct closed geodesics.
This method works better the bigger the cohomology of the free loop space is.
For general metrics on ${\mathbb{C}\mathrm{P}}^r$ this has not yet been achieved.
It is a far-out possibility that knowledge of the equivariant
cohomology could be of some help.
We should point out that our computation does not give us much more
than the cohomology groups as groups. One can ask for product
structure, cohomology operations and higher torsion. We are working
on these questions.
The method we use is convoluted, so we should try to
give an overview of the computation here.
We know several different spectral sequences that converge to
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T} };{\mathbb F}_p )$. The first is the Serre spectral sequence
for $L{\mathbb{C}\mathrm{P}}^r \to L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T} } \to B{{\mathbb T}}$. Since the cohomology of
$L{\mathbb{C}\mathrm{P}}^r$ is known, we know the $E_2$ page of this spectral sequence.
The $d_2$ differential is determined by the action map
${{\mathbb T}} \times L{\mathbb{C}\mathrm{P}}^r \to L{\mathbb{C}\mathrm{P}}^r$ in the ordinary cohomology
of the free loop space. We are able to compute this directly, using
methods from \cite{SpSe}, so we can compute the $E_3$ page of the
Serre spectral sequence. It is a consequence of our final result that
there are no further non-trivial differentials, so that the Serre
spectral sequence collapses from the $E_3$ page. One could argue
that this is the simplest formulation of our main result. We have no
clue as to how to prove this directly using only the Serre spectral
sequence.
The second spectral sequence is derived from infinite dimensional
Morse theory. For a compact Riemannian manifold $M$, the energy
function $E:LM\to {\mathbb R}$ is a ${{\mathbb T}}$-equivariant map, so we get
an equivariant filtration of $LM$ by energy levels.
For $M={\mathbb{C}\mathrm{P}}^r$ with the symmetric space (Fubini--Study) metric,
the energy function is a Morse--Bott function. The filtration defines
a spectral sequence. We have enough knowledge of the critical manifolds
to write down the spectral sequence. This method has previously been
used by Klingenberg in \cite{KlingProj} to compute
$H^*(EO(2)\times_{O(2)} L {\mathbb{C}\mathrm{P}}^r;{\mathbb F}_2 )$.
In this case, Klingenberg proves that the spectral sequence collapses.
In our case, the spectral sequence does eventually collapse, but only
from the $E_p$ page, not from the $E_1$ page.
There are definitely non-trivial differentials in the Morse
spectral sequence. Not all of these differentials start
at the ${\mathbb T}$ fixed points, that is at the space of constant curves.
The non-triviality of the differentials can be interpreted
as a geometrical statement about $n$-fold iterated geodesics.
It is a consequence of our calculation that on ${\mathbb{C}\mathrm{P}}^r$
with the standard metric, for any $n$, there is
always a curve close to an $n$-fold iterated geodesic such that, if the curve
moves according to the dynamics given by the gradient of the energy
function $E$, it will eventually approximate an $(n-1)$-fold
iterated geodesic.
We study the Morse filtration spectral sequence using localization
methods from equivariant homotopy theory. The $p$-fold iteration
map induces a map
\[
H^*(LM_{h{\mathbb T} } ;{\mathbb F}_p )
\to H^*(LM_{h{\mathbb T} }^{(p)} ;{\mathbb F}_p)
\]
where the superscript $(p)$ means that we have twisted the
${\mathbb T}$-action. This map is compatible with the Morse filtration,
so it induces a map of spectral sequences.
The localization methods prove that under certain technical
conditions, after inverting the generator $u$ of $H^*(B{\mathbb T} ;{\mathbb F}_p )$
this map induces an isomorphism of spectral sequences. It turns out
that the Morse spectral sequence for the twisted action collapses,
so this isomorphism puts strong conditions on the original spectral
sequence.
We apply this to our special case. After some brute calculation
in the Morse spectral sequence we obtain the collapse from the $E_p$
page.
So we have to contend with the first differentials.
We know that there are some non-trivial differentials, because the $E_1$
page of the Morse spectral sequence is strictly bigger than
the $E_3$ term of the Serre spectral sequence. To play these
two spectral sequences out against each other, we study the
${\mathbb T}$-transfer. This is a degree $-1$ endomorphism on $H^*(X)$
defined for any ${\mathbb T}$-space $X$. What makes it interesting is that we
can compute it using homotopy theoretical methods, but it seems
hard to calculate the transfer directly in the Morse theory context,
primarily because the transfer will not
respect the splitting of $H^*(L{\mathbb{C}\mathrm{P}}^r ;{\mathbb F}_p )$ into summands
corresponding to the quotients of the Morse filtration. So you
can translate information about the ${\mathbb T}$-transfer into information
about how the Morse strata fit together, that is information about
the heteroclinic trajectories of the flow on $L{\mathbb{C}\mathrm{P}}^r$ defined by the
gradient of the energy function.
A third spectral sequence converging to
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}};{\mathbb F}_p )$ is the spectral sequence that
we developed in \cite{SpSe} and \cite{BO}.
For any space $X$ the $E_2$ term of this spectral
sequence is algebraically described as non-abelian higher derived
functors. If $X$ is simply connected, it converges towards
$H^*(LX_{h{{\mathbb T}}};{\mathbb F}_p )$. For general $X$ it seems to be a hard
algebraic problem to compute the $E_2$ page explicitly, but we can handle
the algebra for ${\mathbb{C}\mathrm{P}}^r$. Preliminary computations show that the
size of the $E_2$ term of this spectral sequence agrees with the
size of the Morse spectral sequence, which means that it collapses
from the $E_2$ page.
The individual sections are organized as follows.
In sections \ref{sec:Geospace} and \ref{sec:CohGeospace} we study
the space of geodesics on ${\mathbb{C}\mathrm{P}}^r$. Except for the constant ``geodesics'',
this space consists of an infinite number of homeomorphic components.
We compute its cohomology by comparing to the cohomology of Grassmann
spaces. The ${\mathbb F}_p$-cohomology of any ${\mathbb T}$-space is a module over
$H^*(B{\mathbb T} ;{\mathbb F}_p )\cong {\mathbb F}_p [u]$, and we determine these
module structures.
This is very closely related to Klingenberg's work in \cite{KlingProj}.
The main difference is that we are interested in the circle action,
while he studies the action of the group $O(2) \supset {\mathbb T}$.
In section \ref{sec:Orbits} we compute the equivariant (Borel)
cohomology of the spaces of geodesics. The action of ${\mathbb T}$ is different
on the different homeomorphic components.
The equivariant cohomology can distinguish at
least some of them.
In section \ref{sec:Bundles} we recall some facts about
equivariant cohomology theory, in particular the
localization theorem, which shows that the relative equivariant
cohomology of $X^{C_p}\subset X$ is annihilated by $u$.
In section \ref{sec:Twisting} we study the following abstract
construction. If $X$ is a ${\mathbb T}$ space, then the fixed points
$X^{C_p}$ is a ${\mathbb T} \cong {\mathbb T}/C_p$ space. We show that the
equivariant cohomology of $X^{C_p}$ with coefficients in ${\mathbb F}_p$
does not depend on the action. It is just the tensor product of
the ordinary cohomology of $X$ with ${\mathbb F}_p [u]$.
In section \ref{sec:Iteration} we apply the general theory of
the two previous sections to the free loop space of a manifold.
The idea is that the iteration map, which maps a loop to the
same loop run through $p$ times, is really the inclusion of the
$C_p$ fixed points, so that it induces an isomorphism of localized
equivariant cohomology and by the results of \ref{sec:Twisting}
it is easy to compute this equivariant cohomology.
The localization theorem needs a finite dimensionality condition.
This condition is not just technical, but essential. Since the
free loop space is infinite dimensional, we have to reduce the
situation to a finite dimensional situation. We do this step
using Morse theory on the Hilbert manifold version of the free
loop space.
In section \ref{sec:MSS} we analyze how the theory from section
\ref{sec:Iteration} influences the Morse spectral sequence. We show
that under certain (strong) conditions, the localization of the
Morse spectral sequence for equivariant cohomology is a functor
in the non-equivariant Morse spectral sequence.
In section \ref{sec:MorseCPr} we specialize to ${\mathbb{C}\mathrm{P}}^r$. We use the
results from the sections \ref{sec:CohGeospace} and \ref{sec:Orbits}
to explicitly write down the $E_1$ page of the Morse spectral sequence,
in both the equivariant and the non-equivariant case.
For this space we do know that the non-equivariant spectral sequence
collapses, and it follows from section \ref{sec:Iteration} that
the localization of the Morse spectral sequence collapses.
This does not imply that the un-localized Morse spectral sequence
for the equivariant cohomology collapses from the $E_1$ page,
but it is sufficient to prove collapsing from the $E_p$ page.
Now there is a change of scene to homotopy theoretical methods.
In section \ref{sec:FreeCohomology}
we compute the ${\mathbb T}$ transfer map. We do this in the slightly greater
generality of spaces whose cohomology is a truncated polynomial algebra
on one generator. To do the computation, we use the homological
methods we developed in \cite{SpSe}. The central computation is
a computation in non-abelian homological algebra, which we
do in section \ref{sec:Derived}.
Even if it is not necessary for the proof of our main result, we
study the Serre spectral sequence converging to Borel
homology in section \ref{sec:SerreSS}. In particular we
show that our main result implies that the Serre spectral sequence
collapses from the $E_3$ page. We also obtain some information about
the product structure in $H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ;{\mathbb F}_p )$, which seems
hard to get using only the Morse theory approach.
In section \ref{se:MorseSerreSS} we show how the transfer map
forces the existence of non-trivial differentials in the
Morse spectral sequence. It seems to be hard to pinpoint these
differentials exactly, but we obtain sufficient information to
compute the $E_\infty$ term as a module over ${\mathbb F}_p [u]$.
{\em Notation}: We fix a prime $p$. Cohomology groups will always
have coefficients in the field ${\mathbb F}_p$, unless we explicitly
state otherwise. We state otherwise mainly in section
\ref{sec:CohGeospace}. The group ${\mathbb T}$ is the circle group.
Spaces are assumed to be homotopy equivalent to CW-complexes, and
${\mathbb T}$-spaces are assumed to be homotopy equivalent to ${\mathbb T}$ CW-complexes.
\section{Geodesics on ${\mathbb{C}\mathrm{P}}^r$}
\label{sec:Geospace}
Let us fix some notation and recall some basic facts.
Consider ${\mathbb C}^{r+1}$ with the usual Hermitian inner product
${\inp \medspace \medspace}_{\mathbb C}$, and write
$\inp \medspace \medspace$ for its real part.
Thus $\inp \medspace \medspace$ is the standard inner product on
${\mathbb R}^{2r+2} \cong {\mathbb C}^{r+1}$.
Let $S^{2r+1}$ be the unit sphere in ${\mathbb C}^{r+1}$.
Since the left action of ${\mathbb T}$ on $S^{2r+1}$ given by
$(z,v)\mapsto zv$ is a free proper action, the Hopf map
\[ Q:S^{2r+1} \to S^{2r+1}/{\mathbb T} ={\mathbb{C}\mathrm{P}}^r \]
is a smooth submersion. We equip ${\mathbb{C}\mathrm{P}}^r$ with the Fubini--Study
Hermitian metric. This means that for any $x\in S^{2r+1}$ the map
\begin{equation} \label{a_x}
a_x : ({\mathbb C} x)^\perp \subseteq T_x(S^{2r+1}) \xrightarrow{T_xQ}
T_{Q(x)}({\mathbb{C}\mathrm{P}}^r )
\end{equation}
is a ${\mathbb C}$-linear isometry, where
$({\mathbb C} x)^\perp = \{ v \in {\mathbb C}^{r+1} | {\inp v x}_{\mathbb C} =0\}$.
Note that the following identity holds
\begin{equation} \label{fseqn}
a_{zx}(zv)=a_x(v) \text{ for } z\in S^1.
\end{equation}
See for example \cite[Lemma 14.4]{MT} for these facts.
We equip $S^{2r+1}$ with the Riemannian metric coming from
$\inp \medspace \medspace$ and use the real part
of the Fubini Study metric as Riemannian metric on ${\mathbb{C}\mathrm{P}}^r$.
Let $H_x$ be the horizontal subspace, i.e.\ the orthogonal
complement of $\ker (T_xQ)={\mathbb R} ix$ in $T_x(S^{2r+1})$. Since
\[ H_x = \{ v \in {\mathbb C}^{r+1} | \inp v x = \inp v {ix} =0 \}
= ({\mathbb C} x)^\perp \]
we see that $a_x : H_x \to T_{Q(x)}({\mathbb{C}\mathrm{P}}^r )$ is an
${\mathbb R}$-linear isometry. So the Hopf map $Q$ is a
Riemannian submersion \cite[Ch. 2]{GHL}.
We use the Levi--Civita connection corresponding
to the standard metrics on $S^{2r+1}$ and ${\mathbb{C}\mathrm{P}}^r$.
Let $\PaG r 1$ denote the set of primitive closed geodesics
$f:[0,1]\to {\mathbb{C}\mathrm{P}}^r$ and let $\PaG r q$ for integer $q\geq 1$
be the set of geodesics of the form $f(qt)$ for $f\in \PaG r 1$.
Note that a geodesic in $\PaG r q$ has length $q\pi$ and energy
$q^2\pi^2$.
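Indeed, a geodesic $f\in \PaG r q$ lifts to a horizontal curve in
$S^{2r+1}$ of constant speed $q\pi$ (see the explicit parametrization
below), so its length is $\int_0^1 q\pi \, dt =q\pi$, and with the
convention $E(f)=\int_0^1 |f^\prime (t)|^2 \, dt$ for the energy we
get $E(f)=q^2\pi^2$.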
The free loop space $L{\mathbb{C}\mathrm{P}}^r$, defined as the set of
closed curves $g:[0,1] \to {\mathbb{C}\mathrm{P}}^r$ of class $H^1$, is a Hilbert
manifold modeled on $L({\mathbb R}^{2r})$ \cite[\S 1]{KlingSphere}
and by \cite[\S 1.4]{KlingProj} we have that $\PaG r q$ is a (critical)
submanifold of $L{\mathbb{C}\mathrm{P}}^r$.
The left ${\mathbb T}$ action on $L{\mathbb{C}\mathrm{P}}^r$ restricts to an action on $\PaG r q$.
We view an element in $\PaG r q$ as a periodic geodesic, so that the action is given by
\[ {\mathbb T} \times \PaG r q \to \PaG r q ; \quad
(e^{2\pi i\theta}*f)(t)=f(t-\theta), \quad \theta \in {\mathbb R} .\]
We will now give an alternative description of $\PaG r q$.
The description can be found in \cite[\S 1]{KlingProj}, but we will
write down some explicit maps and add information on the ${\mathbb T}$-action.
Write $\St 2 {r+1}$ for the Stiefel manifold of complex
{\em orthonormal} 2-frames in ${\mathbb C}^{r+1}$. We can rotate a
2-frame $(x,v)$ by an angle $\omega \in {\mathbb R}$ as follows:
\[ R(\omega )(x,v) =
(\cos (\omega )x+\sin (\omega )v,-\sin (\omega )x+\cos (\omega )v). \]
Write $\Fr 2 {r+1}$ for the complex projective Stiefel manifold
$\St 2 {r+1} /{\mathrm{diag}}_2 (U(1))$.
\begin{definition}
For integer $q\geq 1$ we let $\Fra 2 {r+1} q$ denote $\Fr 2 {r+1}$
equipped with the well-defined ${\mathbb T}$-action
\[ {\mathbb T} \times \Fr 2 {r+1} \to \Fr 2 {r+1}; \quad
e^{2\pi i \theta}*[x,v] = [R(-q\pi \theta )(x,v)], \quad \theta \in {\mathbb R} .\]
\end{definition}
\begin{proposition}
There is a diffeomorphism
$\phi_q : \Fra 2 {r+1} q \to \PaG r q$ defined by
$\phi_q([x,v])=Q\circ c(q,x,v)$ where
\[ c(q,x,v)(t)= \cos (q\pi t)x+\sin (q\pi t)v, \quad 0\leq t\leq 1. \]
Furthermore, $\phi_q$ is a ${\mathbb T}$-equivariant map.
\end{proposition}
\begin{proof}
By \cite[2.109 and 2.110]{GHL} the map $\phi_q$ is a bijection.
A direct computation using the trigonometric addition formulas
shows that $\phi_q$ is ${\mathbb T}$-equivariant.
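Explicitly, $e^{2\pi i\theta}*[x,v]=
[\cos (q\pi \theta )x-\sin (q\pi \theta )v,
\sin (q\pi \theta )x+\cos (q\pi \theta )v]$, and the addition
formulas give
\[ \cos (q\pi t)\big( \cos (q\pi \theta )x-\sin (q\pi \theta )v\big)
+\sin (q\pi t)\big( \sin (q\pi \theta )x+\cos (q\pi \theta )v\big)
= c(q,x,v)(t-\theta ), \]
so that $\phi_q (e^{2\pi i\theta}*[x,v])=e^{2\pi i\theta}*\phi_q ([x,v])$.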
\end{proof}
Note that the geodesics $Q\circ c(q,x,v)$ can easily be extended
from $[0,1]$ to the open interval $(-\epsilon , 1+\epsilon )$
for $\epsilon>0$.
\begin{lemma} \label{sigma}
There is a ${\mathbb T}$-equivariant diffeomorphism
\[ \sigma_q :\Fra 2 {r+1} q \to \Fratilde 2 {r+1} q ; \quad
[x,v]\mapsto [\frac {x+iv} {\sqrt 2} ,\frac {x-iv} {\sqrt 2}], \]
where $\Fratilde 2 {r+1} q = \Fr 2 {r+1}$ equipped with
the well-defined ${\mathbb T}$-action
\[ e^{2\pi i \theta } \star [a,b]=
[e^{-\pi q i \theta }a,e^{\pi q i \theta }b ].\]
\end{lemma}
\begin{proof}
We have an automorphism $\sigma$ of $\St 2 {r+1}$ as follows:
\[ \sigma (x,v)= \frac 1 {\sqrt 2} (x+iv,x-iv) \quad , \quad
\sigma^{-1}(a,b)=\frac 1 {\sqrt 2} (a+b,-i(a-b)). \]
This automorphism respects the diagonal $U(1)$ action. Furthermore,
\[ \sigma (R(\omega )(x,v))=
\frac 1 {\sqrt 2} (e^{-i\omega }(x+iv),e^{i\omega }(x-iv)). \]
The result follows.
\end{proof}
Note that the ${\mathbb T}$-action on $\Fratilde 2 {r+1} 1$ is free so by
the lemma the ${\mathbb T}$-action on $\Fra 2 {r+1} 1$ is also free.
We have yet another interpretation of the space of primitive geodesics.
Let $\eta: S(\tau ({\mathbb{C}\mathrm{P}}^r )) \to {\mathbb{C}\mathrm{P}}^r$ denote the sphere bundle
of the tangent bundle.
\begin{proposition}
\label{lemma:UnitTangents}
There is a diffeomorphism $\psi :\Fr 2 {r+1} \to S(\tau ({\mathbb{C}\mathrm{P}}^r ))$
defined by
\[
\psi ([x,v])=(a_x(v))_{Q(x)}.
\]
The composition $\eta \circ \psi$ is the projection onto the first
factor, $[x,v] \mapsto [x]$.
\end{proposition}
\begin{proof}
The map $\psi$ is well-defined by (\ref{fseqn}) and has the inverse
$\psi^{-1}(v_{Q(x)})=[x,a_x^{-1}(v)]$.
Both $\psi$ and $\psi^{-1}$ are smooth maps.
\end{proof}
We equip $S(\tau ({\mathbb{C}\mathrm{P}}^r ))$ with the ${\mathbb T}$-action which makes
$\psi$ an equivariant map.
There is a fiber bundle sequence
$\Fra 2 2 1\to \Fra 2 {r+1} 1 \to \Gr 2 {r+1}$ since the Grassmannian
is the quotient of the Stiefel manifold by the group $U(2)$.
Equivalently, we have a fiber bundle
\[ S(\tau ({\mathbb{C}\mathrm{P}}^1 )) \to S(\tau ({\mathbb{C}\mathrm{P}}^r )) \to \Gr 2 {r+1} .\]
The action of ${\mathbb T}$ on the total space is free and preserves the fibers.
After dividing by it, we get another fiber bundle
\begin{equation}
\label{eq:geodesicsandgrassman}
\SG 1 \to \SG r \to \Gr 2 {r+1}.
\end{equation}
Note that we may view a point in $\SG r$ as the trace of a simple
geodesic together with an orientation. Klingenberg \cite{KlingProj} does
not consider $\SG r$. He divides out by the entire $O(2)$ action instead
and denotes the resulting quotient space by $\mathbf \Delta ({\mathbb{C}\mathrm{P}}^r )$.
\begin{example}
\label{example:S2}
In the case $r=1$ we have that ${\mathbb{C}\mathrm{P}}^1$ is the standard round
sphere $S^2$ with radius $\frac 12$. A tangent vector $w\in T_v(S^2)$ is part of a unique
ordered basis $(v,w,v\times w )$. There is a unique $A\in SO(3)$
so that $v=Ae_1$, $w=Ae_2$, $v\times w=Ae_3$. This establishes
a diffeomorphism between $SO(3)$ and $S(\tau (S^2))$.
The induced action of $e^{i\theta} \in S^1$ on $SO(3)$
is given by right multiplication $A\mapsto A\rho (\theta )$
where $\rho (\theta )$ is the rotation matrix by angle
$\theta$ around the axis $e_3$.
The map $A\mapsto Ae_3$ is invariant under the action.
It defines a map $\SG 1\to S^2$ which is a diffeomorphism.
\end{example}
The example combines with the fibrations to show that we have a
diagram of fibration sequences
\begin{equation}
\label{eq:fiberdiagram}
\xymatrix@C=1.5 cm{
{\mathbb T} \ar@{=}[r] \ar[d] & {\mathbb T} \ar[r] \ar[d] & {*} \ar[d] \\
SO(3) \ar[r] \ar[d] & S(\tau ({\mathbb{C}\mathrm{P}}^r )) \ar[r]^{q} \ar[d]^{q^\prime}
& \Gr 2 {r+1} \ar@{=}[d] \\
S^2 \ar[r] & \SG r \ar[r]^{\overline q} & \Gr 2 {r+1}
} \end{equation}
Let $\gamma_2$ be the canonical complex 2-dimensional vector bundle
over $\Gr 2 {r+1}$ and let ${\mathbb P} (\gamma_2 )$ be the associated projective
bundle. A point in the total space of ${\mathbb P} (\gamma_2 )$ is a flag
$V_1\subset V_2\subset {\mathbb C}^{r+1}$ where
$V_i$ has complex dimension $i$.
\begin{lemma}
\label{lemma:ProjectiveBundle}
The fiber bundle $S^2\to \SG r \to \Gr 2 {r+1}$
is isomorphic to the fiber bundle
${\mathbb{C}\mathrm{P}}^1 \to {\mathbb P} (\gamma_2) \to \Gr 2 {r+1}$.
\end{lemma}
\begin{proof}
It is enough to show that there is a bundle map
$\Fratilde 2 {r+1} 1 /{\mathbb T} \to {\mathbb P} (\gamma_2 )$ which is a fiberwise
diffeomorphism. We have a smooth map
\[ f: \St 2 {r+1} \to {\mathbb P} (\gamma_2 )\quad ;\quad
(a,b)\mapsto ({\mathbb C} \{a \} \subset {\mathbb C} \{ a, b\} \subset {\mathbb C}^{r+1}), \]
which factors through $\Fratilde 2 {r+1} 1 /{\mathbb T}$ since multiplying
the generators by units does not change a linear span. It suffices
to see that $\overline f:\Fratilde 2 {r+1} 1 /{\mathbb T} \to {\mathbb P} (\gamma_2 )$
is a diffeomorphism when restricted to the fibers over
${\mathbb C}^2 \subset {\mathbb C}^{r+1}$ since we can then get the result for a
general fiber by changing basis. The map of fibers is
$\Fratilde 2 2 1 \to {\mathbb{C}\mathrm{P}}^1$ which under the standard identification
$U(2)/(U(1)\times U(1))\cong {\mathbb{C}\mathrm{P}}^1$ corresponds to the quotient map
\[ (\frac {U(2)} {{\mathrm{diag}}_2 (U(1))})/{\mathbb T} \to
\frac {U(2)} {U(1)\times U(1)} .\]
This map is a diffeomorphism because of the following identity of
$2\times 2$ diagonal matrices:
\[ D(e^{2\pi i \alpha }, e^{2\pi i \beta })=
D(e^{\pi i (\alpha + \beta )},e^{\pi i (\alpha + \beta )})
D(e^{\pi i (\alpha - \beta )},e^{\pi i (\beta - \alpha )}); \quad
\alpha , \beta \in {\mathbb R} .\]
\end{proof}
\section{The cohomology of spaces of geodesics}
\label{sec:CohGeospace}
The purpose of this section is to compute the cohomology
of $\SG r$ and $S(\tau ({\mathbb{C}\mathrm{P}}^r ))$. It turns out to be
convenient to do these computations for cohomology with
coefficients in the integers. We first determine the
cohomology of the base Grassmann manifold.
\begin{theorem}
\label{th:GrassmannCohomology}
Let $c_1,c_2$ be the first two Chern classes
of the canonical bundle $\gamma_2$ over $\Gr 2 {r+1}$. Then one has
\[
H^*(\Gr 2 {r+1} ;{\mathbb Z} )\cong {\mathbb Z} [c_1,c_2]/(\phi_r ,\phi_{r+1} ),
\]
where $\phi_i \in {\mathbb Z} [c_1,c_2]$ is the polynomial defined
inductively by
\begin{displaymath}
\phi_0 = 1; \quad \phi_1 = -c_1; \quad
\phi_i = -c_1 \phi_{i-1} - c_2 \phi_{i-2}.
\end{displaymath}
\end{theorem}
\begin{proof}
According to \cite{Borel}, proposition 31.1, we have an isomorphism
\begin{equation} \label{eq:Borel}
H^*(\Gr n {n+m} ;{\mathbb Z} ) \cong
\frac {{\mathbb Z} [x_1, \dots ,x_n]^{\Sigma_n} \otimes
{\mathbb Z} [x_{n+1}, \dots , x_{n+m}]^{\Sigma_m}}
{({\mathbb Z} [x_1, \dots ,x_{n+m}]^{\Sigma_{n+m}})^+}
\end{equation}
Here the degree of $x_i$ is 2 for all $i$ and $\Sigma_k$ denotes
the symmetric group. The plus sign means forming the ideal
generated by elements in positive degrees.
By \cite{Borel}, 30.2 one sees that this
isomorphism comes from the fibration
\[ \frac {U(n+m)} {U(n)\times U(m)} \to
\frac {EU(n+m)} {U(n)\times U(m)} \to
\frac {EU(n+m)} {U(n+m)}, \]
or equivalently the fibration
\[ \Gr n {n+m} \xrightarrow{j_{n,m}} BU(n) \times BU(m)
\xrightarrow{B\rho_{n,m}} BU(n+m), \]
where $\rho_{n,m}: U(n)\times U(m) \to U(n+m)$ is the usual
block matrix inclusion. The isomorphism above appears as the
factorization of $j_{n,m}^*$ through the positive degree part of the
image of $(B\rho_{n,m})^*$.
One can describe $j_{n,m}$ as the composite map
\[ j_{n,m} : \Gr n {n+m} \to \Gr n {n+m} \times \Gr m {n+m} \to
\Gr n \infty \times \Gr m \infty , \]
where the first map is given by $V\mapsto (V,V^\perp )$.
Using the fact that the pullback of the canonical bundle $\gamma_m$
along the map $\Gr n {n+m} \to \Gr m {n+m}$, $V\mapsto V^\perp$
is the orthogonal complement $\gamma_n^\perp$, one sees that the
Chern classes map as follows:
\[ j_{n,m}^* (c_i (\gamma_n ({\mathbb C}^\infty ))) = c_i(\gamma_n) \quad , \quad
j_{n,m}^* (c_i (\gamma_m ({\mathbb C}^\infty ))) = c_i(\gamma_n^\perp ). \]
Put $c_i=c_i(\gamma_n )$ and $\bar c_j = c_j(\gamma_n^\perp )$.
By the dimension of the bundles we have that $c_i=0$ for $i>n$ and
$\bar c_j=0$ for $j>m$. Since
$\gamma_n \oplus \gamma_n^\perp \cong \epsilon^{n+m}$
we also have that $\sum_{i+j=k} c_i\bar c_j=0$ for $k>0$.
This gives us a quotient map into the cohomology of the Grassmann
manifold, which by (\ref{eq:Borel}) is an isomorphism
\[
H^*(\Gr n {n+m} ;{\mathbb Z} ) \cong
{\mathbb Z} [c_i,\bar c_j | i,j>0] /
(c_i|i>n)+(\bar c_j|j>m)+(\sum_{i+j=k}c_i\bar c_j |k>0).
\]
In the special case $n=2$ and $m=r-1$ we have that
\begin{align*}
& {\mathbb Z}[c_i,\bar c_j|i,j>0]/(c_i|i>2)+(\sum_{i+j=k} c_i\bar c_j |k>0)
\cong \\
&{\mathbb Z}[c_1,c_2,\bar c_1,\bar c_2, \dots]/ ( \bar c_k-\phi_k(c_1,c_2)|k>0)
\end{align*}
Dividing by $(\bar c_j|j>r-1)$ we see that
\[
H^*(\Gr 2 {r+1} ;{\mathbb Z} ) \cong
{\mathbb Z}[c_1,c_2]/ (\phi_i| i\geq r).
\]
However, it follows from the inductive definition that
$\phi_i$ is contained in the ideal generated by
$\phi_r,\phi_{r+1}$ for all $i\geq r$, and this finishes the proof.
\end{proof}
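As a quick consistency check of theorem \ref{th:GrassmannCohomology},
consider the case $r=2$. The recursion gives $\phi_2 =c_1^2-c_2$ and
$\phi_3 =-c_1^3+2c_1c_2$, so
\[ H^*(\Gr 2 3 ;{\mathbb Z} )\cong
{\mathbb Z} [c_1,c_2]/(c_1^2-c_2,\medspace -c_1^3+2c_1c_2)\cong
{\mathbb Z} [c_1]/(c_1^3). \]
This agrees with $H^*({\mathbb{C}\mathrm{P}}^2 ;{\mathbb Z} )$, as it should, since
$V\mapsto V^\perp$ is a diffeomorphism $\Gr 2 3 \cong \Gr 1 3 ={\mathbb{C}\mathrm{P}}^2$.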
We now turn to the projective bundle over the Grassmann
space.
\begin{theorem}
\label{th:ProjectiveCohomology}
Let $\pi : {\mathbb P} (\gamma_2 ) \to \Gr 2 {r+1}$ be the projective
bundle of the canonical bundle $\gamma_2$. There is an isomorphism
\[ H^*({\mathbb P} (\gamma_2 );{\mathbb Z} ) \cong
{\mathbb Z} [x_1,x_2] / (Q_r,Q_{r+1}) , \]
where $x_1$ and $x_2$ have degree $2$ and
\[ Q_k(x_1,x_2) = \sum_{i=0}^k x_1^ix_2^{k-i} \in {\mathbb Z} [x_1,x_2]. \]
We also have that
$\pi^*(c_1(\gamma_2 ))=x_1+x_2$ and $\pi^*(c_2(\gamma_2 ))=x_1x_2$.
\end{theorem}
\begin{proof}
We use \cite{Husemoller}, 17.2.5 and 17.2.6.
Let $\lambda \to {\mathbb P} (\gamma_2 )$ be the subbundle of the
pullback $\pi_*(\gamma_2 )$ defined such that a point in the
total space of $\lambda$ is a pair
$(\{ V_1\subset V_2 \subset {\mathbb C}^{r+1} \}, v)$ where the complex
dimension of $V_i$ is $i$ and $v\in V_1$. Let $\bar \lambda$ be
the conjugate bundle of $\lambda$. Then, we have
\[
H^*({\mathbb P} (\gamma_2 );{\mathbb Z} ) \cong
H^*(\Gr 2 {r+1} ;{\mathbb Z} )[c_1(\bar \lambda )] /
(c_1(\bar \lambda )^2 + c_1(\gamma_2 )c_1(\bar \lambda ) +
c_2(\gamma_2 )).
\]
Combining this with theorem \ref{th:GrassmannCohomology} we see that
$H^*({\mathbb P} (\gamma_2 );{\mathbb Z} )$ is generated by the three classes
$\pi^*(c_1(\gamma_2 ))$, $\pi^*(c_2(\gamma_2 ))$ and
$c_1(\lambda )=-c_1(\bar \lambda )$
with the three relations
\begin{align*}
& \phi_r \big( \pi^* (c_1(\gamma_2 )),\pi^* (c_2(\gamma_2 ))\big) =0, \\
& \phi_{r+1} \big( \pi^* (c_1(\gamma_2 )),\pi^* (c_2(\gamma_2 ))\big) =0, \\
& \pi^* (c_2(\gamma_2 ))=
c_1(\lambda )\pi^* (c_1(\gamma_2 ))-c_1(\lambda )^2.
\end{align*}
Define $x_1=c_1(\lambda )$ and
$x_2=\pi^* (c_1(\gamma_2 ))-c_1(\lambda )$.
Using the last of the above equations to eliminate
$\pi^* (c_2(\gamma_2 ))$, we get that $H^*({\mathbb P} (\gamma_2 );{\mathbb Z} )$
is generated by the classes $x_1$ and $x_2$ subject to the relations
we get by substituting $\pi^* (c_1(\gamma_2 ))$ and $\pi^* (c_2(\gamma_2 ))$
expressed by $x_1$ and $x_2$ into $\phi_r$ and $\phi_{r+1}$.
Note that $\pi^* (c_1(\gamma_2 ))=x_1+x_2$ and that
\[
\pi^* c_2(\gamma_2 )=
c_1(\lambda )\pi^* (c_1(\gamma_2 ))-c_1(\lambda )^2 =
x_1(x_1+x_2)-x_1^2 = x_1x_2.
\]
So we put $Q_i(x_1,x_2)=(-1)^i\phi_i (x_1+x_2,x_1x_2)$.
The two relations are then the polynomials $Q_r$, $Q_{r+1}$
in $x_1$ and $x_2$. The polynomials $Q_i$ are given inductively
by substituting into the inductive definition of $\phi_i$.
The inductive formula becomes
\[
Q_0 = 1, \quad Q_1 = x_1+x_2, \quad
Q_i = (x_1+x_2)Q_{i-1}-x_1x_2Q_{i-2}.
\]
It is easy to check that the polynomials
\[
Q_i(x_1,x_2)=\sum_{j=0}^i{x_1^jx_2^{i-j}}
=(x_1^{i+1}-x_2^{i+1})/(x_1-x_2)
\]
satisfy this inductive definition.
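For instance, $Q_2=(x_1+x_2)Q_1-x_1x_2Q_0=x_1^2+x_1x_2+x_2^2$,
in accordance with the closed formula.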
\end{proof}
Because of lemma \ref{lemma:ProjectiveBundle} we have the
following result:
\begin{corollary}
\label{cor:GeoCohomology}
There is an isomorphism
$H^*(\SG r;{\mathbb Z})\cong{\mathbb Z}[x_1,x_2]/(Q_r,Q_{r+1})$.
The map induced by the stabilization map $\SG r \hookrightarrow \SG {r+1}$ sends
$x_1$ and $x_2$ in $H^*(\SG {r+1} ;{\mathbb Z} )$ to the classes with the
same names in $H^*(\SG r ;{\mathbb Z} )$.
\end{corollary}
\begin{proof}
The statement about the stability of the classes $x_1$ and $x_2$
follows from the fact that they are defined using Chern classes
of the bundles $\gamma_2$ and $\lambda$. These bundles pull back
to bundles with the same names.
\end{proof}
We note that it is very easy to check whether
a polynomial is in the ideal generated by $Q_r$ and $Q_{r+1}$.
\begin{lemma}
\label{le:Qcheck}
Let $P=\sum_{i=0}^m p_ix_1^ix_2^{m-i}$ be a homogeneous polynomial of
degree $m$. Then $P$ is contained in the ideal $I=(Q_r,Q_{r+1})$
if and only if $p_i=p_j$ for all $i,j$ such that
$m-r\leq i\leq r$ and $m-r\leq j\leq r$.
\end{lemma}
\begin{proof}
Since $x_1^{r+1}=Q_{r+1}-x_2Q_r$, and similarly for $x_2^{r+1}$,
the monomials $x_1^ix_2^{m-i}$ are contained in $I$ if
$0\leq i\leq m-r-1$ or $r+1\leq i\leq m$. It follows that
$P\in I$ if and only if
$\sum_{m-r \leq i\leq r}{p_ix_1^ix_2^{m-i}}\in I$.
If $p_{m-r}=p_{m-r+1}=\dots =p_r$, we see that
$P$ is congruent to a multiple of $Q_m$ modulo $I$, and hence $P\in I$.
To see that this condition also is necessary,
let $J\subset I$ be the ideal generated by $x_1^{r+1}$
and $x_2^{r+1}$. Then $x_1Q_k=x_2Q_k=Q_{k+1} \mod J$,
so $I$ is generated as an abelian group by $J$
together with $Q_k$ for $k\geq r$. So for any homogeneous
polynomial $P\in I$ of degree $m$, there is a
$\lambda \in {\mathbb Z}$ such that $P-\lambda Q_m\in J$. Since no
monomial $x_1^ix_2^{m-i}$ with $m-r\leq i\leq r$ occurs in $J$,
comparing coefficients gives $p_i=\lambda$ for all such $i$.
This completes the proof.
\end{proof}
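For instance, for $r=2$ and $m=2$ the lemma shows that
$Q_2=x_1^2+x_1x_2+x_2^2$ lies in $I$, since all its coefficients
equal $1$, while $x_1x_2\notin I$, since $p_0=0\neq 1=p_1$.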
Next we consider the space of parametrized geodesics
$S(\tau({\mathbb{C}\mathrm{P}}^r))$. We consider two fibrations, namely the
middle horizontal fibration in (\ref{eq:fiberdiagram})
and the obvious spherical fibration:
\[ SO(3) \to S(\tau ({\mathbb{C}\mathrm{P}}^r )) \xrightarrow{q} \Gr 2 {r+1}, \quad \quad
S^{2r-1} \to S(\tau ({\mathbb{C}\mathrm{P}}^r )) \xrightarrow{\eta} {\mathbb{C}\mathrm{P}}^r .\]
Let $\gamma_1$ be the canonical bundle over ${\mathbb{C}\mathrm{P}}^r$, so that
\[
H^*({\mathbb{C}\mathrm{P}}^r; {\mathbb Z} )={\mathbb Z} [c_1(\gamma_1 )]/(c_1(\gamma_1 )^{r+1}).
\]
We have the following result (compare with the remark in the
introduction of \cite{AGMP}):
\begin{lemma}
\label{lemma:TangentCohomology}
Put $t=\eta^* (c_1(\gamma_1 ))$. There is a class
$\bar \sigma \in H^{2r+1}(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb Z} )$ such that
\[
H^*(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb Z} )={\mathbb Z} [t,\bar \sigma ]/
(t^{r+1}, \medspace \bar \sigma^2 , \medspace (r+1)t^r,
\medspace t^r\bar\sigma ).
\]
Furthermore, $q^*(c_1(\gamma_2 ))=2t$
and $q^* (c_2(\gamma_2 ))=t^2$.
\end{lemma}
\begin{proof}
We consider the Serre spectral sequence for the spherical
fibration $\eta$:
\[
E_2^{**}={\mathbb Z} [c_1(\gamma_1 ),\sigma ]/
(c_1(\gamma_1 )^{r+1}, \medspace \sigma^2 )\Rightarrow
H^*(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb Z} ),
\]
where $\deg (\sigma )=2r-1$. For dimensional reasons, the only possibly
non-trivial differential is $d_{2r}$, which is given by the Euler
class of the spherical fibration:
$d_{2r}(\sigma )= e(\eta ) = (r+1)c_1(\gamma_1 )^r$.
For this, cf.\ \cite{Milnor}, corollary 11.12 and theorem 12.2.
Let $\bar \sigma$ be a class representing
$c_1(\gamma_1 )\sigma$. For dimensional reasons there can be
neither further differentials nor extension problems. This
finishes the cohomology computation.
\textit{Claim:} There is a bundle isomorphism
$q^*(\gamma_2 )\cong \eta^* (\gamma_1) \oplus \eta^*(\gamma_1 )$. \\
To prove the claim, we use the diffeomorphism $\psi$ from
proposition \ref{lemma:UnitTangents}. We have a commutative diagram
\[ \xymatrix@C=1.5 cm{
& {\mathbb{C}\mathrm{P}}^{r} \\
\Fr 2 {r+1} \ar[r]^\psi_\cong \ar[ru]^{pr_1} \ar[rd]_p &
S(\tau ({\mathbb{C}\mathrm{P}}^r )) \ar[u]_\eta \ar[d]^q \\
& \Gr 2 {r+1},
} \]
where $p$ is the canonical projection and $pr_1$ is given by
projection on the first factor. We also have the projection on
the second factor $pr_2$ and
$pr_1^*(\gamma_1 ) \cong pr_2^*(\gamma_1 )$.
So it suffices to show that
$p^*(\gamma_2 ) \cong pr_1^*(\gamma_1 ) \oplus pr_2^*(\gamma_1 )$.
But this isomorphism is obvious.
We can now compute the total Chern class as follows:
\[
1+c_1(q^*(\gamma_2 ))+c_2(q^*(\gamma_2 ))=
(1+c_1(\eta^*(\gamma_1 )))^2=
1+2c_1(\eta^*(\gamma_1))+(c_1(\eta^*(\gamma_1 )))^2.
\]
The final statement of the lemma follows from the
naturality of the Chern classes.
\end{proof}
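\begin{example}
For $r=1$, lemma \ref{lemma:TangentCohomology} gives
\[
H^*(S(\tau ({\mathbb{C}\mathrm{P}}^1 ));{\mathbb Z} )\cong
{\mathbb Z} [t,\bar \sigma ]/(t^2,\medspace \bar \sigma^2 ,
\medspace 2t, \medspace t\bar \sigma ),
\]
with $\deg (t)=2$ and $\deg (\bar \sigma )=3$, that is
${\mathbb Z}$, $0$, ${\mathbb Z} /2$, ${\mathbb Z}$ in degrees $0,1,2,3$.
This agrees with the identification
$S(\tau ({\mathbb{C}\mathrm{P}}^1 ))\cong SO(3)\cong {\mathbb R}P^3$ from example
\ref{example:S2} and the classical cohomology of ${\mathbb R}P^3$.
\end{example}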
\begin{remark}
\label{SCohomologyModP}
Similarly to the integral computation, we can compute
cohomology with coefficients in ${\mathbb F}_p = {\mathbb Z}/p$ for $p$ prime.
Let $x$ be the mod $p$ reduction of $t$.
If $p\mid (r+1)$, there is a class
$\sigma \in H^{2r-1}(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb F}_p )$ such that
\[
H^*(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb F}_p )\cong
{\mathbb F}_p [x,\sigma ]/(x^{r+1},\medspace \sigma^2 ).
\]
Similarly, if $p\nmid (r+1)$, there is a class
$\bar \sigma \in H^{2r+1}(S(\tau ({\mathbb{C}\mathrm{P}}^r));{\mathbb F}_p )$ such that
\[
H^*(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb F}_p )\cong
{\mathbb F}_p [x,\bar \sigma ]/(x^r, \medspace \bar \sigma^2 ).
\]
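For instance, for $r=1$ and $p=2$ the first case applies, and
${\mathbb F}_2 [x,\sigma ]/(x^2,\sigma^2 )$ with $\deg (x)=2$,
$\deg (\sigma )=1$ is one-dimensional in each of the degrees
$0,1,2,3$, matching $H^*(SO(3);{\mathbb F}_2 )$.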
\end{remark}
\begin{corollary}
\label{cor:QuotientMap}
The composition $S(\tau ({\mathbb{C}\mathrm{P}}^r )) \xrightarrow{q^\prime} \SG r
\xrightarrow{\psi} {\mathbb P} (\gamma_2)$ of the quotient map with the
diffeomorphism satisfies
\[ (\psi \circ q^\prime )^*(x_1)= (\psi \circ q^\prime )^*(x_2)=
\eta^* (c_1(\gamma_1 )). \]
The kernel of the map $(\psi\circ q^\prime)^*$ is a copy of the
integers, generated by $x_1-x_2$.
\end{corollary}
\begin{proof}
Put $f=\psi \circ q^\prime$ and consider the composition
\[
S(\tau ({\mathbb{C}\mathrm{P}}^r ))\xrightarrow{f} {\mathbb P} (\gamma_2 )
\xrightarrow{\pi} \Gr 2 {r+1} ,
\]
which equals the map $q$ from (\ref{eq:fiberdiagram}).
By lemma \ref{lemma:TangentCohomology} we have that
\[ (\pi \circ f)^*(c_1(\gamma_2 ))=2t \quad , \quad
(\pi \circ f)^*(c_2(\gamma_2 ))=t^2, \]
where $t=\eta^* (c_1(\gamma_1 ))$.
According to theorem \ref{th:ProjectiveCohomology} we also have
that $\pi^* (c_1(\gamma_2 ))=x_1+x_2$ and
$\pi^* (c_2(\gamma_2 ))=x_1x_2$. So we conclude that
\[
f^*(x_1)+f^*(x_2) = 2t \quad , \quad
f^*(x_1)f^*(x_2) = t^2.
\]
If $r\geq 3$ one has
$H^i(S(\tau({\mathbb{C}\mathrm{P}}^r));{\mathbb Z})\cong {\mathbb Z}$ for $i=2,4$ generated by
$t$ and $t^2$ respectively. So these equations have the unique
solution $f^*(x_1)=f^*(x_2)=t$ which proves the corollary in this
case. If $r\leq 2$, we use the standard inclusion ${\mathbb C}^{r+1} \subset {\mathbb C}^4$
together with naturality to get the desired result.
\end{proof}
\section{Borel cohomology of spaces of geodesics}
\label{sec:Orbits}
We consider the space of parametrized geodesics $S(\tau ({\mathbb{C}\mathrm{P}}^r ))$
with the free ${{\mathbb T}}$-action. Let $S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)}$ denote
$S(\tau ({\mathbb{C}\mathrm{P}}^r ))$ where we have twisted the ${{\mathbb T}}$-action
by composing it with the $n$th power map $\lambda_n :{{\mathbb T}} \to {{\mathbb T}}$;
$z\mapsto z^n$. Write $C_n \subseteq {{\mathbb T}}$ for the cyclic group of
order $n$. The map $\lambda_n$ passes to an isomorphism ${{\mathbb T}}/C_n \to {{\mathbb T}}$
with inverse $R_n$ given by sending $z$ to an $n$th root of $z$.
We write $BC_n$ for $E{{\mathbb T}}/C_n$ with ${{\mathbb T}}$-action through
$R_n :{{\mathbb T}}\to {{\mathbb T}}/C_n$.
The action of $C_n$ on $S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)}$ is trivial. This makes it
possible to consider $S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)}$ as a ${{\mathbb T}}/C_n$ space.
We get homeomorphisms as follows:
\[
E{{\mathbb T}}\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)} \cong
E{{\mathbb T}}/C_n \times_{{{\mathbb T}}/C_n} S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)} \cong
BC_n \times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r )).
\]
We can write ${{\mathbb T}}\to S(\tau ({\mathbb{C}\mathrm{P}}^r )) \to \SG r$ as a pullback
of the universal ${{\mathbb T}}$-bundle ${{\mathbb T}}\to E{{\mathbb T}}\to B{{\mathbb T}}$ along a map
$f:\SG r \to B{{\mathbb T}}$. By this pullback diagram we get a map of
associated fibration sequences:
\begin{equation}
\label{eq:CpAction}
\xymatrix@C=2 cm{
BC_n \ar[r] \ar@{=}[d]
& BC_n\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r )) \ar[r]^-{\rho} \ar[d]
& \SG r \ar[d]^f \\
BC_n \ar[r] & B{{\mathbb T}} \ar[r] & B{{\mathbb T}}.
}
\end{equation}
Note that we have used the homotopy equivalence
$pr_1:BC_n\times_{{{\mathbb T}}} E{{\mathbb T}} \xrightarrow{\simeq} B{{\mathbb T}}$
in the middle of the lower part of the diagram.
We can now compute the cohomology of the homotopy orbit spaces.
\begin{theorem}
\label{th:CohomologyOrbits}
For any prime $p$ one has
\[
H^*(E{{\mathbb T}}\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)})\cong
\begin{cases}
{\mathbb F}_p [x_1,x_2]/(Q_r,Q_{r+1}),
& p\nmid n, \\
{\mathbb F}_p [u,x,\sigma ]/(x^{r+1},\sigma^2 ),
& p\mid n, \medspace p \mid (r+1), \\
{\mathbb F}_p [u,x,\bar \sigma ]/(x^r,\bar \sigma^2 ) ,
& p\mid n, \medspace p\nmid (r+1),
\end{cases}
\]
where $u$, $x$, $x_1$, $x_2$ have degree $2$ and $\deg (\sigma )=2r-1$,
$\deg (\bar \sigma )=2r+1$.
\end{theorem}
\begin{proof}
If $p$ does not divide $n$, then the mod $p$ cohomology of $BC_n$
equals that of a point, and by an obvious spectral sequence argument
the projection map $\rho$ induces an isomorphism in ${\mathbb F}_p$
cohomology. The result follows by corollary \ref{cor:GeoCohomology}.
Now assume that $p$ does divide $n$. Then one has
$H^*(BC_n)\cong {\mathbb F}_p[v,w]/I_{n,p}$
where $I_{n,p}$ is the ideal defined by
$I_{n,p}=(v^2-w)$ if $p=2$ and $4\nmid n$
and $I_{n,p}=(v^2)$ otherwise. The degrees are $\deg (v)=1$ and
$\deg (w)=2$.
The two fibrations of diagram (\ref{eq:CpAction}) each give a spectral
sequence, and the vertical maps induce a map of spectral sequences. Let us denote the
spectral sequence derived from the lower row of the diagram
by $E_r(I)$, the one derived from the upper row by
$E_r(II)$ and the map by $f^*: E_r(I) \to E_r(II)$.
We have that $E_2(I)={\mathbb F}_p [u]\otimes {\mathbb F}_p [v,w]/I_{n,p}$
where the only non-trivial differential is $d_2$, which
is determined by $d_2w=0$, $d_2v=u$ and the product structure.
This follows since the inclusion of the fiber
$BC_n \to B{{\mathbb T}}$ is given by the inclusion $C_n\subseteq {{\mathbb T}}$ and
the spectral sequence converges to $H^*(B{{\mathbb T}})$
such that $E_\infty (I)= {\mathbb F}_p [w]$.
We compute the $E_2$ page of the other spectral sequence:
\[
E_2(II) \cong H^*(\SG r)\otimes {\mathbb F}_p [v,w]/I_{n,p}
\Rightarrow
H^*(BC_n\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r ))).
\]
The class $w$ is a permanent cycle, since it is the image of a
permanent cycle in the spectral sequence $E_r(I)$.
The whole spectral sequence is generated by
the permanent cycle $w$ together with
the classes in $E_2^{*,0}(II)$ and $E_2^{*,1}(II)$.
But this implies, for formal reasons, that the only possible
non-trivial differential is $d_2$. Using the product structure we
also see that this non-trivial differential is determined by
$d_2(v)$, which by naturality equals the image $f^*(d_2(v))=f^*(u)$.
We claim that $f^*(u)=\lambda (x_1-x_2)$
for some $\lambda \in {\mathbb F}_p \setminus \{ 0 \}$.
To see this, we consider the fibration sequence
\[ \xymatrix{
S(\tau ({\mathbb{C}\mathrm{P}}^r ))\ar[r] &\SG r \ar[r]^-f & B{{\mathbb T}}.
} \]
We have already investigated the involved spaces.
According to lemma \ref{lemma:TangentCohomology}
and corollary \ref{cor:GeoCohomology}, the corresponding
spectral sequence for integral cohomology has the form
\[
E_2={\mathbb Z} [u] \otimes {\mathbb Z} [t,\bar \sigma]/J
\Rightarrow {\mathbb Z} [x_1,x_2]/(Q_r,Q_{r+1}),
\]
where $J=(t^{r+1},\bar \sigma^2, (r+1)t^r,t^r\bar \sigma )$.
In the notation, we do not distinguish between the classes
$x_1,x_2\in H^2({\mathbb P} (\gamma_2);{\mathbb Z})$ and their pull-backs under
$\psi\colon \SG r \to {\mathbb P} (\gamma_2)$.
Consider the case $r\geq 2$ first. We have a short exact sequence
\[ \xymatrix{
0\ar[r] & H^2(B{{\mathbb T}};{\mathbb Z} )\ar[r]^-{f^*} &
H^2(\SG r ;{\mathbb Z} )\ar[r]^-{(q^\prime )^* } &
H^2(S(\tau ({\mathbb{C}\mathrm{P}}^r ));{\mathbb Z} ) \ar[r] & 0.\\
} \]
According to corollary \ref{cor:QuotientMap}
the kernel of $(q^\prime )^*$ is the free group generated
by $x_1-x_2$.
It follows that $f^*(u)=\pm(x_1-x_2)$. Using naturality on the canonical
inclusion $\SG 1 \subset \SG 2$, which is a ${{\mathbb T}}$-map,
we see that this formula is also true for $r=1$. So we have
proved the claim.
We return to cohomology with ${\mathbb F}_p$ coefficients.
Let $K_r$ and $C_r$ be the kernel and the cokernel of
multiplication by the element $x_1-x_2$ on
$H^*(\SG r)$. Then
\[
E_3(II)=({\mathbb F}_p [w] \otimes C_r)\oplus ({\mathbb F}_p [w] \otimes vK_r),
\]
and the spectral sequence collapses from the $E_3$-page.
By theorem \ref{th:ProjectiveCohomology} and the equation
$Q_k(x_1,x_1)=(k+1)x_1^k$ we see that
\[
C_r=
\begin{cases}
{\mathbb F}_p [x_1]/x_1^{r+1}, & p \mid (r+1),\\
{\mathbb F}_p [x_1]/x_1^{r}, & p \nmid (r+1).
\end{cases}
\]
The dimension of the cokernel agrees with the dimension of the kernel.
So the kernel of multiplication by
$x_1-x_2$ is a vector space over ${\mathbb F}_p$ of dimension
$r+1$ if $p\mid (r+1)$, and dimension $r$ if $p\nmid (r+1)$.
We need more precise information about the kernel,
since we want to compute the multiplicative structure
of the spectral sequence $E_r(II)$.
To determine the kernel, it is enough to exhibit as many linearly
independent elements in the kernel as its known dimension.
So it suffices to find $r+1$ non-trivial elements in pairwise
different degrees if $p\mid (r+1)$, and $r$ non-trivial elements
in pairwise different degrees if $p\nmid (r+1)$.
Consider the following homogeneous polynomial
\[
a_k=x_1^k\sum_{i=0}^{r-1} (i+1) x_1^ix_2^{r-1-i}.
\]
A computation shows that
$(x_2-x_1)a_k=x_1^k(Q_r(x_1,x_2)-(r+1)x_1^r)$.
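Explicitly, after dividing by the common factor $x_1^k$ the product
telescopes:
\[
(x_2-x_1)\sum_{i=0}^{r-1} (i+1) x_1^ix_2^{r-1-i}
= \sum_{j=0}^{r-1} x_1^jx_2^{r-j} -rx_1^r
= Q_r(x_1,x_2)-(r+1)x_1^r .
\]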
So if $p\mid (r+1)$, the elements $a_k$ for $k\geq 0$
are all in the kernel of multiplication by $x_1-x_2$.
If $p\nmid (r+1)$ then $a_k$ is in the kernel if
$k\geq 1$, since $x_1^{r+1}=Q_{r+1}-x_2Q_r$.
To show that the kernel is generated by these classes, we
have to check that each $a_k$ is non-trivial in
${\mathbb F}_p [x_1,x_2]/(Q_r,Q_{r+1})$, as long as $k\leq r$.
We use lemma \ref{le:Qcheck} to do so. In $a_k$, the coefficient
of $x_1^{k-1}x_2^r$ is $0$ and the coefficient of
$x_1^kx_2^{r-1}$ is $1$. Furthermore, both $k-1$ and $k$ lie
between $(k+r-1)-r=k-1$ and $r$, so by the lemma we conclude
that $a_k \notin (Q_r,Q_{r+1})$ for $k\leq r$ as desired.
It follows that if $p\mid (r+1)$ then the $r+1$ elements
$a_0,a_1,\dots ,a_r$ form a basis for the kernel and
if $p\nmid (r+1)$ then the $r$ elements
$a_1,a_2,\dots ,a_r$ form a basis for the kernel.
By the explicit formula for $a_k$, we see that the kernel has
basis $a_0,x_1a_0,\dots ,x_1^ra_0$ when $p\mid (r+1)$ and
$a_1,x_1a_1,\dots ,x_1^{r-1}a_1$ when $p\nmid (r+1)$.
Let $\sigma$ represent $a_0$ when $p\mid (r+1)$ and
let $\bar \sigma$ represent $a_1$ when $p\nmid (r+1)$.
We obtain the $E_3$ term
\[
E_3(II) =
\begin{cases}
{\mathbb F}_p [w,x_1,\sigma ]/(x_1^{r+1},\sigma^2 ), & p\mid (r+1), \\
{\mathbb F}_p [w,x_1,\bar \sigma]/(x_1^r,\bar \sigma^2 ), & p\nmid (r+1).
\end{cases}
\]
We already noted that there can be no nontrivial differentials beyond
the second one, so $E_3(II)=E_\infty (II)$.
\end{proof}
\begin{corollary}
\label{cor:SerreCollapse}
If $p$ divides $n$, then the mod $p$ cohomology Serre spectral
sequence for the fibration
\[
\xymatrix@C=2 cm{
S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)} \ar[r]^-{i}
& E{{\mathbb T}}\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)} \ar[r]^-{pr_1}
& B{{\mathbb T}} }
\]
collapses at the $E_2$ page.
If $p$ does not divide $n$, the inclusion $i$ of the fiber induces a
surjection in even degrees
\[ i^*: H^{2*}(E{{\mathbb T}}\times_{{{\mathbb T}}} S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)})
\twoheadrightarrow H^{2*}(S(\tau ({\mathbb{C}\mathrm{P}}^r ))). \]
\end{corollary}
\begin{proof}
Assume that $p$ divides $n$. We can compute the $E_2$ page by remark
\ref{SCohomologyModP} and we have just computed the cohomology of the
total space. It turns out that the $E_2$ page is isomorphic to the
cohomology of the total space. So there is no room for any differentials.
Assume that $p$ does not divide $n$. We only have to check that the
classes $x^j$ are in the image of $i^*$. But by the homeomorphisms we
used for computing the cohomology of the Borel construction and
corollary \ref{cor:QuotientMap} we have that $i^*(x_1)=x$ and $i^*(x_2)=x$
so the result follows.
\end{proof}
\section{Equivariant vector bundles}
\label{sec:Bundles}
In this section we collect results about the homotopy theory of
${\mathbb T}$-vector bundles which we will need later.
For clarity, we work in slightly greater generality
than strictly necessary.
We are interested in the cohomology of the Borel construction on
Thom spaces. Let $G$ be a compact Lie group (we will only need the
case $G={{\mathbb T}}$). Assume that $\xi =(p:E \to X)$ is a $G$-vector-bundle
over $X$ in the sense of \cite[I.9]{tomD}.
\begin{lemma}
\label{lemma:Thom}
The map $EG\times_G \medspace p :EG\times_G E \to EG\times_G X$ is
the projection of a vector bundle which we denote $\xi_{hG}$.
There is a homeomorphism
${\mathrm{Th}} (\xi_{hG} )\cong EG_+\wedge_G {\mathrm{Th}} (\xi )$.
Moreover, if $G$ is connected, then $\xi_{hG}$ is oriented if and
only if $\xi$ is oriented when viewed as an ordinary vector bundle.
\end{lemma}
\begin{proof}
We have that $EG\times X$ is a free $G$-space. If this $G$-space
is also strongly locally trivial, then proposition \cite[I.9.4]{tomD}
gives us that $\xi_{hG}$ is a vector bundle. So we must show that
$EG\times X$ has a tube around each of its points \cite[page 46]{tomD}.
For $(e,x)\in EG\times X$ we have that $G/G_{(e,x)}=G$.
The universal principal $G$-bundle $\pi :EG\to BG$ is locally
trivial, so we have a neighborhood $V$ around $\pi (e)$ and
a trivialization $\phi :\pi^{-1} (V) \xrightarrow{\cong} V\times G$.
We get a tube around $(e,x)$ as follows:
\[ \xymatrix@C=2 cm{
\pi^{-1} (V)\times X \ar[r]^-{pr_1} & \pi^{-1} (V) \ar[r]^-{\phi}
& V\times G \ar[r]^-{pr_1} & G.
} \]
Thus $\xi_{hG}$ is a vector bundle as stated.
We may assume that there is a Riemannian metric on $\xi$ which is
invariant under the $G$ action. We get a Riemannian metric
on $\xi_{hG}$ such that
$D(\xi_{hG} )=EG\times_G D(\xi )$ and
$S(\xi_{hG} )=EG\times_G S(\xi )$.
So we have
\begin{align*}
{\mathrm{Th}} (\xi_{hG} ) &= EG\times_G D(\xi )/EG \times_G S(\xi ) \cong
EG_+\wedge_G (D(\xi )/S(\xi ))\\
&= EG_+\wedge_{G} {\mathrm{Th}}( \xi ).
\end{align*}
Regarding orientability, consider the fibration
$G\to EG\times X \xrightarrow{\pi} EG\times_G X$. It is the
pullback of $G\to EG\to BG$ along $pr_1:EG\times_G X \to BG$.
Furthermore, $BG$ is 1-connected since $G$ is connected, so we have
trivial coefficients in the $E_2$ pages of the Serre
spectral sequences for $\pi$. Thus
$H^1(EG\times_G X;{\mathbb F}_2 )\to H^1(X;{\mathbb F}_2 )$
is injective. The vector bundle $\xi$ is the pullback of
$\xi_{hG}$ along $X\to EG\times_G X$.
Since orientability is equivalent to the vanishing of
the first Stiefel--Whitney class, the result follows.
\end{proof}
\begin{corollary}
\label{cor:ThomIso}
Let $\eta$ be an $n$-dimensional ${{\mathbb T}}$-vector bundle
over the ${{\mathbb T}}$-space $X$. Assume that $\eta$ is
oriented for $H^*(-;{\mathbb Z}/p)$. Then there is an isomorphism
\[
\widetilde H^*({\mathrm{Th}} (\eta)_{h{{\mathbb T}}} ;{\mathbb F}_p ) \cong
H^{*-n}(X_{h{{\mathbb T}}} ;{\mathbb F}_p ).
\]
\end{corollary}
We are going to need a special case of the localization theorem for
Borel cohomology. The setup for the theorem is the following.
\begin{definition}
A based space $X$, homotopy equivalent to a ${\mathbb T}$-CW complex,
satisfies the localization finiteness condition if
$X$ contains only finitely many ${{\mathbb T}}$-orbit types and $X$
is finite dimensional.
\end{definition}
Let $Y$ be any ${{\mathbb T}}$ space with a fixed base point.
The space $E{{\mathbb T}}_+\wedge Y$ is equipped with a diagonal map
$\tilde \Delta :E{{\mathbb T}}_+\wedge Y \to
E{{\mathbb T}}_+\wedge E{{\mathbb T}}_+ \wedge Y$
given by $\tilde \Delta (s,y)=(s,s,y)$. After taking
quotients with group actions, we obtain
\[
\Delta_Y :E{{\mathbb T}}_+ \wedge_{{{\mathbb T}}} Y \to
(B{{\mathbb T}}_+)\wedge (E{{\mathbb T}}_+ \wedge_{{{\mathbb T}}} Y).
\]
This map induces a product
\[
H^*(B{{\mathbb T}})\otimes \widetilde H^*(E{{\mathbb T}}_+ \wedge_{{{\mathbb T}}} Y)
\to \widetilde H^*(E{{\mathbb T}}_+ \wedge_{{{\mathbb T}}} Y),
\]
which makes $\widetilde H^*(E{{\mathbb T}}_+ \wedge_{{{\mathbb T}}} Y)$
into a module over the ring $H^*(B{{\mathbb T}})$. A ${{\mathbb T}}$-equivariant map
$f:Y_1\to Y_2$ induces a module map by naturality. This module
structure will pervade the whole theory.
One thing we can use it for is to localize. We invert the generator
$u\in H^2(B{{\mathbb T}})$ to form the cohomology localized away from $u$ as follows:
\[
H^*(X_{h{{\mathbb T}}})\left[ \frac 1 u \right]=
H^*(X_{h{{\mathbb T}}})\otimes_{{\mathbb F}_p[u]} {\mathbb F}_p [u,u^{-1}].
\]
\begin{theorem}
\label{th:localization}
If $X$ satisfies the localization finiteness condition,
the inclusion $X^{C_p}\subseteq X$
induces an isomorphism of localized cohomology
\[ \xymatrix@C=2 cm{
H^*(X_{h{{\mathbb T}}})\left[ \frac 1 u \right] \ar[r]^-{\cong}
& H^*((X^{C_p})_{h{{\mathbb T}}})\left[ \frac 1 u \right].
} \]
\end{theorem}
\begin{proof}
This is a special case of the localization theorem
\cite[III.4.2]{tomD}. The parameters of the localization theorem,
as given by tom Dieck, are chosen as follows:
$G={\mathbb T}$, the cohomology theory $H^*(-)=H^*(-;{\mathbb F}_p)$
and the set $S\subseteq H^*(BG)=H^*(B{{\mathbb T}};{\mathbb F}_p)$ is
$S=\{ 1,u,u^2,\dots \}$. The subset $A\subseteq X$ is the empty set.
The statement of the localization theorem is to be interpreted as
follows: The family ${\cal F} (S)$ of subgroups of ${\mathbb T}$ (defined in
\emph{ibid} after III.3.1) is the family of subgroups $C_n\subseteq {\mathbb T}$
with $n$ not divisible by $p$. The set $X({\cal F})\subseteq X$
defined in \emph{ibid} I.6.1, is the complement of the
set $X^{C_p}$ in $X$. But then $FX=X^{C_p}$, so that theorem
\cite[III.4.2]{tomD} indeed specializes to our theorem.
\end{proof}
The application we want is the following: Let $X$ be a
space satisfying the localization finiteness condition.
Assume that the subgroup $C_p\subseteq {{\mathbb T}}$ acts trivially,
so that the action is not effective. On the other hand,
we assume that all isotropy groups of points in $X$ are contained in
$C_n\subseteq {{\mathbb T}}$ for some fixed $n$.
Let $\xi$ be a ${{\mathbb T}}$-vector bundle over $X$. We do not assume that
the action of $C_p$ on $\xi$ is trivial. The fixed points of $C_p$
form a ${{\mathbb T}}$-subbundle $\xi^{C_p}\subseteq \xi$.
\begin{theorem}
The following map induces an isomorphism on mod $p$ cohomology localized
away from $u$:
\[
E{{\mathbb T}}_+\wedge_{{\mathbb T}} {\mathrm{Th}}( \xi^{C_p} ) \to
E{{\mathbb T}}_+\wedge_{{{\mathbb T}}} {\mathrm{Th}} (\xi ).
\]
\end{theorem}
\begin{proof}
The only possible isotropy groups of ${\mathrm{Th}} (\xi )$ are
${{\mathbb T}}$ itself (for the base point in the Thom space), and subgroups
of $C_n$. In particular, there are finitely many orbit types.
Since ${\mathrm{Th}} (\xi^{C_p} )={\mathrm{Th}} (\xi )^{C_p}$, the result now follows
by applying theorem \ref{th:localization} on
the ${\mathbb T}$-space ${\mathrm{Th}} (\xi )$.
\end{proof}
\section{The twisted action}
\label{sec:Twisting}
Let $X$ be a ${\mathbb T}$-space with action map $\mu :{\mathbb T} \times X \to X$.
We can twist this action by the power map
$\lambda_n :{\mathbb T} \to {\mathbb T}$; $\lambda_n(z)=z^n$
and obtain another ${\mathbb T}$-space $X^{(n)}$. The underlying spaces of
$X$ and $X^{(n)}$ are equal, but the action map for $X^{(n)}$ is
$\mu_n: {\mathbb T} \times X^{(n)}\to X^{(n)}$;
$\mu_n (z,x)=\mu (\lambda_n (z),x)$.
\begin{lemma}
\label{le:FunctFib}
Let $X$ be a ${{\mathbb T}}$-space. We have a pullback of fibration
sequences which is natural in $X$ as follows:
\[ \xymatrix@C=2 cm{
BC_n \ar[r] \ar[d]
& E{{\mathbb T}}\times_{{\mathbb T}} X^{(n)} \ar[r]^{g_n} \ar[d]
& E{{\mathbb T}}\times_{{\mathbb T}} X\ar[d] \\
BC_n \ar[r] & B{{\mathbb T}} \ar[r]^{B\lambda_n} & B{{\mathbb T}}.
} \]
\end{lemma}
\begin{proof}
It is convenient to consider models of $E{{\mathbb T}}$ and $B{{\mathbb T}}$
which are realizations of simplicial topological abelian groups.
So let $E{{\mathbb T}}=|E{{\mathbb T}}_\bullet|$ and $B{{\mathbb T}}=|B{{\mathbb T}}_\bullet|$, where
$E{\mathbb T}_q$ is the $(q+1)$-fold Cartesian product of ${\mathbb T}$ with itself with
\[
d_i(z_0,\dots ,z_q)=(z_0,\dots ,\widehat{z_i},\dots , z_q), \quad
s_i(z_0,\dots ,z_q)=(z_0,\dots ,z_i,z_i,\dots , z_q)
\]
and $B{\mathbb T}_q =E{\mathbb T}_q /{\mathrm{diag}}_q ({\mathbb T} )$. The hat means that the element is
left out.
We have simplicial maps $E(\lambda_n )_\bullet$ and $B(\lambda_n )_\bullet$
given by raising all elements in a tuple to the $n$th power.
Since the kernel of $B(\lambda_n)_q$ is exactly $(BC_n)_q$, the bottom
row in the diagram is the realization of a short exact sequence of
simplicial abelian groups, which means that it is a fibration.
Since $E(\lambda_n )$ is a map over $B(\lambda_n)$ the right hand
square in the diagram commutes when we put
$g_n([e,x])=[E(\lambda_n )(e),x]$.
Note that this makes $g_n$ well-defined since
$E(\lambda_n )(ez)=E(\lambda_n )(e)z^n$.
The map of the vertical fibers is the identity so the right hand
square is a vertical pullback. It follows that it is also a
horizontal pullback.
\end{proof}
Here is our main result on Borel cohomology of twisted actions.
\begin{theorem}
\label{th:TwistedCollapse}
For any prime $p$ and ${\mathbb T}$-space $X$ there is an isomorphism,
natural in $X$:
\[ \xymatrix@C=2 cm{
H^*(X)\otimes {\mathbb F}_p [u] \ar[r]^\cong
& H^*(E{{\mathbb T}}\times_{\mathbb T} X^{(p)}).
} \]
\end{theorem}
To prove the theorem, we first show that there exists a natural
subgroup ${\cal F}_1^*(X)$ of $H^*((X^{(p)})_{h{\mathbb T}})$
with the property that the statement is true if we
replace $H^*(X)$ by ${\cal F}_1^*(X)$. After we have
constructed this subgroup, it is easy to identify it with
$H^*(X)$.
The spectral sequence derived from the fibration of
lemma \ref{le:FunctFib} defines a filtration
\[
{\cal F}_0^*(X) \subseteq {\cal F}_1^*(X) \subseteq \dots \subseteq
H^*(E{{\mathbb T}}\times_{{{\mathbb T}}} X^{(p)})
\]
such that
${\cal F}_i^*(X)/{\cal F}_{i-1}^*(X) \cong
\bigoplus_{*\geq 0} E_\infty^{i,*}$.
Each of these filtration subgroups is a functor on the category
of ${{\mathbb T}}$-spaces. It is well known how to interpret the first
group:
\[ {\cal F}_0^*(X)=g_p^*(H^*(X_{h{\mathbb T}})) \]
We need to consider the
next step in the filtration. This group does not have an
equally simple description.
\begin{definition}
The group of low elements
${\cal F}_1^*(X)\subseteq H^*(E{{\mathbb T}}\times_{{{\mathbb T}}} X^{(p)})$
is the subgroup consisting of those classes which in the Serre
spectral sequence represent elements in $E_\infty^{*,0}$ or
$E_\infty^{*,1}$.
\end{definition}
We define a reduced version of the group in the obvious way.
If $X$ is a pointed ${{\mathbb T}}$-space, we have inclusions of groups,
natural in such spaces
\[
g_n^*(H^*(E{{\mathbb T}}\times_{{{\mathbb T}}} X,B{{\mathbb T}}))=
{\cal F}_0^*(X,*)\subseteq {\cal F}_1^*(X,*)\subseteq
H^*(E{{\mathbb T}}\times_{{{\mathbb T}}} X^{(n)},B{{\mathbb T}}).
\]
Here is a general fact about the low elements corresponding to
$p$-fold twisting.
\begin{lemma} \label{LowBasis}
$H^*(X^{(p)}_{h{\mathbb T}})$ is a free module over
$H^*(B{\mathbb T})= {\mathbb F}_p[u]$. The low elements ${\cal F}_1^*(X)$
form a basis for it as an ${\mathbb F}_p[u]$ module.
\end{lemma}
\begin{proof}
The module structure on $H^*(X^{(p)}_{h{\mathbb T}})$
is given by the map $pr_1:X^{(p)}_{h{\mathbb T}}\to B{{\mathbb T}}$.
The Serre spectral sequence associated to the upper fibration
sequence in lemma \ref{le:FunctFib} has the form
\[
E_2^{n,m}(X)\cong H^n(X_{h{\mathbb T}})\otimes H^m(BC_p)
\Rightarrow H^{n+m}(X^{(p)}_{h{\mathbb T}}).
\]
Remember that $H^*(BC_p)\cong {\mathbb F}_p[v,w]/I_{p}$, with $v$ in degree $1$
and $w$ in degree $2$, where $I_p$ is the principal ideal generated by
$v^2-w$ if $p=2$ and by $v^2$ otherwise.
The Serre spectral sequence of the lower fibration sequence in
the lemma has $E_2^{*,*}={\mathbb F}_p [u]\otimes {\mathbb F}_p [v,w]/I_p$ and
converges towards $H^*(B{\mathbb T} )$ so we must have $d_2(v)=u$ and
$E_3(*)=E_\infty (*)={\mathbb F}_p [w]$.
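For $p$ odd, the identification of $E_3(*)$ is a routine check: $d_2$ is a
derivation vanishing on $u$ and $w$, so on the monomial basis of
${\mathbb F}_p [u]\otimes {\mathbb F}_p [v,w]/I_p$ it is given (up to sign) by
\[
d_2(u^aw^b)=0, \qquad d_2(u^avw^b)=u^{a+1}w^b .
\]
Hence the kernel of $d_2$ is spanned by the classes $u^aw^b$, the image
by the classes $u^{a+1}w^b$, and $E_3(*)$ is spanned by the powers of $w$,
as claimed. (For $p=2$, where $w=v^2$, the same argument applies.)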
Using this together with naturality in $X$ of the spectral sequence
we get that for any $X$ the class $w\in E_2^{0,2}(X)$ is a permanent
cycle. Furthermore,
\[
E_2^{*,*}(X)\cong {\mathbb F}_p[w]\otimes (E_2^{*,0}\oplus E_2^{*,1})
\]
so it follows formally that
$E_3^{*,*}(X)\cong {\mathbb F}_p[w]\otimes (E_3^{*,0}\oplus E_3^{*,1}).$
But then it also follows formally that higher differentials in
this spectral sequence vanish, so that
\[
E_\infty^{*,*}(X)\cong
{\mathbb F}_p[w]\otimes (E_\infty^{*,0}\oplus E_\infty^{*,1}).
\]
The low elements are by definition a subspace of $H^*(X^{(p)}_{h{\mathbb T}})$.
We can extend this inclusion to a unique map of ${\mathbb F}_p [u]$-modules
\[
f:{\mathbb F}_p [u]\otimes {\cal F}_1^*(X)\to H^*(X^{(p)}_{h{\mathbb T}}).
\]
This map sends $u^i\otimes {\cal F}_1^*(X)$ to
${\cal F}_i^*(X)$, and the map induces an isomorphism on
the corresponding graded rings, by the above computation of
$E_\infty^{*,j}$, and because $u\in H^2(B{\mathbb T})$ represents
$w\in E_\infty^{0,2}$.
It follows that $f$ itself is an isomorphism.
\end{proof}
\begin{proof}[Proof of theorem \ref{th:TwistedCollapse}]
We have to exhibit a natural isomorphism
${\cal F}_1^*(X)\to H^*(X)$. From the spherical fibration
\[ \xymatrix@C=1cm{
{\mathbb T} \ar[r] & E{\mathbb T} \times X^{(p)} \ar[r]^i & E{\mathbb T} \times_{\mathbb T} X^{(p)}
} \]
we get a long exact Gysin sequence
\[ \xymatrix@C=1cm{
\ar[r]
& H^{*-2}(X^{(p)}_{h{\mathbb T}}) \ar[r]^-{\cdot u}
& H^*(X^{(p)}_{h{\mathbb T}}) \ar[r]^-{i^*}
& H^*(X^{(p)}) \ar[r]
& H^{*-1}(X^{(p)}_{h{\mathbb T}}) \ar[r]
&
} \]
(One can use the map $X\to *$ to see that the Euler class is $u$.)
Since $H^*(X^{(p)}_{h{{\mathbb T}}})$ is a free ${\mathbb F}_p [u]$ module,
multiplication by $u$ is injective, and the long exact sequence
breaks up into short exact sequences. So via lemma \ref{LowBasis}
we get the short exact sequence
\[ \xymatrix@C=1cm{
0 \ar[r]
& {\mathbb F}_p [u]\otimes {\cal F}_1^*(X) \ar[r]^-{\cdot u}
& {\mathbb F}_p [u]\otimes {\cal F}_1^*(X) \ar[r]^-{i^*}
& H^*(X^{(p)}) \ar[r]
& 0.
} \]
Thus, $i^*$ factors through
$\big( {\mathbb F}_p [u]\otimes {\cal F}_1^*(X)\big) /
{\operatorname{im}} (\cdot u)= {\cal F}_1^*(X)$
and gives the desired isomorphism.
\end{proof}
\section{The $n$-fold iteration map}
\label{sec:Iteration}
Let $M$ be a compact Riemannian manifold. In this chapter we are
examining the Hilbert manifold model of the free loop space.
We denote this Hilbert manifold by $LM$. An element in $LM$ is
a curve $\gamma :{\mathbb T} \to M$ of class $H^1$. The Hilbert manifold model
is homotopy equivalent to the usual continuous mapping space
model. The energy integral
\[ E:LM \to {\mathbb R}; \quad
E(\gamma )=\int_{\mathbb T} |\gamma^{\medspace \prime}(z)|^2dz \]
is a smooth function on $LM$. Its critical points are the closed
geodesics on $M$. We review the main results of Morse theory in
this setting. For details we refer to \cite{Klingenberg},
especially to chapter 1.
The power map $\lambda_n :{\mathbb T} \to {\mathbb T}$; $\lambda_n (z) =z^n$ gives us
an $n$-fold iteration map
\[
{P}_n :LM\to LM; \quad {P}_n(\gamma )= \gamma \circ \lambda_n .
\]
Just like other versions of the iteration map,
this map is not equivariant with respect to the ${{\mathbb T}}$-action,
but becomes equivariant if we twist the action. That is,
the formula defines an equivariant map ${P}_n :(LM)^{(n)}\to LM$.
The map preserves energy up to a constant factor,
$E({P}_n(\gamma))=n^2E(\gamma)$.
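This scaling is a direct computation. Writing $c(t)=\gamma (e^{2\pi it})$,
so that ${P}_n \gamma$ corresponds to $t\mapsto c(nt)$, the substitution
$s=nt$ and the periodicity of $c$ give
\[
E({P}_n \gamma )=\int_0^1 n^2|c^{\prime}(nt)|^2 dt
=n\int_0^n |c^{\prime}(s)|^2 ds
=n^2\int_0^1 |c^{\prime}(s)|^2 ds =n^2E(\gamma ).
\]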
Consequently, it induces an equivariant map of the energy filtration
${\mathcal F} (a)=E^{-1}(-\infty,a)$ as follows:
\[
{P}_n :({\mathcal F} (a))^{(n)} \to {\mathcal F} (n^2a)
\]
We also get an equivariant map of quotients
\[
{P}_n : ({\mathcal F} (a)/{\mathcal F} ({a-\epsilon}))^{(n)} \to
{\mathcal F}({n^2a})/{\mathcal F}({n^2a-n^2\epsilon}).
\]
Let $T_\gamma LM$ denote the tangent space of $LM$ at
$\gamma$. It consists of the vector fields along $\gamma$ of
class $H^1$, and is a real Hilbert space with inner product
\[
{\inp \xi \eta}_1= \int_{{\mathbb T}} \big( \inp {\xi (z)} {\eta (z)} +
\inp {\nabla \xi (z)} {\nabla \eta (z)} \big) dz.
\]
Here $\nabla$ denotes the covariant differentiation
along the curve defined by the Levi--Civita connection of the
Riemannian manifold $M$. The inner product defines a Riemannian
metric on the Hilbert manifold $LM$.
One technical problem is that the iteration map is not an (injective)
isometry with respect to this metric. To study the properties of
${P}_n$
it will be convenient to consider a modified inner product on
$T_\gamma LM$. Let
\[
{\inp \xi \eta }_{c,1}=\int_{{\mathbb T}} \big( \inp {\xi (z)} {\eta (z)} +
c\inp {\nabla \xi (z)} {\nabla \eta (z)} \big) dz.
\]
The differential of the iteration map is given by
\[ D_\gamma ({P}_n ): T_\gamma LM \to T_{{P}_n (\gamma )}LM ;\quad
D_\gamma ({P}_n ) (\xi ) =\xi \circ \lambda_n .\]
So we have
$\nabla (D_\gamma({P}_n )\xi )= n\nabla (\xi ) \circ \lambda_n =
n D_\gamma ({P}_n )(\nabla \xi )$ and
\[
{\inp {D_\gamma ({P}_n )\xi} {D_\gamma ({P}_n )\eta }}_1
= \int_{{\mathbb T}} \inp {\xi (z^n)} {\eta (z^n)} +
n^2 \inp {\nabla (\xi )(z^n)} {\nabla (\eta )(z^n)} dz=
{\inp \xi \eta }_{n^2,1}.
\]
This means that even if the iteration map does not preserve the
inner product, it becomes an isometry if we replace the
inner product on the target by a suitably modified
inner product.
The metric on $M$ determines an exponential map
$\exp_p\colon T_pM\to M$ for each $p\in M$. This induces an
exponential map
\[
\exp^\sim_\gamma : T_\gamma LM \to LM; \quad
\exp^\sim_\gamma (\xi )(t)=\exp_{\gamma (t)}(\xi (t)).
\]
This is not the exponential map derived from the
Hilbert metric on $LM$, but it has the advantage that
it is compatible with the iteration map in the sense
that the following diagram commutes:
\[
\xymatrix@C=2cm{
T_\gamma LM \ar[r]^{D_\gamma ({P}_n )} \ar[d]^{\exp^\sim_\gamma}
& T_{{P}_n (\gamma )} LM \ar[d]^{\exp^\sim_{{P}_n(\gamma )}} \\
LM \ar[r]^{{P}_n} & LM. \\
}
\]
The exponential map is a diffeomorphism from a neighborhood of
$0$ in $T_\gamma LM$ to a neighborhood of $\gamma$ in $LM$.
Let $\gamma$ be a closed geodesic on $M$. This is a critical point
for the energy function $E$. The Hessian of the energy function defines
a quadratic form $D^2E(\gamma )$ on $T_\gamma LM$. This form determines
a self adjoint operator $A_\gamma :T_\gamma LM \to T_\gamma LM$ by the
equation $D^2E(\gamma )(\xi ,\eta )={\inp {A_\gamma \xi } \eta }_1$.
The operator $A_\gamma$ is the sum of the identity with a compact
operator, so there are at most a finite number of negative
eigenvalues, each corresponding to a finite dimensional vector space
of eigenvectors of $A_\gamma$.
Obviously the operator $A_\gamma$ depends not just on
the Hessian of the energy integral, but also on the metric on $LM$.
Since we are considering modifications of the inner product,
we also have to consider the corresponding modifications of the self adjoint operator.
To emphasize this dependence, for a real vector space
$V$ with inner product ${\inp \medspace \medspace }_\alpha $ and with a
quadratic form $Q$, we define the operator
$A({\inp \medspace \medspace }_\alpha ,Q)$ by the property that
$Q(\xi ,\eta )=
{\inp {A({\inp \medspace \medspace }_\alpha ,Q)\xi } \eta }_\alpha$.
Now let $K_a$ be the space of critical points of $E$ with energy
level $a$. The negative bundle $\mu^-$ over $K_a$ is the vector
bundle whose fiber at $\gamma$ is the vector space spanned by the
eigenvectors belonging to negative eigenvalues of $A_\gamma$.
Similarly, $\mu^0$ and $\mu^+$ are the vector bundles with fibers
spanned by the eigenvectors corresponding to the eigenvalue $0$ and the
positive eigenvalues respectively.
Using the modified inner product ${\inp \medspace \medspace}_{c,1}$
we obtain a modified negative bundle $\mu_c^-$ etc. This bundle is
not the same as $\mu^-$. But on the other hand, the difference
between these two bundles is not so dramatic, since the orthogonal
projection from $\mu_c^-$ to $\mu^-$ with respect to the standard
inner product defines an isomorphism of bundles.
\begin{lemma}
The iteration map induces a bijection
${P}_n :K_a \to (K_{an^2})^{C_n}$.
The group $C_n$ acts on the fiber $(\mu^- )_{P_n\gamma }$, and for
any $\gamma \in K_a$ the differential of the iteration map induces
an isomorphism of vector spaces
\[
\xymatrix@C=2cm{
D_\gamma ({P}_n ): (\mu_{n^2}^- )_\gamma \ar[r]^-{\cong}
& ((\mu^- )_{{P}_n \gamma })^{C_n}. }
\]
\end{lemma}
\begin{proof}
The property of being a geodesic is a local property of a curve,
so ${P}_n \gamma$ is a geodesic if and only if $\gamma$ is a geodesic.
It follows that ${P}_n (K_a)\subseteq (K_{n^2a})^{C_n}$.
On the other hand, if $\theta \in (K_{n^2a})^{C_n}$ we can write
$\theta ={P}_n \gamma$ for some $\gamma \in LM$.
But $\gamma$ has to be a geodesic, since $\theta$ is,
and $E(\gamma )=E({P}_n \gamma )/n^2=a$, which means that
$\gamma \in K_{a}$. Thus, ${P}_n$ induces a bijection as stated.
Now we look at the tangent map. We first claim that this map induces
an isomorphism
\[
\xymatrix@C=2cm{
D_\gamma ({P}_n ): T_\gamma LM \ar[r]^-{\cong}
& (T_{{P}_n \gamma }LM)^{C_n}.}
\]
This is clear, since ${P}_n \gamma$ is periodic of period $n$,
and any $n$-periodic vector field along ${P}_n \gamma$ is the
image under the differential of the iteration map of a vector
field on $\gamma$.
We must show that this isomorphism restricts to an isomorphism as
stated. The group $C_n$ acts on $V=T_{{P}_n \gamma}LM$;
let us say that a generator $\sigma \in C_n$ acts by
\[ (\sigma \xi )(z)=\xi (z\zeta_n )\in T_{{P}_n \gamma (z\zeta_n )}M=
T_{{P}_n \gamma (z)}M; \quad \zeta_n =e^{2\pi i/n} . \]
Let $N\in {\mathbb C} [C_n]$ be the sum $\frac 1 n \sum_{k=0}^{n-1} \sigma^k$.
This $N$ acts as an idempotent on $V$, and so does ${\mathbf{Id}} -N$.
The inner product is manifestly invariant under the action of $C_n$,
so a simple calculation shows that $N$ is self adjoint and that
$N$ and ${\mathbf{Id}}-N$ are orthogonal idempotents. Moreover,
the idempotents commute with the action of the group $C_n$. So $V$ splits as a
$C_n$ representation into an orthogonal sum $NV\oplus ({\mathbf{Id}} -N)V$.
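The idempotent identities are immediate: since $\sigma^n={\mathbf{Id}}$,
reindexing the sum gives $\sigma N=N$, and hence
\[
N^2=\frac 1n \sum_{k=0}^{n-1} \sigma^kN=\frac 1n \sum_{k=0}^{n-1} N=N,
\qquad ({\mathbf{Id}} -N)^2={\mathbf{Id}} -2N+N^2={\mathbf{Id}} -N.
\]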
The energy function is also invariant under the group action,
so by the same argument as we just used for the inner product,
the quadratic form $(V,D^2_{{P}_n \gamma}(E))$ splits as a direct sum
$(NV,D^2_{{P}_n \gamma}(E))\oplus (({\mathbf{Id}}-N)V,D^2_{{P}_n \gamma}(E))$.
Let $A=A({\inp \medspace \medspace}_1 ,D^2_{{P}_n \gamma}(E))$.
Since both the inner product and the quadratic form split
as direct sums, the linear endomorphism $A:V\to V$ is a direct sum
of two self adjoint endomorphisms
\[ A|_{NV}:NV\to NV, \quad A|_{({\mathbf{Id}}-N)V}:({\mathbf{Id}}-N)V\to ({\mathbf{Id}}-N)V. \]
It follows that the subspace of $V$ generated by negative eigenvectors
equals the direct sum of the negative eigenvector spaces of
$A|_{NV}$ and $A|_{({\mathbf{Id}}-N)V}$.
We claim that $NV=V^{C_n}$ and $(({\mathbf{Id}} -N)V)^{C_n}=0$. This follows
since $\sigma N=N$ so that $NV\subseteq V^{C_n}$ and if
$\sigma ({\mathbf{Id}} -N)\xi =({\mathbf{Id}} -N)\xi$ then $\sigma \xi =\xi$ so that
$({\mathbf{Id}} -N)\xi =0$.
To finish the proof, we have to show that
$D_\gamma ({P}_n )(\mu^-_{n^2} )_\gamma$ equals
the negative eigenvector space of $A|_{NV}$.
But $D_\gamma ({P}_n )$ is an isometry (with respect to the modified
inner product), and it preserves $D^2E$ up to multiplication by $n^2$.
So we have a commutative diagram
\[
\xymatrix@C=2cm{
T_\gamma LM \ar[r]^{D_\gamma {P}_n}
\ar[d]_{n^2A({\inp \medspace \medspace}_{n^2,1},D^2_\gamma E)}
& T_{{P}_n \gamma }LM \ar[d]^{A({\inp \medspace \medspace}_1 ,
D^2_{{P}_n \gamma }E)} \\
T_\gamma LM \ar[r]^{D_\gamma {P}_n } & T_{{P}_n \gamma} LM
}
\]
by which we get the desired result.
\end{proof}
Assume that $N_a\subseteq LM$ is a nondegenerate
critical manifold of energy $a$. Also assume that
${P}_p (N_a) \subseteq N_{p^2a}$ is a non-degenerate
critical manifold, and that there are no other critical
points at energy levels in the intervals
$(a-\epsilon,a)$ and $(p^2(a-\epsilon),p^2a)$.
The Whitney sum $\mu=\mu^- \oplus \mu^+$ is the normal
bundle of the submanifold $N_a \subseteq LM$ because we are
assuming that $N_a$ satisfies Bott's non-degeneracy condition.
\begin{theorem}
\label{th:IterationMap}
The $p$-fold iteration map induces an isomorphism in cohomology
localized away from $u$ as follows:
\[
\widetilde H^*(({\mathcal F} (a)/{\mathcal F}({a-\epsilon}))_{h{\mathbb T}})
\left[ \frac 1 u \right] \cong
\widetilde H^*(({\mathcal F} ({p^2a})/{\mathcal F} ({p^2(a-\epsilon)}))^{(p)}_{h{{\mathbb T}}})
\left[ \frac 1 u \right] .\]
\end{theorem}
\begin{proof}
We have a commutative diagram (possibly after decreasing $\epsilon$)
\[
\xymatrix@C=1cm{
(D(\mu_{p^2}^- (N_a)),S(\mu_{p^2}^- (N_a)))\ar[r]^-{D{P}_p}
\ar[d]^{\exp^\sim}
& (D(\mu^- (N_{p^2a})),S(\mu^- (N_{p^2a})))\ar[d]^{\exp^\sim} \\
({\mathcal F} (a),{\mathcal F} ({a-\epsilon}))\ar[r]^-{{P}_p}
& ({\mathcal F} ({p^2a}),{\mathcal F} ({p^2(a-\epsilon)})).
}
\]
According to theorem \ref{th:localization} the upper horizontal map
induces an isomorphism in mod $p$ cohomology localized away from $u$.
The exponential map in the right column induces a ${\mathbb T}$ homotopy
equivalence on quotients by theorem \cite[2.4.10]{Klingenberg}.
This is the Hilbert manifold version of the fundamental theorem of
Morse theory. So to prove the theorem, we need to see that the left
vertical map induces a ${{\mathbb T}}$ equivariant homotopy equivalence on
quotients.
But this is true for the same reason. The proof of
theorem \cite[2.4.10]{Klingenberg} does not use the explicit
form of the metric on the Hilbert manifold $LM$, but only the
nondegeneracy of $D^2E$. The statement of the variation of the
theorem using $LM$ with the modified metric
${\inp \medspace \medspace }_{1/p^2,1}$ is exactly the statement
that the left vertical map is a homotopy equivalence.
\end{proof}
\begin{remark}
\label{re:IterationMap}
The theorem and its proof are also valid if $a$ is not a critical value.
In this case the interpretation of the statement is that if $\lambda$ is
a critical value of the energy function, such that $\lambda/p^2$ is not a
critical value, then for $\epsilon >0$ sufficiently small,
\[
\widetilde H^*(({\mathcal F} ({\lambda })/{\mathcal F}
({\lambda -\epsilon}))^{(p)}_{h{\mathbb T}})
\left[ \frac 1 u \right] =0.
\]
\end{remark}
\begin{remark}
Since $P_p$ is a homeomorphism whose image is the $C_p$ fixed points,
we could try to use theorem \ref{th:localization} directly on
the inclusion ${\mathcal F} ({p^2a})^{C_p}\subseteq {\mathcal F} ({p^2a})$, cleverly
bypassing this entire section. The problem with this approach is that
one needs a finite dimensionality condition for the localization statement
\ref{th:localization} to be true. The only role of Morse theory in the
proof of theorem \ref{th:IterationMap} is to reduce the
infinite dimensional situation to a finite dimensional one.
It seems likely that this reduction can be done in greater
generality, and that the non-degeneracy condition in \ref{th:IterationMap}
is far stronger than necessary.
\end{remark}
\section{The Morse spectral sequences}
\label{sec:MSS}
Let $M$ be a compact Riemannian manifold as before. But in this
section we are going to assume that the critical points of the
energy function on $LM$ are collected on compact critical manifolds.
We also assume that each of these critical manifolds satisfies the
Bott non-degeneracy condition. This is a strong assumption
on the metric of $M$, but for instance the symmetric spaces
satisfy this, according to \cite{Ziller}.
We are considering the filtration induced by the levels of
the energy function. Let the critical values of the energy function be
$0=\lambda_0 <\lambda_1 <\dots $. There is a filtration of
$LM$ by ${\mathcal F} (\lambda_i )=E^{-1}(-\infty ,\lambda_i )$.
This filtration is equivariant with respect to the action of
the circle, and induces spectral sequences of various forms
of cohomology. We call these spectral sequences Morse spectral
sequences. The condition we impose on the metric of $M$ means that for
each $\lambda$ we have a finite dimensional critical manifold
$N(\lambda )$. The tangent bundle of $L M$ restricted to
$N(\lambda )$ splits ${{\mathbb T}}$-equivariantly into a sum of three bundles.
\[
TLM|_{N(\lambda )}\cong
\mu^- (\lambda )\oplus \mu^0 (\lambda )\oplus \mu^+ (\lambda ).
\]
The standard Morse theory argument can be carried through equivariantly
on the Hilbert manifold $LM$. This was done by Klingenberg. For an account
of this work see section \cite[2.4]{Klingenberg}, especially
theorem 2.4.10. The statement of this theorem implies that we have
an equivariant homotopy equivalence
\[
{\mathcal F} (\lambda_n )/{\mathcal F} (\lambda_{n-1} ) \simeq {\mathrm{Th}} (\mu^- (\lambda_n )).
\]
We will consider cohomology of the homotopy
orbits $LM_{h{\mathbb T}}$. We are also going to consider the twisted action.
This is not because we have a particular interest in
$H^*(LM_{h{\mathbb T}}^{(p)})$ in itself. As we will see,
these groups are easy to compute anyhow. The purpose of considering
them is rather to be able to use a comparison argument to
obtain information about the Morse spectral sequence
of $H^*(LM_{h{\mathbb T}})$. The abstract situation is as follows:
\begin{theorem}
\label{th:ThreeSS}
There are three spectral sequences
\begin{align*}
& \{ E_r^{n,m}({\mathcal M})(LM) \} \Rightarrow
H^{n+m}(LM), \\
& \{ E_r^{n,m}({\mathcal M})(LM_{h{\mathbb T}}) \} \Rightarrow
H^{n+m}(LM_{h{\mathbb T}}), \\
& \{ E_r^{n,m}({\mathcal M})(LM^{(p)}_{h{\mathbb T}}) \} \Rightarrow
H^{n+m}(LM^{(p)}_{h{\mathbb T}}).
\end{align*}
The $E_1$ pages are given by
\[ \widetilde H^m({\mathrm{Th}} (\mu^- (\lambda_n ))), \quad
\widetilde H^m({\mathrm{Th}} (\mu^- (\lambda_n)_{h{\mathbb T}})), \quad
\widetilde H^m({\mathrm{Th}} (\mu^- (\lambda_n )_{h{\mathbb T}}^{(p)} )) \]
respectively. We have a natural isomorphism of spectral sequences
\[
E_*({\mathcal M})(LM_{h{\mathbb T} }^{(p)} )\cong
{\mathbb F}_p [u]\otimes_{{\mathbb F}_p} E_*({\mathcal M})(LM)
\]
as a module over $H^*(B{{\mathbb T} })\cong {\mathbb F}_p [u]$.
\end{theorem}
\begin{proof}
Consider the energy filtration
${\mathcal F} (\lambda_0 )\subseteq {\mathcal F} (\lambda_1 )\subseteq \dots$
with union $LM$. Because the filtration is equivariant, it induces
filtrations of the spaces $LM$, $LM_{h{\mathbb T}}$ and $LM^{(p)}_{h{{\mathbb T}}}$.
These filtrations give rise to three exact couples which give
the three spectral sequences. The spectral sequences all converge
(strongly). One can use \cite[Ch. 9,1.6]{Sp} or \cite[theorem 12.6]{Bo}
and the remark following \cite[theorem 7.1]{Bo} to see this.
The functorial isomorphism of $H^*(B{{\mathbb T}} ;{\mathbb F}_p )$-modules from
theorem \ref{th:TwistedCollapse} implies the natural isomorphism of
spectral sequences.
\end{proof}
We next have to discuss the $p$-fold iteration map.
If there exists a periodic geodesic of energy $\lambda$, its
$p$-fold iterate is a geodesic of energy $p^2\lambda$. So the
sequence $p^2\lambda_0 <p^2\lambda_1 <p^2\lambda_2 <\dots$
is a subsequence of the sequence
$\lambda_0 <\lambda_1 <\lambda_2 <\dots$.
The $p$-fold iteration map induces an equivariant map of filtrations:
\[ \xymatrix@C=1cm{
{\mathcal F} (\lambda_0 )^{(p)} \ar[r] \ar[d] &
{\mathcal F} (\lambda_1 )^{(p)} \ar[r] \ar[d] & \dots \ar[r] & LM^{(p)} \ar[d] \\
{\mathcal F} (p^2\lambda_0 ) \ar[r] &
{\mathcal F} (p^2\lambda_1 ) \ar[r] & \dots \ar[r] & LM. \\
} \]
\begin{lemma}
\label{le:ItLocalization}
Let $p^2\lambda_{i-1} =\lambda_j <\lambda_{j+1} <\dots
<\lambda_k =p^2\lambda_i$ be the critical values of the energy function
in the interval $[ p^2\lambda_{i-1} , p^2\lambda_i ]$. Then
\[
H^*({\mathcal F} (\lambda_\ell )_{h{\mathbb T}} , {\mathcal F} (\lambda_{\ell-1})_{h{\mathbb T}} ;{\mathbb F}_p )
\left[ \frac 1 u \right] =
\begin{cases}
0, & \ell \neq k, \\
H^*({\mathcal F} (\lambda_i ),{\mathcal F} (\lambda_{i-1} );{\mathbb F}_p )\otimes {\mathbb F}_p [u,u^{-1}],
& \ell=k.
\end{cases}
\]
The isomorphism is induced by the iteration map followed by the
isomorphism of theorem \ref{th:TwistedCollapse}.
\end{lemma}
\begin{proof}
This follows from theorem \ref{th:IterationMap} and remark
\ref{re:IterationMap}.
\end{proof}
\begin{theorem}
\label{th:relabel}
Up to a re-indexing of the columns in the spectral sequence
the $p$-fold iteration map induces a natural isomorphism
of spectral sequences
\[
E_*({\mathcal M} )(LM_{h{\mathbb T}}) \left[ \frac 1 u \right] \cong
E_*({\mathcal M} )(LM) \otimes {\mathbb F}_p [u,u^ {-1}].
\]
\end{theorem}
\begin{proof}
We have to explain the re-indexing.
Let $j(i)$ be the non-negative number such that
$p^2\lambda_i=\lambda_{j(i)}$, and $k(i)$ the largest
number $s\leq i$ such that $s=j(t)$ for some $t$. So by
definition, $k(i)\leq i$.
Besides the filtration ${\mathcal F}_i:={\mathcal F} (\lambda_i)$ we consider two
derived filtrations of $LM$;
\[ {\mathcal F}_i^\prime := {\mathcal F}_{j(i)} \quad , \quad
{\mathcal F}_i^{\prime \prime} := {\mathcal F}_{k(i)} . \]
The filtrations define spectral sequences
\[
E_*({\mathcal M} )^\prime (LM_{h{\mathbb T}}) \Rightarrow
H^*(LM_{h{\mathbb T}}) \quad , \quad
E_*({\mathcal M} )^{\prime \prime}(LM_{h{\mathbb T}}) \Rightarrow
H^*(LM_{h{{\mathbb T}}}).
\]
The $p$-fold iteration map ${P} :={P}_p$ induces a map of filtrations
${P} :{\mathcal F}_i \to {\mathcal F}_{j(i)}$, so we have a ladder
\[ \xymatrix@C=1cm{
({\mathcal F}_0 )^{(p)}_{h{\mathbb T}} \ar[r] \ar[d]^{P}
& ({\mathcal F}_1)^{(p)}_{h{\mathbb T}} \ar[r] \ar[d]^{P}
& \dots \ar[r] & LM^{(p)}_{h{\mathbb T}} \ar[d]^{P} \\
({\mathcal F}_0^\prime )_{h{\mathbb T}} \ar[r]
& ({\mathcal F}_1^\prime )_{h{\mathbb T}} \ar[r]
&\dots \ar[r] & LM_{h{\mathbb T}}. \\
} \]
According to theorem \ref{th:IterationMap} we get an isomorphism
of spectral sequences
\[ \xymatrix@C=1cm{
{P}^* :E_*({\mathcal M} )^\prime (LM_{h{\mathbb T}}) \left[ \frac 1 u \right]
\ar[r]^-{\cong}
& E_*({\mathcal M} )(LM^{(p)}_{h{\mathbb T}} )\left[ \frac 1 u \right] .
} \]
Note that the spectral sequence on the right hand side can be
rewritten by the isomorphism in theorem \ref{th:ThreeSS}.
The filtration ${\mathcal F}_i^{\prime \prime}$ is just a trivial reindexing
of the filtration ${\mathcal F}_i^\prime$.
We have that
\[
{\mathcal F}_0^\prime ={\mathcal F}_0^{\prime \prime}={\mathcal F}_1^{\prime \prime} =\dots =
{\mathcal F}_{j(1)-1}^{\prime \prime} \subseteq
{\mathcal F}_1^\prime = {\mathcal F}_{j(1)}^{\prime \prime} =
{\mathcal F}_{j(1)+1}^{\prime \prime} = \dots
\]
So the two spectral sequences
\[
E_*({\mathcal M} )^{\prime \prime}(LM_{h{\mathbb T}}) \left[ \frac 1 u \right]
\quad , \quad
E_*({\mathcal M})^\prime (LM_{h{\mathbb T}}) \left[ \frac 1 u \right]
\]
are the same up to a re-indexing of the columns.
Finally, since
${\mathcal F}_i^{\prime \prime} = {\mathcal F}_{k(i)} \subseteq {\mathcal F}_i$
we have an inclusion of filtrations, which induces a map of spectral
sequences
\[ \xymatrix@C=1cm{
E_*({\mathcal M} )^{\prime \prime} (LM_{h{\mathbb T}}) \left[ \frac 1 u \right]
\ar[r]
& E_*({\mathcal M} )(LM_{h{\mathbb T}}) \left[ \frac 1 u \right] .
} \]
According to lemma \ref{le:ItLocalization} this map is an
isomorphism of $E_1$ pages, and thus of spectral sequences.
This proves the theorem.
\end{proof}
\section{The Morse spectral sequences for ${\mathbb{C}\mathrm{P}}^r$}
\label{sec:MorseCPr}
We now turn to the special case $M={\mathbb{C}\mathrm{P}}^r$ with the usual symmetric space
(Fubini--Study) metric. We know that this space satisfies the conditions
of section \ref{sec:MSS}, in particular theorem \ref{th:relabel} is
valid. We are going to study the Morse spectral sequence for
$H^*(L{\mathbb{C}\mathrm{P}}^r_ {h{{\mathbb T}}})$. Using theorem \ref{th:relabel}
we can pin down the possible structure of the differentials, but we
cannot completely determine them. In particular, using only differential
geometry, we cannot prove that the differentials are non-trivial.
But we will make an effort to obtain as much information as possible
about the Morse spectral sequence with the information we already have
available at this point.
It will turn out in section \ref{se:MorseSerreSS} that there are
non-trivial differentials in the Morse spectral sequence. In that
section we are going to compare the Morse spectral sequence to a
purely homotopy theoretical spectral sequence. This will not quite
determine the differentials, but it will at least determine the dimension of
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}})$ as a vector space over ${\mathbb F}_p$.
\begin{theorem}
\label{th:MorsePoincare}
The Morse spectral sequence
$E_*^{*,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$
is a spectral sequence of
$H^*(B{{\mathbb T}})={\mathbb F}_p [u]$-modules converging towards
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$.
In case $p\mid (r+1)$ the $E_1$ page is given by
\[
\begin{aligned}
& E_1^{0,*} &
&=& &{\mathbb F}_p [u,x]/(x^{r+1}),&
& & \\
& E_1^{pm+k,*}&
&=& &\alpha_{pm+k} {\mathbb F}_p [x_1,x_2]/(Q_r,Q_{r+1}),\quad
0\leq m, \quad 1\leq k\leq p-1, & \\
& E_1^{pm,*} &
&=& &\alpha_{pm} {\mathbb F}_p [x,u] /(x^{r+1})
\oplus \zeta_{pm} {\mathbb F}_p [x,u]/(x^{r+1}),\quad 1\leq m.&
\end{aligned}
\]
In case $p\nmid (r+1)$ the $E_1$ page is given by
\[
\begin{aligned}
& E_1^{0,*} &
&=& & {\mathbb F}_p [u,x]/(x^{r+1}), & \\
& E_1^{pm+k,*} &
&=& & \alpha_{pm+k} {\mathbb F}_p [x_1,x_2]/(Q_r,Q_{r+1}),
\quad 0\leq m,\quad 1\leq k \leq p-1, & \\
& E_1^{pm,*} &
&=& &\alpha_{pm} {\mathbb F}_p [x,u]/(x^r)\oplus
\bar \zeta_{pm} {\mathbb F}_p [x,u]/(x^r),\quad 1\leq m.&
\end{aligned}
\]
The total degree of the element $\alpha_{pm+k}x_1^ix_2^j$ is
$2r(pm+k-1)+2i+2j+1$ and the filtration degree is $pm+k$, where
$1\leq k\leq p-1$.
The generators which are free ${\mathbb F}_p[u]$-module generators have total
degree and filtration degree as follows:
\begin{tabular}{l !{\quad} l !{\quad} l !{\quad} l}
\toprule
class & case & total degree & filtration\\
\midrule
$\alpha_{pm} x^i$ & all $r$, $0\leq i\leq r-1$ & $2r(pm-1)+2i+1$ & $pm$\\
$\alpha_{pm} x^r$ & $p\mid (r+1)$ & $2r(pm-1)+2r+1$ & $pm$\\
$\zeta_{pm} x^i$ & $p\mid (r+1)$ and $0\leq i\leq r$ & $2rpm+2i$ & $pm$\\
$\bar \zeta_{pm} x^i$ & $p\nmid (r+1)$ and $0\leq i\leq r-1$ & $2rpm+2+2i$
& $pm$ \\
\bottomrule
\end{tabular}
\end{theorem}
\begin{proof}
The sequence of critical values for the energy function is
given by $\lambda_n=n^2$. So in order to determine the $E_1$ page
we have to compute $\widetilde H^*({\mathrm{Th}} (\mu^- (n^2))_{h{\mathbb T}})$
for each $n$.
The case $n=0$ is special. The critical manifold $N(0)$
consists of the constant curves, so it is diffeomorphic
to ${\mathbb{C}\mathrm{P}}^r$ itself, with trivial action of ${\mathbb T}$. Since the
constant curves are the absolute minima of the
energy function, the negative bundle is the trivial bundle.
We find
\[
E_1^{0,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )=
H^*(B{\mathbb T} \times {\mathbb{C}\mathrm{P}}^r)=
H^*(B{\mathbb T})\otimes H^*({\mathbb{C}\mathrm{P}}^r).
\]
This is what the theorem states for $E_1^{0,*}$.
Now consider $n>0$. The negative bundle $\mu^-(n^2)$ is a
${\mathbb T}$-vector bundle over $N(n^2)\cong S(\tau ({\mathbb{C}\mathrm{P}}^r ))^{(n)}$.
We know from theorem \ref{th:ThreeSS} that
\[ E_1^{n,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )\cong \widetilde
H^*({\mathrm{Th}} (\mu^-(n^2))_{h{\mathbb T}} ).\]
Recall from \cite{SySp} that if we forget the ${\mathbb T}$-action,
then $\mu^-(n^2)$ is the vector bundle
$(n-1)\eta^* (\tau ({\mathbb{C}\mathrm{P}}^r ))\oplus \epsilon$ over $S(\tau({\mathbb{C}\mathrm{P}}^r))$.
In particular, this means that $\mu^-(n^2)$ is orientable.
So by lemma \ref{lemma:Thom}, the vector bundle
$\mu^-(n^2)_{h{\mathbb T}}$ is orientable and
${\mathrm{Th}} (\mu^-(n^2))_{h{\mathbb T}} \cong {\mathrm{Th}} ( \mu^-(n^2)_{h{\mathbb T}}).$
The vector bundle $\mu^- (n^2)_{h{\mathbb T}}$ is of the same dimension as
$\mu^- (n^2)$, that is of dimension $2r(n-1)+1$. So by the Thom
isomorphism we find that
\[
E_1^{n,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )\cong \widetilde
H^*({\mathrm{Th}} (\mu^-(n^2)_{h{\mathbb T}} )) \cong
H^{*-(2r(n-1)+1)}(N(n^2)_{h{\mathbb T}} ).
\]
The theorem now follows from the computation of the Borel cohomology
of the space of geodesics in theorem \ref{th:CohomologyOrbits}.
\end{proof}
\begin{remark}
The symbol $\alpha_n x^i$ refers to the cup product in a critical manifold.
The precise meaning is that it is the Thom isomorphism of the bundle
$\mu^{-}_n$ applied to the class $x^i$. In particular, the class
$\alpha_n$ is the Thom class itself. The product is not defined
on the cohomology of the Thom space of the negative bundle.
It is also not defined in the spectral sequence.
So strictly speaking, the product notation is improper.
Products with $u$ on the other hand are genuine products,
which are defined in the spectral sequence.
\end{remark}
We also consider the non-equivariant case.
\begin{lemma}
\label{le:nonequivariant}
Consider the Morse spectral sequence
\[
\{ E_r^{*,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r ) \} \Rightarrow H^* (L{\mathbb{C}\mathrm{P}}^r ).
\]
We have the following formulas for its $E_1$ page:
\[
E_1^{n,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )=
\begin{cases}
{\mathbb F}_p [x]/(x^{r+1})
& , n=0, \\
\alpha_n {\mathbb F}_p [x,\sigma ]/(x^{r+1},\sigma^2)
& , n\geq 1, \quad p\mid (r+1), \\
\alpha_n {\mathbb F}_p [x,\bar \sigma ]/(x^r,\bar \sigma^2 )
& , n\geq 1, \quad p\nmid (r+1). \\
\end{cases}
\]
The total degree of $\alpha_n$ is $2r(n-1)+1$, the total degree of
$\sigma$ is $2r-1$ and the total degree of $\bar\sigma$ is $2r+1$.
If $i\geq rn+1$ and $p\mid (r+1)$ or if $i\geq rn$ and $p\nmid (r+1)$
then $E_1^{n,2i+1-n}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )=0$ for all $n$.
Finally, the canonical map
\[ E_1^{n,2j+1-n}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )\to
E_1^{n,2j+1-n}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )\]
is surjective for all $n$ and all $j$.
\end{lemma}
\begin{proof}
Most of this follows directly from remark \ref{SCohomologyModP}.
The class of highest even degree in
${\mathbb F}_p [x,\sigma ]/(x^{r+1},\sigma^2 )$ is $x^r$ of degree $2r$
and in ${\mathbb F}_p [x,\bar \sigma ]/(x^r,\bar \sigma^2 )$
it is $x^{r-1}$ of degree $2r-2$. The stated vanishing result follows
from this observation. The final surjectivity statement
follows from the surjectivity result in corollary \ref{cor:SerreCollapse}.
\end{proof}
A basic fact, which we will use again and again, is the
following:
\begin{theorem}
\label{th:CPcollapse}
The Morse spectral sequence for non-equivariant cohomology collapses.
\end{theorem}
This theorem follows, for instance, from the stable splitting
of $L{\mathbb{C}\mathrm{P}}^r$ that we constructed in \cite{SySp}. But the
theorem is certainly not originally ours: to prove
the basic fact one only needs a splitting at the level of homology,
and such a splitting was known to Ziller \cite{Ziller}.
\begin{corollary}
\label{cor:oddcounting}
Let ${\mathcal F}_0 \subseteq {\mathcal F}_1 \subseteq \dots \subseteq L{\mathbb{C}\mathrm{P}}^r$ be the
energy filtration and let $m\geq 0$. If $p\mid (r+1)$, the odd degree
cohomology $H^{\text{odd} }({\mathcal F}_m)$ is a vector space of dimension
$(r+1)m$. If $p\nmid (r+1)$, its dimension is $rm$.
\end{corollary}
\begin{proof}
According to theorem \ref{th:CPcollapse} the spectral sequence
$\{ E_r^{*,*} \}= \{ E_r^{*,*} ({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )\}$ collapses, so that
\[
H^{\text{odd}}({\mathcal F}_{m} ) \cong
\bigoplus_{i,n} E^{n,2i+1-n}_\infty
\cong \bigoplus_{i,n} E^{n,2i+1-n}_1,
\]
where the direct sums are taken over pairs $n,i$ such that
$0\leq n\leq m$ and $n\leq 2i+1$. By lemma \ref{le:nonequivariant}
we have that $E_1^{0,2i+1}=0$, and for a fixed $n$ with $n\geq 1$ we
have that
\[
\sum_{n \leq 2i+1} \dim (E_1^{n,2i+1-n})=
\begin{cases}
r+1 & \text{if } p\mid (r+1), \\
r &\text{if } p\nmid (r+1), \\
\end{cases}
\]
independently of $n$. It follows that
\[
\dim H^{\text{odd}} ({\mathcal F}_{m}) =
\sum_{1\leq n\leq m} \medspace \sum_{n\leq 2i+1} \dim(E_1^{n,2i+1-n})=
\begin{cases}
(r+1)m & \text{if } p\mid (r+1),\\
rm & \text{if } p\nmid (r+1).\\
\end{cases}
\]
\end{proof}
Later it will be convenient to be able to estimate
the sheer size of the Morse spectral sequence by a coarse
dimension count. So let's get that over with.
\begin{lemma}
\label{le:MorsePoincare}
In case $p\mid (r+1)$, the Poincar\'e series of
$E_1^{*,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ is
\[
\frac {1-t^{2r+2}} {(1-t)(1-t^2)(1-t^{2pr})}.
\]
In case $p\nmid (r+1)$, the Poincar\'e series of
$E_1^{*,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ is
\[
\frac {1-t^{2r+2}-t^{2pr}+t^{2pr+2}} {(1-t)(1-t^2)(1-t^{2pr})}.
\]
\end{lemma}
\begin{proof}
In theorem \ref{th:MorsePoincare} the $E_1$ term is written as a direct
sum of three terms. We compute the Poincar\'e series of each term, and add.
First note that the Poincar\'e series of the algebra
${\mathbb F}_p [x_1,x_2]/(Q_r,Q_{r+1})$ is
\[ \frac {1-t^{2r}} {1-t^2} \medspace \frac {1-t^{2r+2}} {1-t^2}\]
as one sees by writing down a basis of monomials.
Here is the case $p\mid (r+1)$:
\begin{align*}
& \frac{1} {1-t^2} \medspace \frac {1-t^{2r+2}} {1-t^2}
+\frac {t} {1-t^{2pr}} \medspace
\frac {1-t^{2r(p-1)}} {1-t^{2r}} \medspace
\frac {1-t^{2r}} {1-t^2} \medspace
\frac {1-t^{2r+2}} {1-t^2} \medspace + \\
& \frac {1} {1-t^2} \medspace \frac {1-t^{2r+2}} {1-t^{2}} \medspace
\frac {t^{2r(p-1)+1}+t^{2rp}} {1-t^{2pr}} ,
\end{align*}
which equals
\begin{align*}
& \frac{1-t^{2r+2}} {(1-t^2)^2 (1-t^{2pr})}
[(1-t^{2pr})+(t-t^{2r(p-1)+1})+(t^{2pr}+t^{2r(p-1)+1})] = \\
& \frac {(1-t^{2r+2})(1+t)} {(1-t^2)^2 (1-t^{2pr})}
=\frac {1-t^{2r+2}} {(1-t)(1-t^2)(1-t^{2pr})}.
\end{align*}
In case $p\nmid (r+1)$, the last term in the direct sum is changed
and thus the Poincar\'e series equals the sum
\begin{align*}
& \frac{1} {1-t^2} \medspace \frac {1-t^{2r+2}} {1-t^2}
+\frac {t} {1-t^{2pr}} \medspace
\frac {1-t^{2r(p-1)}} {1-t^{2r}} \medspace
\frac {1-t^{2r}} {1-t^2} \medspace
\frac {1-t^{2r+2}} {1-t^2} \medspace + \\
& \frac {1} {1-t^2} \medspace \frac {1-t^{2r}} {1-t^2} \medspace
\frac {t^{2r(p-1)+1}+t^{2pr+2}} {1-t^{2pr}} ,
\end{align*}
which equals
\begin{align*}
& \frac {1} {(1-t^2)^2 (1-t^{2pr})}
[(1-t^{2r+2}-t^{2pr}+t^{2pr+2r+2}) \\
& +(t-t^{2(p-1)r+1}-t^{2r+3}+t^{2pr+3})
+(t^{2(p-1)r+1}+t^{2pr+2}-t^{2pr+1}-t^{2pr+2r+2})] = \\
& \frac {1-t^{2r+2}-t^{2pr}+t^{2pr+2}} {(1-t)(1-t^2)(1-t^{2pr})} .
\end{align*}
\end{proof}
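As a quick consistency check, not needed in what follows, one can match the
difference of the two Poincar\'e series against the generator table of
theorem \ref{th:MorsePoincare}. The difference of the two series is
\[
\frac {t^{2pr}-t^{2pr+2}} {(1-t)(1-t^2)(1-t^{2pr})} =
\frac {t^{2pr}} {(1-t)(1-t^{2pr})} =
\sum_{m\geq 1} \frac {t^{2prm}} {1-t} .
\]
On the other hand, for each $m\geq 1$, passing from the case $p\nmid (r+1)$
to the case $p\mid (r+1)$ replaces the generators $\bar \zeta_{pm} x^i$ by
$\zeta_{pm} x^i$ and adds the generator $\alpha_{pm} x^r$, which changes the
contribution of filtration $pm$ by
\[
\frac {t^{2rpm+1} +\sum_{i=0}^r t^{2rpm+2i} -\sum_{i=0}^{r-1} t^{2rpm+2+2i}}
{1-t^2} =
\frac {t^{2rpm}(1+t)} {1-t^2} = \frac {t^{2rpm}} {1-t} ,
\]
in agreement with the first computation.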
Now that we have completed the census, we will investigate what we can
say about the algebraic properties of the Morse spectral sequence
for ${\mathbb{C}\mathrm{P}}^r$.
\begin{lemma}
\label{le:naturality}
The classes $\zeta_{pm} x^i$, $\bar \zeta_{pm} x^i$, $\alpha_{pm} x^i$
and $\alpha_nx_1^i$
are not in the image of any differential.
\end{lemma}
\begin{proof}
The inclusion map $i: L{\mathbb{C}\mathrm{P}}^r \to L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}}$
induces a map of filtrations, and a map of cohomology
Morse spectral sequences. We know from theorem \ref{th:CPcollapse}
that the target spectral sequence collapses.
But on the $E_1$ level, the classes $\zeta_{pm} x^i$ and
$\bar \zeta_{pm}x^i$ survive to the classes represented by the Thom
isomorphism applied to $\sigma x^i$ and $\bar \sigma x^i$, respectively,
according to corollary \ref{cor:SerreCollapse}. These classes are not
in the image of a differential (since all differentials vanish).
So by naturality, the classes $\zeta_{pm} x^i$ and $\bar \zeta_{pm}x^i$
cannot be in the image of a differential either.
Similarly, the classes $\alpha_{pm} x^i$ and $\alpha_nx_1^i$
are mapped non-trivially
according to corollary \ref{cor:SerreCollapse} and the result follows.
\end{proof}
\begin{lemma}
\label{le:Preliminary}
In the Morse spectral sequence $E_*^{*,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$
every non-trivial differential starts in an even total degree.
\end{lemma}
\begin{proof}
We have to do some counting, but the strategy of the proof is simple.
First, we note that it is enough to consider the generators of $E_1$;
second, we see that, except for a very few special cases, these
generators cannot map non-trivially for dimensional reasons;
third, we use lemma \ref{le:naturality} to dispose of the
remaining cases.
The elements of odd degree in the spectral sequence are of the form
$\alpha_{pm} x^iu^j$ or $\alpha_nx_1^ix_2^j$. Because the spectral
sequence is a spectral sequence of ${\mathbb F}_p[u]$-modules, to prove the
lemma it suffices to show the special case that all differentials
vanish on the generators $\alpha_{pm} x^i$ and $\alpha_nx_1^i$.
We consider a class $d_s(\alpha_{pm} x^i)$. We wish to prove that it
is trivial. This class has filtration $pm+s$ and total degree
$2r(pm-1)+2i+2$. We ask: when does there exist a non-trivial class
of this filtration and total degree? To figure this out, we inspect
the table of theorem \ref{th:MorsePoincare}, and realize the
following facts.
\begin{itemize}
\item There exist non-trivial classes in
$E_1 ({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ;{\mathbb F}_p )$
of total even degree, and of filtration $n$ if and only
if $n$ is divisible by $p$.
\item If $n$ is divisible by $p$ there is a unique such class
of filtration $n$ with lowest possible even dimension. In case
$p$ divides $r+1$ it is $\zeta_n$ of dimension $2rn$,
if $p$ does not divide $r+1$ it is $\bar \zeta_n$ of dimension $2rn+2$.
\end{itemize}
The class $d_s(\alpha_{pm} x^i)$ has even total degree, so if it
were non-trivial it would have to have degree at least
equal to the lowest possible degree of such a class. That is
\[
2r(pm-1)+2i+2 \geq
\begin{cases}
2r(pm+s), & p\mid (r+1), \\
2r(pm+s)+2, & p\nmid (r+1).
\end{cases}
\]
If $p$ does not divide $r+1$, we have the two inequalities
$i\geq r(s+1)$ and $r-1 \geq i$. This system has no solution
with $s \geq 1$.
If $p$ divides $r+1$, we have the two inequalities
$i+1\geq r(s+1)$ and $r\geq i$. Now there is the unique solution
$s=r=i=1$. That is, we would have that
$d_1(\alpha_{pm-1}x)=\lambda \zeta_{pm}$ for a scalar $\lambda\not=0$.
But this contradicts lemma \ref{le:naturality}.
A similar argument shows that $d_s(\alpha_nx_1^i)=0$.
\end{proof}
At this point we can, to a certain extent, take a shortcut in the theory.
Using the elementary method from the proof of
lemma \ref{le:Preliminary} we can prove the following statement
(theorem \ref{th:TransferClass}). The term ``elementary'' should be
read as ``without using theorem \ref{th:relabel}''.
Together with the purely homotopy theoretical analysis of the
Serre spectral sequence (which we discuss in detail in section
\ref{sec:SerreSS}), theorem \ref{th:TransferClass} is
sufficient to compute the cohomology $H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$
in the special case $p\mid (r+1)$. It does not seem possible to
cover the case $p\nmid (r+1)$ using a similar method.
It is precisely because of this that we need theorem \ref{th:relabel}.
\begin{theorem}
\label{th:TransferClass}
If $p$ divides $r+1$ then there is a class
$\hat \zeta_{pm} \in H^{2pmr} (L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$
such that the restriction
$i^* (\hat \zeta_{pm}) \in H^{2pmr}(L{\mathbb{C}\mathrm{P}}^r )$
is non-trivial.
\end{theorem}
\begin{proof}
The argument is very similar to the proof of lemma \ref{le:Preliminary}.
We show that the class
$\zeta_{pm} \in E_1^{pm,(2r-1)pm}({\mathcal M})(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$
of theorem \ref{th:MorsePoincare} survives to $E_\infty$.
Suppose that $d_s(\zeta_{pm})=\alpha_{pm+s}y$
for some non-trivial class $y$.
Equality of total degrees gives the relation
\[
2rpm+1=\deg (\zeta_{pm} )+1 =
\deg (\alpha_{pm+s} y)=2r(pm+s-1)+\deg(y)+1\geq 2rpm+1.
\]
So we conclude that $s=1$, $\deg(y)=0$, and
$d_1(\zeta_{pm})=\lambda \alpha_{pm+1}$ for a non-trivial scalar $\lambda$.
But this contradicts lemma \ref{le:naturality}. That is,
$\zeta_{pm}$ is a permanent cycle. Furthermore, $\zeta_{pm}$ is not
in the image of any differential by lemma \ref{le:naturality},
so it survives to $E_\infty$.
We claim that any class
$\hat \zeta_{pm} \in H^{2pmr}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ;{\mathbb F}_p )$
representing $\zeta_{pm}$ will satisfy the property of the theorem.
By naturality, $\zeta_{pm}$ maps nontrivially by $i^*$ to the
$E_1$ page of the Morse spectral sequence converging to
$H^*(L{\mathbb{C}\mathrm{P}}^r ;{\mathbb F}_p )$. The theorem follows, again since this spectral
sequence collapses.
\end{proof}
In order to prove this section's final result we need a lemma
on non-negatively graded modules over the graded ring ${\mathbb F}_p [u]$
(graded by $\deg(u)=2$). We say that the graded module $M$ is trivial
in degrees strictly greater than $n$ if $M^i=0$ for all $i>n$.
Let us also say that a graded ${\mathbb F}_p [u]$-module is generated in
degrees less than or equal to $m$ if there is a set of generators whose
degrees are less than or equal to $m$. Because $M$ is bounded from below,
a set of elements $\{ x_\alpha \}$ generate $M$ exactly if the
reductions $[x_\alpha ]$ span the ${\mathbb F}_p$-vector space $M/uM$.
\begin{lemma}
\label{le:filterdegree}
Let $f:M \to N$ be a degree-preserving map of non-negatively
graded ${\mathbb F}_p [u]$-modules.
Assume that $M$ is generated in degrees less than or equal to
$m$, and that $N$ is trivial in degrees strictly greater than $n$.
Then the kernel of $f$ is generated in degrees
less than or equal to $\max (m,n)$.
\end{lemma}
\begin{proof}
Assume without loss of generality that $f$ is surjective.
Multiplication by $u$ defines a map of the short
exact sequence $0\to \ker (f) \to M \to N \to 0$ to itself.
The snake lemma produces an exact sequence
\[
\xymatrix@C=1cm{
\ker (u:N \to N) \ar[r] & \ker(f) /u\ker(f) \ar[r]
& M/uM \ar[r] & N/uN \ar[r] & 0.
}
\]
That $M$ is generated in degrees less than or equal to $m$
implies that the graded vector space $M/uM$ is zero in degrees
strictly greater than $m$. The exact sequence proves
that $\ker(f) /u\ker(f)$ is trivial in degrees greater than
$\max(m,n)$. Pick a set of spanning classes $\{ [x_\alpha]\}$ in
$\ker(f) /u\ker(f)$, and lift them arbitrarily to classes
$\{ x_\alpha\}$ in $\ker(f)$ of degree less than or equal to $\max (m,n)$. These
classes generate $\ker (f)$.
\end{proof}
We are now ready to apply our general theorem on the
localized Morse spectral sequence to our particular specimen.
\begin{theorem}
\label{Epcollapse}
The spectral sequence $E_*({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ collapses
from the $E_p$ page.
\end{theorem}
\begin{proof}
Recall from lemma \ref{le:Preliminary} that all non-trivial
differentials are defined on classes in even total degree.
It is easy to see from theorem \ref{th:MorsePoincare} that the
only classes of even total degree sit in filtrations divisible by $p$.
Let $X^{pm,*}_s \subseteq E_s^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$
be the subgroup of elements of even total degree. This is a module
over the ring ${\mathbb F}_p[u]$, and for all $s$ one has
\[
X_{s+1}^{pm,*}=\ker \left( d_s :X_s^{pm,*}\to E_s^{pm+s,*} \right) .
\]
{\emph{Claim.}} If $1\leq s\leq p-1$, then the ${\mathbb F}_p [u]$-module
$X^{pm,*}_{s+1}$ is generated by elements in degrees
less than $2r(pm+s+1)-2$.
The claim follows from lemma \ref{le:filterdegree} and induction over
$s$. By theorem \ref{th:MorsePoincare} we see that $X_1^{pm,*}$
in all cases is generated in degrees less than or equal to $2pmr+2r$.
$E_s^{pm+s,*}$ is a subquotient of the ${\mathbb F}_p[u]$-module $E_1^{pm+s,*}$.
Using theorem \ref{th:MorsePoincare} we see that for $1 \leq s \leq p-1$
this last module is trivial in dimensions strictly greater than
$2r(pm+s)+2r-1$. Now we use lemma \ref{le:filterdegree} on the
differential
\[
d_1: X_1^{pm,*} \to E_1^{pm+1,*}.
\]
The differential raises total degree by 1. Taking this
into account, lemma \ref{le:filterdegree} shows
that $X^{pm,*}_2$ is generated in degrees less than
$\max(2pmr+2r,2r(pm+1)+2r-2)=2r(pm+2)-2$, which proves
the claim for $s=1$.
Using lemma \ref{le:filterdegree} inductively on $d_s$
for $2\leq s\leq p-1$ proves the claim.
In particular, $X^{pm,*}_{p}$ is generated in degrees
less than or equal to $2r(pm+p)-2$. The basic fact \ref{th:CPcollapse}
says that the Morse spectral sequence $E_*({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )$ collapses
from the $E_1$ page. Because of theorem \ref{th:relabel} this implies
that the localized spectral sequence
$E_* ({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )[1/u]$
collapses from the $E_1$ page.
From our computation in theorem \ref{th:MorsePoincare} we can
read off that the localization map
\[ \xymatrix@C=1cm{
E_1^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )
\ar[r]
& E_1^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ) \left[ \frac 1 u \right]
} \]
is injective. By naturality, this means that no non-trivial
differentials are arriving at
$E_1^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$. So in particular, the
$d_p$ differential on $X^{pm,*}_{p}$ is trivial, and
$X^{pm,*}_{p+1}=X^{pm,*}_{p}$ is generated in degrees
less than $2r(pm+p)-2$.
We claim that all higher differentials $d_s$
vanish on $X_{p+1}^{pm,*}$. Assume to the contrary that
$s\geq p+1$ is the smallest number such that $d_s$ does not vanish on
$X_s^{pm,*}$. Since $X_s^{pm,*}=X_{p+1}^{pm,*}$, we know that
$X_s^{pm,*}$ is generated in degrees less than or equal to $2r(pm+p)-2$.
It is enough to prove that the differential $d_s$ vanishes on the
generators. A generator of $X_{p+1}^{pm,*}$ will be mapped by $d_s$
to a class in some $E_{p+1}^{pm+s,*}$ of total degree less than or equal
to $2r(pm+p)-1$. We have to show that there is no such non-trivial class.
But the non-zero class of lowest degree in $E_{p+1}^{pm+s,*}$ is
$\alpha_{pm+s}$ which has degree $2r(pm+s-1)+1$.
Since we are assuming that $s\geq p+1$, this degree is larger than or
equal to $2r(pm+p)+1$. This finishes the proof.
\end{proof}
\section{Derived functors at odd primes}
\label{sec:Derived}
In this section and the two sections that follow we study
the cohomology of the free loop space by homotopy theoretical methods.
The main result we are aiming for is a computation of the action of
the circle on the cohomology. It seems hard to obtain this using Morse
theory methods, since the action will cause non-trivial interaction
between the layers in the Morse filtration. In section
\ref{se:MorseSerreSS} we will show that this computation has consequences
for the Morse spectral sequence $E_*({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$, and
essentially determines its differentials.
In this section we make a preliminary algebraic computation,
which we will need for the homotopy theory calculation.
As before, let $p$ be a prime number. Write $\trunc r x$ for
the truncated polynomial algebra ${\mathbb F}_p [x] /(x^{r+1})$,
where $r\geq 1$ and the degree $|x|$ of the generator is
a positive even number.
In \cite{BO} we defined the derived functors
$H_*(\trunc r x ; {\overline \Omega} )$, and in the case $p=2$ we computed them.
In this section we extend this calculation to compute these
derived functors for odd $p$. We will use definitions and notation
from \cite{BO}.
Assume that $p$ is an odd prime. Recall the functor
${\overline \Omega} :{\mathcal F} \to {{\mathcal A}lg}$ from \cite{SpSe}. The cohomology with
${\mathbb F}_p$ coefficients of a space defines an object in ${\mathcal F}$, and
${{\mathcal A}lg}$ is the category of non-negatively graded algebras $A$
over ${\mathbb F}_p$ such that $a^p=a$ for $a\in A^0$.
We view $\trunc r x$ as an object in ${\mathcal F}$ with $\lambda =0$ and
$\beta =0$. Note that by definition of ${\mathcal F}$, any polynomial algebra
${\mathbb F}_p [ z_i|i\in I]$ on even dimensional generators is a free object
in ${\mathcal F}$. This special type of free object has trivial $\lambda$ and
$\beta$. By a similar argument as in \cite{BO} Theorem 2.1 we find the
following result:
\begin{theorem}
\label{res}
For odd primes $p$, there is an almost free simplicial resolution
$R_\bullet \in s{\mathcal F}$ of the object $\trunc r x \in {\mathcal F}$ as follows:
$R_q = {\mathbb F}_p [x,y_1,\dots , y_q]$ for $q\geq 0$
where $|y_i|=(r+1)|x|$. The face and degeneracy maps
are given by $s_i(x)=x$, $d_i(x)=x$,
\begin{align*}
s_i(y_j) &= \begin{cases}
y_j &, i\geq j ,\\
y_{j+1} &, i<j ,
\end{cases}
\\
d_i(y_j) &= \begin{cases}
x^{r+1} &, i=0, j=1, \\
y_{j-1} &, i<j, j>1, \\
y_j &, i\geq j , j<q, \\
0 &, i=q, j=q.
\end{cases}
\end{align*}
\end{theorem}
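As a quick check that the resolution has the correct augmentation, note that
in simplicial degree zero
\[
\pi_0 (R_\bullet ) \cong R_0 / \left( d_0 (z)-d_1 (z) \mid z\in R_1 \right)
= {\mathbb F}_p [x] /(x^{r+1}) = \trunc r x ,
\]
since the face maps agree on $x$ and satisfy $d_0(y_1)=x^{r+1}$ and
$d_1(y_1)=0$.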
Using this resolution, the derived functors can be computed as the
homotopy groups
$H_i(\trunc r x ; {\overline \Omega} ) = \pi_i {\overline \Omega} (R_\bullet )$.
It is convenient to use the normalized chain complex
$N_* ({\overline \Omega} R_\bullet )$ with $N_i ({\overline \Omega} R_\bullet )= \cap_{j=1}^i \ker (d_j)$
and differential $d_0$ for this purpose.
The de Rham differential on ${\overline \Omega} R_\bullet$ is not the only
simplicial derivation. There is another one which turns out
to be useful.
\begin{lemma} \label{std}
There is a well defined derivation $\theta : {\overline \Omega} R_q \to {\overline \Omega} R_q$
of degree 1 for each $q\geq 0$ which satisfies
$\theta (ab)=\theta (a) b +(-1)^{|a|}a \theta (b)$
for all $a,b\in {\overline \Omega} R_q$ and is defined by the following for
$1\leq j \leq q$:
\[ \theta (x)=0,\quad \theta (y_j)=0 ,\quad \theta ({\bf d} x)=x,\quad
\theta ({\bf d} y_j)= (r+1)y_j. \]
One has that $\theta \circ \theta =0$. Furthermore,
$\theta$ commutes with the simplicial face and
degeneracy maps and hence defines a simplicial derivation
$\theta : {\overline \Omega} R_\bullet \to {\overline \Omega} R_\bullet .$
\end{lemma}
\begin{proof}
We have that ${\overline \Omega} R_q = {\mathbb F}_p [x,y_1,\dots ,y_q] \otimes
\Lambda ({\bf d} x, {\bf d} y_1 ,\dots ,{\bf d} y_q )$.
By the derivation property we see that $\theta (({\bf d} x)^2)=0$
and $\theta (({\bf d} y_j)^2)=0$ so $\theta$ is well defined.
The derivation property for $\theta$ also implies that
$\theta \circ \theta$ is a derivation of degree two.
So $\theta \circ \theta$ is zero since it maps
the algebra generators to zero.
One checks that
$\theta (s_i z) =s_i\theta (z)$ and $\theta (d_iz) = d_i\theta (z)$
for each algebra generator $z$ by direct computations.
The most interesting case goes as follows:
\begin{align*}
\theta (d_0 ({\bf d} y_1)) &= \theta ({\bf d} (x^{r+1})) =
\theta ((r+1)x^r{\bf d} x)=(r+1)x^{r+1}=d_0((r+1)y_1) \\
&= d_0\theta ({\bf d} y_1).
\end{align*}
\end{proof}
Here is a complete computation of the derived functors.
\begin{theorem}
\label{der}
If $p$ is a prime such that $p \mid (r+1)$,
then there is an isomorphism of bigraded ${\mathbb F}_p$-algebras
\[ H_*(\trunc r x ; {\overline \Omega} )\cong \trunc r x \otimes \Lambda ({\bf d} x )
\otimes \Gamma [\omega ], \]
where $\Vert x \Vert = (0,|x|)$, $\Vert {\bf d} x \Vert =(0, |x|-1)$,
$\Vert \gamma_i (\omega ) \Vert =(i, i((r+1)|x|-1))$.
The algebra generators are represented by cycles in the normalized
chain complex $N_* ({\overline \Omega} R_\bullet )$ as follows:
$x=[x]$, ${\bf d} x= [{\bf d} x]$,
$\gamma_i (\omega ) = [{\bf d} y_1 \dots {\bf d} y_i ]$.
The de Rham differential induces the map
\[ {\bf d}_*: H_*(\trunc r x ;{\overline \Omega} ) \to H_*(\trunc r x ;{\overline \Omega} ); \quad
x \mapsto {\bf d} x, \quad {\bf d} x \mapsto 0, \quad
\gamma_i (\omega )\mapsto 0. \]
If $p$ is a prime such that $p \nmid (r+1)$, then there is an
isomorphism of bigraded ${\mathbb F}_p$-algebras
\[ H_*(\trunc r x ; {\overline \Omega} )\cong {\mathbb F}_p [a_i,b_i | i\geq 0]/I_r, \]
where $I_r$ is the ideal generated by the following elements
for $i,j \geq 0$:
\[ a_ia_j, \quad
b_ib_j-\binom {i+j} i b_0 b_{i+j}, \quad
a_ib_j-\binom {i+j} i b_0 a_{i+j}, \quad
b_0^rb_i, \quad b_0^ra_i. \]
Here
$\Vert a_i \Vert = (i, i((r+1)|x|-1)+|x|-1)$ and
$\Vert b_i \Vert = \Vert a_i \Vert + (0,1)$.
The algebra generators are represented by cycles in the normalized
chain complex $N_* ({\overline \Omega} R_\bullet )$ as follows:
$a_i=[{\bf d} x {\bf d} y_1 \dots {\bf d} y_i]$,
$b_i=[\theta ({\bf d} x {\bf d} y_1 \dots {\bf d} y_i)]$.
The de Rham differential induces the map
\[ {\bf d}_*: H_*(\trunc r x ;{\overline \Omega} ) \to H_*(\trunc r x ;{\overline \Omega} ); \quad
a_i \mapsto 0, \quad b_i \mapsto (1+(r+1)i)a_i. \]
\end{theorem}
\begin{remark} \label{expl}
Explicitly, the cycle that represents $b_i$ is
\[ \theta ({\bf d} x {\bf d} y_1 \dots {\bf d} y_i)=
x{\bf d} y_1 \dots {\bf d} y_i +(r+1){\bf d} x \sum_{k=1}^i (-1)^k
y_k {\bf d} y_1 \dots \widehat{{\bf d} y_k} \dots {\bf d} y_i. \]
\end{remark}
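For instance, specializing this formula to $i=1$ and $i=2$, the cycles representing $b_1$ and $b_2$ are
\[ \theta ({\bf d} x {\bf d} y_1) = x{\bf d} y_1 -(r+1)y_1 {\bf d} x, \qquad
\theta ({\bf d} x {\bf d} y_1 {\bf d} y_2) =
x{\bf d} y_1 {\bf d} y_2 +(r+1){\bf d} x (y_2 {\bf d} y_1 - y_1 {\bf d} y_2 ). \]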
\begin{proof}
For $p=2$ these results were proved in \cite{BO}, so assume that
$p$ is an odd prime. We first compute the derived functors as
${\mathbb F}_p$-vector spaces.
We have that $\trunc r x$ is the pushout of the diagram
${\mathbb F}_p \leftarrow {\mathbb F}_p [y] \rightarrow {\mathbb F}_p [x]$ in ${\mathcal F}$ where
$y\mapsto x^{r+1}$. Since ${\overline \Omega}$ commutes with colimits (appendix of
\cite{O}) and ${\mathbb F}_p [x]$ is a free module over ${\mathbb F}_p [y]$,
proposition 6.3 of \cite{SpSe} gives us a Quillen spectral sequence
\[ E_{i,j}^2 = {\operatorname{Tor}}_i^{H_*({\mathbb F}_p [y];{\overline \Omega} )}
({\mathbb F}_p , H_*({\mathbb F}_p [x];{\overline \Omega} ))_j \Rightarrow H_{i+j}(\trunc r x; {\overline \Omega} ). \]
Since polynomial algebras on even dimensional generators
are free objects in ${\mathcal F}$, we see that
$E_{i,j}^2=0$ for $j>0$ and that
\[ H_i(\trunc r x; {\overline \Omega} ) \cong E_{i,0}^2 \cong
{\operatorname{Tor}}_i^{{\overline \Omega} ({\mathbb F}_p [y])} ({\mathbb F}_p , {\overline \Omega} ({\mathbb F}_p [x])). \]
Note that ${\overline \Omega} ({\mathbb F}_p [y])= {\mathbb F}_p [y] \otimes \Lambda ({\bf d} y)$ is the
free graded commutative algebra on $\{ y,{\bf d} y\}$ so we have the
Koszul resolution $(K_*, \partial )$ of ${\mathbb F}_p$ by free
${\overline \Omega} ({\mathbb F}_p [y])$-modules:
\[ K_*=\Lambda (v)\otimes \Gamma [w]\otimes {\overline \Omega} ({\mathbb F}_p [y]); \quad
\partial v =y,\quad \partial \gamma_i (w) = \gamma_{i-1}(w) {\bf d} y ,\]
where $v\in K_1$ and $\gamma_i (w)\in K_i$. In order to compute
the group ${\operatorname{Tor}}_i^{{\overline \Omega} ({\mathbb F}_p [y])} ({\mathbb F}_p , {\overline \Omega} ({\mathbb F}_p [x]))$
we tensor this Koszul resolution with
${\overline \Omega} ({\mathbb F}_p [x] )$ over ${\overline \Omega} ({\mathbb F}_p [y])$ and get a chain complex
$(C_*,\partial )$ with
\[ C_* = \Lambda (v)\otimes \Gamma [w]\otimes {\overline \Omega} ({\mathbb F}_p [x]), \quad
\partial v = x^{r+1}, \quad
\partial \gamma_i (w) = (r+1)\gamma_{i-1} (w)x^r{\bf d} x. \]
Computing the homology of this chain complex, we find that if
$p \mid (r+1)$ then
\[ H_*(\trunc r x;{\overline \Omega} ) \cong
\trunc r x \otimes \Lambda ({\bf d} x) \otimes \Gamma [w]. \]
If $p \nmid (r+1)$ we compute that
$H_0(\trunc r x ;{\overline \Omega} ) \cong \trunc r x \otimes \Lambda ({\bf d} x) /
(x^r{\bf d} x)$. For $i>0$,
$H_i(\trunc r x;{\overline \Omega} )$ is a sum of two copies of $\trunc r x/(x^r)$,
the generators being ${\bf d} x \gamma_i (w)$ and
$x\gamma_i (w)-(r+1){\bf d} x v \gamma_{i-1}(w)$.
This completes the computation of the derived functors
as ${\mathbb F}_p$-vector spaces.
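As a quick consistency check (signs depend on the chosen conventions), one can verify directly from the formula for $\partial$ that ${\bf d} x \gamma_i (w)$ is a cycle and that its class is killed by $x^r$:
\begin{align*}
\partial ({\bf d} x \gamma_i (w)) &= \pm (r+1)\, {\bf d} x \gamma_{i-1}(w) x^r {\bf d} x =0
\quad \text{since } ({\bf d} x)^2=0, \\
x^r {\bf d} x \gamma_i (w) &= \pm (r+1)^{-1} \partial (\gamma_{i+1}(w))
\quad \text{when } p \nmid (r+1),
\end{align*}
so the class of ${\bf d} x \gamma_i (w)$ indeed generates a copy of $\trunc r x /(x^r)$.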
We now check that the listed representatives are indeed cycles
in the chain complex $N_*({\overline \Omega} R_\bullet )$. Define
elements in ${\overline \Omega} R_i$ using the derivation $\theta$ as follows:
\[
\omega_i = {\bf d} y_1 \dots {\bf d} y_i ,\quad
\alpha_i = {\bf d} x \omega_i ,\quad
\beta_i = \theta (\alpha_i ).
\]
Here by convention $\omega_0=1$, so that
$\alpha_0={\bf d} x$ and $\beta_0= x$.
We have that $d_j \omega_i =0$ for $0<j\leq i$ since $({\bf d} y_j)^2=0$
and $d_iy_i=0$. Furthermore,
$d_0\omega_i = (r+1)x^r{\bf d} x\omega_{i-1}$.
Thus $\omega_i$ is a cycle when $p \mid (r+1)$ and
$\alpha_i$, $\beta_i$ are cycles when $p \nmid (r+1)$ as
stated.
By an argument similar to the one presented in theorem 2.5 of
\cite{BO} one checks that
\[ [x^j ({\bf d} x)^\epsilon \omega_i ], \quad 0\leq j \leq r ,\quad
\epsilon \in \{ 0 , 1 \} \]
are linearly independent in $H_i(N_*{\overline \Omega} R_\bullet )$ when
$p \mid (r+1)$ and that
\[ [\beta_0^j \alpha_i ] , [\beta_0^j \beta_i ] , \quad
0\leq j \leq r-1 \]
are linearly independent in $H_i(N_*{\overline \Omega} R_\bullet )$ when
$p \nmid (r+1)$. By a dimension count, these two sets are then
vector space bases.
We now prove that the algebra structure of $H_*(\trunc r x ; {\overline \Omega} )$
is as stated. The algebra structure comes from the chain map
\[ \rho : C_*({\overline \Omega} R) \otimes C_*({\overline \Omega} R) \xrightarrow{g}
C_*({\overline \Omega} R\otimes {\overline \Omega} R) \xrightarrow{C_*(m)} C_*({\overline \Omega} R)\]
where $g$ is the Eilenberg--MacLane shuffle map \cite{ML} and
$m$ the multiplication map on ${\overline \Omega} R_\bullet$. By $C_*(V)$ for a
simplicial ${\mathbb F}_p$-vector space $V$, we mean the chain complex with
$C_i (V)=V_i$ and differential $\sum_{j=0}^i (-1)^j d_j$.
So we have the formula
\[ \rho (v_i \otimes w_j) = \sum_{(\mu, \nu)} (-1)^{\epsilon (\mu )}
s_\nu (v_i) s_\mu (w_j) \]
where the sum is over all $(i,j)$-shuffles $(\mu , \nu )$ of the set
$\{ 0, 1, \dots ,i+j-1\}$ and
$\epsilon (\mu )= \sum_{k=1}^i (\mu_k -(k-1))$.
Lemma 2.4 of \cite{BO} still holds for ${\mathbb F}_p$-coefficients.
(There is a small misprint in the lemma: There should be a hat over
$\nu_j$ in the index set of the last formula.) So we find that
\[ \rho (\omega_i \otimes \omega_j ) = \binom {i+j} i \omega_{i+j} .\]
By this formula and remark \ref{expl} it follows directly that
\[ \rho (\alpha_i \otimes \alpha_j )=0 \quad , \quad
\rho (\alpha_i \otimes \beta_j ) =
\binom {i+j} i \beta_0 \alpha_{i+j} .\]
Since the degeneracy maps commute with $\theta$, there is
a commutative diagram as follows, where $A_\bullet = {\overline \Omega} R_\bullet$:
\[
\diagram
A_i\otimes A_j \rrto^-{g}
\dto_{\theta \otimes 1 +(-1)^{i+j} 1 \otimes \theta}
& &A_{i+j} \otimes A_{i+j} \rto^-{m}
\dto_{\theta \otimes 1 + (-1)^{i+j} 1 \otimes \theta}
& A_{i+j} \dto^\theta \\
A_i\otimes A_j \rrto^-{g} & & A_{i+j}\otimes A_{i+j} \rto^-{m} & A_{i+j}
\enddiagram
\]
By mapping $\alpha_i \otimes \theta (\alpha _j)$ both ways around
one finds that
\[ \rho (\beta_i \otimes \beta_j) =
\binom {i+j} i \beta_0 \beta_{i+j}. \]
Thus the algebra structure is as stated.
It follows directly from the formulas for the representing cycles
that the de Rham map is as stated.
\end{proof}
\section{Cohomology of the free loop space}
\label{sec:FreeCohomology}
\begin{theorem}
\label{th:action}
Let $p$ be a prime. Assume that $X$ is a 1-connected space with
mod $p$ homology of finite type. Assume also that
\[ H^*(X)={\mathbb F}_p [x]/(x^{r+1}), \]
where the degree $|x|$ of $x$ is even and $r\geq 1$.
Put $\alpha=|x|$ and $\rho=(r+1)\alpha-2$. \\
1) If $p\mid (r+1)$ then there is an algebra isomorphism
\[ H^* (LX) \cong {\mathbb F}_p [x]/(x^{r+1}) \otimes
\Lambda ({\bf d} x) \otimes \Gamma [\omega ], \]
where $|x|=\alpha$, $|{\bf d} x|=\alpha-1$, $|\gamma_i (\omega )|=\rho i$.
The action differential is given by
\[ d :H^*(LX) \to H^*(LX);\quad
d(x)={\bf d} x, \quad d({\bf d} x)=0, \quad d(\gamma_i (\omega ))=0. \]
2) If $p\nmid (r+1)$ then there is an algebra isomorphism
\[ H^*(LX)\cong {\mathbb F}_p [a_i,b_i|i\geq 0]/I, \]
where $I$ is the ideal generated by the following elements
for $i,j \geq 0$:
\[a_ia_j, \quad
b_ib_j-\binom {i+j} i b_0b_{i+j}, \quad
b_ia_j-\binom {i+j} i b_0a_{i+j}, \quad
b_0^rb_i, \quad
b_0^ra_i. \]
The degrees of the generators are $|a_i|=\rho i+\alpha-1$ and
$|b_i|=\rho i+\alpha$. In particular, the dimensions
of $H^{2k}(LX)$ and $H^{2k-1}(LX)$ are the same.
The action differential is given by
\[ d:H^*(LX) \to H^*(LX);\quad
d(a_i)=\kappa_i b_0^{r-1}b_i, \quad d(b_i)=((r+1)i+1)a_i. \]
where $\kappa_i=0$ unless $\alpha = 2$, $p=2$, $r$ is even and
$i$ is odd.
\end{theorem}
\begin{remark} \label{rem:action}
When $p\nmid (r+1)$ we have the following formula for
$0\leq i$, $1\leq j \leq r$:
\[ d(b_0^{j-1}b_i)=((r+1)i+j)b_0^{j-1}a_i. \]
We will show later (in corollary \ref{cor:actionproof}) that
$\kappa_i=0$. So we have $d( b_0^{j-1}a_i)=0$.
\end{remark}
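For completeness, the formula in remark \ref{rem:action} follows from theorem \ref{th:action} by the Leibniz rule (assuming, as for the de Rham differential, that $d$ acts as a derivation), together with the relation $b_ia_0=b_0a_i$ obtained by setting $j=0$ in the ideal relation:
\begin{align*}
d(b_0^{j-1}b_i) &= (j-1)b_0^{j-2}d(b_0)b_i + b_0^{j-1}d(b_i) \\
&= (j-1)b_0^{j-2}a_0b_i + ((r+1)i+1)b_0^{j-1}a_i \\
&= ((r+1)i+j)b_0^{j-1}a_i .
\end{align*}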
\begin{proof}
According to \cite{SpSe} there is a strongly convergent second
quadrant spectral sequence of cohomology type
\[ E_2^{-i,j} = H_i(H^*(X);{\overline \Omega} )^j \Rightarrow
H^*(LX ), \]
where the $E_2$ page consists of the derived functors which we computed
in theorem \ref{der}. The spectral sequence is a spectral sequence
of algebras, and the map induced by the de Rham differential ${\bf d}_*$
on $E_2$ corresponds to the action differential $d$ on
$H^*(LX)$.
There are two distinct cases: $p\mid (r+1)$ and
$p\nmid (r+1)$. In both cases, the $E_2$ page supports no nonzero
differentials because of its distribution of zeros,
so $E_\infty \cong E_2$ as a vector space over ${\mathbb F}_p$.
The theorem can be paraphrased as saying that the $E_2$ page
is isomorphic to $H^*(LX)$ as an algebra and
that the action differential agrees as well.
We have to look closely to exclude the possibility of multiplicative
extensions as well as extension problems concerning the action
differential. Write $|\cdot |$ for the total degree in the spectral
sequence.
1) Assume that $p\mid (r+1)$. Since $x$ and ${\bf d} x$ lie
in $H_0(\trunc r x ;{\overline \Omega} )$ they have unique representatives
in $H^*(LX)$ which satisfy $x^{r+1}=0$, $({\bf d} x)^2=0$ and
$d(x)={\bf d} x$. So the possible extension questions are whether
the relations $\gamma_i^p=0$ and $d\gamma_i=0$ hold.
Let us denote a class in $H^*(LX)$ representing
$\gamma_i \in E_\infty$ by the symbol $\overline \gamma_i$.
We look at $\overline \gamma_i^p$. We know that $\overline \gamma_i^p=0$
up to classes of the same total degree and strictly higher
filtration. The class $\gamma_{ip}$ has the same total degree, but
also the same filtration degree as $\gamma_i^p$.
Using that $ |\gamma_{j+1} |=|x^r \gamma_j |+\alpha -2$, we see that if
$\alpha \geq 4$ this is the only class of the same total degree
as $\gamma_i ^p$, so that independently of the choice of
$\overline \gamma_i$, we indeed have that $\overline \gamma_i^p=0$.
Similarly $d\gamma_i=0$ up to classes of the same total degree and
of strictly higher filtration. But if $\alpha \geq 4$, using that
$|d \gamma_{j+1} |=|x^{r-1}{\bf d} x \gamma_j |+\alpha -2$
we see that there is no such class. So $d\overline \gamma_i=0$.
Now we consider the slightly more complicated case $\alpha =2$.
In this case the class $x^r\gamma_j$ has the same total degree as
$\gamma_{j+1}$ and strictly higher filtration. It is the only
such class. Similarly, $x^{r-1}{\bf d} x \gamma_j$ is the
only class of same total degree as $d\gamma_{j+1}$ and of strictly
higher filtration.
The filtration of $H^*=H^*(LX)$ has the form
\[ H^* \supseteq \dots \supseteq F^{-i}H^* \supseteq F^{-i+1}H^*
\supseteq \dots \supseteq F^0H^* \supseteq F^1H^*=0. \]
We now show that we can choose $\overline \gamma_i \in F^{-i}H^*$
such that $[\overline \gamma_i] = \gamma_i \in E_\infty^{-i,*}$ and
$d\overline \gamma_i =0$. For $i=0$ the unit $\overline \gamma_0 =1$
has these properties. Assume that $\overline \gamma_j$ has been chosen
with these properties for $1\leq j < i$. Choose
$\gamma_i^\prime \in F^{-i}H^*$ such that
$[\gamma_i^\prime ] =\gamma_i \in E_\infty^{-i,*}$.
We have $[d\gamma_i^\prime ]=0$ so
$d\gamma_i^\prime =kx^{r-1}dx \overline \gamma_{i-1}$ for
some $k\in {\mathbb F}_p$. Put $\overline \gamma_i =
\gamma_i^\prime-\frac k r x^r\overline \gamma_{i-1}$.
Then, $[\overline \gamma_i] = \gamma_i \in E_\infty^{-i,*}$ and
$d\overline \gamma_i=
d\gamma_i^\prime -k x^{r-1}dx\overline \gamma_{i-1} =0$.
We claim that these representatives satisfy
$(\overline \gamma_{p^i} )^p=0$ for $i\geq 0$.
From the spectral sequence we see that
$(\overline \gamma_{p^i} )^p=c x^r\overline \gamma_{p^{i+1}-1}$
for some $c \in {\mathbb F}_p$. We apply the action differential
on this equation and find
\[ 0 = d((\overline \gamma_{p^i})^p) =
crx^{r-1}dx \overline \gamma_{p^{i+1}-1}, \]
and since $r\equiv -1$ mod $p$ it follows that $c=0$.
So there is a well defined algebra map
\[ \phi :{\mathbb F}_p [x]/(x^{r+1}) \otimes \Lambda ({\bf d} x) \otimes
\Gamma [\omega ] \to H^*(LX), \]
which maps $\gamma_i (\omega )$ to $\overline \gamma_i$.
We can filter the domain such that $\phi$ becomes a map of filtered
algebras. Since $\phi$ is an isomorphism on associated graded objects, it is
itself an isomorphism.
2) Assume that $p\nmid (r+1)$.
The classes and multiplicative relations in $E_2$ are as follows:
\begin{tabular}{l !{\quad} l !{\quad} l !{\quad} l}
\toprule
class or relation & type & total degree/parity & filtration \\
\midrule
$b_0^ia_j$,\quad \medspace $0\leq i\leq r-1$ & class
& $\rho j+\alpha(i+1)-1$, odd & $-j$ \\
$b_0^ib_j$, \quad $0\leq i\leq r-1$ & class & $\rho j+\alpha(i+1)$, even
& $-j$ \\
$b_ia_j-\binom {i+j} i b_0a_{i+j}$ & relation & $\rho(i+j)+2\alpha-1$, odd
& $-i-j$ \\
$b_0^ra_j$ & relation & $\rho j+\alpha (r+1)-1$, odd & $-j$ \\
$a_ia_j$ & relation & $\rho (i+j)+2\alpha-2$, even & $-i-j$ \\
$b_ib_j-\binom {i+j} i b_0b_{i+j}$ & relation & $\rho(i+j)+2\alpha$, even
& $-i-j$ \\
$b_0^rb_j$ & relation & $\rho j+\alpha (r+1)$, even &$-j$\\
\bottomrule
\end{tabular}
\noindent We note that if $0\leq i\leq r-1$, then
\[
0 < \alpha \leq \alpha(i+1) \leq \alpha r
\leq \rho.
\]
From this and the list of classes above it follows easily
that there is at most one class in each total degree.
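As an illustrative check of this uniqueness claim (the helper `class_degrees` is ours, not part of the paper), one can enumerate the total degrees of the classes $b_0^ia_j$ and $b_0^ib_j$ for a few sample values of $r$ and even $\alpha$ and confirm that they are pairwise distinct:

```python
def class_degrees(r, alpha, jmax):
    """Total degrees of b_0^i a_j (odd) and b_0^i b_j (even),
    for 0 <= i <= r-1 and 0 <= j <= jmax, with rho = (r+1)*alpha - 2."""
    rho = (r + 1) * alpha - 2
    degs = []
    for j in range(jmax + 1):
        for i in range(r):
            degs.append(rho * j + alpha * (i + 1) - 1)  # b_0^i a_j, odd degree
            degs.append(rho * j + alpha * (i + 1))      # b_0^i b_j, even degree
    return degs

# No repeated total degree for several truncation lengths r and even |x| = alpha.
for r, alpha in [(1, 2), (2, 2), (4, 2), (3, 4), (5, 6)]:
    degs = class_degrees(r, alpha, 50)
    assert len(degs) == len(set(degs))
```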
A differential in the spectral sequence raises total
degree by one, and it strictly increases the
filtration.
The unique class of total degree one higher than $a_j$
is $b_j$, which has the same filtration as $a_j$, so that $a_j$ is a
permanent cycle. If $\alpha\geq 4$, there is no non-trivial class
of degree one higher than $b_j$.
If $\alpha=2$, the unique class of degree one higher than $b_j$ is
$b_0a_j$ which again has the same filtration so $b_j$ is also a
permanent cycle.
Each of the relations given above is true for any lifting of the
generators in $E_\infty$ to generators of $H^*(LX)$ up to
classes of the same total degree and strictly higher filtration.
We claim that in each case, there are no non-trivial such classes.
If $\alpha \geq 4$, there are no nontrivial classes of the
same dimension as $b_0^ra_j$ or $b_0^rb_j$. In case
$\alpha=2$, there are the unique classes $a_{j+1}$
respectively $b_{j+1}$. But these have lower
filtration than the relation, so they cannot contribute to
extensions.
If $\alpha \geq 4$ there is no class which has the same
total degree as the relation $a_ia_j$. In case
$\alpha =2$, there is the unique possibility $b_{i+j}$,
which does not matter anyway since its filtration
is too low.
Also, the relations
$b_ia_j-\binom {i+j} i b_0a_{i+j}$ and
$b_ib_j-\binom {i+j} i b_0b_{i+j}$ have the same
total degree as $b_0a_{i+j}$
respectively $b_0b_{i+j}$. By a filtration check,
there can be no extension problems.
Finally we have to consider the action differential.
The filtration argument says that
$db_i$ is as stated. We have to argue that
$da_i=0$. If $\alpha\geq 4$, there is no
non-trivial possibility for $da_i$. If
$\alpha=2$, the class $da_i$ has the same
total degree as $b_{i-1}b_0^{r-1}$.
Because $d$ is a differential, $d(a_i)=0$
unless
\[
0=db_i=(1+(r+1)i)a_i,
\]
and
\[
0=d(b_{i-1}b_0^{r-1})=
(1+(r+1)(i-1))b_0^{r-1}a_{i-1}+
(r-1)b_{i-1}b_0^{r-2}a_0=
((r+1)i-1)b_0^{r-1}a_{i-1}.
\]
Subtracting these two congruences modulo $p$ gives $2=0$, that is
$p=2$; then $(r+1)i$ must be odd, so $r$ is even and $i$ is odd.
\end{proof}
We will later want to know how big the cokernel of the
action differential is, so we do the counting now.
We write ${\mathbb N} = \{ 1,2,3,\dots \}$ for the set of natural numbers.
\begin{definition}
\label{def:TF}
Let $p$ be a prime and let $r,\alpha \in {\mathbb N}$ with $\alpha$ even.
Put $\rho=(r+1)\alpha-2$ and let
\[
\chi_p (s) = \begin{cases}
0 & \text{if } p \mid s, \\
1 & \text{if } p \nmid s.
\end{cases}
\]
We define two subsets of ${\mathbb N}$ as follows:
\begin{align*}
& {\mathcal IF} (r,p,\alpha )=
\{ \rho i+\alpha j \mid \chi_p (r+1)\leq j\leq r, 0\leq i
\text{ and } p\mid ((r+1)i+j) \} \setminus \{ 0 \} , \\
& {\mathcal IT} (r,p,\alpha )=
\{ \rho i+\alpha j \mid \chi_p (r+1)\leq j\leq r, 0\leq i
\text{ and } p\nmid ((r+1)i+j) \} .
\end{align*}
\end{definition}
The notation ${\mathcal IF}$ refers to an {\em index} set for {\em free}
generators and ${\mathcal IT}$ refers to an {\em index } set for
{\em transfer} generators. This choice of notation will make sense
later on.
\begin{lemma}
\label{le:twosets}
Whether a natural number $k$ is contained in
${\mathcal IF} (r,p,\alpha )$ respectively in ${\mathcal IT} (r,p,\alpha )$ only
depends on the congruence class of $k$ modulo $\rho p$.
Furthermore,
\begin{align*}
& {\mathcal IF} (r,p,\alpha )\cap {\mathcal IT} (r,p,\alpha ) =
\begin{cases}
2r {\mathbb N} & \text{if $p\mid (r+1)$ and $\alpha=2$,}\\
\emptyset& \text{otherwise,}
\end{cases}\\
& {\mathcal IF}(r,p,2)\cup {\mathcal IT}(r,p,2) = 2{\mathbb N}.
\end{align*}
If a number is in ${\mathcal IF} (r,p,\alpha )$ or in ${\mathcal IT} (r,p,\alpha )$,
there is a unique choice of numbers $i,j$ that displays it as such.
If $p\mid (r+1)$, the set
$\{ 0< 2k\leq \rho p m \mid 2 k\in {\mathcal IF} (r,p,\alpha ) \}$
has $m(r+1)$ elements. If $p\nmid (r+1)$, the set has $mr$ elements.
\end{lemma}
\begin{proof}
Note that $0\not\in {\mathbb N}$. We warm up by first ignoring the
congruence conditions. Assume that $i,i^\prime\in {\mathbb Z}$ and that
$\chi_p (r+1) \leq j, j^\prime \leq r$.
We make two claims about this situation.
\begin{enumerate}
\item If $i^\prime >i$ and
$\rho i+\alpha j=\rho i^\prime+\alpha j^\prime$, then
$\alpha =2$, $p\mid (r+1)$, $i^\prime = i+1$, $j^\prime=0$ and $j=r$.
\item If $\rho i+\alpha j > 0$, then $i\geq 0$.
\end{enumerate}
To prove the first claim, note that
$\alpha (j-j^\prime)=\rho (i^\prime-i)\geq \rho$. So
\[
j-j^\prime \geq \frac \rho \alpha = r+\frac {\alpha-2} {\alpha}.
\]
Since $0\leq j,j^\prime\leq r$ and $\alpha\geq 2$,
this is only possible if
$j=r$, $j^\prime=0$, $\alpha=2$. But this implies that
$i^\prime =i+1$ and by assumption, if $j^\prime =0$,
then $p\mid (r+1)$.
To prove the second claim, note that
\[
i> -\frac {\alpha j} {\rho} \geq
-\frac {\alpha r} {\rho} =
-\frac {\alpha r}{\alpha (r+1)-2} \geq
-\frac {\alpha r} {\alpha r}= -1.
\]
We now prove the lemma. First assume that
$k$ and $k+m\rho p$ are natural numbers. Assume also that
$k\in {\mathcal IF} (r,p,\alpha )$. We can write $k=\rho i+\alpha j$, where
$i,j$ satisfy the appropriate conditions.
Then $k+m\rho p=\rho(i+mp)+\alpha j$, and the pair $(i+mp,j)$
satisfies the same congruence conditions and conditions on $j$.
This proves that $k+m\rho p\in {\mathcal IF} (r,p,\alpha )$, if we know that
$i+mp\geq 0$. But this follows from claim 2. together with our
assumption that $k+m\rho p$ is a natural number.
The same argument shows that ${\mathcal IT} (r,p,\alpha )$ is also a union
of congruence classes of natural numbers.
If $x\in {\mathcal IF} (r,p,\alpha )\cap {\mathcal IT}(r,p,\alpha )$, it must be
possible to write $x$ in two different ways in the form
$x=\rho i+\alpha j$.
By the first claim, we get that the only possible way
this can happen is that $\alpha=2$, $p\mid (r+1)$ and
$x=2ri + 2r=2r(i+1)+0$. This proves that
${\mathcal IF} (r,p,\alpha )\cap {\mathcal IT} (r,p,\alpha )$ is empty unless
$p\mid (r+1)$ and $\alpha =2$. It also shows that
${\mathcal IF} (r,p,2)\cap {\mathcal IT}(r,p,2) \subseteq 2r{\mathbb N} .$
We have to show that if $p\mid (r+1)$
then $2r{\mathbb N} \subseteq {\mathcal IF} (r,p,2)\cap {\mathcal IT} (r,p,2)$.
In this case we write
$2rm=\rho i+\alpha j=\rho i^\prime +\alpha j^\prime$
for $(i,j)=(m,0)$ and $(i^\prime ,j^\prime )=(m-1,r)$.
This proves the claim, since $p\mid j$ but $p\nmid j^\prime$.
We have ${\mathcal IF} (r,p,2)\cup {\mathcal IT} (r,p,2) =
\{ 2(ri+j) \mid \chi_p (r+1) \leq j \leq r, 0 \leq i \}$,
which equals $2{\mathbb N}$ as stated. The uniqueness statement on
$i,j$ follows directly from claim 1.
To prove the final statement about the number of elements,
it is enough to show that the number of congruence
classes in ${\mathcal IF} (r,p,\alpha )$ modulo $\rho p$
is $r+1$ respectively $r$. In case $p\mid (r+1)$
the congruence classes of $2k$ are the classes
of the form $\rho i+\alpha pj^\prime$ for $0\leq i < p$
and $0\leq j^\prime <(r+1)/p$, and there are clearly
$p((r+1)/p)=r+1$ of those. In case $p\nmid (r+1)$,
each $j$ uniquely determines a congruence class
$i(j)$ modulo $p$ such that
$(r+1)i(j)+j\equiv 0\mod p$. That is, each $j$
with $1\leq j \leq r$ uniquely determines an $i$,
$0\leq i<p$ such that $(r+1)i+j\equiv 0\mod p$.
So there are exactly $r$ pairs $(i,j)$ qualifying,
and there are $r$ congruence classes in ${\mathcal IF} (r,p,\alpha )$.
\end{proof}
\begin{example}
Let $\alpha=2$, $r=2$. Then $\rho = 4$ and
\begin{align*}
& {\mathcal IF} (2,2,2) = \{ 4+8m, 6+8m | \medspace m\geq 0 \} , \\
& {\mathcal IF} (2,3,2) = 4{\mathbb N} , \\
& {\mathcal IF} (2,5,2) = \{ 8+20m, 14+20m | \medspace m\geq 0 \} , \\
& {\mathcal IF} (2,7,2) = \{ 10+28m, 20+28m | \medspace m\geq 0 \} .
\end{align*}
\end{example}
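These closed forms can be double-checked by brute force. The following illustrative Python sketch (the helper name `IF_set` is ours) enumerates ${\mathcal IF}(r,p,\alpha)$ up to a bound directly from definition \ref{def:TF}:

```python
def IF_set(r, p, alpha, bound):
    """Enumerate IF(r, p, alpha) intersected with [1, bound],
    straight from the definition: rho*i + alpha*j with
    chi_p(r+1) <= j <= r, i >= 0 and p | ((r+1)*i + j), excluding 0."""
    rho = (r + 1) * alpha - 2
    chi = 0 if (r + 1) % p == 0 else 1  # chi_p(r + 1)
    out = set()
    for i in range(bound + 1):          # rho * i exceeds bound beyond this range
        for j in range(chi, r + 1):
            n = rho * i + alpha * j
            if 0 < n <= bound and ((r + 1) * i + j) % p == 0:
                out.add(n)
    return out

# r = 2, alpha = 2, so rho = 4; compare with the closed forms in the example.
assert IF_set(2, 2, 2, 100) == {n for n in range(1, 101) if n % 8 in (4, 6)}
assert IF_set(2, 3, 2, 100) == set(range(4, 101, 4))
assert IF_set(2, 5, 2, 100) == {n for n in range(1, 101) if n % 20 in (8, 14)}
assert IF_set(2, 7, 2, 100) == {n for n in range(1, 101) if n % 28 in (10, 20)}
```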
\begin{lemma}
\label{le:countingtransfer}
Let $X$ be as in theorem \ref{th:action}, $k\in {\mathbb N}$ and put $H^*=H^*(LX)$.
\begin{enumerate}
\item \label{action:kernel}
The kernel of the action differential
$d: H^{2k} \to H^{2k-1}$ is either a trivial or a one dimensional
vector space. It is non-trivial if and only if
$2k\in {\mathcal IF}(r,p, \alpha )$.
\item \label{action:image}
The image of $d:H^{2k}\to H^{2k-1}$ is either a trivial or a one
dimensional vector space. It is non-trivial if and only if
$2k\in {\mathcal IT}(r,p, \alpha )$.
\item \label{action:cokernel}
The cokernel of $d:H^{2k}\to H^{2k-1}$ is either a trivial or a one
dimensional vector space. In case $p\nmid (r+1)$, it is non-trivial
if and only if $2k\in {\mathcal IF} (r,p,\alpha )$.
In case $p\mid (r+1)$ it is non-trivial
if and only if either $2k\in {\mathcal IF}(r,p,\alpha )$
and $\rho \nmid 2k$ or if $k>1$ and $2k\equiv 2\mod \rho$.
\item \label{action:kernelsum}
The kernel of the map
$d:\oplus_{2\leq 2k\leq \rho pm} H^{2k}\to
\oplus_{1\leq 2k-1\leq \rho pm-1} H^{2k-1}$
is a vector space of dimension $mr$ if $p\nmid (r+1)$, and
of dimension $m(r+1)$ if $p\mid (r+1)$.
\item \label{action:cokernelsum1}
The cokernel of the map
$d:\oplus_{2\leq 2k\leq \rho pm} H^{2k}\to
\oplus_{1\leq 2k-1\leq \rho pm-1} H^{2k-1}$ is a vector space of
dimension $rm$ when $p\nmid (r+1)$.
\item \label{action:cokernelsum2}
The cokernel of the map
$d:\oplus_{2\leq 2k\leq \rho pm+2} H^{2k}\to
\oplus_{1\leq 2k-1\leq \rho pm+1} H^{2k-1}$
is a vector space of dimension $(r+1)m$ when $p\mid (r+1)$.
\end{enumerate}
\end{lemma}
\begin{proof}
For the action differential $d:H^{2k} \to H^{2k-1}$ we have the equation
\begin{gather}
\begin{split}
\label{eq:cokerdimension}
\dim {\mathrm {coker}} (d) &=\dim H^{2k-1}-\dim {\mathrm {im}} (d)\\
&= \dim H^{2k-1} - (\dim H^{2k}-\dim \ker (d))\\
&= -(\dim H^{2k}-\dim H^{2k-1})+\dim \ker (d).
\end{split}
\end{gather}
We first consider the case $p\nmid (r+1)$.
The even part $H^{\text{even}}$ has basis $\{ b_0^{j-1}b_i\}$ and
the odd part $H^{\text{odd}}$ has basis
$\{ b_0^{j-1}a_i\}$ where $0\leq i$, $1\leq j\leq r$. The basis
elements sit in degrees $\rho i +\alpha j$ and $\rho i + \alpha j -1$
respectively. The kernel of the action differential
$d:H^{\text{even}}\to H^{\text{odd}}$ is generated by
those $b_0^{j-1}b_i$ for which $p\mid ((r+1)i+j)$ and
its image is generated by those $b_0^{j-1}a_i$ for which $p\nmid ((r+1)i+j)$.
Combining this with the uniqueness statement for $i,j$
in lemma \ref{le:twosets} we see that \ref{action:kernel}. and
\ref{action:image}. are valid. Furthermore we see that
$\dim H^{2k}=\dim H^{2k-1}$ which combined with
(\ref{eq:cokerdimension}) gives us \ref{action:cokernel}.
Next we consider the case $p\mid (r+1)$. Here $H^{\text{even}}$ has
basis $\{ x^j\gamma_i (\omega ) \}$ and $H^{\text{odd}}$ has basis
$\{ x^jdx\gamma_i (\omega )\}$ where $0\leq i$, $0\leq j\leq r$.
The basis elements sit in degrees $\rho i+\alpha j$ and
$\rho i+\alpha j+\alpha -1$ respectively. The kernel for the action
differential $d:H^{\text{even}}\to H^{\text{odd}}$ is generated by those
$x^j\gamma_i (\omega )$ for which $p\mid j$ and the image is
generated by those $x^{j-1}dx\gamma_i (\omega )$ for which
$p\nmid j$. As above, lemma \ref{le:twosets} gives us that
\ref{action:kernel}. and \ref{action:image}. are valid.
We now prove \ref{action:cokernel}. One checks it directly
for $(r,p,\alpha )=(1,2,2)$. Assume that $(r,p,\alpha )\neq (1,2,2)$
which implies $\rho >2$. By a counting argument we find the following:
\begin{equation}
\label{eq:homdimension}
\dim H^{2k}-\dim H^{2k-1}=
\begin{cases}
1 & \text{if $\rho \mid 2k$,}\\
-1 & \text{if $\rho \mid (2k-2)$ and $k>1$,}\\
0 & \text{otherwise.}
\end{cases}
\end{equation}
We combine (\ref{eq:homdimension}) and (\ref{eq:cokerdimension})
in order to prove statement \ref{action:cokernel}.
If $\rho \mid 2k$ we have $2k\in {\mathcal IF} (r,p,\alpha )$ so the
dimension of the cokernel becomes $0$. Assume that
$\rho \mid (2k-2)$ and $k>1$. We claim that this implies that
$2k\notin {\mathcal IF} (r,p,\alpha )$. For if $2k\in {\mathcal IF} (r,p,\alpha )$
we have that $2k=\rho i+\alpha j$ where $0\leq i$, $0\leq j \leq r$
and $p\mid j$. It follows that
$2\equiv 2k \equiv \alpha j \mod \rho$. Since
$\rho >2$, we cannot have $j=0$. We conclude that
$1 \leq p\leq j$. On the other hand,
$0\equiv 2k-2=\rho i+ \alpha j -2\equiv \alpha j-2\mod \rho$,
so $\rho \leq \alpha j-2$. Now we have our contradiction, since
$\alpha j-2 \leq \alpha r-2 <\rho$. This finishes the proof of
\ref{action:cokernel}.
We have reduced the last three statements of the lemma
to statements about the sets ${\mathcal IT} (r,p,\alpha )$ and
${\mathcal IF} (r,p,\alpha )$. We see that \ref{action:kernelsum}.
is equivalent to the statement that the set
$\{ 0< 2k \leq \rho pm | 2k \in {\mathcal IF} (r,p,\alpha) \}$
has $rm$ elements if $p\nmid (r+1)$ and
$(r+1)m$ elements if $p\mid (r+1)$. But this is exactly
the content of the last statement of lemma \ref{le:twosets}.
Statement \ref{action:cokernelsum1}. follows from statement
\ref{action:kernelsum}. and (\ref{eq:cokerdimension}).
Finally, we prove statement \ref{action:cokernelsum2}.
One verifies it directly for
$(r,p,\alpha )=(1,2,2)$. Assume that $(r,p,\alpha )\neq (1,2,2)$.
Formula (\ref{eq:homdimension}) gives us that
\[
\sum_{2\leq 2k\leq \rho pm+2}{(\dim H^{2k}-\dim H^{2k-1})}=0.
\]
So by (\ref{eq:cokerdimension}), \ref{action:kernelsum}. and
\ref{action:kernel}. it suffices to check that
$\rho p m +2\notin {\mathcal IF} (r,p,\alpha )$. This follows from
lemma \ref{le:twosets} since $\rho pm+2\equiv 2$ mod $\rho p$.
\end{proof}
We define two formal power series in ${\mathbb Z} [[t]]$ by
\[
P_{{\mathcal IT} (r,p,\alpha )}(t)= \sum_{n \in {\mathcal IT} (r,p,\alpha )}t^n,
\quad
P_{{\mathcal IF} (r,p,\alpha )}(t)= \sum_{n\in {\mathcal IF} (r,p,\alpha )}t^n.
\]
Note that the constant terms are zero in both series. Furthermore,
if the numbers $\kappa_i$ in theorem \ref{th:action} are
zero for all $i$, then $P_{{\mathcal IT} (r,p,\alpha )}(t)=tP_{{\mathrm {im}} (d)}(t)$ where
$P_{{\mathrm {im}} (d)}(t)$ is the Poincar\'e series for the image of the
action differential. By lemma \ref{le:twosets} we find the following
result:
\begin{lemma} \label{le:Poincaretwosets}
When $\alpha =2$, so that $\rho = 2r$, we have that
\[
P_{{\mathcal IT} (r,p,2)}(t)+P_{{\mathcal IF} (r,p,2)}(t)=
\frac {t^2} {1-t^2}+(1-\chi_p (r+1)) \frac {t^{2r}} {1-t^{2r}} .
\]
\end{lemma}
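This identity can be derived from lemma \ref{le:twosets}: the union of the two index sets is $2{\mathbb N}$ and their intersection is $2r{\mathbb N}$ precisely when $p\mid (r+1)$, so that, by inclusion-exclusion on the indicator functions,
\begin{align*}
P_{{\mathcal IT} (r,p,2)}(t)+P_{{\mathcal IF} (r,p,2)}(t)
&= \sum_{n\in 2{\mathbb N}}t^n + (1-\chi_p (r+1))\sum_{n\in 2r{\mathbb N}}t^n \\
&= \frac {t^2}{1-t^2} + (1-\chi_p (r+1)) \frac {t^{2r}}{1-t^{2r}} .
\end{align*}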
\section{The $E_3$-term of the Serre spectral sequence}
\label{sec:SerreSS}
Let $Y$ be a ${\mathbb T}$-space with
$H_*(Y)$ of finite type. Write $q:E{\mathbb T} \times Y \to E{\mathbb T} \times_{\mathbb T} Y$
for the quotient map. As described in appendix
\ref{Appendix:s1transfer} there is a ${\mathbb T}$-transfer map
$\tau$ such that the composite $q^*\circ \tau$ equals the action
differential:
\[ d: H^{*+1}(Y) \xrightarrow{\tau } H^*(Y_{h{\mathbb T}}) \xrightarrow{q^*}
H^*(Y) .\]
Since $B{\mathbb T}$ is 1-connected and $H^*(B{\mathbb T}) ={\mathbb F}_p [u]$ where $|u|=2$,
the Serre spectral sequence for the fibration $Y\to Y_{h{\mathbb T}} \to B{\mathbb T}$
has the following form:
\[ E_2^{*,*} = {\mathbb F}_p [u] \otimes H^*(Y) \Rightarrow H^*(Y_{h{\mathbb T}}). \]
The $d_2$ differential is determined by the action differential since
$d_2y=udy$ for all $y \in H^*(Y) $. Thus, the $E_3$ term has the
following form:
\[ E_3^{*,*} = {\operatorname{im}} (d) \oplus ({\mathbb F}_p [u]\otimes H(d)), \]
where ${\operatorname{im}} (d)$ and $H(d)$ denote the image and the homology of the
action differential respectively.
\begin{proposition} \label{imd}
The subspace ${\operatorname{im}} (d) \subseteq E_3^{*,*}$ survives to
$E_\infty^{*,*}$. For any $a\in H^*(Y)$ one has that
$\tau (a) \in H^*(Y_{h{\mathbb T} })$ represents
$da \in E_{\infty }^{0,*}$ and that $pr_1^*(u)\tau (a)=0$
in $H^*(Y_{h{\mathbb T} })$, where $pr_1 : Y_{h{\mathbb T}} \to B{\mathbb T}$ denotes the
projection on the first factor.
\end{proposition}
\begin{proof}
There are two commutative diagrams
\[ \xymatrix@C=1cm{
H^*(Y) \ar[ddr]_-{d} \ar[r]^-{\tau}
& H^*(Y_{h{\mathbb T} }) \ar[dd]_-{q^*} \ar@{>>}[r]
& E_\infty^{0,*} \ar@{^{(}->}[d]
& & H^*(Y_{h{\mathbb T} }) \ar[r]^-{q^*}
& H^*(E{\mathbb T} \times Y) \\
& & E_3^{0,*} \ar@{^{(}->}[d]
& & H^*(B{\mathbb T} ) \ar[r] \ar[u]^-{pr_1^*} & H^*(E{\mathbb T} ) \ar[u]_-{pr_1^*}\\
& H^*(Y) \ar@{=}[r] & E_2^{0,*}
} \]
Assume that $a\in H^*(Y)$ has $da\neq 0$. The diagram to the left
shows that $\tau (a)\neq 0$ and that $da$ survives to $E_\infty$ and
is represented by $\tau (a) \in H^*(Y_{h{\mathbb T} })$.
The diagram to the right shows that $q^*\circ pr_1^*(u)=0$ since
$E{\mathbb T}$ is contractible. By Frobenius reciprocity,
$pr_1^*(u)\tau (b) = \tau (q^*(pr_1^* (u))b)=0$ for any $b\in H^*(Y)$.
\end{proof}
We now take $Y=LX$. By the result in the previous section we
can compute the $E_3$ term of the Serre spectral sequence.
\begin{proposition} \label{th:E3Serre}
Let $p$ be a prime. Assume that $X$ is a 1-connected space with
mod $p$ homology of finite type. Assume also that
\[ H^*(X;{\mathbb F}_p )={\mathbb F}_p [x]/(x^{r+1}), \]
where $\alpha=|x|$ is even and $r\geq 1$. Put $\rho=(r+1)\alpha-2$.\\
1) If $p\mid (r+1)$ then
\[ E_3^{*,*} \cong
\big( {\mathbb F}_p [u,\phi , q, \delta_0 , \delta_1 , \dots ,\delta_{p-2}]
/I \big) \otimes \Gamma [\omega ], \]
where $I$ is the ideal
\[ I=(\phi^{(r+1)/p},\medspace q^2,\medspace
\delta_j u,\medspace \delta_j q ,\medspace \delta_j \delta_k
\medspace | 0\leq j\leq p-2, \medspace 0\leq k \leq p-2 ). \]
The bidegrees are
$\Vert u \Vert = (2,0)$,
$\Vert \phi \Vert = (0,p\alpha )$, $\Vert q \Vert = (0,p\alpha -1)$,
$\Vert \delta_j \Vert = (0, j\alpha +\alpha -1)$ and
$\Vert \gamma_i (\omega ) \Vert =(0,\rho i)$.
The generators are represented by elements in the $E_2$ term as
follows:
\[ \quad u = [u], \quad \phi = [x^p], \quad q = [x^{p-1} dx], \quad
\delta_j = [x^jdx], \quad
\gamma_i (\omega ) = [\gamma_i (\omega )]. \]
2) If $p\nmid (r+1)$ and the numbers $\kappa_t$ from
theorem \ref{th:action} are zero for all $t$, then
\begin{align*}
E_3^{*,*} \cong {\mathbb F}_p [v_i^{(k)} ,w_i^{(k)} ,\trcl i h , u |
& 1\leq k \leq r, \medspace p\mid ((r+1)i+k), \\
& 1\leq h \leq r, \medspace p\nmid ((r+1)i+h),
\medspace 0 \leq i] /I ,
\end{align*}
where $I$ is the ideal generated by the elements
\begin{align*}
& \trcl i h u,\quad \trcl i h w_j^{(\ell )},\quad
\trcl i h \trcl j m ,\quad w_i^{(k)}w_j^{(\ell)}, \\
& v_i^{(k)} v_j^{(\ell )} -
\epsilon_{r} (k+\ell ) \binom {i+j} i v_{i+j}^{(k+\ell)}, \\
& v_i^{(k)} w_j^{(\ell )} -
\epsilon_{r} (k+\ell ) \binom {i+j} i w_{i+j}^{(k+\ell )}, \\
& v_i^{(k)} \trcl j h -
\epsilon_{r} (k+h) \binom {i+j} i \trcl {i+j} {k+h}.
\end{align*}
Here the number $\epsilon_{r}(s)$ equals $1$ if $1\leq s \leq r$ and
$0$ otherwise. The bidegrees of the generators are
\begin{align*}
& \Vert v_i^{(k)}\Vert = (0, \rho i +\alpha k), \quad
\Vert w_i^{(k)}\Vert = (0, \rho i +\alpha k-1), \\
& \Vert \trcl i h\Vert = (0, \rho i +\alpha h-1), \quad
\Vert u \Vert = (2,0),
\end{align*}
and the generators are represented by elements in the $E_2$ term
as follows:
\[ v_i^{(k)}=[b_0^{k-1}b_i], \quad w_i^{(k)}=[b_0^{k-1}a_i], \quad
\trcl i h = [b_0^{h-1}a_i], \quad u=[u]. \]
\end{proposition}
\begin{proof}
1) Assume that $p\mid (r+1)$ and write $r+1 =mp$ for some $m\geq 1$.
By the K\" unneth formula we have that
\[ H(d) = H\big( {\mathbb F}_p [x]/(x^{mp}) \otimes \Lambda (dx);d\big)
\otimes \Gamma [\omega ]. \]
Since $d(x^j) = jx^{j-1}dx$ and
$d(x^jdx)=0$, the kernel and the image of the differential on
${\mathbb F}_p [x]/(x^{mp})\otimes \Lambda (dx)$ have the
following ${\mathbb F}_p$-bases:
\begin{align*}
& \{ x^{kp}, x^j dx |
0\leq k \leq m-1, 0 \leq j \leq mp-1 \} , \\
& \{ x^{j-1}dx |1 \leq j \leq mp-1, p\nmid j \} .
\end{align*}
It follows that
$H(d)={\mathbb F}_p [\phi ]/(\phi^m )\otimes \Lambda (q) \otimes
\Gamma [\omega ],$ where $\phi = x^p$ and $q=x^{p-1}dx$.
We can get the basis elements in ${\operatorname{im}} (d)$ by multiplying the
elements in the set $\{ x^kdx | 0\leq k \leq p-2 \}$
by the elements $\gamma_i (\omega )$ and powers of $\phi$.
The result follows.
2) Assume that $p\nmid (r+1)$. By remark \ref{rem:action} we see that
$\ker (d)$, ${\operatorname{im}} (d)$ and $H(d)$ have the following
respective ${\mathbb F}_p$-bases:
\begin{align*}
& b_0^{j-1} a_i, \medspace b_0^{k-1} b_i & \text{ for } &
p\mid ((r+1)i+k), \\
& b_0^{h-1} a_i & \text{ for } &
p\nmid ((r+1)i+h), \\
& [b_0^{k-1}b_i], \medspace [b_0^{k-1}a_i] & \text{ for } &
p\mid ((r+1)i+k),
\end{align*}
where $0\leq i$ and $1 \leq h,k,j \leq r$. The result follows.
\end{proof}
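The dimension count behind these bases can be double-checked by machine. The sketch below is our own verification, not part of the paper (names and conventions are ours): it realizes the differential $d(x^j)=jx^{j-1}dx$, $d(x^jdx)=0$ on ${\mathbb F}_p [x]/(x^{mp})\otimes \Lambda (dx)$ as a matrix over ${\mathbb F}_p$ and confirms $\dim \ker (d)=mp+m$ and $\dim {\operatorname{im}} (d)=mp-m$, so $\dim H(d)=2m$, matching ${\mathbb F}_p [\phi ]/(\phi^m )\otimes \Lambda (q)$.

```python
# Verification sketch (ours, not from the paper): dimensions of ker(d) and
# im(d) for d on F_p[x]/(x^{mp}) tensor Lambda(dx), where
# d(x^j) = j x^{j-1} dx and d(x^j dx) = 0.

def rank_mod_p(rows, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination mod p."""
    rows = [[x % p for x in row] for row in rows]
    if not rows:
        return 0
    rank = 0
    for col in range(len(rows[0])):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # inverse mod p; p is prime
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                c = rows[r][col]
                rows[r] = [(a - c * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def homology_dims(p, m):
    """(dim ker d, dim im d) over F_p on the 2mp-dimensional complex.

    Basis: x^0, ..., x^{mp-1}, then x^0 dx, ..., x^{mp-1} dx.
    """
    n = m * p
    mat = []
    for j in range(n):                 # d(x^j) = j x^{j-1} dx
        row = [0] * (2 * n)
        if j >= 1:
            row[n + j - 1] = j % p
        mat.append(row)
    mat += [[0] * (2 * n) for _ in range(n)]   # d(x^j dx) = 0
    rk = rank_mod_p(mat, p)
    return 2 * n - rk, rk
```

Running this for small $(p,m)$ reproduces the ranks predicted by the bases listed in the proof.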
\begin{corollary} \label{cor:Poincare}
If $X$ satisfies the hypothesis of proposition \ref{th:E3Serre},
then the Poincar\'e series for the $E_3$ term of the
Serre spectral sequence is given by the following when $p\mid (r+1)$:
\[
\frac {1-t^{(r+1)\alpha}} {(1-t^{p\alpha})(1-t^{(r+1)\alpha -2})} \cdot
\big( \frac {1+t^{p\alpha -1}} {1-t^2}+
\frac {t^{\alpha -1}-t^{p\alpha -1}} {1-t^{\alpha}} \big) ,
\]
and by the following when $p\nmid (r+1)$:
\[
\frac 1 {1-t^2} \big( 1+P_{{\mathcal IF} (r,p,\alpha )}(t)+
\frac 1 t P_{{\mathcal IF} (r,p,\alpha )}(t) \big)+
\frac 1 t P_{{\mathcal IT} (r,p,\alpha )}(t).
\]
\end{corollary}
By proposition \ref{imd} and proposition \ref{th:E3Serre} we have
\begin{corollary}
If $X$ satisfies the hypothesis of proposition \ref{th:E3Serre} then,
\begin{enumerate}
\item If $p\mid (r+1)$ then the element
$\gamma_i(\omega ) \phi^j \delta_k\in E_3$ survives
to $E_\infty$ and is represented by
$\tau (x^{pj+k+1}\gamma_i (\omega ))$ for
$0\leq i$, $0\leq j \leq (r+1)/p-1$ and $0\leq k \leq p-2$.
\item If $p\nmid (r+1)$ then the generator $\trcl i h \in E_3$ survives
to $E_\infty$ and is represented by
$\tau (b_0^{h-1}b_i)$ for $1\leq h\leq r$ with $p\nmid ((r+1)i+h)$
and $0\leq i$.
\end{enumerate}
\end{corollary}
\section{Comparing the two spectral sequences.}
\label{se:MorseSerreSS}
In this section we will complete the investigation of the Morse
spectral sequence $E_*({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}})$.
We write ${\mathcal F}_0 \subseteq {\mathcal F}_1 \subseteq \dots \subseteq L{\mathbb{C}\mathrm{P}}^r$
for the energy filtration. So far, the main structural facts which
we proved in section \ref{sec:MorseCPr} are the following:
\begin{enumerate}[{SF}(1)]
\item \label{Geometry}
The classes of even total degree are concentrated in
$\bigoplus_m E_*^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}})$ (theorem
\ref{th:MorsePoincare}).
\item \label{SSmodule}
$E_*^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$ is a free ${\mathbb F}_p [u]$ module.
If $p\nmid n$, then $E_*^{n,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$ is a finite
dimensional vector space (theorem \ref{th:MorsePoincare}).
\item \label{Oddeven}
Every non-trivial differential goes from even total degree to
odd total degree (lemma \ref{le:Preliminary}).
\item \label{Epcoll}
The spectral sequence collapses from the $E_p$ page (theorem
\ref{Epcollapse}).
\end{enumerate}
\begin{remark}
\label{re:filtsurjective}
It follows from SF(\ref{Oddeven}) that the inclusion
$j:({\mathcal F}_n)_{h{\mathbb T}} \hookrightarrow L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}}$ induces a surjective
map $j^*:H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )\to H^{\text{odd}}(({\mathcal F}_n)_{h{\mathbb T}})$
for all $n\geq 0$.
\end{remark}
By SF(\ref{Geometry}) and SF(\ref{Epcoll}) we see that the
only possibly non-trivial differentials in the spectral sequence are
\[ \xymatrix@C=1cm{
E_s^{pm,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar[r]^-{d_s} &
E_s^{pm+s,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )\cong
E_1^{pm+s,*}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )
} \]
for $1\leq s\leq p-1$.
We are also going to use the non-equivariant spectral sequence
$E_*({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )$. From theorem \ref{le:nonequivariant}
we have the following structural facts:
\begin{enumerate}[{SF}(1)]
\setcounter{enumi}{4}
\item \label{Oddvanishing}
$E_1^{n,2i+1-n}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )=0$ if $p\mid (r+1)$ and $i\geq rn+1$ or if
$p\nmid (r+1)$ and $i\geq rn$.
\item \label{Morsesurject}
The map
$E^{\text{odd}}_1({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )
\to E^{\text{odd}}_1({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )$ is surjective.
\end{enumerate}
\begin{remark}
\label{re:Homologysurject}
By SF(\ref{Morsesurject}) and SF(\ref{Oddeven}) the map
$E^{\text{odd}}_\infty ({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )
\to E^{\text{odd}}_\infty ({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r )$
is a surjection. A filtration argument shows that
if a map in some degree induces a surjective map on the $E_\infty$ pages,
then it also induces a surjective map on the cohomology
of the targets of the spectral sequences. So
$H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ) \to H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r )$ is also
surjective.
\end{remark}
Our plan for computing $H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$ goes as follows.
First, we concentrate on the odd part of $H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$.
The sum of the odd dimensional cohomology groups
$H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ is a submodule of
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ over
$H^*(B{\mathbb T} )$, since this ring is concentrated in even degrees.
We will list a set of elements in this module, and use the above
properties of the spectral sequences to show that these elements
are generators. We also give the relations satisfied by these elements,
thus computing the odd part of the Borel cohomology.
Determining the odd part of the cohomology is not quite
the same as determining the differentials in the Morse
spectral sequence, but in our situation it is close enough.
We can use the knowledge of the odd cohomology together with
the spectral sequences to determine the even dimensional cohomology.
The first step in this program is to find elements in
$H^\text{odd}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$ which can serve as generators.
We use the results on the transfer map to find the right elements.
Consider the ${{\mathbb T}}$ transfer map $\tau$ in the context
of the Morse filtration of $L {\mathbb{C}\mathrm{P}}^r$. Let
$i: L{\mathbb{C}\mathrm{P}}^r \to L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}}$ be the inclusion.
It follows from theorem \ref{th:s1transfer} that the composite
\[ \xymatrix@C=1cm{
H^{*+1}(L{\mathbb{C}\mathrm{P}}^r )\ar[r]^-{\tau} & H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )\ar[r]^-{i^*}
& H^* (L{\mathbb{C}\mathrm{P}}^r )
} \]
equals the action differential $d$.
We can now choose one half of our generators. These
generators come with the relation that they are annihilated by
multiplication by $u$.
\begin{lemma}
\label{le:torsionGen}
There is a graded subgroup
${\mathcal T}^* \subseteq H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$
such that
\begin{enumerate}
\item $u{\mathcal T}^*=0$.
\item The map $i^*: H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )\to H^*(L{\mathbb{C}\mathrm{P}}^r)$
restricts to an injective map on ${\mathcal T}^*$.
\item The subgroup $i^*({\mathcal T}^* )\subseteq H^*(L{\mathbb{C}\mathrm{P}}^r )$
agrees with the image of the composite map
$i^*\circ \tau :H^{*+1}(L{\mathbb{C}\mathrm{P}}^r )\to H^*(L{\mathbb{C}\mathrm{P}}^r )$.
\end{enumerate}
\end{lemma}
\begin{proof}
We choose a graded subgroup
$\overline {\mathcal T}^{*+1} \subseteq \tilde H^{*+1}(L{\mathbb{C}\mathrm{P}}^r )$
which maps isomorphically to the image of $d$. That is,
we choose (arbitrarily) a splitting of the surjective map
$d: H^{*+1}(L{\mathbb{C}\mathrm{P}}^r )\to {\mathrm {im}}(d)^*$.
Then ${\mathcal T}^* =\tau (\overline {\mathcal T}^{*+1} )\subseteq
H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$ is a subgroup which by its definition satisfies
the second and third property. It also satisfies the first property,
since $u\tau=0$ by theorem \ref{th:s1transfer}.
\end{proof}
Our second batch of generators is not subject to any relations.
\begin{lemma} \label{le:freeGen}
There is a graded subgroup
${\mathcal U}^* \subseteq H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$
such that the composite
\[ \xymatrix@C=1cm{
{\mathcal T}^* \oplus {\mathcal U}^* \ar[r] &
H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )\ar[r]^-{i^*} &
H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r)
} \]
is an isomorphism. In addition to this, the restriction
\[ \xymatrix@C=1cm{
{\mathcal U}^{2i+1} \ar[r] &
H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar[r]^-{j^*} &
H^{2i+1}(({\mathcal F}_{pm})_{h{{\mathbb T}}} )
} \]
is trivial if either $p\nmid (r+1)$ and $i\geq rpm$,
or $p\mid (r+1)$ and $i\geq rpm+1$.
\end{lemma}
\begin{proof}
To construct ${\mathcal U}^*$, we make a choice
$\overline {\mathcal U}^* \subseteq H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r )$ of
a complementary subgroup of $i^*({\mathcal T} )$, so that we have a
direct sum decomposition of vector spaces
$H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r )\cong i^*({\mathcal T}^* )\oplus \overline {\mathcal U}^*$.
We intend to find ${\mathcal U}^* \subseteq H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$
such that $i^*$ maps this subgroup isomorphically to
$\overline {\mathcal U}^*$ and such that the statement for the
restriction map holds.
According to the long exact sequence of
theorem \ref{th:s1transfer}, the following diagram has exact rows.
Because of remark \ref{re:filtsurjective} the left and middle
vertical maps are surjections, and because of remark
\ref{re:Homologysurject} the upper right horizontal map is a surjection.
\[ \xymatrix@C=2cm{
H^{2i-1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar@{>>}[d] \ar[r]^-{u} &
H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar@{>>}[d] \ar@{>>}[r]^-{i^*} &
H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r ) \ar[d] \\
H^{2i-1}(({\mathcal F}_{pm} )_{h{{\mathbb T}}} )\ar[r]^-{u} &
H^{2i+1}(({\mathcal F}_{pm} )_{h{{\mathbb T}}} )\ar[r] &
H^{2i+1}({\mathcal F}_{pm} ). \\
} \]
Assume that $p\nmid (r+1)$ and $i\geq rpm$, or
$p\mid (r+1)$ and $i\geq rpm+1$. By SF(\ref{Oddvanishing}) this
implies that $H^{2i+1}({\mathcal F}_n , {\mathcal F}_{n-1})=0$ for $0\leq n \leq pm$,
so that $H^{2i+1}({\mathcal F}_{pm})=0$. Thus
$\overline {\mathcal U}^{2i+1}$ is contained in the kernel of the
right vertical map.
The rest is a diagram chase. By the surjectivity of the upper
right map, $\overline {\mathcal U}^*$ is the isomorphic image of a
subgroup of $H^*(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}})$. The degree $2i+1$ part of this
subgroup might not itself be in the kernel of the middle vertical map,
but using the surjectivity of the left vertical map, we can replace
it with a subgroup ${\mathcal U}^{2i+1} \subseteq H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$
which also maps isomorphically to $\overline {\mathcal U}^{2i+1}$, and
such that ${\mathcal U}^{2i+1}$ is in the kernel of the middle vertical map.
\end{proof}
\begin{remark}
\label{re:dimension}
It follows from parts \ref{action:cokernelsum1} and
\ref{action:cokernelsum2} of lemma \ref{le:countingtransfer}
that if $p\nmid (r+1)$, the dimension of the group
$\bigoplus_{1\leq 2k-1\leq 2rpm-1} {\mathcal U}^{2k-1}$ is $rm$.
If $p\mid (r+1)$ the dimension of the group
$\bigoplus_{1\leq 2k-1\leq 2rpm+1} {\mathcal U}^{2k-1}$ is
$(r+1)m$.
\end{remark}
\begin{theorem}
\label{th:mainodd}
There is an isomorphism of ${\mathbb F}_p [u]$-modules
\[
h_1 \oplus h_2:({\mathbb F}_p [u]\otimes {\mathcal U}^* )\oplus {\mathcal T}^*
\to H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ).
\]
\end{theorem}
\begin{proof}
We can extend the inclusion of ${\mathcal U}^*$ in a unique way to an
${\mathbb F}_p [u]$-linear map
$h_1:{\mathbb F}_p [u]\otimes {\mathcal U}^* \to H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$.
Because of lemma \ref{le:torsionGen} the inclusion of ${\mathcal T}^*$
is already an ${\mathbb F}_p[u]$-linear map
$h_2:{\mathcal T}^* \hookrightarrow H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} )$.
Lemma \ref{le:freeGen} and the exact sequence
\[ \xymatrix@C=2cm{
H^{2i-1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar[r]^-{u} &
H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar[r]^-{i^*} &
H^{2i+1}(L{\mathbb{C}\mathrm{P}}^r )
} \]
shows that $h_1\oplus h_2$ is surjective on indecomposables.
But then it is a surjective ${\mathbb F}_p [u]$-linear map, and it
suffices to show that it is also injective.
The main step is to prove injectivity of the
localized map $(h_1\oplus h_2)[1/u]$.
Because of the vanishing statement of lemma \ref{le:freeGen}
we have a commutative diagram as follows, where $\chi_p (r+1)$
is the number defined in \ref{def:TF}:
\[ \xymatrix@C=2cm{
{\mathbb F}_p[u] \otimes \bigoplus_{0 \leq i} {\mathcal U}^{2i+1}
\ar[d]^-{id \otimes pr} \ar[r]^-{h_1} &
H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) \ar@{>>}[d]^-{j^*} \\
{\mathbb F}_p[u]\otimes \bigoplus_{0 \leq i\leq rpm-\chi_p (r+1)} {\mathcal U}^{2i+1}
\ar[r]^-{\overline h_1} &
H^{\text{odd}}(({\mathcal F}_{pm})_{h{{\mathbb T}}} ).
} \]
The map $j^*$ is surjective by remark \ref{re:filtsurjective}.
We localize by inverting $u$. Because all modules are of finite type,
this localization agrees with tensoring with ${\mathbb F}_p [u,u^{-1}]$ over
${\mathbb F}_p [u]$. Since $h_1\oplus h_2$ is surjective,
and the localization of $h_2$ vanishes, we know that
\[ \xymatrix@C=2cm{
h_1 [ \frac 1 u ] : {\mathbb F}_p [u,u^{-1}] \otimes {\mathcal U}^* \ar[r] &
H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r_{h{{\mathbb T}}} ) [ \frac 1 u ]
} \]
is surjective. It follows by the diagram that the map
\[ \xymatrix@C=2cm{
\overline h_1[\frac 1 u ] : {\mathbb F}_p [u,u^{-1}] \otimes
\bigoplus_{0 \leq i\leq rpm-\chi_p (r+1)} {\mathcal U}^{2i+1} \ar[r] &
H^{\text{odd}}(({\mathcal F}_{pm})_{h{{\mathbb T}}}) [\frac 1 u ]
} \]
is also surjective. We know from remark \ref{re:dimension}
that, as an abstract module, the domain is given by
\[
{\mathbb F}_p [u,u^{-1}] \otimes \bigoplus_{0 \leq i\leq rpm-\chi_p (r+1)}
{\mathcal U}^{2i+1} \cong
\begin{cases}
{\mathbb F}_p [u,u^{-1}]^{\oplus (r+1)m} & \text{if } p\mid (r+1) \\
{\mathbb F}_p [u,u^{-1}]^{\oplus rm} & \text{if } p\nmid (r+1).
\end{cases}
\]
The target space is determined by theorem \ref{th:relabel}
and corollary \ref{cor:oddcounting}. By the proof of theorem
\ref{th:relabel} the re-indexing is given by multiplying the
filtration degree by $p$. We find that
\[
H^{\text{odd}}(({\mathcal F}_{pm})_{h{{\mathbb T}}} ) \left[ \frac 1 u \right] \cong
\begin{cases}
{\mathbb F}_p [u,u^{-1}]^{\oplus (r+1)m} & \text{if } p\mid (r+1) \\
{\mathbb F}_p [u,u^{-1}]^{\oplus rm} & \text{if } p\nmid (r+1).
\end{cases}
\]
The punch line is that $h_1[1/u]$ is a surjective map between two
finitely generated, free ${\mathbb F}_p [u,u^{-1}]$-modules of the same rank.
But then the map has to be an isomorphism.
This proves the injectivity of the localized map $h_1[1/u]$. To get
the injectivity of $h_1\oplus h_2$, we consider an element
$(c,t)\in \ker (h_1\oplus h_2)$. We have to prove that this element
is trivial. Since the localization of $t$ vanishes, the localization
of $c$ is in the kernel of the localization of $h_1$. This localized
map is injective, so the localization of $c$ vanishes. But the
canonical map
${\mathbb F}_p [u] \otimes {\mathcal U}^* \to {\mathbb F}_p [u,u^{-1}] \otimes {\mathcal U}^*$
is injective, so $c$ itself vanishes, and $t$ is in the kernel of
$h_2$. But $h_2$ is injective on ${\mathcal T}^*$, which proves that $t=0$.
\end{proof}
We can finally complete the main calculation of the paper.
Recall the index sets ${\mathcal IT} (r,p,2)$ and ${\mathcal IF} (r,p,2)$
from definition \ref{def:TF}. We need a small perturbation
as follows:
\[ {\mathcal IF}^\prime (r,p,2)=
\begin{cases}
({\mathcal IF} (r,p,2) \setminus 2r{\mathbb N}) \cup (2+2r{\mathbb N}) &
\text{if } p \mid (r+1), \\
{\mathcal IF} (r,p,2) & \text{if } p\nmid (r+1).
\end{cases}
\]
This makes sense, since $2r{\mathbb N} \subseteq {\mathcal IF}(r,p,2)$ by
lemma \ref{le:twosets} when $p\mid (r+1)$.
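As a concrete sanity check (our own sketch, not part of the paper), one can enumerate ${\mathcal IF}(r,p,2)$ directly from the description recalled before theorem \ref{th:main2}, namely $\{ 2(ri+j) \mid 0\leq i,\ \chi_p(r+1)\leq j\leq r,\ p\mid ((r+1)i+j)\}$ with degree $0$ excluded, and verify the containment $2r{\mathbb N}\subseteq {\mathcal IF}(r,p,2)$ when $p\mid (r+1)$: taking $j=0$, the congruence $p\mid (r+1)i$ holds automatically.

```python
# Brute-force check (ours, not from the paper) that every positive multiple
# of 2r lies in IF(r,p,2) when p divides r+1.

def IF(r, p, bound):
    """IF(r,p,2) = {2(ri+j) : 0 <= i, chi <= j <= r, p | ((r+1)i+j)}, up to bound."""
    chi = 0 if (r + 1) % p == 0 else 1
    return {2 * (r * i + j)
            for i in range(bound // (2 * r) + 1)
            for j in range(chi, r + 1)
            if ((r + 1) * i + j) % p == 0 and 0 < 2 * (r * i + j) <= bound}

def contains_2rN(r, p, bound):
    """Is every positive multiple of 2r up to the bound contained in IF(r,p,2)?"""
    S = IF(r, p, bound)
    return all(2 * r * k in S for k in range(1, bound // (2 * r) + 1))
```

The containment genuinely needs $p\mid (r+1)$; for instance $8\notin {\mathcal IF}(4,3,2)$.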
\begin{theorem}
\label{th:main}
As a graded ${\mathbb F}_p [u]$-module,
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} ;{\mathbb F}_p )$ is isomorphic to the direct sum
\[
{\mathbb F}_p [u] \oplus \bigoplus_{2k\in {\mathcal IF} (r,p,2)} {\mathbb F}_p [u] f_{2k}
\oplus \bigoplus_{2k\in {\mathcal IF}^{\prime} (r,p,2)} {\mathbb F}_p [u]f_{2k-1}
\oplus \bigoplus_{2k\in {\mathcal IT}(r,p,2)} ({\mathbb F}_p [u]/(u)) t_{2k-1}.
\]
Here the subscript indicates the degree of the generator.
\end{theorem}
\begin{proof}
In theorem \ref{th:mainodd} we proved the formula for
the group of odd degree elements. We have to show that
$H^\text{even} (L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ is a free ${\mathbb F}_p [u]$-module,
with generators in the stated degrees.
Put $E_r^{**} = E_r^{**}({\mathcal M} )(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$.
From SF(\ref{Geometry}) and SF(\ref{SSmodule}) we see that
$E_1^{\text{even}}$ is a free ${\mathbb F}_p [u]$-module. The degrees of the
generators can be read off theorem \ref{th:MorsePoincare}.
We see that
\begin{align*}
E_1^{(0,*)(\text{even})}
& \cong \oplus_{0\leq 2i\leq 2r}{{\mathbb F}_p [u] x_{2i}},
\\
E_1^{(pm,*)(\text{even})}
&\cong
\begin{cases}
\oplus_{2rpm\leq 2i\leq 2r(pm+1)} {\mathbb F}_p [u]x_{2i} &
\text{if } p\mid (r+1), \\
\oplus_{2rpm+2\leq 2i\leq 2r(pm+1)} {\mathbb F}_p[u]x_{2i} &
\text{if } p\nmid (r+1).
\end{cases}
\end{align*}
Because of SF(\ref{Epcoll}) together with SF(\ref{SSmodule})
we see that
$E_\infty^{(pm,*)(\text{even})}$ is a
submodule of finite index in $E_1^{(pm,*)(\text{even})}$. By abstract
structure theory of ${\mathbb F}_p [u]$-modules, it follows that
$E_\infty^{(pm,*)(\text{even})}$ is a free module on certain
generators. If a graded module admits a filtration whose quotients are
free modules, then the module itself is also free, with
generators in the same degrees as the generators of the direct
sum of the quotients.
What is left is to figure out the degrees of the generators of the
free module $E_\infty^{(pm,*)(\text{even})}$. It would be best if we
could compute the differentials. Unfortunately, we cannot do this.
What we can do is compute the dimension of
$E_{\infty}^{\text{even}}$ in each degree, and recover the degrees of
the generators from this.
To do this, we consider Poincar\'e series. Let
$P(t)$, $P^\text{even}(t)$ and $P^\text{odd}(t)$
be the Poincar\'e series of
$H^*(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$, $H^{\text{even}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$ and
$H^\text{odd}(L {\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$ respectively.
Similarly, let $P_r (t)$, $P_r^{\text{even}}(t)$ and
$P_r^{\text{odd}}(t)$ be the Poincar\'e series of $E_r$, the
even total degree part of $E_r$ and the odd total degree part $E_r$
respectively.
The series $P_1^{\text{even}}(t)$ and $P_1^{\text{odd}}(t)$ can be
recovered from $P_1(t)$ as the sum of the even, respectively the
odd, monomials occurring in it.
From the computation in Lemma \ref{le:MorsePoincare},
we recall that if $p\mid (r+1)$, then
\[
P_1(t)=\frac 1 {1-t} \cdot \frac {1-t^{2r+2}} {(1-t^2)(1-t^{2pr})}.
\]
It follows that
\[
P_1^{\text{even}}(t)=
\frac 1 {1-t^2} \cdot \frac {1-t^{2r+2}} {(1-t^2)(1-t^{2pr})}, \quad
P_1^{\text{odd}}(t)=
\frac t {1-t^2} \cdot \frac {1-t^{2r+2}} {(1-t^2)(1-t^{2pr})}.
\]
The important thing here is that $P_1^\text{odd}(t)=tP_1^\text{even}(t)$.
This means that the dimension of the group of classes of total
degree $2i$ in $E_1$ agrees with the dimension of the group of classes
of total degree $2i+1$. Now recall that by SF(\ref{Oddeven}), all
differentials in the spectral sequence go from even total degree to
odd total degree. It follows that for any $r$, including
$r=\infty$, we have that $P_r^{\text{odd}}(t)=tP_r^{\text{even}}(t)$,
and also that $P^{\text{odd}}(t)=tP^{\text{even}}(t)$.
In case $p\nmid (r+1)$ the formula for $P_1(t)$ in
lemma \ref{le:MorsePoincare} is different, but the argument
above still applies, so that $P^{\text{odd}}(t)=tP^{\text{even}}(t)$
in both cases.
We can use theorem \ref{th:mainodd} to give an expression
for $P^{\text{odd}}(t)=P_\infty^{\text{odd}}(t)$. Using the notation of
\ref{def:TF}, we see that
\[
tP^{\text{odd}}(t)=P_{{\mathcal IT} (r,p,2)}(t)+\frac 1 {1-t^2}
P_{{\mathcal IF}^\prime (r,p,2)}(t).
\]
By definition of ${\mathcal IF}^\prime (r,p,2)$, we have
\begin{equation} \label{eq:IFprime}
P_{{\mathcal IF}^\prime (r,p,2)}(t)=P_{{\mathcal IF} (r,p,2)}(t)
-(1-\chi_p(r+1))\frac {t^{2r}(1-t^2)} {1-t^{2r}},
\end{equation}
which we insert above and find
\[
tP^{\text{odd}}(t)=P_{{\mathcal IT} (r,p,2)}(t)+\frac 1 {1-t^2}
P_{{\mathcal IF} (r,p,2)}(t)-(1-\chi_p(r+1))\frac {t^{2r}} {1-t^{2r}}.
\]
Rewriting the first two terms, and using the result
$P^{\text{odd}}(t)=tP^{\text{even}}(t)$, we see that
\begin{align*}
t^2P^{\text{even}}(t) &= tP^{\text{odd}}(t) \\
&= P_{{\mathcal IT} (r,p,2)}(t)+ P_{{\mathcal IF} (r,p,2)}(t) +
\frac {t^2} {1-t^2} P_{{\mathcal IF}(r,p,2)}(t)
-(1-\chi_p(r+1))\frac {t^{2r}} {1-t^{2r}}.
\end{align*}
We rewrite the sum of the first two terms
using lemma \ref{le:Poincaretwosets} and obtain
\[
t^2P^{\text{even}} (t)=\frac {t^2} {1-t^2} +
\frac {t^2} {1-t^2} P_{{\mathcal IF} (r,p,2)}(t),
\]
which completes the proof.
\end{proof}
\begin{corollary}
\label{cor:actionproof}
The numbers $\kappa_i$ in theorem \ref{th:action} are always zero.
\end{corollary}
\begin{proof}
In theorem \ref{th:action} this is proved for $\alpha \geq 4$, and
in some cases also when $\alpha =2$. So assume that $\alpha =2$.
Then an obstruction argument shows that $X$ is homotopy
equivalent to ${\mathbb{C}\mathrm{P}}^r$, so we can assume without loss of generality
that $X={\mathbb{C}\mathrm{P}}^r$. Since the action differential factors through the
transfer map, it is sufficient to show that the transfer map
\[
\tau :H^{\text{odd}}(L{\mathbb{C}\mathrm{P}}^r ) \to H^{\text{even}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )
\]
is the zero map. The image of the transfer map is annihilated
by multiplication by $u$ (theorem \ref{th:s1transfer}).
According to theorem \ref{th:main} we also have that
$H^{\text{even}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$ is a free module over ${\mathbb F}_p [u]$, so
multiplication by $u$ is injective. The result follows.
\end{proof}
We can now give another version of our main result.
Recall that
\[ {\mathcal IF} (r,p,2)= \{ 2(ri+j) \mid 0\leq i,\medspace
\chi_p (r+1) \leq j \leq r,\medspace p\mid ((r+1)i+j) \}, \]
where $\chi_p (s)$ equals $0$ when $p$ divides $s$ and
$1$ when $p$ does not divide $s$.
\begin{theorem} \label{th:main2}
Let $\{ E_*\}$ be the mod $p$ Serre spectral sequence for the
fibration sequence $L{\mathbb{C}\mathrm{P}}^r \to (L{\mathbb{C}\mathrm{P}}^r )_{h{\mathbb T}} \to B{\mathbb T}$. That is
\[ E_2^{*,*}=H^*(B{\mathbb T}; {\mathbb F}_p )\otimes H^*(L{\mathbb{C}\mathrm{P}}^r ;{\mathbb F}_p )
\Rightarrow H^*((L{\mathbb{C}\mathrm{P}}^r )_{h{\mathbb T}}; {\mathbb F}_p ). \]
For any positive integer $r$ and any prime $p$ one has that
$E_3=E_\infty$. Furthermore, the Poincar\' e
series $P_{r,p}(t)$ for $H^*((L{\mathbb{C}\mathrm{P}}^r )_{h{\mathbb T}};{\mathbb F}_p )$ is given by
\[
P_{r,p}(t)=\frac 1 {1-t} \big( 1+\sum_{k\in {\mathcal IF} (r,p,2)} t^k \big) .
\]
If $p$ divides $r+1$ we can rewrite this as
\[
P_{r,p}(t)= \frac {1-t^{2(r+1)}} {(1-t)(1-t^{2r})(1-t^{2p})}.
\]
\end{theorem}
\begin{remark}
We have described the ${\mathbb F}_p$-algebra structure of the
$E_\infty$ page in proposition \ref{th:E3Serre} with $\alpha =2$.
\end{remark}
\begin{proof}
We first use theorem \ref{th:main} to prove that the Poincar\' e
series is as stated. By this theorem we have that
\begin{equation} \label{eq:mainpoincare}
P_{r,p}(t) = \frac 1 {1-t^2}
\big( 1+P_{{\mathcal IF} (r,p,2)}(t)+\frac 1 t P_{{\mathcal IF}^\prime (r,p,2)}(t)\big)
+ \frac 1 t P_{{\mathcal IT} (r,p,2)}(t).
\end{equation}
By using equation (\ref{eq:IFprime}) and lemma
\ref{le:Poincaretwosets} we can rewrite this as
\[ P_{r,p}(t)= \frac 1 {1-t^2}
\big( 1+P_{{\mathcal IF} (r,p,2)}(t)+\frac 1 t P_{{\mathcal IF} (r,p,2)}(t) \big)
-\frac 1 t P_{{\mathcal IF} (r,p,2)}(t)+\frac t {1-t^2}. \]
Collecting terms, both the constant part and the coefficient of
$P_{{\mathcal IF} (r,p,2)}(t)$ reduce to $\frac 1 {1-t}$, and the first
formula for $P_{r,p}(t)$ follows.
When $p$ divides $r+1$, we can write the index set as
\[ {\mathcal IF} (r,p,2)=
\{ 2(ri+pm)\mid 0\leq i, \medspace 0\leq m \leq \frac {r+1} p -1\}
\setminus \{ 0\} .\]
The last formula for $P_{r,p}(t)$ follows.
Since corollary \ref{cor:actionproof} holds, we have a formula for
the Poincar\' e series of the $E_3$ term in corollary
\ref{cor:Poincare} with $\alpha =2$. This series is the same
as $P_{r,p}(t)$ when $p$ does not divide $r+1$ by
(\ref{eq:mainpoincare}). When $p$ divides $r+1$ the two series
also agree by our last formula for $P_{r,p}(t)$. Thus, the
Serre spectral sequence collapses from the $E_3$ page.
\end{proof}
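The two expressions for $P_{r,p}(t)$ in the theorem can also be compared numerically. The sketch below is our own verification code, not from the paper: it expands both as truncated power series, enumerating ${\mathcal IF}(r,p,2)$ directly from its definition, and checks agreement for several pairs with $p\mid (r+1)$. Degree $0$ is excluded from the index set, matching the removal of $\{ 0\}$ in the proof.

```python
# Verification sketch (ours, not from the paper): compare the two formulas
# for P_{r,p}(t) in the theorem as truncated power series, when p | (r+1).

N = 80  # truncation degree; any bound works for the comparison

def geom(a, n=N):
    """Coefficients of 1/(1 - t^a) up to degree n."""
    return [1 if k % a == 0 else 0 for k in range(n + 1)]

def mul(f, g, n=N):
    """Truncated product of two coefficient lists."""
    h = [0] * (n + 1)
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g[: n + 1 - i]):
                h[i + j] += fi * gj
    return h

def IF(r, p, n=N):
    """IF(r,p,2) = {2(ri+j) : 0 <= i, chi <= j <= r, p | ((r+1)i+j)}.

    Degree 0 is excluded, matching the removal of {0} in the proof.
    """
    chi = 0 if (r + 1) % p == 0 else 1
    return {2 * (r * i + j)
            for i in range(n // (2 * r) + 1)
            for j in range(chi, r + 1)
            if ((r + 1) * i + j) % p == 0 and 0 < 2 * (r * i + j) <= n}

def P_from_index_set(r, p, n=N):
    """(1/(1-t)) * (1 + sum_{k in IF(r,p,2)} t^k), truncated."""
    f = [0] * (n + 1)
    f[0] = 1
    for k in IF(r, p, n):
        f[k] += 1
    return mul(f, geom(1, n), n)

def P_closed_form(r, p, n=N):
    """(1 - t^{2(r+1)}) / ((1-t)(1-t^{2r})(1-t^{2p})); assumes p | (r+1)."""
    num = [0] * (n + 1)
    num[0] = 1
    if 2 * (r + 1) <= n:
        num[2 * (r + 1)] -= 1
    return mul(mul(mul(num, geom(1, n), n), geom(2 * r, n), n), geom(2 * p, n), n)
```

For $p\nmid (r+1)$ the theorem gives no product formula, so only cases with $p\mid (r+1)$ are compared.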
The proof of the main theorem involves the existence of
non-trivial differentials in the Morse spectral sequence.
This fact has a certain geometrical content which we describe
below for $r\geq 2$. By similar methods one can get the
same result for $r=1$, but we will not go into this here.
\begin{corollary}
\label{cor:elastomania}
For any $r\geq 2$ and any $n\geq 2$ there is a
trajectory of loops on ${\mathbb{C}\mathrm{P}}^r$ which converges in positive time
towards a geodesic with period $n$ and in negative time
towards a non-constant geodesic with period $n+1$.
\end{corollary}
\begin{proof}
Assume that there are no such trajectories. Then the geometric map
\[
({\mathcal F}_{n+1} /{\mathcal F}_{n} )_{h{\mathbb T}} \to \Sigma ({\mathcal F}_n )_{h{\mathbb T}} \to
\Sigma ({\mathcal F}_n /{\mathcal F}_{n-1} )_{h{\mathbb T}} ,
\]
which induces the $d_1$ differential in the Morse spectral
sequence, is nullhomotopic. So in the mod $p$ Morse spectral
sequence $E_*=E_* ({\mathcal M}) (L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}} )$, we have
$d_1=0 : E_1^{n,*}\to E_1^{n+1,*}$ for any prime $p$. We intend
to show that one can always choose a prime $p$ such that this
$d_1$ must be non-trivial, leading to the desired contradiction.
Since $n\geq 2$, there is a prime $p$ that divides $n$, say $n=pm$.
We actually claim that any such $p$ will do.
Assume, to the contrary, that $d_1$ is trivial on
$E_1^{pm,*}$. The generators of the ${\mathbb F}_p [u]$-module $E_1^{pm,*}$
have degrees less than or equal to $2r(pm+1)$. Since the lowest class
in $E_1^{pm+s,*}$ is in degree $(2r)(pm+s-1)+1$, the only possible
non-trivial differential originating in $E_*^{pm,*}$ would be a
$d_2$ hitting the non-trivial class in $E_2^{pm+2,2r(pm+1)-pm-1}$.
But comparison to the non-equivariant spectral sequence shows that
this class is a permanent cycle, so every class in $E_1^{pm,*}$ is
a permanent cycle (cf.\ the proof of theorem \ref{Epcollapse}).
But this means that every generator of the free
${\mathbb F}_p[u]$-module $E_1^{(pm,*)\text{even}}$ corresponds to a
generator in the free ${\mathbb F}_p [u]$-module
$H^{\text{even}}(L{\mathbb{C}\mathrm{P}}^r_{h{\mathbb T}})$. So this module has a generator
in every degree $2k$ where $2rpm+2 \leq 2k \leq 2r(pm+1)$.
In case $p\mid (r+1)$, it has in addition a generator
in degree $2k=2rpm$. We have listed the
generators of this module in theorem \ref{th:main},
so this says that all these numbers $2k$ are contained
in ${\mathcal IF}^\prime (r,p,2)$.
The rest of the proof is just very elementary number theory.
As usual, there are two possible cases.
We first deal with the case $p\mid (r+1)$. According to
the definition of ${\mathcal IF}^\prime (r,p,2)$ just prior to
theorem \ref{th:main}, this set does not contain
any numbers divisible by $2r$. So we cannot have that
$2rpm \in {\mathcal IF}^\prime (r,p,2)$, contradicting the assumption.
So now assume that $p\nmid (r+1)$. We first show that there
are not many pairs of consecutive even numbers in
${\mathcal IF} (r,p,2)= {\mathcal IF}^\prime(r,p,2)$. Assume that both $2k$ and $2k+2$
are contained in ${\mathcal IF} (r,p,2)$. As in definition
\ref{def:TF} we find four numbers
$i_1,i_2,j_1,j_2$ pairwise satisfying the
conditions mentioned in that definition, and such that
$2k=2ri_1+2j_1$ and $2k+2=2ri_2+2j_2$.
Then $2=2r(i_2-i_1)+2(j_2-j_1)$. Since $1\leq j_1,j_2\leq r$,
we have $\mid j_1-j_2\mid \leq r-1$, and it follows that
either $i_1=i_2$ and $j_2=j_1+1$, or
$i_2=i_1+1$, $j_2=1$ and $j_1=r$. The first
possibility together with the congruence condition
leads to $p\mid ((r+1)i_1+j_1)$ and
$p\mid ((r+1)i_1+j_1+1)$, which is a contradiction.
The second possibility leads to $p\mid ((r+1)i_1+r)$
and $p\mid ((r+1)(i_1+1)+1)$. Subtracting, we get that
$p\mid 2$, that is $p=2$, and since $p\nmid (r+1)$, the number
$r$ is even. So the only possibility for
$\{ 2k,2k+2\}\subseteq {\mathcal IF} (r,p,2)$ is that
$p=2$, $r$ is even, $j_1=r$ and
$k=ri_1+j_1=r(i_1+1)$ is even. In particular, we can never have
$\{ 2k,2k+2,2k+4\} \subseteq {\mathcal IF} (r,p,2)$, since the two pairs
$\{ 2k,2k+2\}$ and $\{ 2k+2,2k+4\}$ would force both $k$ and $k+1$
to be even. Since the set of $k$
such that $2rpm+2 \leq 2k \leq 2r(pm+1)$ is a set of
$r$ consecutive numbers, the absence of such triples forces
$r\leq 2$, hence $r=2$.
We are now reduced to showing that ${\mathcal IF} (2,2,2)$ does not
contain two numbers of the form $8m+2,8m+4$. We already noted that
if $2k, 2k+2\in {\mathcal IF} (r,p,2)$, then $k$ is even.
So if $2k=8m+2$, then $k=4m+1$ is odd, and this particular $k$
does not qualify. The proof is complete.
\end{proof}
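The elementary number theory at the end of the proof can be confirmed by brute force. The sketch below (our own check, not part of the paper) enumerates ${\mathcal IF}(r,p,2)$ from its definition and verifies, for $p\nmid (r+1)$, that the set never contains three consecutive even numbers, that pairs $\{ 2k,2k+2\}$ occur only for $p=2$ and $r$ even, and that ${\mathcal IF}(2,2,2)$ never contains both $8m+2$ and $8m+4$.

```python
# Verification sketch (ours, not from the paper): the facts about
# consecutive even numbers in IF(r,p,2) used at the end of the proof.

def IF(r, p, bound):
    """IF(r,p,2) = {2(ri+j) : 0 <= i, chi <= j <= r, p | ((r+1)i+j)}, up to bound."""
    chi = 0 if (r + 1) % p == 0 else 1
    return {2 * (r * i + j)
            for i in range(bound // (2 * r) + 1)
            for j in range(chi, r + 1)
            if ((r + 1) * i + j) % p == 0 and 0 < 2 * (r * i + j) <= bound}

def has_triple(S):
    """Does S contain three consecutive even numbers 2k, 2k+2, 2k+4?"""
    return any(x + 2 in S and x + 4 in S for x in S)

def pair_starts(S):
    """First elements 2k of the pairs {2k, 2k+2} contained in S."""
    return sorted(x for x in S if x + 2 in S)
```

For example, the pairs in ${\mathcal IF}(2,2,2)$ start at $4,12,20,\dots$, so no pair of the form $\{ 8m+2,8m+4\}$ occurs.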
\section{Appendix: The circle transfer map}
\label{Appendix:s1transfer}
In this appendix we describe an elementary construction of a ${\mathbb T}$
transfer map $\tau$. There are several discussions of transfer maps in the
literature, and also of extensive refinements and generalizations. See
for example \cite{MS} or \cite{MMM}. For our purposes, we need only a
rather coarse version. We give a simple, self-contained construction
close to the one we gave in \cite{Fund}.
\begin{theorem}
\label{th:s1transfer}
Let $X$ be a based ${\mathbb T}$-CW complex such that the action of ${\mathbb T}$
is free away from the base point. Write $q:X\to X/{\mathbb T}$ for the
canonical projection. Let $R$ be a principal ideal domain and
$M$ an $R$-module. There is a linear map which is natural in
$X$ and $M$ as follows:
\[
\tau : \tilde H^n (X;M) \to \tilde H^{n-1}(X/{\mathbb T} ;M).
\]
It is the connecting homomorphism in a long exact Gysin sequence
\[
\xymatrix@C=0.6cm{
\dots \ar[r]
& \tilde H^{n-2}(X/{\mathbb T} ;M) \ar[r]^-{\gamma}
& \tilde H^{n}(X/{\mathbb T} ;M) \ar[r]^-{q^*}
& \tilde H^{n}(X;M) \ar[r]^-{\tau}
& \tilde H^{n-1}(X/{\mathbb T} ;M) \ar[r]
& \dots
}\]
in particular, $\tau \circ q^*=0$. Assume that $M=R$.
Then Frobenius reciprocity holds
\[
\tau (a q^* (b))=\tau(a) b,
\]
and $q^*\circ \tau=d$ where $d$ denotes the action
differential.
\end{theorem}
\begin{remark}
We have $H^*(B{\mathbb T} ;R)=R[u]$ where
$\deg u=2$, which gives us a class $pr_1^*(u) \in H^2(X_{h{\mathbb T}};R)$.
The map $\gamma$ in the Gysin sequence is given by
multiplication by this class:
\[
\xymatrix@C=1.5cm{
H^{n-2}(X_{h{\mathbb T}},B{\mathbb T} ;M) \ar[r]^-{pr_1^*(u)\cdot }
& H^{n}(X_{h{\mathbb T}},B{\mathbb T} ;M) \\
\tilde H^{n-2}(X/{\mathbb T} ;M) \ar[r]^-{\gamma} \ar[u]^-{\cong}
& \tilde H^{n}(X/{\mathbb T} ;M). \ar[u]^-{\cong}
}\]
\end{remark}
\begin{proof}
The key point is to compare $X$ to $E{\mathbb T} \times X/E{\mathbb T}$.
So we start by considering the spherical fibration
\[
\xymatrix@C=1cm{
{\mathbb T} \ar[r] & E{\mathbb T} \times X \ar[r]^-Q & E{\mathbb T} \times_{{\mathbb T}} X, }
\]
together with the two subspaces $E{\mathbb T}= E{\mathbb T}\times *$ of $E{\mathbb T} \times X$
and $B{\mathbb T} = E{\mathbb T} \times_{{\mathbb T}} *$ of $E{\mathbb T} \times_{{\mathbb T}} X$ with
$Q^{-1} (B{\mathbb T} ) = E{\mathbb T}$.
The fibration $Q$ is a pullback of the fibration ${\mathbb T} \to E{\mathbb T} \to B{\mathbb T}$
along the projection map $pr_2$. Since $B{\mathbb T}$ is $1$-connected
it follows that $Q$ is orientable. Thus there is a relative Gysin sequence
for $Q$ \cite[VII.5.12]{Whitehead}. We let $\tilde \tau$ be the
connecting homomorphism in this sequence.
Let $X_0$ denote $X$ with trivial ${\mathbb T}$ action. Pick a point
$e\in E{\mathbb T}$ and define the ${\mathbb T}$ map
$\theta :{\mathbb T} \times X_0 \to E{\mathbb T} \times X$
by $(z,x)\mapsto (ez^{-1},zx)$.
We have a commutative diagram
\[
\xymatrix@C=1cm{
({\mathbb T} \times X_0, \emptyset ) \ar[r]^{pr_2} \ar[d]^{\theta}
& (X, \emptyset ) \ar[d] \\
(E{\mathbb T} \times X, E{\mathbb T} ) \ar[r]^{Q}
& (E{\mathbb T} \times_{\mathbb T} X,B{\mathbb T} ).}
\]
By naturality of the Gysin sequence we have the following diagram,
where the cohomology groups have coefficients in $M$:
\[ \xymatrix@C=0.6cm{
\ar[r]
& H^{n-2}(X_{h{\mathbb T}}, B{\mathbb T} ) \ar[r]^-{u\cdot } \ar[d]^-{Q^*}
& H^{n}(X_{h{\mathbb T}}, B{\mathbb T} ) \ar[r]^-{Q^*} \ar[d]^-{Q^*}
& \tilde H^{n}(X) \ar[r]^-{\tilde \tau} \ar[d]^-{\theta^*}
& H^{n-1}(X_{h{\mathbb T}}, B{\mathbb T} ) \ar[r]^-{u\cdot} \ar[d]^-{Q^*}
& \\
\ar[r]
& H^{n-2}(X) \ar[r]^-{0}
& H^{n}(X) \ar[r]^-{pr_2^*}
& H^{n}({\mathbb T} \times X) \ar[r]^-{\overline \tau}
& H^{n-1}(X) \ar[r]^-{0}
&
} \]
By the upper sequence, $\tilde \tau \circ Q^*=0$. Write
$\eta : {\mathbb T} \times X \to X$ for the action map. The maps $\eta^*$ and
$\theta^*$ agree in positive degrees. Assume that $M=R$.
Then $\eta^* (y)=1\otimes y+v\otimes dy$ where $v$ has degree $1$.
By the diagram, $\overline \tau (1\otimes a)=0$ and
$\overline \tau (v\otimes b)=b$, so that
$Q^*\circ \tilde \tau =\overline \tau \circ \eta^*=d$.
Finally, we have Frobenius reciprocity for $\tilde \tau$ by
\cite[VII.5.16]{Whitehead}.
The map $pr_2: E{\mathbb T} \times X/E{\mathbb T} \to X$ is a homotopy equivalence
if we restrict it to the fixed points for any closed subgroup of
${\mathbb T}$. So it is a ${\mathbb T}$-equivariant homotopy equivalence,
see for instance \cite[II.2.7]{tomD}. Thus, we have a natural
isomorphism
\[ pr_2^* : H^*(X/{\mathbb T} ,*;M) \to H^*(X_{h{\mathbb T}},B{\mathbb T} ;M). \]
The result follows.
\end{proof}
\section{Appendix: Rational coefficients}
In this appendix we use theorem \ref{th:action} to obtain results
for rational coefficients.
\begin{proposition} \label{pr:actionQ}
Let $X$ be a 1-connected space with $H_*(X;{\mathbb Z} )$ of finite type
and assume that
\[ H^*(X;{\mathbb Z} )={\mathbb Z} [x]/(x^{r+1}), \]
where $\alpha = \deg (x)$ is even and $r\geq 1$. Put
$\rho =(r+1)\alpha -2$. Then,
\[
H^k(LX;{\mathbb Q} )=
\begin{cases}
{\mathbb Q} , & k\in \{ 0\} \cup
\{\rho i+\alpha j, \rho i+\alpha j-1|0\leq i, 1\leq j \leq r \} ,\\
0 , & \text{otherwise.}
\end{cases}
\]
The action differential
$d: H^k(LX;{\mathbb Q} )\to H^{k-1}(LX;{\mathbb Q} )$
is an isomorphism when
$k\in \{ \rho i +\alpha j|0\leq i, 1\leq j \leq r \}$
and zero otherwise.
\end{proposition}
\begin{proof}
By the Serre spectral sequence for the fibration
$\Omega X \to PX \to X$ we see that $H_*(\Omega X;{\mathbb Z} )$ is of finite
type. The Serre spectral sequence for $\Omega X \to LX \to X$ then
gives us that $H_*(LX;{\mathbb Z} )$ is of finite type.
Consider the universal coefficient sequence where
$A$ is an abelian group:
\[ \xymatrix{
0\ar[r] & {\operatorname{Ext}} (H_{k-1}(LX;{\mathbb Z} ), A) \ar[r] & H^k(LX;A) \ar[r]
& {\operatorname{Hom}} (H_k(LX;{\mathbb Z} ), A) \ar[r] & 0.
} \]
By choosing $A={\mathbb Z} /p$ for $p$ sufficiently large, we obtain
that the ${\operatorname{Ext}}$ group is zero and that we can apply theorem
\ref{th:action} part 2). Thus,
\[
H_k(LX;{\mathbb Z} )/T_k(LX) \cong
\begin{cases}
{\mathbb Z} , & k \in \{ 0\} \cup
\{\rho i+\alpha j, \rho i+\alpha j-1|0\leq i, 1\leq j \leq r\} ,\\
0 , & \text{otherwise,}
\end{cases}
\]
where $T_k(LX)$ denotes the torsion subgroup of $H_k(LX;{\mathbb Z} )$.
We then choose $A={\mathbb Q}$ and obtain the stated result for
$H^k(LX;{\mathbb Q} )$.
Let $\eta :{\mathbb T} \times LX \to LX$ denote the action map. By definition
of the action differential $d$ we have that
$\eta^* (y)=1\otimes y+v\otimes dy$ where $\deg (v)=1$ for
${\mathbb Z} /p$ or ${\mathbb Q}$ coefficients. There is also the projection
$pr_2 : {\mathbb T} \times LX \to LX$ with $pr_2^*(y)=1\otimes y$.
By theorem \ref{th:action} we have $b_0^{j-1}b_i$ in $H^*(LX;{\mathbb Z} /p)$
with $\deg (b_0^{j-1}b_i)=\rho i +\alpha j$ and
\[ d(b_0^{j-1}b_i)=(j+(r+1)i)b_0^{j-1}a_i \]
for $0\leq i$, $1\leq j\leq r$. Fix such $i$ and $j$ and put
$k=\rho i+ \alpha j$.
We use the universal coefficient sequence with abelian group
$A$ again. By naturality we get commutative diagrams for
$\eta$ and for $pr_2$. Choose $A={\mathbb Z} /p$ where $p$ is so
large that the ${\operatorname{Ext}}$ groups vanish, $p>r+1$ and $p>j+(r+1)i$.
Then,
\[ 0\neq \eta_*-(pr_2)_*: H_k({\mathbb T} \times LX;{\mathbb Z} )/T_k({\mathbb T} \times LX)
\to H_k (LX;{\mathbb Z} )/T_k(LX).\]
We then take $A={\mathbb Q}$ and find that $0\neq \eta^*-pr_2^*$ in
degree $k$ for ${\mathbb Q}$ coefficients. Thus, the action differential
$d:H^k(LX;{\mathbb Q})\to H^{k-1}(LX;{\mathbb Q} )$ is an isomorphism as stated.
Since $d\circ d=0$ the vanishing statement for $d$ follows.
\end{proof}
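As a sanity check (our illustration, simply instantiating the proposition, and not part of the original text), take $r=1$ and $\alpha =2$, so that $H^*(X;{\mathbb Z} )={\mathbb Z} [x]/(x^2)$ is the cohomology of $S^2={\mathbb C}P^1$ and $\rho =(r+1)\alpha -2=2$. The listed degrees
\[
\{ 0\} \cup \{ 2i+2,\, 2i+1 \,|\, 0\leq i \}
\]
exhaust the non-negative integers without repetition, so $H^k(LX;{\mathbb Q} )={\mathbb Q}$ for every $k\geq 0$, and the action differential $d$ is an isomorphism exactly on the positive even degrees $k=2i+2$.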
The ring structure of $H^*(LX;{\mathbb Q} )$ was first computed by \v Svarc
in \cite{Svarc}. Combining his computation with the proposition
above we obtain the following result:
\begin{proposition}
Let $X$ be as in proposition \ref{pr:actionQ}. Then,
\[ H^*(LX;{\mathbb Q} )= {\mathbb Q} [a_i,b_i|i\geq 0]/I, \]
where $I$ is the ideal generated by the following elements
for $i,j\geq 0$:
\[ a_ia_j, \quad b_ib_j-b_0b_{i+j}, \quad b_ia_j-b_0a_{i+j}, \quad
b_0^rb_i, \quad b_0^ra_i. \]
The degrees of the generators are $|a_i|=\rho i+\alpha -1$ and
$|b_i|=\rho i +\alpha$. Furthermore,
\[H^*(LX_{h{\mathbb T}};{\mathbb Q} )={\mathbb Q} [u]\otimes
{\mathbb Q} [w_i^{(j)}|0\leq i, 1\leq j \leq r]/J, \]
where $J$ is the ideal generated by all the products
$w_i^{(j)}w_k^{(\ell )}$.
The degrees of the generators are $|u|=2$ and
$|w_i^{(j)}|=\rho i + \alpha j -1$.
\end{proposition}
\begin{proof}
The result of \v Svarc's computation can be found in
\cite[\S 6]{Klein}. Performing the substitutions $a_i=g_{i+1}$ for
$i\geq 0$ and $b_0=x$, $b_i=h_i$ for $i\geq 1$ in Theorems 6.2 and 6.3
of \cite{Klein}, we obtain the stated description of the cohomology of
$LX$.
Consider the Serre spectral sequence for the homotopy orbit space.
Note that the elements $u^k$ for $k\geq 0$ are not hit by any
differentials since we may factor $id_{B{\mathbb T}}$ as
$B{\mathbb T} \to LX_{h{\mathbb T}} \to B{\mathbb T}$, where we use a constant loop to define
the first map and the second map is the projection $pr_1$.
The $d_2$ differential is given by the action differential $d$.
By proposition \ref{pr:actionQ} we see that only the elements
of the form $b_0^{j-1}a_i$ in $E_2^{0,*}$ and $u^k$ in $E_2^{*,0}$
survive to the $E_3$ page and that $E_3=E_\infty$.
The unique class $w_i^{(j)}$ in $H^*(LX_{h{\mathbb T}};{\mathbb Q} )$
representing $b_0^{j-1}a_i$ satisfies $(w_i^{(j)})^2=0$ since it has
odd degree. Thus any product $w_i^{(j)}w_k^{(\ell )}$ has
finite multiplicative order and hence cannot equal a nonzero
constant times a power of $u$. Hence the multiplicative structure
is as stated.
\end{proof}
% [End of ``String cohomology groups of complex projective spaces'',
% arXiv:math/0605754 (math.AT), 2006-05-30.
% Abstract: Let X be a space and write LX for its free loop space equipped
% with the action of the circle group T given by dilation. We compute the
% equivariant cohomology H^*(LX_hT; Z/p) as a module over H^*(BT; Z/p) when
% X = CP^r for any positive integer r and any prime number p. The computation
% implies that the associated mod p Serre spectral sequence collapses from
% the E_3-page.]
% [Begin ``Strict inequalities of critical probabilities on Gilbert's
% continuum percolation graph'', arXiv:1004.1596.
% Abstract: Any infinite graph has site and bond percolation critical
% probabilities satisfying p_c^site >= p_c^bond. The strict version of this
% inequality holds for many, but not all, infinite graphs. In this paper,
% the class of graphs for which the strict inequality holds is extended to
% a continuum percolation model. In Gilbert's graph with supercritical
% density on the Euclidean plane, there is almost surely a unique infinite
% connected component. We show that on this component p_c^site > p_c^bond.
% This also holds in higher dimensions.]

\section{Introduction}
\label{secintro}
Consider an infinite connected graph $G$ and perform bond
percolation by independently marking each edge open with probability
$p$ and closed otherwise. The critical probability $p_c^{\rm bond}$
refers to the value of $p$ above which there exists almost surely
(a.s.) an infinite connected subgraph of $G$, of open edges.
Similarly, one can perform site percolation by independently marking
each vertex of $G$ open with probability $p$ and refer to $p_c^{\rm
site}$ as the critical probability above which there exists a.s.\ an
infinite connected subgraph of $G$, of open vertices.
The weak inequality $p_c^{\rm site}\geq p_c^{\rm bond}$ can easily be proven by dynamic coupling, see for example Chapter~2 of Franceschetti and Meester~(2007). If $G$ is a tree,
then it is also easy to see that $p_c^{\rm site} = p_c^{\rm bond}$, as each
vertex, other than some arbitrarily selected root, can be uniquely identified by an edge and vice versa. By adding finitely many edges to an infinite tree, one can also construct other connected graphs for which the equality holds.
On the other hand, the strict
inequality $p_c^{\rm site} > p_c^{\rm bond}$ has also been shown to hold in several circumstances. Grimmett and Stacey~(1998) proved it for
a large class of `finitely transitive' graphs including the
$d$-dimensional hypercubic lattices.
These graphs, however, do not include the {\em random} graphs constructed
using {\em continuum} percolation models,
because they are not `finitely transitive':
since their node degrees are unbounded, the group action defined by their automorphisms has infinitely many orbits almost surely. These continuum percolation graphs are the focus of this paper. They are of particular interest in the context of communication networks and are treated extensively in the books by Franceschetti and Meester~(2007), Meester and Roy~(1996), and Penrose~(2003).
We consider {\em Gilbert's graph},
which is defined as follows. Let $\lambda >0$
and let $\mathscr{P}_\lambda$ be a homogeneous
Poisson point process in $\mathbb{R}^2$ of intensity $\lambda$.
Gilbert's graph, here denoted $G(\mathscr{P}_\lambda,1)$,
has as its vertex set
the point set $\mathscr{P}_\lambda$,
and the edges are obtained by connecting every
pair of points $x,y \in \mathscr{P}_\lambda$
such that $|x-y|\leq 1$,
by an undirected edge.
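To experiment with this model, one can sample it in a finite window. The sketch below is our illustration, not part of the paper; the function names are invented, and the box boundary distorts the graph near the edges, whereas the paper's statements concern the process on all of $\mathbb{R}^2$.

```python
import math
import random

def poisson_sample(mu, rng):
    # Knuth's inversion-by-products method for a Poisson(mu) variate.
    threshold, k, prod = math.exp(-mu), 0, 1.0
    while prod > threshold:
        k += 1
        prod *= rng.random()
    return k - 1

def gilbert_graph(lam, box, seed=0):
    """Sample Gilbert's graph G(P_lam, 1) restricted to the box [0, box]^2.

    The vertices form a Poisson point process of intensity lam; an
    undirected edge joins every pair of points at Euclidean distance <= 1.
    """
    rng = random.Random(seed)
    n = poisson_sample(lam * box * box, rng)
    pts = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) <= 1.0]
    return pts, edges
```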
It is well known that there exists
a critical density value $\lambda_c \in (0, \infty)$,
such that if $\lambda > \lambda_c$ then there exists a.s.\ a unique infinite connected component, while if $\lambda < \lambda_c$ then there is a.s.\ no infinite connected component;
see e.g. Meester and Roy (1996).
When it exists, we
denote this infinite component by ${\cal C}$.
In the site percolation model on ${\cal C}$, each vertex is independently marked open with probability $p$, and closed otherwise, and
we look for an unbounded connected component in the induced subgraph ${\cal C}_v$ of the open vertices. It is easy to see that this is equivalent to thinning the original Poisson process to one with intensity $p \lambda$ and looking for an unbounded connected component there. It follows that for $\lambda > \lambda_c$ there is a critical value $p_c^{\rm site} \in (0,1)$
(namely $p_c^{\rm site} = \lambda_c / \lambda$) such that if $p > p_c^{\rm site}$ then there is a.s.\ an infinite connected component in ${\cal C}_v$, and if
$p < p_c^{\rm site}$ then there is a.s.\ no such infinite component.
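The thinning step used above can be sketched in the same style (again our illustration, not the paper's; `points` is any list of points, such as the one sampled in the earlier sketch):

```python
import random

def thin(points, p, seed=1):
    """Independent p-thinning: keep each point with probability p.

    Applied to a Poisson process of intensity lam, the retained points
    form a Poisson process of intensity p * lam, so site percolation with
    parameter p on Gilbert's graph amounts to building the graph on the
    thinned process.
    """
    rng = random.Random(seed)
    return [x for x in points if rng.random() < p]
```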
In the bond percolation model on ${\cal C}$, we independently declare each edge
to be open with probability $p$, and closed otherwise, and look for an
unbounded connected component in the induced subgraph ${\cal C}_e$ of the open
edges.
There is a critical probability $p_c^{\rm bond} $ such that if $p > p_c^{\rm bond}$ then there is a.s.\ an infinite connected component in ${\cal C}_e$, and
if $p < p_c^{\rm bond}$ then there is
a.s.\ no such infinite component.
Observe that $p_c^{\rm bond} \leq p_c^{\rm site} <1$, and it can also
be shown by a branching process comparison that
$p_c^{\rm bond} >0 $.
Our main result provides strict inequality between
$p_c^{\rm site}$ and $p_c^{\rm bond}$ on
Gilbert's graph. Our proof easily extends to $3$ or more dimensions.
\begin{theo} \label{bool}
Consider $G({\mathscr P}_ \lambda,1)$ for $\lambda>\lambda_c$.
On ${\cal C}$ we have $p_c^{\rm site} > p_c^{\rm bond}$.
\end{theo}
The basic strategy is to adapt the enhancement
technique developed for percolation on lattices by
Menshikov~(1987),
Aizenman and Grimmett~(1991),
Grimmett and Stacey~(1998).
This consists of constructing an `enhanced' version of the site
percolation process for which the critical probability is
\emph{strictly less} than that of the original site process.
Then one can use dynamic coupling of the enhanced model with
bond percolation to complete the proof.
We face two main difficulties when trying to extend the enhancement technique
to a continuum random setting. One of these amounts to constructing
the desired enhancement on a random graph rather than on a deterministic
one. The second one consists in adapting some basic inequalities for
the enhanced graph, given in the discrete setting by Aizenman and
Grimmett~(1991), to the continuum setting. This requires somewhat
more involved geometric constructions and a careful incremental
build-up of the Poisson point process. Once we circumvent these obstacles,
it is not too difficult to obtain the final result using a classic dynamic
coupling construction.
The enhancement strategy has been proven useful to show strict inequalities in a variety of contexts: Bezuidenhout, Grimmett, and Kesten~(1993), and Grimmett~(1994), use this technique in the context of Potts and random cluster models; Roy, Sarkar, and White~(1998) use it in the context of directed percolation.
In the continuum, Sarkar~(1997) uses enhancement to demonstrate coexistence
of occupied and vacant phases for the three-dimensional
Poisson Boolean model.
Roy and Tanemura~(2002) use it in the context of percolation of different convex shapes.
\section{Proof of Theorem \ref{bool}}
We now describe the enhancement needed to prove Theorem~\ref{bool}.
Throughout this section we consider Gilbert's graph
$G({\mathscr P}_ \lambda,1)$ with $\lambda > \lambda_c$.
The
objective is to describe a way to add open vertices to the site
percolation model to make the probability of an infinite cluster
bigger, without changing the bond percolation model. To do so, we
introduce two kinds of coloured vertices, red vertices (the original
open vertices) and green vertices (closed vertices which have been
enhanced) and for any two vertices $x,y$ we write that $x \sim y$ if
they are joined by an edge. In $G(\mathscr{P}_\lambda,1)$, if we have vertices
$x_1,x_2,x_3,x_4,x_5$ such that $x_1$ is closed, has no neighbours
other than $x_2,\dots,x_5$, which are all red, and $x_2\sim x_3$ and
$x_4\sim x_5$ but there are no other edges amongst $x_2, x_3, x_4$
and $x_5$ then we say $x_1$ is {\em correctly configured}
in $G(\mathscr{P}_\lambda,1)$, and refer to this as a {\em bow tie} configuration of edges. If a vertex $x$ is correctly configured
we make it green with probability $q$, independently of everything
else; see Figure~\ref{fig:enh1}.
\begin{figure}[htbp]
\begin{center}
\scalebox{.8}{\includegraphics[angle = 270]{bowtie.pdf}}
\end{center}
\caption{The bow tie enhancement}
\label{fig:enh1}
\end{figure}
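The bow tie condition is mechanical to check. The following sketch (ours, not the paper's, using a hypothetical adjacency-dictionary representation of the graph) tests whether a closed vertex is correctly configured:

```python
def correctly_configured(v, adj, red, closed):
    """Bow tie test for a vertex v, following the paper's definition.

    v must be closed, its neighbours must be exactly four red vertices
    x2,...,x5, and the edges among those four must form exactly two
    vertex-disjoint pairs (x2 ~ x3 and x4 ~ x5, no other edges).
    adj maps each vertex to the set of its neighbours.
    """
    if v not in closed:
        return False
    nb = sorted(adj[v])
    if len(nb) != 4 or any(u not in red for u in nb):
        return False
    # collect the edges among the four neighbours
    pairs = [(a, b) for i, a in enumerate(nb) for b in nb[i + 1:]
             if b in adj[a]]
    # exactly two edges, and together they use all four neighbours
    return len(pairs) == 2 and {x for e in pairs for x in e} == set(nb)
```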
Let $B_n$ be the open disc of radius $n$ centred at the origin. Let
$\underline{Y} = (Y_i, i \geq 0)$ and $\underline{Z} = (Z_i, i \geq
0)$ be sequences of independent uniform $[0,1]$ random variables.
List the vertices of $\mathscr{P}_{\lambda}$ in order of increasing distance
from the origin as $x_1,x_2,x_3,\dots$ . Declare a vertex $x_i$ to be
{\em red} if
$Y_i < p$ and {\em closed} otherwise. Once the sets of red and closed
vertices have been decided in this way, apply the enhancement by
declaring each closed vertex $x_j$ to be {\em green}
if it is
correctly configured and $Z_j < q$. Whenever we
insert a vertex of the Poisson process at $x$, it has the values $Y_0$ and $Z_0$ associated with it. We shall refer to vertices that are either
red or green as being {\em coloured}.
Let $\partial B_n$ be the annulus $B_n \setminus B_{n - 0.5}$ and let
$A_n$ be the event that there is a path from a coloured vertex in
$B_{0.5}$ to a coloured vertex in $\partial B_n$ in $G
(\mathscr{P_{\lambda}},1)\cap B_n$ using only coloured
vertices (note that $A_n$ is based on a process completely
inside $B_n$; we do not allow vertices outside of $B_n$ to
affect possible enhancements inside $B_n$).
Let $\theta_n(p,q)$ be the probability that $A_n$ occurs, and define
$$\theta(p,q) \equiv \liminf _{n\rightarrow \infty} (\theta_n(p,q)).$$
The following proposition states that $\theta(p,q)$ is indeed the
percolation function associated to the enhanced model. From now on
we use `vertex' to refer to a point of the Poisson process and `point' to
refer to an arbitrary location in $\mathbb{R}^2$.
\begin{prop}\label{prop:enh}
There is a.s. an infinite connected component in $G({\mathscr P}_ \lambda,1)$ using only red and green vertices if and only if $\theta(p,q) > 0$.
\end{prop}
\noindent {\bf Proof of Proposition~\ref{prop:enh}.} For the if part, let $A'_n$ be the event that there is a coloured path from $B_{0.5}$ to outside $B_{n-2}$, so that $A_n \subseteq A'_n$. Let $\phi_n(p,q)$ be the probability of $A'_n$ occurring and let $\phi(p,q)$ be its limit as $n \to \infty$. Since $\phi_n(p,q) \geq \theta_n(p,q)$ for all $n$, we have $\phi(p,q) \geq \theta(p,q) > 0$. But $\phi(p,q)$ is just the probability that there is an infinite coloured component intersecting $B_{0.5}$, and it is well known that there is a.s.\ an infinite coloured component if
$\phi(p,q) > 0$.
For the only if part, if there is almost surely an infinite component then $\phi(p,q) > 0$. Given $n \geq 6$, we build up the Poisson process on the whole of $B_{n - 3}$. If there are any closed vertices that are not definitely correctly or incorrectly configured, we build up the process in the rest of their $1$-neighbourhood, and this determines whether they are green or uncoloured. If any more closed vertices occur they cannot be correctly configured as they will be joined to a closed vertex. Therefore we have built up the process everywhere in a region $R $ with $B_{n-3} \subset R \subset B_{n-2}$,
and all uncoloured vertices at this stage will remain uncoloured.
Let $V$ be the set of coloured vertices that are joined by a coloured path to a coloured vertex in $B_{0.5}$ at this stage.
Next, we build out the process radially symmetrically from $B_{n-3}$ (apart from where the process has already been built up) until a vertex $v$ occurs that is connected to a vertex in $V$. Let $J$ be the
event that such a vertex $v$ occurs at distance $r$ between $n-3$ and
$n-1$ from the origin, so $J$ must occur for $A_n'$ to occur.
We can find points $a_1, a_2, \ldots, a_9$ on the line through the origin and $v$, extended away from the origin, such that $a_1$ is
$r + 0.3$ from the origin, $a_2$ is $r+0.6$ from the origin, and so on. Surround $a_1,\ldots, a_9$ with circles $D_1, \ldots, D_9$ of
radius $0.05$. If there is at least one red vertex in
each one of these little circles that is contained in $B_n$
when the process continues to the whole
of $B_n$, and $v$ is also red then $A_n$ occurs. Therefore if $J$ occurs then the
conditional probability of $A_n$ occurring is at least $\gamma$,
where
\[
\gamma = p(1 - \exp(- 0.0025 \lambda p \pi))^9,
\]
as this is the probability of getting at least one red vertex in
each little circle and $v$ being red. Therefore $\theta_n (p,q) \geq \gamma P[J] \geq
\gamma \phi (p,q)$ for all $n \geq 6$, so $\theta (p,q) \geq \gamma
\phi (p,q) > 0$. \hfill{$\Box$} \vspace{.5cm}
Our next lemma provides an analogue of the Margulis-Russo formula for
the enhanced continuum model. First, we need to introduce the notion of
pivotal vertices.
Given the configuration
$(\mathscr{P}_{\lambda},\underline{Y},\underline{Z})$ and
inserting a vertex at $x$ we say that $x$ is $1$-$pivotal$ $in$
$B_n$ if putting $Y_0 = 0$ means that $A_n$ occurs but putting $Y_0
= 1$ means it does not. Notice that $x$ can either complete a path
(but it cannot do via being enhanced), or it could make another
closed vertex correctly configured which in turn would complete a
path. We say that $x$ is $2$-$pivotal$ $in$ $B_n$ if inserting a
vertex at $x$ and putting $Z_0 = 0$ means $A_n$ occurs but putting
$Z_0 = 1$ means it does not. That is, $Y_0 > p$ and adding a closed
vertex $v$ at $x$ means $v$ is correctly configured and enhancing it
to a green vertex means $A_n$ occurs but otherwise it does not.
For $i = 1,2$ let $E_{n,i}(x)$ be the event that $x$ is $i$-pivotal in $B_n$,
and set $P_{n,i}(x,p,q) := P[E_{n,i}(x)]$.
\begin{lemm} \label{prop:Russo}
For all $n > 0.5$ and $p \in (0,1)$ and $q\in (0,1)$ it is the case that
\begin{eqnarray}
\frac{\partial \theta_n(p,q)}{\partial p}=\int^{}_{B_n} \lambda P_{n,1}(x,p,q) \, \mathrm{d}x
\label{eq1}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial \theta_n(p,q)}{\partial q} = \int^{}_{B_n} \lambda P_{n,2}(x,p,q) \, \mathrm{d}x.
\label{eq2}
\end{eqnarray}
\end{lemm}
\noindent {\bf Proof.}
Let ${\cal F}$ be the $\sigma$-algebra generated
by the locations but not the colours of the vertices of $\mathscr{P}_\lambda \cap B_n$.
Let $N_1$ be the number of $1$-pivotal vertices.
Define ${\cal F}$-measurable random variables,
$X_{p,q}$ and
$Y_{p,q}$ as follows: $X_{p,q}$ is
the conditional probability that $A_n$ occurs,
and $Y_{p,q}$ is the conditional expectation of $N_1$,
given the configuration
of $\mathscr{P}_\lambda$.
By the standard version of the Margulis-Russo formula
for an increasing event defined on a finite collection
of Bernoulli variables (Russo~(1981), Lemma 3),
$$
\lim_{ h \to 0}
h^{-1} (X_{p+ h,q} - X_{p,q})
=
Y_{p,q} , ~~ a.s.
$$
Let $M$ denote the total number of vertices of $\mathscr{P}_\lambda$ in $B_n$.
By the standard coupling of Bernoulli variables, and
Boole's inequality,
$|X_{p+ h,q} - X_{p,q}| \leq |h| M$ almost surely,
and since $M$ is integrable we have by dominated convergence
that
\begin{eqnarray}
\frac{\partial \theta_n(p,q)}{
\partial p}
=
\lim_{ h \to 0}
E[ h^{-1} (X_{p+ h,q} - X_{p,q})]
=
E[
Y_{p,q} ] = E[N_1],
\label{Mar29a}
\end{eqnarray}
and by a standard application of the Palm theory of Poisson processes
(see e.g. Penrose~(2003)),
the right hand side of (\ref{Mar29a})
equals the right hand side
of (\ref{eq1}). The proof of (\ref{eq2}) is similar.
\hfill{$\Box$} \\
The key step in proving Theorem~\ref{bool} is given by the following result.
\begin{lemm}\label{intermediate}
There is a continuous function $\delta:(0,1)^2 \to (0,\infty)$ such that
for all $n > 100$, $x \in B_n$ and $(p,q) \in (0,1)^2$, we have
\begin{equation}
P_{n,2}(x,p,q) \geq \delta(p,q)P_{n,1}(x,p,q).
\label{100419a}
\end{equation}
\end{lemm}
Before proving this, we give a result saying that we can
assume there are only red vertices inside an annulus of fixed
size. For $x \in \mathbb{R}^2$, and $0 \leq \alpha < \beta$, let $C_{\alpha}(x)$ be the closed circle (i.e., disk) of radius $\alpha$
around $x$, and let $A_{\alpha,\beta}(x)$ denote the annulus
$C_{\beta}(x) \setminus C_{\alpha}(x)$.
Given $n$ and given $x \in B_n$, let
$R_n(x,\alpha,\beta)$ be the event that all vertices in $A_{\alpha,\beta}(x) \cap B_n$
are red.
\begin{lemm}\label{intermed2}
Fix $\alpha >3$ and $\beta > \alpha +3$.
There exists a continuous function $\delta_1:(0,1)^2 \to (0,\infty)$,
such that
for all $(p,q) \in (0,1)^2$, all
$n > \beta +3$ and all $x \in B_n$ with $|x|< \alpha -2 $ or
$|x| > \beta +2$, we have
\[
P[E_{n,1}(x) \cap R_n(x,\alpha, \beta) ] \geq \delta_1(p,q) P_{n,1}(x).
\]
\end{lemm}
\noindent {\bf Proof.}
We shall consider a modified model, which is the same
as the enhanced model but with enhancements suppressed
for all those vertices lying in $A_{\alpha-1,\beta+1}(x)$.
Let $E'_{n,1}(x) $ be the event that $x$ is 1-pivotal in
the modified model.
Returning to the original model,
we first create the Poisson process of intensity $\lambda$
in $B_n \setminus A_{\alpha-1,\beta +1}(x)$,
and determine which of these vertices are red.
Then we build up the Poisson process of intensity $\lambda$ inside
$B_n \cap A_{\alpha-1,\beta +1}(x)$
and for any of these new vertices
with more than $4$ neighbours, or
with at least one closed neighbour outside $A_{\alpha-1,\beta +1}(x)$, we decide
whether they are red or closed. This decides whether or not they
are coloured as these vertices cannot possibly become green because
they are not correctly
configured. We can now tell which of the closed vertices
outside $A_{\alpha-1,\beta +1}(x)$ are correctly configured, and we determine
which of these are green.
This leaves a set $W$ of vertices inside $A_{\alpha-1,\beta +1}(x)$ that have at most four neighbours. If we surround each vertex in $W$ by a circle of radius $0.5$, then no point can be covered by more than $5$ of these circles, since otherwise there would be a vertex in $W$ with at least $5$ neighbours. All
of these circles are contained in $C_{\beta+2}(x)$,
which has area $\pi(\beta+2)^2$. Therefore
\[
|W| \leq \frac{5 \pi (\beta+2)^2}{0.5^2 \pi} = 20(\beta + 2)^2.
\]
For $x$ to have any possibility of being $1$-pivotal,
at this stage there must be a
set $W'$ contained in $W$ such that if every vertex in $W'$ is coloured and every vertex in $W \setminus W'$ is uncoloured then $x$ becomes $1$-pivotal.
In this case, with probability at least
$[p(1-p)]^{20(\beta +2)^2}$ we have every vertex in
$W'$ red and every vertex in $W \setminus W'$ closed, which would imply
event $E'_{n,1}(x)$
occurring. Therefore
$P[E'_{n,1}(x)] \geq [p(1-p)]^{20(\beta+2)^2} P[E_{n,1}(x)]$.
Now we note that the occurrence or otherwise of $E'_{n,1}(x)$
is unaffected by the addition or removal of closed vertices
in $A_{\alpha, \beta}(x)$. This is because the suppression
of enhancements in $A_{\alpha-1,\beta+1}(x)$ means that
these added or removed vertices cannot be enhanced themselves,
and moreover any vertices they cause to be correctly
or incorrectly configured also cannot be enhanced.
Consider creating the marked Poisson process in $B_n$,
with each Poisson point (vertex) $x_i$ marked with
the pair $(Y_i,Z_i)$, in two stages. First, add all marked vertices
in $B_n \setminus A_{\alpha,\beta}(x)$, and just the red vertices
in $B_n \cap A_{\alpha,\beta}(x)$. Secondly, add the closed vertices
in $B_n \cap A_{\alpha,\beta}(x)$.
The vertices added at the second
stage have no bearing on the event $E'_{n,1}(x)$, so
$E'_{n,1}(x)$ is independent of the event that no vertices at all are added in the second stage.
Hence,
$$
P[E'_{n,1}(x) \cap R_n(x,\alpha,\beta)]
\geq \exp(- (1-p) \lambda (\beta^2 - \alpha^2))
P[E'_{n,1}(x)],
$$
with equality if $|x| \leq n- \beta$.
Finally, we use a similar argument to the initial argument in this proof.
Suppose
$E'_{n,1}(x) \cap R_n(x,\alpha,\beta)$ occurs. Then there exist at most
$20(\beta +2)^2$ vertices in
$A_{\beta,\beta +1}(x) \cup A_{\alpha -1,\alpha}(x)$
which are correctly configured for which the possibility of
enhancement has been suppressed. If we now allow these to be possibly enhanced,
there is a probability of at least
$(1-q)^{20(\beta +2)^2}$ that none of them is enhanced, in which
case the set of coloured vertices is the same for the modified model
as for the un-modified model and therefore $E_{n,1}(x)$ occurs.
Taking
$$
\delta_1(p,q) = [p(1-p)(1-q)]^{20(\beta+2)^2} \exp(-(1-p) \lambda
(\beta^2 - \alpha^2 )),
$$
we are done. \hfill{$\Box$} \\
\noindent {\bf Proof of Lemma \ref{intermediate}.}
As a start, we fix $p$ and $q$.
We also fix $n$ and $x \in B_n$,
and just write $P_{n,i}(x)$ for $P_{n,i}(x,p,q)$.
Define event $E_{n,1}(x)$
as before,
so that $P_{n,1} (x) = P[E_{n,1}(x)]$.
Also, write
$C_{r}$ for the disk $C_r(x)$ and $A_{\alpha,\beta}$ for the annulus $A_{\alpha,\beta}(x)$.
For now we assume $30.5 < |x| < n-30.5$.
We create the Poisson process of intensity $\lambda$
everywhere on $B_n$ except inside
$C_{30}$, and decide which of these vertices are red.
Now we create the process of only the red vertices
in $A_{25,30}$ (a Poisson process of intensity $p \lambda$
in this region).
Assuming there will be no closed vertices in $A_{25,30}$,
we then know which of the closed vertices outside $C_{30}$ are correctly
configured, and we determine which of these are green.
Having done all this, let $V$ denote the set of current vertices
in $B_n \setminus C_{25}$ that are connected to $B_{0.5}$ at this stage
(by connected we mean connected via a coloured path), and
let $T$ denote the set of current vertices in $B_n \setminus C_{25}$ that
are connected to $\partial B_{n}$.
Let $N(V)$ be the $1$-neighbourhood of $V$
and let $N(T)$ be the $1$-neighbourhood of $T$.
We build up the red process inwards (i.e., towards $x$ from the boundary
of $C_{25}$)
on $C_{25} \cap (N(V) \triangle N(T))$
until a red vertex $y$
occurs (if such a vertex occurs). Set $r= |y-x|$.
Suppose $y \in N(V)$ (if instead $y \in N(T)$
we would reverse the roles of $V$ and $T$ in the sequel).
Then if $T \cap C_{r+0.05} \neq \emptyset$ we say that
event $F$ has occurred and we let $z$ denote an
arbitrarily chosen vertex
of $T \cap C_{r+0.05} $.
Otherwise, we build up the red process inwards on
$C_{r} \cap N(T) \setminus N(V)$ until a red vertex $z$
occurs (if such a vertex occurs).
Let $E_2$ be the event that (i) such vertices $y$ and $z$ occur,
and (ii) the sets $V$ and $T$ are disjoint, and (iii)
$|y-z| >1$,
and (iv) there is no path from $y$ to $z$ through coloured
vertices in $B_n \setminus
C_{25}$ that are not in $V \cup T$.
If $E_{n,1}(x) \cap R_n(x,20,30)$ occurs, then $E_2$ must occur.
\begin{figure}[htbp]
\includegraphics[angle = 270, width = 14cm]{case1.pdf}
\caption{Our convention in the diagrams is to indicate points with
lower case letters, and areas with upper case letters. The
dashed circles are of radius 1. Here the event $F$ occurs.
} \label{fig:fig1}
\end{figure}
Now suppose $E_2 \cap F$ has occurred.
Let $a_1$ be the point (again we use `point'
to refer to a point in $\mathbb{R}^2$) which is at distance
$r$ from $x$ and distance $1$ from $y$ on the opposite side of the
line $xy$ to the side $z$ is on (see Figure $\ref{fig:fig1}$).
Let $a_2$ be the point lying inside $C_r$ at distance
$1.01$ from $a_1$ and $0.99$ from $y$.
Let $A_2$ be the circle of radius $0.005$ around $a_2$.
Any red vertex in this circle will be connected to the red vertex
$y$ (and therefore to a path to $B_{0.5}$) but cannot be connected
to any coloured path to $\partial B_n$ as $a_1$ is the nearest place for
such a vertex to be, given $E_2$ occurs. Similarly let
$b_1$ be the point lying at distance $1$ from $z$ and distance $r$ from
$x$, on the opposite side of $xz$ to $y$. Then let
$b_2$ be the point at distance $1.01$ from $b_1$ and $0.99$ from
$z$, and let $B_2$ be the circle of radius $0.005$ around $b_2$. Any red
vertex in $B_2$ will be connected to $z$ (and therefore a path to
$\partial B_n$), but not a path to $B_{0.5}$. Also, any vertex in
$A_2$ will be at least $1.1$ away from any vertex in $B_2$.
Now let $l$ be the line through $x$ such that $a_2$ and $b_2$ are on
different sides of the line and at equal distance from the line. We
can pick points $a_3,a_4,\ldots,a_{30}$ such that $a_i$ is within $0.9$
of $a_{i + 1}$ for $2 \leq i \leq 29$, $a_{30}$ and $a_{29}$ are
both within $0.9$ of $x$ but none of the other $a_i$'s are within $1.1$ of $x$, and none of the $\{a_i : i \geq 3\}$ are within
$1$ of $C_r$ or within $0.5$ of $l$ or within $0.01$ of another
$a_j$. Do the same on the other side of $l$ with $b_3,b_4, \ldots , b_{30}$.
Now consider circles $A_i$ and $B_i$ of radius $0.005$ around them.
Let $I$ be the event that there is exactly one red vertex in each
of these circles (including the circles $A_2$ and $B_2$), that there
are no other new vertices anywhere else in $C_{25}$, and that there are no closed
vertices in $C_{30}\setminus C_{25}$. The probability that $I$
occurs, given $E_2 \cap F$, is at least
\[
\delta_2 := (1 - \exp(-0.005^2 \pi \lambda p))^ {60} \exp(-900 \pi \lambda).
\]
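Each factor in $\delta_2$ is an elementary Poisson probability: a process of intensity $\mu$ puts at least one point in a disk of radius $\rho$ with probability $1 - \exp(-\pi\rho^2\mu)$, and leaves the disk empty with probability $\exp(-\pi\rho^2\mu)$. A minimal numerical sketch (the helper names are ours; since $\delta_2$ itself underflows double precision, we work with its logarithm):

```python
import math

def log_p_hit(radius, intensity):
    """log P(a Poisson process of this intensity puts at least one
    point in a disk of the given radius); the disk area is pi*radius^2."""
    return math.log1p(-math.exp(-intensity * math.pi * radius ** 2))

def log_p_empty(radius, intensity):
    """log P(the process puts no point in a disk of the given radius)."""
    return -intensity * math.pi * radius ** 2

def log_delta_2(lam, p):
    # 60 target circles of radius 0.005 must each receive a red vertex
    # (red intensity p*lam), while the disk C_30 of radius 30 receives
    # no other new vertex of the rate-lam process.
    return 60 * log_p_hit(0.005, p * lam) + log_p_empty(30, lam)

# delta_2 itself underflows double precision, but its logarithm is
# finite, so delta_2 > 0 and it increases with p:
assert math.isfinite(log_delta_2(1.0, 0.5))
assert log_delta_2(1.0, 0.9) > log_delta_2(1.0, 0.1)
```

The finite logarithm confirms that $\delta_2$ is strictly positive for every $\lambda > 0$ and $p \in (0,1)$, which is all the argument needs.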
If the events $E_2$, $F$, $I$ occur and $Y_0
> p$ then $x$ is $2$-pivotal.
\begin{figure}[htbp]
\includegraphics[angle = 270, width = 12cm]{case2.pdf}
\caption{The case where $F$ does not occur. Here $a_1$ is the `worst possible'
location for $z$.}
\label{fig:points1}
\end{figure}
Now we consider the case where $E_2$ occurs but $F$ does not, so $z$ is inside
$C_r$ and is connected to a vertex $z_1$ in $T$ that must be outside
$C_{r+0.05}$ as $z$ is the only vertex in $T$ inside $C_{r+0.05}$ (see Figure $\ref{fig:points1}$).
Let $c$ be the point at distance $1$ from $y$ and $r+0.05$ from $x$, on
the same side of the line $xy$ as $z$ (assume without loss of
generality this is to the right of $y$). This is the closest $z_1$
can be. Let $a_1$ be the point inside $C_r$ at distance $1$ from $y$
and $1$ from $c$, so this is the furthest left that $z$ can be. Let
$d$ be the point at distance $r+0.05$ from $x$ and $1$ from $y$, on
the other side of $y$ to $c$. Then consider the point $b_1$ inside $C_r$ at
distance $1.01$ from $d$ and $0.99$ from $y$, and the small circle
$B_1$ of radius $0.005$ around $b_1$. Then any vertex in $B_1$ is at distance at
least $1.01$ from $a_1$, and therefore from $z$, as $z$ cannot be any
nearer than $a_1$. Also any vertex in $B_1$ will be at least $1.005$
from any other vertices in $T$, as $d$ is the nearest place such a
point can be. As before we can then have points $a_2,\ldots,a_{30}$
and $b_2,\ldots,b_{30}$ with small circles around them such that having one red vertex in each of these circles ensures that $x$ is $2$-pivotal. The
probability of getting at least $1$ red vertex in each of these
circles, a red vertex in $B_1$ and no other new vertices in $C_{25}$,
and no closed vertices in $C_{30} \setminus C_{25}$, is
at least $\delta_2$.
So by Lemma \ref{intermed2},
the probability that $x$ is $2$-pivotal satisfies
\begin{eqnarray*}
P_{n,2}(x)& \geq & \delta_2 P[E_2 \cap F] +
\delta_2 P[E_2 \cap F^c]
\\
& \geq &
\delta_2 P[ E_{n,1}(x) \cap R_n(x,20,30)]
\\ & \geq & \delta_1 \delta_2 P_{n,1}(x)
.
\end{eqnarray*}
This proves the claim (\ref{100419a}) for the case with $30.5 < |x| < n-30.5$.
Now suppose $|x| \leq 30.5$. Then
we create the Poisson process in $B_n \setminus C_{40}$,
and decide which of these vertices are red. Then we create the
red process in $A_{39,40}(x)$, and determine which
vertices in $B_n \setminus C_{40}$ are green, assuming
there are no closed vertices in $A_{39,40}(x)$.
We then build up the red process in $C_{39}$ inwards towards $x$
until a vertex $y$ occurs in the process which is connected to
$\partial B_n$. Let $H_1$ be the event that such a vertex $y$
appears at distance $r$ between $38$ and $39$ from $x$, so $H_1$ must
occur for $E_{n,1}(x) \cap R_n(x,20,40) $ to occur.
If $x$ is inside $B_{0.5}$ we can choose points $a_0$ and $a_1$ such
that they are both outside $B_{0.5}$, at distance between $0.8$ and
$0.9$ from $x$ and at distance between $0.1$ and $0.2$ from each
other. We can then choose $b_0$ and $b_1$ such that they are both
within $0.9$ of $x$, further than $1.5$ from $a_0$ and $a_1$ and
between $0.1$ and $0.2$ from each other. We can then choose points
$a_2, a_3, \ldots, a_{100}$ such that $a_i$ is within $0.9$ of $a_{i+1}$
for $1 \leq i \leq 99$, $a_{100}$ is within $0.9$ of $y$, no two
$a_i$ are within $0.1$ of each other, and no $a_i$ is within $1.1$
of $x$, $b_0$ or $b_1$, or inside $B_{0.5}$ for $i \geq 2$. Then
consider little circles $A_i$ and $B_i$ of radius $0.05$ around
these points. If there is at least one red vertex in each of these
circles and no vertices anywhere else in $C_r$ then $x$ is
$2$-pivotal. If $x$ is outside $B_{0.5}$ we choose points in a
similar way but make sure $b_1$ connects with a path to $B_{0.5}$,
using little circles $B_2, B_3,\ldots,B_{50}$ which are again of radius
$0.05$ and are at least $1.1$ from the $A_i$. Therefore,
setting
\[
\delta_3 := (1-\exp(-0.05^2 \pi \lambda p))^{152}\exp(-1600 \pi
\lambda)
\]
and using
Lemma \ref{intermed2}, we have for some strictly positive continuous
$\delta_4(p,q)$ that
$$
P_{n,2}(x) \geq \delta_3 P[H_1] \geq \delta_3 P[E_{n,1}(x)
\cap
R_n(x,20,40)] \geq \delta_3 \delta_4 P_{n,1}(x).
$$
Now suppose $|x| \geq n - 30.5$.
In this case the proof is similar. Again, create
the Poisson process in $B_n \setminus C_{40}$.
Then create the red process in $A_{39,40}(x)$ and
determine which vertices in $B_n \setminus C_{40}$ are green,
assuming there are no closed vertices in $A_{39,40}(x)$.
Then build the red process in $C_{39} \cap B_{n-0.5}$
inwards towards $x$
until a vertex $y$ occurs that is connected to a
path of coloured vertices to $B_{0.5}$ but not to $\partial B_n$. Let
$H_2$ be the event that such a vertex $y$ occurs at distance $r$
between $38$ and $39$ from $x$, and that there is no current coloured
path from $B_{0.5}$ to $\partial B_n$, so $H_2$ has to occur for
$E_{n,1}(x) \cap R_n(x,20,40)$ to occur.
Given this vertex $y$
we can find circles
$A_1, A_2, \ldots ,A_{100}$ and $B_1, B_2, \ldots,B_{50}$ of radius $0.05$
as before such that having a red vertex in each of these little
circles but no other vertices in $C_r$ or $\partial B_n \cap C_{40}$
ensures $x$ is $2$-pivotal. Therefore in this case
\[
P_{n,2}(x) \geq \delta_3 P[H_2 ] \geq \delta_3 P[E_{n,1}(x) \cap
R_n(x,20,40)] \geq \delta_3 \delta_4
P_{n,1}(x).
\]
Take $\delta(p,q) := \delta_1 \delta_2 \delta_3 \delta_4 $. By its
construction $\delta$ is strictly positive and continuous in $p$ and $q$,
completing the proof of the lemma. \hfill{$\Box$} \vspace{.5cm}
The following proposition follows immediately by combining
Lemma~\ref{prop:Russo} and Lemma~\ref{intermediate}.
\begin{prop} \label{prop:final}
There is a continuous function $\delta:(0,1)^2 \to (0,\infty)$
such that
\[
\frac{\partial \theta_n (p,q)}{\partial q} \geq \delta (p,q) \frac{\partial \theta_n (p,q)}{\partial p}
\]
for all $n \geq 100$ and $(p,q) \in (0,1)^2$.
\end{prop}
\noindent
\textbf{Proof of Theorem~\ref{bool}.}
Set $p^* = p_c^{\rm site} $
and $q^* = (1/8)(p^*)^2$.
Then using Proposition~\ref{prop:final} and looking at a small
box around $(p^*,q^*)$, we can find
$\epsilon \in (0,\min(p^*/2,1-p^*))$
and $ \kappa \in (0,q^*)$
such that for all $n > 100$ we have
\[
\theta_n(p^* + \epsilon,q^* - \kappa) \leq
\theta_n(p^* - \epsilon,q^* + \kappa).
\]
Taking the limit inferior as $n \rightarrow \infty$,
since
$\theta$ is monotone in $q$ we get
\[
0 < \theta(p^* + \epsilon,0 ) \leq
\theta(p^* + \epsilon,q^*- \kappa)
\leq \theta(p^* - \epsilon,q^* + \kappa).
\]
Now set $p = p^* - \epsilon$. Then $q^* + \kappa < 2q^* = (p^*)^2/4 \leq (p^* - \epsilon)^2 = p^2$
(using $\epsilon \leq p^*/2$ and $\kappa < q^*$),
so that $\theta(p,p^2)>0$, and
by Proposition
\ref{prop:enh}, the enhanced model with parameters $(p,p^2)$ percolates,
i.e. has an infinite coloured component, almost surely.
We finish the proof with a coupling argument along the lines of
Grimmett and Stacey~(1998). Let $E$ be the set of edges and $V$ be the
set of vertices of ${\cal C}$ (the infinite component). Let $(X_e: e \in
E)$ and $(Z_v: v \in V)$ be collections of independent Bernoulli
random variables with mean $p$. From these we construct a new
collection $(Y_v: v \in V)$ which constitutes a site percolation
process on ${\cal C}$. Let
$e_0, e_1, \ldots$ be an enumeration of the edges of ${\cal C}$ and
$v_0, v_1, \ldots$ an enumeration of the vertices. Suppose
at some point we have defined $(Y_v: v \in W)$ for some subset $W$
of $V$. Let ${\cal Y}$ be the set of vertices not in $W$ which are
adjacent to some currently active vertex (i.e. a vertex $u \in W$
with $Y_u = 1$). If ${\cal Y} = \emptyset$ then let $y$ be the first
vertex not in $W$ and set $Y_y = Z_y$ and add $y$ to $W$. If ${\cal Y} \neq \emptyset$,
we let $y$ be the first vertex in ${\cal Y}$ and let $y'$ be the first
currently active vertex adjacent to it, then set $Y_y = X_{yy'}$ and add $y$ to $W$.
Repeating this process builds up the entire red site percolation process,
if it does not percolate, or a percolating subset of the red site percolation
process if it does percolate.
In the latter case, the bond process $\{X_e\}$ also percolates.
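The edge-by-edge exploration just described can be sketched on a finite graph. The following Python rendering is our illustration only (all names are ours, and the enhancement step is omitted); it builds the site variables $Y_v$ from the bond variables $X_e$ and the spare coins $Z_v$, examining at most one bond per newly decided vertex:

```python
import random

def couple_site_from_bond(vertices, edges, p, seed=0):
    """Finite-graph sketch of the coupling: build a site process (Y_v)
    from independent Bernoulli(p) bond variables (X_e), falling back on
    spare site coins (Z_v) when no decided open vertex is adjacent."""
    rng = random.Random(seed)
    X = {e: rng.random() < p for e in edges}       # bond variables
    Z = {v: rng.random() < p for v in vertices}    # spare site coins
    adj = {v: set() for v in vertices}
    for e in edges:
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)

    Y = {}                                         # the coupled site process
    while len(Y) < len(vertices):
        # undecided vertices adjacent to a decided open ("active") vertex
        frontier = [v for v in vertices if v not in Y
                    and any(u in Y and Y[u] for u in adj[v])]
        if not frontier:
            y = next(v for v in vertices if v not in Y)
            Y[y] = Z[y]                   # no active neighbour: spare coin
        else:
            y = frontier[0]
            yp = next(u for u in adj[y] if u in Y and Y[u])
            Y[y] = X[frozenset({y, yp})]  # reuse the bond variable
    return Y, X

# On a path, the exploration runs left to right, so two consecutive
# open sites force the bond between them to be open: every open site
# cluster is contained in an open bond cluster.
vs = list(range(8))
es = [frozenset({i, i + 1}) for i in range(7)]
for seed in range(10):
    Y, X = couple_site_from_bond(vs, es, 0.5, seed=seed)
    for i in range(7):
        if Y[i] and Y[i + 1]:
            assert X[frozenset({i, i + 1})]
```

The point of the construction is exactly the containment checked at the end: every cluster of the coupled site process sits inside a cluster of the bond process.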
Now suppose the red site process does not percolate.
For any correctly configured vertex $x_1$ with $x_2$ up to $x_5$ as
before, $x_1$ itself is not red. Therefore at most one edge to $x_1$
has been examined, so we can find a first unexamined edge (in
the enumeration) to $x_2$ or $x_3$, and then to $x_4$ or $x_5$. We
then declare $x_1$ to be green only if both of these edges are open,
which happens with probability $p^2$.
This completes the enhanced site process with
$q = p^2$ and every component
in this is contained in a component for the bond process $\{X_e\}$.
Therefore, since the enhanced $(p,p^2)$ site process percolates almost
surely,
so does the bond process,
so $p_c^{\rm bond} \leq p < p_c^{\rm site}$.
\hfill{$\Box$}
\vspace{.5cm}
| {
    "arxiv_id": "1004.1596",
    "url": "https://arxiv.org/abs/1004.1596",
    "title": "Strict inequalities of critical probabilities on Gilbert's continuum percolation graph",
    "subjects": "Probability (math.PR)",
    "timestamp": "2010-04-30T02:01:50",
    "language": "en",
    "abstract": "Any infinite graph has site and bond percolation critical probabilities satisfying $p_c^{site}\\geq p_c^{bond}$. The strict version of this inequality holds for many, but not all, infinite graphs. In this paper, the class of graphs for which the strict inequality holds is extended to a continuum percolation model. In Gilbert's graph with supercritical density on the Euclidean plane, there is almost surely a unique infinite connected component. We show that on this component $p_c^{site} > p_c^{bond}$. This also holds in higher dimensions."
} |
https://arxiv.org/abs/1705.03851 | Rotational subsets of the circle | A rotational subset, relative to a continuous transformation $T: \mathbb{T} \to \mathbb{T}$ on the unit circle, is a closed, invariant subset of $\mathbb{T}$ that is minimal and on which $T$ respects the standard orientation of the unit circle. In the case where $T$ is the standard angle doubling map, such subsets were studied by Bullet and Sentenac. The case where $T$ multiplies angles by an integer $d > 2$ was studied by Goldberg and Tresser, and Blokh, Malaugh, Mayer, Oversteegen, and Parris. These authors prove that infinite rotational subsets arise as extensions of irrational rotations of the unit circle. In this paper, we prove that such a structure theorem holds for the wider class of continuous transformations $T$ with finite fibers. Our methods are more squarely analytic in nature than the works mentioned, and hence of interest even in the cases treated by the works mentioned above. The paper concludes with an exposition of those cases from the point of view taken here. | \section*{Introduction}
In what follows, $\mathbb{T}$ denotes the unit circle with the standard orientation.
\begin{defn}
Let $X \subset \mathbb{T}$ and $f: X \to X$ be a continuous transformation. The
map $f$ \emph{preserves cyclic order} if, for any $P, Q, R \in X$ with
distinct images, the arcs $P\,Q\,R$ and $f(P)\,f(Q)\,f(R)$ have the
same orientation.
\end{defn}
Now consider a continuous transformation $T:\mathbb{T} \to \mathbb{T}$ and a compact set $X \subseteq \mathbb{T}$.
\begin{defn}
The subset $X$ is \emph{rotational} if
\begin{itemize}
\item $X$ is invariant, \emph{i.e.} $TX \subseteq X$,
\item $X$ is minimal, and
\item $T\vert_X$ preserves cyclic order.
\end{itemize}
\end{defn}
Our objective is to study the structure of
infinite, rotational, proper subsets under fairly general assumptions about the
nature of $T$. The main result is:
\begin{thm}\label{thm:main}
Let $T: \mathbb{T} \to \mathbb{T}$ be a continuous function with finite fibers and
$X \subset \mathbb{T}$ an infinite, rotational, proper subset of $\mathbb{T}$ with
respect to this transformation. Then:
\begin{itemize}
\item[i.] The dynamical system $(X,T)$ is an extension of an irrational rotation
of the circle.
\item[ii.] The function $\phi: X \to \mathbb{T}$ that realizes this extension has singleton
fibers except at countably many points of $\mathbb{T}$. Over these exceptional points,
the fibers have cardinality two, corresponding to endpoints of gaps of the
set $X$ in $\mathbb{T}$.
\item[iii.] $(X,T)$ has a unique ergodic measure $\mu$ and $\phi_\ast \mu$ is the
standard Lebesgue measure on $\mathbb{T}$.
\end{itemize}
\end{thm}
The above theorem was proved in the case when $T: \mathbb{T} \to \mathbb{T}$ is the angle
doubling map by Bullet and Sentenac \cite{bull-sent}. It was established in the
case where $T$ is the standard $d$-fold cover of the unit circle by Goldberg
and Tresser \cite{gold-tress} and by Blokh, Malaugh, Mayer, Oversteegen and
Parris \cite{bl-mal}. These works were motivated by the study of
the action of quadratic dynamical systems in the complement of the Julia set.
The proof of theorem \ref{thm:main} will be accomplished
over the next two sections. We then revisit the $d$-fold cover case
from our point of view. The last section presents a class of rotational
subsets that include examples which are not conjugate to the previously known cases.
On the basis of theorem \ref{thm:main} and the examples of the last section, it is natural
to ask if rotational subsets exist for any continuous transformation of $\mathbb{T}$ with
degree larger than $1$.
\section*{Structure of rotational subsets}
\emph{Henceforth}, unless otherwise stated, we will work on the case where
\begin{itemize}
\item $T$ has finite fibers, and
\item the rotational set $X$ is an infinite, proper subset of $\mathbb{T}$.
\end{itemize}
Suppose $x_0 \in X$ is an isolated point of such a dynamical system. Minimality
implies that the forward orbit, $\{T^nx_0: n > 0\}$, is dense in
$X$. As a consequence, we have that $T^n x_0 = x_0$ for some positive integer
$n$. The forward orbit is then finite as well as dense, so
$X = \{T^nx_0: n \ge 0\}$ is finite, contradicting the assumption that $X$ is
infinite. Hence $X$ cannot have any isolated points; that is, $X$ is a perfect subset of $\mathbb{T}$.
By conjugating with the appropriate rotation, we can assume that $0 \notin X$.
Parameterize $\mathbb{T}$ by the unit interval $[0,1[$ and note that
\[
0 < \alpha = \inf X < \beta = \sup X < 1.
\]
\begin{lem}\label{lem:perturbation}
Suppose $a, b \in X$ and $Ta = b$. Then, there are strictly monotone
sequences $a_n$ and $b_n$ such that $\lim_{n\to\infty} a_n = a$,
$\lim_{n\to\infty} b_n = b$ and $Ta_n = b_n$, for all $n \in \mathbb{N}$.
\end{lem}
\begin{proof}
Since $X$ is perfect, one can construct a sequence $a_n \in X$ such
that $a_n \ne a$ for all $n \in \mathbb{N}$ and $a_n \to a $ as $n \to \infty$.
By passing to a subsequence, one can further arrange the $a_n$ to be strictly
monotonic. Since $T$ has finite fibers, $Ta_n \ne b$ for all but finitely many
$n$. Moreover, $Ta_n \to Ta$. By once again passing to a subsequence, if necessary,
we may arrange that $Ta_n$ is strictly monotone.
\end{proof}
\begin{prop}\label{prop:fiber}
The fibers of $T|X$ have cardinality at most two.
\end{prop}
\begin{proof}
Suppose, to the contrary, that $x_0, x_1, x_2 \in X$ are three distinct points arranged in
increasing order with the same image under $T$. So $T x_i = y$ for $i =
0, 1$ and $2$. By lemma \ref{lem:perturbation}, there are
strictly monotone sequences $a^{(i)}_n, i = 0,1,2$ such that $a^{(i)}_n
\to x_i$. Moreover, $b^{(i)}_n = T a^{(i)}_n$ are strictly monotone and
approach $y$.
The proof proceeds according to how $b^{(0)}_n$ and $b^{(2)}_n$
approach $y$.
\emph{Case $b^{(0)}_n$ and $b^{(2)}_n$ approach $y$ from the same side:}
Without loss of generality, assume that both sequences approach $y$ from below.
In this case, we can use the properties of the sequence $b^{(i)}_n$ to find
indices $k$ and $l$ so that
\[
b^{(0)}_k < b^{(2)}_l < y
\]
and $a^{(0)}_k$, $x_1$ and $a^{(2)}_l$ are in increasing order.
This means that $T$ changes the cyclic order of the points $a^{(0)}_k$, $x_1$,
and $a^{(2)}_l$.
\emph{Case $b^{(0)}_n \searrow y$ and $b^{(2)}_n \nearrow y$:}
Choose $n$ sufficiently large so that
\[
a^{(0)}_n < x_1 < a^{(2)}_n.
\]
But $T a^{(2)}_n < y < T a^{(0)}_n$, so $T$ doesn't preserve cyclic order.
\emph{Case $b^{(0)}_n \nearrow y$ and $b^{(2)}_n \searrow y$:}
By invoking lemma \ref{lem:perturbation} again, we construct a small perturbation of $x_1$,
say $a'$, with the property that $Ta' \ne y$. We treat the case where $Ta' < y$ ---
the other case is handled similarly. In this situation, there is a sufficiently large
$n$ with the property that
\[
Ta' < Ta^{(0)}_n < y < Ta^{(2)}_n.
\]
Observe that $T$ doesn't preserve the cyclic order of $a^{(0)}_n, a'$, and $a^{(2)}_n$.
Thus, in all three cases, we have a contradiction.
\end{proof}
The minimality condition ensures that $T$ is surjective. Consequently, we may put
$\alpha' = \max \{x: Tx = \beta\}$ and $\beta' = \min \{x: Tx = \alpha\}$.
\begin{prop}
The inequality
\[
\alpha < \alpha' < \beta' < \beta
\]
holds. Moreover, $[\alpha',\beta']$ is a gap for $X$, \emph{i.e.}
$X \cap ]\alpha',\beta'[ = \emptyset$.
\end{prop}
\begin{proof}
Suppose that $\alpha' > \beta'$. Minimality ensures that $\beta' >
\alpha$. By using the perturbation result (lemma
\ref{lem:perturbation}), we can find an $x$ near $\alpha$ such that
$\alpha < Tx < \beta$. From this we have that $T$ reverses the cyclic
order of the triple $x, \beta', \alpha'$.
Now, consider an $x \in X$ with $\alpha' < x < \beta'$. By the definition
of $\alpha'$ and $\beta'$, $Tx$ must be distinct
from both $\alpha$ and $\beta$. Hence $Tx$ must be strictly between $\alpha$
and $\beta$. This means that $T$ reverses the cyclic order of $\alpha',
x, \beta'$. Contradiction.
Finally, since $X$ has no isolated points, $\alpha' \ne \alpha$
and $\beta' \ne \beta$.
\end{proof}
\begin{lem}\label{lem:increasing}
Let $x, y \in X$. If $x < y$ and $Tx < Ty$, then for any $z \in X$ with
$x < z < y$, we must have $Tx \le Tz \le Ty$.
\end{lem}
\begin{proof}
Failure of the conclusion clearly entails that $T$ reverses the cyclic
order of the triple $x, z, y$.
\end{proof}
\begin{prop}
$\alpha < T\beta \le T\alpha < \beta$.
\end{prop}
\begin{proof}
Note that minimality rules out the possibilities
that $T\alpha = \alpha$ and $T\beta = \beta$.
If $T\beta = \alpha$, then by lemma
\ref{lem:perturbation} there is an $x \in X$ near $\beta$ such that
$T\alpha > Tx > \alpha$. This means that $T$ reverses the cyclic order
of the triple $\alpha, x, \beta$. In a similar way, $T\alpha = \beta$
can also be ruled out.
Only the middle inequality remains. Suppose, to the contrary, that
$T\beta > T\alpha$. Lemma \ref{lem:increasing} implies that $T\alpha \le
Tx \le T\beta$, for any $x \in X$ that is strictly between $\alpha$ and
$\beta$. Thus, $\image T$ is a proper subset of $X$. This violates the
minimality assumption.
\end{proof}
For any $x_0, x_1 \in [0,1)$, set
\[
X_{x_0,x_1} = X \cap [x_0,x_1].
\]
\begin{prop}
$T|X$ is monotone increasing on the sets $X_{\alpha,\alpha'}$ and
$X_{\beta',\beta}$. Moreover,
\[
Tx > x \qquad \forall x \in X_{\alpha,\alpha'}
\]
and
\[
Tx < x \qquad \forall x \in X_{\beta',\beta}.
\]
\end{prop}
\begin{proof}
Let $x, y \in X \cap [\alpha,\alpha']$ with $x < y$. Suppose $Ty <
Tx$. Since $y < \beta'$, the definition of $\beta'$ forces $\alpha =
T\beta' < Ty$. Thus, $T$ inverts the cyclic order of $x, y, \beta'$.
This contradiction shows that $T$ is monotonic increasing on
$[\alpha,\alpha']$. A similar argument applies to $X \cap
[\beta',\beta]$.
Next, let $x \in X \cap [\alpha,\alpha']$ and suppose $Tx \le x$. Since
$T$ has no fixed points in $X$, $Tx < x$. As neither $\alpha$ nor $\alpha'$ has
this property, $\alpha < x < \alpha'$ and hence $\alpha < T\alpha \le Tx < x$.
Lemma \ref{lem:increasing} then implies that $X \cap [\alpha,x]$ is a
nonempty, proper, closed invariant subset of $X$. Thus $Tx > x$ for $x
\in X \cap [\alpha,\alpha']$. The last inequality can be verified
similarly.
\end{proof}
\section*{Coding by irrational rotations of the circle}
By the Krylov-Bogolioubov theorem, there is a Borel probability measure
on $X$ that is invariant under $T$. Fix one such, $\mu$. Regard $\mu$ as a
measure on $[0,1)$ and write $\tilde\Phi$ for its cumulative distribution
function. Since every point of the infinite set $X$ has dense orbit, the invariant
probability measure $\mu$ cannot include any point masses. Thus $\tilde\Phi$ is
continuous. As a consequence, $\Phi = \tilde\Phi|_X$ is a continuous, monotone increasing
map from $X$ to $[0,1]$. Finally, because
\[
\tilde\Phi(\alpha) = 0 \qquad \text{and} \qquad \tilde\Phi(\beta) = 1
\]
and the fact that $\tilde\Phi$ is locally constant on the complement of $X$,
$\Phi: X \to [0,1]$ is surjective.
\begin{prop}\label{prop:aboutPhi}
\begin{itemize}
\item If $x_0,x_1 \in X$ with $x_0 < x_1$ and $X \cap (x_0,x_1) = \emptyset$, then
$\Phi(x_0) = \Phi(x_1)$.
\item On the other hand, if $X \cap (x_0,x_1) \ne \emptyset$, then
$\Phi(x_1) > \Phi(x_0)$.
\end{itemize}
\end{prop}
\begin{proof}
The first statement follows directly from the observation that $\mu$ contains no
point masses.
To prove the second statement, first put $U = X \cap (x_0,x_1)$ and
\[
\nu_n(U,x) = \left| \{k: k = 0,\dots,n-1 \text{\ and \ } T^kx \in U \} \right|.
\]
Finally, write $\mathcal{P}: L^2(X,\mu) \to L^2(X,\mu)$ for the projection onto the subspace
of functions left invariant by $T$.
By the Mean Ergodic Theorem, the sequence
\[
\frac {\nu_n(U,x)}n
\]
converges in $L^2(X,\mu)$ to the projection $\mathcal{P}\mathbf{1}_U$.
On the other hand, since $U$ is a non-empty
open set and $(X,T)$ is a minimal dynamical system, there is an $\epsilon > 0$ such that
\[
\liminf_{n\to\infty} \frac {\nu_n(U,x)}n > \epsilon \qquad \text {for all $x \in X$.}
\]
(See, for example, proposition 4.7 in \cite{queff}.)
Hence,
\[
\Phi(x_1) - \Phi(x_0) = \mu(U) \ge \|\mathcal{P}\mathbf{1}_U\| \ge \epsilon > 0.
\]
\end{proof}
A corollary of this is that any non-empty open subset of $X$ has positive $\mu$-measure.
\begin{prop}
\begin{itemize}
\item[i.] There is an irrational number $\theta_0 \in (0,1)$ such
that \[ \Phi(Tx) = \theta_0 + \Phi(x) \qquad \mod 1 \] for
all $x \in X$.
\item[ii.] If $P, Q$ and $R$ are distinct points of $X \subset \mathbb{T}$
with distinct images under $\Phi$ then the arcs $PQR$
and \newline$\Phi(P)\Phi(Q)\Phi(R)$ both have the same orientation in $\mathbb{T}$.
\end{itemize}
\end{prop}
\begin{proof}
Put $\theta_0 = \mu([\beta',\beta])$. If $x \in X \cap
[\alpha,\alpha']$, then the $T$ invariance of the measure $\mu$ implies
that
\begin{align*}
\Phi(Tx) &= \mu([\alpha,Tx]) \\
&= \mu([\alpha,T\beta]) + \mu([T\alpha,Tx]) \\
&= \mu([T\beta',T\beta]) + \mu([T\alpha,Tx]) \\
&= \theta_0 + \mu([\alpha,x]) \\
&= \theta_0 + \Phi(x).
\end{align*}
If $x \in X \cap [\beta',\beta]$, then
\begin{align*}
\Phi(Tx) &= \mu([\alpha,Tx]) \\
&= \mu([T\beta',Tx]) \\
&= \mu([\beta',x]) \\
&= \Phi(x) - \mu([\alpha,\alpha']) \\
&= \Phi(x) + \mu([\beta',\beta]) - 1 \\
&= \Phi(x) + \theta_0 \qquad \mod 1.
\end{align*}
Moreover, $\theta_0$ is irrational. Otherwise, any finite (closed) orbit in $\mathbb{T}$ under
translation by $\theta_0$ would have a non-dense preimage under $\Phi$ that is $T$ invariant.
The second statement follows directly from the definition of $\Phi$.
\end{proof}
If we define $\phi:X \to \mathbb{T}$ by $\phi(x) = \exp(2\pi\imath \Phi(x))$ and $\tau:\mathbb{T} \to \mathbb{T}$
by $\tau(z) = \exp(2\pi\imath \theta_0)z$, the above considerations show that $\phi$ is a
factor map from $(X,T)$ to $(\mathbb{T},\tau)$.
\begin{lem}\label{lem:aboutX_0}
Let $X_0$ be those points $x \in X$ with the property that
for every $\delta > 0$, the intervals $(x,x + \delta)$ and $(x,x - \delta)$
contain infinitely many points of $X$. Then: $X_0$ is a dense, uncountable
subset of $X$.
\end{lem}
\begin{proof}
A point is in the complement of $X_0$ in $X$ precisely when it is the endpoint of a
gap, \emph{i.e.} a maximal open interval contained in $\mathbb{T} \backslash X$. Since there
are at most countably many such open intervals, we conclude that $X \backslash X_0$
is countable. As the perfect set $X$ is uncountable, $X_0$ is uncountable as well.
Furthermore, each singleton is nowhere dense in the perfect set $X$, so the countable
set $X \backslash X_0$ is meager in $X$. By the Baire category theorem, its complement $X_0$ is dense in $X$.
\end{proof}
\begin{thm} The dynamical system $(X,T)$ is uniquely ergodic. \end{thm}
\begin{proof}
Let $x_0 \in X_0$. By proposition \ref{prop:aboutPhi} and lemma \ref{lem:aboutX_0},
\begin{itemize}
\item $x \in X$ and $x < x_0 \Rightarrow \Phi(x) < \Phi(x_0)$, and
\item $x \in X$ and $x > x_0 \Rightarrow \Phi(x) > \Phi(x_0)$.
\end{itemize}
Therefore, for any $y \in X$ and $n \ge 0$,
\[
T^n y \le x_0 \qquad \Leftrightarrow \qquad
\{ \Phi(y) + n \theta_0 \} \le \Phi(x_0).
\]
(Here the braces denote fractional part.) Consequently,
\begin{align*}
\lim_{n\to\infty} \frac {\# \{k: T^k y \le x_0 \text{\ and \ } 0 \le k < n \}}n
&= \lim_{n\to\infty} \frac {\# \left\{k: \{\Phi(y) + k \theta_0\} \le \Phi(x_0) \text{\ and \ } 0 \le k < n \right\}}n \\
&= \Phi(x_0) = \mu( X \cap [\alpha,x_0]).
\end{align*}
The second limit has been evaluated by invoking Weyl's equidistribution theorem.
Since such $x_0$ are dense in $X$, the cumulative distribution of the measure $\mu$ is uniquely
determined by $(X,T)$. In other words, $(X,T)$ is uniquely ergodic.
\end{proof}
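The limit evaluated via Weyl's theorem is easy to check numerically: for irrational $\theta_0$, the visit frequency of the orbit $\{y + k\theta_0\}$ to $[0,c]$ tends to $c$ for every starting point $y$. A small sketch (ours; the golden-mean rotation number is used purely for illustration):

```python
import math

def visit_frequency(theta, c, y=0.0, n=100_000):
    """Fraction of k in {0, ..., n-1} whose fractional part
    {y + k*theta} lies in [0, c]."""
    hits = sum(1 for k in range(n) if (y + k * theta) % 1.0 <= c)
    return hits / n

theta0 = (math.sqrt(5) - 1) / 2   # sample irrational rotation number
# Weyl: the frequency tends to c, regardless of the starting point y
assert abs(visit_frequency(theta0, 0.37) - 0.37) < 0.01
assert abs(visit_frequency(theta0, 0.37, y=0.56) - 0.37) < 0.01
```

The independence of the limit from $y$ is exactly what forces all invariant measures to share one cumulative distribution.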
With this last result, we have completed the proof of theorem \ref{thm:main}.
\section*{The $z \mapsto z^d$ case}
We now specialize to the case where $T: \mathbb{T} \to \mathbb{T}$ is given by
\[
Tx = d\cdot x \ \ \ \mod 1.
\]
In this situation, the inverse image of $0$ consists of the
$d$ points
\[
\xi_k = \frac k{d} \mod 1 \qquad k = 0,...,d-1.
\]
In addition,
$T$ has $d-1$ fixed points:
\[
\eta_k = \frac k{d-1} \mod 1 \qquad k = 0,...,d-2.
\]
We also set $\xi_d = \eta_{d-1} = 1$. Given our blanket conditions on $X$, none of the
$\xi_k$ or $\eta_k$ lie in $X$.
Set $I_k = [\xi_k, \xi_{k+1}]$ for $k = 0,...,d-1$ and note that the interior of $I_k$
consists precisely of those points in $\mathbb{T}$ with a unique $d$-adic expansion that starts with
the digit $k$.
$d$-adic expansion. Moreover, $T$ is monotonic increasing on the interior of each $I_k$.
Each closed interval $I_k$ contains a unique fixed point $\eta_k$. The behavior of $T$ at
these fixed points can be readily determined. In particular, one checks that:
\[
Tx < x \qquad \text{for $\xi_k < x < \eta_k$}
\]
and
\[
Tx > x \qquad \text{for $\eta_k < x < \xi_{k+1}$.}
\]
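These elementary facts about $Tx = dx \bmod 1$ can be verified mechanically; the following sketch (ours) checks that each $\eta_k$ is fixed and that $Tx - x$ has the stated sign on the two halves of each $I_k$:

```python
def T(x, d):
    """The map T x = d * x mod 1."""
    return (d * x) % 1.0

d = 3
xi = [k / d for k in range(d + 1)]     # xi_0, ..., xi_d: preimages of 0
eta = [k / (d - 1) for k in range(d)]  # eta_0, ..., eta_{d-1}: fixed points

for e in eta:
    assert abs(T(e, d) - e % 1.0) < 1e-12   # each eta_k is fixed (mod 1)

# On I_k = [xi_k, xi_{k+1}]: Tx < x for xi_k < x < eta_k,
# and Tx > x for eta_k < x < xi_{k+1}.
for k in range(d):
    if xi[k] < eta[k]:                      # left of the fixed point
        x = (xi[k] + eta[k]) / 2
        assert T(x, d) < x
    if eta[k] < xi[k + 1]:                  # right of the fixed point
        x = (eta[k] + xi[k + 1]) / 2
        assert T(x, d) > x
```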
Let $X_1,...,X_\ell$ be the non-empty sets in the list
\[
X \cap I_0, X \cap I_1, ..., X \cap I_{d-1}.
\]
The indexing can be arranged so $X_i \subset I_{k_i}$ for some $0 \le k_i < d$ with
\[
k_1 < k_2 < \cdots < k_\ell.
\]
Set
\[
\alpha_i = \inf X_i \qquad \text{and} \qquad \beta_i = \sup X_i.
\]
Since $X$ is perfect, $\alpha_i < \beta_i$ for $i = 1,...,\ell$ and $X \cap (\alpha_i,\beta_i)$
is non-empty. Consequently, $\Phi(X_i)$ is the closed interval $[\Phi(\alpha_i),\Phi(\beta_i)]$
and has positive length (see proposition \ref{prop:aboutPhi}). Because $\mu$ has no mass on
the open interval $(\beta_i, \alpha_{i+1})$, $\Phi(\beta_i) = \Phi(\alpha_{i+1})$. Set $t_0 = 0$
and $t_i = \Phi(\beta_i)$ for $i = 1,...,\ell$. Then,
\[
0 = t_0 < t_1 < \cdots < t_\ell = 1
\]
and
\[
\Phi(X_i) = [t_{i-1},t_i] \qquad \text{for $i = 1,...,\ell$.}
\]
Recall a consequence of our previous analysis: every point $x \in X$ with $Tx > x$
must lie to the left of every point $y \in X$ with $Ty < y$. Thus, each
$X_i$ must lie completely to one side of the unique fixed point in $I_{k_i}$.
Moreover, if $\sup X_i < \eta_{k_i}$ and $\inf X_j > \eta_{k_j}$ then $i > j$
and $k_i > k_j$. Let $m$ be the last index $i$ between $1$ and $\ell$ with
$\inf X_i > \eta_{k_i}$. Clearly, $m < \ell$ and $\sup X_m = \alpha'$ and
$\inf X_{m+1} = \beta'$. Hence, $t_m = \Phi(\alpha_{m+1}) = \Phi(\beta_{m}) = -\theta_0$.
We next seek to understand the fibers of $\Phi$. Set
\[
\mathfrak D_0 = \{\omega \in [0,1]: \omega + n \theta_0 \ne t_i \text{\ for any\ }n \ge 0
\text{\ and\ } i = 1,...,\ell\}.
\]
(Note that almost every $\omega \in [0,1]$ is in $\mathfrak D_0$.)
The sequence of points $\omega + n \theta_0$ with $n \ge 0$ determines a sequence of intervals
of the form $[t_k,t_{k+1}]$. This, in turn, implies that the base $d$ expansion of any point in
the fiber of $\omega$ is uniquely determined. Therefore, $\Phi^{-1}(\omega)$ is a singleton.
In summary, this discussion shows how any rotational subset $X$ must arise from
the symbolic flow of an irrational rotation of $\mathbb{T}$ relative to an appropriate partition.
The next section shows that this process can be reversed.
\section*{The Inverse Process}
Let $\theta_0$ be an irrational number in $\mathbb{T}$ and let $\tau: \mathbb{T} \to \mathbb{T}$ denote
rotation by $\theta_0$. Consider a partition of $[0,1]$ into
$\ell \le d$ subintervals with the requirement that one of the interior nodes is $-\theta_0$:
\[
0 = t_0 < t_1 < \cdots < t_m < t_{m+1} < \cdots < t_\ell = 1
\]
and $t_m = - \theta_0$. Set $J_k = [t_{k},t_{k+1}]$ for $k = 0,...,\ell-1$. Next, select a
coding that maps $\{0,...,\ell-1\}$ to the set of digits $\{0,...,d-1\}$. More precisely,
choose integers $k_0,...,k_{\ell-1}$ that satisfy
\[
0 \le k_0 < k_1 < \cdots < k_{\ell-1} \le d-1.
\]
We will show that this data determines a rotational subset of $\mathbb{T}$ that
inverts the process described in previous section.
Let $\mathfrak{D}_0$ be the set of $\omega \in [0,1]$ satisfying
$\tau^n(\omega) = \omega + n \theta_0 \ne t_i \mod 1$ for all $n \in \mathbb{N}_0$ and $i = 0,...,\ell$.
In other words, $\mathfrak{D}_0$ consists of those points of
$\mathbb{T} \backslash \{t_0, t_1, ..., t_{\ell-1}\}$ whose forward orbit doesn't contain any of
the nodes $t_i$.
\begin{prop}
\begin{itemize}
\item[i.] The complement of $\mathfrak{D}_0$ in $\mathbb{T}$ is countable.
\item[ii.] For every $t \in \mathbb{T}$, there is an integer $n \ge 0$ with the property
that $\tau^n t \in \mathfrak D_0$.
\end{itemize}
\end{prop}
\begin{proof}
The map $\tau$ is invertible. The complement of $\mathfrak D_0$ is just the
countable set
\[
\{\tau^{-k} t_i: i = 0,...,\ell \text{\ and\ } k \ge 0 \}.
\]
For the proof of the second claim, fix $t \in \mathbb{T}$. If the orbit of $t$ hits
the set $\{ t_i: i = 0,...,\ell\}$ infinitely often, then there must be an
index $i$ such that
\[
t + n \theta_0 = t_i \mod 1
\]
for infinitely many $n \in \mathbb{N}$. This means that there are two distinct, positive
integers $n_0, n_1$ with the property that
\[
(n_1 - n_0) \theta_0 = 0 \mod 1.
\]
But this contradicts the condition that $\theta_0$ is irrational. Hence the orbit
of $t$ hits the nodes only finitely often, and so it must eventually lie completely in $\mathfrak D_0$.
\end{proof}
The trajectory of any point $\omega \in \mathfrak D_0$ can be encoded by an infinite string,
\begin{equation}\label{eqn:about_E}
E(\omega) = a_0a_1a_2...
\end{equation}
where $a_n = k_i$ precisely when $\omega + n \theta_0 \mod 1$ is in $J_i$. Note
that $E(\tau(\omega))$ has the shifted expansion $a_1a_2...$. The
Kronecker approximation theorem implies that each of the digits $k_i$, $i =
0,...,\ell-1$ occurs infinitely often. In particular, we may unambiguously
interpret $E(\omega)$ as the $d$-adic expansion of a unique real number in the
open unit interval $]0,1[$.
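As a concrete illustration of the coding $E$, the shift relation noted above can be checked numerically. The sketch below is our own (not from the paper): it assumes $d = 2$, the golden-mean rotation angle, and the two-interval partition with interior node $1 - \theta_0$, and computes the digits to finite precision.

```python
from math import sqrt

d = 2
theta = (sqrt(5) - 1) / 2              # assumed irrational rotation angle (golden mean)
t1 = 1.0 - theta                       # interior node t_1 = -theta mod 1
digits = [0, 1]                        # coding k_0 < k_1

def E(omega, n_digits=40):
    """d-adic number whose n-th digit records which J_i contains omega + n*theta (mod 1)."""
    x, val, scale = omega, 0.0, 1.0 / d
    for _ in range(n_digits):
        i = 0 if x < t1 else 1         # J_0 = [0, t_1), J_1 = [t_1, 1)
        val += digits[i] * scale
        scale /= d
        x = (x + theta) % 1.0
    return val

# E(tau(omega)) carries the shifted expansion, i.e. equals d*E(omega) mod 1
omega = 0.123
assert abs(E((omega + theta) % 1.0) - (d * E(omega)) % 1.0) < 1e-9
```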
\begin{prop}
The map $E: \mathfrak{D}_0 \to \mathbb{T}$ is continuous, injective, and monotone increasing, and satisfies
\begin{equation}\label{eqn:dynMap}
E(\tau(\omega)) = T( E(\omega)).
\end{equation}
\end{prop}
\begin{proof}
Kronecker's theorem also implies injectivity of $E$. Let $\omega$ and $\omega'$
be distinct points in $\mathfrak D_0$. Set $\delta = \omega - \omega'$
and note that there must be an $s$ with the property that $s$ and $s' =
s + \delta$ lie in the interior of different $J_i$'s. We may then
choose $n \in \mathbb{N}_0$ with the property that $\tau^n \omega$ and $\tau^n
\omega' = \tau^n \omega + \delta$ approximate $s$ and $s'$,
respectively. If the error is sufficiently small then $\tau^n\omega$
and $\tau^n\omega'$ will be in different $J_i$'s. Hence the $d$-adic
encoding of $\omega$ and $\omega'$ are different. Since neither of
these encodings can end with an infinite string of $d-1$'s, $E(\omega)
\ne E(\omega')$.
Equation \ref{eqn:dynMap} follows directly from equation \ref{eqn:about_E} and
the ensuing discussion.
Let $\omega \in \mathfrak D_0$ and $\omega_n$ a sequence in $\mathfrak D_0$
that converges to $\omega$. Then, since $\tau^k \omega$ is in the
interior of one of the $J_i$, $\tau^k \omega_n$ must eventually have
this property as well. Thus, $E$ must be continuous.
It only remains to prove that $E$ is monotone increasing. Let $\omega, \omega'
\in \mathfrak{D}_0$ with $\omega < \omega'$. Write $E(\omega) =
a_0a_1a_2...$ and $E(\omega') = a'_0a'_1a'_2...$. Because $E$ is
injective, there is a first index $i \ge 0$ for which $a_i \ne a'_i$.
If $i = 0$, $\omega < \omega'$ implies $a_0 < a'_0$. This in turn
yields $E(\omega) < E(\omega')$. If $i > 0$, then $a_{i-1} = a'_{i-1}$
entails that both $\tau^{i-1}\omega$ and $\tau^{i-1}\omega'$ are in the
interior of the same $J_k$. Since $\tau$ is increasing on the interior
of any $J_n$, we must have $a_i < a'_i$. As a consequence, $E(\omega) <
E(\omega')$ in this case as well.
\end{proof}
Write $\mu = E_\ast\mathcal{L}$ for the image of Lebesgue measure on $\mathbb{T}$ under $E$.
By equation \ref{eqn:dynMap}, $\mu$ is invariant under $T$. Since $E$ is injective
and $\mathcal{L}$ has no point masses, neither does $\mu$; that is, $\mu$ is a
continuous (atom-free) measure.
Write $X_0$ for the image of $\mathfrak D_0$ under $E$ and set $X = \overline{X_0}$.
Since $X_0$ is invariant under $T:\mathbb{T} \to \mathbb{T}$, so is its closure $X$. The Borel measure
$\mu$ is supported on $X$. Define a map $\Phi: X \to \mathbb{T}$ by
\[
\Phi(x) = \mu([\alpha,x])
\]
where $\alpha = \min X$. If $x = E t$ for some $t \in \mathfrak D_0$,
\begin{align*}
\Phi(x) & = \mu([\alpha,x]) \\
& = \mathcal{L}( E^{-1} [\alpha,x]) \\
& = \mathcal{L}( [0,t]) = t.
\end{align*}
It follows that $E:\mathfrak D_0 \to X_0$ and $\Phi|_{X_0}$ are inverses.
Thus $\Phi$ is an equivariant map from $X_0$ to $\mathfrak D_0$, i.e.\ $\Phi \circ T = \tau \circ \Phi$ on $X_0$.
The continuity of $\Phi$ then implies that this relation holds on all of $X$ as well.
\begin{thm}
The dynamical system $(X,T)$ is rotational.
\end{thm}
\begin{proof}
We can now prove the minimality of $(X,T)$. Let $x \in X$ and
write $t = \Phi(x)$. We would like to prove that the orbit of $x$ is
dense in $X$. We know that there is a positive integer $N$ with the
property that for all $n \ge N$, $\Phi(T^n x) = t + n \theta_0 \mod 1$
is in $\mathfrak D_0$. The continuity of $E$ together with the density
of the set $\{t + n \theta_0: n \ge N \}$ in $\mathbb{T}$ implies that $\{T^n
x: n \ge N\}$ is dense in $X_0$ and hence in $X$. Consequently, $(X,T)$
is minimal. Finally, since $(\mathbb{T},\tau)$ preserves cyclic order and
$\Phi$ is monotonic, it is easy to check that $(X,T)$ preserves
cyclic order. It follows that $(X,T)$ is rotational and that
$(\mathbb{T},\tau)$ is its canonical factor.
\end{proof}
\section*{Examples for a class of continuous maps}
In this section, we show that infinite rotational sets exist in a certain class
of continuous transformations on $\mathbb{T}$. Members of this class can have
arbitrarily many fixed points and hence will not, in general, be conjugate to
the transformations treated in the previous two sections.
Here as before, we parametrize $\mathbb{T}$ by the half-open interval $[0,1[$.
Fix an integer $d > 1$, and let
\[
0 = x_0 < x_1 < \cdots < x_d = 1
\]
be a partition of the unit interval. Consider a continuous transformation $T: \mathbb{T} \to
\mathbb{T}$ that, for each $k = 0,...,d-1$, satisfies
\begin{itemize}
\item[i.] $T$ is monotone increasing on the half-open interval $[x_k,x_{k+1})$, and
\item[ii.] $T(x_k) = 0$ and $\lim_{x\to x_{k+1}^-} T(x) = 1$.
\end{itemize}
We set up a standard symbolic encoding for $T$ in terms of infinite strings
in the alphabet $\mathcal{A} = \{0, 1,...,d-1\}$. In particular, for each
finite (non-empty) word $I = i_1i_2...i_n$, let
\[
A_I = \{ x \in [0,1[\ : \forall k = 1,...,n,\ T^{k-1}(x) \in [x_{i_k},x_{i_k+1}[ \ \}.
\]
We collect some useful remarks that are easy to check.
\begin{rem}
\begin{itemize}
\item[i.] The $A_I$ are half-open intervals that are closed on the left.
\item[ii.] If the finite word $I$ is a prefix for the word $J$, then $A_J \subset A_I$.
\item[iii.] For any fixed $I$ and $n \ge |I|$, the collection
\[
\{ A_J: |J| = n \ \text{and\ $I$ is a prefix for $J$}\ \}
\]
is a partition of $A_I$.
\item[iv.] If $I = i_1i_2...i_n$ and $I' = i_2...i_n$, then
\[
x \in A_I \implies Tx \in A_{I'}.
\]
\item[v.] If $I$ precedes $J$ in lexicographical order, then
\[
x \in A_I\ \text{and}\ y \in A_J \implies x < y.
\]
\end{itemize}
\end{rem}
Each point $x \in \mathbb{T}$ determines a unique infinite word $\iota(x) \in \mathcal{A}^{\mathbb{N}}$.
Since, by construction,
\begin{equation}\label{eqn:factmap}
\iota \circ T = S \circ \iota,
\end{equation}
where $S$ denotes the shift map on $\mathcal{A}^{\mathbb{N}}$, the system $(\mathcal{A}^{\mathbb{N}},S)$ is a Borel measurable factor of $(\mathbb{T},T)$.
\begin{rem}\label{rem:lexord}
The lexicographical ordering on $\mathcal{A}^{\mathbb{N}}$ and the usual order on
$[0,1[$ are also compatible with the factor map $\iota$. In particular, for any
$x, y \in [0,1[$,
\begin{itemize}
\item[i.] $x < y$ implies $\iota(x) \le \iota(y)$, and
\item[ii.] $\iota(x) < \iota(y)$ implies $x < y$.
\end{itemize}
\end{rem}
\begin{lem}\label{lem:symbcoding}
If $I \in \mathcal{A}^{\mathbb{N}}$
doesn't terminate with an infinite sequence of $d-1$'s, there is an $x
\in [0,1[$ with the property that $\iota(x) = I$.
\end{lem}
\begin{proof}
Let $I = i_1i_2...$ and set $I_n = i_1...i_n$ for each $n \in
\mathbb{N}$. It is enough to show that
\[
\bigcap_{n=1}^\infty A_{I_n} \ne \emptyset.
\]
The hypothesis entails that for any $I_n$ there is an $m > n $ with the
property that $\overline{A_{I_m}} \subset A_{I_n}$. This in turn means
that
\[
\bigcap_{n=1}^\infty A_{I_n} = \bigcap_{n=1}^\infty \overline{A_{I_n}}.
\]
The claim follows since a decreasing sequence of bounded, closed
intervals must have a non-empty intersection.
\end{proof}
\begin{lem}\label{lem:contsymb}
Let $x_0 \in \mathbb{T}$ be a point with the property that $\iota(x_0)$ doesn't
terminate with an infinite sequence of $0$'s. Then $x_0$ is a point of
continuity for the map $\iota: \mathbb{T} \to \mathcal{A}^{\mathbb{N}}$.
\end{lem}
\begin{proof}
Write $I = i_1i_2...$ for $\iota(x_0)$ and let $I_n =
i_1...i_n$. The hypothesis entails that $x_0$ is not the left endpoint
of any of the $A_{I_n}$. In other words, $x_0$ lies in the
interior of all the $A_{I_n}$. Suppose $x_k \to x_0$ as $k \to \infty$.
For any $n \in \mathbb{N}$, $x_k \in A_{I_n}$ for all sufficiently large $k$.
For such $k$, $\iota(x_k)$ will have $I_n$ as a prefix.
\end{proof}
In view of the analysis of the previous section, we may select an infinite
rotational subset $X \subset \mathbb{T}$ for the transformation $T_0: x \mapsto d\cdot
x \mod 1$ that is an extension for an irrational rotation of $\mathbb{T}$ by $\theta_0
\in \mathbb{R}/\mathbb{Z}$. By mapping each point of $X$ to its $d$-adic expansion, we may
embed $X$ continuously in $\mathcal{A}^{\mathbb{N}}$. (The transformation $T_0$ is
just the restriction of the standard shift map on $\mathcal{A}^{\mathbb{N}}$ to $X$.)
Since no point of $X$ has a $d$-adic expansion that terminates in an infinite
sequence of $d-1$'s, Lemma \ref{lem:symbcoding} provides an $a \in \mathbb{T}$ with
$\iota(a) \in X$. The orbit of $a$,
\[
\mathcal{O}(a) = \{ T^i a: i \ge 0 \}
\]
is invariant under $T$. By remark \ref{rem:lexord} and equation
\ref{eqn:factmap}, $T$ preserves cyclic order. One can check that the same facts
extend to the closure $Y = \overline{\mathcal{O}(a)}$.
\begin{lem}
Every point of $Y$ is a point of continuity for $\iota$.
\end{lem}
\begin{proof}
By lemma \ref{lem:contsymb}, it suffices to show that for every $y \in
Y$, $\iota(y)$ does not end with an infinite sequence of $0$'s. If this
is not the case then by applying $T$ sufficiently many times we obtain
a point $y \in Y$ for which $\iota(y) = 000...$. In particular,
the orbit of $y$ lies in the set $\iota^{-1}(000...)$. The
analysis in the previous section implies that there exist two points
$u,v \in X$ whose order is reversed by $S$. Since $\iota(a)$ has dense
orbit in $X$, there are integers $n, m \in \mathbb{N}$ such that $\iota(T^na)$
and $\iota(T^ma)$ are so close to $u$ and $v$ that their order is
reversed by $S$ as well. By remark \ref{rem:lexord}, $T^na$ and $T^ma$
have their orders reversed by $T$. Consequently, $T$ does not preserve
the cyclic order of $y, T^na$ and $T^ma$. This contradiction proves
that every point of the closed set $Y$ is a point of continuity of
$\iota$.
\end{proof}
Now let $Y_0$ be a minimal subset of the dynamical system $(Y,T)$. Since
$\iota|_{Y_0}$ is continuous, $\iota(Y_0)$ is a compact, invariant subset of the
minimal set $X$. Thus, $\iota(Y_0) = X$. Since $X$ is an infinite factor of
$Y_0$, $Y_0$ is not finite. With a little more care, it is not hard to show
that the rotation number of any $y \in Y_0$ with respect to $T$ is $\theta_0$.
It is natural to ask at this point if rotational subsets of $\mathbb{T}$ always exist
with respect to any continuous transformation of the circle with degree $d > 1$.
| {
"timestamp": "2017-12-19T02:02:33",
"yymm": "1705",
"arxiv_id": "1705.03851",
"language": "en",
"url": "https://arxiv.org/abs/1705.03851",
"abstract": "A rotational subset, relative to a continuous transformation $T: \\mathbb{T} \\to \\mathbb{T}$ on the unit circle, is a closed, invariant subset of $\\mathbb{T}$ that is minimal and on which $T$ respects the standard orientation of the unit circle. In the case where $T$ is the standard angle doubling map, such subsets were studied by Bullet and Sentenac. The case where $T$ multiplies angles by an integer $d > 2$ was studied by Goldberg and Tresser, and Blokh, Malaugh, Mayer, Oversteegen, and Parris. These authors prove that infinite rotational subsets arise as extensions of irrational rotations of the unit circle. In this paper, we prove that such a structure theorem holds for the wider class of continuous transformations $T$ with finite fibers. Our methods are more squarely analytic in nature than the works mentioned, and hence of interest even in the cases treated by the works mentioned above. The paper concludes with an exposition of those cases from the point of view taken here.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Rotational subsets of the circle"
} |
https://arxiv.org/abs/0812.4977 | Decay of mass for nonlinear equation with fractional Laplacian | The large time behavior of nonnegative solutions to the reaction-diffusion equation $\partial_t u=-(-\Delta)^{\alpha/2}u - u^p,$ $(\alpha\in(0,2], p>1)$ posed on $\mathbb{R}^N$ and supplemented with an integrable initial condition is studied. We show that the anomalous diffusion term determines the large time asymptotics for $p>1+{\alpha}/{N},$ while nonlinear effects win if $p\leq1+{\alpha}/{N}.$ | \section{Introduction}
We study the behavior, as $t\to\infty$, of solutions to the
following initial value problem for the reaction-diffusion equation with
the anomalous diffusion
\begin{eqnarray}\label{eq}
\partial_t u &=& -\Lambda^\alpha u + \lambda u^p,\qquad x\in\mathbb{R}^N,t>0,\\
u(x,0) &=& u_0(x),\label{ini}
\end{eqnarray}
where the pseudo-differential operator
$\Lambda^\alpha=(-\Delta)^{\alpha/2}$ with $0<\alpha\leq2$ is
defined by the Fourier transformation:
$
\widehat{\Lambda^\alpha u}(\xi)=|\xi|^\alpha\widehat{u}(\xi).
$
Moreover, we assume that $\lambda\in\{-1,1\}$ and $p>1$.
Nonlinear evolution problems involving fractional Laplacian
describing {\it the ano\-ma\-lous diffusion} (or $\alpha$-stable
L\'evy diffusion) have been extensively studied in the mathematical
and physical literature (see \cite{BKW01, KW08, 6} for references).
One of possible ways to understand the interaction between the
anomalous diffusion operator (given by $\Lambda^\alpha$ or, more
generally, by the L\'evy diffusion operator) and the nonlinearity in
the equation under consideration is the study of the large time
asymptotics of solutions to such equations. Our goal is to
contribute to this theory and our results can be summarized as
follows. For $\lambda=-1$ in equation \rf{eq}, nonnegative solutions
to the Cauchy problem exist globally in time. Hence, we study the
decay properties of the mass $M(t)= \int_{\mathbb{R}^N}u(x,t)\,dx$
of the solutions $u=u(x,t)$ to problem \rf{eq}-\rf{ini}. We prove
that $ \lim_{t\to\infty}M(t)=M_\infty >0$ for $p>1+{\alpha}/{N}$
(cf. Theorem 1, below), while
$M(t)$ tends to zero as $t\to\infty$
if $1<p\leq 1+\alpha/N$ (cf. Theorem 2).
As a by-product of our analysis, we show the blow-up of all
nonnegative solutions to \rf{eq}-\rf{ini} with $\lambda=1$ in the case of the
critical nonlinearity exponent $p=1+\alpha/N$ (see Theorem 3, below).
The idea of expressing the competition between the diffusive and the nonlinear terms
in an evolution equation through the large time behavior of the space integral
of a solution was already introduced by Ben-Artzi \& Koch \cite{BK99}
who considered
the viscous Hamilton-Jacobi equation $u_t=\Delta u-|\nabla u|^p$ (see also Pinsky \cite{21}).
An analogous result
for the equation $u_t=\Delta u+|\nabla u|^p$
(with the growing-in-time mass of solutions) was proved
by Lauren\c cot \& Souplet \cite{LS03}. Such questions concerning
the asymptotic behavior of solutions to the Hamilton-Jacobi equation
with the L\'evy diffusion operator were answered in \cite{KW08}.
In the case of the classical reaction-diffusion equation
({\it i.e.}~equation \rf{eq} with $\alpha=2$), for $p<1+{2}/{N}$,
Fujita \cite{8} proved the nonexistence of nonnegative global-in-time
solution for any nontrivial initial condition.
On the other hand,
if $p>1+{2}/{N},$ global solutions do exist for any
sufficiently small nonnegative initial data.
The proof of a blow-up of all nonnegative solutions in the critical case
$p=1+{2}/{N}$ was completed in \cite{11,22,17}.
Analogous blow-up results for problem \rf{eq}-\rf{ini}
with the fractional Laplacian
(and with the critical exponent $p=1+\alpha/N$
for the existence/nonexistence of solutions)
are contained {\it e.g.} in \cite{22,9,10, BLW02}.
\section{Statement of results}
In all theorems below, we always assume that $u=u(x,t)$ is the nonnegative
(possibly weak)
solution of problem \rf{eq}-\rf{ini} corresponding to the nonnegative initial datum
$u_0\in L^1(\mathbb{R}^N).$ Let $u_0\not\equiv 0$, for simplicity of the exposition.
We refer the reader to \cite{6} for several results on the existence, the uniqueness
and the regularity of solutions to \rf{eq}-\rf{ini} as well as for the proof of the maximum principle
(which assures that the solution is nonnegative if the corresponding
initial datum is so).
First, we deal with the equation \rf{eq} containing the absorbing
nonlinearity $(\lambda=-1)$ and we study the decay of the ``mass''
\begin{equation}\label{mass}
M(t)\equiv \int_{\mathbb{R}^N}u(x,t)\,dx=\int_{\mathbb{R}^N}u_0(x)\,dx -\int_0^t\int_{\mathbb{R}^N}u^p(x,s)\,dxds.
\end{equation}
\noindent{\bf Remark.~}In order to obtain equality \rf{mass}, it
suffices to integrate equation \rf{eq} with respect to $x$ and $t$.
Another method which leads to \rf{mass} and which requires weaker
regularity assumptions on a solution consists in integrating with
respect to $x$ the integral formulation of problem \rf{eq}-\rf{ini}
(see \rf{3.8}, below) and using the Fubini theorem. {\hfill$\Box$}
\bigskip
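The monotone decay of the mass for $\lambda=-1$ can also be observed numerically. The sketch below is purely illustrative and ours, not part of the paper: it discretizes the local case $\alpha=2$, $N=1$, $p=2$ with an explicit finite-difference scheme (grid and time step are our own choices, satisfying the stability condition $\Delta t/h^2\le 1/2$) and checks that the discrete mass is non-increasing.

```python
import numpy as np

# Explicit scheme for u_t = u_xx - u^p (the local case alpha = 2, N = 1, lambda = -1).
L, nx, dt, steps, p = 20.0, 401, 1e-3, 2000, 2
x = np.linspace(-L / 2, L / 2, nx)
h = x[1] - x[0]                        # dt / h^2 = 0.4 <= 1/2, stable
u = np.exp(-x ** 2)                    # nonnegative integrable initial datum
masses = []
for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h ** 2
    u = u + dt * (lap - u ** p)        # the absorption term removes mass
    masses.append(float(u.sum() * h))

# the discrete analogue of M(t) is non-increasing, as in (mass)
assert all(m2 <= m1 for m1, m2 in zip(masses, masses[1:]))
assert masses[-1] < masses[0]
```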
Since we limit ourselves to nonnegative solutions, the function
$M(t)$ defined in \rf{mass} is nonnegative and non-increasing. Hence, the limit
$M_\infty = \lim_{t\rightarrow\infty}M(t)$ exists and we answer
the question whether it is equal to zero or not.
In our first theorem, the diffusion phenomena determine the large time
asymptotics of solutions to \rf{eq}-\rf{ini}.
\begin{theorem}
Assume that $u=u(x,t)$ is a nonnegative nontrivial solution of
\rf{eq}-\rf{ini} with $\lambda=-1$ and $p>1+{\alpha}/{N}.$ Then
$\lim_{t\to\infty}M(t)=M_\infty >0$.
Moreover, for all $q\in[1,\infty)$
\begin{equation}\label{2.3}
t^{\frac{N}{\alpha}\left(1-\frac{1}{q}\right)}\|u(t)-M_\infty
P_\alpha(t)\|_q\to
0\quad\hbox{as}\quad t\to\infty,
\end{equation}
where the function $P_\alpha(x,t)$ denotes the fundamental
solution of the linear equation $u_t + \Lambda^\alpha u = 0$
(cf. equation \rf{3.3} below).
\end{theorem}
In the remaining range of $p$, the mass $M(t)$ converges to zero and
this phenomenon can be interpreted as the domination of nonlinear
effects in the large time asymptotics of solutions to \rf{eq}-\rf{ini}.
Note here that the mass $M(t)=\int_{\mathbb{R}^N}u(x,t)\,dx$
of every solution to linear equation
$u_t+\Lambda^\alpha u=0$ is constant in time.
\begin{theorem}
Assume that $u=u(x,t)$ is a nonnegative solution of problem
\rf{eq}-\rf{ini} with $\lambda=-1$ and $1< p\leq1+{\alpha}/{N}.$
Then
$\lim_{t\rightarrow\infty}M(t)=0$.
\end{theorem}
Let us emphasize that the proof of Theorem 2 is based on the
so-called rescaled test function method, which was
used by Mitidieri \& Pokhozhaev ({\it cf.~e.g.}~\cite{19,20} and
the references therein) to prove
the nonexistence of solutions to nonlinear elliptic and parabolic equations.
As a by-product of our analysis, we can also contribute to the
theory on the blow-up of solutions to \rf{eq}-\rf{ini} with
$\lambda=+1.$ Recall that the method of the rescaled test function
(which we also apply here) was used in
\cite{9,10} to show the blow-up of all positive
solutions to \rf{eq}-\rf{ini} with $\lambda=1$ and
$p<1+{\alpha}/{N}.$ Here, we complete
that result by the simple proof of the blow-up in
the critical case $p=1+{\alpha}/{N}.$
\begin{theorem}
If $\lambda=1,$ $\alpha \in (0,2]$ and $ p = 1+{\alpha}/{N},$
then any nonnegative nonzero solution of \rf{eq}-\rf{ini}
blows up
in a finite time.
\end{theorem}
\section{Proofs of Theorems 1, 2, and 3}
Note first that any (sufficiently regular) nonnegative solution to \rf{eq}-\rf{ini}
satisfies
\begin{equation}\label{3.1}
0\leq\int_{\mathbb{R}^N}u(x,t)\,dx=\int_{\mathbb{R}^N}u_0(x)\,dx + \lambda\int_0^t\int_{\mathbb{R}^N}u^p(x,s)\,dx\,ds.
\end{equation}
\noindent Hence, for $\lambda=-1$ and $u_0\in L^1(\mathbb{R}^N),$ we
immediately obtain
\begin{equation}\label{3.2}
u\in L^\infty([0,\infty),L^1(\mathbb{R}^N))\cap
L^p(\mathbb{R}^N\times(0,\infty)).
\end{equation}
\bigskip
{\it Proof of Theorem 1.}
First, we recall that the fundamental solution
$P_\alpha=P_\alpha(x,t)$ of the linear equation $\partial_t
u+\Lambda^\alpha u=0$ can be written via the Fourier transform
as follows
\begin{equation}\label{3.3}
P_\alpha(x,t)=t^{-N/\alpha}P_\alpha(xt^{-1/\alpha},1)=\frac{1}{(2\pi)^{N/2}}\int_{\mathbb{R}^N}e^{ix\cdot\xi-t|\xi|^\alpha}\,d\xi.
\end{equation}
It is well-known that for each $\alpha\in(0,2],$ this function
satisfies
\begin{equation}\label{3.4}
P_\alpha(1)\in L^\infty(\mathbb{R}^N)\cap
L^1(\mathbb{R}^N),\quad
P_\alpha(x,t)\geq0,\quad\int_{\mathbb{R}^N}P_\alpha(x,t)\,dx=1,
\end{equation}
\noindent for all $x\in\mathbb{R}^N$ and $t>0.$ Hence, using the
Young inequality for the convolution and the self-similar form of
$P_\alpha,$ we have
\begin{eqnarray}
\|P_\alpha(t)\ast u_0\|_p&\leq&Ct^{-N(1-1/p)/\alpha}\|u_0\|_1,\label{3.5}\\
\|\nabla P_\alpha(t)\|_p&=&Ct^{-N(1-1/p)/\alpha-1/\alpha},\label{3.6}\\
\|P_\alpha(t)\ast u_0\|_p&\leq& \|u_0\|_p,\label{3.7}
\end{eqnarray}
for all $p\in[1,\infty]$ and $t>0.$
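The kernel $P_\alpha$ and the properties \rf{3.4} can be probed numerically. The sketch below is our own illustration in one dimension; it uses the non-symmetric Fourier convention with prefactor $1/(2\pi)$ (under which the kernel has unit mass), and cross-checks $\alpha=1$ against the closed-form Cauchy kernel and $\alpha=2$ against the Gaussian.

```python
import numpy as np

def P(x, t, alpha, xi_max=100.0, n=200001):
    """Numerically invert the Fourier transform e^{-t|xi|^alpha} in one dimension,
    with the convention P(x,t) = (1/2pi) * integral of e^{i x xi - t |xi|^alpha}."""
    xi = np.linspace(-xi_max, xi_max, n)
    f = np.exp(-t * np.abs(xi) ** alpha) * np.cos(x * xi)  # real part suffices by symmetry
    hstep = xi[1] - xi[0]
    trap = (f.sum() - 0.5 * (f[0] + f[-1])) * hstep        # trapezoid rule
    return trap / (2 * np.pi)

# alpha = 1: the closed form is the Cauchy kernel t / (pi (t^2 + x^2))
for xv in (0.0, 0.5, 2.0):
    assert abs(P(xv, 1.0, 1.0) - 1.0 / (np.pi * (1.0 + xv * xv))) < 1e-6

# alpha = 2: the Gaussian, with P(0, 1) = 1 / (2 sqrt(pi))
assert abs(P(0.0, 1.0, 2.0) - 1.0 / (2.0 * np.sqrt(np.pi))) < 1e-6
```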
In the next step, using the following well-known integral representation of
solutions to \rf{eq}-\rf{ini}
\begin{equation}\label{3.8}
u(t)= P_\alpha(t)\ast u_0 -\int_0^tP_\alpha(t-s)\ast
u^p(s)\,ds,
\end{equation}
we immediately obtain the estimate $0\leq u(x,t)\leq
(P_\alpha(t)\ast u_0)(x).$ Hence, by \rf{3.5} and \rf{3.7} we get
\begin{eqnarray}\label{3.9}
\|u(t)\|^p_p &\leq& \|P_\alpha(t)\ast u_0\|_p^p\nonumber\\
&\leq& \min\left\{C t^{-N(p-1)/\alpha}\|u_0\|_1^p ; \|u_0\|_p^p\right\}\equiv H(t,p,\alpha,u_0).
\end{eqnarray}
Now, for fixed $\varepsilon\in(0,1],$ we consider the solution
$u^\varepsilon=u^\varepsilon(x,t)$ of \rf{eq}-\rf{ini} with the initial
condition $\varepsilon u_0(x).$ The comparison principle implies
that $0\leq u^\varepsilon(x,t)\leq u(x,t)$ for every
$x\in\mathbb{R}^N$ and $t>0.$ Hence, it suffices to show that for
small $\varepsilon>0,$ which will be determined later, we have
\begin{equation*}\label{3.10}
M^\varepsilon_\infty\equiv\lim_{t\rightarrow\infty}\int_{\mathbb{R}^N}u^\varepsilon(x,t)\,dx >0.
\end{equation*}
Note first that, using equality $(\ref{3.1})$ in the case of the
solution $u^\varepsilon,$ we obtain
\begin{equation}\label{3.11}
M^\varepsilon_\infty = \varepsilon\left\{\int_{\mathbb{R}^N}u_0(x)\,dx -\frac{1}{\varepsilon} \int_0^\infty\int_{\mathbb{R}^N}\left(u^\varepsilon(x,t)\right)^p\,dx\,dt\right\}.
\end{equation}
Now, we apply $(\ref{3.9})$ with $u$ replaced by
$u^\varepsilon$. Observe that the function $H$ defined in
\rf{3.9} satisfies
$H(t,p,\alpha,\varepsilon
u_0) = \varepsilon^p H(t,p,\alpha,u_0).$ Hence
\begin{eqnarray*}
\frac{1}{\varepsilon}\int_0^\infty\int_{\mathbb{R}^N}
\left(u^\varepsilon (x,t)\right)^p\,dxdt
&\leq&\frac{1}{\varepsilon}\int_0^\infty H(t,p,\alpha,\varepsilon u_0)\,dt \\
&=&\varepsilon^{1-p}\int_0^\infty H(t,p,\alpha,u_0)\,dt.
\end{eqnarray*}
It follows immediately from the definition of the
function $H$
that the integral on the right-hand side is convergent
for $p>1+{\alpha}/{N}.$ Consequently,
$$
\frac{1}{\varepsilon} \int_0^\infty\int_{\mathbb{R}^N}\left(u^\varepsilon(x,t)\right)^p\,dx\,dt\to
0\qquad\hbox{as}\quad\varepsilon\searrow 0,
$$
and the constant $M^\varepsilon_\infty$ given by
$(\ref{3.11})$ is positive for sufficiently small $\varepsilon>0.$
From now on, the proof of the asymptotic relation \rf{2.3} is standard;
hence, we shall be brief with the details. First we recall that for every
$u_0\in L^1(\mathbb{R}^N)$ we have
\begin{equation}\label{L1:lin}
\lim_{t\to\infty} \|P_\alpha(t)*u_0-MP_\alpha(t)\|_1=0,
\end{equation}
where $M=\int_{\mathbb{R}^N}u_0(x)\,dx$. This is an immediate consequence
of a Taylor expansion argument combined with an
approximation argument. Details of this reasoning can be found in
\cite[Lemma 3.3]{BKW01}.
Now, to complete the proof of Theorem 1, we adopt the reasoning from \cite{LS03}.
It follows from the
integral equation $(\ref{3.8})$ and inequality \rf{3.7} with $p=1$ that
\begin{equation*}\label{3.14}
\|u(t)-P_\alpha(t-t_0)\ast u(t_0)\|_1\leq \int_{t_0}^t
\|u(s)\|_p^p\,ds\quad\hbox{for all}\quad t\geq t_0\geq 0.
\end{equation*}
Hence, using the triangle inequality we infer
\begin{equation}\label{3.15}
\begin{split}
\|u(t)-M_\infty P_\alpha(t)\|_1 \leq& \int_{t_0}^t
\|u(s)\|_p^p\,ds\\
&+\|P_\alpha(t-t_0)\ast
u(t_0)-M(t_0)P_\alpha(t-t_0)\|_1\\
&+\|M(t_0)(P_\alpha(t-t_0)-P_\alpha(t))\|_1\\
&+
\|P_\alpha(t)\|_1\left|M(t_0)-M_\infty\right|.
\end{split}
\end{equation}
Applying first \rf{3.4} and \rf{L1:lin} with $u_0=u(t_0)$, and next
passing to the limit as $t\to\infty$ on the right-hand side of
\rf{3.15}, we obtain
$$\limsup_{t\to\infty}\|u(t) - M_\infty
P_\alpha(t)\|_1\leq\int_{t_0}^\infty
\|u(s)\|_p^p\,ds + \left|M(t_0)-M_\infty\right|.$$
By letting $t_0$ go to $+\infty$ and using \rf{3.2} we conclude
that
\begin{equation}\label{3.16}
\|u(t) - M_\infty
P_\alpha(t)\|_1\to 0\quad\quad\hbox{as} \quad
t\to\infty.
\end{equation}
In order to obtain the asymptotics \rf{2.3} for every $q>1$, observe that by the
integral equation \rf{3.8} and estimate \rf{3.5},
for each $m\in[1,\infty],$ we have
\begin{equation}\label{3.17}
\|u(t)\|_m\leq\|P_\alpha(t)\ast
u_0\|_m\leq Ct^{-N(1-1/m)/\alpha}\|u_0\|_1.
\end{equation}
Hence, for every $q\in[1,m),$ using the H\"older inequality,
we obtain
\begin{equation*}\label{3.18}
\begin{split}
\|u(t)-M_\infty P_\alpha(t)\|_q&\leq \|u(t)-M_\infty
P_\alpha(t)\|_1^{1-\delta}\left(\|u(t)\|_m^\delta + \|M_\infty P_\alpha(t)\|_m^\delta\right)\\
&\leq Ct^{-N(1-1/q)/\alpha}\|u(t)-M_\infty
P_\alpha(t)\|_1^{1-\delta},
\end{split}
\end{equation*}
with $\delta=(1-1/q)/(1-1/m).$
Finally, applying \rf{3.16} we complete the proof of Theorem~1.
{\hfill$\Box$}
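The exponent bookkeeping in the final interpolation step can be verified symbolically. An illustrative sketch of our own:

```python
import sympy as sp

q, m, N = sp.symbols('q m N', positive=True)
delta = (1 - 1/q) / (1 - 1/m)          # interpolation weight from the last display

# Hoelder interpolation ||f||_q <= ||f||_1^(1-delta) * ||f||_m^delta
# requires 1/q = (1 - delta) * 1 + delta / m:
assert sp.simplify((1 - delta) + delta/m - 1/q) == 0

# and the decay rate matches: N(1 - 1/m) * delta = N(1 - 1/q)
assert sp.simplify(N * (1 - 1/m) * delta - N * (1 - 1/q)) == 0
```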
\bigskip
{\it Proof of Theorem 2.}
Let us define the function
$\varphi(x,t)=\left(\varphi_1(x)\right)^\ell\left(\varphi_2(t)\right)^\ell$
where
$$
\ell=\frac{2p-1}{p-1},\quad
\varphi_1(x)=\psi\left(\frac{|x|}{BR}\right), \quad
\varphi_2(t)=\psi\left(\frac{t}{R^\alpha}\right), \quad R>0,
$$
and
$\psi$ is a smooth non-increasing function on $[0,\infty)$ such that
$$\psi(r)=\left\{\begin {array}{l}1\qquad \quad \mbox {if }0\leq r\leq 1,\\
0\qquad \quad \mbox {if }r\geq 2.
\end {array}\right.$$
The constant $B>0$ in the definition of $\varphi_1$ is fixed and
will be chosen later. In
fact, it plays some role in the critical case $p=1+{\alpha}/{N}$
only while in the subcritical case $p<1+\alpha/N$ we simply put
$B=1$.
In the following, we denote by $\Omega_1$ and $\Omega_2$
the supports of $\varphi_1$ and $\varphi_2,$ respectively:
\begin{equation*}\label{3.19}
\Omega_1=\left\{x\in\mathbb{R}^N \;:\; |x|\leq
2BR\right\},\qquad\Omega_2=\left\{t\in[0,\infty)\;:\; t\leq
2R^\alpha\right\}.
\end{equation*}
Now, we multiply equation \rf{eq} by
$\varphi(x,t)$ and integrate with respect to $x$ and $t$ to obtain
\begin{eqnarray}
&&\hspace{-2cm}\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx -\int_{\Omega_2}\int_{\Omega_1}u^p(x,t) \varphi(x,t)\,dxdt\nonumber\\
&=&\int_{\Omega_2}\int_{\mathbb{R}^N}u(x,t)\varphi_2(t)^\ell \Lambda^\alpha(\varphi_1(x))^\ell\,dxdt\nonumber\\
&&-\int_{\Omega_2}\int_{\Omega_1}u(x,t) \varphi_1(x)^\ell\partial_t\varphi_2(t)^\ell\,dxdt\label{3.20}\\
&\leq& \ell\int_{\Omega_2}\int_{\Omega_1}u(x,t)\varphi_2(t)^\ell\varphi_1(x)^{\ell-1}\Lambda^\alpha\varphi_1(x)\,dxdt\nonumber\\
&&- \ell\int_{\Omega_2}\int_{\Omega_1}u(x,t) \varphi_1(x)^\ell\varphi_2(t)^{\ell-1}\partial_t\varphi_2(t)\,dxdt.\nonumber
\end{eqnarray}
In \rf{3.20}, we have used the inequality $ \Lambda^\alpha
\varphi_1^\ell \leq\ell\varphi_1^{\ell-1}\Lambda^\alpha\varphi_1, $
(see \cite[Prop. 2.3]{5} and \cite[Prop. 3.3]{13} for its proof)
which is valid for all $\alpha\in(0,2]$, $\ell\geq 1,$ and any
sufficiently regular, nonnegative, decaying at infinity function
$\varphi_1$.
Hence, by the $\varepsilon$-Young inequality
$ab\leq \varepsilon
a^p + C(\varepsilon)b^{\ell -1}$ (note that $1/p+1/(\ell-1)=1$)
with $\varepsilon>0,$ we deduce from \rf{3.20}
\begin{equation}\label{3.21}
\begin{split}
&\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx-(1+2\ell\varepsilon)
\int_{\Omega_2}\int_{\Omega_1}u^p(x,t) \varphi(x,t)\,dx\,dt\\
& \leq
C(\varepsilon)\ell \left\{\int_{\Omega_2}\int_{\Omega_1}\varphi_1
\varphi_2^{\ell}\left|\Lambda^\alpha \varphi_1\right|^{\ell-1}\,dxdt
+\int_{\Omega_2}\int_{\Omega_1}\varphi_1^{\ell}\varphi_2
\left|\partial_t\varphi_2\right|^{\ell -1}\,dxdt\right\}.
\end{split}
\end{equation}
Recall now that the functions $\varphi_1$ and $\varphi_2$
depend on $R>0.$ Hence changing the variables $\xi=R^{-1}x$ and
$\tau=R^{-\alpha}t,$ we easily obtain from \rf{3.21} the
following estimate
\begin{equation}\label{3.22}
\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx -(1+2\ell \varepsilon)\int_{\Omega_2}\int_{\Omega_1}u^p(x,t) \varphi(x,t)\,dxdt\leq
C R^{N+\alpha-\alpha(\ell-1)},
\end{equation}
where the constant $C$ on the right hand side of \rf{3.22} is independent
of $R$. Note that
$N+\alpha-\alpha(\ell-1)\leq0$ if and only if $ p\leq
1+{\alpha}/{N}.$ Now, we consider two cases.
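The equivalence just stated is elementary algebra: with $\ell=(2p-1)/(p-1)$ one has $N+\alpha-\alpha(\ell-1)=(N(p-1)-\alpha)/(p-1)$, which for $p>1$ is nonpositive exactly when $p\le 1+\alpha/N$. A symbolic sanity check (our own, not part of the proof):

```python
import sympy as sp

p, N, alpha = sp.symbols('p N alpha', positive=True)
ell = (2*p - 1) / (p - 1)                       # the exponent l used in the test function
expo = N + alpha - alpha * (ell - 1)            # power of R on the right-hand side of (3.22)

# expo = (N(p-1) - alpha)/(p-1); for p > 1 this is <= 0 iff p <= 1 + alpha/N
assert sp.simplify(expo - (N * (p - 1) - alpha) / (p - 1)) == 0
```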
For $p<1+{\alpha}/{N},$ we have $N+\alpha-\alpha(\ell-1)<0.$
Hence, computing the
limit $R\to\infty$ in \rf{3.22} and using the
Lebesgue dominated convergence theorem, we obtain
$$
M_\infty=\int_{\mathbb{R}^N}u_0(x)\,dx
-\int_0^\infty\int_{\mathbb{R}^N}u^p(x,t)\,dxdt
\leq 2\ell\varepsilon\int_0^\infty\int_{\mathbb{R}^N}u^p\,dxdt.
$$
Since $u\in L^p(\mathbb{R}^N\times (0,\infty))$ (cf. \rf{3.2}) and
since
$\varepsilon>0$ can be chosen arbitrarily small, we immediately
obtain that $M_\infty=0.$
In the critical case $p=1+{\alpha}/{N}$, we estimate the first term
on the right hand side of inequality \rf{3.20} again by the
$\varepsilon$-Young inequality and the second term by
the H\"{o}lder inequality (with $\bar p =p/(p-1)=\ell-1$)
as follows
\begin{eqnarray}
&&\hspace{-1cm}\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx -\int_{\Omega_2}\int_{\Omega_1}u^p\varphi(x,t)\,dxdt\nonumber\\
&\leq&\ell\varepsilon\int_{\Omega_2}\int_{\Omega_1}u^p(x,t)\,dxdt \nonumber\\
&&+C(\varepsilon)\int_{\Omega_2}\int_{\Omega_1}\varphi_2^{\ell\bar{p}}(t)\varphi_1^{(\ell-1)\bar{p}}(x)\left|\Lambda^\alpha\varphi_1(x)\right|^{\bar{p}}\,dxdt\label{3.23}\\
&&+\ell\left(\int_{\Omega_3}\int_{\Omega_1}u^p(x,t)\,dxdt\right)^{1/p}\nonumber\\
&&\quad \times \left(\int_{\Omega_2}\int_{\Omega_1}\varphi_1^{\ell\bar{p}}(x)\varphi_2^{(\ell-1)\bar{p}}(t)\left|\partial_t\varphi_2(t)\right|^{\bar{p}}\,dxdt\right)^{1/\bar{p}}.\nonumber
\end{eqnarray}
Here,
$\Omega_3=\left\{t\in[0,\infty) \;:\; R^\alpha\leq t\leq
2R^\alpha\right\}$
is the support of $\partial_t\varphi_2.$ Note that
$$
\int_{\Omega_3}\int_{\Omega_1}u^p(x,t)\,dx\,dt\to 0
\quad\hbox{as}\quad R\to\infty,$$
because $u\in L^p(\mathbb{R}^N\times[0,\infty))$ (cf. \rf{3.2}).
Now, introducing the new variables
$\xi=(BR)^{-1}x$, $\tau=R^{-\alpha}t$ and recalling that $p=1+\alpha/N$,
we rewrite \rf{3.23} as follows
\begin{equation}\label{3.24}
\begin{split}
\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx
&-\int_{\Omega_2}\int_{\Omega_1}u^p(x,t)\varphi(x,t)\,dxdt
-\varepsilon\ell\int_{\Omega_2}\int_{\Omega_1}u^p(x,t)\,dxdt\\
&\leq C_1B^{N/\bar p}\left(\int_{\Omega_3}\int_{\Omega_1}u^p(x,t)\,dxdt\right)^{1/p}+C_2C(\varepsilon)B^{-\alpha},
\end{split}
\end{equation}
where the constants $C_1$, $C_2$ are independent of $R$, $B$, and
$\varepsilon$. Passing in \rf{3.24} to the limit as $R\to+\infty$
and using the Lebesgue dominated convergence theorem we get
\begin{equation}\label{3.25}
\begin{split}
\int_{\mathbb{R}^N}u_0(x)\,dx&-\int_0^\infty\int_{\mathbb{R}^N}u^p(x,t)\,dxdt
-\varepsilon\ell \int_0^\infty\int_{\mathbb{R}^N}u^p(x,t)\,dxdt\\
&\leq C_2C(\varepsilon)B^{-\alpha}.
\end{split}
\end{equation}
Finally, computing the limit $B\to\infty$ in \rf{3.25}
we infer that $M_\infty=0$ because
$\varepsilon>0$ can be arbitrarily small.
This completes the proof of Theorem 2. {\hfill$\Box$}
\bigskip
{\it Proof of Theorem 3.}
The proof proceeds by contradiction. Let $u$ be a non-negative
non-trivial solution of \rf{eq}-\rf{ini} with $\lambda=1$, and take
the same test function $\varphi$ as in the proof of Theorem 2.
Repeating the estimates which lead to ($\ref{3.24}$), we obtain
\begin{equation}\label{3.26}
\begin{split}
&\int_{\Omega_1}u_0(x)\varphi(x,0)\,dx
+\int_{\Omega_2}\int_{\Omega_1}u^p(x,t) \varphi(x,t)\,dxdt\\
&\qquad\qquad-\varepsilon\ell\int_{\Omega_2}\int_{\Omega_1}u^p(x,t)\,dxdt\\
&\leq C_1B^{N/\bar p}\left(\int_{\Omega_3}\int_{\Omega_1}u^p(x,t)\,dxdt\right)^{1/p}+ C_2C(\varepsilon)B^{-\alpha}.
\end{split}
\end{equation}
Now, we choose $\varepsilon=1/(2\ell)$ in \rf{3.26} and pass to
the following
limits: first $R\to\infty$, then $B\to\infty$.
Using the Lebesgue dominated convergence theorem, we obtain
$$\int_{\mathbb{R}^N}u_0(x)\,dx + \frac{1}{2}\int_0^\infty\int_{\mathbb{R}^N}u^p(x,t)\,dxdt\leq 0.
$$
Hence, $u(x,t)=0$, which contradicts the assumptions imposed
on $u.$ {\hfill$\Box$}
| {
"timestamp": "2008-12-29T22:45:11",
"yymm": "0812",
"arxiv_id": "0812.4977",
"language": "en",
"url": "https://arxiv.org/abs/0812.4977",
"abstract": "The large time behavior of nonnegative solutions to the reaction-diffusion equation $\\partial_t u=-(-\\Delta)^{\\alpha/2}u - u^p,$ $(\\alpha\\in(0,2], p>1)$ posed on $\\mathbb{R}^N$ and supplemented with an integrable initial condition is studied. We show that the anomalous diffusion term determines the large time asymptotics for $p>1+{\\alpha}/{N},$ while nonlinear effects win if $p\\leq1+{\\alpha}/{N}.$",
"subjects": "Analysis of PDEs (math.AP)",
"title": "Decay of mass for nonlinear equation with fractional Laplacian",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232894783427,
"lm_q2_score": 0.721743200312399,
"lm_q1q2_score": 0.7093460262896585
} |
https://arxiv.org/abs/1605.04829 | The degree of commutativity and lamplighter groups | The degree of commutativity of a group $G$ measures the probability of choosing two elements in $G$ which commute. There are many results studying this for finite groups. In [AMV17], this was generalised to infinite groups. In this note, we compute the degree of commutativity for wreath products of the form $\mathbb{Z}\wr \mathbb{Z}$ and $F\wr \mathbb{Z}$ where $F$ is any finite group. | \section{Introduction}
Let $F$ be a finite group. Then the degree of commutativity of $F$, denoted $\dc(F)$, is the probability of choosing two elements in $F$ which commute, i.e.\
\[\dc(F):=\frac{|\{(a, b) \in F^2 : ab=ba\}|}{|F|^2}.\]
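For a concrete illustration (not part of the original note), the following Python sketch brute-forces this definition for the symmetric group $F=S_3$; since $S_3$ has $3$ conjugacy classes, $\dc(S_3)=3/6=1/2$.

```python
from itertools import permutations

def compose(p, q):
    # (p * q)(i) = p(q(i)) for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the symmetric group S_3, |G| = 6
commuting = sum(1 for a in G for b in G if compose(a, b) == compose(b, a))
dc = commuting / len(G) ** 2  # = 18/36 = 1/2
```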
This definition was generalised to infinite groups in \cite{dcA} in the following way. Let $G$ be a finitely generated group and let $S$ be a finite generating set for $G$. Let $|g|_S$ denote the length of $g$ with respect to the generating set $S$ i.e.\ the infimum of all word lengths of words in $S$ which represent $g$. For any $n \in \mathbb{N}$, let $\mathbb{B}_S(n):=\{g \in G : |g|_S\le n\}$, the ball of radius $n$ in the Cayley graph of $G$ with respect to the generating set $S$. Then the degree of commutativity of $G$ with respect to $S$, as defined in \cite{dcA}, is
\begin{align}\label{dcdefn}
\limsup_{n\rightarrow\infty}\frac{|\{(a, b) \in \mathbb{B}_S(n)^2 : ab=ba\}|}{|\mathbb{B}_S(n)|^2}
\end{align}
and is denoted by $\dc_S(G)$. They also pose an intriguing conjecture.
\begin{conj*}\cite[Conj. 1.6]{dcA} Let $G$ be a finitely generated group, and let $S$ be a finite generating set
for $G$. Then: (i) $\dc_S(G)>0$ if and only if $G$ is virtually abelian; and (ii) $\dc_S(G) > 5/8$
if and only if $G$ is abelian.
\end{conj*}
They verify this conjecture for hyperbolic groups and groups of polynomial growth (for an introduction to the growth of groups, see \cite{introtogrowth}). In this note we will investigate the conjecture for groups which are wreath products.
Perhaps the best known examples of infinite wreath products are the lamplighter groups $C\wr \mathbb{Z}$ where $C$ is cyclic. In this note we investigate such groups and from this also obtain a result for groups of the form $F\wr \mathbb{Z}$ where $F$ is finite. Such groups are natural to investigate with respect to the conjecture since they have exponential growth and yet all elements in the base of $C\wr \mathbb{Z}$ commute. We obtain the following results.
\begin{thmR} \label{mainthm} Let $G=C\wr\mathbb{Z}$, where $C$ is a non-trivial cyclic group. Let $S$ be a generating set for $G$ of size 2, consisting of a generator of $\mathbb{Z}$ and a generator of $C_i$ for some $i \in \mathbb{Z}$. Then $\dc_S(G)=0$.
\end{thmR}
\begin{thmR} \label{mainthm2} Let $F$ be a finite group. Then there is a finite generating set $S$ of $F\wr \mathbb{Z}$ such that $\dc_S(F\wr \mathbb{Z})=0$.
\end{thmR}
Note that the groups of Theorem \ref{mainthm2} include the first known examples of non-residually finite groups with degree of commutativity 0, since it is currently open as to whether there exists a non-residually finite hyperbolic group. We now introduce wreath products from an algebraic viewpoint, but will provide intuition (using permutations) below.
\begin{defn*} Given groups $G$ and $H$, the \emph{unrestricted wreath product} of $G$ and $H$ has elements consisting of an element $h \in H$ and a function $f: H\rightarrow G$. Let $B'$ be the set of all such functions. If $f_1, f_2 \in B'$, then $(f_1\times f_2)(h):=f_1(h)\cdot f_2(h)$
for all $h \in H$, where $\cdot$ denotes the binary operation of $G$. Moreover if $k \in H$ then $(k^{-1}fk)(h):=f(hk^{-1})$ for all $h \in H$. This is equal to the semidirect product $B'\rtimes H$. The \emph{restricted wreath product}, denoted $G\wr H$, is defined analogously as the semidirect product $B\rtimes H$ where $H$ is the \emph{head} of $G\wr H$ and $B$, the \emph{base} of $G\wr H$, is the subgroup of $B'$ consisting of functions with finite support i.e.\ functions $f \in B'$ such that $f(h)\ne1$ for only finitely many $h$. Since the base is a direct sum of $|H|$ copies of $G$, for any $h \in H$ let $G_h$ denote the copy of $G$ corresponding to $h$.
\end{defn*}
It may be useful to provide some of the intuition used when thinking about lamplighter groups i.e.\ groups of the form $C\wr \mathbb{Z}$ where $C$ is cyclic. Each of these groups acts naturally on the corresponding set $C\times \mathbb{Z}$. We shall picture $C$ as addition modulo $n$ if $|C|=n$ and as $\mathbb{Z}$ otherwise. Hence $C=\{0, 1, \ldots, n-1\}$ or $C=\mathbb{Z}$. A well used generating set is $\{a_0, t\}$ where $\Supp(a_0)=\{(0,0), (1,0), \ldots, (n-1, 0)\}$ and $\Supp(t)=C\times \mathbb{Z}$ with $t:(m,n)\rightarrow (m,n+1)$ for all $m \in C$ and $n \in \mathbb{Z}$. In the case where $|C|=2$, the base of $C\wr \mathbb{Z}$ can be thought of as a countable collection of street lamps, with each lamp having an `off' or `on' setting. If $2<|C|<\infty$, then we can consider each `lamp' to have a finite number of settings (possibly corresponding to different levels of brightness). In the case of $\mathbb{Z}\wr \mathbb{Z}$, the base can be thought of as lamps, where each lamp has an associated `voltage' which takes a value in $\mathbb{Z}$. Although this intuition will not be taken any further, it can also be seen to apply to subgroups of $\mathbb{R} \wr \mathbb{R}$.
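The lamp picture can be made concrete in code. The following Python sketch (illustrative only; it fixes one common convention for the semidirect-product action) models an element of $C_2\wr \mathbb{Z}$ as a pair: the set of lit lamps and the position of the lamplighter.

```python
# C_2 wr Z: element = (frozenset of lit lamp positions, shift k in Z)
def mul(x, y):
    f, k = x
    g, l = y
    # (f, k)(g, l) = (f XOR (g shifted by k), k + l)
    return (f ^ frozenset(p + k for p in g), k + l)

a0 = (frozenset({0}), 0)   # toggle the lamp at position 0
t = (frozenset(), 1)       # move the lamplighter one step
tinv = (frozenset(), -1)

# base elements commute with each other ...
lamp3 = (frozenset({3}), 0)
assert mul(a0, lamp3) == mul(lamp3, a0)
# ... but a0 and t do not: conjugating a0 by t toggles the lamp at position 1
assert mul(a0, t) != mul(t, a0)
assert mul(mul(t, a0), tinv) == (frozenset({1}), 0)
```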
\begin{rem*} In the case where $G$ is finite, it is well known that
\[\dc(G)=\frac{\#\;\text{conjugacy classes of}\;G}{|G|}.\]
One could therefore define the degree of commutativity for any finitely generated infinite group with respect to a finite generating set $S$ to be
\[\limsup_{n\rightarrow \infty}\frac{\#\;\text{conjugacy classes in}\;\mathbb{B}_S(n)}{|\mathbb{B}_S(n)|}.\]
Such a limit superior may not be a genuine limit. Note that this definition is closely related to the conjugacy growth function of $G$, which was introduced in \cite{introtoconjgrowth} and studied, for example, in \cite{sapirconjgrowth} and \cite{osinconjgrowth}.
\end{rem*}
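As a sanity check on the class-counting formula in the remark above (again illustrative, not part of the note), one can verify in Python that $\dc(Q_8)=5/8$ for the quaternion group, the borderline value in part (ii) of the conjecture.

```python
# Quaternion group Q_8 as 2x2 complex matrices, stored as flat tuples (a,b,c,d)
# for [[a,b],[c,d]].
I = (1, 0, 0, 1)
i = (1j, 0, 0, -1j)
j = (0, 1, -1, 0)
k = (0, 1j, 1j, 0)

def mul(m, n):
    a, b, c, d = m
    e, f, g, h = n
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

def neg(m):
    return tuple(-x for x in m)

Q8 = [I, neg(I), i, neg(i), j, neg(j), k, neg(k)]

def inv(m):
    return next(x for x in Q8 if mul(m, x) == I)

# conjugacy classes as orbits under conjugation: {1}, {-1}, {+-i}, {+-j}, {+-k}
classes = {frozenset(mul(mul(g, x), inv(g)) for g in Q8) for x in Q8}
commuting = sum(1 for a in Q8 for b in Q8 if mul(a, b) == mul(b, a))
dc = commuting / 64
assert dc == len(classes) / 8 == 5 / 8
```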
Two questions then present themselves.
\begin{quR} With this definition for degree of commutativity, does the conjecture above (from \cite{dcA}) hold?
\end{quR}
\begin{quR} Does this definition for the degree of commutativity coincide with (\ref{dcdefn}) above?
\end{quR}
The author is unaware of such questions having been posed before, and these questions are not discussed further in this note.
\vspace{0.25cm}
\noindent\textbf{Acknowledgements.} This work would not have been completed without the guidance of my PhD supervisor, Armando Martino. I also thank the other authors of \cite{dcA} for a paper filled with so many ideas.
\section{Proving Theorem \ref{mainthm}}
The key result we shall draw upon is the following. For the group $G=H\wr \mathbb{Z}$ we shall use the base of $H\wr \mathbb{Z}$ as the set $N$.
\begin{lem}\cite[Lem. 3.1]{dcA}\label{mainlem} Let $G$ be a finitely generated group, and let $S$ be a finite generating system
for $G$. Suppose that there exists a subset $N \subseteq G$ satisfying the following conditions:
\begin{enumerate}[i)]
\item $N$ is $S$-negligible, i.e.\, $\lim_{n\rightarrow\infty}\frac{|N \cap\mathbb{B}_S(n)|}{|\mathbb{B}_S(n)|}=0$;
\item $\lim_{n\rightarrow\infty}\frac{|C_G(g) \cap \mathbb{B}_S(n)|}{|\mathbb{B}_S(n)|}=0$ uniformly in $g \in G \setminus N$.
\end{enumerate}
Then, $\dc_S(G) = 0$.
\end{lem}
\begin{rem*}
Throughout we will restrict ourselves to generating sets which are the union of a generator of $\mathbb{Z}$ and a generating set for $G_i$ for some fixed $i \in \mathbb{Z}$.
\end{rem*}
\subsection{Proving that groups $C\wr\mathbb{Z}$ satisfy (ii) of Lemma \ref{mainlem}}
This is the simpler of the two conditions to prove for such groups. We first introduce the translation lengths of a group. For more discussions on these, see \cite{connertranslationnumbers} and the references therein.
\begin{defn} Let $G$ be a finitely generated group with finite generating set $S$ and let $g \in G$. Then $\tau_S(g):=\limsup_{n\rightarrow \infty}\frac{|g^n|_{_S}}{n}$, the \emph{translation length} of $g$. Let $F(G)$ denote the set of non-torsion elements in $G$. If there is a finite generating set $S'$ of $G$ such that $\{\tau_{S'}(g):g \in F(G)\}$ is uniformly bounded away from 0, then we say that $G$ is \emph{translation discrete}. If a group is translation discrete with respect to one finite generating set, it is translation discrete with respect to all generating sets (see \cite[Lem. 2.6.1]{translationdiscreteproof}).
\end{defn}
We shall use the following.
\begin{lem} Let $G$ be finitely generated, $S$ a finite generating set for $G$, and $|\mathbb{B}_S(n)|\ge f(n)$ for all $n \in \mathbb{N}$, where $f$ is a polynomial of degree 2. Let $N\subseteq G$. If (i) $C_G(g)$ is cyclic for all $g \in G\setminus N$; and (ii) the translation lengths of $G$ are uniformly bounded away from 0, then $\lim_{n\rightarrow\infty}\frac{|C_G(g) \cap \mathbb{B}_S(n)|}{|\mathbb{B}_S(n)|}=0$ uniformly in $g \in G \setminus N$.
\end{lem}
\begin{proof}
This argument can be found within the proof of \cite[Thm. 1.7]{dcA}. From (ii), there exists a constant $\lambda \in \mathbb{R}$ such that $\tau_S(g)\ge 1/\lambda$ for all $g \in G$.
Let $h \in G$. By (i), $C_G(h)=\langle g\rangle$ for some $g \in G$. We now consider how $C_G(h)\cap \mathbb{B}_S(n)$ grows with respect to $n$. If $g^k \in C_G(h)\cap \mathbb{B}_S(n)$, then $|g^k|_{_S}\le n$ and $|g^k|_{_S}\ge |k|\tau_S(g)\ge |k|/\lambda$. Thus $|k|\le \lambda n$ and
\[|C_G(h)\cap \mathbb{B}_S(n)|\le 2\lambda n+1.\]
Hence, since $\mathbb{B}_S(n)$ grows faster than any linear function, the claim follows.
\end{proof}
We must therefore show the two conditions in this lemma are satisfied. Note that they are independent of the choice of finite generating set used.
\begin{defn} Let $A$ denote the base of $G=H\wr \mathbb{Z}$ where $H$ is a finitely generated group. If $g \in A$, then $g=\prod_{i \in I}g_i$ where $I$ is a finite subset of $\mathbb{Z}$ and $g_i \in H_i$ for each $i \in I$. Now $g_{\min}:=\inf\{I\}$ and $g_{\max}:=\sup\{I\}$, the infimum and supremum of $I$, respectively.
\end{defn}
\begin{lem} Let $G:=H\wr \mathbb{Z}$ and let $A$ denote the base of $G$. If $g \in A$, then $C_G(g)\le A$ (and if $H$ is abelian, then $C_G(g)=A$). If $g \in G\setminus A$, then $C_G(g)$ is cyclic.
\end{lem}
\begin{proof} The first claim is clear. For the second, let $g \in G\setminus A$, so that $g=wt^k$ for some $w \in A$ and $k \in \mathbb{Z}\setminus \{0\}$. Now, for any $v \in A$,
\begin{align}\label{eqn1-wreathcentralisers}
&v^{-1}wt^kv=wt^k\nonumber\\
\Leftrightarrow&v^{-1}wt^kvt^{-k}=w\nonumber\\
\Leftrightarrow&t^kvt^{-k}=w^{-1}vw
\end{align}
and so, if $v$ is non-trivial, then $(w^{-1}vw)_{\min}>(t^kvt^{-k})_{\min}$ and so $v \not\in C_G(wt^k)$.
Now assume that $vt^{\alpha}\in C_G(wt^k)$. If $v't^{\alpha} \in C_G(wt^k)$, then $v't^{\alpha}(vt^{\alpha})^{-1}=v'v^{-1}$ and so by (\ref{eqn1-wreathcentralisers}), $v'v^{-1}=1$ i.e.\ $v'=v$. Thus for each $s \in \mathbb{Z}$ such that $vt^s \in C_G(wt^k)$ there is no $v'\ne v$ such that $v't^s \in C_G(wt^k)$. Now assume that $\alpha$ is the smallest positive integer such that there exists a $v \in A$ with $vt^{\alpha} \in C_G(wt^k)$. If, for some $\beta \in \mathbb{Z}$ there is a $u \in A$ such that $ut^\beta \in C_G(wt^k)$, then, by the division algorithm, $\beta=n\alpha$ for some $n \in \mathbb{Z}$. Thus $ut^\beta=(vt^{\alpha})^n$ since for each $s \in \mathbb{Z}$ there is at most one $\nu \in A$ such that $\nu t^s \in C_G(wt^k)$.
\end{proof}
\begin{lem} Let $G=H\wr \mathbb{Z}$ where $H$ is a finitely generated group and let $A$ denote the base of $G$. Then $\{\tau_S(g) : g \in G\setminus A\}$ is uniformly bounded away from 0 i.e.\ $G$ is translation discrete.
\end{lem}
\begin{proof} Let $S_H$ denote a finite generating set for $H_0$. We work with the generating set $S:=S_H\cup\{t\}$ of $G$.
If $g \in G\setminus A$, then $g=wt^k$ where $w \in A$ and $k \in \mathbb{Z}\setminus\{0\}$. Thus for any $n \in \mathbb{N}$, $|g^n|_{_S}\ge |k|n\ge n$ and so $\tau_S(g)\ge 1$.
\end{proof}
Let $H$ be finitely generated with $\tau_S(H)\subseteq\mathbb{N}\cup \{0\}$ for some finite generating set $S$. Then one can prove, with $S'$ as a finite generating set consisting of the generating set $S$ for $H_0$ and a generator of the head of $H\wr \mathbb{Z}$, that $\tau_{S'}(H\wr \mathbb{Z})=\mathbb{N}\cup\{0\}$ and that $\tau_{S'}^{-1}(0)$ is equal to $\{w \in \bigoplus_{i \in I}H_i\mid I$ is a finite subset of $\mathbb{Z}$ and $w$ is torsion$\}$. Moreover, if we drop the condition on the translation lengths of $H$ and let $A$ denote the base of $H\wr \mathbb{Z}$, then $\tau_{S'}(H\wr\mathbb{Z}\setminus A)=\mathbb{N}$.
\subsection{Proving that groups $C\wr\mathbb{Z}$ satisfy (i) of Lemma \ref{mainlem}}\label{conditioni}
The author is unaware of how to show that the negligibility of a set is independent of the generating set used. When working with groups of exponential growth, it seems that the `density' of a set $A\subset G$ may depend on the choice of generating set. Here, density is thought of as the number
\[\limsup_{n\rightarrow\infty}\frac{|N \cap\mathbb{B}_S(n)|}{|\mathbb{B}_S(n)|}\]
(so that a set is negligible if and only if it has density 0). Note that if the negligibility of a set is independent of the finite generating set used, then the results that follow would apply to any finite generating set.
\begin{rem*}
We shall work with the generating set $\{a_0, t\}$ where $a_0$ is a generator of $C_0$ and $t$ is a generator of $\mathbb{Z}$. The arguments also work for $C_i$ for any $i \in \mathbb{Z}$.
\end{rem*}
Essentially we reduce counting the number of elements in the base in $\mathbb{B}_S(n)$ to known results regarding the number of possible compositions of a number.
\begin{defn} A multiset, denoted $[\ldots]$, is a collection of objects where repeats are allowed e.g.\ $[1,2,2,3,5]$. An ordered multiset, denoted $[\ldots]_{\ord}$, is a multiset with a given ordering. Thus $[1,2,2,3,5]_{\ord}\ne[1,2,3,2,5]_{\ord}$.
\end{defn}
\begin{defn} Let $n \in \mathbb{N}$. Then a composition of $n$ is an ordered collection of natural numbers that sum to $n$. Thus there is a natural correspondence between compositions of $n$ and ordered multisets whose elements lie in $\mathbb{N}$ and sum to $n$. A weak composition of $n$ is an ordered collection of non-negative integers that sum to $n$. There is a natural correspondence between weak compositions of $n$ and ordered multisets whose elements lie in $\mathbb{N}\cup \{0\}$ and sum to $n$.
\end{defn}
The following are well known.
\begin{lem}
Let $n \in \mathbb{N}$. Then the number of compositions of $n$ is $2^{n-1}$.
\end{lem}
\begin{proof}
Write $n$ as a row of $n$ ones separated by $n-1$ boxes, where each box will be filled with either a plus or a comma:
\[[1\square1\square1\ldots1\square1]_{\ord}.\]
Each of the $2^{n-1}$ choices of pluses and commas yields a unique ordered multiset of elements of $\mathbb{N}$ summing to $n$, i.e.\ a unique composition of $n$.
\end{proof}
\begin{lem}\label{combinatorialresult} Let $n \in \mathbb{N}$. Then the number of weak compositions of $n$ into exactly $k$ parts is given by the binomial coefficient
\[\begin{pmatrix}
n+k-1\\k-1
\end{pmatrix}.\]
\end{lem}
\begin{proof}
From the previous proof the number of compositions of $n$ into exactly $k$ parts is given by the number of ways of placing exactly $k-1$ commas into $n-1$ boxes i.e.\
\[\begin{pmatrix}
n-1\\k-1
\end{pmatrix}.\]
Now, each composition of $n+k$ into $k$ parts can be thought of as a weak composition of $n$ into $k$ parts by mapping $k$ element multisets which sum to $n+k$ and consist of natural numbers to $k$ element multisets which sum to $n$ and consist of non-negative integers i.e.\ the map $[m_1, m_2, \ldots, m_k]_{\ord}\mapsto [m_1-1, m_2-1, \ldots, m_k-1]_{\ord}$.
\end{proof}
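Both counts are easy to confirm by brute force. The following Python sketch (illustrative only) enumerates compositions via the box argument above and checks the two lemmas for small parameters.

```python
from itertools import product
from math import comb

def compositions(n):
    # choose '+' or ',' in each of the n-1 boxes between n ones
    result = []
    for choices in product([True, False], repeat=n - 1):  # True = comma
        parts, current = [], 1
        for cut in choices:
            if cut:
                parts.append(current)
                current = 1
            else:
                current += 1
        parts.append(current)
        result.append(tuple(parts))
    return result

n = 7
comps = compositions(n)
assert len(comps) == 2 ** (n - 1)
assert all(sum(c) == n for c in comps)

# weak compositions of n into k parts <-> compositions of n+k into k parts
k = 3
weak = [tuple(p - 1 for p in c) for c in compositions(n + k) if len(c) == k]
assert len(weak) == comb(n + k - 1, k - 1)
assert all(sum(w) == n and min(w) >= 0 for w in weak)
```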
We are now ready to prove our first theorem.
\begin{reptheorem}{mainthm}
Let $G=C\wr \mathbb{Z}$ and let $S=\{a, t\}$ be a generating set for $G$ with $a \in C_i$ (for some $i \in \mathbb{Z}$) and $t$ a generator of $\mathbb{Z}$. Then the base $A$ of $G$ is negligible in $G$.
\end{reptheorem}
\begin{proof} Fix an $n \in \mathbb{N}$. Our aim is to produce a bound for $|\mathbb{B}_S(n)\cap A|$. For discussions on normal forms for elements of $C\wr \mathbb{Z}$, see \cite{lamplighternormalform}. Let $k \le {n}/{2}$. If $|g|_{_S}=n$ and $g_{\min}\ge0$, then there is a word of length $n$ of the form below which represents $g$
\begin{align}\label{elementsofthebase}
w^{(0)}t^{-1}w^{(1)}t^{-1}\ldots w^{(k-1)}t^{-1}w^{(k)}tw^{(k+1)}t\ldots w^{(2k)}
\end{align}
where, for each $i \in \{0,1,\ldots,2k\}$, $w^{(i)}=a^{d_i}$ for some $d_i \in \mathbb{Z}$. Now, any $g \in \mathbb{B}_S(n)\cap A$ with $g_{\min}\ge0$ can be expressed in the form (\ref{elementsofthebase}) and must satisfy
\begin{align*}
\sum\limits_{i=0}^{2k}|w^{(i)}|_{\{a\}}\le n-2k.
\end{align*}
We now justify why it is sufficient to look at only those $g \in A$ with $g_{\min}\ge0$. Let $A_s:=\{g \in A : g_{\min}\ge s\}$. By conjugating a word of the form (\ref{elementsofthebase}) by $t^{-s}$, we have, for any $s \in \mathbb{Z}\setminus \mathbb{N}$, that
\[|\mathbb{B}_S(n)\cap (A_s\setminus A_{s+1})|\le|\mathbb{B}_S(n)\cap A_0|.\]
Also, for any $s\le -n$, $|\mathbb{B}_S(n)\cap (A_s\setminus A_{s+1})|=0$. Thus
\begin{align*}
|\mathbb{B}_S(n)\cap A|&\le \sum\limits_{-n\le s\le-1}|\mathbb{B}_S(n)\cap (A_s\setminus A_{s+1})|+|\mathbb{B}_S(n)\cap A_0|\\
&\le (n+1)|\mathbb{B}_S(n)\cap A_0|.
\end{align*}
Since groups of the form $C\wr \mathbb{Z}$ (where $C$ is a non-trivial cyclic group) are of exponential growth, producing a bound for $|\mathbb{B}_S(n)\cap A_0|$ will be sufficient to bound $|\mathbb{B}_S(n)\cap A|$.
In (\ref{elementsofthebase}), the words $\{w^{(j)} : j=k+1, k+2,\ldots, 2k\}$ are redundant since
\begin{align*}
&w^{(0)}t^{-1}w^{(1)}t^{-1}\ldots w^{(k-1)}t^{-1}w^{(k)}tw^{(k+1)}t\ldots w^{(2k)}\\
=&w^{(0)}w^{(2k)}t^{-1}w^{(1)}w^{(2k-1)}t^{-1}\ldots w^{(k-1)}w^{(k+1)}t^{-1}w^{(k)}t^k.
\end{align*}
Thus any $g \in \mathbb{B}_S(n)\cap A$ with $g_{\min}\ge0$ can be expressed in the form
\begin{align}\label{elementsofthebase2}
w^{(0)}t^{-1}w^{(1)}t^{-1}\ldots w^{(k-1)}t^{-1}w^{(k)}t^k
\end{align}
where, for each $i \in \{0,1,\ldots,k\}$, $w^{(i)}=a^{d_i}$ for some $d_i \in \mathbb{Z}$.
From \cite{growthofwreathproducts}, the growth function of $C\wr \mathbb{Z}$ with our generating set is at least $2^n$ if $|C|\ge3$, and the growth rate is $\frac{1+\sqrt5}{2}$ if $|C|=2$.
We first work with $|C|=2$. In this case each $w^{(i)}$ has length 0 or 1. Thus, for each $k$, there are at most $2^{k+1}$ choices for the values of $\{w^{(i)} : i=0,1,\ldots, k\}$. Hence the size of $|\mathbb{B}_S(n)\cap A_0|$ is bounded by
\[\sum\limits_{j=0}^{\floor*{n/2}}2^{j+1}\le 4\cdot (\sqrt2)^n\le 4\cdot\left(\frac{1+\sqrt5}{2}\right)^n\]
and so the base of $C_2\wr \mathbb{Z}$ is negligible.
For the case where $|C|>2$, we shall use Lemma \ref{combinatorialresult}. Our aim is to show that $|(\mathbb{B}_S(n)\setminus \mathbb{B}_S(n-1))\cap A_0|$ is bounded by a function which has growth rate $2^n$ (since this will mean that $|\mathbb{B}_S(n)\cap A_0|$ is also bounded by a function which has growth rate $2^n$). Fix a $k \in \{0, 1, \ldots, \floor*{\frac{n}{2}}\}$. We note that all such elements can be represented by a word of the form (\ref{elementsofthebase2}), where $|w^{(i)}|<n-2k$ for all $i \in \{0,\ldots, k\}$. Each such word is in bijection with an ordered multiset
\begin{align}\label{elementsofthebase3}
[u^{(0)}, v^{(0)}, u^{(1)}, v^{(1)},\ldots, u^{(k-1)}, v^{(k-1)}, u^{(k)}, v^{(k)}]_{\ord}
\end{align}
where $u^{(i)}, v^{(i)} \in \mathbb{N}\cup \{0\}$ satisfy $u^{(i)}v^{(i)}=0$ for all $i$. The number of such multisets is therefore bounded by the number of weak compositions of $n-2k$ into $2k+2$ parts. From Lemma \ref{combinatorialresult} this is equal to
\begin{align*}
\begin{pmatrix}
n-2k+2k+2-1\\2k+2-1
\end{pmatrix}=\begin{pmatrix}
n+1\\2k+1
\end{pmatrix}.
\end{align*}
Now we sum over all viable $k$, noting that the binomial coefficients of $n+1$ with odd lower index sum to exactly half of $2^{n+1}$:
\begin{align*}
\sum\limits_{k=0}^{\floor*{\frac{n}{2}}}\begin{pmatrix}
n+1\\2k+1
\end{pmatrix}
=\frac{1}{2}\sum\limits_{j=0}^{n+1}\begin{pmatrix}
n+1\\j
\end{pmatrix}=2^{n}.
\end{align*}
Hence $|(\mathbb{B}_S(n)\setminus \mathbb{B}_S(n-1))\cap A|\le (n+1)|(\mathbb{B}_S(n)\setminus \mathbb{B}_S(n-1))\cap A_0|\le (n+1)\cdot2^n$, and so $A$ is negligible in $C\wr \mathbb{Z}$.
\end{proof}
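The negligibility of the base for $|C|=2$ can also be observed experimentally. The Python sketch below (illustrative only; it reuses the lamp representation of $C_2\wr\mathbb{Z}$ and runs a breadth-first search over the generating set $\{a_0,t,t^{-1}\}$) computes the proportion of base elements in small balls and checks that it decays.

```python
from collections import deque

# C_2 wr Z: element = (frozenset of lit lamp positions, shift k)
def mul(x, y):
    f, k = x
    g, l = y
    return (f ^ frozenset(p + k for p in g), k + l)

gens = [(frozenset({0}), 0), (frozenset(), 1), (frozenset(), -1)]

def balls(radius):
    # BFS from the identity computes the word metric up to the given radius
    dist = {(frozenset(), 0): 0}
    queue = deque(dist)
    while queue:
        x = queue.popleft()
        if dist[x] == radius:
            continue
        for s in gens:
            y = mul(x, s)
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    ball = [sum(1 for d in dist.values() if d <= n) for n in range(radius + 1)]
    base = [sum(1 for x, d in dist.items() if d <= n and x[1] == 0)
            for n in range(radius + 1)]
    return ball, base

ball, base = balls(8)
density = [b / B for b, B in zip(base, ball)]
# the proportion of base elements in B_S(n) decays
assert density[8] < density[2]
```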
Note that from this proof it immediately follows that $(C_2\times C_2)\wr \mathbb{Z}$, with the generating set consisting of two generators of $(C_2\times C_2)_0$ and a generator of the head, has degree of commutativity 0.
\begin{reptheorem}{mainthm2}
Let $G:=F\wr \mathbb{Z}$ where $F$ is a non-trivial finite group. Then there exists a generating set $S$ such that $\dc_S(G)=0$.\end{reptheorem}
\begin{proof} Let $|F|=m>1$ and let $A$ denote the base of $G$. Then $A:=\bigoplus_{i \in \mathbb{Z}}F_i$ where $F_i=F$ for each $i \in \mathbb{Z}$. Let $S$ denote the generating set consisting of the non-trivial elements of $F_0$ and a generator $t$ of the head of $G$. From Section \ref{conditioni} we need only show that the base of $G$ is negligible in $G$.
First we produce a lower bound on the growth of $G$. Consider words of the form
\begin{align*}
w_1tw_2tw_3\ldots tw_kt^{\epsilon}
\end{align*}
where, for each $i$, $w_i \in S$ and $\epsilon \in \{0, 1\}$. There are $m^k$ such words (since $|S|=m$) and so $|\mathbb{B}_S(n)|\ge |\mathbb{B}_S(n)\setminus \mathbb{B}_S(n-1)|\ge m^{\ceil*{n/2}}$.
We now produce an upper bound on the growth of $A$, the base of $G$. As with the previous proof, we produce an upper bound for words $g \in A\cap(\mathbb{B}_S(n)\setminus\mathbb{B}_S(n-1))$ with $g_{\min}\ge 0$. Such words are of the form
\begin{align*}
w_0t^{-1}w_1t^{-1}w_2t^{-1}\ldots w_{k-1}t^{-1}w_kt^k
\end{align*}
where each $w_i$ is either trivial or in $S\setminus\{t\}$ and $\floor*{\frac{n-1}{3}}\le k\le n-1$, since there must be at least one non-trivial $w_i$ and the $n-2k$ non-trivial $w_i$ must fit into the $k+1$ available positions. This produces the bound
\begin{align*}
\sum\limits_{k=\floor*{\frac{n-1}{3}}}^{n-1}\begin{pmatrix}k+1\\n-2k\end{pmatrix}(m-1)^{n-2k}
\end{align*}
since, for each $k$, $n-2k$ of the $\{w_i\mid i=0, \ldots, k\}$ may be chosen from $S\setminus\{t\}$ and the other $w_i$ are trivial. Now
\begin{align*}
\sum\limits_{k=\floor*{\frac{n-1}{3}}}^{n-1}\begin{pmatrix}k+1\\n-2k\end{pmatrix}(m-1)^{n-2k}&\le
\sum\limits_{k=\floor*{\frac{n-1}{3}}}^{n-1}\begin{pmatrix}n\\n-2k\end{pmatrix}(m-1)^{n-2k}\\&\le
(m-1)^2\cdot\sum\limits_{j=0}^{\floor*{n/3}}\begin{pmatrix}n\\j\end{pmatrix}(m-1)^j\\&\le (m-1)^2\cdot (m-1+1)^{\floor*{n/3}}
\end{align*}
and so the base of $G$ is negligible in $G$.
\end{proof}
We end by posing a question. This seems natural in the context of Theorem \ref{mainthm} and Theorem \ref{mainthm2}.
\begin{quR} Given a finitely generated group $H$, is the base of $G:=H\wr \mathbb{Z}$ negligible in $G$? Moreover, what if $\mathbb{Z}$ is replaced with another finitely generated infinite group?
\end{quR}
\bibliographystyle{amsalpha}
\def$'${$'$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
| {
"timestamp": "2016-05-17T02:16:22",
"yymm": "1605",
"arxiv_id": "1605.04829",
"language": "en",
"url": "https://arxiv.org/abs/1605.04829",
"abstract": "The degree of commutativity of a group $G$ measures the probability of choosing two elements in $G$ which commute. There are many results studying this for finite groups. In [AMV17], this was generalised to infinite groups. In this note, we compute the degree of commutativity for wreath products of the form $\\mathbb{Z}\\wr \\mathbb{Z}$ and $F\\wr \\mathbb{Z}$ where $F$ is any finite group.",
"subjects": "Group Theory (math.GR)",
"title": "The degree of commutativity and lamplighter groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232894783426,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7093460262896583
} |
https://arxiv.org/abs/1910.10828 | Superconvergent flux recovery of the Rannacher-Turek nonconforming element | This work presents superconvergence estimates of the nonconforming Rannacher--Turek element for second order elliptic equations on any cubical meshes in $\mathbb{R}^{2}$ and $\mathbb{R}^{3}$. In particular, a corrected numerical flux is shown to be superclose to the Raviart--Thomas interpolant of the exact flux. We then design a superconvergent recovery operator based on local weighted averaging. Combining the supercloseness and the recovery operator, we prove that the recovered flux superconverges to the exact flux. As a by-product, we obtain a superconvergent recovery estimate of the Crouzeix--Raviart element method for general elliptic equations. | \section{Introduction and preliminaries}\label{sec1}
Finite element superconvergent recovery is quite popular in practice for its simplicity and ability to develop asymptotically exact a posteriori error estimators. The theory of superconvergent recovery for conforming Lagrange elements is well-established, see, e.g., \cite{BS1977,Thomee1977,ZZ1987,ZZ1992,BX2003a,BX2003b,BaXuZheng2007,XZ2003,ZhangNaga2005}. Let $u_{h}$ be the finite element solution approximating the PDE solution $u$. The framework of superconvergent recovery is often divided into two steps. The starting point is a supercloseness estimate between $u_{h}$ and the finite element \emph{canonical interpolant} $u_{I}$, where $u_I$ and $u$ share the same degrees of freedom (dofs) corresponding to certain finite element. Then a postprocessed solution $R_{h}u_{h}$ is shown to superconverge to $u$ in suitable norm, provided $R_{h}$ is a bounded operator with super-approximation property.
On the other hand, since the interelement boundary continuity of nonconforming elements is very weak, superconvergence analysis of nonconforming methods is often more difficult and limited. The Crouzeix--Raviart (CR) \cite{CR1973,BS2008} element for the Poisson equation is an important model problem for the analysis of nonconforming methods. In this case, it can be numerically observed that the CR canonical interpolant $u_{I}$ and the finite element solution $u_{h}$ are not superclose in the energy norm. Hence the aforementioned recovery framework does not work. In \cite{Ye2001}, Ye developed superconvergence estimates of the CR element using least-squares surface fitting \cite{Wang2000,WangYe2001}. Guo and Huang \cite{GuoZhang2015} presented a polynomial preserving gradient recovery method for the CR element with numerically confirmed superconvergence. Based on an equivalence between the CR method and the lowest order Raviart--Thomas (RT) method for Poisson's equation (cf.~\cite{Marini1985,AB1985}), Hu and Ma \cite{HM2016} proved a recovery-type superconvergence estimate for the CR element using superconvergence of RT elements in \cite{Brandts1994}. This result is then improved and generalized in e.g., \cite{YL2018,HuMa2018,ZHY2019}. Readers are also referred to e.g., \cite{ChenLi1994,Chen2002,MS2009,LN2008} and references therein for superconvergence of other nonconforming elements.
The nonconforming Rannacher--Turek (NCRT) element \cite{RT1992} is a natural generalization of the CR element on quadrilateral meshes. It is noted that there is a superconvergence estimate of the NCRT element at some special points under certain mildly distorted \emph{square} meshes, see \cite{MSX2006}. For the Poisson equation, it has been shown in \cite{LTZ2005} that several rectangular nonconforming methods do not admit natural supercloseness estimates. In particular, $u_{I}$ and $u_{h}$ from the NCRT element are superclose in the energy norm only under \emph{square} meshes. To overcome this barrier, the authors of \cite{LTZ2005} enriched the NCRT element by one degree of freedom at the centroid of each element and proved superconvergent gradient recovery estimates of the modified nonconforming element.
In this paper, we shall consider the standard NCRT method \eqref{RT} for solving the general elliptic equation \eqref{elliptic}. First we compute a corrected numerical flux $\bm{\sigma}_{h}$ from the NCRT finite element solution, see Theorem \ref{superclose}. We shall show that $\bm{\sigma}_{h}$ is superclose to $\Pi_{h}(a\nabla u)$ by comparing it with an auxiliary $H(\divg)$-conforming flux $\bar{\bm{\sigma}}_{h}$ and using well-established superconvergence tools and techniques for RT elements in e.g., \cite{Duran1990,Brandts1994,YL2018}. Here $\Pi_{h}$ is the canonical interpolation of the lowest order rectangular RT element. We then construct a local edge-based weighted averaging operator $A_{h}$, which makes $\|a\nabla u-A_{h}\Pi_{h}(a\nabla u)\|$ supersmall on any rectangular mesh. Hence $A_{h}\bm{\sigma}_{h}$ superconverges to $a\nabla u$ on any rectangular mesh by a triangle-inequality argument. To the best of our knowledge, this is the first superconvergent recovery method for the NCRT element on arbitrary rectangular meshes. As far as we know, there is no superconvergence analysis of the tetrahedral CR element in $\mathbb{R}^3.$ In contrast, our superconvergence results could be directly generalized to the cubic NCRT element in $\mathbb{R}^3$, see Section \ref{sec4}.
For elliptic equations with variable coefficients and lower order terms, Arbogast and Chen \cite{AC1995} reformulated various mixed methods as modified nonconforming methods. However, the general equivalence expression is complicated, and it is unclear how far the standard nonconforming finite element solution is from the modified one. On the other hand, superconvergence analysis of $H(\divg)$-conforming mixed finite elements is well established; see, e.g., \cite{Duran1990,Brandts1994,YL2018,BaLi2019}. Hence we shall relate nonconforming methods to their mixed counterparts as in \cite{HM2016}. In our superconvergence analysis, it is not necessary to rewrite the NCRT method \eqref{RT} as an equivalent mixed method for the \emph{general elliptic equation}; all we need is the equivalence given by Lemma \ref{mainlemma} for the \emph{Poisson} equation. As far as we know, this is the first superconvergence estimate for the CR and NCRT element methods applied to the general elliptic equation.
In the rest of this section, we introduce preliminary definitions and notations. Let $\Omega=[\omega_1,\omega_2]\times[\omega_3,\omega_4]\subset\mathbb{R}^{2}$ be a rectangle. Consider the second order elliptic equation
\begin{subequations}\label{elliptic}
\begin{align}
-\nabla\cdot(a\nabla u)+\bm{b}\cdot\nabla u+cu&=f\quad \text{in}\ \Omega,\\
u&=g\quad \text{on}\ \partial\Omega,
\end{align}
\end{subequations}
where $a(\bm{x})\geq a_{0}>0$ for all $\bm{x}=(x_{1},x_{2})^{T}\in\Omega$, and $a, \bm{b}, c$, and $f$ are smooth functions in $\bm{x}$ on $\bar{\Omega}$.
Let $\mathcal{T}_{h}$ be a partition of $\Omega$ by rectangles. Given a rectangle $K\in\mathcal{T}_{h}$, let $\ell_{K,1}$ and $\ell_{K,2}$ denote the width and height of $K$ and $h=\max_{K\in\mathcal{T}_{h}}\max(\ell_{K,1},\ell_{K,2})$ the mesh size. We assume that $h<1$ and $\mathcal{T}_{h}$ is nondegenerate, i.e.
$$\max_{K\in\mathcal{T}_{h}}\max\left\{\frac{\ell_{K,1}}{\ell_{K,2}},\frac{\ell_{K,2}}{\ell_{K,1}}\right\}\leq C_{\mathcal{T}_{h}}<\infty,$$ where $C_{\mathcal{T}_{h}}$ is an absolute constant independent of $h$. Let $\mathcal{E}_{h}$, $\mathcal{E}_{h}^{o}$, and $\mathcal{E}_{h}^{\partial}$ denote the set of edges, interior edges, and boundary edges, respectively. The following edge-based patch $\omega_{E}$ will be frequently used.
\begin{enumerate}
\item For $E\in\mathcal{E}_{h}^{o}$, let $\omega_{E}=K^{+}\cup K^{-}$ where $K^{+}$ and $K^{-}$ are the two adjacent rectangles sharing $E$.
\item For $E\in\mathcal{E}_{h}^{\partial}$, let $\omega_{E}=K$, where $K$ is the rectangle having $E$ as an edge.
\end{enumerate}
The NCRT finite element space is defined as
\begin{equation*}
\begin{aligned}
\mathcal{V}_{g,h}:=&\{v_{h}\in L^{2}(\Omega): v_{h}|_{K}\in\text{span}\{1,x_{1},x_{2},x_{1}^{2}-x_{2}^{2}\}\text{ for\ all}\ K\in\mathcal{T}_{h}, \\
&\fint_{E}v_{h}\text{ is\ single-valued for all}\ E\in\mathcal{E}_{h}^{o}, \fint_{E}v_{h}=\fint_{E}g\text{ for all } E\in\mathcal{E}_{h}^{\partial}\},
\end{aligned}
\end{equation*}
where $\fint_{E}v:=\frac{1}{|E|}\int_{E}v$ is the mean value of $v$ on $E$. The name `nonconforming' reflects the fact that $\mathcal{V}_{g,h}\not\subseteq H^1(\Omega)$. Let $$H^1(\mathcal{T}_{h}):=\{v\in L_2(\Omega): v|_K\in H^1(K)~\forall K\in\mathcal{T}_{h}\}$$ be the space of piecewise $H^1$ functions and let $\nabla_{h}$ denote the piecewise gradient with respect to $\mathcal{T}_{h}$, namely,
$$(\nabla_h v)|_K:=\nabla (v|_K),\quad\forall v\in H^1(\mathcal{T}_{h}),\quad \forall K\in\mathcal{T}_{h}.$$
The NCRT method for \eqref{elliptic} is to find $u_{h}\in\mathcal{V}_{g,h}$, such that
\begin{equation}\label{RT}
\ab{a\nabla_{h}u_{h}}{\nabla_{h}v}+\ab{\bm{b}\cdot\nabla_{h}u_{h}}{v}+\langle cu_{h},v\rangle=\ab{f}{v},\quad\forall v\in\mathcal{V}_{0,h},
\end{equation}
where $\ab{\cdot}{\cdot}$ is the $L_2(\Omega)$-inner product.
Throughout this paper, we adopt the notation $A\lesssim B$ when $A\leq CB$ for some generic constant $C$ that is independent of $h$. We assume that the standard a priori error estimate for the NCRT method holds:
\begin{equation}\label{apriori}
\|u-u_{h}\|+h\|\nabla_{h}(u-u_{h})\|\lesssim h^{2}\|u\|_{H^{2}},
\end{equation}
where $\|\cdot\|$ denotes the norm $\|\cdot\|_{L_2(\Omega)}$ and $\|\cdot\|_{H^{2}}$ abbreviates $\|\cdot\|_{H^{2}(\Omega)}$; other Sobolev norms are abbreviated similarly. Readers are referred to \cite{BS2008} for the analogue of \eqref{apriori} for the CR method. The estimate \eqref{apriori} implies that \eqref{RT} is a first order method in the discrete energy norm $\|\nabla_h\cdot\|$. Therefore, an improved recovery-type error estimate of order $1+s$, where $s>0$ is an absolute constant, suffices to establish superconvergence. Similarly, we say two functions are superclose whenever the $\|\nabla_h\cdot\|$-distance between them is $O(h^{1+s}).$
The following NCRT element space $\widetilde{\mathcal{V}}_{h}$ using DOFs based on pointwise function evaluation will be used in Section \ref{sec3}.
\begin{equation*}
\begin{aligned}
\widetilde{\mathcal{V}}_{h}:=&\{v_{h}\in L^{2}(\Omega): v_{h}|_{K}\in\text{span}\{1,x_{1},x_{2},x_{1}^{2}-x_{2}^{2}\}\text{ for\ all}\ K\in\mathcal{T}_{h}, \\
&v_{h}\text{ is\ continuous at the midpoint of each}\ E\in\mathcal{E}_{h}^{o}\}.
\end{aligned}
\end{equation*}
Let $Q_{k,l}(K)$ denote the set of polynomials of degree $\leq k$ in $x_{1}$ and of degree $\leq l$ in $x_{2}$ on the element $K$. Let $$H(\divg,\Omega):=\{\bm{\tau}\in L_2(\Omega)\times L_2(\Omega): \nabla\cdot\bm{\tau}\in L_2(\Omega)\}.$$ The lowest order rectangular RT finite element space is
\begin{equation*}
\mathcal{RT}_{h}:=\{\bm{\tau}_{h}\in H(\divg,\Omega): \bm{\tau}_{h}|_{K}\in Q_{1,0}(K)\times Q_{0,1}(K)\text{ for all }K\in\mathcal{T}_{h}\}.
\end{equation*}
For convenience we also introduce the broken RT space
$$\mathcal{RT}_{h}^{-1}:=\{\bm{\tau}_{h}\in L_2(\Omega)\times L_2(\Omega): \bm{\tau}_{h}|_{K}\in Q_{1,0}(K)\times Q_{0,1}(K), \forall K\in\mathcal{T}_{h}\}.$$
The dofs for $\mathcal{RT}_h$ are the integrals of the normal component of a vector field over each edge in $\mathcal{E}_{h}$. Given $\bm{\tau}\in H^1(\Omega)\times H^1(\Omega)$, the RT canonical interpolant $\Pi_{h}\bm{\tau}$ is the unique finite element function in $\mathcal{RT}_{h}$ such that
\begin{equation}\label{RTinterpolation}
\int_{E}(\Pi_{h}\bm{\tau})\cdot\bm{n}_E=\int_{E}\bm{\tau}\cdot\bm{n}_E,\quad\forall E\in\mathcal{E}_{h},
\end{equation}
where $\bm{n}_E$ is a unit normal to $E$. Let $P_{h}$ be the $L_2(\Omega)$-projection onto the space of piecewise constant functions. It is well known that
\begin{equation}\label{commuting}
\nabla\cdot\Pi_{h}\bm{\tau}=P_{h}\nabla\cdot\bm{\tau}.
\end{equation}
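As a quick sanity check (ours, not part of the original analysis), the commuting property \eqref{commuting} can be verified symbolically on a single element: the edge fluxes of $\Pi_{h}\bm{\tau}$ determine its piecewise divergence, which coincides with the mean value of $\nabla\cdot\bm{\tau}$ over $K$. The smooth field $\bm{\tau}$ below is an arbitrary example.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
l1, l2 = sp.symbols('l1 l2', positive=True)
# an arbitrary smooth field on K = [0, l1] x [0, l2]
tau = sp.Matrix([sp.sin(x1)*x2, sp.cos(x2) + x1**2])

# Pi_h tau = (A + B*x1, C + D*x2): coefficients determined by edge fluxes
A = sp.integrate(tau[0].subs(x1, 0), (x2, 0, l2))/l2           # left edge mean
B = (sp.integrate(tau[0].subs(x1, l1), (x2, 0, l2))/l2 - A)/l1  # right minus left
C = sp.integrate(tau[1].subs(x2, 0), (x1, 0, l1))/l1           # bottom edge mean
D = (sp.integrate(tau[1].subs(x2, l2), (x1, 0, l1))/l1 - C)/l2  # top minus bottom

div_Pi = B + D  # divergence of the (elementwise) RT interpolant
Ph_div = sp.integrate(sp.diff(tau[0], x1) + sp.diff(tau[1], x2),
                      (x1, 0, l1), (x2, 0, l2))/(l1*l2)  # P_h(div tau) on K
assert sp.simplify(div_Pi - Ph_div) == 0
```

The check rests only on the divergence theorem: both sides equal the net boundary flux of $\bm{\tau}$ divided by $|K|$.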
Let $E\in\mathcal{E}_{h}^{o}$ and $K^{+}, K^{-}$ be the two rectangles sharing $E$. Let $\bm{n}^{+}$ and $\bm{n}^{-}$ denote the outward unit normal induced by $K^{+}$ and $K^{-}$ respectively. In the analysis of nonconforming methods, it is convenient to introduce notations for jumps and averages on $E$:
\begin{equation*}
\begin{aligned}
&\llbracket\bm{\tau}\rrbracket:=\bm{\tau}|_{K^{+}}\cdot\bm{n}^{+}+\bm{\tau}|_{K^{-}}\cdot\bm{n}^{-},\\
&\{\bm{\tau}\}:=(\bm{\tau}|_{K^{+}}+\bm{\tau}|_{K^{-}})/2,\\
&\llbracket v\rrbracket:=v|_{K^{+}}\bm{n}^{+}+v|_{K^{-}}\bm{n}^{-},\\
&\{v\}:=(v|_{K^{+}}+v|_{K^{-}})/2,
\end{aligned}
\end{equation*}
where $\bm{\tau}$ is a vector and $v$ is a scalar. For $E\in\mathcal{E}_{h}^{\partial}$,
\begin{equation*}
\llbracket\bm{\tau}\rrbracket:=\bm{\tau}\cdot\bm{n},\quad\{v\}:=v,\quad\llbracket v\rrbracket:=\bm{0},
\end{equation*}
where $\bm{n}$ is the outward unit normal to $\partial\Omega$. It is readily checked that
\begin{equation}\label{product}
\begin{aligned}
\llbracket\bm{\tau}v\rrbracket&=\llbracket\bm{\tau}\rrbracket\{v\}+\llbracket v\rrbracket\cdot\{\bm{\tau}\}.
\end{aligned}
\end{equation}
With these notations, a useful fact is that
\begin{equation}\label{equivRT}
\bm{\tau}_{h}\in\mathcal{RT}_{h}\text{ if\ and\ only\ if } \bm{\tau}_{h}\in\mathcal{RT}_{h}^{-1}\text{ and }\llbracket\bm{\tau}_{h}\rrbracket=0~\forall E\in\mathcal{E}_{h}^{o}.
\end{equation}
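The product rule \eqref{product} can likewise be checked symbolically. The sketch below (ours) uses the scalar jump $\llbracket v\rrbracket=v|_{K^{+}}\bm{n}^{+}+v|_{K^{-}}\bm{n}^{-}$ without a factor $1/2$, the convention under which the identity holds exactly.

```python
import sympy as sp

# Interior edge with unit normals n^+ = n and n^- = -n.
tp1, tp2, tm1, tm2, vp, vm, n1, n2 = sp.symbols('tp1 tp2 tm1 tm2 vp vm n1 n2')
tp, tm = sp.Matrix([tp1, tp2]), sp.Matrix([tm1, tm2])  # tau on K^+, K^-
n = sp.Matrix([n1, n2])

jump_tau = (tp - tm).dot(n)              # [[tau]] = tau^+ . n^+ + tau^- . n^-
avg_tau = (tp + tm)/2                    # {tau}
jump_v = vp*n - vm*n                     # [[v]] = v^+ n^+ + v^- n^-
avg_v = sp.Rational(1, 2)*(vp + vm)      # {v}

lhs = vp*tp.dot(n) - vm*tm.dot(n)        # [[tau v]]
rhs = jump_tau*avg_v + jump_v.dot(avg_tau)
assert sp.expand(lhs - rhs) == 0
```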
\textbf{Abbreviation.} For the reader's convenience, abbreviations of finite elements in this paper are summarized as follows.
\begin{align*}
&\text{Rannacher--Turek: NCRT}\\
&\text{Raviart--Thomas: RT}\\
&\text{Crouzeix--Raviart: CR}
\end{align*}
The rest of this paper is organized as follows. Section \ref{sec2} discusses the supercloseness estimate in Theorem \ref{superclose}. In Section \ref{sec3}, we propose a postprocessing operator and prove the recovery superconvergence estimate in Theorem \ref{superconvergence}. In Section \ref{sec4}, we extend our superconvergence analysis to the CR element and NCRT element in $\mathbb{R}^3$. Numerical experiments are presented in Section \ref{sec5}. Concluding remarks are given in Section \ref{sec6}.
\section{Supercloseness}\label{sec2}
In this section, we derive a supercloseness estimate for the NCRT element, which is essential to develop superconvergent flux recovery.
First we need a lemma in the spirit of Marini (cf. \cite{Marini1985}).
\begin{lemma}\label{mainlemma}
Let $\bar{f}$ be a piecewise constant function, and let $\bm{\tau}_{h}$ satisfy $\bm{\tau}_{h}|_{K}\in Q_{1,0}(K)\times Q_{0,1}(K)$ and $\nabla\cdot(\bm{\tau}_{h}|_{K})=0$ for all $K\in\mathcal{T}_{h}$. Assume that
\begin{equation}\label{var}
\ab{\bm{\tau}_{h}}{\nabla_{h}v}=\ab{\bar{f}}{v}
\end{equation}
for all $v\in\mathcal{V}_{0,h}$. Then $\bm{\tau}_{h}-\bar{f}\bm{r}_{h}\in\mathcal{RT}_{h},$ with
$$\bm{r}_{h}|_{K}(x_1,x_2):=\left( \frac{\ell_{K,2}^{2}}{\ell_{K,1}^{2}+\ell_{K,2}^{2}}(x_{1}-x_{K,1}),~\frac{\ell_{K,1}^{2}}{\ell_{K,1}^{2}+\ell_{K,2}^{2}}(x_{2}-x_{K,2}) \right)^{T},$$
where $K=[x_{1,i},x_{1,i+1}]\times[x_{2,j},x_{2,j+1}]$, $\ell_{K,1}=x_{1,i+1}-x_{1,i}$, $\ell_{K,2}=x_{2,j+1}-x_{2,j}$, and $(x_{K,1},x_{K,2})^{T}$ is the centroid of $K$.
\end{lemma}
\begin{proof}
Consider any vertical edge $E\in\mathcal{E}_{h}^{o}$ and the two rectangles
$$K^{-}=[x_{1,i},x_{1,i+1}]\times[x_{2,j},x_{2,j+1}],\quad K^{+}=[x_{1,i+1},x_{1,i+2}]\times[x_{2,j},x_{2,j+1}]$$
sharing it. Let $v_{E}\in\mathcal{V}_{0,h}$ be the basis function such that
$$\fint_{E}v_{E}=1,\quad\fint_{E'}v_{E}=0\text{ for } \mathcal{E}_{h}\ni E'\neq E.$$
Note that $\bm{\tau}_{h}\cdot(1,0)^{T}$ is a constant on $E$. It then follows from \eqref{var} with $v=v_E$, $\nabla_{h}\cdot\bm{\tau}_{h}=0$ and integration by parts that
\begin{equation}\label{inter}
\int_{E}\llbracket\bm{\tau}_{h}\rrbracket=\int_{K^{+}\cup K^{-}}\bar{f}v_{E}.
\end{equation}
Direct calculation shows that
\begin{equation}\label{intvE}
\int_{K^{\pm}}v_{E}=\frac{|K^{\pm}|\ell_{K^{\pm},2}^{2}}{2(\ell_{K^{\pm},1}^{2}+\ell_{K^{\pm},2}^{2})}.
\end{equation}
Then combining \eqref{intvE} with \eqref{inter} and the definition of $\bm{r}_h$ yields
\begin{equation}\label{edgejump}
\llbracket\bm{\tau}_{h}-\bar{f}\bm{r}_{h}\rrbracket=0\text{ on }E.
\end{equation}
Similarly, \eqref{edgejump} also holds for horizontal edges. Combining \eqref{edgejump} with the fact $(\bm{\tau}_{h}-\bar{f}\bm{r}_{h})|_{K}\in Q_{1,0}(K)\times Q_{0,1}(K)$, we conclude that $\bm{\tau}_{h}-\bar{f}\bm{r}_{h}\in\mathcal{RT}_{h}.$
\qed\end{proof}
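The two computations underlying the proof, namely $\nabla\cdot\bm{r}_{h}=1$ and the value of $\int_{K^{\pm}}v_{E}$ in \eqref{intvE}, can be reproduced with a computer algebra system. The following sketch (ours, with the edge $E$ placed at $x_{1}=0$ and the NCRT basis function written in its symmetric form) confirms both.

```python
import sympy as sp

x1, x2, l1, l2 = sp.symbols('x1 x2 l1 l2', positive=True)
a, b, c = sp.symbols('a b c')

# div r_h = 1 on each element (centroid at (xc1, xc2))
S = l1**2 + l2**2
xc1, xc2 = sp.symbols('xc1 xc2')
r = sp.Matrix([l2**2/S*(x1 - xc1), l1**2/S*(x2 - xc2)])
assert sp.simplify(sp.diff(r[0], x1) + sp.diff(r[1], x2)) == 1

# Formula for int_K v_E: take K = [-l1, 0] x [-l2/2, l2/2] with E at x1 = 0.
# By symmetry in x2, the basis function has no linear x2 term.
v = a + b*x1 + c*(x1**2 - x2**2)
mean_vert = lambda x1v: sp.integrate(v.subs(x1, x1v), (x2, -l2/2, l2/2))/l2
mean_top = sp.integrate(v.subs(x2, l2/2), (x1, -l1, 0))/l1
sol = sp.solve([mean_vert(0) - 1,    # edge mean 1 on E
                mean_vert(-l1),      # edge mean 0 on the opposite edge
                mean_top],           # edge mean 0 on top (bottom by symmetry)
               [a, b, c])
vE = v.subs(sol)
intK = sp.integrate(vE, (x1, -l1, 0), (x2, -l2/2, l2/2))
assert sp.simplify(intK - (l1*l2)*l2**2/(2*S)) == 0  # matches (intvE), |K|=l1*l2
```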
\begin{remark}
It seems that the NCRT method using dofs based on pointwise function evaluation does not have a similar equivalence.
\end{remark}
To apply Lemma \ref{mainlemma}, we then introduce the auxiliary nonconforming method: Find $\bar{u}_{h}\in\mathcal{V}_{g,h},$ such that
\begin{equation}\label{auxnc}
\ab{a\nabla_{h}\bar{u}_{h}}{\nabla_{h}v}=\ab{P_{h}(f-cu-\bm{b}\cdot\nabla u)}{v},\quad\forall v\in\mathcal{V}_{0,h}.
\end{equation}
The following lemma shows that $u_{h}$ and $\bar{u}_{h}$ are superclose in the $H^{1}$-norm.
\begin{lemma}\label{superuubar}
Let $u_{h}$ and $\bar{u}_{h}$ solve \eqref{RT} and \eqref{auxnc}, respectively. Then
$$\|\nabla_{h}(u_{h}-\bar{u}_{h})\|\lesssim h^{2}\|u\|_{H^{2}}.$$
\end{lemma}
\begin{proof}
Subtracting \eqref{auxnc} from \eqref{RT} gives
$$\ab{a\nabla_{h}(u_{h}-\bar{u}_{h})}{\nabla_{h}v}
=\ab{f-cu_{h}-\bm{b}\cdot\nabla_{h}u_{h}-P_{h}(f-cu-\bm{b}\cdot\nabla u)}{v},$$
where $v\in\mathcal{V}_{0,h}$.
It then follows from \eqref{apriori}
that
\begin{equation}\label{all}
\begin{aligned}
&\ab{a\nabla_{h}(u_{h}-\bar{u}_{h})}{\nabla_{h}v}\\
&=\ab{f-cu-\bm{b}\cdot\nabla u-P_{h}(f-cu-\bm{b}\cdot\nabla u)}{v-P_{h}v}\\
&\quad+\ab{c(u-u_{h})}{v}+\ab{\bm{b}\cdot\nabla_{h}(u-u_{h})}{v}\\
&=O(h^{2})(\|f\|_{H^{1}}+\|u\|_{H^{2}})\|\nabla_{h}v\|+\ab{\bm{b}\cdot\nabla_{h}(u-u_{h})}{v}.
\end{aligned}
\end{equation}
It remains to show that $\ab{\bm{b}\cdot\nabla_{h}(u-u_{h})}{v}$ is supersmall. By integration by parts, \eqref{product}, and $\fint_{E}\llbracket u-u_{h}\rrbracket=\bm{0}$, we have
\begin{equation*}
\begin{aligned}
&\ab{\bm{b}\cdot\nabla_{h}(u-u_{h})}{v}\\
&\quad=\sumth\int_{\partial K}(u-u_{h})v\bm{b}\cdot\bm{n}-\int_{K}(u-u_{h})\nabla\cdot(\bm{b}v)\\
&\quad=\sum_{E\in\mathcal{E}_{h}}\int_{E}\{u-u_{h}\}\llbracket v\bm{b}-\bm{c}_{E}\rrbracket+\llbracket u-u_{h}\rrbracket\cdot\{v\bm{b}-\bm{d}_{E}\}\\
&\qquad-\int_{\Omega}(u-u_{h})\nabla_{h}\cdot(\bm{b}v)
\end{aligned}
\end{equation*}
for any constants $\bm{c}_{E}\in\mathbb{R}^{2}$ and $\bm{d}_{E}\in\mathbb{R}^{2}$. In particular, let $\bm{c}_{E}=\bm{d}_{E}=\bm{b}(m_{E})\fint_{E}v$, where $m_E$ is the midpoint of $E.$ By the trace inequality
\begin{equation}\label{trace}
\|w\|_{L_2(\partial K)}\lesssim h^{-\frac{1}{2}}\|w\|_{L_2(K)}+h^{\frac{1}{2}}\|\nabla w\|_{L_2(K)},
\end{equation}
we have
\begin{equation}\label{trace1}
\begin{aligned}
&\|\{u-u_{h}\}\|_{L_2(E)}+\|\llbracket u-u_{h}\rrbracket\|_{L_2(E)}\\
&\quad\lesssim h^{-\frac{1}{2}}\|u-u_{h}\|_{L_2(\omega_{E})}+h^{\frac{1}{2}}\|\nabla_{h}(u-u_{h})\|_{L_2(\omega_{E})}
\end{aligned}
\end{equation}
and
\begin{equation}\label{trace2}
\|\llbracket v\bm{b}-\bm{c}_{E}\rrbracket\|_{L_2(E)}+\|\{v\bm{b}-\bm{d}_{E}\}\|_{L_2(E)}\\
\lesssim h^{\frac{1}{2}}\|\nabla_{h}(\bm{b}v)\|_{L_2(\omega_{E})}.
\end{equation}
It follows from the Cauchy--Schwarz inequality, \eqref{trace1}, \eqref{trace2} and \eqref{apriori} that
\begin{equation}\label{buuhv}
\begin{aligned}
&|\ab{\bm{b}\cdot\nabla_{h}(u-u_{h})}{v}|\\
&\lesssim\sum_{E\in\mathcal{E}_{h}}\big(\|\{u-u_{h}\}\|_{L_2(E)}\|\llbracket v\bm{b}-\bm{c}_{E}\rrbracket\|_{L_2(E)}\\
&\quad+\|\llbracket u-u_{h}\rrbracket\|_{L_2(E)}\|\{v\bm{b}-\bm{d}_{E}\}\|_{L_2(E)}\big)+\|u-u_{h}\|\|\nabla_{h}\cdot(\bm{b}v)\|\\
&\leq\sum_{E\in\mathcal{E}_{h}}\big(\|u-u_{h}\|_{L_2(\omega_{E})}+h\|\nabla_{h}(u-u_{h})\|_{L_2(\omega_{E})}\big)\|\nabla_{h}(\bm{b}v)\|_{L_2(\omega_{E})}\\
&\quad+\|u-u_{h}\|\|\nabla_{h}\cdot(\bm{b}v)\|\\
&\lesssim\big(\|u-u_{h}\|+h\|\nabla_{h}(u-u_{h})\|\big)\|\nabla_{h}(\bm{b}v)\|+\|u-u_{h}\|\|\nabla_{h}\cdot(\bm{b}v)\|\\
&\lesssim h^{2}\|u\|_{H^{2}}\big(\|v\|+\|\nabla_{h} v\|\big).
\end{aligned}
\end{equation}
Combining \eqref{buuhv} with \eqref{all} and using the discrete Poincar\'e inequality (cf.~Theorem 10.6.12 in \cite{BS2008})
$\|v\|\lesssim\|\nabla_{h}v\|$,
we complete the proof.
\qed\end{proof}
Now we are in a position to present supercloseness results. Let $Q_{h}$ be the $L_{2}$-projection onto $\nabla _{h}\mathcal{V}_{0,h}$ and
$$\bm{\sigma}_{h}:=Q_{h}(a\nabla_{h}u_{h})-\bm{r}_{h}P_{h}(f-cu_{h}-\bm{b}\cdot\nabla_{h}u_{h})$$
be the corrected flux, where $\bm{r}_{h}$ is defined in Lemma \ref{mainlemma}. Note that $Q_{h}$ is indeed an element-by-element projection and $Q_{h}(a\nabla_{h}u_{h})=a\nabla_{h}u_{h}$ if $a$ is piecewise constant. The next theorem shows that $\bm{\sigma}_h$ approximates the exact flux $\bm{\sigma}:=a\nabla u$ very well.
\begin{theorem}\label{superclose}
It holds that
\begin{equation*}
\|\Pi_{h}\bm{\sigma}-\bm{\sigma}_{h}\|\lesssim
h^{2}\|u\|_{H^{3}}.
\end{equation*}
\end{theorem}
\begin{proof}
Let $\bar{\bm{\sigma}}_{h}:=Q_{h}(a\nabla_{h}\bar{u}_{h})-\bm{r}_{h}P_{h}(f-cu-\bm{b}\cdot\nabla u)$. Using the definition of $\bar{u}_h$ in \eqref{auxnc}, the fact that every field in the range of $Q_{h}$ is piecewise divergence-free, and Lemma \ref{mainlemma}, we conclude that $\bar{\bm{\sigma}}_{h}\in \mathcal{RT}_{h}\subset H(\divg,\Omega)$. Let $\bm{\tau}_{h}=\Pi_{h}\bm{\sigma}-\bar{\bm{\sigma}}_{h}$. It follows from \eqref{commuting} and $\nabla_{h}\cdot\bm{r}_{h}=1$ that
\begin{equation*}
\nabla\cdot\bm{\tau}_{h}=P_{h}\nabla\cdot(a\nabla u)+P_{h}(f-cu-\bm{b}\cdot\nabla u)=0.
\end{equation*}
Hence $\bm{\tau}_{h}|_{K}=(c_{1}x_{1}+c_{2},-c_{1}x_{2}+c_{3})^{T}$ for some $c_{i}\in\mathbb{R}$ on an element $K\in\mathcal{T}_{h}$. On the other hand, direct calculation shows that
\begin{equation*}
\begin{aligned}
&\int_{K}\bm{r}_{h}\cdot\bm{\tau}_{h}=\int_{K}\bm{r}_{h}\cdot\big(\bm{\tau}_{h}-(c_{2}+c_{1}x_{K,1},c_{3}-c_{1}x_{K,2})^{T}\big)\\
&\quad=\frac{c_{1}}{\ell_{K,1}^{2}+\ell_{K,2}^{2}}\int_{K}\ell_{K,2}^{2}(x_{1}-x_{K,1})^{2}-\ell_{K,1}^{2}(x_{2}-x_{K,2})^{2}=0.
\end{aligned}
\end{equation*}
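This vanishing moment of $\bm{r}_{h}$ against divergence-free fields in $Q_{1,0}(K)\times Q_{0,1}(K)$ is elementary but easy to misstate; a symbolic check (ours) on a centered element reads:

```python
import sympy as sp

x1, x2, c1, c2, c3 = sp.symbols('x1 x2 c1 c2 c3', real=True)
l1, l2 = sp.symbols('l1 l2', positive=True)

# centered element K = [-l1/2, l1/2] x [-l2/2, l2/2] (centroid at the origin)
S = l1**2 + l2**2
r = sp.Matrix([l2**2/S*x1, l1**2/S*x2])          # r_h on K
tau = sp.Matrix([c1*x1 + c2, -c1*x2 + c3])       # divergence-free Q_{1,0} x Q_{0,1}
assert sp.simplify(sp.diff(tau[0], x1) + sp.diff(tau[1], x2)) == 0

val = sp.integrate(r.dot(tau), (x1, -l1/2, l1/2), (x2, -l2/2, l2/2))
assert sp.simplify(val) == 0   # int_K r_h . tau_h = 0
```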
With the above identity, $\bm{\sigma}=a\nabla u$ and $\bm{\tau}_h\in\nabla_h\mathcal{V}_{0,h},$ we obtain
\begin{equation}\label{total}
\begin{aligned}
&\|\Pi_{h}\bm{\sigma}-\bar{\bm{\sigma}}_{h}\|^{2}=I+II,
\end{aligned}
\end{equation}
where
\begin{align*}
I=\ab{\Pi_{h}\bm{\sigma}-\bm{\sigma}}{\bm{\tau}_{h}},\quad II=\ab{a\nabla_{h}(u-\bar{u}_{h})}{\bm{\tau}_{h}}.
\end{align*}
By Lemma 3.1 with $k=0$ in \cite{Duran1990} and the Bramble--Hilbert lemma,
\begin{equation}\label{bdI}
|I|\lesssim h^{2}|\bm{\sigma}|_{H^{2}}\|\bm{\tau}_{h}\|.
\end{equation}
For part $II$, due to $\nabla\cdot(\bm{\tau}_{h}|_{K})=0$, we have
\begin{equation}\label{totalII}
\begin{aligned}
II&=\sum_{K\in\mathcal{T}_{h}}\int_{K}a\nabla(u-\bar{u}_{h})\cdot\bm{\tau}_{h}\\
&=\sum_{K\in\mathcal{T}_{h}}\int_{K}(\nabla(a(u-\bar{u}_{h}))-(u-\bar{u}_{h})\nabla a)\cdot\bm{\tau}_{h}\\
&=II_{1}+II_{2},
\end{aligned}
\end{equation}
where $II_1$ and $II_2$ are given by
\begin{align*}
II_{1}=\sum_{K\in\mathcal{T}_{h}}\int_{\partial K}a(u-\bar{u}_{h})\bm{\tau}_{h}\cdot\bm{n},\quad II_2=-\ab{(u-\bar{u}_{h})\nabla a}{\bm{\tau}_{h}}.
\end{align*}
The part $II_{2}$ is estimated by Lemma \ref{superuubar} and the a priori estimate \eqref{apriori}:
\begin{equation}\label{bdII2}
|II_{2}|\lesssim h^{2}\|u\|_{H^{2}}\|\bm{\tau}_{h}\|.
\end{equation}
Note that the normal component of $\{\bm{\tau}_{h}\}$ is constant on $E$ and $\llbracket\bm{\tau}_{h}\rrbracket=0$ by \eqref{equivRT}. It then follows from $\fint_{E}\llbracket\bar{u}_{h}\rrbracket=\bm{0}$, \eqref{product}, the trace inequality \eqref{trace}, an inverse inequality, \eqref{apriori}, and Lemma \ref{superuubar} that
\begin{equation}\label{bdII1}
\begin{aligned}
&II_{1}=\sum_{E\in\mathcal{E}_{h}}\int_{E}\llbracket a(u-\bar{u}_{h})\bm{\tau}_{h}\rrbracket\\
&\quad=\sum_{E\in\mathcal{E}_{h}}\int_{E}\llbracket(a-\fint_{E}a)(u-\bar{u}_{h})\rrbracket\cdot\{\bm{\tau}_{h}\}\\
&\quad\lesssim h\sum_{E\in\mathcal{E}_{h}}\|\llbracket u-\bar{u}_{h}\rrbracket\|_{L^{2}(E)}\|\{\bm{\tau}_{h}\}\|_{L^{2}(E)}\\
&\quad\lesssim h^{\frac{1}{2}}\sum_{E\in\mathcal{E}_{h}}(h^{-\frac{1}{2}}\|u-\bar{u}_{h}\|_{L^{2}(\omega_{E})}+h^{\frac{1}{2}}\|\nabla_{h}(u-\bar{u}_{h})\|_{L^{2}(\omega_{E})})\|\bm{\tau}_{h}\|_{L^{2}(\omega_{E})}\\
&\quad\lesssim \big(\|u-\bar{u}_{h}\|+h\|\nabla_{h}(u-\bar{u}_{h})\|\big)\|\bm{\tau}_{h}\|\lesssim h^{2}\|u\|_{H^{2}}\|\bm{\tau}_{h}\|.
\end{aligned}
\end{equation}
Combining \eqref{total}--\eqref{bdII1}, we obtain
\begin{equation}\label{finalsigmabar}
\|\Pi_{h}\bm{\sigma}-\bar{\bm{\sigma}}_{h}\|\lesssim
h^{2}\|u\|_{H^{3}}.
\end{equation}
On the other hand, Lemma \ref{superuubar} implies
\begin{equation}\label{sigmasigmabar}
\|{\bm{\sigma}}_{h}-\bar{\bm{\sigma}}_{h}\|\lesssim h^{2}\|u\|_{H^{2}}.
\end{equation}
The theorem then follows from \eqref{finalsigmabar} and \eqref{sigmasigmabar}.
\qed\end{proof}
Key ingredients in the proof of Theorem \ref{superclose} include the $H(\divg)$-conforming flux $\bar{\bm{\sigma}}_{h}$ and the superconvergence estimate \eqref{bdI} for rectangular RT elements. Similarly, Cockburn et al.~\cite{CoGuWa2009} postprocessed the approximate fluxes from a large class of discontinuous Galerkin methods to obtain $H(\divg)$-conforming RT fluxes, which facilitates the superconvergence analysis of recovered potentials.
Theorem \ref{superclose} shows that the corrected flux $\bm{\sigma}_h$ is superclose to the canonical RT interpolant $\Pi_h\bm{\sigma}.$ In contrast, many supercloseness results in the literature are based on corrected interpolants/projections that are superclose to the numerical solution. Readers are referred to \cite{Chen2002,ChenHu2013,CSYZ2015,CaoHuang2017,CSYZ2018} and references therein for superconvergence analysis of $H^1$-conforming and discontinuous Galerkin methods by corrected projection technique using orthogonal polynomials.
\section{Postprocessing and superconvergence}\label{sec3}
For the rectangular RT element, Dur\'an \cite{Duran1990} gave a postprocessing operator $K_{h}^{D}$ satisfying
\begin{subequations}\label{Kh}
\begin{align}
&\|K_{h}^{D}\bm{\tau}_{h}\|\lesssim\|\bm{\tau}_{h}\|\text{ for all }\bm{\tau}_{h}\in\mathcal{RT}_{h},\\
&\|\bm{\sigma}-K_{h}^{D}\Pi_{h}\bm{\sigma}\|\lesssim h^{2}|\bm{\sigma}|_{H^{2}}.
\end{align}
\end{subequations}
Here the input of $K_{h}^{D}$ needs to be $H(\divg)$-conforming. Now assume that the corrected flux satisfies $\bm{\sigma}_{h}\in\mathcal{RT}_{h}$, which holds, e.g., when $f$ is piecewise constant, $\bm{b}=\bm{0}$, and $c=0$. Using \eqref{Kh}, Theorem \ref{superclose}, and the triangle inequality
\begin{equation*}
\|a\nabla u-K_{h}^{D}\bm{\sigma}_{h}\|\leq\|a\nabla u-K_{h}^{D}\Pi_{h}\bm{\sigma}\|+\|K_{h}^{D}(\Pi_{h}\bm{\sigma}-\bm{\sigma}_{h})\|,
\end{equation*}
we obtain
\begin{equation*}
\|a\nabla u-K_{h}^{D}\bm{\sigma}_{h}\|\lesssim h^{2}\|u\|_{H^{3}}.
\end{equation*}
However, in general $\bm{\sigma}_{h}\in\mathcal{RT}^{-1}_{h}$ but $\bm{\sigma}_{h}\notin\mathcal{RT}_{h}$, and thus $K_{h}^{D}$ cannot be applied directly to $\bm{\sigma}_{h}$. In this section, we introduce a simple recovery operator $A_{h}$ based on local weighted averaging.
\begin{definition}\label{defAh}
The operator $A_{h}: \mathcal{RT}_{h}^{-1}\rightarrow\widetilde{\mathcal{V}}_{h}$ is defined as follows.
\begin{enumerate}
\item For each $E\in\mathcal{E}_{h}^{o}$, let $m$ be the midpoint of $E$. Let $K^{+}$ and $K^{-}$ be the two rectangles sharing $E$ as an edge. Define
\begin{equation*}
(A_{h}\bm{\tau}_{h})(m):=\frac{|K^{-}|}{|K^{+}|+|K^{-}|}\bm{\tau}_{h}|_{K^{+}}(m)+\frac{|K^{+}|}{|K^{+}|+|K^{-}|}\bm{\tau}_{h}|_{K^{-}}(m).
\end{equation*}
\item For each $E\in\mathcal{E}_{h}^{\partial}$, let $m$ denote the midpoint of $E$ and $K$ the element having $E$ as an edge. Let $E'$ be the edge of $K$ opposite to $E$ with midpoint $m'$. Let $K'$ be the other element having $E'$ as an edge and $m''$ the midpoint of the edge of $K'$ opposite to $E'$. Define
\begin{equation*}
(A_{h}\bm{\tau}_{h})(m):=((A_{h}\bm{\tau}_{h})(m')-w'(A_{h}\bm{\tau}_{h})(m''))/w,
\end{equation*}
where
\begin{equation*}
w=\frac{|K'|}{|K|+|K'|},\quad w'=\frac{|K|}{|K|+|K'|}.
\end{equation*}
\end{enumerate}
Then $A_{h}\bm{\tau}_{h}$ is the unique finite element in $\widetilde{\mathcal{V}}_{h}$ whose midpoint values are specified in the above two steps.
\end{definition}
Note that $A_h\bm{\tau}_h\not\in H^1(\Omega)$ and that the weights in Definition \ref{defAh} are not chosen in the standard way. We show that $A_{h}$ has a super-approximation property on any nondegenerate rectangular mesh.
\begin{theorem}\label{superapprox}
For $\bm{\tau}_{h}\in\mathcal{RT}_{h}^{-1}$ and $\bm{\tau}\in H^2(\Omega)$, it holds that
\begin{subequations}
\begin{align}
\|A_{h}\bm{\tau}_{h}\|&\lesssim\|\bm{\tau}_{h}\|,\label{a}\\
\|\bm{\tau}-A_{h}\Pi_{h}\bm{\tau}\|&\lesssim h^{2}|\bm{\tau}|_{H^{2}}.\label{b}
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
Consider $K\in\mathcal{T}_{h}$ and
$$\omega_{K}:=\bigcup_{E\subset\partial K}\omega_{E}.$$
Using the stability of $A_{h}$ in the $L_{\infty}$-norm and the inverse inequality, we prove the stability of $A_{h}$ in the $L_{2}$-norm:
\begin{equation*}
\|A_{h}\bm{\tau}_{h}\|_{L_{2}(K)}\lesssim h\|A_{h}\bm{\tau}_{h}\|_{L_{\infty}(K)}\lesssim h\|\bm{\tau}_{h}\|_{L_{\infty}(\omega_{K})}\lesssim \|\bm{\tau}_{h}\|_{L_{2}(\omega_{K})}.
\end{equation*}
\eqref{a} then follows from the above estimate by summing squares over all elements.
Let $E\in\mathcal{E}_{h}^{o}$ with midpoint $m$ and two adjacent elements $K^{+}, K^{-}$ sharing $E$. For $\bm{\tau}_{1}\in Q_{1,1}(\omega_{E})\times Q_{1,1}(\omega_{E})$, we first want to show $(\bm{\tau}_{1}-A_{h}\Pi_{h}\bm{\tau}_{1})(m)=\bm{0}$. Since $\Pi_{h}$ preserves functions in $Q_{1,0}(\omega_{E})\times Q_{0,1}(\omega_{E})$, it suffices to check the cases $\bm{\tau}_{1}=(x_{2},0)^{T}$ and $\bm{\tau}_{1}=(0,x_{1})^{T}$. By linearity we may assume $m=\bm{0}$ without loss of generality. If $E$ is a horizontal interior edge, let $K^{+}=[-\ell_{1}/2,\ell_{1}/2]\times[0,\ell_{2}^{+}]$, $K^{-}=[-\ell_{1}/2,\ell_{1}/2]\times[-\ell_{2}^{-},0]$. Then,
$$\Pi_{h}\begin{pmatrix}x_{2}\\0\end{pmatrix}=\left\{\begin{aligned}(\ell_{2}^{+}/2,0)^{T}\quad&\text{on }K^{+}\\(-\ell_{2}^{-}/2,0)^{T}\quad&\text{on }K^{-}\end{aligned}\right.,\quad\Pi_{h}\begin{pmatrix}0\\x_{1}\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}.$$
In each case, $(\bm{\tau}_{1}-A_{h}\Pi_{h}\bm{\tau}_{1})(m)=\bm{0}$. The same argument works for vertical interior edges.
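The cancellation behind the cross-weighting can also be seen numerically. In the sketch below (ours, with hypothetical element sizes), the average of the two one-sided values of $\Pi_{h}(x_{2},0)^{T}$ with the weights from Definition \ref{defAh} reproduces the exact midpoint value $\bm{0}$ even for unequal heights $\ell_{2}^{+}\neq\ell_{2}^{-}$.

```python
# horizontal interior edge E with midpoint at the origin; tau_1 = (x2, 0)
l1, l2p, l2m = 0.3, 0.2, 0.5    # element width and heights (hypothetical values)
areap, aream = l1*l2p, l1*l2m   # |K^+|, |K^-|

# first component of the RT interpolant of (x2, 0): constant l2/2 on each element
taup_at_m = l2p/2               # Pi_h tau_1 restricted to K^+, evaluated at m
taum_at_m = -l2m/2              # Pi_h tau_1 restricted to K^-, evaluated at m

# cross-weighted average from the definition of A_h
Ah_at_m = aream/(areap + aream)*taup_at_m + areap/(areap + aream)*taum_at_m
assert abs(Ah_at_m) < 1e-12     # equals tau_1(m) = 0 at the midpoint
```

With the naive arithmetic mean (weights $1/2$, $1/2$) the two one-sided values would not cancel when $\ell_{2}^{+}\neq\ell_{2}^{-}$, which is why the nonstandard weights are needed.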
Let $E\in\mathcal{E}_{h}^{\partial}$ and let $K$ be the element having $E$ as an edge. Let $E'$ be the edge of $K$ opposite to $E$ and $K'$ the element sharing the edge $E'$ with $K$. Let $E''$ be the edge of $K'$ opposite to $E'$ and $K''$ the element sharing $E''$ with $K'$. In this case, set $\omega_{E}:=K\cup K'\cup K''$. By a similar argument, we have $(\bm{\tau}_{1}-A_{h}\Pi_{h}\bm{\tau}_{1})(m)=\bm{0}$ when $\bm{\tau}_{1}\in Q_{1,1}(\omega_{E})\times Q_{1,1}(\omega_{E}).$
Using the midpoint reproduction property derived in the preceding paragraphs, for $\bm{\tau}_{1}\in Q_{1,1}(\omega_{K})\times Q_{1,1}(\omega_{K})$, we have
\begin{equation*}
\begin{aligned}
&\|\bm{\tau}-A_{h}\Pi_{h}\bm{\tau}\|_{L_{2}(K)}\lesssim h\|\bm{\tau}-A_{h}\Pi_{h}\bm{\tau}\|_{L_{\infty}(K)}\\
&\quad\lesssim h\|(\text{id}-A_{h}\Pi_{h})(\bm{\tau}-\bm{\tau}_{1})\|_{L_{\infty}(K)}\lesssim h\|\bm{\tau}-\bm{\tau}_{1}\|_{L_{\infty}(\omega_{K})},
\end{aligned}
\end{equation*}
where $\text{id}$ is the identity mapping. Then by standard finite element approximation theory (cf. Corollary 4.4.7 in \cite{BS2008}),
\begin{equation}\label{l2max}
\inf_{\bm{\tau}_{1}\in Q_{1,1}(\omega_{K})\times Q_{1,1}(\omega_{K})}\|\bm{\tau}-\bm{\tau}_{1}\|_{L_{\infty}(\omega_{K})}\lesssim h|\bm{\tau}|_{H^{2}(\omega_{K})}
\end{equation}
and thus
\begin{equation}\label{local}
\|\bm{\tau}-A_{h}\Pi_{h}\bm{\tau}\|_{L_2(K)}\lesssim h^{2}|\bm{\tau}|_{H^{2}(\omega_{K})}.
\end{equation}
Then \eqref{b} follows from \eqref{local} by summing squares over all elements.
\qed\end{proof}
Combining Theorems \ref{superclose} and \ref{superapprox}, we obtain the superconvergent flux recovery estimate.
\begin{theorem}\label{superconvergence}
It holds that
\begin{equation*}
\|a\nabla u-A_{h}\bm{\sigma}_{h}\|\lesssim h^{2}\|u\|_{H^{3}}.
\end{equation*}
\end{theorem}
\begin{proof}
Combining Theorems \ref{superclose}, \ref{superapprox} and the triangle inequality
\begin{equation*}
\|a\nabla u-A_{h}\bm{\sigma}_{h}\|\leq\|a\nabla u-A_{h}\Pi_{h}\bm{\sigma}\|+\|A_{h}(\Pi_{h}\bm{\sigma}-\bm{\sigma}_{h})\|
\end{equation*}
completes the proof.
\qed\end{proof}
Consider $\tilde{\bm{\sigma}}_{h}\in\mathcal{RT}_{h}^{-1}$, where
\begin{equation}\label{sigmatilde}
\tilde{\bm{\sigma}}_{h}|_{K}:=Q_{h}(a\nabla_{h}u_{h})-\bm{r}_{h}(f-\bm{b}\cdot\nabla_{h} u_{h}-cu_{h})(\bm{x}_{K}),
\end{equation}
with $\bm{x}_K=(x_{K,1},x_{K,2})^T$ being the centroid of $K.$
Since $\bm{r}_{h}=O(h)$, we have
$$\|\tilde{\bm{\sigma}}_{h}-\bm{\sigma}_{h}\|\lesssim h^{2}\|u\|_{H^{2}}$$
and thus
$$\|a\nabla u-A_{h}\tilde{\bm{\sigma}}_{h}\|\lesssim h^{2}\|u\|_{H^{3}}.$$
The flux $\tilde{\bm{\sigma}}_{h}$ is favorable because of its lower computational cost.
\begin{remark}
Let $\widetilde{\mathcal{T}}_h$ be the refinement of $\mathcal{T}_{h}$ by connecting midpoints of opposite edges of each rectangle in $\mathcal{T}_{h}.$ Let $\phi_h$ be a bilinear nodal basis function on $\widetilde{\mathcal{T}}_h$ scaled and translated such that $\phi_h$ is centered at $\bm{0}$ and $\int_{\mathbb{R}^2}\phi_h=1$. For a uniform $\mathcal{T}_{h}$ and a piecewise constant $\bm{\tau}_h$ on $\mathcal{T}_{h},$ the convolution $\bm{\tau}_h*\phi_h$ coincides with $A_h\bm{\tau}_h$ at the midpoint of each interior edge in $\mathcal{T}_{h}$.
Since $\nabla_hu_h$ is not piecewise constant and $\mathcal{T}_{h}$ is not necessarily uniform, the edge-based averaging $A_h$ is generally not the same as $\phi_h$-convolution at midpoints of interior edges.
For conforming finite elements, local postprocessing based on spline convolution kernels \cite{BS1977,Thomee1977} is able to produce high-order superconvergence on uniform meshes; see also \cite{RyanShu2007} for a similar technique in discontinuous Galerkin methods. It would be interesting to check whether those kernels lead to superconvergence for nonconforming methods.
\end{remark}
\section{Extensions to triangular elements and higher dimensional space}\label{sec4}
In this section, we extend the superconvergence analysis of Sections \ref{sec2} and \ref{sec3} to the triangular CR element and to NCRT elements in $\mathbb{R}^{d}$ with $d\geq3$.
\subsection{Crouzeix--Raviart elements in $\mathbb{R}^{2}$}
Based on the equivalence between mixed and nonconforming methods for the Poisson equation, a superconvergent recovery method for the CR element applied to the Poisson equation has been developed in \cite{HM2016}. We generalize this result to elliptic equations with lower order terms and variable coefficients. In this subsection, let $\mathcal{T}_{h}$ be a triangular mesh on $\Omega$.
The CR finite element space is
\begin{equation*}
\begin{aligned}
\mathcal{V}_{g,h}^\Delta:=&\{v_{h}\in L_2(\Omega): v_{h}|_{K}\in\text{span}\{1,x_{1},x_{2}\}\text{ for\ all}\ K\in\mathcal{T}_{h}, \\
&v_{h}\text{ is\ continuous at the midpoint of each}\ E\in\mathcal{E}_{h}^{o}, \\
&\fint_{E}v_{h}=\fint_{E}g\text{ for all } E\in\mathcal{E}_{h}^{\partial}\}.\\
\end{aligned}
\end{equation*}
The CR method for \eqref{elliptic} is to find $u_{h}^\Delta\in\mathcal{V}_{g,h}^\Delta$, such that
\begin{equation*}
\ab{a\nabla_{h}u_{h}^\Delta}{\nabla_{h}v}+\ab{\bm{b}\cdot\nabla_{h}u_{h}^\Delta}{v}+\langle cu_{h}^\Delta,v\rangle=\ab{f}{v},\quad\forall v\in\mathcal{V}_{0,h}^\Delta.
\end{equation*}
The lowest order triangular RT finite element space is
\begin{equation*}
{\mathcal{RT}^\Delta_{h}}:=\{\bm{\tau}_{h}\in H(\divg,\Omega): \bm{\tau}_{h}|_{K}\in\text{span}\left\{\begin{pmatrix}1\\0\end{pmatrix},\begin{pmatrix}0\\1\end{pmatrix},\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}\right\}\text{ for all }K\in\mathcal{T}_{h}\}.
\end{equation*}
It has been shown in \cite{Marini1985} that the CR and RT finite element spaces are closely related, as stated in the following lemma.
\begin{lemma}\label{triMarini}
Let $\bar{f}$ and $\bm{\tau}_{h}$ be piecewise constant functions with respect to $\mathcal{T}_{h}$. Assume that
\begin{equation*}
\ab{\bm{\tau}_{h}}{\nabla_{h}v}=\ab{\bar{f}}{v}
\end{equation*}
for all $v\in\mathcal{V}_{0,h}^\Delta$. Then $\bm{\tau}_{h}-\bar{f}\bm{r}_{h}^\Delta\in{\mathcal{RT}}^\Delta_{h},$ with
$$\bm{r}_{h}^\Delta|_{K}(x_1,x_2):=\frac{1}{2}\left(x_{1}-x_{K,1}, x_{2}-x_{K,2} \right)^{T},$$
where $(x_{K,1},x_{K,2})$ is the centroid of $K$.
\end{lemma}
We say that $\mathcal{T}_{h}$ is a uniform parallel mesh if each pair of adjacent triangles in $\mathcal{T}_{h}$ forms a parallelogram. A supercloseness estimate follows from Lemma \ref{triMarini}, the supercloseness estimates for triangular RT elements in \cite{YL2018,HuMa2018}, and the same procedure as in Section \ref{sec2}. By abuse of notation, $\Pi_h$ also denotes the canonical interpolation onto $\mathcal{RT}_h^\Delta.$
\begin{theorem}\label{supercloseCR}
Let $\mathcal{T}_{h}$ be a uniform parallel mesh. Let $$\bm{\sigma}_{h}^\Delta:=\bar{a}\nabla_{h}u_{h}^\Delta-\bm{r}_{h}^\Delta P_{h}(f-cu_{h}^\Delta-\bm{b}\cdot\nabla_{h}u_{h}^\Delta),$$
where $\bar{a}|_{K}=\fint_{K}a$ for $K\in\mathcal{T}_{h}$.
It holds that
\begin{equation*}
\|\Pi_{h}\bm{\sigma}-\bm{\sigma}_{h}^\Delta\|\lesssim
h^{2}|\log h|^{\frac{1}{2}}\|u\|_{W^3_\infty}.
\end{equation*}
\end{theorem}
\begin{proof}
We use similar notations and proceed as in the proof of Theorem \ref{superclose}. Let $\bm{\tau}_{h}=\Pi_{h}\bm{\sigma}-\bar{\bm{\sigma}}_{h}^\Delta$, where $\bar{\bm{\sigma}}_{h}^\Delta=\bar{a}\nabla_{h}\bar{u}_{h}^\Delta-\bm{r}_{h}^\Delta P_{h}(f-cu-\bm{b}\cdot\nabla u)$ and $\bar{u}_h^\Delta$ is the solution to the auxiliary problem \eqref{auxnc} with $\mathcal{V}_{0,h}^\Delta$ replacing $\mathcal{V}_{0,h}$.
It then follows from Lemma \ref{triMarini} that $\bm{\tau}_{h}\in\mathcal{RT}_h^\Delta$ with $\nabla\cdot\bm{\tau}_{h}=0$. Hence $\bm{\tau}_{h}=\nabla^{\perp}w_{h}$ for some continuous piecewise linear function $w_{h}$, where $\nabla^{\perp}=(-\partial_{x_{2}},\partial_{x_{1}})^{T}$. The bound \eqref{bdI} for part $I$ is replaced by
$$|\ab{\bm{\sigma}-\Pi_{h}\bm{\sigma}}{\nabla^\perp w_{h}}|\lesssim h^{2}|\log h|^{\frac{1}{2}}\|\bm{\sigma}\|_{W^2_\infty}\|\nabla^{\perp}w_{h}\|,$$
which is proved in \cite{YL2018}. The rest of the proof is the same as that of Theorem \ref{superclose}.
\qed\end{proof}
For the recovery purpose, let \begin{equation*}
\begin{aligned}
\mathcal{V}_{h}^\Delta:=&\{v_{h}\in L_2(\Omega): v_{h}|_{K}\in\text{span}\{1,x_{1},x_{2}\}\text{ for\ all}\ K\in\mathcal{T}_{h}, \\
&v_{h}\text{ is\ continuous at the midpoint of each}\ E\in\mathcal{E}_{h}^{o}\}.
\end{aligned}
\end{equation*}
Then we consider the postprocessing operator $K_{h}$ defined in \cite{Brandts1994}, see also \cite{Duran1991}.
\begin{definition}
Let $\bm{\tau}_{h}$ be a piecewise constant function.
\begin{enumerate}
\item For each $E\in\mathcal{E}_{h}^{o}$, let $m$ be the midpoint of $E$. Let $K^{+}$ and $K^{-}$ be the two triangles sharing $E$ as an edge. Define
\begin{equation*}
(K_{h}\bm{\tau}_{h})(m):=\frac{1}{2}\bm{\tau}_{h}|_{K^{+}}(m)+\frac{1}{2}\bm{\tau}_{h}|_{K^{-}}(m).
\end{equation*}
\item For each $E\in\mathcal{E}_{h}^{\partial}$, let $m$ denote the midpoint of $E$ and $K$ the element having $E$ as an edge. Let $E'$ be another edge of $K$ with midpoint $m'$. Let $K'$ be the other element having $E'$ as an edge and $m''$ the midpoint of the edge of $K'$ that is parallel to $E$. Define
\begin{equation*}
(K_{h}\bm{\tau}_{h})(m):=2(K_{h}\bm{\tau}_{h})(m')-(K_{h}\bm{\tau}_{h})(m'').
\end{equation*}
\end{enumerate}
Then $K_{h}\bm{\tau}_{h}$ is the unique element in $\mathcal{V}_{h}^\Delta$ whose midpoint values are specified in the above two steps.
\end{definition}
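The two midpoint rules above are simple pointwise operations on the piecewise-constant input. A minimal Python sketch of both rules (the function names and scalar-valued setting are our own illustration, not the implementation used in this paper):

```python
def interior_midpoint(tau_plus, tau_minus):
    """Rule 1: average of the two constant values tau|_{K+}, tau|_{K-}
    sharing an interior edge, evaluated at its midpoint m."""
    return 0.5 * tau_plus + 0.5 * tau_minus

def boundary_midpoint(v_at_mprime, v_at_mdoubleprime):
    """Rule 2: linear extrapolation 2*(K_h tau)(m') - (K_h tau)(m'')
    assigning a value at a boundary-edge midpoint m."""
    return 2.0 * v_at_mprime - v_at_mdoubleprime
```

The extrapolation in rule 2 reproduces linear data exactly: if the recovered values grow linearly along the line through $m''$, $m'$, $m$, the rule returns the exact value at $m$.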
Based on Theorem \ref{supercloseCR}, we obtain the superconvergent recovery for the CR element.
\begin{theorem}
Let $\mathcal{T}_{h}$ be a uniform parallel mesh. Then
\begin{equation*}
\|a\nabla u-K_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)\|\lesssim
h^{2}|\log h|^{\frac{1}{2}}\|u\|_{W^3_\infty}.
\end{equation*}
\end{theorem}
\begin{proof}
The operator $K_{h}$ is known to satisfy Theorem \ref{superapprox} with $K_{h}$ replacing $A_{h}$, see \cite{Brandts1994}.
It then follows from Theorem \ref{supercloseCR} and the same argument in the proof of Theorem \ref{superconvergence} that
\begin{equation}\label{super1}
\|a\nabla u-K_{h}\bm{\sigma}_{h}^\Delta\|\lesssim h^{2}|\log h|^{\frac{1}{2}}\|u\|_{W^3_\infty}.
\end{equation}
Let $p=f-cu-\bm{b}\cdot\nabla u$ and $\tilde{\bm{\sigma}}_{h}^\Delta:=\bar{a}\nabla_{h}u^\Delta_{h}-\bm{r}^\Delta_{h}P_{h}p$. It follows from $\|\bm{r}_h^\Delta\|_{L_\infty}=O(h)$ and \eqref{apriori} that
\begin{equation}\label{sigmabarCR}
\|\bm{\sigma}_{h}^\Delta-\tilde{\bm{\sigma}}_{h}^\Delta\|\lesssim h^{2}\|u\|_{H^{2}}.
\end{equation}
Let $m$ be the midpoint of any $E\in\mathcal{E}_{h}^{o}$. We have
\begin{equation*}
\begin{aligned}
&[K_{h}(\bm{r}_{h}^\Delta P_{h}p)](m)=[K_{h}(\bm{r}_{h}^\Delta p)](m)+[K_{h}(\bm{r}_{h}^\Delta(P_{h}p-p))](m)\\
&\quad=(K_{h}\bm{r}_{h}^\Delta)(m)p(m)+O(h^{2})\|u\|_{W^2_\infty}=O(h^{2})\|u\|_{W^2_\infty}.
\end{aligned}
\end{equation*}
In the last equality, we use $(K_{h}\bm{r}_{h}^\Delta)(m)=0$.
A similar argument works for $E\in\mathcal{E}_{h}^{\partial}$.
Hence
\begin{equation}\label{hot}
\|K_{h}(\bm{r}_{h}^\Delta P_{h}p)\|\lesssim\|K_{h}(\bm{r}_{h}^\Delta P_{h}p)\|_{L_{\infty}}\lesssim h^{2}\|u\|_{W^2_\infty}.
\end{equation}
Combining \eqref{super1}--\eqref{hot} with the triangle inequality
\begin{equation*}
\begin{aligned}
&\|a\nabla u-K_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)\|\leq\|a\nabla u-K_{h}\bm{\sigma}_{h}^\Delta\|\\
&\quad+\|K_{h}(\bm{\sigma}_{h}^\Delta-\tilde{\bm{\sigma}}_{h}^\Delta)\|+\|K_{h}(\bm{r}_{h}^\Delta P_{h}p)\|
\end{aligned}
\end{equation*}
completes the proof.
\qed\end{proof}
It is noted that $K_{h}$ superconverges on mildly structured meshes, see, e.g., \cite{YL2018}. For superconvergence results on mildly perturbed uniform triangular grids, readers are also referred to \cite{LMW2000,BX2003a,XZ2003,BaLi2019,DuZhang2019} and references therein. A disadvantage of $K_{h}$ is that it outputs a nonconforming function, which is sometimes undesirable. For a vertex $z$ in $\mathcal{T}_{h}$, let $\omega_{z}$ be the patch formed by the union of triangles surrounding $z$. Define
$$\widetilde{K}_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)(z):=\sum_{K\subset\omega_{z}}\frac{|K|}{|\omega_{z}|}\bar{a}\nabla_{h}u_{h}^\Delta|_{K}.$$
We then obtain a nodal averaging procedure $\widetilde{K}_{h}$ and a continuous piecewise linear function $\widetilde{K}_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)$. Following an argument similar to that in this section, it is straightforward to show
$$\|a\nabla u-\widetilde{K}_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)\|\lesssim h^{\frac{3}{2}}\|u\|_{H^{3}},$$
provided $\mathcal{T}_{h}$ is a uniform parallel mesh.
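The nodal value $\widetilde{K}_{h}(\bar{a}\nabla_{h}u_{h}^\Delta)(z)$ is just an area-weighted mean of the constant fluxes on the triangles surrounding $z$; a short numpy sketch (the array layout is our own choice):

```python
import numpy as np

def nodal_average(flux_K, area_K):
    """Value at vertex z: sum over K in omega_z of (|K|/|omega_z|) * flux|_K.
    flux_K: (nK, 2) per-triangle constant fluxes; area_K: (nK,) triangle areas."""
    area_K = np.asarray(area_K, dtype=float)
    flux_K = np.asarray(flux_K, dtype=float)
    return (area_K[:, None] * flux_K).sum(axis=0) / area_K.sum()
```

By construction the weights $|K|/|\omega_z|$ sum to one, so a constant flux field is reproduced exactly regardless of the triangle areas.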
\subsection{Rannacher--Turek elements in $\mathbb{R}^{d}$}
Let $\Omega=\Pi_{j=1}^d[\omega_{j,1},\omega_{j,2}]\subset\mathbb{R}^{d}$ be a hypercube where $d\geq3$ is an integer. We assume that $a, \bm{b}, c, f, g$ in \eqref{elliptic} are functions in $\bm{x}=(x_{1},\ldots,x_{d})^T\in\Omega$. Let $\mathcal{T}_{h}$ be a cubical mesh of $\Omega$, where each element $K$ in $\mathcal{T}_{h}$ is of the form
$$K=\Pi_{j=1}^d[x_{j,i_j},x_{j,i_j+1}]=[x_{1,i_1},x_{1,i_1+1}]\times[x_{2,i_2},x_{2,i_2+1}]\times\cdots\times[x_{d,i_d},x_{d,i_d+1}]$$
with $i_1,\ldots,i_d\in\mathbb{Z}^+.$ Let $\mathcal{F}_h$, $\mathcal{F}_h^{o}$, and $\mathcal{F}_h^{\partial}$ denote the set of faces, interior faces, and boundary faces, respectively. The NCRT element space in $\mathbb{R}^{d}$ is
\begin{equation*}
\begin{aligned}
\mathcal{V}_{g,h}^{(d)}:=&\{v\in L_2(\Omega): v|_{K}\in\text{span}\{1,x_{1},\ldots,x_{d},x_{1}^{2}-x_{2}^{2},\ldots,x_{1}^{2}-x_{d}^{2}\}\\
&\text{ for\ all}\ K\in\mathcal{T}_{h}, ~\fint_{F}v\text{ is\ single-valued\ for\ all}\ F\in\mathcal{F}_h^{o}, \\
&\fint_{F}v=\fint_{F}g\text{ at the centroid of each } F\in\mathcal{F}_h^{\partial}\},\\
\end{aligned}
\end{equation*}
where $\fint_{F}v:=\frac{1}{|F|}\int_Fv$ is the surface mean of $v$ on $F$.
The NCRT method for \eqref{elliptic} in $\mathbb{R}^{d}$ is to find $u_{h}^{(d)}\in\mathcal{V}_{g,h}^{(d)}$, such that
\begin{equation}\label{RT3}
\ab{a\nabla_{h}u_{h}^{(d)}}{\nabla_{h}v}+\ab{\bm{b}\cdot\nabla_{h}u_{h}^{(d)}}{v}+\langle cu_{h}^{(d)},v\rangle=\ab{f}{v},\quad\forall v\in\mathcal{V}_{0,h}^{(d)}.
\end{equation}
Let $Q_{1}^{(j)}(K)$ be the space of polynomials on $K$ that are linear in $x_{j}$ and constant in $x_{i}$ for $i\neq j$. Let
\begin{equation*}
\mathcal{RT}_{h}^{(d)}:=\{\bm{\tau}_{h}\in H(\divg,\Omega): \bm{\tau}_{h}|_{K}\in \Pi_{j=1}^{d}Q_{1}^{(j)}(K)\text{ for all }K\in\mathcal{T}_{h}\}.
\end{equation*}
The $H(\divg)$-space in $\mathbb{R}^{d}$ is $H(\divg,\Omega)=\{\bm{\tau}\in\Pi_{j=1}^{d}L_2(\Omega): \nabla\cdot\bm{\tau}\in L_2(\Omega)\}$. The next lemma is a direct generalization of Lemma \ref{mainlemma}. The proof follows from a direct (but tedious) calculation.
\begin{lemma}\label{lemma2}
Let $\bar{f}$ be a piecewise constant, $\bm{\tau}_{h}|_{K}\in\Pi_{j=1}^{d}Q_{1}^{(j)}(K)$ and $\nabla\cdot(\bm{\tau}_{h}|_{K})=0$ for all $K\in\mathcal{T}_{h}$. Assume that
\begin{equation*}
\ab{\bm{\tau}_{h}}{\nabla_{h}v}=\ab{\bar{f}}{v}
\end{equation*}
for all $v\in\mathcal{V}_{0,h}^{(d)}$. Then $\bm{\tau}_{h}-\bar{f}\bm{r}_{h}^{(d)}\in\mathcal{RT}_{h}^{(d)},$
with
\begin{align*}
&\bm{r}_{h}^{(d)}|_{K}(x_1,x_2,\ldots,x_d)\cdot\bm{e}_{i}\\
&\quad:=\ell_{K,1}^{2}\ldots\widehat{\ell_{K,i}^{2}}\ldots\ell_{K,d}^{2}(x_{i}-x_{K,i})/\sum_{j=1}^{d}\ell_{K,1}^{2}\ldots\widehat{\ell_{K,j}^{2}}\ldots\ell_{K,d}^{2}
\end{align*}
for $1\leq i\leq d$,
where $\bm{e}_{i}$ is the $i$-th unit vector, $\widehat{\cdot}$ indicates that the factor underneath is omitted from the product, $K=\Pi_{j=1}^{d}[x_{j,i_j},x_{j,i_j+1}]$, $\ell_{K,j}=x_{j,i_j+1}-x_{j,i_j}$, and $(x_{K,1},\ldots,x_{K,d})$ is the centroid of $K$.
\end{lemma}
Given $\bm{\tau}\in\Pi_{j=1}^dH^1(\Omega)$, the $d$-dimensional RT interpolant $\Pi_{h}^{(d)}\bm{\tau}\in\mathcal{RT}^{(d)}_{h}$ is determined by
\begin{equation}\label{RTinterpolation3}
\int_F(\Pi^{(d)}_{h}\bm{\tau})\cdot\bm{n}_F=\int_F\bm{\tau}\cdot\bm{n}_F,\quad\forall F\in\mathcal{F}_h,
\end{equation}
where $\bm{n}_F$ is a unit normal to $F$.
By Lemma \ref{lemma2} and following exactly the same procedure in Section \ref{sec3}, we obtain a supercloseness estimate in $\mathbb{R}^{d}$.
\begin{theorem}\label{superclose3d}
Let $Q_{h}^{(d)}$ be the $L_2$-projection onto $\nabla_{h}\mathcal{V}_{0,h}^{(d)}$ and
\begin{equation*}
\bm{\sigma}_{h}^{(d)}:=Q_{h}^{(d)}(a\nabla_{h}u_{h}^{(d)})-\bm{r}_{h}^{(d)}P_{h}(f-cu_{h}^{(d)}-\bm{b}\cdot\nabla_{h}u_{h}^{(d)}).
\end{equation*}
It holds that
\begin{equation*}
\|\Pi_{h}^{(d)}(a\nabla u)-\bm{\sigma}_{h}^{(d)}\|\lesssim
h^{2}\|u\|_{H^{3}}.
\end{equation*}
\end{theorem}
In particular, for $d=3$, we have
\begin{equation*}
\bm{r}_{h}^{(3)}|_{K}(\bm{x})=\frac{
\big(\ell_{K,2}^{2}\ell_{K,3}^{2}(x_{1}-x_{K,1}),\ell_{K,3}^{2}\ell_{K,1}^{2}(x_{2}-x_{K,2}),\ell_{K,1}^{2}\ell_{K,2}^{2}(x_{3}-x_{K,3})\big)^{T}}{\ell_{K,1}^{2}\ell_{K,2}^{2}+\ell_{K,2}^{2}\ell_{K,3}^{2}+\ell_{K,3}^{2}\ell_{K,1}^{2}}.
\end{equation*}
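As a sanity check on Lemma \ref{lemma2}, the general-$d$ formula for $\bm{r}_{h}^{(d)}$ can be evaluated numerically and compared against this $d=3$ closed form (a sketch; the function and variable names are ours):

```python
import numpy as np

def r_hd(x, centroid, ell):
    """r_h^{(d)}|_K(x): component i carries the product of all ell_j^2 with the
    i-th factor suppressed, normalized by the sum of such hat-products."""
    ell2 = np.asarray(ell, dtype=float) ** 2
    w = np.prod(ell2) / ell2          # hat-products: (prod_j ell_j^2) / ell_i^2
    return w * (np.asarray(x) - np.asarray(centroid)) / w.sum()
```

For $d=3$ the weights are $(\ell_{K,2}^2\ell_{K,3}^2,\,\ell_{K,3}^2\ell_{K,1}^2,\,\ell_{K,1}^2\ell_{K,2}^2)$ and the normalization is their sum, which matches the displayed expression term by term.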
Let $A_{h}^{(3)}$ be the face-based weighted averaging operator generalized from $A_{h}$ in Definition \ref{defAh}. Using an argument very similar to the proof of Theorem \ref{superapprox}, one could show that $A_{h}^{(3)}\Pi_{h}^{(3)}\bm{\sigma}$ superconverges to $\bm{\sigma}$ in the $L_2$-norm. Hence we obtain the superconvergent flux recovery in $\mathbb{R}^{3}$.
\begin{theorem}\label{superconvergence3d}
For $d=3$, it holds that
\begin{equation*}
\|a\nabla u-A_{h}^{(3)}\bm{\sigma}_{h}^{(3)}\|\lesssim h^{2}\|u\|_{H^{3}}.
\end{equation*}
\end{theorem}
\begin{proof}
The proof is the same as those of Theorems \ref{superapprox} and \ref{superconvergence}. We require $d=3$ since the inequality \eqref{l2max}, with $h^{2-\frac{d}{2}}$ replacing $h$, does not hold for $d>3$.\qed
\end{proof}
\section{Numerical experiments}\label{sec5}
\begin{table}[tbhp]
\caption{Rate of convergence in $\mathbb{R}^{2}$}
\label{table2d}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
ne & $\|u-u_{h}\|$
&$\|a\nabla u-a\nabla_{h}u_{h}\|$
& $\|\Pi_{h}(a\nabla u)-\tilde{\bm{\sigma}}_{h}\|$
& $\|a\nabla u-A_{h}\tilde{\bm{\sigma}}_{h}\|$ \\
\hline
6 &3.455e-02&1.157e+00&5.551e-01&1.451e+00\\
24 &8.394e-03&5.723e-01&1.366e-01&4.591e-01\\
96 &2.112e-03&2.890e-01&3.509e-02&6.692e-02\\
384 &5.350e-04&1.457e-01&8.812e-03&1.274e-02\\
1536 &1.352e-04&7.316e-02&2.227e-03&2.969e-03\\
6144 &3.410e-05&3.671e-02&5.638e-04&7.318e-04\\
24576 &8.582e-06&1.841e-02&1.419e-04&1.826e-04\\
\hline
order &2.045&1.023&2.042&2.098\\
\hline
\end{tabular}
\end{table}
In this section, we test the recovery operators $A_{h}$ and $A_{h}^{(3)}$. Instead of using $\bm{\sigma}_{h}$ analyzed in Sections \ref{sec3} and \ref{sec4}, we compute the modified flux $\tilde{\bm{\sigma}}_{h}$ in \eqref{sigmatilde} in the 2d experiment. For the numerical example in $\mathbb{R}^3$, we modify $\bm{\sigma}_{h}^{(3)}$ in Theorem \ref{superclose3d} and compute the flux $\tilde{\bm{\sigma}}_{h}^{(3)}$ given by
\begin{equation}\label{sigmatilde3}
\tilde{\bm{\sigma}}_{h}^{(3)}|_K=Q_{h}^{(3)}(a\nabla_{h}u_h^{(3)})-\bm{r}_{h}^{(3)}(f-cu_{h}^{(3)}-\bm{b}\cdot\nabla_{h}u_{h}^{(3)})(\bm{x}_K)
\end{equation}
on each cube $K\in\mathcal{T}_h,$
where $\bm{x}_K$ is the centroid of $K$. It is noted that $\nabla _{h}\mathcal{V}_{0,h}$ and $\nabla_{h}\mathcal{V}^{(3)}_{0,h}$ are broken spaces without any inter-element continuity. As a consequence, the projection $Q_{h}$ onto $\nabla _{h}\mathcal{V}_{0,h}$ in \eqref{sigmatilde} and the projection $Q^{(3)}_{h}$ onto $\nabla_{h}\mathcal{V}^{(3)}_{0,h}$ in \eqref{sigmatilde3} can be computed element-wise. Based on Definition \ref{defAh}, the value of $A_h\tilde{\bm{\sigma}}_h\in\widetilde{\mathcal{V}}_h$ at the midpoint of each interior edge is determined by a special weighted average of $\tilde{\bm{\sigma}}_h$ across that edge, while an extrapolation is used to compute $A_h\tilde{\bm{\sigma}}_h$ at midpoints of boundary edges. Recall that midpoint function values at all edges form the dofs of $\widetilde{\mathcal{V}}_h$ and correspond to locally supported basis functions of $\widetilde{\mathcal{V}}_h$. Therefore one could combine midpoint values of $A_h\tilde{\bm{\sigma}}_h$ and the induced basis of $\widetilde{\mathcal{V}}_h$ to compute the value of $A_h\tilde{\bm{\sigma}}_h$ at any necessary discrete points. The postprocessed flux $A^{(3)}_h\tilde{\bm{\sigma}}^{(3)}_h$ in $\mathbb{R}^3$ is calculated in a similar way.
To compute the RT interpolant $\Pi_h(a\nabla u)$, it suffices to use RT edge basis functions and the dof $\int_E(a\nabla u)\cdot\bm{n}_E$ on each edge $E\in\mathcal{E}_h,$ see \eqref{RTinterpolation}. The 4-point Gaussian quadrature $\{(b_i,c_i)\}_{i=1}^4$ is used to approximate the edge integral $\int_E(a\nabla u)\cdot\bm{n}_E$, where $\{b_i\}_{i=1}^4$ are positive weights and $\{c_i\}_{i=1}^4$ are coordinates of quadrature points on a reference interval. As for the interpolant $\Pi^{(3)}_h(a\nabla u)$ in $\mathbb{R}^3$, the related face integral $\int_F(a\nabla u)\cdot\bm{n}_F$ (see \eqref{RTinterpolation3}) is evaluated using the 2d \emph{tensor product} of $\{(b_i,c_i)\}_{i=1}^4$ with 16 interior quadrature points on each rectangular face $F$. When assembling stiffness matrices and right hand sides, we use the 2d (resp.~3d) tensor product of $\{(b_i,c_i)\}_{i=1}^4$ to approximate integrals on rectangular (resp.~cubical) elements. The 3d quadrature rule in each cube makes use of $4^3=64$ quadrature points.
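The quadrature rules used here are standard; a short numpy sketch of the mapped 4-point Gauss rule $\{(b_i,c_i)\}_{i=1}^4$ and its tensor product on a reference face (the helper names are ours):

```python
import numpy as np

# 4-point Gauss-Legendre rule mapped from [-1, 1] to the reference interval [0, 1];
# it integrates polynomials of degree <= 7 exactly.
nodes, wts = np.polynomial.legendre.leggauss(4)
c = 0.5 * (nodes + 1.0)   # quadrature points {c_i} on [0, 1]
b = 0.5 * wts             # positive weights {b_i}, summing to 1

def edge_integral(f):
    """Approximate the reference-edge integral int_0^1 f(s) ds."""
    return float(np.dot(b, f(c)))

def face_integral(f):
    """Tensor-product rule with 16 interior points on the reference face [0, 1]^2."""
    cx, cy = np.meshgrid(c, c)
    return float(np.sum(np.outer(b, b) * f(cx, cy)))
```

The 3d rule on a reference cube is the analogous triple tensor product with $4^3=64$ points.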
The basis of $\mathcal{V}_{0,h}$ (resp.~$\mathcal{V}^{(3)}_{0,h}$) is chosen to be dual to the dofs $\{\fint_E\cdot\}_{E\in\mathcal{E}^o_h}$ (resp.~$\{\fint_F\cdot\}_{F\in\mathcal{F}^o_h}$). With such a basis and the aforementioned element-wise approximate integration, we could numerically solve \eqref{RT} (resp.~\eqref{RT3}) to obtain the dofs of $u_h$ (resp.~$u_h^{(3)}$). Those dofs are then combined with the dual basis to calculate
$u_h$ and $u_h^{(3)}$ at the discrete quadrature points necessary for integral quantities shown in Tables \ref{table2d} and \ref{table3d}.
In each table, `ne' denotes the number of elements in $\mathcal{T}_{h}$. The order of convergence is the exponent $p$ such that the error behaves like $Ch^{p}$ for some constant $C$ independent of $h$. We evaluate $p$ by least squares using the data in Tables \ref{table2d} and \ref{table3d}.
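Concretely, with `ne' elements in dimension $d$ one has $h\sim \mathrm{ne}^{-1/d}$, and $p$ is the least-squares slope of $\log(\text{error})$ against $\log h$. A sketch of the fitting step:

```python
import numpy as np

def convergence_order(ne, errors, dim=2):
    """Least-squares fit of log(error) ~ log(C) + p * log(h), with h ~ ne**(-1/dim)."""
    h = np.asarray(ne, dtype=float) ** (-1.0 / dim)
    p, _log_C = np.polyfit(np.log(h), np.log(np.asarray(errors, dtype=float)), 1)
    return p
```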
\textbf{Problem 1:}
Consider the equation \eqref{elliptic} with $\Omega=[0,1]\times[0,1]$,
\begin{equation*}
\begin{aligned}
&u=\exp(2x_{1}+x_{2})x_{1}^{2}(x_{1}-1)^{2}x_{2}^{2}(x_{2}-1)^{2},\\
&a(\bm{x})=\exp(x_{1}),\quad\bm{b}(\bm{x})=\bm{x},\quad c(\bm{x})=\exp(x_{1}+x_{2}),
\end{aligned}
\end{equation*}
and corresponding $g$ and $f$. The initial rectangular mesh is
$$\mathcal{T}_{h}=\bigcup_{0\leq i\leq2,0\leq j\leq1}[x_{1,i},x_{1,i+1}]\times[x_{2,j},x_{2,j+1}],$$
where $x_{1,0}=0, x_{1,1}=0.4, x_{1,2}=0.8, x_{1,3}=1$ and $x_{2,0}=0, x_{2,1}=0.7, x_{2,2}=1$. We refine the mesh by connecting the midpoints of opposite edges of each rectangle. In the refinement, we randomly perturb the mesh along $x_{1}$- and $x_{2}$-directions by $20\%$ of the length of the smallest interval in that direction, respectively. Numerical results are presented in Table \ref{table2d}. The first three rows in Table \ref{table2d} are not used to evaluate the order since they are outside of the asymptotic regime.
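One plausible reading of this refine-then-perturb step for a single coordinate direction is sketched below (this is our own illustration, not the code used to produce the tables):

```python
import numpy as np

def refine_and_perturb(x, rng, ratio=0.2):
    """Bisect each interval of the 1-D grid x, then shift each interior node by a
    uniform random amount of at most `ratio` times the smallest interval length."""
    mid = 0.5 * (x[:-1] + x[1:])
    y = np.sort(np.concatenate([x, mid]))
    hmin = np.min(np.diff(y))
    y[1:-1] += rng.uniform(-ratio * hmin, ratio * hmin, size=len(y) - 2)
    return y
```

Since each shift is at most $0.2\,h_{\min}$ while neighboring nodes are at least $h_{\min}$ apart, the perturbed grid remains strictly increasing.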
\begin{table}[tbhp]
\caption{Rate of convergence in $\mathbb{R}^{3}$}
\label{table3d}
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
ne & $\|u-u_{h}^{(3)}\|$
&$\|a\nabla u-a\nabla_{h}u_{h}^{(3)}\|$
& $\|\Pi_{h}^{(3)}(a\nabla u)-\tilde{\bm{\sigma}}_{h}^{(3)}\|$
& $\|a\nabla u-A_{h}^{(3)}\tilde{\bm{\sigma}}_{h}^{(3)}\|$ \\
\hline
8&9.341e-01&1.280e+01&1.863e+01 & 2.238e+01 \\
64&4.158e-01&9.418e+00 & 5.547e+00 & 1.516e+01 \\
512&1.200e-01 & 5.032e+00 & 1.902e+00 & 3.448e+00 \\
4096&3.010e-02 & 2.525e+00 & 4.967e-01& 8.599e-01 \\
32768&7.661e-03 & 1.269e+00 & 1.285e-01 & 1.709e-01 \\
\hline
order &2.085&1.044&2.042&2.274\\
\hline
\end{tabular}
\end{table}
\textbf{Problem 2:}
In the second experiment, we consider the equation \eqref{elliptic} with $\Omega=[0,1]\times[0,1]\times[0,1]$,
\begin{equation*}
\begin{aligned}
&u(\bm{x})=\exp(x_{1}+x_{2})\sin(3\pi x_{1})\sin(2\pi x_{2})\sin(\pi x_{3}),\\
&a(\bm{x})=\exp(x_{1}+x_{2}+x_{3}),\quad\bm{b}(\bm{x})=\bm{0},\quad c(\bm{x})=0,
\end{aligned}
\end{equation*}
and corresponding $g$ and $f$. The initial cubical mesh is
$$\mathcal{T}_{h}=\bigcup_{0\leq i\leq1,0\leq j\leq1,0\leq k\leq1}[x_{1,i},x_{1,i+1}]\times[x_{2,j},x_{2,j+1}]\times[x_{3,k},x_{3,k+1}],$$
where
\begin{align*}
&(x_{1,0},x_{1,1},x_{1,2})=(0,0.5,1),\\
&(x_{2,0},x_{2,1},x_{2,2})=(0,0.6,1),\\ &(x_{3,0},x_{3,1},x_{3,2})=(0,0.4,1).
\end{align*}
We refine the mesh by connecting the centroids of opposite faces of each element. In the refinement, we randomly perturb the mesh along the $x_{1}$-, $x_{2}$-, and $x_{3}$-directions by $20\%$ of the length of the smallest interval in each direction. Numerical results are presented in Table \ref{table3d}. For a similar reason, the first two rows are not used.
In both experiments, since the mesh is randomly perturbed, the computed errors vary slightly (but remain similar) between runs. The numerical results show that our superconvergence estimates in Theorems \ref{superclose}, \ref{superconvergence}, and \ref{superclose3d} are asymptotically sharp. We also note that the rate of convergence in the last column of Table \ref{table3d} is slightly larger than the order $2$ predicted by Theorem \ref{superconvergence3d}. One possible reason is that the mesh size in $\mathbb{R}^3$ is not small enough. In fact, solving \eqref{RT3} on the uniform refinement of the finest mesh in Table \ref{table3d} is beyond the computational power of our machine.
\section{Concluding remarks}\label{sec6}
We have developed a superconvergent flux recovery process for the NCRT and CR element methods for second order elliptic equations. It is well known that these elements were originally designed for efficiently solving the Stokes equation, see \cite{CR1973,RT1992}. Hence, extending our analysis and results to the Stokes equation is of practical interest and a direction for future research.
\section{Declarations}
\textbf{Funding} The author did not receive support from any organization for this work.
\vspace{0.2cm}
\noindent\textbf{Conflicts of interest} The author has no relevant financial or non-financial interests to disclose.
\vspace{0.2cm}
\noindent\textbf{Availability of data} Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
\vspace{0.2cm}
\noindent\textbf{Code availability} The code used in this study is available from the author upon request.
https://arxiv.org/abs/2009.04266 --- The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation

Abstract: Comparing metric measure spaces (i.e. a metric space endowed with a probability distribution) is at the heart of many machine learning problems. The most popular distance between such metric measure spaces is the Gromov-Wasserstein (GW) distance, which is the solution of a quadratic assignment problem. The GW distance is however limited to the comparison of metric measure spaces endowed with a probability distribution. To alleviate this issue, we introduce two Unbalanced Gromov-Wasserstein formulations: a distance and a more tractable upper-bounding relaxation. They both allow the comparison of metric spaces equipped with arbitrary positive measures up to isometries. The first formulation is a positive and definite divergence based on a relaxation of the mass conservation constraint using a novel type of quadratically-homogeneous divergence. This divergence works hand in hand with the entropic regularization approach which is popular to solve large scale optimal transport problems. We show that the underlying non-convex optimization problem can be efficiently tackled using a highly parallelizable and GPU-friendly iterative scheme. The second formulation is a distance between mm-spaces up to isometries based on a conic lifting. Lastly, we provide numerical experiments on synthetic examples and domain adaptation data with a Positive-Unlabeled learning task to highlight the salient features of the unbalanced divergence and its potential applications in ML.
\section{Conclusion}
This paper defines two Unbalanced Gromov-Wasserstein formulations. We prove that they are both positive and definite. We provide a scalable, GPU-friendly algorithm to compute one of them, and show that the other is a distance between mm-spaces up to isometry.
These divergences and distances make it possible, for the first time, to blend in a seamless way the transportation geometry of GW with the creation and destruction of mass.
This hybridization is the key to unlock both theoretical and practical issues.
It raises new questions on the metric properties of these divergences and distances, as well as on the geodesic structures they might induce.
It also offers new perspectives on the extension of GW methods to ML applications.
\section*{Acknowledgements}
The work of Thibault S\'ejourn\'e and Gabriel Peyr\'e is supported by the ERC grant NORIA.
The work of G. Peyré was supported in part by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d’avenir" program, reference ANR19-P3IA-0001 (PRAIRIE 3IA Institute).
The authors thank Rémi Flamary for his remarks and advice, as well as Laetitia Chapel for her help in reproducing her experiments.
\section{Numerical experiments}
\label{sec-xp}
This section presents simulations on synthetic examples to highlight the qualitative behavior of UGW and the tightness of the bound $\UGW\geq\CGW$.
In the synthetic experiments, $\mu$ and $\nu$ are probability distributions, which allows us to compare GW with UGW.
\paragraph{Tightness of the bound CGW$\leq$UGW}
We propose to approximate CGW using an alternate minimization similar to that used for $\UGW$, as detailed in Appendix~\ref{sec-app-xp}.
This numerical scheme does not scale to large problems, but allows us to explore numerically how tight the upper bound $\UGW\geq\CGW$ is.
Figure~\ref{fig:cgw-ugw-local} highlights the fact that in Euclidean space $X=Y=\RR^d$, this bound seems to be tight when the two measures are sufficiently close.
We consider discrete measures $\mu = \frac{1}{n}\sum_i \de_{x_i}$ in $X=Y=\RR^d$ and $\nu_t = \frac{1}{n}\sum_i \de_{y_i}$ with $y_i=x_i + t \De_i$, where the $\De_i$ are random perturbations, and denote by $(\Xx,\Yy_t)$ the two mm-spaces associated with the Euclidean distance. As $t \rightarrow 0$, $\mu$ and $\nu_t$ get closer, and numerically UGW$\approx$CGW.
Figure~\ref{fig:cgw-ugw-hist} considers random points $(x_i)_i$ and $(y_i)_i$ and displays the histograms of the ratio CGW/UGW for $n=3$. This shows that while the bound CGW$\leq$UGW does not seem tight, the ratio appears to be bounded even for points that are not close.
This numerical experiment suggests that UGW and CGW are locally equivalent and that UGW is in practice an acceptable proxy for the distance CGW.
Our experiments are thus performed on Euclidean mm-spaces composed of $N,M\in\{2,3,5\}$ samples, and we take $K=L=10$.
To increase the chance of reaching the global minimum, we consider $10$ random initializations and $10$ random permutation matrices $P$, each lifted to a conic plan by setting $\al_{\cdot\cdot kl} = P$ for all $(k,l)$.
The latter initialization is assumed to be close to extremal points of the constraint polytope. Since Theorem~\ref{ThKonno'sGeneralization} holds for Euclidean mm-spaces, the optimal plan is also an extremal point of the polytope.
To compare $\CGW$ with $\UGW$, we set a solver with a level of entropy $\epsilon=10^{-3}$.
In Figure~\ref{fig:cgw-ugw-hist} we set $\rho=10^{-1}$.
\begin{figure}
\centering
\begin{minipage}{0.40\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{sections/figures/fig_compare_locally_ugw_cgw.png}
\caption{Comparison of $\UGW(\Xx,\Yy_t)$ and $\CGW(\Xx,\Yy_t)$ as the support gets shifted by a perturbation. }
\label{fig:cgw-ugw-local}
\end{minipage}\hfill
\begin{minipage}{0.40\textwidth}
\centering
\includegraphics[width=0.9\linewidth]{sections/figures/fig_compare_hist_ugw_cgw.png}
\caption{ Histograms of the ratio CGW / UGW for random spaces with $n\in\{2,3,5\}$ samples. Ratios over 1 are due to local minima.}
\label{fig:cgw-ugw-hist}
\end{minipage}
\end{figure}
\paragraph{Robustness to imbalanced classes.}
In this first example, we take $X=\RR^3$, $Y=\RR^2$ and let $\Ee_2$, $\Ee_3$, $\Cc$ and $\Ss$ be uniform distributions on a 2D ellipse, a 3D ellipse, a square and a sphere, respectively.
We consider mm-spaces of different dimensions to emphasize the ability of (U)GW to compare different spaces.
Figure~\ref{fig-weight} contrasts the transportation plan obtained by GW and UGW for a fixed $\mu=0.5 \Ee_3 + 0.5 \Ss$ and $\nu$ obtained using two different mixtures of $\Ee_2$ and $\Cc$.
The black segments show the largest entries of the transportation matrix $\pi$, for a sub-sampled set of points (to ease visibility), thus effectively displaying the matching induced by the plan.
Furthermore, the widths of the dots are scaled according to the mass of the marginals $\pi_1 \approx \mu$ and $\pi_2 \approx \nu$, i.e., the smaller the point, the smaller the amount of transported mass.
This figure shows that the exact conservation of mass imposed by GW leads to a poor geometrical matching of shapes that have different global masses.
As should be expected, $\UGW$ recovers coherent matchings. We suspect that the alternate minimization algorithm finds the global minimum in these cases.
\paragraph{Influence of $\epsilon$ and debiasing.}
This figure (and the following ones) does not show the influence of $\epsilon$ (which is set to a low value $\epsilon=10^{-2}$ on the domain $[0,1]^2$). This influence is similar to that in classical OT, namely it introduces an extra diffusion bias. This bias can be corrected by computing a debiased cost $\UGW_\epsilon(\mu,\nu)-\UGW_\epsilon(\mu,\mu)/2-\UGW_\epsilon(\nu,\nu)/2 + \tfrac{\epsilon}{2}(m(\mu)^2 - m(\nu)^2)^2$. While this debiasing is shown to yield a valid divergence for W in~\cite{feydy2019interpolating} and for UW in~\cite{sejourne2019sinkhorn}, we leave its study for UGW to future work.
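Given already-computed values of $\UGW_\epsilon$ and the total masses, the correction itself is a one-liner (a sketch with our own argument names):

```python
def debiased_ugw(ugw_xy, ugw_xx, ugw_yy, mass_mu, mass_nu, eps):
    """UGW_eps(mu,nu) - UGW_eps(mu,mu)/2 - UGW_eps(nu,nu)/2
       + (eps/2) * (m(mu)^2 - m(nu)^2)^2."""
    return (ugw_xy - 0.5 * ugw_xx - 0.5 * ugw_yy
            + 0.5 * eps * (mass_mu**2 - mass_nu**2) ** 2)
```

When $\mu=\nu$ (so the three costs coincide and the masses are equal), the debiased value vanishes, as one expects from a divergence.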
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{1mm}}|@{\hspace{1mm}}c@{\hspace{8mm}}c@{\hspace{1mm}}|@{\hspace{1mm}}c}%
{\includegraphics[height=.2\linewidth]{sections/figures/fig_matching_plan_balanced_ratio30}} &
{\includegraphics[height=.2\linewidth]{sections/figures/fig_matching_plan_ugw_ratio30}} &
{\includegraphics[height=.2\linewidth]{sections/figures/fig_matching_plan_balanced_ratio70}} &
{\includegraphics[height=.2\linewidth]{sections/figures/fig_matching_plan_ugw_ratio70}}\\[1mm]
GW & UGW & GW & UGW
\end{tabular}
\caption{GW vs. $\UGW$ transportation plan, using $\nu=0.3 \Ee_2 + 0.7 \Cc$ on the left, and
$\nu=0.7 \Ee_2 + 0.3 \Cc$ on the right. The 2D mm-spaces are lifted into $\RR^3$ by padding the third coordinate with zero.}
\label{fig-weight}
\end{figure}
\paragraph{Robustness to outliers}
Figure~\ref{fig-match} shows another experiment on a 2-D dataset, using the same display convention as in Figure~\ref{fig-weight}. It corresponds to the two moons dataset with additional outliers (displayed in cyan).
Decreasing the value of $\rho$ (thus allowing for more mass creation/destruction in place of transportation) is able to reduce and even remove the influence of the outliers, as expected.
Furthermore, using small values of $\rho$ tends to favor ``local structures'', a behavior quite different from UW~\eqref{eq-uw}.
Indeed, for UW, $\rho \rightarrow 0$ sets to zero all the mass of $\pi$ outside of the diagonal (points are not transported), while for $\UGW$ it is rather pairs of points with dissimilar pairwise distances that cannot be transported together.
\begin{figure}[h!]
\centering
\begin{tabular}{c@{}c@{}c@{}c}
{\includegraphics[width=.24\linewidth]{sections/figures/matching_outlier_1}} &
{\includegraphics[width=.24\linewidth]{sections/figures/matching_outlier_2}} &
{\includegraphics[width=.24\linewidth]{sections/figures/matching_outlier_3}} &
{\includegraphics[width=.24\linewidth]{sections/figures/matching_outlier_4}}\\[0mm]
$\begin{matrix} \GW \\ \rho=\infty \end{matrix}$
& $\begin{matrix} \UGW \\ \rho=10^0 \end{matrix}$
& $\begin{matrix} \UGW \\ \rho=10^{-1} \end{matrix}$
& $\begin{matrix} \text{UW} \\ \rho=10^{-2} \end{matrix}$
\end{tabular}
\caption{GW and UGW applied to two moons with outliers. A matching using UW is provided to display how invariance to isometries is encoded in the matching.}
\label{fig-match}
\end{figure}
\paragraph{Graph matching and comparison with Partial-GW.}
We now consider two graphs $(X,Y)$ equipped with their respective geodesic distances. These graphs correspond to points embedded in $\RR^2$, and the length of each edge is its Euclidean length. The two synthetic graphs are close to being isometric, but differ by the addition or modification of small sub-structures.
The colors $c(x)$ are defined on the ``source'' graph $X$ and are mapped by an optimal plan $\pi$ to the color $\frac{1}{\pi_1(y)} \int_X c(x) \d \pi(x,y)$ at each $y \in Y$. This allows us to visualize the matching induced by $\GW$ and UGW for varying $\rho$, as displayed in Figure~\ref{fig-graph}. The graphs for GW should be taken as a reference since there is no mass creation. The POT library~\cite{flamary2017pot} is used to compute GW.
For large values of $\rho$, UGW behaves similarly to GW, thus producing irregular matchings which do not preserve the overall geometry of the shapes.
In sharp contrast, for smaller values of $\rho$ (e.g. $\rho=10^{-1}$), some fine scale structures (such as the target's small circle) are discarded, and UGW is able to produce a meaningful partial matching of the graphs.
For intermediate values ($\rho=10^0$), we observe that the two branches and the blue cluster of the source are correctly matched to the target, while for GW the blue points are scattered because of the marginal constraint.
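In the discrete setting, with $\pi$ stored as an $n_X\times n_Y$ matrix and the colors as RGB rows, this barycentric color transfer is a couple of numpy lines (a sketch; the names are ours):

```python
import numpy as np

def push_colors(pi, colors_x, tol=1e-12):
    """Target color at y: (1/pi_1(y)) * sum_x c(x) * pi(x, y).
    pi: (nx, ny) transport plan; colors_x: (nx, 3) RGB colors on the source."""
    mass_y = pi.sum(axis=0)                  # marginal pi_1 on the target points
    colors_y = pi.T @ colors_x               # mass-weighted sum of source colors
    return colors_y / np.maximum(mass_y, tol)[:, None]   # guard near-empty points
```

Target points receiving almost no mass (small $\pi_1(y)$, as happens for discarded sub-structures at small $\rho$) get essentially arbitrary colors, which is why they are best read together with the dot sizes.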
\newcommand{\myfig}[1]{\includegraphics[width=.22\linewidth]{sections/figures/plots_graph/pic_graph_#1}}
\begin{figure}[!h]
\centering
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\rotatebox{90}{\quad Source $X$} & \myfig{source_UGW_rho0_1_eps0_01.pdf} & \myfig{source_UGW_rho1_0_eps0_01.pdf} & \myfig{source_UGW_rho10_0_eps0_01.pdf} & \myfig{source_GW.pdf} \\
\rotatebox{90}{\quad Target $Y$} & \myfig{target_UGW_rho0_1_eps0_01.pdf} & \myfig{target_UGW_rho1_0_eps0_01.pdf} & \myfig{target_UGW_rho10_0_eps0_01.pdf} & \myfig{target_GW.pdf} \\
& $\rho=0.1$ & $\rho=1$ & $\rho=10$ & GW ($\rho=\infty$)
\end{tabular}
\caption{Comparison of UGW and GW for graph matching.}
\label{fig-graph}
\end{figure}
\newcommand{\myfigd}[1]{\includegraphics[width=.2\linewidth]{sections/figures/plots_graph/pic_graph_#1}}
\begin{figure}
\centering
\begin{tabular}{@{}c@{}c@{}c@{}c@{}c@{}}
\rotatebox{90}{\quad Source $X$} & \myfigd{source_PGW_mass0_645.pdf} & \myfigd{source_PGW_mass0_925.pdf} & \myfigd{source_PGW_mass0_978.pdf} & \myfig{source_GW.pdf} \\
\rotatebox{90}{\quad Target $Y$} & \myfigd{target_PGW_mass0_645.pdf} & \myfigd{target_PGW_mass0_925.pdf} & \myfigd{target_PGW_mass0_978.pdf} & \myfig{target_GW.pdf} \\
& m=$0.64$ & m=$0.93$ & m=$0.98$ & GW (m=$1$)
\end{tabular}
\caption{Comparison of Partial-GW for graph matching. Here $m$ is the budget of transported mass.}
\label{fig-graph-2}
\end{figure}
Figure~\ref{fig-graph-2} shows a comparison with Partial-GW~\cite{chapel2020partial}, computed using the POT library. Partial-GW is close to UGW with a $\TV^\otimes$ penalty, since partial OT is equivalent to a TV relaxation of the marginal constraints.
UGW with a $\KL^\otimes$ penalty is first computed for a given $\rho$; the total mass $m$ of the resulting optimal plan is then used as a parameter for PGW, which imposes this total mass as a constraint. Figures~\ref{fig-graph} and~\ref{fig-graph-2} display the transportation strategies associated with both methods.
KL-UGW performs smooth transitions between transportation and creation of mass, while PGW either purely transports or purely destroys/creates mass. In Figure~\ref{fig-graph-2}, some nodes of the graphs are removed and thus ignored by the matching. Note also that since PGW is equivalent to solving GW on sub-graphs, the color distributions of GW and PGW are similar.
\paragraph{Influence of $\epsilon$.}
Figures~\ref{fig-weight}, \ref{fig-match}, \ref{fig-graph} and \ref{fig-graph-2} do not show the influence of $\epsilon$.
In those experiments, this parameter is set to a low value $\epsilon=10^{-2}$ on a domain $[0,1]^2$, so as to approximate the optimal plan of the unregularized $\UGW$ problem.
We present now an experiment on graphs which highlights the impact of $(\epsilon,\rho)$ on the plan $\pi$.
We compare two graphs $(\Xx,\Yy)$ displayed in Figure~\ref{fig:foobar}. The graph $\Xx$ is composed of two communities of equal size connected with random edges. The graph $\Yy$ is similar to $\Xx$, but its communities are imbalanced and it contains outliers. Moving inside a community costs $1$, reaching another community costs $4$, and reaching an outlier costs $2$. We equip both mm-spaces with uniform weights and the shortest-path distance.
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{3mm}}c}
\includegraphics[width=0.2\textwidth]{sections/figures/figures_graph_matching_cor/plot_graph_1.pdf}&
\includegraphics[width=0.2\textwidth]{sections/figures/figures_graph_matching_cor/plot_graph_2.pdf}
\end{tabular}
\caption{Graphs $\Xx$ (left) and $\Yy$ (right) plotted using networkx.}
\label{fig:foobar}
\end{figure}
We plot in Figure~\ref{fig-graph-match} optimal transport plans $\pi$ for given values of $(\epsilon,\rho)$, including the balanced case $\GW_\epsilon$ where $\rho=\infty$.
The transport matrix has a block structure: the 2 horizontal blocks correspond to $\Xx$ and its two communities, while the 4 vertical blocks correspond to $\Yy$ (with, from left to right, the large blue community, the small red one, then the pink and green outliers).
Decreasing $\rho$ results in a more structured transport matrix: outliers are removed and inter-community matching is avoided. Again, the marginal constraint of $\GW_\epsilon$ makes the plan more sensitive to structural noise (e.g. outliers) in graphs.
Concerning the parameter $\epsilon$, increasing it creates correlations between pairs of points whose distortion is of the order of $\sqrt{\epsilon}$. Indeed, for $\epsilon=3$ correlations between communities and their outliers appear, even for small $\rho$.
Furthermore, when $\epsilon$ is too large the transport becomes uninformative, which highlights a crucial trade-off between computational speed and expressiveness of the transport plan.
\newcommand{\myfigu}[2]{\includegraphics[width=.2\linewidth]{sections/figures/figures_graph_matching_cor/plot_matrix_eps#1_rho#2.pdf}}
\begin{figure}
\centering
\begin{tabular}{c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c@{\hspace{1mm}}c}
& $\epsilon=8.$ & $\epsilon=3.$ & $\epsilon=1.5$ & $\epsilon=0.5$\\[0mm]
\rotatebox{90}{\quad$\rho=1.$} & {\myfigu{80}{10}} & {\myfigu{30}{10}} & {\myfigu{15}{10}} & {\myfigu{05}{10}}\\[0mm]
\rotatebox{90}{\quad$\rho=5.$} & {\myfigu{80}{50}} & {\myfigu{30}{50}} & {\myfigu{15}{50}} & {\myfigu{05}{50}}\\[0mm]
\rotatebox{90}{\quad$\rho=\infty$} & {\myfigu{80}{None}} & {\myfigu{30}{None}} & {\myfigu{15}{None}} & {\myfigu{05}{None}}\\[0mm]
\end{tabular}
\caption{Display of the optimal transport plan $\pi$. The color scale is common to all plots.}
\label{fig-graph-match}
\end{figure}
\section{Algorithms}
\label{sec-algo}
We focus in this section on the numerical computation of the upper bound $\UGW$ using a bi-convex relaxation and derive an alternate minimization scheme coupled with entropic regularization.
We also propose to approximate CGW via a similar alternate minimization, as detailed in Appendix~\ref{sec-app-xp}.
For both computations we provide guarantees of tightness on the bi-convex relaxation (see Theorem~\ref{ThTightLowerBound}).
The computation of the distance $\CGW$ is heavy in practice because it requires an optimization over a lifted conic space, which needs to be discretized.
Thus this approach does not scale to large problems, but it allows us to explore numerically how tight the upper bound $\UGW\geq\CGW$ is; see Section~\ref{sec-xp}.
The algorithm for $\UGW$ is presented for arbitrary measures, discrete measures being a particular case.
The discretized formulas and algorithms are detailed in Appendix~\ref{appendix-algo}, see also~\cite{chizat2016scaling,peyre2016gromov}.
All implementations are available at \url{https://github.com/thibsej/unbalanced_gromov_wasserstein}.
\subsection{Bi-convex relaxation and tightness}
In order to derive a simple numerical approximation scheme, following~\cite{memoli2011gromov}, we derive a lower bound obtained by introducing two transportation plans. To further accelerate the method and enable GPU-friendly iterations, similarly to~\cite{gold1996softmax,solomon2016entropic}, we consider an entropic regularization. It reads, for any $\epsilon \geq 0$,
\begin{align}\label{eq-lower-bound}
\UGW_\epsilon(\Xx, \Yy) &\eqdef \inf_\pi\Ll(\pi) +\epsilon\KL^\otimes(\pi|\mu\otimes\nu)\nonumber\\
&\geq\uinf{\pi,\ga} \Ff(\pi,\ga) +\epsilon\KL(\pi\otimes\gamma|(\mu\otimes\nu)^{\otimes 2}),\\
\Ff(\pi,\ga) &\eqdef \int_{X^2 \times Y^2} \C(|d_X - d_Y|)\d\pi \otimes \ga\\
&+ \D_\phi(\pi_1\otimes\ga_1|\mu\otimes\mu) + \D_\phi(\pi_2 \otimes \ga_2|\nu \otimes \nu)\,,\nonumber
\end{align}
where $(\ga_1,\ga_2)$ denote the marginals of the plan $\ga$. In the sequel we write $\Ff_\epsilon = \Ff + \epsilon\KL^\otimes$.
Note that in contrast to the entropic regularization of GW~\cite{peyre2016gromov}, here we use a tensorized entropy to maintain the overall homogeneity of the energy.
A simple method to approximate this lower bound is to perform an alternate minimization on $\pi$ and $\ga$, which, for smooth $\phi$, is known to converge to a stationary point since the coupling term in the functional is smooth~\cite{tseng2001convergence}.
Note that if $\pi\otimes\gamma$ is optimal then so is $(s\pi)\otimes(\frac{1}{s}\gamma)$ for any $s>0$. Thus, without loss of generality, we optimize under the constraint $m(\pi)=m(\gamma)$ by setting $s=\sqrt{m(\ga) / m(\pi)}$. This assumption allows us to prove that the bi-convex relaxation is tight. We first provide a general result holding for a wide range of quadratic programs, then state its application to the setting of $\UGW_\epsilon$.
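This rescaling invariance can be verified numerically, since every term of the relaxed functional depends on $(\pi,\ga)$ only through the product $\pi\otimes\ga$. Below is a small numpy sanity check on synthetic discrete data; the cost tensor and the measures are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
mu, nu = rng.random(n) + 0.1, rng.random(n) + 0.1
pi, ga = rng.random((n, n)) + 0.1, rng.random((n, n)) + 0.1
C = rng.random((n, n, n, n))   # stands for the cost tensor C(|d_X - d_Y|)

def kl(a, b):
    """KL between unnormalized nonnegative measures (with mass terms)."""
    return np.sum(a * np.log(a / b)) - a.sum() + b.sum()

def F_eps(pi, ga, rho=1.0, eps=0.1):
    quad = np.einsum('ij,kl,ijkl->', pi, ga, C)
    marg = rho * kl(np.outer(pi.sum(1), ga.sum(1)), np.outer(mu, mu)) \
         + rho * kl(np.outer(pi.sum(0), ga.sum(0)), np.outer(nu, nu))
    ref = np.outer(mu, nu)
    reg = eps * kl(np.einsum('ij,kl->ijkl', pi, ga),
                   np.einsum('ij,kl->ijkl', ref, ref))
    return quad + marg + reg

s = 3.0
print(np.isclose(F_eps(pi, ga), F_eps(s * pi, ga / s)))   # True
```

The quadratic term, the marginal penalties and the tensorized entropy all involve only products $\pi_{ij}\ga_{kl}$, so the value is unchanged under $(s\pi,\ga/s)$.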
\begin{theorem}[Tight relaxation]
\label{ThKonno'sGeneralization}
Let $B$ be a Banach space, let $f:B \to \RR \cup \{ +\infty\}$ be a function, and
let $\Ll: C \to \RR$ be the function defined on the convex set $C \subset B$ by $\Ll(\pi) = \frac 12 \langle \pi,k(\pi)\rangle + 2f(\pi)$
where $k$ is a symmetric bilinear map which is negative (not necessarily definite) on $\Delta C \eqdef \operatorname{Span}(\{ \pi- \ga \,;\, (\pi,\ga) \in C\})$, that is, for any $z \in \Delta C$,
$\langle z , kz \rangle \leq 0$.
Assume that there exists $\pi_0 \in C$ such that $\Ll(\pi_0) < +\infty$.
Then, $ \forall (\pi_*,\ga_*) \in \arg \min \Ff(\pi,\ga) \eqdef \frac 12 \langle \pi,k(\ga)\rangle + f(\pi) + f(\ga)$, we have $\Ff(\pi_*,\pi_*) = \Ff(\ga_*,\ga_*) = \Ff(\pi_*,\ga_*)$.
\par
Moreover, if one assumes either that $k$ is a definite kernel or $f$ is strictly convex, one gets $\pi_* = \ga_*$.
\end{theorem}
\begin{proof}
The function $\Ff$ is the symmetrization of $\Ll$, so that $\Ff(\pi,\pi) = \Ll(\pi)$. By the hypothesis on $\Ll$, the minimum values of these functions (if they exist) are finite.
The two following inequalities are obtained by optimality of $(\pi_*,\ga_*)$,
\begin{equation}
\begin{cases}\label{EqTwoIneq}
\Ff(\pi_*,\ga_*) \leq \Ff(\pi_*,\pi_*) \\
\Ff(\pi_*,\ga_*) \leq \Ff(\ga_*,\ga_*)\,.
\end{cases}
\end{equation}
Note that the hypotheses imply that $\Ff(\pi_*,\pi_*)$ and $\Ff(\ga_*,\ga_*)$ are both finite.
Combining these two inequalities leads to
$
\Ff(\pi_*,\pi_*) + \Ff(\ga_*,\ga_*) - 2\Ff(\pi_*,\ga_*) \geq 0\,,
$
which implies
\begin{equation}
\frac 12 \langle \pi_* - \ga_*, k (\pi_*-\ga_*) \rangle \geq 0\,,
\end{equation}
since the separable parts in $\Ff$ cancel.
Therefore, we deduce when $k$ is definite that $\pi_* = \ga_*$.
\par
We now treat the case when $k$ is not definite. In this case, we only have $\frac 12 \langle \pi_* - \ga_*, k (\pi_*-\ga_*) \rangle= 0$, which implies that $\pi_* - \ga_* \in \operatorname{Ker}(k)$ since $k$ is nonpositive. The first inequality in \eqref{EqTwoIneq} implies $f(\pi_*) \leq f(\ga_*)$, by symmetry $f(\pi_*) = f(\ga_*)$, and in conclusion $\Ff(\pi_*,\pi_*) = \Ff(\pi_*,\ga_*) = \Ff(\ga_*,\ga_*)$.
\par
The last case follows from the observation that
on the segment $[\pi_*,\ga_*] \subset C$, the quadratic part of $\Ff$ is constant. Indeed, for $t\in [0,1]$ and $z=t(\pi_* - \ga_*) + \ga_*$, one has
\begin{align*}
\dotp{z}{k(z)} &= t^2\dotp{(\pi_* - \ga_*)}{k(\pi_* - \ga_*)} + 2t\dotp{\ga_*}{k(\pi_* - \ga_*)} + \dotp{\ga_*}{k(\ga_*)}\\
&=\dotp{\ga_*}{k(\ga_*)},
\end{align*}
since $\pi_* - \ga_* \in \operatorname{Ker}(k)$.
Thus minimizing $\Ff$ on $[\pi_*,\ga_*]$ is reduced to the minimization of $f$ on this segment. By the above remark, $f(\pi_*) = f(\ga_*)$ which implies $\pi_* = \ga_*$ by strict convexity.
\end{proof}
We focus now on the application of Theorem~\ref{ThKonno'sGeneralization} to the setting of Gromov-Wasserstein distances. Konno's result applies to unregularized balanced GW. What is new is its extension to both $\GW$ and $\UGW$ in the presence of entropic regularization.
\begin{theorem}[Tightness for $\textrm{(U)GW}_\epsilon$]
\label{ThTightLowerBound}
For any $\epsilon\geq 0$, for Balanced-GW or for KL-UGW under the constraint $m(\pi)=m(\gamma)$, if the kernel $\lambda(|d_X-d_Y|)$ is negative semi-definite, then minimizing the lower bound \eqref{eq-lower-bound} is equivalent to minimizing the functional defining $\UGW_\epsilon(\Xx, \Yy)$, i.e. the relaxation is tight. More precisely, all minimizers $(\pi_*,\gamma_*)$ of \eqref{eq-lower-bound} satisfy $\Ff_\epsilon(\pi_*,\ga_*)=\Ff_\epsilon(\pi_*,\pi_*)=\Ff_\epsilon(\ga_*,\ga_*)$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{ThTightLowerBound}]
The functional defining $\UGW_\epsilon$ is
$\mathcal{J}(\pi) \eqdef \Ll(\pi) +\epsilon\KL^\otimes(\pi|\mu\otimes\nu)$ and its symmetrization (the lower bound) is
\begin{align}
\Ff_\epsilon(\pi,\ga) &\eqdef \int_{X^2 \times Y^2} \C(|d_X - d_Y|)\d\pi \otimes \ga\\
&+ \D_\phi(\pi_1\otimes\ga_1|\mu\otimes\mu) + \D_\phi(\pi_2 \otimes \ga_2|\nu \otimes \nu) +\epsilon\KL(\pi\otimes\gamma|(\mu\otimes\nu)^{\otimes 2}).\nonumber
\end{align}
Under the linear constraint $m(\pi) = m(\ga)$ the quadratic KL divergence is separable. Indeed, one has thanks to Proposition~\ref{prop-decompose-kl}:
\begin{align*}
\KL(\pi\otimes\ga|(\mu\otimes\nu)^{\otimes 2}) &= m(\ga)\KL(\pi|\mu\otimes\nu) + m(\pi)\KL(\ga|\mu\otimes\nu)\\
&\qquad+ (m(\pi) - m(\mu\otimes\nu))(m(\ga) - m(\mu\otimes\nu))\\
& = m(\pi)\KL(\pi|\mu\otimes\nu) + m(\ga)\KL(\ga|\mu\otimes\nu)\\
&\qquad+ \tfrac{1}{2}(m(\pi) - m(\mu\otimes\nu))^2 + \tfrac{1}{2}(m(\ga) - m(\mu\otimes\nu))^2\\
&= f(\pi) + f(\ga),
\end{align*}
where one writes $f(\pi) = m(\pi)\KL(\pi|\mu\otimes\nu) + \tfrac{1}{2}(m(\pi) - m(\mu\otimes\nu))^2$. In the setting of $\UGW_\epsilon$ with $\D_\phi = \rho\KL$, a similar separability property holds w.r.t.\ the marginals of $(\pi,\ga)$.
%
Thus Theorem~\ref{ThKonno'sGeneralization} applies and we get the tightness of the relaxation for any $\epsilon\geq 0$.
In the Balanced setting where $\D_\phi(\pi_1\otimes\ga_1|\mu\otimes\mu) = \iota_{(=)}(\pi_1\otimes\ga_1|\mu\otimes\mu)$, note that the convex indicator function verifies $\iota_{(=)}(\pi_1\otimes\ga_1|\mu\otimes\mu) = \iota_{(=)}(\pi_1|\mu) + \iota_{(=)}(\ga_1|\mu)$, which is a separable penalty. The same argument holds for $(\pi_2,\ga_2)$. Furthermore, imposing such an equality constraint enforces that $m(\pi)=m(\ga)=m(\mu)=m(\nu)$ (if $m(\mu)\neq m(\nu)$ the program is infeasible). Thus for any $\epsilon\geq 0$ the $\KL$ regularization is separable. All in all, the penalties are separable and Theorem~\ref{ThKonno'sGeneralization} applies again.
\end{proof}
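The separability identity of Proposition~\ref{prop-decompose-kl} invoked in this proof can be checked numerically. Below is a numpy verification on random discrete measures, using the unnormalized KL convention $\KL(a|b)=\int\log(\tfrac{\d a}{\d b})\d a - m(a)+m(b)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
mu, nu = rng.random(n) + 0.1, rng.random(n) + 0.1
pi, ga = rng.random((n, n)) + 0.1, rng.random((n, n)) + 0.1

def kl(a, b):
    """KL between unnormalized nonnegative measures (with mass terms)."""
    return np.sum(a * np.log(a / b)) - a.sum() + b.sum()

ref = np.outer(mu, nu)                         # the measure mu x nu
lhs = kl(np.einsum('ij,kl->ijkl', pi, ga),     # KL(pi x ga | (mu x nu)^{x 2})
         np.einsum('ij,kl->ijkl', ref, ref))
rhs = ga.sum() * kl(pi, ref) + pi.sum() * kl(ga, ref) \
    + (pi.sum() - ref.sum()) * (ga.sum() - ref.sum())
print(np.isclose(lhs, rhs))   # True
```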
The kernel $|d_X- d_Y|^2$ is negative semi-definite when both $d_X$ and $d_Y$ are conditionally negative definite kernels.
This property holds for instance for tree metrics in the case of graphs, or for Euclidean, spherical and hyperbolic distances over their respective manifolds~\cite{feragen2015geodesic}.
It does not hold in general (e.g.\ for graph geodesic distances), and the tightness of the relaxation remains open in that setting.
In practice, we observed that in the indefinite setting the relaxation can fall into spurious minima when $\epsilon$ is small, while this happens much less frequently in the negative semi-definite setting.
\subsection{Alternate Sinkhorn minimization}
Minimizing the lower bound~\eqref{eq-lower-bound} with respect to either $\pi$ or $\ga$ is non-trivial for an arbitrary $\phi$. We restrict our attention to the Kullback-Leibler case $\D_\phi=\rho\KL$ with $\rho >0$, which can be addressed by solving a regularized, convex unbalanced problem as studied in~\cite{chizat2016scaling, sejourne2019sinkhorn}. This is detailed in the following proposition.
\begin{proposition}\label{prop-alternate-simple}
For a fixed $\ga$, the optimal
$\pi\in\arg\umin{\pi} \Ff(\pi,\ga) +\epsilon\KL(\pi\otimes\gamma|(\mu\otimes\nu)^{\otimes 2})$
is the solution of
\begin{align*}
\umin{\pi} \int c^\epsilon_\ga(x,y) \d\pi(x,y) &+ \rho m(\ga) \KL(\pi_1|\mu) + \rho m(\ga) \KL(\pi_2|\nu)\\
&+ \epsilon m(\ga) \KL(\pi|\mu\otimes\nu),
\end{align*}
where $m(\ga) \eqdef \ga(X \times Y)$ is the mass of $\ga$, and
where we define the cost associated to $\ga$ as
\begin{align*}
c^\epsilon_\ga(x,y) &\eqdef \int \C(|d_X(x,\cdot) - d_Y(y,\cdot)|)\d\ga \\
&+ \rho \int \log(\frac{\d\ga_1}{\d\mu})\d\ga_1
+ \rho \int \log(\frac{\d\ga_2}{\d\nu})\d\ga_2
+ \epsilon \int \log(\frac{\d\ga}{\d(\mu\otimes\nu)})\d\ga.
\end{align*}
\end{proposition}
Computing the cost $c^\epsilon_\ga$ for spaces $X$ and $Y$ of $n$ points has in general a cost $O(n^4)$ in time and memory. However, as explained for instance in~\cite{peyre2016gromov}, for the special case $\C(t)=t^2$, this cost is reduced to $O(n^3)$ in time and $O(n^2)$ in memory.
This is the setting we consider in the numerical simulations.
This makes the method applicable at scales of the order of $10^4$ points. For larger datasets, one should use approximation schemes such as hierarchical approaches~\cite{xu2019scalable} or Nystr\"om compression of the kernel~\cite{altschuler2018massively}.
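For $\C(t)=t^2$, the reduction from $O(n^4)$ to $O(n^3)$ follows from expanding the square, so that the cost is assembled from three matrix products. The following numpy sketch illustrates the decomposition of~\cite{peyre2016gromov}; random matrices stand in for the distance matrices and the plan, and this is not the released implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
DX = rng.random((n, n)); DX = DX + DX.T   # symmetric surrogate distance matrices
DY = rng.random((n, n)); DY = DY + DY.T
ga = rng.random((n, n))                   # current plan gamma

# naive O(n^4) evaluation of c(x, y) = sum_{x', y'} (DX[x, x'] - DY[y, y'])^2 ga[x', y']
diff = DX[:, None, :, None] - DY[None, :, None, :]
naive = np.einsum('ijkl,kl->ij', diff**2, ga)

# expanded form: three matrix products, O(n^3) time and O(n^2) memory
fast = (DX**2 @ ga.sum(axis=1))[:, None] \
     + (DY**2 @ ga.sum(axis=0))[None, :] \
     - 2 * DX @ ga @ DY.T

print(np.allclose(naive, fast))   # True
```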
\begin{minipage}{0.95\textwidth}
\begin{algorithm}[H]
\caption{-- \textbf{UGW($\Xx$, $\Yy$, $\rho$, $\epsilon$)} \label{algo-flb-sinkhorn}}
\small{
\textbf{Input:} mm-spaces $(\Xx,\Yy)$, relax. $\rho$, regul. $\epsilon$\\
\textbf{Output:}~$\pi,\ga$~solving~\eqref{eq-lower-bound}
\begin{algorithmic}
\State Init. $\pi=\ga=\mu\otimes\nu / \sqrt{m(\mu) m(\nu)}$, $\g=0$.
\While{$(\pi,\ga)$ has not converged}
\State Update $\pi\leftarrow\ga$,
\State then $c \leftarrow c^\epsilon_{\pi}$, $\;\tilde{\rho} \leftarrow m(\pi)\rho$, $\;\tilde{\epsilon} \leftarrow m(\pi)\epsilon$
\vspace*{0.05cm}
\While{$(\f,\g)$ has not converged}
\State $ \f \leftarrow -\frac{\tilde{\epsilon}\tilde{\rho}}{\tilde{\epsilon} + \tilde{\rho}}
\log \int e^{(\g(y) - c(\cdot,y)) / \tilde{\epsilon}}\d\nu(y)$
\State $\g \leftarrow -\frac{\tilde{\epsilon}\tilde{\rho}}{\tilde{\epsilon} + \tilde{\rho}}
\log \int e^{(\f(x) - c(x,\cdot)) / \tilde{\epsilon}}\d\mu(x)$
\EndWhile
\State Upd. $\ga(x,y)\!\!\leftarrow\!\! e^{\frac{\f(x)+\g(y)-c(x,y)}{\tilde{\epsilon}}}\mu(x)\nu(y)$
\State Rescale $\ga\leftarrow \sqrt{m(\pi) / m(\ga)} \ga$
\EndWhile
\State Return $(\pi,\ga)$.
\end{algorithmic}
}
\end{algorithm}
\end{minipage}
The resulting alternate minimization method is detailed in Algorithm~\ref{algo-flb-sinkhorn}, see Appendix~\ref{appendix-algo} for a discretized version.
It uses the unbalanced Sinkhorn algorithm of~\cite{chizat2016scaling, sejourne2019sinkhorn} as sub-iterations and is initialized using $\pi = \mu\otimes\nu / \sqrt{m(\mu) m(\nu)}$.
This Sinkhorn algorithm operates over a pair of continuous functions (so-called Kantorovitch potentials) $f(x)$ and $g(y)$.
For discrete spaces $X$ and $Y$ of size $n$, these functions are stored in vectors of size $n$, and the integrals involved in the updates become sums. Each Sinkhorn iteration thus has an $O(n^2)$ cost, and all the operations involved can be efficiently mapped to parallelizable GPU routines as detailed in~\cite{chizat2016scaling, sejourne2019sinkhorn}.
Another advantage of using an unbalanced Sinkhorn algorithm is its complexity $O(n^2 / \epsilon)$ to compute an $\epsilon$-approximation, as stated in~\cite{pham2020unbalanced}, which should be compared to $O(n^2 / \epsilon^2)$ operations for balanced Sinkhorn.
Note also that balanced GW is recovered as a special case when setting $\rho\rightarrow+\infty$, so that the factor $\tilde{\rho} / (\tilde{\epsilon} + \tilde{\rho})$ is replaced by $1$ in the iterations.
In order to speed up Sinkhorn inner-loops, especially for small values of $\epsilon$, one can use linear extrapolation~\cite{thibault2017overrelaxed} or non-linear Anderson acceleration~\cite{scieur2016regularized}.
There is an extra scaling step after computing $\ga$, involving the mass $m(\pi)$. It corresponds to the scaling $s$ of $\pi\otimes\gamma$ such that $m(\pi)=m(\gamma)$, and we observe that this scaling is key not only to imposing this mass equality but also to stabilizing the algorithm: without it, we observed that $m(\ga)<1<m(\pi)$, with numerical underflows and overflows as $m(\ga)\rightarrow 0$ and $m(\pi)\rightarrow\infty$.
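The inner loop of Algorithm~\ref{algo-flb-sinkhorn} can be sketched as follows for a fixed cost matrix (a minimal numpy version; the mass rescalings $\tilde\rho,\tilde\epsilon$ of the outer loop are omitted, and plain exponentials are used for readability, whereas a stabilized log-sum-exp is preferable for small $\epsilon$):

```python
import numpy as np

def unbalanced_sinkhorn(c, mu, nu, rho, eps, n_iter=500):
    """Inner Sinkhorn loop for a fixed cost c, with KL marginal penalties.

    Plain-exp updates for readability; for small eps one should
    switch to stabilized log-sum-exp computations.
    """
    tau = rho / (rho + eps)   # the factor rho~/(eps~ + rho~) of the updates
    f, g = np.zeros_like(mu), np.zeros_like(nu)
    for _ in range(n_iter):
        f = -tau * eps * np.log(np.sum(np.exp((g[None, :] - c) / eps) * nu[None, :], axis=1))
        g = -tau * eps * np.log(np.sum(np.exp((f[:, None] - c) / eps) * mu[:, None], axis=0))
    pi = np.exp((f[:, None] + g[None, :] - c) / eps) * np.outer(mu, nu)
    return f, g, pi

rng = np.random.default_rng(3)
n = 5
mu, nu = rng.random(n) + 0.5, rng.random(n) + 0.5
c = rng.random((n, n))
f, g, pi = unbalanced_sinkhorn(c, mu, nu, rho=1.0, eps=0.5)
# the fixed point of these updates satisfies pi_2 = nu * exp(-g / rho)
print(np.allclose(pi.sum(axis=0), nu * np.exp(-g / 1.0)))   # True
```

The final check follows from the updates themselves: after a $\g$-update, the second marginal of the induced plan equals $\nu\, e^{-\g/\rho}$, which provides a convenient convergence test.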
\subsection{Approximation of CGW}
In this section we focus on computing the distance $\CGW$~\eqref{eq-ugw-conic}, which is a quadratic minimization program with linear constraint.
Similarly to what is done for UGW in Section~\ref{sec-algo}, we consider a relaxation using a tensorized conic plan $\al\otimes\be$ with $\al,\be\in\Uu_{p}(\mu,\nu)$.
The minimized cost thus reads
\begin{equation}\label{eq-cgw-relax}
\begin{aligned}
\Hh(\al,\be)\eqdef\int \Dd([d_X(x,x'), r r'], [d_Y(y,y'), s s'])^q \,&\d\al([x,r], [y,s])\\
&\,\d\be([x',r'], [y',s']).
\end{aligned}
\end{equation}
Note that for fixed $\be\in\Uu_{p}(\mu,\nu)$, the minimization w.r.t. $\al$ is a convex linear program with the linear conic constraint set $\Uu_{p}(\mu,\nu)$ and with cost
\begin{equation}\label{eq-conic-local-cost}
\begin{aligned}
\Cc_\Co(x,r,y,s)\eqdef\int &\Dd([d_X(x,x'), r r'], [d_Y(y,y'), s s'])^q \,\d\be([x',r'], [y',s']).
\end{aligned}
\end{equation}
Since we focus on the numerical implementation of CGW, we consider the Gaussian-Hellinger setting, which computes the distortion with $\C(t)=t^2$ and benefits from the reduced memory and computational complexity of calculating $|d_X - d_Y|^2$ (see Section~\ref{sec-algo}).
In that case the cone distance reads for a given $\rho$
\begin{equation}\label{eq-gh-conic-cost}
\begin{aligned}
\Dd([d_X(x,x'), r r'], [d_Y(y,y'), s s'])^2 = \rho\Big[ (rr')^2 + (ss')^2 - 2rr'ss'\, e^{- |d_X - d_Y|^2 / 2\rho} \Big].
\end{aligned}
\end{equation}
Before focusing on the discretization of this problem to make it computable, we prove that when $|d_X - d_Y|^2$ is a conditionally negative definite kernel, the above cost is a negative definite kernel on $\Uu_{p}(\mu,\nu)$.
Thus Theorem~\ref{ThKonno'sGeneralization} holds.
\begin{proposition}
Assume that the kernel $|d_X - d_Y|^2$ is conditionally negative definite. Then the cost~\eqref{eq-gh-conic-cost} is a negative definite kernel on $\Uu_{p}(\mu,\nu)$.
\end{proposition}
\begin{proof}
Take any plan $\al\in\Uu_{p}(\mu,\nu)$. Integrating against $(rr')^2$ or $(ss')^2$ yields a constant term. Indeed one has for $(rr')^2$
%
\begin{align*}
\int (rr')^2\d\al([x,r], [y,s])\d\al([x',r'], [y',s']) &= \Bigg( \int (r)^2\d\al([x,r], [y,s]) \Bigg)^2\\
&= \Bigg( \int \d\mu(x) \Bigg)^2 \\
&= m(\mu)^2.
\end{align*}
%
Thus minimizing~\eqref{eq-gh-conic-cost} is equivalent to minimizing the remaining term $- 2rr'ss'\, e^{- |d_X - d_Y|^2 / 2\rho}$. The function $rr'ss'\, e^{- |d_X - d_Y|^2 / 2\rho}$ is a product of the positive definite kernels $(rr')$, $(ss')$ and $e^{- |d_X - d_Y|^2 / 2\rho}$, the latter being positive definite thanks to Berg's theorem~\cite{berg1984harmonic} and the assumption that the kernel $|d_X - d_Y|^2$ is c.n.d.
%
Due to the extra minus sign we get that the kernel is negative definite, which ends the proof.
\end{proof}
An important point is to implement the constraint set $\Uu_{p}(\mu,\nu)$ which integrates against radial coordinates $(r,s)\in\RR_+^2$.
Such integration is impossible in practice, but thanks to~\cite[Theorem 7.20]{liero2015optimal}, we know that the radius can be restricted to $[0,R]$ where $R^2 = m(\mu)^2 + m(\nu)^2$ (up to a dilation of the plan).
Thus we propose to discretize the constraint by regularly sampling the interval as $\{lR / L,\, l\in\llbracket 0,L\rrbracket\}$.
We consider discrete mm-spaces as in Appendix~\ref{appendix-algo}, i.e. mm-spaces written as $\Xx = (D^X_{i,j}, (\mu_i)_i)$ and $\Yy=(D^Y_{i,j}, (\nu_j)_j)$.
Writing the discrete conic plan as $\al_{ijkl}=\al([x_i, r_k],[y_j, s_l])$, with radial indices $k\in\llbracket 0,K\rrbracket$ and $l\in\llbracket 0,L\rrbracket$, the conic constraints read
\begin{align*}
\sum_{j,k,l}\Big(\frac{kR}{K}\Big)^2\al_{ijkl} = \mu_i \qandq \sum_{i,k,l}\Big(\frac{lR}{L}\Big)^2\al_{ijkl} = \nu_j.
\end{align*}
The cost $\Cc_\Co$~\eqref{eq-conic-local-cost} is computed via the formula
\begin{align*}
\Cc_{ijkl} &\eqdef \sum_{i',j',k',l'} \rho\Bigg[ (\tfrac{kR}{K}\tfrac{k'R}{K})^2 + (\tfrac{lR}{L}\tfrac{l'R}{L})^2\Bigg]\al_{i'j'k'l'}\\
&- 2 \sum_{i',j',k',l'} \rho\Bigg[(\tfrac{kR}{K}\tfrac{k'R}{K}\tfrac{lR}{L}\tfrac{l'R}{L})e^{-|D^X_{i,i'} - D^Y_{j,j'}|^2 / 2\rho} \Bigg]\al_{i'j'k'l'}
\end{align*}
Eventually, the whole program solving one step of the alternate minimization algorithm is given in Equation~\eqref{eq-cgw-discrete}.
The approximation of $\CGW$ is performed by alternately updating $\al$ and $\Cc_\Co$ until the minimization reaches a local minimum:
\begin{equation}\label{eq-cgw-discrete}
\min_{\al_{ijkl}}
\left\{
\begin{aligned}
\sum_{i,j,k,l} \Cc_{ijkl}\al_{ijkl} \,\, \textrm{s.t.}\,\,
\sum_{j,k,l}(\tfrac{kR}{K})^2\al_{ijkl} = \mu_i \,\,\textrm{and}\,\, \sum_{i,k,l}(\tfrac{lR}{L})^2\al_{ijkl} = \nu_j
\end{aligned}
\right\}.
\end{equation}
\section{Background on unbalanced optimal transport}
\label{sec-background}
Following~\cite{liero2015optimal}, this section reviews and generalizes the homogeneous and conic formulations of unbalanced optimal transport. These three formulations are equal in the convex setting of UOT. Our relaxed divergence UGW and conic distance CGW defined in Section~\ref{sec-distance} build upon those constructions, but they are no longer equal due to the non-convexity of GW problems.
\subsection{Homogeneous formulation}
To ease the description of the homogeneous formulation, we use reverse Csisz\'ar entropy functions $\D_\psi$, defined below~\eqref{eq-defn-csiszar}.
Formulation~\eqref{eq-uw} is rewritten as
\begin{equation}
\begin{aligned}\label{eq-uw-reverse}
\text{UW}(\mu,\nu)^q = \uinf{\pi \in \Mm(X^2)} &\int L_{\C(d(x,y))}(\f(x), \g(y))\d\pi(x,y)
+\psi^\prime_\infty(|\mu^\bot| + |\nu^\bot|),
\end{aligned}
\end{equation}
where $L_{c}(r,s) \eqdef c + \psi(r) + \psi(s)$, where $|\mu^\bot|\eqdef\mu^\bot(X)$, and where $(\f, \g)\eqdef (\frac{\d\mu}{\d\pi_1}, \frac{\d\nu}{\d\pi_2})$ are the densities of the Lebesgue decomposition of $(\mu,\nu)$ with respect to $(\pi_1,\pi_2)$, i.e.
\begin{equation}
\mu = \f \pi_1 + \mu^\bot \qandq \nu = \g \pi_2 + \nu^\bot. \label{eq-leb-dens-1}
\end{equation}
Reverse entropies are helpful to make explicit the terms of pure mass creation/destruction $(|\mu^\bot| + |\nu^\bot|)$, and to reinterpret the integral against $\pi$ as a transport term with a new cost $L_{\C(d)}$.
Then the authors of~\cite{liero2015optimal} define the homogeneous formulation HUW as
\begin{equation}
\begin{aligned}\label{eq-uw-homog}
\text{HUW}(\mu,\nu)^q \eqdef \uinf{\pi \in \Mm(X^2)} &\int H_{\C(d(x,y))}(\f(x), \g(y))\d\pi(x,y)
+\psi^\prime_\infty(|\mu^\bot| + |\nu^\bot|),
\end{aligned}
\end{equation}
where the 1-homogeneous function $H_c$ is the perspective transform of $L_c$
\begin{align}\label{eq-def-homog-persp}
H_c(r, s) \eqdef \inf_{\theta\geq 0} \theta\big( c + \psi(\tfrac{r}{\theta}) + \psi(\tfrac{s}{\theta} ) \big)
= \inf_{\theta\geq 0} \theta L_c(\tfrac{r}{\theta}, \tfrac{s}{\theta}).
\end{align}
By definition one has $L_c\geq H_c$, thus $\text{UW}\geq \text{HUW}$. In fact, equality $\text{UW}=\text{HUW}$ holds, as proved in~\cite[Theorem 5.8]{liero2015optimal}.
\subsection{Cone sets, cone distances and explicit settings}
\label{sec-setups}
The conic formulation detailed in Section~\ref{sec-conic-uw} is obtained by performing the optimal transport on the cone set $\Co[X] \eqdef X\times\RR_+ / (X\times\{0\})$, where the extra coordinate accounts for the mass of the particle.
Coordinates of the form $(x,0)$ are merged into a single point called the apex of the cone, denoted $\zc_X$. In the sequel, points of $X\times\RR_+$ are denoted $(x,r)$ and those of $\Co[X]$ are denoted $[x,r]$, to emphasize the quotient operation at the apex.
For a pair $(p,q)\in\RR_+^2$, we define, for any $([x,r],[y,s])\in\Co[X]^2$,
\begin{align}\label{eq-def-cone-dist}
\Dd_{\Co[X]}([x,r], [y,s])^q \eqdef H_{\C(d(x,y))}(r^p, s^p).
\end{align}
In general $\Dd_{\Co[X]}$ is not a distance, but it is always definite as proved by the following result.
\begin{proposition}\label{prop-cone-dist-definite}
Assume that $d$ is definite, $\C^{-1}(\{0\})=\{0\}$ and $\phi^{-1}(\{0\})=\{1\}$. Assume also that for any $(r,s)$, there always exists $\theta^*$ such that $H_c(r, s)= \theta^* L_c(\tfrac{r}{\theta^*}, \tfrac{s}{\theta^*})$.
Then $\Dd_{\Co[X]}$ is definite on $\Co[X]$, i.e. $\Dd_{\Co[X]}([x,r], [y,s]) =0$ if and only if $(r=s=0) \;\textrm{or}\; (r=s \;\textrm{and}\; x=y)$.
\end{proposition}
\begin{proof}
Assume $\Dd_{\Co[X]}([x,r], [y,s]) =0$, and write $\theta^*$ such that
\begin{align*}
\Dd_{\Co[X]}([x,r], [y,s])^q &= \theta^* L_c(\tfrac{r^p}{\theta^*}, \tfrac{s^p}{\theta^*})\\
&= \theta^* \C(d(x,y)) + r^p\phi(\tfrac{\theta^*}{r^p}) + s^p\phi(\tfrac{\theta^*}{s^p}),
\end{align*}
where the last line is given by the definition of reverse entropy.
There are two cases. If $\theta^* >0$, since all terms are nonnegative, they are all equal to $0$. By definiteness of $d$, this yields $x=y$, and because $\phi^{-1}(\{0\})=\{1\}$ we have $r^p=s^p=\theta^*$, hence $r=s$.
If $\theta^*=0$ then $\Dd_{\Co[X]}([x,r], [y,s])^q=\phi(0)(r^p+s^p)$. The assumption $\phi^{-1}(\{0\})=\{1\}$ implies $\phi(0)>0$, thus necessarily $r=s=0$.
\end{proof}
The function $H_c$ can be computed in closed form for a certain number of common entropies $\phi$, and we refer to~\cite[Section 5]{liero2015optimal} for an overview.
Of particular interest are those $\phi$ for which $\Dd_{\Co[X]}$ is a distance, which requires a careful choice of $\C$, $p$ and $q$. We now detail three particular settings where this is the case. In each setting we provide $(\D_\phi, \C, p, q)$ and the associated cone distance $\Dd_{\Co[X]}$.
\paragraph{Gaussian Hellinger distance}
It corresponds to
\begin{align*}
\D_\phi = \KL, \quad \C(t) = t^2 \qandq q=p=2,\\
\Dd_{\Co[X]}([x,r], [y,s])^2 = r^2 + s^2 - 2rse^{-d(x,y)^2 / 2},
\end{align*}
in which case it is proved in~\cite{liero2015optimal} that $\Dd_{\Co[X]}$ is a cone distance.
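As a sanity check, this closed form can be recovered numerically from the perspective-transform definition~\eqref{eq-def-homog-persp}, with $c=\C(d)=d^2$, assuming the standard reverse-entropy convention for $\KL$, namely $\psi(r)=r-\log r-1$. A pure-Python sketch, where a grid search stands in for the infimum over $\theta$:

```python
import math

def psi_kl(r):
    """Reverse entropy of KL: psi(r) = r * phi(1/r), with phi(t) = t log t - t + 1."""
    return r - math.log(r) - 1.0

def H(c, u, v, n_grid=100000, theta_max=5.0):
    """Numerical perspective transform H_c(u, v) = inf_theta theta*(c + psi(u/theta) + psi(v/theta))."""
    return min(
        (theta_max * k / n_grid) * (c + psi_kl(u / (theta_max * k / n_grid))
                                      + psi_kl(v / (theta_max * k / n_grid)))
        for k in range(1, n_grid + 1)
    )

# Gaussian-Hellinger setting: D_phi = KL, C(t) = t^2, p = q = 2
d, r, s = 0.7, 0.9, 1.2
closed_form = r**2 + s**2 - 2 * r * s * math.exp(-d**2 / 2)
numeric = H(d**2, r**2, s**2)
print(abs(numeric - closed_form) < 1e-4)   # True
```

The optimal scale can also be worked out explicitly, $\theta^*=\sqrt{uv}\,e^{-c/2}$, which yields $H_c(u,v)=u+v-2\sqrt{uv}\,e^{-c/2}$ and hence the displayed cone distance with $u=r^2$, $v=s^2$.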
\paragraph{Hellinger-Kantorovich / Wasserstein-Fisher-Rao distance}
It reads
\begin{align*}
\D_\phi =\KL, \quad \C(t) = -\log\cos^2(t\wedge\tfrac{\pi}{2}) \qandq q=p=2,\\
\Dd_{\Co[X]}([x,r], [y,s])^2 = r^2 +s^2 - 2rs\cos(\tfrac{\pi}{2}\wedge d(x,y)),
\end{align*}
in which case it is proved in~\cite{burago2001course} that $\Dd_{\Co[X]}$ is a cone distance.
The weight $\C(t) = -\log\cos^2(t\wedge\tfrac{\pi}{2})$, which might seem more peculiar, is in fact the penalty that turns unbalanced OT into the length space induced by the Gaussian-Hellinger distance (if the ground metric $d$ is itself geodesic), as proved in~\cite{liero2016optimal,chizat2018interpolating}.
This weight introduces a cut-off, because $\C(d(x,y))=+\infty$ if $d(x,y)>\pi/2$: there is no transport between points too far from each other. The choice of $\pi/2$ is arbitrary, and can be modified by scaling $\C \mapsto \C(\cdot/s)$ for some cutoff $s$.
\paragraph{Partial optimal transport}
It corresponds to
\begin{align*}
\D_\phi = \TV, \quad \C(t)=t^q \qandq q\geq 1 \qandq p=1,\\
\Dd_{\Co[X]}([x,r], [y,s])^q = r + s - (r\wedge s)(2-d(x,y)^q)_+,
\end{align*}
in which case it is proved in~\cite{chizat2018unbalanced} that $\Dd_{\Co[X]}$ is a cone distance.
The case $\D_\phi=\TV$ is equivalent to partial unbalanced OT, which produces discontinuities (because of the non-smoothness of the divergence) between regions of the supports which are being transported and regions where mass is being destroyed/created.
Note that~\cite{liero2015optimal} does not mention that this $\Dd_{\Co[X]}$ defines a distance, so this result is new to the best of our knowledge, although it can be proved without a conic lifting that partial OT defines a distance, as explained in~\cite{chizat2018unbalanced}.
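The distance property can also be stress-tested numerically: a randomized search over triples of cone points on the real line (with $q=p=1$) looks for violations of the triangle inequality for this cone cost. A pure-Python sketch, with arbitrary sampling ranges:

```python
import random

def cone_dist_partial(x, r, y, s, q=1.0):
    """Cone cost for D_phi = TV, C(t) = t^q, p = 1, here on the real line:
    D([x,r],[y,s])^q = r + s - min(r, s) * max(2 - |x - y|^q, 0)."""
    d = abs(x - y)
    return (r + s - min(r, s) * max(2.0 - d**q, 0.0)) ** (1.0 / q)

random.seed(4)
ok = True
for _ in range(20000):
    pts = [(random.uniform(0, 3), random.uniform(0, 2)) for _ in range(3)]
    (x, r), (y, s), (z, t) = pts
    ok &= cone_dist_partial(x, r, z, t) <= cone_dist_partial(x, r, y, s) \
                                           + cone_dist_partial(y, s, z, t) + 1e-12
print(ok)   # no violated triple found on this sample
```

Such a randomized check is of course no proof, but it makes accidental errors in the closed-form expression easy to detect.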
\subsection{Conic formulation of UW}
\label{sec-conic-uw}
The last formulation reinterprets HUW as an OT problem on the cone, with the addition of two linear constraints. Informally speaking, $H_c$ becomes $\Dd_{\Co[X]}$, the term $(|\mu^\bot| + |\nu^\bot|)$ is taken into account by the constraints~\eqref{eq-uw-conic-set} below, and the variables $(\f,\g)$ are replaced by $(r^p,s^p)$. It reads
\begin{align}\label{eq-uw-conic}
\text{CUW}(\mu,\nu)^q \eqdef \inf_{\al\in \Uu_p(\mu,\nu)} \int \Dd_{\Co[X]}([x, r], [y, s])^q\d\al([x,r], [y,s]),
\end{align}
where the constraint set $\Uu_{p}(\mu,\nu)$ is defined as
\begin{align}\label{eq-uw-conic-set}
\Uu_{p}(\mu,\nu) \eqdef \enscond{ \al\in\Mm_+(\Co[X]^2) }{
\int_{\RR_+} r^p \d\al_1(\cdot,r)=\mu,
\int_{\RR_+} s^p \d\al_2(\cdot,s)=\nu
}.
\end{align}
Thus CUW amounts to minimizing the Wasserstein distance $\text{W}_{\Dd_{\Co[X]}}(\al_1,\al_2)$ on the cone $(\Co[X], \Dd_{\Co[X]})$. The additional constraints on $(\al_1,\al_2)$ mean that the lift of the mass on the cone must be consistent with the total mass of $(\mu,\nu)$. When $\Dd_{\Co[X]}$ is a distance, CUW inherits the metric properties of $W_{\Dd_{\Co[X]}}$. Our theoretical results rely on an analogous construction for GW.
The following proposition states the equality of the three formulations and summarizes their main properties.
\begin{proposition}[From~\cite{liero2015optimal}]
One has $\text{\upshape{UW}}=\text{\upshape{HUW}}=\text{\upshape{CUW}}$, which are symmetric, positive and definite.
Furthermore, if $(X,d_X)$ and $(\Co[X], \Dd_{\Co[X]})$ are metric spaces with $X$ separable, then $\Mm_+(X)$ endowed with $\text{\upshape{CUW}}$ is a metric space.
\end{proposition}
\begin{proof}
The equality $\text{UW}=\text{HUW}$ is given by~\cite[Theorem 5.8]{liero2015optimal}, while the equality $\text{HUW}=\text{CUW}$ holds thanks to~\cite[Theorem 6.7 and Remark 7.5]{liero2015optimal}, where the latter theorem can be straightforwardly generalized to any cone distance defined by~\eqref{eq-def-cone-dist}. Since $\Dd_{\Co[X]}$ is symmetric, positive and definite (see Proposition~\ref{prop-cone-dist-definite}), so is CUW. Furthermore, if $\Dd_{\Co[X]}$ satisfies the triangle inequality, the separability of $X$ allows one to apply the gluing lemma~\cite[Corollary 7.14]{liero2015optimal}, which generalizes to any exponent $p$ defining $\Uu_{p}(\mu,\nu)$ and any cone distance $\Dd_{\Co[X]}$.
\end{proof}
\section{Algorithmic details and formulas}
\label{appendix-algo}
\subsection{Properties of the quadratic KL divergence}
We present in this section an additional property of the quadratic KL divergence, which reduces the computational cost of evaluating it to that of a standard $\KL$ divergence.
\begin{proposition}\label{prop-decompose-kl}
For any measures $\mu,\nu,\al,\be\in\Mm_+(\Xx)$, one has
\begin{equation}
\begin{aligned}
\KL(\mu\otimes\nu|\al\otimes\be) &= m(\nu)\KL(\mu|\al) + m(\mu)\KL(\nu|\be)\\
&\qquad+ (m(\mu) - m(\al))(m(\nu) - m(\be)).
\end{aligned}
\end{equation}
In particular,
\begin{align}
\KL(\mu\otimes\mu|\nu\otimes\nu) = 2m(\mu)\KL(\mu|\nu) + (m(\mu) - m(\nu))^2.
\end{align}
\end{proposition}
\begin{proof}
Assuming $\KL(\mu\otimes\nu|\al\otimes\be)$ to be finite, one has $\mu = \f \al$ and $\nu = \g\be$. The divergence then expands as
\begin{align*}
\KL(\mu\otimes\nu|\al\otimes\be) &= \int \log(\f\otimes\g) \d\mu\d\nu - m(\mu)m(\nu) + m(\al)m(\be)\\
&= m(\nu)\int\log(\f)\d\mu + m(\mu)\int\log(\g)\d\nu\nonumber\\
&\qquad- m(\mu)m(\nu) + m(\al)m(\be)\\
&= m(\nu)\big[ \KL(\mu|\al) + m(\mu) - m(\al) \big]\nonumber\\
&\qquad+ m(\mu)\big[ \KL(\nu|\be)+ m(\nu) - m(\be) \big]\nonumber\\
&\qquad- m(\mu)m(\nu) + m(\al)m(\be)\\
&=m(\nu)\KL(\mu|\al) + m(\mu)\KL(\nu|\be)\\ &\qquad+ m(\mu)m(\nu) - m(\nu)m(\al) - m(\mu)m(\be) + m(\al)m(\be)\nonumber\\
&=m(\nu)\KL(\mu|\al) + m(\mu)\KL(\nu|\be)\nonumber\\
&\qquad+ (m(\mu) - m(\al))(m(\nu) - m(\be)).
\end{align*}
\end{proof}
In the balanced setting, with $(\mu,\nu)$ probability measures, the regularization reads $\KL^\otimes(\pi|\mu\otimes\nu) = 2\KL(\pi|\mu\otimes\nu)$. Thus, up to a factor $2$, we retrieve the setting of~\cite{peyre2016gromov} as a particular case.
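Proposition~\ref{prop-decompose-kl} can be checked numerically on discrete measures (a small sketch, where `kl` implements the unbalanced divergence $\KL(\mu|\al) = \int\log(\tfrac{\d\mu}{\d\al})\d\mu - m(\mu) + m(\al)$; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    # unbalanced KL between positive discrete measures:
    # KL(p|q) = sum p*log(p/q) - m(p) + m(q)
    return float(np.sum(p * np.log(p / q)) - p.sum() + q.sum())

# random positive measures, all mutually absolutely continuous here
mu, nu, al, be = (rng.uniform(0.1, 1.0, 5) for _ in range(4))

lhs = kl(np.outer(mu, nu).ravel(), np.outer(al, be).ravel())
rhs = (nu.sum() * kl(mu, al) + mu.sum() * kl(nu, be)
       + (mu.sum() - al.sum()) * (nu.sum() - be.sum()))
assert abs(lhs - rhs) < 1e-10   # the decomposition of the proposition
```

The left-hand side lives on the product space ($n^2$ terms), while the right-hand side only involves two $n$-term KL divergences and total masses.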
\subsection{Proof of Proposition~\ref{prop-alternate-simple}}
We now prove Proposition~\ref{prop-alternate-simple} which applies the above result.
\begin{proof}
First note that $\Ff(\ga,\pi)=\Ff(\pi,\ga)$, so that minimizing in the first or in the second argument gives the same solution.
%
Setting $\ga$ to be fixed, the rest follows from the factorisation
\begin{align*}
\KL(\pi_1\otimes\ga_1|\mu\otimes\mu) &= m(\ga)\KL(\pi_1|\mu)\\
&+ m(\pi)\KL(\ga_1|\mu) + (m(\ga) - m(\mu))(m(\pi) - m(\mu))\\
& = m(\pi)\Big[ \KL(\ga_1|\mu) + m(\ga) - m(\mu) \Big]\\
&+ m(\ga)\KL(\pi_1|\mu) - m(\ga)m(\mu)\\
& = m(\pi) \int \log(\frac{\d\ga_1}{\d\mu})\d\ga_1 + m(\ga)\KL(\pi_1|\mu) - m(\ga)m(\mu)\\
& = \int \Bigg(\int \log(\frac{\d\ga_1}{\d\mu})\d\ga_1 \Bigg)\d\pi + m(\ga)\KL(\pi_1|\mu) - m(\ga)m(\mu),
\end{align*}
and also from
$\KL(\ga_1|\mu) = \int\log(\frac{\d\ga_1}{\d\mu})\d\ga_1 - (m(\ga) - m(\mu))$.
Similar formulas hold for $(\pi_2,\gamma_2)$ and $(\pi,\gamma)$. Summing all $\KL$ terms yields the expression for $c^\epsilon_\ga$.
\end{proof}
\subsection{Discrete setting and formulas}
In order to implement these algorithms, one considers discrete mm-spaces $X=(x_i)_{i=1}^n$ and $Y=(y_j)_{j=1}^m$, endowed with discrete measures
$\mu=\sum_i \mu_i \de_{x_i}$ and $\nu=\sum_j \nu_j \de_{y_j}$, where $\mu_i,\nu_j \geq 0$.
The distance matrices are $D^X_{i,i'} \eqdef d_X(x_i,x_{i'})$ and $D^Y_{j,j'} \eqdef d_Y(y_j,y_{j'})$.
Transport plans are thus also discrete $\pi=\sum_{i,j} \pi_{i,j}\de_{(x_i,y_j)}$.
The functional $\Ll$ now reads in this discrete setting
\begin{align*}
\int(d_X(x,x') - d_Y(y,y'))^2\d\pi(x,y)\d\pi(x',y') = \sum_{i,j,k,\ell} (D_{i,j}^X - D_{k,\ell}^Y)^2\pi_{i,k}\pi_{j,\ell},
\end{align*}
\begin{align*}
\qandq \KL(\pi_1\otimes\pi_1 | \mu\otimes\mu) &= \sum_{i,j}\log\Big(\frac{\pi_{1,i}\pi_{1,j}}{\mu_i\mu_j}\Big) \pi_{1,i}\pi_{1,j} - \sum_{i,j} \pi_{1,i}\pi_{1,j} + \sum_{i,j} \mu_i\mu_j\\
&= 2m(\pi)\sum_{i}\log\Big(\frac{\pi_{1,i}}{\mu_i}\Big) \pi_{1,i} -m(\pi)^2 + m(\mu)^2,
\end{align*}
where we define the marginals $\pi_{1,k} \eqdef \sum_j \pi_{k,j}$, $\pi_{2,\ell} \eqdef \sum_i \pi_{i,\ell}$ and $m(\pi)=\sum_{i,j} \pi_{i,j}$.
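The reduced expression for $\KL(\pi_1\otimes\pi_1|\mu\otimes\mu)$ can be checked against a naive evaluation on the product space (a small NumPy sketch with random positive weights):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 4, 6
mu = rng.uniform(0.1, 1.0, n)
pi = rng.uniform(0.1, 1.0, (n, m))
pi1, mpi = pi.sum(axis=1), pi.sum()   # first marginal and total mass m(pi)

# naive evaluation of KL(pi1 ⊗ pi1 | mu ⊗ mu) on the n x n product space
P, M = np.outer(pi1, pi1), np.outer(mu, mu)
lhs = np.sum(P * np.log(P / M)) - P.sum() + M.sum()

# reduced expression: 2 m(pi) * sum_i log(pi1_i/mu_i) pi1_i - m(pi)^2 + m(mu)^2
rhs = 2 * mpi * np.sum(pi1 * np.log(pi1 / mu)) - mpi ** 2 + mu.sum() ** 2
assert abs(lhs - rhs) < 1e-10
```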
When one runs the stabilized implementation of Sinkhorn's iterations with a ground cost $C_{i,j}=C(x_i,y_j)$ between the points, it is necessary to use a Log-Sum-Exp reduction which reads
\begin{align}\label{eq-stable-lse}
\f_i \leftarrow -\frac{\epsilon\rho}{\epsilon + \rho} \text{LSE}_j\big[(g_j - C_{i,j}) / \epsilon + \log(\nu_{j})\big]
\end{align}
where $\text{LSE}_j$ is a reduction performed on the index $j$. It reads
\begin{align}
\text{LSE}_j(C_{i,j}) \eqdef \log \Big(\sum_j \exp(C_{i,j} - \max_k C_{i,k})\Big) + \max_k C_{i,k},
\end{align}
where the logarithm and exponential are pointwise operations.
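A minimal NumPy sketch of this stabilized half-update (the helper names are ours; $w$ stands for the reference weights of the $j$-marginal appearing inside the reduction):

```python
import numpy as np

def lse(M, axis):
    """Stable log-sum-exp: subtract the max before exponentiating, add it back."""
    mx = np.max(M, axis=axis, keepdims=True)
    return np.squeeze(mx, axis=axis) + np.log(np.sum(np.exp(M - mx), axis=axis))

def update_f(g, C, w, eps, rho):
    """f_i <- -(eps*rho/(eps+rho)) * LSE_j[(g_j - C_ij)/eps + log(w_j)]."""
    return -(eps * rho / (eps + rho)) * lse((g[None, :] - C) / eps
                                            + np.log(w)[None, :], axis=1)

# the reduction stays finite even for very large cost entries:
C = np.array([[0.0, 1000.0], [1000.0, 0.0]])
f = update_f(np.zeros(2), C, np.array([0.5, 0.5]), eps=0.1, rho=1.0)
assert np.all(np.isfinite(f))
```

A direct evaluation of $\log\sum_j e^{(g_j-C_{i,j})/\epsilon}\nu_j$ would overflow or underflow for the same inputs.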
\begin{algorithm}[tb]
\caption{-- \textbf{UGW($\Xx$, $\Yy$, $\rho$, $\epsilon$)} in discrete form}
\textbf{Input:} mm-spaces $\Xx = (D^X_{i,j}, (\mu_i)_i)$ and $\Yy=(D^Y_{i,j}, (\nu_j)_j)$, relaxation $\rho$, regularization $\epsilon$ \\
\textbf{Output:} approximation $(\pi,\ga)$ minimizing~\eqref{eq-lower-bound}
\begin{algorithmic}[1]
\State Initialize matrix $\pi_{i,j}=\ga_{i,j}=\mu_i\nu_j / \sqrt{(\sum_i \mu_i) (\sum_j \nu_j)}$, vector $g^{(s=0)}_j=0$.
\While{$\pi$ has not converged}
\State Update $\pi\leftarrow\ga$
\State Define $m(\pi)\leftarrow \sum_{i,j} \pi_{i,j}$, $\tilde{\rho} \leftarrow m(\pi)\rho$, $\tilde{\epsilon} \leftarrow m(\pi)\epsilon$
\State Define $c \leftarrow$ ComputeCost($\Xx$, $\Yy$, $\pi$, $\rho$, $\epsilon$)
\vspace*{0.05cm}
\While{$(\f,\g)$ has not converged}
\State $\f\leftarrow -\frac{\tilde{\epsilon}\tilde{\rho}}{\tilde{\epsilon} + \tilde{\rho}}
\log\Big[\sum_j \exp\big((\g_j - c_{i,j}) / \tilde{\epsilon} + \log\nu_j \big)\Big]$
\State $\g\leftarrow -\frac{\tilde{\epsilon}\tilde{\rho}}{\tilde{\epsilon} + \tilde{\rho}}
\log\Big[\sum_i \exp\big((\f_i - c_{i,j}) / \tilde{\epsilon} + \log\mu_i \big)\Big]$
\EndWhile
\State Update $\ga_{i,j}
\leftarrow \exp\Big[ (\f_i+\g_j-c_{i,j}) / \tilde{\epsilon} \Big]\mu_i\nu_j$
\State Rescale $\ga\leftarrow \sqrt{m(\pi) / m(\ga)} \ga$
\EndWhile
\State Return $(\pi,\ga)$.
\end{algorithmic}
\end{algorithm}
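The loop above, together with the ComputeCost routine it calls, translates almost line by line into NumPy. The following is a sketch for $\D_\phi=\rho\KL$ and $\C(t)=t^2$; all function names are ours and details (initialization of potentials, stopping tests) are simplified:

```python
import numpy as np

def lse(M, axis):
    # numerically stable log-sum-exp along `axis`
    mx = np.max(M, axis=axis, keepdims=True)
    return np.squeeze(mx, axis=axis) + np.log(np.sum(np.exp(M - mx), axis=axis))

def compute_cost(DX, DY, pi, mu, nu, rho, eps):
    # c_{i,l} = A_i + B_l - 2 C_{i,l} + E  (cf. the ComputeCost routine)
    pi1, pi2 = pi.sum(1), pi.sum(0)
    A = (DX ** 2) @ pi1
    B = (DY ** 2) @ pi2
    C = DX @ pi @ DY
    E = (rho * np.sum(pi1 * np.log(pi1 / mu))
         + rho * np.sum(pi2 * np.log(pi2 / nu))
         + eps * np.sum(pi * np.log(pi / np.outer(mu, nu))))
    return A[:, None] + B[None, :] - 2.0 * C + E

def ugw(DX, DY, mu, nu, rho=1.0, eps=0.5, outer=30, inner=200):
    pi = gamma = np.outer(mu, nu) / np.sqrt(mu.sum() * nu.sum())
    for _ in range(outer):
        pi = gamma
        mp = pi.sum()
        trho, teps = mp * rho, mp * eps          # mass-rescaled parameters
        c = compute_cost(DX, DY, pi, mu, nu, rho, eps)
        f, g = np.zeros(len(mu)), np.zeros(len(nu))
        damp = trho / (trho + teps)              # = rho / (rho + eps)
        for _ in range(inner):                   # unbalanced Sinkhorn iterations
            f = -teps * damp * lse((g[None, :] - c) / teps + np.log(nu)[None, :], axis=1)
            g = -teps * damp * lse((f[:, None] - c) / teps + np.log(mu)[:, None], axis=0)
        gamma = np.exp((f[:, None] + g[None, :] - c) / teps) * np.outer(mu, nu)
        gamma *= np.sqrt(mp / gamma.sum())       # mass rescaling step
    return pi, gamma
```

On small problems, fixed iteration counts suffice; a practical implementation would instead monitor the change of $\pi$ and of the potentials $(\f,\g)$.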
We also provide an algorithm that computes the cost $c^\epsilon_{\pi}$ defined in Proposition~\ref{prop-alternate-simple}. We focus on the case $\D_\phi=\rho\KL$ and $\C(t)=t^2$, which is computable with complexity $O(n^3)$ as shown in~\cite{peyre2016gromov}. Indeed, note that one has
\begin{align*}
\int (d_X(x,x') - d_Y(y,y'))^2\d\pi(x',y') &= \int d_X(x,x')^2\d\pi_1(x')\\
&+ \int d_Y(y,y')^2\d\pi_2(y')\\
&- 2 \int d_X(x,x')d_Y(y,y')\d\pi(x',y').
\end{align*}
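This decomposition replaces the naive $O(n^2 m^2)$ tensor contraction of the quadratic term by three matrix products; it can be verified directly on random data (a sketch with symmetric distance matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 4, 5
DX = rng.uniform(0, 1, (n, n)); DX = (DX + DX.T) / 2   # symmetric "distance" matrices
DY = rng.uniform(0, 1, (m, m)); DY = (DY + DY.T) / 2
pi = rng.uniform(0.1, 1.0, (n, m))

# naive evaluation of sum_{i,j,k,l} (DX_ij - DY_kl)^2 pi_ik pi_jl
T = (DX[:, :, None, None] - DY[None, None, :, :]) ** 2
brute = np.einsum('ijkl,ik,jl->', T, pi, pi)

# factored evaluation through the marginals and C = DX @ pi @ DY
pi1, pi2 = pi.sum(1), pi.sum(0)
fast = (pi1 @ (DX ** 2) @ pi1 + pi2 @ (DY ** 2) @ pi2
        - 2.0 * np.sum(pi * (DX @ pi @ DY)))
assert abs(brute - fast) < 1e-10
```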
\begin{algorithm}[H]
\caption{-- \textbf{ComputeCost($\Xx$, $\Yy$, $\pi$, $\rho$, $\epsilon$)} in discrete form}
\textbf{Input:} mm-spaces $\Xx = (D^X_{i,j}, (\mu_i)_i)$ and $\Yy=(D^Y_{k,\ell}, (\nu_j)_j)$, transport matrix $(\pi_{j,k})_{j,k}$, relaxation $\rho$, regularization $\epsilon$ \\
\textbf{Output:} cost $c^\epsilon_{\pi}$ defined in Proposition~\ref{prop-alternate-simple}
\begin{algorithmic}[1]
\State Compute $\pi_{1,j} \leftarrow \sum_k \pi_{j,k}$ and $\pi_{2,k} \leftarrow \sum_j \pi_{j,k}$ \Comment{$\pi_1=\pi\bm{1}$ and $\pi_2=\pi^\top\bm{1}$}
\State Compute $A_i \leftarrow \sum_{j} (D^X_{i,j})^2 \pi_{1,j}$ \Comment{$A = (D^X)^{\circ 2}\pi_1$}
\State Compute $B_\ell \leftarrow \sum_{k} (D^Y_{k,\ell})^2 \pi_{2,k}$ \Comment{$B = (D^Y)^{\circ 2}\pi_2$}
\State Compute $C_{i,\ell} \leftarrow \sum_j D^X_{i,j} \big( \sum_k D^Y_{k,\ell}\pi_{j,k} \big)$ \Comment{$C = D^X \pi D^Y$}
\State Compute $E \leftarrow \rho \sum_j \log\big(\frac{\pi_{1,j}}{\mu_j}\big)\pi_{1,j} + \rho \sum_k \log\big(\frac{\pi_{2,k}}{\nu_k}\big)\pi_{2,k} + \epsilon\sum_{j,k} \log\big(\frac{\pi_{jk}}{\mu_j\nu_k}\big)\pi_{j,k}$
\State Return $c^\epsilon_{\pi, i,\ell} \leftarrow A_i + B_\ell - 2 C_{i,\ell} + E$
\end{algorithmic}
\end{algorithm}
\section{Construction of the conic formulation}
\label{sec-construction-ugw-cgw}
In this section we derive the formulation CGW from UGW in a manner analogous to Section~\ref{sec-background}, so as to highlight the differences with standard UOT.
We first focus on rewriting the functional $\Ll$. Note that the Csiszar divergence $\D_\phi^\otimes(\pi_1|\mu)$ is an integral against $\mu\otimes\mu$, while we want an integral against $\pi\otimes\pi$. To solve this, we use the reverse entropies introduced in Section~\ref{sec-background}: the reverse entropy satisfies $\D_\phi^\otimes(\pi_1|\mu)=\D_\psi^\otimes(\mu|\pi_1)$, which is an integral against $\pi\otimes\pi$ (up to a marginalization).
To make $\D_\psi^\otimes(\mu|\pi_1)$ explicit, we need the Lebesgue decompositions of $(\mu,\nu)$ w.r.t. $(\pi_1,\pi_2)$. They read
\begin{align}
\mu = \f \pi_1 + \mu^\bot \qandq \nu = \g \pi_2 + \nu^\bot,\\
\mu\otimes\mu = (\f\otimes\f) \pi_1\otimes\pi_1 + (\mu\otimes\mu)^\bot \\
\nu\otimes\nu = (\g\otimes\g) \pi_2\otimes\pi_2 + (\nu\otimes\nu)^\bot.
\end{align}
Since in UGW we impose $\pi_1\ll\mu$ and $\pi_2\ll\nu$, note that we have
\eq{\tfrac{\d\pi_1}{\d\mu}=1/\f \qandq \tfrac{\d\pi_2}{\d\nu}=1/\g.}
With these notations in place, the functional $\Ll$ reads
\begin{align}
\Ll(\pi) &= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi
+ \rho\D_\phi^\otimes(\pi_1|\mu) + \rho\D_\phi^\otimes(\pi_2|\nu)\\
&= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi
+ \rho\D_\psi^\otimes(\mu|\pi_1) + \rho\D_\psi^\otimes(\nu|\pi_2)\\
&= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi + \int_{X^2} \rho\psi(\f\otimes\f)\d\pi_1\d\pi_1 + \int_{Y^2} \rho\psi(\g\otimes\g)\d\pi_2\d\pi_2\nonumber\\
&+\int_{X^2} \rho\psi^\prime_\infty\d(\mu\otimes\mu)^\bot + \int_{Y^2} \rho\psi^\prime_\infty\d(\nu\otimes\nu)^\bot\\
&= \int_{X^2 \times Y^2} L_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi\nonumber\\
&+\rho\psi^\prime_\infty((\mu\otimes\mu)^\bot(X^2) + (\nu\otimes\nu)^\bot(Y^2)).\label{eq-rewrite-ugw}
\end{align}
In Equation~\eqref{eq-rewrite-ugw} we see a new function integrated against $\pi\otimes\pi$ that depends on $(x,x',y,y')$ through $(\Gamma,\f\otimes\f,\g\otimes\g)$. It is denoted $L_c \eqdef c + \rho\psi(r) + \rho\psi(s)$ and is the same function as the one involved in UOT, see Section~\ref{sec-background}.
Similar to Section~\ref{sec-background}, the parameter $\theta$ is relaxed to become a pointwise scaling that optimises the perspective of $L_c$ to define $H_c \leq L_c$.
There is an important subtlety between UW and UGW here. We mentioned that Fenchel duality allows one to prove that optimizing $L_c$ or $H_c$ yields the same cost for UW. For UGW, Fenchel duality no longer holds, and thus we have no guarantee of equality between the two formulations. Furthermore, in this construction we apply a scaling $\theta$ to $\pi\otimes\pi$ instead of $\pi$. This operation thus introduces a form of inconsistency with the problem's structure, which should instead consider scalings of the form $\theta(x,y)\theta(x',y')$. This remark is further discussed in Lemma~\ref{lem-eq-both-form}.
All in all we get a lower bound of the UGW functional that reads
\begin{align}
\Ll(\pi)&\geq \int_{X^2 \times Y^2} H_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi\nonumber\\
&\qquad+\rho\psi^\prime_\infty((\mu\otimes\mu)^\bot(X^2) + (\nu\otimes\nu)^\bot(Y^2)).
\end{align}
We have defined a new functional that lower bounds the divergence UGW. We now detail how it is connected to a formulation defined on a cone. The explicit formulas are given in Section~\ref{sec-setups}. They satisfy two key assumptions:
\begin{enumerate}
\item There exists $\theta^*=\theta_c(r,s)$ such that
$$ H_c(r,s)=\theta^* L_c(\tfrac{r}{\theta^*},\tfrac{s}{\theta^*}),$$
\item Defining $\Dd_\Co([x,r],[y,s])^q \eqdef H_{\C(|x-y|)}(r^p,s^p)$, we assume that the choice of $(\C,\D_\phi;p,q)$ is such that $\Dd_\Co$ is a distance on the cone.
\end{enumerate}
Under those two assumptions we operate a relaxation by introducing new variables. Instead of having $(\f(x),\g(y))$ that depend on $(x,y)$, we write $\f=r^p$ and $\g=s^p$, so that $\f\otimes\f=(rr')^p$ and $\g\otimes\g=(ss')^p$. We consider variables $(x,r)$, $(y,s)$, and we lift those variables to the cones $\Co[X],\Co[Y]$, which is possible because $H_\C$ is homogeneous by construction. Under those notations it reads
\begin{align}
H_{\C(|d_X - d_Y|)}(\f\otimes\f,\g\otimes\g) = \Dd_\Co([d_X(x,x'),rr'],[d_Y(y,y'),ss'])^q.
\end{align}
This function is the one we use in our conic formulation. The lift constraint of Equation~\eqref{eq-ugw-conic} appears by considering the remaining terms. They read
\begin{align}
\int_{X^2}\d(\mu\otimes\mu)^\bot &= \int_{X^2}\d[\mu\otimes\mu - (\f\otimes\f) \pi_1\otimes\pi_1]\\
&= \int_{X^2}\d[\mu\otimes\mu - (rr')^p \pi_1\otimes\pi_1].
\end{align}
The orthogonal terms encode a penalization of the homogeneous marginal constraint. In our formulation we let this term vanish by enforcing the constraint, i.e. we impose on the transport plan $\al([x,r],[y,s])$ that it verifies
\begin{align}
\int \xi(x,x')(rr')^p\d\al\d\al = \int\xi(x,x')\d\mu\d\mu \Leftrightarrow \int \xi(x)r^p\d\al = \int\xi(x)\d\mu.
\end{align}
Note that both properties are equivalent: one implication is immediate, and the other holds by density of tensorized functions (i.e. functions of the form $\xi(x)\tilde{\xi}(x')$).
All in all, we have described where the conic functional and its constraints come from. In the sequel we take them as definitions and study the properties of the conic GW program.
\section{Discussion on the quadratic Csiszar divergences}
\label{appendix-quad-csiszar}
We mentioned that the choice of divergences $\D_\phi(\rho\otimes\rho|\nu\otimes\nu)$ instead of $\D_\phi(\rho|\nu)$ is fundamental to obtain the property that $\UGW$ is a distance. We assert that the latter choice of penalty induces a scaling bias that breaks the homogeneity property, and we make this statement more quantitative here.
We first provide a result giving the optimal mass scaling of a given plan $\pi$. This closed form allows one to alternate between optimizing the plan and its mass.
\begin{proposition}\label{prop-optim-scaling}
Take any plan $\pi\in\Mmp(X_1\times X_2)$ such that $\pi\neq 0$ and $(\epsilon,\rho)>0$.
When $\D_\phi = \rho\KL$ the optimal scaling defined by
\begin{align}
\theta^* \eqdef \arg\min_{\theta\geq 0} \Ll(\theta\pi)
+ \epsilon\KL(\theta\pi\otimes\theta\pi|(\mu_1\otimes\mu_2)\otimes (\mu_1\otimes\mu_2))
\end{align}
satisfies the following relation
\begin{align}
2(2\rho + \epsilon)m(\pi)^2\log\theta = &- \int\C(\Gamma)\d\pi\d\pi\\ &- \rho\int\log(\frac{\d\pi_1}{\d\mu_1}\frac{\d\pi_1}{\d\mu_1})\d\pi_1\d\pi_1 - \rho\int\log(\frac{\d\pi_2}{\d\mu_2}\frac{\d\pi_2}{\d\mu_2})\d\pi_2\d\pi_2\nonumber\\
&- \epsilon\int\log(\frac{\d\pi}{\d\mu_1\d\mu_2}\frac{\d\pi}{\d\mu_1\d\mu_2})\d\pi\d\pi\nonumber.
\end{align}
\end{proposition}
\begin{proof}
The function $f(\theta) = \KL(\theta\mu\otimes\mu|\al\otimes\al)$ is differentiable for any $\theta\geq 0$ (the dominated convergence theorem applies). We can differentiate under the integral and because $\phi_{KL}^\prime = \log$ we get
\begin{align}
f^\prime(\theta) = \int \frac{\d\mu\d\mu}{\d\al\d\al}\log\big(\theta\frac{\d\mu\d\mu}{\d\al\d\al}\big)\d\al\d\al = \int \log\big(\theta\frac{\d\mu\d\mu}{\d\al\d\al}\big)\d\mu\d\mu.
\end{align}
If we fix a plan $\pi$ and use this result to differentiate $\Ll(\theta\pi)$ w.r.t. $\theta$ we get the following first order condition
\begin{align}
2\theta\int\C(\Gamma)\d\pi\d\pi &+ 2\theta\rho\int\log(\theta^2\frac{\d\pi_1}{\d\mu_1}\frac{\d\pi_1}{\d\mu_1})\d\pi_1\d\pi_1
+ 2\theta\rho\int\log(\theta^2\frac{\d\pi_2}{\d\mu_2}\frac{\d\pi_2}{\d\mu_2})\d\pi_2\d\pi_2\\
&+ 2\theta\epsilon\int\log(\theta^2\frac{\d\pi}{\d\mu_1\d\mu_2}\frac{\d\pi}{\d\mu_1\d\mu_2})\d\pi\d\pi = 0.
\end{align}
After dividing by $2\theta$ and using the additivity of the logarithm, we get the desired result.
\end{proof}
Note that the numerical complexity can again be reduced by using
\begin{align}
\int\log(\frac{\d\pi_i}{\d\mu_i}\frac{\d\pi_i}{\d\mu_i})\d\pi_i\d\pi_i = 2 m(\pi)\int\log(\frac{\d\pi_i}{\d\mu_i})\d\pi_i.
\end{align}
According to Proposition~\ref{prop-optim-scaling}, given a plan $\pi$, the optimal scaling $\theta$ that minimizes $\Ll(\theta\pi)$ (i.e. without the entropic term, $\epsilon=0$) satisfies
\begin{align}\label{eq-optimal-scaling-unreg}
\theta^* = \frac{ \sqrt{ m(\mu_1)m(\mu_2) } }{ m(\pi) } \exp\big[&-\tfrac{1}{4\rho} \int\C(\Gamma)\d\tilde{\pi}\d\tilde{\pi}\\ &- \tfrac{1}{4}\int\log(\frac{\d\tilde{\pi}_1}{\d\tilde{\mu}_1}\frac{\d\tilde{\pi}_1}{\d\tilde{\mu}_1})\d\tilde{\pi}_1\d\tilde{\pi}_1 - \tfrac{1}{4}\int\log(\frac{\d\tilde{\pi}_2}{\d\tilde{\mu}_2}\frac{\d\tilde{\pi}_2}{\d\tilde{\mu}_2})\d\tilde{\pi}_2\d\tilde{\pi}_2 \big]\nonumber
\end{align}
To obtain Equation~\eqref{eq-optimal-scaling-unreg}, we use again the additivity of the logarithm and the fact that $\d\mu/\d\al = \tfrac{m(\mu)}{m(\al)}\,\d\tilde{\mu}/\d\tilde{\al}$, where tildes denote measures normalized to unit mass.
Note that this formula recovers the homogeneity of the functional: if one multiplies $(\mu_1,\mu_2)$ by some constant, then $\theta^*$ is multiplied by the same factor.
By comparison, consider instead the following functional
\begin{align}
\mathfrak{L}(\pi) \eqdef &\int_{X^2 \times Y^2} \C(\Gamma( (x_1, y_1), (x_2, y_2)))\d\pi(x_1, x_2)\d\pi(y_1, y_2)\\
&+ \rho\D_\phi(\pi_1|\mu_1) + \rho\D_\phi(\pi_2|\mu_2).\nonumber
\end{align}
The optimal scaling $\theta$ that minimizes $\mathfrak{L}(\theta\pi)$ then satisfies
\begin{align}
&2\theta\int\C(\Gamma)\d\pi\d\pi + \rho\int\log(\theta\frac{\d\pi_1}{\d\mu_1})\d\pi_1
+ \rho\int\log(\theta\frac{\d\pi_2}{\d\mu_2})\d\pi_2 = 0\\
&\Leftrightarrow
2\rho m(\pi)\log\theta + 2\theta\int\C(\Gamma)\d\pi\d\pi + \rho\int\log(\frac{\d\pi_1}{\d\mu_1})\d\pi_1
+ \rho\int\log(\frac{\d\pi_2}{\d\mu_2})\d\pi_2 = 0
\end{align}
The solution of this equation involves the Lambert function $W$, defined for $z\geq 0$ by $W(z)e^{W(z)}=z$. Writing the terms as
\begin{align*}
a &= 2\rho m(\pi)\\
b &= \int\C(\Gamma)\d\pi\d\pi\\
c &= \rho\int\log(\frac{\d\pi_1}{\d\mu_1})\d\pi_1 + \rho\int\log(\frac{\d\pi_2}{\d\mu_2})\d\pi_2,
\end{align*}
the solution of the equation $a\log\theta + 2b\theta + c = 0$ reads
\begin{align}
\theta = \exp\big[-W(\frac{2b}{a}e^{-\frac{c}{a}}) - \frac{c}{a}\big].
\end{align}
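One can check numerically that $\theta = \exp[-W(\tfrac{2b}{a}e^{-c/a}) - \tfrac{c}{a}]$ solves the first-order condition $a\log\theta + 2b\theta + c = 0$ (a sketch using SciPy's `lambertw`; the coefficient ranges are arbitrary):

```python
import numpy as np
from scipy.special import lambertw

rng = np.random.default_rng(3)
for _ in range(100):
    a, b = rng.uniform(0.5, 5.0, 2)   # a = 2*rho*m(pi) > 0, b = quadratic cost >= 0
    c = rng.uniform(-5.0, 5.0)        # c gathers the log-density terms (any sign)
    theta = np.exp(-lambertw(2 * b / a * np.exp(-c / a)).real - c / a)
    # theta must solve the first-order condition a*log(theta) + 2*b*theta + c = 0
    assert abs(a * np.log(theta) + 2 * b * theta + c) < 1e-8
```

Since $b\geq 0$, the argument of $W$ is nonnegative, so the principal branch always applies.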
These formulas allow one to fix a plan, for instance $\pi=\mu\otimes\nu$, and compute its optimal scaling. We then take some constant $\kappa$ and look at the optimal scaling for $(\kappa\mu,\kappa\nu)$. For UGW, $\theta$ scales linearly in $\kappa$, as shown by Formula~\eqref{eq-optimal-scaling-unreg}.
This is not the case for the functional $\mathfrak{L}$. More precisely, if we write $\theta_1 = \arg\min\Ll(\theta\pi)$ and $\theta_2 = \arg\min\mathfrak{L}(\theta\pi)$, we have $\theta_1<\theta_2$ when $\kappa<1$ and $\theta_1>\theta_2$ when $\kappa>1$.
In other words, the functional $\mathfrak{L}$ tends to concentrate the mass around $1$. We observed a similar effect on the optimizers $\pi$ of $\mathfrak{L}$, which is a counterintuitive behaviour given the expected homogeneity in the mass.
\section{Introduction}
Comparing data distributions on different metric spaces is a basic problem in machine learning.
This class of problems is for instance at the heart of surface matching~\cite{bronstein2006generalized} and graph matching~\cite{xu2019scalable} (equipping the surface or graph with its associated geodesic distance), regression problems in quantum chemistry~\cite{gilmer2017neural} (viewing the molecules as distributions of points in $\RR^3$) and natural language processing~\cite{grave2019unsupervised,alvarez2018gromov} (where texts in different languages are embedded as point distributions in different vector spaces).
This paper introduces for the first time a class of distances between such objects, formalized below as metric measure spaces.
\paragraph{Metric measure spaces.}
The mathematical way to formalize these problems is to model the data as \emph{metric measure spaces} (mm-spaces).
A mm-space is denoted as $\Xx=(X, d, \mu)$, where $(X,d)$ is a complete separable metric space and $\mu \in \Mm_+(X)$ is a positive Borel measure.
For instance, if $X=(x_i)_i$ is a finite set of points, then $\mu=\sum_i m_i \de_{x_i}$ (where $\de_{x_i}$ is the Dirac mass at $x_i$) is simply a set of positive weights $m_i = \mu(\{x_i\}) \geq 0$ associated to each point $x_i$, accounting for its mass or importance. In particular, setting some $m_i$ to $0$ is equivalent to removing the point $x_i$.
We refer to~\cite{sturm2012space} for a mathematical account on the theory of mm-spaces.
In all the applications highlighted above, it makes sense to perform the comparisons up to isometric transformations of the data.
Two mm-spaces $\Xx=(X, d_X, \mu)$ and $\Yy=(Y, d_Y, \nu)$ are considered to be equal (denoted $\Xx \sim \Yy$) if they are isometric, meaning that there is a bijection $\psi : spt(\mu) \rightarrow spt(\nu)$ (where $spt(\mu)$ is the support of $\mu$) such that $d_X(x,y) = d_Y(\psi(x),\psi(y))$ and $\psi_\sharp \mu=\nu$. Here $\psi_\sharp$ is the push-forward operator, so that $\psi_\sharp \mu=\nu$ is equivalent to imposing $\nu(A)=\mu(\psi^{-1}(A))$ for any set $A \subset Y$. For discrete spaces where $\mu = \sum_i m_i \de_{x_i}$, one should have $\nu=\psi_\sharp \mu=\sum_i m_i \de_{\psi(x_i)}$.
As highlighted by \cite{memoli2011gromov}, considering mm-spaces up to isometry is a powerful way to formalize and analyze a wide variety of problems such as matching, regression and classification of distributions of points belonging to different spaces.
The key to unlocking all these problems is the computation of a distance between mm-spaces up to isometry. So far, existing distances (reviewed below) assume that $\mu$ is a probability distribution, i.e. $\mu(X)=1$. This constraint is neither natural nor convenient for many practical applications in machine learning. The goal of this paper is to alleviate this restriction.
\paragraph{Csiszár divergences.}
The simplest case is when $X=Y$ and one simply ignores the underlying metric.
One can then use Csiszár divergences (or $\phi$-divergences), which perform a pointwise comparison (in contrast with optimal transport distances, which perform a displacement comparison). They are defined using an entropy function $\phi: \RR_+ \rightarrow [0, +\infty]$, which is a convex, lower semicontinuous positive function satisfying $\phi(1)=0$.
Its associated recession constant is $\phi^\prime_\infty \eqdef \lim_{r \rightarrow \infty}\phi(r) / r \in \RR \cup \{+\infty\}$.
For any $(\mu,\nu) \in\Mm_+(X)^2$, we write the Lebesgue decomposition as $\mu = \frac{\d\mu}{\d\nu} \nu + \mu^\bot$. The Csiszár $\phi$-divergence is defined as
\begin{align}\label{eq-defn-csiszar}
\D_\phi(\mu|\nu) \eqdef \int_X \phi\pa{\frac{\d\mu}{\d\nu}}\d\nu + \phi^\prime_\infty\int_X \d\mu^\bot.
\end{align}
This divergence $\D_\phi$ is convex, positive, 1-homogeneous and weak* lower semicontinuous, see~\cite{liero2015optimal} for details.
One can reverse it in the sense $\D_\phi(\mu|\nu)=\D_\psi(\nu|\mu)$ where $\psi$ is the reverse entropy defined as $\psi(r)=r\phi(1/r)$, $\psi(0)=\phi^\prime_\infty$ and $\psi^\prime_\infty=\phi(0)$.
Particular instances of $\phi$-divergences are Kullback-Leibler ($\KL$) for $\phi(r)=r\log(r)-r+1$ (note that here $\phi^\prime_\infty=\infty$), the Total Variation ($\TV$) for $\phi(r)=|r-1|$ and the Hellinger distance for $\phi(r)=(\sqrt{r}-1)^2$.
The indicator divergence, equal to $+\infty$ for $\mu \neq \nu$ and $0$ otherwise, is obtained by using $\phi(r)=\iota_{=}(r)$, which is $0$ if $r=1$ and $+\infty$ otherwise.
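For discrete measures, the definition~\eqref{eq-defn-csiszar} is straightforward to implement, singular part included (a sketch; the function names are ours, and the `np.maximum` guard only avoids evaluating $\log 0$):

```python
import numpy as np

def csiszar(phi, phi_inf, mu, nu):
    """D_phi(mu|nu) for discrete positive measures: integral of phi(d mu/d nu)
    against nu, plus the singular mass of mu weighted by phi'_inf."""
    ac = nu > 0                                   # points where mu << nu
    val = np.sum(nu[ac] * phi(mu[ac] / nu[ac]))
    sing = mu[~ac].sum()                          # mass of mu^⊥
    return val + (phi_inf * sing if sing > 0 else 0.0)

kl = lambda m, n: csiszar(
    lambda r: np.where(r > 0, r * np.log(np.maximum(r, 1e-300)) - r + 1, 1.0),
    np.inf, m, n)
tv = lambda m, n: csiszar(lambda r: np.abs(r - 1), 1.0, m, n)
hell = lambda m, n: csiszar(lambda r: (np.sqrt(r) - 1) ** 2, 1.0, m, n)

mu = np.array([0.5, 0.5, 0.2])
nu = np.array([0.4, 0.6, 0.0])   # nu vanishes on the third point: mu^⊥ has mass 0.2
print(tv(mu, nu))                # |0.5-0.4| + |0.5-0.6| + 1 * 0.2, i.e. about 0.4
```

Note that $\KL(\mu|\nu)=+\infty$ here, since $\phi^\prime_\infty=\infty$ and $\mu^\bot\neq 0$, whereas TV and Hellinger remain finite.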
\paragraph{Balanced and unbalanced optimal transport.}
If the common embedding space $X$ is equipped with a distance $d(x,y)$, one can use more elaborate methods, and in particular consider optimal transport (OT) distances, which can be computed by solving convex optimization problems.
This type of method has proven useful for ML problems as diverse as domain adaptation~\cite{courty2014domain}, supervised learning over histograms~\cite{frogner2015learning} and unsupervised learning of generative models~\cite{WassersteinGAN}.
For this simple case, the extension from probability distributions to arbitrary positive measures $(\mu,\nu) \in \Mm_+(X)^2$ is now well understood and corresponds to the theory of unbalanced OT.
Following~\cite{liero2015optimal,chizat2018unbalanced}, a family of unbalanced Wasserstein distances is defined by solving
\begin{align}\label{eq-uw}
\text{UW}(\mu,\nu)^q \eqdef \uinf{\pi \in \Mm_+(X \times X)} \int \C(d(x,y)) \d\pi(x,y)
+ \D_\phi(\pi_1|\mu) + \D_\phi(\pi_2|\nu).
\end{align}
Here $(\pi_1,\pi_2)$ are the two marginals of the joint distribution $\pi$, defined by $\pi_1(A) = \pi(A \times X)$ for $A \subset X$. The weight $\C : \RR_+ \rightarrow \RR$ and the exponent $q\geq 1$ should be chosen wisely, for instance to ensure that \text{UW} defines a distance (see Section~\ref{sec-background} for details).
It is frequent to take $\rho\D_\phi$ instead of $\D_\phi$ (or, equivalently, to replace $\phi$ by $\rho\phi$) in order to adjust the strength of the penalization of the marginals.
Classical (balanced) optimal transport is retrieved with $\phi=\iota_{=}$ or by taking the limit $\rho \rightarrow +\infty$, which enforces exact conservation of mass $\pi_1=\mu$ and $\pi_2=\nu$.
In the limit $\rho \rightarrow 0$, when $\D_\phi=\rho\KL$ is the Kullback-Leibler relative entropy divergence, $\text{UW}(\mu,\nu)^2/\rho$ tends to the squared Hellinger distance, which does not introduce any transportation at all.
When $0 < \rho < +\infty$, unbalanced OT operates a tradeoff between transportation and creation of mass, which is crucial for instance to be robust to outliers in the data and to cope with mass variations in the modes of the distributions.
For supervised tasks, the value of $\rho$ should be cross-validated to obtain the best performance.
Its use is gaining popularity in applications, such as supervised learning~\cite{frogner2015learning}, medical imaging registration~\cite{feydy2019fast}, videos~\cite{lee2019parallel} and gradient flow to train neural networks~\cite{chizat2018global,rotskoff2019global}.
Furthermore, existing efficient algorithms for balanced OT extend to this unbalanced problem. In particular Sinkhorn's iterations, introduced in ML for balanced OT by~\cite{CuturiSinkhorn}, extend to unbalanced OT~\cite{chizat2016scaling, sejourne2019sinkhorn}, as detailed in Section~\ref{sec-algo}.
\paragraph{The Gromov-Wasserstein distance and its applications.}
The Gromov-Wasserstein (GW) distance~\cite{memoli2011gromov,sturm2012space} generalizes the notion of OT to the setting of mm-spaces up to isometries. It corresponds to replacing the linear cost $\int \C(d) \d \pi$ of OT by a quadratic function
\eql{\label{eq-defn-gw}
\text{GW}(\Xx,\Yy)^q \eqdef \!\!\!\!
\umin{\pi \in \Mm_+(X \times Y)} \enscond{
\int \C( \Gamma(x, x',y, y') ) \d\pi(x,y) \d\pi(x',y')
}{
\begin{matrix}\pi_1=\mu \\ \pi_2=\nu\end{matrix}
},
}
where the distortion kernel is $\Gamma(x, x',y, y') \eqdef \Delta(d_X(x, x'), d_Y(y, y'))$, with $\Delta$ any distance on $\RR_+$.
The construction detailed in~\cite{memoli2011gromov,sturm2012space} considers $\Delta$ the Euclidean distance, $\C(r) = r^q$ with $q\geq 1$, in which case it is proved that GW defines a distance on balanced mm-spaces (i.e. the measures are probability distributions) up to isometries.
In this paper, we extend this construction to the three cases of Section~\ref{sec-setups} and to arbitrary positive measures.
This distance is applied successfully in various domains. It is used in natural language processing for unsupervised translation learning~\cite{grave2019unsupervised, alvarez2018gromov}, in generative learning for objects lying in spaces of different dimensions~\cite{bunne2019learning} and to build VAE for graphs~\cite{xu2020learning}. It has been adapted to take into consideration additional structures for domain adaptation over different spaces~\cite{redko2020co}. It is also a relevant distance to compute barycenters between graphs or shapes by leveraging additional features of the data~\cite{vayer2018fused} or the metric structure of GW~\cite{chowdhury2020gromov}.
In the specific case where the metric spaces are Euclidean, this distance compares distributions up to rigid isometry, and is closely related (but not equal) to metrics defined by Procrustes analysis~\cite{grave2019unsupervised, alvarez2019towards}.
The problem~\eqref{eq-defn-gw} is non convex because the quadratic form $\int \C(\Gamma) \d\pi\otimes \pi$ is not positive in general.
It is in fact closely related to quadratic assignment problems~\cite{burkard1998quadratic}, which are used for graph matching problems, and are known to be NP-hard in general.
Nevertheless, non-convex optimization methods have been shown to be successful in practice to use GW distances for ML problems. This includes for instance alternating minimization~\cite{memoli2011gromov,redko2020co} and entropic regularization~\cite{peyre2016gromov, gold1996graduated}.
\paragraph{Related works and contributions.}
The work of~\cite{chapel2020partial} relaxes the GW distance to the unbalanced setting. It hybridizes GW with partial OT~\cite{figalli2010optimal} for unsupervised labeling. It resembles one particular setting of our formulation, but with some important differences, detailed in Section~\ref{sec-distance}.
Our construction is also connected to partial matching methods, which find numerous applications in graphics and vision~\cite{cosmo2016shrec}. In particular, \cite{rodola2012game} introduces a mass conservation relaxation of the GW problem.
The two main contributions of this paper are the definition of two formulations relaxing the GW distance. The first one is called the Unbalanced Gromov-Wasserstein (UGW) divergence and can be computed efficiently on GPUs. The second one is called the Conic Gromov-Wasserstein distance (CGW). It is proved to be a distance between mm-spaces endowed with positive measures up to isometries, as stated in Theorem~\ref{thm-ugw-dist} which is the main theoretical result of this paper. We also prove in Theorem~\ref{thm-ineq-formul} that UGW can be used as a surrogate upper-bounding CGW.
We present those concepts and their properties in Section~\ref{sec-distance}.
We also detail in Section~\ref{sec-algo} an efficient computational scheme for a particular setting of UGW. This method computes an approximate stationary point of the non-convex energy. It leverages the strength of entropic regularization and the Sinkhorn algorithm, namely that it is GPU-friendly and defines smooth loss functions amenable to back-propagation for ML applications.
Section~\ref{sec-xp} provides numerical experiments highlighting the qualitative behavior of this algorithm, which shed some light on the favorable properties of UGW when coping with outliers and mass variations in the modes of the distributions.
\section{UGW formulation and definiteness}
\label{appendix-distance-ugw}
We present in this section the proofs of the properties of our divergence $\UGW$. We refer to Section~\ref{sec-distance} for the definition of the UGW formulation and its related concepts.
We first start with the existence of minimizers stated in Proposition~\ref{thm-exist-minimizer}. It illustrates in some sense that our divergence is well-defined.
\begin{proposition}[Existence of minimizers]
Assume that $(\Xx,\Yy)$ are compact mm-spaces and that one of the following holds:
\begin{enumerate}
\item $\phi$ is superlinear, i.e.\ $\phi^\prime_\infty=\infty$;
\item $\C$ has compact sublevel sets in $\RR_+$ and $2\phi^\prime_\infty + \inf \C >0$.
\end{enumerate}
Then there exists $\pi\in\Mm_+(X\times Y)$ such that $\UGW(\Xx,\Yy)=\Ll(\pi)$.
\end{proposition}
\begin{proof}
We adapt here the proof of~\cite[Theorem 3.3]{liero2015optimal}. The functional is lower semi-continuous as a sum of l.s.c.\ terms.
It therefore suffices to establish relative compactness of a minimizing sequence. Under either one of the assumptions, coercivity of the functional holds thanks to Jensen's inequality:
\begin{align*}
\Ll(\pi)&\geq m(\pi)^2\inf\C(\Gamma) + m(\mu)^2 \phi(\frac{m(\pi)^2}{m(\mu)^2}) + m(\nu)^2 \phi(\frac{m(\pi)^2}{m(\nu)^2})\\
&\geq m(\pi)^2 \Big[ \inf\C(\Gamma) + \frac{m(\mu)^2}{m(\pi)^2} \phi(\frac{m(\pi)^2}{m(\mu)^2}) + \frac{m(\nu)^2}{m(\pi)^2} \phi(\frac{m(\pi)^2}{m(\nu)^2})\Big].
\end{align*}
As $m(\pi)\rightarrow +\infty$, the bracketed term converges to $2\phi^\prime_\infty + \inf \C >0$, which under either one of the assumptions yields $\Ll(\pi)\rightarrow +\infty$, hence the coercivity.
Thus we can assume there exists some $M$ such that $m(\pi)<M$. Since the spaces are assumed to be compact, the Banach-Alaoglu theorem applies and gives relative compactness in $\Mm_+(X\times Y)$.
Take any sequence of plans $\pi_n$ approaching $\UGW(\Xx,\Yy)=\inf \Ll(\pi)$. Compactness gives that a subsequence $\pi_{n_k}$ weak* converges to some $\pi^*$. Because $\Ll$ is l.s.c., we have $\Ll(\pi^*) \leq \inf \Ll(\pi)$, thus $\Ll(\pi^*) = \inf \Ll(\pi)$. The existence of such a limit attaining the infimum gives the existence of a minimizer.
\end{proof}
Note that this formulation is nonnegative and symmetric because the functional $\Ll$ is itself nonnegative and symmetric in its inputs $(\Xx,\Yy)$. This formulation makes it straightforward to prove the definiteness of $\UGW$.
\begin{proposition}[Definiteness of UGW]\label{thm-ugw-definite}
Assume that $\phi^{-1}(\{0\})=\{1\}$ and $\C^{-1}(\{0\})=\{0\}$.
The following assertions are equivalent:
\begin{enumerate}
\item $\UGW(\Xx,\Yy)=0$
\item $\exists\pi\in\Mm_+(X\times Y)$ whose marginals are $(\mu,\nu)$ such that $d_X(x, x')= d_Y(y, y')$ for $\pi\otimes\pi$-a.e. $(x, x', y, y')\in (X\times Y)^2$.
\item There exists a mm-space $(Z, d_Z, \eta)$ with full support and Borel maps $\psi_X:Z\rightarrow X$ and $\psi_Y:Z\rightarrow Y$ such that $(\psi_X)_\sharp \eta =\mu$, $(\psi_Y)_\sharp \eta =\nu$ and $d_Z = (\psi_X)^\sharp d_X = (\psi_Y)^\sharp d_Y$.
\item There exists a Borel measurable bijection $\psi:spt(\mu)\rightarrow spt(\nu)$ between the measures' supports, with Borel measurable inverse, such that $\psi_\sharp\mu = \nu$ and $d_Y = \psi^\sharp d_X$.
\end{enumerate}
\end{proposition}
\begin{proof}
Recall that $(2) \Leftrightarrow (3) \Leftrightarrow (4)$ from~\cite[Lemma 1.10]{sturm2012space}; thus it remains to prove $(1)\Leftrightarrow(2)$.
If there is such a coupling plan $\pi$ between $(\mu,\nu)$, then $\Gamma=0$ holds $\pi\otimes\pi$-a.e., and all $\phi$-divergences vanish as well, yielding $\UGW(\Xx,\Yy)=0$.
Assume now that $\UGW(\Xx,\Yy)=0$, and let $\pi$ be an optimal plan. All terms of $\Ll$ are nonnegative, thus under our assumptions we have $\Gamma=0$, $\pi_1\otimes\pi_1=\mu\otimes\mu$ and $\pi_2\otimes\pi_2=\nu\otimes\nu$. Thus $\pi$ has marginals $(\mu,\nu)$ and $d_X(x, x')= d_Y(y, y')$ holds $\pi\otimes\pi$-a.e.
\end{proof}
We end with the proof of Lemma~\ref{lem-rewrite-ugw}.
\begin{lemma}
One has
\begin{align*}
\Ll(\pi)= &\int_{X^2 \times Y^2} L_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi+\psi^\prime_\infty(|(\mu\otimes\mu)^\bot| + |(\nu\otimes\nu)^\bot|).
\end{align*}
\end{lemma}
\begin{proof}
Using Equation~\eqref{eq-tensor-leb-dens} with the Lebesgue decomposition and the definitions of reverse entropies and $L_c$ from Section~\ref{sec-background}, one has
\begin{align*}
\Ll(\pi) &= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi
+ \D_\phi^\otimes(\pi_1|\mu) + \D_\phi^\otimes(\pi_2|\nu)\\
&= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi
+ \D_\psi^\otimes(\mu|\pi_1) + \D_\psi^\otimes(\nu|\pi_2)\\
&= \int_{X^2 \times Y^2} \C(\Gamma)\d\pi\d\pi + \int_{X^2}\psi(\f\otimes\f)\d\pi_1\d\pi_1 + \int_{Y^2}\psi(\g\otimes\g)\d\pi_2\d\pi_2\nonumber\\
&\qquad+\psi^\prime_\infty(|(\mu\otimes\mu)^\bot| + |(\nu\otimes\nu)^\bot|)\\
&= \int_{X^2 \times Y^2} L_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi+\psi^\prime_\infty(|(\mu\otimes\mu)^\bot| + |(\nu\otimes\nu)^\bot|).\nonumber
\end{align*}
\end{proof}
\section{Conic formulation and metric properties}
\label{appendix-distance-cgw}
We present in this section the proofs of the properties mentioned in Section~\ref{sec-distance}. We refer to Sections~\ref{sec-background} and~\ref{sec-distance} for the definition of the conic formulation and its related concepts.
In this section we frequently use the notion of marginal for measures. For any sets $E,F$, we write $\margp{E}:E\times F\rightarrow E$ for the \textbf{canonical projection}, defined for any $(x,y)\in E\times F$ by $\margp{E}(x,y)=x$. Consider two complete separable mm-spaces $\Xx = (X, d_X, \mu)$ and $\Yy=(Y, d_Y, \nu)$. Write $\pi\in\Mm_+(X\times Y)$ for a coupling plan, and define its marginals by $\pi_1 = \margp{X}_\sharp\pi$ and $\pi_2 = \margp{Y}_\sharp\pi$. The marginals can equivalently be defined via test functions; in the case of $\pi_1$ this reads, for any test function $\xi$,
\begin{align*}
\int \xi(x)\d\pi_1(x) = \int \xi(x) \d\pi(x,y).
\end{align*}
\subsection{Invariance to dilation and triangle inequality}
We introduced the notion of dilation in Section~\ref{sec-distance}; it is key to proving the triangle inequality. We show this property and its consequences in this section.
\begin{lemma}[Invariance to dilation]
The problem $\CGW$ is invariant to dilations, i.e. for any $\al\in\Uu_p(\mu,\nu)$, we have $\dil{v}(\al)\in\Uu_p(\mu,\nu)$ and $\Hh(\al) = \Hh(\dil{v}(\al))$.
\end{lemma}
\begin{proof}
First we prove the stability of $\Uu_p(\mu,\nu)$ under dilations. Take $\al\in\Uu_p(\mu,\nu)$. For any test function $\xi$ defined on $X$ we have
\begin{align*}
\int \xi(x)r^p\d\dil{v}(\al) = \int \xi(x)\Big(\frac{r}{v}\Big)^p v^p\d\al = \int\xi(x)r^p\d\al = \int\xi(x)\d\mu(x).
\end{align*}
Similarly we get $\margp{Y}_\sharp(s^p \dil{v}(\al)) = \nu$, thus $\dil{v}(\al)\in\Uu_p(\mu,\nu)$.
It remains to prove the invariance of the functional. Recall that $\Dd^q$ is p-homogeneous. It yields
\begin{align*}
\Hh(\dil{v}(\al)) &= \int \Dd([d_X(x,x'), rr'], [d_Y(y,y'), ss'])^q\d\dil{v}(\al)\d\dil{v}(\al)\\
&= \int\Dd([d_X(x,x'), \frac{r}{v}\cdot\frac{r'}{v}], [d_Y(y,y'), \frac{s}{v}\cdot\frac{s'}{v}])^q v^{p}\cdot v^p\d\al \d\al\\
&= \int\frac{1}{v^{2p}}\Dd([d_X(x,x'), rr'], [d_Y(y,y'), ss'])^q v^{2p}\d\al \d\al\\
&= \int\Dd([d_X(x,x'), rr'], [d_Y(y,y'), ss'])^q \d\al \d\al\\
& = \Hh(\al).
Both the functional and the constraint set are invariant, thus the whole CGW problem is invariant to dilations.
\end{proof}
The above lemma allows us to normalize the plan so that one of its marginals is fixed to a prescribed value. Fixing a marginal allows us to generalize the gluing lemma, which is a key ingredient of the triangle inequality in optimal transport.
\begin{lemma}[Normalization lemma]
Assume there exists $\al\in\Uu_p(\mu,\nu)$ such that $\CGW(\Xx,\Yy)=\Hh(\al)$. Then there exists $\tilde{\al}\in\Uu_p(\mu,\nu)$ such that $\CGW(\Xx,\Yy)=\Hh(\tilde{\al})$ and whose marginal on $\Co[Y]$ is $\nu_{\Co[Y]}=\margp{\Co[Y]}_\sharp\tilde{\al} = \delta_{\zc_Y} + \margc_\sharp(\nu \otimes \delta_1)$, where $\margc$ is the canonical injection from $Y\times\RR_+$ to $\Co[Y]$.
\end{lemma}
\begin{proof}
The proof is exactly the same as~\cite[Lemma 7.10]{liero2015optimal} and is included for completeness. Take an optimal plan $\al$. Because the functional and the constraints are homogeneous in $(r,s)$, the plan $\hat{\al} = \al + \delta_{\zc_X}\otimes\delta_{\zc_Y}$ satisfies $\hat{\al}\in\Uu_p(\mu,\nu)$ and $\Hh(\hat{\al}) = \Hh(\al)$. Indeed, by this homogeneity the contribution $\delta_{\zc_X}\otimes\delta_{\zc_Y}$ has $(r,s)=(0,0)$ and thus no impact.
Considering $\hat{\al}$ instead of $\al$ allows us to assume without loss of generality that the transport plan charges the apex, i.e. setting
\begin{align}
S = \{[x,r],[y,s]\in\Co[X]\times\Co[Y], [y,s]=\zc_Y\},
\end{align}
one has $\omega_Y \eqdef \hat{\al}(S) \geq 1$.
Then we can define the following scaling
\begin{align}
v([x,r], [y,s]) =
\begin{cases}
s \textrm{ if }s>0\\
\omega_Y^{-1/p} \textrm{ otherwise}.
\end{cases}
\end{align}
We now prove that $\dil{v}(\hat{\al})$ has the desired marginal on $\Co[Y]$ by considering test functions $\xi([y,s])$. We split the integral into two parts using the set $S$, and write $\hat{\al} = \rest{\hat{\al}}{S} + \rest{\hat{\al}}{S^c}$ for the restrictions to $S$ and $S^c$ respectively.
It reads
\begin{align*}
\int \xi([y,s])\d\dil{v}(\hat{\al}) &= \int\xi([y,s / v])v^p\d\hat{\al}\\
& = \int\xi([y,s / v])v^p\d\rest{\hat{\al}}{S} + \int\xi([y,s / v])v^p\d\rest{\hat{\al}}{S^c}\\
&=\int\xi(\zc_Y)\omega_Y^{-1}\d\rest{\hat{\al}}{S} + \int\xi([y,s / s])s^p\d\rest{\hat{\al}}{S^c}\\
& = \xi(\zc_Y)\cdot\omega_Y\cdot\omega_Y^{-1} + \int\xi([y,1])s^p\d\hat{\al}\\
& = \xi(\zc_Y) + \int\xi(\margc(y,s))\d(\nu(y)\otimes\delta_1(s))\\
& = \int\xi([y,s])\d(\delta_{\zc_Y} + \margc_\sharp(\nu \otimes \delta_1)),
\end{align*}
which is the formula of the desired marginal on $\Co[Y]$. Since $\hat{\al}\in\Uu_p(\mu,\nu)$, its dilation is also in $\Uu_p(\mu,\nu)$, and $\Hh(\al) = \Hh(\hat{\al})=\Hh(\dil{v}(\hat{\al}))$.
\end{proof}
\if 0
\subsection{Comparison of UGW and CGW formulations}
The inequality $\UGW\geq\CGW$ was proved in Section~\ref{sec-distance}. In this section we detail refinements of the comparison between both formulations, and provide a sufficient condition to obtain equality between both formulations.
The key property used here is the existence of $\theta^*=\theta_c(r,s)$ such that
$ H_c(r,s)=\theta^* L_c(\tfrac{r}{\theta^*},\tfrac{s}{\theta^*})$, see section~\ref{sec-background} for details.
We conjecture that due to the overrelaxation of the scaling $\theta$ which is not tensorized because it depends on $(x,r,x',r',y,s,y',s')$, there is not equality between UGW and CGW. Nevertheless, if we assume that the optimal scaling is tensorized we are able to derive the converse inequality.
\begin{lemma}\label{lem-eq-both-form}
For $\D_\phi=\KL$ or $\TV$, assume that $\C$ is such that the quantity $\theta^*_\C(r,s)$ exists and is Borel measurable. Furthermore, assume that one optimal $\al$ in $\CGW(\Xx,\Yy)$ is such that $\al\otimes\al$-a.e. we have
$$\theta^*_{\lambda(\Gamma(x,x',y,y'))}((rr')^p, (ss')^p)=\tilde{\theta}(x,r,s,y)\otimes\tilde{\theta}(x',r',s',y').$$
Then we have $\CGW(\Xx,\Yy)=\UGW(\Xx,\Yy)$.
\end{lemma}
\begin{proof}
Take the optimal $\al$ and consider $\rest{\tilde{\al}}{S}=\dil{v}(\al)$ with $v=\tilde{\theta}^{1/p}$, restricted on the set $S=\{\theta^* >0\}$.
Due to the assumed tensor structure of $\theta^*$ and to its homogeneity, we get that $\theta^*_{\lambda(\Gamma(x,x',y,y'))}((rr')^p, (ss')^p)=1$, $\tilde{\al}\otimes\tilde{\al}$-almost everywhere. It reads
\begin{align*}
\int (\theta^{*}- 1 )\d\rest{\tilde{\al}}{S}\d\rest{\tilde{\al}}{S}&=
\int (\theta^{*}(x,r/v_1,y,s/v_1, x',r'/v_2,y',s'/v_2)- 1 )(v_1 v_2)^p\d\al\d\al\\
&=\int (\frac{\theta^{*}(x,r,y,s, x',r',y',s')}{(\tilde{\theta}\otimes\tilde{\theta})^p}- 1 )(\tilde{\theta}\otimes\tilde{\theta})^{p}\d\al\d\al\\
&=0,
\end{align*}
where $v_1 =\tilde{\theta}(x,y,r,s)$ and $v_2 =\tilde{\theta}(x',y',r',s')$
Thus $\tilde{\al}\otimes\tilde{\al}$-almost everywhere, we have that
\begin{align}
\Dd([d_X(x, x'), rr'], [d_Y(y,y'), ss'])^q = \C(\Gamma) +\psi((rr')^p)+ \psi((ss')^p).
\end{align}
We apply the same construction and calculation from the proof of~\cite[Theorem 7.20, pp 1071-1072]{liero2015optimal}. This construction builds from an optimal conic plan $\al$ a new plan $\pi$ defined on $X\times Y$. They consider $\be =\dil{v}(\rest{\al}{S})$ with $v=\tilde{\theta}^{1/p}$, and show that its marginal $\pi=\margp{X\times Y}(\be)$ is suboptimal for UGW, i.e. $\Hh(\al)\geq \Ll(\pi)$.
The proof requires that the function $x\mapsto\psi(x)-x$ is non-increasing (where $\psi$ is the reverse entropy of $\phi$). Such property holds when $\D_\phi=\TV$ or $\KL$. The proof also involves the notion of measure disintegration which is only valid in separable metric spaces, hence the assumption on $(\Xx,\Yy)$.
\end{proof}
\begin{corollary}
In the setting $\D_\phi=\KL$ or $\TV$, if $\CGW(\Xx,\Yy)=0$ then $\UGW(\Xx,\Yy)=0$. In particular, $\CGW$ is definite in every setting of Section~\ref{sec-setups}.
\end{corollary}
\begin{proof}
Take the optimal $\al$. Having $\CGW(\Xx,\Yy)=0$ means that $d_X(x,x')=d_Y(y,y')$ and $rr'=ss'$ $\al\otimes\al$-almost everywhere. In the setting of $\KL$, it yields $\theta^*_{\lambda(\Gamma)}((rr')^2,(ss')^2)=rr'ss'$, which is tensorizable with $\tilde{\theta}=rs$. In the case of $\TV$ we get $\theta^*_{\lambda(\Gamma)}(rr',ss')=(rr')\wedge (ss')$ which is tensorized with $\tilde{\theta}=r$ since $rr'=ss'$.
In virtue of the previous lemma, it yields that $\UGW(\Xx,\Yy)=0$ because $\CGW(\Xx,\Yy)\geq\UGW(\Xx,\Yy)\geq 0$.
Since UGW is definite, then so is CGW, see Theorem~\ref{thm-ugw-definite}.
\end{proof}
\fi
\section{Unbalanced Gromov-Wasserstein formulations}
\label{sec-distance}
We present in this section our two new formulations and their properties. The first, called UGW, is amenable to computation on GPUs; it is exploited in Section~\ref{sec-algo} to derive an efficient algorithm, which is used in the numerical experiments of Section~\ref{sec-xp}. The second, called CGW, defines a distance between mm-spaces up to isometries.
In all that follows, we consider complete separable mm-spaces endowed with a metric and a positive measure.
\subsection{The unbalanced Gromov-Wasserstein divergence}
This new formulation makes use of quadratic $\phi$-divergences, defined as $\D_\phi^\otimes(\rho|\nu) \eqdef \D_\phi(\rho \otimes \rho|\nu \otimes \nu)$, where $\rho \otimes \rho \in \Mm_+(X^2)$ is the tensor product measure defined by $\d(\rho \otimes \rho)(x,y)=\d\rho(x)\d\rho(y)$.
Note that $\D_\phi^\otimes$ is not a convex function in general.
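When $\D_\phi=\KL$, this quadratic divergence admits a simple closed form, which we record here as a sanity check (a direct computation with the unnormalized convention $\KL(\rho|\nu)=\int\log f\d\rho - m(\rho)+m(\nu)$ for $\rho=f\nu\ll\nu$, where $m$ denotes the total mass):
\begin{align*}
\KL^\otimes(\rho|\nu) &= \int \log\big(f(x)f(x')\big)\d\rho(x)\d\rho(x') - m(\rho)^2 + m(\nu)^2\\
&= 2\,m(\rho)\int \log f\d\rho - m(\rho)^2 + m(\nu)^2\\
&= 2\,m(\rho)\KL(\rho|\nu) + \big(m(\rho)-m(\nu)\big)^2,
\end{align*}
where the last line uses $\int \log f \d\rho = \KL(\rho|\nu) + m(\rho) - m(\nu)$. In particular $\KL^\otimes$ is 2-homogeneous under the joint scaling $(\rho,\nu)\mapsto(\th\rho,\th\nu)$, consistently with the homogeneity of UGW discussed below.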
\begin{definition}[Unbalanced GW]
The Unbalanced Gromov-Wasserstein divergence is defined as
\begin{equation}\label{eq-ugw-func}
\begin{aligned}
&\qquad\qquad \UGW(\Xx, \Yy) = \inf_{\pi\in\Mmp(X\times Y)} \Ll(\pi) \qwhereq \\
&\Ll(\pi) \eqdef \int_{X^2 \times Y^2} \C(\Gamma(x,x',y, y'))\d\pi(x, y)\d\pi(x', y')
+ \D_\phi^\otimes(\pi_1|\mu) + \D_\phi^\otimes(\pi_2|\nu).
\end{aligned}
\end{equation}
\end{definition}
This definition can be understood as a hybridization between~\eqref{eq-defn-gw} and~\eqref{eq-uw}, but with a twist: one needs to use the quadratic divergence $\D_\phi^\otimes$ in place of $\D_\phi$. In the TV case, this is the most important distinction between UGW and partial GW~\cite{chapel2020partial}.
Using quadratic divergences makes UGW 2-homogeneous: if $\mu$ and $\nu$ are multiplied by $\th \geq 0$, then $\UGW(\Xx, \Yy)$ is multiplied by $\th^2$.
Without such divergences, $\Ll$ would lack this 2-homogeneity, which is crucial to establish the connection between our proposed formulations.
Note also that the balanced GW distance~\eqref{eq-defn-gw} is recovered as a particular case when using $\phi=\iota_=$, or by letting $\rho \rightarrow +\infty$ for a scaled entropy $\rho\phi$.
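For concreteness, the objective $\Ll$ can be evaluated directly on discrete mm-spaces. The sketch below is our own illustration (not the solver of Section~\ref{sec-algo}), in the setting $\C(\Gamma)=\Gamma^2$ and $\D_\phi=\rho\KL$ with the unnormalized $\KL$; the quartic distortion sum is expanded into matrix products, and the quadratic divergence is evaluated directly on tensor products.

```python
import numpy as np

def kl(a, b):
    """Unnormalized KL divergence between positive vectors of possibly
    different total mass: sum a*log(a/b) - m(a) + m(b). Assumes b > 0
    wherever a > 0."""
    mask = a > 0
    return np.sum(a[mask] * np.log(a[mask] / b[mask])) - a.sum() + b.sum()

def quad_kl(a, b):
    """Quadratic divergence KL(a x a | b x b), evaluated directly."""
    return kl(np.outer(a, a).ravel(), np.outer(b, b).ravel())

def ugw_loss(pi, Dx, Dy, mu, nu, rho=1.0):
    """Evaluate L(pi) for C(Gamma) = Gamma^2 and D_phi = rho*KL.
    The 4-index sum of (Dx[i,i'] - Dy[j,j'])^2 pi[i,j] pi[i',j'] is
    expanded into three matrix products (Dx, Dy symmetric)."""
    pi1, pi2 = pi.sum(axis=1), pi.sum(axis=0)
    distortion = (pi1 @ (Dx**2) @ pi1
                  + pi2 @ (Dy**2) @ pi2
                  - 2.0 * np.sum(pi * (Dx @ pi @ Dy)))
    return distortion + rho * (quad_kl(pi1, mu) + quad_kl(pi2, nu))
```

The expansion of the distortion avoids forming the four-index tensor, as in~\cite{peyre2016gromov}; the tests below also check numerically the 2-homogeneity of $\Ll$ under joint scaling of $(\pi,\mu,\nu)$.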
We first prove the existence of optimal plans $\pi$ solving~\eqref{eq-ugw-func}. The hypotheses of this proposition cover the three key settings of Section~\ref{sec-setups}. It is proved in Appendix~\ref{appendix-distance-ugw}.
\begin{proposition}[Existence of minimizers]\label{thm-exist-minimizer}
We assume that $(X,Y)$ are compact and that either (i) $\phi$ is superlinear, i.e.\ $\phi^\prime_\infty=\infty$, or (ii) $\C$ has compact sublevel sets in $\RR_+$ and $2\phi^\prime_\infty + \inf \C >0$.
Then there exists $\pi\in\Mm_+(X\times Y)$ such that $\UGW(\Xx,\Yy)=\Ll(\pi)$.
\end{proposition}
The following proposition ensures that the functional $\UGW$ can be used to compare mm-spaces.
\begin{proposition}\label{prop-ugw-definite}
Assume that $\phi^{-1}(\{0\})=\{1\}$ and $\C^{-1}(\{0\})=\{0\}$. Then $\UGW(\Xx,\Yy) \geq 0$ and is $0$ if and only if $\Xx\sim\Yy$.
\end{proposition}
\begin{proof}
Assume $\UGW(\Xx,\Yy)=0$. In the three considered settings, nonnegativity of each term implies that all the terms appearing in $\Ll$ are zero. Similarly to the balanced case~\cite{memoli2011gromov}, the distortion being zero imposes that the plan $\pi$ defines an isometry. The $\phi$-divergences being zero implies that $\pi$ has marginals equal to $(\mu,\nu)$. This plan thus defines an isometric bijection between $\Xx$ and $\Yy$; see Appendix~\ref{appendix-distance-ugw} for details.
\end{proof}
We end this section with a lemma which draws an analogy with Equation~\eqref{eq-uw-reverse} and makes the transition to the next section on the conic formulation. Its proof is deferred to Appendix~\ref{appendix-distance-ugw}.
\begin{lemma}\label{lem-rewrite-ugw}
One has
\begin{equation}
\begin{aligned}\label{eq-rewrite-ugw}
\Ll(\pi)= &\int_{X^2 \times Y^2} L_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi
+\psi^\prime_\infty(|(\mu\otimes\mu)^\bot| + |(\nu\otimes\nu)^\bot|),
\end{aligned}
\end{equation}
where $(\f,\g)$ are given by~\eqref{eq-leb-dens-1}.
\end{lemma}
\subsection{The conic Gromov-Wasserstein distance}
We introduce a second, conic GW formulation adapted from~\eqref{eq-uw-conic}, which is connected to UGW. Most important is Theorem~\ref{thm-ugw-dist}, which states that it defines a distance between mm-spaces equipped with arbitrary positive measures.
\subsubsection{Conic formulation}
We refer to Section~\ref{sec-background} for the construction of cone sets. We consider the cone distance $\Dd\eqdef \Dd_{\Co[\RR_+]}$, where the distance $d$ of~\eqref{eq-def-cone-dist} is now $\Delta$ from~\eqref{eq-defn-gw}. The conic formulation optimizes over measures $\al\in\Uu_p(\mu,\nu)$ defined in Equation~\eqref{eq-uw-conic-set}, with the slight change that $\al\in\Mm_+(\Co[X]\times\Co[Y])$ instead of $\Mm_+(\Co[X]^2)$ as for UW~\eqref{eq-uw-conic}.
It reads $\CGW(\Xx,\Yy) \eqdef \inf_{\al\in\Uu_p(\mu,\nu)}\Hh(\al)$ where
\begin{equation}\label{eq-ugw-conic}
\begin{aligned}
& \Hh(\al)\eqdef\int \Dd([d_X(x,x'), r r'], [d_Y(y,y'), s s'])^q\d\al([x,r], [y,s])\d\al([x',r'], [y',s']).
\end{aligned}
\end{equation}
It is an adaptation of the program~\eqref{eq-uw-conic} to GW.
Starting from Lemma~\ref{lem-rewrite-ugw}, we perform a derivation similar to the construction of CUW from UW as presented in Section~\ref{sec-background}.
The densities $\f(x),\f(x'),\g(y),\g(y')$ are replaced by variables $r^p,r^{\prime p},s^p,s^{\prime p}$, and the tensor structure $r\cdot r'$ of those variables is due to the tensorized structure of the plan $\al\otimes\al$.
Note that similarly to the GW formulation~\eqref{eq-defn-gw} -- and in sharp contrast with the formulation~\eqref{eq-uw-conic} of UW -- here the transport plans are defined on the cone $\Co[X] \times \Co[Y]$ but the cost $\Dd$ is a distance on $\Co[\RR_+]$.
The theorem below states that CGW defines a distance.
\begin{theorem}\label{thm-ugw-dist}
The divergence $\CGW$ is symmetric, nonnegative and definite up to isometries. Furthermore, if $\Dd$ is a distance on $\Co[\RR_+]$, then $\CGW^{1/q}$ is a distance on the set of mm-spaces up to isometries.
\end{theorem}
As an application of the above result we have the following corollary.
\begin{corollary}
When $\Delta$ is the Euclidean distance on $\RR$, for all choices of $(\D_\phi,\C,p,q)$ given in Section~\ref{sec-setups}, $\CGW^{1/q}$ is a distance on the set of mm-spaces up to isometries.
\end{corollary}
The next theorem shows that while the distance $\CGW^{1/q}$ seems difficult to compute (because it is defined on a lifted space), it can be controlled by $\UGW$, which can be approximated with efficient numerical schemes as detailed in Section~\ref{sec-algo}.
\begin{theorem}\label{thm-ineq-formul}
For any choice of $(\D_\phi, \C, p, q)$ and for $\Dd$ defined with Equation~\eqref{eq-def-cone-dist}, one has $\UGW\geq\CGW$.
\end{theorem}
\subsubsection{Preliminary results}
We present concepts and properties which are necessary for the proofs of Theorem~\ref{thm-ugw-dist} and Theorem~\ref{thm-ineq-formul}.
\begin{definition}[dilations]
Consider a Borel measurable scaling function $v([x, r], [y, s])$ depending on $[x, r], [y, s]\in\Co[X]\times\Co[Y]$, and take a plan $\al\in\Mm_+(\Co[X]\times\Co[Y])$. We define the dilation $\dil{v}: \al\mapsto (h_v)_\sharp(v^{p}\al)$ where
\begin{align*}
h_v([x, r], [y, s]) \eqdef ([x, r/w], [y, s/w]),
\end{align*}
where $w = v([x, r], [y, s])$. It reads for any test function $\xi$
\begin{align*}
\int \xi([x, r], [y, s])\d\dil{v}(\al) = \int\xi([x, r/w], [y, s/w])w^p \d\al.
\end{align*}
\end{definition}
A crucial property of the conic formulation~\eqref{eq-ugw-conic} is its invariance to dilations of the radial coordinates.
We state this invariance in the following lemma. The proof can be found in Appendix~\ref{appendix-distance-cgw}.
\begin{lemma}[Invariance to dilation]
The problem $\CGW$ is invariant to dilations, i.e. for any $\al\in\Uu_p(\mu,\nu)$, we have $\dil{v}(\al)\in\Uu_p(\mu,\nu)$ and $\Hh(\al) = \Hh(\dil{v}(\al))$.
\end{lemma}
The invariance to dilation allows one to normalize the plan $\al$ and assume extra properties without loss of generality.
For instance, with the constant scaling $v=\al(\Co[X]\times\Co[Y])^{-1/p}$ we can assume that $\al$ is a probability distribution (the dilation rescales the total mass by $v^p$), which allows us to leverage results of balanced optimal transport on the cone. It can also be leveraged to prove the following lemma, which is used to prove the triangle inequality.
\begin{lemma}[Normalization lemma]\label{lem-norm}
Assume there exists $\al\in\Uu_p(\mu,\nu)$ s.t. $\CGW(\Xx,\Yy)=\Hh(\al)$. Then there exists $\tilde{\al}$ such that $\tilde{\al}\in\Uu_p(\mu,\nu)$, $\CGW(\Xx,\Yy)=\Hh(\tilde{\al})$ and whose marginal on $\Co[Y]$ is
$$\nu_{\Co[Y]}= \delta_{\zc_Y} + \margc_\sharp(\nu \otimes \delta_1),$$
where $\margc$ denotes the canonical injection into the cone, e.g.\ $\margc :Y\times\RR_+ \rightarrow\Co[Y]$ with $\margc(y,s)\eqdef[y,s]$.
\end{lemma}
Before detailing the computational algorithm in Section~\ref{sec-algo}, we provide the proofs of our main results.
\subsubsection{Proof of Theorem~\ref{thm-ugw-dist}}
\emph{Non-negativity} and \emph{symmetry} hold since $\Hh$ is an integral of non-negative symmetric terms.
%
%
To prove \emph{definiteness}, assume $\CGW(\Xx,\Yy)=0$, and let $\al$ be an optimal plan. We have $\al\otimes\al$-a.e. that $d_X(x,x')=d_Y(y,y')$ and $rr'=ss'$ because $\Dd$ is definite (see Proposition~\ref{prop-cone-dist-definite}). Thanks to the completeness of $(\Xx,\Yy)$ and a result from~\cite[Lemma 1.10]{sturm2012space}, this property implies the existence of a Borel isometric bijection, with Borel inverse, between the supports of the measures, $\psi:\Supp(\mu)\rightarrow\Supp(\nu)$, where $\Supp$ denotes the support. The bijection $\psi$ satisfies $d_X(x,x')=d_Y(\psi(x),\psi(x'))$. To prove $\Xx\sim\Yy$ it remains to show $\psi_\sharp\mu=\nu$. By density of continuous functions of the form $\xi(x)\xi(x')$, the constraints of $\Uu_p(\mu,\nu)$ are equivalent to
\begin{align*}
\int_{\RR_+} (rr')^p \d\al_1(\cdot,r)\d\al_1(\cdot,r')=\mu\otimes\mu,\quad
\int_{\RR_+} (ss')^p \d\al_2(\cdot,s)\d\al_2(\cdot,s')=\nu\otimes\nu.
\end{align*}
Take a continuous test function $\xi$ defined on $\Supp(\nu)^2$. Writing $y=\psi(x)$ and $y'=\psi(x')$, one has
\begin{align*}
\int\xi(y,y')\d\nu\d\nu &= \int\xi(y,y') (ss')^p\d\al\d\al\\
&= \int\xi(\psi(x),\psi(x')) (ss')^p\d\al\d\al\\
&= \int\xi(\psi(x),\psi(x')) (rr')^p\d\al\d\al
\end{align*}
Using the constraint on $\mu\otimes\mu$ above, this yields
\begin{align*}
\int\xi(y,y')\d\nu\d\nu &= \int\xi(\psi(x),\psi(x')) \d\mu\d\mu = \int\xi(y,y') \d\psi_\sharp\mu\d\psi_\sharp\mu,
\end{align*}
where the last equality is the definition of the pushforward $\psi_\sharp\mu$. Since this identity holds for every continuous test function $\xi$ on $\Supp(\nu)^2$, we obtain $\nu\otimes\nu=\psi_\sharp\mu\otimes\psi_\sharp\mu$, hence $\nu=\psi_\sharp\mu$ and $\Xx\sim\Yy$.
It remains to prove the \emph{triangle inequality}. Assume now that $\Dd$ satisfies it.
%
Given three mm-spaces $(\Xx,\Yy,\Zz)$ respectively equipped with measures $(\mu,\nu,\eta)$, consider $\al,\be$ which are optimal plans for $\CGW(\Xx,\Yy)$ and $\CGW(\Yy,\Zz)$.
%
Applying Lemma~\ref{lem-norm} to both $\al$ and $\be$, we can consider measures $(\bar{\al},\bar{\be})$ which are also optimal and have a common marginal $\bar\nu$ on $\Co[Y]$. Thanks to this common marginal and the separability of $(X,Y,Z)$, the standard gluing lemma~\cite[Lemma 7.6]{villani2003} applies and yields a glued plan $\ga\in\Mm_+(\Co[X]\times\Co[Y]\times\Co[Z])$ whose respective marginals on $\Co[X]\times\Co[Y]$ and $\Co[Y]\times\Co[Z]$ are $(\bar{\al},\bar{\be})$. Furthermore, the marginal $\bar{\ga}$ of $\ga$ on $\Co[X]\times\Co[Z]$ is in $\Uu_p(\mu,\eta)$: indeed, $(\bar{\ga},\bar{\al})$ have the same marginal on $\Co[X]$, and likewise $(\bar{\ga},\bar{\be})$ on $\Co[Z]$.
Write $d_X=d_X(x,x')$ for the sake of conciseness (and similarly for $Y,Z$). The calculation reads
\begin{align}
&\CGW(\Xx, \Zz)^{\tfrac{1}{q}}\\
&\leq \Big(\int \Dd([d_X, rr'],[d_Z, tt'])^q\d\bar{\ga}([x,r],[z,t])\d\bar{\ga}([x',r'],[z',t'])\Big)^{\tfrac{1}{q}}\label{eq-triangle-1}\\
\quad &\leq\Big(\int \Dd([d_X, rr'],[d_Z, tt'])^q\d\ga([x,r],[y,s],[z,t])\d\ga([x',r'],[y',s'],[z',t'])\Big)^{\tfrac{1}{q}}\label{eq-triangle-2}\\
\quad &\leq \Big(\int (\Dd([d_X, rr'],[d_Y, ss']) + \Dd([d_Y, ss'],[d_Z, tt']))^q\d\ga\d\ga\Big)^{\tfrac{1}{q}}\label{eq-triangle-3}\\
\quad &\leq \Big(\int \Dd([d_X, rr'],[d_Y,ss'])^q\d\ga\d\ga\Big)^{\tfrac{1}{q}} +\Big(\int\Dd([d_Y,ss'],[d_Z,tt'])^q\d\ga\d\ga\Big)^{\tfrac{1}{q}}\label{eq-triangle-4}\\
&\leq \Big(\int \Dd([d_X, rr'],[d_Y, ss'])^q\d\bar{\al}([x,r],[y,s])\d\bar{\al}([x',r'],[y',s'])\Big)^{\tfrac{1}{q}}\nonumber\\
&\qquad+ \Big(\int \Dd([d_Y, ss'],[d_Z, tt'])^q\d\bar{\be}([y,s],[z,t])\d\bar{\be}([y',s'],[z',t'])\Big)^{\tfrac{1}{q}}\label{eq-triangle-5}\\
&\leq \CGW(\Xx, \Yy)^{\tfrac{1}{q}}+ \CGW(\Yy, \Zz)^{\tfrac{1}{q}}.\label{eq-triangle-6}
\end{align}
Since $\bar{\ga}\in\Uu_p(\mu,\eta)$, it is suboptimal, which yields Equation~\eqref{eq-triangle-1}. Because $\bar{\ga}$ is the marginal of $\ga$, we get Equation~\eqref{eq-triangle-2}. Equations~\eqref{eq-triangle-3} and~\eqref{eq-triangle-4} are obtained by the triangle and Minkowski inequalities respectively, which hold because $\Dd$ is a distance. Equation~\eqref{eq-triangle-5} follows from the marginalization of $\ga$, and Equation~\eqref{eq-triangle-6} from the optimality of $(\bar{\al},\bar{\be})$, which ends the proof of the triangle inequality.
\subsubsection{Proof of Theorem~\ref{thm-ineq-formul}}
The proof consists in considering an optimal plan $\pi$ for UGW, building a lift $\al$ of this plan into the cone such that $\Ll(\pi)\geq\Hh(\al)$, and proving that $\al$ is admissible for the program CGW, hence suboptimal.
Using Equation~\eqref{eq-leb-dens-1}, we have
\begin{equation}
\begin{aligned}\label{eq-tensor-leb-dens}
\mu\otimes\mu &= (\f\otimes\f) \pi_1\otimes\pi_1 + (\mu\otimes\mu)^\bot, \\
(\mu\otimes\mu)^\bot &= \mu^\bot\otimes(\f\pi_1) + (\f\pi_1)\otimes\mu^\bot + \mu^\bot\otimes\mu^\bot,\\
\nu\otimes\nu &= (\g\otimes\g) \pi_2\otimes\pi_2 + (\nu\otimes\nu)^\bot,\\
(\nu\otimes\nu)^\bot &= \nu^\bot\otimes(\g\pi_2) + (\g\pi_2)\otimes\nu^\bot + \nu^\bot\otimes\nu^\bot.
\end{aligned}
\end{equation}
Recall that the canonical injection $\margc$ reads $\margc(x,r)=[x,r]$. Based on the above Lebesgue decomposition, we define the conic plan
\begin{equation}
\begin{aligned}\label{eq-plan-conic-lifted}
\al &= (\margc(x, \f(x)^{\frac{1}{p}}), \margc(y,\g(y)^{\frac{1}{p}}))_\sharp\pi(x,y) + \delta_{\zc_X}\otimes\margc_\sharp[\nu^\bot\otimes\de_1] + \margc_\sharp[\mu^\bot\otimes\de_1]\otimes\delta_{\zc_Y}.
\end{aligned}
\end{equation}
We have that $\al\in\Uu_p(\mu,\nu)$. Indeed for the first marginal (and similarly for the second) we have for any test function $\xi(x)$
\begin{align*}
\int\xi(x)(r)^p\d\al &= \int\xi(x)\f(x)\d\pi_1(x) + 0 + \int\xi(x)(1)^p\d\mu^\bot(x)\\
&=\int\xi(x)\d(\f(x)\pi_1 + \mu^\bot)\\
&=\int\xi(x)\d\mu(x).
\end{align*}
We define $\theta^*=\theta^*_c(r,s)$ as the parameter which satisfies $H_c(r,s)=\theta^* L_c(r/\theta^*, s/\theta^*)$. We restrict $\al\otimes\al$ to the set $S=\{\theta^*_{\lambda(\Gamma)}((rr')^p, (ss')^p)>0\}$. By construction, $\theta^*_c(r,s)$ is 1-homogeneous in $(r,s)$, so on $S$ we necessarily have $r,r',s,s' >0$. It yields
\begin{align*}
\rest{\al\otimes\al}{S} &= (\margc(x, \f(x)^{\frac{1}{p}}), \margc(y,\g(y)^{\frac{1}{p}}),\margc(x', \f(x')^{\frac{1}{p}}), \margc(y',\g(y')^{\frac{1}{p}}))_\sharp(\pi\otimes\pi).
\end{align*}
Concerning the orthogonal part of the decomposition, note that whenever $\theta^*=0$, by definition of $H$ the cone distance reads
\begin{align}
\Dd([x,r], [y,s])^q = \psi^\prime_\infty(r^p + s^p).
\end{align}
Geometrically, this means that the shortest path between $[x,r]$ and $[y,s]$ must pass through the apex, which corresponds to a pure mass creation/destruction regime.
Furthermore we have that
\begin{align*}
|(\mu\otimes\mu)^\bot| &= \int (r\cdot r')^p \d\rest{(\al\otimes\al)}{S^c},\\
|(\nu\otimes\nu)^\bot| &= \int (s\cdot s')^p \d\rest{(\al\otimes\al)}{S^c}.
\end{align*}
Indeed, thanks to Equation~\eqref{eq-plan-conic-lifted} we have for the first marginal that
\begin{align*}
|(\mu\otimes\mu)^\bot| &= \big(\mu^\bot\otimes(\f\pi_1) + (\f\pi_1)\otimes\mu^\bot + \mu^\bot\otimes\mu^\bot\big)(X^2)\\
&= \int (rr')^p\d\margc_\sharp[\mu^\bot\otimes\de_1]\d\margc(x', \f(x')^{\frac{1}{p}})_\sharp\pi_1(x')\\
&\qquad+ \int (rr')^p\d\margc(x, \f(x)^{\frac{1}{p}})_\sharp\pi_1(x)\d\margc_\sharp[\mu^\bot\otimes\de_1]\\
&\qquad+ \int (rr')^p\d\margc_\sharp[\mu^\bot\otimes\de_1]\d\margc_\sharp[\mu^\bot\otimes\de_1]\\
&= \int (rr')^p \d\rest{(\al\otimes\al)}{S^c}.
\end{align*}
Note that the last equality holds because each term of $\al\otimes\al$ involving a measure $\delta_{\zc_{X}}$ cancels out when integrated against $(rr')^p$.
Finally, thanks to Lemma~\ref{lem-rewrite-ugw}, the computation gives
\begin{align*}
\Ll(\pi) &= \int_{X^2 \times Y^2} L_{\C(\Gamma)}(\f\otimes\f,\g\otimes\g)\d\pi\d\pi+\psi^\prime_\infty(|(\mu\otimes\mu)^\bot| +|(\nu\otimes\nu)^\bot|)\\
&\geq \int H_{\C(\Gamma)}(\f\otimes\f , \g\otimes\g)\d\pi\d\pi + \psi^\prime_\infty(|(\mu\otimes\mu)^\bot| +|(\nu\otimes\nu)^\bot|)\\
&\geq \int \Dd([d_X(x, x'), (\f\otimes\f)^{\frac{1}{p}}], [d_Y(y,y'), (\g\otimes\g)^{\frac{1}{p}}])^q\d\pi\d\pi\nonumber\\
&\qquad+ \int \psi^\prime_\infty(rr')^p \d\rest{(\al\otimes\al)}{S^c} + \int \psi^\prime_\infty(ss')^p \d\rest{(\al\otimes\al)}{S^c}\\
&\geq \int \Dd([d_X(x, x'), rr'], [d_Y(y,y'), ss'])^q\d\rest{(\al\otimes\al)}{S}\nonumber\\
&\qquad+ \int \psi^\prime_\infty((rr')^p + (ss')^p)\d\rest{(\al\otimes\al)}{S^c}\\
&\geq \int \Dd([d_X(x, x'), rr'], [d_Y(y,y'), ss'])^q\d\al\d\al\\
&= \Hh(\al).
\end{align*}
Thus we have $\UGW(\Xx,\Yy)=\Ll(\pi)\geq\Hh(\al)\geq\CGW(\Xx,\Yy)$.
\section{Details on learning experiments}
\label{sec-app-xp}
In this section we provide details on the PU learning experiments of Section~\ref{sec-xp}.
We present the characteristics of the datasets in Table~\ref{table:data-info}. The variance of the accuracy results of Table~\ref{table:data-perf} is reported in Table~\ref{table:data-std}. The computations were made on an internal GPU cluster composed of 10 Tesla K80 and 3 Tesla P100 GPUs. We also detail the parameters of the numerical solver computing UGW, which is the core component of our numerical experiments.
\begin{itemize}
\item The maximum number of iterations to update the plan is set to $3000$.
\item The tolerance on the convergence of $\pi$ in log-scale is set to $\mathrm{tol}=10^{-5}$, i.e. the algorithm stops when $\norm{\log\pi^{t+1} - \log\pi^{t} }_\infty < \mathrm{tol}$.
\item The maximum number of iterations to update the Sinkhorn potentials is set to $3000$.
\item The tolerance on the convergence of $(\f, \g)$ is set to $\mathrm{tol}=10^{-6}$, i.e. the algorithm stops when $\norm{\f^{t+1} - \f^t}_\infty < \mathrm{tol}$.
\end{itemize}
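As a minimal sketch of the nested stopping logic above (not the actual UGW solver), the following skeleton illustrates how the two tolerances interact; `update_plan` and `update_potentials` are hypothetical placeholders for the real update steps.

```python
import numpy as np

def solve_with_tolerances(update_plan, update_potentials, pi0, f0, g0,
                          max_outer=3000, max_inner=3000,
                          tol_plan=1e-5, tol_pot=1e-6):
    """Generic alternating scheme with the stopping rules described above.

    `update_plan` and `update_potentials` stand in for the actual UGW update
    steps; only the convergence logic is illustrated here.
    """
    pi, f, g = pi0, f0, g0
    for _ in range(max_outer):
        # Inner loop: update the Sinkhorn potentials until they stabilize.
        for _ in range(max_inner):
            f_new, g_new = update_potentials(pi, f, g)
            converged = np.max(np.abs(f_new - f)) < tol_pot
            f, g = f_new, g_new
            if converged:
                break
        # Outer loop: update the plan and test convergence in log-scale.
        pi_new = update_plan(pi, f, g)
        if np.max(np.abs(np.log(pi_new) - np.log(pi))) < tol_plan:
            return pi_new, f, g
        pi = pi_new
    return pi, f, g
```

In the experiments this loop would simply be run with the entropic UGW updates in place of the placeholders.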
\begin{table}
\begin{center}
\resizebox{\textwidth}{!}{
\begin{tabular}{ |c|c|c|c|c| }
\hline
Dataset & \# of samples & \# of positives & Dim. & PCA Dim. \\
\hline
*-caltech & 1,123 & 151 & surf: 800 / decaf: 4096 & surf: 10 / decaf: 40 \\
*-amazon & 958 & 92 & surf: 800 / decaf: 4096 & surf: 10 / decaf: 40 \\
*-webcam & 295 & 29 & surf: 800 / decaf: 4096 & surf: 10 / decaf: 40 \\
*-dslr & 157 & 12 & surf: 800 / decaf: 4096 & surf: 10 / decaf: 40 \\
\hline
\end{tabular}
}
\end{center}
\caption{Characteristics of datasets}
\label{table:data-info}
\end{table}
\paragraph{Initialization for cross-domain tasks.}
To initialize UGW when the features are different we propose to use a UOT solution of a matching between distance histograms which reads
\begin{align}\label{eq-def-flb}
\FLB(\Xx,\Yy) \eqdef \min \int_{X\times Y} |\bar{\mu}\star d_X - \bar{\nu}\star d_Y|^2 \d\pi &+ \rho\KL(\pi_1|\mu) + \rho\KL(\pi_2|\nu) \\
&+ \epsilon\KL(\pi|\mu\otimes\nu),\nonumber
\end{align}
where $\mu\star d_X(x) \eqdef \int d_X(x,x')\d\mu(x')$ is the eccentricity, i.e. a histogram of aggregated distances, and $\bar{\mu} = \mu / m(\mu)$.
In~\cite{memoli2011gromov} this relaxation is referred to as FLB and is a lower bound of GW, but in our unbalanced setting this program cannot a priori be compared with UGW.
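The eccentricity map and the resulting FLB ground cost can be illustrated numerically; the points and weights below are made up for the sketch, and we compare a space with itself for brevity.

```python
import numpy as np

# Toy mm-space: three points on the line with (unnormalized) weights.
X = np.array([0.0, 1.0, 3.0])
mu = np.array([0.4, 0.6, 1.0])

D = np.abs(X[:, None] - X[None, :])   # pairwise distances d_X(x, x')
mu_bar = mu / mu.sum()                # bar(mu) = mu / m(mu)

# Eccentricity (bar(mu) * d_X)(x) = sum_x' d_X(x, x') bar(mu)(x').
ecc = D @ mu_bar

# The FLB ground cost compares eccentricities across the two spaces:
# C[x, y] = |(bar(mu) * d_X)(x) - (bar(nu) * d_Y)(y)|^2 (here Y = X, nu = mu).
C = (ecc[:, None] - ecc[None, :]) ** 2
```

Feeding `C` to an entropic UOT solver then yields the initialization plan described above.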
\paragraph{Reducing the number of parameters.}
In Table~\ref{table:data-perf}, the accuracy for $\UGW$ is obtained by selecting a pair of parameters $(\rho_1,\rho_2)$ for each task via a validation protocol detailed in Section~\ref{sec-xp}.
It is desirable to reduce the number of parameters, provided the performance does not significantly decrease, so as to avoid overparameterizing the task.
We propose two strategies in this section.
The first strategy keeps a single pair $(\rho_1,\rho_2)$ for all tasks.
The second strategy keeps one pair for each pair of domain tasks (i.e. surf$\leftrightarrow$surf, decaf$\leftrightarrow$decaf, surf$\leftrightarrow$decaf and decaf$\leftrightarrow$surf), for a total of 8 parameters, which allows each dataset to be normalized adaptively via a suitable choice of parameters $(\rho_1,\rho_2)$.
The validation protocol is modified since we aggregate accuracies from different tasks.
The selected parameters are obtained by taking the highest mean excess accuracy over all tasks, where the excess is defined by comparing the accuracy to the case where we only predict false positives.
This measure of performance is computed on the validation folds, and we report the accuracy over the testing folds in Table~\ref{table:data-appendix}.
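The selection rule by mean excess accuracy can be sketched as follows; the array shapes and the function name are our own conventions, not part of the experimental code.

```python
import numpy as np

def select_params(val_acc, baseline_acc):
    """Return the index of the parameter pair with the highest mean excess
    accuracy over all tasks, where the excess compares each accuracy to a
    trivial baseline predictor on the same task.

    val_acc:      shape (n_param_pairs, n_tasks), validation accuracies.
    baseline_acc: shape (n_tasks,), accuracy of the trivial predictor.
    """
    excess = val_acc - baseline_acc[None, :]
    return int(np.argmax(excess.mean(axis=1)))
```

The selected index is then evaluated on the held-out testing folds.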
\input{sections/array_table_appendix}
\input{sections/array_table_std}
% https://arxiv.org/abs/2009.04266
% The Unbalanced Gromov Wasserstein Distance: Conic Formulation and Relaxation
% https://arxiv.org/abs/2005.02185
\title{A note on total co-independent domination in trees}
\begin{abstract}
A set $D$ of vertices of a graph $G$ is a total dominating set if every vertex of $G$ is adjacent to at least one vertex of $D$. The total domination number of $G$ is the minimum cardinality of any total dominating set of $G$ and is denoted by $\gamma_t(G)$. The total dominating set $D$ is called a total co-independent dominating set if $V(G)\setminus D$ is an independent set and has at least one vertex. The minimum cardinality of any total co-independent dominating set is denoted by $\gamma_{t,coi}(G)$. In this paper, we show that, for any tree $T$ of order $n$ and diameter at least three, $n-\beta(T)\leq \gamma_{t,coi}(T)\leq n-|L(T)|$, where $\beta(T)$ is the maximum cardinality of any independent set and $L(T)$ is the set of leaves of $T$. We also characterize the families of trees attaining the extremal bounds above and show that the differences between the value of $\gamma_{t,coi}(T)$ and these bounds can be arbitrarily large for some classes of trees.
\end{abstract}
\section{Introduction} \label{Intro}
Throughout this work we consider $G=(V(G),E(G))$ to be a simple graph of order $n=|V(G)|$, that is, a graph that is finite and undirected, without loops or multiple edges. Given a vertex $v$ of $G$, $N_G(v)$ represents the \emph{open neighborhood} of $v$, \emph{i.e.}, the set of all neighbors of $v$ in $G$, and the \emph{degree} of $v$ is $\delta(v) = |N_G(v)|$. If $S\subseteq V(G)$, then the \emph{open neighborhood} of $S$ is $N_G(S)=\cup_{v\in S} N_G(v)$. The \emph{minimum} and \emph{maximum degrees} of $G$ are denoted by $\delta(G)$ and $\Delta(G)$, respectively. For any two vertices $u$ and $v$, the \emph{distance} $d(u,v)$ between $u$ and $v$ is the length of a shortest $u$--$v$ path. The \emph{diameter} of $G$ is the largest distance between any two vertices of $G$, and is denoted by $diam(G)$. For any other graph theory terminology we follow the book \cite{west}.
\noindent
A \emph{dominating set} of a graph $G$ is a set of vertices $D\subseteq V(G)$ such that every vertex of $G$ not in $D$ is adjacent to at least one vertex in $D$. The \emph{domination number} of $G$ is the minimum cardinality of a dominating set of $G$ and is denoted by $\gamma(G)$. Domination in graphs is a classical topic and nowadays one of the most active areas of research in graph theory. This can be seen, for instance, through the more than 1620 articles published on the topic (more than 1070 of them in the last 10 years), according to the MathSciNet database with the queries ``domination number'' or ``dominating sets''. The two books \cite{book-dom-1,book-dom-2}, although somewhat outdated by now, contain a significant amount of the most important results on the topic obtained before this century.
\noindent
One interesting research activity on domination in graphs concerns its relationship with other graph parameters. A remarkable case regards vertex independence, \emph{i.e.}, sets of vertices inducing an edgeless graph. The most natural relationship in this direction is clearly the combination of both concepts, which gives rise to the independent domination number, \emph{i.e.}, the minimum cardinality of a dominating set inducing an edgeless graph. Independent domination in graphs was formally introduced in \cite{berge} and \cite{ore} in the early 1960s (a fairly complete survey on this topic was recently published in \cite{indep-dom-surv}). On the other hand, some other investigations connecting domination and independence in graphs can easily be found in the literature. We remark two of them here. The first remarkable case is the search for two disjoint sets in a graph, in which one of the sets is a maximal independent set and the second one a minimal dominating set (a particular case appears when both sets form a partition of the vertex set of the graph). Several studies on this topic have been developed in recent years; the Ph.D. thesis \cite{Lowenstein} contains several results and citations on the subject. A second case is related to finding the minimum cardinality of a dominating set which intersects every maximal independent set of a graph: independent transversal domination (see \cite{hamid}, where the theme was introduced, and also \cite{yero}).
\noindent
Other popular studies on domination in graphs deal with modifying the domination property. One of the most popular parameters obtained in this way is total domination, which differs from standard domination by requiring that all vertices of the graph be dominated, instead of only those outside the set. It is then not surprising that total domination has also been related to independence in graphs. For instance, the total domination version of independent transversal domination is already known from \cite{yero2, ITTD-tree}. However, not much exists concerning the problem of finding two disjoint sets of which one is a total dominating set and the other an independent set. A closely related idea was recently published in \cite{Soner2012}, where the concept of a total co-independent dominating set was introduced. That work contains only a few shallow results. However, the concept itself is interesting and deserves a more detailed study.
\noindent
More formally, a set $D\subseteq V(G)$ is a \emph{total dominating set} of $G$ if every vertex in $V(G)$ is adjacent to at least one vertex in $D$. The \emph{total domination number} of $G$ is the minimum cardinality of any total dominating set of $G$ and is denoted by $\gamma_t(G)$. A $\gamma_t(G)$-\emph{set} is a total dominating set of cardinality $\gamma_t(G)$. For more information on total domination see the survey \cite{Henning2009}. On the other hand, a set $S$ of vertices is \emph{independent} if $S$ induces an edgeless graph. An independent set of maximum cardinality is a \emph{maximum independent set} of $G$. The \emph{independence number} of $G$ is the cardinality of a maximum independent set of $G$ and is denoted by $\beta(G)$. An independent set of cardinality $\beta(G)$ is called a $\beta(G)$-\emph{set}.
\noindent
A total dominating set $D$ of a graph $G$ is called a \emph{total co-independent dominating set} if the set $V(G)\setminus D$ is independent and nonempty. The minimum cardinality of any total co-independent dominating set is denoted by $\gamma_{t,coi}(G)$. A total co-independent dominating set of cardinality $\gamma_{t,coi}(G)$ is a $\gamma_{t,coi}(G)$-\emph{set}. These concepts were introduced and barely studied in \cite{Soner2012}, and were recently studied more deeply in \cite{Cabrera2017-1}. A slightly different version of this parameter was introduced in \cite{k2}, and also recently studied in \cite{MPSY}, where the condition that $V(G)\setminus D$ be nonempty is not required. In these latter works, the total co-independent domination number is called the total outer-independent domination number. Even though both parameters almost always behave in the same manner, we prefer to continue using the terminology of \cite{Soner2012}. Clearly, results on this parameter lead to conclusions on the existence of partitions of the vertex set of a graph into a total dominating set and an independent set, as already mentioned.
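For concreteness, the defining conditions can be checked mechanically on a small graph. The sketch below is only illustrative; the adjacency-dict representation (vertex to set of neighbours) is our own convention.

```python
def is_total_dominating(G, D):
    # Every vertex of G (including those in D) has a neighbour inside D.
    return all(G[v] & D for v in G)

def is_independent(G, S):
    # No two vertices of S are adjacent.
    return all(not (G[u] & S) for u in S)

def is_total_co_independent_dominating(G, D):
    rest = set(G) - D
    return is_total_dominating(G, D) and is_independent(G, rest) and bool(rest)

# Example: the path P4 with vertices 0-1-2-3; the two inner vertices work.
P4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
is_total_co_independent_dominating(P4, {1, 2})
```

Here $\{1,2\}$ is a $\gamma_{t,coi}(P_4)$-set, while $\{0,1\}$ fails because vertex $3$ has no neighbour in it.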
\noindent
Let $T$ be a tree. A \emph{leaf} of $T$ is a vertex of degree one. A \emph{support vertex} of $T$ is a non-leaf vertex adjacent to a leaf, and a \emph{semi-support vertex} is a non-leaf vertex adjacent to a support vertex. By an \emph{isolated support vertex} of $T$ we mean an isolated vertex of the subgraph induced by the support vertices of $T$. The set of leaves is denoted by $L(T)$, the set of support vertices by $S(T)$, and the set of semi-support vertices by $SS(T)$. Moreover, $S^\ast(T)$ is the set of isolated support vertices of $T$. Also, a \emph{double star} is a tree with exactly two adjacent vertices of degree at least two, all remaining vertices being leaves.
\noindent
Studies on characterizing domination-related parameters in trees have been very popular in the last decade, and one can find in the literature several works describing all the trees satisfying diverse properties. To name just a few, we highlight the following examples, which are not necessarily the most remarkable or the most recent ones.
\begin{itemize}
\item In \cite{Chellali2006} it was proved that for any tree $T$ of order $n$ with $l$ leaves, $\gamma_t(T)\ge (n-l+2)/2$, and all the trees achieving this bound were given.
\item In \cite{shan}, a characterization of the family of trees with equal total domination and paired-domination numbers was given.
\item In \cite{harary}, a characterization of trees with equal domination and independent domination numbers was presented.
\item In \cite{domke} it was proved that the restrained domination number of a tree $T$ of order $n$ is bounded below by $\lceil(n+2)/3\rceil$, and all the extremal trees achieving this lower bound were constructively characterized.
\item In \cite{alvarado}, a constructive characterization of the trees for which the Roman domination
number strongly equals the weak Roman domination number was given.
\item In \cite{hen-rall}, the trees with equal total domination and game total domination numbers were characterized.
\item In \cite{Cabrera2018}, a constructive characterization of vertex cover Roman trees was given, that is, trees whose outer independent Roman domination number equals twice its vertex cover number.
\item In \cite{Cabrera2020}, three different characterizations concerning weak Roman domination in trees were presented.
\end{itemize}
Other styles of characterizations of domination parameters in trees were presented in \cite{dorfling,hattingh}. In this sense, in the present work we aim to improve the visibility of this relatively new parameter, the total co-independent domination number, by characterizing several families of trees achieving specific values of this parameter.
\noindent
The total co-independent domination number of a graph $G$ was introduced in \cite{Soner2012}, where a few of its combinatorial properties were dealt with. Among them, a couple of almost trivial bounds in terms of $\beta(G)$ and the order of $G$ were proved for $\gamma_{t,coi}(G)$. For example, for any graph $G$ of order $n$, $n-\beta(G)\leq \gamma_{t,coi}(G)\leq n-1$. It is readily seen that these bounds can be improved in a number of situations. For instance, in the case of a tree $T$, this can be done for the upper bound, as we next show.
\begin{theorem}\label{teo1}
For any tree $T$ of order $n$ and $diam(T)\geq 3$, $n-\beta(T)\leq \gamma_{t,coi}(T)\leq n-|L(T)|.$
\end{theorem}
\begin{proof}
The lower bound was already given in \cite{Soner2012}. Moreover, the upper bound follows from the fact that the set $V(T)\setminus L(T)$ is a total co-independent dominating set, since $T$ has diameter at least three.
\end{proof}
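The sandwich bound above can be verified by brute force on small trees. The sketch below (exponential-time, for illustration only, with adjacency dicts as our own convention) checks it on the path $P_5$.

```python
from itertools import combinations

def is_independent(G, S):
    return all(not (G[u] & S) for u in S)

def gamma_t_coi(G):
    """Smallest total dominating set D with V \\ D independent and nonempty."""
    V = set(G)
    for k in range(1, len(G)):
        for D in map(set, combinations(V, k)):
            if all(G[v] & D for v in V) and is_independent(G, V - D):
                return k
    return None

def beta(G):
    """Independence number by brute force."""
    V = set(G)
    return max(k for k in range(len(G) + 1)
               if any(is_independent(G, set(S)) for S in combinations(V, k)))

# Path P5: 0-1-2-3-4; leaves are 0 and 4, diameter 4 >= 3.
P5 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
n, leaves = len(P5), [v for v in P5 if len(P5[v]) == 1]
g = gamma_t_coi(P5)
# Bound of the theorem: n - beta(T) <= gamma_{t,coi}(T) <= n - |L(T)|.
assert n - beta(P5) <= g <= n - len(leaves)
```

For $P_5$ one finds $\gamma_{t,coi}=3=n-|L(T)|$, while the lower bound $n-\beta(T)=2$ is strict.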
\noindent
One can immediately think of characterizing the trees achieving the bounds for the total co-independent domination number given above. This is done in the next section. Accordingly, from now on we assume that $|S(T)|\geq 2$, since the cases $diam(T)=1$ ($T$ is a $P_2$ and $\gamma_{t,coi}(T)$ is not defined) and $diam(T)=2$ ($T$ is a star graph $S_n$ and $\gamma_{t,coi}(T)=2$) are straightforward.
\noindent
In order to proceed easily with our exposition, from now on we say that a tree $T$ belongs to the family $\mathcal{T}_\beta$ if $\gamma_{t,coi}(T)= n-\beta(T)$, and that $T$ is in the family $\mathcal{T}_L$ if $\gamma_{t,coi}(T)= n-|L(T)|$. Note that, for a given tree $T$, these two conditions coincide if and only if $L(T)$ is a $\beta(T)$-set.
\section{The characterizations} \label{results}
\noindent
In order to provide a constructive characterization of the trees belonging to the family $\mathcal{T}_\beta$, we need to introduce some operations to be performed on a tree. In this regard, by attaching a path $P$ to a vertex $v$ of $T$ we mean adding the path $P$ and joining $v$ to a vertex of $P$.
\begin{description}
\item[Operation $O_1$:] Attach a path $P_1$ to a vertex of $T$, which is in some $\gamma_{t,coi}(T)$-set.
\item[Operation $O_2$:] Attach a path $P_2$ to a vertex of $T$, which is in some $\gamma_{t,coi}(T)$-set.
\item[Operation $O_3$:] Attach a path $P_4$ to a vertex $v$ of $T$, which is in some $\gamma_{t,coi}(T)$-set, by joining $v$ to a leaf of $P_4$.
\item[Operation $O_4$:] Attach a path $P_4$ to a vertex $v$ of $T$, which is in some $\beta(T)$-set, by joining $v$ to a support of $P_4$.
\end{description}
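All four operations reduce to attaching a path at a chosen vertex. As an illustrative sketch (adjacency dicts are our own convention, and the $\gamma_{t,coi}$- or $\beta$-set membership of the chosen vertex is left as a precondition), they can be mimicked as follows.

```python
def attach_path(T, v, k, join_at=0):
    """Attach a path P_k to vertex v of the tree T (adjacency dict), joining
    v to the `join_at`-th vertex of the new path.  Mutates T in place."""
    new = [max(T) + 1 + i for i in range(k)]
    for u in new:
        T[u] = set()
    for a, b in zip(new, new[1:]):
        T[a].add(b)
        T[b].add(a)
    T[v].add(new[join_at])
    T[new[join_at]].add(v)
    return new

# Start from P4 (0-1-2-3) and mimic the four operations:
T = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
attach_path(T, 1, 1)             # O1: attach P1 to a gamma-set vertex
attach_path(T, 1, 2)             # O2: attach P2 to a gamma-set vertex
attach_path(T, 1, 4, join_at=0)  # O3: join v to a leaf of the new P4
attach_path(T, 0, 4, join_at=1)  # O4: join a beta-set vertex to a support of P4
```

The resulting tree has $4+1+2+4+4=15$ vertices and, by Lemma \ref{lem-right}, belongs to $\mathcal{T}_\beta$.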
\noindent
Let $\mathcal{T}$ be the family of trees defined as $\mathcal{T} = \{T \mid T $ is $P_4$ or is a tree obtained from $P_4$ by a finite
sequence of operations $O_1, O_2, O_3, O_4\}$. We first show that every tree in the family $\mathcal{T}$ belongs to the family $\mathcal{T}_\beta$.
\begin{lemma}\label{lem-right}
If $T \in \mathcal{T}$, then $T\in \mathcal{T}_\beta$.
\end{lemma}
\begin{proof}
\noindent
We proceed by induction on the number $r(T)$ of operations required to construct the tree $T$. If $r(T)=0$, then $T=P_4$ and $T \in \mathcal{T}_\beta$. This establishes the base case. Hence, we now assume that $k\geq 1$ is an integer and that each tree $T' \in \mathcal{T}$ with $ r(T')<k$ satisfies that $T'\in \mathcal{T}_\beta$.
Let $T \in \mathcal{T}$ be a tree with $r(T)=k$. Then, $T$ can be obtained from a tree $T' \in \mathcal{T}$ with $ r(T')=k-1$ by one of the operations $O_1, O_2, O_3$ or $O_4$. We shall prove that $T \in \mathcal{T}_\beta$. To this end, as $T'\in \mathcal{T_\beta}$, let $D'$ be a $\gamma_{t,coi}(T')$-set containing no leaves and let $B'=V(T')\setminus D'$ be a $\beta(T')$-set containing all leaves. We consider the following situations.\\
\noindent
\textbf{Case 1.} $T$ is obtained from $T'$ by operation $O_1$. Assume $T$ is obtained from $T'$ by adding the vertex $u$ and the edge $uv$, where $v\in D'$. Notice that $u$ is a leaf of $T$ and that $B'\cup \{u\}$ is a $\beta(T)$-set; otherwise, since only one vertex has been added to $T'$ to obtain $T$, there would exist an independent set in $T'$ with more vertices than $B'$, which is not possible. On the other hand, $D'$ is a total co-independent dominating set of $T$. Thus, $\gamma_{t,coi}(T)\le |D'|= |V(T)|-(\beta(T')+1)=|V(T)|-\beta(T)$, and by Theorem \ref{teo1}, $\gamma_{t,coi}(T)=|V(T)|-\beta(T)$, which means $T\in \mathcal{T}_\beta$.\\
\noindent
\textbf{Case 2.} $T$ is obtained from $T'$ by operation $O_2$. Assume $T$ is obtained from $T'$ by adding the path $u_1u_2$ and the edge $u_1v$, where $v\in D'$. Note that the vertex $u_2$ is a leaf of $T$ and that $u_1$ is its support vertex. So, $B'\cup\{u_2\}$ is a $\beta(T)$-set, since at most one vertex of the pair $u_1,u_2$ can be added to any independent set of $T'$ to obtain an independent set of $T$. It is now not difficult to see that $D = D' \cup \{u_1\}$ is a total co-independent dominating set of $T$. Hence, $\gamma_{t,coi}(T)\le |D|=|D'| + 1=|V(T')|-\beta(T') + 1=|V(T)|-2 -(\beta(T)-1)+1=|V(T)|-\beta(T)$. Therefore, by Theorem \ref{teo1}, $\gamma_{t,coi}(T)= |V(T)|-\beta(T)$ and $T \in \mathcal{T}_\beta$.\\
\noindent
\textbf{Case 3.} $T$ is obtained from $T'$ by operation $O_3$. Assume $T$ is obtained from $T'$ by adding the path $P_4=h_1u_1u_2h_2$ to a vertex $v\in D'$ through the edge $vh_1$. By using some similar reasons as in the case above, it is easily seen that $B=B'\cup \{h_1,h_2\}$ is a $\beta(T)$-set and that $D=D' \cup \{u_1,u_2\}$ is a total co-independent dominating set of $T$. So, $\gamma_{t,coi}(T)\le |D|=|D'|+2=|V(T')|-\beta(T')+2=(|V(T)|-4)-(\beta(T)-2)+2=|V(T)|-\beta(T)$. Therefore, by Theorem \ref{teo1}, $\gamma_{t,coi}(T)= |V(T)|-\beta(T)$ and $T \in \mathcal{T}_\beta$.\\
\noindent
\textbf{Case 4.} $T$ is obtained from $T'$ by operation $O_4$. Assume $T$ is obtained from $T'$ by adding a path $P_4=h_1u_1u_2h_2$ to a vertex $v$ of $T'$, which is in some $\beta(T')$-set, through the edge $vu_1$. Note that the set $B=B' \cup \{h_1,h_2\}$ is an independent set of $T$. Moreover, since at most two vertices of the path $P_4$ can lie in any $\beta(T)$-set, it must happen that $B$ is a $\beta(T)$-set; otherwise there would be an independent set in $T'$ of cardinality larger than $\beta(T')$, which is not possible. Now, it is readily seen that $D=D'\cup \{u_1,u_2\}$ is a total co-independent dominating set of $T$. Thus, $\gamma_{t,coi}(T)\le |D|=|D'|+2=|V(T')|-\beta(T')+2=(|V(T)|-4)-(\beta(T)-2)+2=|V(T)|-\beta(T)$. Therefore, again as above, by Theorem \ref{teo1}, $\gamma_{t,coi}(T)= |V(T)|-\beta(T)$ and $T \in \mathcal{T}_\beta$.
\end{proof}
\noindent
We now turn our attention to the converse of the lemma above. From now on we shall need the following terminology and notation. Given a tree $T$ and a set $S\subsetneq V(T)$, by $T-S$ we denote the tree obtained from $T$ by removing all the vertices in $S$ together with their incident edges (if $S=\{v\}$ for some vertex $v$, then we simply write $T-v$). For an integer $r\geq 2$, by $Q_r$ we mean the graph obtained from a path $P_{r+2}=v s s_1 s_2 \ldots s_r$ by attaching a path $P_1$ to every vertex of $P_{r+2}-v$. Figure \ref{figure-1} shows the tree $Q_5$.
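The tree $Q_r$ can be generated programmatically; the following sketch (adjacency dicts again, our own convention) builds it exactly as described.

```python
def Q(r):
    """Build Q_r: a path v, s, s_1, ..., s_r (r + 2 vertices) with one pendant
    leaf (a copy of P_1) attached to every vertex except v; cf. Figure 1."""
    T = {i: set() for i in range(r + 2)}   # 0 = v, 1 = s, 2..r+1 = s_1..s_r
    for a in range(r + 1):                 # edges of the path P_{r+2}
        T[a].add(a + 1)
        T[a + 1].add(a)
    for u in range(1, r + 2):              # attach a P_1 to every vertex but v
        leaf = len(T)
        T[leaf] = {u}
        T[u].add(leaf)
    return T

T3 = Q(3)   # 5-vertex path plus 4 pendant leaves: 9 vertices in total
```

In particular, $Q_5$ has $7+6=13$ vertices, matching Figure \ref{figure-1}.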
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=.6, transform shape]
\node [draw, shape=circle] (s) at (0,0) {};
\node at (0.5,-0.5) {\Large $s$};
\node [draw, shape=circle] (s1) at (0,1.5) {};
\node [draw, shape=circle] (s2) at (0,-1.5) {};
\node at (0.5,-1.5) {\Large $v$};
\node [draw, shape=circle] (a1) at (1.5,0) {};
\node at (1.5,-0.5) {\Large $s_1$};
\node [draw, shape=circle] (a11) at (1.5,1.5) {};
\node [draw, shape=circle] (a2) at (3,0) {};
\node at (3,-0.5) {\Large $s_2$};
\node [draw, shape=circle] (a21) at (3,1.5) {};
\node [draw, shape=circle] (a3) at (4.5,0) {};
\node at (4.5,-0.5) {\Large $s_3$};
\node [draw, shape=circle] (a31) at (4.5,1.5) {};
\node [draw, shape=circle] (a4) at (6,0) {};
\node at (6,-0.5) {\Large $s_4$};
\node [draw, shape=circle] (a41) at (6,1.5) {};
\node [draw, shape=circle] (a5) at (7.5,0) {};
\node at (7.5,-0.5) {\Large $s_5$};
\node [draw, shape=circle] (a51) at (7.5,1.5) {};
\draw(s)--(a1)--(a2)--(a3)--(a4)--(a5);
\draw(s)--(s1);
\draw(s)--(s2);
\draw(a1)--(a11);
\draw(a2)--(a21);
\draw(a3)--(a31);
\draw(a4)--(a41);
\draw(a5)--(a51);
\end{tikzpicture}
\caption{The structure of the tree $Q_5$.}
\label{figure-1}
\end{figure}
\noindent
We next show that every tree of the family $\mathcal{T}_\beta$ belongs to the family $\mathcal{T}$. To this end, we need the following remark.
\begin{remark}\label{dist-B}
Let $B$ be a $\beta(T)$-set of a tree $T$. Then for every $v\in B$ there exists a vertex $u\in B$ such that $d(v,u)\leq 3$.
\end{remark}
\begin{lemma}\label{lem-left}
If $T\in \mathcal{T}_\beta$, then $T \in \mathcal{T}$.
\end{lemma}
\begin{proof}
\noindent
We proceed by induction on the order $n\geq 4$ of the trees $T\in \mathcal{T}_\beta$. If $T$ is a double star, then $T$ can be obtained from $P_4$ by repeatedly applying operation $O_1$. This establishes the base case. We assume that $k > 4$ is an integer and that each tree $T' \in \mathcal{T}_\beta$ with $|V(T')|<k$ satisfies that $T'\in \mathcal{T}$.
\noindent
Let $T$ be a tree such that $T\in \mathcal{T}_\beta$ and $|V(T)|=k$. Let $(D,B)$ be a partition of $V(T)$ where $D$ is a $\gamma_{t,coi}(T)$-set containing no leaves and $B$ is a $\beta(T)$-set containing all leaves. We analyze the following situations.\\
\noindent
\textbf{Case 1: $|S(T)|<|L(T)|$.} We consider a support vertex $v$ that is adjacent to at least two leaves. Let $h\in N(v)\cap L(T)$ and $T'=T-h$. Since $h\in B$, it follows that $B'=B\setminus \{h\}$ is an independent set in $T'$, which is moreover a $\beta(T')$-set, since otherwise we would obtain an independent set in $T$ of cardinality larger than $\beta(T)$, which is not possible. Moreover, it is easily seen that $D$ is also a total co-independent dominating set of $T'$. Thus, $\gamma_{t,coi}(T')\le |D|=\gamma_{t,coi}(T)=|V(T)|-\beta(T)=|V(T')|+1-(\beta(T')+1)=|V(T')|-\beta(T')$. So, by Theorem \ref{teo1}, $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$ and $T' \in \mathcal{T}_\beta$. Now, by the induction hypothesis, $T' \in \mathcal{T}$. Therefore, since $T$ can be obtained from $T'$ by operation $O_1$, it follows that $T \in \mathcal{T}$.\\
\noindent
\textbf{Case 2: $|S(T)|=|L(T)|$ and $|SS(T)|=0$.} In this case we note that $V(T)=S(T)\cup L(T)$ and it is easily observed that $|S(T)|\geq 3$, $S(T)$ is a $\gamma_{t,coi}(T)$-set and $L(T)$ is a $\beta(T)$-set. Let $s\in S(T)$ such that $|N(s)\cap S(T)|=1$ (note that such $s$ always exists) and let $h\in L(T)$ be the leaf adjacent to $s$. First notice that the vertex $x\in N(s)\cap S(T)$ must have at least two neighbors in $S(T)$, otherwise $T$ is $P_4$, which is not possible. Hence, we observe that $B'=B\setminus\{h\}$ is an independent set of $T'=T-\{h,s\}$. Moreover, by a similar reason as in the case above, $B'$ is also a $\beta(T')$-set. On the other hand, $S(T')=S(T) \setminus \{s\}$ is clearly a total co-independent dominating set of $T'$. So, $\gamma_{t,coi}(T')\le |S(T')|=|S(T)|-1=|V(T)|-|L(T)|-1=|V(T')|+2-(|L(T')|+1)-1=|V(T')|-\beta(T')$. Thus, by Theorem \ref{teo1}, $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$ and $T' \in \mathcal{T}_\beta$. By induction hypothesis $T' \in \mathcal{T}$. Since $T$ can be obtained from $T'$ by operation $O_2$, we get $T \in \mathcal{T}$.\\
\noindent
\textbf{Case 3: $|S(T)|=|L(T)|$ and $|SS(T)|>0$.} Herein we denote by $P(x,y)$ the set of vertices of one shortest path between $x$ and $y$, including $x$ and $y$. Let $h,h'$ be two leaves at the maximum possible distance in $T$ such that there is $v \in SS(T)\cap P(h,h')$ with $d(v,h)=2$ or $d(v,h')=2$. Without loss of generality assume that $d(v,h)=2$ and let $s$ be the support adjacent to $h$. Since $|S(T)|=|L(T)|$, we observe that $N(s)\subseteq S(T)\cup \{h,v\}$ and also every support vertex is adjacent to exactly one leaf. We have now some possible scenarios.\\
\noindent
\textbf{Case 3.1 $|N(s)\cap S(T)|=1$.} Hence, by the maximality of the path $P(h,h')$, it must happen that $T$ has an induced subgraph isomorphic to a graph $Q_r$, as previously described, obtained from the vertices $v, s, h$ and some supports, say $s_1,s_2, \ldots ,s_r\in S(T)$, with the leaves $h_1,h_2,\ldots,h_r$, adjacent to the supports $s_1,s_2, \ldots ,s_r$, respectively.
\noindent
Assume $r=1$. Note that $s,s_1\in D$ and $h,h_1\in B$. We first consider $v\in D$. Let $T'=T-\{h_1,s_1\}$, $D'=D\setminus\{s_1\}$ and $B'=B\setminus\{h_1\}$. Clearly, $B'$ is a $\beta(T')$-set. Also, we note that $D'$ is a total co-independent dominating set of $T'$. So, $\gamma_{t,coi}(T')\le |D'|=|D|-1=(|V(T)|-|B|)-1=(|V(T')|+2)-(\beta(T')+1)-1=|V(T')|-\beta(T')$. Thus, by Theorem \ref{teo1}, $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$ and $T' \in \mathcal{T}_\beta$. By inductive hypothesis $T' \in \mathcal{T}$, and since $T$ can be obtained from $T'$ by operation $O_2$, we obtain $T \in \mathcal{T}$.
We now consider $v\in B$. Let $T'=T-\{s,h,s_1,h_1\}$, $D'=D \setminus \{s, s_1\}$ and $B'=B\setminus\{h,h_1\}$. We can again deduce $B'$ is a $\beta(T')$-set since at most two vertices of $\{s,h,s_1,h_1\}$ can belong to any independent set of $T$. Moreover, $D'$ is a total co-independent dominating set of $T'$. So, $\gamma_{t,coi}(T')\le |D'|=|D|-2=(|V(T)|-|B|)-2=(|V(T')|+4)-(\beta(T')+2)-2=|V(T')|-\beta(T')$ and, by Theorem \ref{teo1}, we get $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$, which leads to $T' \in \mathcal{T}_\beta$. By inductive hypothesis $T' \in \mathcal{T}$, and together with the fact that $T$ can be obtained from $T'$ by operation $O_4$, the required result $T \in \mathcal{T}$ follows.
\noindent
Assume now that $r\geq 2$. Since $s,s_1,\ldots,s_r\in D$ and $h,h_1,\ldots,h_r\in B$, if $T'=T-\{h_r,s_r\}$, then it is readily seen that $B'=B\setminus\{h_r\}$ is a $\beta(T')$-set and that $D'=D \setminus \{s_r\}$ is a total co-independent dominating set of $T'$. By using a similar procedure as above ($r=1$) we obtain $T' \in \mathcal{T}$ and, due to that $T$ can be obtained from $T'$ by operation $O_2$, it follows $T \in \mathcal{T}$.\\
\noindent
\textbf{Case 3.2 $|N(s)\cap S(T)|>1$.} We note that this case is analogous to the case above where $|N(s)\cap S(T)|=1$ and $r\geq 2$.\\
\noindent
\textbf{Case 3.3: $|N(s)\cap S(T)|=0$.} Clearly, $s$ has degree two, since it has one leaf neighbor, no support neighbors, and cannot have more than one (in fact, it has exactly one) semi-support neighbor due to the maximality of $P(h,h')$. Also, it must happen that $v,s\in D$ and $h\in B$. Assume the subgraph induced by $P(h,h')$ is $h\, s\, v\, u_1 u_2\ldots s'\, h'$, where $h, h' \in L(T)$ and $s,s'\in S(T)$. Note that $N(v)\subseteq S(T)\cup \{u_1\}$. We again consider some possible scenarios.\\
\noindent
\textbf{Case 3.3.1: $|N(v)\cap S(T)|>1$.} In this case, the vertex $v$ is also totally dominated by another support vertex different from $s$. We note that $u_1\in B$, since otherwise $B\cup\{v\}$ would be an independent set of cardinality larger than that of $B$. Hence, we consider the tree $T'=T-\{h,s\}$ and the sets $D'=D \setminus \{s\}$, $B'=B\setminus\{h\}$. It can be deduced as above that $B'$ is a $\beta(T')$-set. Moreover, $D'$ is a total co-independent dominating set of $T'$. So, $\gamma_{t,coi}(T')\le |D'|=|D|-1=(|V(T)|-|B|)-1=(|V(T')|+2)-(\beta(T')+1)-1=|V(T')|-\beta(T')$. Thus, by Theorem \ref{teo1}, we obtain $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$, which means $T' \in \mathcal{T}_\beta$, and by the inductive hypothesis $T' \in \mathcal{T}$. Furthermore, $T$ can be obtained from $T'$ by operation $O_2$, which allows us to claim that $T \in \mathcal{T}$.\\
\noindent
\textbf{Case 3.3.2: $|N(v)\cap S(T)|=1$.} Clearly, $s$ and $v$ have degree two. We first consider the case when $N(u_1)=\{v,u_2\}$. By Remark \ref{dist-B} we note that $u_1\in B$, and consequently $u_2\in D$. Let $T'=T-\{h,s,v,u_1\}$, $D'=D \setminus\{v,s\}$ and $B'=B\setminus\{h,u_1\}$. Now, similarly to Case 3.1 (where $r=1$ and $v\in B$), we can deduce that $B'$ is a $\beta(T')$-set. Also, $D'$ is a total co-independent dominating set of $T'$. So, $\gamma_{t,coi}(T')\le |D'|=|D|-2=(|V(T)|-|B|)-2=(|V(T')|+4)-(\beta(T')+2)-2=|V(T')|-\beta(T')$, and, by Theorem \ref{teo1}, $\gamma_{t,coi}(T')=|V(T')|-\beta(T')$. Thus, $T' \in \mathcal{T}_\beta$. Hence, from the inductive hypothesis $T' \in \mathcal{T}$, together with the fact that $T$ can be obtained from $T'$ by operation $O_3$, we deduce that $T\in \mathcal{T}$.
\noindent
On the other hand, assume that there is a vertex $w\in N(u_1)\setminus\{v,u_2\}$. Since $u_1\in B$, we must also have $w\in D$, and consequently $w$ has a neighbor belonging to $D$ (which is clearly not $u_1$). Moreover, by the maximality of $P(h',h)$, we note that $w\notin SS(T)$, since otherwise $B' = (B\setminus\{u_1\}) \cup \{v,w\}$ would be an independent set of cardinality larger than that of $B$. Hence, it follows that $w\in S(T)\setminus S^\ast(T)$.
\noindent
Let $x\in V(T)$ be a leaf adjacent to the support $w$. We note that $T$ must have an induced subgraph isomorphic to a graph $Q_r$, obtained from the vertices $u_1, w, x$ and some supports, say $w_1,w_2, \ldots, w_r\in S(T)$, with leaves $x_1,x_2,\ldots,x_r$ adjacent to the supports $w_1,w_2, \ldots, w_r$, respectively. By a procedure similar to that of Case $3.1$, it follows that $T \in \mathcal{T}$, which completes the proof.
\end{proof}
\noindent
As an immediate consequence of Lemma \ref{lem-right} and Lemma \ref{lem-left} we have the following characterization.
\begin{theorem}\label{teo-lower}
Let $T$ be a tree. Then $T\in \mathcal{T}_\beta$ if and only if $T \in \mathcal{T}$.
\end{theorem}
\noindent
We next see that all the operations $O_1$ to $O_4$ are required in the characterization above. First, it is easy to see that operation $O_1$ is required to obtain a double star from the path $P_4$. The examples given in Figure \ref{figure-2} show that operations $O_2$, $O_3$ and $O_4$ are also required.
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=.5, transform shape]
\node [draw, shape=circle] (s1) at (-1.5,0) {};
\node [draw, shape=circle] (s2) at (0,0) {};
\node [draw, shape=circle] (s3) at (1.5,0) {};
\node [draw, shape=circle] (s4) at (3,0) {};
\node [draw, shape=circle] (s21) at (0,-1.5) {};
\node [draw, shape=circle] (s22) at (0,-3) {};
\node [draw, shape=circle] (s23) at (0,-4.5) {};
\node [draw, shape=circle] (s24) at (0,-6) {};
\node [draw, shape=circle] (s31) at (1.5,-1.5) {};
\node [draw, shape=circle] (s32) at (1.5,-3) {};
\node [draw, shape=circle] (s33) at (1.5,-4.5) {};
\node [draw, shape=circle] (s34) at (1.5,-6) {};
\draw(s1)--(s2)--(s3)--(s4);
\draw(s2)--(s21)--(s22)--(s23)--(s24);
\draw(s3)--(s31)--(s32)--(s33)--(s34);
\node [left] at (-1.3,-5.25) {\Large(I)};
\end{tikzpicture}
\hspace*{0.7cm}
\begin{tikzpicture}[scale=.5, transform shape]
\node [draw, shape=circle] (s1) at (0,0) {};
\node [draw, shape=circle] (s2) at (1.5,0) {};
\node [draw, shape=circle] (s3) at (3,0) {};
\node [draw, shape=circle] (s4) at (-1.5,0) {};
\node [draw, shape=circle] (s5) at (-3,0) {};
\node [draw, shape=circle] (s6) at (-4.5,0) {};
\node [draw, shape=circle] (s21) at (1.5,1.5) {};
\node [draw, shape=circle] (s31) at (3,1.5) {};
\node [draw, shape=circle] (s11) at (0,-1.5) {};
\node [draw, shape=circle] (s12) at (0,-3) {};
\node [draw, shape=circle] (s111) at (-1.5,-1.5) {};
\node [draw, shape=circle] (s1111) at (-1.5,-3) {};
\draw(s1)--(s2)--(s3);
\draw(s1)--(s4)--(s5)--(s6);
\draw(s2)--(s21);
\draw(s3)--(s31);
\draw(s1)--(s11)--(s12);
\draw(s11)--(s111)--(s1111);
\node [left] at (-2.8,-2.25) {\Large(II)};
\end{tikzpicture}
\hspace*{0.7cm}
\begin{tikzpicture}[scale=.5, transform shape]
\node [draw, shape=circle] (s1) at (0,0) {};
\node [draw, shape=circle] (s2) at (0,1.5) {};
\node [draw, shape=circle] (s3) at (0,3) {};
\node [draw, shape=circle] (s4) at (0,4.5) {};
\node [draw, shape=circle] (s5) at (0,6) {};
\node [draw, shape=circle] (s31) at (-1.5,3) {};
\draw(s1)--(s2)--(s3)--(s4)--(s5);
\draw(s3)--(s31);
\node [left] at (-1.3,0.75) {\Large(III)};
\end{tikzpicture}
\caption{The tree (I) can only be obtained from $P_4$ by a sequence of operations $O_3, O_3$ or $O_4, O_3$; the tree (II) can only be obtained from $P_4$ by a sequence of operations $O_4, O_4$ or $O_3,O_4$ and the tree (III) can only be obtained from $P_4$ by the operation $O_2$.}
\label{figure-2}
\end{figure}
\noindent
Now, in order to characterize the trees $T$ such that $T\in \mathcal{T}_L$, we need some preliminary results, which we present next.
\begin{theorem}\label{teo-min}{\em \cite{Soner2012}}
A total co-independent dominating set $D$ of a graph $G$ is minimal if and only if for each vertex $v\in D$, one of the following conditions is satisfied.
\begin{itemize}
\item[\em{(i)}] There exists a vertex $u \in V(G)$ such that $ N(u)\cap D=\{v\}$.
\item[\em{(ii)}] There exists $w\in V(G)\setminus D$ adjacent to $v$.
\end{itemize}
\end{theorem}
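The criterion above can be compared with the definition of minimality by exhaustive search on a small tree. The following sketch (in Python; the spider tree and all labels are chosen here only for illustration) checks, for every total co-independent dominating set of the tree, that the criterion of Theorem \ref{teo-min} agrees with subset-minimality:

```python
from itertools import combinations

# Spider tree: path 0-1-2-3-4 with an extra leaf 5 attached to the center 2
# (illustrative small example; vertex labels are arbitrary).
adj = {0: {1}, 1: {0, 2}, 2: {1, 3, 5}, 3: {2, 4}, 4: {3}, 5: {2}}
V = set(adj)

def is_tcoi(D):
    """Total co-independent dominating set: every vertex has a neighbor
    in D, and the complement of D is a nonempty independent set."""
    D = set(D)
    rest = V - D
    return (bool(rest)
            and all(adj[v] & D for v in V)
            and all(adj[u].isdisjoint(rest) for u in rest))

def minimal_by_definition(D):
    # minimal: no proper subset of D is a total co-independent dominating set
    return not any(is_tcoi(S) for r in range(len(D))
                   for S in combinations(sorted(D), r))

def minimal_by_theorem(D):
    # v in D satisfies (i): some u has N(u) & D == {v}, or
    # (ii): some neighbor of v lies outside D
    def ok(v):
        has_private = any(adj[u] & D == {v} for u in V)
        has_outside = any(w not in D for w in adj[v])
        return has_private or has_outside
    return all(ok(v) for v in D)

# the two notions agree on every total co-independent dominating set
for r in range(1, len(V) + 1):
    for D in map(set, combinations(sorted(V), r)):
        if is_tcoi(D):
            assert minimal_by_definition(D) == minimal_by_theorem(D)
```

For instance, $D=\{1,2,3\}$ (the three internal path vertices) is minimal by both criteria, while $D=\{1,2,3,4\}$ fails condition (i) and (ii) at the vertex $4$.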
\begin{lemma}\label{lem-upper}
If $T\in \mathcal{T}_L$ and $S^\ast(T)\neq \emptyset$, then $SS(T)\subseteq N(S^\ast(T))$.
\end{lemma}
\begin{proof}
\noindent
Let $D$ be a $\gamma_{t,coi}(T)$-set containing no leaves. Clearly, $T\in \mathcal{T}_L$ implies that every vertex of $T$ is either a leaf or belongs to $D$. Note that if $S^\ast(T)\neq \emptyset$, then $SS(T)\neq \emptyset$. Let $v\in SS(T)$. Since $D$ is a $\gamma_{t,coi}(T)$-set and $v$ is not adjacent to a leaf, by Theorem \ref{teo-min}, there exists a vertex $u\in D$ such that $ N(u)\cap D=\{v\}$. Hence, it is easily observed that $u\in S^\ast(T)$. Therefore, $v\in N(S^\ast(T))$, which completes the proof.
\end{proof}
\begin{corollary}\label{cor-upper}
If $T\in \mathcal{T}_L$ and $S^\ast(T)=\emptyset$, then $V(T)=L(T)\cup S(T)$.
\end{corollary}
The next theorem contains a characterization for those trees satisfying $T\in \mathcal{T}_L$.
\begin{theorem}\label{teo-upper}
Let $T$ be any tree of order $n$ with ${\rm diam}(T)\ge 3$. Then $T\in \mathcal{T}_L$ if and only if $V(T)=L(T)\cup S(T)\cup SS(T)$ and $SS(T)\subseteq N(S^\ast(T))$.
\end{theorem}
\begin{proof}
\noindent
We first assume that $T\in \mathcal{T}_L$ and, since $diam(T)\ge 3$, let $D$ be a $\gamma_{t,coi}(T)$-set containing no leaves. If $S^\ast(T)=\emptyset$, then, by Corollary \ref{cor-upper}, it follows that $V(T)=L(T)\cup S(T)$. If $S^\ast(T)\neq \emptyset$, then, by using Lemma \ref{lem-upper}, we get that $SS(T)\subseteq N(S^\ast(T))$. Hence, we note that $D\setminus(SS(T)\cup S(T))$ is empty. Otherwise, there exists $v\in N(SS(T))\setminus S(T)$ such that $D\setminus\{v\}$ is a total co-independent dominating set, which contradicts the fact that $T\in \mathcal{T}_L$. Therefore $V(T)=L(T)\cup S(T)\cup SS(T)$ and $SS(T)\subseteq N(S^\ast(T))$.
\noindent
On the other hand, if we consider that $V(T)=L(T)\cup S(T)\cup SS(T)$ and $SS(T)\subseteq N(S^\ast(T))$, then it is readily seen that $T\in \mathcal{T}_L$, which completes the proof.
\end{proof}
\noindent
An interesting question arising from Theorems \ref{teo-lower} and \ref{teo-upper} is the following: can the differences $\gamma_{t,coi}(T)-(|V(T)|-\beta(T))$ and $(|V(T)|-|L(T)|)-\gamma_{t,coi}(T)$ be arbitrarily large? We next give an affirmative answer to this question. To this end, we require the following family of trees $\mathcal{F}$. Given two positive integers $b,d$, a tree $T_{b,d}\in \mathcal{F}$ is defined as follows.
\begin{itemize}
\item We begin with a tree $T$ of order $n=b+d$ with vertex set $V(T)=\{u_1,\ldots,u_b,v_1,\ldots,v_d\}$.
\item Attach two paths $P_1$ to every vertex of $T$.
\item Attach a star $S_3$ to every vertex $u_i\in V(T)$, $i\in \{1,\ldots,b\}$, by adding an edge between $u_i$ and a leaf of the star $S_3$.
\item Attach a star $S_3$ with a subdivided edge to every vertex $v_i\in V(T)$, $i\in \{1,\ldots,d\}$, by adding an edge between $v_i$ and the leaf corresponding to the subdivided edge.
\end{itemize}
An example of a tree of the family $\mathcal{F}$ is given in Figure \ref{family-F-fig}.
\begin{figure}[h]
\centering
\hspace*{0.7cm}
\begin{tikzpicture}[scale=.5, transform shape]
\node [draw, shape=circle] (u1) at (1.5,-2) {};
\node [draw, shape=circle] (u2) at (5.5,-2) {};
\node [draw, shape=circle] (v1) at (9.5,-2) {};
\node [draw, shape=circle] (v2) at (13.5,-2) {};
\node [draw, shape=circle] (v3) at (17.5,-2) {};
\node [draw, shape=circle] (s1) at (0,0) {};
\node [draw, shape=circle] (s2) at (1.5,0) {};
\node [draw, shape=circle] (s3) at (3,0) {};
\node [draw, shape=circle] (s4) at (4,0) {};
\node [draw, shape=circle] (s5) at (5.5,0) {};
\node [draw, shape=circle] (s6) at (7,0) {};
\node [draw, shape=circle] (s7) at (8,0) {};
\node [draw, shape=circle] (s8) at (9.5,0) {};
\node [draw, shape=circle] (s9) at (11,0) {};
\node [draw, shape=circle] (s10) at (12,0) {};
\node [draw, shape=circle] (s11) at (13.5,0) {};
\node [draw, shape=circle] (s12) at (15,0) {};
\node [draw, shape=circle] (s13) at (16,0) {};
\node [draw, shape=circle] (s14) at (17.5,0) {};
\node [draw, shape=circle] (s15) at (19,0) {};
\node [draw, shape=circle] (r1) at (9.5,2) {};
\node [draw, shape=circle] (r2) at (13.5,2) {};
\node [draw, shape=circle] (r3) at (17.5,2) {};
\node [draw, shape=circle] (ss1) at (0,4) {};
\node [draw, shape=circle] (ss2) at (1.5,4) {};
\node [draw, shape=circle] (ss3) at (3,4) {};
\node [draw, shape=circle] (ss4) at (4,4) {};
\node [draw, shape=circle] (ss5) at (5.5,4) {};
\node [draw, shape=circle] (ss6) at (7,4) {};
\node [draw, shape=circle] (ss7) at (8,4) {};
\node [draw, shape=circle] (ss8) at (9.5,4) {};
\node [draw, shape=circle] (ss9) at (11,4) {};
\node [draw, shape=circle] (ss10) at (12,4) {};
\node [draw, shape=circle] (ss11) at (13.5,4) {};
\node [draw, shape=circle] (ss12) at (15,4) {};
\node [draw, shape=circle] (ss13) at (16,4) {};
\node [draw, shape=circle] (ss14) at (17.5,4) {};
\node [draw, shape=circle] (ss15) at (19,4) {};
\draw(u1)--(u2)--(v1)--(v2)--(v3);
\draw(s1)--(u1)--(s3);
\draw(s4)--(u2)--(s6);
\draw(s7)--(v1)--(s9);
\draw(s10)--(v2)--(s12);
\draw(s13)--(v3)--(s15);
\draw(ss1)--(ss2)--(ss3);
\draw(ss4)--(ss5)--(ss6);
\draw(ss7)--(ss8)--(ss9);
\draw(ss10)--(ss11)--(ss12);
\draw(ss13)--(ss14)--(ss15);
\draw(u1)--(s2)--(ss2);
\draw(u2)--(s5)--(ss5);
\draw(v1)--(s8)--(r1)--(ss8);
\draw(v2)--(s11)--(r2)--(ss11);
\draw(v3)--(s14)--(r3)--(ss14);
\node [below] at (1.5,-2.5) {\Large $u_1$};
\node [below] at (5.5,-2.5) {\Large $u_2$};
\node [below] at (9.5,-2.5) {\Large $v_1$};
\node [below] at (13.5,-2.5) {\Large $v_2$};
\node [below] at (17.5,-2.5) {\Large $v_3$};
\end{tikzpicture}
\caption{The tree $T_{2,3}$ by taking $T$ as the path $P_5$.}
\label{family-F-fig}
\end{figure}
\noindent
We next give several properties of the trees of the family $\mathcal{F}$; they are straightforward to verify, so the proofs are left to the reader.
\begin{remark}\label{family-F}
Let $b,d$ be any two positive integers. Then,
\begin{itemize}
\item[{\em (i)}] $T_{b,d}$ has order $3n+4b+5d$,
\item[{\em (ii)}] $T_{b,d}$ has $2n+2b+2d$ leaves,
\item[{\em (iii)}] $\beta(T_{b,d})=2n+3b+3d$,
\item[{\em (iv)}] $\gamma_{t,coi}(T_{b,d})=n+2b+2d$.
\end{itemize}
\end{remark}
According to the results above, for any positive integers $b,d$, the tree $T_{b,d}\in \mathcal{F}$ satisfies
$$\gamma_{t,coi}(T_{b,d})-(|V(T_{b,d})|-\beta(T_{b,d}))=b\;\;\mbox{ and }\;\;(|V(T_{b,d})|-|L(T_{b,d})|)-\gamma_{t,coi}(T_{b,d})=d,$$
which answers our previous question affirmatively.
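The values in Remark \ref{family-F} can be confirmed for small parameters by exhaustive search. The following sketch (in Python; the ad hoc vertex labels and brute-force routines are ours) builds $T_{1,1}$ from the path $P_2$ and checks all four formulas for $b=d=1$, $n=2$:

```python
from itertools import combinations

# Build T_{1,1} from the path u-v (b = d = 1, n = 2); labels are arbitrary.
adj = {}
def add_edge(a, b):
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

add_edge('u', 'v')
for w in ('u', 'v'):                      # two paths P_1 at every vertex
    add_edge(w, w + '1'); add_edge(w, w + '2')
# a star S_3 attached to u through one of its leaves
for leaf in ('l1', 'l2', 'l3'):
    add_edge('c', leaf)
add_edge('u', 'l1')
# a star S_3 with one subdivided edge, attached to v at the subdivided leaf
for leaf in ('m1', 'm2'):
    add_edge('c2', leaf)
add_edge('c2', 's'); add_edge('s', 'm3'); add_edge('v', 'm3')

V = set(adj)
def independent(S):
    return all(adj[x].isdisjoint(S) for x in S)
def total_dominating(D):
    return all(adj[x] & D for x in V)

def gamma_tcoi():
    # smallest total dominating set with an independent, nonempty complement
    for r in range(1, len(V)):
        for D in map(set, combinations(sorted(V), r)):
            if total_dominating(D) and independent(V - D):
                return r

def beta():
    # maximum cardinality of an independent set
    for r in range(len(V), 0, -1):
        if any(independent(set(S)) for S in combinations(sorted(V), r)):
            return r

leaves = {x for x in V if len(adj[x]) == 1}
n, b, d = 2, 1, 1
assert len(V) == 3*n + 4*b + 5*d == 15
assert len(leaves) == 2*n + 2*b + 2*d == 8
assert beta() == 2*n + 3*b + 3*d == 10
assert gamma_tcoi() == n + 2*b + 2*d == 6
```

In particular, $\gamma_{t,coi}(T_{1,1})-(|V|-\beta)=6-5=1=b$ and $(|V|-|L|)-\gamma_{t,coi}(T_{1,1})=7-6=1=d$, as claimed.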
\section*{Acknowledgement}
We want to thank the reviewer of this article for the useful comments that helped us to improve our work.
| {
"timestamp": "2020-05-06T02:15:11",
"yymm": "2005",
"arxiv_id": "2005.02185",
"language": "en",
"url": "https://arxiv.org/abs/2005.02185",
"abstract": "A set $D$ of vertices of a graph $G$ is a total dominating set if every vertex of $G$ is adjacent to at least one vertex of $D$. The total domination number of $G$ is the minimum cardinality of any total dominating set of $G$ and is denoted by $\\gamma_t(G)$. The total dominating set $D$ is called a total co-independent dominating set if $V(G)\\setminus D$ is an independent set and has at least one vertex. The minimum cardinality of any total co-independent dominating set is denoted by $\\gamma_{t,coi}(G)$. In this paper, we show that, for any tree $T$ of order $n$ and diameter at least three, $n-\\beta(T)\\leq \\gamma_{t,coi}(T)\\leq n-|L(T)|$ where $\\beta(T)$ is the maximum cardinality of any independent set and $L(T)$ is the set of leaves of $T$. We also characterize the families of trees attaining the extremal bounds above and show that the differences between the value of $\\gamma_{t,coi}(T)$ and these bounds can be arbitrarily large for some classes of trees.",
"subjects": "Combinatorics (math.CO)",
"title": "A note on total co-independent domination in trees",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232930001333,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7093460229490938
} |
https://arxiv.org/abs/2211.05611 | Modular forms via invariant theory | We discuss two simple but useful observations that allow the construction of modular forms from given ones using invariant theory. The first one deals with elliptic modular forms and their derivatives, and generalizes the Rankin-Cohen bracket, while the second one deals with vector-valued modular forms of genus greater than one. | \section*{Acknowledgements}
The first author was supported by Simons Foundation Award
546235 at the Institute for Computational and
Experimental Research in Mathematics at Brown University.
The second author thanks ICERM for hospitality enjoyed during a visit
when this paper was written. We thank Carel Faber for useful remarks.
\begin{section}{The First Observation}
Here we start with an elliptic modular form $f$
of weight $k$
on some congruence subgroup $\Gamma$ of ${\rm SL}(2,{\mathbb Z})$.
The space of modular forms of weight $k$ on $\Gamma$
is denoted by $M_k(\Gamma)$ and the subspace of cusp forms by $S_k(\Gamma)$.
We write $\tau$ for the variable in
the upper half plane~$\mathfrak{H}$.
We consider the derivatives $f^{(n)}=d^nf/d\tau^n$ for $n=0,\ldots,r$ of $f$
and using these derivatives we shall associate a modular form
to each invariant of binary forms
of degree~$r$.
Let $V=\langle x_1,x_2 \rangle$ be the vector space generated by $x_1$
and $x_2$. The group ${\rm GL}(V)={\rm GL}(2)$ acts on $V$.
Recall that an invariant of a binary form $\sum_{i=0}^r a_i \binom{r}{i}x_1^{r-i}x_2^i
\in {\rm Sym}^r(V)$ of degree~$r$ is a polynomial in the coefficients
$a_0,\ldots,a_r$ invariant under ${\rm SL}(V)$. It has order $n$ if for
all its monomials the sum of the indices of the $a_i$ is equal to $n$;
equivalently, if it changes under ${\rm GL}(V)$ by the $n$th power
of the determinant.
We will write $V_m$ for ${\rm Sym}^m(V)$.
\begin{theorem}\label{thm}
Suppose that $I$ is an invariant of degree $d$ and order $n$ of
a binary form $\sum_{i=0}^r a_i \binom{r}{i} x_1^{r-i}x_2^i$ of degree~$r$ and let
$f$ be an elliptic modular form of weight $k$ on a congruence subgroup
$\Gamma$ of ${\rm SL}(2,{\mathbb Z})$.
Then by the substitution
$$
a_i \mapsto i! \, \binom{k+r-1}{i}\, f^{(r-i)}
$$
for $i=0,\ldots,r$ in $I$
we obtain a linear map
$ \Psi_I: M_k(\Gamma) \longrightarrow M_{dk+2n}(\Gamma) $.
\end{theorem}
\begin{proof}
To the modular form $f=f(\tau)$ we associate the vector
that is the transpose of
$$
F=(f^{(r)}\, z^r,\, (k+r-1)\, f^{(r-1)}\, z^{r-1},\, (k+r-1)(k+r-2)\, f^{(r-2)} z^{r-2},
\ldots, (k+r-1)(k+r-2) \cdots k\,\, f) \, ,
$$
An element $\left( \begin{matrix} a & b \\ c & d \\ \end{matrix} \right)$
in ${\rm SL}(2,{\mathbb Z})$ acts on $\mathfrak{H}\times {\mathbb C}$
by $(\tau,z) \mapsto
((a\tau +b)/(c\tau+d), z/(c\tau+d))$.
Then by repeatedly differentiating the equation
$f((a\tau+b)/(c\tau+d))=(c\tau+d)^kf(\tau)$
one finds that the induced action on $F$, viewed as a function $F(\tau,z)$ on $\mathfrak{H}\times {\mathbb C}$,
is by
$$
F\left( \frac{a\tau+b}{c\tau+d}, \frac{z}{c\tau+d}\right)=(c\tau+d)^k \, {\rm Sym}^r
\left( \begin{matrix} c\tau +d & cz \\
0 & 1 \\ \end{matrix} \right) \circ F(\tau,z)\, .
$$
We view $F$ as a binary form (written inhomogeneously in $z$).
Evaluating the invariant $I$ at this binary form amounts, up to a power
of $z$, to the substitution given in the theorem: since $I$ is isobaric
of order $n=rd/2$, each of its monomials contributes the factor
$z^{rd-n}=z^{n}$, so that $I(F)=z^{n}\,\Psi_I(f)$ with $\Psi_I(f)$ a
function on $\mathfrak{H}$. Under the action above, $I(F)$ picks up the
$d$th power of $(c\tau+d)^k$ times the $n$th power of the determinant
$c\tau+d$ of the matrix $\left( \begin{matrix} c\tau +d & cz \\
0 & 1 \\ \end{matrix} \right)$ in ${\rm GL}(2)$, while on the left-hand
side the substitution $z\mapsto z/(c\tau+d)$ turns $z^{n}$ into
$z^{n}(c\tau+d)^{-n}$. Altogether we get the factor $(c\tau+d)^{dk+2n}$
for $\Psi_I(f)$.
\end{proof}
\begin{remark} Of course,
the theorem applies as well to modular forms with a character. We then get a map
$$
\Psi_I: M_k(\Gamma,\chi) \to M_{kd+2n}(\Gamma,\chi^d)\, ,
$$
where $d$ is the degree of $I$.
For fixed $f \in M_k(\Gamma)$ we obtain a map
$$
\Psi_{\bullet}(f): \mathcal{I}(V_r) \to R(\Gamma)
$$
of the ring of invariants
$\mathcal{I}(V_r)$
of a binary form of degree $r$ to the ring $R(\Gamma)$ of modular forms on $\Gamma$.
\end{remark}
The theorem makes the relation between transvectants and Rankin-Cohen brackets
transparent. Recall that invariant theory associates to a pair $(F,G)$ of binary forms of degree
$m$ and $n$ a so-called transvectant in $V_{m+n-2r}$ defined by
$$
(F,G)_r=\frac{(m-r)!(n-r)!}{m! \, n!} \sum_{j=0}^r (-1)^j \binom{r}{j}
\frac{\partial^r F}{\partial x_1^{r-j}\partial x_2^{j} }
\frac{\partial^r G}{\partial x_1^{j}\partial x_2^{r-j} } \, .
$$
If $F \in V_{2r}$ denotes the universal binary form of degree $2r$
then the invariant $I=(F,F)_{2r}$ is of degree $2$ and order $2r$ and when applied
to a modular form $f \in M_k(\Gamma)$
it gives
$$
\Psi_I(f)= (2r)! \, (2\pi i)^{2r} \,[f,f]_{2r} \in M_{2k+4r}(\Gamma)
$$
with $[f,f]_{2r}$ the Rankin-Cohen bracket. Recall that for a pair of
modular forms $f \in M_{k_1}(\Gamma)$, $g \in M_{k_2}(\Gamma)$
the $r$th Rankin-Cohen bracket $[f,g]_{r}$
is defined as
\[
[f,g]_r=\frac{1}{(2\pi i)^r}
\sum_{n+m=r}
(-1)^n
\binom{k_1+r-1}{m}
\binom{k_2+r-1}{n}
\frac{d^n f}{d\tau^n}
\frac{d^m g}{d\tau^m}
\]
and is an element of $M_{k_1+k_2+2r}(\Gamma)$; it is a cusp form for $r>0$.
\begin{remark}
Note that for the case $r=0$ we are dealing with the covariant $FG$ and the product $fg$.
\end{remark}
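The normalization can be checked on $q$-expansions: realizing $(2\pi i)^{-1}\,d/d\tau$ as $\theta=q\,d/dq$, the first bracket of the Eisenstein series satisfies $[e_4,e_6]_1=-3456\,\Delta$. A short verification, e.g.\ in Python (a sketch; names and truncation order are ours):

```python
N = 10  # number of q-expansion coefficients kept

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eisenstein(k, c):
    # normalized Eisenstein series 1 + c * sum sigma_{k-1}(n) q^n
    return [1] + [c * sigma(n, k - 1) for n in range(1, N)]

E4, E6 = eisenstein(4, 240), eisenstein(6, -504)

def mul(a, b):
    out = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                out[i + j] += ai * bj
    return out

def theta(a):  # q d/dq, i.e. (2 pi i)^{-1} d/dtau on q-expansions
    return [n * an for n, an in enumerate(a)]

Delta = [x // 1728 for x in
         (m - s for m, s in zip(mul(mul(E4, E4), E4), mul(E6, E6)))]
# first Rankin-Cohen bracket [e4, e6]_1 = 4 e4 (theta e6) - 6 (theta e4) e6
rc1 = [4 * x - 6 * y for x, y in zip(mul(E4, theta(E6)), mul(theta(E4), E6))]
assert rc1 == [-3456 * x for x in Delta]
```

The identity follows from Ramanujan's relations $\theta e_4=(e_2e_4-e_6)/3$ and $\theta e_6=(e_2e_6-e_4^2)/2$, which give $4e_4\,\theta e_6-6\,\theta e_4\, e_6=-2(e_4^3-e_6^2)$.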
We give an example.
The ring of invariants of $V_3$ is generated by
$$
I_3=a_0^2a_3^2-6\,a_0a_1a_2a_3+4\,a_0a_2^3+4\,a_1^3a_3-3\,a_1^2a_2^2\, .
$$
It gives for the normalized Eisenstein series $e_k=1 - (2k/B_k) \sum_{n\geq 1} \sigma_{k-1}(n)q^n$
on ${\rm SL}(2,{\mathbb Z})$ of weights $4$ and $6$ the results
$$
\Psi_{I_3}(e_4)=-53084160000\pi^6\,e_4\Delta^2, \quad
\Psi_{I_3}(e_6)=-203928109056\pi^6\,(8e_4^3+e_6^2)\Delta^2\, .
$$
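Dividing by $(2\pi i)^6=-64\pi^6$ and writing $\theta=q\,d/dq$, the two identities read $\Psi_{I_3}(e_4)/(2\pi i)^6=829440000\,e_4\Delta^2$ and $\Psi_{I_3}(e_6)/(2\pi i)^6=3186376704\,(8e_4^3+e_6^2)\Delta^2$, which can be verified on $q$-expansions; e.g.\ in Python (a sketch with our own small series toolkit):

```python
from math import comb

N = 8  # truncation order of the q-expansions

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [1] + [c * sigma(n, k - 1) for n in range(1, N)]

E4, E6 = eis(4, 240), eis(6, -504)

def mul(*fs):
    out = [1] + [0] * (N - 1)
    for f in fs:
        new = [0] * N
        for i, x in enumerate(out):
            for j, y in enumerate(f):
                if i + j < N:
                    new[i + j] += x * y
        out = new
    return out

def theta(a, times=1):
    for _ in range(times):
        a = [n * x for n, x in enumerate(a)]
    return a

def scal(c, a):
    return [c * x for x in a]

def add(*fs):
    return [sum(t) for t in zip(*fs)]

fact = [1, 1, 2, 6]

def psi_I3(f, k):
    # substitution a_i -> i! * C(k+2, i) * theta^{3-i} f  (theta-normalized)
    a = [scal(fact[i] * comb(k + 2, i), theta(f, 3 - i)) for i in range(4)]
    return add(mul(a[0], a[0], a[3], a[3]),
               scal(-6, mul(a[0], a[1], a[2], a[3])),
               scal(4, mul(a[0], a[2], a[2], a[2])),
               scal(4, mul(a[1], a[1], a[1], a[3])),
               scal(-3, mul(a[1], a[1], a[2], a[2])))

Delta = [x // 1728 for x in add(mul(E4, E4, E4), scal(-1, mul(E6, E6)))]
assert psi_I3(E4, 4) == scal(829440000, mul(E4, Delta, Delta))
assert psi_I3(E6, 6) == scal(3186376704,
                             mul(add(scal(8, mul(E4, E4, E4)), mul(E6, E6)),
                                 Delta, Delta))
```

Since every monomial of $I_3$ contains at least two of $a_0,a_1,a_2$, both evaluations vanish to order two at infinity, so they lie in $\Delta^2 M_4$ and $\Delta^2 M_{12}$ respectively, and checking a few coefficients pins them down.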
\subsection{Examples from multi-invariants}
As indicated above, we may take the invariant $I=(F,G)_r$ for $F,G \in V_r$
and apply it to a pair of modular forms $(f,g)$. We find by applying the
substitution of Theorem \ref{thm} to both pairs $(F,f)$ and $(G,g)$ that
$$
\Psi_I(f,g)= (-1)^r r! (2\pi i)^r [f,g]_r \, .
$$
We recover in a transparent way
the relation between Rankin-Cohen brackets and transvectants.
This relation was apparently first observed by Zagier \cite[p.\ 74]{Zagier1994},
and there is an extensive literature on this relation,
see for example~\cite{Olver}.
\smallskip
But there are many more invariants, bi-invariants and multi-invariants
than the trans\-vectants.
An invariant $I$ of the action on $V_m\oplus V_n$
of degree $d_1$ (resp.\ $d_2$) in the coefficients of the
binary form of degree $m$ (resp.\ $n$), and of order $p$, defines a map
$$
\Psi_I: M_{k_1}(\Gamma) \times M_{k_2}(\Gamma)
\longrightarrow M_{d_1k_1+d_2k_2+2p}(\Gamma)\, .
$$
As a concrete example,
we take $(F,G)\in V_3\oplus V_1$ with binary forms
$F=\sum_{i=0}^3 a_i\binom{3}{i} x_1^{3-i}x_2^i$
and $G=b_0x_1+b_1x_2$. In this case the generators of the
ring of invariants are known, see \cite{Draisma}.
For example, there is an invariant
$$
I=20 \, (F,G^3)_3=a_0b_1^3-3\, a_1b_0b_1^2+ 3\, a_2 b_0^2b_1-a_3b_0^3 \, .
$$
It defines a linear map
$\Psi_I: M_{k_1}(\Gamma) \times M_{k_2}(\Gamma) \longrightarrow M_{k_1+3k_2+6}(\Gamma)$,
e.g.,
$$
\Psi_{I}(e_6,e_4)=i \, 86016\pi^3\,\Delta(e_4^3+2e_6^2).
$$
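Since $(2\pi i)^3=-8i\pi^3$, this says that the $\theta$-normalized evaluation equals $-10752\,\Delta(e_4^3+2e_6^2)$, which can be checked on $q$-expansions; e.g.\ in Python (a sketch; series toolkit and labels are ours):

```python
from math import comb

N = 8

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [1] + [c * sigma(n, k - 1) for n in range(1, N)]

E4, E6 = eis(4, 240), eis(6, -504)

def mul(*fs):
    out = [1] + [0] * (N - 1)
    for f in fs:
        new = [0] * N
        for i, x in enumerate(out):
            for j, y in enumerate(f):
                if i + j < N:
                    new[i + j] += x * y
        out = new
    return out

def theta(a, times=1):
    for _ in range(times):
        a = [n * x for n, x in enumerate(a)]
    return a

def scal(c, a):
    return [c * x for x in a]

def add(*fs):
    return [sum(t) for t in zip(*fs)]

# substitutions of the theorem (theta-normalized):
#   f = e6 of weight 6 on V_3: a_i -> i! C(8, i) theta^{3-i} E6
#   g = e4 of weight 4 on V_1: b_i -> i! C(4, i) theta^{1-i} E4
fact = [1, 1, 2, 6]
a = [scal(fact[i] * comb(8, i), theta(E6, 3 - i)) for i in range(4)]
b = [theta(E4), scal(4, E4)]

phi = add(mul(a[0], b[1], b[1], b[1]),
          scal(-3, mul(a[1], b[0], b[1], b[1])),
          scal(3, mul(a[2], b[0], b[0], b[1])),
          scal(-1, mul(a[3], b[0], b[0], b[0])))

Delta = [x // 1728 for x in add(mul(E4, E4, E4), scal(-1, mul(E6, E6)))]
target = mul(add(mul(E4, E4, E4), scal(2, mul(E6, E6))), Delta)
assert phi == scal(-10752, target)
```

Both sides are cusp forms of weight $k_1+3k_2+6=24$, so the agreement of a few coefficients already determines the identity in the two-dimensional space $S_{24}$.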
This procedure generalizes in a straightforward way to multi-invariants
of binary forms. For example, take binary forms $F,G,H$ of degrees
$3,2,1$ with coefficients $a_i,b_i,c_i$ and consider the tri-invariant
$$
I=a_0b_2c_1-2\,a_1b_1c_1-a_1b_2c_0+a_2b_0c_1+2\,a_2b_1c_0-a_3b_0c_0\, .
$$
For $f,g,h$ modular forms of weights $k_1,k_2,k_3$ on $\Gamma$
we have $\Psi_I(f,g,h)\in M_{k_1+k_2+k_3+6}(\Gamma)$.
We can also use quasi-modular forms, that is,
if one of the forms we start with is quasi-modular, we get a quasi-modular form. Thus
$
\Psi_I(e_4,e_2,e_6)
$
is a quasi-modular form of weight $18$ on ${\rm SL}(2,{\mathbb Z})$. A direct computation shows that
$$
\Psi_I(e_4,e_2,e_6)=-11520\, i\, \pi^3\, (e_2^3 + 3e_2e_4 + 2e_6)\Delta\, .
$$
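On $q$-expansions ($\theta$-normalization as before, i.e.\ dividing by $(2\pi i)^3=-8i\pi^3$) the right-hand side becomes $1440\,(e_2^3+3e_2e_4+2e_6)\Delta$; a sketch of the check in Python (truncation order and names are ours):

```python
from math import comb

N = 4  # the first four q-expansion coefficients suffice here

def sigma(n, k):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

def eis(k, c):
    return [1] + [c * sigma(n, k - 1) for n in range(1, N)]

E2, E4, E6 = eis(2, -24), eis(4, 240), eis(6, -504)

def mul(*fs):
    out = [1] + [0] * (N - 1)
    for f in fs:
        new = [0] * N
        for i, x in enumerate(out):
            for j, y in enumerate(f):
                if i + j < N:
                    new[i + j] += x * y
        out = new
    return out

def theta(a, times=1):
    for _ in range(times):
        a = [n * x for n, x in enumerate(a)]
    return a

def scal(c, a):
    return [c * x for x in a]

def add(*fs):
    return [sum(t) for t in zip(*fs)]

# theta-normalized substitutions for (f, g, h) = (e4, e2, e6) on V_3+V_2+V_1
fact = [1, 1, 2, 6]
a = [scal(fact[i] * comb(6, i), theta(E4, 3 - i)) for i in range(4)]
b = [scal(fact[j] * comb(3, j), theta(E2, 2 - j)) for j in range(3)]
c = [theta(E6), scal(6, E6)]

phi = add(mul(a[0], b[2], c[1]),
          scal(-2, mul(a[1], b[1], c[1])),
          scal(-1, mul(a[1], b[2], c[0])),
          mul(a[2], b[0], c[1]),
          scal(2, mul(a[2], b[1], c[0])),
          scal(-1, mul(a[3], b[0], c[0])))

Delta = [x // 1728 for x in add(mul(E4, E4, E4), scal(-1, mul(E6, E6)))]
target = mul(add(mul(E2, E2, E2), scal(3, mul(E2, E4)), scal(2, E6)), Delta)
assert phi == scal(1440, target)
```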
\end{section}
\begin{section}{The Second Observation}
We formulate the second observation for the case where the modular
form has weight~$\rho$, with $\rho$ denoting the highest weight
of an irreducible representation
$V$ of ${\rm GL}(g)$. Let $U$ be the standard representation
of ${\rm GL}(g)$.
The modular form can live on a bounded symmetric
domain with factor of automorphy $\rho$, or on a moduli space where
it is a section of the vector bundle ${\mathbb E}_{\rho}$ constructed
from the Hodge bundle ${\mathbb E}$ by a Schur functor defined by $\rho$.
Recall that an invariant relative to the action of ${\rm GL}(g)$ on the
irreducible representation $V$ of highest weight $\rho$ is an element of
the symmetric algebra on $V$ that is invariant under ${\rm SL}(g)$.
In fact, an invariant can be obtained by an equivariant embedding of
${\rm GL}(g)$-representations
$$
\det(U)^{\otimes m} \hookrightarrow {\rm Sym}^d(V). \eqno(1)
$$
Viewing
the fibre of the dual bundle ${\mathbb E}_{\rho}^{\vee}$ of ${\mathbb E}_{\rho}$
as a ${\rm GL}(g)$-representation
and a section $f$
of ${\mathbb E}_{\rho}$ as a function on ${\mathbb E}_{\rho}^{\vee}$,
we can evaluate ${\rm Sym}^d(f)$ via the embedding (1) on
the subbundle $(\det{\mathbb E}^{\vee})^{\otimes m}$ and
obtain a section of the bundle $\det({\mathbb E})^m$.
More generally, if $W$ is an irreducible
representation of ${\rm GL}(g)$ of highest weight $\sigma$,
an equivariant embedding
$$
W \hookrightarrow {\rm Sym}^d(V) \eqno(2)
$$
defines a concomitant of type $(d,\sigma)$ for $\rho$.
Equivalently, the equivariant embedding $\phi:
W \hookrightarrow {\rm Sym}^d(V)$ may be viewed as an equivariant
embedding
$\phi':{\mathbb C} \to {\rm Sym}^d(V)\otimes W^{\vee}$. Then the image
$\phi'(1)$ is called a concomitant.
Again, if $f$ is a modular form, say a section of ${\mathbb E}_{\rho}$,
then via (2) we obtain a modular form of weight $\sigma$
from $f$. By the phrase `applying the invariant or concomitant
to~$f$' we mean this evaluation. We thus obtain the following.
\begin{observation}\label{observation}
Let $f$ be a modular form of weight $\rho$ and let $I$ be a concomitant
of type $(d,\sigma)$ for some $d$
for the action of ${\rm GL}(g)$ on the representation $\rho$.
Then by applying $I$ to $f$ one obtains a modular form of weight $\sigma$.
\end{observation}
One may formulate variants of this involving a finite set of
modular forms to which invariant theory (via multi-invariants) is applied.
\end{section}
\begin{section}{Illustrations}
We now illustrate the second observation by a number of special cases.
\subsection{Siegel modular forms of degree two}
Here we are dealing with the representation theory of ${\rm GL}(2)$
and the invariant theory of binary forms.
There is a wealth of explicit results in invariant theory
that can be applied.
Let $\Gamma \subset {\rm Sp}(4,{\mathbb Q})$ be a group commensurable with
${\rm Sp}(4,{\mathbb Z})$. We let $\rho$ be an irreducible representation of
${\rm GL}(2)$ of highest weight $(j+k,k)$.
By Observation \ref{observation} we find for a given modular form
$f \in M_{j,k}(\Gamma)$, that is,
a section of ${\rm Sym}^j({\mathbb E})\otimes \det({\mathbb E})^k$,
a homomorphism
$$
\Psi_{\bullet}(f): \mathcal{I}(U_j) \longrightarrow R(\Gamma), \qquad
J \mapsto \Psi_J(f) \, ,
$$
of the ring of invariants $\mathcal{I}(U_j)$ of binary forms of degree $j$
to the ring $R(\Gamma)$ of scalar-valued modular forms on $\Gamma$.
Let us illustrate this with the simplest non-trivial case for $j=2$.
The discriminant $I=a_1^2-4a_0a_2$ of a quadratic form
$a_0x_1^2+a_1x_1x_2+a_2x_2^2$ gives an invariant of degree $2$.
If $f$, the transpose of $(f_0,f_1,f_2)$, is a modular form of weight
$(2,k)$ on some congruence subgroup $\Gamma$ of
${\rm Sp}(4,{\mathbb Z})$ then we find a scalar-valued one
$\Psi_I(f)=f_1^2-4f_0f_2 \in M_{0,2k+2}(\Gamma)$.
This can be extended to tuples of modular forms by using
multi-invariants of binary forms. We write $U_n={\rm Sym}^n(U)$
with $U$ the standard representation of ${\rm GL}(2)$.
We consider the ring of invariants
$$
\mathcal{I}_{n_1,\ldots,n_m}=\mathcal{I}(U_{n_1} \oplus
\cdots \oplus U_{n_m})
$$
relative to the action of ${\rm GL}(2)$. One can consider
elements of $\mathcal{I}_{n_1,\ldots,n_m}$ as (multi-)invariants of an $m$-tuple
of binary forms $b_1,\ldots,b_m$.
Let $J \in \mathcal{I}_{n_1,\ldots,n_m}$ be a multi-invariant that is
of degree $(d_1,\ldots,d_m)$ in the coefficients of
the binary forms $b_1,\ldots,b_m$. Then applying $J$ to an
$m$-tuple of modular forms $f_i \in M_{j_i,k_i}(\Gamma)$
defines a map
$$
\prod_{i=1}^m M_{j_i,k_i}(\Gamma) \longrightarrow
M_{0,k}(\Gamma)
\qquad \text{ with $k=\sum_{i=1}^m d_i(k_i+j_i/2)$.}
$$
For example, take $(n_1,n_2)=(4,2)$. If we write
$$
b_1= \sum_{i=0}^4 \alpha_i x_1^{4-i}x_2^i, \quad
b_2=\sum_{i=0}^2 \beta_i x_1^{2-i}x_2^i\, ,
$$
we have the invariant
$$
J_{1,2}= 6\, \alpha_0\beta_2^2 -3\, \alpha_1\beta_1 \beta_2
+ 2\, \alpha_2 \beta_0 \beta_2 + \alpha_2\beta_1^2 -3\, \alpha_3\beta_0\beta_1 + 6\, \alpha_4 \beta_0^2\, .
$$
The invariant $J_{1,2}$ defines a bilinear map
$$
M_{4,k_1}(\Gamma) \otimes M_{2,k_2}(\Gamma)
\longrightarrow
M_{0,k_1+2k_2+4}(\Gamma)\, .
$$
Next we look at covariants of binary forms. Recall that the ring
of covariants $\mathcal{C}(U_j)$
of a binary form of degree $j$ can be identified
with the ring of invariants $\mathcal{I}(U_j\oplus U_1)$
of $U_j \oplus U_1$, see \cite[3.3.9]{Springer}. If we write
an element of $U_1$ as $l_1x_1+l_2x_2$, the isomorphism
can be given explicitly by substituting $l_1=-x_2$ and $l_2=x_1$
in an invariant of $U_j\oplus U_1$.
The following proposition is a direct consequence of Observation
\ref{observation}.
\begin{proposition}
If $C \in \mathcal{C}(U_j)$ is a covariant of
degree $a$ in the coefficients
of the binary form and of degree $b$ in $x_1,x_2$,
then applying $C$ defines a linear map
$$
\Psi_C: M_{j,k}(\Gamma) \longrightarrow
M_{b,a(k+j/2)-b/2}(\Gamma)\, .
$$
\end{proposition}
\begin{proof} We write $U[n+m,m]$ for ${\rm Sym}^n(U)\otimes \det(U)^m$.
The covariant $C$ corresponds to an equivariant embedding $U[b+l,l]\hookrightarrow
{\rm Sym}^a(U[j+k,k])$. The irreducible representations occurring in
${\rm Sym}^a(U[j,0])$ are of the form $U[aj-r,r]$ for non-negative $r$.
Therefore, if $U[b+l,l]$ occurs then $(b+l,l)=(aj-r+ak,r+ak)$, that is, $2l=aj+2ak-b$.
\end{proof}
If we fix a modular form $f \in M_{j,k}(\Gamma)$ we get
an induced map
$$
\Psi_{\bullet}(f): \mathcal{C}(U_{j}) \to M(\Gamma), \quad C \mapsto \Psi_C(f)\, ,
$$
where $M(\Gamma)$ is the ring of vector-valued Siegel modular forms
on $\Gamma$ of degree $2$.
In the paper \cite{CFG1} it was shown that for
$\Gamma={\rm Sp}(4,{\mathbb Z})$, $j=6$ and $f=\chi_{6,8}$,
a generator of the space of cusp forms
$S_{6,8}({\rm Sp}(4,{\mathbb Z}))$, every
vector-valued modular form on ${\rm Sp}(4,{\mathbb Z})$ can be obtained
from a form $\Psi_C(\chi_{6,8})$ for a $C \in \mathcal{C}(U_6)$
after dividing by an appropriate power of
the cusp form $\chi_{10}$ of weight~$10$.
Alternatively,
using the meromorphic modular form
$\chi_{6,-2}=\chi_{6,8}/\chi_{10}$, we found maps
$$
M({\rm Sp}(4,{\mathbb Z})) \longrightarrow \mathcal{C}(U_{6})
\xrightarrow{\Psi_{\bullet}(\chi_{6,-2})} M({\rm Sp}(4,{\mathbb Z}))[1/\chi_{10}]\, ,
$$
the composition of which is the identity.
\smallskip
A variation of this deals with multi-covariants. We can also allow
modular forms with a character.
\begin{proposition}
If $C$
is a covariant in $\mathcal{C}(U_{j_1} \oplus \cdots \oplus U_{j_m})$
of degree $a_i$ in the coefficients of the binary form in $U_{j_i}$
and degree $b$ in $x_1,x_2$, then $C$ defines a linear map
$$
\otimes_{i=1}^m M_{j_i,k_i}(\Gamma,\chi_i) \longrightarrow
M_{b,k}(\Gamma, \chi_1^{a_1}\cdots\chi_m^{a_m})
$$
with $k= \sum_{i=1}^m a_i(k_i+j_i/2) -b/2$.
\end{proposition}
\subsection{Siegel and Teichm\"uller modular forms of degree three}
If $U_{\rho}$ is an irreducible representation of ${\rm GL}(3)$
of highest weight $\rho$
and $f$ is a Siegel modular form of weight $\rho$ on some group
$\Gamma \subset {\rm Sp}(6,{\mathbb Q})$ commensurable
with ${\rm Sp}(6,{\mathbb Z})$, then we get a homomorphism
$$
\Psi_{\bullet}(f): \mathcal{I}(U_{\rho}) \longrightarrow R(\Gamma)
$$
of the ring $\mathcal{I}(U_{\rho})$ of invariants
to $R(\Gamma)$, the ring of scalar-valued modular forms on $\Gamma$.
We can extend this map.
For a given irreducible representation $\sigma$
of ${\rm GL}(3)$ we let
$\mathcal{C}_{\sigma}(U_{\rho})$ be the $\mathcal{I}(U_{\rho})$-module
of covariants obtained from equivariant embeddings of
$U_{\sigma}$ into the symmetric algebra on $U_{\rho}$.
We get for a given form $f \in M_{\rho}(\Gamma)$
a map
$$
\Psi_{\bullet}(f): \mathcal{C}_{\sigma}(U_{\rho}) \longrightarrow
M_{\sigma}(\Gamma), \quad
C \mapsto \Psi_C(f)\, ,
$$
where $M_{\sigma}(\Gamma)$ is the $R(\Gamma)$-module
$\oplus_k M_{\sigma \otimes {\det}^k}(\Gamma)$.
The moduli space $\overline{\mathcal{M}}_3$ of stable curves of genus $3$
carries a Hodge bundle,
also denoted ${\mathbb E}$. Its restriction to $\mathcal{M}_3$ is the pullback of the Hodge bundle on $\mathcal{A}_3$ under the Torelli map.
For given irreducible representation $\rho$
of ${\rm GL}(3)$
we have a vector bundle ${\mathbb E}_{\rho}$ obtained by a Schur functor
from ${\mathbb E}$ and $\rho$. Sections of ${\mathbb E}_{\rho}$ on
$\overline{\mathcal{M}}_3$ are called Teichm\"uller forms of degree or genus $3$ and weight
$\rho$. We have a graded ring of scalar-valued Teichm\"uller forms
$T_3=\oplus_k H^0(\overline{\mathcal{M}}_3, \det({\mathbb E})^k)$ of genus $3$.
The ring $T_3$ is a quadratic extension of the
ring of scalar-valued Siegel modular forms
$R({\rm Sp}(6,{\mathbb Z}))$ by $\chi_9$, with
$\chi_9$ the Teichm\"uller modular cusp form of weight $9$ that vanishes
simply on the closure of the hyperelliptic locus. Its square is a Siegel
modular cusp form of weight $18$, the product of the $36$ even theta constants.
Given a Teichm\"uller modular form $f$ of weight $\rho$ we have a similar map
$$
\Psi_{\bullet}(f): \mathcal{C}_{\sigma}(U_{\rho})\longrightarrow T_{\sigma}(\Gamma)\, ,
$$
where $T_{\sigma}(\Gamma)$ is the $T_3$-module
$$
\oplus_k H^0(\overline{\mathcal{M}}_3, {\mathbb E}_{\sigma}\otimes \det{\mathbb E}^{\otimes k})\, .
$$
With $\chi_{4,0,8}$ a generator of the space of cusp forms
$S_{4,0,8}({\rm Sp}(6,{\mathbb Z}))$,
the quotient
$\chi_{4,0,-1}=\chi_{4,0,8}/\chi_9$ is a meromorphic section of
${\rm Sym}^4({\mathbb E})\otimes \det({\mathbb E})^{-1}$ on $\overline{\mathcal{M}}_3$.
In \cite{CFG2} we used
this form to construct maps
$$
H^0(\overline{\mathcal{M}}_3, {\mathbb E}_{\sigma})
\longrightarrow \mathcal{C}_{\sigma}({\rm Sym}^4(U))
\xrightarrow{\Psi_{\bullet}(\chi_{4,0,-1})}
H^0(\overline{\mathcal{M}}_3, {\mathbb E}_{\sigma})[1/\chi_9]
$$
the composition of which is the identity. Here $U$ is the standard representation
of ${\rm GL}(3)$. This enables one to construct
all Teichm\"uller and all Siegel modular forms of genus $3$ on ${\rm Sp}(6,{\mathbb Z})$
by concomitants
for the action of ${\rm GL}(3)$ on ternary quartics using $\chi_{4,0,-1}$.
\subsection{Teichm\"uller forms of genus $3$ and $4$}
In \cite{vdG-K} a Teichm\"uller modular form $f$ of
weight $(2,0,0,8)$, a section of ${\rm Sym}^2({\mathbb E}) \otimes \det({\mathbb E})^8$
on $\overline{\mathcal{M}}_4$, is constructed. This Teichm\"uller modular
form cannot be obtained by pulling back a Siegel modular form
under the Torelli morphism. It is associated to the quadric containing
the canonical image of the generic curve of genus $4$.
As invariant we now take
the discriminant $I$ of a quadratic form in four variables and apply
it to $f$. It yields a scalar-valued Teichm\"uller modular form
of weight $34$ that vanishes on the closure of the
locus of non-hyperelliptic curves of genus $4$
for which the unique quadric that contains the canonical image
is singular. Its square is the pullback of a Siegel modular form of
degree $4$
and weight $68$ that is the product of all the even theta constants.
An analogous case is given by the section $\chi_{2,0,4}$ of ${\rm Sym}^2({\mathbb E}) \otimes
\det({\mathbb E})^4$ on the Hurwitz space
$\overline{\mathcal{H}}_{3,2}$ of admissible covers of genus $3$
and degree $2$, constructed in \cite{vdG-K}.
Its discriminant is a modular form
of weight $14$, related to the discriminant of binary octics, whose
square is the pullback of a Siegel modular cusp form of weight $28$.
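The two weight counts can be checked by a short bookkeeping argument (our sketch, not taken from \cite{vdG-K}): the discriminant of a quadratic form in $r$ variables has degree $r$ in the coefficients and transforms with the square of the determinant,

```latex
q(x)=x^{\mathsf T}Ax, \qquad
q(gx)=x^{\mathsf T}(g^{\mathsf T}Ag)\,x, \qquad
\det(g^{\mathsf T}Ag)=\det(g)^2\det(A) .
```

Hence applying the discriminant to a section of ${\rm Sym}^2({\mathbb E})\otimes\det({\mathbb E})^8$ ($r=4$) yields a form of weight $2+4\cdot 8=34$, while for $\chi_{2,0,4}$ ($r=3$) one gets $2+3\cdot 4=14$, matching the weights stated above.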
\subsection{Picard modular forms}
Here we fix an imaginary quadratic field $F$ with ring of integers $O_F$
and consider a non-degenerate Hermitian form $h=z_1z_2'+z_1'z_2+z_3z_3'$
on the vector space $F^3$ with the prime denoting complex conjugation.
The group of similitudes of $h$ is an algebraic group $G$
over ${\mathbb Q}$ of type ${\rm GU}(2,1)$. The connected component
$G^{+}({\mathbb R})$ of $G({\mathbb R})$ acts on the set
$\mathfrak{B}$ of negative complex lines
$$
\mathfrak{B}=\{ L : L \subset F^3\otimes_{\mathbb Q} {\mathbb R},\, \dim_{\mathbb C}L=1, h_{|L}<0\}
$$
which can be identified with the complex ball
$\{ (u,v) \in {\mathbb C}^2: v+\bar{v}+u\bar{u}<0\}$. If $\Gamma$ is an arithmetic subgroup of $G$,
the quotient $\Gamma\backslash \mathfrak{B}$ is a moduli space of $3$-dimensional
abelian varieties with multiplication by $F$. It carries a Hodge bundle ${\mathbb E}$
that splits as $W\oplus L$ with $W$ of rank $2$ and $L$ of rank $1$, and we obtain two factors of automorphy.
A power of the determinant of $W$ equals a power of $L$.
We then have modular forms that can be seen as sections of ${\rm Sym}^j(W)
\otimes L^k$. We can apply the second observation to this situation.
If $\det(W)\neq L$ we have to deal with modular forms with a character.
In the paper \cite{CvdG2021} we considered the case where $F={\mathbb Q}(\sqrt{-3})$
and $\Gamma=\Gamma[\sqrt{-3}]$ is a certain congruence subgroup.
We constructed modular forms
$\chi_{1,1} \in M_{1,1}(\Gamma[\sqrt{-3}], \det)$
and $\chi_{4,4}\in M_{4,4}(\Gamma[\sqrt{-3}],\det^2)$.
We also have a scalar-valued cusp form $\zeta \in S_{0,6}(\Gamma[\sqrt{-3}],\det)$.
We refer to \cite{CvdG2021} for the notation.
The quotient $\chi_{4,-2}=\chi_{4,4}/\zeta$ is a
meromorphic modular form of weight $(4,-2)$.
We constructed in \cite{CvdG2021} maps
$$
M(\Gamma) \to \mathcal{C}(V_1 \oplus V_4) \xrightarrow{\Psi_{\bullet}(\chi_{1,1},\chi_{4,-2})} M(\Gamma)[1/\zeta]
$$
from the ring $M(\Gamma)$ of vector-valued modular
forms to the ring $\mathcal{C}(V_1\oplus V_4)$ of bi-covariants
for the action of ${\rm GL}(2)$ on
$V_1 \oplus V_4$. The map $\Psi_{\bullet}(\chi_{1,1},\chi_{4,-2})$ illustrates
Observation \ref{observation}. The composition of the maps is the identity
and this shows that we can obtain all modular forms this way.
\end{section}
\begin{section}{An example involving both observations}
The example we now treat deals with Picard modular forms living on the $2$-ball
and is inspired by both observations and constructs
new vector-valued Picard modular
forms from given Picard modular forms and their derivatives.
For simplicity's sake we deal with modular forms of weight $(1,k)$
on the Picard modular group $\Gamma[\sqrt{-3}]$, see \cite{CvdG2021} for
definitions and notation.
Let $f \in M_{1,k}(\Gamma[\sqrt{-3}])$ be a modular form.
We write
$f$ as the transpose of
$(f_0, f_1)$.
We can view
$f$ as defining a linear form $l=f_0x_1+f_1x_2$.
We define
$\partial f$ as the transpose of
$$
(f_{0u},(f_{1u}+f_{0v})/2,f_{1v})\, ,
$$
where $f_{iu}=\partial f_i/\partial u$ and $f_{iv}=\partial f_i/\partial v$
for $i=0,1$.
We view $\partial f$ as the vector of coefficients of a binary form $q$
of degree $2$.
Writing $V=\langle x_1,x_2 \rangle$ and
$l=a_0x_1+a_1x_2$ and $q=b_0x_1^2+2\, b_1x_1x_2+b_2x_2^2$, we now
consider multi-invariants for the action of
${\rm GL}(2)$ on $V_1\oplus V_2$.
\begin{proposition}
Let $I$ be a multi-invariant of the binary forms $l,q$ of degree $d_1$
in the $a_i$ and degree $d_2$ in the $b_i$ and order $n$.
If $f\in M_{1,k}(\Gamma)$
then applying $I$ defines a Picard modular form $\Psi_I(f)$ of weight
$(0,d_1\, k+d_2(k+1)+n)$ on $\Gamma$.
\end{proposition}
The proof follows the pattern of the proof of Theorem \ref{thm}.
It seems complicated to
extend it to the case where higher derivatives are involved.
We finish by giving an example. Let $I$ be the bi-invariant
$$
I=a_0^2b_2-2a_0a_1b_1+a_1^2b_0 \, .
$$
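As a quick sanity check (ours, not part of the original computation), one can verify in exact integer arithmetic that $I$ is a joint invariant: substituting $x\mapsto gx$ into $l$ and $q$ multiplies $I$ by $\det(g)^2$. A minimal sketch:

```python
import random

def transform(g, a, b):
    """Coefficients of l(gx) and q(gx) for l = a0*x1 + a1*x2 and
    q = b0*x1^2 + 2*b1*x1*x2 + b2*x2^2, under x1 -> al*X1 + be*X2,
    x2 -> ga*X1 + de*X2."""
    (al, be), (ga, de) = g
    a0, a1 = a
    b0, b1, b2 = b
    A = (a0 * al + a1 * ga, a0 * be + a1 * de)
    B = (b0 * al**2 + 2 * b1 * al * ga + b2 * ga**2,
         b0 * al * be + b1 * (al * de + be * ga) + b2 * ga * de,
         b0 * be**2 + 2 * b1 * be * de + b2 * de**2)
    return A, B

def I(a, b):
    """The bi-invariant I = a0^2*b2 - 2*a0*a1*b1 + a1^2*b0."""
    return a[0]**2 * b[2] - 2 * a[0] * a[1] * b[1] + a[1]**2 * b[0]

# Check I(g.l, g.q) = det(g)^2 * I(l, q) on random integer data.
rng = random.Random(0)
for _ in range(100):
    g = ((rng.randint(-5, 5), rng.randint(-5, 5)),
         (rng.randint(-5, 5), rng.randint(-5, 5)))
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    a = tuple(rng.randint(-5, 5) for _ in range(2))
    b = tuple(rng.randint(-5, 5) for _ in range(3))
    A, B = transform(g, a, b)
    assert I(A, B) == det**2 * I(a, b)
```

Here $I$ has degree $d_1=2$ in the $a_i$ and $d_2=1$ in the $b_i$, consistent with the Proposition above.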
If we let $f=E_{1,1} \in M_{1,1}(\Gamma[\sqrt{-3}],\det)$ as defined in \cite{CvdG2021}
with Fourier expansion
$$
E_{1,1}(u,v)=
\sum_{\alpha \in O_F}
\left[\begin{smallmatrix} X'(\alpha u) \\ \frac{2\pi}{\sqrt{3}}\bar{\alpha}X(\alpha u) \end{smallmatrix}\right]
q_v^{N(\alpha)}
=
\left[\begin{smallmatrix} X'(0) \\ 0 \end{smallmatrix}\right]+
6\left[\begin{smallmatrix} X'(u) \\ \frac{2\pi}{\sqrt{3}}X(u) \end{smallmatrix}\right]\,q_v +
6\left[\begin{smallmatrix} (XYZ)'(u) \\ \frac{2\pi}{\sqrt{3}}3XYZ(u) \end{smallmatrix}\right]\,q^3_v +
\ldots
$$
we get
$$
\Psi_{I}(E_{1,1})(u,v)=8\pi^2a^2
\bigg(
Xq_v+
\frac{9X}{a^2}
\big(4(XX''-(X')^2)+a^2YZ\big)\,q_v^3
\bigg)+\ldots
$$
with $a=X'(0)$,
but we know that $XX''-(X')^2=-a^2YZ$, so we get
$$
\Psi_{I}(E_{1,1})(u,v)=
8\pi^2a^2
(
Xq_v-27XYZq_v^3+\ldots
)
=8\pi^2a^2 \zeta(u,v).
$$
Here $\zeta \in S_{0,6}(\Gamma[\sqrt{-3}],\det)$ is the form that appeared in the
preceding section.
\end{section}
| {
"timestamp": "2022-11-11T02:14:28",
"yymm": "2211",
"arxiv_id": "2211.05611",
"language": "en",
"url": "https://arxiv.org/abs/2211.05611",
"abstract": "We discuss two simple but useful observations that allow the construction of modular forms from given ones using invariant theory. The first one deals with elliptic modular forms and their derivatives, and generalizes the Rankin-Cohen bracket, while the second one deals with vector-valued modular forms of genus greater than one.",
"subjects": "Number Theory (math.NT); Algebraic Geometry (math.AG)",
"title": "Modular forms via invariant theory"
} |
https://arxiv.org/abs/2103.08463 | How to distribute data across tasks for meta-learning? | Meta-learning models transfer the knowledge acquired from previous tasks to quickly learn new ones. They are trained on benchmarks with a fixed number of data points per task. This number is usually arbitrary and it is unknown how it affects performance at testing. Since labelling of data is expensive, finding the optimal allocation of labels across training tasks may reduce costs. Given a fixed budget of labels, should we use a small number of highly labelled tasks, or many tasks with few labels each? Should we allocate more labels to some tasks and less to others? We show that: 1) If tasks are homogeneous, there is a uniform optimal allocation, whereby all tasks get the same amount of data; 2) At fixed budget, there is a trade-off between number of tasks and number of data points per task, with a unique solution for the optimum; 3) When trained separately, harder task should get more data, at the cost of a smaller number of tasks; 4) When training on a mixture of easy and hard tasks, more data should be allocated to easy tasks. Interestingly, Neuroscience experiments have shown that human visual skills also transfer better from easy tasks. We prove these results mathematically on mixed linear regression, and we show empirically that the same results hold for few-shot image classification on CIFAR-FS and mini-ImageNet. Our results provide guidance for allocating labels across tasks when collecting data for meta-learning. | \section{Introduction}
Deep learning (DL) models trained from scratch require a large amount of data to perform well, but labeling data is expensive and time-consuming.
An effective approach to avoid the costs of collecting and labeling a large amount of data is transfer learning: train a model on one big dataset, or a few related datasets that are already available, and then fine-tune the model on the target dataset, which can be of much smaller size \cite{donahue_decaf:_2014}.
In this context, there has been a recent surge of interest in the field of \emph{meta-learning}, which is inspired by the ability of humans to \emph{learn how to learn} \cite{hospedales_meta-learning_2020}.
A model is \emph{meta-trained} on a large number of tasks, each characterized by a small dataset, and \emph{meta-tested} on the target dataset.
The number of data points per task is usually set to an arbitrary number in standard meta-learning benchmarks.
For example, in few-shot image classification benchmarks, such as \emph{mini}-ImageNet \cite{vinyals_matching_2017}, \cite{ravi_optimization_2017} and CIFAR-FS \cite{bertinetto_meta-learning_2019}, each task has five classes ($5$-way) and either one or five images per class are used during testing ($1$-shot or $5$-shots).
During training, the number of data points per class is usually set to an arbitrary value, and it remains unclear how this number should be set to achieve the best testing performance.
We focus on training, rather than testing data, because the former can be optimized by following specific procedures for data partitioning and collection.
Intuitively, one would think that the performance always improves with the number of training data points.
However, if the total number of labels is limited, is it better to have a large number of tasks with little data in each task, or a smaller number of highly labelled tasks?
Should some tasks be given more labels than other tasks?
The answers to these questions remain unknown, although they are important to inform the design of new meta-learning benchmarks and the application of meta-learning algorithms to real
problems, especially given that data labelling is costly.
Hence, we address these questions for the first time, for a specific meta-learning algorithm: MAML \cite{finn_model-agnostic_2017}.
Our contributions are:
\begin{itemize}
\item We introduce the problem of optimizing data allocation in meta-learning, with a fixed budget of total data points to distribute across training tasks. We show that, when tasks are homogeneous, the optimal solution is distributing data uniformly across tasks: all tasks get the same amount of data. This setting is considered in most meta-learning problems (see section \ref{section:data_allocation}, \emph{'The data allocation problem'}, Theorem \ref{unifThm}).
\item When data is distributed uniformly across tasks, we show that the trade-off between number of tasks and number of data points per task, at fixed budget, has a unique optimal solution at large budgets (section \ref{sec:uniform}, \emph{'Solution of the uniform allocation'}, Theorems \ref{underThm}, \ref{optallocThm}, Figures \ref{fig:linreg}, \ref{fig:image_classification}).
\item Next, we consider the problem of two sets of tasks, easy and hard. When trained separately, we show that hard tasks need more data (per task) than easy tasks. While it is intuitive that hard tasks require more data for training, we emphasize that the total number of data points is fixed by the given budget, so the number of tasks is then smaller (section \ref{sec:easyhardonly}, \emph{'Separate training'}, Figure \ref{fig:easyhardonly}).
\item Finally, we study the problem of training a non-homogeneous mixture of easy and hard tasks. In contrast to when they are trained separately, we show that better performance is obtained by allocating more data to easy tasks. Our interpretation is that, as long as learning transfers from easy to hard tasks, it is better to train more on the former since they are easier to learn. Interestingly, human visual skills also transfer better from easy tasks \cite{ahissar_task_1997} (section \ref{sec:easyhard}, \emph{'Joint training'}, Figure \ref{fig:easyhard}).
\end{itemize}
We prove results mathematically on mixed linear regression, and confirm those results empirically on few-shot image classification on CIFAR-FS and \emph{mini}-ImageNet (code in the supplementary material).
\section{Related Work}
In the context of meta-learning and mixed linear regression, the work of \cite{kong_meta-learning_2020} asks whether more tasks with a small amount of data can compensate for a lack of tasks with big data.
However, they do not address the problem of finding the optimal allocation of data for a fixed budget, which is the main scope of our work.
The work of \cite{shekhar_adaptive_2020} studies the problem of allocating a fixed budget of data points to a finite set of discrete distributions.
In contrast to our work, they do not study the meta-learning problem and their data has no labels.
Similar to us, a few theoretical studies looked at the problem of mixed linear regression in the context of meta-learning (\cite{bernacchia_meta-learning_2021}, \cite{denevi_learning_2018}, \cite{bai_how_2021}, \cite{tripuraneni_provable_2020}, \cite{du_few-shot_2020}, \cite{collins_why_2020}, \cite{gao_modeling_2020}).
However, none of these studies look into the problem of data allocation, which is our main focus.
An alternative approach to avoid labelling a large amount of data is \emph{active learning}, where a model learns with fewer labels by accurately selecting which data to learn from \cite{settles_active_2010}.
In the context of meta-learning, the option of implementing active learning has been considered in a few recent studies \cite{bachman_learning_2017}, \cite{garcia_few-shot_2018}, \cite{kim_bayesian_2018}, \cite{finn_probabilistic_2019}, \cite{requeima_fast_2020}.
However, they considered the active labeling of data within a given task, for the purpose of improving performance in that task only.
Instead, we ask how data should be distributed across tasks.
In the context of recommender systems and text classification, a few studies considered whether labeling a data point, within a given task, may increase performance not only in that task but also in all other tasks.
This problem has been referred to as \emph{multi-task active learning} \cite{reichart_multi-task_2008}, \cite{zhang_multi-task_2010}, \cite{saha_online_2011}, \cite{harpale_multi-task_2012}, \cite{fang_active_2017}, or \emph{multi-domain active learning} \cite{li_multi-domain_2012}, \cite{zhang_multi-domain_2016}.
However, none of these studies consider the problem of meta-learning with a fixed budget.
A few studies have looked into actively choosing the next task in a sequence of tasks \cite{ruvolo_active_2013}, \cite{pentina_curriculum_2015}, \cite{pentina_multi-task_2017}, \cite{sun_active_2018}, but they do not look at how to distribute data across tasks.
\section{Meta-learning}\label{sec:meta}
The reader may refer to \cite{hospedales_meta-learning_2020} for a general introduction to meta-learning with neural networks.
In this work, we consider the cross-task setting, where we have a distribution of tasks $\tau\sim p(\tau)$ and a distribution of data points for a given task $\mathcal D^{\tau}\sim p( \mathcal D |\tau)$.
Each task has a loss function $\mathcal{L}(\theta;\mathcal{D})$ that depends on a set of parameters $\theta$ and data $\mathcal{D}$.
Here we assume that the loss has the same functional form across tasks (e.g. square loss if they are all regression tasks, cross-entropy if they are all classification tasks).
The goal of meta-learning is minimizing the mean of the loss across tasks and data.
In the \emph{meta-training} phase, $m$ tasks $(\tau_i)_{i=1}^m$ are sampled from $p(\tau)$ and, for each task, $n_i^t$ training data points $\mathcal D_i^t=(\mathbf{x}_{ij}^t,y_{ij}^t)_{j=1}^{n_i^t}$ and $n_i^v$ validation data points $\mathcal D_i^v=(\mathbf{x}_{ij}^v,y_{ij}^v)_{j=1}^{n_i^v}$,
are sampled independently from the same distribution $p( \mathcal D |\tau_i)$.
We assume that the data is given by input $\mathbf{x}$ - label $y$ pairs.
The meta-training loss, a function of the data and the meta-parameters $\boldsymbol\omega$, is equal to
\begin{equation}\label{meta-loss-empirical}
\mathcal{L}^{meta}\left(\boldsymbol\omega;\mathcal D^t,\mathcal D^v\right)=\frac{1}{m}\sum_{i=1}^m\frac{1}{n_i^v} \sum_{j=1}^{n_i^v}\mathcal{L}\Big(\boldsymbol\theta(\boldsymbol\omega,\mathcal{D}_i^t); \mathbf{x}_{ij}^v,y_{ij}^v\Big)
\end{equation}
The parameters are adapted to each task $i$ by using the transformation $\boldsymbol\theta(\boldsymbol\omega,\mathcal{D}_i^t)$.
Different meta-learning algorithms correspond to a different choice of this transformation.
Here we use MAML \cite{finn_model-agnostic_2017}, which performs a fixed number of stochastic gradient descent steps with respect to the data for each task.
With a single gradient step, the adapted parameters are equal to
\begin{equation}
\label{adaptation}
\boldsymbol\theta(\boldsymbol\omega,\mathcal D_i^t)=\boldsymbol\omega-\frac{\alpha_i}{n_i^t}\sum_{j=1}^{n_i^t}\nabla_{\boldsymbol\omega} \mathcal{L}\left(\boldsymbol\omega; \mathbf{x}_{ij}^t,y_{ij}^t\right)
\end{equation}
where $\alpha_i$ is the learning rate for task $i$.
This equation corresponds to a full-batch update, employing all the data for a given task, but mini-batch gradient updates can be performed as well.
A number $k$ of gradient steps may be used instead of one.
This step is referred to as the \emph{inner loop} of meta-learning.
The loss in Eq.(\ref{meta-loss-empirical}) is minimized with respect to the meta-parameters $\boldsymbol\omega$, namely
\begin{equation}
\label{omegastarmain}
\boldsymbol\omega^\star\left(\mathcal D^t,\mathcal D^v\right)=\mathop{\argmin}_{\boldsymbol\omega}\mathcal{L}^{meta}\left(\boldsymbol\omega;\mathcal D^t,\mathcal D^v\right)
\end{equation}
This minimum is searched by stochastic gradient descent, using a distinct learning rate $\alpha_{meta}$.
At each gradient step, Eq.(\ref{adaptation}) is computed for each task and the gradient of Eq.(\ref{meta-loss-empirical}) with respect to $\boldsymbol\omega$ is taken.
This step is referred to as the \emph{outer loop} of meta-learning.
Note that Eq.(\ref{meta-loss-empirical}) includes all $m$ tasks, which translates into full-batch training when taking the gradient.
However, a mini-batch of tasks may be also drawn from the set of $m$ tasks at each step of the optimization.
Standard optimization procedures such as early stopping and scheduling of the learning rate $\alpha_{meta}$ can be applied.
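The inner and outer loops above can be sketched compactly. Below is an illustrative implementation (ours, with hypothetical names, not the authors' code) of one outer-loop step of one-step MAML for a linear model with squared loss, for which the gradient of Eq.(\ref{meta-loss-empirical}) through the inner step of Eq.(\ref{adaptation}) is available in closed form:

```python
import numpy as np

def maml_outer_step(omega, tasks, alpha, alpha_meta):
    """One outer-loop step of one-step MAML for linear regression.

    tasks: list of (Xt, yt, Xv, yv) training/validation splits;
    model y = x @ theta, per-point loss (1/2)(y - theta^T x)^2.
    """
    grad = np.zeros_like(omega)
    for Xt, yt, Xv, yv in tasks:
        nt, nv = len(yt), len(yv)
        # Inner loop: one full-batch gradient step on the task's training split.
        theta = omega - (alpha / nt) * Xt.T @ (Xt @ omega - yt)
        # Outer gradient: chain rule through the inner step, using the
        # (symmetric) Jacobian d theta / d omega = I - (alpha/nt) Xt^T Xt.
        J = np.eye(len(omega)) - (alpha / nt) * Xt.T @ Xt
        grad += J @ (Xv.T @ (Xv @ theta - yv)) / nv
    return omega - alpha_meta * grad / len(tasks)
```

Iterating this map implements the minimization of Eq.(\ref{omegastarmain}); mini-batching over tasks and multiple inner steps are straightforward variations.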
In the case of mixed linear regression (section \emph{'Solution of the uniform allocation'} \ref{sec:uniform}), we solve Eq.(\ref{omegastarmain}) exactly by linear algebra.
In the \emph{meta-testing} phase, the test loss $\mathcal{L}^{test}$ is computed using the optimal value $\boldsymbol\omega^\star$ and test datasets $\tilde{\mathcal D}^t,\tilde{\mathcal D}^v$
\begin{equation}
\label{test_loss}
\mathcal{L}^{test}\left(\mathcal D^t,\mathcal D^v,\tilde{\mathcal D}^t,\tilde{\mathcal D}^v\right)=\mathcal{L}^{meta}\left(\boldsymbol\omega^\star\left(\mathcal D^t,\mathcal D^v\right);\tilde{\mathcal D}^t,\tilde{\mathcal D}^v\right)
\end{equation}
The test datasets correspond to a new draw of both tasks and data points.
The values of hyperparameters $m, n^t, n^v, \alpha, k$ for meta-testing are not necessarily the same as those used during meta-training.
The main focus of this work is optimizing $m, n_i^t, n_i^v$ for meta-training, while they are fixed during meta-testing.
To evaluate the performance of the model for a given choice of the hyperparameters, we compute the average test loss, defined as
\begin{equation}
\label{avg_test_loss}
\overline{\mathcal{L}}^{test}(n_1^t, \ldots n_{m}^t, n_1^v,\ldots n_{m}^v) =\mathop{\mathbb{E}}_{\mathcal D_t}\mathop{\mathbb{E}}_{\mathcal D_v}\mathop{\mathbb{E}}_{\tilde{\mathcal D}_t}\mathop{\mathbb{E}}_{\tilde{\mathcal D}_v}\mathcal{L}^{test}
\end{equation}
\section{The data allocation problem}\label{section:data_allocation}
We denote the number of data points per task $i$ during meta-training as $N_i=n_i^t+n_i^v$, equal to the sum of training and validation data.
In all experiments we used an equal split of training and validation, $n_i^t=n_i^v=n_i$.
We assume that the total number of data points for meta-training, referred to as \emph{budget}, is constant and equal to $b=\sum_{i=1}^{m}N_i=2\sum_{i=1}^{m}n_i$.
This is equal to the total number of data points across all training tasks, and is assumed fixed, while the numbers of data points per task $N_i$ are allowed to vary.
We denote by $\mathbf{n}$ the vector of $n_i$ values, $\mathbf{n}=(n_1,\ldots,n_m)$, and define the \emph{data allocation} problem of finding the value of $\mathbf{n}$ such that the average test loss is minimized
\begin{equation}
\label{dataalloc}
\mathbf{n}^\star=\mathop{\argmin}_{\mathbf{n}\;:\;\sum_{i=1}^{m}n_i=b/2}\overline{\mathcal{L}}^{test}(n_1,\ldots,n_m)
\end{equation}
The optimal value $\mathbf{n}^\star$ is referred to as the \emph{optimal allocation}; it may depend on the budget and on other hyperparameters of the model.
The optimal allocation determines which tasks should get more or less data, for a fixed budget $b$ and number of tasks $m$.
In the following theorem, we provide conditions under which the optimal data allocation is uniform.
\vspace{\baselineskip}
\begin{theorem}
\label{unifThm}
If the test loss $\overline{\mathcal{L}}^{test}$ is invariant under permutations of task allocations, i.e. permutations of its arguments $(n_1,\ldots,n_m)$, then the uniform allocation $\mathbf{n}=(n,\ldots,n)$ with $n=\frac{b}{2m}$ is a local extremum of the constrained optimization problem, provided that it is non-degenerate.
Furthermore, if
\begin{equation}\label{mini-schur}
\overline{\mathcal{L}}^{test}\left(\frac {n_1 + n_2}{2}, \frac{n_1 + n_2}{2}, n_3,...,n_m\right)\leq\overline{\mathcal{L}}^{test}(n_1,...,n_m),
\end{equation}
for all $n_1,...,n_m$, subject to $\sum_{i=1}^m n_i = \frac {b}{2}$,
then the uniform allocation is the global minimum of the data allocation problem.
\end{theorem}
\begin{proof}
The proof of the first part (the modern Purkiss principle) is given in \cite{waterhouse_symmetric_1983}, noting that the action of the symmetric group preserves the constraint and is irreducible, while the proof of the second part (global minimum) is given in \cite{keilson_global_1967}.
\end{proof}
Note that convexity of the test loss is a sufficient condition for the global minimum.
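A quick heuristic for the first part (our sketch; the cited references contain the actual proof): introducing a Lagrange multiplier $\mu$ for the budget constraint,

```latex
\Lambda(\mathbf n,\mu)
  =\overline{\mathcal{L}}^{test}(n_1,\ldots,n_m)
   -\mu\Big(\sum_{i=1}^m n_i-\frac{b}{2}\Big),
\qquad
\frac{\partial\Lambda}{\partial n_i}
  =\frac{\partial\overline{\mathcal{L}}^{test}}{\partial n_i}-\mu .
```

At the uniform point $n_i=\frac{b}{2m}$, permutation invariance forces all partial derivatives $\partial\overline{\mathcal{L}}^{test}/\partial n_i$ to take a common value, so choosing $\mu$ equal to that value makes the uniform allocation a constrained critical point.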
We show in the \emph{'Mixed linear regression'} section \ref{sec:linear} that the Purkiss principle applies to the case of mixed linear regression with homogeneous tasks.
This result motivates, in addition to the data allocation problem (\ref{dataalloc}), the study of the \emph{uniform allocation} problem, in which the number of data points is assumed to be equal across tasks, but now the number of tasks $m$ is allowed to vary.
The solution of this problem is defined by
\begin{equation}
\label{unifdataalloc}
n^\star=\mathop{\argmin}_{n\;:\;nm=b/2}\overline{\mathcal{L}}^{test}(n)
\end{equation}
In this case, the question is whether to have more data and fewer tasks, or less data and more tasks, for the fixed budget $b$.
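Both formulations can be made concrete with a toy brute-force search (our illustration; the loss below is an arbitrary permutation-invariant, convex stand-in, not the test loss analyzed in this paper):

```python
from itertools import combinations

def allocations(total, m):
    """All ordered ways to split `total` data points into m positive integer parts."""
    for cuts in combinations(range(1, total), m - 1):
        yield tuple(b - a for a, b in zip((0,) + cuts, cuts + (total,)))

def best_allocation(loss, budget, m):
    """Brute-force the data allocation problem under the constraint
    sum_i n_i = budget / 2."""
    return min(allocations(budget // 2, m), key=loss)

# Permutation-invariant, convex toy loss: each task's error decays as 1/n_i.
toy_loss = lambda n: sum(1.0 / x for x in n)
```

For example, with budget $b=24$ and $m=3$ the search returns the uniform allocation $(4,4,4)$, as Theorem \ref{unifThm} predicts for a permutation-invariant convex loss.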
In the next sections, we study both problems of data allocation and uniform allocation on mixed linear regression and few-shot image classification on CIFAR-FS and \emph{mini}-ImageNet.
\vspace{-1mm}
\subsection{Computation of the optimum}
\label{sec:compopt}
In the case of linear regression, we derive exact expressions for $\overline{\mathcal{L}}^{test}$ and $\mathbf{n}^\star$ in some limiting cases.
In few-shot image classification, and in further linear regression experiments, we estimate $\overline{\mathcal{L}}^{test}$ empirically by searching a grid of values of $\mathbf{n}$.
We average the test loss over multiple repetitions with different data samples and different initial conditions for $\boldsymbol\omega$.
Then, we determine the mean and standard deviation for the optimum $\mathbf{n}^\star$ by the following procedure: we generate multiple instances of test loss/accuracy vs $\mathbf{n}$ by sampling uniformly from the repetitions at each value of $\mathbf{n}$, record the optimal $\mathbf{n}^\star$ of each instance, and construct a distribution of $\mathbf{n}^\star$ across all instances.
We also provide nonlinear (sinusoid) regression experiments in the appendix.
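This resampling procedure can be sketched as follows (an illustrative implementation, ours; `losses_by_n` is a hypothetical name for the collected measurements):

```python
import random
import statistics

def bootstrap_optimum(losses_by_n, n_instances=1000, seed=0):
    """Estimate mean and standard deviation of the optimum n*.

    losses_by_n: dict mapping each tried n to its list of repeated
    test-loss measurements. Each bootstrap instance samples one
    measurement per n, records the argmin, and the distribution of
    argmins across instances gives the statistics.
    """
    rng = random.Random(seed)
    ns = sorted(losses_by_n)
    optima = []
    for _ in range(n_instances):
        curve = {n: rng.choice(losses_by_n[n]) for n in ns}
        optima.append(min(ns, key=curve.get))
    return statistics.mean(optima), statistics.pstdev(optima)
```

For test accuracy instead of test loss, the argmin is replaced by an argmax.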
\section{Solution of the uniform allocation}
\label{sec:uniform}
In this section we consider the problem of uniform allocation, while the non-uniform case is studied in the section \emph{'Easy vs hard tasks'} \ref{sec:easyvshard}.
We look at the trade-off between having either more tasks or more data per task, for a fixed budget, and we show that this problem has a unique optimum.
We study this trade-off on two problems: mixed linear regression, where we compute a closed form expression for the optimum, and few-shot image classification, where we show empirical results.
\subsection{Mixed linear regression}
\label{sec:linear}
In mixed linear regression, each task is characterized by a different linear function, and the loss is the mean squared error:
\begin{figure*}[th]
\centering
\includegraphics[width = 0.75\textwidth]{AllotTask_neurips_linreg.png}
\caption{\textbf{The optimal number of data points per task is constant for large budgets: linear regression}. \textbf{A}: Test loss vs. number of data points per task at fixed budget (more data points imply fewer tasks). Dots: experimental values; Lines: theoretical prediction Eq.(\ref{testloss}), different lines correspond to different budgets (legend). As predicted by Theorem \ref{underThm}, the theoretical prediction is more accurate for larger budgets. Each curve has a unique optimum. \textbf{B}: Optimal number of data points per task vs. budget; the four points correspond to the four curves in panel A. The theoretical prediction of Eq.(\ref{optimalN}) (orange line) is close to the estimated experimental optimum (see section \ref{sec:compopt}, \emph{'Computation of the optimum'}, for its computation).}
\label{fig:linreg}
\end{figure*}
\begin{align}
\mathcal{L}\left(\boldsymbol\theta; \mathbf{x},y\right)=\frac{1}{2}\left(y-\boldsymbol\theta^T\mathbf{x}\right)^2
\end{align}
where the label $y$ is a scalar, while the input $\mathbf{x}$ and the parameter $\boldsymbol\theta$ are vectors of $p$ components.
Each task corresponds to a different value of the generating parameter $\boldsymbol\theta$.
Across tasks, this parameter is distributed according to a Gaussian
\begin{align}
\label{generative_task}
\boldsymbol\theta\sim\mathcal{N}\left(\boldsymbol\theta_0,\frac{\nu^2}{p}I_p\right)
\end{align}
where $\boldsymbol\theta_0$, $\nu$ are hyperparameters, and $I_p$ is the $p\times p$ identity matrix.
The distribution of data for a given task is given by
\begin{align}
\label{generative_data}
&y\;|\;\mathbf{x},\boldsymbol\theta\sim\mathcal{N}\left(\boldsymbol\theta^T\mathbf{x},\sigma^2\right)\\
&\mathbf{x} \sim \mathcal{N}\left(\mathbf{0},\lambda^2I_p\right)
\end{align}
where $\sigma$ is the label noise and $\lambda$ is the input variability.
Each data point is independently drawn from this distribution, for either training or validation set.
We distinguish between the case of \emph{homogeneous} tasks, where all tasks have the same values of $(\sigma,\lambda)$, and \emph{non-homogeneous} tasks, where we allow those values to vary across tasks.
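The generative model of Eqs.(\ref{generative_task}) and (\ref{generative_data}) is straightforward to sample; a minimal sketch (ours, with a hypothetical function name):

```python
import numpy as np

def sample_task_data(rng, p, theta0, nu, sigma, lam, n):
    """Draw one task theta ~ N(theta0, (nu^2/p) I_p) and n data points with
    x ~ N(0, lam^2 I_p) and y | x, theta ~ N(theta^T x, sigma^2)."""
    theta = theta0 + (nu / np.sqrt(p)) * rng.normal(size=p)  # task parameter
    X = lam * rng.normal(size=(n, p))                        # inputs
    y = X @ theta + sigma * rng.normal(size=n)               # noisy labels
    return theta, X, y
```

Non-homogeneous tasks correspond to letting `sigma` and `lam` vary with the task index.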
In the following theorem, we compute an approximate expression for the average test loss for mixed linear regression.
\vspace{\baselineskip}
\begin{theorem}
\label{underThm}
Consider the algorithm of section \ref{sec:meta} (MAML one-step) and data generated according to the mixed linear regression model. Let $\sum_{i=1}^mn_i>p$ (underparameterized model), and let $n_i=n_i(\xi)$, $m=m(\xi)$ be any functions of order $\Theta(\xi)$ as $\xi\rightarrow\infty$. Then, the average test loss is equal to
\begin{align}
\label{testloss}
&\overline{\mathcal{L}}^{test}=\frac{\sigma_r^2}{2}\left(1+\frac{\lambda_r^4\alpha_r^2p}{n_r}\right)+\frac{\lambda_r^2h_r\nu^2}{2}+\nonumber\\
&+\frac{\lambda_r^2h_rp}{2}\left[\sum_{i=1}^m\lambda_i^2h_i\right]^{-2}\sum_{i=1}^m\frac{\lambda_i^2}{n_i}\Bigg\{\nonumber\\
&\sigma_i^2\left[h_i+\frac{\lambda_i^4\alpha_i^2}{n_i}\left[\left(n_i+1\right)g_{1i}+p\;g_{2i}\right]\right]+\nonumber\\
&+\frac{\nu^2}{p}\lambda_i^2\left[\left(n_i+1\right)g_{3i}+p\;g_{4i}\right]\Bigg\}+O\left(\xi^{-3}\right)
\end{align}
where the subscript $i$ denotes meta-training hyperparameters for task $i$, while the subscript $r$ denotes meta-testing hyperparameters.
We have defined the function $h_i=\left(1-\lambda_i^2\alpha_i\right)^2+\lambda_i^4\alpha_i^2\frac{p+1}{n_i}$, and the functions $g$ are polynomials in $\lambda_i^2\alpha_i$ with coefficients of order $O(1)$, defined in the appendix, Equation (\ref{gpoly1}).
\end{theorem}
\begin{proof}
The proof is given in the appendix. It provides a generalization of the results of \cite{bernacchia_meta-learning_2021} in the case of non-homogeneous tasks and parametric input variability.
\end{proof}
When tasks are homogeneous ($\sigma_i=\sigma$, $\lambda_i=\lambda$) and a fixed learning rate is used for all meta-training tasks ($\alpha_i=\alpha$), we note that the test loss (\ref{testloss}) is permutation invariant, thus the Purkiss principle of Theorem \ref{unifThm} applies.
Therefore, in the remainder of this section we consider only the case of uniform allocation ($n_i=n$).
Non-homogeneous tasks and non-uniform allocation are studied in the \emph{'Easy vs hard tasks'} section \ref{sec:easyvshard}.
Note also that Theorem \ref{underThm} assumes an underparameterized model ($p<\sum_{i=1}^mn_i$).
For completeness, we also study the overparameterized case in the appendix.
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{ImageClassification_Allotv6.pdf}
\caption{\textbf{The optimal number of data points per task is constant for large budgets: Few-shot image classification}: \textbf{A,B}: CIFAR-FS, \textbf{D,E}: \emph{mini}-ImageNet dataset. Format is the same as in Figure \ref{fig:linreg}. Curves are noisy and tend to flatten at large budgets, but there seems to be a unique optimum for each budget value. The optimum is computed empirically as explained in the \emph{'Computation of the optimum'} section \ref{sec:compopt}. The optimal number of data points converges to $\sim 7$ for CIFAR-FS and to $\sim 10$ for \emph{mini}-ImageNet. Error bars show standard deviation.}
\label{fig:image_classification}
\end{figure*}
Figure \ref{fig:linreg}A plots the meta-test loss of mixed linear regression as a function of $n$ for different budgets.
It shows a good agreement between the experiments and the theoretical prediction of equation (\ref{testloss}) (see appendix for details).
According to equation (\ref{testloss}), the error between theory and experiment is expected to be of order $O\left(b^{-3/2}\right)$, since $b\sim O\left(\xi^2\right)$; indeed, the theoretical prediction is more accurate for larger budgets.
As expected, test loss decreases with budget, since more data implies better performance.
We emphasize that curves have a convex shape, implying that there is a unique optimal value of $n$ for each budget.
While the curves tend to flatten at large budgets, the optimum remains approximately constant, as shown in Figure \ref{fig:linreg}B.
In the following theorem, we compute the unique solution of the uniform allocation problem for mixed linear regression.
\vspace{\baselineskip}
\begin{theorem}
\label{optallocThm}
Under the assumptions of Theorem \ref{underThm}, consider the test loss of Equation (\ref{testloss}) and the uniform allocation problem in Equation (\ref{unifdataalloc}). Furthermore, let $p=p(\xi)$ be a function of order $\Theta(\xi)$ as $\xi\rightarrow\infty$, and neglect orders $O\left(\xi^{-2}\right)$ in Equation (\ref{testloss}). Then, for all sufficiently small values of the learning rate $\alpha$, the uniform allocation problem has a unique minimum, which does not depend on the budget and is given by
\begin{equation}
\label{optimalN}
\nonumber
n^\star=Cp
\end{equation}
where the constant $C$ is defined in Equation (\ref{optimaln}) in the appendix.
\end{theorem}
\begin{proof}
The proof is provided in the appendix.
\end{proof}
This theorem implies that once the suitable error terms in the approximation of $\mathcal{L}^{test}$ are ignored, there is a unique and constant optimum for the number of data points per task at large budgets. Note that the magnitude of the error terms does depend on the budget and the relation between $n$, $p$ and $m$.
While the theoretical optimum does not depend on the budget, it may depend on whether tasks are hard or easy (see section \ref{sec:easyvshard} \emph{'Easy vs hard tasks'}).
Figure \ref{fig:linreg}B shows the optimal $n^\star$ as a function of the budget; the theoretical value of the optimum (orange line) agrees with the experiments.
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{cifarfs_easyonlyhardonly_fig3_horizontal.png}
\caption{\textbf{Hard tasks prefer more data (and fewer tasks) when trained separately}. Few-shot image classification on CIFAR-FS. \textbf{A} Tasks are made harder by drawing classes within a hierarchy; \textbf{B} Tasks are made harder by adding label noise. Both plots show test accuracy versus the number of data points per class, as in Figure \ref{fig:image_classification}A. Each plot shows an estimate of the point of maximum accuracy, with error bars showing standard deviation (see the \emph{'Computation of the optimum'} section \ref{sec:compopt} for its computation). In both cases, performance is lower and the optimal number of data points per class is larger for hard tasks.}
\label{fig:easyhardonly}
\end{figure*}
\subsection{Few-shot image classification}
\label{sec:image}
We next tested whether the results of mixed linear regression generalize to the more interesting problem of few-shot image classification.
In this case, the loss function is the cross-entropy, $\mathcal{L}\left(\theta; x,y\right)=-y^T\log\left(f_\theta(x)\right)$,
where $y$ is a one-hot encoding of the class label, and $f_\theta(x)$ is the output vector of a neural network with parameters $\theta$ and input $x$.
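As a minimal illustration of this loss (the network $f_\theta$ is replaced here by a plain probability vector, which is an assumption of the sketch), the cross-entropy with a one-hot label reduces to the negative log-probability assigned to the true class:

```python
import math

def cross_entropy(y_onehot, f_out):
    """L(theta; x, y) = -y^T log(f_theta(x)) for a one-hot label y."""
    return -sum(y * math.log(p) for y, p in zip(y_onehot, f_out))

# 5-way example: the true class is index 2 and the model assigns it
# probability 0.8, so the loss equals -log(0.8)
y = [0, 0, 1, 0, 0]
probs = [0.05, 0.05, 0.8, 0.05, 0.05]
loss = cross_entropy(y, probs)
```

Only the entry of $f_\theta(x)$ at the true class contributes, since all other components of $y$ are zero.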
We use a convolutional neural network commonly used with MAML on image classification \cite{finn_model-agnostic_2017} (see appendix \ref{appendix:image} for details).
We investigate the CIFAR-FS \cite{bertinetto_meta-learning_2019} and \emph{mini}-ImageNet \cite{vinyals_matching_2017} datasets, which are few-shot versions of CIFAR-100 and ImageNet, respectively.
Both classification problems are $5$-way: each task contains $5$ classes.
We refer to the number of data points \emph{per class}, which has to be multiplied by $5$ to find the number of data points \emph{per task}.
As in previous studies, we used $5$ \textit{shots} during testing ($5$ data points per class), while the number of shots during training depends on the data allocation.
We note that in previous work \cite{vinyals_matching_2017}, \cite{bertinetto_meta-learning_2019}, tasks are usually re-sampled indefinitely until convergence of the model, so there is no limit on the number of tasks that can be generated.
We instead pre-sample a set of tasks in order to fix the budget constraint.
For comparison, we also run experiments in the usual way, and we call this the \emph{infinite budget} case.
However, the total number of labels is fixed: even if tasks are re-sampled indefinitely, the amount of data is not infinite; rather, the same image may appear in multiple tasks.
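To make the budget constraint concrete, the following sketch (with hypothetical numbers; `n_way=5` matches the $5$-way setting above) shows how the number of pre-sampled tasks follows from a fixed label budget:

```python
def n_tasks_for_budget(budget, n_per_class, n_way=5):
    """Number of tasks that can be pre-sampled under a fixed label budget.

    Each task consumes n_way * n_per_class labelled examples, so the
    constraint is  m * n_way * n_per_class <= budget.
    """
    return budget // (n_way * n_per_class)

# hypothetical budget of 10,000 labels on a 5-way problem:
trade_off = {n: n_tasks_for_budget(10_000, n) for n in (5, 7, 10, 20)}
# more data points per class -> fewer tasks under the same budget
```

This is the trade-off scanned in the experiments: at fixed budget, increasing the number of data points per class necessarily decreases the number of tasks.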
As expected, Figure \ref{fig:image_classification} shows that test performance improves with the budget, for both CIFAR-FS and \emph{mini}-ImageNet (Figure \ref{fig:image_classification}A,C).
For infinite budget, the accuracy is similar to previously reported values ($\sim 63\%$ for \emph{mini}-ImageNet \cite{finn_model-agnostic_2017}, $\sim 71\%$ for CIFAR-FS \cite{bertinetto_meta-learning_2019}).
For CIFAR-FS, the optimal number of data points per class was $\sim 20$ at small budgets, but decreased and remained approximately constant at $\sim 7$ for large budgets (Figure \ref{fig:image_classification}B).
For \emph{mini}-ImageNet, the optimal number of data points per class was $\sim 5$ at very small budgets and then increased and remained approximately constant at $\sim 10$ (Figure \ref{fig:image_classification}D).
The performance curves in Figure \ref{fig:image_classification}A,C tend to flatten at higher budgets, but the optimum does not change significantly.
Overall, the empirical study of both datasets confirms our prediction that the optimal number of data points per task is constant at large budgets.
\begin{figure*}[ht]
\centering
\includegraphics[width = 0.9\textwidth]{cifarfs_easyhard_fig4_horizontal.png}
\caption{\textbf{When training on a mixture of easy and hard tasks, it is better to allocate more data to easy tasks}. Few-shot image classification on CIFAR-FS. \textbf{A} Tasks are made harder by drawing classes within a hierarchy; \textbf{B} Tasks are made harder by adding label noise. Both plots show test accuracy versus the relative amount of easy vs hard data points per class. Each plot shows an estimate of the point of maximum accuracy, with error bars showing standard deviation. In both cases, a slightly higher performance is obtained by allocating more data to easy tasks than to hard ones.}
\label{fig:easyhard}
\end{figure*}
\section{Easy vs hard tasks}
\label{sec:easyvshard}
In this section we consider the case of non-homogeneous tasks.
We distinguish between two sets of tasks, easy and hard.
We use two independent definitions of hard tasks, one affects the input and the other affects the output (label) of a dataset.
We apply this definition in a similar way to both mixed linear regression and few-shot image classification.
For the problem of mixed linear regression, we define \emph{task difficulty} in terms of the hyperparameters $\sigma$ and $\lambda$.
A task is harder if it has a larger $\sigma$ (at equal $\lambda$) or smaller $\lambda$ (at equal $\sigma$).
The case of larger $\sigma$ is intuitive: a task is harder to learn if its labels are more corrupted by noise.
In the case of smaller $\lambda$, the smaller input range makes it harder to solve the regression problem in the presence of noise.
In few-shot image classification, the first method to make a task harder is to introduce label noise \cite{song_learning_2021}: each input image has $20\%$ probability of having its label swapped with another random class.
The second method is similar to \cite{collins_why_2020}: we take advantage of the hierarchical tree of the CIFAR-100 dataset and constrain each task to draw classes from one of three superclasses: 1) animals, 2) vegetation, 3) objects and scenes.
In this way, each task has a smaller variability in its input, not in terms of pixel color or intensity, but in terms of semantic relations.
Intuitively, it is harder to distinguish inputs when they are more similar to each other.
We refer to the two different definitions as, respectively, \emph{noisy labels} and \emph{class hierarchy}.
\vspace{-1mm}
\subsection{Separate training}
\label{sec:easyhardonly}
Before studying the training of a mixture of easy and hard tasks, we ask what is the optimal uniform allocation when the two types of tasks are trained separately.
In mixed linear regression, the expression for the optimum of the uniform allocation $n^\star$ is given by Eq.~(\ref{optimalN}), but it is hard to evaluate how it depends on $\sigma$ and $\lambda$.
Therefore we computed an approximation that holds for small $\alpha'$ (see equation (\ref{nstarapprox}) in the appendix):
\begin{align} \label{Nstar}
n^\star= \left[2\left(1+\frac{\sigma'^2}{\nu^2}\right)\right]^{\frac{1}{3}}\alpha'^{\frac{4}{3}}p+O\left(\alpha'^{\frac{5}{3}}\right)
\end{align}
where $\alpha'=\lambda^2\alpha$ and $\sigma'=\sigma/\lambda$.
The optimum increases with $\sigma$, suggesting that harder tasks require more data (and fewer tasks) at fixed budget.
For $\lambda$, there are two opposing forces: 1) on one hand, a smaller $\lambda$ is equivalent to amplifying the output noise $\sigma'$, which increases the optimum $n^\star$; 2) on the other hand, $\lambda$ rescales the learning rate $\alpha'$, with the opposite and stronger effect that a smaller $\lambda$ decreases the optimum.
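Both effects can be checked directly on the leading-order expression \eqref{Nstar}. The following sketch (the hyperparameter values are illustrative assumptions, not fitted to any experiment) evaluates $n^\star$ and confirms that it grows with $\sigma$ but shrinks when $\lambda$ is decreased:

```python
def n_star(sigma, lam, nu, alpha, p):
    """Leading-order optimum from the small-alpha' approximation:
    n* = [2 (1 + sigma'^2 / nu^2)]^(1/3) * alpha'^(4/3) * p,
    where alpha' = lam^2 * alpha and sigma' = sigma / lam."""
    alpha_p = lam ** 2 * alpha
    sigma_p = sigma / lam
    return (2 * (1 + sigma_p ** 2 / nu ** 2)) ** (1 / 3) * alpha_p ** (4 / 3) * p

# illustrative hyperparameters (assumed values)
nu, alpha, p = 1.0, 0.1, 100
harder = n_star(2.0, 1.0, nu, alpha, p)     # larger output noise sigma
easier = n_star(1.0, 1.0, nu, alpha, p)
small_lam = n_star(1.0, 0.5, nu, alpha, p)  # smaller input range lambda
# harder > easier: harder tasks get more data per task;
# small_lam < easier: the learning-rate rescaling dominates
```

The learning-rate rescaling enters as $\alpha'^{4/3}=(\lambda^2\alpha)^{4/3}$, which is why it overwhelms the noise-amplification effect for small $\lambda$.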
Figure \ref{fig:easyhardonly} shows that introducing task difficulty in few-shot image classification on CIFAR-FS increases the optimum for both methods (A: class hierarchy; B: noisy labels, see section \ref{sec:easyhardapp} \emph{'Easy and hard tasks creation in the image classification experiments'} for details).
As expected, performance is lower for hard tasks in both cases (note that we train and test on the same set of tasks, either only easy or only hard).
While it is intuitive that hard tasks require more data to learn, we emphasize that, for a fixed budget, this comes at the expense of a smaller number of tasks.
\vspace{-1mm}
\subsection{Joint training}
\label{sec:easyhard}
We now turn to the problem of training on a mixture of easy and hard tasks.
In addition to a fixed budget, we further assume an equal number of easy and hard tasks, and a constant sum of easy and hard data points per task.
Therefore, the only hyperparameter of interest is the relative number of data points per task for easy vs hard.
Note that we use a mixture of easy and hard tasks also for testing, but we always use an equal number of easy and hard data points and tasks in that case (see section \emph{'Easy and hard tasks creation in the image classification experiments'} \ref{sec:easyhardapp} for details).
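The resulting search space is one-dimensional: with the per-task total fixed, a split is determined by the number of easy data points alone. A minimal sketch of the candidate splits (integer allocations assumed):

```python
def candidate_splits(total_per_task):
    """All splits n_easy + n_hard = total_per_task with both parts >= 1.

    With the per-task total fixed, scanning n_easy scans the single
    hyperparameter of interest: the relative easy-vs-hard allocation.
    """
    return [(n_easy, total_per_task - n_easy)
            for n_easy in range(1, total_per_task)]

splits = candidate_splits(10)  # (1, 9), (2, 8), ..., (9, 1)
```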
Given the results of section \ref{sec:easyhardonly}, we expect better performance when allocating more data to hard tasks.
Surprisingly, we find that the opposite is true.
Figure \ref{fig:easyhard} shows that a slightly higher performance is obtained when allocating more data to easy tasks, in few-shot image classification on CIFAR-FS for both methods (Panel A: class hierarchy; Panel B: noisy labels).
Intuitively, easy tasks are learned more reliably than hard ones.
If what is learned on easy tasks transfers to better performance on hard tasks, then it may indeed be better to allocate more data to easy tasks.
\section{Discussion}
In this paper we analysed the problem of optimal data allocation in meta-learning when the budget of labelled examples is limited.
When tasks are homogeneous, we showed that uniform data allocation across tasks is optimal (under the assumptions of Theorem \ref{unifThm}).
We further studied whether one should use fewer tasks with more data, or more tasks with less data.
For mixed linear regression, we found a unique solution for the optimum at large budgets.
We confirmed this finding empirically on few-shot image classification (an example of nonlinear regression is also included in the appendix).
In the case of non-homogeneous tasks, with a mixture of easy and hard tasks, we showed how to optimally allocate data between the two types of tasks.
In particular, we found that it is better to allocate more data to easy tasks.
This result echoes findings in experimental neuroscience, where it was found that human visual skills indeed transfer better from easy tasks than from hard ones \cite{ahissar_task_1997}.
Our findings provide a guideline for collecting meta-learning data in a way that achieves the best performance under a fixed budget.
We do not expect our study to have a negative societal impact, at least not in a direct way.
Overall, our study exemplifies the importance of optimal data allocation in meta-learning and gives a series of empirical and theoretical insights on the relation between model performance and data allocation for MAML.
While the behaviour of other meta-learners need not be the same, we surmise that the problem of training models close to the optimal allocation is important; it leaves much space for empirical study in a variety of contexts, as well as for the development of a more general theoretical framework.
For example, we have only scratched the surface of the problem of non-uniform allocation, which requires much further study.
% arXiv:2103.08463 (cs.LG): "How to distribute data across tasks for meta-learning?"
% https://arxiv.org/abs/2103.08463
% arXiv:0811.4576: "Concentration of the integral norm of idempotents"
%
% Abstract: This is a companion paper of a recent one, entitled {\sl Integral
% concentration of idempotent trigonometric polynomials with gaps}. New results
% of the present work concern $L^1$ concentration, while the above-mentioned
% paper deals with $L^p$-concentration. Our aim here is two-fold. In the first
% place, we try to explain methods and results, and give further straightforward
% corollaries. On the other hand, we push forward the methods to obtain a better
% constant for the possible concentration (in $L^1$ norm) of an idempotent on an
% arbitrary symmetric measurable set of positive measure. We prove a rather high
% level $\gamma_1>0.96$, which strongly contradicts the conjecture of Anderson
% et al. that there is no positive concentration in $L^1$ norm. The same problem
% is considered on the group $\mathbb{Z}/q\mathbb{Z}$, with $q$ say a prime
% number. There, the property of absolute integral concentration of idempotent
% polynomials fails, which is in a way a positive answer to the conjecture
% mentioned above. Our proof uses recent results of B. Green and S. Konyagin on
% the Littlewood Problem.
\section{Introduction and statement of results}\label{sec:intro}
The problem of $p$-concentration
on the torus for idempotent polynomials has been considered first
in \cite{first}, \cite{CRMany}, \cite{CRSome}, \cite{DPQ}. We use
the notation $\mathbb T:=\mathbb R/\mathbb Z$ for the torus. Then $e(t):=e^{2\pi i
t}$ is the usual exponential function adjusted to interval length
$1$, and we denote by $e_h$ the function $e(ht)$. For obvious reasons
of being convolution idempotents, the set
\begin{equation}\label{eq:idempotents}
\PP:=\left\{ \sum_{h\in H}e_h ~:~ H\subset \mathbb N, ~ \sharp H< \infty
\right\}
\end{equation}
is called the set of \emph{(convolution-)idempotent exponential
(or trigonometric) polynomials}, or just \emph{idempotents} for
short. The $p$-concentration problem comes from the following
definition.
\begin{definition}
Let $p>0$. We say that there is $p$-concentration if there
exists a constant $\gamma>0$ so that for any symmetric (with
respect to $0$) measurable set $E$ of positive measure one can
find an idempotent $f\in\PP$ with
\begin{equation}\label{eq:Lpconcentration}
\int_E |f|^p \geq \gamma\int_\mathbb T |f|^p.
\end{equation}
The supremum of all such constants $\gamma$ will be denoted as
$\gamma_p$, and called the level of $p$-concentration.
\end{definition}
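For a first feel of \eqref{eq:Lpconcentration}, one can estimate the concentration ratio numerically for a concrete idempotent, the Dirichlet kernel, on a symmetric interval $E=(-\delta,\delta)$. This is only an illustrative Riemann-sum sketch, not part of any proof:

```python
import cmath

def dirichlet(n, x):
    """The idempotent D_n(x) = sum_{h=0}^{n-1} e(hx), e(t) = exp(2 pi i t)."""
    return sum(cmath.exp(2j * cmath.pi * h * x) for h in range(n))

def concentration(n, p, delta, grid=4000):
    """Riemann-sum estimate of int_E |D_n|^p / int_T |D_n|^p for the
    symmetric set E = (-delta, delta) on the torus T = R/Z."""
    xs = [(k + 0.5) / grid for k in range(grid)]
    vals = [abs(dirichlet(n, x)) ** p for x in xs]
    inside = sum(v for x, v in zip(xs, vals) if min(x, 1 - x) < delta)
    return inside / sum(vals)

# for p = 2, most of the mass of |D_n|^2 sits near 0 once delta >> 1/n
```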
The main theorem of \cite{Many} can be stated as:
\begin{theorem}[{\bf Anderson, Ash, Jones, Rider, Saffari}]
\label{th:largepconcentration} There is $p$-concentration for all
$p>1$.
\end{theorem}
We prove in our recent paper \cite{BR} that there is
$p$-concentration for all $p>1/2$, while the same authors
conjectured that idempotent concentration fails already for $p=1$.
Moreover, we prove that the constant $\gamma_p$ is equal to $1$
when $p>1$ and $p$ is not an even integer. This is in line with
the fact that $L^p$ norms behave differently depending on whether
$p$ is an even integer or not in a certain number of problems,
such as the Hardy-Littlewood majorant problem ({\sl does an
inequality on absolute values of Fourier coefficients imply an
inequality on $L^p$ norms?}) or the Wiener property for periodic
positive definite functions ({\sl does a positive definite
function\comment{with large gaps in its Fourier series ???} belong
to $L^p$ when it is the case on a small interval?}). The fact that
one can find idempotents among counter-examples to the
Hardy-Littlewood majorant problem had been conjectured by
Montgomery \cite{M} and was recently proved by Mockenhaupt and
Schlag \cite{MS}, and we rely on their construction in \cite{BR}.
At the same time, we were able to revisit the Wiener property in
order to construct counter-examples among idempotents \cite{BR2}.
\medskip
Even if we disproved the conjecture of \cite{Many} for $p=1$, the
situation is not yet entirely clear. Indeed, the constant $\gamma$
can be taken arbitrarily close to $1$ when we restrict the class
of symmetric measurable sets to symmetric open sets or enlarge the
class of trigonometrical polynomials to all positive definite
ones, that is, allow all non negative coefficients and not only
$0$ or $1$. So one may conjecture that $\gamma_1=1$ (even if we
understand that one should be cautious with such conjectures). By
pushing forward our techniques, we improve our previous constant
and prove the following.
\begin{theorem}\label{th:L1} For $p=1$
there is concentration at the level $\gamma_1>0.96$.
Moreover, for arbitrarily large given $N$ the corresponding
concentrating idempotent can be chosen with gaps at least $N$
between consecutive frequencies.
\end{theorem}
In order to prove this theorem, we will describe the main steps of
our proofs in \cite{BR} before focusing on the improvements. When
doing this, we also give a relatively simple proof of the fact
that the best constant $\gamma_2$ for symmetric measurable sets is
the same as for open sets. This is proved in \cite{Many}, as it is
a particular case of their general result, but their proof is not
easy to read. We describe it here so that a simpler, explanatory
proof is available. The constant for open sets has been obtained
by D\'echamps-Gondim, Lust-Piquard and Queff\'elec \cite{DPQ,
DPQ2}, so that
\begin{equation}\label{case2}
\gamma_2=\sup_{0\leq x}\frac {2\sin^2 x}{\pi x}=0.4613\cdots.
\end{equation}
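The numerical value in \eqref{case2} is easy to reproduce: the supremum of $2\sin^2 x/(\pi x)$ is attained at the root of $\tan x=2x$. A short sketch:

```python
import math

def g(x):
    """The function 2 sin^2(x) / (pi x) from Eq. (case2)."""
    return 2 * math.sin(x) ** 2 / (math.pi * x)

# grid search over x in (0, 10); the first hump dominates, since the
# envelope 2/(pi x) is decreasing
x0 = max((k / 10000 for k in range(1, 100000)), key=g)
gamma2 = g(x0)  # ~0.4613, attained near x ~ 1.1656 where tan x = 2x
```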
\bigskip
The same kind of estimates as \eqref{eq:Lpconcentration}, but
with finite sums on a grid of points replacing integrals, plays a
central role in all proofs. So
it was natural to get interested in best constants on these finite
structures. This led us to the same problem, but taken on finite
groups, which we describe now.
\bigskip
Let us consider $\mathbb Z_q:=\mathbb Z/q\mathbb Z$, which identifies with the grid
(or subgroup) $\mathbb{G}_q:=\{ k/q; k=0, 1, \cdots, q-1\}$
contained in the torus. We do not assume that $q$ is a prime
number at this point. We still denote by $e(x):=e^{2\pi i x/q}$
the exponential function adapted to the group $\mathbb Z_q$ and by $e_h$
the function $e(hx)$. Again the set
\begin{equation}\label{eq:idempotents_q}
\PP_q:=\left\{ \sum_{h\in H}e_h ~:~ H\subset \{0,\cdots, q-1\}
\right\}
\end{equation}
is called the set of \emph{idempotents} on $\mathbb Z_q$. In this
context, the set of idempotents has $2^q$ elements.
We then adapt the definition of $p$-concentration to the setting
of $\mathbb Z_q$.
\begin{definition}\label{def:gammapsharp}
Let $p>0$. We say that there is uniform (in $q$) $p$-concentration
for $\mathbb Z_q$ if there exists a constant $\gamma>0$ so that for each
prime number $q$ one can find an idempotent $f\in\PP_q$ with
\begin{equation}\label{p-conc}
2|f(1)|^p \geq \gamma \sum_{k=0}^{q-1}|f(k)|^p.
\end{equation}
Moreover, writing $\gamma^\sharp_p(q)$ for the maximum of all such
constants $\gamma$, we put
$$
\gamma_p^\sharp:=\liminf_{q\to \infty} \gamma^\sharp_p(q).
$$
Then $\gamma_p^\sharp$ is called the uniform
level of $p$-concentration.
\end{definition}
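Since $\PP_q$ has only $2^q$ elements, $\gamma^\sharp_p(q)$ can be computed by brute force for small $q$. The following sketch (exponential in $q$, so for illustration only) enumerates all nonempty idempotents on $\mathbb Z_q$:

```python
import cmath

def gamma_sharp(p, q):
    """Brute-force gamma_p^#(q): maximum over nonempty H of
    2 |f(1)|^p / sum_{k=0}^{q-1} |f(k)|^p,
    where f(k) = sum_{h in H} exp(2 pi i h k / q)."""
    best = 0.0
    for mask in range(1, 2 ** q):                  # nonempty subsets H
        H = [h for h in range(q) if mask >> h & 1]
        vals = [abs(sum(cmath.exp(2j * cmath.pi * h * k / q) for h in H))
                for k in range(q)]
        best = max(best, 2 * vals[1] ** p / sum(v ** p for v in vals))
    return best
```

For $q=7$ and $p=2$, the maximum already exceeds $0.48$ (for instance with $H=\{0,1,2\}$), and since $|f(q-1)|=|f(1)|$ it can never exceed the trivial bound $2/3$.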
Here we can formulate a discrete analogue of the problem in
\cite{CRMany, Many}. {\sl Does $q$-uniform concentration fail for
$p=1$?}
The reader may note that in order to define $p$-concentration in
the setting of $\mathbb Z_q$, one should also look for $f$ that
satisfies \eqref {p-conc}, but with $f(a)$, for some arbitrary
$a\in \mathbb Z_q$, in the left hand side. This is easy when $q$ is
prime. Indeed, for $a=0$ the Dirac mass at $0$, which is an
idempotent, has the required property with constant $1$.
Otherwise, if $a\neq 0$ and $f$ satisfies \eqref {p-conc}, then
the function $g(x):=f(a^{-1}x)$ satisfies the same inequality, but
with $g(a)$ in the left hand side. Here $a^{-1}$ is the unique
inverse for the multiplication in $\mathbb Z_q$. Clearly $g(a) = f(1)$,
and all other values taken by $f$ are taken by $g$ since
multiplication is one-to-one in $\mathbb Z_q$ for $q$ prime, so that the
right hand side is the same for $f$ and $g$.
\begin{remark}\label{not-prime} We can also replace $1$ by $a$ in
the left-hand side of \eqref {p-conc} when $q$ is any integer,
provided that $a$ and $q$ are co-prime.
\end{remark}
\medskip
As we said, $p$-concentration on $\mathbb Z_q$ plays a role in proofs
for $p$-concentration on the torus. In order to solve the
$2$-concentration problem on the torus, D\'echamps-Gondim,
Lust-Piquard and Queff\'elec \cite{DPQ, DPQ2} have considered the
concentration problem on $\mathbb Z_q$, proving the precise value that
we already mentioned,
\begin{equation}\label{case2-q}
\gamma_2^\sharp=\sup_{0\leq x}\frac {2\sin^2 x}{\pi
x}=0.4613\cdots.
\end{equation}
Moreover, they obtained $\gamma_p^\sharp\geq 2
(\gamma_2^\sharp/2)^{p/2}$ for all $p>2$. The last assertion is an
easy consequence of the decrease of $\ell^p$ norms with $p$, and
we have, in general,
\begin{equation}\label{comparison}
\gamma_p^\sharp\geq 2 \left(\gamma^\sharp_{p'}/2\right)^{p/p'}
\end{equation}
for $p>p'$.
Let us also mention that they considered the same problem for the
class of positive definite polynomials, that is
\begin{equation}\label{eq:pos-def}
\PP_q^+:=\left\{ \sum_{h\in H}a_he_h ~:~ a_h\geq 0,
h\in\{0,\cdots, q-1\} \right\}.
\end{equation}
We say that there is uniform $p$-concentration on $\mathbb Z_q$ for the
class of positive definite polynomials if there exists some
constant $\gamma$ such that \eqref{p-conc} holds for some $f\in
\PP^+_q$. We denote by $c_p^+$ the level of $p$-concentration for
the class of positive definite polynomials, which is defined as
the maximum of all admissible constants in \eqref{p-conc} (similarly to the
class of idempotents).
With these notations, it has been proved in \cite{DPQ} that
$c_2^+=1/2$. Since the class of positive definite polynomials is
stable by taking products, it follows that, for all even integers
$2k$,
$$\gamma_{2k}^\sharp\leq c_{2k}^+\leq 1/2.$$
It is easy to see that there is uniform $p$-concentration on
$\mathbb Z_q$ for all $p>1$, using Dirichlet kernels. This has been used
in our paper \cite{BR}, where the discrete problem under
consideration here has been largely studied, at least for $p$ an
even integer.
On the other hand, coming back to our main point, i.e. to the case
of $p=1$, and using the recent results of B. Green and S. Konyagin
\cite{GK}, we answer this question negatively, which gives an
affirmative answer to the conjecture of \cite{Many} for finite
groups $\mathbb Z_q$.
All the results on $\mathbb Z_q$ summarize in the following theorem,
which gives an almost complete answer to the $p$-concentration
problem under consideration, except for the best constants, which
are not known for $p\neq 2$.
\begin{theorem}\label{th:concentration} For all $1<p<\infty$ we
have uniform $p$-concentration on $\mathbb Z_q$. We have
$\gamma^\sharp_2$ given by \eqref{case2-q}, and
$0.495<\gamma^\sharp_4\leq 1/2$. For all $p>2$, we have
$\gamma^\sharp_{p}>0.483$. On the other hand for $p\leq 1$ we do
not have uniform $p$-concentration.
\end{theorem}
Positive results are implicitly contained in \cite{BR}, where they
are used as tools for the problem of concentration on the torus.
As far as necessary upper bounds for $\gamma^\sharp_p$ are
considered, since the polynomials $f$ with positive coefficients
have their maximum at $0$, we have the trivial upper bound
$\gamma^\sharp_p\leq 2/3$. Moreover, for $p$ an even integer, we
have seen that $\gamma^\sharp_p\leq 1/2$. Let us remark that
\eqref{comparison} provides an improvement on the bound $2/3$
between two even integers. Indeed, for $p\leq 2k$, we have
$$\gamma^\sharp_p\leq 2^{1-p/k}.$$
\bigskip
In the next two sections, we will consider the case of $\mathbb Z_q$,
first for $p>1$, then for $p=1$. Then, in Section 4, we will come
back to the case $p=2$ on the torus and exploit the proof for
giving concentration results by means of the use of the grid
$\mathbb{G}_q$. In the last section, we prove Theorem \ref{th:L1}.
\bigskip
We tried to keep the notations for the constants the same as in
\cite{BR}, since we refer to the proofs there, and apologize that
these notations sometimes seem more complicated than they should
be.
\section{Uniform $p$-concentration}\label{finite}
In this section, we will recall the situation on the group $\mathbb Z_q$
by transferring the results that have been obtained for the grid
$$\mathbb{G}_q:=\{ k/q; k=0, 1, \cdots, q-1\}$$ contained in $\mathbb T$.
By a slight abuse of notation, let us still denote
\begin{equation}\label{eq:idem_q}
\PP_q:=\left\{ \sum_{h\in H}e_h ~:~ H\subset \{0,\cdots, q-1\}
\right\}
\end{equation}
the set of trigonometrical idempotents of degree less than $q$ on
$\mathbb T$, with $e_h$ denoting the exponential $e_h(x):=e^{2\pi i hx}$
adapted to $\mathbb T$. When restricted to $\mathbb{G}_q$ identified
with $\frac 1q\mathbb Z_q$, it coincides with the corresponding
idempotent (the coefficients are the same, but the exponential is
now adapted to $\mathbb Z_q$) on $\mathbb Z_q$. This is a one-to-one
correspondence between idempotents of $\mathbb Z_q$ and idempotents of
degree less than $q$, since these last ones are determined by
their values on $q$ points, and, in particular, on $\mathbb{G}_q$.
We will prefer to deal with ordinary trigonometrical polynomials,
and see $\mathbb Z_q$ as the grid $\mathbb{G}_q$.
Unless explicitly mentioned, we will only consider Taylor
polynomials, that is, trigonometrical polynomials with only non
negative frequencies.
We consider the following quantities, written in these new
notations, and identify them with the quantities defined for
$\mathbb Z_q$ in the introduction.
\begin{equation}\label{new-c} \gamma^\sharp_p:=
\liminf_{q\rightarrow \infty} \gamma^\sharp_p(q), \qquad
\gamma^\sharp_p(q):=\sup_{R\in \PP_q} \frac{2\left |R\left(\frac
1q\right) \right|^p}{ \sum_{k=0}^{q-1} \left|R\left(\frac
kq\right) \right|^p}.
\end{equation}
One can obtain a lower bound of $\gamma^\sharp_p$, with $p>1$, by
considering only the Dirichlet kernels
\begin{equation}\label{eq:Dndef}
D_n(x):=\sum_{\nu=0}^{n-1} e(\nu x) = e^{\pi i(n-1)x}
\frac{\sin(\pi n x)}{\sin(\pi x)}.
\end{equation}
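The closed form in \eqref{eq:Dndef} can be checked numerically against the defining sum (an illustrative sanity check only):

```python
import cmath
import math

def dirichlet_sum(n, x):
    """Left-hand side of Eq. (eq:Dndef): sum_{v=0}^{n-1} e(vx)."""
    return sum(cmath.exp(2j * cmath.pi * v * x) for v in range(n))

def dirichlet_closed(n, x):
    """Right-hand side: e^{pi i (n-1) x} sin(pi n x) / sin(pi x), x not in Z."""
    return (cmath.exp(1j * cmath.pi * (n - 1) * x)
            * math.sin(math.pi * n * x) / math.sin(math.pi * x))

# the two expressions agree for every n >= 1 and non-integer x
```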
Here the constraint on the degree restricts us to $n<q$. Having
$n$ and $q$ tend to infinity with $n/q$ tending to $t$, we proved
in \cite{BR} (see Lemma 35) that
\begin{lemma}\label{l:majB}
For $p>1$, we have the inequality
\begin{equation}\label{eq:majB}
2(\gamma^\sharp_p)^{-1}\leq \inf_{0<t<1/2}B(p, t),
\end{equation} where, for $\lambda>1$,
\begin{equation}\label{eq:Bdef}
B(\lambda,t):=\left(\frac{\pi t}{\sin \pi
t}\right)^{\lambda}\left(1+2\sum_{k=1}^{\infty}
\left|\frac{\sin\left(k\pi t\right)}{k\pi
t}\right|^{\lambda}\right).
\end{equation}
\end{lemma}
\medskip
It is clear that $B(\lambda,t)$ is bounded for $\lambda>1$, so
that $\gamma^{\sharp}_p>0$ and there is uniform $p$-concentration:
just take as a bound the value for $t=1/4$. Let us try to get more
precise estimates. The computation of $\inf_{0<t<1/2}B(\lambda,
t)$ can be executed explicitly for $\lambda=2$ and $\lambda= 4$.
In the first case we recognize in the sum the Fourier coefficients
of $\chi_{[-t/2,t/2]}$, whose $L^2$ norm is $\sqrt t$. So
\eqref{eq:majB} leads to the maximization of the function $\frac
{2\sin^2 t}{\pi t}$, and to the estimate $ \gamma^\sharp_2\geq
\sup_{0\leq t}\frac {2\sin^2 t}{\pi t}=0.4613\cdots. $ This is the
formula given by D\'{e}champs-Gondim, Lust-Piquard and
Queff\'{e}lec in \cite{DPQ}. We refer to them for the necessity of
the condition, for which they give a smart proof. For $\lambda=4$,
we recognize in the sum of \eqref{eq:Bdef} the Fourier
coefficients of the convolution product
$\chi_{[-t/2,t/2]}*\chi_{[-t/2,t/2]}$, whose $L^2$ norm is equal
to $(2t^3/3)^{1/2}$. Using the Plancherel formula, we obtain that
\begin{equation}\label{p=4}
\gamma^\sharp_4\geq \max_{0<t<1/2} \frac{3 \left(\sin ^4 (\pi t)
\right)}{\pi^4 t^3 } > 0.495.
\end{equation}
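The bound \eqref{p=4} can be verified by a direct grid search (illustrative only); the maximum is attained near $t\approx 0.269$:

```python
import math

def lower_bound_p4(t):
    """3 sin^4(pi t) / (pi^4 t^3), the quantity maximised in Eq. (p=4)."""
    return 3 * math.sin(math.pi * t) ** 4 / (math.pi ** 4 * t ** 3)

# grid search over 0 < t < 1/2
best_t = max((k / 100000 for k in range(1, 50000)), key=lower_bound_p4)
bound = lower_bound_p4(best_t)  # just above 0.495
```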
For larger integer values of $\lambda$, the computations do not
seem to be easily handled. But we can prove that there exists a
uniform lower bound for $\gamma^\sharp_p$ when $p\geq 6$. To see
this, we need another lemma that can be found in \cite{BR}. Let us
first give new definitions, relative to positive definite
polynomials.
As for idempotents, by the same slight abuse of notation, let us
still denote
\begin{equation}\label{pos-def}
\PP_q^+:=\left\{ \sum_{h\in H}a_he_h ~:~ a_h\geq 0,\,
h\in\{0,\cdots, q-1\} \right\}
\end{equation}
the set of trigonometrical polynomials with non negative
coefficients of degree less than $q$ on $\mathbb T$, with $e_h$ denoting
the exponential adapted to $\mathbb T$. Again, when restricted to
$\mathbb{G}_q$, it coincides with the corresponding positive
definite polynomial with non negative coefficients on $\mathbb Z_q$, and
this defines a one-to-one correspondence between positive definite
polynomials of $\mathbb Z_q$ and positive definite polynomials on $\mathbb T$
of degree less than $q$. The constant $c_p^+$ can then be defined
by
\begin{equation}\label{new-c+} c_p^+:=
\liminf_{q\rightarrow \infty} c_p^+(q), \qquad
c_p^+(q):=\sup_{R\in \PP_q^+} \frac{2\left |R\left(\frac 1q\right)
\right|^p}{ \sum_{k=0}^{q-1} \left|R\left(\frac kq\right)
\right|^p}.
\end{equation}
It is much easier to find positive definite polynomials in
$\PP_q^+$ than idempotents. In particular, whenever $P$ is in
$\PP_q$, then, for each positive integer $L$ the polynomial $Q$,
which has degree less than $q$ and has the same values on
$\mathbb{G}_q$ as $P^L$, is in $\PP_q^+$. So we can take as well
powers of Dirichlet kernels as polynomials $R$ in the right hand
side of \eqref{new-c+}. This leads to the following bounds, using
Lemma \ref{l:majB}.
\begin{eqnarray} 2(c_p^+)^{-1}&\leq&\inf_{L\geq
1}\;\inf_{0<t<1/2}B(Lp, t)\nonumber\\
&\leq & \inf_{\kappa>0}\limsup_{\lambda\to \infty}B\left
(\lambda, \kappa\sqrt{6 /\lambda }\right)\label{bound-above}\\
&\leq & 4.13273.\nonumber\end{eqnarray} The two last estimates
may be found in \cite{BR}, see (55), and lead to
\begin{equation}\label{for+ uniform}
c_p^+> 0.483.
\end{equation}
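The arithmetic behind this uniform bound can be checked in one line (a sketch; the value $4.13273$ is the numerical bound from \cite{BR} quoted above):

```python
# Check: 2 (c_p^+)^{-1} <= 4.13273 rearranges to c_p^+ >= 2/4.13273 > 0.483.
upper = 4.13273        # numerical bound on the right-hand side of (bound-above), from [BR]
lower = 2.0 / upper    # resulting uniform lower bound on c_p^+
print(round(lower, 4))
assert lower > 0.483
```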
The first estimate gives a non-explicit bound for fixed $p$:
\begin{equation}\label{p-fixed}
c_p^+\geq 2\sup _{L\geq
1}\;\sup_{0<t<1/2}B(Lp, t)^{-1}.
\end{equation}
We prove now that we have the same estimates for $\gamma^\sharp_p$
when $p>2$.
\begin{theorem}\label{th:gammapsharpp2} We have $\gamma_p^{\sharp}
> 0.483$ uniformly for all $p>2$.
\end{theorem}
This is a consequence of the following proposition, which is more
general than the corresponding results in \cite{BR}.
\begin{proposition}\label{random} Let $p>2$ and $c>0$, $\varepsilon>0$.
Then there exists $q_0:=q_0(c, \varepsilon)$ such that, if
$q>q_0$ and $P:=\sum_0^{q-1}a_h e_h$ is a polynomial of degree
less than $q$ that satisfies the two conditions
\begin{equation}\label{cond-c}
cq\max_h|a_h| \leq \sum |a_h|\leq c^{-1}|P(1/q)|,
\end{equation}
\begin{equation}
|P(1/q)|\geq c \left( \sum_{k=0}^{q-1}|P(k/q)|^p \right)^{1/p},
\label{concentr}
\end{equation}
then there exists a polynomial $Q$ of degree less than $q$, whose
coefficients are either $a_h/|a_h|$ or $0$, such that
\begin{eqnarray}\label{at-p}
|Q(1/q)|&\geq & (1-\varepsilon) |P(1/q)|,\\
\left(\sum_{k=0}^{q-1}|Q(k/q)-P(k/q)|^p\right)^{1/p} & \leq &
\varepsilon |P(1/q)|.\label{in-mean}
\end{eqnarray}
\end{proposition}
Observe that, for $P$ positive definite, $Q$ is an idempotent. In
this case, since $\sum_h|a_h|=P(0)$, the first condition reduces to
$P(0)\geq cq\max_h|a_h|$: indeed, the inequality $|P(1/q)|\geq c
P(0)$ already follows from the second condition \eqref{concentr}.
Let us take the proposition for granted, and use it in our
context.
\begin{proof}[Proof of Theorem \ref{th:gammapsharpp2}]
Let us take for $P$ a positive-definite polynomial of degree less
than $q$ for which
$$
\frac{2\left |P\left(\frac 1q\right) \right|^p}{ \sum_{k=0}^{q-1}
\left|P\left(\frac kq\right) \right|^p}\geq c_0 >0.483.
$$
We claim that there exists an idempotent $Q$ for which the same
ratio is bounded below by $c_0 C(\varepsilon)$, with
$C(\varepsilon)$ tending to $1$ when $\varepsilon$ tends to $0$.
Indeed, we can apply the proposition as soon as we have proved
that $P$ satisfies the condition \eqref{cond-c} (uniformly for $q$
large). We have seen that $P$ can be taken as the polynomial of
degree less than $q$ which coincides with $D_n^L$ on the grid
$\mathbb{G}_q$, for $n$ chosen in such a way that $n/q\approx
t=\kappa\sqrt{6/\lambda}$ is small enough for us to approach the
extremum in \eqref{bound-above}. Next, it is easy to see that
$P(0)=n^L$, while $|\hat P(k)|\leq Ln^{L-1}$. So \eqref{cond-c}
holds with a very small constant $c$; what is important is that this
constant does not depend on $q$ (for fixed $\varepsilon$). To
conclude the proof, we use Minkowski's inequality together with the
assumption on $P$:
\begin{align*} \left(\sum_{k=0}^{q-1}
\left|Q\left(\frac kq\right) \right|^p\right)^{1/p}&\leq
\left(\sum_{k=0}^{q-1} \left|P\left(\frac kq\right)
\right|^p\right)^{1/p}+ \e
|P(1/q)|\\
&\leq  ((2/c_0)^{1/p}+\e)|P(1/q)|\\
&\leq  (1-\e)^{-1}((2/c_0)^{1/p}+\e)|Q(1/q)|.
\end{align*}
The constant tends to $(2/c_0)^{1/p}$ when $\e$ tends to $0$,
which concludes the proof.
\end{proof}
The same method leads to
\begin{equation}\label{p-fixed-gamma}
\gamma^\sharp_p\geq 2\sup _{L\geq 1}\;\sup_{0<t<1/2}B(Lp, t)^{-1}.
\end{equation}
This finishes the proof of the part of Theorem
\ref{th:concentration} concerning $p>1$, except for the proof of
Proposition \ref{random}, which we do now. It relies on the
construction of random polynomials, which may have an independent
interest.
\begin{proof}[Proof of Proposition \ref{random}] Without loss of
generality we may assume that $\max_h |a_h|=1$. We put
$\alpha_k:=|a_k|$ and $\sigma:=\sum \alpha_k$, so that $0\leq
\alpha_k\leq 1$ and $ cq\leq \sigma\leq c^{-1} |P(1/q)|$. We take
a sequence of independent random variables $X_0,X_1,\dots,X_{q-1}$
that follow the Bernoulli law with parameters $\alpha_0, \alpha_1,
\dots, \alpha_{q-1}$ on some probability space $(\Omega,
\mathcal{A}, \mathbb{P})$ and set
$$
P_\omega:=\sum_0^{q-1}b_h X_h(\omega)e_h
$$
with $b_h:=a_h/|a_h|$ for $a_h\neq 0$, otherwise $b_h=0$. Then the
expectation of $P_\omega$ is equal to $P$. We will prove that
$Q=P_\omega$ satisfies \eqref{at-p} and \eqref{in-mean} with
positive probability. Let us first consider \eqref{at-p}, and
prove that the opposite inequality holds with probability less
than $1/3$ for $q$ large enough. Indeed, one has the inclusion
$$
\{\omega; |P_\omega(1/q)|\leq (1-\varepsilon)|P(1/q)|\}\subset \{\omega;
|P_\omega(1/q)-P(1/q)|>\varepsilon|P(1/q)|\},
$$
so that, by Chebyshev's inequality, using the fact that the variance of
$P_\omega (1/q)$ is $\sum \alpha_k(1-\alpha_k)\leq \sigma$, we
have
$$
\mathbb{P}\left(\left|\frac{P_\omega(1/q)}{P(1/q)}\right| \leq
1-\e\right)\leq c^{-2}\e^{-2}\sigma^{-1}.
$$
By \eqref{cond-c} we know that this quantity is small for $q$
large.
Next, to show \eqref{in-mean}, in view of \eqref{cond-c} it is
sufficient to prove that, with probability at least $2/3$,
$$\sum_{k=0}^{q-1}|P_\omega(k/q)-P(k/q)|^p \leq c^{p}\varepsilon^p
\sigma^p.$$ We claim that there exists some uniform constant
$C_p$, for $p>2$, such that, for each $k$,
\begin{equation}\label{burkh}
\mathbb{E}(|P_\omega(k/q)-P(k/q)|^p)\leq C_p \sigma^{p/2}.
\end{equation}
Let us take this for granted and finish the proof. By Markov's
inequality,
$$\mathbb{P}\left(\sum|P_\omega(k/q)-P(k/q)|^p \geq (c\varepsilon\sigma)^p
\right)\leq c^{-p}\varepsilon^{-p}C_p \,q\, \sigma^{-p/2}.$$ From
this we conclude easily, using the fact that $\sigma\geq cq$ and
$p>2$, so that the right-hand side tends to $0$ as $q$ tends to infinity.
Finally, \eqref{burkh} is a well-known property of independent
sums of Bernoulli variables; a proof of the following lemma can be
found, e.g., in \cite{BR} (Lemma 54).
\begin{lemma}\label{mart-bern}For $p>2$ there exists some constant $C_p$
with the following property. Let $\alpha_k\in [0,1]$ and $b_k\in
\mathbb C$ be arbitrary for $k=0,1,\dots,N$. For $X_k$ a sequence of
independent Bernoulli random variables with parameter $\alpha_k$,
we have
$$
\mathbb{E}\left(\Big|\sum_{k=0}^Nb_k (X_k-\alpha_k)\Big|^{p}\right)\leq
C_p\cdot \max_{k=0,\dots,N} |b_k|^{p} \cdot \Big(1+\sum_{k=0}^N
\alpha_k\Big)^{p/2}.
$$
\end{lemma}
\end{proof}
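The random selection in the proof is easy to simulate. The sketch below uses hypothetical test coefficients $a_h=(1-h/q)e^{-2\pi ih/q}$ (not the paper's extremal polynomials), chosen so that every term of $P(1/q)$ is a positive real; the variance bound $\sum\alpha_k(1-\alpha_k)\le\sigma$ then makes the relative deviation of $P_\omega(1/q)$ of order $q^{-1/2}$:

```python
import cmath
import random

random.seed(0)
q = 2000
# Hypothetical test coefficients (illustrative, not the paper's extremal ones):
# a_h = (1 - h/q) e^{-2 pi i h/q}, so every term of P(1/q) is a positive real.
alpha = [1.0 - h / q for h in range(q)]                     # alpha_h = |a_h|
b = [cmath.exp(-2j * cmath.pi * h / q) for h in range(q)]   # b_h = a_h / |a_h|

def eval_at(coeffs, x):
    """Evaluate sum_h c_h e^{2 pi i h x}."""
    return sum(c * cmath.exp(2j * cmath.pi * h * x) for h, c in enumerate(coeffs))

P_val = eval_at([al * bh for al, bh in zip(alpha, b)], 1.0 / q)   # = (q+1)/2 exactly
# One sample of P_omega: keep the unimodular coefficient b_h with probability alpha_h.
X = [1 if random.random() < al else 0 for al in alpha]
Q_val = eval_at([bh * x for bh, x in zip(b, X)], 1.0 / q)

# Var P_omega(1/q) = sum alpha_h (1 - alpha_h) <= sigma ~ q/2, so the relative
# deviation should be of order q^{-1/2}.
rel_dev = abs(Q_val - P_val) / abs(P_val)
print(rel_dev)
assert rel_dev < 0.1
```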
Of course one would like to know whether the constants are the same
for the classes $\PP_q$ and $\PP_q^+$. We know that this is not the
case for $p=2$, thanks to the work of D\'{e}champs-Gondim,
Lust-Piquard and Queff\'{e}lec, but the last proposition leads one
to conjecture that they coincide for $p>2$. Note that Proposition
\ref{random} holds when \eqref{cond-c} is replaced by the weaker
assumption $\sigma\geq \delta(q)q^{2/p}\max |a_h|$, with $\delta$
tending to infinity with $q$.
\section{Failure of uniform $1$-concentration on $\mathbb Z_q$}\label{sec:proof}
We prove here the negative result of Theorem
\ref{th:concentration}. It will be more convenient, in this
section, to work directly on $\mathbb Z_q$, rather than on the grid $
\mathbb{G}_q$. We restrict ourselves to $q$ prime, which is
sufficient for the negative conclusion.
Assume that
there exist some constant $c$ and some idempotent $f=\sum_{h\in
H}e_h$ such that
\begin{equation}\label{1-conc}
|f(1)| \geq c \sum_{k=0}^{q-1}|f(k)|.
\end{equation}
We claim
that $H$ may be assumed to have cardinality at most $q/2$. Indeed, $H$
is certainly not the whole set $\{0, \cdots, q-1\}$, since the
corresponding idempotent is $q$ times the Dirac mass at $0$.
Moreover, the idempotent $\widetilde{f}$, with spectrum $^c H$,
takes the same absolute values as $f$ outside $0$, while its value
at $0$ is $q-{\mbox{Card }} H$. So, if ${\mbox{Card }} H>q/2$, then $\widetilde{f}$
also satisfies \eqref{1-conc}.
From now on, let $r:= {\mbox{Card }} H \leq q/2$. We have, by assumption
\eqref{1-conc}, $\sum_{k=0}^{q-1}|f(k)|\leq |f(1)|/c \leq f(0)/c
=r/c$. So the function
$$
g:= r^{-1}\left (f-r\delta_0\right)
$$
is $0$ at $0$, has $\ell^1$ norm bounded by $\frac 1c+1$, while
its Fourier coefficients are equal to $1/r-1/q$ ($r$ of them), or
$-1/q$, since the delta function has all Fourier coefficients
equal to $1/q$. But, according to Theorem 1.3 of \cite{GK}, we
should have $q\min_k |\hat g(k)|$ tending to $0$ when $q$ tends to
$\infty$ (note that the Fourier transform here is replaced by the
inverse Fourier transform in \cite{GK}, which is the reason for
multiplication by $q$ compared to the statement given there). This
gives a contradiction, and allows us to conclude that there is no
uniform $1$-concentration. This finishes the proof.
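The Fourier-coefficient computation for $g$ can be verified directly on a small example (a sketch using NumPy, with the normalization $\hat g(h)=\frac1q\sum_k g(k)e^{-2\pi ihk/q}$ and an arbitrary illustrative spectrum $H$):

```python
import numpy as np

q = 101                            # a prime, as in this section
H = np.array([0, 3, 17, 40, 88])   # arbitrary spectrum (illustrative choice)
r = len(H)

k = np.arange(q)
# f(k) = sum_{h in H} e^{2 pi i h k/q} on Z_q, and g = (f - r*delta_0)/r.
f = np.exp(2j * np.pi * np.outer(k, H) / q).sum(axis=1)
delta0 = np.zeros(q)
delta0[0] = 1.0
g = (f - r * delta0) / r
assert abs(g[0]) < 1e-10           # g vanishes at 0

ghat = np.fft.fft(g) / q           # hat g(h) = (1/q) sum_k g(k) e^{-2 pi i h k/q}
expected = np.full(q, -1.0 / q, dtype=complex)
expected[H] = 1.0 / r - 1.0 / q
assert np.allclose(ghat, expected)  # r coefficients are 1/r - 1/q, the rest are -1/q
print("coefficients check out")
```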
\bigskip
We leave the following as an open question.
\begin{problem} In line with Definition \ref{def:gammapsharp},
for given fixed $q$ denote
$\gamma^{\sharp}_1(q):=\max\limits_{f\in \PP_q}
2|f(1)|/\sum_{k=0}^{q-1} |f(k)|$. Determine
$\beta:=\liminf\limits_{q\to\infty} \log (1/\gamma^{\sharp}_1(q))/
\log\log q$.
\end{problem}
Using the full strength of the result of \cite{GK}, the constant
$c$ in the proof of Theorem \ref{th:concentration} may be chosen
uniformly bounded from below in $q$ by $\log^{-\alpha}q$, with
$\alpha$ less than $1/3$ (that is, the proof by contradiction
shows that $c>\log^{-\alpha} q$ is not possible, hence $\beta \geq
1/3$). On the other hand, the Dirichlet kernel exhibits
$\gamma_1^{\sharp}(q)\geq C / \log q$, i.e.\ $\beta\leq 1$. This
leaves open the question whether $\beta$ equals $1$, i.e.\ whether
$\log(1/\gamma_1^{\sharp}(q))/\log\log q$ can be made arbitrarily
close to $1$. The problem is related to the Littlewood conjecture
on the groups $\mathbb Z_q$, for which there have been recent
improvements by Sanders \cite{S}.
\section{$2$-concentration on measurable sets}
We prove in this section that $\gamma_2\geq \gamma_2^\sharp$. The
converse inequality follows from the fact that the constant for
measurable sets is at most the constant obtained when restricting
to open sets, which is $\gamma^\sharp _2$, whose explicit value is
given by \eqref{case2-q}. In this section we shall essentially use
the method of Anderson et al. \cite{Many}; our improvements are
mainly expository. The method is valid for all $p>1$, and we
present it in this generality, even though better results can be
obtained for $p\neq 2$: it will then be easier to explain later how
to improve upon this first method.
So we are going to prove the following proposition.
\begin{proposition}\label{p:comparison} For $p>1$, we have
$$\gamma_p\geq \gamma^\sharp _p.$$
\end{proposition}
\begin{proof}[Proof]
We are given an arbitrary symmetric measurable set $E$ with $|E|>0$.
We want to find some idempotent $f$ that concentrates on $E$. We
will use a variant of Khintchine's Theorem in Diophantine
approximation, which we summarize in the next lemma (Proposition
36 in \cite{BR}).
\begin{lemma}\label{l:grid}Let $E$ be a measurable set
of positive measure in $\mathbb T$. For all $\theta>0$, $\eta>0$ and
$q_0\in \mathbb{N}$, there exists an irreducible fraction $a/q$
such that $q>q_0$ and
\begin{equation}\label{khint}
\left|\left (\frac aq-\frac{\theta}{q^2},\frac aq
+\frac{\theta}{q^2}\right)\cap E\right|\geq (1-\eta)\frac{2\theta}
{q^2}.
\end{equation}
Moreover, given a positive integer $\nu$, it is possible to choose
$q$ such that $(\nu, q)=1$.
\end{lemma}
The parameter $\theta$ will play no role for the moment, so we set
it equal to $1$; it will become necessary only later, for
generalizations. We consider the grid $ \mathbb{G}_q:= \{ k/q; k=0, 1,
\cdots, q-1\}$ contained in the torus, for $a$ and $q$ given by
Lemma \ref{l:grid}, for given values of $\eta$ and $q_0$ to be
fixed later on. We assume that $q$ is sufficiently large so that
we can find $R\in \PP_q$ with the property that
\begin{equation}\label{given}
2|R(a/q)|^p\geq c \sum_{k=0}^{q-1}|R(k/q)|^p,
\end{equation}
with $\e>0$ chosen arbitrarily small and $c>\gamma_p^\sharp-\e$.
When $a=1$, the existence of such an $R$ follows from the
definition of $\gamma_p^\sharp$. See Remark \ref{not-prime} for
the fact that we can replace $1$ by $a$ whenever $a$ and $q$ are
co-prime. We then claim that the polynomial $Q(t):=R(t) D_n(qt)$,
which is an idempotent, is such that
$$
\int_E |Q|^p \geq c \kappa(\varepsilon)\int_\mathbb T |Q|^p,
$$
with $\kappa(\varepsilon)<1$ tending to $1$ when $\varepsilon$
tends to $0$, and parameters $\eta$ and $n$ are chosen suitably
depending on $\e$.
The idea of the proof goes as follows. Since $D_n$ concentrates
the $L^p$ norm near $0$ (it can be concentrated on any subset $F$
of the interval $\left(-\frac{1}{q}, +\frac{1}{q}\right)$ with
$|F|>2(1-\eta)/q$), the function $D_n(qt)$ concentrates equally on
the $q$ intervals around the points of the grid $\mathbb G_q$. We
take $F$ such that $qt$ belongs to $F$ whenever $t$ belongs to
$\left(\frac aq-\frac{\theta}{q^2},\frac aq
+\frac{\theta}{q^2}\right)\cap E$. Multiplication by $R$ then
concentrates the integral on the interval around $a/q$, as desired.
We also need to know that the polynomial $R$ is almost constant on
each of these intervals, which is given by Bernstein's Theorem.
Let us now enter into details. We have the following lemma on
Dirichlet kernels.
\begin{lemma}\label{dirichlet}
Let $p>1$. For $\e$ given, one can find $\eta>0$ and $\delta_0>0$
such that, for all $0<\delta<\delta_0$ , if $F$ is a measurable
subset of $(-\delta, +\delta)\subset \mathbb T$ of measure larger than
$2\delta(1-\eta)$, we can find some suitable $n\in\mathbb N$ so that
$$\int_F |D_n |^p\geq (1-\e)\int_\mathbb T |D_n|^p.$$
\end{lemma}
\begin{proof}[Proof] It is well known that $\int_\mathbb T |D_n|^p\geq
\kappa_p n^{p-1}$ (see \cite{Many} for instance for precise
estimates). So it is sufficient to prove that we can obtain
$$ \int_{^cF} |D_n |^p\leq \varepsilon n^{p-1}.$$
This is a consequence of the fact that $$\int_{(-\delta,
+\delta)\setminus F} |D_n|^p\leq 2n^p\eta \delta,$$ while
$$\int_{\mathbb T\setminus(-\delta, +\delta)} |D_n|^p\leq \left(\frac \pi
2\right)^p\int_{|t|>\delta} t^{-p}dt= \kappa'_p \delta^{1-p}.$$ We
choose for $n$ the smallest integer larger than $
(2\kappa'_p/\e)^{1/(p-1)}\delta^{-1}$, and $\eta$ such that
$8(2\kappa'_p/\e)^{1/(p-1)}\eta=\e$.
We remark that here we did not need the flexibility provided by the
parameter $\delta_0$; it is there for later generalizations.
\end{proof}
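The statement of the lemma is easy to observe numerically. The sketch below uses illustrative parameters $p=4$, $\delta=0.05$, $n=400$, the analytic Dirichlet kernel $D_n(t)=\sum_{k=0}^{n-1}e^{2\pi ikt}$, for which $|D_n(t)|=|\sin(\pi nt)/\sin(\pi t)|$, and takes $F$ to be the whole interval $(-\delta,\delta)$:

```python
import math

def abs_dirichlet(t, n):
    """|D_n(t)| for D_n(t) = sum_{k=0}^{n-1} e^{2 pi i k t}."""
    s = math.sin(math.pi * t)
    if abs(s) < 1e-12:
        return float(n)
    return abs(math.sin(math.pi * n * t) / s)

p, delta, n = 4, 0.05, 400    # illustrative choices; the lemma needs n large compared to 1/delta
N = 200_000                   # Riemann-sum resolution on the torus [-1/2, 1/2)
pts = [(j + 0.5) / N - 0.5 for j in range(N)]
vals = [abs_dirichlet(t, n) ** p for t in pts]
total = sum(vals) / N
inside = sum(v for t, v in zip(pts, vals) if abs(t) < delta) / N

ratio = inside / total
print(round(ratio, 6))
assert ratio > 0.95           # nearly all of the L^p mass of D_n lies in (-delta, delta)
```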
Next we recall classical Bernstein and Marcinkiewicz--Zygmund type
inequalities, in the forms tailored to our needs and proved in
\cite{BR}, Lemma 41. Recall that here polynomials are Taylor polynomials,
that is, trigonometric polynomials with only nonnegative
frequencies, which is the case for the polynomial $R$.
\begin{lemma}\label{l:Bernstein}
For $1< p<\infty$ there exists a constant $K_p$ such that, for $P$
a polynomial of degree less than $q$ and for $|t|<1/2$, we have
the two inequalities
\begin{equation}\label{maj-grid}
\sum_{k=0}^{q-1}|P( t+k/q)|^p\leq K_p \sum_{k=0}^{q-1}|P(k/q)|^p,
\end{equation}
\begin{equation}\label{maj-grid1}
\sum_{k=0}^{q-1}\left||P(t+k/q)|^p-|P(k/q)|^p \right| \leq K_p
|qt| \sum_{k=0}^{q-1}|P(k/q )|^p.
\end{equation}
\end{lemma}
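For $p=2$, inequality \eqref{maj-grid} is in fact an identity with constant $1$: if $\deg P<q$, orthogonality of the exponentials $e_h$ on any shifted grid gives $\sum_{k}|P(t+k/q)|^2=q\sum_h|a_h|^2$ for every $t$. A quick numerical confirmation (a sketch, with an arbitrary illustrative idempotent):

```python
import cmath

q = 64
spectrum = [0, 1, 5, 12, 33, 50]   # arbitrary subset of {0, ..., q-1} (illustrative)

def grid_sum(t, p=2):
    """sum_{k=0}^{q-1} |P(t + k/q)|^p for the idempotent P with the given spectrum."""
    total = 0.0
    for k in range(q):
        x = t + k / q
        total += abs(sum(cmath.exp(2j * cmath.pi * h * x) for h in spectrum)) ** p
    return total

# For p = 2 and deg P < q, the shifted-grid sum is independent of t:
# it equals q times the number of nonzero (here: unit) coefficients.
vals = [grid_sum(t) for t in (0.0, 0.1, 0.237, 0.49)]
print(vals)
assert all(abs(v - q * len(spectrum)) < 1e-8 for v in vals)
```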
For our polynomial $R$, this gives the inequality
\begin{equation}\label{bernstein-a}
\left||R(t)|^p-|R(a/q)|^p\right|\leq 2c^{-1}K_p\, q\left|t-\tfrac aq\right|\, |R(a/q)|^p.
\end{equation}
This implies that, for $|t-\frac aq|<\frac \theta {q^2}$ with $q$
large enough,
\begin{equation}\label{inequality-1}
|R(t)|^p\geq (1-\e)|R(a/q)|^p.
\end{equation}
We have also, for $|t|<\frac\theta {q^2}$, that
$$
\sum_{k=0}^{q-1}|R(t+k/q)|^p \leq \sum_{k=0}^{q-1}|R(k/q)|^p
+2K_p\frac \theta q c^{-1}|R(a/q)|^p
$$
which leads to the inequality, valid for $|t|<\frac\theta {q^2}$
for $q$ large enough,
\begin{equation}\label{inequality-2}
\sum_{k=0}^{q-1}|R(t+k/q)|^p \leq 2c^{-1}(1+\e)|R(a/q)|^p.
\end{equation}
Let us finally remark that \eqref{maj-grid} leads to the
following, valid for all $t$.
\begin{equation}\label{inequality-3} \sum_{k=0}^{q-1}|R(t+k/q)|^p
\leq 2 c^{-1}K_p|R(a/q)|^p.
\end{equation}
We can now proceed to the proof of the required inequality for
$R$. We have fixed $\e$ and chosen $q_0$ large enough so that
estimates \eqref{inequality-1} and \eqref{inequality-2} hold
(recall that for the moment $\theta=1$). Then we use Lemma
\ref{l:grid}, which fixes some $a/q$, and find $D_n$, which is
assumed to be adapted to $\delta:=\frac{\theta}q$. We denote
$\tau^p:=\int_{\mathbb T}|D_n|^p$ and $I:= \left(\frac
aq-\frac{\theta}{q^2},\frac aq +\frac{\theta}{q^2}\right)$.
\begin{align}\notag
\frac 12 \int_E |Q|^p \geq \int_{I\cap E} |R|^p|D_n|^p & \geq (1-
\varepsilon)|R(a/q)|^p \int_{I\cap E} |D_n(qt)|^pdt \\
& \geq \frac{1}{q}(1-\varepsilon)|R(a/q)|^p
~~ \int_{F\cap (-\delta, +\delta)} |D_n|^p\notag \\
& \geq \frac{(1-\varepsilon)^2\tau^p}{q}|R(a/q)|^p. \label{ineq-3}
\end{align}
Here $F$ is the pre-image by $t\mapsto qt$ of $I\cap E$, which
has measure at least $2(1-\eta)\delta$, and so concentrates the
integral of $|D_n|^p$.
Let us now look for a bound on the whole integral. We write
$$\int_\mathbb T |Q|^p = \int_{-1/q}^{1/q}\left(\sum_k|R(t+\frac
kq)|^p\right)|D_n(qt)|^p dt$$ and cut the integral into two parts,
according to whether $|t|\leq \frac \theta{q^2}$ or not. For
the first part we use \eqref{inequality-2}, for the second one
\eqref{inequality-3}. We recall that the integral of $|D_n|^p$ outside
the interval $(-\theta/q, \theta/q)$ is bounded by $\e \tau^p$.
Finally
\begin{eqnarray*}
\int_\mathbb T |Q|^p
& \leq &2c^{-1}\,\frac{1+\varepsilon}{q}|R(a/q)|^p
~~ \int_{\mathbb T} |D_n|^p +2c^{-1} K_p\,\cdot \,\frac{\varepsilon}{q} |R(a/q)|^p \int_{\mathbb T} |D_n|^p \\
& \leq &2c^{-1}\,\frac{(1+C\varepsilon)\tau^p}{q}|R(a/q)|^p.
\end{eqnarray*}
We conclude by comparison with \eqref{ineq-3}.
\end{proof}
As said above, we have obtained the optimal result for $p=2$. At this
point, we can see how the results can be improved for $p\neq 2$. The
main point is the possibility of replacing the Dirichlet kernel
$D_n$ by an idempotent $T$ which satisfies nearly the same
properties as those of the Dirichlet kernel summarized in Lemma
\ref{dirichlet}, but has the additional property of having
arbitrarily large gaps. More precisely, we say that {\sl $T$ has
gaps larger than $N$} if $0<|k-k'|\leq N$ implies that one of the
two Fourier coefficients $\hat T(k)$ and $\hat T(k')$ is zero. We
state the existence of such idempotents $T$ as a lemma, and refer
to \cite{BR} for their construction.
\begin{lemma}\label{l:peak-meas}
Let $p>0$ be different from $2$. Then for every $\varepsilon>0$ there
exist $\delta_0>0$ and $\eta>0$ such that, for all
$\delta<\delta_0$ and $N\in \mathbb{N}$, if $E$ is a measurable
set that satisfies, for $\alpha=0$, the assumption $|E\cap [
\alpha-\delta, \alpha+\delta]|>2(1-\eta)\delta$, then there exists
an idempotent $T$ with gaps larger than $N$ such that
$$\int_{E\cap
[\alpha-\delta, \alpha+\delta]} |T|^p>(1-\varepsilon)\int_0^1
|T|^p.$$ Moreover, if $p$ is not an even integer, this is also
valid for $\alpha=1/2$.
\end{lemma}
For the moment we use this lemma with $\alpha=0$. We are no longer
restricted to polynomials of degree less than $q$ in
order that $R(t)T(qt)$ be an idempotent: it is sufficient that the
degree of $R$ be less than $Nq$, and, since $N$ is arbitrary, this
gives essentially no constraint. The fact that $R$ has degree less
than $q$ was also used for \eqref{inequality-1} and
\eqref{inequality-2}. This is where the flexibility given by the
parameter $\theta$ can be used: if $R$ has degree less than $q^2$,
then roughly speaking we can still use Bernstein's inequality, but
$\theta/q$ has to be replaced by $\theta$ in \eqref{bernstein-a}.
This is of no inconvenience, since $\theta$ can be chosen
arbitrarily small.
At this point, we could proceed with a polynomial of degree less
than $q^2$ for \eqref{inequality-1}, but certainly not for Lemma
\ref{l:Bernstein}, since such a polynomial can be identically $0$
on the grid $\mathbb{G}_q$. To develop such inequalities for
polynomials $S$ of degree larger than $q$, we restrict ourselves to
those that can be written as $S(t):=R(t)R((q+1)t)$, with $R$ an
idempotent that satisfies \eqref{given}, but with $2p$ in place of
$p$ (so that the condition on $p$ is now $p>1/2$). The important
point is that $S$ is also an idempotent, and so is $ST$ if $T$ has
sufficiently large gaps. Also, $|S(k/q)|^p =|R(k/q)|^{2p}$ at each
point of the grid, and in particular at $a/q$. Moreover, it is
easy to see that, for $\theta$ small enough, one still has the
inequalities \eqref{inequality-1}, \eqref{inequality-2} and
\eqref{inequality-3} with $2p$ in place of $p$, both for the
polynomial $R(t)$ and for $R((q+1)t)$ (for this last one we have to
choose $\theta$ small enough, as mentioned earlier). The fact
that \eqref{inequality-1}, \eqref{inequality-2} and
\eqref{inequality-3} are valid for $S$ then follows from the
Cauchy--Schwarz inequality. The rest of the proof goes along the
same lines as the previous one and leads to the following
proposition, whose details we leave to the reader.
\begin{proposition} \label{2pgivesp} One has $p$-concentration for $p>1/2$, and,
for $p\neq 2$, one has the inequality $\gamma_p\geq
\gamma_{2p}^\sharp$. In particular $\gamma_1\geq
\gamma_2^{\sharp}$.
\end{proposition}
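The identity $|S(k/q)|^p=|R(k/q)|^{2p}$ used above holds because $(q+1)\frac kq=\frac kq+k$ is congruent to $\frac kq$ modulo $1$, so $S(k/q)=R(k/q)^2$ on the grid. A numerical sketch (with an arbitrary illustrative spectrum):

```python
import cmath

q = 50
spectrum = [0, 2, 7, 19, 31]   # arbitrary subset of {0, ..., q-1} (illustrative)

def R(x):
    """An idempotent: sum of exponentials over the chosen spectrum."""
    return sum(cmath.exp(2j * cmath.pi * h * x) for h in spectrum)

# On the grid G_q, (q+1)*(k/q) differs from k/q by the integer k, so
# R((q+1)t) coincides with R(t) there, and S(k/q) = R(k/q)^2.
for k in range(q):
    t = k / q
    S = R(t) * R((q + 1) * t)
    assert abs(S - R(t) ** 2) < 1e-6
print("S = R^2 on the grid")
```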
We could as well have taken $S=R_1R_2$ and used H\"older's
Inequality, taking $R_1$ approaching the maximum concentration on
the grid for the exponent $r$ and $R_2$ approaching the maximum
concentration on the grid for the exponent $s$, with $\frac
pr+\frac ps=1$. This leads to the following generalization of the
last proposition.
\begin{proposition} \label{with-holder} One has $p$-concentration for $p>1/2$, and,
for $p\neq 2$, one has the inequality $\gamma_p\geq
\left(\gamma_{r}^\sharp\right)^{p/r}\left(\gamma_{s}^\sharp\right)^{p/s}$
for all $r>p$ and $s>p$ such that $\frac pr+\frac ps=1$.
\end{proposition}
Before concluding this section, let us make a last observation.
Once we use an idempotent $T$ with arbitrarily large gaps, it is
not difficult to arrange that the concentrating idempotent itself
has arbitrarily large gaps. It is sufficient to start from the
polynomial $R(\nu t)$, with $\nu$ arbitrarily large. Recall that,
when using Lemma \ref{l:grid}, we can take $q$ such that
$(\nu,q)=1$. This means that there exists $b$ (mod $q$) such that
$\nu a\equiv b$ (mod $q$), and we choose $R$ satisfying
\eqref{given}, but with $b/q$ in place of $a/q$. The rest of the
proof can be adapted. We state this as a proposition.
\begin{proposition} In Proposition \ref{p:comparison} and Proposition
\ref{with-holder}, when $p\neq 2$, we can have arbitrarily large
gaps. That is, when $1/2<p\neq 2$, given a symmetric measurable
set $E$ of positive measure, and any constant $c<\gamma_p^\sharp$
(resp.
$\left(\gamma_{r}^\sharp\right)^{p/r}\left(\gamma_{s}^\sharp\right)^{p/s}$),
there exists an idempotent $P$ with arbitrarily large gaps such
that
$$
\int_E |P|^p>c \int_{\mathbb T} |P|^p.
$$
\end{proposition}
\section{Improvement of constants for $p$ not an even integer}
We proved in \cite{BR} that $\gamma_p=1$ for $p>1$ not an even
integer. Let us recall the main lines of the proof, which will be
used again to improve the constant for $p=1$. As we shall see, the
argument has been slightly simplified compared to the proof in
\cite{BR}. The main ingredient is the existence of idempotents
that concentrate like the Dirichlet kernels, but with arbitrarily
large gaps, and at $1/2$ instead of $0$. We have already stated
this in Lemma \ref{l:peak-meas}.
\medskip
If we take such a peaking function $T$, then $T(qx)$ concentrates
around the points of the translated grid
\begin{equation}\label{def:gridhalf} \mathbb{G}_q^\star:=\frac 1{2q}
+ \mathbb{G}_q = \left\{\frac {2k+1}{2q}\ \ ;\ \ k=0, \cdots,
q-1\right\}.
\end{equation}
We have gained considerably with this new grid compared to
$\mathbb{G}_q$: the point $0$ -- where, by positive definiteness,
any idempotent must attain its maximal value -- no longer belongs
to the grid. We will thus even be able to find idempotents $P$ such
that the maximal value of $|P|$ over the grid is attained at the
points $\pm 1/(2q)$ and, moreover, the sum of the values of $|P|^p$
on $\mathbb{G}_q^\star$ is only slightly larger than
$2|P(1/(2q))|^p$.
Let us interpret the new constants that we will introduce in terms
of another concentration problem on a finite group. More
precisely, we view $\mathbb{G}_q^\star$ as
$\mathbb{G}_{2q}\setminus \mathbb{G}_q$, and identify
$\mathbb{G}_{2q}$ with $ \mathbb Z_{2q}$, while $ \mathbb{G}_{q}^*$
identifies with a coset. Recall that the idempotents on $
\mathbb Z_{2q}$ are identified with polynomials in $\PP_{2q}$. We are
interested in relative concentration inside the coset, and give
the following definition.
\begin{definition}\label{def:Gammap-star} We define
\begin{equation}
\Gamma_p^\star:= \sup_{K<\infty} \liminf_{q\rightarrow \infty}
\Gamma_p^\star(q,K), \end{equation} where $\Gamma_p^\star(q,K)$ is
the maximum of all constants $\gamma$ for which there exists $R\in
\PP_{2q}$ satisfying
\begin{eqnarray}
2\left|R\left(\frac 1{2q}\right) \right|^p&\geq & \gamma
\sum_{k=0}^{q-1} \left|R\left(\frac {2k+1}{2q}\right) \right|^p \label{Gam-star} \\
2\left|R\left(\frac 1{2q}\right) \right|^p&\geq & \gamma K^{-1}
\sum_{k=0}^{q-1}\left|R\left(\frac{k}{q}\right)\right|^p.
\label{cond-K}
\end{eqnarray}
\end{definition}
In other words, $\Gamma_p^\star$ is positive when there is
uniform concentration at $1/(2q)$ (which is the case for $p>1$),
but the grids $\mathbb{G}_q$ and $\mathbb{G}_q^\star$ do not play
the same role: the constant $\Gamma_p^{\star}$ measures only the
relative concentration on $\mathbb{G}_q^{\star}$, which we try to
maximize.
\begin{remark}\label{not-prime-bis} We can also replace $1$ by $2a+1$ in the left-hand side of
\eqref{Gam-star} when $q$ is any integer, provided $2a+1$ and $2q$
are co-prime.
\end{remark}
This is the analogue of Remark \ref{not-prime}: multiplication
by $b$, where $b(2a+1)\equiv 1$ modulo $2q$, sends $1$ to
$2a+1$ and defines a bijection on $\mathbb{G}_q^\star$ (resp.
$\mathbb{G}_q$).
\medskip
Lower bounds for $\Gamma_{p}^\star $ are given in the lemma below,
which is a slight modification of Lemma 34 in \cite{BR}.
\begin{lemma}\label{l:majA}
For $p>1$, we have the inequality
\begin{equation}\label{eq:majA}
\frac1{\Gamma^\star_p} \leq \inf_{0<t<1/2}A(p, t),
\end{equation}
where, for $\lambda>1$,
\begin{equation}\label{eq:Alimit}
A(\lambda, t):=\frac 1 {\left(\sin(\pi t) \right)^{\lambda}}
\sum_{k=0}^{\infty}\left|\frac{\sin\left((2k+1)\pi t\right)}
{2k+1}\right|^{\lambda}.
\end{equation}
\end{lemma}
The inequality is obtained by taking Dirichlet kernels $D_n$,
with $n/2q$ tending to $t$, a point that will be used later on.
Observe that $A(\lambda, t)$ tends to $\infty$ when $t$ tends to
$0$, so that the infimum is obtained away from $0$. The uniformity
in the second inequality \eqref{cond-K} is given by a bound of (a
small modification of) $B(\lambda, t)$ defined in \eqref{eq:Bdef},
for which we have the inequality
\begin{equation}\label{K-above}
B(\lambda, t)\leq \left (\frac \pi 2\right)^\lambda +2 \left
(\sum_k k^{-\lambda}\right)t^{-\lambda}.
\end{equation}
Observe that (for fixed $t$) $A(\lambda,t)$, and hence also
$\inf_{0<t<1/2} A(\lambda,t)$, are decreasing functions of
$\lambda$. In \cite{BR}, recognizing the Fourier coefficients (at
$k$ and $-k$) of the function $\frac {\pi}2 \left
(\chi_{[-t/2,t/2]}(x)-\chi_{[-t/2,t/2]}(x-1/2)\right)$, we used the
Plancherel formula to calculate
\begin{equation}\label{exact}
A(2,t)=\frac {\pi^2 t}{4\sin^2(\pi t)}.
\end{equation}
Substituting $x=\pi t$ and recalling \eqref{case2} we find that
$$
\Gamma^\star_2\geq 2\gamma_2 \approx 0.9226.
$$
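Both the closed form \eqref{exact} and the numerical value $2\gamma_2\approx 0.9226$ can be checked by direct computation (a sketch; the truncation level and grid are ad hoc). The infimum of $A(2,t)=\pi^2t/(4\sin^2\pi t)$ over $(0,1/2)$ is attained where $\tan(\pi t)=2\pi t$, i.e.\ near $t\approx 0.371$:

```python
import math

def A(lam, t, terms=100_000):
    """Truncation of the series A(lambda, t) from (eq:Alimit)."""
    s = sum(abs(math.sin((2 * k + 1) * math.pi * t) / (2 * k + 1)) ** lam
            for k in range(terms))
    return s / math.sin(math.pi * t) ** lam

def A2_closed(t):
    """Closed form (exact): A(2, t) = pi^2 t / (4 sin^2(pi t))."""
    return math.pi ** 2 * t / (4 * math.sin(math.pi * t) ** 2)

# The truncated series agrees with the closed form at sample points...
for t in (0.1, 0.25, 0.37):
    assert abs(A(2, t) - A2_closed(t)) < 1e-3

# ...and minimizing A(2, .) over (0, 1/2) recovers the stated constant:
# min A(2, t) ~ 1.0839, so the resulting lower bound for Gamma*_2 is ~ 0.9226.
m = min(A2_closed(j / 10000) for j in range(1, 5000))
print(round(1 / m, 4))
assert abs(1 / m - 0.9226) < 1e-3
```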
Moreover, it is easy to see that $ \inf_{0<t<1/2} A(\lambda,t)$ is
left continuous in $\lambda$ at $2$, so that
\begin{equation}\label{eq:c2below}
\liminf_{p\to 2-0} \Gamma_p^\star \geq 2\gamma_2.
\end{equation}
Our main estimate for $\Gamma_p^\star$ is the following.
\begin{proposition}\label{optimal}
For $p>2$ we have $\Gamma_p^\star = 1$.
\end{proposition}
We postpone the proof of this proposition and first show how to use it.
We need an adaptation of the Khintchine-type theorem
used in the last section. The next lemma relies on the inhomogeneous
extension of Khintchine's Diophantine approximation theorem, first
proved by Sz\"usz \cite{Sz} and later generalized by Schmidt
\cite{Sch}. This is Proposition 37 of \cite{BR}.
\begin{lemma}\label{l:grid-half}Let $E$ be a measurable set
of positive measure in $\mathbb T$. For all $\theta>0$, $\eta>0$ and
$q_0\in \mathbb{N}$, there exists an irreducible fraction
$(2k+1)/(2q)$ such that $q>q_0$ and
\begin{equation}\label{szusz}
\left|\left [\frac {2k+1}{2q}-\frac{\theta}{q^2},\frac {2k+1}{2q}
+\frac{\theta}{q^2}\right]\cap E\right|\geq (1-\eta)\frac{2\theta}
{q^2}.
\end{equation}
Moreover, given a positive integer $\nu$, it is possible to choose
$q$ such that $(\nu, q)=1$.
\end{lemma}
Our main result is the following.
\begin{theorem}\label{Star}
For $p$ not an even integer, one has the inequalities
$\gamma_p\geq \Gamma_{p}^\star $ and $\gamma_p\geq
\left(\Gamma_{r}^\star\right)^{p/r}\left(\Gamma_{s}^\star\right)^{p/s}$
for all $r>p$ and $s>p$ such that $\frac pr+\frac ps=1$. Moreover,
given a symmetric measurable set $E$ of positive measure, and any
constant $c<\Gamma_p^\star$ (resp.
$\left(\Gamma_{r}^\star\right)^{p/r}\left(\Gamma_{s}^\star\right)^{p/s}$),
there exists an idempotent $P$ with arbitrarily large gaps such
that
$$
\int_E |P|^p>c \int_{\mathbb T} |P|^p.
$$
\end{theorem}
\begin{proof}[Proof]
We shall first prove the inequality $\gamma_p\geq
\Gamma_{p}^\star $. We will then show how to modify the proof for
the other statements.
We are given a symmetric measurable set $E$. We consider the grid
$ \mathbb{G}_q^\star = \mathbb{G}_{2q}\setminus \mathbb{G}_q$
contained in the torus, with the fraction $(2a+1)/(2q)$ and $q$
given by Lemma \ref{l:grid-half}. At this point we have already
fixed some $\e>0$. The values of $q_0$, $\eta$ and $\theta$ are also fixed,
but we will say how to choose them later on. We assume that $q$ is
sufficiently large so that we can find $R\in \PP_{2q}$ with the
property that
\begin{equation}\label{given-star}
2|R(\frac 1{2q}+\frac aq)|^p\geq c
\sum_{k=0}^{q-1}|R(\frac 1{2q}+\frac kq)|^p,
\end{equation}
with $c>(1-\e)\Gamma_p^\star$. Moreover we can assume that
\begin{equation}\label{grid-star}
\sum_{k=0}^{q-1}|R(\frac kq)|^p\leq 2Kc^{-1}|R(\frac 1{2q}+\frac
aq)|^p
\end{equation}
for some uniform constant $K$. The existence of such an $R$ is
given by Definition \ref{def:Gammap-star} and by the remark just
after it. Once $R$ is chosen, we choose a peaking function $T$ at $1/2$
for the value $\e$. We now assume that $\eta$ has been chosen
sufficiently small for the existence of such a function $T$, built
for $\delta:=\theta/q^2$, which is possible if $\theta
q_0^{-2}\leq \delta_0$.
We choose the idempotent $Q(t):=R(t)T(qt)$ (indeed it is an
idempotent if $T$ has sufficiently large gaps) and fix $I:=
\left(\frac {2a+1}{2q}-\frac{\theta}{q^2},\frac {2a+1}{2q}
+\frac{\theta}{q^2}\right)$. We also put
$\tau^p:=\int_{\mathbb T}|T|^p$. From this point on, the proof follows
the same lines as the proof of Proposition \ref{p:comparison}. We
have the inequality
\begin{align*}
\frac 12 \int_E |Q|^p \geq \int_{I\cap E} |R|^p|T|^p & \geq (1-
\varepsilon)|R((2a+1)/(2q))|^p \int_{I\cap E} |T(qt)|^pdt \\
& \geq \frac{1}{q}(1-\varepsilon)|R((2a+1)/(2q))|^p
~~ \int_{F\cap (-\delta, +\delta)} |T|^p \\
& \geq \frac{(1-\varepsilon)^2\tau^p}{q}|R((2a+1)/(2q))|^p.
\end{align*}
We have used that the pre-image $F$ of $I\cap E$ by $t\mapsto qt$
has measure at least $2(1-\eta)\delta$, and concentrates the
integral of $|T|^p$ at $1/2$. We have also used the inequality,
\begin{equation}\label{inequ-1}
|R(t)|^p\geq (1-\e)|R(\frac {2a+1}{2q})|^p,
\end{equation}
valid for $|t-\frac {2a+1}{2q}|<\frac \theta {q}$ with $\theta$
small enough. This is an easy consequence of Lemma
\ref{l:Bernstein} for polynomials of degree less than $2q$, since the sum of
the values of $|R|^p$ on the whole grid $\mathbb{G}_{2q}$ is bounded
by $2c^{-1}(K+1)$ times its value at $(2a+1)/(2q)$. It suffices to take
$\theta$ small enough (we fix $\theta$ in such a way that this is
valid).
Before going on, let us remark that the other two basic
inequalities can be deduced from Lemma \ref{l:peak-meas}. First,
for $|t-\frac {2a+1}{2q}|<\frac \theta {q}$ with $\theta$ small
enough, we have also
\begin{equation}\label{inequ-2}
\sum_{k=0}^{q-1}|R(t+\frac {k}{q})|^p \leq 2 c^{-1}(1+\e)|R(\frac
{2a+1}{2q})|^p.
\end{equation}
Finally, for all $t$, we have, for some constant $\kappa$,
\begin{equation}\label{inequ-3} \sum_{k=0}^{q-1}|R(t+\frac kq)|^p
\leq \kappa|R(\frac {2a+1}{2q})|^p.
\end{equation}
Here we can take $\kappa:=2c^{-1} K_p(K+1)$. Next we look for a
bound on the whole integral
$$\int_\mathbb T |Q|^p = \int_{0}^{1/q}\left(\sum_k|R(t+\frac
kq)|^p\right)|T(qt)|^p dt$$ and cut the integral into two parts,
according to whether $|t-\frac 1{2q}|\leq \frac{\theta}{q}$
holds or not. For the first part we use \eqref{inequ-2}, for the second
one \eqref{inequ-3}. We recall that the integral of $T$ outside
the interval $(\frac 12-\frac \theta q, \frac 12+\frac \theta q)$
is bounded by $\e \tau^p$.
\begin{eqnarray*}
\int_\mathbb T |Q|^p
& \leq & 2c^{-1}\,\frac{1+\varepsilon}{q}|R(\frac {2a+1}{2q})|^p
~~ \tau^p
+\kappa\frac{\varepsilon}{q} |R(\frac {2a+1}{2q})|^p \tau^p \\
& \leq & 2c^{-1}\,\frac{(1+C\varepsilon)\tau^p}{q}|R(\frac
{2a+1}{2q})|^p.
\end{eqnarray*}
We conclude by comparing with the integral on $E$, which settles the
first case, $\gamma_p\geq \Gamma_{p}^\star$.
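Explicitly, combining the lower bound on $\int_E|Q|^p$ with the upper
bound just obtained, and recalling that $c>(1-\e)\Gamma_p^\star$, we get
\begin{equation*}
\frac{\int_E|Q|^p}{\int_{\mathbb T}|Q|^p}
\geq \frac{2(1-\e)^2\,\tau^p q^{-1}\,|R(\frac{2a+1}{2q})|^p}
{2c^{-1}(1+C\e)\,\tau^p q^{-1}\,|R(\frac{2a+1}{2q})|^p}
= \frac{c(1-\e)^2}{1+C\e}
> \frac{(1-\e)^3}{1+C\e}\,\Gamma_p^\star,
\end{equation*}
and letting $\e\to 0$ yields $\gamma_p\geq \Gamma_p^\star$.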
\medskip
Let us now indicate the necessary modification for finding
$\gamma_p\geq \left(\Gamma_{r}^\star\right)^{p/r}
\left(\Gamma_{s}^\star\right)^{p/s}$. In the following we denote
$r_1:=r$ and $r_2:=s$: the index $j$ will always cover the two
values $j=1$ and $j=2$. Instead of starting from one polynomial,
we start from two polynomials $R_1$ and $R_2$ in $ \PP_{2q}$,
which satisfy the following inequalities, for $j=1,2$.
\begin{equation}\label{given-star2}
2|R_j(\frac {2a+1}{2q})|^{r_j}\geq c_j \sum_{k=0}^{q-1}|R_j(\frac
1{2q}+\frac kq)|^{r_j},
\end{equation}
with $c_j>(1-\e)\Gamma_{r_j}^\star$. Moreover we assume that
\begin{equation}\label{grid-star2}
\sum_{k=0}^{q-1}|R_j(\frac kq)|^{r_j}\leq
2Kc_j^{-1}|R_j(\frac{2a+1}{2q})|^{r_j}
\end{equation}
for some uniform constant $K$. We then put
$R(t):=R_1(t)R_2((2q+1)t)$. We remark that, on $\mathbb G_{2q}$,
the values of $R$ coincide with the values of the product $R_1
R_2$. We will prove that we still have inequalities
\eqref{inequ-1} and \eqref{inequ-2} for $|t-\frac
{2a+1}{2q}|<\frac \theta {q^2}$, and \eqref{inequ-3} for all $t$.
Let us first prove that \eqref{inequ-3} holds for some constant
$\kappa$. Indeed, by H\"older's inequality with conjugate exponents
$r_1/p$ and $r_2/p$ and periodicity of $R_2$, we have
$$
\sum_{k=0}^{q-1}|R(t+\frac kq)|^p\leq
\left(\sum_{k=0}^{q-1}|R_1(t+\frac kq)|^{r_1}\right)^{\frac
p{r_1}} \times \left(\sum_{k=0}^{q-1}|R_2((2q+1)t+\frac
kq)|^{r_2}\right)^{\frac p{r_2}}.
$$
Both factors are bounded, up to a constant, respectively by
$|R_1(\frac{2a+1}{2q})|^p$ and $|R_2(\frac{2a+1}{2q})|^p$, which
allows us to conclude.
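Recall that $r_1/p$ and $r_2/p$ are conjugate exponents exactly when
\begin{equation*}
\frac{p}{r_1}+\frac{p}{r_2}=1, \qquad\text{that is,}\qquad
\frac{1}{r_1}+\frac{1}{r_2}=\frac{1}{p},
\end{equation*}
which is the relation linking $p$, $r_1$ and $r_2$ implicitly assumed here.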
In view of \eqref{inequ-1} and \eqref{inequ-2}, we remark that,
when $t$ differs from $\frac {2a+1}{2q}$ by less than $\frac
\theta {q^2}$, then $(2q+1)t$ differs from $\frac {2a+1}{2q}$
(modulo $1$) by less than $\frac {3\theta} {q}$. So we still have,
for $|t-\frac {2a+1}{2q}|<\frac \theta {q^2}$ with $\theta$ small
enough,
\begin{equation}\label{inequ-4}
|R(t)|^p\geq (1-\e)|R(\frac {2a+1}{2q})|^p.
\end{equation}
For Inequality \eqref{inequ-2}, we first use H\"older's inequality
with conjugate exponents $r_1/p$ and $r_2/p$ as before, and then the
same kind of estimate for each factor.
From this point, the proof is the same.
It remains to indicate how to modify the proof to get peaking
idempotents with arbitrarily large gaps. So we fix $\nu$ as a
large odd integer, and we will prove that we can replace the
polynomial $R$ used above by some
$$
S(x):=R_1(\nu x) R_2((2q+1)\nu x),
$$ which has gaps larger than $\nu$. Recall first
that we can take arbitrarily large $q$ satisfying $(\nu,q)=1$, and
get an idempotent by multiplication by $T(qx)$ for $T$ having
sufficiently large gaps. The value taken by the polynomial $S$ at
$\frac{2a+1}{2q}$ is the value of $R_1R_2$ at $\frac{2b+1}{2q}$,
with $\nu (2a+1)\equiv 2b+1 $ mod $2q$. So we choose $R_1$ and
$R_2$ as before, but with $b$ in place of $a$.
From this point the proof is identical, apart from an additional
factor $\nu$, which modifies the value of $\theta$. We know that
$S(\nu x)$ and $R(x)$ take the same set of values on both grids
$\mathbb{G}_q$ and $\mathbb{G}_q^{\star}$, because in each case we multiply by an odd
integer that is coprime with $2q$.
\end{proof}
Now Theorem \ref{th:L1} is an easy consequence of Proposition
\ref{optimal} and Theorem \ref{Star}: take $r<2$ and $s>2$, so
that $\gamma_1\geq 1 \cdot (\Gamma_{r}^\star)^{1/r}$, and take the
limit of $\Gamma_r$ for $r\to 2-0$ using \eqref{eq:c2below}.
\bigskip
\begin{proof}[Proof of Proposition \ref{optimal}]
The proof is in the same spirit as the proof of the inequality
$\gamma_p^\sharp > 0.483$. Let us first fix $c<1$ and prove that
we can find a positive definite polynomial of degree less than
$2q$ such that
$$2|P(\frac 1{2q}+\frac aq)|^p\geq c
\sum_{k=0}^{q-1}|P(\frac 1{2q}+\frac kq)|^p,$$ while $$2|P(\frac
1{2q}+\frac aq)|^p\geq c \sum_{k=0}^{q-1}|P(\frac kq)|^p.$$
Indeed, it is proved in \cite{BR} (and is elementary) that $A(Lp,
1/4)$ has limit $1/2$ when $L$ tends to $\infty$, which means that
we can take for $P$ a polynomial that coincides with $D_n^L$ on
the grid $\mathbb{G}_{2q}$. We fix $L$ large enough, and choose
$n$ to be approximately $q/4$. The second inequality follows from
\eqref{K-above}.
At this point one can use Proposition \ref{random}, with $q$
replaced by $2q$, to find the idempotent $Q$.
\end{proof}
| {
"timestamp": "2008-11-27T17:07:01",
"yymm": "0811",
"arxiv_id": "0811.4576",
"language": "en",
"url": "https://arxiv.org/abs/0811.4576",
"abstract": "This is a companion paper of a recent one, entitled {\\sl Integral concentration of idempotent trigonometric polynomials with gaps}. New results of the present work concern $L^1$ concentration, while the above mentioned paper deals with $L^p$-concentration.Our aim here is two-fold. At the first place we try to explain methods and results, and give further straightforward corollaries. On the other hand, we push forward the methods to obtain a better constant for the possible concentration (in $L^1$ norm) of an idempotent on an arbitrary symmetric measurable set of positive measure. We prove a rather high level $\\gamma_1>0.96$, which contradicts strongly the conjecture of Anderson et al. that there is no positive concentration in $L^1$ norm.The same problem is considered on the group $\\mathbb{Z}/q\\mathbb{Z}$, with $q$ say a prime number. There, the property of absolute integral concentration of idempotent polynomials fails, which is in a way a positive answer to the conjecture mentioned above. Our proof uses recent results of B. Green and S. Konyagin on the Littlewood Problem.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Concentration of the integral norm of idempotents",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9828232894783427,
"lm_q2_score": 0.7217431943271999,
"lm_q1q2_score": 0.7093460204072654
} |
https://arxiv.org/abs/1701.07569 | Data-Driven Sparse Sensor Placement for Reconstruction | Optimal sensor placement is a central challenge in the design, prediction, estimation, and control of high-dimensional systems. High-dimensional states can often leverage a latent low-dimensional representation, and this inherent compressibility enables sparse sensing. This article explores optimized sensor placement for signal reconstruction based on a tailored library of features extracted from training data. Sparse point sensors are discovered using the singular value decomposition and QR pivoting, which are two ubiquitous matrix computations that underpin modern linear dimensionality reduction. Sparse sensing in a tailored basis is contrasted with compressed sensing, a universal signal recovery method in which an unknown signal is reconstructed via a sparse representation in a universal basis. Although compressed sensing can recover a wider class of signals, we demonstrate the benefits of exploiting known patterns in data with optimized sensing. In particular, drastic reductions in the required number of sensors and improved reconstruction are observed in examples ranging from facial images to fluid vorticity fields. Principled sensor placement may be critically enabling when sensors are costly and provides faster state estimation for low-latency, high-bandwidth control. MATLAB code is provided for all examples. | \subsection{Extensions to dynamics, control, and multiscale physics}
Data-driven sensor selection is generally used for instantaneous full-state reconstruction, despite the fact that many signals are generated by a dynamical system~\cite{guckenheimer_holmes,HLBR_turb}.
Even in reduced-order models, sensors are typically used to estimate nonlinear terms instantaneously without taking advantage of the underlying dynamics.
However, it is well known that for linear control systems~\cite{dp:book,sp:book}, the high-dimensional state may be reconstructed with few sensors, if not a single sensor, by leveraging the time history in conjunction with a model of the dynamics, as exemplified by the Kalman filter~\cite{Kalman1960jfe,Welch1995book}.
In dynamic estimation and control, prior placement of sensors and actuators is generally assumed.
Extending the sensor placement optimization to the model reduction~\cite{Moore1981ieeetac,Willcox2002aiaaj,Rowley2005ijbc} and system identification~\cite{ERA:1985,ljung:book,Juang1991nasatm,Juang1994book} of linear control systems is an important avenue of ongoing work.
In particular, sensors and actuators may be chosen to increase the volume of the controllability and observability Gramians, related to the original balanced truncation literature~\cite{Moore1981ieeetac}.
More generally, sensor and actuator placement may be optimized for robustness~\cite{Doyle:1978,Doyle:1981}, or for network control and consensus problems~\cite{Leonard2001cdc,Olfati2004ieeetac,Doyle2005pnas,Leonard2007pieee,Rahmani:SIAMJCO09}.
The sensor placement algorithms discussed above are rooted firmly in linear algebra, making them readily extensible to linear control systems.
Recent advances in dynamical systems are providing techniques to embed nonlinear systems in a linear framework through a suitable choice of \emph{measurement functions} of the state, opening up the possibility of optimized sensing for nonlinear systems.
As early as the 1930s, Koopman demonstrated that a nonlinear system can be rewritten as an infinite-dimensional linear operator on the Hilbert space of measurement functions~\cite{Koopman1931pnas}.
This perspective did not gain traction until modern computation and data collection capabilities enabled the analysis of large volumes of measurement data.
Modern Koopman theory may drive sensor placement and the selection of nonlinear measurement functions on the sensors to embed nonlinear dynamics in a linear framework for optimal nonlinear estimation and control.
This approach is consistent with neural control systems, where biological sensor networks (e.g., strain sensors on an insect wing) are processed through nonlinear neural filters before being used for feedback control.
Much of the modern Koopman operator theory has been recently developed~\cite{Mezic2005nd,Budivsic2009cdc,Budivsic2012chaos,Mezic2013arfm}, and it has been shown that under certain conditions DMD approximates the Koopman operator~\cite{Schmid2010jfm,Rowley2009jfm,Tu2014jcd,Kutz2016book}; sensor fusion is also possible in the Koopman framework~\cite{Williams2015epl}.
Recently, Koopman analysis has been used to develop nonlinear estimators~\cite{Surana2016cdc,Surana2016nolcos} and controllers~\cite{Brunton2016plosone}, although establishing rigorous connections to control theory is an ongoing effort~\cite{Proctor:2016DMDc,Proctor2016arxiv,Korda2016control}.
Koopman theory has also been used to analyze chaotic dynamical systems from time-series data~\cite{Giannakis2015arxiv,Brunton2017natcomm}, relying on the Takens embedding~\cite{Takens1981lnm}, which is related to sensor selection.
Beyond extending sensor selection to nonlinear systems and control, there is a significant opportunity to apply principled sensor selection to multiscale systems.
Turbulence is an important high-dimensional system that exhibits multiscale phenomena~\cite{HLBR_turb,Majda2010dcds,Osth:2014}.
Data-driven approaches have been used to characterize turbulent systems~\cite{Brunton2015amr}, including clustering~\cite{Kaiser2014jfm}, network theory~\cite{nair2015network,Taira2016jfm}, DMD-based model reduction~\cite{Noack2015arxiv,tissot2014model}, and local POD subspaces~\cite{Amsallem2012ijnme}, to name a few.
Recently, a multiresolution DMD has been proposed~\cite{Kutz2016siads}, where a low-dimensional subspace may locally characterize the attractor, despite a high-dimensional global attractor.
This approach may significantly reduce the number of sensors needed for multiscale problems.
\section{Compressed sensing: Random measurements in a universal basis}\label{Sec:Random}
The majority of natural signals, such as images and audio, are highly compressible, meaning that when the signal is written in an appropriate coordinate system, only a few basis modes are active.
These few values corresponding to the large mode amplitudes must be stored for accurate reconstruction, providing a significant reduction compared to the original signal size.
In other words, in the universal transform basis, the signal may be approximated by a \emph{sparse} vector containing mostly zeros.
This inherent sparsity of natural signals is central to the mathematical framework of compressed sensing.
Signal compression in the Fourier domain is illustrated on an image example in~``\nameref{Sidebar4}".
Further, sparse signal recovery using compressed sensing is demonstrated on a sinusoidal example in~``\nameref{sb:cs_example}".
\begin{figure}
\begin{Sidebar}{Sidebar: Image compression}{Sidebar: Image compression}
\label{Sidebar4}
Images and audio signals tend to be sparse in Fourier or wavelet bases, providing the foundation of JPEG and MP3 compression, respectively. This is shown schematically in Fig.~\ref{Fig:Compression} using the included Matlab code.
\lstinputlisting[firstline=5,lastline=12]{MATLAB/FIG_X_COMPRESS.m}
\begin{figure}[H]
\begin{overpic}[width=\textwidth]{FIG_S1_compress.pdf}
\put(47,63){$\mathcal{F}$}
\put(46,20){$\mathcal{F}^{-1}$}
\put(80,38){Keep $5\,\%$}
\end{overpic}
\caption{Fourier image compression. \label{Fig:Compression}}
\end{figure}
\end{Sidebar}
\end{figure}
The theory of compressed sensing~\cite{Donoho2006ieeetit,Candes2006cpam,Candes2006ieeetit,Candes2006bieeetit,Candes2006picm,Candes2008ieeespm,Baraniuk2007ieeespm,Baraniuk:2009} inverts this compression paradigm.
Instead of collecting high-dimensional measurements just to compress and discard most of the information, it may be possible to collect a low-dimensional subsample or compression of the data and then infer the sparse vector of coefficients in the transformed coordinate system.
\subsection{Theory of compressed sensing}\label{Sec:CS}
\begin{figure*}
\centering
\begin{overpic}[width=\textwidth]{FIG_1_CS.pdf}
\put(1.5,28){$\mathbf{y}$}
\put(19,28){$\mathbf{C}$}
\put(47.5,28){$\mathbf{\Psi}$}
\put(63.5,28){$\mathbf{s}$}
\put(82,28){$\mathbf{\Theta}$}
\put(98.2,28){$\mathbf{s}$}
\end{overpic}
\vspace{-.1in}
\caption{Compressed sensing provides the sparsest solution to an underdetermined linear system.}\label{Fig:CSschematic}
\end{figure*}
Mathematically, a compressible signal $\mathbf{x}\in\mathbb{R}^n$ may be written as a sparse vector $\mathbf{s}\in\mathbb{R}^n$ in a new basis $\boldsymbol{\Psi}\in\mathbb{R}^{n\times n}$ such that
\begin{equation}
\mathbf{x} = \mathbf{\Psi}\mathbf{s}.
\end{equation}
The vector $\mathbf{s}$ is called $K$-sparse if there are exactly $K$ nonzero elements.
To be able to represent \emph{any} natural signal, rather than just those from a tailored category, the basis $\mathbf{\Psi}$ must be complete.
Consider a set of measurements $\mathbf{y}\in\mathbb{R}^p$, obtained via a measurement matrix $\mathbf{C}\in\mathbb{R}^{p\times n}$, which satisfies
\begin{equation}
\mathbf{y} = \mathbf{C} \mathbf{x} = \mathbf{C}\mathbf{\Psi}\mathbf{s} = \mathbf{\Theta}\mathbf{s}.\label{Eq:CS}
\end{equation}
In general, for $p<n$, the system~\eqref{Eq:CS} is underdetermined and there are infinitely many solutions. The least-squares (minimum $\|\mathbf{s}\|_2$) solution is not sparse, and typically yields poor reconstruction.
Instead, knowing that natural signals are sparse, we seek the sparsest $\mathbf{s}$ consistent with the measurements $\mathbf{y}$,
\begin{align}
\mathbf{s} = \argmin_{\mathbf{s}'} \|\mathbf{s}'\|_0 \text{, such that }\mathbf{y} = \mathbf{C}\mathbf{\Psi}\mathbf{s}',\label{Eq:L0}
\end{align}
where $\|\mathbf{s}\|_0$ is the $\ell_0$ pseudo-norm corresponding to the number of non-zero entries of $\mathbf{s}$.
Unfortunately, this optimization problem is intractable, requiring a combinatorial brute-force search across all sparse vectors $\mathbf{s}$.
A major innovation of compressed sensing is a set of conditions on the measurement matrix $\mathbf{C}$ that allow the nonconvex $\ell_0$-minimization in~\eqref{Eq:L0} to be relaxed to the convex $\ell_1$-minimization~\cite{Candes2006bieeetit,Donoho:2006b}
\begin{equation}
\mathbf{s} = \argmin_{\mathbf{s}'} \|\mathbf{s}'\|_1 \text{, such that }\mathbf{y} = \mathbf{C}\mathbf{\Psi}\mathbf{s}',\label{Eq:L1}
\end{equation}
where $\|\mathbf{s}\|_1 = \sum_{k=1}^n|s_k|$. This formulation is shown schematically in Fig.~\ref{Fig:CSschematic}.
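As an illustrative aside (not part of the article's MATLAB examples), the $\ell_1$ relaxation~\eqref{Eq:L1} can be posed as a linear program and solved with off-the-shelf tools. The Python/NumPy sketch below splits $\mathbf{s}'=\mathbf{u}-\mathbf{v}$ with $\mathbf{u},\mathbf{v}\geq\mathbf{0}$, so that $\|\mathbf{s}'\|_1=\mathbf{1}^T(\mathbf{u}+\mathbf{v})$, and recovers a sparse vector from random Gaussian measurements; the dimensions and random seed are arbitrary choices for the demonstration, and $\mathbf{\Psi}=\mathbf{I}$ (the signal is sparse in the standard basis).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, K = 50, 25, 3            # signal length, measurements, sparsity

# K-sparse ground truth s (Psi = I, so Theta = C)
s = np.zeros(n)
s[rng.choice(n, K, replace=False)] = rng.standard_normal(K)

C = rng.standard_normal((p, n))  # random Gaussian measurement matrix
y = C @ s

# Basis pursuit: min ||s'||_1  s.t.  C s' = y,
# posed as an LP over s' = u - v with u, v >= 0.
c_obj = np.ones(2 * n)
A_eq = np.hstack([C, -C])
res = linprog(c_obj, A_eq=A_eq, b_eq=y, bounds=(0, None))
s_hat = res.x[:n] - res.x[n:]
```

With $p$ comfortably above $K\log(n/K)$ Gaussian measurements, as here, the recovery $\hat{\mathbf{s}}$ matches $\mathbf{s}$ to within solver precision for most random draws.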
For the $\ell_1$-minimization in~\eqref{Eq:L1} to yield the sparsest solution in~\eqref{Eq:L0} with high probability, the measurements $\mathbf{C}$ must be chosen so that ${\mathbf{\Theta}=\mathbf{C}\mathbf{\Psi}}$ satisfies a \emph{restricted isometry property} (RIP)
\begin{equation}
(1-\delta_K)\|\mathbf{s}\|_2^2 \leq \|\mathbf{C}\mathbf{\Psi}\mathbf{s}\|_2^2\leq (1+\delta_K)\|\mathbf{s}\|_2^2,
\end{equation}
where $\delta_K$ is a small positive \emph{restricted isometry} constant~\cite{Candes:2005,Candes2008ieeespm}.
In particular, there are two conditions on $\mathbf{C}$ for a RIP to be satisfied for all $K$-sparse vectors $\mathbf{s}$:
\begin{enumerate}
\item The measurements $\mathbf{C}$ must be \emph{incoherent} with respect to the basis $\mathbf{\Psi}$.
This incoherence means that the rows of $\mathbf{C}$ are sufficiently uncorrelated with the columns of $\mathbf{\Psi}$, as quantified by $\mu$
\begin{equation}
\mu(\mathbf{C},\mathbf{\Psi}) = \sqrt{n} \max_{j,k}|\langle \mathbf{c}_k,\mathbf{\boldsymbol\psi}_j\rangle|.
\end{equation}
A smaller $\mu$ indicates more incoherent measurements, with an optimal value of $\mu=1$.
Here, $\mathbf{c}_k$ denotes the $k$-th row of $\mathbf{C}$ and $\mathbf{\boldsymbol\psi}_j$ the $j$-th column of $\mathbf{\Psi}$, both of which are assumed to be normalized. A more detailed discussion about incoherence and the RIP may be found in~\cite{Baraniuk2007ieeespm,Candes2008ieeespm}.
\item The number of measurements $p$ must satisfy~\cite{Candes2006picm,Candes2006bieeetit, Baraniuk2007ieeespm,Candes2008ieeespm,Candes:2010}
\begin{equation}
p\sim \mathcal{O}(K\log(n/K)).
\end{equation}
The $K\log(n/K)$ term above is generally multiplied by a small constant multiple of the incoherence.
Thus, fewer measurements are required if they are less coherent.
\end{enumerate}
Intuitively, the existence of a RIP implies that the geometry of sparse vectors is preserved through the measurement matrix $\mathbf{C}\mathbf{\Psi}$.
Determining the exact constant $\delta_K$ may be extremely challenging in practice, and it tends to be more desirable to characterize the statistical properties of $\delta_K$, as the measurement matrix $\mathbf{C}$ may be randomly chosen.
``\nameref{Sidebar_6}" describes why it is not possible to use QR pivot locations as optimized sensors for compressed sensing, since they fail to identify the sparse structure of an unknown signal.
Often, a generic basis such as Fourier or wavelets may be used to represent the signal sparsely.
Spatially localized measurements (i.e., single pixels in the case of an image) are optimally incoherent with respect to the Fourier basis, so that ${\mu(\mathbf{C},\mathbf{\Psi})=1}$.
Thus, single pixel measurements are ideal because they excite a broadband frequency response.
In contrast, a measurement corresponding to a fixed Fourier mode would be uninformative; if the signal is not sparse in this particular frequency, this measurement provides no information about the other Fourier modes.
For many engineering applications, spatially localized measurements are desirable, as they correspond to physically realizable sensors, such as buoys in the ocean.
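The optimal incoherence of pixel measurements can be checked numerically (again a Python aside, not from the article): every inner product between a standard basis vector and a unitary discrete Fourier column has magnitude $1/\sqrt{n}$, so $\mu=1$. The number of sensors below is an arbitrary choice.

```python
import numpy as np

n = 64
k = np.arange(n)
# Unitary DFT basis: column j has entries exp(-2*pi*i*j*k/n)/sqrt(n)
Psi = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)
# Single-pixel measurements: rows of C are standard basis vectors
C = np.eye(n)[::8]                       # every 8th pixel (8 sensors)
mu = np.sqrt(n) * np.max(np.abs(C @ Psi))
# Each |<c_k, psi_j>| equals 1/sqrt(n), hence mu = 1
```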
One of the major results of compressed sensing is that random projection measurements of the state (i.e., entries of $\mathbf{C}$ that are Bernoulli or Gaussian random variables) are incoherent with respect to nearly any generic basis $\mathbf{\Psi}$~\cite{Candes2006picm,Candes2006ieeetit,Donoho2006ieeetit}.
This result is truly remarkable; however, the incoherence of random projections is not optimal, and typically scales as $\mu\sim\sqrt{2\log(n)}$.
Moreover, it may be difficult to obtain random projections of the full state $\mathbf{x}$ in physical applications.
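The suboptimal scaling of random projections can likewise be probed empirically (a Python sketch with arbitrarily chosen dimensions and seed; the value observed depends on the random draw):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 1024, 64
k = np.arange(n)
Psi = np.exp(-2j * np.pi * np.outer(k, k) / n) / np.sqrt(n)  # unitary DFT
C = rng.standard_normal((p, n))                # Gaussian random projections
C /= np.linalg.norm(C, axis=1, keepdims=True)  # unit-norm rows
mu = np.sqrt(n) * np.max(np.abs(C @ Psi))
# mu comes out a few times larger than the optimal value 1,
# on the order of sqrt(2*log(n))
```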
There are many alternative strategies to solve for the sparsest solution to~\eqref{Eq:CS}.
Greedy algorithms are often used~\cite{Tropp:2004,Tropp:2006,Tropp:2006b,Tropp2007ieeetit,Gilbert:2010}, including the compressed sampling matching pursuit (CoSaMP) algorithm~\cite{Needell:2010}.
There is also further theory on how sparse the random projections themselves may be for compressed sensing~\cite{Li:2006,Wang:2010}.
\begin{figure*}[t]
\centering
\begin{overpic}[width=.75\textwidth]{FIG_S2_CS2}
\small
\put(19,-2){Time [s]}
\put(68,-2){Frequency [Hz]}
\put(-2,18.5){$\mathbf{x}$}
\put(-2,53.75){$\mathbf{x}$}
\put(62,30){\small Power Spectral Density}
\put(62,65.5){\small Power Spectral Density}
\end{overpic}
\vspace{.15in}
\caption{Compressed sensing, applied to three-tone signal.}\label{Fig:CSexample}
\end{figure*}
\subsection[Compressed sensing example]{Compressed sensing example}
\label{sb:cs_example}
As a simple example, we consider a sparse signal that is constructed as the sum of three distinct cosine waves,
\begin{equation}
x(t) = \cos(2\pi\times 37 t) + \cos(2\pi\times 420 t) + \cos(2\pi\times 711 t).
\end{equation}
The Shannon-Nyquist sampling theorem~\cite{Nyquist1928taiee,Shannon1948bstj} states that for full signal reconstruction, we must sample at twice the highest frequency present, indicating a theoretical minimum sampling rate of $1422\,$Hz.
However, since the signal is sparse, we may sample at considerably lower than the Nyquist rate, in this case at an average of $256\,$Hz, shown in Figure~\ref{Fig:CSexample}.
Note that for accurate sparse signal reconstruction, these measurements must be randomly spaced in time, so that consecutive samples may be quite close together or quite far apart.
Spacing points evenly with a sampling rate of $256\,$Hz would alias the signal, resulting in poor reconstruction. Matlab code for reproducing Figure~\ref{Fig:CSexample} is provided below.
\lstinputlisting[firstline=3,lastline=23]{MATLAB/FIG_X_CS.m}
\section{Optimal sparse sensing in a tailored basis}\label{Sec:Tailored}
\begin{figure*}[t]
\begin{Sidebar}{Sidebar: Proper orthogonal decomposition and eigenfaces}{Sidebar: Proper orthogonal decomposition and eigenfaces}
\label{Sidebar2}
One of the most visually striking and intuitive applications of proper orthogonal decomposition (POD) is the feature extraction of facial images. The POD eigenmodes of facial datasets are called {\em eigenfaces} due to their resemblance to generic human faces. We demonstrate this application of POD on the extended Yale B dataset~\cite{YaleB2001ieee,YaleB2005ieee}, consisting of cropped and aligned images of several individuals in different lighting conditions. We obtain a resized version of the dataset in the form of Matlab data files from~\cite{yale}. Each image is a $32\times 32$ matrix of grayscale pixel values, reshaped into a column vector of length 1024 and assembled into a data matrix $\mathbf{X}$. This example, detailed in~``\nameref{Sec:Results:Eigenfaces}", is a benchmark problem for sensor selection.
Matlab code for obtaining eigenfaces from training images is provided. First, training images are used to assemble a mean-subtracted data matrix.
\lstinputlisting[firstline=36,lastline=38]{MATLAB/FIG_YALE_CONVpanel.m}
Next, POD eigenfaces are obtained using the singular value decomposition of the data matrix. Outputs from both code snippets are visualized in Figure~\ref{fig:yale}.
\lstinputlisting[firstline=39,lastline=41]{MATLAB/FIG_YALE_CONVpanel.m}
\begin{figure}[H]
\centering
\begin{overpic}[width=.85\textwidth]{FIG_S3_yale_train.pdf}
\put(17,8){\color{white} Training images from the Extended Yale B dataset\label{fig:yale_train}}
\end{overpic}
\begin{overpic}[width=.85\textwidth]{FIG_S3_eigenface.pdf}
\put(30,8){\color{white} First ten POD eigenfaces\label{fig:eigenfaces}}
\end{overpic}
\caption{Proper orthogonal decomposition (POD) modes successfully recover important facial information such as the main facial features (eyes, nose, mouth) followed by depth information (brows, ridges, chin).\label{fig:yale}}
\end{figure}
| {
"timestamp": "2017-08-21T02:06:02",
"yymm": "1701",
"arxiv_id": "1701.07569",
"language": "en",
"url": "https://arxiv.org/abs/1701.07569",
"abstract": "Optimal sensor placement is a central challenge in the design, prediction, estimation, and control of high-dimensional systems. High-dimensional states can often leverage a latent low-dimensional representation, and this inherent compressibility enables sparse sensing. This article explores optimized sensor placement for signal reconstruction based on a tailored library of features extracted from training data. Sparse point sensors are discovered using the singular value decomposition and QR pivoting, which are two ubiquitous matrix computations that underpin modern linear dimensionality reduction. Sparse sensing in a tailored basis is contrasted with compressed sensing, a universal signal recovery method in which an unknown signal is reconstructed via a sparse representation in a universal basis. Although compressed sensing can recover a wider class of signals, we demonstrate the benefits of exploiting known patterns in data with optimized sensing. In particular, drastic reductions in the required number of sensors and improved reconstruction are observed in examples ranging from facial images to fluid vorticity fields. Principled sensor placement may be critically enabling when sensors are costly and provides faster state estimation for low-latency, high-bandwidth control. MATLAB code is provided for all examples.",
"subjects": "Optimization and Control (math.OC); Systems and Control (eess.SY)",
"title": "Data-Driven Sparse Sensor Placement for Reconstruction",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399077750858,
"lm_q2_score": 0.7310585903489891,
"lm_q1q2_score": 0.7093022192783874
} |
https://arxiv.org/abs/0704.0656 | Necessary optimality conditions for the calculus of variations on time scales | We study more general variational problems on time scales. Previous results are generalized by proving necessary optimality conditions for (i) variational problems involving delta derivatives of more than the first order, and (ii) problems of the calculus of variations with delta-differential side conditions (Lagrange problem of the calculus of variations on time scales). | \section{Introduction}
The theory of time scales is a relatively new area that unifies and
generalizes difference and differential equations \cite{livro}. It
was initiated by Stefan Hilger in the 1990s
\cite{Hilger90,Hilger97}, and is now the subject of strong current
research in many different fields in which dynamic processes can
be described with discrete or continuous models \cite{Agarwal}.
The calculus of variations on time scales was introduced by Bohner
\cite{CD:Bohner:2004} and by Hilscher and Zeidan \cite{zeidan},
and appears to have many opportunities for application in
economics \cite{Atici06}. In all those works, necessary optimality
conditions are only obtained for the basic (simplest) problem of
the calculus of variations on time scales: in
\cite{Atici06,CD:Bohner:2004} for the basic problem with fixed
endpoints, in \cite{zeidan} for the basic problem with general
(jointly varying) endpoints. Having in mind the classical setting
(situation when the time scale $\mathbb{T}$ is either $\mathbb{R}$
or $\mathbb{Z}$ -- see \textrm{e.g.} \cite{GelfandFomin,Brunt} and
\cite{zeidan2,Logan}, respectively), one suspects that the
Euler-Lagrange equations in \cite{Atici06,CD:Bohner:2004,zeidan}
are easily generalized for problems with higher-order delta
derivatives. This is not exactly the case, even beginning with the
formulation of the problem.
The basic problem of the calculus of variations on time scales is
defined (\textrm{cf.} \cite{CD:Bohner:2004,zeidan}, see
\S\ref{sec:Prel} below for the meaning of the $\Delta$-derivative
and $\Delta$-integral) as
\begin{equation}
\label{eq:EL:B} \mathcal{L}[y(\cdot)]
=\int_{a}^{b}L(t,y^\sigma(t),y^\Delta(t))\Delta t\longrightarrow
\min, \quad (y(a)=y_a)\, , (y(b)=y_b) \, ,
\end{equation}
with $L: \mathbb{T} \times \mathbb{R}^n \times \mathbb{R}^n
\rightarrow \mathbb{R}$, $(y,u) \rightarrow L(t,y,u)$ a
$C^2$-function for each $t$, and where we are using parentheses
around the endpoint conditions as a notation to mean that the
conditions may or may not be present: the case with fixed boundary
conditions $y(a)=y_a$ and $y(b)=y_b$ is studied in
\cite{CD:Bohner:2004}, for admissible functions $y(\cdot)$
belonging to $C^1_{rd}\left(\mathbb{T};\mathbb{R}^n\right)$
($rd$-continuously $\Delta$-differentiable functions); general
boundary conditions of the type $f(y(a),y(b))=0$, which include
the case $y(a)$ or $y(b)$ free, and over admissible functions in
the wider class $C^1_{prd}\left(\mathbb{T};\mathbb{R}^n\right)$
(piecewise $rd$-continuously $\Delta$-differentiable functions),
are considered in \cite{zeidan}. One question immediately comes to
mind. Why is the basic problem on time scales defined as
\eqref{eq:EL:B} and not as
\begin{equation}
\label{eq:EL:BSS} \mathcal{L}[y(\cdot)] =\int_{a}^{b}
L(t,y(t),y^\Delta(t))\Delta t\longrightarrow \min, \quad
(y(a)=y_a)\, , (y(b)=y_b) \, .
\end{equation}
The answer is simple: compared with \eqref{eq:EL:BSS}, definition
\eqref{eq:EL:B} simplifies the Euler-Lagrange equation, in the
sense that makes it similar to the classical context. The reader
is invited to compare the Euler-Lagrange condition
\eqref{eq:EL:Boh} of problem \eqref{eq:EL:B} and the
Euler-Lagrange condition \eqref{minha:E-L} of problem
\eqref{eq:EL:BSS}, with the classical expression (on the time
scale $\mathbb{T} = \mathbb{R}$):
\begin{equation*}
\frac{d}{dt} L_{y'}(t,y_\ast(t),y_\ast'(t))
=L_{y}(t,y_\ast(t),y_\ast'(t)),\ t\in[a,b] \, .
\end{equation*}
It turns out that problems \eqref{eq:EL:B} and \eqref{eq:EL:BSS}
are equivalent: since we assume $y(\cdot)$ to be
$\Delta$-differentiable, we have
$y(t)=y^{\sigma}(t)-\mu(t)y^{\Delta}(t)$, so (i) any problem
\eqref{eq:EL:B} can be written in the form \eqref{eq:EL:BSS}, (ii)
any problem \eqref{eq:EL:BSS} can be written in the form
\eqref{eq:EL:B}. We claim, however, that the formulation
\eqref{eq:EL:BSS} we are promoting here is more natural and
convenient. An advantage of our formulation \eqref{eq:EL:BSS} with
respect to \eqref{eq:EL:B} is that it makes clear how to
generalize the basic problem on time scales to the case of a
Lagrangian $L$ containing delta derivatives of $y(\cdot)$ up to an
order $r$, $r \ge 1$. The higher-order problem will be naturally
defined as
$$\mathcal{L}[y(\cdot)]=\int_{a}^{\rho^{r-1}(b)}
L(t,y(t),y^\Delta(t),\ldots,y^{\Delta^r}(t))\Delta
t\longrightarrow\min,$$
\begin{align}
\left(y(a)=y_a^0\right),& \ \left(y\left(\rho^{r-1}(b)\right)=y_b^0\right), \label{problema } \\
&\vdots\nonumber \\
\left(y^{\Delta^{r-1}}(a)=y_a^{r-1}\right),&\
\left(y^{\Delta^{r-1}}\left(\rho^{r-1}(b)\right)=y_b^{r-1}\right),\nonumber
\end{align}
where $y^{\Delta^i}(t)\in\mathbb{R}^n,\ i\in\{0,\ldots,r\}$,
$y^{\Delta^0}=y$, and $n,\ r\in\mathbb{N}$ (assumptions on the
data of the problem will be specified later, in
Section~\ref{sec:mainResults}). One of the new results in this
paper is a necessary optimality condition in \emph{delta integral
form} for problem \eqref{problema } (Theorem~\ref{thm:HO:E-L:TS}).
It is obtained using the interplay of problems \eqref{eq:EL:B} and
\eqref{eq:EL:BSS} in order to deal with more general optimal
control problems \eqref{eq:PrbCO}.
The paper is organized as follows. In Section~\ref{sec:Prel} we
give a brief introduction to time scales and recall the main
results of the calculus of variations on this general setting. Our
contributions are found in Section~\ref{sec:mainResults}. We start
in \S\ref{subsec:basic} by proving the Euler-Lagrange equation and
transversality conditions (natural boundary conditions -- $y(a)$
or/and $y(b)$ free) for the basic problem \eqref{eq:EL:BSS}
(Theorem~\ref{thm:1}). As a corollary, the Euler-Lagrange equation
in \cite{CD:Bohner:2004} and \cite{zeidan} for \eqref{eq:EL:B} is
obtained. Regarding the natural boundary conditions, the one which
appears when $y(a)$ is free turns out to be simpler, and closer in
form to the classical condition
$L_{y'}(a,y_\ast(a),y_\ast'(a)) = 0$, for problem \eqref{eq:EL:B}
than for \eqref{eq:EL:BSS}---compare condition \eqref{eq:transv:a}
for problem \eqref{eq:EL:BSS} with the corresponding condition
\eqref{eq:trv:prbBcsig:a} for problem \eqref{eq:EL:B}; the
opposite happens when $y(b)$ is free---compare condition
\eqref{eq:trv:prbBcsig:b} for problem \eqref{eq:EL:B} with the
corresponding condition \eqref{eq:transv:b} for \eqref{eq:EL:BSS},
the latter being simpler and closer in form to the classical
expression $L_{y'}(b,y_\ast(b),y_\ast'(b)) = 0$ valid on the time
scale $\mathbb{T} = \mathbb{R}$. In \S\ref{subsec:Lag} we
formulate a more general optimal control problem \eqref{eq:PrbCO}
on time scales, proving respective necessary optimality conditions
in Hamiltonian form (Theorem~\ref{thm:PMP}). As corollaries, we
obtain a Lagrange multiplier rule on time-scales
(Corollary~\ref{cor:LagMultRule}), and in \S\ref{subsec:HO} the
Euler-Lagrange equation for the problem of the calculus of
variations with higher order delta derivatives
(Theorem~\ref{thm:HO:E-L:TS}). Finally, as an illustrative
example, we consider in \S\ref{subsec:appl} a discrete time scale
and obtain the well-known Euler-Lagrange equation in delta
differentiated form.
All the results obtained in this paper can be extended: (i) to
nabla derivatives (see \cite[\S 8.4]{livro}) with the appropriate
modifications and as done in \cite{Atici06} for the simplest
functional; (ii) to more general classes of admissible functions
and to problems with more general boundary conditions, as done in
\cite{zeidan} for the simplest functional of the calculus of
variations on time scales.
\section{Time scales and previous results}
\label{sec:Prel}
We begin by recalling the main definitions and properties of time
scales (\textrm{cf.} \cite{Agarwal,livro,Hilger90,Hilger97} and
references therein).
A nonempty closed subset of $\mathbb{R}$ is called a \emph{time
scale} and is denoted by $\mathbb{T}$.
The \emph{forward jump operator}
$\sigma:\mathbb{T}\rightarrow\mathbb{T}$ is defined by
$$\sigma(t)=\inf{\{s\in\mathbb{T}:s>t}\},\mbox{ for all $t\in\mathbb{T}$},$$
while the \emph{backward jump operator}
$\rho:\mathbb{T}\rightarrow\mathbb{T}$ is defined by
$$\rho(t)=\sup{\{s\in\mathbb{T}:s<t}\},\mbox{ for all
$t\in\mathbb{T}$},$$ with $\inf\emptyset=\sup\mathbb{T}$
(\textrm{i.e.}, $\sigma(M)=M$ if $\mathbb{T}$ has a maximum $M$)
and $\sup\emptyset=\inf\mathbb{T}$ (\textrm{i.e.}, $\rho(m)=m$ if
$\mathbb{T}$ has a minimum $m$).
A point $t\in\mathbb{T}$ is called \emph{right-dense},
\emph{right-scattered}, \emph{left-dense} and
\emph{left-scattered} if $\sigma(t)=t$, $\sigma(t)>t$, $\rho(t)=t$
and $\rho(t)<t$, respectively.
Throughout the text we let $\mathbb{T}=[a,b]\cap\mathbb{T}_{0}$
with $a<b$ and $\mathbb{T}_0$ a time scale. We define
$\mathbb{T}^k=\mathbb{T}\backslash(\rho(b),b]$,
$\mathbb{T}^{k^2}=\left(\mathbb{T}^k\right)^k$ and more generally
$\mathbb{T}^{k^n}=\left(\mathbb{T}^{k^{n-1}}\right)^k$, for
$n\in\mathbb{N}$. The following standard notation is used for
$\sigma$ (and $\rho$): $\sigma^0(t) = t$, $\sigma^n(t) = (\sigma
\circ \sigma^{n-1})(t)$, $n \in \mathbb{N}$.
The \emph{graininess function}
$\mu:\mathbb{T}\rightarrow[0,\infty)$ is defined by
$$\mu(t)=\sigma(t)-t,\mbox{ for all $t\in\mathbb{T}$}.$$
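For a finite time scale all three objects are directly computable. The following Python sketch is our own illustration (the sample time scale is an arbitrary choice, not taken from the references); it implements $\sigma$, $\rho$ and $\mu$ for a sorted list of points, with the conventions $\sigma(M)=M$ and $\rho(m)=m$:

```python
# Illustrative sketch: jump operators and graininess on a finite
# time scale, represented as a sorted list of distinct reals.

def sigma(T, t):
    """Forward jump: smallest s in T with s > t (sigma(max T) = max T)."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(T, t):
    """Backward jump: largest s in T with s < t (rho(min T) = min T)."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

def mu(T, t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

T = [0, 1, 1.5, 3]
print(sigma(T, 1))   # 1.5 (the point 1 is right-scattered)
print(rho(T, 0))     # 0   (minimum: rho(m) = m)
print(mu(T, 1.5))    # 1.5
```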
We say that a function $f:\mathbb{T}\rightarrow\mathbb{R}$ is
\emph{delta differentiable} at $t\in\mathbb{T}^k$ if there is a
number $f^{\Delta}(t)$ such that for all $\varepsilon>0$ there
exists a neighborhood $U$ of $t$ (\textrm{i.e.},
$U=(t-\delta,t+\delta)\cap\mathbb{T}$ for some $\delta>0$) such
that
$$|f(\sigma(t))-f(s)-f^{\Delta}(t)(\sigma(t)-s)|
\leq\varepsilon|\sigma(t)-s|,\mbox{ for all $s\in U$}.$$
We call $f^{\Delta}(t)$ the \emph{delta derivative} of $f$ at $t$.
Now, we define the $r^{\text{th}}$ \emph{delta derivative}
($r\in\mathbb{N}$) of $f$ to be the function
$f^{\Delta^r}:\mathbb{T}^{k^r}\rightarrow\mathbb{R}$, provided
$f^{\Delta^{r-1}}$ is delta differentiable on $\mathbb{T}^{k^r}$.
For delta differentiable $f$ and $g$, the following formulas hold:
\begin{align}
f^\sigma(t)&=f(t)+\mu(t)f^\Delta(t)\label{transfor}\\
(fg)^\Delta(t)&=f^\Delta(t)g^\sigma(t)+f(t)g^\Delta(t)\nonumber\\
&=f^\Delta(t)g(t)+f^\sigma(t)g^\Delta(t)\nonumber,
\end{align}
where we abbreviate $f\circ\sigma$ by $f^\sigma$.
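At a right-scattered point the definition of the delta derivative reduces to the difference quotient $f^\Delta(t)=\bigl(f(\sigma(t))-f(t)\bigr)/\mu(t)$. The following Python sketch (our own illustration, on an arbitrary purely discrete time scale) checks formula \eqref{transfor} and the first form of the product rule numerically:

```python
# Illustrative check (all points of this sample time scale are
# right-scattered): delta derivative, the formula
# f(sigma(t)) = f(t) + mu(t) f^Delta(t), and the product rule.

T = [0, 1, 1.5, 3, 4]          # a sample time scale (our choice)
f = lambda t: t * t
g = lambda t: 2 * t + 1

def sigma(t):
    later = [s for s in T if s > t]
    return min(later) if later else t

def delta(h, t):
    """Delta derivative at a right-scattered point t."""
    return (h(sigma(t)) - h(t)) / (sigma(t) - t)

for t in T[:-1]:                # T^k excludes the last point
    m = sigma(t) - t            # graininess mu(t)
    # formula (transfor): f(sigma(t)) = f(t) + mu(t) f^Delta(t)
    assert abs(f(sigma(t)) - (f(t) + m * delta(f, t))) < 1e-12
    # product rule: (fg)^Delta = f^Delta g^sigma + f g^Delta
    fg = lambda s: f(s) * g(s)
    assert abs(delta(fg, t)
               - (delta(f, t) * g(sigma(t)) + f(t) * delta(g, t))) < 1e-12
print("checks passed")
```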
Next, a function $f:\mathbb{T}\rightarrow\mathbb{R}$ is called
\emph{rd-continuous} if it is continuous at right-dense points and
if its left-sided limit exists at left-dense points. We denote the
set of all rd-continuous functions by C$_{\textrm{rd}}$ or
C$_{\textrm{rd}}[\mathbb{T}]$ and the set of all delta
differentiable functions with rd-continuous derivative by
C$_{\textrm{rd}}^1$ or C$_{\textrm{rd}}^1[\mathbb{T}]$.
It is known that every rd-continuous function $f$ possesses an
\emph{antiderivative}, \textrm{i.e.}, a function $F$ with
$F^\Delta=f$, and in this case the \emph{integral} is defined
by $\int_{a}^{b}f(t)\Delta t=F(b)-F(a)$. It satisfies
\begin{equation}
\label{sigma}
\int_t^{\sigma(t)}f(\tau)\Delta\tau=\mu(t)f(t) \, .
\end{equation}
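On a discrete time scale the delta integral is the finite sum $\int_a^b f(t)\Delta t=\sum \mu(t)f(t)$ over the points of $[a,b)$, and \eqref{sigma} is the statement that the integral over $[t,\sigma(t))$ consists of a single such term. A minimal Python sketch (our own illustration, with an arbitrary sample time scale):

```python
# Illustrative check of formula (sigma) on a discrete time scale:
# the delta integral over [t, sigma(t)) is the single term mu(t) f(t).

T = [0.0, 0.5, 2.0, 3.0]        # a sample time scale (our choice)

def sigma(t):
    later = [s for s in T if s > t]
    return min(later) if later else t

def integral(f, lo, hi):
    """Delta integral of f over [lo, hi) for this discrete scale."""
    return sum((sigma(t) - t) * f(t) for t in T if lo <= t < hi)

f = lambda t: t + 1
t0 = 0.5
assert abs(integral(f, t0, sigma(t0)) - (sigma(t0) - t0) * f(t0)) < 1e-12
print("formula (sigma) verified at t =", t0)
```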
We now present some useful properties of the delta integral:
\begin{lem}
\label{integracao:partes}
If $a,b\in\mathbb{T}$ and $f,g\in$C$_{\textrm{rd}}$, then
\begin{enumerate}
\item$\int_{a}^{b}f(\sigma(t))g^{\Delta}(t)\Delta t
=\left[(fg)(t)\right]_{t=a}^{t=b}-\int_{a}^{b}f^{\Delta}(t)g(t)\Delta t$.
\item $\int_{a}^{b}f(t)g^{\Delta}(t)\Delta t
=\left[(fg)(t)\right]_{t=a}^{t=b}-\int_{a}^{b}f^{\Delta}(t)g(\sigma(t))\Delta t$.
\end{enumerate}
\end{lem}
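Both formulas can be verified numerically on a purely discrete time scale, where they reduce to Abel summation. The following Python sketch (our own illustration, with arbitrary sample data) checks items 1 and 2 of the lemma:

```python
# Illustrative numerical check of the two integration-by-parts
# formulas on a discrete time scale (delta integral = sum mu(t) f(t)).

T = [0.0, 0.5, 1.0, 2.0, 3.0]   # a sample time scale (our choice)
a, b = T[0], T[-1]

def sigma(t):
    later = [s for s in T if s > t]
    return min(later) if later else t

def delta(h, t):
    return (h(sigma(t)) - h(t)) / (sigma(t) - t)

def integral(h):
    """Delta integral of h over [a,b]: sum of mu(t) h(t), t in [a,b)."""
    return sum((sigma(t) - t) * h(t) for t in T[:-1])

f = lambda t: t ** 2 - 1
g = lambda t: 3 * t

boundary = f(b) * g(b) - f(a) * g(a)    # [(fg)(t)] from t=a to t=b
# item 1: int f(sigma(t)) g^Delta = [fg] - int f^Delta g
lhs1 = integral(lambda t: f(sigma(t)) * delta(g, t))
rhs1 = boundary - integral(lambda t: delta(f, t) * g(t))
assert abs(lhs1 - rhs1) < 1e-12
# item 2: int f g^Delta = [fg] - int f^Delta g(sigma(t))
lhs2 = integral(lambda t: f(t) * delta(g, t))
rhs2 = boundary - integral(lambda t: delta(f, t) * g(sigma(t)))
assert abs(lhs2 - rhs2) < 1e-12
print("both integration-by-parts formulas verified")
```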
The main result of the calculus of variations on time scales is
given by the following necessary optimality condition for problem
\eqref{eq:EL:B}.
\begin{thm}[\cite{CD:Bohner:2004}]
\label{Th:B:EL-CV} If $y_\ast$ is a weak local minimizer
(\textrm{cf.} \S\ref{sec:mainResults}) of the problem
\begin{gather*}
\mathcal{L}[y(\cdot)]=\int_{a}^{b}L(t,y^\sigma(t),y^\Delta(t))\Delta t \longrightarrow \min\\
y(\cdot) \in C_{\textrm{rd}}^1[\mathbb{T}]\\
y(a)=y_a, \quad y(b)=y_b,
\end{gather*}
then the Euler-Lagrange equation
\begin{equation}
\label{eq:EL:Boh}
L_{y^\Delta}^\Delta(t,y^\sigma_\ast(t),y_\ast^\Delta(t))
=L_{y^\sigma}(t,y^\sigma_\ast(t),y_\ast^\Delta(t)),\
t\in\mathbb{T}^{k^2}
\end{equation}
holds.
\end{thm}
The main ingredients in the proof of Theorem~\ref{Th:B:EL-CV} are
item~1 of Lemma~\ref{integracao:partes} and the Dubois-Reymond lemma:
\begin{lem}[\cite{CD:Bohner:2004}]
\label{lem:DR} Let $g\in C_{\textrm{rd}}$,
$g:[a,b]^k\rightarrow\mathbb{R}^n$. Then,
$$\int_{a}^{b}g(t) \cdot \eta^\Delta(t)\Delta t=0 \quad
\mbox{for all $\eta\in C_{\textrm{rd}}^1$ with
$\eta(a)=\eta(b)=0$}$$ if and only if $$g(t)=c \mbox{ on $[a,b]^k$
for some $c\in\mathbb{R}^n$}.$$
\end{lem}
\section{Main results}
\label{sec:mainResults}
Assume that the Lagrangian $L=L(t,u_0,u_1,\ldots,u_r)$
($r\geq 1$) is a $\mathrm{C}^{r+1}$ function of
$(u_0,u_1,\ldots,u_r)$ for each $t\in\mathbb{T}$. Let
$y\in\mathrm{C}_{rd}^r[\mathbb{T}]$, where
$$\mathrm{C}_{rd}^r[\mathbb{T}]
=\left\{y:\mathbb{T}^{k^r}\rightarrow\mathbb{R}^n : y^{\Delta^r}\
\mbox{is $rd$-continuous on}\ \mathbb{T}^{k^r}\right\} \, .$$
We want to minimize the functional $\mathcal{L}$ of problem
\eqref{problema }. For this, we say that
$y_\ast\in\mathrm{C}_{rd}^r[\mathbb{T}]$ is a \emph{weak local
minimizer} for the variational problem \eqref{problema } provided
there exists $\delta>0$ such that
$\mathcal{L}[y_\ast]\leq\mathcal{L}[y]$ for all
$y\in\textrm{C}_{rd}^r[\mathbb{T}]$ satisfying the constraints in
\eqref{problema } and $\|y-y_\ast\|_{r,\infty}<\delta$, where
$$||y||_{r,\infty} := \sum_{i=0}^{r} \left\|y^{\Delta^i}\right\|_{\infty},$$
with $y^{\Delta^0} = y$ and $||y||_{\infty}:= \sup_{t
\in\mathbb{T}^{k^r}} |y(t)|$.
\subsection{The basic problem on time scales}
\label{subsec:basic}
We start by proving the necessary optimality condition for the
simplest variational problem ($r = 1$):
\begin{equation}
\label{P1}
\begin{gathered}
\mathcal{L}[y(\cdot)]=\int_{a}^{b}L(t,y(t),y^\Delta(t))\Delta t \longrightarrow \min \\
y(\cdot) \in C_{\textrm{rd}}^1[\mathbb{T}]\\
\left(y(a)=y_a\right), \quad \left(y(b)=y_b\right) \, .
\end{gathered}
\end{equation}
\begin{rem}
\label{rem:3points} We are assuming in problem \eqref{P1} that the
time scale $\mathbb{T}$ has at least 3 points. Indeed, for the
delta-integral to be defined we need at least 2 points. Assume
that the time scale has only two points: $\mathbb{T} = \{a,b\}$,
with $b=\sigma(a)$. Then,
$\int_{a}^{\sigma(a)}L(t,y(t),y^\Delta(t))\Delta t = \mu(a)
L(a,y(a),y^\Delta(a))$. In the case both $y(a)$ and $y(\sigma(a))$
are fixed, since $y^\Delta(a) = \frac{y(\sigma(a))-y(a)}{\mu(a)}$,
then $\mathcal{L}[y(\cdot)]$ would be a constant for every
admissible function $y(\cdot)$ (there would be nothing to minimize
and problem \eqref{P1} would be trivial). Similarly, for
\eqref{problema } we assume the time scale to have at least $2 r +
1$ points (see Remark~\ref{rem:Pneeds2rp1points}).
\end{rem}
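The two-point degeneracy in Remark~\ref{rem:3points} amounts to one line of arithmetic; the Python sketch below (our own illustration, with arbitrary data) evaluates the single term of the functional, which no admissible choice can change:

```python
# Illustrative check: on a two-point time scale T = {a, b} with both
# endpoints fixed, there is a unique admissible trajectory, so the
# functional is a single number -- nothing to minimize.

a, b = 0.0, 2.0                       # T = {0, 2}, so mu(a) = 2
ya, yb = 1.0, 5.0                     # fixed boundary values
L = lambda t, y, v: y ** 2 + v ** 2   # a sample Lagrangian (our choice)

v = (yb - ya) / (b - a)               # y^Delta(a) is forced by the data
value = (b - a) * L(a, ya, v)         # mu(a) * L(a, y(a), y^Delta(a))
print("functional value:", value)     # (1 + 4) * 2 = 10.0
```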
\begin{thm}
\label{thm:1} If $y_\ast$ is a weak local minimizer of \eqref{P1}
(problem \eqref{problema } with $r=1$), then the Euler-Lagrange
equation in $\Delta$-integral form
\begin{equation}\label{eulerint}
L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
=\int_a^{\sigma(t)}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi+c
\end{equation}
holds for all $t\in\mathbb{T}^k$ and some $c\in\mathbb{R}^n$.
Moreover, if the initial condition $y(a)=y_a$ is not present
($y(a)$ is free), then the supplementary condition
\begin{equation}
\label{eq:transv:a} L_{y^\Delta}(a,y_\ast(a),y_\ast^\Delta(a))
-\mu(a)L_{y}(a,y_\ast(a),y_\ast^\Delta(a)) = 0
\end{equation}
holds; if $y(b)=y_b$ is not present ($y(b)$ is free), then
\begin{equation}
\label{eq:transv:b}
L_{y^\Delta}(\rho(b),y_\ast(\rho(b)),y_\ast^\Delta(\rho(b)))= 0\,
.
\end{equation}
\end{thm}
\begin{rem}
For the time scale $\mathbb{T} = \mathbb{R}$ equalities
\eqref{eq:transv:a} and \eqref{eq:transv:b} give, respectively,
the well-known \emph{natural boundary conditions}
$L_{y'}(a,y_\ast(a),y_\ast'(a)) = 0$ and
$L_{y'}(b,y_\ast(b),y_\ast'(b))= 0$.
\end{rem}
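As a sanity check of Theorem~\ref{thm:1}, consider the time scale $\mathbb{T}=\{0,1,\ldots,5\}$ with the Lagrangian $L(t,y,v)=v^2$ (our own example, not from the text). Here $L_y=0$, so \eqref{eulerint} says that $L_{y^\Delta}=2y^\Delta(t)$ equals a constant $c$, i.e., the minimizer is the discrete straight line. The Python sketch below verifies both facts numerically:

```python
# Illustrative check of Theorem thm:1 on T = {0,1,...,5} with
# L(t,y,v) = v^2 (our example): the straight line minimizes the
# discrete functional, and L_{y^Delta} = 2 y^Delta is constant.

import random

N = 5
ya, yb = 0.0, 10.0

def functional(y):
    # delta integral on Z: sum over t = 0..N-1 of (y(t+1) - y(t))^2
    return sum((y[t + 1] - y[t]) ** 2 for t in range(N))

line = [ya + (yb - ya) * t / N for t in range(N + 1)]
J_star = functional(line)

random.seed(0)
for _ in range(200):
    y = list(line)
    for t in range(1, N):          # perturb interior points only
        y[t] += random.uniform(-1, 1)
    assert functional(y) >= J_star - 1e-12   # the line is a minimizer

# Euler-Lagrange integral form: L_{y^Delta}(t) = c for all t in T^k
slopes = [2 * (line[t + 1] - line[t]) for t in range(N)]
assert max(slopes) - min(slopes) < 1e-12
print("Euler-Lagrange check passed, c =", slopes[0])
```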
\begin{proof}
Suppose that $y_\ast(\cdot)$ is a weak local minimizer of
$\mathcal{L}[\cdot]$. Let
$\eta(\cdot)\in \mathrm{C}_{\textrm{rd}}^1$ and define
$\Phi:\mathbb{R}\rightarrow\mathbb{R}$ by
$$\Phi(\varepsilon)=\mathcal{L}[y_\ast(\cdot)+\varepsilon\eta(\cdot)].$$
This function has a minimum at $\varepsilon=0$, so we must have
$\Phi'(0)=0$. Applying the delta-integral properties and the
integration by parts formula 2 (second item in
Lemma~\ref{integracao:partes}), we have
\begin{equation}
\label{eq:1:5:2}
\begin{aligned}
0&=\Phi'(0)\\
&=\int_{a}^{b}[L_{y}(t,y_\ast(t),y_\ast^\Delta(t)) \cdot \eta(t)
+L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t)) \cdot \eta^\Delta(t)]\Delta t\\
&=\left.\left(\int_a^t
L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi
\cdot \eta(t)\right)\right|_{t=a}^{t=b}\\
&\quad -\int_{a}^{b}\left[\int_{a}^{\sigma(t)}L_{y}(\xi,y_\ast(\xi),
y_\ast^\Delta(\xi))\Delta\xi \cdot \eta^\Delta(t)
-L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t)) \cdot \eta^\Delta(t)\right]\Delta t \, .
\end{aligned}
\end{equation}
Let us restrict the set of all delta-differentiable functions
$\eta(\cdot)$ with $rd$-continuous derivatives to those which
satisfy $\eta(a)=\eta(b)=0$ (this condition is
satisfied by all the admissible variations $\eta(\cdot)$ in the
case both $y(a)=y_a$ and $y(b)=y_b$ are fixed). For these
functions we have
\begin{equation*}
\int_{a}^{b}\left[L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
-\int_{a}^{\sigma(t)}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))
\Delta\xi\right] \cdot \eta^\Delta(t)\Delta t = 0 \, .
\end{equation*}
Therefore, by the lemma of Dubois-Reymond (Lemma~\ref{lem:DR}),
there exists a constant $c\in\mathbb{R}^n$ such that \eqref{eulerint} holds:
\begin{equation}
\label{eq:1:5:3} L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
-\int_{a}^{\sigma(t)}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi=c
\, ,
\end{equation}
for all $t\in\mathbb{T}^k$. Because of \eqref{eq:1:5:3},
condition \eqref{eq:1:5:2} simplifies to
\begin{equation*}
\left.\left(\int_a^t
L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi
\cdot \eta(t)\right)\right|_{t=a}^{t=b}+\left.
c\cdot\eta(t)\right|_{t=a}^{t=b}=0,
\end{equation*}
for any admissible $\eta(\cdot)$. If $y(a) = y_a$ is not present
in problem \eqref{P1} (so that $\eta(a)$ need not be zero), taking
$\eta(t) = t-b$ we find that $c = 0$; if $y(b) = y_b$ is not
present, taking $\eta(t) = t-a$ we find that
$\int_a^b L_{y}(t,y_\ast(t),y_\ast^\Delta(t))\Delta t + c = 0$. In
each case, applying the respective condition to \eqref{eq:1:5:3}
(evaluated at $t=a$ and at $t=\rho(b)$, respectively) and having
in mind formula \eqref{sigma}, we may state that
\begin{multline*}
L_{y^\Delta}(a,y_\ast(a),y_\ast^\Delta(a))
-\int_{a}^{\sigma(a)}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi=0\\
\Leftrightarrow L_{y^\Delta}(a,y_\ast(a),y_\ast^\Delta(a))
-\mu(a)L_{y}(a,y_\ast(a),y_\ast^\Delta(a))=0,
\end{multline*}
and (note that $\sigma(\rho(b))=b$ and
$\int_{a}^{b}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi=-c$)
\begin{multline*}
L_{y^\Delta}(\rho(b),y_\ast(\rho(b)),y_\ast^\Delta(\rho(b)))
-\int_{a}^{b}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi=c\\
\Leftrightarrow
L_{y^\Delta}(\rho(b),y_\ast(\rho(b)),y_\ast^\Delta(\rho(b)))=0.
\end{multline*}
\end{proof}
\begin{rem}\label{rem1}
Since $\sigma(t)\geq t$ for all $t\in\mathbb{T}$, we must have
\begin{multline*}
L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
-\int_{a}^{\sigma(t)}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi=c\\
\Leftrightarrow L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
-\mu(t)L_{y}(t,y_\ast(t),y_\ast^\Delta(t)) \\
=\int_{a}^{t}L_{y}(\xi,y_\ast(\xi),y_\ast^\Delta(\xi))\Delta\xi+c,
\end{multline*}
by formula \eqref{sigma}. Delta differentiating both sides, we
obtain
\begin{multline}
\left(L_{y^\Delta}(t,y_\ast(t),y_\ast^\Delta(t))
-\mu(t)L_{y}(t,y_\ast(t),y_\ast^\Delta(t))\right)^\Delta \\
=L_{y}(t,y_\ast(t),y_\ast^\Delta(t)),\
t\in\mathbb{T}^{k^2}.\label{minha:E-L}
\end{multline}
Note that we cannot expand the left-hand side of this last
equation, because we are not assuming that $\mu(t)$ is delta
differentiable; in general it is not (see Example~1.55, page 21,
of \cite{livro}). We say
that \eqref{minha:E-L} is the Euler-Lagrange equation for problem
\eqref{P1} in the \emph{delta differentiated} form.
\end{rem}
As mentioned in the introduction, the formulations of the problems
of the calculus of variations on time scales with
``$\left(t,y^{\sigma}(t),y^\Delta(t)\right)$'' and with
``$\left(t,y(t),y^\Delta(t)\right)$'' are equivalent. It is
straightforward to derive the previous Euler-Lagrange equation
\eqref{eq:EL:Boh} from our equation \eqref{minha:E-L}, and
vice versa (one can derive \eqref{minha:E-L} directly from
\eqref{eq:EL:Boh}).
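The equivalence is easy to check numerically: on any discrete time scale, substituting $y^\sigma(t)=y(t)+\mu(t)y^\Delta(t)$ makes the two integrands coincide pointwise. A Python sketch (our own illustration, with an arbitrary Lagrangian and trajectory):

```python
# Illustrative check that the two formulations agree: for delta
# differentiable y, y^sigma = y + mu * y^Delta, so the integrands
# L(t, y^sigma(t), y^Delta(t)) and F(t, y(t), y^Delta(t)) coincide.

T = [0.0, 1.0, 1.5, 3.0]                         # sample time scale

def sigma(t):
    later = [s for s in T if s > t]
    return min(later) if later else t

L = lambda t, y, v: t * y + v ** 2               # sample Lagrangian
y = {0.0: 1.0, 1.0: 2.0, 1.5: -1.0, 3.0: 0.5}    # sample trajectory

total_B = total_BSS = 0.0
for t in T[:-1]:
    m = sigma(t) - t                             # mu(t)
    v = (y[sigma(t)] - y[t]) / m                 # y^Delta(t)
    total_B += m * L(t, y[sigma(t)], v)          # formulation (eq:EL:B)
    F = L(t, y[t] + m * v, v)                    # F(t,y,v) = L(t, y+mu v, v)
    total_BSS += m * F                           # formulation (eq:EL:BSS)
assert abs(total_B - total_BSS) < 1e-12
print("functionals agree:", total_B)
```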
\begin{cor}
If $y_\ast \in C_{\textrm{rd}}^1[\mathbb{T}]$ is a weak local
minimizer of
$$\mathcal{L}[y(\cdot)]=\int_{a}^{b}L(t,y^\sigma(t),y^\Delta(t))\Delta
t\, , \quad \mbox{$\left(y(a)=y_a\right)$,
$\left(y(b)=y_b\right)$},$$ then the Euler-Lagrange equation
\eqref{eq:EL:Boh} holds. If $y(a)$ is free, then the extra
transversality condition (natural boundary condition)
\begin{equation}
\label{eq:trv:prbBcsig:a}
L_{y^\Delta}(a,y_\ast^\sigma(a),y_\ast^\Delta(a)) = 0
\end{equation}
holds; if $y(b)$ is free, then
\begin{equation}
\label{eq:trv:prbBcsig:b}
L_{y^\sigma}(\rho(b),y_\ast^\sigma(\rho(b)),y_\ast^\Delta(\rho(b)))\mu(\rho(b))
+L_{y^\Delta}(\rho(b),y_\ast^\sigma(\rho(b)),y_\ast^\Delta(\rho(b)))
= 0\, .
\end{equation}
\end{cor}
\begin{proof}
Since $y(t)$ is delta differentiable, then \eqref{transfor} holds.
This permits us to write
$$L(t,y^\sigma(t),y^\Delta(t))=L(t,y(t)+\mu(t)y^\Delta(t),y^\Delta(t))
=F(t,y(t),y^\Delta(t)).$$ Applying equation \eqref{minha:E-L} to
the functional $F$ we obtain
$$\left(F_{y^\Delta}(t,y(t),y^\Delta(t))
-\mu(t)F_{y}(t,y(t),y^\Delta(t))\right)^\Delta=F_{y}(t,y(t),y^\Delta(t)).$$
But
\begin{align}
F_{y}(t,y(t),y^\Delta(t))&=L_{y^\sigma}(t,y^\sigma(t),y^\Delta(t)) \, , \nonumber\\
F_{y^\Delta}(t,y(t),y^\Delta(t))&=L_{y^\sigma}(t,y^\sigma(t),y^\Delta(t))\mu(t)
+L_{y^\Delta}(t,y^\sigma(t),y^\Delta(t))\, ,\nonumber
\end{align}
and the result follows.
\end{proof}
\subsection{The Lagrange problem on time scales}
\label{subsec:Lag}
Now we consider a more general variational problem with
delta-differential side conditions:
\begin{equation}
\label{eq:PrbCO}
\begin{gathered}
J[y(\cdot),u(\cdot)]=\int_{a}^{b}L(t,y(t),u(t))\Delta t \longrightarrow \min \, , \\
y^\Delta(t)=\varphi(t,y(t),u(t)) \, , \\
\left(y(a)=y_a\right), \quad \left(y(b)=y_b\right) \, ,
\end{gathered}
\end{equation}
where $y(\cdot) \in C_{\textrm{rd}}^1[\mathbb{T}]$, $u(\cdot) \in
C_{\textrm{rd}}[\mathbb{T}]$, $y(t)\in\mathbb{R}^n$ and
$u(t)\in\mathbb{R}^m$ for all $t\in \mathbb{T}$, and $m \le n$. We
assume $L: \mathbb{T} \times \mathbb{R}^n \times \mathbb{R}^m
\rightarrow \mathbb{R}$ and $\varphi: \mathbb{T} \times
\mathbb{R}^n \times \mathbb{R}^m \rightarrow \mathbb{R}^n$ to be
$C^1$-functions of $y$ and $u$ for each $t$; and that for each
control function $u(\cdot) \in
C_{\textrm{rd}}[\mathbb{T};\mathbb{R}^m]$ there exists a
corresponding $y(\cdot) \in
C_{\textrm{rd}}^1[\mathbb{T};\mathbb{R}^n]$ solution of the
$\Delta$-differential equation $y^\Delta(t)=\varphi(t,y(t),u(t))$.
We remark that existence and uniqueness conditions for
O$\Delta$Es have been available since the very beginning of the
theory of time scales (see \cite[Theorem~8]{Hilger97}). Roughly speaking,
forward solutions exist, while existence of backward solutions
needs extra assumptions (\textrm{e.g.} regressivity). In control
theory, however, one usually needs only forward solutions, so we
do not need to impose such extra assumptions
\cite{ZbigEwaPaw:CS06}.
We are interested in finding necessary conditions for a pair
$\left(y_\ast,u_\ast\right)$ to be a weak local minimizer of $J$.
\begin{defin}
Take an admissible pair $\left(y_\ast,u_\ast\right)$. We say that
$\left(y_\ast,u_\ast\right)$ is a weak local minimizer for
\eqref{eq:PrbCO} if there exists $\delta > 0$ such that
$J[y_\ast,u_\ast] \leq J[y,u]$ for all admissible pairs
$\left(y,u\right)$ satisfying $\|y-y_\ast\|_{1,\infty} +
\|u-u_\ast\|_{\infty} <\delta$.
\end{defin}
\begin{rem}
Problem \eqref{eq:PrbCO} is very general and includes: (i) problem
\eqref{P1} (this is the particular case where $m = n$ and
$\varphi(t,y,u) = u$), (ii) the problem of the calculus of
variations with higher-order delta derivatives \eqref{problema }
(such problems receive special attention in Section~\ref{subsec:HO}
below), (iii) isoperimetric problems on time scales. Suppose that
the isoperimetric condition
\begin{equation*}
I[y(\cdot),u(\cdot)] = \int_a^b g\left(t,y(t),u(t)\right) \Delta t
= \beta \, ,
\end{equation*}
$\beta$ a given constant, is prescribed. We can introduce a new
state variable $y_{n+1}$ defined by
$$y_{n+1}(t)=\int_a^t g(\xi,y(\xi),u(\xi))\Delta \xi,\ t\in\mathbb{T},$$
with boundary conditions $y_{n+1}(a) = 0$ and $y_{n+1}(b) =
\beta$. Then
\begin{equation*}
y_{n+1}^{\Delta}(t)= g\left(t,y(t),u(t)\right),\ t\in\mathbb{T}^k,
\end{equation*}
and we can always recast an isoperimetric problem as a Lagrange
problem \eqref{eq:PrbCO}.
\end{rem}
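On a discrete time scale the state augmentation above is just a running sum. The following Python sketch (our own illustration; for simplicity the sample $g$ depends only on $t$) builds the extra state and checks $y_{n+1}^\Delta=g$ and $y_{n+1}(b)=\beta$:

```python
# Illustrative sketch of the state-augmentation trick on a discrete
# time scale: the extra state y_{n+1} is the running delta integral
# of g, so its delta derivative recovers g and its final value is beta.

T = [0.0, 0.5, 1.0, 2.0]        # sample time scale (our choice)

def sigma(t):
    later = [s for s in T if s > t]
    return min(later) if later else t

g = lambda t: t + 1             # sample integrand (our choice)

# y_{n+1}(t) = int_a^t g(xi) Delta xi  (cumulative sum of mu * g)
y_extra = {T[0]: 0.0}
for t in T[:-1]:
    y_extra[sigma(t)] = y_extra[t] + (sigma(t) - t) * g(t)

beta = y_extra[T[-1]]           # the prescribed isoperimetric value
for t in T[:-1]:
    m = sigma(t) - t
    # y_{n+1}^Delta(t) = g(t)
    assert abs((y_extra[sigma(t)] - y_extra[t]) / m - g(t)) < 1e-12
print("beta =", beta)
```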
Establishing necessary optimality conditions for \eqref{eq:PrbCO}
is more involved than for the basic problem of the calculus of
variations on time scales, \eqref{eq:EL:B} or \eqref{eq:EL:BSS},
owing to the possible existence of abnormal extremals
(Definition~\ref{def:abn}). The abnormal case never occurs for the
basic problem (Proposition~\ref{rem:CV:CN}).
\begin{thm}[The weak maximum principle on time scales]
\label{thm:PMP} If $\left(y_\ast(\cdot),u_\ast(\cdot)\right)$ is a
weak local minimizer of problem \eqref{eq:PrbCO}, then there
exists a set of multipliers $(\psi_{0_\ast}, \psi_\ast(\cdot)) \ne
0$, where $\psi_{0_\ast}$ is a nonnegative constant and
$\psi_\ast(\cdot) : \mathbb{T} \rightarrow \mathbb{R}^n$ is a
delta differentiable function on $\mathbb{T}^k$, such that
$\left(y_\ast(\cdot),u_\ast(\cdot),\psi_{0_\ast},\psi_\ast(\cdot)\right)$
satisfy
\begin{gather}
y_\ast^\Delta(t) = H_{\psi^\sigma}(t,y_\ast(t),u_\ast(t),
\psi_{0_\ast},\psi_\ast^\sigma(t))\, , \quad
\text{($\Delta$-dynamic equation for $y$)} \label{3}\\
\psi^\Delta_\ast(t)=- H_{y}(t,y_\ast(t),u_\ast(t),
\psi_{0_\ast},\psi_\ast^\sigma(t))\, ,
\quad \text{($\Delta$-dynamic equation for $\psi$)} \label{1} \\
H_{u}(t,y_\ast(t),u_\ast(t),\psi_{0_\ast},\psi_\ast^\sigma(t))=
0\, , \quad \text{($\Delta$-stationary condition)} \label{2}
\end{gather}
for all $t\in\mathbb{T}^k$, where the Hamiltonian
function $H$ is defined by
\begin{equation}
\label{eq:def:Ham} H(t,y,u,\psi_0,\psi^\sigma)= \psi_0 L(t,y,u)
+\psi^\sigma\cdot\varphi(t,y,u) \, .
\end{equation}
If $y(a)$ is free in \eqref{eq:PrbCO}, then
\begin{equation}
\label{eq:trvCondPL:a} \psi_\ast(a) = 0 \, ;
\end{equation}
if $y(b)$ is free in \eqref{eq:PrbCO}, then
\begin{equation}
\label{eq:trvCondPL:b} \psi_\ast(b) = 0 \, .
\end{equation}
\end{thm}
\begin{rem}
From the definition \eqref{eq:def:Ham} of $H$, it follows
immediately that \eqref{3} holds true for any admissible pair
$\left(y(\cdot),u(\cdot)\right)$ of problem \eqref{eq:PrbCO}.
Indeed, condition \eqref{3} is nothing more than the control
system $y_\ast^\Delta(t)=\varphi(t,y_\ast(t),u_\ast(t))$.
\end{rem}
\begin{rem}
For the time scale $\mathbb{T} = \mathbb{Z}$, \eqref{3}-\eqref{2}
reduce to well-known conditions in discrete time (see
\textrm{e.g.} \cite[Ch.~8]{Sethi}): the $\Delta$-dynamic equation
for $y$ takes the form $y(k+1)-y(k) =
H_{\psi}\left(k,y(k),u(k),\psi_0,\psi(k+1)\right)$; the
$\Delta$-dynamic equation for $\psi$ gives $\psi(k+1)-\psi(k) =
-H_{y}\left(k,y(k),u(k),\psi_0,\psi(k+1)\right)$; and the
$\Delta$-stationary condition reads as
$H_u\left(k,y(k),u(k),\psi_0,\psi(k+1)\right) = 0$; with the
Hamiltonian $H = \psi_0 L(k,y(k),u(k)) + \psi(k+1) \cdot
\varphi(k,y(k),u(k))$. For $\mathbb{T} = \mathbb{R}$,
Theorem~\ref{thm:PMP} is known in the literature as \emph{Hestenes
necessary condition}, which is a particular case of the Pontryagin
Maximum Principle \cite{pmp}.
\end{rem}
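The discrete conditions in this remark can be tested on a small LQ example (entirely our own construction, not from the text): take $\varphi(k,y,u)=u$, $L=u^2+y^2$, $y(0)=1$ fixed and $y(K)$ free, so that $\psi(K)=0$ and, by normality, $\psi_0=1$. The Python sketch below minimizes $J$ by finite-difference gradient descent, builds the adjoint $\psi$ through the backward recursion $\psi(k)=\psi(k+1)+H_y$, and checks the stationarity condition $H_u=2u(k)+\psi(k+1)=0$:

```python
# Illustrative check of the weak maximum principle on T = Z-type
# scale {0,...,K} (our own LQ example): dynamics y(k+1) = y(k) + u(k)
# (varphi = u), L = u^2 + y^2, y(0) fixed, y(K) free, psi_0 = 1.

K, y0 = 4, 1.0

def rollout(u):
    y = [y0]
    for k in range(K):
        y.append(y[k] + u[k])          # y^Delta(k) = u(k)
    return y

def J(u):
    y = rollout(u)
    return sum(u[k] ** 2 + y[k] ** 2 for k in range(K))

# minimize the convex quadratic J by finite-difference gradient descent
u = [0.0] * K
h, lr = 1e-6, 0.05
for _ in range(3000):
    grad = []
    for j in range(K):
        up = u[:]
        up[j] += h
        grad.append((J(up) - J(u)) / h)
    u = [u[j] - lr * grad[j] for j in range(K)]

# adjoint: psi(K) = 0, psi(k) = psi(k+1) + H_y = psi(k+1) + 2 y(k)
y = rollout(u)
psi = [0.0] * (K + 1)
for k in range(K - 1, -1, -1):
    psi[k] = psi[k + 1] + 2 * y[k]

# Delta-stationary condition: H_u = 2 u(k) + psi(k+1) = 0 at the optimum
for k in range(K):
    assert abs(2 * u[k] + psi[k + 1]) < 1e-3
print("stationarity verified; u =", [round(v, 4) for v in u])
```

The analytic optimum here is $u=(-8/13,-3/13,-1/13,0)$, which the descent recovers to within the stated tolerance.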
\begin{cor}[Lagrange multiplier rule on time scales]
\label{cor:LagMultRule} If
$\left(y_\ast(\cdot),u_\ast(\cdot)\right)$ is a weak local
minimizer of problem \eqref{eq:PrbCO}, then there exists a
collection of multipliers $(\psi_{0_\ast}, \psi_\ast(\cdot))$,
$\psi_{0_\ast}$ a~nonnegative constant and $\psi_\ast(\cdot) :
\mathbb{T} \rightarrow \mathbb{R}^n$ a delta differentiable
function on $\mathbb{T}^k$, not all vanishing, such that
$\left(y_\ast(\cdot),u_\ast(\cdot),\psi_{0_\ast},\psi_\ast(\cdot)\right)$
satisfy the Euler-Lagrange equation of the augmented functional
$J^\ast$:
\begin{equation}
\label{eq:prb:Lstar}
\begin{split}
J^\ast[y(\cdot),&u(\cdot),\psi(\cdot)] = \int_{a}^{b}
L^\ast\left(t,y(t),u(t),\psi^\sigma(t),y^\Delta(t)\right) \Delta t \\
&= \int_{a}^{b} \left[ \psi_0 L(t,y(t),u(t)) + \psi^\sigma(t)
\cdot
\left( \varphi(t,y(t),u(t)) - y^\Delta(t) \right)\right] \Delta t \\
&=
\int_{a}^{b}[H(t,y(t),u(t),\psi_0,\psi^\sigma(t))-\psi^\sigma(t)
\cdot y^\Delta (t)]\Delta t \, .
\end{split}
\end{equation}
\end{cor}
\begin{proof}
The Euler-Lagrange equations \eqref{minha:E-L} and \eqref{eq:EL:Boh}
applied to \eqref{eq:prb:Lstar} give
\begin{gather*}
\left(L^\ast_{y^\Delta} -\mu(t)L^\ast_{y}\right)^\Delta =L^\ast_{y} \, , \label{eq:1poss}\\
\left(-\mu(t)L^\ast_{u}\right)^\Delta =L^\ast_{u} \, , \quad
L^\ast_{\psi^\sigma} = 0 \, , \nonumber
\end{gather*}
that is,
\begin{gather}
\left(\psi^\sigma(t) + \mu(t) \cdot H_y\right)^\Delta = - H_{y} \, , \label{eq:1}\\
(-\mu(t)H_{u})^\Delta =H_{u} \, , \label{sei la} \\
y^\Delta(t) = H_{\psi^\sigma}\nonumber \, ,
\end{gather}
where the partial derivatives of $H$ are evaluated at
$(t,y(t),u(t),\psi_0,\psi^\sigma(t))$. Obviously, from \eqref{2}
we obtain \eqref{sei la}. It remains to prove that \eqref{1}
implies \eqref{eq:1} along
$\left(y_\ast(\cdot),u_\ast(\cdot),\psi_{0_\ast},\psi_\ast(\cdot)\right)$.
Indeed, from \eqref{1} we can write $\mu(t) \psi^\Delta(t) = -
\mu(t) H_y$, which is equivalent to $\psi(t) = \psi^\sigma(t) +
\mu(t) H_y$; delta differentiating both sides of the latter and
using \eqref{1} again, we get $\left(\psi^\sigma(t) + \mu(t)
H_y\right)^\Delta = \psi^\Delta(t) = -H_y$, which is \eqref{eq:1}.
\end{proof}
\begin{rem}
Condition \eqref{1} (equivalently, \eqref{eq:1}) implies that along the minimizer
\begin{equation}
\label{eq:new10}
\psi^\sigma(t)=-\int_a^{\sigma(t)}H_{y}(\xi,y(\xi),
u(\xi),\psi_0,\psi^\sigma(\xi))\Delta\xi - c
\end{equation}
for some $c \in \mathbb{R}^n$.
\end{rem}
\begin{rem}
The assertion in Theorem~\ref{thm:PMP} that the multipliers cannot
be all zero is crucial. Indeed, without this requirement, for any
admissible pair $\left(y(\cdot),u(\cdot)\right)$ of
\eqref{eq:PrbCO} there would always exist a set of multipliers
satisfying \eqref{1}-\eqref{2} (namely, $\psi_0 = 0$ and $\psi(t)
\equiv 0$).
\end{rem}
\begin{rem}
Throughout the text we consider $\psi$ as a row vector.
\end{rem}
\begin{rem}
\label{rem:psi0:zoo} If the multipliers
$\left(\psi_0,\psi(\cdot)\right)$ satisfy the conditions of
Theorem~\ref{thm:PMP}, then $\left(\gamma \psi_0,\gamma
\psi(\cdot)\right)$ also do, for any $\gamma
> 0$. This simple observation allows us to conclude that it is
enough to consider two cases: $\psi_0 = 0$ or $\psi_0 = 1$.
\end{rem}
\begin{defin}
\label{def:abn} An admissible quadruple
$\left(y(\cdot),u(\cdot),\psi_0,\psi(\cdot)\right)$ satisfying
conditions \eqref{3}-\eqref{2} (also \eqref{eq:trvCondPL:a} or
\eqref{eq:trvCondPL:b} if $y(a)$ or $y(b)$ are, respectively,
free) is called an extremal for problem \eqref{eq:PrbCO}. An
extremal is said to be normal if $\psi_0 = 1$ and abnormal if
$\psi_0 = 0$.
\end{defin}
So, Theorem~\ref{thm:PMP} asserts that every minimizer is an
extremal.
\begin{prop}
The Lagrange problem on time scales \eqref{eq:PrbCO} has no
abnormal extremals (in particular, all the minimizers are normal)
when at least one of the boundary conditions $y(a)$ or $y(b)$ is
absent (when $y(a)$ or $y(b)$ is free).
\end{prop}
\begin{proof}
Without loss of generality, let us consider $y(b)$ free. We want
to prove that the nonnegative constant $\psi_0$ is nonzero. By
Theorem~\ref{thm:PMP}, the multipliers $\psi_0$ and $\psi(t)$
cannot vanish simultaneously at any point $t \in \mathbb{T}$.
Since $y(b)$ is free, the solution to the problem must satisfy the
transversality condition $\psi(b)=0$, and therefore $\psi_0 \ne
0$. Since $\psi_0$ is a nonnegative constant, it is positive, and
we can normalize it (Remark~\ref{rem:psi0:zoo}) to unity.
\end{proof}
\begin{rem}
\label{rem:CAE} In the general situation abnormal extremals may
occur. More precisely (see proof of Theorem~\ref{thm:PMP}),
abnormality is characterized by the existence of a nontrivial
solution $\psi(t)$ for the system $\psi^\Delta(t) + \psi^\sigma(t)
\cdot \varphi_y = 0$.
\end{rem}
\begin{prop}
\label{rem:CV:CN} There are no abnormal extremals for problem
\eqref{P1}, even when $y(a)$ and $y(b)$ are both fixed
($y(a) = y_a$, $y(b) = y_b$).
\end{prop}
\begin{proof}
Problem \eqref{P1} is the particular case of \eqref{eq:PrbCO} with
$y^\Delta(t)= u(t)$. If $\psi_0 = 0$, then the Hamiltonian
\eqref{eq:def:Ham} takes the form $H = \psi^\sigma \cdot u$. From
Theorem~\ref{thm:PMP}, $\psi^\Delta = 0$ and $\psi^\sigma = 0$
for all $t \in \mathbb{T}^k$. Since $\psi^\sigma = \psi + \mu(t)
\psi^\Delta$, this means that $\psi_0$ and $\psi$ would both be
zero, which is not a possibility.
\end{proof}
\begin{cor}
For problem \eqref{P1}, Theorem~\ref{thm:PMP} gives
Theorem~\ref{thm:1}.
\end{cor}
\begin{proof}
For problem \eqref{P1} we have $\varphi(t,y,u) = u$. From
Proposition~\ref{rem:CV:CN}, the Hamiltonian becomes
$H(t,y,u,\psi_0,\psi^\sigma)=L(t,y,u) +\psi^\sigma\cdot u$. By the
$\Delta$-stationary condition \eqref{2} we may write $L_u(t,y,u)
+\psi^\sigma=0$. Now apply \eqref{eq:new10} and the result
follows.
\end{proof}
To prove Theorem~\ref{thm:PMP} we need the following result:
\begin{lem}[Fundamental lemma of the calculus of variations on time scales]
\label{lema fundamental} Let $g\in C_{\textrm{rd}}$,
$g:\mathbb{T}^k\rightarrow\mathbb{R}^n$. Then,
$$
\int_{a}^{b}g(t) \cdot \eta(t)\Delta t=0 \quad \mbox{for all }
\eta\in C_{rd}
$$
if and only if
$$
g(t)=0 \quad \mbox{on }\ \mathbb{T}^k \, .
$$
\end{lem}
\begin{proof}
If $g(t)=0$ on $\mathbb{T}^k$, then obviously $\int_{a}^{b}g(t)
\cdot \eta(t)\Delta t=0$, for all $\eta\in C_{rd}$.
Now, suppose (without loss of generality) that $g(t_0)>0$ for some
$t_0\in\mathbb{T}^k$. We will divide the proof into two steps:
Step 1: Assume that $t_0$ is right scattered. Define in
$\mathbb{T}^k$
\[ \eta(t) = \left\{ \begin{array}{ll}
1 & \mbox{if $t = t_0$};\\
0 & \mbox{if $t \neq t_0$}.\end{array} \right. \] Then $\eta$ is
rd-continuous and
$$\int_a^b g(t)\eta(t)\Delta t=\int_{t_0}^{\sigma(t_0)} g(t)\eta(t)\Delta t=\mu(t_0)g(t_0)>0,$$
which is a contradiction.
Step 2: Suppose that $t_0$ is right dense. Since $g$ is
rd-continuous, it is continuous at $t_0$. So there exists
$\delta>0$ such that for all
$t\in(t_0-\delta,t_0+\delta)\cap\mathbb{T}^k$ we have $g(t)>0$.
If $t_0$ is left-dense, define in $\mathbb{T}^k$
\[ \eta(t) = \left\{ \begin{array}{ll}
(t-t_0+\delta)^2(t-t_0-\delta)^2 & \mbox{if $t \in (t_0-\delta,t_0+\delta)$};\\
0 & \mbox{otherwise}.\end{array} \right. \] It follows that $\eta$
is rd-continuous and
$$\int_a^b g(t)\eta(t)\Delta t=\int_a^{t_0-\delta} g(t)\eta(t)\Delta t
+\int_{t_0-\delta}^{t_0+\delta} g(t)\eta(t)\Delta t
+\int_{t_0+\delta}^{b} g(t)\eta(t)\Delta t>0,$$
which is a contradiction.
If $t_0$ is left-scattered, define in $\mathbb{T}^k$
\[ \eta(t) = \left\{ \begin{array}{ll}
(t-t_0-\delta)^2 & \mbox{if $t \in [t_0,t_0+\tilde{\delta})$};\\
0 & \mbox{otherwise},\end{array} \right. \] where
$0<\tilde{\delta}<\min\{\mu(\rho(t_0)),\delta\}$. We have that $\eta$
is rd-continuous and
$$\int_a^b g(t)\eta(t)\Delta t=\int_{t_0}^{t_0+\tilde{\delta}} g(t)\eta(t)\Delta t>0,$$
that again leads us to a contradiction.
\end{proof}
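On a purely discrete time scale the $\Delta$-integral reduces to the weighted sum $\int_a^b g\,\Delta t=\sum_i g(t_i)\mu(t_i)$, and the indicator function of Step 1 isolates the single term $\mu(t_0)g(t_0)$. A minimal numerical sketch (the time scale and the function $g$ below are illustrative choices, not from the text):

```python
import numpy as np

# A nonuniform discrete time scale T = {t_0 < t_1 < ... < t_N}
T = np.array([0.0, 0.5, 0.7, 1.5, 2.0, 3.0])
mu = np.diff(T)                      # graininess mu(t_i) = t_{i+1} - t_i

def delta_integral(vals):
    """Delta integral over [a, b]: sum of vals(t_i) * mu(t_i) over t_i in T^k."""
    return float(np.sum(vals[:-1] * mu))

g = T**2 - 1.0                       # an arbitrary rd-continuous function on T

# Step 1 of the proof: eta = indicator of a right-scattered point t_0
i0 = 3                               # index of t_0 (every point here is right-scattered)
eta = np.zeros_like(T)
eta[i0] = 1.0

# The integral of g * eta picks out exactly mu(t_0) * g(t_0), which is nonzero
lhs = delta_integral(g * eta)
rhs = mu[i0] * g[i0]
assert np.isclose(lhs, rhs)
```

In particular, if $g(t_0)\neq 0$ at a right-scattered point, this choice of $\eta$ makes the integral nonzero, exactly as in the contradiction of Step 1.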
\begin{proof}[Proof of Theorem~\ref{thm:PMP}]
We begin by noting that the controls $u(t) = \left(u_1(t),\ldots,u_m(t)\right)$
in problem \eqref{eq:PrbCO}, $t\in\mathbb{T}^k$, are arbitrarily
specified functions. Once $u(\cdot) \in
C_{\textrm{rd}}[\mathbb{T}; \mathbb{R}^m]$ is fixed, $y(t) =
\left(y_1(t),\ldots,y_n(t)\right)$, $t\in\mathbb{T}^k$, is
determined from the system of delta-differential equations
$y^\Delta(t)=\varphi(t,y(t),u(t))$ (and boundary conditions, if
present). Since $u(\cdot)$ is an arbitrary function,
the variations $\omega(\cdot) \in C_{\textrm{rd}}[\mathbb{T};
\mathbb{R}^m]$ of $u(\cdot)$ can also be considered arbitrary.
This is not true, however, for the variations $\eta(\cdot) \in
C_{\textrm{rd}}^1[\mathbb{T}; \mathbb{R}^n]$ of
$y(\cdot)$. Suppose that $(y_\ast(\cdot),u_\ast(\cdot))$ is a weak
local minimizer of $J[\cdot,\cdot]$. Let $\varepsilon \in
(-\delta,\delta)$ be a small real parameter and $y_\varepsilon(t)
= y_\ast(t) + \varepsilon \eta(t)$ (with $\eta(a) = 0$ if
$y(a)=y_a$ is given; $\eta(b) = 0$ if $y(b)=y_b$ is given) be the
trajectory generated by the control $u_\varepsilon(t) = u_\ast(t)
+ \varepsilon \omega(t)$, $\omega(\cdot) \in
C_{\textrm{rd}}[\mathbb{T}; \mathbb{R}^m]$:
\begin{equation}
\label{eq:CSeps}
y_\varepsilon^\Delta(t)=\varphi(t,y_\varepsilon(t),u_\varepsilon(t))
\, ,
\end{equation}
$t \in \mathbb{T}^k$, $\left(y_\varepsilon(a) = y_a\right)$,
$\left(y_\varepsilon(b) = y_b\right)$. We define the following
function:
\begin{equation*}
\begin{split}
\Phi(\varepsilon) &= J\left[y_\varepsilon(\cdot),
u_\varepsilon(\cdot)\right] = J\left[y_\ast(\cdot)+\varepsilon
\eta(\cdot),u_\ast(\cdot) + \varepsilon \omega(\cdot) \right] \\
&= \int_a^b L\left(t,y_\ast(t)+\varepsilon \eta(t), u_\ast(t) +
\varepsilon \omega(t) \right) \Delta t \, .
\end{split}
\end{equation*}
It follows that $\Phi : (-\delta,\delta) \rightarrow \mathbb{R}$
has a minimum at $\varepsilon=0$, so we must have $\Phi'(0) = 0$.
From this condition we can write that
\begin{equation}
\label{eq:VL} \int_a^b \left[ \psi_0
L_y\left(t,y_\ast(t),u_\ast(t)\right) \cdot \eta(t) + \psi_0
L_{u}\left(t,y_\ast(t),u_\ast(t)\right) \cdot \omega(t) \right]
\Delta t = 0
\end{equation}
for any real constant $\psi_0$. Differentiating \eqref{eq:CSeps}
with respect to $\varepsilon$, we get
\begin{equation*}
\eta^\Delta(t) = \varphi_y(t,y_\varepsilon(t),u_\varepsilon(t))
\cdot \eta(t) + \varphi_{u}(t,y_\varepsilon(t),u_\varepsilon(t))
\cdot \omega(t) \, .
\end{equation*}
In particular, with $\varepsilon = 0$,
\begin{equation}
\label{eq:VSC} \eta^\Delta(t) = \varphi_y(t,y_\ast(t),u_\ast(t))
\cdot \eta(t) + \varphi_{u}(t,y_\ast(t),u_\ast(t)) \cdot \omega(t)
\, .
\end{equation}
Let $\psi(\cdot) \in C_{\textrm{rd}}^1[\mathbb{T};
\mathbb{R}^n]$ be an as yet unspecified function. Multiplying
\eqref{eq:VSC} by $\psi^\sigma(t) =
\left[\psi_1^\sigma(t),\ldots,\psi_n^\sigma(t)\right]$, and
delta-integrating the result with respect to $t$ from $a$ to $b$,
we get that
\begin{equation}
\label{eq:MulPsi:Int} \int_a^b \psi^\sigma(t) \cdot \eta^\Delta(t)
\Delta t = \int_a^b \left[\psi^\sigma(t) \cdot \varphi_y \cdot
\eta(t) + \psi^\sigma(t) \cdot \varphi_{u} \cdot \omega(t)\right]
\Delta t
\end{equation}
for any $\psi(\cdot) \in \textrm{C}_{\textrm{rd}}^1[\mathbb{T};
\mathbb{R}^n]$. Integrating by parts
(see Lemma~\ref{integracao:partes}, formula 1),
\begin{equation}
\label{eq:fzIP}
\begin{split}
\int_a^b \psi^\sigma(t) \cdot \eta^\Delta(t) \Delta t &=
\left.\psi(t) \cdot \eta(t)\right|_a^b
- \int_a^b \psi^\Delta(t) \cdot \eta(t) \Delta t \, ,
\end{split}
\end{equation}
and we can write from \eqref{eq:VL}, \eqref{eq:MulPsi:Int} and
\eqref{eq:fzIP} that
\begin{multline}
\label{eq:LogV} \int_a^b \Bigl[ \left(\psi^\Delta(t) + \psi_0 L_y
+ \psi^\sigma(t) \cdot \varphi_y\right) \cdot \eta(t) \\
+ \left(\psi_0 L_{u} + \psi^\sigma(t) \cdot \varphi_{u}\right)
\cdot \omega(t) \Bigr] \Delta t - \left.\psi(t) \cdot
\eta(t)\right|_a^b = 0
\end{multline}
holds for any $\psi(t)$. Using the definition \eqref{eq:def:Ham} of $H$,
we can rewrite \eqref{eq:LogV} as
\begin{equation}
\label{eq:QF} \int_a^b \left[ \left(\psi^\Delta(t) + H_y\right)
\cdot \eta(t) + H_{u} \cdot \omega(t) \right] \Delta t -
\left.\psi(t) \cdot \eta(t)\right|_a^b = 0 \, .
\end{equation}
It is not yet possible to employ Lemma~\ref{lema fundamental},
because the variations $\eta(t)$ are not
arbitrary. Now choose $\psi(t) = \psi_\ast(t)$ so that the
coefficient of $\eta(t)$ in \eqref{eq:QF} vanishes:
$\psi_\ast^\Delta(t) = - H_y$ (and $\psi_\ast(a) = 0$ if $y(a)$ is
free, \textrm{i.e.} $\eta(a) \ne 0$; $\psi_\ast(b) = 0$ if $y(b)$
is free, \textrm{i.e.} $\eta(b) \ne 0$). In the normal case
$\psi_\ast(t)$ is determined by
$\left(y_\ast(\cdot),u_\ast(\cdot)\right)$, and we choose
$\psi_{0_\ast} = 1$. The abnormal case is characterized by the
existence of a non-trivial solution $\psi_\ast(t)$ for the system
$\psi_\ast^\Delta(t) + \psi_\ast^\sigma(t) \cdot \varphi_y = 0$:
in that case we choose $\psi_{0_\ast} = 0$ so that the
coefficient of $\eta(t)$ in \eqref{eq:LogV} or \eqref{eq:QF}
vanishes. Given this choice of the multipliers, the necessary
optimality condition \eqref{eq:QF} takes the form
\begin{equation*}
\int_a^b H_{u} \cdot \omega(t) \Delta t = 0 \, .
\end{equation*}
Since $\omega(t)$ can be arbitrarily assigned for all
$t\in\mathbb{T}^k$, it follows from Lemma~\ref{lema fundamental}
that $H_{u} = 0$.
\end{proof}
\subsection{The higher-order problem on time scales}
\label{subsec:HO}
As a corollary of Theorem~\ref{thm:PMP} we obtain the
Euler-Lagrange equation for problem \eqref{problema }.
We first introduce some notation:
\begin{align}
y^0(t)&=y(t),\nonumber\\
y^1(t)&=y^\Delta(t),\nonumber\\
& \ \ \vdots\nonumber\\
y^{r-1}(t)&=y^{\Delta^{r-1}}(t),\nonumber\\
u(t)&=y^{\Delta^r}(t).\nonumber
\end{align}
\begin{thm}
\label{thm:HO:E-L:TS} If $y_\ast\in\mathrm{C}_{rd}^r[\mathbb{T}]$
is a weak local minimizer for the higher-order problem
\eqref{problema }, then
\begin{equation}\label{111}
\psi_\ast^{r-1}(\sigma(t))= - L_{u}(t,x_\ast(t),u_\ast(t))
\end{equation}
holds for all $t\in\mathbb{T}^{k^r}$, where $x_\ast(t) =
\left(y_\ast(t),y_\ast^\Delta(t),\ldots,y_\ast^{\Delta^{r-1}}(t)\right)$
and $\psi_\ast^{r-1}(\sigma(t))$ is defined recursively by
\begin{align}
\psi_\ast^0(\sigma(t))&=-\int_a^{\sigma(t)}L_{y^0}(\xi,x_\ast(\xi),u_\ast(\xi))\Delta\xi+c_0 \, ,\label{222}\\
\psi_\ast^i(\sigma(t))&=-\int_a^{\sigma(t)}\left[L_{y^i}(\xi,x_\ast(\xi),u_\ast(\xi))
+\psi_\ast^{i-1}(\sigma(\xi))\right]\Delta\xi+c_i\label{333},\
i=1,\ldots,r-1 \, ,
\end{align}
with $c_j$, $j = 0,\ldots, r- 1$, constants. If
$y^{\Delta^i}(\alpha)$ is free in \eqref{problema } for some $i
\in \{0,\ldots,r-1\}$, $\alpha \in \{a,\rho^{r-1}(b)\}$, then the
correspondent condition $\psi_\ast^i(\alpha) = 0$ holds.
\end{thm}
\begin{rem}
From \eqref{111}, \eqref{222} and \eqref{333} it follows that
\begin{equation}\label{final}
L_{u}+\sum_{i=0}^{r-1}(-1)^{r-i}\int_a^{\sigma}\cdots\int_a^\sigma
L_{y^i}+[c_i]_{r-i-1}=0,
\end{equation}
where $[c_i]_{r-i-1}$ means that the constant is free from the
composition of the $r-i$ integrals when $i=r-1$ (for simplicity,
we have omitted the arguments in $L_{u}$ and $L_{y^i}$).
\end{rem}
\begin{rem}
If we delta differentiate \eqref{final} $r$ times, we obtain the
delta-differentiated equation for the problem of the calculus of
variations with higher-order delta derivatives. However, as
observed in Remark~\ref{rem1}, one can only expand formula
\eqref{final} under suitable conditions of delta differentiability
of $\mu(t)$.
\end{rem}
\begin{rem}
For the particular case with $\varphi(t,y,u) = u$, equation
\eqref{eulerint} is \eqref{final} with $r=1$.
\end{rem}
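Explicitly, for $r=1$ (so $u=y^\Delta$ and $x_\ast=y_\ast$), equations \eqref{111} and \eqref{222} specialize as follows; this is a sketch of the reduction, with $c_0$ the constant from the theorem:

```latex
% Sketch: the r = 1 specialization of (111) and (222), with u = y^Delta
$$
\psi_\ast^0(\sigma(t))
= -\int_a^{\sigma(t)} L_{y}\left(\xi,y_\ast(\xi),y_\ast^\Delta(\xi)\right)\Delta\xi + c_0 ,
\qquad
\psi_\ast^0(\sigma(t))
= -L_{y^\Delta}\left(t,y_\ast(t),y_\ast^\Delta(t)\right) ,
$$
so that
$$
L_{y^\Delta}\left(t,y_\ast(t),y_\ast^\Delta(t)\right)
= \int_a^{\sigma(t)} L_{y}\left(\xi,y_\ast(\xi),y_\ast^\Delta(\xi)\right)\Delta\xi - c_0 ,
$$
which is the Euler-Lagrange equation in $\Delta$-integral form.
```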
\begin{prop}
\label{prop:NoAbnCaseHO} The higher-order problem on time scales
\eqref{problema } does not admit abnormal extremals, even when the
boundary conditions $y^{\Delta^i}(a)$ and
$y^{\Delta^i}(\rho^{r-1}(b))$, $i = 0,\ldots,r-1$, are all fixed.
\end{prop}
\begin{rem}
\label{rem:Pneeds2rp1points} We require the time scale
$\mathbb{T}$ to have at least $2r+1$ points. Let us consider
problem \eqref{problema } with all the boundary conditions fixed.
Due to the fact that we have $r$ delta derivatives, the boundary
conditions $y^{\Delta^i}(a)=y_a^{i}$ and
$y^{\Delta^i}(\rho^{r-1}(b))=y_b^{i}$ for all $i\in\{0, \ldots,
r-1\}$, imply that we must have at least $2r$ points in order to
have the problem well defined. If we had only this number of
points, then the time scale could be written as
$\mathbb{T}=\{a,\sigma(a),\ldots,\sigma^{2r-1}(a)\}$ and
\begin{equation}
\label{snormal}
\begin{aligned}
\int_{a}^{\rho^{r-1}(b)}
L(t,y(t),&y^\Delta(t),\ldots,y^{\Delta^r}(t))\Delta t \\
&=\sum_{i=0}^{r-1}\int_{\sigma^i(a)}^{\sigma^{i+1}(a)}L(t,
y(t),y^\Delta(t),\ldots,y^{\Delta^r}(t))\Delta t \\
&=\sum_{i=0}^{r-1}L(\sigma^i(a),y(\sigma^i(a)),
y^\Delta(\sigma^i(a)),\ldots,y^{\Delta^r}(\sigma^i(a))),
\end{aligned}
\end{equation}
where we have used the fact that
$\rho^{r-1}(\sigma^{2r-1}(a))=\sigma^r(a)$. Now, having in mind
the boundary conditions and the formula
$$f^\Delta(t)=\frac{f(\sigma(t))-f(t)}{\mu(t)},$$
we can conclude that the sum in \eqref{snormal} would be constant
for every admissible function $y(\cdot)$, so there would be
nothing to minimize.
\end{rem}
The following technical result is used in the proof of
Proposition~\ref{prop:NoAbnCaseHO}.
\begin{lem}
\label{lemtecn} Suppose that a function
$f:\mathbb{T}\rightarrow\mathbb{R}$ is such that $f^\sigma(t)=0$
for all $t\in\mathbb{T}^k$. Then, $f(t)=0$ for all
$t\in\mathbb{T}\backslash \{a\}$ if $a$ is right-scattered.
\end{lem}
\begin{proof}
First note that, since $f^\sigma(t)=0$ for all $t\in\mathbb{T}^k$,
the function $f^\sigma$ is delta differentiable and hence continuous on $\mathbb{T}^k$.
Now, if $t$ is right-dense, the result is obvious. Suppose that
$t$ is right-scattered. We will analyze two cases: (i) if $t$ is
left-scattered, then $t\neq a$ and by hypothesis
$0=f^\sigma(\rho(t))=f(t)$; (ii) if $t$ is left-dense, then
$f(t)=\lim_{s\rightarrow t^-}f^\sigma(s)=f^\sigma(t)=0$, by the
continuity of $f^\sigma$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:NoAbnCaseHO}]
Suppose that $\psi_0=0$. With the notation \eqref{eq:PHO:CO}
introduced below, the higher order problem \eqref{problema } would
have the abnormal Hamiltonian given by
$$
H(t,y^0,\ldots,y^{r-1},u,\psi^0,\ldots,\psi^{r-1})
=\sum_{i=0}^{r-2}\psi^{i}(\sigma(t))\cdot y^{i+1}(t)
+\psi^{r-1}(\sigma(t))\cdot u(t)
$$
(compare with the normal Hamiltonian \eqref{eq:normal:Ham:P}).
From Theorem~\ref{thm:PMP}, we can write the system of equations:
\begin{equation}
\label{eq:syst:ab}
\left\{ \begin{array}{ll}
\hat{\psi}^0(t)&=0 \\
\hat{\psi}^1(t)&=-\psi^0(\sigma(t))\\
&\vdots\\
\hat{\psi}^{r-1}(t)&=-\psi^{r-2}(\sigma(t))\\
\psi^{r-1}(\sigma(t))&=0,
\end{array} \right.
\end{equation}
for all $t\in\mathbb{T}^{k^r}$, where we are using the notation
$\hat{\psi}^i(t)={\psi^i}^\Delta(t)$, $i = 0, \ldots, r-1$. From
the last equation, and in view of Lemma~\ref{lemtecn}, we have
$\psi^{r-1}(t)=0$, $\forall t\in\mathbb{T}^{k^{r+1}}\backslash\{a\}$ if
$a$ is right-scattered. This implies that $\hat{\psi}^{r-1}(t)=0$,
$\forall t\in\mathbb{T}^{k^{r}}\backslash\{a\}$ and consequently
$\psi^{r-2}(\sigma(t))=0$, $\forall
t\in\mathbb{T}^{k^{r}}\backslash\{a\}$. As before,
$\psi^{r-2}(t)=0$, $\forall
t\in\mathbb{T}^{k^{r+1}}\backslash\{a,\sigma(a)\}$ if $\sigma(a)$
is right-scattered. Repeating this procedure, we will finally have
$\hat{\psi}^1(t)=0$, $\forall
t\in\mathbb{T}^{k^{r}}\backslash\{a,\ldots,\sigma^{r-2}(a)\}$ if
$\sigma^{i}(a)$ is right-scattered for all $i\in\{0,\ldots,r-2\}$.
Now, the first and second equations in the system
\eqref{eq:syst:ab} imply that $\forall t\in
A=\mathbb{T}^{k^{r}}\backslash\{a,\ldots,\sigma^{r-2}(a)\}$
$$
0=\hat{\psi}^1(t)=-\psi^0(\sigma(t))
=-\left(\psi^0(t)+\mu(t)\hat{\psi}^0(t)\right)=-\psi^0(t)\ ,
$$
and hence $\psi^0(t)=0$.
Returning to the first equation, $\hat{\psi}^0(t)=0$ implies that $\psi^0(t)=c$,
$\forall t\in\mathbb{T}^{k^{r+1}}$, for some constant $c$. Since
the time scale has at least $2r+1$ points
(Remark~\ref{rem:Pneeds2rp1points}), the set $A$ is nonempty and
therefore $\psi^0(t)=0,\ \forall t\in\mathbb{T}^{k^{r+1}}$.
Substituting this in the second equation, we get
$\hat{\psi}^1(t)=0,\ \forall t\in\mathbb{T}^{k^{r}}$. As before,
it follows that $\psi^1(t)=d$, $\forall t\in\mathbb{T}^{k^{r+1}}$
and some constant $d$. But we have seen that there exists some
$t_0$ such that $\psi^1(t_0)=0$, hence $\psi^1(t)=0$, $\forall
t\in\mathbb{T}^{k^{r+1}}$. Repeating this procedure, we conclude
that $\psi^i(t)=0$ for all $i\in\{0,\ldots,r-1\}$ and all
$t\in\mathbb{T}^{k^{r}}$. This contradicts
Theorem~\ref{thm:PMP}, and we conclude that $\psi_0 \ne 0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:HO:E-L:TS}]
Denoting $\hat{y}(t)=y^\Delta(t)$, then problem \eqref{problema }
takes the following form:
\begin{equation}
\label{eq:PHO:CO}
\begin{gathered}
\mathcal{L}[y(\cdot)]=\int_{a}^{\rho^{r-1}(b)}L(t,y^0(t),y^1(t),
\ldots,y^{r-1}(t),u(t))\Delta t\longrightarrow\min, \\
\left\{ \begin{array}{l}
\hat{y}^0=y^1 \\
\hat{y}^1=y^2\\
\ \ \ \ \ \vdots\\
\hat{y}^{r-2}=y^{r-1}\\
\hat{y}^{r-1}=u
\end{array} \right.
\end{gathered}
\end{equation}
$$\left(y^i(a)=y_a^i\right),\ \left(y^i\left(\rho^{r-1}(b)\right)=y_b^i\right),\
i=0,\ldots,r-1,\ y_a^i\ \mbox{and}\ y_b^i\in\mathbb{R}^n.$$ System
\eqref{eq:PHO:CO} can be written in the form $y^\Delta = A y + B
u$, where
\begin{equation*}
y = \left(y^0,y^1,\ldots,y^{r-1}\right)
= \left(y_1^0,\ldots,y_n^0,y_1^1,\ldots,y_n^1,\ldots,y_n^{r-1}\right) \in \mathbb{R}^{n r}
\end{equation*}
and the matrices $A$ ($n r$ by $n r$) and $B$ ($n r$ by $n$) are
\begin{equation*}
A = \left(%
\begin{array}{ccccc}
0 & I & 0 & \cdots & 0 \\
0 & 0 & I & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & I \\
0 & 0 & 0 & \cdots & 0 \\
\end{array}%
\right) \, , \quad
B = col\{0,\ldots,0,I\}
\end{equation*}
in which $I$ denotes the $n$ by $n$ identity matrix, and $0$ the
$n$ by $n$ zero matrix. From Proposition~\ref{prop:NoAbnCaseHO}
we can fix $\psi_0 = 1$: problem \eqref{eq:PHO:CO}
is a particular case of \eqref{eq:PrbCO}
with the Hamiltonian given by
\begin{multline}
\label{eq:normal:Ham:P}
H(t,y^0,\ldots,y^{r-1},u,\psi^0,\ldots,\psi^{r-1})\\
=L(t,y^0,\ldots,y^{r-1},u)+\sum_{i=0}^{r-2}\psi^i(\sigma(\cdot))\cdot
y^{i+1} +\psi^{r-1}(\sigma(\cdot))\cdot u.
\end{multline}
From \eqref{eq:new10} and \eqref{2}, we obtain
\begin{align}
\psi^i(\sigma(t))&=-\int_a^{\sigma(t)}H_{y^i}(\xi,x(\xi),u(\xi),\psi^{\sigma}(\xi))\Delta\xi
+c_i,\ i\in\{0,\ldots,r-1\}\label{11}\\
0&=H_{u}(t,x(t),u(t),\psi^\sigma(t))\label{22},
\end{align}
respectively. Equation \eqref{22} is equivalent to \eqref{111},
and from \eqref{11} we get \eqref{222}-\eqref{333}.
\end{proof}
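The companion-form matrices $A$ and $B$ in the proof above can be assembled with Kronecker products and checked against the chain $\hat{y}^0=y^1,\ldots,\hat{y}^{r-1}=u$; the sizes $n$ and $r$ below are illustrative:

```python
import numpy as np

n, r = 2, 3                          # illustrative sizes: n-vector states, r derivatives

shift = np.eye(r, k=1)               # r x r upper shift (1's on the superdiagonal)
A = np.kron(shift, np.eye(n))        # nr x nr block matrix with I blocks above the diagonal
B = np.kron(np.eye(r)[:, -1:], np.eye(n))   # nr x n block column (0, ..., 0, I)^T

# Stack y = (y^0, ..., y^{r-1}) and pick an arbitrary control u
rng = np.random.default_rng(0)
blocks = [rng.standard_normal(n) for _ in range(r)]
y = np.concatenate(blocks)
u = rng.standard_normal(n)

# y^Delta = A y + B u must reproduce the chain (y^1, ..., y^{r-1}, u)
lhs = A @ y + B @ u
rhs = np.concatenate(blocks[1:] + [u])
assert np.allclose(lhs, rhs)
```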
\section{An example}
\label{subsec:appl}
We end with an application of our higher-order Euler-Lagrange
equation \eqref{final} to the time scale
$\mathbb{T}=[a,b]\cap\mathbb{Z}$, which leads us to the usual and
well-known discrete-time Euler-Lagrange equation (in delta
differentiated form) -- see \textrm{e.g.} \cite{Logan}. Note that
$\forall t\in\mathbb{T}$ we have $\sigma(t)=t+1$ and
$\mu(t)=\sigma(t)-t=1$. In particular, we conclude immediately
that $\mu(t)$ is $r$ times delta differentiable. Also for any
function $g$, $g^\Delta$ exists $\forall t\in\mathbb{T}^k$ (see
Theorem 1.16 (ii) of \cite{livro}) and
$g^\Delta(t)=g(t+1)-g(t)=\Delta g$ is the usual \emph{forward
difference operator} (obviously $g^{\Delta^2}$ exists $\forall
t\in\mathbb{T}^{k^2}$ and more generally $g^{\Delta^r}$ exists
$\forall t\in\mathbb{T}^{k^r}$, $r\in\mathbb{N}$).
Now, for any function $f:\mathbb{T}\rightarrow\mathbb{R}$
and for any $j \in \mathbb{N}$ we have
\begin{align}
\label{eq:exGC}
{\underbrace{\left[\int_a^{\sigma(t)} \left(\int_a^\sigma \cdots \int_a^\sigma f \right)
\Delta \tau\right]}_{j-i\text{ integrals}}}^{\Delta^j}
= f^{\Delta^i \sigma^{j-i}} \, , \quad
i \in \{0,\ldots,j-1\} \, ,
\end{align}
where $f^{\Delta^i \sigma^{j-i}}(t)$ stands for $f^{\Delta^i}(\sigma^{j-i}(t))$.
To see this we proceed by induction. For $j = 1$
\begin{align*}
\int_a^{\sigma(t)}f(\xi)\Delta\xi&=\int_a^{t+1}f(\xi)\Delta\xi
=\int_a^{t}f(\xi)\Delta\xi+\int_t^{t+1}f(\xi)\Delta\xi\\
&=\int_a^{t}f(\xi)\Delta\xi+f(t),
\end{align*}
and then $\left[\int_a^{\sigma(t)}f(\xi)\Delta\xi\right]^\Delta
=f(t)+f^\Delta(t) = f^\sigma(t)$, since $\mu\equiv 1$. Assuming that \eqref{eq:exGC} is
true for all $j = 1,\ldots,k$, then
\begin{equation*}
\begin{split}
&{\underbrace{\left[\int_a^{\sigma(t)} \left( \int_a^\sigma \cdots \int_a^\sigma f \right)
\Delta \tau\right]}_{k+1-i\text{ integrals}}}^{\Delta^{k+1}} \\
&= \left(
\underbrace{\int_a^{t} \int_a^\sigma \cdots \int_a^\sigma}_{k+1-i} f
\Delta \tau
+
\underbrace{\int_a^{\sigma(t)} \cdots \int_a^\sigma}_{k-i} f
\Delta \tau
\right)^{\Delta^{k+1}} \\
&=
\left(\underbrace{\int_a^{\sigma(t)} \cdots \int_a^\sigma}_{k-i} f
\Delta \tau
\right)^{\Delta^{k}}
+
\left[\left(\underbrace{\int_a^{\sigma(t)} \cdots \int_a^\sigma}_{k-i} f
\Delta \tau
\right)^{\Delta^{k}}\right]^{\Delta} \\
&= f^{\Delta^i \sigma^{k-i}} + \left(f^{\Delta^i \sigma^{k-i}}\right)^\Delta \\
&= f^{\Delta^i \sigma^{k+1-i}} \, .
\end{split}
\end{equation*}
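On $\mathbb{T}=\mathbb{Z}$ with $a=0$, the operator $h\mapsto\int_a^{\sigma(t)}h\,\Delta\xi$ is the inclusive cumulative sum and $\Delta$ is the forward difference, so identity \eqref{eq:exGC} can be checked numerically. A sketch for $j=2$ (the test function below is an arbitrary choice):

```python
import numpy as np

N = 30
t = np.arange(N)
f = t**3 - 2*t                       # arbitrary test function on T = Z, a = 0

G = lambda h: np.cumsum(h)           # h -> int_a^{sigma(t)} h = sum_{xi=0}^{t} h(xi)
D = lambda h: np.diff(h)             # forward difference Delta

def iterate(op, h, k):
    for _ in range(k):
        h = op(h)
    return h

# (j, i) = (2, 0): Delta^2 of the double integral equals f(sigma^2(t)) = f(t+2)
assert np.array_equal(iterate(D, iterate(G, f, 2), 2), f[2:])

# (j, i) = (2, 1): Delta^2 of the single integral equals f^Delta(sigma(t))
fD = D(f)                            # f^Delta on Z
lhs = iterate(D, G(f), 2)
assert np.array_equal(lhs, fD[1:])
```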
Delta differentiating $r$ times both sides of equation
\eqref{final} and in view of \eqref{eq:exGC}, we obtain the
Euler-Lagrange equation in delta differentiated form (remember
that $y^0=y$, $\ldots$, $y^{r-1}=y^{\Delta^{r-1}}$,
$y^{\Delta^r}=u$):
\begin{equation*}
L_{y^{\Delta^r}}^{\Delta^r}(t,y,y^\Delta,\ldots,y^{\Delta^r})
+ \sum_{i=0}^{r-1} (-1)^{r-i}
L_{y^{\Delta^{i}}}^{\Delta^i \sigma^{r-i}}(t,y,y^\Delta,\ldots,y^{\Delta^r})
=0.
\end{equation*}
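As a sanity check, for $L=(y^{\Delta})^2$ (no $y$-dependence) and $r=1$ the equation above reduces to $y^{\Delta\Delta}=0$. A short numerical sketch on $\mathbb{T}=\{0,1,\ldots,N\}$, with illustrative endpoint data, confirms that the stationary point of the discrete functional with fixed endpoints is the linear interpolant:

```python
import numpy as np

# Minimize sum_t (y(t+1) - y(t))^2, i.e. L = (y^Delta)^2, with fixed endpoints.
# The delta-differentiated Euler-Lagrange equation predicts y^{Delta Delta} = 0.
N, y0, yN = 10, 1.0, 4.0             # illustrative endpoint data

# Interior stationarity: -y(t-1) + 2 y(t) - y(t+1) = 0 for t = 1, ..., N-1
M = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1))
b = np.zeros(N - 1)
b[0], b[-1] = y0, yN
y = np.concatenate([[y0], np.linalg.solve(M, b), [yN]])

# The stationary point is the linear interpolant; its second difference vanishes
assert np.allclose(y, np.linspace(y0, yN, N + 1))
assert np.allclose(np.diff(y, 2), 0.0)
```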
\section{Conclusion}
We introduce a new perspective on the calculus of variations on
time scales. None of the previous works
\cite{Atici06,CD:Bohner:2004,zeidan} on the subject mentions
the motivation for having $y^\sigma$ (or $y^\rho$) in
the formulation of problem \eqref{eq:EL:B}. We claim the
formulation \eqref{eq:EL:BSS}, without $\sigma$ (or $\rho$), to be
more natural and convenient. One advantage of the approach we are
promoting is that it becomes clear how to generalize the simplest
functional of the calculus of variations on time scales to
problems with higher-order delta derivatives. We also note that
the Euler-Lagrange equation in $\Delta$-integral form
\eqref{eulerint}, for a Lagrangian $L$ with $y$ instead of
$y^\sigma$, closely follows the classical condition. The main results of
the paper include: necessary optimality conditions for the
Lagrange problem of the calculus of variations on time scales,
covering both normal and abnormal minimizers; necessary optimality
conditions for problems with higher-order delta derivatives. Much
remains to be done in the calculus of variations and optimal
control on time scales. We trust that our perspective provides
interesting insights and opens new possibilities for further
investigations.
\section*{Acknowledgments}
This work was partially supported by the Portuguese Foundation for
Science and Technology (FCT), through the Control Theory Group
(cotg) of the Centre for Research on Optimization and Control
(CEOC -- \texttt{http://ceoc.mat.ua.pt}). The authors are grateful
to M.~Bohner and S.~Hilger for useful and stimulating comments,
and for them to have shared their expertise on time scales.
% Source: https://arxiv.org/abs/1002.4432
% Title: Adjoint action of automorphism groups on radical endomorphisms, generic equivalence and Dynkin quivers
% Abstract: Let $Q$ be a connected quiver with no oriented cycles, $k$ the field of complex numbers and $P$ a projective representation of $Q$. We study the adjoint action of the automorphism group $\Aut_{kQ} P$ on the space of radical endomorphisms $\radE_{kQ}P$. Using generic equivalence, we show that the quiver $Q$ has the property that there exists a dense open $\Aut_{kQ} P$-orbit in $\radE_{kQ} P$, for all projective representations $P$, if and only if $Q$ is a Dynkin quiver. This gives a new characterisation of Dynkin quivers.
\section*{Introduction}
Let $\Delta$ be a quiver and let $P$ be a projective representation.
We study generic orbits for the adjoint action of $Aut P$ on the
radical endomorphisms $radEndP$. If $\Delta$ is of type $\mathbb{A}$
with linear orientation, then $End P$ is a parabolic subalgebra in
$\mathfrak{gl}_n$ and a dense open orbit exists by a theorem of Richardson
\cite{richardson}. If $\Delta$ is of type $\mathbb{A}$ with
arbitrary orientation, then $End P$ is a seaweed Lie algebra, and a
dense open orbit exists by a theorem of Jensen-Su-Yu \cite{JSY}.
The following is the main result of this paper.
\begin{theorem} \label{QuiverTheorem}
Let $\Delta$ be a quiver, then there is a dense open $AutP$-orbit in
$radEndP$ for all projective representations $P$, if and only if
$\Delta$ is a Dynkin quiver.
\end{theorem}
The study of $AutP$-orbits in $radEndP$ can be transferred to the
study of good representations of the double quiver $\tilde{\Delta}$
of $\Delta$ modulo some relations $J$, see \cite{HB, HV}. We
investigate relative sections (see Section \ref{relsec}) of
representation varieties of Dynkin quivers and their double quivers.
The main technique of proving Theorem \ref{QuiverTheorem} is to
show that varieties of good representations of the double quiver
$(\tilde{\Delta}, J)$ are generically equivalent to representation
varieties of $\Delta'$, where $\Delta'$ is a Dynkin quiver with the
same underlying graph as $\Delta$. That is, generic good representations
of $(\tilde{\Delta}, J)$ have the same parameter space as
representations of $\Delta'$, which are well understood, although in general
$(\tilde{\Delta}, J)$ is of wild representation type. Since $\Delta'$ is of finite
representation type we can thus conclude the existence of dense open
orbits in the varieties of good representations.
Our main application of Theorem \ref{QuiverTheorem} is to study
the generic orbits for the adjoint action of subgroups of parabolics in
$\mathfrak{gl}_t$ on nilpotent radicals of the corresponding Lie subalgebra.
Let $\mathfrak{l}\subseteq \mathfrak{b}\subseteq \mathfrak{g}=\mathfrak{gl}_n(\mathbb{C})$, where $\mathfrak{b}$
is a Borel subalgebra, and $\mathfrak{l}$ is a Cartan subalgebra of $\mathfrak{g}$.
Let $\mathfrak{n}\subseteq \mathfrak{b}$ be a nilpotent subalgebra such that
$\mathfrak{s}=\mathfrak{l}+\mathfrak{n}\subseteq \mathfrak{b}$ is a subalgebra. To each vector $d\in
\mathbb{N}^n$ we associate Lie algebras $\mathfrak{l}(d)\subseteq \mathfrak{s}(d)
\subseteq \mathfrak{b}(d)\subseteq \mathfrak{gl}_t$, for $t=\sum_id_i$, where
$\mathfrak{b}(d)$ is a parabolic Lie subalgebra with Levi factors $\mathfrak{l}(d)$
and $\mathfrak{s}(d)=\mathfrak{l}(d)+\mathfrak{n}(d)\subseteq \mathfrak{b}(d)$ for the nilpotent
subalgebra $\mathfrak{n}(d)\subseteq rad\mathfrak{b}(d)$.
If $\mathfrak{s}=\mathfrak{b}$, then $\mathfrak{s}(d)=\mathfrak{b}(d)$ is a parabolic subalgebra, and it
is known by a theorem of Richardson \cite{richardson} that there is
an open dense $S(d)$-orbit in $\mathfrak{n}(d)$. The theorem of Richardson
holds for all reductive Lie algebras, but the proof is not
constructive and some effort has been made to explicitly construct
elements with dense orbits. This has been completed in the classical
types by Baur \cite{baur}, and in type $\mathbb{A}$, Br\"{u}stle,
Hille, Ringel and R\"{o}hrle \cite{BHRR} constructed open orbits
using representations of quivers. These results and methods have
been extended in various directions, see for example
\cite{erdmann,hille1,hille2,tan}.
We associate to $\mathfrak{s}$ a certain quiver $\Delta(\mathfrak{s})$ with relations
$\mathcal{I}(\mathfrak{s})$ and realize $\mathfrak{s}(d)$ as the endomorphism ring of a
projective $(\Delta(\mathfrak{s}),\mathcal{I}(\mathfrak{s}))$-representation
$P(d)=\oplus_iP(i)^{d_i}$, where $P(i)$ is the indecomposable
projective representation associated to the vertex $i$. The
following is the main application of Theorem \ref{QuiverTheorem}.
Let $S(d)$ be the Lie group corresponding to $\mathfrak{s}(d)$. We prove the
following theorem.
\begin{theorem} \label{LieTheorem}
If $\Delta(\mathfrak{s})$ is a tree, then there is an open dense $S(d)$-orbit
in the nilpotent radical $\mathfrak{n}(d)$ for all $d$, if and only if
$\Delta(\mathfrak{s})$ is a Dynkin quiver.
\end{theorem}
The remainder of this paper is organized as follows. In Section 1 we
recall basic facts on quivers and their representations. In Section
2 we give a definition of the class of Lie algebras $\mathfrak{s}(d)$ using
root spaces in $\mathfrak{gl}_t$, and in Section 3 we show how these algebras
can be realized as endomorphism algebras of projective
representations of a quiver with relations. In Section 4 we recall
results of Br\"{u}stle-Hille \cite{HB} and Hille-Vossieck \cite{HV}
on the use of double quivers to parameterize $AutP$-orbits of radical
endomorphisms of projective representations $P$. In Section 5 we recall
a result of Voigt and prove a fundamental inequality on the
dimensions of stabilizers of radical endomorphisms. In Section 6 we
recall the construction in type $\mathbb{A}$, and simplify the
proofs using the fundamental inequality. Using the construction in
type $\mathbb{A}$ we prove our main technical results in Section 7
and prove Theorems \ref{QuiverTheorem} and \ref{LieTheorem}
in Section 8.
\section{Representation varieties and algebraic group actions}
\subsection{Representations of quivers}
A quiver $\Delta$ consists of a finite set of vertices $\Delta_0$ and
a finite set of arrows $\Delta_1$ and two functions
$s,t:\Delta_1\rightarrow \Delta_0$ sending an arrow to its starting
and terminating vertex, respectively. A vertex $i$ in $\Delta$ is
called a sink if there are no arrows starting at $i$, and a
source if there are no arrows terminating at $i$. It is called
admissible if it is either a sink or a source, and interior if there
are at least two arrows incident to $i$.
A representation $V$ of $\Delta$ consists of vector spaces
$\{V_i\}_{i\in \Delta_0}$ and linear maps
$\{f_\alpha:V_{s(\alpha)}\rightarrow V_{t(\alpha)}\}_{\alpha\in
\Delta_1}$. A homomorphism of representations $h:(V,f)\rightarrow
(W,g)$ is a collection of maps $h_i:V_i\rightarrow W_i$ satisfying
$h_jf_\alpha=g_\alpha h_i$ for each arrow $\alpha:i\rightarrow j\in
\Delta_1$. The direct sum of two representations is obtained by
taking direct sums of the vector spaces and linear maps. A representation
is indecomposable if it is not isomorphic to the direct sum of two
non-zero representations.
For a representation $V$ we let $dimV=(dimV_i)_{i\in\Delta_0}$
denote the dimension vector of $V$. Let
$$Rep(\Delta,d)=\prod_{\alpha\in \Delta_1}Hom(k^{d_{s(\alpha)}},
k^{d_{t(\alpha)}})$$ be the space of representations. The group
$Gl(d)=\prod_iGl_{d_i}$ acts on $Rep(\Delta,d)$ by change of basis,
and we have a bijection between $Gl(d)$-orbits in $Rep(\Delta,d)$
and isomorphism classes of representations of $\Delta$ with
dimension vector $d$. We fix a basis and view elements in
$Rep(\Delta,d)$ and $Gl(d)$ as tuples of matrices.
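For the quiver $\mathbb{A}_2$ (one arrow $1\rightarrow 2$) everything can be made explicit: $Rep(\mathbb{A}_2,(m,n))$ is the space of $n\times m$ matrices $f$, $Gl(d)=Gl_m\times Gl_n$ acts by $(g,h)\cdot f=hfg^{-1}$, and the orbit of $f$ is dense exactly when the differential of the action at $f$, namely $(X,Y)\mapsto Yf-fX$, is surjective. A numerical sketch of this tangent-space criterion (the sizes $m,n$ are illustrative):

```python
import numpy as np

# Quiver A_2: Rep(A_2, (m, n)) = n x m matrices f, GL_m x GL_n acting by h f g^{-1}.
# Orbit of f is dense iff (X, Y) |-> Y f - f X (X in gl_m, Y in gl_n) is onto.
m, n = 2, 3

def tangent_rank(f):
    cols = []
    for row in np.eye(m * m):        # basis of gl_m
        X = row.reshape(m, m)
        cols.append((-f @ X).ravel())
    for row in np.eye(n * n):        # basis of gl_n
        Y = row.reshape(n, n)
        cols.append((Y @ f).ravel())
    return np.linalg.matrix_rank(np.column_stack(cols))

rng = np.random.default_rng(1)
f_generic = rng.standard_normal((n, m))      # full rank with probability 1
assert tangent_rank(f_generic) == m * n      # dense orbit: tangent map is onto
assert tangent_rank(np.zeros((n, m))) == 0   # zero representation: orbit is a point
```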
The path algebra $k\Delta$ of $\Delta$ is the algebra with basis the
set of paths in $\Delta$. For two paths $p$ and $q$, their product
is defined to be the composition $pq$ if $q$ ends where $p$ starts,
and zero otherwise. Let $e_i$ denote the trivial path of length zero
at vertex $i$. The trivial paths form a complete set of orthogonal
idempotents for $k\Delta$. There is an equivalence of categories
between left $k\Delta$-modules and representations of $\Delta$.
Let $< ,>:\mathbb{Z}^n\times \mathbb{Z}^n\rightarrow \mathbb{Z}$ be
the Ringel form defined by $$<d,e>=\sum_{i\in \Delta_0}d_ie_i -
\sum_{\alpha\in \Delta_1}d_{s(\alpha)}e_{t(\alpha)} \, .$$ Let $q_\Delta$,
defined by $q_\Delta(d)=<d,d>$, be the corresponding quadratic form,
called the Tits form of $\Delta$.
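It is a standard fact that $q_\Delta$ is positive definite exactly when the underlying graph of $\Delta$ is Dynkin, and this is easy to test numerically. A small sketch (the quivers below are illustrative examples):

```python
import numpy as np

# Tits form q(d) = sum_i d_i^2 - sum_{arrows a} d_{s(a)} d_{t(a)},
# encoded by a symmetric matrix S with q(d) = d^T S d.
def tits_matrix(num_vertices, arrows):
    S = np.eye(num_vertices)
    for s, t in arrows:              # arrows as 0-indexed (source, target) pairs
        S[s, t] -= 0.5
        S[t, s] -= 0.5
    return S

# A_3 quiver 1 -> 2 -> 3: the underlying graph is Dynkin, so q is positive definite
S_dynkin = tits_matrix(3, [(0, 1), (1, 2)])
assert np.linalg.eigvalsh(S_dynkin).min() > 0

# Oriented 3-cycle (underlying graph is extended Dynkin): q(1,1,1) = 0,
# so q is only positive semidefinite
S_cycle = tits_matrix(3, [(0, 1), (1, 2), (2, 0)])
d = np.ones(3)
assert np.isclose(d @ S_cycle @ d, 0.0)
```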
If $\Delta$ is a Dynkin quiver, that is the underlying graph of
$\Delta$ is one of the Dynkin graphs
$\mathbb{A}_i,\mathbb{D}_j,\mathbb{E}_l$ for $i\geq 1, j\geq 4,
l=6,7,8$, it follows from Gabriel's Theorem \cite{Gabriel} that
there is a dense orbit in $Rep(\Delta,d)$ for any dimension vector
$d$.
\begin{theorem}[Gabriel] \label{Gabriel}
There is a dense open orbit in $Rep(\Delta,d)$ for all $d$, if and
only if $\Delta$ is a Dynkin quiver. Moreover, if $\Delta$ is
Dynkin, then the number of orbits in $Rep(\Delta,d)$ is finite.
\end{theorem}
We also consider representations satisfying relations. Let
$\mathcal{I}\subseteq <\Delta_1>\subseteq k\Delta$ be an ideal
contained in the ideal generated by the arrows in $\Delta$. The
corresponding subset
$Rep(k\Delta/\mathcal{I},d)=Rep(\Delta,\mathcal{I},d) \subseteq
Rep(\Delta,d)$ consisting of representations that are annihilated by
$\mathcal{I}$ is a $Gl(d)$-stable Zariski closed subvariety, which is
called the representation variety of $(\Delta,\mathcal{I})$ with
dimension vector $d$.
\subsection{Relative sections}\label{relsec}
Let $(G,V)$ consist of an algebraic group $G$ which acts regularly
on an irreducible variety $V$.
\begin{definition}
A pair $(H,W)$ is called a (relative) section of $(G,V)$ if
$W\subseteq V$, $H\subseteq G$ is contained in the set-wise
stabilizer of $W$, and $H\cdot w=(G\cdot w)\cap W$. We say that
$(H,W)$ is a generic section if in addition $G\cdot W$ contains an
open subset of $V$.
\end{definition}
If $(H',W')$ is equivariantly isomorphic to $(H,W)$ and $(H,W)$ is a
(generic) section of $(G,V)$, then we also say that $(H',W')$ is a
(generic) section of $(G,V)$. If $(G,V)$ and $(G',V')$ have a common
generic section, then we say they are generically equivalent.
Note that $(1,\{x\})$ is a section of $(G,V)$ for any point $x\in V$,
where $1$ denotes the trivial group with one element, and that
$(1,\{x\})$ is a generic section of $(G,V)$ if and only if the orbit
$G\cdot x\subseteq V$ is open. By the theorem of Gabriel we see that
there exists $x\in Rep(Q,d)$ such that $(1,\{x\})$ is a generic
section of $(Gl(d),Rep(Q,d))$ for any dimension vector $d$, if and
only if $Q$ is Dynkin. In general, for any quiver and dimension vector
$d$, the canonical decomposition defined by Kac \cite{Kac},
$d=\sum_id_i$ gives us a generic section $(\prod_iGl_{d_i},
\prod_i Rep(Q,d_i))$ of $(Gl(d),Rep(Q,d))$.
\section{A class of Lie subalgebras}
Let $\mathfrak{g}=\mathfrak{gl}_n(\mathbb{C})$, let $\mathfrak{b}\subseteq \mathfrak{g}$ be the Borel
subalgebra of upper triangular matrices, and let $\mathfrak{l}\subseteq \mathfrak{b}$ be the
Cartan subalgebra of diagonal matrices. Let $\mathfrak{m}\subset
\mathfrak{b}$ be the nilpotent radical of $\mathfrak{b}$, consisting of strictly
upper triangular matrices. Let $\mathfrak{n}\subseteq \mathfrak{b}$ be a nilpotent
subalgebra such that $\mathfrak{s}=\mathfrak{l}\oplus \mathfrak{n}\subseteq \mathfrak{b}$ is a
subalgebra. Note that $\mathfrak{n}=\mathfrak{s}\cap \mathfrak{m}$ is the nilpotent
radical of $\mathfrak{s}$.
Let $\Phi=\Phi^+\cup \Phi^-$ be the root system determined by $\mathfrak{l}$
such that $\mathfrak{m}=\oplus_{\alpha \in \Phi^+}\mathfrak{g}_\alpha$, where
$\mathfrak{g}_\alpha$ is the root space of $\alpha$ in $\mathfrak{g}$. A subset $\Sigma
\subseteq \Phi$ is closed under addition if $\alpha,\beta\in \Sigma$
and $\alpha+\beta\in \Phi$ imply that $\alpha+\beta\in \Sigma$. A
subset $\Sigma$ closed under addition is generated by a subset
$\Sigma'\subseteq \Sigma$ if $\Sigma$ is the smallest subset of roots
closed under addition and containing $\Sigma'$. That is, the
roots in $\Sigma$ are exactly those obtained from $\Sigma'$ by
repeatedly adding pairs whose sum is again a root. For a subset $\Sigma\subseteq
\Phi$ which is closed under addition, we let $$\mathfrak{g}_\Sigma=\bigoplus_{\alpha
\in \Sigma}\mathfrak{g}_\alpha \subseteq \mathfrak{gl}_n.$$
The following fact is not difficult to verify and the proof is
skipped.
\begin{lemma} \label{bijlemma1}
There is a bijection between subsets $\Sigma\subseteq \Phi^+$ closed
under addition, and nilpotent subalgebras $\mathfrak{n}\subseteq \mathfrak{b}$ such
that $\mathfrak{l}\oplus \mathfrak{n} \subseteq \mathfrak{b}$ is a Lie subalgebra, given by
$\Sigma \mapsto \mathfrak{g}_\Sigma$.
\end{lemma}
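As a small example, let $n=3$, so that
$\Phi^+=\{\alpha_1,\alpha_2,\alpha_1+\alpha_2\}$. The subset
$\Sigma=\{\alpha_1,\alpha_1+\alpha_2\}$ is closed under addition, and
$\mathfrak{l}\oplus\mathfrak{g}_\Sigma$ is the subalgebra of $\mathfrak{gl}_3$ spanned by the
diagonal matrices together with $E_{12}$ and $E_{13}$. In contrast,
$\Sigma=\{\alpha_1,\alpha_2\}$ is not closed under addition, and indeed
$[E_{12},E_{23}]=E_{13}$ shows that $\mathfrak{l}\oplus\mathfrak{g}_{\{\alpha_1,\alpha_2\}}$
is not a Lie subalgebra.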
Let $\Sigma\subseteq \Phi^+$ be a subset closed under addition and
let $\mathfrak{s}=\mathfrak{l}\oplus \mathfrak{g}_\Sigma$. Let $d\in \mathbb{N}^n$ be a
dimension vector and let $t=d_1+\cdots+d_n$. We associate to $d$ the
Lie subalgebras $\mathfrak{l}(d)\subseteq \mathfrak{s}(d) \subseteq \mathfrak{b}(d)\subseteq
\mathfrak{gl}_t$ as follows. Let $\Phi_t$ denote the root system of $\mathfrak{gl}_t$.
Let $\alpha_1,\cdots,\alpha_{n-1}$ denote the simple roots in
$\Phi^+$, ordered such that any root $\alpha\in \Phi^+$ is of the
form $\alpha=\alpha_h + \alpha_{h+1} + \cdots + \alpha_j$ for $j\geq
h$, and similarly, let $\beta_1,\cdots,\beta_{t-1}$ denote the
simple roots in $\Phi^+_t$. Let $\Sigma(d)=\Sigma(d)^-\cup
\Sigma(d)^+ \subseteq \Phi_t$ be the set of roots closed under
addition, where
\begin{itemize}
\item[-] $\Sigma(d)^-$ is generated by all simple negative roots except
$\{-\beta_{i_1}, \cdots,-\beta_{i_{n-1}}\}$, where $d_j=i_j-i_{j-1}$
and where we define $i_0=0$ and $i_n=t$,
\item[-] $\Sigma(d)^+$ is generated by $-\Sigma(d)^-$
and all roots of the form $\sum_{s=i_h}^{i_j}\beta_{s}$ for
$\alpha_h + \alpha_{h+1} + \cdots + \alpha_j\in \Sigma$.
\end{itemize}
We define $\mathfrak{s}(d)={(\mathfrak{gl}_t)}_{\Sigma(d)}\oplus\mathfrak{l}_t$, where $\mathfrak{l}_t$ are the diagonal
matrices in $\mathfrak{gl}_t$. Here the root space of
$\alpha_h + \cdots + \alpha_j$ in $\mathfrak{s}$ corresponds to a $d_h\times
d_j$ block of root spaces in $\mathfrak{s}(d)$. Note that
$\mathfrak{b}(d)=(\mathfrak{gl}_t)_{\Sigma(d)^-\cup \Phi_t^+}\oplus\mathfrak{l}_t$ is a parabolic Lie
subalgebra in $\mathfrak{gl}_t$ with Levi factor
$\mathfrak{l}(d)=(\mathfrak{gl}_t)_{\Sigma(d)^-\cup -\Sigma(d)^-}\oplus\mathfrak{l}_t$. We clearly have,
$\mathfrak{l}(d)\subseteq \mathfrak{s}(d) \subseteq \mathfrak{b}(d)$.
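As a worked example of this construction, let $n=2$ and
$\Sigma=\{\alpha_1\}$, so that $\mathfrak{s}=\mathfrak{b}\subseteq \mathfrak{gl}_2$, and let
$d=(1,2)$, so $t=3$, $i_1=1$ and $i_2=3$. Then $\Sigma(d)^-$ is
generated by $\{-\beta_2\}$, and $\Sigma(d)^+$ is generated by
$\{\beta_2,\beta_1\}$, so
$\Sigma(d)=\{-\beta_2,\beta_1,\beta_2,\beta_1+\beta_2\}$ and
$\mathfrak{s}(d)=\mathfrak{b}(d)$ is the parabolic subalgebra of block upper triangular
matrices in $\mathfrak{gl}_3$ with blocks of sizes $1$ and $2$. Here the root
space of $\alpha_1$ in $\mathfrak{b}$ corresponds to the $1\times 2$ block
spanned by the root spaces of $\beta_1$ and $\beta_1+\beta_2$.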
Finally, we remark that the description of $\mathfrak{s}(d)$ given in this
section generalizes to subalgebras of parabolic subalgebras of any
reductive Lie algebra.
\section{Lie subalgebras and endomorphism rings of projective
representations}
A seaweed Lie subalgebra $\mathfrak{q}\subseteq \mathfrak{gl}_n$ is the
intersection $\mathfrak{q}=\mathfrak{p}\cap \mathfrak{p}'$ of two
parabolic Lie subalgebras $\mathfrak{p}$ and $\mathfrak{p}'$, where
$\mathfrak{p}+ \mathfrak{p}'=\mathfrak{gl}_n$. It is known
\cite{JSY} that seaweed Lie subalgebras in $\mathfrak{gl}_n$ can be realized
as endomorphism algebras of projective representations of a quiver
of type $\mathbb{A}$, generalizing the fact that endomorphism
algebras of projective representations of quivers of type
$\mathbb{A}$ with linear orientation realize parabolic subalgebras
\cite{BHRR}. In this section we generalize these results to the class
of Lie algebras $\mathfrak{s}(d)$ defined in the previous section.
Let $\Delta$ be a quiver without oriented cycles with vertices
$\Delta_0=\{1,\cdots,n\}$. Let $s(\Delta)$ be the quotient of the
path algebra of $\Delta$ by the relations $p=q$ for all paths $p$
and $q$ in $\Delta$ with the same starting and ending vertex. Note
that we allow $p$ or $q$ to be arrows in $\Delta$, and that
$s(\Delta)=k\Delta$ when $\Delta$ is a tree.
If there are no arrows in the relations defining $s(\Delta)$, we say
that $\Delta$ is minimal. Given $\Delta$, there is always a unique
minimal $\Delta'\subseteq \Delta$ such that $s(\Delta')=s(\Delta)$.
In this case, we say that $\Delta'$ is the quiver of $s(\Delta)$.
The quiver $\Delta$ is said to have a standard orientation if
whenever there is a path from $i$ to $j$ in $\Delta$ then $j\geq i$.
Any quiver without oriented cycles is isomorphic to a quiver with a
standard orientation. For the remainder of this section we assume
that $\Delta$ has a standard orientation.
Let $\mathfrak{s}(\Delta)$ be the Lie algebra $End(s(\Delta))$, where
$End(s(\Delta))$ denotes the endomorphisms of $s(\Delta)$ as a left
module. There is an embedding $\mathfrak{s}(\Delta)\subseteq \mathfrak{b} \subseteq
\mathfrak{gl}_n$, given by identifying $Hom(s(\Delta)e_j,s(\Delta)e_i)
=e_js(\Delta)e_i$ with the root space $\mathfrak{g}_{p}$, where the root
$p=\alpha_i + \cdots + \alpha_j$ corresponds to the path that starts
at $i$ and ends at $j$. We have $End(s(\Delta))\cong s(\Delta)^{op}$
as associative algebras.
\begin{lemma}
There is a bijection between quivers $\Delta$ with standard
orientation, and nilpotent subalgebras $\mathfrak{n}\subseteq \mathfrak{b}$ such that
$\mathfrak{l}\oplus \mathfrak{n}\subseteq \mathfrak{b}$ is a subalgebra, given by $\Delta
\mapsto rad\mathfrak{s}(\Delta)$.
\end{lemma}
\begin{proof}
Consider the set $\Psi=\{p | p \mbox{ a non-trivial path in } \Delta\}$. Clearly,
paths are closed under composition. We define a map $\Psi\rightarrow \Phi^+$ given
by sending a path $p$ from $i$ to $j$, to the root $\alpha_i+\cdots+\alpha_j$. This
map induces a bijection between quivers $\Delta$ with standard orientation and
subsets of positive roots closed under addition. The lemma now follows from Lemma
\ref{bijlemma1}.
\end{proof}
Note that $\mathfrak{s}(\overrightarrow{\mathbb{A}})=\mathfrak{b}\subseteq \mathfrak{g}$, where
$\overrightarrow{\mathbb{A}}$ is a quiver of type $\mathbb{A}$ with
the standard orientation $i\rightarrow i+1$ for $i=1,\cdots,n-1$,
and $\mathfrak{s}(\Delta_0)=\mathfrak{l}$. Clearly, $\mathfrak{l} \subseteq \mathfrak{s}(\Delta)
\subseteq \mathfrak{b}$. Also in the above bijection, nilpotent Lie ideals
$\mathfrak{n}\subseteq \mathfrak{b}$ correspond to quivers $\Delta$ such that
$rads(\Delta)\subseteq s(\overrightarrow{\mathbb{A}})$ is an ideal.
We therefore obtain a bijection between nilpotent ideals in $\mathfrak{b}$
and quotient algebras of $k\overrightarrow{\mathbb{A}}$ by nilpotent
ideals.
Now let $d\in \mathbb{N}^n$ be a vector and let
$P(d)=\oplus^n_{i=1}{P_i}^{d_i}$, where $P_i=s(\Delta)e_i$ is the
indecomposable projective $s({\Delta})$-module at vertex $i$. The
embedding $\mathfrak{s}(\Delta)\subseteq \mathfrak{gl}_n$ induces an embedding
$End(P(d))\subseteq \mathfrak{gl}_t$ where $t=\sum_id_i$, and moreover
$End(P(d))=\mathfrak{s}(\Delta)(d)$, $\mathfrak{l}(d)=\mathfrak{s}(\Delta_0)(d)$ and
$\mathfrak{b}(d)=\mathfrak{s}(\overrightarrow{\mathbb{A}})(d)$.
\section{Adjoint actions and double quivers}
Let $\Delta$ be a quiver without oriented cycles, with vertices
$\Delta_0=\{1,\cdots ,n\}$. Let $P=P(d)=\oplus_{i=1}^n{P_i}^{d_i}$
be a projective $\Delta$-representation, where $P_i=k\Delta e_i$ is
the indecomposable projective $k\Delta$-module associated to the
vertex $i\in \Delta_0$, and where $d=(d_1,\cdots,d_n)\in
\mathbb{N}^n$ is a dimension vector. The group $AutP$ of
$\Delta$-automorphisms of $P$ acts adjointly on the
radical $radEndP$ of the Lie-algebra $EndP$. By a result of
Br\"{u}stle-Hille \cite{HB} and Hille-Vossieck \cite{HV} we may
parameterize the orbits of this action using the isomorphism classes
of good modules of a finite dimensional quasi-hereditary $k$-algebra
which we denote by $D=D(\Delta)$.
We recall the construction of $D$ given by Hille-Vossieck \cite{HV}.
The quiver of $D$ is the double quiver $\tilde{\Delta}$. That is,
$\tilde{\Delta}_0=\Delta_0$ and $\tilde{\Delta}_1=\Delta_1\cup
\Delta^*_1$, where for every arrow $\alpha:i\rightarrow j$ in $\Delta_1$
there is a corresponding arrow $\alpha^*:j\rightarrow i$ in $\Delta^*_1$.
The relations defining $D$ are $$\alpha^*\alpha - \sum_{\beta\in
\Delta_1, t(\beta)=s(\alpha)} \beta\beta^*$$ for any arrow
$\alpha\in \Delta_1$, and $\alpha^*\beta$ where $\alpha\neq \beta$
is a pair of arrows in $\Delta$ terminating at the same vertex. By
the relations we see that $D$ has a multiplicative basis $\{pq^*|
s(p)=s(q)\mbox{ for paths }p,q\in \Delta\}$.
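For the quiver $\Delta: 1\stackrel{\alpha}{\rightarrow}2$, for example,
the double quiver has arrows $\alpha:1\rightarrow 2$ and
$\alpha^*:2\rightarrow 1$, and since no arrow of $\Delta$ terminates at
$s(\alpha)=1$, the only relation is $\alpha^*\alpha=0$. The
multiplicative basis $\{pq^*|s(p)=s(q)\}$ then consists of $e_1$,
$e_2$, $\alpha$, $\alpha^*$ and $\alpha\alpha^*$, so $dim_kD=5$.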
By the results of Br\"{u}stle-Hille and Hille-Vossieck there is a
natural correspondence between the $AutP$-orbits in $radEndP$ and
the isomorphism classes of $D$-modules $X$ with $_{k\Delta}X\cong
P$. Indeed, let $e$ be the dimension vector of $P$, let $Rep(D,e)$
be the variety of $\tilde{\Delta}$-representations satisfying the relations
of $D$ and let $Gl(e)$ be the group acting by change of basis on
$Rep(D,e)$. Let $Rep(D,P)\subseteq Rep(D,e)$ consist of
representations $X$ with $_{k\Delta}X=P$. Then the stabilizer $AutP$
acts on $Rep(D,P)$ and the orbits correspond to isomorphism classes
of $D$-modules $X$ with $_{k\Delta}X\cong P$. We let $G(d)=AutP(d)$.
\begin{lemma} The following are true.
\begin{itemize}
\item[i)] $Rep(D,P)\subseteq Rep(D,e)$ is an affine subspace.
\item[ii)] $Gl(e)\cdot Rep(D,P)\subseteq Rep(D,e)$ is irreducible
and open.
\end{itemize}
\end{lemma}
\begin{proof}
$Rep(D,P)$ is an affine subspace since the relations defining $D$
are linear in $\Delta_1^*$. This proves $i)$ and that $Gl(e)\cdot
Rep(D,P)$ is irreducible. There is a projection map
$Rep(D,e)\rightarrow Rep(\Delta,e)$ by forgetting the
$\Delta_1^*$-structure. Now $Gl(e)\cdot Rep(D,P)$ is the preimage of
the open orbit of $P$, and so $ii)$ follows.
\end{proof}
Note that there is an equivariant isomorphism between $radEndP$ and
$Rep(D,P)$, which gives us the correspondence between $Gl(e)$-orbits of
$k\Delta$-projective representations in $Rep(D,e)$ and $AutP$-orbits
in $radEndP$.
The algebra $D$ is quasi-hereditary with Verma modules
$P_1,\cdots,P_n$. A $D$-module $X$ is good if $_{k\Delta}X$ is
projective, which is equivalent to $X$ having a filtration with
subfactors isomorphic to Verma modules. If $d_i$ is the multiplicity
of $P_i$ in $_{k\Delta}X$, then $d=(d_i)_i$ is called the
$\Delta$-dimension vector of $X$, and is denoted by $d=dim_\Delta
X$. The full sub-quiver of $\Delta$ given by vertices where $d_i>0$
is called the $\Delta$-support of $X$, and is denoted by
$Supp_\Delta X$. The quasi-hereditary structure on $D$ will not
play an important role in the sequel.
The algebra $D$ has a basis of paths $ab^*$, with $a,b$ paths in
$\Delta$. Let $p$ be a path in $\tilde{\Delta}$. If $p=\sum ab^*$
modulo the defining relations, then the number of arrows in $p$ from
$\Delta_1^{*}$ is equal to the length of $b$ for any term $ab^*$ in the
sum. This follows since the relations defining $D$ are homogeneous
in $\Delta_1^*$. Similarly, the relations are homogeneous in
$\Delta_1$, and so the number of arrows in $p$ from $\Delta_1$ is
equal to the length of $a$ in any term of the sum.
\section{An exact sequence and a relative Voigt's lemma}
Let $J$ be the ideal in $D$ generated by the arrows in $\Delta^*_1$.
Note that $D$ is a split extension of $k\Delta$ by $J$, and that $J$
is projective as a left $D$-module. In this section, let
$A=k\Delta$.
\begin{lemma}\label{projresol}
Let $X$ be a $k\Delta$-projective $D$-module. Then the following
sequence is a projective resolution of $X$,
$$\xymatrix{0\ar[r]& J\otimes_A X\ar[r]^f& D\otimes_A X \ar[r]^{\; \; \; \; \; \; g}&
X\ar[r]& 0 }$$ where $g(d\otimes m)=dm$ and $f(d\beta\otimes x)=
d\beta\otimes x-d \otimes \beta x$ for $d\beta\in J, \beta\in
\Delta_1^{*}$, and $x\in X$.
\end{lemma}
\begin{proof}
Since $_DJ$ is projective, we only need to prove that the sequence
is exact. By the definition of $f$ and $g$, the map $g$ is
surjective and $gf=0$. For any projective $A$-module $M$, the
sequence
$$
\xymatrix{0\ar[r]& J\otimes_A M\ar[r]^{i\otimes 1}& D\otimes_A M
\ar[r]^{\; \; \; \; \; \; g}& M\ar[r]& 0 },
$$
obtained by applying $-\otimes_A M$ to $$\xymatrix{0 \ar[r] & J
\ar[r]^i & D \ar[r] & A \ar[r] &0}$$ is a projective resolution of
$_DM$. So what remains is to show that $f$ is injective. Let $p$ be
a path in $\tilde{\Delta}$ and define $\mu(p\otimes x)$ to be the
number of arrows from $\Delta_1^*$ in $p$. Let $\sum pb^*\otimes x$,
for $b\in \Delta_1$ be in the kernel of $f$. We may assume that
$\mu(pb^*\otimes x)$ is constant for the terms in the sum. Now
$f(\sum pb^*\otimes x)=\sum (pb^*\otimes x-p\otimes b^*x)=0$ which
shows that $\sum pb^*\otimes x=0$ since $\mu(p\otimes
b^*x)<\mu(pb^*\otimes x)$. Hence $f$ is injective, and we are done.
\end{proof}
\begin{lemma}\label{variety1}
Let $X$ be an $A$-projective $D$-module. Then we have the following
exact sequence.
$$
\xymatrix{0\ar[r]& End_DX\ar[r]& End_AX\ar[r]& radEnd_AX\ar[r]&
Ext^1_D(X,X) \ar[r]& 0 }
$$
\end{lemma}
\begin{proof}
By applying $Hom_D(-,X)$ to the sequence in Lemma \ref{projresol},
we have the following exact sequence:
$$
\xymatrix@=20pt{0\ar[r]& End_DX\ar[r]& Hom_D(D\otimes X, X)
\ar[r]&Hom_D(J\otimes X, X)\ar[r]& Ext^1_D(X,X)\ar[r]& 0 }.
$$
Note that we have the isomorphism $\phi: Hom_D(D\otimes_A X, X)\rightarrow End_AX$
given by $\phi(f)(x)=f(1\otimes x)$. Moreover we can show that
$Hom_D(J\otimes_A X,X) \cong radEnd_AX$, which completes the proof of
the lemma.
\end{proof}
As a corollary of Lemma \ref{variety1}, we give a direct proof of a
version of Voigt's Lemma \cite{Voigt} for $D$-modules. This lemma
includes an inequality for the dimension of stabilizers, which is an
important tool for showing that a given good $D$-module is rigid.
\begin{lemma} \label{charlemma} Let $X$ be an $A$-projective
$D$-module with $_{A}X=P(d)$. Then $dim End_DX\geq \sum_i
d_i^2$. Moreover, the following are equivalent.
\begin{itemize}
\item[i)] $G(d)\cdot X\subseteq Rep(D,P(d))$ is open.
\item[ii)] $X$ is rigid.
\item[iii)] $dim End_DX=\sum_i d_i^2$.
\end{itemize}
\end{lemma}
\begin{proof}
We have $dim End_AX-dim radEnd_AX=\sum {d_i}^2$, and so $dim
End_DX\geq \sum_i d_i^2$ by the previous lemma, with equality if and
only if $Ext_D^1(X,X)=0$. This shows the equivalence of $ii)$ and
$iii)$. Note that
$$\dim G(d)\cdot X =\dim End_AX-\dim End_DX=\dim Rep(D,P(d))-
\dim Ext_D^1(X,X),$$ and so $i)$ and $ii)$ are equivalent.
\end{proof}
\section{Type $\mathbb{A}$}
\subsection{Linear orientation}
In this subsection, let $\Delta$ be of type $\mathbb{A}$ with
orientation $\alpha_i:i\rightarrow i+1$ for each vertex
$i=1,\cdots,n-1$. In this case $AutP$ is a parabolic subgroup and the
existence of dense orbits follows from the classical result of
Richardson \cite{richardson}. We now recall an explicit construction
of dense orbits using quivers due to Br\"ustle, Hille, Ringel and
R\"ohrle \cite{BHRR}.
The projective $D$-module $Q_n=De_n$ at vertex $n$ is injective and
has a multiplicative basis consisting of paths $pq^*$, where $q$ is
a path in $\Delta$ ending at the vertex $n$. By a multiplicative
basis of a module $M$, we mean that $pv$ is zero or a basis element of $M$
for any path $p$, and any basis element $v$ of $M$. By \cite{BHRR}
there is a bijection between the isomorphism classes of
indecomposable rigid $k\Delta$-projective $D$-modules and submodules
of $Q_n$. A submodule $X$ of $Q_n$ has a multiplicative basis given
by a subset of the paths $pq^*$, and it is uniquely determined by
its $k\Delta$-structure $_{k\Delta}X\cong
\oplus^n_{i=1}P_i^{\delta_i}$ with $\delta_i \in \{0,1\}$. In fact,
there is a bijection between subsets $I\subseteq \Delta_0$ and
$k\Delta$-projective indecomposable rigid $D$-modules $X=X(I)$,
where $I=\{i| \delta_i=1\}$. For a dimension vector $d=(d_1,\cdots,
d_n)$, define $X(d)=\oplus^t_{i=1} X(I_i)$, with $_{k\Delta}X(d)\cong
P(d)$, and $I_1\subseteq I_2 \subseteq \cdots \subseteq I_t$.
\begin{theorem} \label{BHRR} [Br\"ustle-Hille-Ringel-R\"ohrle]
The $k\Delta$-projective $D$-module $X(d)$ is rigid.
\end{theorem}
The theorem gives a bijection between isomorphism classes of rigid
$k\Delta$-projective $D$-modules and descending chains of subsets of
$\{1,\cdots,n\}$.
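For instance, when $n=2$ the indecomposable rigid $k\Delta$-projective
$D$-modules are $X(\{1\})$, $X(\{2\})$ and $X(\{1,2\})$, and for
$d=(2,1)$ the corresponding chain is $\{1\}\subseteq \{1,2\}$, giving
$X(d)=X(\{1\})\oplus X(\{1,2\})$ with $_{k\Delta}X(d)\cong P_1^2\oplus
P_2$.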
We will give a short proof of the above theorem, but first we need
the following lemma on the dimension of homomorphism spaces between
indecomposable rigid $k\Delta$-projective $D$-modules.
\begin{lemma}\label{dim1}
Let $X(I)$ and $X(J)$ be two indecomposable rigid
$k\Delta$-projective $D$-modules, associated to the subsets
$I\subseteq J\subseteq \Delta_0$. Then
$$dim_k Hom_D(X(I),
X(J))=dim_k Hom_D(X(J),X(I))= |I|.$$
\end{lemma}
\begin{proof}
The projective $Q_n$ is generated by $e_n$. We have $End_D(Q_n)$ is
$n$-dimensional with basis $f_i$, where $i=0,\cdots,n-1$, defined by
$f_i(e_n)=(\alpha_{n-1} \alpha_{n-1}^*)^{i}$. Assume $_{k\Delta}X(I)
= \oplus_i P_i^{\delta_i}$. Suppose that $I=J$. Then any $f_i:
Q_n\rightarrow Q_n$ induces $f_i|_{X(I)}: X(I)\rightarrow X(I)$. Moreover
$f_i|_{X(I)}=0$ if and only if $i\geq \sum \delta_i$. Therefore
$dim_k Hom(X(I),X(I)) =\sum_i \delta_i = |I|$. Note that $X(I)$ is a
submodule of $X(J)$, and $Hom(X(I), X(J))\cong Hom(X(I), X(I))$ and
so $dim_kHom_D(X(I), X(J))=|I|$. Similarly, $dim_kHom_D(X(J), X(I))
=|I|$.
\end{proof}
The theorem of Br\"ustle, Hille, Ringel and R\"ohrle \cite{BHRR} is
a consequence of the following lemma.
\begin{lemma}
Let $X(I)$ and $X(J)$ be two indecomposable rigid
$k\Delta$-projective $D$-modules, associated to $I\subseteq
J\subseteq \Delta_0$. Then $X(I)\oplus X(J)$ is rigid.
\end{lemma}
\begin{proof}
Let $X=X(I)\oplus X(J)$ and assume that $_{k\Delta}X\cong
\oplus_iP_i^{x_i}$. Then $x^2_i=4$ if $i\in I\cap J$, $x^2_i=1$ if
$i\in J \backslash I$ and $x^2_i=0$ if $i\not\in J$. By the previous
lemma we see that $dim_k End(X)=\sum_i x_i^2$, and so $X$ is rigid by
Lemma \ref{charlemma}.
\end{proof}
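As a quick check of this dimension count, take $I=\{1\}\subseteq
J=\{1,2\}$ and $X=X(I)\oplus X(J)$, so that $x_1=2$ and $x_2=1$. Lemma
\ref{dim1} gives $dim_kEnd(X)=1+2+1+1=5=x_1^2+x_2^2$, confirming that
$X$ is rigid.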
\subsection{Gluing modules at sinks and sources}
Let $\Delta$ be a quiver of type $\mathbb{A}_n$, with vertices
$\{1,\cdots, n\}$, and arrows $\alpha_i:i\rightarrow i+1$ or
$\alpha_i:i\leftarrow i+1$ for $i=1,\cdots, n-1$. We recall how to
glue a pair of $k\Delta$-projective $D$-modules at an admissible
vertex to obtain a new $k\Delta$-projective $D$-module, which
usually has higher dimension \cite{JSY}.
Let $i_1<i_2<\cdots <i_t$ be a complete list of interior admissible
vertices in $\Delta$. Let $i_0=1$ and $i_{t+1}=n$ be the end
vertices of $\Delta$. Let $i=i_l$ be an interior admissible vertex
of $\Delta$. Let $M'$ and $M''$ be two indecomposable
$k\Delta$-projective $D$-modules with $Supp_{\Delta}(M')_0\subseteq$
$\{1,\dots,i\}$, $Supp_{\Delta}(M'')_0\subseteq$
$\{i,\dots,n\}$ and $(dim_\Delta M')_i=1=(dim_\Delta M'')_i$.
Assume that $i$ is a sink in $\Delta$. Then $M'$ and $M''$ have unique
quotient modules isomorphic to the simple $D$-module $P_i$.
We have short exact sequences
$$\xymatrix{0 \ar[r] & \ker (f') \ar[r] & M' \ar[r]^{f'} & P_i
\ar[r] & 0}$$ and $$\xymatrix{0 \ar[r] & \ker (f'') \ar[r] & M''
\ar[r]^{f''} & P_i \ar[r] & 0}$$ Let $M$ be given by the pullback of
$f'$ and $f''$, that is, we have a short exact sequence
$$\xymatrix{0 \ar[r] & M \ar[r] & M'\oplus M'' \ar[r] & P_i
\ar[r] & 0}$$ Similarly, if $i$ is a source, we have a pushout
sequence $$\xymatrix{0 \ar[r] & P_i \ar[r] & M'\oplus M'' \ar[r] & M
\ar[r] & 0}$$
In both of these cases, we say that $M$ is obtained by gluing $M'$
and $M''$ at $i$. Gluing of homomorphisms is done similarly.
\subsection{The construction of rigid modules}
Let $\Delta$ be as in the previous section. Let $d$ be a dimension
vector and let $(d^s)_j=d_j$ for $j=i_{s} ,\cdots,i_{s+1}$ and
$(d^{s})_j=0$ otherwise. Using Theorem \ref{BHRR} we can construct a
rigid $D$-module $Y(d^s)$ which has $\Delta$-dimension vector $d^s$ when
considered as a module for the double quiver supported on $\{i_s,\cdots,i_{s+1}\}$
subject to the corresponding relations, and which may not be $k\Delta$-projective.
However, if $i_{s}$ is an interior source, we extend $Y(d^s)$ by
$P_{i_{s}-1}^{d_{i_s}}$, and if $i_{s+1}$ is an interior source, we
extend $Y(d^s)$ by $P_{i_{s+1}+1}^{d_{i_{s+1}}}$, to obtain a
$k\Delta$-projective $D$-module, which we denote by $X(d^s)$. Since
the extension preserves the $\Delta$-dimension vector and the dimension of the
endomorphism ring, by Lemma \ref{charlemma} the module $X(d^s)$ is
rigid. Clearly, $X(d^s)$ can be given a multiplicative basis by
extending the basis for $Y(d^s)$.
The construction of a rigid $k\Delta$-projective module $X(d)$ with
$\Delta$-dimension vector $d$ is done by induction on the number of
interior admissible vertices in $\Delta$.
Clearly, if there are no interior admissible vertices, then $\Delta$
has linear orientation, and we are done.
Let $s>0$ and let $e^s$ be given by $(e^s)_j=d_j$ if $j\leq i_s$ and
let $(e^s)_j=0$ otherwise. Assume by induction we have constructed
a rigid $k\Delta$-projective $D$-module $X(e^s)$ with dimension
vector $e^s$. Also, by induction, we may assume that the summands of
$X(e^s)$ which have $\Delta$-support at $i_s$ are totally ordered
according to the following order $\leq=\leq_{i_s}$: Let $M$ and $N$
be two indecomposable summands of $X(e^s)$ with
$(dim_\Delta M)_{i_s}=1=(dim_\Delta N)_{i_s}$. Assume that $M$ and
$N$ are obtained by gluing indecomposables with supports
$I_a,\cdots,I_{s-1}$ and $J_b,\cdots,J_{s-1}$, respectively. Let $t$
be the smallest number such that $I_{s-t}\neq J_{s-t}$; then $M\leq
N$ if $J_{s-t} \subseteq I_{s-t}$ when $t$ is even, and
$I_{s-t}\subseteq J_{s-t}$ when $t$ is odd. Note that $M\cong N$ if
$t$ does not exist.
We want to glue $X(e^s)$ with $X(d^s)$, by gluing together
indecomposable summands. The gluing leaves unchanged all summands
not $\Delta$-supported at $i_s$.
\begin{lemma}
Let $M$, $N$ and $t$ be as above, and assume that $M\leq N$, let
$i_s\in I\subseteq J \subseteq \{i_{s},\cdots,i_{s+1}\}$, let $X$ be
obtained by gluing $M$ with $X(J)$ and let $Y$ be obtained by gluing
$M$ with $X(I)$. Then $X\oplus Y$ is rigid.
\end{lemma}
\begin{proof}
We first show that $dim_kHom(X,Y)=|I|+ dim_kHom(M,N)-1$. Let $i_s$
be a source. Any map $f:M\rightarrow N$ extends to a map
$f':X\rightarrow Y$, with $f'|_{X(I)}$ equal to the unique injection
$X(I)\subset X(J)$ if $f|_{M_{i_s}}$ is non-zero, and $f'|_{X(I)}=0$
otherwise. In addition there are $|I|-1$ non-injective maps
$X(I)\rightarrow X(J)$, which gives us $dim_kHom(X,Y)=|I|+
dim_kHom(M,N)-1$ in this case. The case of sink is similar.
Now the case $Y=X$ follows, and by similar arguments
$$dim_kHom(Y,X)=|I|+dim_kHom(N,M)-1.$$ Then since $M\oplus N$ is rigid,
$dim_kEnd(X\oplus Y)=4|I|+\sum_{i\leq i_s} d_i^2-4=\sum_id_i^2$, and
therefore $X\oplus Y$ is rigid, by Lemma \ref{charlemma}.
\end{proof}
We also need to consider summands without $\Delta$-support at $i_s$.
Let $X\oplus Y$ be as in the previous lemma. If $L$ is a summand of
$X(e^s)$ without $\Delta$-support at $i_s$, then $Hom(L,X\oplus
Y)=Hom(L,M\oplus N)$ and $Hom(X\oplus Y,L)=Hom(M\oplus N,L)$ and so
$L\oplus X\oplus Y$ is rigid, by Lemma \ref{charlemma}. Similarly,
if $L'$ is a summand of $X(d^s)$ without $\Delta$-support at $i_s$,
then $L\oplus L'$ and $L'\oplus X\oplus Y$ are rigid. Therefore, by
induction we construct a rigid $k\Delta$-projective $D$-module
$X(e^{s+1})$, with $\Delta$-dimension vector $e^{s+1}$. Finally,
using that $d=e^{t+1}$ we have the following.
\begin{theorem} \label{JSY} [Jensen-Su-Yu]
$X(d)$ is a rigid $k\Delta$-projective $D$-module with
$\Delta$-dimension vector $d$.
\end{theorem}
By the construction we see that $X(d)$ has a multiplicative basis
obtained by gluing the multiplicative bases of the linear pieces
$X(e^s)$.
We use this basis to give $X(d)$ a grading as follows. Let $E$ be
the quotient of $D$ by the ideal $\mathcal{J}$ generated by all
paths of the form $\alpha\beta^*$ for $\alpha, \beta\in \Delta_1$.
Then $E$ is a string algebra \cite{Br}. We note that
$D=\oplus_{i\geq 0}D^i$ is a graded algebra by letting the degree of
a path $p$ be the biggest $i$ such that $p\in \mathcal{J}^i$. This
grading on $D$ induces a grading on any indecomposable rigid
$k\Delta$-projective $D$-module $X=\oplus_{i\geq 0}X^i$, in such a
way that the graded component $(X^i)_u$ identifies with
$(\mathcal{J}^iX/\mathcal{J}^{i+1}X)_u$, in particular, $X^0\cong
E\otimes_DX$. Furthermore, the basis of a graded component is a
subset of the multiplicative basis of $X$. Also, for
indecomposable rigid $k\Delta$-projective $D$-modules $X$ and $Y$,
$Hom_D(X,Y)=\oplus_i Hom^{i}_D(X,Y)$, where $Hom^{i}_D(X,Y)$ denotes
the degree $i$ homomorphisms from $X$ to $Y$.
\subsection{Descending chains} \label{extradescription}
The construction for the linear case shows that rigid modules are in
bijection with descending chains of subsets of the set of vertices
$\{1,\cdots,n\}$. We generalize this fact to quivers of type
$\mathbb{A}$ with arbitrary orientation.
Let $u\in \{1,\cdots,n\}$ with $i_{v-1}\leq u\leq i_{v}$ and let
$I\subseteq \{1,\cdots,n\}$ such that $I$ contains at least one
element $z\in I$ with a path in $\Delta$ from $z$ to $u$. Let
$I_{v-t}=\{i_{v-t-1},\cdots, i_{v-t}\}\cap I$ if $t$ is even and
$I_{v-t}=(\{i_{w}|i_{w}\in I\}\cap \{i_{v-t},i_{v-t-1}\})\cup
(\{i_{v-t-1}+1,\cdots,i_{v-t}-1\}\cap I^c)$ if $t$ is odd. Here
$I^c$ denotes the complement of $I$ in $\{1,\cdots,n\}$. We say that
$I$ is connected if $I_{w-1}\neq \emptyset \neq I_{w+1}$ implies
$i_{w-1}\in I_{w-1}$ and $i_{w}\in I_{w+1}$ for all $w$. Let $X_I$
be the indecomposable rigid $k\Delta$-projective $D$-module obtained
by gluing the modules $X(I_{v-t})$. The following lemma follows
immediately from Theorem \ref{JSY}.
\begin{lemma} \label{chainlemma}Let $u\in \{1,\cdots,n\}$. The
correspondence $I\mapsto X_I$ induces a bijection between
descending chains of subsets of $\{1,\cdots,n\}$ where each subset
is connected and contains at least one element $z$ with a path in
$\Delta$ from $z$ to $u$, and rigid modules with all indecomposable
summands supported at $u$.
\end{lemma}
We recover the linear case by letting $u$ be the sink
vertex. We now define a total order which will be important in the
next section.
\begin{definition}
If $J\subseteq I$ then $X_I\leq_u X_J$.
\end{definition}
\section{One point extension of $\mathbb{A}$}
Let $\Delta$ be a quiver obtained from a quiver of type
$\mathbb{A}_{n-1}$, with vertices $\{2,\cdots,n\}$, by attaching a
vertex $1$ to an interior vertex $u$ in $\mathbb{A}_{n-1}$ with one
arrow which we denote by $\gamma$.
There are eight possible orientations at the vertex $u$. These are
$$
\xymatrix{
& A) & 1 & & E) & 1 \ar[d]& & \\
& u-1 \ar[r] & u \ar[r] \ar[u] & u+1 & u-1 & u \ar[l] & u+1 \ar[l]& \\
& B) & 1 & & F) & 1 \ar[d] & & \\
& u-1 & u \ar[l] \ar[u] \ar[r] & u+1 & u-1\ar[r] & u & u+1\ar[l] & \\
& C) & 1\ar[d] & & G) & 1 & & \\
& u-1 & u \ar[r] \ar[l] & u+1 & u-1 \ar[r]& u\ar[u] & u+1\ar[l] & \\
}$$ $$ \xymatrix{
& D) & 1 & & H) & 1 \ar[d] & & \\
& u-1 & u \ar[l] \ar[u] & u+1 \ar[l] & u-1 \ar[r]& u \ar[r]& u+1 &}
$$
We will study generic $AutP$-orbits in the three cases $A),B)$ and
$C)$. This is sufficient, since $D)$ follows from $A)$, and
$E),F),G)$ and $H)$ are dual to $A),B),C)$ and $D)$, respectively.
\subsection{Case $A$}
Denote by $\Gamma$ the full subquiver of $\Delta$ supported on the vertices
$\{2,\cdots,n\}$. Also let $\Delta'$ be the quiver with the same
underlying graph as $\Delta$, and orientation $i\rightarrow i+1$ for
$i=2,\cdots,u-1$ and $i\leftarrow i+1$ for $i=u,\cdots,n-1$, and the
orientation of $\gamma$ is from $u$ to $1$.
Let $d\in \mathbb{N}^n$ be a vector and let $d'$ be the vector given
by $(d')_1=0$ and $(d')_i=d_i$ for $i\neq 1$. By the previous
section we can construct a rigid module $Y(d')$ for the double
quiver of $\Gamma$ with $\Delta$-dimension vector $d'$.
Let $N^1,\cdots ,N^p$ be representatives of
isomorphism classes of the indecomposable summands of $Y(d')$
supported at $u$ and ordered such that $N^i<_uN^{i+1}$, and let
$n_i$ denote the multiplicity of $N^i$ as a summand in $Y(d')$.
We extend $Y(d')$ to a rigid $k\Delta$-projective $D$-module $X(d')$
with $\Delta$-dimension vector $d'$ as follows. For each
indecomposable summand $Y=N^i$ of $Y(d')$, we construct a
corresponding indecomposable $k\Delta$-projective $D$-module
$X=M^i$. Let $X_1=Y_u$, $X_w=Y_w$ for
$w\neq 1$, $X_\alpha=Y_\alpha$ for any $\alpha\in \Gamma_1\cup
\Gamma^*_1$, $X_\gamma=Id$. If $\beta:u-1\rightarrow u\in \Delta$,
then the equation $\gamma^*\gamma=\beta\beta^*$ uniquely determines
the action of $\gamma^*$, and so $X_{\gamma^*}$ maps
$(X^i)_u=(Y^i)_u$ onto $(X^{i+1})_u$ for all $i$, where $Y^i$ is the
$i$th graded component in the grading induced by the ideal
$\mathcal{J}$. By the construction we see that $X$ is
$k\Delta$-projective. Finally, let $$X(d')=Y'\oplus (\oplus_i
{(M^i)}^{n_i}),$$ where $Y'$ consists of all summands in $Y(d')$ not
supported at $u$. The extension of $Y(d')$ to $X(d')$ preserves the
$\Delta$-dimension vector and the dimension of the endomorphism
ring, and so $X(d')$ is rigid by Lemma \ref{charlemma}. Moreover,
$X(d')$ has a grading induced by the grading on $Y(d')$.
Let $End^{0}(X(d'))_u$ denote the subspace of maps which are
restrictions $f_{|X(d')^0_u}:X(d')_u^0\rightarrow X(d')_u^0$ for
$f:X(d')\rightarrow X(d')$ with $f(X(d')^0_u)\subseteq X(d')^0_u$.
The action of $Aut(X(d'))$ on $X(d')$ induces an action of
$Aut^0(X(d'))_u$ on $X(d')^0_u$, where $Aut^0(X(d'))_u \subseteq
End^0(X(d'))_u$ consists of the invertible maps.
\begin{lemma} \label{actionlemma2}The pair $(Aut^0(X(d'))_u\times
Gl_{d_1},Hom(k^{d_1},X(d')^0_u))$ is a generic section of
$(G(d),Rep(D,P(d)))$.
\end{lemma}
\begin{proof}
A $k\Delta$-projective $D$-module $X$ has a unique submodule
$X'\subseteq X$ with $\Delta$-support in $\{2,\cdots,n\}$, generated
by $X_i$ for $2\leq i \leq n$. We want
to parameterize the $k\Delta$-projective $D$-modules $X$ with
$_{k\Delta}X=P(d)$ and $X'=X(d')$.
By choosing a basis the structure of $X=X(c',c'')$ at the subquiver
supported at $1$ and $u$ is
$$\xymatrix{X(d')^0_u\oplus (JX(d'))_u \ar@/^1pc/[rrr]^{\left(
\begin{matrix}0 & Id & 0\\ 0 &
0 & Id \end{matrix}\right)^{tr}} & & & k^{d_1}\oplus X(d')^0_u\oplus
(JX(d'))_u \ar@/^1pc/[lll]^{\left(\begin{matrix}c' & 0 & 0
\\ c'' & z_1 & z_2\end{matrix}\right)} }, $$
where $X(d')_{\gamma^*}=\left(\begin{matrix}0 & 0 \\ z_1 &
z_2\end{matrix}\right)$ and $(z_1 \; z_2)$ is surjective by the
construction of $X(d')$. The set-wise stabilizer at $u$ and $1$ of
the subset containing all modules $X(c',c'')$ is
$$\{(a,\left(\begin{matrix}g & 0 \\ b &
a\end{matrix}\right))|a\in Aut(X(d'))_u, g\in Gl_{d_1} \mbox{ and }
b:k^{d_1}\rightarrow X(d')_u \}$$
Then using column operations we see that $X(c',c'')\cong X(c',0)$.
Moreover $X(c',0)\cong X(c,0)$ if and only if $c$ and $c'$ are
conjugate under the action of $Aut^0(X(d'))_u\times Gl_{d_1}$. Any
element in $Aut^0(X(d'))_u$ induces an automorphism of $X'$, and so
we can view $Aut^0(X(d'))_u$ as a subgroup of $Aut(X(d'))_u$. This
shows that $(Aut^0(X(d'))_u\times Gl_{d_1},Hom(k^{d_1},X(d')^0_u))$
is a section of $(G(d),Rep(D,P))$. It is generic because $X(d')$ is
rigid.
\end{proof}
We compute $Aut^0(X(d'))_u$.
\begin{lemma} \label{maplemma} Let $i,j\in \{1,\cdots,p\}$. Then
$Hom^0(M^i,M^j)_u=k$ if and only if, either
\begin{itemize}
\item[a)] $i\geq j$ and $(dim_\Delta({M^i}))_w =
(dim_\Delta({M^j}))_w$ for all $w\leq u$, or
\item[b)] $i<j$ and $(dim_\Delta({M^i}))_w =
(dim_\Delta({M^j}))_w$ for all $w>u$,
\end{itemize}
otherwise $Hom^0(M^i,M^j)_u=0$. Moreover, any composition of
non-zero maps is non-zero.
\end{lemma}
We construct a $\Delta'$-representation $Z(d')$ with
$End(Z(d'))_u\cong End^0(X(d'))_u$ and dimension vector denoted by
$e(d')$, with $e(d')_1=0$. For each segment $[i,j]$ with $2\leq i
\leq j \leq n$ there is an associated indecomposable
$\Delta'$-representation $M[i,j]$ supported on
$\{i,\cdots,j\}$. We construct the $\Delta'$-representation
$Z(d')=\oplus (Z^i)^{n_i}$ as follows. Let $Z^1=M[u,n]$. Given
$Z^i=M[j,j']$, let $Z^{i+1}=M[j-1,j']$ if there is a map
$M^i\rightarrow M^{i+1}$, $Z^{i+1}=M[j,j'-1]$ if there is a map
$M^i\leftarrow M^{i+1}$, and $Z^{i+1}=M[j-1,j'-1]$ otherwise.
\begin{lemma}
We have $2\leq j\leq u\leq j' \leq n$ for any summand $Z^i=M[j,j']$
in $Z(d')$.
\end{lemma}
\begin{proof}
The inequalities $j\leq u$ and $j'\leq n$ are trivial. Now by the
description of $X(d')$ from Lemma \ref{chainlemma} there are at most
$u-2$ cases where either there is a map $M^i\rightarrow M^{i+1}$ in
$Hom^0(M^i,M^{i+1})_u$, or there is no map in either direction.
Therefore $2\leq j$. Similarly, $u\leq j'$.
\end{proof}
Let $e(d)$ denote the dimension vector $e(d)=(d_1,e(d')_2,\cdots,
e(d')_n)$. Let $\Gamma'$ be the full subquiver of $\Delta'$ supported on $\{2,\cdots,n\}$.
\begin{lemma} \label{actionlemma}
The pair $(Aut(Z(d')_u)\times Gl_{d_1},Hom(k^{d_1},Z(d')_u))$ is a
generic section of \\ $(Gl(e(d)),Rep(\Delta',e(d))$.
\end{lemma}
\begin{proof}
The $Aut(Z(d'))\times Gl_{d_1}$-orbits in $Hom(k^{d_1},Z(d')_u)$
parameterize the representations of $\Delta'$ whose restriction to
$\Gamma'$ equals $Z(d')$. The lemma follows if we can prove that
$Z(d')$ is a rigid $\Delta'$-representation.
The full subquiver $\Gamma'$ has a
sink $u$. Let $M=M[i,j]\oplus M[i',j']$, for $1\leq i,i' \leq u \leq
j,j' \leq n$. Then $Ext^1(M[i,j],M[i',j'])\neq 0$ if and only if
$i<u<j$ and $[i',j']\subseteq [i+1,j-1]$. The lemma follows.
\end{proof}
Finally, we have the following lemma.
\begin{lemma} \label{casea} The pairs
$(Gl(e(d)),Rep(\Delta',e(d)))$ and $(G(d),Rep(D,P(d)))$ are
generically equivalent.
\end{lemma}
\begin{proof}
There are isomorphisms $(Z^i)_u\rightarrow (M^i)^0_u$, which extend
to an isomorphism $$Hom(k^{d_1},Z(d')_u)\rightarrow
Hom(k^{d_1},X(d')^0_u)$$ of vector spaces. By the construction
$Hom^0(M^i,M^j)_u\cong Hom(Z^i,Z^j)_u$, and by Lemma \ref{maplemma}
we have an isomorphism $Aut(Z(d'))_u\rightarrow Aut^0(X(d'))_u$.
Therefore there is a commutative diagram
$$
\xymatrix{ (Aut(Z(d'))_u\times Gl_{d_1}) \times Hom(k^{d_1},Z(d')_u)
\ar[r] \ar[d] & (Aut^0(X(d'))_u\times Gl_{d_1}) \times
Hom(k^{d_1},X(d')^0_u)\ar[d] \\ Hom(k^{d_1},Z(d')_u) \ar[r] &
Hom(k^{d_1},X(d')^0_u) }
$$
where the vertical maps are actions and the horizontal maps are
isomorphisms, and the two actions are equivariantly isomorphic. The
lemma now follows from Lemmas \ref{actionlemma2} and
\ref{actionlemma}.
\end{proof}
\subsection{Case $B$}
Let $\Gamma, \gamma, \Delta',d,d'$ and $Y(d')$ be defined as in
Case $A$. We extend $Y(d')$ to a rigid $k\Delta$-projective
$D$-module $X(d')$ with $\Delta$-dimension vector $d'$ as follows.
Let $Y=N^i$ be an indecomposable summand of $Y(d')$ supported at
$u$. We construct a corresponding indecomposable $k\Delta$-projective $D$-module
$X=M^i$ and let $X(d')=Y'\oplus (\oplus {(M^i)}^{n_i})$, where $Y'$ consists of all
summands of $Y(d')$ not supported at $u$. Let $X_1=Y_u$, $X_w=Y_w$
for $w\neq 1$, $X_\alpha=Y_\alpha$ for any $\alpha\in \Gamma_1\cup
\Gamma_1^*$, $X_\gamma=Id$, and $X_{\gamma^*}=0$. The extension of
$Y(d')$ to $X(d')$ preserves the $\Delta$-dimension vector and the
dimension of the endomorphism algebra, and so $X(d')$ is rigid by
Lemma \ref{charlemma}. Moreover, $X(d')$ has a grading induced by
the grading on $Y(d')$.
The following lemma is proved similarly to Lemma \ref{actionlemma}
in Case $A$, and so we skip the details.
\begin{lemma} \label{actionlemmab} The pair
$(Aut^0(X(d'))_u\times Gl_{d_1},Hom(k^{d_1},X(d')^0_u))$ is a
generic section of $(G(d),Rep(D,P(d)))$.
\end{lemma}
We compute $Aut^0(X(d'))_u$. The vertex $u$ is a source in $\Gamma$
and so we may assume that $u=i_v$ is equal to the $v$th admissible
vertex of $\Gamma$.
\begin{lemma} \label{map1} Let $i,j\in \{1,\cdots,p\}$. Then
$Hom^0(M^i,M^j)_u=k$ if and only if, either
\begin{itemize}
\item[a)] $i<j$ and $(dim_\Delta{M^i})_w = (dim_\Delta{M^j})_w$
for all $w<u$, or
\item[b)] $i\geq j$ and $(dim_\Delta{M^i})_w = (dim_\Delta{M^j})_w$
for all $w>u$,
\end{itemize}
otherwise $Hom^0(M^i,M^j)_u=0$. Moreover, any composition of
non-zero maps is non-zero.
\end{lemma}
Using a procedure similar to that in Case $A$, we construct a
$\Delta'$-representation $Z(d')$ such that $End(Z(d'))_u\cong
End^0(X(d'))_u$, its dimension vector $e(d')$ is zero at vertex $1$,
and each indecomposable summand is of the form $M[i,j]$ for a segment $[i,j]$
in $2,\cdots,n$. Again, let $e(d)=(d_1,e(d')_2,\cdots,
e(d')_n)$.
As in Case $A$, we have the following lemma.
\begin{lemma} \label{b} The pairs
$(Gl(e(d)),Rep(\Delta',e(d)))$ and $(G(d),Rep(D,P(d)))$ are
generically equivalent.
\end{lemma}
\subsection{Case $C$}
Let $\Gamma$ be the full subquiver of $\Delta$ supported on the
vertices $\{1,\cdots,u\}$. Moreover, in this section we assume that
$n=u+2$. In particular $\Delta$ could be of type $\mathbb{E}_6,
\mathbb{E}_7$ or $\mathbb{E}_8$. Let $\alpha$ denote the arrow
$u\rightarrow u+1\in \Delta$, and let $\beta\in \Delta_1$ denote the
arrow between $u+1$ and $u+2$, which could be of either orientation.
That is, either $u+2$ is a sink, or it is a source. Let $\Delta'$ be
the quiver with the same underlying graph as $\Delta$, and with
orientation $i-1\rightarrow i$ for $i\leq u$, $u\leftarrow u+1
\leftarrow u+2$ and $1\rightarrow u$.
Let $d\in \mathbb{N}^n$ be a dimension vector, and let $d'$ be given
by $d'_i=0$ for $i=u+1,u+2$ and $d'_i=d_i$ otherwise. Let $Y(d')$ be
a rigid module for the double quiver of $\Gamma$. Let $N^1,\cdots
,N^p$ be representatives of isomorphism classes of the
indecomposable summands of $Y(d')$ supported at $u$ and ordered such
that $N^i<_uN^{i+1}$, and let $n_i$ denote the multiplicity of $N^i$
as a summand in $Y(d')$.
We extend $Y=Y(d')$ to a rigid $k\Delta$-projective $D$-module
$X=X(d')$ with $\Delta$-dimension vector $d'$ as follows. Let
$X_v=Y_v$, for $v\leq u$ and $X_{u+1}=Y_u$, $X_\sigma=Y_\sigma$ for
any $\sigma\in \Gamma_1\cup \Gamma^*_1$, $X_\alpha=Id$, and
$X_{\alpha^*}=Y_{\gamma}Y_{\gamma^*}$. If $u+2$ is a source, then
$X_{u+2}=0$, and if $u+2$ is a sink then $X_{u+2}=Y_u$, $X_\beta=Id$
and $X_{\beta^*}=X_{\alpha^*}$. For each indecomposable summand
$Y=N^i$ of $Y(d')$, we let $X=M^i$ denote the corresponding
indecomposable summand of $X(d')$. The extension of $Y(d')$ to
$X(d')$ preserves the $\Delta$-dimension vector and the dimension of
the endomorphism ring, and so $X(d')$ is rigid by Lemma
\ref{charlemma}. Moreover, $X(d')$ has a grading induced by the
grading on $Y(d')$.
Let $d''$ be the dimension vector supported on $\{u+1,u+2\}$, given by $d''_i=0$ for $i\leq u$
and $d''_i=d_i$ otherwise. Let $X(d'')$ be the rigid $D$-module with
$\Delta$-dimension vector $d''$, which is supported on the vertices
$\{u+1,u+2\}$.
If $u+2$ is a sink, let $V=soc(X(d''))$, let $H_V$ consist of
restrictions $f|_V$ for $f\in Aut(X(d''))$, let $W=X(d')^0_{u+1}$
and let $H_W=Aut^0(X(d'))_{u+1}$. If $u+2$ is a source, let
$V=(X(d'')/radX(d''))_{u+1}\cong k^{d_{u+1}}$, let $H_V$ consist of
induced maps $\overline{f}:(X(d'')/radX(d''))_{u+1}\rightarrow
(X(d'')/radX(d''))_{u+1}$ for $f\in Aut(X(d''))$, let
$W=X(d')^0_{u}$ and let $H_W=Aut^0(X(d'))_{u}$.
\begin{lemma} The pair
$(H_V\times H_W,Hom_k(V,W))$ is a generic section in
$(G(d),Rep(D,P(d)))$.
\end{lemma}
\begin{proof}
We only consider the case where $u+2$ is a sink, as the other case
is similar.
Associated to any rigid $k\Delta$-projective module $X$ with
$\Delta$-dimension vector $d$ there is a unique submodule
$X'\subseteq X$ with $X'\cong X(d')$ and quotient $X/X'\cong
X(d'')$, since $X(d')$ and $X(d'')$ are rigid.
We decompose $X(d')=M\oplus N\oplus L$, where $M$ consists of all
summands of $X(d')$ with $\Delta$-support at both $1$ and $u$, $L$
consists of all summands not supported at $u$, and $N$ consists of
all other summands. That is, $N$ consists of all summands of $X(d')$
with $\Delta$-support at either $1$ or $u$, but not both. Let
$W_1=(M^0)_{u+1}$, $W_2=(M^1)_{u+1}$ and $W_3=(N^0)_{u+1}$. We have
$W=W_1\oplus W_3$. Let $X(d'')=R\oplus S$, where $R$ consists of all
summands with $\Delta$-support at both $u+1$ and $u+2$, and $S$
consists of all other summands. Let $V_1=(R^0)_{u+2}$,
$V_2=(R^1)_{u+2}$ and $V_3=S_{u+2}$, the linear space of $S$ at vertex $u+2$.
We have $V=V_2\oplus V_3$.
By the relations of $D$, we see that $X=X(c)$ is determined by the
maps between the vertices $u+1$ and $u+2$ which up to isomorphism
have the form $$\xymatrix{X(d')_{u+1} \oplus X(d'')_{u+1}
\ar@/^1pc/[rr]^{\left(\begin{matrix}Id & 0 \\ 0 &
X(d'')_\beta\end{matrix}\right)} & & X(d')_{u+2} \oplus X(d'')_{u+2}
\ar@/^1pc/[ll]^{\left(\begin{matrix}X(d')_{\beta^*} & c \\ 0 &
X(d'')_{\beta^*} \end{matrix}\right)}}$$ where
$c=(c_{ij})_{ij}:\xymatrix{V_1 \oplus V_2 \oplus V_3\ar[r] &
W_1\oplus W_2 \oplus W_3}.$
A computation shows that $X(c)\cong X(c')$ where
$$c'=\left(\begin{matrix}c'_{11} & c_{12} & c_{13} \\ 0 & 0 & 0 \\
0 & c_{32} & c_{33} \end{matrix}\right).$$ Let $a:W_1\rightarrow
W_2$ and $b:V_1\rightarrow V_2$ be restrictions of automorphisms of
$X(d')$ and $X(d'')$, respectively. Then $X(c')\cong X(c'')$ with
$c''_{11}=c'_{11}+(M_{\alpha}M_{\alpha^*})^{-1}ac_{12}
R_{\beta}R_{\beta^*}-c_{12}b$, and $c''_{ij}=c'_{ij}$ otherwise. By
choosing bases we may assume that the matrices of
$M_{\alpha}M_{\alpha^*}$ and $R_{\beta}R_{\beta^*}$ are identities, i.e.
we identify $W_1,V_1$ with $W_2,V_2$, respectively. Moreover,
$b$ can be any square matrix, and the set of matrices $a$
includes all upper triangular matrices, since whenever $i\leq j$ we
have $Hom^0(M^j,M^i)_{u+1}\neq 0$ and $Hom^1(M^i,M^i)_{u+1}\neq 0$.
The map $$\phi:Gl_{dimV_1}\times B_{dimW_1}\rightarrow
Mat_{dimV_1\times dimW_1}, (b,a)\mapsto ac_{12}b^{-1}$$ is dominant
when $c_{12}$ is generic, where $B_{dimW_1}$ consists of all invertible
upper triangular $dimW_1\times dimW_1$-matrices. This is because $(\mathfrak{gl}_{dimV_1}\times
\mathfrak{b}_{dimW_1}, Mat_{dimV_1\times dimW_1})$ is a generic section in
$(Gl(f),Rep(\mathbb{A}_{dimW_1+1},f))$ for a dimension vector $f$.
Hence the induced map of $\phi$ on tangent spaces, $(b,a)\mapsto
ac_{12}-c_{12}b$, is surjective. Since $X(d')$ is rigid,
$c_{12}$ is generic, and so there exist $a$ and $b$ such that
$c'_{11}+ac_{12}-c_{12}b=0$.
So $X(c'')\cong X(c''')$ where $c'''_{11}=0$ and
$c'''_{ij}=c''_{ij}$ otherwise. That is, we may consider $c'''$ as a
map $V\rightarrow W$. Now for $c_1,c_2:V\rightarrow W$ we have
$X(c_1)\cong X(c_2)$ if and only if they are conjugate under the
action of $H_V\times H_W$. The lemma follows.
\end{proof}
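The key step above is that the tangent map of $\phi$, $(b,a)\mapsto ac_{12}-c_{12}b$, is surjective for generic $c_{12}$. For small (hypothetical) dimensions this can be checked numerically: the sketch below stacks the images of a basis of $\mathfrak{b}_{\dim W_1}\times\mathfrak{gl}_{\dim V_1}$ and verifies that they span the full matrix space.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 3, 2                       # hypothetical dim W_1 and dim V_1
c = rng.standard_normal((p, q))   # a "generic" c_12, viewed as a p x q matrix

cols = []
# images of a basis of the upper triangular p x p matrices (the "a" part)
for i in range(p):
    for j in range(i, p):
        a = np.zeros((p, p))
        a[i, j] = 1.0
        cols.append((a @ c).ravel())
# images of a basis of gl_q (the "b" part)
for i in range(q):
    for j in range(q):
        b = np.zeros((q, q))
        b[i, j] = 1.0
        cols.append((-c @ b).ravel())

# surjectivity of (b, a) -> a c - c b onto Mat_{p x q}
assert np.linalg.matrix_rank(np.column_stack(cols)) == p * q
```

Full rank of the stacked images is exactly the surjectivity of the differential at $c_{12}$, which is what makes $\phi$ dominant.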
Similar to Case $A$, we construct a rigid $\Delta'$-representation
$Z(d')$ with dimension vector denoted by $e(d')$, where
$e(d')_{u+1}=0=e(d')_{u+2}$ and $e(d')_u=dim_kW$, and with summands
supported on intervals in $2,3,\cdots,u-1,u,1$. The construction
gives us a commutative diagram $$\xymatrix{Aut(Z(d'))_u\times
Z(d')_u \ar[d] \ar[r] & \ar[d] H_W\times W \\ Z(d')_u \ar[r] & W}$$
where the horizontal maps are isomorphisms and the vertical maps are
actions. Similarly, we construct a rigid $\Delta'$-representation
$Z(d'')$ supported at $u+1,u+2$. Let $e(d)$ be the dimension vector
of $Z(d')\oplus Z(d'')$. As before we have the following lemma.
\begin{lemma}
$(Aut(Z(d''))_{u+1}\times Aut(Z(d'))_{u},Hom(Z(d'')_{u+1},Z(d')_u))$
is a generic section in $(Gl(e(d)),Rep(\Delta',e(d)))$.
\end{lemma}
As in Case $A$, we conclude with the following lemma.
\begin{lemma} \label{c} The pairs
$(Gl(e(d)),Rep(\Delta',e(d)))$ and $(G(d),Rep(D,P(d)))$ are
generically equivalent.
\end{lemma}
\section{Main theorem}
We prove Theorem \ref{QuiverTheorem}, stated in the introduction.
\begin{theorem}
Let $\Delta$ be a quiver. Then there is a dense open $AutP$-orbit in
$radEndP$ for all projective representations $P$ if and only if
$\Delta$ is a Dynkin quiver.
\end{theorem}
\begin{proof}
First assume that $\Delta$ is a Dynkin quiver. By Theorem \ref{JSY}
we may assume that $\Delta$ is not of type $\mathbb{A}$ and we
assume that $\Delta$ is obtained from the quiver of type
$\mathbb{A}_{n-1}$, with vertices $\{2,\cdots,n\}$, by attaching a
vertex $1$ to an interior vertex $u$ of $\mathbb{A}_{n-1}$ with an
arrow which we denote by $\gamma$. We consider the possible
orientations $A$, $B$, $C$, $D$, $E$, $F$, $G$ and $H$ given at the
beginning of the previous section. In cases $A$, $B$ and $C$, the
theorem follows from Theorem \ref{Gabriel}, which is due to Gabriel,
and Lemmas \ref{casea}, \ref{b} and \ref{c}, respectively.
Case $D$ follows from Case $A$ by symmetry.
Finally, if $\Delta^{op}$ is the opposite quiver of $\Delta$, then
there is a commutative diagram $$\xymatrix{AutP\times radEndP\ar[r]
\ar[d] & \ar[d] (AutP)^{op}\times rad(EndP)^{op}
\\radEndP \ar[r] & rad(EndP)^{op}}$$ where the vertical
arrows are actions and the horizontal arrows are isomorphisms
$f\mapsto f^{op}$ and $(g,f)\mapsto ((g^{op})^{-1},f^{op})$.
Therefore the cases $E$, $F$, $G$ and $H$ follow from the cases $A$,
$B$, $C$ and $D$, respectively.
Conversely, let $\Delta$ be a non-Dynkin quiver. If there is a dense
open orbit for the action of $AutP$ on $radEndP$, then there is a
dense open orbit for the induced action of $AutP$ on
$radEndP/(radEndP)^2$. But, $AutP$-orbits in $radEndP/(radEndP)^2$
are naturally isomorphism classes of representations of
$\Delta^{op}$, with dimension vector equal to $d$ for $P=P(d)$.
Since $\Delta^{op}$ is not Dynkin, there are dimension vectors such
that the associated representation varieties do not have dense open
orbits. Hence, there are projective
$\Delta$-representations $P$ without dense open orbits in $radEndP$.
\end{proof}
Now Theorem \ref{LieTheorem} follows.
\begin{corollary}
Let $\Delta$ be a tree. There is an open dense $S(\Delta)(d)$-orbit
in the nilpotent radical $\mathfrak{n}(\Delta)(d)$ of $\mathfrak{s}(\Delta)(d)$ for all
$d$, if and only if $\Delta$ is a Dynkin quiver.
\end{corollary}
| {
"timestamp": "2010-03-10T02:00:33",
"yymm": "1002",
"arxiv_id": "1002.4432",
"language": "en",
"url": "https://arxiv.org/abs/1002.4432",
"abstract": "Let $Q$ be a connected quiver with no oriented cycles, $k$ the field of complex numbers and $P$ a projective representation of $Q$. We study the adjoint action of the automorphism group $\\Aut_{kQ} P$ on the space of radical endomorphisms $\\radE_{kQ}P$. Using generic equivalence, we show that the quiver $Q$ has the property that there exists a dense open $\\Aut_{kQ} P$-orbit in $\\radE_{kQ} P$, for all projective representations $P$, if and only if $Q$ is a Dynkin quiver. This gives a new characterisation of Dynkin quivers.",
"subjects": "Representation Theory (math.RT)",
"title": "Adjoint action of automorphism groups on radical endomorphisms, generic equivalence and Dynkin quivers"
} |
https://arxiv.org/abs/2012.14480 | On free subalgebras of varieties | We show that some results of L. Makar-Limanov, P. Malcolmson and Z. Reichstein on the existence of free associative algebras are valid in the more general context of varieties of algebras. | \section*{Introduction}
As stated in \cite[Conjecture~1.1]{Agata}, L. Makar-Limanov made the following conjecture:
\begin{conjecture}
Let $K$ be a field, $A$ be an associative $K$-algebra and $F$ be a field extension of $K$.
If $F\otimes_KA$ contains a free $K$-algebra on at least two free generators, then $A$ also contains
a free $K$-algebra on the same number of free generators.
\end{conjecture}
In \cite[Theorem~1(b)]{ZI}, Z. Reichstein proved that Makar-Limanov's conjecture holds true when
the field $K$ is uncountable:
\begin{theorem}\label{theo:Reichstein}
Let $K$ be an uncountable field, $A$ an associative $K$-algebra and $F$ a field extension of $K$. If
$F\otimes_KA$ contains a copy of a free (noncommutative) associative $K$-algebra, then so does $A$.
\end{theorem}
In his proof, Z. Reichstein made essential use of the following result by
L. Makar-Limanov and P. Malcolmson \cite[Lemma~1]{MLM}:
\begin{lemma}\label{lem:MLM}
Suppose that $K$ is a field
with prime subfield $K_0$ and $A$ is an associative $K$-algebra. Then $x_1,\dotsc,x_n\in A$
are the free generators of a noncommutative free $K$-subalgebra if, and only if, they are the free generators of a
free $K_0$-subalgebra.
\end{lemma}
On the other hand, A. Smoktunowicz proved in \cite[Theorem~1.2]{Agata} that this conjecture fails when $K$ is a countable
field. More precisely, she showed that for every countable field $K$, there is
an associative $K$-algebra $A$ without free subalgebras on at least two free generators and a
field extension $F$ of $K$ such that the algebra $F\otimes_K A$ contains a free $K$-algebra
on at least two free generators.
\medskip
The main aim of this paper is to illustrate the fact that similar phenomena about the existence of free algebras hold true in the context of varieties of (not necessarily associative) algebras.
Before giving more details, we fix some notation that will be used throughout. For unexplained terminology, the reader is referred to \cite[Chapter~1]{Ringsthatarenearly}.
Let $Y$ be a set and $K$ be a field. Let $X$ be a countable set of symbols
$X=\{x_1,x_2,\dotsc\}$ and
let $\mathfrak{M}$ be a variety of $K$-algebras with defining identities $I\subseteq K\{X\}$.
By $K\{Y\}$, we denote the free (nonassociative) $K$-algebra on $Y$. Thus, for any $K$-algebra $A$ and map $\theta\colon Y\rightarrow A$ there exists a unique
homomorphism $\Theta\colon K\{Y\}\rightarrow A$ which extends $\theta$.
We will denote by $K_\mathfrak{M}\{Y\}$ the free $K$-algebra in the variety $\mathfrak{M}$ with set of free generators $Y$. Thus, for any $K$-algebra $A\in\mathfrak{M}$ and map $\theta\colon Y\rightarrow A$ there exists a unique
homomorphism of $K$-algebras $\Theta\colon K_\mathfrak{M}\{Y\}\rightarrow A$ which extends $\theta$.
If
$K\subseteq F$ is a field extension, then $\mathfrak{M}_F$ denotes the variety of $F$-algebras with defining identities $I$. Then $F\otimes_K K_\mathfrak{M}\{Y\}$ is the free $F$-algebra of $\mathfrak{M}_F$ with set of free generators
$\{1\otimes y_1,\dotsc,1\otimes y_n\}$, if $Y=\{y_1,\dotsc,y_n\}$.
We will only consider homogeneous varieties $\mathfrak{M}$ of $K$-algebras. Hence, if $A\in \mathfrak{M}$, then the $K$-algebra
$F\otimes_K A\in \mathfrak{M}$.
\medskip
Let $K$ be a field and $\mathfrak{M}$ be a homogeneous variety of $K$-algebras.
We say that $\mathfrak{M}$ is an \emph{MLM variety} if the following holds:
for any field extension $K\subseteq F$, any $A\in\mathfrak{M}_F$, and any subset of at least two elements
$Y=\{y_1,\dotsc,y_n\}\subseteq A$ such
that the $K$-subalgebra of $A$ generated by $Y$ is the free $K$-algebra in the variety $\mathfrak{M}$ with
set of free
generators $Y$, the $F$-subalgebra of $A$ generated by $Y$ is the free $F$-algebra in the variety $\mathfrak{M}_F$
with set of free generators $Y$. The name MLM stands for
Makar-Limanov and Malcolmson. Note that Lemma~\ref{lem:MLM} states that the variety of associative $K$-algebras is MLM.
Suppose that $K$ is an uncountable field and $\mathfrak{M}$ is a homogeneous variety of $K$-algebras. We say that $\mathfrak{M}$ is a \emph{Reichstein variety} if, for any $A\in \mathfrak{M}$ and any field extension
$F$ of $K$ such that $F\otimes_K A$ contains a free $K$-algebra in the variety $\mathfrak{M}$ on at least
two free generators, $A$ contains a free $K$-algebra in the variety $\mathfrak{M}$ on the same
number of free generators. Note that Theorem~\ref{theo:Reichstein} shows that the variety of associative $K$-algebras is Reichstein.
In Section~\ref{conjecture}, we prove that if $K$ is an uncountable field, MLM varieties of $K$-algebras are Reichstein.
Also, as an easy consequence of \cite{Agata}, we show that if $K$ is a countable field and $\mathfrak{M}$
is either the variety of $K$-algebras generated by the special Jordan $K$-algebras or the
variety of Lie $K$-algebras, then Smoktunowicz's result holds. That is, there exist
a $K$-algebra $A$ in $\mathfrak{M}$ and a field extension $F$ of $K$ such that $A$ does
not contain a free $K$-algebra in $\mathfrak{M}$ on at least two free generators but
$F\otimes_K A$ contains a free $K$-algebra in $\mathfrak{M}$ on at least two free generators.
In Section~\ref{ESFS}, we show that if $K$ is a field, the following homogeneous varieties of $K$-algebras are MLM:
\begin{itemize}
\item The variety $\mathcal{A}_K$ of all $K$-algebras
\item The variety $\mathcal{L}_K$ of all Lie $K$-algebras
\item The variety $\mathcal{C}_K$ of all commutative $K$-algebras
\item The variety $\mathcal{AC}_K$ of all anticommutative $K$-algebras
\item The variety $\mathcal{SJ}_K$ generated by the special Jordan $K$-algebras.
\end{itemize}
We end this introduction by showing that not all homogeneous varieties of $K$-algebras are MLM. For example, the variety $\mathcal{T}riv_K$
of $K$-algebras which satisfy the identity $x_1x_2=0$ is not MLM.
This variety can be identified with the class of $K$-vector spaces,
with a $K$-basis of such a vector space corresponding to a free set of generators.
Let now $F$ be a nontrivial field extension of $K$,
and consider $A$, a one-dimensional $F$-algebra with basis $\{z\}\subset A$. Note that if $f_1,f_2\in F$ are $K$-linearly independent, then the $K$-subalgebra of $A$ generated by $Y=\{f_1z,f_2z\}$ is free on $Y$ of rank two. On the other hand,
the $F$-subalgebra of $A$ generated by $Y$ is not free on $Y$ because $f_1z,f_2z$ are $F$-linearly dependent.
Another example of a variety that is not MLM is the variety of
commutative and associative $K$-algebras. Indeed, consider the field of fractions $F$ of the polynomial ring $K[x,y]$ in two variables. The $K$-subalgebra of $F$ generated by $\{x,y\}$ is the free commutative associative $K$-algebra $K[x,y]$, but the $F$-subalgebra of $F$ generated by $\{x,y\}$ is not free on $\{x,y\}$.
\section{MLM varieties are Reichstein}\label{conjecture}
In the first part of this
section we prove results analogous to the ones in \cite{ZI} in the context of varieties of algebras.
The proofs are natural adaptations of the ones by Z. Reichstein.
The proof of the following result can be found in \cite[Lemma~1]{ZI}.
\begin{lemma}\label{L3}
Let $K$ be an uncountable field and let $X_1, X_2, \ldots $ be a countable number of Zariski closed subsets of $K^n$. If $\cup^\infty_{i=1}X_i=K^n$, then $X_i=K^n$ for some $i\geq 1$. \qed
\end{lemma}
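The proof of this lemma in \cite{ZI} is short; for completeness, here is a sketch of the standard induction on $n$ (a reconstruction, not Reichstein's exact wording).

```latex
\begin{proof}[Sketch]
For $n=1$, a proper Zariski closed subset of $K$ is finite, so a countable
union of proper closed subsets is countable and cannot cover the uncountable
field $K$. For $n>1$, set $H_c=\{x\in K^n : x_n=c\}$ for $c\in K$. If
$X_i\neq K^n$, choose $0\neq f_i$ vanishing on $X_i$; then
$H_c\subseteq X_i$ forces $(x_n-c)\mid f_i$, which happens for only finitely
many $c$. Since there are countably many $X_i$ and uncountably many $c$,
some $H_c$ is contained in no $X_i$. Then $H_c\cong K^{n-1}$ is covered by
the proper closed subsets $X_i\cap H_c$, contradicting the induction
hypothesis.
\end{proof}
```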
Let $K$ be a field and $F$ be a field extension of $K$. Suppose that $\mathfrak{M}$
is a homogeneous variety of $K$-algebras.
Let $A\in\mathfrak{M}$.
If $z\in F$ and $a\in A$, we shall write $za\in F\otimes_K A$ instead of $z\otimes a$.
Let $(a_{11},\dotsc,a_{1r_1})\in A^{r_1},\dotsc,(a_{n1},\dotsc,a_{nr_n})\in A^{r_n}$.
For $z_1=(z_{11},\dotsc, z_{1r_1})\in F^{r_1},\dotsc,\linebreak z_n=(z_{n1},\dotsc,z_{nr_n})\in F^{r_n}$, set
\begin{equation*}
a_{z_i}=\sum_{j=1}^{r_i}z_{ij}a_{ij}\in F\otimes_K A, \quad i\in\{1,\dotsc,n\}.
\end{equation*}
\begin{lemma}\label{lem:Zariskiclosed}
Let $Y=\{y_1,\dotsc,y_n\}, n\geq 2$, be a finite set. Let $f_1,\dotsc,f_m\in K\{Y\}$ be polynomials in $n$ variables.
Then the $n$-tuples $(z_1,\dotsc,z_n)\in F^{r_1+\dotsb+r_n}$ such that
$f_1(a_{z_1},\dotsc,a_{z_n}),\dotsc, f_{m}(a_{z_1},\dotsc,a_{z_n})\in F\otimes_K A$ are $F$-linearly dependent form
a Zariski closed subset of $F^{r_1+\dotsb+r_n}$ defined over $K$.
\end{lemma}
\begin{proof}
Let $d$ be the maximum of the degrees of $f_1,\dotsc,f_m$ and let
$e_1,e_2,\dotsc,e_s$ be a basis of the $K$-vector subspace of $A$ spanned by all the
possible evaluations in $\{a_{ij}\}_{i,j}$
of the monomials in $K\{Y\}$
of degree $\leq d$. Notice that $e_1,\dotsc,e_s$ are also $F$-linearly independent in $F\otimes_K A$.
Then, for $k=1,\dotsc,m$, we can write
\begin{equation*}
f_k(a_{z_1},\dotsc,a_{z_n})=\sum_{t=1}^s p_{kt}(z_1,\dotsc,z_n)e_t,
\end{equation*}
where each $p_{kt}$ is an associative and commutative polynomial in $r_1+\dotsb+r_n$ variables with coefficients in $K$.
If $m> s$, then the set consisting of the $(z_1,\dotsc,z_n)\in F^{r_1+\dotsb+r_n}$ such that \linebreak
$f_1(a_{z_1},\dotsc,a_{z_n}),\dotsc,f_m(a_{z_1},\dotsc,a_{z_n})$ are $F$-linearly dependent equals
$F^{r_1+\dotsb+r_n}$.
Suppose now that $m\leq s$. Then $f_1(a_{z_1},\dotsc,a_{z_n}),\dotsc,f_m(a_{z_1},\dotsc,a_{z_n})$ are $F$-linearly dependent
if, and only if, the $m\times s$ matrix $(p_{kt}(z_1,\dotsc,z_n))_{k,t}$ has rank $\leq m-1$. This is
equivalent to the vanishing of the $m\times m$ minors of this matrix. Each minor is a commutative and associative polynomial
in $z_{11},\dotsc,z_{1r_1},\dotsc,z_{n1},\dotsc,z_{nr_n}$ with coefficients in $K$.
\end{proof}
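Concretely, the minors in the last step cut out the dependence locus. A small sketch (with the hypothetical coordinate polynomials $p_{kt}$ taken to be independent variables, $m=2$, $s=3$) lists these minors and checks that a specialization with proportional rows lies in their common zero set.

```python
import itertools
import sympy as sp

m, s = 2, 3                      # m polynomials expressed in an s-element basis
z = sp.symbols('z0:6')
P = sp.Matrix(m, s, list(z))     # P[k, t] plays the role of p_{kt}(z)

# f_1, ..., f_m linearly dependent  <=>  rank(P) <= m - 1
# <=>  every m x m minor of P vanishes
minors = [P[:, list(cols)].det()
          for cols in itertools.combinations(range(s), m)]

# a specialization making the second row twice the first lies in the closed set
dep = {z[3]: 2*z[0], z[4]: 2*z[1], z[5]: 2*z[2]}
assert all(sp.expand(mm.subs(dep)) == 0 for mm in minors)
```

The three minors are exactly the polynomials whose common zeros form the Zariski closed subset of the lemma.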
With these lemmas we can prove the main result of this section.
\begin{theorem}\label{theo:MLMReichstein}
Let $Q\subseteq K\subseteq F$ be field extensions with $K$ uncountable,
$\mathfrak{M}$ be an MLM variety of $Q$-algebras and $A\in \mathfrak{M}_K$.
If $F\otimes_KA$ contains a free $Q$-algebra in the variety $\mathfrak{M}$ on a finite set of at least two free generators,
then so does $A$.
As a consequence, if $\mathfrak{M}$ is an MLM
variety of $K$-algebras, then $\mathfrak{M}$ is a Reichstein variety of $K$-algebras.
\end{theorem}
\begin{proof}
Suppose that the elements
\begin{equation*}
a_{u_i}=\sum_{j=1}^{r_i}u_{ij}a_{ij}\in F\otimes_K A,\quad i=1,\dotsc,n,
\end{equation*}
are the free generators of a free $Q$-algebra in the variety $\mathfrak{M}$ for some $(u_{11},\dotsc,u_{1r_1})\in F^{r_1},\dotsc,
(u_{n1},\dotsc,u_{nr_n})\in F^{r_n}$ and $a_{ij}\in A$. Since $\mathfrak{M}$ is MLM, these elements also generate a free $F$-algebra in the variety $\mathfrak{M}_F$ with free set of
generators $\{a_{u_1},\dotsc,a_{u_n}\}$.
Let $\{y_1,\dotsc,y_n\}$ be a finite set. Consider the free $K$-algebra $K\{y_1,\dotsc,y_n\}$
and the
free algebra $K_\mathfrak{M}\{y_1,\dotsc,y_n\}$ in the variety $\mathfrak{M}$ with
free set of generators $\{y_1,\dotsc,y_n\}$.
Consider the natural homomorphism of $K$-algebras
\begin{equation*}
\Phi\colon K\{y_1,\dotsc,y_n\}\rightarrow K_\mathfrak{M}\{y_1,\dotsc,y_n\}, y_i\mapsto y_i.
\end{equation*}
For each $d\geq 1$, fix monomials $m_{d1},\dotsc,m_{dt_d}$ of degree $d$ in the free algebra
$K\{y_1,\dotsc,y_n\}$ such that $\bigcup\limits_{d\geq 1} \{\Phi(m_{d1}),\dotsc,\Phi(m_{dt_d})\}$ is a $K$-basis of
$K_{\mathfrak{M}}\{y_1,\dotsc,y_n\}$.
For $p\geq 1$, let $X_p\subseteq F^{r_1+\dotsb+r_n}$ be the set of all $n$-tuples
\begin{equation*}
((z_{11},\dotsc,z_{1r_1}),\dotsc,(z_{n1},\dotsc,z_{nr_n}))
\end{equation*}
such that
$m_{d{l}}(a_{z_1},\dotsc,a_{z_n})$, $l=1,\dotsc,t_d$, $d\leq p$, are $F$-linearly dependent. We will write $r:=r_1+\dotsb+r_n$.
By Lemma~\ref{lem:Zariskiclosed}, $X_p$ is a closed subset of $F^r$ defined over $K$.
In order to prove the existence of the free algebra $K_\mathfrak{M}\{y_1,\dotsc,y_n\}$ in $A$, we must
show that all $m_{dj}(a_{z_1},\dotsc,a_{z_n})$ are $K$-linearly independent for some $(z_1,\dotsc,z_n)\in K^r$.
Assume the contrary: for every $(z_1,\dotsc,z_n)\in K^r$, the elements $a_{z_1},\dotsc,a_{z_n}$
are such that there exists $p\geq 1$ with
\begin{equation*}
\sum_{d\leq p,1\leq l\leq t_d} \lambda_{d_l}m_{d_l}(a_{z_1},\dotsc,a_{z_n})=0,\quad \textrm{for some }\lambda_{d_l}\in K.
\end{equation*}
In other words, $(z_1,\dotsc,z_n)\in X_p$. Hence $K^r=\bigcup\limits_{p\geq 1}X_p(K)$,
where $$X_p(K)=\{(z_1,\dotsc,z_n)\in K^r: m_{d_{l}}(a_{z_1},\dotsc,a_{z_n}),\, l=1,\dotsc,t_d,\, d\leq p, \textrm{ are } K\textrm{-l.d.}\}.$$
By Lemma~\ref{L3}, $X_p(K)=K^r$ for some integer $p$.
Note that $X_p(K)\subseteq X_p$. Now, since $K^r$ is Zariski dense in $F^r$, we get that
$X_p=F^r$. This is a contradiction, because the $a_{u_i}$ generate a free algebra
$F_{\mathfrak{M}_F}\{a_{u_1},\dotsc,a_{u_n}\}$.
Now, if $Q=K$, one obtains the last assertion of the theorem.
\end{proof}
\medskip
We end this section showing that Theorem~\ref{theo:Reichstein} does not hold for the variety of
Lie $K$-algebras and the variety generated by the special Jordan $K$-algebras when the field $K$ is countable.
Suppose that $K$ is any countable field. By \cite[Theorem~1.4]{Agata}, there exists
a field extension $F$ of $K$ and a nil associative $K$-algebra $A$ such that the
associative algebra $F\otimes_K A$ contains a non-commutative free $K$-algebra on
a set of free generators of at least two elements.
Consider the special Jordan $K$-algebra $A^{(+)}$. It is known that $A^{(+)}$ is a
special Jordan nil $K$-algebra. Hence $A^{(+)}$ does not contain a copy of $\mathcal{SJ}_K(Y)$, the free special Jordan $K$-algebra with set of free generators $Y$, where
$Y$ possesses at least two elements. Now, since $F\otimes_K A$ contains a non-commutative free $K$-algebra on
a set of free generators of at least two elements,
the special Jordan algebra $F\otimes_K A^{(+)}$ contains a copy of $\mathcal{SJ}_K(Y)$
where $Y$ has at least two elements.
Now consider the Lie $K$-algebra $A^{(-)}$. It is known that $A^{(-)}$ is a
Lie Engel $K$-algebra. Hence $A^{(-)}$ does not contain a copy of $\mathcal{L}_K(Y)$ where
$Y$ possesses at least two elements. Now, since $F\otimes_K A$ contains a non-commutative free $K$-algebra on
a set of free generators of at least two elements,
the Lie algebra $F\otimes_K A^{(-)}$ contains a copy of $\mathcal{L}_K(Y)$, the free Lie $K$-algebra with set of free generators $Y$,
where $Y$ has at least two elements.
\section{Examples of MLM varieties}\label{ESFS}
Our aim in this section is to show that some important varieties of algebras are MLM. Hence, when the ground field is uncountable, these varieties are Reichstein by Theorem~\ref{theo:MLMReichstein}.
To avoid repetitive arguments, we establish here a setup that will be invoked at the beginning of the proofs.
\begin{setup}\label{setup}
Let $K\subseteq F$ be a field extension and
$\mathfrak{A}$ be a variety of $K$-algebras. Let
$A$ be an $F$-algebra in $\mathfrak{A}_F$. Suppose that $Y=\{y_1,\dotsc,y_n\}\subseteq A$
is a set of at least two elements such that the $K$-subalgebra of $A$
generated by $Y$ is $K_{\mathfrak{A}}\{Y\}$, the free $K$-algebra in the variety $\mathfrak{A}$ with free set of
generators $Y$.
Consider the homomorphism of $F$-algebras
\begin{align*}
\mu \colon F\otimes_K K_\mathfrak{A}\{Y\} &\longrightarrow A\\
c\otimes p &\mapsto cp
\end{align*}
If we want to prove that $\mathfrak{A}$ is an MLM variety, we must show that $\mu$ is injective. We proceed as in the proof of \cite[Lemma~1]{MLM}.
Clearly $\mu(c\otimes p)=0$ if, and only if, $c$ is zero or $p$ is zero. Either way, $c\otimes p$ is zero.
Hence suppose that
\begin{align}\label{eq:muzero}
\mu\left(\sum_{i=1}^{n}{c_i\otimes p_i}\right)=0,
\end{align}
where $n>1$ is minimal. Note that the minimality of $n$ implies that the $c_i$'s and the $p_i$'s are linearly independent over $K$: a dependence, say $p_n=\sum_{i<n}\lambda_ip_i$ with $\lambda_i\in K$, would yield the shorter relation $\mu\left(\sum_{i<n}(c_i+\lambda_ic_n)\otimes p_i\right)=0$.
\end{setup}
\subsection{Variety of all $K$-algebras}
The homogeneous variety $\mathcal{A}_K$ of all $K$-algebras has the empty set of defining relations. Consider $K\{Y\}$, the free $K$-algebra on a set $Y$.
It is well known that every nonassociative word $w$ of degree at least two has a unique representation as a product of two nonassociative words. Hence one can introduce
a total ordering in the set $V(Y)$ of nonassociative words as follows. Order the words of length one (variables)
arbitrarily. Assuming that the words of length at most $n$, $n\geq 1$, have already been ordered in such a
way that words of smaller length precede words of greater length, then given two words $w_1,w_2\in V(Y)$ of length
$n+1$, each represented as a product of two nonassociative words of smaller length, $w_1=u_1v_1$ and $w_2=u_2v_2$, we
define $w_1<w_2$ if, and only if, $u_1<u_2$, or $u_1=u_2$ and $v_1<v_2$.
Observe that this total ordering
of the nonassociative words has the property that if $w_1<w_2$, then $ww_1<ww_2$ for all $w\in V(Y)$.
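This recursive comparison is easy to make concrete. The following sketch (ours, purely illustrative; nonassociative words are represented as nested pairs and generators as strings) implements the order just described:

```python
def length(w):
    # a nonassociative word is a generator (a string) or a pair (u, v)
    return 1 if isinstance(w, str) else length(w[0]) + length(w[1])

def less(w1, w2):
    """The total order on V(Y): shorter words precede longer ones;
    words of equal length > 1 compare via the unique factorization
    w = uv, first on u, then on v."""
    l1, l2 = length(w1), length(w2)
    if l1 != l2:
        return l1 < l2
    if isinstance(w1, str):          # both are generators: any fixed order
        return w1 < w2
    u1, v1 = w1
    u2, v2 = w2
    return less(u1, u2) if u1 != u2 else less(v1, v2)
```

One can check the translation-invariance property stated above: if $w_1<w_2$, then $(w,w_1)<(w,w_2)$.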
\begin{theorem}
The variety $\mathcal{A}_K$ is MLM.
\end{theorem}
\begin{proof}
Consider $\mathfrak{A}=\mathcal{A}_K$ and $\mathfrak{A}_F=\mathcal{A}_F$ in Setup \ref{setup}.
Suppose that, among the $p_i$'s, $p_n$ is one of greatest degree $\geq 1$ and with the greatest maximal word in its support.
If $\mu\left(\sum\limits_{i=1}^{n}{c_i\otimes p_i}\right)=\sum\limits_{i=1}^n c_ip_i =0,$
then,
\begin{eqnarray*}
0 & = & \left(\sum_{i=1}^nc_ip_i\right) p_n-p_n\left(\sum_{i=1}^n c_ip_i \right) \\
& = &
\sum_{i=1}^n c_i\left(p_i p_n - p_n p_i\right) = \sum_{i=1}^{n-1} c_i\left(p_i p_n - p_n p_i\right)
\end{eqnarray*}
Hence,
\begin{equation*}
\mu\left(\sum_{i=1}^{n-1} c_i\otimes \Big(p_i p_n - p_n p_i\Big)\right)=0.
\end{equation*}
By the minimality of $n$, we have
$0=p_i p_n-p_n p_i \textrm{ for }i=1,\dotsc,n.$
This equality implies that $p_i$ and $p_n$ have the same maximal word in their supports. Thus, there exists $\lambda_i\in K$
such that $p_i-\lambda_i p_n$ has a lesser maximal word in its support. Since
\begin{equation*}
(p_i-\lambda_i p_n)p_n -p_n(p_i-\lambda_i p_n)=0 \textrm{ for }i=1,\dotsc,n,
\end{equation*}
we obtain, repeating the previous argument (a nonzero $p_i-\lambda_i p_n$ would again share its maximal word with $p_n$), that $p_i=\lambda_ip_n$ for each $i=1,\dotsc,n$. This contradicts the fact that the
$p_i$'s are linearly independent over $K$.
\end{proof}
\subsection{Variety of commutative $K$-algebras}
The homogeneous variety $\mathcal{C}_K$ of commutative $K$-algebras has the
defining relation $x_1x_2-x_2x_1$. The free commutative $K$-algebra on a set $Y$ will be denoted by $\mathcal{C}_K\{Y\}$.
We will need a result and a definition from \cite{Shirshov}.
Let $Y$ be a nonempty set. Consider the words on $Y$. Words of length one will be called
\emph{regular} and ordered arbitrarily. Assuming that regular words of length less than
$n$, $n>1$, have been already defined and ordered in such a way that words of smaller
length precede words of greater length, a word $w$ of length $n$ will be called \emph{regular} if
\begin{enumerate}
\item $w=uv$ where $u$ and $v$ are regular words;
\item $u\geq v$.
\end{enumerate}
We order the regular words of length $n$ defined in this way, declaring that
$w_1=u_1v_1<w_2=u_2v_2$ if either $u_1<u_2$ or $u_1=u_2$ and $v_1<v_2$. Then
we declare the regular words of length $n$ to be greater
than regular words of smaller length.
By \cite[Theorem~1]{Shirshov}, the collection of all regular words forms a basis
of $\mathcal{C}_K\{Y\}$.
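For concreteness (an example of ours, not taken from \cite{Shirshov}): take $Y=\{x,y\}$ with $x<y$. Then the regular words of length at most three, in increasing order, are
```latex
\begin{align*}
&x < y && (\text{length }1),\\
&xx < yx < yy && (\text{length }2;\ xy\text{ is not regular, since }x<y),\\
&(xx)x < (xx)y < (yx)x < (yx)y < (yy)x < (yy)y && (\text{length }3).
\end{align*}
```
Note that every regular word of length three factors as $uv$ with $u$ of length two, since a length-one left factor would violate $u\geq v$.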
\begin{theorem}
The variety $\mathcal{C}_K$ is MLM.
\end{theorem}
\begin{proof}
Consider $\mathfrak{A}=\mathcal{C}_K$ and $\mathfrak{A}_F=\mathcal{C}_F$ in Setup \ref{setup}.
Suppose that, among the $p_i$'s, $p_n$ is one of greatest degree $\geq 1$ and with the greatest maximal word in its support.
If $\mu\left(\sum\limits_{i=1}^{n}{c_i\otimes p_i}\right)=\sum\limits_{i=1}^n c_ip_i =0,$
then, for all regular words $w$,
\begin{eqnarray*}
0 & = & \left(\left(\sum_{i=1}^nc_ip_i\right)w\right) p_n-(p_nw)\left(\sum_{i=1}^n c_ip_i \right) \\
& = &
\sum_{i=1}^n c_i\Big((p_iw) p_n - (p_nw) p_i\Big) = \sum_{i=1}^{n-1} c_i\Big((p_iw) p_n - (p_nw) p_i\Big).
\end{eqnarray*}
Hence,
\begin{equation*}
\mu\left(\sum_{i=1}^{n-1} c_i\otimes \Big((p_iw) p_n - (p_nw) p_i\Big)\right)=0,
\end{equation*}
for all regular words $w$.
By the minimality of $n$,
\begin{equation*}
0=(p_iw) p_n-(p_nw) p_i \textrm{ for all regular words $w$ and for }i=1,\dotsc,n.
\end{equation*}
From the commutativity of the product and the definition of regular words,
this equality implies that $p_i$ and $p_n$ have the same maximal regular word in their supports.
Thus, there exists $\lambda_i\in K$
such that $p_i-\lambda_i p_n$ has a lesser maximal regular word in its support. Since
$((p_i-\lambda_i p_n)w)p_n -(p_nw)(p_i-\lambda_i p_n)=0$ for all regular words $w$ and $i=1,\dotsc,n$,
we obtain, arguing as in the previous proof, that $p_i=\lambda_ip_n$ for each $i=1,\dotsc,n$, contradicting the linear independence of the $p_i$'s over $K$.
\end{proof}
\subsection{Schreier varieties satisfying $x^2=0$}
Let $\mathfrak{M}$ be a variety of $K$-algebras. We say that $\mathfrak{M}$
is a \emph{Schreier variety} if any subalgebra of a free algebra in $\mathfrak{M}$ is a free algebra in $\mathfrak{M}$.
The main
examples of homogeneous Schreier varieties are: the varieties of all algebras \cite{Kurosh47}, commutative and anticommutative algebras \cite{Shirshov}, Lie algebras \cite{Shirshov53} and \cite{Witt},
and algebras with zero multiplication.
Let $A$ be a free $K$-algebra in $\mathfrak{M}$. Let $(x_1,\dotsc,x_n),\, (y_1,\dotsc,y_n)\in A^n$. Let
$V$ and $W$ be the $K$-subspaces of $A$ generated by $(x_1,\dotsc,x_n)$ and $(y_1,\dotsc,y_n)$, respectively.
We say that a transformation $\tau\colon(x_1,\dotsc,x_n)\mapsto (y_1,\dotsc,y_n)$, $\tau(x_i)=y_i$, is an
\emph{elementary transformation} if either:
\begin{enumerate}
\item $\tau$ induces a nonsingular $K$-linear transformation between $V$ and $W$, or
\item $y_1=x_1,y_2=x_2,\dotsc,y_{n-1}=x_{n-1}$ and $y_n=x_n+u_n$ where $u_n$ belongs to the $K$-subalgebra
of $A$ generated by $x_1,\dotsc,x_{n-1}$.
\end{enumerate}
Observe that inverses of elementary transformations are also elementary transformations.
It is known that homogeneous Schreier varieties $\mathfrak{M}$ are \emph{Nielsen},
see for example \cite{Lewin} or \cite[Chapter~11]{MiShYu}. In other words,
if $A$ is a free algebra of $\mathfrak{M}$,
one can transform any finite set of elements $a_1,\dotsc,a_n\in A$
to a free set of generators of the free subalgebra generated by $a_1,\dotsc,a_n$
by using a finite number of elementary transformations and cancelling possible zero elements.
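A toy example of our own, to fix ideas: in the free algebra on $\{x,y\}$, the pair $(y,\ x+y^2)$ is carried to the free generating pair $(y,\ x)$ by a single elementary transformation of the second kind,
```latex
\begin{equation*}
(y,\ x+y^2)\ \longrightarrow\ \bigl(y,\ (x+y^2)-y^2\bigr)=(y,\ x),
\end{equation*}
```
since the subtracted element $u_2=-y^2$ lies in the subalgebra generated by the first component $y$.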
\begin{lemma}\label{L1}
Suppose that $\mathfrak{M}$ is a homogeneous Schreier variety of $K$-algebras
that satisfies the identity $x^2=0$ and does not satisfy
the identity $x_1x_2=0$. Let $Y$ be a set of at least two elements and
denote by $K_\mathfrak{M}\{Y\}$ the free algebra in the variety $\mathfrak{M}$ with set
of free generators $Y$. Suppose that $p,q\in K_\mathfrak{M}\{Y\}$, $q\neq 0$, such that $pq=0.$ Then
$p=\lambda q$ for some $\lambda\in K$.
\end{lemma}
\begin{proof}
Since $\mathfrak{M}$ is a Schreier variety, the subalgebra $B$ of $K_\mathfrak{M}\{Y\}$
generated by $p,q$ is a free subalgebra of $K_\mathfrak{M}\{Y\}$. There exist
elementary transformations
\begin{equation*}
(p,q)\rightarrow \dotsb \rightarrow (x,y),
\end{equation*}
such that $x,y$ are free generators of $B$, where $y$ may be zero.
If the rank of $B$ is two, then there exist elementary transformations (the inverses of the previous ones)
\begin{equation*}
(x,y)\rightarrow \dotsb \rightarrow (p,q).
\end{equation*}
Since $B$ is free on $x,y$, the elementary transformations induce automorphisms of $K$-algebras.
Hence there exists an automorphism of $B$ that sends $x\mapsto p$, $y\mapsto q$, so
$p,q$ are free generators of $B$. Since $pq=0$ and $p^2=q^2=0$, it follows that
the free algebra in the variety $\mathfrak{M}$ on two free generators has zero product.
Therefore, all the algebras in $\mathfrak{M}$ satisfy the identity $x_1x_2=0$, a contradiction.
Hence $B$ is a free algebra of rank one. Since $x^2=0$, the free subalgebra generated
by one nonzero element is one-dimensional. Now $B$ is generated by $q$, so $p=\lambda q$ for some $\lambda\in K$, as desired.
\end{proof}
Among the homogeneous Schreier varieties that satisfy the conditions of Lemma \ref{L1}, we highlight the following:
\begin{itemize}
\item The variety of Lie $K$-algebras $\mathcal{L}_K$. Its defining relations are: $x^2_1$ and $(x_1x_2)x_3+(x_2x_3)x_1+(x_3x_1)x_2$.
\item The variety of anticommutative $K$-algebras $\mathcal{AC}_K$. It has the defining relation: $x_1x_2+x_2x_1$.
\end{itemize}
\begin{theorem}\label{theo:Schreier}
Let $\mathfrak{M}$ be a homogeneous Schreier variety of $K$-algebras
that satisfies the identity $x^2=0$ and does not satisfy
the identity $x_1x_2=0$. Then $\mathfrak{M}$ is MLM. In particular,
the varieties $\mathcal{L}_K$ and $\mathcal{AC}_K$ are MLM.
\end{theorem}
\begin{proof}
Consider $\mathfrak{A}=\mathfrak{M}$ and $\mathfrak{A}_F=\mathfrak{M}_F$ in Setup \ref{setup}. Then
\begin{align*}
\mu\left(\sum_{i=1}^{n-1} c_i\otimes p_ip_n\right)=\mu\left(\sum_{i=1}^{n} c_i\otimes p_ip_n\right)=
\sum_{i=1}^n c_i p_ip_n=\left(\sum_{i=1}^{n}c_ip_i \right)p_n=0\cdot p_n=0.
\end{align*}
By the minimality of $n$, $p_ip_n=0$ for all $i=1,\dotsc,n$ (for $i=n$ this is just the identity $p_n^2=0$). By Lemma~\ref{L1}, and the fact that $p_n\neq 0$,
we get that $p_i=\lambda_i p_n$, where $\lambda_i\in K$ for $i=1,\dotsc,n$, a contradiction.
\end{proof}
\subsection{Variety generated by special Jordan algebras}
Let $K$ be a field.
If $A$ is an associative $K$-algebra, we define on the $K$-vector space $A$ a new multiplication $\circ$,
which is connected with the associative multiplication by the formula
$x\circ y=\frac{1}{2}(xy+yx)$.
In this way a new $K$-algebra is obtained and it is denoted by $A^{(+)}$. The $K$-algebra $A^{(+)}$
is a Jordan algebra (that is, satisfies $x_1x_2=x_2x_1$ and $(x_1^2x_2)x_1=x_1^2(x_2x_1)$).
If $J$ is a $K$-subspace of $A$ which is closed with respect to the operation
$\circ$, then $J$ together with $\circ$ is a subalgebra of $A^{(+)}$,
and consequently a Jordan algebra. Such a Jordan algebra is called a special Jordan algebra.
The variety generated by all special Jordan $K$-algebras will be denoted by $\mathcal{SJ}_K$ and it
consists of all $K$-algebras which can be obtained as homomorphic
images of special Jordan $K$-algebras.
Let $Y$ be a set. Consider $K\langle Y\rangle$, the free associative $K$-algebra on the set $Y$. The
$K$-subalgebra $\mathcal{SJ}_K(Y)$ of the algebra $K\langle Y\rangle^{(+)}$ generated by the set $Y$ is the free special Jordan $K$-algebra on $Y$ and it is the
free $K$-algebra on the set $Y$ in the variety $\mathcal{SJ}_K$.
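As a quick sanity check of our own (not part of the paper), one can verify the two Jordan axioms for $A^{(+)}$ on concrete $2\times 2$ matrices with exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(a, b):
    """2x2 matrix product over the rationals."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def circ(a, b):
    """Jordan product x∘y = (xy + yx)/2 on 2x2 matrices."""
    ab, ba = mat_mul(a, b), mat_mul(b, a)
    half = Fraction(1, 2)
    return [[half * (ab[i][j] + ba[i][j]) for j in range(2)] for i in range(2)]
```

Commutativity $x\circ y=y\circ x$ is immediate from the formula; the Jordan identity $(x^2\circ y)\circ x=x^2\circ(y\circ x)$ holds because $A^{(+)}$ is a (special) Jordan algebra for every associative $A$.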
\begin{theorem}
Let $K$ be a field of characteristic not two. Then $\mathcal{SJ}_K$ is MLM.
\end{theorem}
\begin{proof}
Consider $\mathfrak{A}=\mathcal{SJ}_K$ and $\mathfrak{A}_F=\mathcal{SJ}_F$ in Setup \ref{setup}.
Suppose that $p_n$ is, among the $p_i$'s, one of greatest degree. If it has degree zero, then all the $p_i$'s have
degree zero and the result follows from the case $n=1$, because there exist $a_i\in K$ such that
\begin{align*}
\mu\left(\sum_{i=1}^{n}{c_i\otimes p_i}\right)=\mu\left(\sum_{i=1}^n c_ia_i\otimes 1\right)=0
\end{align*}
From now on, we suppose that $p_n$ is of positive degree.
\underline{Claim~1:} $p_i$ and $p_n$ commute as elements of $K\langle Y\rangle$.
If $\mu\left(\sum\limits_{i=1}^{n}{c_i\otimes p_i}\right)=\sum\limits_{i=1}^n c_ip_i =0,$
then, for all $z\in \mathcal{SJ}_K(Y)$
\begin{eqnarray*}
0 & = & \left(\left(\sum_{i=1}^nc_ip_i\right)\circ z\right)\circ p_n-\left(\sum_{i=1}^n c_ip_i \right)\circ (z\circ p_n) \\
& = &
\sum_{i=1}^n c_i\left((p_i\circ z)\circ p_n - (p_i\circ (z\circ p_n))\right) = \sum_{i=1}^{n-1} c_i\left((p_i\circ z)\circ p_n - (p_i\circ (z\circ p_n))\right).
\end{eqnarray*}
Hence,
\begin{equation*}
\mu\left(\sum_{i=1}^{n-1} c_i\otimes \Big((p_i\circ z)\circ p_n - (p_i\circ (z\circ p_n))\Big)\right)=0.
\end{equation*}
By the minimality of $n$,
$0=(p_i\circ z)\circ p_n-p_i\circ (z\circ p_n)$ for $i=1,\dotsc,n$.
Using that $\mathcal{SJ}_K(Y)$ is a subalgebra of $K\langle Y\rangle ^{(+)}$, we get
\begin{eqnarray*}
0 & =& (p_i\circ z)\circ p_n-p_i\circ (z\circ p_n)\\ &=& \frac{1}{2}((zp_i+p_iz)\circ p_n-p_i\circ(zp_n+p_nz))\\
& =& \frac{1}{4}( zp_ip_n+p_izp_n+p_nzp_i+p_np_iz- (p_izp_n+p_ip_nz+zp_np_i+p_nzp_i)) \\
& = & \frac{1}{4}[z(p_ip_n-p_np_i)-(p_ip_n-p_np_i)z].
\end{eqnarray*}
This implies that $p_ip_n-p_np_i$ commutes with any variable $z\in Y\subseteq \mathcal{SJ}_K(Y)$.
Hence $p_ip_n-p_np_i$ is in the center of $K\langle Y\rangle $, that is, $p_ip_n-p_np_i\in K$. But
this is only possible if $p_ip_n-p_np_i=0$, because the term of zero degree in $p_ip_n-p_np_i$ is zero.
Therefore $p_i$ and $p_n$ commute in $K\langle Y\rangle$ and the claim is proved.
\underline{Claim~2:} There exists $\lambda_i \in K$ such that
$p_i=\lambda_i p_n$ for $i=1,\dotsc,n$. This contradicts the fact that the $p_i$'s are linearly independent over $K$.
By \cite[Corollary~6.7.7]{CohnFreeIdeal} and Claim~1, there exists a polynomial $u\in K\langle Y\rangle$
of degree at least one such that $p_i\in K[u]$, $i=1,\dotsc,n$.
From \eqref{eq:muzero}, $\sum\limits_{i=1}^n c_ip_i=0$. Thus, for all $z\in\mathcal{SJ}_K(Y)$,
\begin{eqnarray*}
0& = &\left(\sum_{i=1}^n c_ip_i\right)\circ (z\circ p_n^2)-\left(\left(\sum_{i=1}^n c_ip_i\right)\circ p_n \right)\circ(z\circ p_n)\\
& = & \sum_{i=1}^n c_i\Big( (p_i\circ (z\circ p_n^2))-(p_i\circ p_n)\circ (z\circ p_n) \Big) \\
& = & \sum_{i=1}^{n-1} c_i \Big( (p_i\circ (z\circ p_n^2))-(p_i\circ p_n)\circ (z\circ p_n) \Big),
\end{eqnarray*}
where we have used that $(z\circ p_n^2)\circ p_n=p_n^2\circ (z\circ p_n)$ in the last equality.
Hence
\begin{align*}
\mu\left( \sum_{i=1}^{n-1} c_i\otimes \left[(p_i\circ (z\circ p_n^2))-(p_i\circ p_n)\circ (z\circ p_n)\right]\right)=0.
\end{align*}
By the minimality of $n$, and using Claim~1 and $\mathcal{SJ}_K(Y)\subseteq K\langle Y\rangle ^{(+)}$,
\begin{eqnarray*}
0 & = & p_i\circ (z\circ p_n^2) - (p_i\circ p_n) \circ (z\circ p_n) \\
& = & \frac{1}{2}p_i\circ (zp_n^2+p_n^2z)-\frac{1}{4}(p_ip_n+p_np_i)\circ (zp_n+p_nz) \\
& = & \frac{1}{4}(p_izp_n^2+p_ip_n^2z+zp_n^2p_i+p_n^2zp_i)\\
& & -\frac{1}{4}(p_ip_nzp_n+p_ip_n^2z+zp_np_ip_n+p_nzp_ip_n) \\
& = & \frac{1}{4}(p_izp_n^2+p_n^2zp_i-p_ip_nzp_n-p_nzp_ip_n)\\
& = & \frac{1}{4}(p_i(zp_n-p_nz)p_n-p_n(zp_n-p_nz)p_i).
\end{eqnarray*}
Hence, $p_i(zp_n-p_nz)p_n=p_n(zp_n-p_nz)p_i$ for all $z\in \mathcal{SJ}_K(Y)\subseteq K\langle Y\rangle ^{(+)}$ and $i=1,\dotsc,n$.
Let now $x\in \mathcal{SJ}_K(Y)$ such that $x$ does not commute with $u$. By \cite[Corollary~6.7.4]{CohnFreeIdeal}, $u$ and $x$ form a free set over $K$.
Then, rewriting the last equality, we obtain
$p_i(u)(xp_n(u)-p_n(u)x)p_n(u)=p_n(u)(xp_n(u)-p_n(u)x)p_i(u)$.
Suppose that the degree in $u$ of $p_i(u)$ is smaller than the degree of $p_n(u)$. Considering the lexicographic order
in $K\langle x,u\rangle$ with $x<u$, the greatest monomial on the right-hand side of the equation
comes from $-p_n(u)^2xp_i(u)$, and it cannot be obtained from any monomial on the left-hand side of the equality, a contradiction.
Hence $p_i$ and $p_n$ are polynomials in $u$ of the same degree. Thus there exists $\lambda_i\in K$ such that
$p_i(u)-\lambda_i p_n(u)$ has degree smaller than the degree of $p_n$ and
\begin{equation*}
(p_i(u)-\lambda_i p_n(u))(xp_n(u)-p_n(u)x)p_n(u)=p_n(u)(xp_n(u)-p_n(u)x)(p_i(u)-\lambda_i p_n(u)).
\end{equation*}
By the foregoing, the only possibility is that $p_i=\lambda_i p_n$ for some $\lambda_i\in K$.
\end{proof}
\subsection{Variety of non-commutative Poisson $K$-algebras}
In this subsection we deviate from our notation and consider a variety of algebras endowed with more than one product. More precisely, the variety $\mathcal{NCP}_K$ of all non-commutative Poisson $K$-algebras.
A vector space $P$ over a field $K$ endowed with two bilinear operations $x\cdot y$ (a multiplication) and $\{x, y\}$ (a Poisson
bracket) is called a non-commutative Poisson algebra if $P$ is an associative algebra under $x\cdot y$, $P$ is a Lie algebra under $\{x, y\}$, and $P$ satisfies the Leibniz identity:
$\{x \cdot y, z\} = \{x, z\} \cdot y + x \cdot \{y, z\}$ for all $x,y,z\in P$.
Let $Y$ be a set.
The free non-commutative Poisson $K$-algebra $\mathcal{P}_K(Y)$ is constructed as follows. Let $\mathcal{L}_K(Y)$ be the free Lie $K$-algebra on $Y$ and suppose that
$X=\{x_1,x_2,\dotsc\}$ is a $K$-basis of $\mathcal{L}_K(Y)$. Then $\mathcal{P}_K(Y)$ is the
free associative algebra on the set of free generators $X$. Using the Leibniz identity
one can uniquely extend the Lie bracket $\{x, y\}$ of $\mathcal{L}_K(Y)$ to a Poisson bracket $\{x, y\}$ on
$\mathcal{P}_K(Y)$, and $\mathcal{P}_K(Y)$ becomes a Poisson algebra.
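As an illustration of our own of how the Leibniz identity determines this extension: for a product of two basis elements $x_{i_1}, x_{i_2}\in X$ one is forced to set
```latex
\begin{equation*}
\{x_{i_1}\cdot x_{i_2},\ x_j\} \;=\; \{x_{i_1}, x_j\}\cdot x_{i_2} \;+\; x_{i_1}\cdot \{x_{i_2}, x_j\},
\end{equation*}
```
where each $\{x_{i_l}, x_j\}$ is computed in $\mathcal{L}_K(Y)$ and rewritten as a $K$-linear combination of the basis $X$; iterating over the associative monomials in $X$, in both arguments, determines the bracket on all of $\mathcal{P}_K(Y)$.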
\begin{corollary}
The variety $\mathcal{NCP}_K$ is MLM.
\end{corollary}
\begin{proof}
Let $K\subseteq F$ be a field extension and let $A\in\mathcal{NCP}_F$. Suppose that $A$ contains a free non-commutative Poisson $K$-algebra on a free set of generators $Y\subseteq A$ of at least two elements. The free Lie $K$-algebra (with respect to $\{x,y\}$) generated by $Y$ is the free Lie $K$-algebra $\mathcal{L}_K(Y)$. By Theorem~\ref{theo:Schreier}, the Lie $F$-subalgebra of $A$ generated by $Y$ is the free Lie $F$-algebra $\mathcal{L}_F(Y)$ with set of free generators $Y$. Note that we can pick the same basis $\mathcal{B}$ for $\mathcal{L}_K(Y)$ and
$\mathcal{L}_F(Y)$.
Now, by Lemma~\ref{lem:MLM}, the associative $F$-subalgebra generated by $\mathcal{B}$ is the free associative $F$-algebra on $\mathcal{B}$, as desired.
\end{proof}
We finish by noting that one can mimic the proof of Theorem~\ref{theo:MLMReichstein} to show that the variety
$\mathcal{NCP}_K$ is Reichstein when $K$ is an uncountable field.
% https://arxiv.org/abs/2302.08661 -- Subsampling Suffices for Adaptive Data Analysis
% Abstract: Ensuring that analyses performed on a dataset are representative of the entire population is one of the central problems in statistics. Most classical techniques assume that the dataset is independent of the analyst's query and break down in the common setting where a dataset is reused for multiple, adaptively chosen, queries. This problem of adaptive data analysis was formalized in the seminal works of Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014). We identify a remarkably simple set of assumptions under which the queries will continue to be representative even when chosen adaptively: The only requirements are that each query takes as input a random subsample and outputs few bits. This result shows that the noise inherent in subsampling is sufficient to guarantee that query responses generalize. The simplicity of this subsampling-based framework allows it to model a variety of real-world scenarios not covered by prior work. In addition to its simplicity, we demonstrate the utility of this framework by designing mechanisms for two foundational tasks, statistical queries and median finding. In particular, our mechanism for answering the broadly applicable class of statistical queries is both extremely simple and state of the art in many parameter regimes.
\subsubsection{A mechanism for statistical queries}
\label{subsec:SQ}
Our main application is an extremely simple and accurate mechanism for the broad class of \emph{statistical queries}. Statistical queries, introduced by Kearns \cite{Kea98}, are parameterized by a function $\phi: X \to [0,1]$. A valid answer to such a query is any value close to $\phi(\mcD)$. Many natural analyses can be cast as a sequence of statistical queries. This includes most algorithms for both supervised and unsupervised learning, such as least-squares regression, gradient descent, and moment-based methods. We refer the reader to \cite{FGRVX17} for a more extensive list.
\begin{figure}[ht]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ and parameters $\eps, k$.\vspace{2pt}
For each time step $t \in [T]$,\vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Receive a query $\phi_t: X\to [0,1]$.
\item Define,
\begin{equation*}
\phi_t'(x) \coloneqq \begin{cases}
\eps & \text{if }\phi_t(x) \leq \eps,\\
1-\eps & \text{if }\phi_t(x) \geq 1-\eps, \\
\phi_t(x)&\text{otherwise.}
\end{cases}
\end{equation*}
\item Sample uniformly $\bx_1, \ldots, \bx_k \iid \mathrm{Unif}(S)$ and for each, sample $\bv_i \sim \mathrm{Bernoulli}(\phi_t'(\bx_i))$.
\item Set $\by_t$ to the mean of $\bv_1, \ldots, \bv_k$ and output $\by_t$.
\end{enumerate}
\end{tcolorbox}
\caption{A mechanism for statistical queries using subsampling.}
\label{fig:SQ-mechanism}
\end{figure}
The mechanism for answering statistical queries is given in \Cref{fig:SQ-mechanism}. Note that an even simpler mechanism that does not ``squash," taking $\phi' = \phi$, still provides accurate answers to statistical queries. The difference is just a log-factor on the sample size as such queries are not guaranteed to be $p$-uniform, and so the improvement of \Cref{thm:main-noisy} to \Cref{thm:high-probability} does not hold.
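The per-query step of \Cref{fig:SQ-mechanism} can be stated in a few lines of code. The following is an illustrative sketch of ours (function and parameter names are not from the paper):

```python
import random

def answer_sq(sample, phi, eps, k, rng=random):
    """One round of the subsampling SQ mechanism: squash phi away from
    {0,1} by eps, draw k points from the sample uniformly with
    replacement, and return the empirical mean of the Bernoulli votes."""
    squash = lambda v: min(max(v, eps), 1 - eps)
    votes = 0
    for _ in range(k):
        x = rng.choice(sample)                 # subsample one point
        votes += rng.random() < squash(phi(x)) # Bernoulli(phi'(x)) vote
    return votes / k
```

For instance, with $\phi\equiv 1$ and $\eps=0$ every vote is $1$, so the mechanism returns exactly $1$.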
\begin{theorem}[Accuracy of our mechanism for answering SQs]
\label{thm:SQ}
For any parameters $\tau, \delta > 0$, adaptively chosen sequence of statistical queries $\phi_1, \ldots, \phi_T: X \to [0,1]$, and sample size
\begin{equation}
\label{eq:SQ-sample-size}
n \geq \Omega\paren*{\frac{\sqrt{T \log(T/\delta) \log(1/\delta)}}{\tau^2}},
\end{equation}
if the mechanism of \Cref{fig:SQ-mechanism} is given a sample $\bS \sim \mcD^n$ and parameters
\begin{equation*}
\eps \coloneqq O\paren*{\frac{\log(1/\delta)}{n}} \quad\quad\quad\quad\text{and} \quad\quad\quad\quad k \coloneqq O\paren*{\frac{\log(T/\delta)}{\tau^2}},
\end{equation*}
with probability at least $1 - \delta$, it answers all queries $t \in [T]$ to accuracy
\begin{equation}
\label{eq:SQ-accuracy}
|\by_t - \Ex[\phi_t(\mcD)]| \leq \max(\tau \cdot \std(\phi_t), \tau^2)
\end{equation}
where $\std(\phi) \coloneqq \sqrt{\Ex[\phi(\mcD)](1 - \Ex[\phi(\mcD)])} \leq 1$.
\end{theorem}
The proof of \Cref{thm:SQ} is a simple corollary of \Cref{thm:main-noisy,thm:high-probability}: Each vote ($\bv_i$ in \Cref{fig:SQ-mechanism}) is the output of a subsampling query $\phi:X \to \zo$ and so fits within our framework with $w = 1$, and $|Y| = 2$.
In many settings, the bound of \Cref{thm:SQ} is state of the art. When $\std(\phi) = o(1)$, its accuracy improves upon the prior state of the art, given in two works of Feldman and Steinke.\footnote{It's worth noting that both of these works use a stricter definition of $\std(\phi) = \sqrt{\Var_{\bx \sim \mcD}[\phi(\bx)]}$. If the range of $\phi$ is $\zo$, their definition and ours coincide. Otherwise, their definition can be smaller than ours.}
\begin{enumerate}
\item In \cite{FS17}, they gave a mechanism with the accuracy guarantees of \Cref{thm:SQ}, but requiring a larger sample size of
\begin{equation*}
n \geq \Omega\paren*{\frac{\sqrt{T \log(1/\delta)} \log(T/\tau\delta)}{\tau^2}}.
\end{equation*}
In addition to requiring a larger sample size than \Cref{thm:SQ}, that mechanism is also more complicated than ours. It splits the dataset into chunks of size $\frac{1}{\tau^2}$. Given a query, it computes the average of that query on each chunk, and then computes an approximate median of those averages via a differentially private algorithm. The same mechanism actually solves the approximate median problem described in \Cref{subsec:approx-median}.
\item In \cite{FS18}, they gave a mechanism with a slightly worse sample size than \Cref{thm:SQ}\footnote{Their sample size is a $\sqrt{\log(T)}$ multiplicative factor worse than ours when $\delta$ is constant.} when the failure probability, $\delta$, is a constant. Their mechanism is also simple: For a sample $S$ and query $\phi$, they compute $\phi(S) + \zeta$ where $\zeta$ is a Gaussian with mean $0$ and variance that scales with $\std(\phi)$. However, their dependence on $\delta$ is a multiplicative $1/\delta$ factor, exponentially worse than ours.
\end{enumerate}
The pessimistic setting where all the queries satisfy $\std(\phi) = \Theta(1)$ is more well studied \cite{DFHPRR15,BNSSSU16,SU15,SU15between,HU14,DK22}. The state of the art was given recently by Dagan and Kur \cite{DK22}, who showed that a sample of size
\begin{equation*}
n = O\paren*{\frac{\sqrt{T\log(1/\tau \delta)}}{\tau^2}}
\end{equation*}
is sufficient whenever $\tau\delta \geq 2^{-\tilde{O}(T)}$. Their mechanism works by returning $y_t = \phi_t(S) + \zeta$ where $\zeta$ is drawn from a very carefully constructed \emph{bounded} distribution. Our mechanism has a slightly better dependence on $\tau$ and a better accuracy when $\std(\phi)$ is small, but slightly worse dependencies on $T$ and $\delta$.
\paragraph{Advantages of our mechanism.}
Our mechanism has advantages beyond the quantitative bound on the sample size needed for low bias. First, it naturally runs in sublinear time, as answering each query requires looking at only $k \approx n/\sqrt{T}$ of the points in $S$.
Furthermore, our mechanism easily extends to the setting where the analyst does not know ahead of time how many samples, the parameter $k$ in \Cref{fig:SQ-mechanism}, they want for a particular query. Rather, they can sequentially sample $\bv_1, \bv_2, \ldots$ while continually updating an estimate for $\phi(\mcD)$ and stop at any point. The bounds of \Cref{thm:SQ} hold as long as the total number of samples is at most $n\sqrt{T}$. Early stopping can appear naturally in practice. For example,
\begin{enumerate}
\item The analyst may only desire accuracy $\pm \tau$, regardless of what $\std(\phi_t)$ is. If, based on the first $k' < k$ samples, the analyst can determine that $\std(\phi_t)$ is small, they can stop early, as a small $\std(\phi_t)$ means fewer samples are needed to achieve the desired accuracy.
\item The analyst may wish to verify whether $\phi(\mcD) \approx c$ for some value $c$. If, after the first $k' < k$ samples, the average is far from $c$, the analyst can already determine that $\phi(\mcD)$ is far from $c$. This setting has previously been studied in the influential works \cite{DFHPR15,DFHPR15b}, which showed that there exist mechanisms that answer exponentially many such verification queries, as long as all but a tiny fraction of the inequalities to verify are true. Our analysis does not extend to exponentially many queries, but it can easily intertwine standard queries with verification queries, with the latter being cheaper.
\end{enumerate}
\subsubsection{A mechanism for finding approximate medians}
\label{subsec:approx-median}
We also consider a generalization of statistical queries, each of which maps $w$ inputs to some set $R \subseteq \R$. For such queries, we give a mechanism for determining an \emph{approximate median} of the distribution $\phi^{(\mathrm{dist})}(\mcD)$.
\begin{definition}[Approximate median]
For a distribution $\mcE$ with support in $\R$, we say a value $y$ is an \emph{approximate median of $\mcE$} if,
\begin{equation*}
\min\left(\Pr_{\bx \sim \mcE}[\bx < y], \Pr_{\bx \sim \mcE}[\bx > y]\right) \geq 0.4.\footnote{All of our results hold as long as this $0.4$ is $0.5 - \eps$ for any fixed $\eps > 0$.}
\end{equation*}
\end{definition}
One particular application for approximate median queries is, once again, for answering statistical queries. Given an SQ $\phi: X \to [0,1]$, we can construct
\begin{equation*}
\phi'(x_1, \ldots, x_w) = \Ex_{\bi \in [w]}[\phi(x_{\bi})].
\end{equation*}
Since $\phi'$ and $\phi$ have the same mean, and $\phi'$ has a smaller variance, Chebyshev's inequality implies that any approximate median of $\phi'$ will be within $2/\sqrt{w}$ standard deviations of $\phi(\mcD)$. As a result, the mechanism of \Cref{fig:median-mechanism} can give similar accuracy results as guaranteed in \Cref{thm:SQ}. The sample size required is larger (by log factors), but in exchange, it provides better accuracy when $\std(\phi)$ is smaller than $\tau$. In that setting, the $\tau^2$ term of \Cref{eq:SQ-accuracy} dominates, but the median-based mechanism does not incur it.
\begin{figure}[h]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ split into $k$ groups $S^{(1)}, \ldots, S^{(k)}$ each containing $\geq \floor{n/k}$ elements.
For each time step $t \in [T]$,\vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Receive a query $\phi_t: X^{w_t}\to R_t$ where $R_t \subseteq \R$.
\item Perform binary search on the mechanism's output $\by_t \in R_t$ where, to determine whether $\by_t \geq r$, the following procedure is used.
\begin{enumerate}
\item For each group $i \in [k]$ generate a vote $\bv_i$ by first sampling a random subset $\bS' \sim \binom{S^{(i)}}{w_t}$ and then setting $\bv_i = \Ind[\phi_t(\bS') \geq r]$.
\item Add noise to each vote $\bv_i$ by flipping it with probability $\frac{w_t}{|S^{(i)}|}$.\footnote{This mechanism still works without the added noise, though with slightly worse parameters.}
\item Determine $\by_t \geq r$ iff at least half of the votes are $1$.
\end{enumerate}
\item After the $\ceil{\log_2 |R_t|}$ steps of binary search have finished, a single value $\by_t = r$ will be determined. Output it.
\end{enumerate}
\end{tcolorbox}
\caption{The subsampling mechanism answering approximate median queries.}
\label{fig:median-mechanism}
\end{figure}
\begin{theorem}[Accuracy of our mechanism for answering approximate median queries]
\label{thm:median}
For any adaptively chosen sequence of queries $\phi_1: X^{w_1} \to R_1 , \ldots, \phi_T: X^{w_T} \to R_T$ and $\delta > 0$, if the sample size satisfies
\begin{equation*}
n \geq \Omega\paren*{\log\paren*{\frac{T \log R_{\max}}{\delta}} \sqrt{ w_{\max} \sum_{t \in T} w_t} }
\end{equation*}
where $w_{\max}$ and $R_{\max}$ are upper bounds on $w_t$ and $|R_t|$ respectively, and $k \coloneqq \log\paren*{\frac{T \log R_{\max}}{\delta}}$, then with probability at least $1 - \delta$, for all $t \in [T]$, the output $\by_t$ of \Cref{fig:median-mechanism} is an approximate median for $\phi^{(\mathrm{dist})}_t(\mcD)$.
\end{theorem}
Feldman and Steinke also give a mechanism for answering approximate median queries \cite{FS17}. Their mechanism needs a sample size of
\begin{equation*}
n \geq \Omega\paren*{\log\paren*{\frac{T R_{\max}}{\delta}} \sqrt{\log(1/\delta) T w_{\max}^2} }.
\end{equation*}
Our sample size bound is similar to theirs in the pessimistic settings where $w_t \approx w_{\max}$ for all $t$, with slight improvements on some of the other dependencies. For example, we have a linear dependence on $\log(1/\delta)$, whereas they have a $\log(1/\delta)^{3/2}$ dependence.
Most interestingly, our mechanism and that of \cite{FS17} are fairly similar -- both rely on splitting the dataset and, roughly speaking, computing an approximate median of the queries' values on each group -- but the analyses are wholly different. Their analysis is based on differential privacy. In contrast, \Cref{thm:median} is a simple corollary of \Cref{thm:main-noisy}. Indeed, it's not difficult to show that our mechanism does \emph{not} satisfy standard $(\eps, \delta)$ differential privacy with strong enough parameters to give a sample size bound close to that of \Cref{thm:median}.
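The binary-search-over-votes procedure of \Cref{fig:median-mechanism} also admits a short sketch (again, an illustration with names of our choosing, not the paper's reference implementation):

```python
import random

def approx_median_query(groups, phi, w, values, rng=random):
    """Binary search over the sorted range `values`: each candidate
    threshold r is decided by majority vote, where group i votes on
    whether phi of a random w-subset of S^(i) is >= r, and each vote
    is flipped with probability w/|S^(i)|."""
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2           # test whether the answer is >= values[mid]
        r = values[mid]
        ones = 0
        for g in groups:
            vote = phi(rng.sample(g, w)) >= r
            if rng.random() < w / len(g):  # noise: flip the vote
                vote = not vote
            ones += vote
        if 2 * ones >= len(groups):        # majority says y_t >= r
            lo = mid
        else:
            hi = mid - 1
    return values[lo]
```

With many groups relative to the flip probability, the majority vote at each step is correct except with negligible probability, so the search converges on an approximate median.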
\section{Applications}
\label{sec:apps}
\subsection{The statistical-query mechanism: Proof of \texorpdfstring{\Cref{thm:SQ}}{Theorem 6}}
The proof of \Cref{thm:SQ} combines two bounds: We'll prove that, with high probability, all statistical queries asked have low bias, meaning $\phi(\bS)$ and $\phi(\mcD)$ are close. We'll further show that, with high probability, the answer the mechanism gives when receiving a statistical query $\phi$ is close to $\phi(\bS)$. These two results are combined with the following proposition.
\begin{proposition}[Triangle-like inequality]
\label{prop:triangle-like}
For any $\tau > 0$ and $a,b,c \in [0,1]$ satisfying
\begin{equation*}
\abs{b - a} \leq \max(\tau^2, \tau \sqrt{a(1-a)}) \quad\quad\quad\quad\abs{c - b} \leq \max(\tau^2, \tau \sqrt{b(1-b)})
\end{equation*}
it also holds that
\begin{equation*}
\abs{c - a} \leq 3\max(\tau^2, \tau \sqrt{a(1-a)}).
\end{equation*}
\end{proposition}
\begin{proof}
First, consider the case where $\abs{c - b}\leq \tau^2$. Then,
\begin{equation*}
\abs{c-a} \leq \abs{a-b} + \abs{c-b} \leq \max(\tau^2, \tau \sqrt{a(1-a)}) + \tau^2 \leq 2\max(\tau^2, \tau \sqrt{a(1-a)}).
\end{equation*}
In the other case,
\begin{align*}
\abs{c-b} &\leq \tau\sqrt{b(1-b)} \\
&= \tau \sqrt{(a + (b-a))(1 - a - (b-a))} \\
&= \tau\sqrt{a(1-a) + (b-a)(1-2a) - (b-a)^2}\\
&\leq \tau\sqrt{a(1-a) + (b-a)} \\
&\leq \tau\sqrt{a(1-a)} + \tau\sqrt{b-a} \\
&\leq \tau\sqrt{a(1-a)} + \tau\sqrt{\max(\tau^2, \tau\sqrt{a(1-a)})} \\
&= \tau\sqrt{a(1-a)} + \max(\tau^2, \sqrt{\tau^2 \cdot \tau\sqrt{a(1-a)}}) \\
&\leq 2 \max(\tau^2, \tau \sqrt{a(1-a)}).
\end{align*}
The desired result follows from the standard triangle inequality: in this case, $\abs{c - a} \leq \abs{b - a} + \abs{c - b} \leq 3\max(\tau^2, \tau\sqrt{a(1-a)})$.
\end{proof}
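The proposition can also be stress-tested numerically. The following Python sketch (illustrative only, not part of the proof) samples triples $a, b, c$ satisfying the two hypotheses and checks the conclusion:

```python
import random

def bound(tau, a):
    # The quantity max(tau^2, tau*sqrt(a(1-a))) from the proposition.
    return max(tau**2, tau * (a * (1 - a)) ** 0.5)

random.seed(0)
violations = 0
for _ in range(100_000):
    tau = random.uniform(0.01, 1.0)
    a = random.random()
    # Sample b with |b - a| <= bound(tau, a); clamping to [0,1] only
    # moves b closer to a, so the hypothesis still holds.
    b = min(1.0, max(0.0, a + random.uniform(-1, 1) * bound(tau, a)))
    # Likewise sample c with |c - b| <= bound(tau, b).
    c = min(1.0, max(0.0, b + random.uniform(-1, 1) * bound(tau, b)))
    if abs(c - a) > 3 * bound(tau, a) + 1e-12:
        violations += 1
print(violations)  # 0: the conclusion holds on every sampled triple
```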
Next, we give a short proof that the answers of the mechanism are all close to $\phi(\bS)$ with high probability.
\begin{proposition}
\label{prop:SQ-all-close}
In the setting of \Cref{thm:SQ}, with probability at least $1 - \delta$, for all $t \in [T]$,
\begin{equation*}
\abs{\phi_t(\bS) - \by_t} \leq 3\max\paren*{\tau^2, \tau \sqrt{\phi_t(\bS)(1 - \phi_t(\bS))}}.
\end{equation*}
\end{proposition}
\begin{proof}
The response $\by_t$ is the average of $k$ independent $\zo$ bits, each with mean within $\pm \eps$ of $\phi_t(\bS)$. A standard application of Chernoff bounds and a union bound over the $T$ queries gives that, since $k \coloneqq O\paren*{\frac{\log(T/\delta)}{\tau^2}}$, with probability at least $1 - \delta$, for all $t \in [T]$,
\begin{equation*}
\abs{\by_t - \Ex[\by_t]} \leq \max\paren*{\tau^2, \tau \sqrt{\Ex[\by_t](1 - \Ex[\by_t])}}.
\end{equation*}
Based on the parameters in \Cref{thm:SQ},
\begin{equation*}
\abs{\Ex[\by_t] - \phi_t(\bS)} \leq \eps \leq \frac{\tau^2}{\sqrt{T}} \leq \tau^2.
\end{equation*}
The desired result follows from \Cref{prop:triangle-like}.
\end{proof}
Finally, we prove the main result of this subsection.
\begin{proof}[Proof of \Cref{thm:SQ}]
We'll apply the ``monitor'' technique of Bassily et al.~\cite{BNSSSU16}, setting the test query $\psi$ to the SQ with the ``worst'' response. Specifically, after receiving responses $\by_1, \ldots, \by_T$ to SQs $\phi_1, \ldots, \phi_T$, the analyst sets the test query to
\begin{equation*}
\psi \coloneqq \phi_{t^\star} \quad\text{where}\quad t^\star \coloneqq \argmax_{t \in [T]} \frac{\abs{\by_t - \phi_t(\mcD)}}{\max(\tau^2, \tau \sqrt{\phi_t(\mcD)(1-\phi_t(\mcD))})}.
\end{equation*}
It is sufficient to show that \Cref{eq:SQ-accuracy} holds for $t = t^\star$ as, based on how we defined $t^\star$, it then holds for all $t \in [T]$.
The mechanism for answering statistical queries in \Cref{fig:SQ-mechanism} makes a total of $Tk$ subsampling queries each with $w = 1$ and a range $|Y| = |\zo| = 2$. Each query is $(\eps \coloneqq \frac{\log(1/\delta)}{n})$-uniform. Applying the improved cost function in \Cref{thm:main-noisy} to \Cref{thm:high-probability}, the budget is $b = O(Tk/n)$, and so, with probability at least $1 - \delta$,
\begin{equation*}
\mathrm{error}(\psi, \bS, \mcD) \leq O\paren*{\log(1/\delta) \cdot \frac{Tk + n}{n^2}} \leq \tau^2.
\end{equation*}
As $\Varx_{\bS \sim \mcD^w}[\psi(\bS)] \leq \psi(\mcD)(1 - \psi(\mcD))$, this implies that
\begin{equation*}
\abs{\psi(\bS) - \psi(\mcD)} \leq \max\paren*{\tau^2, \tau \sqrt{\psi(\mcD)(1 - \psi(\mcD))}}.
\end{equation*}
Furthermore, by \Cref{prop:SQ-all-close}, with probability at least $1 - \delta$, for all $t \in [T]$
\begin{equation*}
\abs{\phi_t(\bS) - \by_t} \leq 3\max\paren*{\tau^2, \tau \sqrt{\phi_t(\bS)(1 - \phi_t(\bS))}}.
\end{equation*}
In particular, the above holds for $\phi_{t^\star}$. We union bound over both of the above events and apply \Cref{prop:triangle-like}: With probability at least $1 - 2\delta$, \Cref{eq:SQ-accuracy} holds for $t = t^\star$ up to a constant factor. As $t^\star$ is worst-case, it therefore holds for all $t \in [T]$.
The desired result follows by renaming $\delta' \coloneqq \delta/2$ and $\tau' \coloneqq \tau/c$ for appropriate constant $c$.
\end{proof}
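To make the mechanism concrete, here is a minimal Python sketch of a subsampled SQ mechanism in the spirit of \Cref{fig:SQ-mechanism} (the figure is not reproduced here, so the noise rate, aggregation, and parameter choices below are illustrative assumptions): each query is answered by averaging $k$ bits, each computed on a single subsampled point and flipped with probability $\eps$ so that the query is $\eps$-uniform.

```python
import random

def sq_mechanism(sample, phi, k, eps, rng):
    """Answer an SQ phi: X -> {0,1} by averaging k bits, each computed
    on one uniformly subsampled point (a w = 1 subsampling query) and
    flipped with probability eps to make the query eps-uniform."""
    total = 0
    for _ in range(k):
        b = phi(rng.choice(sample))
        if rng.random() < eps:  # noise for eps-uniformity
            b = 1 - b
        total += b
    return total / k

rng = random.Random(1)
sample = [rng.random() for _ in range(10_000)]
phi = lambda x: int(x < 0.25)  # population answer is 0.25
answer = sq_mechanism(sample, phi, k=20_000, eps=0.01, rng=rng)
print(answer)  # close to 0.25
```

Each bit has mean within $\pm\eps$ of $\phi(\bS)$, matching the setup of \Cref{prop:SQ-all-close}.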
\subsection{The median-finding mechanism: Proof of \texorpdfstring{\Cref{thm:median}}{Theorem 7}}
In this section, we prove \Cref{thm:median}. We begin with the following definition.
\begin{definition}[Bad sample]
For a query $\phi:X^w \to R \subseteq \R$, and threshold $r \in R$ that is not an approximate median of $\phi(\mcD)$, we say a sample $S \in X^n$ is $(\phi,r)$-bad if, with probability at least $0.45$ over $\bS' \sim \binom{S}{w}$,
\begin{equation*}
\Ind[\phi(\bS') \geq r] \neq \Ind[\mathrm{median}(\phi(\mcD)) \geq r].
\end{equation*}
\end{definition}
Intuitively, the proof of \Cref{thm:median} is separated into two pieces: We show that, with high probability, for all queries $\phi$ and thresholds $r$ used in \Cref{fig:median-mechanism}, only a small number ($\leq 0.02k$) of the groups are ``bad samples''. Then, we show that as long as this is true, with high probability, all answers output by the mechanism are approximate medians.
\begin{proof}[Proof of \Cref{thm:median}]
For each $t \in [T]$ and threshold $r \in R_t$ considered by the mechanism, define $p(t,r)$ as follows: If $r$ is an approximate median of $\phi_t(\mcD)$, we define $p(t,r) = 0$. If $r$ is smaller than all approximate medians of $\phi_t(\mcD)$, then we define $p(t,r)$ to be the fraction of votes $\bv_1, \ldots, \bv_k$ that were set to $0$ when determining whether $\by_t \geq r$. If $r$ is larger than all approximate medians of $\phi_t(\mcD)$, then we define $p(t,r)$ to be the fraction of such votes that were set to $1$.
If, for all choices of $t, r$, $p(t,r) < 0.5$, then all $\by_t$ output by the median mechanism must be approximate medians. Let $t^\star, r^\star$ be the choices maximizing $p(t^\star, r^\star)$. Then, it is sufficient to prove that with probability at least $1-\delta$, $p(t^\star, r^\star) < 0.5$. We define the test query $\psi:X^{w_{t^\star}} \to \zo$,
\begin{equation*}
\psi(S) \coloneqq \Ind[\phi_{t^\star}(S) \geq r^\star].
\end{equation*}
Here, we apply the improved cost function of \Cref{thm:main-noisy} to \Cref{thm:main-binary}. Each group $\bS^{(i)}$ is a random sample from $\mcD^{n'}$ where $n' \geq \floor{n/k}$. Then the total budget of queries asked to that group is
\begin{equation*}
b \coloneqq \sum_{t \in [T]} \frac{w_t \cdot 2}{n' - w_t} \cdot (1 + \log 2) \leq O\paren*{\frac{\sum_{t \in [T]} w_t}{n'}}
\end{equation*}
where the inequality uses that $n' \gg w_{\max}$. Applying Markov's inequality to the conclusion of \Cref{thm:main-binary}, as well as $n' \geq \floor{n/k}$,
\begin{equation*}
\Pr[\mathrm{error}(\psi, \bS^{(i)}, \mcD) \geq 0.05] \leq O\paren*{\frac{b + 1}{n'}} = O\paren*{\frac{k^2\sum_{t \in [T]} w_t}{n^2} + \frac{k}{n}}.
\end{equation*}
Based on how we set $n$ in \Cref{thm:median}, the above probability is at most $0.01$. Furthermore if $\mathrm{error}(\psi, \bS^{(i)}, \mcD) < 0.05$, then $\bS^{(i)}$ cannot be $(\phi_{t^\star}, r^\star)$-bad. Therefore, for each group $i \in [k]$, the probability that $\bS^{(i)}$ is $(\phi_{t^\star}, r^\star)$-bad is at most $0.01$. Applying \Cref{lem:direct-product} and the Chernoff bound of \Cref{fact:chernoff}, with probability at least $1 - \exp(-k/300)$, at most $0.02k$ groups $i\in[k]$ are $(\phi_{t^\star}, r^\star)$-bad.
Next, we note that for any single choice of $\phi_t$ and $r \in R_t$, if at most $0.02k$ groups are $(\phi_t, r)$-bad, then the expected value of $p(t,r)$ is at most $0.47$ and, by a Chernoff bound, the probability that $p(t,r) \geq 0.5$ is at most $\exp(-\Omega(k))$. We chose $k$ large enough to union bound over all $T \cdot \log_2(R_{\max})$ choices of $t \in [T]$ and $r \in R_t$. Specifically, with probability at least $1 - \delta$, for each $t \in [T]$ and $r \in R_t$ for which at most $0.02k$ groups are bad, $p(t, r) < 0.5$. In particular, this includes $p(t^\star, r^\star)$, proving the desired result.
\end{proof}
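A hedged Python sketch of a median-finding mechanism of the above flavor follows; the details of \Cref{fig:median-mechanism} are not reproduced here, so the threshold grid, the voting rule, and the binary search below are illustrative assumptions. Each of the $k$ groups votes on ``is the median at least $r$?'' via $\phi$ evaluated on a fresh random $w$-subset of that group, and a binary search over thresholds produces the answer.

```python
import random

def median_mechanism(groups, phi, w, thresholds, rng):
    """Binary-search for an approximate median of phi's value on random
    w-subsamples. Each group votes using phi on a fresh random w-subset
    of itself; the majority vote steers the search."""
    lo, hi = 0, len(thresholds) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        r = thresholds[mid]
        votes = sum(int(phi(rng.sample(g, w)) >= r) for g in groups)
        if votes * 2 >= len(groups):  # majority says median >= r
            lo = mid
        else:
            hi = mid - 1
    return thresholds[lo]

rng = random.Random(7)
data = [rng.gauss(5.0, 1.0) for _ in range(9000)]
k, w = 30, 50
groups = [data[i::k] for i in range(k)]          # k disjoint groups
phi = lambda sub: sum(sub) / len(sub)            # query: subsample mean
thresholds = [i / 10 for i in range(0, 101)]     # candidate grid for R
answer = median_mechanism(groups, phi, w, thresholds, rng)
print(answer)  # near 5.0, the true median of phi on fresh subsamples
```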
\section{From small mutual information to small bias}
\label{sec:gen}
In this section, we connect a mutual information bound to generalization error, completing the proof of \Cref{thm:main-binary} and its generalization in \Cref{thm:main-noisy}. Recall our definition of \emph{error} for a test query.
\begin{definition}[\Cref{def:error-simple}, restated]
\label{def:error-second}
For any $\psi:X^w \to [0,1]$ and distribution $\mcD$ over $X$, we define
\begin{equation*}
\mathrm{error}(\psi, S, \mcD) \coloneqq \frac{1}{w} \cdot \min\paren*{\Delta, \frac{\Delta^2}{\sigma^2}}
\end{equation*}
where
\begin{equation*}
\Delta \coloneqq \abs{\psi(S) - \psi(\mcD)} \quad\quad\text{and}\quad\quad \sigma^2 \coloneqq \Varx_{\bS \sim \mcD^w}[\psi(\bS)].
\end{equation*}
\end{definition}
Note that if a query has error $\eps$, this corresponds to $\Delta = O(w\eps + \sqrt{w\eps \sigma^2})$.
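In code, the error of a test query, written in terms of $\Delta$, $\sigma^2$, and $w$, is a direct transcription of the definition; the numbers below are illustrative:

```python
def error(delta, sigma2, w):
    """error(psi, S, D) in terms of Delta = |psi(S) - psi(D)|,
    sigma^2 = Var of psi on a fresh w-subsample, and the width w."""
    if sigma2 == 0:
        return delta / w
    return min(delta, delta**2 / sigma2) / w

# If error = eps, the bias satisfies Delta <= w*eps + sqrt(w*eps*sigma2):
w, sigma2 = 10, 0.04
for delta in [0.001, 0.05, 0.5]:
    eps = error(delta, sigma2, w)
    assert delta <= w * eps + (w * eps * sigma2) ** 0.5 + 1e-12
print("ok")
```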
\begin{theorem}[Mutual information bounds bias]
\label{thm:MI-bounds-bias}
For $\bS \sim \mcD^n$ and any random variable $\by \in Y$, as well as a set $\Psi^{(\by)}$ of test queries, each mapping $X^* \to [0,1]$, chosen as a function of $\by$,
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}} \mathrm{error}(\psi, \bS, \mcD)} \leq \frac{8I(\bS; \by) + 4\Ex_{\by}\bracket*{\log\abs*{\Psi^{(\by)}}}+ 12 \log 2}{n}.
\end{equation*}
\end{theorem}
Our proof will use the following fact.
\begin{fact}[\cite{FS18}]
\label{fact:gen}
For any random variables $\bS \in X$ and $\bpsi: X \to \R \in \Psi$, as well as $\lambda > 0$,
\begin{equation*}
\Ex_{\bS, \bpsi}\bracket*{\bpsi(\bS)} \leq \frac{1}{\lambda}\paren*{I(\bS; \bpsi) + \sup_{\psi \in \Psi}\log \paren*{\Ex_{\bS}\bracket*{\exp\paren*{\lambda \psi(\bS)}}}}.
\end{equation*}
\end{fact}
\begin{corollary}
\label{cor:MI-expectation}
For any random variables $\bx \in X$ and $\boldf:X \to \R \in F$ satisfying, for all $t \geq 0$ and $f \in F$, $\Pr_{\bx}[f(\bx) \geq t] \leq e^{-t}$,
\begin{equation*}
\Ex[\boldf(\bx)] \leq 2(I(\bx; \boldf) + \log 2).
\end{equation*}
\end{corollary}
\begin{proof}
Fix an arbitrary $f \in F$. We first bound the moment generating function of $f(\bx)$.
\begin{align*}
\Ex[\exp(f(\bx)/2)] &= \int_{0}^\infty \Pr[\exp(f(\bx)/2) \geq t]dt\\
&=1 + \int_{1}^\infty \Pr[f(\bx) \geq 2\log t]dt \\
&\leq 1 + \int_{1}^\infty t^{-2}dt = 2.
\end{align*}
The desired result follows from \Cref{fact:gen} with $\lambda = 1/2$.
\end{proof}
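The moment-generating-function bound is tight in the extreme case where $\Pr[f(\bx) \geq t] = e^{-t}$ holds with equality, i.e., $f(\bx) \sim \mathrm{Exponential}(1)$; there $\Ex[e^{f(\bx)/2}] = \int_0^\infty e^{t/2}e^{-t}\,dt = 2$ exactly. A quick numerical check of this integral (the discretization parameters are arbitrary):

```python
import math

# Tight case of the tail bound: Pr[f >= t] = e^{-t} exactly, so the
# integrand is e^{t/2} * e^{-t} = e^{-t/2}. Left Riemann sum on
# [0, 60]; the truncated tail contributes only ~e^{-30}.
dt = 1e-3
mgf = sum(math.exp(-0.5 * i * dt) * dt for i in range(int(60 / dt)))
print(mgf)  # ~2.0005, matching the bound E[exp(f(x)/2)] <= 2
```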
\emph{Bernstein's inequality} will give the sort of high-probability bounds needed to apply \Cref{cor:MI-expectation}.
\begin{fact}[Bernstein's inequality]
\label{fact:bernstein}
For any iid mean-zero random variables $\ba_1, \ldots, \ba_n$ satisfying $|\ba_i| \leq 1$ almost surely and with variance $\sigma^2$, let $\bA = \frac{1}{n} \cdot \sum_{i \in [n]} \ba_i$. Then,
\begin{equation}
\label{eq:berstein-bound}
\Pr\bracket*{|\bA| \geq \Delta} \leq 2\exp\paren*{-\frac{\Delta^2n}{2(\sigma^2 +\frac{\Delta}{3})}}.
\end{equation}
\end{fact}
For our setting, a black-box application of Bernstein's inequality is not sufficient. We wish to prove concentration of a random variable that is \emph{not} necessarily the sum of iid random variables. Fortunately, the proof of Bernstein's inequality only uses a bound on the moment generating function of $\bA$: It proceeds by applying Markov's inequality to the random variable $e^{\lambda \bA}$ for an appropriate choice of $\lambda \in \R$. As a result, the following also holds.
\begin{fact}[Generalization of Bernstein's inequality]
\label{fact:Bernstein-MGF} Let $\bB$ be any random variable satisfying, for all $\lambda \in \R$,
\begin{equation*}
\Ex[e^{\lambda \bB}] \leq \Ex[e^{\lambda \bA}],
\end{equation*}
where $\bA$ is as in \Cref{fact:bernstein}. Then, the bound of \Cref{eq:berstein-bound} also holds with $\bB$ in place of $\bA$.
\end{fact}
We'll use the following to produce a bound in our setting.
\begin{proposition}
\label{prop:convex}
For any $\psi:X^w \to \R$, distribution $\mcD$ over $X$, and convex function $f: \R \to \R$, set $m = \floor{n/w}$. Then,
\begin{equation*}
\Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\psi(\bS)}} \leq \Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \mcD^w}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}}.
\end{equation*}
\end{proposition}
\begin{proof}
Given $S \in X^n$, let $\mcG(S)$ be the uniform distribution over all disjoint groupings, $\bS^{(1)}, \ldots, \bS^{(m)} \in X^w$, of $S$. In particular, $\bS^{(1)} \sim \binom{S}{w}$, $\bS^{(2)} \sim \binom{S \setminus \bS^{(1)}}{w}$, and so on. Note that each $\bS^{(i)}$ has a marginal distribution equal to that of a sample from $\binom{S}{w}$. Since $mw \leq n$, all the groups are disjoint. Then,
\begin{align*}
\Ex_{\bS \sim \mcD^n}&\bracket*{f\paren*{\psi(\bS)}} \\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bS' \sim \binom{\bS}{w}}[\psi(\bS')]}}\tag{\Cref{def:error-simple}} \\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \binom{\bS}{w}}\bracket*{ \psi(\bS^{(\bi)})}}}}\tag{$\Ex[c] = c$}\\
&= \Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)}\bracket*{\Ex_{\bi \sim [m]}\bracket*{ \psi(\bS^{(\bi)})}}}}\tag{$\bS^{(i)}$ marginally from $\binom{\bS}{w}$} \\
&\leq \Ex_{\bS \sim \mcD^n}\bracket*{\Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}}}\tag{Jensen's inequality}
\end{align*}
For $\bS \sim \mcD^n$ and $\bS^{(1)}, \ldots, \bS^{(m)} \sim \mcG(\bS)$, since the groups are disjoint, they are each iid draws from $\mcD^w$. Therefore,
\begin{equation*}
\Ex_{\bS \sim \mcD^n}\bracket*{f\paren*{\psi(\bS)}} \leq \Ex_{\bS^{(1)}, \ldots, \bS^{(m)} \iid \mcD^w}\bracket*{f\paren*{\Ex_{\bi \sim [m]}\bracket*{\psi(\bS^{(\bi)})}}} \qedhere
\end{equation*}
\end{proof}
Since $x \mapsto e^{\lambda x}$ is convex for every $\lambda \in \R$, we arrive at the following corollary.
\begin{corollary}
\label{cor:bernstein}
For any $\psi:X^w \to [-1,1]$ and distribution $\mcD$ over $X$ such that $\Ex_{\bS \sim \mcD^w}[\psi(\bS)] = 0$ and $\Varx_{\bS \sim \mcD^w}[\psi(\bS)] = \sigma^2$,
\begin{equation*}
\Prx_{\bS \sim \mcD^n}[|\psi(\bS)| \geq \Delta] \leq 2\exp\paren*{-\frac{\Delta^2m}{2(\sigma^2 + \frac{\Delta}{3})}} \leq 2\exp\paren*{-\frac{m}{2} \min(\Delta, \Delta^2/\sigma^2)}
\end{equation*}
where $m \coloneqq \floor*{\frac{n}{w}}$.
\end{corollary}
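For $w = 1$ and $\psi$ the identity on $\zo$-valued samples, \Cref{cor:bernstein} reduces to Bernstein's inequality for a Binomial mean, which can be checked exactly; a sketch under those assumptions (the particular $n$, $p$, and $\Delta$ are arbitrary):

```python
from math import comb, exp

n, p, delta = 60, 0.3, 0.15
sigma2 = p * (1 - p)

# Exact tail Pr[|B/n - p| >= delta] for B ~ Binomial(n, p); with w = 1,
# psi(S) is the sample mean and m = n in the corollary.
tail = sum(
    comb(n, j) * p**j * (1 - p) ** (n - j)
    for j in range(n + 1)
    if abs(j / n - p) >= delta
)
bernstein = 2 * exp(-delta**2 * n / (2 * (sigma2 + delta / 3)))
print(tail, bernstein)  # exact tail is below the Bernstein bound
```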
Finally, we complete the proof of \Cref{thm:MI-bounds-bias}.
\begin{proof}[Proof of \Cref{thm:MI-bounds-bias}]
For each $y \in Y$, we define $f^{(y)}:X^n \to \R$ as
\begin{equation*}
f^{(y)}(S) \coloneqq \sup_{\psi \in \Psi^{(y)}} \frac{n \cdot\mathrm{error}(\psi, S, \mcD)}{4} - \log (2|\Psi^{(y)}|).
\end{equation*}
We claim that for all $y \in Y$ and $t > 0$, $\Pr[f^{(y)}(\bS) \geq t] \leq e^{-t}$. First, consider a single test function $\psi:X^w \to [0,1]$. By \Cref{cor:bernstein} applied to the centered query $\psi' \coloneqq \psi - \psi(\mcD)$, as well as the bound $n/(2w) \leq m$ where $m \coloneqq \floor{n/w}$,
\begin{equation*}
\Prx_{\bS \sim \mcD^n}[\mathrm{error}(\psi, \bS, \mcD) \cdot n/4 \geq \eps] \leq 2\exp(-\eps).
\end{equation*}
By the union bound,
\begin{align*}
\Prx_{\bS}[f^{(y)}(\bS) \geq t] &\leq \sum_{\psi \in \Psi^{(y)}} \Pr\bracket*{\frac{n\cdot \mathrm{error}(\psi, \bS, \mcD)}{4} \geq t + \log (2|\Psi^{(y)}|)} \\
& \leq |\Psi^{(y)}|\cdot 2e^{-t - \log(2|\Psi^{(y)}|)} = e^{-t}.
\end{align*}
Therefore, by \Cref{cor:MI-expectation},
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}}\frac{n\cdot \mathrm{error}(\psi, \bS, \mcD)}{4} - \log (2|\Psi^{(\by)}|)} \leq 2(I(\bS; \by) + \log 2).
\end{equation*}
Rearranging yields,
\begin{equation*}
\Ex_{\bS, \by}\bracket*{\sup_{\psi \in \Psi^{(\by)}} \mathrm{error}(\psi, \bS, \mcD)} \leq \frac{8I(\bS; \by) + 4\Ex_{\by}\bracket*{\log\abs*{\Psi^{(\by)}}} +12\log 2}{n}. \qedhere
\end{equation*}
\end{proof}
We've now assembled all the ingredients to prove \Cref{thm:main-binary} and the first part of \Cref{thm:main-noisy}.
\begin{proof}
\Cref{thm:main-binary} is a special case of the first part of \Cref{thm:main-noisy}, so we only prove the latter. Since the error of a query is trivially upper bounded by $1$, \Cref{thm:main-noisy} is vacuous if $n = 1$. Therefore, without loss of generality, we may assume that $n \geq 2$.
Let $\by \coloneqq (\by_1, \ldots, \by_T)$ be the query responses. If the $t^{\text{th}}$ query is $p_t$-uniform and maps $X^{w_t}$ to $Y_t$, then by \Cref{thm:MI-bound-formal},
\begin{align*}
I(\bS ; \by) &\leq n\cdot\Ex\bracket*{\sum_{t \in [T]} \frac{w_t (|Y_t| - 1)}{(n-1)(n-w_t)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w_t}{np_t}}}}\\
&\leq 10\cdot \Ex\bracket*{\sum_{t \in [T]} \frac{w_t |Y_t|}{n-w_t} \cdot \min \paren*{1 + \log n, 1 + \log\paren*{1 + \frac{w_t}{np_t}}}}. \tag{$\frac{n}{n-1} \leq 2$}
\end{align*}
The desired result then follows from \Cref{thm:MI-bounds-bias}.
\end{proof}
For completeness, we further show that \Cref{thm:formal-simple} is an easy consequence of \Cref{thm:main-binary}.
\begin{proof}[Proof of \Cref{thm:formal-simple} from \Cref{thm:main-binary}]
First note that if $w > n/2$, the guarantee of \Cref{thm:formal-simple} is vacuous. Therefore, we may assume that $w \leq n/2$. Clearly, the analyst in \Cref{thm:formal-simple} is $(\mathrm{cost}_n, b)$-budgeted for
\begin{equation*}
b \coloneqq \frac{tw|Y| \log n}{n-w} \leq \frac{2tw|Y| \log n}{n}.
\end{equation*}
The analyst can choose, for each query $\phi_t$ and outcome $y \in Y$, a test function $\psi_{t,y}(S) \coloneqq \Ind[\phi_t(S) = y]$. The total number of test functions is $m \coloneqq t|Y|$. If $m > n^2$, then, once again, the guarantee of \Cref{thm:formal-simple} is vacuous, so we assume $m \leq n^2$. Applying \Cref{thm:main-binary} as well as the inequality $\paren*{\psi(S) - \psi(\mcD)}^2 \leq w\cdot \mathrm{error}(\psi, S, \mcD)$ gives
\begin{equation*}
\Ex\bracket*{\sup_{t \in [T], y \in Y} \paren*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)}^2} \leq O\paren*{w\cdot\frac{b + \log (n^2) + 1}{n}} \leq O\paren*{w \log n \cdot \paren*{\frac{tw|Y|}{n^2} + \frac{1}{n}}}
\end{equation*}
as desired.
\end{proof}
\section{Introduction}
Data is a scarce and valuable resource. As a result, data analysts often reuse the same dataset to answer multiple queries. The works \cite{DFHPRR15,HU14} initiated the study of \emph{adaptive data analysis}, which aims to give provable guarantees that query answers will have low bias, i.e., be representative of the full population, even when a dataset is reused adaptively. Since then, a number of works have explored the adaptive reuse of data \cite{DFHPR15,DFHPR15b,SU15,SU15between,BNSSSU16,RZ16,RRST16,smith2017survey,FS17,FS18,FRR20,DK22}.
Prior work can be split into two camps. The first focuses on the design of \emph{mechanisms}, a layer between the analysts and the dataset \cite{DFHPR15,DFHPR15b,DK22,FS18,FS17,SU15between}. In those works, analysts do not have direct access to the data. Instead, when they wish to ask a query $\phi$, they pass it to the mechanism. The mechanism uses the dataset to answer $\phi$ without revealing too much information about the dataset, e.g., by adding noise to the output of $\phi$ applied to the dataset.
The second camp provides ways to quantify the amount of information that has so far been revealed about the dataset and guarantees that, when that quantity is small, query answers will have low bias. Such measures include those based on mutual information \cite{FS18,RZ16}, max information \cite{RRST16}, and differential privacy \cite{DFHPRR15,BNSSSU16}. Work in this camp is closely intertwined with the first: Generally, these information measures are proposed with a specific mechanism design in mind. To prove that a mechanism's responses have low bias, the first step is to prove that it reveals little information, and the second is to prove that revealing little information implies low bias.
This work is motivated by a simple question:
\begin{quote}
\emph{What minimal assumptions can we make about the queries to guarantee that the results will have low bias, even without an explicit mechanism?}
\end{quote}
The purpose of asking this question is twofold. First, it models a reality in which data analysts often do not explicitly use mechanisms to obfuscate query results before looking at them. Second, if these assumptions are sufficiently easy to explain to an analyst, they can be actionable, as the analyst can keep the assumptions in mind when deciding how to analyze data.
A first attempt at answering this question is that the queries should reveal little information as quantified by any of the aforementioned measures. We find this to be an unsatisfactory response. It is often difficult to determine whether a natural sequence of queries satisfies those measures, so it is not clear whether the measures form a good model of reality. Furthermore, it is not clear what takeaways an analyst not versed in the intricacies of information theory should glean from the idea that they should try to minimize mutual information or some differential privacy parameters.
\pparagraph{Our approach.} We show that as long as each query takes as input a random subsample of the dataset and outputs to a small range, the results will have low bias. Quantitatively, our results depend on both the size of the subsample and the number of possible outputs the query has. Unlike previous information measures, this requirement is completely syntactic and trivial to verify. The quantities are also intuitive and easy to explain to a data analyst who may be interested in ensuring their results have low bias.
One interpretation of this framework is that it eliminates the need to design a noise distribution for each task. Prior works design mechanisms to bound the bias by adding an appropriate amount of noise to the true result before returning it (e.g. by adding a mean-$0$ Gaussian). Our work shows that the noise inherent in subsampling suffices. It also extends to tasks where it is difficult to design an appropriate noise distribution -- for example, when the output of each query is categorical rather than numerical.
As easy corollaries of this subsampling approach, we give simple mechanisms for two foundational tasks, answering statistical queries and median finding, demonstrating the power of this framework. In particular, our mechanism for answering the broad and influential class of \emph{statistical queries} (SQs) \cite{Kea98,FGRVX17} achieves state-of-the-art accuracy in many parameter regimes. In addition to their broad applicability, statistical queries have been the standard bearer by which we assess the utility of approaches for adaptive data analysis since the early works of \cite{DFHPRR15,BNSSSU16}. Our SQ mechanism has advantages beyond its accuracy: It runs in sublinear time, and its extreme simplicity renders it broadly applicable in non-standard settings.
\section{Bounding the mutual information}
\label{sec:MI}
In this section, we'll use $\chi^2$ stability to bound the mutual information between the sample $\bS$ and the sequence of responses of the analyst $\by_1, \ldots, \by_T$. Explicitly, we'll prove the following.
\begin{theorem}[Formal version of \Cref{thm:MI-informal}]
\label{thm:MI-bound-formal}
For any deterministic analyst $\mcA$, distribution $\mcD$, and sample size $n \geq 2$, draw $\bS \sim \mcD^n$. For each $t \in [T]$, let $\phi_t = \mcA(t, \by_1, \ldots, \by_{t-1})$ and $\by_t \sim \phi^{(n)}_t(\bS)$. Then,
\begin{equation*}
I(\bS ; (\by_1, \ldots, \by_T)) \leq n\cdot\Ex\bracket*{\sum_{t \in [T]}\mathrm{cost}(\phi_t)},
\end{equation*}
where the cost of a $p$-uniform query $\phi:X^w\to Y$ is
\begin{equation*}
\mathrm{cost}(\phi) \coloneqq \frac{w (|Y| - 1)}{(n-1)(n-w)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
\end{theorem}
Before proving \Cref{thm:MI-bound-formal}, we'll be explicit about all sources of randomness. The analyst's strategy, mapping previous query responses to each new query, is assumed to be deterministic; this is without loss of generality (see \Cref{lem:rand-to-det}) and simplifies notation. Recall from \Cref{def:p-uniform} that a $p$-uniform query $\phi:X^w \to Y$ must satisfy $\Pr[\phi(S') = y] \geq p$ for every $S' \in X^w$ and $y \in Y$. For any $p > 0$, this requires that the query $\phi$ have a randomized output. Therefore, the explicit sampling process is:
\begin{enumerate}
\item A sample $\bS \sim \mcD^n$ is drawn.
\item At each time step $t \in [T]$, as a \emph{deterministic} function of previous responses $\by_1, \ldots, \by_{t-1}$, the analyst chooses a query $\phi_t:X^w \to Y$. This query is answered by first subsampling $\bS' \sim \binom{\bS}{w}$ and then drawing a response $\by_t \sim \phi_t(\bS')$.
\end{enumerate}
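This process can be written as a short interaction loop; the toy analyst, query width, and range below are illustrative assumptions, not part of the formal setup:

```python
import random

def run_interaction(S, analyst, T, w, rng):
    """One round of the adaptive protocol: the analyst picks each query
    deterministically from past responses; each query is answered on a
    fresh random w-subset of the fixed sample S."""
    responses = []
    for t in range(T):                       # step 2: adaptive loop
        phi = analyst(t, responses)          # deterministic in history
        sub = rng.sample(S, w)               # S' ~ binom(S, w)
        responses.append(phi(sub, rng))      # y_t ~ phi(S') (randomized)
    return responses

rng = random.Random(3)
S = [rng.random() for _ in range(1000)]      # step 1: S ~ D^n, drawn once

def analyst(t, history):
    # Toy strategy: adapt the threshold to the previous response.
    thr = 0.5 if not history else 0.25 + 0.5 * history[-1]
    return lambda sub, r: int(sum(x < thr for x in sub) >= len(sub) / 2)

responses = run_interaction(S, analyst, T=5, w=101, rng=rng)
print(responses)  # a {0,1}-valued transcript of length T
```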
The main technical ingredient of \Cref{thm:MI-bound-formal} is the following bound on the $\chi^2$-stability of a single (possibly randomized) subsampling query.
\begin{lemma}
\label{lem:bound-chi-stab}
For any query $\phi:X^w \to Y$, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable with respect to $\phi^{(n-1)}$ for
\begin{equation*}
\eps \coloneqq \frac{w(|Y| - 1)}{(n-1)(n-w)}.
\end{equation*}
\end{lemma}
\Cref{lem:bound-chi-stab} is an easy corollary of the following, slightly more general, lemma.
\begin{lemma}
\label{lem:var-bound}
For any nonnegative integers $w \leq n - 1$, and $f: \binom{[n]}{w} \to \R$,
\begin{equation}
\label{eq:var-ratio}
\Varx_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f(\bT) \mid i \notin \bT] } \leq \frac{w}{(n-1)(n-w)} \Varx_{\bT \sim \binom{[n]}{w}}[f(\bT)].
\end{equation}
\end{lemma}
\begin{proof}[Proof of \Cref{lem:bound-chi-stab} from \Cref{lem:var-bound}]
Fix a sample $S \in X^n$. For each $y \in Y$, we define, $f_y: \binom{[n]}{w} \to \R$ which, upon receiving $i_1, \ldots, i_w \in [n]$, outputs $\Pr[\phi(S_{i_1}, \ldots, S_{i_w}) = y]$. Then,
\begin{align*}
\Ex_{\bi \sim [n]} & \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}} \\
& =\Ex_{\bi \sim [n]}\bracket*{\sum_{y \in Y} \frac{\paren*{\phi^{(n)}(S)(y) - \phi^{(n-1)}(S_{-\bi})(y)}^2}{\phi^{(n)}(S)(y)}} \\
&= \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]}{\phi^{(n)}(S)(y)} \tag{$\phi^{(n)}(S)(y) = \Ex_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]$}\\
&= \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]}[\phi^{(n-1)}(S_{-\bi})(y)]}{\phi^{(n)}(S)(y)(1 - \phi^{(n)}(S)(y))} \cdot (1 - \phi^{(n)}(S)(y))
\end{align*}
Note that for any random variable $\bx$ bounded within $[0,1]$ with mean $\mu$, $\Ex[\bx^2] \leq \mu$ and so $\Varx[\bx] \leq \mu(1 - \mu)$. Applying this to the random variable $f_y(\bT)$, whose mean is $\phi^{(n)}(S)(y)$, shows that the denominator in the above expression is at least $\Varx_{\bT \sim \binom{[n]}{w}}[f_y(\bT)]$, and so,
\begin{align*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}} &\leq \sum_{y \in Y} \frac{\Varx_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f_y(\bT) \mid \bi \notin \bT]}}{\Varx_{\bT \sim \binom{[n]}{w}}[f_y(\bT)]} \cdot (1 - \phi^{(n)}(S)(y)) \\
& \leq \sum_{y \in Y} \frac{w}{(n-1)(n-w)}\cdot (1 - \phi^{(n)}(S)(y)) \tag{\Cref{lem:var-bound}} \\
&= \frac{w}{(n-1)(n-w)} \cdot \paren*{|Y| - \sum_{y \in Y} \phi^{(n)}(S)(y)} \\
&= \frac{w(|Y| - 1)}{(n-1)(n-w)} = \eps.
\end{align*}
Since this holds for any $S \in X^n$, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable w.r.t. $\phi^{(n-1)}$.
\end{proof}
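\Cref{lem:bound-chi-stab} can also be verified exhaustively for a small deterministic query; here $\phi$ is majority on bits (an illustrative choice) and both subsampling distributions are enumerated exactly:

```python
from itertools import combinations

def subsample_dist(S, w, phi):
    """Exact output distribution of phi on a uniform w-subset of S."""
    subsets = list(combinations(S, w))
    dist = {}
    for T in subsets:
        y = phi(T)
        dist[y] = dist.get(y, 0) + 1 / len(subsets)
    return dist

n, w = 6, 3
S = [1, 1, 1, 0, 0, 0]
phi = lambda T: int(sum(T) >= 2)   # majority of 3 bits
Y = [0, 1]

full = subsample_dist(S, w, phi)   # phi^{(n)}(S)
chi2_avg = 0.0
for i in range(n):
    holdout = subsample_dist(S[:i] + S[i+1:], w, phi)  # phi^{(n-1)}(S_{-i})
    chi2_avg += sum(
        (full.get(y, 0) - holdout.get(y, 0)) ** 2 / full[y] for y in Y
    ) / n

eps = w * (len(Y) - 1) / ((n - 1) * (n - w))
print(chi2_avg, eps)  # average chi^2 is 0.16, below eps = 0.2
```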
\begin{proof}[Proof of \Cref{lem:var-bound}]
In order to prove \Cref{lem:var-bound}, it is sufficient to restrict ourselves to mean-$0$ functions $f$, as both sides of \Cref{eq:var-ratio} are invariant to translations. We'll consider the vector space of all such functions
\begin{equation*}
\mcV \coloneqq \set*{f: \binom{[n]}{w} \to \R \,\bigg| \Ex_{\bT \sim \binom{[n]}{w}}[f(\bT)] = 0},
\end{equation*}
endowed with the inner product
\begin{equation*}
\langle f, g \rangle \coloneqq \Ex_{\bT \sim \binom{[n]}{w}}[f(\bT)g(\bT)].
\end{equation*}
With this choice of inner product, for any $f \in \mcV$, $\Varx[f] = \langle f, f\rangle = \norm{f}^2$. Defining $\phi: \mcV \times \mcV \to \R$,
\begin{equation*}
\phi(f, g) \coloneqq \Ex_{\bi \sim [n]} \bracket*{\Ex_{\bT \sim \binom{[n]}{w}}[f(\bT) \mid \bi \notin \bT] \cdot \Ex_{\bT \sim \binom{[n]}{w}}[g(\bT) \mid \bi \notin \bT]},
\end{equation*}
we have that for any mean-zero $f$, the left hand side of \Cref{eq:var-ratio} is equal to $\phi(f,f)$. Clearly $\phi$ is bilinear, symmetric, and positive semi-definite. Our goal is to find the maximum $\phi(f,f)$ among $f$ with $\norm{f} \leq 1$, which is just the maximum eigenvalue of the linear map corresponding to $\phi$. We'll show that this maximum occurs for a \emph{linear} $f$, where linear functions are defined as
\begin{equation}
\label{eq:def-linear}
\mcL \coloneqq \set*{T \mapsto \sum_{i \in T} \alpha_i \bigg|\, \alpha \in \R^n} \subseteq \R^{\binom{[n]}{w}}.
\end{equation}
Consider any $f \in \mcV$ that is orthogonal to every linear function. Since the function $T \mapsto \Ind[i \notin T]$ is linear\footnote{This function is formed by setting $\alpha_j = \frac{1}{w} - \Ind[j = i]$ in \Cref{eq:def-linear}.}, for any $i \in [n]$, we have that $\Ex[f(\bT) \mid i \notin \bT] = 0$. Therefore, for any $g \in \mcV$,
\begin{equation*}
\phi(f, g) = \Ex_{\bi \sim [n]} \bracket*{0 \cdot \Ex[g(\bT) \mid \bi \notin \bT]} = 0.
\end{equation*}
As a result, all $f$ in $\mcL^{\perp}$ are in the kernel of the linear map corresponding to $\phi$. Hence, to find the maximum of $\phi(f,f)$ among $\norm{f} \leq 1$, it is sufficient to just consider linear functions. Pick an arbitrary mean-zero $\alpha \in \R^n$ and set $f = T \mapsto \sum_{i \in T} \alpha_i$. We compute the variance of $f$:
\begin{align*}
\norm{f}^2 &= \Ex[f(\bT)^2] \\
&= \Ex\bracket*{\paren*{\sum_{i \in \bT} \alpha_i}^2} \\
&= \sum_{i \in [n]} \Pr[i \in \bT] \alpha_i^2 + \sum_{i \neq j} \Pr[i,j \in \bT] \alpha_i\alpha_j \tag{Linearity of expectation} \\
&= \frac{w}{n}\sum_{i \in [n]} \alpha_i^2 + \frac{w(w-1)}{n(n-1)}\sum_{i \neq j } \alpha_i \alpha_j \\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\alpha_i + \frac{w-1}{n-1} \sum_{j \neq i} \alpha_j}}\\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\frac{n-w}{n-1} \alpha_i + \frac{w-1}{n-1} \sum_{j \in [n]} \alpha_j}}\\
&= \frac{w}{n} \paren*{\sum_{i \in [n]} \alpha_i \cdot\paren*{\frac{n-w}{n-1} \alpha_i}} \tag{$f$ is mean-zero} \\
&= \frac{w(n-w)}{n(n-1)} \sum_{i \in [n]} \alpha_i^2.
\end{align*}
Finally, we bound $\phi(f,f)$. In order to do so, we first compute, for any $i \in [n]$
\begin{align*}
\Ex[f(\bT) \mid i \notin \bT] &= \sum_{j \neq i} \Pr[j \in \bT \mid i \notin \bT] \cdot \alpha_j \\
&=\sum_{j \neq i} \frac{w}{n-1} \cdot \alpha_j \\
&= \frac{w}{n-1} \left(\sum_{j \in [n]} \alpha_j\right) - \frac{w}{n-1} \alpha_i \\
&= -\frac{w}{n-1} \alpha_i \tag{$f$ is mean-zero.}
\end{align*}
This allows us to compute,
\begin{align*}
\phi(f,f) &= \Ex_{\bi \sim [n]} \bracket*{\Ex[f(\bT) \mid \bi \notin T]^2} \\
&= \Ex_{\bi \sim [n]} \bracket*{\paren*{-\frac{w}{n-1} \alpha_i}^2} \\
&= \frac{w^2}{n(n-1)^2} \sum_{i \in [n]} \alpha_i^2.
\end{align*}
Comparing the expressions for $\phi(f,f)$ and $\norm{f}^2$, we conclude that for any linear $f$, \Cref{eq:var-ratio} holds with equality. Since the maximum of $\phi(f,f)$ among $f$ with $\norm{f}^2 \leq 1$ occurs for a linear function, we are done.
\end{proof}
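Both sides of \Cref{eq:var-ratio} can be computed exactly for small $n$ and $w$: the inequality holds for an arbitrary $f$, with equality for linear $f$, as the proof predicts. A small numerical check (the parameters and the particular $\alpha$ are illustrative):

```python
from itertools import combinations
import random

def lhs_rhs(n, w, f):
    """Both sides of the variance inequality, computed exactly."""
    Ts = list(combinations(range(n), w))
    mean = sum(f(T) for T in Ts) / len(Ts)
    var_T = sum((f(T) - mean) ** 2 for T in Ts) / len(Ts)
    conds = []
    for i in range(n):
        excl = [f(T) for T in Ts if i not in T]
        conds.append(sum(excl) / len(excl))   # E[f(T) | i not in T]
    cmean = sum(conds) / n
    var_i = sum((c - cmean) ** 2 for c in conds) / n
    return var_i, w / ((n - 1) * (n - w)) * var_T

n, w = 6, 3
rng = random.Random(5)

# Arbitrary f: the inequality must hold.
table = {T: rng.random() for T in combinations(range(n), w)}
lhs, rhs = lhs_rhs(n, w, lambda T: table[T])

# Linear f (mean-zero alpha): equality, the extremal case in the proof.
alpha = [1, -1, 2, -2, 3, -3]
lhs_lin, rhs_lin = lhs_rhs(n, w, lambda T: sum(alpha[i] for i in T))
print(lhs, rhs, lhs_lin, rhs_lin)  # lhs <= rhs; lhs_lin == rhs_lin
```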
The next step in the proof of \Cref{thm:MI-bound-formal} is to convert the $\eps$-$\chi^2$ stability of each subsampling query to $\eps'$-ALKL stability for appropriately chosen $\eps'$.
\begin{corollary}
\label{cor:bound-ALKL-stab}
For any $p$-uniform query $\phi:X^w \to Y$, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \frac{w (|Y| - 1)}{(n-1)(n-w)} \cdot \min \paren*{5 + 4 \log n, 1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
\end{corollary}
\begin{proof}
First note that if $|Y| = 1$, the query is trivially $0$-ALKL stable as its output cannot depend on its input. Therefore, we may assume that $|Y| \geq 2$. By \Cref{lem:bound-chi-stab}, $\phi^{(n)}$ is $\eps$-$\chi^2$ stable with respect to $\phi^{(n-1)}$ for $\eps = \frac{w (|Y| - 1)}{(n-1)(n-w)}$. By the first guarantee of \Cref{thm:chi-to-KL}, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{align*}
\eps' &\coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps)) \\
&= \eps \cdot \paren*{3 + 2 \log\paren*{\frac{|Y|}{\frac{w (|Y| - 1)}{(n-1)(n-w)}}}}\\
&= \eps \cdot \paren*{3 + 2 \log\paren*{\frac{|Y|}{|Y|-1} \cdot {\frac{(n-1)(n-w)}{w}}}}\\
&\leq \eps \cdot \paren*{3 + 2 \log\paren*{2n^2}} \tag{$|Y| \geq 2, w \geq 1$} \\
&= \eps \cdot\paren*{3 + 2\log 2 + 4 \log n} \leq \eps \cdot(5 + 4 \log n).
\end{align*}
This gives the first of the two desired bounds. For the other bound, we take advantage of the $p$-uniformity of $\phi$. Consider any $S \in X^n$, $y \in Y$, and $i \in [n]$. We can write,
\begin{align*}
\phi^{(n)}(S)(y) &= \Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y]] \\
&= \Pr[i \in \bS']\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \in \bS'] + \Pr[i \notin \bS']\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \notin \bS'] \\
&= \frac{w}{n}\Ex_{\bS' \sim \binom{S}{w}}[\Pr[\phi(\bS') = y] \mid i \in \bS'] + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y)\\
&\leq \frac{w}{n} + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y) \\
&\leq \frac{w}{np}\cdot \phi^{(n-1)}(S_{-i})(y) + (1 - \frac{w}{n}) \cdot \phi^{(n-1)}(S_{-i})(y) \tag{$\phi^{(n-1)}(S_{-i})(y) \geq p$}\\
&\leq \paren*{\frac{w}{np} + 1}\cdot \phi^{(n-1)}(S_{-i})(y).
\end{align*}
Therefore, setting $\tau = (1 + \frac{w}{np})^{-1}$, we have that $\phi^{(n-1)}(S_{-i})(y) \geq \tau \cdot \phi^{(n)}(S)(y)$. Applying the second guarantee of \Cref{thm:chi-to-KL}, $\phi^{(n)}$ is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \eps \cdot(1 + \log(1/\tau)) = \eps \cdot \paren*{1 + \log\paren*{1 + \frac{w}{np}}}.
\end{equation*}
The desired result follows from taking whichever of the two ALKL stability bounds is better.
\end{proof}
Given the above ALKL stability bound, the proof of \Cref{thm:MI-bound-formal} is \emph{almost} completed by \Cref{fact:KL-MI} from \cite{FS18} which connects ALKL stability to mutual information. However, to directly apply \Cref{fact:KL-MI}, we would need an \emph{a priori} bound on the ALKL stability of each query. Instead, we allow the analyst to choose its query as a function of the previous responses and only require that the total cost be bounded in expectation. Specifically, we want to show an adaptive composition of ALKL stability even when the stability parameters of later queries depend on the responses to prior queries. This has recently been shown to hold in \cite{FZ21}. However, their setting is slightly different, focusing on analysts that are budgeted almost surely rather than in expectation, so we cannot use their results as a black box. Instead, we prove the following.
\begin{lemma}[Generalization of \Cref{fact:KL-MI}]
\label{lem:KL-to-MI-expectation}
Draw $\bS \sim \mcD^n$. Then, for each $t \in [T]$, let an analyst choose a randomized algorithm $\mcM_t:X^n \to Y$ as a function of $\by_1, \ldots, \by_{t-1}$ and draw an output $\by_t \sim \mcM_t(\bS)$. If $\mcM_t$ is $\beps_t$-ALKL stable, then,
\begin{equation*}
I(\bS; (\by_1, \ldots, \by_T)) \leq n \cdot \Ex\bracket*{\sum_{t \in [T]}\beps_t}
\end{equation*}
where the expectation is taken over the analyst's choices of queries, which in turn depend on the randomness of $\bS$ and $\by_1, \ldots, \by_T$.
\end{lemma}
\Cref{thm:MI-bound-formal} is a direct consequence of \Cref{lem:KL-to-MI-expectation} and \Cref{cor:bound-ALKL-stab}. The proof of \Cref{lem:KL-to-MI-expectation} is mostly a matter of looking through \cite{FS18}'s proof of \Cref{fact:KL-MI} and confirming that everything still holds in this more general setting.
The first statement of \Cref{fact:KL-MI} is a special case of the following more general fact, implicit in \cite{FS18}'s proof.
\begin{fact}[Implicit in \cite{FS18}'s proof of \Cref{fact:KL-MI}]
\label{fact:KL-to-MI-general}
For any randomized algorithm $\mcM:X^n \to Y$ and $\bS \sim \mcD^n$, if $\by \sim \mcM(\bS)$ then for any randomized algorithm $\mcM':X^{n-1} \to Y$
\begin{equation*}
I(\bS; \by) \leq n \cdot \Ex_{\bS}\bracket*{\Ex_{\bi \sim [n]}\bracket*{\KLbig{\mcM(\bS)}{\mcM'(\bS_{-\bi})}}}.
\end{equation*}
\end{fact}
To take advantage of \Cref{fact:KL-to-MI-general}, we'll define ALKL stability with respect to a distribution. Note that if a randomized algorithm is $\eps$-ALKL stable, then it is $\eps$-ALKL stable with respect to every distribution.
\begin{definition}[ALKL stability with respect to a distribution]
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable with respect to a (not necessarily product) distribution $\mcD$ over $X^n$ if there is a randomized algorithm $\mcM': X^{n-1} \to Y$ for which
\begin{equation*}
\Ex_{\bS \sim \mcD, \bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps.
\end{equation*}
\end{definition}
Then, we generalize \cite{FS18}'s adaptive composition of ALKL stability.
\begin{proposition}[Adaptive composition of ALKL stability with respect to a distribution]
\label{prop:adaptive-composition-dist}
For any $\mcM_1: X^n \to Y_1$, $\mcM_2: Y_1 \times X^n \to Y_2$, and distribution $\mcD$ over $X^n$ satisfying
\begin{enumerate}
\item $\mcM_1:X^n \to Y_1$ is $\eps_1$-ALKL stable with respect to $\mcD$.
\item For $\bS \sim \mcD$ and any $y_1 \in Y_1$, $\mcM_2(y_1, \cdot)$ is $(\eps_2^{(y_1)})$-ALKL stable with respect to $(\mcD \mid \mcM_1(\bS) = y_1)$.
\end{enumerate}
the randomized algorithm mapping $S$ to $\mcM_2(\mcM_1(S), S)$ is $\eps'$-ALKL stable with respect to $\mcD$ for
\begin{equation*}
\eps' \coloneqq\eps_1 + \Ex_{\bS \sim \mcD,\by_1 \sim \mcM_1(\bS) }[\eps_2^{(\by_1)}].
\end{equation*}
\end{proposition}
\begin{proof}
This proof uses the well-known chain rule of KL divergence. For any distributions $\mcD$ and $\mcE$ over domain $X \times Y$,
\begin{equation*}
\KL{\mcD}{\mcE} = \KL{\mcD(x)}{\mcE(x)} + \Ex_{\bx' \sim \mcD(x)}\bracket*{\KLbig{\mcD(y \mid x = x')}{\mcE(y \mid x = x')}}
\end{equation*}
where $\mcD(x)$ denotes the marginal distribution of $\mcD$ over $X$ and $\mcD(y \mid x = x')$ its conditional distribution over $Y$. Then, we bound
\begin{align*}
\Ex_{\bS \sim \mcD, \bi \sim [n]} &\bracket*{\KLbig{\mcM_2(\mcM_1(\bS), \bS)}{\mcM_2'(\mcM_1'(\bS_{-\bi}), \bS_{-\bi})}} \\
&= \Ex_{\bS \sim \mcD, \bi \sim [n]} \bracket*{\KLbig{\mcM_1(\bS)}{\mcM_1'(\bS_{-\bi})}} + \Ex_{\bS \sim \mcD, \bi \sim [n], \by_1 \sim \mcM_1(\bS)} \bracket*{\KLbig{\mcM_2(\by_1,\bS)}{\mcM_2'(\by_1,\bS_{-\bi})}} \\
&\leq \eps_1 + \Ex_{\bS \sim \mcD,\by_1 \sim \mcM_1(\bS) }[\eps_2^{(\by_1)}] = \eps'.\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of \Cref{lem:KL-to-MI-expectation}]
By repeatedly applying \Cref{prop:adaptive-composition-dist}, the mechanism that takes as input $\bS$ and outputs $\by = (\by_1, \ldots, \by_t)$ is $\paren*{\Ex\bracket*{\sum_{t \in [T]}\beps_t}}$-ALKL stable w.r.t. $\mcD^n$. The desired result follows from \Cref{fact:KL-to-MI-general}.
\end{proof}
\subsection{Other related work}
Subsampling has been thoroughly explored in the context of privacy amplification (see e.g. \cite{BBG18,ZW19} or the book chapter \cite{steinkeBookChapter}): if $\mcA$ is a differentially private algorithm, running $\mcA$ on a random subset of the data gives an algorithm with even better privacy parameters. Given the previous applications of differential privacy to adaptive data analysis, this seems like a natural starting point for our work. However, such an approach is not sufficient to analyze subsampling queries. Indeed, subsampling queries do not necessarily satisfy $(\eps, \delta)$-differential privacy with sufficiently good parameters to give useful bounds on the bias.
Fish, Reyzin, and Rubinstein explored the use of subsampling to speed up classical mechanisms for adaptive data analysis \cite{FRR20}. For example, their mechanism for answering a statistical query $\phi$ computes $\phi$ on a random subsample of the data \emph{and} adds Laplacian noise to that result. This allows them to retain the accuracy guarantees of prior mechanisms that added Laplacian noise \cite{BNSSSU16} while also running in sublinear time. In contrast, our work shows that subsampling alone is sufficient, and achieves sample size bounds that improve upon prior work.
\subsection{Our results}
We'll show that an analyst can ask an adaptive sequence of \emph{subsampling queries} without incurring large bias.
\begin{definition}[Subsampling query]
\label{def:subsampling-query}
For any sample $S \in X^n$ and query $\phi:X^w \to Y$ where $w \leq n$, the \emph{subsampling query} $\phi$ is answered by drawing $\bx_1, \ldots, \bx_w$ uniformly without replacement from $S$, and then providing the answer $\by = \phi(\bx_1, \ldots, \bx_w)$.
The notation $\phi^{(n)}(S)$ denotes the distribution of $\by$ defined above. Similarly, for any distribution $\mcD$ supported on $X$, the notation $\phi^{(\mathrm{dist})}(\mcD)$ denotes the distribution of $\by' = \phi(\bx_1', \ldots, \bx_w')$ when $\bx_1', \ldots, \bx_w' \iid \mcD$.
\end{definition}
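Answering a subsampling query is a one-liner; the following sketch (our own illustration with an assumed interface, not code from the paper) draws $w$ elements without replacement and applies the query:

```python
import random

# One draw from phi^{(n)}(S): sample w elements of S uniformly without
# replacement and apply phi. (Our own illustration; the interface is assumed.)
def answer_subsampling_query(phi, S, w, rng=random):
    subsample = rng.sample(S, w)
    return phi(*subsample)

rng = random.Random(0)
S = [1, -1, 1, 1, -1, 1]
phi = lambda a, b, c: int(a + b + c > 0)  # w = 3: majority sign of the subsample
answers = [answer_subsampling_query(phi, S, 3, rng) for _ in range(1000)]
# Here phi^{(n)}(S)(1) = 16/20 = 0.8 exactly, so the empirical mean is near 0.8.
```

Repeated calls produce fresh independent draws from $\phi^{(n)}(S)$, which is exactly how the mechanism answers each query in \Cref{fig:adaptive-analyst}.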
We allow the analyst to be \emph{adaptive}: The analyst's choice for the $t^{\text{th}}$ query may depend arbitrarily on the responses to the first $t - 1$ queries, as summarized in \Cref{fig:adaptive-analyst}. We first give an informal result bounding the sample size needed to ensure the results have low bias.
\begin{theorem}[The subsampling mechanism has low bias]
\label{thm:informal}
Suppose an analyst asks an adaptive sequence of $T$ subsampling queries, each mapping $X^w$ to $Y$, to a sample $\bS \sim \mcD^n$. As long as
\begin{equation*}
n \geq \tilde{\Omega}(w\sqrt{T\cdot |Y|}),
\end{equation*}
with high probability, all of the queries will have low bias.
\end{theorem}
\Cref{thm:informal} can be compared to a naive approach which takes a fresh batch of $w$ samples for each query. Subsampling has a quadratically better dependence on $T$ than that approach, which requires $n \geq wT$. The following formal version of \Cref{thm:informal} quantifies the bias of each query.
\begin{figure}[b]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Input:} A sample $S \in X^n$ not known to the analyst.\vspace{2pt}
For each time step $t \in [T]$, the analyst \vspace{2pt}
\begin{enumerate}[nolistsep,itemsep=2pt]
\item Selects a query $\phi_t: X^{w_t} \to Y_t$ which can depend on the previous responses $\by_1, \ldots, \by_{t-1}$.
\item Receives the response $\by_t \sim \phi^{(n)}_t(S)$.
\end{enumerate}
\end{tcolorbox}
\caption{An analyst asking an adaptive sequence of subsampling queries.}
\label{fig:adaptive-analyst}
\end{figure}
\begin{theorem}[Formal version of \Cref{thm:informal}]
\label{thm:formal-simple}
For any distribution $\mcD$ over domain $X$, and analyst making a series of adaptive queries $\phi_1, \ldots, \phi_T:X^w \to Y$ to a sample $\bS \sim \mcD^n$,
\begin{equation*}
\Ex\bracket*{\sup_{t \in [T], y \in Y} \paren*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)}^2} \leq O\paren*{w\log n\paren*{\frac{wT |Y|}{n^2} + \frac{1}{n}}}.
\end{equation*}
where the expectation is both over the sample $\bS$ and the analyst's choice of queries, and the notation $\mcD(x)$ denotes $\Prx_{\bx \sim \mcD}[\bx = x]$.
\end{theorem}
By Markov's inequality, with probability at least $0.9$, for all $t \in [T]$ and $y \in Y$
\begin{equation*}
\abs*{\phi^{(n)}_t(\bS)(y) - \phi^{(\mathrm{dist})}_t(\mcD)(y)} \leq \tilde{O}\paren*{\frac{w\sqrt{T|Y|}}{n} + \sqrt{\frac{w}{n}}}.
\end{equation*}
The second term, $\sqrt{\frac{w}{n}}$, quantifies the \emph{inherent} bias in a sample: There are\footnote{One such example, for $\mcD = \mathrm{Unif}(\{-1,1\})$, is the query $\phi(S) = \Ind[\sum_{x \in S} x \geq 0]$.} queries $\phi:X^w \to \zo$, for which, even with a fresh sample $\bS$, $\Ex_{\bS \sim \mcD^n}[|\phi^{(n)}(\bS)(1) - \phi^{(\mathrm{dist})}(\mcD)(1)|] = \Theta(\sqrt{\frac{w}{n}})$.
The first term therefore quantifies the extra bias. When $T |Y| \leq \frac{n}{w}$, that first term is dominated by the inherent bias. The guarantee then slowly degrades from the inherent bias to vacuous as $T|Y|$ varies between $\frac{n}{w}$ and $\frac{n^2}{w^2}$. In contrast, naive sample splitting works well for $T \leq \frac{n}{w}$ but cannot move beyond that regime.
Our next theorem generalizes \Cref{thm:formal-simple} in a number of ways. First, it allows the analyst to choose a different domain and range size for each query: as a function of the responses $\by_1, \ldots, \by_{t-1}$, the analyst chooses $w_t$, $Y_t$, and a subsampling query $\phi_t:X^{w_t} \to Y_t$. The only requirement is that the analyst not exceed a total \emph{budget}.
\begin{definition}[Budgeted analyst]
\label{def:budgeted-analyst}
For any distribution $\mcD$, sample size $n$, cost function $\mathrm{cost}$ mapping queries to $\R_{\geq 0}$, and budget $b \geq 0$, we say an analyst is \emph{$(\mathrm{cost},b)$-budgeted in expectation} if
\begin{equation*}
\Ex\bracket*{\sum_{t \in [T]}\mathrm{cost}(\phi_t)} \leq b
\end{equation*}
where the expectation is over the analyst's choice of queries $\phi_1, \ldots, \phi_T$, which in turn depends on the randomness of $\bS \sim \mcD^n$ and the prior query outputs $\by_1, \ldots, \by_{T-1}$ in \Cref{fig:adaptive-analyst}. Similarly, we say that an analyst is \emph{$(\mathrm{cost},b)$-budgeted almost surely} if $\sum_{t \in [T]}\mathrm{cost}(\phi_t) \leq b$ holds almost surely.
\end{definition}
We further generalize \Cref{thm:formal-simple} by disconnecting the test queries from the first $T$ queries the analyst asks. After receiving the responses $\by_1, \ldots, \by_T$, the analyst chooses any set of test queries $\psi_1:X^{v_1} \to [0,1], \ldots, \psi_m:X^{v_m} \to [0,1]$ for which we bound $\abs{\Ex_{\by \sim \psi^{(n)}(\bS)}[\by] - \Ex_{\by \sim \psi^{(\mathrm{dist})}(\mcD)}[\by]}$. To recover \Cref{thm:formal-simple}, we define a test query $\psi(x_1, \ldots, x_w) \coloneqq \Ind[\phi_t(x_1, \ldots, x_w)=y]$ for each $t \in [T]$ and $y \in Y_t$. The following notation will be convenient.
\begin{definition}[Expectation of a query]
\label{def:expectation}
For a query $\phi:X^w \to Y \subseteq \R$ and sample $S \in X^n$, we use the notation $\phi(S)$ as shorthand for $\Ex_{\by \sim \phi^{(n)}(S)}[\by]$. Similarly, for a distribution $\mcD$ over domain $X$, we use the notation $\phi(\mcD)$ as shorthand for $\Ex_{\by \sim \phi^{(\mathrm{dist})}(\mcD)}[\by]$.
\end{definition}
A more technical improvement of \Cref{thm:main-binary} over \Cref{thm:formal-simple} is that the error bound improves when $\Var_{\mcD}(\psi) \coloneqq \Varx_{\by \sim \psi^{(\mathrm{dist})}(\mcD)}[\by]$ is small. \Cref{thm:formal-simple} corresponds to the pessimistic bound of $\Var_{\mcD}(\psi) \leq 1$. This improvement will be important to the application of answering statistical queries given in \Cref{subsec:SQ}. To state \Cref{thm:main-binary}, we define the \emph{error} of a test query.
\begin{definition}[Error of a query]
\label{def:error-simple}
For any $\psi:X^w \to [0,1]$, distribution $\mcD$ over $X$ and sample $S \in X^n$, we define
\begin{equation*}
\mathrm{error}(\psi, S, \mcD) \coloneqq \frac{1}{w} \cdot \min\paren*{\Delta, \frac{\Delta^2}{\Var_{\mcD}(\psi)}} \quad\quad\text{where }\Delta \coloneqq \abs{\psi(S) - \psi(\mcD)}.
\end{equation*}
\end{definition}
If $\mathrm{error}(\psi, S, \mcD)\leq \eps$, then $\Delta \leq \max(w\eps, \sqrt{w\eps \Var_{\mcD}(\psi)})$. When $\Var_{\mcD}(\psi) = 1$, the second term, or the trivial bound of $\Delta \leq 1$, dominates. As $\Var_{\mcD}(\psi)$ decreases, the bound improves until it hits a floor of $w\eps$.
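As a quick illustration, the error measure and its floor behavior can be transcribed directly from the definition above (a hypothetical helper of our own, not from the paper):

```python
# Hypothetical helper (our own transcription of the definition above): compute
# error(psi, S, D) from Delta = |psi(S) - psi(D)| and Var_D(psi).
def query_error(psi_S, psi_D, var_D, w):
    delta = abs(psi_S - psi_D)
    if var_D == 0:
        return delta / w  # as Var -> 0 the quadratic term blows up; min is Delta
    return min(delta, delta ** 2 / var_D) / w

# With Var_D(psi) = 1 and small Delta, the quadratic term dominates the min:
small = query_error(0.55, 0.5, 1.0, w=10)  # Delta = 0.05, Delta^2 = 0.0025
```

This makes the trade-off concrete: for a fixed error budget $\eps$, shrinking $\Var_{\mcD}(\psi)$ tightens the implied bound on $\Delta$ until the linear floor $w\eps$ takes over.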
\begin{theorem}[Generalization of \Cref{thm:formal-simple}]
\label{thm:main-binary}
For sample size $n$, define the cost of a query as
\begin{equation*}
\mathrm{cost}_n(\phi:X^w \to Y) \coloneqq \frac{w |Y| \log n}{n-w}.
\end{equation*}
For any distribution $\mcD$ over domain $X$ and adaptive analyst that is $(\mathrm{cost}_n, b)$-budgeted in expectation, let $\Psi$, $|\Psi| \leq m$, be a collection of tests the analyst chooses after receiving query responses $\by_1, \ldots, \by_T$. Then
\begin{equation*}
\Ex\bracket*{\sup_{\psi \in \Psi} \mathrm{error}(\psi, \bS, \mcD)} \leq O \paren*{\frac{b + \log m + 1}{n}}
\end{equation*}
where the expectation is both over the sample $\bS \sim \mcD^n$ and the analyst's decisions.
\end{theorem}
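To make the budget concrete, here is a direct transcription of $\mathrm{cost}_n$, with illustrative parameter values of our own choosing:

```python
import math

# A transcription (for intuition; the parameter values are our own illustrative
# choice) of the cost function in the theorem above, tallying the budget
# consumed by T identical queries phi : X^w -> Y.
def cost_n(w, Y_size, n):
    return w * Y_size * math.log(n) / (n - w)

n, w, Y_size, T = 10_000, 5, 2, 100
b = T * cost_n(w, Y_size, n)  # total budget consumed, roughly 0.92 here
# The theorem then bounds the expected sup-error by O((b + log m + 1) / n).
```
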
While \Cref{thm:main-binary} only guarantees low bias in expectation, we further show that a high probability guarantee holds when $w_t = 1$ for all $t \in [T]$.
\begin{theorem}[Improved dependence on failure probability]
\label{thm:high-probability}
In the setting of \Cref{thm:main-binary}, if the analyst is $(\mathrm{cost}_n, b)$-budgeted \emph{almost surely}, chooses $w_t = 1$ for all $t \in [T]$ and, as a function of the responses $\by_1, \ldots, \by_T$, chooses a single test $\psi:X \to [0,1]$, then, for any failure probability $\delta > 0$,
\begin{equation*}
\Pr\bracket*{\mathrm{error}(\psi, \bS, \mcD)\geq O\paren*{\log(1/\delta)\cdot\paren*{\frac{b + 1}{n}}}} \leq \delta.
\end{equation*}
\end{theorem}
Note that the logarithmic dependence on $\delta$ means that a union bound suffices to handle the case where the analyst chooses $m$ tests. In that case, the $\log(1/\delta)$ dependence is instead $\log(m/\delta)$.
The case of $w = 1$ is particularly interesting for two reasons.
\begin{enumerate}
\item It is sufficient for our application of answering statistical queries, a widely applicable query class, given in \Cref{subsec:SQ}. Indeed, statistical queries are precisely those queries $\phi:X^n \to [0,1]$ for which an unbiased (and bounded) estimator of $\phi(S)$ can be computed given a single $\bx \sim \mathrm{Unif}(S)$. Our mechanism for answering statistical queries simply averages many of these unbiased estimators.
\item One way to answer a query with $w \geq 2$ is to cast a sample of $n$ points from $\mcD$ as a sample of $\floor{n/w}$ points each drawn from $\mcD^w$. By doing so, each query $\phi:X^w \to Y$ can be answered by looking at one ``grouped point,'' and so \Cref{thm:high-probability} gives a high probability guarantee. We conjecture that such grouping is unnecessary and that \Cref{thm:high-probability} would directly hold without the restriction that $w_t = 1$ for all $t \in [T]$. That said, the proof breaks in this setting, and so we consider extending it to be an intriguing open problem.
\end{enumerate}
Lastly, we show it is possible to drop the $\log n$ dependence from \Cref{thm:main-binary,thm:high-probability} for queries that are sufficiently \emph{uniform}, meaning each output of the query is fairly likely to occur. To define uniform queries, we expand \Cref{def:subsampling-query} to allow for subsampling queries that are not deterministic functions. Rather, given a subsample $x_1, \ldots, x_w$, the output of the query may still be a random variable. Equivalently, we may think of every query as accepting as input random bits in addition to the subsample, though we will suppress those random bits from our notation.
\begin{definition}[$p$-uniform]
\label{def:p-uniform}
A subsampling query $\phi:X^w \to Y$ is said to be $p$-\emph{uniform} if, for every $x_1, \ldots, x_w \in X$ and every $y \in Y$, $\Pr[\phi(x_1, \ldots, x_w) = y] \geq p$.
\end{definition}
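One simple way to obtain a $p$-uniform query (our own illustrative construction, in the spirit of the noise added in the applications below) is to mix the query's output with a uniform sample from $Y$:

```python
import random

# Illustrative sketch (our own construction, not from the paper): any query can
# be made p-uniform by mixing in a uniformly random output. If with probability
# |Y| * p we emit a uniform y in Y, every output retains probability >= p.
def make_p_uniform(phi, Y, p, rng=random):
    assert len(Y) * p <= 1
    def noisy_phi(*xs):
        if rng.random() < len(Y) * p:
            return rng.choice(Y)
        return phi(*xs)
    return noisy_phi

rng = random.Random(1)
# A constant query phi(x) = 0 over Y = {0, 1}, made 0.2-uniform.
q = make_p_uniform(lambda x: 0, [0, 1], 0.2, rng)
freq_of_one = sum(q(None) for _ in range(2000)) / 2000  # expect about 0.2
```

Since each $y$ receives probability at least $|Y|p \cdot \frac{1}{|Y|} = p$ from the uniform branch alone, the wrapped query is $p$-uniform regardless of $\phi$.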
\begin{theorem}[Improved bounds for $p$-uniform queries]
\label{thm:main-noisy}
\Cref{thm:main-binary} holds, where for a $p$-uniform query $\phi:X^w \to Y$, the improved cost function
\begin{equation}
\label{eq:cost-noisy-exp}
\mathrm{cost}_n(\phi) \coloneqq \frac{w |Y|}{n-w} \cdot \min\paren*{\log n, 1 + \log\paren*{1 + \frac{w}{np}}}
\end{equation}
is used. For \Cref{thm:high-probability}, the improved cost function also incorporates the desired failure probability, $\delta$, and is defined
\begin{equation}
\label{eq:cost-noisy-hp}
\mathrm{cost}_{n,\delta}(\phi) \coloneqq \frac{|Y|}{n} \cdot \min\paren*{\log n, 1 + \log\paren*{1 + \frac{\log(1/\delta)}{np}}}.
\end{equation}
\end{theorem}
When $p$ is large enough, the cost function in \Cref{thm:main-noisy} improves upon that of \Cref{thm:main-binary,thm:high-probability} by a $\log n$ factor. In the applications of \Cref{subsec:SQ,subsec:approx-median}, we will add a small amount of noise to eliminate that log factor. Without that noise, the mechanisms would be even simpler, though the sample size needed would increase by a log factor.
\section{Notation and key definitions}
\label{sec:prelim}
\paragraph{Sets and multisets.}
For a natural number $n$, we use $[n]$ to denote the set $\set{1,\ldots, n}$. For a multiset $S \in X^n$ we use the notation $S_i$ to indicate the $i^\text{th}$ element of $S$, and $S_{-i} \coloneqq (S_1, \ldots, S_{i-1}, S_{i+1}, \ldots, S_n)$ denotes the remaining $n-1$ elements. For $w \leq n$, we use the notation $\binom{S}{w}$ to indicate the set of all size-$w$ multisets $S'$ that are contained within $S$.
\paragraph{Random variables and distributions.}
We use \textbf{boldface} (e.g. $\bx \sim \mcD$) to denote random variables, and generally will use calligraphic font to denote distributions. The notation $\bx \sim S$ for a (multi)set $S$ is shorthand for $\bx \sim \mathrm{Unif}(S)$. For a distribution $\mcD$ over domain $X$ and element $x \in X$, we use $\mcD(x)$ to denote the probability mass function of $\mcD$ evaluated at $x$. Similarly, for a subset $X' \subseteq X$, the notation $\mcD(X')$ is shorthand for $\sum_{x \in X'}\mcD(x)$. We use $\supp(\mcD) \coloneqq \{x \in X \mid \mcD(x) > 0\}$ for the support of $\mcD$. For convenience, all domains and distributions will be discrete.
\paragraph{Logarithms and exponentials.}
We use $\log$ to denote the natural logarithm, $\log_2$ to denote logarithms in base 2, and $\exp$ to denote the function $x \mapsto e^x$.
\paragraph{Properties of distributions and random variables.}
Throughout this paper, we will use the following two notions of ``closeness'' of two distributions.
\begin{definition}[Kullback-Leibler (KL) Divergence]
\label{def:KL}
For distributions $\mcD$, $\mcE$ supported on a domain $X$, the \emph{KL divergence} between $\mcD$ and $\mcE$ is defined as,
\begin{equation*}
\KL{\mcD}{\mcE} \coloneqq \Ex_{\bx \sim \mcD}\bracket*{\log \paren*{\frac{\mcD(\bx)}{\mcE(\bx)}}}.
\end{equation*}
\end{definition}
\begin{definition}[Neyman's $\chi^2$ divergence]
\label{def:chi-dist}
For distributions $\mcD$, $\mcE$ supported on a domain $X$, we define the \emph{$\chi^2$ divergence} between $\mcD$ and $\mcE$ as
\begin{equation*}
\chisq{\mcD}{\mcE} \coloneqq \Ex_{\bx \sim \mcD} \bracket*{\frac{(\mcD(\bx) - \mcE(\bx))^2}{\mcD(\bx)^2}} = \sum_{x \in X}\frac{(\mcD(x) - \mcE(x))^2}{\mcD(x)}.
\end{equation*}
\end{definition}
Note that our definition of $\chi^2$ divergence reverses the arguments relative to Pearson's $\chi^2$ divergence.
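Both divergences are easy to compute for discrete distributions given as probability tables; the sketch below (our own, not from the paper) also checks that the expectation form and the sum form of the $\chi^2$ divergence agree:

```python
import math

# Our own sketch of the two divergences above for discrete distributions given
# as probability dictionaries. chi_sq follows the sum form; expectation_form
# follows the E_{x ~ D}[(D(x) - E(x))^2 / D(x)^2] form, and the two must agree.
def kl(D, E):
    return sum(p * math.log(p / E[x]) for x, p in D.items() if p > 0)

def chi_sq(D, E):
    return sum((p - E.get(x, 0.0)) ** 2 / p for x, p in D.items() if p > 0)

D = {"a": 0.5, "b": 0.3, "c": 0.2}
E = {"a": 0.4, "b": 0.4, "c": 0.2}
expectation_form = sum(p * ((p - E[x]) ** 2 / p ** 2) for x, p in D.items())
assert math.isclose(chi_sq(D, E), expectation_form)
assert kl(D, E) >= 0  # KL divergence is nonnegative
```

Note that, matching the reversal remarked on above, the denominator is $\mcD(x)$ rather than $\mcE(x)$.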
Furthermore, mutual information will play a critical role in our proofs.
\begin{definition}[Mutual information]
\label{def:MI}
For random variables $\bx,\by$ jointly distributed according to a distribution $\mcD$, let $\mcD(x)$ and $\mcD(y)$ be the marginal distributions of $\bx$ and $\by$ respectively. The mutual information between $\bx$ and $\by$ is defined as
\begin{equation*}
I(\bx ; \by) \coloneqq \KL{\mcD}{\mcD(x) \times \mcD(y)} = \Ex_{\by} \bracket*{\KL{\mcD(x \mid \by = y)}{\mcD(x)}}.
\end{equation*}
\end{definition}
\subsection{Formal model of the analyst}
\Cref{fig:adaptive-analyst} summarizes the interaction of the analyst with a sample $S \in X^n$. Formally, we model the analyst $\mcA$ as a function mapping a time step $t \in [T]$, previous query responses $y_1, \ldots, y_{t-1}$, and a source of randomness $\bz \sim \mcZ$ to a subsampling query $\phi_t$. After the $T^{\text{th}}$ step, the analyst outputs a series of test queries $\psi_1:X^{v_1} \to \zo, \ldots, \psi_{m}:X^{v_m} \to \zo$, also as a function of $y_1, \ldots, y_T$ and $\bz$. With this formal description of the analyst, we can give the following lengthy but fully explicit description of \Cref{thm:main-binary}.
\begin{theorem}[\Cref{thm:main-binary} restated with a formal model of the analyst]
\label{thm:main-analyst}
For any analyst $\mcA$ and distributions $\mcD, \mcZ$, draw $\bS \sim \mcD^n$ and $\bz \sim \mcZ$. For each $t \in [T]$, let $\phi_t = \mcA(t, \by_1, \ldots, \by_{t-1}, \bz)$ and $\by_t \sim \phi^{(n)}_t(\bS)$. Then, for $\psi_1, \ldots, \psi_m = \mcA(T,\by_1, \ldots, \by_T, \bz)$,
\begin{equation}
\label{eq:expectation-analyst}
\Ex[\sup_{i \in [m]} \mathrm{error}(\psi_i, \bS, \mcD)] \leq O\paren*{\Ex\bracket*{\frac{\sum_{t \in [T]} \mathrm{cost}_n(\phi_t) + \log m + 1}{n} }}.
\end{equation}
\end{theorem}
We will restrict our attention to \emph{deterministic} analysts, where the analyst's output is a deterministic function of the previous responses. Deterministic analysts do not take in a source of randomness (previously denoted $\bz \sim \mcZ$). Through the following simple argument, this is without loss of generality.
\begin{lemma}[Deterministic analysts are as powerful as randomized analysts]
\label{lem:rand-to-det}
If \Cref{thm:main-binary} holds for deterministic analysts, it also holds for randomized analysts. The same is true for \Cref{thm:high-probability,thm:main-noisy}.
\end{lemma}
\begin{proof}
Let $\mcA$ be a randomized analyst. We can think of it as a mixture of deterministic analysts: $\mcA$ first draws $\bz \sim \mcZ$ and then executes the deterministic strategy $\mcA_{\bz} \coloneqq \mcA(\cdot, \bz)$. Then,
\begin{align*}
\Ex[\text{Error with $\mcA$ as the analyst}] &= \Ex_{\bz}[\Ex[\text{Error with $\mcA_{\bz}$ as the analyst}]] \\
&\leq \sup_{z} \Ex[\text{Error with $\mcA_{z}$ as the analyst}].
\end{align*}
The left-hand side of \Cref{eq:expectation-analyst} when $\mcA$ is the analyst is the expectation of the same quantity when $\mcA_{\bz}$ is the analyst. Therefore, if it is small for all deterministic analysts, it is also small for all randomized analysts. Similar arguments hold for \Cref{thm:high-probability,thm:main-noisy}.
\end{proof}
\section{Boosting the success probability}
\label{sec:reduction}
\Cref{thm:main-binary} proves that, with constant success probability, the analyst cannot find a test on which the sample is biased. In this section, we prove \Cref{thm:high-probability} showing that, when all of the analyst's queries $\phi:X^w \to Y$ satisfy $w = 1$, that guarantee holds with high probability. We do this via a reduction from small failure probability to constant failure probability.
\pparagraph{Notation} Throughout this section, we will only consider analysts who make queries of the form $\phi:X \to Y$ and tests of the form $\psi:X \to [0,1]$ (corresponding to $w = 1$), and analysts that only output a single test. Given an analyst $\mcA$ and sample $S \in X^n$, we'll use the notation $\mcA(S)$ as shorthand for the distribution of tests $\mcA$ asks on the sample $S$. I.e., $\bpsi \sim \mcA(S)$ is shorthand for, at each $t \in [T]$, setting $\phi_t = \mcA(\by_1, \ldots, \by_{t-1})$ and $\by_t \sim \phi^{(n)}_t(S)$, and then setting $\bpsi = \mcA(\by_1, \ldots, \by_T)$.
\begin{lemma}[Boosting from constant to small failure probability]
\label{lem:auto-boost}
For any distribution $\mcD$ over $X$, sample size $n$, budget $b$, cost function $\mathrm{cost}$, and threshold $\tau_{\psi} \in [0,1]$ for each $\psi:X \to [0,1]$, suppose that for all analysts $\mcA$ that are $(\mathrm{cost}, b)$-budgeted almost surely,
\begin{equation*}
\Prx_{\bS \sim \mcD^n, \bpsi \sim \mcA(\bS)}[\bpsi(\bS) \geq \tau_{\bpsi}] \leq \frac{1}{100}.
\end{equation*}
Then, for any sample size $N \geq n$, $k = \floor{N/n}$, and all analysts $\mcA'$ that are $(\mathrm{cost}, B \coloneqq bk/100)$-budgeted almost surely,
\begin{equation*}
\Prx_{\bS \sim \mcD^{N}, \bpsi \sim \mcA'(\bS)}\bracket*{\bpsi(\bS) \geq \tau_{\bpsi} + 1/n} \leq \exp(-\Omega(k)).
\end{equation*}
\end{lemma}
At a high level, the proof of \Cref{lem:auto-boost} exploits a classic and well-known technique for boosting success probabilities: Given an algorithm that fails with constant probability, if we run $k$ independent copies, with probability $1 - 2^{-\Omega(k)}$, a large fraction of those copies will succeed. Typically, this technique gives a framework for modifying existing algorithms -- for example, by taking the majority of multiple independent runs -- in order to produce a new algorithm with a small failure probability.
Interestingly, in our setting, no modification to the algorithm is necessary. If we answer subsampling queries in the most natural way, they ``automatically'' boost their own success probability. The key insight is that a single large sample $\bS \sim \mcD^{N \coloneqq nk}$ can be cast as $k$ groups of samples $\bS^{(1)}, \ldots, \bS^{(k)} \iid \mcD^n$, and the output of a query $\by \sim \phi^{(N)}(\bS)$ has the same distribution as that of $\by' \sim \phi^{(n)}(\bS^{(\bi)})$ where $\bi \sim [k]$ is a random group. Using this insight, we are able to prove the following lemma.
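For the $w = 1$ queries used in this section, the grouping insight can be verified exactly; the sketch below (our own sanity check, not from the paper) compares one uniform draw from the combined sample with the two-stage group-then-element draw:

```python
from collections import Counter
from fractions import Fraction

# Sanity check (our own, for the w = 1 case used in this section): drawing one
# element uniformly from S of size N = n*k is distributed identically to first
# drawing a uniform group S^{(i)} of size n and then a uniform element of it.
n, k = 3, 4
S = list(range(n * k))
groups = [S[i * n:(i + 1) * n] for i in range(k)]

direct = Counter({x: Fraction(1, n * k) for x in S})
two_stage = Counter()
for g in groups:
    for x in g:
        two_stage[x] += Fraction(1, k) * Fraction(1, n)
assert direct == two_stage
```
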
\begin{lemma}[It is exponentially unlikely many groups have high error]
\label{lem:many-groups}
In the setting of \Cref{lem:auto-boost}, let $\bS^{(1)}$ be the first $n$ elements of $\bS$, $\bS^{(2)}$ the next $n$, and so on. Then,
\begin{equation*}
\Prx_{\bS \sim \mcD^N, \bpsi \sim \mcA'(\bS)}\bracket*{\sum_{i \in [k]}\Ind[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \geq 0.03k} \leq e^{-k/300}.
\end{equation*}
\end{lemma}
The hypothesis of \Cref{lem:auto-boost} roughly corresponds to $\Pr[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \leq 1/100$. If these events were independent for each $i \in [k]$, then the conclusion of \Cref{lem:many-groups} would follow from a standard Chernoff bound. Unfortunately, they are not necessarily independent. To get around this, in \Cref{lem:direct-product} we extend a direct product theorem of Shaltiel's, showing that, roughly speaking, those events are no worse than independent.
We combine \Cref{lem:many-groups} with the following lemma.
\begin{lemma}
\label{lem:groups-to-overall}
For $N \geq nk$, let $\bS \in X^N$ be drawn from any permutation invariant distribution, meaning $\Pr[\bS = S] = \Pr[\bS = \sigma(S)]$ for any permutation $\sigma$ of the indices. Let $\bS^{(1)}$ be the first $n$ elements of $\bS$, $\bS^{(2)}$ the next $n$, and so on. For any test $\psi:X \to [0,1]$ and threshold $\tau$, let $\bb$ be the random variable counting the number of $i \in [k]$ for which $\psi(\bS^{(i)}) \geq \tau$. Then,
\begin{equation*}
\Pr[\psi(\bS) \geq \tau + 1/n]\leq 200 \Pr[\bb \geq 0.03k].
\end{equation*}
\end{lemma}
\Cref{lem:auto-boost} is a straightforward consequence of the above two lemmas.
\begin{proof}[Proof of \Cref{lem:auto-boost} assuming \Cref{lem:many-groups,lem:groups-to-overall}]
The analyst chooses a test $\psi^{(\by)}$ as a function of the responses $\by \coloneqq \by_1 \sim \phi_1(\bS), \ldots, \by_T \sim \phi_T(\bS)$. Our goal is to show that $\Pr[\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n] \leq 200 \exp(-k/300)$.
Conditioned on any possible sequence of response $\by = y$, the distribution of $\bS$ is permutation invariant. Therefore,
\begin{align*}
\Pr[\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n] &= \Ex_{\by}\bracket*{\Prx_{\bS \mid \by} \bracket*{\psi^{(\by)}(\bS) \geq \tau_{\psi^{(\by)}} + 1/n}}\\
&\leq\Ex_{\by}\bracket*{ 200 \Prx_{\bS}\bracket*{\sum_{i \in [k]} \Ind[\psi^{(\by)}(\bS^{(i)}) \geq \tau_{\psi^{(\by)}} ] \geq 0.03k}} \tag{\Cref{lem:groups-to-overall}} \\
&= 200 \Prx_{\bS \sim \mcD^N, \bpsi \sim \mcA'(\bS)}\bracket*{\sum_{i \in [k]}\Ind[\bpsi(\bS^{(i)}) \geq \tau_{\bpsi}] \geq 0.03k} \\
&\leq 200 e^{-k/300}\tag{\Cref{lem:many-groups}}.
\end{align*}
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{lem:many-groups}}{Lemma 7.2}}
In order to prove \Cref{lem:many-groups}, in \Cref{fig:analyst-game} we will define an ``analyst game'' formalizing the setting in which an analyst has multiple distinct samples, each of which they can query. We then prove a direct product theorem analogous to Shaltiel's direct product theorem for fair decision trees \cite{Sha04}.
\begin{figure}[ht]
\captionsetup{width=.9\linewidth}
\begin{tcolorbox}[colback = white,arc=1mm, boxrule=0.25mm]
\vspace{2pt}
\textbf{Parameters:} A product distribution $\mcD \coloneqq \mcD_1 \times \cdots \times \mcD_k$ over domain $X$, budgets $b \in \R^k$, a class of queries $\Phi$ each mapping $X$ to a distribution of possible responses, function $\mathrm{cost}: \Phi \to \R_{\geq 0}$, and class of tests $\Psi$ each mapping $X$ to $\zo$. \\
\textbf{Setup:} Samples $\bx_1, \ldots, \bx_k \sim \mcD$ are drawn and \emph{not} revealed to the analyst.\\
\textbf{Execution:} The analyst repeats as many times as desired:
\begin{enumerate}[nolistsep,itemsep=2pt]
\item The analyst chooses a query $\phi \in \Phi$ and index $i \in [k]$.
\item The analyst receives the response $\by \sim \phi(\bx_i)$.
\item The budget is decremented: $b_i \leftarrow b_i - \mathrm{cost}(\phi)$.
\end{enumerate}\vspace{4pt}
Afterwards, the analyst chooses tests $\psi_1, \ldots, \psi_k$. The analyst wins if $b_i \geq 0$ and $\psi_i(\bx_i) = 1$ for all $i \in [k]$.
\end{tcolorbox}
\caption{An analyst game.}
\label{fig:analyst-game}
\end{figure}
\begin{lemma}[Direct product theorem for analyst games]
\label{lem:direct-product}
In \Cref{fig:analyst-game}, fix the domain $X$, query class $\Phi$, test class $\Psi$ and cost function $\mathrm{cost}$ for which $\inf_{\phi \in \Phi}(\mathrm{cost}(\phi)) > 0$. For any distributions $\mcD \coloneqq \mcD_1 \times \cdots \times \mcD_k$ and budgets $b \in \R^k$, let $\mathrm{AG}(\mcD, b)$ be the maximum probability an analyst wins the game described in \Cref{fig:analyst-game}. Then,
\begin{equation}
\label{eq:direct-prod}
\mathrm{AG}(\mcD, b) \leq \prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i).
\end{equation}
\end{lemma}
It's straightforward to see that the $(\geq)$ direction of \Cref{eq:direct-prod} also holds, but we will not need that direction in this paper.
\begin{proof}
First note that if, for any $i \in [k]$, $b_i < 0$, then both sides of \Cref{eq:direct-prod} are equal to $0$. We may therefore assume, without loss of generality, that $b_i \geq 0$ for all $i \in [k]$.
Consider an arbitrary analyst, $\mcA$, for distribution $\mcD$ and budget $b$. We will prove that the probability $\mcA$ wins is at most $\prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i)$ by induction on the number of iterations of the loop in \Cref{fig:analyst-game} that $\mcA$ executes.\footnote{Note that since $\inf_{\phi \in \Phi}(\mathrm{cost}(\phi)) > 0$, the number of loop executions is finite.} In the base case, $\mcA$ executes the loop zero times and directly chooses tests. Then, $\mcA$'s probability of winning is upper bounded by
\begin{align*}
\sup_{\psi_1, \ldots, \psi_k \in \Psi} \Prx_{\bx \sim \mcD}\bracket*{ \prod_{i \in [k]} \psi_i(\bx_i)} &= \sup_{\psi_1, \ldots, \psi_k \in \Psi} \prod_{i \in [k]}\Prx_{\bx \sim \mcD}\bracket*{ \psi_i(\bx_i)} \tag{$\mcD$ is product} \\
&= \prod_{i \in [k]}\sup_{\psi_i \in \Psi} \Prx_{\bx_i \sim \mcD_i}\bracket*{ \psi_i(\bx_i)} \leq \prod_{i \in [k]}\mathrm{AG}(\mcD_i, b_i).
\end{align*}
In the case where $\mcA$ executes the loop $\geq 1$ times, let $\phi \in \Phi$ and $i\in[k]$ be the query and group respectively that $\mcA$ chooses in the first iteration of the loop. Using $b' \in \R^k$ as the vector satisfying $b'_i = b_i - \mathrm{cost}(\phi)$ and $b'_j = b_j$ for all $j \neq i$, the success probability of $\mcA$ is upper bounded by
\begin{align*}
\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} &\bracket*{\mathrm{AG}(\mcD \mid \phi(\bx_i) = \by, b')} \\
&\leq \Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\prod_{j \in [k]}\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_j, b'_j)} \tag{inductive hypothesis} \\
& =\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi)) \cdot \prod_{j \neq i} \mathrm{AG}(\mcD_j, b_j)}\\
&= \prod_{j \neq i} \mathrm{AG}(\mcD_j, b_j) \cdot \Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi))}.
\end{align*}
The quantity $\Ex_{\bx \sim \mcD, \by \sim \phi(\bx_i)} \bracket*{\mathrm{AG}((\mcD \mid \phi(\bx_i) = \by)_i, b_i - \mathrm{cost}(\phi))}$ is exactly the win probability on $\mcD_i, b_i$ of the analyst whose first query is $\phi$ and whose remaining strategy is optimal. Therefore, it is upper bounded by $\mathrm{AG}(\mcD_i, b_i)$, and so the win probability of $\mcA$ is upper bounded by $\prod_{i \in [k]} \mathrm{AG}(\mcD_i, b_i)$, as desired.
\end{proof}
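To make the direct product bound concrete, the following Python sketch (illustrative only; the query and test classes are hypothetical choices, not objects from the paper) enumerates a two-group game in which the analyst can afford one query to the first group and none to the second:

```python
from itertools import product

# Toy instance of the analyst game: k = 2 groups, each x_i uniform on {0, 1}.
# The only query reveals x_i exactly at cost 1; budgets are b = (1, 0), so the
# analyst may query group 1 once and group 2 never. Tests: psi_a(x) = 1[x == a].
ag1 = 1.0  # AG(D_1, 1): query x_1, then test psi_{x_1} -- always wins
ag2 = 0.5  # AG(D_2, 0): must commit to a test blindly -- wins half the time

# Adaptive strategy achieving the bound: query x_1, use psi_{x_1} on BOTH
# groups. The test on group 1 always passes, so the analyst wins iff x_2 == x_1.
outcomes = list(product([0, 1], repeat=2))
wins = sum(int(x2 == x1) for x1, x2 in outcomes)
win_prob = wins / len(outcomes)

assert win_prob <= ag1 * ag2  # direct product bound of the lemma
```

Even though the adaptive strategy correlates the two tests, its win probability matches, and cannot exceed, the product of the single-group optima.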
We'll also use the following version of the classic Chernoff bound.
\begin{fact}[Chernoff bound]
\label{fact:chernoff}
Let $\bx_1, \ldots, \bx_k$ be random variables each taking on values in $\zo$ and satisfying, for some $p < 1$ and all $S \subseteq [k]$,
\begin{equation*}
\Pr\bracket*{\bigcap_{i \in S} \set{\bx_i = 1}} \leq p^{|S|}.
\end{equation*}
Then, for any $\delta > 0$,
\begin{equation*}
\Pr\bracket*{\sum_{i \in [k]} \bx_i \geq (1 +\delta)pk} \leq \exp\paren*{-\frac{\delta^2 pk}{ 2+\delta}}.
\end{equation*}
\end{fact}
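As a sanity check (not part of the proof), the bound can be compared against an exact binomial tail in the independent case, where the hypothesis holds with equality; the parameters below mirror the application with $p = 1/100$ and $\delta = 1$:

```python
from math import comb, exp

def binom_tail(k, p, m):
    """Exact Pr[Bin(k, p) >= m]."""
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) for j in range(m, k + 1))

k, p, delta = 300, 1 / 100, 1.0
threshold = round((1 + delta) * p * k)        # (1 + delta) * p * k = 6
exact = binom_tail(k, p, threshold)
bound = exp(-delta**2 * p * k / (2 + delta))  # equals exp(-k/300) here

assert exact <= bound  # the Chernoff bound dominates the exact tail
```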
\begin{proof}[Proof of \Cref{lem:many-groups}]
Let $\bS^{(k+1)}$ be the remaining $N - nk$ elements not in $\bS^{(1)}, \ldots, \bS^{(k)}$. At each time step $t \in [T]$, the analyst asks a query $\phi_t:X \to Y_t$ and receives the response $\by_t \sim \phi_t(\bx_t)$. We can think of $\bx_t$ as being chosen via a two-tiered process: first, a group $\bi_t \in [k+1]$ is chosen, and then $\bx_t$ is chosen uniformly from $\bS^{(\bi_t)}$. We will show that \Cref{lem:many-groups} holds even if the analyst gets to choose $\bi_t$ at each step.
For each group $i \in [k]$, let $\ba_i$ be the indicator that the total cost of queries to $\bS^{(i)}$ is at most $b$ and $\psi(\bS^{(i)}) \geq \tau_{\psi}$. Since there is a total budget of $B = bk/100$, regardless of how the analyst partitions the queries among groups, at most $\frac{k}{100}$ groups will receive queries with total cost $\geq b$. It is therefore sufficient to bound the probability that $\sum_{i \in [k]} \ba_i \geq 0.02k$. Applying \Cref{lem:direct-product} and the hypothesis of \Cref{lem:auto-boost}, for any set of groups $G \subseteq [k]$,
\begin{equation*}
\Pr\bracket*{\bigcap_{i \in G} \set{\ba_i = 1}} \leq \paren*{\frac{1}{100}}^{|G|}.
\end{equation*}
Applying the Chernoff bound in \Cref{fact:chernoff},
\begin{equation*}
\Pr\bracket*{\sum_{i \in [k]} \ba_i \geq 0.02k} \leq \exp\paren*{-\frac{k}{300}}. \qedhere
\end{equation*}
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{lem:groups-to-overall}}{Lemma 7.3}}
We begin by proving the following technical lemma.
\begin{restatable}{lemma}{exceedMean}
\label{lem:sample-exceeds-mean}
For any $S \in [0,1]^N$ and $n < N$, let $\bx$ be sampled by taking the sum of $n$ elements from $S$ chosen uniformly without replacement. Then, if $\Var[\bx]$ is nonzero,
\begin{equation*}
\Pr[\bx > \Ex[\bx]] \geq \frac{2 \sqrt{3} - 3}{12 + \frac{4}{\Var[\bx]}}.
\end{equation*}
\end{restatable}
\Cref{lem:sample-exceeds-mean} implies that one of two things must be true: either $\Var[\bx] \leq 1$, in which case standard techniques give that the median of $\bx$ is within $1$ of the mean, or $\bx$ exceeds its mean with constant probability. To prove \Cref{lem:sample-exceeds-mean}, we use the following inequality connecting the moments of $\bx$ to the probability that it exceeds its mean.
\begin{fact}[\cite{V08}]
\label{fact:moment-to-exceed-mean}
For any mean-$0$ random variable $\bx$ with nonzero variance,
\begin{equation*}
\Pr[\bx > 0] \geq \frac{2\sqrt{3} - 3}{\frac{\Ex[\bx^4]}{\Ex[\bx^2]^2}}.
\end{equation*}
\end{fact}
Hence, to prove \Cref{lem:sample-exceeds-mean}, it suffices to estimate the moments of $\bx$. The moments of $\bx$ can be explicitly computed as a function of the elements in $S$, but this results in a rather large number of terms. To make the arithmetic simpler, we instead compare to the scenario where the sample was performed \emph{with replacement}.
\begin{fact}[\cite{hoeffding1994probability}]
\label{fact:hoe-replacement}
For any set $S \in \R^N$ and $n \leq N$, let $\bx$ be the sum of $n$ elements sampled uniformly \emph{without replacement} from $S$, and $\by$ be the sum of $n$ elements sampled \emph{with replacement} from $S$. Then, for any convex $f$,
\begin{equation*}
\Ex[f(\bx)]\leq\Ex[f(\by)].
\end{equation*}
\end{fact}
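\Cref{fact:hoe-replacement} can be checked exhaustively for a small set; the sketch below (with an arbitrary illustrative $S$ and the convex test $f(t) = t^4$) compares the two sampling schemes by direct enumeration:

```python
from itertools import combinations, product

S = [0.0, 0.2, 0.5, 0.9, 1.0]  # illustrative set; any real values work
n = 3

def f(t):
    return t ** 4  # a convex function

# E[f(x)]: x sums an n-subset drawn uniformly without replacement.
without = [f(sum(c)) for c in combinations(S, n)]
e_without = sum(without) / len(without)

# E[f(y)]: y sums n independent uniform draws (with replacement).
with_rep = [f(sum(t)) for t in product(S, repeat=n)]
e_with = sum(with_rep) / len(with_rep)

assert e_without <= e_with  # without-replacement sums are less spread out
```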
\begin{corollary}
\label{cor:reasonable-replacement}
For any set $S \in \R^N$ whose elements sum to $0$, let $\bx$ and $\by$ be as in \Cref{fact:hoe-replacement}. Then,
\begin{equation*}
\frac{\Ex[\bx^4]}{\Ex[\bx^2]^2} \leq \paren*{\frac{N-1}{N-n}}^2 \cdot \frac{\Ex[\by^4]}{\Ex[\by^2]^2}.
\end{equation*}
\end{corollary}
\begin{proof}
By \Cref{fact:hoe-replacement}, $\Ex[\bx^4] \leq \Ex[\by^4]$. Therefore it suffices to show that $\Ex[\bx^2]= \Ex[\by^2] \cdot \frac{N-n}{N-1}$.
Let $\bx_1, \ldots, \bx_n$ and $\by_1, \ldots, \by_n$ be the $n$ elements of $S$ chosen without replacement and with replacement respectively. Then, since the $\by_i$ are mean-$0$ and independent, for $i \neq j \in [n]$, $\Ex[\by_i \by_j] = 0$. Meanwhile, using the fact that the sum of elements in $S$ is $0$,
\begin{equation*}
\Ex[\bx_i \bx_j] = \frac{1}{N(N-1)}\sum_{a \in [N]}\sum_{b \in [N] \setminus \set{a}}S_a S_b = \frac{1}{N(N-1)}\sum_{a \in [N]} S_a \paren*{\sum_{b \in [N]}S_b - S_a} = -\frac{1}{N(N-1)} \sum_{a \in [N]}S_a^2.
\end{equation*}
Furthermore, we have for any $i \in [n]$, that
\begin{equation*}
\Ex[\bx_i^2] = \Ex[\by_i^2] = \frac{1}{N} \sum_{a \in [N]} S_a^2.
\end{equation*}
Next, we can compute the second moment of $\by$,
\begin{equation*}
\Ex[\by^2] = \sum_{i \in [n]}\Ex[\by_i^2] = \frac{n}{N}\sum_{a \in [N]} S_a^2.
\end{equation*}
For $\bx$,
\begin{equation*}
\Ex[\bx^2] = \sum_{i \in [n]}\Ex[\bx_i^2] + \sum_{i \neq j} \Ex[\bx_i \bx_j] = \frac{n}{N}\sum_{a \in [N]} S_a^2 - \frac{n(n-1)}{N(N-1)}\sum_{a \in [N]} S_a^2 = \frac{n}{N} \paren*{1 - \frac{n-1}{N-1}}\sum_{a \in [N]} S_a^2.
\end{equation*}
Comparing the above gives the desired bound.
\end{proof}
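The second-moment identity $\Ex[\bx^2] = \Ex[\by^2] \cdot \frac{N-n}{N-1}$ used above can be verified by direct enumeration for a small centered set (the particular values below are illustrative):

```python
from itertools import combinations, product

S = [-0.6, -0.3, 0.1, 0.8]  # illustrative set whose elements sum to 0
N, n = len(S), 2
assert abs(sum(S)) < 1e-12

# E[x^2]: x is the sum of an n-subset drawn uniformly without replacement.
subsets = list(combinations(S, n))
ex2 = sum(sum(c) ** 2 for c in subsets) / len(subsets)

# E[y^2]: y is the sum of n independent uniform draws (with replacement).
tuples = list(product(S, repeat=n))
ey2 = sum(sum(t) ** 2 for t in tuples) / len(tuples)

assert abs(ex2 - ey2 * (N - n) / (N - 1)) < 1e-9  # the claimed relation
```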
Next, we bound the desired moment ratio in the setting where the elements are sampled \emph{with} replacement.
\begin{proposition}
\label{prop:moment-ratio}
Let $\by_1, \ldots, \by_n$ be iid mean-$0$ variables, each bounded on $[-1,1]$. Then, for $\by \coloneqq \sum_{i \in [n]}\by_i$,
\begin{equation*}
\frac{\Ex[\by^4]}{\Ex[\by^2]^2} \leq 3 + \frac{1}{\Var[\by]}.
\end{equation*}
\end{proposition}
\begin{proof}
We'll denote $\sigma^2 \coloneqq \Ex[\by_i^2]$. Since $\by_i$ is bounded on $[-1,1]$, we further have that $\Ex[\by_i^4] \leq \Ex[\by_i^2] = \sigma^2$. Then,
\begin{equation*}
\Ex[\by^2] = n \sigma^2.
\end{equation*}
Expanding $\Ex[\by^4]$ gives many terms. Luckily, since the $\by_i$ are each independent and mean-$0$, most of those terms cancel. We are left with,
\begin{equation*}
\Ex[\by^4] = n\Ex[\by_i^4] + 3n(n-1)(\sigma^2)^2 \leq n\sigma^2 + 3n^2\sigma^4.
\end{equation*}
This gives
\begin{equation*}
\frac{\Ex[\by^4]}{\Ex[\by^2]^2} \leq \frac{n\sigma^2 + 3n^2 \sigma^4}{n^2\sigma^4} = 3 + \frac{1}{n\sigma^2}.
\end{equation*}
The desired result follows from $\Var[\by] = n\sigma^2$.
\end{proof}
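For Rademacher summands the ratio in \Cref{prop:moment-ratio} has the closed form $3 - 2/n$, comfortably below the bound $3 + 1/n$; a quick exhaustive check (an illustrative sketch, not from the paper):

```python
from itertools import product

n = 6
tuples = list(product([-1, 1], repeat=n))  # Rademacher y_i: sigma^2 = 1
ey2 = sum(sum(t) ** 2 for t in tuples) / len(tuples)  # = n = Var[y]
ey4 = sum(sum(t) ** 4 for t in tuples) / len(tuples)  # = 3n^2 - 2n

ratio = ey4 / ey2 ** 2
assert abs(ratio - (3 - 2 / n)) < 1e-9  # closed form for Rademacher sums
assert ratio <= 3 + 1 / ey2             # the proposition's bound
```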
Finally, we complete the proof of \Cref{lem:sample-exceeds-mean}.
\begin{proof}
Let $\mu \coloneqq \frac{1}{N} \sum_{x \in S} x$, and $S'$ be a copy of $S$ with each element shifted by $-\mu$. Clearly, each element of $S'$ is bounded on $[-1,1]$ and the elements sum to $0$. It suffices to bound $\Pr[\bx' > 0]$ for $\bx'$ being the sum of $n$ uniform elements chosen without replacement from $S'$. Furthermore, without loss of generality, we may assume that $n \leq \frac{N}{2}$: if $n > N/2$, we may instead consider the sum of the elements \emph{not} sampled, since the elements of $S'$ sum to $0$ and so $\bx' > 0$ iff the sum of the elements not sampled is negative.
Let $\by'$ be the sum of $n$ elements chosen uniformly \emph{with replacement} from $S'$. Then,
\begin{align*}
\Pr[\bx > \Ex[\bx]] &= \Pr[\bx' > 0] \\
&\geq \frac{2\sqrt{3} - 3}{\frac{\Ex[\bx'^4]}{\Ex[\bx'^2]^2}} \tag{\Cref{fact:moment-to-exceed-mean}} \\
& \geq \frac{2\sqrt{3} - 3}{\paren*{\frac{N-1}{N-n}}^2\frac{\Ex[\by'^4]}{\Ex[\by'^2]^2}} \tag{\Cref{cor:reasonable-replacement}}\\
& \geq \frac{2\sqrt{3} - 3}{4\frac{\Ex[\by'^4]}{\Ex[\by'^2]^2}} \tag{$n \leq N/2$}\\
& \geq \frac{2\sqrt{3} - 3}{4\paren*{3 + \frac{1}{\Var[\by']}}} \tag{\Cref{prop:moment-ratio}}\\
& \geq \frac{2\sqrt{3} - 3}{4\paren*{3 + \frac{1}{\Var[\bx]}}} \tag{\Cref{fact:hoe-replacement}}
\end{align*}
\end{proof}
We'll use the following one-sided variant of Chebyshev's inequality.
\begin{fact}[Cantelli's inequality]
\label{fact:cantelli}
Let $\bx$ be a random variable with variance $\sigma^2$. Then,
\begin{equation*}
\Pr[\bx - \Ex[\bx] \geq \eps \sigma] \leq \frac{1}{1 + \eps^2}.
\end{equation*}
\end{fact}
As an easy consequence of the above two results, we obtain the following.
\begin{corollary}
\label{cor:sample-exceeds-mean}
For $\bx$ as in \Cref{lem:sample-exceeds-mean},
\begin{equation*}
\Pr[\bx > \Ex[\bx] - 1] \geq \frac{2 \sqrt{3} - 3}{13} > 0.0357.
\end{equation*}
\end{corollary}
\begin{proof}
If $\Var[\bx] \geq 4$, the desired result follows from \Cref{lem:sample-exceeds-mean}. Otherwise, $\sigma^2 < 4$, and applying \Cref{fact:cantelli} to $-\bx$ with $\eps\sigma = 1$ gives $\Pr[\bx \leq \Ex[\bx] - 1] \leq 4/5$, so $\Pr[\bx > \Ex[\bx] - 1] \geq 1/5 > 0.0357$.
\end{proof}
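\Cref{cor:sample-exceeds-mean} can also be checked exhaustively for a small $S \in [0,1]^N$ (an illustrative sketch; the particular values are arbitrary):

```python
from itertools import combinations

S = [0.0, 0.1, 0.4, 0.7, 0.9, 1.0]  # arbitrary elements of [0, 1]
N, n = len(S), 3

subsets = list(combinations(S, n))
mean = n * sum(S) / N  # E[x] for the sum of an n-subset
prob = sum(1 for c in subsets if sum(c) > mean - 1) / len(subsets)

assert prob >= (2 * 3 ** 0.5 - 3) / 13  # ~ 0.0357; far from tight here
```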
\begin{proof}[Proof of \Cref{lem:groups-to-overall}]
We'll use $\ba$ as shorthand for $\Ind[\psi(\bS) \geq \tau + 1/n]$. Since $\Pr[\bb \geq 0.03k] \geq \Pr[\bb \geq 0.03k \mid \ba] \cdot \Pr[\ba]$,
\begin{equation*}
\Pr[\ba] \leq \frac{\Pr[\bb \geq 0.03k]}{\Pr[\bb \geq 0.03k \mid \ba]}.
\end{equation*}
Therefore, it suffices to show that $\Pr[\bb \geq 0.03k \mid \ba] \geq \frac{1}{200}$. By \Cref{cor:sample-exceeds-mean}, for any $i \in [k]$,
\begin{equation*}
\Pr[\psi(\bS^{(i)}) \geq \tau \mid \ba] > 0.0357.
\end{equation*}
Using linearity of expectation,
\begin{equation*}
\Ex[\bb \mid \ba] > 0.0357k.
\end{equation*}
The random variable $k - \bb$ is nonnegative and satisfies
\begin{equation*}
\Ex[k - \bb \mid \ba] < k -0.0357k = 0.9643k.
\end{equation*}
Therefore, by Markov's inequality
\begin{equation*}
\Pr[k - \bb \geq 0.97k \mid \ba] \leq \frac{0.9643k}{0.97k} < 0.995.
\end{equation*}
Equivalently, $\bb \geq 0.03k$ with probability at least $0.005$ conditioned on $\psi(\bS) \geq\tau + 1/n$, which is exactly what we wished to show.
\end{proof}
\subsection{Proof of \texorpdfstring{\Cref{thm:high-probability}}{Theorem 4} and the second part of \texorpdfstring{\Cref{thm:main-noisy}}{Theorem 5}}
\begin{proof}
We'll prove the second part of \Cref{thm:main-noisy}, as \Cref{thm:high-probability} follows from it. Let $k = O(\log(1/\delta))$. First, we note that if $\mcA$ is $(\mathrm{cost}_{n,\delta}, b)$-budgeted almost surely (according to \Cref{eq:cost-noisy-hp}), then it is $(\mathrm{cost}_{n/k}, O(bk))$-budgeted almost surely (according to \Cref{eq:cost-noisy-exp}). For each test function $\psi:X^1 \to [0,1]$, define the threshold to be
\begin{equation*}
\tau_{\psi} \coloneqq \psi(\mcD) + O\paren*{\max\paren*{\frac{k(b+1)}{n}, \sqrt{\frac{k(b+1)}{n} \cdot \Var_{\mcD}(\psi)}}}.
\end{equation*}
By the first part of \Cref{thm:main-noisy} and Markov's inequality, for any analyst $\mcA$ that is $(\mathrm{cost}_{n/k}, O(b))$-budgeted,
\begin{equation*}
\Prx_{\bS \sim \mcD^n, \bpsi \sim \mcA(\bS)}[\bpsi(\bS) \geq \tau_{\bpsi}] \leq \frac{1}{100}.
\end{equation*}
By \Cref{lem:auto-boost}, for any analyst $\mcA'$ that is $(\mathrm{cost}_{n/k}, O(bk))$-budgeted almost surely, or equivalently, is $(\mathrm{cost}_{n,\delta}, b)$-budgeted almost surely,
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\bpsi(\bS) \geq \tau_{\bpsi} + k/n} \leq \exp(-\Omega(k)) \leq \delta/2.
\end{equation*}
Therefore, by a union bound applied to both $\psi$ and $-\psi$,
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\abs{\bpsi(\bS) - \bpsi(\mcD)} > O\paren*{\max\paren*{\frac{k(b+1)}{n}, \sqrt{\frac{k(b+1)}{n} \cdot \Var_{\mcD}(\bpsi)}}} + k/n} \leq \delta.
\end{equation*}
Equivalently, substituting in the definition of error in \Cref{def:error-simple} and $k \coloneqq O(\log(1/\delta))$
\begin{equation*}
\Prx_{\bS \sim \mcD^{n}, \bpsi \sim \mcA'(\bS)}\bracket*{\mathrm{error}(\bpsi, \bS, \Delta)\geq O\paren*{\log(1/\delta)\cdot\paren*{\frac{b + 1}{n}}}} \leq \delta. \qedhere
\end{equation*}
\end{proof}
\section{Bounding ALKL-stability from \texorpdfstring{$\chi^2$}{chi-squared}-stability}
\label{sec:stability}
The starting point of our analysis is the work of \cite{FS18}. They introduce a notion of \emph{algorithmic stability}, measuring how much the output of the algorithm depends on its input.
\begin{definition}[Average leave-one-out KL stability, \Cref{def:ALKL-first} restated, \cite{FS18}]
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable with respect to a randomized algorithm $\mcM': X^{n-1} \to Y$ if, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps.
\end{equation*}
We simply say that $\mcM$ is $\eps$-ALKL stable if there exists some $\mcM'$ with respect to which it is $\eps$-ALKL stable.
\end{definition}
Intuitively, the output of a mechanism that is $\eps$-ALKL stable cannot depend too much on individual points in the sample. Feldman and Steinke formalized this intuition to show that, if each step of an adaptive mechanism is ALKL stable, it does not reveal too much information about its input.
\begin{fact}[ALKL stability bounds mutual information \cite{FS18}]
\label{fact:KL-MI}
If $\mcM:X^n \to Y$ is $\eps$-ALKL stable, then for $\bS \sim \mcD^n$ and $\by \sim \mcM(\bS)$,
\begin{equation*}
I(\bS; \by) \leq n\eps.
\end{equation*}
Furthermore, if $\mcM_1:X^n \to Y_1$ is $\eps_1$-ALKL stable, and for each $y_1 \in Y_1$, $\mcM_2^{y_1}:X^n \to Y_2$ is $\eps_2$-ALKL stable, then the randomized algorithm that takes as input a sample $S$, draws $\by_1 \sim \mcM_1(S)$, $\by_2 \sim \mcM_2^{\by_1}(S)$, and outputs $\by = (\by_1, \by_2)$ is $(\eps_1 + \eps_2)$-ALKL stable.
\end{fact}
The second component of \Cref{fact:KL-MI} says that ALKL stability composes adaptively: to bound the ALKL stability of a mechanism, it is sufficient to bound the ALKL stability of each individual step. Taken together with the first part of \Cref{fact:KL-MI}, that ALKL stability upper bounds mutual information, it gives a convenient way to upper bound the mutual information of a mechanism $\mcM$ and its input.
In this work, we will not directly bound the ALKL stability of subsampling queries. Instead, we find it easier to first introduce the following intermediate notion of stability.
\begin{definition}[Average leave-one-out $\chi^2$ stability, \Cref{def:chi-first} restated]
\label{def:chi-stability}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable with respect to a randomized algorithm $\mcM': X^{n-1} \to Y$ if, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps.
\end{equation*}
We simply say that $\mcM$ is $\eps$-$\chi^2$ stable if there exists some $\mcM'$ with respect to which it is $\eps$-$\chi^2$ stable.
\end{definition}
The key technical advantage of $\chi^2$ stability is its quadratic nature. This makes it more amenable to the techniques of linear algebra used in \Cref{sec:MI}. Interestingly, $\chi^2$ stability does not appear to directly satisfy the same notion of adaptive composability satisfied by ALKL stability. However, we are able to show that it implies ALKL stability, and can therefore take advantage of ALKL stability's adaptive composition properties.
\begin{theorem}[$\chi^2$ to ALKL stability]
\label{thm:chi-to-KL}
If $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable with respect to $\mcM': X^{n-1} \to Y$,
\begin{enumerate}
\item It is $\eps'$-ALKL stable for
\begin{equation*}
\eps' \coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps)).
\end{equation*}
Note that this is with respect to a \emph{different} randomized algorithm than $\mcM'$.\footnote{Indeed, without further assumptions, there may not exist any finite $\eps'$ for which $\mcM$ is $\eps'$-ALKL stable with respect to $\mcM'$.}
\item If for some $\tau \in (0,1]$, all $S \in X^n$, $i \in [n]$, and $y \in Y$, $\mcM'(S_{-i})(y) \geq \tau \cdot \mcM(S)(y)$, then $\mcM$ is $\eps'$-ALKL stable with respect to $\mcM'$ for
\begin{equation}
\eps' \coloneqq \eps \cdot \paren*{1 + \log(1/\tau)}. \label{eq:kl-bound-ratio}
\end{equation}
\end{enumerate}
\end{theorem}
The proof of \Cref{thm:chi-to-KL} uses the following two technical propositions relating KL and $\chi^2$-divergences.
\begin{proposition}
\label{prop:chi-to-KL}
For any distributions $\mcD$, $\mcE$ supported on $Y$, if for some $\tau \in (0,1]$ and all $y \in Y$, $\mcE(y) \geq \tau \cdot \mcD(y)$, then
\begin{equation*}
\KL{\mcD}{\mcE} \leq (1 + \log(1/\tau)) \cdot \chisq{\mcD}{\mcE}.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\by$ be drawn $\by \sim \mcD$ and $\bt$ be the random variable $\bt \coloneqq \frac{\mcE(\by)}{\mcD(\by)}$. Then, we can write KL-divergence and $\chi^2$-divergence as
\begin{equation*}
\KL{\mcD}{\mcE} \coloneqq \Ex\bracket*{-\log \bt} \quad\quad\quad\text{and}\quad\quad\quad \chisq{\mcD}{\mcE} \coloneqq \Ex\bracket*{(\bt-1)^2}.
\end{equation*}
Note that
\begin{equation*}
\Ex[\bt] = \sum_{y \in Y} \mcD(y) \cdot \frac{\mcE(y)}{\mcD(y)} = \sum_{y \in Y} \mcE(y) = 1.
\end{equation*}
Therefore, we can equivalently write KL-divergence in the following form
\begin{equation*}
\KL{\mcD}{\mcE} = \Ex\bracket*{-\log \bt + (\bt - 1)}.
\end{equation*}
We claim that for all $t > 0$,
\begin{equation}
\label{eq:t-log-bound}
-\log t + (t - 1) \leq \begin{cases}
\frac{1}{2}(t-1)^2 & \text{if }t \geq 1 \\
(1 - \log t) \cdot (t - 1)^2 &\text{if }0 < t < 1.
\end{cases}
\end{equation}
For $t \geq 1$, consider the function $f(t) \coloneqq \frac{1}{2}(t-1)^2 - \paren*{-\log t + (t - 1)}$. It satisfies $f(1) = 0$ and $f'(t) = (t-1)^2/t$, which is nonnegative for $t > 0$. Therefore, $f(t) \geq 0$ for all $t \geq 1$, which is sufficient for the first case in \Cref{eq:t-log-bound}.
For the second case of $t \in (0,1)$, consider the function $g(t) \coloneqq (1 - \log t) \cdot (t - 1)^2 - \paren*{-\log t + (t - 1)}$. It satisfies $g(1) = 0$ and $g'(t) = (t-1)(1 - 2\log t)$. That derivative is nonpositive for $t \in (0,1)$, so $g(t) \geq 0$ for all $t \in (0,1)$, proving the second case of \Cref{eq:t-log-bound}.
Finally, we use the fact that for all $t$ in the support of $\bt$, $t \geq \tau$ where $\tau \in (0,1]$. This gives
\begin{align*}
\KL{\mcD}{\mcE} &= \Ex\bracket*{-\log \bt + (\bt - 1)} \\
&\leq \Ex\bracket*{(1 - \log \tau )\cdot(\bt - 1)^2} \\
&= (1 + \log(1/\tau)) \cdot \chisq{\mcD}{\mcE}. \qedhere
\end{align*}
\end{proof}
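A randomized check of \Cref{prop:chi-to-KL} (an illustrative sketch with arbitrary distributions; the mixture construction below is just one way to enforce the density-ratio condition $\mcE(y) \geq \tau \cdot \mcD(y)$):

```python
import random
from math import log

def normalize(w):
    s = sum(w)
    return [p / s for p in w]

def kl(d, e):
    """KL(D || E) = sum_y D(y) log(D(y) / E(y))."""
    return sum(p * log(p / q) for p, q in zip(d, e) if p > 0)

def rev_chi2(d, e):
    """Reversed chi^2 divergence: E_{y ~ D}[(E(y)/D(y) - 1)^2]."""
    return sum(p * (q / p - 1) ** 2 for p, q in zip(d, e) if p > 0)

random.seed(0)
tau = 0.25
for _ in range(1000):
    d = normalize([random.random() for _ in range(5)])
    u = normalize([random.random() for _ in range(5)])
    # Mixing tau * D into E guarantees E(y) >= tau * D(y) for every y.
    e = [tau * p + (1 - tau) * q for p, q in zip(d, u)]
    assert kl(d, e) <= (1 + log(1 / tau)) * rev_chi2(d, e) + 1e-12
```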
\begin{proposition}
\label{prop:chi-to-KL-mix}
For any distributions $\mcD$, $\mcE$ supported on $Y$ and $\tau \in (0,1]$, let $\mcE'$ be the mixture distribution $\mcE' = (1 - \tau)\cdot\mcE + \tau \cdot \mathrm{Unif}(Y)$. Then,
\begin{equation*}
\KL{\mcD}{\mcE'} \leq (1 + \log(|Y|/\tau)) \cdot (\chisq{\mcD}{\mcE} + \tau) + \tau.
\end{equation*}
\end{proposition}
\begin{proof}
As in the proof of \Cref{prop:chi-to-KL} let $\by$ be drawn $\by \sim \mcD$ and $\bt$ be the random variable $\bt \coloneqq \frac{\mcE(\by)}{\mcD(\by)}$. We also define
\begin{equation*}
\bt' \coloneqq \frac{\mcE'(\by)}{\mcD(\by)} = \frac{(1 - \tau)\cdot \mcE(\by) + \frac{\tau}{|Y|}}{\mcD(\by)} = (1 - \tau)\cdot\bt + \frac{\tau}{|Y|\cdot\mcD(\by)}.
\end{equation*}
Let $f(t) \coloneqq -\log t + t - 1$. We claim that, for all $t,\Delta > 0$,
\begin{equation}
\label{eq:t-delta-bound}
f(t + \Delta) \leq (1 - \log \min(1,\Delta))\cdot (t-1)^2 + \Delta.
\end{equation}
The proof of \Cref{eq:t-delta-bound} is separated into three cases.
\begin{enumerate}
\item[\textbf{Case 1:}] $t \geq 1$. Here, we use that $f'(t) \leq 1$ which means that $f(t +\Delta) \leq f(t) + \Delta$. The desired bound follows from \Cref{eq:t-log-bound} and $(1 - \log \min(1,\Delta)) \geq \frac{1}{2}$.
\item[\textbf{Case 2:}] $t < 1$ and $t + \Delta \geq 1$. Once again using $f'(t) \leq 1$, we have that $f(t + \Delta) \leq f(1) + (t + \Delta - 1)$. The desired bound follows from $f(1) = 0$ and $(t + \Delta - 1) \leq \Delta$.
\item[\textbf{Case 3:}] $t + \Delta < 1$. Then,
\begin{align*}
f(t + \Delta) &\leq (1 - \log(t + \Delta))\cdot(t + \Delta - 1)^2 \tag{\Cref{eq:t-log-bound}} \\
&\leq (1 - \log(t + \Delta))\cdot(t - 1)^2 \tag{$t + \Delta < 1$} \\
& \leq (1 - \log \Delta)(t- 1)^2.
\end{align*}
\end{enumerate}
Applying \Cref{eq:t-delta-bound} and using the fact that $\mcD(\by) \leq 1$,
\begin{equation*}
f(\bt') = f\paren*{(1 - \tau)\cdot\bt + \frac{\tau}{|Y|\cdot\mcD(\by)}} \leq \paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((1 - \tau)\cdot \bt - 1)^2 + \frac{\tau}{|Y|\cdot\mcD(\by)}.
\end{equation*}
For any $c \in [0,1]$ and $t \in \R$,
\begin{equation}
\label{eq:quadratic-scaling}
(ct - 1)^2 - (t-1)^2 = (c^2 - 1)\paren*{t - \frac{1}{c+1}}^2 + \frac{1-c}{1+c} \leq \frac{1-c}{1+c} \leq 1-c.
\end{equation}
Finally, we bound,
\begin{align*}
\KL{\mcD}{\mcE'} &= \Ex[f(\bt')] \leq \Ex\bracket*{\paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((1 - \tau)\cdot \bt - 1)^2 + \frac{\tau}{|Y|\cdot\mcD(\by)}} \\
&\leq \Ex\bracket*{\paren*{1 - \log\paren*{\frac{\tau}{|Y|}}} \cdot((\bt - 1)^2 + \tau) + \frac{\tau}{|Y|\cdot\mcD(\by)}} \tag{\Cref{eq:quadratic-scaling}} \\
&= \paren*{1 + \log\paren*{\frac{|Y|}{\tau}}} \cdot \paren*{\tau + \chisq{\mcD}{\mcE}} + \sum_{y \in Y} \mcD(y) \cdot \frac{\tau}{|Y|\cdot\mcD(y)} \\
&= \paren*{1 + \log\paren*{\frac{|Y|}{\tau}}} \cdot \paren*{\tau + \chisq{\mcD}{\mcE}} + \tau. \qedhere
\end{align*}
\end{proof}
We conclude this section with a proof of \Cref{thm:chi-to-KL}.
\begin{proof}
We begin with the first case, aiming to prove that there is a mechanism $\mcM''$ with respect to which $\mcM$ is $\eps'$-ALKL stable for $\eps' \coloneqq \eps \cdot(3 + 2 \log(|Y|/\eps))$. Given a sample $S_{-i} \in X^{n-1}$, $\mcM''$ runs $\by \sim \mcM'(S_{-i})$. With probability $(1 - \eps)$, it outputs $\by$. Otherwise, with probability $\eps$, it outputs a draw from $\mathrm{Unif}(Y)$. Then, for any $S \in X^n$,
\begin{align*}
\Ex_{\bi \sim [n]} &\bracket*{\KLbig{\mcM(S)}{\mcM''(S_{-\bi})}} \\
&= \Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{(1-\eps)\cdot \mcM'(S_{-\bi}) + \eps \cdot \mathrm{Unif}(Y)}} \\
&\leq \Ex_{\bi \sim [n]} \bracket*{(1 + \log(|Y|/\eps)) \cdot (\chisq{\mcM(S)}{\mcM'(S_{-\bi})} + \eps) + \eps} \tag{\Cref{prop:chi-to-KL-mix}}\\
&= (1 + \log(|Y|/\eps)) \cdot \paren*{\Ex_{\bi \sim [n]} \bracket*{\chisq{\mcM(S)}{\mcM'(S_{-\bi})}} + \eps} + \eps \tag{Linearity of expectation}\\
&\leq (1 + \log(|Y|/\eps)) \cdot \paren*{\eps + \eps} + \eps \tag{$\mcM$ is $\eps$-$\chi^2$ stable w.r.t. $\mcM'$} \\
&=\eps \cdot(3 + 2 \log(|Y|/\eps))= \eps'.
\end{align*}
The second case is similar, but we apply \Cref{prop:chi-to-KL} instead of \Cref{prop:chi-to-KL-mix}. Letting $\tau$ be as defined in the statement of \Cref{thm:chi-to-KL},
\begin{align*}
\Ex_{\bi \sim [n]} &\bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \\
&\leq \Ex_{\bi \sim [n]} \bracket*{(1 + \log(1/\tau)) \cdot \chisq{\mcM(S)}{\mcM'(S_{-\bi})} } \tag{\Cref{prop:chi-to-KL}}\\
&= (1 + \log(1/\tau)) \cdot\Ex_{\bi \sim [n]} \bracket*{\chisq{\mcM(S)}{\mcM'(S_{-\bi})}} \tag{Linearity of expectation}\\
&\leq (1 + \log(1/\tau)) \cdot \eps \tag{$\mcM$ is $\eps$-$\chi^2$ stable w.r.t. $\mcM'$} \\
&=\eps'.
\end{align*}
\end{proof}
\section{Technical Overview}
\label{sec:technical-overview}
We consider the entire transcript of interaction between the analyst and the random sample $\bS \sim \mcD^n$. This transcript, denoted $\by$, records the history of queries $\phi_1, \ldots, \phi_T$ asked by the analyst, as well as the responses $\by_t \sim \phi_t(\bS)$ for each $t \in [T]$. The bulk of the work in proving \Cref{thm:main-binary} and its generalization given in \Cref{thm:main-noisy} is the following mutual information bound.
\begin{theorem}
\label{thm:MI-informal}
In the settings of \Cref{thm:main-binary,thm:main-noisy}, let $\by$ be the transcript of interaction between the analyst and sample $\bS$. Then, the mutual information of $\bS$ and $\by$ is at most $O(nb)$.
\end{theorem}
The starting point for \Cref{thm:MI-informal} is the work of Feldman and Steinke \cite{FS18}. They introduce the following notion of algorithmic stability.
\begin{definition}[Average leave-one-out KL stability, \cite{FS18}]
\label{def:ALKL-first}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-ALKL stable if there is some randomized algorithm $\mcM': X^{n-1} \to Y$ such that, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\KLbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps
\end{equation*}
where $\KL{\cdot}{\cdot}$ is the KL-divergence and $S_{-i} \in X^{n-1}$ refers to all but the $i^{\text{th}}$ point of $S$.
\end{definition}
Intuitively, a mechanism that is ALKL stable cannot depend too much on any single point in the sample. Feldman and Steinke show that if a transcript $\by$ is produced via the adaptive composition of $T$ queries, each of which is individually $\eps$-ALKL stable, then the mutual information between $\by$ and $\bS$ is at most $nT\eps$. Hence, our proof of \Cref{thm:MI-informal} proceeds by showing that a subsampling query $\phi^{(n)}$ is $(O(\mathrm{cost}_n(\phi)))$-ALKL stable.
The most natural candidate for $\mcM'$ in \Cref{def:ALKL-first} is $\phi^{(n-1)}$. Unfortunately, it's not hard to construct a query $\phi$, sample $S \in X^n$, and $i \in [n]$ for which $\KL{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-i})}$ is infinite, and so this candidate does not work. For example, if $X = \zo$, $\phi:X^1 \to \zo$ is the identity function, and $S = \{1,0,\ldots, 0\}$, the support of $\phi^{(n)}(S)$ is $\zo$, whereas for $i$ the index of the single $1$, the support of $\phi^{(n-1)}(S_{-i})$ is only $\{0\}$, leading to an infinite KL-divergence. To get around this, we define the following alternative notion of stability.
\begin{definition}[Average leave-one-out $\chi^2$ stability]
\label{def:chi-first}
A randomized algorithm $\mcM: X^n \to Y$ is $\eps$-$\chi^2$ stable if there is some randomized algorithm $\mcM': X^{n-1} \to Y$ such that, for all samples $S \in X^n$,
\begin{equation*}
\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\mcM(S)}{\mcM'(S_{-\bi})}} \leq \eps
\end{equation*}
where $\chisqbig{\mcD}{\mcE}$ is the reversed $\chi^2$ divergence, which is infinite if $\supp(\mcE) \not \subseteq \supp(\mcD)$ and otherwise equal to $\Ex_{\bx \sim \mcD}\bracket*{\frac{(\mcD(\bx) - \mcE(\bx))^2}{\mcD(\bx)^2}}$.
\end{definition}
Whereas $\KL{\mcD}{\mcE}$ is finite iff $\supp(\mcD) \subseteq \supp(\mcE)$, $\chisq{\mcD}{\mcE}$ is finite iff $\supp(\mcE) \subseteq \supp(\mcD)$. As a result $\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-i})}$ is guaranteed to be finite. Furthermore, given the quadratic nature of $\chi^2$-divergence, we are able to use the techniques of linear algebra to give a bound on $\Ex_{\bi \sim [n]} \bracket*{\chisqbig{\phi^{(n)}(S)}{\phi^{(n-1)}(S_{-\bi})}}$. That bound is given in \Cref{sec:MI}.
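The identity-query example above can be made concrete (a short sketch; the two output distributions are written out directly rather than computed from $\phi$): with $S = \{1, 0, \ldots, 0\}$ and $i$ the index of the $1$, $\phi^{(n)}(S)$ outputs $1$ with probability $1/n$ while $\phi^{(n-1)}(S_{-i})$ always outputs $0$, so the KL-divergence is infinite but the reversed $\chi^2$-divergence is small:

```python
from math import inf, log

n = 10
d = [1 - 1 / n, 1 / n]  # phi^(n)(S):       Pr[0] = 1 - 1/n, Pr[1] = 1/n
e = [1.0, 0.0]          # phi^(n-1)(S_{-i}): always outputs 0

def kl(d, e):
    """KL(D || E); infinite when supp(D) is not contained in supp(E)."""
    return sum(p * log(p / q) if q > 0 else inf
               for p, q in zip(d, e) if p > 0)

def rev_chi2(d, e):
    """Reversed chi^2; finite whenever supp(E) is contained in supp(D)."""
    return sum(p * (q / p - 1) ** 2 for p, q in zip(d, e) if p > 0)

assert kl(d, e) == inf       # KL blows up on the extra support point
assert rev_chi2(d, e) < inf  # reversed chi^2 stays finite
```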
We are further able to show, in \Cref{sec:stability}, a surprising connection between ALKL stability and $\chi^2$ stability: If $\mcM$ is $\eps$-$\chi^2$ stable, it is $(\eps' \coloneqq \eps \log(|Y|/\eps))$-ALKL stable. Since $\KL{\cdot}{\cdot}$ and $\chisq{\cdot}{\cdot}$ are finite in different regimes, we cannot use the same $\mcM'$ in \Cref{def:ALKL-first,def:chi-first}. Instead, we ``smooth'' the $\mcM'$ of \Cref{def:chi-first} to ensure its support includes all elements of $Y$.
\Cref{sec:stability,sec:MI} together prove \Cref{thm:MI-informal}. Bounded mutual information is known to imply various generalization bounds. Our setting is non-standard, so in \Cref{sec:gen}, we connect mutual information to the generalization guarantee of \Cref{thm:main-binary}.
\subsection{An auto-boosting result}
\label{subsec:autoboost-overview}
While low mutual information is sufficient to guarantee generalization on average, it does not imply the sort of high-probability guarantee given in \Cref{thm:high-probability}. Instead, we show an ``auto-boosting'' result in the case where $w_t = 1$ at all time steps $t$.
\begin{lemma}[Informal version of \Cref{lem:auto-boost}]
\label{lem:auto-boost-informal}
Suppose that the analyst always asks subsampling queries $\phi:X^{w_t} \to Y_t$ where $w_t = 1$ for all $t \in [T]$. Any low-bias guarantee that holds with constant probability for a sample size of $n$ holds with probability $1 - 2^{-\Omega(N/n)}$ given a sample of size $N > n$.
\end{lemma}
The intuition behind \Cref{lem:auto-boost-informal} is the following natural way to boost success probabilities: Start by drawing $k$ disjoint samples $\bS^{(1)}, \ldots, \bS^{(k)} \iid \mcD^n$. Then, whenever a query $\phi$ appears, choose a random group $\bi \in [k]$ and output $\phi(\bS^{(\bi)})$. The hypothesis of \Cref{lem:auto-boost-informal} then guarantees the following: For any test $\psi$ chosen as a function of the query responses, the probability that $\bS^{(i)}$ is biased with respect to $\psi$ is at most a constant, say $0.1$, for each $i \in [k]$. If these events were independent, then the probability that a large fraction, say $0.2k$, of the disjoint samples are ``biased'' would be $2^{-\Omega(k)}$.
It turns out there is some subtlety in applying this intuition. Because the analyst can choose which query to ask a group $S^{(i)}$ as a function of responses from some other group $S^{(i')}$, the events indicating whether each group is biased need not be independent. To handle this, we generalize a direct product theorem of Shaltiel that was originally applied to fair decision trees \cite{Sha04}. That generalization, given in \Cref{lem:direct-product}, shows that while those events need not be independent, they behave no worse than if they were.
Furthermore, the above natural way of boosting success probabilities happens automatically! Given a sample $S \in X^{N}$ with $N \coloneqq nk$, the answer $\by = \phi(S)$ to a query $\phi:X^1 \to Y$ is completely equivalent to the answer $\by' = \phi(S^{(\bi)})$, where $S^{(1)}, \ldots, S^{(k)} \in X^n$ partition $S$ and $\bi \sim \mathrm{Unif}([k])$. We can answer subsampling queries in the most natural way, and they automatically boost their own success probabilities. Note that this step crucially relies on $w_t = 1$ and fails for larger $w_t$.
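As an illustration, a minimal sketch of this boosting-by-partitioning mechanism for $w_t = 1$ might look like the following (our own toy code; the function names are hypothetical and not from the paper):

```python
import random

def make_subsampling_mechanism(sample, k, rng):
    """Answer w = 1 subsampling queries by splitting `sample` into k
    disjoint groups and routing each query to a uniformly random group.
    (Hypothetical sketch; for w = 1 this is equivalent to subsampling a
    single uniformly random element of the full sample.)"""
    n = len(sample) // k
    groups = [sample[i * n:(i + 1) * n] for i in range(k)]

    def answer(query):
        group = rng.choice(groups)   # pick a random disjoint group
        element = rng.choice(group)  # w = 1: subsample a single element
        return query(element)        # query outputs few bits

    return answer

rng = random.Random(0)
answer = make_subsampling_mechanism(list(range(100)), k=10, rng=rng)
responses = [answer(lambda x: x % 2) for _ in range(5)]
```

Because each response depends on the full sample only through one uniformly random element, the routing through a random group happens implicitly, which is the ``automatic'' part of the boosting.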
Finally, in \Cref{sec:apps}, we prove that both the SQ mechanism and the median-finding mechanism have low bias. Given our framework, these proofs are simple.
\section{Acknowledgments}
The author thanks Li-Yang Tan and Jonathan Ullman for their helpful discussions and feedback. He furthermore thanks the STOC reviewers for their helpful feedback.
Guy is supported by NSF awards 1942123, 2211237, and 2224246.
\bibliographystyle{alpha}
| {
"timestamp": "2023-02-20T02:05:46",
"yymm": "2302",
"arxiv_id": "2302.08661",
"language": "en",
"url": "https://arxiv.org/abs/2302.08661",
"abstract": "Ensuring that analyses performed on a dataset are representative of the entire population is one of the central problems in statistics. Most classical techniques assume that the dataset is independent of the analyst's query and break down in the common setting where a dataset is reused for multiple, adaptively chosen, queries. This problem of \\emph{adaptive data analysis} was formalized in the seminal works of Dwork et al. (STOC, 2015) and Hardt and Ullman (FOCS, 2014).We identify a remarkably simple set of assumptions under which the queries will continue to be representative even when chosen adaptively: The only requirements are that each query takes as input a random subsample and outputs few bits. This result shows that the noise inherent in subsampling is sufficient to guarantee that query responses generalize. The simplicity of this subsampling-based framework allows it to model a variety of real-world scenarios not covered by prior work.In addition to its simplicity, we demonstrate the utility of this framework by designing mechanisms for two foundational tasks, statistical queries and median finding. In particular, our mechanism for answering the broadly applicable class of statistical queries is both extremely simple and state of the art in many parameter regimes.",
"subjects": "Machine Learning (cs.LG); Data Structures and Algorithms (cs.DS); Information Theory (cs.IT)",
"title": "Subsampling Suffices for Adaptive Data Analysis"
} |
https://arxiv.org/abs/2106.01608 | A Discussion On the Validity of Manifold Learning | Dimensionality reduction (DR) and manifold learning (ManL) have been applied extensively in many machine learning tasks, including signal processing, speech recognition, and neuroinformatics. However, the understanding of whether DR and ManL models can generate valid learning results remains unclear. In this work, we investigate the validity of learning results of some widely used DR and ManL methods through the chart mapping function of a manifold. We identify a fundamental problem of these methods: the mapping functions induced by these methods violate the basic settings of manifolds, and hence they are not learning manifold in the mathematical sense. To address this problem, we provide a provably correct algorithm called fixed points Laplacian mapping (FPLM), that has the geometric guarantee to find a valid manifold representation (up to a homeomorphism). Combining one additional condition(orientation preserving), we discuss a sufficient condition for an algorithm to be bijective for any d-simplex decomposition result on a d-manifold. However, constructing such a mapping function and its computational method satisfying these conditions is still an open problem in mathematics. | \section{Appendix one: Analysis of FPLM and Geometric guarantees} \label{Append1}
In this appendix, we first provide the algebraic solution of FPLM; second, we prove that the graph arising from a triangulation of a 2-manifold is planar; third, we show that the constraints we assign to each round of FPLM ensure that FPLM is one-to-one over the entire triangulation. We also show that FPLM can be used for any edge-to-edge tessellation of polygons on $2$-manifolds.
For manifolds of arbitrary dimension, we discuss sufficient conditions for bijectivity. Unfortunately, computational methods for orientation-preserving mappings on discrete data sampled from a manifold remain an open question in mathematics.
\subsection{Algebraic solution of FPLM} \label{Algebraic solution of FPLM}
We show the optimization process of both rounds of FPLM here. First, consider the optimization problem of FPLM in Section \ref{FPLM_algorithm}. We set some elements of $\mathbf Y \in \mathbb{R}^{N \times d}$ to be fixed points $\mathbf C \in \mathbb{R}^{p \times d}$ ($p \geq d+1$) and optimize the rest. Therefore, we rearrange $\mathbf Y = [ \tilde{\mathbf Y}; \mathbf C ] $ with $\tilde{\mathbf Y} \in \mathbb{R}^{(N-p)\times d}$ being the unknowns and
\[
\mathbf L = \begin{bmatrix} \mathbf L_y & \mathbf L_{yc} \\[3pt] \mathbf L_{yc}^T & \mathbf L_c \end{bmatrix}
\]
where $\mathbf L_{yc} \in \mathbb{R}^{(N-p)\times p}$. Therefore we can reformulate the problem as
\begin{align*}
\min_{ \tilde{\mathbf Y} \in \mathbb{R}^{(N-p) \times d}} \text{tr} ( \mathbf C^T \mathbf L_c \mathbf C + \tilde{\mathbf Y}^T \mathbf L_{yc} \mathbf C + \mathbf C^T \mathbf L_{yc}^T \tilde{\mathbf Y} + \tilde{\mathbf Y}^T \mathbf L_y \tilde{\mathbf Y})
\end{align*}
This quadratic optimization problem is convex, as $\mathbf L_y$ is positive definite. Then, by the first-order condition, a global minimizer $\tilde{\mathbf Y}^*$ exists such that
\begin{equation}
\mathbf L_{y} \tilde{\mathbf Y}^* + \mathbf L_{yc} \mathbf C = \boldsymbol 0, \text{ with solution } \tilde{\mathbf Y}^* = - \mathbf L_y^{-1} \mathbf L_{yc} \mathbf C, \label{foc_FPLM}
\end{equation}
where $\mathbf L_y^{-1}$ is the inverse of $\mathbf L_y$. Since the Laplacian matrix acts as a difference operator on features, a geometric interpretation of \eqref{foc_FPLM} is that at $\tilde{\mathbf Y}^*$, the weighted sum of differences to the neighbours of each point equals $\boldsymbol 0$, regardless of whether those neighbours are fixed points. This becomes obvious after rewriting \eqref{foc_FPLM} componentwise: for any $\tilde{\mathbf y}_i^*$, $i = 1,...,n-p$,
\begin{equation}
\mathbf D_{ii} \tilde{\mathbf y}_i^* - \sum_{j\in [1, n-p]} \mathbf A_{ij} \tilde{\mathbf y}_j^* - \sum_{l \in [n-p+1, n]} \mathbf A_{il} \mathbf c_l = \boldsymbol 0, \label{convex_combination_function}
\end{equation}
where $\mathbf A_{ij}$ is the weight between samples $i$ and $j$, and $\mathbf D_{ii}$ is the degree of $\mathbf{x}_i$, including the fixed points. We may further simplify \eqref{convex_combination_function} by considering $\mathbf Y^* = [ \tilde{\mathbf Y}^*; \mathbf C ]$. That is,
\begin{equation}
{\mathbf y}_i^* = \sum_{j =1}^{n} \frac{\mathbf A_{ij}}{\mathbf D_{ii}} {\mathbf y}_j^* = \sum_{j =1}^{n} \lambda_{ij} {\mathbf y}_j^*, \quad \forall{i = 1, ..., n-p}. \label{barycenter_mapping}
\end{equation}
By definition of the degree matrix, we have $\sum_{j=1}^n \lambda_{ij} = 1$, $\forall i$. This shows that every optimal non-fixed point is a convex combination of points in its neighbourhood.
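For concreteness, the closed-form solution \eqref{foc_FPLM} amounts to a single linear solve. The following toy numerical check (our own illustration on a 5-vertex path graph with $d=1$ and two fixed endpoints; not from the paper's experiments) verifies both the formula and the convex combination property:

```python
import numpy as np

# Path graph 3-0-1-2-4: free vertices 0,1,2; fixed vertices 3,4.
A = np.zeros((5, 5))
for i, j in [(0, 3), (0, 1), (1, 2), (2, 4)]:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                                 # graph Laplacian

p = 2                                     # number of fixed points
L_y = L[:-p, :-p]                         # free-free block (positive definite)
L_yc = L[:-p, -p:]                        # free-fixed block
C = np.array([[0.0], [1.0]])              # fixed coordinates (d = 1)

Y_free = -np.linalg.solve(L_y, L_yc @ C)  # tilde{Y}* = -L_y^{-1} L_yc C
Y = np.vstack([Y_free, C])

# Each free vertex equals sum_j (A_ij / D_ii) y_j, a convex combination
# of its neighbours.
for i in range(3):
    recon = (A[i] @ Y) / D[i, i]
    assert np.isclose(Y[i, 0], recon[0])
```

On this path the free vertices land at $0.25, 0.5, 0.75$, i.e. equally spaced between the fixed endpoints, which matches the harmonic (mean-value) interpretation of the Laplacian solve.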
\subsection{Planarity}\label{Planarity}
Given a triangulation $\mathcal T$ on $\mathcal M$, we denote by $G(V, E)$ the graph containing the adjacency information of the vertices and edges of $\mathcal T$. For any vertex $v$ in $G$, we denote by $N^{\mathcal T}_v$ the set of vertices directly connected to $v$ in $\mathcal T$. We also denote by $E^{\mathcal T}_v$ the set of all edges having $v$ as an endpoint. Based on the second feature of simplex decomposition in Definition 1, we define the boundary of the manifold, $\partial \mathcal M$ (from the triangulation), as the set of edges contained in only one triangle.
In this section, we will demonstrate that the graph induced by a triangulation of $\mathcal M$ is planar, i.e., it can be drawn in a subset of the plane so that edges intersect only at their endpoints. We characterize planar graphs by stating Kuratowski's theorem \cite{kuratowski1930probleme}:
\begin{thm}[Planar Graph,Kuratowski]\label{thm:planargraph}
A finite graph is planar if and only if it does not contain a subdivision of the complete graph $K_5$ or of the complete bipartite graph $K_{3,3}$ (utility graph).
\end{thm}
We say a graph is complete if it is a simple undirected graph in which a unique edge connects every pair of distinct vertices. Figure~\ref{figure_Kuratowski}(a) shows the complete graph on five vertices, $K_5$. We say a graph is a complete bipartite graph if its vertices can be split into two sets $U$ and $V$ such that every vertex of the first set is connected to every vertex of the second set. Figure~\ref{figure_Kuratowski}(b) shows the complete bipartite graph $K_{3,3}$, in which each vertex set contains 3 vertices.
\begin{figure}[H]
\centering
\subfloat[$K_5$]{\includegraphics[width = 0.25\textwidth, height = 0.2\textwidth]{k5.png}}
\hspace{0.8in}
\subfloat[$K_{3,3}$]{\includegraphics[width =0.25\textwidth, height = 0.2\textwidth]{k33.png}}
\caption{Kuratowski subgraph $K_5$ and $K_{3,3}$}
\label{figure_Kuratowski}
\end{figure}
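As a quick sanity check (our own aside, using the standard edge-count corollaries of Euler's formula rather than Kuratowski's theorem itself): a simple planar graph on $|V| \ge 3$ vertices has at most $3|V| - 6$ edges, and a bipartite planar graph at most $2|V| - 4$; these bounds fail for $K_5$ and $K_{3,3}$ respectively.

```python
def complete_graph_edges(n):
    """Number of edges of K_n."""
    return n * (n - 1) // 2

def complete_bipartite_edges(a, b):
    """Number of edges of K_{a,b}."""
    return a * b

# K_5 violates the planar bound |E| <= 3|V| - 6 (10 > 9).
v5, e5 = 5, complete_graph_edges(5)
assert e5 > 3 * v5 - 6

# K_{3,3} violates the bipartite planar bound |E| <= 2|V| - 4 (9 > 8).
v33, e33 = 3 + 3, complete_bipartite_edges(3, 3)
assert e33 > 2 * v33 - 4
```

The counting argument only certifies non-planarity of these two graphs; the propositions below show that neither can arise from a triangulation of a 2-manifold.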
\subsection{Planarity of the graph induced from triangulation}
\begin{prop}\label{prop:nok5}
For a triangulation on a 2-manifold in $\mathbb R^l$, there is no Kuratowski sub-graph $K_5$.
\end{prop}
\begin{proof}
The proof is by contradiction. Assume there is a $K_5$. By the definition of a triangulation on a manifold, the intersection of any pair of triangles is either empty, a common vertex, or a common edge. However, a $K_5$ in $\mathbb R^2$ has two triangles whose edges intersect. For example, in Figure~\ref{figure_Kuratowski}(a), the intersection of triangles T[A,B,D] and T[A,B,C] is the line segment [A,B], while their edges [A,C] and [B,D] intersect, contradicting the definition of a triangulation on a 2-manifold.
Furthermore, as the triangulation of the manifold is conducted in $\mathbb R^l$ (for example $l=3$), it is possible that one of the vertices of $K_5$ is lifted into another dimension, so that the entire $K_5$ in $\mathbb R^3$ becomes the pyramid shown in Figure~\ref{high_dim_k5} below. However, by the definition of a triangulation, each edge can be shared by at most two triangles. From the figure below, it is clear that edge [C,D] is shared by the triangles T[B,C,D], T[A,C,D], and T[E,C,D], which contradicts the definition of a triangulation.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[width =0.3\textwidth, height = 0.3\textwidth]{high_dim_k5.png}
\caption{Sub-graph of $K_5$ in $\mathbb R^3$}\label{high_dim_k5}
\end{figure}
\begin{prop}\label{prop:nok33}
For a triangulation on 2-manifold in $\mathbb R^l$, there is no Kuratowski sub-graph $K_{3,3}$.
\end{prop}
\begin{proof}
The proof is also by contradiction. Assume there is a $K_{3,3}$. If $l=2$, the result is trivial, as shown in Figure~\ref{figure_Kuratowski}(b): a $K_{3,3}$ in $\mathbb R^2$ always contains a crossing of line segments, which contradicts our definition of a triangulation.
If $l>2$, we have the situation shown in Figure~\ref{high_dim_k33}, in which all vertices are in $\mathbb R^l$. Observe that there is no edge intersection in this high-dimensional $K_{3,3}$.
However, the plane defined by the vertices $[F,A,B]$ intersects the plane defined by the vertices $[A,B,E]$ in the line segment $[A,B]$, indicating a self-intersection of the manifold, since the simplex decomposition is homeomorphic to $\mathcal M$. Thus a manifold containing such a configuration can only be immersed in $\mathbb R^l$, not embedded. This contradicts our basic assumption that the 2-manifold is properly embedded.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[width =0.3\textwidth, height = 0.3\textwidth]{high_dim_k33.png}
\caption{Sub-graph of $K_{3,3}$ in $\mathbb R^3$}\label{high_dim_k33}
\end{figure}
Applying Propositions \ref{prop:nok5} and \ref{prop:nok33} together with Theorem \ref{thm:planargraph} yields the claim of Theorem \ref{thm:2dmfdtriangulationplanar}.
By Kuratowski's theorem, a triangulation $\mathcal T$ on $\mathcal M$ with the above features induces a \textit{planar straight line graph}. Note that the triangulation we discuss can be the result of any triangulation algorithm, such as surface triangulation or the tangential complex.
\begin{rem}[Manifold without boundary] \label{manifold_without_boundary}
For some manifolds without boundary, e.g. the 2-sphere $S^2$, it is well known that we cannot map the entire manifold to the plane. One can only discretely sample from the underlying manifold and is hence left with many ``holes'' in the manifold. In other words, what we observe is not the entire sphere but a measure-zero subset of it. Thus, it is reasonable to randomly select a triangle from the triangulation and treat its boundary as the boundary of a ``new'' manifold, which is almost the same as $S^2$ but with the selected triangle removed. Figure~\ref{Sphere} below shows a selected triangle whose boundary serves as the boundary of this new manifold for the sampled points. There is no information loss, as we know what we have taken away.
\end{rem}
\begin{figure}[H]
\centering
\includegraphics[width =0.6\textwidth, height = 0.4\textwidth]{sphere_boundary.png}
\caption{Boundary triangle for a triangulation of the subset of 2D-sphere}\label{Sphere}
\end{figure}
\subsection{Piece-wise linear mapping of FPLM}\label{Piece-wise linear mapping of FPLM}
As we mentioned earlier, the connectivity between the simplices formed on the manifold should be the same in the image and the pre-image of a bijective function over the entire simplex decomposition. Let $\mathbf Z =\{\mathbf z_1, \mathbf z_2,.. \mathbf z_N \} \subset \mathbb R^d$ be the image set of the chart map. The exact position of every point in $ \mathbf Z$ is unknown; however, by the bijectivity of the chart map, we know that $ \mathbf Z$ perfectly preserves the connectivity between simplices. This means that if we reproduce the graph on $\mathbf Z$ using the adjacency information from the simplex decomposition $\mathcal S$ on $\mathcal M$, the integrity, connectivity, and neighbouring relations between simplices remain unchanged. We write the simplex decomposition in $\mathbb R^d$ as $\mathcal S'$, although it is combinatorially the same as $\mathcal S$. The points in $\mathcal S$ and $\mathcal S'$ are then linearly related; e.g., the weights on the edges equal $\frac{\mathbf A_{ij}}{\mathbf D_{ii}}$, as is apparent in equation \eqref{barycenter_mapping}.
We will use $N^{\mathcal S'}_{\mathbf z_i}$ to denote the set of neighbours of $\mathbf z_i$ in $\mathcal S'$. From equation \eqref{barycenter_mapping}, the solution of FPLM is obtained by solving a system of linear equations.
Equation \eqref{barycenter_mapping} also shows that each interior vertex $\mathbf y_i^*$ is a convex combination of its neighbours, which aligns with the convex combination function defined as follows:
\begin{defn}[Convex Combination function]
For every interior vertex $\mathbf{z}_i$ of a simplex decomposition $\mathcal S'$ in $\mathbb R^d$ and weights $\lambda_{ij}\ge0$ for $\mathbf z_j \in N^{\mathcal S'}_{\mathbf z_i}$, if a piece-wise linear function $f: D_{\mathcal S'} \rightarrow \mathbb R$ satisfies:
\begin{equation}
\sum_{\mathbf z_j \in N^{\mathcal S'}_{\mathbf z_i}} \lambda_{ij} =1 \label{summation of weights}
\end{equation}
and
\begin{equation}
f(\mathbf z_i) = \sum_{\mathbf z_{j} \in N^{\mathcal S'}_{\mathbf z_i}}\lambda_{ij}f(\mathbf z_{j})
\end{equation}
then we call $f$ a convex combination function.
\end{defn}
Similarly, we define a \textit{piece-wise linear mapping} $\phi : D_{\mathcal S'} \rightarrow \mathbb R^d$ to be any mapping $\phi = (f_1,\ldots,f_d)$ in which the $f_i$'s are piece-wise linear functions acting on each coordinate component of a given vertex $\mathbf z_i$. We call $\phi$ a \textit{convex combination mapping} given a set of fixed non-negative weights $\lambda_{ij}$ for the neighbours $N^{\mathcal S'}_{\mathbf z_i}$ of each interior vertex $\mathbf z_i \in \mathbf Z$.
We have:
\begin{equation} \label{convex_combination_final}
\mathbf{y}^*_i=\phi(\mathbf z_i) = \sum_{\mathbf z_{j} \in N^{\mathcal S'}_{\mathbf z_i}}\lambda_{ij}\phi(\mathbf z_{j})
\end{equation}
The convex combination mapping linearly adjusts the coordinates of each interior vertex in $\mathcal S'$ so that, for each vertex, the mapped point $\phi(\mathbf z)$ lies in the convex hull formed by its neighbours. It is clear that $\mathbf{Y^*}$, the optimizer of FPLM, satisfies equation \eqref{convex_combination_final}. For the rest of the paper, we write $\phi_1$ for the convex combination mapping of the first round of FPLM and, similarly, $\phi_2$ for the second round.
\begin{rem}\label{Function_of_1stround_FPLM}
Together with the chart map $\psi$, we now summarize the whole process of Algorithm \ref{2sFPLM_algorithm}.
If $\mathcal M$ is a manifold without boundary, as required, then the whole process of FPLM (one round) is $ \phi_1 \circ \psi(\mathbf{X})$; otherwise, the two rounds of FPLM give $ \phi_2 \circ \phi_1 \circ \psi(\mathbf{X})$.
\end{rem}
\subsection{One-to-one mapping induced from FPLM on triangulation} \label{one_to_one_analysis}
\begin{prop}
\label{inside_fp_prop}
FPLM maps all non-fixed points inside the convex hull formed by the fixed points ($\mathbf{P}(\mathbf{C})$).
\end{prop}
\begin{proof}
From the previous discussion, it is clear that FPLM is a convex combination mapping, meaning every non-fixed point must be a convex combination of its neighbours. Assume, on the contrary, that some point lies outside the convex hull of $\mathbf{P}(\mathbf{C})$; then, by \eqref{barycenter_mapping}, there must be further points outside as well. Among those outside points, consider one on the boundary of their convex hull (such a point always exists by finiteness); it too must have points surrounding it. Continue this process until all non-fixed points are exhausted. By the supporting hyperplane theorem \cite{boyd2004convex}, the outermost point cannot lie in the convex hull of its neighbours. This contradicts the fact that every non-fixed point is a convex combination of its neighbours, so the assumption is false.
\end{proof}
We now restrict to 2-manifolds and prove that the mappings induced by both rounds of FPLM are one-to-one over the triangulation. Following the work of \cite{floater2003one}, we first state the Rad\'o-Kneser-Choquet (RKC) theorem:
\begin{thm}[Rad\'o-Kneser-Choquet] \label{RKC}
Suppose $\mathcal T$ is a strongly connected triangulation and that $\phi: D_{\mathcal T} \rightarrow \mathbb R^2$ is a convex combination mapping which maps $\partial D_{\mathcal T}$ homeomorphically into the boundary $\partial \Omega$ of some (closed) convex region $\Omega \subset \mathbb R^2$. Then $\phi$ is one-to-one.
\end{thm}
By generalizing the RKC theorem, Floater's \cite{floater2003one} work provided a necessary and sufficient one-to-one condition of $\phi$ for any triangulation:
\begin{thm}[Floater, 2003]
Suppose $\mathcal{T}$ is any triangulation and let $\phi: D_{\mathcal T} \rightarrow \mathbb R^2$ be a convex combination mapping which maps $\partial D_{\mathcal T}$ homeomorphically into the boundary $\partial \Omega$ of some (closed) convex region $\Omega \subset \mathbb R^2$. Then $\phi$ is one-to-one if and only if no dividing edge $[v, w]$ of $\mathcal{T}$ is mapped by $\phi$ into $\partial \Omega$. \label{no_dividing_edge_thm}
\end{thm}
Following the above claims, we now explore the features of $\phi_1$, the mapping induced by the first round of FPLM.
\begin{prop}\label{prop:innerconvex}
If $\mathcal T$ is strongly connected, then $\mathbf{P}(\mathbf{C_2})$ must be a convex polygon formed by the boundary vertices of $\mathcal T$.
\end{prop}
\begin{proof}
We first observe that all points in $\mathbf C_2$ are boundary points of the 2-manifold $\mathcal M$, due to Proposition \ref{inside_fp_prop} and the distance minimisation in FPLM. Assume that $\mathbf{P}(\mathbf{C_2})$ is not convex. By \eqref{convex_combination_final}, there must then be a dividing edge connecting the boundary vertex with the reflex interior angle to another boundary vertex. However, by the definition of a strongly connected triangulation, there is no dividing edge between boundary vertices. Hence, $\mathbf{P}(\mathbf{C_2})$ must be convex.
\end{proof}
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.3\textwidth, height = 0.3\textwidth]{parabola_results/parabola_FLE_1step_bd.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.3\textwidth, height = 0.3\textwidth]{parabola_C2.pdf}}\;\;\;
\caption{The convex polygon (vertices shown as black stars) formed by the boundary points of a strongly connected triangulation.} \label{convexity_of_polygon}
\end{figure}
\begin{prop}
If $\mathcal T$ is strongly connected, the mapping $\phi_1$ induced by the first round of FPLM is one-to-one. \label{lemma2}
\end{prop}
\begin{proof}
The result follows directly from Theorem \ref{no_dividing_edge_thm}, since a triangle is a convex set in $\mathbb R^2$ and has no dividing edge across it.
\end{proof}
We now explore the property of the second round of FPLM. Recall that the second round of FPLM takes as its boundary the simple polygon formed by joining the boundary vertices inside the first-round FPLM result.
\begin{lem}
Given a strongly connected $\mathcal T'$, the mapping of the second round of FPLM, bounded by the convex polygon $\mathbf {P(C_2)}$, is one-to-one. \label{lemma3}
\end{lem}
\begin{proof}
Since $\mathbf {P(C_2)}$ is a convex polygon without a dividing edge, by Proposition \ref{prop:innerconvex}, the one-to-one property follows immediately by combining with Theorem \ref{no_dividing_edge_thm}.
\end{proof}
\begin{rem}[Special case of the first round of FPLM]
When the triangulation is not strongly connected, the first round of FPLM is no longer injective, because a dividing edge forms a closed boundary of a subset of the manifold, causing the part of the manifold that does not contain the selected triangle to collapse onto the dividing edge. Therefore, we have to detect the boundary directly from $\mathcal{G}_{\mathcal{S}}$ and generate a $p$-sided convex polygon in $\mathbb R^2$, so that all dividing edges remain inside the boundary and no boundary vertices are collinear. Theorem \ref{no_dividing_edge_thm} leads directly to Theorem \ref{thm:withdividingedge} and justifies our choice.
\end{rem}
From the lemma above, we know that, given a strongly connected triangulation $\mathcal T$ on the manifold, $\mathcal T$ can be mapped to a closed convex subset of $\mathbb R^2$ using either one or two rounds of FPLM.
\begin{rem}[Manifold with genus]
For surface manifolds, the genus is an integer $g$ representing the maximum number of cuttings along non-intersecting closed simple curves that leave the resulting manifold connected \cite{munkres2018elements}. Non-zero genus interferes with boundary detection on the manifold, compromising boundary identifiability.
Hence FPLM does not apply to surface manifolds of non-zero genus, such as the torus.
\end{rem}
\begin{rem}[Bijectivity and Homeomorphism]
The sample $\mathbf X =\{\mathbf x_1, \mathbf x_2,..\mathbf x_N\}$ we observe is a subset of a manifold. Since FPLM maps the triangulation $\mathcal T$ on the 2-manifold to a closed convex area in $\mathbb R^2$, and every $2,1,0$-simplex in $\mathcal T$ is mapped to exactly one $2,1,0$-simplex in $\mathbb R^2$, the mapping induced by FPLM together with the chart map (i.e. $\phi_1 \circ \psi$ for one round, $\phi_2 \circ \phi_1 \circ \psi$ for two rounds) is continuous over $\mathcal T$ and one-to-one. Hence, the mapping generated by the process of Algorithm \ref{2sFPLM_algorithm} $(\phi \circ \psi)$, restricted to a closed area in $\mathbb R^2$, is bijective.
\end{rem}
Based on the properties of FPLM on 2-manifolds, we now explore the behaviour of FPLM when the manifold structure is obtained by an edge-to-edge tessellation of polygons (a triangle is a three-sided polygon).
\begin{defn}[Edge-to-Edge Tessellation of polygons on 2-manifolds]
Given a 2-manifold, if the manifold can be decomposed into a list of polygons with number of sides $n \geq 3$, and the intersection of any two polygons is either empty, a common vertex, or a common edge, then we say the manifold is tessellated by these polygons, and that this is an edge-to-edge tessellation of the manifold.
\end{defn}
Let $\mathcal{TL}$ be the tessellation described above. If we further triangulate $\mathcal{TL}$, for example by adding edges which partition each face of $\mathcal{TL}$ into triangles, then we can use a convex combination mapping $\phi': D_{\mathcal{TL}} \rightarrow \mathbb R^2$ that is linear over each triangle in $D_{\mathcal{TL}}$ and continuous. Clearly, if $\mathcal{TL}$ is strongly connected, then based on what we discussed earlier, Algorithm \ref{2sFPLM_algorithm} is one-to-one, with the requirement that the polygon selected in the first round is convex. If $\mathcal{TL}$ is not strongly connected, boundary detection is again necessary to form a $p$-sided polygon manually.
We now focus on the properties of FPLM for higher-dimensional simplex decompositions. Taking 3-simplex decomposition (tetrahedralization) as an example, it has been reported that a convex combination mapping may not be one-to-one over tetrahedral meshes; a counter-example is given in \cite{floater2006convex}. However, that counter-example has four points positioned on one face of a tetrahedron, which conflicts with our assumption that all points are in general position. Moreover, FPLM starts from a sum of squared distances, which corresponds to a special type of convex mapping, different from the one in \cite{floater2006convex}. Hence, FPLM still works for this counter-example.
For $d$-manifolds with $d>2$, the situation is more complicated. The bijectivity of a piece-wise linear mapping relates to orientation preservation and certain boundary conditions. We restate the key theorem from \cite{lipman2014bijective}.
\begin{thm}[Sufficient conditions for bijectivity]
Given a $d$-dimensional connected orientable manifold $\mathcal M$ and its $d$-simplex decomposition constructed on a discrete sample, a piece-wise linear mapping $\phi$ from $\mathcal M$ to $\mathbb R^d$ is bijective if it satisfies the following conditions:
\begin{enumerate}
\item The mapping $\phi$ is orientation preserving over the entire decomposition.
\item The boundary of simplex decomposition is mapped to a polytope in $\mathbb R^d$ bijectively.
\end{enumerate}
\end{thm}
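Condition 1 can be checked numerically on a decomposition by comparing the sign of the signed volume of each simplex before and after mapping. The sketch below is our own illustration, assuming each simplex is given as a $(d+1) \times d$ array of vertex coordinates; it is not the paper's algorithm:

```python
import numpy as np

def orientation_sign(simplex):
    """Sign of the signed volume of a d-simplex, given as a (d+1) x d
    array of vertex coordinates."""
    edges = simplex[1:] - simplex[0]   # d edge vectors from the base vertex
    return np.sign(np.linalg.det(edges))

# A piece-wise linear mapping is orientation preserving over the
# decomposition if every mapped simplex keeps the orientation sign of
# its pre-image.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # positively oriented
flipped = tri[[0, 2, 1]]                              # reversed orientation
assert orientation_sign(tri) == 1.0
assert orientation_sign(flipped) == -1.0
```

In a full check one would evaluate `orientation_sign` on every simplex of the decomposition and on its image under $\phi$, and require that the two sign sequences agree.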
The first round of FPLM with a selected simplex maps the boundary of the manifold to a convex polytope inside the selected simplex, as shown in Proposition \ref{prop:innerconvex}. This step guarantees the second condition above. However, the orientation-preserving property has not yet been established, although we conjecture that it holds; our experiments on 3-manifolds support this conjecture. A rigorous proof is therefore still wanted.
\newpage
\section{Appendix two: More experimental results}
In this section, we present more experimental results and briefly introduce $d$-simplex decomposition methods such as the tangential complex and TetGen.
\textbf{Tangential Complex}\\
Following \cite{boissonnat2014manifold}, we use the Tangential Complex (TC) algorithm to construct a triangulation on the manifold. One requirement of TC is that the tangent space at each point of the manifold be estimated, e.g. using PCA. The tangential complex is obtained by gluing the local (Delaunay) triangulations around each sample point. The output of TC is a sub-complex of the $l$-dimensional Delaunay simplices of the sample points, but it can be computed using mostly operations in the $d$-dimensional tangent spaces \cite{boissonnat2014manifold}. It can be proved that the manifold reconstructed by the TC algorithm is isotopic to the original manifold. However, due to the appearance of so-called \textit{inconsistencies}, TC does not always generate the triangulation result defined in \ref{Triangulation and connectivity}. Even though this situation has been reported \cite{freedman2002efficient}, there is no universal solution except in the case of curves ($d=1$) \cite{flototto2003coordinate}. Hence, one way to deal with this problem is to give each point contained in an inconsistent simplex a small perturbation of its weight, so that the position of the \textit{medial axis} of the points is adjusted accordingly. Unfortunately, there is no guarantee that this perturbation always reduces the number of inconsistencies to zero. Hence, if the TC result has inconsistencies even after perturbation, we use Delaunay or surface triangulation instead.
\textbf{TetGen}\\
One of the most widely applied tetrahedral mesh generation methods, TetGen, is comprehensively described in \cite{si2015tetgen}. It is a mixture of a few classic constraint methods described in \cite{george1991automatic} and the classic Delaunay refinement algorithm \cite{ruppert1995delaunay}. Given a set of points from an underlying manifold in $\mathbb R^l$ with intrinsic dimension three, TetGen can generate a 3D piece-wise linear complex, whose elements are collectively named cells. Such cells have the following properties: 1. the boundary of each cell in the complex is a union of cells in the complex; 2. the intersection (if it exists) of two cells is a simplicial complex of lower dimension, at least one less than that of the two intersected cells. If all the cells of the underlying 3-manifold are tetrahedra, we call the piece-wise linear complex a tetrahedral mesh. More generally, the piece-wise linear meshes generated by TetGen offer a facet-to-facet tessellation of a manifold in $\mathbb R^l$.
\subsection{Additional results on 2-manifolds}
We add some additional experimental results for 2-manifolds here:
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_3d_pts.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_3d_tri.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_FLE_3d_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_FLE_1step.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_FLE_1step_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{monkey_saddle_results/monkey_saddle_FLE_2step.pdf}}\;\;\;
\caption{FPLM on Monkey Saddle, (a) Manifold scatters,(b) Triangulation on manifold, (c) Boundary detection from triangulation result, (d) First round FPLM result, (e) Boundary detection of the first round FPLM, (f) Final result. }
\label{FPLM_monkey_saddle}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[AE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_AE_21.pdf}} \;\;\;
\subfloat[Isomap]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_ISOMAP_46.pdf}}\;\;\;
\subfloat[LE]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_LE_815.pdf}}\;\;\;
\subfloat[LLE]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_LLE_57.pdf}}\;\;\;
\subfloat[LSTA]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_LTSA_33.pdf}}\;\;\;
\subfloat[MDS]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_MDS_41.pdf}}\;\;\;
\subfloat[t-SNE]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{monkey_saddle_other_methods/monkey_saddle_TSNE_735.pdf}}\;\;\;
\caption{Other methods on Monkey Saddle: (a) 21 crosses, (b) 46 crosses, (c) 815 crosses, (d) 57 crosses, (e) 33 crosses, (f) 41 crosses, (g) 735 crosses.} \label{other_methods_monkey_saddle}
\end{figure}
Manifold: Paraboloid
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_3d_pts.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_3d_tri.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_FLE_3d_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_FLE_1step.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_FLE_1step_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{parabola_results/parabola_FLE_2step.pdf}}\;\;\;
\caption{FPLM on Paraboloid: (a) Manifold scatters, (b) Triangulation on manifold, (c) Boundary detection from triangulation result, (d) First round FPLM result, (e) Boundary detection of the first round FPLM, (f) Final result.}
\label{FPLM_Parabola}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[AE ]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{
parabola_other_methods/parabola_AE_122.pdf}} \;\;\;
\subfloat[Isomap ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_ISOMAP_39.pdf}}\;\;\;
\subfloat[LE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_LE_1468.pdf}}\;\;\;
\subfloat[LLE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_LLE_309.pdf}}\;\;\;
\subfloat[LTSA ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_LTSA_30.pdf}}\;\;\;
\subfloat[MDS ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_MDS_57.pdf}}\;\;\;
\subfloat[t-SNE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{ parabola_other_methods/parabola_TSNE_282.pdf}}\;\;\;
\caption{Other methods on Paraboloid: (a) 122 crosses, (b) 39 crosses, (c) 1468 crosses, (d) 309 crosses, (e) 30 crosses, (f) 57 crosses, (g) 282 crosses.}
\label{other_methods_Paraboloid}
\end{figure}
Manifold: Twin peaks
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_3d_pts.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_3d_tri.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_FLE_3d_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_FLE_1step.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_FLE_1step_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{twin_peaks_results/twinpeaks_FLE_2step.pdf}}\;\;\;
\caption{FPLM on Twin Peaks}
\label{FPLM_twinpeaks}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[AE ]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_AE_3332.pdf}} \;\;\;
\subfloat[Isomap ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_ISOMAP_964.pdf}}\;\;\;
\subfloat[LE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_LE_764.pdf}}\;\;\;
\subfloat[LLE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_LLE_3571.pdf}}\;\;\;
\subfloat[LTSA ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_LTSA_2976.pdf}}\;\;\;
\subfloat[MDS ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_MDS_3584.pdf}}\;\;\;
\subfloat[t-SNE ]{\includegraphics[width =0.11\textwidth, height = 0.13\textwidth]{twin_peaks_other_methods/twinpeaks_TSNE_18282.pdf}}\;\;\;
\caption{Other methods on Twin Peaks: (a) 3332 crosses, (b) 964 crosses, (c) 764 crosses, (d) 3571 crosses, (e) 2976 crosses, (f) 3584 crosses, (g) 18282 crosses.}
\label{other_methods_twinpeaks}
\end{figure}
A summary of all manifolds included in the experiment and the number of line crosses produced by the methods other than FPLM is given in the following table:
\begin{table}[H]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\textbf{Manifolds} & \multicolumn{7}{l|}{\textbf{Methods and Line crosses}} \\ \hline
 & Autoencoder & Isomap & LE & LLE & LTSA & MDS & t-SNE \\ \hline
Monkey Saddle & 54 & 46 & 815 & 57 & 33 & 47 & 735 \\ \hline
Swiss Roll & 4585 & 1942 & 937 & 3623 & 36773 & 3804 & 10088 \\ \hline
Sphere & 770 & 1786 & 1683 & 1883 & 1795 & 1667 & 2329 \\ \hline
Twin Peaks & 100 & 114 & 903 & 2806 & 696 & 91 & 235 \\ \hline
Paraboloid & 56 & 39 & 1468 & 309 & 309 & 38 & 282 \\ \hline
\end{tabular}
\end{table}
\subsection{Additional result on tetrahedral meshes}
We additionally provide this result to show that FPLM can handle a large number of tetrahedral meshes in $\mathbb R^3$. Note that the boundary of the manifold (i.e. in $\mathbb R^4$ or higher) will generally differ from the boundary detected in $\mathbb R^3$, since there are many possible embedding functions mapping a 3-dimensional tetrahedral mesh into $\mathbb R^4$. We select the well-known SHARK tetrahedral mesh \cite{sullivan2019pyvista}, which contains 17061 tetrahedra, to check the efficiency of FPLM. The results are as follows:
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.15\textwidth, height = 0.15\textwidth]{shark/shark_scatter.png}} \;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{shark/tet_shark.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{shark/shark_boundary.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{shark/shark_first_round.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{shark/shark_second_round.png}}\;\;\;
\caption{FPLM on the SHARK tetrahedral mesh: (a) Point scatter, (b) Tetrahedralization on scatters, (c) Boundary detection (faces), (d) First round FPLM, (e) Second round FPLM. Total running time: 38.5s}
\label{FPLM_shark}
\end{figure}
\section{Introduction}\label{introduction}
Dimensionality reduction (DR) and manifold learning (ManL) have been widely applied in many machine learning tasks, such as signal processing \cite{rui2016dimensionality}, speech recognition \cite{van2009dimensionality}, neuroinformatics \cite{mwangi2014review} and bioinformatics \cite{zou2016novel}. The reason is the well-accepted manifold assumption: \textit{the observed data distribute in a non-linear low dimensional manifold embedded in a high dimensional ambient space}. The main focus of various ManL is to learn the manifold structure from the sampled data along with DR methods to obtain the latent space representation of the manifold. Many methods have been proposed in recent decades. For example, if the manifold is linear, some classic methods such as Principal Component Analysis (PCA) \cite{wold1987principal} and Multi-Dimensional Scaling (MDS) \cite{cox2008multidimensional} can be efficiently applied. When the manifold is non-linear, several embedding methods can be used such as Isomap \cite{balasubramanian2002isomap}, Local Linear Embedding (LLE) \cite{roweis2000nonlinear}, Laplacian Eigenmap (LE) \cite{belkin2003laplacian}, Local Tangent Space Alignment (LTSA) \cite{zhang2004principal} and Hessian Eigenmaps \cite{donoho2003hessian} to name a few.
Although ManL and DR methods are important pre-processing steps in machine learning and are widely used, the understanding of what results they actually produce is largely missing. It is not clear whether these methods generate valid latent space representations when examined under the mathematical definition of a manifold. As subsequent learning processes are built on ManL and DR, it is crucial to investigate the mathematical validity of their outputs so that the entire learning algorithm can be well understood and interpreted.
\paragraph{Manifold Structure}
Let $\mathcal{M}$ be a $d$-dimensional manifold embedded in $\mathbb R^l$ ($l > d$), covered by a set of open sets $\mathcal{M} \subset \bigcup_{\alpha} U_{\alpha}$. For each set $U_{\alpha}$, there is a homeomorphism $\psi_{\alpha}: U_{\alpha} \rightarrow \mathbb R^d$. The pair $(U_{\alpha},\psi_{\alpha})$ forms a chart. The image of the chart map is deemed to present the manifold structure \cite{guillemin2010differential}. By the definition of a homeomorphism, the chart map has to be bijective, i.e. one-to-one and onto, mapping a neighbourhood of the manifold to the latent space. On top of that, one can require the chart map to preserve other geometric aspects of the manifold, such as angles \cite{courant2005dirichlet} and distances \cite{maceachren1987sampling}, when possible. To obtain the manifold's local structure, one common approach used in most ManL/DR algorithms is to apply K nearest neighbours (KNN), given that the observed data is discretely sampled from the manifold. The estimated local structure is then used to infer the global coordinates of all data points by minimizing some loss function. A more principled method is to approximate the manifold by simplex decomposition \cite{boissonnat2018delaunay}, such as surface triangulation for 2-manifolds (surfaces) and tetrahedralization for 3-manifolds. These methods generate a piece-wise linear approximation of $\mathcal M$ from the sample so that, under appropriate sampling conditions, the approximation quality can be guaranteed. Many achievements have been made along this line \cite{cazals2006delaunay} \cite{boissonnat2018delaunay} \cite{maglo2012progressive}.
For example, Boissonnat et al. \cite{boissonnat2018delaunay} developed an algorithm named the tangential complex (TC) to decompose a manifold into $d$-simplices. The reconstructed manifold generated from TC is isotopic to the original manifold if the density of data points is high enough.
In this paper, we use these methods to generate manifold structures.
\paragraph{Observation and Motivation}
Any chart map bijectively maps the manifold to a latent space locally. Since DR/ManL methods claim to learn the latent representation, it is vital to ensure that those learnt representations are bijective, or at least one-to-one, i.e. globally injective. The central problem is then how to check whether a map between sets of discrete points is bijective. For a 1D-manifold, there is a natural ordering if it is considered as a curve, and hence bijectivity is easily checked by inspecting the order of the representations in $\mathbb R$. However, when $d>1$, no such ordering exists. Therefore, the crux is to implement such an ordering for any $d$. Our solution is simplex preservation. Given a $d$-simplex decomposition of $\mathcal M$ in $\mathbb R^l$, a bijective mapping $\mathbb R^l \rightarrow \mathbb R^d$ must preserve the simplex structure, i.e. its integrity, connectivity and neighbouring relations. For example, given a triangulation of a 2-manifold, a bijective map over the entire triangulation should be one-to-one over each triangle, edge, and point, with no degeneracy or overlapping of triangles and edges. The existence of any degeneracy or overlap violates the required bijectivity/injectivity and hence indicates a failure to learn the manifold properly.
Following the above line of thought, we investigate the learning performance of commonly used DR/ManL algorithms and models. We observe that none of these methods preserves the simplex structure, even when we directly provide the adjacency information from the simplex decomposition rather than letting them generate it on their own. For example, the results of these methods on 2-manifolds usually contain a large number of edge crossings, indicating that the map is not one-to-one. Figure~\ref{failed_result} shows a simple example of embedding results with and without line-segment intersections.
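The non-injectivity just described can be quantified the way we do in the experiments: count proper intersections between non-adjacent embedded edges. A minimal sketch (the function name `crossings` is ours; the quadratic loop over edge pairs is for clarity, not speed):

```python
import numpy as np

def crossings(points2d, edges):
    """Count pairwise crossings between non-adjacent line segments.

    points2d: (N, 2) embedded coordinates; edges: iterable of index pairs.
    A crossing is a proper interior intersection of two segments sharing
    no endpoint -- evidence that the embedding is not one-to-one.
    """
    def ccw(a, b, c):
        # Signed area test: >0 if a,b,c are counter-clockwise.
        return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

    E = list(edges)
    n = 0
    for i in range(len(E)):
        for j in range(i + 1, len(E)):
            a, b = E[i]; c, d = E[j]
            if {a, b} & {c, d}:
                continue  # adjacent edges may legitimately touch
            A, B, C, D = points2d[a], points2d[b], points2d[c], points2d[d]
            if ccw(A, B, C) * ccw(A, B, D) < 0 and ccw(C, D, A) * ccw(C, D, B) < 0:
                n += 1
    return n
```

For example, the two diagonals of a unit square cross once, while its four sides do not cross at all.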
\begin{figure}[H]
\centering
\subfloat[Paraboloid scatter]{\includegraphics[width = 0.2\textwidth, height = 0.18\textwidth]{illustration/parabola_3d_pts.pdf}} \;\;\;
\subfloat[Triangulation on paraboloid]{\includegraphics[width =0.2\textwidth, height=0.18\textwidth]{illustration/parabola_3d_tri.pdf}}\;\;\;
\subfloat[Failed embedding]{\includegraphics[width =0.2\textwidth, height = 0.18\textwidth]{illustration/parabola_LE_1468_cross_highlight.pdf}}\;\;\;
\subfloat[Successful embedding]{\includegraphics[width = 0.2\textwidth, height = 0.18\textwidth]{parabola_results/parabola_FLE_2step.pdf}} \;\;\;
\caption{Two embedding algorithms on Paraboloid. One fails (c); the other succeeds (d).}
\label{failed_result}
\end{figure}
We discover that these methods fail the bijectivity test that is essential to the definition of a manifold. Nonetheless, they may work for particular types of manifolds. For example, Isomap provides a correct embedding only if the manifold itself is isometric to an open set in the latent space, and LLE only reconstructs topological balls \cite{chen2011locally}. Since the type of manifold is not known beforehand, it is important to have a geometrically correct algorithm that guarantees bijectivity. To address this problem, we propose a new method that is provably bijective (when restricted to the codomain of the mapping), at least for 2-manifolds. The theory we build on is Tutte's planar embedding theorem, which states that every 3-connected planar graph has a convex representation in $\mathbb R^2$. The theorem was later generalized in \cite{floater2003one}, which proves that, under some additional requirements, a convex combination map taking a triangulation from $\mathbb R^2$ to $\mathbb R^2$ is one-to-one. Our algorithm is applicable to any $d$-manifold (of genus zero) with a facet-to-facet tessellation. Most importantly, it has a theoretical guarantee (refer to the Appendix) of producing a valid latent space representation for 2-manifolds, differing from the true latent space representation only by a homeomorphism.
\paragraph{Contributions.}
In summary, the main contributions of this paper are listed below.
\begin{enumerate}
\item We propose a method to validate manifold learning algorithms from the viewpoint of the definition of a manifold. Instead of examining cluster quality, we focus on the bijectivity of the induced mapping function, which must be a homeomorphism to serve as the chart map of the manifold.
\item Using this method, we identify a fundamental problem in some prominent methods: the mapping function they induce is not bijective and, in turn, violates the basic settings of manifolds.
\item We offer a provably correct algorithm called fixed-point Laplacian mapping (FPLM) to learn the manifold. This method has a geometric guarantee of finding a valid latent space representation (up to a homeomorphism).
\item By generalizing the previous embedding theorem, we make our algorithm adaptive to any non-degenerate edge-to-edge tessellation of 2-manifolds, and to most 3-manifolds with or without boundary. We also discuss a sufficient condition ensuring a bijective mapping for any manifold mesh generated by $d$-simplex decomposition.
\end{enumerate}
\paragraph{Organization.}
The remainder of the paper is organized as follows. In Section 2, we introduce the basic concepts used in this paper. FPLM is introduced in Section 3, with its analysis and geometric guarantees discussed in Section 4. Section 5 provides the experimental results validating our claims. We discuss the strengths and limitations of our method in Section 6, and conclude the paper in Section 7 with some possible extensions.
\section{Definitions and Preliminaries} \label{Definition and Preliminaries}
In this paper, $\mathcal M$ is an orientable connected manifold with intrinsic dimension $d$, embedded in $\mathbb R^l$, with a finite sample $\mathbf X =\{\mathbf x_1, \mathbf x_2,\ldots,\mathbf x_N\} \subset \mathcal M$.
The chart map $\psi$ of the manifold is a continuous bijective function that maps a neighbourhood of the manifold $\mathcal M$ to a subset in $\mathbb R^d$. Its inverse, $\psi^{-1}$, is deemed to generate the structure of $\mathcal M$ \cite{guillemin2010differential}.
We also assume that $\mathcal M$ is embedded in an ambient space of dimension at least $d+1$ {\em without self-intersection}. It is known that the Klein bottle cannot be embedded in $\mathbb R^3$ \cite{whitney1944singularities}; its standard realization there is merely an immersion, and hence it will not be considered in this paper.
\subsection{d-Simplex decomposition of $\mathcal M$}\label{Triangulation and connectivity}
Let $\mathcal M$ be a $d$-manifold with boundary embedded in $\mathbb R^l$, with discrete sample $\mathbf X$. By a $d$-simplex, we mean a $d$-dimensional polytope that is the convex body formed by its $d+1$ vertices. For example, a 0-, 1-, or 2-simplex is a point, a line segment, or a triangle, respectively. We call a $d$-simplex \textit{degenerate} if its dimension is less than $d$. We further assume that all data points sampled from $\mathcal M$ are in \textit{general position}, i.e. there is no collinearity among points, or in other words, no extra point lies inside a simplex. For example, there is no point inside a triangle or on an edge of a triangulation.
\begin{defn}[d-simplex decomposition of $\mathcal M$]
Let $\mathcal S$ be a finite set of non-degenerate $d$-simplices and let $D_{\mathcal{S}} = \bigcup_{S\in \mathcal S} S$. We call $\mathcal S$ a $d$-simplex decomposition of $\mathcal M$ if:
\begin{enumerate}
\item The intersection of any pair of $d$-simplices is either empty or a common $k$-simplex for some $k \in \{0, 1, \ldots, d-1\}$, and the vertex set of $\mathcal S$ is $\mathbf X$.
\item The boundary of $D_{\mathcal S}$, a closed polytope written as $\partial D_{\mathcal S}$, is formed by those ($d-1$)-simplices in $\mathcal S$ that are not shared by two $d$-simplices.
\item $D_{\mathcal S}$ is homeomorphic to $\mathcal M$.
\end{enumerate}
\end{defn}
This $d$-simplex decomposition of $\mathcal M$ is the best piece-wise linear approximation of $\mathcal M$ one could possibly have given a discrete sample from $\mathcal M$. Note that this differs from the usual simplicial complex on $\mathbf X$ in $\mathbb R^l$, which would be an $l$-simplex decomposition whose convex hull circumscribes $\mathbf X$. For example, for a 2-manifold, a 2-simplex decomposition (triangulation) should be applied, and a tetrahedralization for $d=3$. We now take triangulation as an example to provide a few more definitions.
Let $D_{\mathcal{T}} = \bigcup_{T\in \mathcal T} T$ be a triangulation $\mathcal T$ in $\mathbb{R}^2$ ($T$ stands for a triangle). Following \cite{floater2003one}, by the second requirement of the simplex decomposition, $D_\mathcal{T}$ is simply connected with boundary $\partial D_{\mathcal{T}}$. We call the vertices and edges contained in the boundary $\partial D_{\mathcal{T}}$ \textit{boundary} vertices and edges, and the others \textit{interior} vertices and edges.
If an interior edge has both endpoints at boundary vertices, we call it a \textit{dividing edge}. For example, in Figure~\ref{dividing_edge}, the edge $[V,W]$ is a dividing edge.
\begin{figure}[H]
\centering
\includegraphics[width =0.3\textwidth, height = 0.18\textwidth]{triangle_connectivity.png}
\caption{Connectivity between triangles and dividing edges}\label{dividing_edge}
\end{figure}
\begin{defn}
We say that a triangulation is strongly connected if it contains no dividing edges.
\end{defn}
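Both notions are easy to compute from a triangle list: a boundary edge belongs to exactly one triangle, and a dividing edge is an interior edge whose two endpoints are both boundary vertices. A small sketch (the function name is ours):

```python
from collections import Counter

def boundary_and_dividing(triangles):
    """Classify edges of a triangulation given as vertex-index triples.

    Boundary edges belong to exactly one triangle; a dividing edge is an
    interior edge (shared by two triangles) whose endpoints are both
    boundary vertices.  A triangulation with no dividing edge is
    strongly connected in the sense of the definition above.
    """
    count = Counter()
    for t in triangles:
        for e in ((t[0], t[1]), (t[0], t[2]), (t[1], t[2])):
            count[tuple(sorted(e))] += 1
    boundary_edges = {e for e, c in count.items() if c == 1}
    boundary_verts = {v for e in boundary_edges for v in e}
    dividing = {e for e, c in count.items()
                if c == 2 and set(e) <= boundary_verts}
    return boundary_edges, boundary_verts, dividing
```

For instance, two triangles glued along one edge form a triangulation whose shared edge is a dividing edge, so that triangulation is not strongly connected.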
As mentioned in the previous sections, if a mapping bijectively maps the manifold's simplex decomposition to the latent space, then the connectivity between simplices must be preserved. It is easy to see that if two simplices overlap, the points inside the overlap have more than one preimage, and hence the map is not injective/bijective. As for obtaining the simplex decomposition: for an unknown manifold, one can apply the tangential complex algorithm \cite{boissonnat2014manifold} with the required conditions and consistency test. For testing purposes, or for validity checking of a given DR/ManL algorithm, one can generate manifolds on which the simplex mesh in latent space is known, for example the graph of a function, i.e. $(x_1,\ldots,x_d,f(x_1,\ldots,x_d))$, where $x_i\in \mathbb R$ are latent variables and $f$ is a function; this is a $d$-manifold, a hypersurface in $\mathbb R^{d+1}$.
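As a concrete instance of the function-graph construction, one can sample the monkey saddle $z = x^3 - 3xy^2$ used in our experiments; triangulating the latent coordinates and lifting the triangles yields a triangulation of the manifold with an exactly known latent mesh (a sketch; the sample size and domain are arbitrary choices):

```python
import numpy as np
from scipy.spatial import Delaunay

# Sample the monkey saddle z = x^3 - 3 x y^2 as the graph of f over [-1, 1]^2.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))        # latent coordinates, known exactly
z = xy[:, 0]**3 - 3 * xy[:, 0] * xy[:, 1]**2
X = np.column_stack([xy, z])                  # sample of the 2-manifold in R^3

# Triangulating the latent coordinates and lifting each triangle to X gives
# a triangulation of the manifold whose latent-space mesh is the Delaunay
# triangulation itself, so any DR/ManL output can be checked against it.
tri = Delaunay(xy).simplices                  # (n_triangles, 3) index triples
```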
\section{Fixed Point Laplacian Mapping}\label{Fixed Point Laplacian Eigenmap: FPLM}
Using the validity checking method described above, e.g. the example in Section 1 and many more in the experiment section, we find that the most commonly used DR/ManL methods we tested are not bijective, and hence do not truly learn a manifold. The question is then: is it possible to design an algorithm with a bijectivity guarantee, at least for some manifolds? The answer is positive.
\subsection{Settings}\label{settings}
Based on the previous observation, one necessary condition for bijectivity is that the simplex structure on $\mathcal M$, a graph written as $\mathcal{G}_{\mathcal{S}}$, is preserved by the mapping. Unfortunately, this is highly nontrivial. The usual neighbourhood-preserving and alignment ideas seen in many DR/ManL methods do not work, as they lack ``hard'' constraints that enforce structure preservation, which is also the reason they fail the bijectivity test. We need geometry-inspired constraints and/or procedures with bijectivity built in naturally.
We start from constructing a (weighted) adjacency matrix $\mathbf A \in \mathbb{R}^{N \times N}$ derived from a simplex decomposition of $\mathcal M$, with the associated degree matrix and Laplacian denoted by $\mathbf D$ and $\mathbf L$.
The algorithm that we present below is a two-round procedure in which the same optimization is performed twice with different constraints each time. We call this optimization fixed-point Laplacian mapping (FPLM), where the fixed points are the constraints. We denote these fixed points by $\mathbf{C}=[\mathbf {c}_1,...,\mathbf{c}_p]^T \in \mathbb{R}^{p\times d}$, and write $\mathbf{P}(\mathbf{C})$ for the simple polytope whose vertices are the fixed points in $\mathbf C$.
\subsection{Fixed-point Laplacian Mapping (FPLM)}
\label{FPLM_algorithm}
FPLM is formulated as follows:
\begin{equation}\label{foc_FPLM1}
\min_{\mathbf Y \in \mathbb R^{N \times d}} \, \text{tr}(\mathbf{Y}^T\mathbf{LY}), \qquad \text{ subject to } \, \mathbf y_{i} = \mathbf c_i, i \in [1,p]
\end{equation}
where $\mathbf c_i\in\mathbb R^d$, $i\in [1,p]$, are the fixed points. We first determine whether $\mathcal{G}_{\mathcal{S}}$ is strongly connected (i.e., whether there is a dividing edge). If $\mathcal{G}_{\mathcal{S}}$ is strongly connected, the fixed points in the first round, collected in $\mathbf C_1$, are the images of the vertices of a randomly selected $d$-simplex after reducing its dimensionality. Therefore $\mathbf C=\mathbf C_1$ in FPLM and $p=d+1$. Note that this step is lossless, as a $d$-simplex in $\mathbb R^l$ is intrinsically $d$-dimensional and linear. After the first round of FPLM, the boundary of the simplex decomposition, i.e., $\partial D_{\mathcal S}$, is mapped inside $\mathbf P(\mathbf C_1)$ in $\mathbb R^d$. Recall that the boundary of a $d$-simplex decomposition consists of the ($d-1$)-simplices that are not shared. It is then straightforward to use the boundary of the simplex decomposition as the boundary for the second round of FPLM. We collect the $p$ vertices of the boundary polytope in $\mathbf C_2$. When $\mathcal{G}_{\mathcal{S}}$ is not strongly connected, we let $p$ be the number of boundary vertices detected from the simplex decomposition in $\mathbb R^l$ and construct a convex polytope with $p$ vertices in $\mathbb R^d$; one example of such a polytope is the regular one.
We now summarize two rounds of FPLM below.
\begin{algorithm}[H]
\caption{Two Rounds of FPLM}\label{2sFPLM_algorithm}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Simplex decomposition graph $\mathcal{G}_{\mathcal{S}}$, first round fixed points $\mathbf C_1$.
\STATE Construct weighted adjacency matrix $\mathbf A$ and its Laplacian $\mathbf L$ from $\mathcal{G}_{\mathcal{S}}$.
\IF{No dividing edge in $\mathcal{G}_{\mathcal{S}}$}
\STATE Obtain first-step $\mathbf Y_1$ by \eqref{foc_FPLM1} using $\mathbf C=\mathbf C_1$.
\IF{No boundary detected inside $\mathbf P(\mathbf C_1)$}
\RETURN{$\mathbf Y_1$}
\ELSE
\STATE Use the boundary detected as $\mathbf C_2$.
\STATE Obtain second-step $\mathbf Y_2$ by \eqref{foc_FPLM1} using $\mathbf C=\mathbf C_2$.
\RETURN{$\mathbf Y_2$}
\ENDIF
\ELSE
\STATE Find the number of boundary points of $\mathcal{G}_{\mathcal{S}}$ as $p$ and construct a $p$-face convex polytope as $\mathbf C_1$.
\STATE Obtain first step $\mathbf Y_1$ by \eqref{foc_FPLM1} using $\mathbf C=\mathbf C_1$.
\RETURN{$\mathbf Y_1$}
\ENDIF
\end{algorithmic}
\end{algorithm}
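Each call to \eqref{foc_FPLM1} inside the algorithm is a plain linear solve: zeroing the gradient of $\text{tr}(\mathbf{Y}^T\mathbf{LY})$ with respect to the free rows gives $\mathbf L_{FF} \mathbf Y_F = -\mathbf L_{FP} \mathbf C$, where $F$ and $P$ index the free and fixed points. A minimal sketch of one round (the function name `fplm_round` is ours):

```python
import numpy as np

def fplm_round(L, fixed_idx, C):
    """One round of FPLM: minimize tr(Y^T L Y) subject to Y[fixed_idx] = C.

    Setting the gradient with respect to the free rows to zero yields the
    linear system  L_FF Y_F = -L_FP C, whose solution places every free
    point at the convex combination of its neighbours (the barycentric
    property used in the analysis).
    """
    N = L.shape[0]
    fixed = np.asarray(fixed_idx)
    free = np.setdiff1d(np.arange(N), fixed)
    Y = np.zeros((N, C.shape[1]))
    Y[fixed] = C
    Y[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, fixed)] @ C)
    return Y
```

For example, a single free vertex joined to three fixed vertices with unit weights lands at the barycentre of the three fixed positions.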
\section{Analysis of Algorithm and Geometric Guarantees} \label{Analysis of Algorithm and Geometric Guarantees}
We now justify that the above procedure results in a bijective mapping. A summary of the line of proof is as follows. We first show that the mapping induced by FPLM is a convex combination mapping over the simplex decomposition. Then, taking 2-manifolds as an example, we prove that the convex combination mapping is one-to-one over the entire triangulation. Restricted to its codomain, the mapping is bijective. We further prove that the procedure is applicable to any 2-manifold whose structure is estimated from a non-degenerate edge-to-edge tessellation of polygons. Due to the page limit, we only present the central theorems here; see the Appendix for detailed proofs and derivations.
\paragraph{Algebraic solution of FPLM and Convex Combination Mapping}
The global minimizer $\mathbf Y^*$ of FPLM satisfies:
\begin{equation}
{\mathbf y}_i^* = \sum_{j =1}^{N} \frac{\mathbf A_{ij}}{\mathbf D_{ii}} {\mathbf y}_j^* = \sum_{j =1}^{N} \lambda_{ij} {\mathbf y}_j^*, \quad \forall i = 1, \ldots, N-p, \label{barycenter_mapping1}
\end{equation}
where the non-fixed points are indexed by $i = 1,\ldots,N-p$ and $\mathbf{D}_{ii}$ is the $i$-th diagonal entry of the degree matrix. By the definition of the degree matrix, $\sum_{j=1}^N \lambda_{ij} = 1$ for all $i$. This shows that every optimal non-fixed point is a convex combination of its neighbours.
As mentioned earlier, the connectivity between simplices must be the same in the image and the pre-image of a bijective function over the entire simplex decomposition. Given two simplex decompositions $\mathcal S$ and $\mathcal S'$ of subsets of $\mathbb R^d$, with some abuse of notation, we call a function $f: \mathcal S \rightarrow \mathcal S'$ a \textit{piece-wise linear function} if it is continuous over the entire $D_{\mathcal S}$ and linear over each simplex. Similarly, a \textit{piece-wise linear mapping} $\phi : \mathcal G_{\mathcal S} \rightarrow \mathcal S$ is a mapping taking the simplex decomposition of $\mathcal M$ to its latent space while the simplex structure is retained. A typical DR/ManL method learns $\phi$ such that $\mathbf y_i=\phi(\mathbf x_i)$. If $\phi$ satisfies \eqref{barycenter_mapping1}, we call $\phi$ a convex combination mapping \cite{floater2003one}. Clearly, FPLM generates a convex combination mapping over the $d$-simplex decomposition of the manifold.
\paragraph{Geometric Guarantees of FPLM}
We now present our central theorems on the geometric guarantees of FPLM for 2-manifolds.
\begin{thm}\label{thm:2dmfdtriangulationplanar}
For any 2-manifold of genus zero, the graph induced by any valid triangulation of the manifold is planar.
\end{thm}
The idea is to prove that the graph induced by a triangulation contains no Kuratowski subgraph, i.e. no subdivision of $K_5$ or $K_{3,3}$. We now show the features of $\phi$, the convex combination mapping induced by FPLM.
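The planarity claim can also be sanity-checked computationally, e.g. with networkx's built-in planarity test (planarity is equivalent to the absence of $K_5$ and $K_{3,3}$ subdivisions, by Kuratowski's theorem); the helper name below is ours:

```python
import networkx as nx

def triangulation_is_planar(triangles):
    """Return True iff the graph induced by a triangulation (a list of
    vertex-index triples) is planar, i.e. admits a plane drawing and so
    contains no subdivision of K_5 or K_{3,3}."""
    G = nx.Graph()
    for a, b, c in triangles:
        G.add_edges_from([(a, b), (a, c), (b, c)])
    is_planar, _ = nx.check_planarity(G)
    return is_planar
```

For instance, three triangles fanned around a common vertex induce a planar graph, whereas $K_5$ itself fails the same test.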
\begin{prop}
FPLM maps all non-fixed points inside the convex hull formed by the fixed points ($\mathbf{P}(\mathbf{C})$).
\end{prop}
We only present a sketch of the proof here. If, on the contrary, some point lies outside the convex hull $\mathbf{P}(\mathbf{C})$, then by \eqref{barycenter_mapping1} there must be more points outside as well. Among those outside points, consider one on the boundary of their convex hull; it must again have neighbouring points around it. Continue this process until all non-fixed points are exhausted. By the supporting hyperplane theorem \cite{boyd2004convex}, the outermost point cannot lie in the convex hull of its neighbours, contradicting the fact that every non-fixed point is a convex combination of its neighbours. Therefore the assumption is false. Another way to prove this is by direct inspection of the minimization in FPLM.
By applying the conclusions of previous works \cite{kneser1926losung} \cite{floater2003one}, we prove that the first round of FPLM is one-to-one over any strongly connected triangulation (Appendix One). We then explore the convexity of the boundary polygon of $\mathcal{G}_{\mathcal{S}}$ after the first round of FPLM and conclude the following lemma:
\begin{lem}
Given a strongly connected triangulation $\mathcal T$, $\partial D_{\mathcal S}$ is mapped to a convex polygon after the first round of FPLM, and hence the result of Algorithm \ref{2sFPLM_algorithm} is one-to-one.
\end{lem}
The conclusion of the above lemma is proved by virtue of Tutte's embedding theorem \cite{tutte1963draw} after we show the convexity of the image of $\partial D_{\mathcal S}$.
However, when the triangulation is not strongly connected, the first round of FPLM is no longer injective, because a dividing edge would be mapped to a boundary edge of the manifold inside the triangle selected in the first round. Therefore, we directly detect the boundary from $\mathcal{G}_{\mathcal{S}}$ and generate a $p$-side convex polygon in $\mathbb R^2$, so that all dividing edges remain inside the boundary and no boundary vertices are collinear. The following theorem justifies this part of Algorithm \ref{2sFPLM_algorithm}.
\begin{thm}\label{thm:withdividingedge}
Given a triangulation $\mathcal T$ with dividing edges, FPLM with the fixed points $\mathbf C$ taken as the vertices of a $p$-side convex polygon is one-to-one, where $p$ is the number of boundary vertices.
\end{thm}
The above conclusions make FPLM applicable to any 2-manifold (orientable and connected) of genus zero. However, when $d \geq 3$, we need an extra condition, the orientation-preserving property, to ensure injectivity. We discuss this in the Appendix.
\section{Experiments} \label{Experiments}
In this section, we investigate the learning performance of widely used state-of-the-art DR/ManL algorithms. The structure of every 2-manifold was generated by applying either the Tangential Complex (TC) algorithm \cite{boissonnat2014manifold} or Delaunay/surface triangulation. The structure of every 3-manifold was generated using the Delaunay tetrahedralization algorithm \cite{si2015tetgen} included in TetGen, or TC.
To allow a fair comparison between FPLM and other prominent methods, the adjacency information obtained from the simplex decomposition is used as the input manifold structure. The number of line-segment crossings is counted as a measure of learning performance for all included models.
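The crossing-count metric can be computed with standard orientation tests. The sketch below (function names are illustrative, not from the paper) counts proper crossings between embedded edges, treating edges that share an endpoint as non-crossing:

```python
def _orient(a, b, c):
    # signed area of triangle (a, b, c): >0 CCW, <0 CW, 0 collinear
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True iff the open segments p1p2 and q1q2 properly intersect."""
    if {p1, p2} & {q1, q2}:   # shared endpoint: not counted as a crossing
        return False
    d1, d2 = _orient(q1, q2, p1), _orient(q1, q2, p2)
    d3, d4 = _orient(p1, p2, q1), _orient(p1, p2, q2)
    # each segment must strictly separate the other's endpoints
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def count_crossings(edges, pos):
    """edges: list of (u, v) vertex pairs; pos: {vertex: (x, y)}."""
    n = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (a, b), (c, d) = edges[i], edges[j]
            if segments_cross(pos[a], pos[b], pos[c], pos[d]):
                n += 1
    return n
```

A zero crossing count indicates that the planar drawing of the mesh edges preserves connectivity without overlaps, consistent with an injective embedding.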
\textbf{Experiment setup.}
The 2-manifolds included in the experiment are: Monkey saddle, Swiss roll, Paraboloid, Twin peaks and Sphere.
We construct a weighted adjacency matrix from triangulation via rbf kernel function. That is, $ A_{ij} = \exp(-\gamma d_m(\mathbf x_i, \mathbf x_j))$ if $\mathbf x_i$ is connected to $\mathbf x_j$, where we use $l_2$ distance $d_m(\mathbf x, \mathbf y) = \sqrt{\sum_{i=1}^d (x_i - y_i)^2}$ for $\mathbf x, \mathbf y \in \mathbb{R}^d$. For all experiments, we fix $\gamma = 0.1$.
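A minimal sketch of this adjacency construction, following the formula above with $\gamma = 0.1$ (the function name and edge-list input format are illustrative):

```python
import math

def rbf_adjacency(points, edges, gamma=0.1):
    """Weighted adjacency A_ij = exp(-gamma * d(x_i, x_j)) for connected pairs.

    points: list of coordinate tuples in R^d
    edges:  iterable of (i, j) index pairs with i < j
    """
    n = len(points)
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        # Euclidean (l2) distance between the two endpoints
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(points[i], points[j])))
        A[i][j] = A[j][i] = math.exp(-gamma * d)  # symmetric weight
    return A
```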
The settings of other learning algorithms are as follows: for LE and LTSA,
we use the pre-computed $\mathbf A$ as input; for Locally Linear Embedding, we use the adjacency matrix constructed from the simplex-decomposed graph in place of the neighborhood graph; for Isomap, we construct the distance matrix from the simplex-decomposed graph and distances; for MDS and t-SNE, we use default settings. Finally, for the Manifold Autoencoder, we construct a neural network with layers $3 \times 64 \times 2 \times 64 \times 3$. The activation function is ReLU; a dropout layer with $p = 0.2$ is included. Batch normalization is applied to the bottleneck layer. The optimizer is ADAM with a learning rate of $0.1$. For every experiment, we run 1000 epochs. All experiments are carried out on a laptop running a 64-bit operating system with an Intel Core i5-8350U 1.90GHz CPU and 16 GB RAM, with Python 3.36.
For manifolds with boundary, the second-round output of FPLM is compared with the other learning algorithms. Due to space limits, we present only the results for the Swiss roll (manifold with boundary) and the 2-sphere (manifold without boundary); for the remaining results, please see the Appendix. The figures below show the comparison results:
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_3d_pts.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_3d_tri.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_FLE_3d_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_FLE_1step.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_FLE_1step_bd.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.13\textwidth, height = 0.13\textwidth]{swiss_roll_results/swiss_roll_FLE_2step.pdf}}\;\;\;
\caption{FPLM on Swiss roll: (a) Manifold scatters, (b) Triangulation on manifold, (c) Boundary detection, (d) First round FPLM, (e) Boundary detection for the first round FPLM, (f) Final result.}
\label{FPLM_swiss_roll}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[AE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_AE_7800.pdf}} \;\;\;
\subfloat[Isomap]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_ISOMAP_1942.pdf}}\;\;\;
\subfloat[LE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_LE_937.pdf}}\;\;\;
\subfloat[LLE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_LLE_3623.pdf}}\;\;\;
\subfloat[LTSA]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_LTSA_36739.pdf}}\;\;\;
\subfloat[MDS]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_MDS_3804.pdf}}\;\;\;
\subfloat[t-SNE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{swiss_roll_other_methods/swiss_roll_TSNE_10088.pdf}}\;\;\;
\caption{Other methods on Swiss roll. (a) 4585 crosses, (b) 1942 crosses, (c) 937 crosses, (d) 3623 crosses, (e) 36773 crosses, (f) 3804 crosses, (g) 10088 crosses}
\label{other_methods_swiss_roll}
\end{figure}
As we can see, all results in Figure~\ref{other_methods_swiss_roll} exhibit line crossings, indicating that the connectivity between triangles is not preserved and hence that the mappings induced by these methods are not one-to-one. We now show the result for a manifold without boundary, e.g., the 2-sphere. Note that only one round of FPLM is needed to finish the entire process: since we assume the sample $\mathbf{X}$ is a subset of the manifold, the triangulation conducted on $\mathbf{X}$ always has a boundary, and any single triangle can serve as that boundary.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.2\textwidth, height = 0.2\textwidth]{sphere_results/sphere_3d_pts.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.2\textwidth, height = 0.2\textwidth]{sphere_results/sphere_3d_tri.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.2\textwidth, height = 0.2\textwidth]{sphere_results/sphere_FLE_1step.pdf}}\;\;\;
\caption{FPLM on Sphere}
\end{figure}
\begin{figure}[H]
\centering
\subfloat[AE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_AE_885.pdf}} \;\;\;
\subfloat[Isomap]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_ISOMAP_1786.pdf}}\;\;\;
\subfloat[LE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_LE_1683.pdf}}\;\;\;
\subfloat[LLE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_LLE_1883.pdf}}\;\;\;
\subfloat[LTSA]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_LTSA_1787.pdf}}\;\;\;
\subfloat[MDS]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_MDS_1638.pdf}}\;\;\;
\subfloat[t-SNE]{\includegraphics[width = 0.11\textwidth, height = 0.13\textwidth]{sphere_other_methods/sphere_TSNE_2329.pdf}}\;\;\;
\caption{Other methods on Sphere. (a) 770 crosses, (b) 1786 crosses, (c) 1683 crosses, (d) 1883 crosses, (e) 1795 crosses, (f) 1667 crosses, (g) 2329 crosses}
\label{other_methods_sphere}
\end{figure}
\subsection{FPLM on 3-manifolds}
To show the learning performance of FPLM on any given tetrahedral mesh,
we use both the Delaunay tetrahedralization algorithm described in TetGen \cite{si2015tetgen} and TC to create tetrahedral meshes in $\mathbb R^3$. All the $d$-manifolds considered in this paper can be embedded into at least $\mathbb R^{d+1}$; hence the points we simulate in the manifold latent space can always be embedded in $\mathbb R^4$ or higher without self-intersection. Note that the boundary of a tetrahedral mesh can be detected as the $2$-simplices that are not shared between tetrahedra. Due to the variety of embedding functions from $\mathbb R^3$ to $\mathbb R^4$, the boundary detected in $\mathbb R^3$ will, in general, differ from the boundary of the manifold in $\mathbb R^4$ or higher. Moreover, for the first round of FPLM, the fixed points are the vertices of a randomly selected tetrahedron; for the second round, the fixed points are the vertices of the polytope detected directly from the tetrahedral mesh. The following figure shows FPLM results on a tetrahedral mesh of the 3-ball.
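The boundary detection just described (a triangular face lies on the boundary exactly when it belongs to a single tetrahedron, since interior faces are shared by two) can be sketched as follows; the function name is illustrative:

```python
from collections import Counter

def boundary_faces(tets):
    """Return the boundary (unshared) triangular faces of a tetrahedral mesh.

    tets: list of 4-tuples of vertex ids, one per tetrahedron.
    """
    faces = Counter()
    for a, b, c, d in tets:
        # the four triangular faces of each tetrahedron, in canonical order
        for face in ((a, b, c), (a, b, d), (a, c, d), (b, c, d)):
            faces[tuple(sorted(face))] += 1
    # faces counted once are on the boundary; faces counted twice are interior
    return [f for f, cnt in faces.items() if cnt == 1]
```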
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.2\textwidth, height = 0.15\textwidth]{3_sphere/hyper_sphere_scatter.pdf}} \;\;\;
\subfloat[]{\includegraphics[width =0.2\textwidth, height = 0.15\textwidth]{3_sphere/tet_hyper_sphere.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.2\textwidth, height = 0.15\textwidth]{3_sphere/hyper_sphere_boundary_surface.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.2\textwidth, height = 0.15\textwidth]{3_sphere/FLE_hyper_sphere.pdf}}\;\;\;
\caption{FPLM on the 3-ball with ground-truth latent variables ($\psi$, $\phi$, and $\theta$), where $\phi$ and $\theta$ range over $[0,\pi]$ and $\psi$ over $[0,2\pi]$. (a) Scatter plot of the 3-ball (a 3-sphere in $\mathbb R^4$) in $\mathbb R^3$, (b) Tetrahedralization, (c) Boundary face detection, (d) FPLM on the 3-ball.}
\label{FPLM_three_ball}
\end{figure}
By counting the number of intersections between the planes formed by the faces (triangles) of the tetrahedra (Figure~\ref{FPLM_three_ball}(d)), we found that the result generated by FPLM perfectly preserves the structure of the manifold, since any two faces intersect only in a common edge or vertex. For a 3-manifold with boundary, we use the well-known ``Delaunay Example'' tetrahedral mesh provided in PyVista \cite{sullivan2019pyvista} to check the performance of FPLM. In addition, for better visualization, we plot a subset of the tetrahedralization result by showing only the tetrahedra below the $(x,y)$ plane; the FPLM process, however, is still conducted on the entire dataset. By direct observation, we can see that FPLM preserves the structure of the tetrahedralization result.
\begin{figure}[H]
\centering
\subfloat[]{\includegraphics[width = 0.15\textwidth, height = 0.15\textwidth]{delaunay_example/delaunay_example_scatter.png}} \;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{delaunay_example/delaunay_example_tet_mesh.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{delaunay_example/Delaunay_boundary.png}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{delaunay_example/first_FLE_delauany_example.pdf}}\;\;\;
\subfloat[]{\includegraphics[width =0.15\textwidth, height = 0.15\textwidth]{delaunay_example/2nd_round_FLE.png}}\;\;\;
\caption{FPLM on a tetrahedral mesh example: (a) Point scatter, (b) Tetrahedralization, (c) Boundary detection, (d) First round FPLM, (e) Second round FPLM}
\label{FPLM_delaunay_example}
\end{figure}
\section{Discussion}\label{Discussion}
\paragraph{Boundary Advantage of FPLM}
To obtain the manifold structure, we apply a simplex decomposition algorithm (if one exists) to $\mathcal M$. By our definition of the simplex decomposition, the boundary of the decomposition result is a $(d-1)$-dimensional closed polytope formed by those $(d-1)$-simplices that are not shared. However, in practice, when $d$ is large, constructing a bijective mapping between boundaries can be as challenging as the original manifold learning problem. Fortunately, when $\mathcal{G}_{\mathcal{S}}$ is strongly connected, we evade this problem via the two-round FPLM in Algorithm 1, because the boundary of the decomposition result is then determined automatically. This is also the purpose of the first round.
\paragraph{Limitation in Higher Dimensional Manifold Mesh}
It has been reported that convex-combination mappings may fail to be one-to-one when $d \geq 3$; a counterexample is given in \cite{floater2006convex}. However, that counterexample places a point within a facet of a tetrahedron, conflicting with our assumption that the discrete sample on the manifold is in general position; hence, it does not apply here. We further point out that orientation preservation (OP) is necessary for an algorithm's induced mapping to be bijective for any $d$-simplex decomposition \cite{lipman2014bijective}. Based on the conclusion of \cite{floater2003one}, we readily derive that FPLM is both locally and globally OP for connected orientable 2-manifolds, owing to its proven bijectivity over triangulations. When $d \geq 3$, however, a proof of OP for FPLM remains open.
Nevertheless, we hypothesize that FPLM is bijective on higher-dimensional manifolds under suitable conditions. One can understand FPLM as drawing a $d$-simplex decomposition in $\mathbb R^d$ while minimizing a sum of distances. This minimization is equivalent to minimizing the Dirichlet energy of the piecewise-linear mapping $\phi$. It is well known that the Delaunay simplex decomposition minimizes the Dirichlet energy of piecewise-linear interpolation \cite{rippa1990minimal}, suggesting that FPLM may map onto a Delaunay simplex decomposition. A rigorous mathematical proof is left to future work.
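The graph Dirichlet energy referred to above can be evaluated directly from the edge weights and the vertex images of the piecewise-linear map; a hedged sketch (the function name and input format are illustrative, not the paper's notation):

```python
def dirichlet_energy(edges, weights, y):
    """Graph Dirichlet energy: sum over edges of w_ij * ||y_i - y_j||^2.

    edges:   list of (i, j) vertex pairs
    weights: {(i, j): w_ij} edge weights
    y:       {i: coordinate tuple} vertex images under the map
    """
    return sum(weights[e]
               * sum((a - b) ** 2 for a, b in zip(y[e[0]], y[e[1]]))
               for e in edges)
```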
\section{Concluding Remarks}\label{conclusion}
\paragraph{Summary of the Paper}\label{summary_of_the_paper}
This paper explores the learning performance of the most widely used state-of-the-art dimensionality reduction algorithms by assessing whether these methods can generate a valid latent-space representation aligned with the basic definition of a manifold, namely the bijectivity of its chart maps. We show that the mappings induced by all examined DR/ManL methods are not one-to-one; hence they do not learn the manifold in the mathematical sense. We develop a method, two-round FPLM, with a geometric guarantee of bijectivity of its induced map.
From the experimental results, we found that two-round FPLM handles many 2-manifolds and some 3-manifolds perfectly. A future study is to investigate whether this procedure retains injectivity/bijectivity for manifolds of arbitrary dimension.
\bibliographystyle{plain}
| {
"timestamp": "2021-06-04T02:11:14",
"yymm": "2106",
"arxiv_id": "2106.01608",
"language": "en",
"url": "https://arxiv.org/abs/2106.01608",
"abstract": "Dimensionality reduction (DR) and manifold learning (ManL) have been applied extensively in many machine learning tasks, including signal processing, speech recognition, and neuroinformatics. However, the understanding of whether DR and ManL models can generate valid learning results remains unclear. In this work, we investigate the validity of learning results of some widely used DR and ManL methods through the chart mapping function of a manifold. We identify a fundamental problem of these methods: the mapping functions induced by these methods violate the basic settings of manifolds, and hence they are not learning manifold in the mathematical sense. To address this problem, we provide a provably correct algorithm called fixed points Laplacian mapping (FPLM), that has the geometric guarantee to find a valid manifold representation (up to a homeomorphism). Combining one additional condition(orientation preserving), we discuss a sufficient condition for an algorithm to be bijective for any d-simplex decomposition result on a d-manifold. However, constructing such a mapping function and its computational method satisfying these conditions is still an open problem in mathematics.",
"subjects": "Machine Learning (cs.LG)",
"title": "A Discussion On the Validity of Manifold Learning",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.970239907775086,
"lm_q2_score": 0.7310585786300049,
"lm_q1q2_score": 0.7093022079081613
} |
https://arxiv.org/abs/2112.05745 | A Simple and Efficient Sampling-based Algorithm for General Reachability Analysis | In this work, we analyze an efficient sampling-based algorithm for general-purpose reachability analysis, which remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems. By sampling inputs, evaluating their images in the true reachable set, and taking their $\epsilon$-padded convex hull as a set estimator, this algorithm applies to general problem settings and is simple to implement. Our main contribution is the derivation of asymptotic and finite-sample accuracy guarantees using random set theory. This analysis informs algorithmic design to obtain an $\epsilon$-close reachable set approximation with high probability, provides insights into which reachability problems are most challenging, and motivates safety-critical applications of the technique. On a neural network verification task, we show that this approach is more accurate and significantly faster than prior work. Informed by our analysis, we also design a robust model predictive controller that we demonstrate in hardware experiments. | \section{Introduction}\label{sec:intro}
\begin{wrapfigure}{!R}{0.44\linewidth}
\begin{minipage}{0.95\linewidth}
\vspace{-12.5mm}
\includegraphics[width=1\linewidth,trim=0 0 0 120, clip]{figs/RandUP4.png}
\vspace{-5mm}
\caption{$\epsilon$-\textsc{RandUP}\xspace consists of three simple steps: 1) sampling $M$ inputs $x_i$ in $\mathcal{X}$, 2) propagating these inputs through the reachability map $f$, and 3) taking the $\epsilon$-padded convex hull $\hat\mathcal{Y}_\epsilon^M$ to approximate the reachable set $\mathcal{Y}$.
}\label{fig:randup}
\vspace{-5mm}
\end{minipage}
\end{wrapfigure}
Forward reachability analysis entails characterizing the reachable set of outputs of a given function corresponding to a set of inputs.
This type of analysis underpins a plethora of applications in model predictive control, neural network verification, and safety analysis of dynamical systems.
Sampling-based reachability analysis techniques are a particularly simple class of methods to implement; however, conventional wisdom suggests that if insufficient representative samples
are considered, these methods may not be robust in that they cannot rule out edge cases missed by the sampling procedure.
Alternatively, by leveraging structure in specific problem formulations or computational methods designed for exhaustivity (e.g., branch and bound), a large range of algorithms with deterministic accuracy and performance guarantees have been developed. However, these methods often sacrifice simplicity and generality for their power, motivating the development of algorithms that avoid such restrictions.
In this work, we analyze a simple yet efficient sampling-based algorithm for general-purpose reachability analysis.
As depicted in Figure \ref{fig:randup}, it consists of 1) sampling inputs, 2) propagating these inputs, and 3) taking the padded convex hull of these output samples.
We refer to this \textsc{Rand}omized \textsc{U}ncertainty \textsc{P}ropagation algorithm as $\epsilon$-\textsc{RandUP}\xspace: it is simple to implement,
benefits from statistical accuracy guarantees,
and
applies to a wide range of problems including reachability analysis of uncertain dynamical systems with neural network controllers.
Importantly, $\epsilon$-\textsc{RandUP}\xspace fulfills key desiderata that a general-purpose reachability analysis algorithm should satisfy:
\begin{itemize}[leftmargin=5mm]
\setlength\itemsep{0mm}
\vspace{-1mm}
\item it works with any choice of possibly nonlinear reachability maps and non-convex input sets,
\item its estimate of the reachable set is conservative with high probability and tighter than prior work,
\item it is efficient and does not require precomputations, which is a key advantage for learning-based control applications where uncertainty bounds and models are updated in real-time.
\end{itemize}
Our main contribution is a thorough analysis of the statistical properties of $\epsilon$-\textsc{RandUP}\xspace.
Specifically:
\begin{enumerate}[leftmargin=5mm]
\setlength\itemsep{0.0mm}
\item
We prove that the
set estimator converges to the $\epsilon$-padded convex hull of the true reachable set as the number of samples increases.
Our assumption about the sampling distribution is weaker than in related work and implies that sampling the boundary of the input set is sufficient.
This asymptotic result justifies using
$\epsilon$-\textsc{RandUP}\xspace
as a trustworthy baseline for offline validation whenever the reachability map and the input set are complex and no tractable algorithm
exists.
\item
We derive a finite-sample bound for the Hausdorff distance between the output of $\epsilon$-\textsc{RandUP}\xspace and the convex hull of the true reachable set,
assuming that the reachability map is Lipschitz continuous.
This result informs algorithmic design (e.g., how to choose the number of samples to obtain an $\epsilon$-accurate approximation with high probability),
sheds insights into
which
problems are most challenging,
and motivates using this simple algorithm in safety-critical applications.
\end{enumerate}
We demonstrate $\epsilon$-\textsc{RandUP}\xspace on a neural network controller verification task and show that it is highly competitive with prior work. We also embed this algorithm within a robust model predictive controller and present hardware results demonstrating the reliability of the approach.
\vspace{-1mm}
\section{Related work}\label{sec:related_work}
Reachability analysis has found a wide range of applications ranging from model predictive control \citep{Schurmann2018},
robotics \citep{Shao2021,LewEtAl2021_2},
neural network verification \citep{Tran2019,Hu2020},
to orbital mechanics \citep{Wittig2015}.
Reachability analysis is particularly relevant in safety-critical applications which require the strict satisfaction of specifications.
For instance,
a drone transporting a package should never collide with obstacles and respect velocity bounds for any payload mass in a bounded input set.
In contrast to stochastic problem formulations which typically consider the inputs as random variables with known probability distributions \citep{Webb2019,Sinha2020,DevonportL4DC2020},
we consider robust formulations which are of
interest whenever minimal information about the inputs is available.
Deterministic
algorithms are often tailored to the particular parameterization of the reachability map and to the shape of the input set. For instance, one finds methods that are particularly designed for neural networks \citep{Tran2019,IvanovVerisig2019,Hu2020},
nonlinear hybrid systems \citep{Chen2013,Kong2015},
linear dynamical systems with zonotopic \citep{Girard2005} and ellipsoidal \citep{Kurzhanski2000} parameter sets,
etc.
We refer to \citep{Liu2021} and \citep{Althoff2021} for recent comprehensive surveys.
Such algorithms
have deterministic accuracy guarantees but require problem-specific structure that restricts the class of systems
they apply to.
Given the wide range of applications of reachability analysis, there is a pressing need for the development and analysis of simple algorithms that can be applied to general problem formulations.
On the other hand, sampling-based algorithms reconstruct the reachable set
from sampled outputs.
The stochasticity is typically controlled by the engineer, who selects the number of samples and their distribution.
A key strength of this methodology is the possible use of black-box models
with arbitrary input sets,
which allows using complex simulators of the system.
For instance, kernel-based methods \citep{DeVito2014,Rudi2017,ThorpeL4DC2021} have been proposed as a strong approach for data-driven reachability analysis. Kernel-based methods are highly expressive, as selecting a completely separating kernel \citep{DeVito2014} enables reconstructing any closed set
to arbitrary precision given enough samples.
Their main drawback
is the potentially expensive evaluation of
the estimator for a large number of samples.
Its implicit representation
as a level set is also not particularly convenient for downstream applications.
Sampling-based reachable set estimators with pre-specified shapes have been proposed to simplify computations and downstream applications.
Recently, \citep{LewPavone2020} proposed to approximate
reachable sets
with the convex hull of the samples,
but this approach is not guaranteed to return a conservative approximation.
Ellipsoidal and rectangular sets are computed in \citep{DevonportL4DC2020} using the scenario approach, but this work tackles a different problem formulation with inputs that are random variables with known distribution.
To tackle the robust reachability analysis problem setting,
\citep{Gruenbacher2021} use a ball estimator
that bounds the samples.
The statistical analysis is restricted to ball-parameterized input sets,
uniform sampling distributions, and smooth diffeomorphic reachability maps
that represent the solution of a neural ordinary differential equation \citep{Chen2018} from the input set.
In practice, using an outer-bounding ball
is more conservative than taking the convex hull of the samples, see
Section \ref{sec:results}.
In this work, we slightly modify \textsc{RandUP}\xspace \citep{LewPavone2020} with an additional $\epsilon$-padding step to yield finite-sample outer-approximation guarantees.
Our analysis leverages
random set theory \citep{Matheron1975,Molchanov_BookTheoryOfRandomSets2017}, which provides a natural mathematical framework to analyze the reachable set estimator.
We characterize its accuracy using the Hausdorff distance to the convex hull of the true reachable set, which provides an intuitive error measure
that can be directly used for downstream control applications.
Our analysis draws inspiration from the vast literature on statistical geometric inference, which proposes different
set estimators including
union of balls \citep{Devroye1980,Baillo2001},
convex hulls \citep{RipleyPoissonForest1977,Schneider1988,Dumbgen1996},
$r$-convex hulls \citep{Rodriguez2016,RodriguezCasal2019,AriasCastro2019},
Delaunay complexes \citep{Boissonnat2013,AamariPhD2017,Aamari2018},
and kernel-based estimators \citep{DeVito2014,Rudi2017}.
This research typically makes assumptions about the set to be reconstructed
(e.g.,
it is convex \citep{Dumbgen1996} or has bounded reach \citep{Cuevas2009}) and considers points that are directly sampled from this set.
In this work, we derive similar results for reachable sets
given known properties of the
input set,
reachability map, and
chosen input sampling distribution.
\section{Problem definition}\label{sec:formal_setting}
In this section, we introduce our notations and problem formulation.
Due to space constraints, we leave measure-theoretic details to Appendix \ref{apdx:random_set_theory}.
We denote
$\lambda(\cdot)$ for the Lebesgue measure over $\mathbb{R}^p$,
$\Gamma(\cdot)$ for the
gamma function,
$\hull(A)$ for the convex hull of a subset $A\subset\mathbb{R}^n$, $A^\comp=\mathbb{R}^n\setminus A$ for its complement,
$\partial A$ for its boundary,
$\oplus$ for the Minkowski sum,
$B(x,r)\,{:=}\,\{y\,{\in}\,\mathbb{R}^n{:}\,
\|y\,{-}\,x\|\,{\leq}\, r
\}$ for the closed ball of center $x\in\mathbb{R}^n$ and radius $r\geq 0$,
and $\mathring{B}(x,r)$ for the open ball.
The family of nonempty compact subsets of $\mathbb{R}^n$ is denoted as $\K$.
For any $A\in\K$ and $d>0$, $D(A, d)\,{:=}\,\min\{m\,{\in}\,\mathbb{N} :
\exists \{a_1,{\mydots},a_m\}\,{\subset}\,\mathbb{R}^n, \
A\,{\subset}\, B(a_1,d)\,{\cup}\,{\mydots}\,{\cup}\, B(a_m,d)
\}$ denotes the $d$-covering number of $A$.
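For finite point sets, the $d$-covering number can be upper-bounded by a simple greedy procedure; the sketch below (function name illustrative, not from the paper) picks uncovered points as ball centers until every point is covered:

```python
import math

def greedy_covering_number(points, d):
    """Greedy upper bound on the d-covering number of a finite point set.

    Repeatedly choose an uncovered point as a ball center of radius d and
    discard every point within distance d of it; the number of chosen
    centers upper-bounds D(A, d).
    """
    remaining = list(points)
    centers = 0
    while remaining:
        c = remaining[0]
        # keep only points strictly farther than d from the new center
        remaining = [p for p in remaining if math.dist(p, c) > d]
        centers += 1
    return centers
```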
Let $\mathcal{X}\subset\mathbb{R}^p$ be a compact nonempty set of inputs and
$f:\mathbb{R}^p\rightarrow\mathbb{R}^n$ be a continuous function.
In this work, we tackle the general problem of reachability analysis, i.e.,
characterizing the set of reachable outputs $y=f(x)$ for all possible inputs $x\in\mathcal{X}$.
This problem is also often referred to as uncertainty propagation.
Mathematically, the objective consists of efficiently computing an accurate approximation of the reachable set $\mathcal{Y}\subset\mathbb{R}^n$, which is defined as
\begin{align}
\label{eq:reach_set}
\mathcal{Y} = f(\mathcal{X}) =
\{
f(x) \, :\, x\in\mathcal{X}
\}
.
\end{align}
To tackle this problem,
$\epsilon$-\textsc{RandUP}\xspace relies on the choice of three parameters:
a number of samples $M\in\mathbb{N}$,
a padding constant $\epsilon>0$,
and
a sampling distribution $\mathbb{P}_\mathcal{X}$ on measurable subsets of $\mathbb{R}^{p}$.
As depicted in Figure \ref{fig:randup},
$\epsilon$-\textsc{RandUP}\xspace consists of sampling $M$ independent identically-distributed inputs $x_i$ in $\mathcal{X}$ according to $\mathbb{P}_\mathcal{X}$,
of evaluating each output $y_i=f(x_i)$,
and
of computing the $\epsilon$-padded convex hull
\begin{equation}\label{eq:estimator_eps}
\hat\mathcal{Y}_\epsilon^M:=\hull\left(\{y_i\}_{i{=}1}^M\right)\oplus B(0,\epsilon).
\end{equation}
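As a concrete illustration, the estimator \eqref{eq:estimator_eps} can be computed in a few lines in the planar case $n=2$. The sketch below (names illustrative) takes a user-supplied input sampler and map $f$, builds the convex hull of the output samples via Andrew's monotone chain, and represents the Minkowski term implicitly by returning the hull vertices together with the padding radius $\epsilon$:

```python
def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(points):
        h = []
        for p in points:
            # pop while the last turn is clockwise or collinear
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(pts[::-1])  # lower hull + upper hull

def eps_randup(sample_input, f, M, eps):
    """The three steps of eps-RandUP in 2-D (padding kept implicit).

    sample_input: callable returning one input x_i ~ P_X
    f:            reachability map returning a 2-D point as a tuple
    """
    ys = [f(sample_input()) for _ in range(M)]   # steps 1-2: sample, propagate
    return convex_hull(ys), eps                  # step 3: padded convex hull
```

A point $y$ then belongs to the estimate $\hat{\mathcal{Y}}_\epsilon^M$ exactly when its distance to the returned hull is at most $\epsilon$.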
Our analysis hinges on the observation that the reachable set estimator $\hat\mathcal{Y}_\epsilon^M$ is a \textit{random compact set}, i.e., $\hat\mathcal{Y}_\epsilon^M$ is a random variable taking values in the family of nonempty compact sets $\K$.
We refer to Appendix \ref{apdx:random_set_theory} for rigorous definitions using random set theory.
Intuitively, different input samples $x_i$ in $\mathcal{X}$ induce different output samples $y_i$ in $\mathcal{Y}$, resulting in different approximated reachable sets $\hat\mathcal{Y}_\epsilon^M$.
To characterize the accuracy of
the estimator,
we use the \textit{Hausdorff metric}, which is defined as
\begin{equation}\label{eq:metric:Hausdorf}
d_\textrm{H}(A,B)
:=
\max\big(
\sup_{x\in B}
\operatornamewithlimits{inf\vphantom{p}}_{y\in A}
\|x-y\|, \
\sup_{x\in A}
\operatornamewithlimits{inf\vphantom{p}}_{y\in B}
\|x-y\|
\big)
\quad
\text{for any $A,B\in\K$.}
\end{equation}
This metric induces a topology and an associated $\sigma$-algebra, which enables
rigorously defining random compact sets as random variables and describing their convergence; see Appendix \ref{apdx:random_set_theory}.
Interestingly, the distribution of a random compact set is characterized by the probability that it intersects any given compact set.
We use this fact in Sections \ref{sec:asymptotic} and \ref{sec:finite_sample}, where we
characterize the probability that the set estimator $\hat\mathcal{Y}_\epsilon^M$ intersects well-chosen sets along the boundary of the true reachable set.
By analyzing the distribution of $\hat\mathcal{Y}_\epsilon^M$,
this approach allows bounding the Hausdorff distance between $\hat\mathcal{Y}_\epsilon^M$ and the convex hull of the true reachable set $\hull(\mathcal{Y})$ with high probability.
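On finite point sets, the Hausdorff metric \eqref{eq:metric:Hausdorf} reduces to a max-min computation over pairwise distances; a direct sketch (function name illustrative):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets in R^n.

    Each set is a list of coordinate tuples; the sup/inf in the general
    definition become max/min over finitely many points.
    """
    def one_sided(P, Q):
        # sup over p in P of the distance from p to the set Q
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))
```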
\vspace{-1mm}
\section{Asymptotic analysis}\label{sec:asymptotic}
In this section, we provide an asymptotic analysis under minimal assumptions about the input set and the reachability map (namely, that $\mathcal{X}$ is compact and $f$ is continuous).
To enable the reconstruction of the true convex hull $\hull(\mathcal{Y})$ using the sampling-based set estimator $\hat\mathcal{Y}_\epsilon^M$, we make one assumption about the sampling distribution $\mathbb{P}_\mathcal{X}$ for the inputs $x_i$. Note that by definition, $\mathbb{P}_\mathcal{X}(\mathcal{X})=1$.
\begin{myassumption}\label{assum:XBoundary:posMeasure}
$\mathbb{P}_\mathcal{X}(\{x\in\mathcal{X}: f(x)\in\mathring{B}(y,r)\})>0$
for all $y\in\partial\mathcal{Y}$ and all $r>0$.
\end{myassumption}
This assumption states that the probability of sampling an output arbitrarily close to any point on the boundary of the true reachable set is strictly positive.
In other words, the boundary of the reachable set should be contained in the support of the distribution of the output samples $y_i$.
Assumption \ref{assum:XBoundary:posMeasure} is weaker than the associated assumption in \citep[Theorem 2]{LewPavone2020}, which can be restated as ``\textit{$\mathbb{P}_\mathcal{X}(f^{-1}(A))>0$ for any open set $A\subset\mathbb{R}^n$ such that $\mathcal{Y}\cap A\neq \emptyset$}''.
Indeed, Assumption \ref{assum:XBoundary:posMeasure} only considers open neighborhoods of the boundary $\partial\mathcal{Y}$, as opposed to all open sets intersecting $\mathcal{Y}$.
Selecting a sampling distribution $\mathbb{P}_\mathcal{X}$ that satisfies Assumption \ref{assum:XBoundary:posMeasure} is easy. For instance, if $\mathcal{X}$ has a smooth boundary (see Assumption \ref{assum:Theta:r_convex}), then the uniform distribution over $\mathcal{X}$ satisfies Assumption \ref{assum:XBoundary:posMeasure}.
Assumption \ref{assum:XBoundary:posMeasure} is sufficient to prove that the random set estimator $\hat\mathcal{Y}_\epsilon^M$ converges to the $\epsilon$-padded convex hull of $\mathcal{Y}$ as the number of samples $M$ increases.
Below, we prove a more general result which allows for variations of the padding radius $\epsilon$ as the number of samples increases.
\begin{thm}[Asymptotic Convergence]\label{thm:asymptotic_convergence}
Let $\bar{\epsilon}\geq 0$ and
$(\epsilon_M)_{M\in\mathbb{N}}$ be a sequence of padding radii such that $\epsilon_M\geq 0$ for all $M\in\mathbb{N}$ and $\epsilon_M\rightarrow \bar{\epsilon}$ as $M\rightarrow\infty$.
For any $\epsilon\geq 0$, define the estimator $\hat\mathcal{Y}^M_{\epsilon}=\hull\left(\{y_i\}_{i{=}1}^M\right)\oplus B(0,\epsilon)$.
Then, under Assumption \ref{assum:XBoundary:posMeasure},
almost surely,
as $M\rightarrow\infty$,
$$
d_H(
\hat\mathcal{Y}_{\epsilon_M}^M,
\hull(\mathcal{Y})\oplus B(0,\bar{\epsilon})
)
\mathop{\longrightarrow} 0.
$$
\end{thm}
\begin{proof}
We refer to Appendix \ref{apdx:proof:thm:asymptotic_convergence}.
We leverage \citep[Proposition 1.7.23]{Molchanov_BookTheoryOfRandomSets2017} which states sufficient conditions for the convergence of random compact sets
and use properties of the convex hull to relax the corresponding assumption in \citep{LewPavone2020} with Assumption \ref{assum:XBoundary:posMeasure}.
\end{proof}
\vspace{-3mm}
Practically, Theorem \ref{thm:asymptotic_convergence} justifies using $\epsilon$-\textsc{RandUP}\xspace for general continuous maps $f$ and compact sets $\mathcal{X}$.
This consistency result implies that choosing any converging sequence of padding radii (e.g., $\epsilon_M=1/M$) guarantees the convergence of the random set estimator $\hat\mathcal{Y}_{\epsilon_M}^M$ to the $\bar{\epsilon}$-padded convex hull of the true reachable set.
As a particular case, selecting a constant padding radius $\epsilon$ (which yields $\epsilon$-\textsc{RandUP}\xspace) guarantees that $\hat\mathcal{Y}_{\epsilon}^M$ converges to the $\epsilon$-padded convex hull $\hull(\mathcal{Y})\oplus B(0,\epsilon)$.
Compared to \citep[Theorem 2]{LewPavone2020}, which only treats the case with a constant zero padding radius $\epsilon=0$ (i.e., without $\epsilon$-padding the convex hull of the output samples), Theorem \ref{thm:asymptotic_convergence} allows for variations of the padding radii $\epsilon_M$ and is proved under weaker assumptions.
Instead of relying on
$\epsilon$-covering arguments (e.g., see Corollary 1 in \citep{Dumbgen1996} which
assumes that $\mathcal{Y}$ is convex), we use \citep[Proposition 1.7.23]{Molchanov_BookTheoryOfRandomSets2017} to conclude asymptotic convergence.
This proof technique allows deriving a general result that does not depend on the exact sampling density along the boundary $\partial\mathcal{Y}$ and uses a (possibly non-decreasing) sequence of padding radii $\epsilon_M$ converging arbitrarily slowly to some constant $\bar{\epsilon}\geq 0$.
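To make the estimator and the convergence statement concrete, the following minimal sketch (an illustration, not the paper's implementation) considers the toy case where $\mathcal{X}$ is the unit disk, $f$ is the identity, and $\epsilon=0$, so that $\hull(\mathcal{Y})$ is the disk itself; it assumes numpy and approximates the Hausdorff distance via support functions over a grid of directions:

```python
import numpy as np

def sample_support(points, directions):
    # Support function of conv(points): h(u) = max_i <y_i, u>.
    return (points @ directions.T).max(axis=0)

def hausdorff_gap(samples, directions):
    # For convex A <= B, d_H(A, B) = sup_u (h_B(u) - h_A(u)); here B is the
    # unit disk, so h_B(u) = 1.  Maximizing over a finite grid of directions
    # only approximates the true supremum.
    return float((1.0 - sample_support(samples, directions)).max())

rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)

errors = {}
for M in (50, 5000):
    # Uniform samples on the unit disk (radius = sqrt(u) for uniformity);
    # f is the identity, so y_i = x_i and hull(Y) is the unit disk.
    r = np.sqrt(rng.uniform(size=M))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=M)
    ys = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
    errors[M] = hausdorff_gap(ys, dirs)

print(errors)  # the Hausdorff gap shrinks as M grows
```

Since both sets are convex, $d_H$ equals the sup-norm gap between their support functions, which is why no explicit convex-hull computation is needed in the sketch.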
\section{Finite-sample analysis}\label{sec:finite_sample}
Theorem \ref{thm:asymptotic_convergence} provides asymptotic convergence guarantees that support the application of $\epsilon$-\textsc{RandUP}\xspace in general scenarios (e.g., as a baseline for offline validation in complex problem settings), but does not provide finite-sample guarantees which are of practical interest in safety-critical applications.
Deriving stronger statistical guarantees requires leveraging more information about the structure of the problem. We derive finite-sample rates under general assumptions in Section \ref{sec:finite_sample:general} and analyze a particular case in Section \ref{sec:finite_sample:rconvex}.
We discuss practical implications of our results in Section \ref{sec:finite_sample:insights}.
\subsection{General finite-sample statistical guarantees}\label{sec:finite_sample:general}
To derive convergence rates and outer-approximation guarantees given a finite number of samples $M$,
we first make an assumption
about the smoothness of the reachability map $f$.
\begin{myassumption}\label{assum:f:lipschitz}
The reachability map $f:\mathbb{R}^p\rightarrow\mathbb{R}^n$ is $L$-Lipschitz: for some constant $L\geq 0$,
$\|f(x_1)-f(x_2)\|
\leq
L\,\|x_1-x_2\|\ \
\text{for all }\ x_1,x_2\in\mathcal{X}$.
\end{myassumption}
Next, we make an assumption about the sampling distribution $\mathbb{P}_\mathcal{X}$ along the input set boundary $\partial\mathcal{X}$.
\begin{myassumption}\label{assum:sampling_density}
Given $\epsilon,L\,{>}\,0$, there exists $\Lambda_{\epsilon}^{L}\,{>}\,0$ such that $\mathbb{P}_\mathcal{X}\left(B\left(x,\frac{\epsilon}{2L}\right)\right)\,{\geq}\, \Lambda_{\epsilon}^{L}$ for all
$x\in\partial\mathcal{X}$.
\end{myassumption}
Given any boundary input $x\in\partial\mathcal{X}$,
the constant $\Lambda_{\epsilon}^{L}$ characterizes the probability of sampling an input $x_i$ that is $\epsilon/(2L)$-close to $x$.
Selecting a sampling distribution that satisfies Assumption \ref{assum:sampling_density} is simple; we provide examples in Sections \ref{sec:finite_sample:rconvex} and \ref{sec:results}.
As we show next, these two assumptions are sufficient to derive finite-sample convergence rates for $\epsilon$-\textsc{RandUP}\xspace.
Recall that $D(\partial\mathcal{X}, d)$ denotes the $d$-packing number of $\partial\mathcal{X}$, which is necessarily finite by the compactness of $\mathcal{X}$.
\begin{thm}[Finite-Sample Bound] \label{thm:conservative_finite_sample}
Define the estimator $\hat\mathcal{Y}^M=\hull\left(\{y_i\}_{i{=}1}^M\right)$ and the probability threshold
$
\delta_M=
D(\partial\mathcal{X},\epsilon/(2L))(1 -
\Lambda_{\epsilon}^{L}
)^M
$.
Then, under Assumptions \ref{assum:f:lipschitz} and \ref{assum:sampling_density}, with probability at least $1-\delta_M$,
$$
d_H(
\hat\mathcal{Y}^M,
\hull(\mathcal{Y})
)\leq \epsilon
\quad\text{and}\quad
\mathcal{Y}\subseteq\hat\mathcal{Y}_\epsilon^M.
$$
\end{thm}
\begin{proof}
We refer to Appendix \ref{apdx:proof:thm:conservative_finite_sample} for a complete proof.
\end{proof}
Using a similar analysis,
one could derive convergence rates for the $\epsilon$-padded union-of-balls estimator
\citep{Devroye1980,Baillo2001}; these rates would depend on the packing number of the entire input set, $D(\mathcal{X},\epsilon)$.
Since
$D(\partial\mathcal{X},\epsilon)\leq D(\mathcal{X},\epsilon)$ in general,
Theorem \ref{thm:conservative_finite_sample} indicates that
using a convex hull is more sample-efficient than a union of balls. The convex hull is better suited when $\mathcal{Y}$ is convex, or when an approximation of $\hull(\mathcal{Y})$ suffices for the downstream application, as is typical in control settings that rely on convex reachable set approximations, see \citep{LewPavone2020}.
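The bound of Theorem \ref{thm:conservative_finite_sample} decays geometrically in $M$, which makes it easy to invert: given a target failure probability $\delta$, one can solve $D(\partial\mathcal{X},\epsilon/(2L))(1-\Lambda_{\epsilon}^{L})^M \le \delta$ for $M$. A small sketch, using illustrative (hypothetical) values of the packing number and of $\Lambda_{\epsilon}^{L}$:

```python
import math

def delta_M(packing_number, Lambda, M):
    # Failure probability bound from the finite-sample theorem:
    # delta_M = D(dX, eps/(2L)) * (1 - Lambda)^M.
    return packing_number * (1.0 - Lambda) ** M

def samples_needed(packing_number, Lambda, delta_target):
    # Smallest M with delta_M <= delta_target, obtained by solving
    # D * (1 - Lambda)^M <= delta for M.
    return math.ceil(math.log(delta_target / packing_number)
                     / math.log(1.0 - Lambda))

# Illustrative numbers (not from the paper): D = 100 boundary patches,
# each hit with probability Lambda = 0.01 per sample.
D, Lam = 100, 0.01
M = samples_needed(D, Lam, 1e-4)
print(M, delta_M(D, Lam, M))  # M = 1375 suffices for delta <= 1e-4
```

The logarithmic dependence on $\delta$ means tightening the confidence level is cheap; the dominant cost is the packing number and the per-patch probability $\Lambda_{\epsilon}^{L}$.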
\subsection{Analysis of a particular setting: smooth input set and continuous distribution}\label{sec:finite_sample:rconvex}
\begin{wrapfigure}{R}{0.33\linewidth}
\begin{minipage}{0.95\linewidth}
\centering
\vspace{-1mm}
\includegraphics[width=0.95\linewidth,trim=0 40 0 0, clip]{figs/rconvex/Xrconvex11.png}
\centering
\includegraphics[width=0.95\linewidth,trim=0 70 0 0, clip]{figs/rconvex/Xrconvex22.png}
\caption{
\textbf{Top}: sets $\mathcal{X}$ satisfying Assumption \ref{assum:Theta:r_convex} can be non-convex, have holes, and be disconnected.
\textbf{Bottom}: if $\mathcal{X}^\comp$ is not $r$-convex, it is still possible to find a conservative approximation that is $r$-convex.}\label{fig:rconvex}
\vspace{-10mm}
\end{minipage}
\end{wrapfigure}
In many applications, the boundary of the input set is smooth (e.g., $\mathcal{X}$ is a $2$-norm ball).
In this setting, we can apply Theorem \ref{thm:conservative_finite_sample} to derive finite-sample guarantees for general continuous sampling distributions. We state this smoothness assumption below.
\begin{myassumption}\label{assum:Theta:r_convex}
$\mathcal{X}^\comp$ is $r$-convex for some $r>0$. Equivalently,
for any $x\in\partial\mathcal{X}$,
there exists $\tilde{x}\in\mathcal{X}$ such that
$x\in B(\tilde{x},r)\subseteq \mathcal{X}$.
\end{myassumption}
Assumption \ref{assum:Theta:r_convex} guarantees that for any parameter $x$ on the boundary $\partial\mathcal{X}$, one can find a ball of radius $r$ contained in $\mathcal{X}$ that also contains $x$, see Figure \ref{fig:rconvex}.
This assumption corresponds to a general inwards-curvature condition of the boundary $\partial\mathcal{X}$.
It is a common assumption in the literature \citep{Walther1997,Rodriguez2016,RodriguezCasal2019,AriasCastro2019} and is related to the notion of reach \citep{Federer1959,Cuevas2009,AamariPhD2017} that bounds the curvature of the boundary $\partial\mathcal{X}$.
To guarantee its satisfaction,
one can replace $\mathcal{X}$ with $\mathcal{X}\oplus B(0,r)$ \citep{Walther1997} before performing reachability analysis, which would yield a more conservative estimate of $\mathcal{Y}$.
Next, we state an assumption about the sampling distribution $\mathbb{P}_\mathcal{X}$.
\begin{myassumption}\label{assum:sampling_density:cor}
$\mathbb{P}_\mathcal{X}(A)\geq p_0\lambda(A)$ for some constant $p_0>0$ and all
measurable sets $A\subset\mathcal{X}$.
\end{myassumption}
This assumption states that the sampling distribution
admits a density that is bounded below on $\mathcal{X}$.
Specifically, there exists a density function $p_\mathcal{X}:\mathbb{R}^p\rightarrow\mathbb{R}_+$ such that $
\mathbb{P}_\mathcal{X}(A)
=
\int_{A} p_{\mathcal{X}}(x)\dd x\geq p_0\int_{A} \dd x=p_0\lambda(A)$ for any measurable subset $A\subset\mathcal{X}$.
For instance, the uniform distribution over $\mathcal{X}$ satisfies this assumption.
Similarly to Assumption \ref{assum:sampling_density}, this density assumption
can be relaxed to neighborhoods
of $\partial\mathcal{X}$; we leave this extension for future work.
We obtain the following corollary.
\begin{cor}
\label{cor:conservative_finite_sample}
Define the estimator $\hat\mathcal{Y}^M=\hull\left(\{y_i\}_{i{=}1}^M\right)$,
the offset vector
$\vec{r}=(r,0,\dots,0)\in\mathbb{R}^p$,
the volume $\Lambda_{\epsilon}^{r,L}=\lambda\big(
B(0,\epsilon/(2L))
\cap B(\vec{r},r)
\big)$,
and the threshold
$
\delta_M=
D(\partial\mathcal{X},\epsilon/(2L))\smash{(1 -
p_0 \Lambda_{\epsilon}^{r,L}
)^M}
$.
Then, under Assumptions \ref{assum:f:lipschitz}, \ref{assum:Theta:r_convex} and \ref{assum:sampling_density:cor}, with probability at least $ 1-\delta_M$,
$$
d_H(
\hat\mathcal{Y}^M,
\hull(\mathcal{Y})
)\leq \epsilon
\quad \text{ and }\quad\
\mathcal{Y}\subseteq\hat\mathcal{Y}_\epsilon^M.
$$
\end{cor}
\begin{proof}
We refer to Appendix \ref{apdx:proof:cor:conservative_finite_sample}.
We first prove that Assumptions \ref{assum:Theta:r_convex} and \ref{assum:sampling_density:cor} imply that Assumption \ref{assum:sampling_density} holds with $\Lambda_{\epsilon}^{L}=p_0\Lambda_{\epsilon}^{r,L}$.
The finite-sample bound then follows by applying Theorem \ref{thm:conservative_finite_sample}.
\end{proof}
The constant $\Lambda_{\epsilon}^{r,L}$ corresponds to the $p$-dimensional Lebesgue volume of two hyperspherical caps and can be computed analytically, see \citep{Li2011,Petitjean2013} and Appendix \ref{appendix:spherical_caps}.
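For $p=2$, the constant $\Lambda_{\epsilon}^{r,L}$ is the area of a circular lens. The following sketch computes it with the standard circle-circle intersection formula and cross-checks the value by Monte Carlo (assuming numpy; the radii $\rho=\epsilon/(2L)$ and $r$ are illustrative):

```python
import math
import numpy as np

def lens_area(rho, r):
    # Area of B(0, rho) ∩ B((r, 0), r) in the plane, i.e. two circular
    # caps, via the standard circle-circle intersection formula with
    # center distance d = r.  Assumes 0 < rho <= 2 r.
    d = r
    t1 = rho**2 * math.acos((d**2 + rho**2 - r**2) / (2 * d * rho))
    t2 = r**2 * math.acos((d**2 + r**2 - rho**2) / (2 * d * r))
    t3 = 0.5 * math.sqrt((-d + rho + r) * (d + rho - r)
                         * (d - rho + r) * (d + rho + r))
    return t1 + t2 - t3

# Monte Carlo cross-check on the bounding box of the small ball.
rng = np.random.default_rng(0)
rho, r = 0.5, 1.0
pts = rng.uniform(-rho, rho, size=(200_000, 2))
inside = (np.linalg.norm(pts, axis=1) <= rho) \
    & (np.linalg.norm(pts - np.array([r, 0.0]), axis=1) <= r)
mc_area = inside.mean() * (2 * rho) ** 2
print(lens_area(rho, r), mc_area)
```

In higher dimensions the same quantity is the sum of two hyperspherical-cap volumes, for which the cited references give closed forms.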
\subsection{Insights: the difficulty of reachability analysis and algorithmic design}\label{sec:finite_sample:insights}
Theorem \ref{thm:conservative_finite_sample} reveals which characteristics of the problem make reachability analysis challenging:
\begin{itemize}[leftmargin=4.5mm]
\setlength\itemsep{0.0mm}
\item \textbf{Assuming the smoothness of $f$ is necessary:}
given an input set $\mathcal{X}$ and a sampling distribution $\mathbb{P}_{\mathcal{X}}$,
one can construct problems for which sampling-based reachability analysis algorithms require arbitrarily many samples to compute an $\epsilon$-accurate approximation of $\mathcal{Y}$, see Section \ref{sec:results:sensitivity}. To derive finite-sample rates, assuming that the reachability map $f$ is $L$-Lipschitz (Assumption \ref{assum:f:lipschitz}) is necessary if only assumptions on input coverage density (Assumption \ref{assum:sampling_density}) are available.
\item \textbf{The smoother the easier}:
a smaller Lipschitz constant $L$
and a larger radius parameter $r$
induce
tighter bounds in Theorem \ref{thm:conservative_finite_sample}, requiring a smaller number of samples $M$ to obtain a desired accuracy with high probability $1-\delta_M$.
Indeed, such conditions guarantee a lower bound on the probability of sampling outputs $y_i=f(x_i)\in\mathcal{Y}$ that are close to the boundary $\partial\mathcal{Y}$, which is necessary to accurately reconstruct the true convex hull of the reachable set from samples.
\item \textbf{Scalability}: by Theorem \ref{thm:conservative_finite_sample}, the number of samples required to reach a desired $\epsilon$-accuracy with high probability depends on the packing number $D(\partial\mathcal{X},\epsilon/(2L))$. This constant characterizes the size of the parameter space
in terms of dimensionality
(the number of different parameters) and volume (variations of each parameter).
Given any $\mathcal{X}\,{\in}\,\K$ and $d\,{=}\,\sup_{x\in\partial\mathcal{X}}\|x\|$, a simple and general bound for the packing number is
$
D(\partial\mathcal{X},\epsilon)
\,{\leq}\,
\left(
2d\sqrt{p}/\epsilon
\right)^p
$ \citep{ShalevShwartz2009}.
\end{itemize}
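Combining the general bound on $D(\partial\mathcal{X},\epsilon)$ above with the expression for $\delta_M$ in Theorem \ref{thm:conservative_finite_sample} makes the dimension dependence explicit. The sketch below (illustrative constants; it assumes a fixed per-patch sampling probability $\Lambda$, which in practice also degrades with dimension) computes the number of samples needed as the input dimension grows:

```python
import math

def packing_bound(d, dim, eps):
    # General bound D(dX, eps) <= (2 d sqrt(dim) / eps)^dim.
    return (2 * d * math.sqrt(dim) / eps) ** dim

def required_samples(D, Lambda, delta):
    # Smallest M with D * (1 - Lambda)^M <= delta.
    return math.ceil(math.log(delta / D) / math.log(1 - Lambda))

# Hypothetical numbers: unit-norm boundary (d = 1), packing scale
# eps = 0.1, per-patch sampling probability Lambda = 0.05, delta = 1e-3.
Ms = [required_samples(packing_bound(1.0, dim, 0.1), 0.05, 1e-3)
      for dim in (1, 2, 4, 8)]
print(Ms)  # required samples grow rapidly with the input dimension
```

Even with $\Lambda$ held fixed, the sample requirement grows with the exponential packing bound, which is consistent with the scalability discussion above.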
\section{Results and applications}\label{sec:results}
We perform a sensitivity analysis in Section \ref{sec:results:sensitivity} to illustrate the insights from Theorem \ref{thm:conservative_finite_sample}.
In Section \ref{sec:results:verif_closed_nn}, we compute the reachable sets of a dynamical system with a simple neural network policy and compare with prior work.
Finally, in Section \ref{sec:results:mpc}, we embed $\epsilon$-\textsc{RandUP}\xspace in a model predictive control (MPC) framework to reliably control a robotic platform.
Our code and hardware results are available at {\scriptsize \url{https://github.com/StanfordASL/RandUP}} and {\scriptsize \url{https://youtu.be/sDkblTwPuEg}}.
All computation times are measured on a computer with a 3.70GHz Intel Core i7-8700K CPU.
\newpage
\phantom{asdf}
\vspace{-14mm}
\subsection{Sensitivity analysis}\label{sec:results:sensitivity}
\begin{wrapfigure}{R}{0.40\linewidth}
\begin{minipage}{0.95\linewidth}
\vspace{-11mm}
\centering
\includegraphics[width=1\linewidth]{figs/results/sensitivity/alpha_lip.png}
\caption{Results for the sensitivity analysis in Section \ref{sec:results:sensitivity}. Experimental results are shown with solid lines, theoretical upper bounds with dashed lines.}\label{fig:sensitivity}
\vspace{-3mm}
\end{minipage}
\end{wrapfigure}
We analyze the sensitivity of $\epsilon$-\textsc{RandUP}\xspace to the sampling distribution and the smoothness of the reachability map.
We consider a $2$-dimensional input ball $\mathcal{X}=B(0,1)$ and the map $f(x)=(Lx_1,x_2)$ with $L\geq 1$. Clearly, $\mathcal{X}^\comp$ is $1$-convex and $f$ is $L$-Lipschitz continuous, so
Corollary \ref{cor:conservative_finite_sample} applies for any sampling distribution satisfying Assumption \ref{assum:sampling_density:cor}.
We consider a distribution $\mathbb{P}_\mathcal{X}^\alpha$ that depends on a parameter $\alpha\geq 1$, such that $\mathbb{P}_\mathcal{X}^\alpha$ varies from a uniform distribution over $\mathcal{X}$ for $\alpha=1$ to a uniform distribution over the boundary $\partial\mathcal{X}$ as $\alpha\rightarrow\infty$. Given $\delta_M=10^{-3}$, we determine the minimum padding $\epsilon$ guaranteeing $\mathbb{P}(
d_H(
\hat\mathcal{Y}^M,
\mathcal{Y}
)\leq \epsilon
)\geq 1-\delta_M$ using Corollary \ref{cor:conservative_finite_sample}, see Appendix \ref{apdx:sensitivity}.
We take $M=1000$ samples and present results in Figure \ref{fig:sensitivity}.
The observed errors are smaller than the predicted finite-sample bounds,
and distributions with a higher probability of sampling close to the boundary
(i.e., larger values of $\alpha$) perform better, yielding lower Hausdorff distance errors.
Also, $\epsilon$-\textsc{RandUP}\xspace performs better on problems with smoother reachability maps,
as is visible from our empirical evaluation and theoretical bounds on the Hausdorff distance. This validates the discussion in Section \ref{sec:finite_sample:insights}.
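The boundary-bias effect can be reproduced in a few lines. The sketch below (assuming numpy) uses one simple boundary-biased family, radius $=u^{1/(2\alpha)}$ with $u$ uniform, which is an illustrative choice and not necessarily the exact distribution $\mathbb{P}_\mathcal{X}^\alpha$ used in the experiments; $\alpha=1$ recovers the uniform distribution on the disk:

```python
import numpy as np

L = 3.0  # stretch factor of the map f(x) = (L x1, x2)

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def hausdorff_error(ys, dirs):
    # d_H(hull(samples), hull(Y)) via support functions: hull(Y) is the
    # ellipse with support function h(u) = sqrt((L u1)^2 + u2^2).
    h_true = np.sqrt((L * dirs[:, 0]) ** 2 + dirs[:, 1] ** 2)
    h_hat = (ys @ dirs.T).max(axis=0)
    return float((h_true - h_hat).max())

def run(alpha, M, rng):
    # Boundary-biased sampling on the unit disk: radius = u^(1/(2 alpha)).
    # alpha = 1 is uniform on the disk; larger alpha concentrates mass
    # near the boundary (illustrative family, not the paper's exact one).
    rad = rng.uniform(size=M) ** (1.0 / (2.0 * alpha))
    phi = rng.uniform(0.0, 2.0 * np.pi, size=M)
    xs = np.stack([rad * np.cos(phi), rad * np.sin(phi)], axis=1)
    ys = np.stack([L * xs[:, 0], xs[:, 1]], axis=1)
    return hausdorff_error(ys, dirs)

rng = np.random.default_rng(0)
err_uniform = np.mean([run(1.0, 1000, rng) for _ in range(5)])
err_boundary = np.mean([run(8.0, 1000, rng) for _ in range(5)])
print(err_uniform, err_boundary)  # boundary-biased sampling is more accurate
```

Averaged over a few seeds, the boundary-biased distribution attains a smaller Hausdorff error for the same sample budget, matching the trend in Figure \ref{fig:sensitivity}.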
\vspace{-2mm}
\subsection{Verification of neural network controllers}\label{sec:results:verif_closed_nn}
\begin{figure}[htb!]
\vspace{-3mm}
\centering
\includegraphics[width=0.95\linewidth]{figs/results/nn_controller/combined_nn_fig3.png}
\vspace{-3mm}
\caption{Reachable sets computed in Section \ref{sec:results:verif_closed_nn} for a total prediction horizon $N=9$. Sets from the formal method \textsc{ReachLP}\xspace are shown in green: dashed lines correspond to no input splitting, and solid lines to splitting $\mathcal{X}_0$ into $16$ components.
We use $M=10^3$ samples for all sampling-based methods and $\epsilon=0.02$.
}
\label{fig:nn_controller:all}
\vspace{-3mm}
\end{figure}
Next, we consider the verification of a neural network controller $u_t=\pi_{\textrm{nn}}(x_t)$ for a known linear dynamical system $x_{t+1}=Ax_t+Bu_t$, where $t\in\mathbb{N}$ denotes a time index, and $x_t\in\mathbb{R}^2$ and $u_t\in\mathbb{R}$ denote the state and control input.
Given a rectangular set of initial states $\mathcal{X}_0\subset\mathbb{R}^2$,
the problem consists of estimating the reachable set at time $t\in\mathbb{N}$ defined as
$
\mathcal{X}_t=\{(A(\cdot)+B\pi_{\textrm{nn}}(\cdot))\circ\dots\circ
(Ax_0+B\pi_{\textrm{nn}}(x_0)): x_0\in\mathcal{X}_0\}$.
Defining $(\mathcal{X},\mathcal{Y})=(\mathcal{X}_0,\mathcal{X}_t)$ and $f(x)=(A(\cdot)+B\pi_{\textrm{nn}}(\cdot))\circ\dots\circ
(Ax+B\pi_{\textrm{nn}}(x))$, we see that this problem fits the mathematical form described in Section \ref{sec:intro}.
We use a ReLU network $\pi_{\textrm{nn}}$ from \citep{Everett21_journal} with two layers of $5$ neurons each.
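A minimal sketch of this setup (assuming numpy; the dynamics matrices and the random ReLU weights are illustrative placeholders, not the system and network from \citep{Everett21_journal}) rolls out sampled initial states through the closed loop and summarizes the sample hull by its bounding box:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy double-integrator dynamics and a small random ReLU policy.  These
# weights are illustrative placeholders only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
W1 = rng.normal(size=(5, 2)); b1 = np.zeros(5)
W2 = rng.normal(size=(1, 5)); b2 = np.zeros(1)

def policy(x):
    # Two-layer ReLU network u = W2 relu(W1 x + b1) + b2.
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def rollout(x0, t):
    # Closed-loop map x_{t+1} = A x_t + B pi_nn(x_t), iterated t times.
    x = x0
    for _ in range(t):
        x = A @ x + B @ policy(x)
    return x

# epsilon-RandUP for X_t: sample x0 in a rectangular initial set,
# roll out, and take the (padded) convex hull of the final states.
M, t = 1000, 4
x0s = rng.uniform([2.5, -0.25], [3.0, 0.25], size=(M, 2))
xts = np.array([rollout(x0, t) for x0 in x0s])
lo, hi = xts.min(axis=0), xts.max(axis=0)  # bounding box of the sample hull
print(lo, hi)
```

The same loop applies unchanged to deeper networks or longer horizons; only the cost of evaluating `rollout` grows.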
\begin{wrapfigure}{R}{0.40\linewidth}
\begin{minipage}{0.95\linewidth}
\vspace{-2.5mm}
\centering \includegraphics[width=1\linewidth,trim=0 30 0 0, clip]{figs/results/nn_controller/exp_2021_12_06__18_07_54.pkl_dH.png}
\includegraphics[width=1\linewidth]{figs/results/nn_controller/exp_2021_12_06__18_07_54.pkl_comp_time.png}
\vspace{-8mm}
\caption{Neural network verification analysis in Section \ref{sec:results:verif_closed_nn}: we report the computation time of each algorithm and their averaged Hausdorff distance error (with $\epsilon{=}0$ for $\epsilon$-\textsc{RandUP}\xspace and \textsc{GoTube}\xspace) over $100$ tries when estimating $\mathcal{Y}=\mathcal{X}_4$.}\label{fig:nn_controller:M_vs_dH_time}
\vspace{-4mm}
\end{minipage}
\end{wrapfigure}
We compare $\epsilon$-\textsc{RandUP}\xspace with the formal method \textsc{ReachLP}\xspace \citep{Everett21_journal}\footnote{Comparisons with \textsc{ReachSDP}\xspace \citep{Hu2020}, which is more conservative than \textsc{ReachLP}\xspace, show a similar trend.}
and with two recently-derived sampling-based approaches: the kernel method proposed in \citep{ThorpeL4DC2021} and \textsc{GoTube}\xspace \citep{Gruenbacher2021}. We implement \textsc{GoTube}\xspace using the $\epsilon$-\textsc{RandUP}\xspace algorithm where we replace the last convex hull bounding step with an outer-bounding ball.
As ground-truth, we use the reachable sets from $\epsilon$-\textsc{RandUP}\xspace with $\epsilon\,{=}\,0$ and $M\,{=}\,10^6$, which is motivated by the asymptotic results from Theorem \ref{thm:asymptotic_convergence} and was previously done in \citep{Everett21_journal}.
We refer to Appendix \ref{apdx:exps:nn} for details and present results in Figures \ref{fig:nn_controller:all} and \ref{fig:nn_controller:M_vs_dH_time}.
\textbf{Formal methods} that explicitly bound the output of each layer of the neural network can guarantee that
their reachable set approximations are always conservative.
However, obtaining tight approximations with \textsc{ReachLP}\xspace requires splitting the input set: a computationally expensive procedure (Fig.\,\ref{fig:nn_controller:M_vs_dH_time}, bottom).
Figures \ref{fig:nn_controller:all} and \ref{fig:nn_controller:M_vs_dH_time} show that \textsc{ReachLP}\xspace is more conservative than $\epsilon$-\textsc{RandUP}\xspace even when considering polytopic outputs with eight facets.
As shown in Figure \ref{fig:nn_controller:all} (right),
the conservatism of these methods increases over time. This shows that even when considering small neural networks, verifying safety specifications over long horizons remains an open challenge.
\textbf{Sampling-based} approaches
do not suffer from the long-horizon conservatism of formal methods.
This comes at the cost of providing only probabilistic guarantees (which rely on knowledge of the Lipschitz constant of the model), as opposed to deterministic conservatism guarantees.
$\epsilon$-\textsc{RandUP}\xspace and \textsc{GoTube}\xspace have comparable computation time\footnote{
Plotting the kernel-based level set estimator in \citep{ThorpeL4DC2021} from $M$ samples requires classifying a dense grid of points. To evaluate the computation time of this method, we only account for the time to classify $M$ new samples.} and are significantly faster than other approaches.
$\epsilon$-\textsc{RandUP}\xspace is significantly more accurate than prior work, especially for larger values of $M$.
Also, the results from Theorem \ref{thm:conservative_finite_sample} allow for principled hyperparameter selection for $\epsilon$-\textsc{RandUP}\xspace: given $\epsilon=10^{-2}$,
sampling $3000$ uniformly-distributed inputs on $\partial\mathcal{X}$ is sufficient for the output sets to be conservative with probability at least $1-10^{-4}$ (for $L=1$, see Section \ref{apdx:exps:nn}).
These experiments show that for short-horizon problems ($5$ steps) with relatively simple network architectures, both \textsc{ReachLP}\xspace and $\epsilon$-\textsc{RandUP}\xspace return accurate reachable set approximations.
For longer-horizon problems ($9$ steps) with networks of moderate dimensions (which allows using existing methods to pre-compute a Lipschitz constant, see \citep{Fazlyab2019} and Section \ref{appendix:lipschitz_relu}), $\epsilon$-\textsc{RandUP}\xspace is guaranteed to efficiently return non-overly-conservative reachable set approximations with high probability.
Finally, though we do not present such results here,
the generality of $\epsilon$-\textsc{RandUP}\xspace allows it to tackle complex model architectures (see \citep{LewEtAl2021_2} for experiments with longer horizons and more complex networks with uncertain weights) for which no alternative methods exist, albeit without finite-sample accuracy guarantees.
\vspace{-1mm}
\subsection{Application to robust model predictive control}\label{sec:results:mpc}
Finally, we show that $\epsilon$-\textsc{RandUP}\xspace can be embedded in a robust MPC formulation to reliably control a planar spacecraft system actuated by cold-gas thrusters. Its state at time $t\geq 0$ is denoted as
$x_t\in\mathbb{R}^6$ and its control inputs are given as $u_t\in\mathbb{R}^3$.
We use an auxiliary linear feedback controller \citep{LewEtAl2021_2} and an uncertain linear model $x_{t+1}=f(x_t,u_t,m,F)$ that depends on an uncertain mass $m\in[10,18] \,\textrm{kg}$ (depending on the payload transported by the robot and the current weight of the gas tanks) and an unknown force $F=(F_x,F_y)\in[-0.015,0.015]^2 \,\textrm{N}$ that accounts for the tilt of the table.
To control the system from an initial state $x_0\in\mathbb{R}^n$ to a goal region $\mathcal{X}_{\text{goal}}\subset\mathbb{R}^n$ while
minimizing fuel consumption and
remaining in a feasible set $\mathcal{X}_{\text{free}}$ (i.e., avoiding obstacles and respecting velocity bounds), we consider the following MPC formulation:
\begin{subequations}
\label{eq:full_problem}
\begin{align}
\mathop{\text{min}}_{(\mu,\nu)}
\quad &
\scalebox{0.95}{$\sum_{t=1}^{N}$} (\mu_t-x_{\textrm{goal}})^\top Q(\mu_t-x_{\textrm{goal}})
+
\scalebox{0.95}{$\sum_{t=1}^{N}$}
\nu_t^\top R \nu_t,
\quad
\textrm{s.t.}
\quad\,
\mu_0=x_0,
\label{eq:cost:measure}
\\[-1mm]
\textrm{ }
\quad
&
\mu_{t+1} = f(\mu_t,\nu_t, \bar{m}, \bar{F}),
\ \
\nu_t\in\mathcal{U},
\ \
\mathcal{X}_t(\nu) \subset \mathcal{X}_{\text{free}},
\ \
\mathcal{X}_N(\nu) \subset \mathcal{X}_{\text{goal}},
\ \
\ {\tiny t\,{=}\,0, \mydots,N\,{-}\,1}
.
\label{eq:robust_constraints_orig}
\end{align}
\end{subequations}
where $\mu=(\mu_0,\dots,\mu_N)$
and $\nu=(\nu_0,\dots,\nu_{N-1})$
are optimization variables representing the nominal state and control trajectories, $(\bar{m},\bar{F}_x, \bar{F}_y)=(14,0,0)$ are nominal parameter values, $x_{\textrm{goal}}\in\mathcal{X}_{\text{goal}}$ is the center of the goal set, and the reachable sets $\mathcal{X}_t(\nu)\subset\mathbb{R}^n$ are defined as
$\mathcal{X}_t(\nu) =
\{
x_t=f(\cdot,\nu_{t-1},m,F)
\circ\dots\circ
f(x_0,\nu_0,m,F):
\
(m,F)\in[10,18]\times[-0.015,0.015]
\}$.
The numerical implementation
is described in \citep{LewPavone2020}.
With a Python implementation, $\epsilon=0.03$, and $M=10^3$, our MPC controller runs at $10\,$\textrm{Hz},
which is sufficient for this platform; the rate could be further improved, e.g., by parallelizing computations on a GPU.
We compare with an MPC baseline that does not consider uncertainty over the parameters (i.e., that assumes $(m,F)\in\{14\}{\times}\{(0,0)\}$). As shown in Figure \ref{fig:results:hardware} and in the attached video, this baseline is unsafe and collides with an obstacle.
In contrast, our reachability-aware controller is recursively feasible, satisfies all constraints, and allows safely reaching the goal.
These experiments motivate the development of efficient reachability algorithms that can be embedded in generic control frameworks to account for uncertain parameters.
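The reachable sets $\mathcal{X}_t(\nu)$ in the constraints above can be approximated with the same sampling scheme. The sketch below (assuming numpy) uses a simplified 4-state double-integrator stand-in for the free-flyer dynamics; the dynamics, time step, and control sequence are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.1

def step(x, u, m, F):
    # Illustrative planar double integrator with uncertain mass m and
    # disturbance force F; a stand-in for the free-flyer model, whose
    # actual dynamics are given in the cited references.
    pos, vel = x[:2], x[2:]
    acc = (u[:2] + F) / m
    return np.concatenate([pos + dt * vel, vel + dt * acc])

# Sample uncertain parameters and propagate a fixed control sequence to
# approximate the reachable sets X_t(nu) by epsilon-RandUP.
M, N = 500, 10
nu = [np.array([0.1, 0.0, 0.0])] * N          # hypothetical thrust plan
ms = rng.uniform(10.0, 18.0, size=M)          # uncertain mass [kg]
Fs = rng.uniform(-0.015, 0.015, size=(M, 2))  # uncertain tilt force [N]
states = np.zeros((M, 4))
for t in range(N):
    states = np.array([step(x, nu[t], m, F)
                       for x, m, F in zip(states, ms, Fs)])
spread = states.max(axis=0) - states.min(axis=0)
print(spread)  # parameter uncertainty spreads the terminal states
```

Padding the convex hull of these terminal samples by $\epsilon$ then gives the set used in the constraints $\mathcal{X}_t(\nu)\subset\mathcal{X}_{\text{free}}$ of the MPC problem.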
\begin{figure}[t]
\begin{minipage}{.32\linewidth}
\centering
\vspace{-7mm}
\includegraphics[width=1\linewidth]{figs/results/mpc/ff_problem.png}
\end{minipage}
\hspace{1mm}
\begin{minipage}{.26\linewidth}
\centering
\vspace{-5mm}
\includegraphics[width=1\linewidth]{figs/results/mpc/randup_vs_baseline_mpc_plan.png}
\end{minipage}
\begin{minipage}{.4\linewidth}
\centering
\vspace{-5mm}
\includegraphics[width=0.95\linewidth]{figs/results/mpc/controls_3.png}
\end{minipage}
\vspace{-3mm}
\caption{Application of $\epsilon$-\textsc{RandUP}\xspace to safely control a free-flyer robot in a cluttered environment (left). Using a model predictive controller that does not account for the uncertain dynamics (middle) leads to unsafe behavior, colliding with an obstacle and causing the optimization problem to be infeasible at run-time (right).
}
\label{fig:results:hardware}
\vspace{-6mm}
\end{figure}
\section{Conclusion}
We derived new asymptotic and finite-sample statistical guarantees for $\epsilon$-\textsc{RandUP}\xspace, a simple yet efficient algorithm for reachability analysis of general systems. We demonstrated its efficacy for a neural network verification task and its applicability to robust model predictive control.
In future work, we will investigate tighter finite-sample bounds by leveraging further information about the smoothness of the input set boundary $\partial\mathcal{X}$.
Of practical interest is investigating which sampling distributions enable better sample efficiency, interfacing $\epsilon$-\textsc{RandUP}\xspace with Lipschitz constant computation methods (e.g., \citep{Fazlyab2019} for neural networks),
exploring methods to scale to high-dimensional input spaces, and applying the technique to safety-aware reinforcement learning.
\acks{The authors thank Robin Brown for her helpful feedback and insightful discussions about neural network verification,
Edward Schmerling for his helpful comments and suggestions, and Adam Thorpe for helpful discussions about kernel methods.
The NASA University Leadership Initiative (grant \#80NSSC20M0163) provided funds to assist the authors with their research, but this article solely reflects the opinions and conclusions of its authors and not any NASA entity. L.J. was supported by the National Science Foundation via grant CBET-2112085.}
| {
"timestamp": "2021-12-13T02:25:14",
"yymm": "2112",
"arxiv_id": "2112.05745",
"language": "en",
"url": "https://arxiv.org/abs/2112.05745",
"abstract": "In this work, we analyze an efficient sampling-based algorithm for general-purpose reachability analysis, which remains a notoriously challenging problem with applications ranging from neural network verification to safety analysis of dynamical systems. By sampling inputs, evaluating their images in the true reachable set, and taking their $\\epsilon$-padded convex hull as a set estimator, this algorithm applies to general problem settings and is simple to implement. Our main contribution is the derivation of asymptotic and finite-sample accuracy guarantees using random set theory. This analysis informs algorithmic design to obtain an $\\epsilon$-close reachable set approximation with high probability, provides insights into which reachability problems are most challenging, and motivates safety-critical applications of the technique. On a neural network verification task, we show that this approach is more accurate and significantly faster than prior work. Informed by our analysis, we also design a robust model predictive controller that we demonstrate in hardware experiments.",
"subjects": "Systems and Control (eess.SY); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Robotics (cs.RO)",
"title": "A Simple and Efficient Sampling-based Algorithm for General Reachability Analysis",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399026119352,
"lm_q2_score": 0.7310585786300048,
"lm_q1q2_score": 0.7093022041335956
} |
https://arxiv.org/abs/1608.01189 | On the power propagation time of a graph | In this paper, we give Nordhaus-Gaddum upper and lower bounds on the sum of the power propagation time of a graph and its complement, and we consider the effects of edge subdivisions and edge contractions on the power propagation time of a graph. We also study a generalization of power propagation time, known as $k-$power propagation time, by characterizing all simple graphs on $n$ vertices whose $k-$power propagation time is $n-1$ or $n-2$ (for $k\geq 1$) and $n-3$ (for $k\geq 2$). We determine all trees on $n$ vertices whose power propagation time ($k=1$) is $n-3$, and give partial characterizations of graphs whose $k-$power propagation time is equal to 1 (for $k\geq 1$). | \subsection{General Bounds}
\abstract {In this paper, we characterize all graphs $G$ with extreme $k-$power propagation time $|G|-1$ or $|G|-2$ for $k\geq 1,$ and $|G|-3$ for $k\geq 2$. We determine all trees $T$ whose $1-$power propagation time (also called power propagation time or {\em standard} power propagation time) is $|T|-3$. Partial characterizations of graphs with $k-$power propagation time equal to 1 are also established. Finally, we consider the effects of edge subdivisions and edge contractions on the standard power propagation time of a graph, and give an upper bound on the sum of the standard power propagation time of a graph and its complement.}
\section{Introduction} Phasor Measurement Units (PMUs) are machines used by energy companies to monitor the electric power grid system. They are placed at selected electrical nodes (locations at which transmission lines, loads, and generators are connected) within the system. Due to the high cost of the machines, an extensive amount of research has been devoted to minimizing the number of PMUs while maintaining the ability to observe the entire system. In \cite{HHHH}, Haynes et al. studied this problem in terms of graphs. \\\indent An electric power grid system is modeled as a graph by letting vertices represent the electrical nodes (also called buses), and edges represent transmission lines between nodes. The {\em power domination process} is defined as follows \cite{HHHH}: A PMU placed at a vertex measures the voltage and phasor angle at that vertex, as well as the incident edges and the vertices at the endpoints of these edges. These vertices and edges are said to be {\em observed} \cite{HHHH}. The rest of the system is observed according to the following propagation rules:
\begin{enumerate}
\item[1.] Any vertex that is incident to an observed edge is observed.
\item[2.] Any edge joining two observed vertices is observed.
\item[3.] If a vertex is incident to a total of $t>1$ edges and if $t-1$ of these edges are observed, then all $t$ of these edges are observed.
\end{enumerate}
\indent We state an equivalent formulation of the power domination process as done in \cite{FHKY15}. Let $G=(V,E)$ be a graph and $v\in V(G)$. (All graphs discussed are simple graphs.) The {\em open neighborhood} of $v$, denoted $N(v),$ is given by $N(v)=\{u\in V(G) | vu\in E(G)\}.$ The {\em closed neighborhood} of $v$ is $N[v]=N(v)\cup \{v\}$. For a set $S\subseteq V(G)$, $N(S)=\cup_{s\in S} N(s)$ and $N[S]=\cup_{s\in S} N[s].$\\\indent For a set $S\subseteq V(G)$, define the following sets:
\begin{enumerate}
\item[1.] $S^{[1]}=N[S].$
\item[2.]For $t\geq 1$, $S^{[t+1]}=S^{[t]}\cup \{w\in V(G)| \hspace{1mm}\exists \hspace{1mm} v\in S^{[t]}, w\in N(v) \cap (V(G)\setminus S^{[t]}) \text{ and } |N(v) \cap (V(G)\setminus S^{[t]})|= 1\}$.
\end{enumerate}
For vertices $w$ and $v$ given in 2. above, we say $v$ {\em forces} $w$. Computing $S^{[1]}$ is the {\em domination step} and the computations of $S^{[t+1]}$ (for $t\geq 1$) are the {\em propagation steps}. A set $S$ is said to be a {\em power dominating set} if there exists an $l$ such that $S^{[l]}=V(G).$ The {\em power domination number} of $G$, denoted $\gamma_P(G)$, is the minimum cardinality over all power dominating sets of $G$. Power domination was first introduced and studied in \cite{HHHH}. \\\indent A set $S\subseteq V(G)$ is a {\em dominating set} if for each $v\in V(G)\setminus S$, there exists a $u\in S$ such that $v$ is adjacent to $u$. The {\em domination number} of a graph $G,$ denoted $\gamma(G)$, is the minimum cardinality over all dominating sets of $G$. Note that each dominating set is a power dominating set, so $\gamma_P(G)\leq \gamma(G)$ \cite{HHHH}.\\\indent The authors of \cite{Chang} introduced the following generalization of power domination, known as {\em $k-$power domination.} Let $k\geq 1$. For a set $S\subseteq V(G)$, define the following sets:
\begin{enumerate}
\item[1.] $S^{[1]}=N[S].$
\item[2.]For $t\geq 1$, $S^{[t+1]}=S^{[t]}\cup \{w\in V(G)| \hspace{1mm} \exists \hspace{1mm} v\in S^{[t]}, w\in N(v) \cap (V(G)\setminus S^{[t]}) \text{ and } |N(v) \cap (V(G)\setminus S^{[t]})|\leq k\}$.
\end{enumerate}
A set $S$ is said to be a {\em $k-$power dominating set} if there exists an $l$ such that $S^{[l]}=V(G).$ Note that in the case $k=1$, a $k-$power dominating set is simply a power dominating set. The {\em $k-$power domination number} of $G$, denoted $\gamma_{P,k}(G)$, is defined to be the minimum cardinality over all $k-$power dominating sets of $G$, and $\gamma_{P,k}(G)\leq \gamma_P(G)\leq \gamma(G)$ for all $k\geq 1$ \cite{Chang}. \\\indent Let $S$ be a $k-$power dominating set. The {\em $k-$power propagation time of $G$ with $S$}, denoted $\operatorname{ppt}_k(G,S)$ (or $\operatorname{ppt}(G,S)$ when $k=1$), is the smallest $l$ such that $S^{[l]}=V(G)$. In the case with $k=1,$ we simply write {\em power propagation time}. The {\em minimum $k-$power propagation time of $G$}, denoted $\operatorname{ppt}_k(G)$ (or $\operatorname{ppt}(G)$ when $k=1$), is given by $$\operatorname{ppt}_k(G)=\min\{\operatorname{ppt}_k(G,S)|S \text{ is a minimum $k-$power dominating set} \}.$$ The {\em maximum $k-$power propagation time of $G$}, denoted $\operatorname{PPT}_k(G)$, is given by $$\operatorname{PPT}_k(G)=\max\{\operatorname{ppt}_k(G,S)|S \text{ is a minimum $k-$power dominating set} \}.$$ The $k$-{\em power propagation time interval} of $G$ is defined as $$[\operatorname{ppt}_k(G),\operatorname{PPT}_k(G)]=\{\operatorname{ppt}_k(G), \operatorname{ppt}_k(G)+1,\ldots, \operatorname{PPT}_k(G)\}.$$ If for each $r$ in the $k-$power propagation time interval there exists a minimum $k$-power dominating set $S$ such that $\operatorname{ppt}_k(G,S)=r$, we say that the interval is {\em full}. A natural question is whether or not the $k-$power propagation time interval is full for all graphs. The $k-$power propagation time interval need not be full, as demonstrated by Example 4.5 in \cite{FHKY15}.
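The sets $S^{[t]}$ above lend themselves to direct simulation. The following Python sketch (our own illustration with hypothetical function names; vertices are labeled $0,\dots,n-1$) computes $\operatorname{ppt}_k(G,S)$ by iterating the domination step and the propagation steps exactly as defined:

```python
def ppt_k(n, edges, S, k=1):
    """Return ppt_k(G, S) for the k-power domination process started
    from S, or None when S is not a k-power dominating set of G."""
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)                  # domination step: S^[1] = N[S]
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed  # unobserved neighbors of v
            if 0 < len(white) <= k:    # v forces all of them
                forced |= white
        if not forced:
            return None                # the process stalls
        observed |= forced
        t += 1
    return t

# P_5 from its middle vertex: the domination step observes {1, 2, 3},
# and both endpoints are forced in the next step.
print(ppt_k(5, [(0, 1), (1, 2), (2, 3), (3, 4)], {2}))  # -> 2
```

For $k=1$ the star $K_{1,3}$ started from a leaf stalls at its center (two unobserved neighbors), so the sketch returns `None`; with $k=2$ the same start finishes in two steps.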
A minimum $k-$power dominating set $S$ of a graph $G$ is {\em efficient} if $\operatorname{ppt}_k(G,S)=\operatorname{ppt}_k(G)$.\\\indent {\em Zero forcing} is a game played on a graph using the following {\em color change rule:} Let $B$ be a set of vertices of $G$ that are colored blue with $V-B$ colored white. If $v$ is a blue vertex and $u$ is the only neighbor of $v$ that is colored white, then change the color of $u$ to blue. For a set $B$ of vertices that are initially colored blue, the set of blue vertices that results from applying the color change rule until no more color changes are possible is the {\em final coloring of $B$.} A set $B$ is said to be a {\em zero forcing set} if the final coloring of $B$ is the entire vertex set $V(G)$. The minimum cardinality over all zero forcing sets of $G$ is the {\em zero forcing number} of $G$, denoted $\operatorname{Z}(G)$. The zero forcing number was first introduced and studied in \cite{AIM08} as an upper bound on the linear algebraic parameter of a graph known as the maximum nullity, and independently in \cite{physics} to study the control of quantum systems. \\\indent For a given zero forcing set $B$ of $G$, construct the final coloring. The set of forces that are performed is called a {\em set of forces of} $B$. Given a set of forces $\mathcal{F}$, a {\em forcing chain} of $\mathcal{F}$ is a sequence of vertices $(v_1,...,v_k)$ such that for $i=1,...,k-1$, $v_i$ forces $v_{i+1}$ in $\mathcal{F}$ ($k=1$ is permitted) \cite{proptime}. A {\em maximal forcing chain} is a forcing chain that is not a proper subsequence of another forcing chain \cite{proptime}. Note that maximal forcing chains correspond to induced paths in the graph $G$.
\begin{obs}{\rm \cite{PD2015}}\label{NeighborhoodZFS} {\rm A set $S$ is a power dominating set of $G$ if and only if $N[S]$ is a zero forcing set of $G$, and $N(S)\setminus S$ is a zero forcing set for $G-S$.}
\end{obs}
The authors of \cite{proptime} introduced the {\em propagation time} of a zero forcing set and of a graph (given below). Many of the results given in this paper were motivated by the study of the propagation time of a graph.
\begin{defn}{\rm \cite{proptime}} {\rm Let $G =(V,E)$ be a graph and B a zero forcing set of G. Define $B^{(0)} = B$, and for $t \geq 0,B^{(t+1)}$ is the set of vertices $w$ for which there exists a vertex $b \in \cup_{s=0}^{t} B^{(s)}$ such that $w$ is the only neighbor of $b$ not in $\cup _{s=0}^{t}B^{(s)}$. The {\em propagation time} of $B$ in $G$, denoted $\operatorname{pt}(G, B)$, is the smallest integer $t_0$ such that $V = \cup_{t=0}^{t_0} B^{(t)}$. The {\em minimum propagation time of $G$} is $\operatorname{pt}(G)=\min\{\operatorname{pt}(G,B)| B \text{ is a minimum zero forcing set of $G$} \},$ and the {\em maximum propagation time of $G$} is $\operatorname{PT}(G)=\max\{\operatorname{pt}(G,B)| B \text{ is a minimum zero forcing set of $G$} \}.$ }
\end{defn}
Note that this definition is analogous to the definition of the power propagation time of a graph.
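The color change rule admits the same kind of simulation. A minimal sketch of $\operatorname{pt}(G,B)$ under the same conventions (our own naming, vertices $0,\dots,n-1$):

```python
def zf_pt(n, edges, B):
    """Propagation time pt(G, B): at every step, each blue vertex with
    exactly one white neighbor forces it.  Returns None when B is not
    a zero forcing set of G."""
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    blue = set(B)                       # B^(0) = B
    t = 0
    while len(blue) < n:
        forced = {next(iter(nbr[v] - blue))
                  for v in blue if len(nbr[v] - blue) == 1}
        if not forced:
            return None                 # no color change is possible
        blue |= forced
        t += 1
    return t

# On P_4, an endpoint is a minimum zero forcing set and forces the
# remaining three vertices one per step.
print(zf_pt(4, [(0, 1), (1, 2), (2, 3)], {0}))  # -> 3
```

In the spirit of Observation \ref{NeighborhoodZFS}, one can use this sketch to check that $N[S]$ is a zero forcing set whenever $S$ is a power dominating set.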
\indent We use $P_n, C_n, \text{ and } K_n$ to denote the path on $n$ vertices, the cycle on $n$ vertices, and the complete graph on $n$ vertices, respectively. The notation $K_n-e$ represents the complete graph on $n$ vertices minus an edge, and $L(s,t)$ denotes the lollipop graph consisting of a complete graph on $s$ vertices and a path on $t$ vertices connected with a bridge. The graph $K_{s,t}$ is the complete bipartite graph with bipartition $X,Y$ where $|X|=s$ and $|Y|=t$.\\\indent Let $G=(V,E)$ be a graph and $e=uv\in E(G)$. The graph resulting from {\em subdividing} the edge $e=uv$, denoted $G_e,$ is obtained from $G$ by adding a new vertex $w$ such that $V(G_e)=V(G)\cup \{w\}$ and $E(G_e)=(E(G)\setminus \{uv\})\cup \{uw, wv\}$. To {\em contract} the edge $e=uv$ is to identify vertices $u$ and $v$ as a single vertex $w$ such that $N(w)=(N(u)\cup N(v))\setminus \{u,v\}$. The graph obtained from $G$ by contracting the edge $e$ is denoted by $G/e$. \\\indent A {\em spider} or {\em generalized star} is a tree formed from a $K_{1,n}$ by subdividing any number of its edges any number of times. We use $sp(i_1,i_2,\ldots, i_n)$ to denote the spider obtained from $K_{1,n}$ by subdividing edge $e_j$ a total of $i_j-1$ times for $1\leq j\leq n$.
\begin{obs}{\rm
Let $G$ be a graph and $S$ a $k-$power dominating set of $G$. Then,
\begin{equation}
\operatorname{ppt}_k(G,S)\leq |G|-|S| \\ \label{pptbound1}
\end{equation} and
\begin{equation}
\operatorname{ppt}_k(G,S)-1\leq |G|-|N[S]| \\ \label{pptbound2}
\end{equation}
because at least 1 vertex must be forced at each step.}\end{obs}
\indent In Section \ref{high}, we characterize all graphs on $n$ vertices whose $k-$power propagation time is $n-1$ or $n-2$ for $k\geq 1$. For $k\geq 2,$ we characterize all graphs whose $k-$power propagation time is $n-3$, and for $k=1$ we give partial characterizations for such graphs. In Section \ref{low}, we give a partial characterization of graphs with $k-$power propagation time $1$ for $k\geq 1$. An upper bound on the sum of the power propagation time of a graph and its complement is given in Section \ref{NGbound}, and in Section \ref{operations} we consider the effects of edge subdivision and edge contraction on the power propagation time of a graph.
\section{Preliminaries}\label{prelim}
In this section, we give preliminary results that will be used as tools for characterizing graphs with high and low $k-$power propagation times.
It is clear that the $k-$power domination number of the graphs $P_n \text{ and } C_n$ is 1 for all $k\geq 1$. We determine the $k-$power propagation time for these graphs.
\begin{prop}\label{proppath}Let $P_n$ be the path on $n$ vertices. Then $\operatorname{ppt}_k(P_n)=\left \lfloor \frac{n}{2} \right \rfloor$ for all $k\geq 1$.
\end{prop}
\begin{proof}
Let $G=P_n$. Any one vertex of $G$ is a minimum $k-$power dominating set. Label the vertices of $G$ with $v_1,\ldots, v_n$ where $\{v_i, v_{i+1}\}\in E(G)$ for $i\in \{1,\ldots, n-1\}$. For any vertex $v_t$, $\operatorname{ppt}_k(G, \{v_t\})=\max\{t-1, n-t\}.$ It follows that for $n$ odd, $\operatorname{ppt}_k(G)\geq \frac{n-1}{2}$, and equality is obtained by choosing the $k-$power dominating set to be $\{v_t\}$ where $t=\frac{n+1}{2}.$ For $n$ even, $\operatorname{ppt}_k(G)\geq \frac{n}{2},$ and equality is obtained by choosing the $k-$power dominating set $\{v_t\}$ with $t\in \{\frac{n}{2}, \frac{n}{2}+1\}$.
\end{proof}
\begin{prop}\label{propcycle}Let $C_n$ be the cycle on $n$ vertices. Then $\operatorname{ppt}_k(C_n)=\left \lfloor \frac{n}{2} \right \rfloor$ for all $k\geq 1$.
\end{prop}
\begin{proof}Let $G=C_n.$ Any one vertex of $G$ is an efficient $k-$power dominating set. For $n$ even, $\operatorname{ppt}_k(G, \{v\})=\frac{n}{2}$ for all $ v\in V(G)$. For $n$ odd, $\operatorname{ppt}_k(G, \{v\})=\frac{n-1}{2}$ for all $ v\in V(G)$.
\end{proof}
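Both propositions can be confirmed by brute force for small $n$: simulate the process from every single vertex and take the minimum (a sketch with our own labels $0,\dots,n-1$; paths and cycles have $\gamma_{P,k}(G)=1$, so the minimum over singletons equals $\operatorname{ppt}_k(G)$):

```python
def ppt_k(n, edges, S, k=1):
    # simulation of the k-power domination process
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if 0 < len(white) <= k:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

def best_singleton(n, edges, k=1):
    """Minimum of ppt_k(G, {v}) over all vertices v."""
    return min(t for v in range(n)
               if (t := ppt_k(n, edges, {v}, k)) is not None)

for k in (1, 2):
    for n in range(2, 13):
        path = [(i, i + 1) for i in range(n - 1)]
        assert best_singleton(n, path, k) == n // 2
    for n in range(3, 13):
        cycle = [(i, (i + 1) % n) for i in range(n)]
        assert best_singleton(n, cycle, k) == n // 2
```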
\begin{rem}\label{remarkk2}{\rm It is a well-known fact that for a connected graph $G$ of order at least 3, there exists an efficient $k-$power dominating set of $G$ in which every vertex has degree at least 2. For if $v\in S$ is a leaf of $G$ for some efficient $k-$power dominating set $S$ and $vw\in E(G)$, then $w$ is not a leaf (since $G$ is connected and $G\neq K_2$), $S'=(S \setminus \{v\}) \cup \{w\}$ is a minimum $k-$power dominating set, and $\operatorname{ppt}_k(G, S')\leq \operatorname{ppt}_k(G,S)$. Repeating this process for each leaf in $S$, we obtain an efficient $k-$power dominating set of $G$ with no leaves.
}
\end{rem}
\begin{lem}\label{minpowerdomwithdegthree} {\rm \cite{Chang}} Let $k\geq 1$ and let $G$ be a connected graph with $\Delta(G)\geq k+2$. Then there exists a minimum $k-$power dominating set $S$ of $G$ such that $\deg(s)\geq k+2$ for each $s\in S.$
\end{lem}
Note that $\Delta(G)\geq k+2$ does not guarantee that there exists an efficient $k-$power dominating set $S$ such that $\deg(s)\geq k+2$ for each $s\in S.$ This is demonstrated in the following example with $k=1$.
\begin{ex}{\rm
Let $G$ be the graph on $n+2$ vertices ($n\geq 6$) obtained from a path $(v_1,v_2,\ldots, v_n)$ by adding a leaf to $v_2$ and adding a leaf to $v_3$. Then $S=\{v_2,v_3\}$ is the unique power dominating set such that $\deg(s)\geq 3$ for each $s\in S,$ but for $S'=\{v_2,v_4\}$, $n-4=\operatorname{ppt}(G,S')< \operatorname{ppt}(G,S)=n-3$.
}
\end{ex}
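The example is small enough to check mechanically. The sketch below (our own encoding: the path $v_1,\ldots,v_n$ becomes $0,\ldots,n-1$, with the two leaves as extra vertices) compares the two propagation times for one concrete $n$:

```python
def ppt(n, edges, S):
    # simulation of the power domination process (k = 1)
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if len(white) == 1:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

n = 7                                   # path v_1, ..., v_7 as 0, ..., 6
edges = [(i, i + 1) for i in range(n - 1)]
edges += [(1, 7), (2, 8)]               # a leaf on v_2 and a leaf on v_3
t_S = ppt(n + 2, edges, {1, 2})         # S  = {v_2, v_3}
t_Sp = ppt(n + 2, edges, {1, 3})        # S' = {v_2, v_4}
assert t_Sp == t_S - 1                  # S' propagates one step faster
```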
\begin{rem}\label{deltat}{\rm
For any $3\leq t \leq k+2 $, if $G$ is connected with $\Delta(G) \geq t,$ then there exists a minimum $k-$power dominating set $S$ such that every vertex in $S$ has degree at least $t$.
}
\end{rem}
\section{High $k$-power propagation time}\label{high}
In \cite{proptime}, it is shown that $\operatorname{pt}(G)=|G|-1$ if and only if $G$ is a path. The authors also characterize all graphs $G$ with $\operatorname{pt}(G)=|G|-2$. Here we consider graphs with high $k-$power propagation times. We first characterize all graphs $G$ with $\operatorname{ppt}_k(G)=|G|-1$ or $\operatorname{ppt}_k(G)=|G|-2$.
\begin{thm}\label{orderminus1} For a graph $G$ and $k\geq 1$, $\operatorname{ppt}_k(G)=|G|-1$ if and only if $G=K_1$ or $G=K_2.$
\end{thm}
\begin{proof} Let $S$ be an efficient $k$-power dominating set of $G$. Since $\operatorname{ppt}_k(G)=|G|-1$, then $S=\{s\}$ for some $s\in V(G)$, and $G$ is connected. Note that at most 1 vertex may be forced at each step, including the domination step, so $\deg(s)\leq 1$. By Remark \ref{remarkk2}, $|G|\leq 2$, so $G=K_1$ or $G=K_2$.
\end{proof}
\begin{thm}\label{orderminus2} Let $k\geq 1$ and let $G$ be a graph with $\operatorname{ppt}_k(G)=|G|-2$. Then $G\in \{K_1\cup K_1, K_1\cup K_2, P_3, P_4, C_3, C_4\}$.
\end{thm}
\begin{proof}
Since $\operatorname{ppt}_k(G)=|G|-2$, then for any minimum $k-$power dominating set $S$, $|S|\leq 2$ and $|N[S]|\leq 3$. Suppose $\Delta(G)\geq 3$. If $G$ is connected, by Remark \ref{deltat}, there exists a minimum $k-$power dominating set $S$ such that each $s\in S$ has degree at least 3, contradicting $|N[S]|\leq 3$. If $G$ is disconnected, then we apply Remark \ref{deltat} to a connected component of $G$ that contains a vertex of maximum degree to obtain that there exists a minimum $k-$power dominating set $S$ of $G$ that contains a vertex of degree at least 3. This contradicts $|N[S]|\leq 3$. So $\Delta(G)\leq 2$ and $G$ is the union of cycles and paths. Since $\gamma_{P,k}(G)\leq 2$, then $G$ has at most 2 components. If $G$ has exactly one component, $G$ is a path or a cycle, and it follows from Propositions \ref{proppath} and \ref{propcycle} that $G\in \{P_3, P_4, C_3, C_4\}$. Suppose $G$ has 2 components. Since $|N[S]|\leq 3$, one component is $K_1$, and by Remark \ref{remarkk2} (or Theorem \ref{orderminus1}), the other component is $K_1$ or $K_2$.
\end{proof}
Next we consider graphs on $n$ vertices whose $k-$power propagation time is $n-3$. The case with $k=1$ behaves differently than the cases with $k\geq 2$, so we first consider the latter.\\\indent We use $\mathfrak{G}$ to denote the family of connected graphs $G$ on 5 vertices with $\Delta(G)=3.$
\begin{thm}\label{orderminus3k} Let $k\geq 2$ and let $G$ be a graph with $\operatorname{ppt}_k(G)=|G|-3$. Then $G\in \{P_5, P_6, C_5, C_6,\text{sp}(1,1,1), L(3,1), K_4-e, K_4, K_1\cup P_3, K_1\cup P_4, K_1\cup C_3, K_1\cup C_4,K_2\cup K_2,\overline{K_3}, \overline{K_2} \cup K_2\}\cup \mathfrak{G}$.
\end{thm}
\begin{proof}Since $\operatorname{ppt}_k(G)=|G|-3$, for any minimum $k-$power dominating set $S$, $|S|\leq 3$ and $|N[S]|\leq 4.$ It follows from Remark \ref{deltat} that $\Delta(G)\leq 3$. \\\indent If $\Delta(G)\leq 2$, then $G$ is the union of paths and cycles. Since $\gamma_{P,k}(G)\leq 3,$ $G$ has at most 3 components. If $G$ is connected, it follows from Propositions \ref{proppath} and \ref{propcycle} that $G\in \{P_5, P_6, C_5, C_6\}.$ Suppose $G$ has two connected components, $G_1$ and $G_2$, and suppose $|G_1|\geq 3$. By applying Remark \ref{remarkk2} to $G_1$, there exists an efficient $k-$power dominating set $S$ of $G$ such that $|N_{G_1}[S]|\geq 3,$ where $N_{G_1}[S]=N[S]\cap V(G_1)$. Since $|N[S]|\leq 4$, we have $G_2=K_1$, $\operatorname{ppt}_k(G_1)=|G_1|-2,$ and it follows from Theorem \ref{orderminus2} that $G\in\{ K_1\cup P_3, K_1\cup P_4, K_1\cup C_3, K_1\cup C_4\}$. If $|G_1|\leq 2$ and $|G_2|\leq 2$, then $G=K_2\cup K_2.$ \\\indent If $G$ has 3 connected components, it follows from $|N[S]|\leq 4$ that $G\in \{\overline{K_3}, \overline{K_2} \cup K_2\}$. \\\indent Suppose $\Delta(G)=3$. Let $S$ be a minimum $k-$power dominating set such that every vertex in $S$ has degree at least 3. Since $|N[S]|\leq 4$, then $S=\{s\}$ for some $s\in V(G)$. Let $N[S]=\{s, s_1, s_2, s_3\}$. If $|G|=4,$ then $G\in \{\text{sp}(1,1,1), L(3,1), K_4-e, K_4\}.$\\\indent Next, we show that if $|G|>4$, then $|G|=5.$ Since $|N[S]|=4$ and $\operatorname{ppt}_k(G)=|G|-3$, after the domination step exactly one force is performed during each step. Without loss of generality, suppose $s_1$ forces $v$ in step 2. \\\\{\em Claim 1:} For $i\in \{1, 2,3\}$, if $u\in N(s_i)$, then $u\in \{s, s_1, s_2, s_3, v\}.$ To see this, recall that $\Delta(G)=3$. So if $s_1$ has a neighbor $u$ not in $\{s, s_2, s_3, v\},$ it has exactly one such neighbor, and it will force $u$ and $v$ in step 2, contradicting $\operatorname{ppt}_k(G)=|G|-3$. 
Similarly, if $s_i$ (for $i=2,3$) has a neighbor $u$ not in $\{s, s_1, s_2, s_3, v\}$, it has at most two such neighbors, so $s_1$ will force $v$ in step 2 and $s_i$ will force $u$ in step 2, contradicting $\operatorname{ppt}_k(G)=|G|-3.$
{\em Claim 2:} Vertex $v$ has no neighbor not in $\{s_1, s_2, s_3\}$. To see this, suppose $v$ has a neighbor $u$ not in $\{s_1, s_2, s_3\}$. Since $\Delta(G)=3$ and $v$ is adjacent to $s_1$ by assumption, then $v$ has at most two such neighbors. Then $\{s_1\}$ is a minimum $k-$power dominating set with $\operatorname{ppt}_k(G,\{s_1\})\leq |G|-4$ since $s_1$ will dominate $\{v, s\}$ in step 1, and if necessary, $s$ will force $\{s_2, s_3\}$ in step 2 and $v$ will force $u$ in step 2.
Therefore, $G$ is a connected graph on 5 vertices with $\Delta(G)=3.$ Also note that all connected graphs on 5 vertices with maximum degree 3 have $\operatorname{ppt}_k(G)=2$ (for $k\geq 2$). This completes the proof.
\end{proof}
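Several members of the list can be spot-checked exhaustively for $k=2$: searching vertex subsets in order of size, the first size that yields a $k$-power dominating set realizes $\gamma_{P,k}(G)$, and the minimum propagation time over that size is $\operatorname{ppt}_k(G)$ (a sketch with our own encoding of the graphs):

```python
from itertools import combinations

def ppt_k(n, edges, S, k):
    # simulation of the k-power domination process
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if 0 < len(white) <= k:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

def ppt_k_graph(n, edges, k):
    """ppt_k(G): minimum over all minimum k-power dominating sets."""
    for size in range(1, n + 1):
        times = [t for S in combinations(range(n), size)
                 if (t := ppt_k(n, edges, set(S), k)) is not None]
        if times:
            return min(times)

graphs = [
    (4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]),  # K_4
    (4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]),          # K_4 - e
    (4, [(0, 1), (0, 2), (1, 2), (0, 3)]),                  # L(3,1)
    (4, [(0, 1), (0, 2), (0, 3)]),                          # sp(1,1,1)
    (5, [(0, 1), (1, 2), (2, 3), (3, 4)]),                  # P_5
    (6, [(i, (i + 1) % 6) for i in range(6)]),              # C_6
]
for n, e in graphs:
    assert ppt_k_graph(n, e, 2) == n - 3
```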
Next we consider graphs with $\operatorname{ppt}(G)=|G|-3$. We characterize all trees with $\operatorname{ppt}(G)=|G|-3$ and all graphs with $\operatorname{ppt}(G)=|G|-3$ and $\gamma_{P}(G)\in \{2,3\}$. In Figure \ref{graphsnminus3}, we provide some graphs (including infinite families) with $\operatorname{ppt}(G)=|G|-3$ and $\gamma_{P}(G)=1$, but characterizing all such graphs is less tractable.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1.0]
\node[bvertex] (1) at (0,0) {};
\node[Bvertex] (2) at (1,0) {};
\node[Bvertex] (3) at (0,-1) {};
\node[Bvertex] (4) at (1,-1) {};
\node[Bvertex] (5) at (-1,-0.5) {};
\node[Bvertex] (6) at (2,-0.5) {};
\draw(1) to (2);
\draw(3) to (4);
\draw(1) to (3);
\draw(2) to (4);
\draw(2) to (6);
\draw(4) to (6);
\draw(1) to (5);
\draw(5) to (3);
\node[Bvertex] (7) at (5,-0.5) {};
\node[Bvertex] (8) at (5.5,-0.5) {};
\node[Bvertex] (9) at (6,-0.5) {};
\node[Bvertex] (10) at (5.5,0) {};
\node[bvertex] (11) at (5.5,-1) {};
\node[Bvertex] (12) at (6.5,-0.5) {};
\node[Bvertex] (13) at (7,-0.5) {};
\node[Bvertex] [label=right:$\ldots$](14) at (7.5,-0.5) {};
\node[Bvertex] (15) at (8.4,-0.5) {};
\node[Bvertex] (16) at (8.9,-0.5) {};
\node[Bvertex] (17) at (9.4,-0.5) {};
\draw(7) to (10);
\draw(10) to (9);
\draw(9) to (11);
\draw(11) to (7);
\draw(9) to (12);
\draw(12) to (13);
\draw(13) to (14);
\draw(15) to (16);
\draw(16) to (17);
\draw(10) to (8);
\draw(8) to (11);
\node[Bvertex] (18) at (-1,-3.2) {};
\node[Bvertex] (19) at (-0.5,-3.2) {};
\node[Bvertex] [label=right:$\ldots$](20) at (0,-3.2) {};
\node[Bvertex] (21) at (0.9,-3.2) {};
\node[bvertex] (22) at (1.4,-3.2) {};
\node[Bvertex] (23) at (0.2,-2.5) {};
\node[Bvertex] (24) at (0.2,-3.9) {};
\node[Bvertex] (25) at (1.9,-3.2) {};
\node[Bvertex] [label=right:$\ldots$](26) at (2.4,-3.2) {};
\node[Bvertex] (27) at (3.3,-3.2) {};
\node[Bvertex] (28) at (3.8,-3.2) {};
\node[Bvertex] (29) at (4.2,-3.2) {};
\draw(18) to (23);
\draw(23) to (22);
\draw(24) to (22);
\draw(18) to (24);
\draw(18) to (19);
\draw(19) to (20);
\draw(21) to (22);
\draw(24) to (25);
\draw(23) to (25);
\draw(25) to (26);
\draw(27) to (28);
\draw(28) to (29);
\node[Bvertex] (30) at (7.5,-2.5) {};
\node[Bvertex] (31) at (8.3,-2.5) {};
\node[bvertex] (32) at (7.5,-3) {};
\node[Bvertex] (33) at (8.3,-3) {};
\node[Bvertex] (34) at (7,-3.5) {};
\node[Bvertex] (35) at (7,-4.3) {};
\node[Bvertex] [label=right:$\ldots$](36) at (7.5,-4.6) {};
\node[Bvertex] (37) at (8.4,-4.6) {};
\node[Bvertex] (38) at (8.9,-4.1) {};
\node[Bvertex] (39) at (8.9,-3.5) {};
\draw(30) to (32);
\draw(31) to (33);
\draw(32) to (34);
\draw(34) to (35);
\draw(35) to (36);
\draw(32) to (33);
\draw(37) to (38);
\draw(38) to (39);
\draw (39) to (33);
\end{tikzpicture}
\caption{Graphs $G$ with $\operatorname{ppt}(G)=|G|-3$ and $\gamma_P(G)=1$. An efficient power dominating set is shown in blue.}
\label{graphsnminus3}
\end{center}
\end{figure}
\begin{prop}\label{Treeorderminus3} Let $T$ be a tree with $\operatorname{ppt}(T)=|T|-3$. Then $T=P_5$, $T=P_6$, or $T= \text{sp}(1,1,k)$, for some $k\geq 1$.
\end{prop}
\begin{proof} If $\Delta(T)\leq 2$, then $T$ must be a path, and by Proposition \ref{proppath}, $T=P_5$ or $T=P_6$. Suppose $\Delta(T)\geq 3$. From Lemma \ref{minpowerdomwithdegthree}, there exists a minimum power dominating set $S$ such that each vertex in $S$ has degree at least 3. Since each vertex in $S$ has degree at least 3, then $|N[S]|\geq 4$ and $\operatorname{ppt}(T,S)\leq |T|-3.$ This implies that $\operatorname{ppt}(T,S)=|T|-3$ since $\operatorname{ppt}(T)=|T|-3$. So, $|S|=1$ and $|N[S]|=4.$ \\\indent Let $S=\{s\}$ and $N[S]=\{s, s_1, s_2, s_3\}$. If $\operatorname{ppt}(T)=1$, then $|T|=4$ and $T=\text{sp}(1,1,1)$. Suppose $\operatorname{ppt}(T)\geq 2$. Since $|N[S]|=4$, then for steps $2$ through $\operatorname{ppt}(T)$, there is exactly one force per step. Consider a set of forces and the corresponding maximal forcing chains $\mathfrak{C}_1, \mathfrak{C}_2, \mathfrak{C}_3$ for the zero forcing set $\{s_1,s_2,s_3\}$ in $T'=T-s$. Recall that $\mathfrak{C}_1, \mathfrak{C}_2, \text{ and } \mathfrak{C}_3$ correspond to induced paths in $T'$. Also note that there is no edge between any two of these chains, as this would create a cycle in $T$. Since there is exactly 1 force per step, it follows that two of the chains must consist of a single vertex. So, $T=\text{sp}(1,1,k)$ for some $k\geq 1$.
\end{proof}
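The spiders in the proposition can be verified mechanically for small leg lengths (a sketch with our own labeling; we write the leg length as $m$ to avoid a clash with the parameter $k$):

```python
from itertools import combinations

def ppt(n, edges, S):
    # simulation of the power domination process (k = 1)
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if len(white) == 1:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

def ppt_graph(n, edges):
    """ppt(G): minimum over all minimum power dominating sets."""
    for size in range(1, n + 1):
        times = [t for S in combinations(range(n), size)
                 if (t := ppt(n, edges, set(S))) is not None]
        if times:
            return min(times)

def spider_1_1_m(m):
    """sp(1,1,m): center 0, leaves 1 and 2, and a leg 3, ..., m + 2."""
    edges = [(0, 1), (0, 2), (0, 3)]
    edges += [(i, i + 1) for i in range(3, m + 2)]
    return m + 3, edges

for m in range(1, 6):
    n, e = spider_1_1_m(m)
    assert ppt_graph(n, e) == n - 3      # |T| - 3 = m
```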
\begin{thm}\label{orderminus3} Let $G$ be a graph with $\operatorname{ppt}(G)=|G|-3$ and $\gamma_P(G)\in \{2,3\}$. Then $G\in \mathcal{F}$ where $\mathcal{F}=\{\overline{K_3}, \overline{K_2}\cup K_2, K_1\cup C_3, K_1\cup P_3, K_1\cup P_4, K_1\cup C_4, K_2\cup K_2\}$.
\end{thm}
\begin{proof} For any minimum power dominating set $S$ of $G$, $|S|\leq 3$ and $|N[S]|\leq 4$. Suppose $G$ is connected. If $\Delta(G)\geq 3,$ by Lemma \ref{minpowerdomwithdegthree}, $G$ has a minimum power dominating set $S$ such that each $s\in S$ has degree at least 3. Since $|S|\in \{2,3\}$, this gives $|N[S]|\geq 5$, a contradiction. If $\Delta(G)\leq 2$, then $G$ is a path or a cycle with $\gamma_P(G)=1$. Thus, $G$ has at least two components. \\\indent Suppose $G$ has only two components, $G_1$ and $G_2$. Without loss of generality, if $G_1$ has at least 3 vertices and $\Delta(G_1)\geq 3$, by applying Lemma \ref{minpowerdomwithdegthree} to $G_1$, it follows that there exists a minimum power dominating set $S$ of $G$ with $|N[S]|\geq 5,$ a contradiction. So, $\Delta(G_i)\leq 2$, $G_i$ is a path or a cycle, and $\gamma_P(G)=2$. If $G_1$ is a path on at least 3 vertices or a cycle, then $G_2=K_1$ (since $|N[S]|\leq 4$) and $\operatorname{ppt}(G)=\operatorname{ppt}(G_1)=|G_1|-2$. By Theorem \ref{orderminus2}, $G_1 \in\{P_3, P_4, C_3, C_4\}$. Otherwise, $G=K_2\cup K_2$.\\\indent If $G$ has three components $G_1,G_2,G_3$, then $\gamma_P(G)=3$ and exactly one force is performed at each step. So, $G_1=G_2=K_1$, and $\operatorname{ppt}(G)=\operatorname{ppt}(G_3)=|G_3|-1$. By Theorem \ref{orderminus1}, $G_3\in\{K_1, K_2\}$.
\end{proof}
\section{Low $k$-power propagation time}\label{low}
In this section we study graphs with low $k-$power propagation time. If $G$ is a graph with $k-$power propagation time 1, then any efficient $k$-power dominating set of $G$ is also a dominating set, so $\gamma_{P,k}(G)=\gamma(G)$. It was shown in \cite{HHHH} that every graph $H$ is an induced subgraph of a graph $G$ with $\gamma_P(G)=\gamma(G)$. Thus, there is no forbidden induced subgraph characterization of graphs with power propagation time 1. \\\indent Let $G$ be a graph. For $k\geq 1$, a vertex $v$ in $V(G)$ is called a {\em $k$-strong support vertex} if $v$ is adjacent to $k+1$ or more leaves. A $1-$strong support vertex is also known as a {\em strong support vertex} and was originally defined in \cite{HHHH}.
\begin{rem}\label{powerequalsdom}{\rm
Note that every $k$-strong support vertex of a graph $G$ is in every minimum dominating set of $G$. Also, if $S$ is a $k-$power dominating set of $G$ and $v$ is a $k$-strong support vertex of $G$, then either $v$ is in $S$ or all but at most $k$ of the leaves adjacent to $v$ are in $S$. So $\gamma_{P,k}(G)$ is at least the number of $k$-strong support vertices in $G$. Since $\gamma_{P,k}(G)\leq \gamma(G)$, it follows that if $S$ is a dominating set of $G$ such that every vertex in $S$ is a $k$-strong support vertex, then $S$ is minimum and unique, $\gamma_{P,k}(G)=\gamma(G)$, and $\operatorname{ppt}_k(G)=1$.
}
\end{rem}
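As a small illustration of the remark, consider a ``double star'': two adjacent centers, each with $k+1$ pendant leaves. The two centers form a dominating set consisting of $k$-strong support vertices, so $\operatorname{ppt}_k(G)=1$. A brute-force sketch (our own construction and labels):

```python
from itertools import combinations

def ppt_k(n, edges, S, k):
    # simulation of the k-power domination process
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if 0 < len(white) <= k:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

def ppt_k_graph(n, edges, k):
    """ppt_k(G): minimum over all minimum k-power dominating sets."""
    for size in range(1, n + 1):
        times = [t for S in combinations(range(n), size)
                 if (t := ppt_k(n, edges, set(S), k)) is not None]
        if times:
            return min(times)

def double_star(k):
    """Adjacent centers 0 and 1, each with k + 1 pendant leaves."""
    edges = [(0, 1)]
    edges += [(0, 2 + i) for i in range(k + 1)]
    edges += [(1, 3 + k + i) for i in range(k + 1)]
    return 2 * k + 4, edges

for k in (1, 2, 3):
    n, e = double_star(k)
    assert ppt_k_graph(n, e, k) == 1
```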
\indent For a minimum $k-$power dominating set $S$ and a vertex $v$ in $S$, the {\em private neighborhood} of $v$ with respect to $S$, denoted $pn[v,S]$, is the set $N[v]-N[S-\{v\}]$. Every vertex of $pn[v,S]$ is called a {\em private neighbor} of $v$ with respect to $S$, and $A_v$ denotes the set $V-(S\cup pn[v,S])$ \cite{HHHH}.
For $k\geq 1$, let $\mathfrak{F}_k$ be the set of graphs defined by $\mathfrak{F}_k=\{C_3, K_{2,k+1}, K_2\vee \overline{K_k}\}.$ \\\indent For graphs $G$ and $H$, $G$ is said to be $H-${\em free} if $G$ does not contain $H$ as an induced subgraph. If $\mathcal{H}$ represents a family of graphs, then $G$ is said to be $\mathcal{H}-$free if for all $H\in \mathcal{H}$, $G$ does not contain $H$ as an induced subgraph.
The next theorem and proof is a generalization of Theorem 9 given in \cite{HHHH}.
\begin{thm}\label{ppt1girth5} For $k\geq 1$, let $G$ be a connected graph on at least $k+2$ vertices that is $\mathfrak{F}_k$-free. Then $\operatorname{ppt}_k(G)=1$ if and only if $G$ has a minimum dominating set $S$ such that every vertex in $S$ is a $k$-strong support vertex.
\end{thm}
\begin{proof}
If $G$ has a dominating set $S$ such that each vertex in $S$ is a $k-$strong support vertex, then by Remark \ref{powerequalsdom}, $\gamma_{P,k}(G)=\gamma(G)$ and $\operatorname{ppt}_k(G)=1$. Conversely, suppose $\operatorname{ppt}_k(G)=1$ (i.e., $\gamma_{P,k}(G)=\gamma(G)$). To obtain a contradiction, suppose $S$ is a minimum dominating set of $G$ such that there exists a vertex $v\in S$ that is not a $k-$strong support vertex. If $pn[v,S]=\emptyset,$ then $S-\{v\}$ is a smaller dominating set, contradicting the minimality of $S$. Suppose that $pn[v,S]=\{v\}$. Then $S-\{v\}$ dominates $V-\{v\}$, and since $G$ is connected, $S-\{v\}$ is a smaller $k$-power dominating set. So there exists a vertex $w\neq v$ in $pn[v,S]$. \\\indent Suppose $pn[v,S]$ contains no leaves. Since $G$ is $\mathfrak{F}_k-$free, then each vertex in $pn[v,S]$ that is not $v$ is adjacent to a vertex in $A_v$, and for each $u\in A_v$, $|N(u)\cap (pn[v,S]\cup \{v\})|\leq k$. It follows that $S-\{v\}$ is a $k-$power dominating set of $G$ since $S-\{v\}$ dominates $A_v$, each vertex in $pn[v,S]-\{v\}$ is forced by a neighbor in $A_v$, and if necessary, any vertex in $pn[v,S]-\{v\}$ can force $v$. So there must exist at least one leaf in $pn[v,S].$ \\\indent Suppose vertices $u_1,...,u_t$ ($1\leq t\leq k$, since $v$ is not a $k-$strong support vertex) are leaves in $pn[v,S]$. If $pn[v,S]=\{u_1,\ldots,u_t,v\}$, then $v$ has a neighbor in $A_v$ since $G$ is connected, and we again have that $S-\{v\}$ is a $k-$power dominating set of $G$. If there exists a $w\in pn[v,S]$ such that $w\notin \{u_1,\ldots, u_t,v\}$, then $w$ has a neighbor in $A_v$ and $S-\{v\}$ is a $k-$power dominating set of $G$. In each case $S-\{v\}$ is a $k-$power dominating set with fewer than $\gamma(G)$ vertices, contradicting $\gamma_{P,k}(G)=\gamma(G)$.
\end{proof}
For the remainder of the paper, we focus our attention on the standard power propagation time, $\operatorname{ppt}(G)$.
\section{Nordhaus-Gaddum sum upper bound}\label{NGbound}
In this section we show that for all graphs on $n$ vertices, $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n+2$. We also conjecture that $n$ is the least upper bound, and demonstrate an infinite family of graphs with $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})= n$ for each $G$ in the family.
\begin{prop}\label{improvedNGppt} Let $G$ be a graph on $n$ vertices. Then $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n+2$.
\end{prop}
\begin{proof}If $G$ has no edges, then $\operatorname{ppt}(G)=0$ and $\operatorname{ppt}(\overline{G})=1$, so the claim holds; the case in which $\overline{G}$ has no edges is symmetric. So suppose both $G$ and $\overline{G}$ have an edge. Let $S$ be an efficient power dominating set of $G$. Note that $N[S]$ is a zero forcing set of $G$, but it is not minimum. To see this, consider a fixed $s\in S$ such that $\deg(s)\geq 1$ and a vertex $v_s\in N(s)$. Since $s$ can force $v_s$, the set $N[S]\setminus\{v_s\}$ is also a zero forcing set, so $\operatorname{Z}(G)+1\leq |N[S]|$. Similarly, $\operatorname{Z}(\overline{G})+1\leq |N[S']|$, where $S'$ is an efficient power dominating set of $\overline{G}$. It follows from inequality (\ref{pptbound2}) that $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq 2n-(\operatorname{Z}(G)+\operatorname{Z}(\overline{G})),$ and since $n-2\leq \operatorname{Z}(G)+\operatorname{Z}(\overline{G})$ from \cite{PSDZF}, then $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n+2$.
\end{proof}
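The bound can also be checked exhaustively on all $2^{10}$ graphs on 5 vertices (a sketch with our own encoding; the inner search realizes $\gamma_P$ by trying subsets in order of size):

```python
from itertools import combinations

def ppt(n, edges, S):
    # simulation of the power domination process (k = 1)
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v)
        nbr[v].add(u)
    observed = set(S)
    for s in S:
        observed |= nbr[s]
    t = 1
    while len(observed) < n:
        forced = set()
        for v in observed:
            white = nbr[v] - observed
            if len(white) == 1:
                forced |= white
        if not forced:
            return None
        observed |= forced
        t += 1
    return t

def ppt_graph(n, edges):
    """ppt(G): minimum over all minimum power dominating sets."""
    for size in range(1, n + 1):
        times = [t for S in combinations(range(n), size)
                 if (t := ppt(n, edges, set(S))) is not None]
        if times:
            return min(times)

n = 5
pairs = list(combinations(range(n), 2))
for mask in range(2 ** len(pairs)):
    edges = [p for i, p in enumerate(pairs) if mask >> i & 1]
    co_edges = [p for i, p in enumerate(pairs) if not mask >> i & 1]
    assert ppt_graph(n, edges) + ppt_graph(n, co_edges) <= n + 2
```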
We have not found a graph $G$ such that $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})=n+1$ or one such that $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})=n+2$. We have computationally checked all connected graphs on at most 10 vertices and found several graphs with $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})=n$. Evidence suggests that this is the least upper bound for all graphs. The next example gives an infinite family of graphs such that $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})=n$ for all graphs in the family.
\begin{ex}\label{examplen}{\rm Let $G_9$ denote the graph given in Figure \ref{pptn}. For $n\geq 10$, let $G_n$ be a graph on $n$ vertices constructed from $G_{n-1}$ by adding a new vertex $v_n$ and adding the edges $\{v_{n-2},v_n\}$ and $\{v_{n-1}, v_n\}.$ Note that if $v_2, v_3\notin N[S]$, then $S$ is not a power dominating set of $G_n$: since $N(v_2)=N(v_3)$, every neighbor of $v_2$ is also a neighbor of $v_3$, so no vertex can ever force $v_2$ or $v_3$. So for every power dominating set $S$ of $G_n$, $N[S]$ must contain either $v_2$ or $v_3$. Also note that the sets $\{v_2\}$ and $\{v_3\}$ are minimum power dominating sets of $G_n$ with $\operatorname{ppt}(G_n, v_2)=\operatorname{ppt}(G_n,v_3)=n-3$. Thus, $\gamma_P(G_n)=1$. For $6\leq i \leq n$, the set $\{v_i\}$ is not a power dominating set since $v_2, v_3\notin N[\{v_i\}]$. Furthermore, it follows from inspection that the sets $\{v_1\}, \{v_4\}, \text{ and } \{v_5\}$ are not power dominating sets. Thus, $\operatorname{ppt}(G_n)=n-3$. \\\indent Similarly, in $\overline{G_n}$ we have $N(v_2)\setminus \{v_3\}=N(v_3)\setminus \{v_2\}$, so any set $S'$ with $v_2,v_3\notin N[S']$ is not a power dominating set of $\overline{G_n}$; thus for each power dominating set $S'$ of $\overline{G_n},$ $N[S']$ must contain $v_2$ or $v_3$. Furthermore, the sets $\{v_2\}$ and $\{v_3\}$ are power dominating sets with $\operatorname{ppt}(\overline{G_n}, \{v_2\})=\operatorname{ppt}(\overline{G_n}, \{v_3\})=4$, so $\gamma_P(\overline{G_n})=1$. For $i\in \{1,4,5\},$ note that the set $\{v_i\}$ is not a power dominating set since $v_2, v_3 \notin N[\{v_i\}].$ It follows from inspection that the set $\{v_i\}$ is not a power dominating set for $6\leq i \leq n-2$, and the sets $\{v_{n-1}\}$ and $\{v_n\}$ are power dominating sets of $\overline{G_n}$ with $\operatorname{ppt}(\overline{G_n}, \{v_{n-1}\})=\operatorname{ppt}(\overline{G_n}, \{v_n\})=3.$ Thus $\operatorname{ppt}(\overline{G_n})=3$, and $\operatorname{ppt}(G_n)+\operatorname{ppt}(\overline{G_n})=n$ for all $n\geq 9$.
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1.0]
\node[Bvertex] [label=above:$v_1$] (1) at (-1,0) {};
\node[Bvertex] [label=above:$v_3$] (3) at (0,0) {};
\node[Bvertex] [label=above:$v_5$] (5) at (1,0) {};
\node[Bvertex] [label=above:$v_7$](7) at (2,0) {};
\node[Bvertex] [label=above:$v_9$](9) at (3,0) {};
\node[Bvertex] (2)[label=below:$v_2$] at (-0.5,-1) {};
\node[Bvertex] (4)[label=below:$v_4$] at (0.5,-1) {};
\node[Bvertex] (6)[label=below:$v_6$] at (1.5,-1) {};
\node[Bvertex] (8)[label=below:$v_8$] at (2.5,-1) {};
\draw (1) to (3);
\draw (1) to (2);
\draw (2) to (5);
\draw (3) to (4);
\draw (3) to (5);
\draw (5) to (7);
\draw (7) to (9);
\draw (2) to (4);
\draw (4) to (6);
\draw (6) to (8);
\draw (8) to (9);
\draw (5) to (6);
\draw (6) to (7);
\draw (7) to (8);
\node[Bvertex] [label=above:$v_9$] (19) at (5,1) {};
\node[Bvertex] [label=left:$v_8$] (18) at (4.5,0) {};
\node[Bvertex] [label=left:$v_7$] (17) at (4.5 ,-1) {};
\node[Bvertex] [label=left:$v_6$](16) at (5,-2) {};
\node[Bvertex] [label=below:$v_5$](15) at (6.2,-2.3) {};
\node[Bvertex] (14)[label=below:$v_4$] at (7.2,-1.6) {};
\node[Bvertex] (11)[label=above:$v_1$] at (6.2,1.3) {};
\node[Bvertex] (12)[label=right:$v_2$] at (7.2, 0.8) {};
\node[Bvertex] (13)[label=right:$v_3$] at (7.5, -0.5) {};
\draw(11) to (14);
\draw(11) to (15);
\draw(11) to (16);
\draw(11) to (17);
\draw(11) to (18);
\draw(11) to (19);
\draw(12) to (13);
\draw(12) to (16);
\draw(12) to (17);
\draw(12) to (18);
\draw(12) to (19);
\draw(13) to (16);
\draw(13) to (17);
\draw(13) to (18);
\draw(13) to (19);
\draw(14) to (15);
\draw(14) to (17);
\draw(14) to (18);
\draw(14) to (19);
\draw(15) to (18);
\draw(15) to (19);
\draw(16) to (19);
\end{tikzpicture}
\caption{Graphs $G_9$ (left) and $\overline{G_9}$ (right) in Example \ref{examplen}.}
\label{pptn}
\end{center}
\end{figure}
}
\end{ex}
\begin{conj} For all graphs $G$ on $n$ vertices, $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n$.
\end{conj}
We now show that the conjecture is true for all graphs with at least one leaf.
\begin{prop}\label{NGtrees} Let $G\neq P_4$ be a connected graph on $n$ vertices that has a leaf. Then $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n-1$ and this bound is tight. For $G=P_4$, $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})=n=4$.
\end{prop}
\begin{proof} The claim holds when $n\leq 2$, so let $n\geq 3$. We first show that $\operatorname{ppt}(\overline{G})\leq 2$. Let $uv\in E(G)$ such that $v$ is a leaf. If $\deg(u)= n-1$, then $\{v,u\}$ is an efficient power dominating set for $\overline{G}$ and $\operatorname{ppt}(\overline{G})=1$. If $\deg(u)\neq n-1$, then $\{v\}$ is an efficient power dominating set for $\overline{G},$ and $\operatorname{ppt}(\overline{G})=2.$ \\\indent Suppose $\Delta(G)\geq 3$. By Lemma \ref{minpowerdomwithdegthree}, $G$ has a minimum power dominating set $S$ such that each vertex in $S$ has degree at least 3. Then $|N[S]|\geq 4$, $\operatorname{ppt}(G)\leq n-3$, and $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n-1$. If $\Delta(G)=2$, then $G$ is a path. By Proposition \ref{proppath}, $\operatorname{ppt}(P_n)=\left \lfloor \frac{n}{2} \right\rfloor$, so $\operatorname{ppt}(P_n)\leq n-3$ for all $n\geq 6.$ For $P_3,P_4,P_5$, we have by inspection that $\operatorname{ppt}(P_3)+\operatorname{ppt}(\overline{P_3})=2,\operatorname{ppt}(P_4)+\operatorname{ppt}(\overline{P_4})=4,$ and $\operatorname{ppt}(P_5)+\operatorname{ppt}(\overline{P_5})=4$. Thus, $\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})\leq n-1$ for all graphs $G\neq P_4$ containing a leaf. The bound is tight for $G=\text{sp}(1,1,t)$ ($t\geq 2$) since $\operatorname{ppt}(G)=|G|-3$ by Proposition \ref{Treeorderminus3} and $\operatorname{ppt}(\overline{G})=2$.
\end{proof}
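The propagation process behind $\operatorname{ppt}$ is easy to simulate directly. The sketch below is purely illustrative (all names are ours, not from the paper); it counts the initial domination step as step one and then applies simultaneous forcing rounds, a convention that reproduces the value $\operatorname{ppt}(P_n)=\lfloor n/2\rfloor$ used in the proof above.

```python
def ppt(adj, S):
    # adj: dict vertex -> set of neighbors; S: initial (power dominating) set.
    # Step 1 observes the closed neighborhood N[S]; in each later step, every
    # observed vertex with exactly one unobserved neighbor forces that neighbor,
    # all forces being applied simultaneously.
    observed = set(S).union(*(adj[v] for v in S))
    steps = 1
    while len(observed) < len(adj):
        forced = {next(iter(adj[v] - observed))
                  for v in observed if len(adj[v] - observed) == 1}
        if not forced:
            return None  # propagation stalls: S is not a power dominating set
        observed |= forced
        steps += 1
    return steps

def path(n):  # path graph v_1 - v_2 - ... - v_n
    return {i: {j for j in (i - 1, i + 1) if 1 <= j <= n} for i in range(1, n + 1)}

# gamma_P(P_n) = 1, and minimizing over single-vertex sets gives ppt(P_n) = n // 2
for n in range(3, 12):
    assert min(ppt(path(n), {v}) for v in range(1, n + 1)) == n // 2
```

Minimizing over all minimum power dominating sets, as in the definition of $\operatorname{ppt}(G)$, reduces here to minimizing over single vertices since $\gamma_P(P_n)=1$.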
\begin{lem}\label{sizedecrease}{\rm \cite{Chang}} Let $G$ be a graph such that $\Delta(G)\geq 3$. Then there exists a minimum power dominating set $S$ such that each $s\in S$ has at least two neighbors which are not neighbors of any vertex in $N[S\setminus \{s\}].$
\end{lem}
The next proposition shows that if $\Delta(G)\geq 3, \Delta(\overline{G})\geq 3,$ and $\gamma_{P}(G)+\gamma_{P}(\overline{G})\geq 4,$ then $ \operatorname{ppt}(G)+\operatorname{ppt}\left (\overline{G}\right)\leq n.$
\begin{prop} Let $G$ be a graph on $n$ vertices such that $\Delta(G)\geq 3$ and $\Delta(\overline{G})\geq 3.$ Then $\operatorname{ppt}(G)+\operatorname{ppt}\left (\overline{G}\right)\leq n-(\gamma_P(G)+\gamma_P(\overline{G}))+4$.
\end{prop}
\noindent {\em Proof.} By Lemma \ref{sizedecrease} and the assumption that $\Delta(G)\geq 3$, there is a minimum power dominating set $S$ of $G$ such that each $s\in S$ has at least one neighbor not in $N[S\setminus\{s\}]$. We first show that $\operatorname{Z}(G)\leq |N[S]|-\gamma_P(G)$. Recall that $N[S]$ is a zero forcing set of $G$. For each $s\in S$, choose a $v_s\in N(s)$ such that $v_s\notin N[S\setminus\{s\}].$ Then $N[S]\setminus\{v_s : s\in S\}$ is also a zero forcing set since $s$ will force $v_s$ in step one. So, $\operatorname{Z}(G)\leq |N[S]|-\gamma_P(G)$.\\\indent Similarly, since $\Delta(\overline{G})\geq 3$, let $S'$ be a minimum power dominating set of $\overline{G}$ such that each $s'\in S'$ has at least one neighbor not in $N[S'\setminus\{s'\}]$. By the same argument, we have $\operatorname{Z}(\overline{G})\leq |N[S']|-\gamma_P(\overline{G})$. Using the bounds $\operatorname{ppt}(G,S)-1\leq n-|N[S]|$ and $\operatorname{ppt}(\overline{G},S')-1\leq n-|N[S']|$ (from inequality (\ref{pptbound2})), and $n-2\leq \operatorname{Z}(G)+\operatorname{Z}(\overline{G})$ from \cite{PSDZF}, it follows that
\begin{eqnarray*}
\operatorname{ppt}(G)+\operatorname{ppt}(\overline{G})&\leq & \operatorname{ppt}(G,S)+\operatorname{ppt}(\overline{G},S')\\
&\leq & 2n+2 -(|N[S]|+|N[S']|)\\
&\leq & 2n+2-(\operatorname{Z}(G)+\operatorname{Z}(\overline{G}))-(\gamma_P(G)+\gamma_P(\overline{G}))\\
&\leq & 2n+2-(n-2)-(\gamma_P(G)+\gamma_P(\overline{G}))\\
&= & n-(\gamma_P(G)+\gamma_P(\overline{G}))+4. \qed
\end{eqnarray*}
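The ingredients of this proof, the zero forcing number $\operatorname{Z}$ and the bound $n-2\leq \operatorname{Z}(G)+\operatorname{Z}(\overline{G})$ from \cite{PSDZF}, can be checked by brute force on small graphs. The sketch below is illustrative only (exponential time, our own helper names, not code from the paper).

```python
from itertools import combinations

def closure(adj, blue):
    # standard zero forcing rule: a blue vertex with exactly one non-blue
    # neighbor forces that neighbor; repeat until no force applies
    blue = set(blue)
    while True:
        new = {next(iter(adj[v] - blue))
               for v in blue if len(adj[v] - blue) == 1}
        if new <= blue:
            return blue
        blue |= new

def Z(adj):
    # minimum size of a zero forcing set, by brute force (small graphs only)
    for k in range(1, len(adj) + 1):
        for S in combinations(adj, k):
            if len(closure(adj, S)) == len(adj):
                return k

def complement(adj):
    verts = set(adj)
    return {v: verts - adj[v] - {v} for v in adj}

# check the bound n - 2 <= Z(G) + Z(complement of G) on the path P_5
P5 = {i: {j for j in (i - 1, i + 1) if 1 <= j <= 5} for i in range(1, 6)}
assert Z(P5) == 1 and Z(complement(P5)) == 2
assert Z(P5) + Z(complement(P5)) >= len(P5) - 2
```

For $P_5$ the bound is attained with equality, $\operatorname{Z}(P_5)+\operatorname{Z}(\overline{P_5})=3=n-2$.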
\section{Effects of graph operations on standard power propagation time}\label{operations}
Let $G_e$ be a graph obtained from $G=(V,E)$ by subdividing the edge $e\in E$ and let $G/e$ denote the graph resulting from $G$ by contracting the edge $e$. It is shown in both \cite{PD2015} and \cite{arxivpaper} that $\gamma_P(G)-1 \leq\gamma_P(G/e)\leq \gamma_P(G)+1$ and in \cite{PD2015} that $\gamma_P(G) \leq\gamma_P(G_e)\leq \gamma_P(G)+1$. We show that the power propagation time may increase or decrease by any amount when subdividing or contracting an edge.
\subsection{Edge subdivision}
\begin{prop}\label{subdecrease} For any $t\geq 0$, there exists a graph $G=(V,E)$ and edge $e\in E$ such that $\operatorname{ppt}(G_e)\leq \operatorname{ppt}(G)-t.$
\end{prop}
\begin{proof} Construct the graph $G$ in the following way: starting with the path $P_{\ell}=(v_1,v_2,\ldots, v_{\ell})$ with $\ell\geq 7$, add two leaves to vertex $v_1$ and two leaves to vertex $v_{\ell}$ so that vertices $v_1$ and $v_{\ell}$ become strong support vertices. Add one leaf to vertex $v_{\ell-1}$ and one leaf to vertex $v_{\ell-2}$. (See Figure \ref{subdecreasefig}.) Then $\{v_1,v_{\ell}\}$ is the unique minimum power dominating set of $G$ and $\operatorname{ppt}(G)=\ell-2$. For $e=\{v_{\ell-2},v_{\ell-1}\}$, we consider the graph $G_e$. Note that $\gamma_P(G_e)=3$ because $v_1, v_{\ell}\in S$ for any power dominating set $S$ and $\{v_1, v_{\ell}\}$ is not a power dominating set. For $S=\{v_1, v_{\ell-2}, v_{\ell}\}$, $\operatorname{ppt}(G_e,S)=\left \lceil \frac{\ell-4}{2} \right \rceil$. By choosing $\ell\geq 2t+1$, $\operatorname{ppt}(G_e)\leq \operatorname{ppt}(G)-t.$
\end{proof}
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1.0]
\node[Bvertex] [label=below:$v_1$](1) at (0,0) {};
\node[Bvertex] [label=below:$v_2$](2) at (1,0){};
\node[Bvertex] [label=right:$\ldots$](3) at (2,0) {};
\node[Bvertex] (4) at (3,0){};
\node[Bvertex] [label=below:$v_{\ell-2}$](5) at (4,0) {};
\node[Bvertex] [label=below:$v_{\ell-1}$](6) at (5,0){};
\node[Bvertex] [label=below:$v_{\ell}$](7) at (6,0){};
\node[Bvertex] (8) at (5,1){};
\node[Bvertex] (9) at (4,1){};
\node[Bvertex] (10) at (-1,0.5) {};
\node[Bvertex] (11) at (-1,-0.5) {};
\node[Bvertex] (12) at (7,0.5) {};
\node[Bvertex] (13) at (7,-0.5) {};
\draw (1) to (2);
\draw(2) to (3);
\draw (4) to (5);
\draw(5) to (6);
\draw (6) to (7);
\draw(8) to (6);
\draw (9) to (5);
\draw(1) to (11);
\draw(1) to (10);
\draw(7) to (12);
\draw(7) to (13);
\node[Bvertex] [label=below:$v_1$](21) at (0,-3) {};
\node[Bvertex] [label=below:$v_2$](22) at (1,-3){};
\node[Bvertex] [label=right:$\ldots$](23) at (2,-3) {};
\node[Bvertex] (24) at (3,-3){};
\node[Bvertex] [label=below:$v_{\ell-2}$](25) at (4,-3) {};
\node[Bvertex] (214) at (4.5,-3) {};
\node[Bvertex] [label=below:$v_{\ell-1}$](26) at (5,-3){};
\node[Bvertex] [label=below:$v_{\ell}$](27) at (6,-3){};
\node[Bvertex] (28) at (5,-2){};
\node[Bvertex] (29) at (4,-2){};
\node[Bvertex] (210) at (-1,-2.5) {};
\node[Bvertex] (211) at (-1,-3.5) {};
\node[Bvertex] (212) at (7,-2.5) {};
\node[Bvertex] (213) at (7,-3.5) {};
\draw(25) to (214);
\draw(214) to (26);
\draw (21) to (22);
\draw(22) to (23);
\draw (24) to (25);
\draw (26) to (27);
\draw(28) to (26);
\draw (29) to (25);
\draw(21) to (211);
\draw(21) to (210);
\draw(27) to (212);
\draw(27) to (213);
\end{tikzpicture}
\caption{Graphs $G$ and $G_e$ in Proposition \ref{subdecrease}.}
\label{subdecreasefig}
\end{center}
\end{figure}
Similarly, subdividing an edge can cause the power propagation time to increase by any amount, as demonstrated by the following proposition.
\begin{prop}\label{subincrease} For any $t\geq 0$, there exists a graph $G=(V,E)$ and edge $e\in E$ such that $\operatorname{ppt}(G_e)\geq \operatorname{ppt}(G)+t.$
\end{prop}
\begin{proof}
Let $G$ be a graph on $n\geq 8$ vertices constructed from the cycle $(v_1,v_2,\ldots, v_{n-4})$ by adding the vertices $v_{n-3}, v_{n-2}, v_{n-1}, v_n$ and the edges $\{v_1,v_{n-3}\}, \{v_1,v_{n-2}\}, \{v_1,v_{n-1}\}, \{v_2,v_{n-1}\}, \text{ and } \{v_n,v_{n-1}\}$. Let $e=\{v_2,v_{n-1}\}$, and consider $G_e$. The set $\{v_1\}$ is the unique minimum power dominating set of $G$ and $\operatorname{ppt}(G)=\left \lfloor \frac{n-4}{2}\right \rfloor.$ The set $\{v_1\}$ is also the unique minimum power dominating set of $G_e$ and $\operatorname{ppt}(G_e)=n-4$. So, by choosing $n \geq 2t+4,$ $\operatorname{ppt}(G_e)\geq \operatorname{ppt}(G)+t.$ \end{proof}
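The construction in this proof can be instantiated and simulated. The sketch below is our own illustrative code (with the step convention that counts the domination step as step one); it checks the claimed propagation times for $n=10$ with the power dominating set $S=\{v_1\}$.

```python
def ppt(adj, S):
    # step 1 observes N[S]; afterwards, every observed vertex with exactly one
    # unobserved neighbor forces it, all forces applied simultaneously per step
    observed = set(S).union(*(adj[v] for v in S))
    steps = 1
    while len(observed) < len(adj):
        forced = {next(iter(adj[v] - observed))
                  for v in observed if len(adj[v] - observed) == 1}
        if not forced:
            return None  # S is not a power dominating set
        observed |= forced
        steps += 1
    return steps

def build(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

n = 10  # one instance of the construction in the proof above
cycle = [(i, i + 1) for i in range(1, n - 4)] + [(n - 4, 1)]
extra = [(1, n - 3), (1, n - 2), (1, n - 1), (2, n - 1), (n, n - 1)]
G = build(cycle + extra)

# G_e: subdivide e = {v_2, v_{n-1}} with a new vertex w
Ge = build(cycle + extra)
Ge[2].discard(n - 1); Ge[n - 1].discard(2)
Ge['w'] = {2, n - 1}; Ge[2].add('w'); Ge[n - 1].add('w')

assert ppt(G, {1}) == (n - 4) // 2   # = 3
assert ppt(Ge, {1}) == n - 4         # = 6
```

Subdividing the single edge $\{v_2,v_{n-1}\}$ destroys one of the two forcing fronts emanating from $v_1$, so the remaining front must traverse the whole cycle, doubling the propagation time.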
\begin{figure}[h!]
\begin{center}
\begin{tikzpicture}[scale=1.0]
\node[Bvertex] [label=above:$v_{n-2}$](30) at (7.5,-2) {};
\node[Bvertex] [label=left:$v_{n-3}$](40) at (7,-2.2) {};
\node[Bvertex] [label=above:$v_{n-1}$] (31) at (8.3,-2) {};
\node[Bvertex] [label=left:$v_{1}$](32) at (7.5,-3) {};
\node[Bvertex] [label=right:$v_{2}$](33) at (8.3,-3) {};
\node[Bvertex] [label=left:$v_{n-4}$] (34) at (7,-3.5) {};
\node[Bvertex] [label=left:$v_{n-5}$](35) at (7,-4.3) {};
\node[Bvertex] [label=right:$\ldots$](36) at (7.5,-4.6) {};
\node[Bvertex] (37) at (8.4,-4.6) {};
\node[Bvertex] [label=right:$v_{4}$](38) at (8.9,-4.1) {};
\node[Bvertex] [label=right:$v_{3}$](39) at (8.9,-3.5) {};
\node[Bvertex] [label=right:$v_{n}$](41) at (9,-2) {};
\draw(30) to (32);
\draw(40) to (32);
\draw(31) to (32);
\draw(31) to (41);
\draw(31) to (33);
\draw(32) to (34);
\draw(34) to (35);
\draw(35) to (36);
\draw(32) to (33);
\draw(37) to (38);
\draw(38) to (39);
\draw (39) to (33);
\node[Bvertex] [label=above:$v_{n-2}$](130) at (12.5,-2) {};
\node[Bvertex] [label=left:$v_{n-3}$](140) at (12,-2.2) {};
\node[Bvertex] [label=above:$v_{n-1}$] (131) at (13.3,-2) {};
\node[Bvertex] [label=left:$v_{1}$](132) at (12.5,-3) {};
\node[Bvertex] [label=right:$v_{2}$](133) at (13.3,-3) {};
\node[Bvertex] [label=left:$v_{n-4}$] (134) at (12,-3.5) {};
\node[Bvertex] [label=left:$v_{n-5}$](135) at (12,-4.3) {};
\node[Bvertex] [label=right:$\ldots$](136) at (12.5,-4.6) {};
\node[Bvertex] (137) at (13.4,-4.6) {};
\node[Bvertex] [label=right:$v_{4}$](138) at (13.9,-4.1) {};
\node[Bvertex] [label=right:$v_{3}$](139) at (13.9,-3.5) {};
\node[Bvertex] [label=right:$v_{n}$](141) at (14,-2) {};
\node[Bvertex] (143) at (13.3,-2.5) {};
\draw(130) to (132);
\draw(140) to (132);
\draw(131) to (132);
\draw(131) to (141);
\draw(143) to (133);
\draw(143) to (131);
\draw(132) to (134);
\draw(134) to (135);
\draw(135) to (136);
\draw(132) to (133);
\draw(137) to (138);
\draw(138) to (139);
\draw (139) to (133);
\end{tikzpicture}
\caption{Graphs $G$ and $G_e$ in Proposition \ref{subincrease}.}
\label{subincreasefig}
\end{center}
\end{figure}
\subsection{Edge contraction}
\begin{prop}\label{contractincrease} For any $t\geq 0$, there exists a graph $H=(V,E)$ and edge $e\in E$ such that $\operatorname{ppt}(H/e)\geq \operatorname{ppt}(H)+t.$
\end{prop}
\begin{proof} From Proposition \ref{subdecrease}, there exist a graph $G$ and an edge $e$ such that $\operatorname{ppt}(G_e)\leq \operatorname{ppt}(G)-t.$ Let $H=G_e$ and let $e'$ be one of the two edges incident to the subdivision vertex; contracting $e'$ recovers $G$, so $H/e'=G$. Then $\operatorname{ppt}(H/e')=\operatorname{ppt}(G)\geq \operatorname{ppt}(G_e)+t=\operatorname{ppt}(H)+t.$
\end{proof}
\begin{prop}\label{contractdecrease} For any $t\geq 0$, there exists a graph $H=(V,E)$ and edge $e\in E$ such that $\operatorname{ppt}(H/e)\leq \operatorname{ppt}(H)-t.$
\end{prop}
\begin{proof}
From Proposition \ref{subincrease}, there exist a graph $G$ and an edge $e$ such that $\operatorname{ppt}(G_e)\geq \operatorname{ppt}(G)+t.$ Let $H=G_e$ and let $e'$ be one of the two edges incident to the subdivision vertex; contracting $e'$ recovers $G$, so $H/e'=G$. Then $\operatorname{ppt}(H/e')=\operatorname{ppt}(G)\leq \operatorname{ppt}(G_e)-t=\operatorname{ppt}(H)-t.$
\end{proof}
\noindent {\bf Acknowledgements}\newline \noindent The author would like to thank Dr. Leslie Hogben for her insight throughout this project, and Dr. Steve Butler for his assistance in the coding aspect of this project.
\bibliographystyle{alpha}
| {
"timestamp": "2016-08-04T02:08:28",
"yymm": "1608",
"arxiv_id": "1608.01189",
"language": "en",
"url": "https://arxiv.org/abs/1608.01189",
"abstract": "In this paper, we give Nordhaus-Gaddum upper and lower bounds on the sum of the power propagation time of a graph and its complement, and we consider the effects of edge subdivisions and edge contractions on the power propagation time of a graph. We also study a generalization of power propagation time, known as $k-$power propagation time, by characterizing all simple graphs on $n$ vertices whose $k-$power propagation time is $n-1$ or $n-2$ (for $k\\geq 1$) and $n-3$ (for $k\\geq 2$). We determine all trees on $n$ vertices whose power propagation time ($k=1$) is $n-3$, and give partial characterizations of graphs whose $k-$power propagation time is equal to 1 (for $k\\geq 1$).",
"subjects": "Combinatorics (math.CO)",
"title": "On the power propagation time of a graph",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9702399051935107,
"lm_q2_score": 0.7310585727705127,
"lm_q1q2_score": 0.7093022003357654
} |
https://arxiv.org/abs/2106.02452 | Proving Equivalence Between Complex Expressions Using Graph-to-Sequence Neural Models | We target the problem of provably computing the equivalence between two complex expression trees. To this end, we formalize the problem of equivalence between two such programs as finding a set of semantics-preserving rewrite rules from one into the other, such that after the rewrite the two programs are structurally identical, and therefore trivially equivalent.We then develop a graph-to-sequence neural network system for program equivalence, trained to produce such rewrite sequences from a carefully crafted automatic example generation algorithm. We extensively evaluate our system on a rich multi-type linear algebra expression language, using arbitrary combinations of 100+ graph-rewriting axioms of equivalence. Our machine learning system guarantees correctness for all true negatives, and ensures 0 false positive by design. It outputs via inference a valid proof of equivalence for 93% of the 10,000 equivalent expression pairs isolated for testing, using up to 50-term expressions. In all cases, the validity of the sequence produced and therefore the provable assertion of program equivalence is always computable, in negligible time. |
\section{Introduction}
\subsection{Intuitions on Axiomatic Program Equivalence}
\label{subsec:appendix:runningexample}
\input{figexample1original.tex}
\paragraph*{Input program representation}
We illustrate in Fig.~\ref{fig:appendix:treeexamples} four very simple computations, represented as graphs, that are all equivalent under various axioms of natural arithmetic.
For example, \texttt{P1} models the expression $a(1b+1c)$; one can imagine it to be the result of $a(db+dc)$ after, e.g., constant propagation of the value $1$ for $d$ by a compiler.
Each graph has a single root; its nodes are either operations, consuming the values of their immediate predecessors, or terminal/input values, and each node produces a value that can be used by its immediate successors. In essence this is a classical dataflow representation of the computation \cite{buck1993scheduling}, and what our system uses as input program representation.
In this work we represent programs with symbolic expressions made of variables (e.g., $a$, $b$, $c$), operators (e.g., \texttt{+}, \texttt{*}) and neutral/absorbing elements (e.g., $1$). We consider a rich linear algebra expression language, supporting three variable types (scalars as shown in \texttt{P1}-\texttt{P4}, vectors, and matrices) and 5 different variables per type; 16 operators including operators mixing different variable types such as vector-matrix product. Details are provided in Sec.~\ref{sec:appendix:progequivframework} and Sec.~\ref{sec:Axioms}.
\paragraph*{Rewrite rules as axioms of equivalence}
Consider the programs \texttt{P1} versus \texttt{P2}. The multiplication of an integer value by $1$ does not change the value, if we rely on an axiom of equivalence $A1:~ 1_{\mathbb{N}} * x = x,~\forall x \in \mathbb{N}$. This axiom specifies a strict criterion of application: the node must be of type $\mathbb{N}$, the expression pattern must be $1_{\mathbb{N}}*x$; and a strict rewrite rule: replace a sub-graph $1_{\mathbb{N}}*x$ for any $x\in{\mathbb{N}}$ by the graph $x$. In other words, replacing $1*b$ by $b$ in \texttt{P1} is a semantics-preserving rewrite, from the axiom of equivalence $A1$. In this work we view the problem of program equivalence as finding a sequence of semantics-preserving rewrites, each from a precisely defined axiom of equivalence, that rewrites one program into the other. If one program can be rewritten by a sequence of individually-correct semantics-preserving transformations into another one, then not only are they equivalent under the set of axioms used, but the sequence forms the constructive and verifiable proof of equivalence.
\paragraph*{An example} In this work we illustrate and experimentally evaluate our system using a rich linear algebra expression language because it exposes clearly (and intuitively) the various key concepts that must be handled: (1) operating on dataflow graphs as input, supporting transformations that can (2) delete or (3) create new nodes in the graph, and transformations that (4) manipulate entire subtrees. We also wanted a language with (5) multiple variable types, e.g. scalars, vectors and matrices and (6) a large number of different operators with (7) distinct axioms applicable for each. All of these are captured in the language we experiment with, see Sec.~\ref{sec:appendix:progequivframework} for its formal definition.
When applying the axiom $A1:1 * x = x,~\forall x \in \mathbb{N}$ on the program \texttt{P1} for its node $b$, we obtain an equivalent and yet syntactically different program; we have $P1 \equiv A1(b,P1)$. Applying the same axiom $A1$ on $c$ in the resulting program leads to program \texttt{P2}, and $P2\equiv P1 \equiv A1(c,A1(b,P1))$.
Consider now the axiom $A2:x * (y+z) = x*y+x*z,~\forall x,y,z \in \mathbb{N}$. This is the standard distributivity axiom on natural arithmetic. In terms of graph transformations, this is a complex rewrite: a new node is created ($*$), one node is moved ($+$ to the root), and edges are significantly modified. When this complex, but semantics-preserving, rewrite is applied to \texttt{P2}, we obtain \texttt{P3}, that is $P3 \equiv A2(*,A1(c,A1(b,P1)))$.
Finally consider the axiom $A3:x+y = y+x,~\forall x,y \in \mathbb{N}$, the standard commutativity axiom for $+$. The graph transformation does not change the number of nodes nor edges, instead only alters two specific edges. Note that as the previous axioms, it also illustrates operations on sub-graphs: indeed $x$ and $y$ do not need to be input/terminal nodes, they can be any subgraph producing a value of the proper type. This is illustrated by applying it on
\texttt{P3} to obtain \texttt{P4}, that is the computation $ac+ab$. We have
$P4 \equiv A3(+,A2(*,A1(c,A1(b,P1))))$, a verifiable proof of equivalence under our axioms between the programs $a(1b+1c)$ and $ac+ab$, which involved structural changes including node deletion, creation and edge modification. Note the bidirectional nature of the process: one can rewrite from $a(1b+1c)$ to $ac+ab$, or the converse using the same (but reversed) sequence. Note also the non-unicity of a sequence: a program can typically be rewritten into another one in many different ways; for example the sequence $P4 \equiv A3(+,A1(c,A1(b,A2(*,P1))))$ also correctly rewrites \texttt{P1} into \texttt{P4}. Conversely, a sequence may not exist: for example no sequence of the 3 above axioms rewrites $a+b$ into $a*b$. We call such programs non-equivalent in our system, that is, precisely when there is no sequence of axioms that can be applied to rewrite one program into the other.
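The rewrite sequences above can be replayed mechanically. The sketch below uses an illustrative encoding of our own (nested Python tuples, with a position given as a path of child indices, 1 for the left operand and 2 for the right); it is not the paper's implementation.

```python
# P1 = a(1b + 1c), encoded as nested tuples (operator, left, right)
P1 = ('*', 'a', ('+', ('*', 1, 'b'), ('*', 1, 'c')))

def A1(e):  # axiom A1: 1 * x -> x
    assert e[0] == '*' and e[1] == 1
    return e[2]

def A2(e):  # axiom A2 (distributivity): x * (y + z) -> x*y + x*z
    assert e[0] == '*' and isinstance(e[2], tuple) and e[2][0] == '+'
    x, (_, y, z) = e[1], e[2]
    return ('+', ('*', x, y), ('*', x, z))

def A3(e):  # axiom A3 (commutativity): x + y -> y + x
    assert e[0] == '+'
    return ('+', e[2], e[1])

def apply_at(expr, pos, rule):
    # apply a rewrite rule at the subterm reached by the index path `pos`;
    # the assert inside each rule is the applicability check
    if not pos:
        return rule(expr)
    out = list(expr)
    out[pos[0]] = apply_at(expr[pos[0]], pos[1:], rule)
    return tuple(out)

# the sequence from the text: A1 at b, A1 at c, A2 at the root, A3 at the root
P2 = apply_at(apply_at(P1, (2, 1), A1), (2, 2), A1)  # a(b + c)
P3 = apply_at(P2, (), A2)                            # ab + ac
P4 = apply_at(P3, (), A3)                            # ac + ab
assert P4 == ('+', ('*', 'a', 'c'), ('*', 'a', 'b'))

# non-unicity: a different valid sequence reaches the same program
alt = apply_at(apply_at(apply_at(apply_at(P1, (), A2), (1, 2), A1), (2, 2), A1), (), A3)
assert alt == P4
```

Replaying a candidate sequence this way, checking applicability at each step and comparing the final term to the target, is precisely how a proof of equivalence is verified.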
\paragraph*{The need for a verifiable procedure} A key motivation of our work is to enable in a safe and provably correct way the use of machine learning for program equivalence by ensuring no false positive can be produced. For full automation of the process, we focus on ensuring correctness in case an equivalence result is computed by the system. That is, our system by design answers only with a probability of confidence that the two programs are not equivalent, but \emph{it produces a verifiable procedure to assess equivalence} otherwise. We believe such an approach is key for a practical, automated deployment of neural networks for program equivalence: verifiably proving equivalence to ensure no false positive, while tolerating a moderate amount of false negatives (i.e., missing that two programs were in fact equivalent).
Applications of such a system include for example the automatic generation and correction of exercises for students, where they typically need to prove equivalence between two formulas by successive application of other formulas/axioms. Languages like Matlab could use interactive checking of the equivalence between the expression being typed and the pre-existing library implementations (e.g., BLAS-based \cite{goto2008high}) to use instead accelerated implementations when possible in real-time. But we have designed and evaluated our system in a robust enough way to be applicable to a wide variety of languages and problems, as long as they can be cast in the framework in Sec.~\ref{sec:appendix:progequivframework}.
\paragraph*{The space of equivalences} Intuitively, our approach to program equivalence is as follows. We can intellectually reason on a graph for equivalent programs where each node represents a distinct program in the language, and two nodes (i.e., two different programs) are connected by a directed edge iff the source node can be rewritten as the target node by the application of a single one of the pre-defined axioms for equivalence. The edge is labeled by the axiom used and the specific position in the source node's program to where it needs to be applied to obtain the program in the target node.
Then there will be one or more paths in this graph from the two nodes modeling the two input programs if they are equivalent (one can be rewritten into the other while preserving semantics); and no path if no such rewrite is possible, that is the programs would be not equivalent in our framework. Exposing a path between two nodes is sufficient to prove the equivalence of their associated programs.
This path is exactly a sequence of rewrite rules from one program to another. To test the correctness of an arbitrary sequence, i.e., verify if this path exists in the graph and assess equivalence if it does, one then needs to simply apply the proposed sequence to one of the input programs: verify at each step that the rewrite in the sequence is indeed applicable (by a simple check of the applicability of the axiom at this particular program point), and eventually ensure the rewritten program is identical to the other input one. This test can be computed in time mostly linear with the program size in our framework, and when successful it implements a constructive proof of equivalence between the two programs.
\paragraph*{Pathfinding equivalence proofs}
When formulating the program equivalence problem this way, we can then view its solution as learning how to build at least one feasible path between any two pairs of nodes in the above graph, when it can exist. We can see that by design, there is a lot of redundancy in this space: the same labeled path will occur between many different pairs of programs (e.g., those where only the variable symbols differ), and there are typically many paths between the same two (equivalent) programs. This creates opportunities for the system to learn program representation and path construction techniques more easily.
Our key contribution is the development of a deep learning framework that learns this procedure automatically. The neural network system we build is trained by randomly sampling this graph, with samples made of two nodes and a path between them when training on equivalent programs, and an empty path otherwise. We specifically learn a generalization of the problem of finding paths in this graph as follows. We represent input programs in a carefully-crafted normalized dataflow-like graph encoded as a gated graph neural network \cite{Scarselli09,Beck18}, to enable structural, size-tolerant reasoning by the network on the inputs. It is combined with a global attention-based mechanism and a memory-based LSTM \cite{Hochreiter97} decoder which can memorize graph changes for producing the rewrite sequence and enable path-size tolerant reasoning, while following the properties of the axioms for equivalence.
In a nutshell, we make the network learn a stochastic approximation of an iterative algorithm that would be able to construct a feasible path (when possible) between any two pairs of nodes in this equivalence graph, but trained simply by randomly sampling pairs of nodes and one carefully labeled path between them. This avoids entirely the need to craft smart exploration heuristics to make this path-finding problem feasible in practice. This is instead what we let the neural network learn automatically; and specifically why we implemented graph neural networks to solve this problem \cite{Scarselli09,Xu17}. We rely on the network to suggest a transformation path by inference, and then verify its validity in linear time.
\subsection{System Overview}
\label{subsec:appendix:systemoverview}
\begin{figure*}
\includegraphics[width=14cm]{./images/Full.png}
\caption{\texttt{pe-graph2axiom} System Overview}
\label{fig:Full:appendix}
\end{figure*}
To implement our approach, we randomly enumerate valid sentences in a language, together with a set of axioms of equivalence expressible as semantics-preserving rewrite rules between them.
The system in Fig.~\ref{fig:Full:appendix} takes as input two programs, each given as a symbolic tree encoding the dataflow graph of an expression computation, and eventually produces a sequence of axioms, along with their positions of application (nodes), that can be used to rewrite one input program into the other.
As each axiom is produced, it is checked to ensure it is a legal application within the grammar; if the transformed program matches the target, then a correct proof of equivalence has been found.
To train the system, we generate pairs of equivalent programs by iterating the axioms with random probability on one program, thereby generating both a path to equivalence and the target program. Random programs are generated so as to respect the grammar defined. The training set is then appropriately selected from these random samples, as detailed in Sec.~\ref{sec:suppl:neuralnetwork}.
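The training-pair generation described above can be sketched as a random walk over equivalent programs. The toy version below is illustrative only (tuple-encoded scalar expressions, a tiny axiom set that includes a reverse axiom so some rewrite is always applicable; all names are our own, not the paper's).

```python
import random

# expressions as nested tuples (op, left, right); each axiom is a pair
# (applicability test, rewrite); 'A1r' is the reverse of A1, reflecting the
# bidirectional nature of the axioms and keeping the walk from getting stuck
AXIOMS = {
    'A1':  (lambda e: isinstance(e, tuple) and e[0] == '*' and e[1] == 1,
            lambda e: e[2]),                 # 1*x -> x
    'A1r': (lambda e: True,
            lambda e: ('*', 1, e)),          # x -> 1*x
    'A3':  (lambda e: isinstance(e, tuple) and e[0] == '+',
            lambda e: ('+', e[2], e[1])),    # x+y -> y+x
}

def positions(e, pos=()):
    # enumerate all subterm positions (paths of child indices)
    yield pos
    if isinstance(e, tuple):
        for i in (1, 2):
            yield from positions(e[i], pos + (i,))

def subterm(e, pos):
    for i in pos:
        e = e[i]
    return e

def apply_at(e, pos, rule):
    if not pos:
        return rule(e)
    out = list(e)
    out[pos[0]] = apply_at(e[pos[0]], pos[1:], rule)
    return tuple(out)

def random_pair(src, steps, rng=random):
    # one training sample: (source program, rewrite sequence, target program);
    # the target is equivalent to the source by construction
    expr, seq = src, []
    for _ in range(steps):
        options = [(name, pos) for name, (ok, _) in AXIOMS.items()
                   for pos in positions(expr) if ok(subterm(expr, pos))]
        name, pos = rng.choice(options)
        expr = apply_at(expr, pos, AXIOMS[name][1])
        seq.append((name, pos))
    return src, seq, expr

src, seq, tgt = random_pair(('+', 'a', ('*', 1, 'b')), steps=5)
# replaying the recorded sequence on the source must reproduce the target
replay = src
for name, pos in seq:
    replay = apply_at(replay, pos, AXIOMS[name][1])
assert replay == tgt
```

Each generated triple carries both a target program and a valid path to it, which is exactly the supervision signal the network is trained on.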
\emph{Node initialization} initializes the graph neural network, converting the input programs' text (e.g., $(a+(b+c))$) into nodes and edges in the \emph{Graph Neural Network} \cite{Scarselli09,Xu17}.
The details of the network are covered in Sec.~\ref{sec:progequivdnn}. In a nutshell, the key principle is to combine a memory-based neural network approach, e.g., using Long-Short Term Memory (LSTM) \cite{Hochreiter97} neurons and a graph neural network design (which uses Gated Recurrent Units (GRUs) internally) \cite{Beck18} that matches our program graph representation. \emph{Token embedding} is a neural network layer in which tokens are assigned a learnable multidimensional embedding vector \cite{Mikolov13}.
Each layer in \emph{LSTM 2 layers} has 256 neurons, which support sequence generation.
\emph{Token generator} is the final output portion of the network. It learns to output the tokens based on the current LSTM hidden states and the \emph{Global Attention} from the graph neural network. As each token is output, it feeds back into the LSTM layer through the embedding layer to affect its next state. We use a sequence
generation principle, using a global attention mechanism \cite{luong15} to allow observation of program graph node information while generating the axiom and location on which it is applied. As developed below, we specifically study the robustness of our approach to generate proofs of increasingly complex length, contrasting models to output the entire path at once with \texttt{pe-graph2axiom} which incrementally builds the sequence one step at a time, as shown in Sec.~\ref{sec:suppl:additionalresults}.
\subsection{Program Equivalence Framework}
We now present the formalism we use in this work to represent programs and their equivalences. We carefully co-designed the chosen problem representation and the (graph) neural network approach to make the best use of machine learning via deep networks, as discussed in Sec.~\ref{sec:suppl:neuralnetwork}.
\subsection{Program Representation}
A key design aspect is to match the capability of the neural network to model the input as a walkable graph with the actual input program representation to be handled. We therefore model programs in a dataflow-like representation (i.e., a directed graph), using a single root/output node, that is programs produce a single final value, at their root.
Note we restrict programs evaluated in our work to be directed acyclic graphs, and ensure every node has a single successor: if a value is used multiple times, nodes representing this value are replicated in the input program accordingly, as shown in Fig.~\ref{fig:appendix:treeexamples} where several nodes modeling the same variable $a$ are used in the same program.
A program is represented by its program graph, defined as follows.
\begin{definition}[Program graph node]
\label{def:proggraphnode}
A node $n \in N$ in the program graph models n-ary operations and input operands. A node produces a value which can be consumed by any of its immediate successors in the graph. When a node has no predecessor, it models an input value. The output value for the computation is produced by the unique root node $n_{root}$ of the graph, the only node without successor.
\end{definition}
\begin{definition}[Program graph directed edge]
\label{def:proggraphedge}
A directed edge $e_{n_1,n_2} : n_1 \rightarrow n_2$ with $n_1, n_2 \in N$ in the program graph connects the producer of a value ($n_1$) to a node consuming this value in the computation.
\end{definition}
\begin{definition}[Program graph]
\label{def:proggraph}
A program graph $G$ is a directed dataflow graph modeling the computation, made of nodes $n_i \in N$ and edges $e_{n_i,n_j} \in E$ as defined in Def.~\ref{def:proggraphnode} and Def.~\ref{def:proggraphedge}. That is, $G = \langle n_{root}, N, E \rangle$. There is no dangling edge nor unconnected node in $G$.
\end{definition}
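To make these definitions concrete, the following minimal Python sketch (our own illustration, not part of the system) encodes program graphs per Defs.~\ref{def:proggraphnode}--\ref{def:proggraph}, with a replicated node for every use of a value:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative encoding of the program graph definitions above. The names
# (Node, count_nodes) are ours; edges are implicit: each child produces a
# value consumed by its unique parent, so every node has a single successor.
@dataclass
class Node:
    label: str                     # operator ("+s", "*m", ...) or terminal ("a", "A", ...)
    children: List["Node"] = field(default_factory=list)

    @property
    def is_terminal(self) -> bool:
        return not self.children   # a node with no predecessor models an input value

def count_nodes(root: Node) -> int:
    # No sharing: every use of a value is a distinct node, so a plain
    # recursive walk visits each node exactly once.
    return 1 + sum(count_nodes(c) for c in root.children)

# (a +s a): the variable a is used twice, so it is replicated into two
# distinct terminal nodes, keeping every node's successor unique.
expr = Node("+s", [Node("a"), Node("a")])
```

Replication keeps the graph a tree rooted at the unique output node, which is what makes the simultaneous traversals used later for equivalence checking linear-time.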
\paragraph*{Language of linear algebra expressions} We developed a sufficiently complex language, capturing rich linear algebra expressions, to evaluate our work carefully. Specifically, we support 3 types of data/variables in the program: scalars, vectors and matrices. We use the standard notation $a,\vec a,A$ for scalars, vectors and matrices, respectively.
We evaluate using different variable names for each of the 3 types above, along with their identity and absorbing elements.
We also model a rich set of operators, mixing different unary and binary operations for each type. Specifically, we support $*_s,+_s,-_s,/_s$ between scalar operands, $+_v,-_v,*_v$ between vectors and $+_m,-_m,*_m$ for matrices. For $-,/$ we also support their unary version for all types, e.g. $^{-1_{s}}$ for unary scalar inversion and $-_{m}$ for unary matrix negation. For example $a^{-1_s}$ computes to $1/a$.
We support two specific unary matrix operations, transpose $^{t_m}$ and matrix inversion as $^{-1_m}$. The operators designate the type of their result, and hence the $*_v$ supports scalar, vector, and matrix operands, and $*_m$ supports scalar and matrix operands. Operator types facilitate the learning of the program embedding, avoiding the need to learn type propagation.
\paragraph*{Examples} Programs of the form $A (B C^t D) E^{-1}$, $\vec a + b^{-1}\vec c-0\vec e$, $(a+b)+(c(d/e))$, $(aA+bB)C^t$, etc., can be parsed trivially into our representation: one simply needs to provide a unique name for each operand and operator type (possibly via some analysis, or simple language design principles), that is, avoiding overloading the semantics of operators and operands. Note that the semantics is never explicitly provided to our DNN approach; it is learned from examples. There will be no example of the form, e.g., $a+A$, an invalid program in our language.
We believe a sensible approach is to develop a clean, regular grammar for the language to be handled, as these are implicitly concepts the DNN will need to learn. We did so, using a classical LL(1) grammar description of our linear algebra language. This is not a requirement of our approach, as one can arrive at the desired input program graph by any means, but we believe keeping the reasoning on the language structure simple enough is an important design aspect.
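As a toy illustration only (our sketch, not the system's actual LL(1) grammar), a parser for the parenthesized prefix form used for generated programs later in this document, e.g. \texttt{(+s a (*s b c))}, fits in a few lines of Python:

```python
# Minimal parser for the parenthesized prefix notation "(op L R)" / "(op L)".
# Programs are returned as nested tuples; terminals as bare strings.
def tokenize(s):
    return s.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        op = tokens.pop(0)
        children = []
        while tokens[0] != ")":
            children.append(parse(tokens))
        tokens.pop(0)              # consume ")"
        return (op, *children)
    return tok                     # terminal symbol

tree = parse(tokenize("(+s a (*s b c))"))
```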
\subsection{Axioms of Equivalence}
A central aspect of our approach is to view the problem of program
equivalence as finding a sequence of locally-correct rewrite rules
that each preserve the semantics, \emph{thereby making incremental reasoning possible}. We explicitly do not
consider non-semantics-preserving ``axioms''. A rich structure of alternate but
equivalent ways to rewrite one program to another makes the problem
easier to sample and more amenable to machine learning. Semantics-preserving axioms enable incremental per-axiom reasoning and guarantee semantics preservation without requiring complicated semantic analysis, while still manipulating a very rich space of transformations. To illustrate this, we specifically design axioms that perform complex graph modifications, such as node deletion or creation, subtree manipulation, multi-node graph changes, etc.
A graph pattern can be viewed as a pattern-matching rule on graphs and its precise applicability criteria. It can also be viewed as a sentential form of the language grammar, e.g. \texttt{ScalarVal PlusOp ScalarVal} is a pattern, if the grammar is well formed.
\begin{definition}[Graph pattern]
\label{def:graphpattern}
A graph pattern $P$ is an unambiguous structural description of a (sub-)graph $G_P$, which can be deterministically matched in any program graph $G$. We have $P = \langle G_P, M_n, M_e \rangle$ where, for each node $n_i \in N^{G_P}$, $\{n_{match}\} = M_n(n_i)$ returns the set of node values $n_{match}$ accepted to match $n_i$ in a graph $G$. For $n_i,n_j \in N^{G_P}$, $e_i = M_e(n_i, n_j)$ returns the set of edges between $M_n(n_i)$ and $M_n(n_j)$ to be matched in $G$. A pattern $G_P$ is matched in $G$ if (a) $\forall n_i \in N^{G_P},~ \exists~ n_m = M_n(n_i) \in N^G$; (b) $\forall e_i \in E^{G_P}, \exists~ e_{M_n(n_i),M_n(n_j)} = M_e(n_i, n_j) \in E^G$; and (c) $\not \exists e_{M_n(n_i),M_n(n_j)} \in E^G \ne M_e(n_i, n_j)$.
\end{definition}
Note when a graph pattern models a rewrite, $M_n$ and $M_e$ are adjusted accordingly to output the rewrite of a node $n \in N^G$ into its desired value, instead of the set of acceptable nodes from $n \in N^{G_P}$.
\begin{definition}[Axiom of equivalence] An axiom $A$ is a semantics-preserving rewrite rule $G' = A(n,G)$ that can arbitrarily modify a program graph $G$, and produces another program graph $G'$ respecting Def.~\ref{def:proggraph} with identical semantics to $G$. We note $A : \langle P_{match}, P_{replace} \rangle$ an axiom, where $P_{match}, P_{replace}$ are graph patterns as per Def.~\ref{def:graphpattern}. The application of axiom $A$ to node $n$ in $G$ is written $A(n,G)$.
\end{definition}
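A hypothetical axiom can be sketched as a pair of Python functions standing in for $P_{match}$ and $P_{replace}$, applied at a node addressed by a path of child indices (all names here are our illustration, not the system's implementation):

```python
# Commute as a (match, replace) pair over tuple-encoded subtrees ("+s", L, R).
def commute_match(node):
    return isinstance(node, tuple) and node[0] in ("+s", "*s", "+v", "+m")

def commute_replace(node):
    op, left, right = node
    return (op, right, left)

def apply_at(prog, path, match, replace):
    """Rewrite the node reached by following `path` (child indices from
    the root); the match pattern must succeed at the target node."""
    if not path:
        assert match(prog), "pattern does not match at target node"
        return replace(prog)
    op, *kids = prog
    kids[path[0]] = apply_at(kids[path[0]], path[1:], match, replace)
    return (op, *kids)

g = ("+s", "a", ("*s", "b", "c"))
g2 = apply_at(g, (), commute_match, commute_replace)   # a + bc -> bc + a
```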
We can compose axioms to form a complex rewrite sequence.
\begin{definition}[Semantics-preserving axiom composition]
\label{def:axiomcompos}
Given a sequence $S:~ A_1(n_1,A_2(n_2,...,A_m(n_m,G)))$ of $m$ axiom applications, $S$ is a semantics-preserving composition if, for each $G_j = A_i(n_i,G_i) \in S$, $P_{match}^{A_i}$ succeeds on the subgraph rooted at $n_i$ in $G_i$, and $G_j$ is obtained by applying $P_{replace}^{A_i}$ to $n_i$.
\end{definition}
\begin{theorem}[Equivalence between program graphs]
\label{th:progequiv}
Given a program $G$, if $G' = S(G)$ where $S$ is a semantics-preserving sequence as per Def.~\ref{def:axiomcompos}, then $G \equiv G'$: the programs are equivalent under the axiom system used in $S$.
\end{theorem}
This is a direct consequence of using only semantics-preserving axioms: no rewrite can individually alter the semantics, so neither can their incremental composition. This leads to the formal problem we are addressing:
\begin{corollary}[Sufficient condition for program equivalence]
\label{th:progequivmatching}
Given two programs $G$ and $G'$, if there exists a semantics-preserving sequence $S$ such that $G' = S(G)$, then $G \equiv G'$.
\end{corollary}
Note here $=$ means complete structural equivalence between the two graphs: they are identical in structure \emph{and} node labels/values. Determining $G = G'$ amounts to visiting both graphs simultaneously, e.g., in a depth-first search from the root, to ensure structural equivalence while verifying that the same node labels appear in both at the same positions. This is trivially implemented in time linear in the graph size.
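The simultaneous depth-first comparison described above can be sketched as follows, on tuple-encoded programs (our illustration):

```python
# Linear-time structural equality: walk both graphs simultaneously,
# comparing operator labels and arity at every node.
def graphs_equal(g1, g2):
    if isinstance(g1, tuple) and isinstance(g2, tuple):
        return (len(g1) == len(g2)
                and g1[0] == g2[0]
                and all(graphs_equal(a, b) for a, b in zip(g1[1:], g2[1:])))
    return g1 == g2                # both must be the same terminal label
```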
\paragraph*{Axioms for linear algebra expressions} We have implemented a total of 143 different axioms for our language, grouped for learning by our network into 14 multi-typed rewrite rules described later in Table~\ref{tab:TransformPct}. They all follow established linear algebra properties. Note that different data types have different axioms following typical linear algebra rules; e.g., matrix multiplication does not commute, but scalar and vector multiplications do. Examples of axioms include $x(yz) \rightarrow (xy)z$, $X-X\rightarrow O$, $-(\vec x - \vec y) \rightarrow \vec y - \vec x$, or $X^{t^t} \rightarrow X$; an exhaustive list is given in Sec.~\ref{sec:Axioms}.
In our experiments, we presume matrix and vector dimensions are appropriate for the given operation. Such dimension compatibility checks are simple to implement by e.g. introducing additional nodes in the program representation, but are not considered in our test language.
\paragraph*{Examples} We illustrate axiom-based rewrites using axioms presented in main Table~\ref{tab:TransformPct}. Note that axiom names follow the structural changes applied. For example, we have $a+b \equiv b+a:~\{a+b\}= Commute(\{+\},\{b+a\})$ and $a+b+c \equiv b+c+a:~\{a+b+c\}= Commute(\{+_1\},Commute(\{+_2\},\{b+c+a\}))$. Note that we refer to different nodes carrying the same symbol (e.g., $+_2$) by subscripting them with their order in a DFS traversal of the program graph, starting from the unique root. We have $0 \equiv a-a:~\{0\}= Cancel(\{-\},\{a-a\})$. These can be combined in complex paths, e.g., $b+c \equiv c+b+(a-a):~\{b+c\}= Commute(\{+\},Noop(\{+\},Cancel(\{-\},\{c+b+(a-a)\})))$. Such axioms are developed for scalars, matrices and vectors, and include complex rewrites such as distributivity rules and transpositions. A total of 143 axioms are used in our system.
\subsection{Space of Equivalences}
We now define the search space being explored in this work, i.e., the exact space of solutions on which the DNN system formally operates, and that we sample for training.
\begin{definition}[Graph of Program Equivalences]
\label{def:graphofequiv}
Given a language $\mathcal{L}$. The directed graph of equivalences between programs is $GPE = \langle N^{equiv}, E^{equiv}\rangle$ such that $\forall l \in \mathcal{L}, n_l \in N^{equiv}$, and $e^{A_i,x}_{n_i,n_j} : n_i \rightarrow n_j \in E^{equiv}$ iff $n_j \equiv A_i(x,n_i)$, $\forall A_i$ in the axiom system and $x$ a position in $n_i$ where $A_i$ is applicable.
\end{definition}
In other words, the graph has one node per possible program in the language $\mathcal{L}$, and a single axiom application leads to connecting two nodes. We immediately note that $GPE$ is a (possibly infinite) multigraph, and contains circuits.
\begin{theorem}[Program equivalence with pathfinding]
Given two programs $n_i,n_j \in N^{equiv}$. If there is any path from $n_i$ to $n_j$ in $GPE$, then $n_i \equiv n_j$.
\end{theorem}
The proof is a direct consequence of Def.~\ref{def:graphofequiv}. In this work, we randomly sample this exact graph $GPE$ to learn how to build paths between arbitrary programs, themselves represented as nodes in the $GPE$. As it is a multigraph, there may be many different sequences proving the equivalence between two programs; exposing one is sufficient to prove equivalence.
\begin{corollary}[Semantics-preserving rewrite sequence]
Any directed path in $GPE$ is a semantics-preserving rewrite sequence between the programs, described by the sequence of axioms and program position labeling the edges in this path. This sequence forms the proof of equivalence.
\end{corollary}
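Proof search over $GPE$ can be sketched as a breadth-first traversal; here \texttt{successors} is an assumed callback enumerating all single-axiom rewrites of a program (our illustration, not the learned pathfinding this work proposes):

```python
from collections import deque

# BFS over the graph of program equivalences: nodes are programs, edges are
# single axiom applications; the returned label sequence is the proof.
def find_proof(start, goal, successors, max_len=12):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        prog, proof = frontier.popleft()
        if prog == goal:
            return proof
        if len(proof) >= max_len:
            continue
        for axiom, nxt in successors(prog):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, proof + [axiom]))
    return None                    # no proof within max_len axioms

# Tiny demo GPE over strings with a single 'swap' axiom (string reversal).
succ = lambda p: [("swap", p[::-1])]
```

An explicit traversal like this is only tractable on tiny spaces, which is precisely why we learn the traversal heuristic instead.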
We believe that ensuring there are usually many ways to compute a proof of equivalence in our framework is key to enabling the DNN approach to automatically learn the pathfinding algorithm for building such proofs. More compact representations of this space of equivalences are clearly possible, including folding nodes in the equivalence graph for structurally similar programs and folding equivalent paths between nodes. When building, e.g., a deterministic pathfinding algorithm, such space-size reduction would bring complexity benefits \cite{kaplan1969regular,barthou2002}. We believe that for the efficient deployment of graph-to-sequence systems, exposing significant redundancy in the space facilitates the learning process; it also alleviates the need to reason about the properties of this space to find an efficient traversal heuristic.
\section*{Document Overview}
\label{sec:appendix:intro}
\input{appendix-toc}
\section{Dataset generation}
\label{sec:appendix:datasetgen}
\subsection{Generation of Examples}
\label{sec:appendix:genexamples}
Machine learning benefits from large training sets, so to produce this data we created algorithms that generate programs conforming to a given language grammar, along with target programs reachable by applying a given axiom set. This process lets us create as large and varied a dataset as our machine learning approach requires.
Algorithm \ref{alg:GenP1} provides an overview of the full program generation
algorithm. For this generation process, we define a set of operations and
operands on scalars, matrices, and vectors. For our process, we presume matrix
and vector dimensions are appropriate for the given operation as such dimension
checks are simple to implement and are not considered in our procedure.
Note the token syntax here is \emph{exactly} the one used by our system,
and is \emph{strictly} semantically equivalent to the mathematical notations used to describe these operations, e.g. $1_{\mathbb{N}}$ is \texttt{1}.
\begin{itemize}[noitemsep,topsep=0pt,wide=0pt]
\item Scalar operations: \texttt{+s -s *s /s is ns}, where \texttt{is} is the unary reciprocal and \texttt{ns} is the unary negation.
\item Matrix operations: \texttt{+m -m *m im nm tm}, where \texttt{im} is matrix inversion, \texttt{nm} negates the matrix, and \texttt{tm} is matrix transpose.
\item Vector operations: \texttt{+v -v *v nv}, where \texttt{nv} is the unary negation.
\item Scalars: \texttt{a b c d e 0 1}
\item Matrices: \texttt{A B C D E O I}, where \texttt{O} is the empty matrix and \texttt{I} is the identity matrix.
\item Vectors: \texttt{v w x y z o}, where \texttt{o} is the empty vector.
\item Summary: 16 operations, 20 terminal symbols
\end{itemize}
Initially, \texttt{GenP1} is called with \texttt{GenP1("+s -s *s /s +s -s *s /s +s -s *s /s is ns +m -m *m +m -m *m +m -m *m im nm tm +v -v *v +v -v *v +v -v *v nv",0.94)}.
In this initial call, binary operations are repeated so that they are more likely to be created than unary operations, and the initial probability that a child of the created graph node will itself be an operation (as opposed to a terminal symbol) is set to 94\%. Since the algorithm subtracts 19\% from this probability at each level of the graph, trees are limited to 7 levels.
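The depth cap follows directly from the probability decay; a short computation (our sketch) reproduces it:

```python
# Child-expansion probability starts at 0.94 and loses 0.19 per level.
p, level = 0.94, 1        # level 1: the root operation
while p > 0:              # a child at the next level can itself be an operation
    level += 1
    p -= 0.19
# level == 6: the deepest possible operation; its children are forced to be
# terminals, so generated trees span at most 7 levels.
```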
Algorithm \ref{alg:GenP1} starts execution by randomly selecting an
operation from the set provided as input. When \texttt{GenP1} is called
recursively, the operation set is limited such that the operation produces
the correct type as output (scalar, matrix, or vector). Lines 3 through 15
of the algorithm show an example case where the \texttt{*s} operation is
processed. This operation requires scalar operands. If the probability of
children at this level is met, then \texttt{GenP1} is called recursively
with only scalar operands available, otherwise a random scalar operand is
chosen.
The text for algorithm \ref{alg:GenP1} does not show the process for all
operations. Certain operations, such as \texttt{*v}, have a variety of
operand types that can be chosen. The \texttt{*v} operator is a multiplication
which produces a vector. As such, $Av$ (matrix times vector), $bv$ (scalar
times vector), or $vc$ (vector times scalar) are all valid
options and will be chosen randomly.
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetAlgoLined
\KwResult{Prefix notation of computation with parenthesis}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{Ops, P}
\Output{(op L R) or (op L)}
\BlankLine
op = select randomly from Ops
// Create subtree for chosen op
\If{op == "*s"}{
\eIf{random < P}{
L = GenP1("+s -s *s /s +s -s *s /s is ns",P-0.19)
}{
L = select random scalar operand
}
\eIf{random < P}{
R = GenP1("+s -s *s /s +s -s *s /s is ns",P-0.19)
}{
R = select random scalar operand
}
return (op L R)
}
// Other ops may have more complex options for children types.
// (For example, "*m" may have a matrix multiplied by a scalar or matrix)
...
\caption{GenP1}
\label{alg:GenP1}
\end{algorithm}
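An executable Python rendering of Algorithm~\ref{alg:GenP1}, restricted to scalar operations for brevity (a sketch under our own naming, not the system's code):

```python
import random

# Scalar-only GenP1: pick an operation, then for each operand either recurse
# with decayed probability P - 0.19 or emit a random terminal symbol.
SCALAR_OPS = ["+s", "-s", "*s", "/s"] * 3 + ["is", "ns"]   # binary ops repeated
SCALAR_TERMS = ["a", "b", "c", "d", "e", "0", "1"]

def gen_p1(p):
    op = random.choice(SCALAR_OPS)

    def child():
        if random.random() < p:
            return gen_p1(p - 0.19)
        return random.choice(SCALAR_TERMS)

    if op in ("is", "ns"):                 # unary reciprocal / negation
        return f"({op} {child()})"
    return f"({op} {child()} {child()})"

random.seed(0)
program = gen_p1(0.94)      # a parenthesized prefix program, e.g. "(*s a (+s b 1))"
```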
After generating a program which follows the grammar rules of our language,
algorithm \ref{alg:GenP2} will produce a new program along with a set of
rewrite rules which transform the source program to the target program.
Algorithm \ref{alg:GenP2} receives as input the source program (or
subprogram) along with the \texttt{path} to the current root node of the
source program. If the source program is a terminal symbol, the algorithm
returns with no action taken. Otherwise, the program starts with an
operation and the algorithm proceeds to process options for transforming
the given operation. For our WholeProof10 and WholeProof5 datasets, algorithm \ref{alg:GenP2} is only called once, simplifying the possible node order and proof complexity. For the AxiomStep10 and AxiomStep5 datasets, algorithm \ref{alg:GenP2} is called multiple times, allowing for the possibility that, after a path is chosen for one axiom, any node can be accessed for the next axiom (including the same node).
As shown on line 10 of the algorithm, when the operation and children meet the
conditions necessary for a rewrite rule (in this case \texttt{NeutralOp}), the rule
is applied with some probability (in this case 50\%). Note that before
processing a node, the left and right operands are further analyzed to
determine their operators and operands as well (or $\bot$ if the child is a
terminal). Processing the left and right operands allows for complex axioms
to be applied, such as distribution or factorization. When a rule is
applied, the rewrite rule is added to the
rewrite rule sequence and
a new target program is generated for any remaining subtrees.
When creating the rewrite rules for subtrees, the \texttt{path} variable is updated as rewrites are done. In the case of \texttt{NeutralOp}, the current node is being updated, so the path is not changed. But in the case of the Commute rule, the return would be generated with \texttt{(op GenP2(R,path."left ") GenP2(L,path."right "))} which creates rewrite rules for the prior right and left operands of the \texttt{op} and updates the path used to the new node positions.
In order to analyze nearly equal programs, illegal rewrites can be optionally enabled;
for example, commuting a subtraction operation or mutating one operation into another.
In that case, the \texttt{GenP2} process continues to create a target program, but
\texttt{transform\_sequence} is set to \texttt{Not\_equal}.
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetAlgoLined
\KwResult{Second program and transform\_sequence}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{P1, path}
\Output{P2}
\BlankLine
\If{terminal symbol}{return P1}
op = find operator of P1
L = find left operand of P1
R = find right operand of P1
Lop,LL,LR = operator and operands of left child
Rop,RL,RR = operator and operands of right child
// Randomly apply transform if allowed
\If{random < 0.5 and ((op == "+v" and (L == "o" or R == "o")) or (op == "-v" and R == "o"))}{
append path."NeutralOp " to transform\_sequence
// Eliminate unnecessary operator and 0 vector
\eIf{L == "o"}{
return GenP2(R,path)
}{
return GenP2(L,path)
}
}
\caption{GenP2}
\label{alg:GenP2}
\end{algorithm}
After these generation algorithms are run, a final data preparation
process is done which prunes the data set for the learning
algorithm. The pruning used on our final data set ensures that the
$(P1,P2)$ program pair totals 100 tokens or fewer (where a
token is an operation or terminal), that every node in the graph is reachable from the root by a path of length 6 or less,
and that there are 10 or fewer rewrite rules applied. Within these restrictions, we assert that our random production rule procedure has a non-zero probability of producing any program allowed by the grammar. The pruning also ensures that there are no
lexically equivalent program pairs, and it removes some of the cases with fewer than 10
rewrite rules to bias the dataset toward longer rewrite
sequences. Table \ref{tab:TransformPct} details the distribution of
rewrite rules created by the full process. Section \ref{sec:Axioms} details
all axioms when variable types and operators are considered.
We produce equivalent program samples by pseudo-randomly applying axioms to one randomly generated program, producing a rewrite sequence and the associated equivalent program. Given a randomly selected node in the program graph, our process checks which axiom(s) can be applied. E.g., the $+_m$ operator can have the Commute axiom applied or, depending on its subtrees, the Factorleft axiom, as discussed in Sec.~\ref{sec:expresults}. Generally, we choose whether or not to apply an axiom with 50\% probability, so that \texttt{pe-graph2axiom} is forced to rely on analysis of the two programs to determine whether an axiom is applied, instead of learning a bias from local node features.
\subsection{Intermediate program generation}
The intermediate program generation algorithm is very similar to algorithm \ref{alg:GenP2}. For
program generation of the target program, algorithm \ref{alg:GenP2} will check that a node can
legally apply a given rule, apply the rule with some probability, record the action,
and process the remaining program. For intermediate program generation, we begin with a
$P1$ and a rewrite rule. We follow the path provided to identify the node,
check that a node can legally accept a rule, apply the
rule, and return the adjusted program.
If a rule cannot legally be applied, $P1$ is not successfully transformed.
If a rule can be legally applied to $P1$, the program is compared
lexically to $P2$ and if they match then equivalence has been proven.
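The check loop just described can be sketched as follows; \texttt{apply\_axiom} is an assumed helper that returns the rewritten program, or \texttt{None} when the rule cannot legally be applied at the addressed node:

```python
# Replay a proposed proof: apply each (path, axiom) step to P1; if every
# step is legal and the result matches P2 lexically, equivalence is proven.
def check_proof(p1, p2, steps, apply_axiom):
    prog = p1
    for path, axiom in steps:
        prog = apply_axiom(prog, path, axiom)
        if prog is None:
            return False           # rule could not legally be applied
    return prog == p2              # lexical comparison proves equivalence

# Toy apply_axiom: Commute at the root of a scalar addition, else illegal.
def demo_apply(prog, path, axiom):
    if axiom == "Commute" and isinstance(prog, tuple) and prog[0] == "+s":
        return (prog[0], prog[2], prog[1])
    return None
```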
\subsection{Complexity of Proving Equivalence}
\label{subsec:complexityofprovingequiv}
Table~\ref{tab:count} shows the complexity of the solution space for proofs from our AxiomStep10 test dataset up to length 7 (deterministically computing all possible programs requires too many resources for longer proof lengths). The 'All Possible nodes and axioms' row gives the total number of proofs of a given length available in our problem space. The entry 5933 for a single axiom reflects that, for an AST depth of 7, there are 43 axioms applicable to all 63 possible operator nodes and 104 axioms applicable to the 31 nodes which may themselves have child operator nodes: $63 \times 43 + 31 \times 104 = 5933$. Subsequent columns select repeatedly from the same set, growing as $5933^2$ to $5933^7$. The 'Sample Node + Any Axiom' row is based on our 10,000-sample test dataset and counts the possible selections of any of the 14 axiom groups applied to any node in the program. The 'Sample Node + Legal Axiom' row counts only legal node-plus-axiom-group applications, and effectively gives the total number of programs derivable from the start program in the test dataset. The final row, 'Unique Programs from Sample', gives the total number of lexically unique programs derived from legal node and axiom sequences.
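The single-axiom entry of the table can be reproduced directly (a depth-7 binary AST has $2^6-1 = 63$ operator nodes, of which $2^5-1 = 31$ have operator children):

```python
# First-row count: 43 axioms apply at any of the 63 operator nodes, and 104
# axioms require one of the 31 nodes whose children can be operators.
single_step = 63 * 43 + 31 * 104       # 5933, as in the table
length_7 = single_step ** 7            # on the order of 2.6e26
```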
\begin{table*}[h!tb]
\vspace{-.5cm}
\caption{Counts for equivalence proof possibilities\vspace{-.3cm}}
\label{tab:count}
\small
\centering
\begin{tabular}{@{}lrrrrrrrr@{}}
\toprule
& & \multicolumn{7}{c}{Proof length in axioms} \\
Proof description & & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\
\cmidrule{1-1} \cmidrule{3-9}
All Possible nodes and axioms & & 5933 & 3.5E+07 & 2.1E+11 & 1.2E+15 & 7.4E+18 & 4.4E+22 & 2.6E+26 \\
Sample Node + Any Axiom & & 226 & 46900 & 1.5E+07 & 8.8E+09 & 5.0E+12 & 3.3E+15 & 2.7E+18 \\
Sample Node + Legal Axiom & & 11.2 & 77.8 & 931 & 15812 & 3.4E+05 & 8.2E+06 & 1.8E+08 \\
Unique Programs from Sample & & 9.2 & 47.4 & 264 & 1574 & 10052 & 65176 & 4.6E+05 \\
\bottomrule
\end{tabular}
\end{table*}
\section{Language and Axioms for Complex Linear Algebra Expressions}
\label{sec:Axioms}
\input{appendix-axiomsandlangdesc}
\section{Details on neural network model}
\label{sec:suppl:neuralnetwork}
\begin{figure*}
\includegraphics[width=14cm]{./images/Full.png}
\caption{\texttt{pe-graph2axiom} System Overview}
\label{fig:Full:appendix}
\end{figure*}
\input{figexample1appendix.tex}
Figure \ref{fig:Full:appendix} overviews the entire \texttt{pe-graph2axiom} architecture including sample generation, the graph-to-sequence network, the intermediate program generation, and lexical equivalence checker. In this section we will discuss the implementation details of these components.
\paragraph*{Graph neural network internal representation}
The sample generation discussed in section \ref{sec:samplegen}
provides input to the \textsf{Node Initialization} module in
Fig.~\ref{fig:Full:appendix} to create the initial state of our graph neural
network. For each node in the program graph, a node will be initialized in our
graph neural network. Each node has a hidden state represented by a
vector of 256 floating point values which are used to create an
embedding of the full meaning of the given node. Initially, all 256 dimensions of each node's hidden state are set to zero except for two.
Given $N$ tokens in our input program language, one of the dimensions from
1 through $N$ of a node will be set based on the token at the program position
that the node represents. For example, if the scalar variable $a$ is assigned to
be token 3 in our language, then the $a$ nodes of Fig.~\ref{fig:appendix:treeexamples} recalled below
would have their 3rd dimension initialized to 1.0. This is a one-hot encoding similar to that used
in neural machine translation models which leverage Word2vec \cite{Mikolov13}. The second non-zero
dimension in our node initialization indicates the tree depth, with the root of the program at depth 1. We set dimension $N$+$depth$ to 1.0; hence, the $a$ nodes in Fig.~\ref{fig:appendix:treeexamples}, which appear at level 2 or 3 in the graph, would set dimension $N+2$ or $N+3$ to 1.0.
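The initialization just described amounts to a 256-dimensional vector with exactly two non-zero entries; a sketch, assuming $N=36$ tokens (16 operations + 20 terminals) and with helper names of our own:

```python
# Hidden-state initialization: one-hot token id (dimensions 1..N) plus a
# depth indicator at dimension N + depth, both set to 1.0.
def init_node_state(token_id, depth, n_tokens, dim=256):
    state = [0.0] * dim
    state[token_id - 1] = 1.0            # token one-hot (1-indexed ids)
    state[n_tokens + depth - 1] = 1.0    # depth indicator
    return state

# e.g. token 3 (the scalar a in the running example) at depth 2, N = 36
state = init_node_state(token_id=3, depth=2, n_tokens=36)
```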
In addition to nodes correlating to all tokens in both input programs, we initialize
a root node for program comparison which has edges connecting to the root nodes of both programs. The root node does not represent a token from the language, but it is initialized with a 1.0 in a hidden state dimension reserved for its identification.
For a graph neural network, the edge connections between nodes are a
crucial part of the setup. In particular, to match the formulation of our problem, we must ease the ability of the network to walk the input program graphs. We therefore designed a unified graph input, where both program graphs are unified in a single graph using a single connecting root node; and where additional edges are inserted to make the graph fully walkable.
In our full model, we support 9 edge types and their reverse edges. The edge types are: 1) left child of binary
op, 2) right child of binary op, 3) child of unary op, 4) root node to
program 1, 5) root node to program 2, 6-9) there are 4 edge
types for the four node grandchildren (LL, LR, RL, RR). After the node
hidden states and edge adjacency matrix are initialized, the network is
ready to begin processing. This initial state is indicated in
figure \ref{fig:Network} by the solid circles in the lower left of the
diagram.
\begin{figure*}[h!tb]
\vspace{-.5cm}
\centering
\includegraphics[width=0.8\textwidth]{images/GGNNdetail.png}
\caption{Graph-to-sequence neural network data flow details.}
\label{fig:Network}
\vspace{-.5cm}
\end{figure*}
\paragraph*{Beam search}
A typical approach when using sequence-to-sequence systems is to
enable \emph{beam search}, the process of asking the network for multiple answers to
the same question. It is particularly relevant when
creating outputs which can be automatically
checked \cite{Chen19,ahmed18}. Beam search can be viewed
as proposing multiple possible axioms to apply. Given
the stochastic nature of the generation model, a beam width of $n$ can
be thought of as producing the $n$ most likely sequences given
the training data the model has learned on. Each proposal
can be checked for validity, and the first valid one is output by the
system, demonstrating equivalence. Our system builds on the neural network beam search provided by OpenNMT to create a 'system beam search' of variable width. In particular, we set the OpenNMT network beam search to 3, which constrains the token generator to produce 3 possible axiom/node proposals for a given pair of input programs. Using these 3 proposals, when our system beam width is 10, we build up to 10 intermediate programs that are processed in the search for a proof. To illustrate with a system beam width of 5: after $P1$ and $P2$ are provided to the neural network, 3 possible intermediate programs may be created (so long as all axioms are legal and produce no duplicates). After those 3 intermediates are processed, 9 possible new intermediates are created, all of which are checked for lexical equivalence with $P2$, but only 5 of which are fed back into the neural network for further axiom generation. This process continues for up to 12 axioms, at which point the system concludes that an equivalence proof cannot be found and the programs are likely not equivalent.
We evaluate in
Sec.~\ref{sec:expresults} beam sizes ranging from 1 to 10,
showing higher success with larger beams.
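The system beam search described above can be sketched as follows; \texttt{propose} stands in for the neural network (returning up to 3 axiom/node proposals) and \texttt{apply\_step} for the intermediate program generator, both assumed helpers:

```python
# System beam search: keep at most `beam` legal, non-duplicate intermediate
# programs per round, stop on lexical match with P2, give up after max_steps.
def system_beam_search(p1, p2, propose, apply_step, beam=10, max_steps=12):
    frontier, seen = [(p1, [])], {p1}
    for _ in range(max_steps):
        nxt = []
        for prog, proof in frontier:
            for step in propose(prog, p2)[:3]:     # network beam of 3
                out = apply_step(prog, step)
                if out is None or out in seen:
                    continue                       # illegal or duplicate
                if out == p2:
                    return proof + [step]          # proof found
                seen.add(out)
                nxt.append((out, proof + [step]))
        frontier = nxt[:beam]
        if not frontier:
            break
    return None        # no proof found; programs likely not equivalent

# Toy stand-ins: a single 'swap' step that reverses a string program.
propose = lambda prog, goal: ["swap"]
apply_step = lambda prog, step: prog[::-1]
```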
\section{Details on Experimental Results}
\label{sec:suppl:additionalresults}
\subsection{Complementary Results and Observations}
Table~\ref{tab:hyperparameter} describes part of our neural network hyperparameter tuning, showing that our golden model achieves results at least as high as the other variations explored. Note that the validation token accuracy is not especially high (below 90\%) despite the model's ability to predict fully correct proofs with over 93\% accuracy. This is because the training dataset can contain multiple examples of axioms for similar input programs. For example, proving $(a+b)(c+d) = (b+a)(d+c)$ requires commuting the left and right subexpressions. The training dataset could contain similar programs which are sometimes transformed first with a right Commute and then a left, or vice versa. Given this data, the network would learn to apply one or the other (it would not, for example, get trained to use associativity for these program pairs), hence the actual output may or may not match the validation target axiom. We discuss this further in section~\ref{sec:appendix:multiaxiom}.
\begin{table}[h!tb]
\caption{Hyperparameter experiments. Summary of best validation token accuracy result after 2 runs for up to 100,000 training iterations. The golden model has 256 graph nodes and decoder dimensions, 2 decoder LSTM layers, starts training with a learning rate of 0.8, and uses 10 steps to stabilize the GGNN encoder.}
\label{tab:hyperparameter}
\small
\centering
\begin{tabular}{@{}lrr@{}}
\toprule
Parameter & Value & Validation \\
& & token accuracy \\
\midrule
Golden model & & 83.89 \\
Graph node+decoder LSTM dimension & 192 & 83.89 \\
& 320 & 83.58 \\
Decoder LSTM layers & 1 & 83.53 \\
Initial learning rate & 0.75 & 83.76 \\
& 0.85 & 83.57 \\
GGNN stability steps & 12 & 83.19 \\
& 8 & 83.61 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph*{Training convergence}
Since our model trains on axiomatic proofs which may vary in order (allowing 2 or 3 options to be correct and occur in the training set), our training and token accuracies plateau below 90\% during training for AxiomStep10, as shown in Figure~\ref{fig:training}. Full test-set proof accuracies for beam width 10 exceed 90\%, but also plateau along with the training and validation results. This result differs from our WholeProof10 training, which achieves training and validation accuracies above 96\% because the expected axiom sequence is more predictable but, as we have seen, less generalized.
As another observation on generalization and overfitting, we note that figure~\ref{fig:training} shows a slight separation between the training and validation accuracies starting at around iteration 180,000. While the training accuracy rises slowly, validation accuracy plateaus, indicating slight overfitting on the training data. Yet our model continues to slowly increase in quality, with the model snapshot that scores best on both validation and test accuracies occurring at iteration 300,000. This is our golden model, with 93.1\% of P1 to P2 proofs accurately found using beam width 10.
\begin{figure}[h!tb]
\centering
\includegraphics[width=8cm]{./images/Training.png}
\caption{Model training percentage accuracy up to 300,000 iterations on AxiomStep10. Training and Validation accuracies are per-token on the target axioms in the samples. Test accuracies are for full correct proofs of P1 to P2.}
\label{fig:training}
\end{figure}
\paragraph*{Testing simpler models}
In addition to the sequence-to-sequence and graph-to-sequence models, we explored a feed-forward equal/not-equal classifier on a simple version of our language. That model uses an autoencoder to compute an embedding of each program and then classifies pairs based on those embeddings. It achieves a 73\% accuracy on identifying equivalent pairs in the test data, which, as expected, is much lower than the 93\% full-proof rate achieved with a graph-to-sequence proof generator on our full language. This simple experiment highlights the importance of a system that, by producing a verifiable proof, prevents the false positives a classifier can emit.
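To see why an embedding-based classifier can emit false positives that a checked proof never would, consider a toy stand-in (a bag-of-tokens "embedding" in place of the learned autoencoder; purely illustrative, not the paper's model):

```python
from collections import Counter

def embed(program: str) -> Counter:
    """Toy stand-in for a learned program embedding: a bag of tokens.
    Any embedding that discards structure has the same failure mode."""
    return Counter(program.replace(" ", ""))

def classify_equal(p1: str, p2: str) -> bool:
    # Declare programs equal when their embeddings coincide.
    return embed(p1) == embed(p2)

# The classifier is right sometimes...
assert classify_equal("a+b", "b+a")
assert not classify_equal("a+b", "a+c")
# ...but emits an unverifiable false positive here: a-b != b-a.
assert classify_equal("a-b", "b-a")
```

A rewrite-rule proof, in contrast, can be mechanically checked, so a wrong answer is rejected rather than reported.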
We explore initial language generation using a simple language in order to assess the feasibility of different approaches. For fine-tuning network parameters and architectural features, we add more complexity to the language, as shown in table \ref{tab:ResultLang2}. Language IDs 1 through 3 are all based on a simple grammar which only allows the "+" or "-" operators on scalar variables labeled a through j. The only axiom is \texttt{Commute}, which can be applied at up to 3 nodes in language IDs 2 and 3. Language ID 4 adds the scalar constants 0 and 1, the scalar operations * and /, and 4 more axioms; it also expands the operands to 3 types, and hence the number of operators increases as well. We perform a fair amount of network development on this model in an effort to maintain high accuracy rates. To speed up model evaluation, we reduce the program length for IDs 5, 6, and 7, allowing us to train larger datasets for more epochs. ID 7 is a forward-looking model which makes a minor increment to the language to support the analysis of loop rolling and unrolling, discussed further in section~\ref{sec:appendix:backedge}. ID 8 is the WholeProof5 model, shown in relation to these early experiments.
\input{tableresults2}
We designed our datasets in section~\ref{sec:samplegen} with the goal of using the varied models to understand the generalizability of \texttt{pe-graph2axiom} and to show that our model is not overfitting on training data. For these next experiments, all results are for beam width 10, which provides a neural-network directed search of up to 10 axiomatic proofs of equivalence for each program pair. Recall that our most complex dataset is AxiomStep10, which includes $(P1,P2,S)$ samples requiring up to 10 rewrite rules, where $P1$ and $P2$ can each have up to 50 AST nodes and an AST depth of up to 7. AxiomStep5 has samples requiring up to 5 rewrite rules, where $P1$ and $P2$ can each have up to 25 AST nodes and an AST depth of up to 6. Tables~\ref{tab:newnodes} and~\ref{tab:appendix:newtreedepth} (repeated from the main paper below) demonstrate the ability of a model trained on AxiomStep5 to perform well on the larger distribution of programs from AxiomStep10, implying that the model generalizes well to our program equivalence problem and that \texttt{pe-graph2axiom} does not merely fit the training set distribution.
\begin{table*}[h!tb]
\caption{Generalizing to longer P1 inputs. Percentage pass rates for equivalence proofs as P1 grows in program graph nodes. The model trained on the AxiomStep5 dataset had no training examples with more than 25 program graph nodes, yet it performs relatively well on these more complex problems. The rightmost column shows the \texttt{pe-graph2axiom} model results on the most complex dataset.}
\label{tab:newnodes}
\small
\centering
\begin{tabular}{@{}rrrrrrrrrr@{}}
\toprule
& & \multicolumn{2}{c}{Testset} & & \multicolumn{2}{c}{Model trained} & & \multicolumn{2}{c}{Model trained} \\
& & \multicolumn{2}{c}{Sample Count} & & \multicolumn{2}{c}{on AxiomStep5} & & \multicolumn{2}{c}{on AxiomStep10} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
P1 nodes & & AS5 & AS10 & & AS5 & AS10 & & AS5 & AS10 \\
\midrule
1-5 & & 231 & 109 & & 100 & 100 & & 100 & \textbf{100} \\
6-10 & & 2147 & 1050 & & 100 & 99 & & 99 & \textbf{99} \\
11-15 & & 3980 & 2175 & & 99 & 96 & & 99 & \textbf{96} \\
16-20 & & 2583 & 2327 & & 98 & 92 & & 98 & \textbf{93} \\
21-25 & & 1059 & 1989 & & 97 & 89 & & 98 & \textbf{92} \\
26-30 & & 0 & 1229 & & N/A & 83 & & N/A & \textbf{90} \\
31-35 & & 0 & 698 & & N/A & 78 & & N/A & \textbf{88} \\
36-40 & & 0 & 304 & & N/A & 74 & & N/A & \textbf{87} \\
41-45 & & 0 & 101 & & N/A & 68 & & N/A & \textbf{84} \\
46-50 & & 0 & 27 & & N/A & 67 & & N/A & \textbf{85} \\
All & & 10000 & 10000 & & 99 & 90 & & 99 & \textbf{93} \\
\bottomrule
\end{tabular}
\end{table*}
\begin{table*}[h!tb]
\caption{\label{tab:appendix:newtreedepth}Performance vs. AST depth: counts and percentage pass rates.}
\small
\centering
\begin{tabular}{@{}rrrrrrrrrr@{}}
\toprule
& & \multicolumn{2}{c}{Testset} & & \multicolumn{2}{c}{Model trained} & & \multicolumn{2}{c}{Model trained} \\
& & \multicolumn{2}{c}{Sample Count} & & \multicolumn{2}{c}{on AxiomStep5} & & \multicolumn{2}{c}{on AxiomStep10} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
AST depth & & AS5 & AS10 & & AS5 & AS10 & & AS5 & AS10 \\
\midrule
2 & & 5 & 3 & & 100 & 100 & & 100 & 100 \\
3 & & 306 & 133 & & 100 & 100 & & 100 & 100 \\
4 & & 1489 & 577 & & 100 & 99 & & 99 & 99 \\
5 & & 4744 & 1844 & & 99 & 94 & & 98 & 95 \\
6 & & 3456 & 4308 & & 98 & 90 & & 98 & 93 \\
7 & & 0 & 3135 & & n/a & 86 & & n/a & 92 \\
All & & 10000 & 10000 & & 99 & 90 & & 99 & \textbf{93} \\
\bottomrule
\end{tabular}
\end{table*}
Table~\ref{tab:appendix:newtreedepth} illustrates the ability of a model trained on AxiomStep5 (i.e., limited to proofs of length 5) to perform well when evaluated on the more complex AxiomStep10, which includes proofs of unseen length of up to 10. The robustness to the input program complexity is illustrated with the 86\% pass rate on AST depth 7, for the model trained on AxiomStep5 which never saw programs of depth 7 during training.
As an indication of the breadth of equivalent programs represented by AxiomStep10 relative to WholeProof10, table \ref{tab:modeltest:appendix} shows the full detail of the models trained on all 4 datasets when tested on test data from all 4 datasets. The AxiomStep10 model, although trained on our broadest dataset (in which axioms can be applied to nodes repeatedly and in variable order), achieves a 93\% average success rate. The model trained on WholeProof10 solved 72\% of the length-6 proofs from the WholeProof10 testset, but only 5\% of such proofs from AxiomStep10, suggesting that the method of generating AxiomStep pairs covers the problem space more thoroughly.
The complete result for the WholeProof10 model on the WholeProof10 dataset: a correct proof was found for 8,388 of the 10,000 program pairs; of those, 8,350 were exactly the proof created during $P1,P2$ generation, implying that WholeProof10, while performing well on its own testset distribution, does not learn to generalize to alternative proof paths.
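In relative terms (simple arithmetic on the counts above):

```python
found, exact = 8388, 8350  # correct proofs found; proofs matching the generation proof
frac = exact / found
print(f"{frac:.1%} of the found proofs replicate the generation proof")  # about 99.5%
assert frac > 0.995
```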
\begin{table*}[h!tb]
\caption{Generalizing to longer proofs. Percentage pass rates for equivalence proofs of increasing axiom counts when testing each of 4 datasets on models trained using each of 4 datasets.}
\label{tab:modeltest:appendix}
\small
\centering
\setlength\tabcolsep{2pt}
\begin{tabular}{@{}r|rrrrrrrrrrrrrrrrrrrr@{}}
\toprule
Axiom & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} \\
Count in & & \multicolumn{4}{c}{WholeProof5 (WP5)} & & \multicolumn{4}{c}{WholeProof10 (WP10)} & & \multicolumn{4}{c}{AxiomStep5 (AS5)} & & \multicolumn{4}{c}{AxiomStep10 (AS10)} \\
\cmidrule{3-6} \cmidrule{8-11} \cmidrule{13-16} \cmidrule{18-21}
Proof & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} \\
\midrule
1 & & 100 & 100 & 100 & 99 & & 100 & 100 & 100 & 100 & & 100 & 100 & 100 & 100 & & 100 & 100 & 100 & \textbf{100} \\
2 & & 99 & 98 & 66 & 64 & & 99 & 99 & 65 & 63 & & 100 & 99 & 100 & 99 & & 100 & 100 & 100 & \textbf{100} \\
3 & & 98 & 94 & 34 & 33 & & 97 & 95 & 33 & 33 & & 100 & 98 & 99 & 98 & & 100 & 99 & 99 & \textbf{99} \\
4 & & 93 & 84 & 16 & 15 & & 90 & 88 & 16 & 15 & & 98 & 95 & 98 & 97 & & 99 & 98 & 98 & \textbf{98} \\
5 & & 84 & 70 & 8 & 7 & & 84 & 82 & 8 & 7 & & 96 & 91 & 96 & 95 & & 97 & 95 & 96 & \textbf{96} \\
\midrule
6 & & & 14 & & 4 & & & 72 & & 5 & & & 81 & & 88 & & & 90 & & \textbf{93} \\
7 & & & 0 & & 1 & & & 63 & & 2 & & & 67 & & 81 & & & 83 & & \textbf{87} \\
8 & & & 0 & & 0 & & & 54 & & 1 & & & 54 & & 75 & & & 73 & & \textbf{82} \\
9 & & & 0 & & 0 & & & 47 & & 0 & & & 35 & & 64 & & & 63 & & \textbf{74} \\
10 & & & 0 & & 0 & & & 34 & & 0 & & & 24 & & 57 & & & 46 & & \textbf{66} \\
All & & 95 & 66 & 44 & 27 & & 94 & 84 & 44 & 27 & & 99 & 87 & 99 & 90 & & 99 & 93 & 99 & \textbf{93} \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph*{Manual verifications} We conducted a series of manual verifications of the system used to produce all the above results. First, the system reports that most likely $AB \ne BA$, since no verifiable equivalence sequence was produced, but that provably $ab=ba$ indeed. We also verified that $A^{t^t}(B+C-C) = AB$, and that $AB\vec v -AB\vec w=AB(\vec v-\vec w)$, whose right-hand side is a much faster implementation. The system correctly suggests that $AB\vec v -BA\vec w\ne AB(\vec v-\vec w)$. We ensured that $A^t(AA^t)^{-1}A\ne A^t(AA^{-1})^tA$, a typo we once made when typing the computation of an orthonormal subspace. We also verified that indeed $AB + AC + aD - aD = A(B+C)$.
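These identities are easy to spot-check numerically; a minimal NumPy sketch with random matrices (probabilistic evidence for the inequalities, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))
v, w = rng.standard_normal(n), rng.standard_normal(n)
a, b = rng.standard_normal(2)

assert not np.allclose(A @ B, B @ A)            # AB != BA (generically)
assert np.isclose(a * b, b * a)                 # ab = ba
assert np.allclose(A.T.T @ (B + C - C), A @ B)  # (A^t)^t (B+C-C) = AB
assert np.allclose(A @ B @ v - A @ B @ w, A @ B @ (v - w))
assert not np.allclose(A @ B @ v - B @ A @ w, A @ B @ (v - w))
assert np.allclose(A @ B + A @ C + a * D - a * D, A @ (B + C))

# The typo: (A A^{-1})^t collapses to the identity, so the right side is A^t A,
# while the left side is I for a square invertible A.
lhs = A.T @ np.linalg.inv(A @ A.T) @ A
rhs = A.T @ (A @ np.linalg.inv(A)).T @ A
assert not np.allclose(lhs, rhs)
```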
\paragraph*{Generalizing variable types}
We explored the ability of the model to understand variable typing by training a model on the AxiomStep10 distribution but with no samples that included the scalar variable 'e' and scalar multiplication $*_s$. This removed about 50\% of the training set, as longer programs often included both tokens. When tested with the unaltered AxiomStep10 test set and beam width 10, test samples that included a scalar variable other than 'e' together with $*_s$ were proven equal 90\% of the time; test samples that included 'e' and $*_s$ were likewise proven equal 90\% of the time. For beam width 1, the proof success rates were 72\% and 70\% without and with 'e' respectively, implying that the heavily biased training set had only a small effect on generalization. \texttt{pe-graph2axiom} was still able to generalize the relation of 'e' to the $*_s$ operator because 'e' appeared in contexts similar to other scalar variables in the training samples provided, implying that it formed an internal representation of a 'scalar' type by learning from examples.
\subsection{Learning that multiple axiom choices are possible}
\label{sec:appendix:multiaxiom}
Our AxiomStep10 model is trained on axioms which may be applied in varying order in the training set. For example, $((a+b)*(c+d))=((b+a)*(d+c))$ may appear in the training data with the left node $a+b$ Commuted first and $c+d$ second; in the same dataset, $((a+e)*(b+c))=((e+a)*(c+b))$ might occur with the right node Commuted first. In this way, we expect the model to learn that commuting either the left or the right node is a proper first axiom choice. Table~\ref{tab:beam3} explores the ability of the model to produce such axiom proposals. Given 5 scalar variables, there are 120 possible expressions in which two 2-variable additions are multiplied together, such as $((a+b)*(c+d))$. We consider here all 120 program pairs in which the left and right additions are commuted. The table shows which axioms and positions the graph-to-sequence neural network model within the \texttt{pe-graph2axiom} system recommends, within a beam width of 3, as most probably moving the 2 programs closer to equivalence. Note that the 2 correct axioms are always within the top 3 choices, and the other 2 axioms (Commute and DistributeLeft on the root), while not necessary for this problem, are at least legal axiom choices within our expression language.
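The count of 120 follows from ordering 4 distinct variables chosen from the 5 available ($5\cdot4\cdot3\cdot2$); a quick sketch (the distinctness of the four variables is our assumption, suggested by the example):

```python
from itertools import permutations

# All expressions ((x+y)*(z+w)) over 4 distinct variables from {a,...,e}.
exprs = [f"(({x}+{y})*({z}+{w}))" for x, y, z, w in permutations("abcde", 4)]
assert len(exprs) == 120  # 5 * 4 * 3 * 2 ordered choices

# The corresponding program pair commutes both additions.
pairs = [(f"(({x}+{y})*({z}+{w}))", f"(({y}+{x})*({w}+{z}))")
         for x, y, z, w in permutations("abcde", 4)]
assert pairs[0] == ("((a+b)*(c+d))", "((b+a)*(d+c))")
```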
The results in table~\ref{tab:beam3} relate our approach to reinforcement learning models for proof generation \cite{Alhussein19,Bansal19}. To make an analogy with reinforcement learning: in our training, the world 'state' is presented as a $P1,P2$ pair, and the system must learn to produce an axiom at a location, which performs an 'action' on the 'state' of $P1$ in a predictable way. Unlike reinforcement learning, we do not define a reward function, and our system cannot learn from the poor reward an incorrect axiom would produce. However, we have demonstrated that our system, presented with a wide distribution of $(P1,P2,S)$ tuples to train on, learns a probability distribution over possibly correct axioms for a given program pair. There may be value in combining our graph neural network with a reinforcement learning framework that uses a hindsight mechanism \cite{Andrychowicz17} to learn from every attempted axiom, but it is not immediately obvious that our approach of learning only from examples of successful equivalence proofs would be improved.
\begin{table}[h!tb]
\caption{Learning multiple output options. When considering scalar expressions that can be proven equivalent by commuting the left and right subexpressions, such as $(a+b)(c+d)=(b+a)(d+c)$, \texttt{pe-graph2axiom} learns that either the left or right commute can occur first. The columns show counts for axioms and locations proposed by the token generator with beam width of 3 when given 120 different scalar expression pairs.}
\label{tab:beam3}
\small
\centering
\begin{tabular}{@{}lrrrrr@{}}
\toprule
& & \multicolumn{4}{c}{Axiom} \\
Beam & & Commute & Commute & Commute & DistributeLeft \\
position & & left child & right child & root & root \\
\cmidrule{1-1} \cmidrule{3-6}
First & & 49 & 35 & 36 & 0 \\
Second & & 58 & 59 & 3 & 0 \\
Third & & 13 & 26 & 45 & 36 \\
Any of top 3 & & 120 & 120 & 84 & 36 \\
\bottomrule
\end{tabular}
\end{table}
\paragraph*{Exploration of alternate designs}
In order to design the system, we explored parts of the design space quickly and performed several single-training-run comparisons between 2 options, as shown in Table~\ref{tab:ParamSearch}. In cases where 2 options performed similarly, we chose the model that ran faster, ran the models a second time to get a more precise evaluation, or used our experience from prior experiments to select an option.
\begin{table}[h!tb]
\caption{Example explorations as a single feature or parameter is changed. Each comparison is a distinct experiment, as the entire network and language used were varied across experiments.}
\label{tab:ParamSearch}
\centering\small
{\begin{tabular}{ |p{5.2cm}|p{1cm}|p{1.2cm}| }
\hline
& Match & Match \\
Options compared & beam 1 & beam 10 \\
\hline
\hline
1 layer LSTM vs & 198 & 1380 \\
2 layer LSTM vs & 5020 & 9457 \\
3 layer LSTM & 4358 & 8728 \\
\hline
No edges to grandchild nodes vs & 9244 & 9728 \\
Edges to grandchild nodes & 9284 & 9774 \\
\hline
Encoder$\rightarrow$Decoder only root node vs & 8616 & 9472 \\
Encoder$\rightarrow$Decoder avg all nodes & 7828 & 9292 \\
\hline
\end{tabular}}
\end{table}
Experiments such as these informed our final network architecture. For
example, in \texttt{pe-graph2axiom}, we include 4 edges with learnable weight
matrices from a node to its grandchildren because such edges were found to
improve results on multiple runs.
Li et al.~\cite{Li19} discuss the importance of selecting a good process for aggregating graph information, so we explored that issue for our network. Our approach uses the root comparison node to aggregate the graph information for the decoder, as it performs better than averaging over all nodes.
\paragraph*{Including Not\_equal option}
Table \ref{tab:Proof} analyzes the challenge posed by a model which only predicts Equal or Not\_equal for program pairs, alongside several options which produce rewrite rules that can be checked for correctness. In all 4 output cases shown, 2 programs are provided as input. These programs use an earlier version of our language with 16 operators, 13 core axioms, and 20 operands, generated with a distribution similar to WholeProof5.
\begin{table}[h!tb]
\caption{Table showing alternate options for handling not equal programs\label{tab:Proof}}
\centering{\small
\begin{tabular}{ |p{2.2cm}|p{0.8cm}||p{1.2cm}|p{1.2cm}|p{1cm}| }
\hline
Network & & & Predicted & Correct \\
output & & Predicted & Rules & Rewrite \\
Description & Actual & NotEq & or Eq & Rules \\
\hline
\hline
Eq or NotEq, & Eq & 5.4\% & 94.6\% & N/A \\
Beam width 1 & NotEq & 90.4\% & 9.6\% & N/A \\
\hline
Rules or NotEq, & Eq & 6.6\% & 93.4\% & 70.7\% \\
Beam width 1 & NotEq & 90.9\% & 9.1\% & N/A \\
\hline
Rules only, & Eq & N/A & 100\% & 87.8\% \\
Beam width 1 & NotEq & N/A & N/A & N/A \\
\hline
Rules only, & Eq & N/A & 100\% & 96.2\% \\
Beam width 10 & NotEq & N/A & N/A & N/A \\
\hline
\end{tabular}
}
\end{table}
For the first output case, the output
sequence to produce is either \texttt{Equal} or \texttt{Not\_equal}. Given a
false positive rate of 9.6\%, these results
demonstrate the importance of producing a verifiable proof of equivalence
when using machine learning for automated equivalence checking.
For the second output case, the model can produce either \texttt{Not\_equal} or a rewrite rule sequence which can be checked for correctness. The source programs for the first and second cases are identical: 250,000 equivalent program pairs and 250,000 non-equivalent program pairs. In the second case, the false positive rate from the network is 9.1\% (rules predicted for Not\_equal programs), but the model produces correct rewrite rules between actually equivalent programs in only 70.7\% of the cases.
One challenge with a model that produces rules or \texttt{Not\_equal} is that beam widths beyond 1 are less usable. Consider that with a beam width of 1, if the network predicts \texttt{Not\_equal} then the checker would conclude the programs are not equal (which is correct for 90.9\% of the actually non-equal programs). With a beam width of 10, there would be more proposed rewrite rules for equal programs to test with, but if 1 of the 10 proposals is \texttt{Not\_equal}, should the checker conclude they are not equal? Or should the checker only consider the most likely prediction (beam width 1) when checking for non-equivalence? The third and fourth network output cases provide an answer. For these 2 cases, the training set is 400,000 equivalent program pairs; none are non-equivalent. 250,000 of these pairs are identical to the equivalent programs in the first 2 cases, and 150,000 are new but were produced using the same random generation process. Note that by requiring the network to focus only on creating rewrite rules, beam width 1 is able to create correct rewrite rules for 87.8\% of the equivalent programs. And now, since we have removed the confusion of the \texttt{Not\_equal} prediction option, beam width 10 can be used to produce 10 possible rewrite rule sequences, and in 96.2\% of the cases these rules are correct. Hence, the preferred usage model for \texttt{pe-graph2axiom} is to always use the model trained for rule generation with beam width 10 and rely on our rule checker to prevent false positives. Among the 10 rewrite rule proposals, non-equivalent programs will never have a correct rewrite rule sequence produced, hence we guarantee there are no false positives.
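The no-false-positive argument can be sketched concretely. The toy checker below (hypothetical names; a tuple-based stand-in for our expression language with only the Commute axiom, not the actual implementation) accepts a beam proposal only if its rewrites transform $P1$ into exactly $P2$, so a non-equivalent pair can never be accepted:

```python
# Toy expressions as nested tuples: (op, left, right).
def apply_axiom(prog, axiom, path):
    """Apply a rewrite at the node addressed by path (0 = left, 1 = right)."""
    if path:
        head, *rest = path
        children = list(prog[1:])
        children[head] = apply_axiom(children[head], axiom, rest)
        return (prog[0], *children)
    if axiom == "Commute" and prog[0] == "+":
        return ("+", prog[2], prog[1])
    raise ValueError("axiom not applicable")

def verify(p1, p2, beam_proposals):
    """Return the first proposal whose rewrites turn p1 into exactly p2.
    A non-equivalent pair can never pass, so there are no false positives."""
    for proof in beam_proposals:
        prog = p1
        try:
            for axiom, path in proof:
                prog = apply_axiom(prog, axiom, path)
        except ValueError:
            continue
        if prog == p2:
            return proof
    return None

p1 = ("*", ("+", "a", "b"), ("+", "c", "d"))
p2 = ("*", ("+", "b", "a"), ("+", "d", "c"))
proposals = [
    [("Commute", [0])],                    # wrong: only one side commuted
    [("Commute", [0]), ("Commute", [1])],  # correct
]
assert verify(p1, p2, proposals) == proposals[1]
assert verify(p1, ("+", "a", "b"), proposals) is None  # non-equivalent: rejected
```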
\subsection{An Example of Back-Edge in the Program Graph}
\label{sec:appendix:backedge}
Figure~\ref{fig:LoopAST} shows an example of DoX and DoHalf.
The new operators result in 2 new edge types in our graph representation (along with 2 new back-edges): a 'loopbody' edge type from the loop operator node to the start of the loop body subgraph, and a 'loopfeedback' edge type from the variable which is written on each loop iteration. These 2 edge types are shown in the figure. The new $DoHalf$ axiom intuitively states that $DoX(g(y)) = DoHalf(g(g(y)))$ (where $y$ is the variable reused each iteration), and the $DoX$ axiom states the reverse.
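Operationally, the axiom can be sanity-checked; the sketch below encodes our reading of the semantics (how iteration counts are handled is our assumption, with the count $n$ even): applying the body $g$ $n$ times equals applying the doubled body $g(g(\cdot))$ $n/2$ times.

```python
from fractions import Fraction

def do_x(g, y, n):
    """DoX: apply the loop body g to the carried variable y, n times."""
    for _ in range(n):
        y = g(y)
    return y

def do_half(g, y, n):
    """DoHalf: apply the doubled body g(g(.)) n//2 times (n assumed even)."""
    for _ in range(n // 2):
        y = g(g(y))
    return y

# Loop body from the figure: b = (a + b) / c, here with a = 1, c = 2.
a, c = Fraction(1), Fraction(2)
g = lambda b: (a + b) / c

assert do_x(g, Fraction(0), 8) == do_half(g, Fraction(0), 8)
```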
\begin{figure}[h!tb]
\centering
\begin{minipage}[b]{0.25 \textwidth}
\centering
\subfloat [DoX($b=(a+b)/c$)] {
\begin{tikzpicture} [level distance=2.25em, inner sep=1pt, minimum size=1.5em, sibling distance=2em, edge from parent/.style={draw,latex-}]
\node [circle, double, draw] (root) {\tiny DoX}
child {node [circle, draw] {$/$}
child{node [circle, draw] {$+$}
child {node [circle, draw] {$a$}}
child {node [circle, draw] (leaf) {$b$}}
}
child {node [circle, draw] {$c$}
child{edge from parent[draw=none] node [opacity=0] {}}
child{edge from parent[draw=none] node [opacity=0] {}}
}
};
\path [->, draw] (root) to [out=335, in=20] (leaf);
\end{tikzpicture}
}
\end{minipage}
\hfill
\unskip\ \vrule\
\begin{minipage}[b]{0.2 \textwidth}
\centering
\subfloat [DoHalf($b=(a+(a+b)/c)/c$)] {
\begin{tikzpicture} [level distance=2.25em, inner sep=1pt, minimum size=1.5em, sibling distance=2em, edge from parent/.style={draw,latex-}]
\node [circle, double, draw] (root) {\tiny DoH}
child {node [circle, draw] {$/$}
child{node [circle, draw] {$+$}
child {node [circle, draw] {$a$}}
child {node [circle, draw] {$/$}
child{node [circle, draw] {$+$}
child {node [circle, draw] {$a$}}
child {node [circle, draw] (leaf) {$b$}}
}
child {node [circle, draw] {$c$}}
}
}
child {node [circle, draw] {$c$}}
};
\path [->, draw] (root) to [out=335, in=20] (leaf);
\end{tikzpicture}
}
\end{minipage}
\caption{Adding loop constructs creates cycles in the program graph.}
\label{fig:LoopAST}
\end{figure}
\section{Introduction}
\label{sec:introduction}
\input{introduction}
\section{Motivation and Overview}
\label{sec:motivation}
\input{motivation}
\section{Framework for Program Equivalence}
\label{sec:proglangdefs}
\input{proglangdefs}
\section{Samples Generation}
\label{sec:samplegen}
\input{samplegen}
\section{Deep Neural Networks for Program Equivalence}
\label{sec:progequivdnn}
\input{progequivdnn}
\section{Experimental Results}
\label{sec:expresults}
\input{expresults}
\section{Related Work}
\label{sec:related}
\input{related}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion}
\begin{acks}
This work was supported in part by the U.S. National Science Foundation award CCF-1750399.
\end{acks}
\balance
\section{Conclusion}
In this work, we presented \texttt{pe-graph2axiom}, the first graph-to-sequence neural network system to generate verifiable axiomatic proofs (via rewrite rules) of equivalence for a class of symbolic programs. Evaluated on a rich language of linear algebra expressions, the system produces correct proofs of up to 10 axioms in length in 93\% of the 10,000 equivalent cases evaluated. We believe the performance of our approach comes in part from using graph neural networks for what they excel at, learning efficient heuristics to quickly find paths in a graph, and from the observation that program equivalence can be cast as a path-finding problem that such networks solve efficiently.
\section{Experimental Results}
We now present extensive experimental results and compare the quality of several neural network approaches to the problem of program equivalence. We proceeded incrementally in fine-tuning the final system design, and report on several of these design points below.
We focus our experiments below on 4 key questions: 1) Is performance related to input program size? 2) Is performance related to proof length? 3) Is the incremental, per-axiom approach more generalizable than producing the full sequence in a single inference step? And 4) Is performance consistent across a range of datasets, including human-written examples?
\paragraph*{Implementation setup}
\label{sec:expresults:setup}
We developed the neural network system presented here within the OpenNMT-py framework \citep{opennmt}, adding a new encoder based on a prior implementation of gated graph neural networks \citep{Li16}. For our training and evaluation experiments, we use systems with Intel Xeon 3.6GHz CPUs and 6GB GeForce GTX 1060 GPUs.
During training, we save a model snapshot every 50,000 iterations and score its accuracy on the validation dataset. Graphs showing that validation accuracy plateaus between 200,000 and 300,000 iterations are provided in section~\ref{sec:suppl:additionalresults}. We run each model twice and evaluate the test set using the saved model that achieved the highest validation score.
\paragraph*{Evaluation procedure and neural network alternatives}
The benefits of key components of our neural network model are studied in table~\ref{tab:metamodel}. The bidirectional RNN model is similar to state-of-the-art sequence-to-sequence models used for program repair \citep{Chen19}. The results for the graph-to-sequence model without attention show the benefit of providing the node information during the axiom generation process.
\begin{table}[h!tb]
\vspace{-.3cm}
\caption{\texttt{pe-graph2axiom} mini ablation study.\vspace{-.3cm} }
\label{tab:metamodel}
\small
\centering
\begin{tabular}{@{}lrrrrr@{}}
\toprule
& & \multicolumn{4}{c}{Beam width} \\
Model description & & 1 & 2 & 5 & 10 \\
\cmidrule{1-1} \cmidrule{3-6}
Bidirectional RNN seq-to-seq with attention & & 48 & 62 & 71 & 75 \\
Graph-to-sequence w/o attention & & 73 & 81 & 87 & 90 \\
\texttt{pe-graph2axiom} model & & 76 & 84 & 90 & \textbf{93} \\
\bottomrule
\end{tabular}
\end{table}
Our final design was influenced by explorations we performed on varied models, datasets, and hyperparameters such as LSTM layers and graph neural network parameters. In relation to the model's ability to learn a representation of the proof sequence, we note that our GGNN initialization using the root node connection to the decoder outperforms the embedding learned by a bidirectional RNN model. Also, we found that averaging the embedding of all graph nodes had about 10\% lower accuracy than using the more specific root node information. Numerous additional results are reported in Suppl. material~\ref{sec:suppl:additionalresults}.
\paragraph*{Generalizing across different datasets}
We specifically examine the generalization potential of our models by studying their success rate as a function of the input program complexity, represented by the AST depth, in Table~\ref{tab:newtreedepth}, and as a function of the output complexity, represented by the proof length, in Table~\ref{tab:modeltest}, all using a beam size of 10. We designed our datasets in Sec.~\ref{sec:samplegen} to study how well \texttt{pe-graph2axiom} generalizes and to verify that we are not overfitting on training data. Extensive in-depth additional experimental results are presented in Suppl. Material~\ref{sec:suppl:additionalresults}; we summarize only the key results below.
\begin{table}[h!tb]
\vspace{-.3cm}
\caption{\label{tab:newtreedepth}Performance vs. AST depth: counts and percentage pass rates.\vspace{-.3cm}}
\scriptsize
\centering
\begin{tabular}{@{}rrrrrrrrrr@{}}
\toprule
& & \multicolumn{2}{c}{Testset} & & \multicolumn{2}{c}{Model trained} & & \multicolumn{2}{c}{Model trained} \\
& & \multicolumn{2}{c}{Sample Count} & & \multicolumn{2}{c}{on AxiomStep5} & & \multicolumn{2}{c}{on AxiomStep10} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
AST depth & & AS5 & AS10 & & AS5 & AS10 & & AS5 & AS10 \\
\midrule
2-6 & & 10000 & 6865 & & 99 & 93 & & 99 & 94 \\
7 & & 0 & 3135 & & n/a & 86 & & n/a & 92 \\
All & & 10000 & 10000 & & 99 & 90 & & 99 & \textbf{93} \\
\bottomrule
\end{tabular}
\vspace{-.3cm}
\end{table}
Table~\ref{tab:newtreedepth} illustrates the ability of a model trained on AxiomStep5 (i.e., limited to proofs of length 5) to perform well when evaluated on the more complex AxiomStep10, which includes proofs of unseen length of up to 10. The robustness to the input program complexity is illustrated with the 86\% pass rate on AST depth 7, for the model trained on AxiomStep5 which never saw programs of depth 7 during training.
\begin{table*}[h!tb]
\vspace{-.3cm}
\caption{Performance vs. proof length: percentage pass rates.\vspace{-.3cm}}
\label{tab:modeltest}
\scriptsize
\centering
\setlength\tabcolsep{2pt}
\begin{tabular}{@{}r|rrrrrrrrrrrrrrrrrrrr@{}}
\toprule
Axiom & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} & & \multicolumn{4}{c}{Model trained on} \\
Count in & & \multicolumn{4}{c}{WholeProof5 (WP5)} & & \multicolumn{4}{c}{WholeProof10 (WP10)} & & \multicolumn{4}{c}{AxiomStep5 (AS5)} & & \multicolumn{4}{c}{AxiomStep10 (AS10)} \\
\cmidrule{3-6} \cmidrule{8-11} \cmidrule{13-16} \cmidrule{18-21}
Proof & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} & & WP5 & \scriptsize{WP10} & AS5 & \scriptsize{AS10} \\
\midrule
1-5 & & 95 & 89 & 44 & 44 & & 94 & 93 & 44 & 44 & & 99 & 97 & 99 & 98 & & 99 & 98 & 99 & \textbf{98} \\
6 & & & 14 & & 4 & & & 72 & & 5 & & & 81 & & 88 & & & 90 & & \textbf{93} \\
7 & & & 0 & & 1 & & & 63 & & 2 & & & 67 & & 81 & & & 83 & & \textbf{87} \\
8 & & & 0 & & 0 & & & 54 & & 1 & & & 54 & & 75 & & & 73 & & \textbf{82} \\
9 & & & 0 & & 0 & & & 47 & & 0 & & & 35 & & 64 & & & 63 & & \textbf{74} \\
10 & & & 0 & & 0 & & & 34 & & 0 & & & 24 & & 57 & & & 46 & & \textbf{66} \\
All & & 95 & 66 & 44 & 27 & & 94 & 84 & 44 & 27 & & 99 & 87 & 99 & 90 & & 99 & 93 & 99 & \textbf{93} \\
\bottomrule
\end{tabular}
\vspace{-.3cm}
\end{table*}
Table~\ref{tab:modeltest} compares the results of our 4 models, each trained on one of our 4 datasets and evaluated on the test sets of all 4 datasets. The models all have identical hyperparameter settings.
We observe the inability of models trained to output the whole proof to generalize to longer proofs (the WP5 model on AS10/WP10), with near-zero success rates. In contrast, the per-axiom models (AS5 and AS10) show the potential to generalize to proof length: the AS5 model performs well when evaluated on AS10, producing proofs of length and complexity unseen in training. Overall, the success rate degrades gracefully with proof length, bottoming out at 66\% for AS10 on proofs of length 10.
\input{tableresults1}
\subsection{WholeProof Models: Language Complexity and Performance}
Table~\ref{tab:ResultLang} shows the results of 12 different experiments and designs, specifically for the WholeProof5 models. In particular, from rows 1 to 10 we incrementally increase the problem complexity: the number of \textsf{Operators} that can be used in any input program, of \textsf{Axioms} used in the rewrite sequence, and of \textsf{Operands} in any input program; the maximal number of nodes in an input program graph (the \textsf{Program length}, directly influencing the size of the graph network); and the \textsf{Rewrite rule length}, which contains the description of the paths from the root node to the position where an axiom is applied and is therefore directly related to the maximal graph height, itself determined by the maximal program size. Details on each row are provided in the Supplementary Material.
We specifically compare against a sequence-to-sequence (S2S) approach to quantify the gains brought by employing graph-to-sequence (G2S). When the space is small enough, S2S still performs well, especially with aggressive beam search. We recall that, by design of our system, testing the correctness of one sequence is trivial and deterministic, so one can easily use large beam sizes without any correctness impact or major performance penalty during inference. For example, inference with beam 1 takes about 15ms for our most complex networks, while beam 10 takes only 16ms. Checking correctness takes $\ll 1$ms.
Contrasting rows 2 and 3 displays the merits of the G2S approach for our problem: even on this simple problem, G2S already achieves near-perfect accuracy. Progressively increasing the complexity of the search space, up to rows 9 and 10, displays a slow but steady decrease in quality, while still maintaining excellent scores near or above 95\% with beam 10. To reassess the limits of a sequence-to-sequence approach, rows 9 and 11 can be contrasted: they operate on the same search space, but S2S peaks at 81\% accuracy while G2S reaches 95\%.
Row 10 displays the result of also training on samples of non-equivalent programs, labeled with the ``empty path'' symbol Not\_equal. We evaluated this system to measure the impact of training only on equivalent programs vs.\ also sampling pairs of unconnected nodes in the equivalence graph. We recall that, by design, if no produced rewrite sequence is verified as correct, our system outputs that the programs are not equivalent. In other words, whatever sequence(s) the network produces, if the two input programs are non-equivalent the system will \emph{always} report them as non-equivalent: no equivalence sequence can be verified as correct. Training only on equivalent programs is therefore clearly sensible for such a system; furthermore, as shown in row 10 vs.\ 9, even with an increased training set size, training with non-equivalent programs seems to slightly lower performance.
\paragraph*{Human written test expressions from Khan academy exercises}
Unfortunately, there is a dearth of existing large reference datasets for equivalence of linear algebra expressions, which justified our careful dataset-creation approach in Sec.~\ref{sec:samplegen} and its upcoming public release. However, numerous math exercises involve exactly this problem and can provide small but human-written datasets. We solve all of the matrix expression equivalence problems from 2 relevant Khan Academy modules designed to test students' knowledge of matrix algebra \citep{Khan20}.
Our AxiomStep10 model correctly proves all 15 equivalent pairs from the modules with beam width 1 and wider. With a beam width of 10, the WholeProof10 model proved 12. An example problem solvable by AxiomStep10 but not WholeProof10 is $c(1A +B) = cB + cA$, which can be proven by applying the rewrite rules NeutralOp, DistributeRight, and Commute to the proper nodes. WholeProof10 mostly fails because it was not trained to apply repeated transformations at the same point in the AST. This suggests that AxiomStep10 has generalized well to these hand-written problems.
\section{Introduction}
Deep neural network systems have excelled at a variety of classification and reinforcement learning tasks \cite{Goodfellow16}. However, their stochastic nature tends to hinder their deployment for automated program analysis: ensuring the correctness of the produced solution is often required, e.g., when determining the semantic equivalence of two programs (or symbolic expressions).
In this work we target the problem of automatically computing whether two input symbolic expressions are semantically equivalent \cite{kaplan1969regular}, under a well-defined axiomatic system for equivalence using semantics-preserving rewrite rules \cite{dershowitz1985computing}. Program equivalence is summarized as determining whether two programs would always produce the same outputs for all possible inputs, and is a central problem in computing \cite{kaplan1969regular,godlin2008inference,verdoolaege2009equivalence}. The problem ranges from undecidable, e.g. \cite{goldblatt2012well}, to trivial in cases of testing the equivalence of a program with itself. Our work directly studies the subset of programs represented by symbolic linear algebra expressions which include scalar, vector, and matrix types for both constants and variables, and 16 different operators with 147 distinct axioms of equivalence. For example, the expression using matrices, scalars, and a vector: $(A+B)I((a+(b-b))/a)\vec v - A\vec v$ can be proven equivalent to $B\vec v$ by applying 10 axioms in sequence; our work generates the proof steps between these expressions.
While prior work has shown promise for deep networks computing some forms of program equivalence \cite{Xu17,Alon19}, such systems typically output only a probability of equivalence, without any reasoning or insight that can be easily verified: false positives can be produced. Programs can be represented as a tree (or graph) of symbols, and deep networks for symbolic reasoning have been studied, e.g., to compute the derivative of a symbolic expression \cite{Lample20}. In this work, we take a significantly different approach to the problem of symbolic program reasoning with deep networks: we make the system produce the sequence of steps that rewrites one program into another, that is, the \emph{reasoning} for (or proof of) equivalence between the two programs, instead of directly producing the result of this reasoning (e.g., a probability of equivalence, with no explanation of the reasoning). In a nutshell, we approach expression equivalence as a theorem-proving problem, in which the axioms as well as the tactics to compute a proof are all learned by example in a deep learning system, without any human insight.
We propose a method for generating training samples using probabilistic applications of production rules within a formal grammar, and then develop a graph-to-sequence \cite{Li16,Beck18} neural network system for program equivalence, trained to learn and combine rewrite rules to rewrite one program into another.
It can \emph{deterministically} prove equivalence, entirely avoids false
positives, and quickly invalidates incorrect answers produced by the network
(no deterministic answer is provided in this case, only a probability of non-equivalence). In a
nutshell, we develop the first graph-to-sequence neural network system to
accelerate the search in the space of possible combinations of
transformation rules (i.e., axioms of equivalence in the input
language) to make two graphs representing symbolic expressions structurally identical without violating their original semantics.
We propose a machine learning system for program equivalence that ensures correctness for all non-equivalent input programs (specificity = 100\%) and a deterministically checkable output for equivalent programs (no false positives). We make the following contributions:
\begin{enumerate}
\item We design, implement and evaluate two competing approaches using graph-to-sequence neural network systems to generate proofs of equivalence. We provide the first implementation of such graph-to-sequence systems in the popular OpenNMT-py framework \cite{opennmt}.
\item We present a complete implementation of our system operating on a rich language for multi-type linear algebra expressions. Our system provides a correct rewrite rule sequence between two equivalent programs for 93\% of the 10,000 test cases. The correctness of the rewrite rule is deterministically checkable in all cases in negligible time.
\end{enumerate}
The rest of the paper is organized as follows. Sec.~\ref{sec:motivation} outlines the program equivalence problem we address, and motivates our proposed approach. Sec.~\ref{sec:proglangdefs} formalizes the equivalence problem addressed. Automatic sample generation is discussed in Sec.~\ref{sec:samplegen} before Sec.~\ref{sec:progequivdnn} which introduces our DNN system, its overall design principles and key components. A complete experimental evaluation of our system is detailed in Sec.~\ref{sec:expresults}. We present related work in Sec.~\ref{sec:related} before concluding.
\section{Motivation and Overview}
\input{figexample1.tex}
\paragraph*{Rewrite rules as axioms of equivalence}
In this work we represent programs with symbolic expressions made of variables (e.g., $a$, $b$, $c$), operators (e.g., \texttt{+}, \texttt{*}) and neutral/absorbing elements (e.g., $1$). We consider a rich linear algebra expression language, supporting three variable types (scalars as shown in P1-P4, vectors, and matrices) and 5 different variables per type, along with 16 operators, including operators mixing different variable types such as the vector-matrix product. We represent these programs as dataflow graphs \cite{buck1993scheduling} with a single root node, i.e., computing a single value.
P1 is equivalent to P2 if we consider the axiom $A1: 1_{\mathbb{N}} * x = x,~\forall x \in \mathbb{N}$. This axiom is also a clear rewrite rule: the LHS expression $1_{\mathbb{N}} * x$ (with $x\in\mathbb{N}$) can be matched and replaced by the RHS expression $x$ anywhere in the program without altering its semantics. An axiom, or equivalently here a graph rewrite rule, may be applied repeatedly to different subtrees. When applying $A1$ at a specific location, the node $b$ of $P1$, we obtain an equivalent yet syntactically different program; we write $P1 \equiv A1(b,P1)$. These equivalences can be composed incrementally to form a complex transformation: we have $P1 \equiv A1(c,A1(b,P1))$. The result of these semantics-preserving transformations can be computed in sequence: first apply $A1(b,P1)$ to obtain a new program $P'$, then $A1(c,P')$ to obtain $P''$. To prove $P1 \equiv P2$, we simply check that $P''$ is structurally identical to $P2$, a linear-time process.
To assess the validity of a transformation sequence $S$ with $P2 = S(P1)$, one simply checks, for each axiom of $S$ in sequence, that it is applicable at the designated program point, applies it to obtain a new temporary program, and repeats the process for the next axiom in the sequence. If the sequence is verified to be valid and $S(P1)$ is structurally identical to $P2$, then we have proved $P1 \equiv P2$, and $S$ forms the complete proof of equivalence between the two programs. Using $A2:x * (y+z) = x*y+x*z,~\forall x,y,z \in \mathbb{N}$ and $A3:x+y = y+x,~\forall x,y \in \mathbb{N}$, we have
$P1 \equiv P4 \equiv A3(+,A2(*,A1(c,A1(b,P1))))$, a verifiable proof of equivalence under our axioms between the programs $a(1b+1c)$ and $ac+ab$, involving structural changes including node deletion, node creation, and edge modification.
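This verification procedure can be sketched in a few lines of Python. Expressions are encoded as nested tuples, and the axiom and helper names (\texttt{A1}, \texttt{apply\_at}, \texttt{verify}) are illustrative conventions for this sketch, not the system's actual implementation:

```python
def A1(t):  # 1 * x -> x
    if isinstance(t, tuple) and t[0] == '*' and t[1] == 1:
        return t[2]
    raise ValueError("A1 not applicable")

def A2(t):  # x * (y + z) -> x*y + x*z
    if isinstance(t, tuple) and t[0] == '*' \
            and isinstance(t[2], tuple) and t[2][0] == '+':
        x, (_, y, z) = t[1], t[2]
        return ('+', ('*', x, y), ('*', x, z))
    raise ValueError("A2 not applicable")

def A3(t):  # x + y -> y + x
    if isinstance(t, tuple) and t[0] == '+':
        return ('+', t[2], t[1])
    raise ValueError("A3 not applicable")

def apply_at(t, path, axiom):
    """Apply `axiom` at the subtree reached by `path` (child indices)."""
    if not path:
        return axiom(t)
    i = path[0]
    return t[:i] + (apply_at(t[i], path[1:], axiom),) + t[i + 1:]

def verify(p1, p2, steps):
    """`steps` is a list of (path, axiom); it proves p1 == p2 iff it
    applies cleanly and the result is structurally identical to p2."""
    t = p1
    try:
        for path, axiom in steps:
            t = apply_at(t, path, axiom)
    except ValueError:
        return False
    return t == p2  # structural identity, linear time

# P1 = a * (1*b + 1*c);  P4 = a*c + a*b
P1 = ('*', 'a', ('+', ('*', 1, 'b'), ('*', 1, 'c')))
P4 = ('+', ('*', 'a', 'c'), ('*', 'a', 'b'))
proof = [((2, 1), A1), ((2, 2), A1), ((), A2), ((), A3)]
print(verify(P1, P4, proof))  # -> True
```

Any malformed step (axiom not applicable at the given path) makes the whole sequence invalid, mirroring the per-axiom checking described above.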
Note the bidirectional nature of the process: one can rewrite from $a(1b+1c)$ to $ac+ab$, or the converse using the same (but reversed) sequence. Note also the non-uniqueness of a sequence: a program can often be rewritten into another in many ways; for example, the sequence $P4 \equiv A3(+,A1(c,A1(b,A2(*,P1))))$ also correctly rewrites $P1$ into $P4$. Conversely, a sequence may not exist: for example, no sequence of the 3 axioms above can rewrite $a+b$ into $a*b$. We call two programs non-equivalent in our system precisely when no sequence of axioms can be applied to rewrite one program into the other.
Our approach aims to compute some $S$ for a pair of programs $P1,P2$, so that $S$ is verified correct when $P1 \equiv P2$. Consequently, if $P1 \not\equiv P2$, no sequence $S$ produced can be verified correct: true negatives are trivially detected.
\paragraph*{Pathfinding program equivalence proofs}
Intuitively, we can view the solution space as a graph where every syntactically distinct program in the language is represented by its own vertex $v_i$, and
$\exists ~e^{({A_k},x)}: v_i \rightarrow v_j$ iff there exist an axiom $A_k$ and a node $x$ in $v_i$ such that $v_j = A_k(x,v_i)$.
Any two programs connected by a path in this graph are therefore semantically equivalent. Building $S$ with $P1 \equiv S(P2)$ amounts to exposing a path between $P1$ and $P2$ in this graph when one exists, the path forming the proof of equivalence. We build a deep learning graph-to-sequence system to learn a stochastic approximation of an iterative algorithm constructing such a feasible path when possible, trained only by randomly sampling pairs of programs and one carefully labeled path between them. This avoids the need to craft smart exploration heuristics to make this path-finding problem practical.
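The brute-force alternative that the network replaces can be sketched as a breadth-first search over this rewrite graph. The tuple encoding and the two-axiom set below are illustrative, not the system's actual axiom base:

```python
from collections import deque

def rewrites(t):
    """Yield (axiom_label, tree) for every tree reachable from t
    by one axiom application at any position."""
    if isinstance(t, tuple):
        op, a, b = t
        if op == '*' and a == 1:
            yield 'A1', b                    # 1 * x -> x
        if op == '+':
            yield 'A3', ('+', b, a)          # x + y -> y + x
        for name, a2 in rewrites(a):         # recurse into left subtree
            yield name, (op, a2, b)
        for name, b2 in rewrites(b):         # recurse into right subtree
            yield name, (op, a, b2)

def find_path(p1, p2, limit=10000):
    """BFS for an axiom sequence rewriting p1 into p2; None if none found
    within `limit` visited programs."""
    queue, seen = deque([(p1, [])]), {p1}
    while queue and len(seen) < limit:
        t, path = queue.popleft()
        if t == p2:
            return path
        for name, t2 in rewrites(t):
            if t2 not in seen:
                seen.add(t2)
                queue.append((t2, path + [name]))
    return None

# ('+', 1*a, b) -> ('+', b, a): one A1 and one A3, in some order
print(find_path(('+', ('*', 1, 'a'), 'b'), ('+', 'b', 'a')))
```

The search is exponential in proof length, which is precisely why a learned model proposing a small number of candidate sequences, each cheap to verify, is attractive.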
\begin{figure*}[h!tb]
\vspace{-.4cm}
\centering\includegraphics[width=14cm]{./images/Full.png}
\caption{\texttt{pe-graph2axiom} System Overview}
\vspace{-.4cm}
\label{fig:Full}
\end{figure*}
\paragraph*{Graph-to-sequence network for pathfinding}
This is instead what we let the neural network learn automatically, and specifically why we implemented graph neural networks to solve this problem \cite{Scarselli09,Xu17}. We rely on the network to suggest a transformation path by inference, and then verify its validity in linear time.
To implement our approach, we randomly enumerate valid sentences in a language, together with a set of equivalence axioms expressible as semantics-preserving rewrite rules from one sentence to another.
The system in Fig.~\ref{fig:Full} takes as input two programs represented as symbolic trees, and produces a sequence of axioms along with their position of application (or node) that can be used to rewrite sequentially one input program into the other input program.
To train the system, we generate pairs of equivalent programs by iterating the axioms with random probability on one program, thereby generating both a path to equivalence and the target program. Random programs are generated so as to respect the grammar defined. The training set is then appropriately selected from these random samples, as detailed in Sec.~\ref{sec:expresults}.
\emph{Node initialization} initializes the graph neural network, converting the input programs' text (e.g., $(a+(b+c))$) into nodes and edges of the \emph{Graph Neural Network} \cite{Scarselli09,Xu17}.
The details of the network are covered in Sec.~\ref{sec:progequivdnn}. In a nutshell, the key principle is to combine a memory-based neural network approach, e.g., using Long Short-Term Memory (LSTM) \cite{Hochreiter97} neurons, with a graph neural network design (which uses Gated Recurrent Units (GRUs) internally) \cite{Beck18} that matches our program graph representation. \emph{Token embedding} is a neural network layer in which tokens are assigned a learnable multidimensional embedding vector \cite{Mikolov13}.
Each layer in \emph{LSTM 2 layers} has 256 neurons, which support sequence generation.
\emph{Token generator} is the final output portion of the network. It learns to output the tokens based on the current LSTM hidden states and the \emph{Global Attention} over the graph neural network. As each token is output, it feeds back into the LSTM layer through the embedding layer to affect its next state. We use a sequence-generation principle with a global attention mechanism \cite{luong15} that allows observing program graph node information while generating the axiom and the location at which it is applied. As developed below, we specifically study the robustness of our approach when generating proofs of increasing length, contrasting models that output the entire path at once with \texttt{pe-graph2axiom}, which incrementally builds the sequence one step at a time, as shown in Sec.~\ref{sec:expresults}.
\subsection{PolyBench}
PolyBench is a benchmark suite of 30 numerical computations with
static control flow, extracted from operations in various application
domains (linear algebra computations, image processing, physics
simulation, dynamic programming, statistics, etc.). Its original
objective is to offer a set of representative numerical computations
that are amenable to polyhedral optimizations, and to offer a platform
to ease reproducibility of experiments, including numerous features
for e.g. cache flushing, highly accurate timing, support for PAPI
hardware counters, and non-random initialization data. PolyBench/C was
developed by Pouchet, with significant contributions from Yuki
starting PolyBench/C 4.0 \cite{polybench-url}.
PolyBench/Python is based on PolyBench/C 4.2.1 by Pouchet and
Yuki \cite{polybench-url}. By design, PolyBench/Python
implements \emph{exactly} the same computation as in the equivalent
PolyBench/C program: the same data types, exact same algorithm
(including the same execution order for the operations, for loop-based
Python implementations), and exact same dataset sizes. The objective
is to provide implementations in different languages of the exact same
computation to allow ``apples-to-apples'' comparisons between native C
implementations and JIT/interpreted implementations in Python, thereby
making it possible to observe the overhead of the Python techniques compared to
the execution of only the necessary instructions in the native C program.
Table~\ref{table:polybenchdesc} presents the 30 kernels in PolyBench
and their description, for completeness.
\begin{table}[h!tb]
\begin{center}
\caption{\label{table:polybenchdesc}30 kernels in PolyBench}
{\small
\begin{tabular}{l|l}
\textsf{Benchmark} & \textsf{Description} \\ \hline
\texttt{2mm} & 2 Matrix Multiplications (alpha * A * B * C + beta * D) \\
\texttt{3mm} & 3 Matrix Multiplications ((A*B)*(C*D)) \\
\texttt{adi} & Alternating Direction Implicit solver \\
\texttt{atax} & Matrix Transpose and Vector Multiplication\\
\texttt{bicg} & BiCG Sub Kernel of BiCGStab Linear Solver \\
\texttt{cholesky} & Cholesky Decomposition \\
\texttt{correlation} & Correlation Computation\\
\texttt{covariance} & Covariance Computation\\
\texttt{deriche} & Edge detection filter\\
\texttt{doitgen} & Multi-resolution analysis kernel (MADNESS) \\
\texttt{durbin} & Toeplitz system solver \\
\texttt{fdtd-2d} & 2-D Finite Difference Time Domain kernel\\
\texttt{floyd-warshall} & Floyd-Warshall all-pairs shortest paths\\
\texttt{gemm} & Matrix-multiply C=alpha.A.B+beta.C\\
\texttt{gemver} & Vector Multiplication and Matrix Addition\\
\texttt{gesummv} & Scalar, Vector and Matrix Multiplication\\
\texttt{gramschmidt} & Gram-Schmidt decomposition\\
\texttt{heat-3d} & Heat equation over 3D data domain\\
\texttt{jacobi-1D} & 1-D Jacobi stencil computation\\
\texttt{jacobi-2D} & 2-D Jacobi stencil computation\\
\texttt{lu} & LU decomposition\\
\texttt{ludcmp} & LU decomposition followed by Forward Substitution\\
\texttt{mvt} & Matrix Vector Product and Transpose\\
\texttt{nussinov} & Dynamic programming for sequence alignment\\
\texttt{seidel-2d} & 2-D Seidel stencil computation\\
\texttt{symm} & Symmetric matrix-multiply\\
\texttt{syr2k} & Symmetric rank-2k update\\
\texttt{syrk} & Symmetric rank-k update\\
\texttt{trisolv} & Triangular solver\\
\texttt{trmm} & Triangular matrix-multiply
\end{tabular}
}
\end{center}
\end{table}
\subsection{PolyBench/Python: General design}
The PolyBench/Python benchmarks have been implemented following the
Python 3 standard. Note that Python 2 is not supported, as it
was discontinued in early 2020~\cite{Python:Python2Discontinued}.
The actual benchmark implementation is built around
the \texttt{PolyBench} abstract class. It includes routines that
support benchmarking and array allocation, and defines abstract
handles that must be filled by implementing subclasses. In particular,
a benchmark will extend the \texttt{PolyBench} class and define the
abstract methods in
Figure~\ref{fig:PolyBench:AbstractFunctions}. These implement the
actual benchmark functionality, including array initialization, the
kernel code itself, and printing the results in a standardized format
which is readily compatible with the outputs produced by PolyBench/C,
allowing for cross-language validation. The \texttt{run\_benchmark()}
method is similar to \texttt{main()} in C codes. It is in charge of
defining the input and output structures of the benchmark, and
initializing them and running the kernel via calls to the appropriate
abstract methods.
\begin{figure}[h!tb]
\scriptsize
\begin{verbatim}
def initialize_array(self, *args, **kwargs): ...
def print_array_custom(self, array: list, dump_message: str = ''): ...
def kernel(self, *args, **kwargs): ...
def run_benchmark(self) -> list[tuple]: ...
\end{verbatim}
\caption{Abstract functions to be implemented by a PolyBench/Python benchmark.}
\label{fig:PolyBench:AbstractFunctions}
\end{figure}
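To make the subclassing scheme concrete, the following is a condensed, self-contained sketch: a stand-in \texttt{PolyBench} base class plus a tiny triangular-solver benchmark implementing the four abstract methods above. The real framework additionally handles timing, runtime knobs, and several array-allocation strategies; the constructor and option-handling details are simplified away here, and the \texttt{create\_array} helper name is ours.

```python
class PolyBench:
    """Stand-in for the framework base class (simplified)."""
    def create_array(self, n, init=0.0):
        return [init] * n                       # simplest (list) strategy
    def initialize_array(self, *args, **kwargs):
        raise NotImplementedError
    def print_array_custom(self, array, dump_message=''):
        raise NotImplementedError
    def kernel(self, *args, **kwargs):
        raise NotImplementedError
    def run_benchmark(self):
        raise NotImplementedError

class Trisolv(PolyBench):
    """Toy triangular solver: solve L x = b for lower-triangular L."""
    N = 4

    def initialize_array(self, L, x, b):
        # deterministic, non-random initialization, as in PolyBench
        for i in range(self.N):
            b[i] = i
            for j in range(i + 1):
                L[i][j] = (i + self.N - j + 1) * 2.0 / self.N

    def kernel(self, L, x, b):
        for i in range(self.N):
            x[i] = b[i]
            for j in range(i):
                x[i] -= L[i][j] * x[j]
            x[i] /= L[i][i]

    def print_array_custom(self, array, dump_message=''):
        print(dump_message, ' '.join(f'{v:.4f}' for v in array))

    def run_benchmark(self):
        L = [self.create_array(self.N) for _ in range(self.N)]
        x, b = self.create_array(self.N), self.create_array(self.N)
        self.initialize_array(L, x, b)
        self.kernel(L, x, b)
        return [('x', x)]    # outputs, for cross-language validation
```

Running `Trisolv().run_benchmark()` returns the named output arrays whose dump can be compared against the PolyBench/C reference output.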
PolyBench/Python provides two different ways to measure
performance. The first is to measure the execution time of the kernel
using the Time Stamp Counter (TSC) register, whose contents are directly
accessed using assembly code executed through the \texttt{inlineasm}
library. The second way is to measure performance counters using the
PAPI library~\cite{PAPIweb,PAPI} through the \texttt{python\_papi}
module.
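The TSC and PAPI hooks are platform-specific; as a portable illustration of the same measurement pattern, a kernel can be wrapped with Python's monotonic nanosecond clock (here \texttt{time.perf\_counter\_ns} stands in for the TSC read, and the helper name is ours):

```python
import time

def time_kernel(kernel, *args, repeat=5):
    """Run `kernel` several times and report the best wall-clock time
    in nanoseconds (best-of-N suppresses transient system noise)."""
    best = None
    for _ in range(repeat):
        start = time.perf_counter_ns()
        kernel(*args)
        elapsed = time.perf_counter_ns() - start
        best = elapsed if best is None else min(best, elapsed)
    return best

elapsed = time_kernel(lambda v: sum(x * x for x in v), list(range(1000)))
```

The actual framework reads the TSC directly in assembly, avoiding the overhead of the Python clock call itself.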
\subsubsection{Implementations Available}
Translating PolyBench/C to Python requires a number of design
decisions to deal with the intrinsic differences between both
languages. One of the critical differences is related to data
representation. Polyhedral codes in general, and the PolyBench
benchmarks in particular, manipulate arrays and scalar variables of
basic types. In C, these are stored sequentially in memory in
row-major order. There is no equivalent representation in pure Python,
where everything is an object and there are no basic datatypes, in
contrast to other interpreted languages, such as Java, where both
concepts coexist. Since everything is an object, lists of \texttt{int}
values are not a collection of contiguously stored 32- or 64-bit basic
values, but rather a collection of contiguously stored 64-bit pointers
to \texttt{int} objects. This creates an additional level of
indirection which degrades performance when traversing the array.
Furthermore, when considering multidimensional structures, there is a
choice between implementing them as a sequence of nested lists,
similar to how a cascade of pointers to pointers would work in C, or
flattening the structure and linearizing the accesses, as
automatically done by C compilers with multidimensional array
allocations. One can expect the flattened version to be more
performant, in the same way that in C cascaded pointers
introduce an additional level of indirection for each dimension of
the data structure, degrading memory performance.
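The indirection argument can be made concrete: the sketch below shows allocation and a single element access under the two pure-Python layouts (the variable names are illustrative).

```python
N, M = 3, 4

# Layout 1: nested lists, analogous to cascaded pointers in C
nested = [[0.0] * M for _ in range(N)]

# Layout 2: one linearized list, analogous to C row-major allocation
flat = [0.0] * (N * M)

i, j = 1, 2
nested[i][j] = 7.0          # two list dereferences
flat[M * i + j] = 7.0       # one dereference plus index arithmetic

assert nested[i][j] == flat[M * i + j] == 7.0
```

Every access to `nested[i][j]` chases two object pointers, whereas the flattened access replaces one pointer chase with cheap integer arithmetic on the linearized index.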
One additional alternative for array implementation is to directly use
NumPy arrays~\cite{Python:NumPy}. This looks like a good design
alternative, since NumPy arrays are, by design, C-like objects, with
homogeneously-typed data and contiguous in memory. This has the
potential to greatly improve the memory behavior, and therefore
performance, but it comes with its own performance pitfalls that need
to be carefully studied.
In PolyBench/Python, the abstract \texttt{PolyBench} class that all
benchmarks must extend implements all these different strategies for
array allocation. Figure~\ref{fig:PolyBench:ArrayImplementation}
details the three array implementation alternatives. The user must
select the desired implementation using runtime knobs. These
alternatives will be studied and compared in the experimental analysis
in Section~\ref{sec:expresults}.
\newsavebox{\gemmlist}
\begin{lrbox}{\gemmlist}
\scriptsize
\begin{minipage}{.49\columnwidth}
\begin{verbatim}
for i in range(0, self.NI):
for j in range(0, self.NJ):
C[i][j] *= beta
for k in range(0, self.NK):
for j in range(0, self.NJ):
C[i][j] += alpha * A[i][k]
* B[k][j]
\end{verbatim}
\end{minipage}
\end{lrbox}%
\newsavebox{\gemmflist}
\begin{lrbox}{\gemmflist}
\scriptsize
\begin{minipage}{\columnwidth}
\begin{verbatim}
for i in range(0, self.NI):
for j in range(0, self.NJ):
C[self.NJ * i + j] *= beta
for k in range(0, self.NK):
for j in range(0, self.NJ):
C[self.NJ * i + j] += alpha * A[self.NK * i + k]
* B[self.NJ * k + j]
\end{verbatim}
\end{minipage}
\end{lrbox}%
\newsavebox{\gemmnp}
\begin{lrbox}{\gemmnp}
\scriptsize
\begin{minipage}{.49\columnwidth}
\begin{verbatim}
C *= beta
C += alpha * np.dot( A, B )
\end{verbatim}
\end{minipage}
\end{lrbox}%
\begin{figure}
\subfloat[List]{\usebox{\gemmlist}}
\subfloat[NumPy]{\usebox{\gemmnp}}\\
\subfloat[Flattened List]{\usebox{\gemmflist}}
\caption{List, flattened list, and NumPy alternative array implementations for the \texttt{gemm} kernel.}
\label{fig:PolyBench:ArrayImplementation}
\end{figure}
\subsubsection{Control structures}
The PolyBench/C benchmarks prominently feature two control
structures: \texttt{if} statements and \texttt{for} loops. The
conditionals have a direct translation to Python, with no semantic and
minimal syntactic variations. However, \texttt{for} loops in Python
are \emph{foreach}-style loops that traverse a collection of
objects. To implement them efficiently, a C \texttt{for} loop
is translated to a Python \texttt{for} traversing a \texttt{range}
expression. This is a special type of lazy collection, populated
on demand, one object at a time, avoiding the memory
overhead of instantiating the full collection.
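As a minimal illustration of the translation pattern (the helper name is ours), a C loop of the form \texttt{for (i = lb; i < ub; i += s)} becomes a Python \texttt{for} over a \texttt{range}:

```python
def c_style_loop(lb, ub, s):
    """Collect the indices visited by the C loop
    `for (i = lb; i < ub; i += s)` via a Python range."""
    out = []
    for i in range(lb, ub, s):  # same bounds and stride as the C loop
        out.append(i)
    return out

print(c_style_loop(0, 10, 3))  # -> [0, 3, 6, 9]
```

Because `range` yields indices lazily, the loop never materializes the full index collection, matching the memory behavior of the C original.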
\subsection{Support for Polyhedral Optimizations}
We have implemented an
analysis and translation layer capable of reading Python kernels and
generating their ScopLib \cite{scoplib} representation, as
well as regenerating Python code from the ScopLib representation. The input to
this tool is not the Python source code, but the bytecode generated by
CPython. This allows the optimizations to be performed in a
just-in-time fashion, upon execution of the code, although this
approach has not been used in the current work. However, no search or
isolation of the static control part (SCoP) is performed at this
time. The tool receives a Python function and assumes its entire body
to be a valid SCoP.
\subsection{Python interpreters}
Just as different compilers exist for C codes, a collection of
different interpreters exists for Python. CPython~\cite{Python:CPython}
is developed by the Python Software Foundation. Its goal is not to
achieve high performance, but to provide a multiplatform environment
that serves as the reference interpreter for other projects.
PyPy~\cite{Python:PyPy} is a performance-oriented interpreter
developed using the RPython~\cite{Ancona:RPython} toolchain. The
Python code is translated to RPython, which is then translated to flow
graphs, and then to C. The RPython layer includes a tracing
just-in-time layer including an optimizer and a back-end that
generates machine code.
Intel Distribution for Python~\cite{Python:IntelPython} is designed to
make Intel libraries such as the Math Kernel Library~\cite{Intel:MKL}
and the Data Analytics Acceleration Library~\cite{Intel:DAAL} usable
from Python. It does not intend to provide fast pure Python code, but
to bridge the technological gap between Python libraries such as NumPy
and Intel products.
The performance of these interpreters will be compared in the next section.
\section{Programming Language}
We now present the formalism we use in this work to represent symbolic expressions and their equivalences. We carefully co-designed this problem representation and the (graph) neural network approach to make the best use of machine learning via deep networks, as discussed in Sec.~\ref{sec:progequivdnn}.
\subsection{Input Representation}
A key design aspect is to match the capability of the neural network to model the input as a walkable graph with the actual input program representation to be handled. We therefore model ``programs'' in a dataflow-like representation (i.e., a directed graph) with a single root/output node. Symbolic expressions computing a single result naturally fit this representation. The following definitions apply to programs represented as dataflow graphs, although we specialize them to symbolic expressions.
\begin{definition}[Expression graph node]
\label{def:proggraphnode}
A node $n \in N$ in the expression graph models n-ary operations and input operands. A node produces a value which can be consumed by any of its immediate successors in the graph. A node with no predecessor models an input value. The output value of the computation is produced by the unique root node $n_{root}$ of the graph, the only node without a successor.
\end{definition}
\begin{definition}[Expression graph directed edge]
\label{def:proggraphedge}
A directed edge $e_{n_1,n_2} : n_1 \rightarrow n_2$ with $n_1, n_2 \in N$ in the expression graph connects the producer of a value ($n_1$) to a node consuming this value in the computation.
\end{definition}
\begin{definition}[Expression graph]
\label{def:proggraph}
An expression graph $G$ is a directed dataflow graph modeling the computation, made of nodes $n_i \in N$ and edges $e_{n_i,n_j} \in E$ as defined in Def.~\ref{def:proggraphnode} and Def.~\ref{def:proggraphedge}. That is, $G = \langle n_{root}, N, E \rangle$. There is no dangling edge nor unconnected node in $G$.
\end{definition}
\paragraph*{Language of linear algebra expressions} We developed a sufficiently rich language to evaluate our work carefully, one that captures complex linear algebra expressions. Specifically, we support 3 types of data/variables in an expression: scalars, vectors and matrices, with the standard notation $a,\vec a,A$ respectively.
We evaluate using different variable names for each of the 3 types above, along with their identity and absorbing elements.
We also model a rich set of operators, mixing different unary and binary operations for each type. Specifically, we support $*_s,+_s,-_s,/_s$ between scalar operands, $+_v,-_v,*_v$ between vectors, and $+_m,-_m,*_m$ between matrices. For $-,/$ we also support their unary versions for all types, e.g., $^{-1_{s}}$ for unary scalar inversion and $-_{um}$ for unary matrix negation. For example, $a^{-1_s}$ computes $1/a$.
We also support multi-type operations, such as vector and matrix scaling by a scalar ($*_{sv}, *_{sm}$), and two specific unary matrix operations, transpose $^{t_m}$ and matrix inversion $^{-1_m}$. Note that every operator has a unique name in our language, driven by the types of its operands. This facilitates the learning of the expression embedding, avoiding the need to learn type propagation.
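The unique-naming convention can be sketched as a small type-directed renaming pass applied at parse time; the type tags (\texttt{s}, \texttt{v}, \texttt{m}) and token spellings below are illustrative, not the system's exact vocabulary:

```python
def name_op(op, left_type, right_type):
    """Derive a type-unique operator token from operand types,
    e.g. ('+', 's', 's') -> '+_s' and ('*', 's', 'm') -> '*_sm'.
    Unary operators would follow the same pattern with one type tag."""
    if left_type == right_type:
        return f"{op}_{left_type}"          # homogeneous operands
    return f"{op}_{left_type}{right_type}"  # multi-type operation

print(name_op('+', 's', 's'))  # -> +_s
print(name_op('*', 's', 'm'))  # -> *_sm
```

Because the token already encodes the operand types, the network can learn a per-token embedding without having to infer type propagation through the expression graph.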
\paragraph*{Examples} Expressions of the form $A (B C^t D) E^{-1}$, $\vec a + b\vec c^{-1}-0\vec e$, $(a+b)+(c(d/e))$, $(aA+bB)C^t$, etc.\ can be parsed trivially into our representation; one simply needs to provide a unique name for each operand and operator type (possibly via some analysis, or simple language design principles), that is, avoiding overloading the semantics of operators and operands. Note that the semantics is never explicitly provided to our DNN approach; it is learned by example. There will be no example of the form, e.g., $a+A$, an invalid expression in our language.
We believe a sensible approach is to develop a clean, regular grammar for the language to be handled, as these are implicitly concepts the DNN will need to learn. We did so, using a classical LL(1) grammar description of our linear algebra language. This is not a requirement of our approach, as one can arrive at the desired input expression graph by any means necessary, but we believe making reasoning about the language structure ``easy'' is an important design aspect.
\subsection{Axioms of Equivalence}
A central aspect of our approach is to view the problem of expression
equivalence as finding a sequence of locally-correct rewrite rules
that each preserve the semantics, \emph{thereby making incremental reasoning possible}. We explicitly do not
consider non-semantics-preserving axioms. A rich structure of alternate but
equivalent ways to rewrite one expression to another makes the problem
easier to sample and more amenable to machine learning. Semantics-preserving axioms enable incremental per-axiom reasoning, and enforce semantics preservation without overly complicated semantics analysis; while still manipulating a very
rich space of transformations. To illustrate this we specifically
design axioms that perform complex graph modifications, such as node
deletion or creation, subtree manipulation, multi-node graph changes,
etc.
A graph pattern can be viewed as a pattern-matching rule on graphs and its precise applicability criteria. It can also be viewed as a sentential form of the language grammar, e.g. \texttt{ScalarVal PlusOp ScalarVal} is a pattern, if the grammar is well formed.
\begin{definition}[Graph pattern]
\label{def:graphpattern}
A graph pattern $P$ is an unambiguous structural description of a (sub-)graph $G_P$, which can be deterministically matched in any expression graph $G$. We have $P = \langle G_P, M_n, M_e \rangle$ where, for each node $n_i \in N^{G_P}$, $M_n(n_i)$ returns the set of node values accepted to match $n_i$ in a graph $G$. For $n_i,n_j \in N^{G_P}$, $M_e(n_i, n_j)$ returns the set of edges between $M_n(n_i)$ and $M_n(n_j)$ to be matched in $G$. A pattern $G_P$ is matched in $G$ if (a) $\forall n_i \in N^{G_P},~ \exists~ n_m \in M_n(n_i) \cap N^G$; (b) $\forall e_{n_i,n_j} \in E^{G_P},~ \exists~ e_{M_n(n_i),M_n(n_j)} \in M_e(n_i, n_j) \cap E^G$; and (c) $\not\exists~ e_{M_n(n_i),M_n(n_j)} \in E^G$ with $e_{M_n(n_i),M_n(n_j)} \notin M_e(n_i, n_j)$.
\end{definition}
Note that when a graph pattern models a rewrite, $M_n$ and $M_e$ are adjusted accordingly to output the rewrite of a node $n \in N^G$ into its desired value, instead of the set of acceptable nodes matching $n \in N^{G_P}$.
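To make the definition concrete, the following minimal sketch (hypothetical code, not the system's actual implementation) matches a small pattern against an expression tree: $M_n$ is modeled as a set of accepted labels per pattern node, and edge structure is matched positionally.

```python
# Minimal sketch of the graph-pattern definition on expression trees,
# where a node is (label, children). M_n is a set of accepted labels;
# edges are matched positionally. Names are illustrative only.
ANY_SCALAR = {"a", "b", "c"}  # assumed operand alphabet for this example

def matches(pattern, node):
    """True iff `pattern` matches the subtree rooted at `node`."""
    plabels, pchildren = pattern
    label, children = node
    accepted = plabels if isinstance(plabels, set) else {plabels}
    if label not in accepted:
        return False
    if len(pchildren) != len(children):  # edge structure must match exactly
        return False
    return all(matches(p, c) for p, c in zip(pchildren, children))

# The sentential-form pattern "ScalarVal PlusOp ScalarVal":
plus_two_scalars = ("+", [(ANY_SCALAR, []), (ANY_SCALAR, [])])

print(matches(plus_two_scalars, ("+", [("a", []), ("b", [])])))  # True
print(matches(plus_two_scalars, ("+", [("a", []), ("+", [("b", []), ("c", [])])])))  # False
```

The second call fails because the right child is an operator node, not an accepted scalar value.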
\begin{definition}[Axiom of equivalence] An axiom $A$ is a semantics-preserving rewrite rule $G' = A(n,G)$ that can arbitrarily modify an expression graph $G$, producing another expression graph $G'$ respecting Def.~\ref{def:proggraph} with identical semantics to $G$. We write $A : \langle P_{match}, P_{replace} \rangle$ for an axiom, where $P_{match}, P_{replace}$ are graph patterns as per Def.~\ref{def:graphpattern}. The application of axiom $A$ to node $n$ in $G$ is written $A(n,G)$.
\end{definition}
We can compose axioms to form a complex rewrite sequence.
\begin{definition}[Semantics-preserving axiom composition]
\label{def:axiomcompos}
Given a sequence $S:~ A_1(n_1,A_2(n_2,\ldots,A_m(n_m,G)))$ of $m$ axiom applications, $S$ is a semantics-preserving composition if, for each $G_j = A_i(n_i,G_i) \in S$, $P_{match}^{A_i}$ succeeds on the subgraph rooted at $n_i$ in $G_i$, and $G_j$ is obtained by applying $P_{replace}^{A_i}$ at $n_i$.
\end{definition}
\begin{theorem}[Expression graph equivalence]
\label{th:progequiv}
Given an expression graph $G$, if $G' = S(G)$ where $S$ is a semantics-preserving sequence as per Def.~\ref{def:axiomcompos}, then $G \equiv G'$: the two graphs are equivalent under the axiom system used in $S$.
\end{theorem}
This is a direct consequence of using only semantics-preserving axioms: since no rewrite can individually alter the semantics, neither can their incremental composition. It leads to the formal problem we are addressing:
\begin{corollary}[Expression graphs equivalence matching]
\label{th:progequivmatching}
Given two expressions $G,G'$, if there exists a semantics-preserving sequence $S$ such that $G' = S(G)$, then $G \equiv G'$.
\end{corollary}
Note that here $=$ means complete structural equivalence between the two graphs: they are identical in structure \emph{and} in label/node values. Determining $G = G'$ amounts to visiting both graphs simultaneously, e.g., in a depth-first search from the root, verifying structural equivalence while checking that the same node labels appear in both at the same positions. This is trivially implemented in time linear in the graph size.
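The $G = G'$ check described above can be sketched as a simultaneous depth-first traversal (a hypothetical helper, assuming trees are encoded as (label, children) tuples):

```python
def structurally_equal(g1, g2):
    """Simultaneous DFS from both roots: graphs are identical iff every node
    pair agrees on its label and on the number and order of its children."""
    (label1, children1), (label2, children2) = g1, g2
    if label1 != label2 or len(children1) != len(children2):
        return False
    return all(structurally_equal(a, b)
               for a, b in zip(children1, children2))

a_plus_b = ("+", [("a", []), ("b", [])])
b_plus_a = ("+", [("b", []), ("a", [])])
print(structurally_equal(a_plus_b, a_plus_b))  # True: identical structure and labels
print(structurally_equal(a_plus_b, b_plus_a))  # False: equivalent, but not identical
```

Each node is visited at most once, so the check is linear in the graph size, as stated.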
\paragraph*{Language of linear algebra expressions} We have implemented a total of 102 different axioms for our language, made of the multi-type versions of the 14 core restructuring axiom categories described later in Table~\ref{tab:TransformPct}. They all follow established linear algebra properties. Note that different data types have different axioms following typical linear algebra rules; e.g., matrix multiplication does not commute, but scalar and vector multiplications do. Examples of axioms include $x(yz) \rightarrow (xy)z$, $X-X\rightarrow O$, $-(\vec x - \vec y) \rightarrow \vec y - \vec x$, or $X^{t^t} \rightarrow X$; an exhaustive list is displayed in the Supplementary Material.
In our experiments, we presume matrix and vector dimensions are appropriate for the given operation. Such dimension-compatibility checks are simple to implement, e.g., by introducing additional nodes in the program representation, but are not considered in our test language.
\paragraph*{Examples} We illustrate axiom-based rewrites using axioms presented in later Table~\ref{tab:TransformPct}. Note that axiom names follow the structural changes applied. For example, we have $a+b \equiv b+a:~\{a+b\}= Commute(\{+\},\{b+a\})$, and $a+b+c \equiv b+c+a:~\{a+b+c\}= Commute(\{+_1\},Commute(\{+_2\},\{b+c+a\}))$. Note that we refer to different nodes with the same symbol (e.g., $+_2$) by subscripting them with their order in a DFS traversal of the expression graph, starting from the unique root. We have $0 \equiv a-a:~\{0\}= Cancel(\{-\},\{a-a\})$. These can be combined into complex paths, e.g., $b+c \equiv c+b+(a-a):~\{b+c\}= Commute(\{+\},Noop(\{+\},Cancel(\{-\},\{c+b+(a-a)\})))$. Such axioms are developed for scalars, matrices, and vectors, and include complex rewrites such as distributivity rules and transpositions. A total of 102 axioms are used in our system.
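The axiom applications illustrated above can be sketched in code. The following (hypothetical encoding: trees as (label, children) tuples, rewrites applied at the root) shows Commute and Cancel as semantics-preserving rewrites:

```python
# Hypothetical sketches of two axiom categories as root rewrites on
# (label, children) trees; the encoding is illustrative only.
def commute(node):
    """(x + y) -> (y + x): valid only for commutative operators."""
    op, (left, right) = node
    assert op == "+", "Commute applies to commutative operators only"
    return (op, [right, left])

def cancel(node):
    """(x - x) -> 0: the two operands must be structurally identical."""
    op, (left, right) = node
    assert op == "-" and left == right, "Cancel requires the form x - x"
    return ("0", [])

a, b = ("a", []), ("b", [])
# a + b == b + a, proved by one Commute application at the root:
print(commute(("+", [a, b])) == ("+", [b, a]))  # True
# a - a == 0, proved by one Cancel application:
print(cancel(("-", [a, a])) == ("0", []))       # True
```

Composing such single-step rewrites, as in Def.~\ref{def:axiomcompos}, yields the multi-axiom paths shown above.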
\subsection{Space of Equivalences}
We now define the search space explored in this work, i.e., the exact space of solutions on which the DNN system formally operates, and which we sample for training.
\begin{definition}[Graph of the space of equivalences]
\label{def:graphofequiv}
Given a language $\mathcal{L}$. The directed graph of equivalences between expressions is $G^{equiv} = \langle N^{equiv}, E^{equiv}\rangle$ such that $\forall l \in \mathcal{L}, n_l \in N^{equiv}$, and $e^{A_i,x}_{n_i,n_j} : n_i \rightarrow n_j \in E^{equiv}$ iff $n_j \equiv A_i(x,n_i)$, $\forall A_i$ in the axiom system and $x$ a position in $n_i$ where $A_i$ is applicable.
\end{definition}
In other words, the graph has one node per possible expression in the language $\mathcal{L}$, and a single axiom application leads to connecting two nodes. We immediately note that $G^{equiv}$ is a (possibly infinite) multigraph, and contains circuits.
\begin{theorem}[Expression equivalence with pathfinding]
Given two expressions $n_i,n_j \in N^{equiv}$, if there is any path from $n_i$ to $n_j$ in $G^{equiv}$, then $n_i \equiv n_j$.
\end{theorem}
The proof is a direct consequence of Def.~\ref{def:graphofequiv}. In this work, we randomly sample this exact graph to learn how to build paths between arbitrary expressions. As it is a multigraph, there may be many different sequences proving the equivalence between two expressions; exposing a single one is sufficient to prove equivalence.
\begin{corollary}[Semantics-preserving rewrite sequence]
Any directed path in $G^{equiv}$ is a semantics-preserving rewrite sequence between the corresponding expressions, described by the sequence of axioms and expression positions labeling the edges of the path. This sequence forms the proof of equivalence.
\end{corollary}
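As a baseline illustrating such pathfinding (a brute-force sketch, not our learned approach; the tuple encoding and axiom set are hypothetical), a breadth-first search over $G^{equiv}$ recovers one such directed path:

```python
from collections import deque

def commute_anywhere(expr):
    """Yield every expression reachable by one Commute application at any
    position; expressions are hashable nested tuples (op, left, right)."""
    if isinstance(expr, str):
        return
    op, left, right = expr
    if op == "+":
        yield (op, right, left)
    for new_left in commute_anywhere(left):
        yield (op, new_left, right)
    for new_right in commute_anywhere(right):
        yield (op, left, new_right)

def bfs_proof(start, target, axioms, max_depth=5):
    """Return a shortest axiom sequence rewriting `start` into `target`,
    or None if no path of length <= max_depth exists in G^equiv."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        expr, path = frontier.popleft()
        if expr == target:
            return path
        if len(path) == max_depth:
            continue
        for name, apply_all in axioms.items():
            for nxt in apply_all(expr):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))
    return None

axioms = {"Commute": lambda e: list(commute_anywhere(e))}
proof = bfs_proof(("+", ("+", "a", "b"), "c"), ("+", "c", ("+", "b", "a")), axioms)
print(proof)  # ['Commute', 'Commute']: two applications suffice
```

The exponential growth of the frontier is exactly why such exhaustive traversal does not scale, motivating a learned heuristic.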
We believe that ensuring there are typically many ways to compute a proof of equivalence in our framework is key to enabling the DNN approach to automatically learn a pathfinding algorithm for building such proofs. More compact representations of this space of equivalences are clearly possible, including by folding nodes of the equivalence graph for structurally similar expressions and folding equivalent paths between nodes. When building, e.g., a deterministic pathfinding algorithm, such space-size reduction would bring complexity benefits \cite{kaplan1969regular,barthou2002}. We believe that for the effective deployment of graph-to-sequence systems, exposing significant redundancy in the space facilitates the learning process; it also alleviates the need to reason about the properties of this space to find an efficient traversal heuristic.
\section{Related Work}
\paragraph*{Theorem provers} The problem of equivalence as we formulate it may be solved by other (smart) brute-force approaches in which a problem is solved by pathfinding. These range from theorem-proving systems like Coq \cite{bertot2013interactive}, which supports the formal framework for equivalence we describe in this paper, to (approximate probabilistic) model checking \cite{clarke1994model,burch1992symbolic,herault2004approximate}, on top of which program equivalence systems have also been built, e.g., \cite{steffen1991data,clarke2003behavioral,visser2003model,namjoshi2000syntactic}. Our contribution is not in the formal definition of program equivalence we presented; semantics-preserving rewrite systems have been studied before, e.g., \cite{visser2004program,lucanu2015program,reddy1989rewriting}. But understanding why this particular formalism is well suited to deep learning graph-to-sequence systems was key.
The merits of stochastic search to accelerate such systems have been demonstrated, e.g., \cite{murawski2005probabilistic,herault2004approximate,gogate2012probabilistic}. The novelty of our approach is to develop carefully crafted graph-to-sequence neural networks that automatically learn an efficient pathfinding heuristic for this problem. Our approach is potentially applicable in these areas too; however, training scalability can become a challenge if the input representation size grows excessively.
Theorem provers using deep learning have recently started to be investigated: Aygun et al.~\cite{Aygun20} developed a graph neural network system for automatic proof generation. Wu et al.~\cite{Wu20} explore the ability of theorem provers using GNN, TreeLSTM, and bag-of-words architectures to generalize and solve proofs of length up to 7 axioms, and found that GNNs performed best among the architectures studied when more complex proofs were required. While our model works in a slightly different problem space, we study the ability of our models to generalize on proofs of length up to 10, with 14 different rewrite rules acting on 147 distinct axioms. These frameworks could also be used to prove equivalence between symbolic expressions, as theorem provers.
\paragraph*{Static program equivalence} Algorithms for static program equivalence have been developed, e.g., \cite{verdoolaege2012equivalence,alias2004recognition,barthou2002,iooss2014program}. These approaches are typically restricted to demonstrating the equivalence of different schedules of the operations, possibly dynamically \cite{bao2016polycheck}. In this work we target graph-modifying rewrites, which therefore alter the operation count. Barthou et al.~\cite{barthou2002,alias2004recognition} have developed techniques to recognize algorithm templates in programs. These approaches are restricted to static/affine transformed programs.
Karfa et al.\ also designed a method that works for a subset of affine programs, using array data dependence graphs (ADDGs) to represent the input and transformed behaviors. Their operator-level equivalence checking provides the capability to normalize expressions and establish matching relations under algebraic transformations \cite{karfa2013verification}. Mansky and Gunter used the TRANS language \cite{kalvala2009program} to represent transformations; the correctness proof implemented in their verification framework \cite{mansky2010framework} is verified by the Isabelle \cite{isabelle-web} proof assistant. Other works also include translation validation \cite{sorin:proving,necula2000translation}.
\paragraph*{Program analysis with machine learning} Numerous prior works have employed (deep) machine learning for program analysis, e.g., \cite{AllamanisACM18,Alon19,Tufano19,LacomisDIRE2019,Raychev2015,Bavishi17}. code2vec \cite{Alon19} presents a method for creating a useful embedding vector that summarizes the semantic meaning of a snippet of code. Program repair approaches, e.g., \cite{Tufano19,Chen19}, are deployed to automatically repair bugs in a program; output accuracies of up to 20\% on the test set are reported, using sequence-to-sequence models.
Wang et al.~\cite{Wang18} learn to extract the rules of Tomita grammars \cite{tomita82} with recurrent neural networks. The learned network weights are processed to create a verifiable deterministic finite automaton (DFA) representation of the learned grammar. This work demonstrates that deterministic grammars can be learned with RNNs, a property we rely on.
\paragraph*{Graph Neural Networks}
Graph neural networks \cite{Scarselli09,Wu19} use machine learning
to analyze a set of nodes and edges for patterns related to a target problem.
Using a graph-to-sequence network with attention has been analyzed for natural
language processing \cite{Beck18}. Allamanis et al. use graph
neural networks to analyze code sequences and add edge types representing
LastUse, ComputedFrom, and LastWrite to improve the system's ability to
reason about the code \cite{AllamanisICLR18}. Their work achieves
84\% accuracy on correcting variable misuse cases and provides insights
to useful edge types.
Structure2vec \cite{Xu17} uses a graph neural network to detect binary code similarity, learning an embedding from an annotated control flow graph (ACFG) of a program. This learning process targets the embedding so that equivalent programs will have equivalent embeddings, reporting precision scores of 84\% and 85\% on various test datasets for correctly predicting program equivalence. It only outputs a probability of equivalence, and not a verifiable proof, which is sufficient in their context.
The G2SKGE model \cite{Li19} has a similar graph network structure, using a node embedding (which they refer to as an information fusion mechanism) to predict relationships between nodes. This technique of using a neural network to understand and predict node interrelationships is common to our approach.
\section{Samples Generation}
The careful creation of our training dataset is key: as we let the DNN learn \emph{by example only} what the axioms are and when they are applicable in the structure of a program, we must carefully sample the space of equivalences to ensure an appropriate distribution of examples. We produce a final dataset of tuples $(P1,P2,S)$: a pair of input programs and a possible rewrite-rule sequence that proves the pair equivalent. Duplicates are removed such that all samples have a unique $P1$. From this dataset, we create 1,000,000 training samples, 10,000 validation samples, and 10,000 test samples.
We outline below its generation principles; extensive details and the algorithms used are presented in Section~\ref{sec:appendix:genexamples}.
\paragraph*{Random sample generation}
Deep learning typically requires large training sets to be effectively deployed, hence we developed a process to automate sample generation. We specifically use randomized program generation algorithms driven by a given language grammar. By randomly choosing between production rules, one can build random parse trees by simply iterating the grammar; the leaves obtained form a sentence accepted by the language, i.e., a program \cite{bielik16}.
We limit programs to 50 nodes in the program tree (or AST), with a maximal tree depth of 7.
We assert that our random production rule procedure has a non-zero probability of producing any program allowed by the grammar for our datasets.
We produce equivalent program samples by pseudo-randomly applying axioms to one randomly generated program, producing a rewrite sequence and the associated equivalent program. Given a randomly selected node in the program graph, our process checks which axiom(s) can be applied; e.g., the $+_m$ operator may have the Commute axiom category applied, or the Transpose axiom category, which affects the operator's children.
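The generation process just described can be sketched as follows (hypothetical code: the grammar, leaf bias, and axiom-applicability checks are illustrative, not our exact settings):

```python
import random

OPERANDS = ["a", "b", "c"]   # assumed scalar alphabet for this sketch
OPERATORS = ["+", "*"]

def random_expr(max_depth, rng):
    """Build a random parse tree by iterating grammar productions
    E -> E op E | operand, biased toward leaves near the depth limit."""
    if max_depth == 0 or rng.random() < 0.3:
        return rng.choice(OPERANDS)
    op = rng.choice(OPERATORS)
    return (op, random_expr(max_depth - 1, rng),
                random_expr(max_depth - 1, rng))

def applicable_axioms(expr):
    """Return the axiom categories applicable at the root (sketch)."""
    if isinstance(expr, str):
        return []
    op, left, right = expr
    names = ["Commute"]                      # scalar + and * both commute
    if isinstance(right, tuple) and right[0] == op:
        names.append("AssociativeLeft")      # x(yz) -> (xy)z shape
    return names

rng = random.Random(42)
print(random_expr(3, rng))
print(applicable_axioms(("*", "a", ("*", "b", "c"))))
```

A sample is then produced by repeatedly picking a node, picking one applicable axiom at that node, and recording the sequence.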
\paragraph*{Final experimental dataset: AxiomStep10}
To train our network to produce one axiom step at a time, as described in Sec.~\ref{sec:motivation}, AxiomStep10 has a single axiom in each output sequence $S$.
For a complete generated proof $S: A_1(A_2(\ldots A_N(P1)\ldots))$ made of $N$ axioms in a tuple $(P1,P2,S)$, we create $N$ training examples for the network: $(P1,P2,A_N)$ for the first intermediate step, obtained by applying the first axiom, then $(A_N(P1),P2,A_{N-1})$, etc.
We limit proof length to 10 axioms in our experiments (hence AxiomStep10). Test samples only contain the original and target programs; the network proposes axioms which create intermediate programs towards the proof and are fed back to the system.
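This iterative inference protocol can be sketched as follows (hypothetical code; `predict_axiom` stands in for the trained network's single-step inference, and `apply_axiom` for the rewrite engine):

```python
def iterative_prove(p1, p2, predict_axiom, apply_axiom, max_steps=10):
    """Repeatedly query the model for one axiom, apply it, and feed the
    rewritten program back, until p2 is reached or the budget is spent."""
    proof, current = [], p1
    for _ in range(max_steps):
        if current == p2:
            return proof                     # valid proof found
        step = predict_axiom(current, p2)    # one axiom per inference
        current = apply_axiom(step, current)
        proof.append(step)
    return proof if current == p2 else None  # None: no proof within budget

# Toy stand-ins: programs are strings, the single axiom reverses them.
apply_axiom = lambda step, prog: prog[::-1]
predict_axiom = lambda cur, target: "Commute"
print(iterative_prove("ab", "ba", predict_axiom, apply_axiom))  # ['Commute']
```

Because every applied step is itself semantics-preserving, any sequence returned is a checkable proof regardless of how the predictions were produced.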
\begin{table}[h!tb]
\caption{Distribution of the 14 axiom categories in the AxiomStep10 test set. Considering combinations of scalar (a, b, ...), vector ($\vec v$, $\vec w$, ...), and matrix (A, B, ...) types, 147 distinct axioms are represented.\vspace{-.3cm}}
\label{tab:TransformPct}
\small
\setlength\tabcolsep{2pt}
\centering
\begin{tabular}{@{}llr@{}}
\toprule
Axiom Category & Example axiom(s) & Samples with \\
\midrule
Cancel & (A-A)$\rightarrow$O,(b/b)$\rightarrow$1 & 13.8\% \\
NeutralOp & ($\vec v$ - $\vec o$) $\rightarrow$ $\vec v$ & 40.0\% \\
DoubleOp & $A^{t^t} \rightarrow A$, 1/1/x$\rightarrow$x & 7.3\% \\
AbsorbOp & (A*O)$\rightarrow$O, (b*0)$\rightarrow$0 & 30.3\% \\
Commute & (a + b) $\rightarrow$ (b + a) & 48.6\% \\
DistributeLeft & (a + b)c $\rightarrow$ ac + bc & 36.3\% \\
DistributeRight & a(b + c) $\rightarrow$ ab + ac & 27.8\% \\
FactorLeft & ab + ac $\rightarrow$ a(b+c) & 6.1\% \\
FactorRight & ac + bc $\rightarrow$ (a+b)c & 9.0\% \\
AssociativeLeft & a(bc) $\rightarrow$ (ab)c & 46.3\% \\
AssociativeRight & (ab)c $\rightarrow$ a(bc) & 43.1\% \\
FlipLeft & -($\vec v$ - $\vec w$) $\rightarrow$ $\vec w-\vec v$ & 8.4\% \\
FlipRight & a/(b/c) $\rightarrow$ a(c/b) & 26.1\% \\
Transpose & $(AB)^{t} \rightarrow B^{t}A^{t}$ & 11.1\% \\
\bottomrule
\end{tabular}
\end{table}
\vspace{-.3cm}
\paragraph*{Datasets to study generalizability and robustness}
In order to study our model's ability to generalize, we have created alternate datasets on which to train and test models, summarized in Table~\ref{tab:Datasets}.
\emph{WholeProof10} will help us contrast learning approaches.
This dataset has the complete proof sequence $S$, made of $N\ge 1$ axioms, as the reference output for a program pair, while for AxiomStep10, $N=1$.
Models trained on WholeProofX must maintain internal state representing the graph transformations that the axioms create. They are not ``iterative'': a single inference is expected to produce the complete proof, in contrast to AxiomStep10, for which a single axiom of the sequence is produced at each inference step. Training long output sequences can benefit from advanced training approaches such as professor forcing \cite{Lamb16}, but we will show that our AxiomStep10 model generalizes well with our sequence-model training approach.
\begin{table}[h!tb]
\vspace{-.3cm}
\caption{Datasets used for studies in experiments.\vspace{-.3cm}}
\label{tab:Datasets}
\small
\centering
\begin{tabular}{@{}lcccl@{}}
\toprule
Dataset & AST depth & AST \#nodes & Proof length & Iterative \\
\midrule
AxiomStep10 & 2-7 & 2-50 & 1-10 & Yes\\
AxiomStep5 & 2-6 & 2-25 & 1-5 & Yes\\
WholeProof10 & 2-7 & 2-50 & 1-10 & No\\
WholeProof5 & 2-6 & 2-25 & 1-5 & No\\
\bottomrule
\end{tabular}
\end{table}
\paragraph*{Complexity of equivalence space}
Figure~\ref{fig:Datadist} provides a view of the complexity of the equivalence problem we tackle. The distribution of the dataset per proof length is displayed in the right chart; the left chart shows, via bubble size, the number of test samples with a given number of \emph{semantics-preserving} axioms that may be applied as the first step of the proof, versus the proof length needed.
\begin{figure}[h!tb]
\vspace{-.3cm}
\includegraphics[width=7cm]{./images/Datadist-fig1.png}
\includegraphics[width=7cm]{./images/Datadist-fig2.png}
\vspace{-.6cm}
\caption{Distribution of axiom possibilities and proof complexity for test datasets.}
\label{fig:Datadist}
\vspace{-.5cm}
\end{figure}
There is a large number of possible proofs in our system, as detailed in Appendix~\ref{subsec:complexityofprovingequiv}. For example, for proofs of length 5, about 340,000 proofs made only of legal applications of axioms can be performed on the average sample in our dataset. Since many programs have multiple possible proofs, about 10,000 different programs can be produced, only one of which is the target to prove; i.e., randomly drawing a valid 5-axiom proof on a program known to be 5 axiom steps from the target has roughly a 1 in 10,000 chance of being a correct proof of equivalence between the two programs.
% Source: https://arxiv.org/abs/2106.02452 --- ``Proving Equivalence Between Complex Expressions Using Graph-to-Sequence Neural Models'' (cs.PL, cs.LG).
% Source: https://arxiv.org/abs/1801.10607
\title{Hypercube Packings and Coverings with Higher Dimensional Rooks}
\begin{abstract}
We introduce a generalization of classical $q$-ary codes by allowing points to cover other points that are at Hamming distance $1$ or $2$ in a freely chosen subset of all directions. More specifically, we generalize the notions of $1$-covering, $1$-packing, and $2$-packing in the case of $q$-ary codes. In the covering case, we establish the analog of the sphere-packing bound, and in the packing case, we establish an analog of the Singleton bound. Given these analogs, in the covering case we establish that the sphere-packing bound is asymptotically never tight except in trivial cases. This is in essence an analog of a seminal result of Rodemich regarding $q$-ary codes. In the packing case we establish, for the $1$-packing and $2$-packing cases, that the analog of the Singleton bound is tight in several possible cases, and conjecture that these bounds are optimal in general.
\end{abstract}
\section{Introduction}
Consider a set of $n$ football matches, each of which ends in either a win, draw, or loss. How many bets are necessary for an individual to guarantee that they predict the outcomes of at least $n-1$ of the games correctly? What about getting at least $n-k$ outcomes correct?
The above problem is the classical Football Pool Problem, which has been extremely well studied for small specific values of $n$, as well as in the generalization allowing for ``more'' possible outcomes \cite{blokhuis1984more,fernandes1983football,haas2007lower,haas2009lower,hamalainen1991upper, kalbfleisch1969combinatorial,koschnick1993new, rodemich1970coverings,singleton1964maximum}.
In particular, consider the hypercube $H_{n,k}$, the $k$-dimensional hypercube with side length $n$. Place $H_{n,k}$ on the lattice $\{0,\ldots,n-1\}^k$ and define the distance between two points in $H_{n,k}$ to be the Hamming distance between their coordinate representations; in other words, the number of places in which the coordinate representations of the points differ. An $R$-\emph{covering} is a set of points $S$ such that every point in the hypercube is within distance $R$ of a point in $S$ \cite{van1991bounds}. In the literature, the minimum possible size of an $R$-covering has been the primary subject of interest, especially when $R=1$. Similarly, a set $T$ is an $R'$-\emph{packing} if no two of its points are within distance $R'$ of each other; in this case the primary object of study is the largest possible $R'$-packing \cite{van1991bounds}. For the remainder of this paper, we focus on generalizations of the well-studied cases $R=1$ and $R'=1,2$.
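These notions are easy to check by brute force on small hypercubes. The sketch below (illustrative code with hypothetical helper names, not from the paper) verifies whether a point set is an $R$-covering or an $R'$-packing of $H_{n,k}$:

```python
from itertools import combinations, product

def hamming(p, q):
    """Number of coordinates in which p and q differ."""
    return sum(a != b for a, b in zip(p, q))

def is_covering(S, n, k, R=1):
    """Every point of {0,...,n-1}^k lies within distance R of some s in S."""
    return all(any(hamming(p, s) <= R for s in S)
               for p in product(range(n), repeat=k))

def is_packing(T, Rp=1):
    """No two points of T are within distance Rp of each other."""
    return all(hamming(p, q) > Rp for p, q in combinations(T, 2))

diagonal = [(0, 0), (1, 1), (2, 2)]
print(is_covering(diagonal, 3, 2))  # True: 3 rooks cover the 3x3 board
print(is_packing(diagonal, 1))      # True: no two attack each other
```

For instance, the diagonal of the $3\times 3$ board is simultaneously a $1$-covering and a $1$-packing.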
Given the extensive research in the case where points can cover in all directions parallel to the axes, we instead consider the generalization where each point can cover in only a subset of these directions. In particular, define an $\ell$-\emph{rook} to be a rook which can cover in $\ell$ dimensions. More precisely, an $\ell$-rook is a point in $\mathbb Z^k$ together with a selection of $\ell$ of its $k$ coordinates, and this point covers exactly the points which differ from it in one of the $\ell$ chosen coordinates. For example, a $2$-rook in two-dimensional space is a regular planar rook. Given this close relation with the chess piece, we use the terms ``attack'' and ``cover'' interchangeably. With this notion, we can now define the primary objects of study of this paper.
\begin{defn}
Let $n$, $k$, $\ell$ be positive integers with $k\ge \ell$. Define $a_{n,k,\ell}$ to be the minimum number of $\ell$-rooks that can cover $H_{n,k}$. Similarly, define $b_{n,k,\ell}$ to be the maximum number of $\ell$-rooks that can be placed in $H_{n,k}$ with no rook attacking another. Finally, define $c_{n,k,\ell}$ to be the maximum number of $\ell$-rooks that can be placed in $H_{n,k}$ so that no two rooks attack the same point. In all of these cases we do not allow multiple rooks at a single point.
\end{defn}
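For small parameters, covering claims of the kind captured by $a_{n,k,\ell}$ can be checked directly. In this sketch (hypothetical encoding, for illustration only), an $\ell$-rook is a position plus its chosen set of $\ell$ coordinate directions:

```python
from itertools import product

def rook_covers(rook, point):
    """An l-rook (pos, dirs) occupies pos and attacks every point differing
    from pos in exactly one of its chosen coordinates (dirs)."""
    pos, dirs = rook
    diff = [i for i in range(len(pos)) if pos[i] != point[i]]
    return diff == [] or (len(diff) == 1 and diff[0] in dirs)

def covers_hypercube(rooks, n, k):
    """True iff every point of H_{n,k} is occupied or attacked by some rook."""
    return all(any(rook_covers(r, p) for r in rooks)
               for p in product(range(n), repeat=k))

# In H_{3,2}, three 2-rooks on the diagonal cover the whole board:
diagonal = [((0, 0), {0, 1}), ((1, 1), {0, 1}), ((2, 2), {0, 1})]
print(covers_hypercube(diagonal, 3, 2))  # True
```

Exhaustive search over such placements gives the small optimal constructions, like those depicted below for $(n,k,\ell)=(3,3,2)$.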
Below are three figures which demonstrate the optimal constructions for $a_{n, k, \ell}, b_{n, k, \ell}$, and $c_{n, k, \ell}$ in the case $(n, k, \ell)=(3, 3, 2)$.
\begin{figure}[H]
\centering
\scalebox{0.4}{\begin{tikzpicture}
\tikzset{>=latex}
\def \dx{4};
\def \dy{4};
\def \dz{4};
\def \nbx{3};
\def \nby{3};
\def \nbz{3};
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz} {
\node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {};
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \z in {1,...,\nbz}{
\foreach \y in {2,...,\nby}{
\draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz}{
\foreach \x in {2,...,\nbx}{
\draw[-, color = black, line width = 1pt](\x*\dx - \dx,\y*\dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby}{
\foreach \z in {2,...,\nbz}{
\draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\node at (\dx, \dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 3*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 2*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,\dx)--(1.3*\dx, \dx, \dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,\dx)--(\dx, \dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(1.3*\dx, 2*\dx, \dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(1.3*\dx, 3*\dx, \dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2.3*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2.3*\dx, 2*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2.3*\dx, 3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(1.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, \dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(1.7*\dx, 2*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(1.7*\dx, 3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,3*\dx)--(3*\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,3*\dx)--(2.7*\dx, \dx, 3*\dx);
\end{tikzpicture}
\hspace{20mm}
\begin{tikzpicture}
\tikzset{>=latex}
\def \dx{4};
\def \dy{4};
\def \dz{4};
\def \nbx{3};
\def \nby{3};
\def \nbz{3};
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz} {
\node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {};
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \z in {1,...,\nbz}{
\foreach \y in {2,...,\nby}{
\draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz}{
\foreach \x in {2,...,\nbx}{
\draw[-, color = black, line width = 1pt](\x*\dx - \dx,\y*\dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby}{
\foreach \z in {2,...,\nbz}{
\draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\node at (3*\dx, 2*\dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 3*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, \dx, \dx)[circle, fill=red, color=red]{};
\node at (\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, 2*\dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 3*\dx, 3*\dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,3*\dx)--(2*\dx, \dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,3*\dx)--(2*\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,\dx)--(2.7*\dx, \dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,\dx)--(3*\dx, 1.3*\dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,2*\dx)--(2.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,\dx,2*\dx)--(3*\dx, 1.3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,2*\dx,\dx)--(1.3*\dx, 2*\dx,\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2.3*\dx, 2*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(2*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,2*\dx,2*\dx)--(1.7*\dx, 2*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,3*\dx)--(3*\dx, 2*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,3*\dx)--(2.7*\dx, 2*\dx,3*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (\dx,3*\dx,\dx)--(1.3*\dx, 3*\dx,\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2.3*\dx, 3*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(2*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,3*\dx,2*\dx)--(1.7*\dx, 3*\dx,2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,3*\dx)--(3*\dx, 3*\dx, 2.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,3*\dx)--(2.7*\dx, 3*\dx,3*\dx);
\end{tikzpicture}
\hspace{20mm}
\begin{tikzpicture}
\tikzset{>=latex}
\def \dx{4};
\def \dy{4};
\def \dz{4};
\def \nbx{3};
\def \nby{3};
\def \nbz{3};
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz} {
\node at (\x*\dx,\y*\dy,\z*\dz) [circle, fill=black] {};
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \z in {1,...,\nbz}{
\foreach \y in {2,...,\nby}{
\draw [-, color = black, line width = 1pt](\x*\dx,\y*\dy - \dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \y in {1,...,\nby} {
\foreach \z in {1,...,\nbz}{
\foreach \x in {2,...,\nbx}{
\draw[-, color = black, line width = 1pt](\x*\dx - \dx,\y*\dy,\z*\dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\foreach \x in {1,...,\nbx} {
\foreach \y in {1,...,\nby}{
\foreach \z in {2,...,\nbz}{
\draw[-, color = black, line width = 1pt](\x*\dx,\y*\dy,\z*\dz - \dz) -- (\x*\dx,\y*\dy,\z*\dz);
}
}
}
\node at (\dx, \dx, 3*\dx)[circle, fill=red, color=red]{};
\node at (2*\dx, \dx, 2*\dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 2*\dx, \dx)[circle, fill=red, color=red]{};
\node at (3*\dx, 3*\dx, \dx)[circle, fill=red, color=red]{};
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(\dx, 1.3*\dx, 3*\dx);
\draw [->, color = red, line width = \dx] (\dx,\dx,3*\dx)--(1.3*\dx, \dx, 3*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2*\dx, 1.3*\dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(1.7*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (2*\dx,\dx,2*\dx)--(2.3*\dx, \dx, 2*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,\dx)--(3*\dx, 2*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,2*\dx,\dx)--(2.7*\dx, 2*\dx, \dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,\dx)--(3*\dx, 3*\dx, 1.5*\dx);
\draw [->, color = red, line width = \dx] (3*\dx,3*\dx,\dx)--(2.7*\dx, 3*\dx, \dx);
\end{tikzpicture}}
\caption{ $a_{3, 3, 2}=7, b_{3, 3, 2}=10, c_{3, 3, 2}=4$}
\end{figure}
Previous research studies the case $k=\ell$. In particular, $a_{n,k,k}$ corresponds to a $1$-covering, while $b_{n,k,k}$ and $c_{n,k,k}$ correspond to a $1$-packing and a $2$-packing, respectively. Furthermore, $c_{q, k, k}$ corresponds to generalized $q$-ary Hamming-distance-$3$ subsets of $H_{q, k}$, which are useful for error-correcting codes. The most classical bound in the case of coverings is the sphere-packing bound. We give the analogue in this setting; our proof is nearly identical to the classical one. This determines $a_{n,k,\ell}$ to within a constant factor depending on $\ell$.
\begin{thm}\label{SpherePacking}
We have
\[
\frac{n^k}{\ell(n-1)+1}\le a_{n,k,\ell}\le n^{k-1}.
\]
\end{thm}
\begin{proof}
Suppose for the sake of contradiction that there exists a covering $S$ of $H_{n,k}$ with $\ell$-rooks and $|S|<\frac{n^k}{\ell(n-1)+1}$. Since each rook covers at most $\ell(n-1)+1$ points, it follows that $S$ covers at most $ |S|(\ell(n-1)+1)<n^k$ points, which is a contradiction. To prove the upper bound, let $S$ be the set of all points with first coordinate $0$. Allow each point of $S$ to attack in the direction of the first coordinate, and choose the other $\ell-1$ directions in which it may attack arbitrarily; these rooks collectively cover the cube. Note that we do not utilize the remaining $\ell-1$ attack directions for the upper bound.
\end{proof}
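As a quick computational sanity check (our own code, not part of the paper), the upper-bound construction and the sphere-packing count can be verified by brute force for small parameters; the function names `covered` and `sphere_packing_construction` are ours.

```python
from itertools import product

def covered(rooks, n, k):
    """Set of points of H_{n,k} attacked by the given rooks.

    Each rook is (pos, dirs): it occupies pos and attacks the full
    axis-parallel line through pos in every direction in dirs."""
    hit = set()
    for pos, dirs in rooks:
        hit.add(pos)
        for d in dirs:
            for v in range(n):
                hit.add(pos[:d] + (v,) + pos[d + 1:])
    return hit

def sphere_packing_construction(n, k, l):
    """All points with first coordinate 0, each attacking along the
    first coordinate plus l-1 further (arbitrarily chosen) directions."""
    dirs = tuple(range(l))
    return [((0,) + rest, dirs) for rest in product(range(n), repeat=k - 1)]

n, k, l = 4, 3, 2
rooks = sphere_packing_construction(n, k, l)
assert len(rooks) == n ** (k - 1)
assert covered(rooks, n, k) == set(product(range(n), repeat=k))
# sphere-packing lower bound: each rook covers at most l(n-1)+1 points
assert n ** k <= len(rooks) * (l * (n - 1) + 1)
```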
Note that the above theorem holds for $\ell=1$ and implies that $a_{n,k,1}=n^{k-1}$. Given the triviality of this case, we consider $\ell\ge 2$ in the remainder of the paper. The analogous upper bounds for $b_{n,k,k}$ and $c_{n,k,k}$ come from the classical Singleton bound \cite{singleton1964maximum}. The proof presented in the classical case can be adapted to this situation as well; however, we rely on a more geometric argument.
\begin{thm}\label{Singleton}
For all positive integers $n$, $k$, and $\ell$ with $k\ge \ell$, we have
\[
b_{n,k,\ell}\le \frac{k n^{k-1}}{\ell}.
\]
Furthermore, if $k\ge\ell\ge 2$ then
\[
c_{n,k,\ell}\le \frac{\binom{k}{2} n^{k-2}}{\binom{\ell}{2}}.
\]
\end{thm}
\begin{proof}
For $b_{n,k,\ell}$, consider all lines parallel to the edges of $H_{n,k}$ containing $n$ points of $H_{n,k}$. Note that there are $kn^{k-1}$ such lines, obtained by choosing a direction and letting the remaining coordinates vary over all possibilities within the cube. Furthermore, no two $\ell$-rooks can cover the same line. Since each $\ell$-rook covers $\ell$ lines, it follows that $b_{n,k,\ell}\le \frac{k n^{k-1}}{\ell}.$ Similarly, for $c_{n,k,\ell}$ consider all planes through $H_{n,k}$ parallel to one of its two-dimensional faces. Note there are $\binom{k}{2}n^{k-2}$ such planes and each $\ell$-rook covers $\binom{\ell}{2}$ of them. If two rooks cover the same plane, then they intersect, and it follows that
$
c_{n,k,\ell}\le \frac{\binom{k}{2} n^{k-2}}{\binom{\ell}{2}}
$
for $\ell\ge 2$. (If $\ell=1$, the $\ell$-rook does not determine a plane, so the proof does not follow.)
\end{proof}
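The line count used in the first inequality can be checked directly for small parameters (a verification sketch of ours, not part of the paper; `axis_lines` is an illustrative name).

```python
from itertools import product

def axis_lines(n, k):
    """All axis-parallel lines of H_{n,k}, each as a frozenset of n points."""
    lines = set()
    for d in range(k):
        for rest in product(range(n), repeat=k - 1):
            lines.add(frozenset(rest[:d] + (v,) + rest[d:] for v in range(n)))
    return lines

# the counting step of the proof: k*n^(k-1) lines in total, and an
# l-rook covers exactly l of them (one per chosen attack direction)
n, k = 3, 4
assert len(axis_lines(n, k)) == k * n ** (k - 1)
```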
Note that $c_{n,k,1}\le n^{k-1}$ as each $1$-rook covers $n$ points and the points these rooks cover are distinct. This can be achieved by putting $1$-rooks on all points with the first coordinate $0$ and having all rooks point in the direction of the first coordinate. Given this difference in behavior between $\ell\ge 2$ and $\ell=1$ for $c_{n,k,\ell}$, we assume that $\ell\ge 2$ for the remainder of the paper in this case as well.
In the remainder of the paper, we focus on the asymptotic growth rates of $a_{n,k,\ell}$, $b_{n,k,\ell}$, and $c_{n,k,\ell}$ when $k$ and $\ell$ are fixed and $n$ increases.
\begin{notation}
Let $a_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}},$
$b_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}},$
and $c_{k,\ell}=\displaystyle\lim_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}.$
\end{notation}
The remainder of the paper is organized as follows. Section 2 establishes the existence of these limits for all $k$ and $\ell$ (with $\ell\ge 2$ for $c_{k,\ell}$). Section 3 focuses on covering bounds and demonstrates that for $\ell\ge 2$ the sphere-packing lower bound of Theorem \ref{SpherePacking} is never asymptotically tight. Furthermore, Section 3 proves that $a_{k,\ell}\to\frac{1}{\ell}$ as $k\to\infty$. Section 4 focuses on packing bounds and demonstrates that $b_{k,\ell}$ and $c_{k,\ell}$ achieve the bounds in Theorem \ref{Singleton} in several cases. Finally, Section 5 presents a series of open problems regarding $a_{k,\ell}, b_{k,\ell},$ and $c_{k,\ell}$.
\section{Existence Results}
The general idea for our proofs in this section is to demonstrate that $a_{nm,k,\ell}\le m^{k-1} a_{n,k,\ell}$ for all integers $m$ and then show that adjacent terms are sufficiently close. (The first inequality is reversed for $b_{n,k,\ell}$ and a similar result holds for $c_{n,k,\ell}$.) For $a_{n,k,\ell}$ and $b_{n,k,\ell}$, the first inequality is demonstrated using a construction of Blokhuis and Lam \cite{blokhuis1984more} whereas for $c_{n,k,\ell}$ we rely on a different construction.
\begin{thm}\label{aexists}
For positive integers $k\ge \ell$, the limits \[a_{k,\ell}=\lim_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}},\]
\[b_{k,\ell}=\lim_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}\] exist.
\end{thm}
\begin{proof}
We first consider $a_{k,\ell}$. For $\ell=1$ we have $a_{n,k,1}=n^{k-1}$ and the result is trivial, so it suffices to assume that $\ell\ge 2$. Using Theorem \ref{SpherePacking}, it follows that
\[\frac{1}{\ell}\le\liminf_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le\limsup_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le 1.\] Now suppose that $L=\liminf_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}$. Then for every $\epsilon>0$, there exists an integer $m$ such that $\frac{a_{m,k,\ell}}{m^{k-1}}\le L+\frac{\epsilon}{2}$. Now consider the points $(x_1,\ldots,x_k)$ in $\{0,1,\ldots,n-1\}^k$ such that \[x_1+\cdots+x_k\equiv 0\mod n.\] (This is the construction of \cite{blokhuis1984more}.) Note that if a $k$-rook is placed at every point of this construction, then every axis-parallel line contains exactly one rook; in particular all points are covered, and every point of an outer face of the hypercube has an axis ``protruding'' out of it. Therefore we can blow up every point of $H_{m,k}$ to a copy of $H_{n,k}$ to create an $H_{mn,k}$, mark the copies of $H_{n,k}$ in $H_{mn,k}$ that correspond to rooks in an optimal construction for $a_{m,k,\ell}$, and place $\ell$-rooks within these copies at the points of the construction above. Then choose for each of these rooks the $\ell$ axes corresponding to the orientation of the $\ell$-rook in the original construction for $a_{m,k,\ell}$. This gives a covering of $H_{mn,k}$, so it follows that $a_{nm,k,\ell}\le n^{k-1} a_{m,k,\ell}$.
Now consider $a_{n+1,k,\ell}$ and $a_{n,k,\ell}$. Writing $H_{n,k}=\{0,\ldots,n-1\}^{k}$ and $H_{n+1,k}=\{0,\ldots,n\}^{k}$, place the construction for $a_{n,k,\ell}$ in $\{1,\ldots,n\}^{k}$. To cover the rest of the cube, place $\ell$-rooks at every point with at least two coordinates equal to $0$, choosing their directions arbitrarily. The remaining uncovered points have exactly one coordinate equal to $0$, say of the form $(a_1,\ldots,a_{i-1},0,a_{i+1},\ldots,a_k)$. Setting the $(i+1)^{st}$ coordinate to $0$ as well (indices taken $\mod k$) yields a point that already carries a rook; giving each such rook one of its $\ell$ axes in the direction of the $(i+1)^{st}$ coordinate covers these points. These rooks together cover $H_{n+1,k}$, and we have added at most $\sum_{i=2}^{k}\binom{k}{i}n^{k-i}\le \sum_{i=2}^{k}\binom{k}{i}n^{k-2}\le 2^{k}n^{k-2}$ additional rooks. Therefore it follows that $a_{n+1,k,\ell}\le 2^{k}n^{k-2}+a_{n,k,\ell}$ and thus \[\frac{a_{n+1,k,\ell}}{(n+1)^{k-1}}\le \frac{2^{k}n^{k-2}+a_{n,k,\ell}}{(n+1)^{k-1}}\]\[\le \frac{2^{k}n^{k-2}+a_{n,k,\ell}}{n^{k-1}}=\frac{2^k}{n}+\frac{a_{n,k,\ell}}{n^{k-1}}.\]
Taking $n$ sufficiently large it follows that \[\sum_{i=mn}^{mn+m-1}\frac{2^k}{i}<\frac{\epsilon}{2}.\] Thus for $i\ge mn$ it follows that $\frac{a_{i,k,\ell}}{i^{k-1}}\le L+\epsilon$. Therefore \[\limsup_{n\to\infty}\frac{a_{n,k,\ell}}{n^{k-1}}\le L+\epsilon\] and since $\epsilon$ was an arbitrary constant greater than $0$, the result follows.
For $b_{k,\ell}$, an identical procedure demonstrates that $\frac{b_{mn,k,\ell}}{(mn)^{k-1}}\ge \frac{b_{n,k,\ell}}{n^{k-1}}$ for all positive integers $m$ and $n$. Furthermore, the sequence $\frac{b_{n,k,\ell}}{n^{k-1}}$ is bounded due to Theorem \ref{Singleton}, and note that \[\frac{b_{n+1,k,\ell}}{(n+1)^{k-1}}\ge \frac{b_{n,k,\ell}}{(n+1)^{k-1}}= \frac{n^{k-1}}{(n+1)^{k-1}}\bigg(\frac{b_{n,k,\ell}}{n^{k-1}}\bigg).\] Thus taking $L=\limsup_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}$ and choosing $\epsilon>0$ arbitrarily, there exists an $m$ such that $\frac{b_{m,k,\ell}}{m^{k-1}}>L-\frac{\epsilon}{2}$. Now suppose that $n$ is large enough that $\big(\frac{mn}{mn+m-1}\big)^{k}>\frac{L-\epsilon}{L-\frac{\epsilon}{2}}$. Then for all $i\ge mn$, $\frac{b_{i,k,\ell}}{i^{k-1}}>L-\epsilon$. Therefore,
\[\liminf_{n\to\infty}\frac{b_{n,k,\ell}}{n^{k-1}}\ge L-\epsilon.\] Since $\epsilon$ was arbitrary, the result follows.
\end{proof}
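The key property of the Blokhuis--Lam construction used in the blow-up step, namely that every axis-parallel line contains exactly one marked point, is easy to confirm by brute force (a verification sketch of ours; `blokhuis_lam` is an illustrative name).

```python
from itertools import product

def blokhuis_lam(n, k):
    """k-rooks at every point of {0,...,n-1}^k with coordinate sum 0 mod n."""
    return [p for p in product(range(n), repeat=k) if sum(p) % n == 0]

n, k = 4, 3
rooks = set(blokhuis_lam(n, k))
assert len(rooks) == n ** (k - 1)
# every axis-parallel line contains exactly one rook, so the construction
# covers the cube and has an attacking line protruding in every direction
for p in product(range(n), repeat=k):
    for d in range(k):
        on_line = sum((p[:d] + (v,) + p[d + 1:]) in rooks for v in range(n))
        assert on_line == 1
```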
For the existence of $c_{k,\ell}$, we follow a similar strategy except we rely on a different construction for the initial inequality that allows only for prime ``blowup" factors. This construction is closely related and motivated by the construction of general $q$-ary codes.
\begin{thm}\label{cExists}
For positive integers $k\ge \ell\ge 2$, the limit \[c_{k,\ell}=\lim_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}\] exists.
\end{thm}
\begin{proof}
Suppose $p$ is prime and $p\ge k$. Consider the set $S$ of points $(x_1,\ldots,x_k)$ in $H_{p,k}$ that satisfy $x_{k-1}\equiv x_1+\cdots+x_{k-2} \mod p$ and $x_k\equiv x_1+2x_2+3x_3+\cdots+(k-2)x_{k-2}\mod p$. We will show that no two points of $S$ are less than distance $3$ apart. Suppose for the sake of contradiction that there are two distinct points $A=(a_1,\ldots,a_k)$ and $B=(b_1,\ldots,b_k)$ in $S$ at distance at most $2$. If $a_t=b_t$ for all $t$ with $1\leq t\leq k-2$, then $A=B$. If $a_t=b_t$ for all $1\le t\le k-2$ except for a single index $i\in \{1,\ldots, k-2\}$ with $a_i\neq b_i$, then $a_{k-1}\neq b_{k-1}$ and $a_{k}\neq b_{k}$, so $A$ and $B$ are at distance $3$. Finally, we consider the case where $a_t=b_t$ for all $1\le t\le k-2$ except for two indices $i,j\in \{1,\ldots, k-2\}$ with $a_i\neq b_i$ and $a_j\neq b_j$. If both of the last two coordinates match, then $a_i+a_j\equiv b_i+b_j\mod p$ and $ia_i+ja_j\equiv ib_i+jb_j\mod p$. Subtracting $i$ times the first congruence from the second yields $(j-i)a_j\equiv (j-i)b_j\mod p$, so $a_j\equiv b_j\mod p$, which is impossible. Thus each pair of points in $S$ differs by at least distance $3$. Furthermore, the set $S$ has exactly $p^{k-2}$ points.
Now given a construction for $c_{n,k,\ell}$ in $H_{n,k}$, we can blow up each point to a copy of $H_{p,k}$ (for $p>k$ prime). Then place the construction above into each copy of $H_{p,k}$ corresponding to a marked point of the original construction. Orienting the set of points in each $H_{p,k}$ to match the orientation of the corresponding point in $H_{n,k}$, it follows that $\frac{c_{np,k,\ell}}{(np)^{k-2}}\ge \frac{c_{n,k,\ell}}{n^{k-2}}$ for all primes $p>k$.
Furthermore, note that \[\frac{c_{n+1,k,\ell}}{(n+1)^{k-2}}\ge \frac{c_{n,k,\ell}}{(n+1)^{k-2}}= \frac{n^{k-2}}{(n+1)^{k-2}}\bigg(\frac{c_{n,k,\ell}}{n^{k-2}}\bigg).\]
Now $\frac{c_{n,k,\ell}}{n^{k-2}}$ is bounded above due to Theorem \ref{Singleton} and bounded below as it is positive. Let $L=\limsup_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}$; then for every $\epsilon>0$ there is an $m$ such that $\frac{c_{m,k,\ell}}{m^{k-2}}>L-\frac{\epsilon}{2}$. Now order the primes $2=p_1<p_2<\cdots$. Since $\lim_{i\to\infty}\frac{p_{i+1}}{p_{i}}=1$, there exists $j$ with $p_j>k$ such that for $i\ge j$, $\frac{p_{i+1}}{p_i}<\big(\frac{L-\frac{\epsilon}{2}}{L-\epsilon}\big)^{\frac{1}{k-2}}$. Every integer $t>p_jm$ lies in some interval $[p_im,p_{i+1}m-1]$ with $i\ge j$, and then $\frac{c_{t,k,\ell}}{t^{k-2}}\ge\big(\frac{p_im}{t}\big)^{k-2}\frac{c_{p_im,k,\ell}}{(p_im)^{k-2}}>L-\epsilon$. Therefore $\liminf_{n\to\infty}\frac{c_{n,k,\ell}}{n^{k-2}}\ge L-\epsilon$, and since $\epsilon$ was arbitrary the result follows.
\end{proof}
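The distance-$3$ property of the checksum construction can be confirmed exhaustively for small prime $p$ (our own verification sketch; `code_points` and `hamming` are illustrative names).

```python
from itertools import product

def code_points(p, k):
    """Points of H_{p,k} with x_{k-1} = sum(x_i) and x_k = sum(i*x_i) mod p,
    the weights 1,...,k-2 running over the first k-2 coordinates."""
    pts = []
    for free in product(range(p), repeat=k - 2):
        chk1 = sum(free) % p
        chk2 = sum((i + 1) * x for i, x in enumerate(free)) % p
        pts.append(free + (chk1, chk2))
    return pts

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

p, k = 5, 4
S = code_points(p, k)
assert len(S) == p ** (k - 2)
# minimum pairwise Hamming distance is exactly 3
assert min(hamming(a, b) for i, a in enumerate(S) for b in S[i + 1:]) == 3
```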
\section{Bounds for Covering}
Given the initial bounds from Theorem \ref{SpherePacking}, it follows that $\frac{1}{\ell}\le a_{k,\ell}\le 1$. However, we demonstrate that in general $a_{k,\ell}\neq \frac{1}{\ell}$, except in the trivial case $a_{k,1}=1$. To do this it is necessary to ``amortize'' a result of Rodemich \cite{rodemich1970coverings}, which is equivalent to $a_{n,k,k}\ge \frac{n^{k-1}}{k-1}$. The original proof given by Rodemich can be adapted to this situation, and we reproduce the argument below for the reader's convenience.
\begin{thm}\label{KeyIdea}
Suppose that $N\le n^{k-1}$. Then for sufficiently large $n$, any $N$ $k$-rooks on $H_{n, k}$ cover at most $kNn-\frac{(k-1)N^2}{n^{k-2}}$ points.
\end{thm}
\begin{proof}
The bound is clear when $k=1$. For $k=2$, note that $N$ $2$-rooks cover at most $n^2-(n-N)^2=2Nn-N^2$ points, since at least $n-N$ rows and $n-N$ columns are uncovered. Therefore it suffices to consider $k\ge 3$. Furthermore, when $N\in [\frac{n^{k-1}}{k-1}, n^{k-1}]$, we have $kNn-\frac{(k-1)N^2}{n^{k-2}}\ge n^{k}$, so the bound holds in these cases. Hence, it suffices to consider $N\le \frac{n^{k-1}}{k-1}$.
Now consider any set $S$ of $k$-rooks with $|S|=N$. For any point $P\in H_{n,k}$ define $c_{j}(P)$ to be the number of times that the point $P$ is attacked in the $j^{th}$ direction. Furthermore define $q(P)$ to be the number of directions in which $P$ is attacked, and define
\[m(P)=\sum_{1\le j\le k}c_j(P)=q(P)+\sum_{c_j(P)>0}(c_j(P)-1).\] Furthermore define $e_{i,j}(P)$ to be $1$ if $P$ is covered in the $i$ and $j$ directions and $0$ otherwise. Then note that
\[\sum_{1\le i<j\le k}e_{i,j}(P)=\frac{q(P)(q(P)-1)}{2}\le \frac{k(q(P)-1)}{2}\]
for points $P$ that are attacked and therefore
\[q(P)\ge 1+\frac{2}{k}\sum_{1\le i<j\le k}{e_{i,j}(P)}.\] Finally define $n_j(P)=c_j(P)-1$ if $c_j(P)$ is positive and $0$ otherwise. Therefore
\begin{align*}m(P)&=q(P)+\sum_{1\le j\le k}n_j(P)
\\ &\ge 1+\sum_{1\le j\le k}n_j(P)+\frac{2}{k}\sum_{1\le i<j\le k}{e_{i,j}(P)}
\end{align*}
for points $P$ that are attacked and suppose that $S$ attacks the points $T\in H_{n,k}$. Summing over $P\in T$ yields
\begin{align*}kNn &\ge |T|+\sum_{1\le j\le k}\sum_{P\in T}n_j(P)+\frac{2}{k}\sum_{1\le i<j\le k}\sum_{P\in T}e_{i,j}(P)
\\ &=|T|+\sum_{1\le j\le k}n_j+\frac{2}{k}\sum_{1\le i<j\le k}e_{i,j}
\end{align*}
where we have defined
\[n_j=\sum_{P\in T}n_j(P)\] and
\[e_{i,j}=\sum_{P\in T}e_{i,j}(P).\] Now we arbitrarily order the $n^{k-2}$ planes in the $(i,j)$ direction. For the $r^{th}$ plane suppose there are $a_r$ rows in the $i^{th}$ direction containing a point of $S$, $b_r$ rows in the $j^{th}$ direction containing a point of $S$, and $d_r$ points of $S$ in the plane in total. Furthermore, for convenience define $\alpha_r=d_r-a_r$ and $\beta_r=d_r-b_r$. Then it follows that
\begin{align*}e_{i,j}&=\sum_{1\le r\le n^{k-2}}{a_rb_r}\\ &=\sum_{1\le r\le n^{k-2}}(d_r-\alpha_r)(d_r-\beta_r)
\\ &=\sum_{1\le r\le n^{k-2}}\bigg(d_r-\frac{\alpha_r+\beta_r}{2}\bigg)^2-\bigg(\frac{\alpha_r-\beta_r}{2}\bigg)^2\end{align*}
Using the trivial inequality that $|\alpha_r-\beta_r|\le n$ it follows that
\begin{align*}e_{i,j} &\ge\frac{1}{n^{k-2}}\bigg(\sum_{1\le r\le n^{k-2}}d_r-\frac{\alpha_r+\beta_r}{2}\bigg)^2-\frac{n}{2}\sum_{1\le r\le n^{k-2}}\frac{\alpha_r+\beta_r}{2}
\\ &=\frac{1}{n^{k-2}}\bigg(N-\frac{n_i+n_j}{2n}\bigg)^2-\frac{n_i+n_j}{4}.\end{align*}
Here we have used the fact that
\[n\sum_{1\le r\le n^{k-2}}(\alpha_r+\beta_r)=n_i+n_j\]
which follows from counting the number of points covered multiple times in the $i$th and $j$th directions.
Summing over all $i,j$ it follows that
\begin{align*}\sum_{1\le i<j\le k}e_{i,j}&\ge \frac{k(k-1)N^2}{2n^{k-2}}-\frac{(k-1)N}{n^{k-1}}\sum_{1\le j\le k}n_j-\frac{k-1}{4}\sum_{1\le j\le k}n_j+\frac{1}{4n^k}\sum_{1\le i<j\le k}(n_i+n_j)^2.\end{align*}
Applying this inequality it follows that
\begin{align*}kNn &\ge |T|+\sum_{1\le j\le k}n_j+\frac{2}{k}\sum_{1\le i<j\le k}e_{i,j}
\\ &\ge |T|+\bigg(1-\frac{2(k-1)N}{kn^{k-1}}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}+\frac{1}{2kn^k}\sum_{1\le i<j\le k}(n_i+n_j)^2
\\ &\ge |T|+\bigg(1-\frac{2(k-1)N}{kn^{k-1}}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}.\end{align*}
Using $N\le \frac{n^{k-1}}{k-1}$ it then follows that
\begin{align*}kNn &\ge |T|+\bigg(1-\frac{2}{k}-\frac{k-1}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}
\\ &\ge |T|+\bigg(\frac{k-3}{2k}\bigg)\sum_{1\le j\le k} n_j+\frac{(k-1)N^2}{n^{k-2}}
\\ &\ge |T|+\frac{(k-1)N^2}{n^{k-2}}\end{align*}
and therefore it follows that
\[|T|\le kNn-\frac{(k-1)N^2}{n^{k-2}}\] as desired.
\end{proof}
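For $k=2$ the bound of the theorem holds for every $n$ and can be checked by random experiment (our own sketch, not part of the paper; for $k\ge 3$ the theorem only claims the bound for sufficiently large $n$, so we test only the two-dimensional case).

```python
import random

def covered_count_2d(rooks, n):
    """Cells of an n x n grid attacked by full 2-rooks: a cell is covered
    iff its row or its column contains a rook."""
    rows = {r for r, c in rooks}
    cols = {c for r, c in rooks}
    return n * n - (n - len(rows)) * (n - len(cols))

random.seed(0)
n = 12
for _ in range(500):
    rooks = {(random.randrange(n), random.randrange(n))
             for _ in range(random.randint(1, n))}
    N = len(rooks)  # N <= n = n^{k-1} for k = 2
    # the theorem's bound with k = 2: covered <= 2Nn - N^2
    assert covered_count_2d(rooks, n) <= 2 * N * n - N * N
```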
Note that the previous bound cannot in general be improved, as $a_{k+1,k+1}=\frac{1}{k}$ when $k$ is a prime power due to the existence of perfect codes \cite{blokhuis1984more}. Using this amortized version of Rodemich's result, we now prove a better lower bound for $a_{k,\ell}$.
\begin{thm}
For every pair of positive integers $(\ell, k)$ with $\ell\le k$, we have
\[a_{k, \ell}\ge \frac{2}{\ell(1+\sqrt{1-\frac{4(\ell-1)}{\ell^2\binom{k}{\ell}}})}.\]
\end{thm}
\begin{proof}
Suppose we have a configuration of $N$ $\ell$-rooks that covers $H_{n, k}$. Since the $\ell=k$ case is established by Rodemich's result \cite{rodemich1970coverings}, it suffices to consider $k>\ell$. In this case $\binom{k}{\ell}>1$, so the right-hand side above is less than $\frac{1}{\ell-1}$. Therefore, it suffices to consider the case $N\le \frac{n^{k-1}}{\ell-1}$. We first prove the following lemma:
\begin{lemma}
Suppose that $a_1, \ldots, a_{n^{k-\ell}}$ are nonnegative reals that satisfy $\displaystyle\sum_{i=1}^{n^{k-\ell}}a_i=A\le \frac{n^{k-1}}{\ell-1}$. Then
\[\sum_{i, a_i\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_i-\frac{\ell-1}{n^{\ell-2}}a_i^2)+\sum_{i, a_i>\frac{n^{\ell-1}}{\ell-1}}n^\ell\le \ell nA-\frac{\ell-1}{n^{k-2}}A^2.\]
\end{lemma}
\begin{proof}
Consider the piecewise function $f(x)$ defined by
\[f(x)=\begin{cases}\ell n x-\frac{\ell-1}{n^{\ell-2}}x^2 & x\le \frac{n^{\ell-1}}{\ell-1};
\\ n^\ell & x\ge \frac{n^{\ell-1}}{\ell-1}.
\end{cases}\]
Then $f(x)$ is continuous and concave on $[0, \infty)$. It follows that for $A=\sum_{i=1}^{n^{k-\ell}}a_i$ fixed, the left-hand side, which equals $\sum_i f(a_i)$, is maximized when the $a_i$ are all equal to $\frac{A}{n^{k-\ell}}$. Since $\frac{A}{n^{k-\ell}}\le \frac{n^{\ell-1}}{\ell-1}$, it follows that the left-hand side is at most
\[n^{k-\ell}f(\frac{A}{n^{k-\ell}})=\ell n A-\frac{\ell-1}{n^{k-2}}A^2\]
as required.
\end{proof}
Now we proceed with the proof of the theorem. We consider the $\binom{k}{\ell}$ possible choices of direction for the $\ell$-rooks separately. Label these directions $1, \ldots, \binom{k}{\ell}$ arbitrarily. Each choice of direction corresponds to a choice of $\ell$ out of $k$ coordinates, so there are $n^{k-\ell}$ distinct $\ell$-dimensional subcubes for each direction, and these collectively partition the full $H_{n, k}$. Order these subcubes arbitrarily and let $a_{i, j}$ denote the number of $\ell$-rooks in the $j^{th}$ subcube of the $i^{th}$ direction that attack in that direction. Furthermore let $A_i=\sum_{j=1}^{n^{k-\ell}}a_{i, j}$. Since each rook attacks in exactly one direction, $\sum_{i=1}^{\binom{k}{\ell}}A_i=N$. Also, since $N\le \frac{n^{k-1}}{\ell-1}$, we have $A_i\le \frac{n^{k-1}}{\ell-1}$ for each $i$. Now invoking Theorem \ref{KeyIdea} (applied in dimension $\ell$), the total number of points covered is bounded above by
\[\sum_{i=1}^{\binom{k}{\ell}}(\sum_{j, a_{i, j}\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_{i, j}-\frac{\ell-1}{n^{\ell-2}}a_{i, j}^2)+\sum_{j, a_{i, j}>\frac{n^{\ell-1}}{\ell-1}}n^\ell).\]
It follows that
\begin{align*}
n^k&\le \sum_{i=1}^{\binom{k}{\ell}}(\sum_{j, a_{i, j}\le \frac{n^{\ell-1}}{\ell-1}}(\ell n a_{i, j}-\frac{\ell-1}{n^{\ell-2}}a_{i, j}^2)+\sum_{j, a_{i, j}>\frac{n^{\ell-1}}{\ell-1}}n^\ell)
\\ &\le \sum_{i=1}^{\binom{k}{\ell}}(\ell nA_i-\frac{\ell-1}{n^{k-2}}A_i^2)
\\ &= \ell nN-\frac{\ell-1}{n^{k-2}}\sum_{i=1}^{\binom{k}{\ell}}A_i^2
\\ &\le \ell nN-\frac{\ell-1}{\binom{k}{\ell}n^{k-2}}N^2
\end{align*}
where we have used the lemma above and then the Cauchy-Schwarz inequality. Rearranging this gives
\[(\ell-1)\Big(\frac{N}{n^{k-1}}\Big)^2-\binom{k}{\ell}\Big(\frac{\ell N}{n^{k-1}}-1\Big)\le 0.\]
It follows that for all $n$,
\[a_{n, k, \ell} \ge \frac{2n^{k-1}}{\ell(1+\sqrt{1-\frac{4(\ell-1)}{\ell^2\binom{k}{\ell}}})},\]
and the result follows.
\end{proof}
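The closed-form lower bound can be evaluated numerically to confirm that it strictly improves on the sphere-packing bound and is consistent with the restriction $N\le \frac{n^{k-1}}{\ell-1}$ (our own check; `improved_lower` is an illustrative name).

```python
from math import comb, sqrt

def improved_lower(k, l):
    """The theorem's lower bound on a_{k,l}, for k > l >= 2."""
    return 2 / (l * (1 + sqrt(1 - 4 * (l - 1) / (l * l * comb(k, l)))))

for k in range(3, 12):
    for l in range(2, k):
        lb = improved_lower(k, l)
        assert lb > 1 / l          # strictly beats the sphere-packing bound
        assert lb < 1 / (l - 1)    # consistent with N <= n^{k-1}/(l-1)
```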
\begin{corollary}
For $k\ge \ell\ge 2$, $a_{k,\ell}\neq \frac{1}{\ell}$. Therefore $a_{n,k,\ell}$ never asymptotically achieves the sphere-packing lower bound.
\end{corollary}
However, despite the fact that $a_{k,\ell}\neq \frac{1}{\ell}$ for $\ell\ge 2$, we can show that as $k$ gets large $a_{k,\ell}$ in fact approaches $\frac{1}{\ell}$. In particular the portion of forced ``overlap" of the attacking rooks goes to $0$. For convenience, define $f(k)$ to be the largest prime power less than or equal to $k$.
\begin{thm}\label{intLower}
For every pair of positive integers $(k, \ell)$, with $k\ge 2$ and $f(k)\ge \ell$,
\[a_{k, \ell}\le \frac{1}{f(k)-1}\left\lceil\frac{f(k)}{\ell}\right\rceil.\]
\end{thm}
\begin{proof}
Take an integer $n_1>\frac{f(k)}{\ell}$. We first construct a size-$n_1$, dimension-$k$ block from $\left\lceil\frac{f(k)}{\ell}\right\rceil n_1^{k-1}$ $\ell$-rooks. In particular, place an $\ell$-rook at every point satisfying $x_1+x_2+\cdots+x_k\equiv i\mod n_1$ for some $0\le i\le \lceil\frac{f(k)}{\ell}\rceil-1$, and let each rook whose coordinate sum is congruent to $i$ attack in the $(i\ell+1)^{st}$ through $((i+1)\ell)^{th}$ directions, indices taken $\mod k$. Note that this block has an attacking line in every possible axis.
Since perfect $q$-ary covering codes exist for prime powers $q$ (see \cite{van1991bounds} for example) and $f(k)\le k$, we have $a_{k, k}\le a_{f(k), f(k)}=\frac{1}{f(k)-1}$. Now using the size-$n_1$ scaled blocks in place of points in a construction for $a_{n_2,k,k}$, an $H_{n_1n_2, k}$ can be tiled with at most
\[(\left\lceil\frac{f(k)}{\ell}\right\rceil n_1^{k-1})(a_{n_2, k, k})\]
$\ell$-rooks, and the result follows from $\lim_{n\to\infty}\frac{a_{n, k, k}}{n^{k-1}}=a_{k, k}$.
\end{proof}
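The behavior of this upper bound, in particular its convergence to $\frac{1}{\ell}$ and its consistency with the earlier lower bound, can be checked numerically (our own sketch; `f`, `upper_bound`, and `lower_bound` are illustrative names).

```python
from math import comb, sqrt, ceil

def is_prime_power(q):
    """True iff q = p^a for a prime p and a >= 1."""
    if q < 2:
        return False
    for p in range(2, q + 1):
        if q % p == 0:
            while q % p == 0:
                q //= p
            return q == 1
    return False

def f(k):
    """Largest prime power <= k (assumes k >= 2)."""
    return max(q for q in range(2, k + 1) if is_prime_power(q))

def upper_bound(k, l):
    return ceil(f(k) / l) / (f(k) - 1)

def lower_bound(k, l):
    return 2 / (l * (1 + sqrt(1 - 4 * (l - 1) / (l * l * comb(k, l)))))

# the upper bound tends to 1/l as k grows and stays above the lower bound
for l in (2, 3):
    for k in range(4, 30):
        assert lower_bound(k, l) <= upper_bound(k, l) + 1e-12
    assert abs(upper_bound(200, l) - 1 / l) < 0.02
```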
\begin{corollary}
For every positive integer $\ell$, $\lim_{k\to\infty}a_{k, \ell}=\frac{1}{\ell}$.
\end{corollary}
We end the section on covering with the specific case of $a_{3,2}$, which demonstrates that the bounds in the previous two theorems are not tight in general.
\begin{thm} It holds that
\[a_{3, 2}\le \frac{1}{\sqrt{2}}.\]
\end{thm}
Note that this bound is less than $\frac{3}{4}$, the bound achieved by Theorem \ref{intLower}.
\begin{proof}
Let $(a, b)$ be any pair of positive integers that satisfies $2<\frac{a}{b}<\sqrt{2}+1$, so that $\frac{4ab}{2a-2b}\ge a+b$. Consider a construction on $H_{2a+2b, 3}$. For $0\le i\le 2a-1, 0\le j\le 2b-1$ we place a $2$-rook at $(i, j, \lfloor\frac{2bi+j}{2a-2b}\rfloor)$ that covers along the second and third coordinates and place a $2$-rook at $(2a+2b-j-1, 2a+2b-i-1, 2a+2b-1-\lfloor\frac{2bi+j}{2a-2b}\rfloor)$ which attacks along the first and third coordinates. We note that the points between these groups are distinct since the first coordinates between them never coincide.
Now we claim that the uncovered squares in each plane $z=h$ are contained in the union of $2b$ columns and $2b$ rows. Indeed, in each plane of this form, either $2a-2b$ rooks of the first type attack along the second coordinate, or $2a-2b$ rooks of the second type attack along the first coordinate. Since the corresponding rooks are in distinct rows, the remainder of the plane can be covered by at most $2b$ additional $2$-rooks, so this construction yields a covering of $H_{2a+2b, 3}$ with at most $8ab+2b(2a+2b)=4b^2+12ab$ $2$-rooks. This yields the upper bound $a_{3, 2}\le \frac{4b^2+12ab}{4(a+b)^2}=\frac{1+3t}{(1+t)^2}$ where $t=\frac{a}{b}$, as the proof of Theorem \ref{aexists} implies $\frac{a_{n,3,2}}{n^2}\ge a_{3,2}$. Taking $t=\frac{a}{b}$ to be an arbitrarily precise rational approximation of $\sqrt{2}+1$ from below, we obtain
\[a_{3, 2}\le \frac{4+3\sqrt{2}}{(2+\sqrt{2})^2}=\frac{1}{\sqrt{2}}\]
as required.
\end{proof}
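The closing algebraic simplification is easy to confirm numerically (our own check; `g` is an illustrative name for the density $\frac{1+3t}{(1+t)^2}$).

```python
from math import sqrt

def g(t):
    """Density of the construction on H_{2a+2b,3} with t = a/b."""
    return (1 + 3 * t) / (1 + t) ** 2

# g is decreasing for t > 1/3, and at t = 1 + sqrt(2) it equals 1/sqrt(2),
# so rational t approaching 1 + sqrt(2) from below give the stated bound
assert abs(g(1 + sqrt(2)) - 1 / sqrt(2)) < 1e-12
assert g(2.1) > g(2.2) > g(1 + sqrt(2))
```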
\begin{thm}
It holds that
\[\frac{9-3\sqrt{5}}{4}\le a_{3,2}.\]
\end{thm}
In order to prove this lower bound, we first introduce some algebraic lemmas:
\begin{lemma}
Suppose that $c_i, x_i$ are nonnegative integers with $c_i\in [0, n], x_i\in [0, n^2]$ for $1\le i\le n$ and set $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$. Suppose that $(n-c_i-x_i)(n-c_i)\le X$ for each $i$. Then:
\[C+X\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n\]
where $t=\frac{X}{n^2}$.
\end{lemma}
\begin{proof}
Suppose we have $x_i, c_i$ for which the minimum value of $C+X$ is achieved. We first claim that it suffices to prove the statement when $(n-c_i)(n-c_i-x_i)=X$ for each $i$. For fixed $x_i$, it suffices to consider the case where the $c_i$ are as small as possible. The allowable range for each $c_i$ is the intersection of the intervals $[0, n]$ and $\bigg[\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}, \frac{2n-x_i+\sqrt{x_i^2+4X}}{2}\bigg]$.
Hence it suffices to set $c_i$ equal to $\max\bigg\{0, \frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\bigg\}$. Now suppose that some $c_i$ is equal to $0$ with $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}<0$. We claim that $c_j>0$ for some $j$. Indeed, suppose otherwise. Then $n(n-x_i)\le X$ for each $i$, and summing over $i$ yields $n^3\le 2nX$, or $X\ge\frac{n^2}{2}$, which is false in this case. So $c_j>0$ for some $j$. But then we can simultaneously decrease $x_i$ and increase $x_j$ at the same rate until $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}=0$. At this point, $c_i=0$ still satisfies the required condition, but $c_j$ can be replaced with a $c_j'<c_j$ since $x_j$ increased.
So, it suffices to prove the statement in the case that $c_i=\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\ge 0$ for each $i$. Then
\begin{align*}
C+X&=\sum_{i=1}^n\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}+\sum_{i=1}^nx_i
\\ &=\sum_{i=1}^n (n+\frac{x_i-\sqrt{x_i^2+4X}}{2})
\end{align*}
We focus on minimizing this last expression. In addition to $x_i\ge 0$, the additional condition $c_i\ge 0$ implies that $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\ge 0$ for each $i$, so that $x_i\le n-\frac{X}{n}$ for each $i$.
Because the function $f(x)=\sqrt{x^2+c}$ is convex for $x\ge 0$ when $c\ge 0$, it follows that the expression above is minimized when all but at most one of the $x_i$ are equal to an endpoint of the interval $[0, n-\frac{X}{n}]$. Let $A=\lfloor\frac{X}{n-\frac{X}{n}}\rfloor$, so that at most $A+1$ of the $x_i$ are nonzero. Then:
\begin{align*}
\sum_{i=1}^n (n+\frac{x_i}{2}-\frac{\sqrt{x_i^2+4X}}{2})&\ge n^2+\frac{X}{2}-\frac{1}{2}(A+1)\sqrt{(n-\frac{X}{n})^2+4X}-(n-A)\sqrt{X}
\\ &\ge n^2+\frac{X}{2}-\frac{1}{2}(\frac{X}{n-\frac{X}{n}})(n+\frac{X}{n})-(n-\frac{X}{n-\frac{X}{n}})\sqrt{X}-2n
\\ &= n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n
\end{align*}
as required.
\end{proof}
\begin{lemma}
Suppose that $c_i, x_i$ are nonnegative integers with $c_i\in [0, n], x_i\in [0, n^2]$ for $1\le i\le n$. Let $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$, and suppose that $C\ge \frac{X}{2}$ and $(n-c_i-x_i)(n-c_i)\le X$ for each $i$. Further suppose that $\sum_{i=1}^n (X-(n-c_i-x_i)(n-c_i))\ge \frac{X^2}{2n-\frac{X}{n}}$. Then $C+X\ge (\frac{9-3\sqrt{5}}{4}) n^2-2n$.
\end{lemma}
\begin{proof}
We take two cases based on the size of $X$. Let $\alpha=\frac{9-3\sqrt{5}}{4}$.
Case 1: $X\ge \frac{2\alpha}{3}n^2$. Then $C+X\ge \frac{3X}{2}\ge \alpha n^2$ as required.
Case 2: $X\le \frac{2\alpha}{3}n^2$. Then consider the indices $i$ for which $c_i>\max\{0, \frac{2n-x_i-\sqrt{x_i^2+4X}}{2}\}$, and for each such index decrease $c_i$ until it equals this maximum. Then, repeat the procedure described in the proof of the previous lemma so that each $c_i$ is equal to $\frac{2n-x_i-\sqrt{x_i^2+4X}}{2}$. Let $C'$ denote the new sum of the $c_i$. Then, according to that lemma:
\[C'+X\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t})-2n\]
where $t=\frac{X}{n^2}$ as before. Now we claim $C\ge C'+\frac{n^2t^2}{2(2-t)}$. Indeed, note that:
\[\frac{\partial}{\partial c_i}\big((n-c_i)(n-c_i-x_i)\big)=2c_i+x_i-2n\ge -2n.\]
It follows that, in the process of decreasing $\sum_{i=1}^n (n-c_i)(n-x_i-c_i)$ by $\frac{X^2}{2n-\frac{X}{n}}$, the sum of the $c_i$ decreases by at least $\frac{X^2}{2n(2n-\frac{X}{n})}=\frac{n^2t^2}{2(2-t)}$ as required. Hence:
\begin{align*}
C+X&=(C'+X)+(C-C')
\\ &\ge n^2(\frac{t}{2}+1-\frac{t(1+t)}{2(1-t)}-(1-\frac{t}{1-t})\sqrt{t}+\frac{t^2}{2(2-t)})-2n
\\ &\ge \alpha n^2-2n
\end{align*}
where we used $t\le \frac{2\alpha}{3}$ and the fact that:
\[f(x)=\frac{x}{2}+1-\frac{x(1+x)}{2(1-x)}-(1-\frac{x}{1-x})\sqrt{x}+\frac{x^2}{2(2-x)}\]
is decreasing on $(0, 0.4)$ with $f(\frac{2\alpha}{3})=\alpha$.
\end{proof}
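The conclusion of the proof above rests on two numerical facts about $f$ and $\alpha=\frac{9-3\sqrt{5}}{4}$: that $f(\frac{2\alpha}{3})=\alpha$, and that $f$ is decreasing on the relevant interval. As an independent sanity check (not part of the proof), both can be confirmed with a short script; the sketch below samples $f$ on $(0, 0.399]$, which covers the range $t\le\frac{2\alpha}{3}\approx 0.382$ actually used.

```python
import math

# alpha = (9 - 3*sqrt(5))/4, the constant from the lemma
alpha = (9 - 3 * math.sqrt(5)) / 4

def f(x):
    # f(x) = x/2 + 1 - x(1+x)/(2(1-x)) - (1 - x/(1-x))sqrt(x) + x^2/(2(2-x))
    return (x / 2 + 1
            - x * (1 + x) / (2 * (1 - x))
            - (1 - x / (1 - x)) * math.sqrt(x)
            + x ** 2 / (2 * (2 - x)))

# f(2*alpha/3) = alpha (this identity is exact: 2*alpha/3 = (3-sqrt(5))/2)
assert abs(f(2 * alpha / 3) - alpha) < 1e-12

# f is strictly decreasing on the sampled range (0.001, 0.399]
samples = [f(0.001 + (0.399 - 0.001) * i / 2000) for i in range(2001)]
assert all(a > b for a, b in zip(samples, samples[1:]))
print("numerical checks pass")
```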
\begin{lemma}
Suppose that $0<X\le n^2$ squares are marked in an $n\times n$ grid, and in each marked square is written either the number of marked squares in the same row or the number of marked squares in the same column. Then the sum of the written numbers is at least $\frac{X^2}{2n-\frac{X}{n}}$.
\end{lemma}
\begin{proof}
We claim that the sum of the reciprocals of the written numbers is at most $2n-\frac{X}{n}$. Indeed, for a marked square $m_i$, let $c_i$ and $n_i$ denote the number of marked squares in the chosen and the non-chosen direction of $m_i$, respectively. Summing $\frac{1}{c_i}+\frac{1}{n_i}$ over all marked squares counts each nonempty row and each nonempty column exactly once, so this sum is at most $2n$; moreover $n_i\le n$ for every $i$. Then:
\begin{align*}
\sum_{i=1}^X\frac{1}{c_i}&= \sum_{i=1}^X\left(\frac{1}{c_i}+\frac{1}{n_i}\right)-\sum_{i=1}^X\frac{1}{n_i}
\\ &\le 2n-\sum_{i=1}^X\frac{1}{n_i}
\\ &\le 2n-\frac{X}{n}
\end{align*}
Then the result follows from the Cauchy-Schwarz inequality, since:
\[\Big(\sum_{i=1}^X\frac{1}{c_i}\Big)\Big(\sum_{i=1}^Xc_i\Big)\ge X^2.\]
\end{proof}
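The grid lemma above can also be checked exhaustively for small $n$, as a sanity check outside the proof. The sketch below tries every nonempty set of marked squares in a $2\times 2$ and $3\times 3$ grid and verifies the bound (with "$\ge$", since equality can occur, e.g. when all $n^2$ squares are marked); exact rational arithmetic avoids rounding issues.

```python
from fractions import Fraction
from itertools import product

def check(n):
    cells = list(product(range(n), repeat=2))
    # iterate over every nonempty set of marked squares
    for mask in range(1, 1 << (n * n)):
        marked = [cells[i] for i in range(n * n) if mask >> i & 1]
        X = len(marked)
        rows = [sum(1 for (r, c) in marked if r == i) for i in range(n)]
        cols = [sum(1 for (r, c) in marked if c == j) for j in range(n)]
        # each square writes its row count or its column count; the choices are
        # independent, so the minimum possible sum takes the smaller at each square
        total = sum(min(rows[r], cols[c]) for (r, c) in marked)
        bound = Fraction(X * X) / (2 * n - Fraction(X, n))
        assert total >= bound, (n, marked)

for n in (2, 3):
    check(n)
print("grid lemma verified for n = 2, 3")
```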
Now we proceed to the proof of the lower bound. Suppose that there is a configuration of $2$-rooks which covers an $H_{n, 3}$ situated at $1\le x, y, z\le n$ in coordinate space. We may orient this configuration so that at least $\frac{1}{3}$ of the rooks attack in the $x$ and $y$ directions. We call these rooks \textit{cross rooks}, and all other rooks \textit{axis rooks}. For each $i$, $1\le i\le n$, we denote by $x_i$ the number of axis rooks which lie in the plane $z=i$, and by $c_i$ the number of cross rooks which lie in this plane. Let $C=\sum_{i=1}^nc_i$ and $X=\sum_{i=1}^nx_i$, so that $C\ge \frac{X}{2}$. Note that we may assume $c_i\le n$, since $n$ cross rooks can already cover a plane. We claim that, in the $i$th plane, at most $n^2-(n-c_i)(n-c_i-x_i)$ points are covered by rooks in that plane. Indeed, suppose that $h_i$ of the $x_i$ axis rooks in this plane cover a row of $z=i$, and $v_i$ cover a column, so that $v_i+h_i=x_i$. Then at most $c_i+h_i$ rows and at most $c_i+v_i$ columns are covered by the rooks in plane $i$. Hence in total, at most:
\[n^2-(n-c_i-h_i)(n-c_i-v_i)=n^2-(n-c_i)(n-c_i-x_i)-h_iv_i\le n^2-(n-c_i)(n-c_i-x_i)\]
points in plane $i$ are covered by rooks in that plane, as required. It follows that the remaining points in plane $i$ must be covered by axis rooks from other planes, so that in particular $X\ge (n-c_i)(n-c_i-x_i)$ for every $i$. Furthermore, by Lemma 17 it follows that:
\[\sum_{i=1}^n (X-(n-c_i-x_i)(n-c_i))\ge \frac{X^2}{2n-\frac{X}{n}}\]
Therefore, the $x_i, c_i$ satisfy the conditions given in Lemma 16, so it follows that the total number of rooks used satisfies:
\[C+X\ge \alpha n^2-2n\]
Hence $a_{3, 2}\ge \alpha$ as required.
\end{proof}
\section{Bounds for Packing}
In this section we prove that $b_{k,\ell}=\frac{k}{\ell}$ and $c_{k,\ell}=\frac{\binom{k}{2}}{\binom{\ell}{2}}$ for certain special values of $k$ and $\ell$. We begin by demonstrating that this equality holds when $\ell$ divides $k$.
\begin{thm}\label{firstStep}
For positive integers $k, t$, we have $b_{kt, t}=k$.
\end{thm}
\begin{proof}
By Theorem \ref{Singleton}, it follows that $b_{kt,t}\le k$. Therefore, it suffices to demonstrate that $b_{kt,t}\ge k$. We prove this by demonstrating that $b_{n,kt,t}\ge kn^{k(t-1)}(n-1)^{k-1}$ through explicit construction.
Consider points of the form $(x_1,\ldots,x_t,x_{t+1},\ldots,x_{2t},\ldots, x_{kt})$ with $0\le x_i\le n-1$ for $1\le i\le kt$. Define the $L_j$ block of points as the set of points that satisfy
\[\sum_{i=0}^{t-1}x_{jt-i}\equiv 0\mod n\] and satisfy, for each $m\in \{1,\ldots, k\}$ with $m\neq j$,
\[\sum_{i=0}^{t-1}x_{mt-i}\not\equiv 0\mod n.\] For each point in the $L_j$ block, place a $t$-rook that attacks in the directions of the $((j-1)t+1)^{th}$ through $(jt)^{th}$ coordinates. Note that $|L_j|=n^{t-1}(n^{t-1}(n-1))^{k-1}=n^{k(t-1)}(n-1)^{k-1}$, and thus taking the union of these rooks for $1\le j\le k$ it follows that \[\left|\bigcup_{j=1}^{k}L_j\right|=kn^{k(t-1)}(n-1)^{k-1}.\]
Now we demonstrate that no rook attacks another in the above construction. Suppose for the sake of contradiction that $R_1$ attacks $R_2$, with $R_1\in L_i$ and $R_2\in L_j$. If $i\neq j$, then $R_1$ and $R_2$ differ in at least one coordinate among $x_{(i-1)t+1},\ldots,x_{it}$ and in at least one coordinate among $x_{(j-1)t+1},\ldots,x_{jt}$. Since attacking rooks differ in exactly one coordinate, such rooks $R_1$ and $R_2$ do not exist. Otherwise $R_1$ and $R_2$ both lie in $L_i$. If these points differ in the coordinates $x_{(i-1)t+1},\ldots,x_{it}$ then, since both of their block sums are $0\bmod n$, they differ in at least two positions and therefore cannot attack each other. Otherwise $R_1$ and $R_2$ agree on the coordinates $x_{(i-1)t+1},\ldots,x_{it}$; since $R_1$ attacks only in these coordinates, $R_1$ does not attack $R_2$, and the result follows.
\end{proof}
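The construction in the proof can be checked directly for small parameters. The sketch below (assuming, as in the proof, that a $t$-rook attacks exactly the points at Hamming distance $1$ from it whose differing coordinate lies among its $t$ chosen directions) builds the blocks $L_j$ for $k=t=2$, $n=3$, and confirms that the packing has the claimed size and is attack-free.

```python
from itertools import product

n, k, t = 3, 2, 2  # small instance of the b_{kt,t} construction

def block_sum(p, j):
    # sum of the coordinates in the j-th block (1-indexed), i.e. x_{(j-1)t+1..jt}
    return sum(p[(j - 1) * t:j * t]) % n

rooks = []  # pairs (point, set of attack directions), coordinates 0-indexed
for j in range(1, k + 1):
    for p in product(range(n), repeat=k * t):
        if block_sum(p, j) == 0 and all(block_sum(p, m) != 0
                                        for m in range(1, k + 1) if m != j):
            rooks.append((p, set(range((j - 1) * t, j * t))))

# claimed size: k * n^{k(t-1)} * (n-1)^{k-1}
assert len(rooks) == k * n ** (k * (t - 1)) * (n - 1) ** (k - 1)

def attacks(r1, r2):
    diff = [i for i in range(k * t) if r1[0][i] != r2[0][i]]
    return len(diff) == 1 and diff[0] in r1[1]

assert not any(attacks(r1, r2) for r1 in rooks for r2 in rooks if r1 is not r2)
print("valid packing of size", len(rooks))
```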
We can now establish a crude lower bound for $b_{k,\ell}$.
\begin{corollary}
For positive integers $k, \ell$, $b_{k, \ell}\ge \lfloor\frac{k}{\ell}\rfloor$.
\end{corollary}
\begin{proof}
Note that $nb_{n,k,\ell}\le b_{n,k+1,\ell}$, as we can stack $n$ copies of an optimal construction for $b_{n,k,\ell}$ in the $(k+1)^{st}$ dimension. Dividing by $n^k$ and letting $n\to\infty$, it follows that $b_{k,\ell}\le b_{k+1,\ell}$, and hence that $b_{k,\ell}\ge b_{\ell\lfloor\frac{k}{\ell}\rfloor,\ell}=\lfloor\frac{k}{\ell}\rfloor$, where we have used Theorem \ref{firstStep} in the final step.
\end{proof}
The last bound we establish for $b_{k,\ell}$ is that in fact $b_{k,2}=\frac{k}{2}$ for all integers $k\ge 2$.
\begin{thm}
For integers $k\ge 2$, we have $b_{k,2}=\frac{k}{2}$.
\end{thm}
\begin{proof}
We will provide an inductive construction based on constructions in Section 2. In particular, we show the following.
Claim: For every integer $k\ge 2$, there exists a nonnegative constant $c_k$ such that $b_{n, k, 2}\ge \frac{k}{2}n^{k-1}-c_kn^{k-2}$ for every $n$ divisible by $(k-1)!^2$.
For the base case $k=2$, we may take $c_2=0$ and place $2$-rooks along the main diagonal of $H_{n, 2}$, which is an $n\times n$ square. Suppose the claim holds for all $j\le k-1$, and that $(k-1)!^2$ divides $n$. Then there exists some set $S$ of at least $\frac{k-1}{2}n^{k-2}-c_{k-1}n^{k-3}$ pairwise non-attacking $2$-rooks in $H_{n, k-1}$.
We now describe a way to pack \[(k-1)^{k-1}\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\] labeled $1$-rooks into an $H_{n, k-1}$ so that no two $1$-rooks of the same label attack each other. For this, we first group some of the $n^{k-1}$ points in the hypercube into \[\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\] buckets of size $(k-1)^{k-1}$. We do this by sending the point $(x_1, \ldots, x_{k-1})$ to the bucket labeled $(\lfloor\frac{x_1}{k-1}\rfloor, \ldots, \lfloor \frac{x_{k-1}}{k-1}\rfloor)$ if and only if the $\lfloor\frac{x_i}{k-1}\rfloor$ are distinct.
Notice that the points in each bucket form an $H_{k-1, k-1}$. Within each bucket, we partition the points into $k-1$ parts of the form $\sum_{i=1}^{k-1}x_i\equiv j\mod k-1$, each of which has $(k-1)^{k-2}$ points. We then label each point in the $j$th part of such a partition with the label $\lfloor \frac{x_j}{k-1}\rfloor$.
When this is done, there are \[\frac{(k-1)^k(\frac{n}{k-1})!}{n(\frac{n}{k-1}-k+1)!}\] points of label $i$ for each $i\in \{1, \ldots, \frac{n}{k-1}\}$. All points of label $i$ have $\lfloor \frac{x_j}{k-1}\rfloor=i$ for exactly one index $j$. Therefore, assigning all points of label $i$ to attack in the direction of this coordinate $j$ yields a packing of $(k-1)^{k-1}\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}$ labeled $1$-rooks into an $H_{n, k-1}$ such that no two $1$-rooks of the same label attack each other, as required.
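As a sanity check on the bucket-and-label scheme, the sketch below builds it for the small case $k=3$, $n=6$ (with coordinates and labels $0$-indexed, unlike the $1$-indexed text) and verifies the per-label count $\frac{(k-1)^k(\frac{n}{k-1})!}{n(\frac{n}{k-1}-k+1)!}$ together with the same-label non-attacking property, assuming a labeled $1$-rook attacks another exactly when they differ in one coordinate and that coordinate is the attacker's chosen direction.

```python
from itertools import product
from math import factorial

k, n = 3, 6  # small instance; (k-1) divides n

rooks = []  # triples (point, label, attack direction), all 0-indexed
for p in product(range(n), repeat=k - 1):
    v = [x // (k - 1) for x in p]
    if len(set(v)) == k - 1:              # p belongs to a bucket
        j = 1 + (sum(p) - 1) % (k - 1)    # part index with sum(p) = j (mod k-1)
        rooks.append((p, v[j - 1], j - 1))

# every label occurs (k-1)^k (n/(k-1))! / (n ((n/(k-1)-k+1)!)) times
m = n // (k - 1)
per_label = (k - 1) ** k * factorial(m) // (n * factorial(m - k + 1))
for lab in range(m):
    assert sum(1 for r in rooks if r[1] == lab) == per_label

# no two 1-rooks of the same label attack each other
def attacks(r1, r2):
    diff = [i for i in range(k - 1) if r1[0][i] != r2[0][i]]
    return len(diff) == 1 and diff[0] == r1[2]

assert not any(attacks(r1, r2) for r1 in rooks for r2 in rooks
               if r1 is not r2 and r1[1] == r2[1])
print("labeled packing verified:", per_label, "rooks per label")
```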
Now we combine this labeled packing $P$ with the set $S$. For $1\le x_k\le \frac{n}{k-1}$, we let the last coordinate act as the label for $P$, and have all of these rooks attack in the direction of the last coordinate in addition to their normal direction. For $\frac{n}{k-1}+1\le x_k\le n$, we fill each layer corresponding to a fixed $x_k$ according to $S$. The number of rooks in the construction at this point is at least \[(k-1)^{k-1}\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}+\left(\frac{k-2}{k-1}n\right)\left(\frac{k-1}{2}n^{k-2}-c_{k-1}n^{k-3}\right)\ge \frac{k}{2}n^{k-1}-\left(\frac{k-2}{k-1}c_{k-1}+\frac{k(k-1)^2}{2}\right)n^{k-2}.\]
Here we have used the estimate that \[\frac{(\frac{n}{k-1})!}{(\frac{n}{k-1}-k+1)!}\ge (\frac{n}{k-1})^{k-1}-\frac{k(k-1)}{2}(\frac{n}{k-1})^{k-2}.\]
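This falling-factorial estimate is standard, but it is cheap to confirm over a range of small parameters; the following sketch does so exhaustively (writing $m$ for $\frac{n}{k-1}$, so the claim is $\frac{m!}{(m-k+1)!}\ge m^{k-1}-\frac{k(k-1)}{2}m^{k-2}$).

```python
from math import factorial

# check m!/(m-k+1)! >= m^{k-1} - (k(k-1)/2) m^{k-2} for small k and m
for k in range(2, 8):
    for m in range(k, 40):
        falling = factorial(m) // factorial(m - k + 1)
        assert falling >= m ** (k - 1) - k * (k - 1) // 2 * m ** (k - 2)
print("estimate verified for 2 <= k <= 7, k <= m < 40")
```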
At this point, the only attacking pairs of $2$-rooks are rooks from $P$ that attack rooks from $S$. But $S$ occupies at most $\frac{k-1}{2}n^{k-2}$ positions of $H_{n,k-1}$, and each such position leads to at most $1$ offending rook in $P$. Therefore we may simply remove these rooks to obtain a configuration of at least \[\frac{k}{2}n^{k-1}-\left(\frac{k-2}{k-1}c_{k-1}+\frac{k(k-1)^2}{2}+\frac{k-1}{2}\right)n^{k-2}\] $2$-rooks, none of which attack each other. Taking $c_k=\frac{k-2}{k-1}c_{k-1}+\frac{k(k-1)^2}{2}+\frac{k-1}{2}$ completes the induction.
It follows that $b_{k, 2}=\lim_{t\to\infty}\frac{b_{(k-1)!^2t,\,k, 2}}{((k-1)!^2t)^{k-1}}=\frac{k}{2}$, as desired.
\end{proof}
We now determine $c_{k,2}$ and $c_{k,k}$. The value of $c_{k,k}$ is implicit in the literature, but we include a proof in the following theorem.
\begin{thm}
For all positive integers $k$, $c_{k,k}=1$ and for $k\ge 2$, $c_{k,2}=\binom{k}{2}$.
\end{thm}
\begin{proof}
We begin by proving that $c_{k,k}=1$. By Theorem \ref{Singleton}, $c_{k,k}\le 1$. The construction showing $c_{k,k}\ge1$ is exactly the one given in Theorem \ref{cExists}, as it demonstrates $c_{p,k,k}\ge p^{k-2}$ for primes $p>k$. The result follows.
For the second part of the proof, note that $c_{k,2}\le \binom{k}{2}$ by Theorem \ref{Singleton}. Therefore, it suffices to demonstrate that $c_{k,2}\ge \binom{k}{2}$, and to do so we prove that $c_{n,k,2}\ge \binom{k}{2}(n-2\binom{k}{2})^{k-2}$ for $n>2\binom{k}{2}$. In particular, for $i<j$, let $A_{i,j}$ be the set of points whose $i^{th}$ and $j^{th}$ coordinates equal $2i-2+(j-1)(j-2)$ and $2i-1+(j-1)(j-2)$ respectively, and whose other coordinates vary in the range $[k(k-1),n-1]$. Note that the values $2i-2+(j-1)(j-2)$ and $2i-1+(j-1)(j-2)$ lie in $[0,k(k-1)-1]$ and take each value in this range exactly once, so they never coincide with the freely varying coordinates. Now for each point in $A_{i,j}$, orient the corresponding $2$-rook to attack in the directions of the $i^{th}$ and $j^{th}$ axes. No two points within a set attack each other, as they differ only outside the $i^{th}$ and $j^{th}$ coordinates, and any pair of rooks from differing sets differ in at least $3$ coordinates and therefore cannot attack each other. The result follows by taking the union of all the $A_{i,j}$.
\end{proof}
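The sets $A_{i,j}$ can likewise be checked by brute force for a small case. The sketch below takes $k=3$, $n=8$, with the free coordinates drawn from $\{k(k-1),\ldots,n-1\}$ so that they never collide with the fixed values, and assumes a $2$-rook covers exactly the points differing from it in at most two coordinates, all lying among its two chosen directions.

```python
from itertools import product, combinations
from math import comb

k, n = 3, 8  # need n > 2*C(k,2) = k(k-1)

rooks = []  # pairs (point, frozenset of the two attack directions), 0-indexed
free = range(k * (k - 1), n)  # free values never collide with the fixed ones
for i, j in combinations(range(1, k + 1), 2):
    a = 2 * i - 2 + (j - 1) * (j - 2)   # fixed value in coordinate i
    b = a + 1                           # fixed value in coordinate j
    for rest in product(free, repeat=k - 2):
        rest_iter = iter(rest)
        p = tuple(a if c == i - 1 else b if c == j - 1 else next(rest_iter)
                  for c in range(k))
        rooks.append((p, frozenset({i - 1, j - 1})))

# claimed size: C(k,2) * (n - k(k-1))^{k-2}
assert len(rooks) == comb(k, 2) * (n - k * (k - 1)) ** (k - 2)

def covers(r1, r2):
    diff = {c for c in range(k) if r1[0][c] != r2[0][c]}
    return 1 <= len(diff) <= 2 and diff <= r1[1]

assert not any(covers(r1, r2) for r1 in rooks for r2 in rooks if r1 is not r2)
print("2-packing verified with", len(rooks), "rooks")
```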
\section{Conclusion and Open Questions}
There are several natural questions and conjectures regarding the values of $a_{k,\ell}$, $b_{k,\ell}$, and $c_{k,\ell}$. The most interesting open question is the following.
\begin{conj}
For integers $k\ge 2$, $a_{k,k}=\frac{1}{k-1}$.
\end{conj}
Note that the above conjecture is known when $k$ is one more than a power of a prime \cite{blokhuis1984more}, and the construction is essentially that of perfect codes. This implies that the first open case of the conjecture is $a_{7,7}$. The most natural case with $k\neq \ell$ that is not covered by our results is $a_{3,2}$.
\begin{question}
What is the exact value of $a_{3,2}$?
\end{question}
We end with a pair of conjectures based on the results of the previous section. In contrast to the situation for $a_{k,\ell}$, we conjecture that $b_{k,\ell}$ and $c_{k,\ell}$ exactly attain the upper bounds from Theorem \ref{Singleton}.
\begin{conj}
For positive integers $k\ge \ell$, $b_{k,\ell}=\frac{k}{\ell}$.
\end{conj}
\begin{conj}
For positive integers $k\ge \ell\ge 2$, $c_{k,\ell}=\frac{\binom{k}{2}}{\binom{\ell}{2}}$.
\end{conj}
\section{Acknowledgements}
This research was conducted at the University of Minnesota Duluth REU and was supported by NSF grant 1659047. We would like to thank Joe Gallian, Colin Defant, and Ben Gunby for reading over the manuscript and providing valuable comments. We would especially like to thank Ben Gunby for finding a critical error in an earlier version of the paper.
\bibliographystyle{plain}